Final Notes

An operating system (OS) is essential software that manages hardware and software resources on a computer, facilitating user interaction and task execution. It includes various types such as batch, multiprogramming, time-sharing, and real-time systems, each serving specific functions and user needs. The OS also encompasses services like user interface, program execution, and file management, along with components for process, memory, and device management.

Uploaded by Jay Kadlag


Operating System
Chapter 1 - Overview of Operating System (8 Marks)
Operating System 🔴
An operating system (OS) is software that manages all the hardware and software on a computer
or device.

It acts as a bridge between the user and the computer's hardware, making it easier for users
to interact with the system.

The OS controls tasks like running applications, managing files, and handling input/output
operations.

Examples of operating systems include Windows, macOS, and Linux.

Need of Operating System 🔴


Resource Management: It controls hardware like the CPU, memory, and storage to keep the
system running smoothly.

User Interface: It provides an easy way for users to interact with the computer through
graphical interfaces or commands.

Program Execution: It helps programs run by managing their tasks and giving them the
resources they need.

File Management: It organizes and stores files in a structured way, allowing easy access and
retrieval.

Security and Protection: It keeps data safe from unauthorized access and prevents the system
from crashing.

Operations of Operating System. 🔴


1. Process Management: It manages running programs, ensuring they get enough CPU time to work
properly.

2. Memory Management: It allocates and tracks the memory used by programs and makes sure they
don’t use more memory than allowed.

3. File Management: It handles the storage, organization, and retrieval of files on the
computer.

4. Device Management: It controls input/output devices like the keyboard, mouse, printer, etc.,
ensuring they work smoothly.

5. Security and Access Control: Protects system resources and data, allowing access only to
authorized users and programs.

6. User Interface: Provides a way for users to interact with the computer, like using a GUI or
command line.

Resource Management 🟠 (4M - S-23)


Resource management in an operating system is the process of efficiently managing all
resources like CPU, memory, input/output devices, and other hardware so that all programs and
processes can run smoothly.

Since resources are limited, multiple users or programs may need the same resources, such as
memory and CPU, at the same time.

The operating system makes sure that all processes get the resources they need without
problems like deadlocks.

It uses scheduling methods to share CPU time fairly among processes.

The operating system also manages memory efficiently through virtual memory techniques.

It uses file system management to create, delete, and modify files and directories on storage
devices.

Additionally, network management techniques are used by the operating system to manage
network bandwidth efficiently.

Batch Operating System. 🔴
A batch operating system is an operating system that processes jobs in groups, called
batches, without requiring user interaction during execution.

A job is a task or set of tasks that need to be completed by the system.

A batch is a collection of similar jobs grouped together to be processed without any user
input.

In a batch operating system, jobs are lined up in a queue and processed one by one
automatically by the OS.

Once a job is submitted, the user does not interact with it until it is finished. The OS
takes care of executing the tasks in the background.

Batch OS is useful for handling large volumes of repetitive tasks, reducing the need for user
involvement.

It is commonly used for tasks like payroll processing, report generation, or data analysis,
where automatic execution of large jobs is needed.

Multiprogramming Operating System 🟢 (4M - W-19)


A Multiprogramming Operating System allows multiple programs to be loaded into memory at the
same time.

The scheduler decides which programs to move into the ready queue.

The ready queue stores multiple programs in main memory, waiting to be executed.

Since there is only one processor, only one program is executed at a time.

The CPU switches between these programs to ensure it is always busy, which increases overall
efficiency.

Example: A user can run multiple applications like Word, Excel, and Access on a computer at
the same time.

Advantages:

1. Increased CPU Utilization

2. Higher Throughput

3. Efficient Resource Utilization

4. Better System Performance

Time Shared Operating System 🟢 (4M - W-22, S-22)


In a time-sharing system, the CPU runs multiple jobs by quickly switching between them.

This switching happens so fast that users can interact with each program as if it’s running
continuously.

Time-sharing systems provide direct communication between users and the computer, giving each
user the feeling that they have their own CPU.

The system lets multiple users share computer resources at the same time.

The operating system gives each user a small time slice of CPU time. Once a user's time slice
ends, the CPU moves to the next user.

The time slice is so short that users experience minimal delay, giving the impression of
exclusive CPU use.

The main goal of a time-sharing system is to reduce response time and provide fast, efficient
user interaction.

Example: in a time-sharing system with six users, user 5 can be active while users 1, 2, 3,
and 4 are in the waiting state and user 6 is in the ready state.

Multiprocessing Operating System 🟢 (4M - W-23, S-23)


Multiprocessor systems, also known as parallel systems or tightly coupled systems, consist of
two or more processors that work together closely.

These processors share common computer resources like the bus, clock, memory, and peripheral
devices.

The operating system manages tasks in a multiprocessor system by assigning different tasks to
each processor.

Programs for multiprocessor systems are often threaded, meaning they are divided into smaller
parts that can run separately.

Multiple CPUs work together to divide a task, speeding up its completion. Once all parts are
finished, the results are combined to produce the final output.

Advantages: (2M, S-22)

1. Improved Reliability

2. Increased throughput

3. Cost Efficiency

4. Scalability

5. Faster Execution

Real-time Operating System 🟢 (2M - W-19, W-23, 4M - S-23)


A real-time operating system (RTOS) is designed to process tasks within strict, fixed time
limits.

In these systems, tasks must be completed within a set amount of time.

Types:

1. Hard:

Tasks must be completed within very strict deadlines.

Missing a deadline can lead to system failure.

For example, in medical devices like pacemakers, it’s crucial to meet deadlines to
ensure patient safety.

2. Soft:

Allows some flexibility in timing but still tries to meet deadlines.

Missing a deadline may affect performance but won’t cause the system to fail.

For example, in video streaming, slight delays may reduce quality but won’t stop the
stream.

Applications:

Flight control systems

Simulations

Industrial automation and control

Military applications

Distributed Operating System 🔴


A distributed operating system connects several independent computers to work together as if
they were one system.

Each computer has its own processor, memory, and storage, but they are linked through a
network.

The operating system manages tasks and resources across all the computers, making sure they
share data and work together smoothly.

Users interact with the system as if it were a single machine, even though the work is being
done on different computers.

The system is fault-tolerant, meaning if one computer stops working, others can take over its
tasks without causing any issues.

Advantages include better performance, sharing resources, and flexibility, as many computers
can contribute to completing big tasks.

In summary, a distributed operating system helps multiple computers work together, making
tasks faster and more reliable.

Difference between Multiprogramming and Multitasking 🟠 (2M - W-22)


Multiprogramming | Multitasking

Allows multiple programs to use the CPU at once. | Allows user interaction while running multiple tasks.

Works based on context switching. | Works on a time-sharing mechanism.

Reduces CPU idle time and increases throughput. | Runs multiple processes at the same time to enhance CPU and system efficiency.

When one job finishes or needs I/O, it pauses, and the system selects another process to run. | Multiple processes run at the same time by allocating the CPU for a set duration.

The CPU quickly switches between different programs or processes in a multiuser environment. | In a single-user environment, the CPU switches between processes of various programs.

Takes longer to execute processes. | Takes less time to execute processes.

Difference between Time sharing system and Real time system 🟠 (2M - S-23)
Real-Time Operating System (RTOS) | Time-Sharing Operating System (TSOS)

Built for tasks that need instant responses. | Built to share resources among many users or tasks.

Gives quick and reliable responses. | Responses can take longer and vary in time.

Runs important tasks right away. | Runs tasks based on a schedule, not on importance.

Used in critical systems like medical devices and robots. | Used in general computing like desktops and servers.

Difference between CLI based OS and GUI based OS 🟢 (4M - W-22, W-23, S-22)
Command-Line OS (CLI) | Graphical User Interface OS (GUI)

Text-based commands. | Graphical icons and menus.

Harder to learn, needs practice. | Easier to learn, more intuitive.

Uses less memory and power. | Uses more memory and power.

Faster execution for experienced users. | Slower due to graphical elements.

Provides full control over the system. | Limited control compared to CLI.

High accuracy. | Comparatively low accuracy.

Only the keyboard is used. | Both mouse and keyboard can be used.

Efficient for repetitive tasks. | Best for visual tasks and navigation.

Examples: MS-DOS, Linux Terminal. | Examples: Windows, macOS, GNOME on Linux.

Chapter 2 - Services and Components of Operating System (10 Marks)


Services of Operating System 🟢 (2M - W-19, W-22, W-23, S-23, S-24, 4M - S-22)
1. User Interface

2. Program Execution

3. I/O Operations

4. File System Management

5. Communication

6. Error Detection

7. Resource Allocation

8. Accounting

9. Protection & Security

System Calls 🟢 (2M - S-22, 4M - W-19, W-22, W-23, S-23, S-24)


System calls provide a way for user programs to interact with the operating system.

They act as an interface between a program and the OS, allowing programs to request services
or access hardware resources that are controlled by the OS.

System calls handle various functions, including file operations, process control, and
communication.

Types:

1. Process Control:

Create/Terminate process

Load/Execute process

End/Abort process

Ready process/Dispatch process

Suspend process/Resume process

Get/Set process attributes

Wait event, Signal event

Allocate and deallocate memory

2. File Management:

Create new file, delete existing file

Open, close file

Create and delete directories

Read, write, reposition

Get/Set file attributes

3. Device Management:

Request device, release device

Read, write, reposition

Get/Set device attributes

Logically attach or detach devices

4. Information Maintenance:

Get/Set time or date

Get/Set system data

Get/Set process, file, or device attributes

5. Communication:

Create, delete communication connection

Send, receive messages

Transfer status information

Attach or detach remote devices

Components of Operating System 🟢 (4M - W-19, W-22, W-23, S-22, S-23, S-24)
1. Process Management:

A program is a set of instructions, and when it starts running, it becomes a process.

A process requires resources like CPU time, memory, files, and I/O devices to execute.

These resources can be allocated when the process is created or during its execution.

The operating system is responsible for the following activities in connection with
process management:

Creating and deleting user and system processes.

Suspension and resumption of processes.

A mechanism for process synchronization.

A mechanism for process communication.

A mechanism for deadlock handling.

2. Main Memory Management:

Main memory is a large, fast storage space that holds data and programs.

It consists of smaller units called words or bytes, each with a unique address.

The CPU accesses main memory directly to fetch instructions and read/write data.

The operating system handles memory by:

Tracking which parts of memory are in use and by which processes.

Deciding what data and processes to load or remove from memory.

Allocating and deallocating memory as required.

3. File Management:

A file is a collection of related information created by a user.

Files are stored on secondary storage like disks or tapes for long-term use.

Examples of storage media are magnetic disks and optical disks; each has unique
characteristics such as speed, capacity, and data transfer rate.

Files are organized into directories to make them easier to find and manage.

The operating system manages files by:

Creating and deleting files and directories.

Allowing file manipulation and directory organization.

Mapping files to storage devices.

Backing up files to protect against data loss.

4. I/O Device Management:

It manages input and output devices like printers, scanners, and drives.

The operating system uses device drivers to communicate with these devices.

Device drivers convert OS data into a format that devices can process, like laser pulses
for a printer.

The I/O subsystem consists of several components:

Memory management (e.g., buffering, caching, spooling).

A standard interface for device drivers.

Specific drivers for each hardware device.

5. Secondary Storage Management:

Secondary storage is used to store data and programs that don’t fit in main memory.

It is also necessary because main memory loses data when the power is off.

Common secondary storage devices include disks and tapes, which store programs and files
until they are needed.

The operating system manages secondary storage by:

Managing free space on storage devices.

Allocating storage as needed.

Scheduling tasks to optimize disk performance.

Operating System Tools 🟢 (6M - W-19, W-22, W-23, S-22, S-23, S-24)
1. User Management:

The operating system creates, modifies, and deletes user accounts on the system.

It assigns or restricts access rights for individual users or groups.

It stores and manages user-related information, such as preferences, settings, and history.

The system tracks user actions, such as login/logout, file access, and command execution,
for auditing or security purposes.

Users are organized into groups, making it easier to assign permissions to multiple users
at once.

2. Device Management:

The operating system manages the allocation of input/output devices, such as printers and
scanners.

It assigns devices to programs or processes as needed.

It keeps track of whether a device is free or currently in use.

The system detects and reports errors related to hardware devices.

It ensures smooth data transfer between the device and the operating system.

3. Performance Monitor:

The performance monitor tracks CPU usage, memory usage, disk activity, and other system
resources.

It identifies slow processes or bottlenecks in the system.

The system generates usage statistics for system administrators to analyze.

It notifies users about potential problems, such as high CPU usage or memory leaks.

4. Task Scheduler:

The task scheduler decides which process gets CPU time and when.

It divides CPU time among multiple processes using scheduling algorithms.

The scheduler ensures that all processes get a fair share of the CPU, preventing any one
process from monopolizing resources.

It prevents system overloading by managing the number of active tasks.

The scheduler allows multiple processes to run concurrently by efficiently switching
between them.

5. Security Policy:

The security policy specifies rules and guidelines for protecting system data and
resources.

It determines who can access specific files, resources, or system settings.

The system ensures that users authenticate themselves using passwords, biometrics, or
other methods before gaining access.

It prevents unauthorized users from accessing sensitive data and applications.

Data is protected through encryption, ensuring it remains secure even in the event of a
breach.

Chapter 3 - Process Management (14 Marks)


Process 🟠 (2M - W-22)

A process is a program in execution.

Process is also called as job, task or unit of work.

Process States 🟢 (2M - W-19, W-23, S-22, S-24, 4M - W-22, S-23)

1. New:

The process is being created and initialized. It is not yet ready to run.

Moves to the Ready state when it has been set up and is ready to be scheduled.

2. Ready:

The process is waiting in memory for CPU time. It is ready to run but not currently
executing.

Moves to the Running state when the CPU scheduler selects it for execution.

3. Running:

The process is currently being executed by the CPU.

Can move to the Waiting state if it needs to wait for an I/O operation to complete, or to
the Ready state if it is preempted by the scheduler. Moves to the Terminated state when it
finishes execution.

4. Waiting:

The process is waiting for an event or resource, such as an I/O operation, to complete.

Moves to the Ready state once the event or resource becomes available and the process is
ready to resume execution.

5. Terminated:

The process has finished execution or has been aborted.

It is no longer active and will be removed from memory.

Process Control Block (PCB) 🟠 (2M - W-22, 4M - W-19, S-23, S-24)


A PCB is a data structure used by the operating system to keep track of information about a
process. Each process in the system has its own PCB, which includes:

Process State: It indicates the current state of a process, which can be new, ready,
running, waiting, or terminated.

Process Number: Each process is associated with a unique number known as the process
identification number (PID).

Program Counter: It indicates the address of the next instruction that needs to be executed
for the process.

CPU Registers: Includes information about various registers used by the CPU, such as
accumulators, stack pointers, and general-purpose registers.

Memory Management Information: Contains details about memory allocation, like base and limit
registers, page tables, or segment tables.

Accounting and I/O Status Information: Records data about CPU usage and time limits, along
with I/O status such as the list of open files and the devices allocated to the process.

Scheduling Queues 🔴 (4M - S-22)
Job Queue: Stores all processes that are waiting to be processed by the operating system.

Ready Queue: Contains processes that are ready to run and waiting for CPU time.

Device Queue: Contains processes that are waiting for an input/output device, such as a
printer or disk, to become available.

Schedulers 🟠 (4M - W-19, S-23)


1. Long Term Scheduler:

This scheduler selects programs from the job pool and loads them into the main memory.

It controls the degree of multiprogramming, which refers to the number of processes loaded
into memory at one time.

The system typically contains two types of processes:

I/O Bound Processes: These processes spend more time performing input/output
operations.

CPU Bound Processes: These processes spend more time doing computations with the CPU.

The long term scheduler balances the system by loading both I/O bound and CPU bound
processes into the main memory.

When it selects a process, the process's state changes from new to ready.

2. Short Term Scheduler:

Also known as the CPU scheduler, this scheduler selects processes that are ready for
execution from the ready queue.

It allocates the CPU to the selected process.

The short term scheduler executes more frequently than the long term scheduler.

When it selects a process, the process's state changes from ready to running.

3. Medium Term Scheduler:

This scheduler comes into play when a running process is blocked due to an interrupt.

It swaps out the blocked process and stores it in a queue for blocked and swapped-out
processes.

When there is space available in the main memory, the medium term scheduler looks at the
list of swapped-out but ready processes.

It selects one process from that list and loads it into the ready queue.

The job of medium term scheduler is to select a process from swapped out process queue and
to load it into the main memory.

The medium term scheduler works closely with the long term scheduler to manage which
processes are loaded into the main memory.

Differences between Long term, Medium term and Short term Scheduling 🟠 (4M - W-22)
Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler

Job scheduler. | CPU scheduler. | Process-swapping scheduler.

Slower than the short-term scheduler. | Fastest among the three. | Speed between the long-term and short-term schedulers.

Controls the degree of multiprogramming. | Provides less control over multiprogramming. | Reduces the degree of multiprogramming.

Rare or minimal in time-sharing systems. | Also minimal in time-sharing systems. | Commonly used in time-sharing systems.

Selects processes from the pool and loads them into memory for execution. | Chooses processes ready for CPU execution. | Swaps processes in and out of memory, continuing execution later.

Manages the process transition from "new" to "ready" state. | Manages the process transition from "ready" to "executing" state. | No specific process-state transition management.

Context Switch 🟠 (4M - W-23, S-24)


A context switch happens when the CPU switches from executing one process to another.

It is a mechanism that saves and restores the CPU's state (or context) in a Process Control
Block (PCB), allowing the process to resume execution later.

During a context switch, the following steps occur:

State Save: The CPU's current state (register values, program counter, etc.) of the
process being removed is saved into its PCB.

State Restore: The CPU's state is restored from the PCB of the process that is about to
execute.

The context switch ensures that when a process is paused and then resumed, it can continue
from where it left off.

This process helps in managing multiple processes, allowing the CPU to switch between them
efficiently.

Inter-process communication 🟢 (4M - W-19, 6M - S-23)


Inter-process communication (IPC) is a way for processes to exchange data and information
with each other.

There are two main models of IPC: shared memory and Message passing.

1. Shared Memory: (4M, W-22, W-23)

The Shared Memory Communication Model allows multiple processes to exchange data
through a common memory space.

A section of memory is created that can be accessed by multiple processes.

All processes involved can read from or write to this shared memory area.

Since processes directly access the memory, this method is very fast.

Proper synchronization techniques are needed to prevent conflicts.

Without synchronization, processes might read incorrect data if they access shared
memory at the same time.

It is commonly used when processes are running on the same computer, especially in
systems with multiple processors.

Examples:

Client and server applications within the same machine sharing data.

Video games where multiple components (like graphics and physics engines) need to
share information quickly.

The shared memory model is great for fast communication but requires careful
synchronization to ensure data integrity.

2. Message Passing: (4M - S-23)

The Message Passing Communication Model allows processes to communicate by sending and
receiving messages.

Processes communicate by sending messages to each other.

Messages can contain data or information, and are exchanged through the operating
system.

Unlike shared memory, processes do not share memory space.

Communication happens through send operations (by the sending process) and receive
operations (by the receiving process).

Often used when processes are running on different computers in a network.

The operating system is involved in delivering messages between processes, which makes
it slower compared to shared memory.

Kernel intervention ensures that the communication is managed properly and securely.

The message passing model is easier to synchronize because each process communicates
explicitly.

Examples:

Sending a request from a web browser to a web server.

Chat applications where each message is sent from one user to another.

Thread 🔴
A thread is the smallest unit of execution within a process. It is a sequence of instructions
that can be scheduled and executed by the CPU.

A process can have multiple threads, and they share the same memory space and resources of
the process but execute different tasks.

Advantages of Threads:

Faster Execution: Threads in the same process share resources, making them faster and more
efficient than separate processes.

Resource Sharing: Threads share memory, so they use fewer resources compared to separate
processes that need their own memory.

Improved Responsiveness: Multiple threads can work on different tasks at the same time,
making the application more responsive.

Better CPU Utilization: Threads can run on different CPU cores, boosting performance and
CPU use.

Simplified Communication: Threads in the same process can easily share data, making
communication simpler than between separate processes.

Difference between Process and Thread 🟠 (6M - W-23)


Process | Thread

A process is a program in execution. | A thread is the smallest unit of execution within a process.

Processes run in separate memory spaces. | Threads of the same process run in a shared memory space.

Processes are heavyweight. | Threads are lightweight.

Context switching between processes is more expensive. | Context switching between threads of the same process is less expensive.

Processes are independent. | Threads are dependent.

Processes are controlled by the operating system. | Threads are controlled by the programmer in a program.

User level thread and Kernel level thread 🔴 (6M - S-24)


User-Level Threads (ULT):

User-level threads are managed by user libraries and the application, with no direct
involvement from the operating system (OS).

The OS is unaware of these threads, so all management like creation and scheduling is handled
by the user program.

Advantages:

Faster creation and management as no OS intervention is needed.

More control over scheduling and management.

Disadvantages:

If one thread blocks, the entire process is blocked because the OS sees it as a single
thread.

Not ideal for multi-core systems, as the OS can't use multiple cores for user-level
threads.

Kernel-Level Threads (KLT):

Kernel-level threads are managed by the OS, which is responsible for their creation,
scheduling, and management.

The OS has full knowledge of the threads and can schedule them on different CPU cores.

Advantages:

If one thread blocks, the OS can still run other threads in the same process.

Better use of multiple cores in multi-core systems.

Disadvantages:

Slower to create and manage because the OS is involved.

Higher overhead due to the need for system calls for management.

Multithreading 🔴
Multithreading is a technique where a single process is divided into multiple threads that
run at the same time (concurrently).

Each thread performs a specific task, allowing the process to carry out multiple operations
simultaneously, which boosts efficiency and performance.

Since threads share the same memory and resources of the process, communication between them
is faster and easier.

Benefits of Multithreaded Programming: (6M, W-23)

1. Responsiveness:

Multithreading keeps programs smooth and responsive, even during long tasks.

Example: A browser lets you use it while loading images in the background.

2. Resource Sharing:

Threads share memory and resources within the same process, making communication
easier.

Unlike processes, threads don’t need special methods to share resources.

3. Economy:

Threads are cheaper and faster to create and manage than processes.

Switching between threads is quicker than switching between processes.

4. Scalability:

Threads can run on multiple processors at the same time, improving speed.

This allows better use of multi-processor systems.

Multithreading Models 🟠 (6M - W-19, S-22)


1. Many-to-One Model:

Many user threads are mapped to one kernel thread.

Thread management is done at the user level, making it more efficient.

If the kernel thread is blocked, all user threads are also blocked.

Even with multiple processors, only one processor will be used since there is only one
kernel thread.

Advantages:

1. Easier to implement because all user-level threads are mapped to a single kernel
thread.

2. Switching between threads is faster since it does not involve kernel-level context
switching.

Disadvantages:

1. Only one thread can execute at a time, so it does not take full advantage of
multiprocessor systems.

2. If one thread blocks, all threads in the process are blocked.

2. One-to-One Model:

One user thread is mapped to one kernel thread.

Each time a user thread is created, a corresponding kernel thread must be created.

Since each user thread is mapped to a different kernel thread, if one thread is blocked,
the others continue running.

Each kernel thread can run on different processors, allowing better use of multiple
processors.

Advantages:

1. Each user-level thread is paired with a kernel thread, allowing true parallel execution
on multiprocessor systems.

2. If one thread blocks, others can continue executing because they are independent.

Disadvantages:

1. More overhead for managing multiple kernel threads, including context switching.

2. Each thread requires its own kernel resources, which can strain system resources.

3. Many-to-Many Model:

Many user threads are mapped to an equal or smaller number of kernel threads.

The number of kernel threads depends on the application or machine.

If a user thread makes a blocking system call, other threads are not blocked.

There is no extra overhead caused by creating kernel threads unnecessarily.

Advantages:

1. Allows multiple user-level threads to be mapped to multiple kernel threads, providing better concurrency.

2. Can adapt to the number of processors available, making better use of system resources.

Disadvantages:

1. More complex to implement compared to the other models.

2. Managing the mapping of user threads to kernel threads can create overhead, potentially
slowing down performance.

Process Commands 🟢 (2M - S-22, 4M - W-23, S-22, 6M - W-22)


1. ps (Process Status) (2M - W-23, 4M - W-19)

Displays the list of currently running processes in the system.

Syntax: $ps [options]

Example: $ps – shows the processes running in the current terminal session.

Options for ps:

1. -f : This option provides a detailed listing of process attributes.

2. -u : This option shows processes for a specified user.

3. -a : This option shows processes from all users on the system.

4. -e : This option displays all processes, including both user and system processes.

2. wait

The wait command pauses the execution of a script or process until the specified process
finishes.

Syntax: wait <PID>

Example: wait 1234 – waits for the process with id 1234 to complete.

If no PID is given, it waits for all child processes to complete.

3. sleep (2M - W-19, W-22)

Pauses the execution of a process for a specified amount of time.

Syntax: sleep number[suffix]

Example: sleep 5 – pauses the process for 5 seconds.

4. kill (2M - W-19, W-22)

Sends a signal (SIGTERM by default) to terminate or stop a running process.

Syntax: kill pid

Example: kill 1234 – terminates the process with the PID 1234.

5. exit

Exits or terminates the current shell session or script.

Operating System 14
Syntax: exit

Example: exit – closes the terminal or script execution.
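The semantics of wait, sleep, and kill can be demonstrated from Python on a Unix-like system (this sketch assumes the standard sleep utility is on the PATH):

```python
import signal
import subprocess
import time

# Start a child process that would sleep for 30 seconds,
# like running `sleep 30 &` in the shell.
child = subprocess.Popen(["sleep", "30"])
print("child PID:", child.pid)         # the PID that `ps` would list

time.sleep(0.2)                        # give the child a moment to start
child.send_signal(signal.SIGTERM)      # equivalent of `kill <PID>`
child.wait()                           # equivalent of `wait <PID>`
print("exit code:", child.returncode)  # negative value = killed by that signal
```

A process terminated by a signal gets a negative return code equal to the signal number, which is how the parent can tell a kill from a normal exit.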

Chapter 4 - CPU Scheduling and Algorithms (14 Marks)


Objectives of Scheduling 🔴
Efficient CPU Utilization: The goal is to keep the CPU busy and avoid idle time.

Fair Allocation: Scheduling ensures that all processes are given a fair amount of CPU time,
preventing one process from monopolizing the CPU.

Maximize Throughput: It aims to increase the number of processes completed in a given time
period, improving system performance.

Minimize Waiting Time: Scheduling helps reduce the time processes spend waiting for CPU
access, leading to faster execution.

Minimize Response Time: In interactive systems, scheduling aims to respond to user inputs as
quickly as possible.

Prioritize Critical Tasks: It ensures that important or high-priority processes get CPU time
before less critical ones.

Maintain System Stability: Good scheduling prevents overloads and maintains a balanced
system.

CPU and I/O burst cycle 🟠 (2M - W-22)


CPU burst cycle: The time when a process is actively using the CPU to execute instructions.

I/O burst cycle: The time when a process is busy performing input/output (I/O) operations.

CPU bound program and I/O bound program 🟠 (2M - W-23)


CPU bound program: If the execution of a program depends mainly on the CPU, it is known as a
CPU-bound program.
I/O bound program: If the execution of a program depends mainly on the input-output system and
its resources, such as disk drives and peripheral devices, it is known as an I/O-bound program.

Compare preemptive and non-preemptive scheduling 🔴 (2M, S-22, S-23)


1. Preemptive: Resources are allocated to a process for a limited time. Non-preemptive: Resources are held until the process completes or enters a waiting state.

2. Preemptive: A process can be interrupted at any time. Non-preemptive: A process cannot be interrupted until it terminates or its time is up.

3. Preemptive: Low-priority processes may starve due to high-priority ones. Non-preemptive: Long-running processes may starve shorter ones.

4. Preemptive: Higher overhead due to context switching. Non-preemptive: No overhead from context switching.

5. Preemptive: More flexible. Non-preemptive: Rigid and less flexible.

6. Preemptive: Higher cost associated. Non-preemptive: No cost associated.

7. Preemptive: High CPU utilization. Non-preemptive: Low CPU utilization.

8. Preemptive: Less waiting time. Non-preemptive: High waiting time.

9. Preemptive: Less response time. Non-preemptive: High response time.

10. Preemptive: The scheduler decides based on priority and time slice. Non-preemptive: The process decides when to give up the CPU; the OS follows its instructions.

11. Preemptive examples: Round Robin, Shortest Remaining Time First. Non-preemptive examples: First Come First Serve, Shortest Job First.

Scheduling Criteria 🟢 (2M - W-19, 4M - W-22, W-23, S-22, S-23)


1. CPU Utilization:

In multiprogramming, the main goal is to keep the CPU busy.

CPU utilization can range from 0% to 100%.

2. Throughput:

It is the number of processes that are completed per unit time.

It indicates the work done in the system.

For long processes, throughput might be one process per time unit, while for short
processes, it could be ten or more.

3. Turnaround Time:

The time interval from the time of submission of a process to the time of completion of
that process is called as turnaround time.

It includes waiting to enter memory, time spent in the ready queue, CPU execution time,
and I/O operations.

4. Waiting Time:

Waiting time is the total time a process spends in the ready queue before it gets CPU
time.

A process waits in the queue until the CPU is available.

If it needs resources while executing, it may go into a waiting state until those
resources are ready.

5. Response Time:

Response time is the time from when a request is submitted until the first response is
received.

It focuses on how quickly the system responds, not on the completion of the entire
process.

A process can produce early output while continuing to compute new results.

Describe I/O burst and CPU burst cycle with neat diagram 🟠 (4M - W-19)
CPU burst cycle: The time when a process is actively using the CPU to execute instructions.

I/O burst cycle: The time when a process is busy performing input/output (I/O) operations.

A process alternates between CPU execution and I/O operations during its execution.

It starts with a CPU burst cycle when the CPU is assigned to the process.

After the CPU burst, it enters an I/O burst cycle to perform I/O tasks.

The process keeps switching between CPU burst cycles and I/O burst cycles repeatedly.

The complete execution of a process starts with CPU burst cycle, followed by I/O burst cycle,
then followed by another CPU burst cycle, then followed by another I/O burst cycle and so on.

The process ends with a final CPU burst cycle that completes its execution and sends a
request to terminate.

Types of Scheduling Algorithms 🟢


First-come-first-serve (FCFS) (4M - W-22, W-23, 6M - W-19, S-22, S-23, S-24)

FCFS is a scheduling algorithm where processes are executed in the order they arrive in
the ready queue.

How it works:

The process that arrives first gets executed first.

Each process runs till it’s completed before the next one starts.

Advantages:

Simple to Implement: Easy to understand and implement in the operating system.

Fair: Each process is treated equally, and no process is skipped.

Disadvantages:

Convoy Effect: If a long process arrives first, it delays all subsequent processes,
leading to inefficient CPU utilization.

No Prioritization: FCFS doesn't consider the priority or burst time of processes, which
can lead to poor performance for short tasks.

Example:

Process Arrival Time Burst Time

P1 0 7

P2 1 4

P3 2 10

P4 3 6

P5 4 8
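The schedule for this table can be computed with a short sketch:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples.
    Returns (name, completion, turnaround, waiting) in execution order."""
    time, result = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst   # CPU may sit idle until arrival
        turnaround = time - arrival         # turnaround = completion - arrival
        result.append((name, time, turnaround, turnaround - burst))
    return result

table = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 10), ("P4", 3, 6), ("P5", 4, 8)]
for name, completion, turnaround, waiting in fcfs(table):
    print(name, completion, turnaround, waiting)
```

For this data, P1 completes at 7, P5 at 35, and the average waiting time works out to 11.2 ms, which shows how a long early process (P3) delays everything behind it.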

Shortest Job First (SJF) (4M - W-22, 6M - W-19, W-22, W-23, S-22, S-23, S-24)

SJF is a scheduling algorithm that selects the process with the shortest burst time
(execution time) to execute next.

How it works:

The process that has the smallest CPU burst time is selected for execution first.

Once a process completes, the scheduler selects the next shortest process.

Types:

Preemptive SJF (Shortest Remaining Time First): If a new process with a shorter burst
time arrives, it preempts the current process.

Non-Preemptive SJF: Once a process starts execution, it runs to completion.

Advantages:

Minimizes Average Waiting Time: By executing shorter processes first, it minimizes the
time processes spend waiting in the queue.

Efficient for Batch Systems: Works well when processes have known burst times ahead of
time.

Disadvantages:

Starvation: Long processes might never get executed if there are always shorter
processes arriving.

Difficult to Predict: In practice, it's hard to know the exact burst time of a process
in advance.

Example:
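A non-preemptive SJF run can be sketched as follows (the process data here is hypothetical, chosen only to illustrate the selection rule):

```python
def sjf(processes):
    """Non-preemptive SJF over (name, arrival, burst) tuples.
    Returns (name, completion_time) in execution order."""
    remaining = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, done = 0, []
    while remaining:
        # Among processes that have arrived (or the next to arrive if the
        # CPU is idle), pick the one with the shortest burst.
        ready = [p for p in remaining if p[1] <= time] or [remaining[0]]
        job = min(ready, key=lambda p: p[2])
        time = max(time, job[1]) + job[2]
        done.append((job[0], time))
        remaining.remove(job)
    return done

jobs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(sjf(jobs))   # P1 runs first, then the shortest waiting job (P3), then P2, P4
```

Note that P3, despite arriving after P2, runs earlier because its burst is shortest once P1 finishes.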

Round Robin (RR) (4M - W-19, 6M - W-22, S-23, S-24)

Round robin scheduling (RR) is a preemptive scheduling algorithm, meaning a running process
can be interrupted.

A small unit of time called a time quantum or time slice is used for pre-emption of a
currently running process.

The ready queue is implemented as a circular queue, and the CPU is shared among the
processes in a first-come, first-served manner, each for a fixed time period (time quantum).

When a process enters the system, it is added to the end of the queue. The CPU scheduler
selects the first process at the head of the queue and assigns the CPU to it for the
duration of the time quantum.

If a process finishes before the time quantum ends, the CPU is released and the next
process in the queue is given the CPU.

If a process doesn’t finish within the time quantum, it is preempted (paused), moved to
the end of the queue, and the CPU is given to the next process.

Advantages of Round Robin:

Every process gets an equal chance to use the CPU.

Newly created processes are added to the end of the queue, ensuring they get their
turn.

Disadvantages:

Longer Wait Times: Processes may wait longer to get CPU time.

Low Throughput: Fewer processes are completed in a given time.

Frequent Context Switches: Switching between processes takes time and resources.

Large Gantt Chart: If the time slice is too short (like 1 ms), the Gantt chart can
become unwieldy.

Time-Consuming Scheduling: Short time slices can lead to inefficient scheduling.

Example:

Process Burst Time (in ms)

P1 24

P2 3

P3 3

Time quantum: 4 ms

The resulting RR schedule will be: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-22), P1 (22-26), P1 (26-30).
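The schedule can be reproduced with a small simulation (all three processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}, all arriving at time 0.
    Returns a Gantt-chart list of (name, start, end) slices."""
    queue = deque(bursts.items())
    time, gantt = 0, []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)       # run for one quantum or until done
        gantt.append((name, time, time + run))
        time += run
        if left > run:                 # unfinished: requeue at the tail
            queue.append((name, left - run))
    return gantt

for slice_ in round_robin({"P1": 24, "P2": 3, "P3": 3}, 4):
    print(slice_)
```

After P2 and P3 finish inside their first quantum, P1 simply keeps getting requeued until its remaining 20 ms are used up.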

Priority Scheduling 🔴
Priority Scheduling is a scheduling algorithm where each process is assigned a priority.
The process with the highest priority is executed first.

How it works:

Each process is given a priority value, either by the system or the user.

Processes with higher priority values are scheduled before those with lower priority
values.

If two processes have the same priority, they are scheduled based on their arrival time
(FCFS can be used as a tie-breaker).

Types:

Preemptive Priority Scheduling: If a new process arrives with a higher priority than
the currently running process, the current process is preempted, and the new process is
executed.

Non-Preemptive Priority Scheduling: Once a process starts execution, it continues until
completion, even if a higher-priority process arrives.

Advantages:

Flexible: Allows for flexibility in handling important processes.

Efficient: Can give priority to important tasks, improving system responsiveness for
critical tasks.

Disadvantages:

Starvation: Low-priority processes may never get executed if there are always higher-
priority processes.

Difficult to Assign Priorities: Deciding how to assign priorities (based on resources,
time, etc.) can be complex.

Example:
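A non-preemptive priority schedule can be sketched as follows (the names, priorities, and bursts are hypothetical; a smaller number means higher priority, and all processes arrive at time 0):

```python
def priority_schedule(processes):
    """processes: list of (name, priority, burst); lower number = higher priority.
    sorted() is stable, so equal priorities keep their original (FCFS) order."""
    time, out = 0, []
    for name, priority, burst in sorted(processes, key=lambda p: p[1]):
        time += burst
        out.append((name, time))        # (name, completion time)
    return out

jobs = [("P1", 3, 6), ("P2", 1, 4), ("P3", 2, 5), ("P4", 1, 2)]
print(priority_schedule(jobs))
```

P2 and P4 share priority 1, so the tie is broken by their order in the list, exactly the FCFS tie-break described above.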

Multilevel Queue Scheduling 🔴 (6M - S-23)


Multilevel Queue Scheduling is a scheduling algorithm where processes are divided into
multiple queues based on their priority or type, and each queue has its own scheduling
algorithm.

The system maintains multiple queues, each with a specific priority level.

Each queue can use different scheduling algorithms like FCFS, SJF, or Priority Scheduling.

Processes are assigned to a queue based on their characteristics, such as priority or type.

Once a process enters a queue, it remains in that queue for the entire duration of its
execution.

Advantages:

Separation of Processes: By categorizing processes, it makes managing and scheduling
tasks more efficient.

Customized Scheduling: Different scheduling methods can be applied to different queues,
optimizing performance.

Fairness: Important processes can be given higher priority with dedicated queues.

Disadvantages:

Inflexible: Once a process is assigned to a queue, it cannot move between queues, which
may not be ideal in some cases.

Complexity: Managing multiple queues and scheduling algorithms can be complex.

Starvation: Low-priority queues might experience starvation if high-priority queues are
always full.

Example:

Queue 1 (High Priority): Interactive processes using Round Robin.

Queue 2 (Medium Priority): Background processes using FCFS.

Queue 3 (Low Priority): Batch jobs using SJF.
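The dispatch rule for these queues, always serving the highest-priority non-empty queue, can be sketched as follows (the queue contents are illustrative; real dispatchers may also time-slice between queues):

```python
from collections import deque

# Queues ordered from highest to lowest priority, mirroring the example:
# interactive processes, then background processes, then batch jobs.
queues = [
    deque(["editor", "browser"]),   # Queue 1 (High Priority)
    deque(["backup"]),              # Queue 2 (Medium Priority)
    deque(["payroll_batch"]),       # Queue 3 (Low Priority)
]

def pick_next(queues):
    """Serve the first non-empty queue. In this model a process stays in
    its assigned queue for its whole lifetime (no migration)."""
    for q in queues:
        if q:
            return q.popleft()
    return None                     # nothing ready to run
```

Because the scan always starts at the top, a steady stream of high-priority arrivals starves the lower queues, which is exactly the starvation disadvantage noted above.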

Deadlock 🟠 (2M - S-23)


In a multiprogramming environment, multiple processes compete for a limited number of
resources.

When a process requests resources and they are unavailable, it enters a waiting state.

Sometimes, a waiting process cannot proceed because the resources it needs are held by other
waiting processes, leading to a situation called deadlock.

Deadlock occurs when a process requests resources held by another waiting process, which, in
turn, is waiting for resources held by yet another process. As a result, no process can
execute its task.

Example:

Consider a system with three disk drives and three processes. If each process is allocated
one disk drive, there are no drives left. If all three processes then request an additional
disk drive, they will enter a waiting state, causing a deadlock. None of the processes can
continue until one releases the disk drive it is holding.

Necessary Conditions for Deadlock 🟢 (4M - W-19, S-23, S-24)


1. Mutual Exclusion:

At least one resource is held in a non-sharable mode, meaning only one process can use it
at a time.

If another process requests the same resource, it must wait until the resource is
released.

Each resource is either assigned to one process or is available.

2. Hold and Wait:

A process holding at least one resource is waiting to acquire additional resources
currently held by another process.

Processes holding resources can also request new ones.

3. No Preemption:

Resources cannot be taken forcibly from a process.

A resource can only be released voluntarily by the process holding it after completing its
task.

Once granted, a resource cannot be preempted but must be explicitly released.

4. Circular Wait:

There must be a circular chain of processes, where each process is waiting for a resource
held by the next process in the chain.

For example, if P0 is waiting for a resource held by P1, P1 is waiting for a resource
held by P2, and so on, until Pn, which is waiting for a resource held by P0.

Deadlock Prevention 🟢 (4M - W-22, W-23, S-22)


Deadlock prevention is a technique used by the operating system to avoid deadlock situations
by ensuring that at least one of the necessary conditions for deadlock does not occur.

Methods of Deadlock Prevention:

1. Eliminate Mutual Exclusion:

Mutual exclusion occurs when a resource can only be used by one process at a time.

Deadlocks can happen due to this condition, as processes wait for the resource to
become free.

If a resource can be shared among multiple processes simultaneously (like read-only
files), deadlocks can be avoided.

Sharable resources do not require mutual exclusion, so they cannot cause deadlocks.

2. Eliminate Hold and Wait:

A process should not hold some resources while waiting for additional resources.

Protocols to prevent hold and wait:

Require a process to request all needed resources at the start before execution
begins.

Allow a process to request resources only if it currently holds none.

A process must release all currently allocated resources before requesting new ones.

3. Eliminate No Preemption:

If a process holding resources requests another resource that is unavailable, all its
currently held resources are released (preempted).

The preempted resources are added to the list of resources the process is waiting for.

The process restarts only when all required resources (old and new) are available.

Preemption ensures resources are efficiently utilized and prevents deadlocks caused by
hold-and-wait conditions.

4. Eliminate Circular Wait:

Circular wait can be avoided by ordering resources in a sequence.

Each process must request resources in increasing order based on this sequence.

Assign a unique number to each resource to establish the order.

This prevents processes from holding resources in a circular chain, breaking the
deadlock cycle.

Example:

If a process needs two resources, A and B, it has to request both at once rather than
holding one (A) and waiting for the other (B).

This prevents a situation where one process holds a resource and waits for another, which
could lead to a deadlock.

Deadlock Avoidance 🔴
Deadlock Avoidance is a method used by the operating system to prevent deadlock by managing
how resources are allocated to processes.

How it works:

The system carefully checks each resource request before granting it, ensuring the request
won’t cause a deadlock situation.

Resources are only given if they won't lead to a state where processes are stuck waiting
for each other (deadlock).

Methods of Deadlock Avoidance:

Safe State: Before granting a resource request, the system checks if the resources can
still be allocated safely, meaning all processes can eventually complete without deadlock.

Banker's Algorithm: This algorithm helps determine safe resource allocation. It checks the
available resources, each process's maximum resource needs, and what is currently
allocated. If granting a request leads to an unsafe state, it is not allowed.

Example:

Suppose there are two processes, P1 and P2 , and a resource R .

P1 needs R to continue, and P2 also needs R .

The operating system checks if granting R to P1 will allow all processes to eventually
complete (safe state). If not, it delays granting R to P1 to avoid a potential deadlock.

Chapter 5 - Memory Management (14 Marks)


Partitioning (Fixed and Variable) 🟢 (4M - W-19, S-23)
1. Fixed Partitioning (Static Partitioning): (4M - W-22)
In this method, main memory is divided into fixed-size partitions during system setup. A
process can be loaded into a partition that is either equal to or larger than its size.

Equal Size Partitioning: Main memory is divided into equal-size partitions. Any process
that fits within this size can be loaded into any available partition.

Unequal Size Partitioning: Main memory is divided into partitions of different sizes. Each
process is loaded into the smallest partition that can accommodate it.

2. Variable Partitioning (Dynamic Partitioning): (4M - S-22, S-24)

In this method, when a process enters main memory, it is allocated exactly the amount of
memory it needs.

Therefore, the size of partitions can vary based on the requirements of each process.

The operating system maintains a table that indicates which parts of memory are available
and which are occupied.

When a new process arrives, the system searches for available memory space and allocates
it by creating a partition if there is enough space.

Example:

Consider the following table with processes and their required memory space:

Process Memory Space

P1 20 MB

P2 14 MB

P3 18 MB

Free Space Management Techniques 🟢 (4M - W-22, S-22, 6M - W-19, W-23, S-24)
1. Bitmap method: (4M - W-23)

The bitmap method, also known as the bit vector method, is a commonly used way to manage
free space.

In this method, each block on the hard disk is represented by a single bit (0 or 1).

Bit 0 means the block is allocated to a file.

Bit 1 means the block is free and available for use.

For example, consider a disk having 16 blocks where block numbers 2, 3, 4, 5, 8, 9, 10,
11, 12, and 13 are free, and the rest of the blocks, i.e., block numbers 0, 1, 6, 7, 14
and 15 are allocated to some files.

The bit vector for this disk will look like this: 0011110011111100 (blocks 0, 1, 6, 7, 14, and 15 are 0 = allocated; the rest are 1 = free).

The main advantage of this approach is that it is simple and efficient at finding the
first available block or a group of consecutive free blocks on the disk.
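Finding a run of consecutive free blocks in the bit vector (using this document's convention of 1 = free) can be sketched as:

```python
def first_free_run(bitmap, k):
    """Index of the first run of k consecutive free ('1') blocks, or -1."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == "1" else 0   # reset the run on allocated blocks
        if run == k:
            return i - k + 1                 # start index of the run
    return -1

bitmap = "0011110011111100"        # the 16-block example above
print(first_free_run(bitmap, 3))   # 2: blocks 2-4 are free
print(first_free_run(bitmap, 5))   # 8: blocks 8-12 are free
```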

2. Linked List:

The linked list method is another way to manage free space on a disk.

In this approach, all the free blocks are linked together in a chain.

The address of the first free block is stored in memory.

Each free block contains a pointer to the next free block.

The last free block points to null, indicating the end of the list.

For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11,
12, 13, and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15
and 16 are allocated to some files.

If we maintain a linked list, then Block 3 will contain a pointer to Block 4, and Block 4
will contain a pointer to Block 5.

Virtual Memory 🟠 (2M - W-19, S-23)
Virtual Memory is a feature of the operating system that helps manage memory when there is
not enough physical memory (RAM) available.

It allows the system to temporarily move data from RAM to disk storage, creating more space
for active processes.

This separation between logical memory (what programs think they have) and physical memory
(actual RAM) lets programs use more memory than is physically available.

Paging 🟠 (2M - W-23, S-23)


Paging is a memory management method where the physical memory of a process is divided into
fixed-size blocks called pages.

These pages are retrieved from secondary storage (like a hard drive) and loaded into the main
memory as needed.
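Under paging, a logical address splits into a page number and an offset within the page; a toy translation sketch (the page size and page-table contents are assumed values):

```python
PAGE_SIZE = 4096                  # assumed 4 KB pages

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (toy table)

def translate(logical_address):
    # High part selects the page; low part is the offset within it.
    page, offset = divmod(logical_address, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))            # page 1, offset 4 -> frame 2 -> 8196
```

Because every page is the same size, this split is just a division and remainder, which is why paging hardware is fast.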

Segmentation 🟠 (2M - W-23)


Segmentation is a memory management method that divides a program's address space and the
computer's physical memory into segments of different sizes.

Each segment functions as a separate unit in memory.

Difference between paging and segmentation 🟠 (2M - W-22, 4M - S-23)


1. Paging: The process address space is divided into blocks called pages. Segmentation: The process address space is divided into blocks called segments.

2. Paging: Pages are of fixed size. Segmentation: Segments are of variable sizes.

3. Paging: The OS divides memory into pages. Segmentation: The compiler calculates the size and addresses of segments.

4. Paging: Faster in accessing memory. Segmentation: Slower compared to paging.

5. Paging: Page sizes are determined by the available memory. Segmentation: Segment sizes are determined by the user.

6. Paging: May cause internal fragmentation (wasted memory inside pages). Segmentation: May cause external fragmentation (wasted memory between segments).

7. Paging: The logical address is divided into a page number and a page offset. Segmentation: The logical address is divided into a segment number and a segment offset.

8. Paging: Mapping data is stored using a page table. Segmentation: Mapping data is stored using a segmentation table.

Fragmentation 🟠 (2M, S-22, S-24, 4M, S-23)


Fragmentation in an operating system happens when memory is not used efficiently due to the
way processes are allocated and deallocated.

It occurs when free memory gets divided into small, non-contiguous blocks, making it hard to
allocate large blocks to processes.

Types of Fragmentation:

1. Internal Fragmentation:

This happens when a process is given a block of memory that is larger than what it
actually needs.

The unused part of that block is considered internal fragmentation.

2. External Fragmentation:

External fragmentation occurs when there is enough free memory to fulfill a process's
request, but the memory is spread across different non-contiguous blocks.

This is common in systems with dynamic memory allocation methods.

Page Fault 🔴
A page fault occurs when a program tries to access a page in memory that is not currently
loaded into the computer's main memory (RAM).

First Fit, Best Fit, Worst Fit 🟢 (6M - W-22, W-23)
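These three placement strategies choose among the free holes differently: first fit takes the first hole large enough, best fit the smallest hole that is still adequate, and worst fit the largest hole available. A sketch (the hole sizes and the 212 KB request below are illustrative):

```python
def allocate(holes, size, strategy):
    """holes: free-hole sizes in memory order. Returns the chosen hole's
    index, or -1 if no hole can satisfy the request."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not fits:
        return -1
    if strategy == "first":
        return min(i for h, i in fits)   # earliest adequate hole
    if strategy == "best":
        return min(fits)[1]              # smallest adequate hole
    return max(fits)[1]                  # "worst": largest hole

holes = [100, 500, 200, 300, 600]        # free hole sizes in KB
print(allocate(holes, 212, "first"))     # 1 (the 500 KB hole)
print(allocate(holes, 212, "best"))      # 3 (the 300 KB hole)
print(allocate(holes, 212, "worst"))     # 4 (the 600 KB hole)
```

Best fit leaves the smallest leftover sliver, worst fit the largest leftover hole, and first fit is cheapest to compute because it can stop scanning early.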

Page replacement algorithms 🟢 (6M - S-23)
FIFO (6M - S-22, S-24)

1. Start with an empty page frame that has a fixed size.

2. Go through each page in the reference string one by one.

3. Check if the page is already in memory:

If the page is in the frame, move to the next page.

If the page is not in the frame, move to the next step.

4. Check if there is space in the page frame:

If there is space, load the new page into an empty space.

If there is no space, proceed to replace the oldest page.

5. Replace the oldest page (the one that has been in the frame the longest) with the new
page.

6. Repeat this process for every page in the reference string.

7. Keep count of how many times a page needs to be replaced (page faults).
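The steps above can be sketched as follows (the reference string is a common textbook example, used here only for illustration):

```python
from collections import deque

def fifo_faults(ref_string, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    memory, queue, faults = set(), deque(), 0
    for page in ref_string:
        if page in memory:
            continue                       # hit: nothing to do
        faults += 1
        if len(memory) == frames:          # frame full: evict the oldest page
            memory.discard(queue.popleft())
        memory.add(page)
        queue.append(page)                 # newest page goes to the back
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))                 # 15 page faults with 3 frames
```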

LRU (6M - W-19, W-22, W-23)

1. Start with an empty page frame that has a fixed size.

2. Go through each page in the reference string one by one.

3. Check if the page is already in memory:

If the page is in memory, move to the next page.

If the page is not in memory, proceed to the next step.

4. Check if there is space in the page frame:

If there is space, load the new page.

If there is no space, find the page that hasn't been used for the longest time.

5. Replace the least recently used page with the new page.

6. Repeat this process for every page in the reference string.

7. Keep count of how many times a page needs to be replaced (page faults).
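The steps above can be sketched as follows (the reference string is a common textbook example, used here only for illustration):

```python
def lru_faults(ref_string, frames):
    """Count page faults under LRU replacement with `frames` frames."""
    memory, faults = [], 0      # list ordered least -> most recently used
    for page in ref_string:
        if page in memory:
            memory.remove(page) # hit: refresh to most-recent position
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)   # evict the least recently used page
        memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))       # 12 page faults with 3 frames
```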

Optimal (6M - W-22, W-23)

1. Start with an empty page frame that has a fixed size.

2. Go through each page in the reference string one by one.

3. Check if the page is already in memory:

If the page is already in the frame, move to the next page.

If the page is not in the frame, proceed to the next step.

4. Check if there is space in the page frame:

If there is space, load the new page.

If there is no free space, find the page that will not be used for the longest time in
the future.

5. Replace the page that will not be needed for the longest time with the new page.

6. Repeat this process for every page in the reference string.

7. Keep count of how many times a page needs to be replaced (page faults).
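The steps above can be sketched as follows (the reference string is a common textbook example; Optimal needs the whole future string, which is why it is a benchmark rather than a practical algorithm):

```python
def optimal_faults(ref_string, frames):
    """Count page faults under the Optimal (farthest-future-use) policy."""
    memory, faults = [], 0
    for i, page in enumerate(ref_string):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = ref_string[i + 1:]
            # Evict the resident page whose next use is farthest away
            # (pages never used again rank farthest of all).
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future else len(future))
            memory.remove(victim)
        memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(ref, 3))   # 9 page faults with 3 frames
```

On the same reference string, FIFO and LRU fault more often than Optimal, which is the usual way the three are compared in exams.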

Chapter 6 - File Management (10 Marks)


File Attributes 🟠 (2M - W-19, S-23)
1. Name: The symbolic file name is the only part that is easy for people to read.

2. Identifier: The file system assigns a unique number or tag to each file, which helps identify
it within the system.

3. Type: This shows the file type, which is important for systems that handle different types of
files.

4. Location: This points to where the file is stored on a device.

5. Size: This indicates the current size of the file (in bytes, words, or blocks) and may also
include the maximum allowed size.

6. Protection: This information controls who can read, write, or execute the file.

7. Time, Date, and User Identification: This records when the file was created, last modified,
and last used. This data is useful for protection, security, and monitoring usage.

File Operations 🟢 (2M - W-22, W-23, S-22, S-24)


1. Creating a file

2. Writing to a file

3. Reading from a file

4. Renaming a file

5. Deleting a file

6. Repositioning within a file

7. Creating copy of a file

Access Methods (Sequential and Direct) 🟠 (4M - W-19)


Sequential Access:

In sequential access, information is processed in order, one record after another.

This is the most common way to access files, used by programs like text editors and
compilers.

Read Operation: Reads data in sequence, one part after the other, automatically moving the
file pointer to the next part.

Write Operation: Writes data in sequence, adding new information to the end of the file and
moving the pointer to the end of the written data.

A sequential file can be reset to the beginning, and in some systems, programs can skip
forward or backward through records.

In a typical diagram of sequential access, the file can be rewound (moved backward) from the
current position to the beginning of the file, or read and written in the forward direction.

Direct Access (Relative Access):

In direct access, files consist of fixed-length records that allow quick reading and writing
in any order.

This method is based on the disk model and allows random access to file blocks.

File Structure: A file is viewed as a sequence of numbered blocks or records. For example,
you can directly read block 14, then block 53, etc.

Read/Write Operations:

Read n: Reads the nth block from the file.

Write n: Writes to the nth block.

The block numbers provided by the user are relative block numbers, starting from 0 for the
first block.

The operating system translates these into actual disk addresses.

This system prevents users from accessing areas outside their files and lets the OS manage
where files are stored on disk.

Example:

Databases often use direct access for quick retrieval of specific records.

If a query requests certain information, the system calculates which block holds the data
and reads it directly.

File Allocation Methods 🟢 (6M - W-19, S-23)


1. Contiguous Allocation: (4M - S-23)

In the contiguous allocation method, each file occupies a set of contiguous (continuous)
blocks of disk space.

This means that all parts of a file are stored one after another on the disk.

A file's location on the disk is defined by the starting address of its first block and
the length of the file (i.e., how many blocks it uses).

If a file starts at block b and has a length of n blocks, it will occupy blocks b, b+1,
b+2,..., b+n-1.

The directory entry for each file stores the starting address and the number of blocks
allocated to that file.

Advantages:

It supports both sequential and direct access to data, meaning files can be read in
order or accessed directly at any block.

Provides good performance because the file blocks are stored together.

It is also easy to retrieve a single block from a file.

Reading all blocks belonging to each file is very fast.

Disadvantages:

Suffers from external fragmentation.

It is difficult to find enough contiguous blocks of space for new or growing files.

Compaction may be required and it can be very expensive.

2. Linked Allocation: (4M - W-22, W-23, S-22)

In this allocation method, each file is stored as a linked list of blocks. Each block has
a pointer to the next block in the sequence.

The blocks do not need to be contiguous on the disk; they can be scattered anywhere.

The file directory holds pointers to the first and last blocks of each file, making it
easy to create new files by adding a directory entry.

When writing to a file, the system takes the first free block from the free space list and
writes to it. This block is then linked to the end of the file's chain of blocks.

To read a file, the system follows the chain by reading each block in sequence.

There is no external fragmentation with this method; however, around 1.5% of disk space is
used to store pointers instead of actual data.

If a pointer gets lost or damaged (e.g., due to bugs or hardware issues), it may lead to
accessing the wrong part of the file.

Linked allocation does not support direct access; it only allows sequential access,
meaning you have to read blocks in order.

This method requires more space for pointers, so "clusters" (groups of blocks) are
sometimes used to reduce the number of pointers, but this can lead to internal
fragmentation within clusters.

3. Indexed Allocation:

Indexed Allocation is a method of storing files where a special block, called an index
block, keeps track of the addresses of all the file’s data blocks.

Each file has its own index block that lists where all its parts are stored, making it
easy to find and access any part of the file directly.

The operating system can quickly locate any part of the file without scanning through the
entire file.

Advantages:

1. Direct Access: Any part of the file can be accessed quickly using the index block.

2. Efficient for Large Files: It works well for large files since data blocks don’t need
to be stored together.

3. No Fragmentation Issues: It avoids problems caused by fragmented storage, as data
blocks can be stored in different locations.

Disadvantages:

1. Extra Storage for Index Blocks: Additional storage is needed to store the index blocks
for each file.

2. Limited Index Size: If the index block is small, it limits how many data blocks a file
can have, which may be an issue for very large files.

3. More Complexity: Managing and updating the index blocks adds complexity to the file
management system.
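The direct-access property of indexed allocation can be sketched in the same toy style. Again, the block sizes and function names are assumptions made for illustration: each file gets an index "block" (here just a list) holding the addresses of all its data blocks, so the i-th block is reached with one lookup instead of a chain walk.

```python
# Toy simulation of indexed allocation: one index block per file holds
# the addresses of all of its data blocks, enabling direct access.
# Sizes and names are illustrative, not from any real file system.

DISK_SIZE = 16
disk = [None] * DISK_SIZE
free_list = list(range(DISK_SIZE))

def allocate_file(chunks):
    """Store chunks in free blocks; return the file's index block."""
    index_block = []                 # extra storage cost: one index per file
    for chunk in chunks:
        block = free_list.pop(0)
        disk[block] = chunk
        index_block.append(block)    # record where each piece landed
    return index_block

def read_block(index_block, i):
    """Direct access: the i-th data block is found with a single lookup."""
    return disk[index_block[i]]

idx = allocate_file(["AA", "BB", "CC"])
print(read_block(idx, 2))            # jumps straight to the third block
```

A fixed-size index block would cap the number of entries in `index_block`, which is the "limited index size" disadvantage listed above.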

Directory Structure 🟢 (6M - W-22, W-23, S-22, S-24)


A directory is a container used to store files and folders in an organized way.

There are three main types of directory structures:

1. Single-Level Directory

2. Two-Level Directory

3. Tree Structure (Hierarchical Structure)

1. Single-Level Directory:

This is the simplest type of directory.

All files are stored in a single directory.

It is easy to understand and use.

Advantages:

Simple operations like creating, searching, deleting, and updating files are possible.

Easy to understand and implement in practical scenarios.

Disadvantages:

All files must have unique names. If two users try to name their files the same, it
causes a conflict.

If the number of files increases, searching for a specific file becomes slow and
inefficient.

It is difficult to separate important files from unimportant ones.

It is not suitable for multi-user systems, as users cannot have their own directories.

2. Two-Level Directory: (4M - S-23)

This structure solves the problem of file name conflicts in single-level directories.

In this system, each user has their own User File Directory (UFD).

The system maintains a Master File Directory (MFD), which keeps track of all users and
their directories.

Advantages:

Searching for files is easy.

Two files can have the same name if they are in different user directories.

Users can group their files easily.

One user cannot access or modify another user’s directory without permission.

Implementation is simple and straightforward.

Disadvantages:

It does not allow users to create subdirectories, which limits organization.
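The MFD/UFD arrangement can be sketched with two levels of dictionaries. The user and file names are made up for the example; the point is that a file name only needs to be unique within one user's directory.

```python
# Toy two-level directory: a Master File Directory (MFD) maps each user
# to their own User File Directory (UFD). Names here are illustrative.

mfd = {}                              # user name -> UFD (file name -> contents)

def create_user(user):
    mfd[user] = {}                    # MFD entry points at a fresh UFD

def create_file(user, name, contents):
    mfd[user][name] = contents        # name must be unique only per user

create_user("asha")
create_user("jay")
create_file("asha", "notes.txt", "asha's notes")
create_file("jay", "notes.txt", "jay's notes")   # same name, no conflict
print(mfd["jay"]["notes.txt"])
```

Because each UFD is flat, a user still cannot create subdirectories inside it, which is the limitation the tree structure removes.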

3. Tree Structure (Hierarchical Structure):

This is the most commonly used directory structure, especially in personal computers.

It allows users to create both files and subdirectories.

The structure looks like an upside-down tree, where the topmost directory is the root
directory.

Each user has their own directory under the root, and they can further create
subdirectories.

Advantages:

The root directory is highly secure and accessible only by the system administrator.

Users can create subdirectories for better organization.

Searching for files is easy.

It supports grouping and allows users to separate important files from unimportant
ones.

Users cannot access or modify another user's directory, ensuring privacy.

Disadvantages:

Too many subdirectories can make searching complicated.

Users cannot modify the root directory’s data.

File sharing between users is restricted.
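The tree structure above can be sketched with nested dictionaries, where a directory may hold files or further subdirectories under a single root. The paths and file names are illustrative only; resolving a path means walking down from the root one component at a time.

```python
# Toy tree (hierarchical) directory: directories are dicts that may hold
# files (strings) or subdirectories (nested dicts) under one root.
# Paths and names here are made up for illustration.

root = {"home": {"jay": {}}}

def resolve(path):
    """Walk the tree from the root, one path component at a time."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

def mkdir(path, name):
    resolve(path)[name] = {}          # create a subdirectory anywhere

def create_file(path, name, contents):
    resolve(path)[name] = contents

mkdir("/home/jay", "docs")
create_file("/home/jay/docs", "todo.txt", "finish OS notes")
print(resolve("/home/jay/docs/todo.txt"))
```

The upside-down-tree picture falls out directly: the root dictionary is the topmost directory, and every extra level of nesting is one more subdirectory level.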

