OS PreEnd Solution
Appendix-‘D’ Refer/WI/ACAD/18
Section A (M = Marks)
A1(a) An Operating System (OS) is system software that manages computer hardware and software [2]
resources and provides common services for computer programs.
Key Points (busy waiting):
• High CPU Usage: The process keeps the CPU busy while waiting, which can lead to
inefficient resource usage (see the sketch below).
• Simple Implementation: Easy to implement but not optimal for performance.
• Use Cases: Often used in low-level programming where wait times are expected to be very
short, such as in device drivers or certain real-time systems.
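To make the idea concrete, here is a minimal C sketch of busy waiting, using a C11 atomic flag as an illustrative spin lock (the names spin_lock and lock_flag are ours, not from any particular OS):

#include <stdatomic.h>

atomic_int lock_flag = 0;              /* 0 = free, 1 = held (illustrative) */

void spin_lock(void)
{
    /* Busy waiting: the CPU stays occupied re-testing the flag
       instead of sleeping, exactly the behaviour described above. */
    while (atomic_exchange(&lock_flag, 1) == 1)
        ;                              /* spin: keeps the CPU busy */
}

void spin_unlock(void)
{
    atomic_store(&lock_flag, 0);
}

Busy waiting like this is acceptable only when the expected wait is shorter than the cost of a context switch.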
A1(c) Scheduling is essential in operating systems to manage the execution of processes efficiently. Here [2]
are the key reasons why scheduling is needed:
1. Maximize CPU Utilization:
o Ensures the CPU is busy as much as possible by allocating tasks in an efficient
manner.
2. Fairness:
o Provides equitable CPU time to all processes, ensuring no single process
monopolizes the CPU.
3. Efficiency:
o Reduces idle time and maximizes the throughput of the system by managing the
execution order of processes.
4. Response Time:
o Improves the response time for interactive users by prioritizing certain processes,
especially those requiring quick user feedback.
5. Deadlock Avoidance:
o Helps in preventing deadlocks by careful allocation and deallocation of resources.
Effective scheduling leads to improved system performance, user satisfaction, and optimal resource
utilization.
A1(d) Fragmentation: Fragmentation refers to the phenomenon where available memory becomes [2]
divided into small, non-contiguous blocks over time, which cannot be efficiently utilized by the
system. This occurs in both main memory (RAM) and disk storage. In the context of variable
partitions, fragmentation means inefficient utilization of memory due to both internal and external
fragmentation, which can impact system performance and resource-allocation efficiency.
A1(e) SCAN (Elevator) Scheduling Algorithm: The SCAN algorithm, also known as the elevator [2]
algorithm, moves the disk arm from one end of the disk to the other, serving requests along the
way, and then reverses direction when it reaches the end. Here's how it works:
SCAN:
• Movement: The disk arm moves across the disk in one direction, serving requests along the
way. Upon reaching the end, it reverses direction.
• Advantage: Minimizes average seek time by efficiently handling requests in a linear
fashion (a small sketch of SCAN head movement follows).
C-SCAN (Circular SCAN):
• Movement: Similar to SCAN, but the disk arm returns to the beginning of the disk after
reaching the end, servicing requests only in one direction.
• Advantage: Reduces maximum wait time compared to SCAN, particularly for requests
farthest from the current position of the disk arm.
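As a rough illustration, the following C sketch computes total head movement for SCAN, assuming the head first moves toward higher cylinder numbers and travels to the last cylinder before reversing (function and variable names are illustrative):

#include <stdio.h>

int scan_total_movement(const int requests[], int n, int head, int disk_max)
{
    int smallest = requests[0];
    for (int i = 1; i < n; i++)
        if (requests[i] < smallest)
            smallest = requests[i];

    /* Sweep up from `head` to the last cylinder, then reverse and
       sweep down to the lowest pending request (if any lies below). */
    int movement = disk_max - head;
    if (smallest < head)
        movement += disk_max - smallest;
    return movement;
}

int main(void)
{
    int q[] = {98, 183, 37, 122, 14, 124, 65, 67};
    printf("%d\n", scan_total_movement(q, 8, 53, 199));   /* prints 331 */
    return 0;
}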
Section B (M = Marks)
A2(a) [10]
A2(b) The Sleeping Barber Problem is a classic synchronization problem that demonstrates issues of [10]
resource management and mutual exclusion in concurrent systems. Here’s an illustration of the
problem and its solution using semaphores:
Problem Description:
• There is a barber shop with one barber and several chairs for waiting customers.
• If there are no customers, the barber sleeps in his chair.
• When a customer arrives:
o If the barber is sleeping, the customer wakes him up.
o If the barber is busy cutting hair, the customer sits in one of the chairs (if available)
or leaves if all chairs are occupied.
• Semaphores Used:
o customers_waiting: Counts the number of customers waiting.
o barber_ready: Indicates if the barber is ready to cut hair or is sleeping.
o barber_mutex: Ensures mutual exclusion when accessing shared variables.
The Sleeping Barber Problem is a classical synchronization problem in which a barber shop with
one barber, a waiting room, and a number of customers is simulated. The problem involves
coordinating the access to the waiting room and the barber chair so that only one customer is in
the chair at a time and the barber is always working on a customer if there is one in the chair,
otherwise the barber is sleeping until a customer arrives.
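The solution code appears as images in the original; what follows is a minimal C-style sketch of the classical semaphore solution, using the semaphore names listed above and assuming POSIX-like sem_wait/sem_post operations and five waiting chairs:

#include <semaphore.h>

#define CHAIRS 5
sem_t customers_waiting;   /* initialized to 0 */
sem_t barber_ready;        /* initialized to 0 */
sem_t barber_mutex;        /* initialized to 1 */
int waiting = 0;           /* customers sitting in chairs */

void barber_loop(void)
{
    for (;;) {
        sem_wait(&customers_waiting);   /* sleep until a customer arrives */
        sem_wait(&barber_mutex);
        waiting--;                      /* take one customer from a chair */
        sem_post(&barber_ready);        /* signal readiness to cut hair */
        sem_post(&barber_mutex);
        /* cut_hair(); */
    }
}

void customer_arrives(void)
{
    sem_wait(&barber_mutex);
    if (waiting < CHAIRS) {
        waiting++;
        sem_post(&customers_waiting);   /* wake the barber if sleeping */
        sem_post(&barber_mutex);
        sem_wait(&barber_ready);        /* wait for the barber */
        /* get_haircut(); */
    } else {
        sem_post(&barber_mutex);        /* all chairs occupied: leave */
    }
}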
A2(c)(i) A thread is a single sequential flow of execution of tasks of a process, so it is also known as a [5]
thread of execution or thread of control. Threads are the units of execution inside a process in any
operating system, and there can be more than one thread inside a process. Each thread of the same
process makes use of a separate program counter and a stack of activation records and control
blocks. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, the many tabs can be viewed
as threads. MS Word uses many threads: formatting text from one thread, processing input from
another thread, and so on.
Need of Threads (a minimal pthread example follows this list):
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Threads can share common data, so they do not need to use inter-process communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
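A minimal POSIX threads sketch of these points, assuming a Unix-like system (compile with -lpthread; names are illustrative):

#include <pthread.h>
#include <stdio.h>

static int shared_data = 42;            /* both threads see this, no IPC needed */

static void *worker(void *arg)
{
    printf("thread %ld sees shared_data = %d\n", (long)arg, shared_data);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);   /* cheaper than fork() */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}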
Types of Threads
User-level thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented entirely by the user. If a user-level thread performs a
blocking operation, the whole process is blocked. The kernel knows nothing about user-level
threads and manages them as if they were single-threaded processes. Examples: Java threads,
POSIX threads, etc.
Advantages of user-level threads:
1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and mini
thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.
Disadvantages of user-level threads:
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Kernel level thread:
Kernel-level threads are recognized and managed by the operating system. There is a thread
control block and a process control block in the system for each thread and process. Kernel-level
threads are implemented by the operating system: the kernel knows about all the threads and
manages them, and it offers system calls to create and manage threads from user space. The
implementation of kernel threads is more difficult than user threads, and context-switch time is
longer. If a kernel thread performs a blocking operation, execution of another thread can continue.
Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel can simultaneously schedule multiple threads of the same process on multiple
processors.
2. If one thread of a process is blocked, the kernel can schedule another thread of the same
process.
3. Kernel routines themselves can be multithreaded.
A2(c)(ii) [5]
A2(d) External Fragmentation: [10]
• Definition: External fragmentation occurs when free memory is divided into small, non-
contiguous blocks, making it difficult to allocate larger contiguous blocks of memory to
processes even though the total free memory might be sufficient.
• Cause: It arises when processes are loaded and unloaded from memory, leaving behind
small gaps of unused memory that are too small to be allocated to new processes.
• Solution: External fragmentation is typically managed by compaction or by using dynamic
memory allocation algorithms that can coalesce or merge fragmented memory blocks to
form larger contiguous blocks.
Internal Fragmentation:
• Definition: Internal fragmentation occurs when allocated memory may be slightly larger
than what is actually needed by a process. This results in wasted memory within allocated
blocks.
• Cause: It arises from fixed-size memory allocation strategies where processes are allocated
memory in fixed-size blocks, leading to unused memory within those blocks.
• Solution: Internal fragmentation can be reduced by using smaller allocation units or
variable-size allocation strategies such as segmentation, where memory is allocated in
units that better match the actual memory requirements of processes (a small worked
example follows).
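A small worked example of internal fragmentation under fixed-size allocation, in C (block size and request size are illustrative):

#include <stdio.h>

int main(void)
{
    int block   = 4096;                          /* fixed 4 KB blocks */
    int request = 10000;                         /* bytes actually needed */
    int blocks  = (request + block - 1) / block; /* round up: 3 blocks */
    int wasted  = blocks * block - request;      /* 12288 - 10000 = 2288 */
    printf("allocated %d bytes, wasted %d bytes\n", blocks * block, wasted);
    return 0;
}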
Solving Fragmentation Problem Using Paging:
Paging removes external fragmentation by allocating memory in fixed-size frames: a process's
pages can be placed in any free frames, so no unusable gaps of contiguous memory arise (paging
is treated in detail in A6(a)).
File system protection and security mechanisms are essential to safeguard data integrity,
confidentiality, and availability within a computer system. Here are key aspects:
• Access Control: Determines who can access files and directories, and what operations they
can perform (read, write, execute). This is often managed through permissions and access
control lists (ACLs).
• Authentication and Authorization: Ensures that users are authenticated before accessing
files. Authorization mechanisms verify whether a user has the necessary permissions to
perform requested actions.
• Encryption: Encrypts sensitive data to prevent unauthorized access even if the data is
intercepted. This ensures confidentiality.
• Auditing and Logging: Tracks access and modifications to files, providing accountability
and traceability in case of security incidents.
• Backup and Recovery: Implements strategies to back up files regularly and recover them
in case of data loss or corruption.
• File Integrity: Verifies that files have not been tampered with or corrupted, ensuring data
integrity.
• Antivirus and Malware Protection: Protects against malicious software that can
compromise file system security.
• Secure File Deletion: Ensures that files are securely erased to prevent recovery by
unauthorized parties.
Effective file system protection and security measures are critical in maintaining the privacy,
integrity, and availability of data, especially in multi-user and networked environments.
Linked file allocation is a method of organizing files on a disk where each file is a linked list of
disk blocks. Here's how it works:
• Structure: Each file is represented as a linked list of disk blocks (or clusters), where each
block contains a pointer to the next block in the file.
• Advantages:
o Dynamic Size: Files can grow or shrink dynamically since each block points to the
next.
o No External Fragmentation: Linked allocation does not suffer from external
fragmentation because files can be scattered across the disk without concern for
contiguous blocks.
• Disadvantages:
o Random Access: Direct access to specific parts of the file is inefficient because
each block must be accessed sequentially.
o Overhead: Requires extra space for pointers between blocks, which increases
storage overhead compared to other allocation methods.
o Reliability: The reliability of linked allocation can be compromised if pointers are
lost or corrupted, leading to data loss or file fragmentation.
• Variants:
o Indexed Linked Allocation: Uses an index block that contains pointers to all blocks
of a file, allowing for faster access than pure linked allocation.
o File Allocation Table (FAT): A variant of linked allocation where a centralized
table (FAT) manages pointers to disk blocks, enhancing reliability and performance.
Linked file allocation methods are suitable for systems where files vary greatly in size and require
dynamic allocation. However, they require careful management of pointers to ensure efficient and
reliable access to data stored on disk.
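A small C sketch of the FAT idea described above, assuming an in-memory table where fat[b] holds the number of the block that follows block b in a file (block numbers are illustrative):

#include <stdio.h>

#define END_OF_CHAIN -1      /* illustrative end-of-file marker */

int main(void)
{
    int fat[16];
    for (int i = 0; i < 16; i++)
        fat[i] = END_OF_CHAIN;

    /* A file stored in scattered blocks 2 -> 9 -> 5 (scattering is
       harmless: linked allocation has no external fragmentation): */
    fat[2] = 9;
    fat[9] = 5;
    fat[5] = END_OF_CHAIN;

    /* Sequential access follows the chain from the first block: */
    for (int b = 2; b != END_OF_CHAIN; b = fat[b])
        printf("read block %d\n", b);
    return 0;
}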
Section C (M = Marks)
A3(a) i) Real-Time Operating System (RTOS) [10]
A Real-Time Operating System (RTOS) is designed to manage and control the execution of tasks
with strict timing constraints. Here are key aspects of RTOS:
• Deterministic Behavior: Critical tasks are guaranteed to complete within known, bounded
time limits (deadlines).
• Priority-Based Preemptive Scheduling: Higher-priority tasks can immediately preempt
lower-priority ones.
• Low Latency: Interrupt handling and context switching are kept fast and predictable.
• Types: Hard real-time systems, where missing a deadline is a system failure (e.g., airbag
controllers), and soft real-time systems, where occasional misses only degrade quality
(e.g., media streaming).
ii) Time-Sharing System
A Time-Sharing System is a multi-user operating system where multiple users can interact with a
computer system concurrently by sharing its resources. Here are key aspects of time-sharing
systems:
• Resource Sharing: Users share CPU time, memory, and peripherals (such as printers and
disks) simultaneously.
• Multiprogramming: System executes multiple tasks or processes concurrently by rapidly
switching between them, giving each user or application a time slice or quantum of CPU
time.
• User Interaction: Provides interactive computing environment where users can run
programs, execute commands, and access resources through terminals or graphical
interfaces.
• Scheduling: Employs CPU scheduling algorithms (e.g., Round Robin, Priority Scheduling)
to allocate CPU time fairly among competing processes or users (a minimal round-robin
sketch follows this list).
• Advantages:
o Maximizes CPU and resource utilization by allowing efficient sharing among users
and applications.
o Provides responsiveness and quick turnaround time for interactive tasks.
o Supports multitasking and concurrent execution of diverse workloads.
• Examples: Unix/Linux, Windows, and macOS are modern examples of time-sharing
systems that provide a responsive and interactive computing environment for multiple users.
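The round-robin sketch referenced above, in C, with illustrative burst times and a 2-unit quantum:

#include <stdio.h>

int main(void)
{
    int remaining[3] = {5, 3, 8};   /* remaining CPU burst per process */
    int quantum = 2, done = 0, t = 0;

    while (done < 3) {
        for (int p = 0; p < 3; p++) {
            if (remaining[p] == 0)
                continue;
            int slice = remaining[p] < quantum ? remaining[p] : quantum;
            t += slice;                 /* each ready process gets one */
            remaining[p] -= slice;      /* time slice per round */
            printf("P%d ran %d units (t = %d)\n", p, slice, t);
            if (remaining[p] == 0)
                done++;
        }
    }
    return 0;
}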
A3(b)(i) Difference between spooling and buffering: [5]
• Spooling (Simultaneous Peripheral Operations On-Line) overlaps the I/O of one job with
the computation of other jobs; jobs for slow devices (e.g., printers) are queued on disk and
processed later.
• Buffering overlaps the I/O of a job with that same job's own computation; data is held
temporarily in a memory area between producer and consumer.
• A spool can hold the jobs of many processes, whereas a buffer typically serves a single
producer-consumer pair.
• Spooling uses the disk as a very large buffer; buffering normally uses a limited area of
main memory.
A3(b)(ii) Multithreaded Systems: [5]
Multithreaded systems are operating systems or applications that support the execution of multiple
threads within a single process. Multithreading is a capability that permits multiple threads to run
independently while sharing the same process resources. A thread is a sequence of instructions
that may run within the same parent process as other threads.
Multithreading allows many parts of a program to run simultaneously. These parts are referred to as
threads, and they are lightweight processes that are available within the process. As a result,
multithreading increases CPU utilization through multitasking. In multithreading, a computer may
execute and process multiple tasks simultaneously.
Multithreading needs a detailed understanding of these two terms: process and thread. A process is
a running program, and a process can also be subdivided into independent units called threads.
Explanation:
• Threads: Threads are lightweight execution units within a process that can run
concurrently. They share the same memory space and resources of the parent process,
allowing for efficient communication and coordination.
• Multithreading: In a multithreaded system, a single process can have multiple threads of
execution, each performing different tasks concurrently. Threads within the same process
can communicate directly, making multithreading useful for tasks that benefit from
parallelism or responsiveness.
Challenges of Multithreading:
1. Complexity: Multithreading introduces complexity in programming, as developers need to
manage thread synchronization, avoid race conditions, and handle potential deadlock
situations.
2. Difficulty in Debugging: Debugging multithreaded programs can be challenging due to
non-deterministic behavior and timing-dependent bugs.
3. Resource Contention: Threads sharing resources can lead to contention issues (e.g.,
access conflicts to shared variables), requiring careful synchronization to maintain data
integrity (illustrated in the sketch after this list).
4. Scalability Limitations: While multithreading improves performance on multi-core
systems, excessive threading beyond available cores can lead to diminishing returns or even
performance degradation due to overhead.
5. Security Risks: Improperly synchronized threads can lead to security vulnerabilities such
as data races and unintended information disclosure.
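The sketch referenced in point 3: two threads increment a shared counter, and the mutex is what keeps the result deterministic (remove the lock/unlock pair to observe the race). POSIX threads are assumed; compile with -lpthread:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *inc(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* serialize access to counter */
        counter++;                  /* counter++ alone is not atomic */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with the mutex */
    return 0;
}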
A4(a) READERS WRITERS PROBLEM: [10]
The readers-writers problem is a classical problem of process synchronization. It relates to a data
set, such as a file, that is shared between more than one process at a time. Among these various
processes, some are Readers, which can only read the data set and do not perform any updates, and
some are Writers, which can both read and write the data set.
The readers-writers problem is used for managing synchronization among the various reader and
writer processes so that no problems, i.e. no inconsistency, arise in the data set.
Let's understand with an example: if two or more readers want to access the file at the same
point in time, there will be no problem. However, in other situations, such as when two writers, or
one reader and one writer, want to access the file at the same point in time, problems may occur.
Hence the task is to design the code so that if one reader is reading, no writer is allowed to update
at the same time; if one writer is writing, no reader is allowed to read the file at that time; and if
one writer is updating a file, other writers are not allowed to update it at the same time. However,
multiple readers can access the object at the same time.
The solution of readers and writers can be implemented using binary semaphores.
We use two binary semaphores, "write" and "mutex", whose wait and signal operations can be
defined as:

wait(S)
{
    while (S <= 0)
        ;            // busy wait until S becomes positive
    S--;
}

signal(S)
{
    S++;
}

From the above definition of wait, it is clear that while the value of S <= 0 the process keeps
looping (busy waiting) in the while statement, because of the semicolon after the while condition,
until S becomes positive. The job of signal is to increment the value of S.
The code below provides the solution of the reader-writer problem; the reader and writer process
codes are given as follows:
Reader process:

wait(mutex);
readcount++;           // on every entry of a reader, increment readcount
if (readcount == 1)
    wait(write);       // first reader locks out writers
signal(mutex);

-- READ THE FILE --

wait(mutex);
readcount--;           // on every exit of a reader, decrement readcount
if (readcount == 0)
    signal(write);     // last reader lets writers in again
signal(mutex);
In the above code of the reader, mutex and write are semaphores that have an initial value
of 1, whereas the readcount variable has an initial value of 0. Both mutex and write are
common to the reader and writer process code; semaphore mutex ensures mutual exclusion
and semaphore write handles the writing mechanism.
The readcount variable denotes the number of readers accessing the file concurrently. The
moment readcount becomes 1, a wait operation is performed on the write semaphore, which
decreases its value by one. This means that a writer is no longer allowed to access the file.
On completion of the read operation, readcount is decremented by one. When readcount
becomes 0, a signal operation on write permits a writer to access the file.
Writer process:

wait(write);

-- WRITE INTO THE FILE --

signal(write);
If a writer wishes to access the file, wait operation is performed on write semaphore,
which decrements write to 0 and no other writer can access the file. On completion of the
writing job by the writer who was accessing the file, the signal operation is performed
on write.
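For completeness, here is a compilable sketch of the same reader/writer logic using POSIX semaphores and threads (thread bodies and the shared variable are illustrative; compile with -lpthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex_sem, write_sem;   /* both initialized to 1 below */
static int readcount = 0;
static int shared_value = 0;

static void *reader(void *arg)
{
    sem_wait(&mutex_sem);
    if (++readcount == 1)
        sem_wait(&write_sem);        /* first reader locks out writers */
    sem_post(&mutex_sem);

    printf("reader %ld read %d\n", (long)arg, shared_value);

    sem_wait(&mutex_sem);
    if (--readcount == 0)
        sem_post(&write_sem);        /* last reader readmits writers */
    sem_post(&mutex_sem);
    return NULL;
}

static void *writer(void *arg)
{
    (void)arg;
    sem_wait(&write_sem);            /* exclusive access to the file */
    shared_value++;
    printf("writer wrote %d\n", shared_value);
    sem_post(&write_sem);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    sem_init(&mutex_sem, 0, 1);
    sem_init(&write_sem, 0, 1);
    pthread_create(&r1, NULL, reader, (void *)1L);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, (void *)2L);
    pthread_join(r1, NULL);
    pthread_join(w,  NULL);
    pthread_join(r2, NULL);
    return 0;
}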
A4(b) Inter-Process Communication (IPC) refers to mechanisms provided by an operating system that [10]
allow processes to communicate and synchronize with each other. This facilitates collaboration and
coordination between processes running concurrently on a system. Here's a detailed explanation of
IPC and its methods:
IPC enables processes to exchange data, coordinate activities, and synchronize their execution. This
is essential for tasks such as:
• Cooperation: Processes may need to collaborate on shared tasks or exchange information.
• Synchronization: Processes may need to synchronize their activities to ensure they operate
correctly and efficiently.
• Resource Sharing: Processes may need to share resources (like memory, files, or devices)
in a controlled manner.
There are several methods used for IPC, each suited to different scenarios and requirements (a
minimal pipe example follows this list):
1. Shared Memory:
o Description: Processes can communicate by mapping a shared portion of memory
into their address spaces. This allows them to read from and write to the shared
memory area, facilitating fast data exchange.
o Advantages: High performance, as data can be accessed directly without copying.
Useful for large data sets and frequent communication.
o Disadvantages: Requires synchronization mechanisms (like semaphores or
mutexes) to control access and prevent race conditions.
2. Message Passing:
o Description: Processes communicate by sending messages to each other through the
operating system kernel. Messages can be of fixed or variable size.
o Advantages: Simplifies synchronization and coordination as messages are explicitly
sent and received. Suitable for smaller data exchanges.
o Disadvantages: Overhead involved in message copying between user and kernel
space. Limited by message size and buffering capabilities.
3. Pipes and FIFOs (Named Pipes):
o Description: Provides unidirectional or bidirectional communication channels
between processes. Pipes are typically used for communication between related
processes, while FIFOs (named pipes) can be used between unrelated processes.
o Advantages: Simple and effective for sequential data exchange. Useful for
streaming data between processes.
o Disadvantages: Limited to communication between related processes or processes
that explicitly use named pipes.
4. Sockets:
o Description: Communication method used for networked IPC and also within the
same system (Unix domain sockets). Processes can communicate over a network or
locally using TCP/IP or UDP protocols.
o Advantages: Enables communication between processes on different systems or
within the same system using a flexible and standardized interface.
o Disadvantages: Overhead associated with network communication, even for local
IPC.
5. Signals:
o Description: Processes can send signals to other processes or handle predefined
signals sent by the operating system. Signals can indicate events or requests (e.g.,
termination, user interrupt).
o Advantages: Lightweight and efficient for notifying processes of events or
triggering specific actions.
o Disadvantages: Limited in the amount of data that can be communicated (typically
used for signaling events rather than data exchange).
6. Semaphores and Mutexes:
o Description: Synchronization mechanisms used to control access to shared
resources and manage critical sections of code. Processes use semaphores or
mutexes to coordinate access and prevent race conditions.
o Advantages: Ensures mutual exclusion and synchronization between processes
sharing resources. Can be used in conjunction with other IPC methods for safe data
sharing.
o Disadvantages: Requires careful programming to avoid deadlocks and ensure
correct synchronization.
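The pipe example promised above: a parent process sends a short message to its child through an anonymous pipe (a Unix-like system is assumed; the message text is illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1)              /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {               /* child: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        _exit(0);
    }

    close(fd[0]);                    /* parent: writes to the pipe */
    write(fd[1], "hello", strlen("hello"));
    close(fd[1]);
    wait(NULL);
    return 0;
}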
A5(b) Deadlock is a situation in a multitasking or multiprocessing system where two or more processes [10]
are unable to proceed because each is waiting for one of the others to release a resource, such as a
file, or waiting for an event (e.g., the freeing of system resources like CPU cycles or memory) that
only another process can cause to happen. Four conditions must hold simultaneously for deadlock
to occur:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode (exclusive
use), meaning that only one process at a time can use the resource.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire
additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process holding them;
they must be released voluntarily by the process holding them.
4. Circular Wait: There must exist a set of waiting processes {P1, P2, ..., Pn} such that P1 is
waiting for a resource held by P2, P2 is waiting for a resource held by P3, ..., and Pn is
waiting for a resource held by P1, creating a circular chain of waiting.
A resource allocation graph (RAG) shows which resource is held by which process and which
process is waiting for a resource of a specific kind. It is a simple and straightforward tool to outline
how interacting processes can deadlock. The resource allocation graph describes the state of the
system in terms of processes and resources: how many resources are available, how many are
allocated, and what the request of each process is. Everything can be represented in terms of a
graph. One advantage of the graph is that it is sometimes possible to see a deadlock directly by
using the RAG, whereas you might not realize it by looking at a table. Tables are better if the
system contains many processes and resources, and the graph is better if the system contains
fewer. Any graph contains vertices and edges.
If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only
one instance, then the processes will be in deadlock. For example, if process P1 holds resource
R1, process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for
R1, then process P1 and process P2 will be in deadlock.
A6(a) Paging: [10]
Paging is a memory management scheme used by modern operating systems to manage memory
allocation for processes. It divides physical memory into fixed-size blocks called frames, and
logical memory (used by processes) is divided into blocks of the same size called pages. Paging
allows a process to be allocated any free frames of physical memory, so its pages need not occupy
contiguous physical locations.
Example of Paging:
1. Page Table:
o The operating system maintains a page table for each process, which maps logical
addresses to physical addresses.
o For example, if a process wants to access logical address 0x1234, the page table
translates this address to a physical address that specifies both the frame number and
the offset within the frame.
2. Memory Allocation:
o When a process is loaded into memory, its pages are divided into frames.
o For instance, a process requiring 12 KB of memory would be divided into 3 pages
(each 4 KB).
o These pages can be placed into any available frames in physical memory.
3. Address Translation:
o Logical addresses generated by the CPU are divided into a page number and an
offset within the page.
o The page number is used as an index into the page table to find the corresponding
frame number in physical memory.
o The offset specifies the location within the frame (see the sketch after this list).
4. Advantages of Paging:
o Simplifies memory management by allowing dynamic allocation of memory in
fixed-size units (pages).
o Eliminates external fragmentation because any free frame can be allocated to any
page; the frames holding a process's pages need not be contiguous.
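The translation sketch referenced above, assuming 4 KB pages (12 offset bits) and a tiny illustrative page table; it reuses the 0x1234 example address from point 1:

#include <stdio.h>

int main(void)
{
    unsigned page_table[3] = {7, 2, 9};   /* page number -> frame number */
    unsigned logical = 0x1234;            /* example logical address */

    unsigned page     = logical >> 12;    /* high bits: page number (1) */
    unsigned offset   = logical & 0xFFF;  /* low 12 bits: offset (0x234) */
    unsigned frame    = page_table[page]; /* page table lookup (frame 2) */
    unsigned physical = (frame << 12) | offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);   /* physical 0x2234 */
    return 0;
}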
Paging and Segmentation are both memory management techniques, but they differ in how they
divide and manage memory:
1. Division:
o Paging: Divides both physical and logical memory into fixed-size blocks (pages).
Pages are uniformly sized, typically ranging from 4 KB to 16 KB.
o Segmentation: Divides logical memory into variable-sized segments, which may
correspond to different parts of a program (e.g., code segment, data segment).
2. Addressing:
o Paging: Logical addresses are divided into a page number and an offset within the
page. The page number is used for mapping to physical memory frames.
o Segmentation: Logical addresses consist of a segment number and an offset within
the segment. Each segment can be of different sizes, and segments are mapped to
physical memory independently.
3. Fragmentation:
o Paging: Eliminates external fragmentation because pages are of fixed size.
However, internal fragmentation can occur if a page is not fully utilized.
o Segmentation: Can lead to both external and internal fragmentation, as segments
are of variable sizes and may leave unused space between segments or within
segments.
4. Usage:
o Paging: Commonly used in modern operating systems due to its simplicity and
efficient memory allocation.
o Segmentation: Also used, particularly in systems where memory requirements vary
widely across processes or where logical division into distinct segments (like code,
stack, heap) is beneficial.
A6(b) [10]
A7(a) i) RAID (Redundant Array of Independent Disks) [10]
RAID is a data storage technology that combines multiple physical disk drives into a single logical
unit. It provides redundancy, performance improvement, or both, depending on the RAID level
used. Here’s an overview:
• Levels of RAID:
o RAID 0: Striping without redundancy. Data is divided into blocks and written
across multiple drives simultaneously, improving performance but offering no fault
tolerance.
o RAID 1: Mirroring for redundancy. Data is duplicated across two drives, providing
fault tolerance as each drive contains a complete copy of the data.
o RAID 5: Striping with distributed parity. Data is striped across multiple drives, and
parity information is distributed among them. Offers both performance improvement
and fault tolerance.
o RAID 6: Striping with double distributed parity. Similar to RAID 5 but with an
additional parity block, providing fault tolerance against two drive failures.
o RAID 10 (RAID 1+0): Combines mirroring and striping. Data is striped across
mirrored sets of drives, providing redundancy and performance benefits.
• Advantages:
o Fault Tolerance: Protects against data loss due to drive failures, depending on the
RAID level.
o Performance: Improves read and write performance, especially in RAID levels that
involve striping.
o Scalability: Allows adding more drives to increase storage capacity and
performance.
• Applications: Used in servers, storage arrays, and systems requiring high availability and
performance, such as databases, virtualization, and multimedia editing (a small parity
illustration follows).
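The parity illustration promised above: RAID 5's fault tolerance rests on the fact that the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors (byte values are illustrative):

#include <stdio.h>

int main(void)
{
    unsigned char d0 = 0xA5, d1 = 0x3C, d2 = 0x0F;   /* data blocks */
    unsigned char parity = d0 ^ d1 ^ d2;             /* parity block */

    /* Suppose the drive holding d1 fails; reconstruct it from the rest: */
    unsigned char recovered = d0 ^ d2 ^ parity;
    printf("recovered 0x%02X (original 0x%02X)\n", recovered, d1);
    return 0;
}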
File directories (or file systems directories) are structures used by operating systems to organize
and manage files on storage devices like hard drives. They provide a hierarchical structure that
allows users and programs to navigate and access files efficiently. Here’s a brief overview:
• Structure: Directories are organized in a tree-like structure, starting from a root directory
(e.g., C:\ in Windows, / in Unix/Linux).
• Purpose:
o Organization: Files are grouped into directories based on their type, purpose, or
user organization, making it easier to locate and manage them.
o Navigation: Users can navigate through directories using commands or graphical
interfaces, accessing files stored at various levels of the hierarchy.
• Components:
o Directories: Containers for files and other directories. Each directory may contain
multiple files and subdirectories.
o File Metadata: Each file entry in a directory contains metadata such as file name,
size, permissions, creation/modification timestamps, and pointers to the data blocks
on disk.
• Operations:
o Creation and Deletion: Users can create new directories or delete existing ones,
along with their contents.
o Navigation: Users can move (rename) or copy directories, and traverse through the
directory hierarchy.
o Access Control: Directories can have permissions set to control who can read,
write, or execute files within them.
• Examples: Common directory operations include listing contents (ls in Unix/Linux, dir in
Windows), changing directories (cd), creating directories (mkdir), and deleting directories
(rmdir).
File directories are fundamental to organizing and managing data on storage devices, providing a
structured and efficient way to store and access files within a computer system.
SCAN:
In the SCAN algorithm the disk arm moves in a particular direction and services the requests
coming in its path and after reaching the end of the disk, it reverses its direction and again
services the request arriving in its path. So, this algorithm works as an elevator and is hence also
known as an elevator algorithm. As a result, the requests at the midrange are serviced more and
those arriving behind the disk arm will have to wait.
Example: Suppose a disk has 200 cylinders (0-199), the head is at cylinder 53 moving toward 199,
and the request queue is 98, 183, 37, 122, 14, 124, 65, 67. SCAN services 65, 67, 98, 122, 124 and
183, reaches the end at 199, reverses, and then services 37 and 14. Total head movement =
(199 - 53) + (199 - 14) = 331 cylinders.
ii) C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there
may be zero or few requests pending at the scanned area.
These situations are avoided in the CSCAN algorithm in which the disk arm instead of reversing
its direction goes to the other end of the disk and starts servicing the requests from there. So, the
disk arm moves in a circular fashion and this algorithm is also similar to the SCAN algorithm
hence it is known as C-SCAN (Circular SCAN).
Example: With the same queue and the head at cylinder 53 moving toward 199, C-SCAN services
65, 67, 98, 122, 124 and 183, reaches 199, jumps back to cylinder 0, and then services 14 and 37.
Total head movement (counting the return sweep) = (199 - 53) + 199 + 37 = 382 cylinders.
____________X____________
BCS-401_Er. Sarika Singh