OS PreEnd Solution

The document outlines the syllabus and key concepts for the BCS-401: Operating System course at
Shri Ramswaroop Memorial College of Engineering & Management for the 2023-24 session. It covers
major functions of operating systems, synchronization problems like the Sleeping Barber Problem,
process management, memory fragmentation, scheduling algorithms, and file system protection.
Additionally, it discusses the differences between user-level and kernel-level threads, as well as
strategies for managing memory fragmentation.

F:/Academic/26

Appendix-‘D’ Refer/WI/ACAD/18

SHRI RAMSWAROOP MEMORIAL COLLEGE OF ENGINEERING & MANAGEMENT

PRE-END SEMESTER EXAMINATION


[Session: 2023-24 (Even)]

B. Tech. IV Sem. (CSE 4A, 4C)


BCS-401: OPERATING SYSTEM
Duration: 03 Hours Maximum Marks: 100

Section A
A1(a) An Operating System (OS) is system software that manages computer hardware and software [2]
resources and provides common services for computer programs.

Major Functions of an Operating System

1. Process Management: Scheduling and managing processes.
2. Memory Management: Allocating and managing memory.
3. File System Management: Managing files and directories.
4. Device Management: Controlling and coordinating hardware devices.
5. Security and Access Control: Protecting data and resources.
6. User Interface: Providing interfaces for user interaction.

A1(b) Busy waiting: [2]


Busy waiting, also known as spinning, is a synchronization technique where a process repeatedly
checks for a condition to be met without relinquishing control of the CPU. During busy waiting, the
process remains in a loop, actively checking if a resource is available or if an event has occurred,
rather than sleeping or yielding the processor to other processes.

Key Points:

• High CPU Usage: The process keeps the CPU busy, which can lead to inefficient resource
usage.
• Simple Implementation: Easy to implement but not optimal for performance.
• Use Cases: Often used in low-level programming where wait times are expected to be very
short, such as in device drivers or certain real-time systems.
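
A minimal sketch of busy waiting in C is shown below; the atomic flag ready, and the assumption
that some other thread eventually sets it, are illustrative rather than part of any particular API.

    #include <stdatomic.h>

    atomic_int ready = 0;                /* set to 1 by some other thread */

    void busy_wait(void)
    {
        /* Spin: repeatedly test the condition without yielding the CPU. */
        while (atomic_load(&ready) == 0)
            ;                            /* the CPU stays busy in this loop */
        /* Condition met: proceed to use the resource. */
    }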

A1(c) Scheduling is essential in operating systems to manage the execution of processes efficiently. Here [2]
are the key reasons why scheduling is needed:
1. Maximize CPU Utilization:
o Ensures the CPU is busy as much as possible by allocating tasks in an efficient
manner.
2. Fairness:
o Provides equitable CPU time to all processes, ensuring no single process
monopolizes the CPU.
3. Efficiency:
o Reduces idle time and maximizes the throughput of the system by managing the
execution order of processes.
4. Response Time:
o Improves the response time for interactive users by prioritizing certain processes,
especially those requiring quick user feedback.
5. Deadlock Avoidance:
o Helps in preventing deadlocks by careful allocation and deallocation of resources.
Effective scheduling leads to improved system performance, user satisfaction, and optimal resource
utilization.
A1(d) Fragmentation: Fragmentation refers to the phenomenon where available memory becomes [2]
divided into small, non-contiguous blocks over time, which cannot be efficiently utilized by the
system. This occurs in both main memory (RAM) and disk storage.
Fragmentation in the context of variable partitions refers to the inefficient utilization
of memory due to both internal and external fragmentation, which can impact system performance
and resource allocation efficiency.
A1(e) SCAN (Elevator) Scheduling Algorithm: The SCAN algorithm, also known as the elevator [2]
algorithm, moves the disk arm from one end of the disk to the other, serving requests along the
way, and then reverses direction when it reaches the end. Here's how it works:

• Movement: The disk arm moves across the disk in one direction, serving requests along the
way. Upon reaching the end, it reverses direction.
• Advantage: Minimizes average seek time by efficiently handling requests in a linear
fashion.

C-SCAN (Circular SCAN) Scheduling Algorithm: The C-SCAN algorithm is a variant of
SCAN designed to address potential inefficiencies in the SCAN algorithm, particularly with respect
to large waits at one end of the disk:

• Movement: Similar to SCAN, but the disk arm returns to the beginning of the disk after
reaching the end, servicing requests only in one direction.
• Advantage: Reduces maximum wait time compared to SCAN, particularly for requests
farthest from the current position of the disk arm.

A1(f) Safe State: [2]


• Definition: A state in which the system can allocate resources to each process in such a way
that all processes can eventually complete their execution.
• Characteristics:
o All processes can obtain the resources they request without leading to deadlock.
o No process needs to hold resources indefinitely, preventing others from executing.
Unsafe State:
• Definition: A state in which the system cannot guarantee that all processes can eventually
complete their execution due to potential deadlock or starvation.
• Characteristics:
o Processes may be blocked indefinitely waiting for resources.
o Allocation of resources may lead to deadlock, where processes are unable to
proceed.
A1(g) The main advantage of using variable partitions in multiprogramming is efficient memory [2]
utilization and enhanced system performance: variable partitions accommodate processes of varying
sizes, reducing internal fragmentation and optimizing overall system performance.
A1(h) A tree-level directory structure refers to a hierarchical organization of directories (folders) and [2]
subdirectories (nested folders) on a computer or file system. This structure resembles a tree, where
directories can have multiple levels of nesting, allowing for a clear and organized way to store and
manage files and folders. Each directory can contain files and/or additional directories, forming a
parent-child relationship that facilitates efficient organization and access of data.
A1(i) Process management in an operating system oversees the creation, scheduling, [2]
synchronization, and termination of processes. It ensures efficient utilization of system resources,
facilitates multitasking, manages memory allocation, and maintains system stability and security
through error handling and process protection mechanisms.
A1(j) The main states of a process in an operating system are: [2]

1. Ready: Waiting to be assigned to a processor for execution.
2. Running: Currently being executed on a processor.
3. Blocked (Waiting): Waiting for an event (such as I/O completion) before it can proceed.
4. Terminated: Finished execution or terminated by the operating system.

Section B

A2(a) [10]

A2(b) The Sleeping Barber Problem is a classic synchronization problem that demonstrates issues of [10]
resource management and mutual exclusion in concurrent systems. Here’s an illustration of the
problem and its solution using semaphores:

Problem Description:

• There is a barber shop with one barber and several chairs for waiting customers.
• If there are no customers, the barber sleeps in his chair.
• When a customer arrives:
o If the barber is sleeping, the customer wakes him up.
o If the barber is busy cutting hair, the customer sits in one of the chairs (if available)
or leaves if all chairs are occupied.

Solution Using Semaphores:

• Semaphores Used:
o customers_waiting: Counts the number of customers waiting.
o barber_ready: Indicates if the barber is ready to cut hair or is sleeping.
o barber_mutex: Ensures mutual exclusion when accessing shared variables.
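
A compact sketch of this solution in C with POSIX semaphores is given below; the waiting-room
size N_CHAIRS and the commented-out cut_hair/get_haircut routines are illustrative placeholders,
and the semaphores are assumed to be initialized as customers_waiting = 0, barber_ready = 0,
and barber_mutex = 1.

    #include <semaphore.h>

    #define N_CHAIRS 5

    sem_t customers_waiting;   /* counts waiting customers, initialized to 0 */
    sem_t barber_ready;        /* barber signals readiness, initialized to 0 */
    sem_t barber_mutex;        /* protects free_seats, initialized to 1 */
    int   free_seats = N_CHAIRS;

    void barber(void)
    {
        for (;;) {
            sem_wait(&customers_waiting);   /* sleep until a customer arrives */
            sem_wait(&barber_mutex);
            free_seats++;                   /* a customer leaves the waiting room */
            sem_post(&barber_ready);        /* invite the customer to the chair */
            sem_post(&barber_mutex);
            /* cut_hair(); */
        }
    }

    void customer(void)
    {
        sem_wait(&barber_mutex);
        if (free_seats > 0) {
            free_seats--;                   /* take a waiting-room seat */
            sem_post(&customers_waiting);   /* wake the barber if he is asleep */
            sem_post(&barber_mutex);
            sem_wait(&barber_ready);        /* wait until the barber is ready */
            /* get_haircut(); */
        } else {
            sem_post(&barber_mutex);        /* shop full: the customer leaves */
        }
    }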

The Sleeping Barber Problem thus simulates a barber shop with one barber, a waiting room, and a
number of customers. The problem involves coordinating access to the waiting room and the barber
chair so that only one customer is in the chair at a time, the barber is always working when a
customer is in the chair, and the barber sleeps when no customers are present.
A2(c)i A thread is a single sequential flow of execution within a process, so it is also known as a thread [5]
of execution or a thread of control. A process may contain more than one thread. Each thread of the
same process has its own program counter, stack of activation records, and control block. A thread is
often referred to as a lightweight process.

A process can be split into many threads. For example, in a browser, each tab can be viewed as a
thread. MS Word uses many threads: formatting text in one thread, processing input in another, and
so on.

Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Because threads share common data, they do not need to use Inter-Process Communication.
o Context switching is faster between threads than between processes.
o It takes less time to terminate a thread than a process.
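
As a minimal illustration of these points, the POSIX-threads sketch below creates two threads in
one process that share a global variable; the worker function and counter are illustrative, and a
real program would protect the shared counter with a mutex.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;              /* visible to every thread of the process */

    void *worker(void *arg)
    {
        shared_counter++;                /* threads share data without IPC */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* far cheaper than fork() */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);
        return 0;
    }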

Types of Threads

In the operating system, there are two types of threads.

1. Kernel-level thread.
2. User-level thread.

User-level thread

The operating system does not recognize user-level threads. User-level threads are implemented by
the user and can be implemented easily. If a user-level thread performs a blocking operation, the
whole process is blocked. The kernel knows nothing about user-level threads and manages them as
if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.

Advantages of User-level threads

1. User-level threads can be implemented more easily than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and mini
thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread:

Kernel-level threads are recognized by the operating system. There is a thread control block and a
process control block in the system for each thread and each process. Kernel-level threads are
implemented by the operating system: the kernel knows about all the threads and manages them,
and it offers system calls to create and manage threads from user space. The implementation of
kernel threads is more difficult than that of user threads, and context-switch time is longer. If one
kernel thread performs a blocking operation, another thread of the same process can continue
execution. Examples: Windows, Solaris.

Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of
threads.
3. Kernel-level threads are good for applications that frequently block.

Disadvantages of Kernel-level threads

1. The kernel must manage and schedule all threads, which adds overhead.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.

A2(c)ii [5]

A2(d) External Fragmentation: [10]

• Definition: External fragmentation occurs when free memory is divided into small, non-
contiguous blocks, making it difficult to allocate larger contiguous blocks of memory to
processes even though the total free memory might be sufficient.
• Cause: It arises when processes are loaded and unloaded from memory, leaving behind
small gaps of unused memory that are too small to be allocated to new processes.
• Solution: External fragmentation is typically managed by compaction or by using dynamic
memory allocation algorithms that can coalesce or merge fragmented memory blocks to
form larger contiguous blocks.

Internal Fragmentation:

• Definition: Internal fragmentation occurs when allocated memory may be slightly larger
than what is actually needed by a process. This results in wasted memory within allocated
blocks.
• Cause: It arises from fixed-size memory allocation strategies where processes are allocated
memory in fixed-size blocks, leading to unused memory within those blocks.
• Solution: Internal fragmentation can be reduced by using variable-size memory allocation
strategies such as paging or segmentation, where memory is allocated in smaller, variable-
sized units that better match the actual memory requirements of processes.

Solving Fragmentation Problem Using Paging:

• Paging: Paging is a memory management scheme that eliminates external fragmentation
and reduces internal fragmentation by dividing physical memory into fixed-size blocks
called frames, and dividing logical memory into blocks of the same size called pages.
• Advantages:
o Elimination of External Fragmentation: Pages can be allocated and deallocated
independently, and the physical memory is managed efficiently without external
fragmentation issues.
o Reduction of Internal Fragmentation: By allocating memory in fixed-size pages,
paging reduces internal fragmentation compared to allocating memory in larger,
fixed-size blocks.
• Working Principle:
o Processes are divided into smaller, equal-sized pages, which are then mapped to
frames in physical memory.
o Paging allows the operating system to allocate memory more efficiently, reduce
waste, and improve overall memory utilization.
• Page Replacement: To handle more processes than physical memory can accommodate,
the operating system uses page replacement algorithms (like LRU, FIFO) to decide which
pages to evict from memory when new pages need to be loaded.
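
A small sketch of the FIFO page-replacement idea mentioned above is given below; the frame
count and the reference string are made up for illustration.

    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};    /* page reference string */
        int nrefs = sizeof refs / sizeof refs[0];
        int frames[NFRAMES];
        int next = 0, faults = 0;

        for (int i = 0; i < NFRAMES; i++)
            frames[i] = -1;                        /* -1 marks an empty frame */

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < NFRAMES; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                frames[next] = refs[i];            /* evict the oldest page (FIFO) */
                next = (next + 1) % NFRAMES;
                faults++;
            }
        }
        printf("page faults: %d\n", faults);       /* prints 7 for this string */
        return 0;
    }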

A2(e) i) File System Protection and Security [10]

File system protection and security mechanisms are essential to safeguard data integrity,
confidentiality, and availability within a computer system. Here are key aspects:

• Access Control: Determines who can access files and directories, and what operations they
can perform (read, write, execute). This is often managed through permissions and access
control lists (ACLs).
• Authentication and Authorization: Ensures that users are authenticated before accessing
files. Authorization mechanisms verify whether a user has the necessary permissions to
perform requested actions.
• Encryption: Encrypts sensitive data to prevent unauthorized access even if the data is
intercepted. This ensures confidentiality.
• Auditing and Logging: Tracks access and modifications to files, providing accountability
and traceability in case of security incidents.
• Backup and Recovery: Implements strategies to back up files regularly and recover them
in case of data loss or corruption.
• File Integrity: Verifies that files have not been tampered with or corrupted, ensuring data
integrity.
• Antivirus and Malware Protection: Protects against malicious software that can
compromise file system security.
• Secure File Deletion: Ensures that files are securely erased to prevent recovery by
unauthorized parties.

Effective file system protection and security measures are critical in maintaining the privacy,
integrity, and availability of data, especially in multi-user and networked environments.

ii) Linked File Allocation Methods

Linked file allocation is a method of organizing files on a disk where each file is a linked list of
disk blocks. Here's how it works:

• Structure: Each file is represented as a linked list of disk blocks (or clusters), where each
block contains a pointer to the next block in the file.
• Advantages:
o Dynamic Size: Files can grow or shrink dynamically since each block points to the
next.
o No External Fragmentation: Linked allocation does not suffer from external
fragmentation because files can be scattered across the disk without concern for
contiguous blocks.
• Disadvantages:
o Random Access: Direct access to specific parts of the file is inefficient because
each block must be accessed sequentially.
o Overhead: Requires extra space for pointers between blocks, which increases
storage overhead compared to other allocation methods.
o Reliability: The reliability of linked allocation can be compromised if pointers are
lost or corrupted, leading to data loss or file fragmentation.
• Variants:
o Indexed Linked Allocation: Uses an index block that contains pointers to all blocks
of a file, allowing for faster access than pure linked allocation.
o File Allocation Table (FAT): A variant of linked allocation where a centralized
table (FAT) manages pointers to disk blocks, enhancing reliability and performance.

Linked file allocation methods are suitable for systems where files vary greatly in size and require
dynamic allocation. However, they require careful management of pointers to ensure efficient and
reliable access to data stored on disk.
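
A minimal sketch of how a linked-allocation disk block might be laid out is shown below; the
block size and field names are illustrative assumptions, not an actual on-disk format.

    #define BLOCK_SIZE 512

    struct disk_block {
        int  next;                           /* index of the next block; -1 marks end of file */
        char data[BLOCK_SIZE - sizeof(int)]; /* the pointer overhead reduces usable space */
    };

    /* Reading a file means following the chain block by block, which is
       why random access is inefficient under linked allocation. */
    void read_file(struct disk_block *disk, int first_block)
    {
        for (int b = first_block; b != -1; b = disk[b].next) {
            /* process disk[b].data sequentially */
        }
    }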
Section C
A3(a) i) Real-Time Operating System (RTOS) [10]

A Real-Time Operating System (RTOS) is designed to manage and control the execution of tasks
with strict timing constraints. Here are key aspects of RTOS:

• Deterministic Response: RTOS guarantees a deterministic response time to events and
tasks. Tasks are scheduled and executed within predefined deadlines.
• Types of Real-Time Systems:
o Hard Real-Time: Critical tasks have strict deadlines that must be met; failure to
meet a deadline can lead to system failure (e.g., aerospace systems).
o Soft Real-Time: Tasks have deadlines, but occasional misses can be tolerated
without catastrophic consequences (e.g., multimedia applications).
• Features:
o Task Scheduling: Prioritizes tasks based on deadlines and criticality using
scheduling algorithms like Rate Monotonic Scheduling (RMS) or Earliest Deadline
First (EDF).
o Interrupt Handling: Handles interrupts efficiently to ensure timely response to
external events.
o Resource Management: Manages resources such as CPU time, memory, and I/O
devices to meet real-time requirements.
• Applications:
o Used in industries such as automotive (for engine control), industrial automation,
medical devices, telecommunications, and embedded systems where timing
predictability and reliability are critical.

ii) Time-Sharing System

A Time-Sharing System is a multi-user operating system where multiple users can interact with a
computer system concurrently by sharing its resources. Here are key aspects of time-sharing
systems:

• Resource Sharing: Users share CPU time, memory, and peripherals (such as printers and
disks) simultaneously.
• Multiprogramming: System executes multiple tasks or processes concurrently by rapidly
switching between them, giving each user or application a time slice or quantum of CPU
time.
• User Interaction: Provides interactive computing environment where users can run
programs, execute commands, and access resources through terminals or graphical
interfaces.
• Scheduling: Employs CPU scheduling algorithms (e.g., Round Robin, Priority Scheduling)
to allocate CPU time fairly among competing processes or users.
• Advantages:
o Maximizes CPU and resource utilization by allowing efficient sharing among users
and applications.
o Provides responsiveness and quick turnaround time for interactive tasks.
o Supports multitasking and concurrent execution of diverse workloads.
• Examples: Unix/Linux, Windows, and macOS are modern examples of time-sharing
systems that provide a responsive and interactive computing environment for multiple users.

Time-sharing systems revolutionized computing by enabling efficient resource utilization and
interactive computing experiences, laying the foundation for modern multi-user operating systems
and networked computing environments.
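
As a minimal sketch of the time-slicing idea, the Round Robin simulation below runs each process
for at most one quantum per pass; the quantum and the CPU-burst values are illustrative.

    #include <stdio.h>

    #define QUANTUM 4

    int main(void)
    {
        int burst[] = {5, 9, 3};            /* remaining CPU time per process (ms) */
        int n = sizeof burst / sizeof burst[0];
        int remaining = n, clock = 0;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;
                int slice = burst[i] < QUANTUM ? burst[i] : QUANTUM;
                clock += slice;             /* run process i for one time slice */
                burst[i] -= slice;
                printf("P%d ran until t=%d\n", i + 1, clock);
                if (burst[i] == 0) remaining--;
            }
        }
        return 0;
    }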

A3(b)i Difference between spooling and buffering: [5]

A3(b)ii Multithreaded Systems: [5]
Multithreaded systems are operating systems or applications that support the execution of multiple
threads within a single process. Multithreading permits multiple threads to run independently while
sharing the same process resources. A thread is a sequence of instructions that may run within the
same parent process alongside other threads.

Multithreading allows many parts of a program to run simultaneously. These parts are referred to as
threads, and they are lightweight processes that are available within the process. As a result,
multithreading increases CPU utilization through multitasking. In multithreading, a computer may
execute and process multiple tasks simultaneously.

Multithreading requires a clear understanding of two terms: process and thread. A process is a
running program, and a process can be subdivided into independent units called threads.

Here’s an explanation along with advantages and disadvantages:

Explanation:

• Threads: Threads are lightweight execution units within a process that can run
concurrently. They share the same memory space and resources of the parent process,
allowing for efficient communication and coordination.
• Multithreading: In a multithreaded system, a single process can have multiple threads of
execution, each performing different tasks concurrently. Threads within the same process
can communicate directly, making multithreading useful for tasks that benefit from
parallelism or responsiveness.

Advantages of Multithreaded Systems:

1. Concurrency: Threads enable concurrent execution of tasks within a single process,
improving overall system responsiveness and performance.
2. Resource Sharing: Threads within the same process share resources such as memory and
file descriptors, reducing overhead compared to processes that require separate address
spaces.
3. Efficiency: Creating and managing threads is typically faster and consumes fewer resources
than creating new processes, making multithreading efficient for parallel tasks.
4. Responsiveness: Multithreading allows certain tasks (e.g., handling user input, updating the
user interface) to run concurrently with other tasks, providing a more responsive user
experience.
5. Simplified Communication: Threads within the same process can communicate directly
through shared memory, message passing, or synchronization mechanisms like mutexes and
semaphores.

Disadvantages of Multithreaded Systems:

1. Complexity: Multithreading introduces complexity in programming, as developers need to
manage thread synchronization, avoid race conditions, and handle potential deadlock
situations.
2. Difficulty in Debugging: Debugging multithreaded programs can be challenging due to
non-deterministic behavior and timing-dependent bugs.
3. Resource Contention: Threads sharing resources can lead to contention issues (e.g.,
conflicting access to shared variables), requiring careful synchronization to maintain data
integrity.
4. Scalability Limitations: While multithreading improves performance on multi-core
systems, excessive threading beyond available cores can lead to diminishing returns or even
performance degradation due to overhead.
5. Security Risks: Improperly synchronized threads can lead to security vulnerabilities such
as data races and unintended information disclosure.

A4(a) READERS WRITERS PROBLEM: [10]
The readers-writers problem is a classical problem of process synchronization. It relates to a data
set, such as a file, that is shared between more than one process at a time. Among these various
processes, some are Readers, which can only read the data set and do not perform any updates,
and some are Writers, which can both read and write the data set.

The readers-writers problem is about managing synchronization among the various reader and
writer processes so that no problems arise with the data set, i.e. no inconsistency is generated.

Let's understand with an example. If two or more readers want to access the file at the same point
in time, there is no problem. However, in other situations, such as when two writers or one reader
and one writer want to access the file at the same point in time, problems may occur. Hence the
task is to design the code in such a manner that if one reader is reading, no writer is allowed to
update the file at the same time; if one writer is writing, no reader is allowed to read the file; and
if one writer is updating the file, no other writer is allowed to update it at the same time. Multiple
readers, however, can access the object at the same time.

TABLE 1

Case Process 1 Process 2 Allowed / Not Allowed

Case 1 Writing Writing Not Allowed

Case 2 Reading Writing Not Allowed

Case 3 Writing Reading Not Allowed

Case 4 Reading Reading Allowed

The solution of readers and writers can be implemented using binary semaphores.

We use two binary semaphores, "write" and "mutex", where a semaphore can be defined as:

Semaphore: A semaphore is an integer variable S that, apart from initialization, is
accessed by only two standard atomic operations - wait and signal, whose definitions are
as follows:

    wait(S)
    {
        while (S <= 0)
            ;           /* busy wait until S becomes positive */
        S--;
    }

    signal(S)
    {
        S++;
    }

From the above definition of wait, it is clear that while S <= 0 the process spins in the
loop (because of the empty statement after the while condition), busy-waiting until S
becomes positive. The job of signal is to increment the value of S.

The code below provides the solution to the reader-writer problem; the reader and writer
process codes are given as follows.

Code for Reader Process

The code of the reader process is given below -

    static int readcount = 0;

    wait(mutex);
    readcount++;            /* on each entry of a reader, increment readcount */
    if (readcount == 1)
    {
        wait(write);        /* the first reader locks out writers */
    }
    signal(mutex);

    /* READ THE FILE */

    wait(mutex);
    readcount--;            /* on every exit of a reader, decrement readcount */
    if (readcount == 0)
    {
        signal(write);      /* the last reader lets writers back in */
    }
    signal(mutex);

In the above reader code, mutex and write are semaphores that have an initial value of 1,
whereas the readcount variable has an initial value of 0. Both mutex and write are common
to the reader and writer process code; the semaphore mutex ensures mutual exclusion over
readcount, and the semaphore write handles the writing mechanism.

The readcount variable denotes the number of readers accessing the file concurrently. The
moment readcount becomes 1, the wait operation is performed on the write semaphore, which
decreases its value by one. This means that a writer is no longer allowed to access the
file. On completion of the read operation, readcount is decremented by one. When readcount
becomes 0, the signal operation on write permits a writer to access the file.

Code for Writer Process

The code that defines the writer process is given below:

    wait(write);
    /* WRITE INTO THE FILE */
    signal(write);

If a writer wishes to access the file, the wait operation is performed on the write
semaphore, which decrements write to 0 so that no other writer (or reader) can access the
file. On completion of the writing job, the writer who was accessing the file performs the
signal operation on write.
A4(b) Inter-Process Communication (IPC) refers to mechanisms provided by an operating system that [10]
allow processes to communicate and synchronize with each other. This facilitates collaboration and
coordination between processes running concurrently on a system. Here's a detailed explanation of
IPC and its methods:

Overview of Inter-Process Communication (IPC):

IPC enables processes to exchange data, coordinate activities, and synchronize their execution. This
is essential for tasks such as:

• Cooperation: Processes may need to collaborate on shared tasks or exchange information.
• Synchronization: Processes may need to synchronize their activities to ensure they operate
correctly and efficiently.
• Resource Sharing: Processes may need to share resources (like memory, files, or devices)
in a controlled manner.

Methods of Inter-Process Communication (IPC):

There are several methods used for IPC, each suited to different scenarios and requirements:

1. Shared Memory:
o Description: Processes can communicate by mapping a shared portion of memory
into their address spaces. This allows them to read from and write to the shared
memory area, facilitating fast data exchange.
o Advantages: High performance, as data can be accessed directly without copying.
Useful for large data sets and frequent communication.
o Disadvantages: Requires synchronization mechanisms (like semaphores or
mutexes) to control access and prevent race conditions.
2. Message Passing:
o Description: Processes communicate by sending messages to each other through the
operating system kernel. Messages can be of fixed or variable size.
o Advantages: Simplifies synchronization and coordination as messages are explicitly
sent and received. Suitable for smaller data exchanges.
o Disadvantages: Overhead involved in message copying between user and kernel
space. Limited by message size and buffering capabilities.
3. Pipes and FIFOs (Named Pipes):
o Description: Provides unidirectional or bidirectional communication channels
between processes. Pipes are typically used for communication between related
processes, while FIFOs (named pipes) can be used between unrelated processes.
o Advantages: Simple and effective for sequential data exchange. Useful for
streaming data between processes (a minimal pipe sketch follows this list).
o Disadvantages: Limited to communication between related processes or processes
that explicitly use named pipes.
4. Sockets:
o Description: Communication method used for networked IPC and also within the
same system (Unix domain sockets). Processes can communicate over a network or
locally using TCP/IP or UDP protocols.
o Advantages: Enables communication between processes on different systems or
within the same system using a flexible and standardized interface.
o Disadvantages: Overhead associated with network communication, even for local
IPC.
5. Signals:
o Description: Processes can send signals to other processes or handle predefined
signals sent by the operating system. Signals can indicate events or requests (e.g.,
termination, user interrupt).
o Advantages: Lightweight and efficient for notifying processes of events or
triggering specific actions.
o Disadvantages: Limited in the amount of data that can be communicated (typically
used for signaling events rather than data exchange).
6. Semaphores and Mutexes:
o Description: Synchronization mechanisms used to control access to shared

resources and manage critical sections of code. Processes use semaphores or
mutexes to coordinate access and prevent race conditions.
o Advantages: Ensures mutual exclusion and synchronization between processes
sharing resources. Can be used in conjunction with other IPC methods for safe data
sharing.
o Disadvantages: Requires careful programming to avoid deadlocks and ensure
correct synchronization.
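
A minimal sketch of method 3 above, an anonymous pipe between a parent and its child on a
Unix-like system, is given below; the message text is illustrative.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[32];

        pipe(fd);                          /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                 /* child: writes a message */
            close(fd[0]);
            write(fd[1], "hello", 6);      /* 6 bytes includes the trailing '\0' */
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                      /* parent: reads the message */
        read(fd[0], buf, sizeof buf);
        close(fd[0]);
        wait(NULL);
        printf("parent received: %s\n", buf);
        return 0;
    }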

Choosing IPC Methods:

• Performance Considerations: Choose shared memory for high-performance data
exchange, message passing for smaller and controlled communication, and sockets for
networked communication.
• Synchronization Needs: Use semaphores, mutexes, or message passing for
synchronization and coordination requirements.
• Security and Scope: Consider security implications and whether communication is local
(pipes, shared memory) or networked (sockets).

A5(a) (i) SRTF [10]

A5(b) Deadlock is a situation in a multitasking or multiprocessing system where two or more processes [10]
are unable to proceed because each is waiting for one of the others to release a resource (such as a
file) or to cause an event (e.g., the freeing of system resources like CPU cycles or memory) that only
another of the waiting processes can bring about.

Necessary Conditions for Deadlock:

1. Mutual Exclusion: At least one resource must be held in a non-sharable mode (exclusive
use), meaning that only one process at a time can use the resource.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire
additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process holding them;
they must be released voluntarily by the process holding them.
4. Circular Wait: There must exist a set of waiting processes {P1, P2, ..., Pn} such that P1 is
waiting for a resource held by P2, P2 is waiting for a resource held by P3, ..., and Pn is
waiting for a resource held by P1, creating a circular chain of waiting.

Resource Allocation Graph (RAG):

A resource allocation graph (RAG) shows which resource is held by which process and which
process is waiting for a resource of a specific kind. It is a simple and straightforward tool to
outline how interacting processes can deadlock. The graph describes the state of the system in
terms of processes and resources: how many resources are available, how many are allocated, and
what the request of each process is. One of the benefits of having a graph is that it is sometimes
possible to see a deadlock directly by looking at the RAG, whereas you might not realize it by
looking at a table. Tables are better if the system contains a large number of processes and
resources, while a graph is better if the system contains a small number of processes and
resources. Like any graph, a RAG contains vertices (processes and resources) and edges
(assignments and requests).

Example 1 (Single instances RAG)

If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only
one instance, then the processes will be in deadlock. For example, if process P1 holds resource
R1, process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for
R1, then process P1 and process P2 will be in deadlock.
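
As a minimal sketch of this circular wait, the two pthread mutexes below stand in for R1 and R2;
the sleep() calls are illustrative and simply make it likely that each thread holds one lock while
requesting the other, so the program typically deadlocks.

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

    void *p1(void *arg)
    {
        pthread_mutex_lock(&r1);   /* P1 holds R1 ... */
        sleep(1);
        pthread_mutex_lock(&r2);   /* ... and waits for R2 */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    void *p2(void *arg)
    {
        pthread_mutex_lock(&r2);   /* P2 holds R2 ... */
        sleep(1);
        pthread_mutex_lock(&r1);   /* ... and waits for R1: circular wait */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);    /* with the sleeps, this typically never returns */
        pthread_join(t2, NULL);
        return 0;
    }

Imposing a single global lock order (both threads taking r1 before r2) breaks the circular-wait
condition and removes the deadlock.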

A6(a) Paging: [10]
Paging is a memory management scheme used by modern operating systems to manage memory
allocation for processes. It divides physical memory into fixed-size blocks called frames, and
logical memory (used by processes) is divided into blocks of the same size called pages. Paging
allows a process to be allocated memory in non-contiguous frames of physical memory, regardless
of their physical location, while the process still sees a contiguous logical address space.

Example of Paging:

Let's consider a system with the following specifications:

• Physical memory is divided into frames, each frame 4 KB in size.
• Logical memory is divided into pages, each page also 4 KB in size.
• Suppose a process requires 12 KB of memory.

1. Page Table:
o The operating system maintains a page table for each process, which maps logical
addresses to physical addresses.
o For example, if a process wants to access logical address 0x1234, the page table
translates this address to a physical address that specifies both the frame number and
the offset within the frame.
2. Memory Allocation:
o When a process is loaded into memory, its pages are divided into frames.
o For instance, a process requiring 12 KB of memory would be divided into 3 pages
(each 4 KB).
o These pages can be placed into any available frames in physical memory.
3. Address Translation:
o Logical addresses generated by the CPU are divided into a page number and an
offset within the page.
o The page number is used as an index into the page table to find the corresponding
frame number in physical memory.
o The offset specifies the location within the frame (a worked sketch follows this list).
4. Advantages of Paging:
o Simplifies memory management by allowing dynamic allocation of memory in
fixed-size units (pages).
o Eliminates external fragmentation because any free frame can hold any page, so
no contiguous run of physical memory is required.
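
The worked sketch below reproduces the translation of step 3 for the logical address 0x1234 with
4 KB pages; the page-table contents are illustrative.

    #include <stdio.h>

    #define PAGE_SIZE 4096                     /* 4 KB pages: low 12 bits are the offset */

    int main(void)
    {
        int page_table[] = {5, 9, 2};          /* page i resides in frame page_table[i] */
        unsigned logical = 0x1234;             /* example logical address */

        unsigned page     = logical / PAGE_SIZE;   /* 0x1234 / 4096 = 1 */
        unsigned offset   = logical % PAGE_SIZE;   /* 0x1234 % 4096 = 0x234 */
        unsigned physical = page_table[page] * PAGE_SIZE + offset;

        /* page 1 maps to frame 9, so the physical address is 0x9234 */
        printf("page %u, offset 0x%x -> physical 0x%x\n", page, offset, physical);
        return 0;
    }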

Paging vs. Segmentation

Paging and Segmentation are both memory management techniques, but they differ in how they
divide and manage memory:

1. Division:
o Paging: Divides both physical and logical memory into fixed-size blocks (pages).
Pages are uniformly sized, typically ranging from 4 KB to 16 KB.
o Segmentation: Divides logical memory into variable-sized segments, which may
correspond to different parts of a program (e.g., code segment, data segment).
2. Addressing:
o Paging: Logical addresses are divided into a page number and an offset within the
page. The page number is used for mapping to physical memory frames.
o Segmentation: Logical addresses consist of a segment number and an offset within
the segment. Each segment can be of different sizes, and segments are mapped to
physical memory independently.
3. Fragmentation:
o Paging: Eliminates external fragmentation because pages are of fixed size.
However, internal fragmentation can occur if a page is not fully utilized.
o Segmentation: Can lead to both external and internal fragmentation, as segments
are of variable sizes and may leave unused space between segments or within
segments.
4. Usage:
o Paging: Commonly used in modern operating systems due to its simplicity and
efficient memory allocation.
o Segmentation: Also used, particularly in systems where memory requirements vary
widely across processes or where logical division into distinct segments (like code,
stack, heap) is beneficial.

A6(b) [10]

A7(a) i) RAID (Redundant Array of Independent Disks) [10]
RAID is a data storage technology that combines multiple physical disk drives into a single logical
unit. It provides redundancy, performance improvement, or both, depending on the RAID level
used. Here’s an overview:

• Levels of RAID:
o RAID 0: Striping without redundancy. Data is divided into blocks and written
across multiple drives simultaneously, improving performance but offering no fault
tolerance.
o RAID 1: Mirroring for redundancy. Data is duplicated across two drives, providing
fault tolerance as each drive contains a complete copy of the data.
o RAID 5: Striping with distributed parity. Data is striped across multiple drives, and
parity information is distributed among them. Offers both performance improvement
and fault tolerance.
o RAID 6: Striping with double distributed parity. Similar to RAID 5 but with an
additional parity block, providing fault tolerance against two drive failures.
o RAID 10 (RAID 1+0): Combines mirroring and striping. Data is striped across
mirrored sets of drives, providing redundancy and performance benefits.
• Advantages:
o Fault Tolerance: Protects against data loss due to drive failures, depending on the
RAID level.
o Performance: Improves read and write performance, especially in RAID levels that
involve striping.
o Scalability: Allows adding more drives to increase storage capacity and
performance.
• Applications: Used in servers, storage arrays, and systems requiring high availability and
performance, such as databases, virtualization, and multimedia editing.

ii) File Directories

File directories (or file systems directories) are structures used by operating systems to organize
and manage files on storage devices like hard drives. They provide a hierarchical structure that
allows users and programs to navigate and access files efficiently. Here’s a brief overview:

• Structure: Directories are organized in a tree-like structure, starting from a root directory
(e.g., C:\ in Windows, / in Unix/Linux).
• Purpose:
o Organization: Files are grouped into directories based on their type, purpose, or
user organization, making it easier to locate and manage them.
o Navigation: Users can navigate through directories using commands or graphical
interfaces, accessing files stored at various levels of the hierarchy.
• Components:
o Directories: Containers for files and other directories. Each directory may contain
multiple files and subdirectories.
o File Metadata: Each file entry in a directory contains metadata such as file name,
size, permissions, creation/modification timestamps, and pointers to the data blocks
on disk.
• Operations:
o Creation and Deletion: Users can create new directories or delete existing ones,
along with their contents.
o Navigation: Users can move (rename) or copy directories, and traverse through the
directory hierarchy.
o Access Control: Directories can have permissions set to control who can read,
write, or execute files within them.
• Examples: Common directory operations include listing contents (ls in Unix/Linux, dir in
Windows), changing directories (cd), creating directories (mkdir), and deleting directories
(rmdir).

File directories are fundamental to organizing and managing data on storage devices, providing a
structured and efficient way to store and access files within a computer system.

A7(b) i) SCAN [10]

In the SCAN algorithm the disk arm moves in a particular direction, services the requests
coming in its path, and after reaching the end of the disk reverses its direction and
services the requests arriving in its path on the return sweep. The algorithm thus works
like an elevator and is hence also known as the elevator algorithm. As a result, requests
in the mid-range of the disk are serviced more often, while those arriving just behind the
disk arm have to wait longer.

Example:

Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is also given that the disk arm should move "towards the larger value".
Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as
= (199 - 50) + (199 - 16) = 332

ii) C-SCAN

In the SCAN algorithm, the disk arm re-scans the path it has already scanned after
reversing its direction, so it may happen that too many requests are waiting at the other
end while zero or few requests are pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, goes to the other end of the disk and starts servicing the
requests from there. The disk arm therefore moves in a circular fashion, and since the
algorithm is otherwise similar to SCAN it is known as C-SCAN (Circular SCAN).

Example:

Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is also given that the disk arm should move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (199 - 50) + (199 - 0) + (43 - 0) = 391
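
As a minimal sketch, the program below reproduces both totals, assuming cylinders 0-199 and the
initial head position 50 used in the examples.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int req[] = {82, 170, 43, 140, 24, 16, 190};
        int n = sizeof req / sizeof req[0];
        int head = 50, max_cyl = 199;

        qsort(req, n, sizeof req[0], cmp);   /* 16 24 43 82 140 170 190 */

        /* SCAN: sweep up to the last cylinder, then reverse down to the
           lowest pending request. */
        int scan = (max_cyl - head) + (max_cyl - req[0]);

        /* C-SCAN: sweep up to the last cylinder, jump to cylinder 0, then
           sweep up to the largest request below the starting position. */
        int below_max = 0;
        for (int i = 0; i < n; i++)
            if (req[i] < head) below_max = req[i];
        int cscan = (max_cyl - head) + max_cyl + below_max;

        printf("SCAN total = %d\n", scan);     /* 332 */
        printf("C-SCAN total = %d\n", cscan);  /* 391 */
        return 0;
    }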

____________X____________

