
OS IA2

Q1) What is Mutual Exclusion? Explain its significance. 2M

Ans: Mutual exclusion is one of the conditions for a deadlock to occur; it must hold for
non-shareable resources such as printers and memory space. Mutual exclusion in an operating
system (OS) ensures that only one process or thread can access a shared resource (like a file or
memory) at a time, preventing data corruption and ensuring program integrity.

Significance of mutual exclusion:

 Prevents race conditions.

 Prevents multiple threads from entering the critical section at the same time.
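
As a minimal illustration (not part of the original answer), the following C sketch shows mutual
exclusion enforced with a POSIX mutex; the shared variable counter and the thread function
worker are illustrative names:

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                      /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* enter critical section */
        counter++;                           /* only one thread updates at a time */
        pthread_mutex_unlock(&lock);         /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);       /* always 200000 with the lock held */
    return 0;
}

Without the lock/unlock pair around counter++, the two threads could interleave their
read-modify-write sequences and the final count would be unpredictable.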

Q2) What is the effect of page size on the performance of the operating system?

Ans: In paging, the number of frames equals the size of physical memory divided by the page
size, so an increase in page size means a decrease in the number of available frames. For
example, 64 MB of memory gives 16,384 frames with a 4 KB page size but only 8,192 frames
with an 8 KB page size. Having fewer frames tends to increase the number of page faults,
because it leaves less freedom for the page replacement strategy. Large page sizes also waste
space due to internal fragmentation. On the other hand, a large page brings in more memory per
fault, so the number of faults may decrease when there is limited contention, and larger pages
reduce the number of TLB misses. A small page size increases the number of pages and
therefore the size of the page table. Hence a large page size is preferred most of the time, but
this is not fixed: sometimes a small page size is preferred and sometimes a larger one, depending
on the nature of the workload and the system requirements.

Q3) What is deadlock? Explain the necessary and sufficient conditions for deadlock.

Ans:

Deadlock:

 The computer system uses many types of resources, which are then used by various
processes to carry out their individual functions.

 The problem is that the amount of each resource available is limited, while many
processes need to use it.

 A set of processes is said to be in a deadlocked state when every process in the set is
waiting for an event that can be caused only by another process in the set. The event can
be resource acquisition, resource release, etc. The resource can be physical (printers,
memory space) or logical (semaphores, files).
The necessary and sufficient conditions for deadlock to occur are:

 Mutual Exclusion

o Only one process at a time can use a resource.

o If another process requests the same resource, it must be delayed
until that resource is released.

 Hold and Wait

o A process is holding a resource and waiting to acquire additional resources that
are currently being held by other processes.

 No Pre-emption:

o Resources cannot be pre-empted


o A resource can be released only voluntarily by the process currently holding it,
after that process has completed its task.

 Circular wait

o There exists a set of processes {P0, P1, …, Pn} such that P0 is waiting for a resource
held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for P0
to release its resources.
o Every process holds a resource needed by the next process.

All four of the above conditions must hold simultaneously for a deadlock to occur.
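
The following C sketch (an illustration assumed here, not taken from the answer above) shows
how hold and wait plus circular wait produce a deadlock when two threads acquire two locks in
opposite order; the sleep() calls only make the bad interleaving likely:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    pthread_mutex_lock(&lockA);   /* holds A ...                                   */
    sleep(1);
    pthread_mutex_lock(&lockB);   /* ... and waits for B (held by t2)              */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&lockB);   /* holds B ...                                   */
    sleep(1);
    pthread_mutex_lock(&lockA);   /* ... and waits for A (held by t1): circular wait */
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);        /* never returns: both threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}

Breaking any one of the four conditions, for example making both threads acquire lockA before
lockB so that no circular wait can form, removes the deadlock.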

Q4) Explain counting semaphore with examples. 2M

Ans: A semaphore is a synchronization primitive used to manage access to shared resources in
concurrent programming, such as in operating systems. It is primarily used for controlling
access to a common resource in a multithreading environment and preventing race conditions.

A counting semaphore is a type of semaphore that allows a certain number of threads or
processes to access a resource simultaneously. It is typically used when there are multiple
identical resources available.

The counter in a counting semaphore can take values greater than 1, allowing multiple processes
or threads to access the resources at the same time.

Example:

In this example, we have a pool of 3 printers and 5 processes that want to print documents. The
counting semaphore is used to control access to these printers.
1. Initialization: We initialize the semaphore with a count of 3, representing the 3 available
printers.
o Semaphore counter = 3.
2. P (Wait) Operation: When a process wants to print, it performs a P operation (acquire()).
o If a printer is available (semaphore counter > 0), the counter is decremented by 1,
and the process can print.
o If all printers are in use (semaphore counter = 0), the process will be blocked until
a printer becomes available.
3. V (Signal) Operation: After finishing the printing, the process performs a V operation
(release()).
o This increments the semaphore counter by 1, freeing up a printer.
o If any processes were waiting for a printer, one of them can now proceed.
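
A small sketch of the printer-pool example using POSIX counting semaphores; the names
printers and print_job are assumptions made for this illustration, not code from the answer:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t printers;                       /* counting semaphore, initial value 3 */

static void *print_job(void *arg)
{
    sem_wait(&printers);                     /* P: take a printer, blocks if all 3 are busy */
    printf("process %ld is printing\n", (long)arg);
    sem_post(&printers);                     /* V: release the printer */
    return NULL;
}

int main(void)
{
    pthread_t p[5];
    sem_init(&printers, 0, 3);               /* 3 identical printers available */
    for (long i = 0; i < 5; i++)
        pthread_create(&p[i], NULL, print_job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(p[i], NULL);
    sem_destroy(&printers);
    return 0;
}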

Q5) What is paging? Explain the FIFO, LRU and Optimal page replacement algorithms for the
following reference string, with 4 page frames. Calculate the hit ratio for each. 2M

1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2

Ans: Paging is a memory management technique which allows the physical address space of a
process to be non-contiguous. In some sense, the paging mechanism is similar to reading a book:
when we read a book we only need to open and see the current page, while all the other pages
remain closed and out of view. In the same way, even when a large program is loaded, the
processor only needs a small set of instructions to execute at any time, and these instructions
usually lie within a small proximity of each other, like the statements on the page we are
currently reading. Paging therefore allows the OS to keep just the parts of a process that are
currently being used in memory, with the rest on the disk.

Reference string: 1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2 (22 references),
with 4 page frames.

FIFO: 9 hits, 13 page faults. Hit ratio = 9/22 ≈ 0.41

LRU: 9 hits, 13 page faults. Hit ratio = 9/22 ≈ 0.41

Optimal: 11 hits, 11 page faults. Hit ratio = 11/22 = 0.50
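
A short C sketch (assumed here, not part of the original answer) that replays the reference string
with 4 frames under FIFO replacement and reports the hit ratio; LRU can be obtained by
additionally recording the last-use time of each frame and evicting the least recently used one:

#include <stdio.h>

int main(void)
{
    int ref[] = {1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[4] = {-1, -1, -1, -1};        /* 4 empty frames */
    int next = 0, hits = 0, misses = 0;      /* next: index of the oldest (FIFO victim) frame */

    for (int i = 0; i < n; i++) {
        int found = 0;
        for (int j = 0; j < 4; j++)
            if (frames[j] == ref[i]) { found = 1; break; }
        if (found) {
            hits++;
        } else {
            frames[next] = ref[i];           /* replace the oldest page */
            next = (next + 1) % 4;
            misses++;
        }
    }
    printf("FIFO: hits = %d, misses = %d, hit ratio = %.2f\n",
           hits, misses, (double)hits / n);  /* prints 9, 13, 0.41 for this string */
    return 0;
}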

Q6) Discuss Dining philosopher problem. 5M

Ans: The dining philosophers problem is a classic synchronization problem in which five
philosophers sit around a circular table and alternate between thinking and eating. A bowl of
noodles is placed at the center of the table, and five forks are laid out, one between each pair of
adjacent philosophers.

There are certain conditions a philosopher must follow:

1. A philosopher must use both their right and left forks to eat.
2. A philosopher can only eat if both of his or her immediate left and right forks are
available. If the philosopher's immediate left and right forks are not available, the
philosopher places their (either left or right) forks on the table and resumes thinking.

Let's look at the Dining Philosophers Problem with the code below. P0, P1, P2, P3, and P4
symbolize the five philosophers, whereas F0, F1, F2, F3, and F4 represent the five forks.

void philosopher(int i)
{
    while (1)
    {
        take_fork(i);              // pick up the left fork
        take_fork((i + 1) % 5);    // pick up the right fork

        // EATING THE NOODLES

        put_fork(i);               // put down the left fork
        put_fork((i + 1) % 5);     // put down the right fork

        // THINKING
    }
}

A solution to the Dining Philosopher's Problem is to use a semaphore to represent a fork.

What is a semaphore?

A semaphore is simply a non-negative variable that is shared between threads. It is a signaling
mechanism: a thread waiting on a semaphore can be signaled by another thread.

For process synchronization, it employs two atomic operations:

1) Wait and

2) Signal.

Depending on how it is configured, a semaphore either allows or denies access to the resource.

A fork can be picked up by performing a wait operation on the semaphore and released by
performing a signal operation on the semaphore.

The forks are represented as an array of semaphores, as shown below:

semaphore F[5];

Initially, all the elements F[0], F[1], F[2], F[3], and F[4] are set to 1, since all the forks are on
the table.

Let us now modify the above code by adding wait and signal operations:

Pseudo Code

void philosopher(int i)
{
    while (true)
    {
        wait(F[i]);                // pick up the left fork
        wait(F[(i + 1) % 5]);      // pick up the right fork

        // EATING THE NOODLE

        signal(F[i]);              // put down the left fork
        signal(F[(i + 1) % 5]);    // put down the right fork

        // THINKING
    }
}

Note that this simple solution can itself deadlock if all five philosophers pick up their left fork at
the same instant; a common refinement is to let one philosopher pick up the right fork first, or to
allow at most four philosophers to sit at the table at once.

Q7) Explain deadlock avoidance method.

Ans: Deadlock avoidance requires that each process declare in advance the maximum number of
resources of each type that it will need. The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that the system can never enter a circular-wait condition.
When a process requests a resource, the system must make sure that granting the request would
leave the system in a safe state. If the system is in a safe state, then no deadlock can occur;
however, if it is in an unsafe state, there is a possibility (not a certainty) of deadlock. The
avoidance approach requires knowledge of all processes, all the resources available, the resources
currently allocated, and the future requests of the processes. For a single instance of each resource
type we use the resource-allocation graph; for multiple instances of a resource type, we use the
Banker's algorithm. A major drawback of this method is that it is difficult to know in advance the
maximum resources each process will require.
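
A compact C sketch of the safety check that the Banker's algorithm runs before granting a
request; the matrices below are illustrative values chosen for this example, not data from any
question in this document:

#include <stdio.h>
#include <string.h>

#define P 3   /* number of processes      */
#define R 3   /* number of resource types */

/* Returns 1 if the state is safe, i.e. some ordering lets every process finish. */
static int is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int work[R], finish[P] = {0};
    memcpy(work, avail, sizeof(work));

    for (int done = 0; done < P; ) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* pretend Pi runs to completion   */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* and releases what it holds      */
                finish[i] = 1;
                done++;
                progressed = 1;
            }
        }
        if (!progressed) return 0;           /* no process can finish: unsafe   */
    }
    return 1;
}

int main(void)
{
    int avail[R]    = {3, 3, 2};
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2}};
    int need[P][R]  = {{1,2,2},{0,1,1},{4,3,1}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");  /* prints safe */
    return 0;
}

A request would first be provisionally added to Allocation and subtracted from Available and
Need; it is granted only if is_safe() still returns 1 afterwards.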
Q8) Differentiate between deadlock avoidance and prevention method. 2M and 5M

Ans:

 Deadlock prevention statically restricts how resource requests can be made so that at least
one of the four necessary conditions (mutual exclusion, hold and wait, no pre-emption,
circular wait) can never hold. Deadlock avoidance allows all four conditions but examines
every allocation dynamically so that the system never enters an unsafe state.

 Prevention needs no advance information about processes, whereas avoidance requires each
process to declare its maximum resource needs in advance.

 Prevention tends to give low device utilization and reduced system throughput; avoidance
(using the resource-allocation graph algorithm or the Banker's algorithm) gives better
utilization but adds run-time overhead on every request.

 With prevention a request may be refused even though granting it could never lead to
deadlock; with avoidance a request is granted whenever the resulting state is safe.

Q9) Given memory partitions of 150 KB, 500 KB, 200 KB, 300 KB, 550 KB (in order), how
would each of the first fit, best fit and worst fit algorithms place processes of 220 KB, 430 KB,
110 KB, 425 KB (in order)? Evaluate which algorithm makes the most efficient use of memory.
5M

Ans:

First Fit:
In first fit, each process is allocated the first partition (searching from the top of main memory)
that is large enough to hold it.
 220 KB is put in the 500 KB partition, leaving a 280 KB hole.
 430 KB is put in the 550 KB partition, leaving a 120 KB hole.
 110 KB is put in the 150 KB partition (the first hole large enough), leaving a 40 KB hole.
 425 KB must wait, because no remaining hole (40 KB, 280 KB, 200 KB, 300 KB, 120 KB)
is large enough.
First fit is the fastest algorithm because it searches as little as possible, but the leftover holes can
become too small to satisfy later, larger requests.

Best-fit:
Allocate each process to the smallest free partition that is large enough to hold it.
 220 KB is put in the 300 KB partition, leaving an 80 KB hole.
 430 KB is put in the 500 KB partition, leaving a 70 KB hole.
 110 KB is put in the 150 KB partition, leaving a 40 KB hole.
 425 KB is put in the 550 KB partition, leaving a 125 KB hole.
Memory utilization is better than first fit because the smallest sufficient hole is chosen, but the
search is slower and it tends to fill memory with tiny, useless holes.

Worst-fit:
Allocate each process to the largest free partition available in main memory.
 220 KB is put in the 550 KB partition, leaving a 330 KB hole.
 430 KB is put in the 500 KB partition, leaving a 70 KB hole.
 110 KB is put in the 330 KB hole (550 KB − 220 KB), leaving a 220 KB hole.
 425 KB must wait, because no remaining hole is large enough.
Worst fit reduces the rate at which tiny holes are produced, but if a process requiring a large
amount of memory arrives later it cannot be accommodated, since the largest hole has already
been split and occupied.

In this problem, the Best-Fit Algorithm makes the most efficient use of memory because it was
the only algorithm that met all the memory requests.
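
A rough C sketch (assumed, not from the original answer) of the first fit placement used above;
each request is placed in the first hole that is large enough, and the hole shrinks by the amount
allocated:

#include <stdio.h>

int main(void)
{
    int part[] = {150, 500, 200, 300, 550};          /* free space left in each partition (KB) */
    int proc[] = {220, 430, 110, 425};               /* process sizes (KB)                     */
    int nparts = 5, nprocs = 4;

    for (int i = 0; i < nprocs; i++) {
        int placed = -1;
        for (int j = 0; j < nparts; j++) {
            if (part[j] >= proc[i]) {                /* first partition that fits */
                part[j] -= proc[i];                  /* shrink the remaining hole */
                placed = j;
                break;
            }
        }
        if (placed >= 0)
            printf("%d KB -> partition %d\n", proc[i], placed);
        else
            printf("%d KB must wait (no hole large enough)\n", proc[i]);
    }
    return 0;
}

Replacing the inner search with "smallest sufficient hole" or "largest hole" turns the same loop
into best fit or worst fit respectively.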
Q10) Consider the following snapshot of the system. Determine, using the Banker's algorithm,
whether or not the system is in a safe state. 5M
Ans:

Q11) Explain Different file allocation methods. 5M

Ans: When a hard drive is formatted, it is divided into numerous small storage areas called
blocks. The operating system stores file data in these blocks using various file allocation
methods, which let the hard disk be used efficiently and files be accessed quickly.

Types of File Allocation Methods in Operating System

 Contiguous File Allocation


 Linked File Allocation
 Indexed File Allocation
 File Allocation Table (FAT)
 Inode

These are different file allocation methods used in computer systems to manage how data is
stored and accessed on storage devices. Here's an explanation of each:

1. Contiguous File Allocation

Description: In contiguous file allocation, a file is stored in a single contiguous run of blocks on
the storage device. The starting block and the length (in number of blocks) of the file are
recorded.

Advantages:

o Simple and easy to implement.


o Fast access to file data since all data is stored contiguously.

Disadvantages:

o Fragmentation: Over time, as files are created and deleted, there may not be
enough contiguous space available for large files.
o Difficulty in expanding files: If the file needs more space, finding a contiguous
block big enough can be challenging.

2. Linked File Allocation


Description: In linked file allocation, each file is stored as a linked list of blocks. Each block
contains a pointer to the next block in the file. The operating system maintains the address of the
first block of the file.

Advantages:

o Eliminates fragmentation since blocks can be scattered across the storage.


o Easy to expand files: New blocks can be linked to the end of the list as needed.

Disadvantages:

o Slower access to data since each block must be followed through the pointers to
find the next block.
o Overhead due to storing pointers within each block.

3. Indexed File Allocation

Description: In indexed file allocation, each file has an index block, which contains the
addresses of all the data blocks used by the file. The file's index block points to the data blocks,
eliminating the need for sequential search like in linked allocation.

Advantages:

o Allows for random access to file data (direct access).


o No fragmentation issues like in contiguous allocation.

Disadvantages:

o The index block itself can become a bottleneck if the file has many blocks,
requiring more storage space.
o If the file is large, multiple index blocks might be required.

4. File Allocation Table (FAT)

Description: The File Allocation Table (FAT) is a type of indexed allocation used in systems
like MS-DOS and older versions of Windows. It uses a table where each entry corresponds to a
block in the storage device. The table keeps track of the blocks used by a file, with each entry
containing the address of the next block in the file.

Advantages:

o Supports both contiguous and linked allocation strategies.


o Allows for fast file system operations such as allocation, deallocation, and
retrieval.

Disadvantages:
o The table can become very large for big files or large storage devices.
o Limited in terms of scalability due to its design, especially in modern systems.
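
To make the linked and indexed schemes above concrete, here is a tiny C sketch of how an
on-disk block might be laid out in each case; the 512-byte block size and the field names are
assumptions for illustration only:

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 512

/* Linked allocation: each data block carries a pointer to the next block
   of the file (a sentinel such as -1 marks end-of-file). */
struct linked_block {
    uint8_t data[BLOCK_SIZE - sizeof(int32_t)];
    int32_t next;                   /* block number of the next block in the file */
};

/* Indexed allocation: one index block holds the block numbers of all data
   blocks, so the i-th block of the file is reached directly via index[i]. */
struct index_block {
    int32_t index[BLOCK_SIZE / sizeof(int32_t)];   /* 128 entries with 512-byte blocks */
};

int main(void)
{
    printf("linked block payload: %zu bytes per block\n",
           sizeof(((struct linked_block *)0)->data));
    printf("index block capacity: %zu data blocks per index block\n",
           sizeof(((struct index_block *)0)->index) / sizeof(int32_t));
    return 0;
}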

Q12) Consider a disk drive with 200 cylinders numbered 0 to 199. The request queue has the
following composition: 55, 58, 39, 18, 90, 160, 150, 38, 184. If the current head position is 100,
compute the total distance (in cylinders) that the disk arm would move for each of the
following algorithms: FCFS (FIFO), SSTF, SCAN and C-SCAN.

Ans:

First Come First Serve (FCFS) Disk Scheduling

Total Head Movements

= (100 - 55) + (58 - 55) + (58 - 39) + (39 - 18) + (90 - 18) + (160 - 90) + (160 - 150) + (150 - 38)
+ (184 - 38)

= 45 + 3 + 19 + 21 + 72 + 70 + 10 + 112 + 146

= 498 Cylinders

Average Seek Length = 498/9 ≈ 55.33 cylinders

Shortest Seek Time First (SSTF) Disk Scheduling


Total Head Movements

= (100 - 90) + (90 - 58) + (58 - 55) + (55 - 39) + (39 - 38) + (38 - 18) + (150 - 18) + (160 - 150)
+ (184 - 160)

= 10 + 32 + 3 + 16 + 1 + 20 + 132 + 10 + 24

= 248 Cylinders

Average Seek Length = 248/9 ≈ 27.56 cylinders

SCAN (Elevator) Disk Scheduling

Total Head Movements

= (150 - 100) + (160 - 150) + (184 - 160) + (199 - 184) + (199 - 90) + (90 - 58) + (58 - 55) + (55
- 39) + (39 - 38) + (38 - 18)

= 50 + 10 + 24 + 15 + 109 + 32 + 3 + 16 + 1 + 20

= 280 Cylinders

OR

Total Head Movements = (199 - 100) + (199 - 18) = 99 + 181 = 280 Cylinders

Average Seek Length = 280/10 = 28 cylinders

C-SCAN (Circular SCAN) Disk Scheduling

Total Head Movements

= (150 - 100) + (160 - 150) + (184 - 160) + (199 - 184) + (199 - 0) + (18 - 0) + (38 - 18) + (39 -
38) + (55 - 39) + (58 - 55) + (90 - 58)

= 50 +10 + 24 + 15 + 199 + 18 + 20 + 1 + 16 + 3 + 32

= 388 Cylinders

OR

Total Head Movements = (199 - 100) + (199 - 0) + (90 - 0) = 99 + 199 + 90 = 388 Cylinders

Average Seek Length = 388/11 ≈ 35.27 cylinders


All of the above totals and average seek lengths indicate that the Shortest Seek Time First
(SSTF) disk scheduling algorithm is the best choice for this scenario, because it has the lowest
total head movement as well as the lowest average seek length of the four disk scheduling
algorithms.
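
A small C sketch (assumed, not part of the original answer) that reproduces the SSTF total of
248 cylinders by repeatedly servicing the pending request closest to the current head position:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int req[] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
    int n = sizeof(req) / sizeof(req[0]);
    int done[9] = {0};                              /* marks requests already serviced */
    int head = 100, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 1 << 30;
        for (int i = 0; i < n; i++) {               /* pick the closest pending request */
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (d < best_dist) { best_dist = d; best = i; }
        }
        total += best_dist;                         /* move the head there */
        head = req[best];
        done[best] = 1;
    }
    printf("SSTF total head movement = %d cylinders\n", total);   /* prints 248 */
    return 0;
}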

Q13) Write note on

1) Resource allocation graph


2) Readers and writer problem using semaphore
3) Inter-process Communication
4) Producer-Consumer Problem

Ans:

1) Resource Allocation Graph (RAG)

Definition: A Resource Allocation Graph (RAG) is a directed graph that models the allocation
of resources to processes in a system. It is used as a tool for detecting deadlocks, which occur
when processes are unable to proceed because they are waiting for each other to release
resources. In a RAG, processes are represented as nodes, and resources are represented as
separate nodes. Directed edges are used to show the relationships between processes and
resources: one type of edge indicates a process's request for a resource, while the other indicates
that a resource has been allocated to a process. By analyzing the graph, especially looking for
cycles, system administrators can detect the presence of deadlocks.

Components

o Processes (P): Represented by circles or nodes in the graph.

o Resources (R): Represented by squares or rectangles.

o Edges:

 Request Edge: From a process to a resource (P → R), indicating that the
process is requesting the resource.

 Assignment Edge: From a resource to a process (R → P), indicating that
the resource is assigned to the process.

Advantages:

Simple and effective for small systems.

Disadvantages:
Scalability issues in large systems with many resources and processes.

2) Readers and Writers Problem Using Semaphore

Definition: The Readers and Writers problem is a classic synchronization problem in which
multiple processes, categorized as readers and writers, access a shared resource. The challenge
arises because readers can access the resource simultaneously without interfering with each
other, but writers need exclusive access to the resource. The problem focuses on ensuring that
readers can read concurrently while writers have exclusive access when writing, while also
avoiding problems such as readers or writers being blocked indefinitely (starvation).

Semaphore Solution:

Semaphores and shared data:

 mutex (binary semaphore): protects the critical section in which the number of
active readers is modified.

 rw_mutex (binary semaphore): gives a writer exclusive access to the shared
resource; the first reader acquires it and the last reader releases it.

 read_count (shared integer): keeps track of how many readers are currently
accessing the shared resource.

Advantages:

Efficient synchronization with minimal overhead.

Disadvantages:

Potential starvation issues (e.g., readers continuously accessing the resource and preventing
writers from gaining access).
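
A standard sketch of the readers–writers solution with POSIX semaphores; the names mutex,
rw_mutex and read_count follow the usual textbook convention rather than code given in this
document:

#include <semaphore.h>
#include <pthread.h>

sem_t mutex;        /* protects read_count                                  */
sem_t rw_mutex;     /* gives writers exclusive access to the shared data    */
int   read_count = 0;

void *reader(void *arg)
{
    sem_wait(&mutex);
    if (++read_count == 1)        /* first reader locks out writers          */
        sem_wait(&rw_mutex);
    sem_post(&mutex);

    /* ... read the shared data (many readers may be here at once) ...       */

    sem_wait(&mutex);
    if (--read_count == 0)        /* last reader lets writers in again       */
        sem_post(&rw_mutex);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&rw_mutex);          /* writers need exclusive access           */
    /* ... write the shared data ...                                         */
    sem_post(&rw_mutex);
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&rw_mutex, 0, 1);
    /* threads running reader()/writer() would be created and joined here    */
    return 0;
}

In this variant readers have priority, which is exactly the writer-starvation risk mentioned above.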

3) Inter-Process Communication (IPC)

Definition: Inter-Process Communication (IPC) is a mechanism that allows processes to
communicate and exchange data with each other. Since processes in modern operating systems
often run in separate memory spaces, IPC is essential for processes that need to coordinate, share
data, or synchronize actions. IPC mechanisms help ensure that processes do not interfere with
each other, leading to better system performance, reliability, and coordination in multitasking
environments.

Methods of IPC:

1. Message Passing: This involves processes sending and receiving messages to
communicate. Messages can be sent either synchronously or asynchronously.
2. Shared Memory: In this method, processes share a region of memory for
communication.

3. Sockets: Sockets enable communication between processes over a network.

4. Signals: A simple form of IPC where one process sends a signal to another
process to notify it of an event or to request some action (e.g., SIGKILL or
SIGTERM).

Advantages:

Flexible and supports both low-level (e.g., shared memory) and high-level (e.g., message
passing) communication methods.

Disadvantages:

Overhead from synchronization mechanisms and potential for race conditions.
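
A minimal sketch of one message-passing style of IPC, a pipe between a parent and a child
process; the message text is purely illustrative:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                                  /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                         /* child: writes a message             */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                              /* parent: reads the message           */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}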

4) Producer-Consumer Problem

Definition: The Producer-Consumer problem is a classic synchronization problem that occurs
when processes (producers) generate data and place it into a shared buffer, while other processes
(consumers) retrieve and process the data. The problem lies in ensuring that the buffer does not
overflow (producers try to produce more than the buffer can hold) or underflow (consumers try
to consume data when the buffer is empty). Proper synchronization is necessary to ensure that
producers and consumers do not access the buffer simultaneously, leading to inconsistencies.

Semaphore Solution:

Semaphores:

 Full Semaphore: Keeps track of how many items are currently in the
buffer. It is initialized to 0 since the buffer starts empty.

 Empty Semaphore: Keeps track of how many empty slots are available
in the buffer. It is initialized to the size of the buffer.

 Mutex (binary semaphore): Ensures that only one process (producer or
consumer) can access the buffer at a time, preventing race conditions.

Advantages:

Efficient use of semaphores to synchronize and manage access.

Disadvantages:

Risk of deadlock if semaphores are not correctly managed.
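
A condensed C sketch of the bounded-buffer solution using the three semaphores described
above; the buffer size of 5 and the item values are assumptions for illustration:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5                                   /* buffer capacity */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;   /* how many free slots remain, starts at N     */
sem_t full_slots;    /* how many items are in the buffer, starts 0  */
sem_t mutex;         /* one thread in the buffer at a time          */

void *producer(void *arg)
{
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty_slots);               /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);                /* signal that an item is available */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);                /* wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);               /* signal that a slot is free again */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Swapping the order of sem_wait(&empty_slots) and sem_wait(&mutex) in the producer is the
classic mistake that produces the deadlock mentioned above.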
