OS Mid-2 Longs Unit-3 Only Theory Answers
Semaphores and Shared Variables Used:
1. mutex: A binary semaphore that ensures mutual
exclusion when readers update the shared read_count
variable.
2. rw_mutex: A binary semaphore that gives writers (and
the first reader) exclusive access to the shared resource.
3. read_count: An integer variable (not a semaphore) that
tracks the number of active readers.
Algorithm Explanation:
Readers:
Entry:
o Acquire the mutex semaphore to update the
read_count variable safely.
o Increment read_count to register a new reader.
o If this is the first reader (i.e., read_count == 1),
acquire the rw_mutex to prevent writers from
accessing the shared resource.
o Release the mutex so other readers can proceed.
Exit:
o Acquire the mutex semaphore to safely update the
read_count variable.
o Decrement read_count. If this was the last reader
(i.e., read_count == 0), release the rw_mutex to allow
writers to access the shared resource.
o Release the mutex.
Writers:
Entry:
o Acquire the rw_mutex semaphore to ensure
exclusive access to the shared resource.
Exit:
o Release the rw_mutex semaphore to allow other
writers or readers to access the shared resource.
semaphore mutex = 1;      // Protects read_count
semaphore rw_mutex = 1;   // Writers' exclusive access
int read_count = 0;       // Number of active readers

// Reader Process
Reader() {
    while (true) {
        wait(mutex);              // Enter critical section for read_count
        read_count++;             // One more active reader
        if (read_count == 1)
            wait(rw_mutex);       // First reader locks out writers
        signal(mutex);            // Exit critical section

        // ... read the shared resource ...

        wait(mutex);              // Re-enter critical section
        read_count--;             // One reader leaves
        if (read_count == 0)
            signal(rw_mutex);     // Last reader lets writers back in
        signal(mutex);
    }
}

// Writer Process
Writer() {
    while (true) {
        wait(rw_mutex);           // Wait for exclusive access to the resource

        // ... write to the shared resource ...

        signal(rw_mutex);         // Release exclusive access
    }
}
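The pseudocode above can be tried out directly. Below is a minimal runnable sketch using Python's threading.Semaphore; the names shared, reader, and writer are illustrative for the demo, not part of the original algorithm.

```python
import threading

mutex = threading.Semaphore(1)     # protects read_count
rw_mutex = threading.Semaphore(1)  # writers' exclusive access
read_count = 0
shared = []                        # the shared resource (a list, for the demo)

def reader(results):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        rw_mutex.acquire()         # first reader locks out writers
    mutex.release()

    results.append(list(shared))   # read the shared resource

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        rw_mutex.release()         # last reader lets writers back in
    mutex.release()

def writer(item):
    rw_mutex.acquire()             # exclusive access
    shared.append(item)            # write to the shared resource
    rw_mutex.release()

results = []
threads = [threading.Thread(target=writer, args=(1,)),
           threading.Thread(target=reader, args=(results,)),
           threading.Thread(target=writer, args=(2,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))              # [1, 2]
```

Note that this variant can starve writers: as long as new readers keep arriving, read_count never returns to 0 and rw_mutex is never released.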
Dining Philosophers Solution:
semaphore fork[5] = {1, 1, 1, 1, 1}; // One fork between each pair
semaphore mutex = 1;                 // Only one philosopher picks up forks at a time

void philosopher(int i) {
    while (true) {
        think();                     // Philosopher is thinking
        wait(mutex);                 // Enter critical section to pick up forks
        wait(fork[i]);               // Pick up left fork
        wait(fork[(i + 1) % 5]);     // Pick up right fork
        signal(mutex);               // Exit critical section; others may pick up forks
        eat();                       // Philosopher is eating
        signal(fork[i]);             // Put down left fork
        signal(fork[(i + 1) % 5]);   // Put down right fork
    }
}
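The fork-pickup scheme above can be sketched as a runnable demo. The extra mutex makes fork pickup one-at-a-time, which prevents the circular wait that causes deadlock; meals stands in for eat() here so the demo terminates (both names are illustrative).

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]
mutex = threading.Semaphore(1)   # only one philosopher picks up forks at a time
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):      # think() omitted for brevity
        mutex.acquire()          # enter critical section to pick up forks
        forks[i].acquire()               # pick up left fork
        forks[(i + 1) % N].acquire()     # pick up right fork
        mutex.release()          # others may now pick up forks
        meals[i] += 1            # stands in for eat()
        forks[i].release()               # put down left fork
        forks[(i + 1) % N].release()     # put down right fork

threads = [threading.Thread(target=philosopher, args=(i, 3)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)                     # [3, 3, 3, 3, 3]
```

Every philosopher finishes all rounds: since only one philosopher can be in the pickup section at a time, no cycle of philosophers each holding one fork can form.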
Deadlock Recovery
When a deadlock is detected, the system can recover in one
of two ways: process termination or resource preemption.
1. Process Termination
The process termination method involves aborting processes
involved in the deadlock, either by terminating one or more
of the processes completely or by rolling them back to a safe
state. This method is based on the idea that if processes
involved in a deadlock are terminated or rolled back, the
system can break the cycle and free up resources for the
remaining processes.
Subtypes of Process Termination
Terminate all processes involved in the deadlock:
o This approach is the most drastic. All the processes
in the deadlock cycle are terminated. By
terminating these processes, the resources they
hold are released, and the deadlock cycle is broken.
o Advantages:
Simple to implement and ensures that all
processes involved in deadlock are removed
from the cycle.
No need to track the order of resource
allocation or waiting.
o Disadvantages:
High overhead because all processes involved
in deadlock are lost.
Potential for significant loss of work, especially
if processes have performed a lot of work and
are terminated midway.
Terminate only one process at a time:
o This approach terminates deadlocked processes one
at a time, re-running deadlock detection after each
termination until the cycle is broken. The system
may weigh the priority, cost, or progress of each
process to decide which one to terminate (the
"victim").
o Advantages:
More controlled than terminating all
processes. Some processes may continue to
execute if they are not part of the deadlock.
Less work is lost compared to terminating all
processes.
o Disadvantages:
The system must carefully determine which
processes to terminate, which can be complex.
The terminated processes will need to be
restarted or rolled back, potentially leading to
delays and inefficiencies.
Considerations:
When processes are terminated, resources are freed,
and other processes can proceed. However, there might
be rollback mechanisms involved if the processes have
executed some actions that need to be undone (e.g.,
transactions).
Process termination often causes wasted CPU time and
resource wastage.
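The "one process at a time" strategy above needs a victim-selection rule. A hypothetical sketch, assuming each process is described by a (pid, priority, work_done) tuple (these fields are illustrative, not a standard OS interface):

```python
# Hypothetical victim selection: among the deadlocked processes, terminate
# the lowest-priority one, breaking ties by least completed work so the
# smallest amount of work is lost.
def choose_victim(deadlocked):
    victim = min(deadlocked, key=lambda p: (p[1], p[2]))
    return victim[0]             # pid of the process to terminate

procs = [(1, 2, 500),            # pid 1: priority 2, 500 units of work done
         (2, 1, 100),            # pid 2: priority 1, 100 units of work done
         (3, 1, 900)]            # pid 3: priority 1, 900 units of work done
print(choose_victim(procs))      # 2
```

Real systems fold more factors into this cost (resources held, whether the process is interactive or batch, how many more resources it needs).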
2. Resource Preemption
Resource preemption involves taking resources away from
processes involved in the deadlock cycle and assigning them
to other processes in order to break the cycle. The
preempted processes may be restarted later when the
required resources become available.
How Resource Preemption Works
Identify processes involved in deadlock:
o The system needs to determine which processes
are involved in the deadlock cycle. This could be
achieved using techniques like wait-for graphs or
resource allocation graphs.
Preempt resources from processes:
o Once the deadlock processes are identified, the
system preempts (takes away) one or more
resources from a process and assigns it to another
process that is waiting for resources.
Rollback or restart preempted processes:
o The preempted processes are either rolled back (if
they have made some progress that needs to be
undone) or suspended and later resumed once the
required resources are available.
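The first step above, identifying the deadlocked processes, amounts to finding a cycle in the wait-for graph. A minimal depth-first-search sketch (the dict-of-lists graph encoding is an assumption for illustration, where an edge u -> v means process u waits for process v):

```python
# Find a deadlock cycle in a wait-for graph via depth-first search.
def find_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}
    stack = []                            # current DFS path

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in wait_for.get(u, []):
            if color.get(v, WHITE) == GRAY:      # back edge: cycle found
                return stack[stack.index(v):]
            if color.get(v, WHITE) == WHITE:
                cycle = dfs(v)
                if cycle:
                    return cycle
        color[u] = BLACK
        stack.pop()
        return None

    for p in list(wait_for):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock cycle.
graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(find_cycle(graph))          # ['P1', 'P2', 'P3']
```

The processes on the returned cycle are the candidates from which the system preempts resources (or selects a victim to roll back).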
Subtypes of Resource Preemption
Preempt the smallest number of resources:
o Preempting only the minimum necessary resources
can help reduce the impact on the system while
breaking the deadlock. This can ensure that
deadlock is resolved with minimal disruption.
Rollback processes:
o The system may also choose to roll back a process
that is holding a critical resource and is causing a
deadlock. This allows the system to restart that
process and attempt a different approach to
resource allocation.
Advantages of Resource Preemption:
Less work lost: Since not all processes are terminated,
less work is lost compared to the process termination
approach. Only the preempted processes may need to
restart or roll back.
Controlled resource allocation: Resources are freed for
use by other processes, allowing the system to recover
from deadlock without wasting all resources.
Disadvantages of Resource Preemption:
Overhead in preemption: Preempting resources is
expensive in terms of system overhead. The system
must track the resources held by processes, decide
which resources to preempt, and manage the process
states.
Rollback or re-execution overhead: If a process has
already executed some work, rolling back or restarting it
might result in overhead as the work must be undone
and redone.
Potential starvation: There is a risk that the same
process is repeatedly chosen as a victim and never gets
enough resources to complete, leaving it indefinitely
postponed. A common safeguard is to include the
number of rollbacks in the victim-selection cost, so a
process can be preempted only a bounded number of
times.
Considerations for Resource Preemption:
When preempting resources, it is important to ensure
that the resources are not held in such a way that
preemption could lead to inconsistent states (e.g., in
database systems). Special attention must be given to
the state of the system when preempting resources to
avoid inconsistencies.