Assignment#3
Q1: What is the meaning of the term busy waiting? What other kinds of waiting are
there in an operating system? Can busy waiting be avoided altogether?
Ans: In operating systems, besides busy waiting, there are other forms of waiting, such as:
Sleeping (Blocking)
The process is put into a sleep state, freeing up CPU resources until a specific event occurs. The
operating system then wakes up the process when the event is detected.
Polling
Similar to busy waiting, but instead of a continuous loop, the process checks the condition at regular
intervals, reducing CPU usage but potentially increasing response time.
Interrupt-driven Waiting
The process performs other tasks or remains idle until an interrupt signals that the condition has been
met. This is efficient as it allows the CPU to be used for other operations.
Busy waiting can often be avoided by using more efficient synchronization mechanisms, such as:
Mutexes and Semaphores
These allow processes to wait without consuming CPU resources until a condition is met.
Condition Variables
Processes can wait for certain conditions to become true, and the operating system manages
the transition between states (see the sketch after this list).
Interrupts
Using hardware or software interrupts to notify the process when a condition is met, allowing the CPU
to handle other tasks in the meantime.
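As a concrete illustration of blocking instead of spinning, here is a minimal POSIX
condition-variable sketch (identifiers such as ready and announce_event are illustrative):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
bool ready = false;

/* The waiter sleeps inside pthread_cond_wait instead of spinning;
   the OS wakes it only after the condition is signaled. */
void wait_for_event(void) {
    pthread_mutex_lock(&m);
    while (!ready)
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}

void announce_event(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
}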
However, in certain low-level hardware interactions or real-time systems, busy waiting might still be
used due to its simplicity and predictability. In such scenarios, the decision to use busy waiting depends
on the specific requirements and constraints of the system.
Q2: Show that, if the wait() and signal() semaphore operations are not
executed atomically, then mutual exclusion may be violated?
Ans: To demonstrate that mutual exclusion can be violated if the wait() and signal() semaphore
operations are not executed atomically, consider the following scenario involving two processes, P1 and
P2, and a binary semaphore S initialized to 1.
Scenario
Initial State: Semaphore S = 1.
P1 Executes wait(S):
P1 checks S (S = 1) and finds it positive, but is preempted before it can decrement S.
P2 Executes wait(S):
P2 checks S (S = 1), decrements S to 0, and enters its critical section.
P1 Resumes Execution:
P1 has already passed its check, so it decrements S and enters its critical section as well.
In this scenario, both P1 and P2 are in the critical section simultaneously, violating mutual
exclusion. The root cause is the non-atomic execution of the wait() operation: the check and
the decrement of S were interrupted, allowing both processes to enter the critical section. A
sketch of such a non-atomic wait() follows.
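The following illustrative C fragment (not a real semaphore implementation) makes the race
window visible by splitting wait() into its two steps:

/* Non-atomic wait(): the check and the decrement are separate steps. */
void wait_nonatomic(int *S) {
    while (*S <= 0)
        ;            /* step 1: check the semaphore value */
    /* a context switch here lets a second process pass the check too */
    (*S)--;          /* step 2: decrement the semaphore value */
}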
Explanation
Atomic wait() and signal() Operations
The wait() operation must check the semaphore value and decrement it without interruption. This
ensures that only one process can enter the critical section at a time.
Atomic signal() Operation
The signal() operation must increment the semaphore value atomically to ensure proper
synchronization.
If these operations are not atomic, race conditions can occur, leading to multiple processes accessing
the critical section simultaneously.
Conclusion
Atomicity of the wait() and signal() operations is crucial to ensure mutual exclusion in concurrent
programming. Without atomicity, race conditions can occur, resulting in multiple processes entering the
critical section at the same time, as demonstrated in the scenario above.
Q3: Illustrate how a binary semaphore can be used to implement mutual exclusion
among n processes?
Ans:
Initialization
Initialize the semaphore:
semaphore S = 1
Process Structure
Each process follows a specific structure to ensure mutual exclusion using the binary
semaphore.
Entry Section
Before entering the critical section, each process performs a wait(S) operation. If S = 1, the
process decrements it to 0 and proceeds; if S = 0, the process blocks until the semaphore is
released.
Critical Section
The critical section is the part of the code where the process accesses shared resources. Only
one process can execute this section at a time.
Exit Section
After exiting the critical section, each process performs a signal(S) operation. This operation
increments the semaphore value back to 1, allowing another blocked process (if any) to enter
the critical section.
wait(S)   { while (S <= 0) ; S--; }
signal(S) { S++; }

void process() {
    wait(S);       // entry section
    /* critical section */
    signal(S);     // exit section
    /* remainder section */
}
Explanation
Mutual Exclusion: The binary semaphore ensures that only one process can enter the critical
section at a time. When a process is in the critical section, the semaphore value is 0, blocking
other processes from entering.
Fairness: Provided the semaphore's waiting queue is FIFO, processes enter the critical
section in the order in which they performed the wait(S) operation.
By following this structure, n processes can safely access shared resources without causing
race conditions, ensuring mutual exclusion.
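A runnable POSIX sketch of this structure for five threads (sem_wait/sem_post play the
roles of wait(S)/signal(S); names and the thread count are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;   /* binary semaphore guarding the critical section */

void *process(void *arg) {
    long id = (long)arg;
    sem_wait(&S);                                      /* entry section */
    printf("process %ld in critical section\n", id);   /* critical section */
    sem_post(&S);                                      /* exit section */
    return NULL;                                       /* remainder section */
}

int main(void) {
    pthread_t t[5];
    sem_init(&S, 0, 1);                                /* initialize S = 1 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, process, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&S);
    return 0;
}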
Q4: Race conditions are possible in many computer systems. Consider a banking
system that maintains an account balance with two functions: deposit(amount)
and withdraw(amount). These two functions are passed the amount that is to
be deposited or withdrawn from the bank account balance. Assume that a
husband and wife share a bank account. Concurrently, the husband calls the
withdraw() function and the wife calls deposit(). Describe how a race condition
is possible and what might be done to prevent the race condition from
occurring?
Ans:
A race condition occurs when two or more threads (or processes) access shared data and try to
change it simultaneously. In this scenario:
Concurrent Operations:
The husband calls withdraw(100) and the wife calls deposit(100) concurrently.
Interleaved Execution:
Both operations read the current balance, say $500, at roughly the same time.
The husband's thread computes 500 − 100 = $400, while the wife's thread computes
500 + 100 = $600.
Whichever thread writes last overwrites the other's update, so the final balance is either
$400 or $600 instead of the correct $500.
To prevent race conditions, ensure that only one thread can modify the balance at a time by
using synchronization mechanisms:
Mutex Locks:
Use a mutex to lock the balance while a thread is performing a deposit or withdrawal
operation.
int balance;
mutex m;

void deposit(int amount) {
    lock(m);
    balance += amount;
    unlock(m);
}

void withdraw(int amount) {
    lock(m);
    balance -= amount;
    unlock(m);
}
Atomic Operations:
Use atomic operations provided by the programming language or hardware to ensure that reading and
updating the balance is done in a single, indivisible step.
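For example, a minimal sketch using C11 atomics (assuming the balance fits in an int;
identifiers are illustrative):

#include <stdatomic.h>

atomic_int balance;   /* shared account balance */

/* Each update is a single indivisible read-modify-write step. */
void deposit(int amount)  { atomic_fetch_add(&balance, amount); }
void withdraw(int amount) { atomic_fetch_sub(&balance, amount); }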
Database Transactions:
In a database context, use transactions to ensure that the series of operations (read-modify-write) are
executed as a single atomic unit.
By implementing one of these synchronization mechanisms, the system ensures that only one thread
can access and modify the balance at a time, preventing race conditions.
Q5: The first known correct software solution to the critical-section problem for
two processes was developed by Dekker. The two processes, P0 and P1, share
the following variables:
boolean flag[2]; /* initially false */
int turn;
The structure of process Pi (i == 0 or 1) is shown below; the other process is Pj
(j == 1 or 0). Prove that the algorithm satisfies all three requirements for the
critical-section problem?
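The structure referred to above is the standard Dekker loop, reproduced here as a sketch
since the exercise statement omits it:

do {
    flag[i] = true;               /* Pi wants to enter */
    while (flag[j]) {             /* Pj also wants to enter */
        if (turn == j) {
            flag[i] = false;      /* back off */
            while (turn == j)
                ;                 /* wait until it is Pi's turn */
            flag[i] = true;
        }
    }
    /* critical section */
    turn = j;                     /* give the turn away */
    flag[i] = false;
    /* remainder section */
} while (true);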
Ans: Mutual Exclusion:
A process Pi enters its critical section only after finding flag[j] == false, and the turn
variable breaks ties when both processes have set their flags: if it is P0's turn, P1 backs off,
and vice versa. Hence the two processes can never be in their critical sections at once.
Progress:
Only processes that want to enter the critical section participate in the decision, and the
decision cannot be postponed indefinitely: if Pj is not interested (flag[j] == false), Pi enters
immediately; if both are interested, the turn variable selects one of them in finite time.
Bounded Waiting:
After Pi expresses interest, it waits for at most one entry by Pj: when Pj leaves its critical
section it sets turn = i and flag[j] = false, guaranteeing that Pi enters next. The number of
times the other process may enter first is therefore bounded.
Q6: Explain why interrupts are not appropriate for implementing synchronization
primitives in multiprocessor systems?
Ans: Interrupts are not appropriate for implementing synchronization primitives in
multiprocessor systems for the following reasons:
Ineffectiveness Across Processors:
In a multiprocessor system, disabling interrupts on one processor does not prevent other
processors from executing critical sections simultaneously. This can lead to race conditions
because other processors can still modify shared data while one processor is in a critical
section.
Performance Degradation:
Disabling interrupts frequently can significantly degrade system performance. Since each
processor may handle interrupts independently, managing interrupts across multiple
processors leads to inefficiencies and increased overhead.
Scalability Issues:
As the number of processors increases, coordinating interrupts across all processors becomes
more complex and less efficient. This scalability issue makes interrupt-based synchronization
impractical for larger multiprocessor systems.
Increased Latency:
Disabling interrupts can increase the latency of handling important system events, leading to
delayed responses in a system where timely processing is crucial.
Q7: Describe how the compare and swap() instruction can be used to provide
mutual exclusion that satisfies the bounded-waiting requirement?
Ans : The compare and swap (CAS) instruction can be used to provide mutual exclusion and satisfy the
bounded-waiting requirement by following these steps:
Initialization:
A shared lock variable is initialized to 0 (unlocked):
int lock = 0;
Lock Acquisition:
Each process repeatedly executes the CAS instruction in a loop to attempt to acquire the lock:
while (true) {
    if (CAS(lock, 0, 1)) {
        break;   /* lock acquired; proceed to the critical section */
    }
}
The CAS operation atomically checks if lock is 0 (unlocked). If true, it sets lock to 1 (locked)
and returns true, indicating the lock has been acquired. Otherwise, it returns false, and the
process continues the loop.
Once the process acquires the lock, it can safely execute its critical section code.
After completing the critical section, the process releases the lock by setting lock back to 0:
lock = 0;
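A minimal runnable sketch of this basic acquire/release protocol, using C11's
atomic_compare_exchange_strong as the CAS (identifiers are illustrative):

#include <stdatomic.h>

atomic_int lock = 0;                 /* 0 = unlocked, 1 = locked */

void acquire(void) {
    int expected = 0;
    /* CAS: if lock == 0, atomically set it to 1; otherwise retry */
    while (!atomic_compare_exchange_strong(&lock, &expected, 1))
        expected = 0;                /* CAS overwrote expected on failure */
}

void release(void) {
    atomic_store(&lock, 0);          /* set lock back to 0 */
}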
Turn Assignment:
To satisfy bounded waiting, give each waiting process a turn number: before attempting to
acquire the lock, process i sets turn[i] to the current maximum turn value plus one.
Fair Waiting:
Each process waits until its turn[i] is the smallest among all processes waiting to enter the
critical section:
while (true) {
    while (lock != 0 || !isMyTurn(i))
        ;   /* wait until the lock is free and it is process i's turn */
    if (CAS(lock, 0, 1)) {
        break;   /* lock acquired */
    }
}
The isMyTurn(i) function checks if turn[i] is the smallest, ensuring that the waiting is bounded
and fair.
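One common realization of this turn-numbering idea is a ticket lock, sketched below with
C11 atomics. Note that it assigns turns with an atomic fetch-add rather than CAS, so it is a
variation on the scheme above, but it gives the same FIFO bound on waiting:

#include <stdatomic.h>

atomic_int next_ticket = 0;          /* next turn number to hand out */
atomic_int now_serving = 0;          /* turn number allowed to enter */

void acquire(void) {
    int my_turn = atomic_fetch_add(&next_ticket, 1);   /* take a number */
    while (atomic_load(&now_serving) != my_turn)
        ;                            /* spin until it is my turn */
}

void release(void) {
    atomic_fetch_add(&now_serving, 1);                 /* admit the next waiter */
}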
By using CAS with proper management of process order, mutual exclusion and bounded-waiting
can be achieved effectively.
Q8: Demonstrate that monitors and semaphores are equivalent insofar as they
can be used to implement solutions to the same types of synchronization
problems?
Ans : Monitors and semaphores can both be used to solve synchronization problems by ensuring
mutual exclusion and coordinating process execution.
Monitors:
A high-level abstraction that encapsulates shared variables, the procedures that operate on the
variables, and the synchronization between concurrent procedures.
Uses condition variables and procedures wait() and signal() to manage synchronization.
Example: A bounded buffer can be implemented using monitors with condition variables to
handle full and empty states.
Semaphores:
A lower-level synchronization primitive that uses integer values to control access to shared
resources.
Example: The same bounded buffer can be implemented using semaphores to signal full and
empty states and ensure mutual exclusion.
Both monitors and semaphores can achieve the same synchronization goals, such as mutual
exclusion and conditional synchronization. While monitors provide a structured and easier-to-
use approach, semaphores offer more flexibility at a lower abstraction level.
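To make the equivalence concrete in one direction, the sketch below builds a counting
semaphore out of a mutex and a condition variable, the same primitives a monitor provides
(POSIX names; the type and function names are illustrative):

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int             value;   /* the semaphore count */
} msem_t;

void msem_wait(msem_t *s) {
    pthread_mutex_lock(&s->m);
    while (s->value == 0)
        pthread_cond_wait(&s->cv, &s->m);   /* block, releasing the mutex */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void msem_signal(msem_t *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);            /* wake one waiter, if any */
    pthread_mutex_unlock(&s->m);
}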
Q9: Explain why spinlocks are not appropriate for single-processor systems yet
are often used in multiprocessor systems?
Ans:
Single-Processor Systems: On a single processor, a spinning process wastes the only CPU;
the lock holder cannot run to release the lock while the waiter is spinning, so the waiter must
eventually be descheduled anyway. Blocking the waiting process immediately is therefore
always better than spinning.
Efficiency in Multiprocessor Systems: In multiprocessor systems, while one processor holds
the spinlock, another processor can execute the spinning code, and locks are typically held
only briefly. Spinning then avoids the overhead of putting a process to sleep and waking it up
(two context switches), which makes spinlocks more efficient in a multiprocessor
environment when hold times are short.
Example: the bounded buffer mentioned in Q8, implemented as a monitor:

monitor BoundedBuffer {
    // Constants: MAX is the buffer capacity
    int buffer[MAX];
    int in = 0, out = 0;
    int count = 0;

    // Condition variables
    condition notFull, notEmpty;

    procedure insert(item) {
        if (count == MAX) {
            wait(notFull);       // buffer full: wait for a free slot
        }
        buffer[in] = item;
        in = (in + 1) % MAX;
        count++;
        signal(notEmpty);        // wake a waiting consumer, if any
    }

    procedure remove() {
        int item;
        if (count == 0) {
            wait(notEmpty);      // buffer empty: wait for an item
        }
        item = buffer[out];
        out = (out + 1) % MAX;
        count--;
        signal(notFull);         // wake a waiting producer, if any
        return item;
    }
}
Explanation:
Shared Variables:
in and out: Indices for inserting and removing items in the circular buffer.
count: The current number of items in the buffer.
Condition Variables:
notFull and notEmpty: Block producers while the buffer is full and consumers while it is
empty.
Insert Procedure:
If the buffer is full (count == MAX), the inserting thread waits on notFull; after inserting an
item, it signals notEmpty.
Remove Procedure:
If the buffer is empty (count == 0), the removing thread waits on notEmpty; after removing
an item, it signals notFull.
This algorithm ensures mutual exclusion and proper synchronization for a bounded buffer, with
the buffer embedded within the monitor itself.
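For comparison, and to back the claim in Q8 that the same buffer can be built with
semaphores, here is a POSIX sketch (identifiers and the capacity of 10 are illustrative;
buffer_init must run before any insert or remove):

#include <pthread.h>
#include <semaphore.h>

#define MAX 10

int buffer[MAX];
int in = 0, out = 0;

sem_t empty_slots;   /* counts free slots,   initialized to MAX */
sem_t full_slots;    /* counts filled slots, initialized to 0   */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {
    sem_init(&empty_slots, 0, MAX);
    sem_init(&full_slots, 0, 0);
}

void insert(int item) {
    sem_wait(&empty_slots);          /* block while the buffer is full */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;
    in = (in + 1) % MAX;
    pthread_mutex_unlock(&mutex);
    sem_post(&full_slots);           /* one more filled slot */
}

int remove_item(void) {
    sem_wait(&full_slots);           /* block while the buffer is empty */
    pthread_mutex_lock(&mutex);
    int item = buffer[out];
    out = (out + 1) % MAX;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty_slots);          /* one more free slot */
    return item;
}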