
Assignment#3

Name: Noman shafqat

Roll No: bsem-f20-145

Submitted to: Prof. Rafaqat Ali


Q1: What is the meaning of the term busy waiting? What other kinds of waiting
are there in an operating system? Can busy waiting be avoided altogether?
Explain your answer.
Ans : Busy waiting, also known as spinning or busy looping, is a process synchronization technique in
which a process repeatedly checks for a condition to be met while consuming CPU cycles without
performing any useful work. The process remains active and continuously checks the condition in a loop,
hence the term "busy".

Other Kinds of Waiting

In operating systems, besides busy waiting, there are other forms of waiting, such as:

Blocking (Sleep Waiting)

The process is put into a sleep state, freeing up CPU resources until a specific event occurs. The
operating system then wakes up the process when the event is detected.

Polling

Similar to busy waiting, but instead of a continuous loop, the process checks the condition at regular
intervals, reducing CPU usage but potentially increasing response time.

Interrupt-driven Waiting

The process performs other tasks or remains idle until an interrupt signals that the condition has been
met. This is efficient as it allows the CPU to be used for other operations.

Can Busy Waiting be Avoided Altogether?

Busy waiting can often be avoided by using more efficient synchronization mechanisms, such as:

Semaphores and Mutexes

These allow processes to wait without consuming CPU resources until a condition is met.

Event Flags or Condition Variables

Processes can wait for certain conditions to be true, and the operating system will manage the transition
between states.

Interrupts

Using hardware or software interrupts to notify the process when a condition is met, allowing the CPU
to handle other tasks in the meantime.
However, in certain low-level hardware interactions or real-time systems, busy waiting might still be
used due to its simplicity and predictability. In such scenarios, the decision to use busy waiting depends
on the specific requirements and constraints of the system.
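The difference between a busy loop (e.g. while not flag: pass) and blocking (sleep) waiting can be sketched in Python; threading.Event here stands in for the operating system's blocking-wait mechanism, and the names are illustrative:

```python
import threading
import time

ready = threading.Event()
results = []

def worker():
    # Blocking wait: the thread sleeps until the event is set,
    # consuming no CPU cycles in the meantime -- unlike a busy loop.
    ready.wait()
    results.append("done")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)   # simulate the condition becoming true some time later
ready.set()       # the "event" occurs; the OS wakes the waiting thread
t.join()
print(results)    # ['done']
```

The waiting thread is descheduled inside ready.wait(), which is exactly the blocking behavior described above.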

Q2 : Show that, if the wait () and signal () semaphore operations are not
executed atomically, then mutual exclusion may be violated?
Ans: To demonstrate that mutual exclusion can be violated if the wait() and signal() semaphore
operations are not executed atomically, consider the following scenario involving two processes, P1 and
P2, and a binary semaphore S initialized to 1.

Scenario
Initial State: Semaphore S = 1.

P1 Executes wait(S):

P1 checks S (S = 1).

Before P1 decrements S, it is preempted (switched out).

P2 Executes wait(S):

P2 checks S (S = 1).

P2 decrements S (S = 0) and enters the critical section.

P1 Resumes Execution:

P1 decrements S (S = -1) and also enters the critical section, since it already passed the check before being preempted and does not re-test S.

In this scenario, both P1 and P2 are in the critical section simultaneously, violating mutual exclusion. The
root cause is the non-atomic execution of the wait() operation. Here, the check and decrement of S
were interrupted, allowing both processes to enter the critical section.
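The interleaving above can be replayed deterministically as a small sketch, with plain sequential Python standing in for the scheduler's preemption:

```python
# Simulate the non-atomic wait(S) interleaving step by step.
S = 1
in_critical = []

# P1 executes only the *check* half of wait(S): it sees S == 1...
p1_saw_open = (S > 0)

# ...but is preempted before decrementing. P2 now runs a complete wait(S):
if S > 0:
    S -= 1                   # S becomes 0
    in_critical.append("P2") # P2 enters the critical section

# P1 resumes: it already passed its check, so it decrements and enters too.
if p1_saw_open:
    S -= 1                   # S becomes -1
    in_critical.append("P1") # P1 enters as well -- mutual exclusion violated

print(in_critical)           # ['P2', 'P1'] -- both in the critical section
print(S)                     # -1
```

Both processes end up inside the critical section at once, exactly because the check and the decrement were separated.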

Explanation
Atomic wait() and signal() Operations

For mutual exclusion to be guaranteed:

Atomic wait() Operation

The wait() operation must check the semaphore value and decrement it without interruption. This
ensures that only one process can enter the critical section at a time.
Atomic signal() Operation

The signal() operation must increment the semaphore value atomically to ensure proper
synchronization.

If these operations are not atomic, race conditions can occur, leading to multiple processes accessing
the critical section simultaneously.

Conclusion
Atomicity of the wait() and signal() operations is crucial to ensure mutual exclusion in concurrent
programming. Without atomicity, race conditions can occur, resulting in multiple processes entering the
critical section at the same time, as demonstrated in the scenario above.

Q3: Illustrate how a binary semaphore can be used to implement mutual exclusion among n processes?
Ans: A binary semaphore, which can take only the values 0 and 1, is an effective tool for ensuring mutual exclusion among multiple processes. Here is a step-by-step illustration of how a binary semaphore can be used to achieve mutual exclusion among n processes:

Initialization
Initialize the Semaphore:

A binary semaphore S is initialized to 1.

semaphore S = 1

Process Structure
Each process follows a specific structure to ensure mutual exclusion using the binary
semaphore.

Entering Critical Section


Wait (P) Operation:
Before entering the critical section, each process performs a wait(S) operation. This operation
decreases the value of the semaphore. If the semaphore value is 1, it is decremented to 0,
allowing the process to enter the critical section. If the semaphore value is 0, the process is
blocked until the semaphore value becomes 1.
wait(S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}

Critical Section
The critical section is the part of the code where the process accesses shared resources. Only
one process can execute this section at a time.

Exiting Critical Section


Signal (V) Operation:

After exiting the critical section, each process performs a signal(S) operation. This operation
increments the semaphore value to 1, allowing another blocked process (if any) to enter the
critical section.

signal(S) {
    S++;
}

Complete Process Code


The following is a complete example of how a process uses a binary semaphore for mutual
exclusion.

void process() {
    wait(S);        // enter critical section
    // critical section
    signal(S);      // exit critical section
    // remainder section
}

Explanation
Mutual Exclusion: The binary semaphore ensures that only one process can enter the critical
section at a time. When a process is in the critical section, the semaphore value is 0, blocking
other processes from entering.

Fairness: Whether processes enter the critical section in the order they performed the wait(S) operation depends on the semaphore implementation. A blocking semaphore with a FIFO wait queue gives first-come, first-served entry; the busy-wait version shown above makes no ordering guarantee.

By following this structure, n processes can safely access shared resources without causing race conditions, ensuring mutual exclusion.
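A minimal runnable sketch of this structure, using Python's threading.Semaphore as the binary semaphore (a blocking implementation, rather than the busy-wait pseudocode above):

```python
import threading

S = threading.Semaphore(1)   # binary semaphore, initialized to 1
counter = 0                  # the shared resource

def process():
    global counter
    for _ in range(10000):
        S.acquire()          # wait(S): enter critical section
        counter += 1         # critical section: access the shared resource
        S.release()          # signal(S): exit critical section

# n = 4 processes (threads) contend for the critical section.
threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 40000 -- no updates are lost
```

Without the semaphore, the interleaved read-modify-write of counter could lose updates; with it, every increment is applied.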
Q4: Race conditions are possible in many computer systems. Consider a banking
system that maintains an account balance with two functions: deposit(amount)
and withdraw(amount). These two functions are passed the amount that is to
be deposited or withdrawn from the bank account balance. Assume that a
husband and wife share a bank account. Concurrently, the husband calls the
withdraw () function and the wife calls deposit (). Describe how a race condition
is possible and what might be done to prevent the race condition from
occurring?
Ans:

A race condition occurs when two or more threads (or processes) access shared data and try to
change it simultaneously. In this scenario:

Concurrent Operations:

The husband calls withdraw(100) and the wife calls deposit(100) concurrently.

Interleaved Execution:

The system retrieves the current balance, say $500, for both operations simultaneously.

The husband’s thread subtracts $100, calculating a new balance of $400.

Before the husband’s thread updates the balance, the wife’s thread adds $100, calculating a new
balance of $600.

Each thread updates the balance, resulting in either $400 or $600, not the correct $500.

Prevention of Race Condition

To prevent race conditions, ensure that only one thread can modify the balance at a time by
using synchronization mechanisms:

Mutex (Mutual Exclusion):

Use a mutex to lock the balance while a thread is performing a deposit or withdrawal operation.

mutex m;

void deposit(int amount) {
    lock(m);
    balance += amount;
    unlock(m);
}

void withdraw(int amount) {
    lock(m);
    balance -= amount;
    unlock(m);
}

Atomic Operations:

Use atomic operations provided by the programming language or hardware to ensure that reading and
updating the balance is done in a single, indivisible step.

Database Transactions:

In a database context, use transactions to ensure that the series of operations (read-modify-write) are
executed as a single atomic unit.

By implementing one of these synchronization mechanisms, the system ensures that only one thread
can access and modify the balance at a time, preventing race conditions.
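The mutex solution can be sketched as runnable Python; the thread counts and amounts are illustrative:

```python
import threading

balance = 500
lock = threading.Lock()

def deposit(amount):
    global balance
    with lock:               # only one thread may update the balance at a time
        balance += amount

def withdraw(amount):
    global balance
    with lock:
        balance -= amount

def repeat(fn, amount, times):
    for _ in range(times):
        fn(amount)

# Husband withdraws $100 a thousand times while the wife deposits $100
# a thousand times, concurrently.
h = threading.Thread(target=repeat, args=(withdraw, 100, 1000))
w = threading.Thread(target=repeat, args=(deposit, 100, 1000))
h.start(); w.start()
h.join(); w.join()
print(balance)               # 500 -- deposits and withdrawals cancel exactly
```

Because each read-modify-write of balance happens entirely inside the lock, no interleaving can lose an update, and the balance always ends at the correct value.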

Q5: The first known correct software solution to the critical-section problem for
two processes was developed by Dekker. The two processes, P0 and P1, share
the following variables: Boolean flag [2]; /* initially false */ int turn; The
structure of process Pi (i == 0 or 1). The other process is Pj (j == 1 or 0). Prove that the algorithm satisfies all three requirements for the critical-section problem?
Ans : Mutual Exclusion:

Only one process can enter the critical section at a time.

This is ensured by the flag array together with the turn variable: each process raises its own flag before entering, and while the other process's flag is raised it defers according to turn. Since turn can favor only one process at a time, both processes can never complete the entry protocol simultaneously.

Progress:

Only processes that wish to enter the critical section take part in deciding which one enters next, and the decision cannot be postponed indefinitely.

If Pj is not interested (flag[j] is false), Pi enters immediately. The turn variable arbitrates only when both processes want to enter, and it always selects one of them, so some process always makes progress.

Bounded Waiting:

No process will be indefinitely delayed from entering its critical section.

Dekker's algorithm guarantees this because a process sets turn to the other process when it exits its critical section, so a waiting process is favored on the next contention and is bypassed at most once before it gains entry.
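For reference, the standard form of Dekker's entry and exit protocol can be sketched in Python. This relies on CPython's effectively sequentially consistent execution; on real hardware with weaker memory models the algorithm would additionally need memory barriers:

```python
import threading

flag = [False, False]   # flag[i]: process Pi wants to enter
turn = 0                # which process must yield when both want in
counter = 0             # shared resource protected by the algorithm
N = 5000

def dekker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        # Entry section
        flag[i] = True
        while flag[j]:
            if turn == j:
                flag[i] = False
                while turn == j:
                    pass            # busy wait until it is our turn
                flag[i] = True
        # Critical section
        counter += 1
        # Exit section
        turn = j                    # hand the turn to the other process
        flag[i] = False

t0 = threading.Thread(target=dekker, args=(0,))
t1 = threading.Thread(target=dekker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)   # 10000 -- mutual exclusion held for every increment
```

If mutual exclusion were ever violated, some increments of counter would be lost; the final count confirms all three properties in this run.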

Q6: Explain why interrupts are not appropriate for implementing synchronization primitives in multiprocessor systems?
Ans :

Interrupts are not appropriate for implementing synchronization primitives in multiprocessor systems
for the following reasons:

Disabling Interrupts on One Processor Does Not Affect Others:

In a multiprocessor system, disabling interrupts on one processor does not prevent other processors from executing critical sections simultaneously. This can lead to race conditions because other processors can still modify shared data while one processor is in a critical section.

Performance Degradation:
Disabling interrupts frequently can significantly degrade system performance. Since each processor may handle interrupts independently, managing interrupts across multiple processors can lead to inefficiencies and increased overhead.

Scalability Issues:

As the number of processors increases, coordinating interrupts across all processors becomes more complex and less efficient. This scalability issue makes interrupt-based synchronization impractical for larger multiprocessor systems.

Increased Latency:

Disabling interrupts can increase the latency of handling important system events, leading to delayed responses in a system where timely processing is crucial.

To implement synchronization primitives effectively in multiprocessor systems, mechanisms such as locks, semaphores, and atomic operations (e.g., compare-and-swap) are preferred, as they provide more efficient and scalable solutions for ensuring mutual exclusion and synchronization.

Q7: Describe how the compare and swap () instruction can be used to provide
mutual exclusion that satisfies the bounded-waiting requirement?
Ans : The compare and swap (CAS) instruction can be used to provide mutual exclusion and satisfy the
bounded-waiting requirement by following these steps:

Initialization:

Use a shared lock variable (lock) initialized to 0 (unlocked).

Enter Critical Section:

Each process repeatedly executes the CAS instruction in a loop to attempt to acquire the lock:

while (true) {
    while (lock != 0)
        ;                    // busy wait while the lock is held
    if (CAS(lock, 0, 1))
        break;               // acquired the lock
}

The CAS operation atomically checks whether lock is 0 (unlocked). If so, it sets lock to 1 (locked) and returns true, indicating the lock has been acquired; otherwise it returns false and the process continues the loop.

Critical Section Execution:

Once the process acquires the lock, it can safely execute its critical section code.

Exit Critical Section:

After completing the critical section, the process releases the lock by setting lock back to 0:

lock = 0;

Satisfying Bounded-Waiting Requirement


To satisfy the bounded-waiting requirement, ensure that each process gets a fair chance to
enter the critical section within a bounded number of attempts. This can be achieved by
maintaining a queue or counter to track waiting processes and manage their order of entry.
Here's a simple approach:

Use a Turn Array:

Maintain an array turn[i] for each process i to indicate its order.

Before attempting to acquire the lock, set turn[i] to the current maximum turn value plus one.

Fair Waiting:

Each process waits until its turn[i] is the smallest among all processes waiting to enter the
critical section:

while (true) {
    while (lock != 0 || !isMyTurn(i))
        ;                    // wait until the lock is free and it is my turn
    if (CAS(lock, 0, 1))
        break;               // acquired the lock
}

The isMyTurn(i) function checks if turn[i] is the smallest, ensuring that the waiting is bounded
and fair.

By using CAS with proper management of process order, mutual exclusion and bounded-waiting
can be achieved effectively.
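The CAS spinlock can be sketched in Python. Since Python exposes no hardware CAS instruction, an internal lock emulates the atomicity guarantee here; that emulation is an assumption of this example, not part of the pseudocode above:

```python
import threading

class AtomicInt:
    """Emulates an atomic compare-and-swap. Python has no native CAS,
    so an internal lock stands in for the hardware atomicity guarantee."""
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        # Atomically: if value == expected, set value = new and return True.
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

lock = AtomicInt(0)   # 0 = unlocked, 1 = locked
counter = 0

def worker():
    global counter
    for _ in range(2000):
        while not lock.cas(0, 1):   # spin until CAS(lock, 0, 1) succeeds
            pass
        counter += 1                # critical section
        lock.value = 0              # release: a plain store suffices here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 8000 -- every critical section executed exclusively
```

This sketch shows only the mutual-exclusion half; the bounded-waiting extension would layer the turn-tracking described above on top of the same CAS loop.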

Q8: Demonstrate that monitors and semaphores are equivalent insofar as they
can be used to implement solutions to the same types of synchronization
problems?

Ans : Monitors and semaphores can both be used to solve synchronization problems by ensuring
mutual exclusion and coordinating process execution.

Monitors:

A high-level abstraction that encapsulates shared variables, the procedures that operate on the
variables, and the synchronization between concurrent procedures.

Uses condition variables and procedures wait() and signal() to manage synchronization.

Example: A bounded buffer can be implemented using monitors with condition variables to
handle full and empty states.

Semaphores:

A lower-level synchronization primitive that uses integer values to control access to shared
resources.

Provides two atomic operations: wait() (P) and signal() (V).

Example: The same bounded buffer can be implemented using semaphores to signal full and
empty states and ensure mutual exclusion.

Both monitors and semaphores can achieve the same synchronization goals, such as mutual
exclusion and conditional synchronization. While monitors provide a structured and easier-to-
use approach, semaphores offer more flexibility at a lower abstraction level.
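One direction of the equivalence can be illustrated by building a counting semaphore out of monitor primitives (a mutual-exclusion lock plus a condition variable); the reverse construction, a monitor built from semaphores, is equally standard. A minimal Python sketch:

```python
import threading

class MonitorSemaphore:
    """A counting semaphore built from monitor primitives:
    a mutual-exclusion lock plus one condition variable."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()   # lock + condition variable

    def wait(self):              # the P operation
        with self._cond:
            while self._value == 0:
                self._cond.wait()            # sleep until signalled
            self._value -= 1

    def signal(self):            # the V operation
        with self._cond:
            self._value += 1
            self._cond.notify()              # wake one waiting process

S = MonitorSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(5000):
        S.wait()
        counter += 1             # critical section
        S.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 15000 -- the monitor-built semaphore enforces exclusion
```

Any problem solvable with a semaphore can therefore be solved with a monitor, and vice versa, which is the sense in which the two are equivalent.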
Q9: Explain why spinlocks are not appropriate for single-processor systems yet
are often used in multiprocessor systems?

Ans : Spinlocks are not appropriate for single-processor systems because:


CPU Utilization: In a single-processor system, while one process holds the spinlock, another process trying to acquire it keeps spinning, consuming CPU cycles without doing useful work. Worse, on a single processor the spinning process prevents the lock holder from running at all, so the lock cannot be released until the spinner is preempted, wasting an entire scheduling quantum.

Spinlocks are often used in multiprocessor systems because:

Efficiency: In a multiprocessor system, the process holding the spinlock can run on one processor while a waiting process spins on another, so short critical sections are typically released quickly. Spinning briefly then avoids the overhead of putting the waiting process to sleep and waking it up (two context switches), which makes spinlocks more efficient in a multiprocessor environment.

Q10: Design an algorithm for a bounded-buffer monitor in which the buffers (portions) are embedded within the monitor itself?
Ans : Here is an algorithm for a bounded-buffer monitor where the buffer is embedded within
the monitor itself:

monitor BoundedBuffer {

    // Constants
    const int MAX = 10;

    // Shared buffer and counters
    int buffer[MAX];
    int count = 0, in = 0, out = 0;

    // Condition variables
    condition notFull, notEmpty;

    // Insert an item into the buffer
    procedure insert(int item) {
        if (count == MAX)
            wait(notFull);       // wait until the buffer is not full
        buffer[in] = item;
        in = (in + 1) % MAX;
        count++;
        signal(notEmpty);        // signal that the buffer is not empty
    }

    // Remove an item from the buffer
    procedure remove() {
        int item;
        if (count == 0)
            wait(notEmpty);      // wait until the buffer is not empty
        item = buffer[out];
        out = (out + 1) % MAX;
        count--;
        signal(notFull);         // signal that the buffer is not full
        return item;
    }
}
Explanation:

Shared Variables:

buffer[MAX]: The buffer array to hold items.

count: The current number of items in the buffer.

in and out: Indices for inserting and removing items in the circular buffer.

Condition Variables:

notFull: Used to wait if the buffer is full.

notEmpty: Used to wait if the buffer is empty.

Insert Procedure:

If the buffer is full (count == MAX), the inserting thread waits on notFull.

Inserts the item into the buffer at the in index.

Updates the in index and count.

Signals notEmpty to indicate the buffer is not empty anymore.

Remove Procedure:

If the buffer is empty (count == 0), the removing thread waits on notEmpty.

Removes the item from the buffer at the out index.

Updates the out index and count.

Signals notFull to indicate the buffer is not full anymore.

This algorithm ensures mutual exclusion and proper synchronization for a bounded buffer, with
the buffer embedded within the monitor itself.
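A runnable counterpart of this monitor can be sketched in Python with threading.Condition. Note that Python conditions have Mesa (signal-and-continue) semantics, so these waits use while where Hoare-style monitor pseudocode can use if; the attribute name inp replaces in, which is a Python keyword:

```python
import threading

class BoundedBuffer:
    """Bounded buffer with the storage embedded in the monitor object."""
    MAX = 10

    def __init__(self):
        self.buffer = [None] * self.MAX
        self.count = self.inp = self.out = 0
        self.lock = threading.Lock()                    # the monitor lock
        self.not_full = threading.Condition(self.lock)  # condition variables
        self.not_empty = threading.Condition(self.lock) # sharing that lock

    def insert(self, item):
        with self.lock:
            while self.count == self.MAX:
                self.not_full.wait()         # wait until the buffer is not full
            self.buffer[self.inp] = item
            self.inp = (self.inp + 1) % self.MAX
            self.count += 1
            self.not_empty.notify()          # buffer is no longer empty

    def remove(self):
        with self.lock:
            while self.count == 0:
                self.not_empty.wait()        # wait until the buffer is not empty
            item = self.buffer[self.out]
            self.out = (self.out + 1) % self.MAX
            self.count -= 1
            self.not_full.notify()           # buffer is no longer full
            return item

bb = BoundedBuffer()
consumed = []

def producer():
    for i in range(100):
        bb.insert(i)

def consumer():
    for _ in range(100):
        consumed.append(bb.remove())

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(100)))   # True: items arrive in FIFO order
```

With one producer and one consumer the circular buffer preserves insertion order, and the conditions block each side whenever the ten-slot buffer is full or empty.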
