Unit 2 Os Updated

Process synchronization is essential in computer systems to ensure that multiple processes access shared resources without interference, preventing issues like data inconsistency and deadlocks. Techniques such as semaphores, mutex locks, and critical sections are employed to manage access and maintain data integrity. The document also discusses the critical section problem, race conditions, and synchronization mechanisms necessary for efficient multi-process operations.

Process Synchronization is used in a computer system to ensure that multiple processes or threads can run concurrently without interfering with each other.
The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering with
each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are
used.
In a multi-process system, synchronization is necessary to ensure
data consistency and integrity, and to avoid the risk of deadlocks and
other synchronization problems. Process synchronization is an
important aspect of modern operating systems, and it plays a crucial
role in ensuring the correct and efficient functioning of multi-process
systems.
On the basis of synchronization, processes are categorized as one of
the following two types:
 Independent Process: The execution of one process does
not affect the execution of other processes.
 Cooperative Process: A process that can affect or be
affected by other processes executing in the system.
The process synchronization problem arises in the case of cooperative
processes, because cooperative processes share resources.
Process Synchronization
Process Synchronization is the coordination of execution of multiple
processes in a multi-process system to ensure that they access
shared resources in a controlled and predictable manner. It aims to
resolve the problem of race conditions and other synchronization
issues in a concurrent system.
Lack of synchronization in an inter-process communication environment
leads to the following problems:
1. Inconsistency: When two or more processes access shared
data at the same time without proper synchronization. This
can lead to conflicting changes, where one process’s update
is overwritten by another, causing the data to become
unreliable and incorrect.
2. Loss of Data: Loss of data occurs when multiple processes
try to write or modify the same shared resource without
coordination. If one process overwrites the data before
another process finishes, important information can be lost,
leading to incomplete or corrupted data.
3. Deadlock: Lack of Synchronization leads to Deadlock which
means that two or more processes get stuck, each waiting for
the other to release a resource. Because none of the
processes can continue, the system becomes unresponsive
and none of the processes can complete their tasks.
Types of Process Synchronization
The two primary types of process synchronization in an operating
system are:
1. Competitive: Two or more processes are said to be in
Competitive Synchronization if and only if they compete for
the accessibility of a shared resource.
Lack of Synchronization among Competing process may
lead to either Inconsistency or Data loss.
2. Cooperative: Two or more processes are said to be in
Cooperative Synchronization if and only if they get affected
by each other i.e. execution of one process affects the other
process.
Lack of Synchronization among Cooperating
process may lead to Deadlock.
Example:
Consider the following Linux command pipeline:
$ ps | grep "chrome" | wc -l
 The ps command produces the list of processes running in Linux.
 The grep command filters the lines of ps output that contain "chrome".
 The wc -l command counts the lines in the output of grep.
Therefore, three processes are created which are ps, grep and wc.
grep takes input from ps and wc takes input from grep.
From this example, we can understand the concept of cooperative
processes, where some processes produce and others consume, and
thus work together. This type of problem must be handled by the
operating system, as it is the manager.
Conditions That Require Process
Synchronization
1. Critical Section: It is that part of the program where shared
resources are accessed. Only one process can execute the
critical section at a given point of time. If there are no shared
resources, then no need of synchronization mechanisms.
2. Race Condition: It is a situation wherein processes are
trying to access the critical section and the final result
depends on the order in which they finish their update.
Process synchronization mechanisms need to ensure that
instructions are executed in the required order.
3. Preemption: Preemption is when the operating system
stops a running process to give the CPU to another process.
This allows the system to make sure that important tasks get
enough CPU time. This is important as mainly issues arise
when a process has not finished its job on shared resource
and got preempted. The other process might end up reading
an inconsistent value if process synchronization is not done.
What is Race Condition?
A race condition is a situation that may occur inside a critical section.
This happens when the result of multiple process/thread execution in
the critical section differs according to the order in which the threads
execute. Race conditions in critical sections can be avoided if the
critical section is treated as an atomic instruction. Also, proper thread
synchronization using locks or atomic variables can prevent race
conditions.
Let us consider the following example.
 There is a shared variable balance with value 100.
 There are two processes, deposit(10) and withdraw(10). The deposit
process does balance = balance + 10 and the withdraw process does
balance = balance - 10.
 Suppose these processes run in an interleaved manner. The
deposit() fetches the balance as 100, then gets preempted.
 Now withdraw() gets scheduled and makes the balance 90.
 Finally deposit() is rescheduled and sets the value to 110. This
value is not correct, as the balance after both operations should
be 100.
We can now see that running the segments of the two processes in
different orders would give different values of balance.
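The interleaving above can be fixed by making each read-modify-write atomic. Here is a minimal sketch in C using a POSIX mutex; the function and variable names are illustrative, not from the original notes:

```c
#include <pthread.h>

long balance = 100;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Each update is a read-modify-write; the mutex makes it atomic. */
void *deposit(void *arg) {
    pthread_mutex_lock(&m);    /* enter critical section */
    long tmp = balance;        /* fetch */
    balance = tmp + 10;        /* update and write back */
    pthread_mutex_unlock(&m);  /* exit critical section */
    return NULL;
}

void *withdraw(void *arg) {
    pthread_mutex_lock(&m);
    long tmp = balance;
    balance = tmp - 10;
    pthread_mutex_unlock(&m);
    return NULL;
}

long run_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return balance;            /* 100 in either interleaving */
}
```

Because the mutex serializes the two critical sections, run_demo() returns 100 no matter which thread is scheduled first.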

Critical Section Problem - Notes for Students


1. Introduction

The Critical Section Problem occurs in concurrent programming when multiple processes or
threads try to access shared resources simultaneously. The goal is to ensure safe and
synchronized access to avoid race conditions and data inconsistency.

2. Structure of a Process

A process can be divided into four parts:

1. Entry Section: Requests access to the critical section.


2. Critical Section: The part of the code that accesses shared resources.
3. Exit Section: Releases access after completing execution.
4. Remainder Section: The remaining part of the process that does not require shared
resources.

3. Conditions for a Solution

A solution to the critical section problem must satisfy the following conditions:

1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, other processes should be able to enter.
3. Bounded Waiting: No process should wait indefinitely to enter the critical section.
4. Real-Time Example: Online Ticket Booking System

Imagine you are booking a movie ticket online. At the same time, another person is also trying to
book the last available seat. If there is no proper synchronization, both users may book the same
seat, causing issues.

Scenario

 Assume there is only one seat left in the theater.


 Two users (Process A and Process B) try to book that last seat simultaneously.

Steps in the Process

1. Entry Section:
o Both Process A and Process B check seat availability at the same time.
o Both processes see one available seat.
2. Critical Section (Race Condition Occurs):
o Process A selects the seat and proceeds to payment.
o Process B also selects the same seat and proceeds to payment simultaneously.
o Since both processes are in the critical section without synchronization, both
assume the seat is available.
3. Exit Section (Incorrect Behavior Occurs):
o If there is no proper synchronization, both transactions may be processed.
o The system overbooks the seat, resulting in double booking.
o This creates a conflict, requiring manual intervention or cancellation.
4. Remainder Section:
o After booking, the processes move to other tasks.

5. Problems Caused by the Critical Section Issue


1. Data Inconsistency
o The seat was booked twice, leading to errors in the booking system.
2. Financial Loss or Customer Dissatisfaction
o One of the customers may have to be refunded or denied entry.
3. System Crash or Overload
o If this happens frequently, the system may fail due to incorrect seat allocations.

6. Solution: Synchronization Mechanisms


To prevent such issues, we must ensure that only one process can enter the critical section at a
time. Solutions include:

1. Locks (Mutex)

 When Process A starts booking, the system locks the seat.


 Process B must wait until Process A completes the transaction.
 Once Process A finishes, Process B will see the updated seat availability.

2. Semaphores

 A counter-based synchronization mechanism that controls the number of available


resources.
 In our case, the counter is 1 (one seat available).
 If one process decrements the counter (books the seat), another process cannot proceed.

3. Atomic Transactions

 Ensures that checking, selecting, and booking happen as one indivisible operation.
 Example: Using a database transaction, ensuring that:
o If two users try to book the same seat, only one transaction succeeds.

4. Monitors

 A higher-level synchronization construct that manages multiple threads accessing shared


resources safely.

7. Corrected Online Ticket Booking Process


To fix the problem, the system should follow proper synchronization:

1. Entry Section:
o The system locks the seat when a user selects it.
2. Critical Section:
o The first user to proceed with payment gets the seat.
o The second user gets a "Seat Unavailable" message.
3. Exit Section:
o The system releases the lock after confirming the booking.
4. Remainder Section:
o The process continues for other users.

Unlock and Lock Algorithm: A Detailed Explanation


1. What is the Unlock and Lock Algorithm?

The Unlock and Lock Algorithm is a hardware-based process synchronization


mechanism that ensures mutual exclusion while allowing fair access to the critical section. It
uses TestAndSet to regulate the value of lock and introduces a waiting[i] flag for each
process to track if it has been waiting for access.

Unlike basic Test-And-Set (TAS) Locks, where a process directly releases the lock, this
algorithm checks a queue to find the next process that should enter the critical section. If no
process is waiting, the lock is set to false, allowing any new process to enter.

2. Key Features of the Unlock and Lock Algorithm


✅ Ensures Mutual Exclusion – Only one process accesses the critical section at a time.
✅ Fair Scheduling – Uses a circular queue to select the next process in line.
✅ Avoids Starvation – No process is indefinitely denied access.
✅ Reduces Wasted CPU Time – Unlike busy-waiting approaches, waiting processes are
efficiently managed.

3. Unlock and Lock Algorithm (Pseudocode)


// Shared variable lock initialized to false
// Per-process waiting[i] flag and local key initialized

boolean lock = false;
boolean key;
boolean waiting[n];

while (1) {
    waiting[i] = true;
    key = true;

    // Process waits until it can acquire the lock
    while (waiting[i] && key)
        key = TestAndSet(lock);

    waiting[i] = false; // Process enters the critical section

    // ------ Critical Section ------

    j = (i + 1) % n;

    // Look for the next waiting process in circular order
    while (j != i && !waiting[j])
        j = (j + 1) % n;

    if (j == i)
        lock = false;       // No process is waiting, release lock
    else
        waiting[j] = false; // Allow the next process to proceed

    // ------ Remainder Section ------
}

4. Explanation of the Algorithm


(A) Locking Mechanism (Entry Section)

1. The process i marks itself as waiting (waiting[i] = true).


2. It sets key = true and enters a while-loop to acquire the lock.
3. The TestAndSet(lock) function atomically:
o Returns the old value of lock.
o Sets lock = true (if it was previously false).
o If lock was already true, the process keeps spinning.
4. When the process successfully acquires the lock, it sets waiting[i] = false and enters
the critical section.
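The TestAndSet primitive described above can be sketched with C11 atomics: atomic_exchange returns the old value and stores the new one in a single atomic step. The helper names here are our own, illustrative choices:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* TestAndSet: atomically set *lock to true and return its old value. */
bool test_and_set(atomic_bool *lock) {
    return atomic_exchange(lock, true);
}

/* Spin until the old value was false, i.e. we flipped it free -> held. */
void spin_acquire(atomic_bool *lock) {
    while (test_and_set(lock))
        ;  /* old value was true: someone else holds it, keep spinning */
}

void spin_release(atomic_bool *lock) {
    atomic_store(lock, false);
}
```

The first caller of test_and_set sees false (lock was free) and enters; every later caller sees true and keeps spinning until release.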

(B) Critical Section Execution

 The process executes its critical section safely.


 Other processes remain in the waiting state.

(C) Unlocking Mechanism (Exit Section)

1. The process searches for the next waiting process in a circular queue (j = (i+1) % n).
2. It checks for a waiting process from j to n and again from 0 to i-1.
3. If a waiting process is found, it sets waiting[j] = false, allowing it to enter.
4. If no process is waiting, it sets lock = false, making the critical section available for
any future process.
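The circular scan in the exit section can be isolated as a small helper. This sketch (the function name is ours) returns the index of the next waiting process after i, or i itself when nobody is waiting:

```c
#include <stdbool.h>

/* Scan j = i+1, i+2, ... (mod n) for a waiting process.
   Returns i when no other process is waiting, which tells the
   caller to set lock = false instead of handing the lock on. */
int next_waiting(int i, int n, const bool waiting[]) {
    int j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    return j;
}
```

For example, with n = 4, i = 0 and only process 2 waiting, the scan visits 1 then 2 and stops at 2; with nobody waiting it wraps all the way around and returns i.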

5. Real-Time Example: Managing a Shared Printer in an


Office
Scenario
 Multiple employees in an office need to print documents using a shared printer.
 If two employees send print requests at the same time, mutual exclusion is required.
 The Unlock and Lock Algorithm ensures fair access and prevents lost print jobs.

How Unlock and Lock Algorithm Works in This Case?

1. Employees send print requests (waiting[i] = true): each employee
marks themselves as waiting for the printer.
2. The printer lock is checked using TestAndSet (while (waiting[i] &&
key) key = TestAndSet(lock);): the first employee to check and set
the lock gets the printer.
3. The employee prints the document, the critical section
(waiting[i] = false;): the employee now prints the document.
4. The next waiting employee is checked in the queue (j = (i+1) % n;):
the system searches for the next employee waiting to print.
5. The printer is assigned to the next employee (waiting[j] = false;):
the next employee in the queue gets access to print.
6. If no one is waiting, the printer becomes free (lock = false;): if
no requests are waiting, the printer is now available.

6. Why Use the Unlock and Lock Algorithm?


✅ Solves Synchronization Issues – Prevents two employees from printing at the same time.
✅ Fair Access – Ensures that everyone gets a turn in order.
✅ Avoids Deadlock – If an employee finishes printing, the next in queue is selected.
✅ Efficient Resource Management – The printer is always assigned to a waiting
employee rather than remaining idle.

7. Other Real-Life Applications

 ATM Withdrawals: Ensuring multiple users do not access the same ATM machine at
the same time.
 Database Transactions: Ensuring multiple users do not update the same database record
simultaneously.
 Traffic Signals: Managing the order of vehicles at a busy intersection to avoid collisions.
8. Conclusion
The Unlock and Lock Algorithm is an efficient hardware synchronization technique that
ensures mutual exclusion, fairness, and bounded waiting. It is widely used in operating
systems, databases, and multi-threaded applications to prevent race conditions and ensure
orderly execution of processes.

Mutex Lock in Operating System


1. What is a Mutex Lock?
A Mutex (Mutual Exclusion) Lock is a synchronization primitive used in operating systems
to prevent multiple processes or threads from accessing a shared resource simultaneously.

It ensures mutual exclusion, meaning that only one process can hold the lock at a time. If
another process tries to acquire the lock while it is already held, it will be blocked until the lock
is released.

2. Why Use a Mutex Lock?


Mutex locks are used to prevent race conditions when multiple processes or threads try to
modify shared data simultaneously. Without a mutex lock, data inconsistency and corruption can
occur.

Example Problem Without Mutex

Imagine two threads (T1 and T2) trying to withdraw money from a shared bank account with
an initial balance of ₹10,000:

Time               Thread 1 (Withdraw ₹5000)    Thread 2 (Withdraw ₹4000)    Final Balance (Incorrect!)
T1: Read balance   Reads ₹10,000                                             ₹10,000
T2: Read balance                                Reads ₹10,000                ₹10,000
T1: Deduct ₹5000   Updates balance = ₹5,000                                  ₹5,000
T2: Deduct ₹4000                                Updates balance = ₹6,000     ₹6,000 (Wrong!)

💡 Issue: The final balance should be ₹1,000, but due to simultaneous execution without
synchronization, it is incorrectly updated to ₹6,000.

3. How Mutex Locks Solve This Problem?


A mutex lock ensures that only one thread can access the shared resource at a time,
preventing race conditions.

How Mutex Works?

1. A thread must acquire the mutex lock before entering the critical section.
2. If another thread tries to acquire the lock while it is held, it must wait.
3. Once the thread completes its work, it releases the lock, allowing another waiting
thread to proceed.

Pseudocode:

Mutex lock = 0; // 0 means available, 1 means locked

void acquire_lock() {
    while (TestAndSet(&lock) == 1); // Wait until lock is free
}

void release_lock() {
    lock = 0; // Unlock
}

void critical_section() {
    acquire_lock();  // Enter critical section
    // Perform operations
    release_lock();  // Exit critical section
}

Step-by-Step Explanation
1. Initializing the Mutex Lock
Mutex lock = 0;

 lock = 0 → The lock is available (no thread is using the critical section).
 lock = 1 → The lock is held by a thread (critical section is occupied).

2. Acquiring the Lock (acquire_lock())


void acquire_lock() {
    while (TestAndSet(&lock) == 1); // Wait until lock is free
}

 TestAndSet() is an atomic operation that:


1. Reads the current value of lock.
2. Sets lock = 1 (locking it).
3. Returns the old value of lock.
 The loop ensures that if another process is already inside the critical section, the current
process will keep waiting until the lock becomes 0.

🔹 Example:

Process Initial lock Value TestAndSet(lock) New lock Value Outcome


P1 0 (Free) Returns 0, sets lock = 1 1 (Locked) P1 enters
P2 1 (Locked) Returns 1 (Wait) 1 (Locked) P2 waits

3. Critical Section Execution


void critical_section() {
    acquire_lock();  // Enter critical section
    // Perform operations
    release_lock();  // Exit critical section
}

 The process executes important operations in the critical section.


 Example: Updating a shared bank balance, writing to a file, etc.

4. Releasing the Lock (release_lock())


void release_lock() {
    lock = 0; // Unlock
}

 After the process finishes its work, it sets lock = 0 so that other processes can enter the
critical section.

🔹 Example:

Process Initial lock Value Releases Lock? New lock Value Next Process?
P1 1 (Locked) Yes (Sets lock = 0) 0 (Free) P2 can enter

Real-World Example: Shared Printer in an Office


Scenario

 Multiple employees in an office want to print documents using a single shared printer.
 If two employees try to send a print job at the same time, documents may get mixed
up.

How Mutex Works in This Case?

1. Employee A wants to print → Acquires the mutex lock.


2. While Employee A is printing, Employee B cannot access the printer (mutex ensures
mutual exclusion).
3. After Employee A finishes printing, they release the mutex lock.
4. Employee B can now acquire the mutex and start printing.

✅ Prevents multiple print jobs from overlapping.


✅ Ensures fairness by serving one request at a time.

Semaphores in Operating Systems

What is a Semaphore?

A semaphore is a synchronization primitive used to control access to shared resources in a


concurrent system such as a multithreading or multiprocessing environment. Semaphores are
used to signal between processes or threads, ensuring that they cooperate without conflicting.

There are two main types of semaphores:

1. Binary Semaphore (also known as a Mutex)


2. Counting Semaphore
1. Binary Semaphore:

A binary semaphore is a type of semaphore that can have only two values:

 0: Resource is unavailable (locked).


 1: Resource is available (unlocked).

It is often used to implement mutual exclusion (mutex) for synchronizing access to critical
sections in a system.

Operations for Binary Semaphore:

 P (wait or down operation):


If the semaphore is 1 (resource available), it becomes 0 (locks the resource). If it is
already 0, the process waits until it becomes 1 again.
 V (signal or up operation):
If the semaphore is 0 (resource locked), it becomes 1 (unlocks the resource), allowing
other processes to proceed.
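Under POSIX (our choice of API; the notes above use abstract P/V), a binary semaphore initialized to 1 provides mutual exclusion: sem_wait is P and sem_post is V. In this sketch, two threads increment a shared counter and no update is lost:

```c
#include <pthread.h>
#include <semaphore.h>

sem_t bin;        /* binary semaphore: 1 = free, 0 = locked */
long counter = 0;

void *worker(void *arg) {
    for (int k = 0; k < 100000; k++) {
        sem_wait(&bin);   /* P: take the lock (1 -> 0) */
        counter++;        /* critical section */
        sem_post(&bin);   /* V: release the lock (0 -> 1) */
    }
    return NULL;
}

long run_counter_demo(void) {
    counter = 0;
    sem_init(&bin, 0, 1);          /* binary: initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&bin);
    return counter;                /* 200000: no lost updates */
}
```

Without the sem_wait/sem_post pair around counter++, interleaved read-modify-write cycles would make the final count unpredictable and usually below 200000.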

2. Counting Semaphore:

A counting semaphore can have any integer value, allowing it to represent a count of available
resources. It is useful when managing a pool of identical resources (e.g., a fixed number of
printers or connections).

Operations for Counting Semaphore:

 P (wait or down operation):


Decreases the semaphore's value. If the value is greater than 0, it is decremented. If the
value is 0, the process waits until the value becomes greater than 0.
 V (signal or up operation):
Increases the semaphore's value, indicating that a resource is now available.

Semaphore Operations in Detail

1. P Operation (Wait or Down):


o P(semaphore):
This operation decreases the value of the semaphore by 1. If the value is less
than or equal to 0, the process waits. If the value is greater than 0, it
is decremented, and the process proceeds.
2. V Operation (Signal or Up):
o V(semaphore):
This operation increases the value of the semaphore by 1. If the value was
previously 0, this may allow a waiting process to proceed.

Semaphore Algorithm Pseudocode:

Let’s consider a counting semaphore S initialized to a certain value, representing the number of
available resources.

Semaphore Initialization:

Semaphore S = N; // N is the number of resources (e.g., printers, connections)

P Operation (Wait/Acquire):

P(S) {
    while (S <= 0) {
        // No resources available; the process is blocked here
    }
    S = S - 1; // Decrease the semaphore value (a resource is being used)
}

V Operation (Signal/Release):

V(S) {
    S = S + 1; // Increase the semaphore value (a resource is released)
}

Explanation of the Algorithm:

1. Semaphore Initialization: The semaphore S is initialized with a value N, which


represents the number of available resources. For example, if there are 3 printers
available, S = 3.
2. P (Wait) Operation:
o The P operation is called when a process (e.g., a user or task) wants to access a
resource (e.g., a printer or database connection).
o If the semaphore value is greater than 0 (resources are available), the value is
decremented (the process consumes one resource).
o If the semaphore value is 0 (no resources available), the process will
be blocked until the semaphore value becomes greater than 0 (i.e., when a
resource is released).
3. V (Signal) Operation:
o The V operation is called when a process finishes using a resource and releases
it back into the system.
o The semaphore value is incremented, allowing other processes to access the
resource.
4. Blocking and Waiting:
o When there are more processes than resources, the processes will block and wait
until a resource becomes available. Once a resource is released, a waiting process
can continue by acquiring the resource.

Real-Time Example:

Let’s consider a real-time scenario where multiple users need to access a shared printer in an
office. There are only 3 printers available, but multiple users want to use them at the same time.

Scenario:

 Shared Resource: Printers (3 printers available)


 Semaphore Value (S): 3 (indicating that 3 printers are available)
 Processes (Users): User A, User B, User C, User D (4 users)

Semaphore Initialization:

Semaphore printers = 3; // 3 printers available

Steps:

1. User A requests a printer:


o P(printers) is called.
o Semaphore printers = 3, so the process proceeds.
o printers = 2 (one printer is now in use by User A).
2. User B requests a printer:
o P(printers) is called.
o Semaphore printers = 2, so the process proceeds.
o printers = 1 (one more printer is now in use by User B).
3. User C requests a printer:
o P(printers) is called.
o Semaphore printers = 1, so the process proceeds.
o printers = 0 (one more printer is now in use by User C).
4. User D requests a printer:
o P(printers) is called.
o Semaphore printers = 0, so User D has to wait until a printer becomes
available.
o User D is blocked (waiting for a printer).
5. User A finishes printing and releases the printer:
o V(printers) is called.
o Semaphore printers = 1 (User A releases the printer).
o User D can now proceed and acquire the printer.
6. User D proceeds to print:
o P(printers) is called.
o Semaphore printers = 1, so the process proceeds.
o printers = 0 (User D is now using the printer).

Result:

 At any point in time, only 3 users can print, and User D was blocked until a printer
became available. The semaphore helps synchronize access to the shared printers.
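The four-user walk-through can be mirrored with a POSIX counting semaphore (our choice of API; sem_trywait stands in for User D being blocked rather than actually sleeping):

```c
#include <semaphore.h>

/* Returns 1 if the scenario behaves as described: the 4th P() would
   block, and after one V() a printer is available again. */
int printer_demo(void) {
    sem_t printers;
    sem_init(&printers, 0, 3);                        /* 3 printers */
    sem_wait(&printers);                              /* User A: 3 -> 2 */
    sem_wait(&printers);                              /* User B: 2 -> 1 */
    sem_wait(&printers);                              /* User C: 1 -> 0 */
    int d_must_wait = (sem_trywait(&printers) != 0);  /* User D: blocked */
    sem_post(&printers);                              /* A releases: 0 -> 1 */
    int value;
    sem_getvalue(&printers, &value);                  /* 1: D can proceed */
    sem_destroy(&printers);
    return d_must_wait && value == 1;
}
```

sem_trywait fails immediately instead of blocking, which lets the sketch check "User D has to wait" without a second thread.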

Bounded Buffer Problem in Process Synchronization

The Bounded Buffer Problem, also known as the Producer-Consumer Problem, is a classical
synchronization problem in operating systems. It involves two types of processes:

1. Producer – Produces data and places it into a shared buffer.


2. Consumer – Consumes data from the shared buffer.

Since the buffer has a fixed size (bounded buffer), synchronization is needed to ensure that:

 The producer does not add data when the buffer is full.
 The consumer does not remove data when the buffer is empty.
 Multiple producers and consumers do not access the buffer simultaneously in an unsafe
way.

Solution Using Semaphores

We use three semaphores to solve the problem:

1. Mutex – Ensures mutual exclusion while accessing the buffer.


2. Full – Keeps track of the number of full slots in the buffer.
3. Empty – Keeps track of the number of empty slots in the buffer.
Initialization:

 mutex = 1 → Ensures only one process accesses the buffer at a time.


 full = 0 → Initially, the buffer is empty.
 empty = N → The buffer has N empty slots.

Producer Process:

do {
    // Produce an item
    wait(empty); // Decrement empty slots
    wait(mutex); // Enter critical section

    // Add the item to the buffer
    buffer[in] = item;
    in = (in + 1) % N;

    signal(mutex); // Exit critical section
    signal(full);  // Increment full slots
} while (true);

Step-by-Step Explanation of Producer Code



Step 1: Wait for an Empty Slot (wait(empty))


wait(empty);

 The producer checks if there is space in the buffer before adding an item.
 empty keeps track of available slots, so wait(empty):
o Decrements empty by 1.
o Blocks the producer if empty == 0 (i.e., buffer is full).
Step 2: Acquire Mutex Lock (wait(mutex))
wait(mutex);

 Ensures mutual exclusion so only one process (either a producer or a consumer) can
modify the buffer at a time.
 Prevents race conditions, where multiple processes may try to modify the buffer
simultaneously.

Step 3: Add Item to the Buffer


buffer[in] = item;
in = (in + 1) % N;

 The produced item is placed in the buffer at index in.


 The in index is updated using circular buffer logic:
o (in + 1) % N ensures that if in reaches the last index, it wraps around to the
start (0).
o This prevents overflow and allows continuous usage of buffer slots.

Step 4: Release Mutex Lock (signal(mutex))


signal(mutex);

 Unlocks the buffer, allowing other producers or consumers to access it.


 This step ensures only one process modifies the buffer at a time.

Step 5: Signal Full Slot (signal(full))


signal(full);

 Increments full by 1 to indicate that an item has been added.


 If the consumer was waiting for an item (wait(full)), it can now proceed.

Example Execution
Let's assume a buffer of size N = 5 and in = 0 initially.

Step  Action                     Buffer State (N=5)        in  empty  full
1     Producer waits(empty)      (buffer empty)            0   4      0
2     Producer waits(mutex)      Lock acquired             0   4      0
3     Producer adds item A       [A, _, _, _, _]           1   4      0
4     Producer signals(mutex)    Lock released             1   4      0
5     Producer signals(full)     Consumer can now proceed  1   4      1

Consumer Process:

do {
    wait(full);  // Decrement full slots
    wait(mutex); // Enter critical section

    // Remove the item from the buffer
    item = buffer[out];
    out = (out + 1) % N;

    signal(mutex);  // Exit critical section
    signal(empty);  // Increment empty slots

    // Consume the item
    consume_item(item);
} while (true);

Step-by-Step Explanation of Consumer Code



Step 1: Wait for a Full Slot (wait(full))


wait(full);

 The consumer checks if there is at least one item in the buffer before removing it.
 full keeps track of occupied slots, so wait(full):
o Decrements full by 1.
o Blocks the consumer if full == 0 (i.e., buffer is empty).

Step 2: Acquire Mutex Lock (wait(mutex))


wait(mutex);

 Ensures mutual exclusion so only one process (producer or consumer) modifies the
buffer at a time.
 Prevents race conditions, where multiple processes may try to remove items
simultaneously.

Step 3: Remove Item from the Buffer


item = buffer[out];
out = (out + 1) % N;

 The item is removed from the buffer at index out.


 The out index is updated using circular buffer logic:
o (out + 1) % N ensures that if out reaches the last index, it wraps around to 0.
o This prevents buffer overflow and allows continuous usage of buffer slots.

Step 4: Release Mutex Lock (signal(mutex))


signal(mutex);

 Unlocks the buffer, allowing other consumers or producers to access it.


 This step ensures only one process modifies the buffer at a time.

Step 5: Signal an Empty Slot (signal(empty))


signal(empty);

 Increments empty by 1 to indicate that an item has been removed.


 If a producer was waiting (wait(empty)), it can now proceed.

Step 6: Consume the Item


consume_item(item);

 The consumer processes the item it removed from the buffer.


 This could be printing data, writing to a file, or any other task.

Example Execution
Let's assume a buffer of size N = 5 that initially holds two items, A and B (so full = 2, empty = 3), with out = 0.

Step  Action                      Buffer State (N=5)          out  empty  full
1     Consumer waits(full)        [A, B, _, _, _]             0    3      1
2     Consumer waits(mutex)       Lock acquired               0    3      1
3     Consumer removes item A     [_, B, _, _, _]             1    3      1
4     Consumer signals(mutex)     Lock released               1    3      1
5     Consumer signals(empty)     Producer can now proceed    1    4      1
6     Consumer processes item A   Processing...               1    4      1

Key Takeaways
✔ Prevents buffer underflow – Consumer waits if the buffer is empty (full == 0).
✔ Ensures mutual exclusion – Only one process modifies the buffer at a time (mutex).
✔ Supports multiple producers and consumers – Uses full and empty to coordinate
operations.
✔ Uses a circular buffer – Avoids wasted space by wrapping out back to 0 after the last slot, via (out + 1) % N.

Explanation of Synchronization:

1. Mutex Lock (mutex):


o Ensures only one producer or consumer accesses the buffer at a time.
2. Full Semaphore (full):
o Keeps track of the number of filled buffer slots.
o The consumer waits if full == 0 (i.e., the buffer is empty).
o The producer signals full after adding an item.
3. Empty Semaphore (empty):
o Keeps track of the number of empty buffer slots.
o The producer waits if empty == 0 (i.e., the buffer is full).
o The consumer signals empty after removing an item.

How It Works:

 When a producer wants to add an item:


o It waits if the buffer is full (wait(empty)).
o Locks the buffer (wait(mutex)) and adds the item.
o Unlocks the buffer (signal(mutex)) and increases the full count
(signal(full)).
 When a consumer wants to remove an item:
o It waits if the buffer is empty (wait(full)).
o Locks the buffer (wait(mutex)) and removes the item.
o Unlocks the buffer (signal(mutex)) and increases the empty count
(signal(empty)).

Key Features of the Solution:

✅ Prevents race conditions – The mutex semaphore ensures that only one process modifies the
buffer at a time.
✅ Avoids deadlock – The proper ordering of wait and signal operations prevents processes
from getting stuck.
✅ Efficient usage of buffer – Items are added and removed in a circular manner using (in + 1)
% N and (out + 1) % N.

Real-Life Example:

A print spooler system works like a bounded buffer:

 Users (producers) send print jobs to a shared printer queue (buffer).


 The printer (consumer) takes print jobs from the queue and prints them.
 If the queue is full, new print jobs must wait.
 If the queue is empty, the printer waits for a job
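The full producer-consumer scheme described above can be sketched as a runnable Python program, with threading.Semaphore standing in for the wait/signal pseudocode. The buffer size and item count here are demo assumptions, not part of the original notes:

```python
import threading

N = 5        # buffer capacity, as in the notes
ITEMS = 20   # total items to transfer (demo assumption)

buffer = [None] * N
in_idx = 0
out_idx = 0
mutex = threading.Semaphore(1)   # binary semaphore: mutual exclusion
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
consumed = []

def producer():
    global in_idx
    for item in range(ITEMS):
        empty.acquire()              # wait(empty): block if buffer is full
        mutex.acquire()              # wait(mutex)
        buffer[in_idx] = item        # add the item
        in_idx = (in_idx + 1) % N
        mutex.release()              # signal(mutex)
        full.release()               # signal(full)

def consumer():
    global out_idx
    for _ in range(ITEMS):
        full.acquire()               # wait(full): block if buffer is empty
        mutex.acquire()              # wait(mutex)
        item = buffer[out_idx]       # remove the item
        out_idx = (out_idx + 1) % N
        mutex.release()              # signal(mutex)
        empty.release()              # signal(empty)
        consumed.append(item)        # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: FIFO order preserved
```

Because the semaphores enforce both bounds and mutual exclusion, the consumer always receives the items in the order the producer added them, no matter how the threads interleave.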

Readers-Writers Problem in Process Synchronization
The Readers-Writers Problem is a classic synchronization problem that deals with multiple
processes accessing a shared resource (e.g., a database or a file).

1. Understanding the Problem

 Readers (R): Can read the shared resource but do not modify it.
 Writers (W): Can both read and modify the shared resource.

Conditions to Satisfy:

1. Multiple readers can read simultaneously (since reading does not cause inconsistency).
2. Only one writer can write at a time (to prevent data corruption).
3. While a writer is writing, no other reader or writer should access the resource.

2. Types of Readers-Writers Problems

There are three versions of the problem:

1. First Readers-Writers Problem (Reader Priority)


o Multiple readers can read at the same time.
o A writer must wait until all readers have finished before writing.
o Issue: Writers may suffer from starvation if new readers keep arriving.
2. Second Readers-Writers Problem (Writer Priority)
o Once a writer is ready, no new readers should start reading until the writer finishes.
o Issue: Readers may suffer from starvation if writers keep arriving.
3. Third Readers-Writers Problem (Fair Solution)
o Uses a FIFO queue to ensure fair access to both readers and writers.

3. Solution to the First Readers-Writers Problem (Reader Priority)
This solution allows multiple readers to read concurrently but ensures writers get exclusive
access to the resource.

Semaphores Used

 mutex → Protects the read_count variable to prevent race conditions.


 wrt → Ensures only one writer writes at a time.
 read_count → A shared integer (not a semaphore) that keeps track of the number of active readers.

Reader Code
do {
wait(mutex); // Lock read_count
read_count++; // Increase number of readers
if (read_count == 1)
wait(wrt); // First reader blocks writers
signal(mutex); // Unlock read_count

// Read the shared resource


read_data();

wait(mutex); // Lock read_count


read_count--; // Decrease number of readers
if (read_count == 0)
signal(wrt); // Last reader allows writers
signal(mutex); // Unlock read_count
} while (true);

Explanation of Reader Code

1. Acquire mutex to update read_count (ensures safe modification).


2. If this is the first reader, it blocks writers by waiting on wrt.
3. Reads the resource (multiple readers can read together).
4. After reading, decreases read_count.
5. If it is the last reader, it signals wrt, allowing writers to proceed.

Writer Code
do {
wait(wrt); // Lock resource (only one writer at a time)

// Write to the shared resource


write_data();

signal(wrt); // Unlock resource


} while (true);

Explanation of Writer Code

1. Waits on wrt to ensure exclusive access.


2. Writes to the shared resource (modifies data).
3. Signals wrt after writing, allowing others to proceed.

4. Solution to the Second Readers-Writers Problem (Writer Priority)
 Ensures no new readers can start if a writer is waiting.
 Prevents writer starvation by giving preference to writers.

Additional Semaphore Used

 read_try → Ensures new readers cannot start while a writer is waiting.

Reader Code (Writer Priority)


do {
wait(read_try); // Ensure no writer is waiting
wait(mutex); // Lock read_count
read_count++;
if (read_count == 1)
wait(wrt); // First reader blocks writers
signal(mutex); // Unlock read_count
signal(read_try); // Allow next reader

// Read the shared resource


read_data();

wait(mutex);
read_count--;
if (read_count == 0)
signal(wrt); // Last reader allows writers
signal(mutex);
} while (true);

Writer Code (Writer Priority)


do {
wait(read_try); // Prevent new readers from starting
wait(wrt); // Lock resource

// Write to the shared resource


write_data();

signal(wrt); // Unlock resource


signal(read_try);// Allow new readers
} while (true);

How This Works:

 A writer waits on read_try, blocking new readers.


 Readers must wait on read_try, ensuring writers get priority.

5. Solution to the Third Readers-Writers Problem (Fair Solution)
 Ensures fair access to both readers and writers.
 Uses a FIFO queue to handle requests fairly.
 Can be implemented using a queue mechanism.

6. Key Differences Between the Solutions


Version Who Gets Priority? Potential Starvation Issue
First Problem Readers Writers may starve
Second Problem Writers Readers may starve
Third Problem Fair Access No starvation

7. Real-Life Examples
 Databases:
o Multiple users (readers) accessing a database for viewing data.
o A writer modifies data (e.g., updating a table), requiring exclusive access.
 File Systems:
o Many processes reading a file simultaneously.
o A process writing to the file must prevent reading to avoid inconsistency.
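The reader-priority solution can be turned into a runnable Python sketch, again using threading.Semaphore for wait/signal. The thread counts are arbitrary demo values; the point is that concurrent readers never lose a write:

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # writers' exclusive access
read_count = 0
shared = {"value": 0}            # the shared resource
snapshots = []                   # values observed by readers

def reader():
    global read_count
    mutex.acquire()              # wait(mutex)
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader blocks writers
    mutex.release()              # signal(mutex)

    snapshots.append(shared["value"])  # read_data()

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader readmits writers
    mutex.release()

def writer():
    wrt.acquire()                # wait(wrt): exclusive access
    shared["value"] += 1         # write_data()
    wrt.release()                # signal(wrt)

threads = [threading.Thread(target=writer) for _ in range(50)]
threads += [threading.Thread(target=reader) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["value"])  # 50: every write applied, none lost
```

The final value is always exactly the number of writer threads, because wrt serializes all writes and blocks them while any reader group is active; the values in snapshots, by contrast, depend on scheduling.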

Readers-Writers Problem: Code Explanation in Detail

1. First Readers-Writers Problem (Reader Priority)


This solution allows multiple readers to read at the same time but ensures that only one
writer can access the resource at a time.

Semaphores Used

1. mutex – Ensures mutual exclusion when updating read_count (prevents race conditions).
2. wrt – Ensures only one writer can write at a time.
3. read_count – A shared integer (not a semaphore) that tracks the number of active readers.

Reader Code (Reader Priority)


do {
wait(mutex); // Step 1: Lock read_count to update safely
read_count++; // Step 2: Increment number of active readers
if (read_count == 1)
wait(wrt); // Step 3: First reader blocks writers
signal(mutex); // Step 4: Unlock read_count

// Step 5: Read the shared resource


read_data();
wait(mutex); // Step 6: Lock read_count again
read_count--; // Step 7: Decrement number of readers
if (read_count == 0)
signal(wrt); // Step 8: Last reader allows writers to proceed
signal(mutex); // Step 9: Unlock read_count
} while (true);

Step-by-Step Explanation (Reader Code)

Step  Action                             Explanation
1     wait(mutex);                       Lock the read_count variable to update it safely.
2     read_count++;                      Increase the count of active readers.
3     if (read_count == 1) wait(wrt);    If this is the first reader, it blocks writers by waiting on wrt.
4     signal(mutex);                     Unlock mutex, allowing other readers to increment read_count.
5     read_data();                       Read the shared resource (many readers can read simultaneously).
6     wait(mutex);                       Lock read_count again before modifying it.
7     read_count--;                      Decrease the count of active readers.
8     if (read_count == 0) signal(wrt);  If this is the last reader, it unblocks writers.
9     signal(mutex);                     Unlock mutex, allowing other readers or writers to proceed.

Writer Code (Reader Priority)


do {
wait(wrt); // Step 1: Lock the resource (only one writer allowed)

// Step 2: Write to the shared resource


write_data();

signal(wrt); // Step 3: Unlock the resource


} while (true);

Step-by-Step Explanation (Writer Code)

Step  Action          Explanation
1     wait(wrt);      Writer waits if any reader is currently reading.
2     write_data();   Writes to the shared resource.
3     signal(wrt);    Unblocks the next waiting writer or reader.

Problem with This Solution


✅ Multiple readers can read simultaneously.
❌ Writers may starve if new readers keep coming, as they always get priority.

2. Second Readers-Writers Problem (Writer Priority)


This solution ensures no new readers can start reading while a writer is waiting.
Fix: Introduce a new semaphore, read_try, to prevent new readers from starting if a writer is
waiting.

Additional Semaphore Used

 read_try → Ensures new readers do not start reading if a writer is waiting.

Reader Code (Writer Priority)


do {
wait(read_try); // Step 1: Ensure no writer is waiting
wait(mutex); // Step 2: Lock read_count to update safely
read_count++;
if (read_count == 1)
wait(wrt); // Step 3: First reader blocks writers
signal(mutex); // Step 4: Unlock read_count
signal(read_try); // Step 5: Allow next reader

// Step 6: Read the shared resource


read_data();

wait(mutex);
read_count--;
if (read_count == 0)
signal(wrt); // Step 7: Last reader allows writers
signal(mutex);
} while (true);

Writer Code (Writer Priority)


do {
wait(read_try); // Step 1: Prevent new readers from starting
wait(wrt); // Step 2: Lock the resource (only one writer allowed)

// Step 3: Write to the shared resource


write_data();

signal(wrt); // Step 4: Unlock resource


signal(read_try);// Step 5: Allow new readers
} while (true);

How Writer Priority Works

✅ A writer blocks new readers before waiting for wrt.


✅ Readers must wait on read_try, ensuring writers get priority.
❌ Readers may starve if too many writers keep arriving.
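The writer-priority variant with the extra read_try gate can be sketched in runnable Python the same way (thread counts are demo assumptions):

```python
import threading

mutex = threading.Semaphore(1)      # protects read_count
wrt = threading.Semaphore(1)        # exclusive access for writers
read_try = threading.Semaphore(1)   # gate: a waiting writer blocks new readers
read_count = 0
shared = {"value": 0}
seen = []

def reader():
    global read_count
    read_try.acquire()              # cannot start while a writer holds the gate
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()               # first reader blocks writers
    mutex.release()
    read_try.release()              # allow the next reader (or writer)

    seen.append(shared["value"])    # read_data()

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()               # last reader readmits writers
    mutex.release()

def writer():
    read_try.acquire()              # block new readers from starting
    wrt.acquire()                   # exclusive access
    shared["value"] += 1            # write_data()
    wrt.release()
    read_try.release()              # reopen the gate

ts = [threading.Thread(target=writer) for _ in range(30)]
ts += [threading.Thread(target=reader) for _ in range(10)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(shared["value"])  # 30
```

Note that readers only hold read_try briefly during entry, while a writer holds it across its whole write, which is what gives writers priority over newly arriving readers.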

3. Third Readers-Writers Problem (Fair Solution)


This solution ensures fairness by alternating between readers and writers.

Key Idea:

 Use a FIFO queue to maintain a fair order.


 Ensure no process starves (both readers and writers get turns).

This solution requires advanced data structures like condition variables or queues to maintain
fairness.

4. Summary of Different Solutions


Problem Priority Given To Starvation Possibility
First Readers-Writers Problem Readers Writers may starve.
Second Readers-Writers Problem Writers Readers may starve.
Third Readers-Writers Problem Fair Access No starvation.

Sleeping Barber Problem – Explanation in Detail


The Sleeping Barber Problem is a classic inter-process synchronization problem in operating
systems that involves managing multiple processes (customers) and a limited resource (a barber
and chairs). This problem is commonly used to demonstrate synchronization techniques
using semaphores and mutex locks.

Problem Statement
A barber shop has:

 One barber
 One barber chair
 N waiting chairs for customers who arrive when the barber is busy
 Customers who arrive at random intervals

The barber follows these rules:

1. If there are no customers, the barber sleeps in his chair.


2. When a customer arrives:
o If the barber is asleep, the customer wakes him up for a haircut.
o If the barber is busy, the customer checks for an empty waiting chair.
o If there is an empty chair, the customer sits and waits.
o If all chairs are occupied, the customer leaves the shop.
3. When the barber finishes cutting a customer’s hair:
o If there are waiting customers, he calls one of them and gives a haircut.
o If no customers are waiting, the barber sleeps again.

Challenges in Synchronization
 Mutual Exclusion: Only one customer should occupy the barber chair at a time.
 Avoiding Deadlock: If not managed properly, the barber might not wake up when a
customer arrives.
 Avoiding Race Conditions: Multiple customers arriving at the same time should be
handled correctly.

Solution Using Semaphores


We can use semaphores to synchronize the barber and customer interactions.
Semaphores Used:

1. customers (counting semaphore): Tracks the number of customers in the shop.


2. barber (binary semaphore): Indicates if the barber is ready to cut hair.
3. mutex (binary semaphore): Ensures mutual exclusion when accessing shared resources
like waiting chairs.

Algorithm

1. Barber Process

while (true) {
    wait(customers);   // Sleep until a customer arrives
    wait(mutex);       // Ensure mutual exclusion on the waiting count
    waiting--;         // One customer leaves the waiting area
    signal(barber);    // Signal that the barber is ready to cut hair
    signal(mutex);     // Release mutex
    // Cut hair
}

2. Customer Process

wait(mutex);
if (waiting < chairs) { // If a waiting chair is available
waiting++; // Increase waiting count
signal(customers); // Notify barber
signal(mutex); // Release mutex
wait(barber); // Wait for barber
// Get haircut
} else {
signal(mutex); // Release mutex (leave the shop)
}

Key Concepts Demonstrated


 Concurrency Control: Customers arrive at random times, so we need proper
synchronization.
 Producer-Consumer Problem Analogy: The barber is the consumer (servicing
customers), and customers are producers (demanding service).
 Blocking and Waking Up: The barber sleeps when there are no customers and wakes up
when one arrives.
Real-World Applications
 Resource Allocation in Operating Systems: Managing CPU scheduling (like jobs
waiting in a queue).
 Thread Synchronization in Multi-threaded Programs: Handling multiple users
accessing a shared resource (e.g., a printer).
 Service Systems: Handling customer queues in call centers, hospitals, etc.
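The barber and customer processes above can be combined into a runnable Python sketch. To make the demo terminate deterministically, the chair and customer counts are chosen so nobody is turned away and the barber closes shop after serving everyone; a real shop would loop forever:

```python
import threading

CHAIRS = 5     # waiting chairs (demo: enough that nobody leaves)
CUSTOMERS = 5  # customers that will arrive

customers = threading.Semaphore(0)     # counting: customers waiting
barber_ready = threading.Semaphore(0)  # barber signals he is ready
mutex = threading.Semaphore(1)         # protects the waiting counter
waiting = 0
stats = {"haircuts": 0, "left": 0}

def barber():
    global waiting
    for _ in range(CUSTOMERS):   # demo only: serve everyone, then close shop
        customers.acquire()      # sleep until a customer arrives
        mutex.acquire()
        waiting -= 1             # customer moves to the barber chair
        mutex.release()
        barber_ready.release()   # call the customer
        stats["haircuts"] += 1   # cut hair

def customer():
    global waiting
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1
        customers.release()      # wake the barber if he is asleep
        mutex.release()
        barber_ready.acquire()   # wait for the barber
    else:
        stats["left"] += 1       # no free chair: leave the shop
        mutex.release()

b = threading.Thread(target=barber)
cs = [threading.Thread(target=customer) for _ in range(CUSTOMERS)]
b.start()
for c in cs:
    c.start()
for c in cs:
    c.join()
b.join()
print(stats)  # {'haircuts': 5, 'left': 0}
```

With fewer chairs than simultaneous arrivals, some customers would take the else branch and leave, which is exactly the "all chairs occupied" rule from the problem statement.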

Dining Philosophers Problem


The Dining Philosophers Problem is a classic synchronization problem in operating systems
that illustrates the challenges of resource sharing and avoiding deadlock. It was formulated
by Edsger Dijkstra in 1965.

Problem Statement
Imagine five philosophers sitting around a circular dining table. Each philosopher has a plate of
food in front of them and needs two chopsticks (or forks) to eat. There are only five chopsticks,
one placed between each pair of philosophers.

Each philosopher follows this routine:

1. Think (not needing chopsticks).


2. Pick up the left chopstick (if available).
3. Pick up the right chopstick (if available).
4. Eat for some time.
5. Put down both chopsticks.
6. Repeat the cycle.

Since each philosopher needs two chopsticks, and each chopstick is shared between two neighbours, the situation can lead to deadlock or resource starvation.

Challenges in the Dining Philosophers Problem


1. Deadlock:
o If each philosopher picks up their left chopstick at the same time and waits for
the right chopstick, no one can proceed, leading to an infinite waiting state
(deadlock).
2. Starvation:
o If some philosophers frequently get both chopsticks while others keep waiting,
some may never get a chance to eat.
3. Concurrency Issues:
o Since multiple philosophers try to access the same chopsticks,
proper synchronization is needed to prevent conflicts.

Solutions to the Dining Philosophers Problem


Several approaches can prevent deadlock and starvation:

1. Resource Hierarchy Solution (Numbering Chopsticks)

 Assign numbers to chopsticks: 0 to 4.


 Each philosopher first picks up the lower-numbered chopstick and then the higher-
numbered one.
 One philosopher (e.g., philosopher 4) picks up the right chopstick first, breaking the
circular dependency and preventing deadlock.

2. Using a Semaphore

 Use a binary semaphore (mutex) to control access to chopsticks.


 Before picking up chopsticks, a philosopher checks if both are available.
 If they are not, the philosopher waits until both become available.

3. Allowing at Most 4 Philosophers to Sit

 If only 4 out of 5 philosophers are allowed to pick up chopsticks at a time, at least one
chopstick remains free, preventing circular wait and deadlock.

4. Odd-Even Strategy

 Odd-numbered philosophers pick up left first, then right.


 Even-numbered philosophers pick up right first, then left.
 This avoids circular wait conditions.

Implementation using Semaphores (Pseudocode)


A straightforward semaphore-based implementation (note: this naive version can still deadlock if all five philosophers pick up their left chopstick at the same time; the strategies above remove that risk):

semaphore chopstick[5];

void philosopher(int i) {
while (true) {
think();

wait(chopstick[i]); // Pick left chopstick


wait(chopstick[(i+1)%5]); // Pick right chopstick

eat();

signal(chopstick[i]); // Put down left chopstick


signal(chopstick[(i+1)%5]); // Put down right chopstick
}
}

Here:

 wait() locks the resource.


 signal() releases the resource.
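The resource-hierarchy fix (Solution 1) can be shown as a runnable Python sketch, with threading.Lock for each chopstick; the number of meals per philosopher is a demo assumption:

```python
import threading

N = 5       # philosophers and chopsticks
MEALS = 10  # meals per philosopher (demo assumption)
chopstick = [threading.Lock() for _ in range(N)]
eaten = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Resource-hierarchy rule: always pick the lower-numbered chopstick first.
    # This breaks the circular wait, so the naive deadlock cannot occur.
    first, second = min(left, right), max(left, right)
    for _ in range(MEALS):
        with chopstick[first]:        # wait(chopstick[first])
            with chopstick[second]:   # wait(chopstick[second])
                eaten[i] += 1         # eat()
        # leaving the with-blocks puts both chopsticks down

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(eaten)  # [10, 10, 10, 10, 10]  (everyone ate; no deadlock)
```

Only philosopher 4's order changes under this rule (chopstick 0 before chopstick 4), but that single change is enough to break the circular dependency for the whole table.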

Real-World Applications
1. Process synchronization in operating systems.
2. Database management where multiple transactions need shared resources.
3. Networking and communication protocols to avoid resource deadlock.
4. Traffic management where multiple cars need access to roads.
Deadlock Prevention in Operating Systems
(Detailed Explanation)
What is a Deadlock?
A deadlock is a state in a computer system where a group of processes is stuck in a waiting
cycle, each holding a resource and waiting for another resource held by another process. This
results in an indefinite standstill where no process can make progress.

Formal Definition of Deadlock

A deadlock occurs when the following conditions hold simultaneously:

1. Mutual Exclusion – A resource can be used by only one process at a time.


2. Hold and Wait – A process holding one resource is waiting for additional resources held
by other processes.
3. No Preemption – A resource cannot be forcibly taken away from a process; it must be
released voluntarily.
4. Circular Wait – A cycle of processes exists, each waiting for a resource held by another
process in the cycle.

If any one of these four conditions is prevented, deadlock cannot occur. This is the principle
behind deadlock prevention.

Real-Life Example of Deadlock


Example: Traffic Jam on a Narrow Road

Imagine a one-lane bridge where two cars are coming from opposite directions. Neither car can
move forward because the bridge is too narrow, and neither driver wants to reverse. The traffic
remains stuck, creating a deadlock.

Similarly, in a computer system, processes can be stuck waiting for resources held by other
processes, leading to a deadlock situation.

Deadlock Prevention Techniques


Deadlock prevention strategies eliminate at least one of the four necessary conditions for
deadlock.

1. Preventing Mutual Exclusion

→ Idea: Allow multiple processes to access the resource at the same time whenever possible.

 Definition: Mutual exclusion means that some resources can be used by only one process
at a time.
 Prevention Approach: If possible, design resources so that multiple processes can use
them simultaneously.

Example in Operating Systems:

 Shared Files – If a file is read-only, multiple processes can read it at the same time
without needing exclusive access.

Real-Life Example:

 Library Books – Instead of having a single copy of a book (which creates a mutual
exclusion problem), multiple copies are made available so that many readers can access it
at once.

❌ Limitation: Some resources, like printers or CPUs, must be used exclusively, so mutual
exclusion cannot always be avoided.

2. Preventing Hold and Wait

→ Idea: A process must request all required resources at once or release held resources before
requesting new ones.

 Definition: Hold and wait occurs when a process is holding some resources while
waiting for others.
 Prevention Approach:
o Require processes to request all resources at the start.
o If a process needs a new resource, it must release all currently held resources
and re-request everything.

Example in Operating Systems:

 When printing a document, a process should request both the printer and memory
space together rather than holding the printer and waiting for memory.

Real-Life Example:
 Restaurant Order: A customer orders all required food items at once instead of keeping
a table occupied and ordering one dish at a time.

❌ Limitation:

 This approach leads to resource underutilization because some resources remain idle
even when they could be used by other processes.
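The "request everything at once or nothing" rule can be sketched with a small all-or-nothing acquisition helper. Note that acquire_all is a hypothetical function written for this illustration, not a standard API:

```python
import threading

def acquire_all(locks):
    """All-or-nothing acquisition: take every lock, or take none.

    If any lock is unavailable, release everything already taken and
    report failure, so the caller never holds resources while waiting.
    (acquire_all is a hypothetical helper, not a standard API.)"""
    taken = []
    for lk in locks:
        if lk.acquire(blocking=False):
            taken.append(lk)
        else:
            for held in taken:   # roll back: avoid hold-and-wait
                held.release()
            return False
    return True

printer = threading.Lock()
memory = threading.Lock()

# Request the printer and memory together, as in the printing example.
first_try = acquire_all([printer, memory])
print(first_try)   # True: both were free, both acquired

second_try = acquire_all([printer, memory])
print(second_try)  # False: resources busy, and nothing extra is now held
```

A process that gets False retries later while holding nothing, so the hold-and-wait condition never arises; the cost is exactly the underutilization noted above, since resources sit reserved from the start.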

3. Preventing No Preemption

→ Idea: If a process holding a resource is waiting for another, the system can forcibly take
back the held resource and give it to another process.

 Definition: No preemption means that once a process gets a resource, it cannot be taken
away forcibly.
 Prevention Approach: If a process is holding resources but is blocked waiting for
another, the system will forcefully take away the held resources and assign them to
other processes.

Example in Operating Systems:

 CPU Scheduling: Some operating systems preempt a process’s CPU time if a higher-
priority process arrives.
 Memory Allocation: If a process is waiting for more memory but is holding a printer,
the system can take the printer back and assign it to another process.

Real-Life Example:

 Hospital Wheelchair System: If a patient is sitting in a wheelchair but is not moving, the hospital staff can take the wheelchair and give it to another patient who needs it urgently.

❌ Limitation:

 Some resources (e.g., a printer in the middle of printing) cannot be preempted safely without causing errors or data loss.

4. Preventing Circular Wait

→ Idea: Impose an ordering rule so that processes always request resources in a predefined
sequence.
 Definition: Circular wait occurs when processes form a cycle where each process waits
for a resource held by the next.
 Prevention Approach: Assign a numerical order to resources and ensure that
processes always request resources in increasing order.

Example in Operating Systems:

 If a system has Scanner (R1) → Printer (R2) → Disk (R3), a process must always
request in ascending order(e.g., R1 → R2 → R3) and never in a different sequence (e.g.,
R3 → R1).

Real-Life Example:

 Traffic Signal System: Vehicles follow predefined lane directions to prevent circular
traffic jams.

❌ Limitation:

 Some processes may need resources in an order that does not match the predefined
sequence, leading to inefficiencies.
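The ordering rule can be demonstrated with a small Python sketch. The resource numbering (1 = Scanner, 2 = Printer, 3 = Disk) follows the example above and is otherwise arbitrary:

```python
import threading

# Hypothetical resource numbering from the example:
# 1 = Scanner (R1), 2 = Printer (R2), 3 = Disk (R3).
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}
finished = []

def process(pid, needed):
    # Circular-wait prevention: always acquire in increasing resource number,
    # regardless of the order the process asked for them.
    for rid in sorted(needed):
        resources[rid].acquire()
    finished.append(pid)          # work with all resources held
    for rid in sorted(needed, reverse=True):
        resources[rid].release()

# Without the ordering rule, P1 (Disk then Scanner) and P2 (Scanner then
# Disk) could deadlock; with it, both always complete.
t1 = threading.Thread(target=process, args=("P1", [3, 1]))
t2 = threading.Thread(target=process, args=("P2", [1, 3]))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(finished))  # ['P1', 'P2']
```

Because every process climbs the same numeric ladder, no cycle of "each waiting for the next" can form, which is precisely the circular-wait condition being eliminated.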
