
OS Unit 2

The document provides an overview of concurrent processes, detailing the structure of a process in memory, including its stack, heap, text, and data sections. It discusses the process life cycle, the role of the Process Control Block (PCB), and the Producer-Consumer problem, emphasizing the importance of mutual exclusion in operating systems to prevent data inconsistency. Additionally, it outlines various implementation techniques for mutual exclusion, such as locks, semaphores, and atomic operations, along with real-world applications and common issues like deadlocks and starvation.


1. Concurrent Processes

Process
A process is a program in execution, progressing sequentially. It represents the basic unit of work
in an operating system.

When a program is loaded into memory and starts execution, it becomes a process. It is divided
into four main sections:

1. Stack – Stores temporary data like function calls and local variables.

2. Heap – Used for dynamic memory allocation.

3. Text – Contains the executable code.

4. Data – Stores global and static variables.

This structure defines how a process is organized in main memory.

S.N. Component – Description

1 Stack – Temporary data such as method/function parameters, return addresses, and local variables.
2 Heap – Memory dynamically allocated to the process during its run time.
3 Text – The compiled program code, along with the current activity represented by the program counter and the contents of the processor's registers.
4 Data – Global and static variables.

Program

A computer program is a set of instructions written in a programming language to perform a specific task. For example, a simple C program:

#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
A program becomes a process when executed. A step-by-step procedure for accomplishing a specific task is called an algorithm. A complete collection of programs, libraries, and related data forms software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. STATE DESCRIPTION


1 Start Initial state when a process is created.
2 Ready Waiting for CPU allocation.
3 Running Currently being executed by the CPU.
4 Waiting Waiting for a resource (e.g., user input, file).
5 Terminated Execution complete or stopped; waits for removal from memory.
Process Control Block (PCB)

A Process Control Block (PCB) is a data structure used by the Operating System to store
essential information about a process. Each process has a unique PCB identified by a Process ID
(PID). It helps the OS manage and track processes.

Here is a concise table of key PCB components:

S.N. INFORMATION DESCRIPTION


1 Process State Current state: Ready, Running, Waiting, etc.
2 Process Privileges Access rights to system resources.
3 Process ID (PID) Unique ID for each process.
4 Pointer Points to the parent process.
5 Program Counter Address of the next instruction to execute.
6 CPU Registers Stores CPU register values for execution.
7 CPU Scheduling Info Includes priority and scheduling parameters.
8 Memory Management Info Includes page tables, segment tables, and memory limits.
9 Accounting Info CPU usage, execution time, and process ID info.
10 I/O Status Info List of I/O devices allocated to the process.
Note: PCB structure may vary across different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
2. Producer Consumer Problem in OS
Overview

The Producer-Consumer problem is a classical synchronization problem in operating systems. When more than one process runs on a system with limited resources, synchronization problems arise: if a resource is shared by multiple processes at the same time, the result can be data inconsistency. In the Producer-Consumer problem, the producer produces items and the consumer consumes the items the producer has produced.

What is Producer Consumer Problem?

Before defining the Producer-Consumer problem, we need to know what a producer and a consumer are.

• In an operating system, a Producer is a process that produces data/items.

• A Consumer is a process that consumes the data/items produced by the producer.

• Both producer and consumer share a common memory buffer: a region of memory of a fixed size used for storage. The producer writes data into the buffer and the consumer reads data from it.

So, what constraints define the Producer-Consumer problem?

1. The producer must not produce any data when the shared buffer is full.

2. The consumer must not consume any data when the shared buffer is empty.
3. Access to the shared buffer must be mutually exclusive, i.e., at any time only one process may access the shared buffer and modify it.

For consistent data synchronization between Producer and Consumer, the above problem should
be resolved.

Solution For Producer Consumer Problem

To solve the Producer-Consumer problem, three semaphore variables are used.

Semaphores are variables that indicate the number of instances of a resource available in the system at a particular time; semaphore variables are used to achieve process synchronization.

Full

The Full variable tracks the number of filled slots in the buffer. It is initialized to 0, since initially the producer has filled no slots.

Empty

The Empty variable tracks the number of empty slots in the buffer. It is initialized to BUFFER_SIZE, since initially the whole buffer is empty.

Mutex

Mutex is used to achieve mutual exclusion: it ensures that at any particular time only the producer or the consumer is accessing the buffer. The mutex here is a binary semaphore, taking only the values 0 and 1.

We will use the wait() and signal() operations on these semaphores to arrive at a solution to the Producer-Consumer problem.

wait() – decreases the semaphore value by 1. signal() – increases the semaphore value by 1.

Let's look at the code of Producer-Consumer Process

The code for Producer Process is as follows :

void Producer(){
while(true){
// producer produces an item/data
wait(Empty);
wait(mutex);
add();
signal(mutex);
signal(Full);
}
}
Let's understand the above Producer process code :

• wait(Empty) - Before producing an item, the producer checks for empty space in the buffer. If the buffer is full, the producer waits for the consumer to consume items from it; this is why the producer executes wait(Empty) before producing anything.

• wait(mutex) - Only one process can access the buffer at a time. On entering the critical section, the producer decrements mutex by executing wait(mutex), so that no other process can access the buffer at the same time.

• add() - Adds the item produced by the producer to the buffer. Once the producer reaches add(), it is guaranteed that no other process can access the shared buffer concurrently, which preserves data consistency.

• signal(mutex) - Once the producer has added the item to the buffer, it increments mutex by 1 so that other processes in a busy-waiting state can enter the critical section.

• signal(Full) - Adding an item fills one slot in the buffer, so the producer increments the Full semaphore to keep the count of filled slots correct.

The code for the Consumer Process is as follows :

void Consumer() {
while(true){
// consumer consumes an item
wait(Full);
wait(mutex);
consume();
signal(mutex);
signal(Empty);
}
}
Let's understand the above Consumer process code :

• wait(Full) - Before consuming, the consumer checks whether the buffer contains any items. wait(Full) decrements the Full variable by one; if Full is already zero, i.e., the buffer is empty, the consumer cannot consume an item and enters a busy-waiting state.

• wait(mutex) - Works just as in the producer process: it decrements mutex by 1 and keeps other processes out of the critical section until the consumer increments mutex again.

• consume() - Removes an item from the buffer. While the consumer is inside consume(), no other process can enter the critical section, which maintains data consistency.

• signal(mutex) - After consuming the item, the consumer increments mutex by 1 so that other processes in a busy-waiting state can enter the critical section.

• signal(Empty) - Consuming an item frees one slot, so the consumer increments the Empty variable, indicating that the empty space in the buffer has grown by 1.

Why can a mutex solve the Producer-Consumer problem?

A mutex solves the Producer-Consumer problem because it provides mutual exclusion: it prevents more than one process from entering the critical section. Since a mutex is binary, taking only the values 0 and 1, whenever a process tries to enter the critical-section code, it first checks the mutex value using the wait operation.

wait(mutex);

wait(mutex) decreases the value of mutex by 1. Suppose a process P1 tries to enter the critical section when the mutex value is 1. P1 executes wait(mutex), decreasing mutex to 0, and enters the critical section.

Now suppose process P2 tries to enter the critical section. It attempts to decrease the value of mutex, but mutex is already 0, so wait(mutex) cannot complete, and P2 waits for P1 to come out of the critical section.

When P1 comes out of the critical section, it executes signal(mutex).

signal(mutex)

signal(mutex) increases the value of mutex by 1, so the mutex value becomes 1 again. Process P2, which was busy-waiting, can now enter the critical section by completing wait(mutex).

In this way, the mutex enforces mutual exclusion between the processes.


In both the producer and consumer code above, the wait and signal operations on mutex provide mutual exclusion and solve the Producer-Consumer problem.


Conclusion

• The producer process produces data items and the consumer process consumes them.
• Both producer and consumer processes share a common memory buffer.
• The producer must not produce an item when the buffer is full.
• The consumer must not consume an item when the buffer is empty.
• At most one process may access the buffer at a time, i.e., mutual exclusion is required.
• The Full, Empty, and mutex semaphores together solve the Producer-Consumer problem.
• The Full semaphore counts the filled slots in the buffer.
• The Empty semaphore counts the empty slots in the buffer.
• The mutex enforces mutual exclusion.
3. Mutual Exclusion in Operating Systems
1. Introduction to Mutual Exclusion

Definition

Mutual exclusion (Mutex) is a fundamental synchronization principle that ensures only one
process can access a shared resource (critical section) at any given time. If other processes need
to execute in their critical sections, they must wait until it is free.

Basic Concept

• A condition in which no two concurrent threads of execution are ever inside their critical sections at the same time

• Core principle for maintaining data consistency in concurrent environments

• Essential for preventing race conditions in modern operating systems

2. Critical Sections

What is a Critical Section?

• A segment of code where a process accesses and potentially modifies shared resources

• Examples: updating shared variables, modifying files, or changing shared data structures

• Must be executed atomically (as an uninterruptible unit)

Problems Without Mutual Exclusion

When processes execute their critical sections simultaneously:

• Race conditions occur when results depend on sequence or timing of operations

• Data inconsistency arises when shared resources are corrupted

• System state becomes unpredictable and unreliable

3. Requirements of Mutual Exclusion

Four Essential Conditions

1. Mutual Exclusion: Only one process can execute in its critical section at any time

2. Finite Duration: A process remains inside its critical section only for a finite time

3. Progress/No Blocking: No process outside its critical section should prevent another process from entering its critical section

4. No Indefinite Postponement: No process should wait indefinitely to enter its critical section

Additional Important Conditions

• No assumptions should be made about the relative speeds of processes

• The system must return to a stable state after processes complete their critical sections

• Minimal overhead in implementing the exclusion mechanism

4. Why is Mutual Exclusion Required?

Linked List Example

When multiple threads modify a linked list simultaneously:

• Thread 1 removes node i by modifying node (i-1)'s next reference to point to node (i+1)

• Thread 2 simultaneously removes node (i+1) by modifying node i's next reference to point
to node (i+2)

• Result: Inconsistent state where node (i+1) still exists in the list due to incorrect references

Consequences of Race Conditions

• Data corruption and integrity issues

• Unpredictable program behavior

• Difficult-to-reproduce bugs

• System instability

5. Implementation Techniques

1. Locks/Mutex

• A binary state mechanism (locked/unlocked)

• Process must obtain lock before entering critical section

• If locked by another process, requesting process must wait

• Simple but can lead to busy waiting

2. Recursive Locks
• Can be locked multiple times by the same thread without deadlock

• Tracks owner thread and acquisition count

• Released only when the owner has released it as many times as acquired

• Useful for recursive algorithms and nested critical sections

3. Semaphores

• Advanced synchronization primitive with two atomic operations:

o wait(): Decrease counter and block if less than zero

o signal(): Increase counter and wake blocked process

• Types:

o Binary semaphores: Values limited to 0 or 1 (similar to mutex)

o Counting semaphores: Can take multiple values, used for resource counting

4. Readers-Writer (RW) Locks

• Optimized for scenarios with frequent reads and occasional writes

• Multiple readers can access simultaneously

• Writers require exclusive access (no concurrent readers or writers)

• Prevents read-write and write-write conflicts while allowing read-read concurrency

5. Atomic Operations

• Hardware-supported uninterruptible instructions

• Examples: Compare-and-Swap (CAS), Test-and-Set

• Allow lock-free synchronization mechanisms

• Often more efficient than lock-based approaches

6. Software-based Algorithms

• Peterson's algorithm: Two-process mutual exclusion

• Dekker's algorithm: First proven solution for two processes

• Lamport's bakery algorithm: Scalable to N processes

• Use flags, turns, and busy-waiting to coordinate access


6. Real-World Applications

1. Printer Spooling

• Multiple users submit print jobs simultaneously

• Mutex ensures one process at a time modifies the print queue

• Maintains print job order and prevents conflicts

2. Bank Account Transactions

• Multiple users may attempt to access/modify accounts concurrently

• Mutual exclusion prevents inconsistent balance states

• Ensures transaction atomicity and data integrity

3. Traffic Signal Control

• Coordinates traffic signals at intersections

• Prevents conflicting signals (green in all directions)

• Ensures safe and organized traffic flow

4. Resource Allocation in Shared Databases

• Prevents simultaneous modification of the same data by multiple transactions

• Maintains ACID properties (Atomicity, Consistency, Isolation, Durability)

• Implements concurrency control through locks or version control

5. Shared Memory in Real-Time Systems

• Protects critical shared memory regions

• Ensures proper coordination between processes

• Prevents timing issues in time-sensitive applications

7. Common Issues and Considerations

Deadlocks

• Two or more processes waiting indefinitely for resources held by each other

• Can occur with improper lock acquisition ordering

• Prevention strategies: resource hierarchy, timeout mechanisms, deadlock detection


Starvation

• Process indefinitely denied access to critical section

• Can occur if access prioritization is unfair

• Solution: implement fair scheduling or aging mechanisms

Priority Inversion

• Lower-priority process holds a lock needed by higher-priority process

• Solution: priority inheritance or priority ceiling protocols

Performance Considerations

• Lock contention reduces parallelism

• Fine-grained locking improves concurrency but increases complexity

• Lock-free algorithms can provide better performance in some scenarios

8. Best Practices for Teaching

Visualization Approaches

• Use diagrams to show process execution timelines

• Demonstrate race conditions with concrete examples

• Illustrate lock acquisition and release sequences


4.🧠 Critical Section in Operating Systems
Definition:

A Critical Section is a part of a program that accesses shared resources (e.g., variables, memory,
files) and must not be executed by more than one process or thread at the same time to avoid
race conditions and ensure data consistency.

The Critical Section Problem

When multiple processes access or modify shared resources concurrently, race conditions may
occur — leading to incorrect results.

Example: Let value = 3 (a shared variable).

• Process P1 performs value = value + 3.

• Process P2 performs value = value - 3.

Run one after the other, the final value should be 3 (3 + 3 − 3). But if both processes read the old value 3 before either writes back, P1 stores 6 and P2 then stores 0, losing P1's update.
This lost update is a race condition caused by unsynchronized access.

Requirements for a Solution

Any solution to the critical section problem must satisfy three key conditions:

Property          Meaning
Mutual Exclusion  Only one process can execute in the critical section at a time.
Progress          If no process is in the critical section, the decision of who enters next should not be postponed indefinitely.
Bounded Waiting   There should be a limit on the number of times other processes can enter their critical sections before a waiting process is granted access.

General Structure of Critical Section Code

do {
// Entry Section
acquireLock();

// Critical Section
// Access shared resource

// Exit Section
releaseLock();

// Remainder Section
} while (true);

Mechanisms to Solve Critical Section Problem

Mechanism                        Description
Mutex (Mutual Exclusion Object)  Lock-based solution with acquire() and release() operations.
Semaphores                       Integer-based control with atomic wait() and signal() operations.
Test-and-Set                     Hardware-level atomic instruction that prevents simultaneous access.
Compare-and-Swap                 Replaces a value only if it matches the expected value; used for locking.
Condition Variables              Used to block processes until a certain condition becomes true.

Types of Kernels and Critical Sections

• Preemptive Kernel: Allows a process to be preempted while running in kernel mode. More prone to race conditions.

• Non-Preemptive Kernel: A kernel-mode process runs until it blocks or finishes. Safer and easier to manage.

Issues in Critical Sections

Issue Explanation
Deadlock Processes wait indefinitely for each other to release resources.
Starvation A process waits too long and never gets access.
Overhead Extra processing to acquire/release locks may reduce performance.
Reduced Parallelism Only one process can work at a time on shared data.

Strategies to Avoid Problems


Strategy Benefit
Fine-Grained Locking Locks specific resources, increasing concurrency.
Lock Hierarchies Prevents deadlocks by imposing lock order.
Read-Write Locks Allows multiple readers, single writer.
Optimistic Concurrency Check for conflicts only at commit time.
Lock-Free Structures Use atomic operations to avoid locks entirely.

Real-World Example

Banking Scenario:

A bank account has ₹10,000.

• Cashier withdraws ₹7000 (takes 2 seconds to update).

• Simultaneously, ₹6000 is withdrawn from ATM.


→ Total withdrawal = ₹13,000 > ₹10,000 — Data Inconsistency.
Solution: Use a critical section to ensure only one withdrawal occurs at a time.

Impact on Scalability

Factor Impact
Bottlenecks Multiple processes waiting for the same resource.
Reduced Parallelism Limits concurrency in highly parallel systems.
Lock Overhead Cost of managing synchronization in complex systems.

Advantages of Critical Section

• ✔ Ensures Data Integrity

• ✔ Provides Mutual Exclusion

• ✔ Supports Predictable Execution

• ✔ Compatible with Legacy Code

• ✔ Simplifies Synchronization Design


Disadvantages of Critical Section

• Potential Deadlocks

• Reduced Concurrency

• Performance Overhead

• Difficult Debugging in Multi-threaded Code

Summary

Aspect Details
What A block of code accessing shared resources
Why Prevent race conditions, maintain consistency
How By using synchronization mechanisms
Key Properties Mutual Exclusion, Progress, Bounded Waiting
Common Solutions Mutexes, Semaphores, Test-and-Set, etc.
Challenges Deadlock, Starvation, Overhead, Debugging

Takeaway: Critical sections are fundamental in concurrent programming. Proper design and
use of synchronization mechanisms are key to building reliable, efficient, and scalable systems.
5. Dekker’s Algorithm in Process Synchronization
✦ Introduction

Dekker’s Algorithm was the first correct software-based solution to the Critical Section Problem
for two processes. It was invented by Dutch mathematician Theodorus Dekker in the 1960s and
is designed to ensure mutual exclusion using only shared memory and basic atomic operations.

✦ Critical Section Problem

When multiple processes share resources (like CPU, memory, I/O), we must ensure that only one
process enters its critical section at a time to avoid race conditions.

Requirements for a Solution:

1. Mutual Exclusion – Only one process is allowed in the critical section.

2. Progress – A process outside the critical section should not block others.

3. Bounded Waiting – Each process gets a fair chance to execute.

✦ Basic Structure of Process

do {
// Entry Section
// Critical Section
// Exit Section
// Remainder Section
} while (TRUE);

✦ Overview of Dekker’s Algorithm

• Solves the critical section problem for two processes.

• Uses:

o Flags (flag[0], flag[1]) to indicate intention to enter critical section.

o Turn variable to determine whose turn it is to enter.

• Final (5th) version satisfies all 3 necessary conditions:

o Mutual Exclusion
o Progress

o Bounded Waiting

✦ Dekker’s Algorithm (Final Version)

Idea:

If both processes want to enter the critical section simultaneously, only the process whose turn
it is will proceed. The other will wait.

Variables:

boolean flag[2] = {false, false}; // flag[i] = true if process i wants to enter
int turn = 0;                     // whose turn it is (0 or 1)

Algorithm for Process i (0 or 1):

do {
flag[i] = true; // announce intent; here j = 1 - i denotes the other process
while (flag[j]) {
if (turn == j) {
flag[i] = false;
while (turn == j); // busy wait
flag[i] = true;
}
}

// Critical Section

turn = j;
flag[i] = false;

// Remainder Section
} while (TRUE);

✦ Working Explained

1. flag[i] = true → Process i indicates intent to enter CS.

2. If flag[j] is also true → Conflict arises.

o Check turn == j: If it’s not your turn, back off.

o Wait until it becomes your turn.


3. Once in CS, execute safely.

4. After exit:

o Give turn to the other process.

o Reset your flag.

✦ Versions of Dekker’s Algorithm and Their Issues

Version Approach Issue


1st Shared turn variable Lockstep Synchronization
2nd Two flag variables Mutual Exclusion Violation
3rd Set flags before waiting Possible Deadlock
4th Use flags + random delay Indefinite Postponement
5th Flags + Turn variable Final and Correct Version

✦ Advantages

✔ Fully software-based (no special hardware instructions required)
✔ Satisfies all 3 conditions for the critical section
✔ Useful for educational purposes and small systems

✦ Disadvantages

• Only works for 2 processes
• Difficult to scale to n processes
• Uses busy waiting, which wastes CPU time

✦ Conclusion

Dekker’s Algorithm is a pioneering approach in the field of process synchronization. It laid the
foundation for modern algorithms like Peterson’s and provided a software-only solution to the
classic Critical Section Problem.
6. Peterson’s Solution in Operating Systems
1. Introduction

• Peterson's Algorithm was proposed by Gary L. Peterson in 1981 as a solution to the Critical
Section Problem for two processes.
• It is a software-based solution, using shared memory to manage synchronization between
processes.
• The algorithm ensures mutual exclusion, meaning no two processes can enter their critical
section at the same time.
• It guarantees progress, ensuring that if no process is in its critical section, one of the waiting
processes will eventually enter.
• The algorithm provides bounded waiting, preventing indefinite blocking of a process trying
to enter its critical section.
• No hardware-level synchronization is required, making it a purely software-based approach
to process synchronization.
• It is an essential and classical solution for two-process systems in operating systems and
process management.

2. Definition

Peterson's Solution allows two processes to safely share a single-use resource (i.e., critical
section) without conflicts or race conditions.

Key Variables:

• flag[2]: Boolean array where flag[i] = true indicates that process Pi wants to enter the
critical section.

• turn: Indicates whose turn it is to enter the critical section.

3. Working Principle

For Process Pi:

do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j); // wait
// Critical Section
flag[i] = false;
// Remainder Section
} while (true);
For Process Pj:

do {
flag[j] = true;
turn = i;
while (flag[i] && turn == i); // wait
// Critical Section
flag[j] = false;
// Remainder Section
} while (true);

4. Explanation

1. Both processes set their flag[i] = true indicating intent to enter the critical section.
2. The turn is given to the other process.
3. A process waits in the loop if the other process is also interested (flag[j] == true) and it’s
the other’s turn.
4. The process exits the loop only when either the other is not interested or it’s not the
other’s turn.
5. After executing the critical section, it resets flag[i] = false.

5. Implementation in C (with pthread)

#include <stdio.h>
#include <pthread.h>

// 'volatile' forces the busy-wait loop to re-read the shared flags on
// every iteration instead of caching them in registers.
volatile int flag[2];
volatile int turn;
int val = 0;

void lock_init() {
    flag[0] = flag[1] = 0;
    turn = 0;
}

void lock(int self) {
    int other = 1 - self;
    flag[self] = 1;
    turn = other;
    while (flag[other] && turn == other); // busy wait
}

void unlock(int self) {
    flag[self] = 0;
}

void *work(void *s) {
    int self = (int)(size_t)s;
    printf("Thread : %d\n", self);
    lock(self);
    for (int i = 0; i < 100000000; i++)
        val++;
    unlock(self);
    return NULL;
}

int main() {
    pthread_t t1, t2;
    lock_init();

    pthread_create(&t1, NULL, work, (void *)0);
    pthread_create(&t2, NULL, work, (void *)1);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("Final Value: %d\n", val);
    return 0;
}

Output

Thread : 0
Thread : 1
Final Value: 200000000

Note: volatile keeps the compiler from optimizing the wait loop away, but on hardware with relaxed memory ordering Peterson's algorithm additionally needs memory fences (e.g., C11 atomics); this version is for study rather than production use.

6. Use Case Examples

• Two processes trying to use a shared printer.

• Two processes reading/writing a shared file.

• Two processes competing for a shared network connection.

7. Advantages
• Simple and easy to understand.

• Uses basic constructs – no hardware dependency.

• Demonstrates key synchronization concepts (mutual exclusion, progress, bounded waiting).

• Suitable for teaching concurrency.

8. Disadvantages

• Works only for two processes.

• Involves busy waiting, leading to CPU wastage.

• Assumes atomic operations and ideal conditions.

• May not guarantee fairness if one process continually dominates.

9. Detailed Analysis of Peterson’s Solution

Entry Section

void Entry_Section(int process) {
    int other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (interested[other] && turn == process); // wait
}

Exit Section

void Exit_Section(int process) {
    interested[process] = FALSE;
}
Execution Flow

1. P1 sets interested[0] = TRUE and turn = 0.

2. If P2 also wants access, it will enter the loop but gets stuck if interested[0] == TRUE &&
turn == 1.

3. Once P1 exits the critical section, it sets interested[0] = FALSE, allowing P2 to enter.
10. Properties Ensured

Property Description
Mutual Exclusion Only one process in the critical section.
Progress No deadlock; one process doesn’t block the other unnecessarily.
Bounded Waiting Each process will get access to the critical section eventually.
Portability Fully software-based and can be run on any hardware.

11. Conclusion

Peterson’s Algorithm is a classic solution for the critical section problem. While it’s not used in
modern OS due to hardware support (like semaphores, mutexes, etc.), it remains important in
academic settings to understand the principles of process synchronization and concurrency.
7. Semaphore in OS
Overview

Semaphore is essentially a non-negative integer that is used to solve the critical section problem
by acting as a signal. It is a concept in operating systems for the synchronization of concurrent
processes.

What is Semaphore in OS?

If you have read about Process Synchronization, you are aware of the critical section problem that
arises for concurrent processes.

If not, let's quickly get comfortable with these terms above.

Concurrent processes are processes that execute simultaneously (or in parallel) and may or may not depend on one another. Process synchronization is the coordination between processes that have access to shared materials such as a common section of code, resources, or data.

For example: There may be some resource that is shared by 3 different processes, and none of
the processes at a certain time can change the resource, since that might ruin the results of the
other processes sharing the same resource. You'll understand it more clearly soon.

Now, this Process Synchronization is required for concurrent processes. For any number of
processes that are executing simultaneously, let's say all of them need to access a section of the
code. This section is called the Critical Section.

Now that you are familiar with these terms, we can move on to understanding the need
for Semaphores with an example.

We have 2 processes, that are concurrent and since we are talking about Process Synchronization,
let's say they share a variable "shared" which has a value of 5. What is our goal here? We want to
achieve mutual exclusion, meaning that we want to prevent simultaneous access to a shared
resource. The resource here is the variable "shared" with the value 5.

int shared = 5;

Process 1

int x = shared; // storing the value of shared variable in the variable x

x++;

sleep(1);
shared = x;

Process 2

int y = shared;

y--;

sleep(1);

shared = y;

We start with the execution of process 1, in which we declare a variable x which has initially the
value of the shared variable which is 5. The value of x is then incremented, and it becomes 6, and
post that the process goes into a sleep state. Since the current processing is concurrent, the CPU
does not wait and starts the processing of process 2. The integer y has the value of the shared
variable initially which is unchanged, and is 5.

Then we decrement the value of y and process 2 goes into a sleep state. We move back to
process 1 and the value of the shared variable becomes 6. Once that process is complete, in
process 2 the value of the shared variable is changed to 4.

One would think that if we increment and decrement a number, its value should be unchanged
and that is exactly what was happening in the two processes, however in this case the value of
the "shared" variable is 4, and this is undesired.

For example: if we have 5 instances of a resource and a process X uses one, decrementing the count by 1 just as in our example, while another process Y simultaneously releases an instance it had taken earlier, a similar interleaving might occur and the resulting count would be 4, when it should have been 5.

This is called a race condition, and it leads to data inconsistency and further problems such as deadlock. Hence we need proper synchronization between processes, and to prevent these problems, we use a signaling integer variable, called a Semaphore.

So to formally define Semaphore we can say that it is an integer variable that is used in a mutually
exclusive manner by concurrent processes, to achieve synchronization.

Since Semaphores are integer variables, their value acts as a signal, which allows or does not
allow a process to access the critical section of code or certain other resources.

Types of Semaphore in OS

There are mainly two types of Semaphores, or two types of signaling integer variables:

1. Binary Semaphores
2. Counting Semaphores

Binary Semaphores

In these types of Semaphores the integer value of the semaphore can only be either 0 or 1. If the
value of the Semaphore is 1, it means that the process can proceed to the critical section (the
common section that the processes need to access). However, if the value of the binary
semaphore is 0, then the process cannot continue to the critical section of the code.

When a process is using the critical section of the code, we change the Semaphore value to 0,
and when a process is not using it, or we can allow a process to access the critical section, we
change the value of semaphore to 1.

We'll discuss the working of the semaphores soon.

Counting Semaphores

Counting semaphores are signaling integers that can take on any integer value. Using
these Semaphores we can coordinate access to resources and here the Semaphore count is the
number of resources available. If the value of the Semaphore is anywhere above 0, processes can
access the critical section or the shared resources. The number of processes that can access the
resources/code is the value of the semaphore. However, if the value is 0, it means that there
aren't any resources that are available or the critical section is already being accessed by a
number of processes and cannot be accessed by more processes. Counting semaphores are
generally used when the number of instances of a resource is more than 1, and multiple
processes can access the resource.
What is the primary purpose of a semaphore in operating systems?

To execute a process immediately.

To synchronize access to shared resources by concurrent processes.

To terminate a process.

To increase the execution speed of all processes.


Example of Semaphore in OS

Now that we know what semaphores are and their types, we must understand their working. As
we read above, our goal is to synchronize processes and provide mutual exclusion in the critical
section of our code. So, we have to introduce a mechanism that wouldn't allow more
than 1 process to access the critical section using the signaling integer - semaphore.

shared semaphore mutex = 1;

process i
begin
    .
    .
    P(mutex);
    execute Critical Section
    V(mutex);
    .
    .
end;
Here in this piece of pseudocode, we have declared a semaphore named mutex in the first line, with an initial value of 1. We then start the execution of a process i which runs some code and then, as you can see, calls a function "P" on the semaphore before proceeding to the critical section, followed by a function "V" on the same semaphore. After that, the process executes the remainder of its code and ends.

Remember, we discussed that semaphore is a signaling variable, and whether or not the process
can proceed to the critical section depends on its value. And in binary and counting semaphores
we read that we change the value of the semaphore according to the resources available. With
this thought, let's move further to read about these "P" and "V" functions in the above
pseudocode.

Wait and Signal Operations in Semaphores

Wait and Signal operations in semaphores are nothing but those "P" and "V" functions that we
read above.

Wait Operation

The wait operation, also called the "P" function, sleep, decrease, or down operation, is the
semaphore operation that controls the entry of a process into a critical section. If the value of
the mutex/semaphore is positive then we decrease the value of the semaphore and let the
process enter the critical section.

Note that this function is only called before the process enters the critical section, and not after
it.

In pseudocode:

P(semaphore){
    if semaphore is greater than 0
        then decrement semaphore by 1
    else
        block the process until the semaphore becomes positive
}

Signal Operation

The function "V", or the wake-up, increase or up operation is the same as the signal function, and
as we know, once a process has exited the critical section, we must update the value of the
semaphore so that we can signal the new processes to be able to access the critical section.
To update the value once the process has exited the critical section: since we decreased the value of the semaphore by 1 in the wait operation, here we simply increment it. Note that this function is invoked only after the process exits the critical section and cannot be invoked before the process enters it.

In pseudocode:

V(semaphore){
    increment semaphore by 1
}

If the value of the semaphore was initially 1, then on the entering of the process into the critical
section, the wait function would have decremented the value to 0 meaning that no more
processes can access the critical section (making sure of mutual exclusion -- only in binary
semaphores). Once the process exits the critical section, the signal operation is executed and the
value of the semaphore is incremented by 1, meaning that the critical section can now be
accessed by another process.

Implementation of Binary and Counting Semaphore in OS

Let's take a look at the implementation of the semaphores with two processes P1 and P2. For
simplicity of the example, we will take 2 processes, however, semaphores are used for a very large
number of processes.

Binary Semaphores:

Initially, the value of the semaphore is 1. When the process P1 enters the critical section, the
value of the semaphore becomes 0. If P2 would want to enter the critical section at this time, it
wouldn't be able to, since the value of the semaphore is not greater than 0. It will have to wait till
the semaphore value is greater than 0, and this will happen only once P1 leaves the critical
section and executes the signal operation which increments the value of the semaphore.

This is how mutual exclusion is achieved using binary semaphore i.e. both processes cannot
access the critical section at the same time.
Code:

struct Semaphore{
    int value; // can only be 0 or 1
    /* This queue contains all the process control blocks (PCBs) of the processes
       that get blocked while performing the wait operation */
    Queue<process> processes;
};

wait (Semaphore mutex){
    if (mutex.value == 1){
        mutex.value = 0;
    }
    else {
        // the process P cannot access the critical section: add it to the waiting queue
        mutex.processes.push(P);
        sleep();
    }
}
signal (Semaphore mutex){
    if (mutex.processes is empty){
        mutex.value = 1;
    }
    else {
        // select the process from the waiting queue that will next access the critical section
        process p = mutex.processes.pop();
        wakeup(p);
    }
}
In this code above we have implemented a binary semaphore that provides mutual exclusion.

Counting Semaphores:

In counting semaphores, the only difference is that we will have a number of resources, or in
other words a set number of processes that can access the critical section at the same time.

Let's say we have a resource that has 4 instances, hence making the initial value of semaphore =
4. Whenever a process requires access to the critical section/resource we call the wait function
and decrease the value of the semaphore by 1 only if the value of the semaphore is greater
than 0. When 4 processes have accessed the critical section/resource and the 5th
process requires it as well, we put it in the waiting queue and wake it up only when a process has
executed the signal function, meaning that the value of the semaphore has gone up by 1.

struct Semaphore{
    int value;
    Queue<process> processes;
};

wait (Semaphore s){
    s.value -= 1;
    if (s.value < 0){
        // no instance free: block the calling process p
        s.processes.push(p);
        block();
    }
}

signal (Semaphore s){
    s.value += 1;
    if (s.value <= 0){
        // a process was blocked in wait(): wake up the next one
        process p = s.processes.pop();
        wakeup(p);
    }
}
In the context of semaphores, what does the term "mutual exclusion" mean?

Only one process can execute at a time.

Only one process can be in the critical section at a time.

Only one process can be in the system at a time.

Only one resource can be used by all processes


Solving Producer-Consumer with Semaphores

Now that we have understood the working of semaphores, we can take a look at the real-life
application of semaphores in classic synchronization problems.

The producer-consumer problem is one of the classic process synchronization problems.

Problem Statement

The problem statement states that we have a buffer that is of fixed size, and the producer will
produce items and place them in the buffer. The consumer can pick items from the buffer and
consume them. Our task is to ensure that the producer and the consumer do not access the buffer at the same time: the consumer should not remove an item while the producer is placing one into the buffer. The critical section here is the buffer.

Solution

So to solve this problem, we will use 2 counting semaphores, namely "full" and "empty", along with a binary semaphore "mutex". The counting semaphore "full" will keep track of the slots in the buffer that are occupied, i.e. the items in the buffer. The "empty" semaphore will keep track of the slots in the buffer that are empty, and mutex, whose value is initially 1, will ensure mutually exclusive access to the buffer.

Initially, the value of the semaphore "full" will be 0 since all slots in the buffer are unoccupied and
the value of the "empty" buffer will be 'n', where n is the size of the buffer since all slots are
initially empty.
For example, if the size of the buffer is 5, then the semaphore full = 0, since all the slots in
the buffer are unoccupied and empty = 5.

The deduced solution for the producer section of the problem is:

do{
// producer produces an item
wait(empty);
wait(mutex);
// put the item into the buffer
signal(mutex);
signal(full);
} while(true)
In the above code, we call the wait operations on the empty and mutex semaphores when the producer produces an item. Since an item is produced, it must be placed in the buffer, reducing the number of empty slots by 1; hence we call the wait operation on the empty semaphore. We also take mutex down so as to prevent the consumer from accessing the buffer in the meantime.

After the producer has placed the item into the buffer, we increase the value of the "full" semaphore by 1 and also increment mutex, as the producer has completed its task and the consumer will now be able to access the buffer.

Solution to the consumer section of the problem:

do{
wait(full);
wait(mutex);
// removal of the item from the buffer
signal(mutex);
signal(empty);
// consumer now consumed the item
} while(true)
The consumer needs to consume the items that are produced by the producer. So when the
consumer is removing the item from the buffer to consume it we need to reduce the value of
the "full" semaphore by 1 since one slot will be emptied, and we also need to decrement the
value of mutex so that the producer does not access the buffer.

Now that the consumer has consumed the item, we can increment the value of the empty
semaphore along with the mutex by 1.

Thus, we have solved the producer-consumer problem.

Advantages of Semaphores

As we have read throughout the article, semaphores have proven to be extremely useful
in process synchronization. Here are the advantages summarized:

• They allow processes into the critical section one by one and provide strict mutual
exclusion (in the case of binary semaphores).

• No processor time is wasted in busy waiting: a process blocked on a semaphore sleeps
instead of repeatedly checking whether it may enter the critical section.

• The implementation/code of the semaphores is written in the machine-independent code
section of the microkernel, and hence semaphores are machine independent.

Disadvantage of Semaphores

We have already discussed the advantages of semaphores; however, semaphores also have some
disadvantages. They are:

• Semaphores are slightly complicated and the implementation of the wait and signal
operations should be done in such a manner, that deadlocks are prevented.

• The usage of semaphores may cause priority inversion where the high-priority processes
might get access to the critical section after the low-priority processes.
8. Hardware Synchronization Algorithms : Unlock and
Lock, Test and Set, Swap
Process synchronization problems occur when two processes running concurrently share the
same data or the same variable: the value of that variable may not be updated correctly before
it is used by a second process. Such a condition is known as a race condition. There are
software as well as hardware solutions to this problem. In this article, we will talk about the
classic hardware solutions to process synchronization problems and their implementation.

There are three algorithms in the hardware approach of solving Process Synchronization
problem:

1. Test and Set

2. Swap

3. Unlock and Lock

Hardware instructions in many operating systems help in the effective solution of critical section
problems.

1. Test and Set:

Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) works in
this way – it atomically returns the current value of lock and sets lock to true. The first process
will enter the critical section at once, as TestAndSet(lock) returns false and it breaks out of the
while loop. The other processes cannot enter now, as lock is set to true and so the while loop
keeps evaluating to true. Mutual exclusion is ensured. Once the first process gets out of the critical
section, lock is changed back to false, so the other processes can now enter one by one. Progress is
also ensured. However, after the first process, any process can go in. There is no queue
maintained, so any new process that finds the lock to be false can enter. So bounded waiting
is not ensured.

Test and Set Pseudocode –

// Shared variable lock initialized to false
boolean lock = false;

// Executed atomically, as a single hardware instruction
boolean TestAndSet (boolean &target){
    boolean rv = target;
    target = true;
    return rv;
}

while(1){
    while (TestAndSet(lock));
    critical section
    lock = false;
    remainder section
}

2. Swap:

The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to true
inside an atomic function, key is set to true and then swapped with lock. The first process sets
key = true and enters while(key); since lock is false, the swap makes lock = true and key = false,
so on the next check the loop breaks and the first process enters the critical section. Now if
another process tries to enter the critical section, it again sets key = true, and the swap inside
while(key) leaves lock = true and key = true (since lock was already true from the first process).
The loop therefore keeps executing and the other process cannot enter the critical section, so
mutual exclusion is ensured. On leaving the critical section, lock is changed to false, so any
process finding it gets to enter the critical section. Progress is ensured. However, bounded
waiting is again not ensured, for the very same reason.

Swap Pseudocode –

// Shared variable lock initialized to false,
// and individual (per-process) variable key initialized to false

boolean lock = false;
boolean key = false;

// Executed atomically, as a single hardware instruction
void swap(boolean &a, boolean &b){
    boolean temp = a;
    a = b;
    b = temp;
}

while (1){
key = true;
while(key)
swap(lock,key);
critical section
lock = false;
remainder section
}

3. Unlock and Lock :


Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it adds another
value, waiting[i], for each process which checks whether or not a process has been waiting. A
ready queue is maintained with respect to the process in the critical section. All the processes
coming in next are added to the ready queue with respect to their process number, not
necessarily sequentially. Once the ith process gets out of the critical section, it does not turn
lock to false so that any process can avail the critical section now, which was the problem with
the previous algorithms. Instead, it checks if there is any process waiting in the queue. The
queue is taken to be a circular queue. j is considered to be the next process in line and the
while loop checks from jth process to the last process and again from 0 to (i-1)th process if
there is any process waiting to access the critical section. If there is no process waiting then
the lock value is changed to false and any process which comes next can enter the critical
section. If there is, then that process’ waiting value is turned to false, so that the first while
loop becomes false and it can enter the critical section. This ensures bounded waiting. So the
problem of process synchronization can be solved through this algorithm.

Unlock and Lock Pseudocode –

// Shared variable lock initialized to false,
// individual (per-process) variable key, and
// shared array waiting[n] with one entry per process, all initialized to false

boolean lock = false;
boolean key;
boolean waiting[n];

while(1){
waiting[i] = true;
key = true;
while(waiting[i] && key)
key = TestAndSet(lock);
waiting[i] = false;
critical section
j = (i+1) % n;
while(j != i && !waiting[j])
j = (j+1) % n;
if(j == i)
lock = false;
else
waiting[j] = false;
remainder section
}
9. Dining Philosophers Problem in OS
Overview

Dining Philosophers Problem in OS is a classical synchronization problem in the operating system.

With the presence of more than one process and limited resources in the system, the
synchronization problem arises. If one resource is shared between more than one process at the
same time then it can lead to data inconsistency.

Dining Philosophers Problem in OS

Consider two processes P1 and P2 executing simultaneously, while trying to access the same
resource R1, this raises the question of who will get the resource and when. This problem is solved
using process synchronization.

The act of synchronizing process execution such that no two processes have simultaneous access
to the same shared data and resources is referred to as process synchronization in operating systems.

It's particularly critical in a multi-process system where multiple processes are executing at the
same time and trying to access the very same shared resource or data.

This could lead to discrepancies in data sharing. As a result, modifications implemented by one
process may or may not be reflected when the other processes access the same shared data. The
processes must be synchronized with one another to avoid data inconsistency.

And the Dining Philosophers Problem is a typical example of the limitations of process
synchronization in systems with multiple processes and limited resources. According to the Dining
Philosophers Problem, assume there are K philosophers seated around a circular table, with one
chopstick placed between each pair of adjacent philosophers. This means that a philosopher can
eat only if he/she can pick up both chopsticks next to him/her. One of the adjacent philosophers
may take up one of the chopsticks, but not both.
For example, let’s consider P0, P1, P2, P3, and P4 as the philosophers or processes
and C0, C1, C2, C3, and C4 as the 5 chopsticks or resources between each philosopher. Now
if P0 wants to eat, both resources/chopsticks C0 and C1 must be free, which would
leave P1 and P4 void of the resource and the process wouldn't be executed, which indicates there
are limited resources(C0,C1..) for multiple processes(P0, P1..), and this problem is known as
the Dining Philosopher Problem.


The Solution of the Dining Philosophers Problem

The solution to this process synchronization problem is semaphores. A semaphore is an integer
variable used in solving critical section problems.

The critical section is a segment of the program in which the shared variables or resources are
accessed. A critical section requires atomicity: only a single process can run in that section at a
time.

A semaphore has 2 atomic operations: wait() and signal(). The wait() operation decrements the
semaphore S if its value is positive; it is used to acquire a resource on entry, and if S is zero
the caller must wait. The signal() operation increments S; it is used to release the resource on
exit, once the critical section has been executed.

Here's a simple explanation of the solution:

void Philosopher(int x)
{
    while(1)
    {
        // Philosopher x is thinking

        // Pick up both chopsticks (wait) before eating
        wait(use_resource[x]);
        wait(use_resource[(x + 1) % 5]);

        // Philosopher x is eating

        // Put both chopsticks down (signal) after eating
        signal(use_resource[x]);
        signal(use_resource[(x + 1) % 5]);
    }
}
Explanation:

• The wait() operations acquire the two chopsticks use_resource[x] and use_resource[(x + 1) %
5] on either side of philosopher x before he/she starts eating.

• After eating, the signal() operations release the same two chopsticks, and the philosopher
goes back to thinking while the neighbours are free to pick them up.

To model the Dining Philosophers Problem in a C program we will create an array of philosophers
(processes) and an array of chopsticks (resources). We will initialize the array of chopsticks with
locks to ensure mutual exclusion is satisfied inside the critical section.

We will run the array of philosophers in parallel to execute the critical section dine(); the critical
section consists of thinking, acquiring two chopsticks, eating, and then releasing the chopsticks.

C program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> // for sleep()
#include <pthread.h>
#include <semaphore.h>

#define NUM_PHILOSOPHERS 5
#define NUM_CHOPSTICKS 5

void *dine(void *num);

pthread_t philosopher[NUM_PHILOSOPHERS];
pthread_mutex_t chopstick[NUM_CHOPSTICKS];

int main()
{
    // Define counter var i and status_message
    int i, status_message;
    void *msg;

    // Initialise the chopstick mutex array
    for (i = 0; i < NUM_CHOPSTICKS; i++)
    {
        status_message = pthread_mutex_init(&chopstick[i], NULL);
        // Check if the mutex is initialised successfully
        if (status_message != 0)
        {
            printf("\n Mutex initialization failed");
            exit(1);
        }
    }

    // Run the philosopher threads using the dine() function
    for (i = 0; i < NUM_PHILOSOPHERS; i++)
    {
        status_message = pthread_create(&philosopher[i], NULL, dine, (void *)(long)i);
        if (status_message != 0)
        {
            printf("\n Thread creation error \n");
            exit(1);
        }
    }

    // Wait for all philosopher threads to complete executing (finish dining) before closing the
    // program
    for (i = 0; i < NUM_PHILOSOPHERS; i++)
    {
        status_message = pthread_join(philosopher[i], &msg);
        if (status_message != 0)
        {
            printf("\n Thread join failed \n");
            exit(1);
        }
    }

    // Destroy the chopstick mutex array
    for (i = 0; i < NUM_CHOPSTICKS; i++)
    {
        status_message = pthread_mutex_destroy(&chopstick[i]);
        if (status_message != 0)
        {
            printf("\n Mutex destroy failed \n");
            exit(1);
        }
    }
    return 0;
}
void *dine(void *num)
{
    // Philosophers are numbered 1..NUM_PHILOSOPHERS for display
    int n = (int)(long)num;

    printf("\nPhilosopher %d is thinking ", n + 1);

    // Philosopher picks up the left chopstick (wait)
    pthread_mutex_lock(&chopstick[n]);

    // Philosopher picks up the right chopstick (wait)
    pthread_mutex_lock(&chopstick[(n + 1) % NUM_CHOPSTICKS]);

    // After picking up both chopsticks, the philosopher starts eating
    printf("\nPhilosopher %d is eating ", n + 1);
    sleep(3);

    // Philosopher places down the left chopstick (signal)
    pthread_mutex_unlock(&chopstick[n]);

    // Philosopher places down the right chopstick (signal)
    pthread_mutex_unlock(&chopstick[(n + 1) % NUM_CHOPSTICKS]);

    // Philosopher finishes eating
    printf("\nPhilosopher %d Finished eating ", n + 1);
    return NULL;
}
Output

Philosopher 2 is thinking
Philosopher 2 is eating
Philosopher 3 is thinking
Philosopher 5 is thinking
Philosopher 5 is eating
Philosopher 1 is thinking
Philosopher 4 is thinking
Philosopher 4 is eating
Philosopher 2 Finished eating
Philosopher 5 Finished eating
Philosopher 1 is eating
Philosopher 4 Finished eating
Philosopher 3 is eating
Philosopher 1 Finished eating
Philosopher 3 Finished eating

Which condition must be avoided to prevent deadlock in the Dining Philosophers Problem?

All philosophers acquiring one chopstick at the same time

Two philosophers eating at the same time.

Both A and B

None of the above

Let's Understand How the Above Code is Giving a Solution to the Dining Philosopher Problem?

We start by importing the libraries pthread for threads and semaphore for synchronization. We
create an array of 5 pthread_t handles representing the philosophers and an array of 5 mutexes
representing the chopsticks.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> // for sleep()
#include <pthread.h>
#include <semaphore.h>

#define NUM_PHILOSOPHERS 5
#define NUM_CHOPSTICKS 5

void *dine(void *num);

pthread_t philosopher[NUM_PHILOSOPHERS];
pthread_mutex_t chopstick[NUM_CHOPSTICKS];
The pthread library is used for multi-threaded programming, which allows us to run parallel sub-
routines using threads. The <semaphore.h> header provides semaphore operations; in this version
the pthread mutexes play the role of binary semaphores.

We initialise the counter i and the status_message variable as int and a pointer msg, and
initialise the mutex array.

int main()
{
    // Define counter var i and status_message
    int i, status_message;
    void *msg;
    // Initialise the chopstick mutex array
    for (i = 0; i < NUM_CHOPSTICKS; i++)
    {
        status_message = pthread_mutex_init(&chopstick[i], NULL);
        // Check if the mutex is initialised successfully
        if (status_message != 0)
        {
            printf("\n Mutex initialization failed");
            exit(1);
        }
    }
We create the philosopher threads using pthread_create, passing the dine function as the
subroutine and the philosopher's index i as the argument.

// Run the philosopher threads using the dine() function
for (i = 0; i < NUM_PHILOSOPHERS; i++)
{
    status_message = pthread_create(&philosopher[i], NULL, dine, (void *)(long)i);
    if (status_message != 0)
    {
        printf("\n Thread creation error \n");
        exit(1);
    }
}
All the philosophers start by thinking. We pass chopstick[n] (the left chopstick) to
pthread_mutex_lock to wait on it and acquire a lock.

Then the thread waits on the right chopstick (chopstick[(n + 1) % NUM_CHOPSTICKS]) to acquire
a lock on it (pick it up).

void *dine(void *num)
{
    int n = (int)(long)num;

    printf("\nPhilosopher %d is thinking ", n + 1);

    // Philosopher picks up the left chopstick (wait)
    pthread_mutex_lock(&chopstick[n]);

    // Philosopher picks up the right chopstick (wait)
    pthread_mutex_lock(&chopstick[(n + 1) % NUM_CHOPSTICKS]);
When the philosopher successfully acquires lock on both the chopsticks, the philosopher starts
dining (sleep(3)).
// After picking up both chopsticks, the philosopher starts eating
printf("\nPhilosopher %d is eating ", n + 1);
sleep(3);
Once the philosopher finishes eating, we call pthread_mutex_unlock on the left chopstick
(signal) thereby freeing the lock. Then proceed to do the same on the right chopstick.

    // Philosopher places down the left chopstick (signal)
    pthread_mutex_unlock(&chopstick[n]);

    // Philosopher places down the right chopstick (signal)
    pthread_mutex_unlock(&chopstick[(n + 1) % NUM_CHOPSTICKS]);

    // Philosopher finishes eating
    printf("\nPhilosopher %d Finished eating ", n + 1);
    return NULL;
}
We loop through all philosopher threads and call pthread_join to wait for the threads to finish
executing before exiting the main thread.

for (i = 0; i < NUM_PHILOSOPHERS; i++)
{
    status_message = pthread_join(philosopher[i], &msg);
    if (status_message != 0)
    {
        printf("\n Thread join failed \n");
        exit(1);
    }
}
We loop through the chopstick array and call pthread_mutex_destroy to destroy the mutex
array.

for (i = 0; i < NUM_CHOPSTICKS; i++)
{
    status_message = pthread_mutex_destroy(&chopstick[i]);
    if (status_message != 0)
    {
        printf("\n Mutex destroy failed \n");
        exit(1);
    }
}
The Drawback of the Above Solution of the Dining Philosopher Problem

Through the above discussion, we established that no two neighbouring philosophers can eat at the
same time using the aforementioned solution to the dining philosopher problem. The disadvantage
of this technique is that it may result in deadlock: if all of the philosophers pick up their left
chopstick at the same moment, none of them can acquire a right chopstick, so none of the
philosophers can eat, and hence deadlock occurs.

We can also avoid deadlock through the following methods in this scenario -

1. The maximum number of philosophers at the table should not exceed four; let's
understand why allowing at most four philosophers works:

o Chopstick C4 will be accessible for philosopher P3, therefore P3 will begin eating
and then set down both chopsticks C3 and C4, indicating that
semaphore C3 and C4 will now be increased to one.

o Now that philosopher P2, who was holding chopstick C2, also has chopstick C3, he
will place his chopstick down after eating to allow other philosophers to eat.

2. The four starting philosophers (P0, P1, P2, and P3) must pick the left chopstick first, then
maybe the right, even though the last philosopher (P4) should pick the right chopstick
first, then the left. Let's have a look at what occurs in this scenario:

o This will compel P4 to hold his right chopstick first since his right chopstick is C0,
which is already held by philosopher P0 and whose value is set to 0, i.e. C0 is
already 0, trapping P4 in an unending loop and leaving chopstick C4 empty.

o As a result, because philosopher P3 has both left C3 and right C4 chopsticks, it will
begin eating and then put down both chopsticks once finished, allowing others to
eat, thereby ending the impasse.

3. A philosopher in an even position picks up the right chopstick first and then the left,
while a philosopher in an odd position picks up the left chopstick first and then the
right.

4. A philosopher should be permitted to pick up his or her chopsticks only if both (left
and right) are available at the same time.
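The odd/even ordering of methods 2 and 3 can be sketched in Python. This is a minimal illustration, not part of the C solution above; it uses threading.Lock objects as stand-ins for the binary chopstick semaphores, and the names NUM_PHILOSOPHERS and ROUNDS are assumptions for the demo:

```python
import threading

NUM_PHILOSOPHERS = 5
ROUNDS = 100

# One lock per chopstick; a Lock acts as a binary semaphore here.
chopsticks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]
meals = [0] * NUM_PHILOSOPHERS  # how many times each philosopher ate

def philosopher(i):
    left = i
    right = (i + 1) % NUM_PHILOSOPHERS
    # Odd/even rule: even philosophers take the right chopstick first,
    # odd philosophers the left, so a circular wait cannot form.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(ROUNDS):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1  # "eating"

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(NUM_PHILOSOPHERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # prints [100, 100, 100, 100, 100]
```

Because at least two neighbouring philosophers reach for the same chopstick first, the hold-and-wait cycle needed for deadlock can never close, and every thread finishes all of its rounds.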

What is the role of a semaphore in the Dining Philosophers Problem?

• To limit the number of processes that can access a resource

• To count the number of times a philosopher has eaten

• To signal philosophers when it is their turn to leave the table

• To ensure mutual exclusion when accessing shared resources

Answer: To ensure mutual exclusion when accessing shared resources — each chopstick
semaphore guarantees that only one philosopher holds that chopstick at a time.
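The role of a semaphore can be seen directly in a small experiment: a counting semaphore initialized to k admits at most k threads into a critical section at once, and with k = 1 it reduces to a mutual-exclusion lock. The following Python sketch is illustrative only; the thread count and variable names are assumptions, not taken from the text above:

```python
import threading

# A counting semaphore initialized to 2 admits at most two threads at once;
# initialized to 1 it would behave as a plain mutual-exclusion lock.
table = threading.Semaphore(2)
guard = threading.Lock()   # protects the two counters below
inside = 0                 # threads currently holding the semaphore
peak = 0                   # maximum observed concurrency

def diner():
    global inside, peak
    with table:                  # down()
        with guard:
            inside += 1
            peak = max(peak, inside)
        # ... access the shared resource here ...
        with guard:
            inside -= 1
    # leaving the "with table" block performs up()

threads = [threading.Thread(target=diner) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrency:", peak)  # never exceeds 2
```

However the ten threads interleave, `peak` can never exceed the semaphore's initial value, which is exactly the "limit the number of processes" behaviour, with mutual exclusion as the special case of value 1.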

Conclusion
• Process synchronization ensures that no two processes simultaneously access the same
shared data and resources in conflicting ways.

• The Dining philosopher problem is an example of a process synchronization problem.

• The philosopher is an analogy for a process and the chopstick for a resource; reasoning
about the table helps us solve process synchronization problems.

• The solution of the Dining Philosopher problem focuses on the use of semaphores.

• Under the basic semaphore solution, no two adjacent philosophers can eat at the same
time, but if every philosopher picks up his left chopstick simultaneously the system
deadlocks; this is the drawback of that solution.
10. Sleeping Barber Problem in Process Synchronization

The Sleeping Barber problem is a classic problem in process synchronization that is used to illustrate synchronization
issues that can arise in a concurrent system. The problem is as follows:

There is a barber shop with one barber and a number of chairs for waiting customers. Customers arrive at random
times and if there is an available chair, they take a seat and wait for the barber to become available. If there are no
chairs available, the customer leaves. When the barber finishes with a customer, he checks if there are any waiting
customers. If there are, he begins cutting the hair of the next customer in the queue. If there are no customers
waiting, he goes to sleep.

The problem is to write a program that coordinates the actions of the customers and the barber in a way that avoids
synchronization problems, such as deadlock or starvation.

One solution to the Sleeping Barber problem is to use semaphores to coordinate access to the waiting chairs and the
barber chair. The solution involves the following steps:

Initialize two semaphores: one for the number of waiting chairs and one for the barber chair. The waiting chairs
semaphore is initialized to the number of chairs, and the barber chair semaphore is initialized to zero.

Customers should acquire the waiting chairs semaphore before taking a seat in the waiting room. If there are no
available chairs, they should leave.

When a customer takes a seat, he signals the barber chair semaphore to wake the barber. When the
barber finishes cutting a customer's hair, he releases that customer's chair back to the waiting-chairs
semaphore and serves the next customer in the queue, if any.

The barber waits (sleeps) on the barber chair semaphore whenever there are no customers waiting.

The solution ensures that the barber never cuts the hair of more than one customer at a time, and that customers
wait if the barber is busy. It also ensures that the barber goes to sleep if there are no customers waiting.

However, there are variations of the problem that can require more complex synchronization mechanisms to avoid
synchronization issues. For example, if multiple barbers are employed, a more complex mechanism may be needed
to ensure that they do not interfere with each other.

Prerequisite – Inter Process Communication. The analogy is based on a hypothetical barber shop with
one barber, one barber chair, and n chairs in a waiting room for customers, if there are any, to sit on.

• If there is no customer, then the barber sleeps in his own chair.

• When a customer arrives, he has to wake up the barber.


• If there are many customers and the barber is cutting a customer’s hair, then the remaining customers either
wait if there are empty chairs in the waiting room or they leave if no chairs are empty.

Solution: The solution to this problem uses three semaphores. The first, customers, counts the number
of customers in the waiting room (the customer in the barber chair is not counted because he is no
longer waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The third, a
mutex, provides the mutual exclusion required around the shared state.

A customer keeps a record of the number of customers in the waiting room; if that number equals the
number of chairs, the arriving customer leaves the barbershop. When the barber shows up in the
morning, he executes the barber procedure and blocks on the customers semaphore, which is initially 0;
the barber therefore sleeps until the first customer arrives.

When a customer arrives, he executes the customer procedure and acquires the mutex before entering
the critical region; if another customer enters right after, the second one cannot do anything until the
first has released the mutex. The customer then checks the chairs: if the number of waiting customers
is less than the number of chairs, he sits down; otherwise he releases the mutex and leaves. If a chair
is available, the customer sits in the waiting room, increments the waiting count, and ups the customers
semaphore, which wakes the barber if he is sleeping. At this point both customer and barber are awake,
and the barber gives that person a haircut. When the haircut is over, the customer exits the procedure;
if no customers remain in the waiting room, the barber goes back to sleep.

Algorithm for Sleeping Barber problem:


Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;

Barber {
    while(true) {
        /* Wait for a customer; sleep if there are none. */
        down(Customers);

        /* Acquire the mutex protecting the seat count. */
        down(Seats);

        /* A waiting-room chair becomes free. */
        FreeSeats++;

        /* Bring one customer in for a haircut. */
        up(Barber);

        /* Release the mutex on the seats. */
        up(Seats);

        /* The barber is cutting hair. */
    }
}

Customer {
    while(true) {
        /* Acquire the mutex so only one customer
           manipulates the seat count at a time. */
        down(Seats);

        if(FreeSeats > 0) {
            /* Sit down in the waiting room. */
            FreeSeats--;

            /* Notify the barber. */
            up(Customers);

            /* Release the mutex. */
            up(Seats);

            /* Wait if the barber is busy. */
            down(Barber);

            /* The customer is having a haircut. */
        } else {
            /* No free chair: release the mutex and leave. */
            up(Seats);
        }
    }
}

The Sleeping Barber Problem is a classical synchronization problem in which a barber shop with one barber, a waiting
room, and a number of customers is simulated. The problem involves coordinating the access to the waiting room
and the barber chair so that only one customer is in the chair at a time and the barber is always working on a
customer if there is one in the chair, otherwise the barber is sleeping until a customer arrives.

Here’s a Python implementation of the Sleeping Barber Problem using semaphores:

import threading
import time
import random

# Maximum number of customers and number of chairs in the waiting room
MAX_CUSTOMERS = 5
NUM_CHAIRS = 3

# Semaphores for the barber, the customers, and the mutex
barber_semaphore = threading.Semaphore(0)
customer_semaphore = threading.Semaphore(0)
mutex = threading.Semaphore(1)

# List to keep track of the waiting customers
waiting_customers = []

# Barber thread function
def barber():
    while True:
        print("The barber is sleeping...")
        barber_semaphore.acquire()
        mutex.acquire()
        if len(waiting_customers) > 0:
            customer = waiting_customers.pop(0)
            print(f"The barber is cutting hair for customer {customer}")
            mutex.release()
            time.sleep(random.randint(1, 5))
            print(f"The barber has finished cutting hair for customer {customer}")
            customer_semaphore.release()
        else:
            mutex.release()

# Customer thread function
def customer(index):
    time.sleep(random.randint(1, 5))
    mutex.acquire()
    if len(waiting_customers) < NUM_CHAIRS:
        waiting_customers.append(index)
        print(f"Customer {index} is waiting in the waiting room")
        mutex.release()
        barber_semaphore.release()
        customer_semaphore.acquire()
        print(f"Customer {index} has finished getting a haircut")
    else:
        print(f"Customer {index} is leaving because the waiting room is full")
        mutex.release()

# Create a daemon thread for the barber so the program can exit
# once all customers have been served
barber_thread = threading.Thread(target=barber, daemon=True)

# Create a thread for each customer
customer_threads = []
for i in range(MAX_CUSTOMERS):
    customer_threads.append(threading.Thread(target=customer, args=(i,)))

# Start the barber and customer threads
barber_thread.start()
for thread in customer_threads:
    thread.start()

# Wait for the customer threads to finish
for thread in customer_threads:
    thread.join()

Output:
The barber is sleeping…
Customer 0 is waiting in the waiting room
The barber is cutting hair for customer 0
Customer 1 is waiting in the waiting room
Customer 2 is waiting in the waiting room
The barber has finished cutting hair for customer 0
Customer 0 has finished getting a haircut
The barber is cutting hair for customer 1
Customer 3 is waiting in the waiting room
The barber has finished cutting hair for customer 1
Customer 1 has finished getting a haircut
The barber is cutting hair for customer 2
Customer 4 is waiting in the waiting room
The barber has finished cutting hair for customer 2
Customer 2 has finished getting a haircut
The barber is cutting hair for customer 3
Customer 3 has finished getting a haircut
The barber is cutting hair for customer 4
Customer 4 has finished getting a haircut
The barber is sleeping…

The Sleeping Barber Problem is a classical synchronization problem that involves coordinating the access to a barber
chair and a waiting room in a barber shop. The problem requires the use of semaphores or other synchronization
mechanisms to ensure that only one customer is in the barber chair at a time and that the barber is always working
on a customer if there is one in the chair, otherwise the barber is sleeping until a customer arrives.

Advantages of using synchronization mechanisms to solve the Sleeping Barber Problem include:

1. Efficient use of resources: The use of semaphores or other synchronization mechanisms ensures that
resources (e.g., the barber chair and waiting room) are used efficiently, without wasting resources or causing
unnecessary delays.

2. Prevention of race conditions: By ensuring that only one customer is in the barber chair at a time and that
the barber is always working on a customer if there is one in the chair, synchronization mechanisms prevent
race conditions that could lead to errors or incorrect results.

3. Fairness: Synchronization mechanisms can be used to ensure that all customers have a fair chance to be
served by the barber.

However, there are also some disadvantages to using synchronization mechanisms to solve the Sleeping Barber
Problem, including:

1. Complexity: Implementing synchronization mechanisms can be complex, especially for larger systems or
more complex synchronization scenarios.

2. Overhead: Synchronization mechanisms can introduce overhead in terms of processing time, memory
usage, and other system resources.
3. Deadlocks: Incorrectly implemented synchronization mechanisms can lead to deadlocks, where processes
are unable to proceed because they are waiting for resources that are held by other processes.

Overall, the advantages of using synchronization mechanisms to solve the Sleeping Barber Problem
generally outweigh the disadvantages, as long as the mechanisms are implemented correctly and efficiently.
