OS Unit 2
Concurrent Processes:
Process
A process is a program in execution, progressing sequentially. It represents the basic unit of work
in an operating system.
When a program is loaded into memory and starts execution, it becomes a process. It is divided
into four main sections:
1. Stack – Stores temporary data like function calls, return addresses, and local variables.
2. Heap – Memory that is dynamically allocated to the process during run time.
3. Data – Stores global and static variables.
4. Text – Contains the compiled program code (the instructions to be executed).
Program
#include <stdio.h>
int main() {
    printf("Hello, World!\n");
    return 0;
}
A program becomes a process when it is executed. A step-by-step procedure for solving a specific
task within a program is called an algorithm. A complete set of programs, libraries, and data forms software.
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can be in one of the following five states at a time: New, Ready, Running,
Waiting (Blocked), and Terminated.
A Process Control Block (PCB) is a data structure used by the Operating System to store
essential information about a process. Each process has a unique PCB identified by a Process ID
(PID). It helps the OS manage and track processes.
Before defining the Producer-Consumer problem, we should understand what a producer and a
consumer are: the producer process generates data items and places them in a shared buffer,
while the consumer process removes items from that buffer and consumes them. For correct
behaviour, three conditions must hold:
1. Producer Process should not produce any data when the shared buffer is full.
2. Consumer Process should not consume any data when the shared buffer is empty.
3. The access to the shared buffer should be mutually exclusive, i.e., at any time only one process
should be able to access the shared buffer and make changes to it.
For consistent data synchronization between Producer and Consumer, the above problem should
be resolved.
Semaphores are variables used to indicate the number of resources available in the system at a
particular time. Semaphore variables are used to achieve process synchronization.
Full
The Full variable tracks the number of filled slots in the buffer. It is initialized to 0, since
initially no slot has been filled by the Producer process.
Empty
The Empty variable tracks the number of empty slots in the buffer. It is initialized to
BUFFER_SIZE, since initially the whole buffer is empty.
Mutex
Mutex is used to achieve mutual exclusion: it ensures that at any particular time only the
producer or the consumer is accessing the buffer, never both.
We will use the wait() and signal() operations on the above-mentioned semaphores to arrive at a
solution to the Producer-Consumer problem.
signal() – increases the semaphore value by 1.
wait() – decreases the semaphore value by 1.
void Producer(){
    while(true){
        // producer produces an item/data
        wait(Empty);
        wait(mutex);
        add();
        signal(mutex);
        signal(Full);
    }
}
Let's understand the above Producer process code :
• wait(Empty) - Before producing an item, the producer process checks for empty space in the
buffer. If the buffer is full, the producer waits for the consumer process to consume items
from the buffer. Hence the producer process executes wait(Empty) before producing any item.
• wait(mutex) - Only one process can access the buffer at a time. So, once the producer
process enters into the critical section of the code it decreases the value of mutex by
executing wait(mutex) so that no other process can access the buffer at the same time.
• add() - This method adds the item produced by the Producer process to the buffer. Once the
producer reaches add() in the code, it is guaranteed that no other process can access the
shared buffer concurrently, which preserves data consistency.
• signal(mutex) - Once the producer has added the item to the buffer, it increases the mutex
value by 1 so that other processes that were busy-waiting can access the critical section.
• signal(Full) - When the producer adds an item to the buffer, one more slot is filled, so it
increments the Full semaphore to correctly reflect the number of filled slots in the buffer.
void Consumer() {
    while(true){
        // consumer consumes an item
        wait(Full);
        wait(mutex);
        consume();
        signal(mutex);
        signal(Empty);
    }
}
Let's understand the above Consumer process code :
• wait(Full) - Before consuming any item, the consumer process checks whether the buffer holds
any items. wait(Full) decreases the value of the Full semaphore by 1. If Full is already 0, i.e.
the buffer is empty, the consumer cannot consume any item and goes into a busy-waiting state.
Each item the consumer removes later creates one more empty slot in the buffer.
• wait(mutex) - It does the same as explained in the producer process. It decreases the mutex
by 1 and restricts another process to enter the critical section until the consumer process
increases the value of mutex by 1.
• consume() - This function removes an item from the buffer. While the code is inside
consume(), no other process can access the critical section, which maintains data
consistency.
• signal(mutex) - After consuming the item it increases the mutex value by 1 so that other
processes which are in a busy-waiting state can access the critical section now.
• signal(Empty) - when a consumer process consumes an item it increases the value of the
Empty variable indicating that the empty space in the buffer is increased by 1.
Mutex is used in the producer-consumer problem because it provides mutual exclusion: it
prevents more than one process from entering the critical section at a time. A mutex takes only
the binary values 0 and 1, so whenever a process tries to enter the critical section it first
checks the mutex value using the wait operation.
wait(mutex);
wait(mutex) decreases the value of mutex by 1. Suppose a process P1 tries to enter the critical
section when the mutex value is 1: P1 executes wait(mutex), the value of mutex becomes 0, and P1
enters the critical section.
Now suppose process P2 tries to enter the critical section. It will again try to decrease the
value of mutex, but the mutex value is already 0, so wait(mutex) cannot complete and P2 keeps
waiting for P1 to come out of the critical section.
signal(mutex)
signal(mutex) increases the value of mutex by 1, so the mutex value becomes 1 again. Now
process P2, which was in a busy-waiting state, will be able to enter the critical section by
executing wait(mutex).
Conclusion
• Producer Process produces data item and consumer process consumes data item.
• Both producer and consumer processes share a common memory buffer.
• Producer should not produce any item if the buffer is full.
• Consumer should not consume any item if the buffer is empty.
• No more than one process should access the buffer at a time, i.e., mutual exclusion must hold.
• The Full, Empty, and mutex semaphores help solve the Producer-Consumer problem.
• The Full semaphore counts the filled slots in the buffer.
• The Empty semaphore counts the empty slots in the buffer.
• The mutex semaphore enforces mutual exclusion.
3. Mutual Exclusion in Operating Systems
1. Introduction to Mutual Exclusion
Definition
Mutual exclusion (Mutex) is a fundamental synchronization principle that ensures only one
process can access a shared resource (critical section) at any given time. If other processes need
to execute in their critical sections, they must wait until it is free.
Basic Concept
• A condition where a thread of execution never enters a critical section at the same time
as a concurrent thread
2. Critical Sections
• A segment of code where a process accesses and potentially modifies shared resources
• Examples: updating shared variables, modifying files, or changing shared data structures
1. Mutual Exclusion: Only one process can execute in its critical section at any time
2. Bounded Waiting: There is a bound on the number of times other processes may enter their
critical sections after a process has requested entry and before that request is granted
3. Progress/No Blocking: No process executing outside its critical section should prevent
another process from entering its critical section
• The system must return to a stable state after processes complete their critical sections
Example of a race condition — two threads deleting adjacent nodes of a shared linked list:
• Thread 1 removes node i by modifying node (i-1)'s next reference to point to node (i+1)
• Thread 2 simultaneously removes node (i+1) by modifying node i's next reference to point
to node (i+2)
• Result: an inconsistent state where node (i+1) is still referenced in the list
Such races lead to:
• Difficult-to-reproduce bugs
• System instability
5. Implementation Techniques
1. Locks/Mutex
2. Recursive Locks
• Can be locked multiple times by the same thread without deadlock
• Released only when the owner has released it as many times as acquired
3. Semaphores
• Types:
o Counting semaphores: Can take multiple values, used for resource counting
5. Atomic Operations
6. Software-based Algorithms
1. Printer Spooling
Deadlocks
• Two or more processes waiting indefinitely for resources held by each other
Priority Inversion
Performance Considerations
Visualization Approaches
A Critical Section is a part of a program that accesses shared resources (e.g., variables, memory,
files) and must not be executed by more than one process or thread at the same time to avoid
race conditions and ensure data consistency.
When multiple processes access or modify shared resources concurrently, race conditions may
occur — leading to incorrect results.
Any solution to the critical section problem must satisfy three key conditions:
Property — Meaning
Mutual Exclusion — Only one process can execute in the critical section at a time.
Progress — If no process is in the critical section, the decision of who enters next should not
be postponed indefinitely.
Bounded Waiting — There should be a limit on the number of times other processes can enter
their critical sections before a waiting process is granted access.
do {
    // Entry Section
    acquireLock();
    // Critical Section
    // Access shared resource
    // Exit Section
    releaseLock();
    // Remainder Section
} while (true);
Mechanism — Description
Mutex (Mutual Exclusion Object) — Lock-based solution with acquire() and release() operations.
Semaphores — Integer-based control with atomic wait() and signal() operations.
Test-and-Set — Hardware-level atomic instruction that prevents simultaneous access.
Compare-and-Swap — Replaces a value only if it matches the expected value; used for locking.
Condition Variables — Used for blocking processes until a certain condition is true.
• Non-Preemptive Kernel: A process runs until it blocks or finishes. Safer, easier to manage.
Issue Explanation
Deadlock Processes wait indefinitely for each other to release resources.
Starvation A process waits too long and never gets access.
Overhead Extra processing to acquire/release locks may reduce performance.
Reduced Parallelism Only one process can work at a time on shared data.
Real-World Example
Banking Scenario:
Impact on Scalability
Factor Impact
Bottlenecks Multiple processes waiting for the same resource.
Reduced Parallelism Limits concurrency in highly parallel systems.
Lock Overhead Cost of managing synchronization in complex systems.
• Potential Deadlocks
• Reduced Concurrency
• Performance Overhead
Summary
Aspect Details
What A block of code accessing shared resources
Why Prevent race conditions, maintain consistency
How By using synchronization mechanisms
Key Properties Mutual Exclusion, Progress, Bounded Waiting
Common Solutions Mutexes, Semaphores, Test-and-Set, etc.
Challenges Deadlock, Starvation, Overhead, Debugging
Takeaway: Critical sections are fundamental in concurrent programming. Proper design and
use of synchronization mechanisms are key to building reliable, efficient, and scalable systems.
5. Dekker’s Algorithm in Process Synchronization
✦ Introduction
Dekker’s Algorithm was the first correct software-based solution to the Critical Section Problem
for two processes. It was invented by Dutch mathematician Theodorus Dekker in the 1960s and
is designed to ensure mutual exclusion using only shared memory and basic atomic operations.
When multiple processes share resources (like CPU, memory, I/O), we must ensure that only one
process enters its critical section at a time to avoid race conditions.
2. Progress – A process outside the critical section should not block others.
do {
    // Entry Section
    // Critical Section
    // Exit Section
    // Remainder Section
} while (TRUE);
• Uses:
o Mutual Exclusion
o Progress
o Bounded Waiting
Idea:
If both processes want to enter the critical section simultaneously, only the process whose turn
it is will proceed. The other will wait.
Variables:
• flag[2] – flag[i] = true indicates that process Pi wants to enter the critical section.
• turn – indicates which process has priority when both want to enter.
do {
    flag[i] = true;
    while (flag[j]) {
        if (turn == j) {
            flag[i] = false;
            while (turn == j); // busy wait
            flag[i] = true;
        }
    }
    // Critical Section
    turn = j;
    flag[i] = false;
    // Remainder Section
} while (TRUE);
✦ Working Explained
4. After exit: the leaving process sets turn = j and flag[i] = false, giving the other process
priority to enter next.
✦ Advantages
✦ Disadvantages
✦ Conclusion
Dekker’s Algorithm is a pioneering approach in the field of process synchronization. It laid the
foundation for modern algorithms like Peterson’s and provided a software-only solution to the
classic Critical Section Problem.
6. Peterson’s Solution in Operating Systems
1. Introduction
• Peterson's Algorithm was proposed by Gary L. Peterson in 1981 as a solution to the Critical
Section Problem for two processes.
• It is a software-based solution, using shared memory to manage synchronization between
processes.
• The algorithm ensures mutual exclusion, meaning no two processes can enter their critical
section at the same time.
• It guarantees progress, ensuring that if no process is in its critical section, one of the waiting
processes will eventually enter.
• The algorithm provides bounded waiting, preventing indefinite blocking of a process trying
to enter its critical section.
• No hardware-level synchronization is required, making it a purely software-based approach
to process synchronization.
• It is an essential and classical solution for two-process systems in operating systems and
process management.
2. Definition
Peterson's Solution allows two processes to safely share a single-use resource (i.e., critical
section) without conflicts or race conditions.
Key Variables:
• flag[2]: Boolean array where flag[i] = true indicates that process Pi wants to enter the
critical section.
• turn: Integer indicating whose turn it is to enter the critical section.
3. Working Principle
For Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j); // wait
    // Critical Section
    flag[i] = false;
    // Remainder Section
} while (true);
For Process Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i); // wait
    // Critical Section
    flag[j] = false;
    // Remainder Section
} while (true);
4. Explanation
1. Both processes set their flag[i] = true indicating intent to enter the critical section.
2. The turn is given to the other process.
3. A process waits in the loop if the other process is also interested (flag[j] == true) and it’s
the other’s turn.
4. The process exits the loop only when either the other is not interested or it’s not the
other’s turn.
5. After executing the critical section, it resets flag[i] = false.
#include <stdio.h>
#include <pthread.h>
volatile int flag[2];
volatile int turn;
int val = 0;
void lock_init() { flag[0] = flag[1] = 0; turn = 0; }
void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000000; k++) {
        flag[i] = 1;                  // entry section: declare intent
        turn = j;                     // give the other thread priority
        while (flag[j] && turn == j); // busy wait
        val++;                        // critical section
        flag[i] = 0;                  // exit section
    }
    printf("Thread : %d\n", i);
    return NULL;
}
int main() {
    pthread_t t1, t2;
    int id0 = 0, id1 = 1;
    lock_init();
    pthread_create(&t1, NULL, worker, &id0);
    pthread_create(&t2, NULL, worker, &id1);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Final Value: %d\n", val);
    return 0;
}
Expected output (note: volatile ints are for illustration only; on real hardware
Peterson's algorithm additionally needs memory barriers or atomics to be reliable):
Thread : 0
Thread : 1
Final Value: 200000000
7. Advantages
• Simple and easy to understand.
8. Disadvantages
Entry Section
1. P1 sets flag[0] = TRUE and turn = 1 to request entry.
2. If P2 also wants access, it sets flag[1] = TRUE and turn = 0, then waits in its loop while
flag[0] == TRUE && turn == 0.
3. Once P1 exits the critical section, it sets flag[0] = FALSE, allowing P2 to enter.
10. Properties Ensured
Property Description
Mutual Exclusion Only one process in the critical section.
Progress No deadlock; one process doesn’t block the other unnecessarily.
Bounded Waiting Each process will get access to the critical section eventually.
Portability Fully software-based and can be run on any hardware.
11. Conclusion
Peterson’s Algorithm is a classic solution for the critical section problem. While it’s not used in
modern OS due to hardware support (like semaphores, mutexes, etc.), it remains important in
academic settings to understand the principles of process synchronization and concurrency.
7. Semaphore in OS
Overview
Semaphore is essentially a non-negative integer that is used to solve the critical section problem
by acting as a signal. It is a concept in operating systems for the synchronization of concurrent
processes.
If you have read about Process Synchronization, you are aware of the critical section problem that
arises for concurrent processes.
Concurrent processes are those that execute simultaneously or in parallel and may or may not
depend on other processes. Process synchronization can be defined as the coordination between
two or more processes that have access to shared resources such as a common section of code,
data, or devices.
For example: There may be some resource that is shared by 3 different processes, and none of
the processes at a certain time can change the resource, since that might ruin the results of the
other processes sharing the same resource. You'll understand it more clearly soon.
Now, this Process Synchronization is required for concurrent processes. For any number of
processes that are executing simultaneously, let's say all of them need to access a section of the
code. This section is called the Critical Section.
Now that you are familiar with these terms, we can move on to understanding the need
for Semaphores with an example.
We have 2 processes, that are concurrent and since we are talking about Process Synchronization,
let's say they share a variable "shared" which has a value of 5. What is our goal here? We want to
achieve mutual exclusion, meaning that we want to prevent simultaneous access to a shared
resource. The resource here is the variable "shared" with the value 5.
int shared = 5
Process 1
int x = shared;
x++;
sleep(1);
shared = x;
Process 2
int y = shared;
y--;
sleep(1);
shared = y;
We start with the execution of process 1, in which we declare a variable x which has initially the
value of the shared variable which is 5. The value of x is then incremented, and it becomes 6, and
post that the process goes into a sleep state. Since the current processing is concurrent, the CPU
does not wait and starts the processing of process 2. The integer y has the value of the shared
variable initially which is unchanged, and is 5.
Then we decrement the value of y and process 2 goes into a sleep state. We move back to
process 1 and the value of the shared variable becomes 6. Once that process is complete, in
process 2 the value of the shared variable is changed to 4.
One would think that if we increment and decrement a number, its value should be unchanged
and that is exactly what was happening in the two processes, however in this case the value of
the "shared" variable is 4, and this is undesired.
For example: if we have 5 instances of a resource and one process acquires an instance
(decrementing the count to 4, just as process 1 did in our example) while another process
simultaneously releases an instance it had taken earlier (incrementing the count), the same
interleaving can occur and the final count may end up as 4 when it should have been 5.
This is called a race condition, and due to this condition, problems such as deadlock may occur.
Hence we need proper synchronization between processes, and to prevent these, we use a
signaling integer variable, called - Semaphore.
So to formally define Semaphore we can say that it is an integer variable that is used in a mutually
exclusive manner by concurrent processes, to achieve synchronization.
Since semaphores are integer variables, their value acts as a signal, which allows or does not
allow a process to access the critical section of code or certain other resources.
Types of Semaphore in OS
There are mainly two types of Semaphores, or two types of signaling integer variables:
1. Binary Semaphores
2. Counting Semaphores
Binary Semaphores
In these types of Semaphores the integer value of the semaphore can only be either 0 or 1. If the
value of the Semaphore is 1, it means that the process can proceed to the critical section (the
common section that the processes need to access). However, if the value of the binary
semaphore is 0, then the process cannot continue to the critical section of the code.
When a process is using the critical section of the code, we change the Semaphore value to 0,
and when a process is not using it, or we can allow a process to access the critical section, we
change the value of semaphore to 1.
Counting Semaphores
Counting semaphores are signaling integers that can take on any integer value. Using
these Semaphores we can coordinate access to resources and here the Semaphore count is the
number of resources available. If the value of the Semaphore is anywhere above 0, processes can
access the critical section or the shared resources. The number of processes that can access the
resources/code is the value of the semaphore. However, if the value is 0, it means that there
aren't any resources that are available or the critical section is already being accessed by a
number of processes and cannot be accessed by more processes. Counting semaphores are
generally used when the number of instances of a resource is more than 1, and multiple
processes can access the resource.
Example of Semaphore in OS
Now that we know what semaphores are and their types, we must understand their working. As
we read above, our goal is to synchronize processes and provide mutual exclusion in the critical
section of our code. So, we have to introduce a mechanism that wouldn't allow more
than 1 process to access the critical section using the signaling integer - semaphore.
semaphore mutex = 1;
process i
begin
    .
    .
    P(mutex);
    execute Critical Section
    V(mutex);
    .
    .
end;
Here in this piece of pseudocode, we have declared a semaphore in line 1, which has the value
of 1 initially. We then start the execution of a process i which has some code, and then as you can
see, we call a function "P" which takes the value of mutex/semaphore as input and we then
proceed to the critical section, followed by a function "V" which also takes the value
of mutex/semaphore as input. Post that, we execute the remainder of the code, and the process
ends.
Remember, we discussed that semaphore is a signaling variable, and whether or not the process
can proceed to the critical section depends on its value. And in binary and counting semaphores
we read that we change the value of the semaphore according to the resources available. With
this thought, let's move further to read about these "P" and "V" functions in the above
pseudocode.
Wait and Signal operations in semaphores are nothing but those "P" and "V" functions that we
read above.
Wait Operation
The wait operation, also called the "P" function, sleep, decrease, or down operation, is the
semaphore operation that controls the entry of a process into a critical section. If the value of
the mutex/semaphore is positive then we decrease the value of the semaphore and let the
process enter the critical section.
Note that this function is only called before the process enters the critical section, and not after
it.
In pseudocode:
P(semaphore){
    while (semaphore <= 0); // busy wait until the value is positive
    decrement semaphore by 1
}
Signal Operation
The function "V", or the wake-up, increase or up operation is the same as the signal function, and
as we know, once a process has exited the critical section, we must update the value of the
semaphore so that we can signal the new processes to be able to access the critical section.
For the updation of the value, once the process has exited the critical section, since we had
decreased the value of the semaphore by 1 in the wait operation, here we simply increment it.
Note that this function is added only after the process exits the critical section and cannot be
added before the process enters the section.
In pseudocode:
V(semaphore){
    increment semaphore by 1
}
If the value of the semaphore was initially 1, then on the entering of the process into the critical
section, the wait function would have decremented the value to 0 meaning that no more
processes can access the critical section (making sure of mutual exclusion -- only in binary
semaphores). Once the process exits the critical section, the signal operation is executed and the
value of the semaphore is incremented by 1, meaning that the critical section can now be
accessed by another process.
Let's take a look at the implementation of the semaphores with two processes P1 and P2. For
simplicity of the example, we will take 2 processes, however, semaphores are used for a very large
number of processes.
Binary Semaphores:
Initially, the value of the semaphore is 1. When the process P1 enters the critical section, the
value of the semaphore becomes 0. If P2 would want to enter the critical section at this time, it
wouldn't be able to, since the value of the semaphore is not greater than 0. It will have to wait till
the semaphore value is greater than 0, and this will happen only once P1 leaves the critical
section and executes the signal operation which increments the value of the semaphore.
This is how mutual exclusion is achieved using binary semaphore i.e. both processes cannot
access the critical section at the same time.
Code:
struct Semaphore{
    enum value(0, 1);
    /* This queue contains all the process control blocks (PCB) of the
       processes that get blocked while performing the wait operation */
    Queue<process> processes;
}

wait (Semaphore mutex){
    if (mutex.value == 1){
        mutex.value = 0;
    }
    else {
        // add the calling process to the waiting queue and block it
        mutex.processes.push(p);
        block();
    }
}

signal (Semaphore mutex){
    if (mutex.processes is empty){
        mutex.value = 1;
    }
    else {
        // selecting a process from the waiting queue which can next access the critical section
        process p = mutex.processes.pop();
        wakeup(p);
    }
}
In this code above we have implemented a binary semaphore that provides mutual exclusion.
Counting Semaphores:
In counting semaphores, the only difference is that we will have a number of resources, or in
other words a set number of processes that can access the critical section at the same time.
Let's say we have a resource that has 4 instances, hence making the initial value of semaphore =
4. Whenever a process requires access to the critical section/resource we call the wait function
and decrease the value of the semaphore by 1 only if the value of the semaphore is greater
than 0. When 4 processes have accessed the critical section/resource and the 5th
process requires it as well, we put it in the waiting queue and wake it up only when a process has
executed the signal function, meaning that the value of the semaphore has gone up by 1.
struct Semaphore{
    int value;
    Queue<process> processes;
}
Now that we have understood the working of semaphores, we can take a look at the real-life
application of semaphores in classic synchronization problems.
Problem Statement
The problem statement states that we have a buffer that is of fixed size, and the producer will
produce items and place them in the buffer. The consumer can pick items from the buffer and
consume them. Our task is to ensure that when the item is placed in the buffer by the producer,
the consumer should not consume it at the same time the producer produces and places an item
into the buffer. The critical section here is the buffer.
Solution
So to solve this problem, we will use 2 counting semaphores, namely "full" and "empty". The
counting semaphore "full" will keep track of all the slots in the buffer that are used, i.e. track of
all the items in the buffer. And of course, the "empty" semaphore will keep track of the slots in
the buffer that are empty, and the value of mutex will be 1.
Initially, the value of the semaphore "full" will be 0 since all slots in the buffer are unoccupied and
the value of the "empty" buffer will be 'n', where n is the size of the buffer since all slots are
initially empty.
For example, if the size of the buffer is 5, then the semaphore full = 0, since all the slots in
the buffer are unoccupied and empty = 5.
The deduced solution for the producer section of the problem is:
do{
// producer produces an item
wait(empty);
wait(mutex);
// put the item into the buffer
signal(mutex);
signal(full);
} while(true)
In the above code, we call the wait operations on the empty and mutex semaphores when the
producer produces an item. Since an item is produced, it must be placed in the buffer, reducing
the number of empty slots by 1, hence we call the wait operation on the empty semaphore. We
must also decrement mutex to prevent the consumer from accessing the buffer. After the producer
has placed the item into the buffer, we increment the "full" semaphore by 1 and also increment
mutex, as the producer has completed its task and the consumer can now access the buffer.
do{
    wait(full);
    wait(mutex);
    // removal of the item from the buffer
    signal(mutex);
    signal(empty);
    // consumer now consumes the item
} while(true);
The consumer needs to consume the items that are produced by the producer. So when the
consumer is removing the item from the buffer to consume it we need to reduce the value of
the "full" semaphore by 1 since one slot will be emptied, and we also need to decrement the
value of mutex so that the producer does not access the buffer.
Now that the consumer has consumed the item, we can increment the value of the empty
semaphore along with the mutex by 1.
Advantages of Semaphores
As we have read throughout the article, semaphores have proven to be extremely useful
in process synchronization. Here are the advantages summarized:
• They allow processes into the critical section one by one and provide strict mutual
exclusion (in the case of binary semaphores).
Disadvantage of Semaphores
We have already discussed the advantages of semaphores; however, semaphores also have some
disadvantages. They are:
• Semaphores are slightly complicated and the implementation of the wait and signal
operations should be done in such a manner, that deadlocks are prevented.
• The usage of semaphores may cause priority inversion where the high-priority processes
might get access to the critical section after the low-priority processes.
8. Hardware Synchronization Algorithms : Unlock and
Lock, Test and Set, Swap
Process synchronization problems occur when two processes running concurrently share the same
data or the same variable: the value of that variable may not be updated correctly before it is
used by the second process. Such a condition is known as a race condition. There are software as
well as hardware solutions to this problem. In this article, we will discuss the hardware
solutions to process synchronization problems and their implementation.
There are three algorithms in the hardware approach to solving the process synchronization
problem:
1. Test and Set
2. Swap
3. Unlock and Lock
Hardware instructions in many operating systems help in the effective solution of critical section
problems.
1. Test and Set:
Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) works as
follows: it returns the current value of lock and then sets lock to true. The first process
enters the critical section at once, as TestAndSet(lock) returns false and the process breaks
out of the while loop. Other processes cannot enter now, since lock is true and their while
condition remains true; mutual exclusion is ensured. Once the first process leaves the critical
section, it sets lock back to false, so the other processes can enter one by one; progress is
also ensured. However, after the first process, any process may go in: no queue is maintained,
so any new process that finds lock to be false can enter. Hence bounded waiting is not ensured.
while(1){
    while (TestAndSet(lock));
    critical section
    lock = false;
    remainder section
}
2. Swap:
The Swap algorithm is a lot like TestAndSet. Instead of directly setting lock to true, key is
set to true and then swapped with lock. The first process sets key = true and, since lock is
initially false, the swap makes lock = true and key = false; on the next check of while(key),
key is false, so the loop breaks and the first process enters the critical section. When
another process now tries to enter, it sets key = true, and because lock is already true the
swap leaves both lock and key true; while(key) keeps looping, so the other process cannot
enter the critical section. Therefore mutual exclusion is ensured. On leaving the critical
section, the first process sets lock to false, so any process that finds it false gets to
enter the critical section. Progress is ensured. However, bounded waiting is again not
ensured, for the very same reason as with TestAndSet.
Swap Pseudocode –
boolean lock;
boolean key;
while (1){
    key = true;
    while(key)
        swap(lock,key);
    critical section
    lock = false;
    remainder section
}
3. Unlock and Lock:
The Unlock and Lock algorithm extends TestAndSet with a waiting[] array, one entry per
process. On exit, a process scans the array and hands the critical section directly to the
next waiting process, so processes enter in order and bounded waiting is ensured.
boolean lock;
boolean key;
boolean waiting[n];
while(1){
    waiting[i] = true;
    key = true;
    while(waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;
    critical section
    j = (i+1) % n;
    while(j != i && !waiting[j])
        j = (j+1) % n;
    if(j == i)
        lock = false;
    else
        waiting[j] = false;
    remainder section
}
9. Dining Philosophers Problem in OS
Overview
Consider two processes P1 and P2 executing simultaneously while trying to access the same
resource R1. This raises the question of who will get the resource, and when. The problem is
solved using process synchronization.
Process synchronization in operating systems is the act of coordinating process execution so
that no two processes simultaneously access the same shared data and resources.
It is particularly critical in a multi-process system where multiple processes are executing at
the same time and trying to access the very same shared resource or data.
Uncoordinated access can lead to inconsistencies in shared data: modifications made by one
process may or may not be reflected when the other processes access the same shared data. The
processes must be synchronized with one another to avoid data inconsistency.
The critical section is a segment of the program that allows you to access the shared variables or
resources. In a critical section, an atomic action (independently running process) is needed, which
means that only a single process can run in that section at a time.
A semaphore S has two atomic operations: wait() and signal(). The wait() operation decrements
S if its value is positive; it is used to acquire a resource on entry. No decrement is done if
S is zero or negative; the caller waits instead. The signal() operation increments S; it is
used to release the resource once the critical section has been executed, on exit.
void Philosopher(int x)
{
while(1)
{
// Pick up the left and right chopsticks to eat
wait(chopstick[x]);
wait(chopstick[(x + 1) % 5]);
// Section where the philosopher is eating
// Put down both chopsticks and go back to thinking
signal(chopstick[x]);
signal(chopstick[(x + 1) % 5]);
}
}
Explanation:
• The wait() operations run when the philosopher wants to eat while the others are thinking:
the semaphores chopstick[x] and chopstick[(x + 1) % 5] are acquired, i.e. the left and right
chopsticks are picked up.
• After eating, the signal() operations release chopstick[x] and chopstick[(x + 1) % 5],
signifying that the philosopher has put both chopsticks down and returned to thinking.
To model the Dining Philosophers Problem in a C program we will create an array of philosophers
(processes) and an array of chopsticks (resources). We will initialize the array of chopsticks with
locks to ensure mutual exclusion is satisfied inside the critical section.
We will run the array of philosophers in parallel to execute the critical section (dine());
the critical section consists of thinking, acquiring two chopsticks, eating, and then
releasing the chopsticks.
C program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#define NUM_PHILOSOPHERS 5
#define NUM_CHOPSTICKS 5
pthread_t philosopher[NUM_PHILOSOPHERS];
pthread_mutex_t chopstick[NUM_CHOPSTICKS];
int main()
{
// Define counter var i and status_message
int i, status_message;
void *msg;
// Wait for all philosopher threads to finish dining before closing the program
for (i = 0; i < NUM_PHILOSOPHERS; i++)
{
status_message = pthread_join(philosopher[i], &msg);
if (status_message != 0)
{
printf("\n Thread join failed \n");
exit(1);
}
}
Philosopher 2 is thinking
Philosopher 2 is eating
Philosopher 3 is thinking
Philosopher 5 is thinking
Philosopher 5 is eating
Philosopher 1 is thinking
Philosopher 4 is thinking
Philosopher 4 is eating
Philosopher 2 Finished eating
Philosopher 5 Finished eating
Philosopher 1 is eating
Philosopher 4 Finished eating
Philosopher 3 is eating
Philosopher 1 Finished eating
Philosopher 3 Finished eating
Let's understand how the above code solves the Dining Philosophers Problem.
We start by importing the libraries pthread for threads and semaphore for synchronization. We
then create an array of 5 pthreads representing the philosophers and an array of 5 mutexes
representing the chopsticks.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#define NUM_PHILOSOPHERS 5
#define NUM_CHOPSTICKS 5
We initialise the counter i and the status_message variable as int and a pointer msg, and
initialise the mutex (chopstick) array.
int main()
{
// Define counter var i and status_message
int i, status_message;
void *msg;
// Initialise the mutex (chopstick) array
for (i = 0; i < NUM_CHOPSTICKS; i++)
{
status_message = pthread_mutex_init(&chopstick[i], NULL);
// Check if the mutex is initialised successfully (pthread_mutex_init returns 0 on success)
if (status_message != 0)
{
printf("\n Mutex initialization failed");
exit(1);
}
}
We create the philosopher threads using pthread_create and pass a pointer to the dine function
as the subroutine and a pointer to the counter variable i.
pthread_t philosopher[5];
Inside dine(), each thread first announces that it is thinking, then acquires a lock on its
left (n) chopstick and then on its right ((n + 1) % NUM_CHOPSTICKS) chopstick (picks them up).
void dine(int n)
{
printf("\nPhilosopher %d is thinking ", n);
Through the above discussion, we established that no two neighbouring philosophers can eat at
the same time using this solution to the dining philosopher problem. The disadvantage of the
technique is that it may result in deadlock: if all the philosophers pick up their left
chopstick at the same moment, none of them can ever acquire a right chopstick, so none of the
philosophers can eat.
We can also avoid deadlock through the following methods in this scenario -
1. The maximum number of philosophers at the table should not exceed four. Let's understand
why limiting the count to four helps: with four philosophers and five chopsticks, at least
one philosopher can get both neighbouring chopsticks.
o Chopstick C4 will be accessible to philosopher P3, so P3 begins eating and then sets down
both chopsticks C3 and C4, meaning that semaphores C3 and C4 are now incremented to one.
o Now philosopher P2, who was holding chopstick C2, can also take chopstick C3; after eating,
he places his chopsticks down to let other philosophers eat.
2. The first four philosophers (P0, P1, P2, and P3) pick up the left chopstick first and then
the right, whereas the last philosopher (P4) picks up the right chopstick first and then the
left. Let's look at what occurs in this scenario:
o P4 tries to grab his right chopstick first, which is C0; but C0 is already held by
philosopher P0, so its semaphore value is 0 and P4 blocks in its entry loop, leaving
chopstick C4 free.
o As a result, philosopher P3 gets both his left (C3) and right (C4) chopsticks; he eats and
then puts down both chopsticks once finished, allowing others to eat and ending the impasse.
3. If the philosopher is in an even position, he/she should choose the right chopstick first,
followed by the left, and in an odd position, the left chopstick should be chosen first,
followed by the right.
4. Only if both chopsticks (left and right) are accessible at the same time should a
philosopher be permitted to choose his or her chopsticks.
Conclusion
• Process synchronization means coordinating processes so that no two processes access the
same shared data and resources at the same time.
• A philosopher is an analogy for a process and a chopstick for a resource; we can use this
analogy to study process synchronization problems.
• The solution to the Dining Philosophers problem focuses on the use of semaphores.
• With the basic solution, no two neighbouring philosophers can eat at the same time, but the
solution can deadlock when every philosopher picks up the left chopstick simultaneously; this
is its drawback.
10. Sleeping Barber Problem in Process Synchronization
The Sleeping Barber problem is a classic problem in process synchronization that is used to illustrate synchronization
issues that can arise in a concurrent system. The problem is as follows:
There is a barber shop with one barber and a number of chairs for waiting customers. Customers arrive at random
times and if there is an available chair, they take a seat and wait for the barber to become available. If there are no
chairs available, the customer leaves. When the barber finishes with a customer, he checks if there are any waiting
customers. If there are, he begins cutting the hair of the next customer in the queue. If there are no customers
waiting, he goes to sleep.
The problem is to write a program that coordinates the actions of the customers and the barber in a way that avoids
synchronization problems, such as deadlock or starvation.
One solution to the Sleeping Barber problem is to use semaphores to coordinate access to the waiting chairs and the
barber chair. The solution involves the following steps:
1. Initialize two semaphores: one for the waiting chairs and one for the barber chair. The waiting-chairs
semaphore is initialized to the number of chairs, and the barber-chair semaphore is initialized to zero.
2. A customer acquires the waiting-chairs semaphore before taking a seat in the waiting room. If there are no
available chairs, the customer leaves.
3. When the barber finishes cutting a customer’s hair, he releases the barber-chair semaphore and checks whether any
customers are waiting. If there are, he begins cutting the hair of the next customer in the queue.
4. If there are no customers waiting, the barber sleeps until a customer arrives and wakes him.
The solution ensures that the barber never cuts the hair of more than one customer at a time, and that customers
wait if the barber is busy. It also ensures that the barber goes to sleep if there are no customers waiting.
However, there are variations of the problem that can require more complex synchronization mechanisms to avoid
synchronization issues. For example, if multiple barbers are employed, a more complex mechanism may be needed
to ensure that they do not interfere with each other.
Prerequisite – Inter Process Communication.
Problem: The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber
chair, and n chairs for waiting customers, if there are any, to sit on.
Solution: The solution to this problem uses three semaphores. The first, customers, counts the number of customers
in the waiting room (the customer in the barber chair is not counted, because he is not waiting). The second, barber
(0 or 1), tells whether the barber is idle or working. The third, mutex, provides the mutual exclusion required when
updating shared state. The solution also keeps a variable, waiting, that records the number of customers in the
waiting room; if it equals the number of chairs, an arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the barber procedure and blocks on the semaphore customers,
because it is initially 0; the barber then sleeps until the first customer arrives. When a customer arrives, he
executes the customer procedure: he acquires the mutex to enter the critical region (if another customer enters right
after, the second one can do nothing until the first has released the mutex). The customer then checks whether the
number of waiting customers is less than the number of chairs; if not, he releases the mutex and leaves. If a chair
is available, the customer sits down, increments waiting, and ups the customers semaphore, which wakes the barber if
he is sleeping. At this point, customer and barber are both awake, and the barber is ready to give that person a
haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting
room, the barber sleeps.
Barber {
while(true) {
down(Customers);   /* waits for a customer (sleeps) */
down(Seats);       /* protect the seat count */
FreeSeats++;       /* a chair gets free */
up(Barber);        /* bring the customer for a haircut */
up(Seats);         /* release the seat-count mutex */
/* barber is cutting hair */
}
}
Customer {
while(true) {
down(Seats);       /* protect the seat count */
if(FreeSeats > 0) {
FreeSeats--;       /* sit down */
up(Customers);     /* wake the barber if asleep */
up(Seats);         /* release the seat-count mutex */
down(Barber);      /* wait if the barber is busy */
/* customer is having a haircut */
} else {
up(Seats);         /* no free chair: leave */
}
}
}
The Sleeping Barber Problem is a classical synchronization problem in which a barber shop with one barber, a waiting
room, and a number of customers is simulated. The problem involves coordinating the access to the waiting room
and the barber chair so that only one customer is in the chair at a time and the barber is always working on a
customer if there is one in the chair, otherwise the barber is sleeping until a customer arrives.
import threading
import time
import random
# Define the maximum number of customers and the number of chairs in the waiting room
MAX_CUSTOMERS = 5
NUM_CHAIRS = 3
# Define the semaphores for the barber, the customers, and the mutex
barber_semaphore = threading.Semaphore(0)
customer_semaphore = threading.Semaphore(0)
mutex = threading.Semaphore(1)
Output:
The barber is sleeping…
Customer 0 is waiting in the waiting room
The barber is cutting hair for customer 0
Customer 1 is waiting in the waiting room
Customer 2 is waiting in the waiting room
The barber has finished cutting hair for customer 0
Customer 0 has finished getting a haircut
The barber is cutting hair for customer 1
Customer 3 is waiting in the waiting room
The barber has finished cutting hair for customer 1
Customer 1 has finished getting a haircut
The barber is cutting hair for customer 2
Customer 4 is waiting in the waiting room
The barber has finished cutting hair for customer 2
Customer 2 has finished getting a haircut
The barber is cutting hair for customer 3
Customer 3 has finished getting a haircut
The barber is cutting hair for customer 4
Customer 4 has finished getting a haircut
The barber is sleeping…
The Sleeping Barber Problem is a classical synchronization problem that involves coordinating the access to a barber
chair and a waiting room in a barber shop. The problem requires the use of semaphores or other synchronization
mechanisms to ensure that only one customer is in the barber chair at a time and that the barber is always working
on a customer if there is one in the chair, otherwise the barber is sleeping until a customer arrives.
Advantages of using synchronization mechanisms to solve the Sleeping Barber Problem include:
1. Efficient use of resources: The use of semaphores or other synchronization mechanisms ensures that
resources (e.g., the barber chair and waiting room) are used efficiently, without wasting resources or causing
unnecessary delays.
2. Prevention of race conditions: By ensuring that only one customer is in the barber chair at a time and that
the barber is always working on a customer if there is one in the chair, synchronization mechanisms prevent
race conditions that could lead to errors or incorrect results.
3. Fairness: Synchronization mechanisms can be used to ensure that all customers have a fair chance to be
served by the barber.
However, there are also some disadvantages to using synchronization mechanisms to solve the Sleeping Barber
Problem, including:
1. Complexity: Implementing synchronization mechanisms can be complex, especially for larger systems or
more complex synchronization scenarios.
2. Overhead: Synchronization mechanisms can introduce overhead in terms of processing time, memory
usage, and other system resources.
3. Deadlocks: Incorrectly implemented synchronization mechanisms can lead to deadlocks, where processes
are unable to proceed because they are waiting for resources that are held by other processes.
Overall, the advantages of using synchronization mechanisms to solve the Sleeping Barber Problem
generally outweigh the disadvantages, as long as the mechanisms are implemented correctly and efficiently.