SEng2032
Chapter 5
Process Synchronization
Process synchronization (or simply synchronization) is the way in which
processes that share the same resources are managed in an operating
system.
It helps maintain the consistency of data by using variables or
hardware so that only one process can make changes to the shared
memory at a time.
The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering with
each other and to prevent inconsistent data due to
concurrent access.
To achieve this, various synchronization techniques such as
semaphores and monitors are used.
On the basis of synchronization, processes are categorized as one of
the following two types:
1. Independent Process: The execution of one process does not affect
the execution of other processes, e.g., a process that does not
share any variables, databases, files, etc.
2. Cooperative Process: A process that can affect or be affected by
other processes executing in the system, e.g., a process that
shares files, variables, databases, etc. NB: The process synchronization
problem arises with cooperative processes because
resources are shared among them.
Conditions That Require Process Synchronization
Critical Section
Race Condition
Classical problems of synchronization (Producer-Consumer
problem, Readers-Writers problem, and Dining
Philosophers problem) and more.
Critical Section
It is the part of the program where shared resources are accessed. Only one
process can execute the critical section at a given point in time. The challenge in the critical
section is deciding when to allow or block processes from entering it.
Critical Section Problem: it describes the challenge of designing a method for cooperative
processes to access shared resources without creating data inconsistencies.
Solution of the critical section problem
Any solution to the critical section problem needs to satisfy the following three
requirements:
1. Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any
time. If other processes need to execute in their critical sections, they must wait until it is
free.
To achieve this cooperation, each process must request permission
to enter its critical section. The section of code where a process does
so is called the entry section.
If the process is granted permission, it enters the critical section,
where it updates the values of the shared resource.
After the critical section, the process goes through the exit section,
in which it gives up its control over the shared resource and
announces this to the other processes in the system.
The process then carries on executing its remaining
statements (the remainder section).
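The four-section structure described above can be sketched in Python (a minimal sketch, using `threading.Lock` as an assumed stand-in for the entry/exit protocol; the counter and thread count are illustrative):

```python
import threading

counter = 0                      # shared resource
lock = threading.Lock()          # grants permission to enter the critical section

def process():
    global counter
    for _ in range(1000):
        lock.acquire()           # entry section: request permission
        counter += 1             # critical section: update the shared resource
        lock.release()           # exit section: give up control, let others in
        # remainder section: the process's other, non-shared work goes here

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every update was applied without interference
```

Because at most one thread holds the lock at a time, mutual exclusion is guaranteed and no increment is lost.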
2. Progress
When no process is currently in its critical section and some processes
want to enter their critical sections, only the processes not in their
remainder sections can be involved in deciding which process can enter
its critical section next, and this decision cannot be delayed indefinitely.
Progress ensures that there is fair arbitration among the processes so
that the processes continue with their execution.
3. Bounded Waiting
It describes each process must have a limited waiting time. It should not
wait endlessly to access the critical section. It sets a bound to the
number of times other processes can gain access to the critical section
when a process requests permission for its critical section.
Race Condition:
It is a situation in which processes try to access the critical section and the
final outcome depends on the order in which their operations happen. In other
words, a race condition is an undesirable situation that occurs when multiple
processes or threads access and manipulate a shared resource concurrently,
leading to inconsistent outputs.
A race condition is a situation that may occur inside a critical section.
Process synchronization mechanisms need to ensure that instructions are
executed in the required order only.
Example 1:
Imagine process p0 increments the value of a shared variable counter by 1, and
process p1 decrements it by 1. The initial value of counter is 2. If the two
processes access counter concurrently without synchronization, their
read-modify-write steps can interleave, and the final value may be 1 or 3
depending on the order of operations. But if the two processes access and
update counter one after the other, the final value is 2.
So the problem that arises when multiple processes or threads access and
manipulate a shared resource concurrently, leading to inconsistent outputs,
is called a Race Condition.
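The lost-update interleaving in the example above can be replayed step by step (a deterministic simulation of one bad schedule; the local variables `p0_local` and `p1_local` stand for each process's private copy of the value it read):

```python
# Simulate the unsynchronized interleaving: both processes read the old
# value of counter before either writes back, so one update is lost.
counter = 2
p0_local = counter       # p0 reads 2
p1_local = counter       # p1 reads 2 (before p0 has written back)
counter = p0_local + 1   # p0 writes 3
counter = p1_local - 1   # p1 writes 1 -- p0's increment is lost
print(counter)           # 1, not the expected 2
```

Had p1 read the counter only after p0's write completed (or vice versa), the result would be 2; the race arises precisely because the read and the write of each process are not executed atomically.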
Process synchronization techniques
These are techniques and tools used to ensure that multiple
processes or threads can safely access shared resources without
interference, race conditions, or deadlocks. There are many techniques,
but the common and widely used ones are:
Mutex (Mutual Exclusion)
A lock that allows only one process/thread to enter the critical
section at a time.
Semaphore
A signaling mechanism that uses integer counters to control access
to shared resources.
Two types:
Binary Semaphore: similar to a mutex; it takes only the values 0 or 1.
Counting Semaphore: Can have a range of values, useful for
managing multiple instances of a resource.
Supports operations like wait() (P) and signal() (V) to decrement or
increment the semaphore.
• Example: Think of a parking lot with 3 spaces.
Each car entering takes a spot (decreases the
count), and each car leaving frees a spot
(increases the count). If no spots are free, new
cars must wait.
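The parking-lot analogy maps directly onto a counting semaphore (a sketch using Python's `threading.Semaphore`; the `occupied`/`max_seen` bookkeeping and the 6-car workload are illustrative additions used only to observe the behaviour):

```python
import threading
import time

spots = threading.Semaphore(3)   # counting semaphore: 3 parking spaces
lock = threading.Lock()          # protects the bookkeeping counters
occupied = 0                     # cars currently parked
max_seen = 0                     # highest occupancy observed

def car():
    global occupied, max_seen
    spots.acquire()              # wait()/P: take a spot, block if none free
    with lock:
        occupied += 1
        max_seen = max(max_seen, occupied)
    time.sleep(0.01)             # the car stays parked briefly
    with lock:
        occupied -= 1
    spots.release()              # signal()/V: free the spot, wake a waiter

cars = [threading.Thread(target=car) for _ in range(6)]
for c in cars:
    c.start()
for c in cars:
    c.join()
print(max_seen)
```

Even though six cars arrive, the semaphore never lets more than three be parked at once, and every car eventually gets a spot.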
Monitors
• Monitors are advanced tools provided by operating systems
to control access to shared resources. They group shared
variables, procedures, and synchronization methods into
one unit, making sure that only one process can use the
monitor's procedures at a time.
• It provides mutual exclusion, condition variables, and data
encapsulation in a single construct.
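Python has no built-in monitor keyword, but the construct can be approximated by a class that bundles the shared data, its procedures, a lock, and a condition variable (a sketch; the class name `BoundedCounter` and its limit are illustrative assumptions):

```python
import threading

class BoundedCounter:
    """Monitor-style unit: shared variable, procedures, and
    synchronization are encapsulated together."""

    def __init__(self, limit):
        self.value = 0
        self.limit = limit
        self.lock = threading.Lock()                     # mutual exclusion
        self.not_full = threading.Condition(self.lock)   # condition variable

    def increment(self):
        with self.lock:                  # only one thread inside at a time
            while self.value >= self.limit:
                self.not_full.wait()     # wait on the condition until room
            self.value += 1

    def decrement(self):
        with self.lock:
            self.value -= 1
            self.not_full.notify()       # signal the condition

m = BoundedCounter(limit=5)
m.increment()
m.increment()
m.decrement()
print(m.value)  # 1
```

Callers never touch `value` or the lock directly; every access goes through the monitor's procedures, which is exactly the encapsulation the slide describes.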
Classical Problems of Synchronization
Below are some of the classical problems depicting
flaws of process synchronization in systems where
cooperating processes are present.
• Bounded Buffer (Producer-Consumer) Problem
• Dining Philosophers Problem
• The Readers-Writers Problem
Bounded Buffer Problem
• The challenge is ensuring that the Producer doesn't
overfill the buffer, and the Consumer doesn't try to
consume data from an empty buffer.
• This problem is generalized in terms of the Producer
Consumer problem, where a finite buffer pool is used to
exchange messages between producer and consumer
processes.
• Because the buffer pool has a maximum size, this
problem is often called the Bounded buffer problem.
The solution to this problem is to create two counting
semaphores (a variable or abstract data type used to
control access to a common resource), "full" and "empty",
to keep track of the current number of full and empty
buffers respectively.
Producer-Consumer Problem Solution Using Semaphores
Semaphore and mutex initialization:
mutex = 1 // for mutual exclusion
semaphore empty = N // number of empty slots in the buffer
semaphore full = 0 // number of full slots
• Producer Pseudocode:
do {
produce_item(); // Generate an item to be added
wait(empty); // Wait if buffer is full
wait(mutex); // Enter critical section
add_item_to_buffer(); // Add item to the shared buffer
signal(mutex); // Exit critical section
signal(full); // Notify that an item is available
} while (true);
• Consumer Pseudocode:
do {
wait(full); // Wait if buffer is empty
wait(mutex); // Enter critical section
remove_item_from_buffer(); // Take item from the shared buffer
signal(mutex); // Exit critical section
signal(empty); // Notify that space is available
consume_item(); // Use the item
} while (true);
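The producer and consumer pseudocode above translates almost line for line into runnable Python (a sketch using `threading.Semaphore`, where `acquire()` plays the role of wait() and `release()` the role of signal(); the buffer size N=5 and the 20-item workload are illustrative):

```python
import threading
from collections import deque

N = 5                            # buffer capacity
buffer = deque()                 # the shared bounded buffer
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # number of empty slots
full = threading.Semaphore(0)    # number of full slots
ITEMS = 20
consumed = []

def producer():
    for item in range(ITEMS):    # produce_item()
        empty.acquire()          # wait(empty): block if buffer is full
        mutex.acquire()          # enter critical section
        buffer.append(item)      # add_item_to_buffer()
        mutex.release()          # exit critical section
        full.release()           # signal(full): an item is available

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait(full): block if buffer is empty
        mutex.acquire()          # enter critical section
        item = buffer.popleft()  # remove_item_from_buffer()
        mutex.release()          # exit critical section
        empty.release()          # signal(empty): a slot is free
        consumed.append(item)    # consume_item()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items arrive in order: [0, 1, ..., 19]
```

Note the ordering discipline: each side waits on its counting semaphore *before* taking the mutex. Taking the mutex first and then blocking on empty/full would deadlock, since the other side could never enter its critical section to change the counts.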
Dining Philosophers Problem
The dining philosophers problem involves the
allocation of limited resources to a group of
processes in a deadlock-free and starvation-free
manner.
There are five philosophers sitting around a
table, with five chopsticks/forks placed between
them and a bowl of rice in the centre.
When a philosopher wants to eat, he uses two
chopsticks, one from his left and one from his
right. When a philosopher wants to think, he
puts both chopsticks back in their original
place.
Dining Philosophers solution using semaphore
Variable declaration:
N = 5 // number of philosophers and forks
// 3 semaphores
semaphore mutex = 1; // for mutual exclusion
semaphore forks[N]; // one semaphore per fork (controls the forks)
semaphore room = N - 1; // allows up to N-1 philosophers to try picking up forks (controls the philosophers)
Philosopher(i) {
while (true) {
// ---- Thinking Section ----
// The philosopher is thinking
// ---------------------------
wait(room); // Limit access to N-1 philosophers to avoid deadlock
wait(forks[i]); // Pick up left fork
wait(forks[(i+1)%N]); // Pick up right fork
// ---- Eating Section ----
// The philosopher is eating
// -------------------------
signal(forks[(i+1)%N]); // Put down right fork
signal(forks[i]); // Put down left fork
signal(room); // Leave the room so another philosopher may try
}
}
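The same room-semaphore scheme can be run in Python (a sketch; the `meals` counter and the 10 rounds per philosopher are illustrative additions used to confirm that everyone eats and no deadlock occurs):

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per fork
room = threading.Semaphore(N - 1)  # at most N-1 philosophers compete at once
meals = [0] * N                    # how many times each philosopher has eaten

def philosopher(i, rounds=10):
    for _ in range(rounds):
        # ---- thinking ----
        room.acquire()                  # wait(room): avoid deadlock
        forks[i].acquire()              # pick up left fork
        forks[(i + 1) % N].acquire()    # pick up right fork
        meals[i] += 1                   # ---- eating ----
        forks[(i + 1) % N].release()    # put down right fork
        forks[i].release()              # put down left fork
        room.release()                  # signal(room): let another one in

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher ate all 10 rounds, with no deadlock
```

With all five philosophers competing at once, each could grab their left fork and wait forever for the right one; admitting only N-1 into the "room" guarantees at least one philosopher can always take both forks, so the system keeps making progress.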