
Operating Systems

SEng2032
Chapter 5
Process Synchronization
Process Synchronization
Process Synchronization, or simply Synchronization, is the way by which processes that share the same resources are managed in an operating system.
It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the shared memory at a time.
The main objective of process synchronization is to ensure that multiple processes can access shared resources without interfering with each other, and to prevent inconsistent data due to concurrent access.
To achieve this, various synchronization techniques are used, such as semaphores and monitors.
On the basis of synchronization, processes are categorized as one of
the following two types:
1. Independent Process: The execution of one process does not affect
the execution of other processes. Ex. a process that does not share
any variable, database, file, etc.
2. Cooperative Process: A process that can affect or be affected by
other processes executing in the system. Ex. a process that shares a
file, variable, database, etc.
NB: The process synchronization problem arises in the case of
cooperative processes, because resources are shared among them.
Conditions That Require Process Synchronization

Critical Section
Race Condition
Classical problems of synchronization (Producer-Consumer
problem, Readers-Writers problem, and Dining
Philosophers problem), and more.
Critical Section
It is the part of the program where shared resources are accessed. Only one
process can execute the critical section at a given point in time. The challenge in the
critical section is deciding when to allow or block processes from entering it.
Critical Section Problem: it describes the challenge of designing a method for cooperative
processes to access shared resources without creating data inconsistencies.
Solution of the critical section problem
Any solution to the critical section problem needs to satisfy the following three
requirements:
1. Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any
time. If other processes need to execute in their critical sections, they must wait until it is
free.
To achieve this cooperation, each process must request permission
to enter its critical section. The section of code where a process does
so is called the entry section.
If the process is granted permission, it enters the critical section,
where it updates the values of the shared resource.
After the critical section, the process goes through the exit section,
in which it gives up its control over the shared resource and
announces this to the other processes in the system.
The process then carries on executing its remaining
statements (the remainder section).
2.Progress
When no process is currently in its critical section and some processes
want to enter their critical sections, only the processes not in their
remainder sections can be involved in deciding which process can enter
its critical section next, and this decision cannot be delayed indefinitely.
Progress ensures that there is fair arbitration among the processes so
that the processes continue with their execution.
3. Bounded Waiting
Each process must have a limited waiting time; it should not
wait endlessly to access the critical section. Bounded waiting sets a bound
on the number of times other processes can gain access to the critical section
after a process has requested permission to enter its own.
Race Condition:
It is a situation where processes access the critical section concurrently and the
final outcome depends on the order in which their operations happen. In other
words, a race condition is an undesirable situation that occurs when multiple
processes or threads access and manipulate a shared resource concurrently,
leading to inconsistent outputs.
A race condition is a situation that may occur inside a critical section.
Process synchronization mechanisms need to ensure that instructions are
executed in the required order only.
Example 1:
Imagine process p0 increments the value of the shared variable counter by 1, and
process p1 decrements it by 1. The initial value of counter is 2. If the two
processes access counter one after the other, the final value is 2 (2 + 1 − 1). But
if they access it concurrently, each may read the old value 2 before the other has
written its result, so one update is lost and the final value becomes 1 or 3.
This problem, where multiple processes or threads access and manipulate a
shared resource concurrently and produce inconsistent outputs, is called a Race
Condition.
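The lost update in Example 1 can be traced step by step. The sketch below (Python, with the two processes' reads and writes interleaved by hand for illustration) shows how both p0 and p1 read the old value 2, so p1's final write overwrites p0's increment:

```python
counter = 2        # shared variable, initial value 2

# One possible interleaving: both processes read before either writes.
r0 = counter       # p0 reads 2
r1 = counter       # p1 reads 2
counter = r0 + 1   # p0 writes 2 + 1 = 3
counter = r1 - 1   # p1 writes 2 - 1 = 1 -- p0's increment is lost

print(counter)     # 1, not the expected 2
```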
Process synchronization techniques
These are techniques and tools used to ensure that multiple
processes or threads can safely access shared resources without
interference, race conditions, or deadlocks. There are many techniques,
but the most common and widely used are:
 Mutex (Mutual Exclusion)
A lock that allows only one process/thread to enter the critical
section at a time.
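As a minimal sketch (Python's `threading.Lock` plays the role of the mutex; the shared counter and the thread counts are illustrative choices), a mutex makes concurrent increments of a shared variable safe:

```python
import threading

lock = threading.Lock()   # the mutex
counter = 0               # shared variable

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread inside the critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000 -- no updates are lost
```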
Semaphore
A signaling mechanism that uses integer counters to control access
to shared resources.
Two types:
Binary Semaphore: Similar to a mutex, only values 0 or 1.
Counting Semaphore: Can have a range of values, useful for
managing multiple instances of a resource.
Supports operations like wait() (P) and signal() (V) to decrement or
increment the semaphore.
• Example: Think of a parking lot with 3 spaces.
Each car entering takes a spot (decreases the
count), and each car leaving frees a spot
(increases the count). If no spots are free, new
cars must wait.
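The parking-lot analogy can be sketched with a counting semaphore (Python's `threading.Semaphore`; the capacity of 3 follows the example above):

```python
import threading

spots = threading.Semaphore(3)   # counting semaphore: 3 parking spaces

# Three cars enter: each acquire() (wait/P) takes a spot.
assert spots.acquire(blocking=False)
assert spots.acquire(blocking=False)
assert spots.acquire(blocking=False)

# The lot is full: a fourth car cannot enter without waiting.
assert not spots.acquire(blocking=False)

spots.release()                       # one car leaves (signal/V) ...
assert spots.acquire(blocking=False)  # ... and a waiting car can take the spot
```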
 Monitors
• Monitors are advanced tools provided by operating systems
to control access to shared resources. They group shared
variables, procedures, and synchronization methods into
one unit, making sure that only one process can use the
monitor's procedures at a time.
• It provides mutual exclusion, condition variables, and data
encapsulation in a single construct.
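Python has no built-in monitor construct, but a monitor-style class can be sketched by grouping the shared data and a lock into one unit (the `Counter` class and its method are hypothetical examples, not a standard API):

```python
import threading

class Counter:
    """Monitor-style class: shared data and its lock live in one unit."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:   # mutual exclusion for every monitor procedure
            self._value += 1
            return self._value

c = Counter()
print(c.increment())  # 1
print(c.increment())  # 2
```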
Classical Problems of Synchronization
Below are some of the classical problems depicting
flaws of process synchronization in systems where
cooperating processes are present.
• Bounded Buffer (Producer-Consumer) Problem
• Dining Philosophers Problem
• The Readers-Writers Problem
Bounded Buffer Problem
• The challenge is ensuring that the Producer doesn't
overfill the buffer, and the Consumer doesn't try to
consume data from an empty buffer.
• This problem is generalized in terms of the Producer
Consumer problem, where a finite buffer pool is used to
exchange messages between producer and consumer
processes.
• Because the buffer pool has a maximum size, this
problem is often called the Bounded buffer problem.
The solution to this problem is to create two counting
semaphores (a semaphore is a variable or abstract data type used to
control access to a common resource), "full" and "empty",
to keep track of the current number of full and empty
buffers respectively.
Producer-Consumer Problem Solution Using Semaphores
Semaphore and mutex initialization:
semaphore mutex = 1   // for mutual exclusion
semaphore empty = N   // number of empty slots in the buffer
semaphore full = 0    // number of full slots
• Producer Pseudocode:
do {
    produce_item();        // Generate an item to be added
    wait(empty);           // Wait if buffer is full
    wait(mutex);           // Enter critical section
    add_item_to_buffer();  // Add item to the shared buffer
    signal(mutex);         // Exit critical section
    signal(full);          // Notify that an item is available
} while (true);
• Consumer Pseudocode:
do {
    wait(full);                  // Wait if buffer is empty
    wait(mutex);                 // Enter critical section
    remove_item_from_buffer();   // Take item from the shared buffer
    signal(mutex);               // Exit critical section
    signal(empty);               // Notify that space is available
    consume_item();              // Use the item
} while (true);
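The producer and consumer pseudocode can be turned into a runnable sketch (Python threads and `threading.Semaphore`; the buffer size N = 3 and the ten items are illustrative choices):

```python
import threading
from collections import deque

N = 3
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion for the buffer
empty = threading.Semaphore(N)   # number of empty slots
full = threading.Semaphore(0)    # number of full slots

def producer(items):
    for item in items:
        empty.acquire()               # wait(empty): block if buffer is full
        mutex.acquire()               # wait(mutex): enter critical section
        buffer.append(item)           # add_item_to_buffer()
        mutex.release()               # signal(mutex): exit critical section
        full.release()                # signal(full): an item is available

def consumer(count, out):
    for _ in range(count):
        full.acquire()                # wait(full): block if buffer is empty
        mutex.acquire()               # wait(mutex): enter critical section
        out.append(buffer.popleft())  # remove_item_from_buffer()
        mutex.release()               # signal(mutex): exit critical section
        empty.release()               # signal(empty): a slot is free

out = []
p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
print(out)  # [0, 1, ..., 9] -- all items delivered in FIFO order
```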
Dining Philosophers Problem
The dining philosophers problem involves the
allocation of limited resources to a group of
processes in a deadlock-free and starvation-free
manner.
There are five philosophers sitting around a
table, with five chopsticks/forks kept between
them and a bowl of rice in the centre.
When a philosopher wants to eat, he picks up two
chopsticks: one from his left and one from his
right. When a philosopher wants to think, he
puts both chopsticks back in their original
place.
Dining Philosophers solution using semaphore
Variable declaration:
 N = 5                   // Number of philosophers and forks
 semaphore forks[N];     // One semaphore for each fork (controls the forks)
 semaphore room = N - 1; // Allows up to N-1 philosophers to try picking up forks

// Initialize all fork semaphores to 1 (available)
for (i = 0; i < N; i++) {
    forks[i] = 1;
}

Philosopher(i) {
    while (true) {
        // ---- Thinking Section ----
        // The philosopher is thinking
        // ---------------------------
        wait(room);            // Limit access to N-1 philosophers to avoid deadlock
        wait(forks[i]);        // Pick up left fork
        wait(forks[(i+1)%N]);  // Pick up right fork
        // ---- Eating Section ----
        // The philosopher is eating
        signal(forks[i]);        // Put down left fork
        signal(forks[(i+1)%N]);  // Put down right fork
        signal(room);            // Leave room (allow another philosopher to enter)
        // Repeat loop (think again)
    }
}
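Under the same scheme, a runnable sketch in Python (the number of rounds per philosopher is an illustrative choice) shows that the room semaphore prevents deadlock, since at most N-1 philosophers compete for forks at once:

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per fork
room = threading.Semaphore(N - 1)  # at most N-1 philosophers try at once

meals = []  # record of who ate (list.append is atomic in CPython)

def philosopher(i, rounds):
    for _ in range(rounds):
        # thinking ...
        room.acquire()                # wait(room)
        forks[i].acquire()            # pick up left fork
        forks[(i + 1) % N].acquire()  # pick up right fork
        meals.append(i)               # eating
        forks[(i + 1) % N].release()  # put down right fork
        forks[i].release()            # put down left fork
        room.release()                # signal(room)

threads = [threading.Thread(target=philosopher, args=(i, 20)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(meals))  # 100 -- every philosopher ate 20 times, no deadlock
```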
The Readers Writers Problem
 is a synchronization problem that arises when multiple
processes need to read from and write to shared data,
ensuring that no data is corrupted by simultaneous
access.
 It describes the problem of ensuring that multiple readers
and writers can access shared data without interference.
 In this problem, there are some processes (called readers)
that only read the shared data and never change it, and
there are other processes (called writers) that change the
data.
 There are various types of readers-writers problems, most
centered on the relative priorities of readers and writers.
• Consider a situation where we have a file shared between many
people. If one of the people tries editing the file, no other person
should be reading or writing at the same time, otherwise changes
will not be visible to him/her.
• However, if some person is reading the file, then others may read it
at the same time. Precisely, in OS we call this situation the
readers-writers problem.
Problem parameters:
• One set of data is shared among a number of processes
• Once a writer is ready, it performs its write.
• Only one writer may write at a time
• If a process is writing, no other process can read the data
• If at least one reader is reading, no other process can write
• Readers may only read; they never write
Reader writer solution using semaphore
• Variable initialization (two semaphore variables and one readCount variable)
– semaphore mutex = 1; // Semaphore to protect access to the readCount variable
– semaphore wrt = 1;   // Semaphore to control access to the shared resource
– int readCount = 0;   // Number of readers currently reading
Reader Process
Reader() {
    wait(mutex);          // Lock mutex to enter critical section and update readCount
    readCount++;          // Increment the count of active readers
    if (readCount == 1) { // If this is the first reader
        wait(wrt);        // Lock the shared resource (no writer can access it now)
    }
    signal(mutex);        // Unlock mutex so other readers/writers can proceed
    // Reading
    // The reader can now read the shared data
Reader writer solution using semaphore
When a reader process wants to leave:
    wait(mutex);          // Lock mutex to safely update readCount
    readCount--;          // One reader is leaving
    if (readCount == 0) { // If this is the last reader
        signal(wrt);      // Allow writers to access the shared resource
    }
    signal(mutex);        // Unlock mutex
}
Reader writer solution using semaphore
Writer process
Writer() {
    wait(wrt);    // Lock the shared resource to prevent readers and other writers
    // ---- Writing Section ----
    // Writer modifies the shared resource here
    signal(wrt);  // Release the shared resource so others can access it
}
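The reader and writer pseudocode above can be combined into a runnable sketch (Python threads and semaphores; the shared list and the sample values are illustrative). The first reader locks `wrt` against writers, and the last reader releases it:

```python
import threading

mutex = threading.Semaphore(1)  # protects read_count
wrt = threading.Semaphore(1)    # exclusive access to the shared data
read_count = 0
shared = []                     # the shared resource

def reader(out):
    global read_count
    mutex.acquire()             # wait(mutex)
    read_count += 1
    if read_count == 1:         # first reader locks out writers
        wrt.acquire()
    mutex.release()             # signal(mutex)
    out.append(list(shared))    # ---- reading ----
    mutex.acquire()
    read_count -= 1
    if read_count == 0:         # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(item):
    wrt.acquire()               # exclusive: no readers, no other writers
    shared.append(item)         # ---- writing ----
    wrt.release()

writer(1)
writer(2)
seen = []
reader(seen)
print(seen)  # [[1, 2]] -- the reader saw both completed writes
```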
Thank you
