Chapter 5: Process Synchronization
5.1 Background
• Process synchronization refers to the coordination of
processes that share resources or must cooperate to
ensure correct execution.
• When multiple processes (or threads) are running at the same
time, especially when they access shared data (like variables,
files, or hardware), there can be problems like:
1. Race conditions — when two processes access and modify data at
the same time, leading to unpredictable results.
2. Data inconsistency — when shared data becomes incorrect because
of improper access.
3. Deadlocks — when processes are stuck waiting for each other
forever.
4. Starvation — when a process keeps getting denied access to
resources because others are favored.
Background…..
• Process synchronization provides mechanisms to avoid these problems. It
ensures that:
o Only one process at a time accesses a critical section (the part of the code where
shared data is modified).
o Shared resources are used in a way that maintains data consistency.
o Processes coordinate properly without conflicts.
• A critical section is a part of a program where a process or
thread accesses shared resources—like variables, files, or
memory—that could lead to problems if multiple processes
access them at the same time.
• Why Is It Important?
Without proper control, simultaneous access to shared resources can cause:
o Race conditions (unpredictable results due to overlapping operations)
o Data inconsistency (corrupted or incorrect data)
To prevent this, we need to make sure that only one process/thread enters the
critical section at a time.
Critical Section….
Structure of a Process with a Critical Section:
• A typical process using a critical section follows four key parts:
1. Entry Section
o Code that requests permission to enter the critical section.
o Implements synchronization (e.g., using locks, semaphores, or
monitors).
2. Critical Section
o The section where the process accesses/modifies shared resources.
o Only one process can be here at any given time.
3. Exit Section
o Code that releases control of the critical section, allowing other
processes to enter.
4. Remainder Section
o The rest of the code where the process performs operations that
don’t involve shared resources.
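The four parts above can be sketched with Python's threading module (an illustrative sketch, not part of the slides; acquiring and releasing the lock play the roles of the entry and exit sections):

```python
import threading

counter = 0                 # shared resource
lock = threading.Lock()     # guards the critical section

def worker():
    global counter
    for _ in range(100_000):
        lock.acquire()      # entry section: request permission
        counter += 1        # critical section: modify shared data
        lock.release()      # exit section: allow others to enter
        # remainder section: code not touching shared data

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: no updates are lost
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates; with it, the final count is exact.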
5.2. Process synchronization mechanisms
• The main problem of process synchronization is:
o How to correctly and efficiently coordinate multiple processes (or
threads) that share resources, ensuring data consistency, preventing
conflicts like race conditions, and avoiding deadlocks or starvation
• There are four common synchronization mechanisms used in
operating systems to solve process synchronization problems. These
mechanisms help manage access to critical sections and shared resources
safely and efficiently. They are:
1. Locks (Mutexes)
• A mutex (mutual exclusion) lock is a mechanism to ensure that only one
thread or process can access a critical section at a time.
• How it works:
o Before entering a critical section, a process acquires the lock.
o When it leaves the critical section, it releases the lock.
o If another process tries to enter while it’s locked, it waits.
Process synchronization mechanisms…..
• Example:
mutex.lock(); // Entry section
// critical section
mutex.unlock(); // Exit section
• Used for:
– Ensuring mutual exclusion in critical sections.
• Limitation:
– Can lead to deadlock if multiple locks are involved and not handled
properly.
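One common discipline against the deadlock limitation just mentioned is to acquire multiple locks in a single fixed global order. A minimal Python sketch (illustrative; the names are ours):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def task(name):
    # Both tasks take the locks in the same fixed order (a, then b);
    # a consistent ordering rules out the circular wait behind deadlock.
    with lock_a:            # 'with' pairs each lock() with its unlock()
        with lock_b:
            done.append(name)   # critical section using both resources

t1 = threading.Thread(target=task, args=("t1",))
t2 = threading.Thread(target=task, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']: both finished, no deadlock
```

If one task instead took lock_b first, each thread could end up holding one lock while waiting for the other.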
2. Semaphores
• A more powerful synchronization tool invented by Dijkstra
• It is a variable used to control access to shared resources.
• Types:
– Binary Semaphore (Mutex) — only 0 or 1.
– Counting Semaphore — can count multiple resources (e.g., number of buffer
slots).
Process synchronization mechanisms…
• Two basic operations in semaphores:
– wait(S) or P(S) — decrements the semaphore; if the result is negative, the process waits.
– signal(S) or V(S) — increments the semaphore and wakes up a waiting
process (if any).
• Used for:
– Managing multiple instances of resources (e.g., slots in a buffer).
– Synchronizing producer-consumer, readers-writers, etc.
• Pros:
– More flexible than mutexes.
– Can handle multiple resource units.
• Cons:
– Can be tricky to implement correctly.
– Easy to forget matching wait/signal, which can cause deadlocks.
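A counting semaphore in action, using Python's threading.Semaphore as a stand-in for wait()/signal() (acquire() is wait, release() is signal; the active/peak counters are our own instrumentation, not part of the mechanism):

```python
import threading

pool = threading.Semaphore(3)   # counting semaphore: 3 resource units
guard = threading.Lock()        # protects the two counters below
active = 0                      # threads currently holding a unit
peak = 0                        # highest value 'active' ever reached

def use_resource():
    global active, peak
    with pool:                  # wait(S): blocks once all 3 units are taken
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use one of the 3 resource instances here ...
        with guard:
            active -= 1
    # leaving the 'with pool' block performs signal(S)

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3: at most 3 holders at any moment
```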
Process synchronization mechanisms…
3. Monitors
• What it is:
– A high-level synchronization construct, usually built into programming
languages (like Java).
– A monitor is an object that allows safe access to shared data by
automatically handling mutual exclusion.
• Key Features:
– Only one process can execute a monitor procedure at a time.
– Uses condition variables for waiting and signaling.
• Condition Variable Operations:
– wait() — process goes to sleep until signaled.
– signal() — wakes up a waiting process.
Process synchronization mechanisms…
• Pros:
– Cleaner and safer (less error-prone than semaphores).
– Encapsulates synchronization logic with data.
• Cons:
– Not available in all languages.
– More abstract, so might be slower in some implementations.
4. Condition Variables
• Used inside monitors to allow a thread to wait for a certain condition to
become true.
• Supports two main operations:
– wait() — releases the monitor lock and suspends the thread until
signaled.
– signal() — wakes up one waiting thread.
– broadcast() — wakes up all waiting threads (in some systems).
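The wait()/signal() pattern above maps directly onto Python's threading.Condition (an illustrative sketch; note the while loop re-checks the condition after waking, which guards against spurious or stale wakeups):

```python
import threading

cond = threading.Condition()    # condition variable plus its monitor lock
items = []                      # shared data protected by the monitor
result = []

def consumer():
    with cond:                  # enter the monitor
        while not items:        # re-check the condition after each wakeup
            cond.wait()         # releases the monitor lock while sleeping
        result.append(items.pop())

def producer():
    with cond:
        items.append("data")
        cond.notify()           # signal(): wake one waiting thread

c = threading.Thread(target=consumer)
c.start()
producer()
c.join()
print(result)  # ['data']
```

threading.Condition also provides notify_all(), corresponding to broadcast().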
5.3 What is the Critical-Section Problem?
• The critical section problem arises in concurrent programming, where
multiple processes or threads share resources (e.g., memory, files, variables).
• Each process has a part of its code called the critical section, where it
accesses or modifies shared resources. If two processes enter their critical
sections at the same time, it may lead to incorrect or unpredictable results.
• Common Solutions to the Critical Section Problem
a. Locks (Mutex)
o Only one process can acquire the lock.
o Others wait until it’s released.
b. Semaphores
o Binary or counting semaphores to control access.
c. Monitors
o High-level structure to manage access and waiting.
d. Peterson’s Algorithm (for two processes)
o A classical software-based solution that ensures mutual exclusion
5.4 Solution to Critical-Section Problem
• Consider each process in a system has a segment of code called critical-section in which
the process may be changing common variables, updating a table, writing a file and so
on.
• When one process is executing in its critical section, no other process is allowed to
execute in its critical section. Each process must request permission to enter critical
section.
Solution to Critical-Section Problem…
A solution to the critical-section problem must satisfy:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted
– Assume that each process executes at a nonzero speed
– No assumption is made concerning the relative speed of the N processes
Two approaches are used to handle the critical-section problem in an OS:
1. Preemptive kernels - must be carefully designed to ensure shared kernel data are
free from race conditions
2. Non-preemptive kernels - free from race conditions because only one process is
active in the kernel at a time
5.5 Peterson’s Solution
• A software-based solution to the critical-section problem. It is restricted to two
processes that alternate execution between their critical and remainder sections.
• Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted;
and the two processes share two variables:
• int turn;
• boolean flag[2];
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready!
• Algorithm for process Pi
while (true) {
flag[i] = TRUE;
turn = j;
while ( flag[j] && turn == j);
CRITICAL SECTION
flag[i] = FALSE;
REMAINDER SECTION
}
Pi enters its critical section only if either flag[j] is false or turn == i. Note: the value of
turn is either 0 or 1; it cannot be both at a time.
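The algorithm can be exercised directly in Python (a sketch: it works here because CPython's global interpreter lock makes the shared loads and stores sequentially consistent; on real hardware the compiler and CPU may reorder them, and memory barriers would be required):

```python
import threading
import time

flag = [False, False]   # flag[i]: process i is ready to enter
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by Peterson's algorithm
N = 2000

def peterson(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # entry section
        turn = j
        while flag[j] and turn == j:
            time.sleep(0)           # busy wait (yield so the peer can run)
        counter += 1                # critical section
        flag[i] = False             # exit section
        # remainder section

t0 = threading.Thread(target=peterson, args=(0,))
t1 = threading.Thread(target=peterson, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 4000: mutual exclusion held, no update was lost
```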
5.6 Semaphore
• Swap() and TestAndSet() are complicated for application programmers to use.
• Semaphore - a synchronization tool that does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal()
– Originally called P() and V()
• Less complicated
• Can only be accessed via two indivisible (atomic) operations
– wait (S) {
while S <= 0
; // no-op
S--;
}
– signal (S) {
S++;
}
5.6.1 Semaphore as General Synchronization Tool
5.6.2 Semaphore Implementation
• Must guarantee that no two processes can execute wait () and signal () on the same
semaphore at the same time
• Disadvantage: busy waiting - while a process is in its critical section, any other process that
tries to enter its critical section must continuously loop in the entry section.
– This wastes CPU cycles in multiprogramming system with a single CPU
• To overcome busy waiting, two operations are provided by the OS as basic system calls
– block – places the process invoking the operation in the waiting queue associated with
the semaphore, rather than engaging in busy waiting.
– wakeup – removes one of the processes in the waiting queue and places it in the ready queue.
• With each semaphore there is an associated waiting queue. Each semaphore has
two data items:
– value (of type integer)
– pointer to a list of waiting processes
• With application programs whose critical sections are long, busy waiting is extremely inefficient.
Semaphore Implementation with no Busy waiting
• Implementation of wait:
wait (S) {
S.value--;
if (S.value < 0) {
add this process to S.list;
block();
}
}
• Implementation of signal:
signal (S) {
S.value++;
if (S.value <= 0) {
remove a process P from S.list;
wakeup(P);
}
}
5.7 Classical Problems of Process Synchronization
The three major classical synchronization problems are:
1. Producer-Consumer Problem
• Situation:
– There are two types of processes: a Producer and a Consumer.
– They share a common buffer (like a storage area).
– The Producer generates (produces) data, puts it into the buffer.
– The Consumer takes (consumes) data from the buffer.
• Challenges:
– The producer must wait if the buffer is full (no space to add more items).
– The consumer must wait if the buffer is empty (nothing to consume).
– Both must not access the buffer at the same time (to avoid data
corruption).
• Solutions typically use:
– Semaphores or mutex locks to control access to the buffer.
– One semaphore to count empty slots and one for full slots.
– A mutex to protect access to the buffer itself.
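The scheme above (two counting semaphores plus a mutex) can be sketched in Python; the buffer size and item count are arbitrary choices for illustration:

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # protects the buffer itself
consumed = []

def producer():
    for item in range(10):
        empty.acquire()         # wait until a slot is free
        with mutex:
            buffer.append(item)
        full.release()          # announce one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()          # wait until something is available
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()         # announce one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, ..., 9] in order
```

The producer blocks once 4 items are outstanding, and the consumer blocks on an empty buffer; neither ever touches the deque without holding the mutex.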
Classical Problems of Process Synchronization….
2. Readers-Writers Problem
• Situation:
– There is a shared data resource (like a database).
– Readers only read the data (no changes).
– Writers can modify the data.
• Challenges:
– Multiple readers can read the data at the same time without problems.
– But, when a writer is writing:
• No reader or another writer should access the data.
– So, reading and writing must be synchronized properly.
• Types:
1. First Readers-Writers Problem: No reader should wait unless a writer has
already locked the resource.
2. Second Readers-Writers Problem: Writers are given priority to avoid
starvation (writers don't wait forever).
3. Third version (fair solution): Tries to balance fairness between readers and
writers.
Readers-Writers Problem (Cont.)
• Solutions use:
– Semaphores to track the number of readers and whether a writer is active.
– Mutexes to protect the shared data and counters.
• structure
– The structure of a writer process
while (true) {
wait (wrt) ;
// writing is performed
signal (wrt) ; }
– The structure of a reader process
while (true) {
wait (mutex) ;
readcount ++ ;
if (readcount == 1) wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount - - ;
if (readcount == 0) signal (wrt) ;
signal (mutex) ; }
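The pseudocode above translates almost line for line into Python (a sketch of the first readers-writers solution; the single write followed by three reads is illustrative):

```python
import threading

wrt = threading.Semaphore(1)   # held by a writer, or by the reader group
mutex = threading.Lock()       # protects readcount
readcount = 0
shared = {"value": 0}          # the shared data resource
seen = []

def writer(v):
    with wrt:                  # wait(wrt) ... signal(wrt)
        shared["value"] = v    # writing is performed

def reader():
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()      # first reader locks writers out
    seen.append(shared["value"])   # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()      # last reader lets writers back in

writer(42)
readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(seen)  # [42, 42, 42]: all readers saw the write, possibly concurrently
```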
3. Dining-Philosophers Problem
• Shared data
– Bowl of rice (data set)
– Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem…
• Situation:
– Imagine five philosophers sitting around a table.
– In between each philosopher, there is one fork (so 5 forks
total).
– Each philosopher must pick up two forks (left and right) to
eat.
– Philosophers alternate between thinking and eating.
• Challenges:
– If every philosopher picks up their left fork at the same
time, they will all wait forever for the right fork (deadlock).
– If a philosopher never gets both forks, they starve
Dining-Philosophers Problem (Cont.)
• Solutions:
– Limit the number of philosophers who can pick up forks at the same time
(using semaphores).
– Enforce an order for picking up forks (pick up the lower-numbered fork first).
– Use an arbitrator (a waiter process) to control fork access.
while (true) {
wait ( chopstick[i] );
wait ( chopstick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal ( chopstick[ (i + 1) % 5] );
// think
}
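The ordering fix listed above (always pick up the lower-numbered chopstick first) is easy to demonstrate in Python; the round count is arbitrary:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    left, right = i, (i + 1) % N
    # Resource ordering: the lower-numbered chopstick is always taken
    # first, which breaks the circular wait that causes deadlock.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1               # eat
        chopstick[second].release()
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```

Only philosopher 4's order differs from the naive left-then-right code, yet that single change makes the circular wait impossible.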
Synchronization Examples
• Solaris
• Windows XP
• Linux
• Pthreads
Solaris Synchronization
• Implements a variety of locks to support multitasking, multithreading (including
real-time threads), and multiprocessing
• Uses adaptive mutexes for efficiency when protecting data from short code
segments
• Uses condition variables and readers-writers locks when longer sections of code
need access to data
• Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock
Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor
systems
• Uses spinlocks on multiprocessor systems
• Also provides dispatcher objects, which may act as either mutexes or semaphores
• Dispatcher objects may also provide events
– An event acts much like a condition variable
Linux Synchronization
• Linux:
– disables interrupts to implement short critical sections
• Linux provides:
– semaphores
– spin locks
Pthreads Synchronization
• Pthreads API is OS-independent
• It provides:
– mutex locks
– condition variables