OS_Unit III - Process Synchronization
UNIT III

Background
⮚ A cooperating process is one that can affect or be affected by
other processes executing in the system.
⮚ Processes can execute concurrently
⮚ May be interrupted at any time, partially completing
execution
⮚ Concurrent access to shared data may result in data
inconsistency
⮚ Maintaining data consistency requires mechanisms to ensure
the orderly execution of cooperating processes
Race Condition - Example
Producer: count++        Consumer: count--
Race Condition
⮚ A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place is called a race condition.
⮚ Processes P0 and P1 are creating child processes using the fork() system call
⮚ Race condition on kernel variable next_available_pid which represents
the next available process identifier (pid)
Correctness of Peterson’s Solution
⮚ Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Peterson’s Solution and Modern Architecture
⮚ Although useful for demonstrating an algorithm,
Peterson’s Solution is not guaranteed to work on modern
architectures.
⮚ To improve performance, processors and/or
compilers may reorder operations that have no
dependencies
⮚ Understanding why it will not work is useful for better
understanding race conditions.
⮚ For single-threaded programs this is OK, as the result will always be the same.
⮚ For multithreaded programs the reordering may produce inconsistent or unexpected results!
Modern Architecture Example
⮚ Two threads share the data:
boolean flag = false;
int x = 0;
⮚ Thread 1 performs
while (!flag)
;
print x
⮚ Thread 2 performs
x = 100;
flag = true;
⮚ What is the expected output?
100
Modern Architecture Example (Cont.)
⮚ However, since the variables flag and x are independent of each other, the instructions:
flag = true;
x = 100;
may be reordered by the processor or compiler. If they are reordered, Thread 1 could output 0 instead of 100.
⮚ The same kind of reordering, applied to the assignments of flag and turn in Peterson’s solution, allows both processes to be in their critical sections at the same time!
⮚ To ensure that Peterson’s solution will work correctly on modern computer architectures we must use memory barriers.
Peterson’s Solution Revisited
Process 0:
while (true) {
    turn = 1;
    flag[0] = true;
    while (flag[1] && turn == 1);
}

Process 1:
while (true) {
    turn = 0;
    flag[1] = true;
    while (flag[0] && turn == 0);
}
Hardware Support for Synchronization
⮚ Many systems provide hardware support for implementing the critical
section code.
⮚ Uniprocessors – could disable interrupts
⮚ Currently running code would execute without preemption
⮚ Generally too inefficient on multiprocessor systems
⮚ Three hardware instructions that provide support for solving the critical-
section problem are
⮚ Memory Barriers
⮚ Hardware Instructions
⮚ Atomic Variables
Memory Barriers
⮚ A memory model describes the memory guarantees a computer architecture makes to application programs.
The compare_and_swap Instruction
⮚ Definition
int compare_and_swap(int *value, int expected, int new_value)
{
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}
⮚ Properties
⮚ Executed atomically
⮚ Returns the original value of the parameter value
⮚ Sets value to new_value, but only if *value == expected is true. That is, the swap takes place only under this condition.
Solution using compare_and_swap
⮚ Shared integer lock initialized to 0;
⮚ Solution:
while (true) {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
}
Problem
The following two functions P1 and P2 that share a variable B with an initial value
of 2 execute concurrently.
P1()
{
C = B – 1;
B = 2*C;
}
P2()
{
D = 2 * B;
B = D - 1;
}
The number of distinct values that B can possibly take after the execution is
Solution

C = B - 1;   // C = 1
B = 2 * C;   // B = 2
D = 2 * B;   // D = 4
B = D - 1;   // B = 3
• If we execute process P2 after process P1, then B = 3.
• If we execute process P1 after process P2, then B = 4.
• If preemption occurs between P1 and P2, then B = 2 (preemption from P1 to P2) or B = 3 (preemption from P2 to P1). In any single run, only one of the values 2 and 3 ends up in B.
• So, the total number of distinct values that B can possibly take after the execution is 3.
Problem
Solution
⮚ signal(mutex) …. wait(mutex): mutual exclusion is violated.
⮚ wait(mutex) … wait(mutex): a deadlock occurs.
⮚ These, and others, are examples of what can occur when semaphores and other synchronization tools are used incorrectly.
Advantages of Monitors
⮚ Monitors are easier to use than semaphores.
⮚ Mutual exclusion in monitors is automatic, while in semaphores mutual exclusion has to be implemented explicitly.
⮚ Monitors can overcome the timing errors that occur when semaphores are used.
⮚ Shared variables are global to all processes in a monitor, while shared variables are hidden when semaphores are used.
Monitors
⮚ A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
⮚ Abstract data type; internal variables are accessible only by code within its procedures
⮚ A monitor type is an ADT that includes a set of programmer-defined
operations that are provided with mutual exclusion within the monitor