OS_Unit III - Process Synchronization

Unit III of the Operating Systems course focuses on Process Synchronization and Deadlocks, detailing the critical section problem, race conditions, and various solutions such as Peterson's solution, mutex locks, and semaphores. It emphasizes the importance of mutual exclusion, progress, and bounded waiting in process synchronization. Additionally, it discusses hardware support for synchronization and the implications of modern architecture on these solutions.

Uploaded by

Tharun Kumar

20CSE43 – Operating Systems

UNIT III

Course Overview

UNIT - 1 Operating Systems Overview


UNIT - 2 Process Management
UNIT - 3 Process Synchronization
UNIT - 4 Memory Management
UNIT - 5 Storage Management
Unit III
⮚ Process Synchronization : The Critical Section Problem -
Peterson’s solution – Hardware support for
Synchronization – Mutex Locks – Semaphores –
Monitors. Deadlocks: Deadlock Characterization –
Methods for handling deadlocks - Deadlock Prevention
and Avoidance – Deadlock Detection – Recovery from
Deadlock

CO3: apply different methods for process synchronization and deadlock handling

Background
⮚ A cooperating process is one that can affect or be affected by
other processes executing in the system.
⮚ Processes can execute concurrently
⮚ May be interrupted at any time, partially completing
execution
⮚ Concurrent access to shared data may result in data
inconsistency
⮚ Maintaining data consistency requires mechanisms to ensure
the orderly execution of cooperating processes
Race Condition - Example

The producer executes count++ while the consumer executes count--. Because the two unsynchronized updates to the shared counter can interleave, this is a race condition.
Race Condition
⮚ A situation where several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular order in
which the access takes place, is called a race condition.
⮚ Processes P0 and P1 are creating child processes using the fork() system call
⮚ Race condition on kernel variable next_available_pid which represents
the next available process identifier (pid)

Unless there is a mechanism to prevent P0 and P1 from accessing the variable


next_available_pid the same pid could be assigned to two different
processes!
Critical Section Problem
⮚ Consider system of n processes {p0, p1, … pn-1}
⮚ Each process has critical section segment of code
⮚ Process may be changing common variables, updating
table, writing file, etc.
⮚ When one process in critical section, no other may be in
its critical section
⮚ Critical section problem is to design a protocol to solve this
⮚ Each process must ask permission to enter critical section in
entry section, may follow critical section with exit section,
then remainder section
Critical Section
⮚ General structure of process Pi
Critical-Section Problem (Cont.)
Requirements for solution to critical-section problem
1. Mutual Exclusion - If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and some
processes wish to enter, the decision on which process enters next cannot be
postponed indefinitely; a process not using the critical section must not
prevent other processes from entering it. In other words, any process can
enter the critical section if it is free. Ex: if P1, P2, P3 are waiting to
enter the critical section, a decision must be made in finite time about
which process enters.
3. Bounded Waiting - Bounded waiting means that each process
must have a limited waiting time. It should not wait endlessly to
access the critical section.
Peterson’s Solution (S/W Solution)
⮚ Two process solution (P0, P1)
⮚ The two processes share two variables:
⮚ int turn;
⮚ boolean flag[2]
flag[0]and flag[1]
Initial value of flag[0]and flag[1] is false.
⮚ The variable turn indicates whose turn it is to enter the critical
section
⮚ The flag array is used to indicate if a process is ready to enter the
critical section.
⮚ flag[i] = true implies that process Pi is ready!
flag[0] = true means process P0 is
interested to enter into critical section.
flag[0] = false means process P0 is not interested to enter
into critical section.
Algorithm for Process Pi:

    while (true) {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j)
            ;
        /* critical section */
        flag[i] = false;
        /* remainder section */
    }

Algorithm for Process Pj:

    while (true) {
        flag[j] = true;
        turn = i;
        while (flag[i] && turn == i)
            ;
        /* critical section */
        flag[j] = false;
        /* remainder section */
    }
Correctness of Peterson’s Solution
⮚ Provable that the three CS requirement are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Peterson’s Solution and Modern Architecture
⮚ Although useful for demonstrating an algorithm,
Peterson’s Solution is not guaranteed to work on modern
architectures.
⮚ To improve performance, processors and/or
compilers may reorder operations that have no
dependencies
⮚ Understanding why it will not work is useful for better
understanding race conditions.
⮚ For single-threaded programs this reordering is fine, as the result will
always be the same.
⮚ For multithreaded programs the reordering may produce
inconsistent or unexpected results!
Modern Architecture Example
⮚ Two threads share the data:
boolean flag = false;
int x = 0;
⮚ Thread 1 performs
while (!flag)
;
print x
⮚ Thread 2 performs
x = 100;
flag = true
⮚ What is the expected output?
100
Modern Architecture Example (Cont.)
⮚ However, since the variables flag and x are independent of each
other, the instructions:

flag = true;
x = 100;

for Thread 2 may be reordered


⮚ If this occurs, the output may be 0!
Peterson’s Solution Revisited
⮚ The effects of instruction reordering in Peterson’s Solution

⮚ This allows both processes to be in their critical section at the same time!
⮚ To ensure that Peterson’s solution will work correctly on modern computer
architecture we must use Memory Barrier.
Peterson’s Solution Revisited

With the two assignments reordered, each process sets turn before raising its flag:

Process 0:

    while (true) {
        turn = 1;
        flag[0] = true;
        while (flag[1] && turn == 1)
            ;
        /* critical section */
        flag[0] = false;
        /* remainder section */
    }

Process 1:

    while (true) {
        turn = 0;
        flag[1] = true;
        while (flag[0] && turn == 0)
            ;
        /* critical section */
        flag[1] = false;
        /* remainder section */
    }
Hardware Support for Synchronization
⮚ Many systems provide hardware support for implementing the critical
section code.
⮚ Uniprocessors – could disable interrupts
⮚ Currently running code would execute without preemption
⮚ Generally too inefficient on multiprocessor systems
⮚ Three hardware instructions that provide support for solving the critical-
section problem are
⮚ Memory Barriers
⮚ Hardware Instructions
⮚ Atomic Variables
Memory Barriers

⮚ A memory model is the set of memory guarantees a computer architecture makes
to application programs.

⮚ Memory models may be either:


⮚ Strongly ordered – where a memory modification of one processor is
immediately visible to all other processors.
⮚ Weakly ordered – where a memory modification of one processor
may not be immediately visible to all other processors.

⮚ A memory barrier is an instruction that forces any change in memory to


be propagated (made visible) to all other processors.
Memory Barrier Instructions
⮚ When a memory barrier instruction is performed, the system
ensures that all loads and stores are completed before any
subsequent load or store operations are performed.

⮚ Therefore, even if instructions were reordered, the memory barrier


ensures that the store operations are completed in memory and
visible to other processors before future load or store operations are
performed.
Memory Barrier Example

⮚ Returning to the earlier flag/x example
⮚ We could add a memory barrier to the following instructions to ensure
Thread 1 outputs 100:
⮚ Thread 1 now performs
while (!flag)
memory_barrier();
print x
⮚ Thread 2 now performs
x = 100;
memory_barrier();
flag = true
⮚ For Thread 1 we are guaranteed that the value of flag is loaded
before the value of x.
⮚ For Thread 2 we ensure that the assignment to x occurs before the
assignment to flag.
Hardware Instructions

⮚ Special hardware instructions allow us to either test-and-modify
the content of a word, or to swap the contents of two words,
atomically (uninterruptibly)
⮚ Test-and-Set instruction
⮚ Compare-and-Swap instruction
⮚ Test and set
⮚ A hardware solution to the synchronization problem
⮚ There is a shared lock variable which can take either 0 or 1
⮚ Before entering into the critical section, a process enquires about the
lock
⮚ If it is locked, it keeps on waiting until it becomes free
⮚ If it is not locked, it takes the lock and executes the critical section
The test_and_set Instruction

⮚ Definition

    boolean test_and_set(boolean *target)
    {
        boolean rv = *target;   /* the whole body runs as one atomic operation */
        *target = true;
        return rv;
    }
⮚ Properties
⮚ Executed atomically
⮚ Returns the original value of passed parameter
⮚ Set the new value of passed parameter to true
Solution Using test_and_set()
⮚ Shared boolean variable lock, initialized to false
⮚ Solution:
do {
while (test_and_set(&lock))
; /* do nothing */

/* critical section */

lock = false;
/* remainder section */
} while (true);

⮚ Does it solve the critical-section problem?


Solution Using test_and_set()

(figure: Process 1 and Process 2 contending for the shared lock via test_and_set)
The compare_and_swap Instruction
⮚ Definition
int compare_and_swap(int *value, int expected, int new_value)
{
int temp = *value;
if (*value == expected)
*value = new_value;
return temp;

}
⮚ Properties
⮚ Executed atomically
⮚ Returns the original value of passed parameter value
⮚ Sets the variable *value to new_value, but
only if *value == expected is true. That is, the swap takes place only
under this condition.
Solution using compare_and_swap
⮚ Shared integer lock initialized to 0;
⮚ Solution:
while (true){
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */

/* critical section */

lock = 0;

/* remainder section */
}

⮚ Does it solve the critical-section problem?


Bounded-waiting with compare-and-swap

    while (true) {
        waiting[i] = true;
        key = 1;
        while (waiting[i] && key == 1)
            key = compare_and_swap(&lock, 0, 1);
        waiting[i] = false;
        /* critical section */
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = 0;
        else
            waiting[j] = false;
        /* remainder section */
    }

Process Pi can enter its critical section only if either waiting[i] == false or key == 0.
Atomic Variables
⮚ Typically, instructions such as compare-and-swap are used as building blocks
for other synchronization tools.
⮚ One tool is an atomic variable that provides atomic (uninterruptible) updates
on basic data types such as integers and booleans.
⮚ For example:
⮚ Let sequence be an atomic variable
⮚ Let increment() be operation on the atomic variable sequence
⮚ The Command:
increment(&sequence);
ensures sequence is incremented without interruption:
Atomic Variables

⮚ The increment() function can be implemented as follows:

    void increment(atomic_int *v)
    {
        int temp;
        do {
            temp = *v;
        } while (temp != compare_and_swap(v, temp, temp + 1));
    }
Mutex Locks
⮚ Previous solutions are complicated and generally inaccessible to
application programmers
⮚ OS designers build software tools to solve critical section problem
⮚ Simplest is mutex lock
⮚ Boolean variable indicating if lock is available or not
⮚ Protect a critical section by
⮚ First acquire() a lock
⮚ Then release() the lock
⮚ Calls to acquire() and release() must be atomic

⮚ But this solution requires busy waiting. Busy waiting also


wastes CPU cycles
⮚ This lock is therefore called a spinlock because the process
“spins” while waiting for the lock to become available.
Mutex Locks

⮚ A spinlock is a locking mechanism in which a thread that tries to
acquire the lock simply waits in a loop ("spins") until the lock
becomes available. Spinlocks are held for short periods of time
and are useful on multiprocessor systems

⮚ Locks are either contended or uncontended.


⮚ A lock is considered contended if a thread blocks while
trying to acquire the lock.
⮚ If a lock is available when a thread attempts to acquire it,
the lock is considered uncontended.
Solution to CS Problem Using Mutex Locks
Semaphore
⮚ Synchronization tool that provides more sophisticated ways (than mutex
locks) for processes to synchronize their activities.
⮚ A semaphore is a non-negative variable shared between threads
⮚ Semaphore S – integer variable
⮚ Can only be accessed via two indivisible (atomic) operations
⮚ wait() and signal()
⮚ Originally called P() and V()
⮚ Definition of the wait() operation
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
⮚ Definition of the signal() operation
signal(S) {
S++;
}
Semaphore (Cont.)
⮚ Counting semaphore – integer value can range over an
unrestricted domain
⮚ Counting semaphore is initialized to the no. of resources available.

⮚ Binary semaphore – integer value can range only between 0 and 1


⮚ With semaphores we can solve various synchronization problems
Semaphore Usage Example
⮚ Solution to the CS Problem [ only one resource- Binary semaphore]
⮚ Create a semaphore “S” initialized to 1
wait(S);
CS
signal(S);
⮚ Consider processes P1 and P2 with two statements S1 and S2, and the
requirement that S1 happen before S2
⮚ Create a semaphore “S” initialized to 0
P1:
S1;
signal(S);
P2:
wait(S);
S2;
Semaphore Implementation
⮚ Wait ()
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
⮚ Busy waiting wastes CPU cycles by unnecessarily executing the
while loop in the entry section.
⮚ Note that applications may spend lots of time in critical
sections, so this is not a good solution
Semaphore Implementation with No Busy Waiting
⮚ With each semaphore there is an associated waiting queue
⮚ Each entry in a waiting queue has two data items:
⮚ Value (of type integer)
⮚ Pointer to next record in the list
⮚ Two operations:
⮚ Sleep – place the process invoking the operation on the appropriate
waiting queue
⮚ wakeup – remove one of processes in the waiting queue and place
it in the ready queue
Implementation with No Busy Waiting

Semaphore definition:

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

    wait(semaphore *S)
    {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            sleep();
        }
    }

    signal(semaphore *S)
    {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }
Problem
⮚ A counting semaphore was initialized to 5. Then 6P (wait) operations and 4V
(signal) operations were completed on this semaphore. The resulting value of the
semaphore is

⮚ Answer : 5-6+4 = 3
Problem
The following two functions P1 and P2 that share a variable B with an initial value
of 2 execute concurrently.
P1()
{
C = B – 1;
B = 2*C;
}
P2()
{
D = 2 * B;
B = D - 1;
}
The number of distinct values that B can possibly take after the execution is
Solution

    C = B - 1;   // C = 1
    B = 2 * C;   // B = 2
    D = 2 * B;   // D = 4
    B = D - 1;   // B = 3

• If we execute P2 after P1, then B = 3
• If we execute P1 after P2, then B = 4
• If preemption occurs between P1 & P2, then B = 2 (preemption from P1 to
P2) or B = 3 (preemption from P2 to P1); of the values 2 & 3, only one
will be saved in B.
• So the total number of distinct values that B can possibly take after the
execution is 3.
Problem

(problem figure omitted)

Solution

flag[j] = true and turn = j


Problem
⮚ Consider a non-negative counting semaphore S. The operation P(S) decrements S,
and V(S) increments S. During an execution, 20 P(S) operations and 12 V(S)
operations are issued in some order. The largest initial value of S for which at
least one P(S) operation will remain blocked is _________.
Solution

⮚ Let the largest initial value of S be X. For at least one P(S) operation to
remain blocked, the semaphore must end at -1; the negative value of a
counting semaphore indicates the number of processes in the suspended
(blocked) list.
⮚ Take any sequence of 20 P and 12 V operations; at least one process must
always remain blocked.
So, X - 20 + 12 = -1
Here P(S) = 20 and V(S) = 12
X = 7
Problem
⮚ Consider the methods used by processes P1 and P2 for accessing their critical
sections whenever needed, as given below. The initial values of shared boolean
variables S1 and S2 are randomly assigned. Note : P1 may not be interested in
entering the CS.

⮚ Which one of the following statements describes the properties achieved?

A Mutual exclusion but not progress


B Progress but not mutual exclusion
C Neither mutual exclusion nor progress
D Both mutual exclusion and progress
Solution
⮚ In this mutual exclusion is satisfied because at any point of time either S1 = S2
or S1 ≠ S2, but not both.
⮚ But progress is not satisfied. Suppose S1 = 1 and S2 = 0, P1 is not
interested in entering the critical section, and P2 wants to enter. P2
will not be able to, because until P1 enters the critical section, S1
will not become equal to S2.
⮚ So a process that is not interested in entering the critical section can
prevent an interested process from entering; hence progress is not
satisfied.
Problems with Semaphores
⮚ Incorrect use of semaphore operations:

⮚ signal(mutex) …. wait(mutex)

⮚ wait(mutex) … wait(mutex)

⮚ Omitting of wait (mutex) and/or signal (mutex)

⮚ These – and others – are examples of what can occur when semaphores
and other synchronization tools are used incorrectly.
Advantages of Monitors

⮚ Monitors are easier to implement than semaphores.
⮚ Mutual exclusion in monitors is automatic while in semaphores,
mutual exclusion needs to be implemented explicitly.
⮚ Monitors can overcome the timing errors that occur while using
semaphores.
⮚ Shared variables are global to all processes in the monitor while
shared variables are hidden in semaphores.
Monitors
⮚ A high-level abstraction that provides a convenient and effective
mechanism for process synchronization

⮚ ADT—encapsulates data with a set of functions to operate on that data


that are independent of any specific implementation of the ADT.

⮚ Abstract data type, internal variables only accessible by code within the
procedure
⮚ A monitor type is an ADT that includes a set of programmer-defined
operations that are provided with mutual exclusion within the monitor

⮚ Only one process may be active within the monitor at a time


Pseudocode Syntax of a Monitor

(figures: pseudocode syntax of a monitor; schematic view of a monitor)
Condition Variables
⮚ Additional synchronization mechanisms are provided by the
condition construct.
⮚ condition x, y;
⮚ Two operations are allowed on a condition variable:
⮚ x.wait() – the process invoking this operation is suspended
until another process invokes x.signal();
⮚ x.signal() – resumes one of processes (if any) that invoked
x.wait()
⮚ If no process is suspended in x.wait() on the variable, then x.signal()
has no effect on the variable

(figure: monitor with condition variables)
Usage of Condition Variable Example
⮚ Consider P1 and P2 that need to execute two statements S1
and S2 and the requirement that S1 to happen before S2
⮚ Create a monitor with two procedures F1 and F2 that
are invoked by P1 and P2 respectively
⮚ One condition variable “x” initialized to 0
⮚ One Boolean variable “done”
⮚ F1:
S1;
done = true;
x.signal();
⮚ F2:
if done = false
x.wait()
S2;
Condition Variables Choices
⮚ If process P invokes x.signal(), and process Q is suspended in x.wait(), what
should happen next?
⮚ Both Q and P cannot execute in parallel. If Q is resumed, then P must wait
⮚ Options include
⮚ Signal and wait – P waits until Q either leaves the monitor or it waits for
another condition
⮚ Signal and continue – Q waits until P either leaves the monitor or it waits
for another condition
Course Outcome
On completion of the course, the students will be able to
⮚ CO1 : explain operating system structure, services and system calls and identify
appropriate system calls for a given service
⮚ CO2 : make use of process management strategies for scheduling processes
⮚ CO3 : apply different methods for process synchronization and deadlock
handling
⮚ CO4 : make use of memory management strategies and apply page replacement
policies to address demand paging
⮚ CO5 : apply various disk scheduling algorithms and elaborate file systems
concepts
