Inter Process Communication
and
Synchronization
Chester Rebeiro
IIT Madras
1
Inter Process Communication
• Advantages of Inter Process Communication (IPC)
– Information sharing
– Modularity/Convenience
• 3 ways
– Shared memory
– Message Passing
– Signals
2
Shared Memory
• One process creates an area in RAM which the other process can access
• Both processes can access shared memory like regular working memory
  – Reading/writing is like regular reading/writing
  – Fast
• Limitation : error prone; needs synchronization between processes
[Figure: Process 1 and Process 2 in userspace, both mapping the same shared memory region]
3
Shared Memory in Linux
• int shmget(key, size, flags)
  – Create a shared memory segment
  – Returns the ID of the segment : shmid
  – key : unique identifier of the shared memory segment
  – size : size of the shared memory (rounded up to a multiple of PAGE_SIZE)
• void *shmat(shmid, addr, flags)
  – Attach the shared memory segment shmid to the address space of the calling process
  – addr : requested attach address (NULL lets the kernel choose); returns a pointer to the attached memory
• int shmdt(addr)
  – Detach the shared memory segment attached at addr
4
Example
server.c client.c
5
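The server.c / client.c listings were images on the original slides. A minimal sketch of how they might look (the key value 1234, the segment size, and the message are illustrative; error checking is omitted):

    /* server.c : create the segment and write a message (illustrative sketch) */
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        key_t key = 1234;                                 /* illustrative key */
        int shmid = shmget(key, 4096, IPC_CREAT | 0666);  /* create a 4 KB segment */
        char *mem = shmat(shmid, NULL, 0);                /* attach; kernel picks the address */
        strcpy(mem, "hello from server");                 /* plain memory write */
        shmdt(mem);                                       /* detach (segment persists) */
        return 0;
    }

    /* client.c : attach to the same key and read the message */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        key_t key = 1234;
        int shmid = shmget(key, 4096, 0666);              /* look up the existing segment */
        char *mem = shmat(shmid, NULL, 0);
        printf("client read: %s\n", mem);
        shmdt(mem);
        return 0;
    }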
Message Passing
• Shared memory is created in the kernel
• System calls such as send and receive are used for communication
  – Cooperating : each send must have a matching receive
• Advantage : explicit sharing, less error prone
• Limitation : slow. Each call involves marshalling / demarshalling of information
[Figure: Process 1 and Process 2 in userspace communicating through shared memory inside the kernel]
6
Pipes
– Always between parent and child
– Always unidirectional
– Accessed by two associated file descriptors:
• fd[0] for reading from pipe
• fd[1] for writing to the pipe
7
Pipes for two-way communication
8
Example
(child process sending a string to parent)
9
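The listing on this slide was an image; a minimal sketch of the idea (buffer size and message are illustrative):

    /* Sketch: child writes a string into the pipe, parent reads it */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                     /* fd[0]: read end, fd[1]: write end */

        if (fork() == 0) {            /* child */
            close(fd[0]);             /* child only writes */
            const char *msg = "hello parent";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        } else {                      /* parent */
            char buf[64];
            close(fd[1]);             /* parent only reads */
            read(fd[0], buf, sizeof(buf));
            printf("parent received: %s\n", buf);
            close(fd[0]);
            wait(NULL);
        }
        return 0;
    }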
Signals
• Asynchronous unidirectional communication between processes
• A signal is a small integer
  – eg. 9: SIGKILL, 11: SIGSEGV (segmentation fault)
• Send a signal to a process
  – kill(pid, signum)
• Register a handler for a signal
  – sighandler_t signal(signum, handler);
  – The default action is taken if no handler is defined
ref : http://www.comptechdoc.org/os/linux/programming/linux_pgsignals.html
10
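A minimal sketch of installing a handler and sending a signal (SIGUSR1 is chosen only for illustration):

    /* Sketch: install a handler for SIGUSR1 and send it to ourselves */
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    void handler(int signum)
    {
        /* note: printf is not async-signal-safe; used only to keep the sketch short */
        printf("caught signal %d\n", signum);
    }

    int main(void)
    {
        signal(SIGUSR1, handler);     /* register the handler */
        kill(getpid(), SIGUSR1);      /* send the signal to this process; handler runs */
        return 0;
    }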
Synchronization
Chester Rebeiro
IIT Madras
11
Motivating Scenario
shared variable: int counter = 5;

program 0                 program 1
{                         {
    *                         *
    *                         *
    counter++                 counter--
    *                         *
}                         }

• Single core
  – Program 0 and program 1 are executing at the same time but sharing a single core
[Figure: CPU usage over time, alternating between the two programs]
12
Motivating Scenario
[Slide build: the same shared-counter programs, stepping through the execution of counter++ and counter--]
13
Motivating Scenario
[Slide build: another possible interleaving of counter++ and counter-- on the shared counter]
14
Race Conditions
• Race conditions
  – A situation where several processes access and manipulate the same data (the critical section)
  – The outcome depends on the order in which the accesses take place
  – Prevent race conditions by synchronization
    • Ensure only one process at a time manipulates the critical data

{
    *
    *
    counter++        <- critical section
    *
}
No more than one process should execute in the critical section at a time
15
Race Conditions in Multicore
shared variable: int counter = 5;

program 0                 program 1
{                         {
    *                         *
    *                         *
    counter++                 counter--
    *                         *
}                         }

• Multi core
  – Program 0 and program 1 are executing at the same time on different cores
[Figure: CPU usage of the two cores with respect to time]
16
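A small pthreads program (not from the slides) that exhibits such a race: two threads update the shared counter without synchronization, so the final value is usually not the expected 0. Iteration counts are illustrative.

    /* Sketch: unsynchronized updates to a shared counter (race condition) */
    #include <stdio.h>
    #include <pthread.h>

    int counter = 0;                       /* shared variable */

    void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                     /* read-modify-write, not atomic */
        return NULL;
    }

    void *decrement(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter--;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, decrement, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d (expected 0)\n", counter);  /* usually nonzero */
        return 0;
    }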
Critical Section
• Requirements
– Mutual Exclusion : No more than one process in
critical section at a given time
– Progress : When no process is in the critical section,
any process that requests entry into the critical
section must be permitted without any delay
– No starvation (bounded wait) : there is an upper bound on the number of times
other processes can enter the critical section while a process is waiting to
enter it
17
Locks and Unlocks
shared variables: int counter = 5; lock_t L;

program 0                 program 1
{                         {
    *                         *
    *                         *
    lock(L)                   lock(L)
    counter++                 counter--
    unlock(L)                 unlock(L)
    *                         *
}                         }

• lock(L) : acquire lock L exclusively
  – Only the process holding L can access the critical section
• unlock(L) : release exclusive access to lock L
  – Permits other processes to access the critical section
18
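As a concrete POSIX illustration (my own sketch, not from the slides), the same counter protected by a pthread mutex:

    /* Sketch: protecting the shared counter with a pthread mutex */
    #include <stdio.h>
    #include <pthread.h>

    int counter = 5;                          /* shared variable */
    pthread_mutex_t L = PTHREAD_MUTEX_INITIALIZER;

    void *program0(void *arg)
    {
        pthread_mutex_lock(&L);               /* lock(L) */
        counter++;                            /* critical section */
        pthread_mutex_unlock(&L);             /* unlock(L) */
        return NULL;
    }

    void *program1(void *arg)
    {
        pthread_mutex_lock(&L);
        counter--;
        pthread_mutex_unlock(&L);
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, program0, NULL);
        pthread_create(&t1, NULL, program1, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d\n", counter);    /* always 5 */
        return 0;
    }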
When to have Locking?
• Single instructions by themselves are
atomic
eg. add %eax, %ebx
19
How to Implement Locking
20
Using Interrupts
Process 1 and Process 2 both run:

while(1){
    disable interrupts()     // lock
    critical section
    enable interrupts()      // unlock
    other code
}

• Simple
  – When interrupts are disabled, context switches won't happen
• Requires privileges
  – User processes generally cannot disable interrupts
• Not suited for multicore systems
21
Software Solution (Attempt 1)
Shared: int turn = 1;

Process 1                            Process 2
while(1){                            while(1){
    while(turn == 2);  // lock           while(turn == 1);  // lock
    critical section                     critical section
    turn = 2;          // unlock         turn = 1;          // unlock
    other code                           other code
}                                    }
22
Software Solution (Attempt 2)
Shared: p1_inside = False, p2_inside = False

Process 1                                 Process 2
while(1){                                 while(1){
    while(p2_inside == True);  // lock        while(p1_inside == True);
    p1_inside = True;                         p2_inside = True;
    critical section                          critical section
    p1_inside = False;         // unlock      p2_inside = False;
    other code                                other code
}                                         }
23
Attempt 2: No mutual exclusion
Trace over time (p1_inside = False, p2_inside = False initially):
    P1: while(p2_inside == True);   // passes, since p2_inside is False
    --- context switch ---
    P2: while(p1_inside == True);   // also passes, since p1_inside is still False
Because each flag is set only after the while check, both p1 and p2 can enter the critical section at the same time
24
Software Solution (Attempt 3)
Globally defined: p1_wants_to_enter, p2_wants_to_enter

Process 1                                        Process 2
while(1){                                        while(1){
    p1_wants_to_enter = True          // lock        p2_wants_to_enter = True
    while(p2_wants_to_enter == True);                while(p1_wants_to_enter == True);
    critical section                                 critical section
    p1_wants_to_enter = False         // unlock      p2_wants_to_enter = False
    other code                                       other code
}                                                }
25
Attempt 3: No Progress
Trace over time (both flags False initially):
    P1: p1_wants_to_enter = True
    --- context switch ---
    P2: p2_wants_to_enter = True
There is a tie!!! Both flags are now True, so each process spins forever in its while loop and neither enters the critical section
26
Peterson's Solution
globally defined: p2_wants_to_enter, p1_wants_to_enter, favored

Process 1
while(1){
    p1_wants_to_enter = True
    favored = 2        // lock: if the second process wants to enter, favor it (be nice !!!)
27
Peterson's Solution
globally defined: p2_wants_to_enter, p1_wants_to_enter, favored

Process 1                          Process 2
while(1){                          while(1){
    p1_wants_to_enter = True           p2_wants_to_enter = True
    favored = 2                        favored = 1
28
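The busy-wait condition and the unlock step did not survive extraction. Below is a runnable sketch of the standard two-process Peterson construction (my own C rendering; the thread setup and iteration count are illustrative). C11 sequentially consistent atomics stand in for the slides' plain shared variables so the compiler and CPU cannot reorder the flag and favored accesses.

    /* Sketch: Peterson's algorithm for two threads, using C11 atomics */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdatomic.h>
    #include <pthread.h>

    atomic_bool wants_to_enter[2];        /* p1_wants_to_enter, p2_wants_to_enter */
    atomic_int  favored;
    int counter = 0;                      /* shared data protected by the lock */

    void lock(int self)
    {
        int other = 1 - self;
        atomic_store(&wants_to_enter[self], true);
        atomic_store(&favored, other);    /* be nice: favor the other process */
        while (atomic_load(&wants_to_enter[other]) && atomic_load(&favored) == other)
            ;                             /* busy wait */
    }

    void unlock(int self)
    {
        atomic_store(&wants_to_enter[self], false);
    }

    void *worker(void *arg)
    {
        int self = *(int *)arg;
        for (int i = 0; i < 100000; i++) {
            lock(self);
            counter++;                    /* critical section */
            unlock(self);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int id[2] = {0, 1};
        pthread_create(&t[0], NULL, worker, &id[0]);
        pthread_create(&t[1], NULL, worker, &id[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("counter = %d (expected 200000)\n", counter);
        return 0;
    }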
Bakery Algorithm
• Synchronization between N > 2 processes
• By Leslie Lamport
[Figure: bakery token counter: "Eat when 196 displayed"]
http://research.microsoft.com/en-us/um/people/lamport/pubs/bakery.pdf
29
Simplified Bakery Algorithm
• Processes numbered 0 to N-1
• num is an array of N integers (initially all 0)
  – Each entry corresponds to a process

lock(i){
    num[i] = MAX(num[0], num[1], …., num[N-1]) + 1   // the doorway: must be atomic, to ensure
                                                     // two processes do not get the same token
    for(p = 0; p < N; ++p){
        while (num[p] != 0 and num[p] < num[i]);
    }
}

critical section

unlock(i){
    num[i] = 0;
}
30
Original Bakery Algorithm
• Without atomic operation assumptions
• Introduce an array of N Booleans: choosing, initially all values False

lock(i){
    choosing[i] = True
    num[i] = MAX(num[0], num[1], …., num[N-1]) + 1   // doorway
    choosing[i] = False
    for(p = 0; p < N; ++p){
        while (choosing[p]);                         // ensures process p is not at the doorway
        while (num[p] != 0 and (num[p],p) < (num[i],i));
    }
}

critical section

unlock(i){
    num[i] = 0;
}

(a, b) < (c, d) is equivalent to: (a < c) or ((a == c) and (b < d))
31
Analyze this
• Does this scheme provide mutual exclusion?

Process 1                            Process 2
while(1){                            while(1){
    while(lock != 0);                    while(lock != 0);
    lock = 1;   // lock                  lock = 1;   // lock
    critical section                     critical section
    lock = 0;   // unlock                lock = 0;   // unlock
    other code                           other code
}                                    }

No. With lock = 0 initially, consider this interleaving:
    P1: while(lock != 0);
    P2: while(lock != 0);
    P2: lock = 1;
    P1: lock = 1;
    …. Both processes are now in the critical section
32
If only…
• We could make this operation atomic

Process 1
while(1){
    while(lock != 0);    // <- make these two steps
    lock = 1;   // lock  // <- one atomic operation
    critical section
    lock = 0;   // unlock
    other code
}
33
Hardware Support
(Test & Set Instruction)
35
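The slide's illustration of the instruction did not survive extraction. The sketch below expresses the semantics of a test-and-set / atomic-exchange operation using C11 atomics; on x86, xchg %eax, (addr) provides the same effect.

    /* Sketch: test-and-set semantics, expressed with a C11 atomic exchange */
    #include <stdatomic.h>

    int xchg(atomic_int *addr, int value)
    {
        /* atomically: old = *addr; *addr = value; return old; */
        return atomic_exchange(addr, value);
    }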
High Level Constructs
• Spinlock
• Mutex
• Semaphore
36
Spinlocks Usage

Process 1                  Process 2
acquire(&locked)           acquire(&locked)
critical section           critical section
release(&locked)           release(&locked)

int xchg(addr, value){
    %eax = value
    xchg %eax, (addr)      // atomically swap %eax and the word at addr
}

void acquire(int *locked){
    while(1){
        if(xchg(locked, 1) == 0)
            break;
    }
}

void release(int *locked){
    *locked = 0;
}

• One process will acquire the lock
• The other will wait in a loop, repeatedly checking if the lock is available
• The lock becomes available when the former process releases it
38
More issues with Spinlocks
xchg %eax, X
[Figure: CPU0 and CPU1, each with an L1 cache, connected to memory holding X; the cache coherence protocol and the #LOCK signal are involved in every xchg]

• No caching of X possible; all xchg operations are bus transactions
  – The CPU asserts the LOCK signal to inform that there is a 'locked' memory access
• The acquire function in the spinlock invokes xchg in a loop … each iteration is a bus transaction …. huge performance hit
39
A better acquire

int xchg(addr, value){
    %eax = value
    xchg %eax, (addr)
}
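The improved acquire itself is missing from the extracted slide. Below is a sketch of the usual improvement (test-and-test-and-set), reusing the xchg() helper above: spin on an ordinary cached read, and attempt the bus-locking xchg only when the lock looks free.

    /* Sketch: test-and-test-and-set acquire */
    void acquire(int *locked)
    {
        while (1) {
            while (*locked != 0)          /* read from the local cache; no bus traffic */
                ;
            if (xchg(locked, 1) == 0)     /* lock looked free: try the atomic exchange */
                break;                    /* success: we hold the lock */
            /* someone else grabbed it between the read and the xchg; spin again */
        }
    }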
40
Spinlocks
(when should it be used?)
• Characteristic : busy waiting
– Useful for short critical sections, where much CPU
time is not wasted waiting
• eg. To increment a counter, access an array element, etc.
41
Spinlock in pthreads
• pthread_spin_lock() : lock
• pthread_spin_unlock() : unlock
• pthread_spin_init() : create/initialize a spinlock
• pthread_spin_destroy() : destroy a spinlock
42
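A minimal usage sketch (my own example, not from the slides; iteration counts are illustrative):

    /* Sketch: protecting a counter with a pthread spinlock */
    #include <stdio.h>
    #include <pthread.h>

    int counter = 0;
    pthread_spinlock_t spin;

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_spin_lock(&spin);
            counter++;                     /* short critical section */
            pthread_spin_unlock(&spin);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_spin_destroy(&spin);
        printf("counter = %d\n", counter); /* 200000 */
        return 0;
    }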
Mutexes
• Can we do better than busy waiting?
  – If the critical section is locked, then yield the CPU
    • Go to a SLEEP state
  – While unlocking, wake up a sleeping process

int xchg(addr, value){
    %eax = value
    xchg %eax, (addr)
}

void lock(int *locked){
    while(1){
        if(xchg(locked, 1) == 0)
            break;
        else
            sleep();
    }
}
44
Thundering Herd Problem
• When the lock is released, every sleeping process is woken up; all of them contend for the lock, only one wins, and the rest go back to sleep, so the work of waking them is wasted
• The Solution : keep the waiting processes on a queue and wake up only one of them when the lock is released

int xchg(addr, value){
    %eax = value
    xchg %eax, (addr)
}
46
Locks and Priorities
• What happens when a high priority task requests a lock while a low priority task is in the critical section?
  – Priority Inversion
  – Possible solution
    • Priority Inheritance

Producer Consumer
48
Producer-Consumer Code
Buffer of size N
int count = 0;
Mutex mutex, empty, full;

1  void producer(){                       1  void consumer(){
2    while(TRUE){                         2    while(TRUE){
3      item = produce_item();             3      if (count == 0) sleep(full);
4      if (count == N) sleep(empty);      4      lock(mutex);
5      lock(mutex);                       5      item = remove_item(); // from buffer
6      insert_item(item); // into buffer  6      count--;
7      count++;                           7      unlock(mutex);
8      unlock(mutex);                     8      if (count == N-1) wakeup(empty);
9      if (count == 1) wakeup(full);      9      consume_item(item);
10   }                                    10   }
   }                                         }
49
Lost Wakeups
• Consider the following sequence of instructions (line numbers refer to the producer-consumer code above)
• Assume the buffer is initially empty

    consumer 3:  read count value        // count = 0
    --- context switch ---
    producer 3:  item = produce_item();
    producer 5:  lock(mutex);
    producer 6:  insert_item(item);      // into buffer
    producer 7:  count++;                // count = 1
    producer 8:  unlock(mutex);
    producer 9:  test (count == 1)       // yes
    producer 9:  signal(full);
    --- context switch ---
    consumer 3:  test (count == 0)       // yes, using the stale value read earlier
    consumer 3:  wait();

The producer's wakeup (signal) is lost because the consumer has not yet gone to sleep; when the consumer resumes, it still believes count == 0 and sleeps forever
51
Producer-Consumer with Semaphores
Buffer of size N
Semaphores: full = 0, empty = N, and a mutex

void producer(){                           void consumer(){
    while(TRUE){                               while(TRUE){
        item = produce_item();                     down(full);
        down(empty);                               wait(mutex);
        wait(mutex);                               item = remove_item(); // from buffer
        insert_item(item); // into buffer          signal(mutex);
        signal(mutex);                             up(empty);
        up(full);                                  consume_item(item);
    }                                          }
}                                          }
52
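A runnable POSIX sketch of the same structure (buffer size, item values, and iteration counts are illustrative):

    /* Sketch: bounded-buffer producer/consumer with POSIX semaphores */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 8                                  /* buffer size (illustrative) */

    int buffer[N], in = 0, out = 0;
    sem_t empty, full;                           /* counting semaphores */
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    void *producer(void *arg)
    {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty);                    /* down(empty): wait for a free slot */
            pthread_mutex_lock(&mutex);
            buffer[in] = item;                   /* insert_item */
            in = (in + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&full);                     /* up(full): one more filled slot */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full);                     /* down(full): wait for a filled slot */
            pthread_mutex_lock(&mutex);
            int item = buffer[out];              /* remove_item */
            out = (out + 1) % N;
            pthread_mutex_unlock(&mutex);
            sem_post(&empty);                    /* up(empty): one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty, 0, N);                  /* N free slots initially */
        sem_init(&full, 0, 0);                   /* no filled slots initially */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&empty);
        sem_destroy(&full);
        return 0;
    }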
POSIX semaphores
• sem_init
• sem_wait
• sem_post
• sem_getvalue
• sem_destroy
53
Dining Philosophers Problem
[Figure: five philosophers (A–E) seated around a table with five forks]
54
First Try
[Figure: philosophers A–E around the table, forks numbered 1–5]

#define N 5

void philosopher(int i){
    while(TRUE){
        think();                 // for some_time
        take_fork(i);
        take_fork((i + 1) % N);
        eat();
        put_fork(i);
        put_fork((i + 1) % N);
    }
}

What happens if only philosophers A and C are always given priority?
B, D, and E starve… so the scheme needs to be fair
55
First Try (continued)
(same code as above)

What happens if all philosophers decide to pick up their left forks at the same time?
Possible starvation due to deadlock
56
Deadlocks
• A situation where programs wait indefinitely without making any progress
• Each program is waiting for an event that another process can cause
57
Second Try
• Take fork i, then check if fork (i+1)%N is available
• Imagine:
  – All philosophers start at the same time
  – Run simultaneously
  – And think for the same time
• This could lead to the philosophers continuously taking a fork and putting it down, so no one ever eats (a livelock rather than a deadlock)
• A better alternative
  – Philosophers wait a random time before take_fork(i)
  – Less likelihood of deadlock
  – Used in schemes such as Ethernet

#define N 5

void philosopher(int i){
    while(TRUE){
        think();
        take_fork(i);
        if (available((i + 1) % N)){
            take_fork((i + 1) % N);
            eat();
            put_fork(i);
            put_fork((i + 1) % N);
        } else {
            put_fork(i);
        }
    }
}
58
Solution using Mutex
• Protect the critical section with a mutex
• Prevents deadlock
• But has performance issues
  – Only one philosopher can eat at a time

#define N 5

void philosopher(int i){
    while(TRUE){
        think();          // for some_time
        wait(mutex);
        take_fork(i);
        take_fork((i + 1) % N);
        eat();
        put_fork(i);
        put_fork((i + 1) % N);
        signal(mutex);
    }
}
59
Solution to Dining Philosophers
Uses N semaphores (s[0], s[1], …., s[N-1]), all initialized to 0, and a mutex
A philosopher has 3 states: HUNGRY, EATING, THINKING
A philosopher can only move to the EATING state if neither neighbor is eating

void philosopher(int i){        void take_forks(int i){       void put_forks(int i){
    while(TRUE){                    lock(mutex);                  lock(mutex);
        think();                    state[i] = HUNGRY;            state[i] = THINKING;
        take_forks(i);              test(i);                      test(LEFT);
        eat();                      unlock(mutex);                test(RIGHT);
        put_forks(i);               down(s[i]);                   unlock(mutex);
    }                           }                             }
}
61
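The test() helper is not shown on the extracted slide; the sketch below fills it in along the lines of the standard solution, where LEFT and RIGHT denote the indices of philosopher i's two neighbours (for example LEFT = (i+N-1)%N, RIGHT = (i+1)%N):

    void test(int i){
        // philosopher i may eat only if it is hungry and neither neighbour is eating
        if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING){
            state[i] = EATING;
            up(s[i]);          // wake philosopher i (it may be blocked in down(s[i]))
        }
    }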
Deadlocks
Resource Allocation Graph
[Figure: A holds resource R1 and waits for resource R2; B holds resource R2 and waits for resource R1]

A Deadlock Arises:
Deadlock : a set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.
62
Conditions for Resource
Deadlocks
1. Mutual Exclusion
– Each resource is either available or currently assigned to exactly one
process
2. Hold and wait
   – A process holding a resource can request another resource
3. No preemption
   – Resources previously granted cannot be forcibly taken away from a process
4. Circular wait
   – There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain
63
Deadlocks :
(A Chance Event)
• The ordering of resource requests and allocations is nondeterministic, thus deadlock occurrence is also probabilistic
[Figure: an interleaving of requests and grants in which deadlock occurs]
64
No deadlock occurrence
[Figure: an alternative interleaving in which B can be granted S after step q, so no deadlock occurs]
65
Should Deadlocks be handled?
• Preventing / detecting deadlocks could be tedious
• Can we live without detecting / preventing deadlocks?
– What is the probability of occurrence?
– What are the consequences of a deadlock? (How critical is a
deadlock?)
66
Handling Deadlocks
• Detection and Recovery
• Avoidance
• Prevention
67
Deadlock detection
• How can an OS detect when there is a
deadlock?
• OS needs to keep track of
– Current resource allocation
• Which process has which resource
– Current resource requests
  • Which process is waiting for which resource
• Use this information to detect deadlocks
68
Deadlock Detection
• Deadlock detection with one resource of each type
• Find cycles in resource graph
69
Deadlock Detection
• Deadlock detection with multiple resources of each type
[Figure: Current Allocation Matrix (who has what!!) and Request Matrix (who is waiting for what!!) for processes P1, P2, P3]
Process Pi holds Ci resources and requests Ri resources, where i = 1 to 3
The goal is to check if there is any sequence of allocations by which all current requests can be met. If so, there is no deadlock.
70
Deadlock Detection
• Deadlock detection with multiple resources of each type
[Figure: Current Allocation Matrix and Request Matrix: P1 cannot be satisfied, P2 cannot be satisfied, but P3 can be satisfied]
71
Deadlock Detection
• Deadlock detection with multiple resources of each type
[Figure: a second example of the Current Allocation Matrix and Request Matrix in which none of P1, P2, or P3 can be satisfied]
Process Pi holds Ci resources and requests Ri resources, where i = 1 to 3
Deadlock is detected, as none of the requests can be satisfied
73
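A sketch of the matrix-based detection algorithm described above (the matrix values are illustrative, not the ones from the slides): repeatedly look for a process whose request row fits within the available vector; if one is found, assume it runs to completion and releases what it holds; any process never marked done is deadlocked.

    /* Sketch: deadlock detection with multiple resources of each type */
    #include <stdio.h>
    #include <stdbool.h>

    #define P 3            /* number of processes (illustrative) */
    #define R 4            /* number of resource types (illustrative) */

    int alloc[P][R]   = {{0,0,1,0}, {2,0,0,1}, {0,1,2,0}};  /* who has what */
    int request[P][R] = {{2,0,0,1}, {1,0,1,0}, {2,1,0,0}};  /* who is waiting for what */
    int avail[R]      = {2,1,0,0};                          /* currently free */

    int main(void)
    {
        bool done[P] = {false};
        bool progress = true;

        while (progress) {
            progress = false;
            for (int i = 0; i < P; i++) {
                if (done[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (request[i][j] > avail[j]) { can_run = false; break; }
                if (can_run) {
                    /* pretend Pi completes and releases its resources */
                    for (int j = 0; j < R; j++)
                        avail[j] += alloc[i][j];
                    done[i] = true;
                    progress = true;
                }
            }
        }

        bool deadlock = false;
        for (int i = 0; i < P; i++)
            if (!done[i]) { printf("P%d is deadlocked\n", i + 1); deadlock = true; }
        if (!deadlock)
            printf("no deadlock\n");
        return 0;
    }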
Deadlock Recovery
What should the OS do when it detects a deadlock?
• Raise an alarm
– Tell users and administrator
• Preemption
– Take away a resource temporarily (frequently not possible)
• Rollback
– Checkpoint states and then rollback
• Kill low priority process
– Keep killing processes until deadlock is broken
– (or reset the entire system)
74
Deadlock Avoidance
• The system decides in advance whether allocating a resource to a process will lead to a deadlock
[Figure: joint progress of process 1 and process 2 instructions; the region where both processes request resource R1 and both request resource R2 is an unsafe state (may cause a deadlock)]
76
Example with a Banker
• Consider a banker with 3 clients (A, B, C)
  – Each client has a certain credit limit (the limits total 20 units)
  – The banker knows that the maximum credits will not all be used at once, so he keeps only 10 units

        Has   Max
    A    3     9
    B    2     4
    C    2     7
77
Safe State

Start                      Allocate 2 units to B      B completes
free : 3 units             free : 1 unit              free : 5 units
     Has  Max                   Has  Max                   Has  Max
  A   3    9                 A   3    9                 A   3    9
  B   2    4                 B   4    4                 B   0    -
  C   2    7                 C   2    7                 C   2    7

Allocate 5 to C            C completes                Allocate 6 units to A
free : 0 units             free : 7 units             free : 1 unit
     Has  Max                   Has  Max                   Has  Max
  A   3    9                 A   3    9                 A   9    9
  B   0    -                 B   0    -                 B   0    -
  C   7    7                 C   0    -                 C   0    -

A sequence exists in which every client reaches its maximum and completes, so the state is safe
79
Banker's Algorithm
(with a single resource)
When a request occurs:
  – if (is_system_in_a_safe_state)
      • grant the request
  – else
      • postpone it until later
[Figure: nested state regions: safe, unsafe, and deadlock]
80
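A sketch of the safety check for the single-resource banker, using the A/B/C example above (the helper and variable names are my own):

    /* Sketch: is the single-resource banker's state safe?
       A state is safe if some ordering lets every client finish. */
    #include <stdio.h>
    #include <stdbool.h>

    #define N 3                          /* clients A, B, C from the example */

    int has[N] = {3, 2, 2};              /* units currently lent out */
    int max[N] = {9, 4, 7};              /* each client's credit limit */
    int freeu  = 3;                      /* units the banker still holds */

    bool is_safe(void)
    {
        int avail = freeu;
        bool done[N] = {false};

        for (int finished = 0; finished < N; ) {
            int i;
            /* find a client whose remaining need fits in what is available */
            for (i = 0; i < N; i++)
                if (!done[i] && max[i] - has[i] <= avail)
                    break;
            if (i == N)
                return false;            /* nobody can finish: unsafe */
            avail += has[i];             /* client i finishes and repays its loan */
            done[i] = true;
            finished++;
        }
        return true;                     /* everyone can finish: safe */
    }

    int main(void)
    {
        printf("state is %s\n", is_safe() ? "safe" : "unsafe");
        return 0;
    }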
Deadlock Prevention
• Deadlock avoidance is not practical: it requires knowing the maximum requests of a process in advance
• Deadlock prevention
  – Prevent at least one of the 4 conditions:
2. Hold and wait
3. No preemption
4. Circular wait
81
Prevention
1. Preventing Mutual Exclusion
   – Not feasible in practice
   – But the OS can ensure that resources are optimally allocated
3. No preemption
   – Preempt the resources, for example by virtualizing them (eg. printer spools)
4. Circular wait
   – One way: a process holding a resource cannot request another one
   – Another: order requests in a sequential / hierarchical order
82
Hierarchical Ordering of
Resources
• Group resources into levels
(i.e. prioritize resources numerically)
• A process may only request resources at higher levels
than any resource it currently holds
• Resources may be released in any order
• eg.
– Semaphore s1, s2, s3 (with priorities in increasing order)
down(S1); down(S2); down(S3) ; allowed
down(S1); down(S3); down(S2); not allowed
83