Concurrency in Shared Memory Systems: Synchronization and Mutual Exclusion


Concurrency in Shared Memory Systems

Synchronization and Mutual Exclusion

Processes, Threads, Concurrency


Traditional processes are sequential: one instruction at a time is executed. Multithreaded processes may have several sequential threads that can execute concurrently. Processes (threads) are concurrent if their executions overlap in time: the start time of one occurs before the finish time of the other.

Concurrent Execution
On a uniprocessor, concurrency occurs when the CPU is switched from one process to another, so the instructions of several threads are interleaved (alternate). On a multiprocessor, execution of instructions in concurrent threads may be overlapped (occur at the same time) if the threads are running on separate processors.

Concurrent Execution
An interrupt, followed by a context switch, can take place between any two instructions. Hence the pattern of instruction overlapping and interleaving is unpredictable. Processes and threads execute asynchronously: we cannot predict whether event a in process i will occur before event b in process j.

Sharing and Concurrency


System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems. Example: Suppose two threads (1 and 2) have access to a shared (global) variable balance, which represents a bank account. Each thread has its own private (local) variable withdrawali, where i is the thread number.

Example
Let balance = 100, withdrawal1 = 50, and withdrawal2 = 75. Threadi will execute the following algorithm:

if (balance >= withdrawali)
    balance = balance - withdrawali
else
    print "Can't overdraw account!"

If thread1 executes first, balance will be 50 and thread2 can't withdraw funds. If thread2 executes first, balance will be 25 and thread1 can't withdraw funds.

But what if the two threads execute concurrently instead of sequentially? Break the algorithm down into machine-level operations:

if (balance >= withdrawali)
    balance = balance - withdrawali

becomes:
    move balance to a register
    compare the register to withdrawali
    branch if less-than
    register = register - withdrawali
    store the register contents in balance

Example-Multiprocessor
(A possible instruction sequence showing interleaved execution)

Thread 1:
  (2) Move balance to register1 (register1 = 100)
  (4) Compare register1 to withdraw1
  (5) register1 = register1 - withdraw1 (100 - 50)
  (7) Store register1 in balance (balance = 50)

Thread 2:
  (1) Move balance to register2 (register2 = 100)
  (3) Compare register2 to withdraw2
  (6) register2 = register2 - withdraw2 (100 - 75)
  (8) Store register2 in balance (balance = 25)

The numbers in parentheses show the order in which the interleaved instructions execute. The final value of balance is 25, so thread 1's withdrawal is lost even though both threads "succeeded."

Example Uniprocessor
(A possible instruction sequence showing interleaved execution)

Thread 1: Move balance to register (Reg. = 100)
Thread 1's time slice expires; its state is saved.
Thread 2: Move balance to reg.; test balance >= withdraw2; balance = balance - withdraw2 (100 - 75), so balance = 25
Thread 1 is re-scheduled; its state is restored (Reg. = 100)
Thread 1: balance = balance - withdraw1 (100 - 50)
Result: balance = 50, even though both withdrawals were made.
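
This lost-update behavior can be reproduced directly. Below is a minimal C sketch (not from the slides) using POSIX threads; the variable names follow the example, everything else is illustrative. With no synchronization, the final balance depends on the interleaving: it may be 25 or 50, and under the interleavings above both withdrawals appear to succeed while the balance reflects only one.

#include <pthread.h>
#include <stdio.h>

static int balance = 100;              /* shared account balance */

static void *withdraw(void *arg) {
    int amount = *(int *)arg;
    if (balance >= amount)             /* read of shared data */
        balance = balance - amount;    /* read-modify-write: not atomic */
    else
        printf("Can't overdraw account!\n");
    return NULL;
}

int main(void) {
    int withdrawal1 = 50, withdrawal2 = 75;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, &withdrawal1);
    pthread_create(&t2, NULL, withdraw, &withdrawal2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);   /* 25 or 50, depending on the interleaving */
    return 0;
}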

Race Conditions
The previous examples illustrate a race condition (data race): an undesirable condition that exists when several processes access shared data, and
at least one access is a write, and
the accesses are not mutually exclusive.

Race conditions can lead to inconsistent results.

Mutual Exclusion
Mutual exclusion forces serial resource access as opposed to concurrent access. When one thread locks a critical resource, no other thread can access it until the lock is released. Critical section (CS): code that accesses shared resources. Mutual exclusion guarantees that only one process/thread at a time can execute its critical section, with respect to a given resource.
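
As a concrete illustration (a sketch, not taken from the slides), a POSIX mutex can mark off a critical section: only one thread at a time can hold the lock, so the shared counter is updated serially even though the two threads run concurrently.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                              /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                    /* enter critical section */
        counter++;                                    /* access the shared resource */
        pthread_mutex_unlock(&lock);                  /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);               /* always 200000 with the lock held */
    return 0;
}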

Mutual Exclusion Requirements


It must ensure that only one process/thread at a time can access a shared resource. In addition, a good solution will ensure that:
if no thread is in the CS, a thread that wants to execute its CS must be allowed to do so;
when 2 or more threads want to enter their CSs, the decision cannot be postponed indefinitely;
every thread should have a chance to execute its critical section (no starvation).

Solution Model
Begin_mutual_exclusion    /* some mutex primitive */
execute critical section
End_mutual_exclusion      /* some mutex primitive */

The problem: how do we implement the mutex primitives?
Busy-wait solutions (e.g., the test-and-set operation, spinlocks of various sorts, Peterson's algorithm); a test-and-set spinlock sketch follows below
Semaphores (usually an OS feature; blocks the waiting process)
Monitors (a language feature, e.g., Java)
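
A minimal busy-wait lock built on an atomic test-and-set, sketched here with C11's atomic_flag (the use of C11 atomics is an assumption about the toolchain, not something in the slides): atomic_flag_test_and_set atomically sets the flag and returns its previous value, so the loop spins until it observes the flag clear.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void spin_lock(void) {
    /* test-and-set: atomically set the flag and get its old value.
       Spin (busy wait) while another thread already holds the lock. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                                          /* do nothing: busy wait */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);                 /* release the lock */
}

/* usage:
   spin_lock();
   ... critical section ...
   spin_unlock();
*/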

Semaphores
Definition: an integer variable on which processes can perform two indivisible operations, P() and V(), plus initialization. (P and V are sometimes called wait and signal.) Each semaphore has a wait queue associated with it. Semaphores are protected by the operating system.

Semaphores
Binary semaphore: the only values are 1 and 0.
Traditional semaphore: may be initialized to any non-negative value; can count down to zero.
Counting semaphore: P and V operations may reduce the semaphore value below 0, in which case the negative value records the number of blocked processes. (See the CS 490 textbook.)

Semaphores
Semaphores are used to synchronize and coordinate processes and/or threads.
Calling the P (wait) operation may cause a process to block.
Calling the V (signal) operation never causes a process to block, but may wake a process that was blocked by a previous P operation.

Traditional Semaphore

P(S): if S >= 1 then S = S - 1
      else block the process on the S queue

V(S): if some processes are blocked on the S queue then unblock a process
      else S = S + 1

Counting Semaphore

P(S): S = S - 1
      if (S < 0) then block the process on the S queue

V(S): S = S + 1
      if (S <= 0) then move a process from the S queue to the Ready queue
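
The traditional (non-negative) variant can be sketched in C with a pthread mutex and condition variable. This is an illustrative user-level implementation, not how any particular OS implements semaphores: the mutex makes P and V indivisible with respect to each other, and the while loop re-checks S after every wakeup.

#include <pthread.h>

typedef struct {
    int value;                     /* never goes negative (traditional semantics) */
    pthread_mutex_t m;             /* protects value: makes P and V indivisible */
    pthread_cond_t  c;             /* wait queue for blocked threads */
} my_sem_t;

void my_sem_init(my_sem_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void my_sem_P(my_sem_t *s) {       /* wait */
    pthread_mutex_lock(&s->m);
    while (s->value == 0)          /* S is 0: block on the semaphore's queue */
        pthread_cond_wait(&s->c, &s->m);
    s->value--;                    /* S = S - 1 */
    pthread_mutex_unlock(&s->m);
}

void my_sem_V(my_sem_t *s) {       /* signal */
    pthread_mutex_lock(&s->m);
    s->value++;                    /* S = S + 1 */
    pthread_cond_signal(&s->c);    /* unblock one waiter, if any */
    pthread_mutex_unlock(&s->m);
}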

Usage Mutual Exclusion


Using a semaphore to enforce mutual exclusion:

P(mutex)      // mutex initially = 1
execute CS;
V(mutex)

Each process that uses a shared resource must first check (using P) that no other process is in the critical section, and then must use V to release the critical section.

Bank Problem Revisited


Semaphore S = 1
Thread 1:
  P(S)
  Move balance to register1
  Compare register1 to withdraw1
  register1 = register1 - withdraw1
  Store register1 in balance
  V(S)

Thread 2:
  P(S)
  Move balance to register2
  Compare register2 to withdraw2
  register2 = register2 - withdraw2
  Store register2 in balance
  V(S)
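
The same pattern with POSIX semaphores, where sem_wait is P and sem_post is V; this is a sketch that reuses the example's names, and the rest is illustrative. With S initialized to 1, the check-and-update runs atomically, so exactly one withdrawal succeeds and the other is refused.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int balance = 100;   /* shared account */
static sem_t S;             /* binary semaphore guarding the critical section */

static void *withdraw(void *arg) {
    int amount = *(int *)arg;
    sem_wait(&S);                        /* P(S): enter critical section */
    if (balance >= amount)
        balance = balance - amount;
    else
        printf("Can't overdraw account!\n");
    sem_post(&S);                        /* V(S): leave critical section */
    return NULL;
}

int main(void) {
    int withdrawal1 = 50, withdrawal2 = 75;
    pthread_t t1, t2;
    sem_init(&S, 0, 1);                  /* S = 1 */
    pthread_create(&t1, NULL, withdraw, &withdrawal1);
    pthread_create(&t2, NULL, withdraw, &withdrawal2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);   /* 50 or 25: one withdrawal is refused */
    return 0;
}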

Example Uniprocessor
(with semaphore S initialized to 1)

Thread 1: P(S) - S is decremented: S = 0; T1 continues to execute
Thread 1: Move balance to register (Reg. = 100)
T1's time slice expires; its state is saved
Thread 2: P(S) - since S = 0, T2 is blocked
T1 is re-scheduled; its state is restored (Reg. = 100)
Thread 1: balance = balance - withdraw1 (100 - 50)
Thread 1: V(S) - Thread 2 returns to the run state; S remains 0
T2 resumes executing some time after T1 executes V(S)
Thread 2: Move balance to reg. (50); test balance >= withdraw2
Since !(50 >= 75), T2 does not make a withdrawal
Thread 2: V(S) - since no thread is waiting, S is set back to 1

Result: balance = 50, and no update is lost.

Critical Sections are Indivisible


The effect of mutual exclusion is to make a critical section appear to be indivisible, much like a hardware instruction. (Recall the atomic nature of a transaction.) In the bank example, once T1 enters its critical section, no other thread is allowed to operate on balance until T1 signals that it has left the CS. (This assumes that all users employ mutual exclusion.)

Implementing Semaphores: P and V Must Be Indivisible


Semaphore operations themselves must be indivisible, or atomic; i.e., they execute under mutual exclusion. Once the OS begins to execute a P or V operation, it cannot allow another P or V to begin on the same semaphore.

P and V Must Be Indivisible


The P operation must be indivisible; otherwise there is no guarantee that two processes won't test S at the same time and both find it equal to 1.

P(S): if S >= 1 then S = S - 1
      else block the process on the S queue

Two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently.

V(S): if some processes are blocked on the queue for S then unblock a process
      else S = S + 1

P(S): if S >= 1 then S = S - 1
      else block the process on the S queue
execute critical section
V(S): if processes are blocked on the queue for S then unblock a process
      else S = S + 1

Semaphore Usage Event Wait


(synchronization that isn't mutual exclusion)

Suppose a process P2 wants to wait for an event of some sort (call it A) which is to be executed by another process P1. Initialize a shared semaphore to 0. By executing a wait (P) on the semaphore, P2 will wait until P1 executes event A and signals, using the V operation.

Event Wait Example


semaphore signal = 0;

Process 1:
  ...
  execute event A
  V(signal)

Process 2:
  ...
  P(signal)
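
A runnable sketch of the same pattern with a POSIX semaphore initialized to 0; the thread names are illustrative and event A here is just a printf. The waiting thread blocks in sem_wait until the other thread has performed the event and posted.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t event_a;                  /* initialized to 0: "A has not happened yet" */

static void *process1(void *arg) {
    (void)arg;
    printf("P1: executing event A\n"); /* the event itself */
    sem_post(&event_a);                /* V(signal): announce that A happened */
    return NULL;
}

static void *process2(void *arg) {
    (void)arg;
    sem_wait(&event_a);                /* P(signal): block until A has happened */
    printf("P2: proceeding after event A\n");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&event_a, 0, 0);          /* semaphore signal = 0 */
    pthread_create(&t2, NULL, process2, NULL);
    pthread_create(&t1, NULL, process1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}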

Semaphores Are Not Perfect


The programmer must know something about the other processes using the semaphore.
Semaphores must be used carefully (be sure to use them when needed; don't leave out a V(), etc.).
It is hard to prove program correctness when using semaphores.

Other Synchronization Problems


(in addition to simple mutual exclusion)

Dining Philosophers: resource deadlock
Producer-consumer: buffering (of messages, input data, etc.)
Readers-writers: database or file sharing
  readers' priority
  writers' priority

Producer-Consumer
Producer processes and consumer processes share a (usually finite) pool of buffers. Producers add data to the pool; consumers remove data, in FIFO order.

Producer-Consumer Requirements
The processes are asynchronous. A solution must ensure that producers don't deposit data if the pool is full and consumers don't take data if the pool is empty. Access to the buffer pool must be mutually exclusive, since multiple consumers (or producers) may try to access the pool simultaneously.

Bounded Buffer P/C Algorithm


Initialization:  s = 1;  n = 0;  e = sizeofbuffer;

Producer:
  while (true) {
    produce v;
    P(e);        // wait for an empty buffer slot
    P(s);        // wait for buffer pool access
    append(v);
    V(s);        // release the buffer pool
    V(n);        // signal a full buffer
  }

Consumer:
  while (true) {
    P(n);        // wait for a full buffer
    P(s);        // wait for buffer pool access
    w := take();
    V(s);        // release the buffer pool
    V(e);        // signal an empty buffer slot
    consume(w);
  }
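
A compilable C sketch of the same algorithm using POSIX semaphores and a small circular buffer; the buffer size, item type, and item counts are illustrative choices, not part of the slides.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8                           /* sizeofbuffer */

static int buffer[BUF_SIZE];
static int in = 0, out = 0;                  /* circular buffer indices */
static sem_t s;                              /* mutual exclusion on the pool, init 1 */
static sem_t n;                              /* number of full slots, init 0 */
static sem_t e;                              /* number of empty slots, init BUF_SIZE */

static void append(int v) { buffer[in] = v; in = (in + 1) % BUF_SIZE; }
static int  take(void)    { int w = buffer[out]; out = (out + 1) % BUF_SIZE; return w; }

static void *producer(void *arg) {
    (void)arg;
    for (int v = 0; v < 20; v++) {           /* produce 20 items */
        sem_wait(&e);                        /* P(e): wait for an empty slot */
        sem_wait(&s);                        /* P(s): wait for buffer pool access */
        append(v);
        sem_post(&s);                        /* V(s): release the buffer pool */
        sem_post(&n);                        /* V(n): signal a full buffer */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&n);                        /* P(n): wait for a full buffer */
        sem_wait(&s);                        /* P(s): wait for buffer pool access */
        int w = take();
        sem_post(&s);                        /* V(s): release the buffer pool */
        sem_post(&e);                        /* V(e): signal an empty slot */
        printf("consumed %d\n", w);          /* consume(w) */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&s, 0, 1);
    sem_init(&n, 0, 0);
    sem_init(&e, 0, BUF_SIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}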

Readers and Writers Problem


Characteristics:
concurrent processes access a shared data area (files, a block of memory, a set of registers);
some processes only read information, others write (modify and add) information.

Restrictions:
Multiple readers may read concurrently, but when a writer is writing, there should be no other writers or readers.

Compare to Prod/Cons
Differences between Readers/Writers (R/W) and Producer/Consumer (P/C):
Data in P/C is ordered: it is placed into the buffer and retrieved according to a FIFO discipline, and every data item is read exactly once. In R/W, the same data may be read many times by many readers, or data may be written by a writer and changed before any reader reads it. No order is enforced on reads.

// Initialization code (done only once)
integer readcount = 0;
semaphore x = 1, wsem = 1;

procedure writer;
begin
  repeat
    P(wsem);
    write data;
    V(wsem);
  forever
end;

procedure reader;
begin
  repeat
    P(x);
    readcount = readcount + 1;
    if readcount == 1 then P(wsem);   // first reader locks out writers
    V(x);
    read data;
    P(x);
    readcount = readcount - 1;
    if readcount == 0 then V(wsem);   // last reader lets writers back in
    V(x);
  forever
end;
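
A compilable C sketch of this readers-priority algorithm with POSIX semaphores; the shared "data" is just an integer here, and the thread counts are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int shared_data = 0;                  /* the shared data area */
static int readcount = 0;
static sem_t x;                              /* protects readcount, init 1 */
static sem_t wsem;                           /* writers' exclusion, init 1 */

static void *reader(void *arg) {
    (void)arg;
    sem_wait(&x);
    readcount++;
    if (readcount == 1) sem_wait(&wsem);     /* first reader locks out writers */
    sem_post(&x);

    printf("read %d\n", shared_data);        /* read data (concurrently with other readers) */

    sem_wait(&x);
    readcount--;
    if (readcount == 0) sem_post(&wsem);     /* last reader lets writers back in */
    sem_post(&x);
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    sem_wait(&wsem);                         /* exclusive access */
    shared_data++;                           /* write data */
    sem_post(&wsem);
    return NULL;
}

int main(void) {
    pthread_t r[3], w;
    sem_init(&x, 0, 1);
    sem_init(&wsem, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}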

Any Questions?
Can you think of any real examples of producer-consumer or reader-writer situations?

Semaphores and User Thread Library


Thread libraries can simulate real semaphores. In a multi-(user-level) threaded process the OS only sees a single thread of execution; e.g., T1, T1, T1, L, L, T2, T2, L, L, T1, T1, ... (where L denotes the thread library's own code executing).
Library functions execute when a u-thread voluntarily yields control

Use a variable as a semaphore, accessed via P and V functions. If a thread executes P(S) and finds S = 0, it yields control.

Semaphores and User Thread Library


Why is this safe? Because there is really never more than one thread of control: violations of mutual exclusion happen when separate threads are scheduled concurrently. A user-level thread decides when to yield control; kernel-level threads don't. If the library is asked to execute P(S) or V(S), it will not be interrupted by another thread in the same process, so there is no danger.
