Operating Systems Design: © 2020 KL University
Critical section:
Protecting Accesses to Shared Variables: Mutexes
1. Thread 1 fetches the current value of glob into its local variable loc. Let’s assume that the current value of glob is
2000.
2. The scheduler time slice for thread 1 expires, and thread 2 commences execution.
3. Thread 2 performs multiple loops in which it fetches the current value of glob into its local variable loc,
increments loc, and assigns the result to glob. In the first of these loops, the value fetched from glob will be 2000.
Let’s suppose that by the time the time slice for thread 2 has expired, glob has been increased to 3000.
4. Thread 1 receives another time slice and resumes execution where it left off. Having previously (step 1) copied
the value of glob (2000) into its loc, it now increments loc and assigns the result (2001) to glob. At this point, the
effect of the increment operations performed by thread 2 is lost.
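The update each thread performs is, in essence, the following unprotected read-modify-write (a simplified sketch of the thread function in Listing 30-1, not the exact code from the book):

#include <pthread.h>

static volatile int glob = 0;       // shared by both threads

// Simplified sketch: each thread repeats an unprotected read-modify-write of glob.
static void *threadFunc(void *arg)
{
    int loops = *((int *) arg);

    for (int j = 0; j < loops; j++) {
        int loc = glob;             // fetch shared value into a local
        loc++;                      // increment the local copy
        glob = loc;                 // store it back; updates can be lost here
    }
    return NULL;
}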
If we run the program in Listing 30-1 multiple times with the same command-line argument, we see that the printed
value of glob fluctuates wildly:
$ ./thread_incr 10000000
glob = 10880429
$ ./thread_incr 10000000
glob = 13493953
This nondeterministic behavior is a consequence of the vagaries of the kernel’s CPU scheduling decisions. In
complex programs, this nondeterministic behavior means that such errors may occur only rarely, be hard to
reproduce, and therefore be difficult to find.
To avoid the problems that can occur when threads try to update a shared variable, we must use a mutex (short for
mutual exclusion) to ensure that only one thread at a time can access the variable. More generally, mutexes can be used
to ensure atomic access to any shared resource, but protecting shared variables is the most common use.
A mutex has two states: locked and unlocked. At any moment, at most one thread may hold the lock on a mutex.
Attempting to lock a mutex that is already locked either blocks or fails with an error, depending on the method used to
place the lock.
When a thread locks a mutex, it becomes the owner of that mutex. Only the mutex owner can unlock the mutex. This
property improves the structure of code that uses mutexes and also allows for some optimizations in the
implementation of mutexes. Because of this ownership property, the terms acquire and release are
sometimes used synonymously for lock and unlock.
In general, we employ a different mutex for each shared resource (which may consist of multiple related variables),
and each thread employs the following protocol for accessing a resource (sketched in code after the list):
• lock the mutex for the shared resource;
• access the shared resource; and
• unlock the mutex.
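A minimal sketch of this protocol applied to the glob counter from the earlier example (the mutex name and the use of a static initializer are assumptions, and error checking of the pthread calls is omitted):

#include <pthread.h>

static int glob = 0;                                      // shared resource
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;   // one mutex for this resource

static void *threadFunc(void *arg)
{
    int loops = *((int *) arg);

    for (int j = 0; j < loops; j++) {
        pthread_mutex_lock(&mtx);      // lock the mutex for the shared resource
        glob++;                        // access the shared resource
        pthread_mutex_unlock(&mtx);    // unlock the mutex
    }
    return NULL;
}

With every thread following this lock/access/unlock pattern, the lost-update interleaving described above can no longer occur, and the program prints the expected final value of glob.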
Finally, note that mutex locking is advisory, rather than mandatory. By this, we mean that
a thread is free to ignore the use of a mutex and simply access the corresponding shared
variable(s). In order to safely handle shared variables, all threads must cooperate in their
use of a mutex, abiding by the locking rules it enforces.
Lock-based Concurrent Data Structures
Adding locks to a data structure makes the structure thread safe.
A block of code is thread-safe if it can be simultaneously executed by multiple
threads without causing problems.
• Thread-safeness, in a nutshell, refers to an application's ability to execute
multiple threads simultaneously without "clobbering" shared data or creating
"race" conditions.
• For example, suppose that your application creates several threads, each of
which makes a call to the same library routine:
• This library routine accesses/modifies a global structure or location in memory.
• As the threads call this routine, it is possible that they will try to modify this global
structure/memory location at the same time.
• If the routine does not employ some sort of synchronization construct to prevent data
corruption, then it is not thread-safe (see the sketch below).
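As a hypothetical illustration (the routine and the stats structure are invented for this example, not taken from a real library), a library routine that updates internal global state is thread-safe only if it serializes that update:

#include <pthread.h>

// Hypothetical library-internal shared state.
static struct { long calls; long errors; } stats;
static pthread_mutex_t stats_mtx = PTHREAD_MUTEX_INITIALIZER;

// NOT thread-safe: concurrent callers race on the shared counters,
// and increments can be lost exactly as in the glob example.
void record_call_unsafe(int failed)
{
    stats.calls++;
    if (failed)
        stats.errors++;
}

// Thread-safe: the same update, serialized by a mutex.
void record_call(int failed)
{
    pthread_mutex_lock(&stats_mtx);
    stats.calls++;
    if (failed)
        stats.errors++;
    pthread_mutex_unlock(&stats_mtx);
}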
Lock-based Concurrent Data Structures
Solution #1
• An obvious solution is to simply lock the list any time that a thread attempts to
access it.
• A call to each of the three functions can be protected by a mutex.
Solution #2
• Instead of locking the entire list, we could try to lock individual nodes.
• A “finer-grained” approach.
// basic node structure
typedef struct __node_t {
    int key;
    struct __node_t *next;
    pthread_mutex_t lock;
} node_t;
Concurrent Linked Lists
// basic node structure
typedef struct __node_t {
    int key;
    struct __node_t *next;
} node_t;

// basic list structure (one used per list)
typedef struct __list_t {
    node_t *head;
    pthread_mutex_t lock;
} list_t;

void List_Init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}
Concurrent Linked Lists (Cont.)
int List_Insert(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        pthread_mutex_unlock(&L->lock);
        return -1; // fail
    }
    new->key = key;
    new->next = L->head;
    L->head = new;
    pthread_mutex_unlock(&L->lock);
    return 0; // success
}
Concurrent Linked Lists (Cont.)
int List_Lookup(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            pthread_mutex_unlock(&L->lock);
            return 0; // success
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock);
    return -1; // failure
}
Concurrent Linked Lists (Cont.)
The code acquires a lock in the insert routine upon entry.
The code releases the lock upon exit.
If malloc() happens to fail, the code must also release the lock before
failing the insert.
This kind of exceptional control flow has been shown to be quite error prone.
Solution: have the lock acquisition and release surround only the actual critical section in the
insert code, as in the rewritten version below.
Concurrent Linked List: Rewritten
void List_Init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}

void List_Insert(list_t *L, int key) {
    // synchronization not needed
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        return;
    }
    new->key = key;

    // just lock critical section
    pthread_mutex_lock(&L->lock);
    new->next = L->head;
    L->head = new;
    pthread_mutex_unlock(&L->lock);
}
Concurrent Linked List: Rewritten (Cont.)
int List_Lookup(list_t *L, int key) {
    int rv = -1;
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            rv = 0;
            break;
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock);
    return rv; // now both success and failure
}
Scaling Linked Lists
Hand-over-hand locking (lock coupling)
Add a lock per node of the list instead of having a single lock for the entire
list.
When traversing the list, the code first grabs the next node's lock and then releases the
current node's lock, as sketched below.
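A minimal sketch of such a traversal, assuming the per-node-lock node_t from Solution #2 combined with a list_t whose head pointer is protected by the list-level lock (the function name and the combined structures are assumptions, not the book's exact code):

// Hand-over-hand (lock coupling) lookup: hold at most two node locks at once.
int List_Lookup_HandOverHand(list_t *L, int key)
{
    pthread_mutex_lock(&L->lock);            // protects L->head
    node_t *curr = L->head;
    if (curr)
        pthread_mutex_lock(&curr->lock);     // lock the first node...
    pthread_mutex_unlock(&L->lock);          // ...before giving up the list lock

    while (curr) {
        if (curr->key == key) {
            pthread_mutex_unlock(&curr->lock);
            return 0;                        // success
        }
        node_t *next = curr->next;
        if (next)
            pthread_mutex_lock(&next->lock); // first grab the next node's lock
        pthread_mutex_unlock(&curr->lock);   // then release the current node's lock
        curr = next;
    }
    return -1;                               // failure
}

Because a thread only ever holds the locks of the current and next nodes, other threads can operate on different parts of the list at the same time, at the cost of frequent lock/unlock traffic during traversal.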
Pthreads Read-Write Locks
Neither of our multi-threaded linked lists exploits the potential for
simultaneous access to any node by threads that are executing Member (i.e., performing a read-only lookup).
The first solution only allows one thread to access the entire list at any instant
The second only allows one thread to access any given node at any instant.
A read-write lock is somewhat like a mutex except that it provides two lock
functions.
The first lock function locks the read-write lock for reading, while the second
locks it for writing.
Pthreads Read-Write Locks
So multiple threads can simultaneously obtain the lock by calling the read-lock
function, while only one thread can obtain the lock by calling the write-lock
function.
Thus, if any threads own the lock for reading, any threads that want to obtain
the lock for writing will block in the call to the write-lock function.
If any thread owns the lock for writing, any threads that want to obtain the
lock for reading or writing will block in their respective locking functions.
Pthreads Read-Write Locks
Reader-writer locks are similar to mutexes, except that they allow for higher degrees of
parallelism. With a mutex, the state is either locked or unlocked, and only one thread can lock
it at a time. Three states are possible with a reader-writer lock: locked in read mode, locked in
write mode, and unlocked. Only one thread at a time can hold a reader-writer lock in write
mode, but multiple threads can hold a reader-writer lock in read mode at the same time.
When a reader-writer lock is write-locked, all threads attempting to lock it block until it is
unlocked. When a reader-writer lock is read-locked, all threads attempting to lock it in read
mode are given access, but any threads attempting to lock it in write mode block until all the
threads have relinquished their read locks. Although implementations vary, reader-writer locks
usually block additional readers if a lock is already held in read mode and a thread is blocked
trying to acquire the lock in write mode. This prevents a constant stream of readers from
starving waiting writers.
Pthreads Read-Write Locks
Reader-writer locks are well suited for situations in which data structures are read more
often than they are modified. When a reader-writer lock is held in write mode, the data
structure it protects can be modified safely, since only one thread at a time can hold the
lock in write mode. When the reader-writer lock is held in read mode, the data structure
it protects can be read by multiple threads, as long as the threads first acquire the lock
in read mode.
Reader-writer locks are also called shared-exclusive locks. When a reader-writer lock is
read-locked, it is said to be locked in shared mode. When it is write-locked, it is said to
be locked in exclusive mode.
As with mutexes, reader-writer locks must be initialized before use and destroyed before
freeing their underlying memory.
Pthreads Read-Write Locks
#include <pthread.h>
int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock,
                        const pthread_rwlockattr_t *restrict attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);
Both return: 0 if OK, error number on failure
#include <pthread.h>
int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
All return: 0 if OK, error number on failure
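A rough usage sketch applying a reader-writer lock to the earlier list (the rwlist_t type and function names are hypothetical, and error checking of the pthread calls is omitted): lookups take the lock in read mode so they can proceed in parallel, while inserts take it in write mode.

#include <pthread.h>
#include <stdlib.h>

typedef struct __rwlist_t {
    node_t *head;                    // node_t as defined earlier
    pthread_rwlock_t rwlock;
} rwlist_t;

void RWList_Init(rwlist_t *L)
{
    L->head = NULL;
    pthread_rwlock_init(&L->rwlock, NULL);       // default attributes
}

int RWList_Lookup(rwlist_t *L, int key)          // readers may run concurrently
{
    int rv = -1;
    pthread_rwlock_rdlock(&L->rwlock);
    for (node_t *curr = L->head; curr != NULL; curr = curr->next) {
        if (curr->key == key) {
            rv = 0;
            break;
        }
    }
    pthread_rwlock_unlock(&L->rwlock);
    return rv;
}

int RWList_Insert(rwlist_t *L, int key)          // writers get exclusive access
{
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL)
        return -1;
    new->key = key;
    pthread_rwlock_wrlock(&L->rwlock);
    new->next = L->head;
    L->head = new;
    pthread_rwlock_unlock(&L->rwlock);
    return 0;
}

In a read-mostly workload, many lookups can now proceed in parallel while inserts are still serialized, which is exactly the situation reader-writer locks are designed for.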
Thank you