Slides Synchronization I

The document discusses synchronization in operating systems and concurrency, describing the critical section problem that can occur when multiple processes access shared resources concurrently, and how synchronization techniques like semaphores, monitors, and lock variables can be used to enforce mutual exclusion and prevent race conditions to ensure orderly access to critical sections and maintain consistency of shared data. It also provides examples of producer-consumer problems to illustrate potential race conditions and the need for synchronization.

Bilkent University

Department of Computer Engineering


CS342 Operating Systems

Synchronization

Last Update: Oct 24, 2022

1
Objectives and Outline

Objectives and Outline


• The critical-section Problem
• Pure software solutions
• Synchronization hardware
• Lock variables
• Condition variables
• Semaphores
• Monitors
• Classic problems of synchronization and their solutions

2
Synchronization: Concurrency, shared data, race condition

Background

• Concurrent access to shared data may result in data


inconsistency.
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
Shared data can be:
a shared memory variable / object,
a global variable in a multi-threaded program,
a file,
a kernel variable, ...

Concurrent threads or processes
(executing on the same CPU or on different CPUs) access the shared data.
3
Synchronization: Concurrency, shared data, race condition

Producer Consumer Problem Revisited


• Consider the producer-consumer problem again.
• This time we use a shared integer count to keep track of the number
of full slots. Initialized to 0.
– incremented by the producer after putting a new item
– decremented by the consumer after retrieving an item

[Figure: Producer and Consumer share a buffer of at most BUFFER_SIZE items; the shared variable count tracks the number of full slots.]


4
Synchronization: Concurrency, shared data, race condition

Producer and Consumer Code

Producer:

while (true) {
    // produce an item
    item = .... ;
    while (count == BUFFER_SIZE)
        ; // do nothing
    // add item
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer:

while (true) {
    while (count == 0)
        ; // do nothing
    // remove item
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    // consume item
}

5
Synchronization: Concurrency, shared data, race condition

Accessing shared data

• count++ could be implemented as


register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2

The same physical register can be used for both, because register contents
are saved and restored during a context switch.

6
Synchronization: Concurrency, shared data, race condition

a possible problem: race condition

• Assume we have 5 items in the buffer.

• Assume the producer has just produced a new item (the 6th item),
put it into the buffer, and is about to increment count.
• Assume the consumer has just retrieved an item from the buffer
and is about to decrement count.
• That is, the producer and the consumer are now about to
execute the count++ and count-- statements.

7
Synchronization: Concurrency, shared data, race condition

Race Condition

[Figure: the CPU runs the producer's count++ (register1 = count;
register1 = register1 + 1; count = register1) and the consumer's count--
(register2 = count; register2 = register2 - 1; count = register2),
while the shared variable count resides in main memory.]
8
Synchronization: Concurrency, shared data, race condition

Interleaved Execution sequence

• Interleaved execution, with count = 5 initially:

P1: producer executes register1 = count {register1 = 5}


P2: producer executes register1 = register1 + 1 {register1 = 6}
C1: consumer executes register2 = count {register2 = 5}
C2: consumer executes register2 = register2 - 1 {register2 = 4}
P3: producer executes count = register1 {count = 6 }
C3: consumer executes count = register2 {count = 4}

• At the end, count became 4! Should be 5.

9
Synchronization: Concurrency, shared data, race condition

Race condition

• The count value may be 4, 6, or 5 in various runs.

• But it should be 5 (the result of one increment and one decrement
operation).
• Concurrent access to count causes data inconsistency: 4, 5, or 6.
• Such situations are called race conditions: several processes
access and manipulate the same data concurrently, and the outcome
depends on the order of execution.
• We should develop programs that do not have race conditions.
• Race conditions cause incorrect operation. Additionally,
they are hard to reproduce.

10
Synchronization: Concurrency, shared data, race condition

Race condition

• In the previous example:
– For a consistent result (5), either count++ or count-- should be
executed to completion first; the two must not interleave.
• To avoid race condition we need to enforce non-interleaved
access (atomic) to shared data.
• We need synchronization (coordination) of threads/processes
while accessing shared data.

11
Synchronization: Critical section problem

Programs and critical sections


The part of the program (process) that is accessing and using
shared data is called its critical section
Process 1 Code Process 2 Code Process 3 Code

Update X
Update X
Update Y
Update Y
Update Y

Assuming X and Y are different shared data.


12
Synchronization: Critical section problem

Program lifetime and its structure

• We should not allow more than one thread to be in their


critical sections at the same time.
• Critical sections should be executed one at a time.
• A thread may also be executing non-critical-section code
(the remainder section). Concurrent execution of that part is
allowed and may be desirable.

13
Synchronization: Critical section problem

Program structure

• The general way to solve the critical section problem:

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

The general structure of a program.

The entry section will allow only one process at a time to enter and execute the critical section code.
14
Synchronization: Critical section problem

Solution to Critical-Section Problem

An ideal solution should have the following conditions satisfied:


1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.
2. Progress: If there are processes wanting to enter the critical section
while nobody is in the critical section, they should not be made to
wait indefinitely to enter the critical section. // no deadlock
3. Bounded Waiting: A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted. // no starvation of a process

• Assume that each process executes at a nonzero speed.


• No assumption can be made concerning the relative speeds of the
processes
15
Synchronization: Critical section problem

Applications and Kernel

• Multi-process applications sharing a file or shared memory


segment may face critical section problems.
• Multi-threaded applications sharing global variables may also
face critical section problems.
• Similarly, kernel itself may face critical section problems. It has
critical sections.

16
Synchronization: Critical section problem

Kernel Critical Sections

• Execution of a kernel function x() may be interrupted by a


hardware interrupt and interrupt handler h() may run. Care
needed if x() and h() are accessing shared data.
• A process makes a system call for a function, say s1(). While s1()
is running, a context switch may let another process run and
make a system call, say s2(). Care is needed if s1() and s2() access
shared data.
– This may happen in preemptive kernels, not in non-preemptive
kernels.
• A note: When a process makes a system call, we say the
process is running in kernel mode while the system call is being
executed.
• Kernel is developed considering such cases.
17
Synchronization: Pure software solutions: Peterson’s solution

Pure software solution


An example: Peterson’s Solution
• Two process solution
• Assume that the LOAD and STORE machine instructions are
atomic; that is, cannot be interrupted.
• The two processes share two variables:
– int turn;
– boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical
section.
• The flag array is used to indicate if a process is ready to enter
the critical section. flag[i] == true implies that process Pi wants
to enter the critical section.
18
Synchronization: Pure software solutions: Peterson’s solution

Algorithm for Process Pi

do {
    flag[i] = TRUE;              // wants to enter CS   (entry section)
    turn = j;                    //                     (entry section)
    while (flag[j] && turn == j)
        ;                        //                     (entry section)
    // critical section
    flag[i] = FALSE;             //                     (exit section)
    // remainder section
} while (1);

19
Synchronization: Pure software solutions: Peterson’s solution

Two processes executing concurrently

PROCESS i (0):

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // looping
    // critical section .....
    flag[i] = FALSE;
    // remainder section .....
} while (1);

PROCESS j (1):

do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] && turn == i)
        ; // looping
    // critical section .....
    flag[j] = FALSE;
    // remainder section .....
} while (1);

shared variables: flag[2], turn
20
Synchronization: Hardware support for synchronization

Synchronization Hardware

• We can use some hardware support (if available) for protecting


critical section code
– 1) Disable interrupts? maybe
• Sometimes (only in kernel)
• Not possible on multi-processors

– 2) Special machine instructions (acting on lock variables)


• TestAndSet
• CompareAndSwap (CAS instruction)

21
Synchronization: Hardware support for synchronization: Use of a lock variable via special instructions

Solution to Critical-section Problem Using Locks

• Use of lock variables is a general and very common approach


for solution of critical section problem.
• A lock variable is shared (it can simply be an integer with values 0
and 1). A process (thread) can be structured as follows:

do {
    acquire_lock (&lock);
    // critical section
    release_lock (&lock);
    // remainder section
} while (TRUE);

Only one process at a time can acquire the lock. Others have to wait (or busy-loop).
Locks can be implemented using special hardware instructions.
22
Synchronization: Hardware support for synchronization: Access to a lock variable

Locks

What happens if we use an integer as a lock variable without using


special hardware instructions?

int lock = 0; // global variable (shared among threads)

Thread 1 and Thread 2 each execute:

while (lock == 1)
    ; // loop
lock = 1;
// critical section
lock = 0;

The above code is NOT a correct solution: a thread can be preempted
between the while test and lock = 1, so both threads can enter the
critical section. The lock variable itself is a source of race condition.

23
Synchronization: Hardware support for synchronization: Special instruction

Synchronization Hardware

• Therefore we need special machine instructions that can
do testing and setting atomically, or something similar (like
swapping).
• Some possible atomic (non-interruptible) machine instructions:
– TestAndSet instruction (also called TSL):
test a memory word and set its value to 1
– CompareAndSwap instruction (CAS)

• Hardware ensures that they are executed atomically in a multi-


processor environment as well (one CPU at a time executes the
instruction: it involves memory access; memory is shared).

24
Synchronization: Hardware support for synchronization: Special instruction

TestAndSet Instruction

• is a machine/assembly instruction.
TestAndSet REGISTER, LOCK;

– Here we provide the definition of it using high-level language code.


Definition of TestAndSet Instruction

int TestAndSet (int *target)
{
    int rv = *target;
    *target = 1;   // set to one - locked
    return rv;
}

25
Synchronization: Hardware support for synchronization: Special instruction

Solution using TestAndSet

• To use it, we need to program in assembly language.


We can use a shared integer variable lock, initialized to 0.

do {
    while (TestAndSet (&lock))    // entry section
        ; // do nothing

    // critical section

    lock = 0;                     // exit section

    // remainder section
} while (TRUE);

26
Synchronization: Hardware support for synchronization: Special instruction

In assembly

entry_section:                    ; entry section code
    TestAndSet REGISTER, LOCK
    CMP REGISTER, #0
    JNE entry_section
    RET

exit_section:                     ; exit section code
    move LOCK, #0
    RET

main:
..
call entry_section;
execute critical region;
call exit_section;

27
Synchronization: Hardware support for synchronization: Special instruction

CompareAndSwap Instruction (CAS)

• Again a machine instruction


• It has three operands: value, expected, newvalue
• If value is equal to expected value, then swaps (value becomes
newvalue)
Definition

int compare_and_swap (int *value, int expected, int new_value)
{
    int temp = *value;
    if (*value == expected)
        *value = new_value;   // the swap: value becomes new_value
    return temp;              // old value returned
}

28
Synchronization: Hardware support for synchronization: Special instruction

Solution using CompareAndSwap

• We need to program entry_section() in assembly

We use a shared int variable lock initialized to 0 (unlocked).

entry_section

exit_section

29
Synchronization: Hardware support for synchronization: Special instruction

Comments

• Use of TestAndSet and CompareAndSwap as explained
provides mutual exclusion: the 1st property is satisfied.
• Progress is also satisfied (no deadlock).
• But the bounded-waiting property (the 3rd property) may not be
satisfied (starvation).

• A process X may be waiting while the other process
Y enters the critical region repeatedly (with no limit).

30
Synchronization: Hardware support for synchronization: Special instruction

Bounded-waiting mutual exclusion with TestAndSet()

Code of process i:

do {
    // entry section
    waiting[i] = TRUE;            // assume we will wait
    key = 1;                      // assume lock is locked
    while (waiting[i] && key)
        key = TestAndSet(&lock);  // if lock is 0, key becomes 0
    waiting[i] = FALSE;

    // critical section

    // exit section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])  // search for a waiting process
        j = (j + 1) % n;             // j was not interested
    if (j == i)                      // no other process wants to enter CS
        lock = 0;                    // set lock to 0
    else                             // process j is waiting in its while loop
        waiting[j] = FALSE;          // process j will be in CS; lock still 1

    // remainder section
} while (TRUE);
31
Synchronization: Mutex locks (high-level tools)

Mutex locks

• We can put these entry and exit section codes into lock() and
unlock() functions in a library/kernel. Then we have a lock
implementation. It is also called mutex lock (mutual exclusion
lock)
• Applications will not be directly using HW instructions; instead
lock implementation will use these HW instructions.
• Applications will just call mutex_lock() and mutex_unlock()
functions.

32
Synchronization: Mutex locks

Lock implementation (with busy waiting)

typedef struct __lock_t {
    int flag;
} lock_t;

void init (lock_t *lock)
{
    lock->flag = 0;
}

void lock (lock_t *lock)
{
    while (TestAndSet (&lock->flag))
        ; // loop - do nothing
}

void unlock (lock_t *lock)
{
    lock->flag = 0;
}
33
Synchronization: Mutex locks (high-level tools)

Application using a lock

lock_t mutex; //define lock variable

init (&mutex); // initialize

mutex_lock (&mutex); // acquire lock


//critical section
mutex_unlock (&mutex); // release lock

Terminology: during the time that a lock is held by a thread X, we say:
the lock is acquired by X, or
held by X, or
belongs to X, or
X has the lock, or
X got the lock, etc.
34
Synchronization: Mutex locks (high-level tools)

Locks

• Such a lock is also called a spin-lock, since it busy-waits.
• Use of spin-locks on uniprocessor systems is very inefficient:
one process B will spin for its whole time quantum (q ms)
if the lock is held by another process A.
• But spin-locks can be useful for short critical sections on
multi-processor systems.
• The kernel uses spin-locks to protect short critical sections (a few
instructions) on multi-processor systems.

35
Synchronization: Mutex locks: spin locks

Spin Locks
Process A running in kernel mode (i.e., executing the kernel code shown):

f1() {
    ...
    acquire_spin_lock (X);
    // critical region: touch SD (shared data)
    release_spin_lock (X);
}

Process B, running in kernel mode on another CPU, executes f2(), which is
structured the same way; B busy-waits in acquire_spin_lock (X) while A
holds the lock.

[Figure: CPU 1 runs A, CPU 2 runs B; the lock variable X (accessed
atomically), f1(), f2(), and the shared data SD reside in main memory.]
36
Synchronization: Mutex locks (high-level tools)

Lock implementation without busy waiting

• It is possible to implement locks without busy waiting.


• There will be an associated waiting queue with the lock data
type (lock object).
• A process that cannot acquire the lock will wait (block or
sleep) on the waiting queue of the lock (it will not spin).
• When unlock() is called on the lock, one of the waiting
processes is woken up and put into the ready queue. It
will have the lock now.
• We will not see the implementation here.

37
Synchronization: Condition variables

Condition Variables
• mutex variables (locks) can be used to solve the critical section
problem (exclusive access).
• But there are other synchronization needs and problems.
– For example, we may want to block (sleep, wait) a thread
until some event / condition happens. How can we do that?
• Condition variables are for such cases.
• A condition variable (cv) is an object/variable (ADT) that can
be used to cause a thread to sleep until a condition happens.
– Internally, a cv has a waiting queue associated with it.
– A cv has two operations that can be performed on it: wait()
and signal()

38
Synchronization: Condition variables

Condition Variables

• cv.wait() blocks (sleep/wait) the calling thread and adds it to the


cv waiting queue.
• cv.signal() wakes up (unblocks) one of the waiting threads (if
any) and removes it from the cv waiting-queue.
• If there is no thread sleeping on cv, signal() has no effect (the
signal is lost).
• cv.broadcast() wakes up all the waiting threads on cv (all are
removed from waiting-queue of cv).
• A condition variable is usually used in combination with a mutex
lock.

39
Synchronization: Condition variables

Example 1

• POSIX Pthreads API provides mutex and condition variables.


• Consider a program with two threads. They will share a variable
count. One thread (thread2) will update the count. We want
the other thread (thread1) to wait until count becomes 100.

global variables
int count = 0; // shared state between threads
pthread_mutex_t lock; // lock variable
pthread_cond_t cv; // condition variable

40
Synchronization: Condition variables

Example 1
cond_wait() first releases the lock, then sleeps on
the cv queue. When woken up (signaled), the thread is removed
from the cv queue. It then tries to get the lock again (it is added
to the lock queue). When it has the lock, cond_wait()
returns.
void * function1 (void *p) // executed by thread 1
{
// assume waited condition is “count == 100”
pthread_mutex_lock (&lock); // lock the lock
while (count < 100)
pthread_cond_wait (&cv, &lock);

// we are sure count is 100 now.


// do something with count being 100 if you wish
pthread_mutex_unlock(&lock); // unlock the lock
}

The condition “count == 100” is a predicate that includes shared variables.

41
Synchronization: Condition variables

Example 1

// executed by thread 2
void *function2 (void *p) {
pthread_mutex_lock (&lock); // get lock
while (count < 100)
count += 1; // changing shared data (count)
// waited condition happened
pthread_cond_signal (&cv); // signal and continue
// signaling thread still has the lock.
// the woken-up thread is moved from the cv wait-queue
// to the lock wait-queue.
pthread_mutex_unlock(&lock) // release the lock now
}

42
Synchronization: Condition variables

Example 2

• Assume we have a resource type that has 100 identical


instances. Multiple threads, for example 𝑁 threads, are running
concurrently. Each thread 𝑥 (1 ≤ 𝑥 ≤ 𝑁) will want to use 𝑘
instances (𝑘 is randomly chosen) from time to time (in an
endless loop). Write a program for this.

global – shared - variables


int rcount = 100; //resource count; shared
pthread_mutex_t lock; // protects rcount
pthread_cond_t cv; // to enforce sleep

43
Synchronization: Condition variables

Example 2

a thread 𝑥 will execute the code below.

...
int k;  // local variable of the thread – needed instances
while (1) {
    k = generateRandomValue(1, 100);  // between 1 and 100
    allocate_resources (k);
    // use resources - may take a while
    deallocate_resources (k);
}

44
Synchronization: Condition variables

Example 2

void allocate_resources (int n) // executed by thread x
{
    pthread_mutex_lock (&lock);
    while (rcount < n)                  // not enough instances: wait
        pthread_cond_wait (&cv, &lock);
    rcount = rcount - n;
    pthread_mutex_unlock (&lock);
}

• The function will block until the requested number of resource


instances become available.
• Many threads may call the function simultaneously.

45
Synchronization: Condition variables

Example 2

void deallocate_resources (int n) // executed by thread x
{
    pthread_mutex_lock (&lock);
    rcount = rcount + n;
    pthread_cond_broadcast (&cv);  // wake up all waiting threads
    pthread_mutex_unlock (&lock);
}

46
Synchronization: Condition variables

Lock and condition variable in pthread_cond_wait()
while (rcount < n) // rcount is shared variable
pthread_cond_wait (&cv, &lock);

When a thread calls pthread_cond_wait(), it releases the lock,
gets added to the cv queue, and is blocked (sleeping). When it is
woken up, it will wait for the lock to become available – it will be
added to the lock's waiting-queue. When the thread finally gets
the lock, it will return from the pthread_cond_wait() function.

Then it will loop and check the condition again while holding the
lock. If the condition did not happen yet, the thread will call
pthread_cond_wait() again. The call will cause the thread to
release the lock and sleep on the condition variable again.
47
Synchronization: Condition variables

POSIX mutex and condition variables

Tips
• Always wait in a while loop for condition to be true (unless you
are sure if statement is needed instead of while). This is safer
and more modular.
• If there are multiple threads waiting, just one signal() may not
wake up the correct thread in some cases. In those cases, use
of broadcast() can simplify coding.
– broadcast() wakes up all threads waiting on the condition
variable. Each thread, one by one, will check the condition
again, and if condition is not true, will wait on condition
variable queue again.
• Hold the lock while accessing shared variables.
48
Synchronization: Semaphores: Definition

Semaphore

• Synchronization tool that does not require busy waiting


– Supported by OS or by a Library
• A semaphore S has an integer variable and a wait queue
associated. It is a shared object (for example, can be a kernel
object).
• Two standard operations modify S: wait() and signal()
• Originally called P() and V()
• Also called down() and up()
• Semaphores can only be accessed via these two atomic
operations;
• Semaphores can be implemented in kernel and accessed by
system calls.
49
Synchronization: Semaphores: Definition

Meaning (semantics) of operations

• wait (S):
if S positive
S-- and return
else
block here (until somebody wakes you up; then return)

• signal(S):
if there is a process waiting
wake it up and return
else
S++ and return
50
Synchronization: Semaphores: Definition

Comments

• Wait body and signal body have to be executed atomically: one


process at a time. Hence the body of wait and signal are critical
sections to be protected by the kernel.

51
Synchronization: Semaphores: Definition

Semaphores as general synchronization tool

• Binary semaphore: value can be 0 or 1. It can be simpler to
implement.
– Also known as a non-busy-waiting mutex lock (it does not
busy-wait, but sleeps).
– Binary semaphores provide mutual exclusion; they can be used
for the critical section problem.
• Counting semaphore: integer value can be any value >= 0
– Can be used for other synchronization problems; for
example, for resource allocation.
– Can be implemented by using binary semaphores.

52
Synchronization: Semaphores: Usage

Semaphores usage:
critical section problem
• A semaphore variable should be defined, initialized to 1.
• It will be shared by multiple (say 𝑁) processes.
• Each process 𝑖, 1 ≤ 𝑖 ≤ 𝑁, is structured as follows:

Semaphore mutex = 1; // define and initialize shared semaphore

do {
    wait (mutex);
    // critical section code
    signal (mutex);
    // remainder section
} while (1);

53
Synchronization: Semaphores: Usage
Semaphores usage:
critical section problem

Process 0 and Process 1 each execute:

do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);

[Figure: wait() and signal() are implemented in the kernel; the semaphore
mutex (initialized to 1) is a kernel object.]

54
Synchronization: Semaphores: Usage
Semaphore usage:
other synchronization problems
Assume we definitely want statement S1 (in P0) executed before
statement S2 (in P1).

Solution: use a semaphore x initialized to 0.

semaphore x = 0; // initialized to 0

P0:              P1:
…                …
S1;              wait (x);
signal (x);      S2;
….               ….

55
Synchronization: Semaphores: Usage

Uses of Semaphore: synchronization


Buffer is an array of BUF_SIZE cells (at most BUF_SIZE items can be put).

Producer:

do {
    // produce item
    …
    put item into buffer
    ..
    signal (Full_Cells);
} while (TRUE);

Consumer:

do {
    wait (Full_Cells);
    ….
    remove item from buffer
    ..
} while (TRUE);

Semaphore Full_Cells = 0 (initialized to 0); wait() and signal() are
implemented in the kernel.

56
Synchronization: Semaphores: Usage

Semaphore usage: resource allocation


• Assume we have a resource that has 5 identical instances. A process
will need one instance from time to time. We can allow at most 5
processes to use the resource concurrently. Other processes that
want to use the resource need to wait.
• Solution: one of the processes creates and initializes a semaphore
to 5. Each process has to be coded as below.
Semaphore x = 5;

wait (x);
…. use one instance
of the resource …
signal (x);

57
Synchronization: Semaphores: Implementation

Semaphore Implementation

• Semaphore data structure can be defined as below.


typedef struct {
int value; // semaphore value
struct process *waitlist; // semaphore wait queue
} semaphore;

• With each semaphore there is an associated wait queue.

– The processes waiting for the semaphore wait here.

58
Synchronization: Semaphores: Implementation

Semaphore Implementation

Implementation sketch of wait() (a short critical section):

wait (semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->waitlist;
        block();  // kernel blocks the process; a context switch happens
    }
}

Implementation sketch of signal():

signal (semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->waitlist;
        wakeup (P);
    }
}
59
Synchronization: Semaphores: Implementation

Kernel Implementing wait and signal

• The wait and signal operations must be atomic. The integer


value is updated.
• Multiple processes can make calls to wait() and signal()
simultaneously
• But no two processes can execute wait() and signal() critical
sections at the same time. Short critical sections.
• Kernel can guarantee this by:
– disabling interrupts in a single CPU system
– use of spin-locks in a multi-processor system

60
Synchronization: Semaphores: POSIX Semaphore API

POSIX Semaphores

Unnamed Semaphores:

sem_t S;
sem_init (&S, 0, 1);
sem_wait (&S);     // wait operation
....
sem_post (&S);     // signal operation
sem_destroy (&S);

Named Semaphores:

sem_t *Sp;
char *sname = "semname1";
Sp = sem_open (sname, O_CREAT, 0666, 1);
sem_wait (Sp);     // wait operation
...
sem_post (Sp);     // signal operation
sem_close (Sp);
sem_unlink (sname);

(you can learn more using man sem_overview at the Linux command line)
61
Synchronization: Semaphores: Potential problems

Potential problems with semaphores

• Deadlocks: two or more processes are waiting indefinitely for an


event that can be caused by only one of the waiting processes.
• Example: Let S and Q be two semaphores initialized to 1.

Sem S = 1;
Sem Q = 1;

P0:              P1:
wait (S);        wait (Q);
wait (Q);        wait (S);
….               ….
signal (S);      signal (Q);
signal (Q);      signal (S);

62
Synchronization: Semaphores: Potential problems

Potential problems with semaphores


• Starvation: A process may never be removed from the
semaphore queue in which it is suspended.
• Priority Inversion: Scheduling problem when lower-priority
process (L) holds a lock needed by a higher-priority process
(H), and medium priority (M) gets scheduled.
[Figure: L (state=ready) has lock x; M (state=ready) is scheduled to use
the CPU; H (state=waiting) waits for lock x. Priority inversion: M causes
H to wait even longer.]

Solution (priority inheritance): the process holding the lock inherits the
priority of the process waiting for the lock. Then L can run soon and
release the lock, so H can run earlier than M.
63
Synchronization: Semaphores: Potential problems

Potential problems with semaphores

Incorrect use of semaphore operations:

• signal (mutex) …. wait (mutex)

• wait (mutex) … wait (mutex)

• Omitting wait (mutex) or signal (mutex) (or both)

64
References

• Operating System Concepts, A. Silberschatz et al., Wiley.

• Modern Operating Systems, Andrew S. Tanenbaum et al.
• Operating Systems: Three Easy Pieces, Remzi H. Arpaci-Dusseau et al.
• Operating Systems: Principles and Practice, T. Anderson et al.

65
