
Process Synchronization

Operating System Concepts – 8th Edition Silberschatz, Galvin and Gagne ©2009
Module 4: Process Synchronization

 Background
 Producer Consumer Problem
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Atomic Transactions

Objectives
 To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data
 To present both software and hardware solutions of the critical-section problem
 To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity

Concurrent Execution
 Concurrent execution must give the same results as serial execution.
 Concurrent execution with shared data leads us to the topic of synchronization.
 To keep data consistent, we need mechanisms that avoid data inconsistency problems.
 As an introductory synchronization topic, we look at the producer-consumer problem.

Background
 A cooperating process is one that can affect or be affected by other processes executing in the system.
 Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages.
 Concurrent access to shared data may result in data inconsistency.

 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
 Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0.
 It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.

Producer Consumer Problem
 The producer-consumer problem illustrates the need for
synchronization in systems where many processes share a
resource.
 In the problem, two processes share a fixed-size buffer. One
process produces information and puts it in the buffer, while
the other process consumes information from the buffer.
 These processes do not take turns accessing the buffer; they both work concurrently.
 It is also called the bounded-buffer problem.

Producer

while (true) {
/* produce an item and put in nextProduced */
while (counter == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}

Consumer

while (true) {
while (counter == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in nextConsumed */
}

Race Condition
 counter++ could be implemented as

register1 = counter
register1 = register1 + 1
counter = register1

 counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2

 Consider this execution interleaving with “counter = 5” initially:

S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6}
S5: consumer execute counter = register2 {counter = 4}

 Now we have arrived at the incorrect state "counter == 4", indicating that four buffers are full. If we reversed the order of the statements at S4 and S5, we would arrive at the incorrect state "counter == 6".
 We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently.
 A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition.
 To avoid the race condition, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.
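To see the race concretely, here is a minimal pthreads sketch (the thread bodies and iteration count are illustrative choices, not from the slides); the printed result is rarely the expected 0:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;                 // shared, deliberately unprotected

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;               // load, add, store: not atomic
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);
    return 0;
}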

Critical Section Problem
 Consider system of n processes {p0, p1, … pn-1}
 Each process has critical section segment of code
 Process may be changing common variables, updating a table, writing a file, etc.
 When one process in critical section, no other may be in its critical
section
 Critical section problem is to design protocol to solve this
 Each process must ask permission to enter critical section in entry
section, may follow critical section with exit section, then remainder
section
 Especially challenging with preemptive kernels

Critical Section
 General structure of process pi is
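In outline, each process repeatedly executes:

do {
    entry section
        critical section
    exit section
        remainder section
} while (TRUE);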

Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections

2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed
indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes

Peterson’s Solution
 Two process solution

 Assume that the LOAD and STORE instructions are atomic; that is, cannot
be interrupted

 The two processes share two variables:


 int turn;
 Boolean flag[2];

 The variable turn indicates whose turn it is to enter the critical section

 The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!

Algorithm for Process Pi

do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);

 Provable that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
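On modern hardware, plain loads and stores are not guaranteed to be atomic or ordered, so a faithful rendering uses C11 atomics. A minimal sketch (the enter/leave helper names are assumptions, not from the slides):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];     // flag[i]: Pi wants to enter
atomic_int  turn;        // whose turn it is when both want in

void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);          // defer to the other process
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            // busy wait
}

void leave(int i) {
    atomic_store(&flag[i], false);
}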

Synchronization Hardware
 Many systems provide hardware support for critical section code

 Uniprocessors – could disable interrupts


 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable

 Modern machines provide special atomic hardware instructions


 Atomic = non-interruptable
 Either test memory word and set value
 Or swap contents of two memory words

Solution to Critical-section
Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

TestAndSet Instruction

 Definition:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet
 Shared boolean variable lock, initialized to FALSE
 Solution:

do {
while ( TestAndSet (&lock ))
; // do nothing

// critical section

lock = FALSE;

// remainder section

} while (TRUE);
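C11 exposes this instruction directly as atomic_flag; a hedged sketch of the same spinlock (the acquire/release names are illustrative):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   // clear == unlocked

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              // spin: call returns the previous value
}

void release(void) {
    atomic_flag_clear(&lock);
}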

Swap Instruction

 Definition:

void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap
 Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key
 Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );

// critical section

lock = FALSE;

// remainder section

} while (TRUE);

Bounded-waiting Mutual Exclusion
with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);

Semaphore
 A semaphore is a hardware or software variable whose value indicates the status of a common resource. Its purpose is to lock the resource being used.
 A process that needs the resource checks the semaphore to determine the resource's status and then decides how to proceed.
 In multitasking operating systems, activities are synchronized by using semaphore techniques.
 Example: say we have four rooms with identical locks and keys. The semaphore count (the count of keys) is set to 4 at the beginning (all four rooms are free), and the count is decremented as people come in. If all rooms are full, i.e., there are no free keys left, the semaphore count is 0. When one person leaves a room, the count is incremented to 1 (one free key), and the key is given to the next person in the queue.

 "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."

 Synchronization tool that does not require busy waiting
 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
 Originally called P() and V()
 Less complicated

 Can only be accessed via two indivisible (atomic) operations


 wait (S) {
while S <= 0
; // no-op
S--;
}
 signal (S) {
S++;
}
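POSIX semaphores provide exactly these operations as sem_wait() and sem_post(); a small usage sketch (error handling omitted):

#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                      // used here as a binary semaphore

void *worker(void *arg) {
    sem_wait(&mutex);             // wait(S): may block
    // critical section
    sem_post(&mutex);             // signal(S)
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);       // initial value 1, shared between threads
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return 0;
}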

Semaphore as
General Synchronization Tool
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
 Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);

Semaphore Implementation
 Must guarantee that no two processes can execute wait () and signal
() on the same semaphore at the same time

 Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied

 Note that applications may spend lots of time in critical sections and
therefore this is not a good solution

Semaphore Implementation
with no Busy waiting

 With each semaphore there is an associated waiting queue


 Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list

 Two operations:
 block – place the process invoking the operation on the appropriate
waiting queue
 wakeup – remove one of the processes in the waiting queue and place it in the ready queue
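Following this description, the semaphore can be declared in C as:

typedef struct {
    int value;                // if negative, |value| processes are waiting
    struct process *list;     // queue of processes blocked on this semaphore
} semaphore;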

Semaphore Implementation with
no Busy waiting (Cont.)
 Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
 Implementation of signal:

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}

Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by
only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
 Starvation – indefinite blocking
 A process may never be removed from the semaphore queue in which it is suspended
 Priority Inversion – Scheduling problem when lower-priority process holds a lock needed by
higher-priority process
 Solved via priority-inheritance protocol

Problems with Semaphores
 Incorrect use of semaphore operations:

 signal (mutex) …. wait (mutex)

 wait (mutex) … wait (mutex)

 Omitting wait (mutex) or signal (mutex) (or both)

 Deadlock and starvation

Classical Problems of Synchronization
 Classical problems used to test newly-proposed synchronization schemes

 Bounded-Buffer Problem

 Readers and Writers Problem

 Dining-Philosophers Problem

Bounded-Buffer Problem
 N buffers, each can hold one item

 Semaphore mutex initialized to the value 1

 Semaphore full initialized to the value 0

 Semaphore empty initialized to the value N

Bounded Buffer Problem (Cont.)
 The structure of the producer process. The producer must wait for an empty space in the buffer, and we must make sure that the producer and the consumer make changes to the shared buffer in a mutually exclusive manner.

do {
    // produce an item in nextp

    wait (empty);       // wait for an empty slot
    wait (mutex);       // enter the critical section

    // add the item to the buffer

    signal (mutex);     // leave the critical section
    signal (full);      // one more full slot
} while (TRUE);

Bounded Buffer Problem (Cont.)
 The structure of the consumer process. The consumer must wait for a filled space in the buffer, and again the changes to the shared buffer are made in a mutually exclusive manner.

do {
    wait (full);        // wait for a filled slot
    wait (mutex);       // enter the critical section

    // remove an item from buffer to nextc

    signal (mutex);     // leave the critical section
    signal (empty);     // one more empty slot

    // consume the item in nextc

} while (TRUE);
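Putting the two halves together, a runnable pthreads sketch of the bounded buffer might look like this (BUFFER_SIZE, the item type, and the iteration counts are arbitrary choices):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty, full, mutex;

void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);               // wait for an empty slot
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                // one more full slot
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);                // wait for a filled slot
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);               // one more empty slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}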

Readers/Writers Problem

 Motivation: Consider a shared database
 Two classes of users:
 Readers – never modify database
 Writers – read and modify database
 Is using a single lock on the whole database sufficient?
 Like to have many readers at the same time
 Only one writer at a time
Readers/Writers Problem
• A database is to be shared among several concurrent processes. Some
of these processes may want only to read the database, whereas others
may want to update the database
• We distinguish between these two types of processes by referring to the
former as readers and to the latter as writers
• Obviously, if two readers access the shared data simultaneously,
nothing bad will happen
• However, if a writer and some other process (either a reader or a writer)
access the database simultaneously, chaos may ensue

Readers/Writers Problem
• To ensure that these difficulties do not arise, we require that the writers
have exclusive access to the shared database
• There are several variations of this problem, all involving priorities
– The first and simplest one, referred to as the first readers/writers problem, requires
that no reader will be kept waiting unless a writer has already obtained permission
to use the shared object (i.e., no reader should wait for other readers to finish
simply because a writer is waiting) NOTE: writers may starve
– The second readers/writers problem requires that, once a writer is ready, that writer
performs its write as soon as possible (i.e., if a writer is waiting, no new readers
may start reading) NOTE: readers may starve

Readers-Writers Problem (Cont.)
 The structure of a writer process. A writer will wait if either another writer is currently writing or one or more readers are currently reading.

do {
    wait (wrt) ;        // request exclusive access

    // writing is performed

    signal (wrt) ;      // release exclusive access
} while (TRUE);

Readers-Writers Problem (Cont.)
 The structure of a reader process. A reader will wait only if a writer is currently writing. We must make sure that readers update the shared variable readcount in a mutually exclusive manner. Note that if readcount == 1, no other reader is currently reading, and thus that is the only time a reader has to make sure that no writer is currently writing (i.e., if readcount > 1, there is at least one reader reading and thus the new reader does not have to wait).

do {
    wait (mutex) ;          // protect readcount
    readcount ++ ;
    if (readcount == 1)     // first reader locks out writers
        wait (wrt) ;
    signal (mutex) ;

    // reading is performed

    wait (mutex) ;
    readcount -- ;
    if (readcount == 0)     // last reader lets writers in
        signal (wrt) ;
    signal (mutex) ;
} while (TRUE);
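In practice, POSIX packages this first readers/writers pattern as a reader-writer lock; a brief sketch (db_lock is an illustrative name):

#include <pthread.h>

pthread_rwlock_t db_lock = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
    pthread_rwlock_rdlock(&db_lock);   // many readers may hold this at once
    // reading is performed
    pthread_rwlock_unlock(&db_lock);
}

void writer(void) {
    pthread_rwlock_wrlock(&db_lock);   // exclusive: waits for all readers and writers
    // writing is performed
    pthread_rwlock_unlock(&db_lock);
}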

Dining-Philosophers Problem

 Philosophers spend their lives thinking and eating


 Don’t interact with their neighbors, occasionally try to pick up 2 chopsticks (one at a
time) to eat from bowl
 Need both to eat, then release both when done
 In the case of 5 philosophers
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1

Dining-Philosophers Problem Algorithm
 The structure of philosopher i. A philosopher must wait for his/her left and right chopsticks to be available before he/she can start eating.

do {
    wait ( chopstick[i] );              // pick up left chopstick
    wait ( chopstick[ (i + 1) % 5] );   // pick up right chopstick

    // eat

    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );

    // think

} while (TRUE);

This solution guarantees that no two neighbors can be eating simultaneously (i.e., mutual exclusion).
 What is the problem with this algorithm? It could create a deadlock: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one. One common remedy is sketched below.
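One standard remedy is an asymmetric solution: an odd philosopher picks up the left chopstick first, an even philosopher the right one first. A sketch in the same pseudocode style:

do {
    if (i % 2 == 0) {                        // even: pick up right first
        wait ( chopstick[ (i + 1) % 5] );
        wait ( chopstick[i] );
    } else {                                 // odd: pick up left first
        wait ( chopstick[i] );
        wait ( chopstick[ (i + 1) % 5] );
    }

    // eat

    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );

    // think

} while (TRUE);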

Monitors
 The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes.
 A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
 A goal of the OS is to share resources amongst many programs.
 Separate schedulers should be created for each class of resource.
 Each scheduler contains local data plus procedures that programs may use to acquire and release resources. Such a collection of data and procedures is a monitor.
 Abstract data type: internal variables are only accessible by code within the monitor's procedures.
 Only one process may be active within the monitor at a time. If more than one program attempts to enter at the same time, only one will succeed, and the remaining programs wait on a queue.
 But not powerful enough to model some synchronization schemes.
monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) {……}

    initialization code (…) { … }
}

Schematic view of a Monitor

1. The initialization component contains the code that is used exactly once, when the monitor is created.
2. The monitor procedures are procedures that can be called from outside of the monitor.
3. The monitor entry queue contains all threads that called monitor procedures but have not yet been granted permission.
4. Shared data.

Condition Variables

 condition x, y;

 Two operations on a condition variable:

 x.wait () – a process that invokes the operation is suspended until x.signal (). If a procedure calls wait, the calling program will block until some other procedure calls signal.

 x.signal () – resumes one of the processes (if any) that invoked x.wait (). When a procedure calls signal, the lock on the monitor is also released, and another program that previously called wait will run.
 If no process has invoked x.wait () on the variable, then x.signal () has no effect.

Monitor with Condition Variables

Condition Variables Choices

 If process P invokes x.signal (), with Q in x.wait () state, what should happen
next?
 If Q is resumed, then P must wait

 Options include
 Signal and wait – P waits until Q leaves monitor or waits for another
condition
 Signal and continue – Q waits until P leaves the monitor or waits for
another condition
 Both have pros and cons – language implementer can decide
 Monitors implemented in Concurrent Pascal compromise
 P executing signal immediately leaves the monitor, Q is resumed
 Implemented in other languages including Mesa, C#, Java

Solution to Dining Philosophers
monitor DiningPhilosophers
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

Solution to Dining Philosophers (Cont.)

    void test (int i) {
        if ( (state[(i + 4) % 5] != EATING) &&
             (state[i] == HUNGRY) &&
             (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Solution to Dining Philosophers (Cont.)

 Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

DiningPhilosophers.pickup (i);

EAT

DiningPhilosophers.putdown (i);

 No deadlock, but starvation is possible

Monitor Implementation Using Semaphores

 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;

 Each procedure F will be replaced by

wait(mutex);
    body of F;
if (next_count > 0)
    signal(next);
else
    signal(mutex);

 Mutual exclusion within a monitor is ensured

Monitor Implementation – Condition Variables
 For each condition variable x, we have:

semaphore x_sem; // (initially = 0)


int x_count = 0;

 The operation x.wait can be implemented as:

x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

Monitor Implementation (Cont.)
 The operation x.signal can be implemented as:

if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}

Resuming Processes within a Monitor
 If several processes queued on condition x, and x.signal() executed, which should be resumed?

 FCFS frequently not adequate

 conditional-wait construct of the form x.wait(c)


 Where c is priority number
 Process with lowest number (highest priority) is scheduled next

A Monitor to Allocate Single Resource

monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}
void release() {
busy = FALSE;
x.signal();
}
initialization code() {
busy = FALSE;
}
}

Synchronization Examples
 Solaris

 Windows XP

 Linux

 Pthreads

Solaris Synchronization
 Implements a variety of locks to support multitasking, multithreading (including real-time threads), and
multiprocessing

 Uses adaptive mutexes for efficiency when protecting data that is accessed by short code segments
 Starts as a standard semaphore spin-lock
 If lock held, and by a thread running on another CPU, spins
 If lock held by non-run-state thread, block and sleep waiting for signal of lock being released

 Uses condition variables

 Uses readers-writers locks when longer sections of code need access to data

 Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock
 Turnstiles are per-lock-holding-thread, not per-object

 Priority-inheritance per-turnstile gives the running thread the highest of the priorities of the threads in its
turnstile

Windows XP Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor systems

 Uses spinlocks on multiprocessor systems


 Spinlocking-thread will never be preempted

 Also provides dispatcher objects in user-land which may act as mutexes, semaphores, events, and timers

 Events
 An event acts much like a condition variable
 Timers notify one or more threads when the time expires
 Dispatcher objects are either in signaled state (object available) or non-signaled state (thread will block)

Linux Synchronization
 Linux:
 Prior to kernel Version 2.6, disables interrupts to implement short critical sections
 Version 2.6 and later, fully preemptive

 Linux provides:
 semaphores
 spinlocks
 reader-writer versions of both

 On single-cpu system, spinlocks replaced by enabling and disabling kernel preemption

Pthreads Synchronization

 Pthreads API is OS-independent

 It provides:
 mutex locks
 condition variables

 Non-portable extensions include:


 read-write locks
 spinlocks

Atomic Transactions

 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions

System Model

 Assures that operations happen as a single logical unit of work that executes in its entirety or not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer system failures
 Transaction - collection of instructions or operations that performs single logical function
 Here we are concerned with changes to stable storage – disk
 Transaction is series of read and write operations
 Terminated by commit (transaction successful) or abort (transaction failed) operation
 Aborted transaction must be rolled back to undo any changes it performed

Types of Storage Media

 Volatile storage – information stored here does not survive system crashes
 Example: main memory, cache
 Nonvolatile storage – Information usually survives crashes
 Example: disk and tape
 Stable storage – Information never lost
 Not actually possible, so approximated via replication or RAID to devices with independent failure
modes

 Goal is to assure transaction atomicity where failures cause loss of information on volatile storage

Log-Based Recovery
 Record to stable storage information about all modifications by a transaction
 Most common is write-ahead logging
 Log on stable storage, each log record describes single transaction write operation, including
 Transaction name
 Data item name
 Old value
 New value
 <Ti starts> written to log when transaction Ti starts
 <Ti commits> written when Ti commits

 Log entry must reach stable storage before operation on data occurs

Log-Based Recovery Algorithm
 Using the log, system can handle any volatile memory errors
 Undo(Ti) restores value of all data updated by Ti
 Redo(Ti) sets values of all data in transaction Ti to new values
 Undo(Ti) and redo(Ti) must be idempotent
 Multiple executions must have the same result as one execution
 If system fails, restore state of all updated data via log
 If log contains <Ti starts> without <Ti commits>, undo(Ti)
 If log contains <Ti starts> and <Ti commits>, redo(Ti)
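A hedged C sketch of this recovery pass (the record layout and the helpers committed()/write_item() are illustrative assumptions, not from the slides):

#include <stdbool.h>

#define MAX_RECORDS 1024

// simplified write record: <Ti, data item, old value, new value>
struct log_record {
    char name[16];      // transaction name
    char item[16];      // data item name
    int  old_value;     // for undo
    int  new_value;     // for redo
};

struct log_record records[MAX_RECORDS];
int n_records;

bool committed(const char *name);              // true if <Ti commits> is in the log
void write_item(const char *item, int value);  // apply a value to stable data

void recover(void) {
    // redo committed transactions in forward (log) order; idempotent
    for (int i = 0; i < n_records; i++)
        if (committed(records[i].name))
            write_item(records[i].item, records[i].new_value);

    // undo uncommitted transactions in reverse order; idempotent
    for (int i = n_records - 1; i >= 0; i--)
        if (!committed(records[i].name))
            write_item(records[i].item, records[i].old_value);
}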

Checkpoints
 Log could become long, and recovery could take long
 Checkpoints shorten log and recovery time.
 Checkpoint scheme:
1. Output all log records currently in volatile storage to stable storage
2. Output all modified data from volatile to stable storage
3. Output a log record <checkpoint> to the log on stable storage
 Now recovery only includes Ti such that Ti started executing before the most recent checkpoint, and all transactions after Ti. All other transactions are already on stable storage.

Concurrent Transactions
 Must be equivalent to serial execution – serializability
 Could perform all transactions in critical section
 Inefficient, too restrictive
 Concurrency-control algorithms provide serializability

Serializability
 Consider two data items A and B
 Consider Transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence called schedule
 Atomically executed transaction order called serial schedule
 For N transactions, there are N! valid serial schedules

Schedule 1: T0 then T1

Nonserial Schedule
 Nonserial schedule allows overlapped execution
 Resulting execution not necessarily incorrect
 Consider schedule S, operations Oi, Oj
 Conflict if access same data item, with at least one write
 If Oi, Oj consecutive and operations of different transactions & Oi and Oj don’t conflict
 Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping nonconflicting operations
 S is conflict serializable

Schedule 2: Concurrent Serializable Schedule

Locking Protocol
 Ensure serializability by associating lock with each data item
 Follow locking protocol for access control
 Locks
 Shared – Ti has shared-mode lock (S) on item Q, Ti can read Q but not write Q
 Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can read and write Q
 Require every transaction on item Q acquire appropriate lock
 If lock already held, new request may have to wait
 Similar to readers-writers algorithm

Two-phase Locking Protocol
 Generally ensures conflict serializability
 Each transaction issues lock and unlock requests in two phases
 Growing – obtaining locks
 Shrinking – releasing locks
 Does not prevent deadlock

Timestamp-based Protocols
 Select order among transactions in advance – timestamp-ordering
 Transaction Ti associated with timestamp TS(Ti) before Ti starts
 TS(Ti) < TS(Tj) if Ti entered system before Tj
 TS can be generated from system clock or as logical counter incremented at each entry of transaction
 Timestamps determine serializability order
 If TS(Ti) < TS(Tj), system must ensure produced schedule equivalent to serial schedule where Ti appears
before Tj

Timestamp-based Protocol Implementation
 Data item Q gets two timestamps
 W-timestamp(Q) – largest timestamp of any transaction that executed write(Q) successfully
 R-timestamp(Q) – largest timestamp of successful read(Q)
 Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting read and write executed in timestamp order
 Suppose Ti executes read(Q)
 If TS(Ti) < W-timestamp(Q), Ti needs to read value of Q that was already overwritten
 read operation rejected and Ti rolled back
 If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(Ti))

Timestamp-ordering Protocol
 Suppose Ti executes write(Q)
 If TS(Ti) < R-timestamp(Q), value Q produced by Ti was needed previously and Ti assumed it would never be
produced
 Write operation rejected, Ti rolled back
 If TS(Ti) < W-timestamp(Q), Ti attempting to write obsolete value of Q
 Write operation rejected and Ti rolled back
 Otherwise, write executed
 Any rolled back transaction Ti is assigned new timestamp and restarted
 Algorithm ensures conflict serializability and freedom from deadlock
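The two checks can be written out directly; a hedged C sketch (the data-item layout and rollback() are illustrative assumptions):

struct data_item {
    int value;
    int w_timestamp;   // largest TS of any successful write(Q)
    int r_timestamp;   // largest TS of any successful read(Q)
};

void rollback(int ts);          // restart Ti with a new timestamp

int read_item(int ts, struct data_item *q) {
    if (ts < q->w_timestamp) {  // needed a value that was already overwritten
        rollback(ts);
        return -1;
    }
    if (ts > q->r_timestamp)
        q->r_timestamp = ts;    // max(R-timestamp(Q), TS(Ti))
    return q->value;
}

void write_item(int ts, struct data_item *q, int value) {
    if (ts < q->r_timestamp || ts < q->w_timestamp) {
        rollback(ts);           // write rejected
        return;
    }
    q->value = value;
    q->w_timestamp = ts;
}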

Schedule Possible Under Timestamp Protocol

End of Chapter 6

