
Process Synchronization

Unit-3 (part-2)
Process Synchronization
Processes are categorized as one of the following two types:

• Independent Process: Execution of one process does not affect the execution of other processes.

• Cooperative Process: Execution of one process affects the execution of other processes.

• The process synchronization problem arises in the case of cooperative processes.

• A critical section is a code segment that can be accessed by only one process at a time.

• The critical section contains shared variables which need to be synchronized to maintain consistency of data variables.
Process Synchronization

Concurrent access to shared data may result in data inconsistency.

We discuss various mechanisms to ensure the orderly execution of cooperating processes so that data consistency is maintained.
Bounded buffer
Producer-Consumer code; counter = 0 is a shared variable.

Producer:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

counter++ is implemented as:
    register1 = counter
    register1 = register1 + 1
    counter = register1

Consumer:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

counter-- is implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2
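A minimal C/pthreads sketch (a hypothetical example, not from the slides) that exhibits this race: two threads perform the unsynchronized counter updates shown above, and the final value of counter is usually not the expected 0.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;           /* shared variable, deliberately unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;         /* load, increment, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;         /* load, decrement, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d\n", counter);   /* expected 0, but rarely 0 in practice */
    return 0;
}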
Bounded buffer
Race condition
counter = 5

T0: Producer executes register1 = counter           {register1 = 5}
T1: Producer executes register1 = register1 + 1     {register1 = 6}
T2: Consumer executes register2 = counter           {register2 = 5}
T3: Consumer executes register2 = register2 - 1     {register2 = 4}
T4: Producer executes counter = register1           {counter = 6}
T5: Consumer executes counter = register2           {counter = 4}

When several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, this is called a RACE CONDITION.

Remedy: Allow only one process to manipulate the variable at a time (the processes must be synchronized).
Examples – Race Condition
1) If two processes were to open files simultaneously, the separate updates to the open-file list could result in a race condition.

2) Two requests for withdrawal from the same account come to a bank from two different ATM machines. Assume a balance of Rs.1000.
The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}.

Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.

When one process is executing in its critical section, no other process is to be allowed to execute in its critical section.

The critical-section problem is to design a protocol that the processes can use to cooperate.
The Critical-Section Problem
Each process must request permission to enter its critical section. The section of code implementing this request is the entry section.

The critical section may be followed by an exit section.

The remaining code is the remainder section.

The Critical-Section Problem
A solution to the critical-section problem must satisfy the following three requirements:

Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
The Critical-Section Problem
Kernel-mode processes:
Many kernel-mode processes may be active in the OS, and kernel code is also subject to several possible race conditions.

As an example, consider a kernel data structure that maintains a list of all open files in the system.

This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list).

If two processes were to open files simultaneously, the separate updates to this list could result in a race condition.

Other kernel data structures that are prone to race conditions:

1. Structures maintaining memory allocation
2. Structures maintaining the process list
The Critical-Section Problem
Two general approaches are used to handle critical sections in operating systems:

(1) preemptive kernels and
(2) nonpreemptive kernels.

A preemptive kernel allows a process to be preempted while it is running in kernel mode.

A nonpreemptive kernel does not allow a process running in kernel mode to be preempted (it is essentially free from race conditions on kernel data).

Careful design is required in a preemptive kernel to ensure that shared kernel data are free from race conditions.

A preemptive kernel is useful in real-time programming, since it allows a real-time process to preempt a process running in the kernel; it is also more responsive.
Peterson’s Solution
(classic software-based solution)
There is no guarantee that Peterson's solution will work correctly on modern computer architectures.

Peterson's solution is restricted to two processes and requires two data items to be shared between the two processes:

int turn : whose turn it is to enter its critical section

boolean flag[2] : whether a process is ready to enter its critical section
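The algorithm itself appears as a figure in the original slides; a standard sketch of Peterson's solution for process Pi (with j = 1 - i), using the shared data above:

do {
    flag[i] = true;          /* Pi is ready to enter its critical section */
    turn = j;                /* give Pj the turn; Pi yields if Pj also wants in */
    while (flag[j] && turn == j)
        ;                    /* busy wait while Pj is ready and it is Pj's turn */

    /* critical section */

    flag[i] = false;         /* Pi is no longer interested */

    /* remainder section */
} while (true);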
Mutual Exclusion

Pi enters its critical section only if either flag[j] == false or turn == i; both processes cannot satisfy this condition at the same time, so at most one of them is in its critical section.

Progress

Pj does not block Pi from entering its critical section. This is known as progress.

Bounded Waiting

Process Pi has made a request to enter its critical section by setting flag[i] = true. Bounded waiting limits how many times Pj is allowed to enter its critical section after Pi has set its flag and before Pi's request is granted.
Drawbacks of Software Solution
• Software solutions are very delicate.
• Processes that are requesting to enter their critical
sections are busy waiting (consuming processor time
needlessly).
• If critical sections are long, it would be more efficient
to block processes that are waiting.
Synchronization Hardware
The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared variable was being modified.

In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption.

Unfortunately, this solution is not feasible in a multiprocessor environment.

Disabling interrupts on a multiprocessor can be time consuming.

Synchronization Hardware

Many modern computer systems therefore provide special hardware instructions.

These instructions allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit.

We can use these special instructions to solve the critical-section problem in a relatively simple manner.
Synchronization Hardware
Any solution to the critical-section problem requires a simple tool: a lock.

A process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.
Synchronization Hardware
test_and_set(): definition of the instruction, and a mutual-exclusion implementation built on it.
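The definition and implementation appear as figures in the original slides; a standard sketch, with a shared boolean lock initialized to false and test_and_set() executed atomically by the hardware:

boolean test_and_set(boolean *target) {
    boolean rv = *target;    /* record the old value */
    *target = true;          /* unconditionally set the lock */
    return rv;               /* return the old value */
}

do {
    while (test_and_set(&lock))
        ;                    /* busy wait until the lock was previously false */

    /* critical section */

    lock = false;            /* release the lock */

    /* remainder section */
} while (true);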
compare_and_swap Instruction
Definition:

int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

1. Executes atomically.
2. It compares the contents of the memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value.
Mutual exclusion implementation with compare_and_swap() instruction
Shared integer lock initialized to 0.
Solution:

do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;  /* do nothing (busy wait) */

    /* critical section */

    lock = 0;

    /* remainder section */
} while (true);
Problem with test_and_set and compare_and_swap instructions
Both instructions satisfy mutual exclusion, but they do not satisfy the bounded-waiting requirement.

Example: P1, P2, P3 are ready to enter their critical sections.
1) P1 acquires the lock and is in its critical section.
2) P2 and P3 are waiting.
3) P2 may get the lock and enter its critical section.
4) Again, P1 may get the lock and enter its critical section; again P2 may get the lock.
5) P3 may never get the lock to enter its critical section: the bounded-waiting requirement is not satisfied.
Bounded-waiting mutual exclusion with test_and_set()
Common data structures are:
boolean waiting[n];
boolean lock;
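The algorithm itself appears as a figure in the original slides; a standard sketch for process Pi, with all entries of waiting[] and lock initialized to false:

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);   /* spin until the lock is free or Pi is handed the turn */
    waiting[i] = false;

    /* critical section */

    /* pass the lock to the next waiting process, in cyclic order */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;                /* no process is waiting: release the lock */
    else
        waiting[j] = false;          /* let Pj enter without re-acquiring the lock */

    /* remainder section */
} while (true);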
Mutex (mutual exclusion) locks
Hardware-based solutions are complicated and generally inaccessible to application programmers.
OS designers build software tools to solve the critical-section problem.
The simplest tool is the mutex lock.
Protect a critical section by first calling acquire() on the lock, then release() on the lock.
• A Boolean variable indicates whether the lock is available or not.
Calls to acquire() and release() must be atomic.
• Usually implemented via hardware atomic instructions.
Mutex (mutual exclusion) locks
A process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.

Solution to the critical-section problem using mutex locks:

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);

acquire() and release()

acquire() {
    while (!available)
        ;  /* busy wait */
    available = false;
}

release() {
    available = true;
}

available = true (1): the lock is available; available = false (0): the lock is not available.

Disadvantage: busy waiting (the waiting process must loop continuously).
• This type of mutex lock is called a spinlock: the process "spins" while waiting for the lock to become available (as with test_and_set() and compare_and_swap()).
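For comparison, a minimal POSIX-threads sketch (a hypothetical example, not from the slides) showing the same acquire/release pattern with pthread_mutex_t; shared_counter stands in for the critical-section data:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;               /* shared data protected by the lock */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* acquire */
        shared_counter++;             /* critical section */
        pthread_mutex_unlock(&lock);  /* release */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* always 200000 */
    return 0;
}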
Semaphores: Synchronization tool
A semaphore S is an integer variable that can be accessed only through two standard atomic operations:

wait() ----- (the test operation)

signal() -------- (the increment operation)

All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly.

Testing of the integer value of S (S <= 0), and its possible modification (S--), must also be executed without interruption.
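A sketch of the classical busy-waiting definitions of these two operations (each body must execute as a single atomic unit):

wait(S) {
    while (S <= 0)
        ;   /* busy wait */
    S--;
}

signal(S) {
    S++;
}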
Semaphores: Synchronization tool
Two types of semaphores:

1. Counting Semaphores   2. Binary Semaphores (mutex locks)

Counting semaphores can be used to control access to a given resource consisting of a finite number of instances.

The semaphore is initialized to the number of resources available.

Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count).

When a process releases a resource, it performs a signal() operation (incrementing the count).

When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
Semaphores: Synchronization tool
2. Binary Semaphores (mutex locks)

The value of a binary semaphore can be either 0 or 1.

We can use binary semaphores to deal with the critical-section problem for multiple processes.

The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as shown below.
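The structure appears as a figure in the original slides; a standard sketch of process Pi using the shared binary semaphore mutex initialized to 1:

do {
    wait(mutex);      /* entry section */

    /* critical section */

    signal(mutex);    /* exit section */

    /* remainder section */
} while (true);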
Semaphores: Synchronization tool
The main disadvantage of the semaphore definition given here is that it requires
busy waiting.

While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code.

Busy waiting wastes CPU cycles that some other process might be able to use
productively.

This type of semaphore is also called a spinlock because the process "spins" while
waiting for the lock.
Semaphores: Synchronization tool
To overcome the need for busy waiting, we can modify the definition of the wait() and signal() semaphore operations.

When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait.

However, rather than engaging in busy waiting, the process can block itself.

The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state.

Then control is transferred to the CPU scheduler, which selects another process to execute.
Semaphores: Synchronization tool
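The figure on this slide is not reproduced; a standard sketch of the semaphore implementation without busy waiting, where block() suspends the calling process and wakeup(P) moves process P to the ready queue:

typedef struct {
    int value;
    struct process *list;   /* queue of processes blocked on this semaphore */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();            /* suspend the calling process */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);          /* resume process P */
    }
}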
Semaphores: Synchronization tool
The critical aspect of semaphores is that they must be executed atomically.

We must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time.

This is itself a critical-section problem. In a single-processor environment we can solve it by simply disabling interrupts during the time the wait() and signal() operations are executing.

In a multiprocessor environment, interrupts must be disabled on every processor.

Disabling interrupts on every processor can be a difficult task and furthermore can seriously diminish performance.
Semaphores: Synchronization tool
Deadlocks and Starvation

The implementation of a semaphore with a waiting queue may lead to a situation where two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes, namely the execution of a signal() operation.

When such a state is reached, these processes are said to be deadlocked.
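To illustrate, consider the classic textbook example: two processes P0 and P1, and two semaphores S and Q, each initialized to 1.

P0:            P1:
wait(S);       wait(Q);
wait(Q);       wait(S);
  ...            ...
signal(S);     signal(Q);
signal(Q);     signal(S);

If P0 executes wait(S) and then P1 executes wait(Q), each process must now wait for the other to execute signal(): P0 blocks on wait(Q) and P1 blocks on wait(S), so neither can proceed.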

Another problem related to deadlocks is indefinite blocking, or starvation:

Indefinite blocking may occur if we add and remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
Classic Problems of Synchronization
These problems are used for testing nearly every newly proposed synchronization scheme.

The Bounded-Buffer Problem

The pool consists of n buffers, each capable of holding one item.

The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1.

The empty and full semaphores count the number of empty and full buffers.

The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
Classic Problems of Synchronization
The Bounded-Buffer Problem [the structure of the producer process]
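The figure is not reproduced here; a standard sketch of the producer process using the semaphores mutex, empty, and full described above:

do {
    /* produce an item in nextProduced */

    wait(empty);        /* wait for an empty buffer slot */
    wait(mutex);        /* acquire exclusive access to the buffer pool */

    /* add nextProduced to the buffer */

    signal(mutex);      /* release the buffer pool */
    signal(full);       /* one more full slot */
} while (true);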
Classic Problems of Synchronization
The Bounded-Buffer Problem [the structure of the consumer process]
• Self study
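Although the slide leaves the consumer as self study, a matching sketch for reference:

do {
    wait(full);         /* wait for a full buffer slot */
    wait(mutex);        /* acquire exclusive access to the buffer pool */

    /* remove an item from the buffer into nextConsumed */

    signal(mutex);      /* release the buffer pool */
    signal(empty);      /* one more empty slot */

    /* consume the item in nextConsumed */
} while (true);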
Classic Problems of Synchronization
The Readers-Writers Problem

A database is to be shared among several concurrent processes.

Some of these processes may want only to read the database (these are called readers), whereas others may want to update the database (these are called writers).

Obviously, if two readers access the shared data simultaneously, no adverse effects will result.

However, if a writer and some other process (either a reader or a writer) access the database simultaneously, problems may arise.
Classic Problems of Synchronization

We require that the writers have exclusive access to the shared database. This synchronization problem is referred to as the readers-writers problem.

Variations of the readers-writers problem

First readers-writers problem:

No reader will be kept waiting unless a writer has already obtained permission to use the shared object.

In other words, no reader should wait for other readers to finish simply because a writer is waiting.
Classic Problems of Synchronization
Second readers-writers problem:

Once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.

A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve.
Classic Problems of Synchronization
Solution to the first readers-writers problem:

semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is common to both the reader and writer processes.

The mutex semaphore is used to ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps track of how many processes are currently reading the object.

The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first or last reader that enters or exits the critical section.
Classic Problems of Synchronization
Solution to the first readers-writers problem:
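The code appears as a figure in the original slides; a standard sketch of the writer and reader processes using mutex, wrt, and readcount as defined above:

/* writer process */
do {
    wait(wrt);                 /* exclusive access to the shared object */

    /* writing is performed */

    signal(wrt);
} while (true);

/* reader process */
do {
    wait(mutex);               /* protect readcount */
    readcount++;
    if (readcount == 1)
        wait(wrt);             /* first reader locks out writers */
    signal(mutex);

    /* reading is performed */

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);           /* last reader lets writers back in */
    signal(mutex);
} while (true);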
Classic Problems of Synchronization

Reader-writer locks are most useful in the following situations:

In applications where it is easy to identify which processes only read shared data and which only write shared data.

In applications that have more readers than writers.
Classic Problems of Synchronization
The Dining-Philosophers Problem :
Classic Problems of Synchronization
The Dining-Philosophers Problem:

Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher.

In the centre of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues.

From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her.
Classic Problems of Synchronization
The Dining-Philosophers Problem:

A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbour.

When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks.

When she is finished eating, she puts down both of her chopsticks and starts thinking again.
Classic Problems of Synchronization
The Dining-Philosophers Problem:

One simple solution is to represent each chopstick with a semaphore.

A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore.

She releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are:
semaphore chopstick[5];

Where all the elements of chopstick are initialized to 1.
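A standard sketch of the structure of philosopher i under this scheme (as the next slide explains, this simple version can deadlock):

do {
    wait(chopstick[i]);               /* pick up left chopstick */
    wait(chopstick[(i + 1) % 5]);     /* pick up right chopstick */

    /* eat */

    signal(chopstick[i]);             /* put down left chopstick */
    signal(chopstick[(i + 1) % 5]);   /* put down right chopstick */

    /* think */
} while (true);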


Classic Problems of Synchronization
The Dining-Philosophers Problem:

Suppose that all five philosophers become hungry simultaneously and each grabs her left chopstick.

When each philosopher tries to grab her right chopstick, she will be delayed forever: the system is deadlocked.
Classic Problems of Synchronization
The Dining-Philosophers Problem:

Possible remedies to the deadlock problem are:

Allow at most four philosophers to be sitting simultaneously at the table.

Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).

Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.
