
Process Synchronization

A cooperating process is one that can affect or be affected by other processes executing in the
system.

 Cooperating processes may either directly share a logical address space (that is, both code
and data), or be allowed to share data only through files.
 The former case is achieved through the use of lightweight processes or threads.
 Concurrent access to shared data may result in data inconsistency.
 Process synchronization is a mechanism to ensure the orderly execution of cooperating
processes that share a logical address space, so that data consistency is maintained.

How is process synchronization achieved?


 Given a collection of cooperating sequential processes that share data, mutual exclusion
must be provided. One solution is to ensure that a critical section of code is used by only one
process or thread at a time.
 Different algorithms exist for the critical-section problem, under the assumption that only
storage interlock is available.

 The main disadvantage of these user-coded solutions is that they all require busy waiting.

 Semaphores can be used to solve various synchronization problems and can be implemented
efficiently, especially if hardware support for atomic operations is available.

Critical Section Problem


Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a
critical section, in which the process may be changing common variables, updating a table, writing a
file, and so on.

 The important feature of the system is that, when one process is executing in its critical
section, no other process is to be allowed to execute in its critical section.
 Thus, the execution of critical sections by the processes is mutually exclusive in time.
 The critical-section problem is to design a protocol that the processes can use to cooperate.
 Each process must request permission to enter its critical section.
 The section of code implementing this request is the entry section.
 The critical section may be followed by an exit section.
 The remaining code is the remainder section.

General Structure of a Typical Process Pi


do
{
    entry section
        critical section
    exit section
        remainder section
} while (1);

Solution to Critical Section Problem


A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion

* If process Pi is executing in its critical section, then no other process can be
executing in its critical section.

2. Progress

* If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder
section can participate in the decision on which will enter its critical section next, and this
selection cannot be postponed indefinitely.

3. Bounded Waiting

* There exists a bound on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.

Two Process Solutions


The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote
the other process; that is, j = 1 - i.

Algorithm 1

Let the processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is
allowed to execute in its critical section.

The structure of process Pi is shown below:

do
{
    while (turn != i);

        critical section

    turn = j;

        remainder section

} while (1);

 This solution ensures that only one process at a time can be in its critical section.
 However, it does not satisfy the progress requirement, since it requires strict
alternation of processes in the execution of the critical section.
 For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so,
even though P0 may be in its remainder section.

The structure of process Pi in a solution that uses both a flag array and the turn variable (Peterson's algorithm):

do
{
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

        critical section

    flag[i] = false;

        remainder section

} while (1);
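The algorithm above can be sketched in Python with two threads incrementing a shared counter; the thread and variable names here are illustrative, not from the text. The sketch relies on CPython executing each of these simple assignments atomically; a real implementation on parallel hardware would also need memory barriers.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads more often while busy-waiting

# Shared variables of Peterson's algorithm
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # which thread yields when both want to enter
counter = 0             # shared data updated inside the critical section
ITERS = 200

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True                   # entry section: announce intent
        turn = j                         # give the other thread priority
        while flag[j] and turn == j:
            pass                         # busy wait
        counter += 1                     # critical section
        flag[i] = False                  # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400 == 2 * ITERS: no increment was lost
```

Note the busy wait in the entry section: this is exactly the disadvantage of software solutions mentioned earlier.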

Semaphore
A semaphore is a synchronization tool. It is an integer variable S that, apart from initialization,
is accessed only through two standard atomic operations: wait and signal.

These operations were originally termed P (for wait, to test) and V (for signal, to increment).

* A semaphore is a synchronization object that controls access by multiple processes
to a common resource in a parallel programming environment.

* Semaphores are widely used to control access to shared resources such as files and shared memory.

Classical definition of 'wait' in pseudo code:

wait(S)
{
    while (S <= 0)
        ;   // no operation (busy wait)
    S--;
}

Classical definition of 'signal' in pseudo code:

signal(S)
{
    S++;
}
When one process modifies the semaphore value, no other process can simultaneously modify that
same semaphore value.

Usage

Semaphore is used to deal with the n-process critical-section problem.

* The n processes share a semaphore, mutex (standing for mutual exclusion), which is
initialized to 1.

Mutual Exclusion Implementation with Semaphore

do
{
    wait(mutex);

        critical section

    signal(mutex);

        remainder section

} while (1);
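The mutual-exclusion pattern above can be sketched using Python's threading.Semaphore as a stand-in for the wait/signal operations (acquire corresponds to wait, release to signal; the worker names are illustrative):

```python
import threading

mutex = threading.Semaphore(1)  # the shared semaphore, initialized to 1
counter = 0                     # shared data touched only in the critical section

def process(n_iters=1000):
    global counter
    for _ in range(n_iters):
        mutex.acquire()         # wait(mutex)
        counter += 1            # critical section
        mutex.release()         # signal(mutex)

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: every increment happened under mutual exclusion
```

Unlike the user-coded algorithms earlier, a blocked acquire() suspends the thread instead of busy waiting.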

Semaphore can be used to solve various synchronization problems.

* For example, consider two concurrently running processes: P1 with statement S1 and P2 with
statement S2.

Suppose it is required that S2 be executed only after S1 has completed. This scheme can be
implemented by letting P1 and P2 share a common semaphore synch, initialized to 0, and by inserting
the statements:

In process P1:

    S1;
    signal(synch);

In process P2:

    wait(synch);
    S2;

Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch),
which is after S1.
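This ordering scheme can be sketched in Python, again using threading.Semaphore for synch (the append calls stand in for the statements S1 and S2):

```python
import threading

synch = threading.Semaphore(0)  # initialized to 0: P2 must wait for P1
order = []                      # records the execution order of S1 and S2

def p1():
    order.append("S1")          # S1
    synch.release()             # signal(synch)

def p2():
    synch.acquire()             # wait(synch): blocks until P1 signals
    order.append("S2")          # S2

# Start P2 first to show the ordering does not depend on scheduling.
t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)  # ['S1', 'S2']
```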

Binary Semaphore
A binary semaphore is a semaphore with an integer value 0 or 1.

 A binary semaphore is easier to implement than a counting semaphore (general semaphore).


 If S is a counting semaphore, it can be implemented in terms of binary semaphores using
the following data structures:

binary-semaphore S1, S2;
int C;

 Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting
semaphore S.

"Wait" operation on the counting semaphore S:

wait(S1);
C--;
if (C < 0)
{
    signal(S1);
    wait(S2);
}
signal(S1);

"Signal" operation on the counting semaphore S:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);
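The construction above can be sketched as a Python class, with threading.Semaphore objects (used only with values 0 and 1) standing in for the binary semaphores S1 and S2. Note the hand-off in signal: when a waiter is woken via S2, S1 is not released by the signaler; the woken waiter releases it.

```python
import threading

class CountingSemaphore:
    """Counting semaphore built from two binary semaphores and an int C."""
    def __init__(self, initial):
        self.S1 = threading.Semaphore(1)  # guards the counter C
        self.S2 = threading.Semaphore(0)  # queues blocked waiters
        self.C = initial

    def wait(self):
        self.S1.acquire()                 # wait(S1)
        self.C -= 1
        if self.C < 0:
            self.S1.release()             # signal(S1): free the counter...
            self.S2.acquire()             # wait(S2): ...and block here
        self.S1.release()                 # signal(S1)

    def signal(self):
        self.S1.acquire()                 # wait(S1)
        self.C += 1
        if self.C <= 0:
            self.S2.release()             # signal(S2): hand S1 to a waiter
        else:
            self.S1.release()             # signal(S1)

# Usage: with C exhausted, a second wait() blocks until another thread signals.
s = CountingSemaphore(1)
events = []
s.wait()                                  # C: 1 -> 0

def releaser():
    events.append("signal")
    s.signal()

t = threading.Thread(target=releaser)
t.start()
s.wait()                                  # blocks until releaser signals
events.append("acquired")
t.join()
print(events)  # ['signal', 'acquired']
```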

Classic Problems of Synchronization


In this section, we present a number of different synchronization problems as examples of a large
class of concurrency-control problems.

These problems are used for testing nearly every newly proposed synchronization scheme.

Semaphores are used for synchronization in our solutions.

* The Bounded-Buffer Producer-consumer Problem

* The Readers- Writers Problem

* The Dining-Philosophers Problem


I. Bounded Buffer Producer-consumer Problem

The general statement is that there are one or more producer processes generating some type of
data (records, characters) and placing it in a buffer.

 There is a single consumer that takes items out of the buffer one at a time.
 The system must be constrained to prevent overlapping buffer operations:
1. Only one agent (producer or consumer) may access the buffer at a particular time.
2. The buffer is assumed to be of fixed size.
3. The consumer must wait if the buffer is empty, and the producer must wait if
the buffer is full.
 In the following code, the producer produces full buffers for the consumer, and the
consumer produces empty buffers for the producer.

 Structure of Producer process

do {

    produce an item in nextp

    wait(empty);
    wait(mutex);

    add nextp to buffer

    signal(mutex);
    signal(full);
} while (1);

 Structure of Consumer process

do {
    wait(full);
    wait(mutex);

    remove an item from buffer to nextc

    signal(mutex);
    signal(empty);

    consume the item in nextc

} while (1);
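The two structures above can be sketched together in Python, with `empty` counting free slots, `full` counting occupied slots, and `mutex` guarding the buffer itself (the buffer size and item counts are illustrative):

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buf = deque()                               # the bounded buffer
mutex = threading.Semaphore(1)              # guards the buffer
empty = threading.Semaphore(BUFFER_SIZE)    # counts free slots
full = threading.Semaphore(0)               # counts occupied slots
consumed = []

def producer(n_items):
    for item in range(n_items):             # produce an item in nextp
        empty.acquire()                     # wait(empty)
        mutex.acquire()                     # wait(mutex)
        buf.append(item)                    # add nextp to buffer
        mutex.release()                     # signal(mutex)
        full.release()                      # signal(full)

def consumer(n_items):
    for _ in range(n_items):
        full.acquire()                      # wait(full)
        mutex.acquire()                     # wait(mutex)
        consumed.append(buf.popleft())      # remove an item to nextc
        mutex.release()                     # signal(mutex)
        empty.release()                     # signal(empty)

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: all 20 items arrive, in order
```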
II. The Readers- Writers Problem

 A data object (such as a file or record) is to be shared among several concurrent processes.

 Some of these processes may only want to read the content of the shared object, whereas
others may want to update (that is, to read and write) the shared object.

 We can distinguish between these two types of processes by referring to those processes
that are interested in only reading as readers, and to the rest as writers.

 If two readers access the shared data object simultaneously, no adverse effects will result.

 However, if a writer and some other process (either a reader or a writer) access the shared
object simultaneously, chaos may ensue.

 To ensure that these difficulties do not arise, we require that the writers have exclusive
access to the shared object.

 This synchronization problem is referred to as the readers-writers problem.

In the solution to the first readers-writers problem, the processes share the
following data structures:

semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0.

 Structure of Writer process

wait(wrt);

    writing is performed

signal(wrt);

 Structure of Reader process


wait(mutex);
readcount ++;
if(readcount == 1)
wait(wrt);
signal(mutex);

reading is performed

wait(mutex);
readcount --;
if(readcount == 0)
signal(wrt);
signal(mutex);
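Both structures above can be sketched in Python; `wrt` gives writers exclusive access, `mutex` protects readcount, and the first and last readers lock and unlock `wrt` on behalf of all readers (the shared value and reader count here are illustrative):

```python
import threading

wrt = threading.Semaphore(1)     # exclusive access for writers
mutex = threading.Semaphore(1)   # protects readcount
readcount = 0
shared_value = 0                 # the shared data object
reads = []

def writer(value):
    global shared_value
    wrt.acquire()                # wait(wrt)
    shared_value = value         # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    reads.append(shared_value)   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader readmits writers
    mutex.release()

writer(42)                       # write first, then let readers run concurrently
threads = [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(reads)  # [42, 42, 42]: all readers saw the written value
```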
III. The Dining-Philosophers Problem
Problem Scenario

 Consider five philosophers who spend their lives thinking and eating. The philosophers share
a common circular table surrounded by five chairs, each belonging to one philosopher.

 In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
When a philosopher thinks, she does not interact with her colleagues. From time to time, a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks between her and her left and right neighbors).

 A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her
chopsticks at the same time, she eats without releasing her chopsticks. When she is finished
eating, she puts down both of her chopsticks and starts thinking again.

 The dining-philosophers problem is considered a classic synchronization problem because it
is an example of a large class of concurrency-control problems.

 It is a simple representation of the need to allocate several resources among several
processes in a deadlock-free and starvation-free manner.

Solution

One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a
chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing
the signal operation on the appropriate semaphores.

Thus, the shared data are:

semaphore chopstick [5];

where all the elements of chopstick are initialized to 1.

Structure of philosopher i

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    think

} while (1);
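The structure above can be sketched in Python. Note that, as written, the solution can deadlock if all five philosophers pick up their left chopstick simultaneously; this sketch avoids that with one standard fix, not given in the text, of having odd-numbered philosophers pick up their right chopstick first.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # initialized to 1
meals = []                                              # who ate, per meal

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Deadlock avoidance: odd philosophers reverse their pickup order.
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[...])
        chopstick[second].acquire()
        meals.append(i)               # eat
        chopstick[second].release()   # signal(chopstick[...])
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(len(meals))  # 15: each of the 5 philosophers ate 3 times, no deadlock
```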

Deadlock:

In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is
called a deadlock.

3.1 Resource-Allocation Graph:

Deadlocks can be described more precisely in terms of a directed graph called a system resource-
allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is
partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active
processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the
system.

A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that
process Pi has requested an instance of resource type Rj and is currently waiting for that
resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies
that an instance of resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is
called a request edge; a directed edge Rj → Pi is called an assignment edge.

We represent each process Pi as a circle and each resource type Rj as a rectangle. Since
resource type Rj may have more than one instance, we represent each such instance as a dot within
the rectangle. The resource-allocation graph shown in the figure depicts the following situation.

The sets P, R, and E:

 P = {P1, P2, P3}

 R = {R1, R2, R3, R4}

 E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
