
UNIT -III

PROCESS SYNCHRONIZATION

A cooperating process is one that can affect or be affected by other processes
executing in the system. Cooperating processes may either directly share a logical
address space or be allowed to share data only through files. Concurrent access
to shared data can leave that data inconsistent, so mechanisms are needed to
ensure the orderly execution of cooperating processes; this coordination is
known as process synchronization.

Critical Section Problem:


Consider a system consisting of n processes (P0, P1, ..., Pn-1). Each process has
a segment of code, known as its critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on. The
important feature of the system is that when one process is executing in its
critical section, no other process is allowed to execute in its critical section;
the execution of critical sections is mutually exclusive. The critical-section
problem is to design a protocol that the processes can use to cooperate: each
process must request permission to enter its critical section. The section of
code implementing this request is the entry section, the critical section is
followed by an exit section, and the remaining code is the remainder section.

Example:
While (1)
{
    Entry Section;
    Critical Section;
    Exit Section;
    Remainder Section;
}
A solution to the critical section problem must satisfy the following three
conditions.
1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not executing in their remainder sections can participate in
deciding which will enter its critical section next, and this selection
cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.

Peterson’s solution:
Peterson’s solution is a classic software-based solution to the critical-section
problem. Because of the way modern computer architectures
perform basic machine-language instructions, such as load and store,
there are no guarantees that Peterson’s solution will work correctly on
such architectures. However, it provides a good algorithmic description of
solving the critical-section problem and illustrates some of the
complexities involved in designing software that addresses the
requirements of mutual exclusion, progress, and bounded waiting.

The structure of process Pi in Peterson’s solution:


do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                       /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Peterson’s solution is restricted to two processes that alternate execution
between their critical sections and remainder sections. The processes are
numbered P0 and P1.
Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter its critical section. That is,
if turn == i, then process Pi is allowed to execute in its critical section. The
flag array is used to indicate if a process is ready to enter its critical section.
For example, if flag[i] is true, this value indicates that Pi is ready to enter
its critical section.
To enter the critical section, process Pi first sets flag[i] to be true and
then sets turn to the value j, thereby asserting that if the other process wishes
to enter the critical section, it can do so. If both processes try to enter at the
same time, turn will be set to both i and j at roughly the same time. Only one of
these assignments will last; the other will occur but will be overwritten
immediately.
The eventual value of turn determines which of the two processes is allowed
to enter its critical section first.
We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.

Synchronization Hardware :
As with other aspects of software, hardware features can make the
programming task easier and improve system efficiency. In this section, we
present some simple hardware instructions that are available on many
systems, and show how they can be used effectively in solving the critical-
section problem.
The definition of the TestAndSet instruction:
boolean TestAndSet(boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}
The critical-section problem could be solved simply in a uniprocessor
environment if we could forbid interrupts to occur while a shared variable is
being modified. In this manner, we could be sure that the current sequence of
instructions would be allowed to execute in order without preemption. No
other instructions would be run, so no unexpected modifications could be
made to the shared variable.
Unfortunately, this solution is not feasible in a multiprocessor environment.
Disabling interrupts on a multiprocessor can be time-consuming, as the
message is passed to all the processors. This message passing delays entry into
each critical section, and system efficiency decreases. Also, consider the effect
on a system's clock, if the clock is kept updated by interrupts.
Many machines therefore provide special hardware instructions that allow us
either to test and modify the content of a word, or to swap the contents of two
words, atomically-that is, as one uninterruptible unit. We can use these special
instructions to solve the critical-section problem in a relatively simple manner.
Rather than discussing one specific instruction for one specific machine, let us
abstract the main concepts behind these types of instructions.
The TestAndSet instruction can be defined as shown in the code above. The
important characteristic is that this instruction is executed atomically. Thus,
if two TestAndSet instructions are executed simultaneously (each on a different
CPU), they will be executed sequentially in some arbitrary order.
Mutual-exclusion implementation with TestAndSet:
do {
    while (TestAndSet(lock))
        ;                       /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (1);
The definition of the Swap instruction:
void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}
If the machine supports the TestAndSet instruction, then we can implement
mutual exclusion by declaring a Boolean variable lock, initialized to false,
as shown above.
If the machine supports the Swap instruction, then mutual exclusion can be
provided as follows. A global Boolean variable lock is declared and initialized
to false, and each process also has a local Boolean variable key:
do {
    key = true;
    while (key == true)
        Swap(lock, key);
    /* critical section */
    lock = false;
    /* remainder section */
} while (1);
Semaphores:

For the solution to the critical section problem one synchronization tool is
used which is known as the semaphore. A semaphore S is an integer variable
which is accessed through two standard operations: wait and signal. These
operations were originally termed P (from proberen, "to test") and V (from
verhogen, "to increment"). The classical definition of wait is:

wait (S)
{
    while (S <= 0)
        ;           /* busy wait */
    S--;
}

The classical definition of signal is:

signal (S)
{
    S++;
}
Both operations must be executed indivisibly: the test and decrement in wait,
and the increment in signal, must run atomically, without interruption.
Binary Semaphore:
A binary semaphore is a semaphore whose integer value can range only between
0 and 1. Let S be a counting semaphore. To implement it in terms of binary
semaphores we need the following data structures:
binary semaphore S1, S2;
int C;
Initially S1 = 1, S2 = 0, and the value of C is set to the initial value of the
counting semaphore S. The wait operation on the counting semaphore can then be
implemented as follows:
Wait (S1);
C--;
if (C < 0)
{
    Signal (S1);
    Wait (S2);
}
Signal (S1);
The signal operation on the counting semaphore can be implemented as follows:
Wait (S1);
C++;
if (C <= 0)
    Signal (S2);
else
    Signal (S1);

Classical Problem on Synchronization:


Several classical problems are used to test synchronization schemes, such as:
• Bounded Buffer Problem: This problem is commonly used to illustrate the
power of synchronization primitives. We assume that the pool consists of N
buffers, each capable of holding one item. The mutex semaphore provides
mutual exclusion for access to the buffer pool and is initialized to one.
The empty and full semaphores count the number of empty and full buffers
respectively; empty is initialized to N and full is initialized to zero.
This problem is also known as the producer-consumer problem: the producer
fills empty buffers for the consumer, and the consumer empties full buffers
for the producer. The structure of the producer process is as follows:
do {
    /* produce an item in nextp */
    ...
    Wait (empty);
    Wait (mutex);
    ...
    /* add nextp to buffer */
    ...
    Signal (mutex);
    Signal (full);
} While (1);
The structure of consumer process is as follows:
do {
    Wait (full);
    Wait (mutex);
    ...
    /* remove an item from buffer to nextc */
    ...
    Signal (mutex);
    Signal (empty);
    ...
    /* consume the item in nextc */
    ...
} While (1);
• Reader Writer Problem: In this problem there are two types of processes:
reader processes and writer processes. A reader process is responsible only
for reading the shared object, and a writer process is responsible for
writing it. This is an important synchronization problem with several
variations:
o The simplest, referred to as the first reader-writer problem, requires
that no reader be kept waiting unless a writer has already obtained
permission to use the shared object. In other words, no reader should
wait for other readers to finish simply because a writer is waiting.
o The second reader-writer problem requires that once a writer is ready,
the writer performs its write operation as soon as possible.
The structure of a reader process is as follows:
Wait (mutex);
readcount++;
if (readcount == 1)
    Wait (wrt);
Signal (mutex);
...
/* reading is performed */
...
Wait (mutex);
readcount--;
if (readcount == 0)
    Signal (wrt);
Signal (mutex);
The structure of the writer process is as follows:
Wait (wrt);
/* writing is performed */
Signal (wrt);
• Dining Philosopher Problem: Consider five philosophers who spend their
lives thinking and eating. The philosophers share a common circular table
surrounded by five chairs, each occupied by one philosopher. In the center
of the table there is a bowl of rice, and the table is laid with five single
chopsticks, as shown in the figure below.

When a philosopher thinks, she does not interact with her colleagues. From
time to time a philosopher gets hungry and tries to pick up the two
chopsticks that are closest to her. A philosopher may pick up only one
chopstick at a time, and she cannot pick up a chopstick that is already in
the hand of a neighbor. When a hungry philosopher has both her chopsticks at
the same time, she eats without releasing them. When she has finished
eating, she puts down both of her chopsticks and starts thinking again. This
is considered a classic synchronization problem. Each chopstick is
represented by a semaphore: a philosopher grabs a chopstick by executing the
wait operation on that semaphore and releases it by executing the signal
operation on the appropriate semaphore. The structure of philosopher i is as
follows:
do {
    Wait (chopstick[i]);
    Wait (chopstick[(i+1) % 5]);
    ...
    /* eat */
    ...
    Signal (chopstick[i]);
    Signal (chopstick[(i+1) % 5]);
    ...
    /* think */
    ...
} While (1);

Critical Region:
In the semaphore solution to the critical section problem, all processes must
share a semaphore variable mutex which is initialized to one. Each process must
execute wait (mutex) before entering the critical section and signal (mutex)
after completing its execution, but various difficulties may arise with this
approach:
Case 1: Suppose that a process interchanges the order in which the wait and
signal operations on the semaphore mutex are executed, resulting in the
following execution:
Signal (mutex);
..........
Critical Section
...........
Wait (mutex);
In this situation several processes may be executing in their critical
sections simultaneously, which is violating mutual exclusion
requirement.
Case 2: Suppose that a process replaces the signal (mutex) with wait
(mutex).
The execution is as follows:
Wait (mutex);
...........
Critical Section
...........
Wait (mutex);
In this situation a deadlock will occur.
Case 3: Suppose that a process omits the wait (mutex) and the signal (mutex).
In this case the mutual exclusion is violated or a deadlock will occur.
To deal with the kinds of errors that can be generated by using semaphores,
some high-level language constructs have been introduced, such as the critical
region and the monitor.
The critical region is also known as the conditional critical region. This
construct guards against certain simple errors associated with semaphores. This
high-level language synchronization construct requires a variable V of type T
which is to be shared among many processes. It is declared as
V: shared T;
The variable V can be accessed only inside a region statement. Such a region
can be implemented with semaphores as shown below:
Wait (mutex);
while (!B)
{
    first_count++;
    if (second_count > 0)
        Signal (second_delay);
    else
        Signal (mutex);
    Wait (first_delay);
    first_count--;
    second_count++;
    if (first_count > 0)
        Signal (first_delay);
    else
        Signal (second_delay);
    Wait (second_delay);
    second_count--;
}
S;
if (first_count > 0)
    Signal (first_delay);
else if (second_count > 0)
    Signal (second_delay);
else
    Signal (mutex);

(Implementation of the conditional region constructs)


Here B is a Boolean expression that governs access to the critical region and
is initialized to false, and S is the statement executed inside the region.
mutex, first_delay, and second_delay are semaphores initialized to 1, 0, and 0
respectively, and first_count and second_count are integer variables
initialized to zero.

Monitor:
A monitor type is an ADT that includes a set of programmer-defined operations
that are provided with mutual exclusion within the monitor. The monitor type
also declares the variables whose values define the state of an instance of
that type, along with the bodies of functions that operate on those variables.

The syntax of monitor is as follows.


monitor monitor_name
{
    /* shared variable declarations */

    procedure body P1 (...) {
        ...
    }
    procedure body P2 (...) {
        ...
    }
    .
    .
    .
    procedure body Pn (...) {
        ...
    }
    {
        initialization code
    }
}

The representation of a monitor type cannot be used directly by the various
processes. Thus, a function defined within a monitor can access only those
variables declared locally within the monitor and its formal parameters.
Similarly, the local variables of a monitor can be accessed by only the local
functions.

The monitor construct ensures that only one process at a time is active within
the monitor. Consequently, the programmer does not need to code this
synchronization constraint explicitly.
The monitor construct, as defined so far, is not sufficiently powerful for
modeling some synchronization schemes. For this purpose, we need to define
additional synchronization mechanisms. These mechanisms are provided by
the condition construct. A programmer who needs to write a tailor-made
synchronization scheme can define one or more variables of type condition:
condition x, y;

The only operations that can be invoked on a condition variable are wait()
and signal(). The operation
x.wait();
means that the process invoking this operation is suspended until another
process invokes
x.signal();
The x.signal() operation resumes exactly one suspended process. If no
process is suspended, then the signal() operation has no effect; that is, the state
of x is the same as if the operation had never been executed.
Now suppose that, when the x.signal() operation is invoked by a process
P, there exists a suspended process Q associated with condition x. Clearly, if
the suspended process Q is allowed to resume its execution, the signalling
process P must wait. Otherwise, both P and Q would be active
simultaneously within the monitor. Note, however, that conceptually
both processes can continue with their execution.
Two possibilities exist:
1. Signal and wait. P either waits until Q leaves the monitor or
waits for another condition.
2. Signal and continue. Q either waits until P leaves the monitor
or waits for another condition.
