Module3.1
MODULE-3
Process Synchronization
1. Background
A cooperating process is one that can affect or be affected by other processes executing in the system.
Processes can execute concurrently or in parallel. The CPU scheduler switches rapidly between
processes to provide concurrent execution.
Since processes frequently need to communicate with other processes, there is a need
for well-structured communication among processes, without relying on interrupts.
In the bounded-buffer problem, our original solution allowed at most BUFFER_SIZE − 1 items in the
buffer at the same time. Suppose we want to modify the algorithm to remedy this deficiency. One
possibility is to add an integer variable counter, initialized to 0. counter is incremented every time we
add a new item to the buffer and decremented every time we remove one item from the buffer.
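The modified routines can be sketched as follows (a pseudocode sketch in the style of this module; buffer, in, out, next_produced, and next_consumed are the usual bounded-buffer variables):

```
/* producer */
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

/* consumer */
while (true) {
    while (counter == 0)
        ; /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
```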
Although the producer and consumer routines shown above are correct separately, they may not
function correctly when executed concurrently. Suppose that the value of the variable counter is
currently 5 and that the producer and consumer processes concurrently execute the
statements "counter++" and "counter--". Following the execution of these two statements, the value
of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is
produced correctly if the producer and consumer execute separately.
Notice that we have arrived at the incorrect state “counter == 4”, indicating that four buffers are full,
when, in fact, five buffers are full. If we reversed the order of the statements at T4 and T5, we would
arrive at the incorrect state “counter == 6”.
A situation in which several processes access and manipulate the same data concurrently, and
the outcome of the execution depends on the particular order in which the accesses take
place, is called a race condition.
To guard against race conditions, we must ensure that only one process at a time can manipulate the
shared variable or data. To make such a guarantee, processes need to be synchronized in some way.
Two general approaches are used to handle critical sections in operating systems: preemptive kernels
and nonpreemptive kernels.
3. Peterson’s Solution
A classic software-based solution to the critical-section problem is known as Peterson's solution. It
addresses the three requirements of mutual exclusion, progress, and bounded waiting.
It is a two-process solution.
Assume that the LOAD and STORE machine instructions are atomic; that is, they cannot be interrupted.
The two processes share two variables:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate
if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready.
• The structure of process Pi in Peterson’s solution:
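The standard entry and exit code can be sketched as follows (pseudocode in the module's style, where j == 1 - i denotes the other process):

```
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* busy wait */

    /* critical section */

    flag[i] = false;

    /* remainder section */
} while (true);
```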
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or
turn == i. Also note that, if both processes can be executing in their critical sections at the same time,
then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have
successfully executed their while statements at about the same time, since the value of turn can be
either 0 or 1 but cannot be both.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical
section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop
is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can
enter its critical section. Once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to
enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not
change the value of the variable turn while executing the while statement, Pi will enter the critical
section (progress) after at most one entry by Pj (bounded waiting).
• Peterson's solution thus shows that:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
4. Semaphores
The hardware-based solutions to the critical-section problem are complicated and generally
inaccessible to application programmers. Operating-system designers therefore build software
tools to solve the critical-section problem; this synchronization tool is called a semaphore.
• A semaphore S is an integer variable that, apart from initialization, is accessed only through
the two standard atomic operations wait() and signal().
Implementation:
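The classical busy-waiting definitions of wait() and signal() can be sketched as follows (pseudocode; both operations must execute atomically):

```
wait(S) {
    while (S <= 0)
        ; /* busy wait */
    S--;
}

signal(S) {
    S++;
}
```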
The disadvantage of this semaphore definition is busy waiting: while one process is in its critical
section, any other process that tries to enter its critical section must loop continuously in the entry
code. Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spinlock because the process "spins" while
waiting for the lock.
Solution for Busy Waiting problem:
Modify the definition of the wait() and signal()operations as follows: When a process executes
the wait() operation and finds that the semaphore value is not positive, it must wait. Rather than
engaging in busy waiting, the process can block itself. The block operation places a process
into a waiting queue associated with the semaphore, and the state of the process is switched to
the waiting state. Then control is transferred to the CPU scheduler,which selects another
process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal() operation. The process is restarted by a wakeup() operation, which
changes the process from the waiting state to the ready state. The process is then placed in the
ready queue.
To implement semaphores under this definition, define a semaphore as follows:
typedef struct {
    int value;
    struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes list. When a process must wait on
a semaphore, it is added to the list of processes. A signal() operation removes one process
from the list of waiting processes and awakens that process. Now, the wait() semaphore operation
can be defined as:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
and the signal() semaphore operation can be defined as
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
The block() operation suspends the process that invokes it. The wakeup(P) operation resumes
the execution of a blocked process P.
Deadlocks and Starvation
The implementation of a semaphore with a waiting queue may result in a situation where two
or more processes are waiting indefinitely for an event that can be caused by only one of the
waiting processes, these processes are said to be deadlocked.
Consider the following example: a system consisting of two processes, P0 and P1, each accessing
two semaphores, S and Q, both initialized to 1:
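The two processes execute the following sequences (a pseudocode sketch of the scenario described below):

```
P0:                  P1:
wait(S);             wait(Q);
wait(Q);             wait(S);
  ...                  ...
signal(S);           signal(Q);
signal(Q);           signal(S);
```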
Suppose that P0 executes wait(S) and then P1 executes wait(Q).When P0 executes wait(Q), it
must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until
P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are
deadlocked.
Another problem related to deadlocks is indefinite blocking, or starvation, in which processes wait
indefinitely within the semaphore. Indefinite blocking may occur if we remove processes from the
list associated with a semaphore in LIFO (last-in, first-out) order.
Priority Inversion
A scheduling challenge arises when a higher-priority process needs to read or modify kernel data
that are currently being accessed by a lower-priority process—or a chain of lower-priority
processes.
As an example, assume we have three processes—L, M, and H—whose priorities follow the order
L < M < H. Assume that process H requires resource R, which is currently being accessed by
process L. Ordinarily, process H would wait for L to finish using resource R. However, now
suppose that process M becomes runnable, thereby pre-empting process L. Indirectly, a
process with a lower priority—process M—has affected how long process H must wait for L to
relinquish resource R.
This problem is known as priority inversion. It occurs only in systems with more than two
priorities, so one solution is to have only two priorities. That is insufficient for most general-
purpose operating systems, however. Typically, these systems solve the problem by implementing
a priority-inheritance protocol. According to this protocol, all processes that are accessing
resources needed by a higher-priority process inherit the higher priority until they are finished with
the resources in question. When they are finished, their priorities revert to their original values.
5. Classic Problems of Synchronization
The Readers–Writers Problem
Shared Data
• Semaphore mutex initialized to 1
• Semaphore wrt initialized to 1
• Integer readcount initialized to 0
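With these shared objects, the first readers–writers solution can be sketched as follows (pseudocode in the module's style; writers wait on wrt, and mutex protects readcount):

```
/* structure of a writer process */
do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (true);

/* structure of a reader process */
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      /* first reader locks out writers */
    signal(mutex);

    /* reading is performed */

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    /* last reader lets writers in */
    signal(mutex);
} while (true);
```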
The Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular
table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of
rice, and the table is laid with five single chopsticks.
A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks
that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a
time. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing the
chopsticks. When she is finished eating, she puts down both chopsticks and starts thinking again.
It is a simple representation of the need to allocate several resources among several processes in a
deadlock-free and starvation-free manner.
Solution: One simple solution is to represent each chopstick with a semaphore. A philosopher tries to
grab a chopstick by executing a wait() operation on that semaphore. She releases her chopsticks by
executing the signal() operation on the appropriate semaphores. Thus, the shared data are
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1.
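The structure of philosopher i can be sketched as follows (pseudocode in the module's style):

```
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    /* eat for a while */

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    /* think for a while */
} while (true);
```

Note that this simple solution can deadlock: if all five philosophers grab their left chopsticks simultaneously, each waits forever for a right chopstick.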
Several possible remedies to the deadlock problem are the following:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (she
must pick them up in a critical section).
• Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left
chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her
right chopstick and then her left chopstick.
6. Monitors
Incorrect use of semaphore operations:
• Suppose that a process interchanges the order in which the wait() and signal()
operations on the semaphore mutex are executed, resulting in the following
execution:
signal(mutex);
...
critical section
...
wait(mutex);
In this situation, several processes may be executing in their critical sections
simultaneously, violating the mutual-exclusion requirement.
• Suppose that a process replaces signal(mutex) with
wait(mutex). That is, it executes
wait(mutex);
...
critical section
...
wait(mutex);
In this case, a deadlock will occur.
A programmer who needs a tailor-made synchronization scheme can define one or more
variables of type condition:
condition x, y;
The only operations that can be invoked on a condition variable are wait() and signal().
The operation
x.wait();
means that the process invoking this operation is suspended until another process
invokes
x.signal();
The x.signal() operation resumes exactly one suspended process. If no process is suspended,
then the signal() operation has no effect; that is, the state of x is the same as if the operation
had never been executed. Contrast this operation with the signal() operation associated with
semaphores, which always affects the state of the semaphore.
For each monitor, a semaphore mutex (initialized to 1) is provided. A process must execute
wait(mutex) before entering the monitor and must execute signal(mutex) after leaving the
monitor.
Since a signaling process must wait until the resumed process either leaves or waits, an
additional semaphore, next, is introduced, initialized to 0. The signaling processes can use
next to suspend themselves. An integer variable next_count is also provided to count the
number of processes suspended on next. Thus, each external function F is replaced by
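Each external function F of the monitor is then replaced by the following wrapper (the standard semaphore-based monitor implementation sketched in this module's style):

```
wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);   /* resume a suspended signaler */
else
    signal(mutex);  /* let a new process enter the monitor */
```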
Mutual exclusion within a monitor is ensured.
For each condition x, we introduce a semaphore x_sem and an integer variable x_count, both
initialized to 0. The operation x.wait() can now be implemented as
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
The operation x.signal() can be implemented as
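In the same scheme, x.signal() checks whether any process is suspended on the condition and, if so, wakes one of them while the signaler suspends itself on next:

```
if (x_count > 0) {
    next_count++;
    signal(x_sem);  /* wake a process waiting on condition x */
    wait(next);     /* signaler waits until the monitor is free */
    next_count--;
}
```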
To control which suspended process is resumed next, the conditional-wait construct can be used:
x.wait(c);
where c is an integer expression that is evaluated when the wait() operation is executed. The
value of c, which is called a priority number, is then stored with the name of the process
that is suspended. When x.signal() is executed, the process with the smallest priority number
is resumed next.
A ResourceAllocator monitor can be used to control the allocation of a single resource among
competing processes.
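A sketch of such a monitor (pseudocode; acquire takes the maximum time the process plans to use the resource, which serves as its priority number):

```
monitor ResourceAllocator {
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);   /* priority number = requested usage time */
        busy = true;
    }

    void release() {
        busy = false;
        x.signal();         /* resume waiter with smallest time */
    }

    initialization_code() {
        busy = false;
    }
}
```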
A process that needs to access the resource in question must observe the following sequence:
R.acquire(t);
...
access the resource;
...
R.release();
where R is an instance of type ResourceAllocator.
The monitor concept cannot guarantee that the preceding access sequence will be
observed. In particular, the following problems can occur:
• A process might access a resource without first gaining access permission to the
resource.
• A process might never release a resource once it has been granted access to the
resource.
• A process might attempt to release a resource that it never requested.
• A process might request the same resource twice (without first releasing the resource).