
UNIT-II (Part2)

Inter-process Communication (IPC)

 Mechanism for processes to communicate and to synchronize their actions.


 Message system – processes communicate with each other without resorting to shared
variables.
 IPC facility provides two operations (a short message-passing sketch in C follows this list):
1. send(message) – message size fixed or variable
2. receive(message)
 If P and Q wish to communicate, they need to:
1. Establish a communication link between them
2. Exchange messages via send/receive
 Implementation of communication link
1. physical (e.g., shared memory, hardware bus)
2. logical (e.g., logical properties)
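As a concrete illustration, here is a minimal sketch (not part of the original notes) of message passing between two processes over a POSIX pipe; the pipe plays the role of the communication link, and write/read stand in for send/receive. The message text and buffer size are illustrative choices.

/* Sketch: send/receive between a parent (P) and a child (Q) over a pipe. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child process, playing the role of Q */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf);    /* receive(message) */
        printf("Q received: %.*s\n", (int)n, buf);
        return 0;
    }
    close(fd[0]);                     /* parent process, playing the role of P */
    write(fd[1], "hello", 5);         /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}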

Direct Communication
Processes must name each other explicitly:
✦send(P, message) – send a message to process P

✦receive(Q, message) – receive a message from process Q

Properties of communication link

✦Links are established automatically.
✦A link is associated with exactly one pair of communicating processes.
✦Between each pair there exists exactly one link.
✦The link may be unidirectional, but is usually bi-directional.
Indirect Communication
Messages are directed to and received from mailboxes (also referred to as ports).
✦Each mailbox has a unique id.
✦Processes can communicate only if they share a mailbox.

Properties of communication link


✦Link established only if processes share a common mailbox
✦A link may be associated with many processes.
✦Each pair of processes may share several communication links.
✦Link may be unidirectional or bi-directional.
Operations
✦create a new mailbox
✦send and receive messages through the mailbox
✦destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A

receive(A, message) – receive a message from mailbox A

Mailbox sharing
✦P1, P2, and P3 share mailbox A.

✦P1 sends; P2 and P3 receive.

✦Who gets the message?

Solutions
✦Allow a link to be associated with at most two processes.
✦Allow only one process at a time to execute a receive operation.
✦Allow the system to select the receiver arbitrarily. The sender is notified who the receiver was.

Concurrent Processes
The concurrent processes executing in the operating system may be either
independent processes or cooperating processes. A process is independent if it cannot affect
or be affected by the other processes executing in the system. Clearly, any process that does
not share any data (temporary or persistent) with any other process is independent. On the
other hand, a process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.
We may want to provide an environment that allows process cooperation for several
reasons:
Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to these types of resources.
Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Such a speedup can be
achieved only if the computer has multiple processing elements (such as CPUs or I/O
channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
Convenience: Even an individual user may have many tasks on which to work at one time.
For instance, a user may be editing, printing, and compiling in parallel.

Message-Passing System
The function of a message system is to allow processes to communicate with one
another without the need to resort to shared data. We have already seen message passing
used as a method of communication in microkernels. In this scheme, services are provided
as ordinary user processes. That is, the services operate outside of the kernel.
Communication among the user processes is accomplished through the passing of
messages. An IPC facility provides at least the two operations: send(message) and
receive(message).
Messages sent by a process can be of either fixed or variable size. If only fixed-sized
messages can be sent, the system-level implementation is straightforward. This
restriction, however, makes the task of programming more difficult. On the other hand,
variable-sized messages require a more complex system-level implementation, but the
programming task becomes simpler.
If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link can be
implemented in a variety of ways. We are concerned here not with the link's physical
implementation, but rather with its logical implementation. Here are several methods for
logically implementing a link and the send/receive operations:
 Direct or indirect communication
 Symmetric or asymmetric communication
 Automatic or explicit buffering
 Send by copy or send by reference
 Fixed-sized or variable-sized messages
We look at each of these types of message systems next.

Synchronization
Communication between processes takes place by calls to send and receive primitives.
There are different design options for implementing each primitive. Message passing may
be either blocking or nonblocking, also known as synchronous and asynchronous.
Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.
Nonblocking send: The sending process sends the message and resumes operation.

Blocking receive: The receiver blocks until a message is available.

Nonblocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible. When both the send and
receive are blocking, we have a rendezvous between the sender and the receiver.
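To make the distinction concrete, the following sketch (not from the original notes) puts the read end of a POSIX pipe into non-blocking mode: a receive with no message available returns immediately with a "null" result (errno == EAGAIN), whereas a blocking read would wait. The message content is an illustrative choice.

/* Sketch: blocking vs. non-blocking receive on a pipe. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    /* Put the read end in non-blocking mode. */
    fcntl(fd[0], F_SETFL, O_NONBLOCK);

    /* Non-blocking receive with an empty queue: returns immediately. */
    if (read(fd[0], buf, sizeof buf) == -1 && errno == EAGAIN)
        printf("no message available (null)\n");

    /* After a send, the receive retrieves a valid message. */
    write(fd[1], "hello", 5);
    ssize_t n = read(fd[0], buf, sizeof buf);
    printf("received %zd bytes: %.*s\n", n, (int)n, buf);
    return 0;
}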

Buffering
Whether the communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Basically, such a queue can be implemented in three
ways:
Zero capacity: The queue has maximum length 0; thus, the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it.
If the queue is not full when a new message is sent, the latter is placed in the queue (either
the message is copied or a pointer to the message is kept), and the sender can continue
execution without waiting. The link has a finite capacity, however. If the link is full, the
sender must block until space is available in the queue.
Unbounded capacity: The queue has potentially infinite length; thus, any number of
messages can wait in it. The sender never blocks. The zero-capacity case is sometimes
referred to as a message system with no buffering; the other cases are referred to as automatic
buffering.

Client-Server Communication
 Sockets
 Remote Procedure Calls
 Remote Method Invocation (Java)

Process Synchronization
A situation in which several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular order in which the
accesses take place, is called a race condition.
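The sketch below (not in the original notes) shows such a race condition: two POSIX threads increment a shared counter without synchronization, and the final value varies from run to run. The thread count, loop bound, and the name worker are illustrative.

/* Sketch: a race condition on a shared counter. Compile with: gcc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;              /* shared data */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; the actual value depends on how the increments
     * interleave -- a race condition. */
    printf("counter = %d\n", counter);
    return 0;
}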

Producer-Consumer Problem
Paradigm for cooperating processes: a producer process produces information that is consumed by
a consumer process.
To allow producer and consumer processes to run concurrently, we must have available a
buffer of items that can be filled by the producer and emptied by the consumer. A producer can
produce one item while the consumer is consuming another item. The producer and consumer
must be synchronized, so that the consumer does not try to consume an item that has not yet
been produced. In this situation, the consumer must wait until an item is produced.
✦unbounded buffer: places no practical limit on the size of the buffer.
✦bounded buffer: assumes that there is a fixed buffer size.

Bounded-Buffer–Shared-Memory Solution

The consumer and producer processes share the following variables.


Shared data:

#define BUFFER_SIZE 10

typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

This solution is correct, but can use only BUFFER_SIZE-1 elements.
The shared buffer is implemented as a circular array with two logical pointers: in and out.
The variable in points to the next free position in the buffer; out points to the first full
position in the buffer. The buffer is empty when in == out; the buffer is full when
((in + 1) % BUFFER_SIZE) == out.

The code for the producer and consumer processes follows. The producer process has a local
variable nextProduced in which the new item to be produced is stored:

Bounded-Buffer – Producer Process

item nextProduced;
while (1)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded-Buffer-consumer Process

item nextConsumed;

while (1)
{
    while (in == out)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}

The critical section problem


Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system is
that, when one process is executing in its critical section, no other process is to be allowed to
execute in its critical section. Thus, the execution of critical sections by the processes is
mutually exclusive in time. The critical-section problem is to design a protocol that the
processes can use to cooperate. Each process must request permission to enter its critical
section. The section of code implementing this request is the entry section. The critical
section may be followed by an exit section. The remaining code is the remainder section.

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder section can participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Peterson’s solution

Peterson’s solution is a software-based solution to the critical-section problem.

Consider two processes P0 and P1. For convenience, when presenting Pi, we use Pj to denote
the other process; that is, j == 1 - i.

The processes share two variables:


boolean flag[2];
int turn;
Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either 0 or 1). The
structure of process Pi is shown below.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
        critical section
    flag[i] = false;
        remainder section
} while (1);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to
the value j, thereby asserting that if the other process wishes to enter the critical section,
it can do so. If both processes try to enter at the same time, turn will be set to both i and j at
roughly the same time. Only one of these assignments will last; the other will occur, but will
be overwritten immediately. The eventual value of turn decides which of the two processes
is allowed to enter its critical section first.

We now prove that this solution is correct. We need to show that:

1. Mutual exclusion is preserved,

2. The progress requirement is satisfied,

3. The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or
turn == i. Also note that, if both processes can be executing in their critical sections
at the same time, then flag[i] == flag[j] == true. These two observations imply that P0 and
P1 could not have successfully executed their while statements at about the same time, since the
value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes (say, Pj) must
have successfully executed the while statement, whereas Pi had to execute at least one
additional statement ("turn == j"). However, since, at that time, flag[j] == true and turn == j,
and this condition will persist as long as Pj is in its critical section, the result follows:
mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering
the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn
== j; this loop is the only one. If Pj is not ready to enter the critical section, then flag[j] == false
and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while
statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If
turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it
will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it
must also set turn to i.

Thus, since Pi does not change the value of the variable turn while executing the while
statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded
waiting).
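The following sketch (not in the original notes) expresses Peterson's solution for two POSIX threads using C11 atomics; the sequentially consistent loads and stores stand in for the atomic accesses the algorithm assumes, and the loop bound and names such as proc are illustrative.

/* Sketch: Peterson's solution with two pthreads and C11 atomics. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int  turn;
static int shared_counter = 0;        /* protected by the protocol */

static void *proc(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);             /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                     /* busy wait */
        shared_counter++;                         /* critical section */
        atomic_store(&flag[i], false);            /* exit section */
    }
    return NULL;
}

int main(void)
{
    int id[2] = {0, 1};
    pthread_t t[2];
    pthread_create(&t[0], NULL, proc, &id[0]);
    pthread_create(&t[1], NULL, proc, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("shared_counter = %d (expected 200000)\n", shared_counter);
    return 0;
}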

Synchronization Hardware:
As with other aspects of software, hardware features can make the programming task easier
and improve system efficiency. In this section, we present some simple hardware instructions
that are available on many systems, and show how they can be used effectively in solving the
critical-section problem.

The definition of the TestAndSet instruction.

boolean TestAndSet(boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}
The critical-section problem could be solved simply in a uniprocessor environment if we
could forbid interrupts from occurring while a shared variable is being modified. In this manner, we
could be sure that the current sequence of instructions would be allowed to execute in order
without preemption. No other instructions would be run, so no unexpected modifications
could be made to the shared variable.

Unfortunately, this solution is not feasible in a multiprocessor environment. Disabling
interrupts on a multiprocessor can be time-consuming, as the message is passed to all the
processors. This message passing delays entry into each critical section, and system efficiency
decreases. Also, consider the effect on a system's clock, if the clock is kept updated by
interrupts.

Many machines therefore provide special hardware instructions that allow us either to test and
modify the content of a word, or to swap the contents of two words, atomically, that is, as one
uninterruptible unit. We can use these special instructions to solve the critical-section problem
in a relatively simple manner. Rather than discussing one specific instruction for one specific
machine, let us abstract the main concepts behind these types of instructions.

The TestAndSet instruction can be defined as shown in the code above. The important characteristic is
that this instruction is executed atomically. Thus, if two TestAndSet instructions are
executed simultaneously (each on a different CPU), they will be executed sequentially in some
arbitrary order.

Mutual-exclusion implementation with TestAndSet

do {
    while (TestAndSet(lock))
        ;
        critical section
    lock = false;
        remainder section
} while (1);

The definition of the Swap instruction:

void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}

If the machine supports the TestAndSet instruction, then we can implement mutual
exclusion by declaring a Boolean variable lock, initialized to false.
If the machine supports the Swap instruction, then mutual exclusion can be
provided as follows. A global Boolean variable lock is declared and is initialized to
false. In addition, each process also has a local Boolean variable key.
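As an approximation of the TestAndSet idea on real hardware, the sketch below (not in the original notes) builds a spinlock from the C11 atomic_flag type, whose test-and-set operation is guaranteed atomic; the thread count and loop bound are illustrative.

/* Sketch: a TestAndSet-style spinlock using C11 atomic_flag. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked (false) */
static int shared = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock))
            ;                                  /* spin: while (TestAndSet(lock)); */
        shared++;                              /* critical section */
        atomic_flag_clear(&lock);              /* lock = false */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}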

Semaphores
The solutions to the critical-section problem presented before are not easy to generalize to
more complex problems. To overcome this difficulty, we can use a synchronization tool
called a semaphore. A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal. These operations were
originally termed P (for wait; from the Dutch proberen, to test) and V (for signal; from
verhogen, to increment).

The classical definition of wait in pseudo code is

wait(S)
{
    while (S <= 0)
        ;
    S--;
}

The classical definition of signal in pseudocode is

signal(S)
{
    S++;
}

Modifications to the integer value of the semaphore in the wait and signal operations must
be executed indivisibly. That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S <= 0), and its possible modification (S--), must also
be executed without interruption.

Usage
We can use semaphores to deal with the n-process critical-section problem. The n
processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. Each
process Pi is organized as shown below. We can also use semaphores to solve various
synchronization problems.
For example, consider two concurrently running processes: P1 with a statement S1 and P2
with a statement S2. Suppose that we require that S2 be executed only after S1 has completed.
We can implement this scheme readily by letting P1 and P2 share a common semaphore synch,
initialized to 0, and by inserting the statements

s1;
signal(synch);

in process P1, and the statements

wait(synch);
s2;

in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked
signal(synch), which is after S1.
do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

Mutual-exclusion implementation with semaphores.
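The sketch below (not in the original notes) mirrors this mutex pattern with a POSIX counting semaphore initialized to 1 and two threads; sem_wait and sem_post correspond to the wait and signal operations, and the loop bound and name worker are illustrative.

/* Sketch: mutual exclusion with a POSIX semaphore. Compile with: gcc -pthread sem_mutex.c */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                  /* semaphore initialized to 1 */
static int shared = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);            /* wait(mutex) */
        shared++;                    /* critical section */
        sem_post(&mutex);            /* signal(mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);          /* shared between threads, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&mutex);
    return 0;
}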

Implementation
The main disadvantage of the mutual-exclusion solutions and of the semaphore definition given here is
that they all require busy waiting. While a process is in its critical section, any other process that tries to
enter its critical section must loop continuously in the entry code. This continual looping is clearly a
problem in a real multiprogramming system, where a single CPU is shared among
many processes. Busy waiting wastes CPU cycles that some other process might be
able to use productively. This type of semaphore is also called a spinlock (because the
process "spins" while waiting for the lock). Spinlocks are useful in multiprocessor systems.
The advantage of a spinlock is that no context switch is required when a process must
wait on a lock, and a context switch may take considerable time. Thus, when locks are
expected to be held for short times, spinlocks are useful.
To overcome the need for busy waiting, we can modify the definition of the wait
and signal semaphore operations. When a process executes the wait operation and
finds that the semaphore value is not positive, it must wait. However, rather than busy
waiting, the process can block itself. The block operation places a process into a waiting
queue associated with the semaphore, and the state of the process is switched to the
waiting state. Then, control is transferred to the CPU scheduler, which selects another
process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some
other process executes a signal operation. The process is restarted by a wakeup operation,
which changes the process from the waiting state to the ready state. The process is then
placed in the ready queue. (The CPU may or may not be switched from the running
process to the newly ready process, depending on the CPU-scheduling algorithm.)
To implement semaphores under this definition, we define a semaphore as a "C" struct:

typedef struct {
    int value;
    struct process *L;
} semaphore;

Each semaphore has an integer value and a list of processes. When a process must wait on a
semaphore, it is added to the list of processes. A signal operation removes one process from
the list of waiting processes and awakens that process.
The wait semaphore operation can now be defined as

void wait(semaphore S)
{
    S.value--;
    if (S.value < 0)
    {
        add this process to S.L;
        block();
    }
}

The signal semaphore operation can now be defined as

void signal(semaphore S)
{
    S.value++;
    if (S.value <= 0)
    {
        remove a process P from S.L;
        wakeup(P);
    }
}

The block operation suspends the process that invokes it. The wakeup(P) operation
resumes the execution of a blocked process. These two operations are provided by the
operating system as basic system calls.

Binary Semaphores
The semaphore construct described in the previous sections is commonly known as a counting
semaphore, since its integer value can range over an unrestricted domain. A binary semaphore is
a semaphore with an integer value that can range only between 0 and 1. A binary semaphore
can be simpler to implement than a counting semaphore, depending on the underlying
hardware architecture. We will now show how a counting semaphore can be implemented
using binary semaphores. Let S be a counting semaphore.

To implement it in terms of binary semaphores we need the following data structures:

binary-semaphore S1,S2;
int C;

Initially S1 = 1, S2 = 0, and the value of integer C is set to the initial value of the counting
semaphore S. The wait operation on the counting semaphore S can be implemented as follows:

wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

The signal operation on the counting semaphore S can be implemented as follows:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);

Classic Problems of Synchronization


We present a number of different synchronization problems as examples for a large class of
concurrency-control problems. These problems are used for testing nearly every newly proposed
synchronization scheme. Semaphores are used for synchronization in our solutions.

The Bounded-Buffer Problem

The bounded-buffer problem is commonly used to illustrate the power of
synchronization primitives. We present here a general structure of this scheme, without
committing ourselves to any particular implementation. We assume that the pool consists of
n buffers, each capable of holding one item. The mutex semaphore provides mutual
exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full
semaphores count the number of empty and full buffers, respectively. The semaphore
empty is initialized to the value n; the semaphore full is initialized to the value 0.

The code for the producer process is

do {
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

The code for the consumer process is


do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
    ...
} while (1);
Note the symmetry between the producer and the consumer. We can interpret this code as the
producer producing full buffers for the consumer, or as the consumer producing empty buffers
for the producer.
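The following sketch (not in the original notes) realizes this scheme with two POSIX threads and POSIX semaphores; the buffer size N, the item counts, and the integer item type are illustrative choices.

/* Sketch: bounded buffer with pthreads and POSIX semaphores. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;     /* empty = N, full = 0, mutex = 1 */

static void *producer(void *arg)
{
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);            /* wait(empty) */
        sem_wait(&mutex);            /* wait(mutex) */
        buffer[in] = item;           /* add item to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&full);             /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int k = 0; k < 20; k++) {
        sem_wait(&full);             /* wait(full) */
        sem_wait(&mutex);            /* wait(mutex) */
        int item = buffer[out];      /* remove item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&empty);            /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}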

The Readers –Writers Problem


A data object (such as a file or record) is to be shared among several concurrent
processes. Some of these processes may want only to read the content of the shared object,
whereas others may want to update (that is, to read and write) the shared object. We
distinguish between these two types of processes by referring to those processes that are
interested in only reading as readers, and to the rest as writers. Obviously, if two readers
access the shared data object simultaneously, no adverse effects will result. However, if a
writer and some other process (either a reader or a writer) access the shared object
simultaneously, difficulties may arise.
To ensure that these difficulties do not arise, we require that the writers have
exclusive access to the shared object. This synchronization problem is referred to as the
readers-writers problem. Since it was originally stated, it has been used to test nearly every
new synchronization primitive. The readers-writers problem has several variations, all
involving priorities. The simplest one, referred to as the first readers-writers problem,
requires that no reader will be kept waiting unless a writer has already obtained permission
to use the shared object. In other words, no reader should wait for other readers to finish
simply because a writer is waiting. The second readers-writers problem requires that, once a
writer is ready, that writer performs its write as soon as possible. In other words, if a writer
is waiting to access the object, no new readers may start reading.
A solution to either problem may result in starvation. In the first case, writers may
starve; in the second case, readers may starve. For this reason, other variants of the problem
have been proposed. In this section, we present a solution to the first readers- writers
problem.

In the solution to the first readers-writers problem, the reader processes share the following
data structures:

semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0.


The semaphore wrt is common to both the reader and writer processes. The mutex
semaphore is used to ensure mutual exclusion when the variable readcount is updated. The
readcount variable keeps track of how many processes are currently reading the object.
The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also
used by the first or last reader that enters or exits the critical section. It is not used by
readers who enter or exit while other readers are in their critical sections.

The code for a writer process is


do{
wait(wrt);
...
Writing is performed
...
signal(wrt);
}while(1);

The code for a reader process is


do{
wait(mutex);
readcount++;
if(readcount==1)
wait(wrt) ;
signal(mutex);
...
reading is performed
...
wait (mutex) ;
readcount--;
if(readcount==0)
signal(wrt);
signal(mutex);
}while(1);

Note that, if a writer is in the critical section and n readers are waiting, then one reader is
queued on wrt, and n - 1 readers are queued on mutex. Also observe that, when a writer
executes signal(wrt), we may resume the execution of either the waiting readers or a single
waiting writer.
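The sketch below (not in the original notes) expresses this first readers-writers protocol with pthreads and POSIX semaphores; the thread counts and the bodies of the read and write sections are only illustrative.

/* Sketch: first readers-writers problem with POSIX semaphores. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex, wrt;             /* both initialized to 1 */
static int readcount = 0;
static int shared_data = 0;

static void *reader(void *arg)
{
    sem_wait(&mutex);
    readcount++;
    if (readcount == 1)
        sem_wait(&wrt);              /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader sees %d\n", shared_data);   /* reading is performed */

    sem_wait(&mutex);
    readcount--;
    if (readcount == 0)
        sem_post(&wrt);              /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

static void *writer(void *arg)
{
    sem_wait(&wrt);
    shared_data++;                   /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    return 0;
}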
The Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers share a
common circular table surrounded by five chairs, each belonging to one philosopher. In the
center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a
philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher
gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are
between her and her left and right neighbors). A philosopher may pick up only one chopstick at a
time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor. When a
hungry philosopher has both her chopsticks at the same time, she eats without releasing her
chopsticks. When she is finished eating, she puts down both of her chopsticks and starts
thinking again.

The structure of philosopher i is:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);
The dining-philosophers problem is considered a classic synchronization problem, neither
because of its practical importance nor because computer scientists dislike philosophers, but
because it is an example of a large class of concurrency-control problems. It is a simple
representation of the need to allocate several resources among several processes in a
deadlock-free and starvation-free manner.

One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab
a chopstick by executing a wait operation on that semaphore; she releases her chopsticks by
executing the signal operation on the appropriate semaphores. Thus, the shared data are

semaphore chopstick[5];

where all the elements of chopstick are initialized to 1.

Although this solution guarantees that no two neighbors are eating simultaneously,

it nevertheless must be rejected because it has the possibility of creating a deadlock. Suppose that
all five philosophers become hungry simultaneously, and each grabs her left chopstick. All
the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right
chopstick, she will be delayed forever.

Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and then
her right chopstick, whereas an even philosopher picks up her right chopstick and then her left
chopstick. Finally, any satisfactory solution to the dining-philosophers problem must guard
against the possibility that one of the philosophers will starve to death.
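The sketch below (not in the original notes) implements the asymmetric scheme just described with pthreads and POSIX semaphores; the number of eating rounds and the printed output are illustrative.

/* Sketch: deadlock-free dining philosophers (asymmetric chopstick order). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
static sem_t chopstick[N];           /* each initialized to 1 */

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = (i % 2) ? left : right;   /* odd: left first, even: right first */
    int second = (i % 2) ? right : left;

    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i);      /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}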

Monitors
Another high-level synchronization construct is the monitor type. A monitor is
characterized by a set of programmer-defined operators. The representation of a monitor
type consists of declarations of variables whose values define the state of an instance of the type,
as well as the bodies of procedures or functions that implement operations on the type. The
syntax of a monitor is shown below.
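A sketch of the general monitor form, in the usual C-like textbook pseudocode (an illustration, not taken verbatim from these notes):

monitor monitor_name
{
    /* shared variable declarations */

    procedure P1(...) { ... }
    procedure P2(...) { ... }
    ...
    procedure Pn(...) { ... }

    initialization_code(...) { ... }
}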

The representation of a monitor type cannot be used directly by the various processes. Thus, a
procedure defined within a monitor can access only those variables declared locally within the
monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed by
only the local procedures.

The monitor construct ensures that only one process at a time can be active within the monitor.
Consequently, the programmer does not need to code this synchronization constraint explicitly.
However, the monitor construct, as defined so far, is not sufficiently powerful for modeling
some synchronization schemes. For this purpose, we need to define additional synchronization
mechanisms. These mechanisms are provided by the condition construct. A
programmer who needs to write her own tailor-made synchronization scheme can define one or
more variables of type condition:

condition x, y;

The only operations that can be invoked on a condition variable are wait and signal. The
operation x.wait() means that the process invoking this operation is suspended until another
process invokes x.signal().

The x.signal operation resumes exactly one suspended process. If no process is suspended, then
the signal operation has no effect; that is, the state of x is as though the operation were never
executed. Contrast this operation with the signal operation associated with semaphores, which
always affects the state of the semaphore. Now suppose that, when the x.signal() operation is
invoked by a process P, there is a suspended process Q associated with condition x. Clearly, if
the suspended process Q is allowed to resume its execution, the signaling process P
must wait. Otherwise, both P and Q will be active simultaneously within the monitor. Note,
however, that conceptually both processes can continue with their execution.

Two possibilities exist:

1. P either waits until Q leaves the monitor, or waits for another condition.

2. Q either waits until P leaves the monitor, or waits for another condition.

There are reasonable arguments in favor of adopting either option 1 or option 2. Since P was
already executing in the monitor, choice 2 seems more reasonable. However, if we allow
process P to continue, the "logical" condition for which Q was waiting may no longer hold by the
time Q is resumed. Choice 1 was advocated by Hoare, mainly because the preceding argument
in favor of it translates directly to simpler and more elegant proof rules. A compromise between
these two choices was adopted in the language Concurrent Pascal. When process P executes the
signal operation, process Q is immediately resumed. This model is less powerful than Hoare's,
because a process cannot signal more than once during a single procedure call.
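As a practical analogue, the sketch below (not in the original notes) uses a pthreads condition variable and mutex to play the roles of x.wait()/x.signal() and the monitor's mutual exclusion; the flag ready and the thread names are illustrative.

/* Sketch: condition-variable wait and signal in the pthreads API. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x    = PTHREAD_COND_INITIALIZER;
static bool ready = false;            /* the "logical" condition */

static void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);        /* enter the monitor */
    while (!ready)                    /* re-check the condition after waking */
        pthread_cond_wait(&x, &lock); /* x.wait(): release lock and suspend */
    printf("condition holds, proceeding\n");
    pthread_mutex_unlock(&lock);      /* leave the monitor */
    return NULL;
}

static void *signaler(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&x);          /* x.signal(): resume one suspended waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tw, ts;
    pthread_create(&tw, NULL, waiter, NULL);
    pthread_create(&ts, NULL, signaler, NULL);
    pthread_join(tw, NULL);
    pthread_join(ts, NULL);
    return 0;
}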
