Module 3
An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Though one might think that processes running independently will execute very efficiently, in reality there are many situations where their co-operative nature can be exploited to increase computational speed, convenience, and modularity.
Inter-process communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions. The communication between these processes
can be seen as a method of co-operation between them.
Processes can communicate with each other through both:
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.
An operating system can implement both methods of communication. First, we will discuss
the shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it.
One way of communication using shared memory can be imagined like this: Suppose
process1 and process2 are executing simultaneously, and they share some resources or use
some information from another process. Process1 generates information about certain
computations or resources being used and keeps it as a record in shared memory.
When process2 needs to use the shared information, it will check in the record stored in
shared memory and take note of the information generated by process1 and act accordingly.
Processes can use shared memory for extracting information as a record from another
process as well as for delivering any specific information to other processes.
Let’s discuss an example of communication between processes using the shared memory
method.
There are two processes: Producer and Consumer. The Producer produces items and the Consumer consumes them. The two processes share a common space or memory location known as a buffer, where the items produced by the Producer are stored and from which the Consumer consumes them if needed.
There are two versions of this problem: the first is known as the unbounded buffer problem, in which the Producer can keep producing items and there is no limit on the size of the buffer; the second is known as the bounded buffer problem, in which the Producer can produce up to a certain number of items before it starts waiting for the Consumer to consume them.
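The bounded buffer variant can be sketched in Python with two threads and a circular buffer. The names `buffer`, `inp`, and `out`, and the one-slot-empty full test, are our own choices for this sketch, not part of any standard API:

```python
import threading

N = 4                     # buffer capacity
ITEMS = 10                # how many items to produce
buffer = [None] * N       # the shared "memory" region
inp = 0                   # next free slot (written only by the producer)
out = 0                   # next filled slot (written only by the consumer)
consumed = []

def producer():
    global inp
    for item in range(ITEMS):
        while (inp + 1) % N == out:   # buffer full: wait for the consumer
            pass
        buffer[inp] = item
        inp = (inp + 1) % N

def consumer():
    global out
    for _ in range(ITEMS):
        while inp == out:             # buffer empty: wait for the producer
            pass
        consumed.append(buffer[out])
        out = (out + 1) % N

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Keeping one slot empty lets `inp == out` unambiguously mean "buffer empty"; real implementations would replace the busy-wait loops with semaphores or condition variables.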
Now we will start our discussion of communication between processes via message passing. In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows: first they establish a communication link (if a link already exists, it need not be established again), and then they exchange messages using the basic primitives send and receive.
A link has a capacity that determines the number of messages that can reside in it temporarily. For this, every link has a queue associated with it, which can have zero capacity, bounded capacity, or unbounded capacity.
– Zero Capacity- this queue cannot keep any message waiting in it. Thus it
has maximum length 0. For this, a sending process must be blocked until the
receiving process receives the message. It is also known as no buffering.
– Bounded Capacity- this queue has finite length n. Thus it can have n
messages waiting in it. If the queue is not full, a new message can be placed in
the queue, and the sending process is not blocked. It is also known as
automatic buffering.
– Unbounded Capacity- this queue has infinite length. Thus any number of
messages can wait in it. In such a system, a sending process is never blocked.
Blocking is considered synchronous: a blocking send means the sender is blocked until the message is received by the receiver, and a blocking receive means the receiver blocks until a message is available. Non-blocking is considered asynchronous: a non-blocking send lets the sender send the message and continue, and a non-blocking receive lets the receiver retrieve either a valid message or null.
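These combinations can be demonstrated with Python's `queue.Queue`, which offers blocking and non-blocking puts and gets on a bounded or unbounded queue (the standard library has no zero-capacity rendezvous queue, so that case is not shown; the names here are illustrative only):

```python
import queue

mbox = queue.Queue(maxsize=2)     # bounded capacity: at most 2 messages

# Non-blocking send: put_nowait raises queue.Full instead of waiting.
mbox.put_nowait("m1")
mbox.put_nowait("m2")
try:
    mbox.put_nowait("m3")         # link is full, so this fails immediately
    overflow = False
except queue.Full:
    overflow = True

# Blocking receive: get() waits until a message is available.
first = mbox.get()
second = mbox.get()

# Non-blocking receive: get_nowait raises queue.Empty when no message waits.
try:
    mbox.get_nowait()
    empty = False
except queue.Empty:
    empty = True
```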
In Direct message passing, the process which wants to communicate must explicitly name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means to receive the message from p2.
Symmetry and asymmetry between sending and receiving can also be implemented: either both processes name each other for sending and receiving the messages, or only the sender names the receiver for sending the message and the receiver need not name the sender for receiving it. The problem with this method of communication is that if the name of one process changes, the method will not work.
In Indirect message passing, processes use mailboxes (also referred to as ports) for sending and receiving messages. Each mailbox has a unique id, and processes can communicate only if they share a mailbox. A link is established only if the processes share a common mailbox, and a single link can be associated with many processes. Each pair of processes can share several communication links, and these links may be unidirectional or bi-directional.
Suppose two processes want to communicate through Indirect message passing, the
required operations are: create a mailbox, use this mailbox for sending and receiving
messages, then destroy the mailbox. The standard primitives used are: send(A, message), which means send the message to mailbox A. The primitive for receiving a message works in the same way, e.g., receive(A, message).
There is a problem with this mailbox implementation. Suppose more than two processes share the same mailbox and process p1 sends a message to the mailbox; which process will be the receiver? This can be solved by enforcing that only two processes can share a single mailbox, by enforcing that only one process is allowed to execute a receive at a given time, or by selecting any process randomly and notifying the sender about the receiver.
A mailbox can be made private to a single sender/receiver pair and can also be shared
between multiple sender/receiver pairs.
Now, let’s discuss the Producer-Consumer problem using the message passing concept. The Producer places items (inside messages) in the mailbox, and the Consumer can consume an item when at least one message is present in the mailbox.
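A minimal sketch of this mailbox-based Producer-Consumer in Python; the registry dict and the create_mailbox/send/receive helpers are our own toy stand-ins for OS primitives, and the None sentinel is just one way to signal the end of production:

```python
import queue
import threading

mailboxes = {}    # mailbox id -> message queue (a toy registry, not an OS API)

def create_mailbox(name, capacity=0):
    mailboxes[name] = queue.Queue(maxsize=capacity)   # 0 means unbounded

def send(name, message):
    mailboxes[name].put(message)    # blocks if the mailbox is full

def receive(name):
    return mailboxes[name].get()    # blocks until a message is present

create_mailbox("A")
results = []

def producer():
    for item in range(5):
        send("A", item)             # items travel inside messages
    send("A", None)                 # sentinel: nothing more will be produced

def consumer():
    while (item := receive("A")) is not None:
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```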
Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading
to increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall
system performance.
3. Allows for the creation of distributed systems that can span multiple computers
or networks.
4. Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.
Critical Section
When more than one process tries to access the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.
Overhead: When using critical sections, threads or processes must acquire and release
locks or semaphores, which can take time and resources. This may reduce the program’s
overall performance.
Semaphores are just normal variables used to coordinate the activities of multiple processes
in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and
implement synchronization between processes.
A semaphore is a compound data type with two fields: a non-negative integer S.V (the semaphore value) and a set of processes waiting in a queue, S.L (the semaphore list). It is used to solve critical section problems by means of two atomic operations, wait and signal, which are used for process synchronization.
Initialization of Semaphore
Semaphore S must be initialized with a value of S.V. > 0 and with empty S.L.
Atomic Operations in Semaphore
Wait and signaling operations on semaphores
These two operations belong to the semaphore: the wait operation is executed on entry into the critical section, and the signal operation is executed on exit from it. The wait and signal operations on the semaphore are just the “P” and “V” functions.
1. Wait Operation
This operation, also known as the “P”, sleep, decrement, or down operation, is the semaphore operation that controls the entry of a process into a critical section. If the semaphore value is positive, the process can enter the critical section, and on entry the semaphore value is decreased by 1.
2. Signal Operation
The “V” function, also known as the wakeup, increment, or up operation, is the same as the signal function. When the process leaves the critical section, the semaphore value is updated to allow new processes to enter the critical section (the shared variables or resources). When a process enters the critical section, the wait operation decrements the semaphore value by one; i.e., if the value of the semaphore was initially 1 (S = 1), it is decremented to 0 once the process enters the critical section. When the process finishes its execution in the critical section and leaves it, the signal function is executed and increases the semaphore value by 1. Note that this operation is executed only after the process has finished the critical section.
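The wait and signal operations can be sketched as a tiny Python class; the Condition object here stands in for the S.L process list, and the class is a teaching sketch, not a replacement for the standard threading.Semaphore:

```python
import threading

class TeachingSemaphore:
    """A semaphore as a value (S.V) plus a queue of waiting processes (S.L)."""
    def __init__(self, value=1):
        self.value = value                   # S.V: non-negative integer
        self.cond = threading.Condition()    # stands in for the S.L wait list

    def wait(self):     # the "P" / down operation
        with self.cond:
            while self.value == 0:
                self.cond.wait()             # caller joins S.L and sleeps
            self.value -= 1

    def signal(self):   # the "V" / up operation
        with self.cond:
            self.value += 1
            self.cond.notify()               # wake one process from S.L

mutex = TeachingSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(10000):
        mutex.wait()          # entry section
        counter += 1          # critical section
        mutex.signal()        # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every increment happens between a wait and a signal, the four workers never interleave inside the critical section and the final count is exact.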
Types of Semaphores
Binary Semaphore
A binary semaphore is a binary (0 or 1) flag that can be enabled or disabled. If a binary
semaphore is used as the mutual exclusion mechanism, only the allocated resources will be
affected by the mutual exclusion.
Counting Semaphore
Conceptually, a counting semaphore is a non-negative integer. Such semaphores are typically used to coordinate access to a pool of resources, and the semaphore's value is initialized to the number of free resources. The value is then incremented atomically when a resource is added back to the pool and decremented atomically when a resource is taken from it.
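A counting semaphore in action, using Python's threading.Semaphore initialized to the number of free resources; the peak counter is our own addition, used only to record how many threads ever held a resource at once:

```python
import threading
import time

FREE_RESOURCES = 3
pool = threading.Semaphore(FREE_RESOURCES)  # value = number of free resources
state = threading.Lock()
active = 0     # resources currently in use
peak = 0       # highest number in use at any one time

def use_resource():
    global active, peak
    with pool:                 # wait(): blocks while all resources are taken
        with state:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)       # hold the resource briefly
        with state:
            active -= 1
    # leaving the `with pool` block is signal(): the resource is returned

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

peak never exceeds FREE_RESOURCES, because a thread can get past wait() only while at least one resource is free.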
A race condition is a potential scenario that might occur within a critical section. It arises when multiple threads execute in a critical section and the result depends on the order in which the threads are scheduled. Race conditions can be avoided by treating the critical section as an atomic instruction, or by properly synchronizing the threads using locks or atomic variables.
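The lock-based fix can be sketched in Python. Without the `with lock:` line, the read-modify-write hidden inside `safe_counter += 1` could interleave between threads and lose updates; the thread and increment counts here are illustrative:

```python
import threading

N_THREADS = 4
N_INCREMENTS = 25000
lock = threading.Lock()
safe_counter = 0

def safe_worker():
    global safe_counter
    for _ in range(N_INCREMENTS):
        with lock:                 # makes the read-modify-write atomic
            safe_counter += 1      # critical section

threads = [threading.Thread(target=safe_worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```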
The Turn Variable or Strict Alternation approach is a software mechanism implemented in user mode. It is a busy-waiting solution that works only for two processes. In general, let the two processes be Pi and Pj; they share a variable called the turn variable, which acts as a lock.
The actual problem with the lock variable approach was the fact that a process entered the critical section only when the lock variable had a particular value. More than one process could observe that value at the same time, hence mutual exclusion was not guaranteed.
This problem is addressed in the turn variable approach. Now a process can enter the critical section only when the value of the turn variable equals the PID of the process. There are only two possible values for the turn variable, i or j; if its value is not i, then it will definitely be j, and vice versa. In the entry section, the process Pi will not enter the critical section until the value of turn is i, and the process Pj will not enter until the value is j.
Initially, the two processes Pi and Pj both want to enter the critical section. The turn variable is equal to i, hence Pi gets the chance to enter the critical section. The value of turn remains i until Pi finishes its critical section. Pi then finishes its critical section and assigns j to the turn variable, so Pj gets the chance to enter the critical section. The value of turn remains j until Pj finishes its critical section.
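The scenario above can be sketched with two Python threads sharing a turn variable; the string ids "i" and "j" stand in for PIDs:

```python
import threading

turn = "i"     # whose turn it is to enter the critical section
trace = []     # the order in which the processes entered

def process(my_id, other_id, rounds=3):
    global turn
    for _ in range(rounds):
        while turn != my_id:    # entry section: busy-wait for our turn
            pass
        trace.append(my_id)     # critical section
        turn = other_id         # exit section: hand the turn to the other

pi = threading.Thread(target=process, args=("i", "j"))
pj = threading.Thread(target=process, args=("j", "i"))
pi.start(); pj.start()
pi.join(); pj.join()
```

The trace strictly alternates i, j, i, j, ..., which illustrates both the mutual exclusion and the strict-alternation behaviour of this approach.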
Conclusion
The turn variable is used to overcome the limitations of the lock variable. It is implemented in user mode and synchronizes two processes. The turn variable ensures mutual exclusion, portability, and bounded waiting, and it is free from deadlock. Its main drawback is strict alternation: a process cannot enter the critical section twice in a row even if the other process is not interested, so the progress requirement is not satisfied.
Let's understand with an example. If two or more readers want to access the file at the same point in time, there is no problem. However, in other situations, such as when two writers, or one reader and one writer, want to access the file at the same time, problems may occur. Hence the task is to design the code so that if one reader is reading, no writer is allowed to update at the same time; if one writer is writing, no reader is allowed to read the file; and if one writer is updating a file, no other writer is allowed to update it at the same time. Multiple readers, however, can access the object simultaneously.
Let us understand the possibility of reading and writing with the table given below:
TABLE 1
Case                Process 1   Process 2   Allowed?
Reading–Reading     reader      reader      Yes
Reading–Writing     reader      writer      No
Writing–Reading     writer      reader      No
Writing–Writing     writer      writer      No
The solution of readers and writers can be implemented using binary semaphores.
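A sketch of the classic first readers-writers solution with two binary semaphores in Python: wrt gives exclusive access to a writer (or to the group of readers as a whole), while mutex protects read_count; the names follow the usual textbook presentation:

```python
import threading

wrt = threading.Semaphore(1)     # held by a writer, or by the reader group
mutex = threading.Semaphore(1)   # protects read_count
read_count = 0
log = []

def reader(rid):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:          # first reader locks writers out
        wrt.acquire()
    mutex.release()

    log.append(("read", rid))    # reading: many readers may be here at once

    mutex.acquire()
    read_count -= 1
    if read_count == 0:          # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(wid):
    wrt.acquire()                # writing requires exclusive access
    log.append(("write", wid))
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This "first" variant favours readers: as long as at least one reader holds wrt on the group's behalf, arriving writers wait, which can starve them.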
The dining philosophers problem is another classic synchronization problem which is used to
evaluate situations where there is a need of allocating multiple resources to multiple
processes.
Consider there are five philosophers sitting around a circular dining table. The dining table
has five chopsticks and a bowl of rice in the middle as shown in the below figure.
Dining Philosophers Problem
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks: one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original places.
From the problem statement, it is clear that a philosopher can think for an indefinite amount
of time. But when a philosopher starts eating, he has to stop at some point of time. The
philosopher is in an endless cycle of thinking and eating.
When a philosopher wants to eat the rice, he will wait for the chopstick at his left and picks
up that chopstick. Then he waits for the right chopstick to be available, and then picks it too.
After eating, he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, then a deadlock occurs, because each will wait forever for the other chopstick. The possible solutions for this are:
A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all the four philosophers
pick up four chopsticks, there will be one chopstick left on the table. So, one
philosopher can start eating and eventually, two chopsticks will be available. In this
way, deadlocks can be avoided.
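Besides the two fixes above, another classic fix is to impose a global order on the chopsticks, which breaks the circular-wait condition so a deadlock can never form. A sketch in Python, with threading.Lock standing in for a chopstick:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]   # one lock per chopstick
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Always pick the lower-numbered chopstick first. This global ordering
    # breaks the circular-wait condition, so deadlock cannot occur.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1       # eating
        # thinking happens between rounds, with both chopsticks put down

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Only the last philosopher's acquisition order differs from the naive left-then-right scheme, yet that single change is enough to make the all-pick-one deadlock impossible.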