Inter Process Communication

What is Inter Process Communication?

A process can be of two types:

 Independent process.
 Co-operating process.

Inter Process Communication (IPC) is a mechanism provided by the operating system (OS). Its main goal is to allow several processes to communicate with one another; in short, it lets one process notify another that some event has occurred.

Let us now look at the general definition of inter-process communication, which formalizes the idea discussed above.

Definition
"Inter-process communication is used for exchanging useful information
between numerous threads in one or more processes (or programs)."

To understand inter-process communication, consider the following diagram, which illustrates its importance.

Role of Synchronization in Inter Process Communication

Synchronization is one of the essential parts of inter-process communication. Typically, it is provided by the IPC control mechanisms, but sometimes it is managed by the communicating processes themselves.

The following methods are used to provide synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-
Mutual exclusion requires that only one process or thread can enter the critical section at a time. This supports synchronization and creates a stable state that avoids race conditions.
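As a concrete sketch (not part of the original discussion), mutual exclusion can be provided by a POSIX mutex in C; the worker and counter names here are illustrative:

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter the critical section */
        counter++;                   /* only one thread runs this at a time */
        pthread_mutex_unlock(&lock); /* leave the critical section */
    }
    return NULL;
}

/* Runs two threads that both update the shared counter; with the
   mutex in place no increment is lost. */
long run_counters(void) {
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Without the lock/unlock pair, the two threads would race on the shared counter and increments would be lost.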

Semaphore:-

A semaphore is a type of variable that controls access to shared resources by several processes. Semaphores are further divided into two types, as follows:

1. Binary Semaphore
2. Counting Semaphore

Semaphores are integer variables used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization.

The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process keeps waiting until S becomes positive, and only then decrements it.
wait(S)
{
    while (S <= 0)
        ; // busy-wait

    S--;
}
 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −

 Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When a resource is added, the count is incremented; when a resource is removed, the count is decremented.
 Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1 (setting it to 0), and the signal operation sets it back to 1. Binary semaphores are sometimes easier to implement than counting semaphores.
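On POSIX systems, counting semaphores of this kind are available as sem_t. The following minimal sketch (assuming unnamed semaphores are supported, as on Linux) shows the count moving under wait (sem_wait) and signal (sem_post); the initial count of 3 is arbitrary:

```c
#include <semaphore.h>

/* Initialize a counting semaphore with 3 available resources,
   acquire two, release one, and report the remaining count. */
int semaphore_count_demo(void) {
    sem_t s;
    int value = -1;
    sem_init(&s, 0, 3); /* 0: shared between threads, initial count 3 */
    sem_wait(&s);       /* wait: count 3 -> 2 */
    sem_wait(&s);       /* wait: count 2 -> 1 */
    sem_post(&s);       /* signal: count 1 -> 2 */
    sem_getvalue(&s, &value);
    sem_destroy(&s);
    return value;       /* 2 resources still available */
}
```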

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly and are more efficient than many other synchronization methods.
 When semaphores block waiting processes instead of busy waiting, no processor time is wasted repeatedly checking whether a condition is fulfilled before a process may enter the critical section.
 Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes access it later.

Barrier:-

A barrier does not allow an individual process to proceed until all participating processes reach it. Many parallel languages use barriers, and collective routines impose them.
Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock stays in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it performs no useful work (or task).
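A minimal spinlock can be sketched with a C11 atomic flag; the busy-wait loop below is exactly the "stays in a loop" behaviour described above (the function names are illustrative):

```c
#include <stdatomic.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;
static int shared_value = 0;

void spin_lock(void) {
    /* test_and_set returns the previous value; keep spinning
       (busy waiting) while someone else holds the lock. */
    while (atomic_flag_test_and_set(&flag))
        ; /* spin */
}

void spin_unlock(void) {
    atomic_flag_clear(&flag); /* release so another thread can enter */
}

int spinlock_demo(void) {
    spin_lock();
    shared_value = 42; /* critical section */
    spin_unlock();
    return shared_value;
}
```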

Approaches to Interprocess Communication

These are a few different approaches to inter-process communication:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a data channel that is unidirectional in nature, meaning data can move through it in only a single direction at a time. Still, one can use two channels of this type to send and receive data between two processes. Typically, a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of Windows operating systems as well.
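On POSIX systems the pipe described above corresponds to the pipe() system call. This sketch (the message contents are arbitrary) shows the unidirectional channel between a parent and a forked child:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent writes "hello" into the pipe; the child reads it back and
   exits with 0 if the bytes match. Returns the child's exit status. */
int pipe_demo(void) {
    int fd[2];
    if (pipe(fd) == -1)          /* fd[0] = read end, fd[1] = write end */
        return -1;
    pid_t pid = fork();
    if (pid == 0) {              /* child: the reader */
        char buf[16] = {0};
        close(fd[1]);            /* close the unused write end */
        read(fd[0], buf, sizeof buf - 1);
        close(fd[0]);
        _exit(strcmp(buf, "hello") == 0 ? 0 : 1);
    }
    close(fd[0]);                /* parent: the writer */
    write(fd[1], "hello", 5);
    close(fd[1]);
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Two such pipes, one per direction, give the two-channel setup mentioned above.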

Shared Memory:-

Shared memory is memory that can be accessed by multiple processes simultaneously. It is primarily used so that the processes can communicate with each other. Shared memory is supported by almost all POSIX and Windows operating systems.

i) Shared Memory Method

Ex: Producer-Consumer problem

There are two processes: Producer and Consumer. The Producer produces items and the Consumer consumes them. The two processes share a common space or memory location known as a buffer, where the item produced by the Producer is stored and from which the Consumer takes it when needed. There are two versions of this problem: the first is the unbounded buffer problem, in which the Producer can keep producing items with no limit on the size of the buffer; the second is the bounded buffer problem, in which the Producer can produce only up to a certain number of items before it starts waiting for the Consumer to consume them. We will discuss the bounded buffer problem. First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the number of produced items equals the size of the buffer, the Producer waits for the Consumer to consume some of them. Similarly, the Consumer first checks for the availability of an item. If no item is available, the Consumer waits for the Producer to produce one. If items are available, the Consumer consumes them.

Message Queue:-

In general, several different processes can write messages to and read messages from a message queue. Messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is used by all operating systems.
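On Linux, this behaviour is exposed through the POSIX message queue API (mq_open and friends); the queue name and message text below are illustrative:

```c
#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

/* Send one message into a named queue and read it back; the message
   stays stored in the queue until the receiver retrieves it. */
int message_queue_demo(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 4,
                            .mq_msgsize = 32, .mq_curmsgs = 0 };
    mqd_t q = mq_open("/ipc_demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;
    mq_send(q, "ping", 5, 0);          /* enqueue, priority 0 */
    char buf[32] = {0};
    mq_receive(q, buf, sizeof buf, NULL);
    mq_close(q);
    mq_unlink("/ipc_demo_queue");      /* remove the queue name */
    return strcmp(buf, "ping");        /* 0 if the round trip worked */
}
```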

To understand the concepts of message queues and shared memory in more detail, let's take a look at the diagram given below:
Message Passing:-

Message passing is a mechanism that allows processes to synchronize and communicate with each other. Using message passing, processes can communicate without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations, as follows:

o send (message)
o receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication, a link is established between two communicating processes. However, for each pair of communicating processes, only one link can exist.

Indirect Communication

Indirect communication can be established only when processes share a common mailbox, and each pair of these processes may share multiple communication links. These shared links can be unidirectional or bi-directional.

FIFO:-

A FIFO is a form of general communication between two unrelated processes. It can also be considered full-duplex, meaning that one process can communicate with another process and vice versa.
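On POSIX systems a FIFO is created with mkfifo(); because it lives in the filesystem, two unrelated processes can find it by name. In this sketch fork() merely stands in for the two unrelated processes, and the path is illustrative:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int fifo_demo(void) {
    const char *path = "/tmp/ipc_demo_fifo";
    unlink(path);                      /* remove any stale FIFO */
    if (mkfifo(path, 0600) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                    /* writer process */
        int fd = open(path, O_WRONLY); /* blocks until a reader opens */
        write(fd, "fifo", 4);
        close(fd);
        _exit(0);
    }
    char buf[8] = {0};                 /* reader process */
    int fd = open(path, O_RDONLY);     /* blocks until the writer opens */
    read(fd, buf, sizeof buf - 1);
    close(fd);
    waitpid(pid, NULL, 0);
    unlink(path);
    return strcmp(buf, "fifo");        /* 0 if the bytes arrived intact */
}
```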

Some other different approaches

o Socket:-

A socket acts as an endpoint for sending or receiving data over a network. It works both for data sent between processes on the same computer and for data sent between different computers on the same network. Hence, it is used by several types of operating systems.
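For two processes on the same machine, socketpair() is the simplest way to obtain such a pair of endpoints; the two descriptors are connected to each other and the channel is full duplex (the payload below is arbitrary):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write on one endpoint of a connected UNIX-domain socket pair and
   read the same bytes back from the other endpoint. */
int socket_demo(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;
    write(sv[0], "sock", 4);          /* send through endpoint 0 */
    char buf[8] = {0};
    read(sv[1], buf, sizeof buf - 1); /* receive at endpoint 1 */
    close(sv[0]);
    close(sv[1]);
    return strcmp(buf, "sock");       /* 0 on a successful round trip */
}
```

In a forked setup, each process would keep one of the two descriptors; for communication across machines, socket(), bind(), and connect() would replace socketpair().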

o File:-

A file is a data record or document stored on disk that can be retrieved on demand by the file server. Importantly, several processes can access the same file as needed.

o Signal:-

As the name implies, signals are used in inter-process communication in a minimal way. They are system messages sent from one process to another. Therefore, they are generally used not for transferring data but for sending remote commands between processes.

Why do we need interprocess communication?

There are numerous reasons to use inter-process communication for sharing data. Some of the most important are given below:

o Modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes communicate with each other and synchronize their actions.
Why Inter Process Communication (IPC) needed?

Inter Process Communication in OS is needed because:

 Resource Sharing: IPC enables multiple processes to share resources, such as memory and file systems, allowing for better resource utilization and increased system performance.
 Coordination and Synchronization: IPC provides a way for processes to
coordinate their activities and synchronize access to shared resources,
ensuring that the system operates in a safe and controlled manner.
 Communication: IPC enables processes to communicate with each other,
allowing for the exchange of data and information between processes.
 Modularity: IPC enables the development of modular software, where
processes can be developed and executed independently, and then combined
to form a larger system.
 Flexibility: IPC allows processes to run on different hosts or nodes in a
network, providing greater flexibility and scalability in large and complex
systems.

Overall, IPC is essential for building complex and scalable systems in operating systems, as it enables processes to coordinate their activities, share resources, and communicate with each other in a safe and controlled manner.

A process can be of two types:

 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes
while a co-operating process can be affected by other executing processes.
Though one can think that those processes, which are running
independently, will execute very efficiently, in reality, there are many
situations when co-operative nature can be utilized for increasing
computational speed, convenience, and modularity. Inter-process
communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions. The communication between
these processes can be seen as a method of co-operation between them.
Processes can communicate with each other through both:
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes
via the shared memory method and via the message passing method.

An operating system can implement both methods of communication. First, we will discuss the shared memory method of communication and then message passing. Communication between processes using shared memory
requires processes to share some variable, and it completely depends on
how the programmer will implement it. One way of communication using
shared memory can be imagined like this: Suppose process1 and process2
are executing simultaneously, and they share some resources or use some
information from another process. Process1 generates information about
certain computations or resources being used and keeps it as a record in
shared memory. When process2 needs to use the shared information, it will
check in the record stored in shared memory and take note of the
information generated by process1 and act accordingly. Processes can use
shared memory for extracting information as a record from another process
as well as for delivering any specific information to other processes.
Let’s discuss an example of communication between processes using the
shared memory method.
#define buff_max 25
#define mod %

struct item {
    // different members of the produced
    // or consumed data
    // ---------
};

// An array is needed for holding the items.
// This is the shared buffer that will be
// accessed by both processes:
// item shared_buff[buff_max];

// Two variables keep track of the indexes of the
// items produced and consumed. free_index points
// to the next free slot; full_index points to the
// first full slot.
int free_index = 0;
int full_index = 0;

Producer Process Code

 C

item nextProduced;

while (1) {

    // If there is no space for
    // production, keep waiting.
    while (((free_index + 1) mod buff_max) == full_index);

    shared_buff[free_index] = nextProduced;

    free_index = (free_index + 1) mod buff_max;
}

Consumer Process Code


 C

item nextConsumed;

while (1) {

    // If no item is available for
    // consumption, keep waiting for
    // one to be produced.
    while (free_index == full_index);

    nextConsumed = shared_buff[full_index];

    full_index = (full_index + 1) mod buff_max;
}

In the above code, the Producer waits while ((free_index + 1) mod buff_max) equals full_index, which means the buffer is full; it resumes producing once the Consumer frees a slot. Similarly, if free_index and full_index point to the same index, the buffer is empty and there are no items to consume.

Overall C++ Implementation:

 C++
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

#define buff_max 25

struct item {
    // different members of the produced
    // or consumed data
    // ---------
};

// An array is needed for holding the items.
// This is the shared buffer accessed by both threads:
// item shared_buff[buff_max];

// Two variables keep track of the indexes of the items
// produced and consumed. free_index points to the next
// free slot; full_index points to the first full slot.
std::atomic<int> free_index(0);
std::atomic<int> full_index(0);

std::mutex mtx;

void producer() {
    item new_item;
    while (true) {
        // Produce the item
        // ...
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

        // Buffer is full, wait for the consumer
        while (((free_index + 1) % buff_max) == full_index) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }

        mtx.lock();
        // Add the item to the buffer
        // shared_buff[free_index] = new_item;
        free_index = (free_index + 1) % buff_max;
        mtx.unlock();
    }
}

void consumer() {
    item consumed_item;
    while (true) {
        // Buffer is empty, wait for the producer
        while (free_index == full_index) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }

        mtx.lock();
        // Take the item from the buffer
        // consumed_item = shared_buff[full_index];
        full_index = (full_index + 1) % buff_max;
        mtx.unlock();

        // Consume the item
        // ...
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main() {
    // Create producer and consumer threads
    std::vector<std::thread> threads;
    threads.emplace_back(producer);
    threads.emplace_back(consumer);

    // Wait for threads to finish
    for (auto& thread : threads) {
        thread.join();
    }
    return 0;
}
Note that the atomic class is used to make sure that the shared variables
free_index and full_index are updated atomically. The mutex is used to
protect the critical section where the shared buffer is accessed. The
sleep_for function is used to simulate the production and consumption of
items.
ii) Message Passing Method
Now, we will discuss communication between processes via message passing. In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:

 Establish a communication link (if a link already exists, there is no need to establish it again).
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated for the programmer, whereas a variable size is easy for the programmer but complicated for the OS designer. A standard message has two parts: a header and a body. The header stores the message type, destination id, source id, message length, and control information. The control information covers things like what to do if the sender runs out of buffer space, the sequence number, and the priority. Generally, messages are sent in FIFO order.

Message Passing through Communication Link

Direct and Indirect Communication links
Now, we will discuss the methods of implementing communication links. While implementing a link, some questions need to be kept in mind:

1. How are links established?
2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable?
5. Is a link unidirectional or bi-directional?

A link has some capacity that determines the number of messages that can reside in it temporarily, for which every link has an associated queue of zero capacity, bounded capacity, or unbounded capacity. With zero capacity, the sender waits until the receiver informs it that the message has been received. In non-zero capacity cases, the sender does not know whether a message has been received after the send operation; for this, it must communicate with the receiver explicitly. Implementation of the link depends on the situation; it can be either a direct communication link or an indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for the communication, but it is hard to identify the sender ahead of time. An example is a print server.
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender puts messages in the mailbox and the receiver picks them up.

Message Passing through Exchanging the Messages

Synchronous and Asynchronous Message Passing:
A blocked process is one that is waiting for some event, such as a resource becoming available or the completion of an I/O operation. IPC is possible between processes on the same computer as well as between processes running on different computers, i.e., in a networked/distributed system. In both cases, a process may or may not be blocked while sending or receiving a message, so message passing may be blocking or non-blocking. Blocking is considered synchronous: a blocking send blocks the sender until the message is received by the receiver, and a blocking receive blocks the receiver until a message is available. Non-blocking is considered asynchronous: a non-blocking send lets the sender send the message and continue, and a non-blocking receive returns either a valid message or null. On analysis, it is more natural for a sender to be non-blocking after sending, since it may need to send messages to several processes; the sender does, however, expect an acknowledgment from the receiver in case a send fails. Similarly, it is more natural for a receiver to block after issuing the receive, as the information in the received message may be needed for further execution. At the same time, if sends keep failing, the receiver would have to wait indefinitely, which is why the other combinations are also considered. There are basically three preferred combinations:

 Blocking send and blocking receive
 Non-blocking send and non-blocking receive
 Non-blocking send and blocking receive (most used)
In direct message passing, the process that wants to communicate must explicitly name the recipient or sender of the communication, e.g. send(p1, message) means send the message to p1, and receive(p2, message) means receive the message from p2. In this method, the communication link is established automatically. It can be unidirectional or bidirectional, but only one link can exist between a given pair of sender and receiver. Symmetry or asymmetry between sending and receiving can also be implemented: either both processes name each other for sending and receiving messages, or only the sender names the receiver and the receiver need not name the sender. The problem with this method is that if the name of one process changes, it no longer works.
In indirect message passing, processes use mailboxes (also referred to as ports) for sending and receiving messages. Each mailbox has a unique id, and processes can communicate only if they share a mailbox. A link is established only if processes share a common mailbox; a single link can be associated with many processes, and each pair of processes can share several communication links, which may be unidirectional or bi-directional. Suppose two processes want to communicate through indirect message passing; the required operations are: create a mailbox, use this mailbox for sending and receiving messages, then destroy the mailbox. The standard primitives used are send(A, message), which sends the message to mailbox A, and receive(A, message), which works the same way for receiving. There is a problem with this mailbox implementation: suppose more than two processes share the same mailbox and process p1 sends a message to the mailbox; which process will be the receiver? This can be solved either by enforcing that only two processes can share a single mailbox, by allowing only one process to execute receive at a given time, or by selecting any process randomly and notifying the sender about the receiver. A mailbox can be made private to a single sender/receiver pair or shared between multiple sender/receiver pairs. A port is an implementation of such a mailbox with multiple senders and a single receiver, used in client/server applications (in this case, the server is the receiver). The port is owned by the receiving process, is created by the OS at the request of the receiver process, and can be destroyed either at the request of the same receiver process or when the receiver terminates. Enforcing that only one process may execute receive can be done using the concept of mutual exclusion: a mutex mailbox is created that is shared by n processes; the sender is non-blocking and sends the message; the first process that executes receive enters the critical section, and all other processes block and wait.
Now, let’s discuss the Producer-Consumer problem using the message passing concept. The producer places items (inside messages) in the mailbox, and the consumer can consume an item when at least one message is present in the mailbox. The code is given below:
Producer Code

 C

void Producer(void){

    int item;
    Message m;

    while(1){

        receive(Consumer, &m);
        item = produce();
        build_message(&m, item);
        send(Consumer, &m);
    }
}

Consumer Code

 C

void Consumer(void){

    int item;
    Message m;

    while(1){

        receive(Producer, &m);
        item = extracted_item();
        send(Producer, &m);
        consume_item(item);
    }
}

Examples of IPC systems

1. POSIX: uses the shared memory method.
2. Mach: uses message passing.
3. Windows XP: uses message passing via local procedure calls.

Communication in client/server architecture:
There are various mechanisms:

 Pipe
 Socket
 Remote Procedure Calls (RPC)

The above three methods will be discussed in later articles, as all of them are quite conceptual and deserve their own separate articles.
References:

1. Operating System Concepts by Galvin et al.
2. Lecture notes/PPT of Ariel J. Frank, Bar-Ilan University
Inter-process communication (IPC) is the mechanism through which
processes or threads can communicate and exchange data with each other
on a computer or across a network. IPC is an important aspect of modern
operating systems, as it enables different processes to work together and
share resources, leading to increased efficiency and flexibility.

Advantages of IPC:

1. Enables processes to communicate with each other and share resources, leading to increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better
overall system performance.
3. Allows for the creation of distributed systems that can span multiple
computers or networks.
4. Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.

Disadvantages of IPC:

1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to
access or modify data belonging to other processes.
3. Requires careful management of system resources, such as memory and
CPU time, to ensure that IPC operations do not degrade overall system
performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the same data at the same time.

Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism for modern operating systems that enables processes to work together and share resources in a flexible and efficient manner. However, IPC systems must be designed and implemented carefully to avoid potential security vulnerabilities and performance issues.
