Module 3

Inter Process Communication (IPC) allows processes to communicate and synchronize their actions, utilizing methods such as shared memory and message passing. Shared memory involves processes accessing a common memory space, while message passing involves sending messages through communication links, which can be direct or indirect. IPC has advantages like increased efficiency and coordination but also introduces complexity and potential security vulnerabilities.

Uploaded by julee9800

Inter Process Communication (IPC)



A process can be of two types:


 Independent process.
 Co-operating process.

An independent process is not affected by the execution of other processes, while a
co-operating process can be. Although one might expect independently running
processes to execute most efficiently, in practice there are many situations where
co-operation can be exploited to increase computational speed, convenience, and
modularity.
Inter-process communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions. The communication between these processes
can be seen as a method of co-operation between them.
Processes can communicate with each other through both:

1. Shared Memory
2. Message passing

Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.

An operating system can implement both methods of communication. First, we will discuss
the shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it.
One way of communication using shared memory can be imagined like this: Suppose
process1 and process2 are executing simultaneously, and they share some resources or use
some information from another process. Process1 generates information about certain
computations or resources being used and keeps it as a record in shared memory.
When process2 needs to use the shared information, it will check in the record stored in
shared memory and take note of the information generated by process1 and act accordingly.
Processes can use shared memory for extracting information as a record from another
process as well as for delivering any specific information to other processes.

Let’s discuss an example of communication between processes using the shared memory
method.

i) Shared Memory Method

Ex: Producer-Consumer problem

There are two processes: the Producer and the Consumer. The Producer produces items and
the Consumer consumes them. The two processes share a common space or memory
location, known as a buffer, where the items produced by the Producer are stored and
from which the Consumer takes them as needed.

There are two versions of this problem. The first is the unbounded buffer problem, in
which the Producer can keep producing items and there is no limit on the size of the
buffer. The second is the bounded buffer problem, in which the Producer can produce
only up to a certain number of items before it must wait for the Consumer to consume
some of them.
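The bounded buffer variant can be sketched in Python, with threads standing in for processes (threads share memory within one process, which mirrors the shared-memory idea; real inter-process shared memory would use OS facilities such as POSIX shared memory). The names `producer`, `consumer`, and `BUFFER_SIZE` are illustrative, not part of any API.

```python
import threading

BUFFER_SIZE = 3          # bound on the shared buffer
buffer = []              # the shared memory region (a plain list, for illustration)
cond = threading.Condition()

def producer(items):
    for item in items:
        with cond:
            while len(buffer) >= BUFFER_SIZE:   # bounded-buffer rule: wait while full
                cond.wait()
            buffer.append(item)                 # record the item in shared memory
            cond.notify_all()

def consumer(n, out):
    for _ in range(n):
        with cond:
            while not buffer:                   # wait while the buffer is empty
                cond.wait()
            out.append(buffer.pop(0))           # take the oldest item
            cond.notify_all()

items = list(range(10))
consumed = []
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items), consumed))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, ..., 9]
```

The condition variable enforces exactly the bounded-buffer rule described above: the Producer blocks once `BUFFER_SIZE` items are waiting, and resumes when the Consumer removes one.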

ii) Message Passing Method

Now we will discuss communication between processes via message passing. In this
method, processes communicate with each other without using any kind of shared
memory. If two processes p1 and p2 want to communicate with each other, they
proceed as follows:

 Establish a communication link (if a link already exists, there is no need to
establish it again).
 Start exchanging messages using basic primitives. At least two primitives are
needed:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable.

A standard message has two parts: a header and a body.

The header stores the message type, destination id, source id, message length, and
control information. Control information covers details such as what to do if buffer
space runs out, the sequence number, and the priority. Messages are generally
delivered in FIFO order.
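The header/body layout and FIFO delivery described above can be sketched as follows. The `Message` class and its field names are illustrative, not a standard wire format:

```python
from dataclasses import dataclass
import queue

@dataclass
class Message:
    # header fields (names are illustrative)
    msg_type: str
    source_id: int
    dest_id: int
    body: bytes = b""

    @property
    def length(self):          # message length, derived from the body
        return len(self.body)

link = queue.Queue()           # the communication link: FIFO delivery, as described
link.put(Message("data", source_id=1, dest_id=2, body=b"hello"))
m = link.get()                 # messages come out in the order they were sent
print(m.source_id, m.length)   # 1 5
```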

Message Passing through Communication Link.

Direct and Indirect Communication link

A link has a capacity that determines the number of messages that can reside in it
temporarily. Every link therefore has an associated queue, which can have zero
capacity, bounded capacity, or unbounded capacity.

– Zero capacity: this queue cannot hold any waiting message, so its maximum
length is 0. The sending process must be blocked until the receiving
process receives the message. This is also known as no buffering.
– Bounded capacity: this queue has finite length n, so it can hold n waiting
messages. If the queue is not full, a new message can be placed in it and
the sending process is not blocked. This is also known as automatic
buffering.
– Unbounded capacity: this queue has infinite length, so any number of
messages can wait in it. In such a system, a sending process is never blocked.
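Bounded and unbounded capacity can be illustrated with Python's `queue.Queue` (a sketch only; zero capacity is a rendezvous in which the sender blocks until the receive happens, which `queue.Queue` does not model):

```python
import queue

bounded = queue.Queue(maxsize=2)   # bounded capacity: at most 2 messages may wait
unbounded = queue.Queue()          # no maxsize -> unbounded capacity

bounded.put("m1")
bounded.put("m2")
overflow = False
try:
    # the queue is full: a blocking put would suspend the sender here
    bounded.put("m3", block=False)
except queue.Full:
    overflow = True

for i in range(1000):
    unbounded.put(i)               # any number of messages can wait; never blocks

print(overflow, unbounded.qsize())  # True 1000
```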

Direct communication establishes a link between two processes; the link is a single
(typically unidirectional) path used to share information.
• Each process that wants to communicate must explicitly name the recipient or
sender of the communication.
• The send and receive primitives used in direct communication are:
Send(process name, message), Receive(process name, message)
Send(A, message) – send a message to process A
Receive(B, message) – receive a message from process B

In indirect communication, no direct communication link exists between the two
processes.
• Messages are sent to and received from mailboxes.
• A mailbox is a specialized repository where messages can be placed by processes
and from which messages can be removed.
• More than two processes can share a mailbox.
• No communication between two processes is possible if they do not share a
mailbox.
• Each mailbox has a unique identification.
• A process can communicate with some other process via a number of different
mailboxes.
• Send and Receive used in indirect communication are given below:
Send(mailbox, message), Receive(mailbox, message)
Send(A, message) – send a message to mailbox A
Receive(A, message) – receive a message from mailbox A
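The mailbox primitives above can be sketched as follows. The `send`/`receive` helpers and the mailbox registry are hypothetical, and the "processes" here are just calls within one program; the point is that the sender names only the mailbox, never the receiver:

```python
import queue

mailboxes = {}   # mailbox id -> message queue (the shared repository)

def send(mailbox, message):
    # place a message in the named mailbox; the sender never names a receiver
    mailboxes.setdefault(mailbox, queue.Queue()).put(message)

def receive(mailbox):
    # remove the oldest message from the named mailbox
    return mailboxes[mailbox].get()

send("A", "hello")       # one process sends to mailbox A
msg = receive("A")       # any process sharing mailbox A may receive it
print(msg)               # hello
```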

Message Passing through Exchanging the Messages.

Synchronous and Asynchronous Message Passing:


A process that is blocked is one that is waiting for some event, such as a resource
becoming available or the completion of an I/O operation. IPC is possible both
between processes on the same computer and between processes running on different
computers, i.e., in a networked/distributed system. In both cases, a process may or
may not be blocked while sending a message or attempting to receive one, so message
passing may be blocking or non-blocking.

Blocking is considered synchronous: a blocking send blocks the sender until the
message is received by the receiver, and a blocking receive blocks the receiver until
a message is available. Non-blocking is considered asynchronous: a non-blocking send
lets the sender send the message and continue, and a non-blocking receive returns
either a valid message or null.

There are basically three preferred combinations:

 Blocking send and blocking receive
 Non-blocking send and non-blocking receive
 Non-blocking send and blocking receive (most commonly used)
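The "valid message or null" behavior of a non-blocking receive can be sketched with Python's `queue` module; `receive_nonblocking` is an illustrative helper, not a standard API:

```python
import queue

mbox = queue.Queue()

def receive_nonblocking(q):
    # non-blocking receive: return a message if one is waiting, else None (null)
    try:
        return q.get(block=False)
    except queue.Empty:
        return None

before = receive_nonblocking(mbox)   # nothing sent yet -> None, receiver continues
mbox.put("ping")                     # non-blocking send: enqueue and continue
after = receive_nonblocking(mbox)    # a message is now available
print(before, after)                 # None ping
```

A blocking receive would instead be `q.get()` with no arguments, which suspends the caller until a message arrives.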

In direct message passing, the process that wants to communicate must explicitly
name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means receive the message from p2.

In this method of communication, the link is established automatically. It can be
either unidirectional or bidirectional, but exactly one link exists between a given
pair of sender and receiver, and a pair should not possess more than one link.

Addressing can be symmetric or asymmetric: either both processes name each other
when sending and receiving messages, or only the sender names the receiver while the
receiver need not name the sender. The problem with this method of communication is
that if the name of one process changes, it stops working.

In indirect message passing, processes use mailboxes (also referred to as ports) for
sending and receiving messages. Each mailbox has a unique id, and processes can
communicate only if they share a mailbox. A link is established only if the processes
share a common mailbox, and a single link can be associated with many processes. Each
pair of processes can share several communication links, and these links may be
unidirectional or bidirectional.
Suppose two processes want to communicate through indirect message passing. The
required operations are: create a mailbox, use this mailbox for sending and receiving
messages, then destroy the mailbox. The standard send primitive is send(A, message),
which sends the message to mailbox A; the receive primitive works the same way,
e.g., receive(A, message).

There is a problem with this mailbox implementation. Suppose more than two processes
share the same mailbox and process p1 sends a message to it: which process will be
the receiver? This can be solved by enforcing that only two processes may share a
single mailbox, by allowing only one process at a time to execute a receive, or by
selecting a process arbitrarily and notifying the sender who the receiver is.

A mailbox can be made private to a single sender/receiver pair and can also be shared
between multiple sender/receiver pairs.

Now, let’s discuss the Producer-Consumer problem using the message passing concept.
The producer places items (inside messages) in the mailbox, and the consumer can
consume an item whenever at least one message is present in the mailbox.
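This message-passing formulation can be sketched with a `queue.Queue` acting as the mailbox; threads stand in for the producer and consumer processes, and no memory is shared directly between them:

```python
import queue, threading

mailbox = queue.Queue()            # the shared mailbox

def producer(n):
    for i in range(n):
        mailbox.put(i)             # each item travels inside a message

def consumer(n, out):
    for _ in range(n):
        out.append(mailbox.get())  # blocks until at least one message is present

got = []
p = threading.Thread(target=producer, args=(5,))
c = threading.Thread(target=consumer, args=(5, got))
p.start(); c.start(); p.join(); c.join()
print(got)  # [0, 1, 2, 3, 4]
```

Note that unlike the shared-memory version, no explicit buffer management is needed: the mailbox's own blocking `get` supplies the synchronization.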

Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading
to increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall
system performance.
3. Allows for the creation of distributed systems that can span multiple computers
or networks.
4. Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.
Disadvantages of IPC:

1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU
time, to ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the
same data at the same time.

Overall, the advantages of IPC outweigh the disadvantages: it is a necessary
mechanism for modern operating systems and enables processes to work together and
share resources in a flexible and efficient manner. However, care must be taken to
design and implement IPC systems carefully, in order to avoid potential security
vulnerabilities and performance issues.

Critical Section
When more than one process tries to access the same code segment, that segment is
known as the critical section. The critical section contains shared variables or
resources whose access must be synchronized to maintain the consistency of the data.

In simple terms, a critical section is a group of instructions/statements or region
of code that needs to be executed atomically, such as accessing a resource (a file,
an input or output port, global data, etc.). In concurrent programming, if one thread
tries to change the value of shared data at the same time as another thread tries to
read it (i.e., a data race across threads), the result is unpredictable. Access to
such shared resources (shared memory, shared files, shared ports, etc.) must
therefore be synchronized.

Few programming languages have built-in support for synchronization. It is especially
important to understand race conditions when writing kernel-mode code (a device
driver, a kernel thread, etc.), since the programmer can directly access and modify
kernel data structures.
Any solution to the critical section problem should satisfy the following properties:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will enter
its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound, or limit, on the number of times that
other processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is granted.
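Mutual exclusion on a shared variable can be demonstrated with a minimal Python sketch (threads stand in for processes). The lock acquisition is the entry section, the increment is the critical section, and releasing the lock is the exit section; without the lock, concurrent `counter += 1` updates can be lost:

```python
import threading

counter = 0                  # the shared variable
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # entry section: acquire guarantees mutual exclusion
            counter += 1     # critical section: update the shared variable
        # exit section: the lock is released when the with-block ends

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — without the lock, lost updates could make this smaller
```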

Critical Section Problem

The use of critical sections in a program can cause a number of issues, including:
Deadlock: When two or more threads or processes wait for each other to release a critical
section, it can result in a deadlock situation in which none of the threads or processes can
move. Deadlocks can be difficult to detect and resolve, and they can have a significant
impact on a program’s performance and reliability.

Starvation: When a thread or process is repeatedly prevented from entering a critical
section, it can starve, meaning it is unable to make progress. This can happen if the
critical section is held for an unusually long period of time, or if a high-priority
thread or process is always given priority when entering the critical section.

Overhead: When using critical sections, threads or processes must acquire and release
locks or semaphores, which can take time and resources. This may reduce the program’s
overall performance.

Semaphores in Process Synchronization



Semaphores are just normal variables used to coordinate the activities of multiple processes
in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and
implement synchronization between processes.

A semaphore is a compound data type with two fields: a non-negative integer S.V. and
a set of processes waiting in a queue S.L. It is used to solve the critical section
problem by means of two atomic operations, wait and signal, which are used for
process synchronization.

States of the Process


Let’s go through the states a process passes through in its lifecycle; this will help
in understanding semaphores.
1. Running: the process is in execution.
2. Ready: the process wants to run.
3. Idle: the idle process runs when no other processes are running.
4. Blocked: the process is not ready and not a candidate for running. It can be
awakened by some external action.
5. Inactive: the initial state of the process. The process is activated at some
point and becomes ready.
6. Complete: the process has executed its final statement.

Initialization of Semaphore
Semaphore S must be initialized with a value of S.V. > 0 and with empty S.L.
Atomic Operations in Semaphore
Wait and signal operations on semaphores
These two operations are executed on entry into the critical section (the wait
operation) and on exit from it (the signal operation). The wait and signal operations
on the semaphore are just the classic “P” and “V” functions.

1. Wait operation
This operation, also known as the “P”, sleep, decrement, or down operation, controls
the entry of a process into a critical section. If the semaphore value is positive,
the process may enter the critical section, and on entry the semaphore value is
decremented by 1.

2. Signal Operation
The “V”, wakeup, increment, or up operation is the signal function. When a process
leaves the critical section, the semaphore value is updated to allow new processes to
enter. On entry to the critical section, the wait operation decremented the semaphore
value by one: if the semaphore started at 1 (S = 1), it was decremented to 0 when the
process entered. When the process finishes its execution in the critical section and
leaves it, the signal function is executed and increases the semaphore value by 1.
Note that this operation is executed only after the process has finished the critical
section.
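The S.V./S.L. description above can be turned into a minimal sketch. `SimpleSemaphore` is an illustrative teaching class, not a real API (production code should use `threading.Semaphore`); here the condition variable plays the role of the queue S.L.:

```python
import threading

class SimpleSemaphore:
    """Textbook semaphore: an integer value S.V plus a wait queue S.L (sketch only)."""
    def __init__(self, value=1):
        self._value = value                  # S.V: the non-negative integer
        self._cond = threading.Condition()   # stands in for the queue S.L

    def wait(self):                          # the P / down / decrement operation
        with self._cond:
            while self._value <= 0:          # block until the value is positive
                self._cond.wait()
            self._value -= 1                 # decrement on entry

    def signal(self):                        # the V / up / increment operation
        with self._cond:
            self._value += 1                 # increment on exit
            self._cond.notify()              # wake one blocked process

s = SimpleSemaphore(1)
s.wait()       # S goes 1 -> 0: inside the critical section
s.signal()     # S goes 0 -> 1: critical section released
```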

Types of Semaphores
Binary Semaphore
A binary semaphore is a binary (0 or 1) flag that can be enabled or disabled. If a binary
semaphore is used as the mutual exclusion mechanism, only the allocated resources will be
affected by the mutual exclusion.

Implementation of Binary Semaphore


As the name suggests, binary means two, so this semaphore takes two values, 0 and 1.
Initially the semaphore value is 1. When process P1 enters the critical section, the
value becomes 0. If P2 tries to enter the critical section at this point it cannot,
because the semaphore value is less than or equal to 0; P2 must wait until the value
is greater than 0 again.
This only happens when P1 leaves the critical section and performs a signal operation,
which increases the value of the semaphore. This is also known as a mutex lock. In
this way the two processes cannot be in the critical section simultaneously, ensuring
mutual exclusion.
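The binary semaphore described above maps directly onto `threading.Semaphore(1)`; `acquire` is the wait operation and `release` is the signal operation (threads again stand in for processes P1, P2, ...):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: value is 0 or 1, initially 1
shared = []

def critical(item):
    mutex.acquire()      # wait: value 1 -> 0; any other thread now blocks here
    shared.append(item)  # only one thread at a time executes this line
    mutex.release()      # signal: value 0 -> 1; one waiting thread may proceed

threads = [threading.Thread(target=critical, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]
```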

Counting Semaphore
Conceptually, a counting semaphore is a non-negative integer. Counting semaphores are
typically used to coordinate access to resources, with the semaphore initialized to
the number of free resources. The count is incremented atomically when a resource is
added and decremented atomically when a resource is removed.

Implementation of Counting Semaphore


If a resource has three instances, the initial value of the semaphore is 3 (i.e.,
S = 3). Whenever a process needs to access the critical section/resource, it calls
the wait function, which decrements the semaphore value by one as long as the value
is greater than 0. In this case, up to three processes can access the critical
section/resource at once. When a fourth process needs it, that process is blocked and
placed in a waiting queue, and it wakes up only when one of the executing processes
performs the signal function, i.e., when the semaphore value has increased by 1.
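The three-instance example can be checked with a short sketch: six threads compete for a resource guarded by `threading.Semaphore(3)`, and we record the peak number of simultaneous holders (the `peak`/`guard` bookkeeping is purely for illustration):

```python
import threading, time

pool = threading.Semaphore(3)    # counting semaphore: three resource instances (S = 3)
in_use = 0
peak = 0
guard = threading.Lock()         # protects the bookkeeping counters

def use_resource():
    global in_use, peak
    with pool:                   # wait: blocks once three holders exist
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)         # hold the resource briefly
        with guard:
            in_use -= 1
    # leaving the with-block is the signal: a blocked thread may now proceed

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # at most 3, however the threads interleave
```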

Race Condition in Critical Section Area Problem

A race condition is a scenario that can arise within a critical section: different
results are obtained from the execution of multiple threads in the critical section
depending on the order in which the threads execute.

Race conditions can be avoided if the critical section is treated as an atomic
instruction, or by properly synchronizing the threads using locks or atomic
variables.

Turn Variable or Strict Alternation Approach

The turn variable or strict alternation approach is a software mechanism implemented
in user mode. It is a busy-waiting solution that can be implemented only for two
processes. In general, let the two processes be Pi and Pj; they share a variable
called the turn variable, which acts as a lock.
The problem with the earlier lock variable approach was that more than one process
could observe the same value of the lock variable at the same time, so mutual
exclusion was not guaranteed.

This problem is addressed in the turn variable approach: a process can enter the
critical section only when the value of the turn variable equals that process's id.
The turn variable can take only two values, i or j; if its value is not i, it is
definitely j, and vice versa.

In the entry section, process Pi does not enter the critical section until the turn
variable's value is i, and process Pj does not enter until its value is j.

Initially, both processes Pi and Pj want to execute their critical sections.

The turn variable equals i, so Pi gets the chance to enter the critical section, and
the value of turn remains i until Pi finishes its critical section. Pi then assigns j
to the turn variable, so Pj gets the chance to enter, and the value of turn remains j
until Pj finishes its critical section.
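Strict alternation can be sketched with two Python threads and a shared `turn` variable; the `while ... pass` loop is the busy wait described above (process ids 0 and 1 stand in for i and j):

```python
import threading

turn = 0          # the shared turn variable: 0 -> Pi's turn, 1 -> Pj's turn
trace = []        # records which process entered the critical section

def process(my_id, other_id, rounds):
    global turn
    for _ in range(rounds):
        while turn != my_id:      # entry section: busy-wait until it is our turn
            pass
        trace.append(my_id)       # critical section
        turn = other_id           # exit section: hand the turn to the other process

pi = threading.Thread(target=process, args=(0, 1, 3))
pj = threading.Thread(target=process, args=(1, 0, 3))
pi.start(); pj.start(); pi.join(); pj.join()
print(trace)  # [0, 1, 0, 1, 0, 1] — strict alternation, regardless of scheduling
```

The fixed alternating trace also shows why progress is not guaranteed: if Pj never wants the critical section, Pi still cannot enter twice in a row.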

Analysis of Strict Alternation approach

Let's analyze the strict alternation approach on the basis of the following requirements.


Features of Turn Variable
 Mutual exclusion:
Mutual exclusion does not allow more than one process to access the same
shared resource at the same time. Turn variable ensures mutual exclusion
property.
 Progress: The turn variable does not guarantee progress; it follows the strict
alternation approach.
 Portability: The turn variable is implemented in user mode and does not require
any kind of special instruction from the operating system. Therefore it provides
portability.
 Bounded waiting: Each process gets its chance; once the previous process has
executed, the next process gets its turn, therefore the turn variable ensures
bounded waiting.
 Deadlock: The turn variable is free from deadlock. Only one process is in
a critical section at a time. Once the turn value is updated the next process goes
into the critical section.

Conclusion
The turn variable is used to overcome the limitations of the lock variable. It is
implemented in user mode and used for synchronization between two processes. The turn
variable ensures mutual exclusion, portability, and bounded waiting, and it is free
from deadlock.

READERS WRITERS PROBLEM

The readers-writers problem is a classical process synchronization problem. It
concerns a data set, such as a file, that is shared among more than one process at a
time. Some of these processes are Readers, which only read the data set and never
update it; others are Writers, which can both read and write the data set.
The readers-writers problem is about managing synchronization among the various
reader and writer processes so that no inconsistency arises in the data set.

Let's understand with an example. If two or more readers want to access the file at
the same time, there is no problem. However, if two writers, or one reader and one
writer, want to access the file at the same time, problems may occur. The task is
therefore to design the code so that if one reader is reading, no writer is allowed
to update at the same time; if one writer is writing, no reader is allowed to read
the file at that time; and if one writer is updating the file, no other writer may
update it at the same time. Multiple readers, however, can access the object
simultaneously.

Let us understand the possibility of reading and writing with the table given below:

TABLE 1

Case    Process 1  Process 2  Allowed / Not Allowed

Case 1  Writing    Writing    Not allowed

Case 2  Reading    Writing    Not allowed

Case 3  Writing    Reading    Not allowed

Case 4  Reading    Reading    Allowed

The solution of readers and writers can be implemented using binary semaphores.
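One standard binary-semaphore solution (the "first" readers-writers solution, which favors readers) can be sketched as follows: `mutex` protects the reader count, and `wrt` is held by a writer, or collectively by the group of active readers. Threads stand in for processes, and the variable names are conventional rather than mandated:

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # exclusive access for writers
read_count = 0
data = {"value": 0}              # the shared data set
reads = []

def writer(v):
    wrt.acquire()                # wait: writers need exclusive access
    data["value"] = v            # critical section: update the data set
    wrt.release()                # signal

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:          # the first reader locks writers out
        wrt.acquire()
    mutex.release()
    reads.append(data["value"])  # many readers may read concurrently here
    mutex.acquire()
    read_count -= 1
    if read_count == 0:          # the last reader lets writers back in
        wrt.release()
    mutex.release()

writer(42)
threads = [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(reads)  # [42, 42, 42]
```

Note this variant can starve writers if readers keep arriving, which matches the starvation discussion earlier.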

Dining Philosophers Problem

The dining philosophers problem is another classic synchronization problem which is used to
evaluate situations where there is a need of allocating multiple resources to multiple
processes.

What is the Problem Statement?

Consider five philosophers sitting around a circular dining table. The table has five
chopsticks and a bowl of rice in the middle.

[Figure: Dining Philosophers Problem]

At any instant, a philosopher is either eating or thinking. When a philosopher wants
to eat, he uses two chopsticks: one from his left and one from his right. When a
philosopher wants to think, he puts both chopsticks down in their original places.

Here's the Solution

From the problem statement, it is clear that a philosopher can think for an indefinite amount
of time. But when a philosopher starts eating, he has to stop at some point of time. The
philosopher is in an endless cycle of thinking and eating.

An array of five semaphores, stick[5], is used, one for each of the five chopsticks.

When a philosopher wants to eat, he waits for the chopstick on his left and picks it
up, then waits for the chopstick on his right and picks it up too. After eating, he
puts both chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one
chopstick, a deadlock occurs, because each will wait forever for a second chopstick.
The possible solutions for this are:

 A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.

 Allow only four philosophers to sit at the table. That way, if all four philosophers
pick up a chopstick, there will be one chopstick left on the table, so one
philosopher can start eating, and eventually two chopsticks will become available.
In this way, deadlock can be avoided.
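The second solution (at most four philosophers at the table) can be sketched using one semaphore per chopstick plus a counting semaphore of value N-1 for the seats; threads stand in for the philosophers, and the `seats` name is illustrative:

```python
import threading

N = 5
sticks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
seats = threading.Semaphore(N - 1)   # at most four philosophers seated at once
meals = []                           # records who managed to eat

def philosopher(i, rounds=2):
    for _ in range(rounds):
        seats.acquire()                  # sit down; with <= N-1 seated, no deadlock
        sticks[i].acquire()              # pick up the left chopstick
        sticks[(i + 1) % N].acquire()    # pick up the right chopstick
        meals.append(i)                  # eat
        sticks[(i + 1) % N].release()    # put both chopsticks down
        sticks[i].release()
        seats.release()                  # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(meals))  # 10 — every philosopher ate twice, with no deadlock
```

Limiting the seats to N-1 guarantees that at least one seated philosopher can always obtain both chopsticks, which breaks the circular wait that causes the deadlock.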
