Interprocess Communication (IPC)

This document discusses interprocess communication and mechanisms for processes to communicate and synchronize actions. It describes using messaging systems to allow processes to exchange messages without shared variables. Processes can directly communicate by explicitly naming the sender/recipient, or indirectly through shared mailboxes. Buffering and synchronization techniques like semaphores are used to control access to shared resources and ensure consistency. The critical section problem of controlling access to shared data is examined, along with solutions that satisfy requirements like mutual exclusion and bounded waiting.


Interprocess Communication (IPC)
Chapter 5
Interprocess Communication
Provides mechanisms that allow processes to
communicate and synchronize their actions.
How?
Use a message-passing system for IPC
Allows processes to communicate without resorting to
shared variables. Messages can be of variable or fixed
size.
Basic operations
Send (message)
Receive (message)
Direct Communication
Each process that wants to communicate must
explicitly name the recipient or sender of the
communication
Send (P, message): send a message to process P
Receive (Q, message): receive a message from
process Q
A communication link in this case is
automatically established between every pair of
processes that wants to communicate.
A link is associated with exactly two processes; it may be
unidirectional, but is usually bidirectional
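
As an illustration (not part of the original slides), a minimal Java sketch of direct communication: the bidirectional link between the pair P and Q is modeled with one BlockingQueue per direction, and the class and variable names are assumptions.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of direct communication: the link P <-> Q is one queue per direction,
// so send/receive explicitly name the peer process.
public class DirectIpcSketch {
    static final BlockingQueue<String> qToP = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> pToQ = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception {
        Thread processQ = new Thread(() -> {
            try {
                qToP.put("request from Q");          // send (P, message)
                System.out.println(pToQ.take());     // receive (P, message)
            } catch (InterruptedException ignored) {}
        });
        processQ.start();

        // process P runs on the main thread
        System.out.println(qToP.take());             // receive (Q, message)
        pToQ.put("reply from P");                    // send (Q, message)
        processQ.join();
    }
}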
Indirect Communication
Use of mailboxes (ports) for message deposits
Each mailbox has a unique id.
Shared mailboxes to facilitate exchange of
messages.
Many mailboxes per process possible, but a link
is associated with exactly two processes.
Two processes can communicate only if they
have a shared mailbox.
The primitives are implemented as follows:
Send (A, message): send a message to mailbox A
Receive (A, message): receive a message from mailbox A
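
A corresponding sketch of indirect communication, assuming a shared mailbox named "A"; the mailbox map and its capacity are illustrative assumptions, not part of the slides.

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of indirect communication: messages go to a named mailbox, not to a
// process. Both processes must share mailbox "A" to communicate.
public class MailboxIpcSketch {
    static final Map<String, BlockingQueue<String>> mailboxes =
            Map.of("A", new ArrayBlockingQueue<>(10));   // mailbox A, bounded capacity (assumed)

    static void send(String mailbox, String message) throws InterruptedException {
        mailboxes.get(mailbox).put(message);             // Send (A, message)
    }

    static String receive(String mailbox) throws InterruptedException {
        return mailboxes.get(mailbox).take();            // Receive (A, message)
    }

    public static void main(String[] args) throws Exception {
        Thread sender = new Thread(() -> {
            try { send("A", "deposited in mailbox A"); } catch (InterruptedException ignored) {}
        });
        sender.start();
        System.out.println(receive("A"));                // the other process picks it up from A
        sender.join();
    }
}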
Communication Links
Use buffering (FIFO mostly)
Varying capacities
1. Zero-capacity (a message is transmitted only when the
two processes synchronize; the link cannot hold
messages waiting in it)
2. Bounded-capacity (fixed-length queue)
3. Unbounded-capacity (infinite-size queue)
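
A small sketch mapping the three capacities onto standard Java queue types; the choice of classes is mine, not the slides'.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Sketch of the three link capacities using java.util.concurrent queues.
public class LinkCapacities {
    public static void main(String[] args) throws Exception {
        // 1. Zero capacity: sender and receiver must synchronize; no message can wait in the link.
        BlockingQueue<String> zero = new SynchronousQueue<>();

        // 2. Bounded capacity: at most N messages queued; put() blocks when the queue is full.
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(8);

        // 3. Unbounded capacity: effectively unlimited queue; put() effectively never blocks.
        BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();

        bounded.put("queued message");
        unbounded.put("another queued message");
        System.out.println(bounded.take() + " / " + unbounded.take());
        // zero.put(...) would block here until another thread calls zero.take().
    }
}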
Sender Types
1. Non-delay (no synchronization means the message
from the sender is lost)
2. Wait-for-reply (wait for notification of receipt of the
message sent, so as to guarantee no message
loss)
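
A hedged sketch of the two sender types built on a zero-capacity link: offer() models a non-delay send whose message is lost when no receiver is ready, and a reply queue models wait-for-reply. All names and the structure are illustrative assumptions.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Sketch of the two sender types.
public class SenderTypes {
    static final BlockingQueue<String> link = new SynchronousQueue<>();
    static final BlockingQueue<String> ack  = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception {
        // 1. Non-delay: no receiver is waiting on the zero-capacity link,
        //    so offer() returns false and the message is simply lost.
        boolean delivered = link.offer("fire-and-forget");
        System.out.println("non-delay delivered? " + delivered);

        // 2. Wait-for-reply: the sender blocks until the receiver acknowledges receipt.
        Thread receiver = new Thread(() -> {
            try {
                String m = link.take();      // receive the message
                ack.put("got: " + m);        // notify the sender of receipt
            } catch (InterruptedException ignored) {}
        });
        receiver.start();
        link.put("important message");        // blocks until the receiver takes it
        System.out.println(ack.take());       // wait for the notification of receipt
        receiver.join();
    }
}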
Exception conditions
process terminates
lost messages
scrambled messages (how do you deal with
these)
Process Synchronization
Background
Concurrent access to shared data may result
in data inconsistency.
Maintaining data consistency requires
mechanisms to ensure the orderly execution
of cooperating processes.
The shared-memory solution to the bounded-buffer
problem allows at most n-1 items in an n-slot buffer
at the same time.
E.g. the producer-consumer problem
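
Why at most n-1 items: with only in/out indices, "full" must be distinguishable from "empty", so one slot is left unused. A minimal, non-thread-safe sketch of the indexing (capacity and driver code are assumptions for illustration):

// Sketch of the shared-memory bounded buffer: with only in/out indices,
// "full" is (in + 1) % n == out, so at most n - 1 slots are ever used.
// This only illustrates the indexing; it is not a concurrent implementation.
public class CircularBuffer {
    private final int[] slots;
    private int in = 0, out = 0;

    CircularBuffer(int n) { slots = new int[n]; }

    boolean produce(int item) {
        if ((in + 1) % slots.length == out) return false;   // buffer "full" with n - 1 items
        slots[in] = item;
        in = (in + 1) % slots.length;
        return true;
    }

    Integer consume() {
        if (in == out) return null;                          // buffer empty
        int item = slots[out];
        out = (out + 1) % slots.length;
        return item;
    }

    public static void main(String[] args) {
        CircularBuffer b = new CircularBuffer(4);            // n = 4 slots
        int accepted = 0;
        for (int i = 0; i < 4; i++) if (b.produce(i)) accepted++;
        System.out.println("accepted " + accepted + " of 4 items");  // prints 3 = n - 1
    }
}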
Critical Section Problem
How to control access to shared data while
maintaining data consistency.
A critical section is a section of code where
shared data is
1. Accessed, or
2. Overwritten
The goal is to ensure that when one process is executing
in its critical section, no other process is
allowed to execute in its critical section
Solution to Critical section problem
Requirements of solution
Mutual Exclusion
Bounded waiting
Progress
Mutual Exclusion
If a process P0 is executing in its critical
section, then no other process can be executing in
its critical section.
Progress
If no process is executing in its critical section and
there exist some processes that wish to
enter their critical sections, then only those
processes that are not executing in their
remainder sections can participate in the
decision of which process will enter
its critical section next, and this selection
cannot be postponed indefinitely
Bounded Waiting
A bound must exist on the number of
times that other processes are allowed to
enter their critical sections after a process
has made a request to enter its critical
section & before that request is granted.
Assumptions
Each process is executing at a nonzero
speed
We present a trace of the solutions using
two processes Pi and Pj, whose general
structure is shown on the next slide
Structure of Process Pi

Repeat
    Entry section (process requests permission to enter its
    critical section)
    Critical section
    Exit section
    Remainder section
Until false
Solution 1
Assume two processes Pi & Pj competing
Shared variables:
    turn : 0..1;
    turn = 0; initially
Process Pi
Repeat
    While (turn != i) wait;
    Critical section;
    turn = j;
    Remainder section;
Until false;
Analysis of solution 1
Meets the mutual exclusion requirement
P0 always gets to enter its critical section first (turn = 0 initially)
Unnecessary waiting for Pj if Pi is slow to
set turn = j
No progress: there is strict alternation of
processes
Solution 2
Assume two processes P0 & P1 competing
Shared variables:
    flag : Boolean
    flag = false; initially
To P1, flag = true means P0 is in its critical section;
to P0, flag = true means P1 is in its critical section
Solution 2 continued
Process Pi
Repeat
    While (flag == true) wait;
    flag = true; (ready to enter its critical section)
    Critical section;
    flag = false;
    Remainder section;
Until false
No mutual exclusion: both processes can see flag == false
and enter their critical sections together
Solution 2 variation
Shared variables:
    flag : array [0..1] of Boolean
    flag [i] = false; initially
Process Pi
Repeat
    flag [i] = true; (ready to enter its critical section)
    While (flag [j] == true) wait;
    Critical section;
    flag [i] = false;
    Remainder section;
Until false
Analysis
Does satisfy mutual exclusion
Progress not satisfied: if both processes set their flags to
true at the same time, each waits for the other indefinitely
Solution 3
Solution 1 + Solution 2
Process Pi
Repeat
    flag = true;
    turn = j;
    While (flag == true && turn == j) wait;
    Critical section;
    flag = false;
    Remainder section;
Until false;
Solution 3 variation
Solution 1 + Solution 2
Var flag : array [0..1] of Boolean; turn : 0..1;
flag [i] = flag [j] = false initially
Process Pi
Repeat
    flag [i] = true;
    turn = j;
    While (flag [j] == true && turn == j) wait;
    Critical section;
    flag [i] = false;
    Remainder section;
Until false;
Analysis
Mutual exclusion is achieved
If both processes store to turn at the same time, the value
written last decides: the process that stored last waits, and
the other enters its critical section first.
No unnecessary waiting (bounded waiting)
flag [i] = false resets the flag after the critical section
There is progress (and a bound on the number of times the
other process can enter first)
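
Solution 3 (variation) is Peterson's algorithm. A compact Java sketch follows, under the assumption that volatile/atomic variables are used so the flag and turn updates are visible between threads; the driver in main is added only for illustration.

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of solution 3 (variation), i.e. Peterson's algorithm for two threads.
public class PetersonLock {
    private final AtomicBoolean[] flag = { new AtomicBoolean(), new AtomicBoolean() };
    private volatile int turn = 0;

    void lock(int i) {                          // i is 0 or 1; j is the other thread
        int j = 1 - i;
        flag[i].set(true);                      // flag[i] = true
        turn = j;                               // turn = j
        while (flag[j].get() && turn == j) {    // While (flag[j] && turn == j) wait
            Thread.onSpinWait();                // busy wait
        }
    }

    void unlock(int i) {
        flag[i].set(false);                     // flag[i] = false
    }

    public static void main(String[] args) throws Exception {
        PetersonLock lock = new PetersonLock();
        int[] counter = {0};
        Runnable w0 = () -> { for (int k = 0; k < 100_000; k++) { lock.lock(0); counter[0]++; lock.unlock(0); } };
        Runnable w1 = () -> { for (int k = 0; k < 100_000; k++) { lock.lock(1); counter[0]++; lock.unlock(1); } };
        Thread a = new Thread(w0), b = new Thread(w1);
        a.start(); b.start(); a.join(); b.join();
        System.out.println(counter[0]);         // 200000 if mutual exclusion held
    }
}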
Solution 4
Solution 3 generalized to many processes: the Bakery
algorithm for n > 1 processes.
Algorithm
Shared variables:
    Choosing : array [0..n-1] of Boolean;
    Number : array [0..n-1] of integer;
    Choosing [i] = false for i = 0..n-1;
    Number [i] = 0 for i = 0..n-1; initially
Process Pi
Repeat
    Choosing [i] = true;
    Number [i] = 1 + max ( Number [0], ..., Number [n-1] );
    Choosing [i] = false; // each process takes a ticket number;
    // the smallest number gets into its critical section first
    For ( j = 0; j <= n-1; j++ )
    {
        While ( Choosing [j] == true ) wait;
        While ( Number [j] != 0 &&
                ( Number [j], j ) < ( Number [i], i ) ) wait;
        // ties on Number are broken by process id
    }
    Critical section;
    Number [i] = 0;
    Remainder section;
Until false;
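
A hedged Java sketch of the Bakery algorithm for n threads, following Lamport's formulation above; the use of AtomicIntegerArray and the driver code are assumptions added for illustration.

import java.util.concurrent.atomic.AtomicIntegerArray;

// Sketch of the Bakery algorithm; ties on the ticket number are broken by thread id.
public class BakeryLock {
    private final int n;
    private final AtomicIntegerArray choosing;   // 1 = true, 0 = false
    private final AtomicIntegerArray number;

    BakeryLock(int n) { this.n = n; choosing = new AtomicIntegerArray(n); number = new AtomicIntegerArray(n); }

    void lock(int i) {
        choosing.set(i, 1);
        int max = 0;
        for (int k = 0; k < n; k++) max = Math.max(max, number.get(k));
        number.set(i, 1 + max);                  // take the next ticket
        choosing.set(i, 0);
        for (int j = 0; j < n; j++) {
            while (choosing.get(j) == 1) Thread.onSpinWait();     // wait while Pj is choosing
            while (number.get(j) != 0 &&                           // wait while Pj holds a smaller ticket
                   (number.get(j) < number.get(i) ||
                    (number.get(j) == number.get(i) && j < i))) Thread.onSpinWait();
        }
    }

    void unlock(int i) { number.set(i, 0); }

    public static void main(String[] args) throws Exception {
        int n = 4;
        BakeryLock lock = new BakeryLock(n);
        int[] counter = {0};
        Thread[] threads = new Thread[n];
        for (int t = 0; t < n; t++) {
            final int id = t;
            threads[t] = new Thread(() -> { for (int k = 0; k < 50_000; k++) { lock.lock(id); counter[0]++; lock.unlock(id); } });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter[0]);          // n * 50000 if mutual exclusion held
    }
}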
Problems
Busy waiting required
CPU time wastage
Solution (primitives)
1. Sleep & Wake up
2. Semaphores
3. Event Counters
4. Monitors
Sleep & Wake up
Sleep ()
Causes calling process to block until it is
woken up.
Wakeup (process_name)
Causes process_name to wake up and
continue execution.
Producer-consumer solution
Producer ()
    If the buffer is full then sleep ()
    If the buffer was empty then wakeup (consumer) after
    producing the first item
Consumer ()
    If the buffer is empty then sleep ()
    If the buffer was full then wakeup (producer) after
    consuming one item
Race condition due to unprotected shared variables: a wakeup
sent while the other process is about to sleep can be lost
Semaphores
Integer variable that counts pending wakeups
0 = no wakeups saved up (a down() will block)
> 0 = wakeups available (a down() proceeds)
Up (semaphore_name)
    Increase the value
    Wake up one blocked process, if any
Down (semaphore_name)
    If value > 0, the process continues after decrementing the
    value.
    If value = 0, the process blocks; it will decrement the value
    and proceed after an up() by another process.
A successful down() means the process may proceed to its
critical section
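
A minimal sketch of the down/up semantics described above, written as a small Java class; it illustrates the semantics only and is not an efficient or production semaphore.

// Sketch of a counting semaphore with the down/up semantics described above.
public class SimpleSemaphore {
    private int value;

    SimpleSemaphore(int initial) { value = initial; }

    synchronized void down() throws InterruptedException {
        while (value == 0) wait();   // value = 0: block until an up() by another process
        value--;                      // value > 0: decrement and continue
    }

    synchronized void up() {
        value++;                      // record one more available wakeup
        notify();                     // wake up one blocked process, if any
    }
}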
Producer consumer solution
Full (number of full slots)
Empty (number of empty slots)
Mutex (binary semaphore, 0 or 1: only one process at a time
in the shared buffer)
Producer ()
    Down (empty);
    Down (mutex);
    Put item;
    Up (mutex);
    Up (full);
Consumer ()
    Down (full);
    Down (mutex);
    Remove item;
    Up (mutex);
    Up (empty);
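
The same three-semaphore structure sketched with java.util.concurrent.Semaphore, where acquire corresponds to down and release to up; the buffer capacity, item type, and item counts are illustrative assumptions.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Sketch of the producer-consumer solution above using java.util.concurrent.Semaphore.
public class SemaphoreProducerConsumer {
    static final int N = 5;                          // buffer capacity (assumed)
    static final Queue<Integer> buffer = new ArrayDeque<>();
    static final Semaphore empty = new Semaphore(N); // number of empty slots
    static final Semaphore full  = new Semaphore(0); // number of full slots
    static final Semaphore mutex = new Semaphore(1); // one process at a time in the buffer

    public static void main(String[] args) throws Exception {
        Thread producer = new Thread(() -> {
            try {
                for (int item = 0; item < 10; item++) {
                    empty.acquire();                 // down (empty)
                    mutex.acquire();                 // down (mutex)
                    buffer.add(item);                // put item
                    mutex.release();                 // up (mutex)
                    full.release();                  // up (full)
                }
            } catch (InterruptedException ignored) {}
        });
        producer.start();
        for (int k = 0; k < 10; k++) {               // consumer on the main thread
            full.acquire();                          // down (full)
            mutex.acquire();                         // down (mutex)
            int item = buffer.remove();              // remove item
            mutex.release();                         // up (mutex)
            empty.release();                         // up (empty)
            System.out.println("consumed " + item);
        }
        producer.join();
    }
}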
Monitors
With semaphores the order of downs and ups is critical and
easy to get wrong; monitors avoid such errors.
Only one process is allowed inside the monitor at a time
Monitors are programming-language constructs.
E.g.
Class Buffer {
    // variables
    Public synchronized void produce ()
    { ... if full then wait(); /* blocking */ }
    Public synchronized void consume ()
    { ... if empty then wait(); if was full then notifyAll(); /* unblocking */ }
}
synchronized gives mutual exclusion; notifyAll() prevents lost
wakeups, so no deadlock
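
A runnable version of the Buffer monitor sketched above, using Java's synchronized/wait/notifyAll; the capacity and the driver in main are assumptions added for illustration.

// Runnable sketch of the Buffer monitor: synchronized gives mutual exclusion,
// notifyAll() wakes blocked producers/consumers so no wakeup is lost.
public class MonitorBuffer {
    private final int[] items;
    private int count = 0, in = 0, out = 0;

    MonitorBuffer(int capacity) { items = new int[capacity]; }

    public synchronized void produce(int item) throws InterruptedException {
        while (count == items.length) wait();        // if full, block
        items[in] = item;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                                  // unblock waiting consumers
    }

    public synchronized int consume() throws InterruptedException {
        while (count == 0) wait();                    // if empty, block
        int item = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();                                  // unblock waiting producers
        return item;
    }

    public static void main(String[] args) throws Exception {
        MonitorBuffer buffer = new MonitorBuffer(3);  // capacity is an assumption
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 6; i++) buffer.produce(i); } catch (InterruptedException ignored) {}
        });
        producer.start();
        for (int i = 0; i < 6; i++) System.out.println("consumed " + buffer.consume());
        producer.join();
    }
}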
