
Operating Systems

Unit 2
Concurrent Processes:
1. Process Concept
2. Principle of Concurrency
3. Producer / Consumer Problem
4. Mutual Exclusion
5. Critical Section Problem
6. Dekker’s solution
7. Peterson’s solution
8. Semaphores
9. Test and Set operation
10. Classical Problems in Concurrency:
    a) Dining Philosopher Problem
    b) Sleeping Barber Problem
11. Inter Process Communication models and Schemes
12. Process generation
Process Concept: Definition
• A process is a program in execution; it must progress in sequential fashion.
• Processes are dispatched from the ready state and scheduled on the CPU for execution.
• Each process is represented in the OS by a Process Control Block (PCB).
• A process can create other processes, which are known as child processes.
• A process takes more time to terminate than a thread, and it is isolated: it does not share its memory with any other process.
• A process can be in one of the following states: new, ready, running, waiting, terminated, and suspended.
• A process is an active entity, while a program is a passive entity stored on disk (an executable file).
• A program becomes a process when its executable file is loaded into memory.
• Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
• One program can be several processes.
Process Concept: States of a Process

• As a process executes, it changes state


• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a
processor
• terminated: The process has finished execution
Process Table
• To identify processes, the OS assigns a process identification number (PID) to each process.
• The operating system needs to keep track of all the processes.
• A process control block (PCB) is used to track each process’s execution status. Each block of memory contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc.
• All of this information is required and must be saved when the process is switched from one state to another. When the process makes a transition from one state to another, the operating system must update the information in the process’s PCB.
PCB- Information associated with each process
• Process state – running, waiting, etc
• Program counter – location of the next instruction to execute
• CPU registers – contents of all process-centric registers
• CPU scheduling information- priorities, scheduling
queue pointers
• Memory-management information – memory allocated
to the process
• Accounting information – CPU used, clock time elapsed
since start, time limits
• I/O status information – I/O devices allocated to
process, list of open files
• A process control block (PCB) contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs; logically, it contains a PCB for each of the current processes in the system.
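As a rough sketch, a PCB can be pictured as a C structure. The field names below are illustrative, not taken from any particular kernel (Linux, for instance, keeps far more fields in its task_struct):

struct pcb {
    int   pid;                          // process identification number
    enum { NEW, READY, RUNNING,
           WAITING, TERMINATED } state; // process state
    void *program_counter;              // next instruction to execute
    unsigned long registers[16];        // saved CPU registers
    int   priority;                     // CPU scheduling information
    struct pcb *next_in_queue;          // scheduling queue pointer
    void *page_table;                   // memory-management information
    long  cpu_time_used;                // accounting information
    int   open_files[16];               // I/O status: open file descriptors
};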
Context Switching
• When CPU switches to another
process, the system must save the
state of the old process and load
the saved state for the new
process via a context switch
• Context of a process represented
in the PCB
• Context-switch time is overhead;
the system does no useful work
while switching
• The more complex the OS and the PCB, the longer the context switch.
• Context-switch time depends on hardware support.
• Some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once.
Process Synchronization
• On the basis of synchronization, processes are categorized as one of the following two types:
• Independent process: execution of one process does not affect the execution of other processes.
• Cooperative process: execution of one process affects the execution of other processes.
• The process synchronization problem arises in the case of cooperative processes because resources are shared among them.
• When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable is wrong; all the processes race to claim that their output is correct. This condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
• A race condition is a situation that may occur inside a critical section. This happens when the
result of multiple thread execution in the critical section differs according to the order in
which the threads execute.
• Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction. Also, proper thread synchronization using locks or atomic variables can
prevent race conditions.
Understanding Race condition
• A race condition is a scenario that occurs in a multithreaded environment due to multiple threads sharing the same resource or executing the same piece of code. If not handled properly, this can lead to an undesirable situation where the output state depends on the order of execution of the threads.
• Assume your current balance is ₹1000, and you add ₹200 from app A and ₹500 from app B.
• The following race condition occurs:
• App A reads the current balance, which is ₹1000
• App A adds ₹200 to ₹1000 and gets ₹1200 as the final balance
• Meanwhile, app B fetches the current balance, which is still ₹1000, as app A has not executed step 3
• App B adds ₹500 to ₹1000 and gets ₹1500 as the final balance
• App B updates the account balance to ₹1500
• App A updates the account balance to ₹1200
• Thus the final balance is ₹1200 instead of ₹1700
• The three steps in each app’s flow (read, modify, update) belong to its critical section, as the flow allows multiple threads to access the same resource. In this case, the bank balance variable is the shared resource. This is an example of a read-modify-write race condition.
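The lost update above can be reproduced with two POSIX threads. This is a minimal illustrative sketch; the amounts and the deposit helper are our own, not a real banking API:

#include <pthread.h>
#include <stdio.h>

long balance = 1000;                  // shared resource

// read-modify-write with no locking: the three steps can interleave
void *deposit(void *arg) {
    long amount = (long)arg;
    long read = balance;              // step 1: read current balance
    // a context switch here reproduces the lost update
    balance = read + amount;          // steps 2-3: add and write back
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, (void *)200);   // app A
    pthread_create(&b, NULL, deposit, (void *)500);   // app B
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    // correct result is 1700; an unlucky interleaving leaves 1200 or 1500
    printf("balance = %ld\n", balance);
}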
Threads
• A task is accomplished by the execution of a program, which results in a process. Every task incorporates one or many subtasks, and these subtasks are carried out as functions within a program by threads. The operating system (kernel) is unaware of the threads in user space.
• There are two types of threads: user-level threads (ULT) and kernel-level threads (KLT).
• User-level threads: threads in user space, designed by the application developer using a thread library to perform unique subtasks.
• Kernel-level threads: threads in kernel space, designed by the OS developer to perform unique functions of the OS, similar to an interrupt handler.
Difference Between User Mode and Kernel Mode
• Kernel mode vs user mode: In kernel mode, the program has direct and unrestricted access to system resources; in user mode, the application program executes with restricted access.
• Interruptions: In kernel mode, the whole operating system might go down if an interrupt occurs; in user mode, only the single process fails.
• Modes: Kernel mode is also known as master mode, privileged mode, or system mode; user mode is also known as unprivileged mode, restricted mode, or slave mode.
• Virtual address space: In kernel mode, all processes share a single virtual address space; in user mode, each process gets its own separate virtual address space.
• Level of privilege: In kernel mode, applications have more privileges; in user mode, applications have fewer privileges.
• Restrictions: Kernel mode can access both user programs and kernel programs, with no restrictions; user mode cannot directly access kernel programs and must go through system calls.
• Mode bit value: The mode bit of kernel mode is 0; the mode bit of user mode is 1.
Difference between User Level thread and Kernel Level thread
• User threads are implemented by user-level libraries; kernel threads are implemented by the OS.
• The OS doesn’t recognize user-level threads; kernel threads are recognized by the OS.
• Implementation of user threads is easy; implementation of kernel threads is complicated.
• Context switch time is less for user-level threads and more for kernel-level threads.
• User-level context switches require no hardware support; kernel-level switches need hardware support.
• If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
• User-level threads are designed as dependent threads (examples: Java threads, POSIX threads); kernel-level threads are designed as independent threads (examples: Windows, Solaris).
Dependencies between User level thread and Kernel level thread
• Use of a thread library: A thread library acts as an interface for the application developer to create a number of threads (according to the number of subtasks) and to manage those threads. This API for a process can be implemented in kernel space or user space.
• In a real-time application, the necessary thread library is implemented in user space. This avoids system calls to the kernel whenever the application needs thread creation, scheduling, or thread-management activities; thread creation is therefore faster, as it requires only function calls within the process.
• The user address space for each thread is allocated at run time. Overall, this reduces various interface and architectural overheads, as all these functions are independent of kernel support.
• The one major dependency between KLT and ULT arises when a ULT needs kernel resources. Every ULT is associated with a virtual processor called a light-weight process (LWP), which is created and bound to the ULT by the thread library according to the application’s need.
• Whenever a system call is invoked, a kernel-level thread is created and scheduled to the LWPs by the system scheduler. These KLTs are scheduled to access kernel resources by the system scheduler, which is unaware of the ULTs, whereas each KLT is aware of the ULTs associated with it via LWPs.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads, and one thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context switch time between threads is lower compared to
process context switch. Process context switching requires more overhead from the
CPU.
3. Effective utilization of multiprocessor system: If we have multiple threads in a single
process, then we can schedule multiple threads on multiple processor. This will make
process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among all
threads within a process.
Note: stack and registers can’t be shared among the threads. Each thread has its own
stack and registers.
5. Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must use a specific inter-process communication technique to communicate.
6. Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs completed
per unit of time is increased, thus increasing the throughput of the system.
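Points 4 and 5 above can be seen directly with POSIX threads: globals (the data segment) are visible to every thread, while each thread's stack variables stay private. A minimal sketch:

#include <pthread.h>
#include <stdio.h>

int shared_data = 42;                 // data segment: shared by all threads

void *worker(void *arg) {
    int local = (int)(long)arg;       // stack variable: private to this thread
    printf("thread sees shared_data=%d, private local=%d\n",
           shared_data, local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}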
Difference between Process and Thread
• A thread is a path of execution within a process. A
process can contain multiple threads.
• A thread is also known as lightweight process. The
idea is to achieve parallelism by dividing a process
into multiple threads. For example, in a browser,
multiple tabs can be different threads. MS Word
uses multiple threads: one thread to format the
text, another thread to process inputs, etc.
• The primary difference is that threads within the
same process run in a shared memory space,
while processes run in separate memory spaces.
• Threads are not independent of one another like
processes are, and as a result threads share with
other threads their code section, data section, and
OS resources (like open files and signals). But, like
process, a thread has its own program counter
(PC), register set, and stack space.
S.No. Process vs Thread
1. A process is any program in execution. A thread is a segment of a process.
2. The process takes more time to terminate. The thread takes less time to terminate.
3. A process takes more time for creation. A thread takes less time for creation.
4. A process takes more time for context switching. A thread takes less time for context switching.
5. The process is less efficient in terms of communication. The thread is more efficient in terms of communication.
6. Multiprogramming holds the concept of multi-process. We don’t need multiple programs in action for multiple threads, because a single process consists of multiple threads.
7. The process is isolated. Threads share memory.
8. The process is called a heavyweight process. A thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface in the operating system. Thread switching does not require calling the operating system and causing an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes. If a user-level thread is blocked, all other user-level threads of that process are blocked.
11. The process has its own Process Control Block, stack, and address space. A thread has its parent’s PCB, its own Thread Control Block and stack, and a common address space.
12. Changes to the parent process do not affect child processes. Since all threads of the same process share the address space and other resources, any changes to the main thread may affect the behaviour of the other threads of the process.
Principles of Concurrency
• Concurrency is the execution of multiple instruction sequences at the same time. It happens in the operating system when several process threads are running in parallel. The running process threads always communicate with each other through shared memory or message passing. Concurrency results in the sharing of resources, which can cause problems like deadlocks and resource starvation.
• It helps in techniques like coordinating the execution of processes, memory allocation, and execution scheduling for maximizing throughput.
• Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and they both present the same problems.
• The relative speed of execution cannot be predicted. It depends on the
following:
• The activities of other processes
• The way operating system handles interrupts
• The scheduling policies of the operating system
Advantages and Drawbacks of Concurrency
• Advantages
• Running of multiple applications – It enables multiple applications to run at the same time.
• Better resource utilization – Resources that are unused by one application can be used by other applications.
• Better average response time – Without concurrency, each application has to run to completion before the next one can be run.
• Better performance – It enables better performance by the operating system. When one application uses only the processor and another uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each application consecutively.
• Drawbacks
• It is required to protect multiple applications from one another.
• It is required to coordinate multiple applications through additional mechanisms.
• Additional performance overheads and complexities in operating systems are required for
switching among applications.
• Sometimes running too many applications concurrently leads to severely degraded
performance.
Problems in Concurrency
• Sharing global resources – Sharing of global resources safely is difficult. If two processes both make use
of a global variable and both perform read and write on that variable, then the order in which various
read and write are executed is critical.
• Optimal allocation of resources – It is difficult for the operating system to manage the allocation of
resources optimally.
• Locating programming errors – It is very difficult to locate a programming error because reports are
usually not reproducible.
• Locking the channel – It may be inefficient for the operating system to simply lock the channel and
prevents its use by other processes.
• Non-atomic – Operations that are non-atomic but interruptible by multiple processes can cause
problems.
• Race conditions – A race condition occurs when the outcome depends on which of several processes gets to a point first.
• Blocking – Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
• Starvation – It occurs when a process does not obtain the service it needs to progress.
• Deadlock – It occurs when two processes are blocked and hence neither can proceed to execute.
Critical Section Problem
• Critical section is a code segment that can be accessed by only one process at
a time. Critical section contains shared variables which need to be
synchronized to maintain consistency of data variables.
• Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion : If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can participate
in deciding which will enter in the critical section next, and the selection
can not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
The Producer-Consumer Problem (Bounded Buffer Problem)

• It describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue.
• The producer produces an item and puts it into the buffer. If the buffer is already full, the producer has to wait for an empty slot in the buffer.
• The consumer consumes an item from the buffer. If the buffer is already empty, the consumer has to wait for an item in the buffer.
• Implement Peterson’s Algorithm for the two processes using shared memory such that there is mutual exclusion between them. The solution should be free from synchronization problems.
Peterson’s solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
• boolean flag[i] : initialized to FALSE; initially no one is interested in entering the critical section.
• int turn : indicates the process whose turn it is to enter the critical section.
• Peterson’s Solution preserves all three conditions :
• Mutual Exclusion is assured as only one process can
access the critical section at any time.
• Progress is also assured, as a process outside the
critical section does not block other processes from
entering the critical section.
• Bounded Waiting is preserved as every process gets a
fair chance.
• Disadvantages of Peterson’s Solution
• It involves Busy waiting
• It is limited to 2 processes.
Peterson’s algorithm

// code for producer (j)
// producer j is ready to produce an item
flag[j] = true;
// but consumer (i) can consume an item
turn = i;
// if the consumer is ready to consume an item
// and it is the consumer's turn, the producer will wait
while (flag[i] == true && turn == i)
    ;
// otherwise the producer will produce an item
// and put it into the buffer (critical section)
// now the producer is out of the critical section
flag[j] = false;
// end of code for producer

// code for consumer (i)
// consumer i is ready to consume an item
flag[i] = true;
// but producer (j) can produce an item
turn = j;
// if the producer is ready to produce an item
// and it is the producer's turn, the consumer will wait
while (flag[j] == true && turn == j)
    ;
// otherwise the consumer will consume an item
// from the buffer (critical section)
// now the consumer is out of the critical section
flag[i] = false;
// end of code for consumer
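A caveat: on modern hardware, the plain loads and stores above can be reordered, so a working version is usually written with sequentially consistent atomics. The following C11 adaptation is our own illustrative sketch, not part of the original slide:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int  turn;
long counter = 0;                     // shared data protected by the lock

void lock(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], true);  // I am interested
    atomic_store(&turn, other);       // but let the other go first
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                             // busy wait
}

void unlock(int self) { atomic_store(&flag[self], false); }

void *worker(void *arg) {
    int self = (int)(long)arg;
    for (int i = 0; i < 1000000; i++) {
        lock(self);
        counter++;                    // critical section
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
}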
TestAndSet
• TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take either of two values: 0 (unlocked) and 1 (locked).
• Before entering its critical section, a process inquires about the lock. If it is locked, the process keeps waiting until the lock becomes free; if it is not locked, the process takes the lock and executes its critical section.
• In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting is not.

The enter_CS() and leave_CS() functions to implement the critical section of a process are realized using the test-and-set instruction as follows:

int TestAndSet(int &lock)
{
    int initial = lock;
    lock = 1;
    return initial;
}
void enter_CS(X)
{
    while (TestAndSet(X))
        ;
}
void leave_CS(X)
{
    X = 0;
}

In the above solution, X is a memory location associated with the CS and is initialized to 0.
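In C11, the test-and-set primitive is available as atomic_exchange, which stores a new value and returns the old one in a single indivisible step. A minimal sketch of the same enter/leave pair:

#include <stdatomic.h>

atomic_int lock_var = 0;              // 0 = unlocked, 1 = locked

void enter_CS(atomic_int *lock) {
    // test-and-set: atomically write 1 and get the previous value;
    // spin until the previous value was 0 (the lock was free)
    while (atomic_exchange(lock, 1) == 1)
        ;
}

void leave_CS(atomic_int *lock) {
    atomic_store(lock, 0);            // unlock
}

// usage: enter_CS(&lock_var); ...critical section... leave_CS(&lock_var);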
Dekker’s algorithm
• Dekker’s algorithm allows two threads to share a single-use resource without conflict, using only shared memory for communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.
• Although there are many versions of Dekker’s Solution, the final or 5th version is the one that satisfies all of the conditions and is the most efficient of them all.

Main()
{
    // to denote which thread will enter next
    int favouredthread = 1;
    // flags to indicate if each thread is in queue to enter its critical section
    boolean thread1wantstoenter = false;
    boolean thread2wantstoenter = false;
    startThreads();
}

Thread1()
{
    do {
        thread1wantstoenter = true;
        // entry section: wait until thread2 wants to enter its critical section
        while (thread2wantstoenter == true) {
            // if the 2nd thread is more favoured
            if (favouredthread == 2) {
                // give access to the other thread
                thread1wantstoenter = false;
                // wait until this thread is favoured
                while (favouredthread == 2)
                    ;
                thread1wantstoenter = true;
            }
        }
        // critical section
        // exit section: favour the 2nd thread
        favouredthread = 2;
        // indicate thread1 has completed its critical section
        thread1wantstoenter = false;
        // remainder section
    } while (completed == false)
}

Thread2()
{
    do {
        thread2wantstoenter = true;
        // entry section: wait until thread1 wants to enter its critical section
        while (thread1wantstoenter == true) {
            // if the 1st thread is more favoured
            if (favouredthread == 1) {
                // give access to the other thread
                thread2wantstoenter = false;
                // wait until this thread is favoured
                while (favouredthread == 1)
                    ;
                thread2wantstoenter = true;
            }
        }
        // critical section
        // exit section: favour the 1st thread
        favouredthread = 1;
        // indicate thread2 has completed its critical section
        thread2wantstoenter = false;
        // remainder section
    } while (completed == false)
}
Semaphores and their types
• A semaphore is a compound data type with two fields: a non-negative integer S.V and a set of processes waiting in a queue S.L.
• The semaphore was proposed by Dijkstra in 1965 as a very significant technique for managing concurrent processes by using a simple integer value, known as a semaphore. A semaphore is simply an integer variable that is shared between threads.
• It is used to solve critical section problems by means of two atomic operations, wait and signal, that are used for process synchronization.
• Semaphores are of two types:
• Binary semaphore – This is also known as a mutex lock. It can have only two values, 0 and 1. Its value is initialized to 1. It is used to implement solutions to critical section problems with multiple processes.
• Counting semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
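For a concrete feel of wait and signal, here is a sketch using POSIX semaphores on Linux (sem_wait is the P/wait operation and sem_post the V/signal operation; the counter and worker function are our own illustration; compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                          // binary semaphore (initial value 1)
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);             // P / wait / down
        shared++;                     // critical section
        sem_post(&mutex);             // V / signal / up
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);           // value 1 -> binary semaphore
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&mutex);
}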
Semaphore Implementation
Some points regarding the P and V operations:
• The P operation is also called the wait, sleep, or down operation, and the V operation is also called the signal, wake-up, or up operation.
• Both operations are atomic, and the semaphore is typically initialized to one. Here atomic means that the read, modify, and update of the variable happen together, with no pre-emption: in between the read, modify, and update, no other operation that might change the variable is performed.
• A critical section is surrounded by both
operations to implement process
synchronization. See the below image. The
critical section of Process P is in between P
and V operation.
Implementation of binary semaphores

struct semaphore {
    enum value(0, 1);
    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down operation
    Queue<process> q;
};

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    }
    else {
        // add the process to the waiting queue
        q.push(P);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    }
    else {
        // select a process from the waiting queue
        Process p = q.pop();
        wakeup(p);
    }
}
Implementation of binary semaphores
• Now, let us see how it implements mutual
exclusion. Let there be two processes P1 and
P2 and a semaphore s is initialized as 1. Now if
suppose P1 enters in its critical section then
the value of semaphore s becomes 0. Now if
P2 wants to enter its critical section then it will
wait until s > 0, this can only happen when P1
finishes its critical section and calls V
operation on semaphores.
• This way, mutual exclusion is achieved. Look at the image of the binary semaphore for details.
• Limitations of Semaphores:
• One of the biggest limitations of semaphores is priority inversion.
• Deadlock: suppose a process tries to wake up another process that is not in a sleep state; the wakeup is lost, and processes may block indefinitely.
• The operating system has to keep track of all calls to wait and signal on the semaphore.
Implementation of counting semaphore

struct Semaphore {
    int value;
    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down operation
    Queue<process> q;
};

P(Semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0) {
        // add the process to the waiting queue
        // (here p is the currently executing process)
        q.push(p);
        block();
    }
    else
        return;
}

V(Semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0) {
        // processes are waiting: remove a process p from the queue
        Process p = q.pop();
        wakeup(p);
    }
    else
        return;
}
else
Producer Consumer Problem using Semaphores
• Problem Statement – We have a buffer of fixed size. A producer can produce an item and place it in the buffer. A consumer can pick items and consume them. We need to ensure that when the producer is placing an item in the buffer, the consumer is not consuming an item at the same time. In this problem, the buffer is the critical section.
• To solve this problem, we need two counting semaphores, Full and Empty. “Full” keeps track of the number of items in the buffer at any given time, and “Empty” keeps track of the number of unoccupied slots.
• When the producer produces an item, the value of “empty” is reduced by 1 because one slot will be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of “full” is increased by 1, and the value of mutex is increased by 1 because the producer’s task is done and the consumer can access the buffer.
• As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the value of mutex is also reduced, so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, the value of “empty” is increased by 1, and mutex is increased so that the producer can access the buffer again.
• Initialization of semaphores:
• mutex = 1
• Full = 0 // Initially, all slots are empty. Thus full slots are 0
• Empty = n // All slots are empty initially

Producer:
do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while (true);

Consumer:
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (true);
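The pseudocode above translates almost line for line into POSIX semaphores. This is a sketch under our own assumptions (buffer size 5, 20 items produced); compile with -pthread on Linux:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                            // buffer size (assumed)
int buffer[N], in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;  // Empty, Full, mutex

void *producer(void *arg) {
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);        // wait(empty)
        sem_wait(&mutex);              // wait(mutex)
        buffer[in] = item;             // place item in buffer
        in = (in + 1) % N;
        sem_post(&mutex);              // signal(mutex)
        sem_post(&full_slots);         // signal(full)
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);         // wait(full)
        sem_wait(&mutex);              // wait(mutex)
        int item = buffer[out];        // remove item from buffer
        out = (out + 1) % N;
        sem_post(&mutex);              // signal(mutex)
        sem_post(&empty_slots);        // signal(empty)
        printf("consumed %d\n", item); // consume the item
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);      // Empty = n
    sem_init(&full_slots, 0, 0);       // Full = 0
    sem_init(&mutex, 0, 1);            // mutex = 1
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
}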
Inter Process Communication
• Cooperating processes need inter-process communication (IPC).
• Two models of IPC:
• Shared memory (example: Producer-Consumer problem)
• Message passing
• Examples of IPC systems:
• POSIX: uses the shared memory method
• Mach: uses message passing
• Windows XP: uses message passing via local procedure calls
Message Passing Method
• Processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, there is no need to establish it again).
• Start exchanging messages using basic primitives. We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
• The message size can be fixed or variable. If it is fixed, it is easy for the OS designer but complicated for the programmer; if it is variable, it is easy for the programmer but complicated for the OS designer.
• A standard message has two parts: header and body.
• The header is used for storing the message type, destination id, source id, message length, and control information.
• The control information contains items like what to do if the receiver runs out of buffer space, a sequence number, and a priority. Generally, messages are sent in FIFO style.
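On Linux, send/receive primitives of this kind are provided, for example, by POSIX message queues. A minimal single-process sketch (the queue name is our own; link with -lrt):

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>

#define QNAME "/demo_queue"            // hypothetical queue name

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open(QNAME, O_CREAT | O_RDWR, 0644, &attr);

    mq_send(mq, "hello", 6, 0);        // send(message)

    char buf[64];
    unsigned prio;
    mq_receive(mq, buf, sizeof buf, &prio);  // receive(message)
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink(QNAME);                  // remove the queue
}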
Message Passing Method
• Direct communication links are implemented when the processes use a specific process identifier for communication, but it is hard to identify the sender ahead of time. An example is a print server.
• Properties of a direct communication link:
• Links are established automatically
• A link is associated with exactly one pair of communicating processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
• Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender keeps the message in the mailbox and the receiver picks it up.
• Properties of an indirect communication link:
• A link is established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• A link may be unidirectional or bi-directional
Synchronous and Asynchronous Message Passing
• A process that is blocked is one that is waiting for some event, such as a resource becoming
available or the completion of an I/O operation. IPC is possible between the processes on same
computer as well as on the processes running on different computer i.e. in networked/distributed
system.
• In both cases, the process may or may not be blocked while sending a message or attempting to
receive a message so message passing may be blocking or non-blocking. Blocking is considered
synchronous and blocking send means the sender will be blocked until the message is received by
receiver. Similarly, blocking receive has the receiver block until a message is available.
• Non-blocking is considered asynchronous: a non-blocking send has the sender send the message and continue, and a non-blocking receive has the receiver receive either a valid message or null. After careful analysis, we can conclude that it is more natural for a sender to be non-blocking after message passing, as there may be a need to send the message to different processes; however, the sender expects an acknowledgment from the receiver in case the send fails. Similarly, it is more natural for a receiver to be blocking after issuing the receive, as the information from the received message may be used for further execution.
• At the same time, if the message send keeps failing, the receiver will have to wait indefinitely. That is why we also consider the other possibilities of message passing. There are basically three preferred combinations:
• Blocking send and blocking receive
• Non-blocking send and non-blocking receive
• Non-blocking send and blocking receive (most used)
Examples of IPC Systems - POSIX
Producer Consumer
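The slide's POSIX producer-consumer figure is not reproduced here; as a substitute, the following sketch shows the POSIX shared-memory calls (shm_open, ftruncate, mmap) such a program would use. The object name and message are our own illustration; in practice the producer and consumer run as separate processes that open the same name:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

#define SHM_NAME "/demo_shm"           // hypothetical object name
#define SHM_SIZE 4096

int main(void) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0644);
    ftruncate(fd, SHM_SIZE);           // set the size of the region
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    strcpy(region, "message in shared memory");   // producer writes
    printf("%s\n", region);                       // consumer reads

    munmap(region, SHM_SIZE);
    shm_unlink(SHM_NAME);              // remove the object
}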
Remote Procedure Call (RPC)
• Remote Procedure Call (RPC) is a powerful technique for constructing
distributed, client-server based applications. It is based on extending the
conventional local procedure calling so that the called procedure need not
exist in the same address space as the calling procedure. The two processes
may be on the same system, or they may be on different systems with a
network connecting them.
• RPCs are a form of inter-process communication (IPC), in that different
processes have different address spaces: if on the same host machine, they
have distinct virtual address spaces, even though the physical address space is
the same; while if they are on different hosts, the physical address space is
different. Many different (often incompatible) technologies have been used to
implement the concept
Working of RPC
1. The calling environment is suspended, procedure parameters are transferred across the network to the environment where the procedure is to execute, and the procedure is executed there.
2. When the procedure finishes and produces its results, the results are transferred back to the calling environment, where execution resumes as if returning from a regular procedure call.

NOTE: RPC is especially well suited for client-server (e.g. query-response) interaction in which the flow of control alternates between the caller and callee. Conceptually, the client and server do not both execute at the same time. Instead, the thread of execution jumps from the caller to the callee and then back again.
Classical Problems in Concurrency
• The Dining Philosopher Problem – States that there are 5 philosophers engaged in two activities, thinking and eating. Meals are taken communally at a table with five plates and five forks, arranged in a cyclic manner as shown in the figure.
• Constraints and conditions for the problem:
• Every philosopher needs two forks in order to eat.
• Every philosopher may pick up the fork on the left or the right, but only one fork at a time.
• Philosophers only eat when they hold two forks. We have to design a protocol, i.e. a pre- and post-protocol, which ensures that a philosopher only eats if he or she has two forks.
• Each fork is either clean or dirty.

Structure of philosopher i:
repeat
    wait(chopstick[i]);
    wait(chopstick[(i+1) mod 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) mod 5]);
    ...
    think
    ...
until false;
Solution of the Dining Philosopher Problem
Correctness properties the solution needs to satisfy:
• Mutual Exclusion Principle – No two philosophers can hold the same fork simultaneously.
• Free from Deadlock – Each philosopher gets the chance to eat within a certain finite time.
• Free from Starvation – When several philosophers are waiting, each gets a chance to eat in a while.
• No strict alternation.
• Proper utilization of time.

Algorithm (a fifth semaphore, room, admits at most four philosophers to the table at once):

semaphore array[0..4] fork ← [1, 1, 1, 1, 1]
semaphore room ← 4
loop forever
    p1: think
    p2: wait(room)
    p3: wait(fork[i])
    p4: wait(fork[i + 1])
    p5: eat
    p6: signal(fork[i])
    p7: signal(fork[i + 1])
    p8: signal(room)
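The room/fork scheme above can be tried out with POSIX semaphores and threads. A sketch under our own assumptions (three rounds of eating per philosopher; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
sem_t fork_sem[N];                     // one semaphore per fork
sem_t room;                            // admits at most N-1 philosophers

void *philosopher(void *arg) {
    int i = (int)(long)arg;
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&room);                  // wait(room)
        sem_wait(&fork_sem[i]);           // wait(fork[i])
        sem_wait(&fork_sem[(i + 1) % N]); // wait(fork[i+1])
        printf("philosopher %d eats\n", i);
        sem_post(&fork_sem[i]);           // signal(fork[i])
        sem_post(&fork_sem[(i + 1) % N]); // signal(fork[i+1])
        sem_post(&room);                  // signal(room)
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    sem_init(&room, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
}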
Classical Problems in Concurrency
• Sleeping Barber Problem – The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber chair, and n chairs for waiting customers, if there are any, to sit on.
• If there is no customer, the barber sleeps in his own chair.
• When a customer arrives, he has to wake up the barber.
• If there are many customers and the barber is cutting a customer’s hair, then the remaining customers either wait, if there are empty chairs in the waiting room, or leave if no chairs are empty.
Solution of Sleeping Barber problem

Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;

Barber {
    while (true) {
        /* wait for a customer (sleep if there is none). */
        down(Customers);
        /* mutex to protect the number of available seats. */
        down(Seats);
        /* a chair gets free. */
        FreeSeats++;
        /* bring in a customer for a haircut. */
        up(Barber);
        /* release the mutex on the chairs. */
        up(Seats);
        /* barber is cutting hair. */
    }
}

Customer {
    while (true) {
        /* protects the seats so only one customer at a time tries to sit down. */
        down(Seats);
        if (FreeSeats > 0) {
            /* sit down. */
            FreeSeats--;
            /* notify the barber. */
            up(Customers);
            /* release the lock. */
            up(Seats);
            /* wait in the waiting room if the barber is busy. */
            down(Barber);
            /* customer is having a haircut. */
        }
        else {
            /* release the lock. */
            up(Seats);
            /* customer leaves. */
        }
    }
}
