UNIT-2: Process Synchronization
AKTU OS UNIT 2

Dr Vikas Chaudhary

Professor & Head


Process Synchronization


 Process Synchronization is the coordination of execution of
multiple processes in a multi-process system to ensure that
they access shared resources in a controlled and predictable
manner.
 The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering
with each other, and to prevent the possibility of inconsistent
data due to concurrent access. To achieve this, various
synchronization techniques such as semaphores, monitors, and
critical sections are used.
 Concurrent access can lead to inconsistency of shared data: a change made by one process is not necessarily reflected when other processes access the same shared data. To avoid this type of data inconsistency, the processes need to be synchronized with each other.
Process Synchronization

 In a multi-process system, synchronization is
necessary to ensure data consistency and integrity,
and to avoid the risk of deadlocks and other
synchronization problems.
 Process synchronization is an important aspect of
modern operating systems, and it plays a crucial role
in ensuring the correct and efficient functioning of
multi-process systems.



How Process Synchronization Works?

For example, suppose process A is changing the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.



On the basis of synchronization, processes are categorized
as one of the following two types:
 Independent Process: Execution of one process does not affect the execution of other processes.
 Cooperative Process: Execution of one process affects the execution of other processes.
The process synchronization problem arises in the case of cooperative processes because resources are shared among cooperative processes.



Race Condition
 When more than one process executes the same code or accesses the same memory or any shared variable, there is a possibility that the output or the value of the shared variable is wrong; because all the processes race to produce the result, this situation is known as a race condition.
 When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
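As an illustration (not part of the slides), a minimal C sketch of a race condition: two threads increment a shared counter with no synchronization, so the final value is unpredictable and usually wrong (compile with gcc -pthread):

/* Race condition sketch: two unsynchronized threads update one counter. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                   /* shared variable */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the race usually yields a smaller value. */
    printf("counter = %ld\n", counter);
    return 0;
}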



Inter Process Communication (IPC)


A process can be of two types:
1. Independent process.
2. Co-operating process.
 An independent process is not affected by the
execution of other processes while a co-operating
process can be affected by other executing processes.
 Inter-process communication (IPC) is a mechanism
that allows processes to communicate with each
other and synchronize their actions. The
communication between these processes can be seen
as a method of co-operation between them.
Inter Process Communication (IPC) Models
There are two fundamental models of interprocess
communication :
1. Shared Memory

2. Message passing
In the shared-memory model, a region of memory that is
shared by cooperating processes is established.
Processes can then exchange information by reading and
writing data to the shared region.
In the message-passing model, communication takes place
by means of messages exchanged between the
cooperating processes. The two communication models are contrasted in Figure 3.12.
Inter Process Communication (IPC)

[Figure 3.12: Communication models. (a) Message passing. (b) Shared memory.]


 Both of the models just mentioned are common in
operating systems, and many systems implement
both.

 Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be
avoided.
 Message passing is also easier to implement in a
distributed system than shared memory.
 Shared memory can be faster than message passing,
since message-passing systems are typically
implemented using system calls and thus require the
more time-consuming task of kernel intervention.



IPC

 In shared-memory systems, system calls are required
only to establish shared-memory regions.
 Once shared memory is established, all accesses are
treated as routine memory accesses, and no
assistance from the kernel is required.
 Recent research on systems with several processing
cores indicates that message passing provides better
performance than shared memory on such systems.



Shared-Memory Systems


 Interprocess communication using shared memory
requires communicating processes to establish a
region of shared memory.
 Typically, a shared-memory region resides in the
address space of the process creating the shared-
memory segment.
 Other processes that wish to communicate using this
shared-memory segment must attach it to their
address space.
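As an illustration (not from the slides), a minimal POSIX sketch of creating such a region with shm_open() and attaching it with mmap(); the region name "/demo_shm" and the fixed size are arbitrary choices. A cooperating process would open the same name and map it into its own address space:

/* Shared-memory sketch: create, size, and map a POSIX shared-memory object. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t SIZE = 4096;
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, SIZE);                       /* set the region's size */

    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(ptr, "written through shared memory");   /* ordinary memory access */
    printf("%s\n", ptr);

    munmap(ptr, SIZE);
    close(fd);
    shm_unlink("/demo_shm");
    return 0;
}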



Shared-Memory Systems

 Recall that, normally, the operating system tries to
prevent one process from accessing another process’s
memory.
 Shared memory requires that two or more processes
agree to remove this restriction. They can then
exchange information by reading and writing data in
the shared areas.
 To illustrate the concept of cooperating processes, let's consider the producer–consumer problem.



The producer–consumer problem
 A producer process produces information that is

consumed by a consumer process.
 For example, a compiler may produce assembly code
that is consumed by an assembler.
 The assembler, in turn, may produce object modules
that are consumed by the loader.
 The producer–consumer problem also provides a
useful metaphor for the client–server paradigm.
 We generally think of a server as a producer and a
client as a consumer. For example, a web server
produces (that is, provides) HTML files and images,
which are consumed (that is, read) by the client web
browser requesting the resource.
The producer–consumer problem
 One solution to the producer–consumer problem
uses shared memory.

 To allow producer and consumer processes to run
concurrently, we must have available a buffer of
items that can be filled by the producer and emptied
by the consumer.
 This buffer will reside in a region of memory that is
shared by the producer and consumer processes.
 A producer can produce one item while the
consumer is consuming another item.
 The producer and consumer must be synchronized,
so that the consumer does not try to consume an
item that has not yet been produced.



The producer–consumer problem
Two types of buffers can be used.

 Unbounded buffer- places no practical limit on the size of
the buffer. The consumer may have to wait for new items,
but the producer can always produce new items.
 Bounded buffer- assumes a fixed buffer size. In this case,
the consumer must wait if the buffer is empty, and the
producer must wait if the buffer is full.
Let’s look more closely at how the bounded buffer illustrates
interprocess communication using shared memory.
The following variables reside in a region of memory shared
by the producer and consumer processes:
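The declarations themselves appear in the slides only as a figure; a minimal sketch consistent with the description on the next slide (BUFFER_SIZE and the item type are assumed names, following the standard bounded-buffer formulation) is:

#define BUFFER_SIZE 10

typedef struct {
    /* fields of one item */
    int data;
} item;

item buffer[BUFFER_SIZE];   /* circular array shared by producer and consumer */
int in = 0;                 /* next free position in the buffer */
int out = 0;                /* first full position in the buffer */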




 The shared buffer is implemented as a circular array
with two logical pointers: in and out.
 The variable in points to the next free position in
the buffer; out points to the first full position in the
buffer.
 The buffer is empty when in == out; the buffer is
full when ((in + 1) % BUFFER_SIZE) == out.



The producer process has a local variable next_produced in which the new item to be produced is stored.
The consumer process has a local variable next_consumed in which the item to be consumed is stored.
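The producer and consumer loops are shown in the slides only as figures; a minimal sketch consistent with the circular-buffer description above (busy-waiting on the full and empty conditions) is:

/* Producer (sketch) */
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                           /* buffer full: do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer (sketch) */
item next_consumed;
while (true) {
    while (in == out)
        ;                           /* buffer empty: do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}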
Message-Passing Systems

 Message passing provides a mechanism to allow
processes to communicate and to synchronize their
actions without sharing the same address space.
 A message-passing facility provides at least two
operations:
send(message) and receive(message).
 Messages sent by a process can be of either fixed or variable size.



Message-Passing Systems

 Here are several methods for logically implementing
a link and the send()/receive() operations:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering



We look at issues related to each of these features next.
1. Naming
 Processes that want to communicate must have a way to
refer to each other.

 They can use either direct or indirect communication.
a) Direct communication: Each process that wants to
communicate must explicitly name the recipient or sender
of the communication.
 In this scheme, the send() and receive() primitives are
defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from
process Q.
This scheme exhibits symmetry in addressing; that is, both
the sender process and the receiver process must name the
other to communicate.
 A variant of this scheme employs asymmetry in
addressing.
 Here, only the sender names the recipient; the recipient is


not required to name the sender.
 In this scheme, the send() and receive() primitives are
defined as follows:
• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any
process.
The variable id is set to the name of the process with which
communication has taken place.



B) Indirect communication:
 The messages are sent to and received from mailboxes,
or ports.


 A process can communicate with another process via a
number of different mailboxes, but two processes can
communicate only if they have a shared mailbox.
 The send() and receive() primitives are defined as
follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from
mailbox A



2. Buffering: Whether communication is direct or indirect,
messages exchanged by communicating processes reside in a
temporary queue.


Basically, such queues can be implemented in three ways:
• Zero capacity: The queue has a maximum length of zero;
thus, the link cannot have any messages waiting in it. In
this case, the sender must block until the recipient receives
the message.
• Bounded capacity: The queue has finite length n; thus, at
most n messages can reside in it. If the queue is not full
when a new message is sent, the message is placed in the
queue and the sender can continue execution without
waiting. The link’s capacity is finite, however. If the link is
full, the sender must block until space is available in the
queue.
Buffering

• Unbounded capacity: The queue’s length is
potentially infinite; thus, any number of messages can
wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a
message system with no buffering.
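As an illustration (not from the slides), a minimal POSIX message-queue sketch of bounded-capacity buffering: with mq_maxmsg set to 8, a sender blocks once 8 messages are waiting, and a receiver blocks while the queue is empty. The queue name "/demo_q" is an arbitrary choice (link with -lrt on Linux):

/* Bounded-capacity message passing with a POSIX message queue. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);      /* blocks if the queue is full */

    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);    /* blocks if the queue is empty */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}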



Critical Section Problem
 A critical section is a part (segment) of a process where the code that accesses the shared resources is written.
 Consider a system consisting of n processes {P0, P1, ...,
Pn−1}. Each process has a segment of code, called a
critical section, in which the process may be changing
common variables, updating a table, writing a file, and
so on.
 The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.
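The general structure that each process is usually assumed to follow can be sketched as below (the entry/exit section names are the standard ones used with this problem, not code taken from the slides):

do {
    /* entry section: request permission to enter the critical section */

    /* critical section */

    /* exit section: release the critical section */

    /* remainder section */
} while (true);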





Critical Section Problem

A solution to the critical section problem must satisfy the


following three conditions:
1. Mutual Exclusion
If one process Pi is executing its critical section then no
other process can execute in its critical section (CS). Out of
a group of cooperating processes, only one process can be
in its critical section at a given point of time.
2. Progress
If no process is executing in its critical section and some
processes wish to enter their critical sections, then only
those processes that are not executing in their remainder
sections can participate in deciding which will enter its
critical section next, and this selection cannot be postponed
indefinitely.
Critical Section Problem
3. Bounded Waiting
There exists a bound, or limit, on the number of times
that other processes are allowed to enter their critical
sections after a process has made a request to enter
its critical section and before that request is granted.



Solution to Critical Section (CS) Problem


There are two famous approaches that provide a solution to the critical section problem:
1. Peterson’s Solution
2. Dekker’s Solution



Peterson’s Solution (Software approach)

Peterson’s Solution is a classical software-based solution to the critical section problem.
1. Software Approach (Peterson’s Solution):
The software approach known as Peterson’s Solution works well for synchronization of two processes.
It uses two shared variables in the entry section to maintain consistency: a flag (boolean array) and a turn variable (indicating which process may enter).
It satisfies all three critical section requirements.



Peterson’s Solution (Software approach)
A classic software-based solution to the critical-section
problem known as Peterson’s solution.


 Does not require strict alternation.
 Peterson’s solution is restricted to two processes that
alternate execution between their CSs and remainder
sections. The processes are numbered P0 and P1.
 Peterson’s solution requires two data items to be shared
between the two processes:
int turn;
boolean flag[2];
 The variable turn indicates whose turn it is to enter its CS. That is, if
turn == i, then process Pi is allowed to execute in its CS.
 The flag array is used to indicate if a process is ready to enter its CS.
For example, if flag[i] is true, this value indicates that Pi is ready to
enter its CS.

Peterson’s Solution

[Figure: the structure of process Pi in Peterson’s solution]
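The code itself appears in the slides only as a figure. A minimal sketch of Peterson’s algorithm for process Pi, consistent with the turn and flag variables described above (here j denotes the other process, j = 1 - i), would be:

int turn;
boolean flag[2];

/* Process Pi */
do {
    flag[i] = true;                 /* Pi is ready to enter its CS */
    turn = j;                       /* give the other process the turn */
    while (flag[j] && turn == j)
        ;                           /* busy wait */

    /* critical section */

    flag[i] = false;                /* Pi leaves its CS */

    /* remainder section */
} while (true);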


Dekker’s Algorithm

Process Pi:
do {
    flag[i] = true;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = false;
            while (turn == j) ; /* do nothing */
            flag[i] = true;
        }
    } /* end of while loop */

    /* critical section */

    turn = j;
    flag[i] = false;

    /* remainder section */
} while (true);

Process Pj:
do {
    flag[j] = true;
    while (flag[i])
    {
        if (turn == i)
        {
            flag[j] = false;
            while (turn == i) ; /* do nothing */
            flag[j] = true;
        }
    } /* end of while loop */

    /* critical section */

    turn = i;
    flag[j] = false;

    /* remainder section */
} while (true);
Semaphores
 In 1965, Dijkstra proposed a new and very significant

technique for managing concurrent processes by using
the value of a simple integer variable.
 In very simple words, a semaphore is a variable that can hold only a non-negative integer value, shared between all the processes, with the operations wait and signal.
 Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, used for process synchronization.
 The definitions of wait and signal are as follows:



Semaphores

 Wait: The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, the process keeps waiting (testing S) until S becomes positive, and only then decrements it.



Semaphores

 Signal: The signal operation increments the value of its argument S.



Types of Semaphores
 There are two main types of semaphores i.e. counting

semaphores and binary semaphores. Details about these
are given as follows:
 Binary Semaphore: Mutex lock is another name for a binary semaphore. It can have only two possible values, 0 and 1, and its value is set to 1 by default. It is used by multiple processes to solve the critical section problem.
 Counting Semaphore: Its value can range over an unrestricted (non-negative) domain. It is used to control access to a resource that has multiple instances.



Critical Section Problem Solution using
Semaphore

Process Pi:
do
{
    wait(S);
    /* critical section */
    signal(S);
    /* remainder section */
} while (true);

Process Pj:
do
{
    wait(S);
    /* critical section */
    signal(S);
    /* remainder section */
} while (true);

wait(S)
{
    while (S <= 0)
        ;          /* busy wait */
    S = S - 1;
}

signal(S)
{
    S = S + 1;
}
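As a concrete illustration (not from the slides), a self-contained C program using POSIX semaphores: two threads use a binary semaphore S, initialized to 1, as the entry and exit sections around their critical section (compile with gcc -pthread):

/* Critical-section protection with a POSIX binary semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;              /* binary semaphore, initialized to 1 */
int shared = 0;       /* shared data protected by S */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);      /* wait(S): entry section */
        shared++;          /* critical section */
        sem_post(&S);      /* signal(S): exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&S, 0, 1);                 /* initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);    /* always 200000 with the semaphore */
    sem_destroy(&S);
    return 0;
}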
Dining Philosophers Problem
 The dining philosophers problem states that there are five philosophers sitting around a dining table.
 A philosopher can either eat or think.
 The dining table has five chopsticks and a bowl of rice in the middle.
 When a philosopher wants to eat, he needs both his right and left chopsticks.
 When a philosopher wants to think, he puts down both chopsticks.
 A hungry philosopher may eat only if both chopsticks are available.



Dining Philosophers Problem

 The dining philosophers problem is a classic synchronization problem, as it demonstrates a large class of concurrency-control problems.





Solution to Dining Philosophers
Problem using Semaphore

 A solution to the Dining Philosophers Problem is to use a semaphore to represent each chopstick. A chopstick can be picked up by executing a wait operation on its semaphore and released by executing a signal operation on it.
 The structure of the chopstick is shown below:
semaphore chopstick[5];
The value of each semaphore is initialized to 1:

chopstick:  1   1   1   1   1
(position)  1   2   3   4   5

 Initially the elements of chopstick are set to 1, as the chopsticks are on the table and have not been picked up by any philosopher.
Semaphore solution: The structure of philosopher i is given as follows:

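The structure itself appears in the slides only as a figure; a minimal sketch consistent with the description that follows is:

/* Philosopher i (sketch, using the chopstick semaphore array above) */
do {
    wait(chopstick[i]);                 /* pick up the left chopstick */
    wait(chopstick[(i + 1) % 5]);       /* pick up the right chopstick */

    /* eat */

    signal(chopstick[i]);               /* put down the left chopstick */
    signal(chopstick[(i + 1) % 5]);     /* put down the right chopstick */

    /* think */
} while (true);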


 In the above structure, the wait operation is first performed on chopstick[i] and chopstick[(i+1) % 5].
 This means that philosopher i has picked up the chopsticks on his sides. Then the eating function is performed.
 After that, the signal operation is performed on chopstick[i] and chopstick[(i+1) % 5]. This means that philosopher i has eaten and put down the chopsticks on his sides. Then the philosopher goes back to thinking.



Difficulty with the solution
•The above solution makes sure that no two neighboring
philosophers can eat at the same time.

•But this solution can lead to a deadlock.
•This may happen if all the philosophers pick their left
chopstick simultaneously. Then none of them can eat
and deadlock occurs.



Sleeping Barber problem
 Dijkstra introduced the sleeping barber problem.

 The sleeping barber problem is a synchronization
problem in computer science that deals with the
management of a shared resource by multiple processes.
 The barbershop is divided into two rooms, the waiting
room, and the workroom. The waiting room has n chairs
for waiting customers, and the workroom only has a
barber chair.
 Now, if there is no customer, then the barber sleeps in his
own chair(barber chair).



Sleeping Barber problem
 Whenever a customer arrives, he has to wake up the barber to get his haircut.
 If there are multiple customers and the barber is cutting a customer's hair, then the remaining customers wait in the waiting room with "n" chairs (if there are empty chairs), or they leave if there are no empty chairs in the waiting room.





This problem includes three semaphores.
1. Semaphore Customer: Counts the number of
customers present in the waiting room

(customer in the barber chair is not included
because he is not waiting).
2. Semaphore Barber: holds 0/1, indicating whether the barber is idle or working.
3. Semaphore Mutex: used to provide the mutual exclusion required when the processes update the shared state (the count of waiting customers).



 Integer variable Waiting: counts the number of waiting customers.
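The slides stop at this variable list; a minimal sketch of the classical solution built from these semaphores (the procedure bodies below follow the standard formulation and are not taken from the slides; n is the number of waiting-room chairs):

semaphore customers = 0;   /* number of customers waiting for service */
semaphore barber = 0;      /* whether the barber is ready to cut hair */
semaphore mutex = 1;       /* protects the waiting counter */
int waiting = 0;           /* customers sitting in the waiting room */

/* Barber process */
while (true) {
    wait(customers);           /* sleep if there are no customers */
    wait(mutex);
    waiting = waiting - 1;     /* one customer leaves the waiting room */
    signal(barber);            /* the barber is ready to cut hair */
    signal(mutex);
    /* cut hair */
}

/* Customer process */
wait(mutex);
if (waiting < n) {             /* a free chair is available */
    waiting = waiting + 1;
    signal(customers);         /* wake the barber if he is asleep */
    signal(mutex);
    wait(barber);              /* wait until the barber is ready */
    /* get haircut */
} else {
    signal(mutex);             /* no free chair: leave the shop */
}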

