
UNIT II

Principle of Concurrency
• Concurrency is the execution of multiple instruction sequences at the
same time. It occurs in an operating system when several process
threads run in parallel. These threads communicate with each other
through shared memory or message passing. Because concurrency involves
sharing resources, it can lead to problems such as deadlocks and
resource starvation.
• Concurrency underlies techniques such as coordinating the execution of
processes, memory allocation, and execution scheduling, all aimed at
maximizing throughput.
Independent Process: Independent processes are those processes whose task does not
depend on any other process.

Cooperating Process: Cooperating processes are those processes that depend on
other processes. They work together to achieve a common task in an
operating system.
There are several motivations for allowing concurrent execution:

Physical resource sharing: multiuser environment, since hardware
resources are limited.

Logical resource sharing: shared files (the same piece of information).

Computation speedup: parallel execution.

Modularity: dividing system functions into separate processes.

Producer and Consumer Problem
• The Producer-Consumer problem is a classical multi-process
synchronization problem: we are trying to achieve synchronization
between more than one process.
• There is one Producer that produces items and one Consumer that
consumes the items produced by the Producer. Both share the same
memory buffer, which is of fixed size.
• The task of the Producer is to produce an item, put it into the
memory buffer, and start producing again. The task of the Consumer is
to consume items from the memory buffer.
Producer and Consumer Problem
• The following constraints define the Producer-Consumer problem:
• The producer should produce data only when the buffer is not full. If
the buffer is full, the producer is not allowed to store any data into
the memory buffer.
• The consumer can consume data only if the memory buffer is not empty.
If the buffer is empty, the consumer is not allowed to take any data
from the memory buffer.
• The producer and the consumer should not access the memory buffer at
the same time.
Producer Consumer Problem
solution
(Shared Memory Bounded Buffer)
• The producer and consumer processes share the following variables:
Var n;
Type item;
Var buffer : array [0..n-1] of item;
in, out : 0..n-1; // initialized to 0

• The shared buffer is a circular array.
• in = first free position in the buffer
• out = first full position in the buffer
• in = out // buffer is empty
• (in+1) mod n = out // buffer is full
Producer Consumer with shared
fixed buffer
• Producer Process (nextp):
repeat
    produce an item in nextp;
    while (in+1) mod n = out do no-op;
    buffer[in] := nextp;
    in := (in+1) mod n;
until false;

• Consumer Process (nextc):
repeat
    while in = out do no-op;
    nextc := buffer[out];
    out := (out+1) mod n;
    consume the item in nextc;
until false;
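A minimal C sketch of this busy-waiting bounded buffer is given below. It is not from the slides: pthreads, the buffer size N, and the item count are assumptions chosen for illustration, and volatile is used as a simplification where a production version would use C11 atomics to make the cross-thread accesses well defined.

#include <pthread.h>
#include <stdio.h>

#define N 8                           /* buffer size, chosen arbitrarily here */

int buffer[N];
volatile int in = 0, out = 0;         /* in: first free slot, out: first full slot */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        while ((in + 1) % N == out)   /* buffer full: busy-wait (no-op loop) */
            ;
        buffer[in] = item;            /* place item in the first free slot */
        in = (in + 1) % N;
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        while (in == out)             /* buffer empty: busy-wait (no-op loop) */
            ;
        int item = buffer[out];       /* take item from the first full slot */
        out = (out + 1) % N;
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

As in the pseudocode, this scheme only works for exactly one producer and one consumer, because each index is written by only one of the two threads.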
Race Condition
• A situation where processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the
particular order in which the accesses take place, is called a race
condition.
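A small C sketch of a race condition follows; it is not from the slides, and pthreads and the iteration count are assumptions made here. Two threads increment a shared counter without synchronization, so some updates are lost and the final value changes from run to run.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                 /* shared data accessed by both threads */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* read-modify-write is not atomic: updates can be lost */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 2000000, but typically less */
    return 0;
}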
Critical Section

The critical section is a code segment in which shared variables are
accessed. Atomic action is required in a critical section, i.e. only one
process can execute in its critical section at a time. A process entering
its critical section acquires the resources it needs for that execution.
Critical Section
• A solution to the critical section problem must satisfy the following three conditions.

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
process can be executing in its critical section.

2. Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section next.

3. Bounded Waiting: There exists a bound on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
Dekker’s algorithm in Process
Synchronization

Process synchronization is the task of coordinating the execution of
processes so that no two processes access the same shared data and
resources at the same time; otherwise, a change made by one process may
not be reflected when another process accesses the same shared data.

Dekker's algorithm was the first provably-correct solution
to the critical section problem. It allows two threads to
share a single-use resource without conflict, using only
shared memory for communication.
Dekker’s algorithm in Process
Synchronization
• Dekker’s algorithm will allow only a single process to use
a resource if two processes are trying to use it at the
same time. The highlight of the algorithm is how it solves
this problem.
• It succeeds in preventing the conflict by enforcing mutual
exclusion, meaning that only one process may use the
resource at a time and will wait if another process is
using it.
Dekker’s algorithm in Process
Synchronization
This is achieved with the use of two "flags" and a "turn". The
flags indicate whether a process wants to enter the critical
section (CS) or not: a value of 1 (TRUE) means the process wants to
enter the CS, while 0 (FALSE) means the opposite.
The turn, which can also have the value 0 or 1, indicates which process
has priority when both processes have their flags set to TRUE.

var flag: array [0..1] of boolean;
    turn: 0..1;
flag[i] := false;
flag[j] := false;
turn := i;
Process Pi:
repeat
    flag[i] := true;
    while flag[j] do
        if turn = j then
        begin
            flag[i] := false;
            while turn = j do no-op;
            flag[i] := true;
        end;
    critical section
    turn := j;
    flag[i] := false;
    remainder section
until false;

Process Pj:
repeat
    flag[j] := true;
    while flag[i] do
        if turn = i then
        begin
            flag[j] := false;
            while turn = i do no-op;
            flag[j] := true;
        end;
    critical section
    turn := i;
    flag[j] := false;
    remainder section
until false;
One advantage of this algorithm is that it doesn't require special test-and-set (atomic read/modify/write)
instructions and is therefore highly portable between languages and machine architectures.

One disadvantage is that it is limited to two processes and makes use of busy waiting instead of
process suspension.

If two processes attempt to enter a critical section at the same time, the algorithm will allow only one
process in, based on whose turn it is. If one process is already in the critical section, the other process
will busy wait for the first process to exit. This is done by the use of two flags, flag[i] and flag[j],
which indicate an intention to enter the critical section on the part of processes Pi and Pj, respectively, and
a variable turn that indicates who has priority between the two processes.
Dekker's algorithm requires more complex waiting logic: when both
processes attempt to enter the critical section simultaneously, the
waiting process must repeatedly drop and re-raise its flag according to
the turn variable. Peterson's algorithm uses a simpler waiting condition
while providing the same mutual exclusion guarantee.
Peterson’s Solution to
Critical Section Problem
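Peterson's solution also uses a boolean flag per process and a shared turn variable, but a process simply announces its intent, gives the turn to the other process, and waits only while the other process is interested and holds the turn. A minimal C sketch follows; it is not from the slides, and the use of C11 atomics (so the loads and stores are not reordered) and the function names are assumptions made here for illustration.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2] = { false, false };  /* flag[i]: process i wants to enter the CS */
atomic_int  turn   = 0;                  /* which process yields when both want in */

void enter_critical_section(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);        /* announce intent to enter */
    atomic_store(&turn, j);              /* give priority to the other process */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                /* busy-wait while the other process has priority */
}

void exit_critical_section(int i) {
    atomic_store(&flag[i], false);       /* no longer interested */
}

Like Dekker's algorithm, this works for exactly two processes and relies on busy waiting rather than process suspension.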
Semaphore
• For the solution to the critical section problem, one synchronization
tool is used, known as the semaphore. A semaphore S is an
integer variable which is accessed through two standard atomic
operations, wait and signal.

• Wait(S):   while S <= 0 do no-op;
             S := S - 1;

• Signal(S): S := S + 1;
• When a process performs a wait operation on a semaphore, the
operation checks whether the value of the semaphore is > 0. If so, it
decrements the value of the semaphore and lets the process continue
its execution; otherwise, it blocks the process on the semaphore.

• A signal operation on a semaphore activates a process blocked on the
semaphore, if any, or increments the value of the semaphore by 1.
Semaphores are of two types:
• Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and
1. Its value is initialized to 1. It is used to implement the solution of
critical section problems with multiple processes. It ensures that only
one process can access the critical section at a time.
• Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.
Mutual Exclusion Problem
• We can use semaphores to deal with the n-process critical section
problem. The n processes share a semaphore, mutex (mutual
exclusion), initialized to 1. Each process is organised as:
• Repeat
wait(mutex);
critical section;
signal(mutex);
remainder section
Until false
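A minimal C sketch of this pattern using POSIX semaphores is shown below; it is not from the slides, and pthreads, the thread count, and the shared counter are assumptions chosen for illustration. sem_wait and sem_post play the roles of wait(mutex) and signal(mutex).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                          /* binary semaphore guarding the critical section */
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);             /* wait(mutex): enter the critical section */
        shared++;                     /* critical section */
        sem_post(&mutex);             /* signal(mutex): leave the critical section */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);           /* initialized to 1, as on the slide */
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);  /* 400000 every run, thanks to mutual exclusion */
    sem_destroy(&mutex);
    return 0;
}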
Synchronization Problem
• Consider two processes, P1 with statement S1 and P2 with statement S2.
Suppose we require that S2 be executed only after S1.
• We can use a semaphore "sync", initialized to 0, and insert the
following statements:

P1:
    S1;
    signal(sync);

P2:
    wait(sync);
    S2;
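A small C sketch of this ordering constraint with a POSIX semaphore follows; it is not from the slides, and the thread bodies and the name sync_sem (renamed from "sync" only to avoid clashing with the POSIX sync() function) are assumptions made here.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sync_sem;                       /* initialized to 0: S2 must wait for S1 */

void *p1(void *arg) {
    printf("S1\n");                   /* statement S1 */
    sem_post(&sync_sem);              /* signal(sync) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&sync_sem);              /* wait(sync): blocks until P1 signals */
    printf("S2\n");                   /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&sync_sem, 0, 0);
    pthread_create(&b, NULL, p2, NULL);
    pthread_create(&a, NULL, p1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&sync_sem);
    return 0;
}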
Producer Consumer Problem with Semaphore
To solve this problem, we need two counting semaphores, Full and Empty,
together with a binary semaphore mutex that guards the buffer.
• "Full" keeps track of the number of items in the buffer at any given time.
• "Empty" keeps track of the number of unoccupied slots.

• Initialization of semaphores –
• mutex = 1
• Full = 0 // Initially, all slots are empty, so there are no full slots
• Empty = n // All slots are empty initially
Producer Consumer Problem with Semaphore
• When the producer produces an item, the value of "empty" is reduced by 1
because one slot will now be filled. The value of mutex is also reduced to
prevent the consumer from accessing the buffer. Once the producer has placed
the item, the value of "full" is increased by 1, and the value of mutex is
increased by 1 because the producer's task is complete and the consumer can
access the buffer.
• When the consumer removes an item from the buffer, the value of "full" is
reduced by 1 and the value of mutex is also reduced so that the producer
cannot access the buffer at that moment. Once the consumer has consumed the
item, the value of "empty" is increased by 1, and the value of mutex is
increased so that the producer can access the buffer again.
Producer Consumer Problem with Semaphore
Solution for Producer –
do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place item in buffer
    signal(mutex);
    signal(full);
} while(true);

Solution for Consumer –
do {
    wait(full);
    wait(mutex);
    // consume item from buffer
    signal(mutex);
    signal(empty);
} while(true);
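A runnable C sketch of this solution using POSIX semaphores follows; it is not from the slides, and pthreads, the buffer size N, and the item count are assumptions chosen for illustration. The semaphores empty_slots, full_slots, and mutex correspond to Empty, Full, and mutex above.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8

int buffer[N];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;   /* Empty = n, Full = 0, mutex = 1 */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);         /* wait(empty): wait for a free slot */
        sem_wait(&mutex);               /* wait(mutex): lock the buffer */
        buffer[in] = item;              /* place item in buffer */
        in = (in + 1) % N;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&full_slots);          /* signal(full): one more item available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);          /* wait(full): wait for an item */
        sem_wait(&mutex);               /* wait(mutex): lock the buffer */
        int item = buffer[out];         /* consume item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);               /* signal(mutex) */
        sem_post(&empty_slots);         /* signal(empty): one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}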
