Unit-2 Complete
(KCS-401)
Internal Marks 50
End Semester Marks 100
Concurrency means executing multiple tasks at the same time but not necessarily
simultaneously.
A system is said to be concurrent if it can support two or more actions in progress at the
same time.
A system is said to be parallel if it can support two or more actions executing simultaneously.
In fact, concurrency and parallelism overlap conceptually to some degree, but "in
progress" clearly distinguishes them.
Parallelism essentially requires hardware with multiple processing units. On a single-core CPU
you may get concurrency but NOT parallelism.
Parallelism is a specific kind of concurrency where tasks are really executed simultaneously.
Concurrency is about dealing with lots of things at once. Parallelism is about doing lots
of things at once.
chin = getchar();    // read a character into the shared global variable chin
chout = chin;        // copy it into chout
putchar(chout);      // display the character
Imagine two processes P1 and P2 both executing this code at the "same" time, with
the following interleaving due to multiprogramming:
1. P1 enters this code, but is interrupted after reading the character x into chin.
2. P2 enters this code, and runs it to completion, reading and displaying the character y.
3. P1 is resumed, but chin now contains the character y, so P1 displays the wrong character.
The essence of the problem is the shared global variable chin. P1 sets chin, but this write is
subsequently lost during the execution of P2.
2. Optimal allocation of resources
It is difficult for the operating system to manage the allocation of resources optimally.
Non-atomic –
Operations that are non-atomic, and hence can be interrupted partway through by other
processes, can cause problems.
Race Conditions-
A race condition occurs when the outcome depends on which of several processes gets to
a point first.
Blocking –
Processes can block waiting for resources. A process could be blocked for a long period
of time waiting for input from a terminal. If the process is required to periodically
update some data, this would be very undesirable.
Starvation-
It occurs when a process does not obtain the service it needs to make progress.
Deadlock-
It occurs when two processes are blocked, each waiting for the other, and hence neither can proceed to execute.
Process Interaction:
• Cooperation by Sharing
• Cooperation by Communication
When more than one process executes the same code, or accesses the same memory or any
shared variable, there is a possibility that the output or the value of the shared variable
is wrong. All the processes are, in effect, racing to claim that their output is correct;
this condition is known as a race condition.
The task of the Producer is to produce the item, put it into the memory buffer, and
again start producing items. Whereas the task of the Consumer is to consume the item
from the memory buffer.
•The producer should produce data only when the buffer is not full. In case it is found
that the buffer is full, the producer is not allowed to store any data into the memory
buffer.
•Data can only be consumed by the consumer if and only if the memory buffer is not
empty. In case it is found that the buffer is empty, the consumer is not allowed to use
any data from the memory buffer.
•The producer and the consumer should not be allowed to access the memory buffer at the
same time.
As we can see in the figure, the buffer has a total of 8 slots, of which the first 5 are
filled; in = 5 (pointing to the next empty position) and out = 0 (pointing to the first
filled position).
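A minimal sketch of this bounded buffer in C, using the in and out pointers from the figure. The producer and consumer are assumed to run concurrently and to share these variables, and one slot is deliberately kept empty so that a full buffer can be distinguished from an empty one. Note that in and out are themselves shared, which is exactly the synchronization problem this unit goes on to solve.

#define BUFFER_SIZE 8            // 8 slots, as in the figure
int buffer[BUFFER_SIZE];
int in = 0;                      // next empty position
int out = 0;                     // first filled position

void producer(int item)
{
    while (((in + 1) % BUFFER_SIZE) == out);   // buffer full: producer must wait
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

int consumer(void)
{
    while (in == out);                         // buffer empty: consumer must wait
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}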
The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, these systems must provide a mechanism for process
creation and termination.
Process Creation: A process may create several new processes, via a create-process
system call, during the course of execution. The creating process is called a parent
process, and the new processes are called the children of that process. Each of these
new processes may in turn create other processes, forming a tree of processes. A
process is identified by a unique process identifier (or pid), which is typically an
integer number.
In general, a process will need certain resources (CPU time, memory, files, I/O
devices) to accomplish its task. When a process creates a subprocess, that subprocess
may be able to obtain its resources directly from the operating system, or it may be
constrained to a subset of the resources of the parent process.
When a process creates a new process, two possibilities exist in terms of execution:
In the UNIX operating system, a new process is created by the fork() system call. The
exec() system call is often used after a fork() system call; it loads a
binary file into memory. The parent can then create more children; or, if it has nothing
else to do while the child runs, it can issue a wait() system call to move itself off the
ready queue until the termination of the child. When the child process completes (by either
implicitly or explicitly invoking exit()), the parent process resumes from the call to
wait(), where it completes using the exit() system call. The figure illustrates
process creation using the fork() system call:
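A minimal sketch of this fork()/exec()/wait() pattern (the program run by the child, ls, is only an illustrative choice):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    // create a child process
    if (pid < 0) {                         // fork failed
        perror("fork");
        exit(1);
    } else if (pid == 0) {                 // child: load a new binary into memory
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");                  // reached only if exec fails
        exit(1);
    } else {                               // parent: wait for the child to terminate
        wait(NULL);
        printf("child %d has terminated\n", (int)pid);
    }
    return 0;
}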
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit () system call. At that point, all the
resources of the process (including physical and virtual memory, open files, and I/O
buffers) are deallocated by the operating system.
Termination can occur in other circumstances as well. A process can cause the
termination of another process via an appropriate system call (for example,
TerminateProcess () in Win32).
A parent may terminate the execution of one of its children for a variety of reasons,
such as these:
•The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
•The parent is exiting, and the operating system does not allow a child to continue if
its parent terminates.
Some systems do not allow a child to exist if its parent has terminated. In such
systems, if a process terminates (either normally or abnormally), then all its children
must also be terminated. This phenomenon is referred to as cascading termination.
Example: Process Creation
If fork() is called n times in a program, 2^n - 1 child processes are created:
n = 1 gives 1 child, n = 2 gives 3 children, n = 3 gives 7 children.
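A small sketch that can be used to observe this count (the printed message is an illustrative choice):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int n = 3;
    for (int i = 0; i < n; i++)
        fork();                            // every existing process forks again
    // printed 2^n = 8 times: once by the original parent and once by each of the 2^n - 1 = 7 children
    printf("hello from pid %d\n", (int)getpid());
    return 0;
}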
Inter Process Communication (IPC)
Operations:
create a new mailbox.
send and receive messages through mailbox.
destroy a mailbox.
Primitives:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Mailbox Sharing:
Let P1, P2, and P3 share mailbox A
P1 sends
P2 and P3 receive.
Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox. (Producer process blocks when the buffer is full)
Blocking receive: The receiver blocks until a message is available. (Consumer process
blocks when the buffer is empty)
Non-blocking send: The sending process sends the message and resumes operation
Zero Capacity: The queue has a length of zero. Thus the link cannot have any
messages waiting in it. The sender must block until the recipient receives the message.
Bounded Capacity: The queue has finite length n. Sender must wait if link is full
Unbounded Capacity: The queue is of infinite length. So any number of messages can
wait in it. Thus the sender never blocks
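POSIX message queues are one concrete mailbox mechanism along these lines; a minimal sketch, assuming a Linux system (link with -lrt on older glibc). The queue name /unit2_mbox, the message text and the sizes are illustrative choices, not from the slides:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 64, .mq_curmsgs = 0 };
    mqd_t mbox = mq_open("/unit2_mbox", O_CREAT | O_RDWR, 0600, &attr);  // create mailbox A

    const char *msg = "hello";
    mq_send(mbox, msg, strlen(msg) + 1, 0);      // send(A, message); blocks if the queue is full

    char buf[64];
    mq_receive(mbox, buf, sizeof(buf), NULL);    // receive(A, message); blocks if the queue is empty
    printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/unit2_mbox");                    // destroy the mailbox
    return 0;
}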
Shared memory allows two or more processes to share a given region of memory.
This is the fastest form of IPC because the data does not need to be copied between
the client and server.
The only trick in using shared memory is synchronizing access to a given region
among multiple processes.
If the server is placing data into a shared memory region, the client shouldn't try to
access the data until the server is done.
Often semaphores are used to synchronize shared memory access. We can use record
locking as well.
In shared memory we declare a given section in the memory as one that will be used
simultaneously by several processes.
Typically a shared memory region resides in the address space of the process creating
the shared memory segment.
Other processes that wish to communicate using this shared memory segment must
attach it to their address space.
A process must first create a shared memory segment using the shmget() system call
(SHared Memory GET). On success, shmget() returns a shared memory identifier that
is used in subsequent shared memory functions.
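A minimal sketch of this: the parent creates and attaches a segment, fork() supplies the second process, and the parent uses wait() in place of a semaphore purely to keep the example short:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);  // create a 4 KB shared segment
    char *shm = (char *)shmat(shmid, NULL, 0);                // attach it to our address space

    if (fork() == 0) {                        // child: the "server" placing data into the region
        strcpy(shm, "data from child");
        shmdt(shm);
        return 0;
    }
    wait(NULL);                               // crude synchronization: wait until the child is done
    printf("parent read: %s\n", shm);         // parent: the "client" reading the data
    shmdt(shm);
    shmctl(shmid, IPC_RMID, NULL);            // remove the segment
    return 0;
}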
Concurrent processes (processes that exist at the same time) are categorized into two types
on the basis of synchronization, as given below:
Independent Process
Cooperative or Cooperating Process
Independent Processes
Two processes are said to be independent if the execution of one process does not
affect, and is not affected by, the execution of another process.
An independent process does not share any data with any other process.
Parallelism:
• Processes are executing simultaneously in real time.
• It is known as real concurrency.
• Multiple cooperating processes are running simultaneously on different processors.
• In a single-chip system, multiple processing elements exist to execute multiple
processes; in multiprocessor systems, multiple CPUs are used to execute multiple
processes at the same time.
Concurrency:
• Only the impression is there that processes are executing simultaneously; in real time
there is no simultaneous execution.
• It is known as virtual concurrency.
• Concurrency is implemented on uniprocessor systems (single CPU).
• Execution is interleaved: for example, the CPU runs P1 and then P2, while the I/O
device serves P2 and then P1.
A race condition occurs when multiple processes execute the same code or access the same
memory or a shared variable, and the output or the value of the shared variable depends on
the particular order in which the accesses take place.
All the processes are racing to claim that their output is correct; this condition is
known as a race condition.
In a race condition, concurrent processes race with each other to access a shared
resource in arbitrary order and may produce an arbitrarily wrong result.
Two or more concurrent processes sharing a common resource have to follow some
defined protocols to avoid race condition.
Case 1: The order of execution of the processes is P0, P1.
Process P0 reads the value of A = 0, increments it by 1 (A = 1) and writes the
incremented value in A.
Now, process P1 reads the value of A = 1, increments its value by 1 (A = 2) and writes
the incremented value in A.
So, after both the processes P0 & P1 finish accessing the variable A, the value of A is 2.

Case 2: The order of execution of the processes is P1, P0.
Process P1 reads the value of A = 0, increments it by 1 (A = 1) and writes the
incremented value in A.
Now, process P0 reads the value of A = 1, increments its value by 1 (A = 2) and writes
the incremented value in A.
So, after both the processes P0 & P1 finish accessing the variable A, the value of A is 2.
But in concurrency, any process can execute any time.
Suppose that P0 and P1 are permitted to execute in any arbitrary fashion; then any of
the following could occur (P0 and P1 are concurrent processes):
Possibility 1: P0, P1, P0, P1
Possibility 2: P0, P1, P0
We get wrong results because we allow concurrent execution of the processes: they
can access the shared variable in any order. Due to the race condition we get different
wrong results in possibility 1 and possibility 2.
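A minimal sketch that reproduces this lost-update effect, written with POSIX threads purely for brevity (the same problem arises for processes sharing memory); the loop count is an illustrative choice:

#include <pthread.h>
#include <stdio.h>

long A = 0;                         // shared variable

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        A = A + 1;                  // read-modify-write: not atomic
    return NULL;
}

int main(void)
{
    pthread_t p0, p1;
    pthread_create(&p0, NULL, increment, NULL);
    pthread_create(&p1, NULL, increment, NULL);
    pthread_join(p0, NULL);
    pthread_join(p1, NULL);
    printf("A = %ld (expected 2000000)\n", A);   // usually less, because some updates are lost
    return 0;
}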
CLASSIC BANKING EXAMPLE –
Now, due to concurrent access and the processing time the computer takes, both withdrawal
requests read the old balance before either debit was written back, so you were able to
withdraw 3000$ more than your balance. In total 8000$ was taken out while the balance was
just 5000$.
Mutual exclusion means that, at a time, only one of the processes should be executing its
critical section.
The critical section is that section of the process where process may be changing
common variables, updating tables, writing a file and so on.
The critical section code must be designed such that the process must initially request to
enter its critical section. If permitted, it then executes the critical section, and there must
be an exit section. The remaining code is in the remainder section. Below you can
see the general structure used to implement a critical section.
Critical Section
This is the segment of code where process changes common variables, updates a
table, writes to a file and so on. When a process is executing in its critical section, no
other process is allowed to execute in its critical section.
Exit Section
This segment of code is executed by a process immediately after its exit from the CS.
In this section, a process performs certain operations indicating its exit from CS and
thereby enabling the waiting processes to enter into its CS.
Remainder Section
When a process is executing in this section, it means that it is not waiting to enter its
CS.
General structure of process Pi
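The structure referred to in that figure can be sketched as:

while (1)
{
    /* entry section: request permission to enter the critical section */

    /* critical section */

    /* exit section: indicate that the critical section is now free */

    /* remainder section */
}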
To avoid any inconsistency in the result, the three properties that must be satisfied by
any implementation of the critical section are as follows:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
Progress
An entry of a process inside the critical section is not dependent on the entry of another
process inside the critical section.
A process can freely enter inside the critical section if there is no other process present
inside it.
A process enters the critical section only if it wants to enter.
A process is not forced to enter inside the critical section if it does not want to enter.
If a process does not want to enter its critical section, it should not be
permitted to block another process from entering its critical section.
Bounded Waiting
There is a bounded time up to which a process has to wait to enter its critical section
after making the request. The system cannot keep a process waiting for an indefinite
time to enter its critical section.
In any case, the execution of a critical section takes only a short duration, so every process
requesting to enter its critical section gets the chance within a finite amount of time.
The wait of a process to enter the critical section is bounded: the process is guaranteed
to get into the critical section before that bound is exceeded.
We have to find a solution for n processes, but let us start with just two processes
to make the understanding easier.
In the figure below you can see that we have two processes P0 and P1. Both have
code to enter their critical section.
The shared variable is turn (turn = 0 or 1). If turn = i, it means that process Pi can enter
its CS.
Initially turn = 0;
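A sketch of this turn-variable solution for P0 (P1, narrated below, runs the symmetric code with the values of turn swapped):

while (1)
{
    while (turn != 0);        // busy-wait until it is P0's turn
    /* critical section */
    turn = 1;                 // hand the turn over to P1
    /* remainder section */
}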
Similarly, P1 checks while (1), enters the while loop and checks while (turn != 1),
which turns out to be false. So P1 exits the loop, enters its critical section, then makes
turn 0 again and executes its remainder section.
Here, observe one thing: every time, P0 needs the value turn = 0 to enter its CS
and every time P1 needs the value turn = 1 to enter its CS.
So even if P0, after executing its CS, immediately wants to re-enter its CS, it
has to wait for P1 to make the value of turn = 0.
Here we have achieved mutual exclusion, but progress is not achieved: if P1 does
not want to enter its CS, it will block P0 from entering its CS.
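The next attempt, narrated below, drops the turn variable and gives each process a boolean flag; a sketch for P0 (P1 is symmetric, using flag[1] and testing flag[0]):

while (1)
{
    flag[0] = T;          // P0 announces that it wants to enter its CS
    while (flag[1]);      // wait while P1 is interested in / inside its CS
    /* critical section */
    flag[0] = F;          // P0 announces that it has left its CS
    /* remainder section */
}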
Now, P1 checks while (1), which is true, so it enters the loop and makes flag[1] = T,
which confirms that P1 wants to enter its CS. It further checks while (flag[0]) to see
whether P0 is in its CS, which is false, as on exit P0 had made flag[0] = F. So P1 exits
the while (flag[0]) loop, enters its CS and on exit makes flag[1] = F.
Here we have achieved mutual exclusion .
If, after executing its CS, P1 again wants to re-enter its CS, it can, as the condition for
entry into the CS is flag[0] = F. The same is the case with P0. This may lead to indefinite waiting.
Consider the case when P0 has just confirmed that it wants to enter its CS by making
flag[0] = T, and at that very moment a context switch happens: P1 gets charge and
starts executing. It makes flag[1] = T and further checks while (flag[0]), which is now
true, so it gets blocked in the loop and also blocks P0.
With the next solution we achieve mutual exclusion + progress.
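A sketch of Peterson's solution for P0 (P1, narrated below, sets flag[1] = T, turn = 0 and waits in while (turn == 0 && flag[0] == T)):

while (1)
{
    flag[0] = T;                          // P0 wants to enter its CS
    turn = 1;                             // but offers the turn to P1
    while (turn == 1 && flag[1] == T);    // wait only if P1 also wants in and it is P1's turn
    /* critical section */
    flag[0] = F;
    /* remainder section */
}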
Now, P1 enters while (1) and makes flag[1] = T and turn = 0 to confirm its intention to
enter the CS. It further checks while (turn == 0 && flag[0] == T), which turns out to be
false as flag[0] = F. So P1 enters its CS and on exit makes flag[1] = F.
Here, even if a context switch happens, the processes would not get into a deadlock as
in the previous case. This solution is called Peterson's solution.
Mutual Exclusion: If flag[1] is set and P1 is executing its CS then P0 can not enter CS
(as turn=1)
If flag[0] is set and P0 is executing its CS then P1 can not enter CS (as turn=0).
Thus at a time, at most one of the cooperating processes can enter its CS.
Thus the algorithm satisfies mutual exclusion requirement.
Suppose now P1 is dispatched to run. P1 also intends to enter its CS and sets
flag[1] to true.
Now both have set their flags to true simultaneously. The value of the turn variable will
break the tie, since it can assume only one value at a time, either 0 or 1.
Bounded Waiting: Suppose P0 intends to enter its CS, sets its flag[0] to true and then
checks the status of flag[1].
There are two possibilities.
Thus, after P0 indicates its intention to enter its CS and before it enters, P1 can enter
its CS at most once.
Dekker’s Algorithm
boolean flag[2];
flag[i]=F, flag[j]=F or flag[0]=F, flag[1]=F // Initially false //
int turn;
turn= i or 0 // Initially //
Two processes Pi and Pj or P0 and P1 (say i=0 and j=1)
Process Pi (or P0); process Pj (or P1) executes the symmetric code with i and j interchanged:
while(1)
{
    flag[i] = T;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = F;
            while (turn == j);   // do nothing
            flag[i] = T;
        }
    }
    critical section
    turn = j;
    flag[i] = F;
    remainder section
}
Before entering its critical section, a process receives a number (like in a bakery).
Holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number, if i < j, then Pi is served first;
else Pj is served first (PID assumed unique).
int Number[N];   // an array of size N, initially all zero (e.g. for N = 4: Number = [0, 0, 0, 0])
When Pi is waiting to enter its CS, Number[i] will be non-zero.
Number[i] = 0 means that Pi does not want to enter the CS.
Number[i] != 0 means that Pi is waiting for the CS.
Figure: processes P0, P1, P2, P3, P4, ..., PN, with one of them inside the CS.
If P3 wants to enter the CS, then:
Choosing[3] = T
Number[3] = max(Number[0], Number[1], Number[2]) + 1, so Number[3] != 0
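A sketch of the Bakery algorithm for process Pi, using the Choosing and Number arrays described above (the standard algorithm written out as a sketch, not copied from the slides):

Choosing[i] = T;                                   // Pi starts choosing its token
Number[i] = max(Number[0], ..., Number[N-1]) + 1;  // take the next token number
Choosing[i] = F;                                   // Pi has chosen its token

for (j = 0; j < N; j++)
{
    while (Choosing[j]);                           // wait while Pj is still choosing its token
    while (Number[j] != 0 &&
           (Number[j] < Number[i] ||
            (Number[j] == Number[i] && j < i)));   // wait for processes with a smaller token
                                                   // (ties broken by the smaller process id)
}
/* critical section */
Number[i] = 0;                                     // Pi is no longer interested
/* remainder section */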
Example:
Let there be N = 3 processes, i.e. P0, P1 and P2.
Initially: Choosing = [F, F, F] and Number = [0, 0, 0].
If P0 wants to enter its CS:
Choosing[0] = T   // P0 goes to choose a token, so Choosing = [T, F, F]
Number[0] = max(Number[0], Number[1], Number[2]) + 1 = 1, so Number = [1, 0, 0]
Now P0 (i = 0) checks whether any other process is ahead of it for the CS by running the
for loop for j = 0, 1, 2.
Suppose P1 also wants to enter its CS; then P1 goes to choose a token:
Choosing[1] = T
Number[1] = 1 + 1 = 2
When j = 1 in the for loop, the condition of while (Choosing[j]); is true for j = 1.
P0 should wait while this condition is true: it means that P1 is still choosing its token.
After P1 has chosen its token, Choosing[1] = F, the while loop terminates, and P0,
holding the smallest token (Number[0] = 1), enters its CS.
Now suppose that after P0 exits the CS, the token numbers of P1 and P2 are the same, i.e.
Number[1] = 2 and Number[2] = 2; then P1 enters the CS,
as the process id of P1 is smaller than that of P2.
Let us see that Bakery algorithm meets all the three requirements of a CS problem
solution.
Mutual Exclusion:
When no process is executing in its CS, one of the waiting processes, say Pi, is chosen to
enter its CS if it satisfies the following criteria:
Pi has the lowest token number among the waiting processes.
If the lowest token number is assigned to a set of processes, then the
process with the lowest process id among them is chosen to enter its CS.
As process id of each process is distinct, so at a time only one process would meet the
above criteria and thus only one process will be executing in its CS.
Progress:
A waiting process Pi, after selecting its token, checks all other waiting processes to see
whether any of them has priority to enter the CS earlier than Pi; if no such process is waiting
and no process is executing in its CS, then Pi enters its CS immediately.
Bounded Waiting:
The algorithm follows the FCFS principle: a process that comes first gets a smaller token
number (with ties broken by the smaller process id).
After a process Pi selects its token, only those processes can enter their CS earlier than
Pi (and only once) that selected their tokens earlier than Pi, or that selected their tokens
at almost the same time as Pi but have lower process ids.
The wait() operation is performed when the process wants to access the resources,
and the signal() operation is performed when the process wants to release the
resources.
The wait() and signal() operation must be performed indivisibly i.e. if a process is
modifying the semaphore value no other process must simultaneously modify the
semaphore value.
Along with that the execution of wait() and signal() must not be interrupted in
between.
While(1)
{
Entry section wait operation performed
CS
Exit section signal operation performed
Remainder section
}
A binary semaphore is an integer variable (taking only the values 0 and 1), which can be
accessed by a cooperating process through the use of two atomic operations:
wait( ) / P(s) / Down( )
signal( ) / V(s) / Up( )
s =1 (C.S. is free)
s =0 (C.S. is in use)
wait: This primitive is invoked by a cooperating process when requesting entry into its
CS.
The first process invoking wait will set the semaphore value to 0 (s = 0)
and proceed to enter its CS.
If s = 0, it means that some process is inside the CS.
If s = 1, it means that the CS is empty.
If a process Pi is requesting entry into its CS and at the same time another cooperating process is
still executing in its CS, then Pi has to wait.
So at a time only one waiting process is permitted to enter its CS.
A waiting process repeatedly checks the value of the semaphore till it is found to be 1. It
then decrements the value to 0 and proceeds to enter its CS.
void wait(s)
{
    while (s <= 0);   // no operation (busy wait)
    s--;
}
The wait operation must be executed atomically; otherwise it would be possible for more
than one waiting process to find the semaphore value to be 1, decrement it to 0 and proceed
to enter its CS simultaneously, and the mutual exclusion requirement would be violated.
void signal( s)
{
s++;
}
The wait and signal operation must be executed atomically otherwise race condition
may occur.
While(1)
{
Entry section wait operation performed
CS
Exit section signal operation performed
Remainder section
}
Mutual Exclusion:
So, at a time only one cooperating process can enter CS, subjected to satisfaction of the
condition that “wait” operation is executed atomically.
Down( )
{
    if (s.value == 1)
    {
        s.value = 0;
    }
    else
    {
        put the process in the suspended list s.L and block it
    }
}

Up( )
{
    if (suspended list is empty)
    {
        s.value = 1;
    }
    else
    {
        select a process from the suspended list and wakeup( )
    }
}
Down( ) is successful only when s=1
Disadvantages:
Solution using binary semaphore does not meet the requirement of Bounded waiting.
A process waiting to enter its CS, will perform busy waiting thus wasting CPU cycles.
They are essentially shared global variables.
Access to semaphores can come from anywhere in the program (as semaphores are
global variables). There is no control over, or guarantee of, proper usage.
When a process executes the wait operation, the value of s.count is decremented by 1, and then:
s.count >= 0 : the process is allowed to enter the CS.
s.count < 0 : the process is not allowed to enter the CS; it is blocked and put into the waiting queue.
The signal operation is executed when a process exits from the CS.
The signal operation increments the value of the counting semaphore by 1, and then:
s.count <= 0 : a process is chosen from the waiting queue and a wakeup is executed so that the chosen process can enter its CS.
s.count > 0 : no action is taken.
To implement mutual exclusion, the value of the counting semaphore is initialized
to 1. This ensures that only one process can be present in the CS at any given time;
multiple processes will not be present in the CS.
The operation of a counting semaphore can be concluded as follows:
Advantages:
1. Since the waiting processes will be permitted to enter their critical sections in a
FCFS order, the requirement of bounded waiting is fully met.
2. A waiting process does not perform any busy waiting, thus saving CPU cycles.
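A minimal C sketch of the counting-semaphore wait and signal operations described above. The suspended-list type and the block/wakeup primitives are assumed to be supplied by the operating system, and the functions are named semaphore_wait/semaphore_signal here only to avoid clashing with C library names:

struct semaphore {
    int count;                 // may go negative: -count is then the number of waiting processes
    struct proc_queue *L;      // queue of blocked (suspended) processes, type provided by the OS
};

void semaphore_wait(struct semaphore *s)     // wait( ) / P(s) / Down( )
{
    s->count--;
    if (s->count < 0) {
        /* add the calling process to s->L */
        /* block the calling process */
    }
}

void semaphore_signal(struct semaphore *s)   // signal( ) / V(s) / Up( )
{
    s->count++;
    if (s->count <= 0) {
        /* remove a process P from s->L */
        /* wakeup(P) so that P can enter its CS */
    }
}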
Semaphore:
A semaphore is a signalling mechanism built around a variable that is initially marked
free. When any process wants to go through a critical section, it checks whether the
variable is free or busy. If the variable is busy, the process waits until it becomes
free; otherwise the process marks the semaphore variable busy, goes through the
critical section and does its job, and then comes out of the critical section,
marking the variable free again.
void Philosopher (void)
{
    while(true)
    {
        Thinking();
        Take-Fork(i);              // pick up one fork
        Take-Fork((i+1) % N);      // pick up the other fork
        Eat();
        Put-Fork(i);               // put down the first fork
        Put-Fork((i+1) % N);       // put down the second fork
    }
}
void Philosopher (void)
{
while(true)
{
Thinking();
wait(Take-Fork(Si));
wait(Take-Fork((Si+1) % N));
Eat();
signal(Put-Fork(Si));
signal(Put-Fork((Si+1) % N));
}
}
The Dining Philosophers Problem
If there are no customers, then the barber sits in his chair and
sleeps.
When a new customer arrives and the barber is busy, the customer sits on one of the
waiting chairs if any is available; otherwise (when all the chairs are full) the customer
leaves.
The barber then goes to sleep. He stays asleep until the first customer shows up.
If another customer enters shortly thereafter, the second one will not be able to do
anything until the first one has released mutex .
The customer then checks to see if the number of waiting customers is less than
the number of chairs. If not, he releases mutex and leaves without a haircut.
When the customer releases mutex , the barber grabs it, does some
housekeeping, and begins the haircut.
When the haircut is over, the customer exits the procedure and
leaves the shop.
void barber(void) {
    while(TRUE) {
        down(customers);    // block if no customers
        down(mutex);        // access to 'waiting'
        waiting = waiting - 1;
        up(barbers);        // barber is in..
        up(mutex);          // release 'waiting'
        cut_hair();
    }
}
Sleeping barber problem
void barber(void) {
    while(TRUE) {
        down(customers);    // block if no customers
        down(mutex);        // access to 'waiting'
        waiting = waiting - 1;
        up(barbers);        // barber is in..
        up(mutex);          // release 'waiting'
        cut_hair();
        down(synch);        // wait for the customer to leave
    }
}
Sleeping barber problem
void customer(void) {
    down(mutex);                 // access to 'waiting'
    if (waiting < CHAIRS) {
        waiting = waiting + 1;   // increment waiting
        up(customers);           // wake up barber
        up(mutex);               // release 'waiting'
        down(barbers);           // go to sleep if barbers = 0
        get_haircut();
        up(synch);               // synchronize: tell the barber the chair is free
    }
    else {
        up(mutex);               // shop full .. leave
    }
}
Numerical
Suppose we want to synchronize two processes P and Q using two binary semaphores
S and T. We want the output string to be of the form "001100110011". Which choice of
W, X, Y and Z (with the given initial values of S and T) produces this output?
Process P              Process Q
while(1)               while(1)
{                      {
    W                      Y
    Print '0'              Print '1'
    Print '0'              Print '1'
    X                      Z
}                      }

Option    W       X       Y       Z       Initial values
A         P(S)    V(S)    P(T)    V(T)    S = 1, T = 1
B         P(S)    V(T)    P(T)    V(S)    S = 1, T = 0
C         P(S)    V(T)    P(T)    V(S)    S = 1, T = 1
D         P(S)    V(S)    P(T)    V(T)    S = 1, T = 0
Process P              Process Q
while(1)               while(1)
{                      {
    W                      Y
    Print '0'              Print '1'
    Print '0'              Print '1'
    X                      Z
}                      }

Option    W       X       Y       Z       Initial values
A         P(S)    V(S)    P(T)    V(T)    S = 1, T = 1
B         P(S)    V(T)    P(T)    V(S)    S = 1, T = 1
C         P(S)    V(S)    P(S)    V(S)    S = 1
D         V(S)    V(T)    P(S)    V(T)    S = 1, T = 1
Process P              Process Q
while(1)               while(1)
{                      {
    P(S)                   P(T)
    Print '0'              Print '1'
    Print '0'              Print '1'
    V(S)                   V(T)
}                      }