
Operating System

(KCS-401)
Internal Marks 50
End Semester Marks 100

Dr Manish Gupta MIT Moradabad


What are Course Outcomes

Course Learning Outcomes are statements clearly describing the specific type and level of new learning students will have achieved.

The Course Outcomes (COs) are the resultant knowledge and skills the student acquires at the end of a course.



What is Bloom’s Taxonomy

Bloom’s Taxonomy is a classification of the different objectives and skills that educators set for their students.
The taxonomy was proposed in 1956 by Benjamin Bloom, an educational psychologist at the University of Chicago.
Its levels are: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation.



Bloom’s Taxonomy

Level: Knowledge
Activities: Memorising information, defining techniques, etc.
Keywords: Arrange, define, describe, match, order, memorize, name, note, repeat; questions (Who? What? When? Where?).

Level: Comprehension/Understand
Activities: Understanding an article with the objective of providing a summary.
Keywords: Alter, change, classify, define in your own words, discuss, explain, extend, give examples, translate, etc.

Level: Application
Activities: Using the knowledge of the learner to apply it to concrete (real-life) situations.
Keywords: Apply, calculate, compute, construct, operate, practice, write an example; questions (How many? Which? What is?).

Level: Analysis
Activities: Asking a learner to dissect a subject and explain how everything fits together.
Keywords: Analyse, appraise, categorise, compare, conclude, contrast, criticize, diagnose, differentiate, etc.

Level: Synthesis
Activities: Placing the pieces of a subject back together in a novel way by gathering information from several sources.
Keywords: Assemble, compile, compose, create, improve, synthesize, what if, etc.; questions (How can we improve? What would happen if? How can we solve?).

Level: Evaluation
Activities: Judging the value of a subject for a specific purpose.
Keywords: Appraise, argue, choose, certify, criticize, decide, deduce, defend, discriminate, estimate, evaluate, recommend, etc.
Books
 1. Silberschatz, Galvin and Gagne, “Operating System Concepts”, Wiley
 2. William Stallings, “Operating Systems: Internals and Design Principles”, 6th Edition, Pearson Education
 3. D M Dhamdhere, “Operating Systems: A Concept-based Approach”, 2nd Edition, TMH
 4. Sibsankar Haldar and Alex A Aravind, “Operating Systems”, Pearson Education
 5. Harvey M Deitel, “An Introduction to Operating Systems”, Pearson Education



Unit -2



Process Concepts

A process can be of two types:

• Independent Process: Execution of one process does not affect the execution of other processes.
• Cooperative Process: Execution of one process affects the execution of other processes.



Principles of Concurrency

Both interleaved and overlapped processes can be viewed as examples of concurrent processes; they present the same problems.

The relative speed of execution of processes cannot be predicted. It depends on the following:
The activities of other processes
The way the operating system handles interrupts
The scheduling policies of the operating system

Concurrency results in sharing of resources, which can lead to problems such as deadlock and resource starvation.

It requires techniques for coordinating the execution of processes, memory allocation, and execution scheduling to maximize throughput.



Concurrency in Operating System

Concurrency means executing multiple tasks at the same time but not necessarily
simultaneously.
A system is said to be concurrent if it can support two or more actions in progress at the
same time.
A system is said to be parallel if it can support two or more actions executing simultaneously.

In fact, concurrency and parallelism overlap conceptually to some degree, but “in progress” clearly distinguishes them.

Parallelism essentially requires hardware with multiple processing units. On a single-core CPU you may get concurrency, but NOT parallelism.
Parallelism is a specific kind of concurrency where tasks are really executed simultaneously.

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots
of things at once.





Problems in Concurrency
1. Sharing of global resources
If two processes both make use of a global variable and both perform reads and writes on that variable, then the order in which the various reads and writes are executed is critical.

The fundamental problem in concurrency is processes interfering with each other


while accessing a shared global resource. This can be illustrated with a surprisingly
simple example:

chin = getchar();
chout = chin;
putchar(chout);

Imagine two processes P1 and P2 both executing this code at the “same” time, with the following interleaving due to multiprogramming:
1. P1 enters this code, but is interrupted after reading the character x into chin.
2. P2 enters this code and runs it to completion, reading and displaying the character y.
3. P1 is resumed, but chin now contains the character y, so P1 displays the wrong character.

The essence of the problem is the shared global variable chin. P1 sets chin, but this write is subsequently lost during the execution of P2.
2. Optimal allocation of resources
It is difficult for the operating system to manage the allocation of resources optimally.

3. Locking the Channel


It may be inefficient for the operating system to simply lock the channel and prevent its use by other processes.



Advantages of Concurrency

Running of multiple applications –
It enables running multiple applications at the same time.

Better resource utilization –
Resources that are unused by one application can be used by other applications.

Better average response time –
Without concurrency, each application has to run to completion before the next one can be run.

Better performance –
It enables better performance by the operating system. When one application uses only the processor and another uses only the disk drive, the time to run both applications concurrently to completion will be shorter than the time to run each application consecutively.



Drawbacks of Concurrency

It is required to protect multiple applications from one another.


It is required to coordinate multiple applications through additional mechanisms.
Additional performance overheads and complexities in operating systems are required
for switching among applications.
Sometimes running too many applications concurrently leads to severely degraded
performance.



Issues of Concurrency

Non-atomic –
Operations that are non-atomic but interruptible by multiple processes can cause
problems.

Race Conditions –
A race condition occurs if the outcome depends on which of several processes gets to a point first.

Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.

Starvation –
It occurs when a process does not obtain the service it needs to progress.

Deadlock –
It occurs when two processes are blocked and hence neither can proceed to execute.



Concurrency in Operating System

Hardware Parallelism : when having multiprogramming environment


Pseudo Parallelism : when having time sharing environment
Real Parallelism : actual parallelism, having multiple CPUs

Process Interaction:

• Unaware of each other


• Indirectly aware of each other
• Directly aware of each other

Competition among Processes:

• Cooperation by Sharing
• Cooperation by Communication



Process Synchronization

Process synchronization problems arise in the case of cooperative processes because resources are shared among cooperative processes.

When more than one process executes the same code, or accesses the same memory or any shared variable, there is a possibility that the output or the value of the shared variable is wrong. All the processes race, each claiming that its output is correct; this condition is known as a race condition.



Producer-Consumer problem

The Producer-Consumer problem is a classical multi-process synchronization problem; that is, we are trying to achieve synchronization between more than one process.

There is one Producer in the producer-consumer problem. The Producer produces some items, whereas there is one Consumer that consumes the items produced by the Producer. The Producer and the Consumer share the same memory buffer, which is of fixed size.

The task of the Producer is to produce an item, put it into the memory buffer, and again start producing items, whereas the task of the Consumer is to consume items from the memory buffer.



Producer-Consumer problem

•The producer should produce data only when the buffer is not full. In case it is found
that the buffer is full, the producer is not allowed to store any data into the memory
buffer.

•Data can only be consumed by the consumer if and only if the memory buffer is not
empty. In case it is found that the buffer is empty, the consumer is not allowed to use
any data from the memory buffer.

•Accessing memory buffer should not be allowed to producer and consumer at the
same time.



Producer-Consumer problem

• "in" used in a producer code represent the next empty buffer


• "out" used in consumer code represent first filled buffer
• count keeps the count number of elements in the buffer

As we can see in figure that Buffer has total 8 spaces out of which the first 5 are
filled, in = 5(pointing next empty position) and out = 0(pointing first filled position).

Buffer



Operations on processes

The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, these systems must provide a mechanism for process
creation and termination.

Process Creation: A process may create several new processes, via a create-process
system call, during the course of execution. The creating process is called a parent
process, and the new processes are called the children of that process. Each of these
new processes may in turn create other processes, forming a tree of processes. A
process is identified by a unique process identifier (or pid), which is typically an
integer number.

In general, a process will need certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess, that subprocess may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process.



Process Creation

When a process creates a new process, two possibilities exist in terms of execution:

• The parent continues to execute concurrently with its children.


• The parent waits until some or all of its children have terminated.

Process Creation in Windows:


Processes are created in the Win32 API using the CreateProcess() function. The parent process waits for the child to complete by waiting on the child's process handle (for example, with WaitForSingleObject()). Once the child process exits, control returns to the parent process.



Process Creation in UNIX

In the UNIX operating system, a new process is created by the fork() system call. The exec() system call is often used after a fork() system call; it loads a binary file into memory. The parent can then create more children; or, if it has nothing else to do while the child runs, it can issue a wait() system call to move itself off the ready queue until the termination of the child. When the child process completes (by either implicitly or explicitly invoking exit()), the parent process resumes from the call to wait(), where it completes using the exit() system call. The figure illustrates process creation using the fork() system call.



Process Termination

A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call. At that point, all the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.

Termination can occur in other circumstances as well. A process can cause the termination of another process via an appropriate system call (for example, TerminateProcess() in Win32).

A parent may terminate the execution of one of its children for a variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

Some systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon is referred to as cascading termination.
Example: Process Creation

fork():

• It creates a new process
• It takes no arguments
• It returns the pid of the child process to the parent (and 0 to the child)

How many child processes does the fork() system call create?

1 fork:
main()
{
    fork();
    printf("hello");
}
→ 1 child process

2 forks:
main()
{
    fork();
    fork();
    printf("hello");
}
→ 3 child processes

3 forks:
main()
{
    fork();
    fork();
    fork();
    printf("hello");
}
→ 7 child processes

With n fork() calls, 2^n − 1 child processes will be created.
Inter Process Communication (IPC)

Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions.

The communication between these processes can be seen as a method of cooperation between them.

IPC Types: Processes can communicate with each other through both of the following models:



Inter Process Communication (IPC) Types

There are two fundamental models of IPC:

Shared Memory – A region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.

Message Passing – Communication takes place by means of messages exchanged between the cooperating processes. Processes communicate with each other without resorting to shared variables. (pipes, FIFOs, sockets, message queues, semaphores, signals)



IPC: Message Passing Scheme
Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the
same address space.

It is particularly useful in a distributed environment, where the


communicating processes may reside on different computers
connected by a network.

For example, a chat program used on the Web.

Message passing is one solution to the communication requirement. An added bonus: it works both on shared-memory machines and in distributed systems.



IPC: Message Passing Scheme (Direct Communication)

A message passing facility provides at least two operations:


send(message) – message size fixed or variable
receive(message)

If P and Q wish to communicate, they need to establish a communication link


between them and exchange messages via send/receive primitives.

In Direct Communication, processes must name each other explicitly:


send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q

Send primitive includes a specific identifier of the destination process.


Receive primitive could use source parameter to return a value when the receive
operation has been performed.

A communication link in direct communication is established automatically. A link is associated with exactly one pair of communicating processes, i.e. between each pair there exists exactly one link.



IPC: Message Passing Scheme (Indirect Communication)
Messages are sent to and received from mailboxes.
Each mailbox has a unique id.
Processes can communicate only if they share a mailbox.
We can think of a mailbox as an abstract object into which messages can be placed and from which they can be removed.

Properties of communication link in Indirect Communication:


Link is established only if processes share a common mailbox.
A link may be associated with many processes.
Each pair of processes may share several communication links



Inter Process Communication (IPC)

Operations:
create a new mailbox.
send and receive messages through mailbox.
destroy a mailbox.

Primitives:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A

Mailbox Sharing:
Let P1, P2, and P3 share mailbox A
P1 sends
P2 and P3 receive.



IPC: Message Passing- Synchronization

Message passing may be either blocking or non-blocking.

Blocking is considered synchronous whereas Non-blocking is considered asynchronous.

Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox. (Producer process blocks when the buffer is full)

Blocking receive: The receiver blocks until a message is available. (Consumer process
blocks when the buffer is empty)

Non-blocking send: The sending process sends the message and resumes operation

Non-blocking receive: The receiver receives either a valid message or a null



IPC: Message Passing- Buffering

Whether communication is direct or indirect, messages exchanged by communicating


processes reside in a temporary queue. Such queues can be implemented in three ways:

Zero Capacity: The queue has a length of zero. Thus the link cannot have any
messages waiting in it. The sender must block until the recipient receives the message.

Bounded Capacity: The queue has finite length n. Sender must wait if link is full

Unbounded Capacity: The queue is of infinite length. So any number of messages can
wait in it. Thus the sender never blocks



IPC: Shared Memory System

Shared memory allows two or more processes to share a given region of memory.
This is the fastest form of IPC because the data does not need to be copied between
the client and server.
The only trick in using shared memory is synchronizing access to a given region
among multiple processes.
If the server is placing data into a shared memory region, the client shouldn't try to access the data until the server is done.
Often semaphores are used to synchronize shared memory access. We can use record
locking as well.



IPC: Shared Memory System

In shared memory we declare a given section in the memory as one that will be used
simultaneously by several processes.
Typically a shared memory region resides in the address space of the process creating
the shared memory segment.
Other processes that wish to communicate using this shared memory segment must
attach it to their address space.
A process must first create a shared memory segment using the shmget() system call (SHared Memory GET). On success, shmget() returns a shared memory identifier that is used in subsequent shared memory functions.



Process Synchronization

Concurrent Processes (if exist at the same time) are categorized into two types on the
basis of synchronization and these are given below:
Independent Process
Cooperative or Cooperating Process

Independent Processes
Two processes are said to be independent if the execution of one process does not affect, and is not affected by, the execution of the other process.
An independent process does not share any data with any other process.

Cooperative or Cooperating Processes
Two processes are said to be cooperative if the execution of one process affects the execution of the other process. These processes need to be synchronized so that the order of execution can be guaranteed.
A cooperating process shares data with other processes.



Process Synchronization

Parallelism:
• Processes are executing simultaneously in real time.
• It is known as real concurrency.
• Multiple cooperating processes run simultaneously on different processors.
• In a single-chip system, multiple processing elements exist to execute multiple processes; in multiprocessor systems, multiple CPUs are used to execute multiple processes at the same time.

Concurrency:
• Only the impression is given that processes are executing simultaneously; in real time there is no simultaneous execution.
• It is known as virtual concurrency.
• Concurrency is implemented on uniprocessor systems (single CPU): for example, the CPU executes P1 while the I/O device serves P2, and then they swap.


Process Synchronization
Process Synchronization was introduced to handle problems that arose during the execution of multiple processes.

It is the task of coordinating the execution of processes in such a way that no two processes can have access to the same shared data and resources at the same time.

Process Synchronization is mainly needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

It is the mechanism to ensure a systematic sharing of resources amongst concurrent processes.



Race Condition

A race condition occurs when multiple processes execute the same code or access the same memory or shared variable, and the output or the value of the shared variable depends on the particular order in which the accesses take place.

All the processes race, each claiming that its output is correct; this condition is known as a race condition.

The output depends on who finishes the race last.

In a race condition, concurrent processes race with each other to access a shared resource in arbitrary order and produce arbitrarily wrong results.

The race condition is undesirable and has to be avoided.

Two or more concurrent processes sharing a common resource have to follow some defined protocols to avoid race conditions.





Suppose there are two processes P0 and P1. Both share a common variable A = 0. While accessing A, both processes increment the value of A by 1.

Case 1: the order of execution is P0, then P1.
Process P0 reads the value of A = 0, increments it by 1 (A = 1) and writes the incremented value to A.
Now, process P1 reads the value of A = 1, increments it by 1 (A = 2) and writes the incremented value to A.
So, after both processes P0 and P1 finish accessing the variable A, the value of A is 2.

Case 2: the order of execution is P1, then P0.
Process P1 reads the value of A = 0, increments it by 1 (A = 1) and writes the incremented value to A.
Now, process P0 reads the value of A = 1, increments it by 1 (A = 2) and writes the incremented value to A.
So, after both processes finish accessing the variable A, the value of A is 2.
But in concurrency, any process can execute at any time. Suppose that P0 and P1 are permitted to execute in any arbitrary fashion; then either of the following could occur (P0 and P1 are concurrent processes):

Possibility 1 (P0, P1, P0, P1):
P0: read(A) = 0
P1: read(A) = 0
P0: A = A+1 = 1
P1: A = A+1 = 0+1 = 1
Wrong result: A = 1

Possibility 2 (P0, P1, P0):
P0: read(A) = 0
P1: read(A) = 0
P0: A = A+1 = 1
Wrong result: A = 1

We get wrong results because we allow concurrent execution of the processes: they can access the shared variable in any order. Due to the race condition we get wrong results in both possibility 1 and possibility 2.
CLASSIC BANKING EXAMPLE –

Consider that your bank account has $5000.

You try to withdraw $4000 using net banking and simultaneously try to withdraw via ATM too.
For net banking, at time t = 0 ms the bank checks that you have $5000 as balance and you are trying to withdraw $4000, which is less than your available balance. So it lets you proceed further, and at time t = 1 ms it connects you to the server to transfer the amount.
Imagine that, for the ATM, at time t = 0.5 ms the bank checks your available balance, which is currently $5000, and thus lets you enter the ATM password and the withdrawal amount.
At time t = 1.5 ms the ATM dispenses the cash of $4000, and at time t = 2 ms the net banking transfer of $4000 completes.

EFFECT ON THE SYSTEM

Now, due to concurrent access and the processing time the computer takes in both ways, you were able to withdraw $3000 more than your balance. In total $8000 were taken out while the balance was just $5000.



How to avoid Race Condition
We can avoid race conditions by using Mutual Exclusion.

Mutual Exclusion means that, at a time, only one of the processes should be executing its critical section.

Mutual exclusion algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as global variables, by using a piece of code known as the Critical Section.



Critical Section
Consider a system consisting of N processes {P0, P1, P2, P3…..PN-1}. Each process
has a segment of code called critical section.

The critical section is that section of the process where process may be changing
common variables, updating tables, writing a file and so on.

The critical-section code must be designed so that a process must initially request to enter its critical section. If permitted, it then executes the critical section, and there must be an exit section. The remaining code is in the remainder section. Below you can see the general structure used to implement a critical section.



Constituents of Critical Section
Entry Section
It is the code segment of the process that is executed when the process intends to enter its critical section (CS). To enter the critical-section code, a process must request permission; the entry-section code implements this request.
By executing this, the process checks its eligibility to enter the CS.
If the process meets all the requirements of its CS then it enters the CS immediately; otherwise it waits until all the requirements are met.

Critical Section
This is the segment of code where process changes common variables, updates a
table, writes to a file and so on. When a process is executing in its critical section, no
other process is allowed to execute in its critical section.

Exit Section
This segment of code is executed by a process immediately after its exit from the CS.
In this section, a process performs certain operations indicating its exit from CS and
thereby enabling the waiting processes to enter into its CS.

Remainder Section
When a process is executing in this section, it means that it is not waiting to enter its CS.
General structure of process Pi
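The general structure (normally shown as a figure here) is, in outline:

```text
do {
    entry section       -- request permission to enter the CS

        critical section    -- access shared variables, tables, files

    exit section        -- signal departure from the CS

        remainder section   -- the rest of the process's code

} while (true);
```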



Properties to Implement Critical Section

To avoid any inconsistency in the result the three properties that should be considered
mandatorily to implement critical section are as follow:

1. Mutual Exclusion
2. Progress
3. Bounded Waiting



Mutual Exclusion
Only one process can execute its critical section at a time. The other process must wait until the previous process has completed its critical-section execution.
No two processes may be simultaneously inside their CS.

Progress
An entry of a process into the critical section is not dependent on the entry of another process into the critical section.
A process can freely enter the critical section if there is no other process present inside it.
A process enters the critical section only if it wants to enter.
A process is not forced to enter the critical section if it does not want to enter.
If a process doesn't want to enter its critical section, it should not be permitted to block another process from entering its critical section.
Bounded Waiting

There is a bounded time up to which a process has to wait to enter its critical section after making the request. The system can't keep a process waiting for an indefinite time to enter its critical section.

In any case the execution of the critical section takes only a short duration, so every process requesting to enter its critical section gets the chance within a finite amount of time.

The wait of a process to enter the critical section is bounded.



Solution to Implement Critical Section

There can be many solutions to implement the critical section.

But a solution must satisfy three criteria, i.e.:
Mutual exclusion (only one process can enter the critical section at a time),
Progress (a process not wishing to enter its critical section should not block a process wishing to enter its critical section), and
Bounded waiting (a process should not have to wait an indefinite time to enter its critical section).

We have to find a solution for n processes but, to make the understanding better and easier, let us start with just two processes.



Solution to Implement Critical Section for two process

Algorithm1 using turn variable:

In the figure below you can see that we have two processes, P0 and P1. Both have code to enter their critical section.

The shared variable is turn (turn = 0 or 1). If turn = i, it means that process Pi can enter its CS.

Initially turn = 0.



Suppose the value of turn is 0. Now, when process P0 tries to enter its critical section, it executes while(1) and further checks the condition while (turn != 0), which turns out false. So it exits the while loop, enters the critical section, on exit sets turn = 1, and executes the remainder section.

Similarly, P1 checks while(1), enters the loop, and checks while (turn != 1), which turns out to be false. So P1 exits the loop, enters the critical section, then sets turn to 0 again and executes its remainder section.

Here, observe one thing: every time, P0 needs the value turn = 0 to enter its CS, and every time P1 needs the value turn = 1 to enter its CS. So even if P0, after executing its CS, immediately wants to re-enter its CS, it has to wait for P1 to set the value of turn to 0.

Here we have achieved mutual exclusion, but progress is not achieved: if P1 does not want to enter its CS, it will block P0 from entering its CS.



Solution to Implement Critical Section for two process

Algorithm 2 using flag variable:

Shared variable : a flag [ ] array


If flag[i] = T, it means that process Pi wants to enter its CS.

flag[0] = F, flag[1] = F (initially)
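The slide's figure is not reproduced here; a sketch of the two processes under this scheme is:

```
Process P0                            Process P1
while (1)                             while (1)
{                                     {
    flag[0] = T;                          flag[1] = T;
    while (flag[1]) ;  // wait            while (flag[0]) ;  // wait
    critical section                      critical section
    flag[0] = F;                          flag[1] = F;
    remainder section                     remainder section
}                                     }
```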



Now let us begin again. P0 executes while(1), which is true, so it enters the loop and sets
flag[0] = T, announcing that P0 wants to enter its CS. P0 then checks while (flag[1]) to see
whether P1 is in its CS; this is false, so it exits the loop, enters its CS, and sets flag[0] = F
on exit.

Now P1 executes while(1), which is true, so it enters the loop and sets flag[1] = T,
announcing that P1 wants to enter its CS. It then checks while (flag[0]) to see whether P0
is in its CS; this is false, since P0 set flag[0] = F on exit. So P1 exits the while (flag[0]) loop,
enters its CS, and sets flag[1] = F on exit.
Here we have achieved mutual exclusion.

If, after executing its CS, P1 wants to re-enter its CS immediately, it can, since the entry
condition flag[0] = F still holds. The same is the case with P0, so strict alternation is no
longer forced.

However, consider the case when P0 has just announced its intention by setting
flag[0] = T, and at that moment a context switch happens and P1 gets charge and starts
executing. P1 sets flag[1] = T and then checks while (flag[0]), which is now true, so P1 gets
blocked in the loop; P0, when rescheduled, likewise blocks in while (flag[1]).
Here we have achieved mutual exclusion, but not progress: both processes can block each
other indefinitely (deadlock).



Solution to Implement Critical Section for two processes

Peterson's Algorithm / Correct Solution


(Intuition: "I need it, but you have it first if you need it.")

Two process solution


Two processes share two variables: flag[ ] and turn.
The variable turn indicates whose turn it is to enter the CS.
The flag array indicates which process is ready to enter the CS, i.e. flag[i] = True means
that process Pi wants to enter its CS.
Here, we use both the flag array and the turn variable, with
flag[0] = F, flag[1] = F and turn = 0 initially.
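Putting the two variables together (consistent with the checks in the walkthrough that follows), Peterson's algorithm for the two processes is:

```
Process P0                                  Process P1
while (1)                                   while (1)
{                                           {
    flag[0] = T;                                flag[1] = T;
    turn = 1;                                   turn = 0;
    while (turn == 1 && flag[1] == T) ;         while (turn == 0 && flag[0] == T) ;
    critical section                            critical section
    flag[0] = F;                                flag[1] = F;
    remainder section                           remainder section
}                                           }
```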



Starting with P0: it enters while(1), sets flag[0] = T and turn = 1 to announce its intention
to enter the CS. It then checks while (turn == 1 && flag[1] == T), which turns out false
since flag[1] = F. So P0 enters its CS and sets flag[0] = F on exit.

Now P1 enters while(1), sets flag[1] = T and turn = 0 to announce its intention to enter
the CS. It then checks while (turn == 0 && flag[0] == T), which turns out false since
flag[0] = F. So P1 enters its CS and sets flag[1] = F on exit.
Here, even if a context switch happens, the processes will not get into a deadlock as in
the previous case. This solution is called Peterson's solution.

Let us see whether it satisfies all the 3 requirements of a CS solution

Mutual Exclusion: If flag[1] is set and P1 is executing its CS, then P0 cannot enter the CS
(as turn = 1).
If flag[0] is set and P0 is executing its CS, then P1 cannot enter the CS (as turn = 0).
Thus at a time, at most one of the cooperating processes can be in its CS.
Thus the algorithm satisfies the mutual exclusion requirement.



Progress: Consider a situation where process P0 is running and wants to enter its CS. It
executes the statement flag[0] = True and is preempted just before executing the while
statement.

Suppose P1 is now dispatched to run. P1 also intends to enter its CS and sets flag[1] to
true.

Now both have set their flags to True. The value of the turn variable breaks the tie, since
it can hold only one value at a time, either 0 or 1.

If flag[0]= T and turn=0 then P0 enters into CS


If flag[1]= T and turn=1 then P1 enters into CS

So the requirement of progress is met.

Bounded Waiting: Suppose P0 intends to enter its CS; it sets flag[0] to True (and turn = 1)
and then checks the status of flag[1].
There are two possibilities:



1. If flag[1] = F, then P0 enters its CS.
2. If flag[1] = T, then P1 enters its CS (as turn = 1). P0 waits till P1 exits from its CS.
After exiting, flag[1] = F, thus enabling P0 to enter its CS.

Thus, after P0 indicates its intention to enter its CS and before it actually enters, P1 can
enter its CS at most once.

Thus the requirement of bounded waiting is met.

Here all three conditions of the CS problem are satisfied:

Mutual Exclusion, Progress and Bounded Waiting.



Solution to Implement Critical Section for two processes

Dekker's Algorithm

boolean flag[2];   // flag[0] = F, flag[1] = F initially
int turn;          // turn = 0 (i.e. i) initially
Two processes Pi and Pj, i.e. P0 and P1 (say i = 0 and j = 1)



Solution to Implement Critical Section for two processes

Process Pi (P0)                               Process Pj (P1)
while (1)                                     while (1)
{                                             {
    flag[i] = T;                                  flag[j] = T;
    while (flag[j])                               while (flag[i])
    {                                             {
        if (turn == j)                                if (turn == i)
        {                                             {
            flag[i] = F;                                  flag[j] = F;
            while (turn == j) ; // do nothing             while (turn == i) ; // do nothing
            flag[i] = T;                                  flag[j] = T;
        }                                             }
    }                                             }
    critical section                              critical section
    turn = j;                                     turn = i;
    flag[i] = F;                                  flag[j] = F;
    remainder section                             remainder section
}                                             }

Here all three conditions of the CS problem are satisfied:

Mutual Exclusion, Progress and Bounded Waiting.
Solution to Implement Critical Section for N Processes

Lamport’s Bakery Algorithm:

It works for the synchronization of an arbitrary set of N cooperating processes.

Before entering its critical section, a process receives a number (like in a bakery).
Holder of the smallest number enters the critical section.

The numbering scheme here generates numbers in increasing order of enumeration, i.e.
1, 2, 3, 4, 5, ...

If processes Pi and Pj receive the same number and i < j, then Pi is served first;
else Pj is served first (PIDs are assumed unique).

Two shared variables are used:


Boolean Choosing[N]: an array of size N, initially all False (F).
When Pi wants to enter its CS (and is picking its number), Choosing[i] = T.
Solution to Implement Critical Section for N Processes

int Number[N]: an array of size N, initially all 0.
When Pi is waiting to enter its CS, Number[i] will be non-zero.
Number[i] = 0 means that Pi does not want to enter the CS.
Number[i] != 0 means that Pi is waiting for the CS.

For example, among processes P0, P1, ..., PN, if P3 wants to enter the CS, then:
Choosing[3] = T
Number[3] = max(Number[0], Number[1], Number[2], ...) + 1, so Number[3] != 0



Solution to Implement Critical Section for N Processes

Algorithm for Process Pi

int i, j ;   // local variables
do
{
    Choosing[i] = T ;   // Pi goes to choose a token for entry into CS
    Number[i] = max(Number[0], Number[1], ..., Number[N-1]) + 1 ;
    Choosing[i] = F ;   // token chosen
    for (j = 0; j < N; j++)
    {
        while (Choosing[j]) ;
            // if Choosing[j] = T then keep looping here,
            // till Pj has finished choosing its token
        while (Number[j] != 0 && (Number[j], j) < (Number[i], i)) ;
            // if this is true then Pj enters its CS before Pi, so
            // keep looping here, till Pj has finished executing its CS
    }
    < Critical Section >    // Pi enters its CS when each such Pj has finished and set Number[j] = 0
    Number[i] = 0 ;         // after executing its CS, Pi resets its token
    < Remainder Section >
} while (1) ;
Solution to Implement Critical Section for N Processes

Example:
Let there be N = 3 processes, i.e. P0, P1 and P2.

Choosing 0 1 2
F F F

Number 0 1 2
0 0 0
If P0 wants to enter in its CS
Choosing[0] = T // P0 goes to choose token
Choosing 0 1 2
T F F

Number[0] = max(Number[0],Number[1],Number[2])+1= 1
Number 0 1 2
1 0 0

Choosing[0] = F // P0 chosen token


Choosing 0 1 2
F F F
Solution to Implement Critical Section for N Processes

Now P0 (i = 0) checks whether any other process is residing in, or ahead of it for, the CS,
by running the for loop for j = 0, 1, 2.

Suppose P1 also wants to enter the CS. P1 goes to choose its token:
Choosing[1] = T
Number[1] = 1 + 1 = 2

When j = 1 in the for loop, the condition of while (Choosing[j]) ; is true, so P0 must wait:
P1 is still choosing its token. Once P1 has chosen its token, Choosing[1] = F and this while
loop terminates.

Now P0 checks the second condition for i = 0 and j = 1:
while (Number[1] != 0 && (Number[1], 1) < (Number[0], 0))
This condition is false (T && F), since (2, 1) < (1, 0) does not hold, so the while loop
terminates.
P0 enters its CS, while P1 waits, because the corresponding while condition in process P1
(with j = 0 and i = 1) is true (T && T).
When P0 has completed its CS and exits, it sets Number[0] = 0; the while condition in
process P1 then becomes false (F && T) and terminates.
Solution to Implement Critical Section for N Processes

P1 now enters its CS.

Suppose instead that, after P0 exits the CS, the token numbers of P1 and P2 are the same,
i.e. Number[1] = 2 and Number[2] = 2. Then P1 enters the CS first,
since the PID of P1 is smaller than that of P2.

Let us verify that the Bakery algorithm meets all three requirements of a CS problem
solution.

Mutual Exclusion:
When no process is executing in its CS, one of the waiting processes, say Pi, is chosen to
enter its CS if it satisfies the following criteria:
Pi has the lowest token number among the waiting processes.
If the lowest token number is shared by a set of processes, then the process with the
lowest process id among them is chosen to enter its CS.

As the process id of each process is distinct, at any time only one process can meet the
above criteria, and thus only one process will be executing in its CS.

Thus, the requirement of mutual exclusion is met.


Solution to Implement Critical Section for N Processes

Progress:
A waiting process Pi, after selecting its token, checks all other waiting processes to see
whether any of them has priority to enter the CS earlier than Pi. If no such process is
waiting and no process is executing in its CS, then Pi enters its CS immediately.

No process can prevent any other process from executing its CS forever.

Thus the progress requirement is met.

Bounded Waiting:
This algorithm follows the FCFS principle: the process that comes first (with the smaller
token number, ties broken by PID) is served first.
After a process Pi selects its token, the only processes that can enter their CS earlier
than Pi (and only once each) are those that selected their tokens earlier than Pi, or those
that selected their tokens at almost the same time as Pi but have lower process ids.

Thus the bounded waiting requirement is met.



Semaphores

The wait( ) operation is performed when a process wants to acquire the resource,
and the signal( ) operation is performed when the process wants to release the
resource.

The wait( ) and signal( ) operations must be performed indivisibly, i.e. if a process is
modifying the semaphore value, no other process may simultaneously modify the
semaphore value.
In addition, the execution of wait( ) and signal( ) must not be interrupted in
between.

The semaphores are of two types:


Counting Semaphore
Binary Semaphore



Semaphore:
A semaphore is a signalling mechanism built around a shared variable. When a
process wants to go through a critical section, it checks whether the variable is free
or busy. If the variable is busy, the process waits until it becomes free; otherwise the
process marks the semaphore variable busy, goes through the critical section, does
its job, and then comes out of the critical section, making the variable free again.

While(1)
{
    Entry section         // wait operation performed
    CS
    Exit section          // signal operation performed
    Remainder section
}



Binary Semaphores

It provides mutual exclusion.

It is used when multiple processes want to access the critical section, i.e. to deal with
the critical section problem (CSP).

A binary semaphore is an integer variable (taking the values 0 and 1), which can be
accessed by a cooperating process only through two atomic operations:
wait( ) / P(s) / Down( )
signal( ) / V(s) / Up( )

s = 1 (CS is free)
s = 0 (CS is in use)

A binary semaphore is initialized by the OS to 1.


int s = 1;   // s is the binary semaphore
             // any process can access s with the help of wait and signal

wait: This primitive is invoked by a cooperating process while requesting entry into its
CS.
The first process invoking wait will set the semaphore value to 0 (s = 0) and proceed to
enter its CS.
If s = 0, it means that some process is inside the CS.
If s = 1, it means that the CS is empty.

If a process Pi requests entry into its CS while another cooperating process is still
executing in its CS, then Pi has to wait.
So at a time only one waiting process is permitted to enter its CS.

A waiting process repeatedly checks the value of the semaphore till it is found to be 1.
Then it decrements the value to 0 and proceeds to enter its CS.

Wait( ) operation can be implemented as-

void wait( s )
{
    while (s <= 0) ;   // no operation (busy wait)
    s-- ;
}

The wait operation must be executed atomically; otherwise more than one waiting
process may find the semaphore value to be 1, decrement it to 0, and proceed to enter
its CS simultaneously, and the mutual exclusion requirement would be violated.



Signal: This operation is invoked by a cooperating process when it is exiting from its CS.
The operation increments the value of the semaphore to 1, so that one of the waiting
processes may enter its CS. After a process exits the CS, the value of s becomes 1.

signal( ) operation can be implemented as-

void signal( s)
{
s++;
}

The wait and signal operation must be executed atomically otherwise race condition
may occur.



wait(s) / P(s) / Down(s)                  signal(s) / V(s) / Up(s)

void wait( s )                            void signal( s )
{                                         {
    while (s <= 0) ;  // no operation         s++ ;
    s-- ;                                 }
}

While(1)
{
Entry section wait operation performed
CS
Exit section signal operation performed
Remainder section
}



Let us see whether a CS solution using binary semaphores satisfies the three
requirements:

Mutual Exclusion:

Initially s = 1 and no process is executing its CS.
P1 executes the wait operation, sets s = 0, and enters its CS.
P2 executes the wait operation, sees s = 0, and keeps looping in the while loop of the
wait operation.
Due to this "spinning" of the waiting process P2 in the while loop, binary
semaphores are also known as "spin locks".
When P1 exits its CS, it executes the signal operation, which makes s = 1.
P2, looping in the while condition of the wait operation, sees s = 1, exits the while
condition, sets s = 0, and enters its CS.



As the wait operation is executed atomically, the other waiting processes (other than
P2) will find the value of s as 0 and will continue to reside in the while loop.

So, at a time only one cooperating process can enter the CS, provided the wait
operation is executed atomically.

So, the requirement of mutual exclusion is met.

Progress:
Initially s = 1 and no process is executing in the CS.
One of the waiting processes looping in the while loop of the wait operation will find
s = 1, exit from the while loop, decrement the semaphore to 0 and enter its CS.
So if no process is executing in the CS and some are waiting to enter, then one of the
waiting processes will enter its CS immediately. The requirement of progress is met.

Bounded Wait:
Arbitrarily, one of the waiting processes gets entry to its CS when the cooperating
process executing in its CS exits.
Since this selection is arbitrary, a process waiting to enter its CS may face starvation;
there is no fixed rule, and it may happen that a process never gets a chance to enter its
CS. So the bounded waiting requirement is not met.



Binary Semaphores:

The Down and Up operations (blocking version) are written as:

Down( )
{
    if (s.value == 1)
    {
        s.value = 0;
    }
    else
    {
        put the process in the suspended list s.L and block it;
    }
}

Up( )
{
    if (suspended list is empty)
    {
        s.value = 1;
    }
    else
    {
        select a process from the suspended list and wakeup( ) it;
    }
}

Down( ) is successful only when s = 1.



Advantages:

Semaphores are simple to implement.


Semaphores are machine independent.
Correctness is easy to determine

Disadvantages:

A solution using a binary semaphore does not meet the requirement of bounded waiting.
A process waiting to enter its CS performs busy waiting, thus wasting CPU cycles.
Semaphores are essentially shared global variables.
Access to semaphores can come from anywhere in the program (as semaphores are
global variables); there is no control or guarantee of proper usage.



Question: Given a binary semaphore s = 1, consider the following sequence of operations:
3P, 1V, 5P, 3V. What will be the maximum size of the waiting queue?

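One way to check the answer (not part of the slides) is to simulate the blocking binary semaphore directly: P blocks the caller when s is already 0, and V wakes one blocked process (s stays 0) or sets s = 1 if nobody is waiting.

```python
def run(ops, s=1):
    """Return (maximum queue length, processes still blocked) after ops."""
    blocked = 0
    max_blocked = 0
    for op in ops:
        if op == 'P':
            if s == 1:
                s = 0                                   # free CS grabbed immediately
            else:
                blocked += 1                            # caller joins the waiting queue
                max_blocked = max(max_blocked, blocked)
        else:                                           # 'V'
            if blocked > 0:
                blocked -= 1                            # wake one waiter; it enters its CS
            else:
                s = 1
    return max_blocked, blocked

max_q, still_blocked = run('PPP' + 'V' + 'PPPPP' + 'VVV')
print(max_q, still_blocked)   # maximum queue size 6, with 3 processes left blocked
```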



Counting Semaphores
It is used to control access to a resource that has a finite number of instances.



The wait operation is executed when a process tries to enter the CS. It decrements the
value of the counting semaphore (s.count) by 1.
If the resulting s.count >= 0, the process is allowed to enter the CS.
If s.count < 0, the process is not allowed to enter the CS; it is blocked and put into the
waiting queue.

The signal operation is executed when a process exits from the CS. It increments the
value of the counting semaphore by 1.
If the resulting s.count > 0, no further action is taken.
If s.count <= 0, a process is chosen from the waiting queue and a wakeup is executed,
so that the chosen process enters the CS.
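In code, the two operations are commonly sketched as follows (s.queue denotes the semaphore's waiting queue; the slides do not reproduce this listing):

```
wait(s)                                  signal(s)
{                                        {
    s.count-- ;                              s.count++ ;
    if (s.count < 0)                         if (s.count <= 0)
    {                                        {
        add this process to s.queue              remove a process P from s.queue
        and block( ) it;                         and wakeup(P);
    }                                        }
}                                        }
```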
To implement mutual exclusion, the value of the counting semaphore is initialized
to 1. This ensures that only one process can be present in the CS at any given time.
The operation of a counting semaphore can be summarized as follows:

1. If s.count = 1 : no process is executing in its CS and no process is waiting in the
semaphore queue.
2. If s.count = 0 : one process is executing in its CS, but no process is waiting in the
semaphore queue.
3. If s.count = -N : one process is executing in its CS and N processes are waiting in the
semaphore queue.
4. When a process is waiting in the semaphore queue, it is not performing any busy
waiting; it is in a "waiting/blocked" state (no while loop is used here).
5. When a waiting process is selected for entry into its CS, it is transferred from the
"blocked" state to the "ready" state.

Advantages:
1. Since the waiting processes will be permitted to enter their critical sections in a
FCFS order, the requirement of bounded waiting is fully met.
2. A waiting process does not perform any busy waiting, thus saving CPU cycles.



Disadvantages:

1. Complex to implement since it involves implementation of FCFS queue.


(but in binary semaphore we used only one integer variable)
2. Additional context switches. When a process is not able to enter its CS, it would
involve two context switches.
(a) from “running” state to “ waiting” state (when process cannot enter into
CS)
(b) from “waiting” state to “ ready” state (when process can enter into
CS)

After two context switches, process would enter its CS.


These context switches will involve certain overheads.



Question: Consider a system where a counting semaphore is initialized to +17.
The operations 23P, 18V, 16P, 14V, 1P are performed. Find the final value of the
semaphore and the number of blocked processes.
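Treating the counting semaphore's value as allowed to go negative (with |value| equal to the number of blocked processes), the sequence can be checked mechanically (not part of the slides):

```python
def final_state(init, ops):
    """Apply a string of 'P'/'V' operations to a counting semaphore."""
    s = init
    for op in ops:
        s += 1 if op == 'V' else -1
    return s, max(0, -s)   # (final value, processes left blocked)

value, blocked = final_state(17, 'P' * 23 + 'V' * 18 + 'P' * 16 + 'V' * 14 + 'P')
print(value, blocked)   # final value 9, no process left blocked
```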



Question: A shared variable x, initialized to zero, is operated on by four concurrent
processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory,
increments by one, stores it to memory and then terminates. Each of the processes Y
and Z reads x from memory, decrements by two, stores it to memory, and then
terminates. Each process before reading x invokes the P operation (i.e. wait) on a
counting semaphore S and invokes the V operation (i.e. signal) on the semaphore S
after storing x to memory. Semaphore S is initialized to two. What is the maximum
possible value of x after all processes complete execution?



Mutex:
A mutex is a locking mechanism: a process locks the resource before
using it and, after the job is complete, makes the resource free again.
Due to the similarity in their implementation, a mutex is sometimes
referred to as a binary semaphore.

Semaphore:
A semaphore is a signalling mechanism built around a shared variable.
When a process wants to go through a critical section, it checks whether
the variable is free or busy. If the variable is busy, the process waits
until it becomes free; otherwise the process marks the semaphore
variable busy, goes through the critical section, does its job, and then
comes out of the critical section, making the variable free again.



1. A semaphore can allow multiple processes to access a pool of shared
resources, but a mutex allows only one thread into the critical section;
any other thread that tries to access the critical section has to wait
until the first thread comes out of that section.

2. A semaphore can be signalled (released) by any other thread or
process, but a mutex can only be released by the thread which
acquired it.

3. A semaphore is a tool to overcome the critical section problem, while
a mutex is a tool that provides deadlock-free mutual exclusion for
our work.

4. Semaphores are used for both processes and threads, but a mutex
is used only for threads.



Solution to Critical Section Problem using mutex lock
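The slides' listing is not reproduced; the usual sketch uses acquire( ) and release( ) operations on a boolean variable (the name available is an assumption):

```
boolean available = true;

acquire( )
{
    while (!available) ;   // busy wait
    available = false;
}

release( )
{
    available = true;
}

while (1)
{
    acquire( );
    critical section
    release( );
    remainder section
}
```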


Synchronization Hardware
Test and modify the content of a word atomically.
Shared data: boolean lock = false;



The Dining Philosophers Problem




Five philosophers sit in a circle. On each side of every philosopher is a
chopstick, which adds up to five chopsticks in total.

The philosophers, naturally, love to think. But they are also there to
eat. The only way that they can eat is if they have two chopsticks.

How do you design a system (or algorithm) that ensures the
philosophers can eat and think, and eat and think, indefinitely?



System constraints

There are three constraints to take into consideration here:

a. Food is infinite, i.e. a philosopher can eat for as long as they want.

b. A philosopher can think for as long as they want.

c. A philosopher can only eat when they have two chopsticks.


The Dining Philosophers Problem

A semaphore is something used to keep track of a resource upon
which a process (or series of processes) is dependent. In the case of
the Dining Philosophers problem, we can think of the resource
(chopsticks) in terms of a counting semaphore.

A counting semaphore simply counts up (or down) on the availability
of the resource (the chopsticks).


The Dining Philosophers Problem Example Explanation:

Explained in a different way: if you wanted to use a bathroom in a
building with only three bathrooms, you could meet the building
manager, who keeps an updated semaphore of how many bathrooms
are occupied and how many are free. Armed with the semaphore,
the building manager can then decide (using different techniques)
who gets to use the rooms when they are free. After allocating a
room, the building manager deducts one resource from the
semaphore, and if a room becomes available again, he adds one
more resource to the semaphore.

A long story, but it shows that the Dining Philosophers problem is a
resource-allocation problem, so we need a counting semaphore to
keep track of the chopsticks:
semaphore = 5
So the critical resource here is the chopsticks.



The Dining Philosophers Problem Explanation:
We can see that the philosophers have two states: eating and
thinking.

philosopher.state[0] = { eating: true, thinking: false }

These states are mutually exclusive (a thinking philosopher is not
eating, and an eating philosopher is not thinking).

When a philosopher picks up one chopstick, it means they are ready
to eat, and are simply waiting for the philosopher on either side of
them to drop the chopstick they are holding.

Let us call this the ready_to_eat state.



A system works like this:

philosopher[i] has 0 chopsticks


philosopher[i] is in thinking state
semaphores = 5
philosopher[i] picks up 1 chopstick
ready_to_eat = true
semaphores=4
philosopher[i] picks up second chopstick
semaphores=3
philosopher[i] is in eating state
philosopher[i] puts down chopsticks
semaphores=5
philosopher[i] is in thinking state
We can then loop indefinitely with these states and
conditions accounted for.
Situation:

What happens if all philosophers pick up one chopstick each?
They are not thinking or eating; they are ready to eat but have
no spare chopsticks!

We then experience something called a deadlock: the processes are
jammed and unable to terminate or complete execution because the
resources are locked in place.

In other words, the semaphore goes down to zero, but all
philosophers are locked in the ready_to_eat state forever.



void Philosopher (void)
{
while(true)
{

Thinking();

Take-Fork(i);
Take-Fork((i+1) % N);

Eat();

Put-Fork(i);
Put-Fork((i+1) % N);
}
}
With one binary semaphore fork[i] per chopstick (all initialized to 1), the same
structure becomes:

void Philosopher (void)
{
    while(true)
    {
        Thinking();

        wait(fork[i]);             // pick up left fork
        wait(fork[(i+1) % N]);     // pick up right fork

        Eat();

        signal(fork[i]);           // put down left fork
        signal(fork[(i+1) % N]);   // put down right fork
    }
}
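The per-fork semaphore version can still deadlock if every philosopher picks up the left fork at the same time. A common remedy (not shown in the slides; fork[i] denotes a per-fork binary semaphore, and the name room is an assumption) is to let at most N-1 philosophers compete for forks at once, using an extra counting semaphore initialized to N-1:

```
void Philosopher (void)
{
    while(true)
    {
        Thinking();

        wait(room);                // at most N-1 philosophers at the table
        wait(fork[i]);             // pick up left fork
        wait(fork[(i+1) % N]);     // pick up right fork

        Eat();

        signal(fork[i]);
        signal(fork[(i+1) % N]);
        signal(room);
    }
}
```

With at most N-1 philosophers holding a fork, at least one of them can always obtain both forks, so the circular wait is broken.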


The readers and writers problem

Motivation: database access


Two groups of processes: readers, writers
Multiple readers may access database simultaneously
A writing process needs exclusive database access



The readers and writers problem

No reader is kept waiting, unless a writer has already obtained the
db semaphore.

Writer processes may starve: if readers keep coming in, they hold
the semaphore db.

An alternative version of the readers-writers problem requires that
no writer is kept waiting once it is "ready": when a writer is waiting,
no new reader can start reading.

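The code referred to above is not reproduced in the slides; the classical readers-preference sketch keeps a count rc of active readers, a mutex protecting rc, and the db semaphore for exclusive database access:

```
semaphore mutex = 1;   // protects rc
semaphore db = 1;      // exclusive access to the database
int rc = 0;            // number of readers currently reading

void reader(void)
{
    wait(mutex);
    rc = rc + 1;
    if (rc == 1) wait(db);     // first reader locks out writers
    signal(mutex);
    read_database();
    wait(mutex);
    rc = rc - 1;
    if (rc == 0) signal(db);   // last reader lets writers in
    signal(mutex);
}

void writer(void)
{
    wait(db);
    write_database();
    signal(db);
}
```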


Sleeping barber problem

There is one barber, and n chairs for waiting customers. If there
are no customers, then the barber sits in his chair and sleeps.
Sleeping barber problem

Barber shop: one service provider, many customers
A finite waiting queue
One customer is served at a time
The service provider (the barber) sleeps when no customers are waiting
A customer leaves if the shop is full
A customer sleeps (blocks) while waiting in the queue



Sleeping barber problem explanation:

There is one barber, and n chairs for waiting customers.

If there are no customers, then the barber sits in his chair and
sleeps.

When a new customer arrives and the barber is sleeping, he wakes
up the barber.

When a new customer arrives and the barber is busy, he sits on a
chair if one is available; otherwise (when all the chairs are full) he
leaves.



Sleeping barber solution

The solution uses three semaphores:

1. customers, which counts waiting customers (excluding the
customer in the barber chair, who is not waiting);
2. barbers, the number of idle barbers (0 or 1);
3. mutex, which is used for mutual exclusion.

We also need a variable, waiting, which also counts the waiting
customers. It is essentially a copy of customers; the reason for having
waiting is that there is no way to read the current value of a
semaphore.
In this solution, a customer entering the shop has to count the
number of waiting customers. If it is less than the number of chairs,
he stays; otherwise, he leaves.
Sleeping barber problem
When the barber shows up for work, he executes the procedure barber, causing
him to block on the semaphore customers because it is initially 0.

The barber then goes to sleep. He stays asleep until the first customer shows up.

When a customer arrives, he executes customer, starting by acquiring mutex to
enter a critical region.

If another customer enters shortly thereafter, the second one will not be able to do
anything until the first one has released mutex.

The customer then checks to see if the number of waiting customers is less than
the number of chairs. If not, he releases mutex and leaves without a haircut.



Sleeping barber problem

If there is an available chair, the customer increments the integer
variable waiting.

Then he does an up on the semaphore customers, thus waking up
the barber. At this point, the customer and barber are both awake.

When the customer releases mutex, the barber grabs it, does some
housekeeping, and begins the haircut.

When the haircut is over, the customer exits the procedure and
leaves the shop.





Sleeping barber problem
#define CHAIRS 5
semaphore customers = 0;  // number of waiting customers
semaphore barbers = 0;    // number of available barbers: either 0 or 1
int waiting = 0;          // copy of customers for reading
semaphore mutex = 1;      // mutex for accessing 'waiting'

void barber(void) {
    while (TRUE) {
        down(customers);       // block if no customers
        down(mutex);           // access to 'waiting'
        waiting = waiting - 1;
        up(barbers);           // barber is in..
        up(mutex);             // release 'waiting'
        cut_hair();
    }
}
Sleeping barber problem

void customer(void) {
    down(mutex);               // access to 'waiting'
    if (waiting < CHAIRS) {
        waiting = waiting + 1; // increment waiting
        up(customers);         // wake up barber
        up(mutex);             // release 'waiting'
        down(barbers);         // go to sleep if barbers = 0
        get_haircut();
    }
    else {
        up(mutex);             /* shop full .. leave */
    }
}

Any problem with this code? Two customers can end up on the chair at once:
after cut_hair( ) the barber immediately loops around and may admit the next
customer before the previous one has left the chair.



Sleeping barber problem: correct synchronization
#define CHAIRS 5
semaphore customers = 0;  // number of waiting customers
semaphore barbers = 0;    // number of available barbers: either 0 or 1
semaphore mutex = 1;      // mutex for accessing 'waiting'
semaphore synch = 0;      // synchronizing the service operation
int waiting = 0;          // copy of customers for reading

void barber(void) {
    while (TRUE) {
        down(customers);       // block if no customers
        down(mutex);           // access to 'waiting'
        waiting = waiting - 1;
        up(barbers);           // barber is in..
        up(mutex);             // release 'waiting'
        cut_hair();
        down(synch);           // wait for customer to leave
    }
}
Sleeping barber problem
void customer(void) {
    down(mutex);               // access to 'waiting'
    if (waiting < CHAIRS) {
        waiting = waiting + 1; // increment waiting
        up(customers);         // wake up barber
        up(mutex);             // release 'waiting'
        down(barbers);         // go to sleep if barbers = 0
        get_haircut();
        up(synch);             // synchronize service: customer leaves
    }
    else {
        up(mutex);             /* shop full .. leave */
    }
}
Numerical
Suppose we want to synchronize two processes P and Q using two binary semaphores
S and T. We want the output string to be "001100110011...". In which order should the
operations W, X, Y, Z be placed?
Process P Process Q
while(1) while(1)
{ {
W Y
Print ‘0’ Print ‘1’
Print ‘0’ Print ‘1’
X Z
} }

W X Y Z
A P(S) V(S) P(T) V(T) S, T =1
B P(S) V(T) P(T) V(S) S =1, T = 0
C P(S) V(T) P(T) V(S) S, T =1
D P(S) V(S) P(T) V(T) S=1, T =0



Solution: Option B, i.e. W = P(S), X = V(T), Y = P(T), Z = V(S), with S = 1 and T = 0
initially.

With S = 1 and T = 0, P must run first: P(S) lets P print "00", V(T) then hands control to
Q, which prints "11" and hands control back with V(S), giving 001100110011...
(Options B and C use the same operations; B is correct because T must start at 0 so
that Q cannot run first.)

Process P              Process Q
while(1)               while(1)
{                      {
    P(S)                   P(T)
    Print '0'              Print '1'
    Print '0'              Print '1'
    V(T)                   V(S)
}                      }

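The chosen option can be verified with a small cooperative simulation (not part of the slides): with S = 1 and T = 0, at most one of the two semaphores is 1 at any time, so the execution order is forced, and each V hands control to the other process.

```python
S, T = 1, 0
out = []
while len(out) < 12:
    if S == 1:            # process P completes one iteration: P(S) .. V(T)
        S = 0
        out += ['0', '0']
        T = 1
    elif T == 1:          # process Q completes one iteration: P(T) .. V(S)
        T = 0
        out += ['1', '1']
        S = 1

result = ''.join(out)
print(result)   # 001100110011
```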


Numerical
Suppose we want to synchronize two processes P and Q using two binary semaphores
S and T. We do not want the output string to contain "01^n0" or "10^n1", where n is odd.
In which order should the operations W, X, Y, Z be placed?
Process P Process Q
while(1) while(1)
{ {
W Y
Print ‘0’ Print ‘1’
Print ‘0’ Print ‘1’
X Z
} }

W X Y Z
A P(S) V(S) P(T) V(T) S, T =1
B P(S) V(T) P(T) V(S) S =1, T = 1
C P(S) V(S) P(S) V(S) S =1
D V(S) V(T) P(S) V(T) S=1, T =1



Solution: Option C, i.e. W = P(S), X = V(S), Y = P(S), Z = V(S), with S = 1.

Both processes use the same semaphore S, so each pair of prints is atomic: the output
is a sequence of "00" and "11" blocks. Between any two 0s there is always an even
number of 1s (and vice versa), so "01^n0" and "10^n1" with n odd cannot occur.

Process P              Process Q
while(1)               while(1)
{                      {
    P(S)                   P(S)
    Print '0'              Print '1'
    Print '0'              Print '1'
    V(S)                   V(S)
}                      }

