OS Unit - 3

The document discusses principles of concurrency, including problems that can arise from concurrent processes, such as deadlocks and starvation. It covers concepts like mutual exclusion, semaphores, and pipes, which can be used to coordinate concurrent processes and ensure safe sharing of resources.

Uploaded by ZINKAL PATEL

Unit - 3

Principles of Concurrency:

Concurrency is the execution of multiple instruction sequences at the same
time. It happens in an operating system when several process threads run in
parallel. The running threads communicate with each other through shared
memory or message passing.
Because concurrency involves sharing of resources, it can lead to problems
like deadlocks and resource starvation.
It underlies techniques such as coordinating the execution of processes,
memory allocation, and execution scheduling for maximizing throughput.

Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of
concurrent processes; both present the same problems.
The relative speed of execution cannot be predicted. It depends on the
following:
 The activities of other processes
 The way operating system handles interrupts
 The scheduling policies of the operating system.

Problems in Concurrency:

 Sharing global resources –
Sharing global resources safely is difficult. If two processes both make
use of a global variable and both read and write that variable, then the
order in which the various reads and writes are executed is critical.
 Optimal allocation of resources –
It is difficult for the operating system to manage the allocation of
resources optimally.

 Locating programming errors –
It is very difficult to locate a programming error because failures are
usually not reproducible.
 Locking the channel –
It may be inefficient for the operating system to simply lock a channel
and prevent its use by other processes.

Advantages of Concurrency:

 Running of multiple applications –
Concurrency makes it possible to run multiple applications at the same
time.

 Better resource utilization –
Resources that are unused by one application can be used by other
applications.

 Better average response time –
Without concurrency, each application has to run to completion before the
next one can start; with concurrency, short requests do not have to wait
behind long-running ones.

 Better performance –
Concurrency enables better performance by the operating system. When one
application uses only the processor and another application uses only the
disk drive, the time to run both applications concurrently to completion
will be shorter than the time to run each application consecutively.

Issues of Concurrency:

 Non-atomic –
Operations that are non-atomic but interruptible by multiple processes
can cause problems.

 Race conditions –
A race condition occurs when the outcome depends on which of several
processes gets to a point first.

 Blocking –
Processes can block waiting for resources. A process could be blocked for
a long period of time waiting for input from a terminal. If the process
is required to periodically update some data, this would be very
undesirable.

 Starvation –
It occurs when a process does not obtain the service it needs to make
progress.

 Deadlock –
It occurs when two processes are each blocked waiting for the other, so
neither can proceed to execute.
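The lost-update problem behind race conditions can be sketched deterministically in Python. This simulation is illustrative and not from the original notes: two "processes" each read a shared counter before either writes back, so one increment is lost.

```python
def run_interleaved():
    shared = 0
    p1_read = shared      # P1 reads 0
    p2_read = shared      # P2 also reads 0, before P1 writes back
    shared = p1_read + 1  # P1 writes 1
    shared = p2_read + 1  # P2 overwrites with 1 -- one increment is lost
    return shared

print(run_interleaved())  # 1, not the expected 2
```

If P2 had read the counter only after P1 finished writing, the result would be 2; the outcome depends entirely on which process reaches the read first, which is exactly the race condition described above.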

Mutual Exclusion: S/W approaches:

Mutual exclusion is a property of process synchronization which states that
"no two processes can be in the critical section at any given point of time".
The term was first coined by Dijkstra. Any process synchronization technique
being used must satisfy the property of mutual exclusion, without which it
would not be possible to get rid of race conditions.

Example:
In the clothes section of a supermarket, two people are shopping for clothes.

Boy A decides upon some clothes to buy and heads to the changing room to
try them out. Now, while boy A is inside the changing room, there is an
‘occupied’ sign on it – indicating that no one else can come in. Girl B has to
use the changing room too, so she has to wait till boy A is done using the
changing room.

Once boy A comes out of the changing room, the sign on it changes from
‘occupied’ to ‘vacant’ – indicating that another person can use it. Hence, girl
B proceeds to use the changing room, while the sign displays ‘occupied’
again.

The changing room is nothing but the critical section, boy A and girl B are two
different processes, while the sign outside the changing room indicates the
process synchronization mechanism being used.
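As a sketch, the changing-room analogy maps directly onto a mutex lock. The Python demo below uses `threading.Lock` to play the role of the 'occupied/vacant' sign; the threads and log are assumptions for illustration, not part of the original notes.

```python
import threading

sign = threading.Lock()  # the 'occupied/vacant' sign on the changing room
room_log = []

def use_changing_room(person):
    with sign:                           # entering flips the sign to 'occupied'
        room_log.append(f"{person} enters")
        room_log.append(f"{person} leaves")
    # leaving the with-block flips the sign back to 'vacant'

t1 = threading.Thread(target=use_changing_room, args=("A",))
t2 = threading.Thread(target=use_changing_room, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
print(room_log)  # each 'enters' is immediately followed by the same person's 'leaves'
```

Because the lock admits one holder at a time, the enter/leave pairs never interleave, regardless of how the threads are scheduled.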

H/W Support:

We are going to learn about hardware protection and its types. First, let's
look at the types of hardware used in a computer system. A computer system
contains hardware such as the processor, monitor, and RAM, and the operating
system ensures that these devices cannot be accessed directly by the user.

Basically, hardware protection is divided into 3 categories: CPU protection,
Memory Protection, and I/O protection. These are explained as following
below.

1. CPU Protection:

CPU protection means that the CPU cannot be given to a process forever; it
should be held only for a limited time, otherwise other processes will not
get the chance to execute. A timer is used to handle this: it gives a
process a certain amount of CPU time, and when the timer expires a signal
is sent to the process to leave the CPU. Hence a process cannot hold the
CPU indefinitely.

2. Memory Protection:

Memory protection addresses the situation in which two or more processes
are in memory and one process may access another process's memory. To
prevent this situation, two registers are used:

 Base register
 Limit register

The base register stores the starting address of the program and the limit
register stores the size of the process. When a process wants to access
memory, the address is checked against these registers to decide whether
the access is allowed.
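The base/limit check can be sketched in a few lines of Python. The `can_access` helper below is hypothetical, for illustration only; real hardware performs this comparison in the MMU on every access.

```python
def can_access(addr, base, limit):
    """A legal access lies in the half-open range [base, base + limit)."""
    return base <= addr < base + limit

print(can_access(1200, base=1000, limit=500))  # True: inside the process
print(can_access(1600, base=1000, limit=500))  # False: past the end
print(can_access(900,  base=1000, limit=500))  # False: before the start
```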

3. I/O Protection:

When I/O protection is ensured, the following can never occur in the
system:

 Terminating the I/O of another process
 Viewing the I/O of another process
 Giving priority to a particular process's I/O

Semaphores:

Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is
positive. If S is zero (or negative), the operation busy-waits until S
becomes positive and only then decrements it.

wait(S)
{
    while (S <= 0)
        ;   // busy-wait until S becomes positive
    S--;
}

 Signal
The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}
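In practice, most languages expose wait and signal under different names. As a sketch, Python's `threading.Semaphore` maps `acquire()` to wait and `release()` to signal; the thread-based demo below is illustrative and not from the original notes.

```python
import threading

S = threading.Semaphore(1)  # initialized to 1: one process may enter
results = []

def critical(name):
    S.acquire()             # wait(S): blocks while the count is 0, then decrements
    results.append(name)    # the critical section
    S.release()             # signal(S): increments the count, waking a waiter

threads = [threading.Thread(target=critical, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2] -- every thread got through, one at a time
```

Unlike the busy-waiting pseudocode above, `threading.Semaphore` blocks waiting threads in a queue instead of spinning.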

Types of Semaphores:
There are two main types of semaphores i.e. counting semaphores
and binary semaphores. Details about these are given as follows −

 Counting Semaphores:
These are integer-valued semaphores with an unrestricted value domain.
These semaphores are used to coordinate resource access, where the
semaphore count is the number of available resources. If resources are
added, the semaphore count is incremented, and if resources are removed,
the count is decremented.

 Binary Semaphores:
Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation only succeeds when the semaphore
is 1, and the signal operation succeeds when the semaphore is 0. Binary
semaphores are sometimes easier to implement than counting semaphores.
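A counting semaphore as a resource counter can be sketched in Python. The pool of two "resources" below is a hypothetical example; `threading.Semaphore(2)` holds the count of free resources, and the bookkeeping variables just verify the invariant.

```python
import threading

pool = threading.Semaphore(2)   # count == number of free "resources"
guard = threading.Lock()        # protects the bookkeeping below
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    with pool:                  # wait: blocks whenever the count reaches 0
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        with guard:
            in_use -= 1
    # leaving the with-block is the signal: the count goes back up

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2: at most two holders at once
```

Six threads compete, but the semaphore caps concurrent holders at the resource count of 2, exactly as described above.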

Advantages of Semaphores:

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow
the mutual exclusion principle strictly and are much more efficient than
some other methods of synchronization.
 When semaphores are implemented with a blocking queue rather than busy
waiting, there is no resource wastage: processor time is not spent
repeatedly checking whether a process may enter the critical section.
 Semaphores are implemented in the machine-independent code of the
microkernel, so they are machine independent.

Disadvantages of Semaphores:

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to
loss of modularity. This happens because the wait and signal operations
prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low-priority processes
access the critical section first and high-priority processes later.

Pipes:

A pipe is a communication medium between two or more related or
interrelated processes. It can be used within one process or for
communication between the child and the parent processes. Communication
can also be multi-level, such as communication between the parent, the
child and the grand-child, etc.
Communication is achieved by one process writing into the pipe and the
other reading from the pipe. The pipe system call creates two file
descriptors: one to write into the pipe and another to read from it.
The pipe mechanism can be viewed with a real-life scenario such as filling
water with a pipe into some container, say a bucket, while someone
retrieves it, say with a mug. The filling process is nothing but writing
into the pipe, and the reading process is nothing but retrieving from the
pipe. One process's output (the water) is the other's input (the bucket).
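As a minimal sketch, the write-then-read behaviour can be demonstrated with Python's `os.pipe`, which returns the two file descriptors described above (a single-process demo, for illustration only; real use would share the descriptors across a parent and child process):

```python
import os

r, w = os.pipe()           # two file descriptors: read end, write end
os.write(w, b"water")      # the "filling" process writes into the pipe
os.close(w)                # closing the write end marks end-of-data
print(os.read(r, 1024))    # b'water': the "retrieving" process reads it back
os.close(r)
```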

Message Passing:

Process communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication
could involve a process letting another process know that some event has
occurred, or transferring data from one process to another. One of the
models of process communication is the message passing model.
Message passing model allows multiple processes to read and write data to the
message queue without being connected to each other. Messages are stored on
the queue until their recipient retrieves them. Message queues are quite useful
for interprocess communication and are used by most operating systems.

In the above diagram, both the processes P1 and P2 can access the message
queue and store and retrieve data.
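A sketch of the model in Python, using an in-process `queue.Queue` to stand in for an OS message queue. The sender and receiver here are threads rather than real processes, purely for illustration; the key property is that they exchange data without being directly connected to each other.

```python
import queue
import threading

msgq = queue.Queue()   # stands in for the OS message queue

def p1():
    # P1 sends a message without knowing who will read it
    msgq.put({"from": "P1", "event": "data ready"})

def p2(out):
    # P2 blocks until a message arrives, then retrieves it
    out.append(msgq.get())

received = []
t2 = threading.Thread(target=p2, args=(received,))
t1 = threading.Thread(target=p1)
t2.start(); t1.start()
t1.join(); t2.join()
print(received[0]["event"])  # data ready
```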

 Advantages of Message Passing Model:

Some of the advantages of the message passing model are given as follows −

 The message passing model is much easier to implement than the shared
memory model.
 It is easier to build parallel hardware using the message passing
model, as it is quite tolerant of higher communication latencies.

 Disadvantage of Message Passing Model:

The message passing model has slower communication than the shared memory
model because the connection setup takes time.

Signals:

A signal is a software-generated interrupt sent to a process by the OS,
for example when the user presses Ctrl-C or when another process wants to
tell this process something.
There is a fixed set of signals that can be sent to a process. Signals are
identified by integers, and signal numbers have symbolic names.

Example:
In the example below, the SIGINT (= 2) signal is blocked and no signals
are pending.

A signal is sent to a process by setting the corresponding bit in the
pending-signals integer for the process. Each time the OS selects a
process to run on a processor, the pending and blocked integers are
checked. If no signals are pending, the process is restarted normally and
continues executing at its next instruction.
If 1 or more signals are pending, but each one is blocked, the process is also
restarted normally but with the signals still marked as pending. If 1 or more
signals are pending and NOT blocked, the OS executes the routines in the
process’s code to handle the signals.
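A minimal sketch of installing a handler and delivering a signal, using Python's `signal` module; here the process signals itself with `raise_signal`, purely for illustration (in practice the signal would come from the OS or another process):

```python
import signal

caught = []

def handler(signum, frame):
    # runs when the pending, unblocked signal is delivered
    caught.append(signum)

signal.signal(signal.SIGINT, handler)   # install handler for SIGINT (number 2)
signal.raise_signal(signal.SIGINT)      # send the signal to this process
print(caught == [signal.SIGINT])        # True: the handler ran instead of the default
```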

Monitors:

The monitor is one of the ways to achieve process synchronization.
Monitors are supported by programming languages to achieve mutual
exclusion between processes, for example Java synchronized methods. Java
also provides the wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined
together in a special kind of module or a package.
2. The processes running outside the monitor can’t access the internal
variable of the monitor but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.

Condition Variables
There are two types of operations that we can perform on the condition
variables of the monitor:

1. Wait
2. Signal

Suppose there are two condition variables:

condition a, b // Declaring variables

 Wait Operation:
a.wait(): The process that performs a wait operation on a condition
variable is suspended and placed in the blocked queue of that condition
variable.

 Signal Operation:
a.signal(): If a signal operation is performed by a process on the
condition variable, then one of the blocked processes is given a chance
to proceed.
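The wait/signal pattern on a condition variable can be sketched with Python's `threading.Condition`, which bundles the monitor lock with one condition variable. This demo is illustrative and not from the original notes; the `ready` flag guards against a signal arriving before the waiter enters the monitor.

```python
import threading

cond = threading.Condition()   # the monitor's lock plus one condition variable
ready = False
seen = []

def waiter():
    with cond:                 # enter the monitor
        while not ready:       # a.wait(): suspend until signalled
            cond.wait()
        seen.append("woke up")

def signaller():
    global ready
    with cond:                 # enter the monitor
        ready = True
        cond.notify()          # a.signal(): give one blocked process a chance

t = threading.Thread(target=waiter)
t.start()
signaller()
t.join()
print(seen)  # ['woke up']
```

The `while not ready` loop re-checks the condition after waking, the standard discipline for monitor condition variables.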
