Notes of Unit 2
UNIT-2
Inter Process Communication (IPC)
A process can be of two types:
i) Independent process.
ii) Co-operating process.
An independent process is not affected by the execution of other processes
while a co-operating process can be affected by other executing processes.
Although one might expect independently running processes to execute most
efficiently, in reality there are many situations where their co-operative
nature can be exploited to increase computational speed, convenience, and
modularity.
Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions.
The communication between these processes can be seen as a method of
co-operation between them. Processes can communicate with each other
through both:
1. Shared Memory
2. Message passing
Explanation
Consider a tube light controlled by two switches, both wired to the same light,
with the light initially OFF. Turning switch 1 ON switches the light on. If
switch 2 is then turned ON while switch 1 is already ON, the light switches off
again: both switches are ON, yet the light is OFF.
Now suppose the light and both switches start in the OFF state, and both
switches are turned ON at exactly the same time. The light still ends up ON.
This is because of the circuit breaker: one of the two switch actions is
tripped by the breaker present in the circuit, to prevent the switches from
behaving inconsistently.
What if the same situation occurs in a computer? Here the ON and OFF actions
are replaced by read and write operations, and the tube light is replaced by
the computer's shared data. Imagine what happens if we rewrite data while the
old data is still being read. This is exactly the situation that produces a
race condition:
Race conditions result in inconsistent output and degrade our application's endurance
and confidence.
In operating systems, concurrency is achieved via threads. Concurrency is the
capacity to carry out many operations at the same time. If many threads access
shared data without being synchronized with one another, the threads processing
that shared data may produce different results on each run.
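The lost-update race described above can be demonstrated directly with threads. The sketch below (in Python, chosen here purely for illustration since the notes name no language) splits the read-modify-write of a shared counter into separate steps, so two unsynchronized threads can overwrite each other's updates; adding a lock makes the count exact.

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafe(n):
    global counter
    for _ in range(n):
        # read-modify-write is not atomic: two threads can read the
        # same value and one of the two updates is then lost
        tmp = counter
        counter = tmp + 1

def increment_safe(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread updates at a time
            counter += 1

def run(worker, n=100_000):
    """Reset the counter, run two threads of `worker`, return the total."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(increment_safe))      # always 200000
print(run(increment_unsafe))    # may fall short of 200000 when updates are lost
```

Whether the unsafe version actually loses updates depends on thread scheduling, which is exactly what makes race conditions hard to reproduce and detect.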
Race Conditions Identification or Detection in Operating
Systems (OS)
Identifying race conditions is very important.
If we fail to identify them, we may lose a great deal of data, and the data
already present may become corrupt.
So it is very important for the user and the computer to detect the occurrence
of race conditions in the operating system (OS).
Finding and detecting race conditions is known to be challenging.
They are a semantic issue that might result from several potential coding errors.
It is preferable to write code that avoids these issues from the outset.
Tools for static and dynamic analysis are used by programmers to find race conditions.
Without starting the software, static testing tools scan everything.
However, they tend to generate many inaccurate reports.
Although dynamic analysis methods produce fewer false positives, they can miss
race conditions that do not occur during the analyzed run of the program.
Data races, which happen when two threads simultaneously target the same memory
region and at least one of them performs a write operation, can occasionally result in
race conditions.
Data races are simpler to spot than race conditions since they need particular
circumstances to manifest.
Tools such as the Go project's data race detector watch for data race
scenarios.
Race conditions pose more significant issues and are more strongly tied to
application semantics.
The critical section problem concerns a code snippet. This code snippet
contains a few variables that can be accessed by several processes, subject to
one condition: only one process may be inside the critical section at a time.
The remaining processes that want to enter the critical section must wait for
the current process to complete its work before they can enter.
The portion of code that is executed by many threads is considered the
critical section of the code.
The critical section is vulnerable to race conditions, because concurrently
running threads may interleave in different orders of execution and so produce
different outputs.
Critical Section Representation
The problem occurs, for example, when two operations act on the same bank
account at the same time: with multiple concurrent accesses, the account data
can become corrupted.
Locks :
It is a mechanism that applies restrictions on access to a resource when
multiple threads of execution exist.
Recursive lock :
It is a type of mutual exclusion (mutex) device that can be locked
several times by the same process/thread without causing a deadlock.
Whereas a "lock" operation on an ordinary mutex fails or blocks when the
mutex is already locked, on a recursive mutex the operation succeeds if
(and only if) the locking thread is the one that already holds the lock.
Semaphore :
It is an abstract data type designed to control the way into a shared
resource by multiple threads and prevents critical section problems in a
concurrent system such as a multitasking operating system. They are a
kind of synchronization primitive.
Readers writer (RW) lock :
It is a synchronization primitive that works out reader-writer problems. It
grants concurrent access to the read-only processes, and writing
processes require exclusive access. This conveys that multiple threads
can read the data in parallel however exclusive lock is required for
writing or making changes in data. It can be used to manipulate access
to a data structure inside the memory.
Hardware Solutions
Here are some hardware solutions to the mutual exclusion problem in
operating systems (OS):
1. Disable interrupts:
On single-processor systems, this is the simplest solution to achieve mutual exclusion.
It prevents interrupt service routines from running, which prevents a process from being
preempted.
However, disabling interrupts limits what software can do inside a critical section.
2. Test-and-set instruction:
This hardware-based approach performs an atomic test-and-set operation on a shared
memory location.
It allows only one thread to modify the value at a time.
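A spinlock built on test-and-set can be sketched as follows. Since Python has no hardware atomic instruction, this sketch simulates the atomicity of test-and-set with an internal Lock; that internal lock is purely an assumption standing in for what real hardware does in a single instruction.

```python
import threading
import time

class TestAndSetLock:
    """Spinlock built on test-and-set.

    Real hardware provides test-and-set as one atomic instruction; here an
    internal Lock models that hardware atomicity, purely for illustration.
    """
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        # atomically: return the old value and set the flag to True
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():       # spin until the old value was False
            time.sleep(0)                 # yield so the lock holder can run

    def release(self):
        self._flag = False

counter = 0
lock = TestAndSetLock()

def worker(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1                      # critical section
        lock.release()

threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 8000: no increments lost
```

Only the thread that reads back `False` from test-and-set enters the critical section; every other thread keeps spinning, which is why the count comes out exact.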
3. Swap Algorithm:
This algorithm uses two boolean variables, lock and key, to regulate mutual exclusion of
processes.
4. Unlock and lock Algorithm:
This algorithm uses three boolean variables, lock, key, and waiting[i], for each process to
regulate mutual exclusion.
Progress
Progress is not guaranteed in this (turn-variable) mechanism. If Pi does not
want to enter the critical section on its turn, then Pj is blocked
indefinitely: Pj must wait arbitrarily long for its turn, since the turn
variable remains 0 until Pi assigns it to j.
Portability
The solution provides portability. It is a pure software mechanism implemented at user
mode and doesn't need any special instruction from the Operating System.
Peterson's Solution in OS
Peterson's solution is a classic solution to the critical section problem.
The critical section problem ensures that no two processes change or
modify a resource's value simultaneously.
For example, let int a=5, and there are two processes p1 and p2 that
can modify the value of a. p1 adds 2 to a a=a+2 and p2 multiplies a with
2, a=a*2.
If both processes modify the value of a at the same time, then a value
depends on the order of execution of the process.
If p1 executes first, a will be 14; if p2 executes first, a will be 12.
This change of values due to access by two processes at a time is the
cause of the critical section problem.
The section in which the values are being modified is called the critical
section.
Besides the critical section, there are three other sections: the entry
section, the exit section, and the remainder section.
i) The process entering the critical region must pass the entry region in
which they request entry to the critical section.
ii) The process exiting the critical section must pass the exit region.
iii) The remaining code left after execution is in the remainder section.
The process may take a long time to wait for the other processes to
come out of the critical region. It is termed as Busy waiting.
This algorithm may not work on systems having multiple CPUs.
The Peterson solution is restricted to only two processes at a time.
Full
The Full variable is used to track the space filled in the buffer by the
Producer process. It is initialized to 0, since initially no space in the
buffer has been filled by the Producer.
Empty
The Empty variable is used to track the empty slots in the buffer. It is
initialized to BUFFER-SIZE, since initially the whole buffer is empty.
Mutex
The Mutex is a binary semaphore used to acquire a lock on the buffer itself,
so that the Producer and Consumer never update the buffer at the same time.
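The Full, Empty, and Mutex variables combine into the classic bounded-buffer producer-consumer solution, sketched here in Python (the notes name no language; `threading.Semaphore` and `threading.Lock` play the roles of the counting semaphores and the mutex):

```python
import collections
import threading

BUFFER_SIZE = 5
buffer = collections.deque()

empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots, starts full
full = threading.Semaphore(0)             # counts filled slots, starts at 0
mutex = threading.Lock()                  # binary semaphore guarding the buffer

produced = list(range(20))
consumed = []

def producer():
    for item in produced:
        empty.acquire()                   # wait for an empty slot
        with mutex:
            buffer.append(item)
        full.release()                    # announce one more filled slot

def consumer():
    for _ in range(len(produced)):
        full.acquire()                    # wait for a filled item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                   # announce one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == produced)   # True: every item delivered, in order
```

Note the ordering: each side waits on its counting semaphore *before* taking the mutex; reversing that order could leave a process holding the mutex while blocked, deadlocking the pair.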
Semaphores
A semaphore is a special kind of synchronization data that can be used only
through specific synchronization primitives.
When a process performs a wait operation on a semaphore, the operation
checks whether the value of the semaphore is >0.
If so, it decrements the value of the semaphore and lets the process
continue its execution; otherwise, it blocks the process on the semaphore.
A signal operation on a semaphore activates a process blocked on the
semaphore if any, or increments the value of the semaphore by 1.
Due to these semantics, semaphores are also called counting semaphores.
The initial value of a semaphore determines how many processes can get
past the wait operation.
Semaphores are of two types:
1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1.
Its value is initialized to 1. It is used to implement the solution of critical
section problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access
to a resource that has multiple instances.
Limitations :
1. One of the biggest limitations of semaphore is priority inversion.
2. Deadlock: suppose a process signals to wake up another process that is not
yet in a sleep state. The wakeup is lost, and the processes may then block
indefinitely.
3. The operating system has to keep track of all calls to wait and signal the
semaphore.
Advantages of Semaphores:
A simple and effective mechanism for process synchronization
Supports coordination between multiple processes
Provides a flexible and robust way to manage shared resources.
It can be used to implement critical sections in a program.
It can be used to avoid race conditions.
Disadvantages of Semaphores:
It can lead to performance degradation due to the overhead associated with
wait and signal operations, and can cause performance issues in a program if
not used properly.
It can result in deadlock if used incorrectly.
It can be difficult to debug and maintain.
It can be prone to race conditions and other synchronization problems if not
used correctly.
It can be vulnerable to certain types of attacks, such as denial-of-service
attacks.
The semaphore was proposed by Dijkstra in 1965 as a very significant technique
to manage concurrent processes by using a simple integer value, known as a
semaphore. A semaphore is simply an integer variable that is shared between
threads. This variable is used to solve the critical section problem and to
achieve process synchronization in the multiprocessing environment.
Event Counters
An event counter is a synchronization primitive that simply counts occurrences
of an event. Its value only ever increases: processes can read the current
count, advance it when an event occurs, and wait until the count reaches a
given threshold. Because processes only read and advance the counter, event
counters allow synchronization without requiring mutual exclusion.
Monitors
1. A monitor is essentially a module that encapsulates a shared resource and
provides access to that resource through a set of procedures. The
procedures provided by a monitor ensure that only one process can access
the shared resource at any given time, and that processes waiting for the
resource are suspended until it becomes available.
Advantages of Monitor
Monitors have the advantage of making parallel programming easier and less
error-prone than techniques such as semaphores.
Disadvantages of Monitor
Monitors have to be implemented as part of the programming language.
The compiler must generate code for them.
This gives the compiler the additional burden of having to know what
operating system facilities are available to control access to critical sections
in concurrent processes.
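Python has no built-in monitor construct, but `threading.Condition` bound to a single shared lock is the closest standard-library analogue, and it can sketch the monitor idea described above: one lock guards all of the shared state, and procedures suspend callers until the resource becomes available.

```python
import threading

class BoundedBufferMonitor:
    """Monitor-style bounded buffer: one lock guards all state, and
    condition variables suspend callers until the resource is ready."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.lock = threading.Lock()
        # two conditions sharing the monitor's single lock
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                        # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()               # suspended until space appears
            self.items.append(item)
            self.not_empty.notify()                # wake one waiting consumer

    def get(self):
        with self.not_empty:                       # enter the monitor
            while not self.items:
                self.not_empty.wait()              # suspended until an item appears
            item = self.items.pop(0)
            self.not_full.notify()                 # wake one waiting producer
            return item

buf = BoundedBufferMonitor(3)
out = []

def consume():
    for _ in range(5):
        out.append(buf.get())

t = threading.Thread(target=consume)
t.start()
for i in range(5):
    buf.put(i)
t.join()
print(out)   # [0, 1, 2, 3, 4]
```

Because both `put` and `get` run under the same lock, only one process is ever inside the monitor at a time, exactly the guarantee the definition above requires.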
Message Passing
Message passing describes how a message is sent from one end to the other,
whether in a client-server model or from one node to another node.
The formal model for distributed message passing has two timing models:
synchronous and asynchronous.
The fundamental points of message passing are:
1. In message-passing systems, processes communicate with one another by
sending and receiving messages over a communication channel. How should this
arrangement be made?
2. The pattern of the connection provided by the channel is described by some
topology systems.
3. The collection of the channels are called a network.
4. By the definition of distributed systems, we know that they are
geographically distributed sets of computers, so it is not always possible for
one computer to connect directly with every other node.
5. So all channels in the Message-Passing Model are private.
6. The sender decides what data has to be sent over the network. An example
is making a phone call.
7. The data is only fully communicated after the destination decides to
receive it, for example when the other person answers your call and starts
to reply to you.
8. There is no time barrier: it is up to the receiver after how many rings
to answer your call, and they can make you wait forever by not picking up.
9. For successful network communication, it needs active participation from
both sides.
Algorithm:
1. Let us consider a network of n nodes named p0, p1, p2, …, pn-1, connected
by bidirectional point-to-point channels.
2. Each node might not know who is at the other end; the topology is arranged
accordingly.
3. Whenever the communication is established and whenever the message
passing is started then only the processes know from where to where the
message has to be sent.
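The channel model above can be sketched with two threads standing in for two nodes. Each private, bidirectional channel is modeled as a pair of `queue.Queue` objects (one per direction); `get()` blocks until the peer sends, mirroring the "receiver decides when to receive" point above. The node names and message contents are illustrative, not from the notes.

```python
import queue
import threading

# one private bidirectional channel = a pair of queues, one per direction
channel_ab = queue.Queue()   # messages from node A to node B
channel_ba = queue.Queue()   # messages from node B to node A

log = []

def node_a():
    channel_ab.put("ping")            # send: A decides what to transmit
    reply = channel_ba.get()          # receive blocks until B answers
    log.append(("A got", reply))

def node_b():
    msg = channel_ab.get()            # receive blocks until data arrives
    log.append(("B got", msg))
    channel_ba.put("pong")            # reply over the private channel

ta = threading.Thread(target=node_a)
tb = threading.Thread(target=node_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(log)   # [('B got', 'ping'), ('A got', 'pong')]
```

The blocking `get()` calls are what make this communication require active participation from both sides: neither node's exchange completes until its peer acts.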
Advantages of Message Passing Model :
1. Easier to implement.
2. Quite tolerant of high communication latencies.
3. Easier to build massively parallel hardware.
4. Message passing libraries are faster and give high performance.
Consider the case of two processes both trying to access a shared resource at
the same instant.
There are certain cases to examine in order to understand the reader-writer
problem in OS and the data inconsistency problem that must be avoided.
Case One
Two processes cannot be allowed to write to the shared data in parallel; each
must wait its turn to get write access.
Case Two
Even if one process is writing on data and the other is reading then also they
cannot be allowed to have the access to shared resources for the reason that
a reader will be reading an incomplete value in this case.
Case Three
In the similar scenario where one process is reading from the shared resource
while another is writing to it, the access cannot be allowed either, because
the writer may be updating data that is not yet visible to the reader. The
solution is to let the writer complete successfully before granting access.
Case Four
In the case that both processes are reading the data then sharing of
resources among both the processes will be allowed as it is a case free from
any such anomaly because reading does not modify the pre-existing data.
The mutex makes a process acquire and release a lock whenever readCount is
being updated.
The writer waits on the semaphore until it is its turn to write, and signals
it afterwards so that other processes can write.
The idea is to hold the semaphore from the moment a reader enters the critical
section until it exits, so that no writer can enter in between, as that could
cause data inconsistency.
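The readCount-plus-mutex scheme above is the classic first readers-writers solution, sketched here in Python. The first reader locks writers out on behalf of all readers; the last reader lets them back in. The two-field `data` record and the thread counts are illustrative choices for demonstrating that readers never observe a half-finished write.

```python
import threading

class ReadWriteLock:
    """First readers-writers solution: readers share, writers are exclusive."""
    def __init__(self):
        self.read_count = 0
        self.mutex = threading.Lock()   # protects read_count itself
        self.wrt = threading.Lock()     # held by a writer, or by readers as a group

    def acquire_read(self):
        with self.mutex:
            self.read_count += 1
            if self.read_count == 1:    # first reader locks writers out
                self.wrt.acquire()

    def release_read(self):
        with self.mutex:
            self.read_count -= 1
            if self.read_count == 0:    # last reader lets writers back in
                self.wrt.release()

    def acquire_write(self):
        self.wrt.acquire()

    def release_write(self):
        self.wrt.release()

data = {"a": 0, "b": 0}   # invariant: both fields always equal
rw = ReadWriteLock()
ok = True

def writer(n):
    for i in range(n):
        rw.acquire_write()
        data["a"] = i
        data["b"] = i                   # updated atomically w.r.t. readers
        rw.release_write()

def reader(n):
    global ok
    for _ in range(n):
        rw.acquire_read()
        if data["a"] != data["b"]:
            ok = False                  # would indicate a torn read
        rw.release_read()

threads = [threading.Thread(target=writer, args=(2000,))]
threads += [threading.Thread(target=reader, args=(2000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(ok)   # True: no reader ever saw an inconsistent pair
```

This corresponds to cases one through four above: the three readers run in parallel with each other, while the writer always gets the resource to itself.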
If successful, attempt to acquire the semaphore for the fork to the right.
If both forks are acquired successfully, eat for a random amount of time and
then release both semaphores.
If not successful in acquiring both forks, release the semaphore for the fork
to the left (if acquired) and then release the mutex semaphore and go back
to thinking.
4. Run the philosopher threads concurrently.
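The fork-acquisition steps above can be sketched as follows. Rather than the try-and-back-off strategy described, this sketch uses the common variant of acquiring forks in a fixed global order (lower-numbered fork first), which defeats circular wait so no deadlock can form; that ordering trick is a substitution, not the exact scheme in the steps above.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]   # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # acquire forks in a global order (lower index first) so that a
    # circular wait, and hence deadlock, cannot form among the threads
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        forks[first].acquire()
        forks[second].acquire()
        meals[i] += 1                 # eat
        forks[second].release()
        forks[first].release()
        # think (implicitly, between rounds)

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```

With naive left-then-right acquisition, all five philosophers could each grab their left fork and wait forever for the right one; breaking the symmetry of the acquisition order is what guarantees the program terminates.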
System Model :
For the purposes of deadlock discussion, a system can be modeled as a
collection of limited resources that can be divided into different categories and
allocated to a variety of processes, each with different requirements.
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other
resources are examples of resource categories.
By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that
category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further.
For example, the term “printers” may need to be subdivided into “laser
printers” and “color inkjet printers.”
Some categories may only have one resource.
For all kernel-managed resources, the kernel keeps track of which resources
are free and which are allocated, to which process they are allocated, and a
queue of processes waiting for each resource to become available.
Application-managed resources can be controlled using mutexes or wait() and
signal() calls (i.e. binary or counting semaphores).
When every process in a set is waiting for a resource that is currently assigned
to another process in the set, the set is said to be deadlocked.
Principles of Deadlock
A deadlock situation on a resource can arise if and only if all of the following conditions
hold simultaneously in a system:
Mutual exclusion: At least one resource must be held in a non-shareable mode. Otherwise,
the processes would not be prevented from using the resource when necessary. Only one
process can use the resource at any given instant of time.
Hold and wait or resource holding: a process is currently holding at least one resource
and requesting additional resources which are being held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it.
Circular wait: each process must be waiting for a resource which is being held by another
process, which in turn is waiting for the first process to release the resource. In general,
there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a
resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for
a resource held by P1.
These four conditions are known as the Coffman conditions from their first
description in a 1971 article by Edward G. Coffman, Jr.
While these conditions are sufficient to produce a deadlock on single-instance
resource systems, they only indicate the possibility of deadlock on systems having
multiple instances of resources.
Deadlock Characterization
A deadlock happens in operating system when two or more processes need some
resource to complete their execution that is held by the other process.
A deadlock occurs if the four Coffman conditions hold true. But these conditions are not
mutually exclusive. They are given as follows −
Mutual Exclusion
There should be a resource that can only be held by one process at a time.
In the diagram below, there is a single instance of Resource 1 and it is held by Process 1
only.
Hold and Wait
A process can hold multiple resources and still request more resources from
other processes which are holding them.
In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is
requesting Resource 1, which is held by Process 1.
No Preemption
A resource cannot be preempted from a process by force. A process can only release a
resource voluntarily.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1.
It will only be released when Process 1 relinquishes it voluntarily after its execution is
complete.
Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource
held by the first process.
This forms a circular chain. For example: Process 1 is allocated Resource2 and it is
requesting Resource 1.
Similarly, Process 2 is allocated Resource 1 and it is requesting Resource 2. This forms
a circular wait loop.
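The circular-wait condition can be checked mechanically: model the processes as a wait-for graph and look for a cycle with a depth-first search. The process names and graph shapes below are illustrative.

```python
def has_cycle(wait_for):
    """Detect circular wait in a wait-for graph given as a dict
    mapping each process to the processes it is waiting on."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:    # back edge: a cycle exists
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# Process 1 waits on Process 2 and vice versa: circular wait, i.e. deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))            # True
# Process 2 waits on nothing, so the chain eventually resolves
print(has_cycle({"P1": ["P2"], "P2": []}))                # False
```

For single-instance resources a cycle in this graph is both necessary and sufficient for deadlock; with multiple instances per resource type, a cycle only indicates the possibility of deadlock, as noted above.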
1. Deadlock ignorance
This approach is used by many operating systems, mainly for end-user systems.
In this approach, the operating system simply assumes that deadlock never
occurs and ignores it.
This approach is best suited to a single end-user system where the user uses
the system only for browsing and other normal tasks.
2. Deadlock prevention
Deadlock happens only when Mutual Exclusion, hold and wait, No preemption and
circular wait holds simultaneously.
If it is possible to violate one of the four conditions at any time then the deadlock can
never occur in the system.
The idea behind the approach is simple: we have to defeat one of the four
conditions. However, there can be a big argument about its physical
implementation in the system.
i) Mutual Exclusion
Mutual exclusion, from the resource point of view, is the fact that a resource
can never be used by more than one process simultaneously. This is fair
enough, but it is the main reason behind deadlock.
If a resource could have been used by more than one process at the same time then the
process would have never been waiting for any resource.
ii) Spooling
For a device like a printer, spooling can work. There is memory associated
with the printer which stores jobs from each process.
Later, the printer collects all the jobs and prints each one of them according
to FCFS.
By using this mechanism, a process doesn't have to wait for the printer and
can continue whatever it was doing; later, it collects the output when it is
produced.
Although spooling can be an effective approach to defeating mutual exclusion,
it suffers from two kinds of problems.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in safe
state or in unsafe state at every step which the operating system performs.
The process continues until the system is in safe state. Once the system moves to
unsafe state, the OS has to backtrack one step.
In simple words, The OS reviews each allocation so that the allocation doesn't cause
the deadlock in the system.
In single instanced resource types, if a cycle is being formed in the system then
there will definitely be a deadlock.
On the other hand, in multiple instanced resource type graph, detecting a cycle is
not just enough.
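For multiple-instance resources, the standard safe-state check is the Banker's algorithm: repeatedly find a process whose remaining need can be met from the available resources, pretend it finishes and returns everything, and see whether all processes can finish. The matrices below are the classic textbook example (5 processes, 3 resource types), used here just to exercise the check.

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: return a safe sequence of
    process indices, or None if the state is unsafe."""
    n = len(allocation)
    m = len(available)
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend process i runs to completion and frees its resources
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None   # no process can finish with what's available: unsafe
    return sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
available = [3, 3, 2]
print(is_safe(available, allocation, need))   # [1, 3, 4, 0, 2]: a safe sequence
```

A `None` result here is the point where the OS would refuse (or roll back) the allocation that led to this state, since granting it could move the system into deadlock.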
Preempt the resource
We can snatch one of the resources from the owner of the resource (process) and
give it to the other process with the expectation that it will complete the execution and
will release this resource sooner.
Well, choosing a resource which will be snatched is going to be a bit difficult.
Rollback to a safe state
The system passes through various states before reaching the deadlock state.
The operating system can roll the system back to a previous safe state.
For this purpose, the OS needs to implement checkpointing at every state.
Kill a process
Killing a process can solve our problem but the bigger concern is to decide which
process to kill.
Generally, Operating system kills a process which has done least amount of work until
now.
This is not a suggestible approach but can be implemented if the problem becomes
very serious.
Killing all processes will lead to inefficiency in the system, because all of
them will have to execute again from the start.