Unit III
S.E.
Operating System
Concurrency is when multiple tasks can start, run, and complete in overlapping time periods, even if they
aren't all running at the same instant.
In computer science, concurrency is when multiple processes are in progress at the same time within an
operating system.
E.g.
You open a browser and open 100 tabs in Chrome/Firefox. Each tab runs in its own process or
thread, and each executes its own JavaScript code on its own web page. This is an example of
concurrent execution in software.
Concurrency generally refers to events or circumstances that are happening or existing at the same time.
In programming terms, concurrent programming is a technique in which two or more processes start,
run in an interleaved fashion through context switching, and complete in an overlapping time period by
managing access to shared resources, e.g. on a single CPU core.
This doesn’t necessarily mean that multiple processes will be running at the same instant – even if the
results might make it seem like it.
Problems in Concurrency
Race conditions occur when the output of a system depends on the order and timing of the events,
which leads to unpredictable behavior. Multiple processes or threads accessing shared resources
simultaneously can cause race conditions.
Deadlocks occur when two or more processes or threads are waiting for each other to release
resources, resulting in a circular wait. Deadlocks can occur when multiple processes or threads
compete for exclusive access to shared resources.
Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know that
some event has occurred or the transferring of data from one process to another.
Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess
communication mechanism or handled by the communicating processes. Some of the methods to provide synchronization are as follows −
● Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores.
● Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time. This is useful
for synchronization and also prevents race conditions.
● Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel
languages and collective routines impose barriers.
● Spinlock
This is a type of lock. Processes trying to acquire this lock wait in a loop, repeatedly checking whether the
lock is available. This is known as busy waiting, because the process does no useful work even though it
remains active.
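The busy wait described above can be sketched in Python. This is a sketch only: the `SpinLock` class here is hypothetical, and a real spinlock relies on atomic hardware instructions such as test-and-set; a non-blocking `threading.Lock` stands in for the atomic flag.

```python
import threading

class SpinLock:
    """Hypothetical spinlock sketch: acquire() busy-waits in a loop.
    A non-blocking threading.Lock stands in for the atomic flag that a
    real spinlock would manipulate with a hardware test-and-set."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Busy waiting: keep testing the lock instead of sleeping.
        while not self._flag.acquire(blocking=False):
            pass  # the process stays active but does no useful work

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def increment(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1   # critical section protected by the spinlock
        lock.release()

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: no increments were lost
```

Because the waiting thread keeps looping instead of sleeping, spinlocks are appropriate only when the lock is expected to be held very briefly.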
Process Synchronization is the way in which processes that share the same memory space are
managed in an operating system.
It helps maintain the consistency of data by using variables or hardware so that only one process can make changes to the
shared memory at a time.
There are various solutions for the same such as semaphores, mutex locks, synchronization hardware, etc.
For example, consider a bank that stores the account balance of each customer in the same
database. Now suppose you initially have x rupees in your account. You take out some
amount of money from your bank account, and at the same time, someone tries to look at the
amount of money stored in your account. As you are withdrawing money, the total balance left
after the transaction will be lower than x. But the transaction takes time, so the other person
still reads x as your account balance, which leads to inconsistent data. If we could somehow
ensure that only one of these operations occurs at a time, we could keep the data consistent.
In the accompanying figure, if Process 1 and Process 2 happen at
the same time, User 2 gets the wrong account balance Y, because
Process 1 is still mid-transaction while the balance is X.
Let us take a look at why exactly we need Process Synchronization. For example, if Process 1 is trying
to read the data present at a memory location while Process 2 is trying to change the data present
at the same location, there is a high chance that the data read by Process 1 will be incorrect.
When more than one process is either running the same code or modifying the same memory or any
shared data, there is a risk that the result or value of the shared data may be incorrect because all
processes try to access and modify this shared resource.
Thus, the processes race with one another, each claiming that its result is the correct one.
This condition is called a race condition. Since many processes use the same data, the results of the
processes may depend on the order of their execution.
This is mostly a situation that can arise within the critical section.
In the critical section, a race condition occurs when the end result of multiple thread executions varies
depending on the sequence in which the threads execute.
Why do we need to have a critical section? What problems occur if we remove it?
A part of the code that can be executed by only a single process at any moment is known as a critical section.
This means that when many processes want to access and change the same shared data, only one process
is allowed to make changes at any given moment. The other processes have to wait until the data is free to
be used.
The wait() function mainly handles the entry to the critical section, while the signal() function handles
the exit from the critical section. If we remove the critical section, we cannot guarantee the consistency
of the end outcome after all the processes finish executing simultaneously.
We'll look at some solutions to Critical Section Problem but before we move on to that, let us take a look
at what conditions are necessary for a solution to Critical Section Problem.
● Mutual exclusion: If a process is running in the critical section, no other process should be
allowed to run in that section at the same time. Mutual exclusion is essential for maintaining
consistency and avoiding conflicts in concurrent systems.
● Progress: If no process is executing in the critical section and other processes are waiting outside the
critical section to execute, then one of the waiting processes must be permitted to enter the critical
section. The decision of which process enters next is made only by processes that are not executing
in their remainder section.
● No starvation: Starvation means a process keeps waiting forever to access the critical section but
never gets a chance. No starvation is also known as Bounded Waiting.
○ A process should not wait forever to enter inside the critical section.
○ When a process submits a request to access its critical section, there should be a limit or
bound, which is the number of other processes that are allowed to access the critical section
before it.
○ After this bound is reached, this process should be allowed to access the critical section.
Question: How can race conditions be avoided?
Answer:
Race conditions can be avoided by implementing mutual exclusion techniques, such
as locks, semaphores, or monitors. These techniques ensure that only one process or
thread can access a shared resource at a time, preventing simultaneous conflicting
accesses and maintaining data integrity.
Concurrent processes are processes that are executed simultaneously or in parallel and may or
may not depend on other processes.
Process Synchronization can be defined as the coordination between two or more processes that have
access to common resources such as a shared section of code, shared data, or other shared resources.
For example: There may be some resource that is shared by 3 different processes, and none of
the processes at a certain time can change the resource, since that might ruin the results of the
other processes sharing the same resource.
Now this Process Synchronization is required for concurrent processes.
For any number of processes that are executing simultaneously, let's say all of them need to access
a section of the code.
This section is called the Critical Section.
We have 2 processes, that are concurrent and since we are talking about Process Synchronization,
let's say they share a variable "shared" which has a value of 5.
What is our goal here? We want to achieve mutual exclusion, meaning that we want to prevent
simultaneous access to a shared resource. The resource here being the variable "shared" with
value 5.
Process 1
int x = shared; // storing the value of shared variable in the variable x
x++;
sleep(1);
shared = x;
Process 2
int y = shared;
y--;
sleep(1);
shared = y;
We start with the execution of Process 1, in which we declare a variable x initialized to the value of the shared variable, which is 5. The
value of x is then incremented to 6, and the process goes into the sleep state. Since execution is concurrent, the CPU does not wait; it
begins processing Process 2. The integer y takes the value of the shared variable, which is still unchanged at 5.
We then decrement y, and Process 2 goes into the sleep state. Execution moves back to Process 1, and the value of the shared variable
becomes 6. Once Process 1 is complete, Process 2 resumes and changes the value of the shared variable to 4.
One would think that if we increment and then decrement a number, its value should be unchanged, and that is exactly what each process
was doing in isolation; however, the final value of the "shared" variable is 4, which is undesired.
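The interleaving described above can be reproduced in Python. The sleep() calls are there only to force this unlucky schedule deterministically, mirroring the sleep(1) in the two processes:

```python
import threading
import time

shared = 5

def process1():
    global shared
    x = shared       # x = 5
    x += 1           # x = 6
    time.sleep(0.1)  # sleep before writing back, as in the walkthrough
    shared = x       # shared = 6

def process2():
    global shared
    y = shared       # y = 5: process1 has not written back yet
    y -= 1           # y = 4
    time.sleep(0.2)  # wakes up after process1 has written 6
    shared = y       # shared = 4, overwriting process1's update

t1 = threading.Thread(target=process1)
t2 = threading.Thread(target=process2)
t1.start()
t2.start()
t1.join()
t2.join()
print(shared)  # 4, not 5: the increment was lost to the race
```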
This is called a race condition, and because of it, problems such as deadlock may occur. Hence
we need proper synchronization between processes, and to achieve this we use a signaling integer
variable called a semaphore.
So to formally define Semaphore we can say that it is an integer variable which is used in a mutually
exclusive manner by concurrent processes, to achieve synchronization.
Since Semaphores are integer variables, their value acts as a signal, which allows or does not allow a
process to access the critical section of code or certain other resources.
Initially, the value of the semaphore is 1. When process P1 enters the critical section, the value of the
semaphore becomes 0. If P2 wants to enter the critical section at this time, it cannot, since the value
of the semaphore is not greater than 0. It will have to wait until the semaphore value is greater than 0,
and this will happen only once P1 leaves the critical section and executes the signal operation, which
increments the value of the semaphore.
This is how mutual exclusion is achieved using binary semaphore i.e. both processes cannot access the
critical section at the same time.
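This wait/signal protocol maps directly onto Python's threading.Semaphore: acquire() plays the role of wait() and release() the role of signal(). A sketch (the balance/withdraw scenario is illustrative, not part of the definition):

```python
import threading

semaphore = threading.Semaphore(1)  # binary semaphore, initially 1
balance = 100

def withdraw(amount):
    global balance
    semaphore.acquire()    # wait(): semaphore drops to 0, others block
    # ---- critical section: only one thread at a time ----
    current = balance
    current -= amount
    balance = current
    # -----------------------------------------------------
    semaphore.release()    # signal(): semaphore returns to 1

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 50: five withdrawals of 10 applied without loss
```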
The producer consumer problem is one of the classic process synchronization problems.
Problem Statement
The problem statement states that we have a buffer that is of fixed size, and the producer will
produce items and place them in the buffer. The consumer can pick items from the buffer and
consume them. Our task is to ensure that the consumer does not consume an item from the buffer
at the same time the producer is producing and placing an item into it. The critical section here is
the buffer.
do {
    // producer produces an item
    wait(empty);    // one fewer empty slot
    wait(mutex);    // acquire exclusive access to the buffer
    // put the item into the buffer
    signal(mutex);  // release the buffer
    signal(full);   // one more filled slot
} while (true);
In the above code, we call the wait operations on the empty and mutex semaphores when the producer produces an item.
Since an item is produced, it must be placed in the buffer reducing the number of empty slots by 1, hence we call the wait
operation on the empty semaphore. We must also reduce the value of mutex so as to prevent the consumer from accessing
the buffer.
After this, the producer has placed the item into the buffer, so we can increase the value of the "full" semaphore by 1
and also increment the mutex, as the producer has completed its task and the consumer will now be able to
access the buffer.
The consumer needs to consume the items that are produced by the producer. So when the consumer is
removing the item from the buffer to consume it we need to reduce the value of the "full" semaphore by 1
since one slot will be emptied, and we also need to decrement the value of mutex so that the producer
does not access the buffer.
Now that the consumer has consumed the item, we can increment the value of the empty semaphore
along with the mutex by 1.
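Putting the producer and consumer halves together, a runnable sketch using Python's counting semaphores (the buffer size of 5 and the item count of 10 are arbitrary choices for illustration):

```python
import threading

BUFFER_SIZE = 5
buffer = []
mutex = threading.Semaphore(1)            # guards access to the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
full = threading.Semaphore(0)             # counts filled slots
consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait(empty): claim an empty slot
        mutex.acquire()        # wait(mutex): lock the buffer
        buffer.append(item)    # put the item into the buffer
        mutex.release()        # signal(mutex): unlock the buffer
        full.release()         # signal(full): one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()         # wait(full): wait for a filled slot
        mutex.acquire()        # wait(mutex): lock the buffer
        consumed.append(buffer.pop(0))  # remove the item
        mutex.release()        # signal(mutex)
        empty.release()        # signal(empty): one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()
print(consumed)  # every item consumed exactly once, in order
```

Note that wait(empty) comes before wait(mutex) in the producer; swapping them can deadlock when the buffer is full, since the producer would hold the mutex while waiting for an empty slot.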
● They allow processes into the critical section one by one, and provide strict mutual exclusion (in
the case of binary semaphores).
● No processor time is wasted in busy waiting: with blocking semaphores we do not spend CPU
cycles repeatedly checking whether a condition is fulfilled before allowing a process into the
critical section.
● The implementation / code of the semaphores is written in the machine independent code section
of the microkernel, and hence semaphores are machine independent.
● Semaphores are slightly complicated, and the implementation of the wait and signal operations must be done
in such a manner that deadlocks are prevented.
● The usage of semaphores may cause priority inversion, where a high-priority process ends up gaining access
to the critical section only after low-priority processes.
In the above figure, there are two processes and two resources. Process 1 holds "Resource
1" and needs "Resource 2", while Process 2 holds "Resource 2" and requires "Resource 1".
This creates a deadlock because neither of the two processes can proceed.
Since the resources are non-shareable, they can be used by only one process at a
time (mutual exclusion). Each process is holding a resource while waiting for the other
process to release the resource it requires (hold and wait), and neither process releases
its resource before finishing its execution (no preemption); this creates a circular wait.
Therefore, all four conditions for deadlock are satisfied.
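One standard way to prevent this situation is to break the circular-wait condition by imposing a global ordering on resources: every process must acquire Resource 1 before Resource 2. A Python sketch (the two locks stand in for the two resources; the process names are illustrative):

```python
import threading

resource1 = threading.Lock()
resource2 = threading.Lock()
log = []

def process(name):
    # Both processes acquire resources in the same global order
    # (resource1 first, then resource2), so a circular wait can
    # never form: whoever gets resource1 is guaranteed to finish.
    with resource1:
        with resource2:
            log.append(name)

t1 = threading.Thread(target=process, args=("P1",))
t2 = threading.Thread(target=process, args=("P2",))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(log))  # both processes complete: no deadlock
```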
Deadlock Ignorance
In this method, the system assumes that deadlock never occurs. Since deadlock situations are
infrequent, some systems simply ignore them. Operating systems such as UNIX and
Windows follow this approach. If a deadlock does occur, we can reboot the system, and the
deadlock is resolved.
Deadlock vs. Starvation
● Deadlock is a situation in which two or more processes are blocked because each is holding a
resource and also requires a resource acquired by another process. Starvation is a situation in
which low-priority processes are postponed indefinitely because resources are never allocated
to them.
● In a deadlock, resources are blocked by a set of processes in a circular fashion. In starvation,
resources are continuously used by high-priority processes.
● Deadlock is prevented by negating any one of the necessary conditions for deadlock, or
recovered from using a recovery algorithm. Starvation can be prevented by aging (a scheduling
technique used to prevent resource starvation and maintain fair allocation of resources among
processes).
● In a deadlock, none of the processes gets executed. In starvation, higher-priority processes
execute while lower-priority processes are postponed.
● Deadlock is also called circular wait. Starvation is also called livelock.
● The processes must know in advance the maximum resources of each type required to execute.
● Pre-emptions are frequently encountered.
● It delays process initiation.
● There are inherent pre-emption losses.
● It does not support incremental requests for resources.
We use the resource-allocation graph for a pictographic depiction of a system’s state. As its name
implies, it contains all the information about which processes are holding resources and which are
waiting for them. It also provides information on all instances of all resources, whether available
or in use by processes.
The process is represented by a circle in the Resource Allocation Graph, whereas the resource is
represented using a rectangle. Let’s take a closer look at the various types of vertices and edges.
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, each with a single instance.
According to the graph, R1 is being used by P1; P2 is holding R2 and waiting for R1; P3 is waiting for both R1 and R2.
The graph is deadlock-free, since no cycle is formed in it.
In order to eliminate deadlock by aborting the process, we will use one of two methods given below. In
both methods, the system reclaims all resources that are allocated to the terminated processes.
● Aborting all deadlocked processes: Clearly, this method breaks the cycle of
deadlock, but it is an expensive approach. It is not advisable, but it can be used if the
problem becomes very serious. If all the processes are killed, the system loses a significant
amount of work, and all the processes must execute again from the beginning.
● Abort one process at a time until the deadlock cycle is eliminated: This method can be used,
but we have to decide which process to kill, and it incurs considerable overhead. The
operating system typically kills first the process that has done the least amount of work.
For example, let’s consider P0, P1, P2, P3, and P4 as the philosophers (processes) and C0, C1, C2, C3, and C4 as the 5 chopsticks
(resources), one between each pair of philosophers. Now if P0 wants to eat, both chopsticks C0 and C1 must be free, which leaves P1
and P4 without a resource, so those processes cannot execute. This indicates that there are limited resources (C0, C1, ...) for multiple
processes (P0, P1, ...), and this problem is known as the Dining Philosophers Problem.
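One standard fix (a sketch, not the only solution) is to impose a global ordering on the chopsticks, so the circular wait over C0–C4 can never close: each philosopher always picks up the lower-numbered of their two chopsticks first.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]  # C0..C4
meals = [0] * N

def philosopher(i):
    # Acquire the lower-numbered chopstick first (resource ordering).
    # Philosopher 4 therefore takes C0 before C4, breaking the cycle.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1   # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher eats 10 times: no deadlock, no starvation
```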