Concurrency in Operating System

Concurrency in operating systems refers to the capability of an OS to handle more than one task or process at the same time, thereby enhancing efficiency and responsiveness. It may be supported by multi-threading or multi-processing, whereby more than one process or thread is executed simultaneously or in an interleaved fashion.
Thus, more than one program may run at once on the shared resources of the system, such as the CPU, memory, and so on. This helps optimize performance and reduce idle time while improving the responsiveness of applications, especially in multitasking contexts. Good concurrency handling is crucial for avoiding deadlocks and race conditions and for ensuring the uninterrupted execution of tasks. It relies on techniques such as coordinating the execution of processes, allocating memory, and scheduling execution to maximize throughput.
What is Concurrency in OS?
Concurrency in an operating system refers to the ability to execute multiple
processes or threads simultaneously, improving resource utilization and system
efficiency. It allows several tasks to be in progress at the same time, either by
running on separate processors or through context switching on a single processor.
Concurrency is essential in modern OS design to handle multitasking, increase
system responsiveness, and optimize performance for users and applications.
There are several motivations for allowing concurrent execution:
 Physical resource sharing: Multiple users must share limited hardware resources.
 Logical resource sharing: Several processes may need access to the same file or piece of information.
 Computation speedup: Parts of a task can execute in parallel.
 Modularity: System functions can be divided into separate processes.

Relationship Between Processes of Operating System


The processes executing in the operating system are of one of the following two types:
 Independent Processes
 Cooperating Processes
Independent Processes
Its state is not shared with any other process.
 The result of execution depends only on the input state.
 The result of execution will always be the same for the same input.
 The termination of an independent process does not affect any other process.
Cooperating Processes
Its state is shared with other processes.
 The result of execution depends on the relative execution sequence and cannot be predicted in advance (non-deterministic).
 The result of execution will not always be the same for the same input.
 The termination of a cooperating process may affect other processes.

Process Operation in Operating System


Most systems support at least two types of operations that can be invoked on a process: process creation and process termination.

Process Creation
A parent process and then children of that process can be created. When more than
one process is created several possible implementations exist.
 Parent and child can execute concurrently.
 The Parents waits until all of its children have terminated.
 The parent and children share all resources in common.
 The children share only a subset of their parent’s resources.
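As a concrete illustration, here is a minimal sketch of process creation on a UNIX-like system, assuming the POSIX fork()/wait() API (an assumption; the text above does not prescribe an API): the parent creates a child, both execute concurrently, and the parent waits for its child to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                      /* create a child process */
    if (pid < 0) {                           /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                   /* child's code path */
        printf("child: pid=%d\n", getpid());
        exit(0);                             /* child terminates */
    } else {                                 /* parent's code path */
        wait(NULL);                          /* parent waits for its child */
        printf("parent: child finished\n");
    }
    return 0;
}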

Process Termination
A child process can be terminated in the following ways:
 A parent may terminate the execution of one of its children for the following reasons:
1. The child has exceeded its allocated resource usage.
2. The task assigned to the child is no longer required.
 If a parent terminates, then its children must be terminated.

Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and they present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
 The activities of other processes
 The way operating system handles interrupts
 The scheduling policies of the operating system

Problems in Concurrency
 Sharing global resources: Sharing global resources safely is difficult. If two processes both make use of a global variable and both perform reads and writes on that variable, then the order in which the various reads and writes are executed is critical.
 Optimal allocation of resources: It is difficult for the operating system to manage the allocation of resources optimally.
 Locating programming errors: It is very difficult to locate a programming error because failures are usually not reproducible.
 Locking the channel: It may be inefficient for the operating system to simply lock a channel and prevent its use by other processes.

Advantages of Concurrency
 Running multiple applications: Concurrency enables multiple applications to run at the same time.
 Better resource utilization: Resources that are unused by one application can be used by other applications.
 Better average response time: Without concurrency, each application has to run to completion before the next one can start.
 Better performance: Concurrency enables better performance from the operating system. When one application uses only the processor and another uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each consecutively.

Drawbacks of Concurrency
 Multiple applications must be protected from one another.
 Multiple applications must be coordinated through additional mechanisms.
 The operating system incurs additional performance overhead and complexity for switching among applications.
 Sometimes running too many applications concurrently leads to severely degraded performance.

Issues of Concurrency
 Non-atomic: Operations that are non-atomic but interruptible by multiple processes can cause problems.
 Race conditions: A race condition occurs when the outcome depends on which of several processes reaches a point first.
 Blocking: Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
 Starvation: Starvation occurs when a process is perpetually denied the service it needs to make progress.
 Deadlock: Deadlock occurs when two or more processes are blocked waiting for each other, so none of them can proceed.
Concurrent Processes in Operating System

Concurrent processing is a computing model in which multiple processors execute instructions simultaneously for better performance. Concurrent means occurring or existing at the same time as something else. Tasks are broken into subtasks, which are then assigned to different processors to be performed simultaneously, instead of sequentially as they would be by one processor. Concurrent processing is sometimes synonymous with parallel processing. Real and virtual concurrency arise in the following environments:
1. Multiprogramming Environment: In a multiprogramming environment, multiple tasks share one processor. True concurrency would require a dedicated processor for each task; the operating system instead achieves virtual concurrency by interleaving the tasks on the single processor.

2. Multiprocessing Environment: In a multiprocessing environment, two or more processors are used with shared memory. Only one virtual address space is used, which is common to all processors, and all tasks reside in shared memory. In this environment, concurrency is supported in the form of concurrently executing processors: the tasks executed on different processors communicate with each other through the shared memory.
3. Distributed Processing Environment: In a distributed processing environment, two or more computers are connected to each other by a communication network or high-speed bus. There is no shared memory between the processors, and each computer has its own local memory. Hence a distributed application consists of concurrent tasks distributed over the network, which communicate via messages.


What is Race Condition?
When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable will be wrong; all the processes are effectively racing to have their result take effect, which is why this condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: it arises when the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
Example
Let’s say there are two processes, P1 and P2, which share a common variable (shared=10); both processes are in the ready queue, waiting for their turn to be executed. Suppose process P1 comes under execution first: the CPU stores the common variable (shared=10) in a local variable (X=10) and increments it by 1 (X=11). Then, when the CPU reaches the line sleep(1), it switches from the current process P1 to process P2 in the ready queue, and process P1 goes into a waiting state for 1 second.
Now the CPU executes process P2 line by line, stores the common variable (shared=10) in its local variable (Y=10), and decrements Y by 1 (Y=9). Then, when the CPU reaches sleep(1), process P2 goes into a waiting state and the CPU remains idle, as there is no process in the ready queue. After process P1 completes its 1 second of sleep and returns to the ready queue, the CPU resumes it and executes its remaining line of code, storing the local variable (X=11) in the common variable (shared=11). The CPU again remains idle until process P2 completes its 1 second of sleep and returns to the ready queue, at which point the CPU executes the remaining line of process P2, storing the local variable (Y=9) in the common variable (shared=9).

Initially Shared = 10

Process 1           Process 2
int X = shared      int Y = shared
X++                 Y--
sleep(1)            sleep(1)
shared = X          shared = Y

Note: We expect the final value of the common variable (shared) after the execution of Process P1 and Process P2 to be 10 (Process P1 increments the variable by 1 and Process P2 decrements it by 1, so shared should end at 10). But we get an undesired value due to the lack of proper synchronization.

Actual Meaning of Race Condition

 If the order of execution is first P1 and then P2 (P2 performs the last store), we get the final value of the common variable (shared) = 9.
 If the order of execution is first P2 and then P1 (P1 performs the last store), we get the final value of the common variable (shared) = 11.
 Here the values 9 and 11 are racing: if we execute these two processes on our computer system, sometimes we will get 9 and sometimes we will get 11 as the final value of the common variable (shared). This phenomenon is called a race condition.
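The race described above can be reproduced with a short program. Below is a minimal sketch using POSIX threads (an assumption: the article speaks of processes, but threads sharing a global variable exhibit the same race), compiled with -pthread. The sleep(1) calls force the interleaving from the walkthrough, so the program typically prints 9 or 11 instead of the expected 10.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;                  /* the common variable */

void *p1(void *arg) {             /* models Process 1 */
    int x = shared;               /* int X = shared */
    x++;                          /* X++ */
    sleep(1);                     /* sleep(1): the other thread runs */
    shared = x;                   /* shared = X */
    return NULL;
}

void *p2(void *arg) {             /* models Process 2 */
    int y = shared;               /* int Y = shared */
    y--;                          /* Y-- */
    sleep(1);
    shared = y;                   /* shared = Y */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 9 or 11, not the expected 10 */
    return 0;
}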

Critical Section Problem


A critical section is a code segment that can be accessed by only one process at a
time. The critical section contains shared variables that need to be synchronized
to maintain the consistency of data variables. So the critical section problem means
designing a way for cooperative processes to access shared resources without
creating data inconsistencies.

In the entry section, a process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder section can participate in deciding which will enter the critical section next, and this selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem for two processes. In Peterson’s solution, we have two shared variables:
 boolean flag[i]: Initialized to FALSE; initially, no process is interested in entering the critical section.
 int turn: Indicates whose turn it is to enter the critical section.
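The classical textbook form of Peterson's algorithm for two processes (numbered 0 and 1) is sketched below; the function names are illustrative, and on modern hardware the plain loads and stores would additionally need memory barriers (see the disadvantages below).

volatile int turn;               /* whose turn it is to enter */
volatile int flag[2] = {0, 0};   /* flag[i] = 1: process i wants to enter */

void enter_region(int i) {       /* i is 0 or 1 */
    int j = 1 - i;               /* the other process */
    flag[i] = 1;                 /* declare interest */
    turn = j;                    /* give the other process priority */
    while (flag[j] && turn == j)
        ;                        /* busy-wait */
    /* critical section follows */
}

void leave_region(int i) {
    flag[i] = 0;                 /* no longer interested */
}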

Peterson’s Solution preserves all three conditions


 Mutual Exclusion is assured as only one process can access the critical section
at any time.
 Progress is also assured, as a process outside the critical section does not block
other processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


 It involves busy waiting. (In Peterson’s solution, the statement “while(flag[j] && turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution cannot be relied upon on modern CPU architectures, which may reorder memory operations unless explicit barriers are used.
Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, which can be released only by the thread that acquired it.
A semaphore is an integer variable that uses two atomic operations for process synchronization: it can be accessed only through the operations wait() and signal(), sketched below.
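The classical definitions of these two operations can be sketched as follows, where S is the semaphore's integer value and each operation is assumed to execute atomically:

wait(S):
    while (S <= 0)
        ;          // busy-wait until the semaphore becomes positive
    S = S - 1;     // consume one unit

signal(S):
    S = S + 1;     // release one unit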
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can take only the values 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same mutex semaphore, which is initialized to 1. A process must wait until the semaphore's value is 1; it then sets the value to 0 and enters its critical section. When it completes its critical section, it resets the value to 1 so that another process can enter.
 Counting Semaphores: They can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that has a limit on the number of simultaneous accesses. The semaphore is initialized to the number of instances of the resource. Whenever a process wants to use the resource, it checks that the number of remaining instances is more than zero, i.e., that an instance is available. The process can then enter its critical section, decreasing the value of the counting semaphore by 1. When the process has finished using the instance of the resource, it leaves the critical section, adding 1 to the number of available instances.
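As an illustration, here is a minimal sketch of a counting semaphore guarding a resource with 3 instances, assuming the POSIX semaphore API (sem_init/sem_wait/sem_post) and pthreads; the worker function and thread count are made up for the example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t slots;                          /* counts free resource instances */

void *worker(void *arg) {
    sem_wait(&slots);                 /* wait(): acquire an instance */
    printf("thread %ld using the resource\n", (long)arg);
    sem_post(&slots);                 /* signal(): release the instance */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);           /* 3 instances available initially */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}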
Advantages of Process Synchronization
 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
 Adds overhead to the system
 Can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlock if not implemented properly.
Race Condition Vulnerability


Real-Time Examples of Race Conditions


Example 1 – Consider an ATM Withdrawal
Imagine Ram and his friend Sham both have access to the same bank account.
They both try to withdraw Rs 500 at the same time from different ATMs. The
system checks the balance and sees there’s enough money for both withdrawals.
Without proper synchronization, the system might allow both transactions to go
through, even if the balance is only enough for one, leaving the account
overdrawn.
Example 2 – Consider a Printer Queue
Imagine two people sending print jobs at the same time. If the printer isn’t
managed properly, the print jobs could get mixed up, with pages from one person’s
document being printed in the middle of another’s.
Key Terms in a Race Condition
 Critical Section: A part of the code where shared resources are accessed. It is critical because if multiple processes enter this section at the same time, data corruption and errors can result.
 Synchronization: The process of controlling how and when multiple processes or threads access shared resources, ensuring that only one can enter the critical section at a time.
 Mutual Exclusion (Mutex): A mutex is like a lock that ensures only one process can access a resource at a time. If a process holds the lock, others must wait their turn, preventing race conditions.
 Deadlock: A situation where two or more processes are stuck waiting for each other’s resources, causing a standstill.
What is a Race Condition Vulnerability?
Race Condition Vulnerability is a situation where two or more processes or threads in a system simultaneously access shared resources. If the processes lack coordination, this leads to unexpected behaviour or security issues.
Let’s understand this concept with a real-life scenario. Imagine two people named A and B trying to write in the same notebook at the same time, without agreeing on who writes first; their writing gets mixed up, leading to confusion.
Now let’s understand the same concept in a computer system. Imagine a system where one process’s duty is to read a file and another process’s duty is to write to it. If both try to access the same file at the same time without proper control, the result is incorrect or incomplete data, leading to errors.
Common Vulnerabilities Leading to Race Conditions
 In systems where multiple processes access shared memory, failure to control how memory is accessed can lead to conflicting operations, resulting in incorrect data being read or written.
 If multiple processes access the same file at the same time without a proper access mechanism, the file’s contents can become inconsistent.
 In some scenarios, certain operations need to happen in a specific order. If the sequence is not followed and multiple processes run out of order, race conditions can occur, leading to errors or vulnerabilities.
By identifying these common vulnerabilities, developers can reduce the chance of race conditions affecting the system; proper locking mechanisms, careful sequencing, and strong synchronization practices can greatly reduce or eliminate these issues.
Race Condition, Deadlock and Thread Block
1. Race Condition
A race condition happens when two or more processes try to access the same
resource at the same time without proper coordination. This “race” can lead to
incorrect results or unpredictable behavior because the order of execution is not
controlled.
Example: Two people trying to edit the same document at the same time, causing
one’s changes to overwrite the other’s.
2. Deadlock
A deadlock occurs when two or more processes are waiting for each other to
release resources, and none can proceed. It’s like a standstill where each process is
holding a resource the other needs, leading to a complete freeze.
Example: Two cars stuck in a narrow road from opposite directions, each refusing
to move back and waiting for the other.
3. Thread Block
Thread blocking happens when a thread is unable to continue its work because it’s
waiting for a resource that’s currently unavailable. It pauses until the resource is
free.
Example: A cashier waiting for the customer ahead to finish paying before
attending to the next person in line.
How to Detect Race Conditions?
 Review the Code: Carefully inspecting the code can help identify areas where shared resources are accessed without proper locking or synchronization.
 Static Analysis Tools: Specialized tools analyze the code to automatically detect potential race conditions by identifying unsafe access to shared resources.
 Testing with Multiple Threads/Processes: Simulate scenarios with many threads or processes running simultaneously. If unexpected behaviors like data inconsistencies or crashes occur, a race condition might be present.
 Logging and Monitoring: Adding logs to track resource access can reveal out-of-order operations, signaling a race condition.
How to Prevent Race Condition Attacks?
 Use Locks: Implement locks (like mutexes) to ensure that only one process or thread can access a resource at a time, preventing conflicting operations.
 Proper Synchronization: Ensure processes or threads work in a coordinated sequence when accessing shared data. Techniques like semaphores help achieve this.
 Avoid Time-of-Check to Time-of-Use (TOCTOU) Vulnerabilities: Reduce the gap between checking a condition (like permissions) and acting on it, minimizing the opportunity for an attacker to change the state in between (see the sketch after this list).
 Priority Management: Prioritize certain processes or threads so they get controlled access to critical resources, preventing uncoordinated access.
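To make the TOCTOU point concrete, here is a minimal sketch on a POSIX system (the function names are illustrative). The first version checks a file and then acts on it in two separate steps, leaving a window in which the file's state can change; the second avoids the separate check and lets open() enforce permissions atomically.

#include <fcntl.h>
#include <unistd.h>

/* Racy: the file can be swapped (e.g. for a symlink) between the
   access() check and the open() call. */
int open_for_writing_racy(const char *path) {
    if (access(path, W_OK) == 0)       /* time of check */
        return open(path, O_WRONLY);   /* time of use */
    return -1;
}

/* Safer: act directly; open() itself fails with EACCES if the
   caller may not write the file, with no window in between. */
int open_for_writing(const char *path) {
    return open(path, O_WRONLY);
}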
Critical Section in Synchronization


In computer science, a critical section refers to a segment of code that is executed


by multiple concurrent threads or processes, and which accesses shared resources.
These resources may include shared memory, files, or other system resources that
can only be accessed by one thread or process at a time to avoid data inconsistency
or race conditions.
1. The critical section must be executed as an atomic operation, which means that
once one thread or process has entered the critical section, all other threads or
processes must wait until the executing thread or process exits the critical section.
The purpose of synchronization mechanisms is to ensure that only one thread or
process can execute the critical section at a time.
2. The concept of a critical section is central to synchronization in computer
systems, as it is necessary to ensure that multiple threads or processes can
execute concurrently without interfering with each other. Various
synchronization mechanisms such as semaphores, mutexes, monitors, and
condition variables are used to implement critical sections and ensure that shared
resources are accessed in a mutually exclusive manner.
The use of critical sections in synchronization can be advantageous in improving
the performance of concurrent systems, as it allows multiple threads or processes
to work together without interfering with each other. However, care must be taken
in designing and implementing critical sections, as incorrect synchronization can
lead to race conditions and deadlocks.
Critical Section
When more than one process tries to access the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.
In simple terms, a critical section is a group of instructions/statements or a region of code that needs to be executed atomically, such as code accessing a resource (a file, an input or output port, global data, etc.).
In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read it (i.e., a data race across threads), the result is unpredictable. Access to such shared variables (shared memory, shared files, shared ports, etc.) must be synchronized.
Some programming languages have built-in support for synchronization. It is critical to understand the importance of race conditions when writing kernel-mode code (a device driver, a kernel thread, etc.), since the programmer can directly access and modify kernel data structures.
Any solution to the critical section problem must satisfy the following properties:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will enter
its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound, or limit, on the number of times that
other processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is granted.
Two general approaches are used to handle critical sections:
1. Preemptive kernels: A preemptive kernel allows a process to be preempted
while it is running in kernel mode.
2. Non-preemptive kernels: A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Critical Section Problem
The use of critical sections in a program can cause a number of issues, including:
Deadlock: When two or more threads or processes wait for each other to release a
critical section, it can result in a deadlock situation in which none of the threads or
processes can move. Deadlocks can be difficult to detect and resolve, and they can
have a significant impact on a program’s performance and reliability.
Starvation: When a thread or process is repeatedly prevented from entering a
critical section, it can result in starvation, in which the thread or process is unable
to progress. This can happen if the critical section is held for an unusually long
period of time, or if a high-priority thread or process is always given priority when
entering the critical section.
Overhead: When using critical sections, threads or processes must acquire and
release locks or semaphores, which can take time and resources. This may reduce
the program’s overall performance.

Critical section

It could be visualized using the pseudo-code below, a sketch of a naive busy-wait lock in which the shared variable flag is 1 while some process is inside the critical section:

do {
    while (flag == 1);   // entry section: busy-wait until the flag is clear
    flag = 1;            // claim the critical section
    // critical section
    flag = 0;            // exit section: release the flag
    // remainder section
} while (true);

(Note that this naive test-then-set is itself not atomic, so it is not a correct solution on its own; making the acquisition atomic is exactly what locks provide.)
A simple solution to the critical section can be thought of as shown below:

acquireLock();
// process critical section
releaseLock();

A thread must acquire a lock prior to executing a critical section, and the lock can be held by only one thread at a time. There are various ways to implement the locks in the above pseudo-code; one is sketched below.
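One way to realize acquireLock()/releaseLock() is with a POSIX mutex. The sketch below is only an illustration of the idea (increment_counter is a made-up example function), not the only possible lock implementation.

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;

void increment_counter(void) {
    pthread_mutex_lock(&lock);      /* acquireLock(): blocks if already held */
    shared_counter++;               /* critical section */
    pthread_mutex_unlock(&lock);    /* releaseLock() */
}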
Strategies for avoiding problems: While deadlocks, starvation, and overhead are
mentioned as potential issues, there are more specific strategies for avoiding or
mitigating these problems. For example, using timeouts to prevent deadlocks,
implementing priority inheritance to prevent priority inversion and starvation, or
optimizing lock implementation to reduce overhead.
Examples of critical sections in real-world applications: While the article
describes critical sections in a general sense, it could be useful to provide examples
of how critical sections are used in specific real-world applications, such as
database management systems or web servers.
Impact on scalability: The use of critical sections can impact the scalability of a
program, particularly in distributed systems where multiple nodes are accessing
shared resources.
In process synchronization, a critical section is a section of code that accesses
shared resources such as variables or data structures, and which must be executed
by only one process at a time to avoid race conditions and other synchronization-
related issues.
A critical section can be any section of code where shared resources are accessed,
and it typically consists of two parts: the entry section and the exit section. The
entry section is where a process requests access to the critical section, and the exit
section is where it releases the resources and exits the critical section.
To ensure that only one process can execute the critical section at a time, process
synchronization mechanisms such as semaphores and mutexes are used. A
semaphore is a variable that is used to indicate whether a resource is available or
not, while a mutex is a binary semaphore that provides mutual exclusion to shared
resources.
When a process enters a critical section, it must first request access to the
semaphore or mutex associated with the critical section. If the resource is
available, the process can proceed to execute the critical section. If the resource is
not available, the process must wait until it is released by the process currently
executing the critical section.
Once the process has finished executing the critical section, it releases the
semaphore or mutex, allowing another process to enter the critical section if
necessary.
Proper use of critical sections and process synchronization mechanisms is essential
in concurrent programming to ensure proper synchronization of shared resources
and avoid race conditions, deadlocks, and other synchronization-related issues.
Advantages of critical section in process synchronization:
1. Prevents race conditions: By ensuring that only one process can execute the
critical section at a time, race conditions are prevented, ensuring consistency of
shared data.
2. Provides mutual exclusion: Critical sections provide mutual exclusion to shared
resources, preventing multiple processes from accessing the same resource
simultaneously and causing synchronization-related issues.
3. Reduces CPU utilization: By allowing processes to wait without wasting CPU
cycles, critical sections can reduce CPU utilization, improving overall system
efficiency.
4. Simplifies synchronization: Critical sections simplify the synchronization of
shared resources, as only one process can access the resource at a time,
eliminating the need for more complex synchronization mechanisms.
Disadvantages of critical section in process synchronization:
1. Overhead: Implementing critical sections using synchronization mechanisms
like semaphores and mutexes can introduce additional overhead, slowing down
program execution.
2. Deadlocks: Poorly implemented critical sections can lead to deadlocks, where
multiple processes are waiting indefinitely for each other to release resources.
3. Can limit parallelism: If critical sections are too large or are executed
frequently, they can limit the degree of parallelism in a program, reducing its
overall performance.
4. Can cause contention: If multiple processes frequently access the same critical
section, contention for the critical section can occur, reducing performance.
Overall, critical sections are a useful tool in process synchronization to ensure proper synchronization of shared resources and prevent race conditions. However, they can also introduce additional overhead and can be prone to synchronization-related issues if not implemented correctly.
Important points related to critical section in process synchronization are:
1. Understanding the concept of critical section and why it’s important for
synchronization.
2. Familiarity with the different synchronization mechanisms used to implement
critical sections, such as semaphores, mutexes, and monitors.
3. Knowledge of common synchronization problems that can arise in critical
sections, such as race conditions, deadlocks, and livelocks.
4. Understanding how to design and implement critical sections to ensure proper
synchronization of shared resources and prevent synchronization-related issues.
5. Familiarity with best practices for using critical sections in concurrent
programming.
Mutual Exclusion in Synchronization


During concurrent execution of processes, processes need to enter the critical


section (or the section of the program shared across processes) at times for
execution. It might happen that because of the execution of multiple processes at
once, the values stored in the critical section become inconsistent. In other words,
the values depend on the sequence of execution of instructions – also known as a
race condition. The primary task of process synchronization is to get rid of race
conditions while executing the critical section.
What is Mutual Exclusion?
Mutual Exclusion is a property of process synchronization that states that “no two
processes can exist in the critical section at any given point of time“. The term
was first coined by Dijkstra. Any process synchronization technique being used
must satisfy the property of mutual exclusion, without which it would not be
possible to get rid of a race condition.
The need for mutual exclusion comes with concurrency. There are several kinds of
concurrent execution:
 Interrupt handlers
 Interleaved, preemptively scheduled processes/threads
 Multiprocessor clusters, with shared memory
 Distributed systems
Mutual exclusion methods are used in concurrent programming to avoid the
simultaneous use of a common resource, such as a global variable, by pieces of
computer code called critical sections.
The requirement of mutual exclusion is that when process P1 is accessing a shared
resource R1, another process should not be able to access resource R1 until process
P1 has finished its operation with resource R1.
Examples of such resources include files, I/O devices such as printers, and shared
data structures.
Conditions Required for Mutual Exclusion
Mutual exclusion applies according to the following four criteria:
 When using shared resources, it is important to ensure mutual exclusion between the various processes: no two processes may run simultaneously in their critical sections.
 No assumptions should be made about the relative speeds of the processes.
 A process that is outside its critical section must not obstruct another process's access to the critical section.
 The critical section must become accessible to a waiting process within a finite amount of time; processes should never be kept waiting in an infinite loop.
Approaches To Implementing Mutual Exclusion
 Software Method: Leave the responsibility to the processes themselves. These methods are usually highly error-prone and carry high overheads.
 Hardware Method: Special-purpose machine instructions are used for accessing shared resources. This method is faster but cannot provide a complete solution: hardware solutions cannot guarantee the absence of deadlock and starvation (see the sketch after this list).
 Programming Language Method: Provide support through the operating system or through the programming language.
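A minimal sketch of the hardware method, assuming C11 atomics: atomic_flag_test_and_set compiles to an atomic read-modify-write (test-and-set) instruction on most architectures, so only one thread can observe the flag as clear. The function names are illustrative.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = lock is free */

void enter(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin until the flag was clear */
    /* critical section follows */
}

void leave(void) {
    atomic_flag_clear(&lock);          /* release the lock */
}

Note that this spinlock still allows starvation: an unlucky thread may lose the race indefinitely, which is why hardware primitives alone do not guarantee bounded waiting.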
Requirements of Mutual Exclusion
 At any time, only one process is allowed to enter its critical section.
 The solution is implemented purely in software on a machine.
 A process remains inside its critical section for a bounded time only.
 No assumption can be made about the relative speeds of asynchronous concurrent
processes.
 A process cannot prevent any other process from entering into a critical section.
 A process must not be indefinitely postponed from entering its critical section.
In order to understand mutual exclusion, let’s take an example.
What is a Need of Mutual Exclusion?
An easy way to visualize the significance of mutual exclusion is to imagine a linked list of several items in which two adjacent nodes, say the fourth and fifth, need to be removed. A node is deleted by changing the next reference of the previous node to point to the succeeding node.
To put it simply, whenever node i is to be removed, node i-1's next reference is changed to point to node i+1. When a shared linked list is used by many threads, two distinct nodes can be removed by two threads at the same time: the first thread changes node i-1's next reference to point to node i+1 (removing node i), while the second thread changes node i's next reference to point to node i+2 (removing node i+1). Although both removal operations appear to complete, the required state of the linked list has not been reached: node i+1 is still in the list, because node i-1's next reference still points to it.
This situation is called a race condition. Race conditions can be prevented by mutual exclusion, so that simultaneous updates cannot happen to the same part of the list.
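A small sketch of the unsynchronized removal just described (the struct and function names are illustrative):

struct node { int value; struct node *next; };

/* Unlink prev's successor; run concurrently without a lock. */
void remove_next(struct node *prev) {
    prev->next = prev->next->next;
}

/* Thread 1: remove_next(node_i_minus_1);  sets (i-1)->next = i+1 */
/* Thread 2: remove_next(node_i);          sets  i->next    = i+2 */
/* Both "removals" complete, but node i+1 is still reachable,     */
/* because (i-1)->next still points to it.                        */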
Example:
In the clothes section of a supermarket, two people are shopping for clothes.
Boy A decides upon some clothes to buy and heads to the changing room to try them out. While boy A is inside the changing room, there is an ‘occupied’ sign on it, indicating that no one else can come in. Boy B has to use the changing room too, so he has to wait till boy A is done using it.

Once boy A comes out of the changing room, the sign on it changes from ‘occupied’ to ‘vacant’, indicating that another person can use it. Hence, boy B proceeds to use the changing room, while the sign displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and boy B are two different processes, and the sign outside the changing room indicates the process synchronization mechanism being used.

Conclusion
In conclusion, mutual exclusion is a key concept in synchronization that ensures
only one process accesses a shared resource at a time. This prevents conflicts and
data corruption, making sure that processes run smoothly and correctly. By using
mutual exclusion mechanisms, we can create stable and reliable systems that
handle multiple processes efficiently.

Process Synchronization in OS (Operating System)


When two or more processes cooperate with each other, their order of execution must be preserved; otherwise there can be conflicts in their execution, and inappropriate outputs can be produced.

A cooperative process is one that can affect the execution of another process or can be affected by the execution of another process. Such processes need to be synchronized so that their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of


cooperative processes is known as Process Synchronization. There are various
synchronization mechanisms that are used to synchronize the processes.

Race Condition
A race condition typically occurs when two or more threads try to read, write, and possibly make decisions based on memory that they are accessing concurrently.

Critical Section
The regions of a program that try to access shared resources and may cause race conditions are called critical sections. To avoid race conditions among processes, we need to ensure that only one process at a time can execute within the critical section.

Critical Section Problem in OS (Operating System)

Critical Section is the part of a program which tries to access shared resources. That resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device.

The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes to enter the critical section.

The critical section problem is to design a set of protocols which can ensure that a race condition among the processes will never arise.

In order to synchronize the cooperative processes, our main task is to solve the critical section problem: we need to provide a solution that satisfies the following conditions.

Requirements of Synchronization mechanisms


Primary

1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside the critical section, then no other process may enter the critical section.
2. Progress

Progress means that if one process doesn't need to execute in the critical section, it should not stop other processes from getting into the critical section.

Secondary

1. Bounded Waiting

We should be able to predict the waiting time for every process to get into the critical section. A process must not wait endlessly to get into the critical section.

2. Architectural Neutrality

Our mechanism must be architecturally neutral. This means that if our solution works on one architecture, it should also run on other architectures.

Summary of Functions:

Function    Description
mkdir       Create a new directory
rmdir       Remove an empty directory
chdir       Change the current working directory
getcwd      Get the current working directory
opendir     Open a directory for reading
readdir     Read entries from a directory stream
closedir    Close a directory stream
rewinddir   Reset a directory stream to the beginning

These functions are widely used for directory management in operating systems, especially in UNIX-like systems.
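For illustration, a minimal sketch that lists the entries of the current directory using these functions, assuming a POSIX environment:

#include <dirent.h>
#include <stdio.h>

int main(void) {
    DIR *d = opendir(".");              /* open the current directory */
    if (d == NULL)
        return 1;
    struct dirent *entry;
    while ((entry = readdir(d)) != NULL)
        printf("%s\n", entry->d_name);  /* print each entry's name */
    closedir(d);                        /* close the directory stream */
    return 0;
}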
