Concurrency in Operating System
Process Creation
A parent process can create one or more child processes, which may in turn create
processes of their own. When more than one process is created, several possible
implementations exist:
Parent and child can execute concurrently.
The parent waits until all of its children have terminated.
The parent and children share all resources.
The children share only a subset of their parent’s resources.
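As a concrete illustration, the following minimal sketch (POSIX C on a UNIX-like system; the printed messages are only illustrative) creates one child with fork(); parent and child then execute concurrently, and the parent waits until its child has terminated.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                     /* create a child process */

        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child: runs concurrently with the parent */
            printf("child  %d: doing work\n", (int)getpid());
            exit(EXIT_SUCCESS);
        } else {
            /* parent: waits until its child has terminated */
            printf("parent %d: waiting for child %d\n", (int)getpid(), (int)pid);
            wait(NULL);
            printf("parent %d: child finished\n", (int)getpid());
        }
        return 0;
    }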
Process Termination
A child process can be terminated in the following ways:
A parent may terminate the execution of one of its children for the following
reasons:
1. The child has exceeded its allocated resource usage.
2. The task assigned to the child is no longer required.
If a parent terminates, then its children must also be terminated.
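A hedged sketch of the second case (POSIX C; the one-second delay and the choice of SIGTERM are illustrative): the parent decides the child’s task is no longer required, terminates it with kill(), and then reaps it with waitpid().

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (child == 0) {
            /* child: waits for work until the parent decides otherwise */
            while (1)
                pause();                        /* sleep until a signal arrives */
        }
        sleep(1);                               /* parent: let the child start            */
        kill(child, SIGTERM);                   /* task no longer required: terminate it  */
        waitpid(child, NULL, 0);                /* reap the child so no zombie remains    */
        printf("parent: child %d terminated\n", (int)child);
        return 0;
    }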
Principles of Concurrency
Both interleaved and overlapped processes can be viewed as examples of
concurrent processes, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
The activities of other processes
The way the operating system handles interrupts
The scheduling policies of the operating system
Problems in Concurrency
Sharing global resources: Sharing global resources safely is difficult. If two
processes both make use of a global variable and both read and write that
variable, then the order in which the various reads and writes are executed is
critical.
Optimal allocation of resources: It is difficult for the operating system to
manage the allocation of resources optimally.
Locating programming errors: It is very difficult to locate a programming error
because reports are usually not reproducible.
Locking the channel: It may be inefficient for the operating system to simply
lock a channel and prevent its use by other processes.
Advantages of Concurrency
Running multiple applications: Concurrency enables multiple applications to run
at the same time.
Better resource utilization: Resources that are unused by one application can be
used by other applications.
Better average response time: Without concurrency, each application has to be
run to completion before the next one can be run.
Better performance: Concurrency enables better overall performance. When one
application uses only the processor and another uses only the disk drive, the
time to run both applications concurrently to completion is shorter than the
time to run each application consecutively.
Drawbacks of Concurrency
Multiple applications must be protected from one another.
Multiple applications must be coordinated through additional mechanisms.
The operating system incurs additional performance overhead and complexity in
switching among applications.
Running too many applications concurrently can lead to severely degraded
performance.
Issues of Concurrency
Non-atomic: Operations that are non-atomic and can be interrupted by other
processes may cause problems.
Race conditions: A race condition occurs when the outcome depends on which of
several processes reaches a point first.
Blocking: Processes can block waiting for resources. A process could be blocked
for a long period of time waiting for input from a terminal; if the process is
required to periodically update some data, this is very undesirable.
Starvation: Starvation occurs when a process never obtains the service it needs
to make progress.
Deadlock: Deadlock occurs when two processes are each blocked waiting for a
resource held by the other, so neither can proceed; a sketch follows this list.
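The sketch below (POSIX threads, built with cc -pthread; the lock names and the sleep used to force the bad interleaving are illustrative) shows the deadlock case: each thread holds one lock and waits for the lock held by the other, so neither join ever completes. Acquiring both locks in the same global order in every thread would avoid the deadlock.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *worker1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);            /* holds A ...                    */
        sleep(1);                               /* give worker2 time to grab B    */
        pthread_mutex_lock(&lock_b);            /* ... and waits for B: deadlock  */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    void *worker2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_b);            /* holds B ...                    */
        sleep(1);
        pthread_mutex_lock(&lock_a);            /* ... and waits for A: deadlock  */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_create(&t2, NULL, worker2, NULL);
        pthread_join(t1, NULL);                 /* never returns once the deadlock occurs */
        pthread_join(t2, NULL);
        puts("done");
        return 0;
    }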
Concurrent Processes in Operating System
3. Distributed Processing Environment: In a distributed processing environment,
two or more computers are connected to each other by a communication network or
high-speed bus. There is no shared memory between the processors, and each
computer has its own local memory. A distributed application therefore consists
of concurrent tasks that are spread over the network and communicate via
messages.
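A minimal sketch of such message passing (POSIX UDP sockets in C; the address 127.0.0.1, port 5000, and file name msg.c are illustrative stand-ins for two machines on a real network): the two tasks share no memory and exchange data only through messages.

    /* Build with: cc msg.c -o msg
       Run "./msg" in one terminal (receiver), then "./msg send" in another. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);                      /* illustrative port           */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");    /* stand-in for a remote host  */

        int sock = socket(AF_INET, SOCK_DGRAM, 0);        /* UDP datagram socket         */

        if (argc > 1 && strcmp(argv[1], "send") == 0) {
            const char *msg = "hello from a remote task";
            sendto(sock, msg, strlen(msg), 0,
                   (struct sockaddr *)&addr, sizeof(addr));
        } else {
            char buf[128] = {0};
            bind(sock, (struct sockaddr *)&addr, sizeof(addr));
            recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
            printf("received: %s\n", buf);
        }
        close(sock);
        return 0;
    }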
What is Race Condition?
When more than one process executes the same code or accesses the same memory or
shared variable, there is a possibility that the output or the value of the
shared variable ends up wrong; the processes are effectively racing to have their
result count, which is why this situation is known as a race condition. When
several processes access and manipulate the same data concurrently, the outcome
depends on the particular order in which the accesses take place. A race
condition is a situation that may occur inside a critical section: the result of
executing multiple threads in the critical section differs according to the order
in which the threads execute. Race conditions in critical sections can be avoided
if the critical section is treated as an atomic instruction. Proper thread
synchronization using locks or atomic variables can also prevent race conditions.
Example
Let’s say there are two processes, P1 and P2, which share a common variable
(shared = 10). Both processes are in the ready queue, waiting for their turn to
be executed. Suppose process P1 runs first: the CPU copies the shared variable
(shared = 10) into P1’s local variable (X = 10) and increments it by 1 (X = 11).
When the CPU then reaches the line sleep(1), it switches from process P1 to
process P2 in the ready queue, and P1 goes into a waiting state for 1 second.
Now the CPU executes process P2 line by line: it copies the shared variable
(shared = 10) into P2’s local variable (Y = 10) and decrements it by 1 (Y = 9).
When the CPU reaches sleep(1), P2 also goes into a waiting state, and the CPU
remains idle for some time because the ready queue is empty. After 1 second, P1
returns to the ready queue; the CPU resumes P1 and executes its remaining line of
code, storing the local variable (X = 11) back into the shared variable
(shared = 11). The CPU again idles until P2 returns to the ready queue after its
1 second, then executes P2’s remaining line, storing the local variable (Y = 9)
back into the shared variable (shared = 9).
Initially shared = 10

Process 1            Process 2
int X = shared       int Y = shared
X++                  Y--
sleep(1)             sleep(1)
shared = X           shared = Y
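The same interleaving can be reproduced with threads. Below is a hedged C sketch (POSIX threads; sleep(1) stands in for the context switch in the story above): each thread copies shared into a local variable, sleeps so the other thread runs, then writes its stale local value back, so the final value ends up 9 or 11 instead of the expected 10.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    int shared = 10;                            /* variable shared by both threads */

    void *process1(void *arg) {
        (void)arg;
        int x = shared;                         /* X = shared  (10)          */
        x++;                                    /* X = 11                    */
        sleep(1);                               /* let the other thread run  */
        shared = x;                             /* write back a stale value  */
        return NULL;
    }

    void *process2(void *arg) {
        (void)arg;
        int y = shared;                         /* Y = shared  (10) */
        y--;                                    /* Y = 9            */
        sleep(1);
        shared = y;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, process1, NULL);
        pthread_create(&t2, NULL, process2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* One increment and one decrement should leave 10,
           but the interleaving above leaves 9 or 11. */
        printf("shared = %d\n", shared);
        return 0;
    }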
A solution to the critical section problem structures each process into an entry
section, the critical section itself, an exit section, and a remainder section.
In the entry section, the process requests permission to enter the critical
section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
the critical section next, and the selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section
problem. In Peterson’s solution, we have two shared variables:
boolean flag[i]: Initialized to FALSE; initially no process is interested in
entering the critical section.
int turn: Indicates whose turn it is to enter the critical section.
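A sketch of Peterson’s solution for two processes in C (POSIX threads for the demo). Note this illustrates the logic only: the plain volatile variables below are not sufficient on modern hardware, where real code needs atomic operations or memory barriers.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Peterson's solution for two processes, i = 0 and j = 1.
       volatile keeps the sketch readable, but compilers and CPUs can reorder
       these accesses, so production code needs atomics / memory barriers. */
    volatile bool flag[2] = { false, false };   /* flag[i]: process i wants in */
    volatile int  turn = 0;                     /* whose turn it is to enter   */
    int counter = 0;                            /* shared data being protected */

    void *worker(void *arg) {
        int i = *(int *)arg;                    /* my index      */
        int j = 1 - i;                          /* the other one */

        for (int k = 0; k < 100000; k++) {
            /* entry section */
            flag[i] = true;
            turn = j;
            while (flag[j] && turn == j)
                ;                               /* busy-wait while the other is inside */

            /* critical section */
            counter++;

            /* exit section */
            flag[i] = false;
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d (200000 if mutual exclusion holds)\n", counter);
        return 0;
    }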
To see mutual exclusion intuitively, consider a changing room with a sign on its
door. When boy A enters the changing room, the sign changes from ‘vacant’ to
‘occupied’, so boy B has to wait outside.
Once boy A comes out of the changing room, the sign on it changes from
‘occupied’ to ‘vacant’ – indicating that another person can use it. Hence, boy B
proceeds to use the changing room, while the sign displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and boy B are two
different processes, while the sign outside the changing room indicates the process
synchronization mechanism being used.
Conclusion
In conclusion, mutual exclusion is a key concept in synchronization that ensures
only one process accesses a shared resource at a time. This prevents conflicts and
data corruption, making sure that processes run smoothly and correctly. By using
mutual exclusion mechanisms, we can create stable and reliable systems that
handle multiple processes efficiently.
A cooperative process is one that can affect the execution of other processes or
can be affected by the execution of other processes. Such processes need to be
synchronized so that their order of execution can be guaranteed.
Race Condition
A race condition typically occurs when two or more threads try to read, write,
and possibly make decisions based on memory that they are accessing concurrently.
Critical Section
The regions of a program that access shared resources and may cause race
conditions are called critical sections. To avoid race conditions among the
processes, we need to ensure that only one process at a time can execute within
the critical section.
A critical section is the part of a program that accesses a shared resource. That
resource may be any resource in the computer: a memory location, a data
structure, the CPU, or any I/O device.
The critical section cannot be executed by more than one process at the same
time; the operating system therefore faces the difficulty of deciding which
processes to allow into the critical section and which to keep out.
The critical section problem is the problem of designing a set of protocols that
ensure that a race condition among the processes never arises.
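In practice, such a protocol is usually provided by the operating system or a threading library in the form of a lock. A minimal sketch with POSIX threads (the shared counter is illustrative): the mutex plays the role of the entry and exit sections, so at most one thread executes the critical section at a time.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long counter = 0;                           /* the shared resource */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);          /* entry section: acquire the lock */
            counter++;                          /* critical section                */
            pthread_mutex_unlock(&lock);        /* exit section: release the lock  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (always 2000000 with the lock held)\n", counter);
        return 0;
    }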
In order to synchronize the cooperative processes, our main task is to solve the
critical section problem. We need to provide a solution in such a way that the
following conditions can be satisfied.
Primary
1. Mutual Exclusion
Mutual exclusion means that only one process can execute inside the critical
section at any given time; no other process may enter it until the first one
leaves.
2. Progress
Progress means that if one process doesn't need to execute into the critical
section then it should not stop other processes from getting into the critical
section.
Secondary
1. Bounded Waiting
We should be able to predict the waiting time for every process to get into
the critical section. A process must not be kept waiting endlessly to get into
the critical section.
2. Architectural Neutrality
The solution should not depend on the underlying hardware architecture; if it
works on one platform, it should work on others as well.