OS Lesson 3 - Concurrency
Process interaction
Processes of an application need to interact with one another because they work towards a common
goal.
Types of interacting processes
Processes executing concurrently in the operating system may be of two types:
Independent process: These are processes whose execution cannot affect or be affected
by the execution of other processes.
Cooperating processes: These are processes whose execution affects or is affected by the
execution of other processes and which share data with other processes.
Classification of process interaction
The ways in which processes interact can be classified on the basis of the degree to which they are
aware of each other's existence, as follows:
Processes unaware of each other: These are independent processes that are not intended
to work together; the operating system therefore needs to be concerned with competition for
resources. For example, two independent applications may both want to access a disk, file or
printer, and the operating system must regulate these accesses.
Processes indirectly aware of each other: These are processes that are not necessarily
aware of each other by their respective process IDs but share access to a resource, such as
an I/O buffer. Such processes exhibit cooperation in sharing the common resource, e.g.
memory, data or a variable.
Processes directly aware of each other: In the two cases discussed above, each process
has its own isolated environment that does not include the other processes. The interaction
among processes is indirect, and in both cases there is sharing. When processes cooperate
by communication, however, the various processes participate in a common effort that links
them all. Communication provides a way to synchronize, or coordinate, the various
activities of concurrent processes.
Competition among processes unaware of each other
Concurrent processes come into conflict with each other when they are competing for the use of
the same resources, e.g. when two or more processes need to access a resource during the course
of their execution. Each process is unaware of the existence of the other processes and is to be
unaffected by their execution; it follows that each process should leave the state of any resource
that it uses unaffected.
There is no exchange of information between the competing processes. However, the execution of
one process may affect the behavior of competing processes. For example, if two processes both
desire access to a single resource, then one process will be allocated that resource by the operating
system and the other will have to wait. The process that is denied access is therefore slowed down.
In an extreme case, the blocked process may never get access to the resource and hence will never
run to completion (terminate successfully).
Potential control problems faced by concurrent processes
When concurrent processes compete the following control problems are faced:
Mutual exclusion: During the course of their execution, two or more concurrent processes
may require access to a critical resource, a single non-sharable resource such as a printer;
each process will be sending commands to the I/O device, receiving status information,
sending data and/or receiving data. The portion of the program that uses the critical
resource is known as the critical section, or critical region, of the program. It is important
that only one program at a time be allowed in its critical section so that an individual
process can have exclusive control of the critical resource. For example, assuming that the
critical resource is the printer, an individual process should have exclusive control of the
printer while it prints an entire file; otherwise, lines from competing processes will be
interleaved. The enforcement of mutual exclusion creates the other two control problems.
Deadlock: Consider two concurrent processes P1 and P2 and two resources R1 and R2.
Suppose that each process needs access to both resources to perform part of its function
then it is possible to have the following situation; the operating system assigns R1 to P2 and
R2 to P1. Each process is waiting for one of the two resources. Neither will release the
resource that it already has until it has acquired the other resource and performed the function
requiring both resources. The two processes are said to be deadlocked.
Starvation: Suppose that three concurrent processes (P1, P2, and P3) each require periodic
access to resource R. Consider the situation in which P1 is in possession of the resource
and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section,
either P2 or P3 should be allowed access to R. Assume that the operating system grants
access to P3 and that P1 again requires access to R before P3 completes its critical section.
If the operating system repeatedly grants access alternately to P1 and P3, then P2 may be indefinitely denied access
to the resource, even though there is no deadlock situation.
Cooperation among processes indirectly aware of each other
Processes that are indirectly aware of each other may have access to the same data. Thus, the
processes must cooperate to ensure that the data they share are properly managed, and the control
mechanism must ensure the integrity of the shared data. Since data are held on resources (devices,
memory), the control problems of mutual exclusion, deadlock and starvation are again present.
The difference is that data items may be accessed in two different modes, reading and writing, and
only writing operations must be mutually exclusive. In addition, a new control problem is
introduced: data coherence, which requires the shared data to remain consistent (comprehensible).
Communication between processes directly aware of each other
Communication between concurrent processes consists of messages of some sort, and primitives
for sending and receiving messages may be provided as part of the programming
language or by the operating system kernel. Since nothing is shared between processes in the act
of passing messages, mutual exclusion is not a control requirement for this sort of cooperation.
However, the problems of deadlock and starvation are still present. As an example of deadlock, two
processes may be blocked, each waiting for a communication from the other. As an example of
starvation, consider three processes, P1, P2 and P3, that exhibit the following behavior: P1 and P2
repeatedly attempt to communicate with each other, and P3 repeatedly attempts to communicate
with P1. A sequence could arise in which P1 and P2 exchange information repeatedly, while P3 is
blocked waiting for a communication from P1. There is no deadlock, because P1 remains active,
but P3 is starved.
Race condition
A race condition is a condition that occurs when multiple concurrent processes or threads read
and write (access and manipulate) the same data concurrently and the final result depends on the
particular order in which the accesses take place.
Example
Suppose that two processes, P1 and P2, share a global variable x. At some point in its execution, P1
updates x to the value 1, and at some point in its execution, P2 updates x to the value 2. Thus, the
two tasks are in a race to write variable x. In this example the "loser" of the race (the process that
updates the variable x last) determines the final value of x. A solution to race conditions is to find
some way to prohibit more than one process from reading and writing the shared data at the same
time, i.e. mutual exclusion.
Reasons /Benefits of process Interaction
There are several reasons for providing an environment that allows process interaction. These
reasons include:
Information sharing: Since several users may be interested in the same piece of
information, e.g. a shared file, the operating system provides an environment that allows
concurrent access to such information.
Computation speed-up: A particular task can be broken into subtasks so that it runs faster.
Each subtask executes in parallel with the others. The speed-up is achieved through the use
of multiple processors (or processor cores).
Modularity: A system can be constructed in a modular fashion dividing the system
functions into separate processes or threads.
Convenience: An end user may work on many tasks at the same time, e.g. a user may be
editing, listening to music and compiling in parallel.
Interprocess synchronization
Interprocess synchronization, also known as process synchronization or process coordination,
is the coordination of process activities to ensure that actions are performed in a desired order and
a common objective is achieved. Coordination is achieved through a process seeking the consent
of some other processes before performing a sensitive action ai; these processes give consent only
if they have performed all actions which are supposed to precede action ai, and thus coordination
is achieved. Synchronization among cooperating concurrent processes is essential for preserving
precedence relationships and for preventing concurrency-related timing problems. Cooperating
processes must synchronize with each other when they are to share resources.
Producer consumer problem
In this problem a producer process and a consumer process share a fixed-size buffer; between
them is the critical section that must run in a mutually exclusive way.
The producer process produces data items (records, characters) and tries to insert them into an
empty slot of the buffer using the "in" pointer. The consumer process removes data items from a
filled slot in the buffer, one at a time, using the "out" pointer.
The operating system must maintain a count of empty and full buffer slots, and there has to be
synchronization between the producer process and the consumer process so as to avoid the
producer process trying to add data items into a full buffer and the consumer process trying
to remove data items from an empty buffer.
Solution
The producer consumer problem can be solved using semaphores or monitors. Using semaphores,
there are three semaphores, namely:
Full: This is a counting semaphore which counts the number of full slots in the buffer at
any given instant. It represents the number of occupied slots in the buffer, and its initial
value is 0.
Empty: This is a counting semaphore which counts the number of empty slots in a buffer
at any instant. It represents the number of empty slots in the buffer and its initial value is
the number of slots in the buffer since initially all slots are empty.
Mutex (m): This is a semaphore used to enforce mutual exclusion. It ensures that only one
process either producer or consumer may access the buffer at any given instant.
Mutual exclusion is enforced by the Mutex semaphore; synchronization is enforced using the
"full" and "empty" semaphores, which avoid race conditions and deadlock in the system. If the
above three semaphores are not used, there will be a race condition or deadlock situation as well
as loss of data.
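The three-semaphore solution described above can be sketched in Python using the threading module. This is a minimal sketch for one producer and one consumer over a 5-slot buffer; the buffer size and item counts are illustrative choices, not from the source.

```python
import threading
import collections

N = 5                                   # number of buffer slots
buffer = collections.deque()
empty = threading.Semaphore(N)          # counts empty slots, initially N
full = threading.Semaphore(0)           # counts full slots, initially 0
mutex = threading.Lock()                # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait for an empty slot
        with mutex:
            buffer.append(item)         # insert at the "in" end
        full.release()                  # signal one more full slot

def consumer(n):
    for _ in range(n):
        full.acquire()                  # wait for a full slot
        with mutex:
            item = buffer.popleft()     # remove from the "out" end
        empty.release()                 # signal one more empty slot
        consumed.append(item)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))      # all items consumed in FIFO order
```

Note how "empty" blocks the producer when the buffer is full and "full" blocks the consumer when it is empty, exactly as the notes require.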
Readers and writers problem
This is a synchronization problem where there is a shared resource which should be accessed by
multiple processes. There are two types of processes in this context namely;
Readers: These are a number of processes that only read (view) content from the shared
resource.
Writers: These are a number of processes that write (add) content into the shared
resource.
Since there is a resource being shared, readers and writers can create conflict through these
combinations: writer-writer and writer-reader access to the critical section simultaneously, which
creates synchronization problems, loss of data, etc.
Solution
Semaphores are used as a solution to the readers and writers problem, and the following
conditions must be satisfied:
Any number of readers may simultaneously read from the shared resource (enter the
critical section).
Only one writer may write to the shared resource (enter the critical section) at a time.
When a writer is writing data to the shared resource (inside the critical section), no other
process can access the resource (enter the critical section).
A writer cannot write to the resource if there is a non-zero number of readers accessing
the resource (in the critical section).
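The conditions above can be sketched with the classic "first readers-writers" scheme: a counter of active readers, a lock protecting that counter, and a resource lock held either by the single writer or by the group of readers. This is an illustrative sketch in Python, not taken from the source notes.

```python
import threading

read_count = 0
read_count_lock = threading.Lock()      # protects read_count itself
resource = threading.Lock()             # held by the writer, or by the reader group
shared = {"value": 0}
log = []

def reader(name):
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:             # first reader locks out writers
            resource.acquire()
    log.append((name, shared["value"]))  # any number of readers read together
    with read_count_lock:
        read_count -= 1
        if read_count == 0:             # last reader lets writers back in
            resource.release()

def writer(value):
    with resource:                      # one writer, and no readers, at a time
        shared["value"] = value

threads = [threading.Thread(target=writer, args=(1,))] + \
          [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))                         # all three readers completed
```

This variant favors readers, so a continuous stream of readers could starve the writer, which mirrors the starvation discussion earlier in the notes.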
Dining philosophers problem
In this problem, five philosophers sit around a circular table. Each philosopher has a plate of
spaghetti, and five forks (resources) are available, one between each pair of philosophers. Each
philosopher needs two forks to eat. At any instant a philosopher is either eating or thinking. When
he wants to eat, he uses two forks, one from his left and one from his right. When he wants to
think, he puts down both forks at their original places.
Solution
The dining philosophers problem can be solved by an algorithm that ensures the maximum number
of philosophers can eat at once and none starves, as long as each philosopher eventually stops
eating. It must satisfy the following conditions:
It must allow a philosopher to pick up the forks only if both the left and right forks are
available.
It must allow only four philosophers to sit at the table at once. That way, if all four
philosophers pick up their first forks, there will still be at least one fork left on the table, so
one philosopher can start eating; eventually two forks will become available, and deadlock
and starvation will be avoided.
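The "at most four at the table" rule above can be sketched in Python with a lock per fork and a counting semaphore of 4 seats. This is a minimal sketch; the number of rounds each philosopher eats is an arbitrary choice for the demonstration.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
seats = threading.Semaphore(N - 1)      # at most N-1 philosophers at the table
meals = []
meals_lock = threading.Lock()

def philosopher(i, rounds):
    for _ in range(rounds):
        seats.acquire()                 # sit down only if a seat is free
        with forks[i]:                  # pick up the left fork
            with forks[(i + 1) % N]:    # then the right fork: eat
                with meals_lock:
                    meals.append(i)
        seats.release()                 # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i, 3)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(len(meals))                       # every philosopher ate every round
```

Because at most four philosophers ever hold forks at once, at least one of them can always acquire both forks, so the circular wait that causes deadlock cannot form.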
Sleeping barber problem
In this problem, a barber shop has one barber, one barber chair and a number of chairs for
waiting customers. If there are no customers, the barber sits in the barber chair and sleeps.
When a customer arrives, he has to wake up the sleeping barber. If additional customers arrive
while the barber is cutting a customer's hair, they either sit down (if there are empty chairs) or
leave the shop (if all chairs are full). The problem is to program the barber and the customers
without getting into race conditions.
Solution
The sleeping barber problem can be solved using semaphores. There are three semaphores
involved, namely:
Customers: This is a counting semaphore which counts waiting customers (excluding the
customer in the barber's chair, who is not waiting) and whose initial value is zero.
Barber: This is a counting semaphore which counts the number of barbers who are idle,
waiting for customers, and whose initial value is zero.
Mutex: This is a semaphore used to enforce mutual exclusion and whose initial value is
one.
There is also a need for a variable, waiting, which counts the number of waiting customers.
This variable is a copy of the customers semaphore, needed because there is no way to read the
current value of a semaphore.
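The three semaphores and the waiting counter can be sketched in Python as follows. This is an illustrative sketch: to keep the run deterministic, the barber serves a fixed number of customers and the customers arrive one at a time, so none is turned away here.

```python
import threading

CHAIRS = 3                              # waiting-room chairs (illustrative)
customers = threading.Semaphore(0)      # counts waiting customers, initially 0
barber = threading.Semaphore(0)         # counts idle barbers, initially 0
mutex = threading.Lock()                # protects the `waiting` count
waiting = 0                             # copy of the customers semaphore's value
haircuts = 0
turned_away = 0

def barber_process(total):
    global waiting, haircuts
    for _ in range(total):
        customers.acquire()             # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber.release()                # call one customer to the chair
        haircuts += 1                   # cut hair

def customer_process():
    global waiting, turned_away
    with mutex:
        if waiting < CHAIRS:
            waiting += 1
            customers.release()         # wake the barber if he is asleep
            served = True
        else:
            served = False              # all chairs full: leave the shop
    if served:
        barber.acquire()                # wait until the barber is ready
    else:
        turned_away += 1

b = threading.Thread(target=barber_process, args=(4,))
b.start()
for _ in range(4):                      # customers arrive one at a time
    c = threading.Thread(target=customer_process)
    c.start(); c.join()
b.join()
print(haircuts, turned_away)
```

Note that the waiting variable is only ever touched inside the mutex, which is exactly why the notes require it alongside the customers semaphore.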
Approaches to achieving/enforcing mutual exclusion/Solutions to process synchronization
problems
Mutual exclusion can be achieved in various ways so that while one process is busy updating
shared memory in its critical section, no other process will enter its critical section and cause
trouble. These approaches are divided into four categories, namely:
Software solutions
These are mechanisms for enforcing mutual exclusion that use global variables to control access
to the critical section and take the form of algorithms whose correctness does not rely on any
other assumptions and examples include:
Strict alternation or turn variable: This algorithm enforces mutual exclusion by using a
shared turn variable to allow only one process into the critical section at a time, i.e. if one
process is already in its critical section, the other will have to wait. Strict alternation
satisfies mutual exclusion and bounded waiting (no starvation) but does not satisfy
progress; it also avoids deadlock.
Peterson's Algorithm: This algorithm enforces mutual exclusion by use of shared data: a
turn variable and two Boolean flags. If process i wants to enter its critical section, it sets
flag[i] to true and sets turn to the other process; it is allowed into its critical section once
the other process's flag is false or the turn has come back to i.
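Peterson's algorithm for two processes can be sketched in Python as below. This is a teaching sketch: it relies on CPython's sequentially consistent execution of bytecode; on real hardware with reordered memory operations the algorithm additionally needs memory barriers.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process must yield when both are interested
count = 0               # shared counter updated inside the critical section

def process(i, rounds):
    global turn, count
    other = 1 - i
    for _ in range(rounds):
        flag[i] = True                          # announce interest
        turn = other                            # politely give the turn away
        while flag[other] and turn == other:
            pass                                # busy wait while the other has priority
        count += 1                              # critical section
        flag[i] = False                         # leave the critical section

t0 = threading.Thread(target=process, args=(0, 200))
t1 = threading.Thread(target=process, args=(1, 200))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)    # no increments were lost, so mutual exclusion held
```

Each process defers by setting turn to the other; only when the other is uninterested, or the turn has come back, does it proceed, which is why both mutual exclusion and progress hold.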
Lamport's bakery algorithm: This algorithm enforces mutual exclusion by use of numbers
(tickets). Each process receives a number before entering its critical section, and the holder
of the smallest number enters the critical section. If processes Pi and Pj receive the same
number and i < j, then Pi is served first. The bakery algorithm satisfies mutual exclusion,
progress and bounded waiting (no starvation). It also avoids deadlock.
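The ticket-taking scheme can be sketched for two processes as below. As with the Peterson sketch, this is illustrative and assumes CPython's sequentially consistent execution; the (number, id) pair comparison implements the "same number, smaller id first" tie-break from the text.

```python
import threading

N = 2                       # number of processes
choosing = [False] * N      # process j is currently picking its number
number = [0] * N            # ticket numbers; 0 means "not interested"
count = 0

def bakery_lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)         # take a ticket above all outstanding ones
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:
            pass                        # wait until j has finished taking a ticket
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                        # smaller (ticket, id) is served first

def bakery_unlock(i):
    number[i] = 0                       # give up the ticket

def process(i, rounds):
    global count
    for _ in range(rounds):
        bakery_lock(i)
        count += 1                      # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=process, args=(i, 200)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(count)
```

The choosing flags are essential: without them, a process could be overtaken while still computing its ticket, breaking mutual exclusion.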
Lock variables: This algorithm uses a shared variable to enforce mutual exclusion. The
shared variable is initialized to zero, and when a process wants to enter its critical section it
first tests the shared variable; if the shared variable's value is zero, the process locks it (sets
it to 1) and enters the critical section. Note that the test and the set must happen as one
indivisible action; otherwise two processes can both read 0 and enter together. After the
shared variable has been locked, other processes desiring to enter their critical sections go
into busy waiting mode. The term busy waiting, or spin waiting, refers to a technique in
which a process can do nothing until it gets permission to enter its critical section, but
continues to execute an instruction, or set of instructions, that tests the appropriate variable
to gain entrance. When a process leaves its critical section, it resets the lock to 0; at this
point one, and only one, of the waiting processes is granted access to its critical section.
Disadvantages
Processes that are requesting to enter their critical sections are busy waiting.
Hardware
These are mechanisms for enforcing mutual exclusion which depend on machine instructions and
examples include:
Test and set lock (TSL): In this approach a shared lock variable, which takes either of two
values, 0 or 1, is used to enforce mutual exclusion. A process tests the shared variable
before entering its critical section using an atomic test-and-set instruction. If the shared
variable is locked (value set to 1), the process keeps waiting until the shared variable
becomes free (unlocked); if the shared variable is unlocked (value set to 0), the process
takes it and locks it in the same indivisible step.
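A TSL-style spin lock can be sketched in Python as below. Python does not expose the machine's test-and-set instruction, so this sketch uses the non-blocking acquire of a threading.Lock to stand in for it: the call atomically "tests" the variable and "sets" it in one step, returning False if it was already locked.

```python
import threading

lock_variable = threading.Lock()    # non-blocking acquire stands in for an
                                    # atomic test-and-set machine instruction
count = 0

def tsl_enter():
    # "test and set" in a loop: spin while the variable is already 1 (locked)
    while not lock_variable.acquire(blocking=False):
        pass

def tsl_leave():
    lock_variable.release()         # set the variable back to 0 (unlocked)

def process(rounds):
    global count
    for _ in range(rounds):
        tsl_enter()
        count += 1                  # critical section
        tsl_leave()

threads = [threading.Thread(target=process, args=(200,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(count)
```

The spin loop is exactly the busy waiting described under lock variables; the difference is that here the test and the set cannot be separated, so mutual exclusion actually holds.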
Disabling Interrupts: This approach enforces mutual exclusion by allowing a process to
disable all interrupts just after entering its critical section and to re-enable them just before
leaving it. With interrupts disabled, no clock interrupts can occur. Since the processor is
only switched from process to process as a result of clock or other interrupts, with
interrupts turned off the processor will not be switched to another process.
Mutex Locks: This approach enforces mutual exclusion by use of a shared variable known
as a Mutex. A process wishing to enter its critical section tests the Mutex, and if its value is
0 (unlocked), the process locks it (sets its value to 1). As the resource is locked while a
process executes its critical section, no other process can access it. The process currently in
its critical section must unlock the Mutex (set its value to 0) upon leaving the critical
section.
Sleep and wakeup: This approach uses two primitives, sleep and wakeup, to enforce
mutual exclusion. When a process is not permitted to access its critical section, it uses the
sleep primitive to cause itself to block. The blocked process is not scheduled to run again
until another process uses the wakeup primitive when leaving its critical section. This
approach is a solution to the wastage of processor time by processes busy waiting to enter
their critical sections; busy-waiting processes waste processor time checking to see if they
can proceed (enter the critical section).
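The sleep and wakeup pair can be sketched in Python with a condition variable, where wait plays the role of sleep and notify plays the role of wakeup. This is an illustrative sketch, not from the source notes.

```python
import threading

condition = threading.Condition()
may_enter = False
events = []

def blocked_process():
    global may_enter
    with condition:
        while not may_enter:
            condition.wait()        # "sleep": block without burning CPU cycles
        events.append("entered critical section")

def leaving_process():
    global may_enter
    with condition:
        may_enter = True
        condition.notify()          # "wakeup": unblock the sleeping process

sleeper = threading.Thread(target=blocked_process)
waker = threading.Thread(target=leaving_process)
sleeper.start(); waker.start(); sleeper.join(); waker.join()
print(events)
```

Unlike the spin loops above, the blocked process consumes no processor time while it waits, which is precisely the advantage the notes describe.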
Compare and swap: In this approach a process compares the contents of a memory
location with a given value and, only if they are the same, modifies the contents of that
memory location to a new given value. The entire compare-and-swap operation is carried
out atomically, i.e. it is not subject to interruption. The atomicity guarantees that the new
value is calculated based on up-to-date information; if the value had been updated by
another process in the meantime, the write fails.
Advantages
It is simple and therefore easy to verify
It is applicable to any number of processes on either a single processor or multiple
processor sharing main memory.
Disadvantages
Busy waiting is involved and thus processor time is wasted
Deadlock is possible
Starvation is possible
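The compare-and-swap retry loop described above can be sketched in Python as below. Python does not expose a hardware CAS instruction, so the sketch simulates the hardware's atomicity guarantee with a lock; the retry loop around it is the real point.

```python
import threading

_atomic = threading.Lock()   # simulates the hardware's atomicity guarantee

def compare_and_swap(cell, expected, new):
    """If cell[0] equals expected, write new and succeed; otherwise fail."""
    with _atomic:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False         # another process updated the cell first: write fails

counter = [0]

def increment(rounds):
    for _ in range(rounds):
        while True:
            old = counter[0]                             # read the current value
            if compare_and_swap(counter, old, old + 1):  # write only if unchanged
                break                                    # otherwise retry

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])
```

When the CAS fails, the process simply re-reads the value and tries again, so every increment is based on up-to-date information and no update is lost.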
Binary semaphore: This is a special form of semaphore used for implementing mutual
exclusion; hence it is often called a Mutex. A binary semaphore is initialized to 1 and only
takes the values 0 and 1 during program execution.
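A binary semaphore used as a mutex can be sketched in Python as below. The acquire and release calls correspond to the classic wait and signal operations on the semaphore; the shared balance variable is an illustrative example, not from the source.

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: initialized to 1
balance = 0                      # shared data protected by the semaphore

def deposit(amount, times):
    global balance
    for _ in range(times):
        mutex.acquire()          # wait: value 1 -> 0, or block if already 0
        balance += amount        # critical section
        mutex.release()          # signal: value 0 -> 1

threads = [threading.Thread(target=deposit, args=(1, 1000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                   # no deposit was lost
```

Because the semaphore's value never exceeds 1, at most one process can be between acquire and release at any instant, which is exactly mutual exclusion.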