Concurrency:
The central themes of operating system design are all concerned with the management of
processes and threads:
Operating system structure: the structuring advantages of organizing work as a set of concurrent activities apply to systems programs as well, and operating systems are themselves often implemented as a set of processes or threads.
Concurrent processes may share data to support communication and information exchange; threads in the same process share a global address space. Such concurrent sharing can cause problems, for example lost updates.
Principles of Concurrency
atomic operation A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation. The sequence of instructions is guaranteed to execute as a group, or not execute at all, having no visible effect on system state. Atomicity guarantees isolation from concurrent processes.
critical section A section of code within a process that requires access to shared resources and
that must not be executed while another process is in a corresponding section of
code.
deadlock A situation in which two or more processes are unable to proceed because each
is waiting for one of the others to do something.
livelock A situation in which two or more processes continuously change their states in
response to changes in the other process(es) without doing any useful work.
mutual exclusion The requirement that when one process is in a critical section that accesses
shared resources, no other process may be in a critical section that accesses any
of those shared resources.
race condition A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.
At first glance, it may seem that interleaving and overlapping represent fundamentally different
modes of execution and present different problems. In fact, both techniques can be viewed as
examples of concurrent processing, and both present the same problems. In the case of a
uniprocessor, the problems stem from a basic characteristic of multiprogramming systems: The
relative speed of execution of processes cannot be predicted. It depends on the activities of
other processes, the way in which the OS handles interrupts, and the scheduling policies of the
OS.
1. The sharing of global resources is fraught with danger. For example, if two processes
both make use of the same global variable and both perform reads and writes on that variable,
then the order in which the various reads and writes are executed is critical.
2. It is difficult for the OS to manage the allocation of resources optimally. For example,
process A may request use of, and be granted control of, a particular I/O channel and then be
suspended before using that channel. It may be undesirable for the OS simply to lock the
channel and prevent its use by other processes; indeed this may lead to a deadlock condition.
3. It becomes very difficult to locate a programming error because results are typically not
deterministic and reproducible.
All of the foregoing difficulties present themselves in a multiprocessor system as well, because here too the relative speed of execution of processes is unpredictable. A multiprocessor system must also deal with problems arising from the simultaneous execution of multiple processes. Fundamentally, however, the problems are the same as those for uniprocessor systems. This should become clear as the discussion proceeds.
A race condition occurs when multiple processes or threads read and write data items so that
the final result depends on the order of execution of instructions in the multiple processes. Let
us consider two simple examples.
As a first example, suppose that two processes, P1 and P2, share the global variable a . At some
point in its execution, P1 updates a to the value 1, and at some point in its execution, P2 updates
a to the value 2. Thus, the two tasks are in a race to write variable a . In this example, the “loser”
of the race (the process that updates last) determines the final value of a.
For our second example, consider two processes, P3 and P4, that share global variables b and c,
with initial values b=1 and c=2. At some point in its execution, P3 executes the assignment
b=b+c, and at some point in its execution, P4 executes the assignment c=b+c. Note that the two
processes update different variables. However, the final values of the two variables depend on
the order in which the two processes execute these two assignments. If P3 executes its
assignment statement first, then the final values are b=3 and c=5. If P4 executes its assignment
statement first, then the final values are b=4 and c=3.
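To make the second example concrete, the following is a minimal sketch (not part of the original notes) using POSIX threads; the thread function names p3 and p4 are illustrative. Depending on which thread the scheduler runs first, the program prints b=3 c=5 or b=4 c=3 (other interleavings of the individual reads and writes are also possible):

#include <pthread.h>
#include <stdio.h>

static int b = 1, c = 2;          /* shared global variables, as in the example */

static void *p3(void *arg) { b = b + c; return NULL; }   /* process P3 */
static void *p4(void *arg) { c = b + c; return NULL; }   /* process P4 */

int main(void)
{
    pthread_t t3, t4;
    pthread_create(&t3, NULL, p3, NULL);
    pthread_create(&t4, NULL, p4, NULL);
    pthread_join(t3, NULL);
    pthread_join(t4, NULL);
    printf("b=%d c=%d\n", b, c);  /* final result depends on execution order */
    return 0;
}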
What design and management issues are raised by the existence of concurrency?
1. The OS must be able to keep track of the various processes. This is done with the use of
process control blocks.
2. The OS must allocate and de-allocate various resources for each active process, including:
• Processor time (this is the scheduling function)
• Memory (most operating systems use a virtual memory scheme)
• Files
• I/O devices
3. The OS must protect the data and physical resources of each process against unintended
interference by other processes. This involves techniques that relate to memory, files, and I/O
devices.
4. The functioning of a process, and the output it produces, must be independent of the
speed at which its execution is carried out relative to the speed of other concurrent processes.
We can classify the ways in which processes interact on the basis of the degree to which they are aware of each other’s existence. The following lists three possible degrees of awareness plus the consequences of each:
• Processes unaware of each other: independent processes that are not intended to work together but compete for the same resources (e.g., processor time, I/O devices).
• Processes indirectly aware of each other: processes that are not necessarily aware of each other by their respective process IDs but that share access to some object, such as an I/O buffer. Such processes exhibit cooperation in sharing the common object.
• Processes directly aware of each other: processes that are able to communicate with each other by process ID and that are designed to work jointly on some activity. Again, such processes exhibit cooperation.
In practice the conditions are not always this clear-cut; several processes may exhibit aspects of both competition and cooperation. Nevertheless, it is productive to examine each of the three degrees separately and determine their implications for the OS.
Resource Competition
Concurrent processes come into conflict with each other when they are competing for the use of
the same resource. In its pure form, we can describe the situation as follows. Two or more
processes need to access a resource during the course of their execution. Each process is
unaware of the existence of other processes, and each is to be unaffected by the execution of the
other processes. It follows from this that each process should leave the state of any resource
that it uses unaffected. Examples of resources include I/O devices, memory, processor time, and
the clock.
There is no exchange of information between the competing processes. However, the execution
of one process may affect the behavior of competing processes. In particular, if two processes
both wish access to a single resource, then one process will be allocated that resource by the OS,
and the other will have to wait. Therefore, the process that is denied access will be slowed
down. In an extreme case, the blocked process may never get access to the resource and hence
will never terminate successfully.
In the case of competing processes three control problems must be faced. First is the need for
mutual exclusion . Suppose two or more processes require access to a single non-sharable
resource, such as a printer. During the course of execution, each process will be sending
commands to the I/O device, receiving
status information, sending data, and/or receiving data. We will refer to such a resource as a
critical resource , and the portion of the program that uses it as a critical section of the
program. It is important that only one program at a time be allowed in its critical section. We
cannot simply rely on the OS to understand and enforce this restriction because the detailed
requirements may not be obvious. In the case of the printer, for example, we want any individual process to have control of the printer while it prints an entire file. Otherwise, lines from competing processes will be interleaved.
The enforcement of mutual exclusion creates two additional control problems. One is that of
deadlock . For example, consider two processes, P1 and P2, and two resources, R1 and R2.
Suppose that each process needs access to both resources to perform part of its function. Then it
is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is
waiting for one of the two resources. Neither will release the resource that it already owns until
it has acquired the other resource and performed the function requiring both resources. The
two processes are deadlocked.
A final control problem is starvation . Suppose that three processes (P1, P2, P3) each require
periodic access to resource R. Consider the situation in which P1 is in possession of the
resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical
section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3
and that P1 again requires access before P3 completes its critical section. If the OS grants access
to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2
may indefinitely be denied access to the resource, even though there is no deadlock situation.
Need for Mutual Exclusion
The result of concurrent execution will depend on the order in which instructions are
interleaved.
Example:
static char a;
void echo()
{
    cin >> a;    // read a character into the shared variable a
    cout << a;   // echo the character
}
Assume P1 and P2 both execute this code and share the variable a. Processes can be preempted at any time. Assume P1 is preempted after the input statement, and P2 then executes entirely. The character echoed by P1 will be the one read by P2!
Individual processes (threads) execute sequentially in isolation, but concurrency causes them to
interact. We need to prevent concurrent execution by processes when they are changing the
same data. We need to enforce mutual exclusion.
When a process executes code that manipulates shared data (or resources), we say that the
process is in its critical section (CS) for that shared data. We must enforce mutual exclusion on
the execution of critical sections. Only one process at a time can be in its CS (for that shared data
or resource).
Enforcing mutual exclusion guarantees that related CS’s will be executed serially instead of
concurrently. The critical section problem is how to provide mechanisms to enforce mutual
exclusion so the actions of concurrent processes won’t depend on the order in which their
instructions are interleaved.
Processes/threads must request permission to enter a CS, & signal when they leave CS.
Program structure: each process is organized as an entry section (request permission to enter the CS), the critical section itself, an exit section (signal that the CS has been left), and a remainder section (RS): code that does not involve shared data and resources.
The CS problem exists on multiprocessors as well as on uniprocessors.
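As a minimal sketch (illustrative only, in the style of the pseudocode below), the general structure of each participating process looks as follows; enter_cs() and exit_cs() are placeholder names for whatever mutual exclusion mechanism is used, and the following subsections describe concrete mechanisms:

while (true)
{
    enter_cs();    /* entry section: request permission to enter the CS   */
    /* critical section: code that manipulates the shared data            */
    exit_cs();     /* exit section: signal that the CS has been left      */
    /* remainder section (RS): code not involving shared data             */
}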
i) Disabling Interrupts
In a uniprocessor system, concurrent processes cannot have overlapped execution; they can
only be interleaved. To guarantee mutual exclusion, it is sufficient to prevent a process from
being interrupted.
Pseudo code:
while (true)
{
/* disable interrupts */;
/* critical section */;
/* enable interrupts */;
/* remainder */;
}
Because the critical section cannot be interrupted, mutual exclusion is guaranteed. The
price of this approach, however, is high. The efficiency of execution could be noticeably
degraded because the processor is limited in its ability to interleave processes.
• Compare&Swap Instruction
int compare_and_swap(int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}
• Exchange Instruction
/* Pseudo-C definition of the atomic exchange instruction, which swaps the
   contents of a register with a memory location in a single, indivisible step. */
void exchange(int *reg, int *memory)
{
    int temp;
    temp = *memory;
    *memory = *reg;
    *reg = temp;
}
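As an illustration (a sketch, not part of the original notes), mutual exclusion can be built on the compare_and_swap function above using a shared lock variable bolt, initialized to 0; a process enters its critical section only after it succeeds in changing bolt from 0 to 1:

int bolt = 0;                        /* shared lock variable */

void process(void)
{
    while (1) {
        while (compare_and_swap(&bolt, 0, 1) == 1)
            ;                        /* busy wait: another process holds the lock */
        /* critical section */
        bolt = 0;                    /* release the lock */
        /* remainder section */
    }
}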
SEMAPHORE
Only three operations may be performed on a semaphore, all of which are atomic:
– initialize
– decrement (semWait)
– increment (semSignal)
The semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked; otherwise, the process continues execution.
The semSignal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
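A minimal sketch (pseudo-C with assumed types, not from the original notes) of a counting semaphore and the two operations described above; in a real OS the kernel executes semWait and semSignal atomically:

struct semaphore {
    int count;                       /* when negative, |count| = number of waiters */
    struct queue *waiters;           /* processes blocked on this semaphore */
};

void semWait(struct semaphore *s)
{
    s->count--;
    if (s->count < 0) {
        /* place the calling process in s->waiters and block it */
    }
}

void semSignal(struct semaphore *s)
{
    s->count++;
    if (s->count <= 0) {
        /* remove one process from s->waiters and move it to the ready list */
    }
}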
Dijkstra proposed a significant technique for managing concurrent processes and solving complex mutual exclusion problems. He introduced a new synchronization tool called the semaphore. There are two types of semaphores:
1. Binary semaphore
2. Counting semaphore
A binary semaphore can take only the values 0 and 1; a counting semaphore can take any nonnegative integer value.
Two standard operations, wait and signal, are defined on the semaphore. Entry to the critical section is controlled by the wait operation, and exit from the critical section is taken care of by the signal operation. The wait and signal operations are also called P and V operations. The manipulation of a semaphore S takes place as follows:
1. The wait command P(S) decrements the semaphore value by 1. If the resulting value becomes negative, then the P command is delayed (the process is blocked) until the condition is satisfied.
2. The signal command V(S) increments the semaphore value by 1; if any process is blocked on the semaphore, one of them is unblocked.
Mutual exclusion on the semaphore itself is enforced within P(S) and V(S): if a number of processes attempt P(S) simultaneously, only one process will be allowed to proceed and the other processes will be waiting. These operations are defined as under −
P(S) or wait(S):
    If S > 0 then
        Set S to S - 1
    Else
        block the calling process (it waits on S)
V(S) or signal(S):
    If any process is waiting on S then
        unblock one of the waiting processes
    Else
        Set S to S + 1
The semaphore operations are implemented as operating system services, and so wait and signal are atomic in nature, i.e., once started, execution of these operations cannot be interrupted.
Thus the semaphore is a simple yet powerful mechanism to ensure mutual exclusion among concurrent processes.
Binary Semaphore:
For both counting semaphores and binary semaphores, a queue is used to hold processes
waiting on the semaphore.
The question arises of the order in which processes are removed from such a queue.
• The process that has been blocked the longest is released from the queue first; a
semaphore whose definition includes this policy is called a strong semaphore.
• A semaphore that does not specify the order in which processes are removed
from the queue is a weak semaphore.
Example of Semaphore:
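As an example (a minimal sketch, using the semWait/semSignal primitives above), any number of processes can achieve mutual exclusion with a single shared semaphore s whose count is initialized to 1:

struct semaphore s;                  /* shared by all processes; s.count initialized to 1 */

void process_body(int i)             /* body of process i */
{
    while (1) {
        semWait(&s);                 /* only one process at a time gets past this point */
        /* critical section */
        semSignal(&s);               /* let the next waiting process enter */
        /* remainder section */
    }
}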
Mutex:
A similar concept related to the binary semaphore is the mutex. A key difference between the
two is that the process that locks the mutex (sets the value to zero) must be the one to unlock it
(sets the value to 1).
In contrast, it is possible for one process to lock a binary semaphore and for another to unlock
it.
The mutex is similar to the principles of the binary semaphore with one significant
difference: the principle of ownership. Ownership is the simple concept that when a task locks
(acquires) a mutex only it can unlock (release) it. If a task tries to unlock a mutex it hasn’t
locked (thus doesn’t own) then an error condition is encountered and, most importantly, the
mutex is not unlocked. If the mutual exclusion object does not support ownership then, irrespective of what it is called, it is not a mutex.
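A minimal sketch (illustrative, not from the original notes) using the POSIX threads API: the same thread that calls pthread_mutex_lock() must call pthread_mutex_unlock(), which is exactly the ownership rule described above. The shared counter is only an illustrative piece of protected data:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_count = 0;         /* illustrative shared data */

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);       /* this thread now owns the mutex */
    shared_count++;                  /* critical section */
    pthread_mutex_unlock(&lock);     /* only the owner may unlock it */
    return NULL;
}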
Producer/Consumer Problem:
• There are one or more producers generating some type of data (records,
characters) and placing these in a buffer.
• There is a single consumer that is taking items out of the buffer one at a time.
The problem is to make sure that the producer won’t try to add data into the buffer if it is full, and that the consumer won’t try to remove data from an empty buffer.
We will look at a number of solutions to this problem to illustrate both the power and the pitfalls of semaphores.
Producer:
while (true) {
    /* produce item v */
    b[in] = v;
    in++;
}

Consumer:
while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */
}

The producer can generate items and store them in the buffer at its own pace.
The consumer proceeds in a similar fashion but must make sure that it does not attempt to read
from an empty buffer.
/* Program: Producer/Consumer (menu-driven simulation) */
#include<stdio.h>
#include<stdlib.h>
int mutex=1,full=0,empty=3,x=0;
int main()
{
int n;
void producer();
void consumer();
int wait(int);
int signal(int);
printf("\n1.Producer\n2.Consumer\n3.Exit");
while(1)
{
printf("\nEnter your choice:");
scanf("%d",&n);
switch(n)
{
case 1:
if((mutex==1)&&(empty!=0))
producer();
else
printf("Buffer is full!!");
break;
case 2:
if((mutex==1)&&(full!=0))
consumer();
else
printf("Buffer is empty!!");
break;
case 3:
exit(0);
break;
}
}
return 0;
}
int wait(int s)
{
return (--s);
}
int signal(int s)
{
return(++s);
}
void producer()
{
mutex=wait(mutex);
full=signal(full);
empty=wait(empty);
x++;
printf("\nProducer produces the item %d",x);
mutex=signal(mutex);
}
void consumer()
{
mutex=wait(mutex);
full=wait(full);
empty=signal(empty);
printf("\nConsumer consumes item %d",x);
x--;
mutex=signal(mutex);
}
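The program above only simulates semaphores with plain integer counters. For comparison, here is a sketch (not from the original notes; produce() and consume() are assumed helper names, and semWait/semSignal are the primitives defined earlier) of the classical bounded-buffer producer/consumer solution using counting semaphores:

#define SIZE 3                       /* buffer capacity (illustrative) */
struct semaphore s;                  /* mutual exclusion on the buffer; initialized to 1    */
struct semaphore n;                  /* number of items in the buffer;  initialized to 0    */
struct semaphore e;                  /* number of empty slots;          initialized to SIZE */
int b[SIZE];
int in = 0, out = 0;

int produce(void);                   /* assumed helper: creates an item */
void consume(int w);                 /* assumed helper: uses an item */

void sem_producer(void)
{
    while (1) {
        int v = produce();
        semWait(&e);                 /* wait for an empty slot */
        semWait(&s);                 /* enter critical section */
        b[in] = v;
        in = (in + 1) % SIZE;
        semSignal(&s);               /* leave critical section */
        semSignal(&n);               /* signal: one more item available */
    }
}

void sem_consumer(void)
{
    while (1) {
        int w;
        semWait(&n);                 /* wait for an available item */
        semWait(&s);
        w = b[out];
        out = (out + 1) % SIZE;
        semSignal(&s);
        semSignal(&e);               /* signal: one more empty slot */
        consume(w);
    }
}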
Readers/Writers Problem:
1. Any number of readers may simultaneously read the shared file.
2. Only one writer at a time may write to the file.
3. If a writer is writing to a file, no reader may read it.
4. Once a single reader accesses a file, then all other readers can retain control to access the file.
Problem – allow multiple readers to read at the same time; only one single writer can access the shared data at the same time.
Shared Data
- Data set
- Semaphore mutex initialized to 1 (controls access to readcount)
- Semaphore wrt initialized to 1 (writer access)
- Integer readcount initialized to 0 (how many processes are reading object)
The structure of a writer process:
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
The structure of a reader process:
do {
wait (mutex) ;
readcount++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount-- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Dining Philosophers Problem: five philosophers share a table with five chopsticks, and each philosopher needs the two adjacent chopsticks in order to eat. Shared data: semaphore chopstick[5], each element initialized to 1. The structure of philosopher i:
do {
wait ( chopstick[i] );
wait ( chopstick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal ( chopstick[ (i + 1) % 5] );
// think
} while (TRUE);
• Message-passing systems come in many forms. The given primitives are a minimum set of operations needed for processes to engage in message passing:
send (destination, message)
receive (source, message)
The communication of a message between two processes implies some level of synchronization between the two: the receiver cannot receive a message until it has been sent by another process.
When a process issues a send primitive, there are two possibilities: either the sending process is blocked until the message is received, or it is not. Similarly, when a process issues a receive primitive, there are two possibilities:
• If a message has previously been sent, the message is received and execution continues.
• If there is no waiting message, then either the process is blocked until a message arrives, or the process continues to execute, abandoning the attempt to receive.
Blocking send and blocking receive – both sender and receiver are blocked until the message is delivered; this is known as a rendezvous and allows for tight synchronization between processes.
Nonblocking send and blocking receive – although the sender may continue on, the receiver is blocked until the requested message arrives.
• A process that must receive a message before it can do useful work needs to be blocked until such a message arrives.
• The sending process needs to be able to specify which process should receive the message.
• Direct addressing
• The receive primitive may require the process to designate ahead of time from which process a message is expected.
• Alternatively, the receive primitive may use the source parameter to return the identity of the sender when the receive operation has been performed.
• Indirect addressing
• Messages are sent to a shared data structure consisting of queues, commonly called mailboxes: one process sends a message to the mailbox and the other process picks up the message from the mailbox.
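As a concrete illustration of a mailbox (a sketch, not part of the original notes), a POSIX message queue can serve as the shared mailbox; the queue name "/mbox" and the message size are assumptions:

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

/* Sender: deposits a message into the mailbox (the named queue). */
void sender(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mbox = mq_open("/mbox", O_CREAT | O_WRONLY, 0600, &attr);
    mq_send(mbox, "hello", strlen("hello") + 1, 0);
    mq_close(mbox);
}

/* Receiver: blocks until a message is available in the mailbox. */
void receiver(void)
{
    char buf[64];
    mqd_t mbox = mq_open("/mbox", O_RDONLY);
    mq_receive(mbox, buf, sizeof buf, NULL);   /* blocking receive */
    mq_close(mbox);
}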
Deadlock:
Definition: A set of processes is in a deadlock state when every process in the set is waiting for
an event that can be caused only by another process in the set.
Under normal operation, a process utilizes a resource in the following sequence:
• Request resource
• Use resource
• Release resource
Four necessary and sufficient deadlock conditions (i.e., all must hold simultaneously):
– Mutual exclusion: at least one resource cannot be shared; processes claim exclusive control of the resources they require.
– Hold and wait: processes hold resources already allocated to them while waiting for additional resources.
– No preemption: a resource cannot be forcibly taken away from the process holding it until that process releases it.
– Circular wait: a circular chain of processes exists in which each holds one or more resources requested by the next process in the chain (implies hold and wait).
• Deadlock prevention – Construct system in such a way that deadlock cannot happen
• Deadlock avoidance – When deadlock could potentially occur, sidestep the deadlock situation
• Deadlock detection/recovery – When deadlock occurs, take steps to remove the deadlock
situation (e.g., roll back or terminate some process)
Deadlock Detection:
General technique for detecting deadlock: reduction of the resource allocation graph
– If all of a process’s resource requests can be granted, remove the arrows from and to that process (this is equivalent to the process completing and releasing its resources).
– Repeat.
– If the graph can be reduced by all of its processes, then there is no deadlock.
– If not, then the irreducible processes constitute the set of deadlocked processes in the graph.
Deadlock Prevention
• Deadlock prevention is achieved by ensuring that at least one of the four necessary conditions cannot occur:
– Mutual exclusion
– Hold and wait
– No preemption
– Circular wait
• Attacking mutual exclusion: perhaps a sharable resource can be substituted for a dedicated one in some cases (e.g., readers’ access to a file).
• Attacking hold and wait:
– Method 1: require each process to request all of its resources at once; essentially, resource requests are granted by the system on an “all or none” basis.
– Method 2: a process is allowed to request a resource only if it does not hold any resource.
– Problems: resources may be allocated long before they are actually used, and a process may wait a long time before all of its requests can be granted together.
• Attacking no preemption:
– Method 1: preempting the resources of a process (and then restarting the process or reallocating resources to it) if its request cannot be granted.
– Method 2: preempting the resources of a process (and then restarting the process or reallocating its resources) if it holds some resource that is being requested.
– Problems: difficult for resources whose state cannot be saved and restored (e.g., tape drives and printers).
• Attacking circular wait:
– Impose a total ordering on resource types; require that each process requests resources in increasing order of enumeration.
– Problems: for best efficiency, the resource numbering must correspond to the expected order of use of the resources; if use is out of order, the result is idle resources (waste). Portability of programs may also be compromised.
– Even so, this method has been used in a number of systems such as IBM MVS and VAX/VMS.
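A minimal sketch (illustrative only, not from the original notes) of the resource-ordering rule using two POSIX mutexes: every task locks the lower-numbered resource first, so no circular wait can arise.

#include <pthread.h>

/* Resources are numbered; every task must lock them in increasing order. */
static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

void *task(void *arg)
{
    pthread_mutex_lock(&r1);   /* always acquire #1 before #2 */
    pthread_mutex_lock(&r2);
    /* use both resources */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}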
Deadlock Avoidance
• Main idea
– Carefully allocate the resources such that the system will not run into deadlock
• More specifically – Require additional information about how resources will be requested. Use
this information to determine whether to grant an allocation request or to cause the requesting
process to wait.
• Costs: requires advance knowledge of the maximum resource needs of each process, and every allocation request incurs the overhead of an avoidance check.
• A system is in a safe state if the system can allocate resources to each process (up to its maximum) in some order and still let each of them complete successfully (hence avoiding deadlock).
– If Pi’s resource needs are not immediately available, then Pi can wait until all Pj have finished.
– When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
If a system is in an unsafe state ⇒ possibility of deadlock.
Avoidance Algorithms
The request can be granted only if converting the request edge to an assignment edge does not
result in the formation of a cycle in the resource-allocation graph.
Banker’s Algorithm
• Handles multiple instances of each resource type
• Less efficient than the resource-allocation graph algorithm
• Each process must a priori claim the maximum number of instances of each resource type it may need
• When a process requests a resource, it may have to wait
• When a process gets all its resources, it must return them in a finite amount of time
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work = Available; Finish[ i ] = false for i = 0, 1, …, n - 1.
2. Find an i such that both: (a) Finish[ i ] == false, and (b) Needi ≤ Work (Pi needs no more resources than are still available). If no such i exists, go to step 4.
3. Work = Work + Allocationi (release the resources of Pi back into the available pool); Finish[ i ] = true; go to step 2.
4. If Finish[ i ] == true for all i, then the system is in a safe state.
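A sketch of the safety check above in C (illustrative only; the dimensions N and M and the array layout are assumptions):

#include <stdbool.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

/* Returns true if the state described by available[], allocation[][] and
   need[][] is safe, following steps 1-4 above. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };

    for (int j = 0; j < M; j++)          /* step 1: Work = Available */
        work[j] = available[j];

    for (bool progress = true; progress; ) {
        progress = false;
        for (int i = 0; i < N; i++) {    /* step 2: find a process that can finish */
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                  /* step 3: Pi finishes and releases its resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)          /* step 4: safe iff every process could finish */
        if (!finish[i])
            return false;
    return true;
}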
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti[ j ] == k, then process Pi wants k instances of resource type Rj. When Pi requests resources:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting state is safe, the resources are allocated to Pi; if it is unsafe, Pi must wait and the old resource-allocation state is restored.
Worked example (allocation tables not reproduced here): 5 processes, P0 through P4.
Deadlock Detection
√ Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
√ An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number of nodes in the graph.
Available: A vector of length m indicates the number of available resources of each type
Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process
Request: An n x m matrix indicates the current request of each process. If Request[ i, j ] = k, then
process Pi is requesting k more instances of resource type Rj
Detection Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize: (a) Work = Available; (b) for i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[ i ] = false; otherwise, Finish[ i ] = true.
2. Find an index i such that both: (a) Finish[ i ] == false, and (b) Requesti ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi (optimistically assuming Pi will eventually return its resources); Finish[ i ] = true; go to step 2.
4. If Finish[ i ] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[ i ] == false, then Pi is deadlocked.
Deadlock Recovery
i) Process Termination: abort one or more of the deadlocked processes. Factors in choosing which process to terminate include:
1. Priority of the process
2. How long the process has computed, and how much longer it needs before completion