3 Rtos

The document discusses concurrency in operating systems, focusing on the management of processes and threads in various contexts such as multiprogramming, multiprocessing, and distributed processing. It outlines key concepts like atomic operations, critical sections, deadlock, and race conditions, emphasizing the importance of mutual exclusion to prevent inconsistent data states. Additionally, it highlights the challenges and control problems that arise from resource competition among concurrent processes and introduces semaphores as a synchronization tool for managing these issues.

S6 ECE Real Time Operating Systems- EC 366


Concurrency:

The central themes of operating system design are all concerned with the management of
processes and threads:

• Multiprogramming: The management of multiple processes within a uniprocessor system.

• Multiprocessing: The management of multiple processes within a multiprocessor.

• Distributed processing: The management of multiple processes executing on multiple, distributed computer systems. The recent proliferation of clusters is a prime example of this type of system.

Fundamental to all of these areas, and fundamental to OS design, is concurrency. Concurrency encompasses a host of design issues, including communication among processes, sharing of and competing for resources (such as memory, files, and I/O access), synchronization of the activities of multiple processes, and allocation of processor time to processes. We shall see that these issues arise not just in multiprocessing and distributed processing environments but even in single-processor multiprogramming systems.

Concurrency arises in three different contexts:

• Multiple applications: Multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications.

• Structured applications: As an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes.

• Operating system structure: The same structuring advantages apply to systems programs, and we have seen that operating systems are themselves often implemented as a set of processes or threads.

Concurrent processes may share data to support communication and information exchange. Threads in the same process can share a global address space. Concurrent sharing may cause problems, for example lost updates.

BCCML

Concurrency Key Terms

Principles of Concurrency

atomic operation A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation. The sequence of instructions is guaranteed to execute as a group, or not execute at all, having no visible effect on system state. Atomicity guarantees isolation from concurrent processes.

critical section A section of code within a process that requires access to shared resources and
that must not be executed while another process is in a corresponding section of
code.

deadlock A situation in which two or more processes are unable to proceed because each
is waiting for one of the others to do something.

livelock A situation in which two or more processes continuously change their states in
response to changes in the other process(es) without doing any useful work.

mutual exclusion The requirement that when one process is in a critical section that accesses
shared resources, no other process may be in a critical section that accesses any
of those shared resources.

race condition A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.

starvation A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.

At first glance, it may seem that interleaving and overlapping represent fundamentally different
modes of execution and present different problems. In fact, both techniques can be viewed as
examples of concurrent processing, and both present the same problems. In the case of a
uniprocessor, the problems stem from a basic characteristic of multiprogramming systems: The
relative speed of execution of processes cannot be predicted. It depends on the activities of
other processes, the way in which the OS handles interrupts, and the scheduling policies of the
OS.

The following difficulties arise:

1. The sharing of global resources is fraught with danger. For example, if two processes
both make use of the same global variable and both perform reads and writes on that variable,
then the order in which the various reads and writes are executed is critical.

2. It is difficult for the OS to manage the allocation of resources optimally. For example,
process A may request use of, and be granted control of, a particular I/O channel and then be
suspended before using that channel. It may be undesirable for the OS simply to lock the
channel and prevent its use by other processes; indeed this may lead to a deadlock condition.


3. It becomes very difficult to locate a programming error because results are typically not
deterministic and reproducible.

All of the foregoing difficulties present themselves in a multiprocessor system as well, because here too the relative speed of execution of processes is unpredictable. A multiprocessor system must also deal with problems arising from the simultaneous execution of multiple processes. Fundamentally, however, the problems are the same as those for uniprocessor systems. This should become clear as the discussion proceeds.

A race condition occurs when multiple processes or threads read and write data items so that
the final result depends on the order of execution of instructions in the multiple processes. Let
us consider two simple examples.

As a first example, suppose that two processes, P1 and P2, share the global variable a . At some
point in its execution, P1 updates a to the value 1, and at some point in its execution, P2 updates
a to the value 2. Thus, the two tasks are in a race to write variable a . In this example, the “loser”
of the race (the process that updates last) determines the final value of a.

For our second example, consider two processes, P3 and P4, that share global variables b and c,
with initial values b=1 and c=2. At some point in its execution, P3 executes the assignment
b=b+c, and at some point in its execution, P4 executes the assignment c=b+c. Note that the two
processes update different variables. However, the final values of the two variables depend on
the order in which the two processes execute these two assignments. If P3 executes its
assignment statement first, then the final values are b=3 and c=5. If P4 executes its assignment
statement first, then the final values are b=4 and c=3.

What design and management issues are raised by the existence of concurrency?

We can list the following concerns:

1. The OS must be able to keep track of the various processes. This is done with the use of
process control blocks.

2. The OS must allocate and de-allocate various resources for each active process.

At times, multiple processes want access to the same resource.

These resources include:

- Processor time: this is the scheduling function
- Memory: most operating systems use a virtual memory scheme
- Files
- I/O devices

3. The OS must protect the data and physical resources of each process against unintended
interference by other processes. This involves techniques that relate to memory, files, and I/O
devices.

4. The functioning of a process, and the output it produces, must be independent of the
speed at which its execution is carried out relative to the speed of other concurrent processes.


We can classify the ways in which processes interact on the basis of the degree to which they are aware of each other’s existence. The list below gives three possible degrees of awareness plus the consequences of each:

• Processes unaware of each other:


These are independent processes that are not intended to work together. The best example of
this situation is the multiprogramming of multiple independent processes. These can either be
batch jobs or interactive sessions or a mixture. Although the processes are not working
together, the OS needs to be concerned about competition for resources. For example, two
independent applications may both want to access the same disk or file or printer. The OS must
regulate these accesses.

• Processes indirectly aware of each other:

These are processes that are not necessarily aware of each other by their respective process IDs
but that share access to some object, such as an I/O buffer. Such processes exhibit cooperation
in sharing the common object.

• Processes directly aware of each other:

These are processes that are able to communicate with each other by process ID and that are designed to work jointly on some activity. Again, such processes exhibit cooperation. In practice, conditions will not always be this clear-cut; rather, several processes may exhibit aspects of both competition and cooperation. Nevertheless, it is productive to examine each of the three items in the preceding list separately and determine their implications for the OS.

Resource Competition

Concurrent processes come into conflict with each other when they are competing for the use of
the same resource. In its pure form, we can describe the situation as follows. Two or more
processes need to access a resource during the course of their execution. Each process is
unaware of the existence of other processes, and each is to be unaffected by the execution of the
other processes. It follows from this that each process should leave the state of any resource
that it uses unaffected. Examples of resources include I/O devices, memory, processor time, and
the clock.

There is no exchange of information between the competing processes. However, the execution
of one process may affect the behavior of competing processes. In particular, if two processes
both wish access to a single resource, then one process will be allocated that resource by the OS,
and the other will have to wait. Therefore, the process that is denied access will be slowed
down. In an extreme case, the blocked process may never get access to the resource and hence
will never terminate successfully.

In the case of competing processes three control problems must be faced. First is the need for
mutual exclusion . Suppose two or more processes require access to a single non-sharable
resource, such as a printer. During the course of execution, each process will be sending
commands to the I/O device, receiving status information, sending data, and/or receiving data. We will refer to such a resource as a
critical resource, and the portion of the program that uses it as a critical section of the program. It is important that only one program at a time be allowed in its critical section. We cannot simply rely on the OS to understand and enforce this restriction, because the detailed requirements may not be obvious. In the case of the printer, for example, we want any individual process to have control of the printer while it prints an entire file. Otherwise, lines from competing processes will be interleaved.

The enforcement of mutual exclusion creates two additional control problems. One is that of
deadlock . For example, consider two processes, P1 and P2, and two resources, R1 and R2.
Suppose that each process needs access to both resources to perform part of its function. Then it
is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is
waiting for one of the two resources. Neither will release the resource that it already owns until
it has acquired the other resource and performed the function requiring both resources. The
two processes are deadlocked.

A final control problem is starvation . Suppose that three processes (P1, P2, P3) each require
periodic access to resource R. Consider the situation in which P1 is in possession of the
resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical
section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3
and that P1 again requires access before P3 completes its critical section. If the OS grants access
to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2
may indefinitely be denied access to the resource, even though there is no deadlock situation.

Need for Mutual Exclusion

 If there is no controlled access to shared data, processes or threads may get an inconsistent view of this data.

 The result of concurrent execution will depend on the order in which instructions are interleaved.

 Errors are timing dependent and usually not reproducible.

Example:

static char a;

void echo()
{
    cin >> a;
    cout << a;
}

Assume P1 and P2 are executing this code and share the variable a. Processes can be preempted at any time. Assume P1 is preempted after the input statement, and P2 then executes entirely. The character echoed by P1 will be the one read by P2!


This is an example of a race condition!

Individual processes (threads) execute sequentially in isolation, but concurrency causes them to
interact. We need to prevent concurrent execution by processes when they are changing the
same data. We need to enforce mutual exclusion.
When a process executes code that manipulates shared data (or resources), we say that the
process is in its critical section (CS) for that shared data. We must enforce mutual exclusion on
the execution of critical sections. Only one process at a time can be in its CS (for that shared data
or resource).

Enforcing mutual exclusion guarantees that related CS’s will be executed serially instead of
concurrently. The critical section problem is how to provide mechanisms to enforce mutual
exclusion so the actions of concurrent processes won’t depend on the order in which their
instructions are interleaved.

Processes/threads must request permission to enter a CS, and signal when they leave the CS.

Program structure:

 entry section: requests entry to CS

 exit section: notifies that CS is completed

 Remainder section (RS): code that does not involve shared data and resources.

The CS problem exists on multiprocessors as well as on uniprocessors.

MUTUAL EXCLUSION- Hardware Support

i) Disabling Interrupts

In a uniprocessor system, concurrent processes cannot have overlapped execution; they can
only be interleaved. To guarantee mutual exclusion, it is sufficient to prevent a process from
being interrupted.

– A process runs until it invokes an operating system service or until it is interrupted

– Disabling interrupts guarantees mutual exclusion

– Will not work in a multiprocessor architecture

Pseudo code:

while (true)
{
/* disable interrupts */;
/* critical section */;
/* enable interrupts */;
/* remainder */;
}


Because the critical section cannot be interrupted, mutual exclusion is guaranteed. The
price of this approach, however, is high. The efficiency of execution could be noticeably
degraded because the processor is limited in its ability to interleave processes.

ii) Special Machine Instructions


• Compare&Swap Instruction, also called a “compare and exchange instruction”. The instruction checks a memory location (*word) against a test value (testval). If they are the same, the location is replaced with newval.

Code:

int compare_and_swap(int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}

• Exchange Instruction

void exchange(int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}

SEMAPHORE

A semaphore may be initialized to a nonnegative integer value.

Only three operations may be performed on a semaphore, all of which are atomic:

– initialize

– decrement (semWait)

– increment (semSignal)

The semWait operation decrements the semaphore value.

If the value becomes negative, then the process executing the semWait is blocked. Otherwise,
the process continues execution.

The semSignal operation increments the semaphore value.

If the resulting value is less than or equal to zero, then a process blocked by a semWait
operation, if any, is unblocked.


Dijkstra proposed a significant technique for managing concurrent processes with complex mutual exclusion problems. He introduced a new synchronization tool called the semaphore.

Semaphores are of two types −

1. Binary semaphore

2. Counting semaphore

Binary semaphore can take the value 0 & 1 only. Counting semaphore can take nonnegative
integer values.

Two standard operations, wait and signal, are defined on the semaphore. Entry to the critical section is controlled by the wait operation, and exit from the critical region is taken care of by the signal operation. The wait and signal operations are also called P and V operations. The manipulation of the semaphore (S) takes place as follows:

1. The wait command P(S) decrements the semaphore value by 1. If the resulting value becomes negative, then the P command is delayed until the condition is satisfied.

2. The V(S), i.e. signal, operation increments the semaphore value by 1.

Mutual exclusion on the semaphore is enforced within P(S) and V(S). If a number of processes attempt P(S) simultaneously, only one process will be allowed to proceed and the other processes will be waiting. These operations are defined as under −

P(S) or wait(S):

    If S > 0 then
        Set S to S-1
    Else
        Block the calling process (i.e. wait on S)

V(S) or signal(S):

    If any processes are waiting on S
        Start one of these processes
    Else
        Set S to S+1


The semaphore operations are implemented as operating system services, and so wait and signal are atomic in nature; i.e., once started, execution of these operations cannot be interrupted.

Thus the semaphore is a simple yet powerful mechanism to ensure mutual exclusion among concurrent processes.
Binary Semaphore:

A more restrictive semaphore which may only have the value of 0 or 1.

For both counting semaphores and binary semaphores, a queue is used to hold processes
waiting on the semaphore.

The question arises of the order in which processes are removed from such a queue.

The fairest removal policy is first-in-first-out (FIFO):

• The process that has been blocked the longest is released from the queue first; a
semaphore whose definition includes this policy is called a strong semaphore.

• A semaphore that does not specify the order in which processes are removed
from the queue is a weak semaphore.

Example of Semaphore:

[Figure illustrating semaphore operations omitted.]

Mutex:

A similar concept related to the binary semaphore is the mutex. A key difference between the two is that the process that locks the mutex (sets the value to zero) must be the one to unlock it (sets the value to 1). In contrast, it is possible for one process to lock a binary semaphore and for another to unlock it.

The mutex is similar in principle to the binary semaphore with one significant difference: the principle of ownership. Ownership is the simple concept that when a task locks (acquires) a mutex, only it can unlock (release) it. If a task tries to unlock a mutex it hasn’t locked (and thus doesn’t own), then an error condition is encountered and, most importantly, the mutex is not unlocked. If the mutual exclusion object doesn’t have ownership then, regardless of what it is called, it is not a mutex.

Classical Problems of Synchronization

Producer/Consumer Problem:

The general statement is this:

• There are one or more producers generating some type of data (records,
characters) and placing these in a buffer.

• There is a single consumer that is taking items out of the buffer one at a time.


• The system is to be constrained to prevent the overlap of buffer operations. That is, only one agent (producer or consumer) may access the buffer at any one time.

The problem is to make sure that the producer won’t try to add data into the buffer if it’s full, and that the consumer won’t try to remove data from an empty buffer. We will look at a number of solutions to this problem to illustrate both the power and the pitfalls of semaphores.

Producer:

while (true) {
    /* produce item v */
    b[in] = v;
    in++;
}

Consumer:

while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */
}

The producer can generate items and store them in the buffer at its own pace.


Each time, an index (in) into the buffer is incremented.

The consumer proceeds in a similar fashion but must make sure that it does not attempt to read
from an empty buffer.

/* program: Producer/Consumer */
#include<stdio.h>
#include<stdlib.h>
int mutex=1,full=0,empty=3,x=0;
int main()
{
int n;
void producer();
void consumer();
int wait(int);
int signal(int);
printf("\n1.Producer\n2.Consumer\n3.Exit");
while(1)
{
printf("\nEnter your choice:");
scanf("%d",&n);
switch(n)
{
case 1:
if((mutex==1)&&(empty!=0))
producer();
else
printf("Buffer is full!!");
break;
case 2:
if((mutex==1)&&(full!=0))
consumer();
else
printf("Buffer is empty!!");
break;
case 3:
exit(0);
break;
}
}

return 0;
}

int wait(int s)
{
return (--s);
}

int signal(int s)
{
return(++s);
}


void producer()
{
mutex=wait(mutex);
full=signal(full);
empty=wait(empty);
x++;
printf("\nProducer produces the item %d",x);
mutex=signal(mutex);
}

void consumer()
{
mutex=wait(mutex);
full=wait(full);
empty=signal(empty);
printf("\nConsumer consumes item %d",x);
x--;
mutex=signal(mutex);
}

ii) Readers/Writers Problem

1. Any number of readers may simultaneously read a file.

2. Only one writer at a time may write to the file.

3. If a writer is writing to a file, no reader may read it.

4. Once a single reader is accessing a file, other readers may share access to the file at the same time.

The problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any one time.

Shared Data

- Data set
- Semaphore mutex initialized to 1 (controls access to readcount)
- Semaphore wrt initialized to 1 (writer access)
- Integer readcount initialized to 0 (how many processes are reading object)

- The structure of a writer process

do {

wait (wrt) ;

// writing is performed

signal (wrt) ;

} while (TRUE);


The structure of a reader process

do {

wait (mutex) ;

readcount ++ ;

if (readcount == 1)

wait (wrt) ;

signal (mutex) ;

// reading is performed

wait (mutex) ;

readcount - - ;

if (readcount == 0)

signal (wrt) ;

signal (mutex) ;

} while (TRUE);

Dining Philosopher Problem


The structure of Philosopher i:

do {

wait ( chopstick[i] );

wait ( chopstick[ (i + 1) % 5] );

// eat

signal ( chopstick[i] );

signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);

Message Passing techniques:

• Message-passing systems come in many forms. The given primitives are a minimum set of operations needed for processes to engage in message passing.

• The pair of primitives is:

send (destination, message)
receive (source, message)

• A process sends information in the form of a message to another process designated by a destination.

• A process receives information by executing the receive primitive, indicating the source and the message.

The communication of a message between two processes implies some level of synchronization
between the two: the receiver cannot receive a message until it has been sent by another
process.

When a send primitive is executed in a process, there are two possibilities:

• Either the sending process is blocked until the message is received, or it is not.

Similarly, when a process issues a receive primitive, there are two possibilities:

• If a message has previously been sent, the message is received and execution
continues.

• If there is no waiting message, then either

a) the process is blocked until a message arrives, or
b) the process continues to execute, abandoning the attempt to receive.


Blocking send and blocking receive: Both sender and receiver are blocked until the message is delivered, known as a rendezvous. This allows for tight synchronization between processes.

Nonblocking send, blocking receive:

Although the sender may continue on, the receiver is blocked until the requested message arrives.

This is probably the most useful combination.

• It allows a process to send one or more messages to a variety of destinations as quickly as possible.

• A process that must receive a message before it can do useful work needs to be
blocked until such a message arrives.

• Nonblocking send, nonblocking receive:

• Neither party is required to wait.

• The sending process needs to be able to specify which process should receive the message

• Direct addressing

• Send primitive includes a specific identifier of the destination process

• Receive primitive could know ahead of time which process a message is expected from

• Receive primitive could use the source parameter to return a value when the receive operation has been performed

• Indirect Addressing

• Messages are sent to a shared data structure consisting of queues

• Queues are called mailboxes

• One process sends a message to the mailbox and the other process picks up the message
from the mailbox

Deadlock:

Definition: A set of processes is in a deadlock state when every process in the set is waiting for
an event that can be caused only by another process in the set.

• Resource model of process and system


– A system contains a finite number of resources to be distributed among a number of
competing processes
– Resources are partitioned into several types. Each instance of a type is identical to
other instances (e.g., CPU, CPU cycles, memory space, files, I/O devices)
– Processes use resources only in the following sequence:
• Request (if the resource is in use then the process must wait)
• Use resource
• Release resource

Conditions for Deadlock

• Four necessary and sufficient deadlock conditions (i.e., all must hold simultaneously):

– Mutual exclusion: At least one resource cannot be shared. Processes claim exclusive control of
resources

– Hold and wait: Processes hold resources already allocated to them while waiting for additional
resources

– No preemption: Process keeps a resource granted to it until it voluntarily releases the resource

– Circular wait: Circular chain of processes exists in which each holds one or more resources
requested by the next process in the chain (implies hold and wait)

Strategies for dealing with Deadlock

• Deadlock prevention – Construct system in such a way that deadlock cannot happen

• Deadlock avoidance – When deadlock could potentially occur, sidestep the deadlock situation

• Deadlock detection/recovery – When deadlock occurs, take steps to remove the deadlock
situation (e.g., roll back or terminate some process)

• Ignore the problem – Don’t worry, be happy!

Deadlock Detection:

General technique for detecting deadlock: reduction of the resource allocation graph

– if all of a process’ resource requests can be granted, remove the arrows from and to that
process (this is equivalent to the process completing and releasing its resources).

– repeat the previous step until no further reduction is possible

– if the graph can be reduced by all of its processes then there is no deadlock

– if not, then the irreducible processes constitute the set of deadlocked processes in the graph

Reduction of the Resource Allocation Graph

[Figure illustrating reduction of the resource allocation graph omitted.]

Deadlock Prevention

• Recall the four necessary and sufficient conditions for deadlock

– Mutual exclusion

– Hold and wait

– No preemption

– Circular wait

• Deadlock prevention is achieved by ensuring that at least one condition cannot occur

Deadlock Prevention by denying Mutual Exclusion

• Perhaps can substitute a sharable resource for a dedicated one in some cases (e.g., readers’
access to a file)

• Generally not a useful solution because we need to provide dedicated resources.

Deadlock Prevention by denying Hold and Wait

• Method 1: allocating all the needed resources when starting a process

• Method 2: a process is allowed to request a resource only if it does not hold any resource

• Essentially resource requests are granted by the system on an “all or none” basis

• Problems

– resources may be left idle for long periods of time

– starvation possible, especially if a process is requesting several popular resources

Deadlock Prevention by denying No Preemption

• Method 1: preempting the resources of a process (and then restarting the process or
reallocating resources to it) if its request cannot be granted


• Method 2: preempting the resources of a process (and then restarting the process or
reallocating its resources) if it holds some resource that is being requested

• Problems

– State of the resource must be saved and restored

– Hence easy for resources whose state can be saved and restored (e.g., CPU registers, memory space)

– Difficult for resources whose state cannot be saved and restored (e.g., tapes and printers)

Deadlock Prevention by denying Circular Wait

• Impose total ordering on resource types. Require that process requests resources in an
increasing order of enumeration.

• Problems

– for best efficiency, resource numbers must correspond to expected order of use of resources.
If use is out of order, the result is idle resources (waste)

– Large effect on system programs

• change in resource numbers may require change in program

• programmer has to be aware of ordering in structuring program

• portability of program compromised

• Even so, this method has been used in a number of systems such as IBM MVS and VAX/VMS

Deadlock Avoidance

• Main idea

– Carefully allocate the resources such that the system will not run into deadlock

• More specifically – Require additional information about how resources will be requested. Use
this information to determine whether to grant an allocation request or to cause the requesting
process to wait.

The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition

• Costs

– Run-time overhead of decision making

– Extra information required from applications


Deadlock Avoidance Safe and Unsafe states

• A system is in a safe state if the system can allocate resources to each process (up to its
maximum) in some order and let each of them complete successfully (hence, avoiding a
deadlock).
-If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished

-When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and
terminate

-When Pi terminates, Pi+1 can obtain its needed resources, and so on

If a system is in safe state ⇒ no deadlocks

If a system is in unsafe state ⇒ possibility of deadlock

Avoidance ⇒ ensure that a system will never enter an unsafe state

Avoidance Algorithms

• Single instance of a resource type

- Use a resource-allocation graph

• Multiple instances of a resource type


• Use the banker's algorithm

Resource-Allocation Graph Scheme

• Claim edge Pi → Rj indicates that process Pi may request resource Rj ;
represented by a dashed line
• Claim edge is converted to request edge when a process requests a resource
• Request edge is converted to an assignment edge when the resource is allocated
to the process
• When a resource is released by a process, assignment edge is reconverted to a
claim edge
• Resources must be claimed a priori in the system, i.e., from the start, claim edges
must be entered.

Suppose that process Pi requests a resource Rj.


The request can be granted only if converting the request edge to an assignment edge does not
result in the formation of a cycle in the resource-allocation graph.
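The grant test above can be sketched as follows, assuming the graph is stored as a plain adjacency mapping in which claim, request, and assignment edges all appear as directed edges (claim and request edges point from process to resource, assignment edges from resource to process).

```python
def has_cycle(edges, nodes):
    """Depth-first search for a cycle in a directed graph.
    edges maps each node to a set of successor nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True         # back edge found: cycle
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)

def safe_to_grant(edges, nodes, proc, res):
    """Would converting the request edge proc -> res into the assignment
    edge res -> proc create a cycle?  Grant the request only if not."""
    trial = {n: set(s) for n, s in edges.items()}
    trial.setdefault(proc, set()).discard(res)   # request edge disappears...
    trial.setdefault(res, set()).add(proc)       # ...and becomes an assignment edge
    return not has_cycle(trial, nodes)
```

For example, with R1 assigned to P1, P1 claiming R2, and P2 claiming both R1 and R2, granting R2 to P2 would close the cycle P2 → R1 → P1 → R2 → P2, so the request must wait.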


Banker’s Algorithm

• Multiple instances
• Less efficient than the resource-allocation graph algorithm
• Each process must claim a priori the maximum number of instances of each resource type it may need
• When a process requests a resource, it may have to wait

• When a process gets all its resources, it must return them in a finite amount of time

Data Structures for the Banker’s Algorithm

Let n = number of processes, and m = number of resources types

√ Available: Vector of length m. If Available[ j ] = k, there are k instances of resource type
Rj available
√ Max: n x m matrix. If Max[ i, j ] = k, then process Pi may request at most k instances of
resource type Rj
√ Allocation: n x m matrix. If Allocation[ i, j ] = k then Pi is currently allocated k instances
of Rj
√ Need: n x m matrix. If Need[ i, j ] = k then Pi may need k more instances of Rj to complete
its task. Need[ i, j ] = Max[ i, j ] – Allocation[ i, j ]

Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work =
Available; Finish[ i ] = false for i = 0, 1, …, n - 1
2. Find an i such that both: (a) Finish[ i ] == false (b) Needi ≤ Work (Pi needs no more
resources than are still available). If no such i exists, go to step 4
3. Work = Work + Allocationi (release the resources of Pi back into Available); Finish[ i ] = true;
go to step 2


4. If Finish[ i ] == true for all i , then the system is in a safe state
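The four steps above translate almost directly into code. The sketch below is a minimal implementation assuming the vectors and matrices are plain Python lists; it also records a safe sequence when one exists.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: repeatedly find an unfinished process whose need
    fits within Work, pretend it finishes, and release its allocation.
    Returns (safe?, safe sequence of process indices)."""
    n, m = len(allocation), len(available)
    work = list(available)          # step 1: Work = Available
    finish = [False] * n            # step 1: Finish[i] = false
    order = []                      # a safe sequence, if the state is safe
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):          # step 2: find a runnable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # step 3: Pi finishes, releasing its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order       # step 4: safe iff every Finish[i] is true
```

For instance, `is_safe([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]])` reports an unsafe state: each process needs exactly what the other holds.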

Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[ j ] == k, then process Pi wants k instances of
resource type Rj. When Pi requests resources:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:

Available = Available – Requesti

Allocationi = Allocationi + Requesti

Needi = Needi – Requesti

If safe ⇒ the resources are allocated to Pi

If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
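The resource-request algorithm can be sketched as follows, a minimal version assuming list-based state; the safety check is the one from the Safety Algorithm section.

```python
def is_safe(available, allocation, need):
    """Safety check: repeatedly finish any process whose need fits in Work."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(need):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Steps 1-3 of the resource-request algorithm for process Pi.
    Mutates the state in place; returns True if the request is granted,
    False if Pi must wait. On False the original state is restored."""
    if any(r > nd for r, nd in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")  # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                        # step 2: Pi must wait
    for j, r in enumerate(request):         # step 3: pretend to allocate
        available[j] -= r
        allocation[i][j] += r
        need[i][j] -= r
    if is_safe(available, allocation, need):
        return True                         # safe: keep the allocation
    for j, r in enumerate(request):         # unsafe: restore the old state
        available[j] += r
        allocation[i][j] -= r
        need[i][j] += r
    return False
```

Granting the request only when the pretend state passes the safety test is exactly what keeps the system out of unsafe states.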

Example of Banker’s Algorithm

5 processes P0 through P4;

3 resource types: A (10 instances), B (5 instances), and C (7 instances)


Example: P1 Request (1,0,2)
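The snapshot tables for this example did not survive into these notes; the matrices below are assumed values consistent with the stated totals (A = 10, B = 5, C = 7), used only to illustrate the check of P1's request (1, 0, 2): the request is within P1's claim and within Available, so the state is updated tentatively and the safety test decides.

```python
def is_safe(available, allocation, need):
    """Safety test: repeatedly finish any process whose need fits in Work."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(need):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

# Assumed snapshot (Need = Max - Allocation); totals are A=10, B=5, C=7
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

request, i = [1, 0, 2], 1                                   # P1 requests (1, 0, 2)
assert all(r <= nd for r, nd in zip(request, need[i]))      # step 1: within claim
assert all(r <= a for r, a in zip(request, available))      # step 2: available now

# Step 3: pretend to allocate, then run the safety test
available = [a - r for a, r in zip(available, request)]
allocation[i] = [a + r for a, r in zip(allocation[i], request)]
need[i] = [nd - r for nd, r in zip(need[i], request)]

print("grant" if is_safe(available, allocation, need) else "wait")   # → grant
```

With these assumed values the pretend state is still safe (e.g., via the sequence P1, P3, P4, P0, P2), so the request is granted.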


Deadlock Detection

-Single Instance of Each Resource Type

√ Maintain wait-for graph


√ Nodes are processes (collapse resource nodes in the resource-allocation graph)
√ Pi → Pj if Pi is waiting for Pj (to release a requested resource)


√ Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle,
there exists a deadlock
√ An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is
the number of nodes in the graph
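A minimal sketch of wait-for-graph detection, assuming the graph is stored as a mapping from each process to the set of processes it waits on; a depth-first search reports any cycle, and the processes on the returned cycle are deadlocked.

```python
def find_cycle(wait_for):
    """Depth-first search for a cycle in the wait-for graph.
    wait_for[p] is the set of processes p is waiting on.
    Returns a list of processes forming a cycle, or None."""
    GRAY, BLACK = 1, 2
    color, stack = {}, []

    def visit(p):
        color[p] = GRAY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if color.get(q) == GRAY:            # back edge: deadlock cycle
                return stack[stack.index(q):]
            if q not in color:
                cycle = visit(q)
                if cycle:
                    return cycle
        color[p] = BLACK
        stack.pop()
        return None

    for p in list(wait_for):
        if p not in color:
            cycle = visit(p)
            if cycle:
                return cycle
    return None
```

In practice the operating system would invoke `find_cycle` periodically, as the text describes, rather than on every request.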


Several Instances of a Resource Type

Available: A vector of length m indicates the number of available resources of each type

Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process

Request: An n x m matrix indicates the current request of each process. If Request[ i, j ] = k, then
process Pi is requesting k more instances of resource type Rj

Detection Algorithm:

1. Let Work and Finish be vectors of length m and n, respectively Initialize: (a) Work =
Available (b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[ i ] = false; otherwise, Finish[
i ] = true
2. Find an index i such that both: (a) Finish[ i ] == false (b) Requesti ≤ Work If no such i
exists, go to step 4
3. Work = Work + Allocationi (assuming Pi will soon return its resources); Finish[ i ] = true;
go to step 2
4. If Finish[ i ] == false, for some i, 1 ≤ i ≤ n, then the system is in deadlock state. Moreover,
if Finish[ i ] == false, then Pi is deadlocked
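The detection algorithm above can be sketched as a direct translation, assuming list-based matrices; it returns the indices of the deadlocked processes (an empty list means no deadlock).

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances.
    Returns the list of deadlocked process indices."""
    n, m = len(allocation), len(available)
    work = list(available)                  # step 1(a): Work = Available
    # step 1(b): a process holding nothing cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):                  # step 2: Request_i <= Work?
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # step 3: assume Pi completes and
                    work[j] += allocation[i][j]   # returns its resources
                finish[i] = True
                progressed = True
    # step 4: every Pi with Finish[i] == false is deadlocked
    return [i for i in range(n) if not finish[i]]
```

Unlike the safety algorithm, this one is optimistic: it assumes any process whose current request can be met will eventually finish, which is why it uses Request rather than Need.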

Example of Detection Algorithm

Five processes P0 through P4 ;

three resource types A (7 instances), B (2 instances), and C (6 instances)



Deadlock Recovery

Once the deadlock is detected, some strategy is needed for recovery.

i) Process Termination

-Abort all deadlocked processes


-Ask the operator to resolve manually the deadlock
-Abort one process at a time until the deadlock cycle is eliminated

In which order should we choose to abort?

1. Priority of the process

2. How long the process has computed, and how much longer until completion


3. Resources the process has used

4. Resources process needs to complete

5. How many processes will need to be terminated?

6. Is the process interactive or batch?

ii) Resource Preemption


1. Selecting a victim – which resources and which processes are to be preempted? One must
determine the order of preemption to minimize cost
2. Rollback – return to some safe state and restart the process from that state; this requires keeping
data on the states of running processes
3. Starvation – the same process may always be picked as the victim; include the number of rollbacks
in the cost factor


