Operating System Own Notes

1. Critical section problem

Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of code, called
a critical section, in which the process may be changing common variables, updating a table, writing a
file, and so on. The important feature of the system is that, when one process is executing in its critical
section, no other process is allowed to execute in its critical section. That is, no two processes are
executing in their critical sections at the same time. The critical-section problem is to design a protocol
that the processes can use to cooperate. Each process must request permission to enter its critical
section. The section of code implementing this request is the entry section. The critical section may be
followed by an exit section. The remaining code is the remainder section. The general structure of a
typical process Pi is shown below:

do {

    entry section

        critical section

    exit section

        remainder section

} while (true);
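As an illustration only (not part of the original notes), the entry and exit sections can be realized with a lock. The sketch below, in C, uses a POSIX mutex; the shared variable shared_counter and the loop bound are made up for the example.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;        /* hypothetical shared data */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section */
        shared_counter++;              /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
        /* remainder section: code that does not touch shared data */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* always 200000 */
    return 0;
}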

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.

2. Progress. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder sections can
participate in deciding which will enter its critical section next, and this selection cannot be postponed
indefinitely.

3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.

Two general approaches are used to handle critical sections in operating systems: preemptive kernels
and nonpreemptive kernels.

A preemptive kernel allows a process to be preempted while it is running in kernel mode. A
nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode
process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as
only one process is active in the kernel at a time. We cannot say the same about preemptive kernels, so
they must be carefully designed to ensure that shared kernel data are free from race conditions.
Preemptive kernels are especially difficult to design for SMP architectures, since in these environments
it is possible for two kernel-mode processes to run simultaneously on different processors.

2. Peterson's solution

Peterson's solution is a classic software-based solution to the critical-section problem. Because of
the way modern computer architectures perform basic machine-language instructions, such as load and
store, there are no guarantees that Peterson's solution will work correctly on such architectures.
However, we present the solution because it provides a good algorithmic description of solving the
critical-section problem and illustrates some of the complexities involved in designing software that
addresses the requirements of mutual exclusion, progress, and bounded waiting.

do {

    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

        critical section

    flag[i] = false;

        remainder section

} while (true);

The structure of process Pi in Peterson's solution.

Peterson's solution is restricted to two processes that alternate execution between their critical sections
and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi,
we use Pj to denote the other process; that is, j equals 1 − i. Peterson's solution requires the two
processes to share two data items:

int turn;

boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then
process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is
ready to enter its critical section. For example, if flag[i] is true, this value indicates that Pi is ready to
enter its critical section. To enter the critical section, process Pi first sets flag[i] to true and then
sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section,
it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly
the same time. Only one of these assignments will last; the other will occur but will be overwritten
immediately. The eventual value of turn determines which of the two processes is allowed to enter its
critical section first. We now prove that this solution is correct. We need to show that:

1. Mutual exclusion is preserved.

2. The progress requirement is satisfied.

3. The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or
turn == i. Also note that, if both processes can be executing in their critical sections at the same time,
then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have
successfully executed their while statements at about the same time, since the value of turn can be
either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have successfully executed
the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However,
at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical
section; as a result, mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical
section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is
the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can
enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then
either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will
enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing
Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not
change the value of the variable turn while executing the while statement, Pi will enter the critical
section (progress) after at most one entry by Pj (bounded waiting).
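To make the algorithm concrete, here is a minimal runnable sketch in C (not part of the original notes). C11 atomics are used for flag and turn because, as noted above, plain loads and stores give no ordering guarantees on modern architectures; the shared counter and iteration count are made up for the example.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];            /* flag[i]: Pi is ready to enter   */
static atomic_int  turn;               /* whose turn it is to enter       */
static long counter = 0;               /* shared data guarded by the lock */

static void enter(int i)               /* entry section for process i */
{
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

static void leave(int i)               /* exit section for process i */
{
    atomic_store(&flag[i], false);
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        counter++;                     /* critical section */
        leave(i);                      /* remainder section follows */
    }
    return NULL;
}

int main(void)
{
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}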

3. Deadlock

Deadlock is a situation in computing where two or more processes are unable to proceed because each
is waiting for the other to release resources. Key concepts include mutual exclusion, resource holding,
circular wait, and no preemption.

Consider an example where two trains are coming toward each other on the same track and there is only
one track: neither train can move once they are in front of each other. This is a practical example of
deadlock. A similar situation occurs in operating systems when two or more processes hold some
resources and wait for resources held by the other(s). For example, Process 1 is holding Resource 1 and
waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1 (a code
sketch of this situation follows the list below).

Under the normal mode of operation, a process may utilize a resource in only the following sequence:

1. Request: The process requests the resource. If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the requesting process must wait until
it can acquire the resource.

2. Use: The process can operate on the resource (for example, if the resource is a printer, the process can
print on the printer).

3. Release: The process releases the resource.
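The two-process, two-resource picture above can be reproduced with two locks. The following sketch (an assumed example, not from the notes) will normally hang: process1 holds lockA (Resource 1) and waits for lockB (Resource 2), while process2 holds lockB and waits for lockA.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;  /* Resource 1 */
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;  /* Resource 2 */

static void *process1(void *arg)
{
    pthread_mutex_lock(&lockA);    /* request + use Resource 1 */
    sleep(1);                      /* widen the window for the deadlock */
    pthread_mutex_lock(&lockB);    /* waits forever: held by process 2 */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);  /* release */
    return NULL;
}

static void *process2(void *arg)
{
    pthread_mutex_lock(&lockB);    /* request + use Resource 2 */
    sleep(1);
    pthread_mutex_lock(&lockA);    /* waits forever: held by process 1 */
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);  /* release */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);        /* never returns: circular wait */
    pthread_join(t2, NULL);
    puts("done (never printed)");
    return 0;
}

All four necessary conditions discussed below hold here: the locks are nonsharable, each thread holds one lock while waiting for the other, neither lock can be taken away, and the waits form a cycle.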

Necessary Conditions

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only one process
at a time can use the resource. If another process requests that resource, the requesting process must
be delayed until the resource has been released.

2. Hold and wait. A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.

3. No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily
by the process holding it, after that process has completed its task.

4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn,
and Pn is waiting for a resource held by P0.

5. Resource allocation graph

A resource allocation graph (RAG) shows which resource is held by which process and which process is
waiting for a resource of a specific kind. It is a simple and straightforward tool to outline how interacting
processes can deadlock. The resource allocation graph describes the state of the system in terms of
processes and resources: how many resources are available, how many are allocated, and what the
request of each process is. Everything can be represented in terms of a graph. One benefit of having a
graph is that it is sometimes possible to see a deadlock directly by using the RAG, whereas you might not
realize it by looking at a table. Tables are better if the system contains many processes and resources,
and a graph is better if the system contains fewer processes and resources. Like any graph, a RAG
contains vertices and edges.

Types of Vertices in RAG

In a RAG, vertices are of two types:

1. Process vertex: Every process is represented as a process vertex, generally drawn as a circle.

2. Resource vertex: Every resource is represented as a resource vertex. Resource vertices are of two
kinds:

Single-instance resource type: represented as a box containing a single dot. The number of dots
indicates how many instances of that resource type are present.

Multi-instance resource type: also represented as a box, but containing several dots.

How many Types of Edges are there in RAG?

Now coming to the edges of a RAG. There are two types of edges in a RAG –

Assign edge: if a resource is already assigned to a process, the edge drawn from the resource to that
process is called an assign edge.

Request edge: if a process is requesting a resource that it needs to complete its execution, the edge
drawn from the process to that resource is called a request edge.

So, if a process is using a resource, an arrow is drawn from the resource node to the process node. If
a process is requesting a resource, an arrow is drawn from the process node to the resource node.

Example 1 (Single instances RAG)



If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only one
instance, then the processes will be in deadlock. For example, if process P1 holds resource R1,
process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for R1,
then process P1 and process P2 will be in deadlock.
Here is another example, which shows processes P1 and P2 acquiring resources R1 and R2 while
process P3 is waiting to acquire both resources. In this example, there is no deadlock because there is
no circular dependency. So a cycle in a single-instance resource type RAG is a sufficient condition for
deadlock.

Example 2 (Multi-instances RAG)

From the graph alone, it is not possible to say whether this RAG is in a safe state or an unsafe state. To
see the state of this RAG, let us construct the allocation matrix and the request matrix.


The total number of processes is three (P1, P2 and P3) and the total number of resources is two (R1
and R2).

Allocation matrix –

To construct the allocation matrix, go to each resource and see to which process it is allocated. R1 is
allocated to P1, so write 1 in that entry; similarly, one instance of R2 is allocated to P2 and the other to
P3. Write 0 for the remaining entries.

        R1  R2
  P1     1   0
  P2     0   1
  P3     0   1

Request matrix –

To construct the request matrix, go to each process and look at its outgoing (request) edges. P1 is
requesting resource R2, so write 1 there; similarly, P2 is requesting R1. Write 0 for the remaining
entries.

        R1  R2
  P1     0   1
  P2     1   0
  P3     0   0

So the available resources are (0, 0).

Checking for deadlock (safe or not) –

P3 is not requesting anything, so it can finish and release its instance of R2; that instance then satisfies
P1's request, and once P1 finishes and releases R1, P2's request can also be satisfied. So there is no
deadlock in this RAG. Even though there is a cycle, there is still no deadlock. Therefore, in a
multi-instance resource type RAG, a cycle is not a sufficient condition for deadlock.
Multi Instances with Deadlock

This example is the same as the previous one except that process P3 is also requesting resource R1, so
P3's row in the request matrix becomes (1, 0).

So the available resources are (0, 0), but the outstanding requests are (0, 1), (1, 0) and (1, 0); none of
these requests can be fulfilled, so the system is in deadlock. Thus, not every cycle in a multi-instance
resource type graph is a deadlock, but if there is a deadlock, there must be a cycle. So, in a RAG with
multi-instance resource types, a cycle is a necessary condition for deadlock, but not a sufficient one.
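The check applied above can be written as a small detection routine in the style of the Banker's algorithm (a sketch over the matrices assumed from this example, not from the notes): repeatedly look for a process whose outstanding request can be satisfied from the available resources, pretend it finishes and releases its allocation, and report any process that can never be chosen as deadlocked.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 3   /* P1, P2, P3 */
#define NRES  2   /* R1, R2 */

/* Matrices from the deadlocked multi-instance example above. */
static int allocation[NPROC][NRES] = { {1, 0}, {0, 1}, {0, 1} };
static int request[NPROC][NRES]    = { {0, 1}, {1, 0}, {1, 0} };
static int available[NRES]         = { 0, 0 };

int main(void)
{
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++)
        work[r] = available[r];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (request[p][r] > work[r]) can_run = false;
            if (can_run) {             /* assume p finishes and releases */
                for (int r = 0; r < NRES; r++) work[r] += allocation[p][r];
                finished[p] = true;
                progress = true;
            }
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!finished[p]) printf("P%d is deadlocked\n", p + 1);
    return 0;
}

With the deadlock-free matrices from the first multi-instance example (P3 requesting nothing), the same routine finishes every process and prints nothing.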

4. Deadlock handling

Generally speaking, we can deal with the deadlock problem in one of three ways:

• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlocked state.

• We can allow the system to enter a deadlocked state, detect it, and recover.

• We can ignore the problem altogether and pretend that deadlocks never occur in the system.

7. Deadlock handling methods

Deadlock Prevention: the aim is to ensure that at least one of the four necessary conditions for deadlock
can never hold. This is done by attacking one of the conditions described earlier: mutual exclusion, hold
and wait, no preemption, or circular wait.

Deadlock Avoidance: This is a proactive strategy where the operating system or application takes
steps to prevent deadlocks from happening. It involves careful resource allocation and scheduling
algorithms to ensure that the system does not enter a state where deadlock could occur.
Techniques like the Banker's algorithm, resource allocation graphs, and ensuring a safe state are
examples of deadlock avoidance strategies.

The Banker's algorithm is a deadlock avoidance technique used in operating systems to manage
resource allocation among multiple processes. It ensures that resources are allocated in a way that
prevents deadlock by maintaining a safe state, where the system can grant resource requests only if
it guarantees that the processes can eventually complete their tasks and release allocated
resources. This algorithm is based on analyzing the current state of available resources, current
allocations, and maximum demands of processes, thereby allowing for the safe execution of
concurrent processes without the risk of deadlock.

Pi represents a specific process in the system, and Pj represents another specific process. These
variables denote individual processes that are managed by the Banker's algorithm. The algorithm
evaluates and manages resource requests and allocations from each process (Pi and Pj) to ensure that
resources are allocated in a way that avoids deadlock and maintains system stability. Each process (Pi
and Pj) has its own set of resource requests, current allocations, and maximum resource needs, which
the algorithm uses to determine safe resource allocation.
(resource allocation example comes here)

safe state algorithm:
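The heading above is left empty in the notes; here is a minimal sketch of the safety check at the heart of the Banker's algorithm (the Allocation, Max and Available values are assumed for illustration only). Need is computed as Max minus Allocation; the state is safe if every process can be run to completion in some order using only the currently available resources.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 3
#define NRES  2

/* Assumed snapshot for illustration only. */
static int alloc_m[NPROC][NRES] = { {1, 0}, {0, 1}, {0, 0} };
static int max_m[NPROC][NRES]   = { {1, 1}, {1, 1}, {0, 1} };
static int avail[NRES]          = { 0, 1 };

int main(void)
{
    int need[NPROC][NRES], work[NRES];
    bool finish[NPROC] = { false };

    for (int p = 0; p < NPROC; p++)
        for (int r = 0; r < NRES; r++)
            need[p][r] = max_m[p][r] - alloc_m[p][r];
    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (finish[p]) continue;
            bool fits = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) fits = false;
            if (fits) {                /* p can run to completion */
                for (int r = 0; r < NRES; r++) work[r] += alloc_m[p][r];
                finish[p] = true;
                progress = true;
            }
        }
    }
    bool safe = true;
    for (int p = 0; p < NPROC; p++) safe = safe && finish[p];
    printf("state is %s\n", safe ? "safe" : "unsafe");
    return 0;
}

Under the Banker's algorithm, a request is granted only if pretending to grant it still leaves the system in a safe state by this check; otherwise the requesting process waits.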

What is Deadlock Recovery?

If deadlock prevention or avoidance is not applied, we can handle deadlocks by deadlock detection and
recovery, which consists of two phases:

In the first phase, we examine the state of the processes and check whether there is a deadlock in the
system.

If a deadlock is found in the first phase, we apply an algorithm to recover from it.

With deadlock detection and recovery, data correctness is preserved, but performance decreases.

Methods of Deadlock Recovery

There are several Deadlock Recovery Techniques:

Manual Intervention

Automatic Recovery

Process Termination

Resource Preemption

1. Manual Intervention

When a deadlock is detected, one option is to inform the operator and let them handle the situation
manually. While this approach allows for human judgment and decision-making, it can be
time-consuming and may not be feasible in large-scale systems.

2. Automatic Recovery

An alternative approach is to enable the system to recover from deadlock automatically. This
method involves breaking the deadlock cycle by either aborting processes or preempting resources.
Let’s delve into these strategies in more detail.

3. Process Termination
Abort all Deadlocked Processes

This approach breaks the deadlock cycle, but it comes at a significant cost. The processes that were
aborted may have executed for a considerable amount of time, resulting in the loss of partial
computations. These computations may need to be recomputed later.

Abort one process at a time

Instead of aborting all deadlocked processes simultaneously, this strategy involves selectively
aborting one process at a time until the deadlock cycle is eliminated. However, this incurs overhead
as a deadlock-detection algorithm must be invoked after each process termination to determine if
any processes are still deadlocked.

Factors for choosing the termination order:

The process’s priority

Completion time and the progress made so far

Resources consumed by the process

Resources required to complete the process

Number of processes to be terminated

Process type (interactive or batch)

4. Resource Preemption

Selecting a Victim

Resource preemption involves choosing which resources and processes should be preempted to
break the deadlock. The selection order aims to minimize the overall cost of recovery. Factors
considered for victim selection may include the number of resources held by a deadlocked process
and the amount of time the process has consumed.

Rollback

If a resource is preempted from a process, the process cannot continue its normal execution as it
lacks the required resource. Rolling back the process to a safe state and restarting it is a common
approach. Determining a safe state can be challenging, leading to the use of total rollback, where the
process is aborted and restarted from scratch.

Starvation Prevention

To prevent resource starvation, it is essential to ensure that the same process is not always chosen
as a victim. If victim selection is based solely on cost factors, one process might repeatedly lose its
resources and never complete its designated task. To address this, it is advisable to limit the number of
times a process can be chosen as a victim, for example by including the number of rollbacks in the cost
factor.

deadlock detection:

If all resource types have only a single instance, then we can use a graph called the wait-for graph, which
is a variant of the resource allocation graph. Here, vertices represent processes, and a directed edge
from P1 to P2 indicates that P1 is waiting for a resource held by P2. As in the case of the resource
allocation graph, a cycle in a wait-for graph indicates a deadlock. So the system can maintain a wait-for
graph and check it for cycles periodically to detect any deadlocks, as in the sketch below.
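As a sketch (the representation and example graph are assumed, not from the notes), the wait-for graph can be stored as an adjacency matrix and checked for a cycle with a depth-first search:

#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of processes */

/* waits_for[i][j] == true means Pi waits for a resource held by Pj.
   Assumed example: P0 -> P1 -> P2 -> P0, a cycle, hence a deadlock. */
static bool waits_for[N][N] = {
    { false, true,  false },
    { false, false, true  },
    { true,  false, false },
};

static bool dfs(int u, bool visited[], bool on_stack[])
{
    visited[u] = true;
    on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!waits_for[u][v]) continue;
        if (on_stack[v]) return true;                  /* back edge: cycle */
        if (!visited[v] && dfs(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

int main(void)
{
    bool visited[N] = { false }, on_stack[N] = { false };
    bool cycle = false;
    for (int u = 0; u < N && !cycle; u++)
        if (!visited[u]) cycle = dfs(u, visited, on_stack);
    printf("%s\n", cycle ? "deadlock detected" : "no deadlock");
    return 0;
}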

The wait-for graph is not of much use if there are multiple instances of a resource, as a cycle may not
imply a deadlock. In such a case, we can use an algorithm similar to the Banker's algorithm to detect
deadlock (as sketched earlier for the multi-instance RAG example): we check whether further
allocations can or cannot be made based on the current allocations. You can refer to any operating
systems textbook for details of these algorithms.
