
Assignment on Operating Systems

Submitted to: Prof. Irfana Hameed Memon

Roll No.: 22-MCSE-05

Quaid-e-Awam University of Engineering, Science & Technology, Nawabshah (S.B.A.)

Question No.1 Does Peterson’s solution to the mutual exclusion problem work
when process scheduling is preemptive? How about when it is non-preemptive?
Answer:1
Yes. Not only does Peterson's solution work with preemptive scheduling, it was designed for that very case. When scheduling is non-preemptive, however, there is a possibility that it might fail. For example, in a case where turn is initially 0 but process 1 runs first, it will loop forever and never release the CPU.
Answer:2
Before using the shared variables (i.e., before entering its critical region), each process calls enter_region with its own process number, 0 or 1, as a parameter.
• This call will cause it to wait, if need be, until it is safe to enter.
• After it has finished with the shared variables, the process calls leave_region to indicate that it is done and to allow the other process to enter, if it so desires.
• Let us see how this solution works. Initially, neither process is in its critical region. Now process 0 calls enter_region.
• It indicates its interest by setting its array element and sets turn to 0.
• Since process 1 is not interested, enter_region returns immediately.
• If process 1 now calls enter_region, it will hang there until interested[0] goes to FALSE, an event that happens only when process 0 calls leave_region to exit its critical region.
• Now consider the case that both processes call enter_region almost simultaneously.
• Both will store their process numbers in turn. Whichever store is done last is the one that counts; the first one is overwritten and lost. Suppose that process 1 stores last, so turn is 1.
• When both processes come to the while statement, process 0 executes it zero times and enters its critical region.
• Process 1 loops and does not enter its critical region until process 0 exits its critical region.
Solution:
#define FALSE 0
#define TRUE 1
#define N 2                                   // number of processes

int turn;                                     // whose turn is it?
int interested[N];                            // all values initially 0 (FALSE)

void enter_region(int process)                // process is 0 or 1
{
    int other;                                // number of the other process
    other = 1 - process;                      // the opposite process
    interested[process] = TRUE;               // show that this process is interested
    turn = process;                           // set flag
    while (turn == process && interested[other] == TRUE);   // busy wait until it is safe
}

void leave_region(int process)                // process that is leaving
{
    interested[process] = FALSE;              // indicate departure from critical region
}
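
As a minimal usage sketch, the two routines above can be exercised from two POSIX threads as follows; the worker function, the shared counter, and the iteration count are illustrative assumptions, not part of the original solution.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                              // shared variable protected by Peterson's solution

void *worker(void *arg)
{
    int me = *(int *)arg;                     // process number: 0 or 1
    for (int i = 0; i < 100000; i++) {
        enter_region(me);                     // wait until it is safe to enter
        counter++;                            // critical region
        leave_region(me);                     // let the other process in
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", counter);        // expected 200000 if mutual exclusion holds
    return 0;
}

Note that on modern hardware Peterson's algorithm also requires memory barriers (or equivalent fences) so that the stores to turn and interested[] become visible in order; the sketch assumes the textbook model in which memory accesses are sequentially consistent.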
Question No.2 Show how counting semaphores (i.e., semaphores that can hold an arbitrary value) can be implemented using only binary semaphores and ordinary machine instructions.
Answer:
Implementation:
CSem(K) cs {                         // counting semaphore initialized to K
    int val = K;                      // the current value of CSem
    BSem gate(min(1, val));           // 1 if val > 0; 0 if val == 0
    BSem mutex(1);                    // protects val

    Pc(cs) {
        P(gate);
    a1: P(mutex);
        val = val - 1;
        if (val > 0)
            V(gate);                  // keep the gate open for the next waiter
        V(mutex);
    }

    Vc(cs) {
        P(mutex);
        val = val + 1;
        if (val == 1)
            V(gate);                  // value went from 0 to 1: reopen the gate
        V(mutex);
    }
}
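
The same construction can be rendered in C, assuming POSIX semaphores used strictly as binary semaphores (their values never exceed 1 here); the type csem_t and the names csem_init, csem_P, and csem_V are illustrative assumptions.

#include <semaphore.h>

typedef struct {
    int val;                          // current value of the counting semaphore
    sem_t gate;                       // binary: 1 if val > 0, 0 otherwise
    sem_t mutex;                      // binary: protects val
} csem_t;

void csem_init(csem_t *cs, int k)
{
    cs->val = k;
    sem_init(&cs->gate, 0, k > 0 ? 1 : 0);
    sem_init(&cs->mutex, 0, 1);
}

void csem_P(csem_t *cs)
{
    sem_wait(&cs->gate);              // P(gate)
    sem_wait(&cs->mutex);             // P(mutex)
    cs->val = cs->val - 1;
    if (cs->val > 0)
        sem_post(&cs->gate);          // keep the gate open for the next waiter
    sem_post(&cs->mutex);             // V(mutex)
}

void csem_V(csem_t *cs)
{
    sem_wait(&cs->mutex);             // P(mutex)
    cs->val = cs->val + 1;
    if (cs->val == 1)
        sem_post(&cs->gate);          // value went from 0 to 1: reopen the gate
    sem_post(&cs->mutex);             // V(mutex)
}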
Criteria for correct implementation:
Note that val is decremented inside Pc(cs) and incremented inside Vc(cs), in both cases while mutex is held. Thus val always equals the correct value of the counting semaphore cs. The program therefore implements the counting semaphore if (and only if) the following hold:
A1: val ≥ 0 always holds.
A2: If a thread t is at Pc(cs) and val > 0 holds, then eventually either thread t gets past Pc(cs) or val = 0 holds.
A1 ensures that a thread gets past Pc(cs) only if val is greater than zero just before it gets past (otherwise, A1 would not hold just after the thread got past).
A2 ensures that if threads are waiting at Pc(cs) and val is positive, one thread will get past. (This is so-called "weak fairness". One can also prove "strong fairness" by assuming the same of the binary semaphores.)

Effective atomicity
When analyzing the above program, each of the functions Pc(cs) and Vc(cs) can be treated as if it were executed atomically.

Proof: While a thread is inside Vc(cs), it is not affected by its environment nor does it affect the
environment. The former is obvious. The latter is almost obvious.
While a thread t is executing a code chunk, the environment learns nothing about the state of its
execution. Another thread blocked on gate may get past P(gate) before thread t exits the code
chunk (i.e., before it executes V(mutex)). But the environment cannot distinguish this from the
situation where thread t executes V(mutex) first (but the news was slow to get to the
environment).
The argument for Pc(cs) is the same. (Blocking occurs only at the start, before thread t gets inside
Pc(cs).)
End of proof.

Question No.3 If a system has only two processes, does it make sense to use
barrier to synchronize them? Why or why not?

Answer:
If a system has only two processes, using a full barrier is not really worthwhile; a simple semaphore-based solution is more than enough in this case. On the other hand, if the program works in phases and neither process may enter the next phase until both have finished the current one, then barrier-style synchronization between the two processes still makes sense, as sketched below.
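
As an illustration, here is a minimal sketch of such two-process phase synchronization using two POSIX semaphores in place of a full barrier; the thread functions, the number of phases, and the placeholder work comments are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t arrived0, arrived1;            // each semaphore signals that one thread reached the end of a phase

void *thread0(void *arg)
{
    (void)arg;
    for (int phase = 0; phase < 3; phase++) {
        /* ... work for this phase ... */
        sem_post(&arrived0);         // tell thread 1 that thread 0 finished the phase
        sem_wait(&arrived1);         // wait until thread 1 has also finished
        printf("thread 0 leaving phase %d\n", phase);
    }
    return NULL;
}

void *thread1(void *arg)
{
    (void)arg;
    for (int phase = 0; phase < 3; phase++) {
        /* ... work for this phase ... */
        sem_post(&arrived1);         // tell thread 0 that thread 1 finished the phase
        sem_wait(&arrived0);         // wait until thread 0 has also finished
        printf("thread 1 leaving phase %d\n", phase);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    sem_init(&arrived0, 0, 0);
    sem_init(&arrived1, 0, 0);
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}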

Question No.4 Students working at individual PCs in a computer laboratory
send their files to be printed by a server which spools the files on its hard disk.
Under what conditions may a deadlock occur if the disk space for the print
spool is limited? How may the deadlock be avoided?

Answer:
Disk space on the spooling partition is a finite resource, and once it fills up a deadlock can occur. Every block that arrives in the spooling partition claims part of that space, the blocks behind it need space as well, and so on. If several jobs arrive and their partially received data fills the spool, no more blocks can be stored, no job can be completed and printed, and therefore no space is ever released: a deadlock. This can be avoided by making sure that at least one job can always be received in full and printed, releasing its space for the next job (for example, by refusing to accept a new job once the spool is close to full).

Question No.5 Suppose that there is a resource deadlock in a system. Give an example to show that the set of deadlocked processes can include processes that are not in the circular chain in the corresponding resource allocation graph.

Answer:
Consider three processes A, B, and C, and two resources R and S. Suppose A is waiting for R, which is held by B; B is waiting for S, which is held by A; and C is also waiting for R, held by B. All three processes, A, B, and C, are deadlocked, but only A and B belong to the circular chain.

Question No.6 A key limitation of the banker's algorithm is that it requires knowledge of the maximum resource needs of all processes. Is it possible to design a deadlock avoidance algorithm that does not require this information? Explain your answer.

Answer:
Banker's Algorithm in Operating System
The Banker's algorithm is a deadlock avoidance algorithm. It is so named because the same idea can be used in a banking system to determine whether a loan can be granted or not.
Consider a bank with n account holders whose accounts together hold a sum S. Every time a loan has to be granted, the bank subtracts the loan amount from the total money it has and checks whether the difference is still at least S. Only then would the bank have enough money even if all n account holders drew all their money at once.
Banker's algorithm works in a similar way in computers.

Whenever a new process is created, it must specify exactly the maximum number of instances of each resource type that it will ever need.
Characteristics of Banker's Algorithm
The characteristics of Banker's algorithm are as follows:
• If a process requests resources, it may have to wait until they can be granted safely.
• The algorithm needs advance information about the maximum resources each process may claim.
• The resources in the system are limited in number.
• If a process obtains all the resources it needs, it must return them within a finite amount of time.
• The algorithm maintains the allocation so that the available resources can always fulfill the needs of at least one process.
Let us assume that there are n processes and m resource types.

Data Structures used to implement the Banker’s Algorithm


Some data structures that are used to implement the banker's algorithm are:

1. Available

It is an array of length m. It represents the number of available resources of each type.


If Available[j] = k, then there are k instances of resource type Rj currently available.

2. Max
It is an n x m matrix that represents the maximum number of instances of each resource that a
process can request. If Max[i][j] = k, then the process Pi can request at most k instances of
resource type Rj.

3. Allocation
It is an n x m matrix that represents the number of resources of each type currently allocated to
each process. If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource
type Rj.

4. Need
It is a two-dimensional array. It is an n x m matrix that indicates the remaining resource needs of
each process. If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to
complete its task.

Need[i][j] = Max[i][j] - Allocation[i][j]
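
As a concrete sketch, these four data structures can be declared in C as follows; the sizes NPROC and NRES (here 5 processes and 3 resource types) and the helper compute_need are illustrative assumptions.

#define NPROC 5                        // n: number of processes (illustrative)
#define NRES  3                        // m: number of resource types (illustrative)

int Available[NRES];                   // Available[j]: free instances of resource Rj
int Max[NPROC][NRES];                  // Max[i][j]: maximum demand of process Pi for Rj
int Allocation[NPROC][NRES];           // Allocation[i][j]: instances of Rj held by Pi
int Need[NPROC][NRES];                 // Need[i][j]: remaining need of Pi for Rj

void compute_need(void)                // Need is derived from Max and Allocation
{
    for (int i = 0; i < NPROC; i++)
        for (int j = 0; j < NRES; j++)
            Need[i][j] = Max[i][j] - Allocation[i][j];
}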

The Banker's algorithm comprises two algorithms:


1. Safety algorithm

2. Resource request algorithm

Safety Algorithm
The safety algorithm is used to find out whether or not the system is in a safe state. The algorithm is as follows:

1. Let Work and Finish be vectors of length m and n, respectively. Initially,
Work = Available
Finish[i] = false for i = 0, 1, ..., n - 1.
This means that, initially, no process has finished and the number of available resources is given by the Available array.

2. Find an index i such that both
Finish[i] == false
Needi <= Work
If there is no such i, go to step 4. In other words, look for an unfinished process whose remaining needs can be satisfied by the currently available resources; if no such process exists, go to step 4.

3. Perform the following:
Work = Work + Allocationi
Finish[i] = true
Go to step 2.
When such an unfinished process is found, it is assumed to run to completion and release all the resources allocated to it, the process is marked finished, and the loop is repeated for the remaining processes.

4. If Finish[i] == true for all i, then the system is in a safe state.
That is, if all processes can finish, the system is in a safe state.

This algorithm may require on the order of m × n² operations to determine whether a state is safe.
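
Using the illustrative C declarations sketched above (NPROC, NRES, Available, Max, Allocation, Need), the safety algorithm can be written roughly as follows; the function name is_safe is an assumption for illustration.

int is_safe(void)
{
    int Work[NRES];                          // step 1: Work = Available
    int Finish[NPROC] = {0};                 // Finish[i] = false for every process
    for (int j = 0; j < NRES; j++)
        Work[j] = Available[j];

    for (;;) {
        int found = 0;
        for (int i = 0; i < NPROC; i++) {    // step 2: look for an unfinished process
            if (Finish[i])
                continue;
            int fits = 1;
            for (int j = 0; j < NRES; j++)
                if (Need[i][j] > Work[j]) { fits = 0; break; }
            if (fits) {                      // its remaining needs fit in Work
                for (int j = 0; j < NRES; j++)
                    Work[j] += Allocation[i][j];   // step 3: pretend it runs and releases
                Finish[i] = 1;
                found = 1;
            }
        }
        if (!found)
            break;                           // no further process can proceed
    }

    for (int i = 0; i < NPROC; i++)          // step 4: safe iff every process could finish
        if (!Finish[i])
            return 0;
    return 1;
}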

Resource Request Algorithm


Now the next algorithm is the resource-request algorithm; it is used to determine whether a request for resources can be safely granted.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k instances of resource type Rj. When process Pi makes a request for resources, the following actions are taken:
1. If Requesti <= Needi, go to step 2; otherwise raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise Pi must wait, since the resources are not available.
3. Pretend that the resources are assigned to process Pi by updating the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. If the new state is unsafe, Pi must wait for Requesti, and the old resource-allocation state is restored.
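
Continuing the same illustrative C sketch, the request check can be written as follows; the function name request_resources, its parameters, and the return convention (-1 error, 0 wait, 1 granted) are assumptions for illustration.

int request_resources(int i, int Request[NRES])
{
    for (int j = 0; j < NRES; j++)            // step 1: the request must not exceed Need
        if (Request[j] > Need[i][j])
            return -1;                        // error: process exceeded its maximum claim

    for (int j = 0; j < NRES; j++)            // step 2: the request must fit in Available
        if (Request[j] > Available[j])
            return 0;                         // Pi must wait: resources not available

    for (int j = 0; j < NRES; j++) {          // step 3: tentatively grant the request
        Available[j]     -= Request[j];
        Allocation[i][j] += Request[j];
        Need[i][j]       -= Request[j];
    }

    if (is_safe())
        return 1;                             // new state is safe: request granted

    for (int j = 0; j < NRES; j++) {          // new state unsafe: roll back, Pi must wait
        Available[j]     += Request[j];
        Allocation[i][j] -= Request[j];
        Need[i][j]       += Request[j];
    }
    return 0;
}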
