OSC Sample Questions and Answers (2) 2024-25

EXAM QUESTIONS

Uploaded by

hooriyamasood80

Important Note:

These questions and answers are given here as guidance only.


The answers given here may be either lengthy or short. Do not
follow the same style in the exam; just follow each question's
requirements and answer accordingly.
The suggested answers given here may not be accurate. You
should check the lecture slides or reference material for the
correct answer.

Question 1
Two processes, P0 and P1, are to be run and they update a shared variable. This update is
protected by Peterson's solution to the mutual exclusion problem.

a) Show Peterson's algorithm and show the truth table for the part of the algorithm which
dictates if a process is allowed to enter its critical region.

b) P0, attempts to enter its critical region. Show the state of the variables that are
created/updated. Will P0 be allowed to enter its critical region? If not, why not?

c) P1, attempts to enter its critical region. Show the state of the variables that are
created/updated. Will P1 be allowed to enter its critical region? If not, why not?

d) P0 leaves its critical region. What effect does this have on the variables?

e) Assume no processes are running and P0 and P1 try to enter their critical region at
exactly the same time. What will happen?

Model Answer

(a) Peterson's Solution – The algorithm

int turn;                       /* whose turn is it? */
int interested[2];              /* both entries initially FALSE */

void enter_region(int process) {
    int other;
    other = 1 - process;            /* the number of the other process */
    interested[process] = TRUE;     /* show that we are interested */
    turn = process;                 /* set the flag */
    while (turn == process && interested[other] == TRUE)
        ;                           /* busy wait */
}

void leave_region(int process) {
    interested[process] = FALSE;    /* we have left the critical region */
}
The truth table for the while loop

    turn == process   interested[other] == TRUE   while condition
1.        F                      F                 F (Continue)
2.        F                      T                 F (Continue)
3.        T                      F                 F (Continue)
4.        T                      T                 T (Wait)

(b) P0 tries to enter its critical region


other = 1, turn = 0
interested[0] = TRUE, interested[1] = FALSE
turn == process is TRUE and interested[other] is FALSE, so row 3 of the truth
table applies (above) and thus P0 will be allowed to enter its critical region

(c) P1 tries to enter its critical region


other = 0, turn = 1
interested[0] = TRUE, interested[1] = TRUE
turn == process is TRUE and interested[other] is TRUE, so row 4 of the truth
table applies and thus P1 will NOT be allowed to enter its critical region

(d) P0 leaves its critical region. What effect does this have on the variables?
P0 will set interested[0] to FALSE (in leave_region). This will allow the while
loop that P1 is executing to terminate, as turn == process is TRUE but
interested[0] is now FALSE (row 3 of the truth table), so P1 can now enter its
critical region

(e) P0 and P1 try to enter their critical region at exactly the same time.
As other is a local variable then both calls can set this variable without any problems.
Similarly, the relevant element of the interested array can be set without race
conditions occurring.
As turn is a shared variable, there is the potential of race conditions occurring.
Assume P0 sets turn to zero and it is immediately set to one by P1 (or vice versa). Due
to the WHILE statement, it does not matter which process sets the variable first (or
last), only one of the processes will be allowed to enter its critical region. The other
will be forced to wait. That is,

     turn == process   interested[other] == TRUE   while condition
P0         F                      T                 F (Continue)
P1         T                      T                 T (Wait)
Question 2

How are devices represented in the UNIX operating system? How are the drivers
specified?

Answer:

In UNIX, devices are represented as special files in the file system
(conventionally under /dev), so they can be opened, read and written with the
same system calls as ordinary files.

The drivers are specified by their major and minor device numbers: the major
number identifies the driver, and the minor number identifies the particular
device handled by that driver.

Question 3
a) Describe the following scheduling algorithms

 Non Pre-Emptive, First Come, First Serve


 Round Robin
 Shortest Job First

b) Given the following processes and burst times

Process Burst Time


P1 13
P2 5
P3 23
P4 3
P5 31
P6 3
P7 14

Calculate the average wait time when each of the above scheduling algorithms is
used.

Assume that a quantum of 6 is being used for round robin.

c) Which scheduling algorithm, as an operating systems designer, would you implement?

Answer
a) Describe the following scheduling algorithms

 Non Pre-Emptive, First Come, First Serve


An obvious scheduling algorithm is to execute the processes in the order they arrive and to execute them to
completion. In fact, this simply implements a non-preemptive scheduling algorithm.
It is an easy algorithm to implement. When a process becomes ready it is added to the tail of the ready
queue. This is achieved by adding the process's Process Control Block (PCB) to the queue.
When the CPU becomes free the process at the head of the queue is removed, moved to a running state and
allowed to use the CPU until it is completed.
The problem with FCFS is that the average waiting time can be long.

 Round Robin
The processes to be run are held in a queue and the scheduler takes the first job off the front of the queue
and assigns it to the CPU (so far the same as FCFS).
In addition, there is a unit of time defined (called a quantum). Once the process has used up a quantum the
process is preempted and a context switch occurs. The process which was using the processor is placed at
the back of the ready queue and the process at the head of the queue is assigned to the CPU.
Of course, instead of being preempted the process could complete before using its quantum. This would
mean a new process could start earlier and the completed process would not be placed at the end of the
queue (it would either finish completely or move to a blocked state whilst it waited for some interrupt, for
example I/O).

The average waiting time using RR can be quite long.

 Shortest Job First


Using the SJF algorithm, each process is tagged with the length of its next CPU burst. The processes are
then scheduled by selecting the shortest job first.

In fact, the SJF algorithm is provably optimal with regard to the average waiting time. And, intuitively, this
is the case as shorter jobs add less to the average time, thus giving a shorter average.

The problem is we do not know the burst time of a process before it starts.

For some systems (notably batch systems) we can make fairly accurate estimates but for interactive
processes it is not so easy.

b) Given the following processes and burst times etc.

FCFS
The processes would execute in the order they arrived. Therefore, the processes would
execute as follows, with the wait times shown.

Process Burst Time Wait Time
P1 13 0
P2 5 13
P3 23 18
P4 3 41
P5 31 44
P6 3 75
P7 14 78

The average wait time is calculated as WaitTime / No Of Processes


= (0 + 13 + 18 + 41 + 44 + 75 + 78) / 7
= 269 / 7
= 32.43

Round Robin
This scheme allocates the processes to the CPU in the order they arrived, but only allows
each to execute for a fixed period of time (the quantum, here 6). Then the process is pre-empted
and placed at the end of the queue.

This leads to the following wait times for each process

Process Wait Time


P1 47
P2 6
P3 56
P4 17
P5 61
P6 26
P7 60

I would expect the students to give more detail than this (as we did in the lecture
exercises), to show they have applied the algorithm correctly. This, in essence, shows the
wait time at each stage of the process. If the students do this, but give the correct answer,
they should be given credit for applying the algorithm (half marks).
However, if they do not show the working, but get the correct answer, then they should
get full marks as they must have applied the algorithm to get the correct answer.

Summing all the wait times and dividing by the number of processes, gives us the
average wait time. That is

= (47 + 6 + 56 + 17 + 61 + 26 + 60) / 7
= 273 / 7
= 39.00

Shortest Job First


This scheme simply executes the processes using the burst time as the priority. That is

Process Burst Time Wait Time
P6 3 0
P4 3 3
P2 5 6
P1 13 11
P7 14 24
P3 23 38
P5 31 61

The average wait time is calculated as total wait time / number of processes


= (0 + 3 + 6 + 11 + 24 + 38 + 61) / 7
= 143 / 7
= 20.43

c) Which scheduling algorithm, as an operating systems designer, would you


implement?

This is an opportunity for the student to give their views on scheduling algorithms. As we
discussed in the lectures, there is no ideal scheduling algorithm; there are always trade-offs
and compromises.

No marks will be awarded for saying shortest job first (SJF) would be implemented (as
this is not possible), but an algorithm that estimates the burst time, so that SJF can be
partially emulated, would get some marks.

I would also give marks for saying that multi-level feedback queue scheduling would be a
good choice as, by varying the parameters to this algorithm, it is possible to emulate all
the other algorithms we considered. But the student should also say that even this is not
ideal as vast amounts of testing and guesswork would still be needed. All implementing
this algorithm does is give you the flexibility to try various algorithms.

Many other answers are possible and the marks will be awarded on the basis of their
argument and how they defend it.

Question 4
(a) Given a disk with 200 tracks, where track requests are received in the following
order

55, 58, 39, 18, 90, 160, 150, 38, 184.

The starting position for the arm is track 100. Calculate the number of tracks
crossed when the following algorithms are used

 First Come First Serve
 Shortest Seek First
 The elevator algorithm starting in the direction UP.

Answer

a) Given a disk with 200 tracks, where track requests are received in the
following order etc.

FCFS    #crossings     SSF     #crossings     Elevator   #crossings
 100                   100                     100
  55        45          90         10          150           50
  58         3          58         32          160           10
  39        19          55          3          184           24
  18        21          39         16           90           94
  90        72          38          1           58           32
 160        70          18         20           55            3
 150        10         150        132           39           16
  38       112         160         10           38            1
 184       146         184         24           18           20
Total      498                    248                       250

Question 5
a) The buddy system is a memory management scheme that uses variable sized partitions.

Explain the basic principle behind the buddy system.

b) Assume a computer with a memory size of 256K, initially empty. Requests are
received for blocks of memory of 17K, 6K, 63K and 9K. Show how the buddy system
would deal with each request, showing the memory layout at each stage and the status of
the lists at each stage.

(c) The processes terminate in the following order; 6K, 9K, 17K and 63K. Discuss what
happens as each process terminates.

d) Describe and evaluate an alternative to the buddy system

Answer
a) The buddy system is a memory management scheme that uses variable sized
partitions etc….

If we keep a list of holes (in memory) sorted by their size, we can make allocation to
processes very fast as we only need to search down the list until we find a hole that is big
enough. The problem is that when a process ends the maintenance of the list is
complicated. In particular, merging adjacent holes is difficult as the entire list has to be
searched in order to find its neighbours. The Buddy System is a memory allocation scheme
that works on the basis of binary numbers, as these are fast for computers to manipulate.
Lists are maintained which store the free memory blocks of sizes 1, 2, 4, 8, …, n,
where n is the size of the memory (in bytes). This means that for a 256K memory we
require 19 lists. If we assume we have 256K of memory and it is all unused, then there
will be one entry in the 256K list, and all other lists will be empty.

b) Assume a computer with a memory size of 256K, initially empty. Requests are
received for blocks of memory of 17K, 6K, 63K and 9K. Show how the buddy
system would deal with each request, showing the memory layout at each stage and
the status of the lists at each stage.

The table below shows the free lists after each allocation. Entries are the
start addresses of the free blocks held in each list.

List No.  Block Size       Initial  After 17K  After 6K  After 63K  After 9K
 1                1
 2                2
 3                4
 4                8
 5               16
 6               32
 7               64
 8              128
 9              256
10              512
11             1024 (1K)
12             2048 (2K)
13             4096 (4K)
14             8192 (8K)                          40K        40K       40K
15            16384 (16K)                         48K        48K
16            32768 (32K)              32K
17            65536 (64K)              64K        64K
18           131072 (128K)            128K       128K       128K      128K
19           262144 (256K)    0K

(c) The processes terminate in the following order; 6K, 9K, 17K and 63K. Discuss
what happens as each process terminates.

The effect of each of these is described below and the changing lists are also shown. The
student can show either – but a description would be better.

 When the 6K process terminates, its 8K slot of memory is returned. This returns a
block of memory from 32K to 40K. Checking the 8K list, it is found that there is an
adjacent free block (the buddy at 40K), which can be merged. Therefore the two 8K
blocks are merged into a 16K block starting at 32K and this is added to the 16K list.
 When the 9K process terminates, this releases a 16K block of memory from 48K to
64K. Checking the 16K list, there is a free block from 32K (to 48K). As the two 16K
blocks are consecutive, they are merged into a 32K block.
 The 17K process returns a 32K block of memory from 0K to 32K. This can be merged
with the free block from 32K to 64K, giving an entry in the 64K list.
 The final release of memory (the 63K process, holding a 64K block) allows two 64K
blocks to be merged into a 128K block, and then two 128K blocks to be merged,
returning to the position where the memory is empty and there is only one list
entry: the 256K block starting at 0K.

List No.  Block Size       Initial  After 6K  After 9K  After 17K  After 63K
 1                1
 2                2
 3                4
 4                8
 5               16
 6               32
 7               64
 8              128
 9              256
10              512
11             1024 (1K)
12             2048 (2K)
13             4096 (4K)
14             8192 (8K)      40K
15            16384 (16K)               32K
16            32768 (32K)                         32K
17            65536 (64K)                                    0K
18           131072 (128K)   128K      128K      128K      128K
19           262144 (256K)                                              0K

d) Describe and evaluate an alternative to the buddy system

Two alternatives were presented in the lectures. These were managing memory with bit
maps and managing memory with linked lists.

I would only expect a brief discussion of one of these methods. The notes below give
sample answers, although I would not expect the student to go into as much detail
(certainly for linked lists) – just to explain the basic principle of one of the schemes.

I would expect a brief comparison with another scheme (probably the buddy system),
evaluating the scheme they have chosen to describe.

Memory Usage with Bit Maps


Under this scheme the memory is divided into allocation units and each allocation unit
has a corresponding bit in a bit map. If the bit is zero, the memory is free. If the bit in the
bit map is one, then the memory is currently being used.
This scheme can be shown as follows.

Allocation units and the corresponding bit map:

1 0 0 0 1 1 1 0 1

The main decision with this scheme is the size of the allocation unit. The smaller the
allocation unit, the larger the bit map has to be. But, if we choose a larger allocation unit,
we could waste memory as we may not use all the space allocated in each allocation unit.

The other problem with a bit map memory scheme is when we need to allocate memory
to a process. Assume the allocation size is 4 bytes. If a process requests 256 bytes of
memory, we must search the bit map for 64 consecutive zeroes. This is a slow operation
and for this reason bit maps are not often used.

Memory Usage with Linked Lists

Free and allocated memory can be represented as a linked list. The memory shown above
as a bit map can be represented as a linked list as follows.

| P 0 1 | H 1 3 | P 4 3 | H 7 1 | P 8 1 |

Each entry in the list holds the following data


 P or H : for Process or Hole
 Starting segment address
 The length of the memory segment
 The next pointer is not shown but assumed to be present

In the list above, processes follow holes and vice versa (with the exception of the start
and the end of the list). But, it does not have to be this way. It is possible that two
processes can be next to each other and we need to keep them as separate elements in the
list so that if one process ends we only return the memory for that process.
Consecutive holes, on the other hand, can always be merged into a single list entry.

This leads to the following observations when a process terminates and we return its
memory.

A terminating process can have four combinations of neighbours (we’ll ignore the start
and the end of the list to simplify the discussion).
If X is the terminating process, the four combinations (showing X's neighbours
before it terminates) are:

1. process, X, process
2. process, X, hole
3. hole, X, process
4. hole, X, hole

 In the first option we simply have to replace the P by an H, other than that the list
remains the same.
 In the second option we merge two list entries into one and make the list one entry
shorter.
 Option three is effectively the same as option 2.
 For the last option we merge three entries into one and the list becomes two entries
shorter.

In order to implement this scheme it is normally better to have a doubly linked list so that
we have access to the previous entry.

When we need to allocate memory, storing the list in segment address order allows us to
implement various strategies.

First Fit : This algorithm searches along the list looking for the first segment that is large
enough to accommodate the process. The segment is then split into a hole and a process. This
method is fast as the first available hole that is large enough to accommodate the process is
used.
Best Fit : Best fit searches the entire list and uses the smallest hole that is large enough to
accommodate the process. The idea is that it is better not to split up a larger hole that might be
needed later.
Best fit is slower than first fit as it must search the entire list every time. It has also been shown
that best fit performs worse than first fit, as it tends to leave lots of small, useless gaps.
Worst Fit : As best fit leaves many small, useless holes it might be a good idea to always use
the largest hole available. The idea is that splitting a large hole into two will leave a large
enough hole to be useful.
It has been shown that this algorithm is not very good either.

These three algorithms can all be speeded up if we maintain two lists; one for processes
and one for holes. This allows the allocation of memory to a process to be speeded up as
we only have to search the hole list. The downside is that list maintenance is complicated.
If we allocate a hole to a process we have to move the list entry from one list to another.

However, maintaining two lists allows us to introduce another optimisation. If we hold the
hole list in size order (rather than segment address order), we can make the best fit
algorithm stop as soon as it finds a hole that is large enough. In fact, first fit and best fit
then effectively become the same algorithm.

The Quick Fit algorithm takes a different approach to those we have considered so far.
Separate lists are maintained for some of the common memory sizes that are requested.
For example, we could have a list for holes of 4K, a list for holes of size 8K etc. One list
can be kept for large holes or holes which do not fit into any of the other lists.
Quick fit allows a hole of the right size to be found very quickly, but it suffers in that
there is even more list maintenance.

Question 6
a) Every file in a filing system has a set of attributes (read only, date created etc.).
Assume a filing system allows an attribute of temporary, meaning the creating process
only uses the file during the time it is executing and has no need for the data thereafter.
Assume the process is written correctly, so that it deletes the file at the end of its
execution. Do you see any reason for an operating system to have a temporary file
attribute? Give your reasons.

b) An operating system supplies system calls to allow you to COPY, DELETE and
RENAME a file.
Discuss the differences between using COPY/DELETE and RENAME to give a file a new
name.

Answer

a) Every file in a filing system has a set of attributes (read only, date created etc.).
Assume a filing system allows an attribute of temporary, meaning the creating
process only uses the file during the time it is executing and has no need for the data
thereafter.
Assume the process is written correctly, so that it deletes the file at the end of its
execution. Do you see any reason for an operating system to have a temporary file
attribute? Give your reasons.

The main reason for the attribute is to cope with a process terminating abnormally, or
with a system crash. Under these circumstances the temporary file would not be deleted.
However, by checking the temporary attribute of all files, the operating system is able to
delete those files that are marked as temporary, thus keeping the filing system “tidy.”
Under normal circumstances the attribute is not needed.
Another reason could be that the OS could decide to place all temporary files in a certain
location, allowing the programmer to simply create a temporary file without having to
concern him/herself with the location details.

b) An operating system supplies system calls to allow you to COPY, DELETE and
RENAME a file.
Discuss the differences between using COPY/DELETE and RENAME to give a file a
new name.

I would expect most students to say that there is a performance impact in using
copy/delete, as the entire file is copied. If you use rename then only the index entry has to
be changed.
Limited marks will be given for this, with the rest of the marks being given for the
student's other arguments – for example…

A perhaps less obvious reason is that if you copy a file you create a brand new file, and
some of the attributes will change (for example, date created). If you rename a file, the
date-created attribute would not be changed.

