BEU Operating System 2022 cse solution

The document discusses the concepts of interrupts and traps in computing, highlighting their differences, with interrupts being hardware-triggered and traps being software-triggered. It also covers the implications of swapping on system efficiency, the types of process scheduling (short-term, medium-term, long-term), and the advantages of using threads over processes in a concurrent airline reservation system. Additionally, it examines CPU utilization in round-robin scheduling and explains why deadlocks cannot occur in a bounded buffer producers-consumers system.

Uploaded by

mehrakriti341


VISIT = biharengineeringuniversity.com

(a) What is the purpose of an interrupt? What are the differences between a trap and an interrupt? Can traps be generated intentionally by a user program? If so, for what purpose?

An interrupt is raised by a hardware device: a USB device, a NIC, or a keyboard can all cause interrupts. Interrupts are asynchronous, so they can occur at any time.

A processor has a dedicated pin called the interrupt pin, also known as the INT pin. Devices such as keyboards are connected to the processor via this pin. When a key is pressed, it generates an interrupt. The processor switches from the currently running process to an interrupt handler routine; in this scenario, the keyboard interrupt handler routine is invoked. After the handler completes, the processor switches back to the program that had been running. In short, when an interrupt occurs, the processor saves the current context, executes the interrupt handler, and on completion restores the previous state.

A processor has a single interrupt pin, but there are multiple hardware devices. The interrupt controller shares the single interrupt pin among them: the processor communicates with the interrupt controller to determine which device actually generated the interrupt, and then executes the corresponding interrupt handler routine, whether for the timer, USB, or keyboard.

The main difference between a trap and an interrupt is that a trap is triggered by a user program to invoke OS functionality, while an interrupt is triggered by a hardware device so that the processor executes the corresponding interrupt handler routine.

An operating system is event-driven. An event can occur at any moment while a program is executing and triggers the operating system to run, switching the processor from user mode to kernel mode. After the execution of the OS, the control is passed back

FOR MORE SOLUTIONS VISIT BIHARENGINEERINGUNIVERSITY.COM



to the original program. Traps and interrupts are two types of events. A trap is
raised by a user program whereas an interrupt is raised by a hardware device
such as keyboard, timer, etc. A trap passes the control to the trap handler and the
interrupt passes the control to an interrupt handler. After executing the handler,
the control switches back to the original program.


A trap is a specific type of interrupt generated by software. It signals either (1) some unusual condition, such as an array index out of bounds, or (2) a request for an operating system service (a system call). User programs can generate traps, although not all high-level programming languages make this facility directly available to the programmer. (Ada, C, and Java do allow the programmer to make system calls directly.) Traps are used to notify the CPU that some unusual software event has taken place, or that some special service has been requested by the program. The CPU can then either terminate the program, or interrupt its normal execution sequence and transfer control to a special segment of code.


2(b) Does swapping improve or degrade the efficiency of system utilization?

Swapping refers to the process where a computer's operating system moves data
from the Random Access Memory (RAM) to the hard disk drive temporarily when the
RAM is full. While swapping can prevent a system from running out of memory
completely, it can also impact efficiency.

Here's a breakdown:

1. Improvement in Availability: Swapping can improve system availability by preventing crashes or slowdowns due to insufficient memory. It allows the system to continue running by moving less-used data to the slower storage space (hard disk) when RAM is full.

2. Impact on Performance: However, swapping data to the hard disk is significantly slower than accessing data in RAM. When the system needs to retrieve swapped data, access times increase, slowing down overall system performance. Continuous swapping, known as "thrashing," can severely degrade performance, as the system spends more time moving data between RAM and the hard disk than doing useful work.

3. Efficiency: In terms of efficiency, excessive swapping decreases overall efficiency because of the time it takes to move data back and forth between RAM and disk. Ideally, systems aim to minimize swapping to maintain optimal performance.

4. Optimization: Efficient memory management strategies, such as using appropriately sized swap space, optimizing applications to reduce memory usage, and adding more physical RAM, can help balance the need for system availability and performance.

In summary, while swapping can prevent immediate crashes due to insufficient memory, it often decreases system performance because of the hard disk's slower access times. Thus, managing swapping and memory usage is crucial for maintaining a balance between system availability and efficiency.

3. Including the initial parent process, how many processes are created by the program
shown below?
#include<stdio.h>
#include<unistd.h>
int main()


{
    /* fork a child process */
    fork();
    /* fork another child process */
    fork();
    /* and fork another */
    fork();
    return 0;
}

ANSWER =

There will be a total of 8 processes at the end.


Every call to fork() doubles the number of processes. The first fork() creates one child, giving 2 processes. Both the parent and the child then call fork() again, bringing the count to 4. Each of those 4 processes calls fork() once more, making the count 8, including the original parent (2^3 = 8).
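On a POSIX system, the count can be checked empirically. The sketch below performs the same three fork() calls (in Python rather than C, purely for illustration) and has every resulting process write one byte to a shared pipe; the original parent then counts the bytes:

```python
import os

root = os.getpid()            # pid of the original parent
r, w = os.pipe()              # shared by every forked process

children = []
for _ in range(3):            # the three fork() calls from the program
    pid = os.fork()
    if pid == 0:
        children = []         # a new child starts its own child list
    else:
        children.append(pid)

# all 2^3 processes reach this point
for pid in children:
    os.waitpid(pid, 0)        # wait for direct children (they wait for theirs)
os.write(w, b"x")             # one byte per process
if os.getpid() != root:
    os._exit(0)               # descendants are finished

total = len(os.read(r, 64))
print(total)                  # → 8
```

Each process waits for its direct children before writing, so by the time the original parent reads, all 8 bytes are in the pipe.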

3(B) Describe the differences among short-term, medium-term, and long-term scheduling.

Process scheduling is an important activity performed by the process manager: it removes a process from the CPU and schedules the next one. Removal and dispatch are based on multiple factors such as process completion, priority, and I/O requirements. Process scheduling plays an important role in multiprogramming operating systems. There are mainly three types of schedulers in operating systems: short-term schedulers, medium-term schedulers, and long-term schedulers.

Short-Term Scheduler:

The short-term scheduler selects processes from the ready queue that are residing in main memory and allocates the CPU to one of them. Thus, it plans the scheduling of processes that are in the ready state. It is also known as the CPU scheduler. Compared to the long-term scheduler, the short-term scheduler has to be used very often, i.e., its frequency of execution is high. The short-term scheduler is invoked whenever an event occurs. Such an event may lead to the interruption of the current process, or it may provide an opportunity to preempt the currently running process in favor of another. Examples of such events are:
1. Clock ticks (time-based interrupts)
2. I/O interrupts and I/O completions.


3. Operating system calls


4. Sending and receiving of signals.
5. Activation of the interactive program.

Medium-term Scheduler:

The medium-term scheduler is required when a suspended or swapped-out process is to be brought back into the pool of ready processes. A running process may be suspended because of an I/O request or a system call. Such a suspended process is removed from main memory and stored in a swap queue in secondary memory in order to create space for some other process in main memory. This is done because there is a limit on the number of active processes that can reside in main memory. The medium-term scheduler is in charge of handling swapped-out processes. It has nothing to do while a process remains suspended; however, once the suspending condition is removed, the medium-term scheduler attempts to allocate the required amount of main memory and swap the process back in, making it ready. Thus, the medium-term scheduler plans the CPU scheduling for processes that have been waiting for the completion of another process or an I/O task.

Long-term Scheduler:

The long-term scheduler works with the batch queue and selects the next batch job to be executed; thus, it plans the CPU scheduling for batch jobs. Processes that are resource-intensive and have a low priority are called batch jobs. These jobs are executed in a group or bunch; for example, a user requests the printing of a bunch of files. We can also say that the long-term scheduler selects processes or jobs from a secondary storage device (e.g., a disk) and loads them into memory for execution. It is also known as the job scheduler. The long-term scheduler is so called because the time for which its scheduling decision remains valid is long. This scheduler shows the best performance by selecting a good process mix of I/O-bound and CPU-bound processes. I/O-bound processes are those that spend most of their time on I/O rather than on computation; a CPU-bound process is one that spends most of its time on computations rather than generating I/O requests.


4(a) An airline reservation system, using a centralized database, services user requests concurrently. Is it preferable to use threads rather than processes in this system?

An airline reservation system is an example of a real-time distributed system that involves heavy concurrency. Using threads is beneficial because it leads to:
1. Resource Efficiency: Threads share the same address space, whereas
processes have separate memory spaces. As a result, threads require fewer
resources (memory and system overhead) compared to processes. In a system
with multiple concurrent user requests, using threads allows for more efficient
resource utilization.
2. Faster Communication: Threads within the same process can communicate
more efficiently than processes, as they share memory. This facilitates faster
communication and data sharing, which is essential when handling concurrent
requests accessing a centralized database.
3. Scalability: Threads are lightweight and easier to create compared to
processes. In a system where scalability is crucial, using threads can allow for

m
easier and more efficient scaling to accommodate
odate increasing numbers of

o
concurrent users without putting excessive strain on system resources.

.c
4. Synchronization: Threads within the same process can easily synchronize access to shared resources (like the centralized database) using mechanisms such as mutexes or semaphores. Synchronization among threads is typically easier and more efficient than the inter-process communication required between separate processes.

5. Context Switching Overhead: Context switching between threads within the same process is generally faster than context switching between different processes. This leads to lower overhead when managing and switching between concurrent tasks.


However, using threads also introduces complexities related to synchronization and shared-resource management. Careful design and implementation are necessary to ensure thread safety and to prevent issues such as race conditions or deadlocks when accessing the centralized database concurrently.

In summary, in the context of a system handling concurrent user requests against a centralized database, threads are often preferred because of their efficient resource utilization, faster communication, scalability, and lower overhead compared to processes. However, proper synchronization mechanisms and careful programming practices are crucial when using threads to prevent concurrency-related issues.


(b) Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context-switching overhead is 0.1 millisecond and that all processes are long-running tasks. Describe the CPU utilization for a round-robin scheduler when:
(i) the time quantum is 1 millisecond;
(ii) the time quantum is 10 milliseconds.

(a) Time quantum is 1 ms.

Whether CPU-bound or I/O-bound, a process switches every millisecond and incurs a 0.1 ms overhead each time it does so. Thus, for every 1.1 ms of elapsed time, the CPU does useful work for only 1 ms.

So CPU utilization is (1 / 1.1) × 100 ≈ 91%.

(b) Time quantum is 10 ms.
Here, there is a difference between CPU-bound and I/O-bound processes. A CPU-bound process can use the full 10 ms time slot, whereas an I/O-bound process runs for only 1 ms before blocking on I/O and yielding to the next process in the queue.

So the CPU-bound process takes 10 ms, and the 10 I/O-bound processes together take 10 × 1 = 10 ms of CPU time. Each of the 11 dispatches incurs the 0.1 ms switching overhead, so the total elapsed time is 10 × 1.1 + 10.1 = 21.1 ms, of which 20 ms is useful work.

Thus the CPU utilization is (20 / 21.1) × 100 ≈ 95%.
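The same figures can be verified with a few lines of arithmetic (the task counts and the 0.1 ms switch cost come from the problem statement):

```python
# quantum = 1 ms: every task runs 1 ms, then a 0.1 ms context switch occurs
util_q1 = 1 / (1 + 0.1)

# quantum = 10 ms: the CPU-bound task uses its full 10 ms quantum; each of the
# 10 I/O-bound tasks blocks after 1 ms; all 11 dispatches cost 0.1 ms each
useful  = 10 * 1 + 10                   # ms of real work per cycle
elapsed = 10 * (1 + 0.1) + (10 + 0.1)   # ms of wall-clock time per cycle
util_q10 = useful / elapsed

print(round(100 * util_q1, 1), round(100 * util_q10, 1))   # → 90.9 94.8
```

These round to the 91% and 95% quoted above.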

5.(A) Clearly justify why deadlocks cannot arise in a bounded buffer producers-
consumers system.

In a bounded buffer producers-consumers system, deadlocks are prevented due to


the inherent structure and mechanics of the system:

1. Finite Buffer Size: The key characteristic of a bounded buffer system is that it
has a fixed, limited capacity to hold items. This limitation ensures that the


buffer can only accommodate a certain number of items, preventing an


infinite accumulation of items and avoiding a deadlock scenario due to
resource exhaustion.
2. Synchronization Mechanisms: Producers and consumers in a bounded
buffer system use synchronization mechanisms (like semaphores, mutexes, or
condition variables) to coordinate access to the buffer. These mechanisms
ensure that producers do not produce when the buffer is full and consumers
do not consume when the buffer is empty. As a result, there's a controlled
flow of items in and out of the buffer, preventing situations where all
processes are waiting indefinitely for resources.
3. No Circular Dependencies: Deadlocks often occur when multiple processes hold resources and wait for each other's resources in a circular dependency. In a bounded buffer system, there is no circular dependency among the resources (buffer slots) or the processes (producers and consumers). The buffer slots are finite, and the processes access them in a controlled manner, avoiding circular-wait scenarios.

4. Finite Access Requirements: Both producers and consumers in this system have finite access requirements. Producers need to store a limited number of items in the buffer, and consumers need to retrieve a finite number of items from it. This finite and controlled access prevents scenarios where a process waits indefinitely for an unbounded number of resources, which often leads to deadlocks.
Given these factors (finite buffer size, controlled access, synchronization mechanisms, and absence of circular dependencies), deadlocks cannot arise in a bounded buffer producers-consumers system. The limited capacity of the buffer and the controlled access ensure that processes can always make progress, whether by producing items, consuming items, or waiting for space or items in the buffer, so the conditions necessary for a deadlock never hold.
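The synchronization described in point 2 is classically built from two counting semaphores plus a mutex. A minimal Python sketch (buffer capacity and workload chosen arbitrarily for illustration):

```python
import threading, collections

CAPACITY = 4
buffer = collections.deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots; producers wait on it
full = threading.Semaphore(0)          # counts filled slots; consumers wait on it
mutex = threading.Lock()               # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()                # block only if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                 # wake a waiting consumer

def consumer(n, out):
    for _ in range(n):
        full.acquire()                 # block only if the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()                # wake a waiting producer

# a producer never holds `full` while waiting on `empty` (nor vice versa),
# so the circular-wait condition for deadlock can never arise
out = []
p = threading.Thread(target=producer, args=(range(100),))
c = threading.Thread(target=consumer, args=(100, out))
p.start(); c.start(); p.join(); c.join()
print(out == list(range(100)))   # → True
```

Note the semaphore acquisition order: each side waits on exactly one semaphore while holding nothing, which is precisely why the scheme cannot deadlock.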

(b) Consider a system consisting of four resources of the same type that are
shared by three processes, each of which needs at most two resources. Show
that the system is deadlock-free.

If the system is deadlocked, it implies that each process is holding one resource and is waiting for
one more. Since there are 3 processes and 4 resources, one process must be able to obtain two
resources. This process requires no more resources and therefore it will return its resources when
done.

Given:


4 resources of the same type


3 processes, each needing at most 2 resources

Let's represent the resources as R1, R2, R3, and R4 and the processes as P1, P2, and
P3.

Process maximum resource requirements:

P1 needs at most 2 resources.


P2 needs at most 2 resources.
P3 needs at most 2 resources.

Considering the maximum resource requirements of each process and the total
available resources, let's analyze the potential allocation scenarios:

In the worst case, each of the three processes first acquires one resource. Allocated: 3. Remaining available: 1.

The free resource can then be granted to whichever process requests its second resource, say P1. P1 now holds its maximum of two resources, needs nothing more, runs to completion, and releases both.

The two released resources are enough to satisfy the remaining need of P2 (one more resource) and then of P3, so every process eventually completes.

A deadlock would require every process to hold at least one resource while waiting forever for another. Since each process needs at most two resources and there are four in total, at least one process can always be granted its full allocation, finish, and release its resources. Hence no circular wait can persist, and the system is deadlock-free.
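This pigeonhole argument can also be confirmed exhaustively. The sketch below (assuming processes request one resource at a time up to their maximum of two) enumerates every way the four resources can be held by the three processes and checks that some process can always complete:

```python
from itertools import product

RESOURCES, MAX_NEED, PROCS = 4, 2, 3

def some_process_can_finish(held):
    free = RESOURCES - sum(held)
    # a process whose remaining need fits in the free pool can run to completion
    return any(free >= MAX_NEED - h for h in held)

# every reachable holding pattern: each process holds 0..2, never more than 4 total
states = [h for h in product(range(MAX_NEED + 1), repeat=PROCS)
          if sum(h) <= RESOURCES]
deadlock_free = all(some_process_can_finish(h) for h in states)
print(deadlock_free)   # → True
```

In every reachable state either some process already holds its maximum (and will finish), or at least one resource is free to complete some process's allocation.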

6. Five batch jobs, A through E, arrive at a computer center at essentially the same time. They have estimated running times of 15, 9, 3, 6, and 12 minutes, respectively. Their (externally defined) priorities are 6, 3, 7, 9, and 4, respectively, with a lower value corresponding to a higher priority. For each of the following scheduling algorithms, determine the turnaround time for each process and the average turnaround time for all jobs. Ignore process-switching overhead. Explain how you arrived at your answers. In the last three cases, assume that only one job at a time runs until it finishes.

(a).round robin with a time quantum of 1


(b).priority scheduling
(c).FCFS (run in order 15, 9, 3, 6, and 12)
(d).shortest job first
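The worked tables that originally followed this question are not legible in this copy, but the four schedules are easy to reproduce with a short simulation. All jobs arrive at t = 0, so each job's turnaround time equals its completion time; for round robin a 1-minute quantum is assumed, with the ready queue cycled in alphabetical order:

```python
jobs = {"A": 15, "B": 9, "C": 3, "D": 6, "E": 12}     # running times, minutes
priority = {"A": 6, "B": 3, "C": 7, "D": 9, "E": 4}   # lower value = higher priority

def run_to_completion(order):
    """Non-preemptive: run each job to the end, in the given order."""
    t, finish = 0, {}
    for j in order:
        t += jobs[j]
        finish[j] = t
    return finish

def round_robin(quantum=1):
    left, t, finish = dict(jobs), 0, {}
    queue = list(jobs)
    while queue:
        for j in list(queue):            # one sweep of the ready queue
            step = min(quantum, left[j])
            t += step
            left[j] -= step
            if left[j] == 0:
                finish[j] = t
                queue.remove(j)
    return finish

rr   = round_robin()
prio = run_to_completion(sorted(jobs, key=lambda j: priority[j]))
fcfs = run_to_completion(["A", "B", "C", "D", "E"])
sjf  = run_to_completion(sorted(jobs, key=jobs.get))
for name, f in [("RR", rr), ("priority", prio), ("FCFS", fcfs), ("SJF", sjf)]:
    print(name, f, "average =", sum(f.values()) / len(f))
```

Under these assumptions the average turnaround times come out to 32.2 minutes for round robin, 30.0 for priority, 28.8 for FCFS, and 21.0 for shortest job first; SJF minimizes the average, as expected.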


7(A) A bridge on a busy highway is damaged by a flood. One-way traffic is to be instituted on the bridge by permitting vehicles traveling in opposite directions to use the bridge alternately. The following rules are formulated for use of the bridge:
(a) At any time, the bridge is used by vehicle(s) traveling in one direction only.
(b) If vehicles are waiting to cross the bridge at both ends, only one vehicle from one end is allowed to cross the bridge before a vehicle from the other end starts crossing the bridge.
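One possible solution is a monitor built around a condition variable, sketched below in Python for illustration. Rule (a) is enforced by the direction check; the `turn` variable is an assumed device for enforcing rule (b)'s alternation whenever both ends have waiters:

```python
import threading

class Bridge:
    def __init__(self):
        self.cv = threading.Condition()
        self.on_bridge = 0        # vehicles currently crossing
        self.direction = 0        # direction of current traffic (0 or 1)
        self.waiting = [0, 0]     # vehicles queued at each end
        self.turn = 0             # which end goes next when both are waiting

    def enter(self, d):           # called by a vehicle at end d (0 or 1)
        with self.cv:
            self.waiting[d] += 1
            while ((self.on_bridge > 0 and self.direction != d) or  # rule (a)
                   (self.waiting[1 - d] > 0 and self.turn != d)):   # rule (b)
                self.cv.wait()
            self.waiting[d] -= 1
            self.direction = d
            self.on_bridge += 1
            if self.waiting[1 - d] > 0:
                self.turn = 1 - d     # rule (b): let the other end go next

    def leave(self):
        with self.cv:
            self.on_bridge -= 1
            if self.on_bridge == 0:
                self.cv.notify_all()  # bridge empty: waiters recheck
```

A vehicle calls `enter(direction)`, crosses, then calls `leave()`. When only one end has traffic, same-direction vehicles may pipeline onto the bridge; as soon as the other end has a waiter, `turn` forces the crossings to alternate one vehicle at a time.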


8. Consider a simple paging system with the following parameters: 2^32 bytes of physical memory; page size of 2^10 bytes; 2^16 pages of logical address space.
a. How many bits are in a logical address?
b. How many bytes are in a frame?
c. How many bits in the physical address specify the frame?
d. How many entries are in the page table?
e. How many bits are in each page-table entry?
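The answers follow directly from the given powers of two; a quick check (ignoring valid and protection bits, which would lengthen each entry in part (e)):

```python
phys_bits   = 32     # 2^32 bytes of physical memory
offset_bits = 10     # 2^10-byte pages
page_bits   = 16     # 2^16 pages of logical address space

logical_bits  = page_bits + offset_bits   # (a) 26-bit logical address
frame_bytes   = 2 ** offset_bits          # (b) 1024 bytes per frame
frame_bits    = phys_bits - offset_bits   # (c) 22 bits select the frame
table_entries = 2 ** page_bits            # (d) 65536 page-table entries
entry_bits    = frame_bits                # (e) 22 bits per entry (frame number only)

print(logical_bits, frame_bytes, frame_bits, table_entries, entry_bits)
# → 26 1024 22 65536 22
```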

8(B) Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?
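The three placements can be traced with a small simulation, assuming the usual convention for this exercise: when a process is placed in a partition, the unused remainder stays available as a smaller hole:

```python
def place(partitions, procs, choose):
    holes = list(partitions)          # current free space in each partition
    where = {}
    for p in procs:
        fits = [i for i, h in enumerate(holes) if h >= p]
        if not fits:
            where[p] = None           # no hole large enough: process must wait
            continue
        i = choose(fits, holes)
        where[p] = partitions[i]      # report the original partition size
        holes[i] -= p                 # the chosen hole shrinks
    return where

partitions = [100, 500, 200, 300, 600]   # KB, in order
procs = [212, 417, 112, 426]             # KB, in order

first_fit = place(partitions, procs, lambda f, h: f[0])
best_fit  = place(partitions, procs, lambda f, h: min(f, key=lambda i: h[i]))
worst_fit = place(partitions, procs, lambda f, h: max(f, key=lambda i: h[i]))
print(first_fit)   # → {212: 500, 417: 600, 112: 500, 426: None}
print(best_fit)    # → {212: 300, 417: 500, 112: 200, 426: 600}
print(worst_fit)   # → {212: 600, 417: 500, 112: 600, 426: None}
```

Best fit makes the most efficient use of memory here: it is the only algorithm that places all four processes; under first fit and worst fit, the 426 KB process must wait.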


9(A) Consider a demand-paging system with a paging disk that has an average access and transfer time of 20 milliseconds. Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access. Thus, each memory reference through the page table takes two accesses. To improve this time, we have added an associative memory that reduces access time to one memory reference if the page-table entry is in the associative memory.

Assume that 80 percent of the accesses are in the associative memory and that, of those remaining, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time?
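A common way to work this out, assuming an associative-memory hit costs a single 1 μs reference, a miss without a fault costs two references, and a page fault adds the full 20 ms disk time on top of the two references:

```python
us = 1.0            # one memory access, in microseconds
disk = 20_000.0     # 20 ms paging-disk access, in microseconds

hit   = 0.80 * (1 * us)           # entry found in associative memory
miss  = 0.18 * (2 * us)           # page table in memory, no fault
fault = 0.02 * (2 * us + disk)    # page fault: disk I/O plus the two accesses

eat = hit + miss + fault
print(eat)   # → 401.2 microseconds
```

So the effective access time is about 401.2 microseconds, dominated almost entirely by the 2 percent of references that fault.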


(b) The open-file table is used to maintain information about files that are currently open. Should the operating system maintain a separate table for each user, or just maintain one table that contains references to files that are currently being accessed by all users? If the same file is being accessed by two different programs or users, should there be separate entries in the open-file table?
By keeping a central open-file table, the operating system can perform checks that would otherwise be infeasible. Consider a file that is currently being accessed by one or more processes. If the file is deleted, it should not be removed from the disk until all processes accessing it have closed it. This check can be performed only if there is centralized accounting of the number of processes accessing the file. On the other hand, if two processes are accessing the same file, separate state must be kept to track which parts of the file each process is currently accessing. This requires the operating system to keep separate entries for the two processes.

The decision of whether the operating system should maintain a separate open-file table for each user or a single table containing references to files accessed by all users depends on several factors:

1. Isolation and Security: If the system requires strong isolation between users, maintaining separate open-file tables can enhance security and privacy. Each user would have their own table, reducing the risk of unauthorized access or interference between users' file accesses.

2. Resource Efficiency: Maintaining a single open-file table shared among all users can be more resource-efficient, as it avoids redundancy. It reduces memory overhead and simplifies management compared to maintaining multiple tables.

3. Concurrency and File Access: If the same file is accessed by different users or programs simultaneously, having separate entries in the open-file table for each instance of file access is beneficial. This allows each access instance to maintain its own file position, access rights, and other relevant information without conflicting with other users or programs accessing the same file.

4. Conflict Resolution: When multiple users or programs access the same file, maintaining separate entries in the open-file table helps in managing concurrent access. It allows the operating system to handle conflicts, such as different users writing to the same file at the same time, by enforcing appropriate synchronization and access-control mechanisms.

In practice, the operating system often maintains a single system-wide open-file


table containing references to files being accessed by all users, with separate entries
for each instance of file access. This approach balances resource efficiency while


enabling proper management and control of file access among different users and
programs.

To summarize, maintaining separate open-file tables for each user can enhance
security and isolation but might increase resource usage. Using a single table with
separate entries for each instance of file access allows for better concurrency control
and resource management while ensuring proper isolation between users accessing
the same file.
