
PC-CS402 OS Question Bank

The document contains multiple choice and descriptive questions related to operating systems, covering topics such as resource management, process management, memory management, and file system management. It includes comparisons of monolithic and bi-layered architectures, as well as the role of the kernel in an operating system. The questions are designed to assess understanding and knowledge of key concepts in operating systems.

Uploaded by jn.aditya21

The figures in the margin indicate full marks.

Candidates are required to give their answers in their own words as far as practicable.

Group-A
(Multiple Choice Type Questions)
i) To access the services of the operating system, the interface is provided by the ___________
a) Library b) API c) Assembly instructions d) System calls
ii) What is the full form of BIOS?
a) Between input-output system b) Binary input-output system
c) Basic input/output system d) All of the above
iii) What does RR-scheduling stand for?
a) Remaining Rest scheduling b) Robin Round scheduling
c) Round Robin scheduling d) None of the above
iv) What is an operating system?
a) interface between the hardware and application programs
b) collection of programs that manages hardware resources
c) system service provider to the application programs d) All of these.
v) What is the main function of the command interpreter?
a) to provide the interface between the kernel modules and application program
b) to handle the files in the operating system
c) to get and execute the next user-specified command
d) None of these options.
vi) Which one of the following is true?
a) kernel remains in the memory during the entire computer session
b) kernel is the program that constitutes the central core of the operating system
c) kernel is the first part of the operating system to load into memory during booting d) All of the above
vii) ____________ is not an approach to Handling Deadlock in OS.
a) Detect and recover b) Deadlock Avoidance
c) Virtual memory d) Deadlock prevention
viii) Which of the following usually provides the interface to get access to the services of OS?
a) System Call b) Library
c) API d) All of the above

ix) What is the use of a banker's algorithm?


a) Rectify deadlock b) Prevent deadlock
c) Solve deadlock d) None of the above

x) What does RR-scheduling stand for?


a) Remaining Rest scheduling b) Robin Round scheduling
c) Round Robin scheduling d) None of the above
xi) Where is the operating system placed in the memory?
a) either low or high memory (depending on the location of interrupt vector)
b) in the low memory c) in the high memory d) none of these
xii) In a timeshare operating system, when the time slot assigned to a process is completed, the process
switches from the current state to?
a) Suspended state b) Terminated state
c) Ready state d) Blocked state
xiii) To access the services of the operating system, the interface is provided by the ___________

a) Library b) API c) Assembly instructions d) System calls

xiv) What else is a command interpreter called?

a) Prompt b) Kernel c) Shell d) Command

xv) What is the full name of FAT?

a) File attribute table b) File allocation table c) Font attribute table d) Format allocation table

xvi) BIOS is used _________

a) By operating system b) By compiler c) By interpreter d) By application software
xvii) What is meant by booting in the operating system?
a) Restarting the computer b) Installing a program c) Scanning d) Turning off
xviii) When does a page fault occur?
a) The page is present in memory. b) A deadlock occurs.
c) The page is not present in memory. d) Buffering occurs.
xix) Banker's algorithm is used ______________
a) To prevent deadlock b) For deadlock recovery c) To solve deadlock d) None of these
xx) If the page size increases, the internal fragmentation also ________

a) Decreases b) Increases c) Remains constant d) None of these
(xxi) The semaphore whose value is either zero or one is known as
a) Binary semaphore b) Mutex semaphore
c) Counting semaphore d) Exclusion semaphore
(xxii) When a process does not get access to the resource, it loops continually for the resource and wastes
CPU cycles. It is known as,
a) Deadlock b) Livelock c) Mutuallock d) Spinlock
(xxiii) The operations that cannot be overlapped or interleaved with executions of any other operations are
known as,
a) Mutual exclusion b) Atomic operation
c) Progress d) Bounded wait
(xxiv) Part of the code where more than one process accesses and updates a shared resource
a) Entry section b) Critical section
c) Exit section d) Remainder section
(xxv) ______________ is an interprocess communication tool that protects shared resources.
a) Message queue b) Shared Memory
c) Semaphore d) Signal
(xxvi) Full and empty semaphores for mutex implementation of Producer-Consumer problem are respectively
initialized with
a) 0 and 1 b) 1 and 0 c) 0 and n d) n and 0
(xxvii) Time taken by disk head to move from one cylinder to another one is
a) Seek time b) Transfer time
c) Rotational latency d) Disk latency
(xxviii) C-Look achieves benefits of both
a) Scan and Look b) C-SCAN and LOOK
c) Scan and FCFS d) C-Scan and SSTF
(xxix) _______________ disk scheduling algorithm is known as the ‘Elevator algorithm’.
(a) SCAN (b) FIFO (c) SSTF (d) LOOK
(xxx) When multiple users sent print requests concurrently, all requests are handled through

(a) Caching (b) Buffering (c) Spooling (d) Queuing
(v) The device-specific code is written into
(a) Device controller (b) Device driver (c) Kernel I/O subsystem (d) Kernel device handler

(xxxi) Fork is
(a) Creation of new process (b) Dispatching of new process (c) Increasing priority of a process
(d) Creation of new thread from process
(xxxii) The default remedy of Starvation is
a) Ageing b) Critical Section Solution
c) Mutual Exclusion d) Buffering
(xxxiii) Total time taken by a process to complete execution is
a) Waiting time b) Turnaround time
c) Response time d) Throughput
(xxxiv) The optimal scheduling algorithm is a) FCFS b) SJF c) RR d) Priority
(xxxv) Throughput is
(a) Number of processes completed per time unit
(b) Completion time of the whole process
(c) Time for waiting in ready queue
(d) Number of processes loaded into main memory per unit time
(xxxvi) RAG is a useful tool to represent _____________ in a system.
(a) Deadlock (b) Resource Allocation (c) Multiprogramming (d) Race Condition
(xxxvii) An edge from a resource instance to a process in RAG is known as
(a) Assignment edge (b) Claim edge (c) Request edge (d) Allocation edge
(xxxviii)The semaphore whose value is either zero or one is known as
(a) Binary semaphore (b) Mutex semaphore
(c) Counting semaphore (d) Exclusion semaphore

(xxxix) When a process does not get access to the resource, it loops continually for the resource and wastes
CPU cycles. It is known as,
(a) Deadlock (b) Livelock (c) Mutuallock (d) Spinlock
(xl) The operations that cannot be overlapped or interleaved with executions of any other operations are
known as,
(a) Mutual exclusion (b) Atomic operation (c) Progress (d) Bounded wait

(xli) Part of the code where more than one process accesses and updates a shared resource
(a) Entry section (b) Critical section (c) Exit section (d) Remainder section
(xlii) Full and empty semaphores for mutex implementation of Producer-Consumer problem are
respectively initialized with
(a) 0 and 1 (b) 1 and 0 (c) 0 and n (d) n and 0
(xliii) System software includes
(a) MS-Word, Excel, Powerpoint (b) Editor, OS (c) Assembler, Loader, Browser
(d) Assembler, Loader, OS
(xliv) Banker’s Algorithm is a/an
(a) Deadlock Avoidance Algorithm (b) Deadlock Recovery Algorithm

(c) Deadlock Prevention Algorithm (d) All of the above


(xlv) The address of the next instruction to be executed by the current process is provided by the __________
(a) CPU registers (b) Program counter (c) Process stack (d) Pipe
(xlvi) The process is swapped out of memory, and later swapped back into memory, by the ______
(a) Long-Term Scheduler (b) Short-Term Scheduler (c) Medium-term Scheduler
(d) None of the above
(xlvii) The time taken for the desired sector to rotate to the disk head is called ____________
(a) positioning time (b) random access time (c) seek time (d) rotational latency
(xlviii) Which of the following page replacement algorithms suffers from Belady’s Anomaly?
(a) Optimal replacement (b) LRU (c) FIFO (d) Both optimal replacement and FIFO
(xlix) A memory page containing a heavily used variable that was initialized very early and is in constant
use is removed, then the page replacement algorithm used is ____________
(a) LRU (b) LFU (c) FIFO (d) None of the mentioned
(l) Which one is not a multithreading model?
(a) Many-to-One (b) One-to-One (c) One-to-Many (d) Many-to-Many

(li) With respect to time, which of the following is not a CPU scheduling criterion?
a) Response time
b) Waiting time
c) Access time
d) Turn-around time

(lii) Base and Limit registers are used in


a) Non-contiguous memory allocation
b) Contiguous memory allocation
c) Both
d) None

(liii) _______________ eliminates unnecessary seek operations.


(a) SCAN (b) FIFO (c) SSTF (d) LOOK

(liv) Disk Access Time is the sum of


a) Seek time, rotational latency and transfer time
b) Response time, seek time and rotational latency
c) Response time, seek time and transfer time
d) Seek time, rotational latency and response time

(lv) ______________ is divided into Device Driver and Device Controller


(a) I/O software (b) I/O hardware
(c) I/O software and I/O hardware (d) I/O hardware and I/O software

(lvi) User thread is _____________ compared to kernel thread


a) slower (b) faster (c) of equal execution speed (d) not comparable

(lvii) User threads are created and managed by


a) Kernel b) User level thread library
c) Shell d) Both (b) and (c)

(lviii) Total time taken by a process to produce first response to the user
a) Waiting time b) Turnaround time
c) Response time d) Throughput

(lix) System software includes


a) MS-Word, Excel, Powerpoint
b) Editor, OS
c) Assembler, Loader, Browser
d) Assembler, Loader, OS

(lx) Turn Around Time is


(a) Number of processes completed per time unit
(b) Submission to Completion time of the process
(c) Time for waiting in ready queue
(d) Number of processes loaded into main memory per unit time

(lxi) Which of the following page replacement algorithms suffers from Belady’s Anomaly?
a) Optimal replacement
b) LRU
c) FIFO
d) Both optimal replacement and FIFO

(lxii) ___________ can be defined as the mechanism that notifies the exceptional conditions and alarms.
(a) Semaphore (b) Message Passing system (c) Signal (d) Critical Section

Group-B
(Descriptive Type Questions)

2. Describe the role of Operating System. [Module-1/CO1/(Understand/LOCQ)] 5


The operating system (OS) serves as the fundamental software that manages computer hardware
resources and provides services to software applications. Its primary roles include:

1. Resource Management: The OS efficiently allocates and manages system resources such as CPU
time, memory, disk space, and peripheral devices. It ensures that multiple programs can run
concurrently without interfering with each other.
2. Process Management: It oversees the execution of processes, which are instances of running
programs. This involves scheduling processes, handling interruptions, and managing process
communication and synchronization.
3. Memory Management: The OS manages the system's memory hierarchy, including primary
memory (RAM) and secondary storage (like hard drives or SSDs). It handles tasks such as memory
allocation, deallocation, and swapping to optimize memory usage.
4. File System Management: Operating systems provide a file system that organizes and stores data
on storage devices. This includes managing files, directories, access permissions, and file metadata.
5. Device Management: The OS controls peripheral devices such as keyboards, mice, printers, and
network interfaces. It provides device drivers to facilitate communication between hardware
devices and software applications.
6. User Interface: Operating systems offer user interfaces that allow users to interact with the
computer system. This can range from command-line interfaces (CLI) to graphical user interfaces
(GUI), providing a means for users to execute programs, manage files, and configure system
settings.
7. Security: OSs implement security measures to protect the system and its data from unauthorized
access, viruses, malware, and other threats. This includes user authentication, access control,
encryption, and monitoring for suspicious activities.
8. Error Handling: The OS detects and handles errors that occur during system operation, such as
hardware failures, software crashes, or invalid user inputs. It may provide error messages, logging,
and recovery mechanisms to mitigate the impact of these errors.

3. Compare monolithic and bi-layered architecture of operating system.
[Module-1/CO1/(Analyse/IOCQ)] 5
Monolithic architecture:
1. The entire operating system kernel resides in a single executable binary.
2. All OS components, including process management, memory management, the file system, device drivers, and system calls, are tightly integrated into a single entity.
3. Communication between different components within the kernel is straightforward and efficient, as they are part of the same address space.
4. Monolithic kernels typically have better performance because there is less overhead in accessing system resources.
5. Examples: Unix, Linux, and Windows 9x.

Bi-layered architecture:
1. The operating system is divided into two distinct layers: user space and kernel space.
2. The user space contains user-level processes and applications, while the kernel space contains privileged OS components such as device drivers, memory management, and scheduling.
3. Communication between user space and kernel space occurs through well-defined interfaces, such as system calls.
4. This architecture provides better modularity and isolation, as user-level processes cannot directly access kernel resources.
5. Examples: FreeBSD, macOS, and the Windows NT line (Windows NT, 2000, XP, and 10).

4. Describe the role of Kernel of Operating System.


[Module-1/CO1/(Understand/IOCQ)] 5
The role of the kernel in an operating system:

1. Resource Management: Manages hardware resources like CPU time, memory, disk space, and peripheral
devices.
2. Process Management: Controls the execution of processes, including scheduling, creation, termination, and
communication.
3. Memory Management: Handles memory allocation, deallocation, and protection to optimize memory
usage.
4. File System Management: Organizes and stores data on storage devices, managing files, directories, and
file I/O operations.
5. Device Management: Controls peripheral devices through device drivers, handling device interrupts and
access.
6. System Calls: Exposes interfaces for user-level programs to request services from the kernel.
7. Interrupt Handling: Manages hardware interrupts efficiently to maintain system responsiveness.
8. Security: Enforces security policies and controls access to system resources to protect against unauthorized
access and malware.

5. Describe the role of Shell of Operating System.


[Module-1/CO1/(Understand/IOCQ)] 5
The shell is a command-line interface (CLI) that acts as the primary user interface to interact with the
operating system. Its role encompasses several key functions:

1. Command Execution: The shell interprets commands entered by the user and executes them. These
commands can include system utilities, application programs, or scripts.
2. Environment Customization: Users can customize their shell environment by setting environment
variables, defining aliases, and configuring shell options to suit their preferences and workflow.
3. File Management: The shell provides commands for navigating the file system, creating, copying, moving,
and deleting files and directories, as well as for viewing file contents and changing file permissions.
4. Process Management: Users can manage processes using shell commands, such as starting, pausing,
resuming, and terminating processes. The shell also provides facilities for running processes in the
background and managing job control.
5. Input/Output Redirection: The shell supports input/output redirection, allowing users to redirect the
input or output of commands to or from files, pipes, or other processes.
6. Pipeline Operations: Users can create pipelines by chaining together multiple commands, with the
output of one command serving as the input to the next command in the pipeline.
7. Scripting: Shells support scripting languages that allow users to write shell scripts—a series of
commands stored in a file—to automate tasks, create complex workflows, and customize system behavior.
8. User Interface Customization: Users can customize the appearance and behavior of the shell prompt,
including its color, format, and display of system information.

6.a) Explain what is meant by Process Scheduling with respect to OS.


[Module-2/CO2/(Understand/LOCQ)] 2
Process scheduling refers to the mechanism by which the operating system manages and allocates CPU time
to multiple processes competing for resources. It involves selecting processes from the ready queue and
assigning CPU time to them based on scheduling algorithms. The goal is to optimize system performance by
maximizing CPU utilization, minimizing response time, and ensuring fairness among processes.

b) Define a system call with respect to OS. Also, mention its importance in an OS.
[Module-1/CO1/(Understand/IOCQ)] 3

A system call is a mechanism provided by the operating system that allows user-level processes to request
services from the kernel. It acts as an interface between user-space applications and the kernel, enabling
processes to access privileged functionalities and resources that are otherwise restricted.
Importance in an OS:
Resource Management, Interprocess Communication, File System Operations, Process Control etc.
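
To make this concrete, here is a minimal Python sketch (illustrative only; `demo.txt` is a hypothetical scratch file) showing user code reaching the kernel through the thin system-call wrappers in the `os` module:

```python
import os

# os.getpid() wraps the getpid() system call: the kernel, not the
# user-space library, supplies the answer.
pid = os.getpid()

# File I/O through os.open/os.read/os.write maps directly onto the
# open(), read() and write() system calls, bypassing library buffering.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"written via a system call\n")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
os.remove("demo.txt")
```

Each of these calls traps into kernel mode, which is exactly why system calls are the gateway to privileged functionality.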

7. State the mutual roles played by shell & kernel of operating system for execution of a user program.
[Module-1/ CO1/(Remember/IOCQ)]5
The mutual roles played by the shell and kernel of an operating system for the execution of a user program
are as follows:

Shell Role:
1. Interpret User Commands: The shell interprets commands entered by the user via the command-line
interface or scripts.
2. Parse Commands: It parses the commands to determine the actions requested by the user.
3. Handle Input/Output: The shell manages input/output redirection and piping of commands.
4. Initiate System Calls: The shell initiates system calls to request services from the kernel, such as
process creation, file operations, and I/O operations.
5. Manage Environment: It manages the user's environment, including environment variables, aliases,
and shell options.
6. Provide User Interface: The shell provides a user-friendly interface for interacting with the operating
system and executing commands.
Kernel Role:
1. Execute System Calls: The kernel receives system calls from the shell and executes them on behalf of
the user.
2. Manage Processes: It manages the creation, scheduling, and termination of processes requested by the
shell.
3. Allocate Resources: The kernel allocates system resources, such as CPU time, memory, and I/O devices,
to processes as requested by the shell.
4. Handle I/O Operations: It handles input/output operations requested by user programs, including file
operations, device I/O, and network communication.
5. Ensure Security: The kernel enforces security policies and access controls to protect system resources
and prevent unauthorized access.

6. Provide System Services: It provides essential system services, such as memory management, file
system management, and device management, to user programs via system calls.

8. Describe and explain the process state transition diagram with a neat diagram.
[Module-2/ CO2/(Understand/LOCQ)] 6
A process state transition diagram illustrates the various states a process can go through during its lifetime in
an operating system. The states are as follows:
1. New: The process is being created. Resources such as memory space and process control blocks are
being allocated, but the process has not yet begun execution.
2. Ready: The process is ready to run and waiting for the CPU. It has been loaded into memory and is
awaiting execution. Multiple processes may be in the ready state, waiting to be scheduled.
3. Running: The process is currently being executed by the CPU. Only one process can be in the running
state at any given time on a single-core system. On a multi-core system, multiple processes can be
running simultaneously.
4. Blocked (Wait or Sleep): The process is unable to execute further until a certain event occurs, such as
waiting for user input or waiting for a resource to become available (e.g., I/O operation completion).
The process is moved to this state by the kernel.
5. Terminated: The process has finished execution either by completing its task or by explicitly being
terminated. Resources allocated to the process are released, and its process control block is removed.

9. Compare preemptive and non-preemptive process scheduling algorithms of operating system.


[Module-2/ CO2/(Analyse/IOCQ)] 5

10. Define a thread. Explain how it is different from a process.
[Module-2/CO2/(Understand/LOCQ)] 3
A thread is a single sequence of instructions that can be scheduled for execution within a process, allowing
for concurrent operations and efficient resource sharing.
Thread is different from a process:
A thread is a subset of a process, representing a single sequence of instructions that can be executed
independently. Threads within the same process share memory and resources, enabling concurrent
execution and communication. Processes, on the other hand, are independent instances of programs with
their own memory space and resources, managed by the operating system.
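
The shared-memory distinction can be shown with a short Python sketch (a minimal illustration): the worker threads below all update one module-level variable, which separate processes could not do without explicit interprocess communication.

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    # All threads of this process see the same 'counter'; a child
    # process would receive its own copy and leave this one untouched.
    global counter
    for _ in range(10000):
        with lock:   # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Four threads, 10000 increments each: counter ends at 40000.
```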

11. a) Explain how a new process can be created in an OS


[Module-2/ CO1/(Understand/IOCQ)] 4
A new process can be created in an operating system:

1. Initialization: The operating system allocates necessary resources for the new process, including
memory space, process control block, and other required data structures.
2. Copying: If the new process is created by forking an existing process (as in Unix-like systems), the
OS duplicates the entire address space of the parent process, including code, data, and stack
segments, into the memory space of the child process.
3. Loading: For programs started from executables, the OS loads the program's code and data into
memory from the executable file, setting up the initial program counter and stack pointer.
4. Setup: The OS initializes the process control block (PCB) for the new process, which contains
information such as process ID, program counter, stack pointer, CPU registers, and other process-
specific data.

5. Context Switch: If necessary, the OS performs a context switch to switch from the currently
running process to the newly created process, allowing it to start execution.
6. Execution: The new process begins execution from its starting point, either at the beginning of the
program or at the point where it was forked from the parent process.
7. Scheduling: The OS schedules the new process for execution based on its scheduling algorithm,
considering factors such as process priority, CPU availability, and other system load metrics.
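
On Unix-like systems, the fork-based creation described in step 2 can be sketched in Python (a minimal illustration; `os.fork` is POSIX-only, and the exit value 7 is arbitrary):

```python
import os

# fork() duplicates the calling process; the child receives a copy of
# the parent's address space and resumes from the same point.
pid = os.fork()

if pid == 0:
    # Child branch: fork() returned 0 here. Report back to the parent
    # through the exit status and terminate immediately.
    os._exit(7)
else:
    # Parent branch: fork() returned the child's PID. Wait for the
    # child to finish and decode its exit status.
    _, status = os.waitpid(pid, 0)
    child_exit_code = os.WEXITSTATUS(status)
```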

b) Mention the needs of scheduling processes in an Operating System.


[Module-2/ CO2/(Understand/IOCQ)] 4
Scheduling processes in an operating system is necessary to:

1. Maximize CPU Utilization: Ensure that the CPU is utilized efficiently by selecting and executing
processes in a manner that minimizes idle time.
2. Fairness: Allocate CPU time fairly among competing processes to prevent starvation and ensure
equal opportunity for execution.
3. Response Time: Minimize the response time for interactive processes to maintain system
responsiveness and user satisfaction.
4. Throughput: Maximize the number of processes completed per unit of time to improve overall
system throughput and performance.
5. Prioritization: Allow for prioritization of processes based on factors such as importance, urgency,
or system requirements.
6. Resource Sharing: Manage access to system resources such as memory, I/O devices, and CPU time
among multiple processes to prevent conflicts and ensure efficient resource utilization.
7. Predictability: Provide predictable behavior and performance characteristics to support real-time
and time-critical applications.

12. Assuming that a low priority number denotes a high priority process, schedule the following system of processes using the Priority Scheduling algorithm (non-preemptive) and calculate the average turn-around time and average waiting time:

[Module-2/ CO2/(Apply/IOCQ)] 9
Process No Arrival Time Burst Time Priority No

P1 1 8 3

P2 0 1 1

P3 2 2 4

P4 0 1 5

P5 1 4 2
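
As an illustrative check, the schedule can be computed with a small Python sketch of non-preemptive priority scheduling (assuming a lower number means higher priority and ties are broken by arrival time). With the data above it produces the order P2, P5, P1, P3, P4:

```python
def priority_nonpreemptive(procs):
    """procs: list of (name, arrival, burst, priority); lower number =
    higher priority. Returns name -> (completion, turnaround, waiting)."""
    pending = list(procs)
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        # Pick the highest priority (lowest number); ties by arrival.
        name, at, bt, pr = min(ready, key=lambda p: (p[3], p[1]))
        time += bt                         # run to completion
        done[name] = (time, time - at, time - at - bt)
        pending.remove((name, at, bt, pr))
    return done

procs = [("P1", 1, 8, 3), ("P2", 0, 1, 1), ("P3", 2, 2, 4),
         ("P4", 0, 1, 5), ("P5", 1, 4, 2)]
result = priority_nonpreemptive(procs)
avg_tat = sum(v[1] for v in result.values()) / len(result)
avg_wt = sum(v[2] for v in result.values()) / len(result)
# Gantt order: P2 (0-1), P5 (1-5), P1 (5-13), P3 (13-15), P4 (15-16)
# Average turnaround time = 9.2, average waiting time = 6.0
```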

13. Calculate the average response time and average waiting time for the following system of processes to be
scheduled according to SRTF algorithm.
[Module-2/ CO2/(Apply/IOCQ)]9
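
The original process table is not reproduced in this extract, so the sketch below uses hypothetical data purely to illustrate the SRTF mechanics (preempt whenever an arrived process has a shorter remaining burst):

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Shortest Remaining Time
    First: each time unit, run the arrived process with the least
    remaining burst. Returns (first_run_times, completion_times)."""
    remaining = {name: bt for name, _, bt in procs}
    arrival = {name: at for name, at, _ in procs}
    first_run, completion, time = {}, {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle until something arrives
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])
        first_run.setdefault(n, time)  # first time on the CPU
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = time
    return first_run, completion

# Hypothetical data, not the table from the question:
procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
first_run, completion = srtf(procs)
avg_response = sum(first_run[n] - at for n, at, _ in procs) / len(procs)
avg_waiting = sum(completion[n] - at - bt for n, at, bt in procs) / len(procs)
```

With this sample data, response time is first-run minus arrival and waiting time is completion minus arrival minus burst, giving averages of 0.5 and 3.0 respectively.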

14. Mention the necessary and sufficient conditions for deadlock to occur in a system of processes.

[Module-3/ CO2/(Understand/IOCQ)] 5
In a system of processes, deadlock can occur when the following four necessary and sufficient conditions
are simultaneously present:

1. Mutual Exclusion: At least one resource must be held in a non-sharable mode, meaning only one
process can use it at a time.
2. Hold and Wait: Processes already holding resources may request additional resources while
waiting for other resources to be released. This condition creates situations where processes can
hold resources while waiting for others, leading to potential deadlock.
3. No Preemption: Resources cannot be forcibly taken away from processes holding them. If a
process holds a resource and requests additional resources that are not available immediately, it
must wait for them to be released by other processes voluntarily.
4. Circular Wait: A circular chain of two or more processes exists, where each process holds at least
one resource that is requested by the next process in the chain. This circular dependency results in
each process being unable to proceed until the resource it needs is released, leading to a deadlock
situation.

15. Identify the role of Resource Allocation Graph in detection of Deadlock.


[Module-3/ CO2/(Apply/IOCQ)] 5
The Resource Allocation Graph (RAG) plays a crucial role in detecting deadlock in a system of processes
by:

1. Visual Representation: Providing a graphical representation of the allocation of resources to


processes and the relationships between them.
2. Identifying Deadlock Conditions: Allowing analysts to visually identify deadlock conditions such
as circular wait, where a cycle in the graph indicates potential deadlock.
3. Cycle Detection: Enabling automated algorithms to detect cycles in the graph, indicating the
presence of deadlock.
4. Resource Status: Showing the status of resources, including which processes currently hold them
and which processes are waiting to acquire them.
5. Analysis: Facilitating the analysis of resource dependencies and potential deadlock scenarios
within the system.
6. Decision Making: Assisting system administrators and developers in making decisions to resolve
or prevent deadlock situations based on the information provided by the graph.
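
Cycle detection on a RAG can be sketched with a depth-first search (a minimal illustration; with single-instance resources, a cycle in the graph implies deadlock):

```python
def has_cycle(graph):
    """graph: dict of directed edges. Request edges point process ->
    resource; assignment edges point resource -> process."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in in_stack:
                return True            # back edge found: a cycle exists
            if nxt not in visited and dfs(nxt):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: circular wait.
deadlocked = has_cycle({"P1": ["R2"], "R2": ["P2"],
                        "P2": ["R1"], "R1": ["P1"]})
# Dropping P2's request for R1 breaks the cycle.
safe = has_cycle({"P1": ["R2"], "R2": ["P2"], "R1": ["P1"]})
```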

16. Compare Starvation with Deadlock. [Module-3/ CO2/(Evaluate/HOCQ)] 4

17. Describe Banker’s Algorithm for Deadlock avoidance.
[Module-3/ CO2/(Understand/HOCQ)] 8
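
A sketch of the safety algorithm at the heart of Banker's Algorithm follows (the matrices below are hypothetical, chosen only to show a safe sequence being found):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: look for an order in which every
    process can obtain its maximum need and finish, releasing what it
    holds. Returns (is_system_safe, safe_order_of_process_indices)."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(m)):
                # Process i can run to completion; it then releases
                # everything currently allocated to it.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

# Hypothetical snapshot: 3 processes, 2 resource types.
safe, order = is_safe(available=[3, 3],
                      max_need=[[5, 4], [3, 2], [6, 3]],
                      allocation=[[2, 1], [2, 0], [3, 0]])
```

Here the safe sequence P0, P1, P2 is found, so a request is granted only if the resulting state still passes this check.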

18. Mention the ways in which to resolve in case there is a deadlock in a system of processes.
[Module-3/ CO2/(Understand/HOCQ)]7
When a deadlock occurs in a system of processes, several strategies can be employed to resolve the
deadlock and restore normal system operation. Here are some common approaches:

1. Deadlock Detection and Recovery:


• Detection: Use deadlock detection algorithms to identify when a deadlock has occurred.
Various algorithms, such as Banker's Algorithm or resource allocation graphs, can be
employed for this purpose.

• Recovery: After detecting a deadlock, the system can take corrective actions to recover from
the deadlock state. This may involve terminating one or more processes involved in the
deadlock, releasing their allocated resources, and allowing other processes to proceed.
2. Deadlock Prevention:
• Resource Allocation Policies: Implement resource allocation policies and strategies to
prevent the occurrence of deadlocks. This includes techniques like avoiding circular wait,
ensuring that processes acquire resources in a predetermined order, and implementing
strategies to prevent hold and wait situations.
• Resource Allocation Control: Employ mechanisms such as locks, semaphores, or monitors
to control the allocation and access to shared resources, ensuring that conflicting resource
requests do not lead to deadlocks.
3. Deadlock Avoidance:
• Banker's Algorithm: Use deadlock avoidance algorithms like the Banker's Algorithm to
dynamically allocate resources in a manner that ensures safety and prevents the system
from entering deadlock-prone states.
• Resource Allocation Safety Checks: Perform safety checks before allocating resources to
ensure that the allocation will not lead to deadlock. This involves predicting the potential
future resource requests of processes and ensuring that allocating resources to one process
will not prevent other processes from completing.
4. Process Termination:
• Selective Process Termination: Identify and terminate specific processes involved in the
deadlock to break the circular dependency and allow other processes to proceed.
• Global Reboot: In extreme cases, perform a global system reboot to clear all resource
allocations and restart the system in a clean state, eliminating the deadlock.
5. Resource Preemption:
• Resource Reclamation: Implement mechanisms to preemptively reclaim resources from
processes to resolve deadlock situations. This involves forcibly removing resources from
processes to break the circular wait condition and allow other processes to proceed.
• Rollback and Restart: Rollback the execution of processes to a safe state before the
deadlock occurred and restart them with different resource allocations to avoid the
deadlock.

Each of these strategies has its advantages and limitations, and the choice of strategy depends on factors
such as system requirements, resource constraints, and the nature of the deadlock situation.

19. Consider the following snapshot of a system of 5 processes P1, P2, P3, P4, P5 and 3 resource types A, B and
C. Resource types A, B and C have 10, 5, 7 instances respectively. Decide whether the system is in a safe
state. Also, mention the safe sequence.
[Module-3/ CO2/(Evaluate/HOCQ)] 7

20. Describe the role of semaphore in process synchronisation in OS.
4 [Module-4, CO3, Understand, IOCQ]

Semaphores play a vital role in process synchronization in operating systems by:

1. Mutual Exclusion: Enforcing mutual exclusion to ensure that only one process can access a critical
section of code or a shared resource at a time.
2. Process Coordination: Allowing processes to coordinate their activities and sequence their
execution to avoid conflicts and ensure proper synchronization.
3. Wait and Signal Mechanism: Providing wait and signal operations to allow processes to block and
wait for a semaphore to become available (wait operation) and to notify other processes when a
semaphore is released (signal operation).
4. Counting and Binary Semaphores: Supporting both counting and binary (mutex) semaphores.
Counting semaphores can have values greater than one and are typically used to control access to
a finite pool of resources, while binary semaphores have values of either 0 or 1 and are commonly
used for mutual exclusion.
5. Preventing Deadlocks: Helping to prevent deadlocks by ensuring that processes acquire and
release resources in a controlled manner, thereby avoiding situations where processes are
indefinitely blocked waiting for resources held by other processes.
6. Interprocess Communication: Facilitating interprocess communication by allowing processes to
synchronize their actions, exchange information, and coordinate resource usage through the use of
semaphores.
7. Resource Allocation: Managing the allocation of shared resources among multiple processes by
controlling access to critical sections of code or shared resources using semaphores.
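The mutual-exclusion role described above can be illustrated with a short runnable Python sketch; the worker function, thread count, and iteration count are illustrative choices, not part of the answer itself:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)  # binary semaphore guarding the critical section

def worker(n_increments):
    global counter
    for _ in range(n_increments):
        mutex.acquire()    # wait(): enter the critical section
        counter += 1       # critical section: update the shared variable
        mutex.release()    # signal(): leave the critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no update is lost under mutual exclusion
```

Because the semaphore serializes the read-modify-write on `counter`, all 4 x 10000 increments survive; removing the acquire/release pair would reintroduce a race.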

21. Explain race condition in OS.


[Module-4, CO3, Understand, LOCQ] 2
A race condition in an operating system occurs when two or more processes or threads access and manipulate shared resources or variables concurrently, and the final result depends on the particular order (timing) in which those accesses take place.
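A race condition of this kind can be reproduced, and then repaired with a lock, in a few lines of Python. The deliberate sleep between the read and the write widens the read-modify-write window so lost updates are easy to observe; all names and counts here are illustrative:

```python
import threading
import time

ITERATIONS = 50

def run(use_lock):
    count = 0
    lock = threading.Lock()

    def increment():
        nonlocal count
        for _ in range(ITERATIONS):
            if use_lock:
                with lock:
                    count += 1       # read-modify-write done atomically
            else:
                tmp = count          # read the shared variable
                time.sleep(0.0001)   # widen the window so threads interleave
                count = tmp + 1      # write back a possibly stale value

    threads = [threading.Thread(target=increment) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count

print(run(use_lock=False))  # usually far less than 100: updates are lost
print(run(use_lock=True))   # 100: the lock serializes the critical section
```

The unsynchronized version loses increments whenever both threads read the same stale value; the locked version always reaches the expected total.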

22. Describe the Dining Philosophers’ problem. Propose a solution for the problem.
[Module-4/ CO3/(Create/HOCQ)]3+3

The Dining Philosophers' problem is a classic synchronization problem in computer science that illustrates
the challenges of resource allocation and deadlock avoidance in concurrent systems. Here's a brief
description:

1. Scenario: The problem imagines a group of philosophers sitting around a dining table with a bowl
of rice and chopsticks between each pair of adjacent philosophers.
2. Task: Each philosopher alternates between thinking and eating. To eat, a philosopher needs to pick
up the two chopsticks adjacent to them. Once they finish eating, they put down the chopsticks and
continue thinking.
3. Constraints:
• Philosophers cannot eat at the same time, and each chopstick can only be held by one
philosopher at a time.
• A philosopher must hold both chopsticks to eat.
4. Challenge: The challenge is to design a solution that prevents deadlock, where all philosophers are
holding one chopstick and waiting for the other, leading to a circular dependency and no
philosopher being able to eat.

One common solution to the Dining Philosophers' problem is to use a semaphore or mutex to control
access to the chopsticks. Here's a simple solution using semaphores:

1. Each chopstick is represented by a semaphore, initialized to 1 (indicating it's available).


2. Each philosopher is represented by a separate thread.
3. When a philosopher wants to eat, they must acquire the semaphore for both the left and right
chopsticks.
4. If both chopsticks are available, the philosopher picks them up and starts eating.
5. After eating, the philosopher releases both chopsticks by releasing the semaphores.
6. If a philosopher cannot acquire both chopsticks (one or both are not available), they wait until both
are available before trying again.

23. Define Inter-process communication (IPC) with respect to operating system. Explain how IPC is
implemented in OS.
[Module-4, CO3, Understand, IOCQ)] 5
Inter-process communication (IPC) refers to the mechanisms and techniques used by processes to
communicate and synchronize with each other in an operating system. IPC allows processes to exchange data,
share resources, and coordinate their activities, enabling cooperation and collaboration between different
parts of a system.

In short, IPC is implemented in an operating system through various mechanisms such as:

1. Shared Memory: Processes can share a portion of memory, allowing them to communicate by
reading and writing to shared memory locations.
2. Message Passing: Processes exchange messages through system-provided facilities like pipes,
sockets, message queues, or signals, allowing for communication between unrelated processes.
3. Synchronization Primitives: Operating systems provide synchronization primitives like
semaphores, mutexes, and condition variables, which processes can use to coordinate access to
shared resources and avoid race conditions.
4. Remote Procedure Calls (RPC): Processes can invoke procedures or functions in remote processes
as if they were local, allowing for distributed communication and cooperation.
5. File System: Processes can communicate indirectly through shared files or named pipes, where
data is written by one process and read by another.
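The message-passing style of IPC listed above can be sketched with a connected socket pair. For portability this sketch runs the "server" end in another thread, but the same blocking send/receive API is what two separate processes would use over a pipe or socket; the message text is arbitrary:

```python
import socket
import threading

def demo_message_passing():
    # socket.socketpair() models a kernel-provided bidirectional channel.
    a, b = socket.socketpair()

    def server():
        data = b.recv(1024)       # blocking receive of the request
        b.sendall(data.upper())   # send back a reply
        b.close()

    t = threading.Thread(target=server)
    t.start()
    a.sendall(b"hello via ipc")   # send a request message
    reply = a.recv(1024)          # block until the reply arrives
    t.join()
    a.close()
    return reply.decode()

print(demo_message_passing())  # HELLO VIA IPC
```

The blocking `recv` calls double as synchronization: each side waits until the other has produced a message, which is exactly the coordination role IPC plays between cooperating processes.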

24. Explain “process synchronization” with respect to a multitasking OS. Name the different techniques
applied for “process synchronization”.
[Module-4, CO2, Understand, LOCQ)] 3 +2

Process synchronization in a multitasking operating system refers to the coordination of multiple processes to ensure orderly execution and proper sharing of resources. It involves mechanisms and
techniques to control the order of execution of processes, prevent race conditions, and manage access to
shared resources. By synchronizing processes, the operating system maintains system integrity, avoids
conflicts, and facilitates cooperation among concurrent processes.
Several techniques are applied for process synchronization in operating systems:

1. Mutexes (Mutual Exclusion): Mutexes are used to ensure that only one process at a time can
access a shared resource. They provide exclusive access to critical sections of code, preventing
concurrent access by multiple processes.
2. Semaphores: Semaphores are integer variables used for process synchronization. They can be
used to control access to shared resources and implement signaling mechanisms between
processes.

25. Differentiate Paging from Segmentation technique of memory management in OS.


[Module-5, CO4, Analyze, IOCQ)] 4

26. a) Compare SSTF and SCAN disk scheduling algorithms.


[Module-7, CO4, Analyze, HOCQ] 4

SSTF disk scheduling:
1. Selects the I/O request with the shortest seek time (distance) from the current head position, aiming to minimize head movement and reduce the average response time.
2. Processes requests based solely on seek time, potentially starving requests located far from the current head position.
3. Can achieve low average seek time and fast response times, particularly when most requests lie close to the current head position.
4. Relatively simple to implement; requires minimal bookkeeping of the request queue.

SCAN disk scheduling:
1. Moves the disk arm in one direction, servicing requests along the way, then reverses direction to service the remaining requests; this ensures fairness but can give a higher average seek time than SSTF.
2. Services all requests in the direction of movement until reaching the end, then reverses direction, ensuring fairness across different disk regions.
3. May have a higher average seek time than SSTF, especially when the arm direction changes frequently, but it guarantees fair service across the entire disk surface.
4. Slightly more complex to implement, since the direction of arm movement must be tracked and requests handled in both directions.

b) Contrast Sequential File Access technique from Random File Access Technique.
[Module-8, CO4, Analyze, HOCQ)] 4
A concise contrast between the Sequential File Access and Random File Access techniques:

1. Sequential File Access:


• Access Method: Reads or writes data in a sequential order, starting from the beginning of
the file and proceeding sequentially through each record until reaching the desired record.
• Traversal: It requires traversing through all preceding records to access a specific record.
• Efficiency: Efficient for batch processing and processing large volumes of data sequentially.
• Examples: Tape drives, reading log files, batch processing of records.
2. Random File Access:
• Access Method: Allows direct access to any record in the file without having to read or
write preceding records.
• Traversal: Records can be accessed in any order, without the need to traverse through all
preceding records.
• Efficiency: Suitable for interactive applications and random data retrieval where accessing
specific records without sequential traversal is required.
• Examples: Hard disk drives, databases using index structures, interactive data processing
applications.

27. Briefly describe major functional units of Operating System. [Module 1, CO1, Understand, LOCQ] 5

The major functional units of an operating system include:

1. Kernel: The core component responsible for managing system resources, providing essential
services such as process management, memory management, device management, and file system
management.
2. Process Management: Manages the execution of processes, including process creation,
scheduling, synchronization, and termination. It ensures efficient utilization of CPU resources and
provides mechanisms for inter-process communication and synchronization.
3. Memory Management: Controls the allocation and deallocation of memory resources, manages
virtual memory, and provides mechanisms for memory protection and sharing. It ensures efficient
utilization of physical memory and supports processes' memory requirements.
4. File System Management: Manages file storage and organization, including file creation, deletion,
and manipulation. It provides file access control, directory structure management, and file system
integrity maintenance.
5. Device Management: Controls the interaction between the operating system and peripheral
devices, including device detection, driver management, and input/output (I/O) operations
handling. It ensures efficient utilization of hardware resources and provides a consistent interface
for device access.
6. User Interface: Provides interfaces for user interaction with the operating system, including
command-line interfaces (CLI), graphical user interfaces (GUI), and application programming
interfaces (APIs). It facilitates user input, output, and interaction with system resources and services.
7. Networking: Manages network connections and communication, including network configuration,
protocol implementation, and data transmission. It supports network services, such as file sharing,
printing, and remote access, and provides security mechanisms for networked systems.

28. Differentiate between distributed Operating System and Network Operating System. [Module 1, CO1,
Understand, IOCQ] 5

29. Define Process? Explain states of a process with process state diagram.
[Module 2, CO2, Understand, IOCQ] 5
Already done
30. Three processes P1, P2 and P3 of sizes 19900,19990,19888 bytes respectively. If partitions are of equal
size of 20000bytes, will there be any fragmentation? Can a process of size 200 bytes be accommodated?

[Module 5, CO4, Apply, HOCQ] 5
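The internal fragmentation in each fixed partition can be computed directly. This sketch assumes each process occupies one 20000-byte partition and that no free partition remains for the 200-byte process:

```python
PARTITION = 20000
processes = [19900, 19990, 19888]

# Internal fragmentation: the unused space inside each fixed partition.
waste = [PARTITION - p for p in processes]
total_internal = sum(waste)

print(waste)           # [100, 10, 112]
print(total_internal)  # 222 bytes of internal fragmentation in total

# A 200-byte process cannot be accommodated in this leftover space:
# internal fragments are locked inside already-allocated fixed partitions
# and cannot be combined or handed to another process.
```

So there is fragmentation (222 bytes of internal fragmentation), and under fixed partitioning the 200-byte process cannot reuse it even though 222 > 200.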

31. Consider a logical address space with 8 pages of 1024 words each, mapped onto a physical memory on 32
frames. How many bits are in the logical address? How many bits are in the physical address?
[Module 6, CO4, Apply, HOCQ] 5
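The bit counts follow from powers of two: the offset within a page needs log2(1024) = 10 bits, the page number log2(8) = 3 bits, and the frame number log2(32) = 5 bits. A small check:

```python
import math

pages, words_per_page, frames = 8, 1024, 32

offset_bits = int(math.log2(words_per_page))           # 10 bits for the word offset
logical_bits = int(math.log2(pages)) + offset_bits     # 3 + 10 = 13 bits
physical_bits = int(math.log2(frames)) + offset_bits   # 5 + 10 = 15 bits

print(logical_bits, physical_bits)  # 13 15
```

Hence the logical address is 13 bits and the physical address is 15 bits.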

32. What are Cooperative Processes? Explain with Producer-Consumer Process.
[Module 4, CO3, Understand, IOCQ] 5
Cooperating Process: Cooperating processes are processes that can affect, or be affected by, other processes executing in the system. They work together to achieve a common task in an operating system, interacting with each other by sharing resources such as CPU, memory, and I/O devices to complete the task.

The Producer-Consumer problem is a classic example of cooperative processes:

• Producer Process: Generates data items and puts them into a shared buffer.
• Consumer Process: Retrieves data items from the shared buffer and processes them.

In this scenario, the producer and consumer processes cooperate to ensure that the shared buffer remains
in a consistent state and that data items are produced and consumed in an orderly manner.

Key points:

1. The producer produces data items and adds them to the buffer.
2. The consumer consumes data items from the buffer.
3. The producer must wait if the buffer is full, and the consumer must wait if the buffer is empty,
ensuring synchronization.
4. Both processes cooperate to maintain the integrity of the shared buffer, avoiding race conditions or
data corruption.

Cooperative processes rely on mutual cooperation and synchronization to achieve their objectives
efficiently, without relying on the operating system to enforce scheduling or resource management.

33. Consider the following:-

Process | Burst Time | Priority | Arrival
P1 | 10 | 3 | 0
P2 | 1 | 1 | 0
P3 | 2 | 3 | 1
P4 | 1 | 4 | 3
P5 | 5 | 2 | 6
Draw a Gantt chart and evaluate the better scheduling algorithm between FCFS and Preemptive Priority
scheduling with respect to average waiting time and average turnaround time.
[Module 2, CO2, Apply, HOCQ] 10

34. State four necessary conditions for deadlock.
[Module 3, CO2, Understand, IOCQ] 4
ANS in Question no.14
35. Is it possible to have a deadlock involving one single process?
[Module 3, CO2, Understand, LOCQ] 3
No, it is not possible to have a deadlock involving just one single process. Deadlock occurs when two or
more processes are unable to proceed because each is waiting for a resource held by another process,
creating a circular dependency. In a single-process scenario, there are no other processes to hold
resources that the process might need, so deadlock cannot occur. Deadlock involves interactions between
multiple processes and their resource dependencies, making it inherently impossible in a single-process
context.

36. Five processes are competing for resources R1, R2, R3 and R4 where (R1, R2, R3, R4) = (6, 4, 4, 2).
The maximum claim of these processes and the initial resources allocated to these processes, are given
in the following table.
Processes | MAX (R1 R2 R3 R4) | Alloc (R1 R2 R3 R4)
P1 | 3 2 1 1 | 2 0 1 1
P2 | 1 2 0 2 | 1 1 0 0
P3 | 1 1 2 0 | 1 1 0 0
P4 | 3 2 1 0 | 1 1 1 0
P5 | 2 1 0 1 | 0 0 0 1

Does this initial allocation lead to a safe state? Explain with reason.
If P2 requests 2 instances of R1, 1 instance of R3, 1 instance for R4, check whether the system is still in safe
state. If it is, find out the safe sequence of process execution.
[Module 3, CO2, Apply, HOCQ] 10
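The safety check can be carried out mechanically with the Banker's safety algorithm: compute Need = MAX - Alloc and Available = Total - sum(Alloc), then repeatedly pick any process whose Need fits in Available and release its allocation. A sketch applied to this table (the follow-up request is checked the same way, by tentatively granting it and re-running the safety test):

```python
total = [6, 4, 4, 2]
max_claim = {
    "P1": [3, 2, 1, 1], "P2": [1, 2, 0, 2], "P3": [1, 1, 2, 0],
    "P4": [3, 2, 1, 0], "P5": [2, 1, 0, 1],
}
alloc = {
    "P1": [2, 0, 1, 1], "P2": [1, 1, 0, 0], "P3": [1, 1, 0, 0],
    "P4": [1, 1, 1, 0], "P5": [0, 0, 0, 1],
}

def safe_sequence(total, max_claim, alloc):
    avail = [t - sum(a[i] for a in alloc.values()) for i, t in enumerate(total)]
    need = {p: [m - a for m, a in zip(max_claim[p], alloc[p])] for p in alloc}
    done, seq = set(), []
    while len(done) < len(alloc):
        progressed = False
        for p in alloc:
            if p in done:
                continue
            if all(n <= v for n, v in zip(need[p], avail)):
                # p can run to completion and then returns all it holds
                avail = [v + a for v, a in zip(avail, alloc[p])]
                done.add(p)
                seq.append(p)
                progressed = True
        if not progressed:
            return None  # no process can proceed: the state is unsafe
    return seq

seq = safe_sequence(total, max_claim, alloc)
print(seq)  # a safe sequence exists, so the initial allocation is safe
```

With Available = (1, 1, 2, 0), P3 can finish first, and the released resources then let the remaining processes complete, so the initial allocation is safe.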

37. State classical Producer/Consumer problem.
[Module 4, CO3, Understand, IOCQ] 3

The classical Producer/Consumer problem involves two types of processes, producers, and consumers,
which share a common, fixed-size buffer or queue. The goal is to ensure that the producers do not try to
add data to the buffer if it's full, and consumers do not try to remove data from an empty buffer.

Here's a description of the problem:

1. Producer Process:
• Generates data items.
• Attempts to add data items to the buffer.
• If the buffer is full, the producer waits until there is space available in the buffer.
2. Consumer Process:
• Removes data items from the buffer.
• Attempts to consume data items from the buffer.
• If the buffer is empty, the consumer waits until there are items available in the buffer.
3. Buffer:
• Acts as a shared resource between producers and consumers.
• Has a fixed size to hold a limited number of data items.
• Producers add data items to the buffer, and consumers remove data items from the buffer.

38. Show the pseudo code of Semaphore solution of classical Producer/Consumer problem.
[Module 4, CO3, Apply, HOCQ] 8
A semaphore S is an integer variable that can be accessed only through two standard operations: wait() and signal(). The wait() operation decreases the value of the semaphore by 1 and the signal() operation increases it by 1.

wait(S) {
    while (S <= 0); // busy waiting
    S--;
}

signal(S) {
    S++;
}

To solve this problem we need two counting semaphores, full and empty, and a binary semaphore mutex for mutually exclusive access to the buffer. "full" keeps track of the number of filled slots in the buffer at any given time and "empty" keeps track of the number of unoccupied slots.

Initialization of semaphores:
mutex = 1
full = 0   // initially all slots are empty, so no slot is full
empty = n  // all n slots are empty initially
Solution for Producer:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while (true);

When the producer produces an item, the value of "empty" is reduced by 1 because one slot will now be filled. The value of mutex is also reduced to prevent the consumer from accessing the buffer. Once the producer has placed the item, the value of "full" is increased by 1 and mutex is released, because the producer's task is complete and the consumer may now access the buffer.
Solution for Consumer:

do {
    wait(full);
    wait(mutex);
    // consume an item from the buffer
    signal(mutex);
    signal(empty);
} while (true);

As the consumer removes an item from the buffer, the value of "full" is reduced by 1 and mutex is acquired so that the producer cannot access the buffer at that moment. Once the consumer has consumed the item, it increases the value of "empty" by 1 and releases mutex so that the producer can access the buffer again.
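The pseudocode above maps almost one-to-one onto Python's `threading.Semaphore`; a runnable sketch with a buffer of 5 slots and 20 items (both sizes are arbitrary choices for the demonstration):

```python
import threading
from collections import deque

N = 5                             # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)    # mutual exclusion on the buffer
empty = threading.Semaphore(N)    # counts free slots
full = threading.Semaphore(0)     # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()           # wait(empty)
        mutex.acquire()           # wait(mutex)
        buffer.append(item)       # place the item in the buffer
        mutex.release()           # signal(mutex)
        full.release()            # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()            # wait(full)
        mutex.acquire()           # wait(mutex)
        consumed.append(buffer.popleft())  # consume an item
        mutex.release()           # signal(mutex)
        empty.release()           # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: every item delivered, in order
```

The producer blocks on `empty` when the buffer is full and the consumer blocks on `full` when it is empty, exactly mirroring the wait/signal pairs in the pseudocode.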
39. What are the disadvantages of semaphore?
[Module 4, CO3, Understand, IOCQ] 4

In short, some disadvantages of semaphores include:

1. Complexity: Semaphores can introduce complexity, especially when dealing with multiple
semaphores or complex synchronization patterns, leading to potential programming errors and
difficulties in debugging.
2. Deadlocks and Race Conditions: Improper use of semaphores can result in deadlocks or race
conditions, where processes get stuck or exhibit unpredictable behavior due to incorrect
synchronization.
3. Resource Contention: Semaphore-based synchronization can lead to resource contention and
potential performance degradation, especially in high-concurrency scenarios where multiple
processes compete for limited resources.

4. Difficulty in Understanding: Semaphores may be difficult to understand for novice programmers,
requiring a solid understanding of concurrency concepts and careful consideration of
synchronization requirements.

40. Show the pseudo code of Semaphore solution of Dining Philosophers’ problem.
[Module 4, CO3, Apply, HOCQ] 8
A semaphore-based solution to the Dining Philosophers' problem can be structured around three routines, named here (illustratively) philosopher(), take_chopsticks(), and put_chopsticks():

• Each philosopher is represented by a separate process or thread, running the philosopher() function.
• Semaphores chopstick[] are used to represent the availability of each chopstick. Each chopstick is initially
available (1).
• The take_chopsticks() function attempts to acquire both chopsticks for the philosopher to eat. If both
chopsticks are not available, the philosopher waits.
• The put_chopsticks() function releases both chopsticks after the philosopher finishes eating.
• Philosophers alternate between thinking, attempting to take chopsticks, eating, and putting down chopsticks,
ensuring that only one philosopher can use each chopstick at a time and preventing deadlock.
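The outline above can be sketched in runnable Python. Note one substitution: instead of the wait-and-retry described above, this sketch uses resource ordering (always pick up the lower-numbered chopstick first), which is a simple standard way to break the circular wait and guarantee deadlock freedom; the round count is arbitrary:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # 1 = chopstick available
meals = [0] * N

def take_chopsticks(i):
    left, right = i, (i + 1) % N
    # Resource ordering: acquire the lower-numbered chopstick first.
    # This breaks the circular-wait condition, so no deadlock can form.
    first, second = min(left, right), max(left, right)
    chopstick[first].acquire()
    chopstick[second].acquire()

def put_chopsticks(i):
    left, right = i, (i + 1) % N
    chopstick[left].release()
    chopstick[right].release()

def philosopher(i, rounds):
    for _ in range(rounds):
        # think ...
        take_chopsticks(i)
        meals[i] += 1        # eat
        put_chopsticks(i)

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could each hold one chopstick and wait forever; the ordering rule makes that cycle impossible.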

41. Distinguish between internal and external fragmentation.


[Module 5, CO4, Understand, IOCQ] 4
Internal fragmentation:
1. Occurs when allocated memory blocks are larger than the process requires, so part of each allocated block remains unused.
2. Occurs in fixed-partition environments.
3. Wastes memory inside allocated blocks.
4. Memory is allocated in fixed-size blocks or pages.

External fragmentation:
1. Occurs when there is enough total free memory to satisfy a request, but it is not contiguous, leaving unusable fragments scattered throughout the memory space.
2. Occurs in variable-partition environments.
3. Leads to inefficient memory utilization despite free space being available.
4. Memory blocks are allocated and deallocated dynamically.

42. Describe Direct Mapping method of paging with a diagram.


[Module 5, CO4, Understand, IOCQ] 6

43. In a Paging System with TLB, it takes 30 ns to search the TLB and 90 ns to access the memory. If the TLB hit
ratio is 70%, find the effective memory access time. What should be the hit ratio to achieve the effective memory
access time of 130 ns?
[Module 5, CO4, Apply, HOCQ] 9

44. What are different file allocation methods? Explain with examples.
[Module 8, CO4, Understand, IOCQ] 5

45. Given memory partition of 100k, 500k, 200k, 300k, and 600k, in order. How would each of the First-fit,
Best-fit, and Worst-fit memory allocation algorithms place processes of 212k, 417k, 112k and 426k in
order?
Which algorithm makes the most efficient use of memory?
[Module 5, CO4, Apply, HOCQ] 9
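The three placement strategies can be simulated in a few lines; each allocation shrinks the chosen hole by the process size, and a process that fits nowhere must wait. The printed lists name the original partition each process lands in:

```python
def place(partitions, processes, choose):
    holes = partitions[:]                 # remaining size of each hole
    placement = []
    for p in processes:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placement.append(None)        # process cannot be placed and waits
            continue
        i = choose(candidates, holes)
        placement.append(partitions[i])
        holes[i] -= p                     # the hole shrinks by the process size
    return placement

parts = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]

first = place(parts, procs, lambda c, h: c[0])                          # first hole that fits
best = place(parts, procs, lambda c, h: min(c, key=lambda i: h[i]))     # smallest hole that fits
worst = place(parts, procs, lambda c, h: max(c, key=lambda i: h[i]))    # largest hole

print("first-fit:", first)   # [500, 600, 500, None] -> 426K must wait
print("best-fit: ", best)    # [300, 500, 200, 600]  -> all four placed
print("worst-fit:", worst)   # [600, 500, 600, None] -> 426K must wait
```

Best-fit is the most efficient here: it is the only strategy that places all four processes, while first-fit and worst-fit both leave the 426K process waiting.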

46. Suppose a disk drive has 300 cylinders, numbered 0 to 299. The current head position of the disk is at 90.
The queue of pending requests, in FIFO order, is 36, 79, 15, 120, 199, 270, 89, and 170.
Calculate the average cylinder movements for Shortest Seek Time First (SSTF) and SCAN algorithm.
[Module 7, CO4, Apply, HOCQ] 8
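The head movements can be computed mechanically. This sketch assumes SCAN moves toward higher cylinder numbers and travels all the way to cylinder 299 before reversing (the LOOK variant would instead stop at the last pending request, 270):

```python
def sstf(head, requests):
    pending, total = set(requests), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # shortest seek first
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def scan(head, requests, max_cyl):
    # SCAN: sweep toward higher cylinders to the last cylinder, then reverse.
    lower = [r for r in requests if r < head]
    if lower:
        return (max_cyl - head) + (max_cyl - min(lower))
    return max(requests) - head

reqs = [36, 79, 15, 120, 199, 270, 89, 170]
print(sstf(90, reqs), sstf(90, reqs) / len(reqs))        # 457 total, 57.125 average
print(scan(90, reqs, 299), scan(90, reqs, 299) / len(reqs))  # 493 total, 61.625 average
```

SSTF visits 89, 79, 120, 170, 199, 270, 36, 15 for 457 cylinders of movement (average 57.125 per request); SCAN sweeps 90 to 299 and back down to 15 for 493 cylinders (average 61.625).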

47. Explain the Set Associative paging scheme with a labeled schematic diagram.
[Module 5, CO4, Understand, HOCQ] 6
48. Calculate the number of page fault the reference string 7,0,1, 2, 0, 3, 4, 0, 3, 2, 0, 1, 2 ,3, 2, 0, 1, 7, 0, 1, where
LRU Page replacement policy has been used for a memory with three frames.
[Module 5, CO4, Apply, HOCQ]5
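The fault count can be verified with a small LRU simulator that keeps resident pages ordered by recency:

```python
def lru_faults(refs, frames):
    memory = []                   # least recently used page at the front
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: refresh this page's recency
        else:
            faults += 1           # miss: page fault
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)       # most recently used page at the back
    return faults

refs = [7, 0, 1, 2, 0, 3, 4, 0, 3, 2, 0, 1, 2, 3, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12 page faults
```

With three frames, this reference string produces 12 page faults under LRU.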

49. Explain the requirement for swap-space management.


[Module 5, CO4, Understand, LOCQ]3
Swap-space management is essential for efficient memory usage in computer systems. It involves
allocating a portion of the hard disk as virtual memory to supplement RAM. When the RAM gets full, less
frequently used data is swapped out to the disk to make room for more urgent tasks. Effective swap-space management ensures that this swapping process is optimized, minimizing performance degradation and ensuring smooth operation of the system.

50. Consider a paging system with the page table stored in memory
If a memory reference takes 200ns how long does a paged memory reference take?
If we add TLB and 75% of all page table references are found at TLB, what is the effective memory reference
time (EAT)?
[Assumption: finding a page table entry in TLB takes 0 times]
Explain your answer. [Module 5, CO4, Apply, HOCQ] 8
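With the page table in memory, every reference costs two memory accesses (one for the page-table entry, one for the data). Under the stated assumption that a TLB lookup takes zero time, the effective access time is a weighted average of the two cases:

```python
mem = 200   # ns per memory access
hit = 0.75  # fraction of references whose translation is found in the TLB

paged_ref = 2 * mem                       # page-table access + data access = 400 ns
eat = hit * mem + (1 - hit) * paged_ref   # TLB lookup itself assumed to cost 0 ns

print(paged_ref)  # 400 ns without a TLB
print(eat)        # 250.0 ns with a 75% TLB hit ratio
```

So a paged reference takes 400 ns, and adding the TLB brings the effective access time down to 0.75 x 200 + 0.25 x 400 = 250 ns.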

51. Differentiate between
[Module 5, CO4, Apply, IOCQ] 5
(i) Logical address and physical address

(ii) Internal and External fragmentation


Already done.
52. Explain diagrammatically the concept of virtual machine. [Module 5, CO4, Understand, IOCQ] 5

53. A process of size 200 MB needs to be swapped in. There is no space in main memory. A process of 250 MB
is idle in main memory; therefore, it can be swapped out. Average latency time =10 ms. Transfer rate of
hard disk = 60 MB/sec. How much swap time is required for swap-in and swap-out.
[Module 5, CO4, Apply, HOCQ] 5
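Each swap direction costs one latency plus the transfer time (size divided by the transfer rate):

```python
latency_ms = 10
rate = 60   # MB per second

def swap_time_ms(size_mb):
    # latency + time to transfer the whole process image
    return latency_ms + size_mb / rate * 1000

swap_out = swap_time_ms(250)   # move the idle 250 MB process out to disk
swap_in = swap_time_ms(200)    # bring the 200 MB process into memory
total = swap_out + swap_in

print(round(swap_out, 2))  # 4176.67 ms
print(round(swap_in, 2))   # 3343.33 ms
print(round(total, 2))     # 7520.0 ms in total
```

Swap-out takes about 4176.67 ms, swap-in about 3343.33 ms, for a total of roughly 7.52 seconds.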
54. Explain Linked and Indexed file allocation methods with examples.
[CO3, Understand, HOCQ] 5
55. Consider the following:- [Module 2, CO2, Apply, HOCQ]

Process | Arrival Time | Burst Time
P1 | 0 | 5
P2 | 2 | 3
P3 | 3 | 2
P4 | 5 | 7
Draw a Gantt chart and evaluate the better scheduling algorithm between FCFS and Round Robin
scheduling with respect to average waiting time and average turnaround time. 10
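The FCFS half of this comparison can be checked mechanically; the Round Robin half additionally needs a time quantum, which the question leaves unspecified, so only FCFS is computed here:

```python
# (name, arrival, burst) taken from the table above
procs = [("P1", 0, 5), ("P2", 2, 3), ("P3", 3, 2), ("P4", 5, 7)]

def fcfs(procs):
    time, rows = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst   # completion time of this process
        turnaround = time - arrival
        waiting = turnaround - burst
        rows[name] = (turnaround, waiting)
    return rows

rows = fcfs(procs)
avg_tat = sum(t for t, _ in rows.values()) / len(rows)
avg_wt = sum(w for _, w in rows.values()) / len(rows)
print(rows)              # per-process (turnaround, waiting) times
print(avg_tat, avg_wt)   # 7.5 3.25
```

Under FCFS the processes finish at times 5, 8, 10, and 17, giving an average turnaround time of 7.5 and an average waiting time of 3.25; the RR figures depend on the chosen quantum.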

56. (a) In a system, following states of processes and resources are given. Draw RAG.
P1->R1, P2->R3, R2->P1, R1->P3, P4->R3, R1->P4. [Module 3, CO2, Apply, HOCQ] 5

(b) Consider the system with following information, [Module 3, CO2, Apply, HOCQ]
(R1, R2, R3) = (15,8,8).
The maximum claim of these processes and the initial resources allocated to these processes, are given in
the following table.
Processes | MAX (R1 R2 R3) | Alloc (R1 R2 R3)
P1 | 5 6 3 | 2 1 0
P2 | 8 5 6 | 3 2 3
P3 | 4 9 2 | 3 0 2
P4 | 7 4 3 | 3 2 0
P5 | 4 3 3 | 1 0 1
Does this initial allocation lead to a safe state? Explain with reason.
If now Req4 = [2 0 2], check whether the system is still in safe state. If it is, find out the safe sequence of
process execution. 10
5+10=15

57.(a) State the conditions that must be satisfied by the solution to the critical section problem.
[Module 3, CO2, Apply, IOCQ] 5
The critical section problem refers to the task of ensuring that concurrent processes or threads can access shared
resources or critical sections of code without interfering with each other's execution. To provide a correct solution to
the critical section problem, the following conditions must be satisfied:

1. Mutual Exclusion: Only one process or thread can execute in its critical section at a time. This means that if
a process is executing in its critical section, no other process should be allowed to execute in its critical
section concurrently.
2. Progress: If no process is executing in its critical section and there are processes that wish to enter their
critical sections, only those processes that are not executing in their remainder sections can participate in the
decision of which will enter the critical section next. This condition ensures that processes don't remain
indefinitely excluded from the critical section.
3. Bounded Waiting (or No Starvation): There exists a bound on the number of times other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section and before
that request is granted. This condition ensures that a process won't be indefinitely delayed from entering its
critical section.
4. Independence from the Speed of Execution: The solution should not assume anything about the relative
speed of execution of the processes or threads. It should work correctly regardless of variations in execution
speeds.

Satisfying these conditions ensures that concurrent processes can access shared resources safely without resulting in
race conditions, deadlock, or starvation. Various synchronization mechanisms such as semaphores, mutexes,
monitors, and atomic operations are used to implement solutions to the critical section problem while meeting these
conditions.

(b) There are three processes P1, P2 and P3, sharing a semaphore for synchronizing a shared variable. Initially, the value of the semaphore is 1. All the sequences of events are given in the table. Please fill the table with appropriate information. [Module 3, CO2, Apply, HOCQ]

Time | Current semaphore value | Needs of Process | Modified semaphore value | Description and current status of the queue
1 | 1 | P1 needs to access | |
2 | | P2 needs to access | |
3 | | P3 needs to access | |
4 | | P1 exits the critical section | |
5 | | - | |
6 | | P2 exits the critical section | |
7 | | P3 exits the critical section | |
10
5+10=15
Time | Current semaphore value | Needs of Process | Modified semaphore value | Description and current status of the queue
1 | 1 | P1 needs to access | 0 | P1 enters the critical section
2 | 0 | P2 needs to access | -1 | P2 is blocked and queued for the critical section
3 | -1 | P3 needs to access | -2 | P3 is blocked and queued behind P2
4 | -2 | P1 exits the critical section | -1 | P1 exits; P2 is dequeued and enters the critical section
5 | - | - | - | -
6 | -1 | P2 exits the critical section | 0 | P2 exits; P3 is dequeued and enters the critical section
7 | 0 | P3 exits the critical section | 1 | P3 exits; the queue is empty

58. In a Paging System with TLB, it takes 30 ns to search the TLB and 90 ns to access the memory. If the TLB
hit ratio is 80%, find the effective memory access time. What should be the hit ratio to achieve the effective
memory access time of 100 ns? [Module 5,
CO4, Apply, HOCQ] 5
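Under one common model, a TLB hit costs the TLB search plus one memory access, and a miss costs the TLB search plus two memory accesses (page table, then data). A sketch of the arithmetic:

```python
tlb, mem = 30, 90   # ns for TLB search and for one memory access

def eat(hit_ratio):
    # hit: TLB search + one memory access; miss: TLB search + two accesses
    return hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + 2 * mem)

print(eat(0.8))  # 138.0 ns at an 80% hit ratio

# Solving eat(h) = target for h gives h = (tlb + 2*mem - target) / mem.
target = 100
h = (tlb + 2 * mem - target) / mem
print(h)  # about 1.22, i.e. greater than 1
```

At an 80% hit ratio the effective access time is 138 ns. A 100 ns target is unreachable under this model: even a 100% hit ratio still costs 30 + 90 = 120 ns, and the solved hit ratio exceeds 1.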

59.(a) Write short notes on: Thrashing and CPU Utilization [Module 5, CO4, Understand, IOCQ] 5
(b) Consider a disk queue with I/O requests on the following cylinders in their arrival orders:
[Module 7, CO4, Apply, HOCQ]
6, 10, 12, 54, 97, 73, 128, 15, 44, 110, 34, 45.
The disk head is assumed to be at cylinder 23 and moving in the direction of increasing number of
cylinders. The disk consists of total 150 cylinders. Calculate and show with diagram the disk head
movement using
FCFS, SSTF, SCAN, and C-LOOK disk scheduling algorithms. 10
5+10=15

60. (a) Explain the benefits of multithreading model. [CO4, Understand, IOCQ] 5
Some benefits of the multithreading model include:

1. Improved Responsiveness: Multithreading allows applications to remain responsive to user interactions while performing resource-intensive tasks in the background, enhancing user
experience.
2. Concurrency: Multithreading enables concurrent execution of multiple tasks within a single
process, leveraging the available CPU resources more efficiently and potentially improving overall
system throughput.
3. Resource Sharing: Threads within the same process can share resources such as memory, file
handles, and network connections, reducing resource duplication and improving resource
utilization.
4. Simplified Programming: Multithreading simplifies programming by allowing developers to break
down complex tasks into smaller, manageable threads, facilitating modular design and easier
maintenance of code.
5. Parallelism: Multithreading enables parallelism, allowing multiple threads to execute
simultaneously on multicore processors, speeding up computation-intensive tasks and improving
overall system performance.
6. Asynchronous Operations: Multithreading enables asynchronous programming paradigms, where
tasks can execute independently and asynchronously, reducing idle time and improving system
responsiveness.
7. Scalability: Multithreading provides scalability by allowing applications to scale with increasing
hardware resources, leveraging additional CPU cores and threads for improved performance.

Overall, the multithreading model offers several advantages, including improved responsiveness,
concurrency, resource sharing, simplified programming, parallelism, asynchronous operations, and
scalability, making it a valuable approach for developing efficient and responsive software applications.

(b) Use following page reference string: [CO4, Apply, HOCQ]


5, 0, 2, 1, 0, 3, 0, 2, 4, 3, 0, 3, 2, 1, 3, 0, 1, 5
Compare optimal page replacement algorithm with 3 and 4 frames. 10
5+10=15
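The comparison can be checked with a small simulator of the optimal (Belady) policy, which evicts the resident page whose next use lies farthest in the future (pages never used again go first):

```python
def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)           # free frame available
            continue

        def next_use(p):
            # Distance to the next reference of p; infinity if never used again.
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")

        # Evict the page whose next use is farthest in the future.
        memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [5, 0, 2, 1, 0, 3, 0, 2, 4, 3, 0, 3, 2, 1, 3, 0, 1, 5]
print(optimal_faults(refs, 3), optimal_faults(refs, 4))  # 9 8
```

The optimal policy incurs 9 page faults with 3 frames and 8 with 4 frames on this string, so the extra frame saves exactly one fault.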

61. a) What is the general structure of a process Pi (in context to process synchronization)? 5
[Module 4, CO3, Understand, IOCQ]

The general structure of a process Pi in the context of process synchronization typically involves the following components:

1. Initialization: The process initializes any necessary variables, resources, or data structures required
for synchronization.
2. Entry Section: This is the section of code where the process attempts to enter the critical section or
the shared resource. Before entering the critical section, the process may need to acquire certain
synchronization primitives such as locks, semaphores, or mutexes to ensure exclusive access to the
critical section.
3. Critical Section: The critical section contains the code segment where the process accesses or
modifies shared resources. It is the part of the code that must be executed atomically, i.e., without
interruption from other processes, to maintain data consistency and integrity.
4. Exit Section: After completing the operations in the critical section, the process exits the critical
section and releases any synchronization primitives it acquired during entry. This ensures that other
processes can access the critical section in the future.
5. Remainder Section: The remainder section contains the remaining code of the process that does
not require exclusive access to shared resources. This section may involve computations, I/O
operations, or other tasks that can be performed concurrently with other processes.
6. Termination: Finally, the process terminates or enters a wait state, depending on the application's
requirements and the synchronization protocol being used.

Overall, the structure of a process Pi in the context of process synchronization emphasizes the proper
management of critical sections and shared resources to ensure data consistency, integrity, and mutual
exclusion among concurrent processes.

b) What are multiprogramming, multiprocessing and multitasking OSs? 5
[Module 4, CO3, Understand, IOCQ]

1. Multiprogramming OS:
• Allows multiple programs to be loaded into memory and executed concurrently.
• CPU switches between programs to maximize CPU utilization.
• Examples: classic batch systems such as IBM OS/360 (MFT/MVT). (MS-DOS, by contrast, was single-tasking, not multiprogramming.)
2. Multiprocessing OS:
• Supports the execution of multiple processes across multiple CPUs or processor cores
simultaneously.
• Provides true parallel processing capabilities.
• Examples: Unix/Linux, Windows NT/2000/XP/Vista/7/8/10.
3. Multitasking OS:
• Allows multiple tasks (processes or threads) to run concurrently on a single CPU.
• CPU time is divided among tasks using scheduling algorithms.
• Examples: Modern desktop and server operating systems such as Linux, Windows, macOS.
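To make the multiprocessing/multitasking distinction concrete, a minimal Python sketch (illustrative; the function name `square` is an assumption of this example): each pool worker below is a separate OS process that can run in true parallel on its own core, whereas threads time-share one process.

```python
import multiprocessing as mp

def square(n):
    return n * n

if __name__ == "__main__":
    # Each pool worker is a separate OS process, so the map can execute in
    # true parallel on multiple cores (multiprocessing). Threads, by contrast,
    # time-share the CPU within one process (multitasking).
    with mp.Pool(processes=4) as pool:
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `if __name__ == "__main__"` guard is required so that spawned worker processes do not recursively re-create the pool when they import the module.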

