
OPERATING SYSTEM QUESTION BANK

MODULE – 1
1) List the different types of system calls in an operating system.
a) System calls provide an interface between a process and the operating system. Major types include:
i) Process Control: Create, terminate, load, or execute processes (e.g., fork(), exit()).
ii) File Management: Operations on files like read, write, open, and close (e.g., open(), read()).
iii) Device Management: Request or release devices and perform I/O (e.g., ioctl(), read()).
iv) Information Maintenance: Retrieve system data, process attributes (e.g., getpid(), alarm()).
v) Communication: Facilitate inter-process communication (e.g., pipe(), shmget(), send()).
vi) Protection: Control access to resources and set permissions (e.g., chmod()).
These calls allow user programs to perform privileged operations in a controlled manner,
maintaining system integrity and security.

2) With a neat architecture diagram briefly explain about the operation of modern computer.
a) A modern computer system operates on the Von Neumann architecture, comprising:
i) Input Unit: Accepts data from external sources.
ii) Output Unit: Sends processed data to the user.
iii) Central Processing Unit (CPU): Contains the Control Unit (CU), Arithmetic Logic Unit (ALU),
and registers.
iv) Memory Unit (RAM): Stores instructions and data for processing.
v) Storage: Secondary memory like HDD/SSD for long-term storage.
Working: Input data is fed into memory, the CPU fetches and decodes instructions via CU, executes
through ALU, and stores results in memory or sends to output devices.

3) Write notes on system programs.


a) System programs provide a convenient environment for program development and execution. Unlike
system calls, which form the OS-level interface, system programs are utilities that sit between the
user and the system-call interface. Types include:
i) File Management Programs: e.g., file copying, editing tools.
ii) Status Information Programs: Display system performance and resource usage.
iii) Language Processors: Compilers, assemblers, interpreters.
iv) Communication Programs: Enable data exchange over networks.
v) Program Loaders and Debuggers: Help run and test software.
They enhance usability, efficiency, and interaction between users and the system.
4) Explain various states of a process.
a) A process undergoes several states during its lifecycle:
i) New: The process is being created.
ii) Ready: The process is loaded into memory and waiting to be assigned to a CPU.
iii) Running: Instructions are being executed by the CPU.
iv) Waiting (Blocked): The process is waiting for some I/O operation to complete or an event to
occur.
v) Terminated: The process has finished execution or is aborted.
These states are essential for efficient multitasking, ensuring that the CPU always has work to do
while processes wait for resources or events.

5) Define a process and explain its states.


a) A process is an instance of a program in execution. It consists of program code, current activity, and
resources like open files and registers. Each process is assigned a Process Control Block (PCB) for
tracking information such as process ID, state, and CPU registers.
Process States:
i) New: Initialization is in progress.
ii) Ready: Awaiting CPU assignment.
iii) Running: Currently executing.
iv) Waiting: Awaiting a resource or event.
v) Terminated: Execution complete.
State transitions occur due to system calls or hardware interrupts, enabling multitasking and CPU
scheduling.

6) List out any four process control system calls.


a) Process control system calls manage process execution. Examples include:
i) fork() – Creates a new child process.
ii) exit() – Terminates the current process.
iii) wait() – Waits for a child process to complete.
iv) exec() – Replaces the current process image with a new program.
These system calls form the foundation of process management and are essential in multitasking
environments.

7) Infer the term context switching.


a) Context switching is the process of storing the state of a currently running process and loading the
state of another ready process. It enables the operating system to manage multiple processes on a
single CPU core.
Steps involved:
i) Save the current process’s state (registers, program counter, etc.) in its PCB.
ii) Load the next process’s state from its PCB.
iii) Update the CPU to start executing the new process.
Although context switching introduces overhead, it is crucial for multitasking and ensuring
responsiveness in modern OSes.

8) Discuss various types of system calls with suitable examples.


a) System calls are categorized based on the services they provide:
i) Process Control: Manage process execution.
Example: fork() creates a new process, exit() terminates one.
ii) File Management: Handle files and directories.
Example: open(), read(), write(), close().
iii) Device Management: Interact with hardware devices.
Example: ioctl() configures devices, read()/write() for I/O.
iv) Information Maintenance: Retrieve and set system data.
Example: getpid(), alarm(), setuid().
v) Communication: Enable inter-process communication (IPC).
Example: pipe(), shmget(), send(), recv().
Each type allows user programs to perform system-level functions securely and efficiently.

9) Explain the structure of a system call and its types with examples.
a) A system call consists of a controlled interface allowing user-level programs to request OS services.
Structure:
i) Invocation: The program requests a service.
ii) Transition to Kernel Mode: Using a software interrupt or trap.
iii) Execution: OS performs the service.
iv) Return to User Mode: Control returns with result or error code.
Types and Examples:
- Process Control: fork(), exec()
- File Management: open(), read()
- Device Management: ioctl()
- Communication: pipe(), msgsnd()
- Information Maintenance: getpid()
These calls ensure that hardware and protected OS functionalities are accessed securely.

10) Specify the role of program counter.


a) The Program Counter (PC) is a register in the CPU that holds the memory address of the next
instruction to be executed. As the CPU fetches instructions, the PC is automatically updated.
Roles:
i) Ensures sequential execution of instructions.
ii) Helps manage control flow (jumps, branches, loops).
iii) Facilitates interrupt handling and context switching by storing/resuming instruction locations.
It is vital for tracking process execution and enabling multitasking.

11) Describe the structure and components of an operating system.


a) An operating system has a layered architecture, comprising:
i) Kernel: Core of the OS; manages CPU, memory, processes.
ii) System Calls Interface: Acts as a bridge between user applications and kernel services.
iii) Device Drivers: Control hardware devices.
iv) File System: Manages data storage, directories, and files.
v) User Interface (Shell/GUI): Allows user interaction with the system.
These components work together to abstract hardware, manage resources, and provide services to
users and programs.

12) Illustrate the state diagram of a process, covering its lifecycle.


a) Lifecycle of process states :
i) New → Ready:
The OS admits a newly created process into the ready queue.
ii) Ready → Running:
The CPU scheduler selects a ready process for execution.
iii) Running → Waiting:
The process requests an I/O or waits for an event (like user input).
iv) Waiting → Ready:
Once the I/O is complete, the process is moved back to the ready queue.
v) Running → Ready:
If the process is preempted by the scheduler (e.g., time slice over), it's moved back to ready.
vi) Running → Terminated:
The process finishes execution or is forcibly terminated.

13) Discuss context switching with a neat diagram and real time scenario.
a) Context switching is the process where the CPU switches from executing one process to another.
This is essential in multitasking operating systems where the CPU needs to share time among
multiple processes or threads.
i) Interrupt or System Call Trigger
A context switch can be triggered by an interrupt (e.g., timer interrupt), a system call (e.g., I/O
request), or the CPU scheduler.
ii) Save the Current Process State
The OS saves the state of the currently running process (Program Counter, CPU registers, stack
pointer, etc.) into the Process Control Block (PCB) of that process.
iii) Update the Process Status
The OS updates the current process’s state to Ready or Waiting, depending on the reason for the
switch.
iv) Select the Next Process
The CPU scheduler selects another process from the ready queue based on scheduling algorithms
(e.g., Round Robin, Priority).
v) Load the New Process State
The OS loads the context of the selected process from its PCB (registers, program counter, etc.).
vi) Transfer Control to the New Process
The CPU begins executing the new process from where it left off.
Example (Real Time Scenario):
Imagine you're writing in a document while your computer is also playing music. When the music
player needs CPU time to process the next part of the song, the operating system saves the current
state of your document editing process and switches to the music player. After a short time, it
switches back so you can keep typing without any interruption.
14) Discuss the services provided by operating systems with a neat diagram.
a) An operating system (OS) provides essential services that make the computer usable and efficient for
both users and programs. These services abstract the hardware complexity and manage resources
effectively.
i) Program Execution
Loads programs into memory and executes them. Manages process creation, scheduling, and
termination.
ii) I/O Operations
Handles input/output operations through device drivers, abstracting hardware details from users
and applications.
iii) File System Manipulation
Manages files and directories on storage devices, allowing creation, reading, writing, and
deletion.
iv) Communication Services
Enables inter-process communication (IPC) using mechanisms like pipes, message queues, and
sockets.
v) Error Detection and Handling
Detects system and hardware errors, ensuring the system runs reliably and reports issues when
they occur.
vi) Resource Allocation
Allocates CPU time, memory space, and I/O devices to multiple processes efficiently.
vii) Security and Protection
Protects system resources and user data by managing permissions and access control.
viii) User Interface (UI)
Provides command-line interface (CLI) or graphical user interface (GUI) for user interaction with
the system.
15) Elaborate on the various inter-process communication (IPC) mechanisms.
a) IPC allows processes to exchange data and synchronize actions. Mechanisms include:
i) Pipes: Unidirectional/bidirectional channels for data flow between related processes.
ii) Message Queues: Asynchronous message storage between processes.
iii) Shared Memory: Processes share a memory segment for fast communication.
iv) Sockets: Enable communication over a network (used in client-server models).
v) Signals: Notify processes about events or exceptions.
IPC is vital for coordination, data sharing, and multitasking in distributed and concurrent systems.
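A pipe, the first mechanism above, can be demonstrated with Python's os module (shown within a single process for brevity; in practice the two ends are split between related processes after a fork()):

```python
import os

# pipe() returns a (read_fd, write_fd) pair of file descriptors;
# bytes written to one end are read from the other in FIFO order.
r, w = os.pipe()

os.write(w, b"hello from producer")
os.close(w)                    # closing the write end lets the reader see EOF

message = os.read(r, 1024)     # consume whatever was written
os.close(r)
```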

16) Describe the actions taken by a kernel to context-switch between processes.


a) During a context switch, the kernel performs the following:
i) Save the state of the current process in its PCB (program counter, registers, etc.).
ii) Update PCB status to "ready" or "waiting".
iii) Select the next process from the ready queue (based on scheduling algorithm).
iv) Load the state of the selected process from its PCB.
v) Update memory maps, if needed.
vi) Resume execution of the new process.
This switch is fast but has some overhead. It’s essential for multitasking, ensuring CPU efficiency
and responsiveness.
MODULE – 2
1) List any four benefits of using multithreading.
a)
i) Resource Sharing: Threads within the same process share code, data, and files, making
communication and data access efficient.
ii) Responsiveness: In GUI applications, multithreading keeps the interface responsive while
executing background tasks.
iii) Economy: Creating and managing threads is cheaper than processes, reducing overhead for
context switching.
iv) Scalability: Multithreading utilizes multiprocessor systems efficiently, allowing tasks to run in
parallel, improving performance.

2) Differentiate user threads and kernel threads.

a)
i) User threads are managed in user space by a thread library; kernel threads are created and
managed by the operating system.
ii) User-thread creation and switching are fast (no kernel involvement); kernel-thread operations
require a mode switch and are slower.
iii) If a user thread makes a blocking system call, the entire process may block; kernel threads
block independently.
iv) The kernel schedules kernel threads directly; it is unaware of user threads, which are
multiplexed onto kernel threads.

3) Enlist the role of a dispatcher in CPU scheduling.


a) The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. Its roles:
i) Context Switching: Loads the context of the selected process into the CPU.
ii) Mode Switching: Switches from kernel mode to user mode.
iii) Program Counter Load: Jumps to the proper location to resume the program.
Importance: Dispatch latency impacts the performance of CPU scheduling, especially in
preemptive algorithms.

4) Specify the Peterson’s solution for the critical section problem.


a) Two-Process Solution: Designed for two processes competing for shared resources.
Key Variables: flag[i] shows if process i wants to enter; turn decides whose turn it is.
Mutual Exclusion: Ensures only one process enters the critical section.
Progress & Bounded Waiting: Guarantees that waiting will end and doesn’t lead to starvation.
5) Problem on a banking system’s race condition: two functions, deposit(amount) and withdraw(amount), update a shared balance.
a) Problem: Simultaneous access to balance can cause incorrect results.
Race Condition: If both functions run concurrently, final balance may be incorrect due to overlapped
read/write operations.
Solution: Use synchronization (e.g., mutex/semaphore) to ensure only one thread modifies balance at
a time.
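A minimal sketch of that fix in Python (the amounts and iteration counts are arbitrary): without the lock, the interleaved read-modify-write on balance could lose updates; with it, each update is atomic:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:            # only one thread updates balance at a time
            balance += amount

def withdraw(amount, times):
    global balance
    for _ in range(times):
        with lock:
            balance -= amount

t1 = threading.Thread(target=deposit, args=(10, 10000))
t2 = threading.Thread(target=withdraw, args=(10, 9000))
t1.start()
t2.start()
t1.join()
t2.join()
# deterministic with the lock: 10000*10 - 9000*10 = 10000
```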

6) What are the Benefits of Multithreaded Programming?


a) Benefits of Multithreaded Programming:
i) Improved Performance: Multithreading enables parallel execution on multi-core processors,
improving performance.
ii) Better Resource Utilization: Threads within a process can share resources (e.g., memory),
leading to efficient resource usage.
iii) Enhanced Responsiveness: Long-running tasks (e.g., network communication) don’t block the
main application thread, allowing better user experience.
iv) Simplified Design: For certain applications, like servers or GUI applications, multithreading
simplifies task management and process coordination.

7) What does PCB contain?


a) PCB (Process Control Block) Contains:
i) Process ID (PID): Unique identifier for each process.
ii) Process State: The current state of the process (e.g., running, waiting, terminated).
iii) Program Counter (PC): Points to the next instruction to execute in the process.
iv) CPU Registers: Values of the CPU registers when the process was last scheduled.
v) Memory Management Information: Information about the memory allocated to the process
(e.g., page table, base and limit registers).
vi) Scheduling Information: Includes the priority, queue pointer, and other scheduling details.
vii) IO Status Information: List of IO devices allocated to the process.
viii) Accounting Information: Used for process management (e.g., CPU time used, time of
arrival).

8) Specify the basic operations of semaphores.


a) Basic Operations of Semaphores:
i) Wait (P or down): Decrements the semaphore value. If the result is negative, the calling
process blocks.
ii) Signal (V or up): Increments the semaphore value. If any processes are blocked on the
semaphore, one of them is unblocked.
iii) Initialization: Sets the initial value of the semaphore, usually the number of resources
available.
Use Case: Semaphores are commonly used to synchronize access to shared resources in concurrent
programming.
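A counting semaphore initialized to 2 can be sketched with Python's threading module (the sleep time and thread count are arbitrary); peak records the most workers ever inside the protected region, which the semaphore caps at 2:

```python
import threading
import time

sem = threading.Semaphore(2)     # two "resource" slots available
active = 0
peak = 0
state_lock = threading.Lock()

def worker():
    global active, peak
    with sem:                    # wait (P): blocks if no slot is free
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # hold the resource briefly
        with state_lock:
            active -= 1
        # leaving the with-block performs signal (V)

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```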

9) Explain Round Robin and Priority Scheduling algorithms with examples.


a) Round Robin and Priority Scheduling Algorithms:
i) Round Robin:
(1) Description: Each process is given a fixed time slice or quantum. Once a process uses its
quantum, it is placed at the end of the ready queue.
(2) Example: If quantum is 5 ms and processes A, B, and C are in the ready queue, A runs for 5
ms, followed by B for 5 ms, and then C for 5 ms. The cycle repeats.
(3) Pros: Simple, fair, good for time-sharing systems.
(4) Cons: Can result in high turnaround time for longer processes.
ii) Priority Scheduling:
(1) Description: Processes are assigned priorities, and the CPU is allocated to the process with
the highest priority. Ties are typically broken by a FIFO approach.
(2) Example: Processes P1 (priority 5), P2 (priority 3), and P3 (priority 4) are scheduled in that
order.
(3) Pros: Ensures important tasks are executed first.
(4) Cons: Can lead to starvation of low-priority processes.
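A small Round Robin simulator (assumed bursts A=12, B=7, C=5 with quantum 5, all arriving at time 0) makes the per-process completion, turnaround, and waiting times concrete:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    bursts: {name: burst_time} in ready-queue order.
    Returns {name: (completion, turnaround, waiting)}."""
    remaining = dict(bursts)
    queue = deque(bursts)            # ready queue in the given order
    clock = 0
    result = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            tat = clock              # arrival is 0, so TAT = completion
            result[name] = (clock, tat, tat - bursts[name])
        else:
            queue.append(name)       # preempted: back to the end of the queue
    return result

# assumed bursts for illustration
stats = round_robin({"A": 12, "B": 7, "C": 5}, 5)
```

With these bursts the Gantt order is A(0-5), B(5-10), C(10-15), A(15-20), B(20-22), A(22-24).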

10) Write about the various CPU Scheduling algorithms.


a) CPU Scheduling Algorithms:
i) First Come First Serve (FCFS): Processes are executed in the order they arrive. Simple but can
result in poor performance (convoy effect).
ii) Shortest Job First (SJF): Selects the process with the smallest execution time. Optimal for
minimizing average waiting time but suffers from starvation.
iii) Priority Scheduling: Processes are scheduled based on priority. Low-priority processes may
starve.
iv) Round Robin (RR): Each process gets a time slice in a cyclic order, ensuring fair execution for
all processes.

11) Describe the multi-threading models and classify their types.


a) Multithreading models define the relationship between user and kernel threads.
i) 1:1 Model (One-to-One):
(1) Each user thread maps to one kernel thread.
(2) Allows true parallelism on multicore systems.
(3) Used in systems like Windows and Linux.
(4) Downside: High overhead for large numbers of threads.
ii) N:1 Model (Many-to-One):
(1) Multiple user threads mapped to one kernel thread.
(2) Thread management is done in user space.
(3) Only one thread can access the kernel at a time—no parallelism.
(4) Used in older systems like Green Threads (JVM).
iii) M:N Model (Many-to-Many):
(1) Maps many user threads to many kernel threads.
(2) Efficient and flexible; allows multiple threads to run in parallel.
(3) Complex to implement.
(4) Example: Solaris threads.
12) Explain FCFS, non-preemptive SJF, Preemptive priority, and Round Robin
a)
i) FCFS (First-Come, First-Served):
(1) Processes are executed in arrival order.
(2) Easy to implement, but poor for short jobs.
ii) Non-Preemptive SJF (Shortest Job First):
(1) Chooses the process with the shortest burst time.
(2) Reduces average wait time but suffers from starvation.
iii) Preemptive Priority:
(1) CPU is assigned to the highest-priority process.
(2) Preempts running processes if a higher-priority process arrives.
(3) Starvation can occur (solved with aging).
iv) Round Robin:
(1) Processes are given fixed time slices in cyclic order.
(2) Ensures fairness and responsiveness.
(3) Best suited for time-sharing systems.
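Non-preemptive SJF can be simulated the same way (the workload P1(0,8), P2(1,4), P3(2,9), P4(3,5) is assumed for illustration):

```python
def sjf_nonpreemptive(procs):
    """procs: {name: (arrival, burst)} -> {name: completion_time}.
    At each decision point, pick the shortest burst among arrived processes."""
    pending = dict(procs)
    clock = 0
    completion = {}
    while pending:
        ready = [n for n, (arr, _) in pending.items() if arr <= clock]
        if not ready:                # CPU idles until the next arrival
            clock = min(arr for arr, _ in pending.values())
            continue
        name = min(ready, key=lambda n: pending[n][1])
        clock += pending[name][1]    # runs to completion (non-preemptive)
        completion[name] = clock
        del pending[name]
    return completion

# assumed workload: (arrival, burst)
done = sjf_nonpreemptive({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
```

P1 runs first (only arrival at t=0); afterwards P2, P4, P3 are chosen by burst length.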

13) Discuss the Dining Philosophers problem using semaphores


a) Dining Philosophers Problem Using Semaphores:
Problem: Philosophers alternately think and eat. They must pick up two forks (shared resources) to
eat. Can lead to deadlock.
i) Semaphore mutex ensures mutual exclusion.
ii) Fork semaphores ensure resource availability.
Issues: Risk of deadlock if all philosophers pick up one fork simultaneously. Solved using resource
hierarchy or limiting the number of philosophers.
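The resource-hierarchy fix mentioned above can be sketched in Python: each philosopher always picks up the lower-numbered fork first, which breaks the circular-wait condition and guarantees the run terminates:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
ate = [False] * N

def philosopher(i):
    # Resource hierarchy: always acquire the lower-numbered fork first.
    # This breaks circular wait, so no deadlock is possible.
    first, second = sorted((i, (i + 1) % N))
    with forks[first]:
        with forks[second]:
            ate[i] = True          # "eating" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If every philosopher instead grabbed its left fork first, all five could hold one fork and wait forever; the ordering removes that cycle.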

14) Classic problems of process synchronization


a) Classic Problems of Process Synchronization:
i) Dining Philosophers: Models deadlock and starvation.
ii) Readers-Writers Problem: Multiple readers can access a resource, but only one writer at a time.
iii) Producer-Consumer Problem (Bounded Buffer): Synchronizes access to a buffer between
producers (add items) and consumers (remove items).
These problems help test and design robust synchronization mechanisms like semaphores, monitors,
and mutexes.

15) Explain the FCFS, non-preemptive SJF, preemptive priority, and Round Robin (time quantum = 2 ms)
scheduling algorithms with Gantt charts for the processes given. Find the average turnaround time
and waiting time.
Process Arrival Time Burst Time Priority
P1 0 8 0
P2 4 1 1
P3 2 9 3
P4 3 5 2
16) Consider the following set of processes, with the length of the CPU burst given in milliseconds:
Process Burst Time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using the following
scheduling algorithms: FCFS, SJF, non-preemptive priority (a larger priority number implies a
higher priority), and RR (quantum = 2).
b. What is the turnaround time of each process for each of the scheduling algorithms mentioned
above?
17) Explain Preemptive priority & Round Robin (Gantt chart, TAT, WT). Given data:
Process Arrival Time Burst Time Priority
P1 0 6 2
P2 2 8 1
P3 3 7 4
P4 5 3 3
MODULE – 3
1) State the difference between deadlock avoidance and prevention.

a)
i) Prevention ensures deadlock can never occur by structurally negating at least one of the four
necessary conditions (e.g., imposing a resource ordering).
ii) Avoidance permits all four conditions but grants a request only if the resulting state is safe
(e.g., Banker’s algorithm).
iii) Prevention needs no advance information; avoidance requires each process to declare its
maximum resource needs.
iv) Prevention tends to reduce resource utilization; avoidance achieves better utilization at the
cost of runtime safety checks.

2) Compare best fit and worst fit strategies in memory allocation.

a)
i) Best fit allocates the smallest free hole that is large enough; worst fit allocates the largest
free hole.
ii) Best fit leaves very small leftover fragments that are often unusable; worst fit leaves larger
leftovers that may still satisfy later requests.
iii) Both typically require searching the entire free list unless it is kept sorted by size.

3) Interpretation of a sample resource allocation graph.


a) Interpretation of a Sample Resource Allocation Graph (RAG):
i) Graph Elements:
(1) Processes: Circles (e.g., P1, P2).
(2) Resources: Squares (e.g., R1, R2).
(3) Edges:
(a) Request (process → resource).
(b) Assignment (resource → process).
ii) Cycle Detection:
(1) If there’s a cycle:
(a) Single instance per resource: Deadlock exists.
(b) Multiple instances: Cycle may or may not mean deadlock.
iii) Example:
(1) P1 → R1 and R1 → P2 and P2 → R2 and R2 → P1: Cycle detected → potential deadlock.
4) Illustrate the different kind of page tables with suitable examples.
a) Types of Page Tables:
i) Single-Level Page Table:
(1) Maps virtual to physical addresses directly.
(2) Large memory overhead for big address spaces.
ii) Multi-Level Page Table:
(1) Breaks page table into levels (e.g., 2-level).
(2) Reduces memory use by only allocating needed parts.
iii) Inverted Page Table:
(1) One entry per physical page frame.
(2) Index by frame, not by page number.
(3) Reduces size, but slower to search.
Example:
Virtual address 0x1A34 in 2-level page table → uses page directory and page table index.

5) List down the purpose of a Translation Lookaside Buffer.


a) A cache that stores recent virtual-to-physical address translations.
i) Function:
(1) Speeds up address translation.
(2) Reduces page table access time.
ii) Importance:
(1) A TLB hit avoids accessing the page table.
(2) A miss results in accessing memory for the page table.
iii) Types:
(1) Fully associative or set-associative.
Improves system performance in paging-based memory systems.
6) Write down the necessary conditions for deadlock.
a) Necessary Conditions for Deadlock:
i) Mutual Exclusion:
At least one resource must be held in a non-shareable mode.
ii) Hold and Wait:
A process holding resources is waiting for more.
iii) No Preemption:
Resources cannot be forcibly taken away.
iv) Circular Wait:
A closed chain of processes exists, each waiting for a resource held by the next.
All four must occur simultaneously for deadlock to happen.
7) How will you reduce External Fragmentation?
a) To Reduce External Fragmentation:
i) Compaction:
Shift processes in memory to make free space contiguous.
Requires hardware support (e.g., relocation registers).
ii) Segmentation with Paging:
Combines benefits of both methods; reduces fragmentation by breaking processes into
smaller parts.
iii) Best Fit Allocation:
Reduces leftover spaces but may cause unusable small fragments.
iv) Non-contiguous Memory Allocation:
Uses paging or segmentation to allocate memory blocks non-contiguously, reducing the issue.
v) Memory Pools or Slab Allocators:
Divide memory into fixed-size chunks to minimize fragmentation.
8) Describe deadlock prevention and avoidance methods.
a) Deadlock Prevention:
i) Ensures one or more necessary conditions for deadlock never hold.
ii) Methods:
(1) Mutual Exclusion: Make resources sharable.
(2) Hold and Wait: Require processes to request all resources at once.
(3) No Pre-emption: Pre-empt resources if necessary.
(4) Circular Wait: Impose resource ordering.
b) Deadlock Avoidance:
i) System checks if resource allocation leads to unsafe states.
ii) Banker’s algorithm is a classic example.
iii) Requires processes to declare maximum resource needs in advance.

9) Explain the Banker’s algorithm with example.


a) Avoids deadlock by checking system’s safe state before allocating resources.
i) Data Structures:
(1) Available, Max, Allocation, and Need matrices.
ii) Working:
(1) For each request, simulate allocation.
(2) Check if system remains in a safe state (at least one execution sequence exists).
(3) If yes → allocate. Else → block.
iii) Example:
(1) 3 processes, 3 resource types.
(2) If P1 requests resources, algorithm checks if all remaining processes can complete after
allocation.
(3) If yes, grant request.
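The safety check at the heart of the algorithm can be sketched as follows (using the classic five-process, three-resource-type textbook snapshot; Need = Max − Allocation):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    indices if one exists, otherwise None."""
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Pretend process i runs to completion and
                # releases everything it holds.
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# classic textbook snapshot: 5 processes, resource types A, B, C
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
safe_seq = is_safe(available, allocation, need)
```

The state is safe: a sequence such as P1, P3, P4, P0, P2 lets every process finish.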

10) Illustrate the swapping process.


a) Moving processes between main memory and disk (backing store) to manage memory.
i) When Used:
(1) When RAM is full, inactive processes are swapped out.
(2) Frees memory for active processes.
ii) Steps:
(1) Save process state.
(2) Move it to disk.
(3) Update PCB with new location.
(4) When needed, swap it back into RAM.
iii) Benefits:
(1) Increases multiprogramming.
(2) Allows more processes to run concurrently.
iv) Downsides:
(1) High disk I/O overhead.
(2) Latency in resuming swapped processes.
11) Explain paging technique and importance of TLBs.
a) Paging:
- Memory is divided into fixed-size pages (logical) and frames (physical).
- A page table maps pages to frames.
- Solves external fragmentation.
Translation Lookaside Buffer (TLB):
(1) A cache for page table entries.
(2) Stores recent page-to-frame translations.
(3) TLB hit: fast memory access.
(4) TLB miss: page table lookup in memory.
Benefits:
- Faster address translation.
- Enhances performance in paging systems.

12) Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, and 125 KB (in order),
how would the first-fit, best-fit, and worst-fit algorithms place processes of size 115 KB, 500 KB,
358 KB, 200 KB, and 375 KB (in order)? Rank the algorithms in terms of how efficiently they use
memory.
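A small simulator (a sketch, not a substitute for the hand-worked Gantt-style answer) can check placements such as those in Q12; with these inputs, first fit and best fit place every process, while worst fit cannot place the final 375 KB process:

```python
def place(partitions, processes, strategy):
    """Simulate contiguous allocation into fixed partitions.
    Returns {process_size: partition_index or None if it cannot fit}.
    strategy: 'first', 'best', or 'worst'. Holes shrink as they are used."""
    holes = list(partitions)
    placement = {}
    for size in processes:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placement[size] = None           # no hole is large enough
        else:
            if strategy == "first":
                i = candidates[0]            # first hole that fits
            elif strategy == "best":
                i = min(candidates, key=lambda i: holes[i])  # tightest fit
            else:
                i = max(candidates, key=lambda i: holes[i])  # largest hole
            holes[i] -= size
            placement[size] = i
    return placement

parts = [300, 600, 350, 200, 750, 125]       # partitions from Q12, in order
procs = [115, 500, 358, 200, 375]            # processes from Q12, in order
```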
13) Consider the given five memory partitions of 150 kb, 250 kb, 500 kb, 200 kb, 400 kb (in order),
how would the first fit, best fit and worst fit algorithms place process of 312 kb, 316 kb, 122 kb,
219 kb (in order) which algorithm is the most efficient one?
14) Consider the given five memory partitions of 100 kb, 500 kb, 200 kb, 300 kb, 600 kb (in order),
how would the first fit, best fit and worst fit algorithms place process of 412 kb, 317 kb, 112 kb,
326 kb (in order) which algorithm is the most efficient one?
15) Consider the following snapshot of a system:

Using the banker’s algorithm, determine whether or not each of the following states is unsafe. If
the state is safe, illustrate the order in which the processes may complete. Otherwise, illustrate why
the state is unsafe.
a. Available = (0, 3, 0, 1)
b. Available = (1, 0, 0, 2)
16) Consider the following snapshot of a system:
Process   Allocation   Maximum   Available
          A B C        A B C     A B C
P0        0 2 0        7 5 3     3 3 2
P1        2 1 0        3 2 2
P2        2 0 2        9 0 2
P3        2 0 1        2 2 2
P4        0 0 1        4 3 3
Suppose P1 request for (0, 0, 3) can the request be granted or not?
MODULE – 4
1) Define inode and explain its significance in file systems.
a) A data structure in UNIX-like file systems that stores metadata about files.
i) Information Stored:
(1) File size, ownership, permissions, timestamps (created, accessed, modified), and pointers to
data blocks.
ii) Significance:
(1) Every file is represented by an inode.
(2) The inode number is used internally by the OS to locate file content.
(3) Inodes do not store file names—names are stored separately in directory entries.
iii) Efficiency:
(1) Enhances file access speed and system performance.
(2) Supports file sharing and linking (via hard links).
Example: Accessing a file like /home/user/file.txt uses the directory to find its inode number, then
the inode to find the data blocks.
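The inode/hard-link behaviour can be observed with Python's os.stat on a POSIX system (the file names here are arbitrary):

```python
import os
import tempfile

# Every file is backed by an inode; hard links are extra directory
# entries that point to the same inode.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "file.txt")
    link = os.path.join(d, "link.txt")
    with open(original, "w") as f:
        f.write("hello")
    os.link(original, link)                  # create a hard link
    same_inode = os.stat(original).st_ino == os.stat(link).st_ino
    link_count = os.stat(original).st_nlink  # inode's link count is now 2
```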

2) Write notes on virtual memory.


a) A memory management technique that gives the illusion of a large, contiguous memory space by
using both physical RAM and disk.
i) Features:
(1) Paging and segmentation are commonly used techniques.
(2) Page tables map virtual addresses to physical addresses.
ii) Benefits:
(1) Allows programs to use more memory than is physically available.
(2) Provides process isolation and security.
(3) Enables multitasking with efficient memory usage.
iii) Swapping and Demand Paging:
(1) Pages are loaded into RAM only when needed (demand paging).
(2) Inactive pages are moved to disk (swapping).
Performance: The TLB (Translation Lookaside Buffer) improves access speed.

3) Difference between a hard link and a soft link.

a)
i) A hard link is another directory entry pointing to the same inode; a soft (symbolic) link is a
separate file that stores the path of its target.
ii) Hard links cannot span file systems or (normally) refer to directories; soft links can do both.
iii) Deleting the original name leaves hard links valid (the inode persists until its link count
reaches 0), but leaves soft links dangling.
4) Attributes of a file
a) Metadata stored about a file that describes its characteristics.
i) Common Attributes:
(1) Name: Human-readable identifier.
(2) Type: Format (e.g., .txt, .exe).
(3) Location: Disk address or inode reference.
(4) Size: File length in bytes.
(5) Protection: Permissions (read/write/execute).
(6) Timestamps: Creation, access, and modification times.
(7) Ownership: User ID (UID) and group ID (GID).
ii) Significance:
(1) Used by the OS to manage files efficiently.
(2) Supports access control and security mechanisms.
(3) Helps in organizing and identifying files.

5) Define UFD and MFD.


a) UFD (User File Directory):
i) Contains information about all files created by a specific user.
ii) Each user has a separate UFD.
iii) Provides file isolation between users.
b) MFD (Master File Directory):
i) Top-level directory containing entries of all UFDs.
ii) Helps in locating UFDs for different users.

6) Explain different file allocation methods.


a) File allocation refers to how disk space is assigned to files:
i) Contiguous Allocation:
- Each file occupies a set of contiguous blocks.
(1) Advantages:
(a) Fast access due to locality.
(b) Simple to implement.
(2) Disadvantages:
(a) Leads to external fragmentation.
(b) Difficult to extend files.
ii) Linked Allocation:
- Each file is a linked list of disk blocks.
- Blocks can be scattered anywhere.
(1) Advantages:
(a) No external fragmentation.
(b) Easy to grow files.
(2) Disadvantages:
(a) Sequential access only.
(b) Overhead of pointers.
iii) Indexed Allocation:
- A special index block holds pointers to all file blocks.
(1) Advantages:
(a) Supports direct and random access.
(b) No external fragmentation.
(2) Disadvantages:
(a) Extra overhead of maintaining index blocks.
Example: UNIX uses indexed allocation through inodes.
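The pointer-chasing that makes linked allocation sequential-only can be sketched in a few lines (the block numbers and contents below are made up for illustration):

```python
# Simulated disk: block number -> (data, pointer to next block)
disk = {
    4: ("Ope", 7),
    7: ("rating ", 2),
    2: ("Systems", None),   # None marks the last block of the file
}

def read_file(start_block):
    """Follow the chain of pointers from the first block to the last."""
    data, block = [], start_block
    while block is not None:
        chunk, block = disk[block]
        data.append(chunk)
    return "".join(data)

print(read_file(4))   # reads blocks 4 -> 7 -> 2 in order: Operating Systems
```

Reaching block 2 requires visiting blocks 4 and 7 first, which is exactly why linked allocation cannot support efficient random access.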
7) Explain with a neat diagram about various steps used to handle a page fault.
a) When a process accesses a page not in memory, a page fault occurs.
Steps:
i) Trap to OS: Page fault is detected and control switches to the OS.
ii) Check Validity: OS checks if the memory access is legal.
iii) Find Page Location: OS locates the page on secondary storage (usually disk).
iv) Free Frame Selection: Select a free frame from memory (may use page replacement if none is
free).
v) Load Page: Load the page from disk into the selected frame.
vi) Update Page Table: Modify the page table with the new frame number.
vii) Resume Execution: Restart the instruction that caused the fault.
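Assuming a free frame is available (so no replacement is needed), the fault-handling path above can be sketched as follows; the names are illustrative, not a real OS API:

```python
backing_store = {0: "page0-data", 1: "page1-data"}  # pages on disk
page_table = {}      # page number -> frame number (resident pages only)
frames = {}          # frame number -> page contents
free_frames = [0, 1, 2]

def access(page):
    if page in page_table:                 # valid and resident: no fault
        return frames[page_table[page]]
    if page not in backing_store:          # step ii: illegal reference
        raise MemoryError("segmentation fault")
    frame = free_frames.pop()              # step iv: pick a free frame
    frames[frame] = backing_store[page]    # step v: load page from disk
    page_table[page] = frame               # step vi: update page table
    return frames[frame]                   # step vii: resume the access

print(access(1))   # first access faults and loads the page
print(access(1))   # second access hits in memory
```

When `free_frames` is empty, a page replacement algorithm (FIFO, LRU, Optimal) would choose a victim frame at step iv.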

8) Discuss the working and benefits of RAID levels.


a) RAID (Redundant Array of Independent Disks) improves performance and reliability of storage.
Common RAID Levels:
i) RAID 0 (Striping):
(1) Data split across disks.
(2) High speed, no redundancy.
ii) RAID 1 (Mirroring):
(1) Data duplicated on two disks.
(2) High fault tolerance, high cost.
iii) RAID 5 (Striping with Parity):
(1) Distributes parity across disks.
(2) Balances speed and fault tolerance.
iv) RAID 6:
(1) Similar to RAID 5 but with double parity.
(2) Survives two disk failures.
v) RAID 10 (1+0):
(1) Combination of RAID 1 and 0.
(2) Excellent performance and reliability.
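The parity used by RAID 5 is a bytewise XOR across the data blocks of a stripe; any single lost block can be rebuilt by XOR-ing the parity with the surviving blocks. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-strings of equal length together."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

stripe = [b"\x01\x02", b"\x0f\x00", b"\x10\xff"]   # data blocks on 3 disks
parity = xor_blocks(stripe)                        # stored on a 4th disk

# Disk holding stripe[1] fails: rebuild from parity + surviving blocks.
rebuilt = xor_blocks([parity, stripe[0], stripe[2]])
print(rebuilt == stripe[1])   # True
```

RAID 6 adds a second, independent parity calculation so that any two failed blocks per stripe can be recovered.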
9) Discuss various file access methods.
a) Access methods define how data in files can be read/written.
Types:
i) Sequential Access:
(1) Data is read or written in order.
(2) Simple and efficient for tape-based storage.
(3) Used in logs, text files.
ii) Direct (Random) Access:
(1) Access any part of file directly using offset.
(2) Efficient for databases, multimedia.
iii) Indexed Access:
(1) An index maps key values to block addresses.
(2) Combines sequential and direct access.
(3) Used in large databases.
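Direct access is what `seek()` exposes in most languages: the program jumps straight to a byte offset instead of reading everything before it. A small sketch with fixed-size records:

```python
# Write a file of fixed-size 4-byte records, then fetch record 2 directly.
RECORD = 4
with open("records.bin", "wb") as f:
    for i in range(5):
        f.write(i.to_bytes(RECORD, "big"))

with open("records.bin", "rb") as f:
    f.seek(2 * RECORD)            # jump straight to record 2 (byte offset 8)
    value = int.from_bytes(f.read(RECORD), "big")
print(value)   # 2
```

Sequential access would instead read records 0 and 1 before reaching record 2.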

10) Explain about single level and two level directory structure.
a) Single-Level Directory:
- All files are in one directory.
- Simple structure.
i) Drawbacks:
(1) Naming conflict: Unique file names required.
(2) Poor organization for large systems.
b) Two-Level Directory:
- Adds user-level directories.
i) Structure:
(1) MFD (Master File Directory) contains user entries.
(2) Each entry points to a UFD (User File Directory).
ii) Benefits:
(1) Isolates user files.
(2) Prevents name conflicts between users.
(3) Improves manageability in multi-user systems.
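The MFD/UFD relationship is essentially a dictionary of dictionaries; this toy sketch shows why two users can reuse the same file name without conflict:

```python
# MFD maps user name -> that user's UFD; each UFD maps file name -> metadata.
mfd = {
    "alice": {"notes.txt": {"size": 120}},
    "bob":   {"notes.txt": {"size": 4096}},   # same name, no clash
}

def lookup(user, filename):
    ufd = mfd[user]          # first level: find the user's UFD in the MFD
    return ufd[filename]     # second level: find the file entry in the UFD

print(lookup("alice", "notes.txt")["size"])   # 120
print(lookup("bob", "notes.txt")["size"])     # 4096
```

A single-level directory would be one flat dictionary, so the second "notes.txt" would overwrite the first.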

11) Consider the following page reference string:


7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1.
Assuming demand paging with three frames, how many page faults would occur for the following
replacement algorithms?
i) LRU replacement
ii) FIFO replacement
iii) Optimal replacement
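Problems like this can be checked mechanically with a small simulator; one possible sketch is below. For this reference string with three frames it reports 17 faults for FIFO, 18 for LRU, and 13 for Optimal:

```python
def fifo(refs, n):
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == n:
                mem.discard(queue.pop(0))   # evict the oldest page
            mem.add(p)
            queue.append(p)
    return faults

def lru(refs, n):
    mem, faults = [], 0          # list ordered least- to most-recently used
    for p in refs:
        if p in mem:
            mem.remove(p)
        else:
            faults += 1
            if len(mem) == n:
                mem.pop(0)       # evict the least recently used page
        mem.append(p)
    return faults

def optimal(refs, n):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == n:
            future = refs[i + 1:]
            # evict the page whose next use is farthest away (or never)
            victim = max(mem, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            mem.remove(victim)
        mem.append(p)
    return faults

refs = [7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1]
print(fifo(refs, 3), lru(refs, 3), optimal(refs, 3))   # 17 18 13
```

The same three functions handle question 12 by changing `refs` and the frame count.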
12) Consider the following page reference string:1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 3, 7, 6, 3, 2, 1, 2, 3, 6. How
many page faults would occur for the following replacement algorithms, assuming three, four
frames? Remember all frames are initially empty.
(1) LRU replacement (2) Optimal replacement.
13) Illustrate the following disk scheduling algorithms with a request queue (0 -199). (Problems
similar to this)
Queue = 75, 13, 192, 12, 164, 24, 165, 167
Head pointer = 53
i) FCFS Scheduling
ii) SSTF Scheduling
iii) C-SCAN Scheduling
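These can also be checked mechanically. The sketch below follows the common textbook convention in which C-SCAN sweeps up to the last cylinder, jumps back to cylinder 0 (with the jump counted in the head movement), and continues upward; texts that ignore the jump will give a smaller total:

```python
def fcfs(head, reqs):
    total = 0
    for r in reqs:               # serve requests strictly in arrival order
        total += abs(head - r)
        head = r
    return total

def sstf(head, reqs):
    pending, total = list(reqs), 0
    while pending:               # always serve the closest pending request
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

def cscan(head, reqs, max_cyl=199):
    up = sorted(r for r in reqs if r >= head)
    down = sorted(r for r in reqs if r < head)
    if not down:
        return (up[-1] - head) if up else 0
    # sweep to the disk edge, jump to cylinder 0, then serve the low requests
    return (max_cyl - head) + max_cyl + down[-1]

queue = [75, 13, 192, 12, 164, 24, 165, 167]
print(fcfs(53, queue), sstf(53, queue), cscan(53, queue))   # 878 265 369
```

With head at 53, FCFS gives 878 cylinders of movement, SSTF 265, and C-SCAN 369 under the stated convention.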
MODULE – 5
1) Write down the advantages of using a distributed file system.
a)
i) Data Sharing: DFS allows users to access and share files across geographically distributed
locations seamlessly.
ii) Fault Tolerance: Redundancy and replication in DFS ensure data is not lost even if one node fails.
iii) Scalability: DFS can grow by adding more nodes without major reconfiguration.
iv) Transparency: Users access files as if they are on a local system; location and access
transparency are provided.
v) Resource Optimization: DFS balances load and storage across multiple servers, improving
efficiency.
vi) Security and Access Control: Centralized management offers better authentication and
permission control.
Explanation: DFS decouples file access from the physical storage location, enhancing availability
and performance. It supports concurrent file access and maintains data consistency across nodes.

2) Justify the need for virtualization.


a)
i) Resource Utilization: Virtualization allows multiple virtual machines (VMs) to run on a single
physical machine, maximizing resource usage.
ii) Isolation: Applications and systems run independently in their own environments, improving
security and stability.
iii) Cost Efficiency: Fewer physical machines are needed, reducing hardware, power, and
maintenance costs.
iv) Testing and Development: Enables easy creation of isolated environments for testing without
affecting the main system.
v) Disaster Recovery: Simplifies backup, snapshot, and migration of virtual environments.
Explanation: Virtualization abstracts hardware resources, enabling flexibility, portability, and
centralized control, which is essential in modern cloud and enterprise computing environments.

3) Recall the concept of virtualization in operating systems.


a) Virtualization is the creation of virtual (rather than actual) versions of resources like OS, server,
storage, or network.
i) Hypervisor Role: A hypervisor sits between hardware and OS, managing virtual machines.
ii) Types: System-level (full) virtualization and application-level virtualization.
iii) Key Features:
(1) Isolation between VMs
(2) Dynamic resource allocation
(3) Snapshot and rollback support
Explanation: In OS-level virtualization, a single OS kernel allows multiple isolated user-space
instances (containers), offering lightweight and efficient virtualization. It helps in cloud deployments
and server consolidation.
4) Explain in detail about Trap-and-emulate virtualization implementation.
a) Used in full virtualization to handle privileged instructions.
i) Trap Mechanism:
(1) If a guest OS tries to execute a privileged instruction, it causes a trap (interrupt).
ii) Emulation:
(1) The hypervisor intercepts the trap and emulates the instruction in a safe manner.
iii) Hypervisor Type: Primarily used in Type 1 hypervisors (bare-metal).
Advantages:
(1) No need to modify the guest OS.
(2) Enables multiple unmodified OSes to run on the same hardware.
Explanation: Trap-and-emulate allows virtual machines to behave like they are running on real
hardware, but with the hypervisor controlling privileged operations. It’s foundational to traditional
VM systems like VMware and early versions of VirtualBox.

5) Why do we need distributed systems?


a) Distributed systems combine multiple independent systems to function as a single logical entity,
improving efficiency, collaboration, and system robustness across physical boundaries.
i) Resource Sharing: Enables access to hardware, software, and data across the network.
ii) Scalability: Systems can grow in size and processing power easily.
iii) Fault Tolerance: Redundant nodes increase system reliability and availability.
iv) Geographic Distribution: Useful for global organizations with multiple locations.
v) Cost Efficiency: Distributes workload over cheaper, commodity hardware.

6) Various types of network-based operating systems.


a) Network-Based Operating Systems (NOS) manage resources over a network and enable
communication among devices.
Types:
i) Peer-to-Peer NOS:
(1) All systems are equal (peers).
(2) Each node can access shared resources directly.
(3) Example: Windows for Workgroups.
ii) Client-Server NOS:
(1) Central server manages resources and access.
(2) Clients request services from the server.
(3) Example: Windows Server, Linux (Samba, NFS).
iii) Middleware-Based NOS:
(1) Abstracts network complexities.
(2) Manages communication, synchronization, and resource sharing.
(3) Example: CORBA, Java RMI.
Explanation: NOS provides services like file sharing, printer access, authentication, and
communication across the network. Its type depends on scale, functionality, and control
requirements.

7) Outline network-based operating systems.


a) Network-based operating systems are designed to provide seamless connectivity and resource
sharing across multiple systems connected through a network.
i) Key Features:
(1) Remote Login and File Access
(2) User Authentication and Permissions
(3) Centralized Resource Management
(4) Network Protocol Support (TCP/IP, NFS, SMB)
ii) Components:
(1) Clients: Request resources.
(2) Servers: Provide services like file sharing, email, web.
(3) Protocols: Handle data exchange.
iii) Examples:
(1) UNIX/Linux with NFS or SSH
(2) Windows Server with Active Directory
Explanation: NOS enhances collaboration and performance in networked environments, making it
fundamental in enterprise, education, and distributed computing systems.

8) Architecture of a distributed system.


a) Distributed systems architecture defines how system components interact across multiple nodes.
i) Main Architectures:
(1) Client-Server:
(a) Clients request services from centralized servers.
(2) Three-Tier:
(a) Client → Application Server → Database Server.
(b) Used in web applications.
(3) Peer-to-Peer (P2P):
(a) All nodes act as both clients and servers.
(b) Example: BitTorrent.
(4) Hybrid:
(a) Combines client-server and P2P models.
ii) Common Features:
(1) Transparency (location, access, failure)
(2) Scalability
(3) Concurrency
(4) Fault Tolerance
Explanation: The choice of architecture depends on system goals like scalability, availability, and
manageability. It determines how distributed components communicate and coordinate.

9) Explain various types of virtual machines.


a) Virtual Machines (VMs) can be classified based on their level of abstraction and use case.
Types:
i) System Virtual Machines:
(1) Emulates entire hardware.
(2) Supports running full OS.
(3) Examples: VMware, VirtualBox, KVM.
ii) Process Virtual Machines:
(1) Runs a single process or application.
(2) Designed for portability.
(3) Examples: Java Virtual Machine (JVM), .NET CLR.
iii) Container-Based Virtualization:
(1) Lightweight, shares OS kernel.
(2) Isolated environments for apps.
(3) Examples: Docker, LXC.
Explanation:
Each type of VM serves a different purpose—from full OS emulation to lightweight application
deployment—playing key roles in system development, testing, and cloud infrastructure.
10) Explain the working of TCP/IP protocol in a distributed system.
a) TCP/IP (Transmission Control Protocol/Internet Protocol) is the foundational protocol suite for
communication in distributed systems.
i) Key Layers:
(1) Application Layer: Handles end-user services (HTTP, FTP, SMTP).
(2) Transport Layer (TCP):
(a) Provides reliable, connection-oriented communication.
(b) Ensures error checking and flow control.
(3) Internet Layer (IP):
(a) Routes packets between source and destination across networks.
(4) Network Interface Layer:
(a) Handles physical transmission.
ii) Working:
(1) TCP breaks messages into segments and reassembles them.
(2) IP handles addressing and routing of packets.
(3) TCP/IP ensures reliable, ordered delivery of data across nodes.
Explanation: TCP/IP enables interoperability between different systems in a distributed
environment, providing robust and scalable communication over LANs and WANs.
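The reliable, ordered delivery TCP provides is visible directly through the socket API. This minimal echo exchange over localhost (an illustrative sketch, not a full distributed service) runs a server thread and a client in one process:

```python
import socket, threading

def server(sock):
    conn, _ = sock.accept()        # TCP three-way handshake completes here
    with conn:
        data = conn.recv(1024)     # TCP delivers the bytes reliably, in order
        conn.sendall(b"echo: " + data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hello")              # TCP segments, sequences, and acknowledges
reply = cli.recv(1024)
print(reply)                       # b'echo: hello'
cli.close()
srv.close()
```

In a real distributed system the server and client would run on different hosts, with IP routing the segments between them.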

11) Discuss the design issues of distributed system.


a) Designing a distributed system involves addressing several critical challenges to ensure consistency,
reliability, and efficiency.
Key Issues:
i) Transparency: Hides complexities (location, failure, access).
ii) Fault Tolerance: System should continue operating despite failures.
iii) Concurrency: Multiple users/processes should operate without conflict.
iv) Security: Protect communication and shared resources.
v) Scalability: Must support growth in users, data, and nodes.
vi) Resource Management: Dynamic allocation and load balancing.
Explanation: A well-designed distributed system balances performance and consistency while
addressing heterogeneity, network latency, and partial failures.

12) Examples for communication protocols in a distributed system.


a) Distributed systems rely on communication protocols to coordinate activities and share data between
nodes.
Common Protocols:
i) HTTP/HTTPS: Web-based client-server communication.
ii) RPC (Remote Procedure Call): Calls procedures on remote systems as if local.
iii) RMI (Remote Method Invocation): Java-based remote object invocation.
iv) gRPC: Modern high-performance RPC using HTTP/2 and protocol buffers.
v) Message Queuing Protocols (MQTT, AMQP): Used in IoT and messaging systems.
vi) TCP/IP: Fundamental for reliable communication.
Explanation: These protocols abstract the complexities of data transmission, ensuring effective
coordination and resource sharing across distributed components.
13) Network structure and communication structure.
a) Network Structure:
i) Defines how nodes are interconnected.
ii) Types:
(1) Star: Central node connected to others.
(2) Bus: All nodes connected to a single communication line.
(3) Ring, Mesh, Hybrid: Various redundancy and topology strategies.
b) Communication Structure:
i) Refers to the way processes exchange information.
ii) Mechanisms:
(1) Client-Server Model
(2) Message Passing
(3) Shared Memory
(4) Sockets
Explanation: A well-designed network and communication structure enables efficient data flow,
reduces latency, and increases system reliability and fault tolerance in distributed environments.

14) Analyze TCP/IP based network OS.


a) TCP/IP-based Network Operating Systems (NOS) enable seamless networking and resource sharing
through standardized protocols.
Features:
i) Uses TCP/IP stack for communication.
ii) Supports services like file sharing, remote login, and printer access.
iii) Common OS Examples: Unix/Linux, Windows Server.
Advantages:
i) Platform-independent communication.
ii) Scalable and reliable.
iii) Broad hardware and software compatibility.
Explanation: TCP/IP-based NOS abstracts networking details from users, allowing devices to
interconnect regardless of architecture. It forms the backbone of internet-based and enterprise
distributed systems.

15) A disk drive has 200 cylinders, numbered from 0 to 199. The drive is currently at cylinder 50, and
the queue of pending requests (in order of arrival) is:
82, 170, 43, 140, 24, 16, 190
Use the C-SCAN scheduling algorithm to compute the total head movement.
16) A disk drive has 200 cylinders, numbered from 0 to 199. The drive is currently at cylinder 50, and
the queue of pending requests (in order of arrival) is:
82, 160, 34, 150, 24, 14, 80
Use the SCAN scheduling algorithm to determine the order of service and compute the total
head movement. Then repeat the head-movement calculation using C-SCAN scheduling.
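Under the textbook SCAN convention, the head sweeps all the way to the last cylinder before reversing (LOOK variants stop at the last request instead, giving a smaller total). The movement for an upward-moving head can be computed as:

```python
def scan_up(head, reqs, max_cyl=199):
    """SCAN with the head initially moving toward higher cylinders."""
    up = sorted(r for r in reqs if r >= head)
    down = sorted(r for r in reqs if r < head)
    if not down:
        return (up[-1] - head) if up else 0
    # travel to the disk edge, then reverse down to the lowest request
    return (max_cyl - head) + (max_cyl - down[0])

print(scan_up(50, [82, 160, 34, 150, 24, 14, 80]))   # 334
```

For question 16 this gives (199 - 50) + (199 - 14) = 334 cylinders of movement, serving 80, 82, 150, 160 on the way up and 34, 24, 14 on the way back.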
