OS Viva
2. How does an operating system act as an interface between the user and
hardware?
Batch OS
Time-sharing OS
Distributed OS
Real-time OS
Embedded OS
Network OS
o Batch OS: Executes jobs without user interaction. Suitable for tasks
that don’t require immediate feedback.
4. OS Services
Short Answer Questions:
o File Management: Organizes and stores data on disks, allowing for file
creation, access, modification, and deletion.
o I/O Management: Manages input and output operations, ensuring that
data is correctly transferred between peripherals and the computer
system.
5. System Calls
File Management
Device Management
Communication
o File Management System Calls: Open, close, read, write, and delete
files.
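Below is a minimal sketch of these file-management calls on a POSIX system using open(), write(), read(), and close(); the file name demo.txt and the data written are only illustrative. Each call traps into the kernel, which performs the requested operation on behalf of the process.

#include <fcntl.h>      /* open, O_* flags */
#include <unistd.h>     /* read, write, close */
#include <stdio.h>

int main(void) {
    char buf[32];

    /* Create/open a file for writing (file name is illustrative). */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                   /* write 6 bytes        */
    close(fd);

    /* Re-open the same file for reading. */
    fd = open("demo.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof buf - 1); /* read the data back   */
    if (n > 0) { buf[n] = '\0'; printf("read: %s", buf); }
    close(fd);
    return 0;
}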
CHAPTER - 02
1. Processes: Definition, Process Relationship, Different States of a Process,
Process State Transitions, Process Control Block (PCB), Context Switching
1. What is a process?
From New to Ready (when the process is loaded into memory and is ready
to run).
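As a rough illustration of what a Process Control Block holds, the structure below sketches the typical fields (process ID, state, saved registers, memory and file information). The field names and sizes are made up for illustration and do not follow any particular kernel's layout.

#include <stdio.h>

/* Illustrative process states matching the transitions above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Simplified Process Control Block (fields are illustrative only). */
typedef struct pcb {
    int           pid;             /* process identifier        */
    proc_state_t  state;           /* current state             */
    unsigned long program_counter; /* where to resume execution */
    unsigned long registers[16];   /* saved CPU registers       */
    void         *page_table;      /* memory-management info    */
    int           open_files[16];  /* open file descriptors     */
    struct pcb   *next;            /* link in a scheduler queue */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = NEW };
    p.state = READY;               /* New -> Ready: loaded into memory */
    printf("pid %d is now READY\n", p.pid);
    return 0;
}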
Scheduling Algorithms:
1. Explain the First-Come, First-Served (FCFS) Scheduling Algorithm.
o FCFS is a non-preemptive scheduling algorithm where processes are
executed in the order of their arrival in the ready queue. The process
that arrives first is executed first. It is simple but can lead to long
waiting times, especially if a short process arrives after a long one
(convoy effect).
o Assume three processes with the following arrival and burst times:
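The sketch below shows how FCFS completion and waiting times are computed; the arrival and burst times used here are illustrative placeholders, not the values from the original example.

#include <stdio.h>

/* FCFS: processes run in arrival order; waiting time = start - arrival. */
int main(void) {
    /* Illustrative data, already sorted by arrival time. */
    int arrival[] = {0, 1, 2};
    int burst[]   = {24, 3, 3};
    int n = 3, time = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i]; /* CPU idles until arrival */
        int wait = time - arrival[i];
        total_wait += wait;
        printf("P%d waits %d, runs %d..%d\n", i + 1, wait, time, time + burst[i]);
        time += burst[i];
    }
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}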
2. SJF Example:
P3 would run first (since it has the shortest burst time), followed
by P2, and then P1.
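A small non-preemptive SJF sketch with illustrative burst times chosen so that P3 is the shortest job, reproducing the order described above (P3, then P2, then P1).

#include <stdio.h>

/* Non-preemptive SJF: with all processes arriving together, run them in
 * increasing order of burst time. The values below are illustrative. */
int main(void) {
    int burst[] = {7, 4, 1};   /* P1, P2, P3 -> P3 is shortest */
    int done[]  = {0, 0, 0};
    int n = 3, time = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        printf("t=%2d: run P%d (burst %d)\n", time, pick + 1, burst[pick]);
        time += burst[pick];
        done[pick] = 1;
    }
    return 0;
}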
o Mutual exclusion ensures that only one process or thread can access
the critical section at a time, preventing conflicts and data
inconsistencies in shared resources.
5. What is a semaphore?
o A semaphore is a synchronization primitive that controls access to a
shared resource by multiple processes in a concurrent system. It uses
two operations: wait() (decrements the semaphore) and signal()
(increments the semaphore).
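A minimal sketch of wait() and signal() using the POSIX calls sem_wait() and sem_post(), with the semaphore initialized to 1 so it acts as a mutex around a shared counter (compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;                 /* binary semaphore guarding the counter */
long  counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);    /* wait(): decrement, block if already 0 */
        counter++;         /* critical section                      */
        sem_post(&sem);    /* signal(): increment, wake a waiter    */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);  /* initial value 1 -> acts as a mutex */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 200000 */
    sem_destroy(&sem);
    return 0;
}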
o The solution ensures that only one process can enter the critical
section at a time by taking turns: when both processes want to enter,
only the process whose turn it is proceeds. This avoids race conditions
by guaranteeing that at most one process executes in the critical
section at any given time.
The consumer:
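A bounded-buffer sketch showing the consumer (with a matching producer for completeness), assuming counting semaphores empty_slots and full_slots plus a binary mutex semaphore guard the shared buffer; the names, buffer size, and item count are illustrative (compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8
#define ITEMS    20

int   buffer[BUF_SIZE];
int   in = 0, out = 0;        /* next slot to fill / to consume        */
sem_t empty_slots;            /* counts free slots, starts at BUF_SIZE */
sem_t full_slots;             /* counts filled slots, starts at 0      */
sem_t mutex;                  /* binary semaphore guarding the buffer  */

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);            /* wait for a free slot   */
        sem_wait(&mutex);
        buffer[in] = i;
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&full_slots);             /* announce a filled slot */
    }
    return NULL;
}

/* The consumer: wait for a filled slot, remove an item, free the slot. */
void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);             /* wait for an item       */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);            /* free the slot          */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}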
CHAPTER - 04
Deadlocks: Definition, Necessary and Sufficient Conditions for Deadlock,
Deadlock Prevention, Deadlock Avoidance (Banker's Algorithm), Deadlock
Detection, and Recovery
1. What is a deadlock?
o The sufficient condition for deadlock occurs when all four of the
necessary conditions for deadlock hold simultaneously in the system. If
all these conditions are met, deadlock is inevitable.
4. What is deadlock prevention?
2. Hold and Wait: A process holding at least one resource is waiting
to acquire additional resources that are held by other processes.
1. Process Termination:
2. Resource Preemption:
3. Rollback:
Rollback involves restoring a process to a previous safe
state and retrying the execution. It is effective when
combined with transaction logging, as the process state
can be saved and restored when a deadlock is detected.
Example Calculation:
o Available resources:
A = 3, B = 2, C = 2
o Allocated resources:
Step 1: We check if the current available resources (A=3, B=2, C=2) can satisfy the
Need matrix for any process. For example, for P1, the need is (A=1, B=0, C=2),
which can be satisfied by the available resources.
Step 2: After P1 finishes, it releases its resources, and the available resources are
updated. We then check if any other process can proceed based on the updated
available resources, and repeat the process.
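A sketch of the Banker's safety check for resource types A, B, and C. The Available vector (3, 2, 2) and P1's Need (1, 0, 2) are taken from the example above; the remaining Allocation and Need entries are illustrative placeholders.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes P1..P3 (illustrative) */
#define R 3   /* resource types A, B, C          */

int available[R]     = {3, 2, 2};                 /* from the example          */
int allocation[P][R] = { {0, 1, 0},               /* illustrative placeholders */
                         {2, 0, 0},
                         {3, 0, 2} };
int need[P][R]       = { {1, 0, 2},               /* P1's need from the text   */
                         {1, 2, 2},
                         {0, 0, 1} };

/* Banker's safety check: repeatedly find a process whose Need fits in Work;
 * pretend it finishes and release its Allocation back into Work.           */
bool is_safe(void) {
    int  work[R];
    bool finished[P] = {false};
    for (int j = 0; j < R; j++) work[j] = available[j];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                     /* P(i+1) can run to completion */
                for (int j = 0; j < R; j++) work[j] += allocation[i][j];
                finished[i] = true;
                printf("P%d can finish; work now (%d,%d,%d)\n",
                       i + 1, work[0], work[1], work[2]);
                progress = true;
                done++;
            }
        }
        if (!progress) return false;        /* no runnable process -> unsafe */
    }
    return true;
}

int main(void) {
    puts(is_safe() ? "System is in a safe state"
                   : "System is NOT in a safe state");
    return 0;
}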
Summary:
Deadlock is a situation where processes cannot proceed because they are
waiting for each other to release resources.
Deadlock prevention, avoidance, detection, and recovery techniques aim to
either prevent deadlock from occurring, avoid it dynamically, or detect and
recover from it once it happens.
The Banker's Algorithm helps ensure that the system remains in a safe state
by evaluating whether a resource request will lead to a deadlock.
CHAPTER - 05
Memory Management & Virtual Memory:
o The working set refers to the set of pages that a process is actively
using during a given period of time. The working set changes
dynamically as the process executes, and the operating system may
manage it to optimize memory usage.
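A small sketch that prints the working set over a sliding window of recent references; the reference string and window size are illustrative.

#include <stdio.h>
#include <stdbool.h>

#define WINDOW 4   /* working-set window (illustrative) */

/* Working set at time t: the distinct pages referenced in the last WINDOW references. */
int main(void) {
    int refs[] = {1, 2, 1, 3, 4, 4, 3, 5};   /* illustrative reference string */
    int n = sizeof refs / sizeof refs[0];

    for (int t = 0; t < n; t++) {
        int start = (t + 1 >= WINDOW) ? t + 1 - WINDOW : 0;
        printf("t=%d  WS = {", t);
        bool first = true;
        for (int i = start; i <= t; i++) {
            bool seen = false;                /* skip duplicates in the window */
            for (int j = start; j < i; j++)
                if (refs[j] == refs[i]) { seen = true; break; }
            if (!seen) {
                printf("%s%d", first ? " " : ", ", refs[i]);
                first = false;
            }
        }
        printf(" }\n");
    }
    return 0;
}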
Least Recently Used (LRU): LRU replaces the page that has
not been accessed for the longest period of time. It
approximates the optimal algorithm and is commonly used in
practice.
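A sketch of LRU replacement over a made-up reference string, tracking the last-use time of each resident page and evicting the page whose last use is oldest.

#include <stdio.h>

#define FRAMES 3

/* LRU: on a page fault with all frames full, evict the page whose last use is oldest. */
int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* illustrative reference string */
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES], last_used[FRAMES];
    int loaded = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int page = refs[t], hit = -1;
        for (int f = 0; f < loaded; f++)
            if (frame[f] == page) { hit = f; break; }

        if (hit >= 0) {
            last_used[hit] = t;               /* hit: refresh its timestamp */
        } else {
            faults++;
            int victim;
            if (loaded < FRAMES) {
                victim = loaded++;            /* a free frame is available  */
            } else {
                victim = 0;                   /* pick least recently used   */
                for (int f = 1; f < FRAMES; f++)
                    if (last_used[f] < last_used[victim]) victim = f;
            }
            frame[victim] = page;
            last_used[victim] = t;
            printf("t=%d: fault on page %d\n", t, page);
        }
    }
    printf("total faults: %d\n", faults);
    return 0;
}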
Summary:
CHAPTER - 06
I/O Systems, File & Disk Management:
SCAN
3. What are the differences between linear list and hash table directory
implementation?
o Linear List: Files are stored in a list, and searching for a file requires
traversing the list sequentially. This method is simple but inefficient for
large directories.
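A tiny sketch contrasting the two lookups the question refers to: the linear list scans entries one by one, while the hash table jumps straight to a bucket computed from the file name. The toy hash function, sizes, and file names are illustrative, and collisions are ignored for brevity (a real implementation would chain entries).

#include <stdio.h>
#include <string.h>

#define MAX_FILES 8
#define BUCKETS   8

/* Linear-list directory: sequential search over all entries. */
const char *dir_list[MAX_FILES] = {"a.txt", "b.txt", "notes.md", NULL};

int linear_lookup(const char *name) {
    for (int i = 0; i < MAX_FILES && dir_list[i]; i++)
        if (strcmp(dir_list[i], name) == 0) return i;    /* O(n) scan */
    return -1;
}

/* Hash-table directory: bucket index computed from the name (toy hash). */
const char *dir_hash[BUCKETS];

unsigned bucket_of(const char *name) {
    unsigned h = 0;
    while (*name) h = h * 31 + (unsigned char)*name++;
    return h % BUCKETS;
}

void hash_insert(const char *name) { dir_hash[bucket_of(name)] = name; }

int hash_lookup(const char *name) {
    const char *e = dir_hash[bucket_of(name)];           /* O(1) expected */
    return (e && strcmp(e, name) == 0) ? (int)bucket_of(name) : -1;
}

int main(void) {
    hash_insert("notes.md");
    printf("linear: %d, hash bucket: %d\n",
           linear_lookup("notes.md"), hash_lookup("notes.md"));
    return 0;
}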
1. Explain the concept of I/O hardware and its role in data transfer.
o When the CPU needs to read data from an I/O device, it sends a
command to the device controller. The controller processes the data
transfer, and once the operation is complete, it sends an interrupt
signal to the CPU to indicate that the I/O operation is done. This
mechanism enables efficient data transfer, allowing the CPU to
continue its processing while I/O devices handle their tasks.
o Direct Memory Access (DMA) is often used to improve I/O
performance by allowing peripheral devices to directly access the main
memory without CPU intervention, reducing overhead and speeding up
data transfer.
2. Describe the file management system in detail.
2. SSTF (Shortest Seek Time First): The disk arm moves to the
nearest request, minimizing the seek time. This can lead to
starvation of requests that are far away.
o Consider a disk with 100 cylinders (numbered 0 to 99) and a disk arm
at cylinder 20. The requests are for cylinders 10, 30, 40, 60, and 80.
o SCAN Algorithm:
The disk arm first moves toward the highest cylinder (99),
servicing requests along the way, and then reverses direction.
The order of service will be: 30, 40, 60, 80, and then 10 on the
return sweep.
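A sketch that replays this SCAN example: the head starts at cylinder 20, moves toward higher cylinders first, then reverses to reach cylinder 10.

#include <stdio.h>
#include <stdlib.h>

/* SCAN (elevator): service requests in the current direction of head
 * movement, go to the end of the disk, then reverse. */
static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int requests[] = {10, 30, 40, 60, 80};   /* cylinders from the example */
    int n = 5, start = 20, head = 20, movement = 0;

    qsort(requests, n, sizeof(int), cmp);

    /* Pass 1: move toward cylinder 99, servicing 30, 40, 60, 80 on the way. */
    for (int i = 0; i < n; i++)
        if (requests[i] >= start) {
            movement += requests[i] - head;
            head = requests[i];
            printf("service %d\n", head);
        }
    movement += 99 - head;                    /* continue to the last cylinder */
    head = 99;

    /* Pass 2: reverse direction and service the remaining lower request (10). */
    for (int i = n - 1; i >= 0; i--)
        if (requests[i] < start) {
            movement += head - requests[i];
            head = requests[i];
            printf("service %d\n", head);
        }
    printf("total head movement: %d cylinders\n", movement);
    return 0;
}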
Summary:
I/O systems manage the communication between the CPU and peripheral
devices, optimizing data transfer through device controllers, interrupt
handling, and DMA.
File management organizes and stores data efficiently using file types,
access methods, and various allocation strategies like contiguous, linked, and
indexed allocation.