Midsem Exam OS
Unit-1
2 Marks
1. What is an Operating System?
An Operating System (OS) is system software that manages hardware resources and provides services for computer
programs, acting as an interface between the user and the hardware.
2. Define Kernel.
The Kernel is the core component of an OS that manages system resources and communication between hardware
and software, ensuring efficient and secure system operation.
3. Explain the Different Functions of an Operating System and Discuss the Various Services Provided by
an OS
Functions of an OS include:
- Process Management: Manages processes, multitasking, and process synchronization.
- Memory Management: Allocates and manages main memory for active processes.
- File Management: Manages file storage, access, and file system organization.
- Device Management: Controls and coordinates the operation of hardware.
- Security and Protection: Protects data and resources from unauthorized access.
OS Services:
- Program Execution: Loads and executes programs.
- I/O Operations: Manages input/output devices.
- File-System Manipulation: Provides access to files and directories.
- Communication: Enables data exchange between processes.
- Error Detection: Detects and responds to errors within the OS and processes.
4. (a) Explain Dual-Mode Operation in OS with a Block Diagram
Dual-Mode Operation uses two modes, user mode and kernel mode, to protect the system:
- User Mode: Limited, non-privileged access to hardware resources; used by application programs.
- Kernel Mode: Full access to hardware resources; reserved for system-level (OS) tasks.
A block diagram would show the transitions between the two modes: a process in user mode issues a system call,
the mode bit switches to kernel mode for the privileged service, and execution switches back to user mode once
the task completes.
(b) Differentiate Multiprogramming and Time-Sharing:
- Multiprogramming: The OS keeps multiple jobs in memory, switching among them to improve CPU utilization.
- Time-Sharing: The OS divides CPU time among multiple users, enabling apparently simultaneous access and fast response times.
Unit-2
2 Marks
1. Define Process
A process is an instance of a program in execution. It includes the program code and its current activity,
represented by the value of the Program Counter, registers, and variables.
3. Define Schedulers
Schedulers are system components that select processes from a queue to execute on the CPU. They manage
process states and determine which process runs at any given time.
7. Define Thread
A thread is the smallest unit of processing that can be scheduled by an operating system. It shares resources like
memory with other threads in the same process.
8. Define Time Slice
A time slice (or quantum) is the fixed duration of CPU time allocated to a process or thread in a round-robin
scheduling system before the CPU moves on to the next process.
10 Marks
1. a) Define Process? Explain Process State Diagram.
A process is an executing instance of a program, which consists of the program code, data, and a set of resources
required for its execution. Each process goes through different states during its lifetime, and these states are typically
represented in a Process State Diagram, which includes:
- New: The process is being created.
- Ready: The process is ready to execute and is waiting for CPU time.
- Running: The process is currently being executed by the CPU.
- Waiting: The process is waiting for some event, such as I/O completion.
- Terminated: The process has completed its execution and is ended.
The transitions between these states depend on process scheduling, resource availability, and other system events.
Diagram Explanation: A standard Process State Diagram visually depicts the transitions between these states,
highlighting when a process moves from one state to another based on CPU scheduling or events like I/O completion.
2. Consider 3 Processes P1, P2, and P3 with Time Requirements and Arrival Times.
Given:
- Process P1 requires 5 units, arrives at time 0
- Process P2 requires 7 units, arrives at time 1
- Process P3 requires 4 units, arrives at time 3
CPU Scheduling algorithms determine the order of process execution based on criteria like waiting time and
turnaround time. Major algorithms:
- First-Come, First-Serve (FCFS): Processes are scheduled in order of arrival. Simple but can lead to the convoy
effect.
- Shortest Job First (SJF): Selects the process with the shortest burst time. Optimal in terms of minimizing waiting
time but may lead to starvation.
- Priority Scheduling: Processes are scheduled based on priority; higher priority processes execute first.
- Round Robin (RR): Processes are assigned CPU in a circular order with a fixed time slice, ensuring fairness.
Unit-3
2 Marks
1. Define Deadlock
Deadlock is a situation in which a set of processes are blocked because each process is holding a resource and
waiting to acquire a resource held by another process, creating a cycle of dependencies.
4. What Are the Requirements That a Solution to the Critical Section Problem Must Satisfy?
The solution must satisfy three requirements: mutual exclusion (only one process in the critical section at a time),
progress (a process outside the critical section cannot prevent others from entering), and bounded waiting (no
process should wait indefinitely).
6. Define Semaphores
A semaphore is a synchronization primitive used to control access to a common resource by multiple processes. It
uses integer values and operations like "wait" and "signal" to manage concurrent access.
10 Marks
1. What is the Critical Section Problem? Explain with Example.
The critical section problem occurs in concurrent programming, where multiple processes or threads need exclusive
access to shared resources (like variables or files) to avoid conflicts. This problem involves designing a protocol that
allows only one process at a time to enter its critical section (the part of code that accesses shared resources) to
prevent data inconsistencies.
Example:
Consider two threads, T1 and T2, updating a shared variable, `counter`. If both enter the critical section
simultaneously, they might read, update, and store `counter` without seeing each other's changes, leading to
incorrect results. Solutions to the critical section problem must satisfy mutual exclusion, progress, and bounded
waiting requirements to prevent this.
Producer-Consumer Problem (semaphore solution):
Producers use `wait(empty)` and `wait(mutex)` before adding items and `signal(mutex)` and `signal(full)` after,
while consumers use `wait(full)` and `wait(mutex)` before consuming and `signal(mutex)` and `signal(empty)`
afterward. This prevents race conditions and ensures synchronization.
Peterson’s Solution:
Peterson’s algorithm is a classic solution to the critical section problem for two processes. It uses two variables:
- flag[i]: Indicates if process `i` wants to enter the critical section.
- turn: Indicates whose turn it is to enter the critical section.
Algorithm:
- Process `i` sets `flag[i] = true` and `turn = j` before entering the critical section, ensuring that if the other process
(j) wants to enter, it will wait.
- After exiting, `flag[i] = false`, allowing the other process to enter.
This method satisfies mutual exclusion, progress, and bounded waiting, making it suitable for synchronization.
Example:
Consider a `BufferMonitor` that provides methods `insert` and `remove` for managing items in a buffer.
```cpp
// BufferMonitor rendered in real C++: std::mutex provides the monitor's
// implicit lock, condition variables replace `condition full, empty`.
#include <condition_variable>
#include <mutex>
#include <queue>

class BufferMonitor {
    std::mutex m;                          // monitor lock (mutual exclusion)
    std::condition_variable not_full;      // signaled when a slot frees up
    std::condition_variable not_empty;     // signaled when an item arrives
    std::queue<int> buffer;
    static const unsigned capacity = 10;
public:
    void insert(int item) {
        std::unique_lock<std::mutex> lock(m);
        while (buffer.size() == capacity)  // re-check condition after wakeup
            not_full.wait(lock);
        buffer.push(item);
        not_empty.notify_one();
    }
    int remove() {
        std::unique_lock<std::mutex> lock(m);
        while (buffer.empty())             // re-check condition after wakeup
            not_empty.wait(lock);
        int item = buffer.front();
        buffer.pop();
        not_full.notify_one();
        return item;
    }
};
```
This monitor uses conditions to manage buffer state, providing a synchronized access method to avoid race
conditions.
Dining Philosophers Problem:
A solution is to use semaphores or monitors to control access to the forks. One approach:
- Use a semaphore for each fork, allowing only one philosopher to hold it at a time.
- Philosophers pick up the left fork, then the right one, and after eating, they release both forks.
- Implement deadlock prevention by requiring each philosopher to check if both forks are available simultaneously
before picking up.
This method ensures no two philosophers can use the same fork simultaneously, preventing deadlock and
starvation.
Banker's Algorithm Example:
Assume three processes with given resources and maximum requirements. The algorithm verifies whether fulfilling
a request would maintain system safety by simulating allocations and calculating the resulting state. It ensures that at
least one sequence exists that allows all processes to complete without entering deadlock.
A) Semaphore
A semaphore is a variable used to control access to a common resource in concurrent programming. Semaphores
are manipulated with `wait` and `signal` operations, allowing multiple processes to coordinate their access to shared
resources and ensuring mutual exclusion.
B) Monitor
A monitor is a synchronization construct encapsulating shared resources, variables, and procedures in one module,
ensuring only one process accesses the monitor’s procedures at a time. Monitors simplify synchronization by
handling complex resource access control internally, making code easier to maintain and less error-prone.