
MIDSEM EXAM (OS).

Unit-1
2 Marks
1. What is an Operating System?
An Operating System (OS) is software that manages hardware resources and provides services for computer
programs, acting as an interface between the user and the hardware.

2. Define Kernel.
The Kernel is the core component of an OS that manages system resources and communication between hardware
and software, ensuring efficient and secure system operation.

3. What are Batch Systems?


Batch systems are OS types where jobs are grouped and processed in batches without user interaction, making
them suitable for long, repetitive tasks.

4. How does Dual-Mode Operation work?


Dual-Mode Operation allows the OS to operate in two modes: user mode and kernel mode. This separation helps
protect the OS and system resources from user interference; the CPU switches to kernel mode when a privileged
operation (such as a system call or interrupt) must be handled, and returns to user mode afterwards.

5. What is meant by Time-Sharing Systems?


Time-sharing systems allow multiple users to access and share system resources simultaneously, with the OS
allocating time slots for each process, giving the illusion of concurrent execution.

6. What are the advantages of Multiprogramming?


Multiprogramming increases CPU utilization by keeping several programs in memory at once, so the CPU can switch
to another job whenever the current one waits for I/O, reducing idle time and improving overall system efficiency.

7. What are Multiprocessor Systems & their advantages?


Multiprocessor systems have multiple CPUs working together, which increases processing speed, reliability, and
system throughput by sharing tasks across processors.

8. List the services provided by an Operating System.


OS services include process management, memory management, file management, device management, security,
and user interface support.

9. Define System Calls.


System calls are interfaces provided by the OS that allow user applications to request services such as file handling,
process control, and communication from the OS.

10. What is a Virtual Machine?


A Virtual Machine (VM) is an emulation of a computer system that runs on physical hardware, allowing multiple OS
instances on a single physical machine and providing isolation between them.
10 Marks
1. Define Operating System and Explain the Various Types of Operating Systems
An Operating System (OS) is software that manages computer hardware and software resources and provides
services for computer programs. It serves as an intermediary between users and the computer hardware.

Types of Operating Systems:


- Batch Operating System: Executes jobs in batches without user interaction.
- Time-Sharing Operating System: Allows multiple users to interact with a computer system simultaneously.
- Distributed Operating System: Manages a group of distinct computers and makes them appear as a single system
to the users.
- Network Operating System: Manages network resources and allows shared access to resources within a network.
- Real-Time Operating System: Processes data in real-time with strict time constraints, suitable for critical
applications like industrial controls.
- Embedded Operating System: Designed for embedded systems, optimized to perform specific tasks with limited
resources.

2. (a) Explain Operating System Structures


Operating System structures organize OS functions to ensure efficiency and security. Common
structures include:
- Monolithic Structure: A single large program with all OS functions in a single level.
- Layered Structure: Divides OS into layers, each providing services to the layer above.
- Microkernel Structure: Minimal kernel with essential services, adding other services as modules.
- Modular Approach: Uses loadable kernel modules that can be dynamically loaded or unloaded.

(b) Explain System Programs


System programs provide a convenient environment for program development and execution. Types
include:
- File Management Programs: Handle file creation, deletion, and manipulation.
- Status Information Programs: Provide system information, like CPU usage and memory statistics.
- File Modification Programs: Editors for modifying content in files.
- Programming Language Support: Assemblers, compilers, and interpreters.
- Communication Programs: Allow processes to exchange information.

3. Explain the Different Functions of an Operating System and Discuss the Various Services Provided by
an OS
Functions of an OS include:
- Process Management: Manages processes, multitasking, and process synchronization.
- Memory Management: Allocates and manages main memory for active processes.
- File Management: Manages file storage, access, and file system organization.
- Device Management: Controls and coordinates the operation of hardware.
- Security and Protection: Protects data and resources from unauthorized access.

OS Services:
- Program Execution: Loads and executes programs.
- I/O Operations: Manages input/output devices.
- File-System Manipulation: Provides access to files and directories.
- Communication: Enables data exchange between processes.
- Error Detection: Detects and responds to errors within the OS and processes.
4. (a) Explain Dual-Mode Operation in OS with a Block Diagram
Dual-Mode Operation uses two modes, user mode and kernel mode, to protect the system:
- User Mode: Limited access to hardware resources, only for general applications.
- Kernel Mode: Full access to hardware resources, reserved for system-level tasks.

A block diagram would show transitions between these two modes when a process requires services only
accessible in kernel mode, switching back to user mode after the task.
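As an illustration (a minimal sketch, not part of the original question), an ordinary C program runs in user mode and crosses into kernel mode only for the duration of a system call such as `write()`:

```c
#include <unistd.h>
#include <string.h>

int main(void) {
    const char *msg = "hello from user mode\n";
    /* The program executes in user mode; the write() system call traps into
       the kernel, which performs the I/O in kernel mode and then returns
       control (and the result) to user mode. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```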

(b) Define OS and Explain Multiprogramming and Time-Sharing Systems


An OS controls and manages hardware and software resources, providing essential services for applications.

- Multiprogramming: OS keeps multiple jobs in memory, switching among them to improve CPU utilization.
- Time-Sharing: OS divides CPU time among multiple users, enabling simultaneous access and fast response times.

5. (a) Explain Virtual Machines (VMs) Concept


Virtual Machines emulate a complete computer system within a host machine, isolating environments and enabling
multiple OS instances on a single physical device. VMs provide resource sharing, portability, and protection.

(b) Differences between Monolithic Kernel and Microkernel


- Monolithic Kernel: Includes all OS services within a single kernel space, enhancing performance but limiting
modularity.
- Microkernel: Minimal kernel, with essential services only, offering greater stability and security at the cost of some
performance.

6. (a) Explain System Calls with Examples


System calls allow user applications to request OS services. Types include:
- Process Control: `fork()` to create processes.
- File Manipulation: `open()` to access files.
- Device Management: `ioctl()` for device control.
- Information Maintenance: `getpid()` to retrieve process ID.
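A minimal sketch (assuming a POSIX system) that exercises several of the calls listed above in one program:

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    printf("parent pid: %d\n", getpid());                 /* information maintenance */

    int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644);  /* file manipulation */
    if (fd >= 0) {
        write(fd, "created via open()\n", 19);
        close(fd);
    }

    pid_t pid = fork();                                   /* process control */
    if (pid == 0) {
        printf("child pid: %d\n", getpid());
        _exit(0);
    }
    wait(NULL);                                           /* wait for the child to finish */
    return 0;
}
```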

(b) Different Operations Performed by OS


Operations include managing memory allocation, scheduling processes, handling I/O requests, providing access
control, maintaining system stability, and providing an interface for system management.

7. Explain Computing Environments


Computing environments are setups where computing occurs, including:
- Traditional Computing: Desktops, laptops, and stand-alone systems.
- Mobile Computing: Smartphones and tablets, requiring lightweight OS.
- Distributed Computing: Multiple machines work together on a network to solve complex problems.
- Cloud Computing: Resources and services provided over the internet.
- Client-Server Computing: Resources shared between clients and a centralized server.

8. (a) Explain Different Types of System Calls with Examples


- Process Control: `fork()` creates a new process; `exec()` replaces the current process image with a new program.
- File Manipulation: `read()` and `write()` access files.
- Device Management: `open()` initiates device use.
- Information Maintenance: `gettimeofday()` checks system time.
- Communication: `send()` and `recv()` for network communication.
(b) Functionalities of Operating Systems
Functionalities include process, memory, file, and device management, providing a user interface, security, and
error handling, ensuring efficient system operation.

9. (a) Difference between Multitasking and Multiprogramming


- Multitasking: Rapidly switches the CPU among several tasks (often for a single interactive user), so they appear to run simultaneously.
- Multiprogramming: Increases CPU usage by keeping multiple jobs in memory, executing them in turn.

(b) User and OS Interface


The user interface is where users interact with the OS, typically through CLI or GUI, allowing access to OS
functionalities and resource control.

10. Explain Different Types of System Calls


- Process Control System Calls: Manage processes (e.g., `fork`, `exec`).
- File Management System Calls: Handle files and directories (e.g., `open`, `close`).
- Device Management System Calls: Manage device operations (e.g., `ioctl`).
- Information Maintenance Calls: Provide system info (e.g., `getpid`).
- Communication System Calls: Facilitate communication between processes (e.g., `pipe`, `send`, `recv`).

Unit-2
2 Marks
1. Define Process
A process is an instance of a program in execution. It includes the program code and its current activity,
represented by the value of the Program Counter, registers, and variables.

2. What is meant by the state of the process?


The state of a process indicates its current condition in the system, such as New, Ready, Running, Waiting, or
Terminated, based on its activity.

3. Define Schedulers
Schedulers are system components that select processes from a queue to execute on the CPU. They manage
process states and determine which process runs at any given time.

4. What requirements must be satisfied by a solution to the critical section problem?


The solution to a critical section problem must satisfy three conditions: mutual exclusion, progress, and bounded
waiting.

5. What is the sequence of operations by which a process utilizes a resource?


A process typically requests a resource, uses the resource, and then releases it once done, allowing other processes
to use the resource.

6. What are the types of scheduler?


There are three types of schedulers: Long-term (job scheduler), Short-term (CPU scheduler), and Medium-term
(swapping scheduler).

7. Define Thread
A thread is the smallest unit of processing that can be scheduled by an operating system. It shares resources like
memory with other threads in the same process.
8. Define Time Slice
A time slice is the fixed duration of CPU time allocated to a process or thread by the scheduler in a round-robin
system before the CPU moves on to the next process.

9. What does PCB contain?


The Process Control Block (PCB) contains essential information about a process, such as process ID, process state,
CPU registers, memory management information, and scheduling information.

10. What are the 3 different types of scheduling queues?


The three main scheduling queues are the Job Queue (holds all processes), the Ready Queue (holds processes in
memory, ready to execute), and the Device Queue (holds processes waiting for I/O).

10 Marks
1. a) Define Process? Explain Process State Diagram.
A process is an executing instance of a program, which consists of the program code, data, and a set of resources
required for its execution. Each process goes through different states during its lifetime, and these states are typically
represented in a Process State Diagram, which includes:
- New: The process is being created.
- Ready: The process is ready to execute and is waiting for CPU time.
- Running: The process is currently being executed by the CPU.
- Waiting: The process is waiting for some event, such as I/O completion.
- Terminated: The process has completed its execution and is ended.
The transitions between these states depend on process scheduling, resource availability, and other system events.

Diagram Explanation: A standard Process State Diagram visually depicts the transitions between these states,
highlighting when a process moves from one state to another based on CPU scheduling or events like I/O completion.

b) Explain About Process Schedulers.


Process schedulers are responsible for selecting processes from different queues (ready queue, waiting queue, etc.)
and allocating CPU time to them. There are three types:
- Long-Term Scheduler: Also known as the job scheduler, it selects processes from the job pool and loads them into
the ready queue. It controls the degree of multiprogramming.
- Short-Term Scheduler: Also called the CPU scheduler, it selects processes from the ready queue for execution. This
scheduler operates frequently and has a direct impact on CPU utilization.
- Medium-Term Scheduler: This scheduler is responsible for swapping processes in and out of memory, optimizing
the system's performance by managing processes between main memory and secondary storage.

2. Consider 3 Processes P1, P2, and P3 with Time Requirements and Arrival Times.

Given:
- Process P1 requires 5 units, arrives at time 0
- Process P2 requires 7 units, arrives at time 1
- Process P3 requires 4 units, arrives at time 3

(i) Round Robin Scheduling (Quantum = 2 units)


Gantt Chart:
- P1 (0-2) → P2 (2-4) → P1 (4-6) → P3 (6-8) → P2 (8-10) → P1 (10-11) → P3 (11-13) → P2 (13-16)
- Completion Order: P1 (at 11), P3 (at 13), P2 (at 16)
- Waiting Time = Turnaround - Burst: P1 = (11 - 0) - 5 = 6, P2 = (16 - 1) - 7 = 8, P3 = (13 - 3) - 4 = 6
- Average Waiting Time = (6 + 8 + 6) / 3 ≈ 6.67 units
(ii) First-Come, First-Serve (FCFS) Scheduling
- Gantt Chart:
- P1 (0-5) → P2 (5-12) → P3 (12-16)
- Completion Order: P1, P2, P3
- Waiting Time: P1 = 0, P2 = 5 - 1 = 4, P3 = 12 - 3 = 9
- Average Waiting Time = (0 + 4 + 9) / 3 ≈ 4.33 units

3. Explain CPU Scheduling Algorithms with Examples.

CPU Scheduling algorithms determine the order of process execution based on criteria like waiting time and
turnaround time. Major algorithms:
- First-Come, First-Serve (FCFS): Processes are scheduled in order of arrival. Simple but can lead to the convoy
effect.
- Shortest Job First (SJF): Selects the process with the shortest burst time. Optimal in terms of minimizing waiting
time but may lead to starvation.
- Priority Scheduling: Processes are scheduled based on priority; higher priority processes execute first.
- Round Robin (RR): Processes are assigned CPU in a circular order with a fixed time slice, ensuring fairness.

4. a) Explain Scheduling Criteria.


Scheduling criteria include:
- CPU Utilization: Maximizes CPU activity.
- Throughput: Measures the number of processes completed per time unit.
- Turnaround Time: Total time to complete a process.
- Waiting Time: Time a process waits in the ready queue.
- Response Time: Time from submission to the first response.

b) Evaluate FCFS CPU Scheduling for Given Processes


Processes: P1 (24), P2 (3), P3 (5), P4 (6). Gantt chart and average waiting time calculation based on FCFS scheduling.
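A worked sketch, assuming all four processes arrive at time 0 in the order listed:
- Gantt Chart: P1 (0-24) → P2 (24-27) → P3 (27-32) → P4 (32-38)
- Waiting Times: P1 = 0, P2 = 24, P3 = 27, P4 = 32
- Average Waiting Time = (0 + 24 + 27 + 32) / 4 = 20.75 units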

5. Evaluate SJF CPU Scheduling Algorithm for Given Problem.


Processes: P1 (8), P2 (4), P3 (9), P4 (5), Arrival Times: 0, 1, 2, 3. Explain SJF scheduling with Gantt chart and calculate
average waiting time.
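A worked sketch, assuming the preemptive variant of SJF (shortest-remaining-time-first):
- Gantt Chart: P1 (0-1) → P2 (1-5) → P4 (5-10) → P1 (10-17) → P3 (17-26)
- Waiting Time = Turnaround - Burst: P1 = 17 - 0 - 8 = 9, P2 = 5 - 1 - 4 = 0, P3 = 26 - 2 - 9 = 15, P4 = 10 - 3 - 5 = 2
- Average Waiting Time = (9 + 0 + 15 + 2) / 4 = 6.5 units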

6. Evaluate Round Robin CPU Scheduling with Given Problem.


Given Processes: P1 (10), P2 (5), P3 (18), P4 (6), Time Slice = 3 ms. Explain Round Robin with Gantt chart and
compute average waiting time.
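A worked sketch, assuming all four processes arrive at time 0 in the order P1, P2, P3, P4:
- Gantt Chart: P1 (0-3) → P2 (3-6) → P3 (6-9) → P4 (9-12) → P1 (12-15) → P2 (15-17) → P3 (17-20) → P4 (20-23) → P1 (23-26) → P3 (26-29) → P1 (29-30) → P3 (30-39)
- Waiting Time = Completion - Burst: P1 = 30 - 10 = 20, P2 = 17 - 5 = 12, P3 = 39 - 18 = 21, P4 = 23 - 6 = 17
- Average Waiting Time = (20 + 12 + 21 + 17) / 4 = 17.5 ms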

7. Explain in Detail Inter-Process Communication (IPC).


IPC allows processes to communicate and synchronize their actions. IPC methods include:
- Message Passing: Processes communicate by sending and receiving messages.
- Shared Memory: Multiple processes access a common memory space.
IPC is essential for resource sharing, data exchange, and process synchronization, ensuring coordination and data
integrity.
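A minimal sketch of message passing between related processes using a POSIX pipe (illustrative only):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) return 1;      /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                  /* child acts as the sender */
        close(fds[0]);
        const char *msg = "data from child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    /* parent acts as the receiver */
    close(fds[1]);
    char buf[64];
    read(fds[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}
```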

8. a) Process State Diagram with Sketch


Diagram depicting transitions: New, Ready, Running, Waiting, Terminated, with arrows showing state changes.

b) Write About Threads


Threads are lightweight processes sharing the same resources within a process. Types include user-level and kernel-
level threads.
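A small illustrative sketch (assuming POSIX threads) of two threads sharing the same process resources:

```c
#include <stdio.h>
#include <pthread.h>

/* Each thread runs this function but shares the process's address space. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running in the same address space\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```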
9. a) Difference Between User-Level and Kernel-Level Threads
- User-Level Threads: Managed by user libraries; lightweight and fast but cannot utilize multiple processors.
- Kernel-Level Threads: Managed by OS kernel; each thread is treated as a separate entity, capable of running on
multiple processors.

b) Synchronization and Mechanisms


Synchronization coordinates the execution of processes to avoid conflicts over shared resources. Mechanisms
include:
- Locks: Basic mechanism to control access to resources.
- Semaphores: Counting mechanisms to signal resource availability.
- Monitors: High-level synchronization constructs encapsulating data and procedures.
10. a) Criteria for Evaluating CPU Scheduling Algorithms
Criteria include maximizing CPU utilization, throughput, minimizing turnaround and waiting times, and response
time to ensure fairness and efficiency.

b) Process Control Block (PCB)


The PCB contains all information about a process, including process ID, state, CPU registers, scheduling
information, memory management, and I/O status. The PCB helps the OS manage and schedule processes effectively.

Unit-3
2 Marks
1. Define Deadlock
Deadlock is a situation in which a set of processes are blocked because each process is holding a resource and
waiting to acquire a resource held by another process, creating a cycle of dependencies.

2. Give the Condition Necessary for a Deadlock Situation to Arise


Four conditions must hold for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait.
These conditions are collectively known as the Coffman conditions.

3. Define Race Condition


A race condition occurs when the behavior of software depends on the timing or sequence of uncontrollable
events, leading to unpredictable and incorrect results when two or more processes access shared data concurrently.

4. What Are the Requirements That a Solution to the Critical Section Problem Must Satisfy?
The solution must satisfy three requirements: mutual exclusion (only one process in the critical section at a time),
progress (a process outside the critical section cannot prevent others from entering), and bounded waiting (no
process should wait indefinitely).

5. Define Starvation in Deadlock


Starvation occurs when a process is indefinitely delayed from acquiring the resources it needs to proceed, even
though those resources are available, due to scheduling policies or other processes continually getting prioritized.

6. Define Semaphores
A semaphore is a synchronization primitive used to control access to a common resource by multiple processes. It
uses integer values and operations like "wait" and "signal" to manage concurrent access.

7. Name Some Classic Problems of Synchronization


Classic synchronization problems include the Producer-Consumer problem, Dining Philosophers problem, Readers-
Writers problem, and Sleeping Barber problem.
8. Define ‘Safe State’
A safe state is a state in which the system can allocate resources to each process in some order and avoid a
deadlock, ensuring that there is a sequence that allows all processes to complete.

9. What is the Critical Section Problem?


The critical section problem is the challenge of designing a protocol to manage access to shared resources in
concurrent systems so that only one process accesses the critical section at a time.

10. Define Busy Waiting and Spinlock


Busy waiting occurs when a process repeatedly checks for a condition to be met, wasting CPU time. A spinlock is a
type of busy waiting where a process continuously checks a lock variable to gain access to a resource.

10 Marks
1. What is the Critical Section Problem? Explain with Example.
The critical section problem occurs in concurrent programming, where multiple processes or threads need exclusive
access to shared resources (like variables or files) to avoid conflicts. This problem involves designing a protocol that
allows only one process at a time to enter its critical section (the part of code that accesses shared resources) to
prevent data inconsistencies.

Example:
Consider two threads, T1 and T2, updating a shared variable, `counter`. If both enter the critical section
simultaneously, they might read, update, and store `counter` without seeing each other's changes, leading to
incorrect results. Solutions to the critical section problem must satisfy mutual exclusion, progress, and bounded
waiting requirements to prevent this.
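A minimal sketch of the `counter` example (assuming POSIX threads; the function name `increment` is illustrative), with a mutex acting as the entry and exit sections:

```c
#include <stdio.h>
#include <pthread.h>

long counter = 0;                       /* shared variable */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* entry section: acquire the lock */
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);    /* exit section: release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* 200000; without the mutex it may be less */
    return 0;
}
```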

2. What is Semaphore? Explain Producer-Consumer Problem Using Semaphore.


A semaphore is a synchronization mechanism used to manage concurrent access to shared resources. It uses two
main operations:
- wait (P operation): Decrements the semaphore’s value and waits if the value is negative.
- signal (V operation): Increments the semaphore’s value and allows waiting processes to proceed if they were
blocked.

Producer-Consumer Problem Using Semaphore:


In this problem, producers generate items and place them in a buffer, while consumers remove items. Using
semaphores, we define:
- empty (semaphore): Tracks empty slots in the buffer.
- full (semaphore): Tracks filled slots.
- mutex (semaphore): Ensures mutual exclusion in accessing the buffer.

Producers call `wait(empty)` and `wait(mutex)` before adding an item, then `signal(mutex)` and `signal(full)`
afterwards; consumers call `wait(full)` and `wait(mutex)` before removing an item, then `signal(mutex)` and
`signal(empty)` afterwards. This prevents race conditions and keeps producer and consumer synchronized.
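A minimal sketch of this scheme using POSIX semaphores and threads (the buffer size and item counts are illustrative):

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                                 /* buffer capacity */
int buffer[N];
int in = 0, out = 0;

sem_t empty_slots, full_slots, mutex;       /* the empty, full, and mutex semaphores */

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);             /* wait(empty): block if the buffer is full */
        sem_wait(&mutex);                   /* wait(mutex): enter the critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);                   /* signal(mutex) */
        sem_post(&full_slots);              /* signal(full): one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);              /* wait(full): block if the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);             /* signal(empty): one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```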

3. Define Process Synchronization and Explain Peterson’s Solution Algorithm.


Process synchronization coordinates the execution of processes to prevent conflicts over shared resources, ensuring
data integrity and consistency.

Peterson’s Solution:
Peterson’s algorithm is a classic solution to the critical section problem for two processes. It uses two variables:
- flag[i]: Indicates if process `i` wants to enter the critical section.
- turn: Indicates whose turn it is to enter the critical section.
Algorithm:
- Process `i` sets `flag[i] = true` and `turn = j` before entering the critical section, ensuring that if the other process
(j) wants to enter, it will wait.
- After exiting, `flag[i] = false`, allowing the other process to enter.
This method satisfies mutual exclusion, progress, and bounded waiting, making it suitable for synchronization.
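A minimal sketch of Peterson's algorithm in C, in its textbook form (on modern hardware, atomics or memory fences would additionally be needed to prevent instruction reordering):

```c
#include <stdbool.h>

/* Shared variables for two processes, i = 0 and j = 1. */
volatile bool flag[2] = { false, false };
volatile int turn = 0;

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = true;      /* announce interest in entering */
    turn = j;            /* give the other process priority */
    while (flag[j] && turn == j)
        ;                /* busy-wait while the other process wants in and holds the turn */
}

void exit_critical_section(int i) {
    flag[i] = false;     /* allow the other process to proceed */
}
```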

4. What is Monitor? Explain with Example Using Monitor.


A monitor is a high-level synchronization construct that bundles shared data, variables, and the methods to access
them. Only one process can execute a monitor’s method at a time, simplifying complex synchronization issues by
ensuring mutual exclusion.

Example:
Consider a `BufferMonitor` that provides methods `insert` and `remove` for managing items in a buffer.
```cpp
monitor BufferMonitor {
    condition full, empty;        // condition variables for "buffer full" and "buffer empty"

    void insert(item) {
        if (buffer is full)
            wait(full);           // block until a consumer signals that space is free
        add item to buffer;
        signal(empty);            // wake a consumer waiting for an item
    }

    void remove(item) {
        if (buffer is empty)
            wait(empty);          // block until a producer signals that an item exists
        remove item from buffer;
        signal(full);             // wake a producer waiting for space
    }
}
```
This monitor uses conditions to manage buffer state, providing a synchronized access method to avoid race
conditions.

5. Explain the Solution for the Dining-Philosophers Problem.


The Dining Philosophers problem illustrates synchronization and resource allocation issues. Five philosophers share
a table with five forks, each needing two forks to eat. If all pick up the left fork simultaneously, a deadlock occurs.

Solution:
A solution is to use semaphores or monitors to control access to forks. One approach:
- Use a semaphore for each fork, allowing only one philosopher to hold it at a time.
- Philosophers pick up the left fork, then the right one, and after eating, they release both forks.
- Implement deadlock prevention by requiring each philosopher to check if both forks are available simultaneously
before picking up.

This method ensures no two philosophers can use the same fork simultaneously, preventing deadlock and
starvation.
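A minimal sketch of one common deadlock-free variant (resource ordering: the last philosopher picks up the right fork first), using one POSIX semaphore per fork:

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t fork_sem[N];                       /* one binary semaphore per fork */

void *philosopher(void *arg) {
    int i = (int)(long)arg;
    int first = i, second = (i + 1) % N; /* usually: left fork, then right fork */
    if (i == N - 1) {                    /* break the circular wait: the last        */
        first = (i + 1) % N;             /* philosopher picks up the right fork first */
        second = i;
    }
    sem_wait(&fork_sem[first]);
    sem_wait(&fork_sem[second]);
    printf("philosopher %d is eating\n", i);   /* eat */
    sem_post(&fork_sem[second]);
    sem_post(&fork_sem[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```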

6. a) Methods for Handling Deadlock


Deadlock can be handled in four ways:
- Deadlock Prevention: Structurally negating one of the Coffman conditions to avoid deadlock.
- Deadlock Avoidance: Dynamically determining safe allocation using algorithms like Banker’s Algorithm.
- Deadlock Detection and Recovery: Allowing deadlock to occur but identifying and resolving it afterward.
- Ignoring Deadlock: Used in systems where deadlock occurrence is rare, known as the Ostrich Algorithm.

b) Write About Deadlock and Starvation


- Deadlock: Occurs when processes form a circular waiting pattern, each holding a resource and waiting for another
held by a different process.
- Starvation: When a process is continually denied necessary resources, leading to indefinite delay, often due to
prioritization policies.

7. a) Explain About Deadlock Avoidance


Deadlock avoidance algorithms, like the Banker’s Algorithm, use process resource requests and system state to
ensure safe allocation sequences. They allocate resources only if the system remains in a safe state, preventing
deadlock by ensuring that every request can be safely granted.

b) Explain How to Recover from Deadlock


Deadlock recovery involves actions like:
- Process Termination: Terminating one or more deadlocked processes to break the cycle.
- Resource Preemption: Temporarily reallocating resources from deadlocked processes and assigning them to
others, allowing some to proceed.

8. Explain Deadlock Detection (Banker’s Algorithm) with Example.


The Banker’s Algorithm is, strictly speaking, a deadlock-avoidance algorithm, but its safety check is also the basis of
detection: processes declare their maximum resource needs in advance, and a request is granted only if the resulting
allocation keeps the system in a “safe” state.

Example:
Assume three processes with given resources and maximum requirements. The algorithm verifies whether fulfilling
a request would maintain system safety by simulating allocations and calculating the resulting state. It ensures that at
least one sequence exists that allows all processes to complete without entering deadlock.
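A small sketch of the safety algorithm at the heart of the Banker’s Algorithm, run on hypothetical data (the matrices below are illustrative, not taken from the question):

```c
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes (hypothetical example) */
#define R 2   /* resource types */

int Available[R]     = { 3, 2 };
int Allocation[P][R] = { {1, 0}, {2, 1}, {0, 1} };
int Max[P][R]        = { {3, 2}, {4, 2}, {1, 3} };

/* Safety algorithm: try to find an order in which every process can finish. */
bool is_safe(void) {
    int Work[R];
    bool Finish[P] = { false };
    for (int r = 0; r < R; r++) Work[r] = Available[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (Finish[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (Max[p][r] - Allocation[p][r] > Work[r]) { can_run = false; break; }
            if (can_run) {              /* p can finish; it then releases its resources */
                for (int r = 0; r < R; r++) Work[r] += Allocation[p][r];
                Finish[p] = true;
                printf("P%d can complete\n", p);
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;  /* no process can proceed: unsafe state */
    }
    return true;
}

int main(void) {
    printf(is_safe() ? "system is in a safe state\n" : "system is NOT in a safe state\n");
    return 0;
}
```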

9. Write About Deadlock Prevention Methods.


Deadlock prevention structurally avoids one of the four Coffman conditions:
- Mutual Exclusion: Make resources shareable when possible.
- Hold and Wait: Require processes to request all resources upfront or release current resources before acquiring
new ones.
- No Preemption: Allow resource preemption if a process holding resources gets blocked.
- Circular Wait: Impose an order on resource acquisition, ensuring that processes request resources in a specific
sequence.

By preventing any one of these conditions, deadlock cannot occur.

10. Discuss the Following:

A) Semaphore
A semaphore is a variable used to control access to a common resource in concurrent programming. Semaphores
are manipulated with `wait` and `signal` operations, allowing multiple processes to coordinate their access to shared
resources and ensuring mutual exclusion.

B) Monitor
A monitor is a synchronization construct encapsulating shared resources, variables, and procedures in one module,
ensuring only one process accesses the monitor’s procedures at a time. Monitors simplify synchronization by
handling complex resource access control internally, making code easier to maintain and less error-prone.
