Operating system - Model Examination Question and Answer
2 Marks Section - A
1. Define OS
An operating system is the primary software that manages all the hardware and
other software on a computer.
The operating system, also known as an "OS," interfaces with the computer's
hardware and provides services that applications can use.
2. Process:
A process is a program in execution, and it forms the basis of all computation.
A process is not the same as the program code; it is more than that.
A process is an 'active' entity, as opposed to the program, which is considered a
'passive' entity.
Attributes held by a process include its CPU state (registers, program counter), memory, open files, etc.
3. Critical Section:
The critical section is the part of a program that accesses shared resources.
The shared resource may be any resource in the computer, such as a memory location, a data
structure, the CPU, or an I/O device.
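As a minimal sketch in Python (the names `counter` and `increment` are illustrative), a lock can guard the critical section so that concurrent updates to a shared variable do not race:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Critical section: only one thread may update the shared counter at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 (no updates lost)
```

Without the lock, the read-modify-write on `counter` could interleave between threads and lose updates.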
4. Critical Region:
Critical regions are used to prevent concurrent access to shared resources, such as
variables, data structures, or devices, in order to maintain data integrity and
avoid race conditions.
5. Deadlock:
A deadlock is a situation where each process in a set waits for a resource
that is assigned to another process in the set.
In this situation, none of the processes can proceed, since the resource each one
needs is held by another process that is itself waiting for some other resource to be
released.
6. Counting Semaphore:
A counting semaphore is an integer value that can range over an
unrestricted domain.
It can be used to solve synchronization problems such as resource allocation,
for example controlling access to a pool of N identical resources.
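A brief sketch, using Python's `threading.Semaphore` (the resource-pool scenario and the counters `in_use`/`max_in_use` are illustrative): a counting semaphore initialized to 3 lets at most three threads hold a resource at once.

```python
import threading
import time

pool = threading.Semaphore(3)  # counting semaphore: up to 3 concurrent holders
in_use = 0
max_in_use = 0
guard = threading.Lock()

def use_resource():
    global in_use, max_in_use
    with pool:                      # wait (P): decrement; block if the count is 0
        with guard:
            in_use += 1
            max_in_use = max(max_in_use, in_use)
        time.sleep(0.01)            # simulate work with the resource
        with guard:
            in_use -= 1
    # signal (V): the count is incremented on leaving the with-block

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_in_use)  # never exceeds 3
```

Ten threads compete, but the observed concurrency never exceeds the semaphore's initial count.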
7. Dynamic Linking:
Dynamic linking is the process of linking external libraries and references at runtime,
when the program is loaded or executed.
It results in smaller executable files, since library code is not copied into the
program but linked in dynamically at runtime.
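As a rough illustration (platform-dependent; this assumes a Linux system where the C math library resolves to `libm`), Python's `ctypes` can bind to a shared library at runtime rather than at build time:

```python
import ctypes
import ctypes.util

# Resolve and load the C math library at runtime; the library name is
# platform-specific ("m" -> libm on Linux), so this is a Linux-oriented sketch.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or "libm.so.6")

# Declare the C signature so ctypes converts arguments/results correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The `sqrt` code lives in the shared library on disk, not in this program; the binding happens when `CDLL` loads it.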
8. Dynamic Loading:
Dynamic loading is used to achieve better memory utilization: in dynamic
loading, a routine is not loaded until it is invoked.
All routines are kept on disk in a relocatable load format.
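A small sketch of the idea in Python (the choice of the `fractions` module is arbitrary): `importlib` loads a module only at the point it is first requested, not when the program starts.

```python
import importlib
import sys

mod_name = "fractions"
print(mod_name in sys.modules)   # typically False: the module is not yet loaded

# Load on demand, at the point of use.
mod = importlib.import_module(mod_name)
print(mod.Fraction(1, 2) + mod.Fraction(1, 3))  # 5/6
```

Modules that are never requested never consume memory, which is the point of dynamic loading.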
9. Monitors:
A monitor is a high-level synchronization construct.
It is an abstract data type.
The Monitor type contains shared variables and the set of procedures that operate on
the shared variable.
10. Overlays:
Overlays refer to a technique used to manage memory efficiently by overlaying a
portion of memory with another program or data.
It's used to work with programs that are larger than the available memory.
12. Thrashing:
A process is thrashing if it spends more time paging than executing.
This high paging activity is called thrashing, and it severely degrades system performance.
5 Marks Section – B
1. Operating System Services.
Program Execution: The operating system provides services for the execution of
programs. It loads programs into memory, schedules them for execution, and manages
their termination, ensuring efficient use of system resources.
I/O Operations: It facilitates input and output operations by providing a set of system
calls and device drivers. This includes managing devices, handling file operations, and
ensuring data is read from and written to the appropriate devices.
File System Manipulation: The OS manages file systems, including creating, deleting,
reading, and writing files. It also enforces file access permissions and organization of
data on storage devices.
Communication Services: To enable communication between processes and systems,
the operating system provides inter-process communication (IPC) mechanisms like
message passing and shared memory, as well as network communication services.
Error Handling: The OS handles system errors and faults to ensure the system
operates smoothly. It provides error messages, crash recovery, and various fault
tolerance mechanisms to minimize the impact of failures on the system's operation.
2. System design and implementation
Requirements Analysis:
Understand the specific needs and goals of the operating system, such as the type
of hardware it will run on, user requirements, and system performance
expectations.
Architectural Design:
Define the overall structure and organization of the operating system, including the
choice of kernel architecture (e.g., monolithic, microkernel), memory management,
process management, and file systems.
Implementation:
Write the code for the operating system based on the design specifications. This
involves developing components like the kernel, device drivers, system libraries,
and user interfaces.
Testing and Debugging:
Rigorously test the operating system to identify and fix bugs, vulnerabilities, and
performance issues. This includes unit testing, integration testing, and system
testing.
Deployment and Maintenance:
Deploy the operating system on target hardware, ensure it functions as expected,
and provide ongoing maintenance and updates to address security patches, new
features, and improvements. This step includes user support and system
administration.
3. Monitors
Synchronization: Monitors provide a high-level synchronization mechanism that
ensures only one process or thread can access the critical section at any given time,
preventing race conditions.
Encapsulation: They encapsulate shared data and the operations that can be
performed on the data, providing a structured and organized way to control access
to shared resources.
Simplicity: Monitors simplify the task of writing concurrent programs by providing
an easier-to-use abstraction compared to other low-level synchronization primitives
like semaphores or locks.
Condition Variables: Monitors often include condition variables, which allow
threads to wait for a certain condition to become true before proceeding, enabling
efficient thread communication.
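The properties above can be sketched in Python (a minimal monitor-style bounded buffer; the class name and capacity are illustrative): a single lock gives mutual exclusion, and two condition variables let threads wait for "not full" / "not empty".

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: the lock provides mutual exclusion;
    condition variables provide waiting and signalling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)   # wait here when buffer is full
        self.not_empty = threading.Condition(lock)  # wait here when buffer is empty

    def put(self, item):
        with self.not_full:
            while len(self.items) >= self.capacity:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
results = []

def consumer():
    for _ in range(5):
        results.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    buf.put(i)
t.join()
print(results)  # [0, 1, 2, 3, 4]
```

The shared state (`items`) is only ever touched inside methods that hold the monitor's lock, which is the encapsulation property described above.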
4. Various methods for handling deadlock
Methods of handling deadlocks: There are four approaches to dealing with
deadlocks.
Deadlock Prevention
Deadlock avoidance (Banker's Algorithm)
Deadlock detection & recovery
Deadlock Ignorance (Ostrich Method)
Deadlock Prevention: Design the system to exclude deadlock possibilities by
addressing the necessary conditions, like mutual exclusion, hold and wait, and no
pre-emption. However, this approach may lead to inefficiencies.
Deadlock Avoidance (Banker's Algorithm): Proactively track resource usage to
minimize the risk of deadlock without guaranteeing its prevention. It involves
techniques like process initiation denial and resource allocation denial.
Deadlock Detection & Recovery: Periodically check for deadlocks and resolve them
by killing one or more processes to release resources. This approach doesn't
restrict resource access but incurs pre-emption losses.
Deadlock Ignorance (Ostrich Method): Completely ignore deadlocks and reboot
the system if they occur, suitable for rare deadlock situations. This approach lacks
information about deadlock occurrences and can lead to performance and resource
leakage issues.
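The safety check at the heart of the Banker's Algorithm can be sketched in Python as follows (the matrices are the classic five-process, three-resource textbook instance; `is_safe` is an illustrative name). The system is safe if some ordering lets every process acquire its remaining need and finish:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some ordering lets every process
    finish with the currently available resources."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Five processes, three resource types.
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

A request is granted only if the state that would result still passes this check; otherwise the process waits.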
5. Address Binding
Address Binding:
The Association of program instruction and data to the actual physical
memory locations is called the Address Binding.
Let’s consider the following example given below for better understanding.
Consider a program P1 that has the instructions I1, I2, I3, I4, located at the
addresses 10, 20, 30, 40 respectively.
Program P1
I1 --> 10
I2 --> 20
I3 --> 30
I4 --> 40
Types of Address Binding:
1. Compile-time Address Binding: The compiler generates absolute addresses at compile time;
this is possible only when the program's memory location is known in advance, and the
program must be recompiled if that location changes.
2. Load-time Address Binding: The compiler generates relocatable code, and the OS loader
binds addresses when the program is loaded into memory.
3. Execution-time Address Binding: Binding is postponed until program execution, so a
process can be moved in memory during runtime; this requires hardware support such as
base and limit registers.
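A tiny sketch of load-time relocation, continuing the P1 example above (the base address 14000 is hypothetical): the compiled image holds logical offsets, and the loader adds the base register value once at load time.

```python
# Logical addresses of P1's instructions in the relocatable image.
logical_addresses = {"I1": 10, "I2": 20, "I3": 30, "I4": 40}

BASE = 14000  # hypothetical address where the loader places P1

# Load-time binding: physical address = base + logical offset.
physical = {ins: BASE + off for ins, off in logical_addresses.items()}

print(physical["I3"])  # 14030
```

If P1 were loaded at a different base, only `BASE` would change; the image itself is unchanged, which is exactly what "relocatable" means.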
10 Marks Section - C
1. CPU Scheduling Algorithms:
First Come First Serve (FCFS): Processes are executed in the order they arrive,
implemented with a FIFO queue. It's simple but can result in high waiting times.
Shortest Job First (SJF): The process with the shortest execution time is executed
next; both preemptive and non-preemptive versions exist.
Longest Job First (LJF): The process with the longest execution time is executed first,
typically non-preemptive.
Priority Scheduling: Processes are assigned priorities, and the CPU is allocated to the
highest priority process. Can be preemptive or non-preemptive.
Round Robin (RR): Processes are executed in a cyclic manner with a fixed time slice,
suitable for time-sharing systems.
Shortest Remaining Time First (SRTF): A preemptive version of SJF where the
process with the shortest remaining time is selected.
Longest Remaining Time First (LRTF): A preemptive version of LJF, and the opposite of
SRTF: the process with the longest remaining time is chosen.
Highest Response Ratio Next (HRRN): Selects the process with the highest response
ratio, a modification of SJF to reduce starvation.
Multiple Queue Scheduling: Divides processes into different classes with distinct
scheduling needs, such as foreground and background processes.
Multilevel Feedback Queue Scheduling: Processes can move between queues based
on their behavior, allowing for more flexibility and efficiency.
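The simplest of the algorithms above, FCFS, can be sketched in Python (the function name and the burst times 24, 3, 3 are illustrative): processes run to completion in arrival order, and the waiting time is the time spent in the ready queue.

```python
def fcfs(processes):
    """FCFS: run processes in arrival order (non-preemptive);
    each process is (arrival_time, burst_time). Returns avg waiting time."""
    time = 0
    waits = []
    for arrival, burst in sorted(processes, key=lambda p: p[0]):
        time = max(time, arrival)     # CPU may sit idle until the process arrives
        waits.append(time - arrival)  # time spent waiting in the ready queue
        time += burst                 # run to completion
    return sum(waits) / len(waits)

# Three processes arriving at time 0 with bursts 24, 3, 3.
print(fcfs([(0, 24), (0, 3), (0, 3)]))  # 17.0 (waits: 0, 24, 27)
```

Note how one long burst at the head of the queue inflates everyone's wait (the convoy effect); running the two short jobs first would give an average wait of only 3.0, which is why SJF is optimal for average waiting time.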
2. Kernel I/O Subsystem
o Device Drivers: It loads and manages device drivers that enable communication with
hardware components like disks, keyboards, and network cards.
o I/O Scheduling: The subsystem prioritizes and manages the order of I/O requests to
optimize system performance, preventing bottlenecks.
o Buffering: It uses buffers to temporarily store data in memory before reading or
writing to/from a device, improving efficiency and reducing the number of I/O
operations.
o Caching: The kernel often caches frequently used data, reducing the need to access
slow storage devices, and speeding up access times.
o File Systems: It provides file system support, including reading, writing, and
managing files, directories, and access permissions.
o Interrupt Handling: The I/O subsystem handles hardware interrupts from devices,
ensuring that they are managed efficiently without disrupting the system's
operation.