OS - Assignment Kartik
Information Technology
OS ASSIGNMENT
Name – KUSHAL
Enrolment No. 04311104424
MCA – I (2024-2026)
Ques 1. Compare Monolithic Kernel and Microkernel. Which type of kernel is used in Microsoft Windows 10?
ANSWER:
Monolithic Kernel:
The monolithic kernel is a design where all essential operating system services run in the kernel space, creating a
single, large process running entirely in privileged mode. Key services, such as device drivers, file system
management, memory management, and other core functionalities, are integrated into a single executable binary.
Here are some important characteristics of monolithic kernels:
1. Performance: Since all services are in the kernel space, the communication between services (like device
drivers and memory management) occurs at high speed, without the need for extensive context switching.
This results in faster operations and low latency.
2. Design Simplicity: Monolithic kernels use a straightforward approach to OS design, allowing direct access
to resources and easier management of system calls within a single address space.
3. Resource Access and Stability: By running everything in one space, monolithic kernels can face stability
challenges. If one component fails (such as a driver), it can crash the entire system, as there is no isolation
between services.
4. Example: Linux OS is a widely known example of a monolithic kernel, as it manages resources efficiently
while leveraging high-speed communication between services.
Microkernel:
The microkernel architecture, on the other hand, is designed to keep only the most essential services, such as
memory management, process scheduling, and inter-process communication (IPC), in the kernel space. All other
services, like device drivers and file systems, are implemented in user space as isolated processes. Key features
include:
1. Modularity and Isolation: Microkernels are highly modular; non-essential services (e.g., drivers) run in
user mode, separate from the core kernel. This creates a more stable system, as errors in a user-mode
service are less likely to compromise the entire OS.
2. Security: Microkernels are considered more secure due to service isolation. Each service operates
independently, limiting the impact of any single service's failure on overall OS stability.
3. Performance Overhead: The main drawback is the need for more frequent context switches between
kernel and user modes, increasing overhead and reducing overall performance. This design, though secure,
can be less efficient in terms of speed due to the added communication overhead.
4. Example: QNX and MINIX are examples of systems based on the microkernel architecture, emphasizing
modularity and robustness over raw performance.
Microsoft Windows 10 uses a hybrid kernel, which integrates features from both monolithic and microkernel
architectures. Here’s a breakdown of its hybrid design:
1. Core Components in Kernel Mode: Like a monolithic kernel, Windows 10 has essential services—such as
memory management, process scheduling, and system calls—running in kernel space. This setup allows for
high performance by reducing the communication overhead typical of pure microkernels.
2. Modularity for Stability: Although it operates similarly to a monolithic kernel, Windows 10 includes
modularity aspects for certain services, which run in user mode. For example, some device drivers and
subsystems may run outside of kernel mode to prevent entire-system crashes in the event of a service
failure.
3. Balance of Security and Performance: The hybrid kernel achieves a compromise between the
performance of a monolithic kernel and the stability and security of a microkernel. While the kernel handles
critical tasks efficiently in kernel mode, it also maintains separation for potentially less stable processes,
enhancing the system’s robustness.
4. Examples of Hybrid Kernel Features: Windows 10 uses elements like a Hardware Abstraction Layer (HAL)
and subsystem servers (e.g., Win32 API) to interact with hardware and applications, allowing it to support
diverse software while remaining stable.
The Windows 10 hybrid kernel combines the performance advantages of a monolithic kernel with the
stability and security benefits of a microkernel. By balancing these aspects, Windows 10 can deliver high
performance for end-users while retaining modularity for more reliable operations.
ANSWER:
The bootstrap process is a series of steps that an operating system performs to start up and initialize a computer
system. This process takes place every time a system is powered on or restarted. The term "bootstrap" refers to the
system "pulling itself up by its bootstraps," starting from minimal instructions in hardware and loading the full OS.
The sequence is designed to ensure that all hardware and software components are prepared for regular use.
Stage 1: Power-On and Self-Test
1. Power On: When the power button is pressed, the system receives an electrical signal, powering up essential
hardware components like the CPU, memory, and motherboard.
2. POST (Power-On Self-Test): The BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface)
firmware begins by executing the POST. POST is a diagnostic testing sequence that verifies the functionality and
integrity of hardware components, such as RAM, CPU, storage drives, and other peripherals.
3. Error Detection: If any critical hardware fails, the POST process may halt the boot process, typically showing an
error message or beep code indicating the issue. If all tests pass, the system continues to the next phase.
Stage 2: Locating and Loading the Bootloader
1. BIOS/UEFI Execution: After POST, the BIOS/UEFI firmware locates the bootable storage device where the
operating system resides (often a hard disk, SSD, or external drive).
2. Bootloader Identification: The firmware identifies the Master Boot Record (MBR) or GUID Partition Table (GPT) on
the storage device, which contains the location of the bootloader. The bootloader is a small program that facilitates
the loading of the OS.
3. Loading the Bootloader: The BIOS/UEFI loads the bootloader into memory. In modern systems, bootloaders are
sophisticated enough to support multiple OS options, like GRUB (for Linux) or the Windows Boot Manager, allowing
users to select an OS if multiple are installed.
Stage 3: Loading the Kernel
1. Loading the Kernel: The bootloader's primary job is to locate the operating system kernel and load it into RAM.
The kernel is the core component of the operating system, managing system resources and facilitating interactions
between hardware and software.
2. Kernel Selection and Parameters: If multiple OS kernels are available, the bootloader allows the user to select
one. The user can also pass special parameters to the kernel, such as diagnostic or safe modes, which may alter how
the OS loads.
3. Transfer of Control: Once the kernel is loaded into memory, the bootloader hands over control to the kernel,
marking the end of the bootloader’s role.
Stage 4: Kernel Initialization
1. Hardware Initialization: The kernel initializes essential hardware components, setting up drivers to communicate
with hardware like the CPU, memory, and I/O devices.
2. Device Driver Loading: The kernel loads device drivers for hardware peripherals, enabling communication
between the OS and devices like graphics cards, network adapters, and input devices.
3. System Services Preparation: Essential system services are prepared for execution. This may include loading low-
level OS services that handle file management, security protocols, and system monitoring.
Stage 5: System Services and User Login
1. Loading System Daemons: The OS begins loading necessary background services, known as daemons in
Unix/Linux or services in Windows. These processes handle tasks like network management, timekeeping, and
logging.
2. Activation of User Space: The OS prepares the user space, where user applications and services operate. This
separation of kernel and user spaces is vital for system security and stability, preventing user applications from
directly interfering with the core OS.
3. User Login Services: The OS initiates user login services, which provide the login prompt, enabling users to
authenticate and access their accounts. Once a user logs in, the OS loads the desktop environment or command
shell, giving access to applications and tools.
ANSWER:
boolean flag[2] = {false, false};
int turn;
Process P0:
flag[0] = true;                     // P0 announces its intent to enter
turn = 1;                           // give priority to P1
while (flag[1] && turn == 1);       // wait while P1 is interested and it is P1's turn
// Critical Section
flag[0] = false;                    // P0 leaves the critical section
Process P1:
flag[1] = true;                     // P1 announces its intent to enter
turn = 0;                           // give priority to P0
while (flag[0] && turn == 0);       // wait while P0 is interested and it is P0's turn
// Critical Section
flag[1] = false;                    // P1 leaves the critical section
Peterson's algorithm uses two flags and a turn variable to ensure mutual exclusion in critical sections for two
processes: each process first announces its intent through its flag and then yields the turn to the other, so a process
enters only when the other is not interested or it is its own turn.
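The pseudocode above can also be expressed as a small runnable sketch. The example below is not part of the original answer; it uses POSIX threads and C11 atomics (the worker function, counter, and iteration count are illustrative), with the atomics supplying the memory ordering Peterson's algorithm relies on.

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdatomic.h>

atomic_bool flag[2];
atomic_int turn;
int shared_counter = 0;              /* shared resource protected by the algorithm */

void *worker(void *arg) {
    int id = *(int *)arg;            /* 0 or 1 */
    int other = 1 - id;
    for (int i = 0; i < 100000; i++) {
        /* Entry section */
        atomic_store(&flag[id], true);
        atomic_store(&turn, other);
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                        /* busy-wait */
        /* Critical section */
        shared_counter++;
        /* Exit section */
        atomic_store(&flag[id], false);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("Final counter: %d (expected 200000)\n", shared_counter);
    return 0;
}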
ANSWER:
Distributed Operating System (DOS) is a type of operating system that manages a collection of independent
computers and makes them appear to users as a single cohesive system. These systems coordinate resources and
tasks across networked computers, or "nodes," allowing them to work together seamlessly. In a DOS, tasks are
distributed among different machines, and resources are shared across the network, creating a powerful, scalable
computing environment.
The primary goal of a distributed operating system is to achieve transparency in resource sharing, process
management, and inter-process communication across multiple computers. Distributed OSs offer benefits like
improved performance, fault tolerance, and scalability, making them ideal for applications in fields such as scientific
computing, data centers, and cloud computing.
1. Transparency: The system hides the complexities of distributed resources from users, making it appear as a single,
unified system.
2. Fault Tolerance: Distributed OSs have the ability to recover from hardware or software failures by redistributing
tasks to other nodes.
3. Scalability: New nodes can be added to the system without significant changes to its structure, improving the
overall capacity and performance.
4. Parallel Processing: Tasks can be divided among different machines, allowing for faster processing and efficiency.
Distributed systems can be implemented using two primary models: Client-Server Computing and Peer-to-Peer
(P2P) Computing. Both models support distributed resource sharing and communication, but they have distinct
architectures, advantages, and limitations.
In Client-Server Computing, the network consists of dedicated servers that provide resources and services, and
client devices that request and utilize these services. This architecture is often used in environments where
resources (such as databases, files, or applications) need to be centralized and managed effectively.
1. Centralized Control: Servers control and manage resources, making it easier to monitor, update, and maintain.
This centralized structure provides better security and simplifies the administration of resources.
2. Defined Roles: Servers and clients have specific roles. Servers are responsible for storing data and executing tasks,
while clients make requests and consume the results. This clear division reduces conflicts and enables efficient
resource management.
3. Performance and Reliability: Client-server models can face bottlenecks if too many clients access a server
simultaneously, potentially slowing down performance. Additionally, if a server fails, client access to resources is
disrupted, affecting reliability.
4. Examples: Popular examples of client-server applications include web browsers accessing web servers, email
clients connecting to email servers, and applications using database servers.
5. Pros:
• Resource Sharing: Centralized servers make it easy to share resources and data across clients.
• Security: Since resources are centrally controlled, security policies can be enforced consistently.
• Efficient Management: Centralization simplifies the management, monitoring, and updating of services and
resources.
6. Cons:
• Single Point of Failure: If a server fails, client access is interrupted, affecting the availability of resources.
• Scalability Challenges: As the number of clients grows, servers can become overloaded, leading to
performance degradation.
In Peer-to-Peer Computing, every node in the network acts as both a client and a server. Each peer can provide and
consume resources, enabling direct sharing of resources and information among nodes without relying on a central
server.
1. Decentralized Structure: There is no central authority or dedicated server. Each node functions independently,
offering and requesting resources as needed. This decentralization enables a more resilient and scalable network.
2. Resource Distribution: Each peer contributes resources to the network, which makes P2P systems ideal for
environments where high resource availability is required and can be distributed across a wide area.
3. Reliability and Redundancy: Due to the distributed nature of P2P, if one peer fails, others can still provide
resources, ensuring that the network continues to function. However, this model can face challenges with
consistency and resource management.
4. Examples: Popular P2P applications include file-sharing networks like BitTorrent, decentralized communication
platforms, and blockchain-based applications.
5. Pros:
• Scalability: P2P systems can scale easily, as new nodes contribute additional resources.
• Fault Tolerance: The failure of individual nodes has minimal impact on the overall network, making it highly
resilient.
• Cost-Effective: No need for centralized servers, which reduces infrastructure costs.
6. Cons:
• Reliability and Quality Control: As nodes are independent, maintaining consistent quality and availability of
resources can be challenging.
• Security Risks: Since peers directly connect and exchange data, they may be vulnerable to malicious nodes,
making security difficult to enforce.
ANSWER:
• Multi-programmed Batch Systems:
Explanation: Multi-programmed batch systems allow multiple jobs to reside in memory simultaneously. The
operating system selects jobs from the pool, loads them into memory, and executes them one by one. When a job
waits for I/O operations, the CPU switches to another job, maximizing its utilization. There is minimal user
interaction, as jobs are processed in batches.
Advantages:
o High CPU Utilization: By switching between jobs, the system keeps the CPU busy, reducing idle time and
improving efficiency.
o Increased Throughput: More jobs are processed in less time, as the CPU can handle other jobs while one job
waits for I/O operations.
Disadvantages:
o Lack of Interactivity: Users cannot interact with the system while their jobs are being processed. Jobs are
submitted, executed, and results are returned without real-time interaction.
o Longer Wait Time for Individual Jobs: Jobs may wait in the queue for a long time if other jobs are ahead of
them, especially if there are I/O-bound processes.
• Time-Sharing Systems:
Explanation: Time-sharing systems allow multiple users to access the system simultaneously by rapidly switching the
CPU among user processes. Each user gets a small time slice, allowing them to interact with the system in real time.
This approach gives the illusion that each user has their own dedicated system.
Advantages:
o Interactive and User-Friendly: Users can interact with their programs in real time, making it ideal for
environments requiring immediate responses, such as online applications.
o Efficient Resource Utilization: The system manages resources efficiently across multiple users, ensuring that
no single job monopolizes the CPU.
Disadvantages:
o Potential for High Overhead: Rapid switching between tasks introduces overhead, as context switches
consume CPU time and resources.
o Complex Scheduling: Ensuring fair and efficient allocation of CPU time among users requires sophisticated
scheduling algorithms, adding complexity to the system.
ANSWER:
Inter-Process Communication (IPC) is crucial in an operating system as it enables processes to interact, coordinate,
and exchange data. This is essential for multitasking systems, where multiple processes run concurrently and may
need to cooperate to perform a task. IPC allows processes to:
1. Synchronize Operations: Ensures that processes coordinate effectively, preventing race conditions and conflicts
over shared resources.
2. Share Data: Facilitates data sharing between processes, which is vital in applications where tasks need to
collaborate by exchanging information.
3. Improve Efficiency: Enables parallel processing by allowing processes to perform specific parts of a task
simultaneously, enhancing system performance.
4. Enable Modularity: IPC allows large applications to be built as modular components that communicate with each
other, making software easier to manage and maintain.
IPC mechanisms primarily use two models: Shared Memory and Message Passing. Both have unique methods of
data exchange and synchronization, making them suitable for different applications and system architectures.
In the Shared Memory Model, processes communicate by sharing a specific region of memory. This area can be
accessed by multiple processes, enabling fast data exchange without involving the kernel after the initial setup.
1. Data Exchange: Processes can directly read from and write to the shared memory, which allows high-speed data
transfer. This model is often used in systems with high performance requirements.
2. Synchronization Mechanisms: Since multiple processes access the same memory, there is a risk of data
inconsistency. Synchronization mechanisms such as semaphores and mutexes are required to manage access,
ensuring that only one process modifies the data at a time.
3. Efficiency: Shared memory is generally faster than message passing because it eliminates the need for copying
data between processes. Instead, processes operate on the same data structure in memory.
4. Examples: Shared memory is widely used in applications requiring real-time data sharing, such as multimedia
processing, gaming, and database management systems.
5. Pros:
• High Speed: After the initial setup, data is exchanged directly in memory without kernel involvement or copying.
• Good for Large Data: Large volumes of data can be shared efficiently, since processes operate on the same region.
6. Cons:
• Synchronization Burden: Access must be coordinated explicitly (e.g., with semaphores or mutexes) to avoid race conditions.
• Limited Scope: It only works between processes on the same machine, unlike message passing.
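As an illustration of the shared-memory model, here is a minimal single-process sketch using POSIX shared memory. The region name "/demo_shm" and the size are made up for the example; in a real use of the model, the "reader" would be a second process that maps the same name.

/* On Linux, compile with: gcc demo.c -lrt */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    const char *name = "/demo_shm";                 /* illustrative name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                            /* size of the shared region */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(region, "hello through shared memory");  /* "writer" side */
    printf("Read back: %s\n", region);              /* "reader" side */

    munmap(region, 4096);
    shm_unlink(name);                               /* remove the shared object */
    return 0;
}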
In the Message Passing Model, processes communicate by explicitly sending and receiving messages through a
communication channel. Each message is a data packet that can be transmitted between processes, often using the
operating system’s IPC facilities.
1. Data Exchange: Communication occurs via message passing, where a process sends data as a message, and the
receiving process reads it. This approach is more structured and can be used across different machines in a network.
2. Simpler Synchronization: Since each message is a discrete unit of data, there is no need for complex
synchronization mechanisms. The sender and receiver communicate sequentially, reducing the risk of data races.
3. Performance: Message passing generally incurs more overhead than shared memory due to the need to copy data
between processes and, potentially, across networks. This can result in slower performance for high-frequency
communication.
4. Examples: Message passing is widely used in distributed systems and networked applications, such as
microservices, cloud applications, and remote procedure calls.
5. Pros:
• Simpler Synchronization: Each message is a discrete, self-contained unit, so explicit locks are usually unnecessary.
• Works Across Machines: Messages can travel over a network, making the model well suited to distributed systems.
6. Cons:
o Lower Speed: Typically slower than shared memory due to data copying and potential network latency.
o Less Efficient for Large Data: Transmitting large volumes of data can be slow and resource-intensive.
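To make the message-passing model concrete, here is a minimal sketch using an anonymous pipe between a parent and a child process on a Unix-like system; the message text is illustrative.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    char buf[64];
    pipe(fds);                        /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                /* child process: the sender */
        close(fds[0]);
        const char *msg = "message from child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }
    /* parent process: the receiver */
    close(fds[1]);
    read(fds[0], buf, sizeof(buf));
    printf("Parent received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}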
ANSWER:
An interrupt is a signal generated by either hardware or software to inform the operating system that an event
requires immediate attention. Interrupts temporarily halt the execution of the current process, allowing the OS to
address the event, after which it resumes the interrupted process. They play a crucial role in managing system
resources efficiently and responding quickly to external events.
Types of Interrupts
1. Hardware Interrupts: Triggered by external hardware devices, such as a keyboard, mouse, or network interface.
For example, when a key is pressed, the keyboard sends an interrupt to notify the CPU.
2. Software Interrupts: Initiated by software, typically to request services from the operating system. These are
often called system calls or traps.
3. Timer Interrupts: Generated by an internal system timer, allowing the OS to perform routine tasks like scheduling.
Interrupts are essential for multitasking, as they allow the operating system to manage multiple processes efficiently
by responding to events and prioritizing tasks.
Operating systems provide a variety of essential services to enable effective use of computer resources and to
support application software. Here are some of the core services offered:
1. Process Management
Process management involves creating, scheduling, and terminating processes (programs in execution). It ensures
that processes are executed efficiently and without interference. Key functions include:
o Process Scheduling: The OS uses algorithms to decide the order in which processes execute,
managing CPU time for each process.
o Multitasking: Enables multiple processes to run concurrently, maximizing CPU utilization.
o Inter-Process Communication (IPC): Allows processes to communicate and share data, enabling
cooperation between programs.
o Process Synchronization: Ensures that processes run in sync, avoiding conflicts when accessing
shared resources.
2. Memory Management
Memory management controls how memory is allocated and managed among processes. Since multiple processes
may need memory simultaneously, the OS ensures that each one has access to the necessary memory without
conflicts.
3. File System Management
File system management handles the creation, deletion, and organization of files and directories. It provides users
and applications with a systematic way to store and retrieve data.
4. Device Management
Device management enables communication between the system and peripheral devices, such as printers,
keyboards, and storage devices. The OS uses device drivers to interface with hardware components.
o Device Drivers: Provides a standardized interface to communicate with hardware, making it easier
for applications to use various devices.
o Device Allocation: Manages and allocates devices to processes as needed, ensuring no conflicts.
o I/O Scheduling: Controls the order in which input/output requests are processed, optimizing device
use and performance.
5. Security and Protection
Security and protection services safeguard system resources, data, and applications from unauthorized access and
potential threats. The OS enforces policies to protect user data and system integrity.
o User Authentication: Verifies user identity through methods like passwords or biometrics.
o Access Control: Manages permissions for accessing files, directories, and system resources, ensuring
that users and applications can only access what they are authorized to.
o Data Encryption: Encrypts sensitive data to protect it from unauthorized access and theft.
6. User Interface
The OS provides a user interface (UI) that allows users to interact with the system and execute commands. The UI
can be command-line-based (CLI) or graphical (GUI).
o Command-Line Interface (CLI): Allows users to type commands to perform tasks, often used by
advanced users.
o Graphical User Interface (GUI): Provides visual elements like windows, icons, and menus for easier
interaction, commonly found in desktop operating systems.
ANSWER:
➢ Functions of a Dispatcher:
A dispatcher is a key component of the operating system's process scheduling mechanism. It plays a crucial role in
switching the CPU from one process to another, enabling multitasking and efficient CPU utilization. Here are the
main functions of a dispatcher:
1. Context Switching: The dispatcher saves the state (context) of the currently running process and loads the state of
the next process selected by the scheduler. This context includes information like registers, program counter, and
memory state.
2. CPU Allocation: Once the context switch is complete, the dispatcher allocates the CPU to the selected process,
allowing it to resume execution.
3. Setting Up User Mode: The dispatcher ensures that the selected process is set up to run in user mode (as opposed
to kernel mode), preventing the process from accessing restricted system resources.
4. Jumping to the Proper Location in the Program: The dispatcher moves the program counter to the appropriate
location in the new process, allowing it to continue from where it left off or to start from the beginning if it’s a new
process.
5. Time Management: In systems with time-sharing or round-robin scheduling, the dispatcher ensures that each
process gets a fair amount of CPU time within a defined time slice (quantum). If a process exceeds its allocated time,
it’s preempted, and the dispatcher selects the next process.
Multilevel Queue Scheduling is a scheduling method that organizes processes into multiple separate queues based
on their priority or type. Each queue can have a different scheduling algorithm, and processes in higher-priority
queues are typically executed before those in lower-priority queues. This approach is suitable for systems where
different types of processes have distinct requirements (e.g., interactive versus batch processes).
1. Multiple Queues: The system divides processes into different categories, such as system processes, interactive processes, and batch processes.
2. Independent Scheduling Policies: Each queue may have its own scheduling policy; for instance, Round Robin for an interactive queue and FCFS for a batch queue.
3. No Process Movement: Processes remain in the queue they are assigned to, without moving between queues.
Each process is allocated to a queue based on its type and priority when it arrives.
A typical three-queue configuration is:
1. System Queue (highest priority) – handles critical system-level tasks, scheduled with FCFS.
2. Interactive Queue (medium priority) – handles user-interactive tasks, scheduled with Round Robin.
3. Batch Queue (lowest priority) – handles background, non-interactive tasks, scheduled with FCFS or SJN.
Scheduling proceeds as follows:
1. The scheduler always selects a process from the System Queue if it has any tasks, as these have the highest
priority.
2. If the System Queue is empty, the scheduler moves to the Interactive Queue and selects a task using Round Robin
to give each user process a fair share of CPU time.
3. If both the System and Interactive Queues are empty, the scheduler selects a process from the Batch Queue.
Example: Suppose Process A is in the System Queue, Process B in the Interactive Queue, and Process C in the Batch Queue.
1. Process A is executed first due to the high priority of the System Queue.
2. When Process A finishes, Process B is selected next from the Interactive Queue.
3. Once Process B completes its time slice, Process C is selected from the Batch Queue.
This approach allows the operating system to manage diverse types of processes efficiently, giving priority to
system-critical and user-interactive tasks over background jobs.
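The selection rule used in this example can be expressed in a few lines of code. The sketch below is purely illustrative (the queue contents, process ids, and function name are invented): the scheduler scans the queues in fixed priority order and takes the first process it finds.

#include <stdio.h>

#define EMPTY -1

/* Returns the pid at the head of the highest-priority non-empty queue. */
int select_next(int *system_q, int n_sys, int *interactive_q, int n_int, int *batch_q, int n_bat) {
    if (n_sys > 0) return system_q[0];        /* System Queue: highest priority (FCFS)    */
    if (n_int > 0) return interactive_q[0];   /* Interactive Queue: head of the RR queue  */
    if (n_bat > 0) return batch_q[0];         /* Batch Queue: lowest priority             */
    return EMPTY;
}

int main(void) {
    int sys_q[] = {101};   /* Process A */
    int int_q[] = {202};   /* Process B */
    int bat_q[] = {303};   /* Process C */
    /* Process A (101) is chosen first because the System Queue is non-empty. */
    printf("next pid = %d\n", select_next(sys_q, 1, int_q, 1, bat_q, 1));
    return 0;
}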
Advantages:
o Priority-Based Scheduling: Critical processes get immediate CPU access, enhancing system
responsiveness.
o Efficient Resource Allocation: Different types of processes are managed according to their specific
requirements.
Disadvantages:
o Rigid Queue Assignment: Processes are permanently assigned to a queue, reducing flexibility.
o Starvation Risk: Lower-priority processes in lower queues may experience starvation if higher-
priority queues are consistently busy.
o Complexity: Managing multiple queues and scheduling algorithms can increase system complexity.
• Organizes processes into separate queues (e.g., for system processes, interactive processes, batch
processes).
• Each queue may have its own scheduling algorithm, and processes are assigned based on priority.
ANSWER:
Readers-Writers Problem
The Readers-Writers Problem is a classic synchronization problem in operating systems that involves processes (or
threads) accessing shared data, such as a database, in two different ways:
1. Readers: Processes that only need to read (view) the shared data and do not modify it.
2. Writers: Processes that need to write (modify) the shared data.
The main challenge in this problem is to design a system where readers and writers can access the data without
causing conflicts or inconsistencies. Specifically:
• Multiple readers should be allowed to read the data simultaneously, as reading does not affect the data's
state.
• However, writers must have exclusive access to the data to prevent data corruption. This means:
o No two writers can access the data simultaneously.
o No reader can access the data while a writer is writing.
To ensure consistency, the system must avoid situations where writers overwrite each other’s changes or where
readers read incomplete or inconsistent data while a writer is in the process of updating it.
There are different variations of the Readers-Writers Problem, each with different priorities: the readers-preference version never makes a reader wait unless a writer already holds access, while the writers-preference version blocks new readers as soon as a writer is waiting.
A typical solution involves the use of semaphores or mutexes to control access to the shared data and ensure proper
synchronization. Below is a basic explanation of how the solution works:
1. Read Count: A counter to keep track of the number of readers currently accessing the data.
2. Mutex (or Lock): Used to ensure mutual exclusion when updating the read count variable.
3. Write Lock: A semaphore that allows writers exclusive access to the shared data.
Solution Logic:
Shared variables:
Semaphore mutex = 1;
Semaphore rw_mutex = 1;
int read_count = 0;
Reader:
wait(mutex);
read_count++;
if (read_count == 1) wait(rw_mutex);
signal(mutex);
// Reading
wait(mutex);
read_count--;
if (read_count == 0) signal(rw_mutex);
signal(mutex);
Writer:
wait(rw_mutex);
// Writing
signal(rw_mutex);
ANSWER:
In concurrent programming, the critical-section problem involves designing a mechanism that allows processes to
share resources safely without interfering with each other. To achieve this, a solution must satisfy the following
three conditions:
1. Mutual Exclusion: Only one process can enter the critical section at any given time. This ensures that no two
processes are executing in their critical sections simultaneously, which prevents data inconsistencies.
2. Progress: If no process is in the critical section and multiple processes wish to enter, only those not in their
remainder sections (the parts of the program outside the critical section) should be involved in deciding
which process enters next. This ensures that the selection of the next process to enter the critical section is
not postponed indefinitely.
3. Bounded Waiting: A process waiting to enter its critical section should not have to wait indefinitely. There
must be a bound on the number of times other processes can enter their critical sections after a process has
requested entry and before it is granted access. This prevents starvation, ensuring that every process
eventually gets a chance to enter its critical section.
TestAndSet():
The TestAndSet instruction is a hardware-supported atomic operation used to manage access to the critical section.
It operates on a single bit and works as follows:
• Atomic Operation: TestAndSet is an indivisible operation, meaning it cannot be interrupted once started,
ensuring that no two processes can perform TestAndSet at the same time.
• Mechanism: The instruction reads the current value of a memory location and sets it to 1 simultaneously.
• Return Value: If the original value of the location was 0 (indicating no process is in the critical section), the
calling process can enter the critical section. If the original value was 1, another process is already in the
critical section, and the calling process must wait.
// Entry Section
while (TestAndSet(lock))
    ;                    // busy-wait until TestAndSet returns 0 (the lock was free)
// Critical Section
// Exit Section
lock = false;            // release the lock
In this example:
• The process loops on TestAndSet(lock) until it returns 0 (false), meaning the lock was free and has now been acquired by this process; setting lock = false in the exit section releases it.
Advantages:
• Simple and efficient for systems that support the atomic TestAndSet instruction.
• Provides mutual exclusion.
Disadvantages:
• Busy-waiting can lead to CPU wastage, especially in systems with many processes.
• Does not inherently satisfy bounded waiting without additional mechanisms.
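Modern C exposes an equivalent of TestAndSet through C11 atomics. The sketch below is illustrative (the function names are my own): a spinlock built on atomic_flag_test_and_set mirrors the entry and exit sections shown above.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void enter_critical_section(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy-wait until the old value was 0 */
}

void exit_critical_section(void) {
    atomic_flag_clear(&lock);          /* set the lock back to 0 (free) */
}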
Swap():
The Swap instruction is another hardware-based atomic operation that swaps the values of two variables. It is
commonly used in conjunction with a "lock" variable to control access to the critical section. The Swap instruction
operates as follows:
// Entry Section
boolean key = true; // Local variable for each process
while (key == true) {
Swap(lock, key); // Attempt to enter the critical section
}
// Critical Section
// Exit Section
lock = false; // Release the lock
In this example:
• Each process sets its local key to true and repeatedly swaps it with lock; the swap leaves key as false only when the lock was free, at which point the process enters the critical section. Setting lock = false in the exit section releases it.
Advantages:
• Simple, and provides mutual exclusion using a single atomic hardware instruction.
Disadvantages:
• Relies on busy-waiting, which wastes CPU cycles, and does not by itself guarantee bounded waiting.
ANSWER:
The ready queue is a crucial component in an operating system's process management. It holds all the processes
that are ready to execute but are waiting for CPU time. When a process transitions from a new or waiting state to a
ready state, it is placed in the ready queue. The CPU scheduler (also known as the short-term scheduler) then selects
processes from this queue based on a scheduling algorithm (e.g., First-Come-First-Served, Shortest Job First, Round
Robin) to allocate CPU time. By managing the ready queue, the operating system ensures efficient and fair CPU
utilization, helping achieve smooth multitasking. The structure of the ready queue can vary, and it may be
implemented as a simple queue, priority queue, or even a tree, depending on the scheduling policy in use.
ANSWER:
Multithreading is a programming technique that allows multiple threads (smaller, independent paths of execution)
to run concurrently within a single process. Each thread shares the same memory space and resources of the parent
process, but can perform different tasks simultaneously. This parallelism is beneficial in applications where tasks can
be divided into smaller units, such as web servers (handling multiple client requests), GUIs (responding to user inputs
while performing background tasks), and scientific computations. Multithreading improves CPU utilization by
keeping the CPU busy even if one thread is waiting (e.g., for I/O). It also enhances responsiveness and allows
applications to make better use of multi-core processors.
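As a concrete illustration, here is a minimal POSIX-threads sketch (the counter and loop bounds are arbitrary) in which two threads of one process run concurrently and share the same global variable, protected by a mutex.

#include <stdio.h>
#include <pthread.h>

int shared_total = 0;                    /* shared by all threads of the process */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *add_work(void *arg) {
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&m);          /* protect the shared variable */
        shared_total++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_work, NULL);
    pthread_create(&t2, NULL, add_work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_total = %d\n", shared_total);   /* 2000 expected */
    return 0;
}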
ANSWER:
To solve the critical section problem in concurrent programming, any solution must satisfy the following three
criteria:
1. Mutual Exclusion: Only one process can be in the critical section at any time. This prevents race conditions,
where two or more processes simultaneously attempt to modify shared data, leading to data
inconsistencies.
2. Progress: If no process is in the critical section, any process that wishes to enter should be allowed to do so
without indefinite delays. This ensures that decisions about which process enters the critical section do not
depend on processes that are not involved.
3. Bounded Waiting: Each process should have a limit on the number of times other processes can enter the
critical section after it has requested to enter. This prevents starvation, ensuring that all processes
eventually get a chance to access the critical section.
These requirements help ensure efficient and fair access to shared resources while avoiding issues like deadlock and
starvation.
ANSWER:
Multiprogramming improves CPU utilization by having multiple programs in memory, switching between them as
necessary, while multiprocessing leverages multiple CPUs to handle tasks in parallel, enhancing both speed and
efficiency.
Ques 15. Differentiate Between Kernel Level and User Level Threads?
ANSWER:
Kernel-level threads are directly managed by the operating system, which enables true parallelism on multi-core
systems. However, this also results in higher overhead. User-level threads are faster to create and switch because the
kernel is not involved, but they cannot achieve true parallelism on their own: the OS sees the whole process as a single
schedulable entity, so if one user-level thread blocks, the entire process blocks.
ANSWER:
A context switch occurs when the CPU shifts from executing one process or thread to another. During this switch,
the operating system saves the current state (context) of the executing process, which includes the program
counter, CPU registers, and other essential data, so the process can resume from the same point later. The CPU then
loads the saved state of the next process to be executed. While context switching allows the OS to achieve
multitasking and efficient CPU utilization, it also incurs overhead, as the saving and restoring process takes time and
does not perform any useful computation.
Ques 17. What is the Use of fork() and exec() System Calls?
ANSWER:
• fork(): The fork() system call is used to create a new process, known as the child process, which is a duplicate
of the calling (parent) process. After a successful fork(), two processes run concurrently, sharing the same
code segment but with separate data. This system call is essential for multitasking in Unix-based systems, as
it enables the execution of multiple processes.
• exec(): The exec() system call replaces the memory space of the current process with a new program. When
exec() is called, the current process image is completely replaced by the new program’s image. Typically,
exec() is used after fork() to run a different program in the child process without affecting the parent
process’s execution. This pair of calls (fork() and exec()) allows one process to create another and run a new
program within it.
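A minimal sketch of the fork()/exec() pattern on a Unix-like system is shown below; /bin/ls is just an example program for the child to run.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                   /* duplicate the calling process */
    if (pid == 0) {
        /* child process: replace its image with a new program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl failed");           /* reached only if exec() fails */
        _exit(1);
    } else if (pid > 0) {
        wait(NULL);                       /* parent waits for the child to finish */
        printf("Child finished; parent pid = %d\n", getpid());
    } else {
        perror("fork failed");
    }
    return 0;
}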
ANSWER:
Pre-emptive scheduling is suitable for real-time and interactive systems where quick responses are necessary, while
non-preemptive scheduling is ideal for batch systems, where tasks can run to completion without interruption.
ANSWER:
The long-term scheduler regulates which processes are loaded into memory, managing overall system load, while
the short-term scheduler allocates CPU time, aiming to balance CPU utilization and responsiveness.
Ques 20. How Do Distributed Operating Systems Differ from Multiprogrammed and Time-Shared Operating
Systems? Give Key Features of Each.
ANSWER:
Distributed Operating Systems:
• Definition: A distributed operating system manages a network of independent computers and presents
them as a unified system to users. It enables the sharing of resources like files, printers, and processing
power across multiple locations.
• Key Features:
o Transparency: Provides users with a seamless experience of a single system despite the physical
distribution of resources.
o Resource Sharing: Enables efficient resource utilization by allowing shared access across systems.
o Fault Tolerance: Ensures system reliability and availability through redundancy.
o Scalability: Can expand the system by adding more nodes to the network without significant
reconfiguration.
Multiprogrammed Operating Systems:
• Definition: These systems allow multiple programs to reside in memory simultaneously, interleaving their
execution to make better use of CPU time.
• Key Features:
o CPU Utilization: Maximizes the efficiency of the CPU by switching between tasks.
o Task Switching: Quickly switches between programs, reducing idle CPU time.
o Batch Processing: Suited for batch jobs, where multiple tasks are processed without interaction.
Time-Shared Operating Systems:
• Definition: A time-shared OS allows multiple users to interact with the system simultaneously, giving each
user the impression of having dedicated access.
• Key Features:
o Time-Slicing: Allocates CPU time in small intervals to multiple users or tasks.
o Interactive Environment: Designed for responsiveness, allowing users to execute commands and
receive feedback quickly.
o Fair Allocation: Ensures fair CPU access among all active users, enabling effective multitasking.
ANSWER:
A multitasking system is an operating system that allows multiple tasks (processes) to run concurrently by rapidly
switching between them. This switching gives the appearance that all tasks are executing simultaneously, though, in
reality, only one task is executed at any given moment. This system is essential for making effective use of the CPU,
ensuring no resource is idle when it could be working on another task. Benefits of multitasking include:
• Efficient Resource Utilization: Multitasking keeps the CPU and other resources occupied by assigning tasks
whenever possible.
• Responsive User Interaction: Multitasking supports running background processes while handling active
user applications.
Types of multitasking:
• Preemptive Multitasking: The OS has control over task scheduling, forcing a task to yield after a set amount
of time (called a time slice). This prevents a single process from monopolizing the CPU.
• Cooperative Multitasking: Each process voluntarily releases control of the CPU, allowing the OS to switch
tasks. This model depends on well-behaved processes and is less efficient when processes are not
cooperative.
A real-time system is designed to respond to inputs or events within strict time constraints, known as deadlines. In
such systems, timing is as crucial as logical correctness, as any missed deadline could lead to critical failures. Real-
time systems are divided into:
• Hard Real-Time Systems: These systems require absolute adherence to deadlines, with any delay potentially
causing catastrophic results. Examples include pacemakers, industrial control systems, and flight navigation.
• Soft Real-Time Systems: Deadlines are important, but missing one occasionally is acceptable without causing
severe issues. Examples include multimedia streaming, online gaming, and stock trading systems.
Real-time systems prioritize quick processing and predictable response times. They are built for reliability and
accuracy in time-critical operations and often use specialized hardware and software to meet these constraints.
ANSWER:
A semaphore is a synchronization tool used in concurrent programming to control access to shared resources by
multiple processes or threads. It helps prevent race conditions and ensures that only a specified number of
processes can access a shared resource simultaneously. Semaphores operate on two basic atomic operations:
• Wait (P): Decrements the semaphore value. If the semaphore value is less than or equal to zero, the process
waits until the semaphore is greater than zero.
• Signal (V): Increments the semaphore value, potentially releasing a waiting process.
Types of Semaphores:
• Binary Semaphore: Also called a mutex, it only has values 0 or 1, indicating whether a resource is available
(1) or locked (0).
• Counting Semaphore: Can take any integer value, typically used when multiple instances of a resource are
available (e.g., a pool of printers).
Busy-Waiting Semaphores: Busy waiting occurs when a process repeatedly checks the semaphore in a loop
(spinlock) while waiting to acquire the resource. While this approach ensures quick access once the resource
becomes available, it consumes CPU time inefficiently as it constantly checks the semaphore state. Busy waiting is
commonly used in systems where the wait time is short and system efficiency is less of a concern. However, in
multiprocessor systems, busy waiting can improve performance due to reduced overhead in process switching.
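In practice, the wait (P) and signal (V) operations described above map onto library primitives such as POSIX semaphores. Below is a minimal sketch (the printer pool, thread count, and sleep are illustrative) in which a counting semaphore initialized to 3 allows at most three threads to "print" at the same time.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t printers;                       /* counting semaphore modelling 3 printers */

void *job(void *arg) {
    sem_wait(&printers);              /* wait (P): acquire a printer */
    printf("Thread %ld is printing\n", (long)arg);
    sleep(1);                         /* simulate work */
    sem_post(&printers);              /* signal (V): release the printer */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&printers, 0, 3);        /* 3 resources available */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&printers);
    return 0;
}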
ANSWER:
Multilevel Queue Scheduling and Multilevel Feedback Queue Scheduling are two process-scheduling algorithms
designed to organize and prioritize tasks efficiently. Here is an explanation of each and how the two compare.
Multilevel Queue Scheduling:
• Definition: Multilevel Queue Scheduling divides processes into separate queues based on characteristics
such as priority, type, or expected response time.
• Characteristics:
o Each queue is permanently assigned a priority level, and higher-priority queues are processed first.
o Different scheduling algorithms can be assigned to different queues; for instance, high-priority
queues might use Round Robin, while low-priority ones use First Come, First Served (FCFS).
o Processes stay in the same queue throughout their lifecycle, which limits flexibility.
Multilevel Feedback Queue Scheduling:
• Definition: In Multilevel Feedback Queue Scheduling, processes are allowed to move between queues based
on their behavior and CPU usage. This system adapts to process needs and provides a more flexible
structure.
• Characteristics:
o Processes can move to higher or lower priority queues based on their CPU burst characteristics or
time spent waiting, ensuring fair access.
o This flexibility allows the algorithm to prioritize short tasks, interactive processes, and I/O-bound
jobs.
o Each queue can use a different scheduling method, and the feedback structure helps prevent
starvation of low-priority processes.
ANSWER:
Semaphores are synchronization tools used in operating systems to manage concurrent processes accessing shared
resources. They help prevent race conditions by ensuring that only a specific number of processes can access a
critical section or resource at any given time.
Semaphores support two atomic operations:
1. Wait (P): Decreases the semaphore's count. If the count becomes less than zero, the process waits until the
semaphore is positive.
2. Signal (V): Increases the semaphore’s count, potentially allowing a waiting process to enter the critical
section.
Semaphores ensure:
• Mutual Exclusion: Only one process (in binary semaphore cases) or a specific number of processes (in
counting semaphore cases) can access the resource simultaneously.
• Orderly Process Synchronization: Processes are controlled in an orderly manner to prevent issues such as
race conditions.
ANSWER:
Processes and Threads are fundamental units of execution in an operating system. While they share some
similarities, they differ significantly in how they manage resources, memory, and scheduling. Additionally, threads
can be further categorized into User-Level Threads and Kernel-Level Threads, each with distinct characteristics.
• User-Level Threads: Managed by user-level libraries rather than directly by the OS. These threads are
lightweight and fast to switch but may lack system-level integration, as the OS treats them as a single
process.
• Kernel-Level Threads: Managed directly by the operating system kernel. They allow more direct interaction
with OS resources and system scheduling but are generally slower due to kernel overhead.
ANSWER:
The Critical Section Problem arises in concurrent programming when multiple processes or threads access a shared
resource simultaneously. To solve this problem, synchronization is required to ensure that only one process enters
the critical section at a time, and no two processes access shared resources simultaneously in an unsafe way.
A semaphore is a synchronization primitive used to control access to shared resources by multiple processes in a
concurrent system. It can be used to solve the critical section problem by using a binary semaphore (also called a
mutex) to lock and unlock the critical section.
1. Initialize the Semaphore: We create a binary semaphore, typically initialized to 1, which represents that the
critical section is available. The value 1 indicates that no process is currently in the critical section, while 0
means it is occupied.
2. Enter the Critical Section:
o Each process that wants to enter the critical section will first check the semaphore.
o It performs a wait (P) operation (e.g., P(semaphore)) to decrement the semaphore value.
o If the semaphore value is 1, it will be set to 0, and the process will be allowed to enter the critical
section.
o If the value is already 0, the process will wait until the semaphore becomes 1, indicating that
another process has exited the critical section.
3. Exit the Critical Section:
o After completing its operations within the critical section, the process performs a signal (V)
operation (e.g., V(semaphore)).
o This increments the semaphore value back to 1, signaling that the critical section is now available for
other processes to enter.
Pseudocode Solution
Here’s the pseudocode for the critical section solution using a binary semaphore.
semaphore mutex = 1;     // 1 means the critical section is free
do {
    wait(mutex);         // Entry Section: acquire the binary semaphore
    // Critical Section
    // (Only one process can execute this part at a time)
    signal(mutex);       // Exit Section: release the semaphore
    // Remainder Section
} while (true);
This solution satisfies the three requirements as follows:
1. Mutual Exclusion:
o Mutual exclusion means that only one process can enter the critical section at a time.
o In this solution, the wait(mutex) operation ensures mutual exclusion. When a process enters the
critical section, it sets mutex to 0, blocking other processes from entering until it exits.
o Once the process exits, it increments mutex to 1 through the signal(mutex) operation, allowing other
processes to enter.
2. Progress:
o The progress condition ensures that if no process is in the critical section, the decision of which
process will enter next depends on the waiting processes.
o In this solution, processes outside the critical section cannot block others from entering. If a process
finishes its work in the critical section, it signals (increments) the semaphore, allowing other
processes to make progress.
3. Bounded Waiting:
o Bounded waiting prevents indefinite delays for processes waiting to enter the critical section.
o While basic semaphores do not enforce strict bounded waiting, additional logic (such as queuing
waiting processes) could be implemented to satisfy this requirement.
o This solution is fair, as every waiting process will get a chance to enter the critical section in a finite
amount of time.
Suppose we have two processes, P1 and P2, that both need to access a shared resource. P1 calls wait(mutex) first, sets mutex to 0, and enters the critical section; when P2 calls wait(mutex), it finds the value 0 and waits. Once P1 calls signal(mutex), the value returns to 1 and P2 can enter.
Advantages of this approach:
• Simplicity: Using a single semaphore simplifies managing access to the critical section.
• Efficiency: This solution provides mutual exclusion with minimal overhead.
• Flexibility: Semaphores can be applied to both single and multiple shared resource scenarios.
ANSWER:
The readers-writers problem is a classic synchronization problem, where multiple readers can access a shared
resource simultaneously, but only one writer should access it at any time. Here’s a solution using semaphores:
Semaphore mutex = 1;     // protects read_count
Semaphore wrt = 1;       // gives writers exclusive access
int read_count = 0;
void reader() {
wait(mutex); // Protecting access to read_count
read_count++;
if (read_count == 1) {
wait(wrt); // First reader locks out writers
}
signal(mutex);
// Reading section
wait(mutex);
read_count--;
if (read_count == 0) {
signal(wrt); // Last reader releases the lock
}
signal(mutex);
}
void writer() {
wait(wrt); // Writers gain exclusive access to the resource
// Writing section
signal(wrt); // Writer releases lock after writing
}
• Justification:
o Multiple readers can read concurrently if no writers are present.
o Only one writer can write to the resource, ensuring data consistency.
o Prevents race conditions on read_count with the mutex semaphore.
ANSWER:
The Linux Operating System is an open-source, Unix-like operating system known for its flexibility, security, and
customization. Developed by Linus Torvalds in 1991, Linux has grown into a highly versatile OS, powering everything
from personal computers to servers and embedded systems.
1. Open Source and Free
• Linux is distributed under the GNU General Public License (GPL), meaning the source code is freely available
to anyone.
• Users can modify, distribute, and use Linux without licensing fees, making it a popular choice for individuals
and organizations worldwide.
2. Multitasking and Multiuser
• Multitasking: Linux allows multiple processes to run simultaneously, making it ideal for complex and
intensive computing tasks.
• Multiuser: Multiple users can access the system at the same time without interfering with each other,
making it suitable for server environments and shared systems.
3. Security
• Linux provides robust security through user accounts, permissions, and roles.
• The OS uses permission modes (read, write, execute) to protect files and directories, making it more
resistant to unauthorized access and malware.
4. File System Support
• Linux supports multiple file systems, including EXT3, EXT4, XFS, Btrfs, and NTFS.
• It organizes data hierarchically and uses a virtual file system (VFS) layer to interact with various physical file
systems, allowing compatibility and flexibility.
5. Portability and Compatibility
• Linux can run on various hardware platforms, including x86, ARM, and RISC, making it a highly portable OS.
• Linux is compatible with numerous devices and software, providing support for a wide range of peripherals,
drivers, and applications.
6. Modular Monolithic Kernel
• The Linux kernel is monolithic but modular, allowing users to add or remove features without modifying the
kernel itself.
• This modular design enables users to load only the required components, optimizing performance.
7. Shell and Command-Line Interface
• Linux provides powerful command-line interfaces (CLI) such as Bash, Zsh, and Fish, enabling users to
perform tasks efficiently.
• Users can automate tasks, manage system processes, and perform complex operations through scripting.
8. Process Scheduling
• Linux provides efficient process scheduling through algorithms such as CFS (Completely Fair Scheduler).
9. Networking Capabilities
• Linux includes strong networking features, supporting TCP/IP protocols and enabling services like DNS, FTP,
and HTTP.
• Many web servers, cloud platforms, and internet services are built on Linux due to its reliability and
scalability in networking.
10. Developer Support and Customization
• Linux is a favorite among developers due to its extensive support for programming languages, libraries, and
tools.
• Users can customize the kernel and modify distributions, creating tailored versions for specific needs.
ANSWER:
In operating systems, threads enable concurrent task execution within a process. Different threading models
manage the relationship between user threads (managed by application) and kernel threads (managed by OS).
Many-to-Many and One-to-One are two key threading models with distinct characteristics.
One-to-One Model:
1. Definition: In the One-to-One model, each user-level thread has a corresponding kernel-level thread.
2. Characteristics:
o Each user thread is mapped directly to a kernel thread.
o Thread operations such as creation, termination, and switching are managed by the OS, giving fine-
grained control over each thread.
3. Advantages:
o Allows true parallelism on multi-core systems, as each thread can run on a separate processor.
o Threads can take advantage of kernel-level scheduling and priority mechanisms, improving
performance.
4. Disadvantages:
o High resource usage, as each thread requires a separate kernel-level thread.
o Limited by the OS’s support for kernel threads, potentially impacting performance on systems with
many threads.
Many-to-Many Model:
1. Definition: In the Many-to-Many model, multiple user-level threads are mapped onto a smaller or equal
number of kernel threads.
2. Characteristics:
o The OS and runtime library manage the mapping between user and kernel threads, allowing
flexibility.
o User threads do not require a direct kernel thread, reducing resource consumption.
3. Advantages:
o Efficient resource use, as multiple user threads can share fewer kernel threads.
o Allows high concurrency without creating too many kernel threads, optimizing system performance.
4. Disadvantages:
o Context switching between user and kernel threads adds complexity and overhead.
o Limited control over individual thread priorities since several user threads share kernel resources.
Comparison Table:
Aspect            One-to-One Model                                    Many-to-Many Model
Mapping           Each user thread maps to its own kernel thread      Many user threads share a smaller set of kernel threads
Parallelism       True parallelism on multi-core systems              Limited by the number of kernel threads in use
Resource usage    Higher, since every thread needs a kernel thread    Lower, since kernel threads are shared
Management        Handled entirely by the OS kernel                   Shared between the runtime library and the kernel
ANSWER:
Multilevel Feedback Queue (MLFQ) Scheduling is an advanced scheduling algorithm designed to handle processes
with varying priorities dynamically. Unlike traditional scheduling, MLFQ allows processes to change priority based on
their behavior and requirements, balancing CPU-bound and I/O-bound tasks effectively.
Advantages:
1. Adaptability:
o MLFQ adapts to process behavior, providing high-priority for short, interactive processes and low-
priority for long-running, CPU-bound tasks.
2. Efficient CPU Utilization:
o By allowing CPU-bound processes to move to lower-priority queues, MLFQ improves CPU utilization
while prioritizing interactive processes.
3. Prevention of Starvation:
o With aging mechanisms, MLFQ prevents lower-priority processes from starving, ensuring fair access
to CPU resources over time.
Disadvantages:
1. Complex Implementation:
o Managing multiple queues, priority adjustments, and aging can be complex.
2. Unpredictable Response Times:
o Because a process's priority changes with its behavior, it is difficult to predict exactly when it will next receive the CPU.
Example: Assume three queues with decreasing priority, where Queue 1 has the highest priority and Queue 3 the lowest.
A new process starts in Queue 1. If it exhausts its time slice, it moves to Queue 2. If it also uses its time slice in Queue
2, it is moved to Queue 3. Processes that use only a portion of their time slice may stay in their current queue or be
promoted.
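The demotion rule in this example can be captured in a tiny sketch (the queue numbering and function name are invented for illustration): a process that exhausts its time slice drops one priority level, while one that yields early keeps its level.

#include <stdio.h>

#define LOWEST_QUEUE 3

int next_queue(int current_queue, int used_full_slice) {
    if (used_full_slice && current_queue < LOWEST_QUEUE)
        return current_queue + 1;     /* demote a CPU-bound process */
    return current_queue;             /* an interactive process keeps its level */
}

int main(void) {
    printf("Queue 1, used full slice -> Queue %d\n", next_queue(1, 1));  /* 2 */
    printf("Queue 2, blocked early   -> Queue %d\n", next_queue(2, 0));  /* 2 */
    return 0;
}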
Ques 31. Job Queue, Ready Queue, and Device Queue in Process Scheduling?
ANSWER:
Job Queue: The job queue is a collection of processes that are waiting to be admitted into the system for execution.
It is essentially a pool of all the processes that are in the system but are not yet in the ready state. When processes
are submitted to the operating system (OS), they are initially placed in the job queue, waiting for the OS to allocate
resources for their execution. The job queue is where processes wait for admission to the ready queue, where they
can be scheduled for CPU time.
• Functionality: The job queue handles incoming processes that are waiting to be loaded into memory and
executed.
• State: The processes in the job queue are typically in a "new" state.
Ready Queue: The ready queue is a queue that contains all processes that are ready to execute, but waiting for CPU
time. Once a process has been loaded into memory and is ready to execute, it is moved from the job queue to the
ready queue. The processes in the ready queue are in the "ready" state, meaning they are fully prepared to execute
but are waiting for the CPU to become available. The ready queue is typically implemented as a circular queue or a
priority queue.
• Functionality: The ready queue holds processes that are waiting for CPU time to execute. The CPU scheduler
selects processes from this queue for execution.
• State: Processes in this queue are in the "ready" state, meaning they are prepared to run once they get CPU
time.
Device Queue: The device queue contains processes that are waiting for an I/O device, such as a disk, printer, or
network interface. These processes are in the "blocked" state, meaning they cannot continue execution until the
device becomes available. When a process requires a specific device, it is moved from the ready queue to the
appropriate device queue. After the process completes its I/O operation, it is moved back to the ready queue,
provided it has not been blocked by any other conditions.
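As a rough illustration of these transitions, the following Python sketch models a process moving between the three queues; the process names and the I/O event are invented for the example.

from collections import deque

# Illustrative sketch of the three scheduling queues described above.
job_queue = deque(["P1", "P2", "P3"])   # newly submitted processes ("new" state)
ready_queue = deque()                   # processes waiting for the CPU ("ready" state)
device_queue = deque()                  # processes waiting on an I/O device ("blocked" state)

# Admission: job queue -> ready queue once a process is loaded into memory.
while job_queue:
    ready_queue.append(job_queue.popleft())

# Dispatch one process; suppose it then requests I/O, so it joins a device queue.
running = ready_queue.popleft()
device_queue.append(running)            # blocked until the device completes the request

# I/O completes: the process returns to the ready queue.
ready_queue.append(device_queue.popleft())
print("Ready queue:", list(ready_queue))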
ANSWER:
CPU scheduling algorithms are designed to determine the order in which processes should be assigned CPU time to
ensure efficient execution. The various criteria considered in these algorithms are:
1. CPU Utilization: The degree to which the CPU is being used. High CPU utilization is generally desired to keep
the CPU busy.
2. Throughput: The number of processes completed per unit of time. High throughput means that the system
can complete more tasks in a given period.
3. Turnaround Time: The total time taken by a process from submission to completion. It includes waiting,
execution, and I/O times. The goal is to minimize turnaround time.
4. Waiting Time: The amount of time a process spends waiting in the ready queue before it gets executed.
5. Response Time: The time between submitting a request and receiving the first response. This is important in
interactive systems where the user expects prompt feedback.
6. Fairness: Ensuring that every process gets a fair share of CPU time, especially in multi-user systems.
Now, let's dive into the two specific CPU scheduling algorithms:
Shortest Job First (SJF):
SJF is a non-preemptive CPU scheduling algorithm that selects the process with the shortest burst time (the length of its next CPU execution) to run first. The key idea behind SJF is that processes with shorter execution times should be executed before those with longer execution times, minimizing the average waiting time.
• Preemptive vs Non-Preemptive: The basic version of SJF is non-preemptive, meaning once a process starts
executing, it runs to completion. However, a preemptive version, known as Shortest Remaining Time First
(SRTF), allows processes to be preempted if a new process with a shorter burst time arrives.
• Advantages: SJF minimizes the average waiting time and is optimal in terms of minimizing the average
turnaround time for a set of processes.
• Disadvantages: The major drawback is that it requires knowledge of the process burst time, which may not
always be known. Additionally, it can lead to starvation, as longer processes may never get CPU time if
shorter processes keep arriving.
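As a small illustration of non-preemptive SJF, the sketch below repeatedly picks the shortest available burst; the arrival and burst times are made up purely for this example.

# Illustrative non-preemptive SJF; the (name, arrival, burst) values are assumed.
processes = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1), ("P4", 3, 4)]

time, order, waiting = 0, [], {}
remaining = list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                            # nothing has arrived yet: CPU idles
        time = min(p[1] for p in remaining)
        continue
    job = min(ready, key=lambda p: p[2])     # shortest burst among arrived processes
    name, arrival, burst = job
    waiting[name] = time - arrival
    time += burst                            # runs to completion (non-preemptive)
    remaining.remove(job)
    order.append(name)

print(order, "average waiting time =", sum(waiting.values()) / len(waiting))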
Round Robin (RR):
Round Robin is one of the simplest and most widely used preemptive CPU scheduling algorithms. In RR, each process is assigned a fixed time quantum (also called a time slice). When a process is allocated the CPU, it runs for a time equal to the time quantum or until it finishes execution, whichever comes first. If a process does not finish within its time slice, it is preempted, and the next process in the ready queue gets its turn.
• Time Quantum: The time quantum is a critical parameter in RR. If the quantum is too large, RR behaves
similarly to First-Come, First-Served (FCFS), and if it's too small, the system might incur high context-
switching overhead.
• Advantages: RR is simple to implement and fair, as every process gets an equal share of CPU time. It is well-
suited for time-sharing systems where processes are expected to interact with users.
• Disadvantages: RR can result in higher turnaround times if the time quantum is too large, and if it’s too
small, the overhead from frequent context switching can degrade performance.
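A minimal Round Robin sketch, assuming a time quantum of 2 units and an invented set of processes:

from collections import deque

QUANTUM = 2                                        # assumed time quantum
ready = deque([["P1", 5], ["P2", 3], ["P3", 1]])   # (name, remaining burst) - example data

time, timeline = 0, []
while ready:
    name, remaining = ready.popleft()
    run = min(QUANTUM, remaining)
    time += run
    timeline.append((name, time))                  # when this time slice ends
    if remaining > run:
        ready.append([name, remaining - run])      # preempted: rejoin the back of the queue

print(timeline)                                    # slices in round-robin order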
ANSWER:
Semaphores are synchronization primitives used to manage concurrent processes in operating systems. A
semaphore is an integer variable used to control access to a shared resource. Semaphores can be classified as binary
(taking values 0 or 1) or counting (taking non-negative integer values).
• Functionality: Semaphores are used to signal and wait for conditions to be met, allowing processes to
coordinate their actions and avoid race conditions or deadlocks.
• Types:
o Binary Semaphores (or mutexes): Used to represent a lock, allowing only one process to access a
critical section at a time.
o Counting Semaphores: Used when multiple instances of a resource are available, and the
semaphore keeps track of the number of available resources.
The two primary operations used with semaphores are wait() and signal().
• wait() (also known as the P or down operation): The wait() function decrements the semaphore value. If the semaphore value is positive, the process continues; if it is zero, the process is blocked until the semaphore becomes positive again. Essentially, wait() checks whether a resource is available and blocks the process if not.
o Syntax: wait(semaphore).
o Effect: Decrements the semaphore value; if no resource is available, the calling process blocks until a matching signal() releases one.
• signal() (also known as V or up operation): The signal() function increments the semaphore value. If any
processes are waiting on the semaphore, one of them is allowed to proceed by being unblocked. It
essentially signals that a resource has been released or that a condition is now true.
Together, wait() and signal() help manage mutual exclusion, process synchronization, and coordination in concurrent
systems.
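To make wait() and signal() concrete, here is a minimal Python sketch of a counting semaphore built on a condition variable. It mirrors the description above (Python's own threading.Semaphore provides the same behaviour) and is not tied to any particular operating system's implementation.

import threading

class CountingSemaphore:
    """Minimal counting semaphore illustrating the wait()/signal() operations."""
    def __init__(self, value=1):               # value=1 gives a binary semaphore (mutex)
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                            # P / down operation
        with self._cond:
            while self._value == 0:            # no resource available: block
                self._cond.wait()
            self._value -= 1

    def signal(self):                          # V / up operation
        with self._cond:
            self._value += 1
            self._cond.notify()                # wake one blocked waiter, if any

# Usage: protect a critical section with a binary semaphore.
mutex = CountingSemaphore(1)
mutex.wait()
# ... critical section ...
mutex.signal()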
ANSWER:
The ready queue plays a critical role in the operating system's scheduling of processes. It is a queue where processes
wait for CPU time after they have been loaded into memory and are ready to execute.
• Functionality: The ready queue holds processes that are in the "ready" state and waiting for CPU time. The
CPU scheduler selects processes from the ready queue to execute based on a scheduling algorithm.
• Process Lifecycle: When a process is created, it starts in the job queue. Once it is loaded into memory and is
ready to execute, it is moved to the ready queue. The CPU scheduler selects processes from the ready queue
to run on the CPU.
• Organization: The ready queue can be organized in various ways depending on the CPU scheduling algorithm
being used. It may be a simple FIFO queue (First-Come, First-Served), a priority queue, or a more complex
structure such as a multi-level feedback queue.
• Preemptive Scheduling: In preemptive scheduling systems, when a higher-priority process arrives, the currently running process may be preempted and moved back to the ready queue so that the higher-priority process can run.
• Role in Scheduling: The ready queue helps ensure that processes are given CPU time in an orderly manner. It
is crucial in multi-tasking systems where multiple processes are waiting to run. Efficient management of the
ready queue ensures fair CPU allocation and minimizes waiting times for processes.
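As one of the organizations mentioned above, a priority-ordered ready queue can be sketched with a binary heap; the priorities and process names below are assumptions for illustration.

import heapq

ready_queue = []                                # min-heap: lower number = higher priority
heapq.heappush(ready_queue, (2, "P1"))
heapq.heappush(ready_queue, (0, "P2"))
heapq.heappush(ready_queue, (1, "P3"))

# The CPU scheduler dispatches the highest-priority ready process first.
while ready_queue:
    priority, process = heapq.heappop(ready_queue)
    print("Dispatching", process, "with priority", priority)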
(i) What is the average turnaround time for these processes with FCFS scheduling?
(ii) What is the average turnaround time for these processes with SJF scheduling?
(iii) What is the average turnaround time if the CPU is left idle for the first 1 unit of time and then SJF scheduling is used?
ANSWER:
In FCFS scheduling, processes are executed in the order of their arrival times. The turnaround time for each process is calculated as:
Turnaround Time = Completion Time − Arrival Time
In SJF, the process with the shortest burst time is executed next, based on the available processes when the CPU is
free.
If the CPU remains idle for the first 1 unit of time, we start scheduling only at t = 1.
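Since the original process table is not reproduced here, the sketch below uses invented arrival and burst times solely to show how the three averages in parts (i)-(iii) would be computed.

# Illustrative only: these values are NOT the assignment's data.
processes = [("P1", 0, 6), ("P2", 1, 3), ("P3", 2, 1)]    # (name, arrival, burst)

def avg_turnaround(order, start=0):
    """Average turnaround time (completion - arrival) for a fixed execution order."""
    lookup = {name: (arr, burst) for name, arr, burst in processes}
    time, total = start, 0
    for name in order:
        arr, burst = lookup[name]
        time = max(time, arr) + burst          # the CPU idles until the process arrives
        total += time - arr
    return total / len(order)

print("(i)  FCFS:", avg_turnaround(["P1", "P2", "P3"]))             # arrival order
print("(ii) SJF:", avg_turnaround(["P1", "P3", "P2"]))              # only P1 has arrived at t = 0
print("(iii) idle 1 unit + SJF:", avg_turnaround(["P2", "P3", "P1"], start=1))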
Summary of Results
Assume that context-switching overhead is 1 time unit and that the time quantum used in Round Robin scheduling is 2 time units.
In Preemptive SJF (also known as Shortest Remaining Time First), we always select the process with the shortest
remaining burst time when the CPU becomes available. We also add 1 unit of time for each context switch.
Let's go through each process selection step-by-step to build a timeline with context switches:
Waiting Time = Start Time − Arrival Time + Waiting in Queue
Calculations give:
With Round Robin, we rotate through the processes in order, using a time quantum of 2 units per process and
adding 1 unit for each context switch.
Summary of Results