
Assignment

Practical File

Operating System With Linux (MCA-163)

Submitted by:
Name: Shiva Yadav
Roll No.: 01211104424
Course: MCA 1st Semester

Submitted to:
Name: Ms. Preeti
Designation: Subject Faculty
Assignment
Q1. Compare monolithic kernel and microkernel. Which type of kernel is used in
Microsoft Windows 10?
Monolithic Kernel:
 Definition: A monolithic kernel is a single large process running entirely in a single
address space. It integrates all the essential functions, including device drivers, file
system management, memory management, and system services within the kernel
space.
 Advantages:
o Performance: Since all services are in the same address space, inter-process
communication (IPC) is faster, leading to better performance.
o Direct Hardware Access: The direct access to hardware allows for efficient
use of resources.
 Disadvantages:
o Stability: A bug in any part of the kernel can crash the entire system.
o Maintenance: Adding or removing features can be more complex because
everything is tightly integrated.
Microkernel:
 Definition: A microkernel includes only the essential core functionalities, such as
inter-process communication (IPC) and basic memory management. Other services
like device drivers, file systems, and networking run in user space as separate
processes.
 Advantages:
o Modularity: Since most services run in user space, it's easier to update or
replace them without affecting the entire system.
o Stability: A fault in a user space service doesn't crash the entire kernel,
making the system more stable.
 Disadvantages:
o Performance: The need for communication between user space services and
the kernel (via IPC) can lead to a performance overhead.
o Complexity: Developing a microkernel is more complex due to the separation
of services.
Kernel Used in Microsoft Windows 10:
Microsoft Windows 10 uses a hybrid kernel, which is a blend of both monolithic and
microkernel designs. It is primarily based on a monolithic architecture, but it incorporates
aspects of microkernels to improve modularity and stability. This allows Windows to have
the performance benefits of a monolithic design while offering some of the modularity and
reliability features of microkernels.
Q2. Illustrate the bootstrap process of an operating system.
The bootstrap process of an operating system refers to the sequence of steps that occur when
a computer is powered on, leading up to the point where the operating system is fully loaded
and operational. This process involves initializing hardware components, loading
essential system software, and starting the operating system. Here’s an illustration of the
bootstrap process:
1. Power On/Reset
 When the computer is powered on or reset, it triggers the Power-On Self Test
(POST).
 The POST checks the hardware components, such as RAM, CPU, and storage,
to ensure they are functioning correctly.
2. BIOS/UEFI Execution
 After a successful POST, the BIOS (Basic Input/Output System) or UEFI
(Unified Extensible Firmware Interface) firmware is executed.
 BIOS/UEFI is stored in non-volatile memory on the motherboard and is responsible
for initializing hardware components like hard drives, display, keyboard, and other
peripherals.
 It then identifies the bootable storage device (such as an SSD, HDD, or USB drive).
3. Bootloader Loading
 Once a bootable device is detected, the BIOS/UEFI loads the bootloader from the
Master Boot Record (MBR) or the GUID Partition Table (GPT).
 The bootloader is a small program that starts the process of loading the operating
system.
 Examples of bootloaders include GRUB (GNU GRand Unified Bootloader) for
Linux and Windows Boot Manager for Windows.
4. Bootloader Execution
 The bootloader, after loading into memory, is executed.
 It loads the kernel of the operating system into memory and passes control to it.
 In systems with multiple operating systems, the bootloader may present a menu for
the user to select which OS to boot.
5. Kernel Initialization
 The operating system kernel is now loaded into memory and starts to initialize.
 The kernel is the core component of the operating system that manages memory,
processes, and hardware interactions.
 It sets up memory management, initializes device drivers, and loads other essential
system services.
6. System Services and User-Space Initialization
 The kernel starts system services and daemon processes, such as networking
services, device management, and background tasks.
 The user-space environment (such as systemd on Linux or winlogon on Windows) is
initialized, which includes starting the login screen or graphical user interface (GUI).
7. User Login and Operating System Ready
 The operating system presents the login screen (if enabled) or goes directly to the
desktop environment.
 The user logs in, and the system is fully operational, ready for user interaction.
This process is essential because it allows the system to prepare the hardware and load the
necessary software components to run the operating system. Here’s a simplified flowchart
of the bootstrap process:

Power On/Reset ➜ POST (Power-On Self-Test) ➜ BIOS/UEFI Execution ➜ Bootloader Loading ➜
Bootloader Execution ➜ Kernel Initialization ➜ System Services Start ➜ User Login/OS Ready

Q3. Write the Peterson’s algorithm(pseudo code) to solve critical section problem.
Peterson's Algorithm is a classic solution for achieving mutual exclusion in concurrent
programming. It ensures that two processes can safely share a critical section without
encountering race conditions. The algorithm relies on two shared variables, a boolean array
flag[2] and an integer turn, and works for two processes, P0 and P1.
Peterson’s Algorithm (Pseudocode)
Here is the pseudocode for Peterson’s algorithm to solve the critical section problem for
two processes:
// Shared variables:
boolean flag[2] = {false, false}; // flag[i] is true when process Pi wants to enter the critical section.
int turn;                         // Indicates whose turn it is to enter the critical section.

// Process P0
do {
    flag[0] = true;               // Indicate that P0 wants to enter the critical section.
    turn = 1;                     // Give the turn to P1.
    while (flag[1] && turn == 1) {
        // Busy wait until P1 is not interested or it is P0's turn.
    }

    // Critical Section (P0)
    // -- Perform critical section activities here --

    flag[0] = false;              // Indicate that P0 is leaving the critical section.

    // Remainder Section (P0)
    // -- Perform non-critical section activities here --
} while (true);

// Process P1
do {
    flag[1] = true;               // Indicate that P1 wants to enter the critical section.
    turn = 0;                     // Give the turn to P0.
    while (flag[0] && turn == 0) {
        // Busy wait until P0 is not interested or it is P1's turn.
    }

    // Critical Section (P1)
    // -- Perform critical section activities here --

    flag[1] = false;              // Indicate that P1 is leaving the critical section.

    // Remainder Section (P1)
    // -- Perform non-critical section activities here --
} while (true);
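The same logic can be compiled and tested directly. The following is a minimal runnable C sketch of Peterson's algorithm using POSIX threads; it is not part of the original pseudocode, and it assumes C11 _Atomic variables, whose default sequentially consistent ordering is needed for the algorithm to work correctly on modern hardware:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

_Atomic int flag[2];            // flag[i] = 1 when Pi wants to enter
_Atomic int turn;               // whose turn it is
long counter = 0;               // shared data protected by the algorithm

void *worker(void *arg) {
    int i = (int)(long)arg, other = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;            // entry section: announce interest
        turn = other;           // give the turn away
        while (flag[other] && turn == other)
            ;                   // busy wait
        counter++;              // critical section
        flag[i] = 0;            // exit section
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}

If the entry protocol is removed, the two threads race on counter and the final value is usually below 200000, which is exactly the race condition the algorithm prevents.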

Q4. What is distributed operating system? Compare client-server computing and peer-to-peer computing.
A Distributed Operating System (DOS) is an operating system that manages a collection of
independent computers and makes them appear to the user as a single cohesive system.
These independent computers are networked together and work collectively, sharing
resources such as processors, memory, and data storage.
Characteristics of Distributed Operating Systems:
 Resource Sharing: Multiple systems share hardware, software, and data resources,
leading to better utilization of resources.
 Transparency: The system hides the distribution of resources from the user,
providing location transparency (users do not need to know where a resource is
located).
 Scalability: Can scale easily by adding more machines to the network, making it
suitable for large-scale applications.
 Fault Tolerance: If one of the computers in the network fails, others can continue to
function, improving system reliability.
 Concurrency: Allows multiple processes to run simultaneously across different
machines, increasing performance.
Examples:
 Amoeba, Mach, LOCUS, and modern implementations like Google’s cloud
infrastructure or Amazon Web Services (AWS), which utilize distributed
computing principles.
Q5. Explain the multi-programmed batch systems and time sharing systems with their
advantages and disadvantages.
1. Multi-Programmed Batch Systems
Definition:
 Multi-programmed batch systems are a type of operating system where multiple
jobs (programs or tasks) are loaded into memory, allowing the CPU to switch
between them when one job needs to wait (e.g., for I/O operations). This keeps the
CPU busy by executing other jobs while one job is waiting.
 Jobs are submitted in batches, which means that users submit jobs to the system
without expecting immediate interaction.
Working:
 The OS selects a job from the batch, loads it into memory, and begins execution.
 When a job waits for an I/O operation, the OS switches to another job, ensuring that
the CPU remains utilized.
 The OS schedules jobs based on priority, or on a first-come, first-served basis.
Advantages:
 Better CPU Utilization: Since the CPU is never idle when a job is waiting for
I/O, it switches to other jobs, increasing the overall system efficiency.
 Throughput: The system can process many jobs in a given time,
improving throughput compared to single-program systems.
 Reduced Idle Time: Jobs that need I/O can coexist with compute-intensive jobs,
reducing the idle time of the CPU.
Disadvantages:
 No User Interaction: Users cannot interact with the job once it has been submitted.
The user has to wait until the entire job batch is processed to see the output.
 Difficult Debugging: Debugging is challenging because jobs are processed
without interaction. Errors are detected only after job completion.
 Job Scheduling Complexity: Deciding the order and manner in which jobs are
loaded and executed requires complex algorithms to ensure efficiency and fairness.
Example: Early mainframe systems like IBM’s OS/360 used multi-programmed batch
processing.

2. Time-Sharing Systems
Definition:
 Time-sharing systems are designed to allow multiple users to interact with the
computer system simultaneously. Each user gets a small time slice of the CPU,
making it appear as if the system is dedicated to them.
 This is achieved by rapidly switching between user tasks, making it possible for many
users to share the resources of a single system interactively.
Working:
 The OS uses a scheduling algorithm (e.g., Round Robin) to allocate small time
slices (quantum) to each user program.
 If a user process’s time slice expires or it needs to wait for I/O, the OS switches to
another process.
 This allows users to execute commands interactively and receive quick responses,
creating a multi-user environment.
Advantages:
 Interactivity: Users can interact with their programs and receive immediate
feedback, making it suitable for general-purpose usage.
 Resource Sharing: Efficient use of CPU, memory, and I/O devices as multiple
users share these resources.
 User Convenience: Users perceive that they have dedicated access to the system,
even though the CPU time is being shared among multiple processes.
Disadvantages:
 Overhead: The frequent context switching between users and processes can
introduce a significant overhead, reducing system efficiency.
 Security Risks: Sharing the system among multiple users can lead to security
challenges, as processes may need to be isolated to prevent unauthorized access.
 Performance Issues: As more users connect and demand resources, system
performance can degrade, especially if the time slices are too short or resources are
limited.
Example: UNIX and Linux are examples of time-sharing systems, supporting multiple users
interacting with the system simultaneously.
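As an illustration of time slicing, the following is a small hypothetical C simulation (not from the original text) of Round Robin scheduling: three made-up jobs share the CPU in 2-unit quanta, so each user sees regular progress instead of waiting for the others to finish.

#include <stdio.h>

// Round-robin time slicing over three jobs with a 2-unit quantum.
int main(void) {
    const char *name[] = {"P1", "P2", "P3"};
    int remaining[] = {5, 3, 4};            // CPU units each job still needs
    int n = 3, quantum = 2, left = 3;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            printf("%s runs for %d unit(s)\n", name[i], run);
            remaining[i] -= run;            // preempt after the quantum expires
            if (remaining[i] == 0) {
                printf("%s finished\n", name[i]);
                left--;
            }
        }
    }
    return 0;
}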
Q6. Why inter-process communication is important? Compare shared memory and
message passing models of inter process communication.
Inter-Process Communication (IPC) is crucial for enabling processes (programs in execution)
to exchange data and coordinate their actions in a computing system. IPC is particularly
important in multi-process and distributed systems, where processes need to work together to
perform complex tasks. It allows processes to share data, synchronize their execution, and
communicate effectively.
Importance of Inter-Process Communication (IPC):
 Data Sharing: Processes often need to exchange data to complete a task. IPC allows
them to share information seamlessly.
 Synchronization: IPC mechanisms help synchronize the actions of different
processes, ensuring that shared resources are accessed in an orderly manner.
 Modularity: By enabling different processes to communicate, IPC allows for better
modularity and division of labor, where different processes handle different parts of
an application.
 Resource Sharing: IPC helps processes share resources such as memory and data
structures, leading to more efficient resource usage.
 Fault Tolerance: Distributed processes can communicate with each other to provide
redundancy and improve the reliability and fault tolerance of systems.
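As a concrete example of the message-passing style of IPC, the sketch below (an assumed illustration, not part of the assignment text) uses a POSIX pipe: the parent process writes a message and the child reads it, so data is exchanged without any shared memory.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    pipe(fd);                           // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {                  // child process: the receiver
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }
    close(fd[0]);                       // parent process: the sender
    const char *msg = "hello from the parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                         // wait for the child to finish
    return 0;
}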
Q7. What is interrupt? Explain various services offered by an operating system.
An interrupt is a signal sent to the processor by hardware or software indicating that an event
needs immediate attention. Interrupts allow the CPU to respond to events or conditions in
real-time rather than waiting for a program to complete its execution. When an interrupt
occurs, the CPU temporarily halts its current operations, saves its state, and executes a
specific routine known as an interrupt handler or interrupt service routine (ISR) to address
the event.
Types of Interrupts:
1. Hardware Interrupts: Generated by hardware devices (e.g., keyboard, mouse, disk
drives) to signal events like input/output requests or hardware malfunctions.
2. Software Interrupts: Generated by programs or the operating system itself, often
referred to as traps or exceptions (e.g., division by zero, invalid memory access).
3. Timer Interrupts: Generated by the system timer at regular intervals to allow the OS
to perform scheduling and other periodic tasks.
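A rough user-space analogy (an illustrative sketch, not from the original answer) is a POSIX signal handler: the alarm signal plays the role of a timer interrupt, and the handler plays the role of the interrupt service routine while the main flow is suspended.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t ticks = 0;

void on_alarm(int sig) {            // "interrupt handler" for the timer signal
    (void)sig;
    ticks++;
    alarm(1);                       // re-arm the timer for the next tick
}

int main(void) {
    signal(SIGALRM, on_alarm);      // register the handler (like installing an ISR)
    alarm(1);                       // first timer "interrupt" after 1 second
    while (ticks < 3)
        pause();                    // main flow suspends until a signal arrives
    printf("handled %d timer signals\n", (int)ticks);
    return 0;
}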
Services Offered by an Operating System
Operating systems provide a range of essential services that facilitate the operation of
computer programs and manage hardware resources effectively. Here are some of the key
services offered by an operating system:
1. Process Management:
o Creation and Termination: Services to create and terminate processes.
o Scheduling: Algorithms to schedule the execution of processes, manage CPU
time allocation, and ensure efficient execution.
o Synchronization: Mechanisms to synchronize the execution of concurrent
processes and avoid race conditions.
o Inter-Process Communication: Methods to allow processes to communicate
and share data.
2. Memory Management:
o Allocation and Deallocation: Services to allocate memory to processes when
they start and deallocate memory when they terminate.
o Paging and Segmentation: Techniques to manage memory efficiently,
allowing for virtual memory and the abstraction of physical memory.
o Memory Protection: Ensuring that processes do not interfere with each
other’s memory space, providing isolation and security.
3. File System Management:
o File Operations: Services to create, read, write, and delete files.
o Directory Management: Services to manage directories (folders) and organize
files hierarchically.
o Access Control: Mechanisms to manage permissions and ensure security by
controlling who can access or modify files.
4. Device Management:
o Device Drivers: Software that allows the OS to interact with hardware devices
(printers, disk drives, etc.).
o I/O Operations: Services to perform input/output operations, manage
buffers, and handle device interrupts.
o Device Allocation: Services to manage and allocate access to various
hardware devices among processes.
5. User Interface:
o Command-Line Interface (CLI): A textual interface that allows users
to interact with the operating system through commands.
o Graphical User Interface (GUI): A visual interface that allows users to interact
with the system using graphical elements like windows, icons, and menus.
6. Security and Access Control:
o Authentication: Services to verify user identities before granting access to
system resources.
o Authorization: Mechanisms to control user permissions and access rights to
files, processes, and devices.
o Encryption: Services to secure data transmission and storage through
encryption methods.
7. Networking:
o Communication Protocols: Services to support network communication using
various protocols (TCP/IP, UDP, etc.).
o Network Management: Services to manage network resources, connections,
and communication between processes running on different systems.
Q8. Describe the functions of a dispatcher. Illustrate multilevel queue scheduling
approach.
The dispatcher is a component of the operating system responsible for managing the
execution of processes. It is the module that handles context switching between processes,
transferring control from one process to another. The dispatcher plays a crucial role in the
overall scheduling process and has several key functions:
1. Context Switching:
o The dispatcher saves the state (context) of the currently running process and
loads the state of the next process to be executed. This involves saving
registers, program counters, and other necessary information.
o This context switch allows the CPU to switch from one process to another
efficiently.
2. Process State Management:
o The dispatcher maintains the status of each process (e.g., running, ready,
waiting) and ensures that processes transition between these states correctly
based on scheduling policies.
3. Control Transfer:
o The dispatcher transfers control from the scheduler to the selected process. It
invokes the appropriate routine to start the execution of the selected process.
4. Scheduling Policy Enforcement:
o The dispatcher implements the scheduling policies defined by the operating
system. It ensures that processes are executed based on their priority and the
scheduling algorithm in use (e.g., First-Come, First-Served, Round Robin).
5. Handling Interrupts:
o The dispatcher also responds to interrupts from hardware or software, which
may require switching to a different process or performing certain actions
before returning to the previous process.
6. Performance Monitoring:
o The dispatcher may track performance metrics related to process scheduling,
such as CPU utilization and turnaround time, which helps in optimizing the
scheduling algorithms.
Multilevel Queue Scheduling Approach
Multilevel Queue Scheduling is a scheduling method that partitions the ready queue into
several separate queues, each serving different types of processes based on specific criteria
(such as priority, process type, or resource requirements). Each queue can have its own
scheduling algorithm.
Structure of Multilevel Queue Scheduling
1. Multiple Queues:
o Each queue is designed for a specific type of process (e.g., foreground,
background, interactive, batch).
o Queues are often prioritized, meaning that processes in higher-priority
queues are given preference over those in lower-priority queues.
2. Scheduling Algorithms:
o Different scheduling algorithms can be used for different queues. For
example:
 Round Robin for interactive processes.
 First-Come, First-Served (FCFS) for batch processes.
 Shortest Job First (SJF) for high-priority tasks.
3. Fixed Queue Allocation:
o Processes are assigned to queues based on their characteristics and
requirements at the time of their creation.
o Processes typically remain in the same queue throughout their lifetime.
Advantages of Multilevel Queue Scheduling
1. Flexibility: Allows the operating system to optimize resource allocation for different
types of processes.
2. Efficiency: Different algorithms for different queues can maximize CPU
utilization and minimize waiting times.
3. Priority Handling: Important processes can be prioritized easily, ensuring
they receive the resources they need promptly.
Disadvantages of Multilevel Queue Scheduling
1. Starvation: Lower-priority queues may experience starvation if higher-priority
queues are constantly filled with new processes.
2. Complexity: Managing multiple queues and scheduling policies can add complexity
to the operating system.
3. Fixed Queue Allocation: Once assigned to a queue, processes generally cannot
change their priority dynamically, which may lead to inefficient resource utilization
in some scenarios.
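The selection rule of multilevel queue scheduling can be sketched in a few lines of C. This is a simplified, hypothetical simulation (the queue levels and job names are made up): processes in queue 0 (interactive) always run before processes in queue 1 (batch), and each queue is served first-come, first-served.

#include <stdio.h>

// A process with a fixed queue level: 0 = interactive (high), 1 = batch (low).
typedef struct { const char *name; int level; int remaining; } proc_t;

// Pick the next process: always scan the higher-priority queue first.
int pick_next(proc_t *p, int n) {
    for (int level = 0; level <= 1; level++)
        for (int i = 0; i < n; i++)
            if (p[i].level == level && p[i].remaining > 0)
                return i;
    return -1;                      // nothing left to run
}

int main(void) {
    proc_t procs[] = {
        {"editor", 0, 2},           // interactive jobs, served first
        {"backup", 1, 3},           // batch job, runs only when queue 0 is empty
        {"shell",  0, 1},
    };
    int n = sizeof procs / sizeof procs[0], i;
    while ((i = pick_next(procs, n)) != -1) {
        printf("running %s (queue %d)\n", procs[i].name, procs[i].level);
        procs[i].remaining--;       // one time unit of CPU
    }
    return 0;
}

Note how the backup job runs only after both interactive jobs finish; with a constant stream of interactive jobs it would starve, which is the starvation problem listed above.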
Q9. Explain the reader-writers problem. Write algorithm (code snippet) to solve
the readers-writers problem using Semaphore.
The Readers-Writers Problem is a classic synchronization problem in concurrent
programming that deals with the management of access to a shared resource (like a
database) where multiple processes can read from the resource concurrently, but writes must
be exclusive. The problem involves two types of processes:
1. Readers: Processes that only read the shared resource.
2. Writers: Processes that modify or write to the shared resource.
Problem Description
The main challenges of the readers-writers problem are:
 Concurrent Reads: Multiple readers can access the resource simultaneously without
conflict.
 Exclusive Writes: Only one writer can access the resource at a time, and no readers
should be accessing the resource while a writer is writing.
 Fairness: The system should ensure that readers and writers get a fair chance to
access the resource. This means that neither readers nor writers should starve; both
should eventually get access to the resource.
Variants
There are two main variants of the readers-writers problem:
1. First Readers-Writers Problem: Prioritizes readers over writers, allowing multiple
readers to read concurrently as long as no writers are waiting. This can lead to writer
starvation if there are many readers.
2. Second Readers-Writers Problem: Prioritizes writers over readers, ensuring that if
a writer is waiting, no new readers can start reading. This prevents writer starvation
but can lead to reader starvation.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t mutex;        // Semaphore for mutual exclusion on readCount
sem_t writeBlock;   // Semaphore to block writers
int readCount = 0;  // Number of active readers

void *reader(void *id) {
    int readerId = *(int *)id;

    // Reader wants to read
    sem_wait(&mutex);              // Start critical section
    readCount++;
    if (readCount == 1) {
        sem_wait(&writeBlock);     // First reader locks the resource for writers
    }
    sem_post(&mutex);              // End critical section

    // Reading section
    printf("Reader %d is reading\n", readerId);
    sleep(1);                      // Simulating reading time

    // Reader is done
    sem_wait(&mutex);              // Start critical section
    readCount--;
    if (readCount == 0) {
        sem_post(&writeBlock);     // Last reader unlocks the resource for writers
    }
    sem_post(&mutex);              // End critical section

    return NULL;
}

void *writer(void *id) {
    int writerId = *(int *)id;

    // Writer wants to write
    sem_wait(&writeBlock);         // Request exclusive access

    // Writing section
    printf("Writer %d is writing\n", writerId);
    sleep(2);                      // Simulating writing time

    sem_post(&writeBlock);         // Release exclusive access

    return NULL;
}

int main() {
    int numReaders = 5;
    int numWriters = 2;
    pthread_t readers[numReaders], writers[numWriters];
    int readerIds[numReaders], writerIds[numWriters];

    // Initialize semaphores
    sem_init(&mutex, 0, 1);
    sem_init(&writeBlock, 0, 1);

    // Create reader threads
    for (int i = 0; i < numReaders; i++) {
        readerIds[i] = i + 1;
        pthread_create(&readers[i], NULL, reader, &readerIds[i]);
    }

    // Create writer threads
    for (int i = 0; i < numWriters; i++) {
        writerIds[i] = i + 1;
        pthread_create(&writers[i], NULL, writer, &writerIds[i]);
    }

    // Wait for all threads to finish
    for (int i = 0; i < numReaders; i++) {
        pthread_join(readers[i], NULL);
    }
    for (int i = 0; i < numWriters; i++) {
        pthread_join(writers[i], NULL);
    }

    // Destroy semaphores
    sem_destroy(&mutex);
    sem_destroy(&writeBlock);

    return 0;
}

Explanation of the Code


 Semaphores:
o mutex: A semaphore used to protect the readCount variable to ensure
mutual exclusion when updating the count of active readers.
o writeBlock: A semaphore that prevents writers from accessing the resource if
readers are currently reading.
 Reader Function:
o Each reader first acquires the mutex semaphore to safely increment the
readCount.
o If it’s the first reader (i.e., readCount becomes 1), it blocks writers by
acquiring the writeBlock semaphore.
o After reading, it decrements the readCount, and if it’s the last reader (i.e.,
readCount becomes 0), it releases the writeBlock semaphore.
 Writer Function:
o Each writer acquires the writeBlock semaphore before writing. This ensures
exclusive access to the resource.
o After writing, it releases the writeBlock semaphore.
 Main Function:
o Initializes the semaphores and creates multiple reader and writer threads.
o Joins the threads to ensure the main function waits for their completion
before terminating.
Q10. What are the conditions to be fulfilled by a solution of critical-section
problem? Explain the TestAndSet() and Swap() approaches to solve the critical
section problem.
To ensure correct synchronization among processes in a system, any solution to the
critical-section problem must satisfy the following conditions:
1. Mutual Exclusion:
o Only one process can be in its critical section at a time. If one process is
executing in its critical section, no other process can enter its critical section.
2. Progress:
o If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not in their
remainder sections can participate in the decision on which process will enter
its critical section next. This means that the selection of the next process
cannot be postponed indefinitely.
3. Bounded Waiting:
o There must be a limit on the number of times other processes can enter their
critical sections after a process has made a request to enter its critical section
and before that request is granted. This ensures that every process will
eventually get a chance to enter its critical section, preventing starvation.
The TestAndSet() instruction is a hardware-level atomic operation that is commonly
used to implement mutual exclusion in a concurrent environment. TestAndSet(Lock) is a
function that takes a lock variable as an argument. It reads the current value of the lock,
sets the lock to true (indicating that the critical section is occupied), and returns the
original value of the lock.
In this approach, when a process wants to enter its critical section, it repeatedly calls
TestAndSet() on the lock. If the lock is false, it sets the lock to true and enters the critical
section. If the lock is true, the process keeps spinning (busy waiting) until it can acquire
the lock.
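A minimal sketch of this spin-lock pattern, assuming C11 atomics where atomic_flag_test_and_set behaves like the hardware TestAndSet instruction (it atomically sets the flag and returns its old value):

#include <stdatomic.h>

atomic_flag tas_lock = ATOMIC_FLAG_INIT;       // clear = critical section is free

void acquire(void) {
    while (atomic_flag_test_and_set(&tas_lock))
        ;                                       // old value was set: busy wait
}

void release(void) {
    atomic_flag_clear(&tas_lock);               // open the lock again
}

// Usage in each process/thread:
//     acquire();
//     ... critical section ...
//     release();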
The Swap() instruction is another atomic operation that can be used to solve the critical
section problem. It swaps the values of two variables atomically, ensuring that the
operations are completed without interruption.
 Functionality:
o Swap(A, B) exchanges the values of variables A and B atomically.
In this method, a process attempts to acquire the lock by calling Swap(), passing the lock
variable and a temporary variable. If the lock was previously false, it is set to true,
allowing the process to enter the critical section. If the lock was true, the process keeps
trying until it can acquire the lock.
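A comparable sketch for the Swap approach, assuming C11's atomic_exchange as the atomic swap operation; each process keeps a local key that it swaps with the shared lock until the value it gets back is false:

#include <stdatomic.h>
#include <stdbool.h>

_Atomic bool swap_lock = false;                 // false = critical section is free

void swap_acquire(void) {
    bool key = true;
    while (atomic_exchange(&swap_lock, key))    // swap key into the lock, get the old value
        ;                                       // old value was true: keep trying
}

void swap_release(void) {
    atomic_store(&swap_lock, false);
}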
Q11. Explain the function of the ready queue?
The ready queue is a critical component of an operating system's process management
system. It plays an essential role in managing processes that are in the ready state, which
means they are waiting to be assigned to a CPU for execution. Here are the primary
functions of the ready queue:
Functions of the Ready Queue
1. Process Management:
o The ready queue holds all processes that are ready to run but are waiting for
CPU time. These processes have completed their execution in the waiting
state and are now prepared to execute as soon as they are assigned CPU
resources.
2. Scheduling:
o The ready queue is managed by the operating system's scheduler. The
scheduler determines the order in which processes will access the CPU based
on the scheduling algorithm in use (e.g., First-Come, First-Served, Round
Robin, Shortest Job First, Priority Scheduling).
o The scheduler selects the next process to be executed from the ready queue,
facilitating fair and efficient CPU utilization.
3. State Transition:
o The ready queue serves as a transition point for processes moving from the
waiting state (where they were blocked for some resource) to the running
state. When a process is selected from the ready queue, it transitions to the
running state, allowing it to utilize the CPU.
4. Process Prioritization:
o The ready queue can be structured to prioritize certain processes over
others. For example, higher-priority processes can be placed at the front of
the queue, allowing them to be executed first. This is important for real-time
systems or applications that require prompt responses.
5. Concurrency Management:
o The ready queue enables the operating system to manage multiple processes
concurrently. While one process is running on the CPU, other processes can
be in the ready queue, waiting for their turn to execute, ensuring that the CPU
remains busy.
6. Resource Allocation:
o The ready queue also helps in tracking the status of processes. When a
process is moved to the ready queue, it indicates that it has all the necessary
resources to execute (e.g., memory, I/O availability).
7. Performance Monitoring:
o The operating system can monitor the length of the ready queue to assess
system performance. A long ready queue may indicate that the system is
overloaded, while a short queue might suggest that there are not enough
processes ready to run.
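A simplified sketch of a ready queue as a circular FIFO of process control blocks; the structure and field names here are made up for illustration, and dequeuing in arrival order corresponds to First-Come, First-Served dispatch.

#include <stdio.h>

#define MAX 8

typedef struct { int pid; } pcb_t;              // minimal process control block

typedef struct {                                // circular FIFO ready queue
    pcb_t slots[MAX];
    int head, tail, count;
} ready_queue_t;

void enqueue(ready_queue_t *q, pcb_t p) {       // process becomes ready to run
    q->slots[q->tail] = p;
    q->tail = (q->tail + 1) % MAX;
    q->count++;
}

pcb_t dequeue(ready_queue_t *q) {               // scheduler picks the next process
    pcb_t p = q->slots[q->head];
    q->head = (q->head + 1) % MAX;
    q->count--;
    return p;
}

int main(void) {
    ready_queue_t rq = {0};
    for (int pid = 1; pid <= 3; pid++)
        enqueue(&rq, (pcb_t){pid});
    while (rq.count > 0)
        printf("dispatching pid %d\n", dequeue(&rq).pid);   // FCFS order
    return 0;
}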
Q12. What is multithreading?
Multithreading is a programming and execution model that allows multiple threads
(smaller units of a process) to exist within a single process. These threads can run
concurrently, sharing the same memory space and resources, which makes them more
lightweight than separate processes. Multithreading can improve the efficiency and
performance of applications, especially those that require parallel processing or handle
multiple tasks simultaneously.
Key Concepts of Multithreading
1. Thread:
o A thread is the smallest unit of processing that can be scheduled by an
operating system. It consists of a thread ID, program counter, register set,
and stack. Threads within the same process share the same code, data, and
resources, allowing for efficient communication and data sharing.
2. Concurrency:
o Multithreading enables multiple threads to execute concurrently, allowing for
improved application responsiveness and resource utilization. For example, a
web server can handle multiple client requests simultaneously using different
threads.
3. Parallelism:
o On multicore or multiprocessor systems, threads can run in parallel on
different cores, leading to significant performance gains. This means that
multiple threads can be executed at the same time, effectively utilizing the
available CPU resources.
4. Shared Resources:
o Threads within the same process share the same memory space and
resources, such as open files and network connections. This allows for
efficient communication between threads but also introduces challenges
related to synchronization and data consistency.
5. Synchronization:
o Since threads share resources, multithreading can lead to issues like race
conditions, deadlocks, and inconsistent data. To prevent these problems,
synchronization mechanisms (e.g., mutexes, semaphores, condition
variables) are used to control access to shared resources.
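A minimal POSIX-threads sketch (illustrative only): one process creates four threads that share its address space, and the main thread joins them, i.e. waits for their completion.

#include <pthread.h>
#include <stdio.h>

void *task(void *arg) {
    // Each thread shares the process's address space but has its own stack.
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, task, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);      // wait for all threads to finish
    return 0;
}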
Q13. What are the basic requirements of any solution to the critical sections problem?
The critical section problem involves managing access to shared resources among
multiple processes in a concurrent system. To ensure correct synchronization and avoid
issues like race conditions, starvation, and deadlocks, any solution to the critical section
problem must satisfy the following basic requirements:
1. Mutual Exclusion
 Definition: Only one process can be in its critical section at a time. If one process is
executing in its critical section, no other process should be allowed to enter its
critical section until the first process exits.
 Importance: This ensures that shared resources are not simultaneously modified by
multiple processes, which can lead to inconsistent or corrupt data.
2. Progress
 Definition: If no process is executing in its critical section and there are processes
that wish to enter their critical sections, the selection of the next process that will
enter its critical section cannot be postponed indefinitely. It should only involve
those processes that are not in their remainder sections.
 Importance: This prevents the system from getting stuck, ensuring that processes
waiting to enter their critical sections will eventually be able to do so.
3. Bounded Waiting
 Definition: There must be a limit (or bound) on the number of times other processes
can enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
 Importance: This ensures that a process will not wait indefinitely to enter its critical
section, preventing starvation. Each process should eventually get a chance to
execute its critical section after requesting access.
Q14 Differentiate between multiprogramming and multiprocessing.
Q15. Differentiate between kernel level and user level threads.
Q16. What is meant by context switch?
A context switch is the process of storing the state of a currently running process or thread
so that it can be resumed later, and then loading the state of a different process or thread to
execute. This mechanism is fundamental in multitasking operating systems, where multiple
processes or threads share the CPU.
Key Aspects of Context Switching
1. State Saving:
o When a context switch occurs, the operating system saves the context (state)
of the currently running process or thread. This includes various information,
such as:
 Program counter (PC): The address of the next instruction to execute.
 CPU registers: Values stored in CPU registers that need to be
preserved.
 Memory management information: Details about the memory
allocation for the process.
 Process control block (PCB): A data structure that holds the
information necessary for the scheduling and management of a
process.
2. State Restoration:

o After saving the context of the currently running process, the operating
system loads the context of the next scheduled process or thread. This
involves restoring:
 The program counter to resume execution from where it left off.
 The CPU registers with the saved values of the new process or thread.
 Any other necessary state information.
3. Overhead:
o Context switching introduces overhead because it requires time and
resources to save and restore the state of processes or threads. This
overhead can impact the overall performance of the system, especially in
environments with frequent context switching.
4. Frequency of Context Switches:
o The frequency of context switches can affect system performance. While
frequent context switching can lead to increased responsiveness and better
multitasking, too many switches can cause inefficiencies and reduce CPU
utilization.
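The state saved and restored on a context switch can be pictured as a process control block. The following C struct is a hypothetical simplification; real PCBs are architecture- and OS-specific.

typedef enum { READY, RUNNING, WAITING } proc_state_t;

typedef struct pcb {
    int            pid;               // process identifier
    proc_state_t   state;             // scheduling state
    unsigned long  program_counter;   // address of the next instruction
    unsigned long  registers[16];     // saved general-purpose registers
    unsigned long  stack_pointer;     // top of the process's stack
    struct pcb    *next;              // link in the scheduler's queues
} pcb_t;

// On a context switch the OS copies the CPU's registers into the outgoing
// process's pcb_t and reloads the CPU from the incoming process's pcb_t.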
Q17. What is the use of fork and exec system calls?
The fork and exec system calls are fundamental in Unix-like operating systems for creating
and managing processes. They are often used together to enable the execution of new
programs. Here’s a detailed explanation of each and their uses:
fork() System Call
Definition:
 The fork system call creates a new process by duplicating the existing process (the
parent process). The newly created process is referred to as the child process.
Key Characteristics:
1. Process Duplication:
o When fork is called, the operating system creates a new process that is an
exact duplicate of the parent process, with its own unique process ID (PID).
The child process inherits the parent's memory space, file descriptors, and
execution context.
2. Return Values:
o The fork call returns a value that allows the program to determine whether
it's running in the parent or child process:
 In the parent process, fork returns the PID of the child.

 In the child process, fork returns 0.


 If fork fails (e.g., due to insufficient system resources), it returns -1 in
the parent process.
3. Independent Execution:
o After the fork, the parent and child processes execute independently, each
with its own execution flow. Changes made to the memory space of one
process do not affect the other.
Use Cases:
 Creating New Processes: fork is used to create new processes for executing tasks
concurrently, such as handling client requests in a server or performing background
operations.
exec() System Call
Definition:
 The exec system call is used to replace the current process's memory space with a
new program. It loads a specified program into the calling process's memory and
executes it.
Key Characteristics:
1. Replaces Current Process:
o Unlike fork, which creates a new process, exec replaces the current process's
memory and execution context with a new program. This means that the
current process will cease to exist after the exec call completes successfully.
2. Return Value:
o If exec is successful, it does not return to the calling function. If it fails (e.g., if
the specified program cannot be found), it returns -1, and the current process
continues executing.
3. Various Variants:
o There are several variants of the exec system call (e.g., execl, execv, execle,
execvp, etc.) that allow for different ways to specify the program to be
executed and its arguments.
Use Cases:
 Executing New Programs: After a fork, the child process often uses exec to run a
different program. This is common in shell implementations, where the shell forks a
new process for each command entered by the user, then calls exec to run that
command.
Combined Usage of fork and exec

The combination of fork and exec is common in many applications, particularly in Unix-like
operating systems. The typical pattern is as follows:
1. A process calls fork to create a new child process.
2. The child process calls exec to replace its memory with a new program.
3. The parent process can continue executing its code, potentially waiting for the child
process to finish (using wait or similar system calls).
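This fork-then-exec pattern can be shown in a short C program; the command run here (ls -l) is just an example.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                       // duplicate the calling process
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        // Child: replace its image with a new program.
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                     // reached only if exec fails
        exit(1);
    } else {
        // Parent: wait for the child to finish.
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited\n", (int)pid);
    }
    return 0;
}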
Q18. Differentiate between pre-emptive and non- pre-emptive scheduling.
Q19. Differentiate between long term and short term scheduler.

Q20. How do Distributed operating systems differ from Multiprogrammed and Time-shared operating systems? Give key features of each.

Key Features of Distributed OS:


 Transparency: Provides various levels of transparency (location, migration,
replication, and concurrency) so users do not need to be aware of the physical
distribution of resources.
 Resource Sharing: Allows processes to access resources (like files, printers,
and processors) on different machines seamlessly.
 Scalability: Can easily add more machines and resources without significant
reconfiguration.
 Fault Tolerance: Designed to handle failures of individual machines without
affecting the overall system performance or user experience.
 Concurrency: Supports multiple users and processes accessing shared resources
simultaneously.
 Communication: Provides mechanisms for inter-process communication (IPC)
across the network.
Key Features of Multiprogrammed OS:
 Overlapping I/O and CPU: Allows one process to execute while others are
waiting for I/O operations, leading to better CPU utilization.
 Memory Management: Uses techniques like paging and segmentation to manage
multiple processes in memory effectively.
 Batch Processing: Processes are often executed in batches without user interaction,
which can improve efficiency for certain workloads.
 Context Switching: Frequent context switching among processes to ensure that the
CPU is utilized effectively.
 Static Resource Allocation: Resources are allocated at the beginning of the process,
which can lead to fragmentation over time.
Key Features of Time-shared OS:
 Multitasking: Supports multiple processes running simultaneously, giving the
illusion of parallel execution through rapid context switching.
 User Interaction: Designed for interactive user applications, enabling multiple users
to interact with the system at the same time.
 Time Slicing: Allocates CPU time in small slices (quantum) to each process,
allowing for responsive user experience.
 Scheduling Algorithms: Uses sophisticated scheduling algorithms (like Round
Robin, Shortest Job First) to ensure fair CPU time distribution among users.
 Dynamic Resource Allocation: Resources can be allocated and deallocated
dynamically based on user requests and system load.
Q21. Explain the following (i) Multitasking System (ii) Real time system.
(i) Multitasking System
Definition: A multitasking system allows multiple processes or tasks to run concurrently on
a single processor. It can be implemented using techniques like time-sharing or time-slicing,
enabling users to perform multiple operations simultaneously.
Key Features:
1. Process Management: Supports the execution of multiple processes by switching
between them, giving the illusion that they are running simultaneously.
2. Context Switching: The operating system frequently switches the CPU among
different processes, saving and restoring their states as needed.
3. Time Slicing: Divides CPU time into small slices (quantum), allowing each process
a turn to execute within a fixed time period, thus enhancing responsiveness.
4. Shared Resources: Processes share system resources, such as memory, I/O devices,
and the CPU.
5. User Interaction: Designed to handle interactive user applications, allowing users to
run multiple applications at the same time (e.g., browsing the web while editing a
document).
6. Scheduling: Utilizes scheduling algorithms (like Round Robin, Shortest Job First,
and Priority Scheduling) to allocate CPU time efficiently among processes.
Use Cases:
 Desktop operating systems (e.g., Windows, Linux, macOS) where users can run
multiple applications (like a web browser, text editor, and media player)
simultaneously.
 Server environments where multiple processes handle concurrent requests from
users.
(ii) Real-Time System
Definition: A real-time system is an operating system designed to serve real-time
applications that require a strict and predictable timing response. These systems are built to
respond to inputs or events within a guaranteed time frame.
Key Features:
1. Deterministic Behavior: The system guarantees specific response times, meaning
that operations are completed within a defined time limit.
2. Priority Scheduling: Processes are often prioritized based on their time constraints,
ensuring that critical tasks are executed first.
3. Concurrency: Supports multiple processes executing concurrently while meeting
timing constraints.
4. Reliability: Highly reliable and fault-tolerant, as many real-time applications are
mission-critical.
5. Task Management: Real-time tasks can be periodic (executed at regular intervals) or
aperiodic (executed in response to specific events).
6. Resource Management: Careful management of resources is essential to ensure that
critical tasks receive the required CPU time and other resources.
Types:
 Hard Real-Time Systems: Require that tasks be completed within strict time
limits. Missing a deadline may lead to catastrophic failures (e.g., flight control
systems, medical devices).
 Soft Real-Time Systems: Allow for some flexibility in meeting deadlines. While
missing a deadline can reduce system performance, it does not lead to failure (e.g.,
video streaming or online gaming).
Use Cases:
 Embedded systems in automotive safety (e.g., anti-lock braking systems).
 Industrial automation systems (e.g., robotic assembly lines).
 Telecommunications systems (e.g., managing voice over IP calls).
Q22. What is Semaphore? Explain busy waiting semaphores.
A semaphore is a synchronization primitive used in concurrent programming to control
access to a common resource by multiple processes and to avoid critical section problems. It
helps manage resource allocation and ensure that processes execute in a coordinated manner,
particularly in a multiprogramming environment.
Types of Semaphores
1. Counting Semaphore:
o This semaphore can take on any non-negative integer value. It is useful when
multiple instances of a resource are available.
o The value of a counting semaphore represents the number of available
resources.
2. Binary Semaphore:
o Also known as a mutex (mutual exclusion), it can take on only two values: 0
and 1.
o It is used to ensure that only one process can access a critical section at a
time.
Busy Waiting Semaphores
Busy waiting, also known as spinlock, occurs when a process continuously checks a
condition (like the value of a semaphore) in a loop until it is able to proceed. In the context of
semaphores, a busy waiting semaphore is one where processes that cannot proceed (due to the
semaphore value being zero) enter a loop where they repeatedly check the semaphore.
Characteristics of Busy Waiting Semaphores:
 Polling: The process actively polls the semaphore in a tight loop, consuming
CPU cycles while waiting for the semaphore to become available.
 Inefficiency: Busy waiting can lead to inefficient CPU usage, as the waiting
process does not perform any useful work while it waits. This can lead to
performance degradation, especially in systems with many processes.
 Simplicity: Implementing busy waiting semaphores is straightforward, as they can be
implemented with simple loops. However, this simplicity comes at the cost of
resource efficiency.
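A busy-waiting counting semaphore can be sketched with C11 atomics; this is an illustrative implementation (not a standard API): wait() spins until it can atomically decrement a positive count, and signal() simply increments it.

#include <stdatomic.h>

typedef struct {
    atomic_int value;                  // number of available resources
} bw_sem_t;

void bw_wait(bw_sem_t *s) {            // the P() / down() operation
    for (;;) {
        int v = atomic_load(&s->value);
        if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;                    // took one resource
        // Otherwise keep looping: this is the busy waiting (spinning).
    }
}

void bw_signal(bw_sem_t *s) {          // the V() / up() operation
    atomic_fetch_add(&s->value, 1);
}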
Q23 . Explain the differences with diagram between multilevel queue and multilevel
feedback queue scheduling.

Multilevel Queue (diagram)
Multilevel Feedback Queue (diagram)
Q24. How semaphores help in process synchronization? What is the difference between
binary and counting semaphores?

Semaphores are synchronization tools used in concurrent programming to manage access to shared
resources and prevent race conditions. They help in process synchronization by coordinating the
execution of processes or threads, ensuring that multiple processes do not access critical sections
of code simultaneously in a way that could cause conflicts or inconsistencies.
How Semaphores Help in Process Synchronization:
1. Controlling Access to Shared Resources: Semaphores use a counter to track the
number of available resources or permits for accessing a shared resource. When a
process or thread wants to access a resource, it performs a wait() operation (also
known as P() or down()), which decreases the semaphore's count. If the count is
greater than zero, access is granted; otherwise, the process is blocked until the count
becomes positive (indicating resource availability).
2. Signaling: After a process has finished using a resource, it performs a signal()
operation (also called V() or up()), which increases the semaphore's count,
potentially unblocking a waiting process or thread. This signaling ensures that other
waiting processes can proceed when resources become available.

3. Preventing Race Conditions: By blocking or allowing processes based on resource


availability, semaphores ensure that critical sections (parts of code that access
shared resources) are executed by only one process at a time, thus preventing race
conditions.

Difference between Binary and Counting Semaphores:


1. Binary Semaphore:
o Definition: A binary semaphore can take only two values: 0 and 1. It is often
used as a lock (also called a mutex) to ensure mutual exclusion.
o Use Case: Suitable for situations where only one process should have access
to a critical section at any given time.
o Behavior: When a process performs a wait() operation on a binary
semaphore with value 1, the value is decremented to 0, and the critical
section is locked. A signal() operation sets the value back to 1, unlocking the
section and allowing another process to access it.
o Example: Binary semaphores are commonly used to implement mutual
exclusion (mutexes) to control access to a critical section.
2. Counting Semaphore:
o Definition: A counting semaphore can take a non-negative integer value and
is used to control access to a resource with multiple instances.
o Use Case: Suitable for scenarios where a limited number of identical
resources are available (e.g., a pool of printers).
o Behavior: The semaphore’s value is initially set to the number of available
resources. A wait() operation decreases the counter when a process accesses a
resource, and a signal() operation increases the counter when a resource is
released. Processes are blocked when the counter reaches zero.
o Example: Counting semaphores are used to manage access to a fixed number
of resources like threads in a thread pool or a limited number of database
connections.
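The difference can be seen with POSIX semaphores. In the sketch below (an assumed example: two "printers" shared by five jobs), a counting semaphore initialized to 2 lets two jobs proceed at once; initializing it to 1 instead would make it behave as a binary semaphore (mutex).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t pool;                        // counting semaphore: number of free printers

void *job(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);               // wait(): take one resource, block if none left
    printf("job %ld is using a printer\n", id);
    sleep(1);                      // simulate work
    sem_post(&pool);               // signal(): return the resource
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 2);         // two identical resources available
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}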
Q25. Write three differences between a process and a thread. What do you mean by
user and kernel level threads? Write any three differences among them.
Three Differences between a Process and a Thread:
1. Memory Space:
o Process: Each process has its own independent memory space, including
code, data, and stack segments. It has its own address space.

o Thread: Threads within the same process share the same memory space,
including code, data, and other resources. However, each thread has its own
stack and registers.
2. Communication:
o Process: Communication between processes is more complex and slower
because they have separate memory spaces, requiring mechanisms like inter-
process communication (IPC) such as pipes, sockets, or shared memory.
o Thread: Communication between threads is faster because they share the
same memory space, allowing them to directly read and write shared data.
3. Overhead:
o Process: Creating and managing processes is more resource-intensive and
slower due to the need for separate memory spaces and system resources.
o Thread: Threads are lighter and have less overhead since they share the same
memory space. Creating and switching between threads is generally faster than
between processes.
User-Level Threads vs. Kernel-Level Threads

Q26. Design a solution to a critical section problem using one semaphore and check it
for mutual exclusion, progress requirement and bounded waiting conditions.
To design a solution for the critical section problem using a single semaphore, we can use a
binary semaphore (mutex). This semaphore will control access to the critical section,
ensuring that only one process or thread can access it at a time. The semaphore will be
initialized to 1, indicating that the critical section is available.
Semaphore-Based Solution for the Critical Section Problem:
1. Initialization:
o Let mutex be a binary semaphore.

o Initialize mutex to 1: mutex = 1.


2. Process Structure: Each process Pi (where i is the process index) follows
this structure:
// Entry Section
wait(mutex);
// Critical Section
// Code that accesses shared resources goes here.
// Exit Section
signal(mutex);
// Remainder Section
Entry Section: Each process first calls wait(mutex). This operation decreases the mutex
value by 1.
 If mutex is 1 (indicating that the critical section is free), the process enters the critical
section, and mutex is set to 0.
 If mutex is 0 (indicating that another process is already in the critical section), the
process is blocked and waits until mutex becomes 1 again.
Critical Section: The process executes its critical section code.
Exit Section: After finishing its critical section code, the process calls signal(mutex),
which increases mutex to 1, allowing another process to enter the critical section.
Remainder Section: The code that is not part of the critical section and doesn't require
mutual exclusion.
Let's verify this solution against the three conditions required for a critical section problem:
mutual exclusion, progress, and bounded waiting.

1. Mutual Exclusion:
o Only one process can enter the critical section at a time because the
wait(mutex) operation decreases the mutex value to 0, effectively locking the
critical section.
o When a process enters the critical section, other processes calling
wait(mutex) will be blocked as long as mutex remains 0.
o After a process leaves the critical section and calls signal(mutex), the mutex
value is set back to 1, allowing another waiting process to enter.

o Thus, mutual exclusion is ensured since no two processes can access the
critical section simultaneously.

2. Progress:
o If no process is in the critical section, and some processes wish to enter, one
of those processes will be allowed to enter.
o This is ensured because, once a process finishes and calls signal(mutex), any
waiting process can proceed with wait(mutex).
o The decision of which process enters next depends on the scheduling
mechanism, but as long as a process exists to enter the critical section, it will
make progress.

3. Bounded Waiting:
o Bounded waiting ensures that there is a limit on how many times other
processes can enter the critical section before a waiting process is granted
access.
o Since the wait(mutex) operation blocks processes in a queue, every process
that is waiting for the critical section will eventually get a turn once the
currently running process exits and calls signal(mutex).
o This prevents indefinite postponement and ensures that every process will
have bounded waiting.
Q27. Write a solution to readers/writers problem using semaphore and justify its working.
 Readers: Multiple readers can read the shared resource concurrently without
causing any issues.
 Writers: Only one writer can access the shared resource at a time, and no readers
should be reading while a writer is writing.
 Requirement: A writer should have exclusive access to the shared resource when
writing, while readers can read simultaneously without blocking each other.
Solution Using Semaphores:
We use two semaphores and a shared counter in this solution:
1. mutex: A binary semaphore (mutex) to ensure mutual exclusion when updating the
read_count.
2. wrt: A binary semaphore that allows writers to have exclusive access to the shared
resource.
3. read_count: An integer variable that keeps track of the number of active readers.
Semaphores and Variables:

 wrt: Initialized to 1, indicating that the resource is available for writing.


 mutex: Initialized to 1, used to protect read_count so that readers can update it
without interference.
 read_count: Initialized to 0, represents the current number of readers accessing the
shared resource.
Reader Process:
Each reader process follows this structure:
// Reader Process
// Entry Section
wait(mutex);
read_count++;
if (read_count == 1) {
    wait(wrt);          // The first reader locks the resource against writers.
}
signal(mutex);

// Critical Section (Reading)
// Readers read from the shared resource here.

// Exit Section
wait(mutex);
read_count--;
if (read_count == 0) {
    signal(wrt);        // The last reader releases the lock.
}
signal(mutex);

// Remainder Section
Writer Process:
Each writer process follows this structure:
// Writer Process
// Entry Section
wait(wrt);            // Writer locks the resource for writing.

// Critical Section (Writing)
// Writer writes to the shared resource here.

// Exit Section
signal(wrt);          // Writer releases the lock.

// Remainder Section
Justification of the Solution:
1. Mutual Exclusion:
o wrt ensures that only one writer can write to the shared resource at a time,
and no reader can access it while a writer is writing.
o mutex ensures mutual exclusion when readers update read_count,
preventing race conditions.
2. Readers' Priority:
o Multiple readers can read the shared resource simultaneously because the
first reader locks wrt, blocking writers, while other readers increment
read_count.
o Readers release wrt only when the last reader finishes reading, allowing for
concurrency among readers but ensuring writers get exclusive access when
there are no active readers.
3. Writers' Exclusive Access:
o Writers have to wait until all readers have finished (i.e., read_count reaches
0) before they can write.
o Once a writer gains access, it blocks further readers and other writers until it
finishes writing, ensuring consistency of the shared resource.
4. Starvation:
o Readers could potentially starve a writer if new readers keep arriving before
read_count drops to 0, which is a known limitation of this version of the
solution (referred to as "readers-priority").
o To prevent writer starvation, an alternative approach called writers-priority could be implemented, in which writers are given priority as soon as they arrive. This involves slight adjustments to the control flow of semaphores, as sketched below.
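One possible sketch of that writers-priority variant, in the same wait()/signal() notation. The extra semaphore read_try (initialized to 1), the semaphore wmutex (initialized to 1) and the counter write_count (initialized to 0) are illustrative additions and are not part of the solution shown above.

// Writer Process (writers-priority variant)
wait(wmutex);
write_count++;
if (write_count == 1) {
    wait(read_try);    // The first waiting writer stops new readers from starting.
}
signal(wmutex);

wait(wrt);             // Exclusive access to the shared resource.
// Writer writes to the shared resource here.
signal(wrt);

wait(wmutex);
write_count--;
if (write_count == 0) {
    signal(read_try);  // The last writer lets readers try again.
}
signal(wmutex);

// Reader Process (writers-priority variant)
wait(read_try);        // A reader yields to any waiting writer.
wait(mutex);
read_count++;
if (read_count == 1) {
    wait(wrt);         // The first reader locks the resource.
}
signal(mutex);
signal(read_try);

// Readers read from the shared resource here.

wait(mutex);
read_count--;
if (read_count == 0) {
    signal(wrt);       // The last reader releases the resource.
}
signal(mutex);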
Q28. Mention the main features of Linux Operating System.
1. Multiuser Capability:
 Linux supports multiple users at the same time, allowing different users to access the
system resources (memory, disk space, etc.) simultaneously without interfering with
each other.
 Each user can have separate accounts, permissions, and environments.
2. Multitasking:
 Linux is designed to handle multiple tasks at the same time.
 It can run many processes concurrently, making it suitable for environments that
require running various applications or services simultaneously, such as servers.
3. Security:
 Linux is known for its strong security features, including user authentication, file
permissions, and encryption.
 It supports access control lists (ACLs) and security modules like SELinux (Security-Enhanced Linux) to enforce security policies.
 The open-source nature also means that security vulnerabilities can be identified and
patched quickly by the community.
4. Portability:
 Linux is highly portable and can run on a wide range of hardware platforms, from
powerful servers and desktop computers to smaller devices like smartphones (e.g.,
Android) and embedded systems.
 It is not hardware-specific, making it suitable for different types of machines.
5. Shell and Command-Line Interface (CLI):
 Linux provides a powerful command-line interface (CLI) through various shells
like Bash, Zsh, and more.
 The shell allows users to execute commands, automate tasks using shell scripts, and
manage the system efficiently.
 It is especially popular among system administrators and developers for its flexibility
and control.
Q29. How does many-to-many thread model differ from one-to-one model? Explain.
 One-to-One Model: Each user-level thread is mapped to exactly one kernel thread. Threads of the same process can run in parallel on multiple processors, and a blocking system call in one thread does not block the others; however, every user thread requires a kernel thread, so thread creation is relatively expensive and systems often limit the number of threads.
 Many-to-Many Model: Many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The application can create as many user threads as it needs, while the kernel schedules only the kernel threads; the thread library maps user threads onto them as they become available.
 Key Difference: The one-to-one model gives each user thread its own kernel thread (maximum concurrency, higher kernel overhead), whereas the many-to-many model decouples the two, keeping kernel overhead bounded while still allowing blocking calls and parallel execution, at the cost of a more complex thread library.
Q30. What is multilevel feedback queue scheduling?
Multilevel Feedback Queue (MLFQ) Scheduling is a sophisticated CPU scheduling
algorithm used in operating systems to manage processes with varying priority levels and
differing needs for CPU time. It is an extension of the multilevel queue scheduling algorithm
but with the added flexibility that processes can move between different priority queues based
on their behavior and execution history.
Consider an MLFQ with three queues:
 Queue 0: High priority with a time quantum of 2 units.
 Queue 1: Medium priority with a time quantum of 4 units.
 Queue 2: Low priority with a time quantum of 8 units.
 A new process, Process A, arrives and is placed in Queue 0.
 Process A runs for 2 time units but does not finish, so it is moved to Queue 1.
 Process A now runs for up to 4 time units in Queue 1. If it still does not finish, it
is moved to Queue 2.
 If another process, Process B, arrives during this time and is placed in Queue 0, it
will preempt Process A since Queue 0 has a higher priority.
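A small illustrative C sketch of the demotion rule in this example (the burst time of 10 units for Process A is an assumption, and the sketch models only a single process, without preemption by later arrivals such as Process B):

#include <stdio.h>

#define NQUEUES 3

int main(void) {
    int quantum[NQUEUES] = {2, 4, 8};   /* time quanta of Queue 0, Queue 1, Queue 2 */
    int remaining = 10;                 /* assumed total CPU burst of Process A */
    int level = 0;                      /* a new process starts in the highest-priority queue */
    int clock = 0;

    while (remaining > 0) {
        int slice = remaining < quantum[level] ? remaining : quantum[level];
        clock += slice;
        remaining -= slice;
        printf("Ran %d units in Queue %d, %d remaining (t = %d)\n",
               slice, level, remaining, clock);
        if (remaining > 0 && level < NQUEUES - 1)
            level++;                    /* did not finish within its quantum: demote */
    }
    return 0;
}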
Q31. Define job queue, ready queue and device queue that are used in process scheduling.
1. Job Queue:
 Definition: The job queue (also known as the input queue) is a queue that holds all
the processes that are waiting to enter the system. These processes are usually
stored on the disk and are not yet loaded into the main memory (RAM).
 Purpose: When a new process is created (e.g., a user starts a program or a batch job
is initiated), it is placed in the job queue. The job queue keeps track of all processes
that need to be brought into memory for execution.
 Example: A batch processing system that queues up multiple jobs to be executed
later will place these jobs in the job queue until the system is ready to allocate
resources to them.
2. Ready Queue:
 Definition: The ready queue contains all the processes that are loaded into the
main memory (RAM) and are ready to execute but are currently waiting for the
CPU to become available.
 Purpose: When a process is brought into memory and has all the resources it needs
(except the CPU), it is placed in the ready queue. The scheduler selects processes
from the ready queue and allocates the CPU to them.
 Characteristics:
o The ready queue is typically implemented as a FIFO (First-In-First-Out)
queue or using other data structures like priority queues or linked lists,
depending on the scheduling algorithm used.
o Processes in the ready queue are waiting only for CPU time and are not
blocked for any other resources.
 Example: If three processes, A, B, and C, are all loaded into memory but the CPU
is currently running another process D, A, B, and C will be in the ready queue.
3. Device Queue:
 Definition: A device queue is associated with processes that are waiting for a
particular I/O device (like a disk, printer, or network card) to become
available.
 Purpose: When a process needs to perform an I/O operation (e.g., reading data from a
disk), it is placed in the device queue corresponding to that device. The process
remains in the device queue until the requested I/O operation is completed.
 Characteristics:
o Each I/O device has its own queue of processes waiting for that device.
o When the I/O operation is complete, the process is moved back to the ready
queue if it is ready to continue execution.
 Example: If Process E is waiting for data from the disk, it will be placed in the disk
device queue. Once the disk I/O operation completes, Process E will be moved back
to the ready queue to resume its execution.
Q32. Explain various criteria considered in CPU scheduling algorithms. Explain the following methods: (i) Shortest Job First scheduling (ii) Round Robin scheduling.
Criteria for CPU Scheduling:
1. CPU Utilization:
o Refers to keeping the CPU as busy as possible.
o The goal is to maximize CPU utilization; in practice it ranges from about 40% on a lightly loaded system to 90% or more on a heavily used one.
2. Throughput:
o Measures the number of processes that are completed in a unit of time (e.g.,
processes per second).
o Higher throughput indicates that the system is executing more processes in a
given period.
3. Turnaround Time:
o The total time taken for a process to complete, from the time of submission
to its completion.
o Includes waiting time, execution time, and I/O time.
o Lower turnaround time is preferred as it means faster processing of
individual processes.
4. Waiting Time:
o The total amount of time a process spends in the ready queue waiting for
CPU time.
o Lower waiting time is desirable as it reduces the time processes spend
waiting for execution.
5. Response Time:
o The time from the submission of a request until the first response is
produced.
o It is crucial for interactive systems where quick feedback is required,
like time-sharing systems.
o Lower response time means that users experience less delay when
interacting with the system.
(i) Shortest Job First (SJF) Scheduling:
 Definition: In SJF, the process with the smallest burst time (execution time required)
is selected for execution next.
 Types:
o Non-preemptive SJF: Once a process starts execution, it runs
until completion.
o Preemptive SJF (also known as Shortest Remaining Time First (SRTF)):
If a new process arrives with a shorter burst time than the remaining time of
the current process, the current process is preempted, and the new process is
executed.
 Example:
o Suppose there are four processes with burst times: P1 = 6ms, P2 = 2ms, P3 =
8ms, P4 = 3ms.
o According to SJF, the execution order would be: P2 (2ms) → P4 (3ms) → P1 (6ms) → P3 (8ms); this order and the resulting average waiting time are checked in the short program after this list.
 Advantages:
o Minimizes Average Waiting Time: SJF can provide optimal average
waiting time because it processes shorter tasks first, reducing the time other
processes wait.
 Disadvantages:
o Difficult to Predict Burst Time: It is not always easy to estimate the
burst time of processes accurately.
o Possibility of Starvation: Long processes may be indefinitely delayed if
short processes keep arriving, leading to starvation of longer tasks.
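A brief illustrative C check of the SJF example above (it assumes all four processes arrive at time 0, which the example leaves unstated):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 2, 8, 3};                 /* burst times of P1..P4 from the example */
    int n = sizeof burst / sizeof burst[0];
    qsort(burst, n, sizeof burst[0], cmp);      /* SJF: run the shortest bursts first */

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;                    /* with all arrivals at t = 0, waiting time = start time */
        clock += burst[i];
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);   /* prints 4.50 ms */
    return 0;
}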
(ii) Round Robin (RR) Scheduling:
 Definition: In RR, each process gets a fixed time quantum (or time slice) to
execute, and processes are scheduled in a circular order.
 After a process executes for its time slice, it is preempted and placed at the end of
the ready queue, and the next process is selected.
 Example:
o Consider three processes with burst times: P1 = 10ms, P2 = 4ms, P3 = 6ms,
and a time quantum of 4ms.
o Execution order:
 P1 executes for 4ms (remaining time: 6ms).
 P2 executes for 4ms (completes).
 P3 executes for 4ms (remaining time: 2ms).
 P1 executes for another 4ms (remaining time: 2ms).
 P3 executes for its remaining 2ms.
 P1 executes for its remaining 2ms.
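A short illustrative C simulation of the same Round Robin example (it assumes all three processes arrive at time 0; scanning the array in index order happens to match the ready-queue order for this particular example):

#include <stdio.h>

int main(void) {
    int burst[] = {10, 4, 6};                   /* remaining burst times of P1, P2, P3 */
    int n = 3, quantum = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;                       /* this process has already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            clock += slice;
            burst[i] -= slice;
            if (burst[i] == 0) {
                printf("P%d completes at t = %d\n", i + 1, clock);   /* P2 at 8, P3 at 18, P1 at 20 */
                done++;
            }
        }
    }
    return 0;
}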
Q33. Define semaphores. Explain the role of the wait() and signal() functions used in semaphores.
A semaphore is an integer variable that is used to signal the availability of resources and
ensure that multiple processes do not access a critical section simultaneously.
Types of Semaphores:
1. Binary Semaphore (Mutex):
o Can have only two values: 0 and 1.
o It is used for mutual exclusion, where 0 indicates that a resource is locked or
unavailable, and 1 means it is available.
2. Counting Semaphore:
o Can have a range of integer values.
o It is useful when multiple instances of a resource are available (e.g., a pool of
identical resources).
Role of wait() and signal() Functions:
Semaphores use two primary atomic operations, wait() (also known as P()) and signal() (also
known as V()), to manage access to critical sections and shared resources.
1. wait() Function:
 The wait() function is used to decrease the value of a semaphore.
 If the value of the semaphore is greater than zero, the wait() operation decrements
it by 1 and allows the process to continue.
 If the semaphore's value is zero or less, the process calling wait() is blocked and
added to a waiting queue until the semaphore becomes greater than zero.
Purpose:
 It is used to request a resource.
 Ensures that a process can only access the critical section when the semaphore
indicates the resource is available.
 Example: Suppose S is initially 1:
o Process A calls wait(S): since S > 0, it decrements S to 0 and enters the critical section.
o If another process B calls wait(S) while A is still in the critical section, it is blocked because S is now 0.
2. signal() Function:
 The signal() function is used to increase the value of a semaphore.
 When a process calls signal(), it increments the semaphore by 1.
 If any processes are waiting (blocked) because of a previous wait() operation, one of
them is unblocked and allowed to proceed.
Purpose:
 It is used to release a resource.
 Allows other processes to access the critical section by signaling that the resource is
now available.
 Example: Continuing from the above:
o Process A calls signal(S) after leaving the critical section: S is incremented to 1, indicating the resource is available.
o If process B is blocked on S, it is now unblocked and allowed to proceed.
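A common textbook-style sketch of how a blocking semaphore can implement these two operations; the struct layout and the block/wakeup steps (left as comments) are illustrative, not the API of any particular operating system:

typedef struct {
    int value;                 // semaphore count
    struct process *list;      // queue of processes blocked on this semaphore
} semaphore;

void wait(semaphore *S) {      // also written P(S)
    S->value--;
    if (S->value < 0) {
        // add the calling process to S->list
        // and block it until it is woken up by signal()
    }
}

void signal(semaphore *S) {    // also written V(S)
    S->value++;
    if (S->value <= 0) {
        // remove one process from S->list
        // and wake it up so it can proceed
    }
}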
Q34. Functions of Ready Queue?
Functions of the Ready Queue:
1. Storage of Ready Processes:
o The ready queue contains all processes that are loaded into main memory
and are ready to run. These processes have all the necessary resources
except the CPU itself.
o It acts as a waiting area where processes remain until the scheduler allocates
CPU time to them.
2. Facilitating CPU Scheduling:
o The primary function of the ready queue is to facilitate CPU scheduling. The
CPU scheduler selects processes from this queue based on a specific
scheduling algorithm (e.g., Round Robin, Shortest Job First, First-Come-First-Served).
o The scheduler determines the order in which processes will be executed
based on various criteria, such as priority, burst time, and fairness.
3. Efficient Resource Management:
o By maintaining a ready queue, the operating system can efficiently manage
CPU resources. It ensures that the CPU is utilized effectively by always
having a list of processes ready to run.
o When a process is blocked (waiting for I/O operations or other resources), it is
removed from the ready queue and added to the appropriate device queue or
waiting queue. Once it is ready to run again, it is moved back to the ready
queue.
4. Supporting Preemption:
o The ready queue allows for the implementation of preemptive scheduling.
When a process currently running is preempted (e.g., due to time-slicing in a
Round Robin system), it is moved back to the ready queue, and the next
eligible process is selected for execution.
o This enables better responsiveness in time-sharing systems where multiple
processes require CPU time.
5. Prioritization:
o The ready queue can be organized based on the scheduling algorithm used,
which may prioritize certain processes over others.
o For example, in a priority scheduling algorithm, the ready queue may be
structured as a priority queue, where higher-priority processes are served
before lower-priority ones.
(i) FCFS Scheduling

Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
P1        0.0            8            8.0               0.0            8.0
P2        0.4            4            12.0              7.6            11.6
P3        0.8            1            13.0              12.2           12.2

Average Turnaround Time = (8.0 + 11.6 + 12.2) / 3 = 10.6
(ii) SJF Scheduling

Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
P1        0.0            8            13.8              5.8            13.8
P2        0.4            4            5.8               1.4            5.4
P3        0.8            1            1.8               1.0            1.0

Average Turnaround Time = (13.8 + 5.4 + 1.0) / 3 = 6.73
(iii) CPU Idle for 1 Unit, then SJF Scheduling

Process   Arrival Time   Burst Time   Completion Time   Waiting Time   Turnaround Time
P1        0.0            8            14.0              6.0            14.0
P2        0.4            4            6.0               1.6            5.6
P3        0.8            1            2.0               0.2            1.2

Average Turnaround Time = (14.0 + 5.6 + 1.2) / 3 = 6.93
(i) Pre-emptive SJF Scheduling

Process   Arrival Time   CPU Burst Time   Completion Time   Waiting Time   Turnaround Time
P1        0              8                12                4              12
P2        1              5                13                7              12
P3        2              2                4                 0              2
P4        2              4                8                 2              6
(ii) Round Robin Scheduling

Process   Arrival Time   CPU Burst Time   Completion Time   Waiting Time   Turnaround Time
P1        0              8                21                13             21
P2        1              5                19                13             18
P3        2              2                7                 3              5
P4        2              4                16                10             14
Scheduling Algorithm   Average Waiting Time   Average Turnaround Time
Pre-emptive SJF        3.25                   8.0
Round Robin            9.75                   14.5