Shiva Yadav - OS - Assignment
Practical File
Q3. Write Peterson's algorithm (pseudocode) to solve the critical section problem.
Peterson's Algorithm is a classic solution for achieving mutual exclusion in concurrent
programming. It ensures that two processes can safely share a critical section without
encountering race conditions. The algorithm relies on two shared variables, a boolean array
flag[2] and an integer turn, and works for two processes, P0 and P1.
Peterson’s Algorithm (Pseudocode)
Here is the pseudocode for Peterson’s algorithm to solve the critical section problem for
two processes:
// Shared variables:
boolean flag[2] = {false, false}; // flag[i] is true when process Pi wants to enter the critical section.
int turn; // Indicates whose turn it is to enter the critical section.

// Process P0
do {
    flag[0] = true; // Indicate that P0 wants to enter the critical section.
    turn = 1; // Give turn to P1.
    while (flag[1] && turn == 1) {
        // Busy wait until P1 is not interested or it's P0's turn.
    }
    // --- Critical section ---
    flag[0] = false; // Exit section: P0 is no longer interested.
    // --- Remainder section ---
} while (true);

// Process P1
do {
    flag[1] = true; // Indicate that P1 wants to enter the critical section.
    turn = 0; // Give turn to P0.
    while (flag[0] && turn == 0) {
        // Busy wait until P0 is not interested or it's P1's turn.
    }
    // --- Critical section ---
    flag[1] = false; // Exit section: P1 is no longer interested.
    // --- Remainder section ---
} while (true);
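As an illustration beyond the required pseudocode, here is a runnable C sketch of the same algorithm; it assumes C11 atomics so the busy-wait flags are not optimized away and memory operations stay ordered, and uses a shared counter only to demonstrate mutual exclusion:

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdatomic.h>

atomic_bool flag[2];  // flag[i]: process i wants to enter
atomic_int turn;      // whose turn it is to wait
long counter = 0;     // shared data protected by the algorithm

void *worker(void *arg) {
    int i = *(int *)arg;      // this thread's id: 0 or 1
    int other = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);   // declare interest
        atomic_store(&turn, other);     // give the turn away
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                           // busy wait
        counter++;                      // critical section
        atomic_store(&flag[i], false);  // exit section
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter); // no lost updates
    return 0;
}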
Q4. What is a distributed operating system? Compare client-server computing and peer-to-peer computing.
A Distributed Operating System (DOS) is an operating system that manages a collection of
independent computers and makes them appear to the user as a single cohesive system.
These independent computers are networked together and work collectively, sharing
resources such as processors, memory, and data storage.
Characteristics of Distributed Operating Systems:
Resource Sharing: Multiple systems share hardware, software, and data resources,
leading to better utilization of resources.
Transparency: The system hides the distribution of resources from the user,
providing location transparency (users do not need to know where a resource is
located).
Scalability: Can scale easily by adding more machines to the network, making it
suitable for large-scale applications.
Fault Tolerance: If one of the computers in the network fails, others can continue to
function, improving system reliability.
Concurrency: Allows multiple processes to run simultaneously across different
machines, increasing performance.
Examples:
Amoeba, Mach, LOCUS, and modern implementations like Google’s cloud
infrastructure or Amazon Web Services (AWS), which utilize distributed
computing principles.
Q5. Explain the multi-programmed batch systems and time sharing systems with their
advantages and disadvantages.
1. Multi-Programmed Batch Systems
Definition:
Multi-programmed batch systems are a type of operating system where multiple
jobs (programs or tasks) are loaded into memory, allowing the CPU to switch
between them when one job needs to wait (e.g., for I/O operations). This keeps the
CPU busy by executing other jobs while one job is waiting.
Jobs are submitted in batches, which means that users submit jobs to the system
without expecting immediate interaction.
Working:
The OS selects a job from the batch, loads it into memory, and begins execution.
When a job waits for an I/O operation, the OS switches to another job, ensuring that
the CPU remains utilized.
The OS schedules jobs based on priority, or on a first-come, first-served basis.
Advantages:
Better CPU Utilization: Since the CPU is never idle when a job is waiting for
I/O, it switches to other jobs, increasing the overall system efficiency.
Throughput: The system can process many jobs in a given time,
improving throughput compared to single-program systems.
Reduced Idle Time: Jobs that need I/O can coexist with compute-intensive jobs,
reducing the idle time of the CPU.
Disadvantages:
No User Interaction: Users cannot interact with the job once it has been submitted.
The user has to wait until the entire job batch is processed to see the output.
Difficult Debugging: Debugging is challenging because jobs are processed
without interaction. Errors are detected only after job completion.
Job Scheduling Complexity: Deciding the order and manner in which jobs are
loaded and executed requires complex algorithms to ensure efficiency and fairness.
Example: Early mainframe systems like IBM’s OS/360 used multi-programmed batch
processing.
2. Time-Sharing Systems
Definition:
Time-sharing systems are designed to allow multiple users to interact with the
computer system simultaneously. Each user gets a small time slice of the CPU,
making it appear as if the system is dedicated to them.
This is achieved by rapidly switching between user tasks, making it possible for many
users to share the resources of a single system interactively.
Working:
The OS uses a scheduling algorithm (e.g., Round Robin) to allocate small time
slices (quantum) to each user program.
If a user process’s time slice expires or it needs to wait for I/O, the OS switches to
another process.
This allows users to execute commands interactively and receive quick responses,
creating a multi-user environment.
Advantages:
Interactivity: Users can interact with their programs and receive immediate
feedback, making it suitable for general-purpose usage.
Resource Sharing: Efficient use of CPU, memory, and I/O devices as multiple
users share these resources.
User Convenience: Users perceive that they have dedicated access to the system,
even though the CPU time is being shared among multiple processes.
Disadvantages:
Overhead: The frequent context switching between users and processes can
introduce a significant overhead, reducing system efficiency.
Security Risks: Sharing the system among multiple users can lead to security
challenges, as processes may need to be isolated to prevent unauthorized access.
Performance Issues: As more users connect and demand resources, system
performance can degrade, especially if the time slices are too short or resources are
limited.
Example: UNIX and Linux are examples of time-sharing systems, supporting multiple users
interacting with the system simultaneously.
Q6. Why is inter-process communication important? Compare the shared memory and message passing models of inter-process communication.
Inter-Process Communication (IPC) is crucial for enabling processes (programs in execution)
to exchange data and coordinate their actions in a computing system. IPC is particularly
important in multi-process and distributed systems, where processes need to work together to
perform complex tasks. It allows processes to share data, synchronize their execution, and
communicate effectively.
Importance of Inter-Process Communication (IPC):
Data Sharing: Processes often need to exchange data to complete a task. IPC allows
them to share information seamlessly.
Synchronization: IPC mechanisms help synchronize the actions of different
processes, ensuring that shared resources are accessed in an orderly manner.
Modularity: By enabling different processes to communicate, IPC allows for better
modularity and division of labor, where different processes handle different parts of
an application.
Resource Sharing: IPC helps processes share resources such as memory and data
structures, leading to more efficient resource usage.
Fault Tolerance: Distributed processes can communicate with each other to provide
redundancy and improve the reliability and fault tolerance of systems.
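The two IPC models named in the question differ mainly in where the data lives and who mediates the transfer. In the shared memory model, cooperating processes map a common memory region and exchange data with ordinary reads and writes; this is fast (no kernel involvement per access), but the processes must synchronize themselves. In the message passing model, the kernel copies data between processes through send/receive primitives; this is slower per transfer but simpler, since the kernel handles synchronization. A minimal sketch contrasting the two, assuming a Unix-like system where a pipe stands in for message passing and an anonymous shared mapping stands in for shared memory:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    // Message passing: a pipe carries data between parent and child.
    int fd[2];
    pipe(fd);

    // Shared memory: both processes will see the same mapped page.
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (fork() == 0) { // child
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg) + 1);        // message passing: explicit send
        strcpy(shm, "hello via shared memory");    // shared memory: plain write
        _exit(0);
    }
    wait(NULL); // parent waits, so both transfers have completed

    char buf[64];
    read(fd[0], buf, sizeof buf); // message passing: explicit receive
    printf("pipe: %s\n", buf);
    printf("shm:  %s\n", shm);    // shared memory: plain read, no copying
    return 0;
}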
Q7. What is interrupt? Explain various services offered by an operating system.
An interrupt is a signal sent to the processor by hardware or software indicating that an event
needs immediate attention. Interrupts allow the CPU to respond to events or conditions in
real-time rather than waiting for a program to complete its execution. When an interrupt
occurs, the CPU temporarily halts its current operations, saves its state, and executes a
specific routine known as an interrupt handler or interrupt service routine (ISR) to address
the event.
Types of Interrupts:
1. Hardware Interrupts: Generated by hardware devices (e.g., keyboard, mouse, disk
drives) to signal events like input/output requests or hardware malfunctions.
2. Software Interrupts: Generated by programs or the operating system itself, often
referred to as traps or exceptions (e.g., division by zero, invalid memory access).
3. Timer Interrupts: Generated by the system timer at regular intervals to allow the OS
to perform scheduling and other periodic tasks.
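As an illustration, the closest user-space analogue of an interrupt handler is a POSIX signal handler: the process is interrupted asynchronously and control jumps to a registered routine, much like the CPU jumping to an ISR. A minimal sketch, assuming a POSIX system:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t got_interrupt = 0;

// Plays the role of an ISR: runs when SIGINT (Ctrl+C) arrives.
void handler(int signum) {
    (void)signum;
    got_interrupt = 1; // only async-signal-safe work belongs here
}

int main(void) {
    signal(SIGINT, handler); // register the "interrupt service routine"
    printf("Working; press Ctrl+C to raise an interrupt...\n");
    while (!got_interrupt)
        pause(); // sleep until some signal arrives
    printf("Interrupt handled; resuming normal flow.\n");
    return 0;
}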
Services Offered by an Operating System
Operating systems provide a range of essential services that facilitate the operation of
computer programs and manage hardware resources effectively. Here are some of the key
services offered by an operating system:
1. Process Management:
o Creation and Termination: Services to create and terminate processes.
o Scheduling: Algorithms to schedule the execution of processes, manage CPU
time allocation, and ensure efficient execution.
o Synchronization: Mechanisms to synchronize the execution of concurrent
processes and avoid race conditions.
o Inter-Process Communication: Methods to allow processes to communicate
and share data.
2. Memory Management:
o Allocation and Deallocation: Services to allocate memory to processes when
they start and deallocate memory when they terminate.
o Paging and Segmentation: Techniques to manage memory efficiently,
allowing for virtual memory and the abstraction of physical memory.
o Memory Protection: Ensuring that processes do not interfere with each
other’s memory space, providing isolation and security.
3. File System Management:
o File Operations: Services to create, read, write, and delete files.
o Directory Management: Services to manage directories (folders) and organize
files hierarchically.
o Access Control: Mechanisms to manage permissions and ensure security by
controlling who can access or modify files.
4. Device Management:
o Device Drivers: Software that allows the OS to interact with hardware devices
(printers, disk drives, etc.).
o I/O Operations: Services to perform input/output operations, manage
buffers, and handle device interrupts.
o Device Allocation: Services to manage and allocate access to various
hardware devices among processes.
5. User Interface:
o Command-Line Interface (CLI): A textual interface that allows users
to interact with the operating system through commands.
o Graphical User Interface (GUI): A visual interface that allows users to interact
with the system using graphical elements like windows, icons, and menus.
6. Security and Access Control:
o Authentication: Services to verify user identities before granting access to
system resources.
o Authorization: Mechanisms to control user permissions and access rights to
files, processes, and devices.
o Encryption: Services to secure data transmission and storage through
encryption methods.
7. Networking:
o Communication Protocols: Services to support network communication using
various protocols (TCP/IP, UDP, etc.).
o Network Management: Services to manage network resources, connections,
and communication between processes running on different systems.
Q8. Describe the functions of a dispatcher. Illustrate multilevel queue scheduling
approach.
The dispatcher is a component of the operating system responsible for managing the
execution of processes. It is the module that handles context switching between processes,
transferring control from one process to another. The dispatcher plays a crucial role in the
overall scheduling process and has several key functions:
1. Context Switching:
o The dispatcher saves the state (context) of the currently running process and
loads the state of the next process to be executed. This involves saving
registers, program counters, and other necessary information.
o This context switch allows the CPU to switch from one process to another
efficiently.
2. Process State Management:
o The dispatcher maintains the status of each process (e.g., running, ready,
waiting) and ensures that processes transition between these states correctly
based on scheduling policies.
3. Control Transfer:
o The dispatcher transfers control from the scheduler to the selected process. It
invokes the appropriate routine to start the execution of the selected process.
4. Scheduling Policy Enforcement:
o The dispatcher implements the scheduling policies defined by the operating
system. It ensures that processes are executed based on their priority and the
scheduling algorithm in use (e.g., First-Come, First-Served, Round Robin).
5. Handling Interrupts:
o The dispatcher also responds to interrupts from hardware or software, which
may require switching to a different process or performing certain actions
before returning to the previous process.
6. Performance Monitoring:
o The dispatcher may track performance metrics related to process scheduling,
such as CPU utilization and turnaround time, which helps in optimizing the
scheduling algorithms.
Multilevel Queue Scheduling Approach
Multilevel Queue Scheduling is a scheduling method that partitions the ready queue into
several separate queues, each serving different types of processes based on specific criteria
(such as priority, process type, or resource requirements). Each queue can have its own
scheduling algorithm.
Structure of Multilevel Queue Scheduling
1. Multiple Queues:
o Each queue is designed for a specific type of process (e.g., foreground,
background, interactive, batch).
o Queues are often prioritized, meaning that processes in higher-priority
queues are given preference over those in lower-priority queues.
2. Scheduling Algorithms:
o Different scheduling algorithms can be used for different queues. For
example:
Round Robin for interactive processes.
First-Come, First-Served (FCFS) for batch processes.
Shortest Job First (SJF) for high-priority tasks.
3. Fixed Queue Allocation:
o Processes are assigned to queues based on their characteristics and
requirements at the time of their creation.
o Processes typically remain in the same queue throughout their lifetime.
Advantages of Multilevel Queue Scheduling
1. Flexibility: Allows the operating system to optimize resource allocation for different
types of processes.
2. Efficiency: Different algorithms for different queues can maximize CPU
utilization and minimize waiting times.
3. Priority Handling: Important processes can be prioritized easily, ensuring
they receive the resources they need promptly.
Disadvantages of Multilevel Queue Scheduling
1. Starvation: Lower-priority queues may experience starvation if higher-priority
queues are constantly filled with new processes.
2. Complexity: Managing multiple queues and scheduling policies can add complexity
to the operating system.
3. Fixed Queue Allocation: Once assigned to a queue, processes generally cannot
change their priority dynamically, which may lead to inefficient resource utilization
in some scenarios.
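A minimal C sketch of the dispatch rule described above: three fixed-priority queues, with the dispatcher always serving the highest-priority non-empty queue first (the queue sizes and process IDs are made up for illustration):

#include <stdio.h>

#define NUM_QUEUES 3 // queue 0 = highest priority (e.g., interactive)
#define MAX 16

typedef struct { int pids[MAX]; int head, tail; } Queue;

Queue queues[NUM_QUEUES];

void enqueue(Queue *q, int pid) { q->pids[q->tail++ % MAX] = pid; }
int  is_empty(Queue *q)         { return q->head == q->tail; }
int  dequeue(Queue *q)          { return q->pids[q->head++ % MAX]; }

// Dispatcher: scan from the highest-priority queue downward.
int pick_next(void) {
    for (int level = 0; level < NUM_QUEUES; level++)
        if (!is_empty(&queues[level]))
            return dequeue(&queues[level]);
    return -1; // no process is ready
}

int main(void) {
    enqueue(&queues[2], 101); // batch process in the low-priority queue
    enqueue(&queues[0], 202); // interactive process in the high-priority queue
    printf("next pid: %d\n", pick_next()); // 202: the higher queue wins
    printf("next pid: %d\n", pick_next()); // 101
    return 0;
}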
Q9. Explain the reader-writers problem. Write algorithm (code snippet) to solve
the readers-writers problem using Semaphore.
The Readers-Writers Problem is a classic synchronization problem in concurrent
programming that deals with the management of access to a shared resource (like a
database) where multiple processes can read from the resource concurrently, but writes must
be exclusive. The problem involves two types of processes:
1. Readers: Processes that only read the shared resource.
2. Writers: Processes that modify or write to the shared resource.
Problem Description
The main challenges of the readers-writers problem are:
Concurrent Reads: Multiple readers can access the resource simultaneously without
conflict.
Exclusive Writes: Only one writer can access the resource at a time, and no readers
should be accessing the resource while a writer is writing.
Fairness: The system should ensure that readers and writers get a fair chance to
access the resource. This means that neither readers nor writers should starve; both
should eventually get access to the resource.
Variants
There are two main variants of the readers-writers problem:
1. First Readers-Writers Problem: Prioritizes readers over writers, allowing multiple
readers to read concurrently as long as no writers are waiting. This can lead to writer
starvation if there are many readers.
2. Second Readers-Writers Problem: Prioritizes writers over readers, ensuring that if
a writer is waiting, no new readers can start reading. This prevents writer starvation
but can lead to reader starvation.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t mutex;       // Protects readCount
sem_t writeBlock;  // Gives a writer exclusive access to the resource
int readCount = 0; // Number of readers currently reading

void *reader(void *arg) {
    int readerId = *(int *)arg;
    sem_wait(&mutex);          // Start critical section
    readCount++;
    if (readCount == 1) {
        sem_wait(&writeBlock); // First reader locks the resource against writers
    }
    sem_post(&mutex);          // End critical section
    // Reading section
    printf("Reader %d is reading\n", readerId);
    sleep(1); // Simulating reading time
    // Reader is done
    sem_wait(&mutex);          // Start critical section
    readCount--;
    if (readCount == 0) {
        sem_post(&writeBlock); // Last reader unlocks the resource for writers
    }
    sem_post(&mutex);          // End critical section
    return NULL;
}

void *writer(void *arg) {
    int writerId = *(int *)arg;
    sem_wait(&writeBlock);     // Writer gets exclusive access
    printf("Writer %d is writing\n", writerId);
    sleep(1); // Simulating writing time
    sem_post(&writeBlock);     // Release the resource
    return NULL;
}

int main() {
    int numReaders = 5;
    int numWriters = 2;
    pthread_t readers[numReaders], writers[numWriters];
    int readerIds[numReaders], writerIds[numWriters];
    // Initialize semaphores
    sem_init(&mutex, 0, 1);
    sem_init(&writeBlock, 0, 1);
    // Create reader and writer threads
    for (int i = 0; i < numReaders; i++) {
        readerIds[i] = i + 1;
        pthread_create(&readers[i], NULL, reader, &readerIds[i]);
    }
    for (int i = 0; i < numWriters; i++) {
        writerIds[i] = i + 1;
        pthread_create(&writers[i], NULL, writer, &writerIds[i]);
    }
    // Wait for all threads to finish
    for (int i = 0; i < numReaders; i++) pthread_join(readers[i], NULL);
    for (int i = 0; i < numWriters; i++) pthread_join(writers[i], NULL);
    // Destroy semaphores
    sem_destroy(&mutex);
    sem_destroy(&writeBlock);
    return 0;
}
2. Loading the New Context:
o After saving the context of the currently running process, the operating system loads the context of the next scheduled process or thread. This involves restoring:
The program counter to resume execution from where it left off.
The CPU registers with the saved values of the new process or thread.
Any other necessary state information.
3. Overhead:
o Context switching introduces overhead because it requires time and
resources to save and restore the state of processes or threads. This
overhead can impact the overall performance of the system, especially in
environments with frequent context switching.
4. Frequency of Context Switches:
o The frequency of context switches can affect system performance. While
frequent context switching can lead to increased responsiveness and better
multitasking, too many switches can cause inefficiencies and reduce CPU
utilization.
Q17. What is the use of fork and exec system calls?
The fork and exec system calls are fundamental in Unix-like operating systems for creating
and managing processes. They are often used together to enable the execution of new
programs. Here’s a detailed explanation of each and their uses:
fork() System Call
Definition:
The fork system call creates a new process by duplicating the existing process (the
parent process). The newly created process is referred to as the child process.
Key Characteristics:
1. Process Duplication:
o When fork is called, the operating system creates a new process that is an
exact duplicate of the parent process, with its own unique process ID (PID).
The child process inherits the parent's memory space, file descriptors, and
execution context.
2. Return Values:
o The fork call returns a value that allows the program to determine whether
it's running in the parent or child process:
In the parent process, fork returns the PID of the child.
In the child process, fork returns 0.
If the call fails, fork returns -1 to the parent and no child is created.
exec() System Call
Definition:
The exec family of system calls (execl, execv, execlp, execvp, etc.) replaces the calling process's memory image with a new program. The process keeps its PID, and on success exec never returns, because the old program no longer exists.
Using fork and exec Together:
The combination of fork and exec is common in many applications, particularly in Unix-like operating systems. The typical pattern is as follows:
1. A process calls fork to create a new child process.
2. The child process calls exec to replace its memory with a new program.
3. The parent process can continue executing its code, potentially waiting for the child
process to finish (using wait or similar system calls).
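A minimal C sketch of this pattern, assuming a Unix-like system (the choice of ls -l as the new program is arbitrary):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();            // duplicate the current process
    if (pid == 0) {
        // Child: replace its image with a new program.
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp failed");   // reached only if exec fails
        _exit(1);
    } else if (pid > 0) {
        wait(NULL);                // parent: wait for the child to finish
        printf("Child %d finished\n", (int)pid);
    } else {
        perror("fork failed");     // fork returned -1: no child was created
    }
    return 0;
}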
Q18. Differentiate between pre-emptive and non-pre-emptive scheduling.
Q19. Differentiate between the long-term and short-term scheduler.
Q20. How do distributed operating systems differ from multiprogrammed and time-shared operating systems? Give key features of each.
Multilevel queue
Multilevel Feedback Queue
Q24. How semaphores help in process synchronization? What is the difference between
binary and counting semaphores?
Semaphores are synchronization tools used in concurrent programming to manage access to shared
resources and prevent race conditions. They help in process synchronization by coordinating the
execution of processes or threads, ensuring that multiple processes do not access critical sections
of code simultaneously in a way that could cause conflicts or inconsistencies.
How Semaphores Help in Process Synchronization:
1. Controlling Access to Shared Resources: Semaphores use a counter to track the
number of available resources or permits for accessing a shared resource. When a
process or thread wants to access a resource, it performs a wait() operation (also
known as P() or down()), which decreases the semaphore's count. If the count is
greater than zero, access is granted; otherwise, the process is blocked until the count
becomes positive (indicating resource availability).
2. Signaling: After a process has finished using a resource, it performs a signal()
operation (also called V() or up()), which increases the semaphore's count,
potentially unblocking a waiting process or thread. This signaling ensures that other
waiting processes can proceed when resources become available.
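The difference between the two semaphore types comes down to the initial count: a binary semaphore is initialized to 1 and guards a single resource (pure mutual exclusion), while a counting semaphore is initialized to N and admits up to N concurrent holders. A minimal POSIX sketch, assuming three identical resource instances (the thread count and sleep are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t slots; // counting semaphore: number of free resource instances

void *task(void *arg) {
    int id = *(int *)arg;
    sem_wait(&slots);  // wait()/P(): take one instance, or block if none left
    printf("Task %d acquired a slot\n", id);
    sleep(1);          // use the resource
    printf("Task %d releasing its slot\n", id);
    sem_post(&slots);  // signal()/V(): return the instance, waking a waiter
    return NULL;
}

int main(void) {
    pthread_t t[5];
    int ids[5];
    sem_init(&slots, 0, 3); // 3 instances; a binary semaphore would use 1
    for (int i = 0; i < 5; i++) {
        ids[i] = i + 1;
        pthread_create(&t[i], NULL, task, &ids[i]);
    }
    for (int i = 0; i < 5; i++) pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}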
Q25. Differentiate between process and thread.
1. Memory Space:
o Process: Each process runs in its own separate address space, with its own code, data, heap, and stack.
o Thread: Threads within the same process share the same memory space, including code, data, and other resources. However, each thread has its own stack and registers.
2. Communication:
o Process: Communication between processes is more complex and slower
because they have separate memory spaces, requiring mechanisms like inter-
process communication (IPC) such as pipes, sockets, or shared memory.
o Thread: Communication between threads is faster because they share the
same memory space, allowing them to directly read and write shared data.
3. Overhead:
o Process: Creating and managing processes is more resource-intensive and slower due to the need for separate memory spaces and system resources.
o Thread: Threads are lighter and have less overhead since they share the same memory space. Creating and switching between threads is generally faster than between processes.
User-Level Threads vs. Kernel-Level Threads
Q26. Design a solution to a critical section problem using one semaphore and check it
for mutual exclusion, progress requirement and bounded waiting conditions.
To design a solution for the critical section problem using a single semaphore, we can use a
binary semaphore (mutex). This semaphore will control access to the critical section,
ensuring that only one process or thread can access it at a time. The semaphore will be
initialized to 1, indicating that the critical section is available.
Semaphore-Based Solution for the Critical Section Problem:
1. Initialization:
o Let mutex be a binary semaphore.
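2. Protocol:
o A minimal sketch of the structure each process follows (mutex is initialized to 1):
do {
    wait(mutex);    // Entry section: lock the critical section
    // --- Critical section ---
    signal(mutex);  // Exit section: release the lock
    // --- Remainder section ---
} while (true);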
Checking the Conditions:
1. Mutual Exclusion:
o Only one process can enter the critical section at a time because the
wait(mutex) operation decreases the mutex value to 0, effectively locking the
critical section.
o When a process enters the critical section, other processes calling
wait(mutex) will be blocked as long as mutex remains 0.
o After a process leaves the critical section and calls signal(mutex), the mutex
value is set back to 1, allowing another waiting process to enter.
o Thus, mutual exclusion is ensured since no two processes can access the
critical section simultaneously.
2. Progress:
o If no process is in the critical section, and some processes wish to enter, one
of those processes will be allowed to enter.
o This is ensured because, once a process finishes and calls signal(mutex), any
waiting process can proceed with wait(mutex).
o The decision of which process enters next depends on the scheduling
mechanism, but as long as a process exists to enter the critical section, it will
make progress.
3. Bounded Waiting:
o Bounded waiting ensures that there is a limit on how many times other
processes can enter the critical section before a waiting process is granted
access.
o Since the wait(mutex) operation blocks processes in a queue, every process
that is waiting for the critical section will eventually get a turn once the
currently running process exits and calls signal(mutex).
o This prevents indefinite postponement and ensures that every process will
have bounded waiting.
Q27. Write a solution to readers/writers problem using semaphore and justify its working.
Readers: Multiple readers can read the shared resource concurrently without
causing any issues.
Writers: Only one writer can access the shared resource at a time, and no readers
should be reading while a writer is writing.
Requirement: A writer should have exclusive access to the shared resource when
writing, while readers can read simultaneously without blocking each other.
Solution Using Semaphores:
We use three semaphores in this solution:
1. mutex: A binary semaphore (mutex) to ensure mutual exclusion when updating the
read_count.
2. wrt: A binary semaphore that allows writers to have exclusive access to the shared
resource.
3. read_count: An integer variable that keeps track of the number of active readers.
Semaphores and Variables:
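A compact pseudocode sketch using these semaphores (mutex and wrt initialized to 1, read_count to 0; a full C version appears under Q9):

// Writer
wait(wrt);           // Claim exclusive access
// --- write to the shared resource ---
signal(wrt);         // Release exclusive access

// Reader
wait(mutex);         // Protect read_count
read_count++;
if (read_count == 1)
    wait(wrt);       // First reader locks out writers
signal(mutex);
// --- read the shared resource ---
wait(mutex);         // Protect read_count
read_count--;
if (read_count == 0)
    signal(wrt);     // Last reader lets writers back in
signal(mutex);

Justification: only the first reader acquires wrt and only the last reader releases it, so any number of readers can overlap; a writer holding wrt excludes both other writers and the first reader of any new group, which gives writers the required exclusive access.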
Q28. Explain the main features of the Linux operating system.
1. Multi-User Capability:
Linux supports multiple users at the same time, allowing different users to access the system resources (memory, disk space, etc.) simultaneously without interfering with each other.
Each user can have separate accounts, permissions, and environments.
2. Multitasking:
Linux is designed to handle multiple tasks at the same time.
It can run many processes concurrently, making it suitable for environments that
require running various applications or services simultaneously, such as servers.
3. Security:
Linux is known for its strong security features, including user authentication, file
permissions, and encryption.
It supports access control lists (ACLs) and security modules like SELinux (Security-Enhanced Linux) to enforce security policies.
The open-source nature also means that security vulnerabilities can be identified and
patched quickly by the community.
4. Portability:
Linux is highly portable and can run on a wide range of hardware platforms, from
powerful servers and desktop computers to smaller devices like smartphones (e.g.,
Android) and embedded systems.
It is not hardware-specific, making it suitable for different types of machines.
5. Shell and Command-Line Interface (CLI):
Linux provides a powerful command-line interface (CLI) through various shells
like Bash, Zsh, and more.
The shell allows users to execute commands, automate tasks using shell scripts, and
manage the system efficiently.
It is especially popular among system administrators and developers for its flexibility
and control.
Q29. How does many-to-many thread model differ from one-to-one model? Explain.
Q30. What is multilevel feedback queue scheduling?
Multilevel Feedback Queue (MLFQ) Scheduling is a sophisticated CPU scheduling
algorithm used in operating systems to manage processes with varying priority levels and
differing needs for CPU time. It is an extension of the multilevel queue scheduling algorithm
but with the added flexibility that processes can move between different priority queues based
on their behavior and execution history.
Consider an MLFQ with three queues:
Queue 0: High priority with a time quantum of 2 units.
Queue 1: Medium priority with a time quantum of 4 units.
Queue 2: Low priority with a time quantum of 8 units.
A new process, Process A, arrives and is placed in Queue 0.
Process A runs for 2 time units but does not finish, so it is moved to Queue 1.
Process A now runs for up to 4 time units in Queue 1. If it still does not finish, it
is moved to Queue 2.
If another process, Process B, arrives during this time and is placed in Queue 0, it
will preempt Process A since Queue 0 has a higher priority.
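A minimal C sketch of the demotion rule in this example (quanta of 2, 4, and 8 units; a process that exhausts its slice drops one level; preemption by newly arriving processes is left out):

#include <stdio.h>

int quantum[3] = {2, 4, 8}; // time slice for Queue 0, 1, 2

// Run a job at its current level; demote it if it uses the whole slice.
int run_once(int level, int *remaining) {
    int slice = quantum[level];
    if (*remaining > slice) {
        *remaining -= slice;
        return level < 2 ? level + 1 : 2; // demote; Queue 2 is the floor
    }
    *remaining = 0; // job finished within its slice
    return level;
}

int main(void) {
    int level = 0, remaining = 9; // "Process A" needs 9 units of CPU
    while (remaining > 0) {
        printf("runs in Queue %d with %d units left\n", level, remaining);
        level = run_once(level, &remaining);
    }
    printf("finishes in Queue %d\n", level);
    return 0;
}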
Q31. Define job queue, ready queue and device queue that are used in process scheduling.
1. Job Queue:
Definition: The job queue (also known as the input queue) is a queue that holds all
the processes that are waiting to enter the system. These processes are usually
stored on the disk and are not yet loaded into the main memory (RAM).
Purpose: When a new process is created (e.g., a user starts a program or a batch job
is initiated), it is placed in the job queue. The job queue keeps track of all processes
that need to be brought into memory for execution.
Example: A batch processing system that queues up multiple jobs to be executed
later will place these jobs in the job queue until the system is ready to allocate
resources to them.
2. Ready Queue:
Definition: The ready queue contains all the processes that are loaded into the
main memory (RAM) and are ready to execute but are currently waiting for the
CPU to become available.
Purpose: When a process is brought into memory and has all the resources it needs
(except the CPU), it is placed in the ready queue. The scheduler selects processes
from the ready queue and allocates the CPU to them.
Characteristics:
o The ready queue is typically implemented as a FIFO (First-In-First-Out)
queue or using other data structures like priority queues or linked lists,
depending on the scheduling algorithm used.
o Processes in the ready queue are waiting only for CPU time and are not
blocked for any other resources.
Example: If three processes, A, B, and C, are all loaded into memory but the CPU
is currently running another process D, A, B, and C will be in the ready queue.
3. Device Queue:
Definition: A device queue is associated with processes that are waiting for a
particular I/O device (like a disk, printer, or network card) to become
available.
Purpose: When a process needs to perform an I/O operation (e.g., reading data from a
disk), it is placed in the device queue corresponding to that device. The process
remains in the device queue until the requested I/O operation is completed.
Characteristics:
o Each I/O device has its own queue of processes waiting for that device.
o When the I/O operation is complete, the process is moved back to the ready
queue if it is ready to continue execution.
Example: If Process E is waiting for data from the disk, it will be placed in the disk
device queue. Once the disk I/O operation completes, Process E will be moved back
to the ready queue to resume its execution.
Q32. Explain various criteria considered in CPU scheduling algorithms. Explain the following methods: (i) Shortest Job First scheduling, (ii) Round Robin scheduling.
Criteria for CPU Scheduling:
1. CPU Utilization:
o Refers to keeping the CPU as busy as possible.
o The goal is to maximize CPU utilization; in a real system it should range from about 40% (lightly loaded) to 90% (heavily loaded), approaching 100% in intensive environments.
2. Throughput:
o Measures the number of processes that are completed in a unit of time (e.g.,
processes per second).
o Higher throughput indicates that the system is executing more processes in a
given period.
3. Turnaround Time:
o The total time taken for a process to complete, from the time of submission
to its completion.
o Includes waiting time, execution time, and I/O time.
o Lower turnaround time is preferred as it means faster processing of
individual processes.
4. Waiting Time:
o The total amount of time a process spends in the ready queue waiting for
CPU time.
o Lower waiting time is desirable as it reduces the time processes spend
waiting for execution.
5. Response Time:
o The time from the submission of a request until the first response is
produced.
o It is crucial for interactive systems where quick feedback is required,
like time-sharing systems.
o Lower response time means that users experience less delay when
interacting with the system.
(i) Shortest Job First (SJF) Scheduling:
Definition: In SJF, the process with the smallest burst time (execution time required)
is selected for execution next.
Types:
o Non-preemptive SJF: Once a process starts execution, it runs
until completion.
o Preemptive SJF (also known as Shortest Remaining Time First (SRTF)):
If a new process arrives with a shorter burst time than the remaining time of
the current process, the current process is preempted, and the new process is
executed.
Example:
o Suppose there are four processes with burst times: P1 = 6ms, P2 = 2ms, P3 =
8ms, P4 = 3ms.
o According to SJF, the execution order would be: P2 (2ms) → P4 (3ms) →
P1 (6ms) → P3 (8ms).
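Assuming all four processes arrive at time 0, the waiting times under SJF work out to P2 = 0 ms, P4 = 2 ms, P1 = 5 ms, and P3 = 11 ms, for an average of (0 + 2 + 5 + 11) / 4 = 4.5 ms; running the same jobs FCFS in submission order (P1, P2, P3, P4) would average (0 + 6 + 8 + 16) / 4 = 7.5 ms.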
Advantages:
o Minimizes Average Waiting Time: SJF can provide optimal average
waiting time because it processes shorter tasks first, reducing the time other
processes wait.
Disadvantages:
o Difficult to Predict Burst Time: It is not always easy to estimate the
burst time of processes accurately.
o Possibility of Starvation: Long processes may be indefinitely delayed if
short processes keep arriving, leading to starvation of longer tasks.
(ii) Round Robin (RR) Scheduling:
Definition: In RR, each process gets a fixed time quantum (or time slice) to
execute, and processes are scheduled in a circular order.
After a process executes for its time slice, it is preempted and placed at the end of
the ready queue, and the next process is selected.
Example:
o Consider three processes with burst times: P1 = 10ms, P2 = 4ms, P3 = 6ms,
and a time quantum of 4ms.
o Execution order:
P1 executes for 4ms (remaining time: 6ms).
P2 executes for 4ms (completes).
P3 executes for 4ms (remaining time: 2ms).
P1 executes for another 4ms (remaining time: 2ms).
P3 executes for its remaining 2ms.
P1 executes for its remaining 2ms.
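From this schedule (all processes assumed to arrive at time 0), the completion times are P2 = 8 ms, P3 = 18 ms, and P1 = 20 ms; turnaround time equals completion time here, so the waiting times are P1 = 20 - 10 = 10 ms, P2 = 8 - 4 = 4 ms, and P3 = 18 - 6 = 12 ms, an average of about 8.67 ms.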
Q33. Define semaphores. Explain the role of the wait() and signal() functions used in semaphores.
A semaphore is an integer variable that is used to signal the availability of resources and
ensure that multiple processes do not access a critical section simultaneously.
Types of Semaphores:
1. Binary Semaphore (Mutex):
o Can have only two values: 0 and 1.
o It is used for mutual exclusion, where 0 indicates that a resource is locked or
unavailable, and 1 means it is available.
2. Counting Semaphore:
o Can have a range of integer values.
o It is useful when multiple instances of a resource are available (e.g., a pool of
identical resources).
Role of wait() and signal() Functions:
Semaphores use two primary atomic operations, wait() (also known as P()) and signal() (also
known as V()), to manage access to critical sections and shared resources.
1. wait() Function:
The wait() function is used to decrease the value of a semaphore.
If the value of the semaphore is greater than zero, the wait() operation decrements
it by 1 and allows the process to continue.
If the semaphore's value is zero or less, the process calling wait() is blocked and
added to a waiting queue until the semaphore becomes greater than zero.
Purpose:
It is used to request a resource.
Ensures that a process can only access the critical section when the semaphore
indicates the resource is available.
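2. signal() Function:
The signal() function increases the value of a semaphore by 1.
If any process is blocked on the semaphore, one of them is unblocked and allowed to proceed.
Purpose:
It is used to release a resource after use, making it available to waiting processes.
A sketch of the classical textbook definitions (both operations must execute atomically; a real OS blocks the caller in a queue rather than spinning):

wait(S):           // also called P() or down()
    while (S <= 0)
        ;          // busy wait until a unit of the resource is free
    S = S - 1;     // claim one unit

signal(S):         // also called V() or up()
    S = S + 1;     // release one unit, possibly letting a waiter proceed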
Process | Arrival Time | Burst Time | Completion Time | Waiting Time | Turnaround Time
P2      | 1            | 5          | 13              | 7            | 12
P3      | 2            | 2          | 4               | 0            | 2
P4      | 2            | 4          | 8               | 2            | 6

Process | Arrival Time | Burst Time | Completion Time | Waiting Time | Turnaround Time
P2      | 1            | 5          | 19              | 13           | 18
P3      | 2            | 2          | 7               | 3            | 5
P4      | 2            | 4          | 16              | 10           | 14