
OSMYSOLUN

An operating system (OS) is essential software that manages computer hardware and software resources, providing services such as resource management, user interface, application execution, and data management. It enables processes to communicate and coordinate, supports multi-threading for improved performance, and facilitates process creation and termination through system calls. Various inter-process communication tools, such as pipes, message queues, and shared memory, enhance data exchange between processes.


Q1) What is an operating system? What are the operating system services? Explain.
Ans- An operating system (OS) is the fundamental software that manages all the
hardware and software resources of a computer system. Think of it as the bridge
between you and the computer’s hardware. It’s the first program that loads when you
turn on your computer, and it provides a platform for all other programs to run.
- Key Functions of an Operating System - 1) Resource Management: The OS efficiently
allocates and manages the computer's resources (CPU, memory, storage, and I/O devices).
2) User Interface: Provides a way for you to interact with the computer (e.g., through a
graphical desktop or a command-line interface).
3) Application Execution: Loads and runs applications, providing them with the
necessary resources.
4) Data Management: Organizes and manages files and directories.
- Operating System Services
1) Program Execution: i) Loading programs into memory.
ii) Starting and running programs.
iii) Managing the execution of multiple programs concurrently.
2) Input/Output Operations: i) Handling input from devices like keyboards and mice.
ii) Managing output to devices like monitors and printers.
3) File System Manipulation: i) Creating, deleting, and organizing files and directories.
ii) Managing file access permissions.
4) Communication: i) Enabling communication between different programs.
ii) Facilitating network connections.
5) Resource Allocation: i) Distributing resources like CPU time, memory, and I/O
devices among different programs.
6) Accounting: i) Tracking resource usage for billing or performance analysis.
7) Security and Protection: i) Protecting the system from unauthorized access.
ii) Ensuring the integrity of data.
- Examples of Operating Systems : 1) Microsoft Windows: The most widely used desktop
operating system. 2) macOS: Apple’s operating system for Macintosh computers.
3)Linux: An open-source operating system popular for servers and embedded systems.
Q2) What is a thread? What are the benefits of multi-threaded programming? Explain
many to many threads model.
Ans- A thread is the smallest unit of execution within a process. Think of a process as a
running program. A process can have one or more threads. Each thread within a
process runs independently but shares the same memory space and resources of that
process. It’s like having multiple workers within the same office (the process), all
sharing the same resources but working on different tasks. ○ Benefits of Multithreaded
Programming :
1)Improved Responsiveness: If one thread is blocked (e.g., waiting for I/O), other
threads can continue to execute, keeping the application responsive. Imagine a word
processor: one thread could be handling user input while another is spell-checking in
the background.
2)Enhanced Performance (Parallelism): On multi-core processors, multiple threads can
run truly concurrently, significantly speeding up execution for CPU-bound tasks. This
is true parallelism.
3)Resource Sharing: Threads within a process share the same memory and resources,
making it easier to share data between different parts of the program.
4)Simplified Program Structure: For some problems, multithreading can lead to a
cleaner and more logical program design. Complex tasks can be broken down into
smaller, concurrent threads.
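The resource-sharing benefit above can be seen in a minimal sketch (illustrative Python using the standard threading module; the worker function and the squaring task are made up for the example):

```python
import threading

results = []                      # shared: every thread sees the same list
lock = threading.Lock()           # protects the shared list from concurrent writes

def worker(n):
    # each thread works on its own task but writes into shared memory directly,
    # with no IPC mechanism needed
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))            # [0, 1, 4, 9]
```

Because all four threads live inside one process, they exchange data simply by touching the same list; two separate processes would need pipes, queues, or shared memory to do the same.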
○ Many-to-Many Thread Model : The many-to-many model is a way for an operating
system to manage threads. It maps many user-level threads to a smaller or equal
number of kernel-level threads. Let’s break that down:
1) User-level threads: These are threads managed by the application or a threading
library. The OS kernel isn’t directly aware of them.
2) Kernel-level threads: These are threads managed directly by the operating system
kernel. The kernel schedules these threads onto the CPU.

○ How Many-to-Many Works: In the many-to-many model:

1) Multiple user-level threads can be created by the application.


2) These user-level threads are mapped to a smaller or equal number of kernel-level
threads. This mapping can be dynamic.
3) The operating system schedules the kernel-level threads onto the available
processors. ○ Example: Imagine a web server. It might create many user-level threads
to handle incoming client requests. The many-to-many model would map these user-
level threads to a smaller number of kernel-level threads, allowing the server to
efficiently utilize the available CPU cores and handle many requests concurrently.
Q3) What is meant by process? Explain mechanism for process creation and process
termination by OS.
Ans- A process is a running instance of a program. It’s more than just the program code;
it includes all the resources needed to execute that program, such as:
* Program code (text section): The actual instructions of the program.
* Data section: Global variables, static variables, and constants used by the program.
* Stack: Memory used for function calls, local variables, and return addresses.
* Heap: Dynamically allocated memory used by the program during execution.
* Registers: CPU registers that store temporary values and the current instruction being
executed.
* Process Control Block (PCB): A data structure maintained by the OS that stores all the
information about the process (e.g., process ID, memory usage, status, etc.).
○ Process Creation - Operating systems provide mechanisms for creating new
processes. Here’s a breakdown of the typical steps involved:
1) Process Initialization: The OS allocates a Process Control Block (PCB) for the new
process. This PCB will store all the essential information about the process.
2) Memory Allocation: The OS allocates the necessary memory space for the process
(code, data, stack, heap).
3) Loading Program Code: The OS loads the program’s executable code into the
allocated memory space.
4) Setting up the Environment: The OS sets up the process’s execution environment,
including initializing registers, setting up file descriptors (for input/output), and other
necessary settings.
5) Assigning a Process ID (PID): The OS assigns a unique identifier (PID) to the new
process, which is used to track and manage the process.
6) Entering the Ready Queue: The newly created process is placed in the “ready queue,”
a list of processes waiting to be executed by the CPU.
○ Process Termination - Processes can terminate in several ways:
1) Normal Completion: The process executes all its instructions and exits gracefully.
This is the most common way a process terminates.
2) Error Condition: The process encounters an error (e.g., division by zero, file not found)
and terminates.
3) Fatal Error: The process encounters a severe error that prevents it from continuing
(e.g., memory corruption).
4) Killed by Another Process: One process might terminate another process (if it has the
necessary privileges).
5) User Intervention: The user might manually terminate a process (e.g., by closing the
application or using a task manager).
○ OS Mechanisms for Process Termination - When a process terminates, the OS
performs the following actions:
1) Releasing Resources: The OS reclaims all the resources used by the process,
including memory, file descriptors, and other allocated resources.
2) Removing the PCB: The OS removes the process’s PCB from the system.
3) Signaling Other Processes (if necessary): The OS might notify other processes that
the terminated process has finished.
4) Returning Exit Status: The OS might return an exit status code indicating whether the
process terminated successfully or due to an error.
Q4) Explain the need of inter-process communication and explain various tools for
inter-process communication.
Ans- In modern operating systems, multiple processes often run concurrently. These
processes might need to:
1)Share Data: Processes might need to exchange information or data with each other.
For example, a word processor might communicate with a spell-checker process.
2)Coordinate Tasks: Processes might need to synchronize their actions or collaborate
on a task. For example, a video editing application might have separate processes for
video encoding and audio processing, which need to work together.
3)Communicate with the User: Processes might need to interact with the user or other
external entities.
4)Improve Efficiency: Breaking down tasks into separate processes can improve
efficiency and responsiveness, especially on multi-core systems.
○ Tools for Inter-Process Communication - Operating systems provide various
mechanisms for IPC. Here are some of the most common:
1) Pipes: i) How it works: A pipe is a unidirectional communication channel between two
related processes (typically a parent and a child process). Data written to one end of
the pipe can be read from the other end.
ii) Use cases: Simple data transfer between related processes, such as piping the
output of one command to the input of another in a shell.
2) Message Queues: i) How it works: Processes can send messages to a queue, which
can be read by other processes. Message queues provide a more structured way to
exchange data than pipes.
ii) Use cases: Asynchronous communication between processes, where the sender and
receiver don’t need to be active at the same time.
3) Shared Memory: i) How it works: Processes can share a region of memory, allowing
them to directly access and modify the same data. This is a very efficient way to
exchange large amounts of data.
ii) Use cases: High-performance applications where processes need to share large data
structures, such as in scientific computing or graphics rendering.
4) Sockets: i) How it works: Sockets are used for communication between processes
over a network, either on the same machine or different machines. They provide a way
to establish connections and exchange data.
ii) Use cases: Client-server applications, distributed systems, and any application that
needs to communicate over a network.
5) Semaphores: i) How it works: Semaphores are used for synchronization between
processes. They can be used to control access to shared resources and prevent race
conditions.
ii) Use cases: Coordinating access to shared resources, such as a printer or a database
connection.
6) Signals: i) How it works: Signals are a way for one process to notify another process
of an event. They are typically used for asynchronous communication.
ii) Use cases: Handling interrupts, notifying a process of an error, or requesting a
process to terminate.
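As a sketch of the pipe tool from the list above (illustrative Python on a Unix system, where os.pipe and os.fork expose the underlying system calls; the message text is made up):

```python
import os

r, w = os.pipe()                  # unidirectional channel: bytes written to w appear at r
pid = os.fork()                   # Unix-only: create a related (child) process
if pid == 0:                      # child process: the writer
    os.close(r)                   # close the unused read end
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                             # parent process: the reader
    os.close(w)                   # close the unused write end
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)            # reap the child
    print(data.decode())          # hello from child
```

This is exactly the parent-child pattern described above: the pipe is created before the fork, so both related processes hold its ends, and data flows one way from child to parent.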
○ Choosing the Right IPC Mechanism : The best IPC mechanism depends on the
specific needs of the application. Factors to consider include:
1) Amount of data to be exchanged: Shared memory is efficient for large amounts of
data, while pipes or message queues might be suitable for smaller amounts.
2) Communication pattern: Pipes are unidirectional, while message queues and
sockets allow bidirectional communication.
3) Synchronization requirements: Semaphores are needed for coordinating access to
shared resources.
4) Whether communication is local or over a network: Sockets are used for network
communication.
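The message-queue pattern (asynchronous, structured messages between processes) can also be sketched in Python with the multiprocessing module (illustrative only; the producer function and message contents are invented, and the example assumes a Unix fork-based start method):

```python
import multiprocessing as mp

def producer(q):
    # sender posts a structured message and can exit immediately;
    # the receiver does not need to be reading at this moment
    q.put({"task": "encode", "frame": 42})

q = mp.Queue()                    # queue shared between separate processes
p = mp.Process(target=producer, args=(q,))
p.start()
msg = q.get()                     # receiver blocks until a message arrives
p.join()
print(msg["task"])                # encode
```

Unlike the pipe sketch, the data here is a structured object rather than a raw byte stream, which is the main convenience message queues add.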
Q5) What is system call? List and explain the process control system call.
Ans- A system call is a request from a program to the operating system’s kernel to
perform a specific task. Think of it as a program asking the OS to do something on its
behalf. These tasks can include things like:
* Creating or deleting files
* Allocating memory
* Starting a new process
* Sending data over a network
Process Control System Calls : Process control system calls are specifically related to
managing processes. Here are some of the key ones:
1) fork(): Creates a new process (a child process) that is a copy of the calling process (the
parent process). The fork() call duplicates the parent process's memory space, code,
and resources. Both the parent and child processes continue execution from the point
of the fork() call. Returns 0 in the child process and the child's process ID (PID) in the
parent process.
2) exec(): Replaces the current process image with a new program. The exec() call loads
and executes a new program, effectively replacing the code and data of the current
process with the new program. Only returns if there is an error; otherwise, the new
program starts executing.
3) wait(): Suspends the execution of the calling process until one of its child processes
terminates. The wait() call allows a parent process to wait for a child process to finish
and retrieve the child's exit status. Returns the PID of the terminated child process.
4) exit(): Terminates the calling process. The exit() call releases the process's resources,
removes its entry from the process table, and notifies the parent process (if any). Takes
an exit status code, which can be used to communicate information about the
process's termination.
5) getpid(): Returns the process ID (PID) of the calling process. This call retrieves the
unique identifier assigned to the process by the OS.
6) getppid(): Returns the process ID (PID) of the parent process of the calling
process. This call retrieves the PID of the process that created the current process.
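These calls fit together as shown in this minimal sketch (illustrative Python on Unix, where the os module exposes fork, waitpid, getpid, and getppid directly; the exit status 7 is an arbitrary example value):

```python
import os

pid = os.fork()                          # fork(): returns 0 in child, child's PID in parent
if pid == 0:
    # child: a copy of the parent, continuing from the fork() call
    print("child", os.getpid(), "parent", os.getppid())
    os._exit(7)                          # exit() with status code 7
else:
    done, status = os.waitpid(pid, 0)    # wait(): block until the child terminates
    # WEXITSTATUS extracts the child's exit code from the raw status word
    print("child", done, "exit code", os.WEXITSTATUS(status))
```

Note how a single fork() call "returns twice": once in each process, distinguished only by the return value, which is exactly the behavior described above.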
Q6) Differentiate between thread and process. Give two advantages of thread over
multiple processes.
Ans- Process vs. Thread:
1) Definition: A process is a running instance of a program, with its own memory space
and resources. A thread is a lightweight unit of execution within a process, sharing the
process's memory space and resources.
2) Memory: Each process has its own independent memory space; threads within a
process share the same memory space.
3) Resources: Processes have their own resources (files, I/O devices, etc.); threads
share the resources of their parent process.
4) Creation: Creating a process is more time-consuming and resource-intensive;
creating a thread is faster and less resource-intensive.
5) Communication: Communication between processes requires inter-process
communication (IPC) mechanisms; threads within a process can communicate directly
through shared memory.
6) Context Switching: Switching between processes is slower due to the need to save
and restore entire memory spaces; switching between threads is faster because only
the thread's registers and stack need to be saved and restored.

Here are two key advantages of using threads over multiple processes:
1. Faster Context Switching: Switching between threads within the same process
involves less overhead than switching between processes. This is because threads
share the same memory space, so the operating system doesn’t need to reload memory
and other resources.
2. Easier Communication and Resource Sharing: Threads within a process share the
same memory space, making it easier for them to communicate and share data. This
simplifies the design and implementation of concurrent applications.
Q7) Explain the role of operating system in computer system and explain the system
components of OS.
Ans- The operating system (OS) is the most fundamental software on a computer. It acts
as an intermediary between the user and the computer hardware, managing resources
and providing services that allow users to interact with the computer and run
applications. Role of the Operating System:
1)Resource Management: The OS manages all the computer’s resources, including the
CPU, memory, storage devices, and peripherals. It allocates these resources to
different programs and users, ensuring that they are used efficiently and without
conflicts.
2) Abstraction: The OS provides an abstraction layer that hides the complexities of the
hardware from the user. Users interact with the OS through a simpler interface, such as
a graphical user interface (GUI) or a command-line interface (CLI), without needing to
know the details of how the hardware works.
3) Process Management: The OS manages the execution of programs, called processes.
It creates and terminates processes, schedules their execution, and provides
mechanisms for them to communicate with each other.
4) Memory Management: The OS manages the computer’s memory, allocating and
deallocating memory to processes as needed. It also provides mechanisms for virtual
memory, which allows programs to use more memory than is physically available.
5) Input/Output Management: The OS manages communication between the computer
and its peripherals, such as keyboards, mice, printers, and network devices.

○ System Components of an OS: 1) Kernel: The kernel is the core of the OS. It is
responsible for managing the CPU, memory, and other essential resources. It also
provides services to other parts of the OS and to applications.
2) System Calls: System calls are the interface between applications and the kernel.
They allow applications to request services from the kernel, such as accessing files or
allocating memory.
3)Shell: The shell is a command-line interpreter that allows users to interact with the
OS by typing commands.
4)GUI: A graphical user interface (GUI) provides a more user-friendly way to interact with
the OS, using windows, icons, and menus.
5)File System: The file system is responsible for organizing and managing files on
storage devices.
Q8) Compare and contrast the various types of Operating System.
Ans- 1) Batch Operating System: Jobs with similar needs are grouped into batches and
processed together. * Pros: Efficient for large tasks, reduces operator intervention.
* Cons: Not interactive, difficult to debug, long turnaround time. * Example: Payroll
systems in the past.
2) Time-Sharing Operating System: CPU time is shared among multiple users, providing
an interactive experience. * Pros: Fast response times, reduces software duplication.
* Cons: Reliability issues, data security concerns. * Example: Early mainframe systems.
3) Distributed Operating System: Multiple independent computers work together,
sharing resources. * Pros: Fault tolerance, resource sharing, high performance. * Cons:
Complexity in management, security challenges. * Example: Cluster computing, cloud
environments.
4) Network Operating System: Runs on a server, provides network services to clients.
* Pros: Centralized management, security, file and printer sharing. * Cons: Server
dependence, high setup costs. * Example: Windows Server, Linux servers.
5) Real-Time Operating System (RTOS): Designed for time-critical applications,
guarantees response times. * Pros: Predictable behavior, suitable for embedded
systems. * Cons: Limited functionality, complex algorithms. * Example: Industrial
control systems, medical devices.
6) Mobile Operating Systems: Designed for mobile devices with touch interfaces. * Pros:
User-friendly, optimized for mobility and apps. * Cons: Limited resources, security
concerns. * Example: Android, iOS.
7) Embedded Operating Systems: Specialized OS for specific devices with limited
functionality. * Pros: Resource-efficient, tailored to hardware. * Cons: Limited features,
difficult to update. * Example: Smartwatches, routers, appliances.
8) Open Source Operating Systems: Source code is freely available, can be modified
and distributed. * Pros: Cost-effective, customizable, large community support. * Cons:
Potential compatibility issues, security risks if not managed well. * Example: Linux,
Android.
9) Graphical User Interface (GUI) Operating Systems: Uses visual elements like
windows, icons, and menus for user interaction. * Pros: User-friendly, intuitive, easy to
learn. * Cons: Resource-intensive, can be slower than command-line. * Example:
Windows, macOS, most modern OSes.
Q9) Define the terms.1) Degree of multi-programming. 2)Context switching 3)Process
4)Dispatcher. 5)CPU-I/O Burst Cycle. 6)Spooling
Ans- 1) Degree of Multiprogramming: This refers to the number of processes that are
present in the main memory (RAM) at a given time. A higher degree of
multiprogramming means more processes are loaded and competing for the CPU. The
goal is to keep the CPU busy by switching between these processes.
2) Context Switching: This is the process of saving the state of a currently running
process (its registers, program counter, etc.) and loading the saved state of another
process to allow it to run. The OS does this to switch between processes, giving the
illusion of them running concurrently. Context switching is an overhead, as the CPU
isn’t doing “real work” during the switch.
3) Process: A process is a program in execution. It’s more than just the program code;
it includes the current activity, the program counter (where it is in the code), registers
(holding data), stack (for function calls), heap (for dynamic memory allocation), and
other resources. Think of a process as an instance of a program running.
4) Dispatcher: The dispatcher is a module within the operating system that selects
which process should be run by the CPU next. It’s invoked after a context switch. The
dispatcher’s job is to take the process chosen by the scheduler and actually get it
running on the CPU.
5) CPU-I/O Burst Cycle: A process’s execution typically alternates between CPU bursts
(periods of CPU activity) and I/O bursts (periods waiting for I/O operations like reading
from a disk or network). A CPU-I/O burst cycle refers to this alternating pattern.
Processes rarely use the CPU continuously; they often need to wait for I/O, allowing
other processes to use the CPU in the meantime.
6) Spooling: Spooling (Simultaneous Peripheral Operations On-Line) is a technique for
managing I/O operations, particularly for devices like printers. Instead of sending
output directly to the printer (which might be slow), the output is first stored in a buffer
(often on disk). A separate process then handles sending the data from the buffer to the
printer. This allows the CPU to continue working on other tasks without waiting for the
slow I/O device. Spooling decouples the application from the I/O device, improving
system performance.
Q10) What are the differences between user level threads and kernel level threads?
Under what circumstances is one better than the other?
Ans- 1) User-Level Threads (ULTs): i) Management: Managed entirely by a user-level library (a
set of functions within the application itself). The kernel is unaware of these threads.
ii) Creation/Switching: Very fast, as no kernel intervention is needed. The library
handles the switching between ULTs.
iii) Blocking: If one ULT blocks (e.g., waiting for I/O), the entire process blocks, including
all other ULTs within that process.
iv) CPU Scheduling: The kernel schedules the process as a whole, not the individual
ULTs.
v) Portability: More portable, as the thread library can be implemented on different
operating systems. * Example: POSIX threads (pthreads) in some implementations.

2) Kernel-Level Threads (KLTs): i) Management: Managed directly by the operating
system kernel. The kernel is aware of each KLT.
ii) Creation/Switching: Slower than ULTs, as it requires kernel intervention for context
switching.
iii) Blocking: If one KLT blocks, other KLTs within the same process can continue to run.
The kernel can schedule other KLTs.
iv) CPU Scheduling: The kernel schedules individual KLTs, allowing for true parallelism
on multi-core systems.
v) Portability: Less portable, as KLTs are OS-specific.
* Example: POSIX threads (pthreads) in Linux, Windows threads.

○ When is one better than the other?
1) User-Level Threads: i) Better when: 1. Speed of thread creation and switching is critical.
2. The application is primarily CPU-bound and doesn't involve frequent blocking
operations. 3. Portability across different operating systems is a major concern.
ii) Considerations: 1. Not suitable for applications that require true parallelism on multi-
core systems. 2. Vulnerable to blocking issues if one thread makes a blocking call.

2) Kernel-Level Threads: i) Better when: 1. True parallelism on multi-core systems is
needed. 2. The application involves frequent blocking operations (e.g., I/O).
ii) Considerations: 1. Slower thread creation and switching due to kernel involvement.
2. Less portable due to OS-specific implementations.
Q2.1) What is semaphore? What operations are performed on semaphore?
Ans- A semaphore is a synchronization object used in operating systems to control
access to shared resources and prevent race conditions. Think of it like a traffic light or
a gatekeeper for a limited number of resources. ○ Types of Semaphores:
1) Binary Semaphore (Mutex): A binary semaphore can have only two values: 0 or 1. It’s
often used to implement mutual exclusion, protecting a critical section of code so that
only one thread or process can access it at a time. A value of 1 means the resource is
available, and 0 means it’s in use.
2) Counting Semaphore: A counting semaphore can have any non-negative integer
value. It’s used to control access to a limited number of resources. The value of the
semaphore represents the number of available resources.
○ Operations on Semaphores:
1) wait (or P): i) Decrements the semaphore’s value.
ii) If the semaphore’s value becomes negative, the process or thread executing the wait
operation is blocked (put into a waiting queue). This indicates that the resource is not
currently available.
iii) If the semaphore’s value is non-negative after the decrement, the process or thread
continues execution.
2) signal (or V): i) Increments the semaphore’s value.
ii) If there are any processes or threads blocked on the semaphore (waiting in the
queue), one of them is unblocked (moved to the ready queue).
iii) If no processes are blocked, the semaphore’s value simply increases.
○ How it works in practice: Imagine you have a limited number of printers (say, 3) in an
office. You can use a counting semaphore initialized to 3 to manage access to these
printers:
* A process that wants to use a printer performs a wait operation.
* If a printer is available (semaphore value > 0), the semaphore is decremented, and the
process gets access to the printer.
* If all printers are in use (semaphore value is 0), the process is blocked and added to
the semaphore’s waiting queue.
* When a process finishes using a printer, it performs a signal operation.
* The semaphore value is incremented. If there are processes waiting, one of them is
unblocked and gets access to the printer.
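The printer scenario above can be sketched with a counting semaphore (illustrative Python using threading.Semaphore; the job count, sleep duration, and the peak-tracking bookkeeping are invented for the demonstration):

```python
import threading
import time

printers = threading.Semaphore(3)     # counting semaphore: 3 printers available
lock = threading.Lock()               # protects the bookkeeping counters
in_use = 0
peak = 0

def print_job(i):
    global in_use, peak
    printers.acquire()                # wait/P: blocks when all 3 printers are busy
    with lock:
        in_use += 1
        peak = max(peak, in_use)      # record the highest concurrency observed
    time.sleep(0.01)                  # simulate printing
    with lock:
        in_use -= 1
    printers.release()                # signal/V: hand the printer back

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(8)]
for t in jobs:
    t.start()
for t in jobs:
    t.join()
print("peak printers in use:", peak)  # never exceeds 3
```

Even though 8 jobs compete, the semaphore guarantees at most 3 are ever "printing" at once; the rest block in acquire() until a release() frees a slot.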
Q2.2) Explain short term scheduler, long term scheduler, and medium term scheduler
in brief

Ans- 1) Long-Term Scheduler (Job Scheduler): * What it does: Decides which
processes should be admitted to the ready queue (and thus, to memory) from the pool
of waiting processes (often on disk). It controls the degree of multiprogramming (how
many processes are in memory at once).
* When it runs: Less frequently than the other schedulers. It might run when a process
finishes or when the system needs to adjust the degree of multiprogramming.
* Goal: To select a good mix of processes – some CPU-bound (doing lots of calculations)
and some I/O-bound (spending time waiting for input/output) – to keep the CPU and I/O
devices busy.
* Think of it as: The gatekeeper deciding which jobs get to enter the system for
processing.

2) Medium-Term Scheduler: * What it does: Handles swapping processes in and out
of memory. This might be done to reduce the degree of multiprogramming if memory is
overcommitted or to make room for higher-priority processes. It also deals with
processes that have been blocked for a long time (e.g., waiting for I/O) – it might swap
them out and then bring them back in later.
* When it runs: Less frequently than the short-term scheduler, but more often than the
long-term scheduler.
* Goal: To improve overall system performance by managing memory and balancing the
workload.
* Think of it as: The traffic manager controlling the flow of processes in and out of
memory.
3) Short-Term Scheduler (CPU Scheduler):
* What it does: Selects which of the ready processes should be run by the CPU next. It’s
the most frequently invoked scheduler.
* When it runs: Very frequently, whenever a process needs to give up the CPU (e.g., it
finishes, its time slice expires, it blocks for I/O) or when a new process becomes ready.
* Goal: To maximize CPU utilization and throughput by quickly switching between ready
processes.
* Think of it as: The conductor of the CPU orchestra, deciding which process gets the
CPU spotlight at each moment.
Q2.3) What are CPU scheduler and scheduling criteria?
Ans- 1) CPU Scheduler : The CPU scheduler is a crucial part of the operating system that
decides which of the ready-to-run processes should be given access to the CPU. It’s like
a traffic controller for the CPU, ensuring that it’s used efficiently and fairly.
○ How it works: 1) The scheduler maintains a queue of processes that are ready to run
(the “ready queue”). 2) When the CPU becomes available (e.g., a process finishes, its
time slice expires, it blocks for I/O), the scheduler selects a process from the ready
queue. 3) The scheduler then performs a context switch, saving the state of the previous
process and loading the state of the chosen process, so it can start or resume execution
on the CPU.
○ Types of CPU Schedulers: There are many different scheduling algorithms, each with
its own approach to choosing which process gets the CPU. Some common ones
include:
1) First-Come, First-Served (FCFS): Processes are executed in the order they arrive.
2) Shortest Job First (SJF): The process with the shortest estimated execution time is run
next.
3) Priority Scheduling: Processes are assigned priorities, and the highest-priority
process is run.
4) Round Robin: Each process gets a small time slice of CPU time, and processes are
cycled through.
5) Multilevel Queue Scheduling: Processes are divided into different queues with
different priorities and scheduling algorithms.
2. Scheduling Criteria: These are the factors the CPU scheduler considers when
deciding which process to run. The goal is to optimize CPU utilization and provide a
good user experience.
○ Common Scheduling Criteria:
1) CPU Utilization: The percentage of time the CPU is busy executing processes. The
goal is to keep the CPU as busy as possible.
2) Throughput: The number of processes completed per unit of time. A higher
throughput means the system is getting more work done.
3) Turnaround Time: The total time it takes for a process to complete, from its arrival to
its completion. This includes waiting time, execution time, and I/O time.
4) Waiting Time: The amount of time a process spends waiting in the ready queue for the
CPU.
5) Response Time: The time it takes for a process to produce its first response
(important for interactive systems).
6) Fairness: Ensuring that all processes get a fair share of CPU time, preventing
starvation (where a process never gets to run).
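Once completion times are known, the per-process criteria above are simple arithmetic: turnaround = completion − arrival, and waiting = turnaround − burst. A small helper (illustrative, not from the original text) makes the relationship explicit:

```python
def metrics(arrival, burst, completion):
    """Turnaround and waiting time for one process, from its completion time."""
    turnaround = completion - arrival   # total time from arrival to completion
    waiting = turnaround - burst        # time spent in the ready queue
    return turnaround, waiting

# e.g. a process arriving at t=2 with burst 1 that completes at t=11:
print(metrics(2, 1, 11))   # (9, 8)
```

This waiting-time formula (TAT − BT) is the one used in the worked examples below; note that it counts all time in the ready queue, not just the delay before the first run.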
Q2.4) What is meant by ‘Race condition’? Why race condition occurs? Give an algorithm
to avoid race condition between two processes.
Ans- Race Condition - A race condition occurs when the behavior of a program depends
on the unpredictable order in which different parts of the program execute. It usually
happens when multiple threads or processes access and manipulate shared data
concurrently. The final outcome of the program becomes unpredictable because it
depends on which thread or process “wins the race” to access and modify the shared
data first. ○ Race conditions arise due to these factors:
1) Shared Resources: Multiple threads or processes are trying to access and modify the
same data or resource (e.g., a variable, a file, a database record) concurrently.
2) Unpredictable Execution Order: The operating system might switch between threads
or processes in a way that is not deterministic or predictable. This means the exact
order in which they access the shared resource can vary each time the program runs.
3) Lack of Synchronization: If there are no mechanisms in place to control the access
to the shared resource (no synchronization), then the threads or processes can
interfere with each other, leading to inconsistent or incorrect results.
Example : Imagine two threads, A and B, both trying to increment a shared counter
variable count:
* Thread A: Reads count, adds 1, writes the new value back to count.
* Thread B: Reads count, adds 1, writes the new value back to count.
If these operations happen concurrently without any synchronization, it’s possible for
both threads to read the same value of count, increment it, and write it back. This
means one of the increments gets lost, and the final value of count is incorrect.
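The lost update can be shown deterministically by writing the two threads' read-modify-write steps out in the problematic interleaving (plain single-threaded Python, so the "schedule" is fixed on purpose):

```python
# Both "threads" read the same value of count before either writes back,
# so one increment is lost.
count = 0

a_read = count      # Thread A reads 0
b_read = count      # Thread B reads 0 (before A writes back)
count = a_read + 1  # Thread A writes 1
count = b_read + 1  # Thread B overwrites with 1 -- A's increment is lost

print(count)        # 1, not the expected 2
```

With real threads the same interleaving can occur nondeterministically, which is what makes race conditions so hard to reproduce and debug.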
○ Algorithm (mutual exclusion with a binary semaphore):
// Shared: Semaphore 'mutex' initialized to 1
Process A:
P(mutex) // Acquire the lock
// Critical Section - Access shared resource
V(mutex) // Release the lock
Process B:
P(mutex) // Acquire the lock
// Critical Section - Access shared resource
V(mutex) // Release the lock
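The P/V sketch above maps directly onto a counting semaphore. A runnable version (using Python's `threading` module purely as an illustration) protects the shared counter from the earlier race:

```python
import threading

count = 0
mutex = threading.Semaphore(1)   # binary semaphore initialized to 1

def increment(n):
    global count
    for _ in range(n):
        mutex.acquire()          # P(mutex): enter the critical section
        count += 1               # critical section: access shared resource
        mutex.release()          # V(mutex): leave the critical section

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)                     # 20000: no increments are lost
```

Because the semaphore admits only one thread at a time, the read-modify-write on `count` is atomic with respect to the other thread, so the final value is always correct.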
1. FCFS (First-Come, First-Served)
Gantt Chart: | P1 | P2 | P3 | P4 |
            0    1    10   11   19
Calculations:
Process  Arrival (AT)  Burst (BT)  Completion (CT)  Turnaround (CT – AT)  Waiting (Start – AT)
P1       0             1           1                1 – 0 = 1             0 – 0 = 0
P2       1             9           10               10 – 1 = 9            1 – 1 = 0
P3       2             1           11               11 – 2 = 9            10 – 2 = 8
P4       3             8           19               19 – 3 = 16           11 – 3 = 8
Average Turnaround Time: (1 + 9 + 9 + 16) / 4 = 35 / 4 = 8.75
Average Waiting Time: (0 + 0 + 8 + 8) / 4 = 16 / 4 = 4
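The FCFS table can be reproduced mechanically (illustrative Python; each process is an (arrival, burst) pair, and the dict preserves arrival order):

```python
# FCFS: each process runs to completion in arrival order.
procs = {"P1": (0, 1), "P2": (1, 9), "P3": (2, 1), "P4": (3, 8)}

t, completion = 0, {}
for name, (arrival, burst) in procs.items():
    t = max(t, arrival) + burst            # start no earlier than arrival
    completion[name] = t

turnaround = {p: completion[p] - a for p, (a, b) in procs.items()}
waiting = {p: turnaround[p] - b for p, (a, b) in procs.items()}
print(sum(turnaround.values()) / 4, sum(waiting.values()) / 4)   # 8.75 4.0
```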
2. Preemptive SJF (Shortest Job First)
Gantt Chart: | P1 | P2 | P3 | P2 | P4 |
            0    1    2    3    11   19
(P2 starts at 1 but is preempted at 2 when the shorter job P3 arrives; it resumes at 3.)
Calculations:
Process  Arrival (AT)  Burst (BT)  Completion (CT)  Turnaround (CT – AT)  Waiting (TAT – BT)
P1       0             1           1                1 – 0 = 1             1 – 1 = 0
P2       1             9           11               11 – 1 = 10           10 – 9 = 1
P3       2             1           3                3 – 2 = 1             1 – 1 = 0
P4       3             8           19               19 – 3 = 16           16 – 8 = 8
Average Turnaround Time: (1 + 10 + 1 + 16) / 4 = 28 / 4 = 7
Average Waiting Time: (0 + 1 + 0 + 8) / 4 = 9 / 4 = 2.25
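As a cross-check, a one-tick shortest-remaining-time simulation (illustrative code, not part of the original answer) computes the same schedule; ties go to the earlier-arriving process via dict order:

```python
# Shortest-remaining-time-first, simulated one time unit at a time.
procs = {"P1": (0, 1), "P2": (1, 9), "P3": (2, 1), "P4": (3, 8)}
remaining = {p: b for p, (a, b) in procs.items()}
completion, t = {}, 0

while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    p = min(ready, key=lambda x: remaining[x])   # least remaining burst runs
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        completion[p] = t
        del remaining[p]

turnaround = {p: completion[p] - a for p, (a, b) in procs.items()}
waiting = {p: turnaround[p] - b for p, (a, b) in procs.items()}
print(sum(turnaround.values()) / 4, sum(waiting.values()) / 4)   # 7.0 2.25
```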
3) RR (Round Robin) with Quantum = 1
Gantt Chart: | P1 | P2 | P3 | P2 | P4 | P2 | P4 | P2 | P4 | P2 | P4 | P2 | P4 | P2 | P4 | P2 | P4 | P2 | P4 |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
(P3, arriving at 2, enters the ready queue just before the preempted P2 is requeued;
from t = 3 onward, P2 and P4 alternate.)
Calculations:
Process  Arrival (AT)  Burst (BT)  Completion (CT)  Turnaround (CT – AT)  Waiting (TAT – BT)
P1       0             1           1                1 – 0 = 1             1 – 1 = 0
P2       1             9           18               18 – 1 = 17           17 – 9 = 8
P3       2             1           3                3 – 2 = 1             1 – 1 = 0
P4       3             8           19               19 – 3 = 16           16 – 8 = 8
Average Turnaround Time: (1 + 17 + 1 + 16) / 4 = 35 / 4 = 8.75
Average Waiting Time: (0 + 8 + 0 + 8) / 4 = 16 / 4 = 4
(Note: in a preemptive schedule, waiting time is TAT – BT, the total time spent in the
ready queue, not just the delay before the first run.)
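A small Round Robin simulator (illustrative code; it assumes a newly arriving process enters the ready queue just before a process whose slice expired, a common textbook convention) recomputes these figures:

```python
from collections import deque

procs = [("P1", 0, 1), ("P2", 1, 9), ("P3", 2, 1), ("P4", 3, 8)]
remaining = {p: b for p, a, b in procs}
arrivals = deque(procs)                            # sorted by arrival time
ready, completion, t = deque(), {}, 0

while arrivals or ready:
    while arrivals and arrivals[0][1] <= t:
        ready.append(arrivals.popleft()[0])        # admit arrivals up to time t
    if not ready:
        t += 1
        continue
    p = ready.popleft()
    remaining[p] -= 1                              # run one quantum
    t += 1
    while arrivals and arrivals[0][1] <= t:        # arrivals during this slice
        ready.append(arrivals.popleft()[0])
    if remaining[p] > 0:
        ready.append(p)                            # requeue after new arrivals
    else:
        completion[p] = t

turnaround = {p: completion[p] - a for p, a, b in procs}
waiting = {p: turnaround[p] - b for p, a, b in procs}
print(sum(turnaround.values()) / 4, sum(waiting.values()) / 4)   # 8.75 4.0
```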
○ Comparison and Best Scheme
Metric                    FCFS    Preemptive SJF    RR (quantum = 1)
Average Turnaround Time   8.75    7                 8.75
Average Waiting Time      4       2.25              4
In this specific scenario:
* Preemptive SJF has both the lowest average turnaround time and the lowest average
waiting time: it lets the short jobs (P1, P3) finish immediately and never delays the
long jobs more than necessary.
* FCFS and RR come out identical on average here, because the workload is dominated
by two long jobs (P2, P4); RR's constant switching between them delays both, cancelling
out its usual advantage.
Justification:
* For this process set, Preemptive SJF is the best scheme on both metrics.
* RR would still be preferred in an interactive system, since cycling the CPU every
time unit gives the best response time, even though its averages here match FCFS.
The "best" scheme ultimately depends on the specific goals and priorities of the
system, but for this workload Preemptive SJF is the clear winner.
Q2.5) Explain the hardware solution to inter-process synchronization problem.
Ans- The inter-process synchronization problem arises when multiple processes need
to access shared resources or data. Without proper synchronization mechanisms, race
conditions and data corruption can occur. Hardware solutions provide fundamental
building blocks for implementing synchronization primitives. Here are some common
hardware approaches:
* Atomic Instructions: These instructions perform operations on shared data in a
single, indivisible step. Examples include:
* Test-and-Set: Atomically reads a value from memory and sets it to a specific value.
Used for implementing locks.
* Compare-and-Swap: Atomically compares a value in memory with an expected
value, and if they match, replaces it with a new value. Used for implementing various
synchronization primitives.
* Memory Barriers: These instructions enforce ordering constraints on memory
operations. They ensure that writes to shared memory are visible to other processors in
a specific order. Memory barriers are crucial for implementing correct synchronization
in multi-processor systems.
* Cache Coherence Protocols: These protocols maintain consistency of shared data
across multiple processor caches. When a processor modifies data in its cache, the
protocol ensures that other processors see the updated value. Cache coherence is
essential for efficient sharing of data between processes.
* Hardware Locks: Some architectures provide dedicated hardware mechanisms for
implementing locks. These locks can be more efficient than software-based locks, as
they avoid the overhead of system calls.
These hardware solutions provide the foundation for building higher-level
synchronization primitives like semaphores, mutexes, and condition variables.
Operating systems and programming languages use these primitives to provide
synchronization mechanisms to applications.
It’s important to note that hardware solutions alone may not be sufficient for complex
synchronization scenarios. Software-based techniques are often used in conjunction
with hardware primitives to provide robust and efficient synchronization mechanisms.
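The test-and-set primitive described above is enough to build a spinlock. The sketch below is illustrative only: on real hardware `test_and_set` is a single atomic instruction, so here its atomicity is emulated with an internal `Lock` purely so the logic can run as ordinary Python.

```python
import threading

class SpinLock:
    """Spinlock built on a (simulated) atomic test-and-set."""

    def __init__(self):
        self._flag = False                 # False = free, True = held
        self._atomic = threading.Lock()    # stands in for hardware atomicity

    def _test_and_set(self):
        with self._atomic:                 # atomically read old value, set True
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self._test_and_set():        # spin while the lock is already held
            pass

    def release(self):
        self._flag = False

lock = SpinLock()
lock.acquire()               # succeeds immediately: the flag was False
held = lock._test_and_set()  # True: the lock is currently held
lock.release()
print(held, lock._flag)      # True False
```

Spinning wastes CPU while waiting, which is why spinlocks are used only for very short critical sections; longer waits use blocking primitives built on top of these hardware operations.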
Ans- a) Gantt Charts
1) FCFS (First-Come, First-Served)
| P1 | P2 | P3 | P4 | P5 |
0 10 11 13 14 19
2) SJF (Shortest Job First)
The process with the shortest burst time is executed next.
| P2 | P4 | P3 | P5 | P1|
0 1 2 4 9 19
3) Priority
The process with the highest priority (lowest number) is executed next.
| P2 | P5 | P1 | P3 | P4 |
0 1 6 16 18 19
4) Round Robin (Quantum = 1)
Each process gets a time slice of 1 unit.
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 | P1 | P1 | P1 | P1 |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
b) Average Waiting Time and Turnaround Time
1. FCFS
1) Waiting Time: i) P1: 0, ii) P2: 10, iii) P3: 11, iv) P4: 13, v) P5: 14
○ Average Waiting Time: (0 + 10 + 11 + 13 + 14) / 5 = 9.6
2) Turnaround Time: i) P1: 10, ii) P2: 11, iii) P3: 13, iv) P4: 14, v) P5: 19
○ Average Turnaround Time: (10 + 11 + 13 + 14 + 19) / 5 = 13.4
2. SJF
1) Waiting Time: i)P2: 0, ii) P4: 1, iii) P3: 2, iv) P5: 4, v) P1: 9
○ Average Waiting Time: (0 + 1 + 2 + 4 + 9) / 5 = 3.2
2) Turnaround Time: i) P2: 1, ii) P4: 2, iii) P3: 4, iv) P5: 9, v) P1: 19
○ Average Turnaround Time: (1 + 2 + 4 + 9 + 19) / 5 = 7
3. Priority
1) Waiting Time: i) P2: 0, ii) P5: 1, iii) P1: 6, iv) P3: 16, v) P4: 18
○ Average Waiting Time: (0 + 1 + 6 + 16 + 18) / 5 = 8.2
2) Turnaround Time: i) P2: 1, ii) P5: 6, iii) P1: 16, iv) P3: 18, v) P4: 19
○ Average Turnaround Time: (1 + 6 + 16 + 18 + 19) / 5 = 12
4. Round Robin
1) Waiting Time (TAT – BT): i) P1: 19 – 10 = 9, ii) P2: 2 – 1 = 1, iii) P3: 7 – 2 = 5,
iv) P4: 4 – 1 = 3, v) P5: 14 – 5 = 9
○ Average Waiting Time: (9 + 1 + 5 + 3 + 9) / 5 = 27 / 5 = 5.4
2) Turnaround Time: i) P1: 19, ii) P2: 2, iii) P3: 7, iv) P4: 4, v) P5: 14
○ Average Turnaround Time: (19 + 2 + 7 + 4 + 14) / 5 = 46 / 5 = 9.2
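Since all five processes arrive at t = 0, the Round Robin figures can be verified with a compact simulator (illustrative code; quantum = 1, expired slices requeue at the back):

```python
from collections import deque

bursts = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}
ready = deque(bursts.items())      # all processes ready at t = 0
completion, t = {}, 0

while ready:
    name, rem = ready.popleft()
    t += 1                         # run one quantum
    if rem > 1:
        ready.append((name, rem - 1))
    else:
        completion[name] = t

turnaround = {p: completion[p] for p in bursts}            # arrival is 0
waiting = {p: turnaround[p] - bursts[p] for p in bursts}
print(sum(turnaround.values()) / 5, sum(waiting.values()) / 5)   # 9.2 5.4
```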
Ans- 1) FCFS (First-Come, First-Served)
1)Gantt Chart:
| P1 | P2 | P3 | P4 | P5 |
0 6 7 9 10 16
2) Waiting Time: i) P1: 0, ii) P2: 6, iii) P3: 7, iv) P4: 9, v) P5: 10
○ Average Waiting Time: (0 + 6 + 7 + 9 + 10) / 5 = 6.4
3) Turnaround Time: i) P1: 6, ii) P2: 7, iii) P3: 9, iv) P4: 10, v) P5: 16
○ Average Turnaround Time: (6 + 7 + 9 + 10 + 16) / 5 = 9.6
2. SJF (Shortest Job First)
1) Gantt Chart: | P2 | P4 | P3 | P1 | P5 |
0 1 2 4 10 16
2) Waiting Time: i) P2: 0, ii) P4: 1, iii) P3: 2, iv) P1: 4, v) P5: 10
○ Average Waiting Time: (0 + 1 + 2 + 4 + 10) / 5 = 3.4
3) Turnaround Time: i) P2: 1, ii) P4: 2, iii) P3: 4, iv) P1: 10, v) P5: 16
○ Average Turnaround Time: (1 + 2 + 4 + 10 + 16) / 5 = 6.6
3) Priority
1) Gantt Chart:
| P2 | P1 | P5 | P3 | P4 |
0 1 7 13 15 16
2) Waiting Time: i) P2: 0, ii) P1: 1, iii) P5: 7, iv) P3: 13, v) P4: 15
○ Average Waiting Time: (0 + 1 + 7 + 13 + 15) / 5 = 7.2
3) Turnaround Time: i) P2: 1, ii) P1: 7, iii) P5: 13, iv) P3: 15, v) P4: 16
○ Average Turnaround Time: (1 + 7 + 13 + 15 + 16) / 5 = 10.4
4. Round Robin (Quantum = 1)
1) Gantt Chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 | P5 |
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
2) Waiting Time (TAT – BT): i) P1: 15 – 6 = 9, ii) P2: 2 – 1 = 1, iii) P3: 7 – 2 = 5,
iv) P4: 4 – 1 = 3, v) P5: 16 – 6 = 10
○ Average Waiting Time: (9 + 1 + 5 + 3 + 10) / 5 = 28 / 5 = 5.6
3) Turnaround Time: i) P1: 15, ii) P2: 2, iii) P3: 7, iv) P4: 4, v) P5: 16
○ Average Turnaround Time: (15 + 2 + 7 + 4 + 16) / 5 = 44 / 5 = 8.8
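These Round Robin numbers can be checked the same way (illustrative simulator; all arrivals at t = 0, quantum = 1):

```python
from collections import deque

bursts = {"P1": 6, "P2": 1, "P3": 2, "P4": 1, "P5": 6}
ready = deque(bursts.items())      # all processes ready at t = 0
completion, t = {}, 0

while ready:
    name, rem = ready.popleft()
    t += 1                         # run one quantum
    if rem > 1:
        ready.append((name, rem - 1))
    else:
        completion[name] = t

turnaround = {p: completion[p] for p in bursts}            # arrival is 0
waiting = {p: turnaround[p] - bursts[p] for p in bursts}
print(sum(turnaround.values()) / 5, sum(waiting.values()) / 5)   # 8.8 5.6
```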
Ans- 1) FCFS (First-Come, First-Served)
1)Gantt Chart:
| P0 | P1| P2 | P3 |
0 0 1 3 6
2) Waiting Time: i) P0: 0, ii) P1: 0, iii) P2: 1, iv) P3: 3
○ Average Waiting Time: (0 + 0 + 1 + 3) / 4 = 1
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 3, iv) P3: 6
○ Average Turnaround Time: (0 + 1 + 3 + 6) / 4 = 2.5
2. Preemptive SJF (Shortest Job First)
1) Gantt Chart:
| P0 | P1 | P2 | P3 |
0 0 1 3 6
(Note: here Preemptive SJF produces the same schedule as FCFS because all processes
arrive together and their burst times (0, 1, 2, 3) are already in increasing order, so
the shortest remaining job is always the one FCFS would run next anyway.)
2) Waiting Time:i) P0: 0, ii) P1: 0, iii) P2: 1, iv) P3: 3
○ Average Waiting Time: (0 + 0 + 1 + 3) / 4 = 1
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 3, iv) P3: 6
○ Average Turnaround Time: (0 + 1 + 3 + 6) / 4 = 2.5
3. Round Robin (Quantum = 1)
1) Gantt Chart:
| P0 | P1 | P2 | P3 | P2 | P3 | P3 |
0 0 1 2 3 4 5 6
2) Waiting Time (TAT – BT): i) P0: 0, ii) P1: 0, iii) P2: 4 – 2 = 2, iv) P3: 6 – 3 = 3
○ Average Waiting Time: (0 + 0 + 2 + 3) / 4 = 1.25
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 4, iv) P3: 6
○ Average Turnaround Time: (0 + 1 + 4 + 6) / 4 = 2.75
Ans- 1. FCFS (First-Come, First-Served)
1) Gantt Chart: | P0 | P1 | P2 | P3 | P4 |
0 0 1 4 13 25
2) Waiting Time: i) P0: 0, ii) P1: 0, iii) P2: 1, iv) P3: 4, v) P4: 13
○ Average Waiting Time: (0 + 0 + 1 + 4 + 13) / 5 = 3.6
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 4, iv) P3: 13, v) P4: 25
○ Average Turnaround Time: (0 + 1 + 4 + 13 + 25) / 5 = 8.6
2. SJF (Non-Preemptive)
1) Gantt Chart:
| P0 | P1 | P2 | P3 | P4 |
0 0 1 4 13 25
(Here non-preemptive SJF gives the same result as FCFS because the burst times
(0, 1, 3, 9, 12) are already in increasing arrival order, so the shortest available
job is always the next one in FCFS order.)
2) Waiting Time: i) P0: 0, ii) P1: 0, iii) P2: 1, iv) P3: 4, v) P4: 13
○ Average Waiting Time: (0 + 0 + 1 + 4 + 13) / 5 = 3.6
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 4, iv) P3: 13, v) P4: 25
○ Average Turnaround Time: (0 + 1 + 4 + 13 + 25) / 5 = 8.6
3. SJF (Preemptive)
1) Gantt Chart:
| P0 | P1 | P2 | P3 | P4 |
0 0 1 4 13 25
(Again, the schedule matches FCFS and non-preemptive SJF: because burst times
increase with arrival order, no arriving job ever has a shorter remaining time than
the one running, so preemption never triggers.)
2) Waiting Time: i) P0: 0, ii) P1: 0, iii) P2: 1, iv) P3: 4, v) P4: 13
○ Average Waiting Time: (0 + 0 + 1 + 4 + 13) / 5 = 3.6
3) Turnaround Time: i) P0: 0, ii) P1: 1, iii) P2: 4, iv) P3: 13, v) P4: 25
○ Average Turnaround Time: (0 + 1 + 4 + 13 + 25) / 5 = 8.6
4. Round Robin (Quantum = 1)
1) Gantt Chart:
| P0 | P1 | P2 | P3 | P4 | P2 | P3 | P4 | P2 | P3 | P4 | P3 | P4 | P3 | P4 | P3 | P4 | P3 | P4 | P3 | P4 | P3 | P4 | P4 | P4 | P4 | P4 |
0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
(P0's zero burst consumes no CPU time; after P2 finishes at t = 8, P3 and P4 alternate
until P3 finishes at t = 21, and P4 runs alone until t = 25.)
2) Waiting Time (TAT – BT):
i) P0: 0
ii) P1: 0
iii) P2: 8 – 3 = 5
iv) P3: 21 – 9 = 12
v) P4: 25 – 12 = 13
○ Average Waiting Time: (0 + 0 + 5 + 12 + 13) / 5 = 30 / 5 = 6
3) Turnaround Time:
* P0: 0
* P1: 1
* P2: 8
* P3: 21
* P4: 25
* Average Turnaround Time: (0 + 1 + 8 + 21 + 25) / 5 = 55 / 5 = 11
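This last Round Robin schedule can also be verified in code (illustrative simulator; all arrivals at t = 0, quantum = 1, and a zero-burst process like P0 finishes without consuming any CPU time):

```python
from collections import deque

bursts = {"P0": 0, "P1": 1, "P2": 3, "P3": 9, "P4": 12}
ready = deque(bursts.items())      # all processes ready at t = 0
completion, t = {}, 0

while ready:
    name, rem = ready.popleft()
    run = min(1, rem)              # a zero-burst process finishes instantly
    t += run
    if rem - run > 0:
        ready.append((name, rem - run))
    else:
        completion[name] = t

turnaround = {p: completion[p] for p in bursts}            # arrival is 0
waiting = {p: turnaround[p] - bursts[p] for p in bursts}
print(sum(turnaround.values()) / 5, sum(waiting.values()) / 5)   # 11.0 6.0
```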