OS Short Answers
Definition of Operating System (OS): An Operating System (OS) is system software that acts as an intermediary between computer hardware and users. It manages hardware resources, facilitates user interaction with the system, and ensures efficient operation of the computer.
Batch Operating System: In a batch operating system, jobs are grouped into batches based
on their characteristics and then processed sequentially by the system without direct user
interaction.
Advantages:
1. Efficient CPU Utilization: Jobs can be processed without user interaction, leading to
better utilization of the CPU.
2. Less User Interaction: Users don’t need to interact with the system once a batch job
is initiated.
3. Sequential Job Processing: Jobs execute one after another, which simplifies resource allocation among jobs.
Disadvantages:
1. No Immediate Feedback: The system does not provide immediate feedback to the
user, which makes it difficult to correct errors in real-time.
2. Limited Interaction: It lacks interactive capabilities, which makes it inefficient for
tasks that require user input during execution.
3. Long Turnaround Time: Since jobs are executed in batches, the time taken to
process a single job may be long, especially in a large batch.
Explanation:
Diagram:

+--------------+   +--------------+   +--------------+
|  Process A   |   |  Process B   |   |  Process C   |
+--------------+   +--------------+   +--------------+
        ^                  ^                  ^
        |                  |                  |
+----------------------------------------------------+
|        Main Memory (shared between processes)      |
+----------------------------------------------------+

CPU time is divided into small slices (context switching) among processes.
4. Define the Term Distributed Operating System. State Its Advantages and
Disadvantages.
A Distributed Operating System manages a collection of independent, networked computers and presents them to users as a single coherent system, sharing computation and resources across the nodes.
Advantages:
1. Resource Sharing: Hardware, software, and data on one node can be used from another.
2. Reliability: The failure of one node need not halt the entire system.
3. Computation Speedup: Work can be distributed across multiple machines.
Disadvantages:
1. Complexity: Distributed systems are harder to design, debug, and maintain.
2. Network Dependency: Communication delays or failures degrade the whole system.
3. Security: Data travelling across the network is exposed to more threats.
The structure of an operating system typically consists of several layers that interact with
hardware and software components. These layers include:
1. Hardware Layer: The physical machine, including the CPU, memory, and I/O
devices.
2. Kernel: The core of the OS, which manages hardware and software resources,
providing services such as process scheduling, memory management, and I/O
operations.
3. System Software: Utilities and system services that provide support for the user and
other software programs.
4. Application Layer: Programs and applications that run on top of the OS.
Diagram:

+----------------------+
|  Application Layer   |  <-- User applications like Word, Web Browser
+----------------------+
|   System Software    |  <-- Compilers, Utilities
+----------------------+
|        Kernel        |  <-- Core OS services like memory management, process scheduling
+----------------------+
|       Hardware       |  <-- CPU, memory, devices
+----------------------+
Operating system operations include managing processes, memory, input/output devices, and
file systems.
Operations:
1. Process Management: Creating, scheduling, and terminating processes.
2. Memory Management: Allocating and reclaiming main memory.
3. File Management: Organizing storage, retrieval, and protection of files.
4. I/O Management: Coordinating communication with devices.
5. Protection and Security: Controlling access to system resources.
Diagram:
+-------------------+
| User Applications |
+-------------------+
         |
         v
+-------------------+      +------------------+
| Operating System  |------|  Hardware Layer  |
+-------------------+      +------------------+
      |           |
+------------+ +------------+
|  Process   | |   Memory   |
| Management | | Management |
+------------+ +------------+
Computer System Architecture refers to the design of a computer system's components and
their interaction. It encompasses both the hardware and software design.
Types:
1. Von Neumann Architecture: A single shared memory for both data and instructions.
2. Harvard Architecture: Separate memory for instructions and data.
3. Parallel Architecture: Multiple processors that work together to execute instructions.
Diagram:

1. Von Neumann Architecture:

+------------------------------+
|             CPU              |
|   (Control Unit, ALU,        |
|    Registers)                |
+------------------------------+
               |
               v
+------------------------------+
| Memory (Data & Instructions) |
+------------------------------+
               |
               v
+------------------------------+
|     Input/Output Devices     |
+------------------------------+
2. Harvard Architecture:

+-------------------+
|        CPU        |
|  (Control Unit,   |
|  ALU, Registers)  |
+-------------------+
      |       |
      v       v
+---------------+   +--------------------+
|  Data Memory  |   | Instruction Memory |
+---------------+   +--------------------+
      |
      v
+-------------------+
|   Input/Output    |
|      Devices      |
+-------------------+
10. Some CPUs Provide More Than Two Modes of Operation. What Are Two Possible Uses of These Multiple Modes?
1. User Mode: Used when the CPU executes user applications. It has restricted access to
system resources for security reasons.
2. Kernel Mode: This is the privileged mode where the OS kernel runs. It has full
access to all system resources.
3. Supervisor Mode: An additional privilege level on some CPUs, sitting between user and kernel mode, used to run high-priority operations or system services with more control than user mode but less access than kernel mode.
Two possible uses of multiple modes are: (1) virtualization, where a virtual machine monitor runs in a mode more privileged than guest operating systems but separate from the host kernel, and (2) running device drivers or other system components with more privilege than user code but less than the core kernel, limiting the damage a faulty component can cause. These modes help ensure system security and resource management by separating user tasks from critical system functions.
Symmetric Multiprocessing (SMP):
• In SMP, multiple processors have equal access to the main memory and can execute tasks simultaneously.
• All processors share the same memory, I/O, and system bus.
• The operating system treats all processors equally, and any processor can perform any
task, including running the operating system and user processes.
• Commonly used in modern multi-core systems.
Asymmetric Multiprocessing (AMP):
• In AMP, one processor (called the master processor) controls the system and manages the tasks, while the other processors (called slave processors) are dedicated to specific tasks.
• The master processor coordinates the operation and delegates work to the slave
processors.
• The slave processors have no access to the operating system and are used for
performing computations.
• AMP is more limited in scalability compared to SMP.
Feature         | Symmetric Multiprocessing (SMP)   | Asymmetric Multiprocessing (AMP)
----------------+-----------------------------------+--------------------------------------------------
Processor Role  | All processors are equal.         | One master processor and multiple slaves.
Memory Access   | Shared memory for all processors. | Shared memory for master; slaves have no access.
Scalability     | Highly scalable.                  | Limited scalability.
Task Execution  | Any processor can run tasks.      | Only the master processor manages tasks.
The operating system (OS) is responsible for managing the hardware and software resources of a computer system. Modern operating systems are interrupt-driven: when an event occurs, such as user input, hardware failure, or network data arrival, an interrupt signals the OS to suspend the current process, handle the event, and then resume the previous task, giving efficient resource utilization without constant polling. The key functions of the OS include:
1. Process Scheduling: Manages process execution and ensures that CPU time is
allocated effectively.
2. Memory Management: Allocates and manages memory for processes, ensuring no
conflicts.
3. File System Services: Provides an interface for file storage, retrieval, and
management.
4. Input/Output Management: Manages device communication (e.g., keyboard, disk
drives, etc.).
5. Security Services: Includes user authentication, access control, and encryption.
6. Networking Services: Manages network communication between systems and
handles protocols.
7. Error Detection and Handling: Detects errors and takes corrective action when
required.
8. User Interface: Provides command-line or graphical interface for user interaction
with the system.
A system call is a request made by a user-level process to the kernel of the operating system
for services that require privileged access, such as accessing hardware, creating processes, or
handling I/O operations. System calls act as the interface between user applications and the
kernel.
Implementation:
• A user process triggers a system call via a specific software interrupt (e.g., int 0x80
in Linux).
• The operating system kernel handles the system call by switching to kernel mode,
performing the requested action, and then switching back to user mode.
• The process is suspended while the kernel performs the system call and returns the
result to the user process.
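As an illustrative sketch (POSIX, assuming a Unix-like system), the library wrappers below each perform a system call: the wrapper places the call number and arguments where the kernel expects them, traps into kernel mode, and returns the result.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* write() wraps the write system call: the C library traps into
       kernel mode (e.g., via a trap/syscall instruction), the kernel
       performs the I/O, then control returns to user mode. */
    write(STDOUT_FILENO, "hello via a system call\n", 24);

    /* getpid() likewise crosses into the kernel to read the process ID. */
    printf("pid = %d\n", (int)getpid());
    return 0;
}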
Booting is the process of starting a computer from a powered-off state, loading the operating
system into memory, and preparing the system for use.
Booting Process:
1. Power-On: When the computer is powered on, the CPU executes a small program
called the BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware
Interface).
2. POST (Power-On Self Test): BIOS/UEFI checks hardware components (e.g., RAM,
CPU) to ensure everything is working.
3. Bootloader: The BIOS/UEFI locates and loads the bootloader, which is responsible
for loading the OS.
4. OS Loading: The bootloader loads the kernel into memory and transfers control to
the OS.
5. System Initialization: The kernel initializes system resources, starts essential
services, and the user is presented with a login prompt or desktop.
Diagram:

+------------------+     +-------------------+     +-------------------+
|    Power On      | --> |     BIOS/UEFI     | --> |    Bootloader     |
| (Hardware init)  |     | (POST & hardware  |     | (Load OS kernel)  |
+------------------+     |    detection)     |     +-------------------+
                         +-------------------+              |
                                                            v
                                                 +-------------------+
                                                 | Operating System  |
                                                 |  (Kernel Loading  |
                                                 |   & System Init)  |
                                                 +-------------------+
A bootstrap loader (often simply called the bootloader) is a small program stored in a
special location in the computer's memory or on storage. It is responsible for starting the
operating system's loading process when the system is powered on.
• The bootstrap loader is typically stored in ROM or a special area of storage, like the
Master Boot Record (MBR).
• After the system hardware (via BIOS/UEFI) has completed the POST, the bootstrap
loader is executed.
• Its job is to locate and load the operating system kernel into memory, and then
transfer control to the kernel, enabling the OS to take control of the system.
1. What is a process?
A process is a program in execution, which includes the program code, its current
activity, and associated resources (e.g., memory, I/O).
2. What is a thread?
A thread is the smallest unit of execution within a process. It shares the same
resources (like memory) with other threads in the same process but has its own
execution stack and program counter.
3. What is PCB?
PCB stands for Process Control Block. It is a data structure used by the operating
system to store information about a process, such as its state, program counter, CPU
registers, memory management, and scheduling information.
4. What is context switch?
A context switch is the process of storing the state of a currently running process and
loading the state of another process. This allows multiple processes to share CPU time
in multitasking systems.
5. What is a scheduler?
A scheduler is a part of the operating system responsible for deciding which process
or thread to run next on the CPU based on scheduling algorithms.
6. What is meant by scheduling queue?
A scheduling queue is a data structure that holds processes or threads waiting to be
executed by the CPU. The processes are organized based on their state, such as ready,
waiting, or running.
7. List types of scheduling queue.
Common types of scheduling queues include:
o Ready queue: Holds processes that are ready to execute.
o Waiting (or Blocked) queue: Holds processes waiting for an event (e.g., I/O).
o Execution queue: Holds processes that are currently being executed.
8. Define scheduler.
A scheduler is a component of the operating system that selects which process or
thread to execute next based on a defined scheduling policy (e.g., First-Come, First-
Served, Round Robin).
9. List operations on processes.
Operations on processes include:
o Creation: Starting a new process.
o Scheduling: Assigning CPU time to processes.
o Termination: Ending a process.
o Suspension/Resumption: Temporarily stopping and restarting a process.
o Synchronization: Ensuring proper sequence and timing between processes.
o Communication: Allowing processes to exchange information (e.g., IPC
mechanisms).
1. Define process and thread. Compare them with any four points.
• Process: A process is a program in execution, consisting of the program code, data,
and system resources such as memory, file handles, and CPU registers. It is an
independent unit of execution and can run multiple threads within itself.
• Thread: A thread, also called a lightweight process, is the smallest unit of execution
within a process. Multiple threads can exist within a single process, sharing the same
resources but executing independently.
Comparison:
Diagram:
+--------------------+
| Process Creation |
+--------------------+
|
v
+--------------------+
| Process Scheduling |
+--------------------+
|
v
+--------------------+
| Process Execution |
+--------------------+
|
v
+--------------------+
| Process Termination|
+--------------------+
The Process Control Block (PCB) is a data structure used by the operating system to
manage information about processes. Each process has its own PCB.
Fields in PCB:
1. Process State: New, ready, running, waiting, or terminated.
2. Process ID (PID): Unique identifier of the process.
3. Program Counter: Address of the next instruction to execute.
4. CPU Registers: Saved register contents for resuming execution.
5. CPU Scheduling Information: Priority and pointers to scheduling queues.
6. Memory-Management Information: Base/limit registers or page tables.
7. Accounting Information: CPU time used, time limits, and so on.
8. I/O Status Information: Open files and allocated I/O devices.
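As a rough illustration, a PCB can be pictured as a C structure; the layout below is hypothetical and greatly simplified compared to a real kernel's (e.g., Linux's task_struct):

struct pcb {
    int           pid;              /* process identifier */
    int           state;            /* NEW, READY, RUNNING, WAITING, TERMINATED */
    unsigned long program_counter;  /* next instruction to execute */
    unsigned long registers[16];    /* saved CPU register contents */
    int           priority;         /* CPU scheduling information */
    void         *page_table;       /* memory-management information */
    int           open_files[16];   /* I/O status information */
    unsigned long cpu_time_used;    /* accounting information */
};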
A Process Scheduler is responsible for deciding which process gets to use the CPU at any
given time. The scheduler is part of the operating system's kernel and determines the
execution order of processes.
1. Long-term Scheduler: Decides which processes are admitted to the system and
should be brought into memory (job scheduling).
2. Short-term Scheduler: Decides which process in the ready queue should be given
CPU time (CPU scheduling).
3. Medium-term Scheduler: Handles swapping processes in and out of memory,
maintaining a balance between the number of processes in memory and the available
memory.
Functions of the process scheduler:
• Selection: Choosing the next process to run based on scheduling algorithms (e.g., FIFO, Round Robin).
• Context Switching: Saving the state of the currently running process and loading the
state of the next process.
• Dispatching: Moving the selected process from the ready queue to the CPU for
execution.
• Resource Allocation: Assigning CPU time and other resources to the scheduled
process.
• Termination Handling: Removing processes from the ready queue once they have
completed execution.
A context switch occurs when the CPU switches from executing one process to another. It
involves saving the state of the current process and loading the state of the next one.
Diagram:
+-------------------------+
| Process A (Running) |
+-------------------------+
|
v
Save State of Process A
|
v
+-------------------------+
| Process B (Ready) |
+-------------------------+
|
v
Load State of Process B
|
v
+-------------------------+
| Process B (Running) |
+-------------------------+
Scheduler Type        | Purpose                                             | Frequency                        | Resource Handling
----------------------+-----------------------------------------------------+----------------------------------+---------------------------------------------------
Long-term Scheduler   | Decides which processes are brought into memory.    | Infrequent (seconds to minutes). | Controls the degree of multiprogramming.
Short-term Scheduler  | Decides which process gets CPU time next.           | Frequent (milliseconds).         | Manages CPU allocation.
Medium-term Scheduler | Manages the swapping of processes in/out of memory. | Moderate frequency.              | Handles process swapping between memory and disk.
• Process Creation: A process is created using system calls such as fork() in Unix or
CreateProcess() in Windows.
• Process Termination: A process ends when it has completed its execution or an error
occurs, often using system calls like exit().
• Scheduling: The OS scheduler assigns CPU time to processes based on priority or
other criteria.
Example:
In a multi-user environment, a process like a text editor might run in the background (waiting
for input), and the scheduler might switch to another process like a file manager when
needed.
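A minimal POSIX sketch of these operations, assuming a Unix-like system: the parent creates a child with fork(), the child terminates with exit(), and the parent waits for it.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* process creation */
    if (pid == 0) {                     /* child branch */
        printf("child %d running\n", (int)getpid());
        exit(0);                        /* process termination */
    } else if (pid > 0) {               /* parent branch */
        int status;
        waitpid(pid, &status, 0);       /* wait for the child to finish */
        printf("child exited with status %d\n", WEXITSTATUS(status));
    } else {
        perror("fork");                 /* creation failed */
    }
    return 0;
}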
User-Level Threads: Managed by a thread library in user space without kernel involvement; they are fast to create and switch, but a blocking system call by one thread can block the entire process.
Kernel-Level Threads: Managed directly by the OS kernel; each thread is scheduled individually and can run in parallel on multiple CPUs, at the cost of slower, kernel-mediated thread operations.
Diagram:
+------------------------+      +--------------------------+
|  User-Level Thread 1   |      |  Kernel-Level Thread 1   |
|  User-Level Thread 2   |      |  Kernel-Level Thread 2   |
+------------------------+      +--------------------------+
            |                                |
            v                                v
+------------------------+      +--------------------------+
|  User-Space Thread     |      |     Kernel Scheduler     |
|  Library (scheduler)   |      |                          |
+------------------------+      +--------------------------+
A scheduling queue is a collection of processes waiting for CPU time. There are different
types of queues for different process states:
• Ready Queue: Holds processes that are ready to execute but waiting for CPU time.
• Blocked Queue: Holds processes that are waiting for I/O or other resources.
• New Queue: Holds newly created processes waiting for admission into the system.
Diagram:
+------------------+ +---------------------+
| New Queue | | Ready Queue |
+------------------+ +---------------------+
| |
v v
+--------------------+ +-----------------------+
| Blocked Queue | | Process Execution |
+--------------------+ +-----------------------+
False:
An executable file on disk is not a process; it is simply a program. A process is an execution of that program: a process is dynamic (a running instance of the program), whereas an executable file is static.
A short-term scheduler decides which of the ready processes will execute next.
Queuing Diagram:

+-----------------+   +-----------------+   +-----------------+
|  Ready Queue 1  |   |  Ready Queue 2  |   |  Ready Queue 3  |
+-----------------+   +-----------------+   +-----------------+
         |                     |                     |
         v                     v                     v
         +------------------------------+
         |     Short-term Scheduler     |
         +------------------------------+
                        |
                        v
         +-------------------------------+
         |    Process Execution (CPU)    |
         +-------------------------------+
Independence: Processes are independent and do not share memory space; each process has its own memory and resources. Threads within a process share the same memory and resources but can run independently.
There are several models for managing multiple threads within an operating system. Two
commonly discussed models are:
In the One-to-One model, each user-level thread is mapped to a single kernel-level thread.
This means that for every user thread, there is a corresponding kernel thread that is scheduled
by the operating system.
• Pros:
o Simple to implement and understand.
o Provides true parallelism, as each thread has its own kernel-level context.
o Kernel can schedule each thread independently, making it easier to take full
advantage of multiple CPUs.
• Cons:
o A large number of threads may lead to significant overhead due to the kernel’s
scheduling and context switching between threads.
o Limited by the number of threads that the kernel can handle.
• Example:
o In the Windows operating system (starting from Windows NT), the kernel manages
each user thread as a kernel thread, using the one-to-one model.
Diagram:
User-Level Thread 1 -> Kernel-Level Thread 1
User-Level Thread 2 -> Kernel-Level Thread 2
User-Level Thread 3 -> Kernel-Level Thread 3
(ii) Many-to-Many Model
In the Many-to-Many model, many user-level threads are mapped to many kernel-level
threads. This means that not every user thread needs a separate kernel thread. Instead, the
system uses a pool of kernel threads, and the user-level threads are scheduled onto these
kernel threads by a user-level thread library.
• Pros:
o Allows the creation of a large number of user threads without exhausting the
system’s kernel resources.
o The system can use a flexible pool of kernel threads, which optimizes resource
utilization.
o More efficient in terms of handling large numbers of threads compared to the one-
to-one model.
• Cons:
o Thread scheduling can become more complex because the operating system is not
directly involved in scheduling user threads.
o The operating system may not fully utilize the available CPU cores, as it does not
directly manage each user thread.
• Example:
o The Solaris operating system uses the many-to-many model. It maps many user
threads to a smaller pool of kernel threads, which is more efficient for high-
concurrency applications.
Diagram:
User-Level Threads 1,2 -> Kernel-Level Thread Pool
User-Level Threads 3,4 -> Kernel-Level Thread Pool
User-Level Threads 5,6 -> Kernel-Level Thread Pool
Comparison of the Models:
Model        | Thread Mapping                           | Parallelism           | Overhead                                    | Scalability
-------------+------------------------------------------+-----------------------+---------------------------------------------+------------------------------------------------
One-to-One   | One user thread to one kernel thread     | True parallelism      | Higher overhead with many threads           | Limited by the number of kernel threads
Many-to-Many | Many user threads to many kernel threads | Potential parallelism | Lower overhead, but more complex management | Highly scalable, depending on system resources
Each of these models offers different trade-offs in terms of performance, scalability, and
overhead, and the choice of model often depends on the specific requirements of the system
or application.
4. Define dispatcher.
The dispatcher is a component of the operating system responsible for giving control
of the CPU to the process selected by the process scheduler. It performs context
switching, switching between processes by saving the state of the currently running
process and loading the state of the next process to be executed.
5. Enlist scheduling criteria.
Scheduling criteria are the performance metrics used to evaluate the efficiency of a
scheduling algorithm. Common scheduling criteria include:
o CPU Utilization: The percentage of time the CPU is busy.
o Throughput: The number of processes completed in a given period of time.
o Turnaround Time: The total time taken from the submission of a process to
its completion.
o Waiting Time: The total time a process spends waiting in the ready queue
before execution.
o Response Time: The time from when a request is submitted to the first
response.
o Fairness: Ensures that each process gets a fair share of the CPU.
6. Define response time and turnaround time.
o Response Time: The time elapsed from when a request is submitted until the
system starts responding. In interactive systems, it refers to the time from
submitting a command to the first feedback or output from the system.
o Turnaround Time: The total time a process takes from submission to
completion, including waiting time, execution time, and time spent in I/O
operations.
7. What is preemptive scheduling and non-preemptive scheduling?
o Preemptive Scheduling: In this type of scheduling, a process can be
interrupted and moved to the ready queue if a higher-priority process needs to
run. This allows the operating system to ensure better responsiveness and
fairness, particularly in time-sharing systems.
o Non-Preemptive Scheduling: In this type of scheduling, once a process starts
executing, it runs to completion or until it voluntarily relinquishes the CPU.
No other process can interrupt it during execution.
8. What is the purpose of scheduling algorithm?
The purpose of a scheduling algorithm is to determine the order in which processes
are executed by the CPU. It aims to optimize system performance based on various
criteria such as CPU utilization, response time, throughput, and fairness. A good
scheduling algorithm ensures efficient resource utilization and improves user
experience.
9. Enlist various scheduling algorithms.
Some common scheduling algorithms include:
o First-Come, First-Served (FCFS)
o Shortest Job First (SJF)
o Priority Scheduling
o Round Robin (RR)
o Multilevel Queue Scheduling
o Multilevel Feedback Queue Scheduling
o Shortest Remaining Time First (SRTF)
o Fair Share Scheduling
10. What is FCFS?
FCFS (First-Come, First-Served) is the simplest scheduling algorithm where
processes are executed in the order in which they arrive in the ready queue. The first
process to arrive is the first to be executed, and the CPU is allocated to each process
in turn without preemption.
11. What is SJF?
SJF (Shortest Job First) is a scheduling algorithm that selects the process with the
shortest burst time (i.e., the process requiring the least CPU time) for execution next.
It can be non-preemptive (once a process starts, it runs to completion) or preemptive
(shorter processes may interrupt longer ones). SJF minimizes average waiting time
and is optimal when burst times are known in advance.
12. Define multiple queue scheduling.
Multiple Queue Scheduling is a scheduling method in which processes are divided
into different queues based on certain criteria, such as priority, memory requirements,
or CPU usage. Each queue has its own scheduling algorithm, and processes are
scheduled based on their queue's policy. For example, interactive processes might be
placed in one queue with Round Robin scheduling, while CPU-bound processes might
be placed in another with FCFS or SJF.
Process Scheduling refers to the method by which the operating system decides which
process or task should be allocated to the CPU next. The main goal of process scheduling is
to maximize the CPU utilization and ensure efficient processing of multiple processes in a
multi-tasking environment.
The CPU-I/O Burst Cycle describes the alternating pattern of a process’s execution in the
CPU (CPU burst) and the time it spends performing I/O operations (I/O burst).
Diagram:

+-----------+    +-----------+    +-----------+    +-----------+
| CPU Burst | -> | I/O Burst | -> | CPU Burst | -> | I/O Burst | -> ...
+-----------+    +-----------+    +-----------+    +-----------+
  process          process          process          process
  executes       waits for I/O      executes       waits for I/O
• CPU Burst: The time during which the process performs computations on the CPU.
• I/O Burst: The time the process spends performing I/O operations like reading or writing
data.
A process continuously alternates between CPU bursts (where it gets executed) and I/O
bursts (where it waits for I/O operations to complete). This cycle is used by the OS to
determine how long to allow processes to run before being interrupted or scheduled again.
• Each process is assigned a fixed time slice (quantum) during which it is allowed to execute. If
a process doesn’t finish in its time slice, it is preempted, and the next process is scheduled.
• Example: For a quantum of 3, and processes with burst times of 5, 6, and 7:
P1 (3 ms) → P2 (3 ms) → P3 (3 ms) → P1 (2 ms, done) → P2 (3 ms, done) → P3 (3 ms) → P3 (1 ms, done)
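A small C sketch simulating this schedule (quantum 3, bursts 5, 6, 7, all assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    int remaining[] = {5, 6, 7};         /* remaining burst time per process */
    int n = 3, quantum = 3, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {    /* cycle through the processes in order */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;               /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at t=%d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}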
• This non-preemptive algorithm schedules the process with the smallest burst time next. It
minimizes waiting time, but the challenge is predicting the exact burst time of processes.
• Example: Given processes P1 (10 ms), P2 (5 ms), and P3 (1 ms), the order of execution is P3
→ P2 → P1.
• Processes are scheduled in the order in which they arrive in the ready queue. It is simple but
may cause a "convoy effect," where shorter jobs wait for longer jobs to complete.
• Example: If processes arrive in the order P1 (4 ms), P2 (3 ms), and P3 (5 ms), they will be
scheduled in that order.
1. Context Switch: Save the state (context) of the currently running process and load the state
of the next process.
2. Process Scheduling: Determine the next process to run from the ready queue.
3. Switching the CPU: The dispatcher selects the process, updates the CPU registers, and
assigns control to the new process.
The dispatcher typically operates in conjunction with the CPU scheduler and is invoked every
time a process needs to be swapped out or when a new process is scheduled to run.
Diagram:
+-------------------+
| Queue 1 (Highest) | → Round Robin (RR)
+-------------------+
| Queue 2 | → Shortest Job First (SJF)
+-------------------+
| Queue 3 (Lowest) | → FCFS
+-------------------+
In this scheme, each process is assigned to a specific queue based on its priority or type.
Processes in higher priority queues are executed first.
• CPU Utilization: The percentage of time the CPU is actively executing processes. High CPU utilization means the CPU is busy most of the time.
• Response Time: The time between a user's request and the first response from the system. It is crucial in interactive systems.
• Turnaround Time: The total time taken from the submission of a process to its completion, which includes both waiting time and execution time.
• Average Waiting Time: Can be high when longer processes arrive first (as under FCFS); typically lower when jobs are similar in length.
• CPU Bound Process: A process that spends more time performing computations on the CPU
rather than waiting for I/O operations. These processes are generally long-running and
require significant CPU time.
• I/O Bound Process: A process that spends more time waiting for I/O operations than using
the CPU. These processes are usually short-lived but frequently interact with external
devices like disk or network.
9. Describe FCFS Scheduling with Example. Also State its Advantages and
Disadvantages
FCFS Scheduling is a simple scheduling algorithm where processes are executed in the
order they arrive in the ready queue.
Example: Suppose P1 (burst 5 ms), P2 (3 ms), and P3 (2 ms) arrive in that order at time 0. They run as P1 → P2 → P3, completing at 5, 8, and 10 ms; the waiting times are 0, 5, and 8 ms (average 4.33 ms) and the turnaround times are 5, 8, and 10 ms (average 7.67 ms).
Advantages:
• Simple to understand and implement; the ready queue is a plain FIFO.
• No starvation: every process eventually runs, strictly in arrival order.
Disadvantages:
• Can cause high waiting time (convoy effect), especially for short processes when long
processes arrive first.
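A short C sketch computing the waiting and turnaround times for the illustrative bursts above (P1 = 5, P2 = 3, P3 = 2, all arriving at time 0):

#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 2};
    int n = 3, elapsed = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int wait = elapsed;      /* time spent waiting in the FIFO queue */
        elapsed += burst[i];     /* FCFS: each process runs to completion */
        total_wait += wait;
        total_tat += elapsed;    /* turnaround = completion - arrival (0) */
        printf("P%d: wait=%d, turnaround=%d\n", i + 1, wait, elapsed);
    }
    printf("avg wait=%.2f, avg turnaround=%.2f\n",
           total_wait / n, total_tat / n);
    return 0;
}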
10. What is Priority Scheduling? Explain with Example. State its Advantages
and Disadvantages
Priority Scheduling assigns a priority to each process. The CPU scheduler picks the process
with the highest priority for execution. It can be preemptive or non-preemptive.
Example: Suppose P1 (burst 10 ms, priority 3), P2 (1 ms, priority 1), and P3 (2 ms, priority 2) arrive at time 0, with 1 as the highest priority. The (non-preemptive) execution order is P2 → P3 → P1.
Advantages:
• Important or time-critical processes are served first.
• Priorities are flexible: they can reflect user importance, system needs, or deadlines.
Disadvantages:
• Low-priority processes may starve if higher-priority work keeps arriving; aging (gradually raising the priority of waiting processes) is the usual remedy.
• Preemptive Scheduling allows the OS to forcibly remove a running process from the CPU to
allocate it to another process. It helps prevent any single process from monopolizing the
CPU.
• Non-Preemptive Scheduling requires that a process voluntarily gives up the CPU once its
time slice or execution is complete.
Comparison: Preemptive scheduling gives better responsiveness and fairness but adds context-switch overhead and requires synchronization for shared data; non-preemptive scheduling is simpler with less overhead, but a long job can delay all others.
14. Compute Average Turnaround Time and Average Wait Time Using (i)
FCFS, (ii) SJF (Pre-emptive)
Job | Arrival Time | Burst Time
 1  |     0.0      |     8
 2  |     0.5      |     5
 3  |     1.0      |     2
(i) First-Come, First-Served (FCFS)
1. Execution Order: The order of execution will be based on the arrival time: Job 1 →
Job 2 → Job 3.
2. Completion Times:
o Job 1: Arrival at 0.0, completes at 0.0 + 8 = 8.0
o Job 2: Arrival at 0.5, starts at 8.0 (since Job 1 completes at 8.0), completes at 8.0 + 5
= 13.0
o Job 3: Arrival at 1.0, starts at 13.0 (since Job 2 completes at 13.0), completes at 13.0
+ 2 = 15.0
3. Turnaround Time (TAT):
o TAT for Job 1 = Completion Time - Arrival Time = 8.0 - 0.0 = 8.0
o TAT for Job 2 = Completion Time - Arrival Time = 13.0 - 0.5 = 12.5
o TAT for Job 3 = Completion Time - Arrival Time = 15.0 - 1.0 = 14.0
4. Average Turnaround Time = (8.0 + 12.5 + 14.0) / 3 = 11.5
5. Waiting Time (TAT - Burst): Job 1 = 0.0, Job 2 = 7.5, Job 3 = 12.0, so Average Wait Time = (0.0 + 7.5 + 12.0) / 3 = 6.5
(ii) SJF (Pre-emptive)
SJF pre-emptive (also known as Shortest Remaining Time First, SRTF) selects the process with the shortest remaining burst time at each moment.
• Execution Order:
o At time 0.0, Job 1 is the only process, so it runs.
o At time 0.5, Job 2 arrives; its burst (5) is shorter than Job 1's remaining 7.5, so it preempts Job 1 and runs.
o At time 1.0, Job 3 arrives; its burst (2) is shorter than Job 2's remaining 4.5, so it preempts Job 2.
o Job 3 runs from 1.0 to 3.0 and completes.
o Job 2 (4.5 remaining) resumes and runs from 3.0 to 7.5.
o Finally, Job 1 (7.5 remaining) runs from 7.5 to 15.0.
• Completion Times:
o Job 1: Completes at 15.0
o Job 2: Completes at 7.5
o Job 3: Completes at 3.0
• Turnaround Time (TAT):
o TAT for Job 1 = 15.0 - 0.0 = 15.0
o TAT for Job 2 = 7.5 - 0.5 = 7.0
o TAT for Job 3 = 3.0 - 1.0 = 2.0
o Average Turnaround Time = (15.0 + 7.0 + 2.0) / 3 = 8.0
• Waiting Time (TAT - Burst): Job 1 = 7.0, Job 2 = 2.0, Job 3 = 0.0, so Average Wait Time = 3.0
15. Turnaround Time Using (i) Pre-emptive Shortest Job First Algorithm, (ii)
Non-pre-emptive Priority Scheduling
Given:
Job | Burst Time | Priority (1 = highest) | Arrival Time
 1  |     2      |           3            |      0
 2  |     1      |           4            |      3
 3  |     3      |           2            |      3
 4  |     2      |           1            |      4
 5  |     4      |           5            |      3
(i) Pre-emptive Shortest Job First (SJF)
In pre-emptive SJF, the CPU always picks the process with the shortest remaining burst time among the jobs that have arrived.
• Execution Order:
o Job 1 starts at time 0, runs for 2 units, and finishes at time 2. The CPU is idle from 2 to 3, since no other job has arrived yet.
o At time 3, Jobs 2, 3, and 5 arrive; Job 2 has the shortest burst (1 unit), so it runs from 3 to 4.
o At time 4, Job 4 arrives; its burst (2 units) is shorter than Job 3's (3) and Job 5's (4), so Job 4 runs from 4 to 6.
o Job 3 runs from 6 to 9.
o Job 5 runs from 9 to 13.
Completion Times:
• Job 1: 2
• Job 2: 4
• Job 4: 6
• Job 3: 9
• Job 5: 13
Turnaround Times (Completion - Arrival):
• Job 1: 2 - 0 = 2
• Job 2: 4 - 3 = 1
• Job 4: 6 - 4 = 2
• Job 3: 9 - 3 = 6
• Job 5: 13 - 3 = 10
• Average Turnaround Time = (2 + 1 + 2 + 6 + 10) / 5 = 4.2
(ii) Non-pre-emptive Priority Scheduling
In non-preemptive priority scheduling, the job with the highest priority (smallest number) among the jobs that have already arrived is selected next; a running job is never preempted.
• Execution Order: Job 1 runs first (the only arrival at time 0) and finishes at 2; the CPU is idle from 2 to 3. From time 3 onward the pending job with the smallest priority number runs to completion: Job 3 (priority 2) → Job 4 (priority 1, arrived at 4) → Job 2 (priority 4) → Job 5 (priority 5).
Completion Times:
• Job 1: 2
• Job 3: 6
• Job 4: 8
• Job 2: 9
• Job 5: 13
Turnaround Times (Completion - Arrival):
• Job 1: 2 - 0 = 2
• Job 3: 6 - 3 = 3
• Job 4: 8 - 4 = 4
• Job 2: 9 - 3 = 6
• Job 5: 13 - 3 = 10
• Average Turnaround Time = (2 + 3 + 4 + 6 + 10) / 5 = 5.0
Given:

Process | Arrival Time | Burst Time
   1    |      0       |     12
   2    |      3       |      8
   3    |      8       |      5
18. Average Turnaround Time Calculation Using (i) Round Robin (RR) and
(ii) Shortest Remaining Time First (SRTF)
Given the following set of processes with their burst times and arrival times:
Process | Burst Time (ms) | Arrival Time (ms)
  P1    |        5        |         1
  P2    |        7        |         0
  P3    |        3        |         3
  P4    |       10        |         2
We will calculate the Average Turnaround Time using two different scheduling algorithms:
(i) Round Robin (quantum = 3 ms)
Steps:
1. Arrival Order: The ready queue is a FIFO; newly arriving processes join the back, and a preempted process rejoins behind them.
o At time 0, P2 arrives and runs for its 3 ms quantum (4 ms remaining).
o During this quantum P1 arrives (t = 1), P4 arrives (t = 2), and P3 arrives (t = 3), so the queue becomes P1, P4, P3, P2.
o P1 runs from 3 to 6 (2 ms remaining).
o P4 runs from 6 to 9 (7 ms remaining).
o P3 runs from 9 to 12 and completes (burst 3 ms).
o P2 resumes from 12 to 15 (1 ms remaining).
o P1 runs from 15 to 17 and completes.
o P4 runs from 17 to 20 (4 ms remaining).
o P2 runs from 20 to 21 and completes.
o P4 runs from 21 to 25 and completes.
Gantt Chart:

| P2 | P1 | P4 | P3 | P2 | P1 | P4 | P2 | P4 |
0    3    6    9    12   15   17   20   21   25

Turnaround Times (Completion - Arrival):
• TAT for P1 = 17 - 1 = 16 ms
• TAT for P2 = 21 - 0 = 21 ms
• TAT for P3 = 12 - 3 = 9 ms
• TAT for P4 = 25 - 2 = 23 ms
• Average Turnaround Time = (16 + 21 + 9 + 23) / 4 = 17.25 ms
(ii) Shortest Remaining Time First (SRTF)
In SRTF, the process with the smallest remaining burst time is selected to run next. This is the preemptive version of Shortest Job First (SJF).
Steps:
• At time 0: P2 arrives and starts executing.
• At time 1: P1 arrives with a burst of 5 ms, shorter than P2's remaining 6 ms, so P1 preempts P2 and runs.
• At time 2: P4 arrives with a burst of 10 ms, longer than every remaining time, so it waits.
• At time 3: P3 arrives with a burst of 3 ms, equal to P1's remaining 3 ms; the running process keeps the CPU on a tie, so P1 continues and finishes at 6.
• At time 6: P3 has the shortest remaining time (3 ms) and runs until 9.
• At time 9: P2 resumes (6 ms remaining) and runs until 15.
• At time 15: P4 runs until 25.
Gantt Chart:

| P2 | P1 | P3 | P2 | P4 |
0    1    6    9    15   25

Completion Times:
• P1 completes at time 6.
• P2 completes at time 15.
• P3 completes at time 9.
• P4 completes at time 25.
Turnaround Times (Completion - Arrival):
• TAT for P1 = 6 - 1 = 5 ms
• TAT for P2 = 15 - 0 = 15 ms
• TAT for P3 = 9 - 3 = 6 ms
• TAT for P4 = 25 - 2 = 23 ms
• Average Turnaround Time = (5 + 15 + 6 + 23) / 4 = 12.25 ms
Summary of Results:
• Round Robin (q = 3 ms): Average Turnaround Time = 17.25 ms
• SRTF: Average Turnaround Time = 12.25 ms
Conclusion: SRTF provides a lower average turnaround time than Round Robin in this case.
1. What is synchronization?
Synchronization is the process of coordinating the execution of multiple processes or
threads to ensure that shared resources are accessed in a safe and controlled manner,
preventing conflicts or inconsistencies.
2. Define semaphore.
A semaphore is a synchronization primitive used to control access to a shared
resource in a concurrent system. It can be used to signal between processes or threads,
usually by maintaining a counter that controls access based on availability.
3. List types of semaphore.
There are two types of semaphores:
o Binary Semaphore: Also known as a mutex, it has only two states (0 and 1),
and it is used to ensure mutual exclusion.
o Counting Semaphore: It can take any non-negative integer value and is used
to manage a pool of resources or control access to multiple instances of a
resource.
4. What is critical section problem?
The critical section problem refers to a situation in concurrent programming where
multiple processes or threads need to access a shared resource, and without proper
synchronization, it can lead to data inconsistency or race conditions. The problem is
to ensure that only one process accesses the critical section at a time.
5. What is race condition?
A race condition occurs when multiple processes or threads try to change shared data
concurrently, and the final outcome depends on the non-deterministic order of
execution. This can lead to unpredictable results and errors.
6. List classical problems for synchronization.
Some classical synchronization problems are:
o Producer-Consumer Problem
o Reader-Writer Problem
o Dining Philosophers Problem
o Sleeping Barber Problem
7. What is dining philosopher problem?
The Dining Philosophers Problem is a synchronization problem where a certain
number of philosophers sit at a round table, each with a fork to their left and right.
They must alternately think and eat, but they can only eat if they have both forks. The
challenge is to ensure that philosophers don't starve, by avoiding deadlock and
ensuring proper sharing of resources.
8. Give reader-writer problem in synchronization.
In the Reader-Writer Problem, multiple processes (readers and writers) need to access
a shared resource (e.g., a database). Readers can access the resource simultaneously
without issue, but if a writer is modifying the resource, no reader or other writer can
access it. The challenge is to ensure that writers have exclusive access while allowing
readers to share the resource as long as no writer is active.
9. List signals for semaphore.
The signals used in semaphore operations are:
o Wait (P or down): Decrements the semaphore value and blocks if the value is
zero.
o Signal (V or up): Increments the semaphore value and potentially wakes up a
waiting process.
• Binary Semaphore (or Mutex): A binary semaphore can only take two values,
usually 0 or 1. It is used to manage mutual exclusion, ensuring that only one process
or thread can access the critical section at any given time.
o Value 1: Resource is available.
o Value 0: Resource is being used, and other processes need to wait.
• Counting Semaphore: This type of semaphore can take any non-negative integer
value. It is used to control access to a resource pool with a limited number of
instances. If the value of the semaphore is greater than 0, processes can proceed, but if
it is 0, they must wait.
o For example, if a resource pool has 3 identical resources, the semaphore might
be initialized to 3. When a process acquires a resource, the semaphore is
decremented, and when the resource is released, the semaphore is
incremented.
A critical section is a segment of code in which a process accesses shared resources. The critical section problem arises when more than one process or thread tries to execute its critical section simultaneously, leading to issues like data races, inconsistent states, or corruption.
Solution Requirements: A solution to the critical section problem must satisfy the following
conditions:
1. Mutual Exclusion: Only one process or thread can execute in the critical section at a
time.
2. Progress: If no process is in the critical section and more than one process is waiting
to enter, then one of the waiting processes should be allowed to enter the critical
section.
3. Bounded Waiting: There must be a limit on the number of times a process can be
bypassed before it is allowed to enter the critical section.
To prevent race conditions and ensure that processes execute safely in shared environments,
synchronization mechanisms such as semaphores, mutexes, and monitors are used to manage
access to critical sections.
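As a classic software illustration, Peterson's solution for two processes meets all three requirements; the C sketch below assumes sequentially consistent memory, so on real hardware it additionally needs memory barriers to be safe.

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int  turn = 0;                  /* whose turn it is on a tie */

void enter_critical_section(int i) {     /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = true;                      /* announce intent to enter */
    turn = other;                        /* give the tie-break away */
    while (flag[other] && turn == other)
        ;                                /* busy-wait until it is safe */
}

void leave_critical_section(int i) {
    flag[i] = false;                     /* let the other process proceed */
}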
A race condition occurs in a system when the behavior of a program depends on the relative
timing or sequence of events, such as the order in which threads or processes execute. It is a
scenario where multiple processes or threads access shared resources concurrently, and at
least one of them modifies the shared resource.
In a race condition, the final outcome depends on the timing of the threads' execution, and if
the timing is not controlled properly, it can lead to unpredictable results, including:
• Data corruption: If multiple threads are modifying a shared variable at the same
time, the result may be an inconsistent or corrupted value.
• Inconsistent behavior: The program may work correctly under one set of conditions
and fail under another due to variations in execution timing.
• Unpredictable results: Since the outcome depends on the timing of threads, it may
differ each time the program runs.
To avoid race conditions, synchronization mechanisms like mutexes, semaphores, and locks
are used to ensure that shared resources are accessed in a controlled manner.
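A minimal POSIX-threads sketch of a race and its fix: two threads increment a shared counter a million times each; without the mutex, interleaved read-modify-write cycles lose updates.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* remove the lock pair to observe the race */
        counter++;                    /* read-modify-write on shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}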
5. What are the types of Semaphore? Explain its usage and implementation.
1. Binary Semaphore:
o Usage: Binary semaphores are used for managing mutual exclusion (mutex).
They can only have two states: 0 or 1, where 1 indicates the resource is
available, and 0 indicates the resource is in use.
o Implementation:
▪ When a process tries to access the critical section, it performs a wait
operation on the semaphore. If the value is 1 (resource available), the
process enters the critical section and changes the semaphore value to
0. If the value is 0, the process waits.
▪ After completing its task, the process performs a signal operation on
the semaphore, changing its value back to 1, allowing other processes
to access the critical section.
Semaphore mutex = 1; // Initialize binary semaphore (1 = resource available)
2. Counting Semaphore:
o Usage: Counting semaphores are used to manage a pool of resources. It can
take non-negative integer values, and it is useful for managing access to
multiple identical resources (like available slots in a buffer or available
connections in a server).
o Implementation:
▪ The wait operation (P operation) decrements the semaphore value. If
the value is 0, the process waits (blocked) until the semaphore is
greater than 0.
▪ The signal operation (V operation) increments the semaphore value,
allowing other processes to proceed.
Semaphore empty = N; // Number of empty slots in the buffer
Semaphore full = 0;  // Number of full slots in the buffer

// Producer code
wait(empty);   // Decrement empty-slot count (block if the buffer is full)
// Produce item and add it to the buffer
signal(full);  // Increment full-slot count

// Consumer code
wait(full);    // Decrement full-slot count (block if the buffer is empty)
// Consume an item from the buffer
signal(empty); // Increment empty-slot count
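The same pattern as a runnable POSIX sketch (unnamed semaphores, assuming Linux; buffer size N = 3 is an illustrative choice, and the mutex protects the buffer indices):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 3
static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 5; item++) {
        sem_wait(&empty_slots);              /* wait for an empty slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item; in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);               /* signal: one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int k = 0; k < 5; k++) {
        sem_wait(&full_slots);               /* wait for a full slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);              /* signal: one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);            /* N empty slots initially */
    sem_init(&full_slots, 0, 0);             /* no full slots initially */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}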
In summary, semaphores are crucial for synchronizing processes and threads in an OS,
ensuring that they can safely share resources without conflict, and preventing issues like race
conditions and deadlocks. The two types of semaphores—binary and counting—serve
different purposes but are essential tools for managing concurrency in a system.