
Okay, here are the detailed answers to the questions in the Operating Systems document you provided:

Section A
Question 1

a. Resource Allocation Graphs and Deadlock


●​ Concept: A Resource Allocation Graph (RAG) is a directed graph used to visualize
the state of resource allocation in a system. It consists of:
○​ Processes (P): Represented as circles.
○​ Resource Types (R): Represented as squares. Each dot inside a square
represents an instance of that resource type.
○​ Request Edge: A directed edge from a process (P) to a resource type (R) (P
→ R) indicates that process P has requested an instance of resource type R
and is waiting for it.
○​ Assignment Edge: A directed edge from a resource instance (dot within R) to
a process (P) (R → P) indicates that an instance of resource type R has been
allocated to process P.
●​ Deadlock Detection: A cycle in the RAG is a necessary condition for a deadlock.
○​ If the RAG contains no cycles, then no deadlock exists in the system.
○​ If the RAG contains a cycle:
■​ If each resource type has only one instance, a cycle implies a deadlock
has occurred.
■ If resource types have multiple instances, a cycle indicates a deadlock might exist. Further analysis (a deadlock detection algorithm, structurally similar to the Banker's safety check) is needed to confirm whether the processes involved in the cycle are truly deadlocked. (A small cycle-detection sketch for the single-instance case appears at the end of this answer.)
●​ Deadlock Prevention: RAGs aren't directly used for prevention in the sense of
dynamically stopping deadlocks before they happen. However, understanding the
potential for cycles helps in designing systems that prevent deadlocks by
negating one of the four necessary conditions (Mutual Exclusion, Hold and Wait,
No Preemption, Circular Wait). For example, requiring processes to request all
resources at once prevents the 'Hold and Wait' condition, which would manifest
as a process holding some resources while requesting others in the RAG.
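For the single-instance case described above, the RAG collapses to a wait-for graph (an edge Pi → Pj means Pi waits for a resource held by Pj), and deadlock detection reduces to cycle detection. Below is a minimal C sketch, assuming a small fixed process count and an illustrative edge list that is not taken from the question:

    /* Minimal sketch: cycle detection in a wait-for graph (single-instance
     * resources), using DFS with a three-colour marking.                    */
    #include <stdbool.h>
    #include <stdio.h>

    #define N 4                 /* number of processes (illustrative) */

    int adj[N][N];              /* adj[i][j] = 1 if Pi waits for a resource held by Pj */
    int color[N];               /* 0 = unvisited, 1 = on current DFS path, 2 = done */

    bool dfs_has_cycle(int u) {
        color[u] = 1;                        /* u is on the current path */
        for (int v = 0; v < N; v++) {
            if (!adj[u][v]) continue;
            if (color[v] == 1) return true;  /* back edge -> cycle -> deadlock */
            if (color[v] == 0 && dfs_has_cycle(v)) return true;
        }
        color[u] = 2;
        return false;
    }

    int main(void) {
        /* Example edges: P0 -> P1 -> P2 -> P0 form a cycle; P3 is independent. */
        adj[0][1] = adj[1][2] = adj[2][0] = 1;

        bool deadlock = false;
        for (int i = 0; i < N && !deadlock; i++)
            if (color[i] == 0) deadlock = dfs_has_cycle(i);

        printf("deadlock detected: %s\n", deadlock ? "yes" : "no");
        return 0;
    }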

b. Deadlock Possibility (2 Processes, 3 Resources, Max 2 Each)


●​ No, a deadlock situation is not possible in this scenario.
●​ Explanation: Let the three identical resources be r1, r2, and r3.
○​ Each process (P1, P2) needs a maximum of two resources.
○​ The total maximum need is 2 + 2 = 4 resources.
○​ However, the system only has 3 resources available.
○​ For a deadlock to occur, both processes must be holding at least one
resource and waiting for another resource held by the other process (Circular
Wait & Hold and Wait conditions).
○​ Worst Case Scenario: Assume P1 holds one resource (e.g., r1) and requests
another. P2 holds another resource (e.g., r2) and requests another. There is
still one resource free (r3). This free resource can be allocated to either P1 or
P2, allowing one of them to acquire its maximum needed resources (two),
complete its execution, and release its resources. Once released, the other
process can acquire its needed resources and complete. Because there's
always a path to completion for at least one process, a deadlock cannot
occur. In general, for identical single-unit resources, deadlock is impossible whenever the total number of resources is at least the sum over all processes of (maximum need - 1), plus 1. Here that bound is (2 - 1) + (2 - 1) + 1 = 3, and exactly 3 resources are available, so deadlock cannot arise (formalized below).
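The same argument in the standard textbook form (LaTeX notation; m is the number of identical single-unit resources, n the number of processes, and max_i the maximum claim of process P_i):

    \[
      m \;\ge\; \sum_{i=1}^{n} (\max_i - 1) + 1
      \quad\Longrightarrow\quad \text{deadlock is impossible.}
    \]

For this question n = 2 and max_1 = max_2 = 2, so the bound is (2 - 1) + (2 - 1) + 1 = 3, which the m = 3 available resources meet.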

c. Multiprogramming on a Uniprocessor System


●​ Multiprogramming on a uniprocessor system creates the illusion of simultaneous
execution by rapidly switching the CPU between multiple processes. This is
achieved through CPU scheduling and context switching. The OS keeps several
processes in memory at once. When one process has to wait (e.g., for I/O), the OS
switches the CPU to another ready process.
●​ Preemptive Approach: The OS decides when to switch processes. A running
process can be interrupted (preempted) by the OS after a certain time slice
expires or when a higher-priority process becomes ready. The OS saves the
current process's state (context) and loads the state of the next process. This
ensures fairness and responsiveness, preventing one process from monopolizing
the CPU.
●​ Non-Preemptive Approach: A process runs until it voluntarily relinquishes the
CPU (e.g., terminates, blocks for I/O, or explicitly yields). The OS only switches
processes when the running process gives up the CPU. This is simpler but can
lead to poor response times if a process runs for a very long time without
yielding.

d. Interrupts, Dual Mode Operation, and System Calls


●​ Purpose of Interrupts: Interrupts are signals sent to the CPU by hardware (e.g.,
I/O devices finishing an operation, timer expiring) or software (e.g., errors, system
calls) to indicate that an event needing immediate attention has occurred. They
allow the CPU to pause its current execution, handle the event (via an Interrupt
Service Routine - ISR), and then resume the interrupted task. This mechanism is
crucial for efficient I/O handling, multitasking, and responding to exceptional
conditions without constantly polling devices.
●​ Interrupt-Driven System Operation with Dual Mode & System Calls:
1.​ Dual Mode Operation: The CPU operates in at least two modes: User Mode
(for user applications, restricted access) and Kernel Mode (or
Supervisor/Privileged Mode, for the OS, unrestricted access). This protects
the OS and system resources from user programs.
2.​ System Calls: When a user program needs an OS service (like I/O or process
management) that requires privileged instructions, it cannot execute them
directly in User Mode. It makes a system call.
3.​ The Trap: The system call instruction triggers a software interrupt (often
called a 'trap').
4.​ Mode Switch: This trap causes the CPU to switch from User Mode to Kernel
Mode.
5.​ Interrupt Handling: The CPU transfers control to a specific OS routine (part
of the interrupt handler or system call handler).
6.​ Service Execution: The OS, now in Kernel Mode, performs the requested
service (e.g., handles the hardware interrupt, performs the file read requested
via system call).
7.​ Return and Mode Switch: Once the service is complete, the OS executes a
special return instruction that switches the CPU back to User Mode and
returns control to the user program just after the system call instruction.

e. User Program Executing a System Call


1.​ User Program Request: The user program needs an OS service (e.g., reading
data from a file). It calls a library function (wrapper) corresponding to the desired
system call (e.g., read()).
2.​ Library Function: This library function sets up the necessary parameters (e.g.,
file descriptor, buffer address, number of bytes) in specific registers or on the
stack, according to the OS's convention. It then executes a special 'trap' or
'system call' instruction.
3.​ Trap to Kernel Mode: The trap instruction causes a software interrupt, leading
the hardware to switch the CPU from User Mode to Kernel Mode and transfer
control to a predefined location in the OS's interrupt/trap handler.
4.​ System Call Dispatcher: The OS examines a parameter (e.g., a number in a
specific register) passed during the trap to identify which system call was
requested. It uses this number to index into a table (the system call table)
containing pointers to the actual kernel routines that implement each system call.
5.​ Execute Kernel Routine: The OS executes the specific kernel function
corresponding to the requested system call (e.g., the kernel's internal file reading
routine). This routine runs with full kernel privileges and performs the required
operation (accessing the device driver, reading data from disk into a kernel buffer,
then copying it to the user program's buffer).
6.​ Return to User Mode: Once the kernel routine completes, control returns to the
system call dispatcher/handler. It prepares the return value (e.g., number of bytes
read, error code). A special instruction is executed to switch the CPU back from
Kernel Mode to User Mode.
7.​ Resume User Program: Control is returned to the user program's library
function, immediately following the trap instruction. The library function receives
the return value from the kernel, potentially performs some cleanup, and returns
the result to the calling user code.
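A minimal Linux-flavoured illustration of this flow (glibc and a POSIX environment are assumed; the file path is only an example): the read() library wrapper performs steps 1-3 above, while the generic syscall() entry point makes the system-call number used in step 4 explicit.

    /* Minimal sketch (Linux/glibc assumed): the same kernel service invoked
     * through the library wrapper and through the raw system-call entry point. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/syscall.h>

    int main(void) {
        char buf[64];

        int fd = open("/etc/hostname", O_RDONLY);  /* open() is itself a wrapper over a syscall */
        if (fd < 0) { perror("open"); return 1; }

        /* 1. Library wrapper: read() loads the syscall number and arguments,
         *    then executes the trap instruction (steps 2-3 above).            */
        ssize_t n = read(fd, buf, sizeof buf);

        /* 2. Same request via the generic syscall() entry point, making the
         *    number used to index the system call table (step 4) explicit.    */
        lseek(fd, 0, SEEK_SET);
        ssize_t m = syscall(SYS_read, fd, buf, sizeof buf);

        printf("read() returned %zd bytes, syscall(SYS_read, ...) returned %zd\n", n, m);
        close(fd);
        return 0;
    }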

f. Mutual Exclusion
●​ Concept: Mutual exclusion is a fundamental synchronization property ensuring
that no two concurrent processes (or threads) are simultaneously active inside
their critical section. A critical section is a piece of code that accesses shared
resources (such as shared variables, data structures, files, or devices, e.g., the bank database referred to in the question).
●​ Importance for Synchronization: It prevents race conditions, where the
outcome of computation depends on the unpredictable timing of concurrent
access to shared resources. In the bank example, without mutual exclusion, two
tellers might try to update the same account balance concurrently, leading to
incorrect results (e.g., one withdrawal overwriting another). It ensures data
integrity and consistency in concurrent systems.
●​ Implementation: Mutual exclusion can be implemented using various
mechanisms:
○ Mutex Locks (as mentioned in the question): A common technique. A process must acquire
the mutex lock before entering the critical section and release it upon exiting.
Only one process can hold the lock at a time. Others trying to acquire a held
lock will block (wait) until it's released.
○​ Semaphores: Binary semaphores (initialized to 1) can function like mutexes.
wait() (or P()) decrements the semaphore (acquires the lock), signal() (or V())
increments it (releases the lock).
○​ Monitors: High-level language constructs that encapsulate shared data and
procedures operating on it, automatically enforcing mutual exclusion for
monitor procedures.
○ Hardware Support: Special atomic read-modify-write instructions (e.g., Test-and-Set, Compare-and-Swap) provided by the CPU can be used as building blocks for implementing locks efficiently (see the spinlock sketch after this list).
○​ Disabling Interrupts (Careful Use): On a uniprocessor, temporarily disabling
interrupts can prevent context switches within a critical section, ensuring
atomicity. However, this is dangerous, affects system responsiveness, and
doesn't work on multiprocessor systems.
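As an illustration of the hardware-support point, here is a minimal spinlock sketch using C11 atomics (a C11 compiler is assumed); atomic_flag_test_and_set plays the role of the Test-and-Set instruction. It is a sketch only, not a production lock (pure busy-waiting, no backoff):

    /* Minimal spinlock built on an atomic test-and-set (C11 <stdatomic.h>). */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
    static long shared_balance = 100;             /* illustrative shared data */

    static void acquire(void) {
        /* test_and_set atomically sets the flag and returns its previous value;
         * spin until we observe it was previously clear (i.e., we got the lock). */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;  /* busy-wait */
    }

    static void release(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    int main(void) {
        acquire();                 /* entry section */
        shared_balance -= 30;      /* critical section: update shared data */
        release();                 /* exit section */
        printf("balance = %ld\n", shared_balance);
        return 0;
    }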

g. Importance of Compute Virtualization (6 Marks)

Compute virtualization is the creation of a virtual (rather than actual) version of something, such as a server, storage device, network, or operating system. It's incredibly important for several reasons:
1.​ Resource Efficiency & Consolidation: Allows multiple virtual machines (VMs),
each running its own OS and applications, to run concurrently on a single physical
machine. This drastically improves hardware utilization, reduces the number of
physical servers needed, saving costs on hardware, power, cooling, and space.
2.​ Isolation & Security: VMs are isolated from each other. A crash or security
breach in one VM generally does not affect others running on the same host. This
allows running different services or applications with varying security
requirements on shared hardware.
3.​ Flexibility & Agility: VMs can be created, cloned, moved, backed up, and
restored quickly and easily. This facilitates rapid deployment of new applications,
testing environments, disaster recovery, and load balancing. Developers can have
isolated environments matching production setups.
4.​ Legacy Support: Allows running older operating systems or applications that
may not be compatible with modern hardware on a virtualized layer.
5.​ Sandboxing: Provides a safe environment (sandbox) to run untrusted
applications or test software without risking the host operating system.
●​ Clear Example: A company needs to host its website, an internal database
server, and a development/testing environment. Instead of buying three separate
physical servers, they can buy one powerful server and run three VMs on it using
virtualization software (like VMware ESXi, Microsoft Hyper-V, or KVM).
○​ VM1 runs a Linux distribution with Apache web server for the website.
○​ VM2 runs Windows Server with SQL Server for the database.
○​ VM3 runs another Linux instance for developers to test new code. Each VM
operates independently, uses a portion of the physical server's resources
(CPU, RAM, storage), and can be managed separately. This is far more
cost-effective and flexible than three physical machines.

h. Missing Transitions in Process State Diagram

The standard three-state model (Running, Ready, Blocked) typically shows these
transitions:
1.​ Running → Ready (Preemption, e.g., time slice expired)
2.​ Running → Blocked (Waiting for event, e.g., I/O request)
3.​ Ready → Running (Scheduler dispatches process)
4.​ Blocked → Ready (Event occurred, e.g., I/O completed)

The two missing transitions from the theoretical six are:


●​ Missing 1: Blocked → Running:
○​ Circumstance: This transition might occur in some systems if the event a
process was blocked on occurs, and the scheduler decides to immediately run
that process without placing it in the Ready queue first (perhaps because it's
the highest priority process and the CPU is currently idle or running a
lower-priority process that can be preempted). In practice, most schedulers always move the process from Blocked to Ready first, and Ready → Running remains a separate scheduling decision; a direct Blocked → Running transition is at most an optimization in some priority-based preemptive systems.
●​ Missing 2: Ready → Blocked:
○​ Circumstance: This transition is generally considered impossible or
nonsensical within the standard model. A process in the Ready state is waiting
only for the CPU; it has everything else it needs to run. It cannot transition to
Blocked (waiting for an event like I/O) without first being scheduled to Run
and then initiating the operation that causes blocking. A process cannot
initiate an I/O request (or other blocking operation) while it is merely Ready
and not actually executing on the CPU.

Section B
Question 2

a) Racing and Mutually Exclusive Critical Sections


● Racing (Race Condition): A race condition occurs when multiple processes or threads access and manipulate shared data concurrently, and the final result depends on the particular order in which their accesses take place (the timing of execution). Because the timing is generally unpredictable, the outcome is non-deterministic and often incorrect (demonstrated in the sketch after this answer).
●​ Why Mutual Exclusion is Necessary: Critical sections are segments of code
where processes access shared resources. If multiple processes execute their
critical sections concurrently without coordination, they can interfere with each
other, leading to race conditions and corrupted shared data. Mutual exclusion
ensures that only one process can be inside its critical section at any given time,
preventing simultaneous access to the shared resource and thus eliminating race
conditions related to that resource. It guarantees the atomicity and integrity of
operations within the critical section.
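A minimal POSIX-threads sketch of the race and its fix (compile with -pthread; the counter and iteration count are illustrative): with USE_MUTEX set to 0, the two threads race on the shared counter and the final value varies between runs; with the mutex, only one thread is in the critical section at a time and the result is always 2,000,000.

    /* Minimal race-condition demonstration (POSIX threads assumed). */
    #include <pthread.h>
    #include <stdio.h>

    #define USE_MUTEX 1
    #define ITERS 1000000

    static long counter = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
    #if USE_MUTEX
            pthread_mutex_lock(&m);      /* entry to critical section */
    #endif
            counter++;                   /* critical section: read-modify-write on shared data */
    #if USE_MUTEX
            pthread_mutex_unlock(&m);    /* exit from critical section */
    #endif
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final counter = %ld (expected %d)\n", counter, 2 * ITERS);
        return 0;
    }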

b) Applying Deadlock Conditions to Figure 2a

(Assuming Figure 2a depicts the classic "four cars at an intersection" scenario, each
wanting to proceed forward but blocked by the car to its right)
1.​ Mutual Exclusion: Each section of the intersection (the resource) can only be
occupied by one car (process) at a time. A car cannot drive through a space
already occupied by another car.
2.​ Hold and Wait: Each car is occupying one section of the intersection (holding a
resource, e.g., car 'a' holds the space it's in) and waiting to enter the section
occupied by the car to its right (waiting for another resource, e.g., car 'a' waits for
the space held by car 'b').
3.​ No Preemption: A section of the intersection occupied by a car cannot be
forcibly taken away from it (e.g., car 'b' cannot be forcibly removed from its spot
to let car 'a' pass). Cars must voluntarily vacate their spot (resource).
4.​ Circular Wait: There exists a circular chain of cars waiting for resources held by
the next car in the chain. For example, car 'a' waits for the resource held by car
'b', car 'b' waits for the resource held by car 'c', car 'c' waits for the resource held
by car 'd', and car 'd' waits for the resource held by car 'a'.

c) Applying Deadlock Techniques to Figure 2

(Applying techniques to the car intersection analogy)


1.​ Prevention: Eliminate one of the four necessary conditions:
○​ Break Mutual Exclusion: Not feasible; two cars cannot occupy the same
space.
○​ Break Hold and Wait: Require cars to reserve all intersection segments they
need before entering (difficult) or force cars to release their current spot if
they cannot acquire the next one immediately (back up).
○​ Break No Preemption: Allow preemption – forcibly remove a car from the
intersection (e.g., police intervention, towing - impractical for normal traffic
flow).
○​ Break Circular Wait: Impose an ordering. For example, establish a rule like
"always yield to the car on your right" or traffic lights that enforce a specific
order of entry.
2.​ Avoidance: Requires foresight. Before a car enters the intersection, a system (like
a traffic controller or autonomous driving coordination) could check if allowing
entry could potentially lead to a deadlock based on the intentions of other cars. If
entry could lead to an unsafe state, the car is made to wait. (e.g., Banker's
algorithm equivalent for traffic).
3.​ Detection and Recovery: Allow deadlocks to occur, detect them, and then
recover.
○​ Detection: Observe that traffic is not flowing and identify the circular wait
condition (all cars waiting, none moving).
○​ Recovery: One or more cars must be forced to back up (preemption/rollback)
or removed (process termination) to break the cycle. For example, one car is
instructed by police to reverse out of the intersection.

d) Techniques for Resolving Deadlocks

(Note: The prompt mentions a protocol where processes acquire all resources first.
This is actually a deadlock prevention technique (breaking Hold and Wait). If this
protocol is strictly followed, deadlocks shouldn't occur. However, the question asks
how to resolve deadlocks if they happen in such a system, implying the protocol might
fail or the question context is slightly shifted. Assuming deadlocks can still occur for
some reason, here are resolution techniques.)
1.​ Process Termination: Abort one or more processes involved in the deadlock
cycle.
○​ Methods:
■​ Abort all deadlocked processes: Simple, but costly as all work done is lost.
■​ Abort one process at a time: Abort one process, check if the deadlock is
resolved. If not, abort another, and so on. Choosing which process to
abort is key (e.g., lowest priority, least progress made, fewest resources
held).
○​ Advantages: Relatively easy to implement. Guaranteed to break the deadlock
eventually.
○​ Disadvantages: Loss of computation, potentially requires processes to be
restarted from scratch. Difficult to choose the optimal victim.
○​ Example Scenario: In a batch processing system where jobs can be easily
restarted, terminating a deadlocked job might be acceptable.
2.​ Resource Preemption: Forcibly take away resources from one or more
deadlocked processes and give them to other processes until the deadlock cycle
is broken.
○​ Issues:
■​ Victim Selection: Which process and which resources to preempt?
Minimize cost.
■​ Rollback: The process that loses a resource must be rolled back to a prior
safe state before it acquired that resource. This can be complex, requiring
checkpointing or state saving.
■​ Starvation: Ensure the same process isn't always chosen as the victim.
○​ Advantages: Potentially less disruptive than process termination if rollback is
feasible.
○​ Disadvantages: Complex to implement rollback mechanisms. Overhead of
state saving. Potential for starvation.
○​ Example Scenario: In a database system, a transaction holding locks might be
chosen for rollback to resolve a deadlock, using the database's transaction
logging/recovery mechanisms.
3.​ Operator Intervention: Alert a human operator about the deadlock, allowing
them to manually intervene and resolve it (e.g., by killing a process, releasing a
resource if possible).
○​ Advantages: Flexible; the operator can use judgment based on the specific
situation.
○​ Disadvantages: Slow, requires human monitoring, not suitable for automated
or time-critical systems.
○​ Example Scenario: A long-running scientific simulation gets stuck; an
administrator investigates and manually terminates one of the processes
involved.

(Note: Deadlock detection algorithms must run first to identify that a deadlock exists
and which processes/resources are involved before recovery techniques can be
applied.)

Question 3

a. Preemptive vs. Non-Preemptive Scheduling


●​ Non-Preemptive Scheduling: Once the CPU has been allocated to a process,
that process keeps the CPU until it either terminates or voluntarily switches to the
Waiting state (e.g., for I/O). The scheduler cannot force the process off the CPU.
○​ Example: First-Come, First-Served (FCFS). If Process A arrives first and needs
100ms of CPU time, and Process B arrives shortly after needing only 1ms,
Process B must wait the full 100ms for Process A to finish, even if Process A
doesn't need I/O.
●​ Preemptive Scheduling: The CPU can be taken away from a currently running
process (preempted) by the OS and allocated to another process. This usually
happens when a higher-priority process becomes ready or when the current
process's time slice expires.
○​ Example: Round Robin (RR). Each process gets a small time slice (quantum). If
Process A is running and its time slice expires before it finishes, the OS
preempts it, saves its state, and schedules the next process in the ready
queue (e.g., Process B). Process A goes back to the ready queue to await its
next turn. This provides better responsiveness.

b. Waiting Time vs. Turnaround Time


●​ Waiting Time: The total amount of time a process spends waiting in the ready
queue before it gets allocated the CPU to run. It does not include time spent
executing on the CPU or time spent waiting for I/O (Blocked state).
○​ Example: Process P arrives at time 0, needs 5ms CPU. It waits until time 2,
runs for 3ms, waits for I/O until time 8, gets put back in the ready queue, waits
until time 10, runs for the remaining 2ms, and finishes at time 12. Its waiting
time is (2-0) + (10-8) = 2 + 2 = 4ms.
●​ Turnaround Time: The total time elapsed from the moment a process arrives in
the system until the moment it completes its execution. It includes time spent
waiting in the ready queue, time executing on the CPU, and time waiting for I/O.
○​ Calculation: Turnaround Time = Completion Time - Arrival Time
○​ Example: For Process P above, Arrival Time = 0, Completion Time = 12.
Turnaround Time = 12 - 0 = 12ms. (Alternatively: Waiting Time + CPU Burst
Time + I/O Waiting Time).

c. Scheduling Algorithm Performance Calculation

Processes:
●​ N1: Arrival=0, Burst=25
●​ N2: Arrival=5, Burst=15
●​ N3: Arrival=10, Burst=5
●​ N4: Arrival=15, Burst=5

i. First Come First Serve (FCFS)


●​ Order: N1 → N2 → N3 → N4
●​ Gantt Chart:​
| N1 (0-25) | N2 (25-40) | N3 (40-45) | N4 (45-50) |​
0 25 40 45 50​

●​ Calculations:
○​ Completion Times (CT): N1=25, N2=40, N3=45, N4=50
○​ Turnaround Times (TAT = CT - Arrival):
■​ N1: 25 - 0 = 25
■​ N2: 40 - 5 = 35
■​ N3: 45 - 10 = 35
■​ N4: 50 - 15 = 35
■​ Average TAT: (25 + 35 + 35 + 35) / 4 = 130 / 4 = 32.5 ms
○​ Waiting Times (WT = TAT - Burst):
■​ N1: 25 - 25 = 0
■​ N2: 35 - 15 = 20
■​ N3: 35 - 5 = 30
■​ N4: 35 - 5 = 30
■​ Average WT: (0 + 20 + 30 + 30) / 4 = 80 / 4 = 20 ms
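The FCFS figures above can be checked with a short program (C; the arrival and burst times are taken from the question, everything else is illustrative):

    /* Small check of the FCFS figures: completion, turnaround and waiting times. */
    #include <stdio.h>

    int main(void) {
        const char *name[] = {"N1", "N2", "N3", "N4"};
        int arrival[] = {0, 5, 10, 15};
        int burst[]   = {25, 15, 5, 5};
        int n = 4, time = 0;
        double sum_tat = 0, sum_wt = 0;

        for (int i = 0; i < n; i++) {                  /* processes already in arrival order */
            if (time < arrival[i]) time = arrival[i];  /* CPU idles until the process arrives */
            time += burst[i];                          /* completion time */
            int tat = time - arrival[i];               /* turnaround = completion - arrival */
            int wt  = tat - burst[i];                  /* waiting = turnaround - burst */
            printf("%s: CT=%d TAT=%d WT=%d\n", name[i], time, tat, wt);
            sum_tat += tat; sum_wt += wt;
        }
        printf("avg TAT = %.2f ms, avg WT = %.2f ms\n", sum_tat / n, sum_wt / n);
        return 0;
    }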

ii. Preemptive Shortest Job First (SJF) / Shortest Remaining Time First (SRTF)
●​ Gantt Chart & Execution Trace:
○​ t=0: N1 arrives (Burst=25). N1 starts running.
○​ t=5: N2 arrives (Burst=15). N1 remaining=20. N2 burst (15) < N1 remaining (20).
Preempt N1. N2 starts running.
○​ t=10: N3 arrives (Burst=5). N2 remaining=10. N3 burst (5) < N2 remaining (10).
Preempt N2. N3 starts running.
○​ t=15: N4 arrives (Burst=5). N3 remaining=0 (finishes exactly at t=15). Compare
N1(20), N2(10), N4(5). N4 is shortest. N4 starts running.
○​ t=20: N4 finishes (Burst=5). Compare N1(20), N2(10). N2 is shorter. N2
resumes.
○​ t=30: N2 finishes (Ran for 10 more). Only N1(20) left. N1 resumes.
○​ t=50: N1 finishes (Ran for 20 more).
| N1 | N2 | N3 | N4 | N2 | N1 |​
0 5 10 15 20 30 50​

●​ Calculations:
○​ Completion Times (CT): N1=50, N2=30, N3=15, N4=20
○​ Turnaround Times (TAT = CT - Arrival):
■​ N1: 50 - 0 = 50
■ N2: 30 - 5 = 25
■ N3: 15 - 10 = 5
■ N4: 20 - 15 = 5
■ Average TAT: (50 + 25 + 5 + 5) / 4 = 85 / 4 = 21.25 ms
○​ Waiting Times (WT = TAT - Burst):
■​ N1: 50 - 25 = 25
■​ N2: 25 - 15 = 10
■​ N3: 5 - 5 = 0
■​ N4: 5 - 5 = 0
■​ Average WT: (25 + 10 + 0 + 0) / 4 = 35 / 4 = 8.75 ms

iii. Non-Preemptive Shortest Job First (SJF)


● The scheduler selects a new process only when the CPU becomes free; a running process is never preempted.
●​ Gantt Chart & Execution Trace:
○​ t=0: N1 arrives (Burst=25). N1 starts running (Non-preemptive).
○​ t=5: N2 arrives (Burst=15). Ready: {N2}. N1 continues.
○​ t=10: N3 arrives (Burst=5). Ready: {N2(15), N3(5)}. N1 continues.
○​ t=15: N4 arrives (Burst=5). Ready: {N2(15), N3(5), N4(5)}. N1 continues.
○​ t=25: N1 finishes. CPU is free. Ready queue: {N2(15), N3(5), N4(5)}. Shortest
are N3 and N4 (both 5). Use FCFS for tie-break: N3 arrived before N4. N3
starts.
○​ t=30: N3 finishes. CPU is free. Ready queue: {N2(15), N4(5)}. Shortest is N4.
N4 starts.
○​ t=35: N4 finishes. CPU is free. Ready queue: {N2(15)}. N2 starts.
○​ t=50: N2 finishes.

| N1 | N3 | N4 | N2 |​
0 25 30 35 50​

●​ Calculations:
○​ Completion Times (CT): N1=25, N2=50, N3=30, N4=35
○​ Turnaround Times (TAT = CT - Arrival):
■​ N1: 25 - 0 = 25
■​ N2: 50 - 5 = 45
■​ N3: 30 - 10 = 20
■​ N4: 35 - 15 = 20
■​ Average TAT: (25 + 45 + 20 + 20) / 4 = 110 / 4 = 27.5 ms
○​ Waiting Times (WT = TAT - Burst):
■​ N1: 25 - 25 = 0
■​ N2: 45 - 15 = 30
■​ N3: 20 - 5 = 15
■​ N4: 20 - 5 = 15
■​ Average WT: (0 + 30 + 15 + 15) / 4 = 60 / 4 = 15 ms

Question 4

a) Bounded Buffer vs. Unbounded Buffer


●​ Bounded Buffer: Has a fixed, limited size. The producer process can only add
items to the buffer if it is not full. The consumer process can only remove items if
the buffer is not empty. This requires synchronization to prevent the producer
from adding to a full buffer and the consumer from removing from an empty
buffer. It reflects realistic scenarios where storage space is finite.
●​ Unbounded Buffer: Has no practical limit on its size. The producer can always
produce and add items to the buffer without ever having to wait (assuming infinite
storage). The consumer may still have to wait if the buffer is empty. While simpler
conceptually, it's not physically realistic as memory/storage is always finite.

b) Producer-Consumer Problem and Semaphore Solution


●​ Problem Explanation: A classic synchronization problem involving two types of
processes: Producers, which generate data items and put them into a shared
buffer, and Consumers, which take items out of the buffer and consume them.
The challenges are:
1.​ Ensuring the producer doesn't try to add data to a full buffer (in the bounded
buffer case).
2.​ Ensuring the consumer doesn't try to remove data from an empty buffer.
3.​ Ensuring mutual exclusion when accessing and modifying the buffer itself
(e.g., updating pointers or counters).
●​ Example Scenario: A web server producing log entries (producer) that are
written to a shared memory buffer. A separate log processing service (consumer)
reads these entries from the buffer for analysis or storage.
●​ Semaphore Solution (Bounded Buffer):
○​ Use three semaphores:
1.​ mutex: A binary semaphore (initialized to 1) for mutual exclusion when
accessing the buffer.
2.​ empty: A counting semaphore (initialized to buffer size n) representing the
number of empty slots in the buffer. The producer waits on this.
3.​ full: A counting semaphore (initialized to 0) representing the number of
full slots (items) in the buffer. The consumer waits on this.
○ Producer Code Structure:
   do {
       // ... produce an item ...
       wait(empty);   // wait if buffer is full (decrement count of empty slots)
       wait(mutex);   // acquire lock for buffer access
       // ... add item to buffer ...
       signal(mutex); // release lock
       signal(full);  // buffer has one more item (increment count of full slots)
   } while (true);

○ Consumer Code Structure:
   do {
       wait(full);    // wait if buffer is empty (decrement count of full slots)
       wait(mutex);   // acquire lock for buffer access
       // ... remove item from buffer ...
       signal(mutex); // release lock
       signal(empty); // buffer has one more empty slot (increment count of empty slots)
       // ... consume the item ...
   } while (true);
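A compact runnable version of the pseudocode above, assuming POSIX unnamed semaphores and pthreads (sem_init/sem_wait/sem_post; compile with -pthread). The buffer size and item count are illustrative:

    /* Bounded-buffer producer/consumer with POSIX semaphores and a mutex. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUF_SIZE 5
    #define ITEMS    10

    static int buffer[BUF_SIZE];
    static int in = 0, out = 0;            /* next free slot / next full slot */

    static sem_t empty_slots;              /* counts empty slots, starts at BUF_SIZE */
    static sem_t full_slots;               /* counts full slots, starts at 0 */
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < ITEMS; item++) {
            sem_wait(&empty_slots);           /* wait(empty): block if buffer is full */
            pthread_mutex_lock(&mutex);       /* wait(mutex) */
            buffer[in] = item;
            in = (in + 1) % BUF_SIZE;
            pthread_mutex_unlock(&mutex);     /* signal(mutex) */
            sem_post(&full_slots);            /* signal(full) */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full_slots);            /* wait(full): block if buffer is empty */
            pthread_mutex_lock(&mutex);       /* wait(mutex) */
            int item = buffer[out];
            out = (out + 1) % BUF_SIZE;
            pthread_mutex_unlock(&mutex);     /* signal(mutex) */
            sem_post(&empty_slots);           /* signal(empty) */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, BUF_SIZE);
        sem_init(&full_slots, 0, 0);

        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);

        sem_destroy(&empty_slots);
        sem_destroy(&full_slots);
        return 0;
    }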

c) Reader-Writer Problem and Solution


●​ Concept Explanation: Another classic synchronization problem where a shared
data resource (like a file or data structure) is accessed by two types of processes:
Readers and Writers.
○​ Readers: Only read the data; they do not modify it. Multiple readers can
access the data concurrently without issue.
○​ Writers: Modify the data. A writer must have exclusive access to the data; no
other reader or writer should be accessing it simultaneously.
●​ Similarity to Producer-Consumer: Both involve coordinating access to a shared
resource between different types of processes/threads. Both require
synchronization mechanisms to prevent errors.
●​ Difference from Producer-Consumer: The access constraints are different.
Producer-Consumer typically involves adding/removing items from a buffer,
requiring mutual exclusion for buffer manipulation. Reader-Writer allows
concurrent reads but requires exclusive access for writes. This allows for
potentially higher concurrency than a simple mutex if reads are frequent.
●​ Scenario: An airline reservation system. Many travel agents (readers) can
concurrently check flight seat availability. However, when an agent books a seat
(writer), they must have exclusive access to update the seat count and passenger
list for that flight to prevent double-booking or inconsistent views.
●​ Solution (using Semaphores - Reader Priority):
○​ Use two semaphores and a shared counter:
1.​ mutex: Binary semaphore (initialized to 1) to protect access to the
read_count.
2.​ rw_mutex: Binary semaphore (initialized to 1) used by writers to ensure
exclusive access and by the first reader entering to block writers.
3.​ read_count: Integer (initialized to 0) tracking the number of active
readers.
○ Writer Code Structure:
   do {
       wait(rw_mutex);   // lock out readers and other writers
       // ... perform write operation ...
       signal(rw_mutex); // release lock
   } while (true);

○ Reader Code Structure:
   do {
       wait(mutex);          // lock to safely modify read_count
       read_count++;
       if (read_count == 1) {
           wait(rw_mutex);   // first reader locks out writers
       }
       signal(mutex);        // release lock for read_count

       // ... perform read operation ...

       wait(mutex);          // lock to safely modify read_count
       read_count--;
       if (read_count == 0) {
           signal(rw_mutex); // last reader releases the lock for writers
       }
       signal(mutex);        // release lock for read_count
   } while (true);

○ (Note: This is a reader-priority solution; writer-priority or fair solutions also exist but are more complex. Monitors can also provide cleaner solutions. A sketch using the POSIX reader-writer lock follows below.)
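POSIX also ships a ready-made reader-writer lock (pthread_rwlock_t) with the same read-shared/write-exclusive semantics as the semaphore solution above; a minimal sketch (POSIX threads assumed, the seat data is illustrative; compile with -pthread):

    /* Reader-writer access to shared data using the POSIX rwlock. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int seats_available = 10;       /* shared data, e.g., seats on a flight */

    static void *reader(void *arg) {
        (void)arg;
        pthread_rwlock_rdlock(&rw);        /* many readers may hold this concurrently */
        printf("reader sees %d seats\n", seats_available);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg) {
        (void)arg;
        pthread_rwlock_wrlock(&rw);        /* exclusive: no readers or writers allowed */
        seats_available--;                 /* book one seat */
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&w,  NULL, writer, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(r1, NULL);
        pthread_join(w, NULL);
        pthread_join(r2, NULL);
        printf("final seats: %d\n", seats_available);
        return 0;
    }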
Question 5

i. Metrics for Disk Performance


1.​ Access Time: The total time it takes from when an I/O request (read or write) is
issued to the disk until the data transfer begins. It's composed of:
○​ Seek Time: Time taken for the disk arm/head assembly to move to the correct
track/cylinder. This is often the most significant component.
○​ Rotational Latency (or Delay): Time spent waiting for the desired sector on
the track to rotate under the read/write head. On average, it's half the time of
one full rotation.
2.​ Data Transfer Rate (Throughput): The rate at which data can be transferred
between the disk and main memory, usually measured in Megabytes per second
(MB/s). This depends on the disk's rotation speed, recording density, interface
speed (e.g., SATA, NVMe), and bus speed.
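A short worked example (the figures are typical illustrative values, not vendor specifications): with a 9 ms average seek, a 7200 RPM spindle, and a 100 MB/s sustained transfer rate, the access time dominates small requests.

    /* Worked example of disk access time with illustrative parameters. */
    #include <stdio.h>

    int main(void) {
        double seek_ms       = 9.0;                   /* average seek time */
        double rpm           = 7200.0;
        double rotation_ms   = 60000.0 / rpm;         /* one full rotation: ~8.33 ms */
        double latency_ms    = rotation_ms / 2.0;     /* average rotational latency */
        double transfer_MBps = 100.0;
        double request_KB    = 4.0;
        double transfer_ms   = request_KB / 1024.0 / transfer_MBps * 1000.0;

        printf("access time = %.2f ms (seek) + %.2f ms (latency) = %.2f ms\n",
               seek_ms, latency_ms, seek_ms + latency_ms);
        printf("transfer time for a 4 KB request = %.3f ms (dwarfed by access time)\n",
               transfer_ms);
        return 0;
    }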

ii. Block vs. Character I/O Devices


●​ Block Devices: Store information and transfer it in fixed-size blocks (e.g., 512
bytes, 4KB). Access is typically addressable, meaning you can read or write any
block directly. They usually support seeking (moving directly to a specific
location).
○​ Examples: Hard Disk Drives (HDDs), Solid-State Drives (SSDs), USB Flash
Drives, CD/DVD drives.
●​ Character Devices: Transfer data as a stream of characters (bytes) without
regard to block structure. Access is typically sequential. They usually do not
support seeking.
○​ Examples: Keyboards, Mice, Serial Ports (RS-232), Printers (in some modes),
Sound Cards.

iii. Requirements for Critical-Section Problem Solution

A valid solution to the critical-section problem must satisfy these three requirements:
1.​ Mutual Exclusion: If one process is executing in its critical section, no other
processes can be executing in their critical sections simultaneously.
2.​ Progress: If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will enter its critical section next. This selection cannot be postponed indefinitely.
(Essentially, if someone wants to enter and the critical section is free, a decision
must eventually be made allowing someone to enter).
3.​ Bounded Waiting (No Starvation): There must be a limit on the number of times
or the amount of time other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and before that
request is granted. This ensures that a process waiting to enter its critical
section will eventually get access and not wait forever (starvation).
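A classic two-process software solution that satisfies all three requirements is Peterson's algorithm. The sketch below uses C11 sequentially-consistent atomics so it also behaves correctly on modern hardware that reorders memory operations (thread ids and iteration counts are illustrative; compile with -pthread):

    /* Peterson's algorithm for two processes: mutual exclusion, progress,
     * and bounded waiting (each process waits at most one turn).            */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_bool flag[2];        /* flag[i]: process i wants to enter */
    static atomic_int  turn;           /* whose turn it is to defer */
    static long counter = 0;           /* shared data protected by the algorithm */

    static void enter(int i) {
        int other = 1 - i;
        atomic_store(&flag[i], true);          /* announce interest */
        atomic_store(&turn, other);            /* give the other process priority */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                                  /* busy-wait: bounded to one turn */
    }

    static void leave(int i) {
        atomic_store(&flag[i], false);
    }

    static void *worker(void *arg) {
        int id = (int)(long)arg;
        for (int k = 0; k < 100000; k++) {
            enter(id);
            counter++;                         /* critical section */
            leave(id);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }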

iv. Desirable Values for Performance Algorithm Measures

When evaluating the performance of scheduling algorithms, common metrics and


their desirable values are:
1.​ CPU Utilization:
○​ Measure: Percentage of time the CPU is busy executing processes (not idle).
○​ Desirable Value: High. We want the CPU to be doing useful work as much as
possible. Values approaching 100% are ideal in heavily loaded systems, but
100% constantly might indicate an overloaded system with no spare capacity.
Typical ranges might be 40% (light load) to 90% (heavy load).
○​ Reason: The CPU is often an expensive resource; keeping it busy maximizes
the return on investment and system throughput.
2.​ Throughput:
○​ Measure: Number of processes completed per unit of time (e.g., processes
per second).
○​ Desirable Value: High. More completed processes mean more work is getting
done.
○​ Reason: Represents the overall productivity of the system.
3.​ Turnaround Time:
○​ Measure: Time from process arrival to process completion.
○​ Desirable Value: Low. Users want their jobs or requests completed quickly.
○​ Reason: Reflects the total time a user or job has to wait for its result. Lower is
better for user satisfaction and system efficiency.
4.​ Waiting Time:
○​ Measure: Time a process spends waiting in the ready queue.
○​ Desirable Value: Low. Time spent waiting is unproductive time for the
process.
○​ Reason: Directly impacts turnaround time and system efficiency. Reducing
waiting time usually improves other metrics.
5.​ Response Time:
○​ Measure: Time from when a request is submitted until the first response is
produced (not necessarily the completion time). Crucial for interactive
systems.
○​ Desirable Value: Low (and often, predictable/consistent). Users interacting
with a system need quick feedback.
○​ Reason: Ensures the system feels responsive to users, even if the total job
takes longer. A low variance in response time is often as important as a low
average.
●​ Example: Comparing FCFS and RR. FCFS might have good throughput for long
batch jobs but terrible response time and potentially high average waiting time if
short jobs get stuck behind long ones. RR sacrifices some throughput (due to
context switching overhead) but provides much lower and fairer response times
and generally lower average waiting times in interactive environments. Therefore,
RR is often preferred for interactive systems (desiring low response time), while
FCFS might be simpler for batch systems (focused on throughput).
