OS Sample Solved
• Multilevel Scheduling:
o A general concept where processes are assigned to different queues based on
their characteristics (e.g., priority, memory requirements).
o Each queue may have its own scheduling algorithm.
o Less specific than feedback queues.
• Multilevel Feedback Queue Scheduling:
o A specific type of multilevel scheduling.
o Processes move between queues based on their behavior (e.g., CPU bursts).
o Often used to prioritize interactive processes.
• Starvation:
o A condition where a process is perpetually denied necessary resources,
preventing it from making progress.
o Can occur due to unfair scheduling algorithms or priority inversion.
• Deadlock:
o A circular wait condition where a set of processes are blocked, each waiting
for a resource held by another process in the set.
o No process can proceed until the deadlock is resolved.
• SMP (Symmetric Multiprocessing):
o All processors are equal and share the same operating system kernel.
o Load balancing and resource allocation are typically handled dynamically.
• AMP (Asymmetric Multiprocessing):
o One processor (master) is designated as the primary processor, while others
(slaves) are subordinate.
o The master handles most of the operating system functions.
• Process:
o An independent unit of execution with its own memory space, resources, and
system state.
o Heavier to create and manage.
• Thread:
o A lightweight unit of execution within a process.
o Shares the process's memory space and resources but has its own program
counter and stack.
o Easier and faster to create and manage.
• Mutex-Lock:
o A synchronization primitive that allows only one thread to access a shared
resource at a time.
o Typically used to protect critical sections of code.
• Semaphore:
o A more general synchronization mechanism that can control access to a
resource by multiple threads.
o Can be used for both mutual exclusion and synchronization.
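A minimal Python sketch of the distinction: a `threading.Lock` (mutex) serializes a critical section, while a `threading.Semaphore` admits up to k threads at once. The counter, thread counts, and semaphore capacity here are illustrative, not from the source.

```python
import threading

counter = 0
lock = threading.Lock()            # mutex: at most one thread in the section

def increment(n):
    global counter
    for _ in range(n):
        with lock:                 # critical section protected by the mutex
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock, lost updates could occur

# A counting semaphore generalizes the mutex: it admits up to k holders.
pool = threading.Semaphore(3)
pool.acquire(); pool.acquire(); pool.acquire()   # three holders admitted
print(pool.acquire(blocking=False))              # False — a fourth must wait
```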
• External Fragmentation:
o Occurs in dynamic memory allocation when there are many small, unused
memory blocks scattered throughout memory.
o Total memory space may be available, but it is fragmented and cannot be used
for large allocations.
• Internal Fragmentation:
o Occurs when a process is allocated more memory than it actually needs.
o The unused portion within the allocated block is wasted.
• Best Fit:
o Allocates the smallest available hole that is large enough to satisfy the request.
o Can lead to many small holes.
• Worst Fit:
o Allocates the largest available hole.
o May leave large, unusable holes.
• First Fit:
o Allocates the first available hole that is large enough to satisfy the request.
o Simple and fast.
• Next Fit:
o Starts searching from the location of the last allocation.
o Can improve efficiency over first fit in some cases.
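The fit strategies above can be sketched as functions over a list of free-hole sizes (the hole sizes and request size below are illustrative):

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 — 500 is the first hole that fits
print(best_fit(holes, 212))   # 3 — 300 is the smallest hole that fits
```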
• Preemptive:
o The CPU can be forcibly taken away from a process before it completes its
time slice.
o Allows for better responsiveness and fairness.
o Examples: Round Robin (RR), Shortest Remaining Time First (SRTF),
Preemptive Priority Scheduling.
• Non-Preemptive:
o A process runs to completion or voluntarily releases the CPU.
o Simpler to implement.
o Examples: First-Come, First-Served (FCFS), Longest Job First (LJF).
• Library Call:
o A function call to a library routine.
o Library routines are typically part of the user-level program's code and do not
involve the operating system kernel.
o Example: Calling printf() to print output to the console.
• System Call:
o A request from a user program to the operating system kernel for a service.
o Involves a transition from user mode to kernel mode.
o Example: Calling open() to open a file.
Briefly explain (using example / pseudo code/ block diagram etc, where possible):
a. Producer-Consumer Problem
b. Convoy Effect
c. Dining-Philosopher Problem
g. Multi-threading model
j. Context-Switching
l. Swapping
m. Segmentation
n. Paging
p. Demand Paging
q. Lazy Swapper
r. Copy-on-Write (COW) in Virtual memory management
s. Thrashing
a. Producer-Consumer Problem
• Description: A classic synchronization problem in which producer threads insert
items into a shared, bounded buffer and consumer threads remove them. A correct
solution must block producers when the buffer is full, block consumers when it is
empty, and protect the buffer from concurrent access.
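A minimal bounded-buffer sketch using Python's thread-safe `queue.Queue`, whose `put` and `get` calls block on a full or empty buffer (the buffer size and item count are illustrative):

```python
import queue
import threading

buf = queue.Queue(maxsize=5)   # bounded buffer shared by both threads
consumed = []

def producer():
    for item in range(10):
        buf.put(item)          # blocks while the buffer is full

def consumer():
    for _ in range(10):
        item = buf.get()       # blocks while the buffer is empty
        consumed.append(item)
        buf.task_done()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9] — items arrive in FIFO order
```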
b. Convoy Effect
• Description: In FCFS scheduling, a long CPU-bound process at the head of the ready
queue forces many short processes to wait behind it, inflating the average waiting
time; the short processes trail the long one like a convoy.
c. Dining-Philosopher Problem
• Description: Five philosophers sit around a circular table, each with a bowl of food
and a fork. To eat, a philosopher needs two forks – the ones to their left and right. The
problem is to devise a dining strategy that prevents deadlock (all philosophers waiting
for a fork) and starvation (some philosophers never get to eat).
• Solution: Various solutions exist, often involving semaphores with careful
synchronization logic to prevent deadlock.
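One classic deadlock-free sketch in Python: a counting semaphore admits at most N-1 philosophers to compete for forks at once, so at least one philosopher can always pick up both forks (the round count is illustrative):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
room = threading.Semaphore(N - 1)   # at most N-1 philosophers compete at once
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = forks[i], forks[(i + 1) % N]
    for _ in range(rounds):
        with room:                  # blocks the all-hold-left-fork deadlock
            with left:
                with right:
                    meals[i] += 1   # "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher eats all rounds: no deadlock occurred
```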
• Critical Section Problem: A section of code that accesses shared resources and must
be executed by only one thread at a time to avoid data corruption.
• Race Condition: A situation where the output of a concurrent program depends on
the relative speed of execution of multiple threads.
g. Multi-threading Model
• Description: Defines how user-level threads are mapped to kernel-level threads.
The common models are many-to-one (many user threads multiplexed onto one
kernel thread), one-to-one (each user thread gets its own kernel thread), and
many-to-many (user threads multiplexed over a smaller or equal number of kernel
threads).
j. Context-Switching
• Description: The process of saving the context (state) of the currently running
process and loading the context of the next process to run. This is necessary when
switching between processes on a single CPU.
l. Swapping
• Description: Moving a process from main memory to secondary storage (e.g., disk)
and back. Used to temporarily remove inactive processes from memory to make space
for others.
m. Segmentation
• Description: A memory management technique where memory is divided into
variable-sized segments, each corresponding to a logical unit (e.g., code, data, stack).
n. Paging
• Internal fragmentation occurs in paging because a process may not perfectly fill the
last page it occupies. The unused portion of that page is wasted.
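The wasted space in the last page can be computed directly (the 4 KB page size and process size below are illustrative):

```python
PAGE = 4096  # assumed page size in bytes

def internal_fragmentation(process_bytes, page=PAGE):
    pages = -(-process_bytes // page)     # ceiling division: pages needed
    return pages * page - process_bytes   # unused bytes in the last page

print(internal_fragmentation(72_766))  # 962 bytes wasted across 18 pages
print(internal_fragmentation(8_192))   # 0 — the process exactly fills 2 pages
```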
p. Demand Paging
• Description: A technique where pages are only loaded into memory when they are
actually needed (accessed by the process).
q. Lazy Swapper
• Description: A swapper that never brings a page into memory unless that page will
actually be needed. Demand paging uses a lazy swapper (more precisely a pager,
since it deals with individual pages rather than whole processes).
r. Copy-on-Write (COW) in Virtual Memory Management
• Description: A technique where multiple processes can share the same physical
pages initially. When a process tries to modify a shared page, a private copy of the
page is created for that process, deferring the copy until it is actually needed.
s. Thrashing
• Description: A condition where the system spends more time swapping pages in and
out of memory than executing processes. This occurs when the degree of
multiprogramming is too high or the available physical memory is insufficient.
Belady's Anomaly
• Description: Belady's Anomaly states that, for some page-replacement algorithms
(notably FIFO), increasing the number of page frames in memory can actually
increase the number of page faults. This counterintuitive behavior does not occur
with stack algorithms such as LRU.
a. FCFS (First-Come, First-Served)
• How it works: Processes are served in the order they arrive in the ready queue.
• Example: If Process A arrives first, then Process B, and then Process C, they will be
executed in that order.
• Advantages: Simple to implement.
• Disadvantages: Can lead to long waiting times for short processes if a long process
arrives first.
b. SJF (Shortest Job First)
• How it works: Processes with the shortest estimated burst time are executed first.
• Example: If Process A has a burst time of 2ms, Process B has 5ms, and Process C has
1ms, Process C will be executed first, followed by Process A, and then Process B.
• Advantages: Minimizes average waiting time.
• Disadvantages: Requires accurate knowledge of future burst times, which is often
difficult to predict.
c. RR (Round Robin)
• How it works: Each process is given a fixed time slice (quantum). If a process
doesn't finish within its time slice, it's preempted and moved to the end of the ready
queue.
• Example: If the quantum is 2ms and Process A has a burst time of 5ms, it will run for
2ms, then be preempted, and then resume later.
• Advantages: Provides fair share of CPU time to all processes.
• Disadvantages: Increased context switching overhead can reduce overall throughput.
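The preempt-and-requeue behavior can be simulated in a few lines, returning the order in which processes finish (burst times and quantum are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the finish order of process indices under Round Robin."""
    ready = deque((i, b) for i, b in enumerate(bursts))
    order = []
    while ready:
        i, remaining = ready.popleft()
        if remaining > quantum:
            ready.append((i, remaining - quantum))  # preempt and requeue
        else:
            order.append(i)                         # process finishes
    return order

print(round_robin([5, 3, 2], quantum=2))  # [2, 1, 0]
```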
e. Priority Scheduling
• How it works: Processes are assigned priorities, and higher priority processes are
executed first.
• Example: If Process A has priority 3, Process B has priority 1, and Process C has
priority 2, Process B will be executed first, followed by Process C, and then Process
A.
• Advantages: Can be used to favor important processes.
• Disadvantages: Can lead to starvation of low-priority processes.
Page-Replacement Algorithms
i. FIFO (First-In, First-Out)
• How it works: The oldest page in memory is replaced when a page fault occurs.
• Example: If pages A, B, and C are in memory, and page D is needed, page A (the
oldest) is replaced with page D.
• Disadvantages: Can suffer from Belady's anomaly (increasing the number of frames
can increase the number of page faults).
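A FIFO fault counter makes Belady's anomaly concrete: with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, three frames give 9 faults but four frames give 10.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given frame count."""
    memory, fifo, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(fifo.popleft())  # evict the oldest page
            memory.add(page)
            fifo.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]        # classic Belady string
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 — more frames, more faults
```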
ii. Optimal (OPT)
• How it works: Replaces the page that will not be used for the longest time in the
future.
• Example: If the future reference string is D, B, A, C, D, then page B would be
replaced first.
• Advantages: Minimizes page faults.
• Disadvantages: Not feasible in practice because it requires future knowledge.
iii. LRU (Least Recently Used)
• How it works: Replaces the page that has not been used for the longest time in the
past.
• Example: Maintains a list of recently used pages. When a page fault occurs, the page
at the end of the list (least recently used) is replaced.
• Advantages: Performs well in many practical situations.
• Disadvantages: Exact LRU needs hardware support (counters or a stack) to track
page use, so practical systems often use approximations such as the clock algorithm.
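An LRU fault counter can be sketched with an `OrderedDict`, moving a page to the end on each hit and evicting from the front on a fault (the reference string is illustrative):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU replacement with the given frame count."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10
```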
Disk-Scheduling Algorithms
ii. SSTF (Shortest Seek Time First)
• How it works: Selects the request that requires the shortest seek time from the current
head position.
• Advantages: Reduces average seek time.
• Disadvantages: Can lead to starvation of requests that are far from the current head
position.
iii. SCAN
• How it works: The head moves in one direction (e.g., from the beginning to the end
of the disk) servicing requests along the way. When it reaches the end, it reverses
direction.
• Advantages: Fairer than SSTF, prevents starvation.
iv. C-SCAN (Circular SCAN)
• How it works: Similar to SCAN, but the head services requests in one direction only.
When it reaches the end, it immediately jumps back to the beginning of the disk and
continues in the same direction.
• Advantages: Further improves fairness compared to SCAN.
v. LOOK
• How it works: Similar to SCAN, but the head only travels as far as the last request in
each direction before reversing, rather than going all the way to the edge of the disk.
• Advantages: Reduces unnecessary head movement compared to SCAN.
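The SCAN sweep can be sketched as a sort: service everything at or above the head in ascending order, then everything below it in descending order (the request and head values are illustrative):

```python
def scan(requests, head, direction="up"):
    """Return the order in which SCAN services requests for one full sweep."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

print(scan([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```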
What is Critical Section Problem? Mention the Requirement for the Critical Section
Problem (5)
1. Mutual Exclusion:
o Only one process can be in its critical section at a time.
o Ensures that no two processes can access or modify shared resources
concurrently, preventing data corruption.
2. Progress:
o If no process is executing in its critical section and there are processes waiting
to enter, then the selection of the next process to enter the critical section
cannot be postponed indefinitely.
o Prevents starvation, where a process is perpetually denied access to the critical
section.
3. Bounded Waiting:
o There exists a bound on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its
own critical section and before that request is granted.
o Prevents indefinite waiting for a process.
4. Atomicity:
o Operations within the critical section must appear to be indivisible and
uninterruptible.
o Ensures that all operations within the critical section are executed as a single,
atomic unit.
5. Fairness (Optional):
o No process should wait indefinitely to enter the critical section.
o Ensures that all processes have a fair chance to access the shared resource.
Key Concepts
• Race Condition: When the outcome of a program depends on the relative speed of
execution of multiple threads or processes.
• Synchronization: Mechanisms to coordinate the activities of multiple processes or
threads to ensure correct and predictable behavior.
• Software-based solutions:
o Peterson's algorithm, Dekker's algorithm
• Hardware-based solutions:
o Test-and-set instruction, compare-and-swap instruction
• Synchronization primitives:
o Semaphores, mutexes
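A single-threaded Python model of a spinlock built on test-and-set; on real hardware `test_and_set` executes as one atomic instruction, which is what makes the scheme correct under concurrency (this sketch only models the semantics):

```python
class SpinLock:
    """Models a spinlock built on the test-and-set instruction."""

    def __init__(self):
        self.locked = False

    def test_and_set(self):
        # Atomic on real hardware: read the old value and set the flag.
        old, self.locked = self.locked, True
        return old

    def acquire(self):
        while self.test_and_set():   # spin while the lock is already held
            pass

    def release(self):
        self.locked = False

lock = SpinLock()
lock.acquire()
print(lock.test_and_set())   # True — a second caller would keep spinning
lock.release()
print(lock.test_and_set())   # False — the lock was free; caller now holds it
```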
By carefully addressing these requirements, solutions to the critical section problem ensure
that shared resources are accessed in a controlled and predictable manner, preventing race
conditions and maintaining the integrity of the system.
Deadlock
Deadlock is a situation where two or more processes are indefinitely blocked, each waiting
for a resource held by another process in the set. In essence, they are stuck waiting for each
other, leading to a standstill.
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode; that
is, only one process can use the resource at a time. If another process requests that
resource, the requesting process must be delayed until the resource is released.
2. Hold and Wait: A process must be holding at least one resource and waiting for
resources that are currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process that holds
them. A process must release a resource voluntarily.
4. Circular Wait: There exists a set {P0, P1, ..., Pn} of waiting processes such that P0
is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Visualizing Deadlock
Consider two processes: Process A holds Resource 1 and is waiting for Resource 2, which is
held by Process B. Process B, in turn, is waiting for Resource 1. This circular wait creates a
deadlock.
Handling Deadlock
• Deadlock Prevention: Modify the system to prevent one of the four necessary
conditions from holding.
• Deadlock Avoidance: Dynamically check resource allocation requests to ensure that
they will not lead to a deadlock.
• Deadlock Detection and Recovery: Allow deadlocks to occur, detect them, and then
take steps to recover from the deadlock.
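A small Python sketch of deadlock prevention by breaking circular wait: every thread acquires the two locks in the same global order, so no cycle of waits can form (the thread count is illustrative):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def worker(name):
    # A single global lock order (always a before b) removes circular wait,
    # one of the four necessary deadlock conditions.
    with lock_a:
        with lock_b:
            log.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # [0, 1, 2, 3] — all workers complete, no deadlock
```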
FCFS Scheduling: Which Process Runs After p3?
• FCFS (First-Come, First-Served): Processes are executed in the order they arrive in
the ready queue.
• Arrival Times:
o p0: 0
o p3: 0
o p1: 1
o p2: 2
• Execution Order: p0, p3, p1, p2
Therefore, the process executed after p3 in this FCFS scheduling scenario is p1.
Suppose an application consists of 70% parallel and 30% serial computing components and
runs on a machine with 4 processing cores. What is the maximum possible speedup?
• Amdahl's Law states that the maximum speedup of a program using multiple
processors is limited by the serial portion of the program.
• Formula:
o Speedup = 1 / [(1 - P) + P/N], where:
▪ P = proportion of the program that can be parallelized
▪ N = number of processors
2. Given Values and Calculation
o P = 0.70, N = 4
o Speedup = 1 / [(1 - 0.70) + 0.70/4] = 1 / (0.30 + 0.175) = 1 / 0.475 ≈ 2.105
Therefore, the maximum possible speedup for this application with 4 processing cores is
approximately 2.105.
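The calculation, and the diminishing-returns point, can be checked directly:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's Law: speedup = 1 / ((1 - P) + P/N)."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

print(round(amdahl_speedup(0.70, 4), 3))       # 2.105 — the answer above
print(round(amdahl_speedup(0.70, 10**6), 3))   # ~3.333 — the 30% serial part caps the gain
```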
Key Takeaway:
• Even with a high percentage of parallelizable components, the serial portion limits the
overall speedup achievable.
• Increasing the number of processors can only provide diminishing returns in speedup
as the serial portion becomes the dominant factor.