OS Sample Solved

Differentiate

a) Multilevel Feed-back Queue scheduling and Multilevel scheduling.

b) Starvation and Deadlock

c) Symmetric Multi-processing and Asymmetric Multiprocessing

d) Short-term scheduler and Long-Term scheduler

e) Process and Threads

f) Mutex-Lock and Semaphore

g) Deadlock-prevention and Deadlock avoidance strategies

h) Logical Address Space and Physical Address Space

i) External Fragmentation and Internal Fragmentation

j) Best Fit and Worst Fit in contiguous (variable-partition) memory allocation

k) First Fit and Next Fit in contiguous (variable-partition) memory allocation

l) Preemptive and Non-Preemptive CPU Scheduling Algorithms

m) Library Call and System call (using an example).

Certainly, let's differentiate the given concepts in Operating Systems:

a) Multilevel Feed-back Queue Scheduling vs. Multilevel Scheduling

• Multilevel Scheduling:
o A general concept where processes are assigned to different queues based on
their characteristics (e.g., priority, memory requirements).
o Each queue may have its own scheduling algorithm.
o Less specific than feedback queues.
• Multilevel Feed-back Queue Scheduling:
o A specific type of multilevel scheduling.
o Processes move between queues based on their behavior (e.g., CPU bursts).
o Often used to prioritize interactive processes.

b) Starvation vs. Deadlock

• Starvation:
o A condition where a process is perpetually denied necessary resources,
preventing it from making progress.
o Can occur due to unfair scheduling algorithms or priority inversion.
• Deadlock:
o A circular wait condition where a set of processes are blocked, each waiting
for a resource held by another process in the set.
o No process can proceed until the deadlock is resolved.

c) Symmetric Multi-processing (SMP) vs. Asymmetric Multiprocessing (AMP)

• SMP:
o All processors are equal and share the same operating system kernel.
o Load balancing and resource allocation are typically handled dynamically.
• AMP:
o One processor (master) is designated as the primary processor, while others
(slaves) are subordinate.
o The master handles most of the operating system functions.

d) Short-term Scheduler vs. Long-term Scheduler

• Short-term Scheduler (CPU Scheduler):
o Selects the next process to run on the CPU from the ready queue.
o Makes frequent decisions (milliseconds).
• Long-term Scheduler (Job Scheduler):
o Controls the degree of multiprogramming by selecting processes from the job
queue to be loaded into memory for execution.
o Makes infrequent decisions (seconds or minutes).

e) Process vs. Threads

• Process:
o An independent unit of execution with its own memory space, resources, and
system state.
o Heavier to create and manage.
• Thread:
o A lightweight unit of execution within a process.
o Shares the process's memory space and resources but has its own program
counter and stack.
o Easier and faster to create and manage.

f) Mutex-Lock vs. Semaphore

• Mutex-Lock:
o A synchronization primitive that allows only one thread to access a shared
resource at a time.
o Typically used to protect critical sections of code.
• Semaphore:
o A more general synchronization mechanism that can control access to a
resource by multiple threads.
o Can be used for both mutual exclusion and synchronization.
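
To make the contrast concrete, here is a minimal sketch assuming POSIX threads and POSIX semaphores (compile with -pthread on Linux); the thread count of 5 and pool size of 3 are arbitrary illustration values:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* one owner at a time */
sem_t pool;                                       /* counts identical resources */

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);               /* block until one of the 3 resources is free */
    pthread_mutex_lock(&lock);     /* at most one thread inside at once */
    printf("thread %ld in critical section\n", id);
    pthread_mutex_unlock(&lock);
    sem_post(&pool);               /* return the resource to the pool */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 3);         /* counting semaphore initialized to 3 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}
```

The practical difference shows in the two primitives: the mutex admits exactly one thread and must be unlocked by its locker, while the counting semaphore admits up to three threads at once.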

g) Deadlock-Prevention vs. Deadlock Avoidance Strategies


• Deadlock-Prevention:
o Involves modifying the system in ways that make it impossible for a deadlock
to occur.
o Examples: resource ordering, denying requests that could lead to deadlock.
• Deadlock Avoidance:
o Requires knowledge of future process resource needs.
o Dynamically checks resource allocation requests to ensure that they will not
lead to a deadlock.
o More complex to implement.

h) Logical Address Space vs. Physical Address Space

• Logical Address Space:


o The set of addresses used by a program to refer to memory locations.
o Independent of the actual physical memory locations.
• Physical Address Space:
o The set of actual memory addresses in the system.

i) External Fragmentation vs. Internal Fragmentation

• External Fragmentation:
o Occurs in dynamic memory allocation when there are many small, unused
memory blocks scattered throughout memory.
o Total memory space may be available, but it is fragmented and cannot be used
for large allocations.
• Internal Fragmentation:
o Occurs when a process is allocated more memory than it actually needs.
o The unused portion within the allocated block is wasted.

j) Best Fit vs. Worst Fit in Contiguous (Variable-Partition) Memory Allocation

• Best Fit:
o Allocates the smallest available hole that is large enough to satisfy the request.
o Can lead to many small holes.
• Worst Fit:
o Allocates the largest available hole.
o May leave large, unusable holes.

k) First Fit vs. Next Fit in Contiguous (Variable-Partition) Memory Allocation

• First Fit:
o Allocates the first available hole that is large enough to satisfy the request.
o Simple and fast.
• Next Fit:
o Starts searching from the location of the last allocation.
o Can improve efficiency over first fit in some cases.

l) Preemptive vs. Non-Preemptive CPU Scheduling Algorithms

• Preemptive:
o The CPU can be forcibly taken away from a running process, e.g., when its time slice expires or a higher-priority process arrives.
o Allows for better responsiveness and fairness.
o Examples: Round Robin (RR), Shortest Remaining Time First (SRTF), preemptive Priority Scheduling.
• Non-Preemptive:
o A process runs to completion or voluntarily releases the CPU (e.g., by blocking on I/O).
o Simpler to implement.
o Examples: First-Come, First-Served (FCFS), non-preemptive Shortest Job First (SJF).

m) Library Call vs. System Call (Example)

• Library Call:
o A function call to a routine in a linked library (e.g., the C standard library).
o Runs in user mode as part of the program's code; it does not necessarily involve the operating system kernel, though it may invoke system calls internally.
o Example: Calling printf() to print output to the console (printf buffers in user space and eventually issues a write() system call).
• System Call:
o A request from a user program to the operating system kernel for a service.
o Involves a transition from user mode to kernel mode.
o Example: Calling open() to open a file.
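
A minimal sketch contrasting the two, assuming a POSIX system; the file name demo.txt is an arbitrary illustration value:

```c
#include <stdio.h>    /* printf: C library call, mostly user-mode buffering */
#include <fcntl.h>    /* open: system call, traps into the kernel */
#include <unistd.h>   /* write, close: system calls */

int main(void) {
    /* Library call: runs in user mode; internally the buffered stream
       eventually issues a write() system call. */
    printf("hello via the C library\n");

    /* System calls: each one switches from user mode to kernel mode. */
    int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0) {
        write(fd, "hello via system calls\n", 23);
        close(fd);
    }
    return 0;
}
```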

I hope this comprehensive explanation is helpful!


Briefly explain (using examples, pseudo-code, block diagrams, etc., where possible):

a. Producer-Consumer Problem

b. Convoy Effect

c. Dining-Philosopher Problem

d. Semaphore, Counting Semaphore, Binary Semaphore

e. Critical Section Problem, Race Condition

f. Process State Transition Diagram

g. Multi-threading model

h. Advantages and disadvantages of Multiprocessor System

i. Process Control Block

j. Context-Switching

k. Starvation and Aging

l. Swapping

m. Segmentation

n. Paging

o. Why may Paging suffer from internal fragmentation?

p. Demand Paging

q. Lazy Swapper
r. Copy-on-Write (COW) in Virtual memory management

s. Thrashing

t. Belady's Anomaly for page faults

Certainly, let's briefly explain the concepts you've listed:

a. Producer-Consumer Problem

• Description: This classic concurrency problem involves two processes: a producer that generates data and a consumer that consumes it. They share a common, bounded buffer. The producer must not overwrite data before the consumer has consumed it, and the consumer must not try to consume data that has not yet been produced.
• Solution: Semaphores can be used to synchronize access to the buffer. A "full"
semaphore counts the number of full slots, and an "empty" semaphore counts the
number of empty slots.
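
Here is a minimal sketch of that semaphore scheme, assuming POSIX threads and semaphores (compile with -pthread on Linux; the buffer size of 5 and item count of 10 are arbitrary illustration values):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                      /* buffer capacity */
int buffer[N], in = 0, out = 0;

sem_t empty, full;               /* empty = free slots, full = filled slots */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);                    /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item; in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                     /* signal one filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                     /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                    /* signal one free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);      /* all N slots start empty */
    sem_init(&full, 0, 0);       /* no slots filled yet */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL); pthread_join(c, NULL);
    return 0;
}
```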

b. Convoy Effect

• Description: In non-preemptive FCFS scheduling, a single long CPU-bound process at the head of the ready queue forces many shorter (often I/O-bound) processes to wait behind it, like cars stuck in a convoy behind a slow truck. This inflates waiting times and lowers CPU and device utilization.
• Example: If a process with a 100 ms CPU burst arrives just ahead of several 1 ms processes, each short process waits roughly 100 ms before it can run, greatly increasing the average waiting time.

c. Dining-Philosopher Problem

• Description: Five philosophers sit around a circular table, each with a bowl of food, and a single fork lies between each pair of neighbors. To eat, a philosopher needs two forks: the ones to their left and right. The problem is to devise a dining strategy that prevents deadlock (every philosopher holding one fork and waiting for the other) and starvation (a philosopher never getting to eat).
• Solution: Various solutions exist, often involving semaphores with careful
synchronization logic to prevent deadlock.
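
One such strategy is resource ordering: every philosopher picks up the lower-numbered fork first, which breaks the circular wait. A minimal sketch assuming POSIX threads (the array name fork_ is chosen to avoid clashing with the fork() system call; compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

#define N 5
pthread_mutex_t fork_[N];        /* one mutex per fork */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    if (first > second) { int t = first; first = second; second = t; }
    pthread_mutex_lock(&fork_[first]);   /* always lower-numbered fork first */
    pthread_mutex_lock(&fork_[second]);
    printf("philosopher %d eats\n", i);  /* critical section: eating */
    pthread_mutex_unlock(&fork_[second]);
    pthread_mutex_unlock(&fork_[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_[i], NULL);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```

Because no cycle of "hold fork i, wait for fork j" can form under a global lock order, deadlock is impossible here.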

d. Semaphore, Counting Semaphore, Binary Semaphore

• Semaphore: A synchronization primitive that controls access to shared resources.


o Counting Semaphore: Can take on any non-negative integer value.
Represents the number of available resources.
o Binary Semaphore: Can only take on the values 0 or 1. Essentially a mutex.

e. Critical Section Problem, Race Condition

• Critical Section Problem: The problem of ensuring that a section of code that accesses shared resources is executed by only one thread at a time, so that shared data is not corrupted.
• Race Condition: A situation where the outcome of a concurrent program depends on the unpredictable interleaving (relative timing) of operations from multiple threads, as the sketch below demonstrates.
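
A minimal sketch of a race condition, assuming POSIX threads (compile with -pthread): two threads increment a shared counter without synchronization, so read-modify-write updates can be lost and the printed total is usually below 2,000,000:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                /* shared resource */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;               /* read-modify-write: NOT atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

Wrapping the increment in a mutex (or making the counter atomic) turns the loop body into a proper critical section and restores the expected result.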

f. Process State Transition Diagram

• Description: A graphical representation of the different states a process can be in


(e.g., New, Ready, Running, Waiting, Terminated) and the transitions between them.

g. Multi-threading Model

• User-level Threads: Managed by a thread library in user space; the kernel is unaware of their existence. Context switching is fast, but if one thread makes a blocking system call, the whole process may block.
• Kernel-level Threads: Created and managed by the operating system, which is aware of each thread. Context switching is slower, but multiple threads of the same process can run concurrently on multiple CPUs.

h. Advantages and Disadvantages of Multiprocessor System

• Advantages: Increased throughput, improved performance for parallel applications, increased reliability (graceful degradation).
• Disadvantages: Increased complexity, synchronization issues, potential for higher
power consumption.

i. Process Control Block (PCB)

• Description: A data structure that contains all the information needed by the
operating system to manage a process (e.g., process ID, state, program counter,
registers, memory allocation).

j. Context-Switching

• Description: The process of saving the context (state) of the currently running
process and loading the context of the next process to run. This is necessary when
switching between processes on a single CPU.

k. Starvation and Aging

• Starvation: A condition where a process is perpetually denied necessary resources, preventing it from making progress.
• Aging: A technique to prevent starvation by gradually increasing the priority of
processes that have been waiting for a long time.

l. Swapping

• Description: Moving a process from main memory to secondary storage (e.g., disk)
and back. Used to temporarily remove inactive processes from memory to make space
for others.

m. Segmentation
• Description: A memory management technique where memory is divided into
variable-sized segments, each corresponding to a logical unit (e.g., code, data, stack).

n. Paging

• Description: A memory management technique where logical memory is divided into fixed-size pages and physical memory is divided into frames of the same size. Processes are divided into pages, and pages are loaded into any available frames.

o. Why may paging suffer from internal fragmentation?

• Internal fragmentation occurs in paging because a process may not perfectly fill the
last page it occupies. The unused portion of that page is wasted.

p. Demand Paging

• Description: A technique where pages are only loaded into memory when they are
actually needed (accessed by the process).

q. Lazy Swapper

• Description: A swapper that never brings a page into memory unless that page will be needed. Because it operates on individual pages rather than whole processes, the more accurate term is "pager"; demand paging relies on this mechanism.

r. Copy-on-Write (COW) in Virtual Memory Management

• Description: A technique where multiple processes can share the same physical
pages initially. When a process tries to modify a shared page, a copy of the page is
created for that process.
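
A minimal sketch, assuming a POSIX system where fork() shares pages copy-on-write: the child's write to a shared page triggers a private copy, so the parent's copy is untouched:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int value = 42;
    pid_t pid = fork();          /* pages shared copy-on-write after fork */
    if (pid == 0) {
        value = 99;              /* write fault: kernel copies the page */
        printf("child sees value = %d\n", value);
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees value = %d\n", value);  /* prints 42 */
    return 0;
}
```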

s. Thrashing

• Description: A condition where the system spends more time swapping pages in and
out of memory than executing processes. This occurs when the degree of
multiprogramming is too high or the available physical memory is insufficient.

t. Belady’s Anomaly for page fault

• Description: Belady's Anomaly states that in some cases, increasing the number of
page frames in memory can actually increase the number of page faults. This
counterintuitive behavior can occur with certain page replacement algorithms.

I hope this brief overview is helpful!

How do the following work?

a. FCFS CPU Scheduling

b. SJF CPU Scheduling


c. RR CPU Scheduling

d. SRTF CPU Scheduling

e. Priority Scheduling

f. FIFO Page Replacement Algorithm

g. Optimal Page Replacement Algorithm

h. LRU Page Replacement Algorithm

i. Disk-Scheduling Algorithms (FCFS, SSTF, SCAN, C-SCAN, C-LOOK)

Certainly, let's break down how these scheduling algorithms work:

CPU Scheduling Algorithms

a. FCFS (First-Come, First-Served)

• How it works: Processes are served in the order they arrive in the ready queue.
• Example: If Process A arrives first, then Process B, and then Process C, they will be
executed in that order.
• Advantages: Simple to implement.
• Disadvantages: Can lead to long waiting times for short processes if a long process
arrives first.
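
As a minimal sketch of the bookkeeping, the following computes FCFS waiting times for three hypothetical processes that all arrive at time 0 (burst times 4, 3, 5 are arbitrary illustration values):

```c
#include <stdio.h>

int main(void) {
    int burst[] = {4, 3, 5};     /* processes in arrival order */
    int n = 3, clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d, runs [%d, %d)\n", i, clock, clock, clock + burst[i]);
        total_wait += clock;     /* waiting time = start - arrival (arrival = 0) */
        clock += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```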

b. SJF (Shortest Job First)

• How it works: Processes with the shortest estimated burst time are executed first.
• Example: If Process A has a burst time of 2ms, Process B has 5ms, and Process C has
1ms, Process C will be executed first, followed by Process A, and then Process B.
• Advantages: Minimizes average waiting time.
• Disadvantages: Requires accurate knowledge of future burst times, which is often
difficult to predict.

c. RR (Round Robin)

• How it works: Each process is given a fixed time slice (quantum). If a process
doesn't finish within its time slice, it's preempted and moved to the end of the ready
queue.
• Example: If the quantum is 2ms and Process A has a burst time of 5ms, it will run for
2ms, then be preempted, and then resume later.
• Advantages: Provides fair share of CPU time to all processes.
• Disadvantages: Increased context switching overhead can reduce overall throughput.

d. SRTF (Shortest Remaining Time First)


• How it works: Similar to SJF, but considers the remaining burst time of currently
running processes. If a new process arrives with a shorter remaining time than the
currently running process, the current process is preempted.
• Advantages: Minimizes average waiting time.
• Disadvantages: Requires frequent context switches, which can be expensive.

e. Priority Scheduling

• How it works: Processes are assigned priorities, and higher priority processes are
executed first.
• Example: If Process A has priority 3, Process B has priority 1, and Process C has
priority 2, Process B will be executed first, followed by Process C, and then Process
A.
• Advantages: Can be used to favor important processes.
• Disadvantages: Can lead to starvation of low-priority processes.

Page Replacement Algorithms

f. FIFO (First-In, First-Out)

• How it works: The oldest page in memory is replaced when a page fault occurs.
• Example: If pages A, B, and C are in memory, and page D is needed, page A (the
oldest) is replaced with page D.
• Disadvantages: Can suffer from Belady's anomaly (increasing the number of frames
can increase the number of page faults).

g. Optimal Page Replacement Algorithm

• How it works: Replaces the page that will not be used for the longest time in the
future.
• Example: If pages A, B, and C are in memory, a fault occurs for page D, and the future reference string is B, A, B, C, then page C is replaced because it will not be used for the longest time.
• Advantages: Minimizes page faults.
• Disadvantages: Not feasible in practice because it requires future knowledge.

h. LRU (Least Recently Used)

• How it works: Replaces the page that has not been used for the longest time in the
past.
• Example: Maintains a list of recently used pages. When a page fault occurs, the page
at the end of the list (least recently used) is replaced.
• Advantages: Performs well in many practical situations.
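
A minimal sketch of LRU over a short, hypothetical reference string with 3 frames, using per-frame timestamps to pick the victim:

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};  /* hypothetical reference string */
    int n = 8;
    int frame[FRAMES], stamp[FRAMES], faults = 0, time = 0;
    for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* empty frames */

    for (int i = 0; i < n; i++, time++) {
        int hit = -1, victim = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == refs[i]) hit = j;
        if (hit >= 0) { stamp[hit] = time; continue; } /* hit: refresh timestamp */
        faults++;
        for (int j = 1; j < FRAMES; j++)               /* pick empty or LRU frame */
            if (frame[j] == -1 ||
                (frame[victim] != -1 && stamp[j] < stamp[victim]))
                victim = j;
        frame[victim] = refs[i];                       /* replace the victim */
        stamp[victim] = time;
    }
    printf("LRU page faults: %d\n", faults);           /* prints 6 for this string */
    return 0;
}
```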

Disk-Scheduling Algorithms

i. FCFS (First-Come, First-Served)

• How it works: Services requests in the order they arrive.


• Disadvantages: Can lead to significant head movement and long seek times.

ii. SSTF (Shortest Seek Time First)

• How it works: Selects the request that requires the shortest seek time from the current
head position.
• Advantages: Reduces average seek time.
• Disadvantages: Can lead to starvation of requests that are far from the current head
position.

iii. SCAN

• How it works: The head moves in one direction (e.g., from the beginning to the end
of the disk) servicing requests along the way. When it reaches the end, it reverses
direction.
• Advantages: Fairer than SSTF, prevents starvation.

iv. C-SCAN (Circular Scan)

• How it works: Similar to SCAN, but the head moves in one direction only. When it
reaches the end, it immediately jumps to the beginning of the disk and continues in
the same direction.
• Advantages: Further improves fairness compared to SCAN.

v. C-LOOK (Circular Look)

• How it works: Similar to C-SCAN, but the head travels only as far as the last request in each direction, then jumps back to the first request at the other end instead of going all the way to the edge of the disk.
• Advantages: Reduces unnecessary head movement compared to C-SCAN.

I hope this explanation is helpful!

What is the Critical Section Problem? Mention the requirements for the Critical Section Problem. (5)

Critical Section Problem

• Definition: In concurrent programming, the critical section problem arises when multiple processes or threads attempt to access and modify shared resources simultaneously. This can lead to unexpected and incorrect behavior, known as a race condition.

Requirements for Solving the Critical Section Problem

1. Mutual Exclusion:
o Only one process can be in its critical section at a time.
o Ensures that no two processes can access or modify shared resources
concurrently, preventing data corruption.
2. Progress:
o If no process is executing in its critical section and there are processes waiting
to enter, then the selection of the next process to enter the critical section
cannot be postponed indefinitely.
o Prevents starvation, where a process is perpetually denied access to the critical
section.
3. Bounded Waiting:
o There exists a bound on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its
own critical section and before that request is granted.
o Prevents indefinite waiting for a process.
4. Atomicity:
o Operations within the critical section must appear indivisible and uninterruptible.
o Ensures that all operations within the critical section execute as a single, atomic unit.
5. Fairness (Optional):
o No process should wait indefinitely to enter the critical section.
o Ensures that all processes have a fair chance to access the shared resource.

(Note: the three classical requirements are Mutual Exclusion, Progress, and Bounded Waiting; Atomicity and Fairness are often treated as implied by or additional to them.)

Key Concepts

• Race Condition: When the outcome of a program depends on the relative speed of
execution of multiple threads or processes.
• Synchronization: Mechanisms to coordinate the activities of multiple processes or
threads to ensure correct and predictable behavior.

Solutions to the Critical Section Problem

• Software-based solutions:
o Peterson's algorithm, Dekker's algorithm
• Hardware-based solutions:
o Test-and-set instruction, compare-and-swap instruction
• Synchronization primitives:
o Semaphores, mutexes

By carefully addressing these requirements, solutions to the critical section problem ensure
that shared resources are accessed in a controlled and predictable manner, preventing race
conditions and maintaining the integrity of the system.
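
As an illustration of the software-based solutions listed above, here is a minimal sketch of Peterson's algorithm for two threads. It uses C11 atomics, which modern hardware and compilers require for the ordering guarantees the textbook version takes for granted; compile with -pthread:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];             /* flag[i]: thread i wants to enter */
atomic_int turn;                 /* whose turn it is to yield */
long counter = 0;                /* shared resource */

void lock(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, other);  /* politely let the other thread go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                        /* busy-wait until it is safe to enter */
}

void unlock(int i) { atomic_store(&flag[i], false); }

void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(id);
        counter++;               /* critical section */
        unlock(id);
    }
    return NULL;
}

int main(void) {
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL); pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```

Together, flag[] and turn give the two threads mutual exclusion, progress, and bounded waiting: each thread defers at most once per entry attempt.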


What is Deadlock? Mention the necessary conditions for deadlock. (5)

Deadlock

Deadlock is a situation where two or more processes are indefinitely blocked, each waiting
for a resource held by another process in the set. In essence, they are stuck waiting for each
other, leading to a standstill.

Necessary Conditions for Deadlock

For a deadlock to occur, all four of the following conditions must hold simultaneously:

1. Mutual Exclusion: At least one resource must be held in a non-sharable mode; that
is, only one process can use the resource at a time. If another process requests that
resource, the requesting process must be delayed until the resource is released.
2. Hold and Wait: A process must be holding at least one resource and waiting for
resources that are currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process that holds
them. A process must release a resource voluntarily.
4. Circular Wait: There exists a set {P0, P1, ..., Pn} of waiting processes such that P0
is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.

Visualizing Deadlock

[Figure: a deadlock involving two processes and two resources.]

In the diagram, Process A holds Resource 1 and is waiting for Resource 2, which is held by
Process B. Process B, in turn, is waiting for Resource 1. This circular wait creates a deadlock.
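
The circular wait in the diagram is easy to reproduce. Here is a minimal sketch, assuming POSIX threads, in which two threads acquire two mutexes in opposite order; the sleep() calls just make the bad interleaving likely, and the program will usually hang (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 2 */

void *process_a(void *arg) {
    pthread_mutex_lock(&r1);     /* A holds Resource 1 */
    sleep(1);                    /* give B time to grab Resource 2 */
    pthread_mutex_lock(&r2);     /* A waits for Resource 2 -> blocked */
    printf("A got both\n");
    pthread_mutex_unlock(&r2); pthread_mutex_unlock(&r1);
    return NULL;
}

void *process_b(void *arg) {
    pthread_mutex_lock(&r2);     /* B holds Resource 2 */
    sleep(1);
    pthread_mutex_lock(&r1);     /* B waits for Resource 1 -> circular wait */
    printf("B got both\n");
    pthread_mutex_unlock(&r1); pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL); pthread_join(b, NULL);
    return 0;
}
```

Making both threads acquire r1 before r2 imposes a global lock order and removes the circular wait; this is the resource-ordering prevention strategy described below.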

Handling Deadlock

There are several strategies for handling deadlocks:

• Deadlock Prevention: Modify the system to prevent one of the four necessary
conditions from holding.
• Deadlock Avoidance: Dynamically check resource allocation requests to ensure that
they will not lead to a deadlock.
• Deadlock Detection and Recovery: Allow deadlocks to occur, detect them, and then
take steps to recover from the deadlock.

Understanding Deadlock

Deadlock is a critical concept in operating systems and concurrent programming. By understanding the necessary conditions and handling strategies, developers can design systems that are more robust and less prone to deadlocks.


Consider a system where Process = {p0, p1, p2, p3}, Arrival time = {0, 1, 2, 0}, and Burst time = {4, 6, 6, 2}. Under FCFS CPU scheduling on a uniprocessor system, which process is executed after p3?


1. Determine the Order of Execution:

• FCFS (First-Come, First-Served): Processes are executed in the order they arrive in
the ready queue.
• Arrival Times:
o p0: 0
o p3: 0 (tie with p0; assume ties at the same arrival time are broken by process number, so p0 is queued first)
o p1: 1
o p2: 2
• Execution Order: p0, p3, p1, p2

2. Identify the Process After p3:

• The resulting timeline is p0 [0, 4), p3 [4, 6), p1 [6, 12), p2 [12, 18), so after p3 finishes at time 6, the next process in the queue is p1.

Therefore, the process executed after p3 in this FCFS scheduling scenario is p1.

Suppose an application consists of 70% parallel and 30% serial computing components, and the system contains 4 processing cores. What is the maximum possible speed-up for this application?

1. Understand Amdahl's Law

• Amdahl's Law states that the maximum speedup of a program using multiple
processors is limited by the serial portion of the program.
• Formula:
o Speedup = 1 / [(1 - P) + P/N]
▪ where:
▪ P = Proportion of the program that can be parallelized
▪ N = Number of processors

2. Given Values

• P = 0.7 (70% parallel)


• N = 4 (4 processing cores)

3. Calculate Maximum Speedup

• Speedup = 1 / [(1 - 0.7) + 0.7/4]


• Speedup = 1 / [0.3 + 0.175]
• Speedup = 1 / 0.475
• Speedup ≈ 2.105

Therefore, the maximum possible speedup for this application with 4 processing cores is
approximately 2.105.
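
As a quick check of the arithmetic, a minimal sketch:

```c
#include <stdio.h>

int main(void) {
    double P = 0.7;              /* parallel fraction */
    int N = 4;                   /* processing cores */
    double speedup = 1.0 / ((1.0 - P) + P / N);   /* Amdahl's Law */
    printf("max speedup = %.3f\n", speedup);      /* prints 2.105 */
    return 0;
}
```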
Key Takeaway:

• Even with a high percentage of parallelizable components, the serial portion limits the
overall speedup achievable.
• Increasing the number of processors can only provide diminishing returns in speedup
as the serial portion becomes the dominant factor.
