
CS4348 Homework 6 Fall 2024

Kabir Tapiawala
CS 4348.502
Dr. Salazar
12/5/2024

Homework 6
Answer the questions below and submit electronically via eLearning. Make sure you
submit at least a couple of hours early and double-check your submission to ensure
everything is in order before the deadline. Your answers should be submitted as a
".doc", ".docx", or ".pdf" file. Your answers should be typed, but scans or pictures of
hand-drawn figures and diagrams may be included in the file when needed. For
questions where you must write code, turn in the source code files along with your Word
or PDF file.

Due: Thursday, December 5th 11:59pm

Chapter 9: Single Processor Scheduling


1. (2pt) Briefly explain multilevel feedback. How does it differ from the feedback
algorithm?
Multilevel feedback scheduling is a preemptive, dynamic-priority method built on
several ready queues. A new process enters the highest-priority queue; each time it
uses up its time quantum, it is demoted one level, so CPU-bound processes drift toward
the lower-priority queues while short and I/O-bound processes stay near the top. The
lowest queue is serviced round-robin, which keeps long processes from starving outright.

It differs from the plain feedback algorithm mainly in the time quantum used at each
level. Simple feedback gives every queue the same fixed quantum, which heavily
penalizes long processes; multilevel feedback assigns each successively lower queue a
longer quantum (for example, 2^i time units at level i), so demoted processes are
dispatched less often but run for longer stretches once dispatched.
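
A minimal sketch of the demotion mechanism, assuming three queue levels and a
per-level quantum of 2^level time units (both choices are illustrative, not values
from the notes):

from collections import deque

# Sketch of multilevel feedback: several ready queues, demotion on quantum
# expiry, and a longer quantum at each lower level.
LEVELS = 3
queues = [deque() for _ in range(LEVELS)]

def quantum(level):
    return 2 ** level              # level 0 -> 1 unit, level 1 -> 2, level 2 -> 4

def admit(pid):
    queues[0].append(pid)          # new processes enter the top queue

def on_quantum_expiry(pid, level):
    # A process that used its whole quantum is demoted one level
    # (the bottom queue is effectively round-robin).
    queues[min(level + 1, LEVELS - 1)].append(pid)

def pick_next():
    # Always dispatch from the highest-priority non-empty queue.
    for level, q in enumerate(queues):
        if q:
            return q.popleft(), level
    return None, None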

2. (2pt) Briefly describe the three types of processor scheduling.


1. Long-Term Scheduling: Determines which processes transition from the "new"
state to the "ready" state, controlling the system's degree of multiprogramming by
regulating the number of processes admitted for execution.
2. Medium-Term Scheduling: Also known as "swapping," this manages the suspension
and reactivation of processes between main memory and secondary storage to
optimize resource utilization and maintain system equilibrium.
3. Short-Term Scheduling: Frequently decides which ready process the CPU will
execute next, aiming to maximize CPU utilization and minimize process waiting
time through rapid decision-making.

3. (4pt) Give an example of a system-oriented criterion not given in the notes. Briefly explain.

One example of a system-oriented criterion that isn’t directly mentioned in the notes is
CPU Utilization.

CPU Utilization evaluates how much of the CPU’s capacity is actively being used for
processing tasks. A higher utilization generally signifies that the system is making good
use of its processing power, reducing downtime, and boosting throughput. This metric is
vital for gauging whether system resources are being efficiently used, which contributes
to smoother performance and optimized operation.
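
As a formula over an observation interval:

CPU utilization = (time the CPU spends executing processes) / (total elapsed time)

For example, a CPU that is busy for 45 ms out of a 50 ms window is 90% utilized;
schedulers aim to keep this high without starving any one process.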

4. (10pt) For the following processes, perform round-robin scheduling.

PID  Admit Time  Service Time
0    0           16
1    3           16
2    5           14
3    5           10
4    6           4
Use the format we used in class. Assume the processes do not perform I/O. Use
5 as the time quantum.
Timeline:

At time 0:
Process 0 arrives.
Dispatch Process 0.

At time 3:
Process 1 arrives.

At time 5:
Process 0 times out (used 5 units; remaining 11 units).
Processes 2 and 3 arrive.
Ready queue is now [P1, P2, P3, P0].
Dispatch Process 1.

At time 6:
Process 4 arrives.
Ready queue is [P2, P3, P0, P4] after adding P4 behind P0.

At time 10:
Process 1 times out (used 5 units; remaining 11 units).
Ready queue: [P2, P3, P0, P4, P1].
Dispatch Process 2.

At time 15:
Process 2 times out (used 5 units; remaining 9 units).
Ready queue: [P3, P0, P4, P1, P2].
Dispatch Process 3.

At time 20:
Process 3 times out (used 5 units; remaining 5 units).
Ready queue: [P0, P4, P1, P2, P3].
Dispatch Process 0.

At time 25:
Process 0 times out (used 5 units; remaining 6 units).
Ready queue: [P4, P1, P2, P3, P0].
Dispatch Process 4.

At time 29:
Process 4 completes execution (used 4 units; remaining 0 units).
Remove P4 from the system.
Ready queue: [P1, P2, P3, P0].
Dispatch Process 1.

At time 34:
Process 1 times out (used 5 units; remaining 6 units).
Ready queue: [P2, P3, P0, P1].
Dispatch Process 2.

At time 39:
Process 2 times out (used 5 units; remaining 4 units).
Ready queue: [P3, P0, P1, P2].
Dispatch Process 3.

At time 44:
Process 3 completes execution (used 5 units; remaining 0 units).
Remove P3.
Ready queue: [P0, P1, P2].
Dispatch Process 0.

At time 49:
Process 0 times out (used 5 units; remaining 1 unit).
Ready queue: [P1, P2, P0].
Dispatch Process 1.

At time 54:
Process 1 times out (used 5 units; remaining 1 unit).
Ready queue: [P2, P0, P1].
Dispatch Process 2.

At time 58:
Process 2 completes execution (used 4 units; remaining 0 units).
Remove P2.
Ready queue: [P0, P1].
Dispatch Process 0.

At time 59:
Process 0 completes execution (used 1 unit; remaining 0 units).
Remove P0.
Ready queue: [P1].
Dispatch Process 1.

At time 60:
Process 1 completes execution (used 1 unit; remaining 0 units).
All processes have completed execution.
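
The trace can be checked mechanically. Below is a minimal round-robin simulation
in Python; the one modeling assumption, matching the ordering used above, is that
processes arriving at the instant a quantum expires enter the ready queue ahead of
the preempted process.

from collections import deque

# Round-robin check for the trace above (quantum = 5).
procs = {0: (0, 16), 1: (3, 16), 2: (5, 14), 3: (5, 10), 4: (6, 4)}  # pid: (admit, service)
QUANTUM = 5

remaining = {pid: svc for pid, (_, svc) in procs.items()}
order = sorted(procs, key=lambda p: procs[p][0])
ready, t, i = deque(), 0, 0

def admit_up_to(time):
    # Move every process with admit time <= time into the ready queue.
    global i
    while i < len(order) and procs[order[i]][0] <= time:
        ready.append(order[i])
        i += 1

admit_up_to(0)
while ready or i < len(order):
    if not ready:                        # CPU idle until the next arrival
        t = procs[order[i]][0]
        admit_up_to(t)
        continue
    pid = ready.popleft()
    run = min(QUANTUM, remaining[pid])
    t += run
    remaining[pid] -= run
    admit_up_to(t)                       # same-instant arrivals queue in first
    if remaining[pid] > 0:
        ready.append(pid)                # preempted process rejoins at the back
    else:
        print(f"P{pid} completes at time {t}")

Running it prints the completion times derived above: P4 at 29, P3 at 44, P2 at 58,
P0 at 59, and P1 at 60.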

5. (10pt) For the following processes, perform shortest-remaining-time scheduling.


PID  Admit Time  Service Time
0    0           16
1    3           16
2    5           14
3    5           10
4    6           4
5    8           14
6    9           3
7    10          18

Use the format we used in class. Assume the processes do not perform I/O.

1. At time 0:
○ Process 0 arrives.
○ CPU is idle; dispatch Process 0.
○ Process 0 starts running.
2. At time 3:
○ Process 1 arrives.
○ Process 0 continues running (Remaining Time: 16 - (3 - 0) = 13 units).
○ Process 1 has Remaining Time: 16 units.
○ No preemption occurs.
3. At time 5:
○ Process 2 and Process 3 arrive.
○ Process 0 has Remaining Time: 16 - (5 - 0) = 11 units.
○ Process 3 has Remaining Time: 10 units.
○ Process 3 preempts Process 0.
○ At time 5, dispatch Process 3.
○ Process 3 starts running.
4. At time 6:
○ Process 4 arrives.
○ Process 3 has Remaining Time: 10 - (6 - 5) = 9 units.
○ Process 4 has Remaining Time: 4 units.
○ Process 4 preempts Process 3.
○ At time 6, dispatch Process 4.
○ Process 4 starts running.
5. At time 8:
○ Process 5 arrives.
○ Process 4 has Remaining Time: 4 - (8 - 6) = 2 units.
○ Process 4 continues running.
○ No preemption occurs.
6. At time 9:
○ Process 6 arrives.
○ Process 4 has Remaining Time: 4 - (9 - 6) = 1 unit.
○ Process 4 continues running.
○ No preemption occurs.
7. At time 10:
○ Process 7 arrives.
○ Process 4 completes execution.
○ Process 4 exits.
○ Available Processes:
■ Process 0 (Remaining Time: 11 units)
■ Process 1 (Remaining Time: 16 units)
■ Process 2 (Remaining Time: 14 units)
■ Process 3 (Remaining Time: 9 units)
■ Process 5 (Remaining Time: 14 units)
■ Process 6 (Remaining Time: 3 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 6 has the shortest remaining time.
○ Dispatch Process 6.
○ Process 6 starts running.
8. At time 13:
○ Process 6 completes execution.
○ Process 6 exits.
○ Available Processes:
■ Process 0 (Remaining Time: 11 units)
■ Process 1 (Remaining Time: 16 units)
■ Process 2 (Remaining Time: 14 units)
■ Process 3 (Remaining Time: 9 units)
■ Process 5 (Remaining Time: 14 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 3 has the shortest remaining time.
○ Dispatch Process 3.
○ Process 3 resumes running.
9. At time 22:
○ Process 3 completes execution.
○ Process 3 exits.
○ Available Processes:
■ Process 0 (Remaining Time: 11 units)
■ Process 1 (Remaining Time: 16 units)
■ Process 2 (Remaining Time: 14 units)
■ Process 5 (Remaining Time: 14 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 0 has the shortest remaining time.
○ Dispatch Process 0.
○ Process 0 resumes running.
10. At time 33:
○ Process 0 completes execution.
○ Process 0 exits.
○ Available Processes:
■ Process 1 (Remaining Time: 16 units)
■ Process 2 (Remaining Time: 14 units)
■ Process 5 (Remaining Time: 14 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 2 and Process 5 have equal remaining times.
○ Process 2 arrived earlier.
○ Dispatch Process 2.
○ Process 2 starts running.
11. At time 47:
○ Process 2 completes execution.
○ Process 2 exits.
○ Available Processes:
■ Process 1 (Remaining Time: 16 units)
■ Process 5 (Remaining Time: 14 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 5 has the shortest remaining time.
○ Dispatch Process 5.
○ Process 5 starts running.
12. At time 61:
○ Process 5 completes execution.
○ Process 5 exits.
○ Available Processes:
■ Process 1 (Remaining Time: 16 units)
■ Process 7 (Remaining Time: 18 units)
○ Process 1 has the shortest remaining time.
○ Dispatch Process 1.
○ Process 1 starts running.
13. At time 77:
○ Process 1 completes execution.
○ Process 1 exits.
○ Available Process:
■ Process 7 (Remaining Time: 18 units)
○ Dispatch Process 7.
○ Process 7 starts running.
14. At time 95:
○ Process 7 completes execution.
○ Process 7 exits.
○ All processes have completed execution.
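
This trace can also be verified with a small event-driven simulation: re-evaluate at
every arrival and completion, keeping ready processes in a heap keyed on (remaining
time, admit time) so ties go to the earlier arrival. This is a sketch of the policy
as applied above, not course code.

import heapq

# SRT check for the trace above: (admit, pid, service) triples.
procs = [(0, 0, 16), (3, 1, 16), (5, 2, 14), (5, 3, 10), (6, 4, 4),
         (8, 5, 14), (9, 6, 3), (10, 7, 18)]
procs.sort()
remaining = {pid: svc for _, pid, svc in procs}
ready, t, i = [], 0, 0          # heap entries: (remaining, admit, pid)

while i < len(procs) or ready:
    while i < len(procs) and procs[i][0] <= t:
        admit, pid, _ = procs[i]
        heapq.heappush(ready, (remaining[pid], admit, pid))
        i += 1
    if not ready:               # idle until the next arrival
        t = procs[i][0]
        continue
    rem, admit, pid = heapq.heappop(ready)
    next_arrival = procs[i][0] if i < len(procs) else float("inf")
    run = min(rem, next_arrival - t)    # run until completion or next arrival
    t += run
    remaining[pid] = rem - run
    if remaining[pid] == 0:
        print(f"P{pid} completes at time {t}")
    else:                                # re-queue; a new arrival may preempt
        heapq.heappush(ready, (remaining[pid], admit, pid))

It reports the same completion order: P4 at 10, P6 at 13, P3 at 22, P0 at 33, P2 at 47,
P5 at 61, P1 at 77, and P7 at 95.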

6. (2pt) How does virtual round robin differ from round robin? What would be the
purpose of using virtual round robin over round robin?
● Round Robin (RR): Each ready process receives a fixed time slice (quantum) in
cyclic order. A process that blocks for I/O before its quantum expires goes to the
back of the ready queue when the I/O completes, so I/O-bound processes end up
receiving less CPU time than CPU-bound ones.
● Virtual Round Robin (VRR): Adds an auxiliary queue for processes returning from
I/O. The dispatcher gives the auxiliary queue preference over the main ready queue,
and a process dispatched from it runs only for the unused remainder of its previous
quantum before rejoining the main queue.

Purpose: VRR corrects RR's unfairness toward I/O-bound processes. Letting them
promptly use up the leftover portion of their quantum after I/O completes keeps
interactive and I/O-heavy workloads responsive without granting any process more
total CPU time than plain RR would.
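
A minimal sketch of the VRR dispatch decision, assuming an auxiliary queue of
(pid, leftover-quantum) pairs; the structure and names are illustrative, not from
the notes:

from collections import deque

# VRR dispatch sketch: processes returning from I/O wait in an auxiliary queue,
# are preferred over the main ready queue, and run only the unused remainder
# of their last quantum.
QUANTUM = 5
main_queue = deque()            # pids, each entitled to a full quantum
aux_queue = deque()             # (pid, leftover_quantum) after I/O completion

def on_io_complete(pid, leftover):
    aux_queue.append((pid, leftover))

def dispatch():
    if aux_queue:                            # auxiliary queue gets preference
        return aux_queue.popleft()           # (pid, leftover quantum)
    if main_queue:
        return main_queue.popleft(), QUANTUM
    return None, 0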

7. (4pt) Briefly describe exponential averaging. What is its purpose? What scheduling
algorithms might use it?

Exponential Averaging

Definition:
Exponential averaging is a prediction method that assigns more weight to recent
observations while still factoring in past data. It calculates a weighted average, where
the influence of older data points diminishes exponentially. This approach allows
systems to adapt quickly to recent changes.
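
Concretely, the standard recurrence (the form used in Stallings) is:

S(n+1) = α · T(n) + (1 - α) · S(n),    0 < α ≤ 1

where T(n) is the most recently observed service (burst) time and S(n) is the previous
estimate. Expanding the recurrence shows an observation i periods old carries weight
α(1 - α)^i, so the influence of older bursts decays exponentially; a larger α makes
the estimate react faster to recent behavior.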

Purpose

The main purpose of exponential averaging is to provide timely and accurate estimates
for process metrics, such as CPU burst times. By focusing on recent behavior, it enables
schedulers to make decisions that better reflect the current state of processes,
improving overall efficiency and performance.

Scheduling Algorithms That Use It

Exponential averaging is used by algorithms that need an estimate of each process's
next service time, such as Shortest Process Next (SPN), Shortest Remaining Time (SRT),
and Highest Response Ratio Next (HRRN). These algorithms prioritize processes by
expected service time, and exponential averaging supplies those estimates by weighting
recent bursts most heavily while still reflecting history.

8. (4pt) In the context of fair-share scheduling, suppose in the previous iteration a
process ran for the first time, and used 10ms on the CPU. This iteration the process
did not run. Assuming that no other process in its group has ran, it has a base priority
of 5, and the group has a weight of 0.5; what will its priority be in the next iteration?

Formula (priorities are recomputed each iteration, and usage counts are halved each
iteration):

CPUj(i) = CPUj(i-1) / 2
GCPUk(i) = GCPUk(i-1) / 2
Pj(i) = Basej + CPUj(i)/2 + GCPUk(i)/(4 × Wk)

Work:

1. After the iteration in which the process ran: CPUj = 10, and since no other
process in the group ran, GCPUk = 10.
2. Decay for this iteration (the process did not run): CPUj = 10/2 = 5, GCPUk = 5.
3. Decay again entering the next iteration: CPUj = 5/2 = 2.5, GCPUk = 2.5.
4. Substitute: Pj = 5 + 2.5/2 + 2.5/(4 × 0.5)
5. Simplify: Pj = 5 + 1.25 + 1.25 = 7.5

Final Answer:

7.5
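
The arithmetic can be double-checked in a few lines of Python (a sketch of the
calculation above, not course code):

# Fair-share priority check for the values above.
base, weight = 5, 0.5
cpu = gcpu = 10                 # usage from the iteration the process ran
cpu, gcpu = cpu / 2, gcpu / 2   # decay for the iteration it did not run
cpu, gcpu = cpu / 2, gcpu / 2   # decay entering the next iteration
priority = base + cpu / 2 + gcpu / (4 * weight)
print(priority)                 # 7.5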

Chapter 10: Multithreading/Real-time Scheduling

9. (2pt) What is the difference between hard and soft real-time tasks?

Hard Real-Time Tasks:

● Definition: Must meet deadlines strictly; missing them can cause catastrophic
outcomes.
● Example: Car brakes—failing to activate in time can result in collisions.
● Key Point: Deadlines are mandatory and critical.

Soft Real-Time Tasks:

● Definition: Prefer to meet deadlines but can tolerate occasional misses with
minor consequences.
● Example: Real-time clock—delayed updates may cause jitter but remain
functional.
● Key Point: Deadlines are flexible and non-critical.

10. (4pt) Briefly define the five general areas of requirements for a real-time operating
system.

1. Determinism (Predictability):
○ Definition: The ability of the system to perform operations at fixed,
predetermined times or intervals.
○ Explanation: RTOS must exhibit predictable behavior, ensuring that tasks
are executed within specified time constraints. While complete
predictability is unattainable due to unpredictable inputs, minimizing
variability in task execution times is crucial.
2. Responsiveness:
○ Definition: The speed at which the operating system responds to events,
such as interrupts.
○ Explanation: An RTOS must quickly acknowledge and service interrupts
to meet task deadlines. This involves having minimal and guaranteed
maximum delays between event detection and interrupt handling.
3. User Control:
○ Definition: Fine-grained control over system operations and resource
management.
○ Explanation: Users require precise control over system functions, such as
task priority and memory management. This allows users to prevent operations
that could interfere with meeting deadlines, for example by specifying which
processes or pages must remain resident in main memory rather than be paged out.
4. Reliability:
○ Definition: High system reliability to ensure consistent operation without
failures.
○ Explanation: RTOS must be highly reliable because failures can lead to
catastrophic consequences, including physical damage or loss of life.
Systems like auto-braking in cars rely on this reliability to function correctly.
5. Fail Soft Operations (Fault Tolerance):
○ Definition: The ability of the system to continue operating and handling
critical tasks even in the presence of errors.
○ Explanation: RTOS should maintain essential functions despite partial
system failures. For example, if a non-critical component fails, the system
should still manage critical operations like decision-making in real time.

11. (4pt) Briefly define the four classes of real-time scheduling algorithms.

1. Static Table-Driven Approach:


○ Definition: Precomputed schedules are stored in a table.
○ Key Point: Suitable for periodic tasks but lacks flexibility as it cannot adapt
to changes.
2. Static Priority-Driven Preemptive Approach:
○ Definition: Fixed priorities are assigned to tasks, with higher-priority tasks
preempting lower-priority ones.
○ Key Point: Effective for meeting deadlines but requires careful priority
planning.
3. Dynamic Planning-Based Approach:
○ Definition: The schedule is recalculated dynamically as tasks arrive or
change.
○ Key Point: Offers flexibility for mixed tasks but adds runtime overhead.
4. Dynamic Best-Effort Approach:
○ Definition: Tasks are prioritized at runtime to maximize deadline
adherence without guarantees.
○ Key Point: Flexible and easy to implement but may fail under heavy load.

12. (4pt) What is an advantage of Load Sharing? Briefly explain.


Advantage: Even Distribution of Load Across Processors

Explanation:
Using a single ready queue ensures workloads are evenly distributed among all
processors. When a processor becomes idle, it quickly retrieves the next thread,
avoiding bottlenecks and preventing underutilization of other processors. This approach
maximizes processor usage and improves system efficiency.

13. (2pt) What is a disadvantage of Load Sharing? Briefly explain.


Using a shared ready queue requires synchronization mechanisms (e.g., mutexes or
semaphores) to prevent multiple processors from accessing the queue simultaneously.
This adds overhead, reducing system efficiency. Improper synchronization can also lead
to race conditions, causing duplicate scheduling or lost threads.
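
A minimal sketch of that synchronization point, with Python threads standing in for
processors (illustrative only):

import threading
from collections import deque

# Load sharing: every idle "processor" pulls from one shared ready queue,
# so each dequeue must hold a lock; that serialization is the overhead.
ready_queue = deque()
queue_lock = threading.Lock()

def next_thread():
    # Without the lock, two processors could pop the same thread
    # (duplicate scheduling) or corrupt the queue (lost threads).
    with queue_lock:
        return ready_queue.popleft() if ready_queue else None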


14. (10pt) For the following processes, perform Dedicated Processor Assignment.
PID  Threads  Admit Time  Service Time
0    4        0           8
1    4        3           11
2    3        4           16
3    2        4           18
4    4        6           4

The service time is for the entire process. It is assumed that all threads exist during
this time. Use the format we used in class for the chapter 9 examples. Assume
the system has 6 processors. Assume any I/O is part of the service time. Processes
that cannot be assigned enough processors cannot be dispatched.

Scheduling Steps:

1. Time 0:
○ Process 0 arrives.
○ Requires 4 processors (since it has 4 threads).
○ Available processors: 6.
○ Action: Assign 4 processors to Process 0.
○ Remaining processors: 2.
2. Time 3:
○ Process 1 arrives.
○ Requires 4 processors.
○ Available processors: 2 (since Process 0 is using 4 processors).
○ Action: Cannot dispatch Process 1 (not enough processors available).
○ Process 1 waits.
3. Time 4:
○ Process 2 and Process 3 arrive.
○ Process 2 requires 3 processors.
○ Process 3 requires 2 processors.
○ Available processors: 2.
○ Action:
■ Cannot dispatch Process 2 (requires 3 processors).
■ Dispatch Process 3 (requires 2 processors).
○ Assign 2 processors to Process 3.
○ Remaining processors: 0.
4. Time 6:
○ Process 4 arrives.
○ Requires 4 processors.
○ Available processors: 0.
○ Action: Cannot dispatch Process 4 (no available processors).
○ Process 4 waits.
5. Time 8:
○ Process 0 completes (service time of 8 units).
○ Releases 4 processors.
○ Available processors: 4.
○ Waiting Processes: Process 1, Process 2, Process 4.
○ Priority (based on arrival time):
■ Process 1 (arrived at time 3)
■ Process 2 (arrived at time 4)
■ Process 4 (arrived at time 6)
○ Action:
■ Dispatch Process 1 (requires 4 processors).
■ Assign 4 processors to Process 1.
○ Remaining processors: 0.
6. Time 19:
○ Process 1 completes (service time of 11 units from time 8 to 19).
○ Releases 4 processors.
○ Available processors: 4.
○ Waiting Processes: Process 2, Process 4.
○ Action:
■ Dispatch Process 2 (requires 3 processors).
■ Assign 3 processors to Process 2.
○ Remaining processors: 1.
7. Time 22:
○ Process 3 completes (service time of 18 units from time 4 to 22).
○ Releases 2 processors.
○ Available processors: 3 (1 + 2).
○ Waiting Processes: Process 4.
○ Action:
■ Process 4 requires 4 processors; only 3 are available.
■ Cannot dispatch Process 4.
○ Remaining processors: 3.
8. Time 35:
○ Process 2 completes (service time of 16 units from time 19 to 35).
○ Releases 3 processors.
○ Available processors: 6 (3 + 3).
○ Waiting Processes: Process 4.
○ Action:
■ Dispatch Process 4 (requires 4 processors).
■ Assign 4 processors to Process 4.
○ Remaining processors: 2.
9. Time 39:
○ Process 4 completes (service time of 4 units from time 35 to 39).
○ Releases 4 processors.
○ Available processors: 6.
○ All processes have completed execution.
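
The steps above fall out of a small event-driven simulation. The sketch below encodes
the same assumed policy (all-or-nothing allocation, an FCFS scan that skips waiting
processes that do not fit) and reproduces the dispatch and completion times in the trace.

# Dedicated-assignment check: pid -> (threads, admit, service), 6 CPUs.
procs = {0: (4, 0, 8), 1: (4, 3, 11), 2: (3, 4, 16), 3: (2, 4, 18), 4: (4, 6, 4)}
free, waiting, running = 6, [], {}          # running: pid -> (finish_time, cpus)
arrivals = sorted(procs, key=lambda p: procs[p][1])

def try_dispatch(t):
    global free
    for pid in list(waiting):               # FCFS scan; skip those that don't fit
        need = procs[pid][0]
        if need <= free:
            waiting.remove(pid)
            free -= need
            running[pid] = (t + procs[pid][2], need)
            print(f"t={t}: dispatch P{pid} on {need} CPUs")

t, i = 0, 0
while i < len(arrivals) or running:
    next_admit = procs[arrivals[i]][1] if i < len(arrivals) else float("inf")
    next_finish = min((f for f, _ in running.values()), default=float("inf"))
    t = min(next_admit, next_finish)        # jump to the next event
    for pid, (f, n) in list(running.items()):
        if f == t:                          # completions release their CPUs
            del running[pid]
            free += n
            print(f"t={t}: P{pid} completes, {free} CPUs free")
    while i < len(arrivals) and procs[arrivals[i]][1] == t:
        waiting.append(arrivals[i])         # admissions join the FCFS wait list
        i += 1
    try_dispatch(t)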

15. (10pt) For the following processes and threads, perform the dynamic scheduling
algorithm based on ZAHO90.
PID  TID  Spawn Time  Service Time
0    0    2           4
0    1    6           3
1    0    2           9
1    1    9           4
1    2    9           1
2    0    1           10
2    1    2           4
2    2    5           4
2    3    9           10
3    0    4           6
3    1    10          9
4    0    2           10
4    1    6           5

You may use the format we used in class for the chapter 9 examples. You can have
a “ready queue” for each process to hold ready threads that do not have a
processor. Assume the system has 6 processors. Assume any I/O is part of the
service time. Whenever a thread is spawned, the process will request a processor
for it. When a thread exits, the processor is released. To keep things simple, we will
assume the threads are independent of each other. So, no thread will block to wait
for another thread.

Scheduling Steps:

1. Time 1:
○ Threads spawned: P2 T0 (Service Time 10).
○ Process 2 requests 1 processor; 6 processors are idle.
○ Allocate 1 processor to Process 2. P2 T0 runs from time 1 to 11.
○ Processors remaining: 5.
2. Time 2:
○ Threads spawned: P0 T0 (Service Time 4), P1 T0 (Service Time 9), P2 T1
(Service Time 4), P4 T0 (Service Time 10).
○ Four requests arrive; 5 processors are idle, so all are granted.
○ P0 T0 runs 2 to 6, P1 T0 runs 2 to 11, P2 T1 runs 2 to 6, P4 T0 runs 2 to 12.
○ Processors remaining: 1.
3. Time 4:
○ Threads spawned: P3 T0 (Service Time 6).
○ Process 3 requests 1 processor; 1 is idle, so allocate it. P3 T0 runs 4 to 10.
○ Processors remaining: 0.
4. Time 5:
○ Threads spawned: P2 T2 (Service Time 4).
○ No processors are idle, and Process 2 is not a new arrival.
○ P2 T2 waits in Process 2's ready queue.
5. Time 6:
○ Threads completed: P0 T0 and P2 T1 (both started at time 2, Service Time 4).
2 processors released.
○ Threads spawned: P0 T1 (Service Time 3) and P4 T1 (Service Time 5).
○ Outstanding requests, FCFS: P2 T2 (waiting since time 5), then P0 T1 and P4 T1.
○ Allocate to P2 T2 (runs 6 to 10) and P0 T1 (runs 6 to 9); P4 T1 waits in
Process 4's ready queue.
○ Processors remaining: 0.
6. Time 9:
○ Threads completed: P0 T1 (started at time 6, Service Time 3). Process 0 has no
remaining threads and exits. 1 processor released.
○ Threads spawned: P1 T1 (Service Time 4), P1 T2 (Service Time 1), P2 T3
(Service Time 10).
○ Outstanding requests, FCFS: P4 T1 (waiting since time 6), then P1 T1, P1 T2,
P2 T3.
○ Allocate to P4 T1 (runs 9 to 14); the three new requests wait.
○ Processors remaining: 0.
7. Time 10:
○ Threads completed: P3 T0 (started at time 4, Service Time 6) and P2 T2
(started at time 6, Service Time 4). 2 processors released.
○ Threads spawned: P3 T1 (Service Time 9); its request joins the queue behind
Process 1's and Process 2's.
○ Allocate FCFS: P1 T1 (runs 10 to 14) and P1 T2 (runs 10 to 11). Still waiting:
P2 T3, P3 T1.
○ Processors remaining: 0.
8. Time 11:
○ Threads completed: P2 T0 (started at time 1, Service Time 10), P1 T0 (started
at time 2, Service Time 9), and P1 T2 (started at time 10, Service Time 1).
3 processors released.
○ Allocate FCFS: P2 T3 (runs 11 to 21) and P3 T1 (runs 11 to 20).
○ Processors remaining: 1.
9. Time 12:
○ Threads completed: P4 T0 (started at time 2, Service Time 10). 1 processor
released.
○ No outstanding requests; processors remaining: 2.
10. Time 14:
○ Threads completed: P1 T1 (started at time 10, Service Time 4) and P4 T1
(started at time 9, Service Time 5). Processes 1 and 4 exit.
○ Processors remaining: 4.
11. Time 20:
○ Threads completed: P3 T1 (started at time 11, Service Time 9). Process 3 exits.
○ Processors remaining: 5.
12. Time 21:
○ Threads completed: P2 T3 (started at time 11, Service Time 10). Process 2 exits.
○ All threads have completed, and all 6 processors are idle.

16. (2pt) How does Dedicated Processor Assignment differ from Gang Scheduling?
Dedicated Processor Assignment

● Definition: Allocates a fixed number of processors exclusively to one process until it finishes.
● Key Point: All threads of the process run without interruptions, reducing context
switching and ensuring predictable performance.
● Example: Great for tasks needing uninterrupted, consistent processor access.

Gang Scheduling

● Definition: Schedules all threads of a process to run together across processors at the same time.
● Key Point: Synchronizes threads for efficient communication, reducing delays in
tightly-coupled parallel applications.
● Example: Ideal for parallel programs needing threads to work in sync.

Key Differences:

1. Processor Allocation:
○ Dedicated: Processors are reserved for one process for its entire duration.
○ Gang: Processors are time-sliced; during each slice, all threads of one gang
run simultaneously.
2. Flexibility:
○ Dedicated: Less flexible; processors stay locked to one process even if some
of its threads block.
○ Gang: More flexible; processors are shared in time across different gangs.
3. Overhead:
○ Dedicated: Low scheduling overhead, since there is no context switching among
processes.
○ Gang: Reduces thread synchronization delays but adds the cost of coordinated,
simultaneous context switches across processors.


17. (2pt) How is Multicore scheduling different from multiprocessor scheduling?

Multicore Scheduling:

● Definition: Manages task execution across multiple cores within a single processor.
● Key Point: Cores share resources (like caches and memory buses), requiring
techniques like thread affinity and load balancing to optimize performance.
● Example: A single chip with 4 cores running threads simultaneously.

Multiprocessor Scheduling:

● Definition: Allocates tasks across multiple independent processors in a system.


● Key Point: Each processor typically has its own resources, requiring strategies to
minimize inter-processor communication delays and balance workloads.
● Example: A system with 2 physical processors, each running its own tasks.

Key Differences:

1. Hardware Structure:
○ Multicore: Focuses on cores within one processor.
○ Multiprocessor: Involves multiple physical processors.
2. Resource Sharing:
○ Multicore: Cores share caches and buses.
○ Multiprocessor: Processors often have dedicated resources.
3. Complexity:
○ Multicore: Requires fine-grained scheduling to handle shared resources.
○ Multiprocessor: Needs higher-level coordination for independent
processors.

Chapter 14: Virtualization

18. (4pt) Briefly describe Type 1 and Type 2 virtualization. What are their advantages
and disadvantages?
Type 1 Virtualization (Bare-Metal Hypervisor):

● Description: Runs directly on physical hardware, managing guest OSes without relying on a host OS.
● Advantages:
○ Performance: Direct hardware access minimizes overhead.
○ Security: Fewer layers make it more secure.
○ Resource Management: Ideal for enterprise use due to superior
scalability.
● Disadvantages:
○ Complexity: Requires expertise to set up and manage.
○ Hardware Dependency: Needs dedicated, stable hardware.

Type 2 Virtualization (Hosted Hypervisor):

● Description: Operates within a host OS as an application, managing guest OSes indirectly.
● Advantages:
○ Ease of Use: Simple installation and setup.
○ Flexibility: Works on personal devices like laptops.
○ Accessibility: Great for testing and development.
● Disadvantages:
○ Performance Overhead: Host OS introduces latency.
○ Security Risks: Dependent on the security of the host OS.
○ Scalability: Less efficient for large-scale environments.

19. (8pt) Describe paravirtualization. How does it increase the speed of virtual
machines?

Definition:

Paravirtualization is a technique that improves the performance of virtual machines
by modifying the guest operating system to interact directly with the hypervisor
through specialized drivers and APIs. Unlike full virtualization, it eliminates the
need for complete hardware emulation.

How Paravirtualization Increases VM Speed

● Reduced Overhead: Eliminates full hardware emulation, cutting down on computational
load and latency.
● Direct Communication: Replaces slow trap-and-emulate paths with efficient hypercalls
(direct requests to the hypervisor), speeding up operations like disk I/O and networking.
● Efficient Resource Use: Specialized drivers allow better resource allocation,
improving overall system performance.

Advantages

1. Better Performance: Faster operations due to minimized emulation overhead.


2. Efficient Utilization: Optimized use of system resources for critical tasks.
3. Improved Responsiveness: Accelerates processes like I/O by directly
interacting with the hypervisor.

Disadvantages

1. Guest OS Modifications: Requires altering the OS, which may not always be
feasible.
2. Compatibility Issues: Not all operating systems support paravirtualization.
3. Complex Setup: Involves integrating specialized drivers, adding implementation
challenges.

20. (8pt) Compare and contrast container virtualization with virtual machines.

1. Architecture and Isolation

● Virtual Machines (VMs): Emulate entire hardware environments, including operating systems, ensuring strong isolation between instances.
● Containers: Use the host OS kernel to create isolated user spaces, offering
lighter isolation but sharing the underlying kernel.

Key Point: VMs provide stronger isolation, while containers are more lightweight and
share the host OS kernel.

2. Performance and Resource Utilization


● VMs: Require more resources (CPU, memory) due to hardware emulation,
leading to higher overhead.
● Containers: Share the host OS, making them faster and more efficient with
minimal resource usage.

Key Point: Containers are faster and use fewer resources, while VMs have higher
overhead due to emulation.

3. Flexibility and Portability

● VMs: Allow running different operating systems on the same hardware, offering
greater flexibility.
● Containers: Are highly portable but limited to the host OS kernel.

Key Point: VMs offer more flexibility in running varied OSes, whereas containers excel
in portability for compatible environments.

4. Security Considerations

● VMs: Provide robust security with full isolation between systems.


● Containers: Rely on the shared kernel, making them more susceptible to
host-related vulnerabilities.

Key Point: VMs are more secure due to complete isolation, while containers require
careful configuration to mitigate risks.

5. Use Cases

● VMs: Ideal for running multiple OSes, enterprise environments, and scenarios
needing strict isolation.
● Containers: Best for development, testing, microservices, and rapid scaling.

Key Point: VMs suit tasks needing strong isolation, while containers shine in
lightweight, agile deployments.

21. (2pt) Explain the concept of ballooning.

Definition:
Ballooning is a memory management technique in virtualized environments that
dynamically adjusts memory allocation for virtual machines (VMs) based on their needs
and the host's memory availability.

How It Works:

● A special balloon driver in each VM communicates with the hypervisor.


● When the host's memory is low, the hypervisor instructs the balloon driver to
"inflate," prompting the guest OS to release memory back to the hypervisor.
● This reclaimed memory is then allocated to other VMs that need additional
resources.

Purpose:
Ballooning ensures efficient memory usage by redistributing resources dynamically,
preventing memory shortages and allowing the system to support more VMs.