Unit 5
Introduction
First-In-First-Out (FIFO), also known as First-Come-First-Served (FCFS), is the
simplest scheduling algorithm used in operating systems for job and process
scheduling. In this method, the process that arrives first is executed first,
without preemption.
Definition
FIFO scheduling is a non-preemptive scheduling algorithm in which processes are
served in the exact order of their arrival. Once a process starts executing,
it runs to completion before the next process begins.
Features
• Simple and easy to implement.
• Jobs are executed in the order they are received.
• Based on queue data structure (FIFO queue).
Diagram (Neat Sketch)
Process Arrival Order:
| P1 | P2 | P3 | P4 | P5 |
        ↓ FIFO Queue
[ Front → P1 → P2 → P3 → P4 → P5 → Rear ]
Gantt Chart Example (burst times: P1=4, P2=5, P3=5, P4=3, P5=3):
| P1 | P2 | P3 | P4 | P5 |
0    4    9    14   17   20
Working Example
Let's consider 5 processes with the following arrival and burst times:

Process  Arrival Time  Burst Time
P1       0 ms          4 ms
P2       1 ms          5 ms
P3       2 ms          5 ms
P4       3 ms          3 ms
P5       4 ms          3 ms
Since each process arrives before the CPU becomes free, the CPU is never idle
and processes run strictly in arrival order. The Gantt chart is:
| P1 | P2 | P3 | P4 | P5 |
0    4    9    14   17   20
Calculations
Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time

Process  Completion Time  Turnaround Time  Waiting Time
P1       4                4                0
P2       9                8                3
P3       14               12               7
P4       17               14               11
P5       20               16               13

Average TAT = (4 + 8 + 12 + 14 + 16) / 5 = 10.8 ms
Average WT = (0 + 3 + 7 + 11 + 13) / 5 = 6.8 ms
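The FCFS calculation above can be sketched in Python. This is a minimal illustration using the process data from the table; the variable names are my own, not a standard API:

```python
# FCFS (FIFO) scheduling metrics: (name, arrival_ms, burst_ms),
# already sorted by arrival time.
processes = [
    ("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 5), ("P4", 3, 3), ("P5", 4, 3),
]

time = 0
results = []
for name, arrival, burst in processes:
    time = max(time, arrival) + burst   # completion time (CPU may idle)
    tat = time - arrival                # turnaround = completion - arrival
    wt = tat - burst                    # waiting = turnaround - burst
    results.append((name, time, tat, wt))

for name, ct, tat, wt in results:
    print(name, ct, tat, wt)
```

Running this reproduces the table: for example, P3 completes at 14 ms with a turnaround of 12 ms and a waiting time of 7 ms.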
Earliest Deadline First (EDF)
Definition
EDF is a preemptive scheduling algorithm where at any instant, the process
that has the earliest deadline among all ready processes is selected for
execution. If a new process arrives with a deadline earlier than the currently
running process, the current process is preempted, and the new process is
scheduled.
Key Concepts
• Deadline: The time by which a process must complete.
• Dynamic priority: Priorities are not fixed but change depending on the
deadlines.
• Preemptive: Higher priority tasks can preempt lower priority ones.
Working
• When processes arrive, they are placed in the ready queue.
• The scheduler continuously checks the deadlines of all ready processes.
• The process with the earliest deadline is selected to run.
• If a new process arrives with a deadline earlier than the currently
executing process, preemption occurs.
• The system keeps switching to the process with the nearest deadline,
aiming to meet all deadlines.
Example Explanation
Consider three processes with the following arrival times, deadlines, and
burst times (deadlines taken as absolute):

Process  Arrival Time  Deadline  Burst Time
P1       0 ms          7 ms      3 ms
P2       1 ms          4 ms      1 ms
P3       2 ms          9 ms      2 ms

At t = 0, P1 is the only ready process and starts. At t = 1, P2 arrives with
an earlier deadline (4 ms < 7 ms), preempts P1, and completes at t = 2. P1
then resumes (its deadline of 7 ms is earlier than P3's 9 ms) and completes
at t = 4. Finally, P3 runs and completes at t = 6. All three deadlines are met.
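The EDF schedule for this example can be simulated millisecond by millisecond. This is a minimal sketch, assuming absolute deadlines and 1 ms scheduling ticks; the data structures are illustrative:

```python
# EDF simulation: (name, arrival_ms, absolute_deadline_ms, burst_ms)
procs = [("P1", 0, 7, 3), ("P2", 1, 4, 1), ("P3", 2, 9, 2)]

remaining = {name: burst for name, _, _, burst in procs}
finish = {}
t = 0
while len(finish) < len(procs):
    # Ready = arrived and not yet finished; pick the earliest deadline.
    ready = [(d, name) for name, a, d, _ in procs
             if a <= t and remaining[name] > 0]
    if not ready:
        t += 1
        continue
    _, name = min(ready)        # earliest deadline wins (preemptive)
    remaining[name] -= 1
    t += 1
    if remaining[name] == 0:
        finish[name] = t

print(finish)   # completion times
```

The simulation gives completion times P2 = 2 ms, P1 = 4 ms, P3 = 6 ms, matching the walkthrough: every process finishes before its deadline.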
Properties
• Optimal: EDF is an optimal scheduling algorithm for uniprocessor
systems. If any set of processes can be scheduled without missing
deadlines, EDF can schedule them.
• Dynamic priorities: Unlike fixed priority algorithms, priorities vary with
time.
• Preemptive: Can interrupt running processes for urgent deadlines.
Advantages
• Optimal CPU utilization: Can utilize the CPU up to 100% without missing
deadlines in ideal conditions.
• Flexibility: Dynamically adjusts priorities based on deadlines.
• Suitable for real-time systems: Guarantees deadline adherence if the
system is feasible.
Disadvantages
• Overhead: Frequent priority changes and preemption lead to context-
switch overhead.
• Complexity: Requires continuous deadline monitoring.
• Not suitable for systems with unpredictable task arrivals or where
deadline information is not reliable.
• Performance degrades in overloaded systems: Missed deadlines may
cascade.
Applications
• Real-time systems like embedded systems in automotive, aerospace.
• Multimedia systems where timely data delivery is critical.
• Network scheduling where packets have deadlines.
Conclusion
Earliest Deadline First (EDF) is a powerful and dynamic scheduling algorithm
that excels in real-time environments where meeting deadlines is crucial. It
offers optimal scheduling on single processors but requires careful system
design to manage overhead and avoid overload. Understanding EDF is vital for
designing reliable real-time systems.
3. LOOK Algorithm
Description:
• LOOK is a variant of SCAN which avoids going to the very ends of the disk
if there are no requests there.
• The disk arm only goes as far as the last request in each direction, then
reverses direction immediately.
Working:
• The arm moves toward one end but stops at the last request in that
direction.
• It then reverses direction to service requests on the way back.
• This avoids unnecessary movement to the extreme ends of the disk.
Advantages:
• More efficient than SCAN because it reduces unnecessary arm travel.
• Reduces overall seek time.
• More responsive to request patterns.
Disadvantages:
• Slightly more complex to implement than SCAN.
• May still have longer waiting times than C-SCAN for some requests.
Example:
• Disk size: 0 to 199 tracks.
• Current head position: 50
• Requests: 10, 35, 40, 70, 90, 150
• The arm moves from 50 → 150 (last request), then reverses and moves
150 → 10.
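The LOOK service order in this example can be sketched as follows. This is a minimal illustration; `look_order` is a hypothetical helper, not a standard library API:

```python
def look_order(head, requests, direction="up"):
    """Return the order in which LOOK services the given track requests."""
    up = sorted(r for r in requests if r >= head)                 # toward higher tracks
    down = sorted((r for r in requests if r < head), reverse=True)  # then back down
    return up + down if direction == "up" else down + up

print(look_order(50, [10, 35, 40, 70, 90, 150]))
# head moves 50 -> 150 (last request), then reverses down to 10
```

With the head at 50 and an initial upward direction, the arm services 70, 90, 150, then reverses and services 40, 35, 10.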
Summary Table

Algorithm  Arm Movement            Movement to Disk Ends    Request Service Order       Advantages                  Disadvantages
SCAN       Back and forth          Yes, goes to end         Services requests in        Reduces starvation,         Delay for new requests
           (like elevator)         of disk                  both directions             uniform wait                after pass
C-SCAN     One direction only      No, jumps to start       Services requests in        More uniform wait time,     Jump back causes delay
           (circular)                                       one direction only          prevents starvation
LOOK       Back and forth but      No, stops at             Only goes as far as         Less arm movement,          Slightly complex
           limited                 last request             last request                efficient
Example Calculations of Seek Time for SCAN, C-SCAN, and LOOK Algorithms
Assumptions for Examples:
• Disk tracks numbered from 0 to 199
• Current head position: 50
• Requests in queue: 10, 35, 40, 70, 90, 150
• Seek time is proportional to the absolute difference in track numbers
moved by the head.
• For simplicity, assume seek time per track = 1 unit (so seek time =
number of tracks moved).
SCAN:   50 → 199, then 199 → 10  = 149 + 189 = 338
C-SCAN: 50 → 199, jump to 0, then 0 → 40 = 149 + 199 + 40 = 388
LOOK:   50 → 150, then 150 → 10  = 100 + 140 = 240

Algorithm  Total Seek Time (tracks)
SCAN       338
C-SCAN     388
LOOK       240
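These totals can be checked with a short Python sketch under the stated assumptions (head at 50, tracks 0-199, 1 time unit per track, and the C-SCAN jump from 199 to 0 counted as head movement):

```python
head, max_track = 50, 199
requests = [10, 35, 40, 70, 90, 150]

up = sorted(r for r in requests if r >= head)
down = [r for r in requests if r < head]

# SCAN: go up to the disk end (199), then back down to the lowest request (10).
scan = (max_track - head) + (max_track - min(down))

# C-SCAN: up to 199, jump to track 0, then up to the largest request
# below the starting head position (40).
cscan = (max_track - head) + max_track + max(down)

# LOOK: only as far as the last request in each direction (150, then 10).
look = (max(up) - head) + (max(up) - min(down))

print(scan, cscan, look)
```

This prints 338, 388, and 240, matching the table above.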
Interpretation
• LOOK minimizes total seek distance by not moving to the physical ends
of the disk unnecessarily.
• SCAN moves all the way to the end, which increases seek time.
• C-SCAN introduces the jump back to start, adding extra distance,
increasing total seek time but improving fairness and wait times for
some requests.
Feasibility of Heterogeneous Streams over Multiple Storage Devices
Introduction
• Heterogeneous streams refer to simultaneous data streams with
differing characteristics — such as variable bitrates, formats, priorities, or
Quality of Service (QoS) requirements.
• Managing these heterogeneous streams efficiently over multiple storage
devices is a complex challenge in modern storage and multimedia
systems.
• The feasibility study involves analyzing how well these streams can be
supported, scheduled, and accessed from different devices to meet
performance, reliability, and QoS goals.
3. Feasibility Factors
a) Performance
• Throughput and latency must satisfy the most stringent stream
requirements.
• Scheduling heterogeneous streams across multiple devices can balance
the load to reduce bottlenecks.
• Devices with high sequential throughput (HDDs) suit streaming large
sequential data, while random access devices (SSDs) better serve small,
scattered data streams.
• Feasibility depends on whether the combined system can meet the
highest QoS demands of the streams simultaneously.
b) Scheduling and Resource Management
• Efficient scheduling algorithms are essential to:
o Prioritize streams based on urgency and importance.
o Dynamically allocate streams to devices with available resources.
• Heterogeneous nature requires adaptive scheduling, since static
allocation may lead to resource underutilization or QoS violations.
• Advanced techniques like reservation-based scheduling, priority queues,
or QoS-aware policies improve feasibility.
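A priority-queue dispatcher of the kind mentioned above can be sketched as follows. This is a hypothetical illustration only, assuming each stream request carries a priority class and a deadline; all names and fields are my own:

```python
import heapq

# Min-heap ordered by (priority_class, deadline): lower tuples are served
# first, so real-time streams beat best-effort ones, and earlier deadlines
# break ties within a class.
queue = []

def submit(priority, deadline, stream_id):
    heapq.heappush(queue, (priority, deadline, stream_id))

submit(1, 40, "video-A")    # real-time stream, deadline 40
submit(2, 100, "backup-B")  # best-effort archival stream
submit(1, 25, "audio-A")    # real-time stream, earlier deadline

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)
```

Dispatch order is audio-A, video-A, backup-B: both real-time streams are served before the best-effort stream, with the earlier deadline first.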
c) Synchronization and Coordination
• When multiple devices serve parts of a single heterogeneous stream
(e.g., audio and video stored separately), synchronization is crucial.
• Coordination mechanisms ensure time-aligned playback, avoiding
glitches or latency mismatches.
• Feasibility depends on the ability of the system to maintain
synchronization despite device heterogeneity.
d) Scalability
• As the number of streams and devices increases, the system must scale
without significant degradation.
• Heterogeneous storage systems with distributed data placement and
parallel access improve scalability.
• Feasibility is higher when architecture supports horizontal scaling
(adding more devices).
e) Reliability and Fault Tolerance
• Multiple devices offer redundancy options (e.g., RAID configurations).
• Heterogeneous streams can be prioritized for replication and error
correction to meet reliability.
• Feasibility improves if the system can tolerate device failures without
disrupting streams.
f) Cost and Energy Efficiency
• Combining different devices allows optimization for cost and power:
o Low-cost HDDs for archival streams.
o High-performance SSDs for real-time streams.
• Feasibility involves evaluating trade-offs between cost and performance.
4. Challenges to Feasibility
• Complexity in managing different devices with diverse performance
characteristics.
• Ensuring QoS guarantees for variable and conflicting stream
requirements.
• Overhead of synchronization, coordination, and data consistency.
• Handling device heterogeneity in failure modes and recovery processes.
• Potential increased latency due to inter-device communication or
contention.
Conclusion
• Feasibility of heterogeneous streams over multiple storage devices is
generally high, provided advanced scheduling, synchronization, and
resource management techniques are employed.
• Benefits include improved performance, scalability, reliability, and cost
efficiency.
• However, challenges such as complexity, synchronization overhead, and
QoS assurance must be carefully addressed.
• Modern storage architectures and technologies increasingly support
these heterogeneous environments, making them practical for diverse
applications.