Unit 5

First-In-First-Out (FIFO) Scheduling Method

Introduction
First-In-First-Out (FIFO), also known as First-Come-First-Served (FCFS), is the
simplest scheduling algorithm used in operating systems for job and process
scheduling. In this method, the process that arrives first is executed first,
without preemption.
Definition
FIFO scheduling is a non-preemptive scheduling algorithm where processes are
attended in the exact order of their arrival time. Once a process starts
execution, it runs to completion before the next process begins.
Features
• Simple and easy to implement.
• Jobs are executed in the order they are received.
• Based on queue data structure (FIFO queue).
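The FIFO ready queue can be sketched in Python with `collections.deque` (an illustrative snippet, not part of any particular OS kernel):

```python
from collections import deque

# A FIFO ready queue: processes enter at the rear and leave from the front.
ready_queue = deque()
for p in ["P1", "P2", "P3", "P4", "P5"]:
    ready_queue.append(p)      # enqueue at the rear

first = ready_queue.popleft()  # dequeue from the front -> "P1"
```

Dequeue order always matches arrival order, which is exactly the FIFO guarantee.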
Diagram (Neat Sketch)
Process Arrival Order:
| P1 | P2 | P3 | P4 | P5 |
↓ FIFO Queue

[ Front ➞ P1 → P2 → P3 → P4 → P5 ➞ Rear ]
Gantt Chart Example (Assume arrival times and burst times):
P1 P2 P3 P4 P5
|-------|-------|-------|-------|-------|
0 4 9 14 17 20
(Where burst times: P1=4, P2=5, P3=5, P4=3, P5=3)
Working Example
Let’s consider 5 processes with the following burst times:

Process   Arrival Time   Burst Time
P1        0 ms           4 ms
P2        1 ms           5 ms
P3        2 ms           5 ms
P4        3 ms           3 ms
P5        4 ms           3 ms

Since each process arrives before the CPU becomes free, they execute strictly in arrival order, and the Gantt chart is:
| P1 | P2 | P3 | P4 | P5 |
0 4 9 14 17 20
Calculations
Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time

Process   Completion Time   Turnaround Time   Waiting Time
P1        4 ms              4 ms              0 ms
P2        9 ms              8 ms              3 ms
P3        14 ms             12 ms             7 ms
P4        17 ms             14 ms             11 ms
P5        20 ms             16 ms             13 ms

Average Waiting Time = (0 + 3 + 7 + 11 + 13) / 5 = 6.8 ms


Average Turnaround Time = (4 + 8 + 12 + 14 + 16) / 5 = 10.8 ms
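The table and averages above can be reproduced with a short Python sketch (the process tuples and the helper name `fcfs_metrics` are invented for illustration):

```python
# FCFS/FIFO metrics for the example above.
# Assumption: the process list is already in arrival order.
processes = [("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 5), ("P4", 3, 3), ("P5", 4, 3)]

def fcfs_metrics(procs):
    time, rows = 0, []
    for name, arrival, burst in procs:
        time = max(time, arrival) + burst      # completion time
        tat = time - arrival                   # turnaround = completion - arrival
        wt = tat - burst                       # waiting = turnaround - burst
        rows.append((name, time, tat, wt))
    return rows

rows = fcfs_metrics(processes)
avg_wt = sum(r[3] for r in rows) / len(rows)   # 6.8 ms
avg_tat = sum(r[2] for r in rows) / len(rows)  # 10.8 ms
```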
Advantages
• Simple to understand and implement.
• Fair in terms of order: first come, first served.
• No starvation (every process eventually gets executed).
Disadvantages
• Poor average waiting time, especially if long processes come first (convoy
effect).
• Non-preemptive: no flexibility in switching between processes.
• Not suitable for time-sharing systems.
Applications
• Simple batch systems.
• Situations where fairness is more important than speed.
• Background or non-critical processes.
Conclusion
FIFO is a basic but foundational scheduling algorithm. While it is easy to use, it
is not efficient in handling processes of varying burst times. It is primarily useful
in simpler or low-priority systems, or as a conceptual tool in learning about
more complex algorithms.


Earliest Deadline First (EDF) Scheduling Method


Introduction
Earliest Deadline First (EDF) is a dynamic priority scheduling algorithm
primarily used in real-time operating systems. It assigns priorities based on the
deadlines of processes — the process with the closest (earliest) deadline is
given the highest priority and scheduled to run first.

Definition
EDF is a preemptive scheduling algorithm where at any instant, the process
that has the earliest deadline among all ready processes is selected for
execution. If a new process arrives with a deadline earlier than the currently
running process, the current process is preempted, and the new process is
scheduled.

Key Concepts
• Deadline: The time by which a process must complete.
• Dynamic priority: Priorities are not fixed but change depending on the
deadlines.
• Preemptive: Higher priority tasks can preempt lower priority ones.

Working
• When processes arrive, they are placed in the ready queue.
• The scheduler continuously checks the deadlines of all ready processes.
• The process with the earliest deadline is selected to run.
• If a new process arrives with a deadline earlier than the currently
executing process, preemption occurs.
• The system keeps switching to the process with the nearest deadline,
aiming to meet all deadlines.

Diagram (Neat Sketch)


Process Arrival Times and Deadlines:
--------------------------------------------------
| Process | Arrival Time | Deadline | Burst Time |
--------------------------------------------------
| P1 | 0 ms | 7 ms | 3 ms |
| P2 | 1 ms | 4 ms | 1 ms |
| P3 | 2 ms | 9 ms | 2 ms |
--------------------------------------------------
Gantt Chart Example:
| P1 | P2 | P1 | P3 |
0 1 2 4 6

Example Explanation

Process   Arrival Time   Deadline   Burst Time
P1        0 ms           7 ms       3 ms
P2        1 ms           4 ms       1 ms
P3        2 ms           9 ms       2 ms

• At time 0, P1 arrives, starts execution.


• At time 1, P2 arrives with an earlier deadline (4 ms) than P1’s deadline (7
ms), so P1 is preempted.
• P2 executes from 1 to 2.
• At 2, P1 resumes because it has an earlier deadline than P3.
• P1 executes from 2 to 4.
• Then P3 runs from 4 to 6.
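The timeline above can be verified with a tick-by-tick Python simulation (a minimal sketch; the function name and the task-dictionary layout are invented for this example):

```python
# Tick-by-tick EDF simulation of the example above.
# Each entry: process -> (arrival time, deadline, burst time), all in ms.
tasks = {"P1": (0, 7, 3), "P2": (1, 4, 1), "P3": (2, 9, 2)}

def edf_schedule(tasks):
    remaining = {name: burst for name, (_, _, burst) in tasks.items()}
    timeline, t = [], 0
    while any(remaining.values()):
        # Ready = arrived and unfinished; run the one with the earliest deadline.
        ready = [(deadline, name) for name, (arrival, deadline, _) in tasks.items()
                 if arrival <= t and remaining[name] > 0]
        if not ready:
            t += 1
            continue
        _, current = min(ready)   # earliest deadline wins; preemption is implicit
        timeline.append(current)
        remaining[current] -= 1
        t += 1
    return timeline

timeline = edf_schedule(tasks)
# One entry per ms: P1 runs 0-1, P2 runs 1-2, P1 resumes 2-4, P3 runs 4-6.
```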

Properties
• Optimal: EDF is an optimal scheduling algorithm for uniprocessor
systems. If any set of processes can be scheduled without missing
deadlines, EDF can schedule them.
• Dynamic priorities: Unlike fixed priority algorithms, priorities vary with
time.
• Preemptive: Can interrupt running processes for urgent deadlines.

Advantages
• Optimal CPU utilization: Can utilize the CPU up to 100% without missing
deadlines in ideal conditions.
• Flexibility: Dynamically adjusts priorities based on deadlines.
• Suitable for real-time systems: Guarantees deadline adherence if the
system is feasible.

Disadvantages
• Overhead: Frequent priority changes and preemption lead to context-switch overhead.
• Complexity: Requires continuous deadline monitoring.
• Not suitable for systems with unpredictable task arrivals or where
deadline information is not reliable.
• Performance degrades in overloaded systems: Missed deadlines may
cascade.

Applications
• Real-time systems like embedded systems in automotive, aerospace.
• Multimedia systems where timely data delivery is critical.
• Network scheduling where packets have deadlines.

Conclusion
Earliest Deadline First (EDF) is a powerful and dynamic scheduling algorithm
that excels in real-time environments where meeting deadlines is crucial. It
offers optimal scheduling on single processors but requires careful system
design to manage overhead and avoid overload. Understanding EDF is vital for
designing reliable real-time systems.

Disk Scheduling and SCAN Algorithms
Disk scheduling algorithms are used by operating systems to decide the order
in which disk I/O requests are processed. The goal is to improve overall system
efficiency by minimizing disk seek time and latency.
The SCAN scheduling method is one of the important disk scheduling
algorithms designed to reduce the total seek time by moving the disk arm
across the disk in a systematic manner.

1. SCAN (Elevator) Algorithm


Description:
• The SCAN algorithm is often called the "Elevator algorithm" because it
works similarly to an elevator in a building.
• The disk arm starts at one end of the disk and moves toward the other
end servicing all pending requests in its path.
• When it reaches the last request in that direction, it reverses its direction
and services requests on the way back.
Working:
• The disk arm moves from the current position towards one end (say from
low to high track numbers).
• It services all requests in that direction in the order they appear.
• Upon reaching the last request or the end of the disk, it reverses
direction.
• It then services all requests on the way back.
Advantages:
• Reduces starvation compared to FCFS or SSTF.
• More efficient than FCFS as it reduces unnecessary arm movement.
• Provides a more uniform wait time for requests.
Disadvantages:
• Requests arriving just after the arm has passed may have to wait for a
full cycle.
• Slightly more complex than simpler algorithms.
Example:
• Disk size: 0 to 199 tracks.
• Current head position: 50
• Requests: 10, 35, 40, 70, 90, 150
• Direction: Moving towards higher tracks.
• SCAN services requests in the order: 50 → 70 → 90 → 150 → 199 (disk end) →
(reverses) → 40 → 35 → 10.
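Assuming the head sweeps toward higher tracks first, the servicing order can be computed with a small Python helper (illustrative only; `scan_order` is not a standard library function):

```python
# Service order under SCAN, assuming the head moves toward higher tracks first.
# (The arm also travels on to the disk end before reversing; that adds seek
# distance but services no extra requests, so it does not appear in the order.)
def scan_order(head, requests):
    upward = sorted(r for r in requests if r >= head)
    downward = sorted((r for r in requests if r < head), reverse=True)
    return upward + downward

order = scan_order(50, [10, 35, 40, 70, 90, 150])
# order -> [70, 90, 150, 40, 35, 10]
```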

2. C-SCAN (Circular SCAN) Algorithm


Description:
• C-SCAN is a variant of SCAN that provides a more uniform wait time by
treating the disk as a circular list.
• The disk arm moves in one direction only (say from low to high).
• After reaching the highest track, instead of servicing requests while
returning, it jumps back to the lowest track without servicing requests
on the return.
Working:
• The arm moves from the lowest track to the highest track servicing all
requests.
• After reaching the highest track, it immediately returns to the lowest
track (without servicing any requests on the return).
• Then it services requests again moving in the same direction.
Advantages:
• Provides a more uniform waiting time than SCAN.
• Prevents starvation for requests near the start of the disk.
• Simplifies handling because the arm only moves in one direction.
Disadvantages:
• The jump back to the start may cause some delay.
• Slightly more complex due to the circular behavior.
Example:
• Disk size: 0 to 199 tracks.
• Current head position: 50
• Requests: 10, 35, 40, 70, 90, 150
• Direction: Moving from low to high.
• C-SCAN services: 50 → 70 → 90 → 150 → jump back to 0 → 10 → 35 →
40.

3. LOOK Algorithm
Description:
• LOOK is a variant of SCAN which avoids going to the very ends of the disk
if there are no requests there.
• The disk arm only goes as far as the last request in each direction, then
reverses direction immediately.
Working:
• The arm moves toward one end but stops at the last request in that
direction.
• It then reverses direction to service requests on the way back.
• This avoids unnecessary movement to the extreme ends of the disk.
Advantages:
• More efficient than SCAN because it reduces unnecessary arm travel.
• Reduces overall seek time.
• More responsive to request patterns.
Disadvantages:
• Slightly more complex to implement than SCAN.
• May still have longer waiting times than C-SCAN for some requests.
Example:
• Disk size: 0 to 199 tracks.
• Current head position: 50
• Requests: 10, 35, 40, 70, 90, 150
• The arm moves from 50 → 150 (last request), then reverses and moves
150 → 10.

Summary Table

Algorithm | Arm Movement | Movement to Disk Ends | Request Service Order | Advantages | Disadvantages
SCAN | Back and forth (like an elevator) | Yes, goes to end of disk | Services requests in both directions | Reduces starvation, uniform wait | Delay for new requests just after a pass
C-SCAN | One direction only (circular) | Goes to end, then jumps back to start | Services requests in one direction only | More uniform wait time, prevents starvation | Jump back causes delay
LOOK | Back and forth, but limited | No, stops at last request | Only goes as far as the last request | Less arm movement, efficient | Slightly complex

Example Calculations of Seek Time for SCAN, C-SCAN, and LOOK Algorithms
Assumptions for Examples:
• Disk tracks numbered from 0 to 199
• Current head position: 50
• Requests in queue: 10, 35, 40, 70, 90, 150
• Seek time is proportional to the absolute difference in track numbers
moved by the head.
• For simplicity, assume seek time per track = 1 unit (so seek time =
number of tracks moved).

1. SCAN Algorithm Example


• Direction: Moving from 50 toward higher tracks first.
Order of servicing:
50 → 70 → 90 → 150 → (reach end 199) → reverse → 40 → 35 → 10
Calculations:
• 50 → 70: 20 tracks
• 70 → 90: 20 tracks
• 90 → 150: 60 tracks
• 150 → 199 (end of disk): 49 tracks (arm moves to end before reversing)
• 199 → 40: 159 tracks (reverse direction)
• 40 → 35: 5 tracks
• 35 → 10: 25 tracks
Total seek distance = 20 + 20 + 60 + 49 + 159 + 5 + 25 = 338 tracks

2. C-SCAN Algorithm Example


• Direction: Moving only from low to high; jumps back from end to start
without servicing requests.
Order of servicing:
50 → 70 → 90 → 150 → 199 (end) → jump to 0 (no servicing during jump) →
10 → 35 → 40
Calculations:
• 50 → 70: 20 tracks
• 70 → 90: 20 tracks
• 90 → 150: 60 tracks
• 150 → 199 (end): 49 tracks
• Jump from 199 → 0: 199 tracks (no servicing, but arm movement
counted)
• 0 → 10: 10 tracks
• 10 → 35: 25 tracks
• 35 → 40: 5 tracks
Total seek distance = 20 + 20 + 60 + 49 + 199 + 10 + 25 + 5 = 388 tracks

3. LOOK Algorithm Example


• Direction: Moving from 50 toward higher tracks, but only up to last
request, then reverse.
Order of servicing:
50 → 70 → 90 → 150 → reverse → 40 → 35 → 10
Calculations:
• 50 → 70: 20 tracks
• 70 → 90: 20 tracks
• 90 → 150: 60 tracks
• 150 → 40: 110 tracks (reverse)
• 40 → 35: 5 tracks
• 35 → 10: 25 tracks
Total seek distance = 20 + 20 + 60 + 110 + 5 + 25 = 240 tracks
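All three totals can be checked with one small Python helper (an illustrative sketch; each path list simply encodes the head movements described above):

```python
# Total seek distance = sum of absolute head movements along a path of tracks.
def total_seek(path):
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

head, disk_end = 50, 199
scan_path  = [head, 70, 90, 150, disk_end, 40, 35, 10]     # sweep up, reverse at end
cscan_path = [head, 70, 90, 150, disk_end, 0, 10, 35, 40]  # jump 199 -> 0 counted
look_path  = [head, 70, 90, 150, 40, 35, 10]               # reverse at last request
# total_seek(scan_path) -> 338, total_seek(cscan_path) -> 388,
# total_seek(look_path) -> 240
```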
Summary of Seek Distances

Algorithm   Total Seek Distance (tracks)
SCAN        338
C-SCAN      388
LOOK        240

Interpretation
• LOOK minimizes total seek distance by not moving to the physical ends
of the disk unnecessarily.
• SCAN moves all the way to the end, which increases seek time.
• C-SCAN introduces the jump back to start, adding extra distance,
increasing total seek time but improving fairness and wait times for
some requests.


Feasibility of Heterogeneous Streams over Multiple Storage Devices

Introduction
• Heterogeneous streams refer to simultaneous data streams with
differing characteristics — such as variable bitrates, formats, priorities, or
Quality of Service (QoS) requirements.
• Managing these heterogeneous streams efficiently over multiple storage
devices is a complex challenge in modern storage and multimedia
systems.
• The feasibility study involves analyzing how well these streams can be
supported, scheduled, and accessed from different devices to meet
performance, reliability, and QoS goals.

1. Concept of Heterogeneous Streams


• Streams differ by:
o Data rates: Constant bitrate (CBR) vs. variable bitrate (VBR)
o Formats: Audio, video, text, sensor data, etc.
o Priorities: Real-time streaming, background data transfers
o QoS Requirements: Delay sensitivity, bandwidth needs, jitter
tolerance

2. Multiple Storage Devices Setup


• Storage devices could be:
o HDDs (Hard Disk Drives)
o SSDs (Solid State Drives)
o Network Attached Storage (NAS)
o RAID arrays or distributed storage clusters
• Devices differ in:
o Access latency
o Data transfer rates
o Reliability and failure characteristics
o Cost and power consumption

3. Feasibility Factors
a) Performance
• Throughput and latency must satisfy the most stringent stream
requirements.
• Scheduling heterogeneous streams across multiple devices can balance
the load to reduce bottlenecks.
• Devices with high sequential throughput (HDDs) suit streaming large
sequential data, while random access devices (SSDs) better serve small,
scattered data streams.
• Feasibility depends on whether the combined system can meet the
highest QoS demands of the streams simultaneously.
b) Scheduling and Resource Management
• Efficient scheduling algorithms are essential to:
o Prioritize streams based on urgency and importance.
o Dynamically allocate streams to devices with available resources.
• Heterogeneous nature requires adaptive scheduling, since static
allocation may lead to resource underutilization or QoS violations.
• Advanced techniques like reservation-based scheduling, priority queues,
or QoS-aware policies improve feasibility.
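As a toy illustration of priority-based dispatch (the stream names and priority values here are invented, not taken from any real system), a heap-based priority queue serves the most urgent stream first:

```python
import heapq

# Toy sketch of priority-based stream dispatch.
# Lower number = higher priority (e.g. real-time streams before backups).
pending = []
heapq.heappush(pending, (0, "live-video"))    # delay-sensitive, serve first
heapq.heappush(pending, (2, "backup-sync"))   # background transfer
heapq.heappush(pending, (1, "audio-call"))

# Pop streams in priority order.
service_order = [heapq.heappop(pending)[1] for _ in range(len(pending))]
# service_order -> ['live-video', 'audio-call', 'backup-sync']
```

A real QoS-aware scheduler would also track device load and deadlines, but the heap captures the core idea: urgency, not arrival order, decides who is served next.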
c) Synchronization and Coordination
• When multiple devices serve parts of a single heterogeneous stream
(e.g., audio and video stored separately), synchronization is crucial.
• Coordination mechanisms ensure time-aligned playback, avoiding
glitches or latency mismatches.
• Feasibility depends on the ability of the system to maintain
synchronization despite device heterogeneity.
d) Scalability
• As the number of streams and devices increases, the system must scale
without significant degradation.
• Heterogeneous storage systems with distributed data placement and
parallel access improve scalability.
• Feasibility is higher when architecture supports horizontal scaling
(adding more devices).
e) Reliability and Fault Tolerance
• Multiple devices offer redundancy options (e.g., RAID configurations).
• Heterogeneous streams can be prioritized for replication and error
correction to meet reliability.
• Feasibility improves if the system can tolerate device failures without
disrupting streams.
f) Cost and Energy Efficiency
• Combining different devices allows optimization for cost and power:
o Low-cost HDDs for archival streams.
o High-performance SSDs for real-time streams.
• Feasibility involves evaluating trade-offs between cost and performance.

4. Challenges to Feasibility
• Complexity in managing different devices with diverse performance
characteristics.
• Ensuring QoS guarantees for variable and conflicting stream
requirements.
• Overhead of synchronization, coordination, and data consistency.
• Handling device heterogeneity in failure modes and recovery processes.
• Potential increased latency due to inter-device communication or
contention.

5. Technologies and Techniques Supporting Feasibility


• Hybrid Storage Architectures: Combining SSDs and HDDs for tiered
storage.
• Middleware and Abstraction Layers: Provide unified access despite
heterogeneous devices.
• QoS-aware Storage Systems: Support priority and reservation.
• Distributed File Systems and RAID: Provide data distribution and fault
tolerance.
• Advanced Scheduling Algorithms: SCAN, C-SCAN, priority queues,
deadline-based scheduling.

6. Case Studies and Practical Examples


• Multimedia streaming services (Netflix, YouTube) use multiple storage
tiers.
• Cloud storage platforms distribute data across heterogeneous storage
devices.
• Real-time sensor data processing systems leverage different storage
types based on latency needs.

Conclusion
• Feasibility of heterogeneous streams over multiple storage devices is
generally high, provided advanced scheduling, synchronization, and
resource management techniques are employed.
• Benefits include improved performance, scalability, reliability, and cost
efficiency.
• However, challenges such as complexity, synchronization overhead, and
QoS assurance must be carefully addressed.
• Modern storage architectures and technologies increasingly support
these heterogeneous environments, making them practical for diverse
applications.
