RTS Compiled Notes UNIT 2

Aktu syllabus

Uploaded by

nikhilraj8020

CLOCK-DRIVEN APPROACH

In real-time systems, a clock-driven approach is a fundamental concept used to schedule and control the timing of tasks or processes. The goal of a real-time system is to ensure that certain tasks or operations are completed within specified time constraints. The clock-driven approach relies on a periodic clock signal to trigger and synchronize various activities within the system.

Here are key aspects of the clock-driven approach in real-time systems:

1. Periodic Tasks:
• Real-time systems often involve tasks that need to be executed at regular
intervals.
• Each task is associated with a specific period, representing the time between
consecutive executions.
2. Clock Interrupts:
• The system's clock generates interrupts at regular intervals.
• These clock interrupts serve as triggers for the initiation of scheduled tasks.
3. Task Scheduling:
• Tasks are scheduled to run at specific points in time, aligned with the clock
interrupts.
• The scheduling algorithm determines the order and timing of task execution
based on their priorities and deadlines.
4. Time-Driven Behavior:
• The system operates in a time-driven manner, where tasks are initiated or
terminated based on the clock signal.
• This approach allows for predictable and deterministic behavior in meeting
timing requirements.
5. Advantages:
• Predictability: The periodic nature of tasks and the regular clock interrupts
contribute to predictable system behavior.
• Control: The clock-driven approach provides control over the timing and
execution of tasks.
6. Disadvantages:
• May not handle dynamic or unpredictable events well.
• Limited flexibility in handling varying workloads or changing task priorities.
7. Examples:
• In embedded systems, tasks like sensor reading, control loops, or
communication processes may be scheduled using a clock-driven approach.
• Real-time operating systems (RTOS) often use clock-driven scheduling to
ensure timely execution of tasks.
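A clock-driven scheduler is often realized as a cyclic executive: a schedule table computed offline maps each frame to the jobs that run in it, and a periodic clock interrupt advances through the table. A minimal sketch follows; the frame size and task set (T1, T2, T3) are hypothetical, and the clock interrupt is simulated by a loop:

```python
# Minimal cyclic-executive sketch for clock-driven scheduling.
# The schedule table is computed offline; a periodic clock interrupt
# (simulated here by iterating over frames) triggers each frame.

FRAME_SIZE = 4  # time units per frame (chosen offline)

# Offline schedule table: frame index -> jobs to run in that frame.
# Hypothetical task set: T1 (period 4), T2 (period 8), T3 (period 8).
SCHEDULE = {
    0: ["T1", "T2"],
    1: ["T1", "T3"],
}
HYPERPERIOD_FRAMES = len(SCHEDULE)

def run_frames(num_frames):
    """Simulate clock interrupts firing at each frame boundary."""
    trace = []
    for k in range(num_frames):
        frame = k % HYPERPERIOD_FRAMES       # wrap around the hyperperiod
        for job in SCHEDULE[frame]:
            trace.append((k * FRAME_SIZE, job))  # (frame start time, job)
    return trace

print(run_frames(4))
# -> [(0, 'T1'), (0, 'T2'), (4, 'T1'), (4, 'T3'),
#     (8, 'T1'), (8, 'T2'), (12, 'T1'), (12, 'T3')]
```

Because the table is fixed in advance, the release time of every job is known before the system runs, which is exactly the source of the predictability described above.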

WEIGHTED ROUND ROBIN:

• Weighted Round Robin (WRR) is a scheduling algorithm commonly used in
networking and server environments. While it is not specifically associated with
real-time systems, it can be adapted for certain real-time applications where
fairness and resource allocation are important. The basic principles of
Weighted Round Robin, and how it might be applied in the context of a
real-time system, are as follows:
• Task Prioritization:
Each task or process is assigned a weight, which represents its priority or
importance relative to other tasks.
• Time Quantum:
The system operates in rounds, and during each round, tasks are scheduled to
execute for a certain time quantum.
The time quantum is determined based on the weight assigned to each task.
• Task Execution:
Tasks are scheduled in a circular manner, and during each round, a task is
allowed to execute for its allocated time quantum.
• Weight Adjustment:
The weights assigned to tasks may be dynamically adjusted based on the
system's needs or changing priorities.
• Fairness:
The weighted nature of the algorithm ensures that tasks with higher weights
receive a proportionally larger share of the system's processing time.
• Application in Real-Time Systems:
• Resource Allocation:
In real-time systems, certain tasks may have different levels of urgency or
importance. Weighted Round Robin can be used to allocate system resources in
a way that reflects the priority of tasks.
• Dynamic Adjustments:
Real-time systems often face changing workloads and priorities. WRR allows
for dynamic adjustments by modifying task weights, ensuring that critical tasks
receive more processing time when needed.
• Predictability:
While not as deterministic as some clock-driven approaches, Weighted Round
Robin still provides a level of predictability in terms of task execution,
especially when considering the weighted priorities.
• Fairness in Resource Usage:
Weighted Round Robin helps in achieving fairness by ensuring that each task
gets a fair share of the system's resources, taking into account their assigned
weights.
• Adaptability:
The adaptability of WRR allows real-time systems to respond to changing
conditions, such as varying task priorities or resource availability.
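The principles above can be sketched in a few lines: each task receives a number of time quanta per round proportional to its weight. The task names and weights below are hypothetical:

```python
# Weighted Round Robin sketch: in each round, every task runs for a
# number of time quanta equal to its weight, so higher-weight tasks
# get a proportionally larger share of processing time.

def wrr_order(tasks, rounds=1):
    """tasks: list of (name, weight). Returns the quantum-by-quantum
    execution order over the given number of rounds."""
    order = []
    for _ in range(rounds):
        for name, weight in tasks:
            order.extend([name] * weight)  # 'weight' quanta this round
    return order

tasks = [("A", 3), ("B", 1), ("C", 2)]  # A gets 3x B's share per round
print(wrr_order(tasks))
# -> ['A', 'A', 'A', 'B', 'C', 'C']
```

Dynamic weight adjustment, as described above, amounts to editing the `tasks` list between rounds; the next round then reflects the new shares.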

PRIORITY-DRIVEN SCHEDULING

Priority-driven scheduling is a type of scheduling algorithm where tasks are executed based on
their priority levels. Tasks with higher priority levels are executed before tasks with lower
priority levels. This ensures that higher-priority tasks are completed in a timely manner,
potentially at the expense of lower-priority tasks. Here's an example to illustrate priority-driven
scheduling:
Let's consider a simplified example where we have four tasks to be scheduled:
1. Task A with priority 1
2. Task B with priority 2
3. Task C with priority 3
4. Task D with priority 1
In this scenario, the higher the priority number, the higher the priority of the task. Therefore,
Task C has the highest priority, followed by Task B; Tasks A and D share the lowest priority,
with the tie broken in favor of Task A (for example, by arrival order).
Now, let's assume each task has a certain execution time:
• Task A requires 5 units of time.
• Task B requires 3 units of time.
• Task C requires 2 units of time.
• Task D requires 4 units of time.
Using a priority-driven scheduling algorithm, tasks will be executed based on their priority
levels. The scheduler will select the task with the highest priority that is ready to execute.
Here's how the scheduling might proceed:
1. Since Task C has the highest priority, it will be executed first. Task C completes in 2
units of time.
2. Next, Task B will be executed as it has the next highest priority. Task B completes in 3
units of time.
3. Task A cannot be executed until both Task C and Task B are completed because they
have higher priorities. Task A completes in 5 units of time.
4. Finally, Task D is executed, as it lost the tie with Task A at the lowest priority level.
Task D completes in 4 units of time.

Time: 0  1  2  3  4  5  6  7  8  9  10 11 12 13
Task: C  C  B  B  B  A  A  A  A  A  D  D  D  D
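The example above can be reproduced with a short non-preemptive priority scheduler. All four tasks are ready at time 0, a larger number means higher priority, and ties are broken by listing order (so Task A runs before Task D):

```python
# Non-preemptive priority scheduling of the four-task example:
# (name, priority, execution time); higher number = higher priority.

tasks = [("A", 1, 5), ("B", 2, 3), ("C", 3, 2), ("D", 1, 4)]

def priority_schedule(tasks):
    # Sort by priority descending; Python's sort is stable, so tasks
    # with equal priority keep their original (arrival) order.
    ordered = sorted(tasks, key=lambda t: -t[1])
    timeline, t = [], 0
    for name, _prio, burst in ordered:
        timeline.append((name, t, t + burst))  # (task, start, finish)
        t += burst
    return timeline

print(priority_schedule(tasks))
# -> [('C', 0, 2), ('B', 2, 5), ('A', 5, 10), ('D', 10, 14)]
```

The printed intervals match the timeline: C runs in [0, 2), B in [2, 5), A in [5, 10), and D in [10, 14).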

| Aspect | Static Scheduling Algorithms | Dynamic Scheduling Algorithms |
|---|---|---|
| Priority Assignment | Assigned at design time and remains fixed | May change dynamically during runtime |
| Determinism | Fully deterministic | May introduce nondeterminism |
| Flexibility | Less flexible, priorities predetermined | More flexible, priorities adapt to runtime conditions |
| Overhead | Lower overhead due to fixed priorities | Higher overhead due to dynamic adjustments |
| Complexity | Generally simpler | May be more complex |
| Performance Guarantee | Guarantees schedulability under certain conditions | May provide better resource utilization, but guarantees may be harder to establish |
| Example Algorithms | Rate-Monotonic Scheduling (RMS), Deadline-Monotonic Scheduling (DMS) | Earliest Deadline First (EDF), Least Slack Time First (LSTF), Proportional Share Scheduling (PSS), Dynamic Priority Scheduling (DPS) |
| Suitable Applications | Deterministic systems with predictable task characteristics | Systems where task characteristics or system conditions may vary over time |

Note that EDF belongs in the dynamic column: a task's priority changes at runtime as its absolute deadline approaches, whereas RMS and DMS fix priorities at design time.

1. Offline Scheduling:
• In offline scheduling, the entire set of tasks and their characteristics (such as
execution times, deadlines, etc.) are known in advance before scheduling
begins.
• The scheduler can analyze the tasks and optimize the scheduling based on
various criteria such as minimizing the makespan (total completion time),
minimizing the maximum lateness, or maximizing resource utilization.
• Offline scheduling algorithms can be computationally intensive since they often
involve searching for an optimal or near-optimal solution among all possible
schedules.
• Examples of offline scheduling algorithms include Optimal Scheduling, List
Scheduling, and Genetic Algorithms.
2. Online Scheduling:
• In online scheduling, tasks arrive dynamically during execution, and the
scheduler makes decisions without knowing the characteristics of future tasks.
• The scheduler must make decisions based only on the information available at
the time of task arrival or execution.
• Online scheduling algorithms must be efficient and make decisions quickly
since they do not have the luxury of analyzing all future tasks.
• These algorithms often use heuristics or approximation techniques to make
decisions quickly.
• Examples of online scheduling algorithms include First Come First Serve
(FCFS), Shortest Job Next (SJN), and Round Robin.
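An online policy such as FCFS can be sketched directly: the scheduler commits to each decision using only the jobs it has seen so far, never the full future workload. The arrival and burst times below are hypothetical:

```python
# Online First Come First Serve sketch: jobs are handled strictly in
# arrival order, with no knowledge of jobs that have not arrived yet.

def fcfs(arrivals):
    """arrivals: list of (name, arrival_time, burst), sorted by arrival.
    Returns a list of (name, start, finish) intervals."""
    t, schedule = 0, []
    for name, arrive, burst in arrivals:
        t = max(t, arrive)                 # CPU may sit idle until arrival
        schedule.append((name, t, t + burst))
        t += burst
    return schedule

print(fcfs([("J1", 0, 3), ("J2", 1, 2), ("J3", 7, 1)]))
# -> [('J1', 0, 3), ('J2', 3, 5), ('J3', 7, 8)]
```

Note the gap between t=5 and t=7: an offline scheduler that knew J3's arrival in advance could have rearranged work, but an online one cannot.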
Comparison:
1. Complexity:
• Offline scheduling algorithms can be more complex since they have access to
complete information about tasks in advance and can optimize based on various
criteria.
• Online scheduling algorithms tend to be simpler and more lightweight since
they must make decisions quickly based on partial information.
2. Performance:
• Offline scheduling algorithms can potentially achieve better performance since
they can optimize schedules based on complete information.
• Online scheduling algorithms may not always produce optimal solutions but are
designed to make reasonable decisions quickly based on available information.
3. Resource Utilization:
• Offline scheduling algorithms can consider future resource utilization more
effectively since they have information about all tasks.
• Online scheduling algorithms must balance resource utilization based on current
information, which may lead to suboptimal resource allocation in some cases.
4. Real-time Systems:
• Online scheduling is more common in real-time systems where tasks arrive
dynamically and must be scheduled quickly to meet deadlines.
• Offline scheduling may be more suitable for batch processing or situations
where tasks are known in advance and can be optimized before execution
begins.

Earliest Deadline First (EDF)

Earliest Deadline First (EDF) scheduling is a scheduling algorithm used in real-time
operating systems (RTOS) and other contexts where tasks or processes have deadlines.
The basic idea is to prioritize tasks based on their deadlines, with the task having
the earliest deadline being executed first. This ensures that tasks with imminent
deadlines are completed on time.
Here's how the EDF scheduling algorithm works:
1. Task Arrival: When a new task arrives or becomes ready to execute, its
deadline is noted.
2. Task Execution: The scheduler selects the task with the earliest deadline from
the set of ready tasks and executes it. If multiple tasks have the same earliest
deadline, the scheduler may use additional criteria to break ties, such as priority
or arrival time.
3. Task Completion: After executing a task, the scheduler checks if any new tasks
have arrived or if the current tasks have changed their deadlines due to changes
in their execution time. If so, it updates the list of ready tasks accordingly.
4. Repeat: Steps 2 and 3 are repeated continuously as tasks arrive, execute, and
complete.
EDF scheduling ensures that tasks are scheduled in a manner that minimizes the
number of missed deadlines, assuming that tasks can be preempted if necessary.
However, it's essential to verify that a task set is schedulable under EDF. For
independent periodic tasks with implicit deadlines on a single processor, a task
set is schedulable under preemptive EDF if and only if the total utilization
U = Σ(Ci/Pi) ≤ 1; for tasks whose deadlines are shorter than their periods,
processor-demand analysis can be used instead. (Rate-Monotonic and
Deadline-Monotonic analyses apply to fixed-priority scheduling, not to EDF.)
While EDF scheduling is efficient and can achieve full (100%) CPU utilization
when tasks have implicit deadlines, it may not be suitable for all systems. For
example, it can suffer from priority inversion when tasks share resources or
have dependencies, and under transient overload it can exhibit a domino effect
in which many tasks miss their deadlines in succession.
Overall, EDF scheduling is a powerful algorithm for real-time systems where
meeting deadlines is crucial.
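The steps above can be sketched as a simple per-time-unit simulation: at every instant the ready job with the earliest absolute deadline runs, preempting any other job. The job parameters below (name, release time, execution time, absolute deadline) are hypothetical:

```python
# Preemptive EDF sketch: at each time unit, run the released,
# unfinished job with the earliest absolute deadline.

jobs = [("J1", 0, 3, 7), ("J2", 1, 2, 4), ("J3", 2, 1, 9)]

def edf(jobs, horizon):
    remaining = {name: c for name, _, c, _ in jobs}
    trace = []
    for t in range(horizon):
        ready = [(d, name) for name, r, _, d in jobs
                 if r <= t and remaining[name] > 0]
        if not ready:
            trace.append(None)       # CPU idle this time unit
            continue
        _, name = min(ready)         # earliest deadline first
        remaining[name] -= 1
        trace.append(name)
    return trace

print(edf(jobs, 7))
# -> ['J1', 'J2', 'J2', 'J1', 'J1', 'J3', None]
```

Note the preemption at t=1: J2 arrives with a deadline of 4, earlier than J1's deadline of 7, so J1 is suspended until J2 finishes.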
The Least Slack Time (LST)

The Least Slack Time (LST) scheduling algorithm, also known as Least Laxity First (LLF),
is another approach used in real-time systems to schedule tasks with deadlines. It always
runs the task with the least slack, where slack is the amount of time by which a task's
execution can be delayed without missing its deadline.
Here's how the LST scheduling algorithm works:
1. Task Arrival: When a new task arrives or becomes ready to execute, its deadline and
computation time are noted.
2. Calculate Slack Time: For each task in the ready queue, compute its slack:
slack = deadline − current time − remaining computation time. It represents
how much longer the task's start can be postponed while still meeting its
deadline.
3. Task Selection: Select the task with the least slack time for execution. This means that
tasks with tighter deadlines or less time remaining until their deadlines are given higher
priority.
4. Task Execution: Execute the selected task. If multiple tasks have the same slack time,
additional criteria like priority or arrival time can be used to break ties.
5. Task Completion: After executing a task, update the list of ready tasks if new tasks
have arrived or if the execution time of existing tasks has changed.
6. Repeat: Steps 2 to 5 are repeated as tasks arrive, execute, and complete.

The LST algorithm aims to minimize the likelihood of missing deadlines by prioritizing tasks
that have less time remaining until their deadlines. However, like any scheduling algorithm, it
has its limitations and may not be suitable for all scenarios. Careful analysis of task
characteristics, system requirements, and constraints is necessary to determine whether LST
scheduling is appropriate for a given real-time system.
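The slack formula and selection rule above can be sketched as a per-time-unit simulation. The job parameters below (name, absolute deadline, execution time) are hypothetical, with all jobs released at t=0:

```python
# Least Slack Time sketch: at each time unit the scheduler runs the
# unfinished job with the smallest slack, where
#   slack = deadline - current_time - remaining_execution_time.

jobs = [("J1", 10, 4), ("J2", 6, 2), ("J3", 8, 3)]

def lst(jobs, horizon):
    remaining = {name: c for name, _, c in jobs}
    trace = []
    for t in range(horizon):
        ready = [(d - t - remaining[name], name)
                 for name, d, _ in jobs if remaining[name] > 0]
        if not ready:
            break
        _, name = min(ready)         # least slack first (ties by name)
        remaining[name] -= 1
        trace.append(name)
    return trace

print(lst(jobs, 9))
# -> ['J2', 'J2', 'J3', 'J1', 'J3', 'J1', 'J3', 'J1', 'J1']
```

All three jobs finish before their deadlines here (J2 at t=2, J3 at t=7, J1 at t=9). The frequent switching in the trace illustrates a known drawback of LST: jobs with similar slack can cause heavy preemption ("thrashing") in a real implementation.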

Rate-Monotonic Scheduling (RMS)

Rate-Monotonic Scheduling (RMS) is a well-known real-time scheduling algorithm primarily
used in systems where tasks have periodic or cyclic behavior. It's based on the principle that
tasks with shorter periods (i.e., tasks that repeat more frequently) are assigned higher priorities.
Here's how the Rate-Monotonic Scheduling algorithm works:
1. Task Periods: Each task in the system has a fixed period, representing how often the
task needs to be executed.
2. Task Priorities: Tasks are assigned priorities inversely proportional to their periods.
That is, tasks with shorter periods have higher priorities. Mathematically,
Priority(Task_i) = 1/Period(Task_i).
3. Task Assignment: Tasks are scheduled based on their priorities. When multiple tasks
become ready to execute simultaneously, the task with the highest priority (shortest
period) is executed first.
4. Preemption: If a higher-priority task becomes ready to execute while a lower-priority
task is currently executing, the lower-priority task is preempted, and the higher-priority
task is executed.
5. Repeat: Steps 3 and 4 are repeated continuously to schedule tasks as they become
ready.
Rate-Monotonic Scheduling is optimal among fixed-priority algorithms for scheduling
periodic tasks under certain conditions. On a single processor, the Liu & Layland bound
guarantees that a set of n periodic tasks is schedulable under RMS if the total CPU
utilization U = Σ(Ci/Pi) ≤ n(2^(1/n) − 1); this bound approaches ln 2 ≈ 0.693 as n grows.
One of the key advantages of RMS is its simplicity and efficiency in managing periodic tasks.
However, it has limitations, such as the assumption of static priorities and periodic task arrivals,
which may not always hold true in real-world scenarios. Additionally, RMS may not be suitable
for systems with a mix of periodic and aperiodic tasks or tasks with variable execution times.
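The Liu & Layland schedulability test described above is a one-liner to check. The task set below, given as (execution time, period) pairs, is hypothetical:

```python
# Liu & Layland sufficient schedulability test for RMS on one processor:
# n periodic tasks are schedulable if U = sum(C_i / P_i) <= n*(2**(1/n) - 1).

def rms_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs.
    Returns (total utilization, utilization bound, passes test)."""
    n = len(tasks)
    u = sum(c / p for c, p in tasks)       # total CPU utilization
    bound = n * (2 ** (1 / n) - 1)         # Liu & Layland bound
    return u, bound, u <= bound

tasks = [(1, 4), (1, 5), (2, 10)]          # U = 0.25 + 0.20 + 0.20 = 0.65
u, bound, ok = rms_schedulable(tasks)
print(f"U={u:.2f}, bound={bound:.2f}, schedulable={ok}")
# -> U=0.65, bound=0.78, schedulable=True
```

The test is sufficient but not necessary: a task set that exceeds the bound may still be schedulable, and an exact answer then requires response-time analysis.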
