
Cyclic scheduling

Cyclic scheduling is simple. Each task is allowed to run to completion before it hands over to the next. A task cannot be discontinued as it runs. This is almost like the super loop operation we saw earlier in this chapter.

A diagrammatic example of cyclic scheduling is shown in Figure 18.5. Here the horizontal band represents CPU activity and the numbered blocks the tasks as they execute. Tasks are seen executing in turn, with Task 3 initially the longest and Task 2 the shortest. In the third iteration, however, Task 1 takes longer and the overall loop time is longer. Cyclic scheduling carries the disadvantages of sequential programming in a loop, as outlined above. At least it is simple.
A cyclical schedule is a scheduling method where tasks are repeated in a sequence, without interruption, at regular intervals.

Definition

A cyclical schedule is a scheduling approach where tasks are repeated at regular intervals. It's similar to a super loop operation where each task is completed before the next one starts.
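
A minimal sketch of this idea in C may help. The three task functions are placeholders invented for illustration; each runs to completion before the next is called, and the whole sequence repeats forever.

    #include <stdio.h>

    /* Placeholder task bodies; in a real system each would do its own
       I/O, control, or communication work and then return. */
    static void task1(void) { puts("task 1"); }
    static void task2(void) { puts("task 2"); }
    static void task3(void) { puts("task 3"); }

    int main(void)
    {
        /* Cyclic (super loop) schedule: each task runs to completion,
           in a fixed order, and is never interrupted by the others.
           The loop time is the sum of the individual task times, so
           one slow task stretches the whole cycle. */
        for (;;) {
            task1();
            task2();
            task3();
        }
    }

Because nothing can interrupt a task, a single long-running task delays every other task in the cycle, which is exactly the disadvantage noted above.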

Examples

Cyclical schedules are used in batch industries, manufacturing, and in compilers. For example, in a widget factory, a cyclical schedule might be used to schedule the tasks needed to build a single widget.


Challenges

In real-world production settings, cyclical schedules can
be disrupted by unexpected events. These disruptions
can be difficult to anticipate when generating a schedule.

The word "cyclical" describes things that occur in regular


intervals or have a regularly patterned pattern. For
example, the moon's phases and the orbit of planets
around the sun are cyclical

Priority-based scheduling is a method for programmers to control the order in which processes are run by assigning each process a priority and scheduling the processes based on those priorities:

In priority-based scheduling, the process with the highest priority is scheduled first. The CPU is allocated to the process with the highest priority.

Priorities can be based on a variety of factors, including: deadlines, significance, time limits, memory requirements, and the ratio of average I/O burst to average CPU burst.

Priority-based scheduling is used in operating systems, for example to schedule batch processes. However, a major problem with priority scheduling is that it can leave low-priority processes waiting indefinitely, a condition known as indefinite blocking or starvation.

Priority scheduling involves priority assignment to every process, and processes with higher priorities are carried out first, whereas tasks with equal priorities are carried out on a first-come-first-served (FCFS) or round-robin basis. Priority scheduling can be either of the following:

 Preemptive — This type of scheduling may preempt the central processing unit (CPU) if the priority of the newly arrived process is higher than that of the currently executing process.
 Non-preemptive — This type of scheduling algorithm simply places the new process at the top of the ready queue.
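
The difference between the two policies can be sketched in C as an arrival handler. The process structure, the ready queue, and the convention that a lower priority number means a higher priority are assumptions made only for this illustration.

    #include <stdbool.h>
    #include <stddef.h>

    /* Minimal process descriptor for the sketch.
       Convention: a lower priority number means a higher priority. */
    typedef struct process {
        int id;
        int priority;
        struct process *next;
    } process_t;

    static process_t *ready_head = NULL;  /* ready queue, best priority first */
    static process_t *running    = NULL;  /* process currently holding the CPU */

    /* Insert a process into the ready queue in priority order;
       equal priorities keep arrival (FCFS) order. */
    static void enqueue_ready(process_t *p)
    {
        process_t **link = &ready_head;
        while (*link && (*link)->priority <= p->priority)
            link = &(*link)->next;
        p->next = *link;
        *link = p;
    }

    /* Called whenever a new process arrives. */
    static void on_arrival(process_t *p, bool preemptive)
    {
        if (preemptive && running && p->priority < running->priority) {
            /* Preemptive: the running process loses the CPU and goes
               back to the ready queue; the newcomer is dispatched. */
            enqueue_ready(running);
            running = p;
        } else {
            /* Non-preemptive: the newcomer only joins the ready queue;
               the current process keeps the CPU until it completes. */
            enqueue_ready(p);
        }
    }

    int main(void)
    {
        process_t a = { 1, 3, NULL };
        process_t b = { 2, 1, NULL };
        running = &a;              /* pretend process 1 is running */
        on_arrival(&b, true);      /* preemptive: process 2 takes the CPU */
        return running->id == 2 ? 0 : 1;
    }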

Priority of processes depends on factors such as:
 Time limit
 Memory requirements of the process
 Ratio of average I/O to average CPU burst time
There can be more factors on the basis of which the priority of a process/job is determined. This priority is assigned to the processes by the scheduler. These priorities are represented as simple integers in a fixed range, such as 0 to 7 or perhaps 0 to 4095; the range depends on the type of system. Now that we know that each job is assigned a priority on the basis of which it is scheduled, we can move on to its working and types.

Types of Priority Scheduling Algorithm

There are two types of priority scheduling algorithm in an OS:

Non-Preemptive Scheduling

In this type of scheduling:

 If, during the execution of a process, another process with a higher priority arrives for execution, the currently executing process will not be disturbed.
 The newly arrived high-priority process will be placed next in line for execution, since it has a higher priority than the processes that are waiting for execution.
 All the other processes will remain in the waiting queue to be processed. Once the execution of the current process is done, the high-priority process will be given the CPU for execution.

Preemptive Scheduling

Preemptive scheduling, as opposed to non-preemptive scheduling, will preempt the currently running process (stop it and store its state) if a higher-priority process becomes ready for execution, execute the higher-priority process first, and then resume the previous process.
Characteristics of Priority Scheduling Algorithm:
 It is a scheduling algorithm that schedules the incoming processes on the basis of their priority.
 Operating systems use it for performing batch processes.
 If there exist two jobs/processes in the ready state (ready for execution) that have the same priority, then priority scheduling executes them on a first-come-first-served basis. Every job has a priority number assigned to it that indicates its priority level.
 If the integer value of the priority number is low, it means that the process has a higher priority (low number = high priority).
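
The characteristics above can be pulled together in a short, self-contained C simulation of non-preemptive priority scheduling. The job table, burst times, and priority values are invented for illustration; the selection loop always picks the lowest priority number and breaks ties by arrival order.

    #include <stdio.h>

    /* A job record for the simulation (all values are made up). */
    typedef struct {
        int id;        /* arrival order doubles as the FCFS tie breaker */
        int priority;  /* lower number = higher priority */
        int burst;     /* CPU time required, in arbitrary ticks */
        int done;
    } job_t;

    int main(void)
    {
        job_t jobs[] = {
            { 1, 3, 10, 0 },
            { 2, 1,  4, 0 },
            { 3, 3,  6, 0 },   /* same priority as job 1: runs after it (FCFS) */
            { 4, 2,  8, 0 },
        };
        const int n = sizeof jobs / sizeof jobs[0];
        int time = 0;

        for (int scheduled = 0; scheduled < n; scheduled++) {
            /* Pick the highest-priority (lowest number) pending job;
               ties keep arrival order because we scan in order and
               only switch on a strictly better priority. */
            int pick = -1;
            for (int i = 0; i < n; i++) {
                if (jobs[i].done)
                    continue;
                if (pick < 0 || jobs[i].priority < jobs[pick].priority)
                    pick = i;
            }
            printf("t=%2d: run job %d (priority %d) for %d ticks\n",
                   time, jobs[pick].id, jobs[pick].priority, jobs[pick].burst);
            time += jobs[pick].burst;
            jobs[pick].done = 1;
        }
        return 0;
    }

With this table the jobs run in the order 2, 4, 1, 3: job 3 shares job 1's priority but arrived later, so FCFS breaks the tie.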

What is Priority Scheduling?


Priority scheduling is a method used in computer
operating systems and task management systems to
determine the order in which tasks are executed based on
their priority level. Each task is assigned a priority value,
and the task with the highest priority is executed first. This
ensures that the most important tasks are completed in a
timely manner.

How does Priority Scheduling work?


In priority scheduling, tasks are assigned priority levels
based on their importance or urgency. The task with the
highest priority is selected for execution first, followed by
tasks with lower priority levels. If two tasks have the same
priority level, the scheduling algorithm may use additional
criteria, such as the order in which tasks were received, to
determine the execution order.

What are the benefits of using Priority Scheduling?
One of the main benefits of using priority scheduling is
that it allows important tasks to be completed quickly and
efficiently. By assigning priority levels to tasks,
organizations can ensure that critical tasks are given the
attention they deserve. This can help improve productivity
and ensure that deadlines are met.

Another benefit of priority scheduling is that it can help organizations prioritize tasks based on their impact on business objectives. By assigning priority levels to tasks, organizations can focus on completing tasks that align with their strategic goals and objectives.

What are the limitations of Priority Scheduling?

One of the limitations of priority scheduling is that it can lead to starvation, where low-priority tasks are never executed because higher-priority tasks are constantly being selected for execution. This can result in delays for less critical tasks and impact overall system performance.
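
One widely used mitigation, not covered in the text above but worth noting, is aging: the effective priority of a waiting task is gradually improved so that it is eventually selected. A minimal C sketch, with all numbers invented for illustration:

    #include <stdio.h>

    /* Aging sketch: each tick the waiting job's priority number is
       decreased (lower number = higher priority), so even a
       low-priority job eventually beats a stream of fresh
       high-priority arrivals. */
    typedef struct {
        int id;
        int priority;   /* effective priority; improves while waiting */
    } job_t;

    int main(void)
    {
        job_t low = { 1, 10 };               /* long-waiting low-priority job */

        for (int tick = 0; tick < 12; tick++) {
            int high_priority_arrival = 2;   /* a fresh high-priority job each tick */

            if (low.priority < high_priority_arrival) {
                printf("t=%d: aged job %d finally runs (priority %d)\n",
                       tick, low.id, low.priority);
                return 0;
            }
            printf("t=%d: high-priority job runs; job %d ages from %d to %d\n",
                   tick, low.id, low.priority, low.priority - 1);
            low.priority--;                  /* age the waiting job */
        }
        return 0;
    }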

Multitasking and concurrency are related concepts that can both involve the execution of multiple tasks at once, but they have some key differences:
 Multitasking: Involves rapidly switching between multiple
tasks on a single processor.
 Concurrency: Involves handling multiple tasks or
processes at the same time, even if they aren't executed
simultaneously. Tasks take turns using resources, giving
the appearance that they are happening at the same
time.
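
The "rapid switching" form of multitasking can be sketched with a simple round-robin loop in C; the task names, remaining work, and one-tick time slice are invented for illustration.

    #include <stdio.h>

    /* Round-robin switching on one processor: each task gets a short
       time slice in turn, so all of them appear to make progress
       "at the same time" even though only one runs at any instant. */
    typedef struct {
        const char *name;
        int remaining;        /* work left, in arbitrary ticks */
    } task_t;

    int main(void)
    {
        task_t tasks[] = { { "A", 5 }, { "B", 3 }, { "C", 4 } };
        const int n = sizeof tasks / sizeof tasks[0];
        const int slice = 1;  /* time slice per turn */
        int active = n;

        while (active > 0) {
            for (int i = 0; i < n; i++) {
                if (tasks[i].remaining == 0)
                    continue;
                /* "Run" task i for one slice, then switch to the next. */
                tasks[i].remaining -= slice;
                printf("ran %s, %d ticks left\n", tasks[i].name, tasks[i].remaining);
                if (tasks[i].remaining == 0)
                    active--;
            }
        }
        return 0;
    }

Only one task runs at any instant, yet all three finish with their work interleaved, which is the apparent simultaneity described above.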

Concurrency can improve efficiency and responsiveness, but it can also lead to issues if multiple tasks access shared resources at the same time. These issues are known as concurrency issues, and can include race conditions and deadlocks. If left unaddressed, these issues can lead to unexpected results or system failures.

To avoid these issues, developers can use synchronization mechanisms like locks and semaphores to coordinate concurrent tasks (a minimal sketch of a lock in use appears after the list below).

Here are some other things to consider about multitasking and concurrency in real-time and embedded systems:

True concurrency

Requires parallel processing in separate processors, either a multi-processor system or multiple CPUs.

Process-process interaction

Communication between processes complicates the implementation and analysis of a concurrent system.


Threads

Programmers can prune away nondeterminism by
imposing constraints on execution order and limiting
shared data accesses.
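
As a concrete example of the synchronization mechanisms mentioned above, the following C sketch (using POSIX threads, an assumption since the text names no particular API) protects a shared counter with a lock so two concurrent tasks cannot corrupt it.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared resource and the mutex (lock) that guards it. */
    static long counter = 0;
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&counter_lock);    /* enter critical section */
            counter++;                            /* shared-resource access */
            pthread_mutex_unlock(&counter_lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* With the mutex the result is always exactly 200000. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Removing the lock calls would reintroduce the race condition: both threads could read the same old value of counter and one increment would be lost.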


Multitasking and concurrency are important concepts in real-time and embedded systems, and they affect the performance of these systems:

Multitasking

A method where multiple tasks, or processes, share processing resources, such as the CPU.

Concurrency

The design and organization of threads of execution, resource sharing, scheduling mechanisms, and software allocation in a system.


Real-time operating system (RTOS)

An operating system that allows multiple programs to run
concurrently, with a focus on predictability and
determinism. In an RTOS, repeated tasks are performed
within a tight time boundary.
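
A brief sketch of what this looks like in practice, assuming the FreeRTOS API (the text names no specific RTOS, so this is only illustrative): two periodic tasks are created with different priorities, and the RTOS scheduler keeps the higher-priority one inside its tight time boundary.

    #include "FreeRTOS.h"
    #include "task.h"

    /* Time-critical periodic task: the delay call gives it a repeatable
       period while its higher priority lets it preempt the logger. */
    static void vControlTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* ... time-critical control work here ... */
            vTaskDelay(pdMS_TO_TICKS(10));   /* run every 10 ms */
        }
    }

    /* Non-critical background task. */
    static void vLoggingTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* ... logging / housekeeping work here ... */
            vTaskDelay(pdMS_TO_TICKS(1000)); /* run every second */
        }
    }

    int main(void)
    {
        /* The control task gets the higher priority so it meets its
           deadline even while the logger is busy. */
        xTaskCreate(vControlTask, "ctrl", configMINIMAL_STACK_SIZE, NULL,
                    tskIDLE_PRIORITY + 2, NULL);
        xTaskCreate(vLoggingTask, "log", configMINIMAL_STACK_SIZE, NULL,
                    tskIDLE_PRIORITY + 1, NULL);

        vTaskStartScheduler();   /* hand control to the RTOS scheduler */
        for (;;) { }             /* should never be reached */
    }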
