Operating Systems: Literature Review On Scheduling Algorithms of Operating System


ITE2002

Operating Systems
Digital Assignment – 1

17BIT0294
Chandra Teja. P

LITERATURE REVIEW ON SCHEDULING ALGORITHMS OF OPERATING SYSTEM

1. Title of the Paper: Scheduling of Operating System Services


Summary: While modern applications are often multithreaded, the manual implementation
of concurrent programs is inconvenient and error prone. One way to ease the implementation
of such programs is to specify them as a set of tasks in data- or workflow graphs. These
graphs describe the individual tasks, causal dependencies and data dependencies between
them. Once all dependencies of one task are satisfied, it can be executed. Thereby, the order
of execution is decoupled from the implementation of the program, which enables the
underlying scheduler to exploit parallelism more efficiently.
Approach: In this PhD thesis both a task mapping infrastructure (task scheduler) and a
system for OS service placement (service scheduler) are developed. Due to different
coherency levels (e.g. shared cache, shared memory, different nodes) and heterogeneous
hardware (e.g. CPU, GPU), a hierarchical task scheduler was selected, where the hierarchy
levels correspond to different coherency levels of the computing infrastructure, i.e., cluster,
node, shared memory, shared cache. This task scheduler assigns tasks from an application’s
flow graph to computational resources depending on (a) location of the input data, (b) its
availability, (c) the expected runtime of the task and (d) the location of required operating
system services. To determine whether a task is ready for execution, its data dependencies have to be evaluated. From the application's perspective this is difficult, since the location of data is not visible to the application. Therefore, the task scheduler developed in this thesis is integrated into the operating system, where it can check dependencies more efficiently because data mapping and memory layout information are available.
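The core idea above, that a task becomes runnable once all of its dependencies are satisfied, can be sketched as a simple ready-set dispatcher over a dependency graph. This is an illustrative Python sketch, not code from the thesis, and all names are invented:

```python
from collections import defaultdict, deque

def topological_schedule(tasks, deps):
    """Dispatch tasks in dependency order.

    tasks: iterable of task names.
    deps:  dict mapping a task to the set of tasks it depends on.
    Returns one valid execution order.
    """
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    dependents = defaultdict(set)
    for task, its_deps in remaining.items():
        for dep in its_deps:
            dependents[dep].add(task)
    # A task is ready once all of its dependencies are satisfied.
    ready = deque(t for t, d in remaining.items() if not d)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for succ in dependents[task]:
            remaining[succ].discard(task)
            if not remaining[succ]:
                ready.append(succ)
    if len(order) != len(remaining):
        raise ValueError("cyclic dependency in task graph")
    return order
```

For example, with tasks "a", "b", "c" where "c" depends on "a" and "b", the dispatcher runs "a" and "b" before "c". A real hierarchical scheduler would additionally weigh data location, runtime estimates and service placement when picking from the ready set.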
Challenges: Although many tasks, e.g. in HPC, exhibit similar runtimes over multiple invocations, the runtime may vary between iterations or change entirely based on input data. On the other hand, the task scheduler requires information about the workload of a task to be able to schedule successive tasks accordingly. Therefore, a suitable estimate of a task's runtime has to be found; this could be, e.g., exponential smoothing of previous runtimes. Depending on the input data, tasks may exhibit entirely different behavior, but this effect is expected to be negligible, since it often occurs at loop boundaries, which make up only a small fraction of the invocations.
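The exponential-smoothing estimate mentioned above reduces to a one-line recurrence; the smoothing factor alpha below is an arbitrary illustrative choice, not a value from the thesis:

```python
def smoothed_runtime(previous_estimate, observed_runtime, alpha=0.5):
    """Exponential smoothing: blend the latest observed runtime with history.

    alpha close to 1 tracks recent runtimes aggressively; alpha close to 0
    favors the long-term history.
    """
    return alpha * observed_runtime + (1 - alpha) * previous_estimate
```

With alpha = 0.5, an estimate of 10 ms and a new observation of 20 ms blend to 15 ms; raising alpha makes the estimator react faster to runtime changes between iterations.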
Conclusion: In this thesis, scheduling strategies for both application tasks (depending on data
location and availability) and operating system services will be developed. The cost of
reconfiguration in both task and service placement will be evaluated and will serve as a
metric for the scheduler. Thereby, local and global performance of the system should be
optimized.

Scope of future work: There is significant scope for further work on the mutual dependencies between the two schedulers.

2. Title of the Paper: Scheduling Challenges in Operating System


Summary: Scheduling is a basic function of an operating system, since almost all computer resources are scheduled before use. It involves allocating resources and time to processes or tasks in such a way that certain performance requirements are met. In a multiprogramming system, the computer has multiple processes competing for the CPU at the same time, so a choice has to be made about which process to run next; this choice is made by the scheduler using scheduling algorithms. In this paper, the scheduler is mainly concerned with throughput, turnaround time, response time, fairness, waiting time, etc.
Approach: In this paper, the authors compare scheduling algorithms.
According to the paper, the FCFS scheduling algorithm is very easy to understand, but it is particularly troublesome for time-sharing systems, where each user needs a share of the CPU at regular intervals. The SJF scheduling algorithm aims to serve all types of jobs with optimal scheduling criteria: the process with the shortest execution time is served first, while the process with the longest execution time may not be served at all.
The Round Robin scheduling algorithm was found to be the most widely adopted, though it can suffer severe problems directly related to the quantum size. The main point is that RR is more responsive than shortest job first (SJF), but for a large number of jobs SJF gives better average waiting time; for a small number of jobs FCFS and SJF are close, but SJF is still better.
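The FCFS-versus-SJF claim can be checked numerically. Assuming all jobs arrive at time 0 and run non-preemptively (the burst times below are made up for illustration):

```python
def avg_waiting_time(burst_times):
    """Average waiting time when all jobs arrive at time 0 and run
    non-preemptively in the given order."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed   # this job waits for everything before it
        elapsed += burst
    return waiting / len(burst_times)

jobs = [6, 8, 7, 3]                    # burst times in arrival order
fcfs = avg_waiting_time(jobs)          # FCFS: arrival order -> 10.25
sjf = avg_waiting_time(sorted(jobs))   # SJF: shortest first -> 7.0
```

Sorting by burst time can only lower the average waiting time, which is why SJF beats FCFS here; the price is possible starvation of the longest jobs.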
Challenges:
1. Type of jobs
2. Reliability
3. Location Dependency
Conclusion: In this paper, the authors studied the scheduling algorithms and identified the challenges encountered during the scheduling of jobs. Scheduling is a major area in the operating system. They determined that during scheduling, the operating system does not consider what type of job it is; it simply applies the algorithm on the basis of execution time, number of processes, etc.
Scope of future work: The effective and efficient exploitation of grid computing facilities needs highly advanced and protected resource management systems. Efficient resource sharing and access cannot be achieved without the assurance of high trustworthiness.

3) Title of the Paper: Real-Time Process Scheduling And Operating System


Summary: In this paper, the authors give a brief review of related work in real-time process scheduling, resource management, schedulability analysis, and recently developed real-time operating systems. They cover the scheduling of periodic, sporadic, and aperiodic processes, which can be scheduled by priority-driven or share-driven techniques. Resource synchronization protocols that bound the time and duration of priority inversions are presented. Based on the theoretical results, newly built operating systems and traditional operating systems empowered with new technologies are now very powerful and flexible. Due to the very distinctive nature of real-time and embedded systems, each system has very specific requirements to meet. The primary objective of the paper is to give readers a picture of which tools they could use and which approaches should be avoided. Researchers can also find which topics are well studied and where the gaps are.
Challenges:
1. Priority-Driven Real-Time Scheduling
2. Real-Time Resources Synchronization
3. Scheduling Aperiodic Real-Time Processes
Conclusion: Since the early 1970s, scheduling algorithms driven by priorities have been extensively studied, and then new approaches such as share-driven algorithms were proposed to compensate for the shortcomings of priority-driven scheduling algorithms. Based on the scheduling algorithms over independent processes, resource synchronization protocols were proposed with the objective of properly bounding the duration and number of blockings due to resource contention. The process model was also extended from the basic periodic processes to aperiodic/sporadic processes to handle special time-critical events such as interrupts. With these theoretical results, researchers and engineers also started realizing new, powerful operating systems to offer system designers more possibilities in the design phase. Traditional operating systems were also given new capabilities to extend their applications to the field of embedded real-time computing.
In this paper, the authors survey classic and recent results in real-time scheduling algorithms and operating systems to cover both theoretical and practical issues. The rest of the paper is organized as follows:
Section 2 summarizes related work in real-time process scheduling and resource management.
Section 3 provides a snapshot of recently developed real-time operating systems.
Section 4 is the summary.

Scope of future work: Real-time processes could be scheduled according to their criticality, so that important processes are always scheduled ahead of less-important ones. Alternatively, a real-time scheduling algorithm could aim to maximize system utilization. Priorities could be fixed or dynamic under different scheduling algorithms, depending on their objectives.
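The dynamic-priority case mentioned here is typified by earliest-deadline-first (EDF), where a job's priority is simply the nearness of its deadline. The following is a generic single-shot sketch of the idea, not code from the paper, and it assumes all jobs are released at time 0:

```python
import heapq

def edf_run(jobs):
    """Earliest-deadline-first dispatch for jobs all released at time 0.

    jobs: list of (name, burst_time, absolute_deadline).
    Returns (completion_order, number_of_missed_deadlines).
    """
    # Dynamic priority: the nearest deadline is always served first.
    heap = [(deadline, name, burst) for name, burst, deadline in jobs]
    heapq.heapify(heap)
    time, order, missed = 0, [], 0
    while heap:
        deadline, name, burst = heapq.heappop(heap)
        time += burst
        order.append(name)
        if time > deadline:
            missed += 1
    return order, missed
```

For jobs ("a", 2, 10), ("b", 1, 3), ("c", 3, 6), the dispatch order is b, c, a and no deadline is missed. A fixed-priority scheduler, by contrast, would assign each process a priority once and keep it for every invocation.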

4) Title of the Paper: Scheduling Algorithms and Operating Systems Support for Real-Time Systems
Summary: This paper summarizes the state of the real-time field in the areas of scheduling and operating system kernels. Given the vast amount of work that has been done by both the operations research and computer science communities in the scheduling area, the authors discuss four paradigms underlying the scheduling approaches and present several exemplars of each. The four paradigms are: static table-driven scheduling, static priority preemptive scheduling, dynamic planning-based scheduling, and dynamic best-effort scheduling. In the operating system context, they argue that most of the proprietary commercial kernels, as well as real-time extensions to time-sharing operating system kernels, do not fit the needs of predictable real-time systems. They discuss several research kernels that are being built to explicitly meet the needs of real-time applications.
Challenges: The variety of metrics that have been suggested for real-time systems is
indicative of the different types of real-time systems that exist in the real world as well as the
types of requirements imposed on them. This sometimes makes it hard to compare different
scheduling algorithms. Another difficulty arises from the fact that different types of task
characteristics occur in practice. Tasks can be associated with computation times, resource
requirements, importance levels (sometimes also called priorities or criticalness), precedence
relationships, communication requirements, and of course, timing constraints. If a task is
periodic, its period becomes important; if it is aperiodic, its deadline becomes important. A
periodic task may have a deadline by which it must be completed. This deadline may or may
not be equal to the period. Both periodic and aperiodic tasks may have start time constraints.
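For the periodic case with deadline equal to period, the classic Liu and Layland utilization bound gives a quick sufficient (but not necessary) schedulability check under static priority preemptive (rate-monotonic) scheduling; the sketch below is a standard textbook test, not code from the paper:

```python
def rm_utilization_test(tasks):
    """Liu & Layland sufficient schedulability test for rate-monotonic
    scheduling of periodic tasks whose deadlines equal their periods.

    tasks: list of (computation_time, period) pairs.
    Returns True if the set is guaranteed schedulable under RM.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

A two-task set with utilization 0.375 passes the two-task bound of about 0.828, while a set above the bound is not guaranteed by this test (it may still be schedulable, since the test is only sufficient).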
Conclusion: This paper presents a categorized summary of work in the areas of scheduling
and operating systems for real-time applications. In particular, four scheduling paradigms
were identified: static table-driven scheduling, static priority preemptive scheduling, dynamic
planning-based scheduling, and dynamic best effort scheduling. Real-time operating systems
were categorized into three classes: small proprietary kernels, real-time extensions to commercial operating systems, and research kernels. Rather than being exhaustive, the authors provided specific examples from each of the categories. Exciting developments and serious limitations of the current work, both in scheduling and operating systems, were also noted.
Important interactions between scheduling algorithm development and operating systems
exist. For example, whereas scheduling is an integral part of any real-time operating system,
barring a few exceptions, most scheduling work has ignored the overheads involved in
scheduling. As we saw, for predictability, it is essential to account for all the overheads
involved.
Scope of future work: Clearly, a real-time operating system must be able to perform
integrated CPU scheduling and resource allocation so that collections of cooperating tasks
can obtain the resources they need, at the right time, in order to meet timing constraints. In
addition to proper scheduling algorithms, predictability requires bounded operating system
primitives. Using the current operating system paradigm of allowing arbitrary waits for
resources or events, or treating a task as a random process will not be feasible in the future to
meet the more complicated set of requirements. It is also important to avoid having to rewrite
the operating system for each application area.

6) Title of the Paper: Integrated Processor Scheduling for Multimedia


Summary: The authors of this research paper created a new processor scheduler that provides integrated support and effective overload management for all classes of computational activities, whether real-time or conventional, such as those found in multimedia applications. When used to schedule real-time applications, their unified scheduler has the desirable behavior of a typical real-time scheduler: it delivers optimal performance by satisfying the specified deadlines whenever possible. When used to schedule conventional applications, it has the desirable behavior of a conventional scheduler: it provides good system responsiveness for interactive activities with steady forward progress for batch activities. More importantly, the unified scheduler not only handles each type of activity effectively, it also handles the combination of both types of tasks seamlessly, without requiring any user parameters.
Challenges: The main challenge was that, to allow the coexistence of continuous media, interactive, and batch activities, existing schedulers rely on artificial rate or deadline parameters to force-fit interactive and batch activities into an unsuitable real-time model.
Conclusion: The advent of multimedia ushers forth a growing class of applications that must
manipulate digital audio and video within well-defined timeliness requirements. Existing
processor schedulers are inadequate in supporting these requirements. They fail to allow the
integration of these continuous media computations with conventional interactive and batch
activities. To address this, the authors created a new scheduler that provides integrated processor scheduling for all classes of computational activities. Their solution achieves optimal performance when all timeliness requirements can be satisfied, and provides graceful degradation when the system is overloaded. Though unique in the degree to which it allows users control over the dynamic sharing of processing resources, the scheduler does not impose any draconian demands on users to provide information they do not have or do not choose to specify.
Scope of future work: To support both real-time and conventional tasks, the key problem
that must be addressed is how to allocate processing resources in overload.

7) Title of the Paper: A CPU Scheduling Algorithm for Continuous Media Applications
Summary: In this research paper, the authors provide an overview of a CPU management algorithm called RAP (Rate-based Adjustable Priority Scheduling) that provides predictable service and dynamic QOS control in the presence of varying compute times, arrival and departure of processes, and CPU overloads. A significant feature of RAP is an application-level QOS manager that implements policies for graceful adaptation in the face of CPU overload.
Challenges: The challenges lie in providing dynamic QOS control and predictable service in the presence of varying compute times, arrival and departure of processes, and CPU overloads.
Conclusion: They have investigated operating system (OS) mechanisms and policies for
managing end-system resources (such as CPU, network interface, memory, and bus
bandwidth) so that an OS can provide predictable service to multimedia (MM) applications.
In this paper, they provided an overview of a CPU management algorithm called RAP (Rate-
based Adjustable Priority Scheduling). Overall, their design goal is similar to the objectives of the dynamic QOS control schemes proposed earlier in other research papers.
Scope of future work: In this research paper, an application specifies a desired average rate of execution (e.g., 20 times a second) and an averaging interval over which the rate of execution is to be measured. I hope future research can offer a better approach to this rate specification in RAP.
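The rate-based idea can be illustrated with a deficit measure: a scheduler favors the process that has fallen furthest behind its requested rate over the averaging interval. This is a sketch of the general rate-based concept only, not the actual RAP algorithm from the paper:

```python
def rate_deficit(target_rate_hz, executions_so_far, elapsed_seconds):
    """How far a process has fallen behind its requested execution rate.

    A rate-based scheduler can pick the process with the largest deficit;
    a process that is on or ahead of schedule has a deficit of zero or less.
    """
    expected = target_rate_hz * elapsed_seconds
    return expected - executions_so_far
```

A process requesting 20 executions per second that has run only 15 times after one second has a deficit of 5 and would be favored over a process that is on schedule.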

8) Title of the Paper: Scheduling Techniques for Operating Systems for Medical and
IoT Devices: A Review
Summary: Software and hardware synthesis are the major subtasks in the implementation of hardware/software systems. The increasing trend is to build SoCs/NoCs/embedded systems for Implantable Medical Devices (IMD) and Internet of Things (IoT) devices, which include multiple microprocessors and signal processors, allowing the design of complex hardware and software systems that remain flexible with respect to the delivered performance and executed application. An important technique, which affects the macroscopic system implementation characteristics, is the scheduling of hardware operations, program instructions and software processes. This paper presents a survey of the various strategies in process scheduling. Process scheduling has to take real-time constraints into account. Processes are characterized by their timing constraints, periodicity, precedence and data dependency, preemptivity, priority, etc. The effect of these characteristics on scheduling decisions is described in this paper.

Challenges: Process scheduling is the problem of determining when processes execute, and it includes handling synchronization and mutual exclusion problems. Algorithms for process
scheduling are important constituents of operating systems and runtime schedulers. The
model of the scheduling problem is more general. Processes have a coarser granularity and
their overall execution time may not be known. Processes may maintain a separate context
through local storage and associated control information. Scheduling objectives may also
vary. In a multitasking operating system, scheduling primarily addresses increasing processor
utilization and reducing response time.
Conclusion: Different goals and algorithms characterize process scheduling in real-time
operating system. Schedules may or may not exist that satisfy the given timing constraints. In
general, the primary goal is to schedule the tasks such that all deadlines are met: in case of
success (failure) a secondary goal is maximizing earliness (minimizing tardiness) of task
completion. An important issue is predictability of the scheduler, i.e., the level of confidence
that the scheduler meets the constraints. In this paper, various scheduling schemes and their schedulability tests are given. Recent work in process scheduling for multiprocessor and distributed systems is also covered.
Scope of future work: A scheduling scheme for multiprocessor systems has to provide solutions for the problems that arise in multiprocessor environments: task assignment to a processor, synchronization protocols, load balancing, etc. It also has to take into account memory and resource utilization, deadlock avoidance, precedence constraints, and communication delay. Because of these conflicting requirements, developing a scheduling scheme for multiprocessor systems is difficult, so there is a need for further development of scheduling algorithms for such systems.

9) Title of the Paper: Aperiodic Servers in a Deadline Scheduling Environment

Summary: The authors considered a scheduling problem in which a single processor executes a set of periodic and aperiodic tasks. The tasks are independent, except for mutual exclusion constraints over certain sections. A task may have a hard deadline, a soft deadline, both forms of deadline, or no deadline at all. A hard deadline is one that must always be satisfied, whereas a soft deadline is one that only represents the desired average response time. If a task has both hard and soft deadlines, the soft deadline is shorter than the hard deadline. Their primary focus is on obtaining good average response time for tasks with soft deadlines, while still being able to guarantee that all hard deadlines will be satisfied.
Challenges: The challenge addressed in this paper is to schedule soft-deadline tasks in a way that is easy to compute, does not hurt the schedulability of hard-deadline tasks, and provides good average response time for soft-deadline aperiodic tasks.

Conclusion: A real-time system may have tasks with soft deadlines as well as hard deadlines. While earliest-deadline-first scheduling is effective for hard-deadline tasks, applying it to soft-deadline tasks may waste schedulable processor capacity or sacrifice average response time. Better average response time may be obtained, while still guaranteeing hard deadlines, with an aperiodic server. Three scheduling algorithms for aperiodic servers are described, and schedulability tests are derived for them. A simulation provides performance data for these three algorithms on random aperiodic tasks. The performances of the deadline aperiodic servers are compared with those of several alternatives, including background service, a deadline polling server, and rate-monotonic servers, and with estimates based on the M/M/1 queueing model. This adds to the evidence in support of deadline scheduling versus fixed-priority scheduling.
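The capacity argument for deadline scheduling over fixed priorities can be made concrete with utilization tests: EDF schedules any independent periodic task set with total utilization up to 1, while the fixed-priority (rate-monotonic) guarantee bound is lower. The task set below is invented for illustration:

```python
def edf_utilization_test(tasks):
    """EDF schedules any independent periodic task set with U <= 1
    (deadlines equal to periods)."""
    return sum(c / t for c, t in tasks) <= 1.0

def rm_bound(n):
    """Liu & Layland guarantee bound for n tasks under fixed priorities."""
    return n * (2 ** (1 / n) - 1)

tasks = [(2, 5), (3, 6)]   # (computation, period): U = 0.4 + 0.5 = 0.9
# EDF accepts this set (0.9 <= 1), but 0.9 exceeds the two-task
# rate-monotonic guarantee bound of about 0.828.
```

This gap in guaranteed capacity is one reason a deadline-based server can squeeze better soft-deadline response time out of the same processor.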
Scope of future work: This research paper discusses criteria that are beyond my current understanding, so I could not determine what should be done in future research.

10) Title of the Paper: Operating System Concepts for Reconfigurable Computing:
Review and Survey
Summary: In this article, the authors present a summary of ideas for integrating reconfigurable computing into an operating system. Furthermore, several implemented systems are presented, and based on these systems a summary and discussion of the implemented concepts are given. Several common patterns are identified: hardware applications usually use a PThread-based abstraction model; the hardware applications themselves are represented as delegate (software) threads inside the operating system; preemptive multitasking is used by the newest systems; partitioning is usually implemented on top of an island-style architecture; and typical benchmarks include image and video processing, data encryption and decryption, and data compression and decompression.
Challenges: One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system.
Conclusion: This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The article also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of the article is to identify and summarize common patterns among those systems that can be seen as de facto standards. Furthermore, open problems not covered by these already available systems are identified.
Scope of future work: However, there is still room for improvement, especially in exploiting the possibilities of dynamic and partial reconfiguration. Furthermore, some concepts, like security, were rarely discussed or investigated in the past but should gain more interest in the future.
