Comp2240 Os W04

The document outlines the weekly plan for an Operating Systems course, focusing on real-time system scheduling and multiprocessor scheduling in Week 4. It discusses various scheduling policies, their characteristics, and the design issues related to multiprocessor scheduling, including synchronization granularity and process assignment. Key concepts include different types of multiprocessors, scheduling algorithms, and the importance of thread scheduling in optimizing performance.

Uploaded by Scott Davies

OPERATING SYSTEMS

Week 4

Much of the material on these slides comes from the recommended textbook by William Stallings
Weekly program 2
 Week 1 – Operating System Overview
 Week 2 – Processes and Threads
 Week 3 – Scheduling
 Week 4 – Real-time System Scheduling and Multiprocessor Scheduling

 Week 5 – Concurrency: Mutual Exclusion and Synchronization


 Week 6 – Concurrency: Deadlock and Starvation
 Week 7 – Memory Management
 Week 8 – Disk and I/O Scheduling
 Week 9 – File Management
 Week 10 – Real-world Operating Systems: Embedded and Security
 Week 11 – Real-world Operating Systems: Distributed Operating Systems
 Week 12 – Revision of the course
 Week 13 – Extra revision (if needed)

16/08/2017

COMP2240 - Semester 2 - 2017 | www.newcastle.edu.au


Key Concepts From Last Week 3

• Two main parameters: selection function and decision mode


• A variety of algorithms have been developed:
– FCFS
– RR
– SPN
– SRT
– HRRN
– FB
• In Fair-Share scheduling, scheduling decisions are made on the basis of
process sets rather than individual processes.



Characteristics of Various Scheduling Policies 4

(w = time spent waiting so far; s = total service time required; e = time spent in execution so far)

FCFS
– Selection function: max[w]
– Decision mode: non-preemptive
– Throughput: not emphasized
– Response time: may be high, especially if there is a large variance in process execution times
– Overhead: minimum
– Effect on processes: penalizes short processes; penalizes I/O-bound processes
– Starvation: no

Round robin
– Selection function: constant
– Decision mode: preemptive (at time quantum)
– Throughput: may be low if quantum is too small
– Response time: provides good response time for short processes
– Overhead: minimum
– Effect on processes: fair treatment
– Starvation: no

SPN
– Selection function: min[s]
– Decision mode: non-preemptive
– Throughput: high
– Response time: provides good response time for short processes
– Overhead: can be high
– Effect on processes: penalizes long processes
– Starvation: possible

SRT
– Selection function: min[s – e]
– Decision mode: preemptive (at arrival)
– Throughput: high
– Response time: provides good response time
– Overhead: can be high
– Effect on processes: penalizes long processes
– Starvation: possible

HRRN
– Selection function: max[(w + s)/s]
– Decision mode: non-preemptive
– Throughput: high
– Response time: provides good response time
– Overhead: can be high
– Effect on processes: good balance
– Starvation: no

Feedback
– Selection function: (see text)
– Decision mode: preemptive (at time quantum)
– Throughput: not emphasized
– Response time: not emphasized
– Overhead: can be high
– Effect on processes: may favor I/O-bound processes
– Starvation: possible
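The HRRN selection function can be computed directly; the following is a minimal illustrative sketch (not from the slides), using w for time spent waiting and s for expected service time:

```python
def response_ratio(w, s):
    """HRRN selection value: (waiting time + service time) / service time.
    The scheduler picks the ready process with the largest ratio."""
    return (w + s) / s

# A short job that has waited a while overtakes a freshly arrived long job:
short_waiting = response_ratio(w=6, s=2)   # ratio 4.0
long_fresh = response_ratio(w=0, s=8)      # ratio 1.0
```

Because the ratio grows with waiting time, even long processes eventually win the comparison, which is why HRRN avoids starvation.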



[Figure 9.5 A Comparison of Scheduling Policies: timing (Gantt) charts over the interval 0–20 for processes A–E under First-Come-First-Served (FCFS), Round-Robin (RR, q = 1), Round-Robin (RR, q = 4), Shortest Process Next (SPN), Shortest Remaining Time (SRT), Highest Response Ratio Next (HRRN), Feedback (q = 1), and Feedback (q = 2^i).]
A Comparison of Scheduling Policies 6

Process                   A     B     C     D     E
Arrival Time              0     2     4     6     8
Service Time (Ts)         3     6     4     5     2     Mean

FCFS
Finish Time               3     9    13    18    20
Turnaround Time (Tr)      3     7     9    12    12     8.60
Tr/Ts                  1.00  1.17  2.25  2.40  6.00     2.56

RR q = 1
Finish Time               4    18    17    20    15
Turnaround Time (Tr)      4    16    13    14     7    10.80
Tr/Ts                  1.33  2.67  3.25  2.80  3.50     2.71

RR q = 4
Finish Time               3    17    11    20    19
Turnaround Time (Tr)      3    15     7    14    11    10.00
Tr/Ts                  1.00  2.50  1.75  2.80  5.50     2.71

SPN
Finish Time               3     9    15    20    11
Turnaround Time (Tr)      3     7    11    14     3     7.60
Tr/Ts                  1.00  1.17  2.75  2.80  1.50     1.84

SRT
Finish Time               3    15     8    20    10
Turnaround Time (Tr)      3    13     4    14     2     7.20
Tr/Ts                  1.00  2.17  1.00  2.80  1.00     1.59

HRRN
Finish Time               3     9    13    20    15
Turnaround Time (Tr)      3     7     9    14     7     8.00
Tr/Ts                  1.00  1.17  2.25  2.80  3.50     2.14

FB q = 1
Finish Time               4    20    16    19    11
Turnaround Time (Tr)      4    18    12    13     3    10.00
Tr/Ts                  1.33  3.00  3.00  2.60  1.50     2.29

FB q = 2^i
Finish Time               4    17    18    20    14
Turnaround Time (Tr)      4    15    14    14     6    10.60
Tr/Ts                  1.33  2.50  3.50  2.80  3.00     2.63
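The FCFS row of the comparison above can be reproduced with a short single-processor simulation; a minimal sketch (illustrative, not part of the lecture material):

```python
def fcfs(arrival, service):
    """Simulate First-Come-First-Served on one processor.
    Processes are given in arrival order; each runs to completion.
    Returns a list of (finish_time, turnaround_time) per process."""
    t, results = 0, []
    for a, s in zip(arrival, service):
        t = max(t, a) + s          # wait for the CPU if busy, then run
        results.append((t, t - a))  # turnaround = finish - arrival
    return results

arrival = [0, 2, 4, 6, 8]   # processes A..E from the table
service = [3, 6, 4, 5, 2]
res = fcfs(arrival, service)
finish = [f for f, _ in res]        # [3, 9, 13, 18, 20]
tr = [r for _, r in res]            # [3, 7, 9, 12, 12]
mean_tr = sum(tr) / len(tr)         # 8.6, matching the table
```

The same loop structure, with a ready queue and a quantum, extends to the RR and feedback rows.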
Week 04 Lecture Outline 7


Real-time System Scheduling and Multiprocessor Scheduling
 Synchronization Granularity
 Design Issues in Multiprocessor Scheduling
 Process Scheduling and Thread Scheduling
 Approaches to Multiprocessor Scheduling
 Load Sharing
 Gang Scheduling
 Dedicated Processor Assignment
 Dynamic Scheduling
 Multicore thread scheduling
 Real Time Systems: Hard RT vs Soft RT
 Characteristics and features of RT systems
 Real time scheduling approaches
 Deadline Scheduling
 RT Scheduling algorithms:
 EDFS
 RMS
(Videos to watch before lecture)



Multiprocessor Scheduling 8

• There are many kinds of multiprocessors:


– Loosely coupled multiprocessor: consists of a collection of
relatively autonomous systems, each with its own main memory
and I/O channels.
• Examples: clusters, networks of workstations.

– Functionally specialised processors: these are slave processors


which provide services for a general purpose processor.
• Examples: I/O processor, PIPADS image processor linked to
i486 master, array processor

– Tightly coupled multiprocessing: a set of processors which share


an operating system, and often share a large amount of their
memory. Examples: Vax dual processor, Sun10/54, Intel Hypercube,
Maspar, Connection machines.



Multiprocessor Scheduling 9

• We are concerned here with tightly coupled multiprocessing. There are


many widely differing architectures and theoretical models for such
machines. In general, they can offer:
– Reliability: through redundancy, no total loss of service - just
degraded performance

– Programming convenience: through scheduling different threads on


different processors.

– Performance: either through parallel algorithms, or through


multiprogramming support.



Synchronization Granularity 10

• Parallelism can be categorised on granularity (frequency of synchronisation):


– Independent parallelism:
• different jobs run independently on different processors.
• no explicit synchronization among processes
• Typical use: time-sharing systems, where more processors lower the average
user response time
• This is similar to running a network of workstations, but more cost-effective



Synchronization Granularity 11

• Coarse and very coarse grained parallelism: processes synchronise


at a very gross level
• Can be easily handled as a set of concurrent processes running
on a multiprogrammed uniprocessor
• Can be supported on a multiprocessor with little or no change to
user software
• For instance, a process may spawn several children that run on
separate processors, with the results accumulated in the parent
process.
• Synchronisation interval is 200 – 1,000,000 instructions.



Synchronization Granularity 12

• Medium grained parallelism: a single application can be implemented


as a number of threads.
– This parallelism may be specified by the programmer.

– There needs to be a high degree of coordination and interaction


among the threads of an application, leading to a medium-grain
level of synchronization

– Threads interact very frequently and scheduling of one thread


affects performance of whole application.

– Synchronisation interval is 20 - 200 instructions.



Synchronization Granularity 13

• Fine grained parallelism:


– Represents a much more complex use of parallelism than is found
in the use of threads
– The programmer must use special instructions and write parallel
programs.
– Tends to be very specialised and fragmented with many different
approaches
– Synchronisation interval is less than 20 instructions, possibly at the
single instruction level.



Design Issues 14

Scheduling on a multiprocessor involves three interrelated issues:
– assignment of processes to processors
– use of multiprogramming on individual processors
– actual dispatching of a process

The approach taken will depend on the degree of granularity of applications
and the number of processors available


Design Issues 15

• Assignment of processes to CPUs for uniform multiprocessor


– Static: a process is assigned to a processor for the total life of the
process. A short-term queue is maintained for each processor
• a simple approach, with very little scheduling overhead.
• the main disadvantage is that one processor may be idle while
another has a long queue.

– Dynamic: a process may change processors during its lifetime. A


single global queue is maintained - jobs scheduled to any available
processor.
• more efficient than static assignment
• with shared memory, context information is available to all
processors, so the cost of scheduling is independent of processor
identity



Design Issues 16

• Means of assigning processes to processors


– Master-slave architecture: the OS runs on a particular processor
and schedules user tasks on other processors. Resource control is
simple.
• very simple and requires little enhancement to a uniprocessor
multiprogramming operating system.
• the master can be a performance bottleneck, and failure of the
master is disastrous.

– Peer architecture: the OS can run on any processor, and effectively


each processor schedules itself.
• complicates the OS
• conflict resolution (two processors want the same job or
resource) becomes complex.



Design Issues 17

• Multiprogramming on individual processors


– for coarse grained parallelism, multiprogramming is necessary for each
processor, in the same way as for a uniprocessor - to get higher usage and better
performance.

– however if there are many processors (medium- or fine-grained parallelism) then


it may not be necessary to keep all processors busy all the time.
– Better throughput may be achieved if some processors are sometimes idle. WHY?
– Thus multiprogramming on individual processors may not be necessary. We're
looking for best average performance of applications.

• Process Dispatching
– the complex selection procedure involved in uniprocessors may not be necessary
and could even be detrimental; a simple FIFO strategy may be best - less
overhead.

– the issues can be quite different for scheduling threads.



Process Scheduling 18

• Generally processes are not dedicated to processors.

• Processes can be scheduled from a single queue to multiple


processors. Each processor chooses its next process from the queue.

• Or there is a multiple queue priority scheme.



Process Scheduling 19

• Experience has shown that for processes whose tasks are relatively
independent, a simple FCFS scheduling policy is best.
– If the job mix is highly variable, then Round Robin is better than
FCFS, both on a single processor and a dual processor.
– However, the improvement of RR over FCFS is very small for the
dual processor, and becomes even smaller for more than two
processors.



Thread Scheduling 20

• Threads are different from processes in ways that are important for
scheduling:
– the overhead of thread switching is smaller than for process switching
– threads share resources (including memory) and there is some principle
of locality.

• Threads are important for exploiting medium grained parallelism. Thread


scheduling is important for exploiting true parallelism as much as possible.
– Threads simultaneously running on separate processors give big
performance gains.
– Thread scheduling models significantly impact performance.

• Experience with thread scheduling is increasing; research is still active and


ideas are broad.



Approaches to Thread Scheduling 21

Four approaches for multiprocessor thread scheduling and processor
assignment are:

– Load Sharing / Self Scheduling: processes are not assigned to a
particular processor
– Gang Scheduling: a set of related threads is scheduled to run on a set
of processors at the same time, on a one-to-one basis
– Dedicated Processor Assignment: provides implicit scheduling defined by
the assignment of threads to processors
– Dynamic Scheduling: the number of threads in a process can be altered
during the course of execution


Load Sharing/Self Scheduling 22

• For Self Scheduling, processes are not dedicated to any


particular processor. A global queue of ready threads is maintained,
and an idle processor selects a thread from the queue.



Load Sharing/Self Scheduling 23

• The advantages of self scheduling are:


– even load distribution: no processor is idle while there is work to do.
– no centralised scheduler is required; the idle processor itself selects
the next thread to run.
– the global queue can be organised in an appropriate way, for instance
on priority, execution history, or estimated need.
• Self scheduling is quite common.



Load Sharing/Self Scheduling 24

• There are three types of self scheduling:


– FCFS: the ready queue of threads is maintained as a FIFO queue;
an idle processor always chooses the process at the head of the
queue. Each thread is run non-preemptively (that is, it is run until it
either completes or blocks itself by an I/O request)
– Smallest number of threads first: highest priority is given to
processes with the smallest number of unscheduled threads.
– Pre-emptive smallest number of threads first: highest priority is
given to processes with the smallest number of unscheduled
threads. If a job arrives with a smaller number of threads than one
which is running, then the running job will be pre-empted.

• Some research suggests that FCFS is better than the other two
(more complicated) policies.
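The FCFS variant of self scheduling can be sketched as a shared ready queue that idle processors pull from; a minimal illustrative sketch (names and data shapes are hypothetical, not from the slides):

```python
from collections import deque
import threading

class GlobalQueue:
    """Load sharing / self scheduling: one global FIFO of ready threads.
    The lock enforces the mutual exclusion on the central queue that,
    as noted on the next slide, can become a bottleneck."""
    def __init__(self):
        self._q = deque()
        self._lock = threading.Lock()

    def add(self, tid):
        """A newly ready thread joins the tail of the global queue."""
        with self._lock:
            self._q.append(tid)

    def take(self):
        """Called by an idle processor: pick the thread at the head (FCFS)."""
        with self._lock:
            return self._q.popleft() if self._q else None

q = GlobalQueue()
for tid in ("T1", "T2", "T3"):
    q.add(tid)
picked = [q.take(), q.take(), q.take(), q.take()]  # T1, T2, T3, then None
```

The two priority variants would differ only in `take()`: instead of the queue head, they would select the thread whose owning process has the fewest unscheduled threads.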


Load Sharing/Self Scheduling 25

• Disadvantages of self scheduling are:


– Mutual exclusion on the central queue must be enforced. This can
be a bottleneck, especially with a large number of processors (tens
to hundreds).
– Since pre-empted threads are unlikely to resume execution on the
same processor, local caching becomes less effective.
– It is unlikely that all threads of a program will gain access to
processors at the same time. This limits thread communication and
is serious if high coordination is required.



Gang Scheduling 26

• In Gang Scheduling (group scheduling, co-scheduling), a set of


related threads are scheduled on a set of processors at the same time.
Some rationale behind gang scheduling is:
– if closely related threads execute in parallel, then synchronisation
blocking will be reduced, and less process switching will occur.
– scheduling overhead is reduced by making a single decision for a
group of threads.
– Useful for medium-grain or fine-grained parallel applications
– Process switching is minimized

• The set or gang of threads can be defined by the programmer, or it can


be automatically defined (for example, all threads of a particular
process can be a gang).



Processor Allocation in Gang Scheduling 27

• Uniform scheduling: with M applications, each gets 1/M of the available
time on the N processors, using time slicing.
• Weighted scheduling: the share of the N processors' time an application
gets is weighted by the number of threads in that application
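The difference between the two allocations shows up as idle processor time; a small illustrative calculation (the scenario of one 4-thread and one 1-thread application on 4 processors is an assumed example, not from the slides):

```python
def idle_fraction(threads, shares, n_procs):
    """Fraction of total processor time left idle when each application i
    holds all n_procs processors for shares[i] of the time but can only
    use threads[i] of them (gang scheduling arithmetic sketch)."""
    return sum(share * (n_procs - t) / n_procs
               for t, share in zip(threads, shares))

threads = [4, 1]             # app A has 4 threads, app B has 1
uniform = [0.5, 0.5]         # 1/M of the time each
weighted = [4/5, 1/5]        # time proportional to thread count
u = idle_fraction(threads, uniform, 4)    # 0.375: 37.5% of capacity idle
w = idle_fraction(threads, weighted, 4)   # 0.15:  15% of capacity idle
```

Weighting by thread count shrinks the window in which the single-thread application leaves three of the four processors idle.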



Dedicated Processor Assignment 28

• In Dedicated Processor Assignment, a group of processors is assigned


to a job for the complete duration of the job.

• An extreme form of gang scheduling

• Each thread is assigned to a processor and this processor remains


dedicated to that thread until the job completes.

• This approach results in idle processors (there is no multiprogramming).

• Defense of this strategy:


• in a highly parallel system, with tens or hundreds of processors,
processor utilization is no longer so important as a metric for
effectiveness or performance
• the total avoidance of process switching during the lifetime of a
program should result in a substantial speedup of that program



Dedicated Processor Assignment 29

[Figure omitted: application performance as the number of threads varies; the number of processors in the system is 16.]




Dynamic Scheduling 30

• For some applications it is possible to provide language and system tools that
permit the number of threads in the process to be altered dynamically
– this would allow the operating system to adjust the load to improve utilization

• Both the operating system and the application are involved in making scheduling
decisions

• The scheduling responsibility of the operating system is primarily limited to


processor allocation
– For example, the OS can partition the processors among the jobs, and each
job partitions its threads among its processors.

• Not suitable for all applications.



Dynamic Scheduling 31

• The only responsibility of the OS is to allocate processors to jobs.

• When a job requests processors (either when the job arrives or when it
creates new threads), the OS acts as follows:
– if there are idle processors, then the request is satisfied.
– otherwise, if the job is new then it is allocated a single processor by
pre-empting a job which has more than one processor.
– if the request cannot be satisfied, the request waits in a queue for a
processor to become available, or until the job rescinds the request
(because it's not needed any more).
– when processors are released, scan queue of unsatisfied requests.
Assign one processor per new process, then allocate to existing
requesting processes on a FCFS basis.

• Can be superior to gang and dedicated processor scheduling, but


overheads can negate this.
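The allocation rules above can be sketched as a small decision function; this is an illustrative sketch with hypothetical data shapes (job names, dict layout), not an actual OS interface:

```python
def allocate(idle, request, jobs):
    """Dynamic-scheduling processor allocation sketch.
    idle: number of idle processors; request: {'job': name, 'n': count};
    jobs: mapping of running job -> processors currently held.
    Returns (processors granted, note)."""
    if idle >= request["n"]:
        return request["n"], "satisfied from idle processors"
    if request["job"] not in jobs:                       # a new job
        donor = next((j for j, p in jobs.items() if p > 1), None)
        if donor is not None:
            jobs[donor] -= 1                             # preempt one processor
            return 1, f"new job given one processor preempted from {donor}"
    return 0, "request queued until a processor is released"

jobs = {"J1": 3, "J2": 1}
granted, note = allocate(idle=0, request={"job": "J3", "n": 2}, jobs=jobs)
# J3 is new and no processors are idle, so J1 donates one processor.
```

An existing job asking for more processors when none are idle would simply wait in the queue, matching the FCFS handling of unsatisfied requests.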


Memory Management VS Processor Management 32

• Processor scheduling on a multiprocessor is rather like memory


management on a uniprocessor. Similar analyses can apply.

• For instance
– the question of how many processors to allocate to a job is
analogous to the question of how many pages to allocate to a job.

– processor thrashing occurs when the scheduling of threads whose


services are required induces the de-scheduling of threads whose
services will soon be needed.

– processor fragmentation occurs when a number of processors are left


over when others are allocated, and the leftover processors are
insufficient to satisfy requests from waiting jobs.

• The connection between processor and memory availability is deep and


has a theoretical basis.



Multicore Thread Scheduling 33

• Contemporary OSs, such as Windows and Linux, essentially treat


scheduling in multicore systems in the same fashion as a
multiprocessor system.
– Focus on keeping processors busy by load balancing
– Unlikely to produce the desired performance benefits of the
multicore architecture

• As the number of cores per chip increases,


– a need to minimize access to off chip memory takes precedence
over a desire to maximize processor utilization.
– Means: use of caches to take advantage of locality.
– Complicated by some of the cache architectures used on multicore
chips
• Specifically when a cache is shared by some but not all of the
cores.


Cache Sharing 34

[Figure 1.20 Intel Core i7-990X Block Diagram: six cores, each with a 32 kB L1-I and a 32 kB L1-D cache and a private 256 kB L2 cache; all cores share a 12 MB L3 cache, DDR3 memory controllers (3 × 8B @ 1.33 GT/s), and the QuickPath Interconnect (4 × 20b @ 6.4 GT/s).]

[Figure 10.3 AMD Bulldozer Architecture: eight cores (Core 0 – Core 7), each with a 16 kB L1D cache; each pair of cores shares a 2 MB L2 cache; all cores share an 8 MB L3 cache, DDR3 memory controllers (2 × 8B @ 1.86 GT/s), and HyperTransport 3.1 (8 × 2B @ 6.4 GT/s).]
Cache Sharing 35

Two aspects of cache sharing


Cooperative resource sharing
• Multiple threads access the same set of main memory locations
• Examples:
– applications that are multithreaded
– producer-consumer thread interaction
Resource contention
• Threads, if operating on adjacent cores, compete for cache memory locations
• If more of the cache is dynamically allocated to one thread, the competing thread
necessarily has less cache space available and thus suffers performance
degradation
• Objective of contention-aware scheduling is to allocate threads to cores to
maximize the effectiveness of the shared cache memory and minimize the need
for off-chip memory accesses
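One simple contention-aware heuristic is to co-locate a cache-hungry thread with a cache-light one on each shared cache; the sketch below is illustrative only, and the per-thread "cache intensity" scores are assumed inputs (real schedulers estimate them from hardware performance counters):

```python
def pair_threads(intensity):
    """Contention-aware pairing sketch: sort threads by cache intensity
    (hypothetical scores) and pair the heaviest with the lightest, so no
    shared cache hosts two heavy competitors."""
    order = sorted(intensity, key=intensity.get)        # light -> heavy
    return [(order[i], order[-1 - i]) for i in range(len(order) // 2)]

scores = {"t1": 9, "t2": 1, "t3": 7, "t4": 2}
pairs = pair_threads(scores)   # [("t2", "t1"), ("t4", "t3")]
```

Each pair would then be assigned to a core pair sharing an L2, leaving the two heavy threads (t1, t3) on different shared caches.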



Real Time Systems 36

• The operating system, and in particular the scheduler, is perhaps the


most important component

Examples:
• control of laboratory experiments
• process control in industrial plants
• robotics
• air traffic control
• telecommunications
• military command and control systems



Real Time System 37

• Correctness of the system may depend not only on the logical result of
the computation but also on the time when these results are produced

• e.g.
– Tasks attempt to control events or to react to events that take place
in the outside world
– These external events occur in real time and processing must be
able to keep up
– Processing must happen in a timely fashion
• Neither too late, nor too early



Hard-Real time system 38

• Requirements:
– Must always meet all deadlines (time guarantees)
– You have to guarantee that in any situation these applications are
done in time,
– otherwise it will cause unacceptable damage or a fatal error to the
system

• Examples:
1. If the landing of a fly-by-wire jet cannot react to sudden side-winds
within some milliseconds, an accident might occur.
2. An airbag system or the ABS has to react within milliseconds



Soft-Real Time Systems 39

• Requirements:
– Must mostly meet all deadlines
– An associated deadline that is desirable but not mandatory
– it still makes sense to schedule and complete the task even if it has
passed its deadline

• Examples:
1. Multimedia: 100 frames per day might be dropped (late)
2. Car navigation: 5 late announcements per week are acceptable
3. Washing machine: washing 10 sec over time might occur once in
10 runs, 50 sec once in 100 runs.



Properties of Real-Time Tasks 40

• To schedule a real time task, its properties must be known

• The most relevant properties are


– Arrival time (or release time)
– Maximum execution time (service time)
– Deadline
• Starting deadline
• Completion deadline



Categories of Real Time Tasks 41

• Periodic
– Each task is repeated at a regular interval
– Max execution time is the same each period
– Arrival time is usually the start of the period
– Deadline is usually the end

• Aperiodic (sporadic)
– Each task can arrive at any time
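For periodic tasks these properties feed directly into schedulability tests. As a preview of the RMS material listed in the outline, the classic Liu and Layland utilization bound for Rate Monotonic Scheduling can be checked in a few lines (the task set used here is an assumed example):

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) Liu & Layland test for Rate Monotonic
    Scheduling: n periodic tasks, each given as (execution time C, period T),
    are schedulable if sum(C_i/T_i) <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Two tasks using 75% of the CPU pass the 2-task bound of ~82.8%:
ok = rm_schedulable([(1, 4), (2.5, 5)])     # True
too_much = rm_schedulable([(3, 4), (2, 5)]) # utilization 1.15 -> False
```

Here arrival time is the start of each period and the deadline is its end, matching the periodic-task assumptions above.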



Characteristics of Real-Time System 42

Real-time operating systems have requirements in five general areas:

– Determinism
– Responsiveness
– User Control
– Reliability
– Fail-soft operation



Determinism 43

• Concerned with how long an operating system delays before


acknowledging an interrupt

• Operations are performed at fixed, predetermined times or within


predetermined time intervals
– when multiple processes are competing for resources and
processor time, no system will be fully deterministic

The extent to which an operating system can deterministically satisfy
requests depends on:
– the speed with which it can respond to interrupts
– whether the system has sufficient capacity to handle all requests
within the required time


Responsiveness 44

• Concerned with how long, after acknowledgment, it takes an operating


system to service the interrupt

Responsiveness includes:

• amount of time required to initially handle the interrupt


and begin execution of the interrupt service routine (ISR)
• amount of time required to perform the ISR
• effect of interrupt nesting

• OS Response Time = Determinism + Responsiveness


• Together with determinism make up the response time to external
events
• Critical for real-time systems that must meet timing
requirements imposed by individuals, devices, and data flows
external to the system


User Control 45

• Generally much broader in a real-time operating system than in ordinary


operating systems

• It is essential to allow the user fine-grained control over task priority

• User should be able to distinguish between hard and soft tasks and to
specify relative priorities within each class

• May allow user to specify such characteristics as:


– paging or process swapping
– what processes must always be resident in main memory
– what disk transfer algorithms are to be used
– what rights the processes in various priority bands have


Reliability 46

• More important for real-time systems than non-real time systems

• Real-time systems respond to and control events in real time so loss or


degradation of performance may have catastrophic consequences such
as:
• financial loss
• major equipment damage
• loss of life



Fail-Soft Operation 47

• A characteristic that refers to the ability of a system to fail in such a way


as to preserve as much capability and data as possible

• Important aspect is stability


• A real-time system is stable if the system will meet the
deadlines of its most critical, highest-priority tasks even if some
less critical task deadlines are not always met



Characteristics of Real-Time System 48

• Determinism
– how long an OS delays before acknowledging an interrupt

• Responsiveness
– After acknowledgment, how long does it take to service the interrupt.

• User Control
– Fine-grained control over task priority

• Reliability

• Fail-soft operation
– Preserve as much capability and data as possible when the system fails
– Attempt to correct the problem or minimize its effects while continuing to run

• Stability
– at least meet all task deadlines of most critical, high-priority tasks even if some less critical
task deadlines are not always met.

Real-time applications are not concerned with speed


but with completing tasks on time!!


Common Features of Real-Time Systems 49

Compared to general purpose OS, in RT OS


– Priorities are used more strictly
• Preemptive scheduling designed to meet RT requirements
– Interrupt latency is bounded and relatively short
– More precise and predictable timing characteristics

• Heart of RT system is short-term task scheduler


– Fairness, minimizing average response time are not paramount
– Deadline is most important!!
• Most RT OSs are unable to deal with deadlines directly
– Designed to be as responsive as possible to RT tasks scheduling
– Require deterministic response time in milli/micro second ranges



Real-time Scheduling of Processes 50

(a) Round-robin preemptive scheduler: a request from a real-time process
joins the run queue; the process waits for its next time slice.
(b) Priority-driven nonpreemptive scheduler: the real-time process is added
to the head of the run queue, but must wait until the current process
blocks or completes.
(c) Priority-driven preemptive scheduler on preemption points: the real-time
process waits for the next preemption point of the current process.
(d) Immediate preemptive scheduler: the real-time process preempts the
current process and executes immediately.

Scheduling delay: order of milliseconds for the earlier approaches; order of
100 μs or less for the immediate preemptive scheduler.

Figure 10.4 Scheduling of Real-Time Processes


Real Time Scheduling 51

• RTS accepts an activity A and guarantees its requested (timely)


behaviour B if and only if
– RTS finds a schedule
• that includes all already accepted activities Ai and the new
activity A,
• that guarantees all requested timely behaviour Bi and B, and
• that can be enforced by the RTS.

• Otherwise, RT system rejects the new activity A.



Real-Time Scheduling 52

Scheduling approaches depend on:
– whether a system performs schedulability analysis
– if it does, whether it is done statically or dynamically
– whether the result of the analysis itself produces a schedule or plan
according to which tasks are dispatched at run time



Real-time Scheduling Approaches 53

• Static table-driven scheduling


– Given a set of tasks and their properties, a schedule (table) is
precomputed offline.
• Used for periodic task set
• Requires entire schedule to be recomputed if we need to
change the task set

• Static priority-driven scheduling


– Given a set of tasks and their properties, each task is assigned a
fixed priority based on some static analysis
– A preemptive priority-driven scheduler is used in conjunction with the
assigned priorities
• Used for periodic task sets


Real-time Scheduling Approaches 54

• Dynamic planning-based scheduling


– A task arrives (announces itself) prior to its execution
– The scheduler determines whether the new task can be admitted
• Can all other admitted tasks and the new task meet their
deadlines?
– If no, reject the new task
– Can handle both periodic and aperiodic tasks
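The admission decision above can be sketched as a simple feasibility test. This is an illustrative Python sketch (function and tuple fields are our own naming, not from the slides): serve the already-admitted tasks plus the new one in earliest-deadline order and check that each would finish by its deadline.

```python
# Illustrative admission test for dynamic planning-based scheduling.
# Each task is (remaining_execution_time, absolute_deadline).

def admit(admitted, new_task, now):
    """Return True if new_task can be added so that every task still
    meets its deadline when served in earliest-deadline order."""
    candidate = sorted(admitted + [new_task], key=lambda task: task[1])
    t = now
    for remaining, deadline in candidate:
        t += remaining            # finish time if served in EDF order
        if t > deadline:
            return False          # some deadline would be missed: reject
    return True

print(admit([(5, 20)], (10, 25), now=0))   # True: tasks finish at 5 and 15
print(admit([(5, 20)], (18, 22), now=0))   # False: second task would finish at 23 > 22
```

The one-pass check is only exact for independent tasks with known execution times; a real planner would also account for resource requirements.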


Real-time Scheduling Approaches 55

• Dynamic best-effort scheduling

– No feasibility analysis is performed
– When a task arrives, the system assigns it a priority based on
its characteristics
– The system tries to meet all deadlines, aborting any started task
that has missed its deadline
– Can't guarantee that a task's timing constraint will be met until the
task has completed
– Usually the tasks are aperiodic
– Used by many current commercial RT systems


Deadline Scheduling 56

• Real-time operating systems are designed with the objective of starting
real-time tasks as rapidly as possible
– They emphasize rapid interrupt handling and task dispatching

• Real-time applications are generally not concerned with sheer speed
but rather with completing (or starting) tasks at the most valuable times
– Neither too early nor too late

• Priorities provide a crude tool and do not capture the requirement of
completion (or initiation) at the most valuable time


Information Used for Deadline Scheduling 57

• time task becomes ready


Ready time • resources required by
for execution Resource
the task while it is
requirements executing

Starting
• time task must begin
deadline
• measures relative
Priority
importance of the task
Completion • time task must be
deadline completed

• a task may be
Subtask decomposed into a
Processing • time required to execute scheduler mandatory subtask and
time the task to completion an optional subtask
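The parameters above can be collected into a small record type. The field names below are illustrative, chosen to mirror the list (the slides describe the concepts, not an API):

```python
# Illustrative record of the information a deadline scheduler uses per task.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RTTask:
    ready_time: int                             # time the task becomes ready
    processing_time: int                        # time to execute to completion
    starting_deadline: Optional[int] = None     # latest allowed start, if specified
    completion_deadline: Optional[int] = None   # latest allowed finish, if specified
    priority: int = 0                           # relative importance
    resources: tuple = ()                       # resources required while executing

# Task B from the aperiodic example later in this lecture:
task_b = RTTask(ready_time=20, processing_time=20, starting_deadline=20)
print(task_b.starting_deadline)  # 20
```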


Usage of Deadlines in RT Scheduling Function 58

• Which task to schedule?
– For a given preemption strategy and using either starting or completion
deadlines, a policy that schedules the task with the earliest deadline
minimizes the fraction of tasks that miss their deadlines
• What sort of preemption is allowed?
– When starting deadlines are specified, a nonpreemptive scheduler makes
sense
• The RT task should block itself after completing the mandatory or
critical portion of its execution
– When ending deadlines are specified, a preemptive scheduler is most
appropriate
• If task X is running and Y is ready, there may be circumstances in which
the only way to allow both X and Y to meet their completion deadlines is
to preempt X, execute Y to completion, and then resume X to
completion.


Scheduling in Real-Time Systems 59

• We will consider periodic systems

• Schedulable real-time system
– Given
• n periodic events
• event i occurs with period Ti and requires Ci seconds
• Ui = Ci / Ti is called the processor utilization of event i
– Then the load can only be handled if

C1/T1 + C2/T2 + … + Cn/Tn ≤ 1
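The utilization condition can be checked directly; a minimal sketch (the function name is our own):

```python
# Schedulability check for n periodic events: the sum of Ci/Ti must not exceed 1.

def schedulable(events):
    """events: list of (C, T) pairs, execution time and period."""
    return sum(c / t for c, t in events) <= 1

# Tasks from the two-task example that follows: A (C=10, T=20), B (C=25, T=50)
print(schedulable([(10, 20), (25, 50)]))  # True: 0.5 + 0.5 = 1.0
print(schedulable([(15, 20), (25, 50)]))  # False: 0.75 + 0.5 = 1.25
```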


60
Execution Profile of Two Periodic Tasks

Process   Arrival Time   Execution Time   Ending Deadline
A(1)           0              10                20
A(2)          20              10                40
A(3)          40              10                60
A(4)          60              10                80
A(5)          80              10               100
  …            …               …                 …
B(1)           0              25                50
B(2)          50              25               100
  …            …               …                 …

Real-time Scheduling Algorithms 61

• Earliest Deadline First Scheduling


– The task with the earliest deadline is chosen next
– The deadline could be
• Completion deadline
• Starting deadline
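The selection rule is just a minimum over deadlines; a minimal sketch (names are illustrative):

```python
# Earliest-deadline-first selection: among ready tasks, pick the one whose
# deadline (completion or starting deadline) is earliest.

def edf_pick(ready_tasks):
    """ready_tasks: list of (name, deadline). Returns the name to run next, or None."""
    if not ready_tasks:
        return None
    return min(ready_tasks, key=lambda task: task[1])[0]

print(edf_pick([('B', 20), ('A', 110), ('C', 50)]))  # 'B'
```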


62

[Figure 10.5: Scheduling of Periodic Real-Time Tasks with Completion Deadlines (based on Table 10.2). Deadline markers: A1–A5 at 20, 40, 60, 80, 100 ms; B1 at 50 ms, B2 at 100 ms]
• Fixed-priority scheduling, A has priority: service order A1 B1 A2 B1 A3 B2 A4 B2 A5 B2; B1 misses its deadline.
• Fixed-priority scheduling, B has priority: service order B1 A2 A3 B2 A5; A1 and A4 miss their deadlines.
• Earliest-deadline scheduling using completion deadlines: service order A1 B1 A2 B1 A3 B2 A4 B2 A5; all deadlines are met.


63

Table 10.4
Execution Profile of Five Aperiodic Tasks

Process Arrival Time Execution Time Starting Deadline


A 10 20 110
B 20 20 20
C 40 20 50
D 50 20 90
E 60 20 70


64

[Figure 10.6: Scheduling of Aperiodic Real-Time Tasks with Starting Deadlines (based on Table 10.4). Arrivals of A, B, C, D, E at 10, 20, 40, 50, 60 ms]
• Earliest deadline: service order A C E D; B misses its starting deadline.
• Earliest deadline with unforced idle times: service order B C E D A; all starting deadlines are met.
• First-come first-served (FCFS): service order A C D; B and E miss their starting deadlines.



Real-time Scheduling Algorithms 65

• Rate Monotonic Scheduling
– Static priority-driven scheduling
– Priorities are assigned based on the period of each task
• The shorter the period, the higher the priority
– If task P has a period of T, then the rate of task P is 1/T
– If C is the execution time of task P, then U = C/T is the processor
utilization of task P
– For RMS the following inequality holds:

C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)

    n     n(2^(1/n) − 1)
    1         1.0
    2         0.828
    3         0.779
    4         0.756
    5         0.743
    6         0.734
    …          …
    ∞     ln 2 ≈ 0.693


66

[Figure 10.8: Periodic Task Timing Diagram. Task P alternates processing and idle time within each cycle; its execution time is C and its period is T]

[Figure 10.7: A Task Set with RMS. Priority increases with rate (Hz): the highest-rate task gets the highest priority, the lowest-rate task the lowest]
Real-time Scheduling Algorithms 67

• For RMS the following inequality holds:

C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)

• Example:
– Task P1: C1 = 20, T1 = 100; U1 = 0.2
– Task P2: C2 = 40, T2 = 150; U2 = 0.267
– Task P3: C3 = 100, T3 = 350; U3 = 0.286
– C1/T1 + C2/T2 + C3/T3 = 0.753 < 0.779
– If RMS is used, these tasks will be successfully scheduled.

    n     n(2^(1/n) − 1)
    1         1.0
    2         0.828
    3         0.779
    4         0.756
    5         0.743
    6         0.734
    …          …
    ∞     ln 2 ≈ 0.693
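The example can be verified numerically; a small sketch (the slide sums the rounded per-task utilizations, giving 0.753; the unrounded sum is ≈ 0.752, still below the bound):

```python
# Verifying the RMS example: utilization of P1..P3 against the bound
# n(2^(1/n) - 1) for n = 3.

def rms_bound(n):
    return n * (2 ** (1 / n) - 1)

tasks = [(20, 100), (40, 150), (100, 350)]   # (Ci, Ti) for P1, P2, P3
utilization = sum(c / t for c, t in tasks)   # ~0.752
print(utilization <= rms_bound(3))           # True: schedulable under RMS
```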


Task A : C1 = 10, T1= 30;
Example Task B : C2 = 15, T2= 40; 68
Task C : C3 = 05, T3= 50;


Task A : C1 = 10, T1= 30;
Task B : C2 = 15, T2= 40;
Task C : C3 = 05, T3= 50; 69


Let’s Modify the Example Slightly 70

• Increase A's CPU requirement to 15 msec
• The system is still schedulable: total utilization is
  15/30 + 15/40 + 5/50 = 0.975 ≤ 1


RMS failed, why? 72

• It has been proven that RMS is only guaranteed to work if the CPU
utilisation is not too high
– For three tasks, CPU utilisation must be less than 0.780

• We were lucky with our original example
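The failure can be demonstrated with a small discrete-time simulation. This is an illustrative sketch (our own code, not from the slides) of the modified task set A(C=15, T=30), B(C=15, T=40), C(C=5, T=50), with each task's deadline equal to its period:

```python
# Unit-step simulation comparing RMS and EDF on the modified task set.

def simulate(tasks, policy, horizon):
    """tasks: list of (C, T) with deadline = period.
    policy: 'rms' or 'edf'. Returns the set of task indices that miss a deadline."""
    jobs = [None] * len(tasks)          # per task: [remaining_time, absolute_deadline]
    next_release = [0] * len(tasks)
    missed = set()
    for t in range(horizon):
        # check for deadline misses before releasing new jobs
        for i, job in enumerate(jobs):
            if job and job[0] > 0 and t >= job[1]:
                missed.add(i)
                jobs[i] = None          # abandon the late job
        # release new jobs
        for i, (C, T) in enumerate(tasks):
            if t == next_release[i]:
                jobs[i] = [C, t + T]
                next_release[i] += T
        ready = [i for i, job in enumerate(jobs) if job and job[0] > 0]
        if ready:
            if policy == 'rms':
                run = min(ready, key=lambda i: tasks[i][1])   # shortest period first
            else:
                run = min(ready, key=lambda i: jobs[i][1])    # earliest deadline first
            jobs[run][0] -= 1           # execute for one time unit
    return missed

tasks = [(15, 30), (15, 40), (5, 50)]   # A, B, C with A's cost raised to 15
print(simulate(tasks, 'rms', 600))      # {2}: task C misses under RMS
print(simulate(tasks, 'edf', 600))      # set(): EDF meets all deadlines (U = 0.975)
```

The horizon of 600 ms is the hyperperiod (lcm of 30, 40, 50), so the simulation covers every distinct scheduling situation of the periodic set.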


RMS vs EDFS 73

• For RMS: C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)
• For EDFS: C1/T1 + C2/T2 + … + Cn/Tn ≤ 1

– EDFS can achieve greater overall processor utilization

• Still, RMS is widely adopted for use in industrial applications:
– The performance difference is small in practice. The upper bound
for RMS is a conservative one, and in practice up to 90% utilization is
often achieved.
– Most hard-RT systems also have soft-RT components, which do not
interfere with the RMS scheduling of hard-RT tasks
– Stability is easier to achieve with RMS.
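The gap between the two bounds can be tabulated; a small sketch showing the RMS bound shrinking toward ln 2 while the EDFS bound stays at 1 for any n:

```python
# The RMS utilization bound n(2^(1/n) - 1) converges to ln 2 as n grows.
import math

def rms_bound(n):
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10, 100):
    print(n, round(rms_bound(n), 3))   # 1.0, 0.828, 0.78, 0.718, 0.696
print(round(math.log(2), 3))           # 0.693
```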


Priority Inversion 74

• Can occur in any priority-based preemptive scheduling scheme


• Particularly relevant in the context of real-time scheduling
• Best-known instance involved the Mars Pathfinder mission
• Occurs when circumstances within the system force a higher priority
task to wait for a lower priority task

Unbounded Priority Inversion

• the duration of a priority inversion depends not only on


the time required to handle a shared resource, but also
on the unpredictable actions of other unrelated tasks



75
Unbounded Priority Inversion
T1: periodic system health check
T2: process image data
T3: occasional test on status

[Figure (a), unbounded priority inversion: T3 locks s and is preempted by T1; T1 blocks when it attempts to lock s; T2 then preempts T3 and runs, delaying T1 for an unbounded time until T3 can resume and unlock s]
76
Priority Inheritance

[Figure (b), use of priority inheritance: when T1 blocks attempting to lock s, T3 inherits T1's priority, so T2 cannot preempt T3; T3 finishes with s and unlocks it, T1 locks s and completes, and the duration of the inversion is bounded]
Priority Ceiling 77

• A priority is associated with each resource.


– The assigned priority is one level higher than the priority of its
highest-priority user.
• The scheduler dynamically assigns the resource's priority to any process
that accesses the resource
– After the process is done with the resource, its priority
returns to normal
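The mechanism can be sketched in a few lines. This is an illustrative sketch only; the class and function names and the numeric priority scale (higher number = higher priority) are our own:

```python
# Sketch of the priority-ceiling idea: each resource gets a priority one
# level above that of its highest-priority user, and a process runs at the
# resource's ceiling priority while holding the resource.

def ceiling_priorities(users_by_resource):
    """users_by_resource: {resource: [priorities of processes that use it]}.
    Returns {resource: ceiling priority}."""
    return {r: max(prios) + 1 for r, prios in users_by_resource.items()}

class Process:
    def __init__(self, name, base_priority):
        self.name = name
        self.base = base_priority
        self.priority = base_priority

    def acquire(self, resource, ceilings):
        # While holding the resource, run at its ceiling priority
        self.priority = max(self.priority, ceilings[resource])

    def release(self):
        self.priority = self.base   # back to normal when done

# T1 (priority 3) and T3 (priority 1) share resource s; T2 has priority 2
ceilings = ceiling_priorities({'s': [3, 1]})
t3 = Process('T3', 1)
t3.acquire('s', ceilings)
print(t3.priority)   # 4: boosted above T1, so T2 cannot preempt T3
t3.release()
print(t3.priority)   # 1
```

Boosting T3 above even T1 while it holds s prevents the unbounded inversion scenario from the earlier figure.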


Summary 78

• RT processes are executed in connection with some process or event
external to the computer system
• Therefore, RT scheduling must meet one or more deadlines
• Traditional criteria for scheduling do not apply
• RT systems are characterized by
– Determinism, responsiveness, user control, reliability, fail-soft
operation
• Both static and dynamic scheduling are possible in RT systems
– Static
• Table-driven approach vs priority-driven preemptive approach
– Dynamic
• Planning-based approach vs best-effort approach


Summary 79

• The key factor is meeting deadlines, not processor utilization or other
traditional criteria
• Algorithms that rely heavily on preemption and on reacting to relative
deadlines are appropriate for this purpose
• Two prominent scheduling algorithms are
– Earliest Deadline First (EDF)
– Rate Monotonic Scheduling (RMS)


References 80

• Operating Systems: Internals and Design Principles
– by William Stallings
• Chapter 10

