CH 6 (Unit-III)
PROCESSES AND OPERATING SYSTEMS
T.Ramprakash
AP(Sr.Gr)/ECE
Ramco Institute of Technology
Rajapalayam
Flow of syllabus
• Introduction
• Multiple tasks and multiple processes
• Multirate systems
• Preemptive real-time operating systems
• Priority based scheduling
• Interprocess communication mechanisms
• Evaluating operating system performance
• Power optimization strategies for processes
• Example Real time operating systems
– POSIX
– Windows CE
Multiple tasks and multiple processes
Process
Multiprogramming
Multitasking
Multiprocessing
Multithreading
Process
A single execution of a program is called a process.
If we run the same program at two different times, we create two different processes.
Each process has its own state, which includes not only its registers but all of its memory.
In some OSs, the memory management unit is used to keep each process in a separate address space.
In others, particularly lightweight RTOSs, the processes run in the same address space.
Processes that share the same address space are often called threads.
Multiprogramming
Multiprogramming is the ability of an operating system to run more than one program on a single-processor machine.
More than one task/program/job/process can reside in main memory at one point in time.
A computer running Excel and a Firefox browser at the same time is an example of multiprogramming.
(Figure: memory layout of a multiprogramming system)
Multitasking
Multitasking is the ability of an operating
system to execute more than one task
simultaneously on a single processor machine.
In reality, however, no two tasks on a single-processor machine can execute at exactly the same time.
The CPU switches from one task to the next so quickly that it appears as if all the tasks are executing simultaneously.
(Figure: a multitasking system)
Multiprocessing
Multiprocessing is the ability of an operating system to execute more than one process simultaneously on a multiprocessor machine.
In this case, the computer uses more than one CPU at a time.
Multithread
Threads are lightweight processes that are independent parts of a process or program.
Processes that share the same address space are often called threads.
Multithread
Multithreading is the ability of an operating system to execute different parts of a program, called threads, at the same time.
(Figure: tasks and processes)
Multirate Systems
In multirate systems, certain operations must be executed periodically, and each operation is executed at its own rate.
Timing Requirements on processes
Each process may have several different types of timing requirements:
Initiation time: the time at which the process goes from the waiting state to the ready state.
Deadline: specifies when the computation must be finished.
Sequence of processes with a high initiation rate
Rate requirement: specifies how quickly processes must be initiated.
Jitter:
Jitter is the deviation between the time at which a task should start and the time at which it actually starts.
Missing a deadline:
A variety of actions can be taken when a deadline is missed.
Data Dependencies among processes
DAG:
A directed acyclic graph (DAG) is a directed graph that contains no cycles.
Communication among processes at
different rates
CPU usage metrics
The initiation time is the time at which a process actually
starts executing on the CPU.
The completion time is the time at which the process
finishes its work.
The most basic measure of work is the amount of CPU
time expended by a process.
The CPU time of process i is called Ci .
CPU time is not equal to the completion time minus the initiation time; several other processes may interrupt execution.
CPU usage metrics
The simplest and most direct measure is utilization: the fraction of the available CPU time that is spent doing useful work.
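Written as a formula (the interval length T is a symbol introduced here for illustration, and Ci is the CPU time of process i as defined above):

U = \frac{\text{CPU time for useful work}}{\text{total available CPU time}} = \frac{\sum_i C_i}{T}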
Process State and Scheduling
• The work of choosing the order of running
processes is known as scheduling
• Scheduling states:
  – Waiting
  – Ready
  – Executing
Running Periodic Process
Two simple ways to run a periodic process are:
• Multiple timers: use one timer per period.
• While loop: run the processes in sequence inside a single loop timed to the period.
A sketch of the while-loop approach appears below.
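A minimal sketch of the while-loop approach, assuming two hypothetical processes p1() and p2() and a hypothetical wait_for_timer() call that blocks until the next period (none of these names come from the slides):

extern void p1(void);
extern void p2(void);
extern void wait_for_timer(void);      /* assumed: blocks until the next period */

void run_periodic(void)
{
    int i = 0;
    while (1) {
        wait_for_timer();              /* start of a new period */
        p1();                          /* runs every period */
        if (i % 2 == 0)
            p2();                      /* runs at half the base rate */
        i++;
    }
}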
Pre-emptive real-time operating systems
• A preemptive real-time operating system solves the fundamental problems of a cooperative multitasking system.
• An RTOS executes processes based upon timing constraints provided by the system designer.
• The most reliable way to meet timing constraints accurately is to build a preemptive OS and to use priorities to control which process runs at any given time.
Preemptive real-time operating systems
• Two Important Methods
– Preemption
– Priorities
Preemption
• Pre-emption is an alternative to the C function
call as a way to control execution
Pre-emption
• The kernel is the part of the OS that determines which process is running.
Context Switching
• The set of registers that defines a process is known as its context.
• Switching from one process's register set to another is known as context switching.
• The data structure that holds the state of a process is known as its record.
Process Priorities
• Each process is assigned a numerical priority.
Process and Context
• A process is known in FreeRTOS.org as a task.
vPreemptiveTick
The timer tick handler in the FreeRTOS example: on each tick it saves the running task's context, lets the scheduler pick the next task, and restores that task's context.
portSAVE_CONTEXT
The FreeRTOS macro that saves the running task's registers on its stack so the task can later be resumed where it left off.
Process and Object oriented design
• UML often refers to processes as active objects, that
is, objects that have independent threads of control.
• The class that defines an active object is known as an
active class.
• It has all the normal characteristics of a class,
including a name, attributes and operations.
• It also provides a set of signals that can be used to
communicate with the process.
Process and Object oriented design
• A simple collaboration diagram can use an object as an interface between two processes.
Priority Based Scheduling
• Round-Robin Scheduling
• Process Priorities
• Shared Resources
• Priority Inversion
Round-Robin Scheduling
• Round-robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time slice in which to execute, called a quantum.
• Once a process has executed for its quantum, it is preempted and another process executes for its quantum.
• Context switching is used to save the states of preempted processes.
A minimal sketch of a round-robin dispatcher appears below.
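An illustrative round-robin dispatcher; run_for_quantum() and the task_t layout are assumptions for illustration, not the API of any particular RTOS:

#define NUM_TASKS 3

typedef struct {
    void (*body)(void);      /* task entry point */
    int  ready;              /* nonzero if the task is ready to run */
} task_t;

extern task_t tasks[NUM_TASKS];
extern void run_for_quantum(task_t *t);   /* assumed: runs t until its quantum expires */

void round_robin(void)
{
    int i = 0;
    for (;;) {
        if (tasks[i].ready)
            run_for_quantum(&tasks[i]);   /* preempted when the quantum ends */
        i = (i + 1) % NUM_TASKS;          /* move on to the next task in the ring */
    }
}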
Process Priorities
• Priority scheduling assigns each process a priority; it is one of the most common scheduling algorithms and is often used non-preemptively in batch systems.
• The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first-come, first-served basis.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Rate Monotonic scheduling
• Rate-monotonic scheduling (RMS) was introduced by Liu and Layland.
• It is one of the first scheduling policies developed for real-time systems and is still very widely used.
• RMS assigns task priorities in order of task frequency: the task with the shortest period gets the highest priority, the task with the next-shortest period gets the second-highest priority, and so on.
Rate Monotonic scheduling
• RMS relies on a relatively simple model of the system:
– All processes run periodically on a single CPU.
– Context switching time is ignored.
– There are no data dependencies between
processes.
– The execution time for a process is constant.
– All deadlines are at the ends of their periods.
– The highest-priority ready process is always
selected for execution.
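Under these assumptions, Liu and Layland's classic result gives a sufficient schedulability bound on the total utilization of n periodic tasks (stated here for reference, with Ci the CPU time and Ti the period of task i):

\sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right)

The bound approaches ln 2 (about 0.69) as n grows; a task set whose utilization is below the bound is guaranteed schedulable by RMS, while one above it may or may not be.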
Rate Monotonic scheduling: example 2
Schedule the processes given below using the Earliest Deadline First (EDF) scheduling policy and Rate Monotonic Scheduling.
Earliest Deadline First scheduling
• Example
Earliest Deadline First scheduling
Task   Execution Time   Period
T1     1                4
T2     2                6
T3     3                8
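As a quick check on this task set (my own arithmetic, not taken from the slides), the total utilization is

U = \frac{1}{4} + \frac{2}{6} + \frac{3}{8} \approx 0.958

which is below 1, so EDF can schedule the set; it is above the three-task rate-monotonic bound 3(2^{1/3} - 1) \approx 0.78, so RMS is not guaranteed to meet every deadline and the RMS schedule must be checked explicitly.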
Shared Resources
• Race Condition
• Critical Sections
• Semaphores
Shared Resources
• Race Condition
  – Consider the case in which an I/O device has a flag that must be tested and modified by a process.
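A minimal sketch of the race and one fix, assuming hypothetical disable_interrupts()/enable_interrupts() primitives to mark the critical section (the names are illustrative):

volatile int io_flag = 0;               /* shared flag tested and set by more than one context */

extern void disable_interrupts(void);   /* assumed critical-section primitives */
extern void enable_interrupts(void);

/* Unsafe: another process (or an ISR) may run between the test and the set,
 * so both contexts can see io_flag == 0 and both claim the device. */
int claim_device_unsafe(void)
{
    if (io_flag == 0) {     /* test... */
        io_flag = 1;        /* ...then modify: a race window exists between them */
        return 1;
    }
    return 0;
}

/* Safe: the test-and-set pair runs as a critical section. */
int claim_device_safe(void)
{
    int got = 0;
    disable_interrupts();
    if (io_flag == 0) {
        io_flag = 1;
        got = 1;
    }
    enable_interrupts();
    return got;
}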
INTERPROCESS COMMUNICATION MECHANISMS
– Shared memory
– Message passing
– Signals
– Mailboxes
Shared Memory
• The CPU and an I/O device can communicate through a shared memory location.
Message passing
• Each communicating entity has its own message send/receive unit.
• Messages are stored at the sender and receiver endpoints, not in a shared memory pool.
Message passing
• For example, a home control system has one microcontroller per household device (lamp, fan, and so on).
• The devices must communicate relatively infrequently.
• Their physical separation is large enough that they would not naturally share a central pool of memory.
• Passing communication packets among the devices is a natural way to implement communication, and many 8-bit microcontrollers support it. A sketch of such an interface follows.
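A minimal sketch of the kind of message-passing interface such a device network might use; the msg_t layout and the msg_send()/msg_receive() names are assumptions for illustration:

#include <stdint.h>

typedef struct {
    uint8_t dest;        /* destination device address */
    uint8_t command;     /* e.g. lamp on/off, fan speed */
    uint8_t value;       /* command parameter */
} msg_t;

extern void msg_send(const msg_t *m);   /* assumed: queue a message for transmission */
extern int  msg_receive(msg_t *m);      /* assumed: nonblocking, 1 if a message arrived */

/* A device node polls for commands addressed to it and acts on them. */
void device_poll(void)
{
    msg_t m;
    if (msg_receive(&m)) {
        switch (m.command) {
        case 1: /* turn the device on  */ break;
        case 0: /* turn the device off */ break;
        default: break;
        }
    }
}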
Signals
• Another form of interprocess communication commonly used in Unix is the signal.
• A signal is analogous to an interrupt, but it is entirely a software creation.
• A signal is generated by a process and transmitted to another process by the operating system.
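A minimal POSIX sketch of one process sending a signal to another; here the receiver's PID is assumed to be passed on the command line:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Handler run asynchronously when SIGUSR1 arrives. */
static void on_usr1(int signum)
{
    (void)signum;
    write(STDOUT_FILENO, "got SIGUSR1\n", 12);   /* async-signal-safe output */
}

int main(int argc, char *argv[])
{
    signal(SIGUSR1, on_usr1);          /* register the handler */

    if (argc > 1) {
        pid_t peer = (pid_t)atoi(argv[1]);
        kill(peer, SIGUSR1);           /* the OS delivers the signal to the peer */
    }

    pause();                           /* wait for a signal to arrive */
    return 0;
}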
Mailboxes
• A mailbox is an asynchronous communication mechanism.
• Mailboxes have a fixed number of bits and can be used for small messages.
• We can also implement a mailbox using P() and V(), with main memory used for the mailbox storage.
• The mailbox should contain two items:
  – the message, and
  – a mail-ready flag.
Mailboxes
void post(message *msg)
{
    P(mailbox.sem);              /* wait for the mailbox */
    copy(mailbox.data, msg);     /* copy the message into the mailbox */
    mailbox.flag = TRUE;         /* mark that mail is ready */
    V(mailbox.sem);              /* release the mailbox */
}
Mailboxes
boolean pickup(message *msg)
{
    boolean pickup = FALSE;
    P(mailbox.sem);              /* wait for the mailbox */
    pickup = mailbox.flag;       /* was there mail waiting? */
    mailbox.flag = FALSE;        /* clear the flag */
    copy(msg, mailbox.data);     /* copy the data out of the mailbox */
    V(mailbox.sem);              /* release the mailbox */
    return pickup;
}
Evaluating Operating System Performance
• Assumption: the scheduling analysis above ignores interrupts.
• Interrupt latency: in practice, the time from an interrupt request to the start of its handler, together with other OS overheads, adds to a process's real response time.
Power optimization strategies for processes
• The RTOS and system architecture can use static and dynamic power management mechanisms.
• A power management policy is a strategy for determining when to perform certain power management operations.
• It examines the state of the system to determine when to take actions.
Power down trade offs
• Going into a low-power mode takes time.
• The more that is shut off, the longer the delay incurred during restart.
  – Avoiding a power-down mode can cost unnecessary power.
  – Powering down too soon can cause severe performance penalties.
• The simplest method is to power up the system when a request is received. This works as long as the delay in handling the request is acceptable.
Predictive Power Management
• Here, we predict when the next request will be made and start the system just before that time, saving the requestor the start-up time.
• We guess at the activity pattern based on a probabilistic model of expected behavior.
• Because such policies rely on statistics, they may not always guess the time of the next activity correctly.
• They can cause two types of problems:
  – The requestor may have to wait for an activity period.
  – The system may restart itself when no activity is imminent.
Predictive Power Management
• A simple predictive technique is to use fixed times.
• For example, if the system does not receive inputs during an interval of length TON, it shuts down.
• A powered-down system waits for a period TOFF before returning to the powered-on mode.
• The choice of TON and TOFF must be determined by experimentation.
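A minimal sketch of the fixed-timeout policy, assuming hypothetical now_ms(), enter_low_power(), exit_low_power(), and input_pending() hooks (none of these names come from the slides):

#include <stdint.h>

extern uint32_t now_ms(void);          /* assumed: monotonic time in milliseconds */
extern void enter_low_power(void);
extern void exit_low_power(void);
extern int  input_pending(void);       /* assumed: nonzero if a request has arrived */

#define T_ON_MS   5000   /* idle time before shutting down (illustrative value) */
#define T_OFF_MS  2000   /* minimum time to stay powered down (illustrative value) */

void power_policy_step(void)           /* call periodically from the main loop */
{
    static uint32_t last_input = 0;
    static uint32_t down_since = 0;
    static int powered_down = 0;

    if (input_pending()) {
        last_input = now_ms();
        if (powered_down) {                            /* wake up to serve the request */
            exit_low_power();
            powered_down = 0;
        }
    } else if (!powered_down && now_ms() - last_input >= T_ON_MS) {
        enter_low_power();                             /* idle longer than TON: shut down */
        powered_down = 1;
        down_since = now_ms();
    } else if (powered_down && now_ms() - down_since >= T_OFF_MS) {
        exit_low_power();                              /* powered down for TOFF: power back up */
        powered_down = 0;
        last_input = now_ms();
    }
}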
L-shaped distribution
• Srivastava and Eustace studied a graphics terminal, plotting the observed idle time (TOFF) versus the immediately preceding active time (TON); the observed distribution is L-shaped.
Architecture of a Power-Managed System
• Service provider: the component whose power is being managed.
• Service requestor: makes requests of the power-managed system.
• Queue: holds pending requests.
• Power manager: observes the system and sends power management commands.
Advanced Configuration and Power
Interface (ACPI)
• The Advanced Configuration and Power Interface
(ACPI) is an open industry standard for power
management services.
• It is designed to be compatible with a wide variety of OSs.
• It was initially targeted at PCs.
• The OS has its own power management module that determines the policy.
• The OS then uses ACPI to send the required controls to the hardware and to observe the hardware's state as input to the power manager.
Advanced Configuration and Power Interface (ACPI)
• ACPI supports the following five basic global power states:
  – G3, the mechanical off state, in which the system consumes no power.
  – G2, the soft off state, which requires a full OS reboot to restore the machine to working condition.
  – G1, the sleeping state, in which the system appears to be off and the time required to return to working condition is inversely proportional to power consumption. The sleeping state has four substates:
    • S1, a low wake-up latency state with no loss of system context;
    • S2, a low wake-up latency state with a loss of CPU and system cache state;
    • S3, a low wake-up latency state in which all system state except main memory is lost; and
    • S4, the lowest-power sleeping state, in which all devices are turned off.
  – G0, the working state, in which the system is fully usable.
  – The legacy state, in which the system does not comply with ACPI.
Example of RTOS
• POSIX
• Windows CE
POSIX
• Portable Operating System Interface
POSIX
• POSIX is a version of the Unix operating system interface created by a standards organization (IEEE).
• POSIX-compliant operating systems are source-code compatible:
  – i.e., an application can be compiled and run without modification on a new POSIX-compliant platform.
• PREEMPT_RT mode (a real-time configuration of Linux):
  – provides priority inheritance to reduce the latency of many kernel operations.
Processes in POSIX
• In POSIX, a new process is created by making a copy of an existing process.
• The copying creates two different processes, both running the same code.
• The complication lies in ensuring that one process runs the code intended for the new process while the other continues the work of the old one.
Processes in POSIX
• A process makes a copy of itself by calling the fork() function.
• fork() creates a new child process that is an exact copy of the parent process.
• Both have the same code and the same data values, with one exception: the return value of fork().
  – In the parent process, fork() returns the process ID of the child.
  – In the child process, fork() returns 0.
POSIX
fork()
pid_t childid;

childid = fork();            /* create a copy of this process */
if (childid == 0) {
    /* child process: do the child's work here */
} else {
    /* parent process: childid holds the child's PID */
}
Processes in POSIX
• It would be clumsy for both processes to carry all the code for both the parent and the child.
• POSIX provides the exec facility for overlaying new code on a process, as sketched below.
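A minimal sketch of the usual fork-then-exec pattern; the program path /bin/ls and its arguments are only an example:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t childid = fork();

    if (childid == 0) {
        /* child: replace this process image with a new program */
        char *args[] = { "ls", "-l", NULL };
        execv("/bin/ls", args);
        perror("execv");               /* only reached if execv fails */
        _exit(1);
    } else {
        waitpid(childid, NULL, 0);     /* parent: wait for the child to finish */
    }
    return 0;
}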
POSIX
• POSIX supports several interprocess communication mechanisms, including:
  – Semaphores
  – Shared memory
  – Message queues
POSIX semaphores
• POSIX supports counting semaphores in the _POSIX_SEMAPHORES option.
• A counting semaphore allows more than one process to access a resource at a time.
• If a semaphore allows up to N resources, it will not block until N processes have simultaneously passed the semaphore.
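A minimal sketch using POSIX named semaphores; the name /rtos_demo and the initial count of 2 are arbitrary choices for illustration:

#include <fcntl.h>           /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Create (or open) a named counting semaphore with an initial count of 2,
     * so up to two processes may hold it at once. */
    sem_t *sem = sem_open("/rtos_demo", O_CREAT, 0644, 2);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(sem);           /* P(): take one unit, blocking if none are left */
    /* ... access the shared resource ... */
    sem_post(sem);           /* V(): give the unit back */

    sem_close(sem);
    return 0;
}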
POSIX shared memory
• POSIX shared memory objects are managed with a small set of calls:
  – shm_open() creates or opens a shared memory object
  – mmap() maps the object into the process's address space
  – munmap() removes the mapping
  – close() closes the object
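A minimal sketch of creating and mapping a POSIX shared memory object; the object name /rtos_shm and its size are illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/rtos_shm"
#define SHM_SIZE 4096

int main(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0644);   /* create/open the object */
    if (fd < 0) { perror("shm_open"); return 1; }

    ftruncate(fd, SHM_SIZE);                                /* set its size */

    char *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);                      /* map it into our address space */
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from shared memory");                  /* visible to other processes mapping it */

    munmap(p, SHM_SIZE);                                    /* remove the mapping */
    close(fd);                                              /* close the object */
    return 0;
}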
POSIX Message Queues
• POSIX also supports message queues, which let processes exchange discrete messages through a named queue.
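A minimal sketch of a POSIX message queue; the queue name /rtos_mq and the attribute values are illustrative:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { 0 };
    attr.mq_maxmsg  = 8;                   /* queue depth */
    attr.mq_msgsize = 64;                  /* bytes per message */

    mqd_t q = mq_open("/rtos_mq", O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "sensor reading";
    mq_send(q, msg, strlen(msg) + 1, 0);   /* send with priority 0 */

    char buf[64];
    unsigned prio;
    mq_receive(q, buf, sizeof(buf), &prio);
    printf("received: %s (priority %u)\n", buf, prio);

    mq_close(q);
    return 0;
}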
Windows CE
Windows CE Architecture
• Win32 API
  – manages application access to the operating system
Windows CE Architecture
• OEM Adaptation Layer (OAL)
  – provides an interface to the hardware
    (OEM = Original Equipment Manufacturer)
Windows CE memory space
• Windows CE provides support for virtual memory with a flat 32-bit virtual address space.
• The memory space is divided into kernel space and user space.
Windows CE memory space
• User space is divided into
– System elements and
– User elements
Windows CE threads and drivers
• Windows CE supports two kernel-level units of execution:
• Threads
  – Threads are defined by executable files.
  – A process can run multiple threads.
  – All the threads of a process share the same execution environment.
  – Threads in different processes run in different execution environments.
  – Threads are scheduled directly by the OS.
• Drivers
  – Drivers are defined by dynamically linked libraries (DLLs).
  – A driver may be loaded into the OS or into a process.
  – Drivers can create threads to handle interrupts.
Windows CE Scheduling
• Each thread is assigned an integer priority.
• Lower numerical values denote higher priority: 0 is the highest priority and 255 is the lowest.
• Tasks may be scheduled using either of two policies:
  – A thread runs until the end of its quantum, or
  – A thread runs until a higher-priority thread is ready to run.
A brief thread-creation sketch appears below.
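A brief sketch using the Win32 thread calls that Windows CE exposes (CreateThread and SetThreadPriority); the thread body and the priority choice are illustrative, and CE additionally provides its own calls for the full 0-255 priority range:

#include <windows.h>

/* Illustrative thread body: does its work and returns. */
static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    /* ... periodic or event-driven work goes here ... */
    return 0;
}

int main(void)
{
    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, &tid);
    if (h == NULL)
        return 1;

    /* Raise the thread's priority within the standard Win32 levels. */
    SetThreadPriority(h, THREAD_PRIORITY_ABOVE_NORMAL);

    WaitForSingleObject(h, INFINITE);   /* wait for the worker to finish */
    CloseHandle(h);
    return 0;
}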
Windows CE Interrupts
• The Interrupt Service Handler (ISH) is a kernel service that provides the first response to an interrupt.
• The ISH selects an Interrupt Service Routine (ISR) to handle the interrupt.
• The ISH runs in the kernel with interrupts turned off.
• The ISR in turn calls an Interrupt Service Thread (IST), which performs most of the work required to handle the interrupt.
References
1. Marilyn Wolf, "Computers as Components: Principles of Embedded Computing System Design", 3rd Edition, Morgan Kaufmann (an imprint of Elsevier), 2012.
2. Wayne Wolf, "Computers as Components: Principles of Embedded Computing System Design", 2nd Edition, Morgan Kaufmann, 2008.