Real-Time Software Design For Embedded Systems - Hassan Gomaa
REAL-TIME SOFTWARE DESIGN FOR EMBEDDED SYSTEMS
This textbook takes the reader from use cases to complete software architectures for real-
time embedded systems using SysML, UML, and MARTE and shows how to apply the
COMET/RTE design method to real-world problems. The author covers key topics such as
use cases for real-time systems, state machines for real-time control, architectural patterns
for distributed and hierarchical real-time control and for real-time component-based
software architectures, performance analysis of real-time designs using real-time
scheduling, and timing analysis on single- and multiple-processor systems.
Five complete case studies illustrate design issues: a light rail control system, a
railroad crossing control system, a microwave oven control system, a pump control
system, and an automated highway toll system.
Hassan Gomaa is Professor and former chair of the Department of Computer Science at
George Mason University. Gomaa has more than thirty years of experience in software
engineering, in both industry and academia. He has taught short in-depth industrial
courses on real-time software design in North America, Europe, Japan, and South Korea.
He has published more than 200 technical papers and is the author of four other textbooks
on software design, including Software Modeling and Design and Designing Software
Product Lines with UML.
REAL-TIME SOFTWARE DESIGN
FOR EMBEDDED SYSTEMS
Hassan Gomaa
George Mason University
To Gill, William and Neela, Alex and Nicole,
Part I Overview
1 Introduction
1.1 The Challenge
1.2 Real-Time Embedded Systems and Applications
1.3 Characteristics of Real-Time Embedded Systems
1.4 Distributed Real-Time Embedded Systems
1.5 Cyber-Physical Systems
1.6 Requirements for Real-Time Software Design Method for Embedded
Systems
1.7 COMET/RTE: A Real-Time Software Design Method for Embedded
Systems
1.8 Visual Modeling Languages: UML, SysML, and MARTE
1.9 Summary
2 Overview of UML, SysML, and MARTE
2.1 Model-Driven Architecture with SysML and UML
2.2 Use Case Diagrams
2.3 Classes and Objects
2.4 Class Diagrams
2.5 Interaction Diagrams
2.6 State Machine Diagrams
2.7 Package Diagrams
2.8 Concurrent Sequence and Communication Diagrams
2.9 Deployment Diagrams
2.10 Composite Structure Diagrams
2.11 UML Extension Mechanisms and Profiles
2.12 SysML
2.13 MARTE Profile
2.14 Timing Diagrams
2.15 Tool Support for UML, SysML, and MARTE
2.16 Summary
3 Real-Time Software Design and Architecture Concepts
3.1 Object-Oriented Concepts
3.2 Information Hiding
3.3 Inheritance
3.4 Active and Passive Objects
3.5 Concurrent Processing
3.6 Cooperation between Concurrent Tasks
3.7 Information Hiding Applied to Access Synchronization
3.8 Runtime Support for Real-Time Concurrent Processing
3.9 Task Scheduling
3.10 Software Architecture and Components
3.11 Summary
The book is aimed at both the professional market and the academic market,
particularly at the graduate level. It assumes a basic background in UML and object-
oriented principles, although a brief overview is given of each.
What This Book Provides
There are various textbooks on the market describing general object-oriented analysis and
design concepts and methods. However, real-time and embedded systems have special
needs, which are only treated superficially in these books. Other books describe real-time
systems in general or provide a survey-based approach. The focus of this book is on real-
time software design for embedded systems. Because real-time systems are usually
embedded, the method described in the book takes a systems-engineering perspective
addressing system-wide issues involving both hardware and software.
Some readers may wish to skip certain chapters, depending on their level
of familiarity with the topics discussed. Chapters 1 through 3 are introductory and may be
skipped by experienced readers. Readers familiar with software design concepts may skip
Chapter 3. Readers particularly interested in real-time software design can proceed
directly to the description of COMET/RTE, starting in Chapter 4. Readers who are not
familiar with UML, SysML, or MARTE can read Chapter 2 in conjunction with Chapters
4 through 18.
Experienced software designers may also use this book as a reference, referring to
various chapters as their projects reach a particular stage of the requirements, analysis, or
design process. Each chapter is relatively self-contained. For example, at different times
one might refer to Chapter 5 for a discussion of structural modeling using SysML and
UML, Chapter 6 for a description of use cases, and Chapter 7 for a description of state
machines. Chapter 10 can be referenced for an overview of real-time software
architectures; Chapter 11 and Appendix B for software architectural patterns; Chapter 12
for component-based software architectures; and Chapter 13 for concurrent real-time task
design with MARTE. Chapter 15 can be consulted for software product line design;
Chapter 16 for system and software quality attributes; and Chapters 17 and 18 for
performance analysis of real-time software designs. One can also improve one’s
understanding of how to use the COMET/RTE method by reading the case studies in
Chapters 19–23, because each case study explains the decisions made at each step of
requirements, analysis, and design.
Hassan Gomaa
November 2015
Email: [email protected]
www: http://mason.gmu.edu/~hgomaa
Annotated Table of Contents
Part I: Overview
Chapter 1. Introduction
This chapter provides an overview of real-time embedded systems and applications and
then describes the major characteristics of real-time embedded systems, both centralized
and distributed. This chapter also provides an overview of the emerging field of cyber-
physical systems, for which real-time software is a critical component. This chapter then
introduces COMET/RTE, the design method for real-time embedded systems described
and applied in the book.
Chapter 2. Overview of UML, SysML, and MARTE
This chapter describes the main features of the UML, SysML, and MARTE visual
modeling languages and notations that are particularly suited for real-time design using
the COMET/RTE method. The purpose of this chapter is not to be a full exposition of
UML, SysML, and MARTE, because several detailed books exist on these topics, but
rather to provide a brief overview of each, in particular those parts that are used by
COMET/RTE.
Chapter 3. Real-Time Software Design and Architecture Concepts
This chapter describes key concepts in the software design of concurrent object-oriented
real-time embedded systems as well as important concepts for developing the architecture
of these systems. The concurrent processing concept is introduced and the issues of
communication and synchronization between concurrent tasks are described. Some
general design concepts are also discussed from the perspective of their applicability to
real-time design, including object-oriented design concepts of information hiding and
inheritance, software architecture, and software components. This chapter also briefly
discusses technology issues related to real-time software design, including real-time
operating systems and task scheduling.
Part II: Real-Time Software Design Method
Chapter 4. Overview of Real-Time Software Design Method for Embedded
Systems
This chapter provides an overview of the software design method for real-time embedded
systems called COMET/RTE (Concurrent Object Modeling and Architectural Design
Method for Real-Time Embedded systems), which uses the SysML, UML, and MARTE
visual modeling languages and notations. This chapter also describes the iterative system
and software life cycle of COMET/RTE and how it compares to other life cycles. It then
describes the main steps in using COMET/RTE.
Chapter 5. Structural Modeling for Real-Time Embedded Systems with SysML
and UML
This chapter describes how structural modeling can be used as an integrated approach for
system and software modeling of embedded systems consisting of both hardware and
software components, using SysML and UML. This chapter describes structural modeling
of the problem domain, structural modeling of the hardware/software system context,
hardware/software boundary modeling, structural modeling of the software system
context, defining hardware/software interfaces, and system deployment modeling.
Chapter 6. Use Case Modeling for Real-Time Embedded Systems
This chapter describes how use case modeling can be applied to real-time embedded
systems from both systems engineering and software engineering perspectives. After an
overview of the basic principles of use cases, it provides a more in-depth focus on
capturing the functional and nonfunctional requirements for real-time and embedded
systems. It also explains the difference between system and software use cases and actors.
Chapter 7. State Machines for Real-Time Embedded Systems
This chapter describes state machine modeling concepts, which are particularly important
for reactive real-time systems. This chapter covers events, states, conditions, actions and
activities, entry and exit actions, composite states, and hierarchical state machines with
sequential and orthogonal substates. The issues of developing cooperating state machines,
inheritance in state machines, and deriving state machines from use cases are also
addressed.
Chapter 8. Object and Class Structuring for Real-Time Embedded Software
This chapter describes the identification and categorization of software classes and
objects, in particular the role the class plays in the real-time software, including boundary,
control, and entity classes. It also describes the corresponding behavior pattern for each
category of object.
Chapter 9. Dynamic Interaction Modeling for Real-Time Embedded Software
This chapter describes dynamic interaction modeling concepts. Interaction diagrams are
developed for each use case, including the main scenario and alternative scenarios.
For state-dependent real-time embedded systems, it covers dynamic interaction
modeling of state-dependent object interactions. This chapter describes how
state machines and interaction diagrams relate to each other and how to make them
consistent with each other.
Chapter 10. Software Architectures for Real-Time Embedded Systems
This chapter introduces software architectural concepts for distributed real-time embedded
systems. Issues in software architectural design are described. The benefits of
developing multiple views of a software architecture are explained. This chapter also
provides an introduction to software components and component-based software
architectures. The transition from requirements and analysis to architectural design is
carefully explained. Separation of concerns in subsystem design and subsystem structuring
criteria are also described. This is followed by designing subsystem message
communication interfaces.
Chapter 11. Software Architectural Patterns for Real-Time Embedded Systems
The role of architectural design patterns in developing the real-time software architecture
is described. An overview of software architectural patterns is presented, including
architectural structure and communication patterns. Architectural patterns for real-time
systems are described, including layered patterns, real-time control patterns, client/service
patterns, brokering patterns, and event-based subscription/notification patterns.
Chapter 12. Component-Based Software Architectures for Real-Time
Embedded Systems
This chapter describes how a distributed real-time architecture is designed as a
component-based software architecture, which can be deployed to multiple nodes in a
distributed environment. Component design issues are described, including composite and
simple components, component interface design with provided and required interfaces,
ports, and connectors. The design of service components and distributed software
connectors are also described. Component configuration and deployment issues are
explained.
Chapter 13. Concurrent Real-Time Software Task Design
This chapter describes the design of concurrent tasks using the MARTE real-time
modeling notation. Concurrent task structuring is described, including event-driven tasks,
periodic tasks, and demand-driven tasks. Task clustering of objects is also described.
Design of task interfaces is described, including synchronous and asynchronous message
communication, event synchronization, and communication through passive objects. The
implications of different types of message communication on the concurrent behavior of
the software architecture are described.
Chapter 14. Detailed Real-Time Software Design
This chapter describes the detailed design of concurrent tasks. The design of composite
tasks with nested passive classes is described. Task synchronization of access to passive
classes is described using mutual exclusion, multiple readers and writers, and monitors.
The design of connectors for inter-task communication is explained. The implementation
of concurrent tasks as Java threads is briefly described.
Chapter 15. Designing Real-Time Software Product Line Architectures
This chapter describes the characteristics of real-time software product lines. The
important concepts of feature modeling, and modeling commonality and variability, are
explained. How to model variability in use cases, static and dynamic models, and software
architectures is explained. The chapter goes on to describe how to model common and
variable components in software product line architectures. The engineering of software
applications from product line artifacts is explained.
Part III: Analysis of Real-Time Software Designs
Chapter 16. System and Software Quality Attributes for Real-Time Embedded
Systems
This chapter describes system and software quality attributes and how they are used to
evaluate the quality of the real-time embedded system and software architecture. System
quality attributes include scalability, performance, availability, safety, and security.
Software quality attributes include maintainability, modifiability, testability, traceability,
and reusability. This chapter also discusses how the COMET/RTE real-time design
method supports the system and software quality attributes.
Chapter 17. Performance Analysis of Real-Time Software Designs
This chapter presents methods for analyzing the performance of real-time embedded
software designs. It describes two approaches for analyzing the performance of a design,
real-time scheduling theory and event sequence analysis, which are then combined to
analyze a concurrent multitasking design. Advanced real-time scheduling algorithms,
including deadline monotonic scheduling, dynamic priority scheduling, and
multiprocessor scheduling, are described. Practical approaches for analyzing the
performance of multiprocessor systems including multicore systems are also described.
Estimation and measurement of performance parameters are discussed.
Chapter 18. Applying Performance Analysis to Real-Time Software Designs
This chapter applies the real-time performance analysis concepts and theory described in
Chapter 17 to the real-time design of a Light Rail Control System. Real-time scheduling
theory and event sequence analysis are both applied to analyze the performance of the
concurrent multitasking design. The performance of the design executing on single-
processor and multiprocessor systems is also analyzed and compared.
Part IV: Real-Time Software Design Case Studies for Embedded Systems
Chapter 19. Microwave Oven Control System Case Study
This chapter describes how the COMET/RTE design method is applied to the design of the
embedded real-time software for a consumer product – a microwave oven control system.
Chapter 20. Railroad Crossing Control System Case Study
This chapter describes how the COMET/RTE design method is applied to the design of the
embedded real-time software for a safety critical railroad crossing control system.
Chapter 21. Light Rail Control System Case Study
This chapter describes how the COMET/RTE design method is applied to the design of an
embedded light rail control system, in which the automated control of driverless trains
must be done safely and in a timely manner.
Chapter 22. Pump Control System Case Study
This chapter describes a concise case study of how the COMET/RTE design method is
applied to the design of the embedded real-time software for a pump control system.
Chapter 23. Highway Toll Control System Case Study
This chapter describes a concise case study of how the COMET/RTE design method is
applied to the design of the distributed embedded real-time software for a highway toll
control system.
Appendix A. Conventions Used in This Textbook
The conventions for naming requirements, analysis, and design artifacts are described.
The conventions used for message sequence numbering on interaction diagrams are
described.
Appendix B. Catalog of Software Architectural Patterns
Each architectural structure and communication pattern is described using a standard
design pattern template.
Appendix C. Pseudocode Templates for Concurrent Tasks
The pseudocode for several different kinds of concurrent tasks is provided.
Appendix D. Teaching Considerations
An outline is given for teaching academic (both graduate and senior undergraduate)
courses and industrial courses.
Glossary
Bibliography
Index
Acknowledgments
I gratefully acknowledge the reviewers of earlier drafts of the manuscript for their
constructive comments. Hakan Aydin very carefully reviewed Chapter 17 on performance
analysis and made several valuable and insightful comments. Kevin Mills and Rob Pettit
provided very thorough and constructive reviews of several chapters. The anonymous
reviewers provided many helpful comments. I am very grateful to the students in my
software modeling and design, real-time software analysis and design, and reusable
software architecture courses at George Mason University for their enthusiasm,
dedication, and valuable feedback. Many thanks are due to Aparna Keshavamurthy, Ehsan
Kouroshfar, Carolyn Koerner, Nan Li, and Upsorn Praphamontripong, for their hard work
and careful attention producing the figures. I am also very grateful to the Cambridge
University Press editorial and production staff, including Lauren Cowles, and the
production staff at Aptara.
I gratefully acknowledge the Software Engineering Institute (SEI) for the material
provided on real-time scheduling, on which parts of Chapter 17 are based. I also gratefully
acknowledge the permission given to me by Pearson Education, Inc., to use material from
my earlier textbooks, Designing Concurrent, Distributed, and Real-Time Applications with
UML, © 2000 Hassan Gomaa, Reproduced by permission of Pearson Education, Inc., and
Designing Software Product Lines with UML, © 2005 Hassan Gomaa, Reproduced by
permission of Pearson Education, Inc.
Last, but not least, I would like to thank my wife, Gill, for her encouragement,
understanding, and support.
Part I
◈
Overview
1
Introduction
◈
This book describes how to design the real-time software for embedded systems. This
chapter provides an overview of real-time embedded systems and applications and then
describes the major characteristics of real-time embedded systems, both centralized and
distributed. This chapter also provides an overview of the emerging field of cyber-physical
systems, for which real-time software is a critical component. This chapter then introduces
COMET/RTE, the real-time software design method for embedded systems described and
applied in this book, which uses the Unified Modeling Language (UML), Systems
Modeling Language (SysML), and MARTE (Modeling and Analysis of Real-Time
Embedded Systems) visual modeling languages and notations.
1.1 The Challenge
In the twenty-first century, a growing number of commercial, industrial, military, medical,
and consumer products are real-time embedded software intensive systems, which are
either software controlled or have a crucial software component to them. These systems
range from microwave ovens to Blu-ray™ video recorders, from driverless trains to
driverless automobiles to aircraft that “fly by wire,” from submarines that explore the
depths of the oceans to spacecraft that explore the far reaches of space, from process
control systems to factory monitoring and control systems, from robot controllers to
elevator controllers, from city traffic control to air traffic control, from “smart” sensors to
“smart” phones, from “smart” networks to “smart” grids, and an ever-growing volume of
mobile and pervasive systems – the list is continually growing. These systems are
concurrent, real-time, and embedded. Many of them are also distributed. Real-time
software is a critical component of these systems.
1.2 Real-Time Embedded Systems and
Applications
A real-time embedded system is a real-time computer system (hardware and software) that
is part of a larger system (called a real-time system or cyber-physical system) that typically
has mechanical and/or electrical parts, such as an airplane or automobile. A real-time
embedded system interfaces to the external environment through sensors and actuators, as
depicted in Figure 1.1. An example of a real-time embedded system is a robot controller
that is a component of a robot system consisting of one or more mechanical arms,
servomechanisms controlling axis motion, multiple sensors to provide inputs to the system
from external devices, and multiple actuators to control external devices.
Real-time systems are often complex because they have to deal with multiple
independent sequences of input events and produce multiple outputs. Frequently, the order
of incoming events is not predictable. In spite of input events having arrival rates and
sequences that might vary significantly and unpredictably with time, the real-time system
must be capable of responding to these events in a predictable manner within timing
constraints specified in the system requirements.
Real-time systems are frequently classified as hard real-time systems or soft real-time
systems. A hard real-time system, such as a driverless car or train, has time-critical
deadlines, such as an emergency stop in front of an obstacle, which must always be met in
order to prevent a disastrous system failure. A hard real-time system in which a system
failure could be catastrophic is also called a safety-critical system (Kopetz 2011). A soft
real-time system, such as an interactive Web-based system, is a real-time system in which
missing timing deadlines occasionally, such as response time to a user input, is considered
undesirable but not catastrophic.
c) Measuring time. A real-time system models the passage of time from the past
through the present and into the future. An event occurs at an instant of time
(conceptually lasting zero time). A duration is an interval of time between two
events, a starting event and a terminating event. A period is a measurement of
recurring intervals of the same duration.
There are different units of time in a real-time system. Execution time is the CPU
time taken to execute a given task on a CPU (or CPUs in a multiprocessor system).
Elapsed time is the time to execute a task from start to finish, which consists of the
task execution time in addition to blocked time, which is waiting time when the task
is not using the CPU, including waiting for I/O operations to complete, waiting for
messages or responses to arrive, waiting to be assigned the CPU, and waiting for
entry into critical sections. Physical time (or real-world time) is the total time for a
real-time command to be completed, for example, to stop a train, which includes the
elapsed times of the software tasks involved and then the much longer time required
to stop the train physically by applying the brakes and gradually slowing down to a
halt.
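To make these units of time concrete, the following minimal Java sketch (all names are hypothetical) measures the execution (CPU) time and the elapsed time of a task that both computes and blocks; the difference between the two is the blocked time.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class TimeDemo {
    public static void main(String[] args) throws InterruptedException {
        // CPU-time measurement support varies by JVM and platform.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long startElapsed = System.nanoTime();         // wall-clock start
        long startCpu = mx.getCurrentThreadCpuTime();  // CPU-time start

        busyWork();        // contributes to execution (CPU) time
        Thread.sleep(50);  // blocked time: the task is not using the CPU

        long cpuMs = (mx.getCurrentThreadCpuTime() - startCpu) / 1_000_000;
        long elapsedMs = (System.nanoTime() - startElapsed) / 1_000_000;
        // Elapsed time = execution time + blocked time
        System.out.printf("execution ~%d ms, elapsed ~%d ms%n", cpuMs, elapsedMs);
    }

    private static void busyWork() {
        double x = 0;
        for (int i = 0; i < 5_000_000; i++) { x += Math.sqrt(i); }
    }
}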
f) Reactive systems. Many real-time systems are reactive systems (Harel and Politi
1998). They are event driven and must respond to external stimuli. It is usually the
case in reactive systems that the response made by the system to an input stimulus is
state dependent; that is, the response depends not only on the stimulus itself but also
on what has previously happened in the system, which is captured as the current state
of the system.
Incremental system expansion. If the system gets overloaded, the system can be
expanded by adding more nodes.
Load balancing. In some systems, the overall system load can be shared among
several nodes and can be dynamically adjusted with varying loads.
The design of cyber-physical systems considers the design and integration of both the
embedded cyber system and the physical processes. Furthermore, the real-time software
design of cyber systems, which monitor and control the physical processes, is critical in
the design of cyber-physical systems.
State machines – to react to external events as determined by both the input and
the current state of the system.
These requirements are all addressed by the COMET/RTE real-time software design
method for embedded systems described in this book. How these requirements are
addressed by COMET/RTE is described in Chapter 4. An overview of COMET/RTE is
given next.
1.7 COMET/RTE: A Real-Time Software Design
Method for Embedded Systems
This book describes a software modeling and architectural design method called
COMET/RTE (Concurrent Object Modeling and Architectural Design Method for Real-
Time Embedded Systems), which is tailored to the needs of real-time embedded systems.
COMET/RTE is an iterative use case–driven and object-oriented method that addresses
the requirements, analysis, and design modeling phases of the system and software
development life cycle.
Modern object-oriented analysis and design methods are model-based and use a
combination of use case modeling, static modeling, state machine modeling, and object
interaction modeling. Almost all modern object-oriented methods (such as COMET, as
described in Gomaa 2011) use the UML notation for describing software requirements,
analysis, and design models (Booch et al. 2005; Fowler 2004; Rumbaugh et al. 2005).
This book describes how COMET/RTE can be used to design real-time embedded systems
using a blend of the UML, SysML, and MARTE modeling languages and notations.
1.9 Summary
This chapter has described the characteristics of real-time embedded systems and
applications. It has provided overviews of the COMET/RTE design method for real-time
embedded systems and of its use of visual modeling languages and notations. Chapter 2
provides an overview of the UML, SysML, and MARTE modeling languages and
notations, in particular those parts that are used by COMET/RTE. Chapter 3 describes the
fundamental design concepts on which concurrent object-oriented design for real-time
embedded systems is based. It describes object-oriented concepts, the concurrent tasking
concept including task communication and synchronization, as well as operating system
support for concurrent tasks. Chapter 4 provides an overview of the COMET/RTE design
method as well as the system and software life cycle for real-time embedded systems.
Chapters 5 through 18 describe the details of the method, and Chapters 19 through 23
describe case studies of applying COMET/RTE to design real-time embedded systems.
2
Overview of UML, SysML, and MARTE
◈
The notation used for the COMET/RTE method is the Unified Modeling Language
(UML), supplemented with the Systems Modeling Language (SysML) and Modeling and
Analysis of Real-Time Embedded Systems (MARTE). This chapter provides a brief
overview of these three related visual modeling notations.
The Object Management Group (OMG) maintains UML and SysML as standards.
The UML notation has evolved since it was first adopted as a standard in 1997. A major
revision to the standard was the introduction of UML 2.0 in 2003. Since then there have
been further minor changes, and the latest version of the standard is UML 2.4. The
versions of the standard before UML 2 are referred to as UML 1.x, and the current version
is generally referred to as UML 2. SysML is based on UML 2, using some parts of UML 2
and extending it in other areas for systems modeling. MARTE is a more recent UML
profile for real-time embedded systems. Each of these notations is of a significant size,
and it is therefore beneficial for the real-time system modeler to pick and choose carefully
among the multitude of diagrams and stereotypes provided by these notations.
The UML notation has grown substantially over the years and supports many
diagrams. SysML and MARTE extend the modeling notations further. The approach taken
in this book is to use only those parts of the UML and SysML notation that provide a
distinct benefit to the design of real-time embedded systems, and to use the parts of
MARTE that can be most usefully blended with UML and SysML for the design of these
systems. This chapter describes the main features of the UML, SysML, and MARTE
notations that are particularly suited for real-time design using the COMET/RTE method.
The purpose of this chapter is not to be a full exposition of UML, SysML, and MARTE,
because several detailed books exist on these topics, but rather to provide a brief overview
of each. The main features of each of the diagrams used in this book are briefly described,
but lesser-used features are omitted.
2.1 Model-Driven Architecture with SysML and
UML
In the OMG’s view, “modeling is the designing of software applications before coding”
(OMG 2015). The OMG promotes model-driven architecture as the approach in which
UML models of the software architecture are developed prior to implementation.
According to the OMG, UML is methodology-independent; UML is a notation for
describing the results of an object-oriented analysis and design developed via the
methodology of choice.
SysML can be used to model the total hardware/software embedded system to help
design the hardware/software interface, and then UML can be used to model the software
system in more detail. MARTE is a UML profile that is a real-time extension of UML that
supports concepts for real-time embedded systems (Selic and Gerard 2014).
Sequence and communication diagrams can also be used for modeling concurrent
systems, as briefly described in Section 2.8.
How these UML diagrams are used by the COMET/RTE method is described in Chapters
5 through 18 and in the case studies described in Chapters 19 through 23 of this book.
2.2 Use Case Diagrams
A use case defines a sequence of interactions between the actor(s) and the system. An
actor is external to the system and is depicted as a stick figure on a use case diagram. The
system is depicted as a box. A use case is depicted as an ellipse inside the box.
Communication associations connect actors with the use cases in which they participate.
Relationships among use cases are defined by means of include and extend relationships.
The notation is depicted in Figure 2.1.
To distinguish between a class (the type) and an object (an instance of the type), an
object name is shown underlined. An object can be depicted in full with the object name
separated by a colon from the class name – for example, anObject : Class.
Optionally, the colon and class name may be omitted, leaving just the object name – for
example, anObject. Another option is to omit the object name and depict just the class
name after the colon, as in : Class. Classes and objects are depicted on various UML
diagrams, as described in Section 2.4.
2.4 Class Diagrams
In a class diagram, classes are depicted as boxes, and the static (i.e., permanent)
relationships between them are depicted as lines connecting the boxes. The following
three main types of relationships between classes are supported: associations, whole/part
relationships, and generalization/specialization relationships, as shown in Figure 2.3. A
fourth relationship, the dependency relationship, is often used to show how packages are
related, as described in Section 2.7.
The multiplicity of an association specifies how many instances of one class may
relate to a single instance of another class (Figure 2.3b). The multiplicity of an association
can be exactly one (1), optional (0..1), zero or more (*), one or more (1..*), or numerically
specified (m..n), where m and n have numeric values. Associations are described in more
detail with examples in Chapter 5.
2.4.2 Aggregation and Composition Hierarchies
Aggregation and composition hierarchies are whole/part relationships. The composition
relationship (shown by a black diamond) is a stronger form of whole/part relationship than
the aggregation relationship (shown by a hollow diamond). The diamond touches the
aggregate or composite (Class Whole) class box (see Figures 2.3d and 2.3e). More detail
with examples is provided in Chapter 5.
2.4.3 Generalization/Specialization Hierarchy
A generalization/specialization hierarchy is an inheritance relationship. A generalization
is depicted as an arrow joining the subclass (child) to the superclass (parent), with the
arrowhead touching the superclass box (see Figure 2.3c).
2.4.4 Visibility
Visibility refers to whether an element of the class is visible from outside the class, as
depicted in Figure 2.4. Depicting visibility is optional on a class diagram. Public
visibility, denoted with a + symbol, means that the element is visible from outside the
class. Private visibility, denoted with a – symbol, means that the element is visible only
from within the class that defines it and is thus hidden from other classes. Protected
visibility, denoted with a # symbol, means that the element is visible from within the class
that defines it and within all subclasses of the class.
The actor is usually shown at the extreme left of the page. Labeled horizontal arrows
represent messages. Only the source and destination of the arrow are relevant. The
message is sent from the source object to the destination object. Time increases from the
top of the page to the bottom. The spacing between messages is not semantically
significant in UML.
Because the sequence diagram shows the order of messages sent sequentially from
the top to the bottom of the diagram, numbering the messages is not necessary. However,
in Figure 2.5, the messages on the sequence diagram are numbered to show their
correspondence to the communication diagram described in the next section.
Figure 2.7. UML notation for a state machine: composite state with sequential
substates.
On the arc representing the state transition, the notation Event [Condition]/Action is
used. The event causes the state transition. The optional Boolean condition must be true,
when the event occurs, for the transition to take place. The optional action is performed as
a result of the transition. Optionally, a state may have entry actions, performed on
entry to the state; exit actions, performed on exit from the state; and activities,
performed for the duration of the state.
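As an illustration of the Event [Condition] / Action semantics, the following minimal Java sketch (the state, event, and action names are hypothetical) fires a transition only when the event occurs while its guard condition is true, performing the action as part of the transition.

public class TransitionDemo {
    enum State { IDLE, RUNNING }
    private State state = State.IDLE;
    private boolean enabled = true;  // the optional Boolean guard condition

    public void startEvent() {       // the event that may cause the transition
        if (state == State.IDLE && enabled) {  // Event [Condition]
            startMotor();                      // / Action
            state = State.RUNNING;             // the state transition itself
        }
        // If the condition is false when the event occurs, no transition takes place.
    }

    private void startMotor() { System.out.println("motor started"); }

    public static void main(String[] args) {
        new TransitionDemo().startEvent();
    }
}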
Figure 2.7 depicts a composite state A decomposed into sequential substates A1 and A2.
In this case, the state machine is in only one substate at a time; that is, first substate A1 is
entered and then substate A2. Figure 2.8 depicts a composite state B decomposed into
orthogonal regions BC and BD. In this case the state machine is in each of the orthogonal
regions, BC and BD, at the same time. Each orthogonal region is further decomposed into
sequential substates. Thus, when the composite state B is initially entered, each of the
substates B1 and B3 is also entered.
Figure 2.8. UML notation for a state machine: composite state with orthogonal regions.
2.7 Package Diagrams
In UML, a package is a grouping of model elements – for example, to represent a system
or subsystem. A package diagram is a structural diagram used to model packages and their
relationships, as shown in Figure 2.9. A package is depicted by a folder icon, a large
rectangle with a small rectangle attached on one corner. Packages may also be nested
within other packages. Possible relationships between packages are dependency (shown in
Figure 2.9) and generalization/specialization relationships. Packages may be used to
contain classes, objects, or use cases.
Figures 2.12 and 2.13 respectively depict concurrent versions of interaction diagrams,
namely the concurrent sequence and concurrent communication diagram. Each diagram
depicts active objects and the various kinds of message communication between them. In
both diagrams, objectA, after receiving an input event from an external sensor, sends an
asynchronous message (message #2) to objectB, which in turn sends a synchronous
message (message #3 without reply) to objectC, which in turn sends a synchronous
message (#4) to objectD, which then responds by sending a reply (#5).
Figure 2.12. UML notation for a concurrent sequence diagram.
Figure 2.13. UML notation for a concurrent communication diagram.
2.9 Deployment Diagrams
A deployment diagram shows the physical configuration of the system in terms of
physical nodes and physical connections between the nodes, such as network connections.
A node is shown as a cube, and the connection is shown as a line joining the nodes. A
deployment diagram is essentially a class diagram that focuses on the system’s nodes
(Booch et al. 2005).
In this book, a node usually represents a computer node, with an optional constraint
(see Section 2.10.3) describing how many instances of this node may exist. The physical
connection has a stereotype (see Section 2.10.1) to indicate the type of connection, such as
«local area network» or «wide area network». Figure 2.14 shows two examples of
deployment diagrams: In the first example, nodes are connected via a wide area network
(WAN); in the second, they are connected via a local area network (LAN). In the first
example, the ATM Client node (which has one node for each ATM) is connected to a
Bank Server that has one node. Optionally, the objects that reside at the node may be
depicted in the node cube. In the second example, the network is shown as a node cube.
This form of the notation is used when more than two computer nodes are connected by a
network.
Figure 2.14. UML notation for a deployment diagram.
2.10 Composite Structure Diagrams
Composite structure diagrams are used to depict component-based software architectures
consisting of components and their interfaces. An interface specifies the externally visible
operations of a class, service, or component without revealing the internal structure
(implementation) of the operations. Since the same interface can be implemented in
different ways, an interface can be modeled separately from a component that realizes
(i.e., implements) the interface.
An interface can be depicted with a different name from the class or component that
realizes the interface. To improve clarity across UML diagrams, the name of an interface
starts with the letter I. There are two ways to depict an interface: simple and expanded. In
the simple case, the interface is depicted as a little circle with the interface name next to it.
The class or component that provides the interface is connected to the small circle, as
shown in Figure 2.15a.
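The separation of an interface from the class or component that realizes it can be sketched directly in Java; the names below are hypothetical and follow the "I" naming convention described above.

interface ISensorService {
    double read();  // only the externally visible operation is specified
}

// The class realizes (implements) the interface; its internal structure
// is hidden from clients, which depend only on ISensorService.
class TemperatureSensor implements ISensorService {
    private double lastValue = 20.0;  // hidden implementation detail
    public double read() { return lastValue; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        ISensorService sensor = new TemperatureSensor();  // client sees only the interface
        System.out.println(sensor.read());
    }
}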
These extension mechanisms are also used to create UML profiles. Rumbaugh
defines a UML profile as a “coherent set of extensions applicable to a given domain or
purpose” (Rumbaugh et al. 2005). Two relevant UML profiles for real-time embedded
systems are SysML and MARTE. SysML addresses systems modeling concepts that are
important for embedded systems because models of these systems need to consider how
hardware components and software components interface to each other. MARTE is
relevant because it addresses real-time concepts. SysML is described further in Section
2.12, and MARTE is described further in Section 2.13.
2.11.1 Stereotypes
A stereotype defines a new building block that is derived from an existing UML modeling
element but tailored to the modeler’s problem (Booch et al. 2005). This book makes
extensive use of stereotypes. Several standard stereotypes are defined in UML. In
addition, a modeler may define new stereotypes. This chapter includes several examples
of stereotypes, both standard and COMET-specific. Stereotypes are indicated by
guillemets (« »).
In Figure 2.1, two specific kinds of dependency between use cases are depicted by
the stereotype notation: «include» and «extend». Figure 2.9 shows the stereotypes
«system» and «subsystem» to distinguish between two different kinds of packages. Figure
2.11 uses stereotypes to distinguish among different kinds of messages. In UML, a
modeling element can also be depicted by more than one stereotype. Therefore, different,
possibly orthogonal, characteristics of a modeling element can be depicted with different
stereotypes.
The UML stereotype notation allows a modeler to tailor a UML modeling element to
a specific problem. In UML, stereotypes are enclosed in guillemets usually within the
modeling element (e.g., class or object) as depicted in Figure 2.16a, in which the class
Sensor Input is depicted as a «boundary» class to distinguish it from Elevator
Control, which is depicted as a «control» class. However, UML also allows stereotypes
to be depicted as symbols. One of the most common such representations was introduced
by Jacobson (1992) and is used in the Unified Software Development Process (USDP)
(Jacobson et al. 1999). Stereotypes are used to represent «entity» classes, «boundary»
classes, and «control» classes. Figure 2.16b depicts the Process Plan «entity» class, the
Elevator Control «control» class, and the Sensor Input «boundary» class using the
USDP’s stereotype symbols.
Figure 2.16. UML notation for stereotypes.
2.11.2 Tagged Values
A tagged value extends the properties of a UML building block (Booch et al. 2005),
thereby adding new information. A tagged value is enclosed in braces in the form {tag =
value}. Commas separate additional tagged values. For example, a class may be depicted
with the tagged values {version = 1.0, author = Gill}, as shown in Figure 2.17.
From UML 2, SysML incorporates the following diagrams without change, which are
used in this book:
SysML also introduces diagrams that are modifications of UML 2 diagrams. Of these,
the following diagram is used in this book: the block definition diagram, which is a
modified form of the UML class diagram.
Thus, a block definition diagram is equivalent to a class diagram in which the classes
have been stereotyped as blocks. This allows a block definition diagram to represent and
depict the same modeling relationships as a class diagram, in particular associations,
whole/part (composition or aggregation) relationships, and generalization/specialization
relationships. Thus, composite relationships are used to depict how a real-world embedded
system is composed of blocks. The modeling notation for block definition diagrams is
given in Figure 2.18, which is essentially the same notation as in Figure 2.3, except for the
classes stereotyped as blocks.
MARTE also allows for expressing timing values; for example, a timer resource can
have a period of 100 milliseconds, specified by period = (100, ms). The «timerResource»
stereotype has an attribute isPeriodic, which if true means that the timer is recurring.
A software periodic task can be depicted as both a «timerResource» and a
«swSchedulableResource» as depicted in Figure 2.19. More MARTE stereotypes are
given in Chapter 13, with examples of their use in the concurrent design of real-time
embedded systems.
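As a sketch of the behavior such stereotypes describe, the following Java fragment (a hypothetical example, not MARTE notation) activates a task every 100 milliseconds, corresponding to a «timerResource» with isPeriodic = true and period = (100, ms).

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Activate the periodic task every 100 ms, as the period attribute specifies.
        timer.scheduleAtFixedRate(
            () -> System.out.println("periodic activation at " + System.currentTimeMillis()),
            0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(350);  // let the task run for a few periods
        timer.shutdown();   // then stop the timer
    }
}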
2.14 Timing Diagrams
A timing diagram is a time-annotated sequence diagram, which is a sequence diagram
that depicts a time-ordered execution sequence of a collection of concurrent tasks. Time is
explicitly labeled on the left-hand side of the page, uniformly increasing from the top of
the page to the bottom in equally spaced intervals. The lifelines depict the tasks as active
throughout, with the shaded portions identifying when tasks are executing on a CPU and
for how long. Depending on whether there is one or more CPUs in the configuration, the
timing diagram can explicitly depict parallel execution of tasks on multiple CPUs. If there
is only one CPU, as in the example in Figure 2.20, only one task can execute at any one
time. When combined with MARTE, tasks on timing diagrams are labeled with the
MARTE stereotype «swSchedulableResource». For example, in Figure 2.20, task t1
executes for 20 msec.
Figure 2.20. Notation for timing diagram.
2.15 Tool Support for UML, SysML, and MARTE
Because UML, SysML, and MARTE are standardized visual modeling languages
maintained by OMG, there is a wide range of tools that support these notations. Of the
many UML tools available, some are proprietary and some are open source. In principle,
any tool that supports UML 2 can be used for developing COMET/RTE designs. However,
the tools vary widely in functionality, ease of use, and price. Using stereotypes, the
designer can depict SysML and MARTE concepts, such as designating a UML class as a
SysML «block» or a MARTE «swSchedulableResource». For
developing real-time designs, the most effective tools are those that provide an execution
and/or simulation framework that allows the designer to dynamically execute the model.
This enables the designer to validate the design by iteratively detecting and correcting
design flaws, and hence have greater confidence in the design before it is implemented
and deployed.
2.16 Summary
This chapter has briefly described the main features of the UML, SysML, and MARTE
notations and the main characteristics of the diagrams using these notations in this book.
Appendix A describes the naming conventions used in this book for classes and objects.
For further reading on UML, Fowler (2004) and Ambler (2005) provide introductory
material. More detailed information can be found in Booch et al. (2005) and Eriksson et al.
(2004). A comprehensive and detailed reference to UML is Rumbaugh et al. (2005). For
further reading on SysML, Friedenthal et al. (2015) is a very informative book. For further
reading on MARTE, Selic and Gerard (2014) is an outstanding and very clear explanation
of MARTE.
3
Real-Time Software Design and
Architecture Concepts
◈
This chapter describes key concepts in the software design of concurrent object-oriented
real-time embedded systems as well as important concepts for developing the architecture
of these systems. First, object-oriented concepts are introduced, with the description of
objects and classes, as well as a discussion of the role of information hiding in object-
oriented design and an introduction to the concept of inheritance. Next, the concurrent
processing concept is introduced and the issues of communication and synchronization
between concurrent tasks are described. These design concepts are building blocks in
designing the software architecture of a real-time embedded system: the overall structure
of the system, its decomposition into components, and the interfaces between these
components.
From a design perspective, an object packages both data and procedures that operate
on the data. The procedures are usually called operations or methods. Some approaches,
including the UML notation, refer to the operation as the specification of a function
performed by an object and the method as the implementation of the function (Rumbaugh,
Booch, and Jacobson 2005). In this book, we will use the term operation to refer to both
the specification and the implementation, in common with Gamma et al. (1995), Meyer
(2000), and others.
A class is an object type; for example, the class train represents all trains of a given
type. An object is an instance of a class. Individual objects, which are instances of the
class, are instantiated as required at execution time, for example, a specific temperature
sensor or a specific train.
Figure 3.1 depicts a class called Sensor Data and two objects, temperature
Sensor Data: Sensor Data and pressure Sensor Data: Sensor Data, which are
instances of the class Sensor Data. The objects humidity Sensor Data: Sensor
Data and : Sensor Data are also instances of the class Sensor Data.
Figure 3.1. Example of classes and objects.
An attribute is a data value held by an object in a class. Each object has a specific
value of an attribute. Figure 3.2 shows a class with attributes. The class Sensor Data has
five attributes, namely sensor Name, sensor Value, upper Limit, lower Limit,
and alarm Status. Two objects of the Sensor Data class are shown, namely
temperature Sensor Data1 and temperature Sensor Data2. Each object has
specific values of the attributes. For example, the sensor Value of the first object is
12.57, while the sensor Value of the second object is 24.83. The alarm Status of the
former object is Normal, while the alarm Status of the latter is High.
Figure 3.2. Example of class with attributes.
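The class and its instances can be sketched in Java as follows; the attribute values match the example above, while the limit values are assumed for illustration.

public class SensorData {
    String sensorName;
    double sensorValue;
    double upperLimit;
    double lowerLimit;
    String alarmStatus;

    SensorData(String name, double value, double upper, double lower, String alarm) {
        sensorName = name; sensorValue = value;
        upperLimit = upper; lowerLimit = lower; alarmStatus = alarm;
    }

    public static void main(String[] args) {
        // Two objects of the class, each with its own attribute values.
        SensorData temperatureSensorData1 =
            new SensorData("temperatureSensor1", 12.57, 100.0, 0.0, "Normal");
        SensorData temperatureSensorData2 =
            new SensorData("temperatureSensor2", 24.83, 20.0, 0.0, "High");
        System.out.println(temperatureSensorData1.alarmStatus + " "
            + temperatureSensorData2.alarmStatus);
    }
}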
Two examples of applying information hiding in software design are given next. The
first example is information hiding applied to the design of internal data structures, and the
second is information hiding applied to the design of interfaces to I/O devices.
3.2.2 Information Hiding Applied to Internal Data Structures
A potential problem in application software development is that an important data
structure, one that is accessed by several objects, might need to be changed. Without
information hiding, any change to the data structure is likely to require changes to all the
objects that access the data structure. Information hiding can be used to hide the design
decision concerning the data structure, its internal linkage, and the details of the operations
that manipulate it. The information hiding solution is to encapsulate the data structure in
an object. The data structure is only accessed directly by the operations provided by the
object.
Other objects may only indirectly access the encapsulated data structure by calling
the operations of the object. Thus if the data structure changes, the only object impacted is
the one containing the data structure. The external interface supported by the object does
not change; hence, the objects that indirectly access the data structure are not impacted by
the change. This form of information hiding is called data abstraction.
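A minimal Java sketch of data abstraction follows; the repository name and its internal map are hypothetical choices. Because other objects call only the readSensor and updateSensor operations, the internal data structure could later be replaced (for example, by an array) without impacting them.

import java.util.HashMap;
import java.util.Map;

public class SensorDataRepository {
    // The hidden data structure: only the operations below may access it.
    private final Map<String, Double> sensorValues = new HashMap<>();

    public void updateSensor(String name, double value) {
        sensorValues.put(name, value);
    }

    public double readSensor(String name) {
        return sensorValues.getOrDefault(name, 0.0);
    }

    public static void main(String[] args) {
        SensorDataRepository repository = new SensorDataRepository();
        repository.updateSensor("temperatureSensor", 12.57);  // indirect access only
        System.out.println(repository.readSensor("temperatureSensor"));
    }
}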
Details of how to position the data on the screen, special control characters to be
used, and other device-specific information are hidden from the users of the object. If we
replace this device with a different device having the same general functionality, the
internals of the operations need to change, but the virtual interface remains unchanged.
Thus, users of the object are not impacted by the change to the device.
3.3 Inheritance
Inheritance is a useful abstraction mechanism in analysis and design. Inheritance naturally
models objects that are similar in some but not all respects, thus having some common
properties but other unique properties that distinguish them. Inheritance is a classification
mechanism that has been widely used in other fields. An example is the taxonomy of the
animal kingdom, in which animals are classified as mammals, fish, reptiles, and so on.
Cats and dogs have common properties that are generalized into the properties of
mammals. However, they also have unique properties: a dog barks and a cat mews.
Inheritance is a mechanism for sharing and reusing code between classes. A child
class inherits the properties (encapsulated data and operations) of a parent class. It can
then adapt the structure (i.e., encapsulated data) and behavior (i.e., operations) of its parent
class. The parent class is referred to as a superclass or base class. The child class is
referred to as a subclass or derived class. The adaptation of a parent class to form a child
class is referred to as specialization. Child classes may be further specialized, allowing the
creation of class hierarchies, also referred to as generalization/specialization hierarchies.
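The taxonomy example above can be sketched in Java as follows; the speak operation is a hypothetical stand-in for the unique behavior of each subclass.

class Mammal {                                   // superclass (parent)
    protected String name;                       // common property, inherited
    Mammal(String name) { this.name = name; }
}

class Dog extends Mammal {                       // subclass (child)
    Dog(String name) { super(name); }
    String speak() { return name + " barks"; }   // unique property
}

class Cat extends Mammal {
    Cat(String name) { super(name); }
    String speak() { return name + " mews"; }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        System.out.println(new Dog("Rex").speak());
        System.out.println(new Cat("Tom").speak());
    }
}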
Active objects are also referred to as concurrent objects, concurrent tasks, or threads.
A concurrent object (active object) has its own thread of control and can initiate actions
that affect other objects. A passive object, which is an instance of a passive class, has no
thread of control. Passive objects have operations that are invoked by concurrent objects.
Passive objects can invoke operations in other passive objects. An operation of a passive
object, once invoked by a concurrent object, executes within the thread of control of the
concurrent object. In a concurrent application, there are typically several concurrent
objects, each with its own thread of control.
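The distinction can be sketched in Java (names hypothetical): the active object executes on its own thread, whereas the passive object's operation executes within whichever thread invokes it.

public class ActivePassiveDemo {
    // Passive object: no thread of its own, only operations.
    static class Counter {
        private int count = 0;
        void increment() { count++; }  // runs in the calling task's thread
        int value() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();  // passive object

        // Active object: its own thread of control, invoking the passive object.
        Thread activeObject = new Thread(() -> {
            for (int i = 0; i < 1000; i++) counter.increment();
        });
        activeObject.start();
        activeObject.join();  // wait for the active object to finish
        System.out.println(counter.value());
    }
}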
3.5 Concurrent Processing
A concurrent task represents the execution of a sequential program or a sequential
component in a concurrent program. Each task deals with one sequential thread of
execution; thus, no concurrency is allowed within a task. However, overall system
concurrency is obtained by having multiple tasks executing in parallel. The tasks often
execute asynchronously (i.e., at different speeds) and are relatively independent of each
other for significant periods of time. From time to time, the tasks need to communicate
and synchronize their operations with each other. The UML notation for concurrent tasks
is depicted in Section 2.8.
Structuring the system into concurrent tasks allows greater scheduling flexibility
because time-critical tasks with hard deadlines can be given a higher priority than
less critical tasks.
Identifying the concurrent tasks early in the design can allow an early performance
analysis to be made of the system. Many tools and techniques (for example, real-
time scheduling) use concurrent tasks as a fundamental component in their
analysis.
3.5.2 Heavyweight and Lightweight Processes
The term process is used in operating systems as a unit of resource allocation for the
processor (CPU) and memory. The traditional operating system process has a single thread
of control and thus no internal concurrency. Some modern operating systems allow a
process, referred to as a heavyweight process, to have multiple threads of control,
thereby allowing internal concurrency within a process. The heavyweight process has its
own allocated memory. Each thread of control, also referred to as a lightweight process,
shares the same memory with the heavyweight process. Thus the multiple threads of a
heavyweight process can access shared data in the process’s memory, although this access
must be synchronized.
The terms “heavyweight” and “lightweight” refer to the context switching overhead.
When the operating system switches from one heavyweight process to another, the context
switching overhead is relatively high, requiring CPU and memory allocation. With the
lightweight process, context switching overhead is low, involving only CPU allocation.
Bacon uses the term process to refer to a dynamic entity that executes on a processor
and has its own thread of control, whether it is a single threaded heavyweight process or a
thread within a heavyweight process (Bacon 2003). This book uses instead the term task
to refer to such a dynamic entity. The task corresponds to a thread within a heavyweight
process (i.e., one that executes within a process) or to a single threaded heavyweight
process. Many of the issues concerning task interaction apply whether the threads are in
the same heavyweight process or in different heavyweight processes. Task scheduling and
context switching are described in more detail in Section 3.9.
3.6 Cooperation between Concurrent Tasks
In the design of concurrent systems, several problems need to be considered that do not
arise when designing sequential systems. In most concurrent applications, it is necessary
for concurrent tasks to cooperate with each other in order to perform the services required
by the application. The following three problems commonly arise when tasks cooperate
with each other:
1. The mutual exclusion problem. This occurs when tasks need to have exclusive
access to a resource, such as shared data or a physical device. A variation on this
problem, in which the mutual exclusion constraint can be relaxed in certain
situations, is the multiple readers and writers problem, as described in Chapters 12
and 14.
If two or more tasks are allowed to write to a printer simultaneously, output from
the tasks will be randomly interleaved and a garbled report will be produced.
The classical solution to the mutual exclusion problem was first proposed by Dijkstra
(1968), using binary semaphores. A binary semaphore is a Boolean variable that is
accessed only by means of two atomic (i.e., indivisible) operations, acquire (semaphore)
and release (semaphore). Dijkstra originally called these the P (for acquire) and V (for
release) operations.
acquire (sensorDataRepositorySemaphore)
Access sensor data repository [this is the critical section]
release (sensorDataRepositorySemaphore)
The solution assumes that during initialization, the initial values of the sensors are stored
before any reading takes place.
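The same pattern can be written in Java, whose java.util.concurrent.Semaphore class provides the acquire and release operations directly. The following is a minimal sketch, not code from the book's case studies; the repository layout and method names are illustrative assumptions.

import java.util.concurrent.Semaphore;

public class SensorDataAccess {
    // Binary semaphore guarding the shared repository, initialized to 1 (free).
    private static final Semaphore sensorDataRepositorySemaphore = new Semaphore(1);
    private static final double[] sensorDataRepository = new double[16]; // shared data

    // Invoked by a concurrent task to update one sensor value.
    static void writeSensorValue(int sensorId, double value) throws InterruptedException {
        sensorDataRepositorySemaphore.acquire();     // P operation
        try {
            sensorDataRepository[sensorId] = value;  // critical section
        } finally {
            sensorDataRepositorySemaphore.release(); // V operation
        }
    }

    // Invoked by a concurrent task to read one sensor value.
    static double readSensorValue(int sensorId) throws InterruptedException {
        sensorDataRepositorySemaphore.acquire();
        try {
            return sensorDataRepository[sensorId];   // critical section
        } finally {
            sensorDataRepositorySemaphore.release();
        }
    }
}

Placing the release operation in a finally block guarantees that the semaphore is released even if the critical section throws an exception.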
In a concurrent system, each task has its own thread of control and the tasks execute
asynchronously. The tasks must therefore synchronize their operations when they wish to
exchange data. This is the classical producer/consumer problem: the producer must
produce the data before the consumer can consume it. If the consumer is ready to receive
the data but the producer has not yet produced it, the consumer must wait for the producer.
If the producer has produced the data before the consumer is ready to receive it, then
either the producer must be held up or the data must be buffered for the consumer, thereby
allowing the producer to continue.
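As an illustration of this buffering behavior, the following Java sketch uses a bounded java.util.concurrent.ArrayBlockingQueue as the buffer between a producer task and a consumer task; the item type and buffer size are illustrative assumptions.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerExample {
    public static void main(String[] args) {
        // Bounded buffer: the producer is held up only when all 10 slots are full.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int item = 0; item < 100; item++) {
                    buffer.put(item);         // blocks if the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    int item = buffer.take(); // blocks until an item has been produced
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}

If the consumer runs ahead of the producer, take blocks it until data arrives; if the producer runs ahead, put blocks it once the buffer is full, which are exactly the two cases described above.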
Figure 3.8. Synchronous message communication with reply between concurrent tasks.
3.7 Information Hiding Applied to Access
Synchronization
The solution to the mutual exclusion problem described previously is error-prone. It is
possible for a coding error to be made in one of the tasks accessing the shared data, which
would then lead to serious synchronization errors at execution time. Consider, for
example, the mutual exclusion problem described in Section 3.6.2. If the acquire and
release operations were reversed by mistake, the pseudocode would be
release (sensorDataRepositorySemaphore)
Access sensor data repository [should be critical section]
acquire (sensorDataRepositorySemaphore)
As a result of this error, the task enters the critical section without first acquiring the
semaphore. Hence, it is possible to have two tasks executing in the critical section, thereby
violating the mutual exclusion principle. Alternatively, the following coding error might be
made:
acquire (sensorDataRepositorySemaphore)
Access sensor data repository [should be critical section]
acquire (sensorDataRepositorySemaphore)
In this case, a task enters its critical section for the first time but is then not able to leave
because it is trying to acquire a semaphore it already possesses. Furthermore, it prevents
any other task from entering its critical section, thus provoking a deadlock, where no task
is able to proceed.
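One way to apply information hiding to this problem, sketched below in Java under the assumption of the same sensor data repository, is to encapsulate both the shared data and its synchronization inside a single class. Java's synchronized methods acquire the lock on entry and always release it on exit, so client tasks can no longer reverse or duplicate the acquire and release operations.

// Information hiding class: the shared repository and its access
// synchronization are hidden behind read and write operations.
public class SensorDataRepository {
    private final double[] sensorData = new double[16];

    // Each synchronized method body is a critical section; the lock is
    // released automatically on return, even if an exception is thrown.
    public synchronized void write(int sensorId, double value) {
        sensorData[sensorId] = value;
    }

    public synchronized double read(int sensorId) {
        return sensorData[sensorId];
    }
}

Because the synchronization is internal to the class, the coding errors shown above cannot be made by the tasks that use it.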
Kernel of an operating system. This provides the services needed for
concurrent processing. In some modern operating systems, a micro-kernel provides
minimal functionality to support concurrent processing, with most services
provided by system-level tasks.
With sequential programming languages, such as C, C++, Pascal, and Fortran, there is no
support for concurrent tasks. To develop a concurrent multitasked application using a
sequential programming language, it is therefore necessary to use a kernel or threads
package.
With concurrent programming languages, such as Ada and Java, the language
supports constructs for task communication and synchronization. In this case, the
language’s runtime system provides the services and underlying mechanisms to support
inter-task communication and synchronization.
3.8.1 Operating System Services
The following are typical services provided by an operating system kernel:
Memory management. This handles the mapping of each task’s virtual memory
onto physical memory.
Examples of widely used operating systems with kernels that support concurrent
processing are several versions of Unix, Linux, and Windows.
With an operating system kernel, the send message and receive message operations
for message communication and the wait and signal operations for event synchronization
are direct calls to the kernel. Mutually exclusive access to critical sections is ensured by
using the wait and signal semaphore operations, which are also provided by the kernel.
3.8.2 Real-Time Operating Systems
Much of the operating system technology for concurrent systems is also required for real-
time systems. Most real-time operating systems support a kernel or micro-kernel, as
described previously. However, real-time systems have special needs, many of which
relate to the need for predictable behavior. It is more useful to consider the requirements
of a real-time operating system than to provide an extensive survey of available real-time
operating systems, because the list changes on a regular basis. Thus, a real-time operating
system must:
Support multitasking.
Support priority preemption task scheduling. This means each task needs to have
its own priority. The task scheduling algorithm assigns the CPU(s) to the highest
priority task(s) as soon as they are ready, for example, after a task receives a
message for which it was waiting (see the sketch after this list).
Have a predictable behavior (for example, for task context switching, task
synchronization, and interrupt handling). Thus, there should be a predictable
maximum response time under all anticipated system loads.
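On a general-purpose JVM, thread priorities are only hints to the underlying scheduler; a real-time operating system (or a real-time Java runtime) is needed to guarantee strict priority preemption. The sketch below therefore only illustrates the idea of giving each task its own priority; the task bodies are placeholders, not code from the book.

public class PriorityExample {
    public static void main(String[] args) {
        Thread timeCritical = new Thread(PriorityExample::controlLoop, "control");
        Thread logging = new Thread(PriorityExample::logData, "logging");

        // Each task is given its own priority; the scheduler favors the
        // higher-priority task whenever it is ready to run.
        timeCritical.setPriority(Thread.MAX_PRIORITY);
        logging.setPriority(Thread.MIN_PRIORITY);

        timeCritical.start();
        logging.start();
    }

    private static void controlLoop() { /* placeholder for time-critical work */ }
    private static void logData()     { /* placeholder for less critical work */ }
}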
3.9 Task Scheduling
On single processor (CPU) or multiprocessor systems, the operating system kernel has to
schedule concurrent tasks for one or more CPUs. The kernel maintains a Ready List of all
tasks that are ready to use a CPU. Various task scheduling algorithms have been designed
to provide alternative strategies for allocating tasks to a CPU, such as round-robin
scheduling and priority preemption scheduling.
3.9.1 Task Scheduling Algorithms
The goal of the round-robin scheduling algorithm is to provide a fair allocation of
resources. Tasks are queued on a FIFO basis. The top task on the Ready List is allocated to
a CPU and given a fixed unit of time called a “time slice.” If the time slice expires before
the task has blocked (for example, to wait for I/O or wait for a message), the task is
suspended by the kernel and placed on the end of the Ready List. The CPU is then
allocated to the task at the top of the Ready List. In a multiprocessor system, the number
of tasks that can be in Executing state at any one time is at most equal to the number of
processors.
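The FIFO behavior of the Ready List under round-robin scheduling can be modeled in a few lines of Java; this is a toy sketch of the queue discipline only, with no real tasks or time slices.

import java.util.ArrayDeque;
import java.util.Deque;

public class RoundRobinSketch {
    public static void main(String[] args) {
        // The Ready List is a FIFO queue of task names.
        Deque<String> readyList = new ArrayDeque<>();
        readyList.add("taskA");
        readyList.add("taskB");
        readyList.add("taskC");

        for (int slice = 0; slice < 6; slice++) {
            String task = readyList.removeFirst(); // allocate the CPU to the top task
            System.out.println("time slice " + slice + ": running " + task);
            readyList.addLast(task);               // time slice expired: requeue at the end
        }
    }
}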
3.9.2 Task States
When a task is first created, it is placed in Ready state, during which time it is on the
Ready List. When it reaches the top of the Ready List, it is assigned to a CPU, at which
time it transitions into Executing state. The task might later be preempted by another task
and reenter Ready state, at which time the kernel places it on the Ready List in a position
determined by its priority.
Alternatively, while in Executing state, the task may block, in which case it enters the
appropriate blocked state. A task can block while waiting for I/O, while waiting for a
message from another task, while waiting for a timer event or an event signaled by another
task, or while waiting to enter a critical section. A blocked task reenters Ready state when
the reason for blocking is removed – in other words, when the I/O completes, the message
arrives, the event occurs, or the task gets permission to enter its critical section.
3.9.3 Task Context Switching
When a task is suspended because of either blocking or preemption, its current context or
processor state must be saved. This includes saving the contents of the hardware registers,
the task’s program counter (which points to the next instruction to be executed), and other
relevant information. When a task is assigned to a CPU, its context must be restored so it
can resume executing. This whole process is referred to as context switching.
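The information saved and restored at a context switch can be pictured as a task control block, one per task. The following Java sketch is a toy model: the field names are illustrative assumptions, and a real kernel saves the hardware context directly rather than through method calls.

// Toy model of a task control block (TCB), combining the task states
// described above with the context saved at a context switch.
public class TaskControlBlock {
    enum TaskState { READY, EXECUTING, BLOCKED }

    final String name;
    final int priority;
    TaskState state = TaskState.READY;    // a newly created task starts in Ready state
    long savedProgramCounter;             // points to the next instruction to execute
    long[] savedRegisters = new long[16]; // register contents saved when suspended

    public TaskControlBlock(String name, int priority) {
        this.name = name;
        this.priority = priority;
    }

    // Context switch out: save the processor state and give up the CPU.
    public void suspend(long programCounter, long[] registers, boolean blocked) {
        savedProgramCounter = programCounter;
        savedRegisters = registers.clone();
        state = blocked ? TaskState.BLOCKED : TaskState.READY;
    }

    // Context switch in: the kernel restores the saved context and the task resumes.
    public void dispatch() {
        state = TaskState.EXECUTING;
    }

    // A blocked task reenters Ready state when the reason for blocking is removed.
    public void unblock() {
        state = TaskState.READY;
    }
}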
The software quality attributes of a system should be considered when developing the
software architecture. These attributes relate to how the architecture addresses important
nonfunctional requirements, such as performance, security, and maintainability, and are
described in Chapter 16.
Model-based systems engineering (Buede 2009, Sage 2000) and model-based software
engineering (Booch 2007, Gomaa 2011, Blaha 2005) are recognized as important
engineering disciplines in which the system under development is modeled and analyzed
prior to implementation. In particular, embedded systems, which are software intensive
systems consisting of both hardware and software components, benefit considerably from
a combined approach that uses both system and software modeling. As described in
Chapter 2, the modeling languages used in this book are SysML for systems modeling and
UML for software modeling.
This chapter provides an overview of the real-time software design method for
embedded systems called COMET/RTE (Concurrent Object Modeling and Architectural
Design Method for Real-Time Embedded systems), which uses the SysML, UML, and
MARTE notations. Section 4.1 starts with an overview of the COMET/RTE systems and
software life cycle. Section 4.2 describes each of the main phases of COMET/RTE.
Section 4.3 compares the COMET/RTE life cycle with the Unified Software Development
Process, the spiral model, and agile software development. Section 4.4 provides a survey
of design methods for real-time embedded systems. Finally, Section 4.5 gives an
introduction to the multiple view modeling and design of real-time embedded software
architectures described in this textbook.
4.1 COMET/RTE System and Software Life Cycle Model
This section presents an overview of the COMET/RTE method from a system and
software life cycle perspective. COMET/RTE starts with a systems structural analysis and
modeling of the total system (hardware, software, people), which leads to defining the
boundary between the system and the external environment and to designing the
hardware/software interface. This is followed by an iterative software development
process, which is both use case–based and object-oriented. The COMET/RTE life cycle
model, which is depicted in Figure 4.1, is highly iterative and encompasses both system
and software modeling. Iteration is between successive phases, as well as iterating back
through multiple phases using an incremental development approach.
Develop use cases. The system and software functional requirements of the system
are described in terms of use cases and actors. The use case descriptions are a
behavioral view; the relationships among the use cases give a structural view. Use
case modeling is described in Chapter 6.
In the analysis model, the emphasis is on analyzing the problem domain. The
activities are:
Dynamic state machine modeling. The state dependent view of the system is
defined using hierarchical state machines. Designing state machines is described in
Chapter 7.
Object structuring. Determine the objects that participate in each use case. Object
structuring criteria are provided to help determine the software objects in the
system, which can be entity objects, boundary objects, control objects, and
application logic objects. State machines are encapsulated in state dependent
control objects. Object structuring is described in Chapter 8. After the objects have
been determined, the dynamic relationships between objects are depicted in the
dynamic interaction model.
Dynamic interaction modeling. The use cases are realized to show the interaction
among the objects participating in each use case. Interaction diagrams, either
sequence diagrams or communication diagrams, are developed to depict how
objects communicate with each other to execute each use case. Chapter 9 describes
both stateless dynamic interaction modeling and state-dependent dynamic
interaction modeling, in which the interaction among the state dependent control
objects and the state machines they execute is explicitly modeled.
4.2.4 Design Modeling
In the design modeling phase, the real-time software architecture of the system is
designed, in which the analysis model, with its emphasis on the problem domain, is
mapped to the design model, with its emphasis on the solution domain. Subsystem
structuring criteria are provided to structure the system into subsystems, which are
considered as composite objects. Special consideration is given to designing distributed
subsystems as configurable concurrent components that communicate with each other
using messages. Each subsystem is then designed. For the design of real-time embedded
systems, it is necessary to consider concurrent tasking concepts in addition to object-
oriented concepts of information hiding, classes, and inheritance.
For designing real-time software architectures, the following activities are performed:
Make decisions about subsystem structure and interfaces; develop the overall
software architecture; structure the application into subsystems. Subsystem design
is described in Chapter 10.
Make decisions about what software architectural and design patterns to use in the
software architecture. Software architectural patterns are described in Chapter 11.
Make decisions about how to structure the distributed application into distributed
subsystems, in which subsystems are designed as configurable components, and
define the message communication interfaces between the components. Designing
component-based software architectures is described in Chapter 12.
For each subsystem, structure the system into concurrent tasks (active objects).
During task structuring, tasks are structured using the task structuring criteria, and
task interfaces are defined. Designing concurrent tasks is described in Chapter 13.
Incorporate system and software quality into the software architecture. System and
software quality attributes, and how to incorporate them into a real-time software
architecture, are described in Chapter 16.
Substituting agile user stories (Cohn 2006) for a requirements specification is not an
effective solution for real-time embedded software. Starting from a sketchy design instead
of a well-designed software architecture is also inadequate for real-time software
development. However, some agile approaches can be used effectively in real-time
software development after the requirements have been specified and software
architecture has been designed. Thus, agile approaches have some similarities to the
iterative development approaches used in COMET/RTE, USDP, and the spiral model. In
particular, the agile emphasis on team communication and frequent team meetings, short
iterations, frequent integration, and emphasis on software testing including regression
testing, can be used effectively in real-time software development.
4.4 Survey of Design Methods for Real-Time
Embedded Systems
This section provides a survey and description of the evolution of design methods for real-
time embedded systems. For the design of these systems, a major contribution came in the
late 1970s with the introduction of the MASCOT notation (Simpson 1979) and later the
MASCOT design method (Simpson 1986). Based on a data flow approach, MASCOT
formalized the way tasks communicate with each other, via either channels for message
communication or pools (information hiding modules that encapsulate shared data
structures).
The 1980s saw a general maturation of software design methods, and several system
design methods were introduced. Parnas’s work with the Naval Research Lab, in which he
explored the use of information hiding in large-scale software design, led to the
development of the Naval Research Lab (NRL) Software Cost Reduction Method (Parnas,
Clements, and Weiss 1984). Work on applying Structured Analysis and Structured Design
to concurrent and real-time systems led to the development of Real-Time Structured
Analysis and Design (RTSAD) (Ward 1985, Hatley 1988) and the Design Approach for
Real-Time Systems (DARTS) (Gomaa 1984, 1986) methods.
Another software development method to emerge in the early 1980s was Jackson
System Development (JSD) (Jackson 1983). JSD was one of the first methods to advocate
that the design should model reality first and, in this respect, predated the object-oriented
analysis methods. The system is considered a simulation of the real world and is designed
as a network of concurrent tasks, in which each real-world entity is modeled by means of a
concurrent task. JSD also defied the then-conventional thinking of top-down design by
advocating a scenario-driven behavioral approach to software design. This approach was a
precursor of object interaction modeling, an essential part of modern object-oriented
development.
The early object-oriented analysis and design methods emphasized the structural
issues of software development through information hiding and inheritance but neglected
the dynamic issues and hence were less useful for real-time design. A major contribution
by the Object Modeling Technique (Rumbaugh et al. 1991) was to clearly demonstrate
that dynamic modeling was equally important. In addition to introducing the static
modeling notation for the object diagrams, OMT showed how dynamic modeling could be
performed with statecharts (hierarchical state transition diagrams originally conceived by
Harel [1996, 1998]) for showing the state-dependent behavior of active objects and with
sequence diagrams to show the sequence of interactions between objects.
Octopus (Awad, Kuusela, and Ziegler 1996) is a real-time design method based on
use cases, static modeling, object interactions, and statecharts. By combining concepts
from Jacobson’s use cases with Rumbaugh’s static modeling and statecharts, Octopus
anticipated the merging of the notations that is now the UML. For real-time design,
Octopus places particular emphasis on interfacing to external devices and on concurrent
task structuring.
Buhr (see Buhr and Casselman 1996) introduced an interesting concept called the use
case map (based on the use case concept) to address the issue of dynamic modeling of
large-scale systems. Use case maps consider the sequence of interactions between objects
(or aggregate objects in the form of subsystems) at a coarser-grained level of detail than
do communication diagrams.
For UML-based real-time software development, Douglass (1999, 2004) has
described how UML can be applied to real-time systems. The 2004 book describes
applying the UML notation to the development of real-time systems. The 1999 book is a
detailed compendium covering a wide range of topics in real-time system development.
An early version of the COMET/RTE method described in this book is the original
COMET method (Gomaa 2000), which used UML 1.0 and was oriented toward the design
of concurrent, real-time, and distributed applications.
4.5 Multiple Views of System and Software
Architecture
Real-time system and software architectures can be considered from different
perspectives, which are referred to as different views. Kruchten (1995) introduced the 4+1
view model of software architecture, which advocates a multiple-view modeling approach
for software architectures in which the use case view is the unifying view (the 1 view of
the 4+1 views).
This book describes and depicts the different modeling views of a real-time system
and software architecture using the UML, SysML, and MARTE visual notations. The
modeling views, which address the requirements for a real-time software design method
outlined in Chapter 1, are:
Use case view. This view is a functional requirements view, which is an input to
develop the software architecture. Each use case describes the sequence of
interactions between one or more actors (external users or entities) and the system.
This view is the same as the use case view, which is the 1 view, in the 4+1 view
model.
Dynamic state machine view. This view depicts the internal control and
sequencing of a control object using a state machine. Depicted on UML state
machine diagrams.
Structural component view. This view depicts the software architecture in terms
of components, which are interconnected through ports, which in turn support
provided and required interfaces. Depicted on UML structured class diagrams.
This view is similar to the development view in the 4+1 view model.
Timing view. This view analyzes the concurrent tasks composing the real-time
software architecture from a timing perspective. This analysis considers each task’s
execution time on the target platform, as well as its elapsed time as it competes for
resources with other tasks, and whether it will meet its hard deadlines.
4.6 Summary
This chapter has described the COMET/RTE system and software life cycle for the
development of real-time embedded systems and each of the main phases of the
COMET/RTE method. The chapter then compared the COMET/RTE life cycle with the
Unified Software Development Process, the spiral model, and agile software development,
followed by a survey of the evolution of design methods for real-time systems. Finally, it
described the different modeling views of the COMET/RTE method. Each of the steps in
the COMET/RTE method is described in more detail in the subsequent chapters of this
textbook.
5
Structural Modeling for Real-Time Embedded Systems with SysML and UML
◈
This chapter describes how structural modeling can be used as an integrated approach for
system and software modeling of embedded systems consisting of both hardware and
software components. The structural view of a system is a static modeling view, which
does not change with time. A static model describes the static structure of the system being
modeled: first the static structure of the total hardware/software system, followed by that
of the software system.
The SysML block definition diagram notation is used to depict the static model of the
total hardware/software system and the UML class diagram notation is used to depict the
static model of the software system. SysML block definition diagrams and UML class
diagrams were first introduced in Chapter 2. For system modeling, this chapter describes
system-wide structural modeling concepts including blocks, attributes of blocks, and
relationships between blocks. For software modeling, this chapter describes software
structural modeling concepts including classes, attributes of classes and relationships
between classes. Software design concepts such as class operations (methods) are deferred
to software class design as described in Chapter 14.
An example depicting classes and their associations in a factory automation system is
given in Figure 5.1. A workflow plan defines the steps for
manufacturing a part of a given type; it contains several manufacturing operations, where
each operation defines a single manufacturing step. Consequently, there is a one-to-many
association between the Workflow Plan class and the Manufacturing Operation
class. A work order defines the number of parts to be manufactured of a given part type.
Thus the Work Order class has a one-to-many association with the Part class. Because a
workflow plan defines how all parts of a given part type are manufactured, the Workflow
Plan class also has a one-to-many association with the Part class. The attributes of these
classes are also shown in Figure 5.1. For example, the Workflow Plan class has
attributes for the type of part to be manufactured, the raw material type to be used for
manufacturing a part, and the number of manufacturing steps required to produce a part.
Figure 5.1. Example of classes, attributes, and associations on a class diagram.
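The one-to-many associations in Figure 5.1 can be realized in code by giving the class at the "one" end a collection of objects of the class at the "many" end. The following Java sketch is illustrative only; the attribute names follow the text of this section, not the book's figures.

import java.util.ArrayList;
import java.util.List;

// Each operation defines a single manufacturing step.
class ManufacturingOperation {
    String operationName;
}

// A workflow plan defines the steps for manufacturing a part of a given type.
class WorkflowPlan {
    String partType;        // type of part to be manufactured
    String rawMaterialType; // raw material used to manufacture the part
    int numberOfSteps;      // number of manufacturing steps

    // "one" end of the one-to-many association with Manufacturing Operation
    final List<ManufacturingOperation> operations = new ArrayList<>();
}

class Part {
    String partId;
}

// A work order defines the number of parts to be manufactured of a given type.
class WorkOrder {
    int numberOfParts;
    final List<Part> parts = new ArrayList<>(); // one-to-many association with Part
}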
5.1.2 Composition and Aggregation Hierarchies
Composition and aggregation are special forms of relationships in which structural
elements (blocks or classes) are bound by a whole/part relationship. Both composition
and aggregation hierarchies address a structural element that is made up of other structural
elements.
5.1.3 Generalization/Specialization Hierarchies
The following description is in terms of software classes, but it can also be applied to
system blocks. Inheritance is a mechanism for sharing properties between classes. A
child class inherits the properties (e.g., encapsulated data) of a parent class. It can then
extend the structure (i.e., the attributes) of its parent class by adding new attributes. The
parent class is referred to as a superclass or base class. The child class is referred to as a
subclass or derived class. The adaptation of a parent class to form a child class is referred
to as specialization. Child classes may be further specialized, allowing the creation of
class hierarchies, also referred to as generalization/specialization hierarchies.
Consider an example from a factory automation system given in Figure 5.4. There are
three types of factory workstations – receiving workstations, line workstations, and
shipping workstations – so they are modeled as a generalization/specialization hierarchy;
that is, the Factory Workstation class is specialized into three subclasses: Receiving
Workstation, Shipping Workstation, and Line Workstation. All factory
workstations have attributes for workstation name, workstation ID, and location,
which are therefore attributes of the superclass and are inherited by the subclasses. Since
factory workstations are physically laid out in an assembly line, the Receiving
Workstation class has a next workstation ID, the Shipping Workstation has a
previous workstation ID, while a Line Workstation has both a previous
workstation ID and a next workstation ID. Because of these differences, previous
and next workstation IDs are attributes of the subclasses, as shown in Figure 5.4.
Figure 5.4. Example of a generalization/specialization hierarchy.
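In an object-oriented language, this hierarchy maps directly onto inheritance: the shared attributes are declared once in the superclass, and each subclass adds only the attributes that differ. A minimal Java sketch follows, with illustrative attribute types.

// Superclass: attributes common to all factory workstations.
class FactoryWorkstation {
    String workstationName;
    String workstationId;
    String location;
}

// First workstation in the assembly line: only a successor.
class ReceivingWorkstation extends FactoryWorkstation {
    String nextWorkstationId;
}

// Middle workstation: both a predecessor and a successor.
class LineWorkstation extends FactoryWorkstation {
    String previousWorkstationId;
    String nextWorkstationId;
}

// Last workstation in the assembly line: only a predecessor.
class ShippingWorkstation extends FactoryWorkstation {
    String previousWorkstationId;
}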
5.2 Categorization of Blocks and Classes using
Stereotypes
This section describes how blocks and classes can be categorized (i.e., grouped together)
using a classification approach. The dictionary definition of category is “a specifically
defined division in a system of classification.” Whereas classification based on inheritance
is an objective of object-oriented modeling, it is essentially tactical in nature. Thus,
classifying the Factory Workstation class into a Receiving Workstation,
Shipping Workstation, and Line Workstation is a good idea because Receiving
Workstation, Shipping Workstation, and Line Workstation have some
properties (e.g., attributes) in common and others that differ. Categorization, however, is a
strategic classification – a decision to organize classes into certain groups because most
software systems have these kinds of classes and categorizing classes in this way helps to
better understand the system being developed.
In UML and SysML, stereotypes are used to distinguish among the various kinds of
modeling elements. A stereotype is a subclass of an existing modeling element (for
example an application or external class), which is used to represent a usage distinction
(for example the kind of application or external class). In the UML notation, a stereotype
is enclosed by guillemets, like this: «input device».
Examples shown in Figure 5.5 from the microwave oven system are the input devices
Door Sensor and Weight Sensor, the output devices Heating Element and Lamp,
and the timer Oven Timer.
Figure 5.5. Example of UML modeling elements and their stereotypes.
5.3 Structural Modeling of the Problem Domain
with SysML
Structural modeling of the problem domain for real-time embedded systems refers to
modeling the external entities that interface to the embedded system to be developed, as
well as the hardware and software structural elements of the embedded system. In this
structural modeling, the embedded system refers to the total hardware/software system,
consisting of hardware elements, such as sensors and actuators, and software elements.
The software system refers to the software elements, in particular the software
components that compose the software system to be developed.
5.3.1 Modeling Real-World Entities in the Problem Domain
With structural modeling of the problem domain for real-time embedded systems, the
designer uses SysML block definition diagrams (see Section 2.12) to depict real-world
structural elements (such as hardware elements, software elements, or people) as blocks
and defines the relationships among these blocks. A block definition diagram is equivalent
to a class diagram in which the classes have been stereotyped as blocks, thereby allowing
a block definition diagram to depict the same modeling relationships as a class diagram.
1. Physical entity. A physical entity is an entity in the problem domain that has
physical characteristics – that is, it can be seen or touched. Such entities include
physical devices, which are often part of the problem domain in embedded
applications. For example, in the railroad crossing system, the train is a physical
entity that must be detected by the system. Other relevant physical entities controlled
by the system are the railroad crossing barrier, the warning flashing lights, and the
audio warning alarm.
2. Human user. A human user of the system interacts with the system, providing
inputs to the system and receiving outputs from the system. For example, the
microwave user is a human user.
3. Human observer. A human observer views the outputs of the system but does not
interact directly with the system, that is, does not provide any inputs to the system.
An example of a human observer is a vehicle driver or pedestrian who is alerted of
the imminent train arrival by the closing of the barrier, the flashing lights, and the
audio alarm.
An example of a conceptual structural model of the problem domain is shown in the block
definition diagram for the Railroad Crossing Embedded System in Figure 5.6. From a total
system perspective, the problem domain for Railroad Crossing Embedded System consists
of the following blocks:
Barrier, which is a physical entity controlled by the system and which consists of
a barrier actuator and a barrier sensor;
Warning Alarm, which consists of Warning Lights and Warning Audio and
which is a physical entity controlled by the system;
During static modeling of the problem domain, the emphasis is on determining the
entity classes that are defined in the problem, their attributes, and their relationships. For
example, in a Factory Automation System, there are parts, workflow plans, manufacturing
operations, and work orders all mentioned in the problem description. Each of these real-
world conceptual entities is modeled as an entity class and depicted with the stereotype
«entity», as depicted in Figure 5.1. The attributes of each entity class are determined and
the relationships among entity classes are defined, as described in Section 5.1.
5.4 Structural Modeling of the System Context
It is very important to understand the system context, that is, the scope of a computer
system – in particular, what is to be included inside the system and what is to be excluded
from the system. Context modeling explicitly identifies what is inside the system and what
is outside. Context modeling can be done at the total system (hardware and software) level
or at the software system (software only) level. The system context is determined after
modeling and understanding the problem domain, as described in Section 5.3.
A system context diagram is a block definition diagram that explicitly depicts the
boundary between the system (hardware and software), which is modeled as one block,
and the external environment. By contrast, a software system context diagram explicitly
shows the boundary between the software system, also modeled as one block, and the
external environment, which now includes the hardware.
1. External physical entity. A physical entity is an external entity that the system
has to detect and/or control. For example, in the railroad crossing system, the train is
an external physical entity that has to be detected by the system. Other external
physical entities, which are controlled by the system, are the railroad crossing barrier,
the warning flashing lights, and the audio warning alarm. Some external physical
entities, such as smart devices, might provide input to or receive output from the
system.
3. External user. An external user is a human user of the system who interacts with
the system, providing inputs to the system and receiving outputs from the system. For
example, the microwave user is an external user.
4. External observer. An external observer is a human being who views the outputs
of the system but does not interact directly with the system, that is, does not provide
any inputs to the system. An example of an external observer is a vehicle driver or
pedestrian who is alerted of the imminent train arrival by the closing of the barrier,
the flashing lights, and the audio alarm.
Using the SysML notation, the system context is depicted showing the hardware/software
system as an aggregate block with the stereotype «embedded system». The external
environment is depicted in terms of external entities, depicted as blocks, to which the
system has to interface. Stereotypes are used to differentiate between the different kinds of
external blocks. For the system context diagram, an external block could be an «external
system», «external physical entity», «external user», or an «external observer».
5.4.2 Modeling Associations on the System Context Diagram
The associations between the embedded system block and the external blocks are depicted
on the system context diagram, showing in particular the multiplicity of the associations
between the external blocks and the embedded system. These can be one-to-one or one-to-
many associations. In addition, each association is given a standard name, which describes
what the association is between the embedded system and the external block. The standard
association names on system context block diagrams are Inputs to, Outputs to,
Communicates with, Interacts with, Detects, Controls, and Observes. Note that in some
cases, there is more than one standard association name between an external block and the
embedded system, if different associations between them are possible. These associations
are used as follows:
Barrier, which is an external physical entity controlled by the system and which
consists of a barrier actuator (to raise or lower the barrier) and a barrier sensor (to
detect that the barrier has been raised or lowered);
Warning Alarm, which consists of Warning Lights and Warning Audio and
which is an external physical entity controlled by the system;
Note that Observer (vehicle driver, cyclist, or pedestrian who stops at the railroad
crossing) is an external observer of the system.
Figure 5.7. System context diagram for Railroad Crossing Embedded System.
5.5 Hardware/Software Boundary Modeling
To determine the boundary between the hardware and software blocks in preparation for
modeling the software system context diagram, the modeler starts with the system context
diagram and then determines the decomposition into hardware and software blocks.
From a software engineering perspective, some external blocks are modeled in the
same way as in the systems engineering perspective, while others are modeled differently.
In the former category are external system blocks and external users who interact with the
system using standard I/O devices; these external blocks are depicted on the software
system context diagram in the same way as on the system context diagram.
External blocks that are modeled differently from a software engineering perspective
are external physical entity blocks that often do not physically connect to a system, and
therefore need sensors or actuators to make the physical connection. As described in
Section 5.4.2, the association between the embedded system and such a physical entity is
detects and/or controls. Detection of physical entities is done by means of sensors while
control of physical entities is done by means of actuators. Consider the external physical
entities in the Railroad Crossing Embedded System. The arrival of a train is detected by an
arrival sensor and the departure is detected by a departure sensor.
5.6 Structural Modeling of the Software System
Context
As described in Section 5.4, the system context diagram depicts the systems and users that
are external to the total hardware/software system, which is modeled as one composite
block. The hardware blocks (such as sensors and actuators) and software blocks are
internal to the system and are therefore not depicted on the system context diagram.
Together with the hardware/software boundary modeling described in the previous
section, this is the starting point for the software context modeling.
External input device. A device that only provides input to the system – for
example, a sensor;
External output device. A device that only receives output from the system – for
example, an actuator;
External input/output device. A device that both provides input to the system and
receives output from the system – for example, a card reader for an automated
teller machine.
Figure 5.8. Classification of external blocks by stereotype.
These external blocks are depicted with the stereotypes «external input device»,
«external output device», and «external input/output device». Examples are the Door
Sensor external input device and the Heating Element external output device in the
microwave oven system (see Figure 5.9). Other examples are the Arrival Sensor external
input device and the Motor external output device in the Train Control System.
Figure 5.9. Microwave Oven System software system context diagram.
A human user often interacts with the system by means of standard I/O devices such
as a keyboard/display and mouse. The characteristics of these standard I/O devices are of
no interest because they are handled by the operating system. The interface to the user is
of much greater interest in terms of what information is being output to the user and what
information is being input from the user. For this reason, an external user interacting with
the system via standard I/O devices is depicted as an «external user». An example is the
Factory Operator in the factory automation system.
An «external timer» block is used if the application needs to keep track of time
and/or if it needs external timer events to initiate certain actions in the system. External
timer blocks are frequently needed in real-time embedded systems. An example from the
Microwave Oven System is the external Timer. It is needed because the system needs to
keep track of elapsed time to determine the cooking time for food placed in the oven and
count down the remaining cooking time, which it displays to the user. When the time
remaining reaches zero, the system needs to stop cooking. In the Train Control System,
time is needed to compute the speed of the train. Sometimes the need for periodic
activities only becomes apparent during design.
An «external system» block is needed when the system interfaces to other systems, to
either send data or receive data. Thus, in the Factory Automation System, the system
interfaces to two external systems: the Pick & Place Robot and the Assembly
Robot.
5.6.2 Modeling Associations on the Software System Context Diagram
The associations between the software system aggregate block and the external blocks are
depicted on the software system context diagram, showing in particular the multiplicity of
the associations and the name of the association. The standard association names on
software system context diagrams are Inputs to, Outputs to, Communicates with, Interacts
with, and Signals. These associations are used as follows:
From the software system point of view, the hardware sensors and actuators are
external to the software system and interface to the software system. Thus the blocks
outside the software system are the external input and output devices, and the external
timer, as depicted in Figure 5.9. In the example, there are three external input device
blocks: the Door Sensor, the Weight Sensor, and the Keypad. There are also five
external output device blocks, the Heating Element, Display, Beeper,
Turntable, and Lamp, as well as one timer, namely Timer. There is one instance of each
of these external blocks for a given microwave oven. This example is described in more
detail for the Microwave Oven Control System case study in Chapter 19.
An I/O device boundary specification can also be depicted as a table. Examples of input
and output device boundary specifications for the Railroad Crossing Control System
(Figure 5.10) are given in Table 5.1.
Figure 5.11. Deployment diagram for Distributed Light Rail Embedded System.
5.9 Summary
This chapter has described how structural modeling using SysML and UML can be used
as an integrated approach for system and software modeling of embedded systems
consisting of both hardware and software modeling elements. This chapter started by
describing some of the basic concepts of static modeling, including using blocks to depict
system modeling elements and classes to depict software modeling elements, as well as
defining the relationships between structural modeling elements. Three types of
relationships have been described: associations, composition/aggregation relationships,
and generalization/specialization relationships. This chapter then described categorization
of blocks using stereotypes, structural modeling of the problem domain, system context
modeling, developing the hardware/software boundary of a system, software system
context modeling, designing the interface between hardware and software blocks, and
system deployment modeling. The categorization of software classes using stereotypes is
described in Chapter 8.
6
Use Case Modeling for Real-Time Embedded Systems
◈
Use case modeling is widely used for specifying the functional requirements of software
systems. This chapter describes how use case modeling can be applied to real-time
embedded systems from both a systems engineering and a software engineering
perspective. With use case modeling, the system is viewed as a black box, so that only the
external characteristics of the system are considered. Both functional and nonfunctional
requirements need to be described for embedded systems. Functional requirements address
the functionality that the system needs to provide. Nonfunctional requirements, sometimes
referred to as quality attributes, address quality of service goals for the system, which are
particularly important for real-time embedded systems. Although use case modeling is
typically only used for specifying functional requirements, this chapter describes how it
can be extended to specify nonfunctional requirements. Several examples of use case
modeling for embedded systems are given in this chapter.
Section 6.1 gives an overview of use case modeling. Section 6.2 then describes actors
and their role in use case modeling from both systems engineering and software
engineering perspectives. The important topic of how to identify use cases is covered in
Section 6.3. Section 6.4 describes how to document use cases. Section 6.5 describes how
to specify nonfunctional requirements, which is particularly important for real-time
embedded systems. Section 6.6 gives detailed examples of use case descriptions from both
systems engineering and software engineering perspectives. Section 6.7 then describes use
case relationships; modeling with the include relationship is described in Section 6.8;
modeling with the extend relationship is described in Section 6.9. Finally, use case
packages for structuring large use case models are described in Section 6.10.
6.1 Use Cases
In the use case modeling approach, functional requirements are defined in terms of actors,
which are external to the system, and use cases. A use case defines a sequence of
interactions between one or more actors and the system. The use case model describes
the functional requirements of the system in terms of the actors and use cases. In
particular, the use case model considers the system as a black box and describes the
interactions between the actor(s) and the system in a narrative textual form consisting of
actor inputs and system responses. That is, the model deals with what the system does in
response to the actor’s inputs, not with the internals of how it does it.
For real-time embedded systems, both use cases and actors can be modeled from a
systems engineering perspective or from a software engineering perspective. As these
perspectives are different for real-time embedded systems, this chapter describes both
perspectives.
A use case always starts with input from an actor. A use case typically consists of a
sequence of interactions between the actor and the system. Each interaction consists of an
input from the actor followed by a response from the system. Thus, an actor provides
inputs to the system and the system provides responses to the actor. The system is always
considered as a black box, so that its internals are not revealed. Whereas a simple use case
might only involve one interaction between an actor and the system, a more typical use
case will consist of several interactions between the actor and the system. More complex
use cases might also involve more than one actor.
An example of a simple use case model in which there is no difference between the
system and software perspectives is given in Figure 6.1. In this example, there is one use
case, View Alarms, and one actor, Factory Operator, who is a human actor. In View
Alarms, the operator requests to view factory alarms, and the system responds by
displaying the current alarms to the operator.
Figure 6.1. Example of actor and use case.
6.2 Actors
An actor characterizes an external entity (i.e., outside the system) that interacts with the
system. In the use case model, actors are the only external entities that interact with the
system. In other words, actors are outside the system and not part of it. An actor interacts
with the system by providing inputs to the system or by responding to outputs from the
system.
An actor represents a role played in the application domain. An actor represents the
role played by all external instances of the same type, such as all users of the same type.
For example, in the View Alarms use case (Figure 6.1), there are several factory
operators who are represented by the Factory Operator actor. Thus, Factory
Operator models a user type, and individual factory operators are instances of the actor.
6.2.1 Actors in Real-Time Embedded Systems
In many information systems, humans are the only actors. For this reason, the UML
notation depicts an actor using a stick figure. However, in real-time embedded systems,
there are other types of actors in addition to or in place of human actors. In fact, in
embedded systems, the nonhuman actors are frequently more important than human
actors. External I/O devices and timer actors are particularly prevalent in embedded
systems. I/O device actors are needed because the system interacts with the external
environment through sensors and actuators. Timer actors are needed because many
functions in real-time systems need to be performed periodically.
An external entity that is purely passive, that is, only receives outputs from the
system and never responds to these outputs, is not considered an actor in some use case
modeling approaches. However, with embedded systems, it is important to explicitly
consider the interactions with each external device, whether input or output. It is therefore
preferable to explicitly incorporate passive output devices into the use case models when
modeling embedded systems from a software engineering perspective, as described in
Section 6.2.5.
6.2.2 Systems and Software Engineering Perspectives on Actors
For systems in which the actors are usually entirely human, such as information systems
and Web-based systems, there is little or no difference between the systems and software
engineering perspectives of the use case model. However, in real-time embedded systems,
there can be significant differences between the system and software engineering
perspectives.
Consider the case of a train that does not interact directly with the system because its
arrival and departure are detected by sensors. From a systems engineering perspective, the
train is the actor because it is a physical entity that is external to and detected by the
system, whereas the arrival and departure sensors are internal to the total
(hardware/software) system and therefore are not actors. However, from a software
engineering perspective, the arrival and departure sensors are actors because they are
external to the software system and provide inputs to it. Thus, depending on which
perspective is taken for real-time embedded systems, systems or software engineering, the
actors are usually different, and consequently the use cases are described differently.
6.2.3 Primary and Secondary Actors
A primary actor initiates a use case. Thus, the use case starts with an input from the
primary actor to which the system has to respond. Other actors, referred to as secondary
actors, may also participate in the use case by providing inputs and receiving outputs. A
primary actor in one use case can be a secondary actor in another use case. At least one of
the actors must gain value from the use case; usually this is the primary actor. If there is
only one actor in the use case, then that actor is also the primary actor.
In real-time embedded systems, however, where the primary actor can be an external
I/O device or timer, the primary beneficiary of the use case can be a secondary human
actor who receives some information from the system or a human observer who only
observes but does not interact with the system.
An example of primary and secondary actors is shown in Figure 6.2. The Factory
Robot actor (an external computer system) initiates the Generate Alarm use case by
sending monitoring data to the system. The system determines that there is an alarm
condition, which it displays to the factory operator. In this use case, the Factory Robot
is the primary actor that initiates the use case, and the Factory Operator is a secondary
actor that receives the alarm and hence gains value from the use case. However, Factory
Operator is a primary actor in the View Alarms use case (Figure 6.1), in which the
operator requests to view alarm data.
Figure 6.2. Example of primary and secondary actors, as well as external system actor.
6.2.4 Modeling Actors from a Systems Engineering Perspective
From a systems engineering perspective, an actor can be a human user (either as an active
participant in the use case or as an observer), an external system, or a physical entity.
A human actor frequently interacts with the system via standard I/O devices, such as
a keyboard, display, or mouse. However, in real-time embedded systems, a human actor
might interact with the system indirectly via nonstandard I/O devices, such as various
sensors. From a systems engineering perspective, the human is the actor and the I/O
devices are internal to the hardware/software system.
Consider an example of a human actor who interacts with the system using standard
I/O devices. In the factory monitoring system, the Factory Operator is a human actor
who interacts with the system via standard I/O devices, such as a keyboard, display, or
mouse, as shown in Figures 6.1 and 6.2. An example of a human actor who interacts with
the system by using several nonstandard I/O devices is a user of a microwave oven, as
shown in Figure 6.3. To cook food, the user interacts with the system by using several I/O
devices, including a door sensor, weight sensor, and keypad, in addition to an oven heater,
oven display, and oven timer. Modeling the Cook Food use case from a systems
engineering perspective, the user is the actor.
An observer is a human user who passively views the system but does not participate
in the use case by providing any inputs. For example, in the Railroad Crossing System, a
driver is an observer who stops when the warning lights are flashing but does not affect
the system in any way.
An actor can also be an external system actor that either initiates (as primary actor)
or participates (as secondary actor) in a use case. An example of an external system actor
is the Factory Robot in the Factory Monitoring System. The Factory Robot initiates
the Generate Alarm use case, as shown in Figure 6.2, by sending an alarm to the
system. The system receives the alarm and sends alarm data that is displayed to factory
operators. The Factory Operator is a secondary actor in this use case.
An example of a physical entity actor is a Train actor, as shown in Figure 6.4. From
a systems engineering perspective, the train is the primary actor of the Arrive at
Railroad Crossing and Depart from Railroad Crossing use cases, since it is
the arrival of the train that triggers the first use case and the departure of the train that
triggers the second use case.
6.2.5 Modeling Actors from a Software Engineering Perspective
A physical entity actor in the systems engineering view is typically replaced by one
or more input device actors when viewing the system from a software engineering
perspective, since it is the input devices (such as sensors) that detect the presence of a
physical entity. From a systems engineering perspective, the physical entity is the actor
and the I/O devices are internal to the hardware/software system.
Furthermore, a human actor in the systems engineering view who interacts with the
system indirectly via nonstandard I/O devices, such as various sensors, is typically
replaced in the software engineering view by one or more I/O device actors. Thus, an actor
can be an input device actor or an input/output device actor. Typically, the input device
actor interacts with the system via a sensor. The reason why this kind of actor only appears
in a software engineering view is because an input device or sensor is external to the
software system but is internal to the larger hardware/software system. Thus, from a
systems engineering perspective, the input device or sensor is inside the system, whereas
from the software engineering perspective it is outside the system.
An actor can also be a timer actor, which periodically sends timer events to the
system. Periodic use cases are needed in real-time embedded systems when certain
functions need to be performed periodically, such as information that needs to be output
by the system on a regular basis. An example of a periodic use case and timer actor is
given in Figure 6.6. The Timer actor initiates the Display Time of Day use case,
which periodically (every minute) computes and updates the time-of-day clock and
displays its value to the user. In this case, the timer is the primary actor, and the user is the
secondary actor. This is an example of the secondary actor gaining value from the use
case.
Figure 6.6. Example of timer actor (software engineering perspective).
6.2.6 Generalization and Specialization of Actors
In some systems, different actors might have some roles in common but other roles that
are different. In this situation, the actors can be generalized, so that the common part of
their roles is captured as a generalized actor and the different parts by specialized actors.
For an example, consider the actors in a factory automation system depicted in Figure 6.7.
The Factory Robot actor captures the generalized role played by all factory robots.
However, the Pick & Place Robot and Assembly Robot actors are modeled as
specialized roles, which inherit the common role of all robots from Factory Robot and
extend this with specialized roles for the specific types of robot.
Figure 6.7. Example of generalization and specialization of actors.
6.3 Identifying Use Cases
To determine the use cases in the system, it is useful to start by considering the actors and
the interactions they have with the system. Each use case describes a sequence of
interactions between the actor(s) and the system. In this way, the functional requirements
of the system are defined in terms of the use cases, which constitute a functional
specification of a system.
A use case starts with input from the primary actor. The main sequence of the use
case describes the most common sequence of interactions between the actor and the
system. There may also be branches off the main sequence of the use case, which address
less frequent interactions between the actor and the system. These deviations from the
main sequence are executed only under certain circumstances – for example, if the actor
makes an incorrect input to the system. Depending on the application requirements, these
alternative sequences through the use case might join up later with the main sequence. The
alternative sequences are also described in the use case.
Consider the use case model for the microwave oven system, which is viewed from a
systems engineering perspective. This system has three use cases: the Cook Food, Set
Time of Day, and Display Time of Day use cases (see Figure 6.8). From a systems
engineering perspective, the primary actor is the user who wishes to cook food and not the
I/O devices. In the main sequence of the Cook Food use case, the user opens the door,
places the food in the oven, closes the door, selects the cooking time, and presses Start.
The oven starts cooking the food. When the cooking time elapses, the oven stops cooking.
The user opens the door and removes the food.
Figure 6.8. Use case model for Microwave Oven System (systems engineering
perspective).
Each sequence through the use case is called a scenario. A use case usually describes
several scenarios, one main sequence (sometimes referred to as the sunny day scenario)
and a number of alternative sequences. Note that a scenario is a complete sequence
through the use case, so a scenario could start out executing the main sequence and then
follow an alternative branch at a decision point. In the Cook Food use case, there are
several alternative scenarios to the main (sunny day) scenario. For example, one
scenario is that the user might open the door before cooking is finished, in which case
cooking is stopped. In another scenario, the user might press Cancel or might press Start
when the door is open.
6.3.1 Use Case Structuring Guidelines
When developing use cases, it is important to avoid a functional decomposition in which
several small use cases describe individual functions of the system rather than describing a
sequence of events that provides a useful result to the actor.
Although careful application of use case relationships can help with the overall
organization of the use case model, use case relationships should be employed judiciously.
Small inclusion use cases corresponding to individual functions (such as Open Door,
Update Display, and Start Cooking) should be avoided. These functions are too
small, and making them separate use cases would result in a functional decomposition
with fragmented use cases in which the use case descriptions would be only a sentence
each and not a description of a sequence of interactions. The result would be a use case
model that is overly complex and difficult to understand – in other words, a problem of
not being able to see the forest (the overall sequence of interactions) for the trees (the
individual functions)!
6.4 Documenting Use Cases in the Use Case Model
Use cases are documented in the use case model as follows:
Summary. This section briefly describes the use case, typically in one or two
sentences.
Dependency. This optional section describes whether the use case depends on
other use cases, that is, whether it includes or extends another use case.
Actors. This section names the actors in the use case. There is always a primary
actor who initiates the use case. In addition, there might be one or more secondary
actors who also participate in the use case.
Preconditions. This section specifies one or more conditions that must be true at
the start of the use case, from the perspective of this use case.
Main sequence. The bulk of the use case is a narrative textual description of the
main sequence of the use case, which is the most usual sequence of interactions
between the actor and the system. The description is in the form of the input from
the actor, followed by the response of the system.
Postcondition. This section specifies the condition(s) that are always true at the end
of the use case, provided the main sequence has been followed, from the perspective of
this use case.
Outstanding questions. This section documents any questions about the use case
for discussions with stakeholders.
6.5 Specifying Nonfunctional Requirements
Nonfunctional requirements address quality-of-service goals of the system, in other words
how well the functional requirements are fulfilled. Nonfunctional requirements are
particularly important for embedded systems and include performance requirements,
safety requirements, availability requirements, and security requirements. An example of a
nonfunctional requirement for the Authenticate Operator use case is the security
requirement that the operator ID and password must be encrypted. An example of a
nonfunctional requirement for the Cook Food use case is the performance requirement that
the system must respond to the timer inputs within 100 milliseconds. An example of a
nonfunctional safety requirement for a furnace is that if the temperature of the furnace
exceeds a certain limit, which indicates a safety hazard of overheating, the furnace should
be switched off. If the nonfunctional requirements apply to a group of related use cases,
then they can be documented as such.
The nonfunctional requirements are specified in a separate section of the use case,
in addition to the sections described in Section 6.4.
6.6 Examples of Use Case Descriptions
6.6.1 Example of Use Case from a Systems Engineering Perspective
This section gives an example of a use case description from a systems engineering
perspective for the Cook Food use case (see Figure 6.8) from the Microwave Oven
System.
Summary: User puts food in oven, and microwave oven cooks food.
Actors: User
Main Sequence:
7. User enters cooking time on the numeric keypad and presses Start.
8. System starts cooking the food, starts the turntable, and switches on the
light.
10. System timer detects that the cooking time has elapsed.
11. System stops cooking the food, switches off the light, stops the turntable,
sounds the beeper, and displays the end message.
12. User opens the door.
14. User removes the food from the oven and closes the door.
15. System switches off the oven light and clears the display.
Alternative Sequences:
Step 3: User presses Start when the door is open. System does not start
cooking.
Step 5: User presses Start when the door is closed and the oven is empty.
System does not start cooking.
Step 5: User presses Start when the door is closed and the cooking time is
equal to zero. System does not start cooking.
Step 5: User presses Minute Plus, which results in the system adding one
minute to the cooking time. If the cooking time was previously zero, System starts
cooking, starts the timer, starts the turntable, and switches on the light.
Step 7: User opens door before pressing the Start button. System switches on
the light.
Step 9: User presses Minute Plus, which results in the system adding one
minute to the cooking time.
Step 9: User opens door during cooking. System stops cooking, stops the
turntable, and stops the timer. The user closes the door (system then switches off
the light) and presses Start; System resumes cooking, resumes the timer, starts the
turntable, and switches on the light.
Step 9: User presses Cancel. System stops cooking, stops the timer, switches
off the light, and stops the turntable. User may press Start to resume cooking.
Alternatively, user may press Cancel again; system then cancels timer and clears
display.
The main sequence for the use case, which describes the sequence of actor inputs to the
system and the system’s responses, should be relatively straightforward to develop.
However, the alternative sequences are often trickier to develop, because so many of the
system actions are state dependent. Figuring out all the alternatives of a state dependent
use case is helped considerably by supplementing the use case with a state machine
design, as described in the next chapter. The biggest contribution the alternatives section
of the use case description can provide is to point out all the alternative events initiated by
the actors that need to be addressed. Determining the details of how the system should
react to these events can then be done with the aid of a state machine.
6.6.2 Example of Use Case from a Software Engineering Perspective
This section gives an example of a use case description from a software engineering
perspective for the Arrive at Railroad Crossing use case (see Figure 6.9) from the
Railroad Crossing Control System. From this perspective, the actors are the I/O devices
(which are outside the software system but inside the total hardware/software system)
rather than the train physical entity; in particular, it is the arrival sensor that detects the
arrival of the train. The use case description is given for the main sequence of the use case,
followed by a description of the alternative sequences. Nonfunctional requirements for
safety and performance are also specified.
Summary: Train approaches railroad crossing. The system lowers the barrier,
switches on the warning lights, and switches on the audio warning alarm.
Actors: Arrival Sensor (primary actor), Barrier Actuator, Barrier Detection
Sensor, Barrier Timer, Rail Operations Service.
Precondition: The system is operational, and there is either no train or one train in
the railroad crossing.
Main Sequence:
1. Arrival Sensor detects the train arrival and informs the system.
2. System commands the Barrier Actuator to lower the barrier, switches on the
warning lights, switches on the audio warning alarm, and starts the barrier
lowering timer.
3. Barrier Detection Sensor detects that the barrier has been lowered and
informs the system.
Alternative Sequences:
Step 2: If there is another train already at the railroad crossing, skip steps 2
and 3.
Step 3: If Barrier Timer notifies the system that the lowering timer has timed
out, the system sends a safety warning message to the Rail Operations Service.
Nonfunctional Requirements:
a) Safety requirements:
b) Performance requirement:
The elapsed time from the detection of train arrival to sending the
command to the barrier actuator shall not exceed a pre-specified
response time.
Postcondition: The barrier has been closed, the warning lights are flashing, and
the audio warning is sounding.
As with the Cook Food use case, the trickiest part of the use case description is dealing
with the alternative sequences, particularly relating to the issue of there being one or two
trains in the railroad crossing at train arrival and departure. The intricacies of this problem
are best handled with the aid of a state machine, as described in the case study in Chapter
20.
Figure 6.9. Use case model for Railroad Crossing Control System (software
engineering perspective).
6.7 Use Case Relationships
When use cases get too complex, dependencies between use cases can be defined by using
the include and extend relationships. The objective is to maximize extensibility and reuse
of use cases.
Another use case relationship provided by the UML is the use case generalization.
Use case generalization is similar to the extend relationship because it is also used for
addressing variations. However, users often find the concept of use case generalization
confusing, so in the COMET method, the concept of generalization is confined to classes.
Use case variations can be adequately handled by the extend relationship.
6.8 The Include Use Case Relationship
After the use cases for an application are initially developed, common sequences of
interactions between the actor and the system can sometimes be determined that span
several use cases. These common sequences of interactions reflect functionality that is
common to more than one use case. A common sequence of interactions can be extracted
from several of the original use cases and made into a new use case, which is called an
inclusion use case.
Inclusion use cases reflect functionality that is common to more than one use case.
When this common functionality is separated into an inclusion use case, the inclusion use
case can be reused by several base (executable) use cases. An inclusion use case is
executed in conjunction with a base use case, which includes, and hence executes, the
inclusion use case. In programming terms, an inclusion use case is analogous to a library
routine and a base use case is analogous to a program that calls the library routine.
An inclusion use case might not have a specific actor. The actor is in fact the actor of
the base use case that includes the inclusion use case. Because different base use cases use
the inclusion use case, it is possible for the inclusion use case to be used by different
actors.
6.8.1 Example of Include Relationship and Inclusion Use Cases
As an example of inclusion use cases, consider a Light Rail Control System (see case
study described in Chapter 21), which is described from a software engineering
perspective. In particular, there is a use case, Suspend Train, which includes the
Arrive at Station and Control Train at Station inclusion use cases (this
example is a simplified version of that in Chapter 21). Suspend Train has one human
actor, the Rail Operator, who commands a train to go out of service, and one input
device actor, Door Sensor. Suspend Train includes the Arrive at Station use
case, which has two input device actors, the Approaching Sensor (to detect when the
train is nearing the station) and the Arrival Sensor (to detect when the train is entering
the station) as well as two output device actors, Motor (to first decelerate and then stop
the train) and Door Actuator (to open the train doors). Suspend Train then includes
the Control Train at Station use case, which has one input device actor, the Door Sensor,
which detects when the train doors have opened, and the Door Actuator, which, after an
interval, is commanded to close the train doors. The use case descriptions of the three use
cases are given next.
Control Train at Station use case:
Main sequence:
1) Door Sensor notifies the system that the train doors have opened.
2) After a time interval, System sends the Close Doors command to the Door
Actuator.
Figure 6.11. Example of multiple inclusion use cases and include relationships.
6.9 The Extend Use Case Relationship
In certain situations, a use case can get very complex, with many alternative branches. The
extend relationship is used to model alternative paths that a use case might take under
certain conditions. A use case can become too complex if it has too many alternative,
optional, and exceptional sequences of interactions. A solution to this problem is to split
off an alternative or optional sequence of interactions into a separate use case. The
purpose of this new use case is to extend the old use case, if the appropriate condition
holds. The use case that is extended is referred to as the base use case, and the use case
that does the extending is referred to as the extension use case.
Under certain conditions, a base use case can be extended by a description given in
the extension use case. A base use case can be extended in different ways, depending on
which condition is true. The extend relationship can be used as follows:
To show a conditional part of the base use case that is executed only under certain
circumstances
It is important to note that the base use case does not depend on the extension use
case. The extension use case, however, depends on the base use case and executes only if
the condition in the base use case that causes it to execute is true. Although an extension
use case usually extends only one base use case, it is possible for it to extend more than
one. A base use case can be extended by more than one extension use case.
6.9.1 Extension Points
Extension points are used to specify the precise locations in the base use case at which
extensions can be added. An extension use case may extend the base use case only at these
extension points (Fowler 2004, Rumbaugh et al. 2005).
Each extension point in the base use case is given a name. The extension use case has
one insertion segment (usually the main sequence of the extension use case) for the
extension point. This segment is inserted at the location of the extension point in the base
use case. The extend relationship can be conditional, meaning that a condition is defined
that must be true for the extension use case to be invoked. Thus it is possible to have more
than one extension use case for the same extension point, but with each extension use case
satisfying a different condition.
An extension point with multiple extension use cases can be used to model several
alternatives in which each extension use case specifies a different alternative. The
extension conditions are designed such that only one condition can be true, and hence only
one extension use case selected, for any given situation.
The value of the extension condition is set during runtime execution of the use case
because at any one time, one extension use case could be chosen, and at a different time an
alternative extension use case could be chosen. In other words, the extension condition is
set during runtime of the use case and can change during execution.
6.9.2 Example of Extension Point and Extension Use Cases
Consider the following example for a green zone system (Figure 6.12). The green zone is
an area in the center of the city in which there is restricted access by motor vehicles.
Vehicles entering the green zone have a green zone permit number encoded on a RFID
(radio frequency ID) transponder, which is displayed on the windshield of the vehicle.
When the vehicle enters the green zone, a remote transponder detector reads the permit
number RFID and transmits it to the Green Zone Monitoring System. This functionality is
handled by the base use case, Enter Green Zone. However, a car entering the green zone
without a permit is handled by an extension use case, Process Unauthorized Vehicle. The
extension point is called Unauthorized (Figure 6.12) and is located in an alternative
sequence of the Enter Green Zone use case for handling unrecognized or missing permit
numbers. The use cases are described from a systems engineering perspective, so the
primary actor is the Vehicle (not the sensors that detect the vehicle), and the secondary
actors for the extension use case are the external DMV System and Police Patrol Car.
Actor: Vehicle.
Main Sequence:
Alternative Sequence:
Step 4: Unauthorized (i.e., unrecognized or missing permit number): Extend
with Process Unauthorized Vehicle use case.
Process Unauthorized Vehicle extension use case:
Main Sequence:
4. The DMV system sends a message to the system containing the name and
address of the vehicle owner.
5. The system issues and prints a fine to be sent by mail to the vehicle owner.
Postcondition: The unauthorized vehicle has been detected and a fine has been
issued.
Alternative sequence:
Step 2: The license plate cannot be decoded (because of bad photograph, bad
weather, covered license plate); System sends alert message to the Police Patrol
Car.
Figure 6.12. Example of an extend relationship and an extension use case.
6.10 Use Case Packages
For large systems that have to deal with a large number of use cases, the use case model
can become unwieldy. A good way to handle this scale-up issue is to introduce a use case
package that groups together related use cases. In this way, use case packages can
represent high-level requirements that address major subsets of the functionality of the
system. Because actors often initiate and participate in related use cases, use cases can be
grouped into packages based on the major actors that use them. Nonfunctional
requirements that apply to a group of related use cases could be assigned to the use case
package that contains those use cases.
Figure 6.13 shows an example of a use case package for the Factory Automation
System, namely the Factory Monitoring Use Case Package, encompassing four
use cases. The Factory Operator is the primary actor of the View Alarms and View
Monitoring Data use cases and a secondary actor of the other use cases. The Factory
Robot is the primary actor of the Generate Alarm and Generate Monitoring Data
use cases.
Figure 6.13. Example of use case package.
6.11 Summary
This chapter has described the use case approach to specifying the functional requirements
of the system from both systems engineering and software engineering perspectives. It has
described the concepts of actor and use cases. It has also described use case relationships,
in particular, the extend and include relationships. Furthermore, use case modeling can be
supplemented with state machine modeling to provide a more precise specification for
state dependent real-time embedded systems, as described in Chapter 7.
Use cases developed from a systems engineering perspective are less detailed than
use cases developed from a software engineering perspective. The former use cases can be
developed earlier in the COMET/RTE life cycle, in particular before hardware/software
boundary modeling and software system context modeling, as described in Chapter 5.
The use case model has a strong influence on subsequent software development.
Thus, use cases are realized in the analysis model during dynamic interaction modeling, as
described in Chapter 9. For each use case, the objects that participate in the use case are
determined by using the object structuring criteria described in Chapter 8, and the
sequence of interactions between the objects is defined. Software can be incrementally
developed by selecting the use cases to be developed in each phase of the project, as
described in Chapter 4. Integration and system test cases should also be based on use
cases.
7
State Machines for Real-Time
Embedded Systems
◈
State machines (also referred to as finite state machines) are used for modeling control and
sequencing in a system. This is particularly important for real-time embedded systems,
which are usually highly state dependent. In particular, the actions of a state dependent
system depend not only on the inputs to the system but also on what happened previously
in the system, which is captured as a state. A state machine can be used to depict the states
of a system, subsystem, component, or object. Notations used to define state machines are
the state transition diagram, state machine diagram, statechart, and state transition table. In
highly state dependent systems, these notations help substantially to understand the
complexity of these systems.
This chapter starts by considering the characteristics of flat state machines and then
describes hierarchical state machines. To show the benefits of hierarchical state machines,
this chapter starts with the simplest form of flat state machine and gradually shows how it
can be improved upon to achieve the full modeling power of hierarchical state machines.
The process of developing state machines from use cases is then described. Several
examples are given throughout the chapter from two case studies, the Microwave Oven
and Train Control state machines.
Section 7.1 describes events and states in state machines. Section 7.2 introduces the
Microwave Oven Control state machine example. Section 7.3 describes events and guard
conditions, while Section 7.4 describes state machine actions. Section 7.5 describes
hierarchical state machines, both sequential and orthogonal. Section 7.6 describes
cooperating state machines, while state machine inheritance is described in Section 7.7.
The process of developing state machines from use cases is then described in Sections 7.8
and 7.9.
7.1 State Machines
A state machine is a conceptual machine with a finite number of states. The state machine
can be in only one of the states at any specific time. A state transition is a change in state
that is caused by an input event. In response to an input event, the state machine might
transition to a different state. Alternatively, the event has no effect and the state machine
remains in the same state. The next state depends on the current state as well as on the
input event. Optionally, an output action might result from the state transition.
A state machine can be used to depict the state of a system, subsystem, or component.
However, in object-oriented systems, a state machine (even if it describes the state of a
system) should always be encapsulated in a class, as described in Chapter 8.
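To make these concepts concrete, the following sketch shows one possible encoding of a
flat state machine in C; the state and event names are illustrative and are not taken from
the case studies.

    #include <stdio.h>

    /* Minimal flat state machine: states and events are enumerations, and
       the transition function computes the next state from the current
       state and the input event. */
    typedef enum { DOOR_SHUT, DOOR_OPEN } State;
    typedef enum { EV_DOOR_OPENED, EV_DOOR_CLOSED } Event;

    State transition(State current, Event ev) {
        switch (current) {
        case DOOR_SHUT:
            if (ev == EV_DOOR_OPENED) return DOOR_OPEN;
            break;
        case DOOR_OPEN:
            if (ev == EV_DOOR_CLOSED) return DOOR_SHUT;
            break;
        }
        return current;   /* null transition: remain in the same state */
    }

    int main(void) {
        State s = DOOR_SHUT;                 /* initial state */
        s = transition(s, EV_DOOR_OPENED);   /* Door Opened event */
        printf("state = %s\n", s == DOOR_OPEN ? "Door Open" : "Door Shut");
        return 0;
    }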
7.1.1 Events
An event occurs at a point in time; it is also known as a discrete event, discrete signal, or
stimulus. An event is an atomic occurrence (i.e., not interruptible) and conceptually has
zero duration. Examples of events are Door Opened, Item Placed, Timer
expired, and Cruising Speed Reached.
Events can depend on each other. For example, in a microwave oven, the event Door
Opened always precedes the event Item Placed for a given sequence of events. In this
situation, the first event (Door Opened) causes a transition into the state (Door Open),
while the next event (Item Placed) causes the transition out of that state; the precedence
of the two events is reflected in the state that connects them, as shown in Figure 7.1.
However, events can be completely independent of each other. For example, the event
Train x Departed from New York is independent of the event Train y Departed
from Washington.
Figure 7.1. Example of main sequence of state machine (partial state machine).
An event can originate from an external source, such as Door Opened (which is the
result of the user opening the oven door), or the event can be internally generated by the
system, such as Cruising Speed Reached.
A timer event is a special event, specified by the keyword after, which indicates that
an event will occur after an elapsed time identified by an expression in parentheses, such
as after (ten seconds) or after (elapsed time). On a state machine, the timer event causes a
transition out of a given state. The elapsed time is measured from the time of entry into
that state (e.g., when the timer is started) until exit from the state, which is caused by the
timer expiration event.
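As a sketch of how such a timer event might be realized (one possible implementation,
assuming a polled design and the standard C time functions), the time of entry into the
state is recorded and the timeout is checked against the elapsed time:

    #include <stdbool.h>
    #include <time.h>

    /* Timer event sketch: the elapsed time is measured from entry into
       the state; when it expires, a timer event is generated that causes
       the transition out of the state. */
    static time_t state_entry_time;

    void on_state_entry(void) {
        state_entry_time = time(NULL);    /* start measuring on entry */
    }

    /* Polled periodically; returns true when the after(timeout) event
       should be generated. */
    bool timer_event_due(double timeout_seconds) {
        return difftime(time(NULL), state_entry_time) >= timeout_seconds;
    }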
State machines are assumed to observe run to completion semantics. This means that
each event is executed to completion before starting the next event. Thus, if two events
arrive at essentially the same time, one event is selected and processed completely before
the next event is selected. Executing an event includes executing any hierarchical or
orthogonal transitions and actions resulting from the event.
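One possible realization of run-to-completion semantics, sketched below under the
assumption of a single dispatching task and an arbitrary queue capacity, is to queue
arriving events and process exactly one event, with all its resulting transitions and
actions, before dequeuing the next:

    #include <stdbool.h>

    #define QUEUE_SIZE 16   /* arbitrary capacity for this sketch */

    typedef int Event;

    static Event queue[QUEUE_SIZE];
    static int head = 0, tail = 0;

    /* Called when an event arrives, possibly from an interrupt handler. */
    bool post_event(Event ev) {
        int next = (tail + 1) % QUEUE_SIZE;
        if (next == head) return false;   /* queue full */
        queue[tail] = ev;
        tail = next;
        return true;
    }

    /* Executes ALL transitions and actions resulting from one event. */
    void dispatch(Event ev) { (void)ev; /* state machine logic here */ }

    void event_loop(void) {
        for (;;) {
            if (head != tail) {           /* an event is pending */
                Event ev = queue[head];
                head = (head + 1) % QUEUE_SIZE;
                dispatch(ev);             /* runs to completion before the
                                             next event is dequeued */
            }
        }
    }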
7.1.2 States
A state represents a recognizable situation that exists over an interval of time. Whereas an
event occurs at a point in time, a state machine is in a given state over an interval of time.
The current state is the name given to the state that the state machine is currently
occupying. The arrival of an event at the state machine usually causes a transition from
one state to another. Alternatively, an event can have a null effect, in which case the state
machine remains in the same state. In theory, a state transition takes zero time to occur. In
practice, the time for a state transition to occur is negligible compared to the time spent in
the state.
Some states represent the state machine waiting for an event from the external
environment, for example the state Ready to Cook is the state in which the state
machine is waiting for the user to press the Start button, as shown in Figure 7.1. Other
states represent situations in which the state machine is waiting for a response from
another part of the system. For example, Cooking is the state in which food is being
cooked and the next event is an internal timer event that is generated when the cooking
timer expires.
The initial state of a state machine is the state that is entered when the state machine
is activated. For example, the initial state in the Microwave state machine is the Door
Shut state, as identified in UML by the arc originating from the small black circle in
Figure 7.1.
7.2 Examples of State Machine
As an example of a state machine, consider the partial state machine for the Microwave
Oven, which is taken from the microwave oven system case study and shown in Figure
7.1. The state machine follows the main sequence described in the Cook Food use case
(see Chapters 6 and 19) and shows the different states for cooking food. The initial state is
Door Shut. When the user opens the door, the state machine transitions into the Door
Open state. The user places an item in the oven, causing the state machine to transition
into the Door Open with Item state. When the user closes the door, the state machine
then transitions into the Door Shut with Item state. After the user inputs the cooking
time, the Ready to Cook state is entered. When the user presses the Start button, the
state machine transitions into the Cooking state. When the timer expires, the Door Shut
with Item state is reentered. The user then opens the door and the state machine
transitions back to Door Open with Item state. The user removes the food and the state
machine transitions to the Door Open state. From there, if the user closes the door, the
state machine transitions back to the Door Shut state.
The above description closely follows the use case description and describes the
states entered and exited during the execution of the main sequence of the Cook Food use
case. A state machine can also depict alternative state transitions out of a state. It is
possible to have more than one transition out of a state, with each transition caused by a
different event. Consider the alternative state transition out of Cooking state. If, instead of
the timer expiration causing the transition from Cooking state, the user opens the door
during cooking (see Figure 7.2), the state machine would then transition to the Door
Open with Item state. From this state, the user could then either close the door
(transition to Door Shut with Item state) or remove the item (transition to Door
Open state). These alternative state transitions are clearly visible in the state machine and
are more precisely described than in a textual use case description.
Figure 7.2. Example of alternative state transitions on state machine (partial state
machine).
In some cases, it is also possible for the same event to occur in different states and
have different effects. For example, in Figure 7.2, if the door is opened in Door Shut
state, the state machine transitions to Door Open state. If the door is opened in Door
Shut with Item state, the state machine transitions to Door Open with Item state.
However, if the door is opened in Cooking state, the transition is also to Door Open
with Item state. In addition, on this transition out of Cooking state, cooking is stopped.
This issue is discussed further in Section 7.4.
7.3 Events and Guard Conditions
It is possible to specify conditional state transitions through the use of guard conditions.
This can be achieved by combining events and guard conditions in defining state
transitions. The notation used is Event [Condition]. A condition is a Boolean expression
given in square brackets with a value of True or False, which holds for some period of
time. When the event arrives, it causes a state transition, provided that the guard condition
is True. Conditions are optional.
In some cases, an event does not cause an immediate state transition, but its impact
needs to be remembered because it will affect a future state transition. The fact that an
event has occurred can be stored as a condition that can be checked later.
Examples of guard conditions in Figure 7.3 are Zero Time and Time Remaining
in the microwave state machine. Two of the transitions out of the Door Open with
Item state are Door Closed [Zero Time] and Door Closed [Time Remaining].
Thus the transition taken depends on whether the user has previously entered the time or
not. If the condition Zero Time is true when the door is closed, the state machine
transitions to Door Shut with Item, waiting for the user to enter the time. If the
condition Time Remaining is true when the door is closed, the state machine transitions
to the Ready to Cook state. (It should be noted that these conditions can be depicted as
states on a separate state machine as described in Section 7.5.5).
Figure 7.3. Example of events and conditions (partial state machine).
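The Door Closed [Zero Time] and Door Closed [Time Remaining] transitions of Figure
7.3 might be coded as follows; this is only a sketch, in which the cooking_time variable
stands in for however the guard condition is actually stored:

    #include <stdbool.h>

    typedef enum { DOOR_OPEN_WITH_ITEM, DOOR_SHUT_WITH_ITEM,
                   READY_TO_COOK } State;

    static int cooking_time;   /* updated when the user enters the time */

    /* Handle the Door Closed event: the transition taken depends on the
       guard conditions [Zero Time] and [Time Remaining]. */
    State on_door_closed(State current) {
        if (current == DOOR_OPEN_WITH_ITEM) {
            if (cooking_time == 0)       /* [Zero Time] */
                return DOOR_SHUT_WITH_ITEM;
            else                         /* [Time Remaining] */
                return READY_TO_COOK;
        }
        return current;                  /* no effect in other states */
    }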
7.4 Actions
Associated with a state transition is an optional output action. An action is a computation
that executes as a result of a state transition. While an event is the cause of a state
transition, an action is the effect of the transition. An action is triggered at a state
transition. It executes and then terminates itself. The action executes instantaneously at the
state transition; thus conceptually an action is of zero duration. In practice, the duration of
an action is very small compared to the duration of a state.
As examples of actions, consider the Microwave state machine of Figure 7.1 with the
actions added, as shown in Figure 7.4. Consider the situation when the user presses the
start button and the machine is in the Ready to Cook state. The state machine transitions
into the Cooking state. The actions are to start the timer and start cooking.
There can be more than one action associated with a transition. Since the actions all
execute simultaneously, there must not be any interdependencies between the actions.
Thus, in the above example, the actions to start the timer and start cooking are
independent of each other. However, it is not correct to have two simultaneous actions,
such as Compute Change and Display Change. Since there is a sequential dependency
between the two actions, the change cannot be displayed before it has been computed. To
avoid this problem, introduce an intermediate state called Computing Change. The
Compute Change action is executed on entry to this state and the Display Change
action is executed on exit from this state.
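The Computing Change solution might be sketched in C as follows (a hypothetical
vending-style fragment, not part of the case studies), with Compute Change executed as
the entry action of the intermediate state and Display Change executed on exit from it,
so that the sequential dependency is respected:

    #include <stdio.h>

    static int change;   /* must be computed before it can be displayed */

    static void compute_change(int paid, int price) { change = paid - price; }
    static void display_change(void) { printf("Change: %d\n", change); }

    /* Transition into the intermediate Computing Change state: the entry
       action computes the change. */
    void enter_computing_change(int paid, int price) {
        compute_change(paid, price);     /* entry/Compute Change */
    }

    /* Transition out of Computing Change state: the change is displayed
       only after it has been computed. */
    void exit_computing_change(void) {
        display_change();                /* Display Change on exit */
    }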
An example of a state machine with alternative state transitions and actions is shown
in Figure 7.5. In particular, there are three alternative state transitions out of Cooking
state, which have different resulting actions. From Cooking state, if the timer expires, the
transition is to Door Shut with Item state, and the action is to Stop Cooking. By
contrast, if the door is opened, the transition is to Door Open with Item, and the
actions are Stop Cooking (as before) and Stop Timer. Stop Timer is necessary in
the door opened scenario because there will be a non-zero cooking time left if the door is
opened before the timer expires. If cooking is later resumed, the oven will cook for the
remaining time. The same two actions are also executed if the user presses Cancel,
although the transition is to Ready to Cook state.
Figure 7.5. Example of alternative state transitions and actions (partial state machine).
The same event can occur in different states. Depending on the individual state, the
actions could be the same or different. Figure 7.5 gives an example of the Door Opened
event, which can occur in four different states. In each scenario, the transition is to a
different state; in three scenarios (transition out of Door Shut state to Door Open state,
transition out of Door Shut with Item state to Door Open with Item state, and
transition out of Ready to Cook state to Door Open with Item state) there is no
action. However, in the fourth scenario, transition out of Cooking state, the transition is to
Door Open with Item, and the actions are Stop Cooking and Stop Timer.
7.4.2 Entry Actions
An entry action is an instantaneous action that is performed on transition into the state.
An entry action is represented by the reserved word entry and is depicted as entry/Action
inside the state box. Whereas transition actions (actions explicitly depicted on state
transitions) can always be used, entry actions should only be used in certain situations.
The best time to use an entry action is when:
The same action needs to be performed on every transition into this state.
The action is performed on entry into this state and not on exit from the previous
state.
In this situation, the action is only depicted once inside the state box, instead of on each
transition into the state. However, if an action is only performed on some transitions into
the state and not others, then the entry action must not be used. Instead, transition actions
should be used on the relevant state transitions.
An example of an entry action is given in Figure 7.6. In Figure 7.6a, actions are
shown on the state transitions. If the Start button is pressed (resulting in the Start event)
while the microwave oven is in the Ready to Cook state, the state machine transitions to
the Cooking state. There are two actions, Start Cooking and Start Timer.
However, if Minute Pressed event arrives (to cook the food for one minute) while in
Door Shut with Item state, the state machine will also transition to the Cooking state.
However, in this case the actions are Start Cooking and Start Minute. Thus, in the
two transitions into Cooking state, one action is the same (Start Cooking) but the
second is different. An alternative design is to use an entry action for Start Cooking
as shown in Figure 7.6b. On entry into Cooking state, the entry action Start Cooking
is executed because this action is executed on every transition into the state. However, the
Start Timer action is shown as an action on the state transition from Ready to Cook
state into Cooking state. This is because the Start Timer action is only executed on
that specific transition into Cooking state and not on the other transition. For the same
reason, on the transition from Door Shut with Item state into Cooking state, there is
a transition action Start Minute. Figures 7.6a and 7.6b are semantically equivalent to
each other but Figure 7.6b is more concise.
Figure 7.6. Example of entry action. (a) Actions on state transitions. (b) Entry action.
7.4.3 Exit Actions
An exit action is an instantaneous action that is performed on transition out of the state.
An exit action is represented by the reserved word exit and is depicted as exit/Action
inside the state box. Whereas transition actions (actions explicitly depicted on state
transitions) can always be used, exit actions should only be used in certain situations. The
best time to use an exit action is when:
The same action needs to be performed on every transition out of the state.
The action is performed on exit from this state and not on entry into the next state.
In this situation, the action is only depicted once inside the state box, instead of on each
transition out of the state. However, if an action is only performed on some transitions out
of the state and not others, then the exit action must not be used. Instead, transition actions
should be used on the relevant state transitions.
An example of an exit action is given in Figure 7.7. In Figure 7.7a, actions are
shown on the state transitions out of Cooking state. Consider the action Stop Cooking.
If the timer expires, the microwave oven transitions from the Cooking state to the Door
Shut with Item state and the action Stop Cooking is executed (Figure 7.7a). If the
door is opened, the oven transitions out of the Cooking state into Door Open with
Item state. In this transition, two actions are executed, Stop Cooking and Stop Timer.
Thus, in both transitions out of Cooking state (Figure 7.7a), the action Stop Cooking is
executed. However, when the door is opened and the transition is to Door Open with
Item state, there is an additional Stop Timer action. An alternative design is shown
in Figure 7.7b, where an exit action Stop Cooking is depicted. This means that
whenever there is a transition out of Cooking state, the exit action Stop Cooking is
executed. In addition, in the transition to Door Open with Item state, the transition
action Stop Timer will also be executed. Having the Stop Cooking action as an exit
action instead of an action on the state transition is more concise, as shown in Figure 7.7b.
The alternative of having transition actions, as shown in Figure 7.7a, requires the Stop
Cooking action to be explicitly depicted on each of the state transitions out of the
Cooking state. Figures 7.7a and 7.7b are semantically equivalent to each other but Figure
7.7b is more concise.
Figure 7.7. Example of exit action. (a) Actions on state transitions. (b) Exit action.
Figure 7.8 depicts an alternative version of the Microwave Oven Control state
machine in which the Start Cooking and Stop Cooking transition actions of Figure 7.5
are replaced by an entry action (Start Cooking) and an exit action (Stop Cooking) in
Cooking state.
Figure 7.8. State machine for Microwave Oven Control with entry and exit actions.
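One possible way to code entry and exit actions, sketched below for the design of Figure
7.8, is to factor them into per-state handlers invoked by a common transition routine; the
order assumed here is the UML order of exit action, then transition action, then entry
action:

    #include <stdio.h>

    typedef enum { READY_TO_COOK, DOOR_SHUT_WITH_ITEM, COOKING } State;

    static void on_entry(State s) {
        if (s == COOKING) printf("Start Cooking\n");  /* entry/Start Cooking */
    }

    static void on_exit(State s) {
        if (s == COOKING) printf("Stop Cooking\n");   /* exit/Stop Cooking */
    }

    /* Perform one transition: exit action of the old state, then the
       transition action (if any), then entry action of the new state. */
    static State make_transition(State from, State to,
                                 void (*transition_action)(void)) {
        on_exit(from);
        if (transition_action) transition_action();
        on_entry(to);
        return to;
    }

    static void start_timer(void) { printf("Start Timer\n"); }

    int main(void) {
        State s = READY_TO_COOK;
        s = make_transition(s, COOKING, start_timer);  /* Start event */
        return 0;
    }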
7.4.4 Activities
In addition to actions, it is also possible to have an activity executed as a result of a state
transition. An activity is a computation that executes for the duration of a state. Thus,
unlike an action, which takes no time, an activity executes for a finite amount of time. An
activity is enabled on entry into the state and disabled on exit from the state. The cause of
the state change, which results in disabling the activity, is usually an input event from a
source that is not related to the activity. However, in some cases, the activity itself
generates the event that causes the state change.
An activity is depicted as being associated with the state in which it executes. This is
achieved by showing the activity in the state box and having a dividing line between the
state name and the activity name. The activity is depicted as do / Activity, where do is a
reserved word. This means the activity is enabled on entry into the state and disabled on
exit from the state.
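A simple realization of an activity, sketched below for the Maintain Speed activity of the
Cruising state discussed next, assumes a periodic control task that polls an enable flag:
the activity is enabled on entry into the state and disabled on exit.

    #include <stdbool.h>

    static volatile bool maintain_speed_enabled = false;

    void enter_cruising(void) { maintain_speed_enabled = true;  }  /* enable */
    void exit_cruising(void)  { maintain_speed_enabled = false; }  /* disable */

    /* Periodic task tick: the do / Maintain Speed activity executes only
       while Cruising state is active. */
    void control_task_tick(void) {
        if (maintain_speed_enabled) {
            /* ... adjust the throttle to maintain cruising speed ... */
        }
    }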
For example, consider the Reached Cruising event that causes a transition from
Accelerating state to Cruising state. First, the activity Increase Speed is disabled,
and then the activity Maintain Speed is enabled and remains active throughout
Cruising state. The semantics of this state transition are: when the Reached Cruising
event arrives, the Increase Speed activity is disabled on exit from Accelerating
state, the state machine transitions to Cruising state, and the Maintain Speed
activity is enabled on entry into Cruising state.
7.5 Hierarchical State Machines
The objective of hierarchical state machines is to exploit the basic concepts and
visual advantages of state transition diagrams, while overcoming the disadvantages of
overly complex and cluttered diagrams, through hierarchical structuring. Note that any
hierarchical state machine can be mapped to a flat state machine, so for every hierarchical
state machine there is a semantically equivalent flat state machine.
There are two main approaches to developing hierarchical state machines. The first
approach is a top-down approach to determine major high-level states, sometimes referred
to as modes of operation. For example, in an airplane control state machine, the modes
might be Taking Off, In Flight, and Landing. Within each mode, there are several states,
some of which might in turn be composite states. The second approach is to first develop a
flat state machine and then identify states that can be aggregated into composite states, as
described in Section 7.5.3.
7.5.1 Sequential State Decomposition
State machines can often be significantly simplified by the hierarchical decomposition of
states, in which a composite state is decomposed into two or more interconnected
sequential substates. This kind of hierarchical decomposition is referred to as sequential
state decomposition. The notation for state decomposition also allows both the composite
state and the substates to be shown on the same diagram or, alternatively, on separate
diagrams, depending on the complexity of the decomposition.
Each transition into the composite state In Motion is, in fact, a transition into one
(and only one) of the substates on the lower-level state machine, namely the
Accelerating substate. Each transition out of the composite state has to actually
originate from one (and only one) of the substates (Accelerating, Cruising, or
Approaching) on the lower-level state machine.
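Sequential state decomposition might be represented in code as follows; in this sketch,
an Idle top-level state is assumed for illustration, and the substate field is meaningful
only while the composite state is active:

    typedef enum { IDLE, IN_MOTION } TopState;
    typedef enum { ACCELERATING, CRUISING, APPROACHING } InMotionSubstate;

    typedef struct {
        TopState top;
        InMotionSubstate sub;   /* valid only while top == IN_MOTION */
    } TrainState;

    /* A transition into the composite state In Motion is a transition
       into exactly one of its substates, here the initial substate. */
    void enter_in_motion(TrainState *s) {
        s->top = IN_MOTION;
        s->sub = ACCELERATING;
    }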
7.5.3 Aggregation of State Transitions
The hierarchical state machine notation also allows a transition out of every one of the
substates on a state machine to be aggregated into a transition out of the composite state.
Careful use of this feature can significantly reduce the number of state transitions depicted
on a state machine diagram.
In the flat state machine in Figure 7.10, the Obstacle Detected event can occur in
any one of the Accelerating, Cruising, or Approaching states, in which case the
state machine transitions to the Emergency Stopping state. With the hierarchical state
machine in Figure 7.11a, instead of depicting the Obstacle Detected event as causing
a transition out of each of the Accelerating, Cruising, or Approaching substates, it
is more concise to show this event causing the transition out of the composite state In
Motion, as depicted in Figure 7.11a. The transitions out of the three substates (of
the In Motion composite state) are not explicitly shown on Figure 7.11a, even though an
individual Obstacle Detected event would actually occur in one of these substates and
cause the transition to the Emergency Stopping state. However, the advantage is the
simplification of the state machine due to the significant reduction in state transition arcs.
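In code, the aggregated transition of Figure 7.11a might correspond to handling the
Obstacle Detected event once at the composite-state level before dispatching to the
current substate, as in this sketch:

    typedef enum { TOP_IN_MOTION, TOP_EMERGENCY_STOPPING } TopState;
    typedef enum { ACCELERATING, CRUISING, APPROACHING } Substate;
    typedef enum { EV_OBSTACLE_DETECTED, EV_REACHED_CRUISING } Event;

    typedef struct { TopState top; Substate sub; } State;

    void dispatch(State *s, Event ev) {
        /* Transition out of the composite state: applies in whichever
           In Motion substate is current. */
        if (s->top == TOP_IN_MOTION && ev == EV_OBSTACLE_DETECTED) {
            s->top = TOP_EMERGENCY_STOPPING;
            return;
        }
        /* ... otherwise dispatch the event to the current substate ... */
    }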
7.5.4 History State
The history state is another useful characteristic in hierarchical state machines. Indicated
by an H inside a small circle, a history state is a pseudostate within a sequential
composite state, which means that the composite state remembers its previously active
substate after it exits. Thus, when the composite state is reentered, the previously active
substate is entered.
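A history pseudostate might be realized by remembering the last active substate on exit
from the composite state and restoring it on reentry, as in this sketch with illustrative
substate names:

    typedef enum { SUB_A, SUB_B, SUB_C } Substate;

    static Substate current_sub = SUB_A;   /* initial substate */
    static Substate history_sub = SUB_A;   /* last active substate */

    void exit_composite(void) { history_sub = current_sub; }

    /* Reentry via the history state resumes where the composite state
       left off; ordinary reentry starts at the initial substate. */
    void enter_via_history(void) { current_sub = history_sub; }
    void enter_via_initial(void) { current_sub = SUB_A; }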
7.5.5 Orthogonal State Decomposition
At any one time, the Microwave Oven Control composite state is in one of the
substates of the Microwave Oven Sequencing region and one of the substates of the
Cooking Time Condition region. The Cooking Time Condition region consists of
two substates – Zero Time and Time Remaining – with Zero Time as the initial
substate. The Update Cooking Time event causes a transition from Zero Time to
Time Remaining. Either the Timer Expired event or the Cancel Timer event can
cause a transition back to Zero Time. The Microwave Oven Sequencing region
consists of the Microwave Oven Sequencing Composite State, which is
decomposed to depict the sequence of states the oven goes through while handling a user
request to cook food, as shown in Figure 7.8. The current state of the Microwave Oven
Control state machine is the union of the current substates in each of the Microwave
Oven Sequencing and the Cooking Time Condition regions.
The Zero Time and Time Remaining substates of the Cooking Time
Condition region (see Figure 7.13) are the guard conditions checked in the Microwave
Oven Sequencing region when the Door Closed event is received while in the Door
Open with Item substate (see Figure 7.8). Cancel Timer is an action (cause) in the
Microwave Oven Sequencing region and an event (effect) in the Cooking Time
Condition region, where it causes a transition to the Zero Time state. Update Cooking
Time is likewise an action in the former region and an event in the latter. Timer Expired is
an event in both regions.
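The two orthogonal regions of Microwave Oven Control might be coded as two
independent state variables whose pair forms the current state; this sketch shows only
the Update Cooking Time and Door Closed events:

    typedef enum { DOOR_SHUT, DOOR_SHUT_WITH_ITEM, DOOR_OPEN_WITH_ITEM,
                   READY_TO_COOK, COOKING } SequencingState;
    typedef enum { ZERO_TIME, TIME_REMAINING } TimeCondition;

    typedef struct {
        SequencingState sequencing;   /* Microwave Oven Sequencing region */
        TimeCondition   time_cond;    /* Cooking Time Condition region */
    } OvenState;

    /* Update Cooking Time is an event for the time-condition region. */
    void on_update_cooking_time(OvenState *s) {
        s->time_cond = TIME_REMAINING;
    }

    /* In the sequencing region, the Door Closed event consults the other
       region's substate as its guard condition. */
    void on_door_closed(OvenState *s) {
        if (s->sequencing == DOOR_OPEN_WITH_ITEM)
            s->sequencing = (s->time_cond == TIME_REMAINING)
                                ? READY_TO_COOK : DOOR_SHUT_WITH_ITEM;
    }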
7.6 Cooperating State Machines
State machines can model concurrent processes by using cooperating state machines. With
this approach, the control problem is divided between two separate state machines, which
cooperate with each other. The cooperation is by means of an action on one state machine
that propagates as an event to the other state machine, and vice versa.
An example of this is used in the Microwave Oven problem, which uses two
cooperating state machines, namely the Microwave Oven Control (Figure 7.8) and
Oven Timer (Figure 7.14) state machines. Oven Timer decrements
the cooking time down to zero and notifies Microwave Oven Control when the timer
expires. The initial state of Oven Timer is Cooking Time Idle. Cooking food is
initiated by the transition of Microwave Oven Control from Ready to Cook state
into Cooking state, which results in the Start Cooking entry action and the Start
Timer transition action. The Start Timer action in the Microwave Oven Control
state machine propagates as an event of the same name to the Oven Timer state machine,
causing the latter to transition from Cooking Time Idle state to Cooking Food state.
Each second, a Timer Event causes an Oven Timer action to decrement the cooking
time by cycling through the Updating Cooking Time transient state and back to
Cooking Food state. When the cooking time remaining reaches zero, the Finished
event causes the Oven Timer state machine to transition from Updating Cooking
Time state to Cooking Time Idle state. An action on this transition is Timer
Expired, which propagates as an event of the same name back to the Microwave Oven
Control state machine. This event causes Microwave Oven Control to transition
from Cooking state to Door Shut with Item and the resulting action is Stop
Cooking. It should be noted that the Stop Timer action in the Microwave
Oven Control state machine also propagates as an event of the same name to the Oven
Timer state machine, causing the latter to transition from Cooking Food state to
Cooking Time Idle state.
Figure 7.14. State machine for Oven Timer.
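The propagation of an action in one state machine as an event to the other might be
coded as a direct call, as in the following sketch; in a concurrent design, the action
would instead post a message to the other state machine's input event queue:

    #include <stdio.h>

    typedef enum { COOKING_TIME_IDLE, COOKING_FOOD } TimerState;
    typedef enum { READY_TO_COOK, COOKING } ControlState;

    static TimerState   timer_state   = COOKING_TIME_IDLE;
    static ControlState control_state = READY_TO_COOK;

    /* Oven Timer receives the Start Timer event. */
    void oven_timer_start_timer(void) {
        if (timer_state == COOKING_TIME_IDLE)
            timer_state = COOKING_FOOD;
    }

    /* Microwave Oven Control handles the Start event: the Start Timer
       action propagates as an event of the same name to Oven Timer. */
    void control_start_pressed(void) {
        if (control_state == READY_TO_COOK) {
            control_state = COOKING;
            printf("Start Cooking\n");   /* entry action */
            oven_timer_start_timer();    /* Start Timer action -> event */
        }
    }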
7.7 Inherited State Machines
Inheritance can be used to introduce change to a state machine. When a state machine is
specialized, the child state machine inherits the properties of the parent state machine; that
is, it inherits the states, events, transitions, actions, and activities depicted in the parent
state machine model. The child state machine can then modify the inherited state machine
as follows:
1. Add new states. The new states can be at the same level of the state machine
hierarchy as the inherited states. Furthermore, new substates can be defined for either
the new or the inherited states. In other words, a state in the parent state machine can
be decomposed further in the child state machine. It is also possible to add new
orthogonal states – that is, new states that execute orthogonally with the inherited
states.
2. Add new events and transitions. These events cause new transitions to new or
inherited states.
3. Add or remove actions and activities. New actions can be defined that are
executed on transitions into and out of new or inherited states. Exit and entry actions,
as well as new activities, can be defined for new or inherited states. It is also possible
to remove predefined actions and activities, although this should be done with care
and is generally not recommended.
The child state machine must not delete states or events defined in the parent. It must not
change any composite state/substate dependency defined in the parent state machine.
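Because C has no class inheritance, the effect of an inherited state machine can be
approximated by delegation, as in this sketch: the child state machine's dispatch function
handles the new Minute Plus transitions first and defers all other events to the inherited
parent behavior, which is preserved unchanged.

    typedef enum { DOOR_SHUT_WITH_ITEM, COOKING, OTHER_STATE } State;
    typedef enum { EV_MINUTE_PLUS, EV_START, EV_DOOR_OPENED } Event;

    /* Inherited (parent) behavior; stubbed here for the sketch. */
    State parent_dispatch(State s, Event ev) {
        (void)ev;
        return s;
    }

    /* Child state machine: new transitions are tried first, and all other
       events fall through to the inherited transitions. */
    State enhanced_dispatch(State s, Event ev) {
        if (ev == EV_MINUTE_PLUS &&
            (s == DOOR_SHUT_WITH_ITEM || s == COOKING))
            return COOKING;           /* new Minute Plus transition */
        return parent_dispatch(s, ev);
    }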
Examples of Inherited State Machines
As an example of an inherited state machine, consider the Microwave Oven Control
class from the microwave oven system, which specifies the state machine of the same
name. The Microwave Oven Control state machine is depicted in Figure 7.8. The
Microwave Oven Control state machine is then specialized to provide the additional
features for the Enhanced Microwave Oven Control child state machine. The
specialization of the Microwave Oven Control state dependent control superclass to
produce the Enhanced Microwave Oven Control subclass is depicted in the class
diagram of Figure 7.15.
The state machine for the Enhanced Microwave Oven Control class is shown in
Figure 7.16. Consider the impact of the following extensions incorporated into the
specialized state machine, which are referred to as features:
TOD Clock
Turntable
Light
Beeper
Minute Plus
Example of new states added. To support the TOD (time-of-day) Clock feature, the
inherited Door Shut state is specialized to create three new substates (see Chapter
19).
Example of new transitions added. To support the Minute Plus feature, a new
Minute Plus transition (see Figure 7.16) is introduced from the Door Shut with
Item state to the Cooking state, since pressing the Minute Plus button when the door
is shut with an item inside it results in the oven cooking the food for a minute. If the
Minute Plus button is pressed while the food is cooking, there is a transition from
Cooking state back to itself.
Example of new actions added (see Figure 7.16). To support the Turntable
feature, two new actions are provided: Start Turning (which is executed on entry
into the inherited Cooking state) and Stop Turning (which is executed on exit
from the Cooking state). To support the Light feature, two new actions are
provided: Switch On, which is both an entry action (into the inherited Cooking
state) and a transition action (between other inherited states), and the Switch Off
transition action. To support the Beeper feature, the Beep transition action is added.
Figure 7.16. Inherited state machine for Enhanced Microwave Oven Control.
7.8 Developing State Machines from Use Cases
This section describes a systematic approach to develop a state machine from a use case.
The approach starts with a typical scenario given by the use case, that is, one specific path
through the use case. This scenario should be the main sequence through the use case,
involving the most usual sequence of interactions between the actor(s) and the system.
Now consider the sequence of external events given in the scenario. Usually, an input
event from the external environment causes a transition to a new state, which is given a
name corresponding to what happens in that state. If an action is associated with the
transition, the action occurs in the transition from one state to the other. If an activity is to
be performed in that state, the activity is enabled on entry to the state and disabled on exit
from the state. Actions and activities are determined by considering the response of the
system to the input event, as given in the use case description.
Initially, a flat state machine is developed, which follows the event sequence given in
the main scenario. The states depicted on the state machine should all be externally visible
states. That is, the actor should be aware of each of these states. In fact, the states
represent consequences of actions taken by the actor, either directly or indirectly. This is
illustrated in the detailed example given in the next section.
To complete the state machine, determine all the possible external events that could
be input to the state machine. Do this by considering the description of alternative paths
given in the use case. Several alternatives describe the reaction of the system to alternative
inputs from the actor. Determine the effect of the arrival of these events on each state of
the initial state machine; in many cases, an event could not occur in a given state or will
have no impact. However, in other states, the arrival of an event will cause a transition to
an existing state or some new state that needs to be added to the state machine. The
actions resulting from each alternative state transition also need to be considered. These
actions should already be documented in the alternative sequences section of the use case
description as the system reaction to alternative input events. However, for complex state
machines, the actions may not have been fully worked out and documented in the use
cases, in which case the actions need to be fully designed for the state machine(s).
7.9 Example of Developing a State Machine from
a Use Case
As an example of a state machine developed from a use case, consider how the
Microwave Oven Control state machine is developed from the Cook Food use
case, which is taken from the microwave oven system case study.
7.9.1 Develop State Machine for Main Sequence of Use Case
The state machine needs to follow the interaction sequence described in the Cook Food
use case (see Chapters 6 and 19) and show the different states for cooking food. In
general, user inputs should correspond to input events that cause state transitions. System
responses should correspond to actions on the state machine.
The precondition given for the use case is Microwave Oven Is Idle with Door
Shut. We therefore decide that the initial state should be called Door Shut. The first step
of the use case states that the user opens the door and in response the system switches the
oven light on. The user then puts food in the oven and closes the door. These use case
steps consist of three input events from the user: open the door, insert the food, and close
the door, which we treat as follows:
When the user opens the door, the state machine needs to transition into a new
state, which we name the Door Open state. This causes the state machine action to
switch on the light.
When the user places an item in the oven, the state machine needs to transition
again; we name the new state Door Open with Item state.
When the user closes the door, the state machine transitions into a third state,
which we name the Door Shut Waiting for User state, and a resulting action
is to switch off the light. Note that we designate a different state from the initial
Door Shut state in order to differentiate between the states of Door Shut with
Item in the oven and Door Shut without an item.
In the next use case step, the user presses the Cooking Time button, so the microwave
needs to transition to a new state, which we name Door Shut Waiting for Cooking
Time. Step 6 of the use case states that the system prompts for cooking time. As this
prompt is a system response, the system output in the use case needs to be an output action
on the state machine. After the user inputs the cooking time, the oven is ready to start
cooking, so we name the next state Ready to Cook. When the user presses the Start
button, the oven starts cooking the food, so we designate the next state the Cooking state.
Step 8 of the use case states that the system starts cooking the food. For this to happen, the
system needs to start the timer, start cooking the food, start turning the turntable, and
switch on the light. All these concurrent actions need to be specified on the state machine
as a result of the Start transition. Since there are several actions that result from entering
the Cooking state (Start Cooking, Start Turning, Switch on Light), these
actions are designed as entry actions. However, Start Timer is designed as a transition
action because it does not happen on every transition into the Cooking state (as
depicted in Figure 7.16).
When the timer expires, the state machine reenters the Door Shut Waiting for
User state. Actions on this transition need to be to stop cooking the food, stop turning the
turntable, switch off the light, and beep. Two actions that result from leaving the Cooking
state are designed as exit actions (Stop Cooking and Stop Turning). The other two
actions, Switch off Light and Beep, are designed as transition actions as these actions
do not happen on every transition out of Cooking state (as explained in the next section
and depicted in Figure 7.16).
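To make the distinction between entry, exit, and transition actions concrete, the following minimal C++ sketch shows one way the transitions just described could be coded. The state names follow the state machine, but the stubbed action functions and the flat structure are illustrative assumptions, not the case study implementation.

#include <cstdio>

enum class State { ReadyToCook, Cooking, DoorShutWaitingForUser };
State state = State::ReadyToCook;

// Hypothetical device-control actions, shown as stubs.
void startTimer()     { std::puts("Start Timer"); }
void startCooking()   { std::puts("Start Cooking"); }
void startTurning()   { std::puts("Start Turning"); }
void switchOnLight()  { std::puts("Switch On Light"); }
void stopCooking()    { std::puts("Stop Cooking"); }
void stopTurning()    { std::puts("Stop Turning"); }
void switchOffLight() { std::puts("Switch Off Light"); }
void beep()           { std::puts("Beep"); }

// Entry and exit actions execute on every entry to / exit from Cooking.
void enterCooking() { startCooking(); startTurning(); switchOnLight(); }
void exitCooking()  { stopCooking(); stopTurning(); }

void onStartPressed() {
    if (state == State::ReadyToCook) {
        startTimer();       // transition action: only on this transition
        state = State::Cooking;
        enterCooking();     // entry actions
    }
}

void onTimerExpired() {
    if (state == State::Cooking) {
        exitCooking();      // exit actions
        switchOffLight();   // transition actions: only on this transition
        beep();
        state = State::DoorShutWaitingForUser;
    }
}

int main() { onStartPressed(); onTimerExpired(); }

An alternative transition into Cooking, such as the Minute Plus transition discussed later, would reuse enterCooking() while substituting its own transition action, so the entry/transition split falls out naturally in code as well.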
Continuing with the main sequence, the user then opens the door and the state
machine transitions back to Door Open with Item state, with the action to switch on
the light. The user removes the food, which causes the state machine to transition back to
Door Open state. Finally, the user closes the door and the state machine transitions back
to the initial Door Shut state, with the action to switch off the light. This sequence of
transitions on the state machine is depicted in Figure 7.17.
Figure 7.17. State machine for Microwave Oven Control (main sequence of
Cook Food use case).
7.9.2 Consider Alternative Sequences of Use Case
The state machine so far corresponds to the main sequence through the Cook Food use
case and describes the states entered and exited during the execution of the use case. Next
we must consider the alternative sequences in the use case description. Some of the
alternatives are events that occur in states in which they are prohibited from causing a
transition and therefore correspond to null transitions. Examples are the user pressing Start
with the door open (alternative in step 3 of the use case), the door being closed with an
empty oven (alternative in step 5), and the door being closed with food in the oven but zero
cooking time entered (another alternative in step 5). However, the alternative in step 9, in
which the user opens the door during cooking instead of the timer expiring, necessitates a new transition
on the state machine from Cooking state into the Door Open with Item state, as
depicted in Figure 7.16, from which the user could either close the door or remove the
item. In this transition out of Cooking state, since the timer has not expired, there needs to
be an action to stop the timer. Note that in this transition, the light remains on. These
alternative sequences are clearly visible in the state machine but are less easy to describe
precisely in a use case.
Another alternative is for the user to open the door after the cooking time has been
selected but before the cooking time has been entered. The system response is to return to
Door Open with Item state and switch on the light.
7.9.3 Develop Integrated State Machine
In some applications, one state machine can participate in more than one use case. In such
situations, there will be one partial state machine for each use case. The partial state
machines will need to be integrated to form a complete state machine. The implication is
that there is some precedence in the execution of (at least some of) the use cases and their
corresponding state machines. To integrate two partial state machines, it is necessary to
find one or more common states. A common state might be the last state of one partial
state machine and the first state of the other partial state machine. However, other
situations are possible. The integration approach is to integrate the partial state machines
at the common state, in effect superimposing the common state of the second state
machine on top of the same state on the first state machine. This can be repeated as
necessary, depending on how many partial state machines need to be integrated. An
example of this state machine integration is given for the Light Rail Control System case
study in Chapter 21.
7.9.4 Develop Hierarchical State Machine
It is usually easier to initially develop a flat state machine before trying to develop a
hierarchical state machine. After completing the flat state machine by considering
alternative events, look for ways to simplify the state machine by developing a
hierarchical state machine. Look for states that can be aggregated because they constitute
a natural composite state. In particular, look for situations where the aggregation of state
transitions simplifies the state machine.
For the integrated flat state machine of the Microwave Oven, the decision is made to
aggregate the Waiting for User and Waiting for Cooking Time states into the
Door Shut with Item composite state, as described in Section 7.5.4 and shown in
Figure 7.12. This decision results in:
Waiting for User and Waiting for Cooking Time becoming substates of
the Door Shut with Item composite state.
The aggregation of transitions out of each of these substates into a transition out of
the composite state, when the door is opened.
The creation of a history state to allow reentry to the substate that was previously
active.
Furthermore, an orthogonal state machine can be developed to depict guard conditions for
the microwave oven state machine, as depicted in Figure 7.13. The Microwave Oven
Control state machine is decomposed into two orthogonal regions: one to depict the
sequencing of the events and actions in the oven (Microwave Oven Sequencing), and
the other to depict the Cooking Time Condition, as described in Section 7.5.5.
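A minimal C++ sketch of the composite state's history mechanism follows. The substates match Figure 7.12, but the two-field representation and the operation names are illustrative assumptions.

#include <cassert>

// Substates of the Door Shut with Item composite state.
enum class Substate { WaitingForUser, WaitingForCookingTime };

struct DoorShutWithItem {
    Substate current = Substate::WaitingForUser;
    Substate history = Substate::WaitingForUser;

    void onCookingTimeSelected() {
        if (current == Substate::WaitingForUser)
            current = Substate::WaitingForCookingTime;
    }
    // Single aggregated transition out of the composite state: record
    // the active substate so the history state can later restore it.
    void onDoorOpened() { history = current; }
    void onDoorClosed() { current = history; }   // reentry via history state
};

int main() {
    DoorShutWithItem s;
    s.onCookingTimeSelected();  // now in Waiting for Cooking Time
    s.onDoorOpened();           // one exit transition serves both substates
    s.onDoorClosed();           // history restores Waiting for Cooking Time
    assert(s.current == Substate::WaitingForCookingTime);
}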
7.10 Summary
This chapter has described the characteristics of flat state machines, including events,
states, guard conditions, actions, and activities. This was followed by a description of
hierarchical state machines, including sequential state decomposition, history states, and
orthogonal state machines. Cooperating state machines and state machine inheritance were
also described. The process of developing a state machine from a use case was then
described in detail. It is also possible for a state machine to support several use cases, with
each use case contributing to some subset of the state machine. Such cases are often easier
to model by considering the state machine in conjunction with the object interaction
model, in which a state dependent object executes the state machine, as described in
Chapter 9. Several other examples of state machines are given in the case studies.
8
Object and Class Structuring for Real-
Time Embedded Software
◈
After structural modeling and defining the use case and state machine models, the next
step is to determine the software classes and objects in the real-time embedded system.
Using a model-based approach, the emphasis is on software objects that model real-world
objects in the problem domain. Furthermore, since concurrency is so fundamental to real-
time software design, an important issue that is addressed at this stage is whether the
objects are concurrent or not. Another key issue described in this chapter is the behavior
pattern of each category of object.
Classes are categorized in order to group together classes with similar characteristics.
Figure 8.1 shows the categorization of application classes using inheritance. As described
in Chapter 5, stereotypes (see Sections 5.2 and 5.6) are used to distinguish among the
various kinds of classes. Application classes are categorized according to their role in the
application, in particular «boundary» class, «entity» class, «control» class, or «application
logic» class. Because an object is an instance of a class, an object has the same stereotype
as the class from which it is instantiated. Thus, the categorization described in this section
applies equally to classes and objects.
1. Boundary object. Software object that interfaces to and communicates with the
external environment. Boundary objects are further categorized as:
Device I/O boundary object. Software object that receives input from and/or
outputs to a hardware I/O device.
User interaction object. Software object that interacts with and interfaces to a
human user.
2. Control object. A software object that provides the overall coordination for a
collection of objects. Control objects are further categorized as:
Coordinator object. A software object that controls other objects but is not
state dependent.
State dependent control object. A software object that controls other objects
and is state dependent.
Timer object. A software object that controls other objects on a periodic basis.
3. Entity object. A software object that encapsulates information and provides access
to the information it stores. Entity objects are classified further as data abstraction or
wrapper objects.
4. Application logic object. A software object that encapsulates the details of the
application logic. For real-time, scientific, or engineering applications, application
logic objects include algorithm objects, which execute problem-specific algorithms,
and service objects, which provide services for client objects, typically in
client/server or service-oriented architectures where there are one or more real-time
objects that access a service. Business logic objects are rarely used in real-time
systems.
The category into which an object fits is usually obvious. However, in some
cases, an object can satisfy more than one of the above criteria. For
example, an object could have characteristics of both an entity object, in that it
encapsulates some data, and an algorithm object, in that it executes an algorithm. In such
cases, allocate the object to the category it fits best. Note that it is more
important to determine all the objects in the system than to be unduly concerned about
how to categorize a few borderline cases.
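To suggest how these categories eventually surface in code, the following C++ skeletons pair one illustrative class with each major stereotype, using names from examples appearing in this chapter; all members are hypothetical placeholders.

// «input» boundary class: interfaces to a hardware input device.
class DoorSensorInput {
public:
    void readSensor() { /* read the door sensor hardware */ }
};

// «state dependent control» class: executes a state machine.
class MicrowaveOvenControl {
public:
    void processEvent(int /*event*/) { /* state machine execution */ }
};

// «timer» class: controls other objects on a periodic basis.
class OvenTimer {
public:
    void onTimerEvent() { /* periodic processing */ }
};

// «entity» class: encapsulates information and access to it.
class OvenData {
public:
    void updateCookingTime(int digit) { cookingTimeSecs = cookingTimeSecs * 10 + digit; }
private:
    int cookingTimeSecs = 0;
};

// «algorithm» class: executes a problem-specific algorithm.
class Cruiser {
public:
    double speedAdjustment(double current, double cruising) { return cruising - current; }
};

int main() {}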
8.3 Object Behavior and Patterns
During object and class structuring, two important decisions can be made about object
behavior; the first concerns the concurrent nature of the object, and the second concerns
the behavior pattern of the object.
Regarding the behavior pattern, for each object structuring criterion there is a
corresponding object behavioral pattern, which
describes how the object interacts with its neighboring objects. It is useful to understand
the object’s typical pattern of behavior, because when this category of object is used in an
application, it is likely to interact in a similar way with the same kinds of neighboring
objects. Each behavioral pattern is depicted on a UML communication diagram (first
introduced in Chapter 2) as depicted in the next several figures.
8.4 Boundary Classes and Objects
This section describes the characteristics of the three different kinds of software boundary
objects that interface to and communicate with the external objects, namely device I/O
boundary objects, proxy objects, and user interaction objects. In each case, an example is
given of a boundary object, followed by an example of a behavioral pattern in which a
boundary object communicates with neighboring objects in a typical interaction sequence.
8.4.1 External Objects and Software Boundary Objects
Boundary objects are software objects that interface to and communicate with the external
objects that are outside the system (see Section 5.6). To determine the boundary
objects in the system, consider the external objects to which they are connected:
identifying the external objects that communicate with the system helps identify the
boundary objects, since each external object communicates with a
boundary object in the system. External objects interface to software boundary objects as
follows:
An external device object provides input to and/or receives output from a device
I/O boundary object. An external device represents an I/O device type. An
external I/O device object represents a specific I/O device, that is, an instance of
the device type. An external device object can be one of the following: an external
input device object (such as a sensor), which provides input to an input object; an
external output device object (such as an actuator), which receives output from an
output object; or an external I/O device object, which both provides input to and
receives output from an I/O object.
An external user object interfaces to and interacts with a user interaction object.
8.4.2 Device I/O Boundary Objects
A device I/O boundary object provides the software interface to a hardware I/O device.
Device I/O boundary objects are needed for nonstandard application-specific I/O devices,
which are more prevalent in real-time embedded systems. Standard I/O devices are
typically handled by the operating system and so do not need special-purpose device I/O
boundary objects developed as part of the application.
A physical object in the application domain is a real-world object that has some
physical characteristics – for example, it can be seen and touched. For every real-world
physical object that is relevant to the problem, there should be a corresponding software
object in the system. For example, in the Microwave Oven System, the door sensor and
heating element are relevant real-world physical objects because they interact with the
software system. However, the oven casing is not a relevant real-world object, because it
does not interact with the software system. In the software system, the relevant real-world
physical objects are modeled by means of software objects, such as the door sensor
interface and heating element interface software objects.
Real-world physical objects usually interface to the system via sensors and actuators.
These real-world objects provide inputs to the system via sensors or are controlled by
(receive outputs from) the system via actuators. Thus, to the software system, the real-
world objects are actually I/O devices that provide inputs to and receive outputs from the
system. Because the real-world objects correspond to I/O devices, the software objects
that interface to them are referred to as device I/O boundary objects.
For example, in the Microwave Oven System, the microwave door is a real-world
object that has a sensor (input device) that provides inputs to the system. The heating
element is a real-world object that is controlled by means of an actuator (output device)
that receives outputs from the system.
An input object is a device I/O boundary object that receives input events or data
from an external input device. In common with all boundary objects, an input object is
assumed to be concurrent. Figure 8.3 shows an example of an input class Door Sensor
Input and an instance of this class, a Door Sensor Input object, which receives door
sensor inputs from an external hardware Door Sensor input device. Figure 8.3 also
shows the hardware/software boundary, as well as the stereotypes for the hardware
«external input device» and the software «input» objects. Thus, the input object provides
the software interface to the external hardware input device. Because boundary objects are
assumed to be concurrent, the input object is depicted using the UML notation for a
concurrent object.
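As an illustration of a concurrent input object, consider the following sketch, in which a task polls a (simulated) door sensor and forwards edge events. The polling period, function names, and message destination are assumptions made for illustration; actual task design is deferred to Chapter 13.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Simulated hardware access; a real system would read a sensor register.
std::atomic<bool> simulatedDoorOpen{false};
bool readDoorSensor() { return simulatedDoorOpen; }

// Stand-in for sending a message to the control object.
void sendToControl(const char* event) { std::puts(event); }

std::atomic<bool> running{true};

// The concurrent «input» object: detects changes on the hardware input
// and forwards them as events.
void doorSensorInputTask() {
    bool wasOpen = readDoorSensor();
    while (running) {
        bool isOpen = readDoorSensor();
        if (isOpen != wasOpen) {
            sendToControl(isOpen ? "Door Opened" : "Door Closed");
            wasOpen = isOpen;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread input(doorSensorInputTask);
    simulatedDoorOpen = true;   // simulate the user opening the door
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    running = false;
    input.join();
}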
An output object is a device I/O boundary object that sends output to an external
output device. As with all boundary objects, an output object is assumed to be concurrent.
Figure 8.4 shows an example of an output class called Heating Element Output, as
well as an instance of this class, a Heating Element Output object, which sends
outputs to an external real-world object, the Heating Element Actuator external
output device. The Heating Element Output software object sends Switch On and
Switch Off heating commands to the hardware Heating Element Actuator. Figure
8.4 also shows the hardware/software boundary.
Figure 8.4. Example of output class and object.
A hardware I/O device is a device that both sends inputs to the system and receives
outputs from the system. The corresponding software class is an I/O class, and a software
object that is instantiated from this class is an I/O object. An input/output (I/O) object is
a device I/O boundary object that receives input from and sends output to an external I/O
device. This is the case with the ATM Card Reader I/O class shown in Figure 8.5a and
its instance, the ATM Card Reader I/O object (see Figure 8.5b), which receives ATM
card input from the external I/O device, the ATM Card Reader. In addition, ATM Card
Reader I/O sends eject and confiscate output commands to the card reader.
Figure 8.5. Example of I/O class and object.
Each software boundary object should hide the details of the physical interface to the
real-world object from which it receives input or to which it provides output. However, a
software object should model the events experienced by the real-world object to which it
corresponds. The events experienced by the real-world object are inputs to the system, in
particular, to the software object that interfaces to it. In this way, the software object can
simulate the behavior of the real-world object. In the case of a real-world object that is
controlled by the system, the software object generates an output event that determines the
behavior of the real-world object.
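The following sketch illustrates this information hiding for a hypothetical Heating Element Output class: callers see only switchOn and switchOff, while the register encoding stays private. A plain variable stands in for what would be a memory-mapped actuator register.

#include <cstdint>
#include <cstdio>

// Stand-in for a memory-mapped actuator register (address and layout
// would come from the device documentation; this is purely illustrative).
volatile std::uint32_t simulatedHeaterReg = 0;

class HeatingElementOutput {
public:
    void switchOn()  { write(1); }
    void switchOff() { write(0); }
private:
    void write(std::uint32_t v) { simulatedHeaterReg = v; }  // hidden detail
};

int main() {
    HeatingElementOutput heater;
    heater.switchOn();
    std::printf("heater register = %u\n",
                static_cast<unsigned>(simulatedHeaterReg));
    heater.switchOff();
}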
8.4.3 Proxy Objects
A proxy object interfaces to and communicates with an external system or smart device.
Although an external system can be very different from a smart device, the behavior of the
two types of proxy object is similar. The proxy object is the local representative of the
external system or smart device and hides the details of “how” to communicate with the
external system or smart device. A proxy object is assumed to be concurrent.
An example of a proxy class is a Pick & Place Robot Proxy class. An example
of a behavioral pattern for a proxy object is given in Figure 8.6, which depicts a
concurrent Pick & Place Robot Proxy object that interfaces to and communicates
with the external Pick & Place Robot. The Pick & Place Robot Proxy object
sends pick and place robot commands to the Pick & Place Robot. The real-world
robot responds to the commands.
Figure 8.6. Example of proxy class and object.
Each proxy object hides the details of how to interface to and communicate with the
particular external system. A proxy object is more likely to communicate by means of
messages to an external, computer-controlled system, such as the robot in the above
example, rather than through sensors and actuators, as is the case with device I/O
boundary objects. However, these issues are not addressed until the design phase.
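The following sketch suggests the shape of such a proxy: the rest of the system calls pick and place, while the proxy hides a made-up line-oriented command protocol behind those operations. The protocol, class name, and parameters are all assumptions for illustration.

#include <iostream>
#include <string>

class PickAndPlaceRobotProxy {
public:
    explicit PickAndPlaceRobotProxy(std::ostream& link) : link_(link) {}
    void pick(int partId)   { send("PICK "  + std::to_string(partId)); }
    void place(int station) { send("PLACE " + std::to_string(station)); }
private:
    // "How" to communicate with the robot is hidden inside the proxy.
    void send(const std::string& cmd) { link_ << cmd << '\n'; }
    std::ostream& link_;
};

int main() {
    PickAndPlaceRobotProxy robot(std::cout);  // std::cout stands in for the robot link
    robot.pick(42);
    robot.place(3);
}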
8.4.4 User Interaction Objects
This section addresses real-time embedded systems that need to interact with human users.
A user interaction object communicates directly with a human user, receiving input from
the user and providing output to the user via standard I/O devices, such as the keyboard,
visual display, and mouse. Depending on the user interface technology, the user interface
could be very simple (such as a command line interface) or it could be more complex
(such as a graphical user interface [GUI] object). A user interaction object may be a
composite object composed of several simpler user interaction objects. This means that the
user interacts with the system via several user interaction objects such as windows and
menus. Such objects are depicted with the «user interaction» stereotype. However, it is
initially assumed that only the composite user interaction object is concurrent. Further
design of user interaction objects is addressed during concurrent task design in Chapter
13.
Starting with the software system context diagram for the Microwave Oven System,
we determine that each external block communicates with a boundary class (see Figure
8.8). The software system contains the boundary classes that interface to the external
blocks. In this application, there are eight device I/O boundary classes. The three input
classes are Door Sensor Input, which sends inputs when the oven door is opened or
closed; Weight Sensor Input, which sends item weight inputs; and Keypad Input,
which sends keypad inputs from the user.
The five output classes are the Heating Element Output class (which receives
commands to switch the heater on and off), Lamp Output class (which receives
commands to switch the lamp on and off), Turntable Output class (which receives
commands to start and stop the turntable), Beeper Output class (which receives a
command to beep), and the Oven Display Output class (which displays textual
messages and prompts to the user). There is one instance of each of these boundary classes
for a microwave oven.
Figure 8.8. Microwave Oven System external classes and boundary classes.
8.5 Entity Classes and Objects
An entity object is a software object that stores information. Entity objects are instances
of entity classes, whose attributes and relationships with other entity classes are
determined during static modeling, as described in Chapter 5. There are two kinds of
entity objects: data abstraction objects and database wrapper objects. Entity objects are
assumed to be passive and are therefore accessed directly by concurrent objects via
operation (i.e., method) calls.
In many applications, including real-time embedded systems, the entity objects are
stored in main memory and are referred to as data abstraction objects. Some applications
have a need for the information encapsulated by entity objects to be stored in a file or
database. In these cases, the entity object is persistent, meaning that the information it
contains is preserved when the system is shut down and then later powered up.
Persistent entity classes are often mapped to a database in the design phase. In this
case, the data is stored in the database and access to it is by means of database wrapper
objects. Database wrapper objects encapsulate how to access persistent data, which is
stored on long-term storage devices, such as files and databases held on disk.
However, the data encapsulated by data abstraction objects is stored in main memory and
is therefore not persistent.
Database wrapper objects are, in general, less frequently used in real-time embedded
systems. For example, they might be used during initialization to retrieve system
configuration data or before system shutdown to store data previously gathered. However,
data accessed at initialization time or stored during run time execution is more likely to be
obtained from or stored at a service object, as described in Section 8.7.2. For these
reasons, unless explicitly stated otherwise, an entity object in this book refers to a data
abstraction object. Thus the following examples all relate to entity objects that are data
abstraction objects.
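As a sketch of a data abstraction object, consider a hypothetical Oven Data class holding the cooking time in main memory. Because entity objects are passive and called directly by concurrent objects, access is guarded here with a mutex; synchronization of access to passive classes is properly a design-phase concern addressed in Chapter 14, and the operation names are illustrative.

#include <mutex>

class OvenData {
public:
    void updateCookingTime(int digit) {   // append a keyed-in digit
        std::lock_guard<std::mutex> lock(m_);
        cookingTimeSecs_ = cookingTimeSecs_ * 10 + digit;
    }
    void addSeconds(int s) {              // e.g., Minute Plus adds 60
        std::lock_guard<std::mutex> lock(m_);
        cookingTimeSecs_ += s;
    }
    int decrement() {                     // returns the remaining time
        std::lock_guard<std::mutex> lock(m_);
        if (cookingTimeSecs_ > 0) --cookingTimeSecs_;
        return cookingTimeSecs_;
    }
private:
    std::mutex m_;
    int cookingTimeSecs_ = 0;
};

int main() {
    OvenData ovenData;
    ovenData.updateCookingTime(3);        // user keys in "3"
    ovenData.updateCookingTime(0);        // then "0": 30 seconds
    ovenData.addSeconds(60);              // Minute Plus
    while (ovenData.decrement() > 0) {}   // timer counts the time down
}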
8.6 Control Classes and Objects
8.6.1 State Dependent Control Objects
Although a whole system can be modeled by means of a state machine (see Chapter
7), in object-oriented analysis and design, a state machine is encapsulated inside one
object. In other words, the object is state dependent and is always in one of the states of
the state machine. In an object-oriented model, the state dependent parts of a system are
defined by means of one or more state machines, where each state machine is
encapsulated inside its own object. If the state machines need to communicate with each
other, they do so indirectly since the objects that contain them send messages to each
other, as described in Chapter 9.
A state dependent control object receives incoming events that cause state transitions
and generates output events that control other objects. The output event generated by a
state dependent control object depends not only on the input received by the object but
also on the current state of the object. An example of a state dependent control object
from the Microwave Oven System is the Microwave Oven Control object (see
Figures 8.8 and 8.10), which models the control and sequencing of the microwave
oven and is defined by means of the Microwave Oven Control state machine. In the
example, Microwave Oven Control is shown receiving inputs from an input object,
Door Sensor Input, and controlling two output boundary objects, Heating
Element Output and Oven Display Output.
Figure 8.10. Example of state dependent control class and object.
In a control system, there are usually one or more state dependent control objects. It
is also possible to have multiple state dependent control objects of the same type. Each
object executes an instance of the same state machine, although each object is likely to be
in a different state. An example of this is the Light Rail Control System, which has several
trains, where each train has an instance of the state dependent control class, Train
Control, as shown in Figure 8.11, using the 1..* notation (see Chapter 2) to depict multiple
instances of an object. Each Train Control object executes its own instance of the
Train Control state machine and keeps track of the state of the local train. More
information about state dependent control objects is given in Chapter 9.
Figure 8.11. Example of multiple instances of state dependent control object.
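A sketch of this one-instance-per-train idea follows; the states and events are simplified placeholders, not the Train Control state machine of the case study.

#include <cstdio>
#include <vector>

class TrainControl {
public:
    enum class State { AtStation, Departing, Cruising };  // illustrative states
    void onDepart()  { if (state_ == State::AtStation) state_ = State::Departing; }
    void onAtSpeed() { if (state_ == State::Departing) state_ = State::Cruising; }
    State state() const { return state_; }
private:
    State state_ = State::AtStation;   // each object tracks its own train
};

int main() {
    std::vector<TrainControl> trains(3);  // 1..* instances, one per train
    trains[0].onDepart();
    trains[0].onAtSpeed();
    trains[1].onDepart();                 // each object is in its own state
    std::printf("train0=%d train1=%d train2=%d\n",
                static_cast<int>(trains[0].state()),
                static_cast<int>(trains[1].state()),
                static_cast<int>(trains[2].state()));
}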
8.6.2 Coordinator Objects
A coordinator object is a decision-making object that determines the overall
sequencing for a collection of objects, deciding when, and in what order, other
objects participate in the interaction sequence.
A coordinator object makes its decision based on the input it receives and is not state
dependent. Thus an action initiated by a coordinator object depends only on the
information contained in the incoming message and not on what previously happened in
the system.
8.7 Application Logic Classes and Objects
8.7.1 Algorithm Objects
An algorithm object encapsulates a problem-specific algorithm. An example from
the Light Rail Control System is the Cruiser algorithm class. An
instance of this class, the Cruiser object, calculates what adjustments to the speed should
be made by comparing the current speed of the train with the cruising speed (see Figure
8.14). The algorithm is complex because it must provide gradual acceleration or
deceleration of the train as needed, so as to provide a smooth ride.
An algorithm object frequently has to interact with other objects in order to execute
its algorithm, for example, Cruiser. In this way, it resembles a coordinator object.
However, unlike a coordinator object, whose main responsibility is to supervise other
objects, the prime responsibility of an algorithm object is to encapsulate and execute the
algorithm.
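The case study algorithm is more elaborate, but the following sketch suggests the interface of such an algorithm object, substituting a simple proportional adjustment clamped to a maximum step per control cycle; the gain and limit are invented for illustration.

#include <algorithm>
#include <cstdio>

class Cruiser {
public:
    // Returns a bounded speed adjustment per control cycle, so that
    // acceleration and deceleration remain gradual.
    double speedAdjustment(double currentSpeed, double cruisingSpeed) const {
        double step = kGain * (cruisingSpeed - currentSpeed);
        return std::clamp(step, -kMaxStep, kMaxStep);
    }
private:
    static constexpr double kGain = 0.2;     // illustrative gain
    static constexpr double kMaxStep = 0.5;  // illustrative limit per cycle
};

int main() {
    Cruiser cruiser;
    std::printf("%.2f\n", cruiser.speedAdjustment(10.0, 20.0));  // clamped to 0.50
}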
8.7.2 Service Objects
A service object is an object that provides a service for other objects. Although these
objects are usually provided in client/server or service-oriented architectures and
applications, it is possible for a real-time embedded system to make use of a service, for
example to read configuration data or store status data. Client objects make requests of the
service object, which responds to them. A service object never initiates a
request; however, in response to a service request, it might seek the assistance of other
service objects. Service objects play an important role in service-oriented architectures,
although they are used in other architectures as well, such as client/server architectures
and component-based software architectures. A service object could be designed to
encapsulate the data it needs to service client requests; alternatively, it could be designed
to access one or more separate entity objects that encapsulate the data.
An example of a real-time service class is the Alarm Service class given in Figure
8.15a, from a factory automation example. An instance of this class, the Alarm
Service object, is shown executing in Figure 8.15b. The Alarm Service
object provides support for storing and viewing various factory alarms. In the example, a
Robot Proxy object sends alarms received from an external robot to Alarm Service.
The Operator Interaction object requests Alarm Service to view alarms.
Figure 8.15. Example of service class and object.
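The following sketch suggests what such a service object could look like in code: clients call its operations, and the service never initiates a request of its own. The alarm fields anticipate the View Alarms use case of Chapter 9; the container and operation names are assumptions.

#include <cstdio>
#include <string>
#include <vector>

struct Alarm {
    std::string name, description, location;
    enum class Severity { Low, Medium, High } severity;
};

class AlarmService {
public:
    // Called by Robot Proxy when an alarm is received from the robot.
    void postAlarm(const Alarm& a) { alarms_.push_back(a); }
    // Called by Operator Interaction to view the current alarms.
    std::vector<Alarm> viewAlarms() const { return alarms_; }
private:
    std::vector<Alarm> alarms_;
};

int main() {
    AlarmService service;
    service.postAlarm({"A1", "Robot jam", "Cell 3", Alarm::Severity::High});
    std::printf("%zu alarm(s)\n", service.viewAlarms().size());
}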
8.8 Summary
This chapter has described how to determine the software objects and classes in the real-
time software system. Object and class structuring criteria were provided, and the objects
and classes were categorized by using stereotypes. The emphasis is on problem domain
objects and classes, which are to be found in the real world, and not on solution domain
objects, which are determined at design time. The initial object structuring decisions
assume that boundary, control, and application logic objects are concurrent, while entity
objects are assumed to be passive. These decisions can be revisited at design time if
necessary, as described in Chapter 13.
The object and class structuring criteria are usually applied to each use case in turn during
dynamic interaction modeling, as described in Chapter 9, to determine the objects that
participate in each use case. The sequence of interaction among the objects is then
determined. Subsystem (that is, composite object) structuring criteria are described in
Chapter 10. The design of concurrent tasks using task structuring criteria, as well as
message communication between concurrent tasks, is described in Chapter 13, while the
design of the operations provided by passive classes and synchronization of access to
these classes is described in Chapter 14.
9
Dynamic Interaction Modeling for
Real-Time Embedded Software
◈
There are two main kinds of dynamic interaction modeling. Stateless dynamic
interaction modeling is applied if the interaction sequence does not involve a state
dependent control object. State dependent dynamic interaction modeling is applied if at
least one of the objects is a state dependent control object, in which case the interaction is
state dependent and necessitates the execution of a state machine. State dependent
dynamic interaction modeling is particularly important in real-time embedded systems,
because object interactions in these systems are frequently state dependent.
Section 9.1 gives an overview of object interaction modeling. Section 9.2 describes
message sequence descriptions. Section 9.3 introduces an approach for dynamic
interaction modeling, which can be either state dependent or stateless, depending on
whether the object communication is state dependent or not. Section 9.4 describes a
systematic approach for stateless dynamic interaction modeling with two examples of this
approach provided in Section 9.5. Section 9.6 describes a systematic approach for state
dependent dynamic interaction modeling with an example of this approach provided in
Section 9.7. Appendix A describes the convention for message sequence numbering on
interaction diagrams.
9.1 Object Interaction Modeling
For each use case, the objects that realize the use case dynamically cooperate with each
other and are depicted on either a UML sequence diagram or a UML communication
diagram. An introduction to these interaction diagrams was given in Chapter 2, Sections
2.5 and 2.8. Further examples of using sequence and communication diagrams are given
in the examples in Sections 9.5 and 9.7. Following from Chapter 8, objects are depicted as
concurrent (active) objects except for entity objects, which are depicted as passive objects.
9.1.1 Analysis and Design Decisions in Object Interaction Modeling
During analysis modeling, an interaction diagram (sequence diagram or communication
diagram) is developed for each use case; only objects that participate in the use case are
depicted. The sequence of messages depicted on the interaction diagram should be
consistent with the sequence of interactions between the actor and the system already
described in the use case.
In the analysis model, messages represent the information passed between objects. At
the analysis stage, all messages passed between concurrent (active) objects are assumed to
be asynchronous, while all communication with a passive entity object is assumed to be
synchronous. During design, we might decide that two different messages arriving at a
passive object invoke different operations – or alternatively, the same operation, with the
message name being a parameter of the operation. However, these decisions are postponed
to the design phase. At the analysis stage, it is assumed that all messages passed between
concurrent objects are asynchronous, but this initial decision can be reversed at design
time.
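The asynchronous communication assumed between concurrent objects can be pictured as a message queue on which the sender never waits. The sketch below is one conventional realization using the standard C++ library; it illustrates the communication semantics only, since the actual design of message communication is made in Chapter 13.

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class MessageQueue {   // connects two concurrent objects
public:
    void send(std::string msg) {   // asynchronous: returns immediately
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    std::string receive() {        // receiver waits until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};

int main() {
    MessageQueue q;
    std::thread receiver([&q] { std::printf("%s\n", q.receive().c_str()); });
    q.send("Door Opened");   // the sender continues without waiting
    receiver.join();
}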
9.1.2 Sequence Diagrams and Communication Diagrams in Object Interaction
Modeling
COMET/RTE uses a combination of communication and sequence diagrams.
Communication diagrams are used primarily to present the layout and interconnections
among the objects participating in the use case, while sequence diagrams are used to
depict the details of the sequence of messages passed between the interacting objects.
Sequence diagrams are particularly helpful for intricate object interactions and for timing
diagrams, as described in Chapter 17.
9.4 Stateless Dynamic Interaction Modeling
1. Analyze use case model. For dynamic modeling, consider each interaction
between the primary actor and the system, as described in the main sequence of the
use case. The primary actor starts the interaction with the system through an external
input. The system responds to this input with some internal execution and then
typically provides a system output. The sequence of actor inputs and system
responses is described in the use case. Start by developing the interaction sequence
for the scenario described in the main path of the use case.
2. Determine objects needed to realize use case. This step requires applying the
object structuring criteria (see Chapter 8) to determine the software objects needed to
realize the use case, both boundary objects (2a below) and internal software objects
(2b below).
2a. Determine boundary object(s). Consider the actor (or actors) that participates in
the use case; determine the external objects (external to the system) through which
the actor communicates with the system, and the software objects that receive the
actor’s inputs.
Consider the inputs from each external object to the system. For each external
input event, consider the software object required to process the event. A software
boundary object (such as an input object or user interaction object) is needed to
receive the input from the external object. On receipt of the external input, the
boundary object does some processing and typically sends a message to an internal
(i.e., non-boundary) object.
2b. Determine internal software objects. Consider the main sequence of the use
case. Using the object structuring criteria, determine the internal software objects that
participate in the use case, such as control or entity objects.
3. Determine message communication sequence. For each input event from the
external object, consider the communication required between the boundary object
that receives the input event and the subsequent objects – entity or control objects –
that cooperate in processing this event. Draw a sequence diagram or communication
diagram showing the objects participating in the use case and the sequence of
messages passing between them. This sequence typically starts with an external input
from the actor (external object) to the boundary object, followed by a sequence of
messages among the participating software objects, through to a boundary object that
provides an external output to the actor (external object). Repeat this process for each
subsequent interaction between the actor(s) and the system. As a result, additional
objects might be required to participate, and additional message communication,
along with message sequence numbering, might need to be specified.
In the case of a periodic activity – for example, a report that is generated periodically – it
is necessary to consider a software timer object that is activated by a timer event from an
external hardware timer. The software timer object triggers an entity object or algorithm
object to perform the required activity. In a periodic use case, the external timer is the
actor and the software timer object is the control object. Each significant system output,
such as a report, requires an object to produce the data and then typically send the data to a
boundary object, which outputs it to the external environment.
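A sketch of such a periodic activity follows: a software timer task awakened at a fixed interval triggers a hypothetical reportStatus stand-in for the object performing the activity. The one-second period is illustrative.

#include <chrono>
#include <cstdio>
#include <thread>

void reportStatus() { std::puts("status sent"); }  // stand-in activity

void vehicleTimerTask(int cycles) {
    using namespace std::chrono;
    auto next = steady_clock::now();
    for (int i = 0; i < cycles; ++i) {
        next += seconds(1);                    // fixed period (illustrative)
        std::this_thread::sleep_until(next);   // sleep_until avoids drift
        reportStatus();                        // trigger the periodic activity
    }
}

int main() { vehicleTimerTask(3); }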
9.5 Examples of Stateless Dynamic Interaction
Modeling
Two contrasting examples are given of stateless dynamic interaction modeling. The first
example starts with the use case for View Alarms, in which the primary actor is a human
user. The second example starts with the use case for Send Vehicle Status, in which the
primary actor is an external timer. Both examples follow the four steps for dynamic modeling
described in Section 9.4, although because they are simple examples, there are no
alternative sequences. An example of alternative sequences is given in Section 9.7.
9.5.1 View Alarms Example
Main sequence:
1. User requests to view alarms.
2. The system displays the current alarms. For each alarm, the system
displays the name of the alarm, alarm description, location of alarm, and
severity of alarm (high, medium, low).
Figure 9.1. Use case diagram for the View Alarms use case.
Figure 9.2. Sequence diagram for the View Alarms use case.
Figure 9.3. Communication diagram for the View Alarms use case.
Figure 9.4. Use case diagram for the Send Vehicle Status use case.
Summary: The vehicle sends status information about its location and speed
to the driver.
Main sequence:
1. Digital Timer notifies the System that the timer has expired.
2. System reads the vehicle's location and speed status information.
Postcondition: System has sent location and speed status information to the
Driver.
2. Determine Objects Needed to Realize Use Case
The software objects that realize this use case are the Vehicle Timer (which receives
timer events from the external Digital Timer), Vehicle Data, which stores location
and speed status information, and Vehicle Display Output, which sends vehicle
status to the external Driver.
Figure 9.5. Sequence diagram for the Send Vehicle Status use case.
Figure 9.6. Communication diagram for the Send Vehicle Status use case.
The message sequence starts with the external timer event from the external
Digital Timer, and is described next:
1. Digital Timer sends a timer event to Vehicle Timer.
2. Vehicle Timer reads speed and location data from Vehicle Data.
3. Vehicle Timer sends the vehicle status to Vehicle Display Output, which
outputs it to the external Driver.
It is possible for an event not to have any data associated with it; for example, the event
Door Closed does not have any attributes.
The message name corresponds to the name of the event. The message parameters
correspond to the message attributes. Thus, for interaction diagrams, we can use the terms
“event sequence” and “message sequence” synonymously. To understand the sequence of
interactions among objects, we often initially concentrate on the events; hence the term
event sequence analysis.
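The correspondence can be pictured as message types whose fields are the event attributes, as in the following sketch (types are illustrative):

// The message name is the event name; the message parameters are the
// event's attributes.
struct DoorClosed {};                       // event with no attributes
struct CookingTimeEntered { int digit; };   // event with one attribute

int main() {
    DoorClosed shut;                  // carries no data
    CookingTimeEntered keyed{5};      // carries the keyed-in digit
    (void)shut; (void)keyed;
}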
9.6.2 Steps in State Dependent Dynamic Interaction Modeling
In state dependent dynamic interaction modeling, the objective is to determine the
interaction between the following objects:
The state dependent control object, which executes the state machine.
The objects (usually software boundary objects) that send events to the control
object. These events cause state transitions in the control object's internal state
machine.
The objects that provide and execute the actions and activities, which are triggered
by the control object as a result of the state transitions.
The main steps in the state dependent dynamic interaction modeling approach are as
follows. The sequence of interactions needs to reflect the main sequence of interactions
described in the use case:
1. Determine the boundary object(s). Consider the objects that receive the inputs
sent by the external objects in the external environment.
2. Determine the state dependent control object. There is at least one state
dependent control object, which executes the state machine. Others might also be
required.
3. Determine the other software objects. These are the software objects that interact
with the control object by executing actions or activities, as well as additional
boundary objects.
4. Determine object interactions in the main sequence scenario. Carry out this
step in conjunction with step 5 because the interaction between the state dependent
control object and the encapsulated state machine it executes needs to be determined
in detail.
5. Determine the execution of the state machine. This is described in the next
section.
6. Consider alternative sequence scenarios. Perform the state dependent dynamic
analysis on scenarios described by the alternative sequences of the use case. This is
also described in the next section.
9.6.3 Modeling Interaction Scenarios Controlled by State Machines
This section describes how interaction diagrams – in particular, sequence diagrams and
communication diagrams – can be used with state machines to model state-dependent
interaction scenarios, as outlined in steps 5 and 6 above.
A source object sends an event to the state dependent control object. The arrival of
this input event causes a state transition on the state machine. The effect of the state
transition is one or more output events. The state dependent control object sends each
output event to a destination object. An output event is depicted on the state machine as an
action (which can be a state transition action, an entry action, or an exit action), an enable
activity, or a disable activity.
To ensure that the interaction diagram and state machine are consistent with each
other, the equivalent interaction diagram message and state machine event are given the
same name. Furthermore, for a given state dependent scenario, it is necessary to use the
same event sequence numbering on both diagrams. Using the same sequence numbers
ensures that the scenario is represented accurately on both diagrams and can be reviewed
for consistency.
An initial state machine might have already been developed to get a better
understanding of the state dependent parts of the system, as described in Chapter 7. At this
stage, the initial state machine probably needs further refinement. If the state machine was
developed prior to the interaction diagram, it needs to be reviewed to see if it is consistent
with the interaction diagram and, if necessary, modified.
Developing the interaction diagram and the state machine is usually iterative; each
input event (to the control object and its state machine) and each output event (from the
state machine and control object) needs to be considered in sequence. This analysis can be
broken down further as follows:
1. The arrival of an event at the state dependent control object (often from a boundary
object) causes a state transition. For each state transition, determine all the actions
and activities that result from this change in state. Remember that an action is
executed instantaneously, whereas an activity executes for a finite amount of time –
conceptually, an action is executed at a state transition, and an activity executes for
the duration of the state. When triggered by a control object at a state transition, an
action executes instantaneously and then terminates itself. An activity is enabled by
the control object on entry into the state and disabled by the control object on exit
from the state. (A code sketch contrasting actions and activities follows this list.)
Determine all the objects that execute the identified actions and activities. It is also
necessary to determine if any activity should be disabled.
2. For each triggered or enabled object, determine what messages it generates and
whether these messages are sent to another object or output to the external
environment.
3. Depict the incoming external event and the subsequent internal events on both the
state machine and the interaction diagram. The events are numbered to show the
sequence in which they are executed. The same event sequence numbers are used on
the communication diagram, the sequence diagram, and the state machine, as well as
on the message sequence description that describes the object interactions.
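The following sketch contrasts an action with an activity: a hypothetical beep action runs to completion at the transition, while a turntable activity runs for the duration of the state, enabled on entry and disabled on exit. Modeling the activity as a flag-polled worker thread is purely an implementation assumption.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> turning{false};

void turntableActivity() {                   // activity: runs while enabled
    while (turning)
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

void beepAction() { std::puts("beep"); }     // action: executes instantaneously

int main() {
    turning = true;                          // enable activity on state entry
    std::thread worker(turntableActivity);
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    beepAction();                            // a transition action fires once
    turning = false;                         // disable activity on state exit
    worker.join();
}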
When the state dependent dynamic analysis has been completed for the main sequence,
the alternative sequences need to be considered as follows:
1. Analyze the alternative branches described in the use case to develop additional
states and transitions in the state machine. For example, alternative branches are
needed for error handling.
2. Ensure that each action and activity has been performed at least once, so that each
state dependent action has been triggered and each state dependent activity has been
enabled and subsequently disabled.
9.7 Example of State Dependent Dynamic
Interaction Modeling: Microwave Oven System
As an example of state dependent dynamic interaction modeling, consider the following
example from the Microwave Oven System, the Cook Food use case, which is described
in Chapter 6, Section 6.6.1. The software objects that participate in the realization of this
use case are determined by applying the class and object structuring criteria described in
Chapter 8. As described in Section 8.4, there is a need for software boundary objects, in
particular input and output objects, since the user interacts with the system via several
external devices.
a) To communicate with the external input devices, the corresponding input objects
are Door Sensor Input, Weight Sensor Input, and Keypad Input.
b) To communicate with the external output devices, the corresponding output
objects are Heating Element Output, Lamp Output, Turntable Output,
Beeper Output, and Oven Display Output.
c) Because of the need to measure elapsed cooking time, there needs to be a software
timer object, Oven Timer.
d) There also needs to be an entity object to store the cooking time, which is called
Oven Data, and an entity object to store Oven Prompts.
e) Furthermore, to provide the overall control and sequencing for the microwave
oven, there is a need for a control object, Microwave Oven Control. Since the
actions of this control object depend on what happened previously, the control object
needs to be state dependent and therefore execute a state machine.
By executing the Microwave Oven Control state machine, the state dependent control
object, Microwave Oven Control, controls the execution of several objects. To fully
understand and design the state dependent interactions, it is necessary to analyze how the
interaction diagram and state machine work together. A message on the interaction
diagram and its equivalent event on the state machine are given the same name and
sequence number to emphasize how the diagrams work together. First, the main sequence
is considered, followed by the alternative sequences.
9.7.1 Determine Main Sequence
Consider the main sequence of the Cook Food use case, which is described in Chapter 6,
Section 6.6. It describes the user opening the microwave oven door, inserting the food into
the oven, then entering the cooking time and pressing the Start button. The system sets the
timer and starts cooking the food. When the timer expires, the system stops cooking the
food. The user then opens the door to remove the food.
This use case starts when the user opens the oven door, which is detected by the door
sensor. The message sequence number starts at 1, which is the first external event initiated
by the user actor, as described in the Cook Food use case. Subsequent numbering in
sequence, representing the objects in the system reacting to the input event from the
external object, is 1.1 and 1.2. The next input event from the external environment is the
external event from the weight sensor numbered 2, and so on. The object interactions for
the main sequence scenario are shown on the sequence diagram in Figure 9.7, continued
in Figure 9.8; these figures depict external input and timer objects in addition to software
objects but, for space reasons, do not depict external output objects.
Figure 9.7. Sequence diagram for Cook Food use case: main sequence scenario.
Figure 9.8. Sequence diagram for Cook Food use case: main sequence scenario
(continued).
The message sequencing on the object interaction diagrams is faithful to the main
sequence of the use case as given by the use case description. The message sequence from
1 to 1.2 starts with the door being opened by the user and detected by the hardware Door
Sensor. Door Sensor then passes this input event to the software Door Sensor
Input object, which consequently sends the Door Opened (message 1.1 on Figure 9.7)
message to Microwave Oven Control. This state dependent control object executes the
Microwave Oven Control state machine, shown in Figure 9.9. The Door Opened
event (event 1.1 on Figure 9.9) causes the state machine to transition from the initial state,
Door Shut, to Door Open. The resulting state machine action Switch On (event 1.2)
leads to the Microwave Oven Control object sending the Switch On (message 1.2 on
Figure 9.7) message to the Lamp Output object.
Figure 9.9. State machine diagram for Cook Food use case: main sequence scenario.
The message sequence from 2 to 2.1 follows a similar sequence involving the
Weight Sensor Input object. Arrival of the Item Placed message (message 2.1 on
Figure 9.7) causes the state machine to transition to the state Door Open with Item
(event 2.1 on Figure 9.9). This sequence is followed by the message sequence from 3 to
3.2, which involves closing the oven door and again involves the Door Sensor Input
object, which sends the Door Closed message (3.1) causing the state machine to
transition to Door Shut Waiting for User and the action Switch Off.
The message sequence from 4 to 4.4 starts with the user pressing the Cooking Time
button (message #4 on Figure 9.7), which is received by the Keypad Input, which sends
the Cooking Time Selected message (#4.1) to Microwave Oven Control, which
then transitions to Door Shut Waiting for Cooking Time. The resulting action is
Prompt for Time (action #4.2 on Figure 9.9), which is sent as a message to Oven
Display Output, which then, given the prompt id, reads the time prompt from the Oven
Prompts entity object (messages #4.3, 4.4 on Figure 9.7) and outputs it to the user. The
user then enters the cooking time by pressing the appropriate number one or more times
(message 5). For each digit, Keypad Input sends a Cooking Time Entered message
(#5.1) containing the digit to Microwave Oven Control. The state machine then
transitions to Ready to Cook state. There are two actions associated with this transition,
Display Cooking Time, which is sent to Oven Display Output (#5.2) for output to
the display and Update Cooking Time, which adds the digit to the cooking time stored
in the Oven Data entity object (#5.2a). Note that because these actions are concurrent,
they are labeled 5.2 and 5.2a, according to the numbering convention for concurrent
events and messages (see Appendix A).
When the user presses the Start key, the external Keypad object sends the Start
Pressed message (message 6 on Figure 9.8) to the software Keypad Input object,
which in turn sends a Start message (message 6.1) to the Microwave Oven Control
object. The arrival of the message triggers the Start event on the Microwave Oven
Control state machine (event 6.1 on Figure 9.9), which in turn causes the state transition
from the Ready to Cook state to the Cooking state. The resulting concurrent actions are
the transition action Start Timer (action 6.2a on Figure 9.9) and the entry actions
Start Cooking (action 6.2), Switch On (action 6.2b), and Start Turning (action
6.2c). These four actions correspond to the four messages of the same name sent
concurrently (i.e., at the same time) by Microwave Oven Control on Figure 9.8:
Start Cooking (message 6.2) to the Heating Element Output object, Start
Timer (message 6.2a) to the Oven Timer object, Switch On (message 6.2b) to Lamp
Output, and Start Turning (message 6.2c) to Turntable Output.
While cooking the food, the Oven Timer continually decrements the cooking time
(messages 7, 7.1, 7.2 on Figure 9.8) stored in Oven Data. When the timer counts down to
zero (#8, 8.1, 8.2), the Oven Timer object sends the Timer Expired message (#8.3 on
Figures 9.8 and 9.9) to Microwave Oven Control and sends the Display End
Prompt (#8.3a) to the Oven Display Output object. The Timer Expired event
causes the state machine to transition to Door Shut Waiting for User state (Figure
9.9) and execute four concurrent actions, the two exit actions Stop Cooking (action 8.4)
and Stop Turning (action 8.4c), as well as the two transition actions Beep (action 8.4a)
and Switch Off (action 8.4b). These four actions correspond to the four messages of the
same name sent concurrently by Microwave Oven Control on Figure 9.8: Stop
Cooking (message 8.4) to the Heating Element Output object, Beep (message 8.4a)
to Beeper Output, Switch Off (message 8.4b) to Lamp Output, and Stop
Turning (message 8.4c) to Turntable Output.
Various concurrent sequences are shown in Figures 9.7 and 9.8. For example,
Microwave Oven Control simultaneously sends messages to display the cooking time
(#5.2) and update the cooking time in Oven Data (#5.2a); Oven Timer sends the
Timer Expired message to Microwave Oven Control (#8.3) and the Display End
Prompt to Oven Display Output (#8.3a).
The message sequence description, which describes the messages on the sequence
diagram (shown on Figures 9.7 and 9.8) and the events on the state machine diagram
(shown in Figure 9.9), is described in detail in the Microwave Oven Control System case
study in Section 19.6.
9.7.2 Determine Alternative Sequences
The interaction sequence described in the previous section follows the main sequence
described in the use case. Next, consider the alternative sequences of the Cook Food use
case, which are given in the Alternatives section of the use case (given in full in Chapter
6). Some alternatives have little impact on the system. However, there are three
alternatives of note in the use case, which impact both the interaction diagrams and the
state machine: two involve the user pressing the Minute Plus button, and the third
involves the user opening the door during cooking.
9.7.3 Alternative Minute Plus Scenarios
The Minute Plus alternative scenarios affect the Cook Food sequence diagram in
different ways. If Minute Plus is pressed after cooking has started, then the cooking time
is updated. If Minute Plus is pressed before cooking has started, then the cooking time is
updated and cooking is started (assuming that the oven has the door shut and there is an
item in the oven).
The two alternative Minute Plus scenarios are depicted inside the alt frame (drawn
as a rectangle with an alt title in the top left corner) on the sequence diagram in Figure
9.10, in which an alternative sequence is identified by a [condition] that must be True for
it to be executed. The conditions reflect whether the microwave oven is [Cooking] or [Not
Cooking] at the start of each alternative sequence. A dashed line is the separator between
the two alternative sequences.
Both Minute Plus scenarios start in the same way. The user presses the Minute Plus
button on the keypad after pressing the Start button (message 6). This is depicted in
Figure 9.10 as the Keypad external input device sending the Minute Plus Pressed
message (6.10). Keypad Input sends the Minute Plus message (shown as message
6.11) to Microwave Oven Control. What follows is state dependent and depicted in the
alt segment. If cooking is in progress, the alternative sequence for the [Cooking] condition
is taken: Microwave Oven Control sends an Add Minute message (6.12) to Oven
Timer, which adds sixty seconds to the cooking time in Oven Data (messages 6.13 and
6.14). The scenario then exits the alternative sequence, rejoins the main sequence, and
sends the new time to Oven Display Output (6.15), which in turn outputs the
Display Time message (6.16) to the external display.
Figure 9.10. Sequence diagram for the Cook Food use case: impact of the Minute
Plus alternative scenarios.
The alternative scenario of pressing Minute Plus when cooking is not in progress is
depicted with an alternative message sequence starting with 4M. Keypad Input sends
the Minute Plus message (4M.1) to Microwave Oven Control. Microwave Oven
Control behaves differently in this situation, as depicted by the alternative sequence for
the [Not Cooking] condition on Figure 9.10, by sending a Start Minute message
(4M.2) to Oven Timer and a Start Cooking message (4M.2a) to Heating Element
Output. Oven Timer then sets the cooking time to sixty seconds in Oven Data
(message 4M.3). The scenario then rejoins the main sequence and sends the new time to
Oven Display Output (message 4M.3a), which in turn outputs the Display Time
message (4M.4) to the external display. To avoid clutter on the sequence diagram for the
Not Cooking alternative scenario, the Lamp Output and Turntable Output objects are
omitted, as well as the Switch On and Start Turning messages sent respectively to
them by Microwave Oven Control. These interactions are similar to those for the
Cook Food main sequence diagram in Figure 9.8.
9.7.4 Impact of Alternative Scenarios on State Machine
Consider the impact of the Minute Plus alternative scenarios on the Microwave Oven
Control state machine, which is depicted in Figure 9.11. If the oven is in Cooking state
when the Minute Plus button is pressed, the Minute Plus event (#6.11) causes a
transition back to the Cooking state, and the action is to Add Minute (#6.12). Entry and
exit actions are not affected by this internal transition. However, if Minute Plus is pressed
from Door Shut Waiting for User state, the Minute Plus event (#4M.1) causes a
transition to Cooking state. The effects of this transition are the execution of four
concurrent actions: the three entry actions Start Cooking, Start Turning, and
Switch On, and the transition action Start Minute (#4M.2).
Figure 9.11. State machine diagram for Cook Food use case: Open Door while Cooking
and Minute Plus scenarios.
Consider the alternative scenario in which the door is opened while the food is
cooking. This causes the state machine to transition from Cooking state to Door Open
with Item state. Since this event occurs after cooking has started (i.e., after event 6.3
and assumed to be after the first timer event 7 on Figure 9.7), we assign the event Door
Opened on the state machine (Figure 9.11) the sequence number 7.10. There are three
concurrent actions resulting from this transition: the two exit actions Stop Cooking and
Stop Turning, and the transition action Stop Timer. From this state, the user could
close the door (event 7.12), causing the state machine to transition to Ready to Cook
state (since the condition [Time Remaining] is True) or remove the item, causing the
state machine to transition to Door Open state.
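To make the state-dependent behavior concrete, the following minimal C++ sketch (not
taken from the case study) encodes the transitions just described. The state and event
names follow Figure 9.11; the action functions are hypothetical stand-ins for the
messages sent to the timer, heating element, turntable, and lamp objects.

#include <cstdio>

// Simplified subset of the Cook Food state machine (Figure 9.11).
enum class OvenState { DoorShutWaitingForUser, Cooking, DoorOpenWithItem };
enum class Event { MinutePlus, DoorOpened };

// Hypothetical action stubs standing in for messages to other objects.
void addMinute()    { std::puts("Add Minute -> Oven Timer"); }
void startMinute()  { std::puts("Start Minute -> Oven Timer"); }
void startCooking() { std::puts("Start Cooking -> Heating Element Output"); }
void startTurning() { std::puts("Start Turning -> Turntable Output"); }
void switchOn()     { std::puts("Switch On -> Lamp Output"); }
void stopCooking()  { std::puts("Stop Cooking -> Heating Element Output"); }
void stopTurning()  { std::puts("Stop Turning -> Turntable Output"); }
void stopTimer()    { std::puts("Stop Timer -> Oven Timer"); }

OvenState handleEvent(OvenState state, Event event) {
    switch (state) {
    case OvenState::Cooking:
        if (event == Event::MinutePlus) {     // internal transition (#6.11):
            addMinute();                      // entry/exit actions do not run
            return OvenState::Cooking;
        }
        if (event == Event::DoorOpened) {     // alternative scenario (#7.10)
            stopCooking();                    // exit action
            stopTurning();                    // exit action
            stopTimer();                      // transition action
            return OvenState::DoorOpenWithItem;
        }
        break;
    case OvenState::DoorShutWaitingForUser:
        if (event == Event::MinutePlus) {     // transition to Cooking (#4M.1)
            startMinute();                    // transition action (#4M.2)
            startCooking();                   // entry action
            startTurning();                   // entry action
            switchOn();                       // entry action
            return OvenState::Cooking;
        }
        break;
    case OvenState::DoorOpenWithItem:
        break;
    }
    return state;  // event has no effect in this state
}

int main() {
    OvenState s = OvenState::DoorShutWaitingForUser;
    s = handleEvent(s, Event::MinutePlus);  // starts cooking for sixty seconds
    s = handleEvent(s, Event::MinutePlus);  // adds sixty seconds while Cooking
    s = handleEvent(s, Event::DoorOpened);  // stops cooking, turntable, timer
}

Note how the same Minute Plus event produces different behavior depending on the
current state: an internal transition in Cooking state, but a full transition with entry
actions from Door Shut Waiting for User state.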
9.8 Summary
This chapter has described dynamic interaction modeling, in which the objects that
participate in each use case are determined, as well as the sequence of their interactions.
It described the details of the dynamic interaction modeling approach for determining
how objects collaborate with each other. State dependent dynamic interaction modeling
involves a collaboration controlled by a state machine, whereas stateless dynamic
interaction modeling does not.
During the transition from analysis to design described in Chapter 10, the interaction
diagrams corresponding to each use case are synthesized into an integrated
communication diagram, which represents the first step in developing the software
architecture of the system. During analysis, all message interactions are depicted as
asynchronous messages between concurrent objects and synchronous messages for
communication with passive entity objects. During design, these decisions can be
changed, as described in Chapter 13. Appendix A describes message sequence numbering
conventions on interaction diagrams and state machines, as used in the examples in this
chapter and in the case studies.
10
Software Architectures for Real-Time
Embedded Systems
◈
Developing the software architecture is the first step in software design modeling.
Whereas requirements modeling addresses analyzing and specifying software
requirements, and analysis modeling considers the problem domain from static and
dynamic modeling perspectives, the software architecture addresses the solution domain.
During analysis modeling, dynamic interaction modeling considers the software system
from a use case–based perspective, determining the software objects required to realize
each use case and the interaction sequence of these objects. During software architecture,
the use case–based interaction diagrams are synthesized into an initial software design,
from which the software architecture can be developed.
A real-time system can be effectively designed using concurrent tasks for the reasons
given in Chapter 3. In particular, a multitasking design permits a real-time system to
manage multiple streams of input events in parallel (i.e., one stream per task). It also
allows a real-time system to handle multiple periodic or aperiodic events concurrently,
including external events that arrive from sources outside the system and internal events
that arrive from other tasks. In concurrent designs, tasks can communicate with each other
using different architectural communication patterns, as described in Chapter 11, including
synchronous and asynchronous communication. A multitasking design can be deployed to
multiple nodes in a distributed environment since each task can execute on a separate
node. One approach for a distributed configuration is to preassign each task to a given
node. However, greater flexibility in task deployment can be achieved using a component-
based software architecture, as described next.
10.1.3 Component-Based Software Architectures
A component-based software architecture consists of multiple components that are each
self-contained and encapsulate information. A component is either a composite component
or a simple component. Unless explicitly stated, the term component refers to both a
component type and a component instance.
Figure 10.1. Structural view of software architecture: high level class diagram for Light
Rail System.
A composite structure diagram (as described in Section 2.10) depicts the static
structural relationship between components. The diagram depicts component types (and in
some cases component instances), ports, and connectors that join the component ports
together. Such diagrams also allow the provided and required interfaces of each
component to be explicitly specified; both topics are described in detail in Chapter 12.
An example of a composite structure diagram for the Light Rail System is given in
Figure 10.2, which depicts the subsystems as concurrent component types and the
connectors that join the components together. Four of the components constitute the parts
of the Light Rail Control System (see Figure 10.1) and two represent the other
software systems, Railroad Crossing System and Wayside Monitoring System.
Several connectors are depicted, for example there is one connector between Rail
Operations Interaction and Train Control, one connector between Rail
Operations Interaction and Station, and one connector between Train Control
and Station. Each component is depicted with both a role stereotype and an architecture
stereotype. Thus, Train Control is depicted with the role stereotype «control» and the
architecture stereotype «component». Five of the components are clients of Rail
Operations Service. The composite structure diagram is described further in Chapter
12, and this example is described in more detail in Chapter 21.
Figure 10.2. Structural view of software architecture: composite structure diagram for
Light Rail System.
10.2.2 Dynamic View of a Software Architecture
The dynamic view of a software architecture is a behavioral view, which is depicted on a
communication diagram. A subsystem communication diagram shows the subsystems
(depicted as composite objects or composite component instances) and the message
communication between them. If the subsystems can be deployed to different nodes, they
are depicted as concurrent component instances, since they execute in parallel and
communicate with each other over a network. The subsystem communication diagram is
also sometimes referred to as a high level communication diagram.
An example of the dynamic view of the software architecture for the Light Rail
System depicts the six concurrent components (from Figure 10.2) on a subsystem
communication diagram in Figure 10.3. Four of the six concurrent components are part of
the Light Rail Control System: there are many instances of Train Control,
Station, and Rail Operations Interaction, and one instance of Rail
Operations Service. Each instance of Train Control sends train arrival and
departure status messages to each instance of Station. Rail Operations
Interaction sends train command messages to a given instance of Train Control to
transition into and out of service. All communication between the distributed components
is asynchronous, except for the synchronous communication between Rail Operations
Interaction and Rail Operations Service.
Figure 10.3. Dynamic view of software architecture: subsystem communication
diagram for Light Rail System.
An example of the deployment view is given in Figure 10.4 for the software
architecture of the Light Rail System. In this deployment diagram, each instance of
Railroad Crossing Control is allocated to its own physical node, as is each instance
of the Wayside Monitoring System. There are multiple instances of Train Control,
Station, and Rail Operations Interaction, each of which is allocated to its own
physical node. There is one instance of Rail Operations Service, which is assigned
to a physical node. The nodes are geographically distributed and connected by a wide area
network. Communication with a mobile component, such as Train Control, of which
there is one instance for each train, needs to be by wireless communication.
Figure 10.4. Deployment view of software architecture: deployment diagram for Light
Rail System.
10.3 Transition from Analysis to Design
During the dynamic interaction modeling step of the analysis modeling phase (see Chapter
9), the objects that realize each use case are determined, and the sequence of object
interactions is depicted on use case–based interaction diagrams. Thus,
the analysis is carried out on a use case–by–use case basis. To transition from analysis to
design and to structure the system into component-based subsystems, it is necessary to
synthesize an initial software design from the dynamic interaction model, by integrating
the use case–based interaction diagrams. Although dynamic interaction between objects
can be depicted on either sequence diagrams or communication diagrams, this integration
needs to be depicted on communication diagrams because these diagrams visually depict
the interconnection between objects, as well as the messages passed between them.
In the analysis model, at least one interaction diagram is developed for each use case.
The integrated communication diagram is a synthesis of all the communication
diagrams developed to realize the use cases and is developed as follows:
Frequently, there is a precedence order in which use cases are executed. The order of
the synthesis of the communication diagrams should correspond to the order in which the
use cases are executed. From a visual perspective, the integration is done as follows: Start
with the communication diagram for the first use case and superimpose the
communication diagram for the second use case on top of the first to form an integrated
diagram. Next, superimpose the third diagram on top of the integrated diagram of the first
two, and so on. In each case, add new objects and new message interactions from each
subsequent diagram onto the integrated diagram, which gradually gets bigger as more
objects and message interactions are added. Objects and message interactions that appear
on more than one communication diagram are only shown once.
It is important to realize that the integrated communication diagram must show all
message communication derived from the individual use case–based communication
diagrams. Communication diagrams often show the main sequence through a use case, but
not necessarily all the alternative sequences. In the integrated communication diagram, it
is necessary to show the messages that are sent as a result of executing the alternative
sequences in addition to the main sequence through each use case.
The integrated communication diagram is thus a synthesis of all relevant use case–
based communication diagrams showing the realization of all use case scenarios, and all
objects and their interactions. The integrated communication diagram is represented as a
generic UML communication diagram (see Section 10.2.2), which means that it depicts all
possible interactions between the objects. On the integrated communication diagram,
objects and messages are shown, but the message sequence numbering is usually not
shown because this would make the diagram too cluttered. As with the use case–based
interaction diagrams, messages on the integrated communication diagram are depicted as
asynchronous messages between concurrent objects and synchronous when
communicating with a passive object. These initial decisions can later be reversed, when
decisions about the type of message communication (synchronous or asynchronous) are
finalized, as described in Section 10.6.
For a large system, the integrated communication diagram can get very complicated;
it is therefore necessary to have ways to reduce the amount of information depicted. One
way to reduce the amount of information on the diagram is to aggregate the messages –
that is, if one object sends several individual messages to another, instead of showing all
these messages on the diagram, use one aggregate message. The aggregate message is a
useful way of grouping messages to reduce clutter on the diagram. It does not represent an
actual message sent from one object to another; rather it represents the collection of
messages sent at different times between the same pair of objects. For example, the
messages sent by the Railroad Crossing Control object to the Rail Operations
Proxy object in Figure 10.5 are aggregated into an aggregate message called Status
Messages. A message dictionary is then used to define the contents of Status
Messages, as shown in Table 10.1.
Table 10.1. Definition of the Status Messages aggregate message
Status Messages = Train Arrived, Train Departed, Barrier Raised, Barrier Lowered,
Barrier Raising Timeout Message, Barrier Lowering Timeout Message
Furthermore, showing all the objects on one communication diagram might not be
practical. A solution to this problem is to develop a higher-level subsystem
communication diagram to show the interaction between the subsystems and to develop an
integrated communication diagram for each subsystem.
A user interaction client subsystem could support a simple user interface, such as a
command line interface, or a graphical user interface that contains multiple objects. A
simple user interaction client subsystem would have a single thread of control.
In the Light Rail System shown in Figures 10.2 and 10.3, there is one service
subsystem, Rail Operations Service. The Train Control, Station,
Railroad Crossing System, Wayside Monitoring System, and Rail
Operations Interaction components are all clients of Rail Operations
Service. Examples of client subsystems from the emergency monitoring system, shown
in Figure 10.12, are the Monitoring Sensor Component, Remote System Proxy
and Operator Presentation subsystems, which are described in the next section.
10.5.8 Service Subsystem
A service subsystem is a subsystem that provides a service for client subsystems. It
responds to requests from client subsystems, although it does not initiate any requests.
Service subsystems are usually composite objects that are composed of two or more
objects. These include entity objects, coordinator objects, which service client requests
and determine what object should be assigned to handle them, and application logic
objects, which encapsulate application specific logic, such as algorithms. Frequently, a
service is associated with a data repository or a set of related data repositories, or it might
provide access to a database or a file system.
An example of a system with one data service subsystem is the Light Rail System,
which has a service subsystem, Rail Operations Service, to maintain the current
status of the trains and stations in the system, as depicted in Figures 10.2 and 10.3.
Examples of multiple data service subsystems come from the emergency monitoring system,
in which the Alarm Service and the Monitoring Data Service subsystems, shown
in Figure 10.12, store current and historical alarm and sensor data respectively.
Monitoring Data Service receives new sensor data from the Monitoring Sensor
Component and Remote System Proxy subsystems. Sensor data is requested by other
client subsystems, such as the Operator Presentation subsystem, which displays the
data.
Figure 10.12. Examples of client and service subsystems in the emergency monitoring
system.
Another example of a data service is the Sensor Data Service shown in Figure
10.11, which stores current and historical sensor data. It receives new sensor data from the
Sensor Data Collection subsystem. Sensor data is requested by other subsystems,
such as multiple instances of the Operator Interaction subsystem, which displays
the data. The design of concurrent service subsystems is described in Chapter 12.
10.6 Decisions about Message Communication
between Subsystems
In the transition from analysis to design, one of the most important decisions relates to
what type of message communication is needed between the subsystems. A second related
decision is to determine more precisely the name and parameters of each message, that is,
the interface specification. In the analysis model, an initial decision is made about the type
of message communication. In addition, the emphasis is on the information passed
between objects, rather than on precise message names and parameters. In design
modeling, after the subsystem structure is determined (as described in Section 10.5), a
decision has to be made about the precise semantics of message communication, such as
whether message communication will be synchronous or asynchronous, introduced in
Chapters 2 and 3, and the precise content of the message.
Figure 10.13b shows the result of message design decisions concerning the type of
message communication between the subsystems. Figure 10.13b depicts the decision to
use asynchronous message communication between the producer and consumer because
this is one-way message communication with no reason to hold up the producer. By
contrast, synchronous message communication is used between the client and service
because the client needs to wait for the response from the service. In addition, the precise
name and parameters of each message are determined. The asynchronous message has the
name send Asynchronous Message and content called message. The synchronous
message has the name send Synchronous Message with Reply, with the input
content called message and the service’s reply called response.
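These two semantics can be sketched in code. The following C++ fragment is illustrative
only – the function names mirror the message names above, and the bodies are
placeholders – contrasting one-way asynchronous communication, where the producer
continues immediately, with synchronous communication with reply, where the client
blocks until the response arrives.

#include <iostream>
#include <string>
#include <thread>

// Asynchronous (one-way) communication: the producer does not wait.
// Returning the thread lets the demo join it before exiting.
std::thread sendAsynchronousMessage(const std::string& message) {
    return std::thread([message] {
        std::cout << "consumer received: " << message << '\n';
    });
}

// Synchronous communication with reply: the client blocks until the
// service has processed the request and returned the response.
std::string sendSynchronousMessageWithReply(const std::string& message) {
    return "response(" + message + ")";  // service processing modeled as a call
}

int main() {
    std::thread consumer = sendAsynchronousMessage("sensor update"); // producer continues
    std::string response = sendSynchronousMessageWithReply("query"); // client waits here
    std::cout << "client received: " << response << '\n';
    consumer.join();  // demo cleanup only; the producer never waited for a reply
}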
During software design modeling, design decisions are made relating to the
characteristics of the software architecture. In designing the overall software architecture,
it helps to consider applying the software architectural patterns, both architectural
structure patterns and architectural communication patterns. Chapter 11 describes the
software architectural design patterns and how they can be used in the design of real-time
embedded systems. Chapter 12 describes the design of component-based software
architectures, including the design of component interfaces, with component ports that
have provided and required interfaces, and connectors that join compatible ports. Chapter
13 describes the design of real-time software architectures, which are concurrent
architectures that frequently have to deal with multiple streams of input events. Chapter 14
describes the detailed design of software architectures. Chapter 15 describes the design of
software product line architectures, which are architectures for families of products that
need to capture both the commonality and variability in the family.
System and software quality issues in developing the software architecture of real-
time embedded systems are described in Chapters 16 and 17. Chapter 16 describes the
system and software quality attributes of a real-time system and how they are used to
evaluate the quality of the software architecture. Chapters 17 and 18 describe
performance analysis of software designs. Chapters 19 to 23 provide case study examples
of applying COMET/RTE to the modeling and design of different real-time embedded
software architectures.
11
Software Architectural Patterns for
Real-Time Embedded Systems
◈
In software design, designers frequently encounter a problem that they have solved before
on a previous project. Often the context of the problem is different; it might be a different
application, a different platform, or a different programming language. Because of the
different context, a designer usually ends up redesigning and reimplementing the solution,
thereby falling into the trap of “reinventing the wheel.” The field of software patterns,
including architectural and design patterns, is helping developers avoid unnecessary
redesign and reimplementation.
This chapter describes several software architectural patterns that can be used in the
development of real-time embedded systems. Section 11.1 provides an overview of the
different kinds of software patterns. Sections 11.2 through 11.7 describe the different
software architectural patterns, with Sections 11.2 through 11.4 focusing on patterns that
address the structure of the software architecture and Sections 11.5 through 11.7
discussing patterns that address the message communication among distributed
components of the software architecture. Section 11.8 describes how to document
software architectural patterns using a standard template. Section 11.9 describes how to
apply software architectural patterns to build a new software architecture.
11.1 Software Design Patterns
A design pattern describes a recurring design problem to be solved, a solution to the
problem, and the context in which that solution works (Buschmann et al. 1996, Gamma et
al. 1995). The description is in terms of communicating objects and classes customized to
solve a general design problem in a particular context. A design pattern is a larger-grained
form of reuse than a class. A design pattern involves more than one class along with the
interconnection among the different classes.
After the original success of the design pattern concept, other kinds of patterns were
developed. The main kinds of reusable patterns are given below:
Design patterns. In a widely cited book (Gamma et al. 1995), design patterns
were described by four software designers – Erich Gamma, Richard Helm, Ralph
Johnson, and John Vlissides – who became known in some quarters as the “gang of
four.” A design pattern is a small group of collaborating objects.
Analysis patterns. Analysis patterns were described by Fowler (2002), who found
similarities during analysis of different application domains. He described
recurring patterns found in object-oriented analysis and described them with static
models, expressed in class diagrams.
Idioms. Idioms are low-level patterns that are specific to a given programming
language and describe implementation solutions to a problem that use the features
of the language – for example, Java or C++. These patterns are closest to code, but
they can be used only by applications that are coded in the same programming
language.
Design anti-patterns. These are patterns that should not be used because they are
incorrect or ineffective solutions to recurring problems – for example, because they
lead to performance pitfalls. One such anti-pattern is a component that uses up
CPU time unnecessarily by continually checking for message arrival, instead of
waiting on a message arrival event, as sketched below.
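The following C++ sketch (with illustrative names; a hedged example, not a prescribed
implementation) shows the anti-pattern and its remedy: the first function spins on a
flag, consuming CPU time, while the second blocks on a condition variable until the
message-arrival event is signaled.

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool messageArrived = false;

// Anti-pattern: busy waiting. The task repeatedly polls the flag,
// consuming CPU time that other tasks could have used.
void busyWaitForMessage() {
    for (;;) {
        std::lock_guard<std::mutex> lock(m);
        if (messageArrived) return;  // otherwise spin around again
    }
}

// Remedy: wait on a message arrival event. The task consumes no CPU
// time while blocked and is woken only when the event is signaled.
void waitForMessageEvent() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return messageArrived; });
}

int main() {
    std::thread receiver(waitForMessageEvent);
    {
        std::lock_guard<std::mutex> lock(m);
        messageArrived = true;       // message arrives
    }
    cv.notify_one();                 // signal the event
    receiver.join();
    std::puts("message received without polling");
}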
11.1.1 Software Architectural Patterns
As introduced in the previous section, software architectural patterns provide the
skeleton or template for the overall software architecture or high-level design of an
application. Shaw and Garlan (1996) referred to architectural styles or patterns of
software architecture, which are recurring architectures used in a variety of software
applications (see also Bass et al. 2013). These include such widely used architectures as
client/service and layered architectures.
This chapter groups software architectural patterns into two main categories, as
described in the following sections: architectural structure patterns (which address the
static structure of the architecture) and architectural communication patterns (which
address the message communication among distributed components of the architecture).
Furthermore, it is also possible for an architectural structure pattern to incorporate other
architectural structure and/or communication patterns.
11.2 Layered Software Architectural Patterns
This section describes layered software architectural structure patterns, which address the
static structure of the architecture by organizing the architecture into hierarchical layers or
levels of abstraction.
11.2.1 Layers of Abstraction Architectural Pattern
The Layers of Abstraction pattern (also known as the Hierarchical Layers or Levels of
Abstraction pattern) is a common architectural pattern, which is applied in many different
software domains (Buschmann et al. 1996). Operating systems, database management
systems, and network communication software are examples of software systems that are
often structured as hierarchies.
As Parnas (1979) pointed out in his seminal paper on designing software for ease of
extension and contraction (see also Hoffman and Weiss 2001), if software is designed in
the form of layers, it can be extended by the addition of upper layers that use services
provided by lower layers and contracted by the removal of some or all the components in
the upper layers.
With a strict layered hierarchy, each layer uses services in the layer immediately
below it; for example, layer 3 can only invoke services provided by layer 2. With a flexible
layered hierarchy, a layer does not have to invoke a service at the layer immediately
below it but can instead invoke services at more than one layer below; for example, layer
3 could directly invoke services provided by layer 1.
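A minimal C++ sketch of a strict hierarchy, with three hypothetical layers: each layer
invokes services only in the layer immediately below it, so upper layers can be added or
removed without affecting the layers beneath.

#include <cstdio>

// Each layer exposes operations only to the layer immediately above it.
namespace layer1 { void transmitBits() { std::puts("layer 1: transmit bits"); } }
namespace layer2 { void sendFrame()    { layer1::transmitBits(); } }  // uses layer 1 only
namespace layer3 { void sendPacket()   { layer2::sendFrame(); } }     // uses layer 2 only

int main() {
    layer3::sendPacket();  // strict: layer 3 never calls layer 1 directly
    // In a flexible hierarchy, layer 3 could also call layer1::transmitBits().
}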
The Layers of Abstraction architectural pattern is used in the TCP/IP protocol, which
is the most widely used protocol on the Internet (Comer 2008). Each layer deals with a
specific characteristic of network communication and provides an interface, as a set of
operations, to the layer above it. This is an example of a strict layered hierarchy. For each
layer on the sender node, there is an equivalent layer on the receiver node. TCP/IP is
organized into five conceptual layers, as shown in Figure 11.1:
Layer 1: Physical layer. Corresponds to the basic network hardware and specifies how
bits are transmitted over the physical communication medium.
Layer 2: Network interface layer. Specifies how data is organized into frames and
how frames are transmitted over the network.
Layer 3: Internet Protocol (IP) layer. Specifies the format of packets sent over the
Internet and the mechanisms for forwarding packets through one or more routers
from a source to a destination (see Figure 11.2). The router node in Figure 11.2 is a
gateway that interconnects a local area network to a wide area network.
Layer 4: Transport layer (TCP). Assembles packets into messages in the order they
were originally sent. TCP is the Transmission Control Protocol, which uses the IP
network protocol to send and receive messages. It provides a virtual connection from
an application on one node to an application on a remote node, hence providing what
is termed an end-to-end protocol (see Figure 11.2).
Layer 5: Application layer. Supports network applications, such as file transfer and
electronic mail, which communicate using the services of the transport layer.
Examples of the Centralized Control pattern can be found in the railroad crossing control system (see
Chapter 20) and the microwave oven control system case study (see Chapter 19). Figure
11.5 gives an example of the Centralized Control architectural pattern from the latter case
study, in which the concurrent components are depicted on a concurrent communication
diagram. The Microwave Control component is a centralized control component,
which executes the state machine that provides the overall control and sequencing for the
microwave oven. Microwave Control receives messages from three input components
– Door Component, Weight Component, and Keypad Component – when they detect
inputs from the external environment. Microwave Control actions are sent to two
output components – Heating Element Component (to switch the heating element on
or off) and Microwave Display (to display information and prompts to the user).
Figure 11.5. Example of the Centralized Control pattern.
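The control component of this pattern can be sketched as a single event loop. The
following hedged C++ illustration uses a simple in-memory queue in place of real
inter-task message communication, and its message names are invented; in the full
design, the dispatch step executes the state machine that provides overall control and
sequencing.

#include <cstdio>
#include <queue>
#include <string>

struct Message { std::string source, content; };

// The centralized control component: a single event loop that consumes
// input messages in arrival order and commands the output components.
void microwaveControl(std::queue<Message>& inputQueue) {
    while (!inputQueue.empty()) {
        Message msg = inputQueue.front();
        inputQueue.pop();
        // In the full design, this step executes the state machine;
        // here the dispatch is reduced to two illustrative cases.
        if (msg.source == "Door" && msg.content == "opened")
            std::puts("-> Heating Element Component: switch off");
        else if (msg.source == "Keypad" && msg.content == "start")
            std::puts("-> Heating Element Component: switch on");
        std::printf("-> Microwave Display: %s %s\n",
                    msg.source.c_str(), msg.content.c_str());
    }
}

int main() {
    std::queue<Message> inputQueue;
    inputQueue.push({"Keypad", "start"});
    inputQueue.push({"Door", "opened"});
    microwaveControl(inputQueue);
}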
11.3.2 Distributed Collaborative Control Architectural Pattern
The Distributed Collaborative Control pattern contains several control components.
Each of these components controls a given part of the system by conceptually executing a
state machine. Control is distributed among the various control components, with no
single component in overall control. To notify each other of important events, the
components communicate through peer-to-peer communication. The components also
interact with the external environment as in the Centralized Control pattern (see Section
11.3.1).
One example of the Hierarchical Control pattern is given in Figure 11.8, in which a
coordinator component, the Hierarchical Controller, sends high-level commands to
each of the distributed controllers. The distributed controllers provide the low-level
control, interacting with sensor and actuator components, and respond to the
Hierarchical Controller when they have finished. They may also send progress
messages to the Hierarchical Controller.
The Master/Slave pattern is different from the Hierarchical Control pattern in that the
slaves, unlike a lower-level controller, do not have any localized control. It is also
different from the Centralized Control pattern, in which the controller typically interacts
with multiple sensors and actuators. An example of the Master/Slave pattern is given in
Figure 11.9, in which the
Master sends assignment commands to each Slave. After completing the assignment,
Slave sends its response to Master.
An example of the Client/Service pattern comes from a banking system (Gomaa 2011), which is
depicted in Figure 11.11 and consists of multiple ATMs connected to a Banking
Service component by means of a wide area network. Each ATM is controlled by an ATM
Controller component. Each ATM Controller is independent of the other ATM
Controllers, but all of them communicate with the Banking Service. A typical ATM
control sequence consists of an ATM Controller reading a customer’s ATM card,
prompting for the PIN and cash amount, and communicating with the Banking Service
to validate the PIN and determine that there is enough cash in the customer’s account. If
the Banking Service approves the request, then the ATM Controller component
dispenses the cash, prints the receipt, and ejects the ATM card. Each ATM Controller
executes a state machine that controls the above interaction sequence, receiving inputs
from the card reader and customer keypad and controlling outputs to the cash dispenser,
receipt printer, customer display, and card reader.
The clients in Figure 11.11 are ATM Controller components, which communicate
with the Banking Service using the synchronous message communication with reply
pattern (see Section 11.5.4) because a client sends a message to the service and then waits
for a response. After receiving the message, the service processes the message, prepares a
reply, and sends the reply to the client. After receiving the response, the client resumes
execution.
11.4.2 Multiple Client/Multiple Service Architectural Pattern
More complex client/service systems might support multiple services. In the Multiple-
Client/Multiple-Service pattern, a client might communicate with several services, as
depicted in Figure 11.12. With this pattern, a client could communicate with each service
sequentially or could communicate with multiple services concurrently.
With the callback pattern, the client sends a remote reference or handle, which is then
used by the service to respond to the client. A variation on the callback pattern is for the
service to delegate the response to another component by forwarding to it the callback
handle, as described in the examples in Section 12.7.
The broker provides both location transparency and platform transparency. Location
transparency means that if the service is moved to a different location, clients are
unaware of the move and only the broker needs to be notified. Platform transparency
means that each service can execute on a different hardware/software platform and does
not need to maintain information about the platforms that other services execute on.
With brokered communication, the service has to first register with a broker as
described by the service registration pattern in Section 11.6.1. The pattern of
communication, in which the client knows the service required but not the location, is
referred to as white page brokering, analogous to the white pages of the telephone
directory, and is described in Section 11.6.2. Yellow page brokering, in which the specific
service is discovered, is described in Section 11.6.3.
R2: The Broker registers the service in the service registry and sends a
registration Ack acknowledgment to the service.
Figure 11.24. Service registration with Broker.
11.6.2 Broker Handle Pattern
With the Broker Handle pattern, the broker is an intermediary for establishing connections
between clients and services. Once connected to a service, a client communicates with the
service directly without involving the broker.
Most commercial object brokers use a Broker Handle design. This pattern is
particularly useful when the client and service are likely to have a dialog and exchange
several messages between them. The pattern is depicted in Figure 11.25 and consists of the
following message sequence:
B1: The Service Requester client sends a service request to the Broker.
B2: The Broker looks up the location of the service and returns a service
handle to the client.
B3: The Service Requester client uses the service handle to make the
request to the appropriate Service.
B4: The Service executes the request and sends the reply directly to the
Service Requester client.
An alternative brokering pattern is the Broker Forwarding pattern, in which the broker is
an intermediary for every message sent between the client and service. Broker Forwarding
is less efficient if the client/service dialog results in the exchange of several messages. The
reason is that with Broker Handle, the interaction with the broker is only done once at the
start of the dialog instead of every time, as with Broker Forwarding.
The message traffic using the Broker Handle pattern is equal to 2n + 2, assuming that
each of the n requests has one response and that two additional messages are needed for
the client to communicate with the broker and receive its response. Compared with the
Client/Service pattern (see Section 11.4.1), in which the message traffic is equal to 2n, the
relative brokering overhead decreases as the value of n increases.
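As a hedged worked example of these formulas, under the same one-response-per-request
assumption: for a dialog of n = 10 request/response pairs, Broker Handle requires
2n + 2 = 22 messages, compared with 2n = 20 for direct client/service communication;
Broker Forwarding, which relays every request and every reply through the broker,
would require 4n = 40. The one-time cost of obtaining the service handle is thus
amortized over the whole dialog, whereas the forwarding overhead grows with its length.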
For a real-time embedded system, the Broker Handle pattern can be used efficiently
to establish a connection between client and service components at initialization time, and
then during normal operation, communication between components is done efficiently
without broker intervention.
Figure 11.25. Broker Handle (white page brokering) pattern.
11.6.3 Service Discovery Pattern
The brokered patterns of communication described earlier, in which the client knows the
service required but not the location, are referred to as white page brokering. A different
brokering pattern is yellow page brokering, analogous to the yellow pages of the
telephone directory, in which the client knows the type of service required but not the
specific service. This pattern, which is shown in Figure 11.26, is also known as the
Service Discovery pattern because it allows the client to discover new services. The client
sends a query request to the broker, requesting all services of a given type. The broker
responds with a list of all services that match the client’s request. The client selects a
specific service. The broker returns the service handle, which the client uses for
communicating directly with the service.
1: The Service Requester client sends a yellow pages request to the Broker
requesting information about all services of a given type.
2: The Broker looks up this information and returns a list of all services that satisfy
the query criteria.
3: The Service Requester client selects one of the services and sends a white
pages request to the Broker.
4: The Broker looks up the location of the service and returns a service handle to the
Service Requester client.
5: The Service Requester client uses the service handle to send a request to the
appropriate Service.
6: The Service executes the request and sends the response directly to the Service
Requester client.
The message traffic using yellow page brokering followed by white page brokering is
equal to 2n + 4. This assumes that each of the n client requests has one response, that two
messages are needed for yellow page brokering, and that two additional messages are
needed for white page brokering. Compared with the Client/Service pattern (see Section
11.4.1), in which the message traffic is equal to 2n, the relative brokering overhead
decreases as the value of n increases. For a real-time embedded system, an efficient
usage of these patterns is to
establish a connection between client and service components at initialization time using
the yellow pages service discovery pattern followed by the white pages brokering pattern,
and then during subsequent operation, components can communicate efficiently with each
other without any broker intervention.
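The broker’s two lookup styles can be sketched with a simple registry. The following
C++ illustration is a hedged example only: service handles are modeled as strings, and
the names and in-memory representation are assumptions rather than the book’s design.

#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Broker registry supporting both lookup styles.
class Broker {
    std::map<std::string, std::string> handleByName;      // white pages
    std::multimap<std::string, std::string> namesByType;  // yellow pages
public:
    void registerService(const std::string& type, const std::string& name,
                         const std::string& handle) {
        handleByName[name] = handle;
        namesByType.insert({type, name});
    }
    // White page brokering: known service name -> service handle.
    std::string lookup(const std::string& name) const {
        auto it = handleByName.find(name);
        return it == handleByName.end() ? std::string() : it->second;
    }
    // Yellow page brokering: service type -> all matching services.
    std::vector<std::string> discover(const std::string& type) const {
        std::vector<std::string> names;
        auto range = namesByType.equal_range(type);
        for (auto it = range.first; it != range.second; ++it)
            names.push_back(it->second);
        return names;
    }
};

int main() {
    Broker broker;
    broker.registerService("alarm", "AlarmService1", "node7:5000");
    for (const auto& name : broker.discover("alarm"))  // yellow pages query
        std::printf("found %s at %s\n", name.c_str(),
                    broker.lookup(name).c_str());      // white pages lookup
    // The client then uses the handle to communicate with the service
    // directly, without further broker intervention.
}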
N2a, N2b, N2c: Alarm Handling Service looks up the list of subscribers
who have requested to be notified of alarms of this type. It multicasts the
alarm Notify message to all instances of the Operator Interaction
component that are on the subscription list. Each recipient takes appropriate
action in response to the alarm notification.
Each pattern can be documented using a standard template, with headings that include
the following:
Pattern name.
Strengths of solution. Use to determine if the solution is right for your problem.
Reference. Where you can find more information about the pattern.
The patterns described in this chapter are documented with this standard template in
Appendix B.
11.9 Applying Software Architectural Patterns
This section describes how to develop a software architecture starting from software
architectural patterns. A very important decision is to determine which architectural
patterns – in particular, which architectural structure and communication patterns – are
required. Architectural structure patterns can initially be identified during dynamic
interaction modeling (see Chapter 9) because patterns can be recognized during
development of the interaction diagrams. For example, any of the control patterns can first
be used during dynamic modeling. Although architectural structure patterns can be
identified during dynamic modeling, the real decisions are made during software
architectural design. It is necessary to first decide what architectural structure patterns to
apply in order to determine the organization of the components in the architecture, and
then to apply the architectural communication patterns to determine how components
communicate with each other.
This chapter has also described how to document software architectural patterns
using a standard template. The software architectural patterns described in this chapter are
documented with this template in Appendix B. Chapter 12 discusses several important
topics in designing component-based software architectures. The case studies in Chapters
19 through 23 give several examples of applying the software architectural structure and
communication patterns to real-time software designs.
12
Component-Based Software
Architectures for Real-Time
Embedded Systems
◈
In earlier chapters, the term component has been used informally. This chapter describes
the design of distributed component-based software architectures in which the architecture
for a real-time embedded system is designed in terms of components that can be deployed
to execute on different nodes in a distributed environment. It describes the component
structuring criteria for designing components that can be deployed to execute in a
distributed configuration. The design of component interfaces is described, with
component ports that have provided and required interfaces and connectors that join
compatible ports.
Components are initially designed using the subsystem structuring criteria described
in Chapter 10. Additional component configuration criteria are used to ensure that
components are configurable, in other words that they can be effectively deployed to
distributed physical nodes in a distributed environment. Architectural structure and
communication patterns described previously in Chapter 11 are also used in the design of
component-based software architectures.
Components can be effectively modeled in UML with structured classes and depicted
on composite structure diagrams. Structured classes have ports with provided and required
interfaces. Structured classes can be interconnected through their ports via connectors that
join the ports of communicating classes. Chapter 2, Section 2.10 and Chapter 10, Section
10.2.1 introduce the UML notation for composite structure diagrams. This chapter
describes in detail how component-based software architectures are designed.
Three interfaces from an emergency monitoring system will be used in the examples
that follow. Each interface consists of one or more operations, as follows:
1. Interface: IAlarmService
Operations provided:
2. Interface: IAlarmStatus
3. Interface: IAlarmNotification
The interface of a component can be depicted with the static modeling notation (see
Chapter 2), as shown in Figure 12.1, with the stereotype «interface».
Figure 12.1. Example of component interfaces.
12.3.2 Provided and Required Interfaces
To provide a complete definition of the component-based software architecture, it is
necessary to specify the interface(s) provided by each component and the interface(s)
required by each component. A provided interface specifies the operations that a
component must fulfill. A required interface describes the operations that other
components provide for this component to operate properly in a particular environment.
Although many components are designed to provide one interface, it is possible for a
component to provide more than one interface. To do this, the component designer selects
for each provided interface a subset of the component’s operations that are required by
some of its clients. An example of a component that provides more than one interface is
the Alarm Service component, which provides two of the interfaces in
Figure 12.1, IAlarmService and IAlarmStatus. IAlarmService is required by the
Operator Alarm Presentation component and IAlarmStatus is required by the
Monitoring Sensor Component, as shown in Figure 12.2.
12.3.3 Ports and Interfaces
A component has one or more ports through which it interacts with other components.
Each component port is defined in terms of provided and/or required interfaces. A
provided interface of a port specifies the requests that other components can make of this
component. A required interface of a port specifies the requests that this component can
make of other components. A provided port supports a provided interface. A required
port supports a required interface. A complex port supports both a provided interface and
a required interface. A component can have more than one port. In particular, if a
component communicates with more than one component, it can use a different port for
each component with which it communicates. Figure 12.2 shows an example of
components with ports, as well as provided and required interfaces. Figure 12.2 depicts
each component with two stereotypes: the first corresponding to its subsystem
structuring criterion (Section 10.5), such as «service» or «user interaction», and the
second being «component».
Figure 12.2. Examples of component ports, with provided and required interfaces.
By convention, the name of a component’s required port starts with the letter R to
emphasize that the component has a required port. The name of a component’s provided
port starts with the letter P to emphasize that the component has a provided port. In Figure
12.2, the Monitoring Sensor Component has one required port, called
RAlarmStatus, which supports a required interface called IAlarmStatus, as defined in
Figure 12.1. The Operator Alarm Presentation component is a client component,
which has a required port (RAlarmService) with a required interface (IAlarmService)
and a provided port (PAlarmNotification) with a provided interface
(IAlarmNotification). The Alarm Service component has
two provided ports, called PAlarmStatus and PAlarmService, and one required port,
RAlarmNotification. The port PAlarmStatus provides an interface called
IAlarmStatus, through which alarm status messages are sent. The port
PAlarmService provides the main interface through which clients request alarm services
(provided interface IAlarmService). The Alarm Service component sends alarm
notifications through its RAlarmNotification port.
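In code, a provided interface can be sketched as an abstract class that the providing
component realizes, and a required port as a reference to that interface, bound by a
connector at configuration time. The following C++ sketch is illustrative only; the
operation name reportAlarmStatus is a hypothetical stand-in, since the actual
operations of IAlarmStatus are those defined in Figure 12.1.

#include <cstdio>
#include <string>

// Provided interface modeled as an abstract class; the operation name
// reportAlarmStatus is hypothetical.
class IAlarmStatus {
public:
    virtual void reportAlarmStatus(const std::string& status) = 0;
    virtual ~IAlarmStatus() = default;
};

// Alarm Service realizes IAlarmStatus through its PAlarmStatus port.
class AlarmService : public IAlarmStatus {
public:
    void reportAlarmStatus(const std::string& status) override {
        std::printf("alarm status received: %s\n", status.c_str());
    }
};

// Monitoring Sensor Component: its RAlarmStatus required port is modeled
// as a reference to the interface, so the component never names the
// concrete component that provides it.
class MonitoringSensorComponent {
    IAlarmStatus& rAlarmStatus;
public:
    explicit MonitoringSensorComponent(IAlarmStatus& port) : rAlarmStatus(port) {}
    void onSensorInput() { rAlarmStatus.reportAlarmStatus("pressure high"); }
};

int main() {
    AlarmService service;
    MonitoringSensorComponent sensor(service);  // the connector binds the ports
    sensor.onSensorInput();
}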
12.3.4 Connectors and Interconnecting Components
A connector joins the required port of one component to the provided port of another
component. The connected ports must be compatible with each other. This means that if
two ports are connected, the required interface of one port must be compatible with the
provided interface of the other port; that is, the operations required in one component’s
required interface must be the same as the operations provided in the other component’s
provided interface. In the case of a connector joining two complex ports (each with one
provided interface and one required interface), the required interface of the first port must
be compatible with the provided interface of the second port, and the required interface of
the second port must be compatible with the provided interface of the first port.
Figure 12.3 shows how the three components (Monitoring Sensor Component,
Operator Alarm Presentation, and Alarm Service) are interconnected. The first
connector is unidirectional (as shown by the direction of the arrow representing the
connector) and joins Monitoring Sensor Component’s RAlarmStatus required port
to Alarm Service’s PAlarmStatus provided port. Figure 12.2 shows that these ports
are compatible because it results in the IAlarmStatus required interface being connected
to the IAlarmStatus provided interface. The second connector is also unidirectional and
joins Operator Alarm Presentation’s required port RAlarmService to Alarm
Service’s provided port PAlarmService. Examination of the port design in Figure 12.2
shows that these ports are also compatible, with the required IAlarmService interface
connected to the provided interface of the same name. The third connector is also
unidirectional and joins Alarm Service’s RAlarmNotification required port to
Operator Alarm Presentation’s PAlarmNotification provided port, and it is
through this connector that alarm notifications are sent via the IAlarmNotification
interface.
Figure 12.3. Example of components, ports, and connectors in a software architecture.
12.4 Designing Composite Components
A composite component is a component that encapsulates the internal components it
contains. The component is both a logical and a physical container; however, it adds no
further functionality. Thus, a component’s functionality is provided entirely by the
constituent components it contains. An example of a composite component with internal
components is depicted in Figure 12.4, in which the composite Operator
Presentation user interaction component contains three internal simple components,
Operator Interaction, Alarm Window, and Event Monitoring Window.
Note that each component sends a message by communicating through its own local
port, which means that it need have no knowledge of the component that will actually
receive the message.
Figure 12.7. Component-based software architecture for Factory Automation System.
Figure 12.8. Composite component ports and interfaces for Factory Automation
System.
12.6 Component Structuring Criteria
A distributed software architecture must be designed with an understanding of the
distributed environments in which it is likely to operate. The component structuring
criteria provide guidelines on how to structure a software architecture into configurable
distributed components, instances of which can be deployed to geographically distributed
nodes. The actual assignment of component instances to physical nodes is done later,
when an individual target system is instantiated and deployed. However, it is necessary to
design the components as configurable components, instances of which are indeed capable
of later being effectively deployed to distributed physical nodes. Consequently, the
component structuring criteria must take into account the characteristics of distributed
environments.
I/O component is a general name given to components that interact with the external
environment; they include input components, output components, I/O components (which
provide both input and output), network interface components, and system interface
components.
Examples of I/O components are the Barrier Component (depicted in Figure 12.9)
and the Warning Alarm (depicted in Figure 12.5) composite components from the
Railroad Crossing Control System. The design of the Barrier Component is described
in Section 12.6.1.
12.7 Design of Service Components
Service components play an important role in the design of distributed software
architectures. Real-time embedded systems particularly need service components for
storing and accessing status and alarm data, as well as for configuration data, which can be
used during software initialization. A service component provides a service for multiple
client components, as described by the client/service patterns in Chapter 11. Typical
service components are file services and database services.
Another approach to providing concurrent service design using multiple readers and
writers is described next. In a concurrent service component, several concurrent objects
might wish to access a data repository at the same time, so access must be synchronized.
Possible synchronization algorithms include the mutual exclusion algorithm and the
multiple readers and writers algorithm. In the latter case, multiple readers are allowed to
access a shared data repository concurrently; however, only one writer is allowed to
update the data repository at any one time, and only after the readers have finished.
In the multiple readers and writers solution shown in Figure 12.11, each read and
write service is performed by a concurrent object, either a reader or a writer. The Service
Coordinator object keeps track of all service requests – those currently being serviced
and those waiting to be serviced. When it receives a request from a client, Service
Coordinator allocates the request to an appropriate reader or writer concurrent object to
perform the service. For example, if the coordinator receives a read request from a client,
it instantiates a Reader object and increments its count of the number of readers. The
reader notifies the coordinator when it finishes, so that the coordinator can decrement the
reader count. If a write request is received from a client, the coordinator allocates the
request to a Writer object only when all readers have finished. This delay is to ensure
that each writer has mutually exclusive access to the data. If the overhead of instantiating
new concurrent objects is too high, the coordinator can maintain a pool of concurrent
Reader objects and one concurrent Writer object and allocate new requests to
concurrent objects that are free.
Figure 12.11. Example of a concurrent service component: multiple readers and writers.
If new readers keep coming and are permitted to read, a writer could be indefinitely
prevented from writing; this problem is referred to as writer starvation. The coordinator
avoids writer starvation by queuing up new reader requests after receiving a writer
request. After the current readers have finished reading, the waiting writer is then allowed
to write before any new readers are permitted to read.
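The coordinator’s synchronization policy can be sketched as follows. This is a minimal
C++ illustration, not the book’s design: it permits multiple concurrent readers and one
writer at a time, and it avoids writer starvation by making new readers wait whenever a
writer is queued.

#include <condition_variable>
#include <cstdio>
#include <mutex>

class ReadWriteCoordinator {
    std::mutex m;
    std::condition_variable cv;
    int activeReaders = 0;
    int waitingWriters = 0;
    bool writing = false;
public:
    void startRead() {
        std::unique_lock<std::mutex> lock(m);
        // New readers queue behind any waiting writer (starvation avoidance).
        cv.wait(lock, [this] { return !writing && waitingWriters == 0; });
        ++activeReaders;
    }
    void endRead() {
        std::lock_guard<std::mutex> lock(m);
        if (--activeReaders == 0) cv.notify_all();
    }
    void startWrite() {
        std::unique_lock<std::mutex> lock(m);
        ++waitingWriters;
        cv.wait(lock, [this] { return !writing && activeReaders == 0; });
        --waitingWriters;
        writing = true;   // mutually exclusive access for the writer
    }
    void endWrite() {
        std::lock_guard<std::mutex> lock(m);
        writing = false;
        cv.notify_all();  // wake waiting writers and readers
    }
};

int main() {
    ReadWriteCoordinator coordinator;
    coordinator.startRead();  std::puts("reading");  coordinator.endRead();
    coordinator.startWrite(); std::puts("writing");  coordinator.endWrite();
}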
In this example, the clients communicate with the service by using the Asynchronous
Message Communication with Callback pattern (see Section 11.5.5). This means that the
clients do not wait and can do other things before receiving the service response. In this
case, the service response is handled as a callback. With the callback approach, the client
sends an operation handle with the original request. The service uses the handle to
remotely call the client operation (the callback) when it finishes servicing the client
request. In the example illustrated in Figure 12.11, Service Coordinator passes the
client’s callback handle to the reader (or writer). On completion, the Reader concurrent
object remotely invokes the callback, which is depicted as the service Response
message sent to the client.
12.7.3 Concurrent Service Component with Subscription and Notification
Another example of a concurrent service component is shown in Figure 12.12, which uses
the Subscription/Notification Pattern (see Section 11.7.2). This service maintains an event
archive and also provides a subscription/notification service to its clients. An example is
given of a Real-Time Event Monitor concurrent component that monitors external
events. The Subscription Service component maintains a subscription list of clients
that wish to be notified of these events. When an external event occurs, Real-Time
Event Monitor updates the event archive and informs Event Distributor of the
event arrival. Event Distributor queries Subscription Service to determine the
clients that have subscribed to receive events of this type and then notifies those clients of
the new event.
Q3: Event Archive Service sends the appropriate archive data – for example,
events over the past twenty-four hours – to the client.
E2: Real-Time Event Monitor determines that this is a significant event and
sends an update message to Event Archive Service.
E4, E5: Event Distributor queries Subscription Service to get the list of
event subscribers (i.e., clients that have subscribed to receive events of this type).
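A hedged sketch of the subscription and multicast notification steps of Figure 12.12,
with subscribers modeled as callbacks and all names invented for illustration:

#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Clients subscribe to an event type; when an event of that type arrives,
// every subscriber on the list is notified (a multicast).
class SubscriptionService {
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> subscribers;
public:
    void subscribe(const std::string& eventType,
                   std::function<void(const std::string&)> callback) {
        subscribers[eventType].push_back(std::move(callback));
    }
    void notify(const std::string& eventType, const std::string& event) const {
        auto it = subscribers.find(eventType);
        if (it == subscribers.end()) return;   // no subscribers for this type
        for (const auto& callback : it->second)
            callback(event);                   // multicast to each subscriber
    }
};

int main() {
    SubscriptionService service;
    service.subscribe("alarm", [](const std::string& e) {
        std::printf("Operator Interaction 1 notified: %s\n", e.c_str());
    });
    service.subscribe("alarm", [](const std::string& e) {
        std::printf("Operator Interaction 2 notified: %s\n", e.c_str());
    });
    service.notify("alarm", "high temperature");
}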
Define instances of the component. For each component that can have multiple
instances, it is necessary to define the instances desired. For example, in a
distributed Light Rail system, it is necessary to define the number of instances of
components required in the target system. It is thus necessary to define one
Railroad Crossing Control instance for each railroad crossing, one Train
Control instance for each train, one Station instance for each physical station,
one instance of the Rail Operations Interaction component for each
operator, and one instance of the service component, Rail Operations
Service. Each component instance must have a unique name so that it can be
uniquely identified. For components that are parameterized, the parameters for
each instance need to be defined. Examples of component parameters are instance
name (such as train ID, station ID, or operator ID), sensor names, sensor limits,
and alarm names.
During concurrent task design, a task architecture is developed in which the system
is structured into concurrent tasks and the task interfaces and interconnections are
designed. To help determine the concurrent tasks, task structuring criteria are provided to
assist in mapping an object-oriented analysis model of the system to a concurrent tasking
architecture. These criteria are a set of heuristics, also referred to as guidelines, which
capture expert designer knowledge in the software design of concurrent real-time systems.
Task structuring decisions are depicted using stereotypes. This chapter uses MARTE
stereotypes (Selic 2014) to depict concurrent tasks, as introduced in Chapter 3. After task
structuring, the task interfaces and interconnections are designed by applying the
architectural communication patterns described in Chapter 11.
Real-time software architectures can also be distributed; for this reason they can be
considered a special case of component-based software architectures. In this context, a
simple component is either designed as one task or as a component that contains multiple
active objects (tasks) and passive objects, as described in Chapter 12.
With the advent of relatively cheap multi-core processors, multitasking is even more important: structuring a system into concurrent tasks makes it possible to execute multiple tasks in parallel on multiple processor cores. As
described in Chapter 3, there are many advantages to having a concurrent tasking design;
however, the designer must be careful in designing the task structure. Too many tasks in a
system can unnecessarily increase complexity because of greater inter-task
communication and synchronization and can lead to increased overhead because of
additional context switching (see Section 3.10). The system designer must, therefore,
make tradeoffs between, on the one hand, introducing tasks to simplify and clarify the
design and, on the other hand, not introducing too many tasks, which could make the
design overly complex. The task structuring criteria are intended to help the designer
make these tradeoffs. They also enable the designer to analyze alternative task
architectures.
MARTE stereotypes are also used to depict the kinds of devices to which the
concurrent tasks interface. Thus, an external hardware device is classified with the
stereotype «hwDevice», and an interrupt-driven device is also classified with the
stereotype «interruptResource».
The task structuring criteria are described next. In each case, a task structuring
criterion is described followed by an example of a behavioral pattern in which an instance
of a task of that type, such as an event driven I/O task, communicates with neighboring
tasks in a typical interaction sequence.
13.2.1 Task Structuring Criteria
The task structuring criteria are organized into groups based on how they are used to assist
in the task structuring activity. The following are the four task structuring groups:
1. I/O task structuring criteria. Address how I/O objects are mapped to I/O tasks as
well as when and how an I/O task is activated.
2. Internal task structuring criteria. Address how internal objects are mapped to
internal tasks as well as when and how an internal task is activated.
3. Task priority criteria. Address the importance of executing a given task relative
to others.
4. Task clustering criteria. Address whether and how multiple objects should be
grouped into a concurrent task. A form of task clustering is task inversion, which is
used for merging tasks to reduce task overhead.
The task structuring criteria are applied in two stages. In the first stage, the I/O task
structuring criteria, the internal task structuring criteria, and the task priority criteria are
applied. This results in a one-to-one mapping of objects in the analysis model to tasks in
the design model. In the second stage, the task clustering criteria are applied, with the
objective of reducing the number of physical tasks. For an experienced designer, these two
stages can be combined. After the tasks have been determined, the task interfaces are
designed.
13.3 I/O Task Structuring Criteria
This section describes the various I/O task structuring criteria. An important factor in
deciding on the characteristics of an I/O task is to determine the characteristics of the I/O
device to which it has to interface.
13.3.1 Characteristics of I/O Devices
There is certain hardware-related information concerning I/O devices that interface to the
embedded system, which is essential to determining the characteristics of tasks that
interface to the devices. Before the I/O task structuring criteria can be applied, it is
necessary to determine the hardware characteristics of the I/O devices that interface to the
system. It is also necessary to determine the nature of the data being input to the system by
these devices or being output by the system to these devices. In this section, the following
I/O issues specific to task structuring are described:
1. Event driven I/O devices. An event driven I/O device generates an interrupt on completion of the input or output operation. Such a device is depicted as an interrupt-driven hardware device, for example «interruptResource» «input» «hwDevice».
2. Passive I/O devices. A passive I/O device does not generate an interrupt on completion of the input or output operation. Thus, the input from a passive input device needs to be read either on a polled basis or on demand. Similarly, in the case of a passive output device, output needs to be provided on either a regular (i.e., periodic) basis or on demand. Stereotypes are used to depict the device as an input or output, passive, hardware device, such as «input» «passive» «hwDevice».
Passive I/O device. For a passive I/O device, it is necessary to determine whether the device needs to be polled on a periodic basis – so that any change in value is sent to a consumer task without being explicitly requested, or so that the value is written to an entity object with sufficient frequency that the data do not get out of date – or whether the device only needs to be read (or written) on demand.
An event driven I/O device interface task is often a device driver task. It is typically
activated by a low-level interrupt handler or – in some cases – directly by the hardware.
An event driven I/O task is constrained to execute at the speed of the I/O device with
which it interacts. Thus, an input task might be suspended indefinitely awaiting an input.
However, when activated by an interrupt, the input task typically has to respond to a
subsequent interrupt within a few milliseconds to avoid any loss of data. After the input
data is read, the input task might send the data to be processed by another task or update a
passive object. This frees the input task to respond to another interrupt that might closely
follow the first.
As an example of an event driven input task, consider the Arrival Sensor Input
object shown on the analysis model communication diagram in Figure 13.1a. The
Arrival Sensor Input object receives inputs from the real-world arrival sensor, which
is depicted as an input hardware device. In preparation for task structuring, the Arrival
Sensor is assigned the MARTE stereotypes «input» «hwDevice». The Arrival Sensor
Input object then converts the input to an internal format and sends the train arrival
message to the Train Control object. For task structuring, it is given that the arrival
sensor is an interrupt-driven input hardware device, depicted on the design model
concurrent communication diagram (see Figure 13.1b) with the stereotypes «interrupt-
Resource» «input» «hwDevice», which generates an interrupt when the train arrival is
detected. The Arrival Sensor Input object is designed as an event driven input task
of the same name, depicted on the concurrent communication diagram with the
stereotypes «event driven» «input» «swSchedulableResource». When the task is activated
by the arrival Interrupt, it reads the arrival Data, converts the input data to an
internal format, and sends the data as a train Arrival message to the Train Control
task. In the design model, the interrupt is depicted as an asynchronous event.
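As an illustration, a minimal Java sketch of such an event driven input task follows, anticipating the Java thread implementation discussed in Section 14.7. The interrupt is modeled here as a binary semaphore released by a hypothetical low-level interrupt handler, and the train arrival message is sent to the Train Control task via a message queue; the names and the readArrivalData operation are illustrative placeholders, not part of the case study.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

// Event driven input task: blocks awaiting the arrival interrupt, reads and
// converts the sensor data, and forwards a trainArrival message to the
// Train Control task, freeing itself to handle the next interrupt.
public class ArrivalSensorInputTask implements Runnable {
    private final Semaphore arrivalInterrupt;               // released by the low-level interrupt handler (assumed)
    private final BlockingQueue<String> trainControlQueue;  // input message queue of Train Control

    public ArrivalSensorInputTask(Semaphore arrivalInterrupt,
                                  BlockingQueue<String> trainControlQueue) {
        this.arrivalInterrupt = arrivalInterrupt;
        this.trainControlQueue = trainControlQueue;
    }

    @Override
    public void run() {
        try {
            while (true) {
                arrivalInterrupt.acquire();                       // suspended indefinitely awaiting an input
                int rawData = readArrivalData();                  // read the device (placeholder)
                trainControlQueue.put("trainArrival:" + rawData); // convert and send asynchronously
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();                   // task shutdown
        }
    }

    private int readArrivalData() { return 1; }                   // stands in for memory-mapped device access
}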
Periodic I/O tasks are often used for simple I/O devices that, unlike event driven I/O
devices, do not generate interrupts when I/O is available. Thus, they are often used for
passive sensor devices that need to be sampled periodically.
Consider a passive digital input device – for example, a door sensor. This could be
handled by a periodic I/O task. The task is activated by a timer event and then reads the
status of the device. If the value of the digital sensor has changed since the previous time
it was sampled, the task indicates the change in status. In the case of an analog sensor – a
temperature or pressure sensor, for example – the device is sampled periodically and the
current value of the sensor is read. As an example of a periodic input task, consider the
Pressure Sensor Input object shown in Figure 13.2a. In the analysis model depicted
on the communication diagram, the Pressure Sensor Input object is an «input»
object that receives inputs from the real-world Pressure Sensor input hardware device,
which in preparation for task structuring is depicted with the stereotype «input»
«hwDevice». Because this analog sensor is a passive input hardware device, it is depicted
on the design model concurrent communication diagram with the stereotype «passive»
«input» «hwDevice» (see Figure 13.2b). Because a passive device does not generate an
interrupt, an event driven input task cannot be used. Instead, this case is handled by a
periodic input task, the Pressure Sensor Input task, which is activated periodically
by an external timer to sample the value of the pressure sensor. Thus, the Pressure
Sensor Input object is designed as the Pressure Sensor Input task, which is
depicted as «timerResource» «input» «swSchedulableResource» on the concurrent
communication diagram. To activate the Pressure Sensor Input task periodically, it
is necessary to add an external timer object, the Digital Timer, which is depicted as a
hardware timer resource, «timerResource» «hwDevice» in Figure 13.2b. When activated,
the Pressure Sensor Input task samples the pressure sensor, updates the Pressure
Data entity object with the new pressure reading and then waits for the next timer event.
The timer event is depicted as an asynchronous event on the concurrent communication
diagram.
Figure 13.2. Example of a periodic input task.
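In Java, the external timer can be approximated with a scheduled executor, as in the following minimal sketch; the PressureData entity class and the readPressureSensor operation are illustrative stand-ins for the Pressure Data object and the real device access.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Passive entity object holding the latest pressure reading (illustrative).
class PressureData {
    private double value;
    public synchronized void update(double v) { value = v; }
    public synchronized double read() { return value; }
}

// Periodic input task: activated by a timer event, it samples the passive
// pressure sensor, updates the entity object, and waits for the next event.
public class PressureSensorInputTask {
    private final PressureData pressureData;
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();   // plays the role of the external «timerResource»

    public PressureSensorInputTask(PressureData pressureData) {
        this.pressureData = pressureData;
    }

    public void start(long periodMillis) {
        timer.scheduleAtFixedRate(this::sample, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    private void sample() {
        pressureData.update(readPressureSensor());      // poll the passive device
    }

    private double readPressureSensor() { return 0.0; } // placeholder for device access
}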
The higher the sampling rate of a given task, the greater the overhead that will be
generated. For a digital input device, a periodic input task is likely to consume more
overhead than the equivalent event driven input task. This is because there will likely be
times when the periodic input task is activated and the value of the sensor being monitored
will not have changed. If the sampling rate chosen is too high, significant unnecessary
overhead could be generated. The sampling rate selected for a given task depends on the
characteristics of the input device as well as the characteristics of the environment external
to the application.
13.3.4 Demand Driven I/O Tasks
Demand driven I/O tasks are used when dealing with passive I/O devices that do not
need to be polled and hence do not need periodic I/O tasks. In particular, they are used
when it is considered desirable to overlap computation with I/O. A demand driven I/O task
is used in such a situation to interface to the passive I/O device. Stereotypes are used to
depict a demand driven I/O task as an input or output demand driven task, such as
«demand» «output» «swSchedulableResource».
In the case of input, overlap the input from the passive device with the
computational task that receives and consumes the data. This is achieved by using
a demand driven input task to read the data from the input device, when requested
to do so. Separate demand driven input and computational tasks are only useful if
the computational task has some computation to do while the input task is reading
the input. If the computational task has to wait for the input, the input can be
performed in the same thread of control.
In the case of output, overlap the output to the device with the computational task
that produces the data. This is achieved by using a demand driven output task to
output to the device when requested to do so, usually via a message.
Demand driven I/O tasks are used more often with output devices than with input devices
because the output can be overlapped with the computation more often, as shown in the
following example. Usually, if the I/O and computation are to be overlapped for a passive
input device, a periodic input task is used.
Consider a demand driven output task that receives a message from a producer task.
Overlapping computation and output is achieved as follows: the consumer task outputs the
data contained in the message to the passive output device, the display, while the producer
is preparing the next message. This case is shown in Figure 13.3. Speed Display
Output is a demand driven output task. It accepts a message containing the current speed
from the Speed Computation Algorithm task and then formats and displays the speed
while the Speed Computation Algorithm task is computing the next speed value to
display. Thus, the computation is overlapped with the output. The Speed Display
Output task is depicted on the concurrent communication diagram with the stereotypes
«demand» «output» «swSchedulableResource». The Display passive output hardware
device is depicted with the stereotypes «passive» «output» «hwDevice». This example is
continued in Section 13.9.3.
For example, if two or more tasks are allowed to write to a printer simultaneously,
output from the tasks will be randomly interleaved, and a garbled report will be produced.
To avoid this problem, it is necessary to design a printer resource monitor task. This task
receives output requests from multiple source tasks and has to deal with each request
sequentially. Because the request from a second source task might arrive before the first
task has finished, having a resource monitor task to handle the requests ensures that
multiple requests are dealt with sequentially.
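A minimal Java sketch of such a resource monitor task is given below; it assumes that each print request is delivered as one complete report message, so that requests are serviced strictly one at a time and output is never interleaved.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Resource monitor task: the only task allowed to access the printer.
// Requests from multiple source tasks are queued and serviced sequentially.
public class PrinterMonitorTask implements Runnable {
    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();

    // Called by any number of client tasks; returns as soon as the request is queued.
    public void print(String report) throws InterruptedException {
        requests.put(report);
    }

    @Override
    public void run() {
        try {
            while (true) {
                String report = requests.take();   // next request, in FIFO order
                writeToPrinter(report);            // exclusive access: one request at a time
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void writeToPrinter(String report) { System.out.println(report); }
}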
An example of an event driven proxy task is a Pick & Place Robot Proxy task,
which communicates with and interfaces to a Pick & Place Robot, which is an
external computer-based system, as given in Figure 13.5.
An example of a demand driven task is given in Figure 13.7. In the analysis model,
the Speed Adjustment object is activated on demand by the arrival of a Cruise
Command message from the Train Control object, reads from the Current Speed and
Cruising Speed entity objects, calculates the adjustment to the speed, and sends a
Speed Value message with the speed adjustment to the Motor Output object (Figure
13.7a). In the design model, the Speed Adjustment object is structured as a demand
driven algorithm task called Speed Adjustment, which is activated by the arrival of a
cruise Command message. The Speed Adjustment task is depicted on the concurrent
communication diagram with the stereotypes «demand» «algorithm»
«swSchedulableResource» (Figure 13.7b). The Train Control and Motor Output
objects are also structured as demand driven tasks. The Current Speed and Cruising
Speed objects are passive entity objects.
Figure 13.7. Example of a demand driven algorithm task.
13.4.3 State Dependent Control Tasks
In the analysis model, a state dependent control object executes a state machine. Using the
restricted form of state machines whereby concurrency within an object is not permitted, it
follows that the execution of a state machine is strictly sequential. Hence, a task whose
execution is also strictly sequential can perform the control activity. A task that executes a
sequential state machine (typically implemented as a state transition table) is referred to as
a state dependent control task. A control task is usually a demand driven task, which is
activated on demand by the arrival of a message sent by another task. A state dependent
control task is depicted with the stereotypes «demand» «state dependent control»
«swSchedulableResource».
An example of a demand driven state dependent control task is shown in Figure 13.8.
Train Control (Figure 13.8a) is structured as a state dependent control task because it
executes the Train Control state machine, which is strictly sequential. The Train
Control task (Figure 13.8b) receives train arrival events from an Arrival Sensor
Input task and sends speed commands to a Speed Adjustment algorithm task. The
Train Control demand driven state dependent control task is depicted on the
concurrent communication diagram with the stereotypes «demand» «state dependent control» «swSchedulableResource».
Another example of a state dependent control task is the Character Control task,
which executes the state machine for a computer game character, in which all characters
are of the same type. There are multiple Character Control objects (depicted by using
the multiple instance 1..* notation in Figure 13.9a). Each Character Control instance
is structured as a demand driven state dependent control task. Consequently, there is one
Character Control task for each game character, which is also depicted by using the
multiple instance notation in Figure 13.9b. The game character tasks are identical, and
each task executes an instance of the same state machine. However, each character is
likely to be in a different state on its state machine and either waiting for or executing a
different event.
A user interaction task usually interfaces with various standard I/O devices – such as
the input keyboard, output display, and mouse – that are typically handled by the operating
system. Because the operating system usually provides a standard interface to these
devices, it is not necessary to develop special-purpose I/O tasks to handle them.
The concept of one task per sequential activity is used on modern workstations with
multiple windows. Each window executes a sequential activity, so there is one task for
each window. In the Windows operating system, it is possible for the user to have Word
executing in one window, PowerPoint executing in another window, and the user browsing
the Web in a third window. There is one user interaction task for each window, and each of
these tasks can spawn other tasks (for example, to overlap printing with editing).
A service object that is designed as a demand driven service task is also depicted in
Figure 13.11. The Sensor Data Service task is activated on demand by the arrival of
sensor requests from its clients. The task is depicted on the concurrent communication
diagram with the stereotypes «demand» «service» «swSchedulableResource». See
Chapter 12 for a longer discussion on the design of service subsystems.
Figure 13.11. Example of event driven user interaction tasks and demand driven service
tasks.
It might be that, for a given application, there are too many objects of the same type
to allow each to be mapped to a separate task. This issue is addressed by using task
inversion, as described in Section 13.7.
13.5 Task Priority Criteria
Task priority criteria take into account priority considerations in task structuring; in
particular, high- and low-priority tasks are considered. Task priority is often addressed late
in the development cycle. The main reason for considering it during the task structuring
phase is to identify any time-critical or non-time-critical computationally intensive objects
that need to be treated as separate tasks. Priorities for most tasks are determined based on
real-time scheduling considerations, as described in Chapter 17.
13.5.1 Time-Critical Tasks
A time-critical task is a task that needs to meet a hard deadline. Such a task needs to run
at a high priority. High-priority time-critical tasks are needed in most real-time systems.
Consider the case where the execution of a time-critical object is followed by a non-
time-critical object. To ensure that the time-critical object gets serviced rapidly, it should
be allocated to its own high-priority task.
Other examples of time-critical tasks are control tasks and event driven I/O tasks. A
control task executes a state machine and needs to execute at a high priority because state
transitions must be executed rapidly. An event driven I/O task needs to have a high
priority so it can service interrupts quickly; otherwise, there is a danger that it might miss
interrupts. An example of a high-priority event driven input task is the Arrival Sensor
Input task in Figure 13.1.
13.5.2 Non-Time-Critical Computationally Intensive Tasks
A non-time-critical computationally intensive task may run as a low-priority task
consuming spare CPU cycles. A low-priority computationally intensive task executes as a
background task that is preempted by higher-priority tasks that are more time critical.
13.6 Task Clustering Criteria
The task clustering criteria are used to determine whether certain tasks, determined during the first stage of task structuring, could be consolidated further to reduce the overall number of tasks. The tasks determined during the first stage of task structuring (by
using the I/O, internal, and priority task structuring criteria described in the previous
subsections) are referred to as candidate tasks. Candidate tasks can actually be combined
into physical tasks, based on the task clustering criteria described in this subsection.
The clustering criteria provide a means of analyzing the concurrent nature of the
candidate tasks and hence provide a basis for determining whether two or more candidate
tasks should be grouped into a single physical task and, if so, how. Thus, if two candidate
tasks are constrained so they cannot execute concurrently and must instead execute
sequentially, combining them into one physical task usually simplifies the design. There
are exceptions to this general rule, as described later.
This chapter describes task structuring by using the clustering criteria. However, the
internal design of clustered tasks is described in Chapter 14.
13.6.1 Temporal Clustering
Certain candidate tasks may be activated by the same event, for example, a timer event.
Each time the tasks are awakened, they execute some activity. If there is no sequential
dependency between the candidate tasks – that is, no required sequential order in which
the tasks must execute – the candidate tasks may be grouped into the same task, based on
the temporal clustering criterion. When the task is activated, each of the clustered
activities is executed in turn. Because there is no sequential dependency between these
clustered activities, an arbitrary execution order needs to be selected by the designer.
In Figure 13.12a, the Temperature Sensor Input periodic input task periodically
reads the current value of the temperature sensor and updates the current temperature in
the Sensor Data Repository object. Similarly, the Pressure Sensor Input
periodic input task periodically reads the current value of the pressure sensor and updates
the current pressure in the Sensor Data Repository object.
Now, assume that the sensors are to be sampled with the same frequency, perhaps
every 100 milliseconds. In this case, the Temperature Sensor Input and the
Pressure Sensor Input tasks can be grouped into a task called Sensor Monitor,
based on the temporal clustering criterion, as shown in Figure 13.12b. The Sensor
Monitor task is depicted as a periodic temporal clustering task on the concurrent
communication diagram with the stereotypes «timerResource» «temporal clustering»
«swSchedulableResource».
The Sensor Monitor task is activated periodically by a timer event from the
external timer and then samples the current values of the temperature and pressure
sensors. It then updates the values of the current temperature and pressure in the Sensor
Data Repository, which is a passive entity object. The attributes of the
«timerResource» are set to {isPeriodic = true, period = (100, ms)}, which means that each sensor is sampled once every period of 100 ms.
Although this example only has two sensors, the benefits of temporal clustering
become more apparent if one considers 100 sensors sampled with the same period being
clustered into one temporally cohesive task instead of into 100 periodic tasks.
If one candidate task is more time-critical than a second candidate task, the tasks
should not be combined; this gives the additional flexibility of allocating different
priorities to the two tasks.
If it is considered likely that two candidate tasks for temporal clustering could be
executed on separate processors, they should be kept as separate tasks because
each candidate task would execute on its own processor.
In short, the use of temporal clustering for related tasks is recommended in certain cases.
However, grouping periodic tasks that are not functionally related into one task is not
considered desirable from a design viewpoint, although it might be done for optimization
purposes if the tasking overhead is considered too high.
13.6.2 Sequential Clustering
The execution of certain candidate tasks might be constrained by the needs of the
application to be carried out in a sequential order. The first candidate task in the sequence
is triggered by an aperiodic or periodic event. The other candidate tasks are then executed
sequentially after it. These sequentially dependent candidate tasks may be grouped into a
task based on the sequential clustering criterion. A sequentially clustered task is depicted
with the stereotypes «sequential clustering» «swSchedulableResource».
If the last candidate task in a sequence does not send an inter-task message, this
terminates the group of tasks to be considered for sequential clustering. This
happens with the Status Display Output candidate task in Figure 13.13a,
which ends a sequence of two sequentially connected candidate tasks by displaying
the report.
If the next candidate task in the sequence also receives inputs from another source
and therefore can also be activated by receiving input from that source, this
candidate task should be left as a separate task. This happens in the case of the Microwave Control task, which can receive inputs from the Door Sensor Input task as well as from the Weight Sensor Input and Keypad Input tasks (Figure 13.16b). The four candidate tasks are not combined.
If the next candidate task in the sequence is likely to hold up the preceding
candidate task(s) such that they could miss either an input or a state change, the
next candidate task should be structured as a separate, lower-priority task. This is
what happens with the Arrival Sensor Input task in Figure 13.1, which
receives arrival events from the external arrival sensor, which it then passes on to
the Train Control task. The Arrival Sensor Input task must not miss any
external events, so it is structured as a higher-priority input task separate from the
Train Control task.
If the next candidate task in sequence is of a lower priority and follows a time-
critical task, the two tasks should be kept as separate tasks. This is discussed in
more detail in Task Priority Criteria in Section 13.5.
13.6.3 Control Clustering
A state dependent control object, which executes a sequential state machine, is mapped to
a state dependent control task. In certain cases, the state dependent control task may be
combined with other objects that execute actions triggered by the state machine. This is
referred to as control clustering. A control clustered task is activated on demand by the
arrival of a message from another task. A demand driven control clustering task is
therefore depicted with the stereotypes «demand» «control clustering»
«swSchedulableResource».
State dependent actions that are triggered by the control object because of a state
transition. Consider an action (designed as an operation provided by a separate
object) that is triggered at the state transition and both starts and completes
execution during the state transition. Such an action operation does not execute
concurrently with the control object. When mapped to tasks, the operation is
executed within the thread of control of the control task. If all the action operations
of an object are executed within the thread of control of the control task, that
object can be combined with the control task, based on the control clustering task
structuring criterion.
State dependent activities that are either enabled or disabled by the control object
because of a state transition. Consider an activity (executed by a separate object)
that is enabled at a state transition and then executes continuously until disabled at
a subsequent state transition. This activity should be structured as a separate task,
because both the control object and the activity will need to be active concurrently.
Consequently, the state dependent control object Pump Control and the output
object Pump Engine Output are grouped into a demand driven control clustering task –
the Pump Controller task – which is depicted with the stereotypes «demand» «control
clustering» «swSchedulableResource» on the concurrent communication diagram shown
in Figure 13.14b.
13.7 Design Restructuring by Using Task
Inversion
Task inversion is a concept that originated in Jackson Structured Programming and
Jackson System Development (Jackson 1983), whereby the number of tasks in a system
can be reduced in a systematic way. At one extreme, a concurrent solution can be mapped
to a sequential solution.
The task inversion criteria are used for merging tasks to reduce task overhead. The
task inversion criteria – and in particular multiple instance task inversion – may be used
during initial task structuring if high task overhead is anticipated. Alternatively, they may
be used for design restructuring in situations in which there are concerns about high
tasking overhead. In particular, task inversion can be used if a performance analysis of the
design indicates that the tasking overhead is too high.
13.7.1 Multiple Instance Task Inversion
Handling multiple control tasks of the same type was described in Section 13.4.6. With
this approach, several objects of the same type can be modeled by using one task instance
for each object, where all the tasks are of the same type. The problem is that, for a given
application, the system overhead for modeling each object by means of a separate task
might be too high.
With multiple instance task inversion, all identical tasks of the same type are
replaced by one task that performs the same functionality. For example, instead of
mapping each control object to a separate task, all control objects of the same type are
mapped to the same task. Each object’s state information is captured in a separate passive
entity object. A multiple instance inversion task is typically activated on demand by the
arrival of a message destined for one of the inverted tasks. A demand driven multiple
instance inversion task is depicted with the stereotypes «demand» «multiple instance
inversion» «swSchedulableResource».
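A minimal Java sketch of this design follows; the per-instance state map, the event names, and the transition operation are illustrative. One inverted task receives all messages, each tagged with the identifier of the logical character instance for which it is destined, and keeps each instance's state in a passive map entry instead of in a separate thread.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Multiple instance task inversion: one demand driven task replaces many
// identical control tasks; all instances execute the same state machine.
public class CharacterControlInvertedTask implements Runnable {
    static class Message {
        final int characterId; final String event;
        Message(int id, String ev) { characterId = id; event = ev; }
    }

    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();
    private final Map<Integer, String> stateOf = new HashMap<>();  // per-instance state (passive)

    // Called in place of sending a message to a dedicated instance task.
    public void send(int characterId, String event) throws InterruptedException {
        queue.put(new Message(characterId, event));
    }

    @Override
    public void run() {
        try {
            while (true) {
                Message m = queue.take();                            // activated on demand
                String current = stateOf.getOrDefault(m.characterId, "Idle");
                String next = transition(current, m.event);          // same state machine for all instances
                stateOf.put(m.characterId, next);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private String transition(String state, String event) { return state; }  // placeholder
}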
1. I/O tasks. Start with the I/O objects that interact with the outside world. Determine
whether the object should be structured as an event driven I/O task, a periodic I/O
task, a demand driven I/O task, a resource monitor task, or a temporally clustered
periodic I/O task.
2. Control tasks. Analyze each control object (state dependent control object or
coordinator object). Structure this object as a demand driven control task. Any object
that executes an action (operation) triggered by the control task can potentially be
combined with the control task based on the control clustering criterion (for a state
dependent object) or sequential clustering criterion (for a coordinator object). Any
activity that the control task enables and subsequently disables should be structured
as a separate task.
3. Periodic tasks. Analyze the internal periodic activities, which are structured as
periodic tasks. Determine if any candidate periodic tasks are triggered by the same
event. If they are, they may be grouped into the same task, based on the temporal
clustering criterion. Other candidate tasks that execute in sequence may be structured
into the same task, according to the sequential clustering criterion.
4. Other internal tasks. For each internal candidate task activated by an internal
event, identify whether any adjacent candidate tasks on the concurrent
communication diagram may be grouped into the same task according to the
temporal, sequential, or multiple instance task inversion clustering criteria.
The guidelines for mapping analysis model objects to design model tasks are summarized
in Table 13.1. In cases in which the clustering criterion applies, this means that the
analysis model object is designed as a passive object nested inside a clustered task, as
described in more detail in Chapter 14.
Table 13.2 depicts the stereotypes for all the tasks described in this chapter.
Task Stereotypes
Event driven I/O task: «event driven» «input» (or «output») «swSchedulableResource»
Periodic input task: «timerResource» «input» «swSchedulableResource»
Periodic output task: «timerResource» «output» «swSchedulableResource»
Periodic I/O task: «timerResource» «I/O» «swSchedulableResource»
Demand driven I/O task: «demand» «input» (or «output») «swSchedulableResource»
Demand driven algorithm task: «demand» «algorithm» «swSchedulableResource»
State dependent control task: «demand» «state dependent control» «swSchedulableResource»
Service task: «demand» «service» «swSchedulableResource»
Temporal clustering task: «timerResource» «temporal clustering» «swSchedulableResource»
Sequential clustering task: «sequential clustering» «swSchedulableResource»
Control clustering task: «demand» «control clustering» «swSchedulableResource»
Multiple instance inversion task: «demand» «multiple instance inversion» «swSchedulableResource»
13.9.1 Asynchronous Message Communication
Consider the concurrent communication diagram (Figure 13.16a), which depicts the
Door Sensor Input task sending a message to the Microwave Control task. The
Door Sensor Input task sends the message and does not wait for it to be accepted by
the Microwave Control task. This allows the Door Sensor Input task to quickly
service any new external input that might arrive. Asynchronous message communication
also provides the greatest flexibility for the Microwave Control task because it can wait
on a queue of messages that arrive from multiple sources – in addition to the Door
Sensor Input task, there are also the Weight Sensor Input and Keypad Input
tasks that send control requests as messages to Microwave Control (Figure 13.16b).
The messages from these producer tasks are queued FIFO in a message queue for
Microwave Control. The Microwave Control task processes the requests in the
order in which they arrive. This example is described in more detail in Chapter 19.
Figure 13.16. Examples of asynchronous message communication.
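In Java, this design maps naturally onto a shared blocking queue, as in the following minimal sketch; the message strings are illustrative stand-ins for the control request messages. Each producer sends and continues without waiting, while Microwave Control waits on the single FIFO queue fed by all three sources.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Asynchronous message communication: producers send and continue;
// the consumer processes requests in the order in which they arrive.
public class MicrowaveControlQueue {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();  // FIFO message queue

        Runnable doorSensor   = () -> queue.offer("doorOpened");         // send and return immediately
        Runnable weightSensor = () -> queue.offer("itemPlaced");
        Runnable keypad       = () -> queue.offer("cookingTimeEntered");

        new Thread(doorSensor).start();
        new Thread(weightSensor).start();
        new Thread(keypad).start();

        for (int i = 0; i < 3; i++) {
            String request = queue.take();   // Microwave Control waits on its queue
            System.out.println("processing " + request);
        }
    }
}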
13.9.2 Synchronous Message Communication with Reply
In the case of the synchronous message communication with reply pattern (Section
11.5.4), the producer sends a message to the consumer and then waits for a reply. When
the message arrives, the consumer accepts the message, processes it, generates a reply, and
sends the reply. The producer and consumer then both continue. The consumer is
suspended if no message is available.
13.9.3 Synchronous Message Communication without Reply
In this example, the decision made is that there is no point in having the Speed
Computation Algorithm producer task compute values of speed if the Speed
Display Output consumer task cannot keep up with displaying them. Consequently, the
interface between the two tasks is mapped to a synchronous message communication
without reply interface, as depicted in Figure 13.18. The Speed Computation
Algorithm computes the speed, sends the message, and then waits for the acceptance of
the message by the Speed Display Output before resuming execution. The Speed
Computation Algorithm is held up until the Speed Display Output finishes
displaying the previous message. As soon as the Speed Display Output accepts the
new message, the Speed Computation Algorithm is released from its wait and
computes the next value of speed while the Speed Display Output displays the
previous value. By this means, computation of the new value of speed (a compute-bound
activity) can be overlapped with displaying of the previous value of speed (an I/O-bound
activity), while preventing an unnecessary message queue build-up of speed messages at
the display task. Thus, the synchronous message communication without reply between
the two tasks acts as a brake on the producer task.
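A minimal Java sketch of this producer/consumer design follows, using a SynchronousQueue so that the producer's send does not complete until the consumer accepts the message; the computeSpeed operation is an illustrative placeholder for the compute-bound work.

import java.util.concurrent.SynchronousQueue;

// Synchronous message communication without reply: put does not complete
// until the consumer takes the message, so no queue of speed values builds
// up at the display, and computation overlaps with display of the previous value.
public class SpeedDisplayExample {
    public static void main(String[] args) {
        SynchronousQueue<Double> channel = new SynchronousQueue<>();

        Thread producer = new Thread(() -> {        // Speed Computation Algorithm
            try {
                double speed = 0;
                while (true) {
                    speed = computeSpeed(speed);    // compute-bound work
                    channel.put(speed);             // blocks until the display accepts
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {        // Speed Display Output
            try {
                while (true) {
                    double speed = channel.take();  // accept; releases the producer
                    System.out.println("speed = " + speed);  // I/O-bound display
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }

    private static double computeSpeed(double previous) { return previous + 1.0; }
}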
13.9.4 External and Timer Event Synchronization
Three types of event synchronization are possible: an external event, a timer event, and
an internal event. This section describes event synchronization with external and timer
events. The next section describes internal event synchronization.
An example of a timer event is given in Figure 13.20. The digital timer, which is a
timer resource hardware device, generates a timer event to awaken the Microwave
Timer «timerResource» «swSchedulableResource» task. The Microwave Timer task
then performs a periodic activity – in this case, decrementing the cooking time by one
second and checking whether the cooking time has expired. The timer event is generated
at fixed intervals of time.
Figure 13.20. Example of timer event.
13.9.5 Internal Event Synchronization
An internal event represents internal synchronization between a source task and a
destination task. Internal event synchronization is used when two tasks need to
synchronize their operations without communicating data between the tasks. The source
task executes a signal (event) operation, which sends the event. The destination task
executes a wait (event) operation, which suspends the task until the event is signaled. The
destination task is not suspended if the event has previously been signaled. The event
signal is depicted in UML by an asynchronous message that does not contain any data. An
example of this is shown in Figure 13.21, in which the pick-and-place robot task signals
the event part Ready. This awakens the drilling robot, which operates on the part and
then signals the event part Completed, which the pick-and-place robot is waiting to
receive. This example is described in more detail in Section 14.6.1.
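Internal event synchronization can be sketched in Java using counting semaphores initialized to zero, where signal corresponds to release and wait to acquire; if the event has already been signaled, the waiting task is not suspended. The robot operations below are illustrative placeholders.

import java.util.concurrent.Semaphore;

// Internal event synchronization between two tasks: no data is
// communicated, only the partReady and partCompleted events.
public class RobotSynchronization {
    private static final Semaphore partReady     = new Semaphore(0);
    private static final Semaphore partCompleted = new Semaphore(0);

    public static void main(String[] args) {
        Thread pickAndPlaceRobot = new Thread(() -> {
            try {
                while (true) {
                    placePart();
                    partReady.release();       // signal (partReady)
                    partCompleted.acquire();   // wait (partCompleted)
                    removePart();
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread drillingRobot = new Thread(() -> {
            try {
                while (true) {
                    partReady.acquire();       // wait (partReady)
                    drillPart();               // operate on the part
                    partCompleted.release();   // signal (partCompleted)
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        pickAndPlaceRobot.start();
        drillingRobot.start();
    }

    private static void placePart()  { }
    private static void drillPart()  { }
    private static void removePart() { }
}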
It is important to realize how the synchronous message notation used between two
concurrent tasks differs from that used between a task and a passive object. The notation
looks the same in the UML: an arrow with a filled-in arrowhead. The semantics are
different, however. The synchronous message notation between two concurrent tasks
represents message communication between two tasks in which the producer task waits
for the consumer task, as shown in Figures 13.17 and 13.18 using the synchronous
communication with reply and without reply patterns respectively. However, the
synchronous message notation between a task and a passive object represents an operation
call (as shown in Figure 13.22) in which the task invokes an operation of the object, which
executes in the thread of control of the task, using the Synchronized Object Access pattern
(see Chapter 11).
13.10 Task Interface and Task Behavior
Specifications
A task interface specification (TIS) describes a concurrent task’s interface. It is an
extension of the class interface specification (Gomaa 2011) with additional information
specific to a task, including task structure, timing characteristics, relative priority, and
errors detected. A task behavior specification (TBS) describes the task’s event
sequencing logic. The task’s interface defines how it interfaces to other tasks. The task’s
structure describes how its structure is derived, using the task structuring criteria. The
task’s timing characteristics addresses frequency of activation and estimated execution
time. This information is used for real-time scheduling purposes, as described in Chapter
17.
The TIS is introduced with the task architecture to specify the characteristics of each
task. The TBS is defined later, during detailed software design (Chapter 14), and describes
the task event sequencing logic, which is how the task responds to each of its message or
event inputs, in particular, what output is generated as a result of each input.
A task (active class) differs from a passive class in that it should be designed with
only one operation (in Java, this can be implemented as the run method). For this reason,
the TIS only has a specification of one operation, instead of several for a typical passive
class. The TIS is defined as follows, with the first five items identical to a class interface
specification:
Name.
Information hidden.
Structuring criteria: Both the role criterion (e.g., input) and concurrency criterion
(e.g., event driven) need to be described.
Assumptions
Anticipated Changes
· a) Message inputs and outputs. For each message interface (input or output)
there should be a description of
Type of interface: asynchronous, synchronous with reply, or synchronous
without reply
For each message type supported by this interface: message name and
message parameters
· b) Events signaled (input and output), name of event, type of event: external,
internal, timer
· c) External inputs or outputs. Define the inputs from and outputs to the external
environment.
Errors detected. This section describes the possible errors that could be detected
during execution of this task.
Examples of task interface specifications for tasks in the Railroad Crossing Control
System are described in Chapter 20.
13.11 Summary
During the concurrent task design phase, the system is structured into concurrent tasks and
the task interfaces are designed. To help determine the concurrent tasks, task structuring
criteria are provided to assist in mapping an object-oriented analysis model of the system
to a concurrent tasking architecture. Tasks are labeled using MARTE stereotypes. The task
communication and synchronization interfaces are also designed.
Following concurrent task design, Chapter 14 describes the detailed software design,
in which tasks that contain nested passive objects are designed, detailed task
synchronization issues are addressed, connector classes are designed to encapsulate the
details of inter-task communication, and each task’s internal event sequencing logic is
designed. Examples of task event sequencing logic for the different kinds of tasks
described in this chapter are given in Appendix C. As soon as the task architecture has
been designed, performance analysis of the concurrent real-time design can commence, as
described in Chapter 17. Several examples of task structuring and designing task
interfaces are described in the case studies in Chapters 19–23.
14
Detailed Real-Time Software Design
After structuring the system into tasks (in Chapter 13), this chapter describes the detailed
software design. In this step, the internals of composite tasks that contain nested objects
are designed, detailed synchronization issues of tasks accessing passive classes are
addressed, connector classes are designed that encapsulate the details of inter-task
communication, and each task’s internal event sequencing logic is defined. Several
examples are given in Pseudocode of the detailed design of task synchronization
mechanisms, connector classes for inter-task communication, and task event sequencing
logic.
Section 14.1 describes the design of composite tasks, including the internal design of
temporal and control clustering tasks. Section 14.2 describes the synchronization of access
to classes using different synchronization mechanisms, including the mutual exclusion
algorithm and the multiple readers and writers algorithm. Section 14.3 describes the
synchronization of access to passive objects using the monitor concept. Section 14.4
describes the design of connectors for inter-task communication, in particular for
synchronous and asynchronous message communication. Section 14.5 describes the
detailed software design of tasks using task behavior specifications and event sequencing
logic. Section 14.6 provides detailed software design examples of task communication and
synchronization in real-time robot and vision systems. Finally, Section 14.7 briefly
describes implementing concurrent tasks in Java using threads.
14.1 Design of Composite Tasks
A composite task is a task that encapsulates one or more nested objects. This section
describes the detailed design of composite tasks, which includes tasks that were structured
using the task clustering criteria. Such tasks are designed as composite active classes that
contain nested passive classes. In a real-time design, typical nested classes are entity
classes, input/output classes, and state machine classes.
After considering the relationship between tasks and classes, this section describes
situations where it is useful to divide the responsibility between tasks and classes. Next,
the design of two composite tasks is described in detail: a temporal clustering task and a
control clustering task.
14.1.1 Separation of Concerns between Tasks and Nested Classes
The relationship between tasks and classes is handled as follows. The active object, the
task, is activated by an external, internal, or timer event. It then calls an operation
provided by a passive object, which might be nested inside the task, as described in this
section, or external to the task, as described in Section 14.2. Typical passive classes are:
Entity classes that encapsulate internal data structures (also referred to as data
abstraction classes), as described initially in Section 3.2.2. Design of entity classes
that are accessed by multiple tasks is described in Section 11.5.1, in Section 13.9.5,
and in much more detail in this chapter in Section 14.2.
Device input or output classes that hide the details of how to interface to I/O
devices, as described in Section 3.2.3 and in more detail in Section 14.1.2.
State machine classes that hide the details of the encapsulated state transition table,
as described in Section 14.1.3.
A composite task with several nested objects can be depicted on a detailed concurrent
communication diagram. Each composite task has a coordinator object, which receives the
task’s incoming messages and can then invoke operations provided by other nested
objects.
14.1.2 Design of Device I/O Classes
A device I/O class provides a virtual interface that hides the actual interface to a real-
world I/O device. The rationale for designing such classes using information hiding is
described in Section 3.2.3. This section describes the design of the class operations.
A device I/O class interfaces to the real-world device and provides the operations that
read from and/or write to the device. A device I/O class has an initialize operation.
When an object is instantiated from the class, this operation is called at device
initialization time to initialize the device and any internal variables used by the object. The
other operations depend on the characteristics of the device. For example, a device input
class is likely to have a read operation, and a device output class is likely to have a
write or update operation. A device interface class that provides both input and output
is likely to have both read and write operations.
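A minimal Java sketch of a device input class follows; the deviceRegister field and the conversion factor are illustrative stand-ins for the real device access.

// Device I/O class: hides the actual device interface behind a virtual one.
public class TemperatureSensorInput {
    private int deviceRegister;    // stands in for a memory-mapped device address (assumed)

    // Called once at device initialization time.
    public void initialize() {
        deviceRegister = 0;        // configure the device and internal variables
    }

    // Virtual read operation: how the device is accessed is hidden here.
    public double read() {
        return convert(sampleDevice());
    }

    private int sampleDevice() { return deviceRegister; }   // raw device access
    private double convert(int raw) { return raw * 0.1; }   // raw units to degrees
}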
14.1.3 Design of State Machine Classes
The state machine class provides the operations that access the state transition table
and change the state of the object. In particular, one or more operations are designed to
process the incoming events that cause state changes. One way of designing the operations
of a state machine class is to have one operation for each incoming event. This means that
each state machine class is designed explicitly for a particular state machine. However, it
is more desirable to design a state machine class that is application independent and hence
more reusable.
A reusable state machine class hides the contents of the state transition table and the
current state of the machine. It provides three reusable operations that are not application
specific:
initializeSTM ()
processEvent (in event, out action)
currentState (): State
The processEvent operation is called when there is a new event to process, with the
new event passed in as an input parameter. Given the current state of the machine and any
specified conditions that must hold, the operation looks up the state transition table entry for (current state, new event). The information contained in that entry is the next
state and action(s) to be performed. The current state is then updated to the new state and
the action or action list is returned as an output parameter. The currentState operation
is optional; it returns the state of the machine and is only needed in applications where the
current state needs to be known by tasks using the state machine class.
A state machine class is a reusable class in that it can be used to encapsulate any state
transition table. The contents of the table are application-dependent and are defined at the
time the state machine class is instantiated and/or initialized. At initialization time, the
initializeSTM operation is called to populate the state machine (typically from a file)
with the states, events, actions, and conditions, as well as setting the current state of the
machine to the initial state.
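The following minimal Java sketch shows one possible design of such a reusable state machine class, with the transition table held in a map keyed on (state, event). As simplifications, the table is populated programmatically through an addTransition operation rather than from a file, and processEvent returns the action instead of using an output parameter.

import java.util.HashMap;
import java.util.Map;

// Reusable state machine class: encapsulates the state transition table
// and the current state; application independent.
public class StateMachine {
    // Table entry: next state and action for a given (state, event) pair.
    static class Transition {
        final String nextState; final String action;
        Transition(String nextState, String action) {
            this.nextState = nextState; this.action = action;
        }
    }

    private final Map<String, Transition> table = new HashMap<>();
    private String currentState;

    // initializeSTM: set the current state to the initial state.
    public void initializeSTM(String initialState) { currentState = initialState; }

    // Populate the table (in the full design, loaded from a file).
    public void addTransition(String state, String event, String nextState, String action) {
        table.put(state + "/" + event, new Transition(nextState, action));
    }

    // processEvent: look up (current state, new event), update the state,
    // and return the action to be performed.
    public String processEvent(String event) {
        Transition t = table.get(currentState + "/" + event);
        if (t == null) return null;        // event not valid in the current state
        currentState = t.nextState;
        return t.action;
    }

    // currentState: only needed if clients must know the state.
    public String currentState() { return currentState; }
}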
An example of a state machine class from the Microwave Oven Control System
is the Microwave State Machine state machine class, shown in Figure 14.2. The class
encapsulates the microwave oven state transition table (which is mapped from the
microwave oven state machine, as depicted in Chapters 7 and 19) and provides the
initializeSTM, processEvent, and currentState operations. At initialization
time, the current state of the state machine is set to Door Closed, which is the initial
state of the microwave oven.
14.1.4 Temporal Clustering Task with Nested Input Objects
Consider the dynamic behavior. The task is first activated by a timer event. It then
calls the operations provided by each device interface object to obtain the latest status of
each device and then either sends the device status to a consumer task or writes it to a
passive entity object.
An example of polled I/O is given in Figure 14.3. The initial design decision is to
design two separate periodic input tasks, one for each input object, Temperature
Sensor Input and Pressure Sensor Input (Figure 14.3a), which monitor the
temperature and pressure sensors respectively. Given that the temperature and pressure
sensors are sampled periodically and with the same frequency, an alternative design
decision is to design one temporal clustering task, which samples both temperature and
pressure, as described in Section 13.6.1 and shown in Figure 14.3b.
Figure 14.3. Example of temporal clustering and input objects. a. Periodic input tasks
before temporal clustering. b. Periodic input with one temporal clustering task.
Figure 14.3c. Design of nested input classes.
Figure 14.3d. Temporal clustering task with nested input objects.
From a class structuring perspective, two separate input classes are created for the
temperature and pressure sensors (Figure 14.3c), namely the Temperature Sensor
Input and Pressure Sensor Input classes. Each input class supports two operations:
for Temperature Sensor Input, the operations are read (out tempData) and
initialize. For Pressure Sensor Input, they are read (out pressureData)
and initialize.
From a combined task and class perspective, the Sensor Monitor task is structured
as a composite task, which contains three nested objects: a coordinator object, the Sensor
Coordinator, and two input objects, Temperature Sensor Input and Pressure
Sensor Input. The Sensor Data Repository entity class is outside the task and has
operations to update and read the temperature and pressure sensor values, namely update
(in currentPressure), update (in currentTemp), read (out
pressureValue), and read (out temperatureValue).
Consider the dynamic behavior as depicted in Figure 14.3d. The Sensor Monitor
task is activated periodically by a timer event. At this time, the coordinator object,
Sensor Monitor Coordinator, reads the current values of the sensors by calling each
of the operations, Temperature Sensor Input.read (out tempData) and
Pressure Sensor Input.read (out pressureData). It then invokes the update
operations of the Sensor Data Repository entity object, namely Sensor Data
Repository.update (in currentTemp), and Sensor Data Repository.update
(in currentPressure).
By separating the concern of how a device is accessed into the input class from the
concern of when the device is accessed into the task, greater flexibility and potential reuse
is achieved. Thus, for example, the temperature input class could be used in different
applications by an event driven input task, a periodic input task, or a temporally clustered
periodic I/O task. Furthermore, the characteristics of different temperature sensors could
be hidden inside the input class while preserving the same virtual device interface.
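A minimal Java sketch of the Sensor Monitor composite task follows, reusing the TemperatureSensorInput class sketched in Section 14.1.2 and assuming an analogous PressureSensorInput class; because Java cannot overload two update operations with the same parameter type, the repository uses distinct operation names in place of the book's overloaded update operations.

// Analogous device input class for the pressure sensor (assumed).
class PressureSensorInput {
    public void initialize() { }
    public double read() { return 0.0; }   // hides the pressure device access
}

// External passive entity object shared with other tasks.
class SensorDataRepository {
    private double temperature, pressure;
    public synchronized void updateTemperature(double t) { temperature = t; }
    public synchronized void updatePressure(double p) { pressure = p; }
}

// Composite temporal clustering task: on each timer event, the nested
// coordinator reads each input object in turn and updates the repository.
public class SensorMonitorTask implements Runnable {
    private final TemperatureSensorInput temperatureInput = new TemperatureSensorInput();
    private final PressureSensorInput pressureInput = new PressureSensorInput();
    private final SensorDataRepository repository;
    private final long periodMillis;

    public SensorMonitorTask(SensorDataRepository repository, long periodMillis) {
        this.repository = repository;
        this.periodMillis = periodMillis;
        temperatureInput.initialize();
        pressureInput.initialize();
    }

    @Override
    public void run() {
        try {
            while (true) {
                Thread.sleep(periodMillis);                              // wait for the next timer event
                repository.updateTemperature(temperatureInput.read());   // coordinator samples each sensor
                repository.updatePressure(pressureInput.read());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}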
14.1.5 Control Clustering Task and Information-Hiding Objects
The next case to be considered is the design of a control clustering task with nested
information-hiding objects. The task is activated on demand. It then calls operations
provided by one or more passive objects.
Figure 14.4 gives an example of a control clustering task and the nested objects to
which it interfaces. The initial design decision before control clustering (Figure 14.4a) is
to have the control task Pump Control, which encapsulates a state machine, send start and stop messages (at different state transitions) to the Pump Engine Output task.
Figure 14.4. Example of a control clustering task with passive objects. a. Tasks before
control clustering. b. Control clustering task.
From a combined task- and class structuring perspective (Figure 14.4d), there is one
task, the Pump Controller task, which is structured as a composite task. It contains
three nested objects: a state machine object called Pump Control, a passive output object
called Pump Engine Output, and a nested coordinator object called Pump
Coordinator, which provides the overall internal coordination of the task. When a new
message arrives at Pump Controller, it is received by Pump Coordinator, which
extracts the specific event from the request and calls PumpControl.processEvent (in
event, out action). Pump Control looks up the state transition table, given the
current state and the new event. The entry in the table contains the new state and the
action to be performed. Pump Control updates the current state and returns the action to
be performed. Pump Coordinator then initiates the action. If the action is to start or stop
the pump, it invokes the start or stop operation of Pump Engine Output.
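A minimal Java sketch of the Pump Controller task follows, reusing the StateMachine class sketched in Section 14.1.3; the states, events, and transitions shown are illustrative, not taken from the case study.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Control clustering task: the coordinator receives each message, passes
// the event to the nested state machine object, and initiates the action
// returned, invoking the nested output object where required.
public class PumpControllerTask implements Runnable {
    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();
    private final StateMachine pumpControl = new StateMachine();            // nested state machine object
    private final PumpEngineOutput pumpEngineOutput = new PumpEngineOutput(); // nested output object

    public PumpControllerTask() {
        pumpControl.initializeSTM("PumpOff");
        pumpControl.addTransition("PumpOff", "highWater", "PumpOn", "start");  // illustrative transitions
        pumpControl.addTransition("PumpOn", "lowWater", "PumpOff", "stop");
    }

    public void send(String event) throws InterruptedException { requests.put(event); }

    @Override
    public void run() {
        try {
            while (true) {
                String event = requests.take();                   // Pump Coordinator receives the message
                String action = pumpControl.processEvent(event);  // look up table, update current state
                if ("start".equals(action)) pumpEngineOutput.start();   // initiate the returned action
                else if ("stop".equals(action)) pumpEngineOutput.stop();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

class PumpEngineOutput {
    public void start() { /* switch the pump engine on */ }
    public void stop()  { /* switch the pump engine off */ }
}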
14.2 Synchronization of Access to Classes
If a class is accessed by more than one task, the class’s operations must synchronize the
access to the data it encapsulates, as described in the Object Access Pattern in Section
11.5.1. This section describes mechanisms for providing this synchronization using the
mutual exclusion algorithm and the multiple readers and writers algorithm.
14.2.1 Example of Synchronization of Access to Class
As an example of synchronization of access to a class, consider a passive entity class – the
Analog Sensor Repository class, which encapsulates a sensor data repository. In
designing this class, one design decision relates to whether the internal sensor data
structure is to be designed as an array or a linked list. Another design decision relates to
the nature of the synchronization required, whether an object of this class is to be accessed
by more than one task concurrently, and – if so – whether mutual exclusion or the multiple
readers and writers algorithm is required. These design decisions relate to the design of the
class and need not concern users of the class.
By separating the concerns of what the class does – namely the specification of the
operations – from how it does it – namely the internal design of the class – any changes to
the internals of the class have no impact on users of the class. Possible changes are
Changes to the internal data structure, such as from array to linked list;
Changes to the internal synchronization of access to the data, such as from mutual
exclusion to multiple readers and writers;
The impact of these changes is only on the internals of the class, namely, the internal data
structure and the internals of the operations that access the data structure.
14.2.2 Operations Provided for Synchronized Access to Class
For the same external interface of the Analog Sensor Repository entity class,
consider two different internal designs for the synchronization of access to the sensor data
repository: mutual exclusion and multiple readers and writers. As described in Section
13.9.5 and depicted in Figure 14.5, a shared entity class is labeled with the MARTE
stereotypes «sharedDataComResource» because it is a resource for sharing data that is
communicated between tasks and «sharedMutualExclusionResource» because it is also a
resource that ensures mutually exclusive access to the shared data. It should be pointed out
that the mutual exclusion stereotype is interpreted as meaning that mutual exclusion is
enforced when necessary and not that every access to the data is mutually exclusive.
In the sensor repository example, the Analog Sensor Repository entity class
provides the following two operations (see Figure 14.5).
readAnalogSensor (in sensorID, out sensorValue, out upperLimit, out
lowerLimit, out alarmCondition)
This operation is called by reader tasks that wish to read from the sensor data repository.
Given the sensor ID, this operation returns the current sensor value, upper limit, lower
limit, and alarm condition to users who might wish to manipulate or display the data. The
range between the lower limit and upper limit is the normal range within which the sensor
value can vary without causing an alarm. If the value of the sensor is below the lower limit
or above the upper limit, the alarmCondition is equal to low or high, respectively.
updateAnalogSensor (in sensorID, in sensorValue)
This operation is called by writer tasks that wish to write to the sensor data repository. It is
used to update the value of the sensor in the data repository with the latest reading
obtained by monitoring the external environment. It checks whether the value of the
sensor is below the lower limit or above the upper limit, and if so sets the value of the
alarmCondition to low or high, respectively. If the sensor value is within the normal
range, the alarmCondition is set to normal.
Figure 14.5. Example of concurrent access to passive entity object.
14.2.3 Synchronization Using Mutual Exclusion
Consider first the mutual exclusion solution using a binary semaphore (see Section 3.6.1)
in which the acquire and release operations on the semaphore are provided by the
operating system. To ensure mutual exclusion in the sensor repository example, each task
must execute an acquire operation on the semaphore readWriteSemaphore (initially set
to 1) before it starts accessing the data repository. It must also execute a release operation
on the semaphore after it has finished accessing the data repository. The Pseudocode for
the read and update operations is as follows:
class AnalogSensorRepository
private readWriteSemaphore : Semaphore = 1
public readAnalogSensor (in sensorID, out sensorValue, out upperLimit, out
lowerLimit, out alarmCondition)
-- Critical section for read operation.
acquire (readWriteSemaphore);
sensorValue := sensorDataRepository (sensorID, value);
upperLimit := sensorDataRepository (sensorID, upLim);
lowerLimit := sensorDataRepository (sensorID, loLim);
alarmCondition := sensorDataRepository (sensorID, alarm);
release(readWriteSemaphore);
end readAnalogSensor;
In the case of the update operation, in addition to updating the value of the sensor in the
data repository, it is also necessary to determine whether the sensor’s alarm condition is
high, low, or normal.
public updateAnalogSensor (in sensorID, in sensorValue)
-- Critical section for write operation.
acquire (readWriteSemaphore);
sensorDataRepository (sensorID, value) := sensorValue;
if sensorValue ≥ sensorDataRepository (sensorID, upLim)
then sensorDataRepository (sensorID, alarm) := high;
elseif sensorValue ≤ sensorDataRepository (sensorID, loLim)
then sensorDataRepository (sensorID, alarm) := low;
else sensorDataRepository (sensorID, alarm) := normal;
end if;
release (readWriteSemaphore);
end updateAnalogSensor;
end AnalogSensorRepository;
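As an illustration, the same mutual exclusion design can be sketched in Java using the standard java.util.concurrent.Semaphore class. In the following sketch, the SensorRecord type and the array-based repository are assumptions introduced for the example, not part of the original design:

import java.util.concurrent.Semaphore;

// Minimal Java sketch of the mutual exclusion design above.
class AnalogSensorRepository {
    static class SensorRecord {
        double value, upperLimit, lowerLimit;
        String alarmCondition = "normal";
    }

    private final Semaphore readWriteSemaphore = new Semaphore(1);
    private final SensorRecord[] repository;

    AnalogSensorRepository(int numberOfSensors) {
        repository = new SensorRecord[numberOfSensors];
        for (int i = 0; i < numberOfSensors; i++) repository[i] = new SensorRecord();
    }

    // Critical section for the read operation.
    SensorRecord readAnalogSensor(int sensorID) throws InterruptedException {
        readWriteSemaphore.acquire();
        try {
            SensorRecord r = repository[sensorID];
            SensorRecord copy = new SensorRecord();
            copy.value = r.value;
            copy.upperLimit = r.upperLimit;
            copy.lowerLimit = r.lowerLimit;
            copy.alarmCondition = r.alarmCondition;
            return copy;
        } finally {
            readWriteSemaphore.release();
        }
    }

    // Critical section for the write operation; also sets the alarm condition.
    void updateAnalogSensor(int sensorID, double sensorValue) throws InterruptedException {
        readWriteSemaphore.acquire();
        try {
            SensorRecord r = repository[sensorID];
            r.value = sensorValue;
            if (sensorValue >= r.upperLimit) r.alarmCondition = "high";
            else if (sensorValue <= r.lowerLimit) r.alarmCondition = "low";
            else r.alarmCondition = "normal";
        } finally {
            readWriteSemaphore.release();
        }
    }
}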
14.2.4 Synchronization of Multiple Readers and Writers
With the multiple readers and writers solution, multiple reader tasks may access the data
repository concurrently, and writer tasks have mutually exclusive access to it. Two binary
semaphores are used, readerSemaphore and readWriteSemaphore, which are both
initially set to 1. A count of the number of readers, numberOfReaders, is also
maintained, initially set to 0. The readerSemaphore is used by readers to ensure
mutually exclusive updating of the reader count. Writers use the readWriteSemaphore
to ensure mutually exclusive access to the sensor data repository. This semaphore is also
accessed by readers. It is acquired by the first reader prior to reading from the data
repository and released by the last reader after finishing reading from the data repository.
The Pseudocode for the read and update operations is as follows:
class AnalogSensorRepository
private numberOfReaders : Integer = 0;
readerSemaphore: Semaphore = 1;
readWriteSemaphore: Semaphore = 1;
public readAnalogSensor (in sensorID, out sensorValue, out upperLimit, out
lowerLimit, out alarmCondition)
-- Read operation called by reader tasks. Several readers are
-- allowed to access the data repository providing there is no
-- writer accessing it.
acquire (readerSemaphore);
Increment numberOfReaders;
if numberOfReaders = 1 then acquire (readWriteSemaphore);
release (readerSemaphore);
sensorValue := sensorDataRepository (sensorID, value);
upperLimit := sensorDataRepository (sensorID, upLim);
lowerLimit := sensorDataRepository (sensorID, loLim);
alarmCondition := sensorDataRepository (sensorID, alarm);
acquire (readerSemaphore);
Decrement numberOfReaders;
if numberOfReaders = 0 then release (readWriteSemaphore);
release (readerSemaphore);
end readAnalogSensor;
The Pseudocode for the update operation is similar to that for the mutual exclusion
example because it is necessary to ensure that writer tasks that call the update operation
have mutually exclusive access to the sensor data repository.
public updateAnalogSensor (in sensorID, in sensorValue)
-- critical section for write operation.
acquire (readWriteSemaphore);
sensorDataRepository (sensorID, value) := sensorValue;
if sensorValue ≥ sensorDataRepository (sensorID, upLim)
then sensorDataRepository (sensorID, alarm) := high;
elseif sensorValue ≤ sensorDataRepository (sensorID, loLim)
then sensorDataRepository (sensorID, alarm) := low;
else sensorDataRepository (sensorID, alarm) := normal;
end if;
release (readWriteSemaphore);
end updateAnalogSensor;
end AnalogSensorRepository;
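For comparison, the multiple readers and writers policy is provided directly by Java's standard ReentrantReadWriteLock class. The following sketch is an alternative realization rather than the design above, and assumes a simple array of sensor values:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Alternative sketch: readers/writers policy via Java's built-in lock.
class AnalogSensorRepositoryRW {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final double[] values;   // illustrative internal data structure

    AnalogSensorRepositoryRW(int numberOfSensors) {
        values = new double[numberOfSensors];
    }

    double readAnalogSensor(int sensorID) {
        rwLock.readLock().lock();    // several readers may hold the read lock
        try {
            return values[sensorID];
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void updateAnalogSensor(int sensorID, double sensorValue) {
        rwLock.writeLock().lock();   // writers get mutually exclusive access
        try {
            values[sensorID] = sensorValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}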
This solution solves the problem; however, it intertwines the synchronization solution with
the access to the data repository. It is possible to separate these two concerns, as described
next.
14.3 Designing Monitors
Synchronization of access to passive objects can also be achieved using monitors, as
described in this section. A monitor combines the concepts of information hiding and
synchronization. A monitor is a data object that encapsulates data and has operations that
are executed mutually exclusively. The critical section of each task is replaced by a call to
a monitor operation. An implicit semaphore is associated with each monitor, referred to as
the monitor lock. Thus, only one task is active in a monitor at any one time. A call to a
monitor operation results in the calling task acquiring the associated semaphore. However,
if the lock is already taken, the task blocks until the monitor lock is acquired. An exit from
the monitor operation results in a release of the semaphore, that is, the monitor lock is
released so that it can be acquired by a different task. The mutually exclusive operations of
a monitor are also referred to as guarded operations or synchronized methods in Java.
14.3.1 Example of Mutual Exclusion with Monitor
An example of mutually exclusive access to the analog sensor repository using a monitor
is described next. The monitor solution is to encapsulate the sensor data repository in an
Analog Sensor Repository information-hiding object, which supports read and
update operations. These operations are called by any task wishing to access the data
repository. The details of how to synchronize access to the data repository are hidden from
the calling tasks.
The monitor provides for mutually exclusive access to the analog sensor repository.
There are two mutually exclusive operations, one to read from and one to update the
contents of the analog repository. The specification of the two operations is given in
Section 14.2.2 and depicted in Figure 14.5. The Pseudocode for the mutually exclusive
operations is as follows:
monitor AnalogSensorRepository
public readAnalogSensor (in sensorID, out sensorValue, out upperLimit, out
lowerLimit, out alarmCondition)
sensorValue := sensorDataRepository (sensorID, value);
upperLimit := sensorDataRepository (sensorID, upLim);
lowerLimit := sensorDataRepository (sensorID, loLim);
alarmCondition := sensorDataRepository (sensorID, alarm);
end readAnalogSensor;
public updateAnalogSensor (in sensorID, in sensorValue)
sensorDataRepository (sensorID, value) := sensorValue;
if sensorValue ≥ sensorDataRepository (sensorID, upLim)
then sensorDataRepository (sensorID, alarm) := high;
elseif sensorValue ≤ sensorDataRepository (sensorID, loLim)
then sensorDataRepository (sensorID, alarm) := low;
else sensorDataRepository (sensorID, alarm) := normal;
end if;
end updateAnalogSensor;
end AnalogSensorRepository;
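In Java, a monitor's mutually exclusive operations correspond directly to synchronized methods. The following is a minimal Java sketch of the monitor above; the parallel arrays holding the sensor data are an assumption made for the example:

// Monitor sketch: synchronized methods execute under the monitor lock,
// so readAnalogSensor and updateAnalogSensor are mutually exclusive.
class AnalogSensorRepositoryMonitor {
    private final double[] values, upperLimits, lowerLimits;
    private final String[] alarms;

    AnalogSensorRepositoryMonitor(int numberOfSensors) {
        values = new double[numberOfSensors];
        upperLimits = new double[numberOfSensors];
        lowerLimits = new double[numberOfSensors];
        alarms = new String[numberOfSensors];
    }

    public synchronized double readAnalogSensor(int sensorID) {
        return values[sensorID];     // executed under the monitor lock
    }

    public synchronized void updateAnalogSensor(int sensorID, double v) {
        values[sensorID] = v;
        if (v >= upperLimits[sensorID]) alarms[sensorID] = "high";
        else if (v <= lowerLimits[sensorID]) alarms[sensorID] = "low";
        else alarms[sensorID] = "normal";
    }
}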
14.3.2 Monitors and Condition Synchronization
In addition to providing synchronized operations, monitors support condition
synchronization. This allows a task executing the monitor’s mutually exclusive operation
to block by executing a wait operation until a particular condition is true, for example,
waiting for a buffer to become full or empty. When a task in a monitor blocks, it releases
the monitor lock, allowing a different task to acquire the monitor lock. A task that blocks
in a monitor is awakened by some other task executing a signal operation (referred to as
notify in Java). For example, if a reader task needs to read an item from a buffer and the
buffer is empty, it executes a wait operation. The reader remains blocked until a writer
task places an item in the buffer and executes a notify operation.
The following is the monitor design for mutually exclusive access to a resource:
monitor Semaphore
-- Declare Boolean variable called busy, initialized to false.
private busy : Boolean = false;
-- acquire is called to take possession of the resource
-- the calling task is suspended if the resource is busy
public acquire ()
while busy = true do wait;
busy := true;
end acquire;
-- release is called to relinquish possession of the resource
-- if a task is waiting for the resource, it will be awakened
public release ()
busy := false;
notify;
end release;
end Semaphore;
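The same Semaphore monitor can be sketched in Java, where wait and notify may only be called from within synchronized methods; this sketch follows the pseudocode above:

// Java sketch of the Semaphore monitor using condition synchronization.
class BinarySemaphore {
    private boolean busy = false;

    public synchronized void acquire() throws InterruptedException {
        while (busy) wait();     // block while the resource is held
        busy = true;
    }

    public synchronized void release() {
        busy = false;
        notify();                // wake one waiting task, if any
    }
}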
14.3.3 Synchronization of Multiple Readers and Writers Using a Monitor
This section describes a monitor solution to the multiple readers and writers problem.
Because the operations of a monitor are executed mutually exclusively, a mutual exclusion
solution to the sensor repository problem can easily be achieved using monitors, as
described in Section 14.3.1. However, a multiple readers and writers solution cannot use a
monitor solution for the design of the Analog Sensor Repository class because the
readAnalogSensor operation needs to be executed by several readers concurrently.
Instead, the synchronization parts of the multiple readers and writers algorithm are
encapsulated in a monitor, which is then used by a redesigned Analog Sensor
Repository class. Two solutions to this problem are presented, the first providing the
same functionality as the previous section. The second solution provides an added
capability, that of preventing writer starvation.
A ReadWrite monitor is declared that uses two semaphore monitors and provides
four mutually exclusive operations. The semaphores are the readerSemaphore and the
readWriteSemaphore. The four mutually exclusive operations are the startRead,
endRead, startWrite, and endWrite operations. A reader task calls the startRead
operation before it starts reading and the endRead operation after it has finished reading.
A writer task calls the startWrite operation before it starts writing and the endWrite
operation after it has finished writing. A semaphore monitor (Section 14.3.2) provides an
acquire operation – which is called to first get hold of the resource and involves a possible
delay if the resource is initially busy – and a release operation to free up the resource.
The startRead operation has to first acquire the readerSemaphore, increment the
number of readers, and then release the semaphore. If the reader count was zero before
incrementing, then startRead also has to acquire the readWriteSemaphore, which is
acquired by the first reader and released by the last reader. Although monitor operations
are executed mutually exclusively, the readerSemaphore is still needed. This is because
it is possible for the reader to be suspended, waiting for the readWriteSemaphore
semaphore, and hence release the ReadWrite monitor lock. If another reader now
acquires the monitor lock by calling startRead or endRead, it is suspended, waiting for
the readerSemaphore.
monitor ReadWrite
-- Design for multiple readers/single writer access to resource
-- Declare an integer counter for the number of readers.
-- Declare semaphore for accessing count of number of readers
-- Declare a semaphore for mutually exclusive access to buffer
private numberOfReaders : Integer = 0;
readerSemaphore: Semaphore = 1;
readWriteSemaphore: Semaphore = 1;
public startRead ()
-- A reader calls this operation before it starts to read
readerSemaphore.acquire;
if numberOfReaders = 0 then readWriteSemaphore.acquire ();
Increment numberOfReaders;
readerSemaphore.release;
end startRead;
public endRead ()
-- A reader calls this operation after it has finished reading
readerSemaphore.acquire;
Decrement numberOfReaders;
if numberOfReaders = 0 then readWriteSemaphore.release ();
readerSemaphore.release;
end endRead;
public startWrite ()
-- A writer calls this operation before it starts to write
readWriteSemaphore.acquire ();
end startWrite;
public endWrite ()
-- A writer calls this operation after it has finished writing
readWriteSemaphore.release ();
end endWrite;
end ReadWrite;
To take advantage of the ReadWrite monitor, the Analog Sensor Repository is now
redesigned to declare its own private instance of the ReadWrite monitor called
multiReadSingleWrite. The readAnalogSensor operation now calls the
startRead operation of the monitor before reading from the repository and calls the
endRead operation after finishing reading. The updateAnalogSensor operation calls
the startWrite operation of the monitor before updating the repository and calls the
endWrite operation after completing the update.
class AnalogSensorRepository
private multiReadSingleWrite : ReadWrite
public readAnalogSensor (in sensorID, out sensorValue, out upperLimit, out
lowerLimit, out alarmCondition)
multiReadSingleWrite.startRead();
sensorValue := sensorDataRepository (sensorID, value);
upperLimit := sensorDataRepository (sensorID, upLim);
lowerLimit := sensorDataRepository (sensorID, loLim);
alarmCondition := sensorDataRepository (sensorID, alarm);
multiReadSingleWrite.endRead();
end readAnalogSensor;
public updateAnalogSensor (in sensorID, in sensorValue)
-- Critical section for write operation.
multiReadSingleWrite.startWrite();
sensorDataRepository (sensorID, value) := sensorValue;
if sensorValue ≥ sensorDataRepository (sensorID, upLim)
then sensorDataRepository (sensorID, alarm) := high;
elseif sensorValue ≤ sensorDataRepository (sensorID, loLim)
then sensorDataRepository (sensorID, alarm) := low;
else sensorDataRepository (sensorID, alarm) := normal;
end if;
multiReadSingleWrite.endWrite();
end updateAnalogSensor;
end AnalogSensorRepository;
14.3.4 Synchronization of Multiple Readers and Writers without Writer
Starvation
The previous solution to this problem has a limitation in that a busy reader population
could indefinitely prevent a writer from accessing the buffer, a problem referred to as
writer starvation. The following monitor solution prevents this problem by adding a
writerWaitingSemaphore. The startWrite operation must now acquire the
writerWaitingSemaphore before acquiring the readWriteSemaphore. The
startRead operation must acquire (and then release) the writerWaitingSemaphore
before acquiring the readerSemaphore.
The reason for these changes is explained in the following scenario. Assume that
several readers are reading and a writer now attempts to write. It successfully acquires the
writerWaitingSemaphore but is then suspended while trying to acquire the
readWriteSemaphore, which is held by the readers. If a new reader tries to read from
the buffer, it calls startRead and is then suspended, waiting to acquire the
writerWaitingSemaphore. Gradually, the current readers will finish reading until the
last reader reduces the reader count to zero and releases the readWriteSemaphore. The
semaphore is now acquired by the waiting writer, which releases the
writerWaitingSemaphore, thereby allowing a reader or writer to acquire the
semaphore. The monitor solution is given next – compared with the previous solution, the
startRead and startWrite operations have changed.
monitor ReadWrite
-- Prevent writer starvation by adding new semaphore.
-- Design for multiple readers/single writer access to resource.
-- Declare an integer counter for the number of readers.
-- Declare semaphore for accessing count of number of readers
-- Declare a semaphore for mutually exclusive access to buffer
-- Declare a semaphore for writer waiting
private numberOfReaders : Integer = 0;
readerSemaphore: Semaphore = 1;
readWriteSemaphore: Semaphore = 1;
writerWaitingSemaphore: Semaphore = 1;
public startRead ()
-- A reader calls this operation before it starts to read
writerWaitingSemaphore.acquire();
writerWaitingSemaphore.release();
readerSemaphore.acquire;
if numberOfReaders = 0 then readWriteSemaphore.acquire ();
Increment numberOfReaders;
readerSemaphore.release;
end startRead;
public endRead ()
-- A reader calls this operation after it has finished reading
readerSemaphore.acquire;
Decrement numberOfReaders;
if numberOfReaders = 0 then readWriteSemaphore.release ();
readerSemaphore.release;
end endRead;
public startWrite ()
-- A writer calls this operation before it starts to write
writerWaitingSemaphore.acquire();
readWriteSemaphore.acquire ();
writerWaitingSemaphore.release();
end startWrite;
public endWrite ()
-- A writer calls this operation after it has finished writing
readWriteSemaphore.release ();
end endWrite;
end ReadWrite;
No change is required in the design of the Analog Sensor Repository class to take
advantage of this variant.
14.4 Designing Connectors for Inter-Task
Communication
As described in Chapter 3, a multitasking kernel can provide services for inter-task
communication and synchronization. Some concurrent programming languages, such as
Ada and Java, also provide mechanisms for inter-task communication and
synchronization. An alternative approach is to use a connector that encapsulate the details
of inter-task communication and synchronization.
14.4.1 Design of Message Queue Connector
A message queue connector is used to encapsulate the communication mechanism for
asynchronous message communication. The connector is designed as a monitor that
encapsulates a message queue and provides synchronized operations to send a message
and receive a message (see Figure 14.6).
To send a message, the producer calls the send operation and is suspended if the
queue is full (messageCount = maxCount). The producer is reactivated when a slot
becomes available to accept the message. After adding the message to the queue, the
producer continues executing and might send additional messages. To receive a message,
the consumer calls the receive operation and is suspended if the message queue is empty
(messageCount = 0). When a new message arrives, the consumer is activated and given
the message. The consumer is not suspended if there is a message on the queue. It is
assumed that there can be several producers and one consumer. The Pseudocode for the
connector is described next.
monitor MessageQueue
-- Encapsulate message queue that holds max of maxCount messages
-- Monitor operations are executed mutually exclusively;
private messageQ : Queue;
private maxCount : Integer;
private messageCount : Integer = 0;
public send (in message)
while messageCount = maxCount do wait;
place message in messageQ;
Increment messageCount;
if messageCount = 1 then notify;
end send;
public receive (out message)
while messageCount = 0 do wait;
remove message from messageQ;
Decrement messageCount;
if messageCount = maxCount-1 then notify;
end receive;
end MessageQueue;
Figure 14.6. Design of a message queue connector.
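The following Java sketch renders the message queue connector as a monitor. It deliberately departs from the pseudocode in one respect: it calls notifyAll after every state change, a common defensive practice in Java when several producers may be blocked, rather than the pseudocode's conditional notify:

import java.util.ArrayDeque;
import java.util.Deque;

// Bounded message queue connector as a Java monitor.
class MessageQueue<M> {
    private final Deque<M> messageQ = new ArrayDeque<>();
    private final int maxCount;

    MessageQueue(int maxCount) { this.maxCount = maxCount; }

    public synchronized void send(M message) throws InterruptedException {
        while (messageQ.size() == maxCount) wait();   // queue full: producer blocks
        messageQ.addLast(message);
        notifyAll();                                  // wake a waiting consumer
    }

    public synchronized M receive() throws InterruptedException {
        while (messageQ.isEmpty()) wait();            // queue empty: consumer blocks
        M message = messageQ.removeFirst();
        notifyAll();                                  // wake a waiting producer
        return message;
    }
}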
14.4.2 Design of Message Buffer Connector
A message buffer connector is used to encapsulate the communication mechanism for
synchronous message communication without reply. The connector is designed as a
monitor that encapsulates a single message buffer and provides synchronized operations to
send a message and receive a message (see Figure 14.7). Figure 14.7a depicts synchronous
message communication without reply between producer and consumer tasks. Figure
14.7b depicts the Producer and Consumer tasks interacting via a Message Buffer
connector. Figure 14.7c depicts the specification of the Message Buffer connector with
the public send and receive operations and the encapsulated data structure for the
message buffer. The producer task calls the send operation and the consumer task calls
the receive operation in Figure 14.7b.
To send a message, the producer calls the send operation. After it has written the
message into the buffer, the producer is suspended until the consumer receives the
message. The consumer calls the receive operation and is suspended if the message
buffer is empty. It is assumed that there is only one producer and one consumer. The
Pseudocode for the connector is described next.
monitor MessageBuffer
-- Encapsulate a message buffer that holds at most one message.
-- Monitor operations are executed mutually exclusively
private messageBuffer : Buffer;
private messageBufferFull : Boolean = false;
public send (in message)
place message in messageBuffer;
messageBufferFull := true;
notify;
while messageBufferFull = true do wait;
end send;
public receive (out message)
while messageBufferFull = false do wait;
remove message from messageBuffer;
messageBufferFull := false;
notify;
end receive;
end MessageBuffer;
Figure 14.7. Design of a message buffer connector.
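A Java sketch of the message buffer connector follows; as in the pseudocode, it assumes exactly one producer and one consumer:

// Single-slot buffer for synchronous message communication without reply.
// Assumes one producer and one consumer, as in the pseudocode above.
class MessageBuffer<M> {
    private M slot;
    private boolean full = false;

    public synchronized void send(M message) throws InterruptedException {
        slot = message;
        full = true;
        notifyAll();                 // wake the consumer
        while (full) wait();         // producer blocks until message is received
    }

    public synchronized M receive() throws InterruptedException {
        while (!full) wait();        // consumer blocks if the buffer is empty
        M message = slot;
        full = false;
        notifyAll();                 // release the waiting producer
        return message;
    }
}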
14.4.3 Design of Message Buffer and Response Connector
A message buffer and response connector is used to encapsulate the communication
mechanism for synchronous message communication with reply. The connector is
designed as a monitor that encapsulates a single message buffer and a single response
buffer. It provides synchronized operations to send a message, receive a message, and send
a reply (see Figure 14.8). Figure 14.8a depicts synchronous message communication with
reply between producer and consumer tasks. Figure 14.8b depicts the Producer and
Consumer tasks interacting via a Message Buffer & Response connector. Figure
14.8c depicts the specification of the connector with the public send, receive, and
reply operations and the encapsulated data structures for the message and response
buffers.
The producer calls the send message operation (S1 in Figure 14.8b). After it has
written the message into the message buffer, the producer is suspended until the response
is received from the consumer. The consumer calls the receive message operation (R1)
and is suspended if the message buffer is empty. When a message is available, the
consumer processes the message, prepares the response, and calls the reply operation
(R2) to place the response in the response buffer. It is assumed that there is only one
producer and one consumer. The Pseudocode for the connector is described next.
monitor MessageBuffer&Response
-- Encapsulates a message buffer that holds at most one message
-- and a response buffer that holds at most one response.
-- Monitor operations are executed mutually exclusively.
private messageBuffer : Buffer;
private responseBuffer : Buffer;
private messageBufferFull : Boolean = false;
private responseBufferFull : Boolean = false;
public send (in message, out response)
place message in messageBuffer;
messageBufferFull := true;
notify;
while responseBufferFull = false do wait;
remove response from responseBuffer;
responseBufferFull := false;
end send;
public receive (out message)
while messageBufferFull = false do wait;
remove message from messageBuffer;
messageBufferFull := false;
end receive;
public reply (in response)
Place response in responseBuffer;
responseBufferFull := true;
notify;
end reply;
end MessageBuffer&Response;
The connectors for the Microwave Oven Control task are depicted on Figure
14.9b. The Oven Control Message Q encapsulates the queue of incoming messages to
the Microwave Oven Control consumer task, for which there are four producers. In
each case, a producer calls the sendControlRequest operation of the message queue
connector object to insert a message in the connector queue, and the consumer calls the
receiveControlRequest operation to remove a message from the queue. There is also
an Oven Timer Message Q message queue connector object to encapsulate the
asynchronous communication between the Microwave Oven Control producer task
and the Oven Timer consumer task, in which the producer calls the sendTimerRequest
operation of the connector and the consumer calls the receiveTimerRequest operation.
14.5 Task Event Sequencing Logic
For a composite task with several nested objects, a nested coordinator object receives
the task’s incoming messages and then invokes operations provided by other nested
objects. In such cases, the coordinator object executes the task’s event sequencing logic.
14.5.1 Example of Event Sequencing Logic for Sender and Receiver Tasks
The event sequencing logic for a sender task, which sends messages to other tasks, is
given next. The exact form of the send (message) will depend on whether this is a
service provided by the operating system or whether it uses a connector, as described in
the previous section.
loop
Prepare message containing message name (type) and optional message
parameters;
send (message) to receiver;
endloop;
The event sequencing logic for a receiver task, which receives incoming messages from
other tasks, is
loop
receive (message) from sender;
Extract message name and any message parameters from message
case message of
message type 1:
objectA.operationX (optional parameters);
….
message type 2:
objectB.operationY (optional parameters);
…..
endcase;
endloop;
If a message communication connector is used, as described in the previous section, the
send and receive calls take the form aConnector.send (message) and
aConnector.receive (message).
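As an illustration of receiver event sequencing logic in Java, a receiver task can be written as a thread that blocks on the message queue connector sketched earlier and dispatches on the message name. The message names and handler methods below are hypothetical:

// Receiver task: event sequencing loop dispatching on the message name.
class ReceiverTask implements Runnable {
    private final MessageQueue<String> inputQueue;  // connector sketched earlier

    ReceiverTask(MessageQueue<String> inputQueue) { this.inputQueue = inputQueue; }

    @Override
    public void run() {
        try {
            while (true) {
                String message = inputQueue.receive();   // blocks if queue is empty
                switch (message) {
                    case "messageType1" -> handleType1();
                    case "messageType2" -> handleType2();
                    default -> { /* ignore unrecognized messages */ }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();          // cooperative shutdown
        }
    }

    private void handleType1() { /* objectA.operationX(...) in the full design */ }
    private void handleType2() { /* objectB.operationY(...) in the full design */ }
}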
Templates in Pseudocode for the task event sequencing logic for the different kinds of
tasks described in Chapter 13 are given in Appendix C. Examples of task event
sequencing logic for task communication and synchronization are given next, and in the
Microwave Oven Control case study described in Chapter 19.
14.6 Detailed Real-Time Software Design in
Robot and Vision Systems
Consider the following detailed software design examples of task communication and
synchronization in real-time robot and vision systems. Each robot and vision system is
designed as a real-time embedded system. In each robot system, a task controls a robot
arm that performs factory operations such as picking up a part, placing down a part, or
welding two parts together. Each vision system has a task that analyzes images of factory
parts and extracts important properties, such as the type and location of the part. In these
examples, the interaction between tasks is explained in detail by providing the task event
sequencing logic for each task’s behavior.
14.6.1 Example of Event Synchronization between Robot Tasks
The first example is of event synchronization (see Section 13.9.4) between two robot
tasks, in which a pick-and-place robot brings a part to the work location so that a drilling
robot can drill four holes in the part. On completion of the drilling operation, the pick-and-
place robot moves the part away.
The pick-and-place robot moves the part to the work location, moves out of the
collision zone, and then signals the event part Ready, as depicted in Figure 14.11. This
awakens the drilling robot, which moves to the work location and drills the holes. After
completing the drilling operation, it moves out of the collision zone and then signals a
second event, part Completed, which the pick-and-place robot is waiting to receive. After
being awakened, the pick-and-place robot removes the part. Each robot task executes a
loop, because the robots repetitively perform their operations, as described in the task
event sequencing logic below.
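A minimal Java sketch of this event synchronization pattern uses two semaphores initialized to zero as the partReady and partCompleted event signals; the robot motion steps are placeholders:

import java.util.concurrent.Semaphore;

// Event synchronization between the two robot tasks.
// Semaphores initialized to 0 model the partReady / partCompleted events.
class RobotSynchronization {
    static final Semaphore partReady = new Semaphore(0);
    static final Semaphore partCompleted = new Semaphore(0);

    public static void main(String[] args) {
        Thread pickAndPlace = new Thread(() -> {
            try {
                while (true) {
                    // move part to work location, move out of collision zone
                    partReady.release();        // signal (partReady)
                    partCompleted.acquire();    // wait (partCompleted)
                    // remove the part
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread drillingRobot = new Thread(() -> {
            try {
                while (true) {
                    partReady.acquire();        // wait (partReady)
                    // move to work location, drill holes, move out of collision zone
                    partCompleted.release();    // signal (partCompleted)
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        pickAndPlace.start();
        drillingRobot.start();
    }
}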
Figure 14.12. Example of message communication between Vision and Robot Tasks.
14.7 Implementing Concurrent Tasks in Java
In Java, it is possible for an object to encapsulate a thread but also to have operations
(methods in Java) that may be invoked by other threads. These operations do not
necessarily need to be synchronized with the internal thread. In this case, the object has
both active and passive characteristics. In this book, however, we will maintain a
distinction between active and passive objects. Thus, an object is defined as active or
passive, but not both.
14.8 Summary
After structuring the system into tasks in Chapter 13, this chapter has described the
detailed software design. In this step, the internals of composite tasks that contain nested
objects are designed, detailed task synchronization issues are addressed using semaphores
and monitors, connector classes are designed that encapsulate the details of inter-task
communication, and each task’s internal event sequencing logic is defined. Several
examples were given in Pseudocode of the detailed design of task synchronization
mechanisms, connector classes for inter-task communication, and task event sequencing
logic. Detailed software design examples were given of task communication and
synchronization in real-time robot and vision systems. Templates in Pseudocode for the
task event sequencing logic for the different kinds of tasks are given in Appendix C.
Finally, a brief overview was given of implementing concurrent tasks in Java using
threads.
15
Designing Real-Time Software
Product Line Architectures
◈
A software product line (SPL) consists of a family of software systems that have some
common functionality and some variable functionality (Parnas 1979, Clements 2002,
Weiss 1999). Software product line engineering involves developing the requirements,
architecture, and component implementations for a family of systems, from which
products (family members) are derived and configured. The problems of developing
individual software systems are scaled upward when developing software product lines
because of the increased complexity due to variability management. This chapter gives an
overview of designing software product line architectures using the PLUS (Product Line
UML-based Software engineering) method. The topic is covered in considerable detail in
the author’s book on this topic (Gomaa 2005a).
Section 15.1 describes the software process model for SPL Engineering. Section 15.2
presents the problem description for the SPL example used in this chapter. Section 15.3
describes requirements modeling for SPLs, in particular use case modeling and feature
modeling for SPLs. Section 15.4 describes analysis modeling for SPLs, in particular how
variability is handled in static models, dynamic interaction models, and dynamic state
machine models for SPLs. Section 15.5 describes how variability is addressed in design
models of SPLs.
15.1 Software Product Line Engineering
The software process model for SPL Engineering is a highly iterative software process
that eliminates the traditional distinction between software development and maintenance.
Furthermore, because new software systems are outgrowths of existing ones, the process
takes a software product line perspective; it consists of two main processes, as depicted in
Figure 15.1.
Figure 15.1. Software process model for software product line engineering.
15.2 Problem Description of Microwave Oven SPL
The manufacturer of the microwave oven product line is an original equipment
manufacturer with an international market. The microwave oven will form the basis of this
product line, which will offer options from basic to top-of-the-line.
The basic microwave oven system has input buttons for selecting Cooking Time,
Start, and Cancel, as well as a numeric keypad. It also has a display to show the cooking
time left. In addition, the oven has a microwave heating element for cooking the food, a
door sensor to sense when the door is open, and a weight sensor to detect if there is an
object in the oven.
Options available for more advanced ovens are a beeper to indicate when cooking is
finished, a light that is switched on when the door is open and when food is being cooked,
and a turntable that turns during cooking. The microwave oven displays messages to the
user such as prompts and warning messages. Because the oven is to be sold around the
world, it must be able to vary the display language. The default language is English, but
other possible languages are French, Spanish, German, and Italian. The basic oven has a
one-line display; more-advanced ovens can have multi-line displays. Other options include
a time-of-day clock, which needs the multi-line display option.
The top-of-the-line oven has a recipe cooking feature, which needs an analog weight
sensor in place of the basic Boolean weight sensor, the multi-line display feature, and a
multi-power level feature (high, medium, low) in place of the basic on/off power feature.
Vendors can configure their microwave oven systems of choice from a wealth of optional
and alternative features, although feature dependency constraints must be obeyed.
15.3 Requirements Modeling for Software
Product Lines
For single systems, use case modeling (see Chapter 6) is the primary vehicle for
describing software functional requirements. For software product lines, feature modeling
is an additional important part of requirements modeling. The strength of feature modeling
is in differentiating between the functionality provided by the different family members of
the product line in terms of common functionality, optional functionality, and alternative
functionality.
15.3.1 Use Case Modeling for Software Product Lines
The functional requirements of a system are defined in terms of use cases and actors. For a
single system, all use cases are required. In a software product line, only some of the use
cases, which are referred to as kernel use cases, are required by all members of the family.
Other use cases are optional, in that they are required by some but not all members of the
family. Some use cases may be alternative; that is, different versions of the use case are
required by different members of the family. In UML, the use cases are labeled with the
stereotype «kernel», «optional», or «alternative» (Gomaa 2005a). In addition, variability
can be incorporated into a use case by means of variation points, which specify where in
the use case description variability can be introduced (Jacobson 1997, Webber and Gomaa
2004, Gomaa 2005a).
Variation points are provided for both the kernel and optional use cases. One
variation point concerns the display prompt language. Since the Microwave System family
members will be deployed in different countries, the appropriate prompt language can be
selected for a given microwave oven product. The default language is English, with
alternative languages being French, Spanish, Italian, and German. An example of a
variation point is for all steps that involve displaying information to the customer in the
Cook Food use case. Mandatory alternative means that a selection among the alternative
choices must be made.
15.3.2 Feature Modeling for Software Product Lines
Features are used widely in product line engineering but are not typically used in
UML. In order to effectively model product lines, it is necessary to incorporate feature
modeling concepts into UML. Features are incorporated into UML in the PLUS method
using the meta-class concept, in which features are modeled using the UML static
modeling notation and given stereotypes to differentiate between «common feature»,
«optional feature», and «alternative feature» (Gomaa 2005a). Feature dependencies are
depicted as associations with the name requires; for example, the TOD Clock feature
requires the Multi-Line Display feature. Furthermore, feature groups, which place a
constraint on how certain features can be selected for a product line member, such as
mutually exclusive features, are also modeled using meta-classes and given stereotypes,
such as «zero-or-one-of feature group» or «exactly-one-of feature group» (Gomaa 2005a).
A feature group is modeled as an aggregation of features, since a feature is part of a
feature group.
The common features identify the common functionality in the SPL, as specified by
the kernel use case; the optional and alternative features represent the variability in the
product line as specified by the optional use cases and the variation points. The common
feature in the Microwave SPL is the Microwave Oven Kernel, which corresponds to the
core functionality described in the Cook Food kernel use case.
Figure 15.3. Features and feature groups in Microwave Oven feature model.
Some features have prerequisite features, meaning that for the feature to be selected,
the prerequisite feature must also be selected. Some features are alternative features; that
is, one out of a group of alternatives must be chosen. If an alternative is not chosen, then
the default is used. Feature groups, such as Display Unit and Heating Element, use
alternative features to specify alternative I/O devices (both the hardware and software
support) that can be chosen for the oven display and oven heating unit respectively.
In single systems, use cases are used to determine the functional requirements of a
system; they can also serve this purpose in product families. Griss (1998) has pointed out
that the goal of the use case analysis is to get a good understanding of the functional
requirements, whereas the goal of feature analysis is to enable reuse. Use cases and
features complement each other. Thus, optional and alternative use cases are mapped to
optional and alternative features respectively, while use case variation points are also
mapped to features (Gomaa 2005a).
The relationship between use cases and features can be explicitly depicted in a
feature/use case relationship table, as shown in Table 15.1. For each feature, the use case it
relates to is depicted. In the case of a feature derived from a variation point, the variation
point name is listed.
Three features correspond to use cases, and the remaining features correspond to
variation points in the use cases. For example, Microwave Oven Kernel is a common
feature determined from the kernel use case, Cook Food. Light is an optional feature
determined from the Cook Food use case; however, it represents a use case variation
point also called Light. TOD Clock is an optional feature that corresponds to the two
optional time-of-day use cases. Language is an exactly-one-of feature group, which
corresponds to the Language variation point in the use case model. This feature group
consists of the default feature English and the alternative features of Spanish,
French, Italian, or German.
Table 15.1. Feature/Use Case Relationship Table for Microwave Oven SPL
15.4 Analysis Modeling for Software Product Lines
15.4.1 Static Modeling for Software Product Lines
After developing the use case and feature models, the next step is to develop a
structural model of the problem domain (see Chapter 5), from which the product line
software context diagram is developed. This diagram defines the boundary between a
product line system (i.e., any member of the product line) and the external environment
(i.e., the external entities (depicted using SysML blocks) to which members of the product
line have to interface). The product line software context model is depicted on a block
definition diagram (Figure 15.4) and shows the multiplicity of the associations between
the external blocks and the product line system, which is depicted as one aggregate block.
Figure 15.4. Software context diagram for the microwave oven software product line.
Each external block is depicted with three stereotypes: The first stereotype represents
the reuse category, whether the external block is a kernel or optional block in the product
line. The second stereotype represents the role of the external block; for example, Door
Sensor is an external input device. The third stereotype is the SysML notation for
«block», as described in Chapter 5. In this case study, Door Sensor, Weight Sensor,
and Keypad are all external input devices; they are also kernel blocks. Heating
Element and Display are external output devices that are also kernel. Clock is an
external timer that is kernel. However, Beeper, Turntable, and Lamp are external output
devices that are optional. Kernel external blocks have a one-to-one association with the
product line system; optional external blocks have a zero-to-one association with the
product line system.
15.4.2 Dynamic Interaction Modeling for Software Product Lines
Dynamic interaction modeling for software product lines uses an iterative strategy called
evolutionary dynamic analysis to help determine the dynamic impact of each feature on
the software architecture. This results in new components being added or existing
components having to be adapted. The kernel system is a minimal member of the product
line. In some product lines, the kernel system consists of only the kernel objects. For other
product lines, some default objects may be needed in addition to the kernel objects. The
kernel system is developed by considering the kernel use cases, which are required by
every member for the product line. For each kernel use case, an interaction diagram is
developed depicting the objects needed to realize the use case. The kernel system consists
of the integration of all these objects and the classes from which they are instantiated.
The software product line evolution approach starts with the kernel system and
considers the impact of optional and/or alternative features (Gomaa 2005a). This results in
the addition of optional or variant components to the product line architecture. This
analysis is done by considering the variable (optional and alternative) use cases, as well as
any variation points in the kernel or variable use cases. For each optional or alternative use
case, an interaction diagram is developed consisting of new optional or variant objects –
the variant objects are kernel or optional objects that are impacted by the variable
scenarios and must therefore be adapted.
The relationship between features and the classes can be depicted on a feature/class
table, which shows for each feature the classes that realize the feature, as well as the class
reuse category (kernel, optional, or variant) and, in the case of a parameterized class, the
class parameter. This table (see Table 15.2) is developed after the dynamic impact analysis
has been carried out using evolutionary dynamic analysis.
15.4.3 Dynamic State Machine Modeling for Software Product Lines
It is often more effective to design a parameterized state machine, in which there are
feature-dependent states, events, and transitions. Optional transitions are specified by
having an event qualified by a Boolean feature condition, which guards entry into the
state. Optional actions are also guarded by a Boolean feature condition, which is set to
True if the feature is selected and False if the feature is not selected for a given SPL
member.
Examples of feature-dependent events and actions are given for an extract from a
Microwave Oven product line. Minute Plus is an optional microwave oven feature that
cooks food for a minute. In the state machine, Minute Pressed is a feature-dependent
transition guarded by the feature condition minuteplus in Figure 15.6, which is True if
the feature is selected. There are feature-dependent actions, such as Switch On and
Switch Off in Figure 15.6, which are only enabled if the light feature condition is
True, and the Beep action, which is only enabled if the beeper feature condition is True.
Thus, the feature condition is True if the optional feature is selected for a given product
line member, and false if the feature is not selected. The impact of feature interactions can
be modeled very precisely using state machines through the introduction of alternative
states or transitions. Designing parameterized state machines is often more manageable
than designing specialized state machines.
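As an implementation-level illustration only (the book models feature conditions in state machines, not code), feature conditions can be realized as Boolean flags set when a product line member is configured, guarding the feature-dependent transitions and actions. All names below are hypothetical:

// Sketch: feature conditions as booleans fixed at product configuration time.
class MicrowaveControl {
    private final boolean minuteplus;   // feature condition for Minute Plus
    private final boolean light;        // feature condition for Light
    private final boolean beeper;       // feature condition for Beeper

    MicrowaveControl(boolean minuteplus, boolean light, boolean beeper) {
        this.minuteplus = minuteplus;
        this.light = light;
        this.beeper = beeper;
    }

    void onMinutePressed() {
        if (!minuteplus) return;        // transition enabled only if feature selected
        startCookingForOneMinute();
    }

    private void startCookingForOneMinute() {
        if (light) switchOnLight();     // feature-dependent action
        // ... cook for one minute ...
        if (light) switchOffLight();    // feature-dependent action
        if (beeper) beep();             // feature-dependent action
    }

    private void switchOnLight() { }
    private void switchOffLight() { }
    private void beep() { }
}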
15.5 Design Modeling for Software Product Lines
Most software systems and product lines can be based on well-understood overall
software architectures. For example, the client/server software architecture is prevalent in
many software applications. There is the basic client/service architectural pattern, with one
service and many clients. However, there are also many variations on this theme, such as
the multiple client/multiple service architectural patterns and broker patterns (see Chapter
11). Furthermore, with a client/service pattern, services can evolve with the addition of
new services, which are discovered and invoked by clients. New clients can be added that
discover services provided by one or more service providers.
The feature model is the unifying model for relating variability in requirements to
variability in the SPL architecture. For more information on these topics, considerable
detail is provided in the author’s book on designing software product lines with UML
(Gomaa 2005a).
Part III
16
System and Software Quality Attributes
◈
Some quality attributes are actually system quality attributes because both hardware
and software considerations are needed to achieve high quality. These system quality
attributes include scalability, performance, availability, safety, and security. Other quality
attributes are purely software in nature because they rely entirely on the quality of the
software. These software quality attributes include maintainability, modifiability,
testability, traceability, and reusability. This chapter provides an overview of system and
software quality attributes, and discusses how they are supported by the COMET/RTE
software design method.
16.1 Scalability
Scalability is the extent to which the system is capable of growing after its initial
deployment. There are system and software factors to consider in scalability. From a
system perspective, there are issues of adding further hardware to increase the capacity of
the system. In a centralized system, the scope for scalability is limited, such as adding
more memory, more disk capacity, or an additional CPU. A distributed system offers much
more scope for scalability by adding more nodes to the configuration.
From a software perspective, the system needs to be designed in such a way that it is
capable of growth. A distributed component-based software architecture is much more
capable of scaling upward than a centralized design. Components are designed such that
multiple instances of each component can be deployed to different nodes in a distributed
configuration. A light rail control system that supports multiple trains and multiple stations
can have a component-based software design, such that there is one instance of a train
component for each train and one instance of a station component for each station. Such a
software architecture can be deployed to execute in a small town, in a large city, or in a
wide geographical region. A service-oriented architecture can be scaled up by adding more
services or additional instances of existing services. New clients can be added to the
system as needed. Clients can discover new services and take advantage of their offerings.
Fault-tolerant systems have recovery built into them so that the system can recover
from failure automatically. However, such systems are typically very expensive, requiring
such capabilities as triple redundancy and voting systems. Other less expensive solutions
are possible, such as a hot standby, which is a machine ready for usage very soon after the
failure of the system. The hot standby could be for a server in a client/server system. It is
possible to design a distributed system without a single point of failure, such that the
failure of one node results in reduced service with the system operational in a degraded
mode. This is usually preferable to having no service whatever.
From a software design perspective, support for availability necessitates the design of
systems without single points of failure. COMET/RTE supports availability by providing
an approach for designing distributed component-based software architectures that can be
deployed to multiple nodes with distributed control, data, and services, so that the system
does not fail if a single node goes down but can operate in a degraded mode.
For the case study examples, the hot standby could be used for a Banking System,
which is a centralized client/server system in which the Bank Server is a single point of
failure. A hot standby is a backup server that can be rapidly deployed if the main server
goes down. An example of a distributed system without a single hardware point of failure
is the Emergency Monitoring System (Figure 16.2), in which the remote system and
sensor monitoring components, the monitoring and alarm services, and the operator
interaction components can all be replicated. There are several instances of each of the
client components, so if a component goes down, the system can still operate. The services
can be replicated so that there are multiple instances of Monitoring Data Service and
Alarm Service. This is illustrated in the deployment diagram in Figure 16.2. It is
assumed that the network used is the Internet, in which there might be local failures but
not a global failure, so that individual nodes or even regional subnets might be unavailable
at times but other regions would still be operational.
16.4 Safety
A safety-critical system must be designed in such a way that safety-related hazards
are identified during requirements specification and documented as nonfunctional safety
requirements. The software design must then ensure that these hazards will be detected
and that safety mechanisms are designed into the system to avoid undesirable events that
might be caused by these hazards.
16.5 Security
Security is an important consideration in many systems. There are many potential threats
to distributed application systems, such as electronic commerce and banking systems.
There are several textbooks that address computer and network security, including Bishop
(2004) and Pfleeger et al. (2015). Some of the potential threats are described below.
COMET/RTE extends the use case descriptions to allow the description of nonfunctional
requirements, which include security requirements. An example of the extension of use
cases to allow nonfunctional requirements is given in Chapter 6.
These potential threats can be addressed in the following ways for a Banking System,
not all of which can be addressed by software means:
Repudiation – A log must be maintained of all transactions so that a claim that the
transaction or activity did not occur can be verified by analyzing the log.
Fix remaining errors. These are errors that were not detected during testing of the
software prior to deployment.
16.8 Testability
During the Requirements Phase, it is necessary to develop functional (black box) test
cases. These test cases can be developed from the use case model, in particular the use
case descriptions. Because the use case descriptions describe the sequence of user
interactions with the system, they describe the user inputs that must be captured for the
test cases and the expected system output. A test case must be developed for each use case
scenario: one for the main sequence and one for each alternative sequence of the use case.
Using this approach, a test suite can be developed to test the functional requirements of
the system.
During detailed design and coding, in which the internal algorithms for each
component are developed, white box test cases can be developed that test the component
internals using well-known coverage criteria, such as executing every line of code and the
outcome of every decision. By this means, it is possible to develop unit test cases to test
the individual units, such as components.
An example of a black box test case based on the Cook Food use case in the
Microwave Oven System would consist of opening the oven door, placing the food in the
oven, closing the door, entering the cooking time, and pressing the Start button. Initially a
test stub object could be developed, which simulates the customer going through the
earlier-discussed sequence. The system prompts for the cooking time and after the cooking
time has expired, outputs the end of cooking prompt. A test environment could be set up
with an external environment simulator simulating the door and weight sensors inputting
events to the system, and the oven starting and stopping cooking. This would allow the
main sequence of the Cook Food use case as well as all the alternative sequences to be
tested (door opened after cooking has started, customer pressing cancel, pressing Minute
Plus before and after cooking has started, etc.).
16.9 Traceability
Traceability is the extent to which artifacts of each phase can be traced back to products
of previous phases. Requirements traceability is used to ensure that each software
requirement has been designed and implemented. Each requirement is traced to the
software architecture and to the implemented code modules. Requirements traceability
tables are a useful tool during software architecture reviews for analyzing whether the
software architecture has addressed all the software requirements.
As an example of traceability, consider the Cook Food use case from the Microwave
Oven System. This use case is realized in the dynamic interaction model by the Cook
Food communication diagram. The change required by the addition of the prompt
language requirement can be determined by an impact analysis, which reveals that the
prompt object would need to be accessed by the Oven Display Output object prior to
displaying the prompt, as shown in Figure 16.4a. Figure 16.4a shows the original design
with Oven Display Output outputting directly to the display, and Figure 16.4b shows
the modified design with Oven Display Output reading the prompt text from the Oven
Prompts object before outputting to the display. A solution to this problem using product
line concepts is described in Chapter 15.
Figure 16.4. Traceability analysis before and after change to introduce Oven Prompts
object.
16.10 Reusability
Software reusability is the extent to which software is capable of being reused. In
traditional software reuse, a library of reusable code components is developed, such as a
statistical subroutine library. This approach requires the establishment of a library of
reusable components and of an approach for indexing, locating, and distinguishing
between similar components (Jacobson et al. 1997). Problems with this approach include
managing the large number of components that such a reuse library is likely to contain and
distinguishing among similar though not identical components.
When a new design is being developed, the designer is responsible for designing the
software architecture – that is, the overall structure of the program and the overall flow of
control. Having located and selected a reusable component from the library, the designer
must then determine how this component fits into the new architecture.
17
Performance Analysis of Real-Time Software Designs
◈
This chapter describes two approaches for analyzing the performance of a design.
The first approach uses real-time scheduling theory, and the second uses event sequence
analysis. The two approaches are then combined. Both real-time scheduling theory and
event sequence analysis are applied to a design consisting of a set of concurrent tasks.
Section 17.1 provides an introduction to real-time scheduling theory, in particular the rate-
monotonic algorithm and two of its theorems, the utilization bound theorem, and the
completion time theorem. Section 17.2 describes how real-time scheduling theory can be
extended to address aperiodic tasks and task synchronization. Section 17.3 describes the
generalized real-time scheduling theory, which can be applied in cases in which the rate-
monotonic assumptions do not hold. Section 17.4 describes performance analysis of real-
time software designs using event sequence analysis. Section 17.5 then describes how
real-time scheduling theory and event sequence analysis can be combined to analyze the
performance of real-time software designs. Section 17.6 describes advanced real-time
scheduling algorithms, including deadline-monotonic scheduling, dynamic priority
scheduling, and scheduling for multiprocessor systems. Section 17.7 describes
performance analysis of multiprocessor systems, including multicore systems. Finally,
Section 17.8 describes the estimation and measurement of performance parameters.
17.1 Real-Time Scheduling Theory
Real-time scheduling theory addresses the issues of priority-based scheduling of
concurrent tasks with hard deadlines. The theory addresses how to determine whether a
group of tasks, whose individual CPU utilization is known, will meet their deadlines. The
theory assumes a priority preemption scheduling algorithm, as described in Chapter 3.
This section is based on the reports and book on real-time scheduling produced at the
Software Engineering Institute (Sha and Goodenough 1990, SEI 1993), which should be
referenced for more information on this topic.
As real-time scheduling theory has evolved, it has gradually been applied to more
complicated scheduling problems. Problems that have been addressed include scheduling
independent periodic tasks, scheduling in situations in which there are both periodic and
aperiodic (i.e., event driven and demand driven) tasks, and scheduling in cases in which
task synchronization is required.
17.1.1 Scheduling Periodic Tasks
Initially, real-time scheduling algorithms were developed for independent periodic tasks –
that is, periodic tasks that do not communicate or synchronize with each other (Liu and
Layland 1973). Since then, the theory has been developed considerably so it can now be
applied to other practical problems, as will be illustrated in the examples. In this chapter, it
is necessary to start with the basic rate-monotonic theory for independent periodic tasks
for us to understand how it has been extended to address more complex situations.
A periodic task has a period T (the frequency with which it executes) and an
execution time C (the CPU time required during the period). Its CPU utilization U is the
ratio C⁄T. A task is schedulable if all its deadlines are met, that is, if the task completes its
execution before its period elapses. A group of tasks is considered schedulable if each task
can meet its deadlines.
For a set of independent periodic tasks, the rate-monotonic algorithm assigns each
task a fixed priority based on its period, such that the shorter the period of a task, the
higher its priority. Consider three tasks ta, tb, and tc, with periods 10, 20, and 30,
respectively. The highest priority is given to ta, the task with the shortest period; the
medium priority is given to task tb; and the lowest priority is given to tc, the task with the
longest period.
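Expressed as code, this assignment amounts to a sort by period. The following minimal Python sketch (using the ta, tb, and tc periods from the example above) numbers priorities from 1, the highest:

# Rate-monotonic priority assignment: sort by period, shortest first.
# Task names and periods are those of the ta/tb/tc example above.
tasks = {"ta": 10, "tb": 20, "tc": 30}  # task -> period T

rm_order = sorted(tasks.items(), key=lambda kv: kv[1])
for priority, (name, period) in enumerate(rm_order, start=1):
    print(f"{name}: period = {period}, rate-monotonic priority = {priority}")
# ta gets priority 1 (highest), tb priority 2, tc priority 3 (lowest)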
In Liu and Layland (1973), it is formally proven that for a set of independent periodic
real-time tasks, the rate-monotonic priority assignment is optimal among all schemes that
assign unique and fixed priorities to individual tasks, when the tasks have to complete
their execution by their respective periods.
17.1.2 Utilization Bound Theorem
According to rate-monotonic scheduling (RMS) theory, a group of n independent periodic tasks can be shown to always meet their deadlines, provided the sum of the ratios Ci⁄Ti over all n tasks is below an upper bound on overall CPU utilization.
The Utilization Bound Theorem (Liu and Layland 1973) states that a set of n independent periodic tasks scheduled by the rate-monotonic algorithm will always meet its deadlines if

$$\sum_{i=1}^{n} \frac{C_i}{T_i} \le U(n) = n\left(2^{1/n} - 1\right)$$

where Ci and Ti are the execution time and period of task ti respectively.
The upper bound U(n) converges to 69 percent (ln 2) as the number of tasks
approaches infinity. The utilization bounds for up to nine tasks, according to the
Utilization Bound Theorem, are given in Table 17.1. This is a worst-case approximation,
and for a randomly chosen group of tasks, Lehoczky, Sha, and Ding (1989) show that the
likely upper bound is 88 percent. For tasks with harmonic periods – that is, with periods
that are multiples of each other – the upper bound is even higher and could reach 100
percent if all the tasks have harmonic periods.
Table 17.1. Utilization Bounds for the Rate-Monotonic Algorithm

Number of tasks n        Utilization bound U(n)
1                        1.000
2                        0.828
3                        0.779
4                        0.756
5                        0.743
6                        0.734
7                        0.728
8                        0.724
9                        0.720
Infinity                 ln 2 (0.69)
17.1.3 Example of Applying Utilization Bound Theorem
Consider three tasks, t1, t2, and t3, with the following characteristics, where all times are in msec and the utilization Ui = Ci⁄Ti:
Task t1: C1 = 20; T1 = 100; U1 = 0.2
Task t2: C2 = 30; T2 = 150; U2 = 0.2
Task t3: C3 = 60; T3 = 200; U3 = 0.3
It is assumed that the context-switching overhead, once at the start of the task’s execution and once at the end of its execution, is included in the CPU times.
The total utilization of the three tasks is 0.7, which is below 0.779, the Utilization Bound Theorem’s upper bound for three tasks. Thus, the three tasks can meet their deadlines in all cases.
However, consider that the task t3’s characteristics are instead as follows:
Task t3: C3 = 90; T3 = 200; U3 = 0.45
In this case, the total utilization of the three tasks is 0.85, which is higher than 0.779, the
Utilization Bound Theorem’s upper bound for three tasks. Thus, the Utilization Bound
Theorem indicates that the tasks may not meet their deadlines. Next, a check is made to
determine whether the first two tasks can meet their deadlines.
Given that the rate-monotonic algorithm is stable, the first two tasks can be checked
by using the Utilization Bound Theorem. The utilization of these two tasks is 0.4, well
below the Utilization Bound Theorem’s upper bound for two tasks of 0.828. Thus, the first
two tasks always meet their deadlines. Given that the Utilization Bound Theorem is a
pessimistic theorem, a further check can be made to determine whether Task t3 can meet
its deadlines by applying the more exact Completion Time Theorem.
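Both utilization checks above can be reproduced with a short script. The following Python sketch implements the Utilization Bound Theorem for a list of (C, T) pairs, using the task values of this example:

from math import log

def utilization_bound(n):
    # U(n) = n * (2**(1/n) - 1) from the Utilization Bound Theorem
    return n * (2 ** (1.0 / n) - 1)

def check_bound(tasks):
    # tasks: list of (C, T) pairs; returns total utilization, bound, verdict
    total = sum(c / t for c, t in tasks)
    bound = utilization_bound(len(tasks))
    return total, bound, total <= bound

print(check_bound([(20, 100), (30, 150), (60, 200)]))  # 0.70 <= 0.779: schedulable
print(check_bound([(20, 100), (30, 150), (90, 200)]))  # 0.85 > 0.779: bound test fails,
                                                       # so apply the Completion Time Theorem
print(utilization_bound(9), log(2))                    # 0.720 vs the ln 2 limit (0.693)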
17.1.4 Completion Time Theorem
If a set of tasks have a utilization greater than the Utilization Bound Theorem’s upper
bound, the Completion Time Theorem, which gives a more exact schedulability criterion
(Lehoczky, Sha, and Ding 1989), can be checked. For a set of independent periodic tasks,
the Completion Time Theorem provides an exact determination of whether the tasks are
schedulable. The theorem assumes a worst case of all the periodic tasks ready to execute
at the same time, which is sometimes referred to as the critical instant. It has been shown
that in this worst case, if a task completes execution before the end of its first period, it
will never miss a deadline (Liu and Layland 1973; Lehoczky, Sha, and Ding 1989). The
Completion Time Theorem therefore checks whether each task can complete execution
before the end of its first period.
For a set of independent periodic tasks, if each task meets its first deadline when all
tasks are started at the same time, the deadlines will be met for any combination of
start times.
To do this, it is necessary to check the end of the first period of a given task ti, as well as the ends of all periods of higher priority tasks within the interval [0, Ti]. Following the rate-
monotonic theory, these tasks will have shorter periods than ti. These periods are referred
to as scheduling points. Task ti will execute once for a total CPU amount of Ci during its
period Ti. However, higher priority tasks will execute more often and can preempt ti at
least once. It is therefore necessary to consider the CPU time used up by the higher
priority tasks as well.
The Completion Time Theorem can be illustrated graphically with a timing diagram. A timing diagram, based on the UML sequence diagram, is a time-annotated diagram that explicitly depicts the passage of time in a time-ordered execution sequence of a group of concurrent tasks. See Section 2.14 for more details of the timing diagram.
17.1.5 Example of Applying Completion Time Theorem
Consider the example described in Section 17.1.3 of three tasks with the following characteristics (all times in msec):
Task t1: C1 = 20; T1 = 100; U1 = 0.2
Task t2: C2 = 30; T2 = 150; U2 = 0.2
Task t3: C3 = 90; T3 = 200; U3 = 0.45
The execution of the three tasks is illustrated by the timing diagram shown in Figure 17.1.
The tasks are shown as active throughout, with the shaded portions identifying when tasks
are executing. Because there is one CPU in this example, only one task can execute at any
one time.
Figure 17.1. Timing diagram for tasks executing on a single-processor system.
Given the worst case of the three tasks being ready to execute at the same time, t1
executes first because it has the shortest period and hence the highest priority. It completes
after 20 msec, after which the task t2 executes for 30 msec. On completion of t2, t3
executes. At the end of the first scheduling point, T1 = 100, which corresponds to t1’s
deadline; t1 has already completed execution and thus met its deadline. Task t2 has also
completed execution and easily met its deadline of 150 msec, and t3 has executed for 50
msec out of the necessary 90.
At the start of task t1’s second period, t3 is preempted by task t1. After executing for
20 msec, t1 completes and relinquishes the CPU to task t3 again. Task t3 executes until the
end of period T2 (150 msec), which represents the second scheduling point due to t2’s
deadline. Because t2 completed before T1 (which is less than T2) elapsed, it easily met its
deadline. At this time, t3 has used up 80 msec out of the necessary 90.
Task t3 is preempted by task t2 at the start of t2’s second period. After executing for
30 msec, t2 completes, relinquishing the CPU to task t3 again. Task t3 executes for another
10 msec, at which time it has used up all its CPU time of 90 msec, thereby completing
before its deadline. Figure 17.1 shows the third scheduling point, which is both the end of
t1’s second period (2T1 = 200) and the end of t3’s first period (T3 = 200). Figure 17.1 also
shows that each of the three tasks completes execution before the end of its first period,
and thus they successfully meet their deadlines.
Figure 17.1 shows that the CPU is idle for 10 msec before the start of t1’s third period
(also the start of t3‘s second period). It should be noted that a total CPU time of 190 msec
was used up over the 200 msec period, giving a CPU utilization for this 200 msec period
of 0.95, although the overall utilization is 0.85. After an elapsed time equal to the least
common multiple of the three periods (600 msec in this example) the utilization averages
out to 0.85.
17.1.6 Mathematical Formulation of Completion Time Theorem
The Completion Time Theorem for single-processor systems can be expressed mathematically in Theorem 3 (Sha and Goodenough 1990): a set of independent periodic tasks scheduled by the rate-monotonic algorithm will meet its deadlines, for all task phasings, if for every task ti

$$\min_{(k,\,p)\,\in\,R_i} \; \frac{1}{p\,T_k}\sum_{j=1}^{i} C_j \left\lceil \frac{p\,T_k}{T_j} \right\rceil \;\le\; 1$$

where Cj and Tj are the execution time and period of task tj respectively and Ri = {(k, p) | 1 ⩽ k ⩽ i, p = 1, …, ⌊Ti/Tk⌋}.
In the formula, ti denotes the task to be checked, and tk denotes each of the higher
priority tasks that impact the completion time of task ti. For a given task ti and a given task
tk, each value of p represents a scheduling point of task tk. At each scheduling point, it is
necessary to consider task ti’s CPU time Ci once, as well as the CPU time used by the
higher priority tasks. Hence, you can determine whether ti can complete its execution by
that scheduling point.
Consider Theorem 3 applied to the three tasks, which were illustrated with the timing
diagram in Figure 17.1. The timing diagram is a graphical representation of what Theorem
3 computes. Again, the worst case is considered of the three tasks being ready to execute
at the same time. The inequality for the first scheduling point, T1 = 100, is given from Theorem 3:

C1 + C2 + C3 ⩽ T1, where 20 + 30 + 90 = 140 > 100

For this inequality to be satisfied, all three tasks would need to complete execution within the first task t1’s period T1. This is not the case because before t3 completes, it is preempted by t1 at the start of t1’s second period.
The inequality for the second scheduling point, T2 = 150, is given from Theorem 3:

2C1 + C2 + C3 ⩽ T2, where 40 + 30 + 90 = 160 > 150

For this inequality to be satisfied, task t1 would need to complete execution twice and tasks t2 and t3 would each need to complete execution once within the second task t2’s period T2. This is not the case, because t3 is preempted by task t2 at the start of t2’s second period.
The inequality for the third scheduling point, which is both the end of t1’s second period (2T1 = 200) and the end of t3’s first period (T3 = 200), is given from Theorem 3:

2C1 + 2C2 + C3 ⩽ 2T1, where 40 + 60 + 90 = 190 ⩽ 200

This time the inequality is satisfied and all three tasks meet their deadlines. A task is schedulable provided its inequality is satisfied at one or more of its scheduling points; here the third scheduling point suffices for all three tasks.
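The scheduling-point test of Theorem 3 can be implemented directly. In the following Python sketch, the task list is assumed to be in rate-monotonic order (shortest period first); each task is checked against all of its scheduling points, as in the worked inequalities above:

from math import ceil

def rm_schedulable(tasks):
    # tasks: list of (C, T) pairs in rate-monotonic order (shortest period first).
    # Task i meets its deadline if its workload fits by at least one scheduling
    # point p*Tk, for 1 <= k <= i and p = 1 .. floor(Ti / Tk).
    for i, (Ci, Ti) in enumerate(tasks):
        points = sorted({p * tasks[k][1]
                         for k in range(i + 1)
                         for p in range(1, Ti // tasks[k][1] + 1)})
        if not any(sum(C * ceil(t / T) for C, T in tasks[:i + 1]) <= t
                   for t in points):
            return False  # task i misses all of its scheduling points
    return True

# The three tasks of Section 17.1.5: schedulable at the 200 msec point.
print(rm_schedulable([(20, 100), (30, 150), (90, 200)]))  # True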
17.2 Real-Time Scheduling for Aperiodic Tasks
and Task Synchronization
Real-time scheduling theory can be extended to address aperiodic tasks, which do not
execute periodically, and to situations in which task synchronization is needed, as
described in this section.
17.2.1 Scheduling Periodic and Aperiodic Tasks
To address aperiodic tasks as well as periodic tasks, the rate-monotonic theory must be
extended. An aperiodic task is assumed to arrive randomly and execute once within some
period Ta, which represents the minimum inter-arrival time of the event that activates the
task. The CPU time Ca used by the aperiodic task to process the event is reserved as a
ticket of value Ca for each period Ta. When the event arrives, the aperiodic task is
activated, claims its ticket, and consumes up to Ca units of CPU time. If the task is not
activated during the period Ta, the ticket is discarded. Thus, based on these assumptions,
the CPU utilization of the aperiodic task is Ca⁄Ta. However, this represents the worst-case
CPU utilization because, in general, reserved tickets are not always claimed.
If there are many aperiodic tasks in the application, the sporadic server algorithm
(Sprunt, Lehoczky, and Sha 1989) can be used. From a schedulability analysis viewpoint,
an aperiodic task (referred to as the sporadic server) is equivalent to a periodic task whose
period is equal to the minimum inter-arrival time of the events that activate the aperiodic
task. Hence Ta, the minimum inter-arrival time for an aperiodic task ta, can be considered
the period of an equivalent periodic task. Each aperiodic task is also allocated a budget of
Ca units of CPU time, which can be used up at any time during its equivalent period Ta. In
this way, aperiodic tasks can be placed at different priority levels according to their
equivalent periods and treated as periodic tasks.
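As a minimal sketch of this equivalence (the task names, budgets, and inter-arrival times below are illustrative placeholders, not values from the text), aperiodic tasks can be merged with the periodic tasks and assigned rate-monotonic priority levels by their equivalent periods:

# (name, C, T): for an aperiodic task, C is its budget Ca and T its minimum
# inter-arrival time Ta, treated as the period of an equivalent periodic task.
periodic = [("t_p1", 20, 100), ("t_p2", 30, 150)]
aperiodic = [("t_a1", 5, 80), ("t_a2", 10, 400)]

for priority, (name, c, t) in enumerate(
        sorted(periodic + aperiodic, key=lambda task: task[2]), start=1):
    print(f"{name}: priority {priority}, worst-case utilization {c / t:.3f}")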
17.2.2 Scheduling with Task Synchronization
Real-time scheduling theory has also been extended to address task synchronization. The
problem here is that a task that enters a critical section can block other, higher priority
tasks that wish to enter the critical section. The term priority inversion is used to refer to
the case where a low priority task prevents a higher priority task from executing, typically
by acquiring a resource needed by the latter.
Unbounded priority inversion can occur because the lower priority task, while in its
critical section, could itself be blocked by other medium priority tasks, thereby prolonging
the total delay experienced by the higher priority task. One solution to this problem is to
prevent preemption of tasks while in their critical sections. This is acceptable only if tasks
have very short critical sections. For long critical sections, lower priority tasks could block
higher priority tasks that do need to access the shared resource.
The priority ceiling protocol (Sha and Goodenough 1990) avoids mutual deadlock
and provides bounded priority inversion; that is, one lower priority task, at most, can block
a higher priority task. Only the simplest case of one critical section is considered here.
Adjustable priorities are used to prevent lower priority tasks from holding up higher
priority tasks for an arbitrarily long time. While a low priority task tl is in its critical
section, higher priority tasks can become blocked by it because they wish to acquire the
same resource. If that happens, tl’s priority is increased to the highest priority of all the
tasks blocked by it. The goal is to speed up the execution of the lower priority task so
blocking time for higher priority tasks is reduced.
The priority ceiling P of a binary semaphore S is the highest priority of all tasks that
may acquire the semaphore. Thus, a low priority task that acquires S can have its priority
increased up to P, depending on what higher priority tasks it blocks.
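Computing the ceilings is a one-line aggregation. The following Python sketch derives each semaphore's priority ceiling from a table of which tasks may acquire it; the task set and semaphore usage are hypothetical, and lower numbers denote higher priorities:

task_priority = {"t1": 1, "t2": 2, "t3": 3}             # 1 = highest priority
may_acquire = {"s1": ["t1", "t3"], "s2": ["t2", "t3"]}  # semaphore -> tasks

# Ceiling of each semaphore = highest priority among its potential acquirers.
ceilings = {s: min(task_priority[t] for t in users)
            for s, users in may_acquire.items()}
print(ceilings)  # {'s1': 1, 's2': 2}: while t3 holds s1 and blocks t1,
                 # t3's priority can be raised up to 1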
Another case that could occur is deadlock, in which two tasks each need to acquire
two resources before they can complete. If each task acquires one resource, neither will be
able to complete, because each one is waiting for the other to release its resource – a
deadlock situation. The priority ceiling protocol overcomes this problem (Sha and
Goodenough 1990).
17.3 Generalized Real-Time Scheduling Theory
The generalized real-time scheduling theory applies in cases in which the rate-monotonic assumptions do not hold, so that a task may need to be assigned a priority that differs from its rate-monotonic priority. One such case often happens when there are aperiodic tasks. As discussed in Section
17.2.1, aperiodic tasks can be treated as periodic tasks, with the worst-case inter-arrival
time considered the equivalent periodic task’s period. Following the rate-monotonic
scheduling algorithm, if the aperiodic task has a longer period than a periodic task, it
should execute at a lower priority than the periodic task. However, if the aperiodic task is
interrupt-driven, it will need to execute as soon as the interrupt arrives, even if its worst-
case inter-arrival time, and hence equivalent period, is longer than that of the periodic
task.
17.3.1 Priority Inversion
The term priority inversion is given to any case in which a task cannot execute because it
is blocked by a lower priority task. In the case of rate-monotonic priority inversion, the
term “priority” refers to the rate-monotonic priority; that is, the priority assigned to a task
based entirely on the length of its period and not on its relative importance. A task may be
assigned an actual priority that is different from the rate-monotonic priority. Rate-
monotonic priority inversion refers to a task A preempted by a higher priority task B,
when in fact task B’s rate-monotonic priority is lower than A’s (i.e., B’s period is longer
than A’s).
17.3.2 Generalized Utilization Bound Theorem
Consider a task ti with a period Ti during which it consumes Ci units of CPU time.
The extensions to Theorems 1, 2, and 3 mean it is necessary to consider explicitly each
task ti to determine whether it can meet its first deadline. In particular, four factors must be
considered for each task:
a. Preemption time by higher priority tasks with periods less than Ti. These tasks
can preempt ti many times. Call this set Hn and let there be j tasks in this set. Let Cj
be the CPU time for task j and Tj the period of task j, where Tj< Ti, the period of task
ti. The utilization of a task j in the Hn set is given by Cj⁄Tj.
b. Execution time for the task ti. Task ti executes once during its period Ti and
consumes Ci units of CPU time.
c. Preemption by higher priority tasks with longer periods. These are tasks with
non-rate-monotonic priorities. They can only preempt ti once because they have
longer periods than ti. Call this set H1 and let there be k tasks in this set. Let the CPU
time used by a task in this set be Ck. The worst-case utilization of a task k in the H1
set is given by Ck⁄Ti, because this means k preempts ti and uses up all its CPU time
Ck during the period Ti.
d. Blocking time by lower priority tasks. A lower priority task that holds a shared resource can block ti. Based on the priority ceiling protocol, at most one lower priority task can actually block ti; let Bi be this worst-case blocking time. The blocking utilization during the period Ti is given by Bi⁄Ti.
The Generalized Utilization Bound Theorem combines these four factors into a utilization bound Ui for each task ti:

$$U_i \;=\; \sum_{j \in H_n} \frac{C_j}{T_j} \;+\; \frac{C_i}{T_i} \;+\; \frac{B_i}{T_i} \;+\; \sum_{k \in H_1} \frac{C_k}{T_i}$$

Ui is the utilization bound during a period Ti for task ti. The first term in the Generalized Utilization Bound Theorem is the total preemption utilization by higher priority tasks with periods of less than ti. The second term is the CPU utilization by task ti. The third term is the worst-case blocking utilization experienced by ti. The fourth term is the total preemption utilization by higher priority tasks with longer periods than ti.
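The four terms translate directly into a small function. The following Python sketch computes Ui as defined above; the demonstration values correspond to task t1 of the example in Section 17.3.5, which is preempted once by the interrupt-driven task ta and blocked by t3 for up to 30 msec:

def generalized_utilization(Ci, Ti, Hn, H1, Bi):
    # Hn: (Cj, Tj) pairs of higher priority tasks with shorter periods
    # H1: Ck values of higher priority tasks with longer periods
    # Bi: worst-case blocking time by a lower priority task
    preempt_shorter = sum(cj / tj for cj, tj in Hn)  # first term
    execution = Ci / Ti                              # second term
    blocking = Bi / Ti                               # third term
    preempt_longer = sum(ck / Ti for ck in H1)       # fourth term
    return preempt_shorter + execution + blocking + preempt_longer

print(generalized_utilization(Ci=20, Ti=100, Hn=[], H1=[4], Bi=30))  # 0.54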
At design time, the designer also has the freedom to choose the priorities to be assigned to the tasks. In general, wherever possible, priorities should be
assigned according to the rate-monotonic theory. This is most easily applied to the
periodic tasks. Estimate the worst-case inter-arrival times for the aperiodic tasks and
attempt to assign the rate-monotonic priorities to these tasks. Interrupt-driven tasks will
often need to be given the highest priorities to allow them to quickly service interrupts.
This means that an interrupt-driven task may need to be allocated a priority that is higher
than its rate-monotonic priority. If two tasks have the same period and hence the same
rate-monotonic priority, it is up to the designer to resolve the tie. In general, assign the
higher priority to the task that is more important from an application perspective.
The Generalized Utilization Bound Theorem described in this chapter can be applied
to analyzing the performance of software designs executing on a single-processor system.
As described previously, for time-critical tasks that miss their deadlines according to the
Utilization Bound Theorem, the Generalized Completion Time Theorem can be applied
for a more precise analysis.
17.3.5 Example of Applying Generalized Utilization Bound Theorem
As an example of applying the generalized real-time scheduling theory with the
Generalized Utilization Bound Theorem (Section 17.3.2), consider the following case.
There are four tasks, of which two are periodic and two are aperiodic. One of the aperiodic
tasks, ta, is interrupt-driven and must execute within 200 msec of the arrival of its interrupt
or data will be lost. The other aperiodic task, t2, has a worst-case inter-arrival time of T2,
which is taken to be the period of the equivalent periodic task. The detailed characteristics
are as follows, where all times are in msec and the utilization Ui = Ci⁄Ti:
Interrupt-driven aperiodic task ta: Ca = 4; Ta = 200; Ua = 0.02
Periodic task t1: C1 = 20; T1 = 100; U1 = 0.2
Aperiodic task t2: C2 = 15; T2 = 150; U2 = 0.1
Periodic task t3: C3 = 30; T3 = 300; U3 = 0.1
In addition, t1, t2, and t3 all access the same data store, which is protected by a semaphore
s. It is assumed that the context-switching overhead, once at the start of a task’s execution
and once at the end of its execution, is included in the CPU times.
The overall CPU utilization is 0.42, which is below the worst-case utilization bound
of 0.69. However, it is necessary to investigate each task individually because rate-
monotonic priorities have not been assigned. First consider the interrupt-driven task ta.
Task ta is the highest priority task, which always gets the CPU when it needs it. Its
utilization is only 0.02, so it will have no difficulty meeting its deadline.
Next consider the task t1, which executes for 20 msec during its period T1 of duration
100 msec. Applying the Generalized Utilization Bound Theorem, it is necessary to
consider the following four factors:
a. Preemption time by higher priority tasks with periods less than T1. There are no tasks with periods less than T1.
b. Execution time for the task t1. The CPU utilization of t1 during its period is U1 = 0.2.
c. Preemption by higher priority tasks with longer periods. The interrupt-driven task ta has a longer period but a higher priority, so it can preempt t1 once. Worst-case preemption utilization during the period T1 = Ca⁄T1 = 4⁄100 = 0.04.
d. Blocking time by lower priority tasks. Both t2 and t3 can potentially block t1. Based on the priority ceiling algorithm, at most, one lower priority task can actually block t1. The worst case is t3, because it has a longer CPU time of 30 msec. Blocking utilization during the period T1 = B3⁄T1 = 30⁄100 = 0.3.
The utilization bound for t1 is thus U1 = 0 + 0.2 + 0.04 + 0.3 = 0.54, which is below the worst-case bound of 0.69; thus t1 will meet its deadlines.
Next consider task t2, which executes for 15 msec during its period T2 of duration
150 msec. Again, applying the Generalized Utilization Bound Theorem, it is necessary
to consider the following four factors:
a. Preemption time by higher priority tasks with periods less than T2. Only one task, t1, has a period less than T2. Its preemption utilization during the period T2 = U1 = 0.2.
b. Execution time for the task t2. The CPU utilization of t2 during its period is U2 = 0.1.
c. Preemption by higher priority tasks with longer periods. The interrupt-driven task ta can preempt t2 once. Worst-case preemption utilization during the period T2 = Ca⁄T2 = 4⁄150 = 0.03.
d. Blocking time by lower priority tasks. The task t3 can block t2. In the worst case, it blocks t2 for its total CPU time of 30 msec. Blocking utilization during the period T2 = B3⁄T2 = 30⁄150 = 0.2.
The utilization bound for t2 is thus U2 = 0.2 + 0.1 + 0.03 + 0.2 = 0.53, which is below the worst-case bound of 0.69; thus t2 will meet its deadlines.
Finally, consider task t3, which executes for 30 msec during its period T3 of duration
300 msec. Once again, applying the Generalized Utilization Bound Theorem, it is
necessary to consider the following four factors:
a. Preemption time by higher priority tasks with periods less than T3. All three higher priority tasks fall into this category, so total preemption utilization = U1 + U2 + Ua = 0.2 + 0.1 + 0.02 = 0.32.
b. Execution time for the task t3. The CPU utilization of t3 during its period is U3 = 0.1.
c. Preemption by higher priority tasks with longer periods. No tasks fall into this category.
d. Blocking time by lower priority tasks. No tasks fall into this category.
The utilization bound for t3 is thus U3 = 0.32 + 0.1 = 0.42, which is below the worst-case bound of 0.69; thus t3 will meet its deadlines.
Figure 17.2. Timing diagram for tasks executing on a single-processor system with
mutual exclusion.
17.4 Performance Analysis Using Event Sequence
Analysis
During the requirements phase of the project, the system’s required response times to
external events are specified. After task structuring, a first attempt at allocating time
budgets to the concurrent tasks in the system can be made. Event sequence analysis is
used to determine the sequence of tasks that need to be executed to service a given
external event. The first task in an event sequence waits for the event that initiates the
sequence (such as an external event) while the other tasks in the event sequence execute in
a strict sequence because each task is activated by a message sent by its predecessor. It is
also possible for an event sequence to divide into more than one event sequence, if a given
task sends messages to more than one waiting task. A timing diagram is used to depict
the sequence of internal events and tasks activated after the arrival of the external event.
The approach is described next.
Consider an external event. Determine which I/O task is activated by this event and
then determine the sequence of internal events that follow. This necessitates identifying
the tasks that are activated and the I/O tasks that generate the system response to the
external event. Estimate the CPU time for each task. Estimate the CPU overhead, which
consists of context-switching overhead, interrupt-handling overhead, and inter-task
communication and synchronization overhead. It is also necessary to consider any other
tasks that execute during this period. The sum of the CPU times for the tasks that
participate in the event sequence, plus any additional tasks that execute, plus CPU
overhead, must be less than or equal to the specified system response time. If there is
some uncertainty over the CPU time for each task, allocate a worst-case upper bound.
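The check described above reduces to a sum against a budget. The following Python sketch totals the CPU times of the tasks in an event sequence together with context-switching and message communication overheads and compares the result with the required response time; all names and numbers are illustrative placeholders:

def event_sequence_time(cpu_times, cx, cm):
    # Each task is context-switched in once; every task except the last
    # sends one message to its successor in the sequence.
    n = len(cpu_times)
    return sum(cpu_times) + n * cx + (n - 1) * cm

required_response = 200.0                              # msec
total = event_sequence_time([20, 15, 30], cx=0.5, cm=1.0)
print(total, total <= required_response)               # 68.5 True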
An example of applying the event sequence analysis approach is given next. A more
detailed example is given in Chapter 18.
17.4.1 Example of Performance Analysis Using Event Sequence Analysis
For an example of applying the event sequence analysis approach, consider four tasks with
the same CPU times and periods as those described in Section 17.3.5. However, this time
consider the situation where three of these tasks are involved in an event sequence in
which the tasks execute in the order t1, t2, and t3, such that task t1 is awakened by an
external event, and tasks t2 and t3 each wait for a message from their predecessor task in
the event sequence. As before, the priority assignment is ta highest, followed respectively
by the tasks t1, t2, and t3. The execution of these four tasks on a single-processor system is
depicted on Figure 17.3 with the worst-case scenario of all tasks being ready to execute at
the same time.
In this situation, the highest priority task ta executes first for 4 msec, followed by the
task with the next highest priority t1 for its execution time of 20 msec. Task t1 sends a
message to task t2 just before completing execution. Task t2 is then unblocked and starts
executing. However the lower priority task t3 remains blocked waiting for a message from
t2. When task t2 sends the message to task t3, t3 is unblocked, and when t2 completes
execution, t3 starts executing to completion.
Figure 17.3. Timing diagram for tasks in an event sequence executing on a single-
processor system.
17.5 Performance Analysis Using Real-Time
Scheduling Theory and Event Sequence Analysis
This section describes how the real-time scheduling theory can be combined with the
event sequence analysis approach. Instead of considering individual tasks, it is necessary
to consider all the tasks in an event sequence. The task activated by the external event
executes first and then initiates a series of internal events, resulting in activation and
execution of other internal tasks. It is necessary to determine whether all the tasks in the
event sequence can be executed before the deadline.
Initially attempt to allocate all the tasks in the event sequence the same priority.
These tasks can then collectively be considered one equivalent task from a real-time
scheduling viewpoint. This equivalent task has a CPU time equal to the sum of the CPU
times of the tasks in the event sequence, plus context-switching overhead, plus message
communication or event synchronization overhead. The worst-case inter-arrival time of
the external event that initiates the event sequence is then made the period of this
equivalent task.
To determine whether the equivalent task can meet its deadline, it is necessary to
apply the real-time scheduling theorems. In particular, it is necessary to consider
preemption by higher priority tasks, blocking by lower priority tasks, and execution time
of this equivalent task. An example of combining event sequence analysis with real-time
scheduling using the equivalent task approach is given in Chapter 18, for the Light Rail
Control System.
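As a minimal sketch of the equivalent-task construction (the numbers are illustrative, not taken from the case study), the following Python fragment builds the equivalent task and folds it into the utilization total for the remaining tasks:

def equivalent_task(cpu_times, cx, cm, inter_arrival):
    # Ce: member CPU times plus n context switches and n-1 messages;
    # the inter-arrival time of the initiating event becomes the period Te.
    ce = sum(cpu_times) + len(cpu_times) * cx + (len(cpu_times) - 1) * cm
    return ce, inter_arrival

other_tasks = [(3, 10), (6, 50)]                        # (C, T) pairs in msec
eq = equivalent_task([20, 15, 30], cx=0.5, cm=1.0, inter_arrival=200.0)
total_u = sum(c / t for c, t in other_tasks + [eq])
print(f"total utilization = {total_u:.2f}")             # 0.76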
In some cases, you cannot assume that all the tasks in the event sequence can be
replaced by an equivalent task. This happens if one of the tasks is used in more than one
event sequence or if executing the equivalent task at that priority would prevent other
tasks from meeting their deadlines. In such cases, the tasks in the event sequence need to
be analyzed separately and assigned different priorities. In determining whether the tasks
in the event sequence will meet their deadlines, it is necessary to consider preemption and
blocking on a per task basis; however, it is still necessary to determine whether all tasks in
the event sequence will complete before the deadline. An example of this case is also
described in Chapter 18.
17.6 Advanced Real-Time Scheduling Algorithms
The scheduling theory for performance analysis of real-time designs described so far in
this chapter has considered implicit deadline task sets, where the relative deadline of each
task coincides with its next arrival time. While this represents many real-time applications,
there are cases when the deadlines can be less than the periods. For such cases, the
deadline-monotonic algorithm, which assigns fixed priorities according to the relative
deadlines, is known to be optimal among all fixed-priority scheduling algorithms (Leung
and Whitehead 1982).
After executing for 90 msec, t3 completes, which after a total elapsed time of 110
msec is less than its deadline of 200 msec. As there is no ready task, CPU A becomes idle.
After executing for 20 msec, t1 completes and, since there are no ready tasks, CPU B
becomes idle again. At this time, both CPUs are idle. Task t2 executes again at the start of its second period, T2 = 150, on CPU A and finishes 30 msec later.
Now consider the same tasks executing with partitioned scheduling instead of global
scheduling. Assume tasks are partitioned such that tasks t1 and t3 are assigned CPU A and
task t2 is assigned CPU B. There is no difference in execution until the start of t1’s second
period, T1 = 100. With partitioned scheduling, task t1 resumes execution on CPU A
(instead of CPU B) by preempting task t3, which by then has been executing for 80 msec.
Task t1 completes execution after 20 msec, at which time task t3 resumes executing on
CPU A and completes execution after a further 10 msec. Task t2 resumes executing at the
start of its second period, T2 = 150, but this time on CPU B instead of CPU A. Thus all
tasks meet their deadlines. Comparing partitioned scheduling with global scheduling for
this example, all tasks meet their deadlines in both cases. In fact, there is no difference in
the elapsed times for tasks t1 and t2. However, the elapsed time for task t3 is extended from
110 msec with global scheduling to 130 msec with partitioned scheduling, which is less
than its deadline of 200 msec. (The timing diagram for partitioned scheduling is not
depicted and is left as an exercise for the reader).
These examples show how tasks can take advantage of an additional processor by
meeting their deadlines earlier than on the single-processor system described in Section
17.1.5. However, it is often the case that tasks cannot take full advantage of a second (or
more) processor(s) because they are held up waiting for a scarce resource (such as shared
memory or I/O) or for a message from another task. Furthermore, memory contention can
also negatively affect the performance of multicore systems.
17.7.2 Performance Analysis of Multiprocessor Systems with Mutual Exclusion
Consider next the case of the four tasks described in Section 17.3.6 (and depicted on
Figure 17.2), in which three of the tasks have mutually exclusive access to a critical
section, executing on a dual-processor system. We assume the same worst-case scenario of
all tasks being ready to execute at the same time. In this situation, the two highest priority
tasks, ta and t1, execute in parallel on CPUs A and B respectively, as depicted in Figure
17.5. Task ta completes execution on CPU A after 4 msec. However, because task t1 has
mutually exclusive access to its critical section for the duration of its execution, neither t2
nor t3 can execute as they are both blocked waiting to enter their critical sections;
consequently, CPU A becomes idle. When task t1 leaves its critical section just before
completing execution on CPU B, task t2 is then unblocked, starts executing on CPU A,
and enters its critical section. However, the lowest priority task, t3, remains blocked and
cannot take advantage of a free CPU. When task t2 leaves its critical section before
completing execution on CPU A, t3 is then unblocked, starts executing on CPU B, and
enters its critical section.
Figure 17.5. Timing diagram for tasks executing on a dual-processor system with
mutual exclusion.
This example shows that, with multiprocessor systems, there can be situations when
concurrent tasks are unable to take full advantage of available CPUs because the tasks are
blocked waiting for scarce resources.
17.7.3 Performance Analysis of Multiprocessor Systems with Event Sequence
Analysis
Consider next applying the event sequence analysis approach to tasks executing on a dual-
processor system. This example uses the same four tasks with the same CPU times and
periods as those described in Section 17.3.6 and depicted in Figure 17.3. However, this
time consider the situation where three of these tasks are involved in an event sequence in
which the tasks execute in the order t1, t2, and t3, such that task t1 is awakened by an
external event, and tasks t2 and t3 each wait for a message from their predecessor task in
the event sequence. As before, the priority assignment is ta highest, followed respectively
by the tasks t1, t2, and t3. The execution of these four tasks on a dual-processor system is
depicted on Figure 17.6 with the worst-case scenario of all tasks being ready to execute at
the same time.
In this situation, the two highest priority tasks, ta and t1, start executing in parallel on
CPUs A and B respectively. Task ta completes execution on CPU A after 4 msec.
However, because tasks t2 and t3 are blocked waiting for messages, neither of these tasks
can execute, and consequently CPU A becomes idle. Just before completing execution on
CPU B, task t1 sends a message to task t2. Task t2 is then unblocked and executes on CPU
A. However, the lower priority task, t3, remains blocked waiting for a message from t2 and
cannot take advantage of a free CPU. When task t2 sends the message to task t3, t3 is then
unblocked and executes on CPU B.
Figure 17.6. Timing diagram for tasks in an event sequence executing on a dual-
processor system.
As with the example in the previous section, this example of applying event sequence
analysis shows that, with multiprocessor systems, there are situations when concurrent
tasks are unable to take full advantage of available CPUs, in this case because they are
blocked waiting for messages from other tasks.
17.8 Estimation and Measurement of
Performance Parameters
Several performance parameters must be determined through estimation or measurement
before a real-time performance analysis can be carried out. These are independent
variables whose values are inputs to the performance analysis. Dependent variables are
variables whose values are estimated by the real-time scheduling theory.
A major assumption made by real-time scheduling theory is that all tasks are locked into main memory, so there is no paging overhead. Paging overhead adds another degree of
uncertainty and delay that cannot be tolerated in hard real-time systems.
The following parameters must be estimated for each task involved in the
performance analysis:
a. The task’s period Ti. For a periodic task, this is the fixed time interval between successive activations (refer to Chapter 13 for more details on periodic
tasks). For an aperiodic task, use the worst-case (i.e., minimum) external event inter-
arrival time for an input task and then extrapolate from this for downstream internal
tasks that participate in the same event sequence.
b. The execution time Ci, which is the CPU time required for the period. At
design time, this figure is an estimate. Estimate the number of source lines of code
for the task, and then estimate the number of compiled lines of code. Use benchmarks
of programs developed in the selected source language executing on the selected
hardware with the selected operating system. Compare benchmark results with the
size of the task to estimate compiled code execution time. When the task has been
implemented, substitute performance measurements of the task executing on the
hardware for the task estimates.
CPU system overhead parameters are also needed for the performance analysis. These
parameters can be determined by performance measurements of benchmark programs.
These programs need to be developed in the programming language selected for the real-
time system, executing on the hardware platform selected for the RT system, and with the
multitasking operating system or kernel selected for the RT system. The following system
overhead parameters must be measured:
a. Context-switching overhead. The CPU time for the operating system to switch the CPU allocation from one task to another (see Chapter 3).
b. Interrupt-handling overhead. The CPU time for the operating system to handle an interrupt and activate the task awaiting it.
c. Inter-task communication and synchronization overhead. The CPU time for the operating system to send and receive a message, or to signal and wait on an event or semaphore.
These overhead parameters must be factored into the computation of task CPU time, as
described in this chapter and applied in the next chapter.
17.9 Summary
This chapter has described the performance analysis of software designs by applying real-
time scheduling theory to a concurrent tasking design executing on single-processor or
multiprocessor systems. This approach is particularly appropriate for hard real-time
systems with deadlines that must be met. This chapter has described two approaches for
analyzing the performance of a design: real-time scheduling theory and event sequence
analysis. The two approaches were then combined. This chapter also briefly described
advanced real-time scheduling algorithms, including deadline-monotonic scheduling,
dynamic priority scheduling, and scheduling for multiprocessor systems. Because the
performance analysis is applied to a design consisting of a set of concurrent tasks, the
analysis can start as soon as the task architecture has been designed, as described in
Chapter 13. It can then be refined as the real-time application development progresses
through detailed software design and implementation. A detailed example of performance
analysis of a real-time software design is described in Chapter 18. Other examples of
performance analysis are described in the case studies of real-time embedded systems in
Chapters 19 and 20.
18
Applying Performance Analysis to
Real-Time Software Designs
◈
This chapter applies the real-time performance analysis concepts and theory described in
Chapter 17 to a real-time embedded system, namely the Light Rail Control System. The
complete case study is described in Chapter 21. This chapter focuses on the real-time
performance analysis using real-time scheduling theory and event sequence analysis.
Sections 18.1 through 18.3 provide a detailed example of analyzing the performance
of the Light Rail Control System. Section 18.1 describes a performance analysis using
event sequence analysis. Section 18.2 describes a performance analysis using real-time
scheduling theory. Section 18.3 describes a performance analysis using both real-time
scheduling theory and event sequence analysis. Section 18.4 describes design restructuring
to meet performance goals.
18.1 Example of Performance Analysis Using
Event Sequence Analysis
The example of performance analysis using event sequence analysis describes three time-
critical event sequences for a train approaching a station, arriving at a station, and
detecting a hazard. Assume that the first case to be analyzed is that of the Approaching
Sensor detecting that the train is approaching a station at which it must stop, followed by
the Arrival Sensor detecting that the train has arrived at the station. Assume also that
the train is operating at the cruising speed. A performance requirement is that the system
must respond to each of the approaching sensor and arrival sensor input events within 200
msec. The sequence of internal events following the approaching sensor input is depicted
by the event sequence on the timing diagram in Figure 18.1, in which there are two
hardware devices and four software tasks shown with their appropriate stereotypes (see
Chapter 13). Tasks that are not involved in this scenario are excluded from the figure.
Figure 18.1. Event sequence timing diagram for a train approaching a station.
Assume that the Train Control state machine is in Cruising state. Consider the
case of input from the approaching sensor. The event sequence is as follows, where Ci denotes the CPU time required to process event i; the CPU times are given in Table 18.1.
A0: Approaching Sensor sends an Approached event (i.e., interrupt) to
the Approaching Sensor Input task to indicate that the train is
approaching a station.
A1: The Approaching Sensor Input task receives an interrupt from the Approaching Sensor and reads the approaching sensor input.
A2: The Approaching Sensor Input task sends an Approaching station message to Train Control.
A3: Train Control receives the message, executes its state machine, and changes state from Cruising to Approaching.
A4: Train Control sends a Decelerate message to Speed Adjustment.
A5: Speed Adjustment receives the Decelerate message and computes the deceleration rate.
A6: Speed Adjustment sends the deceleration rate to the Motor Output task.
A7: The Motor Output task receives the message, converts the deceleration rate to electric motor units (e.g., volts), and computes the gradual adjustment required to the external motor.
A8: Motor Output task sends the electric motor adjustment rate to Motor Actuator.
Now consider the event sequence following input from the arrival sensor, which is
depicted on the timing diagram in Figure 18.2 and is described as follows:
B0: Arrival Sensor sends an Arrival event (i.e., interrupt) to the Arrival
Sensor Input task to indicate that the train is entering the station.
B1: The Arrival Sensor Input task reads the arrival sensor input.
B2: The Arrival Sensor Input task sends an Arrived at station message
to Train Control.
B3: Train Control receives the message, executes its state machine, and changes state from Approaching to Stopping.
B4: Train Control sends a Stop message to Speed Adjustment.
B5: Speed Adjustment receives the Stop message.
B6: Speed Adjustment sends a Stop message to the Motor Output task.
B7: The Motor Output task receives the Stop message.
B8: Motor Output sends a Stop command to Motor Actuator to stop the train.
Table 18.1 depicts each task in the Train Subsystem in the first column with the CPU time
Ci depicted in the second column. Every time a periodic task executes, there are two context switches, assuming that the executing task is switched in at the start of the period and switched out at the end of the period. For periodic tasks, the third
column depicts the total execution time Cp for a periodic task, which is the CPU time Ci
plus the context-switching time Cx before and after task execution, as given by Equation 1:

Cp = Ci + 2Cx     (Equation 1)
For tasks in the event sequence, the execution time for a task must account for
both the context-switching time and the message communication time, as depicted in the
fourth column for the tasks in the Arrival Sensor event sequence and in the fifth column
for the tasks in the Proximity Sensor event sequence. For a task that participates in an
event sequence, the execution time Ce is the sum of the CPU time Ci, context-switching
time Cx before execution, and message communication time Cm to send a message to the
next task in the event sequence, which is given by Equation 2. (Note that Cm does not
apply to the last task in the event sequence).
Ce = Ci + Cx + Cm     (Equation 2)
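Equations 1 and 2 translate directly into code. The following Python sketch uses the overhead values from Table 18.1 (Cx = 0.3 msec, Cm = 0.7 msec); the execution times in Table 18.1 appear to be rounded up to whole milliseconds:

CX, CM = 0.3, 0.7   # context-switching and message overheads from Table 18.1

def cp(ci):
    # Equation 1: periodic task, context-switched in and out once per period
    return ci + 2 * CX

def ce(ci, last_in_sequence=False):
    # Equation 2: event-sequence task; the last task sends no message
    return ci + CX + (0.0 if last_in_sequence else CM)

print(f"{cp(9):.1f}")   # Speed Adjustment as a periodic task: 9.6 msec
print(f"{ce(4):.1f}")   # Arrival Sensor Input in the arrival sequence: 5.0 msec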
Since the Approaching Sensor and Arrival Sensor scenarios are very similar and
occur in sequence, we will consider the train arrival event sequence from events B1
through B8, which is more time-critical since it requires the train to stop at the station. The
event sequence diagram (Figure 18.2) shows that four tasks (Arrival Sensor Input,
Train Control, Speed Adjustment, and Motor Output) are required to support
the arrival sensor external event. Assume that the CPU time to execute event Bi is Ci.
There is also a minimum of four context switches required, 4*Cx, where Cx is the context-
switching overhead, as well as three message transfers.
Figure 18.2. Event sequence timing diagram for a train arriving at a station.
The total CPU time for the tasks in the arrival event sequence (Ce) is the sum of CPU
time for the four tasks in the event sequence (C1, C3, C5, C7), plus CPU time for message
communication (C2, C4, C6) and context-switching overhead (4*Cx):

Ce = C1 + C2 + C3 + C4 + C5 + C6 + C7 + 4Cx

Assume that message communication overhead Cm is the same in all cases. The times C2, C4, and C6 for message communication should therefore be equal to Cm. The execution time Ce is thus equal to:

Ce = C1 + C3 + C5 + C7 + 3Cm + 4Cx     (Equation 3)
A second event sequence of note is depicted in the fifth column of Table 18.1
and is for the tasks in the Proximity Sensor event sequence, which detects hazards ahead
on the rail track such as approaching too close to an earlier train, a hazard signal indicating
a problem with the rail track, or a vehicle stopped on a railroad crossing. The total CPU
time for the tasks in the proximity event sequence is based on the four tasks in the event
sequence, which are Proximity Sensor Input, Train Control, Speed
Adjustment, and Motor Output, three of which are also in the arrival event sequence.
The execution time Cp is thus equal to:

Cp = C8 + C3 + C5 + C7 + 3Cm + 4Cx     (Equation 4)
This event sequence consists of the following events, as depicted in Figure 18.3:
P1, P2: The Proximity Sensor Input task receives an interrupt from the
Proximity Sensor and reads the proximity sensor input, which indicates
that a hazard has been detected ahead of the train.
P3: The Proximity Sensor Input task sends a Hazard Detected message
to Train Control.
P4: Train Control receives the message, executes its state machine, and changes state from Cruising to Emergency Stopping.
P5: Train Control sends an Emergency Stop message to Speed Adjustment.
P6, P7: Speed Adjustment receives the Emergency Stop message and sends it to the Motor Output task.
P8, P9: The Motor Output task receives the Emergency Stop message and
outputs the Stop command to Motor Actuator to stop the train.
Figure 18.3. Event sequence timing diagram for a hazard detected.
Table 18.1. Task CPU Times and Execution Times (times in msec)

Task                                  CPU time Ci   Periodic Cp   Arrival event sequence Ce   Proximity event sequence
Approaching Sensor Input (C0)         4             5             –                           –
Arrival Sensor Input (C1)             4             5             5                           –
Train Control (C3)                    5             6             6                           6
Speed Adjustment (C5)                 9             10            10                          10
Motor Output (C7)                     4             5             5                           5
Message communication overhead (Cm)   0.7
Context-switching overhead (Cx)       0.3
Proximity Sensor Input (C8)           4             5             –                           5
Speed Sensor Input (C9)               2             3             –                           –
Location Sensor Input (C10)           5             6             –                           –
Train Status Dispatcher (C11)         10            11            –                           –
Train Display Output (C12)            14            15            –                           –
Train Audio Output (C13)              11            12            –                           –
18.2 Example of Performance Analysis Using Real-Time Scheduling Theory
Table 18.2 depicts the real-time scheduling parameters for the steady-state periodic and aperiodic tasks: the CPU time required by each task Ci in Column 2 and the period of each task Ti in Column 3. The CPU time for each periodic task
includes the CPU time for two context switches, as depicted in Table 18.1. Each task’s
CPU utilization Ui, which is the ratio Ui = Ci⁄Ti, is depicted in Column 4 of Table 18.2.
There are some rounding errors in the computation of CPU utilization Ui in this and
subsequent tables. The periodic and aperiodic tasks are described next.
Speed Sensor Input. It is assumed that this task is a periodic task. It is actually
aperiodic because it is activated by a shaft interrupt. However, the interrupt arrives
on a regular basis, every shaft rotation, so the task is assumed to behave as a
periodic task. Assume a worst case of 6,000 rpm, meaning there will be an
interrupt every 10 msec, which therefore represents the minimum period of the
equivalent periodic task. Because this task has the shortest period, it is assigned the
highest priority. Its CPU time is 3 msec.
Proximity Sensor Input. This task has a period of 100 msec and a CPU time of 5
msec.
Train Status Dispatcher. This task has a period of 600 msec and a CPU time of
11 msec.
Speed Adjustment. When activated under automated control, this task executes
periodically every 100 msec to compute the required speed value and has a CPU
time of 10 msec.
Motor Output. This task is activated by a message from the periodic Speed
Adjustment task. It is therefore assumed that the Motor Output task has period
equal to that of Speed Adjustment, namely 100 msec, and executes for 5 msec.
Location Sensor Input. This task executes aperiodically with an equivalent period
of 50 msec and has a CPU time of 6 msec. It executes on a regular basis to
determine the location of the train.
Train Display Output. This task is activated by, and therefore has the same period
(600 msec) as Train Status Dispatcher. Its CPU time is 15 msec.
Train Audio Output. This task is also activated by, and therefore has the same
period (600 msec) as Train Status Dispatcher. Its CPU time is 12 msec.
The rate-monotonic priorities of the tasks are assigned in inverse proportion to their periods (as depicted in Column 5 of Table 18.2), such that higher priorities are allocated to
tasks with shorter periods. Thus, the highest priority task is Speed Sensor Input,
which has a period of 10 msec. The next highest priority task is Location Sensor
Input, which has a period of 50 msec. Next highest priority is Proximity Sensor
Input, which has a period of 100 msec. Two other tasks have a period of 100 msec:
Speed Adjustment and Motor Output. Even though Speed Adjustment sends
messages that are consumed by Motor Output, the higher priority is given to Motor
Output because it interfaces to the external motor. Priorities are next assigned to Train
Status Dispatcher, Train Display Output, and Train Audio Output tasks.
Since these tasks all have the same period, higher priority is given to Train Status Dispatcher, which is the producer of the messages consumed by the other two tasks.
Table 18.2. Real-Time Scheduling Parameters: Steady-State Periodic and Aperiodic Task Parameters

Task                        CPU time Ci   Period Ti   Utilization Ui   Priority
Speed Sensor Input          3             10          0.30             1
Location Sensor Input       6             50          0.12             2
Proximity Sensor Input      5             100         0.05             3
Motor Output                5             100         0.05             4
Speed Adjustment            10            100         0.10             5
Train Status Dispatcher     11            600         0.018            6
Train Display Output        15            600         0.025            7
Train Audio Output          12            600         0.020            8
Total utilization                                     0.68
From Table 18.2, the total utilization of the steady-state periodic and aperiodic tasks
is 0.68, which is below the theoretical worst-case upper bound of 0.69 given by the
Utilization Bound Theorem. Therefore, according to the rate-monotonic algorithm, all
the tasks are able to meet their deadlines.
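This total can be cross-checked from the task parameters described above. The following Python sketch sums the utilization of the eight steady-state tasks (CPU times include the two context switches) and compares the result with the ln 2 bound:

from math import log

steady_state = {                       # task: (C, T) in msec, per Table 18.2
    "Speed Sensor Input": (3, 10),
    "Location Sensor Input": (6, 50),
    "Proximity Sensor Input": (5, 100),
    "Motor Output": (5, 100),
    "Speed Adjustment": (10, 100),
    "Train Status Dispatcher": (11, 600),
    "Train Display Output": (15, 600),
    "Train Audio Output": (12, 600),
}
total = sum(c / t for c, t in steady_state.values())
print(f"total = {total:.2f}, bound = {log(2):.2f}")  # total = 0.68, bound = 0.69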
18.3 Example of Performance Analysis Using
Real-Time Scheduling Theory and Event
Sequence Analysis
Next, consider the case of an external event, such as that from the approaching sensor,
arrival sensor, or proximity sensor, triggering an event sequence. Since the approaching
sensor event occurs a significant time before the arrival sensor event, the approaching
sensor event sequence and the arrival sensor event sequence do not overlap in time.
Because the two event sequences are very similar in behavior, only one of them needs to
be considered. This analysis must consider the tasks in the event sequence, in addition to
the steady-state periodic and aperiodic tasks described in the previous section. The first
solution uses an equivalent event sequence task to replace the tasks in the event sequence.
18.3.1 Equivalent Event Sequence Tasks
It is necessary to consider the impact of the additional load imposed by the arrival sensor
event sequence or the proximity sensor event sequence on the steady-state load of the
periodic and aperiodic tasks. This is done by considering the impact of the tasks in each
event sequence on the steady-state analysis described in Section 18.1. For the four
aperiodic tasks participating in the arrival sensor event sequence (namely Arrival
Sensor Input, Train Control, Speed Adjustment, and Motor Output)
consider the equivalent aperiodic task, which is referred to as the event sequence task.
First consider an input from the arrival sensor. As described in the event sequence
analysis and shown in the event sequence diagram, the tasks required to process this input
are Arrival Sensor Input, Train Control, Speed Adjustment, and Motor
Output. Although four tasks are involved in the event sequence, they have to execute in
strict sequence because each task is activated by a message sent by its predecessor in the
sequence. We can therefore assume, to a first approximation, that the four tasks are
equivalent to one aperiodic task whose CPU time is Ce. Ce is the sum of the CPU times of
the four individual tasks plus message communication overhead and context-switching
overhead, as given by Equation 3. The equivalent aperiodic task is referred to as the
arrival event sequence task. From Equation 3 and Table 18.1, Ce is equal to 26 msec.
From the real-time scheduling theory, an aperiodic task can be treated as a periodic
task whose period is given by the minimum inter-arrival time of the aperiodic requests.
Let the period for the equivalent periodic event sequence task be Te. Assume that Te is
also the necessary response time to the arrival sensor input. For example, if Te is 200
msec, the desired response to the external event from the arrival sensor is 200 msec.
Now consider the second event sequence task, which is the proximity event sequence
task. This event sequence is initiated by the proximity sensor detecting a hazard on the
track. However, an event sequence will only occur if the Proximity Sensor Input
task actually detects a hazard ahead; if it does not, then it just completes executing and
waits for the next timer event. If a hazard is detected, then an input from the proximity
sensor can be treated in a similar way to the arrival event sequence. In the proximity
sensor case, the tasks in the event sequence are Proximity Sensor Input, Train
Control, Speed Adjustment, and Motor Output, with the last three identical to
those for the arrival sensor. The main difference is that the Proximity Sensor Input
task is periodic, with a period of 100 msec, and hence is activated more frequently. From
Table 18.1, the estimated CPU time for Proximity Sensor Input is 5 msec. Thus,
from Equation 4 and Table 18.1, the CPU time to process input from the proximity sensor
is 26 msec. However, the period for Proximity Sensor Input is 100 msec, which is shorter than the 200 msec equivalent period for the arrival sensor. The higher sampling rate for the proximity sensor is to
ensure the quick detection of hazards, which are unexpected, in comparison to train arrival
at a station, which is expected. This allows approaching sensors to be placed at a
preplanned distance from each station to allow the train to decelerate to a slower speed
and allows the arrival sensors to be placed near the entrance to the station to allow the
train to stop at the station.
18.3.2 Assigning Rate-Monotonic Priorities
Next, consider the real-time scheduling impact of adding each event sequence task in turn
on the steady-state situation previously considered. Table 18.3 provides the real-time
scheduling parameters in which the two event sequence tasks are added to the steady state
tasks from Table 18.2. Besides the CPU time and period for each periodic task and event
sequence task, in columns 2 and 3 respectively, the data for three scenarios are provided.
Columns 4 and 5 respectively depict the CPU utilization and priorities for tasks
participating in the arrival event sequence. Columns 6 and 7 provide the same information
for tasks in the proximity event sequence, while columns 8 and 9 provide this information
for tasks when the arrival and proximity event sequences occur simultaneously.
When assigning a priority to the event sequence task, the task is initially assigned its
rate-monotonic priority, which is based on its period. First consider the periodic proximity
event sequence task. When an obstacle is detected, this event sequence task, consisting of
the four tasks starting with the Proximity Sensor Input task (in Table 18.3), which
replaces the Proximity Sensor Input task executing alone. The proximity event
sequence task has the same period and therefore is assigned the same rate-monotonic
priority as Proximity Sensor Input (the third highest after Speed Sensor Input)
and has a CPU time of 26 msec. Given that the period is 100 msec, the CPU utilization for
this event sequence task is 0.26. The total CPU utilization of the steady-state tasks and the
proximity event sequence task is 0.89 (column 6 in Table 18.3), which is well above the
worst-case upper bound of 0.69 given by the Utilization Bound Theorem. Consequently,
the proximity event sequence task is likely to miss its deadline.
Next consider the arrival event sequence task, which is aperiodic. Because this task
has a longer period than five other steady-state tasks, namely Speed Sensor Input,
Proximity Sensor Input, Location Sensor Input, Speed Adjustment, and
Motor Output, it is given a lower rate-monotonic priority than these five tasks. The real-
time scheduling parameters for this case, as well as the assigned task priorities, are given
in Table 18.3. Given that the arrival event sequence task has a CPU time Ce of 26 msec and an equivalent period Te of 200 msec, the task CPU utilization is 0.13. The total CPU
utilization of the steady-state tasks (column 4 in Table 18.3) and the arrival event sequence
task is 0.81, which is also above the worst-case upper bound of 0.69 given by the
Utilization Bound Theorem. Consequently, the arrival event sequence task, in addition to the periodic tasks, could miss its deadline.
It should be noted that the impact of each event sequence task on the steady-state
periodic tasks was considered separately. What would the impact be if both event
sequence tasks were triggered in quick succession? This analysis is depicted in columns 8
and 9 of Table 18.3. The total CPU utilization is 1.02, which is obviously an impossible
number (more than 100%) and well above the utilization bound upper limit of 0.69. Since
rapid successive inputs from the arrival and proximity sensors would be interleaved, this
impact needs a more detailed analysis. A more detailed rate-monotonic analysis is given in
the next section.
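The three scenario totals of Table 18.3 can be reproduced from the figures quoted above, as the following Python sketch shows:

steady_state = 0.68        # total utilization of the steady-state tasks
proximity_alone = 0.05     # Proximity Sensor Input executing alone
proximity_seq = 26 / 100   # proximity event sequence task: Ce = 26, T = 100
arrival_seq = 26 / 200     # arrival event sequence task: Ce = 26, Te = 200

print(f"arrival only:   {steady_state + arrival_seq:.2f}")                                    # 0.81
print(f"proximity only: {steady_state - proximity_alone + proximity_seq:.2f}")                # 0.89
print(f"both:           {steady_state - proximity_alone + proximity_seq + arrival_seq:.2f}")  # 1.02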
18.3.3 Detailed Rate-Monotonic Analysis
A more comprehensive analysis of the light rail control problem is obtained by treating
each of the tasks in the event sequences separately rather than together. The CPU
parameters for each task, including the individual tasks in the proximity and arrival event
sequences, are shown in Table 18.4, in which each task has its context-switching and
message communication overhead added to its CPU time. Table 18.4 provides the CPU
time, period, and utilization (in columns 2, 3, and 4 respectively) for all periodic and
aperiodic tasks. All the tasks in the event sequence are treated as periodic tasks with a
period equal to the minimum inter-arrival time of 200 msec for the tasks in the arrival
event sequence and 100 msec for the tasks in the proximity event sequence. However,
since three of the tasks (Train Control, Speed Adjustment, and Motor Output)
are in both event sequences, this worst-case analysis assigns all three tasks the shorter
period of 100 msec.
Table 18.4. Real-Time Scheduling: Periodic and Aperiodic Task Parameters (*tasks in
event sequence)

Task                      Ci (msec)  Ti (msec)  Ui     Priority (case 1)  Priority (case 2)
Speed Sensor Input        3          10         0.30   1                  1
Location Sensor Input     6          50         0.12   2                  2
Proximity Sensor Input*   5          100        0.05   3                  4
Motor Output*             5          100        0.05   4                  5
Train Control*            6          100        0.06   5                  6
Speed Adjustment*         10         100        0.10   6                  7
Arrival Sensor Input*     5          200        0.03   7                  3
Train Status Dispatcher   …          600        …      8                  8
Train Display Output      …          600        …      9                  9
Train Audio Output        …          600        …      10                 10
Total utilization for all tasks                 0.77
The detailed analysis initially assigns the rate-monotonic priority to each task (case 1
in Table 18.4). As before, Speed Sensor Input is given the highest rate-monotonic
priority because it has the shortest period of 10 msec, followed by Location Sensor
Input. The third highest rate-monotonic priority is Proximity Sensor Input (which
initiates the proximity event sequence) since it has the next shortest period of 100 msec.
Next are the three other tasks in the proximity event sequence, namely Train Control,
Speed Adjustment, and Motor Output. Because these three tasks participate in both
the arrival and proximity event sequences, which have different periods, for a worst-case
analysis, the three tasks are assumed to have the shorter proximity sensor period of 100
msec. In addition, Speed Adjustment and Motor Output also execute in the steady-
state situation when the train is accelerating or cruising with a period of 100 msec.
Because all three tasks have the same period of 100 msec as Proximity Sensor
Input, they are assigned the same rate-monotonic priority. The decision is made to give
Motor Output the highest priority of the three tasks because it outputs to the motor,
followed by Train Control, because it is the control task, and then Speed
Adjustment, which is an algorithm task with the longest CPU time of the three. Rate-
monotonic priorities are then assigned to the remaining tasks, namely Arrival Sensor
Input with a period of 200 msec, and the Train Status Dispatcher, Train Display
Output, and Train Audio Output tasks, all three of which have a period of 600 msec.
From column 4 in Table 18.4, the total utilization of these tasks is 0.77, which is
above the theoretical worst-case upper bound of 0.69 given by the Utilization Bound
Theorem. Therefore, according to the rate-monotonic algorithm, not all the tasks will be
able to meet their deadlines in the worst case with the execution of the tasks in both the
proximity and arrival event sequences.
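This priority assignment can be expressed compactly, as in the sketch below; the task list is taken from the discussion above and is pre-ordered so that, because Python’s sort is stable, the ties at 100 msec keep the order chosen in the text.

```python
# (task name, period in msec); ties at 100 msec are listed in the order
# chosen above: Proximity Sensor Input, Motor Output, Train Control,
# Speed Adjustment.
tasks = [
    ("Speed Sensor Input", 10),
    ("Location Sensor Input", 50),
    ("Proximity Sensor Input", 100),
    ("Motor Output", 100),
    ("Train Control", 100),
    ("Speed Adjustment", 100),
    ("Arrival Sensor Input", 200),
    ("Train Status Dispatcher", 600),
    ("Train Display Output", 600),
    ("Train Audio Output", 600),
]
# Rate-monotonic: shorter period implies higher priority (priority 1 is highest).
for priority, (name, period) in enumerate(sorted(tasks, key=lambda t: t[1]), 1):
    print(f"priority {priority:2d}: {name} (T = {period} msec)")
```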
18.3.4 Assigning Non-Rate-Monotonic Priorities
The detailed analysis described in the previous section assumed that each task was
assigned its rate-monotonic priority, that is, priority in inverse proportion to its period. The
major concern is that Arrival Sensor Input, which should be a high-priority input
task to respond to the arrival sensor interrupt in a timely manner, is assigned a relatively
low rate-monotonic priority because of its relatively long period of 200 msec. A problem
with giving the Arrival Sensor Input task its rate-monotonic priority is that the task
could potentially miss the arrival sensor interrupt if it has to wait for six higher-priority
tasks (Speed Sensor Input, Proximity Sensor Input, Speed Adjustment,
Train Control, Location Sensor Input, and Motor Output) to execute.
Because of the risk of missing the arrival interrupt, it is therefore decided to raise the
priority of the Arrival Sensor Input task above its rate-monotonic priority.
Assigning the Arrival Sensor Input task the highest priority could lead to Speed
Sensor Input missing its deadlines because it is also interrupt-driven and has a much
shorter period of 10 msec. In addition, the Location Sensor Input is also an input
task that receives time-critical external inputs. To avoid delaying these two input tasks, the
Arrival Sensor Input task is given a lower priority than these tasks but a higher
priority than all the other tasks, which is therefore the third highest priority. This means
the Arrival Sensor Input task is given a higher priority than its rate-monotonic
priority, as shown in Table 18.4 (case 2). The assignment of non-rate-monotonic priorities
is described next.
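The case 2 assignment can be sketched as a simple adjustment of the case 1 rate-monotonic order: promote Arrival Sensor Input above every task except the two time-critical input tasks.

```python
# Case 1 rate-monotonic order from Section 18.3.3 (highest priority first).
rm_order = [
    "Speed Sensor Input", "Location Sensor Input", "Proximity Sensor Input",
    "Motor Output", "Train Control", "Speed Adjustment",
    "Arrival Sensor Input", "Train Status Dispatcher",
    "Train Display Output", "Train Audio Output",
]
# Case 2: Arrival Sensor Input moves to third highest priority.
case2 = [name for name in rm_order if name != "Arrival Sensor Input"]
case2.insert(2, "Arrival Sensor Input")
for priority, name in enumerate(case2, 1):
    print(priority, name)
```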
18.3.5 Applying Generalized Real-Time Scheduling Theory to Tasks with Non-
Rate-Monotonic Priorities
To carry out a full analysis of tasks assigned non-rate-monotonic priorities, it is necessary
to apply the Generalized Real-Time Scheduling Theory, as described in Section 17.3.
Because of the assignment of non-rate-monotonic priorities, each task must be checked
explicitly against its upper bound to determine whether it meets its deadline. This section
analyzes the performance of the tasks shown in Table 18.4 (case 2).
In this analysis, Proximity Sensor Input is considered with the other tasks in
the proximity event sequence because it is important to determine that all four tasks
complete before the 100 msec deadline. Similarly, Arrival Sensor Input is
considered with the other tasks in the arrival event sequence in order to determine that all
four tasks complete before the 200 msec deadline. Note that even though we are
considering the four tasks together, this analysis is different from the equivalent event
sequence task analysis given in Section 18.3.2 because the tasks are considered separately
in all other cases.
a. Execution time for the tasks in the event sequence. The total execution time for
the four tasks in the event sequence is Ce = 26 msec, with a period Te = 100 msec.
Execution utilization = 0.26.
b. Preemption time by higher-priority tasks with shorter periods, i.e., less than
100 msec, the period of the tasks in the event sequence. There are two tasks in this
set.
Speed Sensor Input, with a period of 10 msec, can preempt any of the four
tasks a maximum of ten times over 100 msec for a total preemption time of 10*3
msec = 30 msec and preemption utilization of 0.3.
The other task, Location Sensor Input, with a period of 50 msec, can
preempt any of the four tasks a maximum of twice over 100 msec for a total
preemption time of 2*6 msec = 12 msec and preemption utilization of 0.12.
Total preemption utilization of these two higher-priority tasks in the 100 msec
period = 0.3 + 0.12 = 0.42.
c. Preemption time by higher-priority tasks with longer periods, i.e., greater
than 100 msec. The only such task is Arrival Sensor Input (which has the
third highest priority in case 2), which can preempt once during the 100 msec
period for a preemption time of 5 msec and a preemption utilization of 0.05.
Total preemption time by higher-priority tasks with both shorter and longer
periods = 42 + 5 = 47 msec.
Total preemption utilization by higher-priority tasks with both shorter and longer
periods during the 100 msec period = 0.42 + 0.05 = 0.47.
d. Blocking time. The worst-case blocking time for the tasks in the event
sequence is 11 msec, for a blocking utilization of 0.11.
After considering these four factors, we now determine the total elapsed time and total
utilization:
Total elapsed time = total execution time + total preemption time + worst-case
blocking time = 26 + 47 + 11 = 84 < 100
Total utilization = execution utilization + preemption utilization + worst-case
blocking utilization = 0.26 + 0.47 + 0.11 = 0.84 > 0.69
The total utilization of 0.84 is greater than the Generalized Utilization Bound Theorem’s
upper bound of 0.69. However, the more accurate timing analysis using the Generalized
Completion Time Theorem, which considers the actual execution time of the tasks,
determines that the four tasks in the proximity event sequence all meet their deadlines
because the total elapsed time of 84 msec is less than the period of 100 msec.
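The four-factor calculation can be captured in a small helper function, a sketch in which the function name and structure are illustrative; it reproduces the 84 msec total above and can be reused for the arrival event sequence analyzed next.

```python
import math

def worst_case_elapsed(Ce, Te, preemptors, blocking):
    """Worst-case elapsed time over period Te for an event sequence with
    total execution time Ce, preempted by higher-priority tasks given as
    (Ci, Ti) pairs, plus the worst-case blocking time (all in msec)."""
    preemption = sum(math.ceil(Te / Ti) * Ci for Ci, Ti in preemptors)
    return Ce + preemption + blocking

# Proximity event sequence: preempted by Speed Sensor Input (3, 10) and
# Location Sensor Input (6, 50), plus one 5 msec preemption by the
# longer-period Arrival Sensor Input; worst-case blocking of 11 msec.
t = worst_case_elapsed(26, 100, [(3, 10), (6, 50), (5, 200)], 11)
print(t, "msec:", "deadline met" if t <= 100 else "deadline missed")  # 84 msec
```

For the arrival event sequence, the call worst_case_elapsed(26, 200, [(3, 10), (6, 50), (5, 100)], 11) yields the 131 msec total derived below.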
As before, the objective is to determine that the four tasks in the arrival event
sequence will complete execution before the 200 msec deadline. It is necessary to apply
the Generalized Utilization Bound Theorem and consider the following four factors:
a. Execution time for the tasks in the event sequence. The total execution time for
the four tasks in the event sequence is Ce = 26 msec, with a period Te = 200 msec.
Execution utilization = 0.13.
b. Preemption time by higher-priority tasks with shorter periods, i.e., less than
200 msec, the period of the tasks in the event sequence. There are three tasks in this
set.
Speed Sensor Input, with a period of 10 msec, can preempt any of the four
tasks a maximum of twenty times for a total of 20*3 msec = 60 msec and
preemption utilization of 60/200 = 0.3.
Location Sensor Input, with a period of 50 msec, can preempt any of the
four tasks a maximum of four times over 200 msec for a total preemption time
of 4*6 msec = 24 msec and preemption utilization of 24/200 = 0.12.
Proximity Sensor Input, with a period of 100 msec, can preempt three of
the four tasks a maximum of twice over 200 msec for a total preemption time of
2*5 msec = 10 msec and preemption utilization of 10/200 = 0.05.
Total preemption time by these three higher-priority tasks over the 200 msec
period = 60 + 24 + 10 = 94 msec, for a total preemption utilization of 0.47.
c. Preemption time by higher-priority tasks with longer periods, i.e., greater
than 200 msec. There are no such higher-priority tasks.
d. Blocking time. The worst-case blocking time for the tasks in the event
sequence is 11 msec, for a blocking utilization of 0.06.
After considering these four factors, we now determine the total elapsed time and total
utilization:
Total elapsed time = total execution time + total preemption time + worst-case
blocking time = 26 + 94 + 11 = 131 < 200
Total utilization = execution utilization + preemption utilization + worst-case
blocking utilization = 0.13 + 0.47 + 0.06 = 0.66 < 0.69
The total utilization of 0.66 is less than the Generalized Utilization Bound Theorem’s
upper bound of 0.69, so the four tasks in the event sequence all meet their deadlines. This
result is confirmed by the Generalized Completion Time Theorem.
Performance Analysis of Highest Priority Tasks
It is also necessary to confirm that the two highest-priority tasks, Speed Sensor
Input and Location Sensor Input, both complete within Location Sensor
Input's 50 msec period:
a. Execution time for the two tasks. In the 50 msec period, Speed Sensor Input
will execute 5 times for 3 msec each time while Location Sensor Input will
execute once for 6 msec. Total execution time = 5*3 + 6 = 21 msec. Execution
utilization = 0.3 + 0.12 = 0.42.
b. Preemption time by higher-priority tasks with shorter periods, i.e., less than
50 msec. There are no such tasks.
c. Preemption time by higher-priority tasks with longer periods. There are also
no such tasks, so the total preemption time is zero.
d. Blocking time. The worst-case blocking time is 21 msec, for a blocking
utilization of 0.42.
After considering these four factors, we now determine the total elapsed time and total
utilization:
Total elapsed time = total execution time + total preemption time + worst-case
blocking time = 21 + 0 + 21 = 42 < 50
Total utilization = execution utilization + preemption utilization + worst-case
blocking utilization = 0.42 + 0 + 0.42 = 0.84 > 0.69
The total utilization of 0.84 is greater than the Generalized Utilization Bound Theorem’s
upper bound of 0.69. However, the more accurate timing analysis using the Generalized
Completion Time Theorem determines that the two high-priority tasks with the shorter
periods will meet their deadlines because the total elapsed time of 42 msec is less than the
period of 50 msec.
Performance Analysis of Lowest Priority Tasks
The remaining tasks that need to be analyzed are the three lowest-priority tasks,
which execute with a 600 msec period, namely the Train Status Dispatcher,
Train Display Output, and Train Audio Output tasks. Because they are the
lowest-priority tasks, they can be preempted by all the other tasks:
a. Execution time for the three tasks. In the 600 msec period, each task will execute
once.
b. Preemption time by higher-priority tasks with periods less than 600 msec.
There are seven tasks with higher priorities that will preempt these tasks. Speed
Sensor Input will execute sixty times while Location Sensor Input will
execute twelve times. Proximity Sensor Input, Train Control, Speed
Adjustment, and Motor Output will each execute six times, and Arrival
Sensor Input will execute three times.
Total preemption time = 60*3 + 12*6 + 6*5 + 6*6 + 6*10 + 6*5 + 3*5 = 423
msec.
Total preemption utilization = 0.30 + 0.12 + 0.05 + 0.06 + 0.10 + 0.05 + 0.03 =
0.71.
After considering these factors, we now determine the total elapsed time and
total utilization.
The total utilization of 0.78 is greater than the Generalized Utilization Bound Theorem’s
upper bound of 0.69, so according to this theorem, the three tasks could miss their
deadlines. However, the Generalized Completion Time Theorem, which considers the
actual execution time of the tasks, shows that 461 msec out of 600 msec are used, so that
the three tasks do meet their deadlines.
18.3.6 Applying the Generalized Completion Time Theorem to Tasks with Non-
Rate-Monotonic Priorities
The Generalized Completion Time Theorem, as described in Section 17.3.6, was also
applied to evaluate the performance of the multitasking design described in the previous
section. The results of this performance analysis are depicted on the timing diagram in
Figure 18.4, which shows the execution of the seven highest-priority tasks in Table 18.4
on a single processor.
The scenario depicted in Figure 18.4 is for train arrival at a station with the Arrival
Sensor Input task initiating an arrival event sequence (which also consists of Train
Control, Speed Adjustment, and Motor Output) in addition to the tasks Speed
Sensor Input, Location Sensor Input, and Proximity Sensor Input (not
detecting a hazard). Assume a worst case in which all tasks are ready to execute at the start of
this scenario, except that the tasks in the arrival event sequence must execute according to
that sequence.
Figure 18.4. Timing diagram for tasks in Train Control Subsystem executing on single
CPU.
In conclusion, in this scenario the total elapsed time (from the start of the scenario)
for tasks in the arrival event sequence to complete execution is 64 msec, during which
time Speed Sensor Input executes seven times, Location Sensor Input executes
twice, and Proximity Sensor Input executes once. Thus, all four tasks in the arrival
event sequence complete before the 200 msec deadline. Similarly, if Proximity Sensor
Input detected a hazard it would initiate the hazard detected event sequence, which
would also complete execution before the 100 msec deadline. Note that the timing
analysis described in this section only considers the scenario over 70 msec whereas the
real-time scheduling analysis in the previous section considered the elapsed time over a
200 msec period.
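The timing-diagram construction can also be approximated with a small time-stepped simulation. The sketch below schedules the tasks of Table 18.4 (case 2 priorities) as independent periodic tasks and therefore ignores the message precedence within the arrival event sequence, so it only approximates the scenario of Figure 18.4.

```python
def simulate(tasks, horizon):
    """Time-stepped (1 msec) preemptive fixed-priority scheduling of
    periodic tasks given as (name, C, T, priority), where a lower
    priority number means higher priority. Returns the first completion
    time of each task."""
    remaining = {name: 0 for name, _, _, _ in tasks}
    first_completion = {}
    for t in range(horizon):
        for name, C, T, _ in tasks:
            if t % T == 0:
                remaining[name] = C        # release a new job of the task
        ready = [task for task in tasks if remaining[task[0]] > 0]
        if ready:
            name = min(ready, key=lambda task: task[3])[0]
            remaining[name] -= 1           # run highest-priority ready task
            if remaining[name] == 0:
                first_completion.setdefault(name, t + 1)
    return first_completion

tasks = [
    ("Speed Sensor Input", 3, 10, 1), ("Location Sensor Input", 6, 50, 2),
    ("Arrival Sensor Input", 5, 200, 3), ("Proximity Sensor Input", 5, 100, 4),
    ("Motor Output", 5, 100, 5), ("Train Control", 6, 100, 6),
    ("Speed Adjustment", 10, 100, 7),
]
print(simulate(tasks, 200))
```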
18.3.7 Performance Analysis of Tasks Executing on a Multiprocessor System
If the concurrent tasks in the software design are to execute on a multiprocessor system,
then the performance can be analyzed using timing diagrams to evaluate the impact of
increasing the number of processors, as described in Section 17.7.
Consider the task in the event sequence described in Section 18.3.6 executing on a
dual-processor system, as depicted on the timing diagram in Figure 18.5, using global
scheduling. The scenario is for train arrival at a station with the Arrival Sensor
Input initiating an arrival event sequence, which also consists of Train Control,
Speed Adjustment,and Motor Output, and Proximity Sensor Input not detecting
a hazard. As before, a worst-case scenario is assumed in which all tasks are ready to
execute at the start of the scenario, except that the tasks in the arrival event sequence must
execute according to that sequence.
Figure 18.5. Timing diagram for tasks in Train Control Subsystem executing on two
CPUs.
With a dual-processor system, the two highest-priority tasks can execute in parallel.
Thus, this scenario starts with both Speed Sensor Input and Location Sensor
Input executing in parallel on CPU A and CPU B respectively. Speed Sensor Input
completes execution after 3 msec and releases CPU A for the highest-priority ready task,
which is Arrival Sensor Input. After executing for 6 msec, Location Sensor
Input completes execution and releases CPU B for the highest-priority ready task, which
is Proximity Sensor Input. Arrival Sensor Input sends a message to Train
Control before completing execution after 5 msec and releasing CPU A. Train
Control is next to execute on CPU A for 2 msec before being preempted by Speed
Sensor Input at the start of its second period (elapsed time of 10 msec). Proximity
Sensor Input completes executing after 6 msec and releases CPU B. Train Control
can now resume execution for its remaining 4 msec on CPU B. Speed Sensor Input
completes its second execution cycle of 3 msec and releases CPU A, which becomes idle
because the other tasks are blocked waiting for a message or for their period to elapse.
Train Control sends a message to the next task in the event sequence, Speed
Adjustment, before completing execution and releasing CPU B. The arrival of the
message unblocks Speed Adjustment, which starts executing on CPU A for 10 msec.
Speed Sensor Input is ready to execute at the start of its third period (elapsed time of
20 msec) and is assigned to the free CPU B, on which it executes for a further 3 msec
before releasing the CPU. Just before completing its 10 msec of execution, Speed
Adjustment sends a message to the last task in the event sequence, Motor Output,
which immediately starts executing on the free CPU B for 5 msec. At elapsed time of 30
msec, Speed Sensor Input is ready to execute at the start of its fourth period on the
free CPU A for a further 3 msec. At elapsed time of 31 msec, Motor Output completes
its execution time of 5 msec.
In conclusion, in this scenario the total elapsed time of the four tasks in the arrival
event sequence to complete execution is 31 msec. At this time, Speed Sensor Input is
executing for the fourth time, while Location Sensor Input and Proximity
Sensor Input have both completed execution once. Thus, all four tasks in the arrival
event sequence complete long before the 200 msec deadline, and the tasks that execute
periodically also all meet their deadlines. Similarly, if Proximity Sensor Input
detected a hazard it would initiate the hazard detected event sequence, which would also
complete execution before the 100 msec deadline.
If there is a performance problem in the Light Rail Control System example, one
attempt at design restructuring is to apply sequential clustering. Consider the case of the
Train Control task sending a speed command message to the Speed Adjustment
task, which in turn sends speed messages to the Motor Output task. These three tasks
could be combined into one task by means of sequential clustering: a clustered Train
Control task containing passive objects for Speed Adjustment and Motor Output. This
eliminates the message communication overhead between these tasks, as well as the
context-switching overhead. Let the CPU time for the clustered task be Cv. Then, referring
to Table 18.1:
Cv = Ctc + Csa + Cmo (Equation 5)
where Ctc, Csa, and Cmo are the CPU times of the Train Control, Speed
Adjustment, and Motor Output tasks respectively.
The CPU time for the two tasks in the new event sequence Cee is now given by
Cee = Ca + Cv + Cm + 2*Cx (Equation 6)
where Ca is the CPU time of the Arrival Sensor Input task, Cm is the message
communication overhead, and Cx is the context-switching overhead.
It is interesting to compare Equation 6 (with two tasks in the event sequence)
with Equation 3 in Section 18.1 (with four tasks in the event sequence): the message
communication overhead is reduced from 3*Cm to Cm, and the context-switching overhead
is reduced from 4*Cx to 2*Cx. Given the estimated timing parameters in Table 18.1 and
substituting for them in Equations 3 and 6 results in a reduction of total CPU time from 26
msec to 24 msec. If the message communication and context-switching overhead times
were larger, the savings would be more substantial. However, if the overhead times were
shorter, the savings would be less.
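The comparison can be checked numerically. In the sketch below, the raw CPU times and the overheads Cm and Cx are assumed placeholder values chosen only for illustration, since Table 18.1 is not reproduced here.

```python
# Assumed placeholder values (msec); Table 18.1 holds the actual figures.
Cm, Cx = 0.5, 0.5                 # message and context-switch overheads
Ca, Ctc, Csa, Cmo = 5, 5, 9, 4    # raw CPU times of the four tasks

Ce = Ca + Ctc + Csa + Cmo + 3 * Cm + 4 * Cx   # Equation 3: four-task sequence
Cv = Ctc + Csa + Cmo                          # Equation 5: clustered task
Cee = Ca + Cv + Cm + 2 * Cx                   # Equation 6: two-task sequence
print(Ce, Cee, "saving =", Ce - Cee)          # saving is always 2*Cm + 2*Cx
```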
19 Microwave Oven Control System Case Study
This chapter describes a case study for a microwave oven control system. The software
design for this embedded system is typical of many consumer products. Thus, the
microwave oven embedded system interfaces with the external environment by means of
several sensors and actuators, supports a simple user interface, keeps track of time, and
provides centralized control that necessitates the design of a state machine. As the
microwave oven is an embedded system, the design approach benefits from starting with a
systems engineering perspective of the total hardware/software system before the software
modeling and design.
The problem is described in Section 19.1. Section 19.2 describes structural modeling
of the microwave oven embedded system, in which the system and software context block
definition diagrams are developed. Section 19.3 describes the use case model for the
microwave oven system. Section 19.4 describes how the object and class structuring
criteria are applied to this system. Section 19.5 describes the design of the state machines
for controlling the microwave oven. Section 19.6 describes how dynamic interaction
modeling is used to develop sequence diagrams from the use cases. Section 19.7 describes
the design modeling for the microwave oven software system, which is designed as a
concurrent component-based software architecture based on architectural structure and
communication patterns. Section 19.8 describes the performance analysis of the real-time
design. Section 19.9 describes the design of components, interfaces, and connectors of the
component-based software architecture. Section 19.10 describes detailed software design
and Section 19.11 describes system deployment.
19.1 Problem Description
The microwave oven has input buttons for selecting Cooking Time, Start, Minute Plus,
Time of Day, and Cancel, as well as a numeric keypad. It also has a display to show the
cooking time left and time of day. In addition, the oven has a microwave heating element
for cooking the food, a door sensor to sense when the door is open, and a weight sensor to
detect if there is an item in the oven. Cooking is only permitted when the door is closed
and when there is something in the oven. The oven has several actuators. Besides the
heating element, there are light, beeper, and turntable actuators. The microwave oven
displays the cooking time, the time-of-day clock, as well as messages to the user such as
prompts and warning messages.
19.2 Structural Modeling
19.2.1 Conceptual Structural Model
The Microwave Oven Embedded System is modeled as a composite block with the
stereotype «embedded system», which is composed of several blocks including sensors
and actuators. The oven is composed of three input devices: a door sensor, which senses
when the door is opened and closed by the user, a weight sensor to weigh food, and a
keypad for entering user commands. There are five output devices: a heating element for
cooking food, a lamp that is switched on during cooking and when the door is open, a
turntable that turns during cooking, a beeper that beeps when the food is cooked, and an
oven display for displaying information and prompts to the user. There is also a timer
block, namely the real-time timer.
Figure 19.1. Conceptual structural model for the microwave oven embedded system –
SysML block definition diagram.
19.2.2 System Context Model
The system context model, which is also referred to as a system context diagram, is
determined from the structural model of the problem domain. The system context diagram
defines the boundary between the total hardware/software system and the external
environment, which is modeled as external blocks to which the system has to interface.
The context model is depicted on a SysML block definition diagram (Figure 19.2), which
shows the embedded system, the external blocks, and multiplicity of the associations
between the external blocks and the system. The Microwave Oven Embedded System is
modeled as a single composite block and is labelled with the stereotypes «embedded
system» «block». The system context diagram for the Microwave Oven Embedded
System is quite simple, as there is only one external entity, the microwave oven user
(depicted as an actor), which has a one-to-one association with the embedded system
block Microwave Oven Embedded System. The reason is that the embedded system is a
hardware/software system, which contains all the hardware microwave sensors and
actuators, and the physical timer.
Figure 19.2. System context diagram for the microwave oven embedded system –
SysML block definition diagram.
19.2.3 Software System Context Model
The software system context diagram for the software system, namely the Microwave
Oven System, is depicted in Figure 19.3. The user from the system context diagram is
replaced on the software system context diagram by the external input devices through
which the user interacts with the system, the external output devices that are controlled by
the software system, and the external timer that provides timer events for the system.
Figure 19.3. Software context diagram for the microwave oven software system.
Each external block on the software context diagram is depicted with a stereotype
that represents the role of the external device. In this case study, Door Sensor, Weight
Sensor, and Keypad are all external input devices. Heating Element, Lamp,
Turntable, Beeper, and Display are external output devices. There is also an
external timer called Timer. The external blocks all have a one-to-one association with
the software system aggregate block.
19.3 Use Case Modeling
As described in Chapter 6, use case modeling can be applied at the systems or software
engineering level. In the former case, the user is the primary actor, whereas in the latter
case, the various I/O devices are the actors. For this problem, we decide to use a
combination of the systems engineering and software engineering approaches for the use
case model. In particular, from a systems engineering perspective, we consider the user as
an actor and not the various input devices (in particular the door and weight sensors, and
keypad) he or she uses. From a software engineering perspective, the timer is also
considered an actor. This is because the timer plays a very important role in the use case
model as it counts down the cooking time and notifies the system when the cooking time
has elapsed.
The functionality of the microwave oven system is captured by three use cases, Cook
Food, Set Time of Day, and Display Time of Day. The use case model is
depicted on the use case diagram in Figure 19.4. The User is the primary actor for the
Cook Food and Set Time of Day use cases, and a secondary actor for the Display
Time of Day use case. The Timer is the primary actor for the Display Time of Day
use case and a secondary actor for the Cook Food use case.
Figure 19.4. Use case model for the microwave oven software system.
19.3.1 Cook Food Use Case
The Cook Food use case is the primary use case of the system, because the description of
the main and alternative sequences of the use case addresses the different scenarios for
cooking food in the oven. The user is the primary actor because this actor initiates the use
case by opening the door and putting the food in the oven. The timer is a secondary actor
because it counts down the cooking time and notifies the system when the time has
elapsed. In addition, there is a nonfunctional configuration requirement, namely the choice
of display language.
Summary: User puts food in oven, and microwave oven cooks food.
Main sequence:
7. User enters cooking time on the numeric keypad and presses Start.
8. System starts cooking the food, starts the turntable, and switches on the
light.
10. Timer notifies the system when the cooking time has elapsed.
11. System stops cooking the food, switches off the light, stops the turntable,
sounds the beeper, and displays the end message.
14. User removes the food from the oven and closes the door.
15. System switches off the oven light and clears the display.
Alternative sequences:
Step 3: User presses Start when the door is open. System does not start
cooking.
Step 5: User presses Start when the door is closed and the oven is empty.
System does not start cooking.
Step 5: User presses Start when the door is closed and the cooking time is
equal to zero. System does not start cooking.
Step 5: User presses Minute Plus, which results in the system adding one
minute to the cooking time. If the cooking time was previously zero, System starts
cooking, starts the timer, starts the turntable, and switches on the light.
Step 7: User opens door before pressing the Start button. System switches on
the light.
Step 9: User presses Minute Plus, which results in the system adding one
minute to the cooking time.
Step 9: User opens door during cooking. System stops cooking, stops the
turntable, and stops the timer. The user closes the door (system then switches off
the light) and presses Start; System resumes cooking, resumes the timer, starts the
turntable, and switches on the light.
Step 9: User presses Cancel. System stops cooking, stops the timer, switches
off the light, and stops the turntable. User may press Start to resume cooking.
Alternatively, user may press Cancel again; system then cancels timer and clears
display.
Configuration requirement:
19.3.2 Set Time of Day Use Case
Actor: User.
Main sequence:
3. User enters the time of day (in hours and minutes) on the numeric keypad.
Alternative sequences:
Lines 1, 3: If the oven is busy, the system will not accept the user input.
Line 5: The user may press Cancel if the incorrect time was entered. The
system clears the display.
Configuration requirement:
19.3.3 Display Time of Day Use Case
Precondition: TOD clock has been set (by Set Time of Day use case).
Main sequence:
2. System increments TOD clock every second, adjusting for minutes and
hours.
Postcondition: TOD clock has been updated (every second) and time of day
displayed (every minute).
19.4 Object and Class Structuring
The next step is to determine the software classes and objects needed to realize the use
cases. The software system context diagram for an embedded system also helps with this
step as the type of each external device helps determine the software class that needs to
interface to it. The software classes are primarily determined by consideration of the Cook
Food use case. The classes are categorized according to the object and class structuring
criteria. As described in Chapter 8, it is assumed that all classes except for entity classes
are concurrent and are therefore modeled as active (i.e., concurrent) classes.
The software input classes are determined by consideration of the external device
classes on the software system context diagram. In this case study, Door Sensor Input,
Weight Sensor Input, and Keypad Input are all software input classes that
communicate with the corresponding external device classes. Heating Element
Output, Lamp Output, Turntable Output, Beeper Output, and Oven
Display Output are software output classes that communicate with the corresponding
external output devices. Clock is an external timer that appears on the context diagram. A
software timer object, namely Oven Timer, receives timer events from Clock. Oven
Timer needs to keep track of the cooking time remaining and when the cooking time
expires, as well as the time of day.
There is also a need for an entity class to store microwave oven data, such as the
cooking time, which is called Oven Data. In addition, because there is a need to provide
display prompts to the user, configurable in different languages, the decision is made to
separate the textual prompts from the Oven Display Output object. The prompts are
stored in an entity class called Oven Prompts. Finally, because of the complex
sequencing and control required for the oven, a state dependent control class is required –
Microwave Oven Control – which executes the state machine for the oven. The
software classes are therefore categorized as follows:
Input classes:
Door Sensor Input
Weight Sensor Input
Keypad Input
Output classes:
Heating Element Output
Lamp Output
Turntable Output
Beeper Output
Oven Display Output
Timer classes:
Oven Timer (it should be noted that this class is a timer because it is
activated by timer events from the hardware timer, but it is also state
dependent and is therefore designed using a state machine as described in the
next section).
State dependent control classes:
Microwave Oven Control
Entity classes:
Oven Data
Oven Prompts
The software classes are depicted on a class diagram, as shown in Figure 19.5.
Figure 19.5. Software classes in the microwave oven software system.
19.5 Dynamic State Machine Modeling
This section describes the state machines for the Microwave Oven Control System. Two
state machines are developed, one for Microwave Oven Control and the other for
Oven Timer. Chapter 7 describes how the Microwave Oven Control state machine
can be developed from the Cook Food use case.
19.5.1 State Machine Model for Microwave Oven Control
The state machine for Microwave Oven Control (Figure 19.6) is composed of two
orthogonal finite state machines. One is Microwave Oven Sequencing (which is
decomposed into substates as shown in Figure 19.7); the other is Cooking Time
Condition, which consists of two sequential substates: Zero Time and Time
Remaining. The reason for this design is to explicitly model the time condition, without
which Microwave Oven Control would be a lot more complicated. Thus, the Zero
Time and Time Remaining substates of Cooking Time Condition are guard
conditions on the Microwave Oven Sequencing state machine (Figure 19.6). The
sequence numbers for the main Cook Food scenario are also shown on the figures. This
state machine is also used as an example in Chapter 7.
Figure 19.6. State machine for Microwave Oven Control: top-level state machine.
Figure 19.7. State machine for Microwave Oven Control: decomposition of the
Microwave Oven Sequencing Composite State.
Door Shut. This is the initial state, in which the oven is idle with the door shut
and there is no food in the oven.
Door Open. In this state the door is open and there is no food in the oven.
Door Open with Item. This state is entered after an item has been placed in the
oven.
Door Shut with Item. This state is entered after the door has been closed with
an item in the oven. This state is a composite state consisting of the following
substates (see Figure 19.8):
Waiting for User. Waiting for user to press the Cooking Time button.
Waiting for Cooking Time. Waiting for user to enter the cooking time.
Ready to Cook. The cooking time has been entered; the oven is ready to
start cooking when the user presses the Start button.
Because of the effect of opening and closing the door, the substates of the Door
Shut with Item composite state are entered via a history state H, as described in
Chapter 7. This mechanism is used to ensure that when the door is opened (e.g.,
while in the Waiting for Cooking Time substate) and then closed again, the
previously active substate (in this example Waiting for Cooking Time) is
reentered.
Cooking. The food is cooking. This state is entered from the Ready to Cook
state when the Start button is pressed. This state is exited if the timer expires, the
door is opened, or Cancel is pressed.
Figure 19.8. State machine for Microwave Oven Control: decomposition of the
Door Shut with Item composite state.
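A minimal sketch of the Microwave Oven Sequencing states and the main Cook Food transitions follows; the guard conditions on the cooking time and the substates of Door Shut with Item are omitted, and the event and state names are illustrative.

```python
from enum import Enum, auto

class OvenState(Enum):
    DOOR_SHUT = auto()            # initial state: door shut, oven empty
    DOOR_OPEN = auto()
    DOOR_OPEN_WITH_ITEM = auto()
    DOOR_SHUT_WITH_ITEM = auto()  # composite state (Figure 19.8)
    COOKING = auto()

# Transitions for the main Cook Food scenario; the Time Remaining guard
# on Start and the history-state mechanism are omitted for brevity.
TRANSITIONS = {
    (OvenState.DOOR_SHUT, "Door Opened"): OvenState.DOOR_OPEN,
    (OvenState.DOOR_OPEN, "Item Placed"): OvenState.DOOR_OPEN_WITH_ITEM,
    (OvenState.DOOR_OPEN_WITH_ITEM, "Door Closed"): OvenState.DOOR_SHUT_WITH_ITEM,
    (OvenState.DOOR_SHUT_WITH_ITEM, "Start"): OvenState.COOKING,
    (OvenState.COOKING, "Timer Expired"): OvenState.DOOR_SHUT_WITH_ITEM,
    (OvenState.COOKING, "Door Opened"): OvenState.DOOR_OPEN_WITH_ITEM,
}

def handle(state, event):
    # Events with no transition from the current state are ignored.
    return TRANSITIONS.get((state, event), state)
```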
19.5.2 State Machines for Oven Timer and Cooking Timer
Timing decisions in the microwave oven are state dependent, and for this reason the Oven
Timer object is designed to contain a state machine. Because two different times need to
be controlled in the microwave oven, the Oven Timer object is composed of two
orthogonal timer state machines, one to keep track of cooking time (after cooking has
started) and the other to keep track of the time of day. For this reason, the Oven Timer is
designed as an orthogonal state machine with two orthogonal regions, one for the
Cooking Timer state machine and the other for the TOD Timer, as depicted in Figure
19.9. This section describes the Cooking Timer state machine, and the TOD Timer state
machine is described in Section 19.6.2.
Because time plays such an important and wide-ranging role in the operation of the
microwave oven, it is advantageous to consider the different states that the oven timer
needs to go through. The state machine for Cooking Timer has the following states for
cooking food (Figure 19.10):
Cooking Time Idle. This is the initial state, in which the oven is idle.
Cooking Food. The timer is keeping track of the cooking time. This state is
entered when the timer is started.
Updating Cooking Time. This state is entered every time a timer event is
received, which is every second. It is an interim state from which either Cooking
Food is reentered if the timer has not yet expired or Cooking Time Idle is
entered if the timer has expired.
Figure 19.9. State machine for Oven Timer.
The sequence numbers on the state transitions of the Cooking Timer state machine
(depicted on Figure 19.10) correspond to the Cook Food scenario described in Section
19.6.1, in particular the Start Timer, Timer Event, Time Left, and Finished state
transitions. Additional state transitions for Start Minute and Add Minute correspond to the
Minute Plus scenarios described in the next section.
19.5.3 Impact of Minute Plus Alternative Scenarios
The Minute Plus button on the oven keypad provides a fast way for the user to add a
minute to the cooking time. However, the system behaves differently depending on
whether food is being cooked or not when the button is pressed. This necessitates
considering two alternative scenarios on the state machine and sequence diagrams,
as described in detail in Sections 7.7 and 9.7 respectively.
The impact of the Minute Plus alternative scenarios is state dependent and
results in additional transitions on the Microwave Oven Control state machine (
Figure 19.7). Minute Plus can be pressed when the oven is in the state Door Shut with
Item, in which case the Cooking state is entered and the output action is Start Minute
in addition to the entry actions of the Cooking state. Minute Plus can also be pressed while
the oven is in the Cooking state, in which case the state is not changed and an internal
transition causes the Add Minute action. These output actions are sent to Oven Timer,
as described next.
The impact of the Minute Plus alternative scenarios results in two additional state
transitions on the Cooking Timer state machine, as depicted in Figure 19.10. If the timer
is in the Cooking Time Idle state when Minute Plus is pressed, the input event (sent
by Microwave Oven Control) is Start Minute, and the timer transitions to the
Cooking Food state. However, if the state machine is in the Cooking Food state when
Minute Plus is pressed, the input event is Add Minute and the timer transitions to the
Updating Cooking Time state. The actions corresponding to each transition are
depicted in Figure 19.10.
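The Cooking Timer behavior on each one-second timer event can be sketched as below; in the actual design the timer sends Time Left and Timer Expired messages rather than returning values, so this is a simplification.

```python
def on_timer_event(state, cooking_time_left):
    """One tick of the Cooking Timer: from Cooking Food, pass through
    Updating Cooking Time, then either reenter Cooking Food (Time Left)
    or return to Cooking Time Idle (Timer Expired)."""
    if state != "Cooking Food":
        return state, cooking_time_left
    cooking_time_left -= 1          # performed in Updating Cooking Time
    if cooking_time_left > 0:
        return "Cooking Food", cooking_time_left
    return "Cooking Time Idle", 0

def on_minute_plus(state, cooking_time_left):
    # Start Minute (from Cooking Time Idle) or Add Minute (from Cooking
    # Food): either way, one minute is added and cooking (re)starts.
    return "Cooking Food", cooking_time_left + 60
```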
19.6 Dynamic Interaction Modeling
With the dynamic interaction modeling approach, a sequence diagram is developed for
each use case. For state dependent scenarios, the state machines for the state dependent
objects are also developed. This section describes the sequence diagrams developed for
the three use cases depicted in Figure 19.4, namely the Cook Food, Set Time of
Day, and Display Time of Day use cases.
19.6.1 Dynamic Interaction Modeling of Cook Food use case
First consider the Cook Food use case. Because of the amount of detail, three sequence
diagrams are developed to depict the event sequencing among the objects that realize the
use case. The first sequence diagram (Figure 19.11) provides a black box perspective,
which depicts the sequence of interactions between the external input and output devices
with the Microwave Oven System (depicted as a composite concurrent object).
The second and third sequence diagrams (Figures 19.12 and 19.13) depict the interactions
among the software objects that realize the Cook Food use case, in addition to the
external input devices. Because these interactions are state dependent, the scenario is also
shown on the state machines for the state dependent objects: Microwave Oven Control
and Oven Timer (Figures 19.7, 19.8, and 19.10 respectively) as described in the previous
section. For more information on how to develop the object interaction sequence from the
Cook Food use case in conjunction with the Microwave Oven Control state machine,
see the description in Chapter 9, Section 9.7.
Figure 19.11. Sequence diagram for Cook Food use case depicting external input and
output devices interacting with software system.
Figure 19.12. Sequence diagram for Cook Food use case depicting interactions among
software objects.
Figure 19.13. Sequence diagram for Cook Food use case depicting interactions among
software objects (continued).
The following is the sequence of messages for the sequence diagram and state
machines based on the main sequence through the Cook Food use case, as described in
Section 19.3.1. The sequence numbers correspond to the messages on the sequence
diagrams depicted in Figures 19.11 through 19.13, and to the events and actions depicted
on the state machines in Figures 19.7 through 19.10.
1: Door Opened Event. The user opens the door. The external Door
Sensor object sends this input to the Door Sensor Input object.
1.1: Door Opened. Door Sensor Input sends the Door Opened message
to the Microwave Oven Control object, which changes state.
2: Weight Event. The user places an item to be cooked into the oven. The
external Weight Sensor object sends this input to the Weight Sensor
Input object.
2.1: Item Placed. Weight Sensor Input sends the Item Placed
message to the Microwave Oven Control object, which changes state.
3: Door Closed Event. The user closes the door. The external Door Sensor
object sends this input to the Door Sensor Input object.
3.1: Door Closed. Door Sensor Input sends the Door Closed message
to the Microwave Oven Control object, which changes state.
3.2, 3.3: Switch Off. Microwave Oven Control sends Switch Off
message to the Lamp Output object, which in turn sends
the Switch Off message to the external Lamp.
4: Cooking Time Pressed. The user presses the Cooking Time button on the
keypad. The external Keypad object sends this input to the Keypad Input
object.
4.1: Cooking Time Selected. Keypad Input sends the Cooking Time
Selected message to the Microwave Oven Control object, which
changes state.
4.4: Prompt. Oven Prompts returns the text for the Time Prompt
message.
4.5: Time Prompt. Oven Display Output sends the Time Prompt output
to the external Display object.
5: Numeric Key Pressed. The user enters the numeric value of the time on
the keypad. Keypad sends the value of the numeric key(s) input to Keypad
Input.
5.1: Cooking Time Entered. Keypad Input sends the internal value of
each numeric key to Microwave Oven Control.
5.2: Display Cooking Time. Microwave Oven Control sends the value of
each numeric key to Oven Display Output, to ensure that these values are
sent only in the appropriate state.
5.3: Display Time. Oven Display Output shifts the previous digit to the
left and adds the new digit. It then sends the new value of cooking time to the
external Display object.
6: Start Pressed. The user presses the Start button. The external Keypad
object sends this input to the Keypad Input object.
6.1: Start. Keypad Input sends the Start message to Microwave Oven
Control, which changes state.
7: Timer Event. The external Clock object sends a timer event every second
to Oven Timer.
7.1: Decrement Cooking Time. Oven Timer sends the Decrement Cooking
Time message to the Oven Data object, which decrements the cooking time.
7.2: Time Left. After decrementing the cooking time, which is assumed to be
greater than zero at this step of the scenario, Oven Data sends the Time
Left message to Oven Timer.
7.3: Update Cooking Time Display. Oven Timer sends the cooking time
left to Oven Display Output.
7.4: Display Time. Oven Display Output sends the new cooking time
value to the external Display object.
8: Timer Event. The external Clock object sends a timer event every second
to Oven Timer.
8.3: Timer Expired. Oven Timer sends the Timer Expired message to
Microwave Oven Control, which changes state.
8.3a: Display End Prompt. Oven Timer concurrently sends the Display
End Prompt message to Oven Display Output.
8.3a.2: Prompt. Oven Prompts returns the text for the End Prompt
message.
8.3a.3: End Prompt. Oven Display Output sends the End Prompt
message to the external Display object.
8.4: Stop Cooking. As a result of changing state (in step 8.3), Microwave
Oven Control sends the Stop Cooking message to the Heating Element
Output object.
8.4a, 8.4a.1: Beep. Microwave Oven Control sends the Beep message to
the Beeper Output object, which in turn sends this message to the external
Beeper.
8.4b, 8.4b.1: Switch Off. Microwave Oven Control sends the Switch
Off message to the Lamp Output object, which in turn sends this message to
the external Lamp.
8.4c, 8.4c.1: Stop Turning. Microwave Oven Control sends the Stop
Turning message to the Turntable Output object, which in turn sends this
message to the external Turntable.
8.5: Stop Cooking. Heating Element Output sends this message to the
Heating Element object to stop cooking the food.
Because of lack of space, the remaining messages are only shown on the
sequence diagram depicting external objects (Figure 19.11) and as events and
actions on the state machine (Figure 19.7):
9: Door Opened Event. The user opens the door. The external Door
Sensor object sends this input to the Door Sensor Input object.
9.1: Door Opened. Door Sensor Input sends the Door Opened message
to the Microwave Oven Control object, which changes state.
10: Weight Event. The user removes the cooked item from the oven. The
external Weight Sensor object sends this message to the Weight Sensor
Input object.
10.1: Item Removed. Weight Sensor Input sends the Item Removed
message to the Microwave Oven Control object, which changes state.
11: Door Closed Event. The user closes the door. The external Door
Sensor object sends this input to the Door Sensor Input object.
11.1: Door Closed. Door Sensor Input sends the Door Closed message
to the Microwave Oven Control object, which changes state.
11.2, 11.3: Switch Off. Microwave Oven Control sends Switch Off
message to the Lamp Output object, which in turn sends
the Switch Off message to the external Lamp.
19.6.2 Dynamic Modeling for the TOD Clock Use Cases
The TOD Clock use cases are Set Time of Day and Display Time of Day.
Because these are different use cases, it is necessary to determine the objects needed to
support each of them and to develop new sequence diagrams to depict the dynamic
execution of the objects for these use cases.
For the Set Time of Day use case, the objects needed are Keypad Input (to
receive inputs from the TOD Clock button), Microwave Oven Control (because the
time of day can be set only when the oven is idle), Oven Data (to store the current time
of day), Oven Display Output (to display the TOD), and Oven Timer (in particular the
orthogonal state machine within it (see Figure 19.9), the TOD Timer).
For the Display Time of Day use case, the objects needed are Oven Timer (to
receive timer events), Oven Data (to store the time of day that must be incremented), and
Oven Display Output (to display the new time).
Figures 19.14 and 19.15 respectively depict the sequence diagrams for the Set Time
of Day and the Display Time of Day use cases. The following is the sequence of
messages for the sequence diagrams and state machines developed for these use cases.
The sequence numbers correspond to the messages on the sequence diagrams depicted in
Figures 19.14 and 19.15 and to the events and actions depicted on the state machine for
TOD Timer in Figure 19.16 and state machine for the Door Shut composite state in
Figure 19.17, which is in turn a substate of the Microwave Oven Control state
machine depicted in Figure 19.7 and described in Section 19.5.1.
Figure 19.14. Sequence diagram for Set Time of Day use case.
Figure 19.15. Sequence diagram for Display Time of Day use case.
Figure 19.16. State machine for TOD Timer (orthogonal state machine within the Oven
Timer state machine).
Figure 19.17. State machine for Door Shut (substate of Microwave Oven
Control) composite state.
The message sequence for the Set Time of Day use case is as follows:
C1: TOD Clock Key. The user presses the TOD Clock button on the
keypad. The external Keypad object sends this input to the Keypad Input
object.
C1.1: TOD Clock Selected. Keypad Input sends the TOD Clock
Selected message to the Microwave Oven Control object, which
changes state.
C1.2: Prompt for TOD. As a result of changing state, one action is for
Microwave Oven Control to send the Prompt for TOD message to the
Oven Display Output object.
C1.2a: Stop TOD Timer. As a result of changing state, a second concurrent
action is for Microwave Oven Control to send the Stop TOD Timer
message to the TOD Timer object (within Oven Timer).
C1.4: Prompt. Oven Prompts returns the text for the Enter TOD Prompt
message.
C1.5: Enter TOD Prompt. Oven Display Output sends the Enter TOD
Prompt message to the external Display object.
C2: Numeric Key Input. The user enters the numeric value of the time on
the keypad. Keypad sends the value of the numeric key(s) input to Keypad
Input.
C2.1: Time Entered. Keypad Input sends the internal value of each
numeric key to Microwave Oven Control.
C2.2: Display TOD. Microwave Oven Control sends the value of each
numeric key to Oven Display Output, to ensure that these values are sent
only in the appropriate state.
C2.3: Display TOD. Oven Display Output shifts the previous digit to the
left and adds the new digit. It then sends the new time of day to the external
Display.
C3: Start Key. User presses the Start button. The external Keypad object
sends this input to the Keypad Input object.
C3.1: Start. Keypad Input sends the Start message to Microwave Oven
Control, which changes state.
C3.2: Start TOD Timer. As a result of changing state, Microwave Oven
Control notifies TOD Timer (within Oven Timer) to start the TOD timer.
The message sequence for the Display Time of Day use case is as follows:
T1: Timer Event. The external Clock sends a timer event every second to
TOD Timer (within Oven Timer).
T1.1: Increment TOD Clock Time. TOD Timer (within Oven Timer)
sends this message to the Oven Data object, which adds one second to the
time of day.
T1.2: TOD. After incrementing the time of day, Oven Data sends the TOD
message to TOD Timer (within Oven Timer).
T1.3: Update TOD Display. TOD Timer (within Oven Timer) sends the
current time of day to Oven Display Output.
T1.4: Display TOD. Oven Display Output sends the new TOD value to
the external Multi-line Display.
19.6.3 State Machines for TOD Timer and Door Shut
TOD Timer is an orthogonal state machine within the Oven Timer state machine, as
depicted in Figure 19.9. The state machine for TOD Timer, which is depicted in Figure
19.16, has three states:
TOD Idle. This is the initial state, in which the TOD clock is not running.
Displaying TOD. The TOD clock is active. This state is entered when the TOD
clock receives the Start TOD event.
Updating TOD. This is a transient state, which is entered when a Timer Event is
received and results in incrementing the TOD time (stored in Oven Data).
The state machine for the composite state Door Shut, which is a substate
of the Microwave Oven Control state machine, is depicted in Figure 19.17. In the
Door Shut state, the oven is idle with the door shut, and there is no food in the oven.
Setting the TOD clock is allowed in this state. To allow for controlling the TOD clock, the
Door Shut state is a composite state consisting of the following substates:
Idle,
Setting TOD.
These substates are entered via a history state H. Entry via a history state allows a
composite state that has sequential substates to remember the last substate entered and to
return to it when the composite state is reentered. This mechanism is used in the Door
Shut composite state so that when the door is opened (e.g., while in the Setting
TOD substate) and then closed again, the previously active substate (in this example
Setting TOD) is reentered.
Figure 19.18. Oven Data entity class.
19.6.4 Design of Entity Classes
The entity classes, instances of which are depicted in the three sequence diagrams
described earlier, are designed as follows:
a) Oven Data. This entity class contains all the data that needs to be stored for
cooking food and displaying the time of day. This class is designed as one class with
three attributes, which are:
cookingTime (remaining time to cook food). The value of this attribute must
be >= 0 and for safety reasons needs to have an upper bound, which is
nominally set to 20 minutes.
TODvalue (the current time of day).
TODmaxHour (a configuration parameter used to determine whether, after
12:59, the clock should be set to 1:00 or 13:00).
The attributes held by the Oven Data class are depicted in Figure 19.18, which
shows the variable name, the type of variable, the range for the variable, and
permitted values. Configuration parameters, such as TODmaxHour, are depicted as
static variables because once the value of the parameter is set at configuration time, it
cannot be changed. When TODvalue is incremented every minute, this parameter is
checked to determine whether after 12:59, the clock should be set to 1:00 or 13:00.
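(A code sketch of this class is given at the end of this list.)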
b) Oven Prompts. This class is needed because the prompt language is selected at
system configuration time. Each set of language prompts is stored in a separate
subclass, as depicted in Figure 19.19:
Variant subclasses:
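Returning to the Oven Data design above, the following is a minimal class sketch assuming the attribute ranges just described; the method names and the time-of-day representation are assumptions.

```python
class OvenData:
    """Entity class storing the microwave oven data of Figure 19.18."""
    MAX_COOKING_TIME = 20 * 60   # safety upper bound of 20 minutes, in seconds
    TODmaxHour = 12              # static configuration parameter: 12- or
                                 # 24-hour clock rollover; assumed default

    def __init__(self):
        self.cookingTime = 0     # seconds left to cook; 0 <= value <= bound
        self.TODvalue = (12, 0)  # time of day as (hour, minute); assumed form

    def setCookingTime(self, seconds):
        if not 0 <= seconds <= self.MAX_COOKING_TIME:
            raise ValueError("cooking time out of range")
        self.cookingTime = seconds

    def decrementCookingTime(self):
        # Returns the remaining time: > 0 means Time Left, 0 means expired.
        if self.cookingTime > 0:
            self.cookingTime -= 1
        return self.cookingTime
```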
Figure 19.21. Software architecture for the Microwave Oven System: component and
task structuring.
This section describes the mapping from the integrated communication diagram of
Figure 19.20 to the concurrent component-based software architecture depicted in Figure
19.21. There are two composite components, the Microwave Control and Microwave
Display components, which contain tasks and passive objects. All the other components
are simple components designed as tasks.
Control component containing two tasks and a passive object. The Microwave
Control component is the centralized control component for the system. It
contains two concurrent tasks, Microwave Oven Control and Oven Timer,
and the Oven Data passive entity object, as shown in Figure 19.21. All three
internal objects are determined from the integrated communication diagram (see
Figure 19.20). These three objects are grouped into the Microwave Control
component because the overall control of the microwave oven needs both the state
dependent control task Microwave Oven Control and the timer task, Oven
Timer, as well as the entity object Oven Data, which stores essential data.
Microwave Oven Control is a demand-driven state dependent control task and
is hence depicted with the stereotypes «demand» «state dependent control»
«swSchedulableResource». Oven Timer is a software periodic (i.e., timer) task
and is hence depicted with the MARTE stereotypes «timerResource»
«swSchedulableResource». Oven Data is an entity object that is both shared
(hence given the MARTE stereotype «sharedDataComResource») and accessed
mutually exclusively by two tasks (hence given the MARTE stereotype
«sharedMutualExclusionResource»). Thus, the full stereotype depiction for an
entity object, which is a mutually exclusive shared data communication resource,
is «entity» «sharedDataComResource» «sharedMutualExclusionResource».
Synchronized Object Access. This pattern is used for the invocation of operations
on shared passive entity objects accessed by more than one task – in particular, the
Oven Data (see Figure 19.23) entity object. Access to this shared object by the
Microwave Oven Control and Oven Timer tasks must be mutually exclusive.
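A sketch of the Synchronized Object Access pattern applied to Oven Data follows, with Python’s threading.Lock standing in for the mutual exclusion mechanism of the target run-time environment.

```python
import threading

class SharedOvenData:
    """Oven Data guarded by a mutex so that the Microwave Oven Control
    and Oven Timer tasks access it mutually exclusively."""
    def __init__(self):
        self._lock = threading.Lock()
        self._cooking_time = 0

    def set_cooking_time(self, seconds):
        with self._lock:                 # mutual exclusion
            self._cooking_time = seconds

    def decrement_cooking_time(self):
        with self._lock:                 # mutual exclusion
            if self._cooking_time > 0:
                self._cooking_time -= 1
            return self._cooking_time
```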
Figure 19.22. Distributed software architecture for the Microwave Oven System:
message interfaces.
Figure 19.23. Concurrent communication diagram for the Microwave Control
component.
Figure 19.24. Concurrent communication diagram for the Microwave Display
component.
19.8 Performance Analysis of Real-Time Software
Design
This section describes the real-time performance analysis of the Microwave Oven Control
System. The software system is both event driven (because it reacts to external events) and
periodic (because certain events happen on a regular basis). To analyze the performance, it
is necessary to consider the time-critical scenarios using an event sequence analysis for
each scenario with the help of timing diagrams, as described in Chapters 17 and 18.
A time-critical scenario is the Oven Timer counting down the cooking time and
alerting Microwave Oven Control when the cooking time has expired. This event
sequence is fully described in Section 19.6. The seven tasks that participate in this
scenario are depicted in the first column of Table 19.1, with the CPU time Ci depicted in
the second column, and the task execution time depicted in the third column. The
execution time for each task in the event sequence is the sum of its CPU time, context-
switching time, and message communication time (apart from the last task in the event
sequence). The task priorities are depicted in the fourth column. Heating Element
Output is given the highest priority, followed by the three other output tasks, because
they are the most time-critical tasks and can, if necessary, preempt Microwave Oven
Control, which is given the next highest priority, followed by Oven Timer and Oven
Display Output.
Task              Ci (msec)  Execution time (msec)  Priority
Oven Timer        6          7                      6
Lamp Output       5          6                      3
Turntable Output  4          5                      2
Beeper Output     3          4                      4
The Timer Expired event causes a state transition on the internal Microwave
Oven Control state machine from Cooking state to Door Shut with Item state
(Figure 19.7). The effect of the state transition is to trigger four concurrent actions to Stop
Cooking, Stop Turning, Switch Off Light, and Beep. On the timing diagram,
this is depicted as follows: after executing for 6 msec, Microwave Oven Control sends
the Stop Cooking message to Heating Element Component. Because this output
task has a higher priority than Microwave Oven Control, when it receives the
message, it unblocks and preempts Microwave Oven Control. After executing for 5
msec it sends the Stop Cooking command to the external heating element and
terminates.
Microwave Oven Control then resumes execution for 2 msec before sending the
Beep message to Beeper Component. Because it has a higher priority, upon receiving
the message, Beeper Component preempts Microwave Oven Control, executes for 4
msec, sends the Beep command to the external beeper device, and terminates. The same
procedure is then followed with Microwave Oven Control resuming execution and
sending messages to Lamp Component and Turntable Component, which in turn
execute for their allotted times. After Turntable Component completes execution, the
only ready task, Oven Timer, resumes execution and sends a prompt message to Oven
Display Output. As can be seen from Figure 19.25, the total elapsed time for this
scenario is 48 msec.
Figure 19.25. Timing diagram for Microwave Oven System tasks executing on a single
processor system.
As depicted on Figure 19.26, the total elapsed time for this multiprocessor scenario is
23 msec, which is 25 msec less than the single processor scenario. This comparison shows
that there are situations when a multiprocessor system can be used to great advantage, in
particular when there are multiple tasks concurrently executing independent actions.
However, it should be pointed out that due to memory contention, the actual elapsed time
could be longer on a multicore system.
Figure 19.26. Timing diagram for Microwave Oven System tasks executing on a
multiprocessor system.
19.9 Component-Based Software Architecture
19.9.1 Design of Component Interfaces
Each component port is defined in terms of its provided and/or required interfaces.
Some producer components – in particular, the input components – do not provide a
software interface because they receive their inputs directly from the external hardware
input devices. However, they require an interface provided by the control component in
order to send messages to the control component. Figure 19.28 depicts the ports and
required interfaces for the three input components of Figure 19.27: Door Component,
Weight Component, and Keypad Component. Each of the three input components has
the same required interface – IMWControl – which is provided by the Microwave
Control component.
The Microwave Control component has several required ports from which it sends
messages to the provided ports of the five output components depicted in Figure 19.27
(Heating Element Component, Lamp Component, Turntable Component,
Beeper Component, and Microwave Display).
The output components do not require a software interface, because their outputs go
directly to external hardware output devices. However, they need to provide an interface to
receive messages sent by the control component. Figure 19.29 depicts the ports and
provided interfaces for all the output components of the system. Figure 19.29 also shows
the specifications of the interfaces in terms of the operations they provide. Lamp
Component, Turntable Component, and Beeper Component are output components,
each of which has a provided port – for example, PLamp for Lamp Component, which
provides an interface (e.g., ILamp).
The Microwave Display component has a provided port called PDisplay, which in turn provides an interface called IDisplay. This interface specifies four operations: displayPrompt, displayTime, clearScreen, and displayTOD. Figure 19.29 shows the specification of the interface.
Some components, such as control components, need to provide interfaces for the
input components to use and require interfaces that are provided by output components.
The Microwave Control component has several ports – one provided port and five
required ports – as shown in Figure 19.30. Each required port is used to interface to a
different output component and is given the prefix R – for example, RLamp. The provided
port, which is called PMWControl, provides the interface IMWControl, which is required
by the input components. This interface is specified in Figure 19.30. It is kept simple by having only one operation (sendControlRequest), with a parameter for the type of request instead of one operation for each type of request. Designing each control request as a separate operation would make the interface more complicated. The single-operation design also makes it easier to modify the design of the Microwave Control component, since an additional request type requires only a new value for the request type parameter instead of a change to the interface to add a new operation.
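As an illustration, this single-operation interface might be written in Java as follows; the request types shown are assumptions based on the inputs described in this chapter.

// Sketch of IMWControl: one sendControlRequest operation with a
// request-type parameter, rather than one operation per request.
enum MWRequestType {
    DOOR_OPENED, DOOR_CLOSED, ITEM_PLACED, ITEM_REMOVED,
    COOKING_TIME_ENTERED, START_PRESSED, TIMER_EXPIRED
    // a new request type is simply a new value here;
    // the IMWControl interface itself is unchanged
}

public interface IMWControl {
    void sendControlRequest(MWRequestType request);
}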
Figure 19.30. Ports and interfaces of the Microwave Control component.
19.9.2 Design of Components Containing Multiple Objects
The Microwave Control component is designed to contain two concurrent tasks
(Microwave Oven Control and Oven Timer) and a passive entity object (Oven
Data). Because the entity object is passive, as depicted in Figure 19.23, the two
concurrent tasks that access it directly, Microwave Oven Control and Oven Timer,
cannot be deployed to different nodes as separate components. The unit of deployment is
therefore the Microwave Control component, which for this reason is a simple
component with no internal component structuring.
The passive object Oven Data is designed as an information hiding object with two
provided interfaces, one to specify operations relating to updating the cooking time,
ICookingTimeData , and the other to specify operations related to updating the time-of-
day clock, ITODData, as depicted in Figure 19.31.
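In Java terms, the two provided interfaces might be sketched as follows; the operation names other than updateCookingTime are illustrative.

// Sketch of the two provided interfaces of the passive Oven Data object.
interface ICookingTimeData {
    void updateCookingTime(int seconds);
    int readCookingTime();
}

interface ITODData {
    void updateTODClock(int hours, int minutes);
    int readTODHours();
    int readTODMinutes();
}

A single passive OvenData class, such as the synchronized sketch given earlier in this chapter, would then implement both interfaces.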
Figure 19.31. Interfaces of passive objects.
The Microwave Display component is designed to contain one task and one
passive entity object, as shown in Figure 19.24: a concurrent task called Oven Display
Output and a passive entity object called Oven Prompts. As with the Microwave
Control component, because the entity object is passive, the Oven Display Output
task that accesses it directly cannot be deployed to its own node as a separate component.
The unit of deployment is therefore the Microwave Display component, which is thus a
simple component with no internal component structuring.
The Oven Display Output task receives asynchronous messages from its
producers (Figure 19.24). For each message that requires a text prompt to be displayed,
given the prompt ID, Oven Display Output retrieves the appropriate prompt text by
invoking the read operation (Figure 19.24) provided by the passive Oven Prompts
entity object. The Oven Prompts object is designed as an information hiding object with
a provided interface depicted in Figure 19.31.
19.9.3 Design of Connectors
All communication between the components depicted in Figures 19.22 and 19.23 is
asynchronous. This necessitates the design of message queue connectors between the
components as described in Chapter 14. This section describes the design of two message
queue connectors that are used by the Microwave Oven Control component, which is
a simple component designed as a task.
The Microwave Oven Control task (see Figure 19.23), which conceptually
executes the microwave oven state machine, receives asynchronous control request
messages from several producer tasks, as shown in Figure 19.23. These messages are
placed on a message queue by the producers and removed from the queue by the only
consumer, the Microwave Oven Control task, as depicted in Figure 19.32 using the
Oven Control Message Q connector. Each message has an input parameter that holds
the name and contents of the individual control request.
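A message queue connector of this kind can be sketched in Java with a blocking queue. This is only an illustration, reusing the MWRequestType enumeration from the earlier interface sketch; the operation names follow the pseudocode given later in this section.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the Oven Control Message Q connector: several producer
// tasks place messages on the queue; the single consumer, the
// Microwave Oven Control task, removes them.
public class OvenControlMessageQ {
    // Each message holds the name and contents of a control request.
    public record ControlRequest(MWRequestType type, Object contents) {}

    private final BlockingQueue<ControlRequest> queue =
            new LinkedBlockingQueue<>();

    // Called concurrently by the producer tasks.
    public void sendControlRequest(ControlRequest request) {
        queue.add(request);
    }

    // Called only by Microwave Oven Control; blocks while the queue is empty.
    public ControlRequest receiveControlRequest() throws InterruptedException {
        return queue.take();
    }
}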
Door Component is designed as a producer task that sends door opened and door closed control request messages to Microwave Oven Control through a message queue connector called Oven Control Message Q, as depicted in Figure 19.32. However, this connector is inside the Microwave Control component and is therefore accessed via the required port RMWControl of Door Component, which invokes the sendControlRequest operation provided by the IMWControl interface. The task event sequencing logic for Door Component therefore amounts to waiting for a door interrupt and sending the corresponding control request.
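A hedged Java sketch of this producer behavior, reusing the IMWControl interface and MWRequestType enumeration from the earlier sketches, might look as follows; the interrupt wait is a hardware-specific placeholder.

// Sketch of the Door Component producer task, awakened by door
// interrupts; it sends control requests through its required port
// RMWControl, which is typed by the IMWControl interface.
public class DoorComponent implements Runnable {
    private final IMWControl mwControl;   // required port RMWControl

    public DoorComponent(IMWControl mwControl) {
        this.mwControl = mwControl;
    }

    @Override
    public void run() {
        while (true) {
            // Block until the door sensor raises an interrupt (stubbed here).
            boolean opened = awaitDoorInterrupt();
            mwControl.sendControlRequest(opened ? MWRequestType.DOOR_OPENED
                                                : MWRequestType.DOOR_CLOSED);
        }
    }

    private boolean awaitDoorInterrupt() {
        return true;   // placeholder for the hardware-specific wait
    }
}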
The task event sequencing logic for the Microwave Oven Control task is as follows. Note that the actions are determined by the Microwave Oven Control state machine (depicted in Figures 19.7 and 19.8), which is encapsulated as a state transition table inside the MOCStateMachine object, as described in Chapter 14. Note also that Microwave Oven Control receives control request messages from its four producers through the Oven Control Message Q connector and sends timer messages to Oven Timer through the Oven Timer Message Q connector; both these connectors are inside the Microwave Control component. However, Microwave Oven Control sends messages to the output components, such as Heating Element Component, by invoking an operation in the Microwave Control component's required port interface for connecting to that component (depicted in Figures 19.27 and 19.30). For example, to start cooking food, Microwave Oven Control invokes the startCooking operation (from the IHeatingElement interface provided by Heating Element Component in Figure 19.29) through the required RHeater port (see Figure 19.30), as described next.
loop
—Messages from all senders are received on Oven Control Message Q
OvenControlMessageQ.receiveControlRequest (out controlEvent);
—Extract the event name and any message parameters
newEvent = controlEvent;
—Assume state machine is encapsulated in object MOCStateMachine;
—Given the incoming event, lookup state transition table;
—change state if required; return action to be performed;
MOCStateMachine.processEvent (in newEvent, out action);
—Execute state dependent action(s) given by MOC state machine;
case action of
Start Actions:
OvenTimerMessageQ.sendTimerRequest (in startTimer);
RHeater.startCooking ();
RLamp.switchOn ();
RTurntable.startTurning ();
exit;
Timer Expired Actions:
RHeater.stopCooking ();
RTurntable.stopTurning ();
RLamp.switchOff ();
RBeeper.beep ();
exit;
Door Opened while Cooking Actions:
RHeater.stopCooking ();
RTurntable.stopTurning ();
OvenTimerMessageQ.sendTimerRequest (in stopTimer);
exit;
Switch On Action:
RLamp.switchOn ();
exit;
Switch Off Action:
RLamp.switchOff ();
exit;
Cancel Timer Action:
OvenTimerMessageQ.sendTimerRequest (in cancelTimer);
exit;
Add Minute:
OvenTimerMessageQ.sendTimerRequest (in addMinute);
exit;
Start Minute:
OvenTimerMessageQ.sendTimerRequest (in startMinute);
exit;
Display Cooking Time Actions:
RDisplay.displayTime (in time);
OvenData.updateCookingTime (in time);
exit;
—other actions not shown
end case;
end loop;
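The table lookup performed by MOCStateMachine.processEvent can be sketched in Java as a map from (state, event) pairs to (next state, action) pairs. The sketch below is illustrative, reuses the MWRequestType enumeration from earlier, and shows only the Timer Expired transition described in Section 19.8; the state and action names are assumptions.

import java.util.HashMap;
import java.util.Map;

// Sketch of the microwave oven state machine encapsulated as a state
// transition table, as assumed by the pseudocode above.
public class MOCStateMachine {
    public enum State  { DOOR_SHUT_WITH_ITEM, COOKING /* other states omitted */ }
    public enum Action { START_ACTIONS, TIMER_EXPIRED_ACTIONS, NONE /* ... */ }

    private record Key(State state, MWRequestType event) {}
    private record Transition(State next, Action action) {}

    private final Map<Key, Transition> table = new HashMap<>();
    private State current = State.DOOR_SHUT_WITH_ITEM;

    public MOCStateMachine() {
        // Timer Expired while Cooking: enter Door Shut with Item and
        // perform the four concurrent stop/beep actions (Figure 19.7).
        table.put(new Key(State.COOKING, MWRequestType.TIMER_EXPIRED),
                  new Transition(State.DOOR_SHUT_WITH_ITEM,
                                 Action.TIMER_EXPIRED_ACTIONS));
        // remaining transitions omitted
    }

    // Given the incoming event, look up the table, change state if
    // required, and return the action to be performed.
    public Action processEvent(MWRequestType event) {
        Transition t = table.get(new Key(current, event));
        if (t == null) {
            return Action.NONE;   // the event has no effect in this state
        }
        current = t.next;
        return t.action;
    }
}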
The task event sequencing logic for the Heating Element Component task follows the same pattern. Incoming messages arrive on its message queue, where they are placed by the startCooking and stopCooking operations of the Heating Element Component.
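As a hedged illustration of that logic, the demand driven behavior can be sketched in Java as follows, with the provided operations placing messages on the task's queue and the task body consuming them; the hardware output calls are stubs.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface IHeatingElement {   // provided interface (Figure 19.29)
    void startCooking();
    void stopCooking();
}

// Sketch of the demand driven Heating Element Component task.
public class HeatingElementComponent implements IHeatingElement, Runnable {
    private enum Command { START_COOKING, STOP_COOKING }
    private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();

    // Provided operations place messages on the task's message queue.
    @Override public void startCooking() { queue.add(Command.START_COOKING); }
    @Override public void stopCooking()  { queue.add(Command.STOP_COOKING); }

    // Task body: blocks until a message arrives, then outputs the
    // corresponding command to the external heating element.
    @Override
    public void run() {
        try {
            while (true) {
                switch (queue.take()) {
                    case START_COOKING -> switchHeaterOn();
                    case STOP_COOKING  -> switchHeaterOff();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void switchHeaterOn()  { /* hardware-specific output */ }
    private void switchHeaterOff() { /* hardware-specific output */ }
}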
20
Railroad Crossing Control System Case Study
◈
This chapter describes a case study for a railroad crossing control embedded system. This
software design is for a safety-critical system, in which the raising and lowering of
railroad barriers must be done safely and in a timely manner. As is typical of embedded
systems, the system interfaces with the external environment by means of several sensors
and actuators. It also must send status messages to a Rail Operations Service. Control of
the railroad crossing is state dependent, which necessitates the design of a state machine to
provide overall control of the software system. As the Railroad Crossing Control System
(RXCS) is an embedded system, the design approach benefits from starting with a systems
engineering perspective of the total hardware/software system, the Railroad Crossing
Embedded System.
The problem is described in Section 20.1. Section 20.2 describes the structural
modeling of the system, consisting of the structural model of the problem domain,
followed by the system and software system context models, and the hardware/software
boundary model. Section 20.3 describes the use case model from a software engineering
perspective, describing both the functional and nonfunctional requirements of the safety-
critical system. Section 20.4 describes the dynamic state machine modeling, which is
particularly important to model the state dependent intricacies of this embedded system.
Section 20.5 describes how the object and class structuring criteria are applied to this
system. Section 20.6 describes how dynamic interaction modeling is used to develop
sequence diagrams from the use cases. Section 20.7 describes the design model for the
software system, which is designed as a concurrent software architecture that is based on
software architectural patterns. Section 20.8 describes the performance analysis of the
real-time design executing on single and multiprocessor systems. Section 20.9 describes
the design of the RXCS component-based software architecture that is part of the
distributed Light Rail System described in Chapter 21. Section 20.10 describes system
configuration and deployment.
20.1 Problem Description
A railroad crossing consists of two barriers, each with a flashing warning light and an
audio warning signal. The barriers are normally raised. When a train approaches, the
barriers are lowered, the warning lights start flashing, and the audio warnings are sounded.
When the train departs, the barriers are raised, the warning lights stop flashing, and the
audio warnings are stopped. Since there are two sets of rails, it is possible for two trains to
be at the railroad crossing simultaneously, in which case the barriers are lowered when the
first train arrives and only raised when the second train has departed.
20.2 Structural Modeling
From a structural modeling perspective, four diagrams are developed and depicted on
SysML block definition diagrams. First there is a conceptual static model of the problem
domain, which views the system in its real-world perspective. A structural model is then
developed of the total hardware/software system. From these two diagrams, the system
context block definition diagram is developed depicting the external entities to the total
hardware/software system. Finally, the software system context block definition diagram
is developed depicting the software system and the external entities that interface to it.
20.2.1 Structural Model of the Problem Domain
The conceptual structural model of the problem domain is depicted on a SysML block
definition diagram in Figure 20.1. From a total system perspective, the problem domain
for Railroad Crossing Embedded System consists of the following blocks:
Barrier, which is a physical entity controlled by the system and which consists of
a barrier actuator and a barrier sensor.
Warning Alarm, which consists of Warning Lights and Warning Audio and
which is a physical entity controlled by the system.
Observer, who is a driver, cyclist, or pedestrian who stops at the railroad crossing
and who is a human observer of the system.
Two barriers, which are commanded by the system to move up and down. Each
Barrier is composed of a Barrier Actuator, a Barrier Detection
Sensor, and a Timer.
Barrier Detection Sensor detects when the barrier has been lowered
and raised and sends barrier lowered and barrier raised messages.
Train arrival at and departure from the railroad crossing is detected by two sets of
sensors. Train Sensor is specialized into Arrival Sensor and Departure
Sensor.
Departure Sensor detects when a train has departed from the railroad
crossing.
The Observer (who stops at the railroad crossing), who is an external observer of
the system.
In the system context model, the train is depicted (Figure 20.3) as an external physical
entity, which is detected by the system. The observer, in particular the vehicle driver, is an
external observer of the system. It is worth noting that two of the external blocks on the
system context diagram, namely the train and the observer, do not physically interact with
the system. The arrival and departure of the train are detected by arrival and departure
sensors. The observer is alerted to an imminent train arrival by the closing of the barrier,
the warning lights, and the warning audio alarm.
Figure 20.3. System context model for Railroad Crossing Embedded System.
20.2.4 Software System Context Model
The software system context model for RXCS is depicted on a SysML block definition
diagram in Figure 20.4. As is typical for an embedded system, there are several external
input and output devices, which are depicted by means of SysML blocks. These I/O
devices are part of the embedded hardware/software system and hence not depicted in
Figure 20.3. However, they are external to the software system and therefore need to be
depicted on the software system context model.
Since train arrival and departure are detected by arrival and departure sensors, the
Train external physical entity block on the system context model (Figure 20.3) is
replaced by Arrival Sensor and Departure Sensor external input device blocks on
the software system context model (Figure 20.4). Since the raising and lowering of the
Barrier external physical entity is controlled by an actuator and detected by a sensor, it
is replaced by the Barrier Actuator external output device and the Barrier
Detection Sensor external input device. In addition, there is a Timer to help
determine if there are delays in lowering or raising the barrier. Since the Warning Alarm
external physical entity is activated by switching actuators on and off, it is replaced by the
Warning Light Actuator and Warning Audio Actuator external output devices.
Since the Observer on the system context diagram does not interact with the software
system, it is not needed on the software system context diagram. Finally, the external
Rail Operations Service on the system context diagram is also depicted on the
software system context diagram.
Consider next the multiplicity between the software system and the external devices.
The software system interfaces to two instances of each of the arrival and departure
sensors, one pair for each railroad track, and two barriers. Each barrier consists of a barrier
actuator, barrier detection sensor, barrier timer, warning light actuator, and warning audio
actuator. Thus, as depicted in Figure 20.4, the software system interfaces to two instances
of each external device and one instance of the external system.
Figure 20.4. Software system context model for Railroad Crossing Control System.
20.2.5 Hardware/Software Boundary Model
The specifications of the I/O devices, in particular the three input sensors and three output actuators, are given in Table 20.1. The inputs to the software system from the three input
sensors and the outputs from the software system to the three output actuators are
specified. An example of an input device is the Barrier Detection Sensor, which
sends Barrier Raised and Barrier Lowered input events to the software system. An
example of an output device is the Barrier Actuator, which receives Raise Barrier
and Lower Barrier commands from the software system.
The hardware characteristics of the I/O devices are that all sensors are event driven;
that is, an interrupt is generated when there is an input from one of these devices. The
output devices are passive; that is, they do not generate interrupts.
20.3 Use Case Modeling
Figure 20.5. Use case model for Railroad Crossing Control System.
20.3.1 Arrive at Railroad Crossing Use Case
The Arrive at Railroad Crossing use case starts with an input from the Arrival Sensor
actor:
Actors:
Main sequence:
1. Arrival Sensor detects the train arrival and informs the system.
3. Barrier Detection Sensor detects that a barrier has been lowered and
informs the system.
Alternative sequences:
Step 2: If there is another train already at the railroad crossing, skip steps 2
and 3.
Step 3: If a barrier lowering timer times out, the system sends a safety
warning message to the Rail Operations Service.
Nonfunctional requirements:
Safety requirements:
Barrier lowering time shall not exceed a pre-specified time. If a barrier
timer times out, the system shall notify Rail Operations Service.
System shall keep track of the number of trains at the railroad crossing,
such that the barrier shall only be lowered when the first train arrives.
Performance requirement: The elapsed time from the detection of the train
arrival to sending the command to the barrier actuator shall not exceed a pre-
specified response time.
Postcondition: The barriers have been closed, the warning lights are flashing and
the audio warning is sounding.
20.3.2 Depart from Railroad Crossing Use Case
The Depart from Railroad Crossing use case starts with an input from the Departure
Sensor actor:
Summary: Train departs from railroad crossing. The system raises the
barriers, switches off the warning lights, and switches off the audio warning alarm.
Actors:
Main sequence:
1. Departure Sensor detects that the train has departed and informs the
system.
3. Barrier Detection Sensor detects that a barrier has been raised and informs
the system.
4. System commands each Warning Light Actuator to switch off the flashing
lights and each Warning Audio Actuator to switch off the audio warning.
Alternative sequences:
Step 2: If there is another train at the railroad crossing, skip steps 2, 3, and 4.
Step 3: If a barrier raising timer times out, the system sends a safety message
to the Rail Operations Service.
Nonfunctional requirements:
Safety requirement:
Barrier raising time must not exceed a pre-specified time. If timer times
out, the Rail Operations Service shall be notified.
System shall keep track of the number of trains at the railroad crossing,
such that, if there is more than one train at the railroad crossing, the
barrier shall not be raised until the last train has departed.
Performance requirement: The elapsed time from the detection of the train
departure to sending the command to the barrier actuator shall not exceed a pre-
specified response time.
Postcondition: The barrier has been raised, the warning lights and the audio
warning signal have been switched off.
20.4 Dynamic State Machine Modeling
The state machine for RXCS is an orthogonal state machine that consists of two orthogonal regions, Barrier Control and Train Count, as depicted in Figure 20.6a. The reason is that barrier control actions depend on whether there are one or two trains in the railroad crossing. The state machine for Barrier Control is depicted in Figure 20.6b and the state machine for Train Count in Figure 20.6c. There are four states in Barrier Control, as follows:
Up – This is the initial state in which the railroad crossing is open. This state is also
entered when the barrier sensor detects that the second barrier has been raised. The
associated transition (into this state) actions are to switch off both the warning
lights and the audio warnings, send a departed message, and cancel the barrier
timer.
Lowering – This state is entered when the first train arrives. The associated
transition actions are to lower the barriers, sound the audio warning signals, switch
on the flashing lights, and start the barrier timers. If the timer elapses while in this
state, which indicates that lowering a physical barrier is too slow, a warning
message is sent.
Down – This state is entered when the barrier sensor detects that the first barrier has
been lowered. The associated transition actions are to send the arrived message and
cancel the barrier timer. There is no change of state if a barrier lowered event
indicates that the second barrier has been lowered or a timer elapsed event
indicates that lowering the second physical barrier is too slow.
Raising – This state is entered when the last train has departed. The associated
transition actions are to raise the barrier and to start the barrier timers. There is no
change of state if a barrier raised event indicates that the first barrier has been
raised or a timer elapsed event indicates that raising a physical barrier is too slow,
in which case, a warning message is sent.
Figure 20.6. State Machine model for Railroad Crossing Control System.
Since it is possible for two trains to be passing the railroad crossing at the same time,
it is vital to ensure that the barrier is not raised until the second train has left. It is therefore
necessary to keep track of the number of trains at the railroad crossing, so that the barrier
is only lowered when the first train arrives and only raised when the last train leaves. For
this reason, a second orthogonal region is designed to maintain the Train Count, as
shown in Figure 20.6c. There is one state for each train count.
No Trains in Railroad Crossing. This is the initial state, in which there are no trains at the railroad crossing.
One Train in Railroad Crossing. This state is entered when the first train arrives at the railroad crossing, or when the first of two trains leaves it.
Two Trains in Railroad Crossing. This state is entered when the second train arrives at the railroad crossing. When the first of two trains leaves the railroad crossing, the state machine transitions out of this state.
To avoid race conditions in the two orthogonal regions, the Train Arrived and Train
Departed sensor inputs come to the Train Count state machine. The first Train
Arrived input causes a transition from No Trains in Railroad Crossing to One
Train in Railroad Crossing state. The action on this transition is First Train
Arrived. This action is propagated as an input event to the Barrier Control state machine
(Figure 20.6b), which causes the transition from Up state to Lowering state, thereby
triggering the Lower Barrier and related actions. The second Train Arrived event
causes a transition to Two Trains in Railroad Crossing state in the Train Count
state machine but has no effect on the Barrier Control state machine. A similar
approach is used on train departure. The first Train Departed input causes a transition
from Two Trains in Railroad Crossing state to One Train in Railroad
Crossing state in the Train Count state machine but has no effect on the Barrier
Control state machine. The second Train Departed input causes a transition from One
Train in Railroad Crossing state to No Trains in Railroad Crossing state
in the Train Count state machine. The action on this transition is Last Train
Departed, which is propagated as an input event to the Barrier Control state
machine (Figure 20.6b) and causes a transition from Down state to Raising state, thereby
triggering the Raise Barrier and Start Timer actions.
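The division of labor between the two regions can be sketched in Java as follows. This is an illustration only; in the actual design, both regions are encapsulated in the state machine executed by the Railroad Crossing Control task.

// Sketch of the Train Count region. Sensor inputs come here first;
// only the First Train Arrived and Last Train Departed actions are
// propagated to Barrier Control, which avoids race conditions.
interface BarrierControl {
    void firstTrainArrived();   // triggers Up -> Lowering
    void lastTrainDeparted();   // triggers Down -> Raising
}

public class TrainCount {
    private final BarrierControl barrierControl;
    private int trains = 0;     // 0, 1, or 2 trains at the crossing

    public TrainCount(BarrierControl barrierControl) {
        this.barrierControl = barrierControl;
    }

    public void trainArrived() {
        trains++;
        if (trains == 1) {
            barrierControl.firstTrainArrived();
        }
        // a second arrival changes only the Train Count state
    }

    public void trainDeparted() {
        trains--;
        if (trains == 0) {
            barrierControl.lastTrainDeparted();
        }
        // the first of two departures changes only the Train Count state
    }
}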
20.5 Object and Class Structuring
Software class structuring is carried out in preparation for dynamic interaction modeling.
Given that the system to be developed is a real-time embedded system, it is assumed that
all classes, except for entity classes, are concurrent and will therefore be modeled as active
(i.e., concurrent) classes.
The software classes in the system are depicted in Figure 20.7 inside the outer box
that represents the software system. The external blocks that interface to and communicate
with the boundary classes (input, output, and proxy) are also depicted outside the box
representing the software system in Figure 20.7. Because there are two instances of each
of the external sensors and each of the external actuators, there are correspondingly two
instances of each of the software input classes and output classes that interface to these
external devices.
Figure 20.7. Software classes in Railroad Crossing Control System.
20.6 Dynamic Interaction Modeling
Next, the dynamic interaction model is developed to depict the interaction among the
objects that realize the two use cases, Arrive at Railroad Crossing and Depart from
Railroad Crossing. Because of the large number of objects that realize each use case, it is
clearer to show the object interaction sequence on two sequence diagrams for each use
case instead of one, the first depicting interaction between the external objects and the
software system, and the second depicting the interaction among the external input objects
and the software objects. The sequence diagrams depict the realization of the main
sequence of each use case.
20.6.1 Sequence Diagrams for Arrive at Railroad Crossing
The first sequence diagram depicts the interaction of the external objects with the software
system, as shown in Figure 20.8 for Arrive at Railroad Crossing. On this sequence
diagram, there are two external input devices, three external output devices and one
external system in addition to the RXCS software system, which is depicted as one
composite object. This sequence diagram faithfully follows the interaction sequence
described in the Arrive at Railroad Crossing software level use case. The sequence starts
with the arrival input event from the Arrival Sensor external input device (message
#1), which results in the system lowering the barrier, switching on the warning lights, and
switching on the warning audio. When the barrier has been lowered, the Barrier
Detection Sensor sends a Barrier Lowered event to the system (message #2),
which causes the system to send a status message to the external Rail Operations
Service.
Figure 20.8. Sequence diagram for Arrive at Railroad Crossing use case (external
objects).
The second sequence diagram depicts the interaction among the external input
objects and the software objects within the software system, as shown in Figure 20.9 for
Arrive at Railroad Crossing. The first object in this sequence is the external Arrival
Sensor. The interaction sequence (for all messages depicted on Figure 20.9 and messages
to external output objects depicted on Figure 20.8) is described as follows:
1: Arrival Sensor detects train arrival and sends the Arrival Event to
Arrival Sensor Input object.
1.2c: Railroad Crossing Control commands the Timer to start the barrier
lowering timer.
1.2a.1: Warning Light Output sends the Switch On message to the external
Warning Light Actuator (see Figure 20.8).
1.2a.2: Warning Audio Output sends the Switch On message to the external
Warning Audio Actuator (see Figure 20.8).
1.3: Barrier Actuator Output sends the Lower Barrier message to the
external Barrier Actuator (see Figure 20.8).
2: Barrier Detection Sensor detects that the barrier has been lowered and
sends the Barrier Lowered Event to the Barrier Detection Sensor Input
object.
2.3: Rail Operations Proxy sends the train arrival message to the external Rail
Operations Service (see Figure 20.8).
Figure 20.9. Sequence diagram for Arrive at Railroad Crossing use case (external input
objects and software objects).
It should be noted that in Figures 20.8 through 20.11, Railroad Crossing Control
sends concurrent messages (corresponding to concurrent actions on its encapsulated state
machine) such as messages 1.2, 1.2a, 1.2b, and 1.2c. The subsequent message for #1.2 is
#1.3, for #1.2a is #1.2a.1, and so on (see Appendix A for conventions on message
sequence numbering).
20.6.2 Sequence Diagrams for Depart from Railroad Crossing
The Depart from Railroad Crossing interaction sequence is also depicted on two sequence
diagrams. Figure 20.10 depicts the interaction of the external objects with the software
system, which starts with the Departure Event from the external Departure Sensor
(message #1), which results in the system raising the barrier. When the Barrier
Detection Sensor detects that the barrier has been raised, it sends a Barrier Raised
Event (message #2) to the system. The system then switches off the warning lights,
switches off the warning audio signal, and sends a train departed status message to Rail
Operations Service.
Figure 20.10. Sequence diagram for Depart from Railroad Crossing use case (external
objects).
The second sequence diagram depicts the interaction among the external input
objects and the software objects within the software system, as shown in Figure 20.11 for
Depart from Railroad Crossing. The first object in this sequence is the Departure
Sensor. The interaction sequence (for all messages depicted on Figure 20.11 and
messages to external output objects depicted on Figure 20.10) is described as follows:
1.2a: Railroad Crossing Control commands the Timer to start the barrier
raising timer.
1.3: Barrier Actuator Output sends the Raise Barrier message to the
external Barrier Actuator (see Figure 20.10).
2: Barrier Detection Sensor detects the raising of the barrier and sends the
Barrier Raised Event to the Barrier Detection Sensor Input object.
2.3: Warning Light Output sends the switch off message to the Warning Light
Actuator (see Figure 20.10).
2.2a.1: Warning Audio Output sends the switch off message to the Warning
Audio Actuator (see Figure 20.10).
2.2b.1: Rail Operations Proxy sends the train departed message to Rail
Operations Service (see Figure 20.10).
Figure 20.11. Sequence diagram for Depart from Railroad Crossing use case (external
input objects and software objects).
20.7 Design Modeling
The software architecture of the Railroad Crossing Control System is designed around a
Centralized Control Pattern. Centralized control is provided by the Railroad Crossing
Control component receiving inputs from the arrival, departure, and barrier detection
sensors via input objects and controlling the external environment by means of the barrier,
warning light, and warning audio actuators via output objects. However, viewed from the
larger distributed Light Rail System (Chapter 21), the Railroad Crossing Control System is
also an example of a Distributed Independent Control pattern, because each instance of the
control system is independent of the other instances and sends status messages to Rail
Operations Service. The initial software architecture is designed by integrating the
use case–based sequence diagrams.
20.7.1 Integrated Communication Diagram
The initial attempt at design modeling is to develop the integrated communication diagram
for the Railroad Crossing Control System, which necessitates the integration of the use
case–based interaction diagrams shown in Figures 20.8 through 20.11. Since these
diagrams are sequence diagrams, the objects and object interactions must be mapped to an
integrated communication diagram as depicted in Figure 20.12. In addition, it is necessary
to address alternative sequences that are not depicted on the sequence diagrams, in
particular for the barrier lowering and raising timers. The integration is quite
straightforward because most of the objects support both use cases. However, the
Arrival Sensor Input object only supports the Arrival use case, and the Departure
Sensor Input object only supports the Departure use case. The integrated
communication diagram is a generic concurrent communication diagram in that it depicts
all possible communications between the objects.
Input tasks. Concurrent input tasks receive inputs from the external environment
and send corresponding messages to the control task. There are three input tasks –
Arrival Sensor Input, Departure Sensor Input, and Barrier
Detection Sensor Input – each of which is designed as an event driven input
task that is awakened by the arrival of the corresponding sensor input. Thus, the
three input tasks are all depicted with the stereotypes «event driven» «input»
«swSchedulableResource».
Output tasks. There are three output objects, each of which is designed as a
demand driven task awakened on demand by the arrival of a message from the
Railroad Crossing Control task and then outputs to an external actuator.
The three demand driven output tasks are Barrier Actuator Output, which
interfaces to the external barrier actuator, Warning Light Output, which
interfaces to the external warning light actuator, and Warning Audio Output,
which interfaces to the external warning audio actuator. The three output tasks are
all depicted with the stereotypes «demand» «output» «swSchedulableResource».
Proxy task. Rail Operations Proxy is the proxy task that sends railroad crossing status messages to the Rail Operations Service. Rail Operations
Proxy is designed as a demand driven task awakened by messages from
Railroad Crossing Control. The proxy task is depicted with the stereotypes
«demand» «proxy» «swSchedulableResource».
Timer task. Barrier Timer is designed as a periodic task awakened by timer events
from the external timer. Its timing is initiated by a start timer message from
Railroad Crossing Control, which can later be cancelled. When it does time
out, it sends a timeout message to Railroad Crossing Control to warn it that
the barrier raising or lowering is slower than expected. The periodic task is
depicted with the stereotypes «timerResource» «swSchedulableResource».
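As an illustration of the timer task's start, cancel, and timeout behavior (the start and cancel requests correspond to the ITimer operations described later in Section 20.9), the following Java sketch stands in a one-shot scheduled executor for the periodic external timer; the names are otherwise illustrative.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of the Barrier Timer task: started and cancelled by Railroad
// Crossing Control; on timeout it sends a timer expired message to
// warn that barrier raising or lowering is slower than expected.
public class BarrierTimer {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable timeoutHandler;  // notifies Railroad Crossing Control
    private ScheduledFuture<?> pending;

    public BarrierTimer(Runnable timeoutHandler) {
        this.timeoutHandler = timeoutHandler;
    }

    public synchronized void startTimer(long timeoutMillis) {
        cancelTimer();   // at most one outstanding timer
        pending = scheduler.schedule(timeoutHandler, timeoutMillis,
                                     TimeUnit.MILLISECONDS);
    }

    public synchronized void cancelTimer() {
        if (pending != null) {
            pending.cancel(false);
            pending = null;
        }
    }
}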
Task interface:
Task inputs:
Task outputs:
The second task interface specification is for the Arrival Sensor Input task:
Name: Arrival Sensor Input.
Task interface:
Task inputs:
Event input: Arrival sensor external interrupt to indicate that train arrival has
been detected.
Task outputs:
The development of task behavior specifications, which describe the event sequencing
logic for these tasks, is left as an exercise for the reader.
20.8 Performance Analysis of Real-Time Software Design
This section describes the real-time performance analysis of the Railroad Crossing Control
System. The system is event driven because it reacts to the external events arriving at the
system. Consequently, a combination of event sequence analysis and timing diagrams, as
described in Chapters 17 and 18, is applied.
20.8.1 Performance Analysis on Single-Processor System

Task    CPU time Ci (msec)   Execution time (msec)   Priority
Timer   3                    4                       7
On the timing diagram, after executing for 6 msec, Railroad Crossing Control
sends the Lower Barrier message to Barrier Actuator Output. When it receives
the message, because Barrier Actuator Output has a higher priority than Railroad
Crossing Control, it unblocks and preempts Railroad Crossing Control. After
executing for 5 msec, Barrier Actuator Output sends the lower barrier command to
the external barrier actuator and terminates. Railroad Crossing Control then
resumes execution for 2 msec before sending the Activate Light message to Warning
Light Output. Because the latter task has a higher priority, upon receiving the message,
Warning Light Output preempts Railroad Crossing Control, executes for 7
msec, sends the activate command to the warning light, and then terminates. The same
procedure is then followed with Railroad Crossing Control resuming execution and
sending messages to Warning Audio Output and Barrier Timer respectively. As
can be seen from Figure 20.14, the total elapsed time for this scenario is 39 msec.
Figure 20.14. Timing diagram for Railroad Crossing Control tasks executing on a
single-processor system.
20.8.2 Performance Analysis on Multiprocessor System
Now consider the same event sequence executing on a multiprocessor system with four
CPUs, as depicted on the timing diagram in Figure 20.15. This scenario starts with
Arrival Sensor Input activated by an interrupt, executing for 5 msec on CPU A and
sending a Train Arrived message. Railroad Crossing Control receives the
message and executes for 6 msec on CPU B before sending a lower barrier message to
Barrier Actuator Output. However, in this multiprocessor scenario, Railroad
Crossing Control continues executing on CPU B in parallel with Barrier
Actuator Output executing on CPU C. After a further 2 msec, Railroad Crossing
Control sends an activate message to Warning Light Output, which then starts
executing on CPU D. Railroad Crossing Control continues executing on CPU B
and, after a further 2 msec, sends an activate message to Warning Audio Output,
which starts executing on CPU A. At this time, there are tasks executing in parallel on all
four CPUs.
After a further 2 msec, Railroad Crossing Control sends a start timer message
to Barrier Timer, which then executes on CPU C, replacing the recently terminated
Barrier Actuator Output. As depicted on Figure 20.15, the total elapsed time for
this multiprocessor scenario is 21 msec, which is 18 msec less than the single-processor
scenario. This comparison shows that there are situations when a multiprocessor system
can be used to significant advantage, in particular when there are multiple tasks
concurrently executing independent actions. However, it should be pointed out that memory contention can negatively affect performance, and therefore increase elapsed times, on multicore systems.
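The 21 msec total can be cross-checked from the narrative above: Arrival Sensor Input executes for 5 msec, Railroad Crossing Control for a further 6 + 2 + 2 + 2 = 12 msec, and, assuming the scenario ends when the 4 msec Barrier Timer execution (from the task parameters above) completes, the elapsed time is 5 + 12 + 4 = 21 msec.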
Figure 20.15. Timing diagram for Railroad Crossing Control tasks executing on a
multiprocessor system.
20.9 Component-Based Software Architecture
The design for the component-based software architecture for the Railroad Crossing Control System is given in Figure 20.16, which depicts a UML composite structure
diagram showing the RXCS components, ports, and connectors. All the components are
concurrent and communicate with other components through ports. The overall
architecture and connectivity among components is initially determined from the RXCS
concurrent communication diagram shown in Figure 20.13. However, there are other
factors to consider concerning the creation of composite components. In particular,
composite components are created such that they could be deployed to execute on
different nodes in a distributed configuration.
20.9.1 Design of Components
RXCS is designed as a composite component that contains six components, four of which
are simple components and two of which are in turn composite components, as depicted in
Figure 20.16. Each simple component has a single thread of control (Arrival Sensor
Input, Departure Sensor Input, Railroad Crossing Control, and Rail
Operations Proxy). These simple components correspond to the concurrent tasks
determined in the concurrent communication diagram of Figure 20.13 and are depicted
with the MARTE stereotype «swSchedulableResource». The two composite components
are Barrier Component (which contains the simple components Barrier
Actuator Output, Barrier Detection Input, and Barrier Timer) and
Warning Alarm Component (which contains the simple components
Warning Light Output and Warning Audio Output). The composite components
are depicted with the component stereotype. This design allows components to be
deployed to be in close proximity to the devices they monitor or control, in particular the
barrier sensor monitoring and barrier actuator control components (within the
Barrier Component) and warning light and audio alarm components (within
the Warning Alarm Component). The tasks Arrival Sensor Input and
Departure Sensor Input are not combined into a component because it is likely that
they will be in physically separate locations, as the arrival sensor is located before the
entrance to the railroad crossing while the departure sensor is located at the exit from the
railroad crossing.
Figure 20.16. Railroad Crossing Control System component-based software
architecture.
In Figure 20.16, Railroad Crossing Control, which executes the state machine, has one provided port PRXControl, through which it receives all incoming messages from its producers, namely Arrival Sensor Input (trainArrived), Departure Sensor Input (trainDeparted), Barrier Detection Input (barrierRaised, barrierLowered), and Barrier Timer (timerExpired). In this way, Railroad Crossing Control receives all incoming messages on a FIFO basis.
Railroad Crossing Control also has five required ports through which it
communicates with Rail Operations Proxy, Barrier Component (in
particular the internal Barrier Actuator Output and Barrier Timer
components), and Warning Alarm (in particular the internal Warning
Light Output and Warning Audio Output simple components). For example, the
RLight and RAudio required ports of Railroad Crossing Control are respectively
connected to the PLight and PAudio ports of the Warning Alarm composite
component.
It should be noted that delegation connectors join the RRXCtrl ports of the Barrier
Detection Input and Barrier Timer internal components to the port of the same
name in the composite component Barrier Component. Note also that delegation
connectors join the PLight and PAudio ports of the composite component Warning
Alarm respectively to the ports of the same name in the two internal components
Warning Light Output and Warning Audio Output. This means that the outer PLight port forwards the messages it receives to the inner PLight port. The two ports have the same name because they provide the same interface.
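In code, a delegation connector amounts to the composite component forwarding calls received on its outer port to the identically typed port of the inner component. A minimal Java sketch, with the ILight operations as specified in Figure 20.19 and the remaining names illustrative:

// Sketch of delegation: the Warning Alarm composite component exposes
// PLight by forwarding to the provided interface of its internal
// Warning Light Output component.
interface ILight {
    void activate();
    void deactivate();
}

public class WarningAlarmComponent implements ILight {
    private final ILight warningLightOutput;   // inner component's PLight port

    public WarningAlarmComponent(ILight warningLightOutput) {
        this.warningLightOutput = warningLightOutput;
    }

    // The outer PLight port forwards each message to the inner PLight port.
    @Override public void activate()   { warningLightOutput.activate(); }
    @Override public void deactivate() { warningLightOutput.deactivate(); }
}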
20.9.2 Design of Component Interfaces
Each component port is defined in terms of its provided and/or required interfaces. Some
producer components – in particular, the input components – do not provide a software
interface because they receive their inputs directly from the external hardware input
device. However, they require an interface (which is provided by the control component)
in order to send messages to the control component. Figure 20.17 depicts the port and
required interfaces for the input components Arrival Sensor Input and Departure
Sensor Input. These input components, as well as the Barrier Component (and the internal Barrier Detection Input and Barrier Timer components it contains), have the same required interface – IRXControl – which is provided by the Railroad Crossing Control component.
Control components need to provide interfaces for the producer components to use
and require interfaces that are provided by output components. The Railroad Crossing
Control component (see Figure 20.16 and 20.18), which conceptually executes the
Railroad Crossing Control state machine, receives asynchronous control request
messages from its producer components, as depicted in Figure 20.13. The provided
interface IRXControl, which is specified in Figure 20.18, is kept simple by having only one operation (sendControlRequest), which has an input parameter (eventRX) that holds the name and contents of the individual message. Having each control request as a separate operation would make the interface more complicated because it would consist of five operations instead of one. Furthermore, with separate operations, evolution of the system would require the addition or deletion of operations, rather than simply adding a new value to the eventRX input parameter of the sendControlRequest operation and leaving the interface unchanged.
Figure 20.18. Component ports and interfaces for control and proxy components.
Figure 20.18 also depicts the port and provided interface for the Rail Operations
Proxy. The provided interface IOps is a required interface of the
Railroad Crossing Control component.
Figure 20.19 depicts the ports and provided interfaces for the Warning Light
Output and Warning Audio Output components, which are simple components
contained within the Warning Alarm component, which is also shown. Figure 20.19 also
shows the specifications of the component interfaces in terms of the operations they
provide. Each output component provides an interface to receive messages sent by the
control component. However, it does not require a software interface because it sends
outputs directly to an external hardware output device.
Figure 20.19. Component ports and interfaces for output and composite components.
PLight for Warning Light Output, which provides the interface ILight.
The provided operations are to activate and deactivate the warning lights.
PAudio for Warning Audio Output, which provides the interface IAudio.
The provided operations are to activate and deactivate the audio warning device.
The Barrier Component composite component and simple components it contains are
depicted in Figure 20.20. The ports and interfaces of the periodic timer inner component
are also shown in Figure 20.20. The encapsulated Barrier Timer simple component has
one provided port with a provided interface and one required port with a required
interface. The provided interface is ITimer, which allows it to receive start and cancel
timer requests from Railroad Crossing Control via a delegation connector from
the composite Barrier Component. The required interface is IRXControl, which
allows Barrier Timer to send timer expired messages to Railroad Crossing
Control via a delegation connector to the composite Barrier Component. The
Barrier Detection Input inner component communicates with Railroad
Crossing Control via the IRXControl required interface in the same way. The
Barrier Actuator Output inner component has a port PBarrier, which provides the
interface IBarrier. The provided operations are to raise and lower the barrier.
Figure 20.20. Component ports and interfaces for Barrier composite component and
simple components it contains.
20.10 System Configuration and Deployment
During system configuration and deployment, the components are deployed to execute on
different nodes in a distributed configuration. An example of system deployment is shown
on the deployment diagram in Figure 20.21, in which there are five nodes connected by a
local area network. The Barrier Component, Warning Alarm Component, Arrival
Sensor Input, and Departure Sensor Input components are all deployed to
separate nodes. This is so that each software component can be in close proximity to the
hardware sensor from which it receives inputs and/or the hardware actuator(s) to which it
sends outputs. Thus Barrier Component is near the barrier actuator and the barrier
detection sensor; Warning Alarm Component is near the warning light and audio
actuators; Arrival Sensor Input and Departure Sensor Input are respectively
near the arrival and departure sensors. The remaining components, the Railroad
Crossing Control and Rail Operations Proxy components, are deployed to the
same node.
The performance requirement that the elapsed times from detection of train
arrival/departure to sending a command to the barrier actuator do not exceed
predetermined response times is addressed by the performance analysis in Section 20.8.
The safety requirement that the system keep track of the number of trains at the railroad
crossing, such that the barrier is lowered when the first train arrives and raised when the
last train departs, is addressed by the design of the Railroad Crossing Control state
machine. The safety requirement that the system measure the barrier lowering and raising
times and raise a warning if predetermined elapsed times are exceeded is addressed by the
design of the Barrier Timer object and the Railroad Crossing Control state machine.
Figure 20.21. Example of component deployment for Railroad Crossing Control
System.
21
Light Rail Control System Case Study
◈
This chapter describes a case study for an embedded Light Rail Control System. This
design is for a safety-critical system, in which the automated control of driverless trains
must be done safely and in a timely manner. As is typical of embedded systems, the
system interfaces with the external environment by means of several sensors and
actuators. Control of each train is state dependent, which necessitates the design of a state
machine to provide control of the train. As this system is an embedded system, the design
approach benefits from starting with a systems engineering perspective of the total
hardware/software system before the real-time software modeling and design. The Light
Rail Embedded System refers to the total hardware/software system, while Light Rail
Control System refers to the software system.
The problem is described in Section 21.1. Section 21.2 describes the structural
modeling of the system, consisting of the structural model of the problem domain,
followed by the system and software system context models. Section 21.3 describes the
use case model from a software engineering perspective, describing both the functional
and nonfunctional requirements of the safety-critical system. Section 21.4 describes the
dynamic state machine modeling, which is particularly important to model the state
dependent intricacies of this embedded system. Section 21.5 describes how the system
structuring criteria are applied to this system, followed by Section 21.6, which describes
how the object and class structuring criteria are applied to each subsystem. Section 21.7
describes how dynamic interaction modeling is used to develop sequence diagrams from
the use cases. Section 21.8 provides an overview of the design model for the software
system. Section 21.9 describes developing integrated communication diagrams, which
leads to the design of the distributed software architecture in Section 21.10, and the
component-based software architecture in Section 21.11. Section 21.12 describes system
configuration and deployment.
21.1 Problem Description
The Light Rail Control System consists of several trains that travel between stations
along a track in both directions with a semi-circular loop at each end. Trains have to stop
at each station. If a proximity sensor detects a hazard ahead, the train decelerates before
stopping. If taken out of service, a train stops at the next station to discharge passengers,
after which it goes out of service with doors closed.
Door actuators. Each door actuator is controlled by commands to open and close
a door.
Door sensors. For each door actuator, there is also a door sensor to detect when
the door is open.
Arrival Sensor. Detects when the train arrives at a station; used to stop the train.
A GPS location sensor, which determines the coordinates of the train at regular
intervals.
Train audio devices. Broadcast audio messages to the train passengers, informing
them of arrival at station.
A station display. Displays the next trains in sequence to arrive and expected
times of arrival.
A station audio device. Broadcasts audio messages to the station passengers.
There are several railroad crossings that cross the track, the operation of which is
described in Chapter 20.
The hardware characteristics of the I/O devices are that all sensors except for the
proximity sensor are event driven; that is, an interrupt is generated when there is an input
from one of these devices. The proximity sensor and all output devices are passive.
21.2 Structural Modeling
The static structural model of the problem domain captures the structural entities (modeled
as SysML blocks) and relationships in the Light Rail Embedded System, as depicted in
Figure 21.1. The Light Rail Embedded System is modeled as an embedded system
composite block, which is composed of several Train and Station blocks. The Train
is modeled as an embedded subsystem composite block composed of input and output
device blocks. Thus, there are several output device blocks: one Motor, many Door
Actuators, many Train Displays, and many Train Audio Device blocks. There
are also several train sensors, which are generalized into a Sensor input device block.
The specialized sensor blocks are the Approaching Sensor, Arrival Sensor,
Departure Sensor, Proximity Sensor, Location Sensor, Speed Sensor,
and Door Sensor. Station is also modeled as an embedded subsystem composite
block, composed of Station Display and Station Audio Device blocks. The
Train block has a many-to-many association with the Station block, as any train can
stop at any station. The embedded systems Railroad Crossing System and Wayside
Monitoring System communicate with the Light Rail Embedded System.
Figure 21.1. Conceptual structural model for Light Rail Embedded System.
Next, the system context block diagram is developed for the Light Rail Embedded
System, which models the external entities to the hardware/software system and is
depicted in Figure 21.2. From a system point of view, all sensors and actuators are part of
the system. The external entities are the Train (which is an external physical entity block
that is detected and controlled by the system), Hazard (which is an external physical entity block, such as a train or vehicle ahead, that is detected by the system), the Rail
Operator (an external user block that interacts with the system), the external observer
blocks Train Passenger, Station User, and Rail Operations Observer, and
the external system blocks Railroad Crossing System and Wayside Monitoring
System.
After modeling the system context, the next step is to develop the software system
context block diagram, which depicts in Figure 21.3 the Light Rail Control System as a
software system block that interfaces to several external input and output device blocks,
two external system blocks, and an external user block. From the conceptual static model
in Figure 21.1, the input and output device blocks that are part of the Train and Station
composite blocks are in fact external input and output devices to the Light Rail Control
System. The Train and Hazard external physical entities in the system context block
diagram (Figure 21.2) are represented on the software context block diagram by the
sensors that detect them and/or actuators that control them. Thus, the train’s arrival and
departure are detected by Approaching Sensor, which detects that the train is
approaching a station, Arrival Sensor, which detects the train’s imminent arrival at a
station, and Departure Sensor, which detects that the train has left the station.
The train’s location and speed are measured respectively by a Location Sensor and a
Speed Sensor. Train door status is detected by a Door Sensor. A physical hazard
ahead of the train is detected by a Proximity Sensor. The system is controlled by
outputs to the Motor Actuator and many Door Actuators. The external sensors and
actuators are respectively depicted on the software system context block diagram as
external input or output device blocks that interface to the Light Rail Control System.
The external train, station, and rail operations observers in the system block diagram are
replaced respectively by the Train Display, Station Display, and Rail
Operations Display they view and the Train Audio Device and Station Audio
Device they hear. The remaining external blocks carry over from the system context
diagram, namely a human external user, the Rail Operator and two external systems,
Railroad Crossing System and Wayside Monitoring System.
Figure 21.3. Light Rail Control software system context block diagram.
21.3 Use Case Modeling
The next step is to develop the use case model for the Light Rail Control System.
Because this is an embedded system with many external sensors and actuators, it is
desirable to develop a more detailed use case model from a software engineering
perspective, in which there will be many actors. There is one human actor, namely the
Rail Operator, several I/O device actors, and two external system actors. There are
nine input and/or output device actors, which are the Approaching Sensor, Arrival
Sensor, Departure Sensor, Proximity Sensor, Motor, Door Actuator,
Door Sensor, Location Sensor, and Speed Sensor. The input and output actors
correspond to the external input and output device blocks on the software context block
diagram. There is one generalized actor representing the Railroad Media. There are
two external system actors, Railroad Crossing System and Wayside Monitoring
System.
Because the use case model for the Light Rail Control System has many use cases
and actors, it is preferable to structure the use case model into use case packages, which
group together related use cases. Thus, the use cases are grouped into four use case
packages based on their functionality. Because of the number of use cases and actors, each
of the four use case packages with its corresponding use cases and actors is shown on a
separate use case diagram. From the problem definition, the use case packages and use
cases in each package are identified and described next.
21.3.1 Use Case Package for Light Rail Operations
The use cases that address the train arriving at and leaving a station during normal
operation are grouped into a use case package called Light Rail Operations,
as depicted in Figure 21.4:
Arrive at Station. A train arrives at the station. The actors are Approaching
Sensor (primary actor), Arrival Sensor, Motor, and Door Actuator.
Depart from Station. A train leaves the station. The actors are Door Sensor
(primary actor), Departure Sensor, and Motor.
Control Train Operation, which is a high-level use case that includes the
Arrive at Station, Control Train at Station, and Depart from
Station use cases. It describes the sequence of use cases for normal train
operation. The actor for this use case is Railroad Media, which is also an actor
of the inclusion use cases.
The use case descriptions are given next. The Arrive at Station use case starts with
an input from the Approaching Sensor actor.
Use case: Arrive at Station.
Main sequence:
Alternative sequences:
The Control Train at Station use case starts with an input from the Door Sensor
actor.
Use case: Control Train at Station.
Main sequence:
2) After the time interval, System sends Close Doors command to Door Actuator.
Alternative:
Step 2: If there is a hazard ahead, train remains at station with doors open
until the hazard has been removed.
The Depart from Station use case starts with an input from the Door Sensor actor.
Use case: Depart from Station.
Main sequence:
3) Departure Sensor detects that the train has left the station and notifies
System.
6) When train has reached cruising speed, System commands Motor to stop
accelerating and start cruising at a constant speed.
Alternative sequences:
Control Train Operation is a high-level use case that includes three use cases.
Use case: Control Train Operation.
Dependency: Includes Arrive at Station use case, Control Train at Station use
case, Depart from Station use case.
Main sequence:
Figure 21.4. Light Rail Control System actors and use cases: Light Rail Operations use
case package.
21.3.2 Use Case Package for Train Dispatch and Suspend
Use cases that address the train going out of service and going back into service are
grouped into the Train Dispatch and Suspend use case package, which also includes use
cases from the Light Rail Operations use case package, as depicted in Figure 21.5.
A train is dispatched into service using the Dispatch Train use case and then continues
normal operation as described in the Control Train Operation use case. It can then
be suspended from service using the Suspend Train use case.
The use case descriptions are given next. The Suspend Train use case starts with an
input from the Rail Operator actor.
Use case: Suspend Train.
Dependency: Includes Arrive at Station use case, Control Train at Station use
case.
Main sequence:
Alternative sequence:
Step 3: If the train is already at the station with doors open, then, after the
time interval, the system sends a close doors message to the door actuator and
resumes at step 5.
The Dispatch Train use case starts with an input from the Rail Operator actor.
Use case: Dispatch Train.
Dependency: Includes Control Train at Station and Depart from Station use
cases.
Main sequence:
Detect Hazard Presence. When a hazard ahead is detected, the train slows
down to a stop. This use case is an extension use case that extends the Arrive at
Station and Depart from Station use cases when they encounter a hazard.
Detect Hazard Removal. When the hazard is removed, the train starts moving.
This use case is an extension use case that extends the Arrive at Station and
Depart from Station use cases when a hazard they previously encountered is
removed.
The use case descriptions are given next. The Detect Hazard Presence use case starts
with an input from the Proximity Sensor actor.
Use case: Detect Hazard Presence.
Dependency: Extends Arrive at Station and Depart from Station use cases.
Main sequence:
Alternative sequence:
Step 3: If proximity changes to clear (>100 meters) before train has stopped,
system commands motor to start accelerating. Exit use case and return to base use
case.
The Detect Hazard Removal use case starts with an input from the Proximity
Sensor actor.
Use case: Detect Hazard Removal.
Dependency: Extends Arrive at Station and Depart from Station use cases.
Main sequence:
Monitor Train Location. GPS Location Sensor actor informs train of its
current location.
Monitor Train Speed. Speed Sensor actor informs train of its current speed.
In addition, the actor Railroad Media is specialized into the five actors that receive
railroad status messages, as depicted in Figure 21.8. These are Train Display, Train
Audio Device, Station Display, Station Audio Device, and Rail
Operations Display.
Figure 21.7. Light Rail Control System: Railroad Monitoring use case package.
Figure 21.8. Light Rail Control System: Railroad Media generalized and specialized
actors.
The use case descriptions are given next. The Monitor Train Speed use case
starts with an input from the Speed Sensor actor.
Use case: Monitor Train Speed.
Main sequence:
2) System converts current speed to engineering units and stores the current
value.
The Monitor Train Location use case starts with an input from the GPS Location
Sensor actor.
Use case: Monitor Train Location.
Main sequence:
2) System uses train location and current speed to estimate arrival times at
train stations.
Postcondition: Location and speed information has been stored and distributed.
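The conversion and estimation steps in these monitoring use cases are simple arithmetic. The following minimal C++ sketch is for illustration only; the function names, the linear sensor scaling, and the constant-speed arrival estimate are assumptions of the sketch rather than part of the case study.

    #include <cmath>

    // Hypothetical conversion of a raw speed sensor reading to engineering
    // units (meters/second), assuming a linear sensor characteristic.
    double toEngineeringUnits(int rawReading, double scale, double offset) {
        return rawReading * scale + offset;
    }

    // Hypothetical arrival time estimate (seconds from now), assuming the
    // train continues toward the station at its current speed.
    double estimateArrivalTime(double trainPositionM, double stationPositionM,
                               double currentSpeedMps) {
        if (currentSpeedMps <= 0.0) return INFINITY;  // train stopped: no estimate
        return (stationPositionM - trainPositionM) / currentSpeedMps;
    }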
The Monitor Rail Operations use case starts with an input from the Railroad Crossing
System or Wayside Monitoring System actor.
Use case: Monitor Rail Operations.
Main sequence:
The state machine for the Control Train at Station use case starts with the
train in Doors Opening state, as depicted in Figure 21.10. The Opened event
(originating from the Door Sensor) causes the state machine to transition to Doors
Open state, with a resulting Start Timer transition action (to the local timer). After the
timeout event, and assuming no hazards ahead (i.e., [All Clear] guard condition is
True), the state machine transitions from Doors Open state to Doors Closing
state, with a resulting exit action to send a command to Close Doors (to Door
Actuator) and a transition action to Send Departing message (to Railroad Media).
Figure 21.10. State machine for Control Train at Station use case.
The state machine for the Depart from Station use case starts with the train in
Doors Closing state as depicted in Figure 21.11. The Closed event (originating from
the Door Sensor) causes the state machine to transition to Accelerating state, with a
resulting entry action to send an Accelerate command (to Motor). The Departed event
(originating from the Departure Sensor) causes a self-transition on the Accelerating
state, with the transition action to Send Departed message (to Railroad Media). When
the system detects that the train has reached the cruising speed, the state machine
transitions to Cruising state and sends a Cruise command (to Motor).
Figure 21.11. State machine for Depart from Station use case.
The state machine for the Suspend Train use case starts with the train in Doors
Open state, when the timeout expires. If the train has been commanded to go out of
service (i.e., [Suspending] guard condition is True), the state machine will transition to
Out of Service state, which results in an exit action to Close Doors (sent to Door
Actuator) and a transition action to Send Out of Service message (to Railroad
Media) as depicted in Figure 21.12.
Figure 21.12. State machine for Suspend Train use case.
The state machine for the Dispatch Train use case starts with the train in Out of
Service state when a Dispatch message arrives (from the Rail Operator). The state
machine transitions to Doors Opening state, which results in an entry action to send an
Open Doors command to the Door Actuator and a transition action to Send In
Service message (to Railroad Media), as depicted in Figure 21.13.
Figure 21.13. State machine for Dispatch Train use case.
Figure 21.14. State machine for Detect Hazard Presence use case.
The state machine for the Detect Hazard Removal use case starts with the train in
either Emergency Stopping or Emergency Halt states. If the proximity sensor sends
a Hazard Removed message (as depicted in Figure 21.15), this causes the state machine
to transition to: (a) Approaching state if the train is approaching a station, in which case
the transition actions are to Accelerate Slowly and Send Hazard Removed message
or (b) Accelerating state if the train is not approaching a station, in which case the
actions are to Accelerate and Send Hazard Removed message.
Figure 21.15. State machine for Detect Hazard Removal use case.
21.4.2 Integrated Train Control State Machine
Because the state machine modeling involves seven state dependent use cases, it is
necessary to integrate the partial state machines of these use cases and consider alternative
branches to create an initial integrated Train Control state machine, which is depicted
in Figure 21.16.
The initial integrated state machine is a flat state machine without any hierarchy;
hence, there is an opportunity to design a hierarchical state machine by defining composite
states to represent the major states of the train. It is possible to group certain states in
Figure 21.16 into a composite state. In particular, the Accelerating, Cruising, and
Approaching states can be grouped to become substates of a composite state called In
Motion. The reason is that the Hazard Detected transition from each of the
Accelerating, Cruising, and Approaching substates can be replaced with a Hazard
Detected transition from the In Motion composite state. Similarly, Emergency
Stopping and Emergency Halt can be grouped to become substates of a composite
state called Emergency. The reason is that the Hazard Removed transition from the
Emergency Stopping and Emergency Halt substates can be replaced with a Hazard
Removed transition from the Emergency composite state.
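One common way to realize this economy in code is to test membership of a composite state once, instead of attaching the same transition to each substate. The following minimal C++ sketch illustrates the idea; the type and function names are invented for this sketch, and the entry, exit, and transition actions are omitted.

    enum class Event { HazardDetected, HazardRemoved };
    enum class State { Accelerating, Cruising, Approaching,
                       EmergencyStopping, EmergencyHalt };

    // True if the state is a substate of the In Motion composite state.
    bool inMotion(State s) {
        return s == State::Accelerating || s == State::Cruising ||
               s == State::Approaching;
    }

    // True if the state is a substate of the Emergency composite state.
    bool inEmergency(State s) {
        return s == State::EmergencyStopping || s == State::EmergencyHalt;
    }

    State handleEvent(State s, Event e, bool approachingStation) {
        // Hazard Detected is defined once on the In Motion composite state,
        // replacing three identical transitions on its substates.
        if (inMotion(s) && e == Event::HazardDetected)
            return State::EmergencyStopping;
        // Hazard Removed is defined once on the Emergency composite state.
        if (inEmergency(s) && e == Event::HazardRemoved)
            return approachingStation ? State::Approaching : State::Accelerating;
        return s;  // event not handled in this state
    }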
The composite states and substates of the Train Control hierarchical state machine are
described next and depicted in Figure 21.17. The initial state is Out of Service:
Doors Opening. This state is entered when the train has stopped at a station and
the doors are opening. It is also entered when a train is dispatched into service. On
the state machine, the Open Doors action is shown as an entry action because the
transition to Doors Opening state can arrive from either the Out of Service
state or the Stopping state. It is more concise to depict one entry action on the
state machine instead of transition actions on each of the two incoming state
transitions.
Doors Open. This state is entered when the train doors have completed opening.
There is an action to start a timer on transition into the state. When the timeout
expires, there are three possible transitions. If the Hazard condition is True, this
state is re-entered. If the Suspending condition is True, the state machine
transitions to Out of Service state. If the All Clear condition is True, the
state machine transitions to Doors Closing state. On exit from this state to either
Doors Closing or Out of Service state, there is an exit action to Close
Doors. It should be noted that the All Clear condition is defined in terms of the
Hazard and Suspending conditions using the following Boolean expression:
All Clear = NOT Hazard AND NOT Suspending
Doors Closing. This state is entered when the train doors start closing to satisfy
a request to move to the next station.
In Motion. This is a composite state, which is entered when the train is moving
and consists of the following substates:
Emergency Halt. The train has stopped because of the emergency with
doors closed. This substate is entered from Emergency Stopping substate.
Figure 21.17. Hierarchical state machine for Train Control.
21.5 Subsystem Structuring
As the Light Rail Control System is a large system with many objects, it is necessary to
consider how the system is structured into subsystems. Because this is a distributed
application, the geographical location and aggregation/composition considerations take
precedence. From a geographical perspective, trains and stations are distinct distributed
entities. The conceptual static model in Figure 21.1 shows that there are multiple trains
and multiple stations, each of which is composed of several parts. Thus, trains and stations
can be modeled structurally as geographically distributed subsystems.
Because the primary purpose of the train subsystem is to control the physical train,
the subsystem is named the Train Control Subsystem, of which there is one instance
for each train. There is also a Station Subsystem, of which there is one instance for
each station in the system. This subsystem is an output subsystem, as its main function is
to output train status information to station visual displays and audio devices.
Because the system needs an operator to view train and station status, as well
as to command trains to go into and out of service and to notify stations of delays, a user
interaction subsystem is designed called Rail Operations Interaction. Finally,
Rail Operations Service is a service subsystem of which there is only one instance.
It is independent of the number of trains and stations and is responsible for maintaining
the status of the system, as well as dynamically outputting real-time train and station
statuses on large screens in the rail operations center.
Thus, the Light Rail Control System consists of four subsystems, as depicted in
Figure 21.18. They are the Train Control Subsystem, the Station Subsystem, the
Rail Operations Service subsystem, and the Rail Operations Interaction
subsystem. Starting from the software context diagram depicted in Figure 21.3, Figure
21.18 depicts these four subsystems as well as the external entities to which they interface.
Figure 21.18. Light Rail Control software subsystems.
21.6 Object and Class Structuring
Because this is a real-time embedded system, there are many external devices and
consequently many software boundary classes. The COMET/RTE object and class
structuring criteria are applied to determine the objects and classes in each subsystem. The
behavior of these objects is described in detail in Section 21.7.
All train-related classes, such as the train’s proximity sensor and motor, are part of
the Train Control Subsystem. Boundary classes are determined by considering the
software classes that interface to and communicate with the external entities. Input classes
are needed to receive inputs from the seven external input devices shown in Figure 21.18.
As depicted in Figure 21.19, the corresponding seven input classes, which are all in the
Train Control Subsystem, are Approaching Sensor Input, Arrival Sensor
Input, Departure Sensor Input, Proximity Sensor Input, Door Sensor
Input, Location Sensor Input, and Speed Sensor Input.
Figure 21.19. Input and output classes for Train Control Subsystem.
Next, the output classes that output to the external output devices are determined.
Figure 21.3 shows that there are seven external output devices. Four of the corresponding
output classes are in the Train Control Subsystem, as depicted in Figure 21.19.
These are Door Actuator Output, Motor Output, Train Display Output, and
Train Audio Output.
Now consider the control objects needed by the Train Control Subsystem. A
Train Control object is needed for each train. This must be a state dependent control
object that executes the state machine described in Section 21.4. Since controlling the
speed of the train is an important factor in this system, there needs to be a separate Speed
Adjustment algorithm object, which sends speed commands to the Motor Output
object, which in turn interfaces to the external motor. There also must be a Train Timer
for periodic events, such as the time that train doors need to be kept open at a station.
An entity object is needed to hold Train Data, including the current speed and
location of the train. Because train status needs to be sent to various train and station
objects on a regular basis, a coordinator object, the Train Status Dispatcher, is
designed for this purpose.
Next consider the classes needed by the Station Subsystem. Two output classes
are in the Station Subsystem, namely Station Display Output and Station Audio
Output, as depicted in Figure 21.20. For each station, there is also a need for a
coordinator object, the Station Coordinator, and an entity object, Station Status.
Figure 21.20. Classes for Station Subsystem.
21.7 Dynamic Interaction Modeling
Sequence numbers are depicted with whole numbers. For some use cases, an optional
letter (a use case identifier) precedes the sequence number. See Appendix A for more
information on message sequence numbering conventions.
21.7.1 Sequence Diagram for Arrive at Station
Because of the large number of objects involved, this use case is realized by two
sequence diagrams, one for the external objects interacting with the system, and the
second depicting the interaction among the software objects. The former sequence
diagram is depicted in Figure 21.21 and described first:
Figure 21.21. Sequence diagram for Arrive at Station use case (external objects).
5: When the train has stopped, the Motor Actuator sends a Stopped event to the
system.
7: The system sends an Arrived message to the Train Display and the Train
Audio Device (event 8).
The second sequence diagram (Figure 21.22) depicts the software objects and internal
message interactions among them, following the input from the approaching sensor:
4: (parallel sequence with event 2 because both are actions associated with the state
transition): Train Control sends a Send Approached message to Train
Status Dispatcher.
5: Arrival Sensor Input object receives an arrival event from the external
Arrival Sensor indicating that the train has arrived at the station. The Arrival
Sensor Input object sends the Arrived message to the Train Control object.
On receiving this message, Train Control transitions from Approaching state to
Stopping state.
7: Speed Adjustment object sends Stop message to Motor Output, which in turn
sends Stop message to the real-world motor.
8: When the train has stopped, the motor sends a stopped response to the Motor
Output object. Motor Output object sends a Stopped message to the Speed
Adjustment object.
10: (parallel sequence because there are two actions associated with the state
transition) On transitioning to Doors Opening state, the Train Control object
sends the Door Actuator Output object a command to Open Doors. On the state
machine, the Open Doors event is shown as an entry action, because the transition
to Doors Opening state can arrive from either the Out of Service state or the
Stopping state. It is more concise to depict one entry action on the state machine
instead of two actions, one on each of the incoming state transitions.
11: (parallel sequence because there are two actions associated with the state
transition) The Train Control object sends a Send Arrived message to the
Train Status Dispatcher.
Figure 21.22. Sequence diagram for Arrive at Station use case (software objects).
21.7.2 Sequence Diagram for Train Status Dispatcher
The Train Status Dispatcher sends multicast status messages to all the Railroad
Media actors, as depicted in Figure 21.23. The corresponding objects that receive status
messages are: Train Display Output, Train Audio Output, Station
Subsystem (for Station Display Output and Station Audio Output),
Rail Operations Service (for Rail Operations Display), and also the
Train Status entity object, as shown in Figure 21.23.
2: Train Status Dispatcher sends a train status message to the Train Audio
Output, which sends the message to the train audio device.
3: Train Status Dispatcher updates the Train Status entity object with the
arrival status.
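A multicast dispatcher of this kind can be sketched in a few lines of C++. The callback-based design below is an assumption made for illustration; in the actual design the destinations are message interfaces to the tasks concerned.

    #include <functional>
    #include <string>
    #include <vector>

    // Minimal sketch of a status dispatcher that multicasts each train
    // status message to every registered destination.
    class TrainStatusDispatcher {
    public:
        using Destination = std::function<void(const std::string&)>;

        void registerDestination(Destination d) {
            destinations_.push_back(std::move(d));
        }

        // Multicast one status message (e.g., "Arrived") to all destinations.
        void dispatch(const std::string& status) {
            for (auto& send : destinations_) send(status);
        }

    private:
        std::vector<Destination> destinations_;
    };

Each registered destination would stand for one of the recipients listed above: Train Display Output, Train Audio Output, the Station Subsystem, Rail Operations Service, and the Train Status entity object.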
21.7.3 Sequence Diagram for Control Train at Station
This sequence diagram (Figure 21.24) depicts the software objects and internal message
interactions among them, following the door sensor detecting that the train doors have opened:
S1: The Door Sensor sends the Opened message to the Door Sensor
Input object.
S2: The Door Sensor Input object sends an Opened message to the
Train Control object, which then transitions to Doors Open state.
S3: The Train Control object sends a Start Timer message to the
Train Timer to start a timer.
S4: A timer event is generated after a period of time equal to the timeout.
The Train Timer object sends a Timer Elapsed event to Train Control.
S5: If the track condition is All Clear, then Train Control object
transitions to Doors Closing state and sends a Close Doors command to
Door Actuator Output.
(Note that if there is a hazard ahead, the train will remain at the station
and periodically check whether the hazard has been cleared. Once the hazard
has been cleared, the train will resume its movement.)
S6: Door Actuator Output sends a Close Doors command to the real-world
door actuator.
S5a: (parallel sequence with S5 because there are two actions associated with
the state transition): The Train Control object sends a Send Departing
message to the Train Status Dispatcher.
Figure 21.24. Sequence diagram for Control Train at Station use case.
21.7.4 Sequence Diagram for Depart from Station
This sequence diagram (Figure 21.25) depicts the software objects and internal message
interactions among them, following the door sensor detecting that the train doors have closed:
D1: The real-world door sensor sends a Closed message when all the doors
are closed. The Door Sensor Input in turn sends a Closed message to
Train Control, which transitions to Accelerating state.
D3: Speed Adjustment object computes the acceleration rate and sends
Accelerate message with the acceleration rate as a parameter to Motor
Output, such that the acceleration gradually increases the speed of the train.
D4: The Motor Output object sends the Accelerate command to the real-
world motor.
D6: The Train Control object sends a Send Departed message to the
Train Status Dispatcher.
D7: By comparing current speed with the cruising speed, the Speed
Adjustment object determines when the train has reached the cruising
speed. Speed Adjustment object sends a Reached Cruising message to
Train Control, which transitions to Cruising state.
D9: By comparing current speed with the cruising speed, the Speed
Adjustment object determines what plus or minus delta adjustments are
needed to the train speed. It then sends a Cruise message with the delta
amounts to the Motor Output object.
D10: The Motor Output object converts the delta amounts to electrical
units and sends the voltage setting to the real-world motor.
Figure 21.25. Sequence diagram for Depart from Station use case.
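Steps D7 through D10 describe a simple closed-loop speed adjustment. As an illustration only, the following C++ sketch shows one possible delta computation; the tolerance and gain constants are invented values, and a real design would tune them to the train dynamics.

    // Returns the speed adjustment (meters/second) to send as the Cruise
    // message parameter; positive accelerates, negative decelerates.
    double computeCruiseDelta(double currentSpeedMps, double cruisingSpeedMps) {
        const double toleranceMps = 0.1;  // dead band to avoid constant corrections
        const double gain = 0.5;          // proportional correction factor (assumed)
        double error = cruisingSpeedMps - currentSpeedMps;
        if (error > -toleranceMps && error < toleranceMps)
            return 0.0;                   // close enough: no adjustment needed
        return gain * error;
    }

Motor Output would then convert the returned delta to electrical units, as described in step D10.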
21.7.5 Sequence Diagram for Detect Hazard Presence
This sequence diagram (Figure 21.26) depicts the software objects and internal message
interactions among them following the proximity sensor detecting the presence of a hazard
ahead of the train:
P1: Proximity Sensor detects the presence of a hazard and sends message
to Proximity Sensor Input.
P4: Speed Adjustment object computes fast deceleration value for motor
and sends Emergency Stop message with the deceleration rate as a
parameter to Motor Output.
P6 (parallel sequence with P3 because there are two actions associated with
the state transition): The Train Control object sends a Hazard Detected
message to the Train Status Dispatcher.
P10: The Train Control action is to send the Send Stopped message to
the Train Status Dispatcher.
Figure 21.26. Sequence diagram for Detect Hazard Presence use case.
21.7.6 Sequence Diagram for Detect Hazard Removal
This sequence diagram (Figure 21.27) depicts the software objects and internal message
interactions among them following the proximity sensor detecting the removal of the
hazard:
R1: Proximity Sensor detects the removal of the hazard and sends
message to Proximity Sensor Input.
R4: Speed Adjustment object computes the acceleration rate and sends
Accelerate message with the acceleration rate as a parameter to Motor
Output, such that acceleration gradually increases the speed of the train.
R5: The Motor Output object sends the accelerate command to the real-
world motor.
R6: (parallel sequence with R3 because there are two actions associated with
the state transition): The Train Control action is to Send Hazard
Removed message to the Train Status Dispatcher.
Figure 21.27. Sequence diagram for Detect Hazard Removal use case.
21.7.7 Sequence Diagram for Dispatch Train
This sequence diagram (Figure 21.28) depicts the software objects and internal message
interactions among them following the operator sending a dispatch message to the train:
I3a (parallel sequence with I3): The Train Control object sends an In
Service message to the Train Status Dispatcher.
I4: The Door Actuator Output object sends the Open Doors command
to the real-world Door Actuator.
2. Structure the Light Rail Control System into subsystems based on the architectural
structure patterns and design the subsystem interfaces based on the architectural
communication patterns.
3. For each subsystem, structure the subsystem into concurrent tasks using the task
structuring criteria and design the task interfaces.
4. Analyze the performance of the concurrent real-time software design. This step is
described in detail in Chapter 18 for the Light Rail Control System.
Figure 21.29 depicts the state dependent control object Train Control, which
receives messages from several input objects including Approaching Sensor Input,
Arrival Sensor Input, Proximity Sensor Input, Departure Sensor
Input, and Door Sensor Input. The events contained in these messages cause state
transitions in the state machine encapsulated by Train Control (Figure 21.17). The
resulting state machine actions are sent as speed command messages to Speed
Adjustment, door command messages to Door Actuator Output, and train status
messages to Train Status Dispatcher. Train location and speed data is stored in the
Train Data entity object, which is updated periodically by the Location Sensor
Input and Speed Sensor Input objects. Train Status Dispatcher reads and
combines this data with the train status messages that it sends to the Train Display
Output and Train Audio Output objects as well as the Station Subsystem and
Rail Operations Service.
The architectural structure patterns used by the distributed Light Rail System are
There are two reasons for the emphasis on asynchronous message communication:
firstly, the producer task is not delayed by a consumer task. Secondly, the design of the
consumer task is less complex if it receives incoming asynchronous messages from
multiple producers on a single FIFO message queue, which it then services in the order of
messages received. The Synchronous Message Communication with Reply pattern is
applied between Rail Operations Interaction and Rail Operations Service
for requests that need a response. The Subscription/Notification pattern is also used
between Rail Operations Interaction (which subscribes to receive rail
notifications) and Rail Operations Service, which responds with a notification
every time it receives a rail status update.
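A minimal sketch of such a FIFO message queue, written with standard C++ threading primitives rather than a particular RTOS API, is given below; the class name and the use of strings as messages are assumptions of the sketch.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>

    // Single FIFO queue on which a consumer task receives asynchronous
    // messages from multiple producer tasks and services them in arrival
    // order. Producers return immediately and are never delayed.
    class MessageQueue {
    public:
        void send(std::string msg) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(msg));
            }
            ready_.notify_one();
        }

        // Blocks the consumer task until a message arrives.
        std::string receive() {
            std::unique_lock<std::mutex> lock(mutex_);
            ready_.wait(lock, [this] { return !queue_.empty(); });
            std::string msg = std::move(queue_.front());
            queue_.pop();
            return msg;
        }

    private:
        std::mutex mutex_;
        std::condition_variable ready_;
        std::queue<std::string> queue_;
    };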
Each instance of the Train Control Subsystem is composed of one instance of each of the following
tasks:
a) Event driven input tasks. There are several event driven input tasks, each of
which is depicted with the stereotypes «event driven» «input»
«swSchedulableResource».
Door Sensor Input. Awakened by interrupt when train doors have opened or
closed.
b) Periodic input tasks. There are several periodic input tasks, each of which is
depicted with the stereotypes «timerResource» «input» «swSchedulableResource».
c) Demand driven state dependent control task. Train Control task is activated
by messages from several producer tasks including five input tasks and train
commands from Rail Operations Interaction. Incoming messages are input
events on the encapsulated Train Control state machine. State machine actions
are sent as outgoing messages from the Train Control task (see the sketch
following this list). This task is depicted
with the stereotypes «demand» «state dependent control» «swSchedulableResource».
f) Event driven output task. Motor Output is activated by messages from Speed
Adjustment; it then sends motor commands to the external electric motor and
receives an interrupt when the motor has completed the command. This output task is
depicted with the stereotypes «event driven» «output» «swSchedulableResource».
This task is categorized as an event driven output task because it receives interrupts
from the output device, whereas a demand driven output task interfaces to a passive
output device that does not generate interrupts.
g) Demand driven output tasks. These output tasks are activated on demand by
messages from other tasks in the Train Control Subsystem. Each task is
depicted with the stereotypes «demand» «output» «swSchedulableResource».
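The control task in category (c) can be sketched as a loop around the message queue shown earlier: it blocks until a message arrives, feeds the event to the encapsulated state machine, and sends the resulting actions as outgoing messages. The TrainStateMachine class and the action strings below are invented for illustration; the real state machine is the one in Figure 21.17.

    #include <string>
    #include <vector>

    // Stand-in for the encapsulated Train Control state machine.
    class TrainStateMachine {
    public:
        std::vector<std::string> processEvent(const std::string& event) {
            (void)event;  // transition logic elided; see Figure 21.17
            return {};    // returns the actions of the transition taken
        }
    };

    // Demand driven state dependent control task (sketch). MessageQueue is
    // the FIFO queue class sketched earlier.
    void trainControlTask(MessageQueue& inputQueue, MessageQueue& speedCommands,
                          MessageQueue& doorCommands, MessageQueue& statusMessages) {
        TrainStateMachine fsm;
        for (;;) {
            std::string event = inputQueue.receive();  // blocks until a message arrives
            for (const std::string& action : fsm.processEvent(event)) {
                if (action == "Accelerate" || action == "Stop" || action == "Cruise")
                    speedCommands.send(action);        // to Speed Adjustment
                else if (action == "Open Doors" || action == "Close Doors")
                    doorCommands.send(action);         // to Door Actuator Output
                else
                    statusMessages.send(action);       // to Train Status Dispatcher
            }
        }
    }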
The Station Coordinator task receives train status from the multiple instances of
Train Control Subsystem and uses this to update the Station Status passive
entity object and to send status messages to Station Display Output and Station
Audio Output. The tasks in the Station Subsystem are:
Station Audio Output. Demand driven output task sends messages to station
audio device concerning train arrival at station and departure from station.
Station Status. Because this passive entity object is not shared, it is only labeled
with the stereotype «entity».
21.10.5 Concurrent Task Design of Rail Operations Interaction and Service
Subsystems
The task architecture for the Rail Operations Service and Rail Operations
Interaction subsystems is shown in Figure 21.35, which depicts the tasks and task
interfaces in these subsystems.
Figure 21.35. Task architecture of Rail Operations Service and Rail Operations
Interaction Subsystems.
There is only one instance of the Rail Operations Service Subsystem, which
consists of two tasks and one passive information-hiding object. The information-hiding
object is the Rail Operations Status entity object, which contains the current status
of each train and station. The tasks are the Rail Ops Coordinator task (a coordinator
task) and the Rail Operations Display Output task (an output task). The Rail
Ops Coordinator task receives status messages from each instance of Train Control
Subsystem and Station Subsystem, in addition to each instance of Railroad
Crossing System and Wayside Monitoring System; and updates the Rail
Operations Status entity object. The Rail Operations Display Output task
receives status data from Rail Ops Coordinator and then displays the status of all
trains and stations on the large rail operations display.
Rail Ops Coordinator. Demand driven coordinator task. Receives train and
station status and updates Rail Operations Status object. This task is
depicted with the stereotypes «demand» «coordinator» «swSchedulableResource».
Rail Operations Status. Because this passive entity object is not shared, it is
only labeled with the stereotype «entity».
The Train Control Subsystem component has two required ports from which it
sends messages to the provided ports of two components depicted in Figure 21.36
(Station Subsystem and Rail Operations Service). It sends train status
messages to both components using the ITrainStatus and IRailStatus required
interfaces respectively depicted in Figure 21.37. Train Control also has one complex
port PTrain, with both a provided and a required interface, to allow it to receive
asynchronous commands from Rail Operations Interaction on the ITrain
provided interface and send asynchronous responses on the ITrainResp required
interface.
The Station Subsystem component has one required port from which it sends
messages to the provided port of Rail Operations Service using the IRailStatus
interface. It receives status messages from Train Control Subsystem on the
PTrainStatus port through the ITrainStatus provided interface. It also has a
complex port PStation through which it receives asynchronous commands on the
IStation provided interface and sends asynchronous responses on the IStationResp
required interface, as depicted in Figure 21.37.
The Rail Operations Interaction component has three complex ports, which
allow it to be a client of each of the Train Control Subsystem, Station
Subsystem, and Rail Operations Service components, sending requests on its
required interface and receiving responses on its provided interface. For example, it sends
asynchronous train commands (such as Suspend Train x) on the ITrain required interface
and receives asynchronous train responses (such as Train x Suspended) on the
ITrainResp provided interface. For communication with Rail Operations Service,
the complex port between them supports the IRailOps interface, through which Rail
Operations Service provides synchronous communication with response for status and
subscription requests, as well as an IRailNotification interface, provided by Rail
Operations Interaction, through which it receives asynchronous notifications.
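In a component implementation, each provided interface can be written as an abstract class that the providing component implements and the requiring component references. Only the interface names below come from the design; the operation signatures are assumptions added for this C++ sketch.

    #include <string>

    class ITrainStatus {   // provided by Station Subsystem
    public:
        virtual ~ITrainStatus() = default;
        virtual void trainStatus(int trainId, const std::string& status) = 0;
    };

    class IRailStatus {    // provided by Rail Operations Service
    public:
        virtual ~IRailStatus() = default;
        virtual void railStatus(const std::string& source,
                                const std::string& status) = 0;
    };

    class ITrain {         // provided by Train Control Subsystem
    public:
        virtual ~ITrain() = default;
        virtual void trainCommand(int trainId, const std::string& command) = 0;
    };

    class ITrainResp {     // provided by Rail Operations Interaction
    public:
        virtual ~ITrainResp() = default;
        virtual void trainResponse(int trainId, const std::string& response) = 0;
    };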
Each instance of Train Control (one per train) is allocated to a node to achieve
localized autonomy and adequate performance. Thus, the failure of one train node does
not affect other nodes. For the same reason, each instance of Railroad Crossing
Control (one per crossing) is assigned its own node. Station Subsystem (one per
station) is allocated to a node for localized autonomy. Loss of a station node means that
the station is temporarily out of service but does not affect other nodes. Wayside
Monitoring is also allocated to a separate node for each wayside area to be in close
proximity to the sensors that it is monitoring. Rail Operations Interaction is
assigned a separate node so that it can be both dedicated and responsive to its local user.
Rail Operations Service is assigned a separate node so that it can be responsive to
service requests. There is only one instance of this node. However, a backup hot standby
node could be provided, which would receive all status information sent to the primary
Rail Operations Service node and would therefore be available to be immediately
switched into service should there be a failure on the primary node.
Figure 21.39. Example of component deployment for Distributed Light Rail System.
The performance analysis in Chapter 18 confirms that the real-time design meets its
performance requirements: the elapsed times for detection of a train approaching a
station, stopping on arrival at a station, and stopping on detection of a hazard do not
exceed the predetermined response times. The safety requirement that a train respond to a
hazard ahead is satisfied by the Train Control state machine reacting to input from the
hazard detection sensor and commanding the electric motor to stop the train.
22
Pump Control System Case Study
This chapter describes a concise case study of a real-time embedded system, namely a
Pump Control System. Of particular interest are several periodic activities necessitating
the design of periodic tasks, in addition to examples of task design with temporal and
control clustering. There is also a need for a state machine that is designed with three
separate orthogonal regions in order to separate three different but interrelated control
concerns. This is one of the shorter case studies in which the details of dynamic
interaction modeling (covered in detail in other case studies) are left as an exercise for the
reader. The end product of dynamic interaction modeling is an integrated communication
diagram, which is used to transition into design modeling.
The problem description is given in Section 22.1. Section 22.2 describes the
structural modeling, and Section 22.3 describes the use case model. Section 22.4 describes
the object and class structuring. Section 22.5 describes the state machine model. Section
22.6 describes the integrated interaction model, which is an outcome of dynamic
interaction modeling. Section 22.7 describes the design modeling, which consists of the
distributed software design and distributed software deployment. This is followed by the
design of the concurrent task architecture and detailed software design.
22.1 Problem Description
A Pump Control System for a mineral mine has several pumps situated underground,
which are used to pump out water that has collected at the bottom of the mine. Each pump
has an engine, which is controlled automatically by the system. The system uses Boolean
high- and low-level water sensors, in addition to an analog methane sensor, to monitor the
environment inside the mineral mine. Detection of the high water level causes the system
to pump water out of the mine until the low water level is detected. For safety reasons, the
system must switch off the pump when the level of methane in the atmosphere exceeds a
preset safety limit. Once the pump has been switched off, five minutes must elapse before
it can be switched on again. For each pump, status information on the methane and water
level sensors, as well as the pump engine, is sent to a central server. Human operators can
view the status of the various pumps.
For the design of the system, it is assumed that all I/O devices are passive (do not
generate interrupts) and that an external timer is used to generate periodic timer events.
22.2 Structural Modeling
Structural modeling starts with the development of a conceptual structural model, which is
depicted as a block definition diagram. Each structural element is modeled as a SysML
block with a stereotype identifying its role. The Pump Control Embedded System in
Figure 22.1 is modeled as a composite block with the stereotype «embedded system»,
which contains four part blocks, the High Water Sensor «input device», the Low
Water Sensor «input device», the Methane Sensor «input device», and the Pump
Engine «output device». The system generates Pump Status, which is stored in an
«entity» block and viewed by the Operator «external user». An external Timer signals
the system at regular intervals.
Figure 22.1. Conceptual Structural Model for Pump Control Embedded System.
From the conceptual static model, a software system context block definition diagram
for the Pump Control System is developed, as shown in Figure 22.2, in which the
software system and external entities are depicted as SysML blocks. There are three
external input device blocks, namely the High and Low Water Sensors and the
Methane Sensor, one external output device block, namely the Pump Engine, one
external Timer block, and one external user block, the Operator. There are multiple
instances of each external block.
Figure 22.2. Pump Control System software system context diagram.
22.3 Use Case Modeling
The use case model for the Pump Control System is depicted in Figure 22.3, in which
there are two use cases, Control Pump and View Pump Status. The use cases are
depicted at the software engineering level, which is the reason for having six actors that
correspond to the external classes on the software context class diagram: three
representing the three external sensors (High Water Sensor, Low Water Sensor,
and Methane Sensor), one for the Pump Engine, one Timer actor, and an external user
actor, the Operator. The external Timer signals timer events to the system every second.
The use case descriptions are as follows. The Control Pump use case is started with
an input from the High Water Sensor actor.
Use case: Control Pump.
Actors: High Water Sensor (primary actor), Low Water Sensor, Methane
Sensor, Pump Engine, Timer.
Main sequence:
Alternative sequences:
Step 2: If the methane sensor detects that the methane level is unsafe when
the high water level is detected, the system does not switch on the pump engine.
Step 2: If the methane sensor detects that the methane level becomes unsafe
while the pump engine is operational, the system switches off the pump engine.
Step 2: If the methane sensor detects that the methane level has become safe
when the water level is high, the system switches on the pump engine, providing it
has been off for at least five minutes.
Step 4: After switching off the pump engine, five minutes must elapse before
the system can switch on the engine again.
Step 4: After the five minutes elapsed time, the system switches on the pump
engine, if the water level is high and the methane level is safe.
Nonfunctional requirements:
Safety requirement: System must not switch on the Pump Engine when the
methane level is unsafe.
Performance requirement: After switching off the pump engine, the system
must not switch on the Pump Engine until at least five minutes have elapsed.
Postcondition: The pump engine has been switched off.
The View Pump Status use case is started with an input from the Operator actor.
Use case: View Pump Status.
Actor: Operator.
Main sequence:
2. The system displays the pump status for the given pump.
Alternative sequence:
22.4 Object and Class Structuring
For each external output device, there is a corresponding software output object:
Pump Engine Output.
For each external user, there is a corresponding software user interaction object:
Operator Interaction.
There is also a need for a timer object:
Pump Timer.
In addition, because this is a real-time control system, there is a need for a state
dependent control object to execute an encapsulated state machine:
Pump Control.
Furthermore, since the pump controller and the user interaction object need to be on
separate nodes in a distributed configuration, the pump status needs to be maintained by a
service object:
Pump Status Service.
In the Pump State machine, the water and methane conditions need to be checked
before deciding whether or not to switch the pump on. For pumping to be started from the
Pump Idle state, both the high water guard condition and the methane safe guard
condition must be true. If either High Water is detected (when the guard condition
Methane Safe is True) or Methane Safe is detected (when the guard condition High
Water is True), Pump State transitions from Pump Idle state to Pumping state, and the
entry action is to start (i.e., switch on) the pump. If either of the events Low Water
Detected or Methane Unsafe Detected arrives, then Pump State transitions from
Pumping state to Resetting Pump state. In the transition, the actions are for the
system to stop (i.e., switch off) the pump and start the timer. A minimum time must elapse
before the pump can be switched on again. When the timer elapses with the event After
(Timeout), the Pump State transitions from Resetting Pump back to Pumping state,
providing both the High Water and Methane Safe guard conditions are True. However,
if either the Low Water or Methane Unsafe guard condition is True, the state machine
transitions to the Pump Idle state.
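The guard-based logic just described can be made concrete with a small sketch. The following C++ fragment covers only the pump control region of the state machine, not its orthogonal regions; the Boolean condition flags, which stand in for the water and methane regions, and the operation names are assumptions of the sketch.

    #include <cstdio>

    enum class PumpState { PumpIdle, Pumping, ResettingPump };

    class PumpStateMachine {
    public:
        // Events from the sensor input tasks update the guard conditions.
        void highWaterDetected()     { highWater_ = true;  evaluate(); }
        void lowWaterDetected()      { highWater_ = false; evaluate(); }
        void methaneSafeDetected()   { methaneSafe_ = true;  evaluate(); }
        void methaneUnsafeDetected() { methaneSafe_ = false; evaluate(); }

        // After (Timeout) event: only valid in Resetting Pump state.
        void timeoutExpired() {
            if (state_ != PumpState::ResettingPump) return;
            if (highWater_ && methaneSafe_) {
                state_ = PumpState::Pumping;
                startPump();                   // entry action of Pumping
            } else {
                state_ = PumpState::PumpIdle;
            }
        }

    private:
        void evaluate() {
            if (state_ == PumpState::PumpIdle && highWater_ && methaneSafe_) {
                state_ = PumpState::Pumping;
                startPump();                   // entry action of Pumping
            } else if (state_ == PumpState::Pumping &&
                       (!highWater_ || !methaneSafe_)) {
                state_ = PumpState::ResettingPump;
                stopPump();                    // transition actions
                startTimer();                  // five-minute reset timer
            }
            // In Resetting Pump state, events only update the conditions;
            // no transition occurs until the timeout expires.
        }
        void startPump()  { std::puts("Start Pump"); }
        void stopPump()   { std::puts("Stop Pump"); }
        void startTimer() { std::puts("Start Timer"); }

        PumpState state_  = PumpState::PumpIdle;
        bool highWater_   = false;
        bool methaneSafe_ = true;
    };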
a) The first Control Pump sequence diagram is for the main sequence of the use
case, which consists of an input from the High Water Sensor that results in the
system switching on the pump and transitioning to Pumping state, followed later by
an input from the Low Water Sensor that results in the system switching off the
pump and transitioning to Resetting Pump state. This is followed by a transition to
Pump Idle state after the timeout.
b) The second Control Pump sequence diagram addresses the alternative sequence
in which, after switching on the pump and transitioning to Pumping state, an unsafe
Methane Sensor reading is detected that results in the system switching off the
pump and transitioning to the Resetting Pump state. After five minutes, the pump
transitions to Pump Idle state. A safe Methane Sensor reading is then detected,
when the High Water guard condition is True, which results in the system switching
on the pump and transitioning back to the Pumping state. This is later followed by an
input Low Water Detected from the Low Water Sensor that results in the
system switching off the pump and transitioning to Resetting Pump state.
Messages sent to the Pump Control object, such as High Water Detected and
Low Water Detected from the High Water Sensor Input and Low Water
Sensor Input objects respectively in Figure 22.5, are the events that cause state changes
on the state machine in Figure 22.4. Actions in Figure 22.4, such as Start Pump and
Stop Pump, correspond to output messages from the Pump Control object to the Pump
Engine Output object in Figure 22.5.
The three input objects also send water and methane sensor status information to the
Pump Status Service object. The Operator Interaction object requests status
information from the Pump Status Service.
22.7 Design Modeling
22.7.1 Distributed Software Architecture
The Pump Control System is structured into three distributed subsystems, as depicted on
Figure 22.6. The three subsystems are Pump Subsystem (a control subsystem of which
there is one instance for each pump), Pump Status Service (a service subsystem of
which there is one instance), and Operator Interaction (a user interaction subsystem
of which there is one instance for each operator).
Figure 22.6 also depicts the message communication between the three subsystems.
The Pump Subsystem sends asynchronous pump control status messages to the Pump
Status Service subsystem. The Operator Interaction subsystem communicates
with the Pump Status Service using synchronous communication with response,
requesting and receiving pump status data.
22.7.3 Concurrent Task Architecture
The task architecture for the Pump Subsystem is depicted in Figure 22.8 and consists
of the following tasks:
A periodic input task, Methane Sensor Input, to monitor the status of a passive
methane sensor. The MARTE stereotypes for this task, which correspond to it
being a periodic input task, are «timerResource» «input»
«swSchedulableResource».
A periodic temporal clustering task, Water Sensors, to monitor the status of the
high and low water sensors. These sensors need to be monitored with the same
frequency and are therefore grouped into the same task. The stereotypes for this
task, which correspond to it being a periodic temporal clustering task, are
«timerResource» «temporal clustering» «swSchedulableResource».
A demand driven control clustering task, Pump Controller, in which the Pump
Control state machine object is clustered with Pump Engine Output, since the start and stop
pump commands are executed at state transitions. The stereotypes for this task,
which correspond to it being a demand driven control clustering task, are
«demand» «control clustering» «swSchedulableResource».
A periodic timer task, Pump Timer, to receive the timer events from the clock.
The MARTE stereotypes for this task, which correspond to it being a periodic task,
are «timerResource» «swSchedulableResource».
Figure 22.8. Pump Subsystem – task architecture.
22.7.4 Detailed Software Design
The detailed design of a periodic temporal clustering task is given in Figure 22.9. The
Water Sensors task of Figure 22.8 is a composite task that contains three passive
objects: a coordinator object called Water Sensors Coordinator, and two input
objects called High Water Sensor Input and Low Water Sensor Input.
Figure 22.9. Water Sensors – temporal clustering with nested passive objects.
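The behavior of this task can be sketched as a single periodic loop that polls both passive input objects, which is the essence of temporal clustering. In the sketch below, the 500 ms period, the callback-style notification, and the trivial sensor-reading stubs are assumptions; a real implementation would be activated by the external timer and would report only changes of state.

    #include <chrono>
    #include <functional>
    #include <thread>

    // Hypothetical passive input objects nested inside the Water Sensors
    // task; real versions would read the physical Boolean sensors.
    struct HighWaterSensorInput { bool read() { return false; } };
    struct LowWaterSensorInput  { bool read() { return false; } };

    // Periodic temporal clustering task: one activation samples both
    // sensors, since they are monitored at the same rate.
    void waterSensorsTask(HighWaterSensorInput& high, LowWaterSensorInput& low,
                          std::function<void(const char*)> notifyPumpController) {
        using namespace std::chrono_literals;
        for (;;) {
            if (high.read()) notifyPumpController("High Water Detected");
            if (low.read())  notifyPumpController("Low Water Detected");
            std::this_thread::sleep_for(500ms);  // periodic activation
        }
    }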
The detailed design of a demand driven control clustering task is given in Figure
22.10. The Pump Controller is a composite task that contains three passive objects, a
coordinator object called the Pump Coordinator, an output object called Pump Engine
Output, and a state machine object called Pump Control.
Figure 22.10. Pump Controller – control clustering with nested passive objects.
The design of the passive information-hiding classes, instances of which are nested in
the two clustered tasks, is depicted in Figure 22.11. The classes are High Water
Sensor Input and Low Water Sensor Input (instances of which are nested in the
Water Sensors task), and Pump Engine Output and Pump Control (instances of
which are nested in the Pump Controller task).
Figure 22.11. Design of passive information-hiding classes.
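As an illustration of this information hiding, two of the classes can be sketched as follows in C++; the operation names and state encoding are assumptions, and the real operations are those shown in Figure 22.11.

    // Hides the details of commanding the physical pump engine.
    class PumpEngineOutput {
    public:
        void startPump() { /* write the "on" command to the pump engine */ }
        void stopPump()  { /* write the "off" command to the pump engine */ }
    };

    // Hides the encapsulated state machine behind one operation.
    class PumpControl {
    public:
        enum class Action { None, StartPump, StopPumpAndStartTimer };

        // Processes one event and returns the action for the caller
        // (the Pump Coordinator) to execute.
        Action processEvent(const char* event) {
            (void)event;  // transition table elided; see Figure 22.4
            return Action::None;
        }

    private:
        int currentState_ = 0;  // encoded state of the state machine
    };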
22.7.5 Applying Software Architectural Patterns
The Pump Control System uses several software architectural structure and
communication patterns. The Centralized Control pattern is used because for a given Pump
Subsystem, there is one control task, which executes a state machine. It receives sensor
input from input tasks and controls the external environment via an output task, as shown
in Figure 22.8 for the Pump Controller task. In a Centralized Control pattern, the
control task executes a state machine, which for Pump Controller is depicted in Figure
22.4. A second architectural structure pattern used in the Pump Control System is the
Distributed Independent Control pattern because the system has several instances of the
Pump Subsystem, each of which is a control subsystem that executes independently of
the other control subsystems and sends pump status to the Pump Status Service
subsystem. Note that each instance of the Pump Subsystem is independent of the service
subsystem because it sends unidirectional asynchronous messages to the service and
therefore never has to wait for a response. The third architectural structure pattern is the
Multiple Client/Single Service pattern, as shown in Figure 22.6, in which the multiple
instances of the Operator Interaction subsystem are clients of the Pump Status
Service subsystem, because each client sends status requests to and receives status
responses from the service subsystem. The difference between the second and third
architectural structure patterns is that the Pump Subsystem is independent of the Pump
Status Service subsystem whereas the Operator Interaction subsystem is
dependent on the service subsystem as it has to wait for responses from it.
23
Highway Toll Control System Case Study
This chapter describes a concise case study of a Highway Toll Control System in which
there are several entry and exit toll booths. Each toll booth is controlled by a real-time
embedded subsystem that communicates with a Highway Toll Service subsystem, which
receives entry and exit transactions from the toll booths and charges customer accounts. At
each toll booth there are multiple sensors and actuators, requiring state dependent entry
and exit control. Because entry and exit toll booths are similarly configured and behave in
a similar way, this shorter case study concentrates on the design of the entry toll booth.
There is less emphasis on the structural modeling in this case study, which has been
covered in detail in other case studies.
The problem description is given in Section 23.1. Section 23.2 describes the use case
model, and Section 23.3 describes the software system context modeling. Section 23.4
describes the object and class structuring. Section 23.5 describes the state machine model,
and Section 23.6 describes the dynamic interaction modeling. Section 23.7 describes the
design modeling, which consists of the distributed software design and distributed
software deployment, followed by the design of the concurrent task architecture and
detailed software design.
23.1 Problem Description
A highway toll road has several entry and exit points, at each of which there is a toll plaza
with one or more tollbooths. To use the system, a customer purchases an RFID (radio
frequency ID) transponder, which holds the encoded customer account number, from the
Highway Toll Service and mounts the transponder on the windshield of the vehicle. The
Highway Toll Service maintains customer accounts in a database including owner and
vehicle information, and account balance. Customers purchasing a transponder must pay
in advance for toll fees by credit card. Accounts are reduced by the toll charge incurred at
the end of each trip. The toll charge to be paid depends on the length of the trip and
category of the vehicle.
Each tollbooth consists of a vehicle arrival sensor (placed fifty feet in front of the
tollbooth), a vehicle departure sensor, a traffic light to indicate whether the vehicle has
been authorized to pass through the tollbooth, a transponder detector, and a camera.
The traffic light at each tollbooth is initially red. When a vehicle approaches the
tollbooth, the vehicle sensor detects the vehicle’s presence. If the transponder detector
detects a valid transponder (i.e., the transponder holds a valid customer account) in the
approaching vehicle, the system switches the light to green. If there is no transponder or
the account is low on funds, the system switches the light to yellow. In addition, the video
camera photographs the license plate, and the image is sent to Highway Toll Service. After
the car departs, the system switches the light to red.
23.2 Use Case Modeling
The use case model for the Highway Toll Control System is depicted in Figure 23.1, in
which there are two use cases, Enter Highway and Exit Highway. The use cases are
depicted at the system engineering level, which is the reason for having only three actors:
the Vehicle actor, whose movement is tracked by four input devices; the Traffic Light
actor, which corresponds to the output device of the same name; and the external system
actor, the Highway Toll Service. The timer is assumed to be internal to the system.
Figure 23.1. Use case model for Highway Toll Control System.
The use case descriptions are as follows. The Enter Highway use case is started
with an input from the Vehicle actor.
Use case: Enter Highway
Precondition: Tollbooth is open, and the traffic light at the tollbooth is set to
red.
Main sequence:
Alternative sequences:
Step 3: Account is low in funds. If the system determines that the account is
low in funds, System switches traffic light to yellow.
The Exit Highway use case is started with an input from the Vehicle actor. For
information purposes, this use case describes functionality performed by the Highway
Toll Service actor.
Use case: Exit Highway
Precondition: Tollbooth is open and the traffic light at the tollbooth is red
Main sequence:
5. Highway Toll Service calculates toll based on start time and day, exit time
and day, start location, exit location, and vehicle type.
9. System detects that the vehicle has departed and switches traffic light to
red.
Alternative sequences:
Step 3: Insufficient funds. If the system determines that there are insufficient
funds in the account, System switches traffic light to yellow.
23.4 Object and Class Structuring
For each external output device, there is a corresponding software output object:
Traffic Light Output.
In addition, because the behavior of this control system is state dependent, there
needs to be a state dependent control object to execute an encapsulated state machine:
Entry Control.
Furthermore, there needs to be a passive entity object to store the entry transaction,
before it is sent to the Highway Toll Service:
Entry Transaction.
23.5 Dynamic State Machine Modeling
Next the Entry Control state machine is designed, as depicted in Figure 23.3. The
states are:
The objects in the Entry Tollbooth Controller subsystem are also depicted on
an integrated communication diagram in Figure 23.5, which depicts all the objects in this
subsystem as well as all the messages passed between them.
Figure 23.5. Integrated Communication diagram for Entry Tollbooth Controller
subsystem.
23.7 Design Modeling
23.7.1 Distributed Software Architecture
The Highway Toll System (which consists of the Highway Toll Control System
and the Highway Toll Service) is structured into three distributed subsystems, as
depicted on Figure 23.6. The three subsystems are Entry Tollbooth Controller
subsystem (a control subsystem of which there is one instance for each entry tollbooth),
Exit Tollbooth Controller subsystem (a control subsystem of which there is one
instance for each exit tollbooth), and Highway Toll Service (a service subsystem of
which there is one instance).
Figure 23.6 also depicts the message communication between the three subsystems.
The Entry Tollbooth Controller and Exit Tollbooth Controller subsystems
send asynchronous entry and exit transaction messages respectively, as well as
asynchronous process photo messages, to the Highway Toll Service subsystem. The
service subsystem responds to entry and exit transactions with asynchronous valid or
invalid account status messages.
Figure 23.7. Distributed system deployment for Highway Toll Control System.
23.7.3 Concurrent Task Architecture
The task architecture for the Entry Tollbooth Controller subsystem is given in Figure
23.8. Tasks are depicted using MARTE stereotypes. There are seven tasks in this
subsystem:
An event driven input task, Arrival Sensor Input, which receives inputs from
the arrival sensor. The stereotypes for this task, which correspond to it being an
event driven input task, are «event driven» «input» «swSchedulableResource».
A second event driven input task, Departure Sensor Input, which receives
inputs from the departure sensor. The stereotypes for this task are also «event
driven» «input» «swSchedulableResource».
A demand driven input/output task, Video Camera I/O, which sends a command
to the external video camera to take a photo of the car before it departs. The
stereotypes for this task are «demand» «I/O» «swSchedulableResource».
A demand driven output task, Traffic Light Output, which sends commands
to the external traffic light to change the color of the light to red, green, or yellow.
The stereotypes for this task are «demand» «output» «swSchedulableResource».
Finally, there is a demand driven proxy task, Highway Toll Service Proxy,
which sends requests to the external Highway Toll Service to process entry
and exit transactions for cars with valid transponders or to process photos of cars
with invalid or no transponders. The stereotypes for this task are «demand»
«proxy» «swSchedulableResource».
Figure 23.8. Entry Tollbooth Controller Subsystem – task architecture.
23.7.4 Detailed Software Design
The detailed design of a demand driven control clustering task is given in Figure 23.9. The
Entry Controller is a composite task that contains three passive objects: a coordinator
object called Entry Coordinator, an entity object called Entry Transaction, and a
state machine object called Entry Control. Entry Coordinator receives messages from
the three producer tasks, Arrival Sensor Input, Departure Sensor Input, and
Transponder Detector I/O on a FIFO queue, and invokes the operations of the
Entry Control and Entry Transaction passive objects.
Figure 23.9. Entry Controller – control clustering with nested passive objects.
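As a rough illustration of this control clustering design, the following Java sketch shows the composite task as a single thread that owns a FIFO message queue and invokes the operations of its nested passive objects. The class and method names, and the stubbed passive objects, are assumptions for illustration; this is not the book's implementation.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EntryControllerTask implements Runnable {

    // Nested passive state machine object (stubbed; a real one would
    // encapsulate the Entry Control state transition table).
    static class EntryControl {
        String processEvent(String event) {
            return "actionFor:" + event;   // placeholder for a table lookup
        }
    }

    // Nested passive entity object (stubbed).
    static class EntryTransaction {
        void update(String data) { /* store transaction data */ }
    }

    private final BlockingQueue<String> fifoQueue = new ArrayBlockingQueue<>(32);
    private final EntryControl entryControl = new EntryControl();
    private final EntryTransaction entryTransaction = new EntryTransaction();

    // Producer tasks (e.g., Arrival Sensor Input) deposit messages here.
    public void send(String message) throws InterruptedException {
        fifoQueue.put(message);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String message = fifoQueue.take();                    // wait for next message (FIFO)
                String action = entryControl.processEvent(message);   // coordinator invokes the STM object
                entryTransaction.update(message);                     // coordinator updates the entity object
                System.out.println("Executing " + action);            // stand-in for commanding output tasks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();                   // allow orderly shutdown
            }
        }
    }
}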
23.7.5 Architectural Pattern Usage
The Highway Toll Control System uses several software architectural structure and
communication patterns. The Centralized Control pattern is used in the Entry
Tollbooth Controller and Exit Tollbooth Controller subsystems because in
each case, there is one control task, which executes a state machine. It receives sensor
input from multiple input and I/O tasks and controls the external environment via output
and I/O tasks, as shown in Figure 23.8 for the Entry Tollbooth Controller
subsystem. In a Centralized Control pattern, the control task executes a state machine,
which for Entry Tollbooth Controller is depicted in Figure 23.3. Another
architectural pattern used in the Highway Toll Control System is the Multiple-
Client/Single-Service pattern, as shown in Figure 23.6, in which the multiple instances of
the Entry Tollbooth Controller and Exit Tollbooth Controller subsystems
are clients of the service subsystem, the Highway Toll Service.
Classes
Classes are shown with an uppercase initial letter. In the figures, there are no spaces in
multiword names – for example, HeatingElement. In the text, however, spacing is
introduced to improve the readability – for example, Heating Element.
Attributes are shown with a lowercase initial letter – for example, weight. For
multiword attributes, there are no spaces between the words in figures, but spaces are
introduced in the text. The first word of the multiword name has an initial lowercase letter;
subsequent words have an initial uppercase letter – for example, sensorValue in figures
and sensor Value in text.
The type of the attribute has an initial uppercase letter – for example, Boolean,
Integer, or Real.
Objects
An object may be depicted in various ways, in particular as:
An individual named object. In this case, the first letter of the first word is
lowercase, and subsequent words have an uppercase first letter. In figures, the
objects appear as, for example, aWarningAlarm and anotherWarningAlarm. In
the text, these objects appear as a Warning Alarm and another Warning
Alarm.
An individual unnamed object. Some objects are shown in the figures as class
instances without a given object name – for example : WarningAlarm. In the
text, this object is referred to as Warning Alarm. For improved readability, the
colon is removed, and a space is introduced between the individual words of a
multiword name.
This means that, depending on how the object is depicted in a figure, it will appear in the
text sometimes with a first word initial letter uppercase and sometimes with a first word
initial letter lowercase.
Messages
In the analysis model, messages are depicted with an uppercase initial letter. Multiword
messages are shown with spaces in both figures and text – for example, Simple
Message Name.
State Machines
In both figures and text, states, events, conditions, actions, and activities are all shown
with initial letter uppercase and spaces in multiword names – for example, the state
Emergency Stopping, the event Timer Event, and the action Open Doors.
Messages
In the design model, the first letter of the first word of the message is lowercase, and
subsequent words have an uppercase first letter. In both the figures and text, there is no
space between words, as in alarmMessage.
Message parameters are shown with a lowercase initial letter – for example, speed.
For multiword parameters, there are no spaces between the words in both the figures and the
text. The first word of the multiword name has a lowercase initial letter, and subsequent
words have an uppercase initial letter – for example, cumulativeDistance in both
figures and text.
Operations
The naming conventions for operations (a.k.a. methods) follow the conventions for
messages in both figures and text. Thus, the first letter of the first word of both the
operation and the parameter is lowercase, and subsequent words have an uppercase first
letter. There is no space between words – for example, validatePassword
(userPassword).
A.2 Message Sequence Numbering on Interaction Diagrams
Messages on a communication diagram or sequence diagram are given message sequence
numbers. This section provides some guidelines for numbering message sequences. These
guidelines follow the general UML conventions; however, they have been extended to
address concurrency, alternatives, and large message sequences better. These conventions
are followed in the examples given in this book, including the case studies in Chapters 19
through 23.
where the sequence expression consists of the message sequence number and an indicator
of recurrence.
Argument list. The argument list of the message is optional and specifies any
parameters sent as part of the message.
There can also be optional return values from the message sent.
[first optional letter sequence] [numeric sequence] [second optional letter sequence]
The first optional letter sequence is an optional use case ID and identifies a specific use
case. The first letter is an uppercase letter and might be followed by one or more upper- or
lowercase letters if a more descriptive use case ID is desired.
An example is V1, where the letter V identifies the use case, and the number
identifies the message sequence within the communication diagram supporting the use
case. The object sending the first message – V1 – is the initiator of the use case–based
communication. Subsequent message numbers following this input message are V1.1,
V1.2, and V1.3. If the dialog were to continue, the next input from the actor would be V2.
Alternative message sequences are depicted with the condition indicated after the
message. An uppercase letter is used to name the alternative branch. For example, the
main branch may be labeled 1.4[Normal], and the other, less frequently used branch could
be named 1.4A[Error]. The message sequence numbers for the normal branch would be
1.4[Normal], 1.5, 1.6, and so on. The message sequence numbers for the alternative
branch would be 1.4A[Error], 1.4A.1, 1.4A.2, and so on.
Appendix B
Catalog of Software Architectural Patterns
The architectural structure patterns and architectural communication patterns are
documented with the template described in Chapter 11, Section 11.8, in Sections B.1 and
B.2, respectively. The patterns are summarized in the following tables.
Problem: Several actions and activities are state dependent and need to be controlled and sequenced.
Summary of solution: There are several control components, such that each component controls a given part of the system by conceptually executing a state machine. Control is distributed among the various control components, which communicate with each other. No single component has overall control.
Weaknesses of solution: Does not have an overall coordinator. If this is needed, consider using the Hierarchical Control pattern.
Summary of solution: There are several control components, such that each component controls a given part of the system by conceptually executing a state machine. Control is distributed among the various control components, which do not communicate with each other but might communicate asynchronously with a service component. No single component has overall control.
Weaknesses of solution: Does not have an overall coordinator. If this is needed, consider using the Hierarchical Control pattern.
Applicability: Distributed real-time control systems, distributed state dependent applications.
Problem: Distributed application with multiple locations for which both real-time localized control and overall control are needed.
Summary of solution: There are several control components, each controlling a given part of a system by conceptually executing a state machine. There is also a coordinator component, which provides high-level control by deciding the next job for each control component and communicating that information directly to the control component.
Problem: A software architecture that encourages design for ease of extension and contraction is needed.
Reference: Chapter 11, Section 11.2.1; Hoffman and Weiss 2001; Parnas 1979.
Figure B.5. Layers of Abstraction pattern: TCP/IP example.
Pattern name: Kernel.
Aliases: Microkernel.
Weaknesses of solution: If care is not taken, kernel can become too large and bloated. Alternatively, essential functionality could be left out in error.
Pattern name: Master/Slave.
Aliases: None.
Summary of solution: Master divides up the work to be performed and assigns each part to a slave. Each slave executes its assignment and, when it has finished, sends a response to the master. The master integrates the slave responses.
Strengths of solution: Divides up work to be done so that it can be done in parallel.
Weaknesses of solution: Could have situations where the work is not divided evenly between slaves, which results in less efficient master/slave operation. A slave might be held up or fail and hence slow down the entire master/slave operation.
Weaknesses of solution: Client can be held up indefinitely if there is a heavy load at any server.
Summary of solution: Client requests service. Service responds to client requests and does not initiate requests. Service handles multiple client requests.
Strengths of solution: Good way for client to communicate with service when it needs a reply from service. Very common form of communication in client/service applications.
Weaknesses of solution: Client can be held up indefinitely if there is a heavy load at the server.
Strengths of solution: Good way for client to communicate with service when it needs a reply but can continue executing and receive the reply later.
Weaknesses of solution: Suitable only if the client does not need to send multiple requests before receiving the first reply.
Summary of solution: Use two message queues between producer component and consumer component: one for messages from producer to consumer and one for messages from consumer to producer. Producer sends message to consumer on P→C queue and continues. Consumer receives message. Messages are queued if consumer is busy. Consumer sends replies on C→P queue.
Strengths of solution: Producer does not get held up by consumer. Producer receives replies later, when it needs them.
Weaknesses of solution: If producer produces messages more quickly than consumer can process them, the message (P→C) queue will eventually overflow. If producer does not service replies quickly enough, the reply (C→P) queue will overflow.
Pattern name: Broadcast.
Weaknesses of solution: Places an additional load on the client because the client may not want the message.
Summary of solution: Use broker. Services register with broker. Client sends service request to broker. Broker returns service handle to client. Client uses service handle to make request to service. Service processes request and sends reply directly to client. Client can make multiple requests to service without broker involvement.
Strengths of solution: Location transparency: services may relocate easily. Clients do not need to know locations of services.
Weaknesses of solution: Additional overhead because broker is involved in initial message communication. Broker can become a bottleneck if there is a heavy load at the broker. Client may keep outdated service handle instead of discarding it.
Strengths of solution: Location transparency: services may relocate easily. Clients do not need to know locations of services.
Pattern name: Subscription/Notification.
Aliases: Multicast.
Problem: Distributed application with multiple clients and services. Clients want to receive messages of a given type.
Weaknesses of solution: This pattern cannot be used if the tasks need to execute on separate nodes.
Strengths of solution: Good way for producer to communicate with consumer when it wants confirmation that consumer received the message and producer does not want to get ahead of consumer.
loop
–– Wait for message from another task arriving via message connector;
aConnector.receive (message);
extract message name and any message parameters from message;
–– perform coordination action (assumed to be not state dependent)
case message of
message type 1:
objectA.methodX (optional parameters);
….
message type 2:
objectB.methodY (optional parameters);
…..
endcase;
prepare output message containing message name and parameters
–– send output message;
aConnector.send (message);
end loop;
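A minimal Java rendering of this coordinator loop might look as follows, assuming string-valued messages and stubbed passive objects; the switch statement plays the role of the case statement in the pseudocode, and all names are illustrative.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CoordinatorTask implements Runnable {
    static class PassiveObjectA { void methodX() { /* coordination action 1 */ } }
    static class PassiveObjectB { void methodY() { /* coordination action 2 */ } }

    private final BlockingQueue<String> connector = new ArrayBlockingQueue<>(32);
    private final PassiveObjectA objectA = new PassiveObjectA();
    private final PassiveObjectB objectB = new PassiveObjectB();

    // Other tasks call this to send a message via the connector.
    public void send(String message) throws InterruptedException {
        connector.put(message);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String message = connector.take();   // wait for message arriving via connector
                switch (message) {                   // dispatch on the message name
                    case "messageType1": objectA.methodX(); break;
                    case "messageType2": objectB.methodY(); break;
                    default: break;                  // ignore unrecognized messages
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}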
C.5 Pseudocode for Periodic Algorithm Task
A periodic algorithm task is a task that executes an algorithm periodically, that is, at
regular, equally spaced intervals of time (Section 13.4.1). The task is activated by a timer
event, executes the periodic algorithm, and then waits for the next timer event. The task’s
period is the time between successive activations.
loop
–– Wait for timer event;
wait (timerEvent);
execute periodic algorithm;
prepare output message containing message name and parameters
–– send output message;
aConnector.send (message);
end loop;
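As one possible Java realization, the timer event can be supplied by a ScheduledExecutorService that activates the algorithm at a fixed rate; the 100 ms period below is an assumed value, and the printed line stands in for the periodic algorithm and output message.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicAlgorithmTask {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        Runnable periodicAlgorithm = () -> {
            // execute the periodic algorithm, then prepare and send any output message
            System.out.println("algorithm activated at " + System.nanoTime());
        };
        // Activate the task every 100 ms; the program runs until terminated.
        timer.scheduleAtFixedRate(periodicAlgorithm, 0, 100, TimeUnit.MILLISECONDS);
    }
}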
C.6 Pseudocode for Demand Driven Task
A demand driven task is a task that is activated on demand by the arrival of a message or
event sent by a producer task (Section 13.4.2). The action the demand driven task takes is
based entirely on the contents of the input message it receives. The task reads the
incoming message, performs the demanded action, and then communicates the result, such
as by sending a message to a consumer task, by sending a response to the original
producer task, or by updating a passive entity object. The task then loops back and waits
for the next message.
loop
–– wait for message or event from producer task arriving via message
connector;
aConnector.receive (message);
extract message name and any message parameters from message;
perform requested action on demand
– Read data from passive entity object(s) if needed
– Execute action
– Update data in passive entity object(s) if needed
prepare output message or response containing message name and parameters
–– send output message or event;
aConnector.send (message);
end loop;
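A minimal Java sketch of a demand driven task follows, assuming string-valued messages and blocking queues as the message connectors; the "action" performed is a placeholder.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DemandDrivenTask implements Runnable {
    private final BlockingQueue<String> input;   // connector from producer task
    private final BlockingQueue<String> output;  // connector to consumer task

    public DemandDrivenTask(BlockingQueue<String> input, BlockingQueue<String> output) {
        this.input = input;
        this.output = output;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String message = input.take();           // activated on demand by message arrival
                String result = "processed:" + message;  // perform the requested action
                output.put(result);                      // communicate the result to the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();      // loop ends on shutdown
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> in = new LinkedBlockingQueue<>();
        BlockingQueue<String> out = new LinkedBlockingQueue<>();
        new Thread(new DemandDrivenTask(in, out)).start();
        in.put("request");
        System.out.println(out.take());                  // prints processed:request
    }
}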
C.7 Pseudocode for User Interaction Task
A user interaction task is a demand driven task that interacts with a human user. It
typically outputs a prompt to a user (either on initialization or on arrival of a message
from another task) and then waits for the input from the user (Section 13.4.5). It will read
the input, possibly following this up with further prompts and user inputs, determine the
desired user action, and send a message to a consumer object (which could be a passive
entity object, service task or control task). It typically receives a response from the
consumer. It then formats the response in textual and/or graphical form, and outputs this
response to the user. The task then loops back and waits for the next user interaction.
loop
output menu or prompt to user;
wait (user response);
read user input;
process user input and have further interactions with user if necessary;
–– send message with user request to consumer task
aConnector.send (user request);
–– wait for response from consumer task arriving via message connector;
aConnector.receive (consumer response);
extract and process consumer response;
prepare textual and/or graphical output for user;
output response to user;
end loop;
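The following console-based Java sketch illustrates this loop, with an echoing background thread standing in for the consumer task; the prompt text and message format are assumptions.

import java.util.Scanner;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UserInteractionTask {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {   // stand-in consumer task
            try {
                while (true) responses.put("done: " + requests.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        consumer.setDaemon(true);
        consumer.start();

        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.print("command> ");               // output prompt to user
            if (!scanner.hasNextLine()) break;           // end of input
            String userRequest = scanner.nextLine();     // read user input
            requests.put(userRequest);                   // send request to consumer task
            String response = responses.take();          // wait for consumer response
            System.out.println(response);                // output response to user
        }
    }
}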
C.8 Pseudocode for Demand Driven State Dependent Control Task
A state dependent control task is a demand driven task (Section 13.4.3) that executes a
sequential state machine. The task receives messages from its producers on a message
queue. Given the next message, the task extracts the event from the message and uses this
event as an input parameter to invoke the processEvent method of a passive STM
(Section 14.1.3) object, which encapsulates a state transition table. Given the new event
and current state, the method looks up the state transition table entry for Table (new event,
current state) and reads the next state and action(s) to be performed. It then sets the current
state to the next state and returns the action(s) to be performed. The task then executes
each action, such as by sending a message to another task, and then loops back to receive
the next message.
loop
–– messages from all senders are received on Message Queue
receive (messageQ, message);
–– extract the event name and any message parameters
newEvent = message.event;
–– assume state machine is encapsulated in object aSTM;
–– given the incoming event, look up state transition table;
–– change state if required; return action to be performed;
aSTM.processEvent (in newEvent, out action);
–– execute state dependent action(s) as given on state machine;
case state_dependent_action of
action_1:
execute state_dependent_action 1;
exit;
action_2:
execute state_dependent_action 2;
exit;
…
action_n:
execute state_dependent_action n;
exit;
end case;
end loop;
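A minimal Java sketch of this design follows. The passive STM object encapsulates the state transition table as a map keyed on the (current state, event) pair; the two-state table is an assumed toy example, not the Entry Control state machine of Chapter 23.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StateDependentControlTask {

    static class Transition {
        final String nextState, action;
        Transition(String nextState, String action) { this.nextState = nextState; this.action = action; }
    }

    // Passive state machine object encapsulating the state transition table.
    static class Stm {
        private final Map<String, Transition> table = new HashMap<>();
        private String currentState = "Idle";
        Stm() {
            table.put("Idle/start", new Transition("Running", "startMotor"));
            table.put("Running/stop", new Transition("Idle", "stopMotor"));
        }
        // Look up Table(current state, new event); change state; return the action.
        synchronized String processEvent(String event) {
            Transition t = table.get(currentState + "/" + event);
            if (t == null) return "none";       // event not valid in this state
            currentState = t.nextState;
            return t.action;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> messageQ = new LinkedBlockingQueue<>();
        Stm aStm = new Stm();
        Thread controlTask = new Thread(() -> {
            try {
                while (true) {
                    String newEvent = messageQ.take();            // receive next message from the queue
                    String action = aStm.processEvent(newEvent);  // table lookup and state change
                    System.out.println("execute action: " + action);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        controlTask.setDaemon(true);
        controlTask.start();
        messageQ.put("start");
        messageQ.put("stop");
        Thread.sleep(200);  // allow the control task to process both events
    }
}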
Appendix D
Teaching Considerations
D.1 Overview
The material in this book may be taught in different ways depending on the time available
and the knowledge level of the students. This appendix describes possible academic and
industrial courses that could be based on this book.
In each of these courses, there are three components: description of the method,
presentation of at least one case study using the method, and hands-on design exercise for
students to apply the method to a real-world problem.
D.2 Suggested Academic Courses
The following academic courses could be taught in graduate and advanced
undergraduate courses in Computer Science, Software Engineering, Systems Engineering,
and Computer Engineering programs, and are based on the material covered in this
textbook.
2. A practical hands-on course in which each stage of the real-time software design
method is followed by a hands-on design lab. The design lab could be on a problem
of the company’s choice, assuming an in-house course.
D.4 Design Exercises
The following discussion applies to both academic and industrial courses:
As part of the course, students should also work on one or more real-time problems,
either individually or in teams. Whether one or more problems are tackled depends on the
size of the problem and the length of the course. However, sufficient time should be
allocated for students to work on the problems since this is the best way for the students to
really understand the method.
d. House-cleaning robot,
e. Driverless car,
1. Work on one problem throughout the course using COMET/RTE. This has the
advantage that students get an in-depth appreciation of the method.
2. Divide the class up into teams. Each group solves a different problem using
COMET/RTE. Time is allocated at the end of the course for each group to present
their solution. A class discussion is held on the strengths and weaknesses of each
solution.
abstract class
A class that cannot be directly instantiated (Booch, Rumbaugh, and Jacobson 2005).
Compare concrete class.
abstract data type
A data type that is defined by the operations that manipulate it and thus has its
representation details hidden.
abstract interface specification
A specification that defines the external view of the information hiding class – that is,
all the information required by the user of the class.
abstract operation
action
active object
activity
actor
An outside user or related set of users who interact with the system (Rumbaugh, Booch,
and Jacobson 2005).
actuator
The means by which a real-time computer system can control an external device or
mechanism
aggregate class
aggregation
algorithm object
analog data
analysis modeling
A phase of the COMET/RTE system and software life cycle in which static modeling
and dynamic modeling are performed. Compare design modeling and requirements
modeling.
aperiodic task
A task that is activated on demand. See event driven or demand driven task.
application deployment
A process for deciding which component instances are required, how component
instances should be allocated to physical nodes in a distributed environment, and how
component instances should be interconnected.
application logic object
An object that hides the details of the application logic separately from the data being
manipulated.
architectural pattern
association
availability
behavioral model
A model that describes the responses of the system to the inputs that the system receives
from the external environment. Also referred to as dynamic model.
binary semaphore
block definition diagram
A SysML diagram that is a class diagram in which each class has the stereotype
«block».
boundary object
A software object that interfaces to and communicates with the external environment.
broadcast communication
broker
brokered communication
callback
CASE
category
A specifically defined division in a system of classification.
class
class diagram
A UML diagram that depicts a static view of a system in terms of classes and the
relationships between classes. Compare interaction diagram.
class interface specification
A specification that defines the externally visible view of a class, including the
specification of the operations provided by the class.
client
client/server system
A system that consists of clients that request services and one or more servers that
provide services.
COMET
A software design method for real-time embedded systems. See COMET/RTE.
commonality
The functionality that is common to all members of a software product line. Compare
variability.
commonality/variability analysis
communication diagram
A UML 2 interaction diagram that depicts a dynamic view of a system in which objects
interact by using messages.
complex port
completion time theorem
A real-time scheduling theorem that states that for a set of independent periodic tasks, if
each task meets its first deadline, when all tasks are started at the same time, then the
deadlines will be met for any combination of start times.
component
A concurrent self-contained object with a well-defined interface, capable of being used
in different applications from that for which it was originally designed. Also referred to
as distributed component.
component-based system
composite component
composite state
A state on a statechart that is decomposed into two or more substates. Also referred to
as a superstate.
composite structure diagram
A UML 2 diagram that depicts the structure and interconnections of composite classes;
specifically used to depict components, ports, and connectors.
composite subsystem
composite task
composition
A form of whole/part relationship that is stronger than an aggregation; the part objects
are created, live, and die together with the composite (whole) object.
concrete class
A class that can be directly instantiated (Booch, Rumbaugh, and Jacobson 2005).
Compare abstract class.
concurrent
concurrent communication diagram
A communication diagram that depicts concurrent objects and their interactions in the
form of asynchronous and synchronous message communication.
concurrent object
An autonomous object that has its own thread of control. Also referred to as an active
object, process, task, thread, concurrent process, or concurrent task.
concurrent process
concurrent service
A service that services multiple client requests in parallel. Compare sequential service.
concurrent task
condition
The value of a Boolean variable that can be true or false over a finite interval of time.
connector
constraint
continuous data
control clustering
A task structuring criterion by which a control object is combined into a task with the
objects it controls.
control object
coordinator object
An overall decision-making object that determines the overall sequencing for a
collection of objects and is not state dependent.
critical section
data abstraction
An approach for defining a data structure or data type by the set of operations that
manipulate it, thus separating and hiding the representation details.
data abstraction class
A class that encapsulates a data structure or data type, thereby hiding the representation
details; operations provided by the class manipulate the hidden data.
data replication
deadlock
A situation in which two or more concurrent tasks are suspended indefinitely because
each task is waiting for a resource acquired by another task.
delegation connector
A connector that joins the outer port of a composite component to the inner port of a
part component such that messages arriving at the outer port are forwarded to the inner
port.
deployment diagram
A UML diagram that shows the physical configuration of the system in terms of
physical nodes and physical connections between the nodes, such as network
connections.
design concept
design method
A systematic approach for creating a design. The design method helps identify the
design decisions to be made, the order in which to make them, and the criteria used in
making them.
design modeling
A phase of the COMET/RTE system and software life cycle in which the software
architecture of the system is designed. Compare analysis modeling and requirements
modeling.
design notation
design pattern
design strategy
device interface object
An information hiding object that hides the characteristics of an I/O device and presents
a virtual device interface to its users.
device I/O boundary object
A software object that receives input from and/or outputs to a hardware I/O device.
discrete data
distributed
distributed application
distributed component
See component.
distributed service
domain-specific pattern
A view of a problem or system in which control and sequencing are considered by the
sequence of interaction among objects.
dynamic model
A view of a problem or system in which control and sequencing are considered, either
within an object by means of a finite state machine or by consideration of the sequence
of interaction among objects. Also referred to as behavioral model.
encapsulation
entity class
A class, in many cases persistent, whose instances are objects that encapsulate
information.
entity object
entry action
An action that is performed on entry into a state. Compare exit action.
environment simulator
A tool that models the inputs arriving from the external entities that interface to the
system, and feeds them to the systems being tested.
event
event driven I/O device
An input/output device that generates an interrupt when it has produced some input or
when it has finished processing an output operation.
event sequence
event trace
A time-ordered description of each external input and the time at which it occurred.
exit action
external block
A block that is outside the system and part of the external environment.
external event
An event from an external object, typically an interrupt from an external I/O device.
Compare internal event.
family of systems
feature
feature/class dependency
The relationship in which one or more classes support a feature of a software product
line (i.e., realize the functionality defined by the feature).
feature group
A group of features with a particular constraint on their usage in a software product line
member.
feature modeling
The process of analyzing and specifying the features and feature groups of a software
product line.
finite state machine
A conceptual machine with a finite number of states and state transitions that are
caused by input events. The notation used to represent a finite state machine is a state
transition diagram, statechart, or state transition table. Also referred to simply as state
machine.
formal method
A software engineering method that uses a formal specification language – that is, a
language with mathematically defined syntax and semantics.
generalization/specialization
idiom
information hiding
The concept of encapsulating software design decisions in objects in such a way that the
object’s interface reveals only what its users need to know. Also referred to as
encapsulation.
information hiding class
A class that is structured according to the information hiding concept. The class hides a
design decision and is accessed by means of operations.
inheritance
input object
A software device I/O boundary object that receives input from an external input
device.
input/output object
A software device I/O boundary object that receives input from and sends output to an
external I/O device.
interaction diagram
A UML diagram that depicts a dynamic view of a system in terms of objects and the
sequence of messages passed between them. Communication diagrams and sequence
diagrams are the two main types of interaction diagrams. Compare class diagram.
interface
internal event
I/O task structuring criteria
A category of the task structuring criteria that addresses how device I/O objects are
mapped to I/O tasks and when an I/O task is activated.
maintainability
MARTE
mathematical model
message dictionary
middleware
A layer of software that sits above the heterogeneous operating system to provide a
uniform platform above which distributed applications can run (Bacon 2003).
modifiability
The extent to which software is capable of being modified during and after initial
development.
monitor
A data object that encapsulates data and has operations that are executed mutually
exclusively.
multicast communication
See subscription/notification.
multiple-instance task inversion
A task clustering technique where all identical tasks of the same type are replaced by
one task that performs the same functionality.
mutual exclusion
An algorithm that allows only one concurrent task to have access to shared data at a
time, which can be enforced by means of binary semaphores or through the use of
monitors. Compare multiple readers and writers.
node
object
An instance of a class that contains both hidden data and operations on that data.
object broker
See broker.
object-oriented analysis
object-oriented design
A software design method based on the concept of objects, classes, and inheritance.
object request broker
See broker.
object structuring criteria
A set of heuristics for assisting a designer in structuring a system into objects. Also
referred to as class structuring criteria.
operation
output object
A software device I/O boundary object that sends output to an external output device.
package
part component
passive object
An object that has no thread of control; an object with operations that are invoked
directly or indirectly by concurrent objects.
performance analysis
performance model
An abstraction of the real computer system behavior, developed for the purpose of
gaining greater insight into the performance of the system, whether or not the system
actually exists.
period
periodic task
A concurrent task that is activated periodically (i.e., at regular, equally spaced intervals
of time) by a timer event.
Petri net
port
primary actor
priority ceiling protocol
An algorithm that provides bounded priority inversion; that is, at most one lower-
priority task can block a higher-priority task. See priority inversion.
priority inversion
priority message queue
A queue in which each message has an associated priority. The consumer always
accepts higher-priority messages before lower-priority messages.
process
A design method for software product lines that describes how to conduct requirements
modeling, analysis modeling, and design modeling for software product lines in UML.
profile
provided interface
Specifies the operations that a component (or class) must fulfill. Compare required
interface.
provided port
proxy object
A software object that interfaces to and communicates with an external system or
subsystem.
pseudocode
queuing model
rate monotonic algorithm
A real-time scheduling algorithm that assigns higher priorities to tasks with shorter
periods.
real-time
required interface
The operations that another component (or class) provides for a given component (or
class) to operate properly in a particular environment. Compare provided interface.
required port
requirements modeling
A phase of the COMET/RTE system and software software life cycle in which the
functional requirements of the system are determined through the development of use
case models. Compare analysis modeling and design modeling.
reuse category
reuse stereotype
RMI
role category
role stereotype
scalability
The extent to which the system is capable of growing after its initial deployment.
scenario
secondary actor
An actor that participates in (but does not initiate) a use case. Compare primary actor.
semaphore
sequence diagram
A UML interaction diagram that depicts a dynamic view of a system in which the
objects participating in the interaction are depicted horizontally, time is represented by
the vertical dimension, and the sequence of message interactions is depicted from top to
bottom.
sequential
sequential clustering
A task structuring criterion in which objects that are constrained to execute sequentially
are mapped to a task.
sequential service
A service that completes one client request before it starts servicing the next. Compare
concurrent service.
sensor
A device that detects events or changes in a physical property or entity and converts the
measurement or event into an electrical signal.
server
service
service object
simple component
simulation model
software application engineering
A process within software product line engineering in which the software product line
architecture is adapted and configured to produce a given software application, which is
a member of the software product line. Also referred to as application engineering.
software architectural structure pattern
A software architectural pattern that addresses the static structure of the software
architecture.
software architecture
software product line
A family of software systems that have some common functionality and some variable
functionality. Also referred to as family of systems, software product family, product
family, or product line.
software product line architecture
The architecture for a family of products, which describes the kernel, optional, and
variable components in the software product line, and their interconnections.
software product line engineering
A process for analyzing the commonality and variability in a software product line and
developing a product line use case model, product line analysis model, software product
line architecture, and reusable components. Also referred to as software product family
engineering, product family engineering, or product line engineering.
software system context diagram
A block definition diagram that depicts the relationships between the software system
and the external blocks outside the software system. Compare system context diagram.
spiral model
state
statechart
A UML hierarchical state transition diagram in which the nodes represent states and the
arcs represent state transitions.
An object that hides the details of a finite state machine; that is, the object encapsulates
a statechart, a state transition diagram, or the contents of a state transition table.
state machine
See statechart.
state transition
state transition diagram
A graphical representation of a finite state machine in which the nodes represent states
and the arcs represent transitions between states.
static modeling
stereotype
A classification that defines a new building block that is derived from an existing UML
modeling element but is tailored to the modeler’s problem (Booch, Rumbaugh, and
Jacobson 2005).
structural modeling
subscription/notification
substate
subsystem
A significant part of the whole system; a subsystem provides a subset of the overall
system functionality.
superstate
A composite state.
SysML
A visual modeling language based on UML 2 for modeling systems requirements and
designs.
system context diagram
A block definition diagram that depicts the relationships between the system and the
external blocks outside the system. Compare software system context diagram.
system context model
task
task architecture
A category of the task structuring criteria that addresses whether and how objects
should be grouped into concurrent tasks.
task inversion
A task clustering concept that originated in Jackson Structured Programming and
Jackson System Development, whereby the tasks in a system can be merged in a
systematic way.
task priority criteria
A category of the task structuring criteria that addresses the importance of executing a
given task relative to others.
task structuring
A set of heuristics for assisting a designer in structuring a system into concurrent tasks.
temporal clustering
A task structuring criterion by which activities that are not sequentially dependent but
are activated by the same event are grouped into a task.
testability
The extent to which software is capable of being tested during and after its initial
development.
thread
time-critical task
timed Petri net
A Petri net that allows finite times to be associated with the firing of transitions.
timer event
timer object
timing diagram
traceability
The extent to which products of each phase can be traced back to products of previous
phases.
UML
Unified Software Development Process (USDP)
An iterative use case-driven software process that uses the UML notation.
use case
A description of a sequence of interactions between one or more actors and the system.
use case diagram
A UML diagram that shows a set of use cases and actors and their relationships (Booch,
Rumbaugh, and Jacobson 2005).
use case model
A description of the functional requirements of the system in terms of actors and use
cases.
use case modeling
The process of developing the use cases of a system or software product line.
utilization bound theorem
A real-time scheduling theorem that states the conditions under which a set of n
independent periodic tasks scheduled by the rate monotonic algorithm will always meet
their deadlines.
variability
The functionality that is provided by some, but not all, members of the software product
line. Compare commonality.
variation point
A location at which change can occur in a software product line artifact (e.g., in a use
case or class).
visibility
The characteristic that defines whether an element of a class is visible from outside the
class.
white page brokering
A pattern of communication between a client and a broker in which the client knows the
service required but not the location. Compare yellow page brokering.
whole/part relationship
yellow page brokering
A pattern of communication between a client and a broker in which the client knows the
type of service required but not the specific service. Compare white page brokering.
Bibliography
Albassam E., H. Gomaa, and R. Pettit. 2014. Experimental Analysis of Real-Time
Multitasking on Multicore Systems, Proc. 17th IEEE Symposium on
Object/Component/Service-oriented Real-time Distributed Computing (ISORC), June
2014.
Ambler, S. 2005. The Elements of UML 2.0 Style. New York: Cambridge University Press.
Ammann, P. and J. Offutt. 2008. Introduction to Software Testing. New York: Cambridge
University Press.
Awad, M., J. Kuusela, and J. Ziegler. 1996. Object-Oriented Technology for Real-Time
Systems: A Practical Approach Using OMT and Fusion. Upper Saddle River, NJ: Prentice
Hall.
Bass, L., P. Clements, and R. Kazman. 2013. Software Architecture in Practice, 3rd ed.
Boston: Addison-Wesley.
Beck, K. and C. Andres. 2005. Extreme Programming Explained: Embrace Change, 2nd
ed. Boston: Addison-Wesley.
Bjorkander, M. and C. Kobryn. 2003. “Architecting Systems with UML 2.0.” IEEE
Software 20(4): 57–61.
Blaha, J. and J. Rumbaugh. 2005. Object-Oriented Modeling and Design, 2nd ed. Upper
Saddle River, NJ: Pearson Prentice Hall.
Booch, G., J. Rumbaugh, and I. Jacobson. 2005. The Unified Modeling Language User
Guide, 2nd ed. Boston: Addison-Wesley.
Bosch, J. 2000. Design & Use of Software Architectures: Adopting and Evolving a
Product-Line Approach. Boston: Addison-Wesley.
Bruno, E. and G. Bollella. 2009. Real-Time Java Programming: With Java RTS. Upper
Saddle River, NJ: Prentice Hall
Buede, D. M. 2009. The Engineering Design of Systems: Methods and Models. 2nd ed.
New York: Wiley.
Buhr, R. J. A. and R. S. Casselman, 1996. Use Case Maps for Object-Oriented Systems.
Upper Saddle River, NJ: Prentice Hall.
Burns, A. and A. Wellings, 2009. Real-Time Systems and Programming Languages, 4th
ed. Boston: Addison Wesley.
Clements, P. and L. Northrop. 2002. Software Product Lines: Practices and Patterns.
Boston: Addison-Wesley.
Cockburn, A. 2006. Agile Software Development: The Cooperative Game, 2nd ed. Boston:
Addison-Wesley.
Cohn, M. 2006. Agile Estimating and Planning. Upper Saddle River, NJ: Pearson Prentice
Hall.
Comer, D. E. 2008. Computer Networks and Internets, 5th ed. Upper Saddle River, NJ:
Pearson Prentice Hall.
Cooling, J. 2003. Software Engineering for Real-Time Systems. Harlow: Addison Wesley.
Davis, A. 1993. Software Requirements: Objects, Functions, and States, 2nd ed. Upper
Saddle River, NJ: Prentice Hall.
Dollimore, J., T. Kindberg, and G. Coulouris. 2005. Distributed Systems: Concepts and
Design, 4th ed. Boston: Addison-Wesley.
Douglass, B. P. 1999. Doing Hard Time: Developing Real-Time Systems with UML,
Objects, Frameworks, and Patterns. Reading, MA: Addison-Wesley.
Douglass, B. P. 2002. Real-Time Design Patterns: Robust Scalable Architecture for Real-
Time Systems. Boston: Addison-Wesley.
Douglass, B. P. 2004. Real Time UML: Advances in the UML for Real-Time Systems, 3rd
ed. Boston: Addison-Wesley.
Eeles, P., K. Houston, and W. Kozaczynski. 2002. Building J2EE Applications with the
Rational Unified Process. Boston: Addison-Wesley.
Eriksson, H. E., M. Penker, B. Lyons, et al. 2004. UML 2 Toolkit. Indianapolis, IN: Wiley.
Espinoza H., D. Cancila, B. Selic, and S. Gérard, 2009. “Challenges in Combining SysML
and MARTE for Model-Based Design of Embedded Systems.” Lecture Notes in Computer
Science 5562, pp. 98–113. Berlin: Springer.
Fowler, M. 2004. UML Distilled: Applying the Standard Object Modeling Language, 3rd
ed. Boston: Addison-Wesley.
Friedenthal S, A. Moore, and R. Steiner, 2015. A Practical Guide to SysML: The Systems
Modeling Language, 3rd ed. San Francisco: Morgan Kaufmann.
Gamma, E., R. Helm, R. Johnson, and J. Vlissides. 1995. Design Patterns: Elements of
Reusable Object-Oriented Software. Reading, MA: Addison-Wesley.
Gomaa, H. 1984. “A Software Design Method for Real Time Systems.” Communications
of the ACM 27(9): 938–949.
Gomaa, H. 1989b. “Structuring Criteria for Real Time System Design.” In Proceedings of
the 11th International Conference on Software Engineering, May 15–18, 1989, Pittsburgh,
PA, USA, pp. 290–301. Los Alamitos, CA: IEEE Computer Society Press.
Gomaa, H. 1993. Software Design Methods for Concurrent and Real-Time Systems.
Reading, MA: Addison-Wesley.
Gomaa, H. 2005a. Designing Software Product Lines with UML. Boston: Addison-
Wesley.
Gomaa, H. 2005b. “Modern Software Design Methods for Concurrent and Real-Time
Systems.” In Software Engineering, vol. 1: The Development Process. 3rd ed. M. Dorfman
and R. Thayer (eds.), pp 221–234. Hoboken, NJ: Wiley Interscience.
Gomaa H. 2011. Software Modeling and Design: UML, Use Cases, Patterns, and Software
Architectures. New York: Cambridge University Press.
Harel, D. and M. Politi. 1998. Modeling Reactive Systems with Statecharts: The Statemate
Approach. New York: McGraw-Hill.
Hatley, D. and I. Pirbhai. 1988. Strategies for Real Time System Specification. New York:
Dorset House.
Hofmeister, C., R. Nord, and D. Soni. 2000. Applied Software Architecture. Boston:
Addison-Wesley.
Jackson, M. 1983. System Development. Upper Saddle River, NJ: Prentice Hall.
Jacobson, I., G. Booch, and J. Rumbaugh. 1999. The Unified Software Development
Process. Reading, MA: Addison-Wesley.
Jacobson, I., M. Griss, and P. Jonsson. 1997. Software Reuse: Architecture, Process and
Organization for Business Success. Reading, MA: Addison-Wesley.
Jacobson, I., and P.W. Ng. 2005. Aspect-Oriented Software Development with Use Cases.
Boston: Addison-Wesley.
Jain, R. 2015. The Art of Computer Systems Performance Analysis: Techniques For
Experimental Design Measurements Simulation and Modeling. 2nd ed. New York: Wiley.
Jazayeri, M., A. Ran, and P. Van Der Linden. 2000. Software Architecture for Product
Families: Principles and Practice. Boston: Addison-Wesley.
Kang, K., S. Cohen, J. Hess, et al. 1990. Feature-Oriented Domain Analysis (FODA)
Feasibility Study (Technical Report No. CMUSEI-90-TR-021). Pittsburgh, PA: Software
Engineering Institute. Available online at
www.sei.cmu.edu/publications/documents/90.reports/90.tr.021.html.
Kim, M., S. Kim, S. Park, et al. 2009. “Service Robot for the Elderly: Software
Development with the COMET/UML Method.” IEEE Robotics and Automation Magazine, March 2009.
Kroll, P. and P. Kruchten. 2003. The Rational Unified Process Made Easy: A
Practitioner’s Guide to the RUP. Boston: Addison-Wesley.
Kruchten, P. 1995. “The 4+1 View Model of Architecture.” IEEE Software 12(6): 42–50.
Kruchten, P. 2003. The Rational Unified Process: An Introduction, 3rd ed. Boston:
Addison-Wesley.
Laplante P. 2011. Real-Time Systems Design and Analysis: Tools for the Practitioner, 4th
ed. New York: Wiley-IEEE Press.
Larman, C. 2004. Applying UML and Patterns, 3rd ed. Boston: Prentice Hall.
Lauzac, S., R. Melhem, and D. Mosse. 1998. “Comparison of Global and Partitioning
Schemes for Scheduling Rate Monotonic Tasks on a Multiprocessor.” In Proceedings of the
EuroMicro Workshop on Real-Time Systems, 188–195.
Lea, D. 2000. Concurrent Programming in Java: Design Principles and Patterns, 2nd ed.
Boston: Addison-Wesley.
Lehoczky, J. P., L. Sha, and Y. Ding. 1987. “The Rate Monotonic Scheduling Algorithm:
Exact Characterization and Average Case Behavior.” Proc. IEEE Real-Time Systems
Symposium, San Jose, CA, December 1987.
Leung, J. and J. Whitehead. 1982. “On the Complexity of Fixed Priority Scheduling of
Periodic, Real-Time Tasks.” Performance Evaluation 2: 237–250.
Li, Q. and C. Yao. 2003. Real-Time Concepts for Embedded Systems. New York: CMP
Books.
Magee, J. and J. Kramer. 2006. Concurrency: State Models & Java Programs, 2nd ed.
Chichester, England: Wiley.
Menascé, D. A. and H. Gomaa. 2000. “A Method for Design and Performance Modeling
of Client/Server Systems.” IEEE Transactions on Software Engineering 26: 1066–1085.
Meyer, B. 2000. Object-Oriented Software Construction, 2nd ed. Upper Saddle River, NJ:
Prentice Hall.
Meyer, B. 2014. Agile! The Good, the Hype, and the Ugly. Switzerland: Springer.
Object Management Group (OMG). 2015. “MDA – The Architecture Of Choice For A
Changing World.” http://www.omg.org/mda/
Parnas, D. 1972. “On the Criteria to Be Used in Decomposing a System into Modules.”
Communications of the ACM 15: 1053–1058.
Parnas, D. 1979. “Designing Software for Ease of Extension and Contraction.” IEEE
Transactions on Software Engineering 5(2): 128–138.
Parnas, D., P. Clements, and D. Weiss. 1984. “The Modular Structure of Complex
Systems.” In Proceedings of the 7th International Conference on Software Engineering,
March 26–29, 1984, Orlando, Florida, pp. 408–419. Los Alamitos, CA: IEEE Computer
Society Press.
Pettit, R. and H. Gomaa. 2007. “Analyzing Behavior of Concurrent Software Designs for
Embedded Systems.” In Proceedings of the 10th IEEE International Symposium on Object
and Component-Oriented Real-Time Distributed Computing, Santorini Island, Greece,
May 2007.
Pfleeger, C., S. Pfleeger, and J. Margulies. 2015. Security in Computing. 5th ed. Upper
Saddle River, NJ: Prentice Hall.
Pressman, R. 2009. Software Engineering: A Practitioner’s Approach, 7th ed. New York:
McGraw-Hill.
Quatrani, T. 2003. Visual Modeling with Rational Rose 2002 and UML. Boston: Addison-
Wesley.
Rumbaugh, J., G. Booch, and I. Jacobson. 2005. The Unified Modeling Language
Reference Manual, 2nd ed. Boston: Addison-Wesley.
Sage, A. P. and J. E. Armstrong, Jr. 2000. An Introduction to Systems Engineering. New
York: John Wiley & Sons.
Schmidt, D., M. Stal, H. Rohnert, et al. 2000. Pattern-Oriented Software Architecture,
Volume 2: Patterns for Concurrent and Networked Objects. Chichester, England: Wiley.
Schneider, G. and J. P. Winters. 2001. Applying Use Cases: A Practical Guide, 2nd ed.
Boston: Addison-Wesley.
Selic, B., and S. Gerard. 2014. Modeling and Analysis of Real-Time and Embedded
Systems: Developing Cyber-Physical Systems with UML and MARTE. Burlington, MA:
Morgan Kaufmann.
Selic, B., G. Gullekson, and P. Ward. 1994. Real-Time Object-Oriented Modeling. New
York: Wiley.
Sha L. and J. B. Goodenough. 1990. “Real-Time Scheduling Theory and Ada.” IEEE
Computer 23(4), 53–62.
Shan, Y. P. and R. H. Earle. 1998. Enterprise Computing with Objects. Reading, MA:
Addison-Wesley.
Silberschatz, A., P. Galvin, and G. Gagne. 2013. Operating System Concepts, 9th ed. New
York: Wiley.
Simpson, H. 1986. “The MASCOT Method.” IEE/BCS Software Engineering Journal
1(3): 103–120.
Sprunt, B., J. P. Lehoczky, and L. Sha. 1989. “Aperiodic Task Scheduling for Hard Real-Time
Systems.” The Journal of Real-Time Systems 1: 27–60.
Sutherland, J. 2014. Scrum: The Art of Doing Twice the Work in Half the Time. New York:
Crown Business.
Tanenbaum, A. S. 2011. Computer Networks, 5th ed. Upper Saddle River, NJ: Prentice
Hall.
Tanenbaum, A. S. 2014. Modern Operating Systems, 4th ed. Upper Saddle River, NJ:
Prentice Hall.
Ward P. and S. Mellor, 1985. Structured Development for Real-Time Systems, vols. 1, 2,
and 3, Upper Saddle River, NJ: Yourdon Press, Prentice Hall.
Warmer, J. and A. Kleppe. 1999. The Object Constraint Language: Precise Modeling with
UML. Reading, MA: Addison-Wesley.
Webber, D. and H. Gomaa. 2004. “Modeling Variability in Software Product Lines with
the Variation Point Model.” Journal of Science of Computer Programming 53(3):
305–331, Amsterdam: Elsevier.
Wellings, A. 2004. Concurrent and Real-Time Programming in Java. New York: Wiley.
Index
4+1 view model of software architecture,59
action,19, 105
entry,19, 108
exit,19, 108
transition,106
activity,19, 110
external system,83
generalized,85
input device,84
physical entity,83
primary,81
secondary,81
actuator,5
aggregation hierarchy,16, 64
agile methods,57
algorithm
object,128, 139
algorithm task
demand driven,244
analysis model,144
analysis modeling,53
analysis patterns,185
application logic
object,128, 139
architectural patterns,185
association,15, 62
multiplicity of,62
atomic operation,40
attribute,33
autonomy
localized,222
availability,7, 315
requirements,88
behavioral pattern
object,129
external,73
block definition diagram,28, 61, 67, 69
boundary object,127, 129
broker,203
Broker Forwarding pattern,204
class,14, 33
synchronization of access to,274
class diagram,15, 61
subsystem,166
classification
of application classes,127
client,173, 194
user interaction,177
client subsystem,179
COMET,59
COMET/RTE,xx, 10, 51
concurrent,20
subsystem,169
mathematical formulation,329
component,47
composite,167
I/O,223
interface inheritance,309
mobile,173
plug-compatible,308
component interface,213
component performance,223
composite object,172
composite task,266
composition hierarchy,16, 64
concurrency,7
concurrent
application,37
systems,40
task,37
concurrent communication diagram,22, 267
concurrent tasks
implementation in Java,295
condition,19, 103
configuration
requirements,88
connector,47, 215
example of use for asynchronous communication,289
message buffer,286
constraint,27
context modeling,69
context switching,39, 46
control
object,127, 137
state dependent,176
control subsystem,176
coordinator
object,128, 138
coordinator subsystem,177
coordinator task,246
critical region,40
critical section,40, 43
cyber-physical system,3
data abstraction,35
object,136
data distribution,227
data replication,227
database wrapper
object,136
deadlock,43
delegation connector,217
application,228
deployment diagram,23, 169, 229
design
rationale,318
design anti-patterns,185
Design Approach for Real-Time Systems (DARTS),58
design for change,319
design modeling,54
for software product lines,308
design pattern,184
design restructuring,367
discrete data,236
Distributed Collaborative Control pattern,191, 532
distributed component,212
distributed control,7
domain engineering,298
domain–specific patterns,185
duration,5
stateless,143, 146
encapsulation,35
entity
object,128, 136
class,69
event,5, 19, 101, 151
external,262
internal,263
timer,262
event synchronization,262
extend
external entities,70
alternative,301
common,301
group,301
prerequisite,301
feature modeling
functional requirements,53
generalization/specialization,37
generalization/specialization hierarchy,16, 65
generalized completion time theorem
example of,363
generalized real-time scheduling
example of,359
geographical location,173
group message communication,206
guard condition,103
hardware/software boundary,76
specification,76
hazard,316
I/O device
event driven,236
hardware characteristics,235
interrupt-driven,236
passive,236
I/O task
demand driven,240
event driven,236
periodic,238
sensor-based periodic,238
idioms,185
implementation,165
include
relationship,94
inheritance,16, 36, 65
as taxonomy,65
classification mechanism,65
input class
passive,268
input object,130
input/output (I/O)
object,130, 131
input/output subsystem,178
generic form,145
instance form,145
abstract,35
external,35
provided,26
required,26
virtual,35
Internet of Things,9
Jackson System Development (JSD),58
location transparency,203
maintainability,318
MARTE,xx, 11, 12, 28, 51
stereotype,234
message,151
message communication,181
asynchronous,182
bidirectional,181
synchronous,182
unidirectional,181
middleware,8
model-driven architecture,12
modifiability,319
monitor,43, 278
condition synchronization,279
mutual exclusion,278
Multicast communication,207
multicore systems,7, 234
multiple views
of system and software architecture,59
multiplicity
of an association,15
global scheduling,339
partitioned scheduling,339
multitasking
kernel,45
object,15, 32
active,20, 37
concurrent,20, 37
passive,37
Octopus,58
OMG,12
operating system
kernel,43
services,44
operation,33
output class
passive,268
package diagram,20
performance,315
requirements,88
performance analysis,315
of software designs,315
Performance Analysis
performance parameters
estimation and measurement,343
period,6
periodic task,242
platform transparency,203
platform-independent model,13
platform-specific model,13
polling,236
port,214
complex,215
provided,215
required,215
priority inheritance,44
process,38, 39
heavyweight,39
lightweight,39
process control,6
producer/consumer problem,41
provided interface,214
proxy
object,127, 130, 133
proxy task
event driven,242
pseudocode
periodic input task,552
Publish/Subscribe pattern,207
publisher,207
quality of service,164
real-time control,6
distributed,7
real-time embedded systems
characteristics,5
aperiodic tasks,330
periodic tasks,325
Real-Time Scheduling
Advanced,339
task synchronization,331
Generalized,331
Real-Time Structured Analysis and Design,58
real-time system,3, 4
hard,4
soft,4
real-time systems,315
required interface,214
requirements modeling,53
resource monitor task,240
RFID,9
run to completion semantics,101
safety,316
requirements,88
scalability,313
requirements,88
scenario,145
scheduling algorithm
priority preemption,45
round-robin,45
scope of control,175
security,317, 318
semaphore,40
sensor,5
separation of concerns,172
concurrent,20
sequential clustering,253
issues,254
sequential software architecture,164
server,194
service,173, 181, 194
object,128, 141
service component,224
concurrent,224, 226
sequential,224
service subsystem,180
simple component,165, 212
smart device,236
deployment view,169
dynamic view,168
multiple views,166
structural view,166
software evolution,318
software maintenance,318
software modeling,61
software product line,297, 323
context diagram,304
engineering,298
evolution approach,305
software quality attributes,4, 55, 313
software reusability,322
associations,74
spiral model,57
state,102
composite,113
history,116
state decomposition
sequential,113
object,128, 137
task,245
class,269
diagram,19
flat,113
hierarchical,113
inheritance,119
integration,124
orthogonal,117
state transition,101
aggregation,114
state transition diagram,19, 100
statechart,100
statechart diagram,19
statecharts,58
static modeling,54, 61
concepts,62
architectural,166
concurrency,234
definition,66
MARTE,166
reuse,303
structural element,61
component,221
I/O task,235
synchronization
writer starvation,282
synchronized methods,278
Synchronized Object Access pattern,198, 263, 548
associations,70
system deployment modeling,53, 77
system modeling,61
system quality attributes,313
system testing,56
Systems Modeling Language (SysML),11, 12
tagged value,27
task,39, 233
behavior specification,264, 291
periodic algorithm,242
sampling rate,239
scheduling,45
states,45
synchronization,41, 44
task architecture,233
task clustering criteria,250
task interaction
task inversion,256
multiple-instance,256
internal,242
TCP/IP protocol,8, 186
testability,320
thread,39
throwaway prototype,53
time
blocked,6
elapsed,6
execution,6
physical,6
time-critical task,249
timer
timing constraints,4, 6
traceability,321
requirements,321
transition
UML,xx, 12, 51
UML tools,30
base,92, 95
extension,95
extension point,95
inclusion,92
kernel,299
main sequence,86
model,53, 79
modeling,79
optional,299
package,98
relationships,92
variation point,300
use cases
documentation of,87
user interaction,174
Generalized,333
visibility,16
private,16
protected,16
public,16
Voice over IP,187