The Embedded Project Cookbook
A Step-by-Step Guide for Microcontroller Projects

John T. Taylor
Wayne T. Taylor

John T. Taylor, Covington, GA, USA
Wayne T. Taylor, Golden, CO, USA
Copyright © 2024 by The Editor(s) (if applicable) and The Author(s), under
exclusive license to APress Media, LLC, part of Springer Nature
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way,
and transmission or information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Melissa Duffy
Development Editor: James Markham
Editorial Project Manager: Gryffin Winkler
Cover designed by eStudioCalamar
Cover image designed by Tom Christensen from Pixabay
Distributed to the book trade worldwide by Springer Science+Business Media New York, 1
New York Plaza, Suite 4600, New York, NY 10004-1562, USA. Phone 1-800-SPRINGER, fax (201)
348-4505, e-mail [email protected], or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail [email protected]; for reprint,
paperback, or audio rights, please e-mail [email protected].
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our Print
and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is
available to readers on GitHub. For more detailed information, please visit https://www.apress.
com/gp/services/source-code.
If disposing of this product, please recycle the paper
To Sally, Bailey, Kelly, and Todd.
—J.T.
Table of Contents

About the Authors ..... xiii
Preface ..... xvii

Chapter 1: Introduction ..... 1
  Software Development Processes ..... 2
  Software Development Life Cycle ..... 5
  Outputs and Artifacts ..... 7
  What You'll Need to Know ..... 8
  Coding in C and C++ ..... 9
  What Toys You Will Need ..... 9
  Regulated Industries ..... 10
  What Is Not Covered ..... 11
  Conclusion ..... 12

Chapter 2: Requirements ..... 13
  Formal Requirements ..... 14
  Functional vs. Nonfunctional ..... 16
  Sources for Requirements ..... 16
  Challenges in Collecting Requirements ..... 18
  Exiting the Requirements Step ..... 19
  GM6000 ..... 19
  Summary ..... 22

Chapter 3: Analysis ..... 25
  System Engineering ..... 26
    GM6000 System Architecture ..... 26
  Software Architecture ..... 28
  Moving from Inputs to Outputs ..... 30
    Hardware Interfaces ..... 31
    Performance Constraints ..... 32
    Programming Languages ..... 34
    Subsystems ..... 35
    Subsystem Interfaces ..... 40
    Process Model ..... 42
    Functional Simulator ..... 45
    Cybersecurity ..... 48
    Memory Allocation ..... 49
    Inter-thread and Inter-process Communication ..... 50
    File and Directory Organization ..... 51
    Localization and Internationalization ..... 52
  Requirement Traceability ..... 54
  Summary ..... 56

Chapter 5: Preparation ..... 77
  GitHub Projects ..... 78
  GitHub Wiki ..... 79
  Continuous Integration Requirements ..... 82
  Jenkins ..... 84
  Summary ..... 86

Chapter 6: Foundation ..... 89
  SCM Repositories ..... 90
  Source Code Organization ..... 90
  Build System and Scripts ..... 92
  Skeleton Applications ..... 94
  CI "Build-All" Script ..... 94
  Software Detailed Design ..... 95
  Summary ..... 98

  Separation of Concerns ..... 238
  Polymorphism ..... 256
  Dos and Don'ts ..... 263
  Summary ..... 265

Appendix G: RATT ..... 437
Appendix H: GM6000 Requirements ..... 449

Index ..... 671
About the Authors
John Taylor has been an embedded developer
for over 30 years. He has worked as a firmware
engineer, technical lead, system engineer,
software architect, and software development
manager for companies such as Ingersoll
Rand, Carrier, Allen-Bradley, Hitachi Telecom,
Emerson, AMD, and several startup companies.
He has developed firmware for products
that include HVAC control systems, telecom
SONET nodes, IoT devices, microcode for
communication chips, and medical devices.
He is the co-author of five US patents and holds a bachelor’s degree in
mathematics and computer science.
About the Technical Reviewer
Jeff Gable is an embedded software consultant
for the medical device industry, where
he helps medical device startups develop
bullet-proof software to take their prototypes
through FDA submission and into production.
Combining his expertise in embedded
software, FDA design controls, and practical
Agile methodologies, Jeff helps existing
software teams be more effective and efficient
or handles the entire software development
and documentation effort for a new device.
Jeff has spent his entire career doing safety-critical product
development in small, cross-disciplinary teams. After stints in aerospace,
automotive, and medical, he founded Gable Technology, Inc. in 2019 to
focus on medical device startups. He also co-hosts the Agile Embedded
podcast, where he discusses how device developers don't have to choose
between time-to-market and quality.
In his spare time, Jeff enjoys rock climbing, woodworking, and
spending time with his wife and two small children.
Preface
My personal motivation for writing this cookbook is so that I never have to
start an embedded project from scratch again. I am tired of reinventing the
wheel every time I move to a new project, or new team, or new company.
I have started over many times, and every time I find myself doing all the
same things over again. This, then, is a cookbook for all the “same things”
I do—all the same things that I inevitably have to do. In a sense, these are
my recipes for success.
On my next “new project,” I plan to literally copy and paste from the
code and documentation templates I have created for this book. And for
those bits that are so different that a literal copy and paste won’t work, I
plan to use this cookbook as a “reference design” for generating the new
content. For example, suppose for my next project I need a hash table
(i.e., a dictionary) that does not use dynamic memory allocation. My
options would be one of the following:

1. Copy and paste an existing, proven implementation (such as the one provided with this book) into a new file.
2. Use the code and documentation templates in this book as a reference design and write a new implementation.
3. Write it from scratch.
For me, the perfect world choice is option one—copy, paste into a new
file, and then “save as” with a new file name. Option two would be to use
the material in this book as a reference design. Start with one of the code
or documentation templates and adapt it to the needs of the new project.
And option three would be the last resort. Been there; done that; don’t
want to do it ever again.
CHAPTER 1
Introduction
The purpose of this cookbook is to enable the reader to never have to
develop a microcontroller software project from scratch. By a project,
I mean everything that is involved in releasing a commercially viable
product that meets industry standards for quality. A project, therefore,
includes noncode artifacts such as software processes, software
documentation, continuous integration, design reviews and code reviews,
etc. Of course, source code is included in this as well. And it is production-
quality source code; it incorporates essential middleware such as an OS
abstraction layer (OSAL), containers that don’t use dynamic memory,
inter-thread communication modules, a command-line console, and
support for a functional simulator.
The book is organized in the approximate chronological order of a
software development life cycle. In fact, it begins with a discussion of the
software development process and the software development life cycle.
However, the individual chapters are largely independent and can stand
alone. Or, said another way, you are encouraged to navigate the chapters in
whatever order seems most interesting to you.
The more additional processes and steps you add, the more
sophisticated your development process becomes, and—if you add the
right additional processes—the better the results. Figure 1-3 illustrates this
continuum.
want to do.” And, yes, it’s not fun writing architecture documentation or
automated unit tests and the like, but it's the difference between being a
hacker and being a professional, between spit-and-baling-wire and craftsmanship.
The software development life cycle described in this book consists of three stages:
• Planning
• Construction
• Release
These three stages are waterfall in nature. That is, you typically
don’t want to start the construction stage until the planning stage has
completed. That said, work within each stage is very much iterative, so if
new requirements (planning) arise in the middle of coding (construction),
the new requirements can be accommodated in the next iteration through
the construction phase. To some, in this day of Agile development, it might
seem like a step backward to employ even a limited waterfall approach, but
I would make the following counter arguments:
grade” and “production quality.” That is, everything in this book has been
used and incorporated in real-life products. Nevertheless, there are some
limitations to the GM6000 project:
1. Paraphrased from John W. Berman: "There's never enough time to do it right, but there's always enough time to do it over."
• Target hardware.
• STMicroelectronics’ NUCLEO-F413ZH
development board.
Regulated Industries
Most of my early career was spent working in domains with no or very
minimal regulatory requirements. But when I finally did work on medical
devices, I was pleased to discover that the best practices I had accumulated
over the years were reflected in the quality processes required by the FDA
or EMA. Consequently, the processes presented here are applicable to
both nonregulated and regulated domains. Nevertheless, if you’re working
in a regulated industry, you should compare what is presented here against
your specific circumstances and then make choices about what to adopt,
exclude, or modify to fit your project’s needs.
Additionally, while they are worthy topics for discussion, this book
only indirectly touches on the following:
• Multithreading
• Real-time scheduling
• Interrupt handling
• Algorithm design
This is not to say that the framework does not support multithreading
or interrupt handling or real-time scheduling. Rather, I didn’t consider
this book the right place for those discussions. To extend the cookbook
metaphor a little more, I consider those topics a list of ingredients. And while
ingredients are important, I’m more interested here in the recipes that
detail how to prepare, combine, and bake it all together.
Conclusion
Finally, it is important to understand that this book is about how to
productize software, not a book on how to evaluate hardware or create a
proof of concept. In my experience, following the processes described in
this book will provide you and your software team with the tools to achieve
a high-quality, robust product without slowing down the project timeline.
Again, for a broader discussion of why I consider these processes best
practices, I refer you to Patterns in the Machine,2 which makes the case for
the efficiency, flexibility, and maintainability of many of these approaches
to embedded software development.
2. John Taylor and Wayne Taylor. Patterns in the Machine: A Software Engineering Guide to Embedded Development. Apress Publishers, 2021.
CHAPTER 2
Requirements
Collecting requirements is the first step in the planning stage. This is where
you and your team consolidate the user and business needs into problem
statements and then define in rough terms how that problem will be
solved. Requirements articulate product needs like
• Functions
• Capabilities
• Attributes
• Capacities
These written requirements become the inputs for the second step in
the planning phase. Most of the time, though, the analysis step needs to
start before the requirements have all been collected and agreed upon.
Consequently, don’t burden yourself with the expectation that all the
requirements need to be defined before exiting the requirements step.
Rather, identify an initial set of requirements with your team as early
as possible to ensure there’s time to complete the analysis step. The
minimum deliverable or output for the requirements step is a draft set of
requirements that can be used as input for the analysis step.
Formal Requirements
Typically, requirements are captured in a table form or in a database.
If the content of your requirements is presented in a natural language
form or story form, it is often referred to as a product specification. In
my experience, a product specification is a better way to communicate
to people an overall understanding of the requirements; however, a list
of formal requirements is a more efficient way to track work items and
• Availability
• Compatibility
• Reliability
• Maintainability
• Manufacturability
• Regulatory
• Scalability
The initial set of requirements coming out of the planning stage will
be a mix of MRS, PRS, and SRS requirements. This is to be expected as no
development life cycle is truly waterfall.
GM6000
Table 2-1 is the list of initial requirements for a hypothetical heater
controller that I like to call the GM6000. This list is intended to illustrate
the kinds of requirements that are available when you start to develop the
software architecture in the analysis step. As you make progress on the
software architecture, additional requirements will present themselves,
and you will need to work with your extended team to get the new
requirements included in the MRS or PRS requirements documents.
MR-107 (User interface, Rel 1.0): The DHC unit shall support display, LEDs, and user inputs (e.g., physical buttons, keypad membrane, etc.). The arrangement of the display and user inputs can be different between heater enclosures.

MR-108 (User actions, Rel 1.0): The DHC display, LEDs, and user inputs shall allow the user to do the following:
• Turn the heater on and off
• Set the maximum fan speed
• Specify the temperature set point

MR-109 (User information, Rel 1.0): The DHC display and LEDs shall provide the user with the following information:
• Current temperature
• DHC on/off state
• Active heating state
• Fan on/off state
• Alerts and failure conditions

PR-100 (Sub-assemblies, Rel 1.0): The DHC heater enclosure shall contain the following sub-assemblies:
• Control Board (CB)
• Heating Element (HE)
• Display and User Inputs (DUI)
• Blower Assembly (BA)
• Power Supply (PS)
• Temperature Sensor (TS)

PR-101 (Wireless module, Rel 2.0): The DHC heater enclosure shall contain the following sub-assembly:
• Wireless Module (WM)

PR-103 (Heater safety, Rel 1.0): The Heating Element (HE) sub-assembly shall contain a hardware temperature protection circuit that forces the heating source off when it exceeds the designed safety limits.

PR-105 (Heater element interface, Rel 1.0): The Heating Element (HE) sub-assembly shall have a proportional heating output interface to the Control Board (CB).

PR-106 (Blower assembly interface, Rel 1.0): The Blower Assembly (BA) sub-assembly shall have a proportional speed control interface to the Control Board (CB).

PR-107 (Temperature sensor, Rel 1.0): The Temperature Sensor (TS) sub-assembly shall use a thermistor for measuring space temperature.

Note: "Rel" indicates the commercial release, where Rel 1.0 is the initial product release.
Summary
The goal of the requirements step is to identify the problem statement
presented by the user and business needs. In addition, a high-level
solution is identified and proposed for the problem statement. Both the
problem statement and the high-level solution are captured in the form of
formal requirements.
INPUTS
• User needs
• Business needs
OUTPUTS
CHAPTER 3
Analysis
In the analysis step of the planning stage, you will create three artifacts:
• System architecture (SA)—The system architecture
document describes the discrete pieces or units of
functionality that will be tied together to make the
product. It consists of diagrams with boxes and lines
whose semantics usually mean “contains” and “is
connected to.”
• Software architecture (SWA) documents—The software
architecture, on the other hand, provides the designs
for the system components that describe how each unit
works. These designs usually contain diagrams that are
more sophisticated in that they may be structural or
behavioral and their lines and boxes often have more
particular meanings or rules associated with them (like
UML diagrams).
• Requirements trace matrix—The requirements trace
matrix is generally a spreadsheet that allows you to
map each requirement to the code that satisfies it and
the tests that validate it.
System Engineering
I will note here that system engineering is not a software role. And while it
is not unusual for a software engineer to fill the role of the system engineer,
a discussion about the intricacies of developing the system architecture is
outside the scope of this book. But, as it is an essential input to the software
architecture document, Appendix I, “GM6000 System Architecture,”
provides an example of a system architecture document for the GM6000.
Enclosure: The box that contains the product. The enclosure should be IP51 rated.

Control Board (CB): The board that contains the microcontroller that runs the heater controller. The CB contains circuits and other chips as needed to support the microcontroller unit (MCU) and software.

Display and User Inputs (DUI): A separate board that contains the display, buttons, and LEDs used for interaction with the user. This DUI can be located anywhere within the enclosure and is connected to the CB via a wire harness.

…
Software Architecture
There is no canonical definition of software architecture. This cookbook
defines software architecture as
Identifying the solution at a high level and defining
the rules of engagement for the subsequent design
and implementation steps.
• Hardware interfaces.
• Performance constraints.
• Programming languages.
• Subsystem interfaces.
• Process model.
• Functional simulator.
• Cybersecurity.
• Memory allocation.
Nevertheless, the inputs I provide are sufficient to create the first draft
of the software architecture.
The following sections discuss the creation of the content of the
software architecture document. While you are performing the analysis
and making architectural decisions, you’ll discover that there are still a lot
of unknown or open questions. Some of these questions may be answered
during the planning stage. Some are detailed design questions that will be
answered in the construction stage. Either way, keep track of these open
questions because they will eventually have to be answered.
Hardware Interfaces
Create a hardware block diagram in relation to the microcontroller.
That is, outline the inputs and outputs to and from the microcontroller
(see Figure 3-2). Whenever possible, omit details that are not critical to
understanding the functionality of the inputs and outputs. For example,
simply identify that there will be “external serial data storage.” Do not call
out a specific chip, storage technology (e.g., flash vs. EEPROM), or specific
type of serial bus.
[Figure 3-2: Hardware block diagram centered on the MCU, showing its connections (via connectors) to the heater, blower, console, temperature sensor, wireless module/wireless sensor, and a programming/debugger interface. The legend distinguishes bidirectional signals, analog inputs, serial buses, GPIO, and PWM.]
…

Data Storage: Serial persistent data storage for saving configuration, user settings, etc.

…
Performance Constraints
For all of the identified hardware interfaces, you will want to make an
assessment of real-time performance and bandwidth usage. Also you
should make performance assessments for any applications, driver stacks,
crypto routines, etc., that will require significant CPU usage. Since the
specifics of these interfaces are still unknown, the assessment will be an
approximation—that is, an order-of-magnitude estimate—rather than a
precision value. Note that I use the term “real time” to describe contexts
where stimuli must be detected and reacted to in less than one second.
Events and actions that occur slower than 1 Hz can be achieved without
special considerations.
The following are some excerpts from the software architecture
document in Appendix J, “GM6000 Software Architecture,” that illustrate
the performance analysis.
Display
The microcontroller unit (MCU) communicates with the display controller
via a serial bus (e.g., SPI or I2C). There is a time constraint in that the physical
transfer time for an entire screen’s worth of pixel data (including color data)
must be fast enough to ensure a good user experience. There is also a RAM
constraint with respect to the display in the MCU; it requires that there will be
at least one off-screen frame buffer that can hold an entire screen’s worth of pixel
data. The size of the pixel data is a function of the display’s resolution times the
color depth. The assessments and recommendations are as follows:
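For a rough sense of the numbers involved, here is a hypothetical sizing calculation; the resolution and color depth are illustrative assumptions, not the GM6000's actual display specification.

#include <cstdint>

// Hypothetical display parameters (illustrative values only)
constexpr uint32_t kWidthPixels   = 320;
constexpr uint32_t kHeightPixels  = 240;
constexpr uint32_t kBytesPerPixel = 2;      // e.g., RGB565 (16-bit color)

// One off-screen frame buffer: resolution times color depth
constexpr uint32_t kFrameBufferBytes = kWidthPixels * kHeightPixels * kBytesPerPixel;

static_assert( kFrameBufferBytes == 153600, "320x240 at 16 bpp is 150 KB per buffer" );

A single off-screen buffer of that size consumes a large share of the RAM available on many microcontrollers, which is why the frame buffer sizing belongs in the architecture-level assessment.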
Temperature Sensor
The space temperature must be sampled and potentially filtered before
being used as an input to the control algorithm. However, controlling
space temperature is a relatively slow system (i.e., much slower than 1 Hz).
Consequently, the assessments and recommendations are as follows:
Threading
A Real-Time Operating System (RTOS) with many threads will be
used. Switching between threads—that is, a context switch—requires a
measurable amount of time. This becomes important when there are sub-
millisecond timing requirements and when looking at overall CPU usage.
The RTOS also adds timing overhead for maintaining its system tick timer,
which is typically interrupt based. The assessments and recommendations
are as follows:
Programming Languages
Selecting a programming language may seem like a trivial decision, but
it still needs to be an explicit decision. The experience of the developers,
regulatory considerations, performance, memory management, security,
tool availability, licensing issues, etc., all need to be considered when
selecting the programming language. The language choice should be
documented in the software architecture document as well as in the
Software Development Plan.
In most cases, I prefer to use C++. I know that not everyone agrees with
me on this, but I make the case for the advantages of using C++ in Patterns
in the Machine. I mention it here to say that you should not exclude C++
from consideration simply because you are working in a resource-
constrained environment.
Subsystems
You will want to decompose the software for your project into components
or subsystems. The number and granularity of the components are a
choice that you will have to make. Some things to consider when defining
subsystems are as follows:
The GM6000 control board software is broken down into the following
subsystems. In Figure 3-3, the subsystems in the dashed boxes represent
future or anticipated functionality.
[Figure 3-3: GM6000 subsystems. The diagram shows the subsystems layered above the hardware, including the drivers, graphics library, OSAL, OS, BSP, and boot loader; dashed boxes indicate future or anticipated functionality.]
Application
The application subsystem contains the top-level business logic for the entire
application. This includes functionality such as
BSP
The Board Support Package (BSP) subsystem is responsible for abstracting
the details of the microcontroller unit (MCU) datasheet. For example, it is
responsible for
Diagnostics
The diagnostics subsystem is responsible for monitoring the software’s
health, defining the diagnostics logic, and self-testing the system. This
includes features such as power on self-tests and metrics capture.
Drivers
The driver subsystem is the collection of driver code that does not reside in
the BSP subsystem. Drivers that directly interact with hardware are required
to be separated into layers. There should be at least three layers:
Graphics Library
The graphics library subsystem is responsible for providing graphic
primitives, fonts, window management, widgets, etc. The expectation is that
the graphic library will be third-party software. The minimum requirements
for the graphics library are as follows:
Heating
The heating subsystem is responsible for the closed loop space temperature
control. This is the code for the heat control algorithm.
Persistent Storage
The persistent storage subsystem provides the framework, interfaces, data
integrity checks, etc., for storing and retrieving data that is stored in local
persistent storage. The persistent storage paradigm is a RAM-cached model.
The RAM-cached model is as follows:
UI
The user interface (UI) subsystem is responsible for the business logic and
interaction with end users of the unit. This includes the LCD display screens,
screen navigation, consuming button inputs, LED outputs, etc. The UI
subsystem has a hard dependency on the graphic library subsystem. This
hard dependency is acceptable because the graphic library is platform
independent.
Subsystem Interfaces
This section is where you define how the various subsystems, components,
modules, and drivers will interact with each other. For example, it
addresses the following questions:
Interfaces
The preferred, and primary, interface for sharing data between subsystems
will be done via the data model pattern. The secondary interface will be
message-based inter-thread communications (ITC). Which mechanism
is used to share data will be determined on a case-by-case basis with the
preference being to use the data model. However, the decision can be “both”
because both approaches can co-exist within a subsystem.
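To make the data model idea concrete, the following is a minimal sketch of a "model point" with hypothetical names and a deliberately simplified interface; a production-grade data model framework also provides change notifications, valid/invalid state, and many data types. The key point is that subsystems share data by reading and writing model points rather than by calling each other directly.

#include <cstdint>
#include <mutex>

// Hypothetical model point class (simplified; not the actual data model
// framework used by the book's example code)
class ModelPointUint32
{
public:
    explicit ModelPointUint32( uint32_t initialValue = 0 )
        : m_value( initialValue )
    {
    }

    // Any thread may write; the mutex serializes access
    void write( uint32_t newValue )
    {
        std::lock_guard<std::mutex> lock( m_lock );
        m_value = newValue;
        m_sequence++;                       // lets readers detect that the value changed
    }

    // Any thread may read; returns the value and its current sequence number
    uint32_t read( uint16_t& sequenceNumber ) const
    {
        std::lock_guard<std::mutex> lock( m_lock );
        sequenceNumber = m_sequence;
        return m_value;
    }

private:
    mutable std::mutex m_lock;
    uint32_t           m_value;
    uint16_t           m_sequence = 0;
};

// Hypothetical usage: the temperature-sensor driver writes this model point and
// the heating algorithm reads it; neither subsystem references the other.
static ModelPointUint32 mp_spaceTemperature_;

Because both producer and consumer depend only on the model point, the subsystems stay decoupled and can be unit tested in isolation.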
[Figure: Subsystem interfaces. The Application, Heating, Diagnostics, and UI subsystems interact through functional APIs and shared system services, including the data model, alert management, persistent storage, crypto, logging, software update, sensor communications, the graphics library, drivers, and the console.]
Process Model
The following questions need to be answered in the architecture
document:
Process Model
The software will be implemented as a multithreaded application using
real-time preemptive scheduling. The preemptive scheduling provides for the
following:
Thread Priorities
The application shall be designed such that the relative thread priorities
between individual threads do not matter with respect to correctness.
Correctness in this context means the application would still function,
albeit sluggishly, and not crash if all the threads had the same priority. The
exception to this rule is for threads that are used exclusively as deferred
interrupt handlers.
Data Integrity
Data that is shared between threads must be implemented in a manner to
ensure data integrity. That is, read, write, read-modify-write operations
must be atomic with respect to other threads accessing the data. The
following is a list of allowed mechanisms for sharing data across threads:
If the MCU is used, it must be clearly documented in the detailed design. The
following guidelines shall be followed for sharing data between a thread
and ISRs:
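As an illustrative sketch only (not necessarily the specific guideline adopted for the GM6000), one common technique is to keep the ISR short and wrap the thread-side access to the shared data in a critical section:

#include <cstdint>

// Hypothetical critical-section primitives. On a real target these map to the
// platform's interrupt-disable/restore calls (e.g., BSP or RTOS macros); the
// empty bodies below are host-build stubs so the sketch compiles as is.
static void enterCritical() {}
static void exitCritical()  {}

// Data shared between an ISR and a thread
static volatile uint32_t pulseCount_;

// Interrupt context: keep the ISR short and non-blocking
void pulseIsr()
{
    pulseCount_ = pulseCount_ + 1;
}

// Thread context: the read-and-clear must be atomic with respect to the ISR,
// so the access is wrapped in a critical section
uint32_t readAndClearPulseCount()
{
    enterCritical();
    uint32_t snapshot = pulseCount_;
    pulseCount_       = 0;
    exitCritical();
    return snapshot;
}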
Functional Simulator
If your project requires you to implement automated unit tests for your
project, you will find that a lot of the work that goes into creating a
functional simulator will be done while creating the automated unit tests.
The reason is that the automated unit tests impose a decoupled design
that allows components and modules to be tested as platform-
independent code. Consequently, creating a simulator that reuses
abstracted interfaces is not a lot of additional effort—if it is planned for
from the beginning of the project. The two biggest pieces of work are as
follows:
Simulator
The software architecture and design accommodate the creation of a
functional simulator. A functional simulator is the execution of production
source code on a platform that is not the target platform. A functional
simulator is expected to provide the majority of the functionality but not
necessarily the real-time performance of the actual product. Or, more
simply, functional simulation enables developers to develop, execute, and
test production code without the target hardware. Figure 3-5 illustrates what
is common and different between the software built for the target platform
and software built for the functional simulator. The architecture for the
functional simulator is on the right.
Cybersecurity
Cybersecurity may or may not be a large topic for your project.
Nevertheless, even if your initial reaction is “there are no cybersecurity
concerns for this product,” you should still take the time to document why
there are no concerns. Also note that I include the protection of personally
identifiable information (PII) and intellectual property (IP) as part of
cybersecurity analysis.
Sometimes just writing down your reasoning as to why cybersecurity is
not an issue will reveal gaps that need to be addressed. Depending on the
number and types of attack surfaces your product has, it is not uncommon
to break the cybersecurity analysis into its own document. And a separate
document is okay; what is important is that you do some analysis and
document your findings.
A discussion of the best practices and methodologies for performing
cybersecurity analysis is beyond the scope of this book. For example,
with the GM6000 project, the cybersecurity concerns are minimal. Here
is a snippet of the “Cybersecurity” section from the software architecture
document that can be found in Appendix J, “GM6000 Software Architecture.”
Cybersecurity
The software in the GM6000 is considered to be a low-risk target in that it is
easier to compromise the physical components of a GM6000 than the software.
Assuming that the software is compromised, there are no safety issues
because the HE has hardware safety circuits. The worst-case scenarios for
compromised software are along the lines of denial-of-service (DoS) attacks,
which might cause the DHC to not heat the space, yield uncomfortable
temperature control, or run constantly to incur a high energy bill.
No PII is stored in persistent storage. There are no privacy issues
associated with the purchase or use of the GM6000.
Another possible security risk is the theft of intellectual property. That is,
can a malicious bad actor steal and reverse-engineer the software in the control
board? This is considered low risk since there are no patented algorithms or
trade secrets contained within the software and the software only has value
within the company’s hardware. The considered attack surfaces are as follows:
Memory Allocation
The architecture document should define requirements, rules, and
constraints for dynamic memory allocation. Because of the nature of
embedded projects, the extensive use of dynamic memory allocation
is discouraged. When you have a device that could potentially run for
years before it is power-cycled or reset, the probability of running out of
heap memory due to fragmentation becomes a valid concern. Here is the
“Memory Allocation” section from the software architecture document
that can be found in Appendix J, “GM6000 Software Architecture.”
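As an illustrative sketch of the kind of construct such rules typically favor (hypothetical code, not the GM6000's actual policy or containers), a fixed-capacity queue allocates all of its storage statically, so it can never fragment or exhaust the heap at runtime:

#include <cstddef>

// Fixed-capacity FIFO queue: storage is a member array, so no heap is used.
template <typename T, size_t N>
class StaticQueue
{
public:
    bool push( const T& item )
    {
        if ( m_count == N )
        {
            return false;               // full: caller must handle it; nothing is allocated
        }
        m_items[m_tail] = item;
        m_tail          = ( m_tail + 1 ) % N;
        m_count++;
        return true;
    }

    bool pop( T& dst )
    {
        if ( m_count == 0 )
        {
            return false;               // empty
        }
        dst    = m_items[m_head];
        m_head = ( m_head + 1 ) % N;
        m_count--;
        return true;
    }

private:
    T      m_items[N] = {};
    size_t m_head     = 0;
    size_t m_tail     = 0;
    size_t m_count    = 0;
};

// Example: a queue of 16 sensor samples whose memory is reserved at build time.
static StaticQueue<int, 16> sensorSamples_;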
Requirement Traceability
Requirement traceability refers to the ability to follow the path a
requirement takes from design all the way through to a specific test case.
There are three types of traceability: forward, backward, and bidirectional.
Forward traceability is defined as starting with a requirement and working
downward (e.g., from requirements down to test cases). Backward
traceability is the opposite (e.g., from test cases up to requirements).
Bidirectional is the ability to trace in both directions.
It is not uncommon for SDLC processes to include requirements
tracing. In my experience, forward tracing requirements to verification
tests is all part and parcel of creating the verification test plan. Forward
tracing to design documentation and source code can be more
challenging, but it doesn’t have to be.
The following steps simplify the forward tracing of requirements to
design artifacts (i.e., the software architecture and Software Detailed
Design documents) and then to source code:
1. Content section means any section that is not part of the boilerplate or the housekeeping sections. The introduction, glossary, and change log sections are examples of non-content sections.
There are similar rules for the detailed design document, which also
support forward tracing from the software architecture to detailed design
sections and then to source code. See Chapter 6, “Foundation,” for details.
To see the requirements tracing for the GM6000, see the “Software
Requirements Traced to Software Architecture” document in Appendix P,
“GM6000 Software Requirements Trace Matrix.” You will notice that the trace
matrix reveals two orphan subsystems (SWA-12 Bootloader and SWA-26
Software Update). This particular scenario exemplifies a situation where the
software team fully expects to create a feature—the ability to update software
in the field—but there is no formal requirement because the software team
hasn’t reminded the marketing team that this would be a good idea … yet.
Even when your SDLC processes do not require requirements forward
traceability to design artifacts, I still strongly recommend that you follow
the steps mentioned previously because it is an effective mechanism
for closing the loop on whether the software team implemented all the
product requirements. It also helps prevent working on features that are
not required for release.
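As a hypothetical illustration of what the forward trace into source code can look like in practice (the identifiers below are invented for the example, not taken from the GM6000 documents), a file header comment simply names the architecture and detailed design sections that the file implements:

/** @file HeatingAlgorithm.h
 *
 *  Closed-loop space temperature control.
 *
 *  Traceability (hypothetical identifiers, for illustration only):
 *    - Software architecture section: SWA-16 (Heating subsystem)
 *    - Detailed design section:       SDD-42 (Heating algorithm)
 */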
Summary
The initial draft of the software architecture document is one of the major
deliverables for the analysis step in the planning stage. As discussed in the
“Requirement Traceability” section, there is an auxiliary deliverable to the
software architecture document, which is the trace matrix for the software
architecture.
INPUTS
OUTPUTS
CHAPTER 4
Software Development Plan
Nothing in this step requires invention. The work here is simply to capture
the stuff that the development team normally does on a daily basis.
Creating the Software Development Plan (SDP) is simply making the
team’s informal processes formal.
The value add of the SDP is that it eliminates many misunderstanding
and miscommunication issues. It proactively addresses the “I didn’t know
I needed to …” problems that invariably occur when you have more than
one person writing software for a project. A written and current SDP is
especially helpful when a new team member is added to the project; it is a
great tool for transmitting key tribal knowledge to a new team member.
A large percentage of the SDP decisions are independent of the
actual project. That is, they are part of your company’s existing software
development life cycle (SDLC) processes or Quality Management System
(QMS). This means that work on the SDP can begin early and even
be created in parallel with the requirements documents. That said, I
recommend that you start the software architecture document before
finalizing the first draft of the SDP. The software architecture will provide
more context and scope for the software being developed than just the
high-level requirements.
Project-Independent Processes
and Standards
I recommend that you create the following development processes and
standards for every project:
Additional Guidelines
You should also reference some additional guidelines that may not be
under the control of your software team. For example, some important
guidelines may be owned by the quality team, and you can simply include
references to those guidelines. However, if those documents don’t exist,
you will need to create them. I recommend that you develop guidelines
for the following items:
• Requirements traceability
• Regulatory concerns
1. Document name and version number
2. Overview
3. Glossary
4. Document references
6. Software items
7. Documentation outputs
8. Requirements
10. Cybersecurity
11. Tools
12. SCM
13. Testing
14. Deliverables
15. Change log
Housekeeping
Sections 1–4 and section 15, Document name and version number,
Overview, Glossary, Document references, and Change log, are housekeeping
sections for software documentation. They are self-explanatory, and it
should be easy to fill in these sections. For example, the Overview section
is just a sentence or two that states the scope of the document. Here is an
example from Appendix K, “GM6000 Software Development Plan”:
Also note that when a person is assigned a role, it does not mean
that they are the sole author of the document or the deliverable. The
responsibility of the role is to ensure the completion of the document and
not necessarily to write it themselves.
Software Items
This section identifies what the top-level software deliverables are. For
each software item, the following items are called out:
Documentation Outputs
This section is used to call out non-source code artifacts that the software
development team will deliver. It describes who has the responsibility to
see that the artifact is completed and delivered and who, if it is a different
person, the subject matter expert (SME) is.
Not all the artifacts or documents need to be Word-style documents.
For example, Doxygen HTML pages, wiki pages, etc., are acceptable.
Depending on your project, you may need to create all these documents,
or possibly just a subset. And in some cases, you may need additional
documents that aren’t listed here. Possible artifacts are
• Software architecture
• Doxygen output
• CI setup
• Release notes
Requirements
This section specifies what requirements documentation (with respect to
software) is needed, who is responsible for the requirements, and what
the canonical source for the requirements is. The section also includes what
traceability processes need to be put in place. Basically, all of the processes
discussed in Chapter 3, “Analysis,” are captured in the SDP. Here is an
example:
Cybersecurity
This section identifies the workflows and deliverables—if any—needed to
address cybersecurity. Here is an example from the SDP.
Tools
This section identifies tools used to develop the software. Because it
is sometimes a requirement that you can go back and re-create (i.e.,
recompile) any released version of the software, it is also important to
describe how tools will be archived.
Here is an example from the SDP:
Testing
This section details how the testing—which is the responsibility
of the software team—will be done. Topics that should be covered are as
follows:
• Unit test requirements and code coverage metrics as
applicable
Deliverables
This section summarizes the various deliverables called out in the SDP
document. This is essentially the software deliverables checklist for the
entire project. Here is an example from the SDP in Appendix K, “GM6000
Software Development Plan”:
Summary
Don’t skip creating an SDP. Lacking a formal SDP simply means that a de
facto SDP will organically evolve and be inconsistently applied over the
course of the project, where it will be a constant source of drama.
INPUTS
OUTPUTS
CHAPTER 5
Preparation
The preparation step is not about defining and developing your product
as much as it is about putting together the infrastructure that supports the
day-to-day work. The tools that you will be using should have been called
out in the Software Development Plan (SDP), but if you have tools that
aren’t referenced there, this is the time to add them to the SDP along with
the rationale for prescribing their use.
The SDP for the GM6000 calls for the following tools:
GitHub Projects
GitHub Projects is an adaptable, flexible tool for planning and tracking
your work.2 It’s a free tool that allows you to track issues such as bug
reports or tasks or feature requests, and it provides various views to
facilitate prioritization and tracking. For example, Figure 5-1 is an example
of the Kanban-style view of the GM6000 project. For a given project, the
1. https://github.com/johnttaylor/epc/wiki
2. https://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects
issue cards can span multiple repositories, and branches can be created
directly from the issue cards themselves. For the details of setting up
GitHub Projects, I recommend the “Creating a Project” documentation
provided by GitHub.3
GitHub Wiki
GitHub provides one free wiki per public repository. The following list
provides examples of documents you should consider capturing on a Wiki:
3. https://docs.github.com/en/issues/planning-and-tracking-with-projects/creating-projects/creating-a-project
• Coding standards
Figure 5-2 shows an example of a GitHub wiki page for the GM6000
project.
[Home](https://github.com/johnttaylor/epc/wiki/Home)
* [GM6000](https://github.com/johnttaylor/epc/wiki/GM6000)
* [GM6000 SWBOM](https://github.com/johnttaylor/epc/wiki/GM6000---Software-Bill-of-Materials-(SWBOM))
* [GitHub Projects](https://github.com/johnttaylor/epc/wiki/GitHub---Projects)
4. https://docs.github.com/en/communities/documenting-your-project-with-wikis/creating-a-footer-or-sidebar-for-your-wiki#creating-a-sidebar
* [Setup](https://github.com/johnttaylor/epc/wiki/GitHub-Projects---Setup)
* [Developer Environment](https://github.com/johnttaylor/epc/wiki/Development-Environment)
* [Install Build Tools](https://github.com/johnttaylor/epc/wiki/Developer---Install-Build-Tools)
* [GIT Install](https://github.com/johnttaylor/epc/wiki/Developer---GIT-Install)
* [Install Local Tools](https://github.com/johnttaylor/epc/wiki/Developer---Install-Local-Tools)
* [Tools](https://github.com/johnttaylor/epc/wiki/Tools)
Jenkins
Installing and configuring Jenkins for your specific CI needs is nontrivial.
But here is the good news:
5. www.jenkins.io/doc/book/installing/
Summary
The preparation step in the planning stage is all about enabling software
development. Many of the activities are either mostly completed, because
the tools are already in place from a previous project, or can be started
in parallel with the creation of the Software Development Plan (SDP). So
there should be ample time to complete this step before the construction
stage begins. But I can tell you from experience, if you don’t complete
this step before construction begins, you can expect to experience major
headaches and rework. Without a working CI server, you may have broken
builds or failing unit tests without even knowing it. And the more code that
is written before these issues are discovered, the more the work (and pain)
needed to correct them increases.
INPUTS
OUTPUTS
• User accounts have been created for all of the team members
(including nonsoftware developers) in the tracking tool.
CHAPTER 6
Foundation
The foundation step in the planning phase is about getting the day-to-day
environment set up for developers so they can start the construction phase
with a production-ready workflow. This includes performing tasks such as the following:
• Setting up the SCM repositories (e.g., setting up the
repository in GitHub)
• Defining the top-level source code organization for
the project
• Creating build scripts that can build unit tests and
application images
• Creating skeleton applications for the project, including
one for the functional simulator
• Creating CI build scripts that are fully integrated and
that build all the unit tests and application projects
using the CI build server
• Creating the outline for the Software Detailed Design
(SDD) document. Because the SDD is very much a
living document, as detailed design is done on a just-
in-time basis, nothing is set in stone at this point, and
you can expect things to change throughout the course
of the project. (See Chapter 11, “Just-in-Time Detailed
Design,” for more details.)
SCM Repositories
The SCM tool should have been “stood up” as part of the preparation step.
So for the foundation step, it is simply a matter of creating the repository
and setting up the branch structure according to the strategy you defined
in the SDP.
1. https://cmake.org/
After you have selected your build tools, you need to create scripts
that will build the skeleton applications and unit tests (both manual and
automated).
The art of constructing build scripts and using makefiles is out of the
scope of this book, simply because of the number of build tools available and
the nuances and variations that come with them. Suffice it to say, though,
that over the years I have been variously frustrated by many of the build tools
that I was either required to use or tried to use. Consequently, I built my own
tool—the NQBP2 build engine—which eschews makefiles in favor of a list of
directories to build. This tool is freely available and documented on GitHub.
The GM6000 example uses the NQBP2 build engine, and more details
about NQBP2 are provided in Appendix F, “NQBP2 Build System.”
Skeleton Applications
The skeleton projects provide the structure for all the code developed
in the construction phase. There should be a skeleton application for
all released images, including target hardware builds and functional
simulator builds for the applications. Chapter 7, “Building Applications
with the Main Pattern,” provides details about creating the skeleton
application for the target hardware and the functional simulator.
The build-all script should be architected such that it does not have
to be updated when new applications and unit tests are added to the
project.
Design that the project needs. However, all sections (except housekeeping
sections) must be traceable back to a section in the software architecture
document. If a detailed design section can’t be traced back to the software
architecture document, you should stop immediately and resolve the
disconnect.
The other sections for the SDD outline for the GM6000 are as follows:
Summary
The foundation step is the final, gating work that needs to be completed
before moving on to the construction phase. The foundation step includes
a small amount of design work and some implementation work to get
the build scripts to successfully build the skeleton applications and to
integrate these scripts with the CI server.
You may be tempted to skip the foundation steps, or to do them later
after you start coding the interesting stuff. Don't! The value of the CI
server is to detect broken builds immediately. Until your CI server is fully
stood up, you won’t know if your develop or main branches are broken.
Additionally, skipping building the skeleton applications is simply creating
technical debt. The creation and startup code will exist at some point, so
doing it first, while planning for known permutations, is a much more
efficient and cost-effective solution than organically evolving it over time.
INPUTS
OUTPUTS
• The first draft of the outline for the Software Detailed Design
document. In addition to the outline, the SDD should also
include the design for
CHAPTER 7
Building Applications with the Main Pattern
At this point in the process, your principal objective is to set up a process
to build your application to run on your hardware. However, you may also
want to build variations of your application to provide different features
and functionality. Additionally, you may want to create builds that will
allow your application to run on different hardware. Consequently, the
goal of this chapter is to put a structure in place that lets you build these
different versions in an automated and scalable way.
Even if you have only a single primary application running on a
single hardware platform, I recommend that you still build a functional
simulator for your application. A simulator provides the benefits of being
able to run and test large sections of your application code even before the
hardware is available. It also provides a much richer debug environment
for members of the development team.1
The additional effort to create a functional simulator is mostly
planning. The key piece of that planning is starting with a decoupled
design that can effectively isolate compiler-specific or platform-specific
code. The trick, here, is not to do this with IF-DEFs in your function calls.
1. For an in-depth discussion of the value proposition for a functional simulator, I recommend reading Patterns in the Machine: A Software Engineering Guide to Embedded Development.
• Mutexes
• Semaphores
You can create your own OSAL or use an open source OSAL such
as CMSIS-RTOS2 for ARM Cortex processors. In addition, many
microcontroller vendors (e.g. Microchip, TI, etc.) provide an OSAL with
their SDKs. The example code for the GM6000 uses the OSAL provided by
the CPL C++ class library.
In Figure 7-3, the differences between the target application and the
functional simulator are contained in the blocks labeled:
• Main
• Platform
Implementing Main
The Main pattern only addresses runtime creation and initialization. Of
course, in practice, some of the platform-specific bindings will be done
at compile or link time. These bindings are not explicitly part of the Main
pattern.
It might be helpful at this point to briefly review how an embedded
application gets executed. The following sequence assumes the
application is running on a microcontroller:
For steps 3a and 3b, the source code is platform dependent. One
approach is to just have a single chunk of code and use an #ifdef
whenever there are platform deltas. The problem is that #ifdefs within
function calls do not scale, and your code will quickly become unreadable
and difficult to maintain. The alternative is to separate—by platform—
the source code into individual files and use link time bindings to select
the platform-specific files. This means the logic of determining what
hardware-specific and application-specific pieces make up a stand-alone
module is moved out of your source code and into your build scripts. This
approach declutters the individual source code files and is scalable to
many platform variants.
For the remaining steps, 3c, 3d, and 3e, there is value in having a
common application startup across different platforms, especially as the
amount and complexity of the application startup code increase.
Figure 7-4 (the figure text does not extract cleanly, so only its structure is summarized here) shows the target's alpha1/main.cpp and the simulator's simulator/main.cpp side by side. Both main.cpp files include appmain.h and ultimately call runTheApplication(). The target's main() initializes the BSP and the CPL library, creates a FreeRTOS thread, and passes the BSP console streams to runTheApplication(); the simulator's main() initializes the CPL library and passes stdin/stdout. The common appmain.cpp constructs the application objects (e.g., the heating algorithm), starts the debug shell and the application thread, and brackets the application's life cycle with the platform_initialize(), platform_open(), platform_close(), and platform_exit() functions declared in platform.h. Those four functions are implemented per platform in alpha1Platform.cpp (which starts the EEPROM driver and resets the MCU on exit) and simulatorPlatform.cpp (which starts a simulated EEPROM and calls exit()). A minimal sketch of this structure follows.
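Here is that sketch as a single compilable file (simulator flavor, with simplified signatures; the real code passes the CPL library's stream types rather than FILE pointers, and the comments mark where the production code does more work):

// platform.h -- the bindings every platform variant must implement
#include <cstdio>
#include <cstdlib>

void platform_initialize();   // platform-specific construction (drivers, storage, ...)
void platform_open();         // platform-specific start-up
void platform_close();        // platform-specific orderly shutdown
void platform_exit();         // reset the MCU, or exit() on the simulator

// appmain.cpp -- common, platform-independent startup and shutdown
static void runTheApplication( FILE* infd, FILE* outfd )
{
    platform_initialize();
    // ... start the debug shell on infd/outfd, create the application thread ...
    platform_open();
    // ... waitForShutdown() ...
    (void) infd; (void) outfd;
    platform_close();
    platform_exit();
}

// simulator/platform.cpp -- one concrete set of bindings
void platform_initialize() { /* e.g., start a simulated EEPROM driver */ }
void platform_open()       { }
void platform_close()      { }
void platform_exit()       { std::exit( 0 ); }

// simulator/main.cpp -- the simulator's entry point
int main()
{
    runTheApplication( stdin, stdout );
    return 0;
}

In the GM6000 source tree, this startup code is organized into the following directories: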
• src/Ajax/Main
• src/Ajax/Main/_app
• src/Ajax/Main/_plat_alpah1
• src/Ajax/Main/_plat_simulator
Application Variant
It may be that you have a marketing requirement for good, better, and best
versions of your application, or it may be that you just need a test-enabled
application for a certification process. But whatever the reason for creating
additional applications, each separate application will be built from a large
amount of common code (see Figure 7-5).
Figure 7-6 (not reproduced here) partitions the code base into common code, common simulator code, common target-specific code, and common target2-specific code.
Essentially there are two axes of variance: the Y axis for different
application variants and the X axis for different platforms (target
hardware, simulator, etc.). Starting with two application variants (a main
application and an engineering test application) and two platforms (target
hardware and simulator), this yields nine distinct code combinations (see
Figure 7-7).
is better to extend the Main pattern to handle these new combinations. For
the GM6000 example, the Main pattern has been extended to build two
application variants and three platform variants.
2
The GM6000 example actually supports three different platforms: the simulator
and two different hardware boards. To simplify the discussion for this chapter,
only two platforms—the simulator and STM32 hardware target—are used. Scaling
up to more target platforms or application variants only requires adding “new”
code, that is, no refactoring of the existing Main pattern code.
Here is a snippet, section SDD-33, from the SDD for the Eros application.
The Eros application shares, or extends, the Ajax Main pattern (see the
“[SDD-32] Creation and Startup (Application)” section). The following
directory structure shall be used for the Eros-specific code and extensions to
the Ajax startup/shutdown logic.
src/Eros/Main/                 // Platform/Application specific implementation
+--- app.cpp                   // Eros Application (non-platform) implementation
+--- _plat_xxxx/               // Platform variant 1 start-up implementation
|    +--- app_platform.cpp     // Eros app + specific startup implementation
+--- _plat_yyyy/               // Platform variant 2 start-up implementation
Build Scripts
After creating the code for the largely empty Main pattern, the next step is
to set up the build scripts for the skeleton projects. For the GM6000 project,
I use the nqbp2 build system. In fact, all the build examples in the book
assume that nqbp2 is being used. However, you can use whatever collection
of build tools you prefer. For the GM6000 skeleton applications, builds are
created in the following directories:
• projects/GM6000/Ajax/alpha1/windows/gcc-arm
• projects/GM6000/Ajax/simulator/windows/vc12
• projects/GM6000/Eros/alpha1/windows/gcc-arm
• projects/GM6000/Eros/simulator/windows/vc12
In practice, there are actually more build directories than this because
the simulator is built using both the MinGW Windows compiler and the
GCC compiler for a Linux host.
Shared libdirs.b files are created for each possible combination of
common code. These files are located under the projects/GM6000 tree in
various subdirectories.
File Contents
Figures 7-9 and 7-10 are listings of the master libdirs.b files for the
Ajax target (alpha1 HW) build and the simulator (Win32-VC) build.
# Common stuffs
../../../ajax_common_libdirs.b
../../../ajax_target_common_libdirs.b
../../../../common_libdirs.b
../../../../target_common_libdirs.b
# BSP
src/Cpl/Io/Serial/ST/M32F4
src/Bsp/Initech/alpha1/trace
src/Bsp/Initech/alpha1
src/Bsp/Initech/alpha1/MX
src/Bsp/Initech/alpha1/MX/Core/Src > freertos.c
src/Bsp/Initech/alpha1/console
# SDK
xsrc/stm32F4-SDK/Drivers/STM32F4xx_HAL_Driver/Src >
stm32f4xx_hal_timebase_rtc_alarm_template.c
stm32f4xx_hal_timebase_rtc_wakeup_template.c
stm32f4xx_hal_timebase_tim_template.c
# FreeRTOS
xsrc/freertos
xsrc/freertos/portable/MemMang
xsrc/freertos/portable/GCC/ARM_CM4F
# SEGGER SysVIEW
src/Bsp/Initech/alpha1/SeggerSysView
Preprocessor
One of the best practices that is used in the development of the GM6000
application is the LConfig pattern.3 The LConfig pattern is used to provide
project-specific configuration (i.e., magic constants and preprocessor
symbols). Each build directory has its own header file—in this case,
colony_config.h—that provides the project specifics. For example, the
LConfig pattern is used to set the buffer sizes for the debug console and
the human-readable major-minor-patch version information.
As with the build script, there are common configuration settings
across the application and platform variants. Shared colony_config.h files
are created for each possible combination.
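As an illustration, a build directory's colony_config.h might look something like the following sketch; the macro names here are made up for the example and are not the actual GM6000 symbols:

// colony_config.h -- per-build-directory settings (LConfig pattern)
// Hypothetical symbol names; each build variant gets its own copy of this file.
#ifndef COLONY_CONFIG_H_
#define COLONY_CONFIG_H_

// Human-readable major-minor-patch version for this application variant
#define MY_APP_VERSION_STRING           "0.1.2"

// Debug console buffer sizing for this application/platform combination
#define MY_APP_CONSOLE_RX_BUFFER_SIZE   256
#define MY_APP_CONSOLE_TX_BUFFER_SIZE   1024

#endif  // end header latch

Because the file is found through the build directory's include path, changing a setting for one build variant never touches the source code or the other variants.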
Simulator
When initially creating the skeleton applications, there are very few
differences between the target build and the simulator build. The actual
differences at this point are as follows:
Summary
Creating the skeleton applications that include the functional simulator
variants is the primary deliverable for the foundation step. The Main pattern is
an architectural pattern that is used to allow reuse of platform-independent
code across the different platforms and application variants. Constructing
the skeleton applications is mostly a boilerplate activity, assuming you have
an existing middleware package (such as the CPL C++ class library) that
provides interfaces and abstractions that decouple the underlying platform.
Avoid the temptation to create the skeleton applications in combination with
features, drivers, etc. From my experience, creating the skeletons upfront
helps identify all the possible build variants, which helps the implementation
of the Main pattern to be cleaner and significantly easier to maintain.
For the GM6000, an example of the initial skeleton applications
using the Main pattern with support for multiple applications and target
hardware variants can be found on the main-pattern_initial-skeleton-
projects branch in the Git repository.
INPUTS
OUTPUTS
CHAPTER 8
Continuous
Integration Builds
This chapter walks through the construction of the “build-all” scripts for
the GM6000 example code. The continuous integration (CI) server invokes
the build-all scripts when performing pull requests and merging code to
stable branches such as develop and main. At a minimum, the build-all
scripts should do the following things:
• Build all unit tests, both manual and automated
unit tests
• Execute all automated unit tests and report pass/fail
• Generate code coverage metrics
• Build all applications. This includes board variants,
functional simulator, debug and nondebug
versions, etc.
• Run Doxygen (assuming you are using Doxygen)
• Help with collecting build artifacts. What this entails
depends on what functionality is or is not provided by
your CI server. Some examples of artifacts are
• Application images
• Doxygen output
• Code coverage data
The CI Server
At this point, the CI server should be up and running and should be able to
invoke scripts stored in the code repository (see Chapter 5, “Preparation”).
In addition, the skeleton applications should have been constructed
(see Chapter 6, “Foundation”), and there should be a certain number of
existing unit tests that are part of the CPL C++ class library.
The build system used for examples in this book is nqbp2 (see
Appendix F, “nqbp2 Build System”). However, the code itself has no direct
dependencies on nqbp2, which means you can use a different build
system, but you will need to create the build makefiles and scripts.
Directory Organization
The key directories at the root of the source code tree are as follows:
Naming Conventions
You can use whatever naming convention you would like. The important
thing is that you should be able to differentiate scripts for different use
cases by simply looking at the file name. For example, here are some key
use cases to differentiate:
Name (naming convention): Description

a.exe, a.out (single lowercase "a" prefix to denote parallel): Used for all automated unit tests that can be run in parallel with other unit tests.

aa.exe, aa.out (dual lowercase "aa" prefix to denote not parallel): Used for all automated unit tests that cannot be run in parallel with other unit tests. An example here would be a test that uses a hard-wired TCP port number.

b.exe, b.out (single lowercase "b" prefix to denote parallel and requiring a script): Used for all automated unit tests that can be run in parallel and that require an external Python script to run. An example here would be piping a golden input file to stdin of the test executable.

bb.exe, bb.out (dual lowercase "bb" prefix to denote not parallel and requiring a script): Used for all automated unit tests that cannot be run in parallel and that require an external Python script to run.

a.py (single lowercase "a" prefix to denote that it runs only tests prefixed with a single lowercase "b"): Python program used to execute the b.exe and b.out executables.

aa.py (dual lowercase "aa" prefix to denote that it runs only tests prefixed with a dual lowercase "bb"): Python program used to execute the bb.exe and bb.out executables.

<all others>: Manual unit tests can use any name for the executable except for the ones listed previously.
• It builds all target unit tests (e.g., builds that use the
GCC-ARM cross compiler). Typically these are manual
unit tests.
The script is built so that adding new unit tests (automated or manual)
or new applications under the projects/ directory does not require the
script to be modified.
If you are adding a new hardware target—for example, changing from
the STM32 alpha1 board to the STM32 dev1 board—then a single edit to
the script is required to specify the new hardware target.
If new build artifacts are needed, then the script has to be edited to
generate the new artifacts and to copy the new items to the _artifacts/
directory.
Table 8-2 summarizes where changes need to be made to
accommodate changes to the build script.
New Unit Test: Create a new unit test build directory under the tests/ directory.
New Build Artifact: Edit the build-all scripts to generate the new build artifact and copy it to the _artifacts/ directory.
New Target Hardware: Edit the build-all scripts as needed. If the new target is a "next board spin," then only the "_TARGET" and "_TARGET2" variables need to be updated.
New Application: Create the application build directory under the projects/ directory.
@echo on
:: This script is used by the CI\Build machine to build the Windows Host
:: projects
::
:: usage: build_all_windows.bat <buildNumber> [branch]
set _TOPDIR=%~dp0
set _TOOLS=%_TOPDIR%..\xsrc\nqbp2\other
set _ROOT=%_TOPDIR%..
set _TARGET=alpha1
set _TARGET2=alpha1-atmel
:: Set Build info (and force build number to zero for "non-official" builds)
set BUILD_NUMBER=%1
set BUILD_BRANCH=none
IF NOT "/%2"=="/" set BUILD_BRANCH=%2
IF "%BUILD_BRANCH%"=="none"" set BUILD_NUMBER=0
echo:
echo:BUILD: BUILD_NUMBER=%BUILD_NUMBER%, BRANCH=%BUILD_BRANCH%
:: Build unit test projects (debug builds for more accurate code coverage)
cd %_ROOT%\tests
%_TOOLS%\bob.py -v4 mingw_w64 -cg --bldtime -b win32 --bldnum %BUILD_NUMBER%
IF ERRORLEVEL 1 EXIT /b 1
...
...
:: Everything worked!
:builds_done
echo:EVERYTHING WORKED
exit /b 0
# This script is used by the CI/Build machine to build the Linux projects
#
# The script ASSUMES that the working directory is the package root
#
# usage: build_linux.sh <bldnum>
#
set -e
Summary
It is critical that the CI server and the CI build scripts are fully up and
running before the construction stage begins. By not having the CI process
in place, you risk starting the project with broken builds and a potential
descent into initial merge hell.
INPUTS
OUTPUTS
• The CI builds are full featured. That is, they support and
produce the following outputs on every build.
• They tag the develop and main branches in GitHub with the
CI build number.
CHAPTER 9
Requirements
Revisited
It is extremely rare that all of the requirements have been defined before
the construction phase. In fact, it is not even reasonable to expect that the
requirements will be 100% complete before starting implementation. The
reason is that unless all of the details are known—and nailing down
the details is what you do in the construction phase—it is impossible to
have all of the requirements carved in stone. So in addition to design,
implementation, and testing, there will inevitably be a continuing
amount of requirement work that needs to be addressed. This includes
refactoring and clarifying existing requirements as well as developing new
requirements. Common sources for new requirements that show up in the
construction phase are as follows:
Analysis
There are many types of analysis (e.g., FMEA) that are typically done for an
embedded product to ensure it is safe, reliable, compliant with regulatory
agencies, and secure. These tests and analysis cover all aspects of the
product—mechanical, electrical, packaging, and manufacturing—not
just software. This is where many of the edge cases are uncovered and
possible solutions are proposed. Often, these solutions take the form of
new requirements, which are also sometimes called risk control measures.
Typically, these new requirements are product (PRS) or engineering
detailed requirements (SRS)—although sometimes they feed back into the
marketing (MRS) requirements.
To mitigate this risk, the following requirement was added as a risk control
measure to the PRS:
PR-206: Heater Safety. The Heating Element (HE) sub-assembly shall provide
an indication to the Control Board (CB) when it has been shut off due to a
safety limit.
1
See https://en.wikipedia.org/wiki/Integral_windup
There is a single PRS requirement for the heating algorithm for the GM6000
example.
A choice was made to use a fuzzy logic controller (FLC) for the core temperature
control algorithm instead of simple error or proportional-integral-derivative (PID)
algorithms. The details and nuances of the FLC are nontrivial, so the algorithm
design was captured as a separate document (see Appendix N, “GM6000 Fuzzy
Logic Temperature Control”). Here is a snippet from the FLC document.
Fuzzification
There are two input membership functions: one for absolute temperature error
and one for differential temperature error. Both membership functions use
triangle membership sets as shown in Figure 9-1.
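To make the fuzzification step concrete, the following sketch evaluates one triangular membership set; the breakpoints used in the example are placeholders, not the actual GM6000 values:

// Degree of membership (0.0 to 1.0) of 'x' in a triangular set defined by its
// left edge, peak, and right edge (assumes left < peak < right).
static float triangleMembership( float x, float left, float peak, float right )
{
    if ( x <= left || x >= right )
    {
        return 0.0f;                            // outside the set entirely
    }
    if ( x <= peak )
    {
        return ( x - left ) / ( peak - left );  // rising edge
    }
    return ( right - x ) / ( right - peak );    // falling edge
}

// Example: an absolute temperature error of 1.5 degrees in a hypothetical set
// spanning 0..2 with its peak at 1 has a membership value of 0.5.

Each input value is evaluated against every set in its membership function, and the resulting membership strengths feed the FLC's inference rules.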
2
See Chapter 2 for discussion of functional vs. nonfunctional requirements.
Requirements Tracing
All functional requirements should be forward traceable to one or more
test cases in the verification test plan.3 If there are test cases that can’t
be backward traced to requirements, there is a disconnect between what
was asked for and what is expected for the release. The disconnect
needs to be reconciled, not just for the verification testing, but for the
software implementation (and the other engineering disciplines) as
well. Reconciled means that there are no orphans when forward tracing
functional requirements to test cases or when backward tracing test cases
to requirements. NFRs are not typically included in the verification tracing.
Requirements should also be traced to design outputs. Design outputs
include the software architecture and detailed design documents as well as
the source code. The importance or usefulness of tracing to design outputs
is the same as it is for verification testing; it ensures that you are building
everything that was asked for and not building stuff that is not needed. Unlike
verification tracing, NFRs should be included in the tracing to design outputs.
3
See Chapter 3, “Analysis,” for a discussion of forward and backward requirements
tracing.
Requirements tracing to design outputs is not the hard and fast rule
that it is for verification testing. However, with some up-front planning,
and following the SDLC process steps in order, tracing requirements to
design outputs is a straightforward exercise.
Requirements tracing can be done manually, or it can be done using
a requirement management tool (e.g., Doors or Jama). The advantage of
using requirement management tools is that they are good at handling the
many-to-many relationships that occur when tracing requirements and
provide both forward and backward traces. The downside to these tools
is their cost and learning curve. For the GM6000 project, I manually trace
requirements to and from design outputs using a spreadsheet.
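For illustration only, a row in such a spreadsheet might look like the following; the architecture, SDD, and test case IDs shown here are placeholders, not the actual GM6000 trace data:

Requirement   Architecture section    SDD section   Verification test case
PR-206        SWA-nn (hypothetical)   SDD-nn        TC-nn (hypothetical)

Forward tracing reads the row left to right; backward tracing reads it right to left, and an empty cell in either direction flags an orphan that needs to be reconciled.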
The following processes simplify the work needed to forward trace
software-related requirements down to source code files:
4
A content section is any section that is not a housekeeping section. For example,
the Introduction, Glossary, and Change Log are considered housekeeping
sections.
After you have completed these steps, you have effectively forward
traced the software architecture through detailed design to source code
files. The remaining steps are to forward trace requirements (i.e., MRS,
PRS, SRS) to sections in the software architecture document.
Summary
The formal requirements should avoid excessive design details or specifics
whenever possible. The use of design statements should be used to bridge
the gap between formal requirements and the details needed by the
software team to design and implement the solutions.
INPUTS
OUTPUTS
CHAPTER 10
Tasks
This is the start of the construction phase, and this phase is where the
bulk of the development work happens. Assuming that all of the software
deliverables from the planning phase were completed, there should now
be very few interdependencies when it comes to software work. That is,
the simulator has removed dependencies on hardware availability, the
UI has been decoupled from the application’s business logic, the control
algorithm has been decoupled from its inputs and outputs, etc. And
since there are now minimal interdependencies, most of the software
development can be done in parallel; you can have many developers
working on the same project without getting bogged down in merge hell.
The software work, then—the work involved in producing working
code—can be broken down into tasks, where each task has the following
elements:
1. Requirements
2. Detailed design
3. Code and unit tests
4. A code review
5. A merge into a stable branch
A single task does not have to contain all elements; however, the order
of the elements must be honored. For example, there are three types
of tasks:
The tasks as discussed in this chapter are very narrow in scope, and
they are limited primarily to generating source code. But there are many
other tasks—pieces of work or project activities such as failure analysis, CI
maintenance, bug fixing, bug tracking, integration testing, etc.—that still
need to be planned, tracked, and executed for the project to be completed.
1) Requirements
The requirements for a task obviously scope the amount of work that
has to be done. However, if there are no requirements, then stop design
and coding activities and go pull the requirements from the appropriate
stakeholders. Also remember that with respect to tasks, design statements
are considered requirements.
If there are partial or incomplete requirements for the task, split
the current task into additional tasks: some where you do have sufficient
requirements and others where the task is still undefined and ambiguous.
Save the undefined and ambiguous tasks for later when they are better and
more fully specified. Proceeding without clear requirements puts all of the
downstream work at risk.
1
Test-Driven Development: https://en.wikipedia.org/wiki/
Test-driven_development
5) Merge
Never break the build.
driver is done. However, there is still more work needed to update (e.g.,
account for different pin assignments) and verify the driver after the target
hardware is received. Consequently, a new task or card should be put
into the project backlog to account for the additional work. If this isn’t
done, bad things happen when the target board is finally available and
incorporated into testing. Jim is now fully booked with other tasks and is
not “really available” to finish the EEPROM driver. A minor fire drill ensues
over priorities for when and who will finish the EEPROM driver.
While this example is admittedly a bit contrived, most software tasks
have nuances where there is still some amount of future work left when
the code is merged. You will save a lot of time and angst if the development
team (including the program manager) clearly defines what done means.
My recommendation for a definition of done is as follows:
Task Granularity
What is the appropriate granularity of a task? Or, rather, how much time
should the task take to complete? Generally, taking into account the five
elements of a coding task discussed previously, a task should not take
longer than a typical Agile sprint, or approximately two weeks. That said,
tasks can be as short as a couple of hours or as long as several weeks.
Table 10-1 shows some hypothetical examples of tasks for the GM6000
project.
SPI Driver (Coding): The requirement for the task is that the physical interface
to the LCD display be an SPI bus.
1) The detailed design is created. The design is captured
as one or more sections in the SDD. These sections
formalize the driver structure, including how the
SPI interface will be abstracted to be platform
independent.
2) The new sections in the SDD are reviewed.
3) Before the code is written, a ticket is created, if there
is not one already, and a corresponding branch in the
repository is created for the code.
4) The code is written.
5) As a coding task requires a manual unit test to verify
the SPI operation on a hardware platform, the manual
unit test or tests are created and run.
6) After the tests pass, a pull request is generated, the
code is reviewed, and any action items arising from
the review are resolved.
7) The code is then merged, and if the CI build is
successful, the code is merged into its parent branch.
Control Algorithm Definition (Requirement): There is only one formal requirement (PR-207) with respect to the heater control algorithm. The scope of the task is to define the actual control algorithm and document the algorithm as design statements. The documented algorithm design is then reviewed by the appropriate SMEs.
(The remainder of Table 10-1 lists additional task examples; their element lists include items such as Requirements, Design, Design Review, Unit Test, and Code.)
Summary
Software tasks are a tool and process for decomposing the software effort
into small, well-defined units that can be managed using Agile tools
or more traditional Gantt chart–based schedulers. Within a task, there
is a waterfall process that starts with requirements, which is followed
by detailed design, which is then followed by creating the unit tests
and coding.
INPUTS
OUTPUTS
• Working code
• Unit tests
• Design reviews
• Code reviews
CHAPTER 11
Just-in-Time Detailed
Design
While defining the overall software architecture is a waterfall process
performed during the planning phase, detailed software design is made up of
many individual activities that are done on a just-in-time basis. As work
progresses during the construction phase, the Software Detailed Design is
updated along the way. At the start of the construction phase, the state of
the Software Detailed Design document should consist of at least
As you put together the detailed design components, you can write for
a knowledgeable reader who is
Examples
I can’t tell you how to do detailed design. There is no one-size-fits-all
recipe for doing it, and at the end of the day, it will always be a function of
your experience, abilities, and domain knowledge. But I can give examples
of what I do. Here are some examples from this design process that I used
when creating the GM6000 project.
Subsystem Design
As defined in the software architecture of the GM6000, persistent storage is
one of the top-level subsystems identified. It is responsible for “framework,
data integrity checks, etc., for storing and retrieving data that is stored
in local persistent storage.” The snippets that follow illustrate how the
subsystem design was documented in the SDD (Appendix M, “GM6000
Software Detailed Design (Final Draft)”).
[SDD-55] Records
Here is the design for how persistent storage records are structured and
how data integrity and power-failure use cases will be handled.
The application data is persistently stored using records. A record is the
unit of atomic read/write operations from or to persistent storage. The CPL
C++ class library’s persistent storage framework is used. In addition, the
library’s Model Point records are used, where each record contains one or
more model points.
All records are mirrored—two copies of the data are stored—to ensure
that no data is lost if there is power failure during a write operation.
All of the records use a 32-bit CRC for detecting data corruption.
Instead of a single record, separate records are used to insulate the data
against data corruption. For example, as the Metrics record is updated
multiple times per hour, it has a higher probability for data corruption than
the Personality record that is written once.
The application data is broken down into the following records:
• User Settings
• Metrics
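Conceptually, each mirrored record occupies two copies in the off-board EEPROM, and each copy carries its own CRC. The sketch below is an illustration of that layout only; the CPL persistent storage framework implements and hides these details, and the type and field names here are not the library's:

#include <cstdint>

struct RecordImage          // one stored copy of a record
{
    uint32_t sequenceNumber;     // identifies which mirrored copy is newer
    uint8_t  payload[64];        // serialized model point data for the record
    uint32_t crc32;              // detects corruption of this copy
};

struct MirroredRecord       // the two copies are never written in the same operation,
{                           // so a power failure can corrupt at most one of them
    RecordImage copyA;
    RecordImage copyB;
};

A typical mirrored-record scheme reads both copies at start-up, discards any copy whose CRC does not verify, and uses the newer of the surviving copies.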
The concrete Record instances are defined per project variant (e.g., Ajax
vs. Eros), and the source code files are located at
src/Ajax/Main
src/Eros/Main
The concrete driver for the STM32 microcontroller family uses the ST
HAL I2C interfaces for the underlying I2C bus implementation.
src/Driver/I2C
src/Driver/I2C/STM32
src/Driver/Button
src/Driver/Button/STM32
Graphics Library
The Graphics Library is a top-level subsystem identified in the software
architecture document. The software architecture calls out the Graphics
Library subsystem as third-party code.
Note: This decision should be revisited for future GM6000 derivatives that
have a different display or a more sophisticated UI. The Pimoroni library
has the following limitations that do not recommend it as a “foundational”
component:
xsrc/pimoroni
src/Ajax/ScreenMgr
Design Reviews
Always perform design reviews and always perform them before coding
the component. That is, there should be a design review for each task
(see Chapter 10, “Tasks”), and you shouldn’t start coding until the design
review has been completed. In my experience, design reviews have a
bigger return on investment (for the effort spent) than code reviews. The
perennial problem with code reviews is that they miss the forest for the trees.
That is, code reviews tend to focus on the syntax of the code and don’t
really look at the semantics or the overall design of the software. Code
reviews can catch implementation errors, but they don’t often expose
design flaws. And design flaws are much harder and more expensive to fix
than implementation errors.
Do not skip the design review step when the design content is just a
few sentences or paragraphs. Holding design reviews is a great early
feedback loop between the developers and the software lead.
Review Artifacts
Design reviews can be very informal. For example, the software lead
reviews the updated SDD and gives real-time feedback to the developer.
However, depending on your QMS or regulatory environment, you may
be required to have review artifacts (i.e., a paper trail) that evidence that
reviews were held and that action items were identified and resolved.
When formal review artifacts are required, it is tempting to hold one or
two design reviews at the end of projects. Don’t do this. This is analogous
to having only a single code review of all the code after you have released
the software. Reviews of any kind only add value when performed in real
time before coding begins. And, of course, action items coming out of the
review should be resolved before coding begins as well.
Summary
The software detail design process is about decoupling problem-solving
from the act of writing source code. When detailed design creates
solutions, then “coding” becomes a translation activity of transforming the
design into source code. The detailed design process should include the
following:
INPUTS
1
https://en.wikipedia.org/wiki/Rubber_duck_debugging
OUTPUTS
CHAPTER 12
Coding, Unit Tests, and Pull Requests
You can now begin the coding and unit testing process. This involves
the following:
Check-In Strategies
After the implementation is completed, and the desired test coverage has
been achieved, the next step is to check everything into the repository. One
advantage of doing the work on a branch is that the developer can check
in their work in progress without impacting others on the team. I highly
recommend checking in (and pushing) source code multiple times a day.
This provides the advantage of being able to revert or back out changes
if your latest work goes south. It also ensures that your code is stored on
more than one hard drive.
Pull Requests
Use pull requests (PRs) to merge the source on the temporary branch to its
parent branch (e.g., develop or main). Other source control tools provide
similar functionality, but for this discussion, my examples will be GitHub
specific.1 The pull request mechanism has the following advantages:
1
If you are not using Git, you may have to augment your SCM’s merge functionality
with external processes for code reviews and CI integration.
Granularity
I strongly recommend the best practice of implementing new components
in isolation without integrating the component into the application.
That is, I recommend that you first go through the coding process—
steps 1 through 5—and then create a second ticket to integrate the new
component into the actual application or applications. Things are simpler
if the scope of the changes is restricted to the integration changes only.
Additionally, these are the following benefits:
Examples
The sections that follow provide examples from the GM6000 project using
the coding and unit testing process.
I2C Driver
The I2C driver is needed for serial communications to the off-board
EEPROM IC used for the persistent storage in the GM6000 application.
The class diagram in Figure 12-1 illustrates that there are different
implementations for each target platform.
Before the coding and unit testing began, the following steps were
completed:
Table 12-1. Process and work summary for the I2C driver
Step Work
Branch 1. Create a branch off of develop for the work. The branch name should
contain the ticket card number in its name.
Test 1. Create the unit test project and build scripts that are specific to target
Project hardware. The new unit test project is located under the existing
tests/Driver directory tree at tests/Driver/I2C/_0test/
master-eeprom/NUCLEO-F413ZH-alpah1/windows/gcc-arm.
a. Typically, I clone a similar test project directory to start. For the I2C
driver’s test project, I cloned the tests/Driver/DIO/_0test/
out_pwm/NUCLEO-F413ZH-alpah1/windows/gcc-arm. Then I
modified the copied files to create a I2C unit test using these steps:
2. Compile, link, and download the unit tests. Then verify that the
test code passes. Iterate to fix any bug found or to extend the test
coverage.
a. Notify the code reviewers that the code is ready for review. This is
done automatically by the Git server when the PR owner selects or
assigns reviewers as part of creating the PR.
4. If there are CI build failures, commit and push code fixes to the PR
(which triggers a new CI build).
5. Resolve all code review comments and action items. And again, any
changes you make to files in the PR trigger a new CI build.
6. After all review comments have been resolved, and the last pushed
commit has built successfully, the branch can be merged to develop.
The merge will trigger a CI build for the develop branch.
7. Delete the PR branch as it is no longer needed.
At this point, the I2C driver exists and has been verified, but it has not
been integrated into the actual GM6000 application (i.e., Ajax or Eros).
Additional tickets or tasks are needed to integrate the driver into the
Application build.
The I2C driver is an intermediate driver in the GM6000 architecture in
that it is used by the Driver::NV::Onsemi::CAT24C512 driver. In turn, the
NV driver is used by the Cpl::Dm persistent storage framework for reading
and writing persistent application records. The remaining tasks would be
the following:
Screen Manager
The UI consists of a set of screens. At any given time, only one screen is
active. Each screen is responsible for displaying content and reacting to UI
events (e.g., button presses) or model point change notifications from the
application. The Screen Manager component is responsible for managing
the navigation between screens as well as handling the special use cases
such as the splash and UI halted screens.
The Screen Manager itself does not perform screen draw operations,
so it is independent of the Graphics library as well as being hardware
independent. Before the coding and unit testing began, the following steps
should have been completed:
• A ticket was created for the driver.
• The requirements were identified. In this case, it is the
UI wireframes (see the EPC wiki).
• The detailed design was completed and reviewed.
From the detailed design, the Screen Manager is
responsible for
• Screen navigation
At this point, I was ready to start the process of creating the Screen
Manager. Table 12-2 summarizes the work.
Table 12-2. Process and work summary for the Screen Manager
Step Work
Branch 1. Create a GIT branch (off of develop) for the work. The branch
name should contain the ticket number in its name.
Test Project 1. Create the unit test projects. Since it is an automated unit test, the
test needs to be built with three different compilers to eliminate
issues that could potentially occur when using the target cross
compiler. These are the three compilers I use:
• Microsoft Visual Studio compiler—Used because it provides
the richest debugging environment
• MinGW compiler—Used to generate code coverage metrics
• GCC for Linux—Used to ensure the code is truly platform
independent
2. I recommend that you build and test with the compiler toolchain
that is the easiest to debug with. After all tests pass, then build
and verify them with the other compilers. The unit test project
directories are as follows:
tests/Ajax/ScreenMgr/_0test/windows/vc12
tests/Ajax/ScreenMgr/_0test/windows/mingw_w64
tests/Ajax/ScreenMgr/_0test/linux/mingw_gcc
3. Compile, link, and execute the unit tests and verify that the test
code passes with the targeted code coverage.
4. Iterate through the process to extend the test coverage and fix
any bugs.
Pull Request 1. Run the top/run_doxygen.py script to verify that there are
no Doxygen errors in the new file that was added. This step is
needed because the CI builds will fail if there are Doxygen errors
present in the code base.
a. Notify the code reviewers that the code is ready for review.
This is done automatically by the Git server when the PR
author selects or assigns reviewers as part of creating the PR.
4. If there are CI build failures, commit and push code fixes to the
PR (which will trigger new CI builds).
5. Resolve all code review comments and action items. And again,
any changes you make to files in the PR triggers a new CI build.
6. After all review comments have been resolved and the last
pushed commit has built successfully, the branch can be merged
to develop. The merge will trigger a CI build for the develop
branch.
7. Delete the PR branch as it is no longer needed.
At this point, the Screen Manager code exists and has been verified,
but it has not been integrated into the actual GM6000 application (i.e.,
Ajax or Eros). A second ticket is needed to actually integrate the Screen
Manager into the application build. For the GM6000 code, I created a
second ticket that included the following:
• Creating a stub for a Home screen for the Ajax and Eros
applications.
Summary
If you wait until after the Software Detailed Design has been completed
before writing source code and unit tests, you effectively decouple
problem-solving from the act of writing source code. The coding, unit test,
and pull request process should include the following:
INPUTS
OUTPUTS
CHAPTER 13
Integration Testing
Integration testing is where multiple components are combined into
a single entity and tested. In theory, all of the components have been
previously tested in isolation with unit tests, and the goal of the integration
testing is to verify that the combined behavior meets the specified
requirements. Integration testing also serves as incremental validation of
the product as more and more functionality becomes available.
Integration testing is one of those things that is going to happen
whether you explicitly plan for it or not. The scope and effort of the
integration testing effort can range from small to large, and the testing
artifacts that are generated can be formal or informal. Here are some
examples of integration testing:
1. Plan
5. Report results
• It sets a timeline.
Based on the current test results, the software and hardware leads
determine if the goals of the testing have been reached. If yes, the software
lead sends out an email with test results. Note that the testing goals can be
met without all the test cases passing. For example, if the RF range tests
fail, and a new antenna design is needed to resolve the issue, testing can be
paused until new hardware is received, and then the integration test plan
can be re-executed.
1
A formal build is defined as a build of a “stable” branch (e.g., develop or main)
performed on the CI build server where the source code has been tagged and
labeled. It is imperative that all non-unit testing be performed on formal builds
because the provenance of a formal build is known and labelled in the SCM
repository (as opposed to a private build performed on a developer’s computer).
Smoke Tests
Smoke or sanity tests are essentially integration tests that are continually
executed. Depending on the development organization, smoke tests
are defined and performed by the software test team, the development
team, or both. In addition, smoke tests can be automated or manual. The
automated tests can be executed as part of the CI build or run on demand.
If the tests are manual, it is essential that the test cases and steps be
sufficiently documented so that the actual test coverage is repeatable over
time even when different engineers perform the testing.
One downside to smoke tests is that they can be easily broken as
requirements and implementations evolve over time. This means that
Simulator
A functional simulator can be a no-cost platform for performing
automated smoke tests that can be run as part of the CI builds. These
automated simulator tests can be simple or complex. In my experience,
the only limitation for simulator-based tests that run as part of the CI build
is the amount of time they add to the overall CI build time.
2
https://pypi.org/project/pexpect/
Summary
Integration testing performed by the software team occurs throughout a
project. How formal or informal the integration testing should be depends
on what is being integrated. Generally, a more formal integration testing
effort is required when integrating components across teams. However,
a minimum level of formality should be that the pass/fail criteria is
written down.
Continual execution of integration tests, for example, smoke tests or
sanity tests, provides an initial quality check of a build. Ideally, these tests
would be incorporated in the CI build process.
INPUTS
• Source code
OUTPUTS
CHAPTER 14
Board Support
Package
The Board Support Package (BSP) is a layer of software that allows
applications and operating systems to run on the hardware platform.
Exactly what is included in a BSP depends on the hardware platform, the
targeted operating system (if one is being used), and potential third-party
packages that may be available for the hardware (e.g., graphics libraries).
The BSP for a Raspberry Pi running Linux is much more complex than a
BSP for a stand-alone microcontroller. In fact, I have worked on numerous
microcontroller-based projects that had no explicit concept of a BSP in
their design. So while there is no one-size-fits-all definition or template for
BSPs in the microcontroller hardware space, ideally a microcontroller BSP
encapsulates the following:
Compiler Toolchain
The compiler toolchain is all of the glue and magic that has to be in place
from when the MCU’s reset vector executes up until the application’s
main() function is called. This includes items, for example:
1. The vector table—The list of function pointers that
are called when a hardware interrupt occurs.
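As a rough illustration of this glue (symbol and section names vary by toolchain and are normally supplied by the BSP's startup file and linker script, e.g., startup_stm32f413xx.s and STM32F413ZHTx_FLASH.ld in the GM6000 BSP), a bare-metal reset path looks something like this:

#include <cstdint>

extern uint32_t _estack, _sidata, _sdata, _edata, _sbss, _ebss;  // provided by the linker script
extern "C" int main();

extern "C" void Reset_Handler()
{
    // Copy initialized data from flash to RAM, then zero the BSS segment.
    uint32_t* src = &_sidata;
    for ( uint32_t* dst = &_sdata; dst < &_edata; ) { *dst++ = *src++; }
    for ( uint32_t* dst = &_sbss;  dst < &_ebss;  ) { *dst++ = 0;      }

    // Real startup code also configures clocks and calls static C++ constructors
    // before handing control to the application.
    main();
    for ( ;; ) { }   // main() should never return on a bare-metal target
}

// The first vector table entries: initial stack pointer, then the reset vector.
__attribute__(( section(".isr_vector"), used ))
const void* g_vectorTable[] = { &_estack, (void*) Reset_Handler /* , IRQ handlers ... */ };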
BSPs in Practice
As previously mentioned, there is no one size fits all when it comes to
microcontroller BSPs. There are two BSPs for the GM6000 project: one is
for the ST NUCLEO-F413ZH evaluation board, and the second is for the
Adafruit Grand Central M4 Express board. The two BSPs are structured
completely differently, and the only thing they have in common is that each
BSP has Api.h and Api.cpp files. The BSP for the Nucleo board uses ST’s
HAL library and Cube MX IDE for low-level hardware functionality. The
BSP for the Adafruit Grand Central board leverages the Arduino framework
for abstracting the hardware. To illustrate the differences, the following are
summaries of the file and directory structure of the two BSPs.
ST NUCLEO-F413ZH BSP
src/Bsp/Initech/alpha1/ // Root directory for the BSP
+--- Api.h // Public BSP interface
+--- Api.cpp // Implementation for Api.h
+--- ...
+--- stdio.cpp // C Library support
+--- syscalls.c // C Library support
+--- ...
+--- mini_cpp.cpp // C++ support when not linking against stdlibc++
+--- MX/ // Root directory for the ST Cube MX tool
| +--- MX.ioc // MX project file
| +--- startup_stm32f413xx.s // Initializes the Data & BSS segments
| +--- STM32F413ZHTx_FLASH.ld // Linker script
| +--- ...
| \--- Core/ // Contains the MX tool's auto generated code.
// The dir contains the vector table, clock cfg, etc.
|
+--- console/ // Support for the CPL usage of the debug UART
| +--- Output.h // Cpl::Io stream instance for the debug UART
| \--- Output.cpp // Cpl::Io stream instance for the debug UART
+--- SeggerSysView/ // Run time support for Segger's SysView tool
\--- trace/ // Support for CPL tracing
\--- Output.cpp // Tracing using C library stdio and Cpl::Io streams
Structure
The structure for BSPs that I recommend is minimal because each BSP is
conceptually unique since it is compiler, MCU, and board specific. This
structure is a single Api.h header file that uses the LHeader pattern and
exposes all of the BSP’s public interfaces. An in-depth discussion of the
LHeader pattern can be found in Appendix D, “LHeader and LConfig
Patterns.” However, here is a summary of how I used the LHeader
pattern in conjunction with BSPs.
not a BSP. For example, when migrating from using an evaluation board to
the first in-house designed board, the existing driver source code does not
have to be updated when changing to the in-house board.
• Keep the BSPs small since they are not reuse friendly.
Typically, this means implementing drivers outside of
the BSP whenever possible.
• BSPs are fairly static in nature. That is, after they are
working, they require very little care and feeding. Doing
a lot of refactoring or maintenance on existing BSP
functionality is a potential indication that there are
underlying architecture or design issues.
Bootloader
The discussion so far has omitted any discussion using a bootloader with
the MCU. The reason is that designing and implementing a bootloader
is outside the scope of this book. However, many microcontroller projects
include a bootloader so that the firmware can be upgraded after the device
has been deployed. Conceptually a bootloader does the following:
the owner of the MCU’s vector table. However, the changes are relatively
isolated (e.g., an alternate linker script) and usually do not disrupt the
existing BSP structure.
Summary
The goal of the Board Support Package is to encapsulate the low-level details
of the MCU hardware, the board schematic, and compiler hardware
support into a single layer or component. The design of a BSP should
decouple the concrete implementation from being directly referenced (i.e.,
#include statements) by the drivers and application code that consume
the BSP’s public interfaces. This allows the client source code to be
independent of concrete BSPs. The decoupling of a BSP’s public interfaces
can be done by using the LHeader pattern.
INPUTS
OUTPUTS
• Design reviews
• Code reviews
CHAPTER 15
Drivers
This chapter is about how to design drivers that are decoupled
from a specific hardware platform so that they can be reused on different
microcontrollers. In this context, reuse means reuse across teams,
departments, or your company. I am not advocating designing general-
purpose drivers that work on any and all platforms; just design for the
platforms you are actively using today.
Writing decoupled drivers does not take more effort or time to
implement than a traditional platform-specific driver. This is especially
true once you have implemented a few decoupled drivers. The only extra
effort needed is a mental shift in thinking about what functionality the
driver needs to provide as opposed to getting bogged down in the low-level
hardware details. I am not saying the hardware details don’t matter; they
do. But defining a driver in terms of a specific microcontroller register set
only pushes hardware details into your application’s business logic.
A decoupled driver requires the following:
Binding Times
A Hardware Abstraction Layer is created using late bindings. The general
idea behind late binding time is that you want to wait as long as possible
before binding data or functions to names. Here are the four types of name
bindings. However, only the last three are considered late bindings.
Public Interface
Defining a public interface for a driver is a straightforward task. What
makes it complicated is trying to do it without referencing any underlying
hardware-specific data types. And interacting with the MCU’s registers
(and its SDK) always involves data types of some kind. For example, when
configuring a Pulse Width Modulation (PWM) output signal using the ST
HAL library for the STM32F4x, the MCU requires a pointer to a timer block
(TIM_HandleTypeDef) and a channel number (of type uint32_t). When
using the Arduino framework with the ATSAMD51 MCU, only a simple int
is needed. So how do you define a handle that can be used to configure a
PWM output signal that is platform independent?
#include "colony_map.h"
// Note: no path specified
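Expanding on that snippet, the sketch below shows how a PWM driver header can name its handle type without naming a platform; the type, function, and file names are illustrative, not the actual CPL driver:

// ----- Driver/Pwm/Hal.h (platform-neutral header) -----
#include "colony_map.h"                 // no path: each build's include path selects
                                        // which colony_map.h the compiler finds
#define DriverPwmHandle_T   DriverPwmHandle_T_MAP
void driverPwm_setDutyCycle( DriverPwmHandle_T pwmHdl, unsigned dutyCyclePermil );

// ----- colony_map.h used by an STM32 build -----
#include "stm32f4xx_hal.h"              // vendor-specific types are allowed here
struct Stm32PwmHandle_T { TIM_HandleTypeDef* timerBlock; uint32_t channel; };
#define DriverPwmHandle_T_MAP   Stm32PwmHandle_T*

// ----- colony_map.h used by the functional simulator build -----
// #define DriverPwmHandle_T_MAP   int

Client code only ever sees DriverPwmHandle_T, so the same driver interface compiles unchanged against the STM32 target and the simulator.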
Facade
A facade driver design is one where the public interface is defined and
then each supported platform has its own unique implementation. That
is, there is no explicit HAL defined. A simple and effective approach for a
facade design is to use a link-time binding. This involves declaring a set of
functions (i.e., the driver’s public interface) and then having platform-
specific implementations for that set of functions. In this way, each
platform gets its own implementation. The PWM driver in the CPL class
library is an example of an HAL that uses link-time binding. This driver
generates a PWM output signal at a fixed frequency with a variable duty
cycle controlled by the application.
Figure 15-2 illustrates how a client of the PWM driver is decoupled
from the target platform by using link-time bindings.
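Since the figure is not reproduced here, the following sketch shows the shape of a facade-style HAL; the function and directory names are illustrative, not the actual CPL PWM driver:

// ----- Driver/Pwm/Api.h -- the platform-neutral public interface -----
void driverPwm_start( unsigned initialDutyCyclePermil );
void driverPwm_setDutyCycle( unsigned dutyCyclePermil );
void driverPwm_stop( void );

// Each platform supplies its own implementation of these three functions, e.g.:
//   src/Driver/Pwm/_stm32/Api.cpp        -- calls the ST HAL timer/PWM functions
//   src/Driver/Pwm/_simulation/Api.cpp   -- updates a simulated output instead
// A given build's libdirs.b file lists exactly one of these directories, so the
// binding to a concrete platform happens at link time rather than in the source.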
In the following three sections, you will see the implementation of the
PWM functionality for each of these platforms:
Separation of Concerns
The “separation of concerns” approach to driver design is to separate
the business logic of the driver from the hardware details. This involves
creating an explicit HAL interface definition in a header file that is
separate from the driver’s public interface. The HAL interface specifies
basic hardware actions. Or, said another way, the HAL interface should
be limited to encapsulating access to the MCU's registers or SDK function calls.
Figure 15-10 shows a snippet from the HAL header file (located at
src/Driver/Button/Hal.h). Note the following:
1
https://en.wikipedia.org/wiki/Factory_method_pattern
In the following three sections, you will see the implementation of the
button driver functionality for each of these platforms:
• STM32—See Figures 15-11, 15-12, and 15-13.
LHeader Caveats
At this point, I should note that the LHeader pattern has an implementation
weakness. It breaks down in situations where interface A defers a type
definition using the LHeader pattern, and interface B also defers a type
definition using the LHeader pattern, and interface B has a dependency on
interface A. This use case results in a “cyclic header include scenario,” and
the compile will fail. This problem can be solved by adding an additional
header latch using the header latch symbol defined in the HAL header file to
the platform-specific mappings header file. Figure 15-12 shows an example
of the additional header latch from the STM32 mapping header file for the
Button driver, and Appendix D, “LHeader and LConfig Patterns,” provides
additional details.
Unit Testing
All drivers should have unit tests. These unit tests are typically manual
unit tests because they are built and executed on a hardware target.2 One
advantage of using the separation of concerns paradigm for a driver is
that you can write automated unit tests that run as part of CI builds. For
example, with the Button driver, there are two separate unit tests:
2 In my experience, test automation involving target hardware is an exception because of the effort and expenses involved.
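The following self-contained sketch (not the repository's actual test code) shows the essence of such an automated test: a mocked HAL stands in for the hardware so that debounce logic can be exercised on the build machine.

#include <cassert>

// Mocked HAL: the test controls what the "hardware" reports
static bool mockedRawPressed = false;
static bool halGetRawPressed() { return mockedRawPressed; }

// Stand-in for the driver's business logic: require N consecutive identical
// samples before the debounced state changes
class DebouncedButton
{
public:
    explicit DebouncedButton( int requiredSamples ) : m_required( requiredSamples ) {}

    void sample()
    {
        bool raw    = halGetRawPressed();                   // the only hardware access
        m_count     = ( raw == m_candidate ) ? m_count + 1 : 1;
        m_candidate = raw;
        if ( m_count >= m_required ) { m_debounced = m_candidate; }
    }

    bool isPressed() const { return m_debounced; }

private:
    int  m_required;
    int  m_count     = 0;
    bool m_candidate = false;
    bool m_debounced = false;
};

int main()
{
    DebouncedButton uut( 3 );
    mockedRawPressed = true;
    uut.sample(); uut.sample();
    assert( !uut.isPressed() );     // debounce threshold not reached yet
    uut.sample();
    assert( uut.isPressed() );      // three consecutive pressed samples
    return 0;
}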
Polymorphism
A polymorphic design is similar to a facade design, except that it
uses run-time bindings for selecting the concrete hardware-specific
implementation. A polymorphic design is best suited when using C++.3
The I2C driver is an example of a polymorphic Hardware Abstraction
Layer. It encapsulates an I2C data transfer from an I2C master to an I2C
slave device.
Figure 15-20 is a class diagram of the I2C driver's abstract interface and
its concrete child classes.
3 It is possible to implement run-time polymorphism in C. For example, the original CFront C++ compiler translated the C++ source code into C code and then passed it to the C compiler. I recommend that you use C++ instead of hand-crafting the equivalent of the vtables in C.
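A simplified sketch of the run-time binding idea follows (it is not the CPL I2C driver's actual interface): the application codes against an abstract class, and each platform provides a concrete child class whose instance is handed to the application at start-up.

#include <cstdint>
#include <cstddef>
#include <cstring>

// Abstract interface: application code depends only on this class
class I2cMaster
{
public:
    virtual ~I2cMaster() = default;

    /// Writes numBytes to the I2C slave at the specified 7-bit address; returns true on success
    virtual bool write( uint8_t slaveAddr7, const void* srcData, size_t numBytes ) = 0;

    /// Reads numBytes from the I2C slave at the specified 7-bit address; returns true on success
    virtual bool read( uint8_t slaveAddr7, void* dstData, size_t numBytes ) = 0;
};

// One concrete child class per platform (STM32, Grand Central, simulator, etc.)
class SimulatedI2cMaster : public I2cMaster
{
public:
    bool write( uint8_t, const void*, size_t ) override { return true; }
    bool read( uint8_t, void* dstData, size_t numBytes ) override
    {
        std::memset( dstData, 0, numBytes );    // a simulated slave that reads back zeros
        return true;
    }
};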
Dos and Don’ts
There is no silver bullet when it comes to driver design. That said,
I recommend the following best practices:
• When designing a new driver, base your design on
your current hardware platform, BSP, and application
needs. This minimizes the amount of glue logic that
is needed for the current project. It also avoids having
to “guesstimate” the nuances of future hardware
platforms and requirements. Don’t spend a lot of effort
in coming up with an all-encompassing interface or
HAL definition. Don’t overdesign your driver, and
don’t include functionality that is not needed by the
application. For example, if your application needs a
UART driver that never needs to send or receive a break
character,4 do not include logic for supporting break
characters in your UART driver.
• Always include a start() or initialize() function
even if your initial implementation does not require
it. In my experience, as a driver and the application
mature, you typically end up needing some
initialization logic. I also recommend always including
a stop() or shutdown() function as well.
• If you have experience writing the driver on other
platforms, leverage that knowledge in the current
driver design.
4 A break character consists of all zeros and must persist for a minimum of 11 bit times before the next character is received. A break character can be used as an out-of-band signal, for example, to signal the beginning of a data packet.
Summary
A decoupled driver design allows reuse of drivers across multiple hardware
platforms. A driver is decoupled from the hardware by creating a Hardware
Abstraction Layer (HAL) for its run-time usage. The following late binding
strategies can be used for the construction of the HAL interfaces:
• Facade—Using link-time bindings
• Separation of concerns—Using compile-time bindings (the LHeader pattern)
• Polymorphism—Using run-time bindings
INPUTS
OUTPUTS
• Unit tests
• Design reviews
• Code reviews
CHAPTER 16
Release
For most of the construction stage, you are focused on writing and testing
and optimizing your software. And as the product gets better and better
and the bugs get fewer and fewer, you start to have the sense that you’re
almost done. However, even when you finally reach the point when you
can say “the software is done,” there are still some last-mile tasks that need
to be completed in order to get the software to your customers. These
mostly non-coding tasks make up the release activities of the
project.
The release stage of the project overlaps the end-of-construction
stage. That is, about the time you start thinking about creating a release
candidate, you should be starting your release activities. Ideally, if you’ve
followed the steps in this cookbook, your release activities will simply
involve collecting, reviewing, and finalizing reports and documentation
that you already have. If not, you’ll get to experience the angst and drama
of a poorly planned release: fighting feature creep, trying to locate licenses,
fighting through installation and update issues, etc. If you find yourself
in the position of struggling with the logistics of releasing your software,
it means you probably “cheated” during the planning and construction
stages and built up technical debt that now has to be retired before
shipping the software. My recommendation is that if you are working with an
Agile methodology, you practice “releasing” the software at the end of
each sprint. That is, when you estimate you have about three sprints left to
finish the project, go through all the release activities as if you were going
to release the product.
To be clear, all release builds must be formal builds and
have a human-friendly version identifier. Generally, there are four different
types of releases:
across all the stakeholders, which results in there always being one more
software change request. The CCB process (formal or informal) provides
the discipline to stop changes so the software can finally ship.
It should be obvious that the SBOM should be created as you go. That
is, the process should start when external packages are first incorporated
into the code base, not as a “paperwork” item during the release stage.
For non-open-source licensing, purchasing software licenses takes time
and money, and you need to make sure that money for those purchases is
budgeted. Your company’s legal department needs to weigh in on what is
or is not acceptable with respect to proprietary and open source licenses.
Or, said another way, make sure you know, and preferably have some
documentation on, your company’s software licensing policies. Don’t
assume something is okay because it is widely used in or around your
company because nothing is as frustrating as the last-minute scramble to
redesign (and retest) your application because one of your packages has
unacceptable licensing terms.
The SBOM is not difficult to create since it is essentially just a table that
identifies which third-party packages are used along with their pertinent
information. My recommendations for creating and maintaining the
SBOM are as follows:
Anomalies List
By definition, your bug tracking tool is the canonical source for all known
defects. The anomalies list is simply a snapshot in time of the known
defects and issues for a given release. In theory, the anomalies list for any
given release could be extracted from the bug tracking tool, but having
an explicit document that enumerates all of the known defects for a given
release simplifies communications within the cross-functional team and
with the leadership team. When an anomalies list needs to be generated
should be defined by your QMS process; however, since it is a key tool in
determining whether a release is ready for early access or GA, you may find
yourself generating the list more frequently at the end of the project.
Release Notes
Internal release notes should be generated every time a formal build is
deployed to anyone outside of the immediate software team. This means
you need release notes for formal builds that go to testers, hardware
engineers, QA, etc. Simply put, the internal release notes summarize what is
and what is not in the release. It is important to remember that when writing
the internal release notes, the target audience is everyone on the
team—not just the software developers. The software lead or the “build
master” is typically responsible for generating the internal release notes.
There should be a line or bullet item for every change (from the
previous release) that is included in the current release. There should be
an item for every pull request that was merged into the release branch.
Internal release notes can be as simple as enumerating all of the work
item tickets and bug fixes—preferably with titles and hyperlinks—that
went into the release. Remember, if you are following the cookbook,
there will be a work item or bug ticket created for each pull request. One
advantage of referencing work or bug tickets is that individual tickets will
• Performance issues
Deployment
In most cases, the embedded software you create is sold with your
company’s custom hardware. This means that deploying your software
requires making the images available to your company’s manufacturing
process. Companies that manufacture physical products typically track
customer-facing releases in a Product Lifecycle Management (PLM) tool
such as Windchill or SAP that is used to manage all of the drawings and
bill of materials for the hardware. With respect to embedded software,
the software images and their respective source files are bill-of-material
line items or sub-assemblies tracked by the PLM tool. Usually, the PLM
tool contains an electronic copy of the release files. As PLM tools have
a very strict process for adding or updating material items, the process
discourages frequent updates, so you don’t want to put every working
release or candidate release into the PLM system—just the alpha, beta, and
gold releases.
These processes are company and PLM tool specific. For example,
at one company I worked for, the PLM tool was Windchill, but since the
software images were zipped snapshots of the source code (i.e., very large
binary files), the electronic copies of the files were formally stored in a
different system and only a cover sheet referencing the storage location
was placed in Windchill.
Typically, the following information is required for each release into
the PLM system:
The same care and due diligence that went into updating the PLM
system for manufacturing with a new release should be applied when
releasing images to the OTA server. The last thing you want is a self-
made crisis of releasing the wrong or bad software to the field. Since
almost all embedded software interacts with the physical world, a
bad release can have negative real-world consequences for a period
of time before a fixed release can be deployed (e.g., no heating for an
entire building for days or weeks in the middle of winter). Or, worst
case, the OTA release “bricks” the hardware, requiring field service
personnel to resurrect the device.
QMS Deliverables
While the Quality Management System (QMS) deliverables do not
technically gate the building and testing of a release, the required
processes can delay shipping the software to customers. What is involved
in the QMS deliverables is obviously specific to your company’s defined
QMS processes. On one end of the spectrum, there are startup companies
that have no QMS processes, and all that matters is shipping the software.
On the other end, there are the regulated industries, such as medical
devices, that will have quite verbose QMS processes. If you have no, or
minimal, QMS processes defined, I recommend the following process be
followed and the following artifacts be generated for each gold release.
• System architecture
• Software architecture
• CI setup
2 Archiving a build server can take many forms. For example, if the build server is a VM, then creating a backup or snapshot of the VM is sufficient. If the build server is a Docker container, then backing up the Docker file or Docker image is sufficient. For a physical box, creating images of the box (using tools like Ghost, AOMEI, or True Image) could be an option. Just make sure that the tool you use has the ability to restore your image to a machine that has different hardware than the original box.
Summary
The release stage overlaps the end of the construction phase
and finishes when a gold release is approved. The release stage has several
deliverables in addition to the release software images. Ideally, the bulk of
the effort during the release stage is focused on collecting the necessary
artifacts and quality documents, and not the logistics and actual work of
creating and completing them.
Ensure that you have a functioning Change Control Board in place for
the end-game sprints in order to prevent feature creep and never-ending
loops of just-one-more-change and to reduce your regression testing
efforts.
Finally, put all end-customer release images and source code into
your company’s PLM system to be included as part of the assembled end
product.
INPUTS
OUTPUTS
• Software architecture
• Doxygen output
APPENDIX A
Repository Organization
A single GitHub repository stores all of the source code that is used to build
the GM6000 example project. This means that all third-party code has
been copied into the epc repository and was incorporated using the open
source tool Outcast.1 The following is the top-level directory structure of
the repository.
1 https://github.com/johnttaylor/Outcast
Windows
This section details the Windows-specific instructions.
The GCC cross compiler for the ARM Cortex M/R is included as part
of the epc repository (xsrc/stm32-gcc-arm) and does not need to be
installed. This compiler is used for both the ST NUCLEO-F413ZH and
Adafruit Grand Central M4 Express targets.
C:\>mkdir work
C:\>cd work
C:\work>git clone https://github.com/johnttaylor/epc
and to specify which compiler to use. You should not have to edit the
env.bat script (though you are welcome to customize it). The env.bat script
calls helper scripts that configure the individual compilers. You will need
to modify these helper scripts for each installed compiler.
These helper scripts are located in the top\compilers\ directory.
There needs to be a Windows batch file for each installed compiler. The
individual batch files are responsible for updating the environment for
that compiler with such things as adding the compiler to the Windows
command path, setting environment variables needed by the compiler or
debug tools, etc.
In addition, each script provides a “friendly name” that is used to
identify the compiler when running the env.bat file. The actual batch file
script names are not important. That is, it does not matter what the file
names are as long as you have a unique file name for each script.
Here is an example of the top\compilers\01-vcvars32-vc17.bat file.
In line 2 of Figure A-1, you can see where the friendly name is set. And line
4 simply calls the batch file supplied by Visual Studio. You will need to edit
lines 2 and 4 to match your local installation.
There is no naming convention for these script file names.
However, I do recommend that you prefix each file name with a
number, for example, “01”. This way, the compiler scripts are
always listed in the same order when you add a new compiler.
Building on Windows
Developers can build the applications and unit tests. All application builds
are performed under the projects\ directory tree. All unit test builds
are performed under the tests\ directory tree. While there is a limited
amount of source code under the projects\ and tests\ directories, the
bulk of the source code is under the src\ and xsrc\ directories.
c:\>cd work\epc
c:\work\epc>env.bat
NO TOOLCHAIN SET
c:\work\epc>env.bat 1
*******************************************************
** Visual Studio 2022 Developer Command Prompt v17.3.6
** Copyright (c) 2022 Microsoft Corporation
*******************************************************
c:\work\epc>
c:\work\epc>cd projects\GM6000\Ajax\simulator\
windows\vc12
c:\work\epc\projects\GM6000\Ajax\simulator\windows\
vc12>nqbp.py
=======================================================
= START of build for: ajax-sim.exe
= Project Directory: C:\work\epc\projects\GM6000\Ajax\
simulator\windows\vc12
= Toolchain: VC++ 12, 32bit (Visual Studio 2013)
= Build Configuration: win32
= Begin (UTC): Sat, 02 Mar 2024 19:32:25
= Build Time: 1709407945 (65e37ec9)
=======================================================
[266/266] Linking: ajax-sim.exe
=======================================================
= END of build for: ajax-sim.exe
= Project Directory: C:\work\epc\projects\GM6000\Ajax\
simulator\windows\vc12
= Toolchain: VC++ 12, 32bit (Visual Studio 2013)
= Build Configuration: win32
= Elapsed Time (hh mm:ss): 00 00:16
=======================================================
c:\work\epc\projects\GM6000\Ajax\simulator\windows\
vc12>dir _win32
03/02/2024 02:32 PM <DIR> .
03/02/2024 02:32 PM <DIR> ..
03/02/2024 02:32 PM 98,632 .ninja_deps
03/02/2024 02:32 PM 45,433 .ninja_log
03/02/2024 02:32 PM 2,990,080 ajax-sim.exe
03/02/2024 02:32 PM 10,202,952 ajax-sim.ilk
03/02/2024 02:32 PM 27,668,480 ajax-sim.pdb
03/02/2024 02:32 PM 195,812 build.ninja
03/02/2024 02:32 PM 469,107 main.obj
02/25/2024 01:12 PM <DIR> src
03/02/2024 02:32 PM 1,567,744 vc140.idb
03/02/2024 02:32 PM 2,740,224 vc140.pdb
02/25/2024 01:12 PM <DIR> xsrc
The same steps are used to build all applications and unit tests.
c:\>cd work\epc
c:\work\epc>env.bat
NO TOOLCHAIN SET
c:\work\epc>cd projects\GM6000\Ajax\alpha1\
windows\gcc-arm
c:\work\epc\projects\GM6000\Ajax\alpha1\windows\
gcc-arm>nqbp.py
=======================================================
= START of build for: ajax
= Project Directory: C:\work\epc\projects\GM6000\Ajax\
alpha1\windows\gcc-arm
The same steps can be used for the Grand Central hardware, except
that in step 2 you would select “GCC-ARM compiler for Grand Central
BSP 1.6.0” and in step 3 you would change to the projects\GM6000\Ajax\
alpha1-atmel\gcc\ directory.
Linux
This section details the Linux-specific instructions.
~$ mkdir work
~$ cd work
~/work$ git clone https://github.com/johnttaylor/epc
Building on Linux
All application builds are performed under the projects/ directory tree.
All unit test builds are performed under the tests/ directory tree. While
there is a limited amount of source code under the projects/ and tests/
directories, the bulk of the source code is under the src/ and xsrc/
directories.
All builds are performed on the command line. Use the following steps
to build the functional simulator for the GM6000 Ajax application.
1. Open a terminal window.
~$ cd work/epc
~/work/epc$ source env.sh
Environment set (using native GCC compiler)
gcc (Debian 10.2.1-6) 10.2.1 20210110
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying
conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
~/work/epc$ cd projects/GM6000/Ajax/simulator/linux/gcc
~/work/epc/projects/GM6000/Ajax/simulator/linux/
gcc$ nqbp.py
=======================================================
= START of build for: ajax-sim
= Project Directory: ~/work/epc/projects/GM6000/Ajax/
simulator/linux/gcc
= Toolchain: GCC
= Build Configuration: posix64
= Begin (UTC): Sun, 31 Mar 2024 14:16:31
= Build Time: 1711894591 (6609703f)
=======================================================
[282/282] Linking: ajax-sim
=======================================================
The same steps are used to build all applications and unit tests.
Additional Tools
Up until now, this discussion has been about building GM6000
applications and unit tests. However, if you are planning to
• Run on the target hardware
then additional steps are required. What follows are the additional tools
that need to be installed.
STM32 Cube MX
The STM32 Cube MX is a GUI application used for the low-level MCU
configuration. That is, you need it when creating and updating the BSPs
for the STM32 target hardware. The following steps show how to install the
STM32 Cube MX application.
Segger Tools
If you plan on using the STM32 IDE for programming and debugging, you
can skip the following discussions of Segger tools.
Segger SystemView
Segger’s SystemView is a real-time recording and visualization tool
for embedded systems that reveals the true runtime behavior of an
application. Or said another way, it is a handy-dandy tool for evaluating
and troubleshooting CPU usage and performance. Detailed installation
instructions for SystemView can be found on the Wiki page: https://
github.com/johnttaylor/epc/wiki/Installing-Developer-Tools:-
Development-Machine#segger-systemview. Accept the default install
options.
Terminal Emulator
A terminal emulator is required to connect to the target’s hardware serial
port debug console. You can use any terminal emulator, not just one of the
ones listed here.
PuTTY
Detailed installation instructions for PuTTY can be found on the Wiki
page: https://github.com/johnttaylor/epc/wiki/Installing-
Developer-Tools:-Development-Machine#putty. Accept the default
install options.
Tera Term
Detailed installation instructions for Tera Term can be found on the
Wiki page: https://github.com/johnttaylor/epc/wiki/Installing-
Developer-Tools:-Development-Machine#tera-term. Accept the default
install options.
Doxygen
Doxygen is an automated code documentation tool that is used to generate
detailed documentation directly from the code base. To achieve this, you
need to install
• Doxygen
• Graphviz
Code Coverage
The GCC compiler and the Python gcovr library are used to generate
code coverage metrics when running the automated unit tests. The code
coverage metrics are collected and reported as part of the CI builds. The
developer can run code coverage reports on a per-unit-test basis before
submitting a pull request.
The following steps show how to set up and generate coverage metrics
for a unit test on a Windows box. The process is the same on Linux.
c:\>cd work\epc
c:\work\epc>env.bat
NO TOOLCHAIN SET
c:\_workspaces\zoe\epc\tests\Ajax\Alerts\_0test\
windows\mingw_w64>nqbp.py -g
=======================================================
= START of build for: a.exe
= Project Directory: C:\work\epc\Ajax\Alerts\_0test\
windows\mingw_w64
= Toolchain: Mingw_W64
= Build Configuration: win32
= Begin (UTC): Sat, 02 Mar 2024 21:33:26
= Build Time: 1709415206 (65e39b26)
=======================================================
[110/110] Linking: a.exe
=======================================================
= END of build for: a.exe
= Project Directory: C:\work\epc\Ajax\Alerts\_0test\
windows\mingw_w64
= Toolchain: Mingw_W64
= Build Configuration: win32
= Elapsed Time (hh mm:ss): 00 00:25
=======================================================
c:\work\epc\tests\Ajax\Alerts\_0test\windows\mingw_
w64>_win32\a.exe
=======================================================
All tests passed (22 assertions in 1 test case)
c:\work\epc\tests\Ajax\Alerts\_0test\windows\mingw_
w64>tca.py rpt
-f.*Ajax.Alerts.*
-------------------------------------------------------
GCC Code Coverage Report
Directory: ../../../../../src
-------------------------------------------------------
File Lines Exec Cover Missing
-------------------------------------------------------
C:/work/epc/src/Ajax/Alerts/Summary.cpp
50 44 88% 41-43,65-67
-------------------------------------------------------
TOTAL 50 44 88%
-------------------------------------------------------
c:\work\epc\tests\Ajax\Alerts\_0test\windows\
mingw_w64>tca.py rpt --branch
-f.*Ajax.Alerts.*
-------------------------------------------------------
GCC Code Coverage Report
Directory: ../../../../../src
-------------------------------------------------------
File Branches Taken Cover Missing
-------------------------------------------------------
C:/work/epc/src/Ajax/Alerts/Summary.cpp
28 24 85% 39,53,63,136
-------------------------------------------------------
TOTAL 28 24 85%
-------------------------------------------------------
RATT
The RATT test engine is a set of Python scripts that are used to automate
user interactions over IO streams such as
• Serial ports
• TCP sockets
RATT is built on top of the Python pexpect library and can be found
in the epc repository in the xsrc/ratt/ directory. For example, the
automated “smoke test” for the GM6000 Ajax application is a collection of
RATT scripts that interact with the functional simulator’s debug console.
The following steps show how to set up and run a RATT script on
Windows. The process is the same on a Linux box.
c:\>cd work\epc
c:\work\epc>env.bat 1
*******************************************************
** Visual Studio 2022 Developer Command Prompt v17.3.6
File: src/Ajax/Heating/Supervisor/genfsm.py
#!/usr/bin/python3
"""Invokes NQBP's genfsm_base.py script. To run 'GENFSM' copy this file
To your source directory. Then edit the local script to generate one or
more Finite State Machines (FSMs)
"""
import os
import sys
311
Appendix A Getting Started with the Source Code
if ( NQBP_BIN == None ):
sys.exit( "ERROR: The environment variable NQBP_BIN is not set!" )
sys.path.append( NQBP_BIN )
###############################################################
# BEGIN EDITS HERE
###############################################################
# Generate FSM#1
sys.argv[1] = '-d 4' # Size of the event queue
sys.argv[2] = 'Fsm' # Name of Cadifra diagram (without .extension)
sys.argv[3] = 'Ajax::Heating::Supervisor' # Namespace
genfsm_base.run( sys.argv )
Outcast
Outcast is an experimental tool suite for managing source code packages
for reuse by other packages. Outcast is used to incorporate external
repositories such as the CPL C++ class library (colony.core) and NQBP2
(nqbp2) into the epc repository so that day-to-day development in the epc
repository follows a mono-repository workflow. The Outcast tool only needs
to be used when adding or upgrading external packages.
If you need to install Outcast because you are adding or upgrading
external packages, detailed installation instructions for Outcast can be
found on the Wiki page: https://github.com/johnttaylor/epc/wiki/
Installing-Developer-Tools:-Development-Machine#outcast. Make
sure you add Outcast’s bin/ directory to the system’s command path.
APPENDIX B
Running the
Example Code
This appendix describes how to run the Ajax and Eros applications in the
GM6000 project. The Ajax application is the image that is provisioned into
the GM6000 product when it ships. The Eros application is an engineering
test application for exercising and validating the hardware.
The developer’s primary interaction with these example applications
is through the debug console. With only a few exceptions, all of the console
commands are the same whether running on the target hardware or the
simulator.
Ajax Application
The Ajax application has a live debug console. However, the console is
password protected. The password for each individual board is set during
the provisioning when units are manufactured.
Provisioning
The Ajax application requires that each control board be provisioned
during the manufacturing process. The provisioning step is used to store
the following data in nonvolatile storage:
• Model number
• Serial number
2 To build the debug version of any application or unit test, use the -g option when invoking the build script. For example, nqbp.py -g.
C:\work\epc\projects\GM6000\Ajax\simulator\windows\vc12>_
win32\ajax-sim.exe
AJAX: appvariant_initialize0( )
>> 00 00:00:02.158 (METRICS) METRICS:POWER_ON. Boot count = 7
<h-outK2>: 0
<h-outK3>: 10
<h-outK4>: 20
<fanLow>: 39322
<fanMed>: 52429
<fanHi>: 65536
<maxCap>: 60000
$>bye app
$> >> 00 00:02:20.042 (METRICS) METRICS:SHUTDOWN. Boot
count = 7
c:\work\epc\projects\GM6000\Ajax\simulator\windows\vc12>
Console Password
The Ajax application’s debug console is password protected. To gain
access to the debug console, the user must log in using the user console
command. Per the GM6000 requirements, no help or error messages
are displayed on the debug console until a user successfully logs in. The
following is an example of logging into the release version of the Ajax
application running on a functional simulator without the simulated
display running.
C:\work\epc\projects\GM6000\Ajax\simulator\windows\vc12>_win32\
ajax-sim.exe
AJAX: appvariant_initialize0()
>> 00 00:00:02.149 (METRICS) METRICS:POWER_ON. Boot count = 8
>> 00 00:00:02.552 (ALERT) ALERT:FAILED_SAFE. Cleared
>> 00 00:00:04.206 (INFO) User Record size: 14 (30)
>> 00 00:00:04.206 (INFO) Metrics Record size: 39 (55)
>> 00 00:00:04.208 (INFO) Personality Record size: 193 (209)
>> 00 00:00:04.213 (INFO) Log Entry size: 159
help # Command is ignored until a successful login
user login dilbert Was.Here1234!
$> help
bye [app [<exitcode>]]
dm ls [<filter>]
dm write {<mp-json>}
dm read <mpname>
dm touch <mpname>
help [* | <cmd>]
log [*|<max>]
log <nth> (*|<max>)
log clear
rnd <numBytes>
threads
tprint ["<text>"]
trace [on|off]
The debug version of the Ajax application does not require a login to
the debug console. This is done to simplify development.
Functional Simulator
The functional simulator can run with or without a simulated display. The
simulated display is provided by a C# Windows application (see Figure B-1).
The simulated display emulates the physical display at the pixel data level.
That is, the raw pixel data is transferred from the functional simulator via
sockets to the C# Windows application. The C# Windows application also
emulates the button inputs and the RGB LED output that is part of the
physical display board. The mouse or keyboard can be used to “press the
buttons.” However, only the keyboard can be used to press multiple buttons
simultaneously (e.g., the B+Y key combination).
c:\work\epc\projects\GM6000\Ajax\simulator\windows\vc12>go.bat
Launching display simulator...
AJAX: appvariant_initialize0()
>> 00 00:00:00.133 (METRICS) METRICS:POWER_ON. Boot count = 9
>> 00 00:00:00.519 (ALERT) ALERT:FAILED_SAFE. Cleared
>> 00 00:00:02.184 (INFO) User Record size: 14 (30)
>> 00 00:00:02.184 (INFO) Metrics Record size: 39 (55)
>> 00 00:00:02.186 (INFO) Personality Record size: 193 (209)
c:\work\epc\projects\GM6000\Ajax\simulator\windows\vc12>_win32\
ajax-sim.exe -h
Ajax Simulation.
Usage:
ajax-sim [options]
Options:
-s HOST Hostname for the Display Simulation.
[Default: 127.0.0.1]
-p PORT The Display Simulation's Port number
[Default: 5010]
-e Generate a POST failure on start-up
-t ADCBITS Sets the initial mocked ADCBits value
(only valid when the House Simulator is
NOT compiled) [Default: 2048]
Hardware Target
The Ajax application can execute on the STM32 and Adafruit evaluation
boards without any extra hardware such as the display board, EEPROM,
temperature sensor, etc. However, depending on what the missing hardware
is, the functionality of the application will be restricted. Figure B-2 is a block
diagram of the hardware for the GM6000’s pancake control board.
The MCU pinout for the pancake board can be found at the following
Wiki pages in the epc repository:
I recommend that you build and run the debug version of the Ajax
application during development (e.g., nqbp.py -g). The reason
is that the debug version of the application contains the prov
console command and does not require the developer to log in to
the console.
Console Commands
This section covers the basic debug console commands. The commands
and behavior are the same whether running on the target or the functional
simulator. A full list of console commands supported at runtime can
always be obtained by using the following help commands:
• help
• help *
• help <cmd>
$> help
bye [app [<exitcode>]]
dm ls [<filter>]
dm write {<mp-json>}
dm read <mpname>
dm touch <mpname>
help [* | <cmd>]
log [*|<max>]
log <nth> (*|<max>)
log clear
nv ERASE
dm read heatingMode
dm read fanMode
dm read heatSetpoint
Example:
"seqnum": 2,
"locked": false,
"val": 7000
}
$>
The following commands are used to configure the heating mode, fan
mode, and heating setpoint. In the following examples, the heating mode
is turned on, the fan mode is set to high, and the heating setpoint is set to
72.5° F.
dm write {name:"heatingMode",val:true}
dm write {name:"fanMode",val:"eHIGH"}
dm write {name:"heatSetpoint",val:7250}
Example output:
{
"name": "fanMode",
"valid": true,
"type": "Ajax::Dm::MpFanMode",
"seqnum": 7,
"locked": false,
"val": "eHIGH"
}
$> dm read heatSetpoint
{
"name": "heatSetpoint",
"valid": true,
"type": "Cpl::Dm::Mp::Int32",
"seqnum": 7,
"locked": false,
"val": 7250
}
$>
Temperature
The dm console command is used to monitor the indoor temperature.
The command can also be used to override or force a specific indoor
temperature. The following command is used to display the current indoor
temperature:
dm read onBoardIdt
Example output:
"type": "Cpl::Dm::Mp::Int32",
"seqnum": 1087,
"locked": false,
"val": 7579
}
$>
dm write {name:"onBoardIdt",val:7233,locked:true}
dm write {name:"onBoardIdt",locked:false}
Example output:
"type": "Cpl::Dm::Mp::Int32",
"seqnum": 1377,
"locked": true,
"val": 7233
}
$> # Temperature as reported by the physical sensor
$> dm write {name:"onBoardIdt",locked:false}
$> dm read onBoardIdt
{
"name": "onBoardIdt",
"valid": true,
"type": "Cpl::Dm::Mp::Int32",
"seqnum": 1381,
"locked": false,
"val": 7587
}
$>
dm read cmdHeaterPWM
dm read cmdFanPWM
Example output:
$> help ui
ui <event>
Generates UI events. Supported events are:
btn-a -->AJAX_UI_EVENT_BUTTON_A
btn-b -->AJAX_UI_EVENT_BUTTON_B
btn-x -->AJAX_UI_EVENT_BUTTON_X
btn-y -->AJAX_UI_EVENT_BUTTON_Y
btn-esc -->AJAX_UI_EVENT_BUTTON_ESC
nop -->AJAX_UI_EVENT_NO_EVENT
{
"name": "heatingMode",
"valid": true,
"type": "Cpl::Dm::Mp::Bool",
"seqnum": 9,
"locked": false,
"val": true
}
$> # Toggle the heating mode by emulating pressing the A button
$> ui btn-a
Generated event: btn-a
>> 00 00:52:01.876 (Ajax::Ui::Home) heat mode changed
>> 00 00:52:01.878 (INFO) INFO:HEATING_ALGO. Heating mode
unexpectedly disabled, generating Fsm_evDisabled event.
$> dm read heatingMode
{
"name": "heatingMode",
"valid": true,
"type": "Cpl::Dm::Mp::Bool",
"seqnum": 10,
"locked": false,
"val": false
}
$>
General-Purpose Commands
There are several non-application-specific console commands that are
very helpful. The threads command displays a list of all threads created by
the application. When running on a target with FreeRTOS, the output also
contains the stack usage per thread. This allows the developer to evaluate
if the stack size for a given thread is sufficient.
$> threads
Name R ID Native Hdl Pri State Stack
---- - -- ---------- --- ----- -----
main Y 0x20016d90 0x20016d90 3 Blckd 684
UI Y 0x200195b8 0x200195b8 4 Ready 543
APP Y 0x20018480 0x20018480 3 Ready 630
NVRAM Y 0x2001a6e8 0x2001a6e8 1 Ready 795
TShell Y 0x2001b9a8 0x2001b9a8 4 Run 585
The trace command is used to control the CPL library’s printf tracing
at runtime. The command is used to
$> trace
$> log *
[10] ( 3121877) INFO:HEATING_ALGO. Heating mode unexpectedly
disabled, generating Fsm_evDisabled event.
[f] ( 3093309) INFO:HEATING_ALGO. Heating mode unexpectedly
disabled, generating Fsm_evDisabled event.
[e] ( 657164) INFO:HEATING_ALGO. Heating mode unexpectedly
disabled, generating Fsm_evDisabled event.
[d] ( 513590) INFO:HEATING_ALGO. Heating mode unexpectedly
disabled, generating Fsm_evDisabled event.
[c] ( 566) ALERT:FAILED_SAFE. Cleared
[b] ( 436) METRICS:POWER_ON. Boot count = 6
[a] ( 963162) METRICS:SHUTDOWN. Boot count = 5
[9] ( 483583) INFO:HEATING_ALGO. Heating mode unexpectedly
disabled, generating Fsm_evDisabled event.
[8] ( 566) ALERT:FAILED_SAFE. Cleared
[7] ( 436) METRICS:POWER_ON. Boot count = 5
[6] ( 15919) METRICS:SHUTDOWN. Boot count = 4
[5] ( 566) ALERT:FAILED_SAFE. Cleared
[4] ( 436) METRICS:POWER_ON. Boot count = 4
[3] ( 434) METRICS:POWER_ON. Boot count = 3
[2] ( 434) METRICS:POWER_ON. Boot count = 2
[1] ( 610) METRICS:POWER_ON. Boot count = 1
16 log records
$>
Eros
The Eros application has a live debug console. Unlike the Ajax application,
the console is not password protected. The Eros application is intended to
be used during the manufacturing process to provision each unit as well as
to perform “design verification testing” of the heating system.
Provisioning
The provisioning step is used to store the following data in nonvolatile
storage:
• Model number
• Serial number
c:\work\epc\projects\GM6000\Eros\simulator\windows\vc12>_win32\
eros-sim.exe
Functional Simulator
The functional simulator can run with or without a simulated display.
The simulated display is provided by a C# Windows application, which is
shown in Figure B-3. The simulated display emulates the physical display
at the pixel data level. That is, the raw pixel data is transferred from the
functional simulator via sockets to the C# Windows application. The C#
Windows application also emulates the button inputs and the RGB LED
output that is part of the physical display board. The mouse or keyboard
can be used to “press the buttons.” However, only the keyboard can be
used to press multiple buttons simultaneously (e.g., the B+Y key
combination).
projects\pico-display-simulator\simulator\bin\Release
projects\pico-display-simulator
c:\work\epc\projects\GM6000\Eros\simulator\windows\vc12>go.bat
Launching display simulator...
Hardware Target
The Eros application can execute on the STM32 and Adafruit evaluation
boards without any extra hardware such as the display board, EEPROM,
temperature sensor, etc. However, depending on what the missing
hardware is, the functionality of the application will be restricted.
Figure B-1 is a block diagram of the hardware for the GM6000’s pancake
control board.
Screen Test
The Eros application contains an LCD screen test to check for stuck and
dead pixels. The screen test cycles through a series of screens with a single
solid color (red, green, blue, black, and white). The LCD test is started by
pressing the B and Y buttons simultaneously and then pressing any button to
cycle through the different color screens.
Console Commands
This section covers the basic debug console commands used with the
Eros application. The Eros application contains most of the console
commands that the Ajax application supports, plus additional commands
that directly exercise the electrical and physical components of the system.
The commands and behavior are the same whether running on the target
or the functional simulator. A full list of console commands (supported at
runtime) can always be obtained by using the help command, for example,
help, help *, or help <cmd>.
Example output:
$$> help
bye [app [<exitcode>]]
dm ls [<filter>]
dm write {<mp-json>}
dm read <mpname>
dm touch <mpname>
help [* | <cmd>]
hws
log [*|<max>]
log <nth> (*|<max>)
log clear
mapp
Example output:
"locked": false,
"val": 32767
}
$$> rgb 0 255 0
RGB Color set to: 0 255 0
$$> rgb 200
Brightness set to: 200
$$> hws
HW Safety Limited Tripped: deasserted
$$>
Micro Applications
The Eros application contains a collection of micro applications. After a
micro application is started, it runs in the background until stopped;
other console commands can still be issued while it is running. The
console command mapp is used to
run and stop the micro applications. More than one micro application can
be running at a time. The syntax for the mapp command is as follows:
$$> mapp ls
AVAILABLE MApps:
cycle
Duty Cycles the Heating equipment.
thermistor
Periodically Samples temperature and displays sample/
metric values.
args: [<displayms>]
<displayms> milliseconds between outputs. Default is 10000ms
$$>
EEPROM Testing
The nv console command is used to test and erase the nonvolatile storage
in the off-board EEPROM. Here is the syntax for the nv command:
$$> help nv
nv ERASE
nv read <startOffSet> <len>
nv write <startOffset> <bytes...>
nv test (aa|55|blank)
Exercises and tests the NV media. The supported tests are:
'aa' - Writes (and verifies) 0xAA to all of the media
'55' - Writes (and verifies) 0x55 to all of the media
'blank' - Verify if the media is 'erased'
Common Commands
The following console commands are also available in the Ajax
application. See the “Ajax Application” section for example usage of the
commands.
• bye—Exits or restarts the application
• ui—Triggers UI events
APPENDIX C
Introduction to
the Data Model
Architecture
The Data Model architecture is used to design highly decoupled code. It
allows for the exchange of data between modules where neither module
has dependencies on the other. For example, Figure C-1 illustrates a
strongly coupled design.
However, Figure C-2 illustrates how, by using the Data Model pattern,
the design can be decoupled.
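In code terms, the idea can be sketched as follows (a deliberately simplified, self-contained example, not the CPL Cpl::Dm API): the two modules share only a model point, the writer updates the point, and the reader receives a change notification without either module including the other's header.

#include <cstdint>
#include <cstdio>
#include <functional>
#include <vector>

// A toy "model point": holds a value and notifies subscribers when it changes
class ModelPointInt32
{
public:
    using Observer = std::function<void(int32_t)>;

    void subscribe( Observer callback ) { m_observers.push_back( std::move(callback) ); }

    void write( int32_t newValue )
    {
        if ( newValue != m_value )
        {
            m_value = newValue;
            for ( auto& cb : m_observers ) { cb( m_value ); }   // change notification
        }
    }

    int32_t read() const { return m_value; }

private:
    int32_t               m_value = 0;
    std::vector<Observer> m_observers;
};

int main()
{
    ModelPointInt32 heatSetpoint;           // the shared data instance

    // "Module B" reacts to changes without knowing who writes the point
    heatSetpoint.subscribe( []( int32_t v ){ std::printf( "setpoint changed: %d\n", v ); } );

    // "Module A" writes the point without knowing who is observing it
    heatSetpoint.write( 7250 );             // hundredths of a degree, as in the GM6000 examples
    return 0;
}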
Figure C-3. Extending your design using the Data Model pattern
More Details
Additional resources for the Data Model architecture:
https://patternsinthemachine.net/2022/12/data-mode-change-notifications/
APPENDIX D
LHeader and
LConfig Patterns
The LHeader and LConfig patterns are two C/C++ patterns that leverage
the preprocessor for compile-time binding of interface definitions. The
patterns allow creating different flavors of your project based on compile-
time settings. While it does add some additional complexity to the
structure of your header files and build scripts, it also provides a reliable
way to cleanly build multiple variants of your binary.
This appendix describes how to implement and use LHeader and
LConfig patterns. A detailed discussion of why you would use these
patterns is outside the scope of the book. For an in-depth discussion of
the “why” and the importance of late binding times, I refer you to the
companion book: Patterns in the Machine: A Software Engineering Guide to
Embedded Development.3
LHeader
With the Late Header, or LHeader, pattern, you defer which header files
are actually included until compile time. In this way, the name bindings
don’t occur until compile time. This decoupling makes the module more
3 John Taylor and Wayne Taylor. Patterns in the Machine: A Software Engineering Guide to Embedded Development. Apress Publishers, 2021
-I\_workspaces\zoe\epc\src
-I\_workspaces\zoe\epc\projects\GM6000\Ajax\simulator\windows\vc12

// Platform mapping
#include "Cpl/Text/_mappings/_vc12/strapi.h"
// Cpl::System mappings
#include "Cpl/System/Win32/mappings_.h"

/* Prototype:
   int strncasecmp(const char *s1, const char *s2, size_t n);
*/
#define strncasecmp strncasecmp_MAP

#define strcasecmp_MAP _stricmp
Implementation Example
The CPL C++ library implements the LHeader pattern for deferring the
concrete Mutex type. It relies on the Patterns in the Machine (PIM) best
practices for file organization and #include usage.
#ifndef Cpl_System_Mutex_h_
#define Cpl_System_Mutex_h_
#include "colony_map.h"
...
#endif // end Header latch
#include "colony_map.h"
Project’s compiler search path:
...
-Iprojects/GM6000/Ajax/simulator/windows/vc12
/// Defer the definion of the raw mute type
#define Cpl_System_Mutex_T Cpl_System_Mutex_T_MAP
...
class Mutex
{ Simulator – Windows
public:
/** Constructor */ projects/GM6000/Ajax/simulator/windows/vc12/colony_map.h
Mutex();
… …
protected: // Cpl::System mappings
/// Raw Mutex handle/instance/pointer #include "Cpl/System/Win32/mappings_.h"
Cpl_System_Mutex_T m_mutex; ...
...
};
Concrete Mutex type
src/Cpl/System/Win32/mappings_.h
STM32F4n Project
projects/GM6000/Ajax/alpha1/windows/gcc-arm/colony_map.h
…
// FreeRTOS mapping
#include "Cpl/System/FreeRTOS/mappings_.h"
...
Simulator - Linux
projects/GM6000/Ajax/simulator/linux/gcc/colony_map.h Concrete Mutex type
… src/Cpl/System/FreeRTOS/mappings_.h
// POSIX mappings
#include "Cpl/System/Linux/mappings_.h" #include "FreeRTOS.h"
... …
#define Cpl_System_Mutex_T_MAP SemaphoreHandle_t
...
#include <pthread.h>
…
#define Cpl_System_Mutex_T_MAP pthread_mutex_t
...
Caveat Implementor
The implementation described previously works well most of the time.
Where it breaks down is when interface A defers a type definition using
the LHeader pattern and interface B also defers a type definition using the
LHeader pattern and interface B has a dependency on interface A. This use
case results in a “cyclic header #include” scenario, and the compile will
fail. If you are using C++, this use case typically does not occur. However,
with C code, you will run into it, although it is not a frequent occurrence.
When this use case is encountered in either C++ or C, the following
constraints are imposed:
• The project-specific reserved header file (e.g., colony_
map.h) shall only contain #include statements to other
header files. Furthermore, this reserved header file
(e.g., colony_map.h) shall not have a header latch (i.e.,
no #ifndef MY_HEADER_FILE construct at the top or
bottom of the header file).
• The header files—which are included by the project-
specific colony_map.h header file and which resolve
the name bindings—shall have an additional symbol
check in their header latch. The additional symbol
check is for the header latch symbol of the module file
that originally declared the name whose binding is
being deferred. For example, the extra symbol check for
the src/Cpl/System/Win32/mappings.h file would be
...
#endif // end header latch
#endif // end interface latch
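A sketch of the complete construct (the latch symbol names here are illustrative assumptions, not the file's exact symbols) looks like this:

/* Extra check: only expand the mappings if the deferring interface
   (e.g., Mutex.h) has actually been included */
#ifdef Cpl_System_Mutex_h_
#ifndef Cpl_System_Win32_mappings_h_      /* the file's normal header latch */
#define Cpl_System_Win32_mappings_h_

/* ... the Win32-specific name bindings go here ... */

#endif  /* end header latch */
#endif  /* end interface latch */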
LConfig
The Late Config, or LConfig, pattern is a specialized case of the LHeader
pattern that is used exclusively for configuration. The LConfig pattern
provides for project-specific header files that contain preprocessor
directives or symbol definitions that customize the default behavior of the
source code.
The LConfig pattern uses a globally unique header file name, and
it relies on a per-project unique header search path to select a project-
specific configuration. That is, it uses the same basic mechanisms as the
LHeader pattern; however, LConfig is not used for resolving deferred
function and type name bindings. Rather, LConfig is used to define, or
configure, magic constants and preprocessor directives for conditionally
compiled code. The CPL C++ library uses the reserved file name
colony_config.h for the LConfig pattern. The following are examples of the
LConfig pattern:
#ifndef COLONY_CONFIG_H
#define COLONY_CONFIG_H
...
#endif
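Filled in, a project-specific colony_config.h might look like the following minimal sketch; the option names are hypothetical placeholders, not symbols from the CPL library.

#ifndef COLONY_CONFIG_H
#define COLONY_CONFIG_H

// Enable conditionally compiled diagnostic/trace code for this project
#define USE_MY_MODULE_TRACE

// Override a module's default "magic constant" for this build variant
#define OPTION_MY_MODULE_MAX_BUFFER_SIZE    128

#endif  // end COLONY_CONFIG_H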
APPENDIX E
CPL C++ Framework
4 https://github.com/johnttaylor/colony.core
5 https://github.com/johnttaylor/epc/
6 https://github.com/johnttaylor/Outcast
The CPL C++ framework assumes UTF-8 byte encoding for strings.
That is, it has not been internationalized for double-byte encodings
per character.
7 Doxygen is a documentation generator and static analysis tool for software source trees. The Doxygen output for the CPL framework is viewable online at https://johnttaylor.github.io/epc/namespaces.html
About src/Cpl
Figure E-1 shows the directory structure of src/Cpl, and the following
sections describe these directories and their associated namespaces for
the CPL framework in the colony.core GitHub repository.
Checksum
The Cpl::Checksum namespace provides classes for various types of
checksum, CRC, hashes, etc. The Checksum namespace does not contain
an exhaustive collection of algorithms because these are only added in a
just-in-time fashion. That is, new algorithms are only added when there is
an explicit need for a new checksum or CRC algorithm.
Container
The Cpl::Container namespace provides various types of containers.
What makes the CPL different from the C++ STL library or traditional
containers is that the CPL containers use an intrusive listing mechanism.
This means that every item that is put into a container has all the memory
and fields that are necessary to be in the container. No memory is allocated
when an item is inserted into a container, and a container can hold an
unbounded number of items (RAM permitting). There are two
major side effects of intrusive containers:
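As a rough, self-contained illustration of the intrusive approach (the type names below are not the CPL container API), the link field lives inside the item itself, so inserting an item allocates nothing:

#include <cassert>
#include <cstddef>

// The item carries its own link field; this is the "intrusive" part
struct MyItem
{
    MyItem* next    = nullptr;  // memory for container membership lives in the item
    int     payload = 0;
};

// A minimal singly linked list that never allocates
class IntrusiveList
{
public:
    void putFirst( MyItem& item ) { item.next = m_head; m_head = &item; }

    MyItem* getFirst()
    {
        MyItem* item = m_head;
        if ( item ) { m_head = item->next; item->next = nullptr; }
        return item;
    }

private:
    MyItem* m_head = nullptr;
};

int main()
{
    MyItem a, b;                // statically/stack-allocated items
    IntrusiveList list;
    list.putFirst( a );
    list.putFirst( b );
    assert( list.getFirst() == &b );
    assert( list.getFirst() == &a );
    assert( list.getFirst() == nullptr );
    return 0;
}

One well-known consequence of this approach is that a given item can only be in one container at a time, since its link fields are part of the item.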
Data Model
The Cpl::Dm namespace provides a framework for the Data Model pattern.
The Data Model software architecture pattern defines how modules
interact with each other through data instances, or model points, with no
direct dependencies between modules. See Appendix C, “Introduction
to the Data Model Architecture,” for an introduction to the Data Model
pattern.
• Cpl::Dm::Mp::Int32|64
• Cpl::Dm::Mp::Uint32|64
• Cpl::Dm::Mp::Float|Double
• Cpl::Dm::Mp::ElapsedPrecisionTime
• Cpl::Dm::Mp::RefCounter
• Cpl::Dm::Mp::Void
• Cpl::Dm::Mp::BitArray16
• Cpl::Dm::Mp::ArrayUint8|32|64
• Cpl::Dm::Mp::ArrayInt8|32|64
Persistent Storage
The Data Model namespace provides a mechanism for persistently
storing model points (see Cpl::Dm::Persistent). One or more model
points are grouped into a record, which is the atomic unit for reading
and writing to nonvolatile storage. Persistently storing records relies on
the Cpl::Persistent namespace. The Cpl::Persistent namespace
implements a storage paradigm where data is only read from nonvolatile
storage on startup and then written to nonvolatile storage at runtime.
TShell
The Data Model namespace provides a Cpl::TShell console command
for interactively accessing model point instances at runtime (see
Cpl::Dm::TShell). The dm command supports reading, writing,
invalidating, and locking model points at runtime. The following snippet
illustrates how to write and read a model point:
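For example, a representative exchange (borrowed from the Ajax console sessions shown in Appendix B, “Running the Example Code”):

$> dm write {name:"heatSetpoint",val:7250}
$> dm read heatSetpoint
{
  "name": "heatSetpoint",
  "valid": true,
  "type": "Cpl::Dm::Mp::Int32",
  "seqnum": 7,
  "locked": false,
  "val": 7250
}
$>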
8 BETTER ENUM is a single, lightweight header file that makes your compiler generate reflective enum types, that is, enums that can be serialized to and from text symbols.
Io
The Cpl::Io namespace provides the base and common interfaces
for reading and writing data from and to streams and files. Essentially
the Cpl::Io namespaces provide platform-independent interfaces for
operations that would typically be done using POSIX file descriptors or
Windows file handles.
Serial Ports
The Cpl::Io::Serial namespace contains serial port drivers that
conform to the Cpl::Io::InputOutput interface. Typically, the stream
implementations have blocking/waiting semantics for reading and writing
data.
While these serial port drivers are specific to serial port hardware, they
are not specific to a particular board. For example, the
Cpl::Io::Serial::ST::M32F4::InputOutput class is usable (without modifications) on any
STM32 microcontroller that has a UART peripheral that is compatible with
the UART peripheral on an STM32F4 microcontroller.
Sockets
The Cpl::Io::Socket namespace contains support for creating and
using target-operating-system-independent BSD socket connections.
The Socket namespace provides a simple TCP listener using a dedicated
listener thread as well as a separate simple TCP connector.
Files
The Cpl::Io::File namespace contains basic support for creating,
opening, reading, and writing files as well as file operations such as
deleting and moving files, creating and deleting directories, traversing
a directory’s contents, etc. The File namespace is operating system
independent and provides a standardized directory separator across
platforms.
Itc
The Cpl::Itc namespace provides interfaces and implementation for
message-based inter-thread communication (ITC). The ITC interfaces are
built on top of the OSAL so they are platform independent. The message
passing design has the following characteristics:
• The model is a client-server model, where clients send
messages to servers.
• Servers are required to execute in an event loop–
based thread.
• Clients sending messages asynchronously are
required to execute in an event loop–based thread.
• Clients sending messages synchronously can
execute in any type of thread, but they must execute
in a different thread than the server receiving the
message.
Json
The Cpl::Json namespace encapsulates the ArduinoJson9 C++ JSON
library. The ArduinoJson library was chosen because of the following:
9
https://github.com/bblanchon/ArduinoJson
Logging
The Cpl::Logging namespace provides a framework for logging events.
While the bulk of code necessary for logging is part of the framework, the
application is required to provide the definition of the logging categories
and message identifiers and required to manage the persistent storage for
the log messages. The framework provides the following features:
• Timestamp
• Category ID
• Message ID
MApp
The Cpl::MApp namespace provides a framework for asynchronously
running micro applications. A micro application can be anything. The
original use case for the MApp framework was to support being able to
selectively run, stop, pause, and resume a set of tests used for board
checkout testing, end-of-line manufacturing testing, emissions testing,
design validation testing, etc., from the TShell console.
A rough analog for a micro application would be launching a console
command to execute in the background.
Math
The Cpl::Math namespace provides classes, utilities, etc., related to
numeric operations. Most notably, it provides almostEqual methods for
comparing float and double values.10
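As a refresher on the underlying technique (this is a generic illustration, not the actual Cpl::Math implementation, whose method signatures may differ), an "almost equal" comparison typically combines an absolute tolerance for values near zero with a relative tolerance for larger magnitudes:

#include <cmath>
#include <algorithm>

// Generic floating-point comparison: equal if within an absolute tolerance
// (for values near zero) or within a relative tolerance of the larger magnitude.
static bool almostEquals( double a, double b,
                          double absTolerance = 1e-12,
                          double relTolerance = 1e-9 )
{
    double diff = std::fabs( a - b );
    if ( diff <= absTolerance )
    {
        return true;
    }
    double largest = std::max( std::fabs( a ), std::fabs( b ) );
    return diff <= ( largest * relTolerance );
}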
Memory
The Cpl::Memory namespace provides a collection of interfaces that
allow an application to manually manage dynamic memory independent
of the actual heap. The memory allocated using the Memory namespace
requires the application to use the C++ “placement new” operator when
constructing C++ objects. For example, the Cpl::Memory::SPool template
class statically allocates memory for N instances of an object of type M.
The application calls the SPool class’s allocate() and release() methods
to request and free memory for dynamically creating objects of type M at
runtime.
https://randomascii.wordpress.com/2012/02/25/comparing-floating-
point-numbers-2012-edition/
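The snippet below illustrates the allocate()/release() plus placement-new pattern just described. The ToyPool class is purely illustrative; it is not Cpl::Memory::SPool, whose template parameters and method signatures may differ:

#include <new>       // placement new

// Toy fixed pool used only to illustrate the pattern; NOT Cpl::Memory::SPool
template <class M, int N>
class ToyPool
{
public:
    void* allocate()
    {
        for ( int i = 0; i < N; i++ )
        {
            if ( !m_inUse[i] ) { m_inUse[i] = true; return m_blocks[i]; }
        }
        return nullptr;                                  // pool exhausted
    }

    void release( void* mem )
    {
        for ( int i = 0; i < N; i++ )
        {
            if ( mem == m_blocks[i] ) { m_inUse[i] = false; }
        }
    }

protected:
    alignas(M) unsigned char m_blocks[N][sizeof(M)];     // statically allocated storage
    bool                     m_inUse[N] = {};
};

struct Point { int x; int y; };

void poolExample()
{
    static ToyPool<Point, 4> pool;
    void* mem = pool.allocate();
    if ( mem )
    {
        Point* p = new ( mem ) Point{ 1, 2 };            // construct in place ("placement new")
        p->~Point();                                     // explicit destructor call
        pool.release( p );                               // return the memory to the pool
    }
}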
Persistent
The Cpl::Persistent namespace provides a basic persistent storage
mechanism for nonvolatile data. The persistent subsystem has the
following features:
• The subsystem organizes persistent data into records.
The application is responsible for defining the data
content of a record, and it is the responsibility of the
concrete record instances to initiate updates to the
persistent media. A record is the unit of atomic read/
write operations to persistent storage.
• On startup, the records are read, and the concrete
record instances process the incoming data.
• All persistently stored data is CRC’d to detect data
corruption. CRCs are only validated on startup.
• When the stored data has been detected as corrupt (i.e.,
a bad CRC), record instances are responsible for setting
their data to defaults and subsequently initiating an
update to the persistent media.
• The subsystem is independent of the physical
persistent storage media.
• The record server can process an unlimited number
of records. It is also okay to have more than one record
server instance.
• The record server is a Runnable object and a Data
Model mailbox. This means it executes in its own
thread. All read/write operations to the persistent
media are performed in this thread. It is assumed
that the business logic for individual records is also
performed in this thread and that each record instance
is thread safe with respect to the rest of the system.
System
The Cpl::System namespace provides the Operating System Abstraction
Layer (OSAL) as well as basic system services. The CPL library provides an
implementation of the OSAL for the following platforms:
• FreeRTOS
Assert
The src/Cpl/System/assert.h file provides a replacement for the
standard assert() macro where the fatal error handling of a non-true
assert() call is routed through the Cpl::System::FatalError interface.
Elapsed Time
The Cpl::System::ElapsedTime interface provides the elapsed time from
power-up or reset of the target platform. The interface provides time in
three formats:
• Milliseconds (as an unsigned long)
Event Flags
The Cpl::System::EventFlag interface provides an inter-thread
communication mechanism that is a collection of 32 Boolean event flags.
An event loop–based thread can wait on zero or more event flags to be
set. Event flags are set from other threads. Event flags have the following
characteristics:
• An individual event flag can be viewed as a binary
semaphore with respect to being signaled or waiting
(though waiting is done on a thread’s entire set of event
flags). The event loop provides an optional mechanism
to call back into the application when the event loop is
unblocked and is processing the set of event flags.
• An event loop–based thread waits for at least one event
flag to be signaled. When the thread is waiting on event
flags, and at least one flag is signaled, then all of the
events that were in the signaled state when the thread
was unblocked are processed and cleared.
• Each thread supports up to 32 unique event flags.
Event flags are not unique across threads. That is,
the semantics associated with EventFlag1 for Thread
A are independent of any semantics associated with
EventFlag1 for Thread B.
Event Loop
The Cpl::System::EventLoop class is a Cpl::System::Runnable object
that forms the foundation of an event-driven application thread. Event
loops are essentially threads with a super-loop that has blocking/
waiting semantics on the following entities:
• Event flags—Setting an event flag unblocks an
event loop.
• Software timers—Event loops periodically unblock
and check the thread’s active software timer list for
expired timers.
• Semaphore—Each event loop has a semaphore that
can be signaled to unblock a waiting event loop.
Global Lock
The Cpl::System::GlobalLock interface provides lightweight mutual
exclusion operations. The interface is intended to be an abstraction
for disabling/enabling global interrupts when the target platform is
running an RTOS. The GlobalLock has the following constraints:
• Nonrecursive semantics—The calling thread cannot
attempt to acquire the lock a second time after it has
already acquired the lock.
• The code that is protected by this lock must be very
short in terms of execution time and must not call any
operating system methods (e.g., any Cpl::System
methods).
Mutexes
The Cpl::System::Mutex class provides a platform-independent recursive
mutex. The CPL library assumes that support for Priority Inheritance11 in
relation to mutexes is provided by the target’s underlying operating system.
Periodic Scheduler
The Cpl::System::PeriodicScheduler interface is a “policy object” that
is used to provide cooperative monotonic scheduling to a Runnable object
or EventLoop. The interface can also be used in an application’s defined
super-loop. The Cpl::Dm::PeriodicScheduler class is an example of
adding periodic scheduling to an event loop.
Scheduling is polled and cooperative, and it is the application’s
responsibility to not overrun or over-allocate the processing done during
each interval. The periodic scheduler, then, makes its best attempt at being
monotonic and deterministic, but the timing cannot be guaranteed. That
said, the scheduler will detect and report when the interval timing slips.
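To make the cooperative, polled model concrete, here is a generic sketch of an interval table checked from a super-loop. This only illustrates the idea; it is not the Cpl::System::PeriodicScheduler interface:

#include <cstdint>

// One entry per periodic activity
struct Interval
{
    uint32_t periodMs;                   // how often the function should run
    uint32_t lastRunMs;                  // timestamp of the last execution
    void   (*func)( uint32_t nowMs );    // the work to perform
};

// Called on every pass of the super-loop (or event loop) with the current time
void pollIntervals( Interval* table, int numIntervals, uint32_t nowMs )
{
    for ( int i = 0; i < numIntervals; i++ )
    {
        if ( (nowMs - table[i].lastRunMs) >= table[i].periodMs )
        {
            table[i].lastRunMs = nowMs;   // simplistic: drift/slip handling omitted
            table[i].func( nowMs );
        }
    }
}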
Semaphores
The Cpl::System::Semaphore class provides a platform-independent
counting semaphore. The semaphore interface supports wait() and
timedWait() methods. Semaphores can be signaled from other threads
and from interrupt service routines.
11
“In real-time computing, priority inheritance is a method for eliminating
unbounded priority inversion. Using this programming method, a process
scheduling algorithm increases the priority of a process (A) to the maximum
priority of any other process waiting for any resource on which A has a resource
lock (if it is higher than the original priority of A).” Priority Inheritance, Wikipedia.
Shell
The Cpl::System::Shell interface provides a platform-independent
mechanism to execute native OS system commands. Support for this
interface is optional. For example, there is no operating system shell
available when running FreeRTOS on an MCU.
Sleep
The Cpl::System::Api::sleep() interface provides a blocking wait with
one millisecond resolution.
Shutdown
The Cpl::System::Shutdown interface provides methods for gracefully
shutting down an application. The interface provides a mechanism for
registering callback functions that are executed when the application is
requested to shut down. The application is responsible for registering a
shutdown callback method that executes the graceful shutdown process.
Simulated Time
The CPL library has a concept of simulated time where the timing sources
for all of the OSAL methods (i.e., the Cpl::System interfaces) are driven
by a simulated tick. For the most part, enabling and using simulated time
is transparent to the application code. The one exception is that for an
application using simulated time, there must be at least one thread that
executes in real time that generates the actual simulation ticks, which is
done by using the Cpl::System::SimTick class.
Software Timers
The Cpl::System::Timer interface provides software-based timers.
Software timers execute a callback function when they expire. The timer-
expired callback function executes in the same thread where the timer was
started. Software timers can only be used with event-based threads, where
the thread’s Runnable object is an EventLoop.
The simplest usage pattern for an application to add a timer to the
class is for the class to inherit from the Cpl::System::Timer class and
then provide an implementation for the timer-expired method (i.e., void
expired() noexcept). If a single class requires multiple timers, then the
Cpl::System::TimerComposer object can be used. The TimerComposer
takes a member function pointer in its constructor, and the member
function pointer is the timer instance’s timer-expired callback function.
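Here is a minimal sketch of the inherit-from-Timer pattern just described. Only the void expired() noexcept callback comes from the text above; the header path, the timing-source constructor argument, and the start() method are assumptions, so consult Cpl/System/Timer.h for the actual interface:

#include "Cpl/System/Timer.h"        // header path assumed

class Blinker : public Cpl::System::Timer
{
public:
    Blinker( Cpl::System::TimerManager& timingSource )   // constructor argument assumed
        : Cpl::System::Timer( timingSource )
    {
    }

    void begin() { start( 500 ); }       // assumed API: expire in 500 ms

protected:
    void expired() noexcept override     // timer-expired callback (from the text above)
    {
        toggleLed();                     // hypothetical application work
        start( 500 );                    // restart the timer to get periodic behavior
    }

    void toggleLed() {}                  // placeholder for real application code
};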
Tracing
The src/Cpl/System/Trace.h file provides an interface for printf-like
debugging. The tracing interface consists of a collection of preprocessor
macros. The macros can be conditionally “compiled out” for release
builds of an application. By default, the trace output is routed to stdout.
However, the application can optionally route the trace messages to any
output stream (i.e., to any instance of Cpl::Io::Output). The trace engine
has the following features:
• Trace messages can be globally enabled or disabled at
run time.
• The application can configure different levels of
verbosity for the messages.
• All trace messages have an associated Section text label.
There is no limit to the number of section labels.
• Trace messages can be filtered at run time by
section label. There is a limit, which is compile time
configurable, to the number of concurrent section
filters.
• Trace messages can be filtered at run time by thread
name. Up to four different thread filters can be applied.
• The trace output can be redirected (and reverted) at
run time to an alternate output stream.
Text
The Cpl::Text namespace provides yet another string class along with
additional string and text processing utilities. What makes the Cpl::Text::String classes
different from other string classes is that they support a ZERO dynamic
memory allocation interface and implementation. There is also a dynamic
memory implementation of the string interface for when strict memory
management is not required. The String class only supports UTF-8
encodings.
Encoding
The Cpl::Text::Encoding namespace provides interfaces for encoding
binary data as ASCII text (e.g., as a Base64 encoding).
Framing
The Cpl::Text::Frame namespace provides interfaces for encoding and
decoding data within a frame that has a uniquely specified start and end of
frame bytes. This is similar to character-based HDLC framing.
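As an illustration of this style of character framing (a generic sketch, not the Cpl::Text::Frame API), the function below wraps a payload in start-of-frame and end-of-frame bytes and escapes any occurrence of those bytes inside the payload:

#include <cstddef>

static const char SOF_ = 0x02;   // start-of-frame (STX); byte values are arbitrary
static const char EOF_ = 0x03;   // end-of-frame (ETX)
static const char ESC_ = 0x10;   // escape (DLE)

// Encodes 'payload' into 'dst' and returns the encoded length, or 0 if 'dst'
// is too small (worst case is 2 * payloadLen + 2 bytes).
size_t encodeFrame( const char* payload, size_t payloadLen, char* dst, size_t dstLen )
{
    if ( dstLen < 2 )
    {
        return 0;
    }
    size_t out = 0;
    dst[out++] = SOF_;
    for ( size_t i = 0; i < payloadLen; i++ )
    {
        char c = payload[i];
        if ( c == SOF_ || c == EOF_ || c == ESC_ )
        {
            if ( out + 2 > dstLen ) { return 0; }
            dst[out++] = ESC_;                    // escape special bytes inside the payload
        }
        if ( out + 2 > dstLen ) { return 0; }     // reserve room for the closing EOF byte
        dst[out++] = c;
    }
    dst[out++] = EOF_;
    return out;
}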
String Class
The Cpl::Text::String is an abstract class that defines the operations for
UTF-8, null-terminated string. There are several concrete implementations
of the string class:
• Cpl::Text::FString<N>—A template class that
statically allocates memory for a string with a
maximum length of N bytes not including the null
terminator. Attempts to write more than N bytes will
silently truncate the write to N bytes. However, the
class maintains an internal “truncated flag” that can be
queried by the application after write operations (see
the sketch following this list).
• Cpl::Text::DString—A concrete class that uses
dynamic memory from the heap to allocate memory
for the string contents. The class will automatically
increase the memory size as needed, but it does not
attempt to reduce the amount of currently allocated
string memory.
• Cpl::Text::DFString—A concrete class that uses the
heap to allocate the initial amount of string memory.
After the initial allocation, the class behaves like the
FString class.
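Here is a small sketch of the FString usage pattern described in the first bullet; the truncated() query-method name is an assumption (see Cpl/Text/String.h for the actual name):

#include "Cpl/Text/FString.h"

void fstringExample()
{
    // Storage for at most 16 bytes of string data (plus the null terminator)
    // is allocated statically inside the object; no heap is used.
    Cpl::Text::FString<16> msg( "GM6000" );

    msg += ": heater enabled";          // exceeds 16 bytes -> silently truncated

    if ( msg.truncated() )              // assumed name of the "truncated flag" query
    {
        // handle or log the overflow case
    }
}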
Tokenizer
The Cpl::Text::Tokenizer namespace contains a collection of tokenizers
that parse text strings into individual tokens. The Tokenizer namespace
does not contain an exhaustive collection of algorithms; new tokenizers
are added in a just-in-time fashion, that is, only when there is an explicit
need for a new tokenizer algorithm.
TShell
The Cpl::TShell namespace provides a framework for a text-based shell
that can be used at run time to interact with the application. Think of this
as the debug or command console. The TShell uses the Cpl::Io::Input
and Cpl::Io::Output streams for its command input and text output,
respectively. Because the Cpl::Io stream abstractions are used, the TShell
can be used with the C stdio, serial ports, sockets, etc.
The TShell optionally supports a basic user authentication
mechanism (Cpl::TShell::Security) that requires a user to log in
before being able to execute any TShell commands. The application is
responsible for providing the policies for authenticating users.
The TShell namespace supports blocking and nonblocking read
semantics for reading its inputs. The use of the blocking semantics requires
a dedicated reader thread. When using nonblocking semantics, the TShell
can be used without a dedicated thread; that is, it can be implemented as
part of a super-loop or executed in a shared thread.
The TShell output can share the same output stream as the CPL
Tracing interface. The TShell and trace outputs are atomic with respect
to each other. Individual trace messages do not get intermingled with the
TShell outputs and vice versa.
Commands
The Cpl::TShell::Cmd namespace provides a basic set of TShell
commands. The design of the commands includes a self-registering
mechanism. After adding the TShell framework code to the application,
individual commands are added simply by creating an instance of the
command. The command’s constructor registers itself with the list of
available commands.
The application can optionally include any of the following commands
or create its own:
• Bye—Exits the TShell and the application.
Type
The Cpl::Type namespace provides a collection of typedefs and
helper classes that function as general purpose types, generic callback
mechanisms, etc.
One of the most notable types is the BETTER_ENUM13 contained in the
src/Type/enum.h header file. Better Enum is a single, lightweight header
file that makes your compiler generate reflective enum types. The principal
usage for Better Enums is the ability to serialize the symbolic enum names
to and from text.
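Here is a small example of the reflective-enum usage (standard Better Enums API; the include path is assumed from the text above, and the enum itself is purely illustrative):

#include "Cpl/Type/enum.h"       // pulls in the BETTER_ENUM macro (path assumed)

// Declares an enum that can be converted to and from its symbolic text names
BETTER_ENUM( HeatMode, int, eOFF = 0, eHEATING, eFAN_ONLY );

void enumExample()
{
    HeatMode    mode   = HeatMode::eHEATING;
    const char* symbol = mode._to_string();                      // "eHEATING"
    HeatMode    parsed = HeatMode::_from_string( "eFAN_ONLY" );  // text -> enum
    (void) symbol; (void) parsed;
}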
About src/Bsp
The src/Bsp directory tree contains one concrete Board Support Package
(BSP) per target board. A BSP is a layer of software that allows applications
and operating systems to run on the hardware platform. The contents of
a BSP depend on the hardware platform, the targeted operating system
(if one is being used), and potential third-party packages that may be
available for the hardware (e.g., graphics libraries). The ideal BSP for a
microcontroller encapsulates
• The compiler toolchain, as it relates to the
specific MCU
13 Better ENUM is not native to the CPL Library; it is a third-party package created and maintained by Anton Bachin. The canonical source for Better ENUM is at https://aantron.github.io/better-enums/
About src/Driver
The Cpl::Driver namespace contains a collection of reusable drivers. In
this context, reusable means that the driver has no direct dependencies
on an MCU or a compiler. However, the driver can be specific to hardware.
For example, the Driver::Nv::Onsemi::CAT24C512 driver is specific to
OnSemi’s 64KB I2C EEPROM chip but is independent of whatever MCU or
I2C driver is being used.
The provided drivers are a mixed bag of functionality because drivers
are only added when there is an explicit need for a new one. The core
CPL framework has no dependencies on the Driver namespace. Or, said
another way, there is no requirement to use the framework’s drivers when
using the CPL framework.
Porting
Porting the CPL library to run on a new target platform involves providing
platform-specific implementations of the OSAL interfaces defined in the
Cpl::System namespace as well as additional interfaces under the Cpl::Io
namespace. The CPL OSAL is meant to provide an abstraction for, or a
decoupling of, the underlying operating system. However, the CPL OSAL
does not actually require an underlying operating system. For example, the
CPL library contains a bare-metal port of the OSAL, and there is a port of
the OSAL that runs on Raspberry PI Pico without an RTOS. It supports two
threads, one for each of the PI Pico’s ARM cores. In addition, the OSAL can
be ported to run in non-RTOS environments such as Windows and Linux.
There are 15 different interfaces listed here that make up the
OSAL. This may sound daunting, but it’s straightforward to port the
individual interfaces. The exception to this is the Threading interface,
which is the most involved when it comes to the target implementation.
The recommendation is to clone one of the existing target
implementations and then modify that as needed.
• Assert
• Elapsed Time
• Fatal Error
• Global Lock
• Mutex
• Newline
• Semaphore
• Shell
• Shutdown
• System API
• Threads
• Thread-Local Storage
• Tracing
Decoupling Techniques
For the most part, the decoupling of the OSAL from the target platform is
done using link-time binding and the LHeader pattern.
Basically, each target platform has its own implementation of the
OSAL interfaces. As part of the LHeader pattern, there is a mappings_.h
header file that provides the target-specific mappings and data types.
Runtime Initialization
The CPL library provides a mechanism that allows modules to register
for callbacks when the framework is initialized (i.e., when Cpl::System
::Api::initialize() is called). In the context of porting the OSAL, this
Interfaces
This section discusses the 15 platform-specific implementations of the
OSAL interfaces that need to be ported. I’ve used simplified pseudocode in
place of the actual interfaces, so you will need to bring up the files in an
editor to follow along. As the text in this appendix will become stale over time,
you should always consider the header files to be the source of truth for the
OSAL interfaces.
Assert
The assert interface (src/Cpl/System/assert.h) only requires a platform-
specific implementation of the CPL_SYSTEM_ASSERT() macro. Typically,
this macro is mapped to the Cpl::System::FatalError interface. For
example:
LHeader mapping file for the Port: mappings_.h
#define CPL_SYSTEM_ASSERT_MAP(e) \
    do { if ( ! (e) ) \
             Cpl::System::FatalError::logf( "ASSERT Failed at: file=%s, line=%d, func=%s\n", \
                                            __FILE__, __LINE__, CPL_SYSTEM_ASSERT_PRETTY_FUNCNAME ); \
    } while(0)
One case where this should not be done is if your application contains
C files and you want to be able to call the CPL_SYSTEM_ASSERT macro from
within these files. For this case, the assert macro must map to something
that is legal C code. See the FreeRTOS mappings_.h, c_assert.h, and
c_assert.cpp files as examples.
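As an illustration of the C-file caveat, one approach is to route the macro through a plain C function that, in turn, calls the C++ FatalError interface. The sketch below is loosely modeled on the c_assert.h/c_assert.cpp approach mentioned above, but the names, signatures, and include path are assumptions:

// c_assert.h -- C-callable assert hook (names are assumptions)
#ifdef __cplusplus
extern "C" {
#endif
void c_assert_failed( const char* file, int line, const char* func );
#ifdef __cplusplus
}
#endif

#define CPL_SYSTEM_ASSERT_MAP(e) \
    do { if ( ! (e) ) c_assert_failed( __FILE__, __LINE__, __func__ ); } while(0)

// c_assert.cpp -- forwards the C call into the C++ FatalError interface
#include "Cpl/System/FatalError.h"       // header path assumed
extern "C" void c_assert_failed( const char* file, int line, const char* func )
{
    Cpl::System::FatalError::logf( "ASSERT Failed at: file=%s, line=%d, func=%s\n",
                                   file, line, func );
}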
Elapsed Time
The Cpl::System::ElapsedTime interface has the following methods that
require a platform-specific implementation:
Fatal Error
The Cpl::System::FatalError interface has the following methods that
require a platform-specific implementation:
CPL_IO_FILE_NATIVE_DIR_SEP
CPL_IO_FILE_MAX_NAME
For example:
LHeader mapping file for the Port: mappings_.h
• Cpl::Io::Stdio::Output_
Global Lock
The Cpl::System::GlobalLock interface has the following methods that
require a platform-specific implementation:
Mutex
The Cpl::System::Mutex interface consists of the following class; an
instance of the native mutex type is embedded in each Mutex instance.
The OSAL mutex semantics are those of a recursive mutex.
class Mutex
{
public:
    Mutex();
    ~Mutex();

protected:
    Cpl_System_Mutex_T m_mutex;
};

#define Cpl_System_Mutex_T_MAP pthread_mutex_t
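For a POSIX-based port (where Cpl_System_Mutex_T maps to pthread_mutex_t, as in the example above), the implementation might look like the following sketch; the lock()/unlock() method names are assumptions:

#include <pthread.h>

Mutex::Mutex()
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init( &attr );
    pthread_mutexattr_settype( &attr, PTHREAD_MUTEX_RECURSIVE );   // recursive semantics
    pthread_mutex_init( &m_mutex, &attr );
    pthread_mutexattr_destroy( &attr );
}

Mutex::~Mutex()
{
    pthread_mutex_destroy( &m_mutex );
}

void Mutex::lock()   { pthread_mutex_lock( &m_mutex ); }     // method name assumed
void Mutex::unlock() { pthread_mutex_unlock( &m_mutex ); }   // method name assumed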
The CPL library requires a set of mutex instances for its internal usage.
While these mutexes are not directly exposed to the application, the OSAL
port still needs to provide the mutexes. The following methods from the
src/Cpl/System/Private_.h header file require a platform-specific
implementation:
Newline
The Cpl::Io::NewLine interface only requires a platform-specific
implementation of the CPL_IO_NEW_LINE_NATIVE symbol. The definition of
the newline string is provided in the LHeader mappings_.h header file. For
example:
#define CPL_IO_NEW_LINE_NATIVE_MAP "\n"
Semaphore
The Cpl::System::Semaphore interface consists of the following class; an
instance of the native semaphore type is embedded in each Semaphore instance.
protected:
Cpl_System_Sema_T m_sema;
#define Cpl_System_Sema_T_MAP sem_t
Shell
The Cpl::System::Shell interface requires a platform-specific
implementation of the following symbols:
CPL_SYSTEM_SHELL_NULL_DEVICE_
CPL_SYSTEM_SHELL_SUPPORTED_
For example, here is the LHeader mapping file for the Port: mappings_.h
Shutdown
The Cpl::System::Shutdown interface has the following methods that
require a platform-specific implementation:
System API
The Cpl::System::Api interface has the following methods that require a
platform-specific implementation:
#include "Cpl/System/Api.h"
namespace Cpl::System;
Threads
The Cpl::System::Thread interface is the most involved interface to port.
This is because each target platform or OS or RTOS has its own interfaces,
semantics, and nuances when it comes to creating threads, allocating stack
space, prioritizing threads, etc.
In addition, on some platforms, the RTOS or OS will have created
an initial thread before main(). This is problematic because many of the
Cpl::System::Thread methods (e.g., getCurrent()) assume that an
instance of the Thread class has been created and associated with the
current thread. A native target thread will not have the instance association
because the thread was not created using the CPL createThread()
method.
A final complication is that the CPL OSAL semantics for creating
threads are “threads are created at run time” with the option for statically
allocating the memory for the thread’s stack.
The Thread interface is defined as an abstract class that also contains
several static methods (see the following code sample). The OSAL port
requires a concrete child class as well as an implementation for all of the
thread’s static methods.
class Thread : public Signable
{
public:
    virtual const char* getName() noexcept = 0;
    virtual size_t getId() noexcept = 0;
    virtual Cpl_System_Thread_NativeHdl_T getNativeHandle( void ) noexcept = 0;
    ...
    static Thread& getCurrent() noexcept;
    static void wait() noexcept;
    ...
    static Thread* create( Runnable&   runnable,
                           const char* name,
                           int         priority      = CPL_SYSTEM_THREAD_PRIORITY_NORMAL,
                           int         stackSize     = 0,
                           void*       stackPtr      = 0,
                           bool        allowSimTicks = true );

    static void destroy( Thread& threadToDestroy );
};

#define Cpl_System_Thread_NativeHdl_T pthread_t
Create Thread
Because there is so much variation in how an RTOS or OS creates and
manages threads, a given port of the OSAL may not be able to support all
of the semantics of the preceding createThread() method. This is okay as
long as the deviations are documented and are sufficiently functional for
your needs on the target. The reason it is okay to deviate from the OSAL
semantics is that the code that is creating threads is typically very platform-
specific code. It is code that is not commonly reused across multiple target
platforms.
Thread Priorities
It seems every OS and RTOS has a different scheme for thread priorities.
The CPL OSAL handles thread priorities by defining a set of macro symbols
that specify the highest, lowest, and nominal priority values as well as
two symbols to increase or decrease priority values. The port provides
the target-specific values for these constants in the LHeader mappings_.h
header file.
#define CPL_SYSTEM_THREAD_PRIORITY_HIGHEST_MAP 31
#define CPL_SYSTEM_THREAD_PRIORITY_NORMAL_MAP 15
#define CPL_SYSTEM_THREAD_PRIORITY_LOWEST_MAP 0
#define CPL_SYSTEM_THREAD_PRIORITY_RAISE_MAP (1)
#define CPL_SYSTEM_THREAD_PRIORITY_LOWER_MAP (-1)
Caution There are other use cases where native threads, other
than the initial main thread, can exist in your application. For
example, if you are using a communication stack provided by your
board vendor or a third party, that stack may create a pool of worker
threads from which callbacks into your application are made. This
means that your code in these callbacks cannot directly or indirectly
call any of the static class methods of the Cpl::System::Thread
interface, except for the tryGetCurrent() method.
Thread-Local Storage
The Cpl::System::Tls interface consists of the following Tls class; an
instance of the native TLS key type is embedded in each Tls instance.
class Tls
{
public:
    Tls();
    ~Tls();

protected:
    Cpl_System_TlsKey_T m_key;
};

#define Cpl_System_TlsKey_T_MAP pthread_key_t
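For a POSIX-based port (where Cpl_System_TlsKey_T maps to pthread_key_t, as shown above), the implementation might look like the following sketch; the get()/set() method names are assumptions, so see Cpl/System/Tls.h for the actual interface:

#include <pthread.h>

Tls::Tls()
{
    pthread_key_create( &m_key, nullptr );   // no per-thread destructor callback
}

Tls::~Tls()
{
    pthread_key_delete( m_key );
}

void* Tls::get() noexcept
{
    return pthread_getspecific( m_key );     // value previously stored for this thread
}

void Tls::set( void* newValue ) noexcept
{
    pthread_setspecific( m_key, newValue );  // per-thread value
}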
If the target platform does not natively support TLS, there are many
possible options for implementing the TLS interface. For example, one
option is to include the TLS storage as part of the concrete Thread class
since there is an instance of the class in each thread. This is how the
FreeRTOS port is done. Another example, which is very specific to the
Raspberry PI Pico port, is to allocate the TLS storage per CPU core since
threads map one to one with cores.
Tracing
The Cpl::System::Trace interface is another interface that, in addition
to being target specific, can also be application specific. For this reason,
I strongly recommend that the OSAL port provide a default implementation
for the Tracing interface and allow for applications and build scripts to
provide their own implementation.
For example, the CPL library provides a default implementation for
the entire Tracing interface that relies on the Cpl::Io::StdOut interface.
The default implementation is segregated into two directories: one for
formatting the output and one for the target output stream. This allows an
application to use part, all, or none of the default implementation.
The tracing interface has the following methods defined in the
src/Cpl/System/Trace.h header file that require platform- and
application-specific implementation:
APPENDIX F
NQBP2 Build System
Installing NQBP2
NQBP2 is pre-installed in the epc repository. The only developer
installation required is to install Python 3.6 or newer. See the Wiki page
instructions on how to install Python. NQBP2 is managed as a third-party
package in the epc repository and is located in the xsrc/nqbp2/ directory.
NQBP2 relies on specific environment variables that are set by the env.bat
and env.sh scripts. These variables are described in Table F-1.
14 Ninja is a small build system from Google specifically built for speed (see https://ninja-build.org/).
NQBP_BIN           The full path to the root directory where the NQBP2 package is located.
NQBP_PKG_ROOT      The full path to the package that is actively being worked on. Typically, this is the root directory of your local repository.
NQBP_WORK_ROOT     The full path of the directory containing one or more packages being developed. In practice, this is set to the parent directory of NQBP_PKG_ROOT.
NQBP_XPKGS_ROOT    The full path to external or third-party source code referenced by the build scripts. Typically, this is set to the NQBP_PKG_ROOT/xsrc directory.
Usage
NQBP2 separates the build directories from the source code directories.
It further separates the build directories into two buckets: unit tests (the
tests/ directory) and applications (the projects/ directory). The NQBP2
build scripts will only work if they are executed under one of these two
directories. It is up to you whether you have both or only one of these
directories.
Build Model
NQBP2 builds directories and, by default, builds all source code files found
in each directory specified. The object files for each directory are then
placed in an object library for that directory. main() is then linked against
all of the object libraries created.
Incremental builds are supported so that NQBP2 only recompiles
source code that has changed (directly or indirectly through header
includes) since the last build. The incremental build feature is wholly
managed by the Ninja build tool.
Build Variants
NQBP2 supports the concept of build variants. A build variant is where
the same basic set of code is compiled and linked against different targets.
For example, the automated unit test for the Cpl::Dm namespace using the
MinGW compiler has three build variants: win32, win64, and cpp11. Here is
a description of these variants.
Each build variant can be built independently from the others. That is,
if you build variant A, it does not delete any of the files for variant B.
NQBP2 does not consider a debug build a build variant. This means
that building with debug enabled and then building without it (or vice
versa) will overwrite the previous build’s files.
Build Scripts
Some of the Python scripts that NQBP2 uses are common across projects,
and others are unique to individual projects. Table F-2 describes the
primary components that make up a complete build script.
[cpp11] src/Cpl/System/_cpp11
Here are some examples of lines that can be included in a libdirs.b file:
Compiler Toolchains
The NQBP2 build engine is based on the concept of creating reusable
compiler or target-specific toolchain scripts that individual projects and
unit tests can then invoke. In addition, the toolchains can be customized
per project and per unit test via the mytoolchain.py script in each build
directory.
Toolchain Details
Each toolchain script is a child class of the Toolchain Python class defined
in the nqbp2/nqbplib/base.py. The Toolchain class is responsible for
generating build rules for the Ninja build tool. Besides being fast, the Ninja
tool handles the details of performing incremental builds.
The base Toolchain class defaults to building a command line
executable using the native GCC compiler. The child classes can extend
or customize any base class method as needed, but typically only the
following methods need to be implemented by a child class:
mytoolchain.py
Each toolchain can be customized on a per-project and per-unit-test basis.
The customization is done via the mytoolchain.py script that is located in
each build directory. The mytoolchain.py script is responsible for
• Specifying which compiler toolchain to use
Build Variant
A build variant is specified using a Python dictionary that contains three
sets of customizable build options. These build options are OR-ed with the
build options specified in the toolchain script. Each set of build options is
an instance of the class BuildValues. The three sets of build options are
• user_base—Options that are common to both the
debug and release builds.
• user_optimized—Options that are specific to building
a release build. That is, the -g flag was not specified
when invoking the nqbp.py script.
• user_debug—Options that are specific to building a
debug build. That is, the -g flag was specified when
invoking the nqbp.py script.
Build Values
Table F-4 shows the settable data members that the BuildValues class
contains. By default, all the data members are empty strings.
Specifying Toolchain
To specify a compiler toolchain, at least one build variant must be created.
Then all of the build variants’ dictionaries are combined into yet another
dictionary. The keyword values used in this dictionary are the values used
when specifying the -b <variant> build option and in the libdirs.b
files when specifying conditional builds (i.e., [cpp11] src/Cpl/System/_cpp11).
For example:
# CPP11 variant
cpp11_opts = { 'user_base':      base_cpp11,
               'user_optimized': optimzed_cpp11,
               'user_debug':     debug_cpp11
             }
The final step is to invoke the desired compiler toolchain. This is done
by calling the toolchain’s create() method. For example:
Linking
As discussed earlier, the default linking paradigm for the NQBP2 is to link
against libraries, not directly against object files. However, as this approach
does not always work, there are ways to go about unconditionally linking
object files:
1. Put the source code for the object files in the
build directory. When these source code files are
compiled, their object files are always directly linked.
2. Use the firstobjs and lastobjs build options to
explicitly link one or more object files. An object file
is specified by its path in the object build directory,
and this path needs to be relative to the build-
variant directory—not the project directory. For
example:
base_release.lastobjs = \
r'..\src\Bsp\Renesas\Rx\u62n\Yrdkr62n\Gnurx\vectors.o'
base_release.firstobjs = '_BUILT_DIR_.src/Cpl/Dm/Mp/_0test'
Preprocessing Scripts
NQBP2 supports the ability to run preprocessing scripts prior to the
compile stage. This allows NQBP2 to work with code bases and tools that
auto-generate additional source code files from non-C/C++ source files or
from files that were generated from the examination of other source files.
An example of this is running Qt’s Moc compiler with each build.
#!/usr/bin/python3
#--------------------------------------------------------------
# Usage: preprocess.py <arg1> ... <arg6> <compiler> where:
# <arg1>: build|clean
# <arg2>: verbose|terse
# <arg3>: <workspace-dir>
# <arg4>: <package-dir>
# <arg5>: <project-dir>
# <arg6>: <current-dir>
# <compiler>: compiler being used, e.g. mingw|mingw_64|vc12|etc..
#--------------------------------------------------------------
import os
import sys

# MAIN
if __name__ == '__main__':
    # Create path to the 'real' script
    script = os.path.join( sys.argv[4], "scripts", "preprocess_base.py" )
#!/usr/bin/python3
#--------------------------------------------------------------
# This is an example of the 'single instance' NQBP pre-processing script
# where <prjargs..> are project-specific arguments passed to the
# <preprocess-script.py> script when it is executed.
#
# Usage:
# preprocessing_base <a1> <a2> <a3> <a4> <a5> <a6> <prj1>
#
# where:
# <a1>: build|clean
# <a2>: verbose|terse
# <a3>: <workspace-dir>
# <a4>: <package-dir>
# <a5>: <project-dir>
# <a6>: <current-dir>
# <prj1>: <compiler> // mingw|mingw_64|vc12|etc.
#--------------------------------------------------------------
# Do stuff...
print( "--> Example Pre-Processing Script" )
if ( sys.argv[2] == 'verbose' ):
    print( "= ECHO: " + ' '.join(sys.argv) )
...
...
#===================================================
# BEGIN EDITS/CUSTOMIZATIONS
#---------------------------------------------------
...
...
--qry-dirs    Displays the list of directories for the selected build variant referenced in the libdirs.b file.
--qry-dirs2   The same as --qry-dirs, except with the addition of any source files specifically included or excluded on a per-directory basis. The token “>>>” indicates excluded files. The token “<<<” indicates only included files.
--qry-opt     Displays all of the toolchain options, that is, compiler flags, header search paths, linker flags, etc.
c:\epc\tests\Driver\NV\_0test\onsemi-cat24c512\NUCLEO-F413ZH-alpha1\windows\gcc-arm>nqbp.py --qry-opts
# inc: -I. -I/epc/src -I/epc/xsrc -IC:/epc/tests/Driver/NV/_0test/onsemi-Cat24c512/NUCLEO-F413ZH-alpha1/windows/gcc-arm
-I/epc/xsrc/stm32F4-SDK/Drivers/STM32F4xx_HAL_Driver/Inc
-I/epc/xsrc/stm32F4-SDK/Drivers/STM32F4xx_HAL_Driver/Inc/Legacy
-I/epc/xsrc/stm32F4-SDK/Drivers/CMSIS/Device/ST/STM32F4xx/Include
-I/epc/xsrc/stm32F4-SDK/Drivers/CMSIS/Include
-I/epc/xsrc/stm32F4-SDK/Middlewares/Third_Party/FreeRTOS/Source/CMSIS_RTOS
-I/epc/xsrc/FreeRTOS/Include -I/epc/xsrc/FreeRTOS/portable/GCC/ARM_CM4F
-I/epc/src/Bsp/ST/NUCLEO-F413ZH/alpha1/MX
-I/epc/src/Bsp/ST/NUCLEO-F413ZH/alpha1/MX/Core/Inc
# firstobjs: _BUILT_DIR_.src/Bsp/ST/NUCLEO-F413ZH/alpha1/MX/Core/Src
src/Bsp/ST/NUCLEO-F413ZH/alpha1/MX/../stdio.o
# lastobjs: src/Bsp/ST/NUCLEO-F413ZH/alpha1/MX/../syscalls.o
Extras
NQBP2 also provides some additional scripts and features that are not
directly used for building but are used to leverage, or support, the NQBP2
engine. Table F-6 lists a few of these supported scripts.
Table F-6. NQBP2 scripts that are used to leverage the NQBP2 engine
Script Description
bob.py   The bob script is a tool that recursively builds multiple projects or tests. bob can only be run under the projects/ and tests/ directory trees. In addition, bob provides several options for filtering and specifying which projects or tests actually get built. For example, the following statement builds only the tests that use the Visual Studio compiler and also passes the -g option to the nqbp.py build scripts:
c:\work\pim\tests>bob.py vc12 -g
The tca script is run after the unit test executable has been
run at least once. In the EPC repository, only the mingw_w64
32-bit builds are configured to be instrumented to generate
code coverage metrics.
APPENDIX G
RATT
RATT is a Python-based automated test tool built on top of the pexpect15
package. I specifically created it to perform automated testing with Units
Under Test (UUTs). It supports a command-line interface, and it can be
used with any application that supports interactive behavior with the
parent process that launched the application (e.g., stdio). The following is
a summary of RATT’s features:
https://github.com/pexpect/pexpect
• stdio
Installing
RATT is pre-installed in the epc repository and is located in the xsrc/ratt/
directory.16 It requires the following additional developer tools:
16
RATT is managed as a third-party package in the epc repository. The canonical
repo for RATT is https://github.com/johnttaylor/ratt
The native Windows telnet.exe does not work with RATT because
pexpect does not spawn applications in a terminal, and the
Windows telnet application only supports interactive behavior when it
detects that it is running in a terminal.
Test Scripts
The basic execution model is that RATT scripts read and write data to
the UUT’s input and output streams, respectively. In addition, the read
operations can optionally wait until a specific string or regular expression
is detected in the output stream coming from the UUT.
The RATT package does not impose constraints or limits on the
content of the test scripts, which are written in Python. The exception is the
entry script, which is expected to have the following semantics when it is
launched from the command line:
Usage
Here is the usage for running a ratt.py script:
To see the complete list of options, you can enter ratt.py -h on the
command line. Here are some examples of running the ratt.py script.
Script Locations
Test scripts can be located anywhere on your PC or on the network. When
running a ratt.py script as a test suite—that is, as a script that calls other
scripts—there are very specific semantics associated with searching for the
entry script and for other scripts loaded by the entry script. The following
hierarchical rules apply when loading test scripts:
Example Scripts
Here are two example snippets of RATT test scripts used for the GM6000
project. One is a test suite, and the other is a test case. The std, uut, and
output symbols in the following snippets are Python modules provided
by the RATT engine that are available to your test scripts after the from
rattlib import * statement executes. See the “rattlib” section that
follows for a description of these modules.
Here is a test suite that verifies the GM6000 Heating algorithm:
src/Ajax/Heating/Simulated/_0test/test_suite.py
def main():
    """ Entry point for the Test Suite
    """
    output.write_entry( __name__ )

    passcode = config.g_passed
    uut.setprompt( prompt_string )

    output.write_exit( __name__ )
    return passcode
src/Ajax/Heating/Simulated/_0test/tc_basic_heating.py
def run():
    """ Entry point for the Test Case
    """
    output.write_entry( __name__ )

    helper = std.load( "tc_common" )   # load a RATT test script with a collection of common functions
    passcode = config.g_passed
Caveats
Be aware of the following caveats when using RATT:
https://pexpect.readthedocs.io/en/stable/overview.html#find-the-
17
end-of-line-cr-lf-conventions
Recommended Conventions
I recommend that you organize your RATT test scripts by test suites and
test cases. Additionally, I recommend that you do the following things:
For all RATT test files:
18
See https://peps.python.org/pep-0257
For the purposes of usability and debugging, here are the preferred
output methods for scripts:
>man.list()
cplutils.py
tc_basic_heating.py
tc_common.py
tc_heating_alerts.py
tc_uut_alive.py
test_suite.py
>man.man('cplutils')
MODULE NAME:
    cplutils
LOCATION:
    c:\_workspaces\zoe\epc\scripts\colony.core\ratt\cplutils.py
DESCRIPTION:
    Utilities, common functions, etc.
FUNCTIONS:
    get_uut_elapsed_time(display_time=False)
        Gets the current UUT elapsed time and returns it in milliseconds.
        When 'display_time' is True, the current time will be displayed on the console.
...
APPENDIX H
GM6000 Requirements
This appendix provides examples of requirements documents that you
can use as templates if your requirements process is informal, ad hoc, or
inconsistent. In this case, the examples are the formal requirements for
the GM6000 Digital Heater Controller project, and they are separated into
several documents based on the principal document owner or discipline.
That is, there are examples of marketing, product, software, and hardware
requirements documents.
Marketing Requirements
The sample document here is for the top-level requirements for a project.
These requirements represent the customer and business needs.
Overview
The GM6000 product is a Digital Heater Controller (DHC) that can be
used in many different physical form factors and heating capacities. The
specifics of the final unit’s physical attributes will be provisioned into the
DHC during the manufacturing process.
It is acceptable for the initial release of the GM6000 to be a single form
factor. Follow-up releases will include additional form factors as well as a
wireless space temperature sensor.
Glossary
Term Definition
Document References
Document # Document name Version
Requirements
The use of the words shall, will, and must indicates that the requirement
must be implemented. The use of the words should, may, and can
indicates that the requirement is desired but does not have to be
implemented.
MR-100  Heating System  The DHC system shall provide indoor heating based on space temperature and a user-supplied heat setting. (Rel 1.0)
MR-101  Heating Enclosures  The DHC shall support at least three different heater enclosures. The heating capacity of each heater enclosure can be different from the other enclosures. (Rel 1.0)
MR-102  Control Board  DHC shall have a single control board that can be installed in many heater enclosures. (Rel 1.0)
MR-103  Control Algorithm  The heater control algorithm in the control board shall accept parameters and configuration that customizes the algorithm for a specific heater enclosure. (Rel 1.0)
MR-104  Provisioning  The DHC control board shall be provisioned to a specific heater enclosure during the manufacturing process. The provisioning shall include the heater control algorithm’s parameters and configuration. (Rel 1.0)
MR-105  Wireless  The control board shall support connecting to a wireless module for communicating with a wireless temperature input. (Rel 2.0)
MR-106  Wireless  The DHC system shall support an external, wireless temperature sensor. (Rel 2.0)
MR-107  User Interface  The DHC unit shall support a display, LEDs, and user inputs (e.g., physical buttons, keypad membrane, etc.). The arrangement of the display and user inputs can be different between heater enclosures. (Rel 1.0)
MR-108  User Actions  The DHC display, LEDs, and user inputs shall allow the user to do the following: (Rel 1.0)
• Turn the heater on/off
• Set the maximum fan speed
• Set the temperature setpoint
MR-109  User Information  The DHC display LEDs shall provide the user with the following information: (Rel 1.0)
• Current temperature
• The DHC on/off state
• Whether the DHC is actively heating
• The fan on/off state
• Alerts and failure conditions
MR-200  UI Languages  The User Interface text shall be in US English. (Rel 1.0)
MR-201  Troubleshooting  The DHC shall support troubleshooting failures in the field and diagnostics for analyzing returned units. (Rel 1.0)
MR-203  Lifetime  The DHC shall be designed for a lifetime of five years, with a minimum of 25,000 hours of active heating operations. (Rel 1.0)
Change Log
Version Date Updated By Changes
Product Requirements
The following sample document is for system-level or product-level
requirements that identify a solution to the Marketing Requirements
Specification (MRS).
Overview
The GM6000 product is a Digital Heater Controller (DHC) that can be
used in many different physical form factors and heating capacities.
The specifics of the final unit’s physical attributes will be provisioned
into the DHC during the manufacturing process. The following product
requirements identify a solution for the GM6000.
Glossary
Term Definition
Document References
Document # Document Name Version
Requirements
Req# Name Requirement Rel
• Approximately 1” (± 20%)
Graphic LCD
Change Log
Version Date Updated By Changes
Software Requirements
The following document is for detailed software-level requirements for the
GM6000 project. With respect to software, this is the lowest, most-detailed
level of requirements.
Overview
The GM6000 product is a Digital Heater Controller (DHC) that can be
used in many different physical form factors and heating capacities.
The specifics of the final unit’s physical attributes will be provisioned
into the DHC during the manufacturing process. The following
requirements identify the software-specific requirements.
Glossary
Term Definition
Document References
Document # Document Name Version
Requirements
Req# Name Requirement Rel
SWR-207  UI LED Status  The RGB LED shall have the following behaviors: (Rel 1.0)
Change Log
Version Date Updated By Changes
Hardware Requirements
The following document is for detailed (electrical) hardware-level
requirements for the GM6000 project. With respect to hardware, this is the
lowest, most-detailed level of requirements.
Overview
The GM6000 product is a Digital Heater Controller (DHC) that can be
used in many different physical form factors and heating capacities. The
specifics of the final unit’s physical attributes will be provisioned into
the DHC during the manufacturing process. The following requirements
identify the hardware-specific requirements.
Glossary
Term Definition
Document References
Document # Document name Version
Requirements
Req# Name Requirement Rel
Change Log
Version Date Updated By Changes
APPENDIX I
GM6000 System Architecture
This appendix provides an example of a system architecture document
that you can use as a template. It is the system architecture document for
the GM6000 Digital Heater Controller project. The document’s example
content is sparse, and it omits almost all non-software-related content.
Overview
Build a Digital Heater Controller (DHC) that can be used in many different
physical form factors and heating capacities. The specifics of the final
unit’s physical attributes will be provisioned into the DHC during the
manufacturing process.
Glossary
Term Definition
Document References
Document # Document Name Version
Block Diagram
To meet the marketing needs of multiple form factors and capacities,
the decision was made to break down the DHC unit into six major
sub-assemblies (see Figure I-1). The individual sub-assemblies can be
specific to physical enclosures with the exception of a single Control Board
sub-assembly that is common to all enclosures.
The DHC system consists of the following components that will be
physically located within the primary physical enclosure. There can be
many different physical enclosures.
Component Description
Control Board (CB)              This contains the microcontroller that runs the heater controller. The CB contains circuits and other chips as needed to support the MCU and software.
Display and User Inputs (DUI)   A separate board that contains the display, buttons, and LEDs used for interaction with the customer. This DUI can be located anywhere within the enclosure and is connected to the CB via a wire harness or cable.
Power Supply (PS)               The power supply accepts line power in and is responsible for providing the appropriate voltage inputs and current ratings to the CB, HE, and BA. The PS is connected to the CB, HE, and BA via wire harnesses or cables. Note: The DUI and the optional WM board are powered via their connection with the CB.
Change Log
Version Date Updated By Changes
APPENDIX J
GM6000 Software Architecture
This appendix provides an example of a software architecture (SWA)
document that you can use as a template. It is the software architecture
document for the GM6000 Digital Heater Controller project. The
document’s example content is not exhaustive, but the structure of the
template does address all the areas you should be paying attention to and
thinking about.
Scope
This document captures the software architecture for the GM6000
software. The software architecture identifies a high-level solution
and defines the rules of engagement for the subsequent design and
implementation steps of the GM6000 software.
The target audience for this document is software developers, system
engineers, and test engineers. In addition, authors are required to view the
audience as not only the original development team but also future
team members on future development efforts.
Requirements Tracing
For requirements tracing purposes, the [SWA-nnn] identifiers are used to
label traceable architectural items. The SWA identifiers, once assigned,
shall not be reused or reassigned to a different item. The nnn portion of
the identifier is simply a free running counter to ensure unique identifiers
within the document.
Overview
The GM6000 is a Digital Heater Controller (DHC) that can be used in
many different form factors and heating capacities. The specifics of the
unit’s physical attributes will be provisioned into the DHC when the final
assembly of the DHC is performed. The GM6000’s software executes
on a microcontroller that is part of the unit’s Control Board (CB). The
Control Board is physically located within the unit’s enclosure. The
software interacts with other sub-assemblies—Heating Element, Blower
Assembly, Display and User Inputs, and Temperature Sensor—within the
enclosure. In the future, the software will interact with an external wireless
temperature sensor. Figure J-1 summarizes the system architecture.
Glossary
Term Definition
475
Appendix J GM6000 Software Architecture
Document References
Document # Document Name Version
Hardware Interfaces
Figure J-2 illustrates the input and output signals to the Control Board’s
microcontroller. It defines what the MCU’s inputs and outputs are.
Component Description
Control Board (CB)   A single PCBA that contains the microcontroller that operates the heater controller. The board has numerous connectors for off-board connections to the other sub-assemblies.
Data Storage         Serial persistent data storage for configuration, user settings, etc.
Performance Constraints
This section summarizes the analysis performed—and documents the
decisions made—with respect to real-time performance.
Data Storage
The off-board data storage is used for storing configuration, user settings,
device logs, etc. All of this data can be written and retrieved in the
background. However, on orderly shutdown, all pending write actions
must be completed before the Control Board is powered off or the MCU is
put to sleep. The assessment and recommendations are as follows:
• No real-time constraints
Display
The MCU communicates with the display controller via a serial bus (e.g.,
SPI or I2C). There is a time constraint in that the physical transfer time for
an entire screen’s worth of pixel data (including color data) must be fast
enough to ensure a good user experience. There is also a RAM constraint
with respect to the display in that the MCU will need at least one off-screen
frame buffer that can hold an entire screen’s worth of pixel data. The size of
the pixel data is a function of the display’s resolution times the color depth.
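As a purely illustrative calculation (the GM6000's actual display resolution and color depth are not specified here), a hypothetical 320 x 240 display with 16-bit (2-byte) color would require 320 x 240 x 2 = 153,600 bytes, roughly 150 KB, for a single off-screen frame buffer, which is a substantial fraction of the RAM available on many microcontrollers.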
The assessment and recommendations are as follows:
Display Backlight
The control of the display’s backlighting is assumed to be either a
discrete GPIO enable signal or a PWM signal that proportionally controls
brightness. For the PWM use case, the MCU hardware will generate the
PWM signal, and it is assumed that changes in the brightness do not have
to occur immediately (i.e., there is no requirement for faster than 1 Hz).
The assessment and recommendations are as follows:
User Inputs
The application must respond to user button presses in a timely manner.
The time for sampling and debouncing the input signals and then
providing feedback to the user (updated display, LED change state, etc.)
must also be taken into consideration. A button press (and release) must
be detected and debounced in approximately 50 msec. The assessment
and recommendations are as follows:
• Button input signals are required to be interrupt driven to detect the initial edges.19 (See the debounce sketch below.)

19 This requirement was later changed in the GM6000 Software Detailed Design. It is left in its original form here to illustrate that original ideas can change in the course of the development process and that when changes occur, it's simply a matter of documenting the change and, as necessary, documenting the agreement for the change.
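The following is a minimal sketch of the interrupt-plus-polling approach implied by the bullet above. All names, the 10 ms poll period, and the 4-sample history are illustrative assumptions; this is not the project's actual driver code.

#include <cstdint>

// Set by the GPIO edge interrupt; the periodic poll takes over from there.
static volatile bool buttonActivity_ = false;

// Keep the ISR trivial: it only flags that button activity occurred.
void buttonEdgeIsr()
{
    buttonActivity_ = true;
}

// Call every 10 ms from the driver thread with the raw (unbounced) pin state.
// Returns true while the button is in the debounced "pressed" state.  With a
// 4-sample history, a press or release is confirmed in 40-50 ms, which fits
// the approximately 50 msec budget stated above.
bool debounceButtonPoll10ms( bool rawPinIsPressed )
{
    static uint8_t history   = 0;
    static bool    debounced = false;

    if ( buttonActivity_ || history != 0 || debounced )
    {
        history = (uint8_t) (((history << 1) | (rawPinIsPressed ? 1u : 0u)) & 0x0Fu);
        if ( history == 0x0F )
        {
            debounced = true;             // four consecutive 'pressed' samples
        }
        else if ( history == 0x00 )
        {
            debounced       = false;      // four consecutive 'released' samples
            buttonActivity_ = false;      // idle until the next edge interrupt
        }
    }
    return debounced;
}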
LEDs
In general, the LEDs only change state based on the application changing
state, so LED changes should be slower than 1 Hz. The exception is if there
are requirements to flash one or more LEDs. However, since flashing LEDs
will be in the context of conveying a particular state or fault condition
to the user, it is not critical to have precise and 100% consistent flash
intervals. The assessment and recommendations are as follows:
Heater Control
The heating capacity of the heater element is controlled by a PWM signal.
Controlling space temperature is a relatively slow system (i.e., much slower
than 1 Hz), and the heater element has its own safety protection. The
assessment and recommendations are as follows:
Blower Control
The fan speed of the Blower Assembly is controlled by a PWM signal.
Controlling space temperature is a relatively slow system (i.e., much slower
than 1 Hz). The assessment and recommendations are as follows:
Temperature Sensor
The space temperature must be sampled and potentially filtered before
being input into the control algorithm. However, controlling space
temperature is a relatively slow system (i.e., much slower than 1 Hz). The
assessment and recommendations are as follows:
Console
The console port is used during manufacturing for provisioning and
testing. It can also be used for live troubleshooting. In either case, it is
a text-based command/response interface that has no critical timing
requirements. The assessment and recommendations are as follows:
Wireless Module
The MCU will interface with the wireless module via a serial bus. The
assumption is that the wireless module itself handles all of the real-time
aspects of the wireless protocol. In addition, since only space temperature
is being reported (and not at a high frequency), there is not much data to
be transferred.
• No real-time constraints
Control Algorithm
Controlling space temperature is a relatively slow system. The control
algorithm will execute periodically at a rate slower than 1 Hz. The
assessment and recommendations are as follows:
• No real-time constraints
Password Hashing
The console is password protected. The password is not stored in plain text
in persistent storage; rather, a hash of the password is stored. Depending
on the type of hash and the number of iterations, the hashing process can consume a considerable amount of time and CPU. However, this is a
one-time-per-console-session event, and it is okay for the user to wait for
authorization. The assessment and recommendations are as follows:
Threading
A Real-Time Operating System (RTOS) with many threads will be used.
Switching between threads (a context switch) requires a measurable
amount of time. This becomes important when there are sub-10-
millisecond timing requirements and when looking at overall CPU usage.
The RTOS also adds timing overhead for maintaining its system tick timer,
which is typically interrupt based. The assessment and recommendations
are as follows:
Summary
There are no real-time and performance constraints that require
microsecond or sub-10-millisecond processing. The most impactful
constraints are for the display driver and the user buttons with processing
times in the tens of milliseconds, which can be accommodated using
high-priority interrupts and threads. There is a hardware requirement that
the MCU has at least two to three PWM outputs (depending on how the
backlight is controlled).
Programming Languages
The software shall be a C/C++ application—predominantly C++. The
Software Development Plan will identify the specific ISO language and coding standards to follow.
Subsystems
Figure J-3 illustrates the subsystems that the control board software is
broken down into.
[SWA-11] Application
The application subsystem contains the top-level business logic for the
entire application. This includes functionality such as the following:
[SWA-12] Bootloader
The bootloader subsystem consists of the nonmutable bootloader that is
responsible for
• Determining the validity of the application software
At this time, the bootloader is not a required subsystem. That is, there
is no requirement for field upgradable software.
[SWA-13] BSP
The Board Support Package (BSP) subsystem is responsible for abstracting
the details of the MCU’s datasheet. For example, it is responsible for the
following:
The BSP is specific to a target PCBA. That is, each board spin will have
its own BSP. The BSP may or may not contain driver code. Driver code
that is contained within the BSP is by definition dependent on a specific
target. Whether or not a specific driver is generic—that is, it has no direct
dependency on the BSP—will be determined on a case-by-case basis. The
preference is for generic drivers.
[SWA-14] Console
The console subsystem provides a command-line interface (CLI) for the
application. The CLI is used to provision the unit during manufacturing.
The console will be secured so that only authorized users have access to it
after the unit is shipped. The CLI is also used for
• Development debugging
[SWA-15] Crypto
The Crypto subsystem provides application-specific cryptography services
as well as abstract APIs to low-level crypto services provided by the MCU,
SDK, and third-party software.
[SWA-17] Diagnostics
The Diagnostics subsystem is responsible for monitoring the software's health, providing diagnostics logic, and self-testing the system. This includes features such as power-on self-tests and metrics capture.
[SWA-18] Drivers
The Driver subsystem is the collection of driver code that does not reside
in the BSP subsystem. Drivers that directly interact with hardware are
required to be separated into layers. There should be at least three layers:
• A hardware-specific layer that is specific to a target
platform.
[SWA-20] Heating
The heating subsystem is responsible for the closed-loop space
temperature control (in other words, the algorithm code).
[SWA-21] Logging
The logging subsystem is responsible for creating and managing a
device log. The device log is a persistent collection of timestamped
events and data that is used for profiling the device’s behavior, algorithm
[SWA-22] OS
The Operating System subsystem will be a third-party software package.
The OS for the hardware target’s platform must be an RTOS with priority-
based thread preemption and priority inheritance features (for mutexes).
[SWA-23] OSAL
The Operating System Abstraction Layer (OSAL) subsystem decouples the
application and driver code from a specific operating system executing on
the target platform.
• Discovering sensors
[SWA-28] UI
The user interface (UI) subsystem is responsible for the business logic and
interaction with end users of the unit. This includes displaying the LCD
screens, screen navigation, processing button inputs, LED outputs, etc. The
UI subsystem has a hard dependency on the Graphics library subsystem.
This hard dependency is acceptable because the Graphics library is
platform independent.
[SWA-31] Interfaces
The preferred, and primary, mechanism for sharing data between subsystems is the Data Model pattern. The secondary mechanism will be
message-based inter-thread communications (ITC). Which mechanism
is used to share data will be determined on a case-by-case basis with the
preference being to use the data model. However, the decision can be
“both” because both approaches can co-exist within a subsystem.
The Data Model pattern supports publish-subscribe semantics as well as a polling approach to sharing data. The net effect of this approach is that subsystems are decoupled; that is, there are no direct dependencies between subsystems.
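The following toy sketch illustrates the model point idea only; it is not the actual Data Model framework API used by the project. Writers update a named point, and readers either poll it or subscribe for change callbacks, so neither side depends on the other.

#include <cstdint>
#include <functional>
#include <vector>

// A toy "model point" holding one value.  Subsystems share data by reading and
// writing the point; they never call each other directly.
template <typename T>
class ModelPoint
{
public:
    using Observer = std::function<void( const T& newValue )>;

    void write( const T& newValue )
    {
        m_value = newValue;
        m_seq++;
        for ( auto& cb : m_observers )      // publish: notify subscribers of the change
        {
            cb( m_value );
        }
    }

    // Polling interface: returns the current value and its change sequence number
    T read( uint32_t& seqNum ) const { seqNum = m_seq; return m_value; }

    // Publish-subscribe interface
    void subscribe( Observer cb ) { m_observers.push_back( cb ); }

private:
    T                     m_value{};
    uint32_t              m_seq = 0;
    std::vector<Observer> m_observers;
};

// Example: a temperature-sensor driver writes the point; the heating algorithm
// subscribes to it without knowing anything about the driver.
ModelPoint<int32_t> mp_spaceTemperature;   // hundredths of a degree, illustrative

In a real embedded design the change notifications would be dispatched in each subscriber's own thread; this sketch invokes them synchronously for brevity.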
[SWA-34] Simulator
The software architecture and software design include the concept of
a functional simulator. A functional simulator executes the production
source code on a platform that is not the target platform. The simulator
provides the majority of the functionality, but not necessarily the real-time
performance. Or, more simply, functional simulation enables developers
to develop, execute, and test production code without target hardware.
Figure J-5 illustrates what is common, and different, between the software
built for the target platform and that built for the functional simulator.
[SWA-35] Cybersecurity
The software in the DHC is considered to be a low-risk target in that it
is easier to compromise the physical components of a DHC than the
software. Assuming that the software is compromised, there are no safety
issues because the Heating Element has hardware safety circuits. The
worst-case scenarios for compromised software are along the lines of a denial-of-service (DoS) attack in that the DHC may be unable to heat the space, resulting in uncomfortable room temperatures and possibly a high energy bill.
No personally identifiable information (PII) is stored in persistent storage. There are no privacy issues
associated with the purchase or use of the GM6000.
Another possible security risk is the theft of intellectual property, for
example, a malicious actor stealing and reverse-engineering the software
in the Control Board. This is considered low risk as well because there are
no patented algorithms or trade secrets contained within the software and
because the software only has value with the company’s hardware.
The considered attack vectors are as follows:
to the namespace rule are for the BSP subsystem and third-party
packages. In addition to namespaces, the following conventions shall
be followed:
For example:
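A minimal illustration, using hypothetical identifiers and assuming the directory-to-namespace mapping described in the SW-1002 coding standard (see Appendix O):

// C++ code lives in a namespace that mirrors its directory path, e.g.,
// src/Ajax/Heating/Supervisor/Api.h maps to the Ajax::Heating::Supervisor namespace
namespace Ajax {
namespace Heating {
namespace Supervisor {

class Api;    // hypothetical class name

}   // end namespace Supervisor
}   // end namespace Heating
}   // end namespace Ajax

// C symbols that cannot live in a C++ namespace are prefixed with the logical
// namespace using "namespace case" (see the coding standard glossary), e.g.:
extern "C" void Ajax_Heating_Supervisor_isrHook( void );    // hypothetical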
[SWA-40] Localization
and Internationalization
The product is targeted to the North American, US English–speaking
market only (as determined by the Product Manager). In the context of the
software, the 7-bit ASCII character code is sufficient for all text presented
to an end user.
[SWA-41] Engineering
and Manufacturing Testing
Various product testing (for example, EMC, HALT, end-of-line manufacturing, etc.) will be performed on the physical product.
Specialized software for the Control Board will be required to support
these tests. Support for these test scenarios shall be one of the following:
Change Log
Version Date Updated By Changes
APPENDIX K
GM6000 Software
Development Plan
This appendix provides an example of a Software Development Plan (SDP)
document that you can use as a template. It is the SDP for the GM6000
Digital Heater Controller project. Several tools such as JIRA, Git, Jenkins,
etc., are called out in this document. Consider these tool names to be
placeholders or labels for the functionality they provide. That is, you can
replace these names with the names of the tools you use.
The contents of this SDP are verbose in some areas in that they
describe work instructions for certain processes. If your organization has
existing documentation for a particular process, simply reference that
documentation instead of providing details. Ideally, your SDP should not
be the source of details for process work instructions.
Overview
This document captures the software development decisions, activities,
and logistics for developing all of the software that executes on the
GM6000 Digital Heater Controller’s Control Board. This includes all the
software that is needed to formally test, validate, manufacture, and release
a GM6000.
Glossary
Term Definition
Pull Request A pull request (PR) informs others about changes that
developers have pushed to a branch in a Git repository.
QMS Quality Management System
SCM Software Configuration Management is the process and tools
used to store, track, and control changes to the source code.
For example, git is an SCM tool.
Software BOM (SBOM) Software Bill of Materials is a list of all third-party packages and their version information that were included in the released software. The Software BOM can contain existing internal packages that are being used with the released software.
SRS Software Requirements Specification
SWA Software Architecture Document
SWD Software Detailed Design document
Ticket A ticket represents a unit of work with respect to the source
code that is atomically merged to a stable branch in the SCM
repository. Tickets are required to be formally identified and
tracked by JIRA.
Validation Validation is the process of checking whether the specification
captures the customer’s requirements, that is, “are we building
the right thing.” Examples of validation might be collecting
voice-of-the-customer inputs, running focus groups with users
using mock-ups or prototypes, etc.
Verification Verification is the process of checking that the software fulfills
requirements, for example, functional software testing.
Document References
Document # Document Name Version
Role Responsibility
Software Lead The Software Lead is the technical lead for all software
contained within the GM6000 control board. This role is
responsible for the following:
• Software architecture
• Software Detailed Design
• SRS requirements
• Resolving (software) technical issues
• Ensuring the software-specific processes are followed
(especially reviews)
• Signing off on the final releases
• All the responsibilities of a Software Developer
Software Developer The Software Developer writes and tests code. This role is
responsible for the following:
• Assisting with software architecture
• Assisting with Software Detailed Design
• Assisting with SRS requirements
• Implementing code and unit tests
• Participating in design and code reviews
• Following the defined SDLC processes
Software Test Lead The Software Test Lead is responsible for all things related
to software verification:
• Creating the formal test plan and test matrix
• Creating test reports
• Resolving (software testing) technical issues
• Ensuring that the software test-specific processes are
followed
• Signing off on the final releases
• All the responsibilities of a Software Tester
Software Items
The software items covered under this development plan are as follows:
Documentation Outputs
1. The supporting documentation shall be created
in accordance with the processes defined in the
QMS-010 Software Development Life Cycle Process
document.
Requirements
1. The supporting documentation shall be created in
accordance with the processes defined in the QMS-
004 Requirements Management document.
Cybersecurity
1. The cybersecurity needs of the project shall follow
the processes defined in QMS-018 Cyber Security
Work Instructions.
Tools
1. The software that executes on the DHC’s Control
Board hardware shall be compiled with the GCC
cross compiler for the specific microcontroller.
Testing
1. The software team is responsible for unit testing and
integration testing.
Deliverables
This section provides a summarized list of deliverables that the software
team is responsible for on the project. There are numerous other
deliverables (e.g., the GM6000 Software Test Plan) that are not summarized
here because the software team is not responsible for the deliverables.
Change Log
Version Date Updated By Changes
APPENDIX L
GM6000 Software
Detailed Design
(Initial Draft)
This appendix is a point-in-time snapshot of the GM6000 Software
Detailed Design document that should be in place before exiting the
planning stage. It is meant to illustrate the minimum amount of up-front
detailed design that is necessary before starting the construction stage.
But I don’t want you to read this appendix. What I want you to do is
scan it and look at the empty headings. That is, the point of this appendix
is to illustrate structure, not content. This is why throughout this particular
document, at this particular stage in its development, you’ll see a lot of
headings with no content; they are just placeholders for content that
is coming in the next stage. During the construction stage, content will
be added to these empty sections at the same time the functionality is
developed.
For examples or models for what to write in an SDD, read Appendix M,
“GM6000 Software Detailed Design (Final Draft),” which is the full-fledged
version of the SDD. In this appendix, though, just note the kinds of things
that need to be done and the kinds of things that can be left undone.
Scope
This document captures the design of the individual Software Items (e.g.,
subsystems, features) and Software Units (e.g., modules) contained within
the GM6000 software. Each of the Software Items and Units is designed in
accordance with the GM6000 Software Architecture and can be designed
in an Agile just-in-time manner. This document is supplemented by the
Doxygen output file that provides header file–level details.
The target audience for this document are software developers, system
engineers, and test engineers. In addition, authors are required to view the
audience as not only being the original development team, but also future
team members on future development efforts.
Requirements Tracing
For requirements tracing purposes, the [SDD-nnn] identifiers are used
to label traceable design items. The SDD identifiers, once assigned, shall
not be reused or reassigned to a different item. The nnn portion of the
identifier is simply a free running counter to ensure unique identifiers
within the document.
Overview
The GM6000 is a Digital Heater Controller (DHC) that can be used in
many different form factors and heating capacities. The specifics of the
unit's physical attributes will be provisioned into the DHC when the final assembly of the DHC is performed. The GM6000's software executes on a microcontroller that is part of the unit's Control Board (CB). The Control Board is physically located within the unit's enclosure. The software interacts with other sub-assemblies that are within the unit's enclosure:
• Heating Element
• Blower Assembly
• Temperature Sensor
The software for the GM6000 product includes two applications. The
first application, codenamed Ajax, is the application that is shipped with
the physical hardware. The second application, codenamed Eros, is an
engineering-only test application used for environmental, emissions, and
end-of-line manufacturing testing.
Glossary
Term Definition
Document References
Document # Document Name Version
original repository. The open source tool Outcast (https://github.com/johnttaylor/Outcast) will be used to manage the incorporation of third-party repositories.
Per the SW-1002 Software C/C++ Embedded Coding Standard, the
source code files are organized by component dependencies (e.g., by
C++ namespaces) and not by project. As a naming convention, those
directories that do not map one to one with a namespace should be
prefaced with a leading underscore (e.g., _0test/). The following table
identifies the top-level source code directories.
Name Description
a.exe, a.out Used for all automated units that can be run in parallel with
other units.
aa.exe, aa.out Same as a.exe, a.out except the executables cannot be
run in parallel. For example, this could be a test that uses a
hard-wired TCP port number.
b.exe, b.out Used for all automated units that can be run in parallel
and that require an external Python script to run the test
executable. For example, this could be a test that pipes a
golden input file to stdin of the test executable.
bb.exe, bb.out Same as b.exe, b.out except that executables cannot be
run in parallel.
a.py Used to execute the b.exe and b.out executables
aa.py Used to execute the bb.exe and bb.out executables
<all others> Manual units can use any name for the executable except
for the ones listed previously.
[SDD-31] Subsystems
[SDD-10] Alert Management
[SDD-11] Application
The application subsystem contains the top-level business logic for the
entire application.
The design decision to share the startup code across the different
variants was made to avoid maintaining N-number of separate startup
sequences.
Per the coding standard, the use of #ifdef within functions is not
allowed. The different permutations are done by defining functions
(i.e., internal interfaces) and then statically linking the appropriate
implementations for each variant.
The following directory structure and header files (for the internal
interfaces) shall be used. The directory structure assumes that the build
scripts for the different variants build different directories, as opposed to
cherry-picking files within a directory.
src/Ajax/Main/ // Platform/Application independent implementation
+--- platform.h // Interface for Platform dependencies
+--- application.h // Interface for App dependencies
+--- _app/ // Ajax specific startup implementation
+--- _plat_xxxx/ // Platform variant 1 start-up implementation
| +--- app_platform.h // Interface for Platform specific App dependencies
| +--- _app_platform/ // Ajax-Platform specific startup implementation
+--- _plat_yyyy/ // Platform variant 2 start-up implementation
| +--- app_platform.h // Interface for Platform specific App dependencies
| +--- _app_platform/ // Ajax-Platform specific startup implementation
The base implementation for the Main pattern is located at the src/
Ajax/Main/ directory tree. The base implementation is for the Ajax
application.
The Eros Main pattern implementation is built on top of the Ajax
implementation. The Eros customizations are located at src/Eros/Main.
[SDD-12] Bootloader
[SDD-13] BSP
[SDD-14] Console
[SDD-15] Crypto
[SDD-16] Data Model
[SDD-17] Diagnostics
[SDD-18] Drivers
The Eros application shares or extends the Ajax Main pattern (see [SDD-
32] Creation and Startup (Application)). The following directory structure
shall be used for the Eros-specific code and for extensions to the Ajax
startup and shutdown logic.
src/Eros/Main/ // Platform/Application Specific implementation
+--- app.cpp // Eros Application (non-platform) implementation
+--- _plat_xxxx/ // Platform variant 1 start-up implementation
| +--- app_platform.cpp // Eros app + specific startup implementation
+--- _plat_yyyy/ // Platform variant 2 start-up implementation
| +--- app_platform.cpp // Eros app + specific startup implementation
Change Log
Version Date Updated By Changes
APPENDIX M
GM6000 Software
Detailed Design
(Final Draft)
This appendix provides an example of a detailed design document that
you can use as a template. It is the detailed design document for the GM6000
Digital Heater Controller project. The formatting is minimal, but the
structure of the template does address all the areas you should be paying
attention to and thinking about.
The first part of this appendix is the same as it was in Appendix L,
“GM6000 Software Detailed Design (Initial Draft).” There are a few
changes here and there, but mostly this appendix fills out the empty table
of contents that I outlined in the initial draft. If for any reason you see
something in Appendix L that is different or contradicts what’s in this
appendix, consider this appendix the canonical SDD for the GM6000
project.
And, for the record, I’d like to state that I did this “by the book.” I didn’t
write a single line of code for the GM6000 project before I had written
up the detailed design in this document.
Scope
This document captures the design of the individual Software Items (e.g.,
subsystems, features) and Software Units (e.g., modules) contained within
the GM6000 software. Each of the Software Items and Units is designed in
accordance with the GM6000 Software Architecture and can be designed
in an Agile just-in-time manner. This document is supplemented by the
Doxygen output file that provides header file–level details.
The target audience for this document are software developers, system
engineers, and test engineers. In addition, authors are required to view the
audience as not only being the original development team but also future
team members on future development efforts.
Requirements Tracing
For requirements tracing purposes, the [SDD-nnn] identifiers are used
to label traceable design items. The SDD identifiers, once assigned, shall
not be reused or reassigned to a different item. The nnn portion of the
identifier is simply a free running counter to ensure unique identifiers
within the document.
Overview
The GM6000 is a Digital Heater Controller (DHC) that can be used in
many different form factors and heating capacities. The specifics of the
unit’s physical attributes will be provisioned into the DHC when the final
assembly of the DHC is performed. The GM6000’s software executes on a
microcontroller that is part of the unit’s Control Board (CB). The Control
Board is physically located within the unit’s enclosure. The software
interacts with other sub-assemblies that are within the unit’s enclosure:
• Heating Element
• Blower Assembly
• Temperature Sensor
The software for the GM6000 product includes two applications. The
first application, codenamed Ajax, is the application that is shipped with
the physical hardware. The second application, codenamed Eros, is an
engineering-only test application used for environmental, emissions, and
end-of-line manufacturing testing.
Glossary
Term Definition
Term Definition
Document References
Document # Document Name Version
Name Description
a.exe, a.out Used for all automated units that can be run in parallel with
other units.
aa.exe, aa.out Same as a.exe, a.out except that executables cannot be
run in parallel. For example, this could be a test that uses a
hard-wired TCP port number.
b.exe, b.out Used for all automated units that can be run in parallel
and that require an external Python script to run the test
executable. For example, this could be a test that pipes a
golden input file to stdin of the test executable.
bb.exe, bb.out Same as b.exe, b.out except that executables cannot be
run in parallel.
a.py Used to execute the b.exe and b.out executables.
aa.py Used to execute the bb.exe and bb.out executables.
<all others> Manual units can use any name for the executable except
for the ones listed previously.
[SDD-31] Subsystems
The following sections describe the individual subsystems in detail.
For the Alerts themselves, a model point instance is created for each
possible alert instance. A raised alert is indicated by the model point
instance having a valid value. When the alert is lowered or is inactive, the
model point instance is set to invalid.
The alert data structure for Ajax::Dm::MpAlert includes the following
fields:
[SDD-11] Application
The application subsystem contains the top-level business logic for the
entire application.
The design decision to share the startup code across the different
variants was made to avoid maintaining N-number of separate startup
sequences.
Per the coding standard, the use of #ifdef within functions is not
allowed. The different permutations are done by defining functions
(i.e., internal interfaces) and then statically linking the appropriate
implementations for each variant.
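A minimal sketch of this approach follows; the function names are illustrative and are not the actual contents of platform.h or application.h.

// ---- src/Ajax/Main/platform.h (illustrative) ----
// Internal interface that every platform variant must implement
void platformInitialize_();     // e.g., BSP, clocks
void platformOpen_();           // e.g., open platform-specific drivers

// ---- src/Ajax/Main/_plat_xxxx/... (illustrative variant implementation) ----
void platformInitialize_() { /* variant-specific initialization */ }
void platformOpen_()       { /* variant-specific drivers */ }

// ---- src/Ajax/Main/ common startup (illustrative) ----
// No #ifdef: the build scripts select which variant directory gets compiled
// and linked, and the common code only calls the internal interfaces.
int runApplication()
{
    platformInitialize_();
    platformOpen_();
    // ... variant-independent startup continues here ...
    return 0;
}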
The following directory structure and header files (for the internal
interfaces) shall be used. The directory structure assumes that the build
scripts for the different variants build different directories, as opposed to
cherry-picking files within a directory.
src/Ajax/Main/ // Platform/Application independent implementation
+--- platform.h // Interface for Platform dependencies
+--- application.h // Interface for App dependencies
+--- _app/ // Ajax specific startup implementation
+--- _plat_xxxx/ // Platform variant 1 start-up implementation
| +--- app_platform.h // Interface for Platform specific App dependencies
| +--- _app_platform/ // Ajax-Platform specific startup implementation
+--- _plat_yyyy/ // Platform variant 2 start-up implementation
| +--- app_platform.h // Interface for Platform specific App dependencies
| +--- _app_platform/ // Ajax-Platform specific startup implementation
The base implementation for the Main pattern is located at the src/
Ajax/Main/ directory tree. The base implementation is for the Ajax
application.
The Eros Main pattern implementation is built on top of the Ajax
implementation. The Eros customizations are located at src/Eros/Main.
[SDD-12] Bootloader
This remains a placeholder for now since there is not currently a
requirement for upgradable firmware in the initial release. However, it is
fully expected in the future.
[SDD-13] BSP
There will be a Board Support Package (BSP) for each hardware platform
used in the development process from prototypes to final platform. The
first hardware platform BSP is for the ST NUCLEO-F413ZH evaluation
board. This board will be used for the "pancake" hardware platform (a "pancake board" is a collection of individual off-the-shelf hardware boards, spread across a workbench, that are wired together to provide the functionality of the final customized PCBA).
In addition, the BSP software will be developed incrementally. That is,
support for the SPI driver will not be implemented until there is an LCD
controller it can communicate with.
The general structure of a BSP is
• Dependent on a specific compiler
[SDD-69] alpha1
The alpha1 BSP supports the initial firmware development using the ST
NUCLEO-F413ZH evaluation board.
The BSP relies on the ST Cube MX application to generate the low-
level initialization of the board, peripherals, and IO. The ST Cube MX
project file for the BSP is located in the following directory: src/Bsp/
Initech/alpha1/MX.
The BSP is dependent on FreeRTOS, and the configuration of
FreeRTOS is done via the ST Cube MX tool.
All of the IO required for the initial release is supported. See the
STM32-Alpha board Pin Out wiki page for the board’s pin out.
The BSP also includes support for running the Segger SystemView tool.
To include the SystemView tool, the build script must define the ENABLE_
BSP_SEGGER_SYSVIEW symbol and build the src/Bsp/Initech/alpha1/
SeggerSysView directory.
The BSP is located at src/Bsp/Initech/alpha1.
[SDD-70] alpha1-atmel
The alpha1-atmel BSP is an alternate platform that uses the Atmel
SAMD51 microcontroller. The platform is Adafruit’s Grand Central M4
Express evaluation board. Adafruit supplies Arduino support for the board.
The BSP relies on the Arduino framework for the bulk of the MCU
startup and peripheral initialization and usage.
The BSP is dependent on FreeRTOS and supports the Arduino
framework design of creating an initial thread that calls the Arduino
setup() and loop() methods. The BSP contains a header file for the
FreeRTOS Configuration. However, each project or test build directory is
required to contain a FreeRTOSConfig.h header file that references this
file:
src/Bsp/Initech/alpha1-atmel/FreeRTOSConfig_bsp.h
All of the IO required for the initial release is supported. See the
Adafruit-Grand-Central-Alpha board Pin Out wiki page for the board’s pin
out.
The BSP is located at src/Bsp/Initech/alpha1-atmel.
[SDD-14] Console
The debug console subsystem provides a command-line interface (CLI)
into the firmware application. The CLI provides commands for debugging
and white box testing as well as a text-based scriptable interface to exercise
the unit’s functionality.
The debug console is implemented using the CPL library’s TShell
framework. For target builds, a UART is used for the stream IO. For
simulation builds, stdio is used for the stream IO.
The TShell framework has the following features:
• Numeric character
• Uppercase letter
• Lowercase letter
[SDD-15] Crypto
The cryptography functions needed are as follows:
[SDD-58] ED25519
An open source implementation of ED25519 is used to provide a software-only implementation of the algorithm. The algorithm is implemented in standard C and has no platform dependencies. The source code is located at xsrc/orlp/ed25519/.
The “adapter” for the Driver::Crypto interfaces to the
aforementioned implementation is located at src/Driver/Crypto/Orlp.
where
• H is the output of FUNC, the final result of the hashing
process. The size, in bytes, of H is 32 bytes, which is the
size of the HF function’s digest.
• FUNC is the top-level function.
where
U1 = HF(plaintext + salt)
U2 = HF(plaintext + U1)
Uc = HF(plaintext + Uc-1)
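The chain of U values can be sketched as follows. The digest function HF and the iteration count are parameters, and returning the last chain value as H is an assumption made here purely for illustration; the actual reduction of the U values into H is defined by the design, not by this sketch.

#include <array>
#include <cstdint>
#include <functional>
#include <vector>

using Digest = std::array<uint8_t, 32>;                        // HF produces a 32-byte digest
using HashFn = std::function<Digest( const std::vector<uint8_t>& )>;

// Computes U1 = HF(plaintext + salt), Un = HF(plaintext + Un-1) for 'c' iterations.
// NOTE: returning Uc as the final result H is an illustrative assumption.
Digest stretchPassword( const HashFn&               hf,
                        const std::vector<uint8_t>& plaintext,
                        const std::vector<uint8_t>& salt,
                        unsigned                    c )
{
    std::vector<uint8_t> msg( plaintext );
    msg.insert( msg.end(), salt.begin(), salt.end() );         // plaintext + salt
    Digest u = hf( msg );                                       // U1

    for ( unsigned i = 1; i < c; i++ )
    {
        msg.assign( plaintext.begin(), plaintext.end() );
        msg.insert( msg.end(), u.begin(), u.end() );            // plaintext + U(n-1)
        u = hf( msg );                                          // Un
    }
    return u;                                                   // assumed: H = Uc
}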
[SDD-17] Diagnostics
The Diagnostics subsystem is responsible for monitoring and self-testing
the system. This includes features such as power-on self-tests and metrics.
[SDD-72] POST
During startup, the system performs one or more power-on self-tests
(POST). If any of the self-tests fail, then
• The ePOST_FAILURE alert is raised
The POST tests are platform specific. They are located under the
following directory trees: src/Ajax/Main/_plat_xxxx.
Be aware that the simulator has command-line arguments that can be
used to simulate a POST failure.
[SDD-73] Metrics
The metrics module does the following:
[SDD-18] Drivers
There is a single driver thread. The driver thread shall have higher priority
than the UI thread and the control algorithm thread. Drivers that have
strict time constraints shall execute in the driver thread.
The driver relies on an HAL interface defined in the header file src/
Driver/AIO/HalSingle.h. A platform-specific implementation of the HAL
interface is located at src/Driver/AIO.
A mocked HAL driver (using a model point for the ADC bit values) is provided with the analog driver for use in the functional simulator. The mocked
HAL driver is located at src/Driver/AIO/Simulated. The source code for
the driver is located at src/Driver/AIO/Ajax.
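A generic sketch of this layering is shown below; it is not the contents of HalSingle.h, and all names and the scaling formula are illustrative assumptions. The point is that the platform-independent driver code calls one small HAL function, and the simulator build simply links a mocked implementation.

#include <cstdint>

// ---- HAL interface (illustrative) ----
// One function the analog driver needs; each platform provides its own definition.
uint32_t halAioSingleSample();

// ---- Target implementation (illustrative, one per BSP) ----
// uint32_t halAioSingleSample() { return readAdcChannelFromMcu(); }   // hypothetical

// ---- Simulated implementation (illustrative) ----
// In the functional simulator the "ADC bits" come from a settable test value
// (in the real project, from a model point) instead of hardware.
static uint32_t simulatedAdcBits_ = 2048;
uint32_t halAioSingleSample() { return simulatedAdcBits_; }

// ---- Platform-independent driver code (illustrative) ----
// Converts raw ADC bits to hundredths of a degree; the scaling is made up.
int32_t readSpaceTemperature()
{
    uint32_t bits = halAioSingleSample();
    return (int32_t) ((bits * 10000) / 4096) - 4000;   // hypothetical scaling
}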
• src/Driver/DIO/STM32
• src/Driver/DIO/Arduino
• src/Driver/DIO/Simulated
• src/Driver/DIO/Arduino
• src/Driver/DIO/Simulated
The concrete driver for the STM32 microcontroller family uses the ST
HAL I2C interfaces for the underlying I2C bus implementation, specifically
using the ST Cube MX generated code for configuration and initialization.
The concrete driver for the Adafruit Grand Central board uses the
Arduino framework’s I2C driver.
For the functional simulator, the EEPROM functionality is simulated at
a higher layer (i.e., the Cpl::Persistent::RegionMedia layer).
The abstract I2C driver interface is located at
• src/Driver/I2C
• src/Driver/I2C/Arduino
The LCD controller’s 4-line, 8-bit serial interface is used. Data is only
written to the LCD controller. The following signals are used:
• CSX—SPI chip select
The concrete driver for the STM32 and Atmel SAM51x microcontroller
families uses the STM32 and Arduino framework implementations of the
following drivers:
• Driver::DIO::Out
• Driver::SPI
• Driver::DIO::Pwm
• Driver::Button::PolledDebounced
• Driver::LED::RedGreenBlue
• src/Driver/PicoDisplay/Arduino
• src/Driver/PicoDisplay/TPipe
The existing driver uses the LHeader pattern for decoupling the
interface from the hardware platform.
For the functional simulator, the PWM outputs to the display board are
simulated using the TPipe implementation of the Driver::PicoDisplay
driver.
For the functional simulator, the CPL driver’s existing simulated
implementation, which incorporates model points to mock the driver
interface, will be used.
The driver is located at
• src/Driver/DIO
• src/Driver/DIO/STM32
• src/Driver/DIO/Arduino
• src/Driver/DIO/Simulated
• src/Driver/LED/Pimoroni/RedGreenBlue.h
• src/Driver/SPI/Arduino
in the frame is a command verb. The command verb and any additional
frame data are application specific. The command verb must start
immediately after the SOF character (with no leading whitespace).
A separate C# Windows application, simulator.exe, is used to
simulate the physical display. The simulator.exe application uses bit
maps to emulate the LCD display and the RGB LED. Dialog buttons in the
application are used to emulate the four physical momentary buttons. The
buttons can be pressed using the mouse or the keyboard. The keyboard
option supports multi-button pressed combinations. The GM6000
functional simulator connects to the simulator.exe application via TCP
sockets using the TPipe protocol to transfer data and button events. Note
that the backlight brightness is not simulated.
The TPipe driver is located at
• src/Driver/TPipe
• src/Driver/LED/TPipe/RedGreenBlue.h
• src/Driver/Button/TPipe/Hal.h
Note: This decision should be revisited for future GM6000 derivatives that
have a different display or a more sophisticated UI. The Pimoroni library
has the following limitations that argue against using it as a foundational component:
[SDD-20] Heating
The heating subsystem is a collection of several components. The core
algorithm for heating is a fuzzy logic controller.
[SDD-63] Supervisor
The heating supervisor is responsible for the top-level control of the
heating algorithm. Specifically, the heating supervisor does the following:
• It runs the fuzzy logic controller on a periodic basis.
Note that the tuning and selection of the fuzzy logic controller
parameters are TBD while waiting for a representative sample unit to be
built and for room and temperature testing to be done.
The following state machine is used to manage the heating algorithm
at run time. Figure M-3 is a diagram of the state machine. The canonical
state diagram for it is located at src/Ajax/Heating/Supervisor/Fsm.cdd.
[SDD-85] IO
The heating supervisor operates on logical hardware inputs and outputs
via model points. The heating IO component is responsible for converting
the model point values to and from hardware signals. This includes the
following:
• The heater and fan PWM model points (cmdHeaterPWM,
cmdFanPWM) are monitored for change, and the
hardware outputs are updated on a value change.
• The hardware-safety-limit-tripped input signal
is sampled periodically and is used to update the
hardware safety limit model point (hwSafetyLimit).
The hardware input is sampled or updated at four times
the rate that the heating supervisor executes at.
• The heating IO component executes in the same thread
as the heating supervisor.
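An illustrative sketch of that behavior follows; the accessor names are hypothetical stand-ins for the real model points and HAL calls, and this is not the project's code.

#include <cstdint>

// Hypothetical accessors standing in for the real model points and HAL calls
struct HeatingIoDeps
{
    uint32_t (*readCmdHeaterPwm)();            // cmdHeaterPWM model point
    uint32_t (*readCmdFanPwm)();               // cmdFanPWM model point
    void     (*writeHeaterPwmHw)( uint32_t );
    void     (*writeFanPwmHw)( uint32_t );
    bool     (*sampleSafetyLimitHw)();         // hardware safety-limit-tripped signal
    void     (*writeHwSafetyLimitMp)( bool );  // hwSafetyLimit model point
};

// Executed in the heating supervisor's thread, at four times the supervisor's rate.
void heatingIoPeriodicTick( const HeatingIoDeps& io )
{
    static uint32_t lastHeater = UINT32_MAX;
    static uint32_t lastFan    = UINT32_MAX;

    // Outputs: only touch the hardware when the commanded value changed
    uint32_t heater = io.readCmdHeaterPwm();
    if ( heater != lastHeater ) { io.writeHeaterPwmHw( heater ); lastHeater = heater; }

    uint32_t fan = io.readCmdFanPwm();
    if ( fan != lastFan ) { io.writeFanPwmHw( fan ); lastFan = fan; }

    // Input: sample the safety limit signal and publish it to its model point
    io.writeHwSafetyLimitMp( io.sampleSafetyLimitHw() );
}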
src/Ajax/SimHouse/
src/Ajax/Heating/Simulated/
[SDD-21] Logging
The CPL C++ logging framework (Cpl::Logging) was selected for the
logging interface. The CPL logging framework provides only the framework; it defers the definition of logging categories and message identifiers to the application. In addition, the application is responsible for storing (and subsequently retrieving) the log entries to and from persistent storage. The CPL logging framework identifies each log entry by a category and a sub-message ID, where category identifiers are unique and message IDs are only unique within a category. Table M-1 shows the top-level logging
categories.
[SDD-22] OS
The operating system for the target hardware is the open source FreeRTOS.
Only the basic kernel (i.e., the task scheduler) is used. For the functional
simulator (and automated testing), the operating systems are Windows and Linux.
[SDD-23] OSAL
The Operating System Abstraction Layer is provided by the System
Services subsystem.
[SDD-55] Records
The application data is persistently stored using records. A record is the
unit of atomic read/write operations to and from persistent storage. The
CPL C++ class library’s persistent storage framework is used. In addition,
the library’s model point records are used, where each record contains one
or more model points.
All records are mirrored. That is, two copies of the data are stored
to ensure that no data is lost when there is a power failure during a write operation.
All the records use a 32-bit CRC for detecting data corruption.
Separate records—instead of a single record—are used to insulate the
data against data corruption. For example, the metrics record is updated
multiple times per hour, and, consequently, has the highest probability of
data corruption when compared with the personality record that is written
once. The application data is broken down into the following records:
• User settings
• Metrics
The concrete record instances are defined per project variant (e.g., Ajax
vs. Eros), and the source code files are located at
src/Ajax/Main
src/Eros/Main
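The mirrored-record idea can be sketched as follows. The project uses the CPL persistent storage framework for this; the code below only illustrates why two copies plus a CRC survive a power loss during a write, and the CRC stand-in is deliberately not a real CRC-32 implementation.

#include <cstdint>
#include <cstring>

// Illustrative "storage" backed by RAM; the real design uses the off-board EEPROM.
static uint8_t g_storage[2][64];

// Toy CRC stand-in (NOT a real CRC-32; used only so the sketch is self-contained)
static uint32_t crcStub( const void* data, uint32_t len )
{
    const uint8_t* p   = (const uint8_t*) data;
    uint32_t       crc = 0xFFFFFFFFu;
    for ( uint32_t i = 0; i < len; i++ ) { crc = (crc >> 1) ^ (p[i] * 2654435761u); }
    return crc;
}

struct RecordImage
{
    uint32_t crc;            // CRC over 'payload'
    uint8_t  payload[32];    // the record's data (user settings, metrics, ...)
};

// Write: update copy A, then copy B.  If power fails mid-write, at most the copy
// being written at that instant is corrupted; the other copy stays consistent.
void writeMirroredRecord( RecordImage img )
{
    img.crc = crcStub( img.payload, sizeof img.payload );
    memcpy( g_storage[0], &img, sizeof img );    // copy A first...
    memcpy( g_storage[1], &img, sizeof img );    // ...then copy B
}

// Read: prefer the first copy whose CRC checks out.
bool readMirroredRecord( RecordImage& out )
{
    for ( int copy = 0; copy < 2; copy++ )
    {
        memcpy( &out, g_storage[copy], sizeof out );
        if ( crcStub( out.payload, sizeof out.payload ) == out.crc )
        {
            return true;     // found an uncorrupted copy
        }
    }
    return false;            // both copies corrupted
}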
[SDD-28] UI
The UI is event driven. Event sources are as follows:
Figure M-5 is the state machine diagram that shows the life cycle of
the screen manager. It describes the behavior; it is not intended to be an
implementation.
The screen manager should be opened as soon as possible during the
startup sequence, so the splash screen is displayed (instead of a blank
screen).
During an orderly shutdown, the application should trigger the
UI-shutdown request as the first step in the shutdown sequence and then
close the screen manager as late as possible in the shutdown sequence.
[SDD-45] Events
The UI events are limited to discrete button events. From the GUI
wireframes, there are five different button events: one per discrete button
(x4) and one logical button, which is two keys (B and Y) pressed together.
The UI events are defined as a C-compatible enumeration.
This is because the event type is mapped to the screen manager’s
AjaxScreenMgrEvent_T using the LHeader pattern. The implementation
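For illustration, such a C-compatible enumeration might look like the following; the enumerator names are assumptions, not the project's actual identifiers.

// Five UI events: one per physical button plus the logical B+Y combination.
// Kept as a plain C enum so it can be mapped to the screen manager's
// AjaxScreenMgrEvent_T type via the LHeader pattern.
typedef enum
{
    UI_EVENT_BUTTON_A,       /* hypothetical names */
    UI_EVENT_BUTTON_B,
    UI_EVENT_BUTTON_X,
    UI_EVENT_BUTTON_Y,
    UI_EVENT_BUTTON_B_AND_Y  /* logical button: B and Y pressed together */
} UiEvent_T;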
[SDD-46] Screens
The following sections detail the individual screens. The screens directly
access the Pimoroni graphics library. That is, they have a dependency on
the library (see section [SDD-19]).
The Driver::PicoDriver is used to implement the screen manager’s
DisplayApi.
All temperatures are displayed in degrees Fahrenheit.
Individual screens use hard-coded model point names when
consuming data from or updating data in the application. This is to
simplify the construction and implementation of the screen instances.
For screens that have dynamic data sourced from model points, the
screens subscribe for change notifications and refresh themselves on
change.
None of the individual screens have unit test projects. The screens are
manually tested as they are integrated into the Ajax or Eros projects.
[SDD-81] Provisioning
The Eros application provides a TShell debug console command to
provision each heater unit. All the provisioning parameters are stored in
the personality record using model points. The following table enumerates
the data that is required to be provisioned.
Example:
[SDD-75] Tests
The following sections provide brief descriptions of the individual tests
that the Eros application can perform. The Eros application supports the
following types of tests:
For the asynchronous tests, the MApp framework and the TShell
command to start and stop tests are located under the src/Cpl/MApp/
directory. The TShell command to manage asynchronous tests is mapp.
Usage:
mapp
mapp start <mapp> [<args>...]
mapp stop <mapp>|ALL
mapp ls
Example:
[SDD-76] Button
The button test is a UI test. The Eros home screen displays which button
was pressed last for buttons A, B, X, and Y. The button state is also output
to the debug console. The logical key combination B+Y is used to transition
to the LCD test screen.
The Eros home screen code is located at src/Eros/Ui/Home.
[SDD-77] Cycling
The cycling test is an asynchronous test that is used to exercise the heating
element and fan. The test allows the engineer to specify heater and fan
on/off times as well as PWM values.
The mapp TShell command is used to start/stop the cycling test. The command
usage is
[SDD-78] EEPROM
The EEPROM test is a synchronous test that is used to verify the operation
of the off-board EEPROM chip. The command can also be used to erase
the entire EEPROM contents. The TShell command for the test is nv. The
source code is located at src/Driver/NV/_tshell/Cmd.h|.cpp.
Usage:
nv ERASE\n
nv read <startOffSet> <len>
nv write <startOffset> <bytes...>
nv test (aa|55|blank)
Example:
$$> nv read 0 10
hws
[SDD-79] LCD
The LCD test is a UI test that is used to visually inspect the LCD for dead
pixels. The test is launched by pressing the B+Y buttons on the Eros home
screen. The user is walked through a series of solid color screens by
pressing any button. After all colors have been displayed, the next button
press returns the user to the home screen.
[SDD-83] PWM
The PWM test is a synchronous test that is used to directly command the
heater, fan, and backlight. The TShell command for the test is pwm. The
source code is located at src/Ajax/TShell/Pwm.h|.cpp.
Usage:
Example:
rgb off
rgb <red> <green> <blue>
rgb <brightness>
Example:
Change Log
Version Date Updated By Changes
APPENDIX N
Overview
The GM6000 is a Digital Heater Controller (DHC) that can be used in
many different form factors and heating capacities. The specifics of the
unit’s physical attributes will be provisioned into the DHC when the final
assembly of the DHC is performed. The GM6000's software executes on a microcontroller that is part of the unit's Control Board (CB). Figure N-1
summarizes the system architecture.
Glossary
Term Definition
Document References
Document # Document name Version
Fuzzification
There are two input membership functions: one for absolute temperature
error and one for differential temperature error. Both membership
functions use triangle membership sets as shown in Figure N-2.
Absolute error is calculated as
Ymax = 1000
Jerror = 4
Jderror = 16
error = 0.8°C = 80 ; Below setpoint
errors = 320
dError = -0.04C = -4 ; Moving towards the setpoint
dErrors = -64
Set mid points = [-2000, -1000, 0, 1000, 2000]
then m1[] = membership(errors) = [0, 0, 680, 320, 0] and m2[] = membership(dErrors) = [0, 64, 936, 0, 0]
Fuzzy Inference
Fuzzy inference is the process of mapping fuzzified inputs to an output
using fuzzy logic. Table N-1 is used with Zadeh MAX-MIN relations to
generate the fuzzy output vector. The inference sequence is as follows:
Table N-1. Fuzzy inference rules
                            m2[] (dError)
                    NM    NS    ZE    PS    PM
m1[] (error)   NM   PM    PM    PM    PS    ZE
               NS   PM    PM    PS    ZE    NS
               ZE   PM    PS    ZE    NS    NM
               PS   PS    ZE    NS    NM    NM
               PM   ZE    NS    NM    NM    NM
out[] = [0, 0, 0, 0, 0]
m1[] = membership(errors) = [0, 0, 680, 320, 0]
m2[] = membership(dErrors) = [0, 64, 936, 0, 0]
For m1[0]:
min[] = [0, 0, 0, 0, 0]
max = 0
maxidx = 0
rules[0][maxidx] = NM = 0
out[] = [0, 0, 0, 0, 0]
For m1[1]:
min[] = [0, 0, 0, 0, 0]
max = 0
maxidx = 0
rules[1][maxidx] = NM = 0
out[] = [0, 0, 0, 0, 0]
For m1[2]:
For m1[3]:
For m1[4]:
min[] = [0, 0, 0, 0, 0]
max = 0
maxidx = 0
rules[4][maxidx] = ZE = 2
out[] = [0, 320, 680, 0, 0]
Defuzzification
The output of the control algorithm is a change to the proportional output
value that represents the heater output capacity.
The defuzzified output value is obtained by finding the centroid
point of the function that is the result of the multiplication of the output
membership function and the output vector from the fuzzy inference
phase (see Figure N-3).
A positive defuz value indicates that there is too much heating capacity,
so the value needs to be multiplied by -1 to invert the polarity so that the
final result represents the needed change to the requested capacity. outfinal
represents a delta change that should be applied to the actual output
signal, where a positive value indicates that more capacity is needed.
defuz = ( ∑ i=0..4 out[i] × K[i] ) / ( ∑ i=0..4 out[i] )
where K[] is the relative output strengths for the membership sets:
K[] = [-20, -10, 0, 10, 20]

then

defuz = ( 0 × (-20) + 320 × (-10) + 680 × 0 + 0 × 10 + 0 × 20 ) / ( 0 + 320 + 680 + 0 + 0 )

defuz = -3200 / 1000 = -3.2 ; less than zero → not enough capacity

outfinal = 3.2 ; add the value to the controller's commanded capacity output
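The sketch below pulls the worked example together and reproduces the numbers above. It is an illustration of the algorithm as described in this appendix, not the production heating code; in particular, ties in the MAX step resolving to the lowest index and rule strengths being accumulated by addition are assumptions that happen to match the worked numbers, and saturation at the outermost membership sets is omitted.

#include <cstdint>
#include <cstdio>

// Membership set indexes used throughout: NM, NS, ZE, PS, PM
enum { NM = 0, NS = 1, ZE = 2, PS = 3, PM = 4 };

static const int32_t Ymax            = 1000;
static const int32_t setMidPoints[5] = { -2000, -1000, 0, 1000, 2000 };
static const int32_t K[5]            = { -20, -10, 0, 10, 20 };  // relative output strengths
static const int     rules[5][5]     =   // Table N-1: row = error set, column = dError set
{
    { PM, PM, PM, PS, ZE },   // error NM
    { PM, PM, PS, ZE, NS },   // error NS
    { PM, PS, ZE, NS, NM },   // error ZE
    { PS, ZE, NS, NM, NM },   // error PS
    { ZE, NS, NM, NM, NM }    // error PM
};

// Triangle membership: peak of Ymax at the set's midpoint, reaching zero at the
// adjacent midpoints (spaced 1000 apart).
static void membership( int32_t x, int32_t m[5] )
{
    for ( int i = 0; i < 5; i++ )
    {
        int32_t dist = x > setMidPoints[i] ? x - setMidPoints[i] : setMidPoints[i] - x;
        m[i] = dist >= 1000 ? 0 : Ymax - dist;
    }
}

int main()
{
    int32_t m1[5], m2[5], out[5] = { 0, 0, 0, 0, 0 };

    membership( 320, m1 );   // errors  = 320  -> [0, 0, 680, 320, 0]
    membership( -64, m2 );   // dErrors = -64  -> [0, 64, 936, 0, 0]

    // Zadeh MAX-MIN inference
    for ( int i = 0; i < 5; i++ )
    {
        int32_t maxVal = 0;
        int     maxIdx = 0;
        for ( int j = 0; j < 5; j++ )
        {
            int32_t v = m1[i] < m2[j] ? m1[i] : m2[j];      // MIN
            if ( v > maxVal ) { maxVal = v; maxIdx = j; }   // MAX (ties keep the lowest index)
        }
        out[rules[i][maxIdx]] += maxVal;
    }
    // out[] is now [0, 320, 680, 0, 0]

    // Defuzzification: centroid of out[] weighted by K[], then invert the polarity
    int32_t num = 0, den = 0;
    for ( int i = 0; i < 5; i++ ) { num += out[i] * K[i]; den += out[i]; }
    double outFinal = den ? -(double) num / (double) den : 0.0;

    printf( "out[] = [%d, %d, %d, %d, %d], outFinal = %.1f\n",
            (int) out[0], (int) out[1], (int) out[2], (int) out[3], (int) out[4], outFinal );
    return 0;   // prints: out[] = [0, 320, 680, 0, 0], outFinal = 3.2
}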
Change Log
Version Date Updated By Changes
APPENDIX O
Software C/C++
Embedded Coding
Standard
This appendix is the C/C++ embedded coding standard document used
on the GM6000 Digital Heater Controller project. It provides an example of
a coding standard document that you can use as template. The document
content is sparse, and the formatting is minimal, but the structure of the
template does address all the areas you should be paying attention to and
thinking about.
Overview
This document defines the C++ coding standards and style for developing
embedded software. Coding standards and coding style are often mixed
together, which is unfortunate.
Coding standards should be reserved for coding practices that bear
directly on the reliability of code. This means that violating a coding
standard puts your code at risk for errors. Because it is unacceptable,
for example, for embedded software to periodically reboot because of
exhausted or squandered resources (e.g., no memory available), adhering
to coding standards specifically tailored to working in an embedded
environment supports the robustness and reliability of the software.
Coding style guidelines, on the other hand, address issues like
formatting and naming conventions. Deviating from the style guidelines
doesn’t necessarily introduce errors, but it does affect the readability and
maintainability of your code, and the establishment of a common coding
style can facilitate
Scope
This standard applies to source code that is developed for new in-house
projects (i.e., "greenfield code"). Code developed for legacy code bases shall follow the legacy code's existing coding standards. This standard does not apply to third-party code bases.
Glossary
Term Definition
Camel Case Camel case is used to eliminate white space in symbol names by
using mixed case to separate words. The first word starts with a
lowercase letter, for example, camelCase.
Pascal Case Pascal case is used to eliminate white space in symbol names by
using mixed case to separate words. The first word starts with an
uppercase letter, for example, PascalCase.
Snake Case Snake case is used to eliminate white space in symbol names
by using an underscore to separate words. Snake case is all
lowercase, for example, snake_case.
Namespace Case Namespace case is a variant of snake case that is used when prefixing a C symbol with its logical namespace. Namespace case uses an underscore to separate words like snake_case, but the first letter of each namespace is uppercase, for example, Foo_Bar_hello_world, where Foo::Bar is the logical namespace.
Document References
Document # Document name Version
Coding Standards
Standards tagged with an REQ label must always be followed, and any
deviations must be documented in the source code. Standards without the
REQ label are strongly recommended, but deviation is allowed (without
documenting the deviation). Some required standards explicitly call out
exceptions. For these cases, no deviation documentation is required.
Language (REQ)
All C++ code shall be compliant with the C++11 standard (ISO/IEC
14882:2011). All C code shall be compliant with the C11 standard (ISO/IEC
9899:2011). Language features defined in a newer language standard shall
not be used. Note that these older standards are intentionally used to
support a broader range of tools and legacy platforms.
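One way to catch a build that is not configured for the required language standards is a compile-time guard such as the sketch below. The guard is not mandated by this standard and only checks that at least C++11 or C11 is in effect; the __cplusplus and __STDC_VERSION__ macros, and their 201103L and 201112L values, are defined by the respective language standards.

// Sketch only: fail the build if the translation unit is not being compiled
// as at least C++11 (for C++ files) or C11 (for C files).
#ifdef __cplusplus
    #if __cplusplus < 201103L
        #error "C++ code must be compiled as C++11 (ISO/IEC 14882:2011)"
    #endif
#else
    #if !defined(__STDC_VERSION__) || (__STDC_VERSION__ < 201112L)
        #error "C code must be compiled as C11 (ISO/IEC 9899:2011)"
    #endif
#endif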
class Foo
{
public:
    /// Constructor
    Foo( .... );

    /// Destructor
    virtual ~Foo();

public:
    /// A virtual do something function
    virtual void doSomething( .... );

    ...
};
• Destructor
• Copy constructor
• Move constructor
class Foo
{
private:
    char* m_string;

public:
    /// Constructor
    Foo( const char* src = "" )
        : m_string( nullptr ) {
        if ( src ) {
            m_string = new char[strlen( src ) + 1];
            strcpy( m_string, src );
        }
    }
See https://en.cppreference.com/w/cpp/language/rule_of_three
    /// Destructor
    ~Foo() {
        delete[] m_string;
    }
    ...
        return *this;
    }
};
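The copy constructor and the body of the copy-assignment operator fall on pages that are not reproduced above. A minimal sketch of what the Rule of Three requires for this class is shown below; it is illustrative only, not the original listing, and it mirrors the constructor's handling of a null pointer (strlen and strcpy come from <cstring>).

    /// Copy constructor (illustrative sketch)
    Foo( const Foo& other )
        : m_string( nullptr ) {
        if ( other.m_string ) {
            m_string = new char[strlen( other.m_string ) + 1];
            strcpy( m_string, other.m_string );
        }
    }

    /// Copy-assignment operator (illustrative sketch)
    Foo& operator=( const Foo& other ) {
        if ( this != &other ) {
            char* copy = nullptr;
            if ( other.m_string ) {
                copy = new char[strlen( other.m_string ) + 1];
                strcpy( copy, other.m_string );
            }
            delete[] m_string;
            m_string = copy;
        }
        return *this;
    }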
#ifndef Namespace_ClassName_h_
#define Namespace_ClassName_h_
...
#endif
#ifndef Cpl_System_Win32_Thread_h_
#define Cpl_System_Win32_Thread_h_
...
#endif
Coding Style
The intent of the style guide is to establish a style baseline that all
developers are required to follow. This baseline provides consistency
across source code files and aids in reading and maintaining the code.
In addition, following a common programming style will enable the
construction of tools that leverage a knowledge of these standards to assist
in the development and maintenance of the code.
This guide specifies only basic requirements; the individual
programmer is allowed to impose their own style preferences on top of
these requirements. There are only two absolute rules to observe when
it comes to creating your own coding style. The first rule is consistency.
Establish a style and stick to it. The second is tolerance. When you must
modify code written by others that has a different style, adopt their style,
and do not convert the code to your style.
Style rules tagged with an REQ label should always be followed, and any
deviations will need to be justified during the code review process. Style rules
without the REQ label are strongly recommended, but deviation is allowed
(without any justification). Some required style rules explicitly call out
exceptions. For these cases, no deviation documentation in the source is required.
Comments
Header Files (REQ)
Header files must be completely documented. This means every class,
method, and data member must have comments. Header files describe the
interfaces of the system and, as such, should contain all the information a
developer needs to use and understand the interface.
Header file comments shall not be duplicated in or copied to the .c|.cpp files.
C|CPP Files
The quantity and quality of the comments in a CPP file are left to the
individual developers to decide. Comments in the C|CPP files should
be limited to implementation detail, not the semantics and usage of the
functions, methods, classes, etc. (as these are commented in header files).
File Organization
Organize by Namespace (REQ)
Organize your files by component dependencies, not by project. That
is, do not create your directory structure to reflect a top-down project
decomposition. Rather, organize your code by namespaces where nested
namespaces are reflected as subdirectories in their parent namespace
directory. By having the dependencies reflected in the directory structure, it is
a quick and visual sanity check to avoid undesired and cyclical dependencies.
File names (.h|.c|.cpp files) do not have to be globally unique. The
file names only need to be unique within a given directory and namespace.
Directories and namespaces are your friends when it comes to naming
because they provide a simple mechanism for avoiding future naming
collisions.
The C programming language standard does not support namespaces.
However, the concept of a namespace can still be implemented in C by
applying the namespace-case convention described in the glossary:
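The declarations below are an illustrative sketch of the convention; the function names are hypothetical and are patterned after the glossary's Foo_Bar_hello_world example.

/* C has no namespace keyword, so the logical namespace Foo::Bar is encoded
   as a prefix on each public symbol (namespace case).                      */
void Foo_Bar_hello_world( void );     /* logical name: Foo::Bar::hello_world   */
void Foo_Bar_goodbye_world( void );   /* logical name: Foo::Bar::goodbye_world */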
README.txt (REQ)
Each namespace directory shall contain a README.txt file that describes
the purpose or content of the namespace. The file should provide
descriptions, comments, usage semantics, etc., that span multiple files and
classes in the namespace.
The file is required to include a Doxygen-readable comment that
describes the namespace. For example (a minimal sketch of such a comment,
using Doxygen's @namespace command):

/** @namespace Foo::Bar
    A brief description of the purpose and content of the Foo::Bar
    namespace goes here.
*/
Files (REQ)
The file names of the .h and .cpp files that define a C++ class shall match
the name of the contained class (including case). If multiple classes are
contained in a single file, the recommendation is to name the file after
the primary class in the file.
src
└───Storm
    ├───Component
    │   ├───Equipment
    │   │   ├───Stage
    ├───Dm
    │   └───_0test
    ├───Thermostat
    │   ├───Main
    │   │   ├───_adafruit_grand_central_m4
    │   │   └───_win32
    │   ├───SimHouse
    │   └───_file_logger
    ├───TShell
    ├───Type
    └───Utils
File: Storm/Component/AirFilterMonitor.h
#include "Storm/Component/Base.h"
#include "Storm/Dm/MpSimpleAlarm.h"
#include "Storm/Dm/MpVirtualOutputs.h"
File: Storm/Component/Equipment/Cooling.h
#include "Storm/Component/Control.h"
#include "Storm/Component/Equipment/StageApi.h"
Naming
In general, C++ names use camel and Pascal case, while C names use
snake and namespace case. Refer to the glossary for definitions of the case
terminology.
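As a quick illustration, the identifiers below follow the glossary's case definitions; the names themselves are hypothetical, and the m_ member prefix mirrors the earlier Foo example.

// C++: Pascal case for the class name, camel case for methods and variables
class HeaterMonitor
{
public:
    void setMaxTemperature( int maxTemperature );
private:
    int m_maxTemperature;
};

// C: snake case names with a namespace-case prefix for the logical
// namespace Driver::Heater
void Driver_Heater_set_max_temperature( int max_temperature );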
Formatting
Indenting and Spacing (REQ)
Tab stops will be set to 4, and spaces shall be used for indenting. It is
recommended that developers configure their editor and IDE to insert
spaces for tabs.
Braces (REQ)
The Allman brace style and indenting shall be used.25 For example:
while (x == y)
{
    something();
    somethingElse();
}
finalThing();
25. See https://en.wikipedia.org/wiki/Indentation_style
Change Log
Version Date Updated By Changes
APPENDIX P
GM6000 Software Requirements Trace Matrix
This appendix illustrates how to trace GM6000 Digital Heater Controller
project software requirements to software design artifacts. The tracing
includes forward tracing (i.e., from requirements down into software
architecture, design, etc.) as well as backward tracing (i.e., from software
design and architecture up to requirements). The document does not
include tracing requirements to test protocols or test cases.
Requirements tracing can be done manually, or it can be done using
a requirements management tool (e.g., DOORS or Jama). The advantage of
using a requirements management tool is that it is good at handling the
many-to-many relationships that occur when tracing requirements and that
it provides both forward and backward traces.
In the absence of something better, use this document as a template
for tracing your requirements to software design artifacts. The formatting
is minimal, but the structure of the template addresses all the areas you
should be paying attention to and thinking about.
Overview
This document captures the forward and backward tracing of software
requirements for the GM6000 Digital Heater Controller to and from
software architecture and design.
Glossary
Term Definition
Document References
Document # Document name Version
Change Log
Version Date Updated By Changes
APPENDIX Q
GM6000 Software Bill of Materials
Overview
This document enumerates all the third-party packages and source code
that are contained in the publicly released software for the GM6000 Digital
Heater Controller.
Glossary
Term Definition
Document References
Document # Document Name Version
Packages
The following table lists all the third-party software and packages that get
compiled into the release executables for the GM6000 project. The license
column should be considered a required column in this table.
Versions
The following table lists the versions (and their location in the repositories)
of packages listed in the previous section.
Change Log
Version Date Updated By Changes
APPENDIX R
GM6000 Software Release Notes
This appendix provides an example of a release notes document for formal
releases (e.g., candidate and gold releases) for the GM6000 Digital Heater
Controller project. In the absence of something better, this document can
be used as a starting point for your release notes documentation.
Overview
The GM6000 product is a Digital Heater Controller (DHC) that can be
used in many different physical form factors and heating capacities. The
specifics of the final unit’s physical attributes will be provisioned into the
DHC during the manufacturing process.
The document contains the release notes for all candidate releases.
The intended audience is internal stakeholders only. The individual
candidate releases are listed latest first.
Glossary
Term Definition
Document References
Document # Document Name Version
RC2
Date: 2/27/2024
Summary: Bug fixes and usability improvements
Build Number: 2000013
Versions: Ajax: 1.0.0
Eros: 0.0.1
Git Label: MAIN-2000013
957d1e33d57abdc97cefbd91e3b839d98bdff4bb
Artifacts: ajax-alpha.bin|elf
ajax-simulator.exe
eros-alpha.bin|elf
eros-simulator.exe
Changes
• #125—Added code to flash LED indicators. This is a
usability enhancement so users do not need to rely
solely on LED colors to determine state.
Bug Fixes
• #123—Fixed a problem where LED does not flash when
there is a hard error
Known Issues
• The heating algorithm configuration (provisioned
during manufacturing) still needs to be tuned to the
final physical components (e.g., heater elements) and
mechanical layout.
RC1
Date: 1/27/2024
Summary: First candidate release
Build Number: 2000008
Versions: Ajax: 1.0.0
Eros: 0.0.1
Git Label: MAIN-2000008
0a33083bcc57592e9d3d06c10277f8b3750580ec
Artifacts: ajax-alpha.bin|elf
ajax-simulator.exe
eros-alpha.bin|elf
eros-simulator.exe
Changes
• n/a
Bug Fixes
• n/a
Known Issues
• The heating algorithm configuration (provisioned
during manufacturing) still needs to be tuned to the
final physical components (e.g., heater elements) and
mechanical layout.
Change Log
Version Date Updated By Changes
Index

A
Adafruit, 10, 217, 218, 285, 302, 321, 322, 340, 538, 551, 558, 572, 576, 661
Agile, 5, 6, 62, 69, 158, 160, 162, 267, 520, 534, 546
Ajax application, 113, 114, 291, 298, 308, 313
  console password, 316–318
  debug version, 318, 323
  functional simulator, 291, 316
    command-line options, 320, 321
    epc repository, 319
    go.bat directory, 319, 320
    provisioning, 319
    simulated display, 318, 319, 321
  hardware, 321, 322
  provisioning, 314–316, 337, 604, 605
Arduino, 227
Arduino framework, 9, 217, 236, 249, 256, 261, 548, 574
Asynchronous test, 605–607

B
BA, 21, 22, 30, 455, 456, 470, 535
Blower Assembly (BA). See BA
Board Support Package (BSP). See BSP
Bootloader, 55, 222–223, 486, 556
BSP, 36, 37, 45, 228, 391, 475, 487, 536, 548, 556
  board schematic, encapsulate, 216
  bootloader, 55, 222–223, 486, 556
  C/C++ runtime code, 107, 214, 221
  compiler toolchain, 214
  creation, 220
  microcontroller hardware, 213
Build-all scripts
  characteristics, 124
  CI server, 125, 135
  directories, 125, 126
  GM6000, 125
  Linux build_all script, 133–134
  naming convention, 126, 127
  Windows build_all script, 129–132
Software requirements trace matrix (cont.)
  requirements management tool, 645
  Software Detailed Design, 654, 655
  Software architecture, 648, 655, 656
Software tasks
  code review, 156
  detailed design, 155
  elements, 153, 154
  granularity, 158
  requirements, 154
  source code/unit test, 155
  tickets/agile, 160, 161
  types, 159
Software timers, 379, 383–384
src/Cpl
  Checksum namespace, 366
  Cpl::Container namespace, 366
  Cpl::Io, 369
  Cpl::Itc, 370, 371
  Cpl::Json, 372
  Cpl::Logging, 373, 374
  Cpl::MApp, 375
  Cpl::Math, 375
  Cpl::Memory, 375
  Cpl::Persistent, 376
  Cpl::System, 377–385
  Cpl::Text, 386–391
  data model, 367–368
  directory structure, 365
SRS, 17, 55, 68, 138, 142, 148, 149, 449, 510, 518, 647–648
State machine tools, 310–312
ST HAL library, 235, 259
STM32 Cube IDE, 301
STM32 Cube MX, 301
STM32 setDutyCycle() method, 235–237
ST NUCLEO, 217, 285, 288, 322, 392, 556, 557
Subsystems, SWA
  alert management, 485, 486
  application, 486
  Board Support Package, 36, 37, 45, 228, 391, 475, 487, 536, 548, 556
  bootloader, 55, 222–223, 486, 556
  console, 487
  crypto, 488
  cybersecurity, 48, 49, 70, 500, 501, 523
  data integrity, 496–498
  data model, 488–490
  diagnostics, 490
  drivers, 490
  engineering testing, 505, 506
  file/directory organization, 503, 504
  functional simulator, 498–500
  graphics library, 491
  interdependencies, 41
  interfaces, 494, 495
  internationalization, 505
  localization, 505
U
UART, 11, 32, 214–216, 221, 263, 271, 369, 475, 478, 482, 536, 549, 559
UI, 39, 494
  events, 594, 597, 598
  screen manager, 595–597
  screens
    about screen, 599
    dynamic data, 599
    edit screen, 599
    error screen, 600
    hard-coded model, 598
    home screen, 600, 601
    navigation, 594
    Pimoroni graphics library, 598
    shut-down screen, 601
    splash screen, 601
    state machine diagram, 595, 596
    types, 594, 595
  status indicator, 601, 602
Units Under Test (UUTs). See UUTs
Unit testing, 73, 97, 160, 187, 198, 255, 505, 539
Universal asynchronous receiver-transmitter (UART). See UART
User interface (UI). See UI
UUTs, 437–439, 444, 448

V
validate_cc() method, 422
Validation, 510
Verification, 510, 522
Visual Studio compiler, 129, 287, 291, 292, 355, 356, 428, 431

W, X, Y, Z
Waterfall, 5, 17, 29, 62, 70, 75, 162, 165, 520
Windows
  application builds, 290
  build directories, 296
  compiler configuration, 288–290
  functional simulator, 291–293
  hardware targets, 293, 294, 296
  source code, 288
  tools, 287, 288
Windows build_all script, 129–132
Wireless module (WM). See WM
Wireless Remote Temperature Sensor (WTS). See WTS
Wireless sensor (WS). See WS
WM, 22, 452, 455, 471, 478, 483
WS, 20, 205, 478, 501
WTS, 205, 471