
mbeddr User Guide

This is the mbeddr user guide. It targets users of mbeddr, not people who extend the mbeddr languages. We strive to cover all relevant
mbeddr functionalities, but as things change, some parts may not always be up-to-date. Please report problems, omissions or bugs to
our mailing list at mailto:[email protected]. When you report a problem, please make sure you include a Node Info for
the problem (see Section Utilities for details on what a Node Info is).

This user guide is written with the MPS documentation language. On the plus side, this affords us very close integration with source code
and models -- these things are always up to date. On the downside, we still have some issues in the LaTeX generator; the generated
LaTeX file, and especially the embedded figures, don't yet look very beautiful. Please bear with us. It will improve.

You can use mbeddr's full-text search utility: see Section Searching Models.

Finally, we want to emphasize that, in addition to this user guide, there is a lot of additional information. This includes various videos (at
http://mbeddr.com/ and at http://mbeddr.com/screencasts.html) as well as a number of articles and papers at
http://mbeddr.com/learn.html.

Table of Contents
1 mbeddr Concepts
1.1 JetBrains MPS
1.2 mbeddr
1.2.1 mbeddr Architecture
1.3 gcc, make and gdb
1.4 CBMC
1.5 PlantUML and Graphviz
1.6 Java
2 Installation and Setup
2.1 Gcc, Make and Gdb
2.2 Graphviz
2.3 Verification Tools
2.4 mbeddr
2.4.1 mbeddr IDE
2.4.2 mbeddr Plugin Distribution
2.4.3 mbeddr Source Installation
2.4.3.1 Introduction
2.4.3.2 Source Distribution Python Installer
2.4.3.3 Manual Source Installation
2.5 JetBrains MPS and Java
2.6 Adding Tools to the System Path
2.7 Required Tools and Versions
3 Fundamentals
3.1 The MPS UI
3.1.1 Overall Structure
3.1.2 Tools
3.2 Project Structure
3.2.1 Nodes, Roots and Concepts
3.2.2 Project, Solutions, Models, Folders, Roots
3.2.3 Using Languages and Devkits
3.2.4 Paths and Libraries
3.3 The Editor
3.3.1 Usability
3.3.2 Projection Modes
3.4 Keyboard Shortcuts
3.5 Version Control
3.5.1 Introduction
3.5.2 Supported Version Control Systems
3.5.3 Setting Up Version Control for a Project
3.5.4 Tools and Views
3.5.5 Common Version Control Operations
3.5.5.1 Update/Pull
3.5.5.2 Commit/Push
3.5.5.3 Conflicts/Merging
3.6 Utilities
3.6.1 Interop with Text
3.6.2 Searching Models
3.6.3 Opening Terminals
3.6.4 Find Usages
3.7 Cheat Sheet
3.8 Useful MPS Settings
4 mbeddr C vs. C99
4.1 mbeddr's Module System
4.2 Data Types
4.2.1 Boolean
4.2.2 User defined types
4.2.3 Integers
4.2.4 Type Size Configuration
4.3 Array Syntax
4.4 Various Additions
4.5 Unit Tests
4.6 Function Pointers
4.6.1 Function Types
4.6.2 Function References
4.6.3 Lambdas
4.7 Commenting
4.7.1 Statement Comments
4.7.2 Element Documentations
4.7.3 Margin Comments
4.7.4 Commenting Out Code
5 Generated Code
5.1 Efficiency
5.2 Readability
6 C Tooling
6.1 Building Binaries in the IDE
6.1.1 Build Configurations
6.1.2 Model Checking
6.1.3 Building in the IDE
6.2 Building Binaries from the Command-Line
6.2.1 Example Application
6.2.2 Writing Build Scripts
6.2.3 Building from Command-Line
6.3 Running Applications from MPS/mbeddr
6.3.1 Creating a Launch Configuration
6.3.2 Contents of the Launch Configuration
6.3.3 Console output
6.4 Debugging
6.4.1 Introduction
6.4.2 Starting a Debug Session
6.5 Hello, World
6.5.1 A new Project
6.5.2 New Model
6.5.3 Writing the Code
6.5.4 Build and Run
6.6 Accessing Libraries
6.6.1 Manual Header Import
6.6.2 Automatic Header Import
7 C Extensions
7.1 Static Evaluation
7.2 Units
7.2.1 Physical Units Basics
7.2.2 Unit Conversions
7.2.3 Generic Units
7.2.4 Stripping and Reintroducing Units
7.3 State Machines
7.3.1 Implementing a State machine
7.3.2 Interacting with Other Code -- Outbound
7.3.3 Interaction with Other Code -- Inbound
7.3.4 Hierarchical State Machines
7.3.5 Tabular Notation
7.3.6 State Machine Diagrams
7.4 Components
7.4.1 An Interface with Contracts
7.4.2 A first Component
7.4.3 Verifying Contracts Statically
7.4.4 Collaborating and Stateful Components
7.4.5 Mocks
7.4.6 Sender/Receiver Interfaces
7.4.7 Composite Components
7.4.8 Interface Types
7.4.9 Visualizing Components
7.4.10 Contract Verification
7.5 Decision Tables
7.5.1 Verifying the Decision Table
8 Process Support
8.1 Requirements
8.1.1 Overview
8.1.2 Requirements Basics
8.1.3 Visualizations and Tree Views
8.2 Tracing
8.2.1 Attaching and Removing Traces
8.2.2 Reverse Tracing
8.2.3 Trace Report Assessment
8.3 Assessments
8.4 Documentation Language
8.4.1 Basic Ideas
8.4.2 Target Platforms
8.4.3 Core Concepts
8.5 Code Review
8.5.1 Overview
8.5.2 Projection Modes
8.5.3 Review States
8.5.4 Assessments
8.6 Product Line Variability
8.6.1 Specifying Variability
8.6.2 Runtime Variability
8.6.3 Static Variability
9 Analyses
9.1 Introduction to Analyses
9.2 Analyzing variability
9.2.1 FM-Analyzer
9.2.2 CM-Analyzer
9.2.3 Modules-Analyzer
9.2.4 Modules and Solutions Variability Analyzer
9.3 Formal Verification with CBMC
9.3.1 Verification primer
9.3.1.1 Behind the Curtain
9.3.2 Basic Configuration
9.3.3 Robustness Checks at C-level
9.3.4 Robustness Checks of mbeddr-Extensions
9.3.4.1 Checking Decision Tables
9.3.4.2 Checking State Machines
9.3.5 Functional Verification
9.3.5.1 Assertions verification
9.3.5.2 Components Verification
9.3.6 Defining the Environment for Verification

1 mbeddr Concepts
1.1 JetBrains MPS
mbeddr relies on the JetBrains MPS language workbench. A language workbench is a tool that supports language engineering, the
activity of building, composing, extending and using languages. MPS can be seen as a framework on which mbeddr relies.

MPS is open source software licensed under Apache 2.0. It can be downloaded from http://jetbrains.com/MPS. mbeddr currently uses
MPS 3.1.4, although 3.1.5 should work as well. Later in 2014 we will move to the upcoming version 3.2.

Various tutorials exist, and the MPS documentation page (at https://www.jetbrains.com/mps/documentation/) contains links to most of
these. We want to highlight in particular the book by Fabien Campagne (http://www.amazon.com/The-MPS-Language-Workbench-Vol/dp/1497378656/).
Note that in order to use mbeddr, you don't have to understand the MPS language engineering facilities (this is only
necessary to extend mbeddr with new languages).

MPS relies on a projectional editor. Instead of using a parser that transforms a character sequence into a tree for further processing, in
MPS every editing gesture directly changes the tree. This has two important advantages:
flexible notations and language modularity.

Language Modularity Because a projectional editor does not use a grammar, the combination of independently developed languages
cannot become ambiguous. Potential ambiguities are delegated to the user to resolve. This way, arbitrary languages and language
extensions can be combined in one single program. mbeddr exploits this by providing a set of modular language extensions to C (and a
couple of other languages). If you are interested in the details of this approach, and of mbeddr in general, please refer to this paper:
http://voelter.de/data/pub/Voelter-GTTSE-MPS.pdf

Flexible Notations Since no parser is used, MPS can support essentially any notation. This includes classical text, but also prose (as in
this user guide), tables, math symbols and also diagrams. We exploit this extensively in mbeddr. For an overview of MPS' supported
notations, take a look at this paper: http://mbeddr.com/files/gemoc2014-MPSNotations.pdf

Traditionally, the above-mentioned benefits of a projectional editor have come at a price: the user experience of traditional projectional
editors was not very good, in particular when editing code that looked like "normal text". Consequently, projectional editors were not
widely adopted. However, MPS does a much better job here. We have conducted a study with 20 MPS users (most of them mbeddr users
as well), and the conclusion is that the editor works well enough. Take a look at this paper for details:
http://mbeddr.com/files/projectionalEditing-sle2014.pdf.
1.2 mbeddr
mbeddr is essentially a set of plugins to MPS. If you consider MPS as a framework, then mbeddr can be considered an application. The
plugins contain mostly language definitions: mbeddr C as well as its extensions are languages in terms of MPS. In addition, the plugins
also contain additional utilities, libraries, views, editors, etc. However, these details are irrelevant to the mbeddr user.

In this user guide we do not explain the philosophical underpinnings, ideas and approaches of mbeddr since these have been discussed
extensively in the various papers we wrote. You can find these papers on the mbeddr website (in particular on the Learn page at
http://mbeddr.com/learn.html). To get an overview we suggest taking a look at the following two:

mbeddr - Instantiating a Language Workbench in the Embedded Software Domain: http://mbeddr.com/files/voelteretal-mbeddr-final.pdf
Generic Tools, Specific Languages (Markus' dissertation that uses mbeddr as a case study): http://voelter.de/data/books/GenericToolsSpecificLanguages-1.0-web.pdf

1.2.1 mbeddr Architecture


mbeddr is structured into five layers and addresses three different concerns (see Fig. 1.2.1-A). The platform layer consists of the JetBrains
MPS language workbench, which enables the overall approach. The core layer includes the base languages used in mbeddr; these are
different for each of the three concerns. The default extensions layer consists of a set of predefined extensions of the core languages that
can be used out of the box to develop software. The user extension layer is empty; it is intended to be filled by users. The backend layer
consists of existing compilation and analysis tools which mbeddr uses.

Figure 1.2.1-A: Overview over mbeddr's five layers and three concerns.

Implementation Concern The implementation concern addresses the development of applications based on C. On the core layer,
mbeddr comes with an implementation of C99 in MPS. There are a few minor differences to C99, and the preprocessor is not exposed to
the user; first-class concepts are provided for the legitimate uses of the preprocessor (including a module system). On the default
extensions layer, the implementation concern comes with C extensions for decision tables, interfaces and components, state machines,
physical units, testing as well as logging and tracing. The user extensions layer is by definition empty; users can easily extend the C core
as well as any of the default extensions. State-of-the-art IDE support is available for all languages, including syntax highlighting, code
completion, real-time type checks and refactorings. The implementation concern also ships with an extensible debugger that is able to
debug on the level of the extensions, so the abstractions do not break down when debugging becomes necessary. At the backend layer,
the implementation concern relies on a C compiler, a C debugger and tools for importing existing C code into mbeddr. By default, mbeddr
uses the gcc compiler and the gdb debugger.
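To give a flavor of what this looks like in practice, the following sketch shows an mbeddr implementation module with a function and a unit test, rendered approximately as text. mbeddr is projectional, so the on-screen notation differs in details; the module name, the function and the exact keyword spellings here are illustrative assumptions, not verbatim mbeddr output:

module UnitTestDemo imports nothing {

  exported int32 times2(int32 x) {
    return 2 * x;
  }

  exported test case testTimes2 {
    assert(0) times2(21) == 42;
  }
}

The sized integer types (such as int32) and the first-class test case construct are examples of the default extensions mentioned above.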

Analysis Concern The analysis concern provides static analyses (formal verification) for some of the default extensions provided by the
implementation concern. The analyses themselves are performed by existing external tools. However, mbeddr integrates the tools tightly
by (a) providing language abstractions to conveniently describe behavior that can be analyzed, (b) translating this description to the input
of the analysis tool, (c) running the tool, and (d) lifting the output of the tool back to the original abstraction level, to make it easier to
understand for the user. The integrated analyses are based on symbolic model checking, SMT solving and C-level model-checking.
Specifically, the following analyses are available: State machines can be checked with a symbolic model checker. It verifies a set of
default properties and optional user-defined properties. Decision tables can be checked for completeness and consistency. Feature model
configurations are checked for consistency. Finally, interface contracts can be checked statically: interfaces can specify pre- and post-
conditions as well as protocol state machines that specify the valid invocation order of interface operations. These contracts can be
checked for each implementing component via a C-level model checker.
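As a rough illustration of such contracts (the interface and operation names are invented for this example, and the projected mbeddr notation may differ in detail), an interface with pre- and postconditions might be rendered like this:

exported cs interface TrackpointStore {
  void store(Trackpoint* tp)
    pre(0) tp != null
    post(1) !isEmpty()
  Trackpoint* take()
    pre(2) !isEmpty()
  boolean isEmpty()
}

The numbers identify the individual contract clauses, so that violated conditions can be reported precisely for each implementing component.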

Process Concern The process concern includes facilities for integrating mbeddr into development processes. These facilities can be
used with arbitrary mbeddr languages, such as all the default and user-defined C extensions. The requirements support provides a
language for describing requirements. Traces can be attached to any program element expressed in any language. Arbitrary additional
data, expressed in any language, can be added to a requirement. The product line support allows defining feature models and configurations.
Feature models can be connected to other artifacts by means of presence conditions. While presence conditions are static and work for
any language, there is also C-specific support to evaluate variability at runtime. The documentation language aspect supports writing
prose documents as part of an mbeddr project, exportable as HTML or LaTeX. It supports close integration with program elements: they
can be referenced (with real references that are renamed if the element itself is renamed), and program code can be embedded as text or
as an image. The embedded code is updated whenever the document is regenerated. Visualization provides a facility to render diagrams.
Reports and assessments are customizable queries over the code.

1.3 gcc, make and gdb


mbeddr does not come with its own compiler. While this could be considered "cheating", it is actually a good idea because gcc and other
compilers contain person-decades of optimizations. Replicating those is hopeless. Hence, mbeddr relies on existing backends.

For compiling, building and debugging mbeddr code, mbeddr relies on the established GNU C tool chain. In particular, gcc is used to
compile the generated C code, make is used to orchestrate the compile and link process, and gdb is used by the mbeddr debugger. Note
that none of these tools is visible to the mbeddr user, but they have to be available on your system (see Section Gcc, Make and Gdb
for details).

Other compilers, make tools and debuggers can be plugged into mbeddr (we have just integrated the Microchip PIC 16 for a customer).
How this is done is beyond the scope of this user guide.

1.4 CBMC
CBMC is used for verification. It is discussed in detail in Section Formal Verification with CBMC. To summarize, CBMC is a bounded
model checker for C programs. This means that it can essentially simulate the execution of a C program regarding all its possible
execution paths. CBMC is open source software and can be downloaded from http://www.cprover.org/cbmc/. It has to be installed
separately, as described in the Installation section Section Verification Tools. For details on how mbeddr uses CBMC, please take a look
at the following papers, as well as at [ToDo: section ]:

Automated Domain-Specific C Verification with mbeddr: http://mbeddr.com/files/dscv-ase2014.pdf
Using Language Engineering to Lift Languages and Analyses at the Domain Level: http://mbeddr.com/files/nfm2013.pdf
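For intuition, consider this plain C fragment (deliberately not mbeddr syntax; the file name and the command in the comment are illustrative). CBMC symbolically explores all feasible execution paths up to a bound and reports whether the assertion can fail -- here it can, e.g. for idx == 10:

/* example.c -- analyzed with something like: cbmc example.c */
#include <assert.h>

int buffer[10];

int read_at(int idx) {
  /* CBMC checks this assertion on every feasible path; if it can be
     violated, it produces a concrete counterexample trace */
  assert(idx >= 0 && idx < 10);
  return buffer[idx];
}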

1.5 PlantUML and Graphviz


PlantUML (http://www.plantuml.com/) is an open source tool to generate UML diagrams from relatively simple textual specifications. We
use it in mbeddr to render read-only diagrams of program structures in a separate view (these are different from graphical editors).
Internally, PlantUML relies on Graphviz (http://www.graphviz.org/) for laying out the diagrams. As a user, you will not interact with Graphviz
directly, but you have to install it manually (see Section Graphviz). PlantUML itself is packaged with mbeddr and does not have to be
installed manually.

1.6 Java
MPS is a Java application. This means that it relies on a Java Virtual Machine (specifically, a JRE) to execute. So, in order to
use MPS and mbeddr, you have to install Java on your computer. The specific versions are discussed in the Installation section.

Note that the Java dependency is of course only for the tool - the systems developed with mbeddr are pure C.

2 Installation and Setup


This chapter describes the manual installation process of mbeddr in detail. This includes MPS, Java and additional tools needed for
compilation, debugging, visualization and verification.

2.1 Gcc, Make and Gdb


mbeddr relies on gcc and make for compilation and gdb for debugging. You must install these tools unless you have a different, target-
specific build process. Below we describe the installation of these tools on the various platforms.

Windows: We recommend installing MinGW (see Section Required Tools and Versions), a port of the GNU tools to Windows. When
selecting the packages to be installed (only the bin class is needed), make sure mingw32-gcc, mingw32-gdb and mingw32-make are included (all of them
are in the All Packages subtree in the selection dialog).
Linux: These tools should be installed by default. Otherwise use your favourite package manager to install them (see Section
Required Tools and Versions).
MacOS: You should install XCode's command line tools to get gcc, make and the associated tools (see Section Required Tools and
Versions). XCode comes with lldb, which is not yet supported by mbeddr. Therefore you have to install gdb via brew (see Section
Required Tools and Versions). Use the following command for installing gdb: brew install https://raw.github.com/Homebrew/homebrew-dupes/master/gdb.rb.
Additionally, you have to sign a certificate for gdb
(http://wiki.freepascal.org/GDB_on_OS_X_Mavericks_and_Xcode_5).

After the installation, please make sure the tools can be invoked from the command-line. If this is not possible, please add them to your
system path, as described in Section Adding Tools to the System Path.
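A quick manual check is to compile, run and debug a minimal C program from a terminal; the file name and the commands in the comment are just one way to do this:

/* hello.c
   compile: gcc -g -o hello hello.c
   run:     ./hello
   debug:   gdb ./hello   (for make, `make --version` is the quickest check) */
#include <stdio.h>

int main(void) {
  printf("GNU toolchain works\n");
  return 0;
}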

2.2 Graphviz
mbeddr supports visualization via PlantUML (http://plantuml.sourceforge.net), directly embedded in MPS. To use it, you have to install
Graphviz. Section Required Tools and Versions describes which version is required and where to get it from. After the installation, you
have to put the bin directory of Graphviz into your system path (see Section Adding Tools to the System Path). On Windows, in addition,
you also need an environment variable GRAPHVIZ_DOT that points to the dot.exe file supplied with Graphviz.

2.3 Verification Tools


To be able to use the verifications in mbeddr, you have to install CBMC, which is a C-level model-checker. Section Required Tools and
Versions describes which version is required and where to get it from. You can find installers for the various platforms at the bottom of the
website. After the installation, please make sure CBMC can be invoked from the command-line. If this is not possible, please add it to your
system path, as described in Section Adding Tools to the System Path. CBMC uses gcc for preprocessing the C files, so please make
sure that you have gcc installed on your machine.

You can check that a proper version of CBMC is correctly installed on your machine by clicking on the 'Test CBMC Installation' button
from File -> Settings -> CProver Preferences.

2.4 mbeddr
Depending on what you want to do with mbeddr, we provide three different installation methods. The following list describes the general
differences between these methods; the next sections describe each of them in detail:

mbeddr IDE: sufficient if you want to use mbeddr, but not change or extend it. The mbeddr IDE is a stripped-down version of MPS,
which provides just the UI facilities required for using mbeddr. It comes as part of the mbeddr distribution and is the easiest way
to get mbeddr running on your machine.
mbeddr Plugin Distribution: suitable if you want to use and extend mbeddr. With this approach, you install MPS manually on your
machine and deploy the mbeddr plugins into this installation.
mbeddr from Sources: use this option if you want total control over what you are doing with mbeddr or want to stay at the bleeding
edge of mbeddr development. This way you can just use mbeddr, extend it or even change the sources and try out things. You can
pull from the current master or even check out experimental branches.

2.4.1 mbeddr IDE


With the mbeddr IDE you can use all languages provided by mbeddr and write applications with them. Installing the IDE is simple, as you
just have to perform two tasks:

install a JDK (Section Required Tools and Versions lists which version is required for your platform) on your local machine and
make sure it is available on the path (see Section Adding Tools to the System Path).
download the mbeddr IDE that fits your platform from the mbeddr GitHub releases page
(https://github.com/mbeddr/mbeddr.core/releases) and unzip the downloaded ZIP file.

2.4.2 mbeddr Plugin Distribution


In case you do not want to change mbeddr but still want to be able to extend it (so you'll need MPS' language engineering facilities), we
recommend installing the distribution. This just requires downloading a ZIP file and copying a bunch of plugins into your existing MPS
installation. If you do not have an existing MPS installation on your machine, please follow the steps described in Section JetBrains MPS
and Java.

To install the distribution, please download the distribution ZIP file from the mbeddr GitHub release page:
https://github.com/mbeddr/mbeddr.core/releases. This ZIP file contains the mbeddr plugins for MPS. After unzipping, please take all
folders inside the plugins directory and copy them into the plugins directory of your MPS installation 1. So, for example, after copying, there
should be a $MPS_DIR$/plugins/mbeddr.core directory.

1: Note that there is also a plugin (without the s) directory under MPS!

2.4.3 mbeddr Source Installation

2.4.3.1 Introduction

Installing mbeddr from sources gives you total control over what you want to do with mbeddr. We generally recommend working with the
distribution or the mbeddr IDE, unless you have good reasons to install from sources. The following list provides some of these good reasons:

want to stay at the bleeding edge of mbeddr development by using the most recent version on master
plan to fork mbeddr
need to regularly update your local mbeddr

There are two ways to get the mbeddr source installation. One of them is a completely manual installation, as described below in
Section Manual Source Installation. The other one is an equivalent automated interactive process, an installer written in Python,
described in Section Source Distribution Python Installer.

2.4.3.2 Source Distribution Python Installer

The installer is hosted at http://mbeddr.fortiss.org/. To be able to run it, you have to install some prerequisites required by MPS and
mbeddr, and also the installer itself:

Java Development Kit (JDK)
Apache Ant
git client
Python 3

For hints on how to install the prerequisites, especially on the Windows platform, please follow this website:
http://mbeddr.fortiss.org/download/prereq/. After the prerequisites are installed, you are ready to run the installer. The installer will check
that you have installed everything correctly before setting up mbeddr.

To use the installer, please download it from http://mbeddr.fortiss.org/download/, run it, and then simply follow the instructions it gives.

After the installer has finished, you should have the following installed and configured:

mbeddr source code
MPS of a matching version
CBMC checker

You might still want to install other optional tools used by mbeddr, as described in this chapter. [ToDo: Make a reference to
gcc&make and graphviz. ]

The website with the installer provides additional information on how to run the installer and what to do next after it has finished.

Basically, the installer is an automation that performs for you the steps described below in Section Manual Source Installation. Thus,
normally, you should never need to perform these steps manually. However, we provide them for completeness and for
troubleshooting purposes, in the rare case that the installer fails to work correctly.
2.4.3.3 Manual Source Installation

The following guide explains how to install mbeddr from sources manually. Please carry out the following instructions step by step
in the given order:

1. First, in order to clone the mbeddr repository, you will need to have a git client installed on your local machine (see Section
Required Tools and Versions).
2. Second, you will need an Apache ant installation (see Section Required Tools and Versions). After the installation, please make
sure the ant binary is in your path (see Section Adding Tools to the System Path).
3. If you do not already have an MPS installation on your machine, please follow the installation guide in Section JetBrains MPS and
Java.
4. Next, clone the mbeddr repository from GitHub onto your local machine. You can find it at https://github.com/mbeddr/mbeddr.core;
the repository URLs to be used for cloning (for various protocols) can also be found at this URL.
5. After cloning the repository, you now have to build all languages shipped with mbeddr. For this task we provide a shell script named
buildLanguages, which is located inside the mbeddr.core/code/languages directory. Depending on your operating system, you either use the
one with the .bat (Windows) or .sh (Linux/Mac) file extension. Before running the script, you first have to copy the build.properties.example
file into the same folder as the script and rename it to build.properties. Next, open this file and change the paths of the following
properties (you have to use forward slashes as directory separators, even on Windows!):

* mps.home: points to the Contents folder of your MPS installation
* mps.platform.caches: points to a folder which is used by the build process to store temporary files
* mbeddr.github.core.home: points to the mbeddr.core folder of your local mbeddr repository

You can now execute the buildLanguages script from within the mbeddr.core/code/languages folder. An example build.properties is shown
after this list. At the end of the build you should get a message stating BUILD SUCCESSFUL. Depending on the performance of your
machine, running the script takes between 20 and 45 minutes.
6. We now have to make future application projects aware of the mbeddr.core languages in your local repository. Start MPS, go to
File → Settings (on the Mac it is under MPS → Preferences) and select Path Variables in the IDE Settings. Create a path variable
named mbeddr.github.core.home and point it to the root directory of your mbeddr git clone (e.g. /Users/peter/Documents/mbeddr.core).
7. Finally, go to the Global Libraries section in the IDE settings (see Fig. 2.4.3.3-A). Create a library named mbeddr.core that points to the
code directory of the GitHub repository (/Users/peter/Documents/mbeddr.core/code). This library must point to the code directory of
the checkout so that all languages are below it, including core and mpsutil. Because we defined the path variable earlier, MPS will
use it for defining the library. Notice that this is a global setting, which means it has to be performed only once, before
your first application project.

Figure 2.4.3.3-A
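For illustration, a filled-in build.properties (see step 5 above) might look as follows; all paths are placeholders that you must adapt to your local setup:

mps.home=/Users/peter/apps/MPS31/Contents
mps.platform.caches=/Users/peter/tmp/mps-caches
mbeddr.github.core.home=/Users/peter/Documents/mbeddr.core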

Congrats: you are now ready to build your first project with mbeddr.

2.5 JetBrains MPS and Java


The mbeddr system is based on JetBrains MPS, an open source language workbench available from http://www.jetbrains.com/mps/. MPS
is available for different platforms, like Windows, Mac and Linux. Please make sure you pick the right installer for your platform and the
proper version (see Section Required Tools and Versions). When running the installer, please make sure you install MPS in a path that
does not contain blanks in any of its directory or file names (not even in the MPS 3.1 folder). This will simplify some of the command line
work you may want to do. After installing MPS, please follow the platform-specific steps mentioned below.

Windows: If you installed MPS with the Windows-specific installer, Java was already installed along with it. Open the bin folder and
edit, depending on whether you are running mbeddr on a 32- or 64-bit machine, either mps.exe.vmoptions or mps64.exe.vmoptions. To
make MPS run smoothly, increase the MaxPermSize setting to at least 512m. This JVM setting controls how much space is occupied
by loaded Java classes, methods etc.
Linux: Make sure you have installed Java on your machine (version 1.6 or higher). Next, open the bin folder and edit
mps.vmoptions. To make MPS run smoothly, increase the MaxPermSize setting to at least 512m.
Mac: Make sure you have installed Java on your machine (see Section Required Tools and Versions). Next, open the app folder
(right click on the application and do a Show Package Contents) and open the Contents folder. Open the Info.plist file with a text editor
and navigate to the entry VMOptions at the very bottom of the file. To make MPS run smoothly, increase the
-XX:MaxPermSize setting to at least 512m.

If your MPS does not start, showing the error message Could not create the VM, then your system has less main memory available than
required. In this case, you should open the previously edited file (*.vmoptions or Info.plist) and decrease the value of the -Xmx parameter in
small steps until you are able to start MPS. This parameter controls the maximum amount of dynamic memory that the JVM provides to
MPS.
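For reference, after these edits the memory-related lines of the edited file might read as follows (the values are illustrative; adapt them to your machine):

-Xmx1024m
-XX:MaxPermSize=512m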

2.6 Adding Tools to the System Path


For mbeddr/MPS to be able to access command-line tools, their binaries have to be added to your system's PATH variable. You can do
this either globally, or locally in the script that starts up MPS. How the PATH variable is changed depends on your platform. Table 2.6-A
below shows these platform-specific differences.

For changing the variable locally, open the start-up script that is located inside your MPS installation directory. Next, add the PATH
redefinition at the beginning of that file, replacing <your path> with the file system path of the tool you want to add to the path. For details
see the 'locally' entries below.

To change the path globally, see the 'globally' entries below. They explain how to add tools to the global PATH on the different
platforms. Here, you should also replace <your path> with the file system path that points to your tool.

Windows
locally: open the start-up script mps.bat and add the PATH redefinition:
set PATH=<your path>;%PATH%
globally: in your Windows System Settings, change the value of the PATH variable by adding your path to the
existing paths: <existing paths>;<your path>

Linux/Mac
locally: open the start-up script mps.sh and add the PATH redefinition:
export PATH=<your path>:$PATH
globally: in general you will have to add your tool path to the PATH variable. However, the way this is done
depends on your actual operating system and the shell you are using. Since nowadays almost every Unix
system comes with a bash shell, we describe the process just for this scenario. In case you are using a different
shell, please refer to the vendor's documentation. First, create or open your existing .bash_profile, which is
located inside your home folder. Next, add the following line to it, save the file and restart your shell:
export PATH=<your path>:$PATH

Table 2.6-A

2.7 Required Tools and Versions


The following table lists all tools mentioned in this guide, on which platform they are needed, in which version, which components are
required and where to get the tool from.

Tool | Platform | Version | Required components | URL
MinGW | Windows | newest | mingw32-gcc (v 4.8.1), mingw32-make (v 3.82.90-2), mingw32-gdb (v 7.6.1), msys-base (v 2013072300) and msys-coreutils (v 5.97-3) | http://www.mingw.org/
XCode | Mac | newest | gcc and make | -
Brew | Mac | newest | gdb | http://brew.sh
make | Linux | newest | - | -
gcc | Linux | newest | - | -
gdb | Linux | newest | - | -
Graphviz | all | 2.30 | - | http://graphviz.org
CBMC | all | 4.9 | - | http://www.cprover.org/cbmc
ant | all | newest | - | http://ant.apache.org
git | all | newest | - | -
MPS | all | 3.1.4 or 3.1.5 (MPS 3.2 not yet supported) | - | http://www.jetbrains.com/mps
JDK | Windows | 1.6 or higher | - | -
JDK | Linux | 1.6 or higher | - | -
JDK | Mac | 1.6 | - | -

Table 2.7-A

3 Fundamentals
3.1 The MPS UI
3.1.1 Overall Structure
Fundamentally, MPS looks and works like any modern IDE. Fig. 3.1.1-A shows an overview. We describe the various marked up parts
below.
Figure 3.1.1-A: The MPS IDE, with major components marked up.

A Menu Bar Like all IDEs and applications MPS comes with a menu bar. Depending on your OS it is located at the top of the MPS
window (on Windows) or at the top of the screen (Mac).

B Tool Bar The tool bar contains quick access buttons for various functionalities and the run configuration selection combo box for
running and debugging programs.

C Project Explorer / Logical View The Project Explorer is the central view on the left side of the screen. It can show several different
things: the Logical View of the project (solutions, folders, models), a File System View (directories, files) as well as various favourites lists.
In the context of the user guide we mostly talk about the Logical View, which is why we use the terms Project Explorer and Logical View
synonymously.

The Logical View contains the project structure. It is the primary means for navigating around mbeddr (and MPS in general). We describe
the project structure separately below (Section Project Structure). The Project Explorer is an example of a Tool (aka View in Eclipse).

Editor The main part of the MPS window is made up of the editor pane. The editor pane contains any number of editors, each with its
own Tab. Each editor edits a so-called root (see Section Nodes, Roots and Concepts). It is also possible to split the editor pane into
several editors (arranged vertically and horizontally) through the Window menu.

Inspector The Inspector is associated with the editor. For each selected node in the currently active editor, the Inspector
provides additional (read/write) information. The particular contents of the inspector are determined by the language definition (of the
concept that defines the selected node). Sometimes it shows nothing useful, sometimes it just shows additional information (such as the
statically evaluated value of an expression), and sometimes it contains additional stuff that can be edited (and is hence essential for the
editing experience). We suggest always keeping it open.

G Status Bar The status bar shows status information about MPS, including the version control status, the current branch, memory
usage and various other details.

Note that the Status Bar, the Tool Bar Buttons as well as the little tool placeholder tabs can all be hidden through the View menu.

3.1.2 Tools
Tools are the windows that are arranged around the edges of the MPS window. In this section we describe the most important ones
(some language-specific tools are explained in their respective context). Generally, they can be activated through the View → Tool Windows
menu.

Tools can be arranged around the edges of the MPS window. Every Tool has a default position on one particular edge of the MPS
window. However, by right-clicking on the title bar, other edges (and a floating mode) can be selected for each Tool. Tools can also be
hidden. If not shown, the border shows a little placeholder (like the Context Sidebar tool near D on the left and H on the right). If a tool has
a number associated with it (such as the 1 in case of the Logical View), Ctrl + <Number> can be used to select this window.

Project Explorer As mentioned above, the Project Explorer (in its Logical View mode) contains the overall structure of the project. It is
the primary means of navigating around your code. We'll describe it in more detail in Section Project Structure.

Changes The Changes view shows the files that have been changed since the last commit. This is part of MPS' version control support
and is discussed in more detail in Section Version Control.

Model Checker The Model Checker shows the result of running Check Model on a project, solution, language or model. It shows warnings
and errors. Double-clicking on an element opens this element in the editor. There are various options for grouping and filtering the
contents using the buttons on the left. You should play around with these options. Fig. 3.1.2-A shows an example. Note that errors that
result from a transformation (code generation) are not shown here. They are shown in the Messages view.

Figure 3.1.2-A: The model checker showing an error in this document.

Messages This view shows (mostly) messages, errors and warnings resulting from the build process. These include problems in the
transformation as well as problems with the subsequent compilation and link process. If an error shows up here during a build, it is likely that
there has already been a problem with the input models. In this case, please run the Model Checker on the respective parts of the project
and fix all errors reported there before attempting to generate.

Visualizations A visualization is a read-only diagram of some program structure. It is laid out automatically. Users can double-click on
nodes and labels to jump back into the code. Fig. 3.1.2-B shows an example. To open a visualization, press Ctrl + Alt + V or
use the context menu, provided the selected node defines a visualization. Note that the view also supports switching between
multiple visualizations for the same node (if the node defines several of them) through the combobox. The buttons support navigating
through the recent diagrams, zooming, and exporting the diagram as a PNG file.

Figure 3.1.2-B: Visualization of module dependencies.

Tree Views Tree views are similar to visualizations: they are contributed by particular nodes, they are shown in a special view,
double-clicking on a tree node selects the node in the editor, and a node can define zero, one or more tree view structures. They are
activated through Ctrl + Alt + O or the context menu.

Figure 3.1.2-C: Tree view of the document structure.


Event Log The event log shows internal MPS errors. You can generally ignore this Tool except for debugging internal MPS or mbeddr
errors.

Version Control This tool shows the version control console, essentially a stream of all version control operations. Typically not needed.

Explorer The explorer shows the low level tree structure of a node. It is more like a debugging tool and is not used regularly. It can be
opened for a node by selecting the node and choosing Show Node in Explorer from the context menu. Fig. 3.1.2-D shows an example.

Figure 3.1.2-D: An example node shown in the explorer.

3.2 Project Structure


3.2.1 Nodes, Roots and Concepts
MPS programs are trees of nodes (this is the structure you can see in the Node Explorer). Each node can have several children, resulting
in the tree structure. However, there can also be cross-references between nodes in a program tree. This makes MPS programs
graphs, but ones with a primary containment hierarchy. In fact, a complete MPS program consists of several such trees; we call each of
these trees a Root. Importantly, each Root gets its own editor tab in MPS. References can reach across Root boundaries. Fig. 3.2.1-A
shows the structure. In Fig. 3.2.2-A, all the implementation modules (blue I icon) such as Components or StateMachines are roots.

Figure 3.2.1-A: This figure shows two roots (grey boxes). Each of them has a number of nodes inside (the letters). The solid lines are the parent-child relationships. The
dotted lines are the cross references. Note that references can cross root boundaries.

Each node is an instance of a language concept. The language concept defines all properties of its instance nodes, including its visual
representation (notation, concrete syntax). Concepts are created as part of a language. Fig. 3.2.1-B shows this relationship.

Figure 3.2.1-B: The relationship between nodes and concepts.

3.2.2 Project, Solutions, Models, Folders, Roots


MPS has an elaborate project structure. We discuss the most important ingredients in this section. Each of the ingredients is used for
configuring different aspects of the overall project. We describe this in Section Using Languages and Devkits. The basics on the various
kinds of dependencies between these ingredients are discussed below. The (vast load of) details are discussed in
https://github.com/mbeddr/mbeddr.core/wiki/Dependencies-and-Classpath-in-MPS.

Figure 3.2.2-A: A typical MPS project structure. Details are discussed in the running text.

Models MPS is fundamentally a file-based system. The roots discussed above are stored in XML files (these can then be managed with
existing version control systems, see Section Version Control). These files are called models in MPS. Each model contains a number of
roots (which then have arbitrary substructure). In Fig. 3.2.2-A, the purple diamonds with the little M inside are models. Models also
determine which languages are used for the programs/roots/nodes inside the model. Models can import each other in order to be able to
establish cross-references. These dependencies are established in the Model Properties dialog (Dependencies tab).

Models are important because they represent the only real "physical structure" in MPS. Hence they are the granularity relevant to version
control (since they are files, it is models that are seen by file-based tools such as git). Also, the MPS transformation and generation
engine always processes complete models. So in order to keep (partial) generation times low, you should split your overall system into a
reasonable number of (relatively small-sized) models. See Chapter Structuring Complex Projects for details.

Solutions From a user's perspective, a solution is where application code lives (this is in contrast to Languages, which is where language
definitions reside). Technically, a solution is a way to group models. A solution can contain any number of models, and each model resides
in exactly one solution. In Fig. 3.2.2-A, the orange S is a solution. Solutions can also have dependencies on other solutions.

Project A project contains several solutions. Any particular solution can be used in several projects, so projects do not own the solutions
(in contrast to models, which are owned by a solution). A project is really more like a workspace in Eclipse: it's a collection of solutions
which you want to work on for a particular task. The project defines various settings, preferences and view options.

3.2.3 Using Languages and Devkits


To be able to write code in a model, that model must specify which languages should be used to write the code. As shown in Fig. 3.2.3-A,
these languages are specified in the Used Languages tab of the model's properties.

Figure 3.2.3-A: The Used Languages tab of a model properties dialog.

In the Used Languages spec, users can also add dependencies on devkits. A Devkit is a group of languages. For example, there are
several languages that deal with requirements in mbeddr. Instead of adding all of these manually to any given model, users can simply
add the com.mbeddr.reqtrace devkit to a model. This simplifies configuration and avoids clutter.

3.2.4 Paths and Libraries


A library is essentially a directory that contains additional languages or solutions (it is like a search path or the Java classpath). Instead of
adding a needed solution or language to the project, it can remain "outside" the project and be made visible by pointing a library to the file
system folder. Libraries can be defined specifically for a project (Project Libraries) or globally (Global Libraries). Both are created via the
Preferences.

In order not to hard-code absolute paths in the definition of libraries, you can define a path variable. A path variable is a name (such as
MyProjectDir) plus a directory (such as /Users/markus/mbeddr/project1). If you now define a reference to a library in, say,
/Users/markus/mbeddr/project1/my/folder/sol.mps, then this is stored in the project as ${MyProjectDir}/my/folder/sol.mps, even though this is not visible in
the UI. If the same library is accessed on another computer, that computer's definition of the MyProjectDir variable is used to resolve it.

3.3 The Editor


3.3.1 Usability
MPS is a projectional editor. This means that the editing actions you know from a text editor may not work exactly the same way
here. This leads to a lot of benefits in terms of language modularity and notational flexibility. It also has the potential drawback
that the editor may not work exactly as you'd expect. To better understand how the editor works and what MPS does to achieve quite
good editor usability, we suggest you watch this video: http://www.youtube.com/watch?v=PvcXD4cFM4s

In addition, Section Keyboard Shortcuts lists a lot of important keyboard shortcuts. Please take the time to read that section!
The cheat sheet in Section Cheat Sheet summarizes them as well.

3.3.2 Projection Modes


Since MPS is a projectional editor, the same program can be shown with different notations. Users can switch the notation at any time to
match the task they are working on. The specific available notations are discussed throughout this user guide when we discuss the
respective languages; examples include a tabular notation for state machines, a textual vs. graphical notation for wiring components, as
well as the regular and the presentation mode in the kinds of documents you may be reading right now.

All notations can be switched from one common menu location: Code → Projection Modes → ... . See also Fig. 3.3.2-A. The menu shows only
those options that are valid for the languages used in the current model, and the selection is specific for any given editor/root. The specific
entries are discussed in the context of their respective languages.

Figure 3.3.2-A: The menu to change projection modes. The specific entries are discussed in the context of their respective languages.

3.4 Keyboard Shortcuts


Here is a summary of the most important keyboard shortcuts. Please take the time to learn them. It pays off! At the end of this section you
can also find a cheatsheet that you can print and put next to your computer.

Entering Code In MPS you can only enter code that is available from the code completion menu. Using aliases and other "tricks", MPS
manages to make this feel almost like text editing. Here are some hints though:

As you start typing, the text you're entering remains red, with a light red background. This means the string you've entered has not
yet bound.
Entered text will bind if there is only one thing left in the code completion menu that starts with the substring you've typed so far. An
instance of the concept selected by the text you have entered will be created and the red color goes away.
As long as text is still red, you can press Ctrl + Space to explicitly open the code completion menu, and you can select from those
concepts that start with the substring you have typed in so far.
If you want to go back and enter something different from what the entered text already preselects, press Ctrl + Space again. This will
show the whole code completion menu.
Finally, if you're trying to enter something that does not bind at all because the prefix you've typed does not match anything in the
code completion menu, there is no point in continuing to type; it won't ever bind. You're probably trying to enter something that is not
valid in this place. Maybe you haven't included the language module that provides the concept you have in mind?
Some language features may only be accessible via an intention. If you don't know how to go on, press Alt + Enter and see if the
intentions window has something useful to offer.

Entering Expressions Expressions can be entered linearly. So if you have a number literal 42 and you want to change that expression to
42 + 2, you can simply move to the right side of the 42 and type + and then 2. However, if you want to add something on the left side of
an expression (e.g. changing 42 to 10 + 42), then you have to move to the left side of the 42 and enter + as well; you can then enter 10.

Parentheses Imagine you have already entered 4 + 2 * 3. The * binds more tightly, so the tree root is the +, with the 4 and the * as children.
If you now want to change this to (4 + 2) * 3, you have to set parentheses. This is supported in a natural way: just move to the left of the 4
and enter an opening parenthesis. It will remain red, because the balancing parenthesis is still missing. You can now go to any other place
in the expression, in this case to the right of the 2, and enter the closing parenthesis. The tree is restructured accordingly, as illustrated below.
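To make the restructuring concrete, here are the two expression trees before and after the parentheses are entered:

4 + 2 * 3            (4 + 2) * 3

    +                     *
   / \                   / \
  4   *                 +   3
     / \               / \
    2   3             4   2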

Navigation Navigation in the source works as usual using the cursor keys or the mouse. References can be followed ("go to definition")
either by Ctrl + Click or by using Ctrl + B. The reverse is also supported: the context menu on a program element supports Find Usages.
This can also be activated via Alt + F7. Within an mbeddr program, you can also press Ctrl + F12 to get an outline view that lists all top
level or important elements in that particular program so you can navigate to them easily. Ctrl + E will show you the recently closed editors, in
case you've closed too many. More generally, Ctrl + Tab can be used to switch between windows in MPS.

Selection Selection is different from normal text editors. Ctrl + Up/Down can be used to select along the tree. For example, consider a
local variable declaration int x = 2 + 3 * 4; with the cursor at the 3. If you now press Ctrl + Up, the 3 * 4 will be selected, because the * is the
parent of the 3. Pressing Ctrl + Up again selects 2 + 3 * 4, and the next Ctrl + Up selects the whole local variable declaration. You can also
select with Shift + Up/Down. This selects siblings in a list. For example, consider a statement list as in a function body:

void aFunction() {
int x;
int y;
int z;
}

Imagine the cursor in the x. You can press Ctrl + Up once to select the whole int x; and then you can use Shift + Down to select the y and z
siblings. Note that the screencast mentioned above illustrates these things much more clearly. We recommend again to watch it, if you haven't
done so yet.

Deleting Things The safest way to delete something is to mark it (using the strategies discussed in the previous paragraph) and then
press Backspace or Delete. In many places you can also simply press Backspace behind or Delete before the thing you want to delete.

Adding List Elements If you are in a list context (e.g. in function arguments or statement lists) you can press Enter to add an element
after the current one; pressing Shift + Enter adds one before the current element. You can also type the list separator (if one is defined) to add
a new element. For example, in an argument list, you can type a comma to add another argument.

Intentions Some editing functionalities are not available via "regular typing", but have to be performed via what's traditionally known as a
quick fix. In MPS, those are called intentions. The intentions menu can be invoked by pressing Alt + Enter while the cursor is on the
program element for which the intention menu should be shown (each language concept has its own set of intentions). For example,
module contents in mbeddr can only be set to be exported by selecting export from the intentions menu. Explore the contents of the
intentions menu from time to time to see what's possible. Note that you can just type the name of an intention once the menu is open; you
don't have to use the cursor keys to select from the list. So, for example, to export a module content (function, struct), you type Alt + Enter,
e, x, Enter.

Surround-With Intentions Surround-With intentions are used to surround a selection with another construct. For example, if you select a
couple of lines (i.e. a list of statements) in a C program, you can then surround these statements with an if or with a while . Press
Ctrl + Alt + T to show the possible surround options at any time. To reemphasize: in contrast to regular intentions which are opened by
Alt + Enter , surround-with intentions can work on a selection that contains several nodes!

Refactorings For many language constructs, refactorings are provided. Refactorings are more important in MPS than in "normal" text
editors, because quite a few editing operations are hard to do manually. Please explore the refactorings context menu, and take note
when we explain refactorings in the user guide. Unlike intentions, which cannot have a specific keyboard shortcut assigned, refactorings
can, and we make use of this feature heavily. For example, Ctrl + Alt + V extracts a variable from an expression.

Select in Project To select the currently edited root node in the project explorer (the big tree on the left), press Alt + F1 and then Enter.

Open Inspector Sometimes, additional properties of an element can be entered in the inspector ("properties view"). To open the
inspector window, press Ctrl + 2 .

Make and Rebuild By default, Ctrl + F9 ( Cmd + F9 on the Mac) makes the current module, i.e. it regenerates and recompiles if the
current module has changed. We recommend using the Keymap preferences to assign Ctrl + Alt + F9 (and Cmd + Alt + F9 on the Mac) to a
complete Rebuild of the current solution; sometimes changes aren't detected correctly and a full rebuild is necessary. Ctrl + Alt + F8 and
Ctrl + Alt + F10 should be mapped to a rebuild of the current model and the current project, respectively (see screenshot below).

Figure 3.4-A: The keymap preferences dialog in MPS where users can define their own keyboard shortcuts.

3.5 Version Control


3.5.1 Introduction
In this chapter we discuss version control with MPS. Note that this is not an introduction to version control in general: we assume that you
understand version control concepts before you read this chapter. We use git as the example version control system because this is what
the mbeddr team has used successfully for years now. We clearly recommend using git.

Also, mbeddr/MPS uses the version control support of the IDEA platform. So to get an overview, you can also check out the following
links:

https://www.jetbrains.com/idea/features/version_control.html
https://www.jetbrains.com/idea/webhelp/version-control-with-intellij-idea.html

3.5.2 Supported Version Control Systems


In principle, MPS can work with all the version control systems supported by IntelliJ IDEA. These include git, Subversion, CVS, Mercurial,
Perforce, TFS, VSS and Rational ClearCase. This is the case because from the version control system's perspective, MPS is a regular,
file based tool.

However, this is not quite true, because during merging, some MPS-specific things have to be done. For this purpose, version control
systems support merge drivers. Currently, MPS merge drivers exist only for git, Subversion and CVS. Since nobody wants to use CVS
anymore, git and Subversion are the most reasonable alternatives to use with MPS. However, it is feasible to develop additional merge
drivers if this should become necessary. Please contact the MPS team in this case.
Our experience over the last years is with git, which is why we use git in this chapter. We've also used Subversion successfully in one
project. We clearly recommend using git with mbeddr.

3.5.3 Setting Up Version Control for a Project


To be able to use version control from within MPS, MPS must be made aware of the version control system and its metadata. MPS uses
so-called VCS roots. A VCS root is simply a root directory in which git (or another version control system) stores its metadata. You can
configure the roots via the Preferences (see Fig. 3.5.3-A). Note that MPS also detects directories that could/should be roots and brings up
a popup.

Figure 3.5.3-A: The configuration of VCS roots for git.

3.5.4 Tools and Views


Change Markup in Editor As you edit code in the regular MPS editor, MPS keeps track of changes. In particular, it shows changes in
the left gutter. Fig. 3.5.4-A shows how new nodes are highlighted. MPS also shows changes to existing nodes, as shown in Fig. 3.5.4-B.
Clicking on the respective color bar in the gutter lets you revert that change, among other things. It also lets you show a detailed diff; we
discuss the diff in the next paragraph.

Figure 3.5.4-A: Highlight of added nodes in the editor.

Figure 3.5.4-B: Highlight of changed nodes in the editor.

Changes View The Changes view keeps track of which files (i.e., models) have changed on your system compared to the latest version in the
version control system. Fig. 3.5.4-C shows an example. The defaultExtensions model is the one that contains the root with the two changes
shown above. You can see a diff by selecting Show Diff from the context menu of the respective model in the changes view. The view
which then opens (Fig. 3.5.4-D) lets you choose from the changed roots (if several have changed) and then see a diff in the usual side-by-
side style. Note that this works for arbitrary notations! You can also undo the changes with the usual operations.

Figure 3.5.4-C: The changes view in MPS.

Figure 3.5.4-D: The diff view in MPS.

3.5.5 Common Version Control Operations


In principle, all version control operations except the diff/merge can be performed from the command line. You can also use VCS-specific
tools such as SourceTree (http://www.sourcetreeapp.com/). However, it is recommended to use the MPS-internal ways to do things to
make sure that MPS' caches do not get out of sync. In practice we do the basic things (explained here) with MPS, and the advanced
things with the command line or SourceTree.

We recommend exploring the VCS menu in MPS, and in particular the git submenu; it has a lot of git operations such as branching or
stashing.

3.5.5.1 Update/Pull

Updating, or pulling in git-speak, refers to getting the newest contents from the version control system. From within MPS you can do this
via the menu entry VCS → Update Project . After the update, the Version Control view shows the set of changed files (Fig. 3.5.5.1-A). You can
use the context menu to show a diff of the changes that came in.

Figure 3.5.5.1-A: List of changed files after an update/pull.

3.5.5.2 Commit/Push

You can use VCS → Commit Changes or Ctrl + K to bring up the commit dialog. This dialog, shown in Fig. 3.5.5.2-A, lets you select the
models you want to commit and specify a commit message. By using the context menu on models, you can also revert some of the
changes. By pressing the little down-arrow on the Commit button, you can also directly push from within MPS. You can also use the
context menu to show the diff, i.e. the changes that are about to be committed.

Figure 3.5.5.2-A: Commit dialog in MPS.

3.5.5.3 Conflicts/Merging

Conflicts happen if the same node(s) are changed independently by different parties. This can happen during an update/pull (if somebody
else has changed the same nodes independently) or when merging a branch (in case the two branches contain changes to the
same model). In both cases, the Version Control view, when showing the incoming changes, will mark conflicting files in red, as shown in
Fig. 3.5.5.3-A.
Figure 3.5.5.3-A: The changes view after it ran into a conflict in a model.

You can then select Git → Merge Conflicts to open the Merge overview (Fig. 3.5.5.3-B). It shows the list of conflicting files, and for each of
them you can select whether you want to accept your local changes (overwriting what comes in), accept the incoming changes
(discarding your own changes) or merge manually.

Figure 3.5.5.3-B: The merge overview dialog shows the files that changed as well as the merge options.

In case you select Merge , you will get a three-way merge dialog as shown in Fig. 3.5.5.3-C. There you pick and choose the changes you
want to use. You can also edit, as well as copy/paste changes into the merge result.

Figure 3.5.5.3-C: Three-way merge to resolve a conflict.

3.6 Utilities
3.6.1 Interop with Text
There are various ways of interoperating with textual worlds. One of them is the import of textual, legacy C source. This is covered in
Section Accessing Libraries. In this section we discuss a few additional ways.

Node Info and Node URL It is often useful to be able to point somebody to a specific node. For example, one may want to
send an email to a coworker pointing out a specific node, or an issue in an issue tracker may want to refer to a program node in MPS. For
this purpose, it is possible to copy the Node Info and the Node URL to the clipboard (and then paste it as text somewhere else).

You can select any node in MPS and use Copy Node URL to Clipboard from the context menu. The following code snippet is a node URL. If you
open this URL in a browser, MPS will try to open the respective node. This requires MPS to be running and the currently opened project
must contain the node. MPS does not open the project if it is currently closed. [ToDo: Where is the menu entry in MPS to do the same
based on URL in the clipboard? ]

http://localhost:8080/select/tutorial/r:4ac377c2-0a54-4908-ae24-f86f1bad7e73/1195021413141453989/

The node URL is not readable by humans, only by the tool. You can use Edit → Copy Special → Copy Node Info to Clipboard to get more detailed
information about the node, as shown in the next snippet. This contains enough data to navigate to a node manually; of course the URL is also
included to automate the navigation.

project: tutorial (/Users/markusvoelter/Documents/mbeddr/mbeddr.core/code/applications/tutorial)


module: com.mbeddr.tutorial.documentation
model: com.mbeddr.tutorial.documentation.ug.fundamentals
root: G_TextInterOp [Document]
node: Word [Word]
url: http://localhost:8080/select/tutorial/r:4ac377c2-0a54-4908-ae24-f86f1bad7e73/1195021413141455557/

By the way: this functionality is also very useful if you want to report problems about the documentation to us. Please include a Node URL
or even better, a Node Info of the document or paragraph where you spot a problem.

Copy to Text Every MPS node can be copied to a text editor. However, the result is only useful if the original node has an essentially
textual syntax (like regular program code or the paragraphs in this document). In the text editor, it will look essentially the same. However, it
is not possible to paste the text back into MPS because, by default, an MPS language definition does not come with the parser necessary
to process the text and build the MPS tree (it is of course possible to build and integrate such parsers into MPS, but you don't get them for
free). So this approach is only useful for communication purposes, but not to actually exchange program code. The next paragraph
describes a more useful alternative.

Copy to XML It is possible to copy a node (and its complete subtree, of course) to an XML text. The menu item
Edit → Copy Special → Copy Node as XML creates the XML. This can of course be copied into any text context. The following listing shows an
example XML (of this paragraph):
<_root _nodeID="1195021413141455614" _conceptFQN="com.mbeddr.doc.structure.TextParagraph" _linktype="child">

<header _nodeID="1195021413141455617" _conceptFQN="com.mbeddr.doc.structure.TextParHeader" _linktype="child" text="Copy to XML" />

<text _nodeID="1195021413141455615" _conceptFQN="de.slisson.mps.richtext.structure.Text" _linktype="child">

<words _nodeID="1195021413141455616" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue="It is possible to copy a node (and

<words _nodeID="1195021413141473799" _conceptFQN="com.mbeddr.doc.structure.KeyPressFormattedText" _linktype="child">

<text _nodeID="1195021413141473800" _conceptFQN="de.slisson.mps.richtext.structure.Text" _linktype="child">

<words _nodeID="1195021413141473801" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue="Edit-&gt;Copy Special-&gt;Copy

</text>

</words>

<words _nodeID="1195021413141473802" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue=" creates the XML. This can of cou

</text>

</_root>

This XML can then be pasted into another MPS instance using Edit → Paste XML Node . The MPS node structure is reconstructed and the (internal and
external) references are reestablished. While, in general, it is better to use the version control system (and branching, if necessary) to
exchange code, this XML-based facility is sometimes useful as well.

3.6.2 Searching Models


You can do a full-text search on solutions and models. The functionality is available via the context menu (shown in Fig. 3.6-B). The
resulting search window (Fig. 3.6.2-A) reports all the model elements that contain your search string. Note that this can also be used to
search this documentation itself.

Figure 3.6.2-A: The mbeddr full text search dialog.

3.6.3 Opening Terminals


It is often useful to open a terminal (console, command line) in solutions or models. The context menu for solutions or models has this
option. For solutions, the terminal opens in the solution directory. For models, it opens in the model's output directory (where the model's
code will be generated to). Fig. 3.6-B shows the context menu for solutions.

Figure 3.6-B: The menu options for opening the search dialog and the terminals

3.6.4 Find Usages


[ToDo: ]

3.7 Cheat Sheet


Print this, cut it out and put it next to your monitor when you start with mbeddr. Note that on the Mac, most (but not all!) of the Ctrl + Something
shortcuts are actually Cmd + Something . You will have to try it out.

Code Completion Menu: Ctrl + Space
Code Completion Menu Reset: Ctrl + Space again
Intentions Menu: Alt + Enter
Select Intention: Alt + Enter , then type name
Wrap With...: Ctrl + Alt + T
Deleting: Backspace , or select-and-then- Backspace

Table 3.7-A: Entering Text

Add after current: Enter
Add before current: Shift + Enter
Move current node up: Ctrl + Shift + Up on Windows and Linux, Cmd + Shift + Up on Mac
Move current node down: Ctrl + Shift + Down on Windows and Linux, Cmd + Shift + Down on Mac

Table 3.7-B: In Lists

Select Along Tree: Ctrl + Up on Windows and Linux, Cmd + Up on Mac
Select Siblings (List): Shift + Up/Down

Table 3.7-C: Selection

Select in Project: Alt + F1 , then Enter
Go-To-Definition: Ctrl + Click or Ctrl + B on Windows and Linux, Cmd + Click or Cmd + B on Mac
Find Usages: Alt + F7
Find Usages with Options: Ctrl + Alt + Shift + F12 on Windows and Linux, Cmd + Alt + Shift + F12 on Mac
Outline: Ctrl + F12 on Windows and Linux, Cmd + F12 on Mac
Open/Focus Project Tree: Ctrl + 1 on Windows and Linux, Cmd + 1 on Mac
Open/Focus Inspector: Ctrl + 2 on Windows and Linux, Cmd + 2 on Mac
Close Current Editor: Ctrl + F4 on Windows and Linux, Cmd + F4 on Mac
Show List of Recent Editors: Ctrl + E on Windows and Linux, Cmd + E on Mac
Toggle through current Windows: Ctrl + Tab

Table 3.7-D: Navigation

Add Documentation: Ctrl + A
Comment Out: Ctrl + Shift + C

Table 3.7-E: Generic Refactoring

Make Solution: Ctrl + F9
Rebuild Solution: Ctrl + Alt + F9 (custom mapped)
Rebuild Model: Ctrl + Alt + F8 (custom mapped)
Rebuild Project: Ctrl + Alt + F10 (custom mapped)

Table 3.7-F: Build

3.8 Useful MPS Settings


Here are a few settings we recommend you change in MPS more generally (in the File → Preferences menu):

In the Editor settings, you should select Show 1 Tab for Each Aspect in the Editor Tabs choice.
In the Editor settings, you should select Activate most recent opened Tab in the When closing active editor choice.
In the Editor settings, you may want to change the font to Consolas. Looks nicer.
In the Generator settings, you may want to turn off the Check models before Generation option in the General settings. Running the model
checker before each generation run is annoying - you can always run it manually.

4 mbeddr C vs. C99


While mbeddr tries to stick as closely as possible to standard C99, there are some differences between the two. In this section we
discuss these differences in detail.

4.1 mbeddr's Module System


C has header files to manage visibility. If a file A.c wants to access symbols defined in file B.c, then the to-be-accessible entities (or
prototypes of them) are put into B's header file B.h. A.c uses #include to import the B.h header, and this way sees the declarations of the
symbols. This way, header files can be used in C to provide some kind of modularization and information hiding. Header files can even be
seen as a poor man's way of specifying interfaces (through prototypes). mbeddr instead provides a native, first-class module system.
Headers are not exposed; they are only used under the hood (i.e., generated).

Chunks and Modules MPS' main entities for structuring code are models (see Section Project Structure). A model is essentially a file that
contains a set of nodes. The nodes directly beneath the model are called root nodes in MPS. All root nodes provided by mbeddr are
called chunks. A chunk has a name as well as a set of dependencies, i.e., references to chunks it depends on. Chunks come in many
different kinds. BuildConfigurations are chunks, UnitContainers are chunks, and the modules in which you can write mbeddr C code are also
chunks. In general, any chunk can reference (or import) any other one, but there may be chunk-type-specific restrictions. If a chunk A
imports chunk B, then it is possible to reference B's contents from A (in some cases only exported contents are referenceable; see below).

Implementation and External Module Modules are the chunks that contain C code (as opposed to, for example, build configurations or
requirements). mbeddr has two different kinds of modules. The most common one is the ImplementationModule in which you write your
normal mbeddr code which will be generated to C99 code and then compiled. The second kind is the ExternalModule. External Modules
won't be generated into C code. They are used to proxy external, textual C code such as header files on which your code may rely 1.
Since they just proxy existing C or header files they don't contain any implementation. They just contain declarations. ExternalModules
always have to declare a header file that is included in the generated C code (the files for which they are a proxy).

Visibility and Exporting A chunk A that imports (depends on) chunk B only sees those contents of B that are exported. Some kinds of
chunks, for example the requirements module, implicitly export everything. Other chunks, for example implementation modules, default to
private contents, and the user has to specifically mark contents as exported . An importing chunk, A in the example, can also mark a chunk
import as reexport. This way, if a chunk C imports A, and A reexports its import of B, C can access the contents of B as well.
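
To make these rules concrete, here is a schematic sketch of three chunks. This is illustrative pseudo-notation, not the exact projectional rendering, and all module and function names are made up:

module LibB {
  exported int32 compute(int32 x);  // visible to importers of LibB
  int32 helper(int32 x);            // private: invisible outside LibB
}

module LibA imports LibB (reexport) {
  // may call compute(), and passes LibB on to its own importers
}

module Client imports LibA {
  // sees LibA's exported contents and, via the reexport, also
  // LibB's exported compute(); helper() remains hidden
}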

1: This is necessary because, in order to reference a node (e.g., call a function), that node has to be in MPS. If you want to call into textual C code that is not in MPS, you
have to create a proxy of that textual code in MPS. External modules are used for this purpose.

4.2 Data Types


mbeddr supports all of C99's data types in principle, but it changes a few defaults. We discuss these details here.

4.2.1 Boolean
While in C99 the boolean is a typedef'd integer, in mbeddr booleans are first class types.

boolean bool = true; ⬈

This means code like this is not valid in mbeddr:

if (1) {
//do stuff

} if ⬈

Also, boolean and integer are not assignable to each other by default. You have to convert them explicitly. This is typically necessary when
you interface with legacy code that was not written in mbeddr. In addition to int2bool, there is also a bool2int conversion operator.

{
int16 aInt16 = 0;
boolean aBoolean = int2bool<aInt16>;
} ⬈
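
The reverse direction works analogously. Assuming the syntax mirrors int2bool (a sketch, not taken from the tutorial code):

{
  boolean aBoolean = true;
  int16 aInt16 = bool2int<aBoolean>;
}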

4.2.2 User defined types


C allows you to specify user defined types (struct and union). To make use of these types, they are usually wrapped with a typedef. This is
not necessary in mbeddr. mbeddr will take care of the typedef during generation to textual C. In addition, the order of declarations does not
matter in mbeddr. It is safe to write code which references types that are defined later in the ImplementationModule. The contents of a module
are automatically sorted topologically during generation.

C99vsMbeddr2
model com.mbeddr.tutorial.documentation.code constraints

struct order {
product product;
int32 amount;
};

struct product {
int32 product_number;
string name;
};
imports nothing ⬈
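
For comparison, the typedef wrapping you would write by hand in plain C99 (and which mbeddr generates for you; the exact generated code may differ in details) looks roughly like this. Note that the generator must emit product before order because of the topological sorting mentioned above:

#include <stdint.h>

typedef struct product {
  int32_t product_number;
  char* name;
} product;

typedef struct order {
  product product;  /* usable without the struct keyword thanks to the typedef */
  int32_t amount;
} order;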

4.2.3 Integers
Integers in mbeddr always come with a qualified size. There is no int, short, long long and so on in mbeddr, because the size of these
integers can differ depending on the compilation target. mbeddr has its own data types for signed or unsigned integers from 8 bit to 64 bit.
For example, uint8 represents an 8-bit unsigned integer, int8 represents an 8-bit signed integer, uint16 represents a 16-bit unsigned integer and so on.
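
A few example declarations using the size-qualified types (a sketch using the types listed above):

int8 tiny = -100;         // 8-bit signed
uint8 byteValue = 200;    // 8-bit unsigned
uint16 counter = 40000;   // 16-bit unsigned
int64 timestamp = 0;      // 64-bit signed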

If you need to interact with existing C that uses int, you can still write int inside ExternalModules, but you have to provide a mapping to the
mbeddr types. This is done inside the TypesizeConfiguration.

4.2.4 Type Size Configuration


As described above, mbeddr uses size-qualified alternatives to the C99 native types. To interact with existing C code, mbeddr offers
ExternalModules. In those modules it is possible to use unqualified integers (such as int ). Since mbeddr by default doesn't know the
size of these types, they have to be configured on a per-project basis. This is done in the so-called TypesizeConfiguration. This configuration
can be accessed via File → Settings → Type Size Configuration (see Fig. 4.2.4-A). It contains a mapping from each C99 type to an mbeddr type. The
mapping for a given target may differ from the defaults that ship with mbeddr, so the mapping can be changed for each project.

Figure 4.2.4-A: Type Size Configuration

4.3 Array Syntax


One of the fundamental differences between mbeddr and C is the way arrays are declared. In C, array declarations are hard to
read when they incorporate pointers and/or the const and volatile modifiers. In mbeddr, array types are written and read from right to left. This
might be different from other languages you are used to, but in order to make reading types consistent across mbeddr, we
had to introduce this. Declaring a multidimensional (3 sets of 2 elements) array of 16-bit integers in C99 would look like this:

int16_t integers[3][2];

In mbeddr it looks like this:

int16[2][3] array = {
{1, 2},
{3, 4},
{5, 6}
}; ⬈

This might look irritating at first, but once we add pointers it becomes much easier to read. Declaring a pointer to the array we
declared above looks like this in C:

int16_t (*pointer)[3][2];

In mbeddr, the same declaration is completely written from right to left:

int16[2][3]* pointer = &array; ⬈

4.4 Various Additions


Declaration order In contrast to C99, the order of declared functions, variables and types doesn't matter, and forward declarations aren't
necessary. mbeddr takes care of all this during generation to textual C code.

No Preprocessor mbeddr does not expose the C preprocessor to the end user. Because the preprocessor simply replaces text, it can
easily produce code that does not compile (and create all kinds of other mayhem). In addition, the preprocessor does not take the
type system into account, so the resulting code may not typecheck.

Constants are supported via #constant


Function macros are supported via #macro
#ifdefs are supported via native support for product-line variability (see Section Static Variability)

Finally, more sophisticated/awkward/brittle uses of the preprocessor should be replaced by native mbeddr language extensions. There's
no need for "preprocessor meta programming" if you have a full-blown language workbench at your disposal!
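
To illustrate the kind of mayhem plain text replacement can cause, consider this classic C99 macro pitfall; because the macro body is pasted in as text, operator precedence silently changes the meaning:

#include <stdio.h>

#define SQUARE(x) x * x /* plain text replacement, no structure */

int main(void) {
  /* expands to 1 + 2 * 1 + 2, which evaluates to 5 -- not the expected 9 */
  printf("%d\n", SQUARE(1 + 2));
  return 0;
}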

4.5 Unit Tests


Unit testing is important for software development in general, it is important for developing code with mbeddr, and it is important for the
remainder of the user guide. Hence we discuss unit testing now. The examples discussed in this section can be found in
SimpleTestCase⬈.

Test Subjects The example module contains two functions which we will use as test subjects,add⬈ and divide⬈. They look as follows:
SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints

int32 add(int32 a, int32 b) {


return a + b;
} add (function)
int32 divide(int32 a, int32 b) {
return a / b;
} divide (function)
imports nothing ⬈

Declaring Test Cases Test cases are a little bit like void functions: they have no parameters and they do not return anything. mbeddr
provides them as first-class abstractions because, among other things, they track assertion failures. Here is the signature of a test case:

exported testcase testAdding {


testHelperFunction();
assert-equals(0) 10 == add(5, 5);
} testAdding(test case) ⬈

Asserting Inside test cases, a number of assert statements are available, as Fig. 4.5-A shows. It should be self-explanatory what they
mean. Using these assert statements, test cases can now be made meaningful.

exported testcase testDivision {


assert-equals(0) 4 == divide(8, 2);
} testDivision(test case) ⬈

Figure 4.5-A: The various assert statements available in mbeddr.

Invoking test cases A collection of test cases can be invoked via the test expression. It invokes the test cases mentioned as arguments
and evaluates to the total number of assertion failures. Typically the test expression is used from a main function, and the result is returned
to the OS; this way, the whole process fails if there are assertion failures. The test output will also be printed to the Console in MPS with
links pointing to the individual test cases in the C code. You can read more about this in Section Running Applications from MPS/mbeddr.

exported int32 main(int32 argc, string[] argv) {


return test testAdding; (unittest) ;
} main (function)
testDivision; (unittest) ⬈

The test executable can then be run like any other executable, as, for example, the HelloWorld⬈ example in Section Build and Run. Note
that you can set the isTest flag to true for the executable (inside the BuildConfiguration). If you do this, your Makefile has an additional target
test , so you can invoke it directly via make test .

Helper Functions Assert statements only work in test cases. If, for some reason, you want to access them from regular functions, then
these functions have to have the @test helper annotation. It can be attached with the corresponding intention.

SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints

void testHelperFunction() {
assert(0) add(1, 1) == 2;
} testHelperFunction (function)
exported testcase testAdding {
testHelperFunction();
assert-equals(0) 10 == add(5, 5);
} testAdding(test case)
imports nothing ⬈

Parameterized unit tests mbeddr has tight integration with formal verification tools, which can be used to specify a verification harness
for the function under test. This achieves functionality similar to ordinary parameterized tests. In fact, testing a
function (a unit of the system) with a harness and formal verification is much more powerful: unit tests usually cover only individual
sample values, while verification can prove a property (in this case, the assertions) for the whole range of input values. You can read more about
these features in Section Formal Verification with CBMC.
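
The idea, sketched in plain C using CBMC's conventions (this is not mbeddr's harness notation, which is described in Section Formal Verification with CBMC; functions whose names start with nondet_ and the __CPROVER_assume primitive are standard CBMC idioms):

#include <assert.h>
#include <stdint.h>

int32_t nondet_int32(void);        /* CBMC treats this as an unconstrained input */
void __CPROVER_assume(_Bool cond); /* restricts the inputs the checker considers */

int32_t divide(int32_t a, int32_t b) { return a / b; }

void divide_harness(void) {
  int32_t a = nondet_int32();
  int32_t b = nondet_int32();
  __CPROVER_assume(b != 0);                       /* rule out division by zero */
  __CPROVER_assume(!(a == INT32_MIN && b == -1)); /* rule out the one overflowing case */
  /* checked for ALL remaining values of a and b, not just a few sampled ones */
  assert(divide(a, b) * b + a % b == a);
}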

Domain-Specific Testing Support Some of mbeddr's extensions (such as components or state machines) come with their own
extensions of the testing language to make it more efficient to unit-test these abstractions. These extensions are discussed together with
the respective language extensions.

4.6 Function Pointers


Function pointers are an important ingredient to writing flexible, configurable software in C. However, their syntax is unwieldy and their
type safety is limited -- hence, mbeddr provides a different, more typesafe approach.

4.6.1 Function Types


As the first example, we will add a configurable event handler using function pointers. We create a new module FunctionPointers⬈ using
the context menu New → c.m.core.modules → ImplementationModule on the current model.

Inside it, we will add a struct called Trackpoint that contains a number of members. You create the struct by just typing struct inside the module.
You add the members by simply starting to type the member types (e.g., int8 ). Ignore the physical units (the things behind the slashes after the types) for
now.

exported struct Trackpoint {


int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
}; ⬈

We then create two functions that are able to process the Trackpoints. Here is one function that (intentionally) does nothing. You enter this
function by starting out with the Trackpoint type, then typing the name and then using the ( to actually create the function (the thing has
been a global variable up to this point!):

exported Trackpoint process_doNothing(Trackpoint e) {


return e;
} process_doNothing (function) ⬈

Other functions with the same signature may process the data in some specific way. We can generalize those into a function type using a
typedef . Note that entering the function type ()=>() is in fact a little bit cumbersome. The alias for entering it is funtype :

exported typedef (Trackpoint)⇒(Trackpoint) as DataProcessorType; ⬈
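
For comparison, the roughly equivalent typedef in plain C99 function pointer syntax (the generated code will look similar to this, though details may differ) reads considerably less naturally:

/* a pointer to a function taking and returning a Trackpoint */
typedef Trackpoint (*DataProcessorType)(Trackpoint);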

4.6.2 Function References


We can now create a global variable that holds an instance of this type and that acts as a global event dispatcher.

DataProcessorType processor; ⬈

We also create a new, empty test case that we will use for making sure the program actually works. In the test we assign a reference to
process_doNothing⬈ to that event handler. Note the : notation for function references - we do not use the ugly C function pointer syntax.
With this in place, we can write the first test assertion:

{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
} ⬈

Let us complete this into a runnable system. In the Main module we change our main function to run our new test. Note how we import the
FunctionPointers module; we call the test case, which is visible because it is exported . In the code below we call all the test cases built in the
tutorial:
exported int32 main(int32 argc, string[] argv) {
return test testProcessing; (unittest) ;
} main (function)
testLambdaProcessing; (unittest)
testNullerOK; (unittest)
testInterpolator; (unittest)
testJudging; (unittest)
testInterpolatorWithMock; (unittest)
testPrintf; (unittest)
testFlightAnalyzer; (unittest)
testRuntimeVar; (unittest)
testPresenceConditions; (unittest)
testConditionalAlternative; (unittest)
testConditionalReplacement; (unittest)
testFlightRecorder; (unittest) ⬈

To make the program build correctly, we have to make sure all modules used in the program are included. In particular, the FunctionPointers
module must be included. Missing modules can be included with a quick fix. In the end, this results in the following binary:

executable MbeddrTutorialDefaultExt is test {


modules:
Main
}
FunctionPointers (examples)
Components (examples)
DataStructures (examples)
StateMachines (examples)
LibraryAccess (examples)
stdio_stub (external)
stdlib_stub (external)
RuntimeVariability (examples)
StaticVariability (examples)
ComponentsSRI (examples)
UnitDeclarations (config) ⬈

We can run the tests from within MPS or from the console.

4.6.3 Lambdas
In contrast to regular C, mbeddr also provides lambdas, i.e. anonymous functions. They can be passed to functions that take function
types as an argument. However, they can also be assigned to variables that have a function type, such as the processor above. Here is an
example:

exported testcase testLambdaProcessing {


Trackpoint i1 = {
id = 1,
time = 0 s,
x = 0 m,
y = 0 m,
alt = 50 m
};

processor = [tp|
tp.alt = 100 m;
tp;];

assert(0) processor(i1).alt == 100 m;


} testLambdaProcessing(test case) ⬈

A lambda is expressed as [arg1, arg2, ...|statements] . The types of the arguments are inferred from the context; they don't have to be specified. If
several statements are required (as in the example above), they are laid out vertically. If only an expression is required, it is shown
inline.
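
For example, an identity processor whose body is just a single expression could be written inline (a sketch; assuming the rendering follows the block form shown above):

processor = [tp| tp];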

4.7 Commenting
mbeddr supports several kinds of comments. We discuss all of them in this section. All of them have in common that they are so-called
richtext blocks. This means that you can edit the multi-line comments in a way that resembles a regular text editor: pressing Enter to get a
new line, using Alt + Backspace to delete words, Ctrl + Right to jump to the end of the line, etc. In addition, you can press Ctrl + Space inside
the text blocks to insert actual MPS nodes: this lets you mix unstructured text with "real program code". We use this, for example, to
support references to program elements inside the comments. Since these are real references, they are renamed automatically if the
referenced element is renamed. Press Ctrl + Space to see which references are available; here are some:
@mc references another top-level element in an implementation module (these are called module contents, hence the mc)
@child references any named child under the current node
@module references another module
@arg references an argument (only works in functions or similar contexts)
@local references a local variable (only works in functions or similar contexts)

There are several context-specific references available, for example, a comment in a state machine can use @state to reference states.

4.7.1 Statement Comments


Statement comments are comments that are used in statement context (i.e., places where you write "procedural" C code). Examples can
be seen in the following two code fragments:

void store_store(Trackpoint* tp) <= op store.store {


//here is a regular statement comment.
@arg(tp) is a reference to an argument.
storedTP = tp;
return;
} runnable store_store ⬈

Trackpoint* store_take() <= op store.take {


Trackpoint* temp = storedTP;
//this comment refers to a local variable: @local(temp)
storedTP = null;
return temp;
} runnable store_take ⬈

Statement comments are entered just like you would expect, by typing //.

4.7.2 Element Documentations


The statement comments are simply another kind of statement. They live inside a statement list along with all the other statements. Their
only "connection" to the code is their physical location. Element Documentations are different: they are actually attached to program nodes.
This means that if you move/cut/copy/paste the program node, the documentation moves along with it. In addition, while statement
comments can only be used in statement context, element documentations can be attached to all (coarse-grained) program nodes
(functions, components, operations, states, transitions, unit declarations, etc).

To attach an element documentation, you can use the Add Documentation to ... intention on the respective element. The following is an
example:

exported component Nuller extends nothing {


provides TrackpointProcessor processor
Trackpoint* processor_process(Trackpoint* p) <= op processor.process {
p.alt = 42 m;
return p;
} runnable processor_process
} component Nuller ⬈

4.7.3 Margin Comments


Margin comments are used for reviews and are not inlined in the code. Instead, they reside in the right margin, just like comments in Word. An example can be
found in the DataStructures⬈ module. Margin comments can be attached to any node using the Add Margin Comment intention. Once there,
they can be replied to or deleted using context menu items. The color of the comment outline depends on the commenter (as determined
by the currently logged in user name).

If margin comments are used in a root, the root itself shows a summary at the top: how many comments are in the root, plus who
commented last, and when. You can use the Delete All Comments intention on that root annotation to delete all comments.

4.7.4 Commenting Out Code


The comments described so far were all text comments: their content is unstructured text. This is totally different from commenting out
code, because in this case, even though the code has been commented out, the structure of the code (concepts, nodes, tree, references)
must be retained so it can be commented back in. This is why commenting out code is treated differently from the comments discussed
above, which contain text (as opposed to code).

Essentially all program nodes can be commented out by using the Comment Out intention on the node. Commented code is greyed out, no
type checks are performed inside it, and commented nodes cannot be referenced from the outside. Examples can be seen in the
FlightAnalyzer⬈ state machine (commented out states and transitions), the commented struct in DataStructures⬈ and the first two
statements in calcVerticalSpeed⬈.

To comment something back in, you can either use the Comment In intention anywhere in the commented code, or just press Backspace on
the leading /* or trailing */.
5 Generated Code
5.1 Efficiency
If mbeddr is used as a plain C IDE, the efficiency of the generated code will not be any better or worse than that of code written in
any other IDE. mbeddr does not do any magic to code that is written on the C level.

Extensions to C come in two flavours: those that extend only the type system and those that affect the generated C code.

Type System Only An example of such an extension is the units support in mbeddr. While it can help you to catch flaws in your code in
the IDE, it has no implications for the generated code. This means you will not pay any price at runtime for using this extension.

Extensions that affect the generated Code The far more common case is that mbeddr's extensions translate into C code. Here we try to
be as efficient as possible with respect to memory and runtime overhead. Again, some of the extensions will have no overhead because they are
very straightforward and just improve readability and reduce possible errors. However, more complex extensions such as
mbeddr's components or state machines have to store state and potentially add additional indirections in certain cases. The price you pay
for that is typically not any higher than the price you would pay if you implemented the same functionality manually in a clean, well-structured
way.

In many cases mbeddr offers transformation options in the build configuration which make different trade-offs between flexibility and
efficiency. For example, the static wiring strategy in mbeddr's components does not support interface polymorphism (less flexibility) but
translates to C code that does not involve a function pointer indirection (more efficient).
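
The function pointer indirection mentioned above corresponds to the following plain C sketch (illustrative names, not the actual generated code):

#include <stdint.h>

int32_t process_impl(int32_t x) { return x + 1; }

/* flexible wiring: calls go through a function pointer, which enables
   interface polymorphism but adds one indirection per call */
typedef int32_t (*ProcessOp)(int32_t);
static ProcessOp op = process_impl;

int32_t call_flexible(int32_t x) { return op(x); }

/* static wiring: a direct call -- no indirection, but the callee is
   fixed at compile time */
int32_t call_static(int32_t x) { return process_impl(x); }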

5.2 Readability
Within the mbeddr IDE you have many options to write readable code, mostly due to the high-level extensions. Not all of them can be
reflected in the code we generate, due to limitations of the C language itself. When translating mbeddr code into C code, we follow a few
idiomatic rules.

Naming First and foremost, mbeddr has a concept of namespaces which is not available in C. Every module in mbeddr forms its own
namespace. Additional structural elements such as components or state machines add further namespaces. This means that mbeddr
allows two functions to have the same name as long as they live in different namespaces. In order to translate these namespaces into C
and retain globally unique names, we typically prefix, e.g., the function name with the module name. Similar things happen for all module
contents. This can make the generated function names quite long and thus hard to read. There are two ways around that: You can add the
preventNameMangling annotation to module contents. This prevents the prefixes from being added to the element which has the
annotation. It should only be used in cases where you want to provide non-mbeddr users with an API to work with. The second option is to
select the prevent name mangling configuration item in the build configuration. This will try to get rid of the prefixes where possible, i.e., if and
only if a non-prefixed name is globally unique within an executable. This optimization is performed each time you rebuild your code. The
downside is that the generated code may change once you introduce, e.g., a new function with a name that already exists
elsewhere in your code.
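
As a sketch, a function add defined in a module called DataProcessing might appear in the generated C roughly as follows (the module and function names are made up, and the exact prefixing scheme may differ):

#include <stdint.h>

/* default: the module name is used as a prefix to keep names globally unique */
int32_t DataProcessing_add(int32_t a, int32_t b);

/* with preventNameMangling (or the build configuration item), provided the
   plain name is globally unique within the executable */
int32_t add(int32_t a, int32_t b);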

Sections The second option to structure your code in mbeddr are sections. They do not affect the generated code (nor the scoping in
mbeddr) and are just there to visually structure your module contents.

Ordering It is also worth noting that in mbeddr you can order your module content according to aspects important for maintainability
rather than the declaration dependencies you know from C.

6 C Tooling
6.1 Building Binaries in the IDE
6.1.1 Build Configurations
mbeddr has its own abstraction for building applications. This is called the BuildConfiguration. The BuildConfiguration is responsible for the
generation of the Makefile and holds parameters for the various parts of mbeddr generators. The following is an example.
BuildConfiguration (c.m.t.d.code)

Platform
GNU paths are not checked
make: make
gdb: gdb
compiler
path to executable: gcc
compiler options: -std=c99
debug options: -g

Configuration Items
reporting printf

Binaries

executable HelloWorld is test {


modules:
HelloWorld
} ⬈

It is split into three sections:

Platform
Configuration Items
Binaries

Platform In this section a platform can be referenced. A platform specifies the compiler that is used and the options passed to it. By
default mbeddr ships with two predefined platforms, Desktop and Nothing. The Desktop platform uses Make for building, GCC for compiling
and GDB for debugging. By default, mbeddr expects these tools to be on your global PATH (see paths in build config above). However,
you can specify their locations by providing an absolute path or using a path macro (see Section Paths and Libraries): e.g., specifying the
location of make via ${make.home}/make. The latter makes sense if you are working in a team where people have these tools installed
at different locations. The Nothing platform, as the name implies, does nothing. It does not compile anything and does not generate a
Makefile. It is possible to define additional platforms for other compiler toolchains in the settings: [ToDo: image ]

Configuration Items The Configuration Items section configures the generators of mbeddr. For example, the reporting item configures what
the report statement should be generated to. If it is set to printf , the generator will create printf statements for reports. There are various
configuration items for generators in mbeddr. These configuration items are discussed later in the user guide in the context of the
languages that contribute (and whose generation is affected by) these config items.

reporting printf ⬈

Binaries The last section is Binaries. It configures which mbeddr modules are compiled into a single compilation unit. By default this can be
either an Executable or a Library. The former tells the Makefile to create an ELF binary; the latter creates a linkable library that can be used in
further compilation steps. An Executable can also be marked as a test. This way it will be included in a target called test inside the resulting
Makefile. If the tests should be run on the command line [ToDo: ref ] this flag has to be true. mbeddr will also check if all the modules that
are used in the code that should be compiled into a single compilation unit are included in the Binary (based on the transitive import
dependencies between modules). If not, mbeddr will prevent you from generating a broken Makefile. There is also an intention on the
binary that adds missing modules.

executable HelloWorld is test {


modules:
HelloWorld
} ⬈

Currently only one binary can be specified for each build configuration (this may change in the future).

6.1.2 Model Checking


The model checker checks that there are no errors left in the mbeddr code. You can run the model checker from the context menu of
models, solutions or the whole project. If there's something wrong, code generation will likely not work. You may want to make it a habit to
check your model (and fix errors) before you attempt a rebuild.

6.1.3 Building in the IDE


To build a binary in the IDE, make sure that the model from which you want to build contains a BuildConfiguration. You can then invoke the
Rebuild action from the context menus of models, solutions and the project -- each rebuilds its own scope. You can also press Ctrl + F9
to rebuild.

Rebuilding in MPS reduces all extensions back to mbeddr C, generates textual C99 from mbeddr C and then also invokes the generated
Makefiles to build the final binary.

If you don't have any errors in your mbeddr code (see previous subsection), then code generation and compilation should go without
problems and you can run the binary (see Section Running Applications from MPS/mbeddr). If something goes wrong nonetheless (during
generation or compilation), the errors are shown in the messages view.

6.2 Building Binaries from the Command-Line


This section provides an overview of how build scripts for mbeddr-based applications are written and how you can integrate them into
your continuous integration environment. For trying out the shown examples, you will either need the mbeddr IDE (see Section mbeddr
IDE) or the mbeddr Plugin Distribution (see Section mbeddr Plugin Distribution). When building your mbeddr-based applications, we recommend
using either of these two installation methods.

6.2.1 Example Application


In the folder userguide.samples, you will find a solution named com.mbeddr.tutorial.sample.CProject;
its main function is here: Main⬈. It contains an mbeddr-based example application, which we will use throughout this chapter for
describing the build process in MPS. The application consists of a single module, which contains a main function that invokes a testcase. As
you can see from the assert statements, the latter will fail due to a false comparison.

You can build the application from within MPS by marking the solution, opening the context menu and selecting Rebuild . Now, you can run
the test by putting your cursor on the name of the main function (located inside Main⬈), opening the context menu and selecting run 'Main' .
You will now see a console appearing at the bottom of MPS with some messages in it (Section Running Applications from MPS/mbeddr
describes their meaning). The next sections will describe how to write a build script for this application and how to invoke it from the
command-line.

6.2.2 Writing Build Scripts


In software projects, you often want to build and test your software automatically. To achieve this for our example application, this section
shows what a corresponding script looks like. MPS already comes with a language for writing these scripts. Scripts written with
this language are translated to regular ant scripts, which you can then execute from the command line or your CI environment.

The tutorial project contains a build script for our example application. You can find it here: build-example⬈. This script is organized into
different sections, which have different responsibilities. The following list describes the meaning of each:

build-example generates build.xml: build-example is the name of the MPS script, whereas build.xml is the name of the generated ant file.
base directory: the file system location to which the ant file is generated.
use plugins: a list of MPS plugins that are required while executing the script. When building mbeddr-based applications, you just need
those two.
macros: macros are used in the lower sections to avoid replication of file system paths and string values.
dependencies: build scripts, i.e., artifacts on which our build script depends at generation-time. Since we use mbeddr (and depend on
other languages as well), its plugins should be available while generating code for our application.
project structure: contains plugin descriptors and solutions/languages you want to build. In our case, we have a dummy plugin
descriptor that causes classes from MPS' debugger api to be loaded during generation, and the solution that contains our example
application.
default layout: in case your build has artifacts (e.g., plugins or zip files), you can declare their structure here. Since we just want to run
code generation and execute make, we do not need to list anything here.

6.2.3 Building from Command-Line


Invoking rebuild on the build solution will cause generation of an ant file. This section describes how we can invoke this file for building and
testing our application from the command line. For just building the application, we could directly invoke the generated script. However,
we will write a wrapper script in ant that calls the generated file. The reason is that this wrapper script invokes make test , which executes the
tests written with mbeddr. MPS itself doesn't provide off-the-shelf support for that, therefore we have to do it this way.

The listing below shows the wrapper script with two targets: build for building the application and test for invoking the tests. In order to get it
to run on your machine, please adjust the following paths:

source.location: points to the folder where the generated files of your application solution are located (inside folder source_gen).
mps.home: on Windows and Linux, this property points to the directory where your MPS is installed. On Mac, it points to the Contents
folder, located inside MPS' installation directory.

In order to execute the script, please install ant on your machine (see Section Required Tools and Versions). Next, store this script to your
local file system as wrapper.xml. For building our example application, on the console, navigate to the location where you stored the script
and enter ant -f wrapper.xml build . For executing the tests, enterant -f wrapper.xml test at the same file system location.
<project name="build and test helloworld" default="build" >
<property name="base.path" location="/Users/domenik/repositories/mbeddr.core/code/applications/tutorial/solutions" />
<property name="source.location" location="${base.path}/com.mbeddr.tutorial.sample.CProject/source_gen/com/mbeddr/tutorial/sample/CProject/main" />
<property name="mps.home" location="/Users/domenik/MPS/MPS-135.1441.app/Contents" />

<target name="build">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate" />
<ant antfile="build.xml" target="build" />
</target>

<target name="test">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate"/>
<ant antfile="build.xml" target="build" />
<exec executable="make" dir="${source.location}">
<arg value="test"/>
</exec>
</target>
</project>

You can copy the same script to your CI environment and execute it there. Since you might have different file system paths there, you will
need to adjust the locations in the ant properties. You can either change the values in the script or invoke the script with the following
command (replace the brackets):

ant -Dsource.location=<path to solution> -Dmps.home=<path to mps> -f wrapper.xml build

6.3 Running Applications from MPS/mbeddr


Once you have built a binary you can run it on the PC (assuming it has been built for the PC using the Desktop platform). There are two
ways to run it:

First, you can open a terminal and run it from the console. We won't cover this any further.
Alternatively you can run it from within mbeddr. To do this, you have to create a Launch Configuration and run or debug it.

The main goal of the Launch Configuration is to be able to specify certain properties of the mbeddr binary you'd like to run. The Launch
Configuration is related to the Build Configuration because the latter specifies which modules constitute the executable, and it is mandatory
that exactly one module listed in the Build Configuration contains a function which can act as a main function (basically, an entry point).

6.3.1 Creating a Launch Configurations


As an example let's take a look at the PlainCDemo⬈, which is located in mbeddr.tutorial.main.plainC. In the same model, you find the
corresponding BuildConfiguration (m.t.m.plainC)⬈. This Configuration is used to specify which modules should be compiled into the
binary. As a next step, the plainC model needs to be (re-)built. There are two options to run/debug the binary from within MPS/mbeddr:

Go to the entry point (i.e., main⬈) of the executable and select Run PlainCDemo (or similarly Debug PlainCDemo ) from the context menu.
You will see the name of the executable at the menu item that you have specified in the Build Configuration. This approach can be seen
in Fig. 6.3.1-A.
Go to the Build Configuration of your application and run/debug the executable from the context menu of the executable definition. This
approach can be seen in Fig. 6.3.1-B.

Figure 6.3.1-A: Running the executable from the context menu of the entry point

Figure 6.3.1-B: Running the executable from the context menu of the executable definition

After you have run the executable a new launch configuration will be created with the name of the executable that you have specified. If a
configuration already exists with the same name then it will be overwritten. From this point, it is also possible to run/debug the executable
from the menu bar or from the Run menu of MPS. The launch configuration can also be created manually, or from the context menus of
the entry point or executable definition; take a look at the previous two figures (Fig. 6.3.1-B or Fig. 6.3.1-A) for the corresponding Create
PlainCDemo element.

An existing launch configuration can also be used to launch your application. Select the PlainCDemo element from the drop down list which
lists the available Launch Configuration s. Simply click on the green run button to launch the executable as shown in Fig. 6.3.1-C. In order to
debug the application, you need to click on the green bug icon next to the selected Launch Configuration . You can read more about how you
can debug your application in Section Debugging.
can debug your application in Section Debugging.

Figure 6.3.1-C: Running the executable from the menu bar

6.3.2 Contents of the Launch Configuration


Launch Configurations can be opened for editing either from the menu bar or by clicking on the Run → Edit Configurations menu item. Fig. 6.3.2-A
shows the contents of the Launch Configuration for the PlainCDemo.

Figure 6.3.2-A

The following properties can be edited:

Name: the name of the Launch Configuration . If you specify a custom configuration, it may be important to rename it and save it like
that, so that during the next run it will not be overwritten.
Build project: points to the executable within your Build Configuration.
Launch timeout: the value here is in milliseconds and it specifies for how long the debugger should try to connect to the low-level C
debugger (don't change the default unless you have a reason).
Command timeout: the value here is in milliseconds and it specifies how long the debugger waits with the execution of individual
debug commands (don't change the default unless you have a reason).
Before launch: this specifies the commands that should be executed before launching. Usually this includes generate and make (to
build the executable).

6.3.3 Console output


After you have launched the application, all output is redirected to the Console view of MPS. You will see all the
messages that would appear if you ran the executable from the command line.

Executable tests also show the test results in the Console. Fig. 6.3.3-A shows the output of the test run for the
mbeddr.tutorial.main.defaultExtensions.SimpleTestCase executable. The console output contains links for executed tests and failed assertions;
you can click on them to select the corresponding program element (test case, assertion) in the editor.

Figure 6.3.3-A: Running a test from MPS lets you directly click on tests and failed assertions.

6.4 Debugging
mbeddr comes with a debugger for core C. This chapter describes how to use this debugger to debug C programs written with mbeddr.

6.4.1 Introduction
The mbeddr debugger behaves like a regular C debugger, but on the abstraction level of mbeddr and its extensions: it shows the call
stack and watchables and allows stepping and setting of breakpoints. This way, you don't need to know any details about the generated C
code. In addition, the debugger can be extended to enable debugging of code written with user-defined language extensions. Currently,
the debugger uses gdb as debug backend and only supports local debugging. However, in future releases of mbeddr, we plan to support
debugging on the target device. In the tutorial, you find in the solution mbeddr.tutorial.main a model named plainC. We use the contained
mbeddr code to illustrate debugging mbeddr programs in MPS.

6.4.2 Starting a Debug Session


Before you can debug your program, you have to make sure the C code for the program has been generated. Therefore, click on the
model plainC, open the context menu and select Rebuild Model 'm.t.main.plainC' , or press Ctrl + F9 . Next, set a breakpoint in the first line of the
main function. This is done by clicking into the gutter of the editor. The result should look like this:

Figure 6.4.2-A: Creating a Breakpoint

Next, create a run configuration as explained in Section Running Applications from MPS/mbeddr and run it in debug mode by pressing
Shift + F9 or by clicking on the debug button in MPS' toolbar (see Fig. 6.4.2-B).

Figure 6.4.2-B: Starting a new debug session

The debugger UI should now appear at the bottom of the MPS window. In the lower left corner of this UI, you can see two activated
buttons: a green (arrow) and a red (square) one. This indicates that the debugger is now connected to the underlying C debugger. You
can now start program execution by clicking on the green (arrow) button. This will suspend the debugger on the first line of the main
function, on your previously created breakpoint:

Figure 6.4.2-C: Debugger suspended at Breakpoint

Next, press F7 to step into the current line; this will suspend the debugger in the called function add. For returning to the calling function
main, press Shift + F8 . Once the debugger is suspended there, you can see the stack frame main in the call stack and the local variables p,
argc and argv in the watchables view (see Fig. 6.4.2-D).

Figure 6.4.2-D: Suspended Debugger after performing step out

6.5 Hello, World


This section assumes that you have installed MPS and mbeddr correctly. It walks you through creating a new project.
6.5.1 A new Project
We first create a new project. You can do this via File → New Project . Fig. 6.5.1-A shows the Wizard that opens subsequently. We select a
Solution Project 1, which is a project that contains one solution. We use the usual Java package naming conventions for the solution.

Figure 6.5.1-A: MPS' dialog for creating new projects.

If you have a project already, you can simply create a new solution inside the existing project. Select the project node itself and select
New → Solution from the context menu.

1: If you work with the mbeddr IDE, then the language project shown in Fig. 6.5.1-A is not available.

6.5.2 New Model


Inside the solution you have to create one (or potentially several) new model(s) (see Fig. 6.5.2-A). Use New → Model in the context menu of
the solution. The model should have a (qualified) name which typically starts with the solution name. We call our model ...main.

Figure 6.5.2-A: The dialog to create a new model in MPS.

After pressing OK, the new model is created. Also, the model's property dialog pops up automatically (it can be brought up manually by
selecting Model Properties from its context menu). In any case, select the Used Languages tab and use the + button to add the com.mbeddr.core
devkit (note that you can select devkits and languages by just typing the first letters (c.m.c)). Fig. 6.5.2-B shows the result.

Figure 6.5.2-B: The MPS dialog for setting the languages used in a model.

6.5.3 Writing the Code

Note: the stuff explained in this subsection can be done automatically using the Code → Wizards → Create Minimal Test wizard. We do it
manually to explain things in detail.

Inside the model we create a new Implementation Module. To do so, select New → c.m.c.modules → module from the model's context menu.
Specify the name HelloWorld for the module. The result looks as shown in Fig. 6.5.3-A.

Figure 6.5.3-A ⬈

Inside the module we now write a test case that (wrongfully) asserts that 1 + 1 == 3 . Here is the code:

exported testcase testAdding {
assert-equals(0) 1 + 1 == 3;
} testAdding(test case) ⬈

Next is a main function. You can create one by simply typing main: this expands to a fully parameterized main function. It also already
contains a return 0; statement. Next, we want to replace the 0 with a call to our test case. Delete the 0 and instead enter a test expression.
Inside its brackets, add a reference to the testAdding⬈ test case. The resulting main function looks like this:

exported int32 main(int32 argc, string[] argv) {
return test testAdding;
} main (function) ⬈

To be able to run the code, we also have to create a build configuration. Use the menu New → c.m.core.buildconfig → BuildConfiguration on the main
model to create one. Initially it looks as shown in Fig. 6.5.3-B

Figure 6.5.3-B ⬈

To make the system run, you have to do three things:

First, you have to select a platform. The platform determines how the generated C code is compiled. Select the desktop platform. No
need to change any of its parameters.
Then you have to specify configuration items. For our case we need the reporting item that determines how console output is
handled. The default printf is ok.
Finally, you have to specify an executable. Add one called HelloWorld, and add the HelloWorld⬈ implementation module to its modules
slot.
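
Rendered as text, the result of these three steps looks roughly like this (a sketch; the exact projection in MPS differs slightly):

Platform: desktop
Configuration Items:
reporting: printf
Binaries:
executable HelloWorld {
modules: HelloWorld
}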

You're done. It should now look as shown in Fig. 6.5.3-C.


Figure 6.5.3-C ⬈

6.5.4 Build and Run


You can now build the system, for example by selecting Rebuild from the context menu of the main model or the
com.mbeddr.tutorial.sample.HelloWorld solution. In the Messages view you should see the following output:

rm -rf ./bin
mkdir -p ./bin
gcc -std=c99 -c -o bin/HelloWorld.o HelloWorld.c
gcc -std=c99 -o HelloWorld bin/HelloWorld.o
make finished successfully for com.mbeddr.tutorial.sample.HelloWorld/com.mbeddr.tutorial.sample.HelloWorld.main
No input. Skipping optional target.
"textGen" target execution time: 546 ms
"callMake" target execution time: 446 ms
"copyTraceInfo" target execution time: 138 ms
"generate" target execution time: 131 ms
"reconcile" target execution time: 37 ms
Other targets execution time: 35 ms; compile: 24 ms, configure: 6 ms, runProcessors: 5 ms, preloadModels: 0 ms, make: 0 ms, addLanguages: 0 ms, resetCache: 0 ms, c

To run the code, you can select Run HelloWorld from the context menu of either the main function or the HelloWorld executable. The result
should be shown in the Run tool, as shown in Fig. 6.5.4-A. Note how it contains clickable lines for the test(s) that have run as well as for
the failed assertions.

Figure 6.5.4-A: Running a test lets you directly click (and then open in the editor) on tests and failed assertions.

This concludes the Hello, World.

6.6 Accessing Libraries


So far we have not accessed any functions external to the mbeddr program -- everything was self-contained. Let us now look at how to
access external code. We start with the simplest possible example. We would like to be able to write the following code:

module LibraryAccess imports nothing {


exported test case testPrintf {
printf("Hello, World\n");
int8 i = 10;
printf("i = %i\n", i); }
}

To make this feasible, we have to integrate C's standard printf function. We could import all of stdio automatically (we come back to that
below). Alternatively, if you only need a few API functions from some library, it is simpler to just write the necessary proxies manually. Let's use
the second approach first.

6.6.1 Manual Header Import


External functions are represented in a so-called external module . After you create one and give it a name, it looks like this:

stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing

resources 6b853dfb-19be-33ee-b67c-33b658dd0753 ⬈

An external module is always associated with one or more "real" header files. The trick is that when an implementation module imports an
external module in mbeddr, the referenced real header is included into the generated C file. This also means that, even if stdio.h requires
all kinds of additional definitions for it to work, these do not have to be imported into mbeddr. Only the function prototype must be
imported so it can be called from mbeddr C.

So the first thing we need to do is to express that this stdlib_stub external module represents the stdlib.h file:

stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing

resources header: <stdlib.h> ⬈
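
With this association in place, importing stdlib_stub from an implementation module makes the generator emit the corresponding include into the generated C file. A sketch of the generated code:

/* generated C (sketch) */
#include <stdlib.h>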


In general, we also have to specify the .o or .a files that have to be linked to the binary during the make process (in the resources section of
the external module). In the case of stdlib.h, we don't have to specify this because gcc links the C standard library automatically. What remains
to do is to write the actual prototypes; these are regular function signatures. An ellipsis (as needed for printf) can be added via an intention;
the same is true for the const modifier. For the stdlib_stub, declaring malloc and free leads us to this:

stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing

void* malloc(size_t size);


void free(void* ptr);
resources header: <stdlib.h> ⬈
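
The stdio_stub needed for printf works the same way. A minimal sketch (the exact rendering of the prototype, in particular the parameter types, depends on how the C types are mapped):

stdio_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing

int32 printf(const string format, ...);

resources header: <stdio.h>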

To be able to write the test case, we have to import the stdio_stub into our LibraryAccess implementation module. In the build
configuration, we have to add LibraryAccess and stdio_stub to the binary. We should also call the testPrintf test case from Main.

6.6.2 Automatic Header Import


mbeddr has support for importing existing C code. However, this is a commercial extension that has to be installed separately. For further
information please contact mailto:[email protected]

7 C Extensions
7.1 Static Evaluation
mbeddr can calculate the static value of an expression if all children of that expression are statically evaluable. This is the case if the leaf
nodes of the expression tree consist of literals, references to #constant or references to #macro (function macro calls). Whether an
expression is statically evaluable, as well as its calculated static value, is shown in the inspector.

The interpreter which calculates the static value works with a precision of 100 decimal places. The 101st place is rounded half up.

By default the interpreter has no effect on the code generated by mbeddr. However, by using the staticValueOf expression you can ask
mbeddr's code generator to first use the interpreter to calculate the static value and then use this value in the generated code. This
functionality was introduced so that your code's readability does not need to be sacrificed for performance reasons (in traditional C code
you would typically see "magic numbers" where staticValueOf is used in mbeddr). The expression can be inserted into your code using the
staticValueOf alias or by using a surround-with intention on a statically evaluable expression (Ctrl + Alt + T ).
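
As a small illustration (a sketch with hypothetical constants; staticValueOf is rendered here with parentheses), staticValueOf lets you keep the readable expression in the model while the generated code contains the computed value:

#constant SAMPLES_PER_SECOND = 50;

int32 bufSize = staticValueOf(SAMPLES_PER_SECOND * 60);
// the generated C code then contains the literal 3000 instead of the expression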

Figure 7.1-A ⬈

In addition to the pure static value calculation, mbeddr can perform various rounding operations before the static value is inserted into your
code (an example is shown in Fig. 7.1-A). These rounding operations can be added via an intention on the staticValueOf expression (
Add rounding operation ) or selected in the inspector of the staticValueOf expression. The following rounding operations are available:

none: (default) no rounding operation will be performed. The resulting type might be a floating point type.
ceil: rounds towards positive infinity. The result is an integer.
floor: rounds towards negative infinity. The result is an integer.
round: rounds towards the "nearest neighbor" unless both neighbors are equidistant, in which case it rounds up. The result is an
integer.
trunc: rounds towards zero. The result is an integer.
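
For example (hypothetical values; the rounding operation itself is attached via the intention or the inspector, not typed as text):

int32 packets = staticValueOf(1000 / 300.0); // statically 3.33...
// with the ceil rounding operation selected, the generated code contains 4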

7.2 Units
The purpose of physical units is to annotate types and literals with additional information -- units -- used to improve type checking. Many
embedded systems work with real-world quantities, and many of those have a physical unit associated with them.

7.2.1 Physical Units Basics


Defining and Attaching Units to Types Let us go back to the definition of Trackpoint introduced in Section Function Pointers and make it
more useful. Our application is supposed to work with tracking data (captured from a bike computer or a flight logger). For the examples
so far we were not interested in the physical units.

Let us now explore how we can work with physical units, adding more semantics to this data structure. In order to use the physical units
language, we need to import the com.mbeddr.physicalunits devkit. After the import we can add units to the types by simply pressing
/ at the right side of them.

The 7 standard SI units are available from an accessory module called SIUnits⬈. It can be imported into any implementation module.
There is an extra nounit unit defined in this module. The usage of this unit is limited to test case definitions and conversion rules, which we
will talk about in later sections. When you import this module, the simple units like s and m will be immediately available.

For the speed member of the Trackpoint struct we need to add m/s. Since this is not an SI base unit, we first have to define it, which can be
done either in the ImplementationModule or in a UnitContainer (such as UnitDeclarations⬈). To create a new unit just use its alias unit. The
specification of the derived unit can be typed in after the name of the unit. A simple unit specification consists of one unit with possibly an
exponent. If you would like to type in a composite unit specification, press ENTER to expand the components in the specification.
Exponents can be added to the units by pressing the caret (^) symbol and the value of the exponent. This works for all units in the
specification except for the last component; there the caret is disabled because it conflicts with the binary XOR operator. At the last position,
use the appropriate intention to add the exponent to the unit (this intention is in fact available for all components, regardless of their
position in the specification). An exponent can either be a simple integer number or it can be transformed into a fraction by typing / in the
exponent expression.
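
As an illustration, a derived unit for speed could be declared like this (a sketch of the textual rendering; in the projectional editor the exponent appears as a superscript and the exact notation may differ):

unit mps = m s^-1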

The following code example shows how we can use units for the members of the Trackpoint struct.

exported struct Trackpoint {


int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
}; ⬈

Units on Literals Adding these units may result in errors in the existing code (depending on whether you had added them in previous
tutorial steps) because you cannot simply assign a plain number to a variable or member whose type includes a physical unit ( int8/m/ length
= 3; is illegal). Instead you have to add units to the literals as well. You can simply type the unit name after the literal to get to the following:

Trackpoint i1 = {
id = 1,
time = 0 s,
x = 0 m,
y = 0 m,
alt = 100 m
}; ⬈

You have to add units to the comparisons in assertions as well, for example in this one:

{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
} ⬈

Operators If we were to write the following code, we would get an error:

int8 someInt = i1.x + i1.speed; // error, adding m and mps

This is because all the mathematical operators are overloaded for physical units and these operations are also type-checked accordingly.
Clearly, the problem with this code is that you cannot add a length (i1.x) and a speed (i1.speed). The result is certainly not a plain int8, so you
cannot assign the result to someInt. Adding i1.x and i1.y will work, though. Another example where the units are matched properly:

int8/mps/ speed1 = (i4.x - i3.x) / (i4.time - i3.time); ⬈

The calcVerticalSpeed⬈ function provides a few more examples of working with units in expressions:

void calcVerticalSpeed(TrackpointWithVertical* prev, TrackpointWithVertical* cur) {


/* cur.alt = 0 m; */
/* cur.speed = 0 mps; */
if (prev == null) {
cur.vSpeed = 0 mps;
} else {
int16/m/ dAlt = cur.alt - prev.alt;
int8/s/ dTime = cur.time - prev.time;
cur.vSpeed = dAlt / dTime;
}
} calcVerticalSpeed (function) ⬈

Editing Composite Units When you edit an expression with composite units you may encounter the problem that multiple side
transformations are available at a given place and so you need to make the choice by hand. Consider the following code example:

int8/mps/ speed2 = 2 m s-1 + 3 s-1 m; ⬈


Here, when you are editing the unit definition of the number literal 2, the * operator at the end may be used either to introduce a multiplication
in the expression or to extend the unit definition. Similarly, the ^ symbol may be used either to introduce an exponent for the units or as the
binary XOR operator. This problem only arises at the end of the unit definition.

Build Configuration If you try to rebuild the model, you will get an error message saying that you need to add the units configuration
item to the build configuration. Once you have added this item, the build should be successful and we should be able to run the test again.

7.2.2 Unit Conversions


So far, we have used units only for type checking. However, sometimes you have several units for the same physical quantity. For
example, speed can be measured in m/s or km/h. For this purpose you can define conversion rules between the units.

Defining Conversions The conversion rule must define a source and a target unit (the conversion will happen between these units) and it
must contain one or more conversion specifiers. A conversion specifier defines the conversion expression for a given type. Inside the
conversion expression one can use the val expression as a placeholder for the to-be-converted value. It will be substituted with the value
that is passed to the conversion rule. The type that you define for the specifier is the type that the val expression will have. Additionally, it
is also possible to omit the type: in this case the specifier works as a generic one, where the expression may be applied to any type (the
type of the val expression will be double in this case, but this is just a trick that is needed for the typesystem to work properly). The
conversion specifiers are checked, and redundant specifiers will be marked as erroneous (the ones that are covered by some other
specifier due to its type being the supertype of the original type).

section conversions {

exported conversion mps -> kmh {
val as double -> val * 3.6
}
exported conversion kmh -> mps {
val as <no type> -> val / 3.6
}
exported conversion s -> h {
val as <no type> -> val / 3600
}
exported conversion m -> km {
val as <no type> -> val / 1000
}
} section conversions ⬈

Conversion rules can be either lazy or eager. The default behavior is lazy evaluation; you can switch to eager via the corresponding
intention on the conversion rule itself.

Lazy conversion rule: the val expression inside the conversion specifier has no unit, and the expression must evaluate to a type
without units. When the conversion rule is used, the type system simply appends the rule's target unit to the evaluated
expression.
Eager conversion rule: the val expression has the same unit as the rule's source unit. The expression in the conversion specifier
must evaluate to a type with the rule's target unit. When the conversion rule is used, the expression is simply evaluated
and the resulting type is used (which must match the target unit).

Using Conversions You can now invoke this conversion within a convert expression:

void somethingWithConversion() {
Trackpoint highSpeed;
highSpeed.speed = ((int16/mps/) convert[300 kmh -> mps]);
} somethingWithConversion (function) ⬈

The convert expression does not explicitly refer to any specific conversion rule; you only need to define the target unit of the conversion,
while the source unit is known from the type of the original expression. The system will try to find a matching conversion specifier (where
both the units and the types match). Here, the conversion specifier with no specific type comes in handy, because it can be applied to any
expression if the units match.

The conversion specifier can also be set manually in the Inspector of the convert expression.

7.2.3 Generic Units


Generic units enhance type safety for function calls while allowing a function to be specified generically with respect to the unit
annotations. The following code snippets show some use cases of generic units:

int8/U1/ sum(int8/U1/ a, int8/U1/ b) {


return a + b;
} sum (function) ⬈

int8/U1 U2/ mul(int8/U1/ a, int8/U2/ b) {


return a * b;
} mul (function) ⬈
Trackpoint process_generic(Trackpoint e) {
e.time = sum(10 s, 10 s);
e.id = sum(1, 2);
e.x = sum(10 m, 10 m);
e.speed = mul(10 m, 10 s-1);
return e;
} process_generic (function) ⬈

First you need to create generic units by invoking the Add Generic Unit Declaration intention on the function. You can specify multiple generic
units once the first one has been created by simply pressing Enter in the unit list. These newly created generic units can then be used
for the type annotation just like any other unit. The substitutions will be computed based on the input parameters of the function call. One
can also combine a generic unit with additional arbitrary non-generic units for the type annotations of the parameters and the return type.
In addition, it is also possible to invoke the function with bare types, but be aware that once at least one substitution is present, the
function call will be type checked also for matching units. The generic units can also have exponents (even as fractions) and the same
type-checks also apply to them as it was described for the non-generic units. An example shows how this could be done for a square root
function:

double/U1^(1/2)/ sqrt(double/U1/ a) {
double/U1^(1/2)/ res = 0 U1^(1/2);
//here goes your sophisticated square root approximation
return res;
} sqrt (function) ⬈

Some additional notes on the usage of the generic units:

You should not use multiple generic units in the annotation of one given function parameter, because the non-generic units will all be
bound to the first generic unit, so the substitutions will probably not be the ones you would expect. This constraint was
introduced to manage the complexity of the computation of the bindings; allowing multiple generic units for parameter
types could result in a constraint-solving problem which we do not support right now.
The generic units can only be used inside the function definition; they are not visible outside of it.

7.2.4 Stripping and Reintroducing Units


Let us assume we have an existing (legacy or external) function that does not know about physical units, and you cannot or do not want to
use generic units. An example is anExistingFunction⬈:

int16 anExistingFunction(int16 x) {
return x + 10;
} anExistingFunction (function) ⬈

To be able to call this function with arguments that have units, we have to strip away the units before we call the function. This can be
achieved by selecting the corresponding expression and invoking the Strip Unit intention. The type of this stripped expression will be simply
the type of the original expression but without units.

int16/m/ someFunction(Trackpoint* p1, Trackpoint* p2) {


int16 newValueWithoutUnit = anExistingFunction(stripunit[p1.alt]);
return newValueWithoutUnit m;
} someFunction (function) ⬈

The opposite direction (i.e., adding a unit to a value that has no unit) is also supported. The introduceunit operator is available for this. It
takes an expression plus the to-be-introduced unit as arguments.
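
A sketch of its use, assuming introduceunit mirrors the bracket notation of stripunit (the exact notation may differ):

// hypothetical example: reattach the unit m to the unitless function result
int16/m/ alt2 = introduceunit[anExistingFunction(stripunit[p1.alt]) -> m];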

7.3 State Machines


Next to components and units, state machines are one of the main C extensions available in mbeddr. They can be used directly in C
programs, or alternatively, embedded into components. To keep the overall complexity of the example manageable, we will show how
state machines can be used directly in C.

State machines can either be viewed and edited with the textual notation or with the table-based projection. You can switch between the
two modes with the Code → Projection Mode → Statemachines as Tables option.

This section gives a brief overview of state machines: how they are defined, how C code can interact with them, and details about
hierarchical state machines and their visualization with the help of the PlantUML tool.

7.3.1 Implementing a State machine


Import the com.mbeddr.statemachines devkit into your model and create a new ImplementationModule⬈ (the example module is called
StateMachines⬈). Then you can create the state machine in this module and add the ImplementationModule⬈ to the build configuration.
This section will guide you through an example that creates a state machine for a flight judgement system. This system has 5 states which
describe the behavior of the plane and also give the semantics for awarding the points for a given flight. We will create these states and
see how they can be connected with transitions. We will also explore how one could specify guard conditions for the transitions. The
informal rules for judging flights are as follows:

Once a flight lifts off, you get 100 points


For each trackpoint where you go more than 100 mps, you get 10 points
For each trackpoint where you go more than 200 mps, you get 20 points
You should land as short as possible; for each trackpoint where you are on the ground, rolling, you get 1 point deducted.
Once you land successfully, you get another 100 points.

In order to create the state machine in the ImplementationModule⬈, simply type statemachine at the top level of the implementation module.
This will create a new state machine with an initial state and one event already defined. You can leave the event there; we will come back
to it later. We know that the airplane will be in various states: beforeFlight⬈, airborne⬈, landing⬈ (and still rolling), landed⬈ and
crashed⬈. You can just rename the already existing initial state to beforeFlight⬈ and add the other states to the state machine. In the end
you should have the following states:

exported statemachine FlightAnalyzer initial = beforeFlight {


state beforeFlight {
} state beforeFlight
state airborne {
} state airborne
state landing {
} state landing
state landed {
} state landed
state crashed {
} state crashed
} ⬈

The state machine will accept two kinds of events. The first one is the next⬈ event, which takes the next trackpoint submitted for
evaluation. Note how an event can have arguments of arbitrary C types, a pointer to a Trackpoint⬈ in this example. The Trackpoint struct is
already defined in the DataStructures⬈ module. The other event, reset⬈, resets the state machine back to its initial state.

exported statemachine FlightAnalyzer initial = beforeFlight {


in event next(Trackpoint* tp) <no binding>
in event reset() <no binding>
} ⬈

We also need a variable in the state machine to keep track of the points we have accumulated during the flight. In order to create a new
variable, simply type var in the state machine. This creates a new variable; you need to specify its name and type. The newly created
variable is invisible from outside by default; you can change this to readable or writable with the corresponding intentions. Readable variables
may be read from outside of the state machine, while writable variables can also be modified from the interacting C code (you will have to
create a state machine instance first; we explain this below). In the end you should have a points variable in the state machine which is
readable only:

exported statemachine FlightAnalyzer initial = beforeFlight {


readable var int16 points = 0
} ⬈

We can now implement the rules outlined above using transitions and actions. Let us start with some simple ones. Whenever we enter
beforeFlight⬈ we reset the points to 0. We can achieve this with an entry action inbeforeFlight⬈:

exported statemachine FlightAnalyzer initial = beforeFlight {


state beforeFlight {
entry { points = 0; }
} state beforeFlight
} ⬈

There are additional rules for taking off and landing, as well as conditions for crashing.

All states other than beforeFlight⬈ must have a transition triggered by the reset⬈ event to go back to the beforeFlight⬈ state. Note
that, as a consequence of the entry action in the beforeFlight⬈ state, the points get reset in all three cases.
As soon as we submit a trackpoint where the altitude is greater than zero, we can transition to the airborne state. This means we
have successfully taken off, and we get 100 bonus points. TAKEOFF⬈ is a global constant representing 100 (#constant
TAKEOFF = 100;). We also make use of the physical units extension (see Section Units) and annotate the speed and altitude with the
appropriate unit.
Events while we are in the air: when we are airborne and we receive a trackpoint with zero altitude and zero speed (without going
through an orderly landing process), we have crashed. If we are at altitude zero with a speed greater than zero, we are in the
process of landing. The other two cases deal with flying at over 200 and over 100 mps. In this case we stay in the airborne⬈ state
(by transitioning to itself) but we increase the points.
The complete set of transitions is as follows:

exported statemachine FlightAnalyzer initial = beforeFlight {


state beforeFlight {
entry { points = 0; }
on next [tp.alt > 0 m] -> airborne
exit { points += TAKEOFF; }
} state beforeFlight
state airborne {
on next [tp.alt == 0 m && tp.speed == 0 mps] -> crashed
on next [tp.alt == 0 m && tp.speed > 0 mps] -> landing
on next [tp.speed > 200 mps && tp.alt > 0 m] -> airborne { points += VERY_HIGH_SPEED; }
on next [tp.speed > 100 mps && tp.speed <= 200 mps && tp.alt > 0 m] -> airborne { points += HIGH_SPEED; }
on reset [ ] -> beforeFlight
} state airborne
state landing {
on next [tp.speed == 0 mps] -> landed
on next [tp.speed > 0 mps] -> landing { points--; }
on reset [ ] -> beforeFlight
} state landing
state landed {
entry { points += LANDING; }
on reset [ ] -> beforeFlight
} state landed
state crashed {
entry { send crashNotification(); }
} state crashed
} ⬈

Note that the transitions are checked in the order of their appearance in the state machine; if several of them are ready to fire (based on
the received event and the evaluation of the guard conditions), the first one is picked. This kind of nondeterminism is usually not
wanted, and mbeddr provides support for the verification of state machines; you can read more about this in Section Checking State
Machines.

Junction states (influenced by Simulink) can also be created in the state machine in mbeddr. A junction state can only contain epsilon-
transitions, meaning these transitions are immediately ready to fire when the state is entered; they don't need a triggering event. Having
multiple epsilon-transitions clearly introduces nondeterminism, so one typically specifies guards for these transitions. The following
example junction state uses the points variable to decide which transition fires as soon as the state is entered.

exported statemachine FlightAnalyzer initial = beforeFlight {


junction NextRound {
[points > 100] -> airborne
[points <= 100] -> beforeFlight
} junction NextRound
} ⬈

Junctions are essentially branching points in a state machine and help modularize complex guards (especially if several guards in one
state have common subexpressions).

7.3.2 Interacting with Other Code -- Outbound


So how do we deal with the crashed⬈ state? Assume this flight analyzer runs on a server, analyzing flights that are submitted via the
web. If we detect a crash, we want to send notifications or perform other kinds of error handling. In any case, this would involve the
invocation of some external code. This can be performed in two ways:

The first way is to simply invoke a C function from an entry or exit action. The alternative, which is more suitable for formal analysis
(as we will see below and in Section Checking State Machines), involves out events. From the entry action we send an out event, which we
have defined in the state machine. The following code example shows what the latter would look like.
exported statemachine FlightAnalyzer initial = beforeFlight {
out event crashNotification() => raiseAlarm
state crashed {
entry { send crashNotification(); }
} state crashed
}

void raiseAlarm() {
//invoke some kind of real-world thingy
//that reacts suitably to a detected crash
} raiseAlarm (function) ⬈

We create an out event called crashNotification⬈ (which will be sent when we enter the crashed⬈ state). We then specify a binding to the
out event; the binding is part of the out event definition: simply add the name of the function as the target of the arrow (in the example this
is the raiseAlarm⬈ function).

The benefit of this approach compared to the previous one is that formal verification can check whether the notification was sent at all
during the execution of the state machine. The effect is the best of both worlds: in the generated code we do call the raiseAlarm⬈
function, but on the state machine level we have abstracted the implementation from the intent. See Section Checking State Machines for
a discussion of state machine verification.

7.3.3 Interaction with Other Code -- Inbound


Let us write some test code that interacts with a state machine. To write a meaningful test, we will have to create a whole lot of
trackpoints, so we create helper functions. These in turn need malloc and free, so we first create an additional external module
that represents stdlib.h:

stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing

void* malloc(size_t size);


void free(void* ptr);
resources header: <stdlib.h> ⬈

We can now create a helper function that creates a new Trackpoint⬈ based on an altitude and speed passed in as arguments:

exported Trackpoint* makeTP(int16 alt, int16 speed) {


static int8 trackpointCounter = 0;
trackpointCounter++;
Trackpoint* tp = ((Trackpoint*) malloc(sizeof[Trackpoint]));
tp.id = trackpointCounter;
tp.time = trackpointCounter s;
tp.alt = alt m;
tp.speed = speed mps;
return tp;
} makeTP (function) ⬈

We can now start writing (and running!) the test. We first create an instance of the state machine (state machines act as types and must
be instantiated). We then initialize the state machine by using the init operation:
exported testcase testFlightAnalyzer {
FlightAnalyzer f;
f.init;
...
} testFlightAnalyzer(test case) ⬈

Initially we should be in the beforeFlight⬈ state. We can check this with an assertion:

assert(0) f.isInState(beforeFlight);

We also want to make sure that the value of points is zero initially. Since we have declared the points variable to be readable above, we can
write:

assert(1) f.points == 0;

Let us now create the first trackpoint and pass it in. This one has speed, but no altitude, so we are in the take-off run. We expect that we
remain in the beforeFlight state and that we still have 0 points. Notice how we use the trigger operation on the state machine instance. It
takes the event as well as its arguments (if any):
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;

Now we lift off by setting the alt to 100 meters:

f.trigger(next|makeTP(100, 100));
assert(3) f.isInState(airborne) && f.points == 100;

As you can see, it is easy to interact with a state machine from regular C code. For testing, we have special support that checks whether the
state machine transitions to the desired state if a specific event is triggered. Here is the complete test case (note that you can use the test
statemachine construct only within test cases):

exported testcase testFlightAnalyzer {
FlightAnalyzer f;
f.init;
assert(0) f.isInState(beforeFlight);
assert(1) f.points == 0;
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;
f.trigger(next|makeTP(100, 100));
assert(3) f.isInState(airborne) && f.points == 100;
test statemachine f {
next(makeTP(200, 100)) ➔ airborne
next(makeTP(300, 150)) ➔ airborne
next(makeTP(0, 90)) ➔ landing
next(makeTP(0, 0)) ➔ landed
}
assert-equals(4) f.points == 200;
} testFlightAnalyzer(test case) ⬈

You may have noticed that the helper function allocates the new Trackpoint⬈s on the heap without releasing the memory. To solve this
problem, you could simply call free on these newly created structures after the test statement has been executed, or allocate the trackpoints
on the stack instead.

7.3.4 Hierarchical State Machines


State machines can also be hierarchical. This means that a state may essentially contain a sub-state machine. The following piece of
code shows an example. It was derived from the previous flight judgement example, but the three flying⬈, landing⬈ and landed⬈ states
are now nested inside the airborne⬈ composite state. Composite states can be created in the state machine by typing composite state.
exported statemachine HierarchicalFlightAnalyzer initial = beforeFlight {
macro stopped(next) = tp.speed == 0 mps
macro onTheGround(next) = tp.alt == 0 m
in event next(Trackpoint* tp) <no binding>
in event reset() <no binding>
out event crashNotification() => raiseAlarm
readable var int16 points = 0
state beforeFlight {
entry { points = 0; }
on next [tp.alt > 0 m] -> airborne
exit { points += TAKEOFF; }
} state beforeFlight
composite state airborne initial = flying {
on reset [ ] -> beforeFlight { points = 0; }
on next [onTheGround && stopped] -> crashed
state flying (airborne.flying) {
on next [onTheGround && tp.speed > 0 mps] -> landing
on next [tp.speed > 200 mps] -> flying { points += VERY_HIGH_SPEED; }
on next [tp.speed > 100 mps] -> flying { points += HIGH_SPEED; }
} state flying
state landing (airborne.landing) {
on next [stopped] -> landed
on next [ ] -> landing { points--; }
} state landing
state landed (airborne.landed) {
entry { points += LANDING; }
} state landed
} state airborne
state crashed {
entry { send crashNotification(); }
} state crashed
} ⬈

Here are the semantics:

When a transition from outside a composite state targets a composite state, the initial state in that composite state is activated.
A composite state can have its own transitions. These act as if they were defined for each of the states of the composite state.
If a transition from an inner state A crosses a composite state-boundary B, then the actions happen in the following order: exit
actions of A, exit actions of B, transition action, and entry action of the transition's target (which is outside of the composite state).

7.3.5 Tabular Notation


State machines can also be rendered and edited as a table: the events become the column headers, and the states become the row
headers. Fig. 7.3.5-A shows an example. You can switch projection modes the usual way via the Code → Projection Modes menu.

Figure 7.3.5-A: The FlightAnalyzer state machine shown as a table.

7.3.6 State Machine Diagrams


State machines can also be visualized with PlantUML (a graphical editor will follow). The visualization can be generated from the
context menu of the state machine. You can choose between statechart and statechart (short) : the former displays all information for the
transitions, while the latter only prints the name of the event that triggers the given transition.

Fig. 7.3.6-A shows an example visualization for the hierarchical state machine that was described in the previous section.

Figure 7.3.6-A: A visualization of a state machine in mbeddr. You can click on the states and transitions to select the respective element in the editor.

7.4 Components
Let us now introduce components to further structure the system. We start by factoring the Trackpoint data structure into a separate module,
exporting it to make it accessible from importing modules.

exported struct Trackpoint {


int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
}; ⬈
7.4.1 An Interface with Contracts
We now define an interface that processes Trackpoints. To be able to do that, we have to add the com.mbeddr.components devkit to the current
model. We can then enter a client-server interface in a new module Components⬈. We use pointers for the trackpoints here to optimize
performance. Note that you can just press * on the right side of Trackpoint to make it a Trackpoint*.

The interface has one operation process⬈.

exported cs interface TrackpointProcessor {


Trackpoint* process(Trackpoint* p)
} ⬈

To enhance the semantic "richness" of the interface we add preconditions and a postcondition. To do so, use the Add Precondition intention
on the operation itself. Please add the following pre- and postconditions (note how you can of course use units in the preconditions). The
result expression is only available in postconditions and represents the result of the executed operation.

exported cs interface TrackpointProcessor {


Trackpoint* process(Trackpoint* p)
pre(0) p != null
pre(1) p.id != 0
pre(2) p.time != 0 s
post(3) result.id != 0
} ⬈

After you have added these contracts, you will get an error message on the interface. The problem is this: if a contract (pre- or
postcondition) fails, the system will report a message (this reporting can be deactivated in case you don't want any). However,
for the program to work you have to specify a message on the interface. We create a new message list and a message.

exported messagelist ContractMessages {


message contractFailed(int8 opID, int8 constraintID) ERROR: contract failed (active)
} ⬈

You can now open the inspector for the interface and reference this message from there:

Figure 7.4.1-A: A message definition used in the interface definition to report contract failures.

There are still errors. The first one complains that the message list must be exported if the interface is exported. We fix it by exporting the
message list (via an intention). The next error complains that the message needs to have two integer arguments to represent the operation
and the pre- or postcondition. We change it accordingly (note that there are quick fixes available to adapt the signatures in the required
way); the message list shown above already reflects the final signature.

7.4.2 A first Component


We create a new component by typing component. We call it Nuller. It has one provided port called processor that provides the
TrackpointProcessor⬈ interface.

exported component Nuller extends nothing {


provides TrackpointProcessor processor
} component Nuller ⬈

After you add only the provided port, you get an error that complains that the component needs to implement the operations defined by
the port's interface; we can get those automatically generated by using a quick fix from the intentions menu on the provided port. This gets
us the following:

exported component Nuller extends nothing {


provides TrackpointProcessor processor
Trackpoint* processor_process(Trackpoint* p) <= op processor.process {
p.alt = 0 m;
return p;
} runnable processor_process
} component Nuller ⬈

The processor_process⬈ runnable is triggered by an incoming invocation of the process⬈ operation defined in the TrackpointProcessor⬈
interface. The Nuller⬈ simply sets the altitude to zero.

Let us now write a simple test case to check this component. To do that, we first have to create an instance of Nuller. We create an
instance configuration that defines exactly one instance of this component. Also, we add an adapter. An adapter makes a provided port of
a component instance (Nuller.processor) available to a regular C program under the specified name n:
instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
} ⬈

Now we can write a test case that accesses the n adapter -- and through it, the processor port of the Nuller component instance nuller. We
create a new Trackpoint, using 0 as the id -- intended to trigger a contract violation (remember pre(1) p.id != 0 ). To enter the &tp just enter a &,
followed by tp.

section testNullerFailing {
instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
}

exported testcase testNullerFailing {


initialize nullerInstancesFailing;
Trackpoint tp = {
id = 0,
time = 0 s,
alt = 1000 m
};
n.process(&tp);
assert(0) tp.alt == 0 m;
} testNullerFailing(test case)
} section testNullerFailing ⬈

Before we can run this, we have to make sure that the instances are initialized (cf. the warning you get on them); as the listing above
shows, we do this right at the beginning of the test case. We then create a trackpoint and assert that it is correctly nulled by the Nuller.


To make the system work, you have to import the Components module into the Main⬈ module so you can call the testNullerFailing⬈ test
case from the test expression in Main. In the build configuration, you have to add the missing modules to the executable (using the quick
fix). Finally, also in the build configuration, you have to add the components configuration item:

Configuration Items:
reporting: printf (add labels false)
physical units (config = Units Declarations (mbeddr.tutorial.main.m1))
components: no middleware
wire statically: false

You can now rebuild and run. As a result, you'll get contract failures:

./MbeddrTutorial
$$runningTest: running test () @FunctionPointers:test_testProcessing:0#767515563077315487
$$runningTest: running test () @Components:test_testNuller:0#767515563077315487
$$contractFailed: contract failed (op=0, pc=1) @Components:null:-1#1731059994647588232
$$contractFailed: contract failed (op=0, pc=2) @Components:null:-1#1731059994647588253

We can fix these problems by changing the test data to conform to the contract, i.e.:
section testNullerOK {
instances nullerInstancesOK {
instance Nuller nuller
adapt n -> nuller.processor
}

exported testcase testNullerOK {


initialize nullerInstancesOK;
Trackpoint tp = {
id = 10,
time = 10 s,
alt = 100 m
};
n.process(&tp);
assert(0) tp.alt == 0 m;
} testNullerOK(test case)
} section testNullerOK ⬈

Let us provoke another contract violation by returning, from the implementation in the Nuller component, a Trackpoint whose id is 0.

Running it again provokes another contract failure. Notice how the contracts are specified on the interface, but checked for each
component implementing the interface. There is no way an implementation can violate the interface contract without the respective
error being reported!

7.4.3 Verifying Contracts Statically


[ToDo: Need to fill in the static verification part here. ]

7.4.4 Collaborating and Stateful Components


Let us look at interactions between components. We create a new interface, the
TrackpointStore1⬈. It can store and return trackpoints1. Here is the basic interface:

exported cs interface TrackpointStore1 {


void store(Trackpoint* tp)
Trackpoint* get()
Trackpoint* take()
query boolean isEmpty()
} ⬈

Let us again think about the semantics: you shouldn't be able to get or take stuff from the store if it is empty, you should not put stuff into it
when it is full, etc. These things can be expressed as pre- and postconditions. The following should be pretty self-explanatory. The only new
thing is the query operation. Queries can be used from inside pre- and postconditions, but cannot modify state2

exported cs interface TrackpointStore1 {


void store(Trackpoint* tp)
pre(0) isEmpty()
pre(1) tp != null
post(2) !isEmpty()
Trackpoint* get()
pre(0) !isEmpty()
Trackpoint* take()
pre(0) !isEmpty()
post(1) result != null
post(2) isEmpty()
query boolean isEmpty()
} ⬈

These pre- and postconditions mostly express a valid sequence of the operation calls: you have to call store before you can call get, etc.
This can be expressed directly with protocols, as implemented in TrackpointStore2⬈:
exported cs interface TrackpointStore2 {
void store(Trackpoint* tp)
protocol init(0) -> new full(1)
Trackpoint* get()
protocol full -> full
Trackpoint* take()
post(0) result != null
protocol full -> init(0)
query boolean isEmpty()
} ⬈

You can add a new protocol using the respective intention. The protocol is essentially a state machine. On each operation you can specify
the transition from the old to the new state. There is one special state, the initial state (init). On a transition you can either jump into an
already existing state or create a new state and then directly move into that. For example, on the store⬈ operation you can see that we
transition from the initial state into the newly created full⬈ state. The operation get⬈ can then make use of the previously created full⬈
state and does not create a new state. It is also worth mentioning that you can reset the protocol by transitioning into the initial state again
(as done in take⬈).

The two interfaces are essentially equivalent, and both are checked at runtime and lead to errors if the contract is violated.

We can now implement a component that provides this interface: InMemoryStorage⬈. Most of the following code should be easy to
understand based on what we have discussed so far. There are two new things. First, there is a field Trackpoint* storedTP; that represents
component state.

Trackpoint* storedTP; ⬈

Second, there is an on-init runnable: this is essentially a constructor that is executed when an instance is created.

void init() <= on init {


storedTP = null;
return;
} runnable init ⬈
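
The complete component is not reproduced in the text; here is a sketch of what it could look like, using the same runnable notation as in Nuller (the actual tutorial implementation may differ in details):

exported component InMemoryStorage extends nothing {
provides TrackpointStore1 store
Trackpoint* storedTP;
Trackpoint* store_get() <= op store.get {
return storedTP;
} runnable store_get
Trackpoint* store_take() <= op store.take {
Trackpoint* result = storedTP;
storedTP = null;
return result;
} runnable store_take
void store_store(Trackpoint* tp) <= op store.store {
storedTP = tp;
} runnable store_store
boolean store_isEmpty() <= op store.isEmpty {
return storedTP == null;
} runnable store_isEmpty
void init() <= on init {
storedTP = null;
return;
} runnable init
} component InMemoryStorage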

To keep our implementation module Components⬈ well structured we can use sections. A section is a named part of the implementation
module that has no semantic effect beyond that. Sections can be collapsed.

module Components imports DataStructures {

exported messagelist ContractMessages {...}

section processor {...}

section store {
exported cs interface TrackpointStore1 {
...
}
exported cs interface TrackpointStore2 {
...
}
exported component InMemoryStorage extends nothing {
...
}
}

instances nullerInstances {...}


test case testNuller {...}
instances interpolatorInstances {...}
exported test case testInterpolator { ... }
}

We can now implement a second processor, the Interpolator⬈. For subsequent calls of process⬈ it combines the last two speeds of the
passed trackpoints (dividing their sum by an init parameter called divident; with divident = 2 this is the average). Let us start with the test
case. Note how p2's speed is changed based on the original speeds of p1 and p2.
section testInterpolator {

exported testcase testInterpolator {
initialize interpolatorInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
ip.process(&p1);
assert(0) p1.speed == 10 mps;
ip.process(&p2);
assert-equals(1) p2.speed == 3 mps;
} testInterpolator(test case)

...
} section testInterpolator ⬈
Let us look at the implementation of the Interpolator⬈:

exported component Interpolator extends nothing {


provides TrackpointProcessor processor
requires TrackpointStore1 store
init int8 divident;

Trackpoint* processor_process(Trackpoint* p) <= op processor.process {


if (store.isEmpty()) {
store.store(p);
return p;
} else {
Trackpoint* old = store.take();
p.speed = (p.speed + old.speed) / divident;
store.store(p);
return p;
}
} runnable processor_process
} component Interpolator ⬈

A few things are worth mentioning. First, the component requires another interface, TrackpointStore1. Any component that implements this
interface can be used to fulfil this requirement (we'll discuss how, below). Second, we use an init field. This is a regular field from the
perspective of the component (i.e. it can be accessed from within the implementation), but it is special in that a value for it has to be
supplied when the component is instantiated. Third, this example shows how to call operations on required ports (store.store(p);). The only
remaining step before running the test is to define the instances:

instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
} ⬈

A few interesting things are worth noting. First, notice how we pass in a value for the init field divident as we define an instance of Interpolator.
Second, we use connect to connect the required port store of the interpolator instance to the provided port store of the store instance. If you
don't do this, you will get an error on the interpolator instance, since it requires this port to be connected (there are also optional required
ports, which may remain unconnected, and multiple required ports, which can be connected to more than one provided port). Finally, the
provided interface processor is made available to other code as the variable ip. You can run the test case now. On my machine here it works
successfully :-)

To better understand the connections between component instances, there is also a graphical editor available. To switch to the graphical
wiring, you can select the respective option from the Code → Projection Mode → Component Wiring as Diagram menu.

1: sure, it is completely overdone to separate this out into a separate interface/component, but for the sake of the tutorial it makes sense.

2: Currently this is not yet checked. But it will be.

7.4.5 Mocks
Let us assume we wanted to test whether the Interpolator works correctly against the TrackpointStore interface. Of course, since we have already
described the interface contract semantically, we would find out quickly if the Interpolator behaved badly. However, we can make such
a test more explicit. Let us revisit the test from above:
exported testcase testInterpolator {
initialize interpolatorInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
ip.process(&p1);
assert(0) p1.speed == 10 mps;
ip.process(&p2);
assert-equals(1) p2.speed == 3 mps;
} testInterpolator(test case) ⬈
In this test, we expect the following: when we call process first, the store is still empty, so the interpolator stores a new trackpoint. When we
call process again, we expect the interpolator to call take and then store. In both cases we expect isEmpty to be called first.

We can test for this behavior explicitly via a mock. A mock is a component that specifies the behavior it expects to see on a provided port
during a specific test case. The crucial point about mocks is that a mock implements each operation invocation separately (the steps
below), whereas a regular component or even a stub describes each operation with a single implementation. This makes a mock
implementation much simpler -- it doesn't have to replicate the algorithmic implementation of the real component. Let us look at the
implementation:

mock component StorageMock report messages: true {


provides TrackpointStore1 store
Trackpoint* lastTP;
total no. of calls is 5
sequence {
step 0: store.isEmpty return true;
step 1: store.store {
assert 0: parameter tp: tp != null
}
do { lastTP = tp; }
step 2: store.isEmpty return false;
step 3: store.take return lastTP;
step 4: store.store
}
} ⬈

This mock component expresses that we expect 5 calls in total. Then we describe the sequence of calls we expect. The first one must be
a call to isEmpty, and we return true. Then we expect a store, and for the sake of the example, we check that tp is not null. We also store the tp
parameter in a field lastTP so we can return it later (you can add the parameter assertions and the do body with intentions). We then expect
another isEmpty query, which we now answer with false. At this point we expect a call to take, and another call to store. Note that a mock is
free to violate the interface contract (if take returned null, for example, the postcondition would be violated): pre- and postconditions are
deliberately not checked in mock components, because their checking may interfere with the expectations! We have also slightly changed
the test case: we don't assert anything about the results of the process calls:

exported testcase testInterpolatorWithMock {


initialize interpolatorInstancesWithMock;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 2,
time = 2 s,
speed = 20 mps
};
ipMock.process(&p1);
ipMock.process(&p2);
} testInterpolatorWithMock(test case) ⬈

Two more steps are required for this test to work. The first one is the instances and the wiring. Notice how we now connect the
interpolator with the mock:

instances interpolatorInstancesWithMock {
instance StorageMock storeMock
instance Interpolator ip(divident = 2)
connect ip.store to storeMock.store
adapt ipMock -> ip.processor
} ⬈

The second thing is the test case itself. Obviously, we want the test case to fail if the mock saw something other than what it expects on
its port. We can achieve this by using the validate mock statement in the test:

validatemock(0) interpolatorInstancesWithMock:storeMock; ⬈

Check out the complete testInterpolator⬈ test case.

7.4.6 Sender/Receiver Interfaces


So far we have always used client/server interfaces to communicate between components. These essentially define a set of operations,
plus contracts, that can be invoked in a client/server style. However, mbeddr comes with a second kind of interface, the sender/receiver
interface. In this case, the providing and requiring components exchange data items. To demonstrate how they work, let us explore
another aspect of the application around Trackpoints (the example is in the ComponentsSRI⬈ implementation module). The data has to be
collected in the airplane. Let us assume we have the following components:

a GPS to provide the position


a speed indicator for the speed
a flight recorder, whose job it is to create lists of Trackpoints that capture the progress of the flight

All these components are time-triggered, i.e. it is assumed that they execute at regular intervals, driven by some kind of scheduler. They all
provide an interface Timed that declares an operation tick, which is called by the scheduler. So far, these components don't exchange any
data yet: sender/receiver interfaces will be used for that later1. Here is the code so far:

module ComponentsSRI imports DataStructures {

exported cs interface Timed {


void tick()
}

exported component GPS extends nothing {


provides Timed timed
void timed_tick() <= op timed.tick {}
}

exported component SpeedIndicator extends nothing {


provides Timed timed
void timed_tick() <= op timed.tick {}
}

exported component FlightRecorder extends nothing {


provides Timed timed
void timed_tick() <= op timed.tick {}
}

Let's now look at the data exchange, focusing on the position first. Here is a sender/receiver interface, PositionProvider. The interface
declares a set of data elements, in this case with physical units:

exported sr interface PositionProvider {


int8/m/ x;
int8/m/ y;
int16/m/ alt;
} ⬈

The GPS is supposed to provide this data, so we give it a provided port with this interface:

exported component GPS extends nothing {


provides PositionProvider pos
provides Timed timed
void init() <= on init {
pos.x = 0 m;
pos.y = 0 m;
pos.alt = 0 m;
} runnable init
void timed_tick() <= op timed.tick {
pos.x++;
pos.y++;
} runnable timed_tick
} component GPS ⬈

Note how, from within component runnables, we can use expressions to assign to the data values defined in the interface as if they were
normal fields. Let us now look at the flight recorder. It is supposed to read the data written by the GPS (and likewise the data written by
the speed indicator):
exported component FlightRecorder extends nothing {
provides Timed timed
requires PositionProvider pp
requires SpeedProvider sp
Trackpoint[1000] recordedFlight;
uint16 count = 0;
void timed_tick() <= op timed.tick {

with (recordedFlight[count]) {
id = ((int8) count);
x = pp.x;
y = pp.y;
alt = pp.alt;
speed = sp.speed;
}
count++;
} runnable timed_tick
Trackpoint getFlightNo(uint16 no) <= no trigger {
return recordedFlight[no];
} runnable getFlightNo
} component FlightRecorder ⬈
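
The FlightRecorder above also requires a SpeedProvider port, whose interface is not shown in this chapter. By analogy with PositionProvider, a minimal sketch of what it could look like follows; the member name speed is taken from the sp.speed access above, but the exact type and unit are assumptions:

exported sr interface SpeedProvider {
int16/mps/ speed; // assumed type/unit, by analogy with PositionProvider
}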

Inside the with-statement, we can access the data acquired via the pp and sp required ports. What distinguishes this from global variables,
of course, is that the component instances still have to be wired: required ports have to be connected to provided ports, which in this case
defines access to the data items:

instances instances {
instance GPS gps_comp
instance SpeedIndicator indicator
instance FlightRecorder recorder
connect recorder.sp to indicator.speed
connect recorder.pp to gps_comp.pos
adapt gps -> gps_comp.timed
adapt si -> indicator.timed
adapt rec -> recorder.timed
} ⬈
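
To illustrate how the adapters gps, si and rec might be driven, here is a minimal sketch of a scheduler loop (purely illustrative; wait_ms is an assumed platform function, and the time-triggered runnables mentioned in the footnote below would eventually replace such hand-written code):

void schedulerLoop() {
while (true) {
gps.tick(); // GPS updates its position data
si.tick(); // speed indicator updates its speed data
rec.tick(); // flight recorder samples position and speed
wait_ms(100); // assumed platform function: wait for the next time slot
} while
}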

This part of the tutorial only provided a few examples of Interfaces and Components. For a full discussion of Interfaces and Components,
see Section Interfaces and Components.

1: Note that this time-triggered architecture is very widespread in embedded software. In future releases of mbeddr we will provide direct support for time-triggered
runnables, so you don't have to use an explicit interface such as Timed.

7.4.7 Composite Components


Components are great for structuring your code into reusable pieces. But typically, when your systems grow, you want to reuse whole
subsystems. To the outside they look exactly like any other component, but on the inside they are again wired up from many
components. This can be achieved with mbeddr's composite components.
A composite component, like any other component, can require and provide interfaces from/to its environment. In addition to regular
components, it has an extra code block called internal instances for instantiating other components and wiring them up.
Let's build an imaginary GPS parser that can be reused. The first interface we create is a client/server interface which accepts single
characters. Those characters are consumed by a line parser, which tokenizes the stream of characters into whole lines and at the same time
figures out which command was sent by the client. Each line is then fed into another component, which in turn delegates the command to a
command handler. This command handler parses the command and transforms the data into a struct that can be easily consumed by a
client. The client of the GPS parser does not need to know how the parsing works internally, nor that it is split up into separate
components. All the client has to do is feed the system the stream of characters and consume the events produced
by the GPS parser. To achieve this we create a composite component which provides the CharParser⬈ and requires the
NmeaRmcEventProcessor⬈. In the wiring we instantiate the respective components and wire them up. Finally, we delegate the outside
required and provided ports. The whole code for the composite component is shown below. Don't forget to add the composite components config item
to the build configuration.
exported composite component NmeaStack {

provides CharParser charParser


requires NmeaRmcEventProcessor rmcEventProcessor

internal instances {
instance NmeaLineParserImpl lineParser
instance NmeaSentenceHandlerImpl sentenceHandler

instance NmeaRmcHandlerImpl rmcHandler


connect lineParser.sentenceHandler to sentenceHandler.sentenceHandler
connect multi sentenceHandler.commandHandlers to rmcHandler.nmeaCommandHandler

delegate charParser to lineParser.charParser


delegate rmcEventProcessor to rmcHandler.nmeaRmcEventProcessor
}
} ⬈

7.4.8 Interface Types


In a few rare cases you want to store a required port in a local variable (e.g., when iterating over a bunch of multiports and filtering
out some of them). In even rarer cases you may want to pass a required port as an argument to a function or runnable you are calling, or
store it in a global variable. To address these use cases, mbeddr introduces a type called interface. In order to use an interface in
such an interface type, the interface has to be specially marked with an annotation (added via the Toggle Can be used as Type intention).

exported cs interface NmeaCommandHandler can be used as type {


boolean canHandle(string command)
void handle(int8[MAX_NMEA_WORD_COUNT]* words, uint8 wordCount)
} ⬈

void sentenceHandler_parseSentence(int8[MAX_NMEA_WORD_COUNT]* words, uint8 wordCount) <= op sentenceHandler.parseSentence {


interface<NmeaCommandHandler> handlerToCall = null;
for (i ++ in [0..MAX_NMEA_COMMAND_HANDLER_COUNT[) {
if (commandHandlers[i] != null && commandHandlers[i].canHandle(words[0])) {
handlerToCall = commandHandlers[i];
} if
} for
handlerToCall.handle(words, wordCount);
} runnable sentenceHandler_parseSentence ⬈

The example above would not actually need the interface type, as the handle runnable could be called right in place.
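
To illustrate the rarer use case -- passing a port as an argument -- here is a minimal sketch of a helper that takes an interface-typed parameter (hypothetical code, not part of the tutorial module):

// hypothetical helper: receives any port typed with the NmeaCommandHandler interface
void dispatchCommand(interface<NmeaCommandHandler> handler, int8[MAX_NMEA_WORD_COUNT]* words, uint8 wordCount) {
if (handler != null && handler.canHandle(words[0])) {
handler.handle(words, wordCount);
} if
}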

7.4.9 Visualizing Components


mbeddr's diagramming capabilities are put to use in two ways in the context of components: component/interface dependencies and
instance diagrams.

* Component/Interface Dependencies Select a component or an interface and execute the Visualize action from the context menu (or
press Ctrl + Alt + V ). Fig. 7.4.9-A shows the result.

Figure 7.4.9-A: The interface/components dependency diagram shows all components visible from the current module, the interfaces, and the provided (solid lines) and
required ports (dashed lines).

* Instance/Wiring Diagrams You can also select an instance configuration and visualize it. You'll get a diagram that shows component
instances and their connections (Fig. 7.4.9-B).

Figure 7.4.9-B: This diagram shows component instances and their connectors. The labels in the instance boxes contain the instance name and the component name
(after the colon). The edges represent connectors; their labels show the required port (before the arrow), the provided port name (after the arrow), and the name of the
interface used by the two ports (on a new line).

7.4.10 Contract Verification


mbeddr comes with support for statically verifying the contracts of components. This verification is based on C-level verification with CBMC
(for a more detailed discussion of formal verification with mbeddr, see Section Functional Verification). Let's set up our
tutorial to use verification by verifying the InMemoryStorage component. To do so, first add the com.mbeddr.analyses.componentcontracts devkit to
the model that contains the component code. Then use an intention to add the verifiable flag to the component. To make the verification
work, you will have to provide some more information in the inspector:
entry point: verification
loops unwinding: 2
unwinding assertions: false

Let us look at the three parameters you have to set here. The first one determines from where the program is "executed". The entry point
should be selected to be "close" to the to-be-verified component (if you verify the whole system, then, at least for big systems, verification will
take a long time). In our case we use a special test case named verification, which looks as follows:

instances verificationInstances {
instance Interpolator interpol(divident = 2)
connect interpol.store to store.store
instance InMemoryStorage store
adapt verificationInterpolator -> interpol.processor
} ⬈

exported testcase verification {


initialize verificationInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
verificationInterpolator.process(&p1);
verificationInterpolator.process(&p2);
} verification(test case) ⬈

The second parameter in the configuration determines how many times loops are unwound. You should start with low numbers to keep verification
times low. Finally, the third parameter determines whether the verification should fail in case it cannot be proven that the loop unwinding number is
sufficient. You can now run the verification by selecting the component and executing the Verify Component action. After a few seconds,
you'll get a result table that reports everything as ok (see Fig. 7.4.10-A): every precondition of every operation in every provided port has
been proven to be correct.

Figure 7.4.10-A: The table that shows the verification results; everything is ok in this case.
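
For orientation, the three settings map roughly onto the command-line interface of CBMC, which mbeddr invokes on the generated C code. The invocation below is an approximation for illustration, not the literal command mbeddr runs (file and function names are placeholders):

cbmc generated_module.c --function <entry_point_function> --unwind 2

The --unwinding-assertions option is added when unwinding assertions is set to true.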

Let us introduce an error. The following version of the trackpointStore_store runnable does not actually store the trackpoint. This violates the
postcondition, which claims that storedTP != null . Note that for the analysis to work, all paths through the body of a function (or a runnable)
must end with a return (you'll get an in-IDE error if you don't do this).

void trackpointStore_store(Trackpoint* tp) <- op store.store {


return;
}

Let us rerun the verification. Now we get an error, as shown in Fig. 7.4.10-B. Note how the lower part of the table now shows the
execution trace that led to the contract violation. You should check the Call/Return checkbox to filter the trace to call/return
granularity instead of showing every statement. You can also double-click on the trace elements to select the corresponding program element in the
code.

Figure 7.4.10-B: The table that shows the verification results; now we have an error, and the trace in the bottom half shows an example execution that led to the error.

7.5 Decision Tables


Let us implement another interface, one that lets us judge flights (we do this in a new section in the Components module). The idea is that
clients add trackpoints, and the FlightJudger computes some kind of score from them (consider some kind of biking/flying competition as the
context):

exported cs interface FlightJudger {


void reset()
void addTrackpoint(Trackpoint* tp)
int16 getResult()
} ⬈

Here is the basic implementation of a component that provides this interface.


exported component Judge extends nothing {
provides FlightJudger judger
int16 points = 0;
void judger_reset() <- op judger.reset {
points = 0;
}
void judger_addTrackpoint(Trackpoint* tp) <- op judger.addTrackpoint {
points += 0; // to be changed
}
int16 judger_getResult() <- op judger.getResult {
return points;
}
}

Of course, the implementation of addTrackpoint that just adds 0 to the points doesn't make much sense yet. The number of points added
should depend on how fast and how high the plane (or whatever) was going. The following screenshot shows an embedded decision
table that computes the points (notice how we mix the components language, the decision tables and the units in one integrated program):

Figure 7.5-A
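
Since the decision table itself is only shown as a screenshot, the following plain-C sketch gives a rough idea of the logic it encodes. The 2000 m threshold is an assumption inferred from the testJudging test case below and from the non-determinism discussed in Section 7.5.1; the real table may differ:

// rough hand-written equivalent of the decision table in Fig. 7.5-A
// (assumed column conditions: alt <= 2000 m vs. alt >= 2000 m,
// which overlap at exactly 2000 m -- see Section 7.5.1)
if (tp->alt >= 2000 /* m */) {
points += 20;
} else {
points += 0; // the table's default return value
}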

You can create the decision table yourself by typing the keyword dectab -- this instantiates the concept. To add a column, hit enter
in one of the cells. To add a row, move your cursor to the left side of the table (between the default return value and the table) and hit
enter. Now, let us write a test. Of course, we first need an instance of Judge:

instances instancesJudging {
instance Judge theJudge
adapt j -> theJudge.judger
} ⬈

Below is the test case. It contains two things you may not have seen before. First, there is a second form of the for statement that iterates over
a range of values; the range bounds can be inclusive or exclusive (changed via an intention). In the example we iterate from 0 to
4, since 5 is excluded. Second, the introduceunit construct can be used to "sneak" a unit into a regular value. This is useful for interacting with
non-unit-aware (library) code. Note how the expression for speed expresses the same thing without introduceunit in this case.
Any expression can be surrounded by introduceunit via an intention.

exported testcase testJudging {


initialize instancesJudging;
j.reset();
Trackpoint[5] points;
for (i ++ in [0..5[) {
points[i].id = ((int8) i);
points[i].alt = 1810 + 100 * i m;
points[i].speed = 130 mps+ 10 mps* i;
j.addTrackpoint(&points[i]);
} for
assert-equals(0) j.getResult() == 0 + 0 + 20 + 20 + 20;
} testJudging(test case) ⬈

7.5.1 Verifying the Decision Table


So far so good -- the test case passes. However, as with many tests, this one only tests part of the overall functionality. In fact, you may
have noticed that an error lurks inside our decision table: for an altitude of exactly 2000 m, the table is non-deterministic, since both conditions in the
column header apply! We could find this problem with more tests, or we could use formal verification as described in Section Checking
Decision Tables.

8 Process Support
8.1 Requirements
8.1.1 Overview
mbeddr ships with a language for capturing, documenting and maintaining requirements. To use it, add the com.mbeddr.reqtrace devkit to
your model. In this documentation we refer to a number of requirements modules, in particular FlightJudgementRules⬈,
ArchitecturalComponents⬈ and UseCases⬈.

8.1.2 Requirements Basics


Requirements are contained in requirements modules. An example is FlightJudgementRules⬈. Like C implementation modules,
requirements modules can import other modules. The (exported) contents of the imported modules then become visible in the current
module.
Each requirement is described by a number of characteristics; an example is shown in Fig. 8.1.2-A. Each requirement has a unique ID
(InitialNoPoints), a summary (Initially you have no points. ), a requirements kind (/functional), a number of tags (none here), a detailed textual
description (only one line given here), plus optional additional data (here we show a work package). The number (1 in the example) is
automatically determined by the tree structure of the requirements. To reference a requirement, the unique ID is used, not the
number. We discuss some of these characteristics below.

Figure 8.1.2-A ⬈

Hierarchy Requirements are stored in a hierarchy. Each requirement can have children, as can be seen from InFlightPoints⬈. The
semantics of hierarchical nesting are not rigorously defined, but nesting is usually assumed to represent refinement (i.e., additional detail).

Kind Each requirement has a kind. The kind represents the nature of the requirement; existing kinds include functional, non-functional
and technical. Language extension can be used to define arbitrary additional kinds. In addition to being a label (to sort or query
requirements), the kind can also be used to enforce the presence of certain additional requirements data nodes (discussed next).

Additional Data In addition to the description, a requirement can have data objects. As usual, these are extensible and can represent
any additional structure. Examples include the (demo) business rules in PointForATrackpoint⬈, tables as in priceDep⬈ or architectural
components as shown in Driver⬈. The requirements kind described above can include constraints that enforce certain kinds of data (e.g.,
a timing requirement may require a timing spec data object).

Cross References An important aspect of requirements are their relationships: a requirement can establish relations to other
requirements. The requirement shown in Fig. 8.1.2-B shows both kinds of cross references. First, cross references can be added to the
prose description using the @req word (press Ctrl + Space anywhere in the prose block to insert such special words). The other alternative
is to use special data objects (such as requires also and conflicts with). The former approach expresses a generic relationship; the latter ones are
qualified. Of course, additional relationships can be created using language extension.

Figure 8.1.2-B ⬈

Tags Tags are similar to data objects, but they are "syntactically smaller". Existing tags include the requirements status (you can set it
to draft, accepted, etc.), the estimated effort for implementing the requirement, as well as a general string tag represented by the @ sign.

The requirements language is intended to be used by (among others) non-programmers. They may not be familiar with pressing
Ctrl + Space to enter new program elements. Hence, there is a special projection mode Editor Helper Buttons that projects buttons into a
requirements module for adding child requirements or data and for changing the nesting hierarchy (see Section Projection Modes on how to switch
projection modes).

8.1.3 Visualizations and Tree Views


The requirements visualization shows the upstream and downstream dependencies of a particular requirement. An example is shown in
Fig. 8.1.3-A. In addition, several tree views are available that show the requirements hierarchy, the dependencies, and the up- and
downstream dependencies of a requirements module (shown in Fig. 8.1.3-B).

Figure 8.1.3-A: A visualization of the dependencies of a requirement.

Figure 8.1.3-B: The tree view that shows the dependencies of a requirements module.

8.2 Tracing
A trace is a pointer from some implementation artifact to one or more requirements1. To use traces, make sure you use the
com.mbeddr.reqtrace devkit in your models.

For an example of a trace, see the StateMachines⬈ module. It has traces attached to a number of C constants and to various parts of the
state machine; an example is also shown in Fig. 8.2-A. Make sure you select a projection mode that lets you actually see the traces (see below).

Trace Kind Each trace has a trace kind associated with it. Trace kinds characterize the nature of the trace relationship. While the set of
trace kinds is extensible, mbeddr ships with three default trace kinds: implements, exemplifies and tests. Note that a trace kind can restrict the
kinds of requirements it can trace to, and it can also restrict to which nodes the trace can be attached. While the default trace kinds have
no such constraints, it is useful in general to enforce specific semantics through those constraints.

Projection Modes There are three different modes in which traces can be shown (and they can be switched like all other projection modes,
see Section Projection Modes):

Detailed Traces: The trace kind and the trace target(s) are shown on the right side of the traced node (see Fig. 8.2-A).
Compact Traces: Only a small [T] is shown on the right side of the traced node (see Fig. 8.2-B). A tooltip shows the first trace target
and the trace kind. The kind and targets can be changed in the inspector of the [T] node.
No Traces: The traces are not shown (even though, of course, they are still in the code).
Figure 8.2-A: The detailed trace mode in mbeddr.

Figure 8.2-B: The compact trace mode in mbeddr.

To get a better feel for the projection modes we suggest you play around with them in the StateMachines⬈ module which contains traces
to requirements.

8.2.1 Attaching and Removing Traces


To attach a trace to a given requirement, the requirements module that contains the target requirement must be imported into the client
module (using the regular module imports). You can then use the Attach Trace annotation to attach a trace to any node. By default, the
implements trace kind is used, and you can use regular code completion to select a target requirement. Note that if you use the
Compact Trace projection mode, you will have to use the inspector to change the kind or the target requirement(s).

To remove a trace, you can simply use the Backspace key when the trace itself is selected.

8.2.2 Reverse Tracing


Traces always point towards a requirement. However, you often want to see which artifacts trace to a given requirement, i.e., you want to
reverse the direction. You can do this via the customized Find Usages Settings... context menu: if you select Traces in the dialog (see Fig.
8.2.2-A), the result will contain a separate section with only the traces to the respective requirement. Fig. 8.2.2-B shows an example
result. Section Find Usages explains the find usages facility in general.

Figure 8.2.2-A: Selecting the Traces as the target for the Find Usages.

Figure 8.2.2-B: Finding the traces to a requirement via Find Usages.

8.2.3 Trace Report Assessment


The find usages facility discussed in the previous section provides an interactive means to find out which code locations trace to a
requirement. But sometimes you want to get an overview. For this purpose, you can use the traces assessment. Assessments are reports
that query the model and list the results; they are explained in more detail in Section Assessments.

An example traces assessment can be found in Traces⬈. In the query, you have to select the requirements module for whose
requirements you want to run the assessment. The results then show each requirement and the code locations from which it is traced.

1: Strictly speaking, it can also point to other traceable elements; mbeddr is extensible in this way. But by default, only requirements are supported as trace targets.

8.3 Assessments
An assessment is essentially a report over a model: it performs a query and then reports the results. The queries are
customizable; there are dozens of different assessments, and new ones can be developed using language engineering. The structure
of the result data items also differs from assessment to assessment. Check out the following assessments to get an impression:

Visualizations⬈ reports a simple list of all program elements that have a visualization associated with them. The visualization can be
rendered directly from the assessment result item (using Ctrl + Alt + V or the context menu). This assessment always uses the
current model as the scope for the report; it has no further configuration options.
Traces⬈ shows a trace report. The query can be parametrized with the requirements module for which it should be run. The
result shows various properties; you can use Ctrl + Click to navigate to the result nodes themselves. This assessment
also uses grouping.
EffortsOfWorkPackages⬈ shows an assessment that can be parametrized in various ways; the results use custom cells to show a
progress bar.

Since assessments and their state are persisted, they can be used to assess, or audit, models over time. As we discuss below, it is easy to
track changes to the result set over time.

Setting up an Assessment Assessments live in an Assessments root. Once such a root is created, any number of assessments can be
added. Each assessment has a name and a query. The query is a separate node that performs the actual search; it may also have
parameters that can be set by the user. Queries are the primary extension point; new queries can be developed via language extension.

Updating and the Color Code Once a query is defined, you can use Update to execute it (either via an intention or via the context
menu). Updating the assessment reruns the query. However, the old results are not replaced by the new ones. Instead, the system
performs a kind of diff:

Results that are no longer in the new result set are deleted.
Results that are in the new set but were not in the old one are marked as new; they get an orange bar in front.
Results that are in the new as well as in the old result set are marked with a grey bar.

By using this color coding scheme, users can always see which result items have been added during the latest update. This is very useful
for ongoing audits of models, because it is immediately obvious which result items may require attention (see the next paragraph).

Checkboxing and Errors A particularly interesting use case for assessments is the detection and tracking of smells, style guide
violations or other things that may have to be fixed or changed. The assessment can be used to find and report them. The user then goes
through all of the result items and decides for each of them what to do:

The user may fix the offending code; upon update of the assessment, the corresponding result item goes away.
Alternatively, the user may decide that a particular smell is actually ok in this case; the result item must then be marked as "it's ok,
don't bother me again".

To support the latter case, the must be ok option can be set on an assessment. Every result item then gets its own checkbox. Checking this
checkbox means that "this result item is ok", i.e., it should not be marked as an error (the colored vertical bar becomes blue in this case).
It is also possible to hide all items whose checkbox is checked by selecting the hide ok ones option. Finally, an
assessment that is marked as must be ok results in a regular type system error if it has one or more result items that are not marked as ok.

8.4 Documentation Language


8.4.1 Basic Ideas
Writing documentation for code is annoying, since you always have to copy code snippets into the document (as text or as a screenshot),
make sure they are formatted nicely, and -- most importantly -- keep them up to date as the code changes. The mbeddr documentation
language avoids this problem: it treats documentation just like code, since the documentation language is simply another language written
in MPS. Consequently, it is trivial to refer to code, to make sure that the references are refactored along with the code itself (in particular in case
of renames), and, using a few tricks, you can even "virtually embed" code into documents.

The user guide you are reading right now is written with this language. Consequently, a good way to learn mbeddr's documentation
language is to switch off Presentation Mode (via the Code → Projection Modes menu) and take a look at how the user guide is built. We'll explain
some basics in this chapter, but otherwise recommend that you just look at the user guide⬈.

In addition, you can also check out this video: http://www.youtube.com/watch?v=-kMrvreg6n0

8.4.2 Target Platforms


We support several target platforms for the documentation language:

Presentation Mode: Most importantly, we support the Presentation Mode where the document is rendered in MPS with a nice,
readable projection and all images, listings or code snippets shown inline.
LaTeX: A generator creates LaTeX sources from documentation language Documents. While this works well in principle, there are
several layout problems with image sizes. After the sources have been generated, they have to be processed by LaTeX manually
(e.g., by invoking xelatex on them). More detailed documentation is still [ToDo: ].
HTML: Similar to LaTeX, we can also generate HTML sources. We use this to generate the online version of this user guide.

8.4.3 Core Concepts


In this section we describe the most important abstractions used in the documentation language. Together with the user guide as an
extensive example, you should be able to find your way around the language.

Structure Here are the most important structural concepts:

Documents: Documents are the main roots in which documentation language content lives. As usual in mbeddr, Documents can
express dependencies on other Documents.
Sections: A document contains sections. Sections have a name (used for referencing them) as well as a title. Sections can be
nested. They are also automatically assigned a number (even though this does not work correctly in Presentation Mode). A
reference to a section is created via the @sect word.
Chapter: There is also a kind of "special section", the Chapter. It is similar in all respects to sections, except that it is a chapter, so it
can be treated specially in transformations.
Paragraphs: Inside sections there are all kinds of paragraphs.

Paragraphs The paragraph is an abstract concept of which many subconcepts exist. They hold the meat of the document:

image: This represents an embedded image. You specify a path prefix (see Configuration and Resources below), and then you can
use code completion to select an image file under that path that should be shown in the editor.
embed as text: This embeds arbitrary MPS nodes into the document. When rendered to HTML or LaTeX, the embedded code will
be "copied to text", so this really only makes sense for notations that have a meaningful textual representation (program code,
essentially).
embed as image: Similar to the previous one, but the MPS nodes are rendered as an image (screenshot) when serialized to LaTeX
or HTML.
visualization: References a program node that has a visualization. When rendered out, the visualization is created in real time and
embedded.

Words Inside text paragraphs, you can use various special words. Words that start with an @ are references; those that start with a \ are
formatting options. They can be entered by pressing Ctrl + Space
inside text blocks. The default text for references can be overridden in the inspector view.

There are the usual formatting options for \bold, \emph, \math, \key, \menu and \code. These can also be entered by selecting existing text
and then pressing Ctrl + Alt + T to bring up the wrapper intentions menu.
There are references to other sections (@sect), and to figures, images or code embedded as images (via @fig).
You can point to other resources in the file system using @attachment
You can reference MPS nodes via @node
You can reference MPS modules via @language, @solution, @generator or @devkit.
You can embed URLs using \url
Footnotes are added via \footnote
Todos can be left in the text via \todo

Configuration and Resources Every document must refer to a Configuration node. It is mostly used in the context of LaTeX or HTML
rendering. However, it also defines the default path where the visualizations store their temporary images. Most importantly, it defines the
path prefixes under which images (and other attachments) are searched. The paths are always relative to the current solution's root directory.
When defining image paragraphs or attachments, you use one of the paths defined in the Configuration as the path prefix and select an image
"below" this path prefix. This is supported with code completion.

Building For consuming documents inside MPS using the presentation mode, no build is necessary. However, in case of generating
LaTeX or HTML, a DocumentExport node is needed to configure the generation process. [ToDo: ]

8.5 Code Review


8.5.1 Overview
Code reviews are an important ingredient of many development processes. The idea is that in addition to writing the code, the code is
also reviewed by another person as a means of quality assurance and joint learning. To this end, it must be tracked which parts of the
code are ready for review, which parts have been reviewed (and by whom), and which parts have changed since the review (and hence
need to be re-reviewed). In mbeddr, we support this process with the following approach:

In principle, all program nodes can be reviewed separately. For pragmatic reasons, we currently restrict reviews to root nodes, as
well as the first level of children (i.e., module contents such as functions, components, or interfaces in C implementation modules). In
a future version there will be preferences that define the granularity of review specifically for each project.
The review information is stored in the program node itself, so it is persisted along with the code itself. It survives branching etc.
Intentions are used to change the review states.

8.5.2 Projection Modes


The code review data is shown with the review state annotation and a custom background color. Sometimes, however, you don't want to
see this information in the code. Hence there are two projection modes: the review state and the background colors are only shown if you check
the Review State option in the Code → Projection Mode menu. Make sure you enable the display of the review state so you can follow the next
subsections.

8.5.3 Review States


mbeddr supports several states as part of the review process:

new: The code has just been written. No review-related information is stored in the code. For example, the FlightJudger⬈ component
is in this state.
ready: When the developer thinks the code is finished, it can be marked as ready. The code gets a yellow background and an
annotation that states that the code is ready. See Judge2⬈ for an example.
reviewed: Once a reviewer has reviewed the code, it is marked as reviewed. The code gets a green background. Check out the node
instancesJudging⬈ for an example.
raw: After the code has been marked as ready or reviewed, it may be modified again. At this point, of course, the review is invalid. The
code then becomes raw (red background). The test case testJudging⬈ is an example.

When a piece of code is marked as ready or reviewed, we create a hash of the code structure. This hash is stored in the code review
annotation, inside the code. By recalculating the hash and comparing it to the one stored in the code, we can detect whether the code has
been changed. However, because of the performance implications, we don't recalculate this hash automatically. Instead, you have to use
the Reevaluate Review intention on the reviewed node. Alternatively, you can reevaluate all review states from one single location; this is
discussed below in Section Assessments.
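
Conceptually, the re-evaluation performs the following check (a C-style sketch with assumed names; mbeddr's actual implementation lives inside MPS and is not part of the generated code):

#include <stdbool.h>
#include <stdint.h>

typedef struct Node Node;
uint32_t structuralHash(const Node* n); // assumed: hash recomputed over the node tree
uint32_t storedReviewHash(const Node* n); // assumed: hash stored when marked ready/reviewed

// the review is stale -- and the code becomes 'raw' -- if the hashes differ
bool reviewIsStale(const Node* n) {
return structuralHash(n) != storedReviewHash(n);
}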

8.5.4 Assessments
There is an assessment (see Section Assessments for details about assessments) to show the global state of the code review.Fig. 8.5.4-
A shows an example.

Figure 8.5.4-A ⬈

The query that shows the code review state is called code review summary. In the query you can set the scope to either a single chunk, a
single model, or a model including its imports. The results show the state of the review for the various reviewable nodes (including the color
code); the second column is clickable, so you can jump directly to the mentioned section of code.

Updating the assessment also re-evaluates all the hashes of the reviewed sections of code. So the assessment, in addition to showing an
overview, is also the central place from which to reevaluate all review states.

8.6 Product Line Variability


mbeddr supports two kinds of variability: runtime and static. Runtime variability makes the decision about which variant to execute as the
program runs. The binary contains the code for all options. Since the code must be aware of the variability, the underlying language must
know about variability. Static variability makes the decision before program execution. In particular, the MPS generators remove all the
program code that is not part of a particular variant. The binary is tailored, and the mechanism is generic -- the target language does not
have to be aware of the variability. We discuss both approaches in this chapter.

A discussion of the trade-offs between static and runtime variability is beyond the scope of this user guide: it is not a tutorial on product
line engineering in general, but only a tutorial on how to do it with mbeddr.

8.6.1 Specifying Variability


Both approaches have in common that in mbeddr, you first describe the available variability on an abstract level that is unrelated to the
implementation artifacts that realize the variability. We use feature models for this aspect. A feature model describes the available
variability (aka the configuration space) in a software system. Let us start by creating a feature model that describes variability for
processing flights. To do so, add the com.mbeddr.cc.variability devkit to your model and add a VariabilitySupport node to your program.
VariabilitySupport nodes contain feature models as well as configurations. An example can be found in FlightVariability⬈.

Feature Models Feature models map a configuration space. An example feature model is shown in Fig. 8.6.1-A. A feature is simply a
named entity. Features can have children (subfeatures). A feature specifies a constraint over its subfeatures that determines how they
can be selected. The following four tree constraints exist:

! (mandatory): all child features are mandatory; in a valid configuration, all of them must be selected.
? (optional): any combination of the child features can be selected in a valid configuration; in
other words, all children are optional.
xor (exclusive): exactly one subfeature must be selected in a valid configuration.
or (n-of-m): one or more (but not zero) of the subfeatures must be selected in a valid configuration.

Figure 8.6.1-A ⬈
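
To make the four tree constraints concrete, here is a small C sketch of what "valid" means for a parent with three child features (illustrative only; si is true iff child feature i is selected):

#include <stdbool.h>

bool validMandatory(bool s1, bool s2, bool s3) { return s1 && s2 && s3; }
bool validOptional(bool s1, bool s2, bool s3) { return true; } // any combination
bool validXor(bool s1, bool s2, bool s3) { return s1 + s2 + s3 == 1; } // exactly one
bool validOr(bool s1, bool s2, bool s3) { return s1 + s2 + s3 >= 1; } // at least one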

In addition, a feature model can also define derived features. These are essentially just macros, i.e., expressions over the existing feature
tree. The features everything⬈ and empty⬈ are examples.

Features can also have attributes; for an example see maxSpeed⬈ below. These are (more or less primitively) typed attribute
declarations. In a valid configuration, all attributes of all selected features must have a value.

Configurations Once a feature model is defined, you can specify configurations, or variants. A configuration has a name and selects any
valid (in terms of the constraints discussed above) subset of the features defined in the referenced feature model. The example below has
three configurations: cfgDoNothing⬈, cfgNullifyOnly⬈ and cfgNullifyMaxAt200⬈. Note how cfgNullifyMaxAt200⬈ specifies a value of 200
for the attribute associated with the maxCustom⬈ feature. An example configuration is also shown in Fig. 8.6.1-B.

Note that, in order for the mechanisms described below (static and runtime variability) to work, you need to make sure that the feature models
on which the variability relies are actually imported into the respective implementation modules.
Figure 8.6.1-B ⬈

8.6.2 Runtime Variability


As mentioned above, runtime variability means that we evaluate a configuration at runtime and, based on the evaluation, make
decisions about program execution. Take a look at the following piece of code:
Trackpoint processTrackpoint(fmconfig<FlightProcessor> cfg, Trackpoint tp) {
Trackpoint result;
variant<cfg> {
case (nullify && maxCustom) {
result = process_nullifyAlt(tp);
if (tp.speed > maxCustom.maxSpeed) {
result.speed = maxCustom.maxSpeed;
} if
}
case (nullify && max100) {
result = process_nullifyAlt(tp);
if (tp.speed > 100 mps) {
result.speed = 100 mps;
} if
}
case (nullify) { result = process_nullifyAlt(tp); }
default { result = process_doNothing(tp); }
}
return result;
} processTrackpoint (function) ⬈

Variant-dependent behavior This function takes two arguments, one of which is an fmconfig. This data type holds a configuration for the
specified feature model (FlightVariability⬈ in this case). Inside the function we use the variant statement to branch the code based on the
selected features. The variant statement works essentially like a switch statement, but it "switches over" feature configurations. Note that the
conditions in the case parts may only access the features defined in the feature model mentioned in the enclosing variant statement.

Since variant is a statement, it can only be used in statement context -- so it cannot be used to vary arbitrary behavior as expressed, for
example, with state machines. Of course, similar language constructs could be built for other DSLs.

Note also that we can access the values of attributes associated with features. In the code above we access the maxSpeed⬈ attribute
and use it as a normal expression. Note that only attributes of selected features can be accessed; for unselected features, no value is available.

Storing Configurations in Programs In order to change program behavior based on feature configurations at runtime, the
configuration has to be stored in the program itself. The test case testRuntimeVar⬈ exemplifies this. Note how we declare an fmconfig
variable for a given feature model and then use the store config statement to store a specific configuration in the variable. We then call the
above-mentioned function, which uses the variant statement to exploit the data.
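
Since mbeddr is projectional, the exact rendering of these statements may differ from plain text; conceptually, the test case does something like the following sketch (the textual form of the store config statement is an approximation):

fmconfig<FlightProcessor> cfg;
store config cfgNullifyMaxAt200 into cfg; // approximation of the 'store config' statement
Trackpoint result = processTrackpoint(cfg, tp); // behavior now depends on the stored configuration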

8.6.3 Static Variability


Static variability refers to modifying the code statically (before it is executed) based on configuration models. In mbeddr this means that
various transformations handle the variability. In the following paragraphs we describe the various ingredients of static variability in
mbeddr.

Presence Conditions A presence condition is a Boolean expression over features that can be attached to any program element. During
transformation, the program element is deleted if, based on the selected configuration, the Boolean expression evaluates to false.
Presence conditions are essentially a kind of "structured #ifdef". In the two functions process_trackpoint⬈ and testPresenceConditions⬈,
presence conditions are attached to several statements.
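
As a plain-C analogy (not mbeddr syntax): a statement annotated with the presence condition nullify && !max100 behaves, after transformation, as if it were guarded by a preprocessor conditional:

/* plain-C analogy of a presence condition: mbeddr evaluates the feature
expression against the selected configuration during transformation and
deletes the annotated element if the expression is false */
#if defined(NULLIFY) && !defined(MAX100)
result = process_nullifyAlt(tp);
#endif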

The program elements to which presence conditions are attached are color-coded; the color depends on the expression. This means that
all program elements annotated with the same presence condition get the same color, making it easy to spot disparate parts of a
program that rely on the same variant configuration.

Presence conditions are attached to program nodes with the Toggle Presence Condition intention.

Projection Modes Variability-aware code can be projected in various ways (switchable via the Code → Projection Mode menu). The Detailed
Product Line mode shows the presence conditions inline in the code (above the annotated node). The Compact Product Line mode just shows small
markers in the code; hovering over a marker shows the presence condition in a tooltip. The third mode shows the program as it would
look for a given variant (as if the transformation were executed directly in the editor). For this to work, the system has to know which
variant should be rendered. To this end, a so-called Variant Selection annotation must be attached to the respective root node (it must be on
the root node!) using an intention. This annotation allows the selection of a feature model and a particular configuration. Once one is
specified, the Selected Variant projection mode can be selected.

To better understand these modes, please go to StaticVariability⬈ and experiment for yourself.

Conditional Replacements Presence conditions are useful for blocks, statements, or other structural parts of programs. However,
sometimes you only want to change the value of an expression. Exchanging the complete statement (e.g., in case of a variable declaration)
has the problem of changing the identities of the variables, which leads to all kinds of downstream problems. It is hence much better to
change the value of a variable by exchanging an expression. Conditional Replacements and Conditional Switches can be used for this. You can
also use Conditional Alternatives, as discussed below.

Take a look at testConditionalReplacement⬈. In the case of v1⬈ we use a conditional replacement to exchange the value 10 with 0 if the
nullify feature is selected. A conditional replacement replaces an expression with one other expression, based on a feature condition; it's a
bit like an if statement. However, sometimes an expression must be replaced with several different ones, based on a set of feature conditions.
A conditional switch can be used for this; it is more like a switch statement. Take a look at v2⬈ for an example. Both conditional
replacements and conditional switches are attached with intentions.

Conditional Alternatives Presence conditions are good for exchanging code blocks or statements. Conditional replacements and switches
are good for replacing expressions. What's missing is a way to replace things that have a name and can be referenced. The problem is
that all references to such an element break if it is removed via a presence condition. To solve this problem, mbeddr has a third option, the so-
called conditional alternative.

Take a look at alternatives⬈. We declare a function add⬈ which we call from the test case testConditionalAlternative⬈. Now let's imagine
we wanted to replace this function with another one based on a presence condition. If we did that, we'd also have to use presence
conditions on all call sites of the function. This is annoying. Conditional alternatives solve this problem in the following way:

You mark the original function as conditional original.


You then write one or more alternative functions. Each is marked as a conditional alternative: it points to the original, it has a presence
condition that determines when it should be used, and its name must start with the name of the original plus an underscore.
When the transformation runs, the original is replaced with a suitable alternative, and all references are updated automatically.

Note that this does not just work for functions but for anything that has a name and can be referenced.
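
As a sketch of the convention (illustrative; the function name add is taken from alternatives⬈, everything else, including both bodies, is made up):

// marked as 'conditional original'
int8 add(int8 a, int8 b) {
return a + b;
}

// marked as 'conditional alternative': points to add, carries a presence
// condition (e.g. nullify), and its name is the original's name plus an underscore suffix
int8 add_nullified(int8 a, int8 b) {
return 0; // made-up variant behavior
}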

Building Variable Systems In order to build variant-aware software, you have to specify the configurations for all involved
feature models. To do this, add a variability mappings configuration item to the build configuration (please see [ToDo: ] for a
general discussion of mbeddr's build system). In this item you can list all relevant feature models plus their configurations. Check out this
build configuration as an example:
BuildConfiguration (m.t.m.defaultExtensions)⬈. Note that you will get errors in the build configuration, and sometimes during generation
itself, if no configuration is specified for a feature model used in the system.

As explained in [ToDo: ], each model can only have one build configuration. With respect to building variability-aware software, this results in
two alternative setups:

You can either have one model that contains your variable code (i.e., programs with presence conditions and the like) as well as a
build configuration. By changing the variability mappings in the build configuration, you can build different variants of the software.
However, since they are all in the same model, you can only build one variant at a time!
The alternative is to have one model with your variability-aware code (let's call it S), and then an additional model for each variant
you want to build (models V_1 through V_n). The additional models V_i import the model S and each contain a build configuration with
a suitable variability mapping. This way, each of these models builds a separate variant. They exist in parallel, so you can automate
the build for all the variants at the same time.

9 Analyses
9.1 Introduction to Analyses
Besides increasing programmer productivity, domain-specific languages allow the definition of advanced, user-friendly and domain-
specific analyses. One special kind of analysis is formal verification, which uses mathematically precise methods for checking properties of
mbeddr code. In mbeddr we have integrated different formal verification techniques, aiming for their use on a continuous basis in the everyday
work of practicing engineers. As of today, we have implemented two different kinds of analyses:

Formal verification of variability: mbeddr features advanced support for the definition of product lines using feature models,
configuration models and the annotation of artifacts with presence conditions. We analyze the consistency of feature models,
of configuration models and of the annotated artifacts. To do this we use the SAT4J SAT solver (http://www.sat4j.org/).
Formal verification of mbeddr-C programs: once programs are written in mbeddr, developers can use advanced formal
verification techniques to prove the absence of bugs or, when the analyses are incomplete, to search for subtle bugs. Our focus
is on the user-friendliness of the verification: all analyses are targeted at practicing embedded engineers. For verification we use the
CBMC C-level model checker (http://www.cprover.org/cbmc/).

9.2 Analyzing variability


mbeddr allows the definition of product lines with the help of feature models, configuration models and the attachment of presence
conditions to the produced artifacts (e.g., program code, requirements). Defining and using product lines in mbeddr is described in
Section Product Line Variability.

Based on the above content, mbeddr provides the following analyses:

FM-Analyzer checks whether a feature model can be instantiated (i.e., whether there exist configuration models that conform to the
feature model). Examples of inconsistencies are mandatory features that conflict with each other, or features that require other
features although their parents cannot be part of the same configuration model.
CM-Analyzer checks whether a configuration model is consistent with the feature model that it configures. Examples of
inconsistencies are when not all mandatory features are selected or when conflicting features are selected.
Modules-Analyzer checks whether an mbeddr-C program whose parts are annotated with presence conditions is consistent.
An example of an inconsistency is a function declaration that is annotated with a presence condition while calls to this
function are not annotated with a presence condition: the function may then be called in a variant in which the function itself is not included in
the generated code, causing generator or compile errors.

Variability analyses can be run both at a fine-grained level, on individual feature models and configuration models, and at a coarser level,
on whole models and solutions that use variability support.
9.2.1 FM-Analyzer
In the figure below we present an example of a feature model defined in mbeddr.

Figure 9.2.1-A: A feature model contains features and relations among them.

The FM-Analyzer can be started by right-clicking on the feature model name and selecting the corresponding menu entry.

Figure 9.2.1-B: One-click starting of the FM-Analyzer.

FM-Analyzer will open a window with the result of the analysis. If the analysis fails, FM-Analyzer provides an explanation
of the cause of the failure.

Figure 9.2.1-C: Results provided by the FM-Analyzer. The failure is caused by the fact that the feature 'NoDebug' cannot be selected in any configuration since it
conflicts with the 'Optimization' feature which is mandatory.

9.2.2 CM-Analyzer
In the figure below we present an example of a configuration model defined in mbeddr. Each configuration model contains a subset of
features from the feature model it configures.

Figure 9.2.2-A: A configuration model contains a sub-set of features.

The CM-Analyzer can be started by right-clicking on the configuration model name and selecting the corresponding menu entry.

Figure 9.2.2-B: One-click starting of the CM-Analyzer.

CM-Analyzer will open a window with the result of the analysis. If the analysis fails, CM-Analyzer provides an explanation
of the cause of the failure.

Figure 9.2.2-C: Results provided by the CM-Analyzer. The failure is caused by the fact that the feature 'Communication' cannot have both sub-features 'CAN' and
'FlexRay' at the same time.

9.2.3 Modules-Analyzer
In the figure below we present an example of an mbeddr implementation module which has presence conditions attached to several
entities.

Figure 9.2.3-A: An implementation module with annotated presence conditions.

The Modules-Analyzer can be started by right-clicking on the implementation module name and selecting the corresponding menu entry. It
checks whether the implementation module is consistent with respect to the feature model. Examples of inconsistencies are cases where
a function (or variable) declaration has a presence condition, while the calls (or references) to this function (or variable)
do not have any presence condition. In this case, code generation can produce a compile error.

Figure 9.2.3-B: One-click starting of the Modules-Analyzer.

The Modules-Analyzer opens a window with the results of the analysis. If the analysis fails, the Modules-Analyzer provides an
explanation of the cause of the failure by giving a concrete configuration (a selection of features) that would trigger it.

Figure 9.2.3-C: Results provided by the Modules-Analyzer. The failure is caused by the fact that the function 'log_debug_info' has the presence condition 'MinimumDebug'
while its caller has no presence condition annotated.

9.2.4 Modules and Solutions Variability Analyzer


Whenever a model contains a root node of type VariabilitySupport, right-clicking on the model offers a check of the consistency of the
variability definition. Whenever a model contains an ImplementationModule that references a feature model, right-clicking on the model offers a
check of the consistency of the variability use.
Figure 9.2.4-A: Direct start of 'variability definition' and 'variability use' checks for models.

When a solution contains models that define feature models, then we can check the variability consistency (both definition and use) for
the entire solution.

Figure 9.2.4-B: One-click start of both variability definition and variability use checks for solutions.

9.3 Formal Verification with CBMC


To perform formal verification, we have integrated CBMC (http://www.cprover.org/cbmc/), a C-level model checker that is freely available
under a BSD-4-clause license. Below we reproduce the 'advertising requirements' of the CBMC license:

"This product includes software developed by Daniel Kroening, ETH Zurich and Edmund Clarke, Computer Science Department,
Carnegie Mellon University."

The focus in mbeddr is on hunting bugs at the unit level in a continuous and user-friendly manner. In general, we do not aim to prove
the correctness of a system, but rather to help users find bugs. Each run of an analysis yields one of three results: SUCCESS (colored
green), meaning that no bug could be found; FAIL (colored red), meaning that a bug was found; or DON'T KNOW (colored
orange), meaning that no bug was found but the verification is incomplete. When a bug is found, in most cases a trace
through the system (a counterexample) leading to the failure is provided.

We distinguish between the following kinds of analyses:

Robustness verification at C level: checks robustness properties such as the absence of null-pointer dereferences, divisions by
zero, overflows, and out-of-bounds array accesses. A more in-depth presentation of robustness
checks is given in Section Robustness Checks at C-level.
Robustness verification of C extensions: automatically checks the robustness of several mbeddr extensions (analogous to the C-
code robustness checks). For example, a robustness check for a state machine
ensures that all of its states can be reached and all of its transitions can fire. More details are provided in Section
Robustness Checks of mbeddr-Extensions.
Functional verification: mbeddr users can define domain-specific properties to be checked on a given
program; such properties usually originate from functional requirements. They can be expressed in code either as assertions
or attached to higher-level constructs such as component interfaces. More details are provided in Section Functional Verification.

9.3.1 Verification primer


Before you start The verification tool that we use runs on the generated C code (not on the mbeddr models themselves).
It therefore assumes that valid C code can be generated from your models 1 (otherwise CBMC will complain and no verification will be
performed).

Before starting each verification we should ask ourselves the following questions:

Q1) What will be checked? The first step is to be aware of which properties will be checked and which will not be
checked :-) Also, be sure to understand which sub-system will be verified.
For example, when performing robustness analyses of C code (e.g. searching for divisions by zero) with a function as entry point, only
the division operations reachable from that entry point will be checked (see below).
Q2) What is the verification entry point? Each run of CBMC must be given the entry point in the program from which it should
perform the verification. If nothing is specified, the entry point is the function main. Code not reachable from the entry point will
not be checked at all!
Q3) What are the environment conditions? Each sub-system (e.g. function, component) on which a verification is performed is
embedded in a bigger system from which it gets its data. Usually, the data types used in the code are far more permissive than
the actual values a parameter can take (e.g. for a function that computes the distance a car can travel in a certain time period,
given a parameter currentSpeed of type int32, only a small sub-interval of int32 is relevant, since cars can neither travel arbitrarily fast
nor have large negative speeds).
Q4) What are the parameters given to CBMC? CBMC takes a large number of parameters that directly affect the results of the
verification. The most important parameter controls loop unwinding 2: CBMC performs a stepwise symbolic execution of
the code and needs to know how many times loops should be unwound. A small example follows this list.
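
To make Q4 concrete, here is a minimal plain-C example of why the unwinding depth matters (hypothetical code; the assertion is the
property under check):

#include <assert.h>

void count_down(int n) {
  int steps = 0;
  while (n > 0) {   // CBMC unrolls this loop up to the unwinding depth
    n--;
    steps++;
  }
  assert(steps >= 0);   // only checked for the unrolled iterations
}

If the chosen unwinding depth is smaller than the number of iterations the loop can actually perform, behavior beyond that bound is
simply not explored. The unwinding assertions described in Section Basic Configuration make CBMC report insufficient unwinding
instead of silently missing bugs.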

9.3.1.1 Behind the Curtain

mbeddr allows the definition of higher-level, domain-specific properties. Our approach to checking them is to generate C-level labels (or
asserts) for each of them and to check the reachability of these labels (or the violation of the asserts). In some cases a reachable label
means that the property is violated (e.g. for decision tables); in other cases labels that are not reachable represent property violations
(e.g. for the verification of state reachability in state machines: if the label corresponding to a state is not reachable, then that state is
not reachable). Encoding high-level properties as labels (or assertions) and lifting the verification results back is done automatically and
transparently for the user.
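
As a schematic illustration of this encoding (not the actual generated code), a property can be lowered to a C label whose reachability
CBMC then checks:

int handle_event(int state, int event) {
  if (state == 0 && event == 1) {
    trans_Init_start: ;   // if CBMC proves this label unreachable,
    return 1;             // the corresponding transition can never fire
  }
  no_cell_matched: ;      // if this label IS reachable, a decision
  return state;           // table, for example, is incomplete
}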

In order not to clutter the production code with unnecessary lines, the labels used for the verification are generated only during a model
'rebuild' started by an analysis action - please make sure that the code you analyze is rebuilt before the analysis
starts.
Figure 9.3.1.1-A: An intuitive overview of how we integrate CBMC in mbeddr: users write high-level functional properties, these are translated into low-level C
checks, CBMC is run, and the produced result is lifted back to the domain level.

1: If an advanced environment specification is used, the generated C code might also contain CBMC-specific macros, so it need not compile with a regular compiler.

2: loop unwinding is also known as loop unrolling

9.3.2 Basic Configuration


Before starting mbeddr verification, one needs to perform the following steps:

Step 1: Add the corresponding verification devkit. Analyses are provided through a series of devkits that must be enabled for the
model in which the analysis is run. Without these devkits enabled, some analyses are not possible at all, or the interpretation of
the program trace will not be lifted to the DSL level.
Step 2: Make sure that C code is generated from the model you check. The verification is performed directly on the generated C
code. Thus, if no valid C code is generated from your models, no verification will be done. Do not forget to add all
implementation modules to the build configuration.

Global settings for CBMC-based analyses can be changed via the "File → Settings" menu by selecting the CProver preferences tab, as shown
below.

Figure 9.3.2-A: CBMC global settings.

The meaning of these settings is briefly described below:

Number of parallel threads defines the maximum number of CBMC instances that are started simultaneously.
Timeout represents the time budget allocated to an analysis. After the timeout expires, the CBMC process is killed and the
analysis is stopped. When a high-level analysis starts several instances of CBMC (e.g. robustness
analyses start one CBMC instance for each atomic robustness check), users can additionally define a timeout for the atomic analyses,
denoted as the partial timeout.
Use of VCC/GCC under Windows. If VCC is used, mbeddr should be started from the Visual Studio console. If GCC is used,
'gcc' must be on the path (see Section Verification Tools).
Rebuild models before analysis instructs mbeddr to perform a full "MPS rebuild" of the analyzed model before each analysis
(otherwise only an "MPS make" is run; in MPS, a plain make does not always work properly).
Slice formula instructs CBMC to perform program slicing. The analyses run faster; however, if an analysis fails,
the returned counterexample might not contain all intermediate steps.
Unwinding depth instructs CBMC how many times to unwind loops. If generate unwinding assertions is selected, CBMC
will explicitly check that each loop was unwound often enough. For more details about loop unwinding in CBMC, see
http://www.cprover.org/cprover-manual/cbmc-loops.shtml
Partial loops instructs CBMC to continue the analysis even if the unwinding depth is insufficient. Enabling partial loops increases the number
of bugs that can be found; however, it might introduce false positives (properties reported as SUCCESS even though they should FAIL)
and false negatives (properties reported as FAIL even though they should be SUCCESS; in these cases the provided counterexamples are
not feasible). Infeasible counterexamples can also be returned when the property does indeed FAIL.
Use refinement instructs CBMC to use its refinement procedure. If checked, this setting speeds up some analyses. However, it should
be used with care, since it is still marked as 'experimental' in CBMC 4.9.
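
Although mbeddr assembles the CBMC invocation for you, most of these settings correspond directly to CBMC command-line options.
As a rough orientation, an approximately equivalent manual call might look as follows (option names as documented in the CBMC
manual; the exact command line mbeddr builds may differ):

cbmc generated.c --function main --unwind 5 --unwinding-assertions --slice-formula --partial-loops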

9.3.3 Robustness Checks at C-level


For checking the robustness of C programs, we need to add the com.mbeddr.analyses.core devkit, as shown in the following figure. A C program
is called 'robust' if no run-time errors (e.g. divisions by zero, null-pointer dereferences) can occur. To search for such errors,
CBMC checks each place in the generated C code where an error could occur (e.g. each division operation, each pointer
dereference). The properties to be checked are generated automatically by CBMC and state, for example, that all divisions have
non-zero divisors.

Figure 9.3.3-A: Adding the com.mbeddr.analyses.core devkit to your model will enable robustness checks.

To start the robustness checks for a given function 1, you can either open a pop-up menu on the function or define a robustness analysis
configuration 2.

Figure 9.3.3-B: One click start of robustness checks.

Figure 9.3.3-C: Creation of a new 'robustness analysis configuration item'.

Figure 9.3.3-D: For each configuration item, users can fine-tune which robustness properties are checked and the entry point for the verification.
Figure 9.3.3-E: To verify an analysis configuration item, one just needs to right-click and select the menu entry.

When the analysis is started from the pop-up menu, all robustness properties are checked. Below we summarize the robustness
properties (details can be found on the CBMC web page http://www.cprover.org/cprover-manual/properties.shtml and in its user guide);
a small code example follows the list:

array bounds checks that each array access is within bounds (a.k.a. buffer-overflow checks).
div-by-zero checks that the divisor of each division is non-zero.
not-a-number checks whether floating-point computations can result in NaN.
pointer checks that pointers are valid whenever they are dereferenced (no NULL-pointer dereferences and no accesses to invalid pointers
such as dead objects).
overflows/underflows checks that no signed/unsigned/float over- or underflow can occur.
memory leak checks that all memory allocated by the analyzed code is also deallocated before exit.
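
As a minimal illustration (hypothetical code, not from the mbeddr samples), consider the following function. The div-by-zero check
generates the property 'timeSec != 0' at the division:

#include <stdint.h>

int16_t average_speed(int16_t distance, int16_t timeSec) {
  // Nothing rules out timeSec == 0 here, so the div-by-zero property
  // FAILs and CBMC returns a counterexample with timeSec == 0.
  return distance / timeSec;
}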

1: All analyses are performed on the code of the function set as entry point and, transitively, on the functions called from it

2: Most analyses can either be run with one click (in that case the global settings (see Section Basic Configuration) are used and the entry point is
'main') or via 'analysis configurations'. An analysis configuration enables the specification of entry points (e.g. verification environments) and the fine-
granular definition of CBMC parameters.

9.3.4 Robustness Checks of mbeddr-Extensions


Higher-level language constructs carry with them higher-level properties that can be checked. For example, when programming with
decision tables, we might ask whether a table is complete (have we covered all cases?) or consistent (are there
inputs for which multiple cells could be active at the same time?). Another example is state machines, for which we might ask whether
all states are reachable (i.e. there are no superfluous states) and whether all transitions can fire (i.e. no transition is completely shadowed
by previous transitions). The sketch below states the two decision-table checks operationally.
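
For a 2x2 table whose cells are guarded by row conditions r1, r2 and column conditions c1, c2, the checks amount to the following
(a simplified sketch with hypothetical conditions; the actual encoding uses the label mechanism from Section Behind the Curtain):

#include <stdbool.h>

bool complete(bool r1, bool r2, bool c1, bool c2) {
  // Completeness: every input activates at least one cell.
  // CBMC searches for an input for which this returns false.
  return (r1 || r2) && (c1 || c2);
}

bool consistent(bool r1, bool r2, bool c1, bool c2) {
  // Consistency: no input activates two cells at once, i.e. the row
  // conditions (and the column conditions) are mutually exclusive.
  return !(r1 && r2) && !(c1 && c2);
}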

9.3.4.1 Checking Decision Tables

Let's consider a decision table that implements a look-up table to compute a braking distance, given the current speed and the
information whether the road is icy.

Figure 9.3.4.1-A

A decision table can be verified only if the 'checked' annotation is enabled as shown below. This flag will instruct the C-code generator to
generate labels for each of the properties to be checked.

Figure 9.3.4.1-B: To verify a decision table one needs to make it 'checked' by using an intention.

It is possible to start the verification with one click from the pop-up menu of the decision table node, or through an analysis configuration
as described for the robustness checks. The one-click start uses the main function as entry point; when an analysis
configuration is used, another entry point can be specified (usually a harness - see Section Defining the Environment for
Verification).

Figure 9.3.4.1-C: One-click starting of the decision table verification.

Figure 9.3.4.1-D: A configuration item for verifying a decision table.

After the verification is finished, the results (i.e. have we missed cases? are there inputs for which two cells of the table are active at the
same time?) are shown automatically. When a result fails, a trace through the system is given that shows example values that
could cause the failure.

Figure 9.3.4.1-E: A trace to the failure will be shown if the verification result is selected.

9.3.4.2 Checking State Machines

The first step in checking state machines is to add the com.mbeddr.analyses.statemachines devkit to the model containing the state machine. This
devkit enables actions in context menus and the lifting of counterexamples so that they are aware of state machines.

Let's consider a state-machine that implements a simple counter. After the state-machine is started (with the start event), it counts up or
down.
statemachine Counter initial = Init {
  in event countUp(int16 step) <no binding>
  in event countDown(int16 step) <no binding>
  in event start() <no binding>
  var int16 counterState = 0

  state Init {
    on start [ ] -> Counting
  } state Init

  state Counting {
    on countUp [ ] -> Counting { counterState += step; }
    on countUp [step < 0] -> Init
    on countDown [ ] -> Counting { counterState += step; }
    on countDown [step < 0] -> Init
  } state Counting
}

A state machine can be verified only if the 'checked' annotation is enabled, as shown below. The checked flag is enabled via an intention
on the state machine. This flag instructs the C-code generator to generate labels for each of the properties to be checked (see Section
Behind the Curtain).

checked

The verification can be started either with one click from the pop-up menu of the state-machine node or through an analysis configuration.
The one-click start uses the main function as entry point and the global settings for CBMC; when an analysis configuration is
used, another entry point can be specified and the settings can be fine-tuned.

Figure 9.3.4.2-A: A configuration item for verifying state-machines.

After the verification is finished, the results are shown automatically - i.e. if a state cannot be reached or a transition cannot fire, the
result is marked as FAIL. Since such a failure means that the generated labels cannot be reached, there is no trace through the system in this case.

Figure 9.3.4.2-B: Results of the state machine verification. Two transitions cannot be fired since they are shadowed by previous transitions.

9.3.5 Functional Verification


Often we want to check properties in the code that are relevant to the business domain of our application and originate from
requirements. To do so, we need to be able to express such properties in the code. In mbeddr, one can use plain C-level assertions
or extended assertions, or attach properties to higher-level constructs, for example pre-/post-conditions on components.

9.3.5.1 Assertions verification

Writing assertions in C code is a 'classical' method of checking properties. In the code fragment below we present a simple assertion.

Figure 9.3.5.1-A: We specify that the time should be positive.
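
In plain C, the assertion in the figure amounts to something like the following (the variable and function names are assumed):

#include <assert.h>
#include <stdint.h>

int32_t compute(int32_t time) {
  assert(time > 0);   // 'the time should be positive'; CBMC checks
  return time;        // whether this assertion can be violated
}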

Extended assertions Using simple assertions to encode more complex properties that describe the evolution of the system
over time (e.g. 'after Q then P', 'before Q must P') is often cumbersome. As usual, mbeddr provides a set of first-class language constructs that
allow users to express such properties directly. By clicking on these constructs, you can find information in the 'Inspector' about how the
code will be generated - basically, C-level assertions wrapped in small monitors.
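
To give a rough idea of such a monitor (a schematic sketch only; use the Inspector to see the real expansion), an 'after Q then P'
property can be lowered to a flag plus an assertion:

#include <assert.h>
#include <stdbool.h>

static bool q_happened = false;   // tiny monitor for 'after Q then P'

void observe_Q(void) {
  q_happened = true;              // remember that Q occurred
}

void check_P(bool p_holds) {
  if (q_happened) {
    assert(p_holds);              // once Q has happened, P must hold
  }
}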

Figure 9.3.5.1-B: Examples of extended assertions.

The assertion verification can be started either with one click from the pop-up menu of the function or by defining an analysis
configuration item. Both ways are illustrated below.

Figure 9.3.5.1-C: One click check of all assertions that can be reached from one function.

Figure 9.3.5.1-D: Like any other analysis configuration item, an assertion analysis configuration item allows mbeddr users to fine-tune various verification parameters.

9.3.5.2 Components Verification


To enable component verification, we first add the com.mbeddr.analyses.components devkit to the current model.

mbeddr allows its users to define interfaces and components that implement them. To each of the runnables of an interface we can
attach pre-/post-conditions. Furthermore, the expected ordering of calls to the functions of an interface can be defined using a protocol
specification. Together, the pre-/post-conditions and the protocol are called the contract of the interface. The sketch below illustrates
the idea behind checking such a contract.
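
Conceptually (a hand-written sketch; mbeddr's actual label-based encoding differs), checking a contract boils down to guarding the
implementation with the precondition, the protocol state, and the postcondition:

#include <assert.h>

static int protocol_state = 0;     // 0 = uninitialized, 1 = ready

void comp_init(void) {
  protocol_state = 1;              // protocol: init enables read
}

int comp_read(int channel) {
  assert(channel >= 0);            // precondition of the operation
  assert(protocol_state == 1);     // protocol: init must be called first
  int result = channel;            // placeholder implementation
  assert(result >= 0);             // postcondition of the operation
  return result;
}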

Figure 9.3.5.2-A: Example of an interface with pre-/post-conditions and a protocol attached to its functions.

Interfaces can be implemented by components like in the code fragment shown below.

Figure 9.3.5.2-B: A component implements all methods of the interface. The contracts defined by all interfaces are automatically transferred to the component.

Having specified the contract of an interface, we would like to verify that the clients of the components implementing this interface comply
with the specified preconditions and call the interface functions in the appropriate order. Furthermore, we would like to check that the
implementation of the methods of the interface fulfills the specified post-conditions.

Figure 9.3.5.2-C: Example of a client of the component.

To check the component, we first need to enable the 'checked' attribute via an intention (with this attribute enabled, labels are generated
for the cases where pre-/post-conditions are violated).

The verification is started either with one click from the component's pop-up menu or via an analysis configuration item, which allows
specifying an arbitrary entry point. Both ways are illustrated below.

Figure 9.3.5.2-D: One click check of the contracts of all interfaces provided by this component.

Figure 9.3.5.2-E: Example of a configuration item for components analyses.

Once the verification finishes, a window with results will be automatically opened as illustrated below.

Figure 9.3.5.2-F: The results of the verification of each pre-/post-condition and protocol are displayed in the results window.

9.3.6 Defining the Environment for Verification


Most of the software modules we develop (e.g. functions, state machines, components) are embedded in bigger systems. The parts of the
system that neighbor the module under verification are called its environment. Most of the time, the data types of
the inputs of the system under verification are far more permissive than necessary; in practice, the sub-system we are verifying operates
under much tighter constraints.

Figure 9.3.6-A: Let's consider, for example, that our system under verification is a simple function that computes the speed of a car. This function takes two
parameters, a distance and a time interval, both of type int16 . Of course, the time interval cannot be smaller than zero; let's assume it is also smaller than 10. We
can further assume that the distance must be positive and cannot be bigger than 1000.

The constraints over variables and their ranges can be modeled inside harness modules. Below we show an example of such a module:

Figure 9.3.6-B: The function speedComputer is called from within the harness definition, which contains nondeterministic assignments that are further constrained (via an
intention on a nondet assign ) to the desired intervals. In the first nondeterministic assignment we use an explicit construct to specify the interval; in the second
assignment we use classical C inequalities. When the 'in range expression' is used (as opposed to explicit inequalities), the harness is generated in an optimal
manner, which increases the running efficiency of CBMC.
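
In generated-C terms, such a harness boils down to CBMC's nondeterministic values plus assumptions. A minimal hand-written
equivalent might look as follows (the speedComputer signature is assumed from the figure's description; __CPROVER_assume is CBMC's
built-in assumption primitive):

#include <stdint.h>

int16_t nondet_int16(void);   // body-less: CBMC treats the result
                              // as a nondeterministic value

extern int16_t speedComputer(int16_t distance, int16_t timeInterval);

void harness(void) {
  int16_t distance = nondet_int16();
  int16_t timeInterval = nondet_int16();
  __CPROVER_assume(timeInterval > 0 && timeInterval < 10);
  __CPROVER_assume(distance > 0 && distance <= 1000);
  // All properties are now checked only for inputs that satisfy
  // the assumptions above.
  speedComputer(distance, timeInterval);
}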

Furthermore, the sequence in which different operations are called can also be modeled in the harness. For this, we can use
the nondeterministic_choice construct, which nondeterministically chooses one of the defined cases.

Figure 9.3.6-C: We define traces of length 5 (using the for loop); at each step either fun1 or fun2 is called, each computing some value. We check that a specific
property holds for 5 calls to these functions in any order. In this manner we can, for example, model a sequence of external events, each event triggering a function
that handles it.
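
A hand-written C equivalent of such a trace harness (schematic; fun1, fun2 and the checked property are placeholders based on the
figure's description):

#include <assert.h>

int nondet_int(void);   // nondeterministic choice source for CBMC

extern int fun1(int v);
extern int fun2(int v);

void trace_harness(void) {
  int value = 0;
  for (int i = 0; i < 5; i++) {   // traces of length 5
    if (nondet_int()) {           // nondeterministic_choice between
      value = fun1(value);        // the two operations
    } else {
      value = fun2(value);
    }
  }
  assert(value >= 0);             // the property must hold on every trace
}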
