mbeddr User Guide
This is the mbeddr user guide. It targets users of mbeddr, not people who extend the mbeddr languages. We strive to cover all relevant
mbeddr functionality, but as things change, some parts may not always be up to date. Please report problems, omissions or bugs to
our mailing list at mailto:[email protected]. When you report a problem, please make sure you include a Node Info for
the problem (see Section Utilities for details on what a Node Info is).
This user guide is written with the MPS documentation language. On the plus side, this affords us very close integration with source code
and models -- these things are always up to date. On the downside, we still have some issues in the LaTeX generator; the generated
LaTeX file, and especially the embedded figures, do not yet look very beautiful. Please bear with us. It will improve.
You can use the full-text search utility provided by mbeddr: see Section Searching Models.
Finally, we want to emphasize that, in addition to this user guide, there is a lot of additional information. This includes various videos (at
http://mbeddr.com/ and at http://mbeddr.com/screencasts.html) as well as a number of articles and papers at
http://mbeddr.com/learn.html.
Table of Contents
1 mbeddr Concepts
1.1 JetBrains MPS
1.2 mbeddr
1.2.1 mbeddr Architecture
1.3 gcc, make and gdb
1.4 CBMC
1.5 PlantUML and Graphviz
1.6 Java
2 Installation and Setup
2.1 Gcc, Make and Gdb
2.2 Graphviz
2.3 Verification Tools
2.4 mbeddr
2.4.1 mbeddr IDE
2.4.2 mbeddr Plugin Distribution
2.4.3 mbeddr Source Installation
2.4.3.1 Introduction
2.4.3.2 Source Distribution Python Installer
2.4.3.3 Manual Source Installation
2.5 JetBrains MPS and Java
2.6 Adding Tools to the System Path
2.7 Required Tools and Versions
3 Fundamentals
3.1 The MPS UI
3.1.1 Overall Structure
3.1.2 Tools
3.2 Project Structure
3.2.1 Nodes, Roots and Concepts
3.2.2 Project, Solutions, Models, Folders, Roots
3.2.3 Using Languages and Devkits
3.2.4 Paths and Libraries
3.3 The Editor
3.3.1 Usability
3.3.2 Projection Modes
3.4 Keyboard Shortcuts
3.5 Version Control
3.5.1 Introduction
3.5.2 Supported Version Control Systems
3.5.3 Setting Up Version Control for a Project
3.5.4 Tools and Views
3.5.5 Common Version Control Operations
3.5.5.1 Update/Pull
3.5.5.2 Commit/Push
3.5.5.3 Conflicts/Merging
3.6 Utilities
3.6.1 Interop with Text
3.6.2 Searching Models
3.6.3 Opening Terminals
3.6.4 Find Usages
3.7 Cheat Sheet
3.8 Useful MPS Settings
4 mbeddr C vs. C99
4.1 mbeddr's Module System
4.2 Data Types
4.2.1 Boolean
4.2.2 User defined types
4.2.3 Integers
4.2.4 Type Size Configuration
4.3 Array Syntax
4.4 Various Additions
4.5 Unit Tests
4.6 Function Pointers
4.6.1 Function Types
4.6.2 Function References
4.6.3 Lambdas
4.7 Commenting
4.7.1 Statement Comments
4.7.2 Element Documentations
4.7.3 Margin Comments
4.7.4 Commenting Out Code
5 Generated Code
5.1 Efficiency
5.2 Readability
6 C Tooling
6.1 Building Binaries in the IDE
6.1.1 Build Configurations
6.1.2 Model Checking
6.1.3 Building in the IDE
6.2 Building Binaries from the Command-Line
6.2.1 Example Application
6.2.2 Writing Build Scripts
6.2.3 Building from Command-Line
6.3 Running Applications from MPS/mbeddr
6.3.1 Creating a Launch Configuration
6.3.2 Contents of the Launch Configuration
6.3.3 Console output
6.4 Debugging
6.4.1 Introduction
6.4.2 Starting a Debug Session
6.5 Hello, World
6.5.1 A new Project
6.5.2 New Model
6.5.3 Writing the Code
6.5.4 Build and Run
6.6 Accessing Libraries
6.6.1 Manual Header Import
6.6.2 Automatic Header Import
7 C Extensions
7.1 Static Evaluation
7.2 Units
7.2.1 Physical Units Basics
7.2.2 Unit Conversions
7.2.3 Generic Units
7.2.4 Stripping and Reintroducing Units
7.3 State Machines
7.3.1 Implementing a State machine
7.3.2 Interacting with Other Code -- Outbound
7.3.3 Interaction with Other Code -- Inbound
7.3.4 Hierarchical State Machines
7.3.5 Tabular Notation
7.3.6 State Machine Diagrams
7.4 Components
7.4.1 An Interface with Contracts
7.4.2 A first Component
7.4.3 Verifying Contracts Statically
7.4.4 Collaborating and Stateful Components
7.4.5 Mocks
7.4.6 Sender/Receiver Interfaces
7.4.7 Composite Components
7.4.8 Interface Types
7.4.9 Visualizing Components
7.4.10 Contract Verification
7.5 Decision Tables
7.5.1 Verifying the Decision Table
8 Process Support
8.1 Requirements
8.1.1 Overview
8.1.2 Requirements Basics
8.1.3 Visualizations and Tree Views
8.2 Tracing
8.2.1 Attaching and Removing Traces
8.2.2 Reverse Tracing
8.2.3 Trace Report Assessment
8.3 Assessments
8.4 Documentation Language
8.4.1 Basic Ideas
8.4.2 Target Platforms
8.4.3 Core Concepts
8.5 Code Review
8.5.1 Overview
8.5.2 Projection Modes
8.5.3 Review States
8.5.4 Assessments
8.6 Product Line Variability
8.6.1 Specifying Variability
8.6.2 Runtime Variability
8.6.3 Static Variability
9 Analyses
9.1 Introduction to Analyses
9.2 Analyzing variability
9.2.1 FM-Analyzer
9.2.2 CM-Analyzer
9.2.3 Modules-Analyzer
9.2.4 Modules and Solutions Variability Analyzer
9.3 Formal Verification with CBMC
9.3.1 Verification primer
9.3.1.1 Behind the Curtain
9.3.2 Basic Configuration
9.3.3 Robustness Checks at C-level
9.3.4 Robustness Checks of mbeddr-Extensions
9.3.4.1 Checking Decision Tables
9.3.4.2 Checking State Machines
9.3.5 Functional Verification
9.3.5.1 Assertions verification
9.3.5.2 Components Verification
9.3.6 Defining the Environment for Verification
1 mbeddr Concepts
1.1 JetBrains MPS
mbeddr relies on the JetBrains MPS language workbench. A language workbench is a tool that supports language engineering, the
activity of building, composing, extending and using languages. MPS can be seen as a framework on which mbeddr relies.
MPS is open source software licensed under Apache 2.0. It can be downloaded from http://jetbrains.com/MPS. mbeddr currently uses
MPS 3.1.4, even though 3.1.5 should work as well. Later in 2014 we will move to the upcoming version 3.2.
Various tutorials exist, and the MPS documentation page (at https://www.jetbrains.com/mps/documentation/) contains links to most of
these. We want to highlight in particular the book by Fabien Campagne (http://www.amazon.com/The-MPS-Language-Workbench-Vol/dp/1497378656/). Note that in order to use mbeddr, you don't have to understand the MPS language engineering facilities (this is only
necessary to extend mbeddr with new languages).
MPS relies on a projectional editor. Instead of using a parser that transforms a character sequence into a tree for further processing, in
MPS every editing gesture directly changes the tree. This has two important advantages:
flexible notations and language modularity.
Language Modularity Because a projectional editor does not use a grammar, the combination of independently developed languages
cannot become ambiguous. Potential ambiguities are delegated to the user to resolve. This way, arbitrary languages and language
extensions can be combined in one single program. mbeddr exploits this by providing a set of modular language extensions to C (and a
couple of other languages). If you are interested in the details, please refer to this paper, which covers these mechanisms as well as
mbeddr in general: http://voelter.de/data/pub/Voelter-GTTSE-MPS.pdf
Flexible Notations Since no parser is used, MPS can support essentially any notation. This includes classical text, but also prose (as in
this user guide), tables, math symbols and diagrams. We exploit this in mbeddr extensively. For an overview of the notations supported
by MPS, take a look at this paper: http://mbeddr.com/files/gemoc2014-MPSNotations.pdf.
Traditionally, the above-mentioned benefits of a projectional editor have come at a price: the user experience of traditional projectional
editors was not very good, in particular when editing code that looked like "normal text". Consequently, projectional editors were not
widely adopted. However, MPS does a much better job here. We have conducted a study with 20 MPS users (most of them mbeddr users
as well), and the conclusion is that the editor works well enough. Take a look at this paper for details:
http://mbeddr.com/files/projectionalEditing-sle2014.pdf.
1.2 mbeddr
mbeddr is essentially a set of plugins to MPS. If you consider MPS as a framework, then mbeddr can be considered an application. The
plugins contain mostly language definitions: mbeddr C as well as its extensions are languages in terms of MPS. In addition, the plugins
also contain additional utilities, libraries, views, editors, etc. However, these details are irrelevant to the mbeddr user.
In this user guide we do not explain the philosophical underpinnings, ideas and approaches of mbeddr, since these have been discussed
extensively in the various papers we have written. You can find these papers on the mbeddr website (in particular on the Learn page at
http://mbeddr.com/learn.html). To get an overview we suggest taking a look at the following two:
Figure 1.2.1-A: Overview of mbeddr's five layers and three concerns.
Implementation Concern The implementation concern addresses the development of applications based on C. On the core layer,
mbeddr comes with an implementation of C99 in MPS. There are a few minor differences to C99, and the preprocessor is not exposed to
the user; first-class concepts are provided for the legitimate uses of the preprocessor (including a module system). On the default
extensions layer, the implementation concern comes with C extensions for decision tables, interfaces and components, state machines,
physical units, testing as well as logging and tracing. The user extensions layer is by definition empty; users can easily extend the C core
as well as any of the default extensions. State-of-the-art IDE support is available for all languages, including syntax highlighting, code
completion, real-time type checks and refactorings. The implementation concern also ships with an extensible debugger that is able to
debug on the level of the extensions, so the abstractions do not break down when debugging becomes necessary. At the backend layer,
the implementation concern relies on a C compiler, a C debugger and tools for importing existing C code into mbeddr. By default, mbeddr
uses the gcc compiler and the gdb debugger.
Analysis Concern The analysis concern provides static analyses (formal verification) for some of the default extensions provided by the
implementation concern. The analyses themselves are performed by existing external tools. However, mbeddr integrates the tools tightly
by (a) providing language abstractions to conveniently describe behavior that can be analyzed, (b) translating this description to the input
of the analysis tool, (c) running the tool, and (d) lifting the output of the tool back to the original abstraction level, to make it easier to
understand for the user. The integrated analyses are based on symbolic model checking, SMT solving and C-level model-checking.
Specifically, the following analyses are available: State machines can be checked with a symbolic model checker. It verifies a set of
default properties and optional user-defined properties. Decision tables can be checked for completeness and consistency. Feature model
configurations are checked for consistency. Finally, interface contracts can be checked statically: interfaces can specify pre- and post-
conditions as well as protocol state machines that specify the valid invocation order of interface operations. These contracts can be
checked for each implementing component via a C-level model checker.
Process Concern The process concern includes facilities for integrating mbeddr into development processes. These facilities can be
used with arbitrary mbeddr languages, such as all the default and user-defined C extensions. The requirements support provides a
language for describing requirements. Traces can be attached to any program element expressed in any language. Arbitrary additional
data, expressed in any language, can be added to a requirement. The product line support allows defining feature models and configurations.
Feature models can be connected to other artifacts by means of presence conditions. While presence conditions are static and work for
any language, there is also C-specific support to evaluate variability at runtime. The documentation language aspect supports writing
prose documents as part of an mbeddr project, exportable as HTML or LaTeX. It supports close integration with program elements. They
can be referenced (with real references that are renamed if the element itself is renamed) and program code can be embedded as text or
as an image. The embedded code is updated whenever the document is regenerated. Visualization provides a facility to render diagrams.
Reports and assessments are customizable queries over the code.
1.3 gcc, make and gdb
For compiling, building and debugging mbeddr code, mbeddr relies on the established GNU C tool chain. In particular, gcc is used to
compile the generated C code, make is used to orchestrate the compile and link process, and gdb is used by the mbeddr debugger. Note
that none of these tools is visible to the mbeddr user. But they have to be available on your system (see Section Gcc, Make and Gdb
for details).
Other compilers, make tools and debuggers can be plugged into mbeddr (we have just integrated the Microchip PIC 16 for a customer).
How this is done is beyond the scope of this user guide.
1.4 CBMC
CBMC is used for verification. It is discussed in detail in Section Formal Verification with CBMC. To summarize, CBMC is a bounded
model checker for C programs. This means that it can essentially simulate the execution of a C program along all its possible
execution paths. CBMC is open source software and can be downloaded from http://www.cprover.org/cbmc/. It has to be installed
separately, as described in the installation section (Section Verification Tools). For details on how mbeddr uses CBMC, please take a look
at the following papers, as well as at [ToDo: section ]:
1.6 Java
MPS is a Java application. This means that it relies on a Java Virtual Machine (specifically, a JRE) to execute. So, in order to use MPS and
mbeddr, you have to install Java on your computer. The specific versions are discussed in the Installation section.
Note that the Java dependency is of course only for the tool; the systems developed with mbeddr are pure C.
2 Installation and Setup
2.1 Gcc, Make and Gdb
Windows: We recommend installing MinGW (see Section Required Tools and Versions), a port of the GNU tools to Windows. When
selecting the packages to be installed (only Class (bin)), make sure mingw32-gcc, mingw32-gdb and mingw32-make are included (all of them
are in the All Packages subtree in the selection dialog).
Linux: These tools should be installed by default. Otherwise use your favourite package manager to install them (see Section
Required Tools and Versions).
MacOS: You should install XCode's command line tools to get gcc, make and the associated tools (see Section Required Tools and
Versions). XCode comes with lldb, which is not yet supported by mbeddr. Therefore you have to install gdb via brew (see Section
Required Tools and Versions). Use the following command for installing gdb: brew install https://raw.github.com/Homebrew/homebrew-dupes/master/gdb.rb. Additionally, you have to sign a certificate for gdb
(http://wiki.freepascal.org/GDB_on_OS_X_Mavericks_and_Xcode_5).
After the installation, please make sure the tools can be invoked from the command-line. If this is not possible, please add them to your
system path, as described in Section Adding Tools to the System Path.
2.2 Graphviz
mbeddr supports visualization via PlantUML (http://plantuml.sourceforge.net), directly embedded in MPS. To use it, you have to install
graphviz. Section Required Tools and Versions describes which version is required and where to get it. After the installation, you
have to put the bin directory of graphviz into your system path (see Section Adding Tools to the System Path). On Windows, you
additionally need an environment variable GRAPHVIZ_DOT that points to the dot.exe file supplied with graphviz.
2.3 Verification Tools
You can check that a proper version of CBMC is correctly installed on your machine by clicking the 'Test CBMC Installation' button
in File -> Settings -> CProver Preferences.
2.4 mbeddr
Depending on what you want to do with mbeddr, we provide three different installation methods. The following list describes the general
differences between these methods; the next sections describe each of them in detail:
mbeddr IDE: sufficient if you want to use mbeddr, but not change or extend it. The mbeddr IDE is a stripped-down version of MPS
which provides just the UI facilities required for using mbeddr. It comes with the mbeddr distribution and is the easiest way
to get mbeddr running on your machine.
mbeddr Plugin Distribution: suitable if you want to use and extend mbeddr. With this approach, you install MPS manually on your
machine and deploy the mbeddr plugins into this installation.
mbeddr from Sources: use this option if you want total control over what you are doing with mbeddr or want to stay at the bleeding
edge of mbeddr development. This way you can just use mbeddr, extend it, or even change the sources and try things out. You can
pull from the current master or even check out experimental branches.
2.4.1 mbeddr IDE
install a JDK (Section Required Tools and Versions lists which version is required for your platform) on your local machine and
make sure it is available on the path (see Section Adding Tools to the System Path).
download the mbeddr IDE that fits your platform from the mbeddr GitHub releases page and unzip the downloaded ZIP file:
https://github.com/mbeddr/mbeddr.core/releases
2.4.2 mbeddr Plugin Distribution
To install the distribution, please download the distribution ZIP file from the mbeddr GitHub release page:
https://github.com/mbeddr/mbeddr.core/releases. This ZIP file contains the mbeddr plugins for MPS. After unzipping, please take all
folders inside the plugins directory and copy them into the plugins directory of your MPS installation 1. So, for example, after copying, there
should be a $MPS_DIR$/plugins/mbeddr.core directory.
1: Note that there is also a plugin (without the s) directory under MPS!
2.4.3.1 Introduction
Installing mbeddr from sources gives you total control over what you want to do with mbeddr. We recommend working with the
distribution or the mbeddr IDE instead, unless you have good reasons not to. The following list provides some of these good reasons. You may:
want to stay at the bleeding edge of mbeddr development by using the most recent version on master
plan to fork mbeddr
need to regularly update your local mbeddr
There are two ways to get the mbeddr source installation. One is the completely manual installation described below in
Section Manual Source Installation. The other is an equivalent automated, interactive process: an installer written in Python,
described in Section Source Distribution Python Installer.
2.4.3.2 Source Distribution Python Installer
The installer is hosted at http://mbeddr.fortiss.org/. To be able to run it, you have to install some of the prerequisites required by MPS and
mbeddr, as well as the installer itself.
For hints on how to install the prerequisites, especially on the Windows platform, please refer to this website:
http://mbeddr.fortiss.org/download/prereq/. After the prerequisites are installed, you are ready to run the installer. The installer will check
that you have installed everything correctly before setting up mbeddr.
To use the installer, please download it from http://mbeddr.fortiss.org/download/, run it, and then simply follow the instructions it gives.
After the installer has finished, you should have the following installed and configured:
You might still want to install other optional tools used by mbeddr, as it is described in this chapter. [ToDo: Make a reference to
gcc&make and graphviz. ]
The website with the installer provides additional information on how to run the installer, and what to do next, after it has finished.
Basically, the installer automates the steps described below in Section Manual Source Installation. Thus,
normally, you would never need to perform these steps yourself. However, we provide them for completeness and for
troubleshooting purposes, in the rare case that the installer fails to work correctly.
2.4.3.3 Manual Source Installation
The following guide explains how to install mbeddr from sources manually. Please carry out the following instructions step by step,
in the given order:
1. First, in order to clone the mbeddr repository, you will need to have a git client installed on your local machine (see Section
Required Tools and Versions).
2. Second, you will need an Apache ant installation (see Section Required Tools and Versions). After the installation, please make
sure the ant binary is in your path (see Section Adding Tools to the System Path).
3. If you do not already have an MPS installation on your machine, please follow the installation guide in Section JetBrains MPS and
Java.
4. Next, clone the mbeddr repository from GitHub onto your local machine. You can find it at https://github.com/mbeddr/mbeddr.core,
the repository URLs to be used for cloning (for various protocols) can also be found at this URL.
5. After cloning the repository, you now have to build all languages shipped with mbeddr. For this task we provide a shell script named
buildLanguages, which is located inside the mbeddr.core/code/languages directory. Depending on your operating system, you either use the
one with the .bat (Windows) or .sh (Linux/Mac) file extension. Before running the script, you first have to copy the build.properties.example
file into the same folder as the script and rename it to build.properties. Next, open this file and change the paths (on Windows, you have
to use forward slashes as directory separators!) of the following properties:
You can now execute the buildLanguages script from within the mbeddr.core/code/languages folder. At the end of the build you should get a
message stating BUILD SUCCESSFUL. Depending on the performance of your machine, running the script takes between 20 and 45
minutes.
6. We now have to make future application projects aware of the mbeddr.core languages in your local repository. Start MPS, go to
File → Settings (on the Mac it is under MPS → Preferences) and select Path Variables in the IDE Settings. Create a path variable
named mbeddr.github.core.home and point it to the root directory of your mbeddr git clone (e.g. /Users/peter/Documents/mbeddr.core).
7. Finally, go to the Global Libraries section in the IDE settings (see Fig. 2.4.3.3-A). Create a library named mbeddr.core that points to the
code directory of the GitHub repository clone (/Users/peter/Documents/mbeddr.core/code). This library must point to the code directory of
the checkout so that all languages are below it, including core and mpsutil. Because we defined the path variable earlier, MPS will
use it when defining the library. Notice that this is a global setting, which means it has to be performed only once, before
your first application project.
Figure 2.4.3.3-A
Congrats: you are now ready to build your first project with mbeddr.
2.5 JetBrains MPS and Java
Windows: If you installed MPS with the Windows-specific installer, Java was already installed along with it. Open the bin folder and
edit, depending on whether you are running mbeddr on a 32- or 64-bit machine, either mps.exe.vmoptions or mps64.exe.vmoptions. To
make MPS run smoothly, increase the MaxPermSize setting to at least 512m. This JVM setting controls how much space is occupied
by loaded Java classes, methods etc.
Linux: Make sure you have installed Java on your machine (version 1.6 or higher). Next, open the bin folder and edit
mps.vmoptions. To make MPS run smoothly, increase the MaxPermSize setting to at least 512m.
Mac: Make sure you have installed Java on your machine (see Section Required Tools and Versions). Next, open the app folder
(right-click on the application and select Show Package Contents) and open the Contents folder. Open the Info.plist file with a text editor
and navigate to the VMOptions entry at the very bottom of the file. To make MPS run smoothly, increase the
-XX:MaxPermSize setting there to at least 512m (i.e., -XX:MaxPermSize=512m).
If MPS does not start and shows the error message Could not create the VM, your system has less main memory available than
required. In this case, you should open the previously edited file (*.vmoptions or Info.plist) and decrease the value of the -Xmx parameter in
small steps until you are able to start MPS. This parameter controls the maximum amount of dynamic memory that the JVM provides to
MPS.
2.6 Adding Tools to the System Path
To change the variable locally, open the start-up script located inside your MPS installation directory. Then add the PATH
redefinition at the beginning of that file, replacing <your path> with the file system path of the tool you want to add to the path. For details
see the 'locally' entries of the table below.
To change the path globally, see the 'globally' entries of the table below. They explain how to add tools to the global PATH on the different
platforms. Here, too, you should replace <your path> with the file system path that points to your tool.
Windows, locally: open the start-up script mps.bat and add the PATH redefinition: set PATH=<your path>;%PATH%
Windows, globally: in your Windows System Settings, change the value of the PATH variable by adding your path to the existing paths: <existing paths>;<your path>
Linux/Mac, locally: open the start-up script mps.sh and add the PATH redefinition: export PATH=<your path>:$PATH
Linux/Mac, globally: in general you will have to add your tool path to the PATH variable. However, how this is done depends on your actual operating system and the shell you are using. Since nowadays almost every Unix system comes with a bash shell, we describe the process just for this scenario; in case you are using a different shell, please refer to the vendor's documentation. First, create or open your existing .bash_profile, located inside your home folder. Next, add the following line to it, save the file and restart your shell: export PATH=<your path>:$PATH
Table 2.6-A
Table 2.7-A
3 Fundamentals
3.1 The MPS UI
3.1.1 Overall Structure
Fundamentally, MPS looks and works like any modern IDE. Fig. 3.1.1-A shows an overview. We describe the various marked up parts
below.
Figure 3.1.1-A: The MPS IDE, with major components marked up.
A Menu Bar Like all IDEs and applications, MPS comes with a menu bar. Depending on your OS, it is located at the top of the MPS
window (Windows) or at the top of the screen (Mac).
B Tool Bar The tool bar contains quick access buttons for various functionalities and the run configuration selection combo box for
running and debugging programs.
C Project Explorer / Logical View The Project Explorer is the central view on the left side of the screen. It can show several different
things: the Logical View of the project (solutions, folders, models), a File System View (directories, files) as well as various favourites lists.
In the context of the user guide we mostly talk about the Logical View, which is why we use the terms Project Explorer and Logical View
synonymously.
The Logical View contains the project structure. It is the primary means for navigating around mbeddr (and MPS in general). We describe
the project structure separately below (Section Project Structure). The Project Explorer is an example of a Tool (aka View in Eclipse).
Editor The main part of the MPS window is made up of the editor pane. The editor pane contains any number of editors, each with its
own Tab. Each editor edits a so-called root (see Section Nodes, Roots and Concepts). It is also possible to split the editor pane into
several editors (arranged vertically and horizontally) through the Window menu.
Inspector The Inspector is associated with the editor. For each selected node in the currently active editor, the Inspector
provides additional (read/write) information. The particular contents of the inspector are determined by the language definition (of the
concept that defines the selected node). Sometimes it shows nothing useful, sometimes it just shows additional information (such as the
statically evaluated value of an expression), and sometimes it contains additional things that can be edited (and is hence essential for the
editing experience). We suggest always keeping it open.
G Status Bar The status bar shows status information about MPS, including the version control status, the current branch, memory
usage and various other details.
Note that the Status Bar, the Tool Bar Buttons as well as the little tool placeholder tabs can all be hidden through the View menu.
3.1.2 Tools
Tools are the windows that are arranged around the edges of the MPS window. In this section we describe the most important ones
(some language-specific tools are explained in their respective context). Generally, they can be activated through the View → Tool Windows
menu.
Tools can be arranged around the edges of the MPS window. Every Tool has a default position on one particular edge of the MPS
window. However, by right-clicking on the title bar, other edges (and a floating mode) can be selected for each Tool. Tools can also be
hidden. If not shown, the border shows a little placeholder (like the Context Sidebar tool near D on the left and H on the right). If a tool has
a number associated with it (such as the 1 in case of the Logical View), Ctrl + <Number> can be used to select this window.
Project Explorer As mentioned above, the Project Explorer (in its Logical View mode) contains the overall structure of the project. It is
the primary means of navigating around your code. We'll describe it in more detail in Section Project Structure.
Changes The Changes view shows the files that have been changed since the last commit. This is part of MPS' version control support
and is discussed in more detail in Section Version Control.
Model Checker The Model Checker shows the result of running Check Model on a project, solution, language or model. It shows warnings
and errors. Double-clicking on an element opens this element in the editor. There are various options for grouping and filtering the
contents using the buttons on the left. You should play around with these options. Fig. 3.1.2-A shows an example. Note that errors that
result from a transformation (code generation) are not shown here. They are shown in the Messages view.
Messages This view shows (mostly) messages, errors and warnings resulting from the build process. These include problems in the
transformation as well as problems with the subsequent compilation and link process. If an error shows up here during a build, it is likely that
there has already been a problem with the input models. In this case, please run the Model Checker on the respective parts of the project
and fix all errors reported there before attempting to generate.
Visualizations A visualization is a read-only diagram of some program structure. It is laid out automatically. Users can double-click on
nodes and labels to jump back into the code. Fig. 3.1.2-B shows an example. To open a visualization, users can press Ctrl + Alt + V or
use the context menu, provided the selected node defines a visualization. Note that the view also supports switching between
multiple visualizations for the same node (if the node defines several of them) through the combobox. The buttons support navigating
through the recent diagrams, zooming, and exporting the diagram as a PNG file.
Tree Views Tree views are similar to visualizations: they are contributed by particular nodes, they are shown in a special view,
double-clicking on a tree node selects the node in the editor, and a node can define zero, one or more tree view structures. They are
activated through Ctrl + Alt + O or the context menu.
Version Control This tool shows the version control console, essentially a stream of all version control operations. Typically not needed.
Explorer The explorer shows the low-level tree structure of a node. It is more of a debugging tool and is not used regularly. It can be
opened for a node by selecting the node and choosing Show Node in Explorer from the context menu. Fig. 3.1.2-D shows an example.
3.2 Project Structure
3.2.1 Nodes, Roots and Concepts
Figure 3.2.1-A: This figure shows two roots (grey boxes). Each of them has a number of nodes inside (the letters). The solid lines are the parent-child relationships. The
dotted lines are the cross-references. Note that references can cross root boundaries.
Each node is an instance of a language concept. The language concept defines all properties of its instance nodes, including its visual
representation (notation, concrete syntax). Concepts are created as part of a language. Fig. 3.2.1-B shows this relationship.
3.2.2 Project, Solutions, Models, Folders, Roots
Figure 3.2.2-A: A typical MPS project structure. Details are discussed in the running text.
Models MPS is fundamentally a file-based system. The roots discussed above are stored in XML files (these can then be managed with
existing version control systems, see Section Version Control). These files are called models in MPS. Each model contains a number of
roots (which then have arbitrary substructure). In Fig. 3.2.2-A, the purple diamonds with the little M inside are models. Models also
determine which languages are used for the programs/roots/nodes inside the model. Models can import each other in order to be able to
establish cross-references. These dependencies are established in the Model Properties dialog (Dependencies tab).
Models are important because they represent the only real "physical structure" in MPS. Hence they are the granularity relevant to version
control (since they are files, it is models that are seen by file-based tools such as git). Also, the MPS transformation and generation
engine always processes complete models. So in order to keep (partial) generation times low, you should split your overall system into a
reasonable number of (relatively small-sized) models. See Chapter Structuring Complex Projects for details.
Solutions From a user's perspective, a solution is where application code lives (this is in contrast to languages, which is where language
definitions reside). Technically, a solution is a way to group models. A solution can contain any number of models, and each model resides
in exactly one solution. In Fig. 3.2.2-A, the orange S is a solution. Solutions can also have dependencies on other solutions.
Project A project contains several solutions. Any particular solution can be used in several projects, so projects do not own the solutions
(in contrast to models, which are owned by a solution). A project is really more like a workspace in Eclipse: it's a collection of solutions
which you want to work on for a particular task. The project defines various settings, preferences and view options.
3.2.3 Using Languages and Devkits
In the Used Languages spec, users can also add dependencies on devkits. A devkit is a group of languages. For example, there are
several languages that deal with requirements in mbeddr. Instead of adding all of these manually to any given model, users can simply
add the com.mbeddr.reqtrace devkit to the model. This simplifies configuration and avoids clutter.
3.2.4 Paths and Libraries
In order not to hard-code absolute paths in the definition of libraries, you can define a path variable. A path variable is a name (such as
MyProjectDir) plus a directory (such as /Users/markus/mbeddr/project1). If you now define a reference to a library in, say,
/Users/markus/mbeddr/project1/my/folder/sol.mps, then this is stored in the project as ${MyProjectDir}/my/folder/sol.mps, even though this is not visible in
the UI. If the same library is accessed on another computer, that computer's definition of the MyProjectDir variable is used to resolve it.
In addition, Section Keyboard Shortcuts lists a lot of important keyboard shortcuts. Please take the time to read that section!
The cheat sheet in Section Cheat Sheet also summarizes them again.
All notations can be switched from one common menu location: Code → Projection Modes → ... . See also Fig. 3.3.2-A. The menu shows only
those options that are valid for the languages used in the current model, and the selection is specific to any given editor/root. The specific
entries are discussed in the context of their respective languages.
Figure 3.3.2-A: The menu to change projection modes. The specific entries are discussed in the context of their respective languages.
Entering Code In MPS you can only enter code that is available from the code completion menu. Using aliases and other "tricks", MPS
manages to make this feel almost like text editing. Here are some hints though:
As you start typing, the text you're entering remains red, with a light red background. This means the string you've entered has not
yet bound.
Entered text will bind if there is only one thing left in the code completion menu that starts with the substring you've typed so far. An
instance of the concept selected by the text you have entered will be created and the red color goes away.
As long as text is still red, you can press Ctrl + Space to explicitly open the code completion menu, and you can select from those
concepts that start with the substring you have typed in so far.
If you want to go back and enter something different from what the entered text already preselects, press Ctrl + Space again. This will
show the whole code completion menu.
Finally, if you're trying to enter something that does not bind at all because the prefix you've typed does not match anything in the
code completion menu, there is no point in continuing to type; it won't ever bind. You're probably trying to enter something that is not
valid in this place. Maybe you haven't included the language module that provides the concept you have in mind?
Some language features may only be accessible via an intention. If you don't know how to go on, press Alt + Enter and see if the
intentions window has something useful to offer.
Entering Expressions Expressions can be entered linearly. So if you have a number literal 42 and you want to change that expression to
42 + 2, you can simply move to the right side of the 42 and type + and then 2. However, if you want to add something on the left side of
an expression (e.g. changing 42 to 10 + 42), then you have to move to the left side of the 42 and enter + as well; you can then enter 10.
Parentheses Imagine you have already entered 4 + 2 * 3. The * binds more tightly, so the tree root is the +, with the 4 and the * as children.
If you now want to change this to (4 + 2) * 3, you have to set parentheses. This is supported in a natural way: just move to the left of the 4
and enter an opening parenthesis. It will remain red, because the balancing parenthesis is still missing. You can now go to any other place
in the expression, in this case to the right of the 2, and enter the closing parenthesis. The tree is restructured accordingly.
Navigation Navigation in the source works as usual using the cursor keys or the mouse. References can be followed ("go to definition")
either by Ctrl + Click or by using Ctrl + B. The reverse is also supported: the context menu on a program element supports Find Usages.
This can also be activated via Alt + F7. Within an mbeddr program, you can also press Ctrl + F12 to get an outline view that lists all top-level
or important elements in that particular program, so you can navigate to them easily. Ctrl + E will show you the recently closed editors, in
case you've closed too many. More generally, Ctrl + Tab can be used to switch between windows in MPS.
Selection Selection is different from normal text editors. Ctrl + Up/Down can be used to select along the tree. For example, consider a
local variable declaration int x = 2 + 3 * 4; with the cursor at the 3. If you now press Ctrl + Up, the 3 * 4 will be selected, because the * is the
parent of the 3. Pressing Ctrl + Up again selects 2 + 3 * 4, and the next Ctrl + Up selects the whole local variable declaration. You can also
select with Shift + Up/Down. This selects siblings in a list. For example, consider a statement list as in a function body:
void aFunction() {
int x;
int y;
int z;
}
Imagine the cursor in the x. You can press Ctrl + Up once to select the whole int x; and then you can use Shift + Down to select the y and z
siblings. Note that the screencast mentioned above illustrates these things much more clearly. We recommend again to watch it, if you haven't
done so yet.
Deleting Things The safest way to delete something is to mark it (using the strategies discussed in the previous paragraph) and then
press Backspace or Delete. In many places you can also simply press Backspace behind, or Delete before, the thing you want to delete.
Adding List Elements If you are in a list context (e.g. in function arguments or statement lists) you can press Enter to add an element
after the current one; you can press Shift + Enter to add one before the current element. You can also type the list separator (if one is defined) to add
a new element. For example, in an argument list, you can type a comma to add another argument.
Intentions Some editing functionalities are not available via "regular typing", but have to be performed via what's traditionally known as a
quick fix. In MPS, those are called intentions. The intentions menu can be invoked by pressing Alt + Enter while the cursor is on the
program element for which the intention menu should be shown (each language concept has its own set of intentions). For example,
module contents in mbeddr can only be set to be exported by selecting export from the intentions menu. Explore the contents of the
intentions menu from time to time to see what's possible. Note that you can just type the name of an intention once the menu is open;
you don't have to use the cursor keys to select from the list. So, for example, to export a module content (function, struct), you type Alt + Enter,
e, x, Enter.
Surround-With Intentions Surround-With intentions are used to surround a selection with another construct. For example, if you select a
couple of lines (i.e. a list of statements) in a C program, you can then surround these statements with an if or with a while . Press
Ctrl + Alt + T to show the possible surround options at any time. To reemphasize: in contrast to regular intentions which are opened by
Alt + Enter , surround-with intentions can work on a selection that contains several nodes!
Refactorings For many language constructs, refactorings are provided. Refactorings are more important in MPS than in "normal" text
editors, because quite a few editing operations are hard to do manually. Please explore the refactorings context menu, and take note
when we explain refactorings in the user's guide. Unlike intentions, which cannot have a specific keyboard shortcut assigned, refactorings
can, and we make heavy use of this feature. For example, Ctrl + Alt + V extracts a variable from an expression.
Select in Project To select the currently edited root node in the project explorer (the big tree on the left), press Alt + F1 and then Enter.
Open Inspector Sometimes, additional properties of an element can be entered in the inspector ("properties view"). To open the
inspector window, press Ctrl + 2 .
Make and Rebuild By default, Ctrl + F9 ( Cmd + F9 on the Mac) makes the current module, i.e. it regenerates and recompiles if the
current module has changed. We recommend using the Keymap preferences to assign Ctrl + Alt + F9 (and Cmd + Alt + F9 on the Mac) to a
complete Rebuild of the current solution; sometimes changes aren't detected correctly and a full rebuild is necessary. Ctrl + Alt + F8 and
Ctrl + Alt + F10 should be mapped to a rebuild of the current model and the current project, respectively (see screenshot below).
Figure 3.4-A: The keymap preferences dialog in MPS where users can define their own keyboard shortcuts.
Also, mbeddr/MPS uses the version control support of the IDEA platform. So to get an overview, you can also check out the following
links:
https://www.jetbrains.com/idea/features/version_control.html
https://www.jetbrains.com/idea/webhelp/version-control-with-intellij-idea.html
However, this is not quite true, because during merging, some MPS-specific things have to be done. For this purpose, version control
systems support merge drivers. Currently, MPS merge drivers exist only for git, Subversion and CVS. Since nobody wants to use CVS
anymore, git and Subversion are the most reasonable alternatives to use with MPS. However, it is feasible to develop additional merge
drivers if this should become necessary. Please contact the MPS team in this case.
Our experience over the last years is with git, which is why we use git in this chapter. We've also used Subversion successfully in one
project. We clearly recommend using git with mbeddr.
Changes View The Changes view keeps track of which files (i.e., models) have changed on your system compared to the latest version in the
version control system. Fig. 3.5.4-C shows an example. The defaultExtensions model is the one that contains the root with the two changes
shown above. You can see a diff by selecting Show Diff from the context menu of the respective model in the changes view. The view
which then opens (Fig. 3.5.4-D) lets you choose from the changed roots (if several have changed) and then see a diff in the usual side-by-side
style. Note that this works for arbitrary notations! You can also undo the changes with the usual operations.
We recommend exploring the VCS menu in MPS, and in particular the git submenu; it has a lot of git operations such as branching or
stashing.
3.5.5.1 Update/Pull
Updating, or pulling in git-speak, refers to getting the newest contents from the version control system. From within MPS you can do this
via the menu entry VCS → Update Project. After the update, the Version Control view shows the set of changed files (Fig. 3.5.5.1-A). You can
use the context menu to show a diff of the changes that came in.
3.5.5.2 Commit/Push
You can use VCS → Commit Changes or Ctrl + K to bring up the commit dialog. This dialog, shown in Fig. 3.5.5.2-A, lets you select the
models you want to commit and specify a commit message. By using the context menu on models, you can also revert some of the
changes. By pressing the little down-arrow on the Commit button, you can also directly push from within MPS. You can also use the
context menu to show the diff, i.e. the changes that are about to be committed.
3.5.5.3 Conflicts/Merging
Conflicts happen if the same node(s) are changed independently by different parties. This can happen during an update/pull (if somebody
else has changed the same nodes independently) or when merging a branch (in the case where the two branches have changes to the
same model). In both cases, the Version Control view, when showing the incoming changes, will mark conflicting files as red, as shown in
Fig. 3.5.5.3-A.
Figure 3.5.5.3-A: The changes view after it ran into a conflict in a model.
You can then select Git → Merge Conflicts to open the merge overview (Fig. 3.5.5.3-B). It shows the list of conflicting files, and for each of
them you can select whether you want to accept your local changes (overwriting what comes in), accept the incoming changes
(discarding your own changes) or merge manually.
Figure 3.5.5.3-B: The merge overview dialog shows the files that changed as well as the merge options.
In case you select Merge, you will get a three-way merge dialog as shown in Fig. 3.5.5.3-C. There you pick and choose the changes you
want to use. You can also edit, as well as copy/paste changes into, the merge result.
3.6 Utilities
3.6.1 Interop with Text
There are various ways of interoperating with textual worlds. One of them is the import of textual, legacy C source. This is covered in
Section Accessing Libraries. In this section we discuss a few additional ways.
Node Info and Node URL It is often useful to be able to point somebody to a specific node. For example, one may want to
send an email to a coworker pointing out a specific node, or an issue in an issue tracker may want to refer to a program node in MPS. For
this purpose, it is possible to copy the Node Info and the Node URL to the clipboard (and then paste it as text somewhere else).
You can select any node in MPS and use Copy Node URL to Clipboard from the context menu. The following code snippet is a node URL. If you
open this URL in a browser, MPS will try to open the respective node. This requires MPS to be running and the currently opened project
must contain the node. MPS does not open the project if it is currently closed. [ToDo: Where is the menu entry in MPS to do the same
based on URL in the clipboard? ]
http://localhost:8080/select/tutorial/r:4ac377c2-0a54-4908-ae24-f86f1bad7e73/1195021413141453989/
The node URL is not readable by humans, only by the tool. You can use Edit → Copy Special → Copy Node Info to Clipboard to get more detailed
information about the node, as shown in the next snippet. This contains enough data to navigate to the node manually; of course the URL is also
included to automate the navigation.
By the way: this functionality is also very useful if you want to report problems about the documentation to us. Please include a Node URL
or even better, a Node Info of the document or paragraph where you spot a problem.
Copy to Text Every MPS node can be copied to a text editor. However, the result is only useful if the original node has essentially a
textual syntax (like regular program code or the paragraphs in this document). In the text editor, it will look essentially similar. However, it
is not possible to paste the text back into MPS, because, by default, an MPS language definition does not come with the parser necessary
to process the text and build the MPS tree (it is of course possible to build and integrate such parsers into MPS, but you don't get it for
free). So this approach is only useful for communication purposes, but not to actually exchange program code. The next paragraph
describes a more useful alternative.
Copy to XML It is possible to copy a node (and its complete subtree, of course) to an XML text. The menu item
Edit → Copy Special → Copy Node as XML creates the XML. This can of course be copied into any text context. The following listing shows an
example XML (of this paragraph):
<_root _nodeID="1195021413141455614" _conceptFQN="com.mbeddr.doc.structure.TextParagraph" _linktype="child">
  ...
  <words _nodeID="1195021413141473802" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue=" creates the XML. This can of cou
  ...
</_root>
This XML can then be pasted into another MPS instance using Edit → Paste XML Node. The MPS node structure is reconstructed and the (internal and
external) references are reestablished. While, in general, it is better to use the version control system (and branching, if necessary) to
exchange code, this XML-based facility is sometimes useful as well.
Figure 3.6-B: The menu options for opening the search dialog and the terminals
Deleting: Backspace, or Select-and-then-Backspace
3.8 Useful MPS Settings
In the Editor settings, you should select Show 1 Tab for Each Aspect in the Editor Tabs choice.
In the Editor settings, you should select Activate most recently opened Tab in the When closing active editor choice.
In the Editor settings, you may want to change the font to Consolas. It looks nicer.
In the Generator settings, you may want to turn off Check models before Generation in the General settings. Running the model
checker before each generation run is annoying; you can always run it manually.
4 mbeddr C vs. C99
4.1 mbeddr's Module System
Chunks and Modules MPS' main entity for structuring code is the model (see Section Project Structure). A model is essentially a file that
contains a set of nodes. The nodes directly beneath the model are called root nodes in MPS. All root nodes provided by mbeddr are
called chunks. A chunk has a name as well as a set of dependencies, i.e., references to chunks it depends on. Chunks come in many
different kinds: BuildConfigurations are chunks, UnitContainers are chunks, and the modules in which you write mbeddr C code are also
chunks. In general, any chunk can reference (or import) any other one, but there may be chunk-type-specific restrictions. If a chunk A
imports chunk B, then it is possible to reference B's contents from A (in some cases only exported content is referenceable; see below).
Implementation and External Modules Modules are the chunks that contain C code (as opposed to, for example, build configurations or
requirements). mbeddr has two different kinds of modules. The most common one is the ImplementationModule, in which you write your
normal mbeddr code, which will be generated to C99 code and then compiled. The second kind is the ExternalModule. External modules
won't be generated into C code. They are used to proxy external, textual C code such as header files on which your code may rely 1.
Since they just proxy existing C or header files, they don't contain any implementation; they just contain declarations. ExternalModules
always have to declare the header file that is included in the generated C code (the file for which they are a proxy).
Visibility and Exporting A chunk A that imports (depends on) chunk B only sees those contents of B that are exported. Some kinds of
chunks, for example the requirements module, implicitly export everything. Other chunks, for example implementation modules, default to
private contents, and the user has to specifically mark contents as exported. An importing chunk, A in the example, can also mark a chunk
import as reexport. This way, if a chunk C imports A, and A reexports its import of B, C can access the contents of B as well. A sketch
follows after the footnote below.
1: This is necessary because, in order to reference a node (e.g., call a function), that node has to be in MPS. If you want to call into textual C code that is not in MPS, you
have to create a proxy of that textual code in MPS. External modules are used for this purpose.
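To illustrate visibility and exporting, here is a minimal sketch. The module and function names are invented, and the linear rendering is an approximation; the keywords exported and imports follow the listings shown later in this guide, but the exact projection in MPS may differ:

module LibraryModule imports nothing {
  exported int32 add(int32 a, int32 b) {
    // exported: visible to modules that import LibraryModule
    return a + b;
  }
  int32 internalHelper(int32 x) {
    // not exported: private to this module
    return x * 2;
  }
}

A module that imports LibraryModule can call add but not internalHelper; if it additionally marks its import as reexport, its own importers can access add as well.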
4.2.1 Boolean
While in C99 the boolean is a typedef'd integer, in mbeddr booleans are a first-class type. C99 code like the following, where an integer
literal is used as a condition, is therefore not valid in mbeddr:
if (1) {
  //do stuff
}
Also, boolean and integer are not assignable to each other by default; you have to convert them explicitly. This is typically necessary when
you interface with legacy code that was not written in mbeddr. In addition to int2bool, there is also a bool2int conversion operator.
{
  int16 aInt16 = 0;
  boolean aBoolean = int2bool<aInt16>;
}
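The reverse direction is analogous; a minimal sketch, assuming bool2int mirrors the int2bool syntax shown above:

{
  boolean aBoolean = true;
  int16 backToInt = bool2int<aBoolean>;  // explicit conversion back to an integer
}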
4.2.2 User defined types
C99vsMbeddr2
model com.mbeddr.tutorial.documentation.code constraints
struct order {
  product product;
  int32 amount;
};
struct product {
  int32 product_number;
  string name;
};
imports nothing
4.2.3 Integers
Integers in mbeddr always come with a qualified size. There is no int, short, long long and so on in mbeddr, because the size of these
integers can differ depending on the compilation target. mbeddr has its own data types for signed and unsigned integers from 8 bit to 64 bit.
For example, uint8 represents an 8-bit unsigned integer, int8 represents an 8-bit signed integer, uint16 represents a 16-bit unsigned integer, and so on.
If you need to interact with existing C code that uses int, you can still write int inside ExternalModules, but you have to provide a mapping to the
mbeddr types. This is done inside the TypesizeConfiguration.
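For illustration, a minimal sketch with invented variable names:

int8 smallSigned = -100;     // 8-bit signed integer
uint8 smallUnsigned = 200;   // 8-bit unsigned integer
int32 total = 100000;        // 32-bit signed integer, independent of the target's int size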
4.3 Array Syntax
int16[3][2] integers;
int16[3][2] array = {
  {1, 2},
  {3, 4},
  {5, 6}
};
This might look irritating at first, but once we add pointers it becomes much easier to read. Declaring a pointer to the array we
declared above looks like this in C:
int16_t *(array[3][2])
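For comparison, here is a sketch of the plain C99 equivalent of the first declaration above. Note how C splits the type around the variable name, while mbeddr keeps the complete type, including the array dimensions, in front of the name:

int16_t integers[3][2];  /* mbeddr: int16[3][2] integers; */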
No Preprocessor mbeddr does not expose the C preprocessor to the end user. Because the preprocessor simply replaces text, it can
easily result in code that does not compile (and can create all kinds of other mayhem). In addition, the preprocessor does not take the
type system into account, so the code may not typecheck.
Finally, more sophisticated/awkward/brittle uses of the preprocessor should be replaced by native mbeddr language extensions. There's
no need for "preprocessor meta programming" if you have a full-blown language workbench at your disposal!
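A classic example of the kind of mayhem textual replacement can cause (plain C99, not mbeddr code):

#include <stdio.h>

#define SQR(x) x * x   /* textual replacement, no precedence or type awareness */

int main(void) {
  /* SQR(1 + 2) expands to 1 + 2 * 1 + 2, which evaluates to 5, not 9 */
  printf("%d\n", SQR(1 + 2));
  return 0;
}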
4.5 Unit Tests
Test Subjects The example module contains two functions which we will use as test subjects, add and divide. They look as follows:
SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints
Declaring Test Cases Test cases are a little bit like void functions: they have no parameters and they do not return anything. mbeddr
provides them as first-class abstractions because they track assertion failures. Here is the signature of a test case:
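Schematically, a test case looks like this (a sketch modeled on the testAdding example below):
testcase testSomething {
  // no parameters, no return value; assertion failures are tracked
}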
Asserting Inside test cases, a number of assert statements are available, as Fig. 4.5-A shows. It should be self-explanatory what they
mean. Using these assert statements, test cases can now be made meaningful.
Invoking test cases A collection of test cases can be invoked via the test expression. It invokes the test cases mentioned as arguments
and evaluates to the total number of assertion failures. Typically the test expression is used from a main function, and the result is returned
to the OS; this way, the whole process fails if there are assertion failures. The test output will also be printed to the Console in MPS with
links pointing to the individual test cases in the C code. You can read more about this in Section Running Applications from MPS/mbeddr.
The test executable can then be run like any other executable, as, for example, the HelloWorld⬈ example in Section Build and Run. Note
that you can set the isTest flag to true for the executable (inside the BuildConfiguration). If you do this, your Makefile has an additional target
test, so you can invoke the tests directly via make test.
Helper Functions Assert statements only work in test cases. If, for some reason, you want to access them from regular functions, then
these functions have to have the @test helper annotation. It can be attached with the corresponding intention.
SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints
void testHelperFunction() {
assert(0) add(1, 1) == 2;
} testHelperFunction (function)
exported testcase testAdding {
testHelperFunction();
assert-equals(0) 10 == add(5, 5);
} testAdding(test case)
imports nothing ⬈
Parameterized unit tests mbeddr has tight integration with formal verification tools, which can be used to specify a verification harness
for the function under test. This achieves functionality similar to ordinary parameterized tests. In fact, testing your
function (unit of the system) with a harness and formal verification is much more powerful, because unit tests usually cover only selected
point cases, while the verification can prove the property (in this case, the assertions) for a whole range of values. You can read more about
these features in Section Formal Verification with CBMC.
Domain-Specific Testing Support Some of mbeddr's extensions (such as components or state machines) come with their own
extensions of the testing language to make it more efficient to unit-test these abstractions. These extensions are discussed together with
the respective language extensions.
4.6 Function Pointers
Inside the module, we will add a struct called Trackpoint that contains a number of members. You create the struct by just typing struct inside the module.
You add the members by simply starting to type the int8 types. Ignore the physical units (the things behind the slashes after the types) for
now.
We then create two functions that are able to process the Trackpoints. Here is one function that (intentionally) does nothing. You enter this
function by starting out with the Trackpoint type, then typing the name and then using the ( to actually create the function (the thing has
been a global variable up to this point!):
4.6.1 Function Types
Other functions with the same signature may process the data in some specific way. We can generalize those into a function type using a
typedef. Note that entering the function type ()=>() is in fact a little bit cumbersome. The alias for entering it is funtype:
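A sketch of what the typedef might look like, assuming the ()=>() notation just mentioned and the Trackpoint-processing signature used in this tutorial:
typedef (Trackpoint*)=>(Trackpoint*) as DataProcessorType;
This type can then be used for variable declarations such as the following: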
DataProcessorType processor; ⬈
4.6.2 Function References
We also create a new, empty test case that we will use for making sure the program actually works. In the test we assign a reference to
process_doNothing⬈ to that processor variable. Note the : notation for function references -- we do not use the ugly C function pointer syntax.
With this in place, we can write the first test assertion:
{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
} ⬈
Let us complete this into a runnable system. In the Main module we change our main function to run our new test. Note how we import the
FunctionPointers module; we call the test case, which is visible because it is exported. In the code below we call all the test cases built in the
tutorial:
exported int32 main(int32 argc, string[] argv) {
  return test[
    testProcessing, testLambdaProcessing, testNullerOK, testInterpolator,
    testJudging, testInterpolatorWithMock, testPrintf, testFlightAnalyzer,
    testRuntimeVar, testPresenceConditions, testConditionalAlternative,
    testConditionalReplacement, testFlightRecorder
  ];
} main (function) ⬈
To make the program build correctly, we have to make sure all modules used in the program are included. In particular, the FunctionPointers
module must be included. Missing modules can be included with a quick fix. In the end, this results in the following binary:
We can run the tests from within MPS or from the console.
4.6.3 Lambdas
In contrast to regular C, mbeddr also provides lambdas, i.e. anonymous functions. They can be passed to functions that take function
types as an argument. However, they can also be assigned to variables that have a function type, such as the processor above. Here is an
example:
processor = [tp|
tp.alt = 100 m;
tp;];
A lambda is expressed as [arg1, arg2, ...|statements]. The types of the arguments are inferred from the context; they don't have to be specified. If
several statements are required (as in the example above), they are laid out vertically. If only an expression is required, it is shown
inline.
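For example, a lambda whose body is a single expression, sketched with the same notation (it simply returns the trackpoint unchanged):
processor = [tp| tp];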
4.7 Commenting
mbeddr supports several kinds of comments. We discuss all of them in this section. All of them have in common that they are so-called
richtext blocks. This means that you can edit the multi-line comments in a way that resembles a regular text editor: pressing Enter to get a
new line, using Alt + Backspace to delete words, Ctrl + Right to jump to the end of the line, etc. In addition, you can press Ctrl + Space inside
the text blocks to insert actual MPS nodes: this lets you mix unstructured text with "real program code". We use this, for example, to
support references to program elements inside the comments. Since these are real references, they are renamed automatically if the
referenced element is renamed. Press Ctrl + Space to see which references are available; here are some:
@mc references another top-level element in an implementation module (these are called module contents, hence the mc)
@child references any named child under the current node
@module references another module
@arg references an argument (only works in functions or similar contexts)
@local references a local variable (only works in functions or similar contexts)
There are several context-specific references available, for example, a comment in a state machine can use @state to reference states.
4.7.1 Statement Comments
Statement comments are entered just like you would expect, by typing //.
4.7.2 Element Documentations
To attach an element documentation, you can use the Add Documentation to ... intention on the respective element. The following is an
example:
4.7.3 Margin Comments
If margin comments are used in a root, the root itself shows a summary at the top: how many comments are in the root, plus who
commented last, and when. You can use the Delete All Comments intention on that root annotation to delete all comments.
4.7.4 Commenting Out Code
Essentially all program nodes can be commented out by using the Comment Out intention on the node. Commented code gets greyed out, no
type checks are performed inside it, and commented nodes cannot be referenced from the outside. Examples can be seen in the
FlightAnalyzer⬈ state machine (commented-out states and transitions), the commented struct in DataStructures⬈ and the first two
statements in calcVerticalSpeed⬈.
To comment something back in, you can either use the Comment In intention anywhere in the commented code, or just press Backspace on
the leading /* or trailing */.
5 Generated Code
5.1 Efficiency
If mbeddr is used as a plain C IDE, the efficiency of the generated code will not be any better or worse than if the code had been written in
any other IDE. mbeddr does not do any magic to code that is written at the C level.
Extensions to C come in two flavours: those that extend only the type system and those that affect the generated C code.
Type System Only An example for such a language is the units support in mbeddr. While it can help you to catch flaws in your code in
the IDE, it has no implications for the generated code. This means you will not pay any price at runtime for using this extension.
Extensions that affect the generated Code The far more common case is that mbeddr's extensions translate into C code. Here we try to
be as efficient as possible with respect to memory and runtime overhead. Again, some of the extensions will have no overhead because they are
very straightforward and just improve readability and reduce possible errors. However, more complex extensions such as
mbeddr's components or statemachines have to store state and potentially add additional indirections in certain cases. The price you pay
for that is typically not any higher than the price you would pay if you implemented the same functionality manually in a clean, well-structured
way.
In many cases mbeddr offers transformation options in the build configuration which make different trade-offs between flexibility and
efficiency. For example, the static wiring strategy in mbeddr's components does not support interface polymorphism (less flexibility) but
translates to C code that does not involve a function pointer indirection (more efficient).
5.2 Readability
Within the mbeddr IDE you have many options to write readable code, mostly due to the high-level extensions. Not all of them can be
reflected in the code we generate, due to limitations of the C language itself. When translating mbeddr code into C code we follow a few
idiomatic rules.
Naming First and foremost, mbeddr has a concept of namespaces which is not available in C. Every module in mbeddr forms its own
namespace. Additional structural elements such as components or statemachines add further namespaces. This means that mbeddr
allows two functions to have the same name as long as they live in different namespaces. In order to translate these namespaces into C
and retain globally unique names, we typically prefix, e.g., the function name with the module name. Similar things happen for all module
contents. This can make the generated function names quite long and thus hard to read. There are two ways around that: You can add the
preventNameMangling annotation to module contents. This will prevent the prefixes from being added to the element which has the
annotation. This should only be used in cases where you want to provide non-mbeddr users with an API to work with. The second option is to
select the prevent name mangling configuration item in the build configuration. This will try to get rid of the prefixes where possible, i.e., if and
only if a non-prefixed name is globally unique within an executable. This optimization is performed each time you rebuild your code. The
downside is that the generated code may change once you introduce, e.g., a new function with a name that already exists
elsewhere in your code.
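As an illustration, a function process in a module TrackpointProcessing might end up in the generated C roughly as follows (a schematic sketch; the exact mangling scheme is an assumption):
// generated C (schematic): the module name is used as a prefix
// to keep the symbol globally unique across the executable
int8 TrackpointProcessing_process(int8 x);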
Sections The second option to structure your code in mbeddr are sections. They do not affect the generated code (nor the scoping in
mbeddr) and are just there to visually structure your module contents.
Ordering It is also worth noting that in mbeddr you can order your module content according to aspects important for maintainability
rather than the declaration dependencies you know from C.
6 C Tooling
6.1 Building Binaries in the IDE
6.1.1 Build Configurations
mbeddr has its own abstraction for building applications. This is called the BuildConfiguration. The BuildConfiguration is responsible for the
generation of the Makefile and holds parameters for the various parts of mbeddr generators. The following is an example.
BuildConfiguration (c.m.t.d.code)
Platform
GNU paths are not checked
make: make
gdb: gdb
compiler
path to executable: gcc
compiler options: -std=c99
debug options: -g
Configuration Items
reporting printf
Binaries
Platform In this section a platform can be referenced. A platform specifies the compiler used and the options passed to it. By
default mbeddr ships with two predefined platforms, Desktop and Nothing. The Desktop platform uses Make for building, GCC for compiling and
GDB for debugging. By default, mbeddr expects these tools to be on your global PATH (see paths in the build config above). However,
you can specify their locations by providing an absolute path or using a path macro (see Section Paths and Libraries), e.g., specifying the
location of make via ${make.home}/make. The latter makes sense if you are working in a team where everybody has these tools installed
at different locations. The Nothing platform, as the name implies, does nothing. It does not compile anything and does not generate a
Makefile. It is possible to define additional platforms for other compiler toolchains in the settings.
Configuration Items The Configuration Items section configures the generators of mbeddr. For example, the reporting item configures what
the report statement should be generated to. If it is set to printf, the generator will create printf statements for reports. There are various
configuration items for generators in mbeddr. These configuration items are discussed later in the user guide in the context of the
languages that contribute them (and whose generation is affected by them).
reporting printf ⬈
Binaries The last section is Binaries. It configures which mbeddr modules are compiled into a single compilation unit. By default this can be
either an Executable or a Library. The first one tells the Makefile to create an ELF binary; the latter one just creates a linkable artifact that can be used in
further compilation. An Executable can also be marked as a test. This way it will be included in a target called test inside the resulting
Makefile. If the tests should be run on the command line, this flag has to be true. mbeddr will also check whether all the modules that
are used in the code that should be compiled into a single compilation unit are included in the Binary (based on the transitive import
dependencies between modules). If not, mbeddr will prevent you from generating a broken Makefile. There is also an intention on the
binary that adds missing modules.
Currently only one binary can be specified for each build configuration (this may change in the future).
Rebuilding in MPS reduces all extensions back to mbeddr C, generates textual C99 from mbeddr C and then also invokes the generated
Makefiles to build the final binary.
If you don't have any errors in your mbeddr code (see previous subsection), then code generation and compilation should go without
problems and you can run the binary (see Section Running Applications from MPS/mbeddr). If something goes wrong nonetheless (during
generation or compilation), the errors are shown in the messages view.
You can build the application from within MPS by marking the solution, opening the context menu and selecting Rebuild . Now, you can run
the test by putting your cursor on the name of the main function (located inside Main⬈), opening the context menu and selecting run 'Main' .
You will now see a console appearing at the bottom of MPS with some messages in it (Section Running Applications from MPS/mbeddr
describes their meaning). The next sections will describe how to write a build script for this application and how to invoke it from the
command-line.
The tutorial project contains a build script for our example application. You can find it here: build-example⬈. This script is organized into
different sections, which have different responsibilities. The following list describes the meaning of each:
build build-example generates build.xml: build-example is the name of the MPS build script, whereby build.xml is the name of the generated ant file.
base directory: the file system location to which the ant file is generated.
use plugins: a list of MPS plugins that are required while executing the script. When building mbeddr-based applications, you just need
those two.
macros: macros are used in the lower sections to avoid replication of file system paths and string values.
dependencies: build scripts, i.e., artifacts on which our build script depends at generation time. Since we use mbeddr (and depend on
other languages as well), its plugins should be available while generating code for our application.
project structure: contains plugin descriptors and solutions/languages you want to build. In our case, we have a dummy plugin
descriptor that causes classes from MPS' debugger api to be loaded during generation and the solution that contains our example
application.
default layout: in case your build has artifacts (e.g., plugins or zip files), you can declare their structure here. Since we just want to run
code generation and execute make, we do not need to list anything here.
The listing below shows the wrapper script with two targets: build for building the application and test for invoking the tests. In order to get it
running on your machine, please adjust the following paths:
source.location:
points to the folder where the generated files of your application solution are located (inside the folder source_gen).
mps.home: on Windows and Linux, this property points to the directory where your MPS is installed. On Mac, it points to the Contents
folder, located inside MPS' installation directory.
In order to execute the script, please install ant on your machine (see Section Required Tools and Versions). Next, store this script to your
local file system as wrapper.xml. For building our example application, on the console, navigate to the location where you stored the script
and enter ant -f wrapper.xml build. For executing the tests, enter ant -f wrapper.xml test at the same file system location.
<project name="build and test helloworld" default="build" >
<property name="base.path" location="/Users/domenik/repositories/mbeddr.core/code/applications/tutorial/solutions" />
<property name="source.location" location="${base.path}/com.mbeddr.tutorial.sample.CProject/source_gen/com/mbeddr/tutorial/sample/CProject/main" />
<property name="mps.home" location="/Users/domenik/MPS/MPS-135.1441.app/Contents" />
<target name="build">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate" />
<ant antfile="build.xml" target="build" />
</target>
<target name="test">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate"/>
<ant antfile="build.xml" target="build" />
<exec executable="make" dir="${source.location}">
<arg value="test"/>
</exec>
</target>
</project>
You can copy the same script to your CI environment and execute it there. Since the file system paths there may differ, you will
need to change the locations in the ant properties. You can either change the values in the script or override them when invoking the script
(replace the brackets):
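A sketch of such an invocation, using ant's standard -D property overrides and the property names from the wrapper script above:
ant -f wrapper.xml -Dbase.path=<path to your solutions folder> -Dmps.home=<path to your MPS installation> build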
6.3 Running Applications from MPS/mbeddr
There are two ways to run your application. First, you can open a terminal and run it from the console. We won't cover this any further.
Alternatively, you can run it from within mbeddr. To do this, you have to create a Launch Configuration and run or debug it.
The main goal of the Launch Configuration is to be able to specify certain properties of the mbeddr binary you'd like to run. The Launch
Configuration is related to the Build Configuration, because the latter specifies which modules constitute the executable, and it is mandatory
that exactly one module listed in the Build Configuration contains a function which can act as a main function (basically an entry point).
6.3.1 Creating a Launch Configuration
There are two ways to create a launch configuration. The first: go to the entry point (i.e., main⬈) of the executable and select Run PlainCDemo
(or similarly Debug PlainCDemo) from the context menu. The menu item shows the name of the executable that you have specified in the
Build Configuration. This approach can be seen in Fig. 6.3.1-A.
The second: go to the Build Configuration of your application and run/debug the executable from the context menu of the executable definition. This
approach can be seen in Fig. 6.3.1-B.
Figure 6.3.1-A: Running the executable from the context menu of the entry point
Figure 6.3.1-B: Running the executable from the context menu of the executable definition
After you have run the executable a new launch configuration will be created with the name of the executable that you have specified. If a
configuration already exists with the same name then it will be overwritten. From this point, it is also possible to run/debug the executable
from the menu bar or from the Run menu of MPS. The launch configuration can also be created manually, or from the context menus of
the entry point or executable definition; take a look at the previous two figures (Fig. 6.3.1-B or Fig. 6.3.1-A) for the corresponding Create
PlainCDemo element.
An existing launch configuration can also be used to launch your application. Select the PlainCDemo element from the drop-down list which
lists the available Launch Configurations. Simply click on the green run button to launch the executable, as shown in Fig. 6.3.1-B. In order to
debug the application, click on the green bug icon next to the selected Launch Configuration. You can read more about how to
debug your application in Section Debugging.
6.3.2 Contents of the Launch Configuration
Figure 6.3.2-A
Name: the name of the Launch Configuration . If you specify a custom configuration, it may be important to rename it and save it like
that, so that during the next run it will not be overwritten.
Build project: points to the executable within your Build Configuration.
Launch timeout: the value here is in milliseconds and it specifies for how long the debugger should try to connect to the low-level C
debugger (don't change the default unless you have a reason).
Command timeout: the value here is in milliseconds and it specifies how long the debugger waits with the execution of individual
debug commands (don't change the default unless you have a reason).
Before launch: this specifies the commands that should be executed before launching. Usually this includes generate and make (to
build the executable).
6.3.3 Console output
Executable tests will also show the test results in the Console. Fig. 6.3.3-A shows the output of the test run for the
mbeddr.tutorial.main.defaultExtensions.SimpleTestCase executable. The console output contains links for executed tests and failed assertions;
you can click on them to select the corresponding program element (test case, assertion) in the editor.
Figure 6.3.3-A: Running a test from MPS lets you directly click on tests and failed assertions.
6.4 Debugging
mbeddr comes with a debugger for core C. This chapter describes how to use this debugger to debug C programs written with mbeddr.
6.4.1 Introduction
The mbeddr debugger behaves like a regular C debugger, but on the abstraction level of mbeddr and its extensions: it shows the call
stack and watchables and allows stepping and setting of breakpoints. This way, you don't need to know any details about the generated C
code. In addition, the debugger can be extended to enable debugging of code written with user-defined language extensions. Currently,
the debugger uses gdb as debug backend and only supports local debugging. However, in future releases of mbeddr, we plan to support
debugging on the target device. In the tutorial, you find in the solution mbeddr.tutorial.main a model named plainC. We use the contained
mbeddr code to illustrate debugging mbeddr programs in MPS.
6.4.2 Starting a Debug Session
Next, create a run configuration as explained in Section Running Applications from MPS/mbeddr and run it in debug mode by pressing
Shift + F9 or by clicking on the debug button in MPS' toolbar (see Fig. 6.4.2-B).
The debugger UI should now appear at the bottom of the MPS window. In the lower left corner of this UI, you can see two activated
buttons: a green (arrow) and a red (square) one. This indicates that the debugger is now connected to the underlying C debugger. You
can now start program execution by clicking on the green (arrow) button. This will suspend the debugger on the first line of the main
function, on your previously created breakpoint:
Next, press F7 to step into the current line; this will suspend the debugger in the called function add. For returning to the calling function
main, press Shift + F8. Once the debugger is suspended there, you can see the stack frame main in the call stack and the local variables p,
argc and argv in the watchables view (see Fig. 6.4.2-D).
6.5 Hello, World
6.5.1 A new Project
If you have a project already, you can simply create a new solution inside the existing project. Select the project node itself and select
New → Solution from the context menu.
1: If you work with the mbeddr IDE, then the language project shown in Fig. 6.5.1-A is not available.
6.5.2 New Model
After pressing OK, the new model is created. Also, the model's property dialog pops up automatically (it can be brought up manually by
selecting Model Properties from its context menu). In any case, select the Used Languages tab and use the + button to add the com.mbeddr.core
devkit (note that you can select devkits and languages by just typing the first letters (c.m.c)). Fig. 6.5.2-B shows the result.
Figure 6.5.2-B: The MPS dialog for setting the languages used in a model.
6.5.3 Writing the Code
Note: the steps explained in this subsection can be performed automatically using the Code → Wizards → Create Minimal Test wizard. We do it
manually to explain things in detail.
Inside the model we create a new Implementation Module. To do so, select New → c.m.c.modules → module from the model's context menu.
Specify the name HelloWorld for the module. The result looks as shown in Fig. 6.5.3-A.
Figure 6.5.3-A ⬈
Inside the module we now write a test case that (wrongfully) asserts that 1 + 1 == 3. Here is the code:
Next is a main function. You can create one by simply typing main: this expands to a fully parameterized main function. It also already
contains a return 0; statement. Next, we want to replace the 0 with a call to our test case. Delete the 0 and instead enter a test expression.
Inside its brackets, add a reference to the testAdding⬈ test case. The resulting main function looks like this:
To be able to run the code, we also have to create a build configuration. Use the menu New → c.m.core.buildconfig → BuildConfiguration on the main
model to create one. Initially it looks as shown in Fig. 6.5.3-B.
Figure 6.5.3-B ⬈
First, you have to select a platform. The platform determines how the generated C code is compiled. Select the desktop platform. No
need to change any of its parameters.
Then you have to specify configuration items. For our case we need the reporting item that determines how console output is
handled. The default printf is ok.
Finally, you have to specify an executable. Add one called HelloWorld, and add the HelloWorld⬈ implementation module to its modules
slot.
6.5.4 Build and Run
Rebuilding the solution now generates the C code and invokes make; the output looks like this:
rm -rf ./bin
mkdir -p ./bin
gcc -std=c99 -c -o bin/HelloWorld.o HelloWorld.c
gcc -std=c99 -o HelloWorld bin/HelloWorld.o
make finished successfully for com.mbeddr.tutorial.sample.HelloWorld/com.mbeddr.tutorial.sample.HelloWorld.main
No input. Skipping optional target.
"textGen" target execution time: 546 ms
"callMake" target execution time: 446 ms
"copyTraceInfo" target execution time: 138 ms
"generate" target execution time: 131 ms
"reconcile" target execution time: 37 ms
Other targets execution time: 35 ms; compile: 24 ms, configure: 6 ms, runProcessors: 5 ms, preloadModels: 0 ms, make: 0 ms, addLanguages: 0 ms, resetCache: 0 ms, c
To run the code, you can select Run HelloWorld from the context menu of either the main function or the HelloWorld executable. The result
should be shown in the Run tool, as shown in Fig. 6.5.4-A. Note how it contains clickable lines for the test(s) that have run as well as for
the failed assertions.
Figure 6.5.4-A: Running a test lets you directly click (and then open in the editor) on tests and failed assertions.
To make this feasible, we have to integrate C's standard printf function. We could import all of stdio automatically (we'll do that below).
Alternatively, if you only need a few API functions from some library, it is simpler to just write the necessary proxies manually. Let's use
the second approach first.
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
resources 6b853dfb-19be-33ee-b67c-33b658dd0753 ⬈
An external module is always associated with one or more "real" header files. The trick is that when an implementation module imports an
external module in mbeddr, the referenced real header is included into the C file upon generation of the textual C code. This also means
that, even if your stdio.h requires all kinds of additional definitions to work, these do not have to be imported into mbeddr. Only the
function prototype must be imported so it can be called from mbeddr C.
So the first thing we need to do is to express that this stdlib_stub external module represents the stdlib.h file:
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
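The module body is shown here only schematically; the notation for associating the header is an assumption:
external module stdlib_stub (header: "stdlib.h") {
  // declarations proxying the header's contents go here
}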
To be able to write the test case, we have to import the stdio_stub into our LibraryAccess implementation module. In the build
configuration we have to add the LibraryAccess and the stdio_stub to the binary. We should also call the testPrintf test case from Main.
7 C Extensions
7.1 Static Evaluation
mbeddr can calculate the static value of an expression if all children of that expression are statically evaluable. This is the case if the leaf
nodes of the expression tree consist of literals, references to #constant or references to #macro (function macro calls). Whether an
expression is statically evaluable, as well as its calculated static value, is shown in the inspector.
The interpreter which calculates the static value works with a precision of 100 decimal places. The 101st place is rounded half up.
By default, the interpreter has no effect on the code generated by mbeddr. However, by using the staticValueOf expression you can ask
mbeddr's code generator to first use the interpreter to calculate the static value and then use this value in the generated code. This
functionality was introduced so that code readability does not need to be sacrificed for performance reasons (in traditional C code
you would typically see "magic numbers" where staticValueOf is used in mbeddr). The expression can be inserted into your code using the
staticValueOf alias or by using a surround-with intention on a statically evaluable expression (Ctrl + Alt + T).
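A sketch of how this might look, assuming a call-like notation and made-up constants:
#constant SAMPLES_PER_SECOND = 50;
#constant WINDOW_SECONDS = 60;
// the generated C code contains the computed literal 3000
// instead of the (more readable) product
int32 bufferSize = staticValueOf(SAMPLES_PER_SECOND * WINDOW_SECONDS);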
Figure 7.1-A ⬈
In addition to the pure static value calculation, mbeddr can perform various rounding operations before the static value is inserted into your
code (an example is shown in Fig. 7.1-A). These rounding operations can be added via an intention on the staticValueOf expression
(Add rounding operation) or selected in the inspector of the staticValueOf expression. The following rounding operations are available:
none: (default) no rounding operation will be performed. The resulting type might be a floating point type.
ceil: rounds towards positive infinity. The result is an integer.
floor: rounds towards negative infinity. The result is an integer.
round: rounds towards the "nearest neighbor" unless both neighbors are equidistant, in which case it rounds up. The result is an
integer.
trunc: rounds towards zero. The result is an integer.
7.2 Units
The purpose of physical units is to annotate types and literals with additional information - units - used to improve type checking. Many
embedded systems work with real-world quantities, and many of those have a physical unit associated with them.
Let us now explore how we can work with physical units, adding more semantics to this data structure. In order to use the physical units
language, we need to import the com.mbeddr.physicalunits devkit. After the import we can add units to the types by simply pressing
/ at the right side of the type.
The 7 standard SI units are available from an accessory module called SIUnits⬈. It can be imported into any implementation module.
There is an extra nounit unit defined in this module. The usage of this unit is limited to test case definitions and conversion rules, which we
will talk about in later sections. When you import this module, the simple units like s and m will be immediately available.
For the speed member of the Trackpoint struct we need to add m/s. Since this is not an SI base unit, we first have to define it, which can be
done either in the ImplementationModule or in a UnitContainer (such as UnitDeclarations⬈). To create a new unit just use its alias unit. The
specification of a derived unit can be typed in after the name of the unit. A simple unit specification consists of one unit with possibly an
exponent. If you would like to type in a composite unit specification, press ENTER to expand the components in the specification.
Exponents can be added to the units by pressing the caret (^) symbol and the value of the exponent. This works for all units in the
specification except for the last component; there the caret is disabled because it conflicts with the binary XOR operator. At the last position,
use the appropriate intention (which, by the way, is available for all components regardless of their position in the specification) to add the
exponent to the unit. An exponent can either be a simple integer number or it can be transformed into a fraction by typing / in the exponent
expression.
The following code example shows how we can use units for the members of the Trackpoint struct.
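A sketch of the annotated struct; the member types are assumptions, while the /unit/ notation is taken from the examples below:
struct Trackpoint {
  int8 id;
  int32/s/ time;
  int32/m/ x;
  int32/m/ y;
  int16/m/ alt;
  int16/mps/ speed;
};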
Units on Literals Adding these units may result in errors in the existing code (depending on whether you added them in previous
tutorial steps), because you cannot simply assign a plain number to a variable or member whose type includes a physical unit (int8/m/ length
= 3; is illegal). Instead you have to add units to the literals as well. You can simply type the unit name after the literal to get to the following:
Trackpoint i1 = {
id = 1,
time = 0 s,
x = 0 m,
y = 0 m,
alt = 100 m
}; ⬈
You also have to add units to the comparisons in the assertions, for example in this one:
{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
} ⬈
This is because all the mathematical operators are overloaded for physical units, and these operations are type-checked accordingly.
Clearly, the problem with this code is that you cannot add a length (i1.x) and a speed (i1.speed). The result is certainly not a plain int8, so you
cannot assign the result to someInt. Adding i1.x and i1.y will work, though. Another example where the units are matched properly:
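Both cases, sketched (the member types follow the struct sketch above):
// ok: both operands have unit m, so the sum has unit m as well
int32/m/ dist = i1.x + i1.y;
// error: m + m/s is not a valid unit combination
// int8 someInt = i1.x + i1.speed;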
The calcVerticalSpeed⬈ function provides a few more examples of working with units in expressions:
Editing Composite Units When you edit an expression with composite units you may encounter the problem that multiple side
transformations are available at a given place and so you need to make the choice by hand. Consider the following code example:
Build Configuration If you try to rebuild the model, you will get an error message saying that you need to add the units configuration
item to the build configuration. Once you have added this item, the build should succeed and we should be able to run the test again.
Defining Conversions A conversion rule must define a source and a target unit (the conversion will happen between these units) and it
must contain one or more conversion specifiers. A conversion specifier defines the conversion expression for a given type. Inside the
conversion expression one can use the val expression as a placeholder for the to-be-converted value. It will be substituted with the value
that is passed to the conversion rule. The type that you define for the specifier is the type that the val expression will have. Additionally, it
is also possible to omit the type: in this case the specifier works as a generic one, where the expression may be applied to any type (the
type of the val expression will be double in this case, but this is just a trick that is needed for the type system to work properly). The
conversion specifiers are checked, and redundant specifiers will be marked as erroneous (i.e., specifiers that are covered by some other
specifier whose type is a supertype of their type).
section conversions {
Conversion rules can be either lazy or eager. The default behavior is lazy evaluation; you can switch to eager via the corresponding
intention on the conversion rule itself.
Lazy conversion rule: the val expression inside the conversion specifier has no unit, and the expression must evaluate to a type
without units. During the usage of the conversion rule, the type system will just simply append the rule's target unit to the evaluated
expression.
Eager conversion rule: the val expression has the same unit as the rule's source unit. The expression in the conversion specifier
must evaluate to a type with the rule's target unit. During the usage of the conversion rule, the expression will be simply evaluated
and the resulting type will be used (which must match the target unit).
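Schematically, a kmh-to-mps rule with a single generic specifier might look as follows; the concrete notation is an assumption, only the source/target units and the val placeholder come from the description above:
conversion kmh -> mps {
  // generic specifier (no type given): applies wherever the units match
  val / 3.6
}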
Using Conversions You can now invoke this conversion within a convert expression:
void somethingWithConversion() {
  Trackpoint highSpeed;
  highSpeed.speed = ((int16/mps/) convert[300 kmh -> mps]);
} somethingWithConversion (function) ⬈
The convert expression does not explicitly refer to any specific conversion rule; you only need to define the target unit of the conversion,
while the source unit is known from the type of the original expression. The system will try to find a matching conversion specifier (where
both the units and the types match). This is where a conversion specifier without a specific type comes in handy, because it can be applied
to any expression as long as the units match.
The conversion specifier can be set manually too in the Inspector of the convert expression.
Generic Units First you need to create generic units by invoking the Add Generic Unit Declaration intention on the function. You can specify multiple generic
units once the first one has been created by simply pressing Enter in the unit list. These newly created generic units can then be used
for the type annotations just like any other unit. The substitutions will be computed based on the input parameters of the function call. One
can also combine a generic unit with additional arbitrary non-generic units for the type annotations of the parameters and the return type.
In addition, it is also possible to invoke the function with bare types, but be aware that once at least one substitution is present, the
function call will also be type-checked for matching units. The generic units can also have exponents (even fractions), and the same
type checks apply to them as described for the non-generic units. The following example shows how this could be done for a square root
function:
double/U1^1/2/ sqrt(double/U1/ a) {
  double/U1^1/2/ res = 0 U1^1/2;
  //here goes your sophisticated square root approximation
  return res;
} sqrt (function) ⬈
You should not use multiple generic units in the annotation of one given function parameter, because the non-generic units will all be
bound to the first generic unit, so the substitutions will probably not be the ones that you would expect. This is a
constraint introduced to manage the complexity of the computation of the bindings; allowing multiple generic units for parameter
types could result in a constraint solving problem which we do not support right now.
The generic units can only be used inside the function definition; this means that they are not visible outside of the function
definition.
Consider an existing function that knows nothing about units:
int16 anExistingFunction(int16 x) {
return x + 10;
} anExistingFunction (function) ⬈
To be able to call this function with arguments that have units, we have to strip away the units before we call the function. This can be
achieved by selecting the corresponding expression and invoking the Strip Unit intention. The type of the stripped expression is simply
the type of the original expression, but without units.
The opposite direction (i.e., adding a unit to a value that has no unit) is also supported. The introduceunit operator is available for this. It
takes an expression plus the to-be-introduced unit as arguments.
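A sketch of the introduceunit direction; the bracket-and-arrow notation is an assumption modeled on the convert expression, only the operator name and its arguments are given in the text above:
// re-attach the unit m to the unit-less result of the legacy function
int16/m/ adjusted = introduceunit[anExistingFunction(10) -> m];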
7.3 State Machines
State machines can be viewed and edited either with the textual notation or with the table-based projection. You can switch between the
two modes with the Code → Mode → Statemachines as Tables option.
This section gives a brief overview of state machines: how they can be defined, how to interact with C code when using state machines,
and details about hierarchical state machines and their visualization with the help of the PlantUML tool.
In order to create a state machine, simply type statemachine at the top level of an implementation module such as ImplementationModule⬈.
This creates a new state machine with an initial state and one event already defined. You can leave the event there; we will come back
to it later. We know that the airplane will be in various states: beforeFlight⬈, airborne⬈, landing⬈ (and still rolling), landed⬈ and
crashed⬈. You can just rename the already existing initial state to beforeFlight⬈ and add the other states to the state machine. In the end
you should have the following states:
The state machine will accept two kinds of events. The first one is the next⬈ event, which takes the next trackpoint submitted for
evaluation. Note how an event can have arguments of arbitrary C types, a pointer to a Trackpoint⬈ in this example. The Trackpoint struct is
already defined in the DataStructures⬈ module. The other event, reset⬈, resets the state machine back to its initial state.
We also need a variable in the state machine to keep track of the points we have accumulated during the flight. In order to create a new
variable, simply type var in the state machine. This creates a new variable; you need to specify its name and type. The newly created
variable is invisible from outside by default; you can change this to readable or writable with the corresponding intentions. Readable variables
may be read from outside of the state machine, while writable variables can also be modified from the interacting C code (you will have to
create a state machine instance first; we explain this below). In the end you should have a points variable in the state machine which is
readable only:
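A sketch of the state machine skeleton at this point; the concrete keywords are assumptions, while the states, events and the points variable come from the steps above:
statemachine FlightAnalyzer initial = beforeFlight {
  in next(Trackpoint* tp)      // submits the next trackpoint for evaluation
  in reset()                   // back to the initial state
  readable var int32 points = 0
  state beforeFlight { }
  state airborne { }
  state landing { }
  state landed { }
  state crashed { }
}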
We can now implement the rules outlined above using transitions and actions. Let us start with some simple ones. Whenever we enter
beforeFlight⬈ we reset the points to 0. We can achieve this with an entry action in beforeFlight⬈:
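Sketched with an assumed entry-action syntax:
state beforeFlight {
  entry { points = 0; }
}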
There are some additional rules for taking off, landing and conditions for crashing.
All states other than beforeFlight⬈ must have a transition triggered by the reset⬈ event to go back to the beforeFlight⬈ state. Note,
that as a consequence of the entry action in the beforeFlight⬈ state, the points get reset in all three cases.
As soon as we submit a trackpoint whose altitude is greater than zero, we can transition to the airborne state. This means we
have successfully taken off, and we should get 100 points as a bonus. TAKEOFF⬈ is a global constant representing 100 (#constant
TAKEOFF = 100;). We also make use of the physical units extension (see Section Units) and annotate the speed and altitude with the
appropriate unit.
Events while we are in the air: when we are airborne and we receive a trackpoint with zero altitude and zero speed (without going
through an orderly landing process), we have crashed. If we are at altitude zero with a speed greater than zero, we are in the
process of landing. The other two cases deal with flying at over 200 and over 100 mps. In this case we stay in the airborne⬈ state
(by transitioning to itself) but we increase the points.
The complete set of transitions is as follows:
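As a schematic sketch of two of these transitions, assuming an on-event [guard] -> target notation (the guards paraphrase the rules above):
state airborne {
  on next [tp->alt == 0 m && tp->speed == 0 mps] -> crashed
  on next [tp->alt == 0 m && tp->speed > 0 mps] -> landing
}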
Note that the transitions are checked in the order of their appearance in the state machine; if several of them are ready to fire (based on
the received event and the evaluation of the guard conditions), the first one is picked. Such nondeterminism is usually not
wanted, and mbeddr provides support for the verification of state machines, which you can read more about in Section Checking State
Machines.
Junction states (influenced by Simulink) can also be created in mbeddr state machines. A junction state can only contain epsilon
transitions, meaning transitions that are immediately ready to fire when the state is entered; they don't need a triggering event. Having
multiple epsilon transitions clearly introduces nondeterminism, so one typically specifies guards for these transitions. The following
example uses the points variable to select the state transition that should be applied. The example junction state makes a decision based
on the points and immediately fires.
Junctions are essentially branching points in a statemachine and help modularize complex guards (especially if several guards in one
state have common subexpressions):
The first option is to simply invoke a C function from an entry or exit action. Another alternative, which is more suitable for formal analysis
(as we will see below and in Section Checking State Machines), involves out events. From the entry action we send an out event, which we
have defined in the state machine. The following code example shows what the latter looks like.
StateMachines
model mbeddr.tutorial.main.defaultExtensions
package examples constraints
We create an out event called crashNotification⬈ (which will be sent when we enter thecrashed⬈ state). We then specify a binding to the
out event; the binding is part of the out event definition: simply add the name of the function as the target of the arrow (in the example this
is the raiseAlarm⬈ function).
The benefit of this approach compared to the previous one is that formal verification can check whether the notification was sent at all
during the execution of the state machine. The effect is the best of both worlds: in the generated code we do call the raiseAlarm⬈
function, but on the state machine level we have abstracted the implementation from the intent. See Section Checking State Machines for
a discussion of state machine verification.
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
We can now create a helper function that creates a new Trackpoint⬈ based on an altitude and speed passed in as arguments:
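A sketch of what such a helper might look like; unit handling is omitted for brevity, and malloc is assumed to be available via the stdlib_stub proxy above:
Trackpoint* makeTP(int16 alt, int16 speed) {
  Trackpoint* tp = ((Trackpoint*) malloc(sizeof(Trackpoint)));
  tp->alt = alt;
  tp->speed = speed;
  return tp;
}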
We can now start writing (and running!) the test. We first create an instance of the state machine (state machines act as types and must
be instantiated). We then initialize the state machine by using the init operation:
exported testcase testFlightAnalyzer {
FlightAnalyzer f;
f.init;
assert(0) f.isInState(beforeFlight);
assert(1) f.points == 0;
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;
f.trigger(next|makeTP(100, 100));
assert(3) f.isInState(airborne) && f.points == 100;
test statemachine f {
next(makeTP(200, 100)) ➔ airborne
next(makeTP(300, 150)) ➔ airborne
next(makeTP(0, 90)) ➔ landing
next(makeTP(0, 0)) ➔ landed
}
assert-equals(4) f.points == 200;
} testFlightAnalyzer(test case) ⬈
Initially we should be in the beforeFlight⬈ state. We can check this with an assertion:
We also want to make sure that the value of points is zero initially. Since we have declared the points variable to be readable above, we can
write:
Let us now create the first trackpoint and pass it in. This one has speed, but no altitude, so we are in the take-off run. We assume that we
remain in the beforeFlight state and that we still have 0 points. Notice how we use the trigger operation on the state machine instance. It
takes the event as well as its arguments (if any):
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;
So as you can see, it is easy to interact with a state machine from regular C code. For testing, there is special support that checks whether the
state machine transitions to the desired state when a specific event is triggered. The test statemachine block in the listing above shows this
construct (note that you can use it only within test cases).
You may have noticed that the helper function allocates the new Trackpoint⬈s on the heap, without releasing the memory. You could
simply call free on these newly created structures after the test statement has been executed or allocate the trackpoints on the stack to
solve this problem.
When a transition from outside a composite state targets a composite state, the initial state in that composite state is activated.
A composite state can have its own transitions. These act as if they were defined for each of the states of the composite state.
If a transition from an inner state A crosses a composite state-boundary B, then the actions happen in the following order: exit
actions of A, exit actions of B, transition action, and entry action of the transition's target (which is outside of the composite state).
Fig. 7.3.6-A shows an example visualization for the hierarchical state machine that was described in the previous section.
Figure 7.3.6-A: A visualization of a state machine in mbeddr. You can click on the states and transitions to select the respective element in the editor.
7.4 Components
Let us now introduce components to further structure the system. We start by factoring the Trackpoint data structure into a separate module
and export it to make it accessible from importing modules.
To enhance the semantic "richness" of the interface we add preconditions and a postcondition. To do so, use the Add Precondition intention
on the operation itself. Please add the following pre- and postconditions (note how you can of course use units in the precondition). The
result expression is only available in postconditions and represents the result of the executed operation.
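A sketch of the annotated operation; pre(1) reappears verbatim later in this section, the other conditions are illustrative assumptions:
Trackpoint* process(Trackpoint* p)
  pre(0) p != null
  pre(1) p->id != 0
  post(2) result != null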
After you have added these contracts, you will get an error message on the interface. The problem is this: if a contract (pre- or
postcondition) fails, the system will report a message (this message can be deactivated in case you don't want any reporting). However,
for the program to work you have to specify a message on the interface. We create a new message list and a message.
You can now open the inspector for the interface and reference this message from there:
Figure 7.4.1-A: A message definition used in the interface definition to report contract failures.
There are still errors. The first one complains that the message list must be exported if the interface is exported. We fix this by exporting the
message list (via an intention). The next error complains that the message needs to have two integer arguments to represent the operation
and the pre- or postcondition. We change it accordingly (note that there are quick fixes available to adapt the signatures in the required way).
After you add only the provided port, you get an error that complains that the component needs to implement the operations defined by
the port's interface; we can get those automatically generated by using a quick fix from the intentions menu on the provided port. This gets
us the following:
The processor_process⬈ runnable is triggered by an incoming invocation of the process⬈ operation defined in the TrackpointProcessor⬈
interface. The Nuller⬈ simply sets the altitude to zero.
Let us now write a simple test case to check this component. To do that, we first have to create an instance of Nuller. We create an
instance configuration that defines exactly one instance of this component. Also, we add an adapter. An adapter makes a provided port of
a component instance (Nuller.processor) available to a regular C program under the specified name n:
instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
} ⬈
Now we can write a test case that accesses the n adapter -- and through it, the processor port of the Nuller component instance nuller. We
create a new Trackpoint, using 0 as the id -- intended to trigger a contract violation (remember pre(1) p->id != 0). To enter the &tp just enter a &,
followed by tp.
section testNullerFailing {
instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
}
Before we can run this, we have to make sure that the instances are initialized (cf. the warning you get on them). We do this right in the
beginning of the test case. We then create a trackpoint and assert that it is correctly nulled by the Nuller.
To make the system work, you have to import the Components module into the Main⬈ module so you can call the testNullerFailing⬈ test
case from the test expression in Main. In the build configuration, you have to add the missing modules to the executable (using the quick
fix). Finally, also in the build configuration, you have to add the components configuration item:
Configuration Items:
reporting: printf (add labels false)
physical units (config = Units Declarations (mbeddr.tutorial.main.m1))
components: no middleware
wire statically: false
You can now rebuild and run. As a result, you'll get contract failures:
./MbeddrTutorial
$$runningTest: running test () @FunctionPointers:test_testProcessing:0#767515563077315487
$$runningTest: running test () @Components:test_testNuller:0#767515563077315487
$$contractFailed: contract failed (op=0, pc=1) @Components:null:-1#1731059994647588232
$$contractFailed: contract failed (op=0, pc=2) @Components:null:-1#1731059994647588253
We can fix these problems by changing the test data to conform to the contract, i.e.
section testNullerOK {
instances nullerInstancesOK {
instance Nuller nuller
adapt n -> nuller.processor
}
Let us provoke another contract violation by returning, from the implementation in the Nuller component, a Trackpoint whose id is 0.
Running it again provokes another contract failure. Notice how the contract is specified on the interface, but it is checked for each
component implementing the interface. There is no way an implementation can violate the interface contract without the respective
error being reported!
Let us again think about the semantics: you shouldn't be able to get or take stuff from the store if it is empty, and you should not put stuff into it
when it is full, etc. These things can be expressed as pre- and postconditions. The following should be pretty self-explanatory. The only new
thing is the query operation. Queries can be used from inside pre- and postconditions, but cannot modify state 2
These pre- and postconditions mostly express a valid sequence of the operation calls: you have to call store before you can call get, etc.
This can be expressed directly with protocols, as implemented in TrackpointStore2⬈:
exported cs interface TrackpointStore2 {
Trackpoint* get()
protocol full -> full
Trackpoint* take()
post(0) result != null
protocol full -> init(0)
You can add a new protocol using the respective intention. The protocol is essentially a state machine. On each operation you can specify
the transition from the old to the new state. There is one special state, called initial. On a transition you can either jump into an
already existing state or create a new state and directly move into it. For example, you can see on the store⬈ operation that we transition from
the initial state into the newly created full⬈ state. The operation get⬈ can then make use of the previously created full⬈ state and does not
create a new state. It is also worth mentioning that you can reset the protocol by transitioning into the initial state again (as done in take⬈).
The two interfaces are essentially equivalent; both are checked at runtime and lead to errors if the contract is violated.
We can now implement a component that provides this interface: InMemoryStorage⬈. Most of the following code should be easy to
understand based on what we have discussed so far. There are two new things. First, there is a field Trackpoint* storedTP; that represents
component state.
Trackpoint* storedTP; ⬈
Second, there is an on-init runnable: this is essentially a constructor that is executed when an instance is created.
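A minimal sketch of the component (the exact spelling of the on-init construct is an assumption):
exported component InMemoryStorage extends nothing {
  provides TrackpointStore1 store
  Trackpoint* storedTP; // the component state mentioned above
  on init {
    // essentially a constructor: runs when an instance is created
    storedTP = null;
  }
  void store_store(Trackpoint* tp) <= op store.store {
    storedTP = tp;
  } runnable store_store
  boolean store_isEmpty() <= op store.isEmpty {
    return storedTP == null;
  } runnable store_isEmpty
} component InMemoryStorage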
To keep our implementation module Components⬈ well structured we can use sections. A section is a named part of the implementation
module that has no semantic effect beyond that. Sections can be collapsed.
section store {
exported cs interface TrackpointStore1 {
...
}
exported cs interface TrackpointStore2 {
...
}
exported component InMemoryStorage extends nothing {
...
}
}
We can now implement a second processor, the Interpolator⬈. For subsequent calls of process⬈ it computes the average of the last two
speeds of the passed trackpoints. Let us start with the test case. Note how p2 has its speed changed to the average of the original speeds
of p1 and p2.
section testInterpolator {
instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
}
} testInterpolator(test case)
instances interpolatorInstancesWithMock {
instance StorageMock storeMock
instance Interpolator ip(divident = 2)
connect ip.store to storeMock.store
adapt ipMock -> ip.processor
}
A few things are worth mentioning. First, the component requires another interface, TrackpointStore1. Any component that implements this
interface can be used to fulfil this requirement (we'll discuss how, below). Second, we use an init field. This is a regular field from the
perspective of the component (i.e. it can be accessed from within the implementation), but it is special in that a value for it has to be
supplied when the component is instantiated. Third, this example shows how to call operations on required ports (store.store(p);). The only
remaining step before running the test is to define the instances:
instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
} ⬈
A few things are worth noting. First, notice how we pass in a value for the init field divident as we define an instance of Interpolator. Second, we use
connect to connect the required port store of the interpolator instance to the store provided port of the store instance. If you don't do this, you will get an
error on the interpolator instance since it requires this port to be connected (there are also optional required ports, which may remain unconnected,
and multiple required ports, which can be connected to more than one provided port). Finally, the provided port processor is made
available to other code as the variable ip. You can run the test case now. On my machine here it works successfully :-)
To better understand the connections between component instances, there is also a graphical editor available. To switch to the graphical
wiring view, select the respective option from the Code → Mode → Component Wiring as Diagram menu.
1: Sure, it is completely overdone to separate this out into a separate interface/component, but for the sake of the tutorial it makes sense.
7.4.5 Mocks
Let us assume we wanted to test whether the Interpolator works correctly with the TrackpointStore interface. Of course, since we have already
described the interface contract semantically, we would find out quickly if the Interpolator behaved badly. However, we can make such
a test more explicit. Let us revisit the test from above:
section testInterpolator {
instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
}
} testInterpolator(test case)
instances interpolatorInstancesWithMock {
instance StorageMock storeMock
instance Interpolator ip(divident = 2)
connect ip.store to storeMock.store
adapt ipMock -> ip.processor
}
We can test for this behavior explicitly via a mock. A mock is a component that specifies the behavior it expects to see on a provided port
during a specific test case. The crucial point about mocks is that a mock implements each operation invocation separately (the steps
below), whereas a regular component or even a stub describes each operation with one implementation. This makes a mock
implementation much simpler -- it doesn't have to replicate the algorithmic implementation of the real component. Let us look at the
implementation:
This mock component expresses that we expect 5 calls in total. Then we describe the sequence of calls we expect. The first one must be
a call to isEmpty, and we return true. Then we expect a store, and for the sake of the example, we check that tp is not null. We also store the tp
parameter in a field lastTP so we can return it later (you can add the parameter assertions and the do body with intentions). We then expect
another isEmpty query, which we now answer with false. At this point we expect a call to take, and another call to store. Notice how we return
null from take: this violates the postcondition! However, pre- and postconditions are not checked in mock components because their
checking may interfere with the expectations! Also, we have slightly changed the test case so we don't stumble over the null. We don't
assert anything about the result of the process calls:
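The mock implementation itself is not reproduced here; the following sketch conveys its structure, with the step markers, parameter assertions and do bodies approximated from the prose (the concrete mock syntax in mbeddr may differ):
exported mock component StorageMock extends nothing {
  provides TrackpointStore1 store
  Trackpoint* lastTP;
  expected invocations: 5
  step 0: store.isEmpty  return true;
  step 1: store.store    assert tp != null  do { lastTP = tp; }
  step 2: store.isEmpty  return false;
  step 3: store.take     return null;       // deliberately violates the postcondition
  step 4: store.store
} mock component StorageMock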
Two more steps are required for this test to work. The first one is the instances and the wiring. Notice how we now connect the
interpolator with the mock:
instances interpolatorInstancesWithMock {
instance StorageMock storeMock
instance Interpolator ip(divident = 2)
connect ip.store to storeMock.store
adapt ipMock -> ip.processor
} ⬈
The second thing is the test case itself. Obviously, we want the test case to fail if the mock saw something other than what it expects on
its port. We can achieve this by using the validate mock statement in the test:
validatemock(0) interpolatorInstancesWithMock:storeMock; ⬈
All these components are time-triggered, i.e. it is assumed that they execute at regular intervals, driven by some kind of scheduler. They all
provide an interface Timed with an operation tick that is called by the scheduler. So far, these components don't exchange any
data yet: sender/receiver interfaces will be used for that later 1. Here is the code so far:
Let's now look at the data exchange, focusing on the position first. Here is a sender/receiver interface, PositionProvider. The interface
declares a set of data elements, in this case with physical units:
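The interface listing is not reproduced here; a sketch of its shape follows (the data element names are taken from the FlightRecorder code below; the sr keyword and unit notation are assumptions):
exported sr interface PositionProvider {
  int32/m/ x
  int32/m/ y
  int32/m/ alt
}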
The GPS is supposed to provide this data, so we give it a provided port with this interface:
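Sketched from the prose (the sensor-reading helper functions are hypothetical placeholders):
exported component GPS extends nothing {
  provides Timed timed
  provides PositionProvider pos
  void timed_tick() <= op timed.tick {
    // assign to the interface's data elements as if they were normal fields
    pos.x = readSensorX();
    pos.y = readSensorY();
    pos.alt = readSensorAlt();
  } runnable timed_tick
} component GPS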
Note how from within component runnables we can use expressions to assign to the data values defined in the interface as if they were
normal fields. Let us now look at the flight recorder. It is supposed to read the data written by the GPS (and the same with the speed
indicator):
exported component FlightRecorder extends nothing {
provides Timed timed
requires PositionProvider pp
requires SpeedProvider sp
Trackpoint[1000] recordedFlight;
uint16 count = 0;
void timed_tick() <= op timed.tick {
with (recordedFlight[count]) {
id = ((int8) count);
x = pp.x;
y = pp.y;
alt = pp.alt;
speed = sp.speed;
}
count++;
} runnable timed_tick
Trackpoint getFlightNo(uint16 no) <= no trigger {
return recordedFlight[no];
} runnable getFlightNo
} component FlightRecorder ⬈
Inside the with-statement, we can access the data acquired via the pp and sp required ports. What distinguishes this from global variables,
of course, is that the component instances still have to be wired: required ports have to be connected to provided ports, in this case
defining access to the data items:
instances instances {
instance GPS gps_comp
instance SpeedIndicator indicator
instance FlightRecorder recorder
connect recorder.sp to indicator.speed
connect recorder.pp to gps_comp.pos
adapt gps -> gps_comp.timed
adapt si -> indicator.timed
adapt rec -> recorder.timed
} ⬈
This part of the tutorial only provided a few examples of interfaces and components. For a full discussion, see Section Interfaces and
Components.
1: Note that this time-triggered architecture is very widespread in embedded software. In future releases of mbeddr we will provide direct support for time-triggered
runnables, so you don't have to use an explicit interface such as Timed.
internal instances {
instance NmeaLineParserImpl lineParser
instance NmeaSentenceHandlerImpl sentenceHandler
The example above would actually not need the interface type, since the handle runnable could be called directly in place.
* Component/Interface Dependencies Select a component or an interface and execute the Visualize action from the context menu (or
press Ctrl + Alt + V ). Fig. 7.4.9-A shows the result.
Figure 7.4.9-A: The interface/components dependency diagram shows all components visible from the current module, the interfaces, and the provided (solid lines) and
required ports (dashed lines).
* Instance/Wiring Diagrams You can also select an instance configuration and visualize it. You'll get a diagram that shows component
instances and their connections (Fig. 7.4.9-B).
Figure 7.4.9-B: This diagram shows component instances and their connectors. The labels in the instance boxes contain the instance name and the component name
(after the colon). The edges represent connectors. The edge label shows the required port (before the arrow), the provided port name (after the arrow), and the name of the
interface used by the two ports (on a new line).
Let us look at the three parameters you have to set here: The first one determines from where the program is "executed". The entry point
should be selected to be "close" to the to-be-verified component (if you verify the whole system then, at least for big systems, this will
take long). In our case we use a special test case for verification, which looks as follows:
instances verificationInstances {
instance Interpolator interpol(divident = 2)
connect interpol.store to store.store
instance InMemoryStorage store
adapt verificationInterpolator -> interpol.processor
} ⬈
The second line in the configuration determines how many times loops are unwound. You should start with low numbers to keep verification
times low. Finally, the third parameter determines whether the verification should fail in case it cannot be proven that the loop unwinding number is
sufficient. You can now run the verification by selecting the component and executing the Verify Component action. After a few seconds,
you'll get a result table that reports everything as ok (see Fig. 7.4.10-A): every precondition of every operation in every provided port has
been proven to be correct.
Figure 7.4.10-A: The table that shows the verification results; everything is ok in this case.
Let us introduce an error. The following version of the trackpointStore_store runnable does not actually store the trackpoint. This violates the
postcondition, which claims that storedTP != null. Note that for the analysis to work, all paths through the body of a function (or a runnable)
must end with a return (you'll get an in-IDE error if you don't do this).
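The faulty runnable is essentially the following (a sketch):
void trackpointStore_store(Trackpoint* tp) <= op store.store {
  // bug: tp is never assigned to storedTP, so the postcondition
  // storedTP != null cannot hold after the call
  return;
} runnable trackpointStore_store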
Let us rerun the verification. Now we get an error, as shown in Fig. 7.4.10-B. Note how the lower part of the table now shows the
execution trace that led to the contract violation. You should check the Call/Return checkbox to filter the trace down to call/return
granularity instead of showing every statement. You can also double-click on the trace elements to select the particular program element in the
code.
Figure 7.4.10-B: The table that shows the verification results; now we have an error, and the trace in the bottom half shows an example execution that led to the error.
Of course, the implementation of addTrackpoint that just adds 0 to the points doesn't make much sense yet. The number of points added
should depend on how fast and how high the plane (or whatever) was going. The following screenshot shows an embedded decision
table that computes points (notice how we mix the components language, decision tables and units in one integrated program):
Figure 7.5-A
You can create the decision table on your own by typing the keyword dectab - this instantiates the concept. To add a column, hit enter
in one of the cells. To add a row, move your cursor to the left side of the table (between the default return value and the table) and hit
enter. Now, let us write a test. Of course, we first need an instance of Judge:
instances instancesJudging {
instance Judge theJudge
adapt j -> theJudge.judger
} ⬈
Below is the test case. It contains two things you may not have seen before. There is a second form of the for statement that iterates over
a range of values. The ends of the range can be inclusive or exclusive (changed via an intention). In the example we iterate from 0 to
4, since 5 is excluded. The introduceunit construct can be used to "sneak" a unit into a regular value. This is useful for interacting with
non-unit-aware (library) code. Note how the expression for speed is a way of expressing the same thing without the introduceunit in this case.
Any expression can be surrounded by introduceunit via an intention.
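Roughly, the loop does something like this (a sketch; the range and introduceunit notations are approximated, so check the tutorial code for the exact syntax):
for (int8 i in [0..5[) { // second form of for: iterates 0..4, the upper end is exclusive
  Trackpoint tp;
  tp.id = i;
  tp.speed = introduceunit(10 * i -> mps); // sneak the unit mps into a plain value
  j.addTrackpoint(&tp); // j is the adapted port variable from instancesJudging
}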
8 Process Support
8.1 Requirements
8.1.1 Overview
mbeddr ships with a language for capturing, documenting and maintaining requirements. To use it, add the com.mbeddr.reqtrace devkit to
your model. In this documentation we refer to a number of requirements modules, in particular FlightJudgementRules⬈,
ArchitecturalComponents⬈ and UseCases⬈.
Figure 8.1.2-A ⬈
Hierarchy Requirements are stored in a hierarchy. Each requirement can have children, as can be seen from InFlightPoints⬈. The
semantics of hierarchical nesting are not rigorously defined, but usually assumed to represent refinement (i.e., additional details).
Kind Each requirement has a kind. The kind represents the nature of the requirement; existing kinds include functional, non-functional
and technical. Language extension can be used to define arbitrary additional kinds. In addition to being a label (to sort or query
requirements), the kind can also be used to enforce the presence of certain additional requirements data nodes (discussed next).
Additional Data In addition to the description, a requirement can have data objects. As usual, these are extensible and can represent
any additional structure. Examples include the (demo) business rules in PointForATrackpoint⬈, tables as in priceDep⬈ or architectural
components as shown in Driver⬈. The requirements kind described above can include constraints that enforce certain kinds of data (e.g.,
a timing requirement may require a timing spec data object).
Cross References An important aspect of requirements is their relationships: a requirement can establish relations to other
requirements. The requirement shown in Fig. 8.1.2-B shows both kinds of cross references. First, cross references can be added to the
prose description using the @req word (press Ctrl + Space anywhere in the prose block to insert such special words). The other alternative
is to use special data objects (such as requires also and conflicts with). The former approach is a generic relationship; the latter ones are
qualified. Of course, additional relationships can be created using language extension.
Figure 8.1.2-B ⬈
Tags Tags are similar to data objects, but they are "syntactically smaller". Existing tags include the requirements status (you can set it
to draft, accepted, etc.), the estimated effort for implementing the requirement, as well as a general string tag represented by the @ sign.
The requirements language is intended to be used by (among others) non-programmers. They may not be familiar with pressing
Ctrl + Space to enter new program elements. Hence, there is a special projection mode, Editor Helper Buttons, that projects buttons into a
requirements module to add child requirements or data, or to change the nesting hierarchy (see Section Projection Modes on how to switch
projection modes).
Figure 8.1.3-B: The tree view that shows the dependencies of a requirements module.
8.2 Tracing
A trace is a pointer from some implementation artifact to one or more requirements 1. To use traces, make sure you use the
com.mbeddr.reqtrace devkit in your models.
For an example of a trace, see the StateMachines⬈ module. It has traces attached to a number of C constants and to various parts of the
state machine; an example is also shown in Fig. 8.2-A. Make sure you select a projection mode that lets you actually see the traces (see
below).
Trace Kind Each trace has a trace kind associated with it. Trace kinds characterize the nature of the trace relationship. While the set of
trace kinds is extensible, mbeddr ships with three default trace kinds: implements, exemplifies and tests. Note that a trace kind can restrict the
kinds of requirements it can trace to, and it can also restrict to which nodes the trace can be attached. While the default trace kinds have
no such constraints, it is useful in general to enforce specific semantics through those constraints.
Projection Modes There are three different modes how traces can be shown (and they can be switched like all other projection modes,
see Section Projection Modes):
Detailed Traces: The trace kind and the trace target(s) are shown on the right side of the traced node (see Fig. 8.2-A).
Compact Traces: Only a small [T] is shown on the right side of the traced node (see Fig. 8.2-B). A tooltip shows the first of the trace
targets and the trace kind. The kind and targets can be changed in the inspector of the [T] node.
No Traces: The traces are not shown (even though, of course, they are still in the code).
Figure 8.2-A: The detailed trace mode in mbeddr.
To get a better feel for the projection modes we suggest you play around with them in the StateMachines⬈ module which contains traces
to requirements.
To remove a trace, you can simply use the Backspace key when the trace itself is selected.
Figure 8.2.2-A: Selecting the Traces as the target for the Find Usages.
An example traces assessment can be found in Traces⬈. In the query, you have to select the requirements module for whose
requirements you want to run the assessment. The results then show each requirement and the code locations from which it is traced.
1: Strictly speaking, it can also point to other traceable elements; mbeddr is extensible in this way. But by default, only requirements are supported as trace targets.
8.3 Assessments
An assessment is essentially a report over a model. It performs a query and then reports the results. The queries are of course
customizable; there are dozens of different assessments and new ones can be developed using language engineering. Also the structure
of the result data items is of course different for each assessment. Check out the following assessments to get an impression:
Visualizations⬈ reports a simple list of all program elements that have a visualization associated with them. The visualization can be
rendered directly from the assessment result item (using Ctrl + Alt + V or the context menu). This assessment always uses the
current model as the scope for the report; it has no further configuration options.
Traces⬈ shows a trace report. The query can be parametrized with the requirements module for which the query should be run. The
result shows various properties; you can use Ctrl + Click to navigate to the result nodes themselves. This assessment
also uses grouping.
EffortsOfWorkPackages⬈ shows an assessment that can be parametrized in various ways; the results use custom cells to show a
progress bar.
Since assessments and their state are persisted, they can be used to assess, or audit, models over time. As we discuss below, it is easy to
track changes to the result set over time.
Setting up an Assessment Assessments live in an Assessments root. Once such a root is created, any number of assessments can be
added. Each assessment has a name and a query. The query is a separate node that performs the query itself; it also may have
parameters that can be set by the users. Queries are the primary extension point, new queries can be developed via language extension.
Updating and the Color Code Once a query is defined, you can use Update to execute it (either via an intention or via the context
menu). Updating the assessment reruns the query. However, the old results are not replaced by the new ones. Instead, the system
performs a kind of diff:
Results that are no longer in the new result set are deleted.
Results that are in the new set but were not in the old one are called new; they get an orange bar in front.
Results that are in both the new and the old result set are marked grey.
By using this color coding scheme, users can always see which result items have been added during the latest update. This is very useful
for continuous audits of models because it is immediately obvious which result items may require attention (see next paragraph).
Checkboxing and Errors A particularly interesting use case for assessments is the detection and tracking of smells, style guide
violations or other things that may have to be fixed or changed. The assessment can be used to find and report them. The user then goes
through all of the result items and decides for each of them what to do:
The user may fix the offending code; upon update of the assessment, the corresponding result item goes away.
Alternatively, the user may decide that a particular smell is actually ok in this case; then the result item must be marked as "it's ok,
don't bother me again".
To support the latter case, the must be ok option can be set on an assessment. Then every result item gets its own checkbox. Checking this
checkbox means that "this result item is ok", i.e., it should not be marked as an error (the colored vertical bar becomes blue in this case).
It is also possible to hide all those items where the checkbox is checked in the result set by selecting the hide ok ones option. Finally, an
assessment that is marked as must be ok results in a regular type system error if it has one or more result items that are not marked as ok.
The user guide you are reading right now is written with this language. Consequently, a good way to learn mbeddr's documentation
language is to switch off Presentation Mode (via the Code → Projection Modes menu) and take a look at how the user guide is built. We'll explain
some basics in this chapter but otherwise recommend that you just look at the user guide⬈.
Presentation Mode: Most importantly, we support the Presentation Mode where the document is rendered in MPS with a nice,
readable projection and all images, listings or code snippets shown inline.
LaTeX: A generator creates LaTeX sources from documentation language Documents. While this works well in principle, there are
several layout problems with image sizes. After the sources have been generated, they have to be processed by LaTeX manually
(e.g., by invoking xelatex on them). A more detailed documentation is still [ToDo: ].
HTML: Similar to LaTeX, we can also generate HTML sources. We use this to generate the online version of this user guide.
Documents: Documents are the main roots in which documentation language content lives. As usual in mbeddr, Documents can
express dependencies on other Documents.
Sections: A document contains sections. Sections have a name (used for referencing them) as well as a title. Sections can be
nested. They are also automatically assigned a number (even though this does not work correctly in Presentation Mode). A
reference to a section is created via the @sect word.
Chapter: There is also a kind of "special section", the Chapter. It is similar in all respects to sections except that it is a chapter, so it
can be treated specially in transformations.
Paragraphs: Inside sections there are all kinds of paragraphs.
Paragraphs The paragraph is an abstract concept of which many subconcepts exist. They hold the meat of the document:
image: This represents an embedded image. You specify a path prefix (see Configuration and Resources below), and then you can
use code completion to select an image file under that path that should be shown in the editor.
embed as text: This embeds arbitrary MPS nodes into the document. When rendered to HTML or LaTeX, the embedded code will
be "copied to text", so this really only makes sense for notations that have a meaningful textual representation (essentially,
program code).
embed as image: Similar to the previous one, but the MPS nodes are rendered as an image (screenshot) when serialized to LaTeX
or HTML.
visualization: References a program node that has a visualization. When rendered out, the visualization is created in real time and
embedded.
Words Inside text paragraphs, you can use various special words; they can be entered by pressing Ctrl + Space inside text blocks.
Those that start with an @ are references; those that start with a \ are formatting options. The default text for references can be
overridden in the inspector view.
There are the usual formatting options for \bold, \emph, \math, \key, \menu and \code. These can also be entered by selecting existing text
and then pressing Ctrl + Alt + T to bring up the wrapper intentions menu.
There are references to other sections (@sect) and to figures, images or code embedded as images (via @fig).
You can point to other resources in the file system using @attachment
You can reference MPS nodes via @node
You can reference MPS modules via @language, @solution, @generator or @devkit.
You can embed URLs using \url
Footnotes are added via \footnote
Todos can be left in the text via \todo
Configuration and Resources Every document must refer to a Configuration node. It is mostly used in the context of LaTeX or HTML
rendering. However, it also defines the default path where the visualizations store their temporary images. Most importantly, it defines the
path prefixes where images (and other attachments) are searched. The path is always relative to the current solution's root directory.
When defining image paragraphs or attachments, you use one of the paths defined in the Config as the path prefix and select an image
"below" this path prefix. This is supported with code completion.
Building For consuming documents inside MPS using the presentation mode, no build is necessary. However, in case of generating
LaTeX or HTML, a DocumentExport node is needed to configure the generation process. [ToDo: ]
In principle, all program nodes can be reviewed separately. For pragmatic reasons, we currently restrict reviews to root nodes, as
well as the first level of children (i.e., module contents such as functions, components, or interfaces in C implementation modules). In
a future version there will be preferences that define the granularity of review specifically for a project.
The review information is stored in the program node itself, so it is persisted along with the code itself. It survives branching etc.
Intentions are used to change the review states.
new: The code has just been written. No review-related information is stored in the code. For example, the FlightJudger⬈ component
is in this state.
ready: When the developer thinks the code is finished, it can be marked as ready. The code gets a yellow background, and an
annotation that states that the code is ready. See Judge2⬈ for an example.
reviewed: Once a reviewer has reviewed the code, it is marked as reviewed. The code gets a green background. Check out the node
instancesJudging⬈ for an example.
raw: After the code has been marked as ready or reviewed, it may be modified again. At this point, of course, the review is invalid. The
code now becomes raw (red background). This test case is an example: testJudging⬈
When a piece of code is marked as ready or reviewed, we create a hash of the code structure. This hash is stored in the code review
annotation, inside the code. By recalculating the hash and comparing it to the one stored in the code, we can detect whether the code has
been changed. However, because of the performance implications, we don't calculate this hash automatically. Instead, you have to use
the Reevaluate Review intention on the reviewed node. Alternatively, you can also reevaluate all review states from one single location; this is
discussed below in Section Assessments.
8.5.4 Assessments
There is an assessment (see Section Assessments for details about assessments) to show the global state of the code review. Fig. 8.5.4-
A shows an example.
Figure 8.5.4-A ⬈
The query that shows the code review state is called code review summary. In the query you can set the scope to either a single chunk, a
single model, or a model including its imports. The results show the state of the review for the various reviewable nodes (incl. the color
code); the second column is clickable, so you can directly jump to the mentioned section of code.
Updating the assessment also re-evaluates all the hashes of the reviewed sections of code. So the assessment, in addition to showing an
overview, is also the central place from which to reevaluate all review states.
A discussion of the trade-offs between static and runtime variability is beyond the scope of this user guide: it is not a tutorial on product
line engineering in general, but only a tutorial on how to do it with mbeddr.
Feature Models Feature models describe a configuration space. An example feature model is shown in Fig. 8.6.1-A. A feature is simply a
named entity. Features can have children (subfeatures). A feature specifies a constraint over its subfeatures that determines how they
can be selected. The following four tree constraints exist:
! (mandatory) : this means that all child features are mandatory. In a valid configuration, all of them must be selected.
? (optional): this means that in any valid configuration of this feature model, any combination of the child features can be selected. In
other words, all children are optional.
xor (exclusive): this means that exactly one subfeature must be selected in a valid configuration
or (n-of-m) : this means that one or more (but not zero) of the subfeatures must be selected in a valid configuration.
Figure 8.6.1-A ⬈
In addition, a feature model can also define derived features. These are essentially just macros, i.e., expressions over the existing feature
tree. The features everything⬈ and empty⬈ are examples.
Features can also have attributes. For an example, see maxSpeed⬈ below. These are (more or less primitively) typed attribute
declarations. In a valid configuration, all attributes of all selected features must have a value.
Configurations Once a feature model is defined, you can specify configurations or variants. A configuration has a name and selects any
valid (in terms of the constraints discussed above) subset of the features defined in the referenced feature model. The example below has
three configurations: cfgDoNothing⬈, cfgNullifyOnly⬈ and cfgNullifyMaxAt200⬈. Note how cfgNullifyMaxAt200⬈ specifies a value of 200
for the attribute associated with the maxCustom⬈ feature. An example configuration is also shown in Fig. 8.6.1-B.
Note that, in order for the mechanisms described below (static and runtime variability) to work, you need to make sure that the feature models
on which the variability relies are actually imported into the respective implementation modules.
Figure 8.6.1-B ⬈
Variant-dependent behavior This function takes two arguments; one of them is an fmconfig. This data type holds a configuration for the
specified feature model (FlightVariability⬈ in this case). Inside the function we use the variant statement to branch the code based on the
selected features; a sketch is shown below. The variant statement works essentially like a switch statement, but it "switches over" feature configurations. Note that the
conditions in the case parts only allow access to the features defined by the feature model mentioned in the surrounding variant statement.
Since variant is a statement, it can only be used in statement context -- so it cannot be used to vary arbitrary behavior as expressed, for
example, with state machines. Of course, additional similar language constructs could be built for other DSLs.
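As a sketch (the fmconfig parameter and the variant syntax are approximated from the prose; the names follow the tutorial):
void limitSpeed(fmconfig<FlightVariability> config, Trackpoint* tp) {
  variant (config) {
    case nullify: {
      tp->speed = 0;
    }
    case maxCustom: {
      // the attribute of a selected feature can be used as a normal expression
      if (tp->speed > maxSpeed) { tp->speed = maxSpeed; }
    }
  } variant
}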
Note also that we can access the values of attributes associated with features. In the code above we access the maxSpeed⬈ attribute
and use it as a normal expression. Note that only attributes of selected features can be used; otherwise, no value is available.
Storing Configurations in Programs In order to be able to change program behavior based on feature configurations at runtime, the
configuration has to be stored in the program itself. The test case testRuntimeVar⬈ exemplifies this; a sketch follows below. Note how we declare an fmconfig
variable for a given feature model and then use the store config statement to store a specific configuration into the variable. We then call the
above-mentioned function, where the variant statement exploits the data.
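A sketch of such a test case (keyword spellings assumed; limitSpeed is the hypothetical variant-aware function sketched above):
exported test case testRuntimeVar {
  fmconfig<FlightVariability> cfg;
  store config cfgNullifyMaxAt200 into cfg; // store a concrete configuration at runtime
  Trackpoint tp;
  tp.speed = 250;
  limitSpeed(cfg, &tp); // branches via the variant statement inside
} test case testRuntimeVar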
Presence Conditions A presence condition is a Boolean expression over features attached to any program element. During
transformation, the program element is deleted if, based on the selected configuration, the Boolean expression evaluates to false.
Presence conditions are essentially a kind of "structured #ifdef". In the two functions process_trackpoint⬈ and testPresenceConditions⬈,
presence conditions are attached to several statements.
The program elements to which presence conditions are attached are color-coded. The color depends on the expression. This means that
all program elements that are annotated with the same presence condition get the same color, making it easy to spot disparate parts of
programs that rely on the same variant configuration.
Presence conditions are attached to program nodes with the Toggle Presence Condition intention.
Projection Modes Variability-aware code can be projected in various ways (switchable via the Code → Mode menu). The Detailed
Product Line mode shows the presence conditions in-line in the code (above the annotated node). The Compact Product Line mode just shows small
markers in the code; hovering over a marker shows the presence condition in a tooltip. The third mode shows the program as it would
look for a given variant (as if the transformation had been executed directly in the editor). For this to work, the system has to know which
variant should be rendered. To do this, a so-called Variant Selection annotation must be attached to the respective root node (it must be on
the root node!) using an intention. This annotation allows the selection of a feature model and a particular configuration. Once one is
specified, the Selected Variant projection mode can be selected.
To better understand these modes, please go to StaticVariability⬈ and experiment for yourself.
Conditional Replacements Presence conditions are useful for blocks, or statements, or other structural parts of programs. However,
sometimes you only want to change the value of expressions. Exchanging the complete statement (e.g., in case of a variable declaration)
has the problem of changing the identities of the variables, which leads to all kinds of downstream problems. It is hence much better to
change the value of a variable by exchanging an expression. Conditional Replacements and Conditional Switches can be used for this. You can
also use Conditional Alternatives , as discussed below.
Take a look at testConditionalReplacement⬈. In the case of v1⬈, we use a conditional replacement to exchange the value 10 with 0 if the
nullify feature is selected. A conditional replacement replaces an expression with one other expression, based on a feature condition. It's a
bit like an if statement. However, sometimes an expression must be replaced with various different ones based on a set of feature conditions.
A conditional switch can be used for this; it is more like a switch statement. Take a look at v2⬈ for an example. Both conditional
replacements and conditional switches are attached with intentions.
Conditional Alternatives Presence conditions are good for exchanging code blocks or statements. Conditional replacements and switches
are good for replacing expressions. What's missing is a way to replace things that have a name and can be referenced. The problem is
that all references to something break if it is removed via a presence condition. To solve this problem, mbeddr has a third option, the so-
called conditional alternative.
Take a look at alternatives⬈. We declare a function add⬈ which we call from the test case testConditionalAlternative⬈. Now let's imagine
we wanted to replace this function with another one based on a presence condition. If we did that, we'd also have to use presence
conditions on all call sites of the function. This is annoying. Conditional alternatives solve this problem in the following way:
Note that this does not just work for functions but for anything that has a name and can be referenced.
Building Variable Systems In order to be able to build variant-aware software, one has to specify the configurations for all involved
feature models. To do this, you have to add a variability mappings configuration item to the build configuration (please see [ToDo: ] for a
general discussion of mbeddr's build system). In this item you can list all relevant feature models plus their configurations. Check out this
build configuration as an example:
BuildConfiguration (m.t.m.defaultExtensions)⬈. Note that you will get errors in the build configuration and sometimes during generation
itself, if no configuration is specified for a feature model used in the system.
As explained in [ToDo: ], each model can only have one build configuration. Relative to building variability-aware software, this results in
two alternative setups:
You can either have one model that contains your variable code (i.e., programs with presence conditions and the like) as well as a
build configuration. By changing the variability mappings in the build configuration, you can build different variants of the software.
However, since they are all in the same model, you can only build one variant at a time!
The alternative is to have one model with your variability-aware code (let's call it S), and then an additional model for each variant
you want to build (models V_1 through V_n). The additional models V_i import the model S and each contain a build configuration with
a suitable variability mapping. This way, each of these models builds a separate variant. They exist in parallel, so you can automate
the build for all the variants at the same time.
9 Analyses
9.1 Introduction to Analyses
Besides increasing programmer productivity, domain-specific languages allow the definition of advanced, user-friendly and domain-
specific analyses. One special kind of analysis is formal verification, which uses mathematically precise methods for checking properties of
mbeddr code. In mbeddr we have integrated different formal verification techniques, intended to be used on a continuous basis in the everyday
work of practicing engineers. As of today, we have implemented two different kinds of analyses:
Formal verification of variability: mbeddr features advanced support for the definition of product lines using feature models,
configuration models and annotation of artifacts with advanced presence conditions. We analyze the consistency of feature models,
of configuration models and of the annotated artifacts. To do this we use the SAT4J SAT-solver http://www.sat4j.org/.
Formal verification of mbeddr-C programs: once programs are written in mbeddr, we enable developers to use advanced formal
verification techniques to prove the absence of bugs or, where the analyses are incomplete, to search for hard-to-find bugs. Our focus
is on the user-friendliness of the verification: all analyses are targeted at practicing embedded engineers. For verification we use the
CBMC C-level model checker (http://www.cprover.org/cbmc/).
FM-Analyzer checks whether a feature model can be instantiated (i.e. whether there exist configuration models that conform to the
feature model). Examples of inconsistencies are mandatory features that conflict with each other, or features that require other
features whose parents cannot be part of the same configuration model.
CM-Analyzer checks whether a configuration model is consistent with the feature model that it configures. Examples of
inconsistencies are when not all mandatory features are selected or when conflicting features are selected.
Modules-Analyzer checks whether an mbeddr-C program whose parts are annotated with presence conditions is consistent.
An example of an inconsistency is a function declaration annotated with a presence condition while calls to this
function are not annotated with one - in some variants the function is then called even though the function itself is not included in
the generated code, causing generator or compile errors (see the sketch below).
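A sketch of such an inconsistency (presence conditions are shown inline here for readability; in mbeddr they are attached as annotations via an intention, not typed textually):
{MinimumDebug} // this function only exists in variants where MinimumDebug is selected
void log_debug_info(int16 value) {
  // ... write debug output ...
}
void process(int16 v) {
  log_debug_info(v); // no presence condition on the call: in variants without
                     // MinimumDebug this call refers to a function that was removed
}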
Variability analyses can be run both at a fine-grained level, on individual feature models and configuration models, and coarsely on whole
models and solutions that use variability support.
9.2.1 FM-Analyzer
In the figure below we present an example of a feature model defined in mbeddr.
Figure 9.2.1-A: A feature model contains features and relations among them.
The FM-Analyzer can be started by right-clicking on the feature model name and selecting the corresponding menu entry.
FM-Analyzer will open a window with the result of the analysis. In the case when the analysis fails, FM-Analyzer provides an explanation
about the cause of failure.
Figure 9.2.1-C: Results provided by the FM-Analyzer. The failure is caused by the fact that the feature 'NoDebug' cannot be selected in any configuration since it
conflicts with the 'Optimization' feature which is mandatory.
9.2.2 CM-Analyzer
In the figure below we present an example of a configuration model defined in mbeddr. Each configuration model contains a subset of
features from the feature model it configures.
The CM-Analyzer can be started by right-clicking on the configuration model name and selecting the corresponding menu entry.
CM-Analyzer will open a window with the result of the analysis. In the case when the analysis fails, CM-Analyzer provides an explanation
about the cause of failure.
Figure 9.2.2-C: Results provided by the CM-Analyzer. The failure is caused by the fact that the feature 'Communication' cannot have both sub-features 'CAN' and
'FlexRay' at the same time.
9.2.3 Modules-Analyzer
In the figure below we present an example of an implementation module in mbeddr which has attached presence conditions to several
entities.
The Modules-Analyzer can be started by right-clicking on the implementation module name and selecting the corresponding menu entry. It
checks whether the implementation module is consistent with respect to the feature model. Examples of inconsistencies are cases where
for a function (or variable) declaration we have a presence condition, while the calls (or references) to this function (or variable)
do not have any presence condition. In this case, if code generation is performed, we can get a compile error.
Modules-Analyzer will open a window with the results of the analysis. In the case when the analysis fails, Modules-Analyzer provides an
explanation of the cause of failure by giving a possible configuration (a selection of concrete features) that would cause the failure.
Figure 9.2.3-C: Results provided by the Modules-Analyzer. The failure is caused by the fact that the function 'log_debug_info' has the presence condition 'MinimumDebug'
while the caller of this function does not have any presence condition annotated.
When a solution contains models that define feature models, then we can check the variability consistency (both definition and use) for
the entire solution.
Figure 9.2.4-B: One-click start of both variability definition and variability use checks for solutions.
"This product includes software developed by Daniel Kroening, ETH Zurich and Edmund Clarke, Computer Science Department,
Carnegie Mellon University."
The focus in mbeddr is on hunting bugs at the unit level in a continuous and user-friendly manner. In general, we do not aim to prove
correctness of a system, but rather to help users find bugs. Each run of an analysis can produce one of the following results: SUCCESS (colored
GREEN), meaning that no bug could be found; FAIL (colored RED), meaning that a bug was found; or DON'T KNOW (colored
ORANGE), meaning that no bug could be found but the verification is incomplete. When a bug is found, most of the time we have a trace
through the system (a counterexample) that leads to that failure.
Robustness verification at C-level: allows checking of robustness properties like: absence of null pointer dereferencing, absence of
div-by-zero, absence of overflows, absence of arrays-out-of-bounds accesses, etc. A more in-depth presentation of robustness
checks is given in Section Robustness Checks at C-level.
Robustness verification of C-extensions: allows automatic checking of the robustness of several mbeddr-extensions (similar to C-
code robustness checks, we can check the robustness of extensions). For example, given a state-machine, a robustness check for it
ensures that all states of the state machine can be reached and all transitions can be fired. More details are provided in Section
Robustness Checks of mbeddr-Extensions.
Functional verification: mbeddr users can themselves define the business-domain-specific properties to be checked on a given
program; such properties usually originate from functional requirements. Such properties can be given in code either as assertions
or attached to higher-level constructs like component interfaces. More details are provided in Section Functional Verification.
Before starting each verification we should ask ourselves the following questions:
Q1) What will be checked? The first step is to be aware of what properties will be checked and what properties will not be
checked :-) Also, be sure to understand what sub-system will be verified.
For example, when performing robustness analyses of C-code (e.g. searching for div-by-zero) with a function as entry point, only
those division operations that can be reached from that entry point will be checked for div-by-zero (see below).
Q2) What is the verification entry point? Each run of CBMC must be given the entry point in the program from which it should
perform the verification. When nothing is specified, the entry point is the function main. Code not reachable from the entry point will
not be checked at all!
Q3) What are the environment conditions? Each sub-system (e.g. function, component) on which a verification is performed is
embedded in a bigger system from which it gets its data. Usually, the data types used in the code are much more permissive than
the actual values that a parameter can take (e.g. for a function that computes the distance a car can travel in a certain time period,
given a parameter currentSpeed of type int32, only a small sub-interval of int32 is relevant, since cars cannot travel very fast or have
high negative speeds).
Q4) What are the parameters given to CBMC? CBMC takes a large number of parameters that directly affect the results of the
verification. The most important parameter is related to loop unwinding 2 - CBMC performs a stepwise symbolic execution of
the code and needs to know how many times loops should be unwound.
mbeddr allows the definition of higher-level, domain-specific properties. Our approach to checking them is to generate C-level labels (or
asserts) for each of them and to check the reachability of these labels (or the violation of the asserts). In some cases, if a label is reachable,
then the property is violated (e.g. for decision tables); in other cases, labels that are not reachable represent property violations (e.g. for the
verification of state reachability in state machines: if the label corresponding to a state is not reachable, then that state is not reachable). Encoding
high-level properties as labels (or assertions) and lifting the verification results back is done automatically and transparently for the user.
In order not to clutter the production code with unnecessary lines, the labels used for the verification are generated only when a model
'rebuild' started by an analysis action is performed - please make sure that the code you analyze gets rebuilt before the analysis
starts.
Figure 9.3.1.1-A: An intuitive overview about how we integrate CBMC in mbeddr: users write high-level functional properties, they are translated into low-level C
checks, CBMC is run and then the produced result is lifted back at the domain level.
1: If advanced environment specifications are used, the generated C code might also contain CBMC-specific macros, so it does not need to compile.
Step 1: Add the corresponding verification devkit. Analyses are provided through a series of devkits that need to be enabled for the
model where the analysis is run. Without these devkits enabled, either some analyses are not possible at all, or the interpretation of
the program trace will not be lifted to the DSL level.
Step 2: Make sure that C code is generated from the model you check. The verification is performed directly on the generated C
code. Hence, if no valid C code is generated from your models, no verification will be done. Do not forget to add all
implementation modules to the build configuration.
Global settings for CBMC-based analyses can be changed via the File → Settings menu and then selecting the CProver preference tab, as shown
below.
Number of parallel threads defines the maximum number of instances of CBMC that will be started simultaneously.
Timeout represents the time budget allocated for an analysis. After the timeout expires, the CBMC process will be killed and the
analysis will be stopped. When a high-level analysis starts several instances of CBMC (e.g. in the case of robustness
analyses, one instance of CBMC is started for each atomic robustness check), users can define a timeout for the atomic analyses
- denoted as the partial timeout.
Use of VCC/GCC under Windows: If VCC is used, mbeddr should be started from the Visual Studio console. If GCC is used,
then 'gcc' should be on the path (see Section Verification Tools).
Rebuild models before analysis instructs mbeddr to perform a full "MPS rebuild" of the analyzed model before each analysis
(otherwise only an "MPS make" will be run; in MPS, a plain make does not always function properly).
Slice formula instructs CBMC to perform program slicing. The analyses will run faster, however, in the case when the analysis fails,
the returned counterexample might not contain all intermediate steps.
Unwinding depth instructs CBMC how many times it should unwind loops. If generate unwinding assertions is selected, then CBMC
will explicitly check that each loop was unwound enough. For more details about the loops unwinding in CBMC, please look here:
http://www.cprover.org/cprover-manual/cbmc-loops.shtml
Partial loops instructs CBMC to continue the analysis even if the unwinding is not sufficient. Enabling partial loops increases the number
of bugs that can be found; however, it might introduce false positives (properties reported as SUCCESS even if they should FAIL)
and false negatives (properties reported as FAIL even if they should be SUCCESS; in these cases, the provided counterexamples are
not feasible). Infeasible counterexamples can also be returned in cases where the property does indeed FAIL.
Use refinement instructs CBMC to use a refinement procedure. If checked, this setting speeds up some analyses. However, it should
be used with care since it is still marked as 'experimental' in CBMC 4.9.
Figure 9.3.3-A: Adding the com.mbeddr.analyses.core devkit to your model will enable robustness checks.
To start the robustness checks for a given function 1 you can either open a pop-up menu on the function or define a robustness analysis
configuration 2.
Figure 9.3.3-D: For each configuration item the users can fine-tune which robustness properties will be checked and the entry point in the verification.⬈
Figure 9.3.3-E: To verify an analysis configuration item, one needs just to right-click and select the menu.
When the analysis is started from the pop-up menu, all robustness properties will be checked. Below we summarize the robustness
properties (details about these properties can be found on the CBMC web page http://www.cprover.org/cprover-manual/properties.shtml
and in its user guide):
array bounds check that each array access is within bounds (aka. buffer overflow).
div-by-zero check that for each division the divisor is not zero.
not-a-number check whether the floating-point computations can result in NaN.
pointer check that pointers are valid whenever they are accessed (no NULL-pointer dereference and no access to invalid pointers
such as dead objects).
overflows/underflows check that no signed/unsigned/float over- or underflow can occur.
memory leak check that all memory allocated within the run code is also deallocated before exit.
1: All analyses are performed on the code of the function set as entry point and, transitively, on the functions that are called from it.
2: Most analyses can either be run with one click (in that case the global settings (see Section Basic Configuration) will be used and the entry point of the analysis is
'main'), or be run via 'analysis configurations'. An analysis configuration enables the specification of entry points (e.g. verification environments) and a fine-
granular definition of CBMC parameters.
Let's consider a decision table which implements a look-up table to compute a braking distance given the current speed and the
information whether the road is icy or not.
Figure 9.3.4.1-A⬈
A decision table can be verified only if the 'checked' annotation is enabled as shown below. This flag will instruct the C-code generator to
generate labels for each of the properties to be checked.
Figure 9.3.4.1-B: To verify a decision table one needs to make it 'checked' by using an intention.
It is possible to start the verification with one click from the pop-up menu of the decision table node or through an analysis configuration
as described in the case of robustness checks. The one-click start will use the main method as entry point; in the case when analysis
configurations are used then we can specify another entry point (usually a harness - see Section Defining the Environment for
Verification).
After the verification is finished, the results (i.e., have we missed cases? do we have cases where two cells of the table are active at the
same time?) are automatically shown. When a result fails, a trace through the system is given that shows an example of values that
could cause the failure.
Figure 9.3.4.1-E: A trace to the failure will be shown if the verification result is selected.
The first step to check state-machines is to add the com.mbeddr.analyses.statemachines devkit to the model containing the state machine. This
devkit enables actions in context menus and the lifting of counterexamples such that they are aware of state-machines.
Let's consider a state-machine that implements a simple counter. After the state-machine is started (with the start event), it counts up or
down.
statemachine Counter initial = Init {
in event countUp(int16 step) <no binding>
in event countDown(int16 step) <no binding>
in event start() <no binding>
var int16 counterState = 0
state Init {
on start [ ] -> Counting
} state Init
state Counting {
on countUp [ ] -> Counting { counterState += step; }
on countUp [step < 0] -> Init
on countDown [ ] -> Counting { counterState += step; }
on countDown [step < 0] -> Init
} state Counting
} ⬈
A state-machine can be verified only if the 'checked' annotation is enabled, as shown below. The checked flag is enabled via an intention
on the state-machine. This flag will instruct the C-code generator to generate labels for each of the properties to be checked (see Section
Behind the Curtain).
checked ⬈
The verification can be started either with one click from the pop-up menu of the state machine node or through an analysis configuration.
The one-click start uses the main function as entry point and the global settings for CBMC; when an analysis configuration is used,
another entry point can be specified and the settings can be fine-tuned.
After the verification is finished, the results are automatically shown: if a state cannot be reached or a transition cannot be fired, the
result is marked with FAIL. In this case the failure means that the generated label cannot be reached, so there is no trace through the system.
Figure 9.3.4.2-B: Results of the state machine verification. Two transitions cannot be fired since they are shadowed by previous transitions.
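As a sketch of the underlying technique - an assumption about how label-based reachability checks commonly work with CBMC, not a reproduction of mbeddr's generated code - a label of the form assert(0) is placed at the location of interest. If the location is reachable, CBMC reports a failing trace to it; if it is not, the assertion trivially holds and no trace exists:

#include <assert.h>

int nondet_int(void);  /* CBMC convention: an undefined function yields a nondeterministic value */

int main(void) {
  int x = nondet_int();
  if (x > 0 && x < 0) {
    assert(0);  /* reachability label: this branch is infeasible, so the assertion
                   holds and no trace is produced - the lifting would mark the
                   corresponding state or transition as FAIL */
  }
  return 0;
}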
Writing assertions in C code is a 'classical' method to check properties. In the code fragment below we present a simple assertion.
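A minimal illustrative sketch of such an assertion in plain C (the names are hypothetical):

#include <assert.h>
#include <stdint.h>

int16_t divide(int16_t a, int16_t b) {
  assert(b != 0);  /* property checked by the assertions analysis */
  return a / b;
}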
Extended assertions: Often, using simple assertions to encode more complex properties that describe the evolution of the system
over time (e.g. 'after Q then P', 'before Q must P') is cumbersome. As usual, mbeddr provides a set of first-class language constructs that
allow users to express such properties directly. By clicking on these constructs, you can find information in the 'Inspector' about how the
code will be generated - basically C-level assertions wrapped in small monitors.
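As a rough sketch of the 'small monitor' idea (an assumed shape, not mbeddr's actual generated code), an 'after Q then P' property can be encoded with a flag that records whether Q has occurred:

#include <assert.h>
#include <stdbool.h>

static bool qSeen = false;        /* monitor state: has Q occurred yet? */

void onQ(void) { qSeen = true; }  /* called when Q happens */

void monitorStep(bool p) {
  if (qSeen) {
    assert(p);                    /* 'after Q then P': once Q has occurred, P must hold */
  }
}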
The assertion verification can be started either with one click from the pop-up menu of the function, or by defining an analysis
configuration item. Both ways are illustrated below.
Figure 9.3.5.1-C: One click check of all assertions that can be reached from one function.
Figure 9.3.5.1-D: Like any other analysis configuration item, assertion analysis configuration items allow mbeddr users to fine-tune different verification parameters.
mbeddr allows its users to define interfaces and components that implement them. To each of the runnables of an interface, we can
attach pre-/post-conditions. Furthermore, the expected ordering of calls to the functions of an interface can be defined using a protocol
specification. Together, the pre-/post-conditions and the protocol are called the contract of the interface.
Figure 9.3.5.2-A: Example of an interface with pre-/post-conditions and protocol attached to its functions.
Interfaces can be implemented by components, as in the code fragment shown below.
Figure 9.3.5.2-B: A component implements all methods of the interface. The contracts defined by all interfaces are automatically transferred to the component.
Having specified the contract of an interface, we would like to verify that the clients of the components implementing this interface comply
with the specified preconditions and call the interface functions in an appropriate order. Furthermore, we would like to check that the
implementation of the methods of the interface fulfills the specified post-conditions.
To check the component, we first need to add the 'checked' attribute via an intention (with this attribute enabled, labels are generated
for the cases where pre-/post-conditions are violated).
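Conceptually - with hypothetical names, and as an assumption about the general technique rather than mbeddr's actual generated code - a precondition is checked at the entry of an implementation and a postcondition at its exit:

#include <assert.h>
#include <stdint.h>

/* contract of a hypothetical interface operation:
   precondition:  amount > 0
   postcondition: the result is larger than the old balance */
int32_t deposit(int32_t balance, int32_t amount) {
  assert(amount > 0);          /* label for a precondition violation */
  int32_t result = balance + amount;
  assert(result > balance);    /* label for a postcondition violation */
  return result;
}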
The verification is started either with one click from the component's pop-up menu or via an analysis configuration item, which allows
specifying an arbitrary entry point. Both ways are illustrated below.
Figure 9.3.5.2-D: One click check of the contracts of all interfaces provided by this component.
Once the verification finishes, a window with results will be automatically opened as illustrated below.
Figure 9.3.5.2-F: The results of the verification of each pre-/post-condition and protocol are displayed in the results window.
Figure 9.3.6-A: Let's consider, for example, that our system under verification is a simple function that computes the speed of a car. This function takes two
parameters, a distance and a time interval, both of type int16. Of course, the time interval cannot be smaller than zero; let's assume that it is also smaller than 10. We
can also assume that the distance cannot be bigger than 1000 and must be positive as well.
The constraints over variables and ranges can be modeled inside harness modules. Below we show an example of such a module:
Figure 9.3.6-B: The function speedComputer is called from within the harness definition, which contains nondeterministic assignments that are further constrained (via an
intention on a nondet assignment) to belong to the desired intervals. In the first nondeterministic assignment we use an explicit construct to specify the interval; in the second
assignment we use classical C inequalities. When the 'in range expression' is used (as opposed to explicit inequalities), the harness is generated in an optimal
manner, which increases the running efficiency of CBMC.
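A rough plain-C analogue of such a harness - the mbeddr harness is a first-class module, so this only sketches what it amounts to at the CBMC level. The name speedComputer and its parameters are taken from the figures above; everything else is illustrative:

#include <assert.h>
#include <stdint.h>

int16_t speedComputer(int16_t distance, int16_t timeInterval);  /* system under verification */

int16_t nondet_int16(void);               /* CBMC convention: an undefined function yields
                                             a nondeterministic value */
void __CPROVER_assume(_Bool condition);   /* CBMC built-in: constrains the search space */

int main(void) {
  int16_t distance = nondet_int16();
  int16_t timeInterval = nondet_int16();
  /* constrain the nondeterministic inputs to the assumed ranges */
  __CPROVER_assume(distance > 0 && distance <= 1000);
  __CPROVER_assume(timeInterval > 0 && timeInterval < 10);
  int16_t speed = speedComputer(distance, timeInterval);
  assert(speed >= 0);  /* example property to verify */
  return 0;
}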
Furthermore, the sequence in which different operations are called can also be modeled using the harness. To do this, we can use
the nondeterministic_choice construct, which nondeterministically chooses one of the defined cases.
Figure 9.3.6-C: We define traces of length 5 (using the for loop); at each step either fun1 or fun2 is called, each computing some value. We check that a specific
property holds for any order of 5 calls to these functions. In this manner we can, for example, model a sequence of external events, each event triggering the function
that handles it.
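A rough plain-C analogue of such a trace harness (fun1 and fun2 are named in the figure; their bodies and the checked property are made up for illustration):

#include <assert.h>

int nondet_int(void);  /* CBMC convention: an undefined function yields a nondeterministic value */

static int value = 0;
void fun1(void) { value += 1; }
void fun2(void) { value += 2; }

int main(void) {
  for (int i = 0; i < 5; i++) {
    /* nondeterministic choice between the two cases; CBMC explores both branches */
    if (nondet_int()) {
      fun1();
    } else {
      fun2();
    }
  }
  /* property that must hold for every possible interleaving of the 5 calls */
  assert(value >= 5 && value <= 10);
  return 0;
}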