
University of Marburg

Department of Mathematics & Computer Science

Bachelor Thesis

Variability-Aware Interpretation

Author:
Jonas Pusch
October 11, 2012

Advisors:
Prof. Dr. Klaus Ostermann
University of Marburg
Department of Mathematics & Computer Science

Prof. Dr. Christian Kästner


Carnegie Mellon University
Institute of Software Research

Sebastian Erdweg, M.Sc.


University of Marburg
Department of Mathematics & Computer Science
Pusch, Jonas:
Variability-Aware Interpretation
Bachelor thesis, University of Marburg, 2012.

Contents

List of Figures

List of Listings

List of Abbreviations

1 Introduction
  1.1 Overview
  1.2 Contributions
  1.3 Outline

2 Variability in Programs
  2.1 Software Product Lines
  2.2 Variability in Product Lines
  2.3 Testing Product Lines
  2.4 Variability-Aware Analysis
    2.4.1 Abstract syntax trees
    2.4.2 Variability representation
  2.5 TypeChef Variability-Aware Parser Framework

3 Variability-Aware Interpreter
  3.1 Problem Statement
  3.2 Concept
  3.3 Implementation

4 Evaluation
  4.1 Method Selection
  4.2 Comparing Runtimes
  4.3 Subject Selection
    4.3.1 Benchmarking specific cases
    4.3.2 Benchmarking generated product lines
  4.4 Setup
  4.5 Results
    4.5.1 Single cases
    4.5.2 Generated product lines
  4.6 Discussion
    4.6.1 Threats to validity

5 Future Work

6 Related work

7 Conclusion

List of Figures

2.1 Product-line development process.
2.2 Time/cost-effectiveness of different development approaches.
2.3 Variability-aware analysis in comparison to brute-force analysis.
2.4 Exemplary code fragment with its corresponding AST.
2.5 Variable code fragment and the corresponding AST with variability.
2.6 UML model of conditional classes.
3.1 Architecture of the variability-aware interpreter.
3.2 Storing conditional values.
3.3 Variability propagation in if statements.
3.4 Variability propagation in while loops.
3.5 Variability propagation in blocks.
3.6 Variability affecting assertions.
3.7 Partial trace of program execution for two specific variants and in variability-aware fashion.
3.8 While language syntax.
3.9 Exemplary mapping over conditional structures.
4.1 Generating all possible program variants.
4.2 Hardware setup of the benchmark system.
4.3 Results for early joining benchmark.
4.4 Results for late splitting benchmark.
4.5 Boxplot of runtimes for generated product lines benchmark. Some outliers omitted.
4.6 Variability-aware interpretation speedup in relation to the number of variants for generated product lines benchmark.

List of Listings

2.1 Exemplary ifdef statement.
3.1 Code example for early joining.
3.2 Code example for late splitting.
3.3 Language constructs of a variability-enriched While language in Scala.
3.4 Parsing while- and block statements with the TypeChef Parser Framework.
3.5 Method for processing statements in the variability-aware interpreter.
3.6 Value representation in the variability-aware interpreter.
3.7 Method for evaluating expressions in the variability-aware interpreter.
3.8 Implementation of the variability-aware store.
3.9 Execution of statements in the plain interpreter.
4.1 Measuring wall-clock time of interpreter execution.
4.2 Code fragment used for measuring the effect of early joining.
4.3 Code fragment used for measuring the effect of late splitting.
5.1 Object orientation for the variability-aware interpreter.

List of Abbreviations

SPL software product line

AST abstract syntax tree

AOP aspect-oriented programming

FOP feature-oriented programming



Chapter 1

Introduction

1.1 Overview
Software product lines (SPL) offer the ability to create multiple similar software
products from a shared set of features. These features satisfy the needs of a certain
domain or market segment and are connected to the software components in which they
are implemented. The goal of software product lines is to decrease maintenance and
development time when distributing various similar software products, while
simultaneously increasing the flexibility and quality of the created product variants.
An advantage of SPLs is the high level of reusability, which enables developers to pick a
specific subset of features and thereby obtain a custom-made product variant.

To create a finished product, features from a shared pool are selected and a
generator connects software components according to the specific feature selection.
Products with the same features thus reuse the same existing software components,
which do not have to be rewritten for every new product. Furthermore, changes to the
implementation of a feature in the shared pool affect all products that use this feature
and remove the need to carry out changes to the code of every single software product.
Therefore, software product lines as a development paradigm are gaining relevance in
industry: the ability to provide custom-made software enables companies to extend their
product range, while saving working hours helps to maximize profits.

The increasing relevance of software product lines leads to the demand of testing
a whole product line. Traditional approaches focus on testing all configured programs
of an SPL. With n features in a product line, up to 2^n configurations would have to be
analyzed separately. This kind of approach can be seen as a brute-force
method of testing SPLs, because indeed every product that can be derived from the
product line's common set of software components gets analyzed. An obvious problem
with this approach is redundancy. The fact that different products of an SPL reuse
the same software components implies a certain similarity between these derivatives
of the SPL. As a result, similar parts are analyzed multiple times, which is not
desirable. In addition, testing 2^n products is not viable beyond a certain number of features.

One alternative to the brute-force approach are sampling strategies [Cabral et al.,
2010]. In this approach, only a subset of all configurations is analyzed, selected
randomly or by a specific criterion. Sampling strategies solve the problem of testing
all generated products, but in return they cannot provide an exhaustive result for a
whole product line. To still achieve this goal, researchers have recently drawn attention
to variability-aware analysis of SPLs. Performing variability-aware analysis means that
the variability of a product line is included in the analysis process. In contrast to other
methods, variability-aware analyses are not performed on the products of an SPL, but
rather on the not yet configured implementation of the software product line. Thus,
only one analysis is performed, which takes variability into account when needed, and
redundant parts of the program are not analyzed again and again.

Variability-aware interpretation finally connects the process of interpreting source
code with the principles of variability-aware analysis. In the above-mentioned tradi-
tional approaches, where tests are performed on configured products, no variability is
left in the source code, because the feature selection has already been made. Interpreting
source code in a variability-aware fashion, however, follows a different strategy: source
code is interpreted together with the annotations that specify the feature context in
which code segments are executed. The output of the variability-aware interpretation
process is a result enriched with variability, which can subsequently be configured to
obtain the concrete result for a specific feature selection. Benefits of variability-aware
interpretation are the possibility to execute test cases in all configurations and the
avoidance of redundant interpretation, as source code gets interpreted only once.

1.2 Contributions

In this thesis, the concept of variability-aware interpretation is introduced as an alterna-
tive software product-line testing approach to existing brute-force or sampling strategies.
Initially, the structures used to represent variability are explained. This includes
how variability is specified at source-code level and how it is transferred to a repre-
sentation that can be used as input for variability-aware analysis. This information
provides the basic knowledge for understanding the further steps towards
variability-aware interpretation.
On top of that, the prototype of a variability-aware interpreter for a variability-
enriched While language is introduced as the main contribution of this thesis. The
interpreter is able to execute statements annotated with ifdef variability, syntactically
similar to the C preprocessor's conditional-compilation directives. It can also execute
test cases in all configurations of a product line by using assertions. The concept of how
the interpreter reasons about variability is explained and its implementation is shown
based on code examples. The code of the interpreter is available at
http://github.com/puschj/Variability-Aware-Interpreter.
This work shows that variability-aware interpretation can speed up software product-
line testing in comparison to brute-force analysis by applying the principles of variability-
aware analysis to interpreting source code. To this end, both approaches are compared
in an empirical evaluation that features three benchmarks, including favorable cases to
show the interpreter's potential and a set of 100 randomly generated test product lines
to provide a comprehensive result for a rich set of subjects.

1.3 Outline
Chapter 2 provides an overview of software product lines in general (Section 2.1) and of
how variability can be specified in programs (Section 2.2). Afterwards, the testing of
software product lines and the problems that occur in doing so are described (Section 2.3).
The chapter closes by describing variability-aware analysis as an approach to address the
problems of product-line analysis (Section 2.4), as well as the structures needed to
perform variability-aware analysis on programs.
The variability-aware interpreter is described in Chapter 3. First, this thesis states
a problem statement that specifies the goals associated with developing such an interpreter
(Section 3.1). After that, the underlying concept (Section 3.2) and the implementation
(Section 3.3) of the interpreter itself are explained in detail.
In Chapter 4, an empirical evaluation of the variability-aware interpreter is performed.
At first, it is explained which method has been used for the evaluation and why (Section
4.1). Then, the way of comparing the approaches under test is shown (Section 4.2),
before the subjects of the evaluation are described (Section 4.3). The results of the
evaluation are shown in Section 4.5. Finally, the results are discussed (Sec. 4.6) and
possible threats to validity are mentioned (Sec. 4.6.1).
Future research targets concerning variability-aware interpretation are shown in
Chapter 5 and related work is presented in Chapter 6. Lastly, this thesis is concluded in
Chapter 7.

Chapter 2

Variability in Programs

2.1 Software Product Lines


A software product line [Kästner, 2011] is a set of similar software systems that share
a common set of features to satisfy the needs of a certain domain and that are built
from a common set of software components. A domain can be a certain market segment
or the needs of a specific task. The demands of a domain are modeled as features,
which in turn are used to specify concrete program variants. SPL development, however,
differs noticeably from the development of single products. It is divided into Application
Engineering and Domain Engineering. The latter represents the main difference to
traditional development. Instead of engineering one single product for a specific order,
Domain Engineering involves collecting the needs of a whole domain, such as a market
segment, and converting the demands of the field into reusable software artifacts that can
later be tailored together to form a final derivative of the product line. Additionally,
Domain Engineering includes finding appropriate means of reusing these artifacts when
building a concrete product. Application Engineering, on the other hand, describes the
process of developing a single product, that is, one single derivative of a product line.
Unlike in traditional development, no new product is implemented in this step; rather,
the artifacts created in Domain Engineering are used to create the final product. The
Application Engineering process involves selecting appropriate features for the requested
product. These features represent a specific subset of the above-mentioned software
artifacts, which are afterwards tailored together to form the final software product.

Figure 2.1 summarizes the process of product-line development. Domain and
Application Engineering are separated. In the Domain Engineering process, analyzing the
domain and collecting domain knowledge first results in a feature model, from which
specific features can later be selected to form a product. Second, a set of reusable
software artifacts is implemented, with each component representing a specific feature
of the domain. At the Application Engineering level, the first step is to perform a feature
selection that fits the requirements of the requested product. Providing a specific
generator for tailoring together the selected software artifacts also belongs to Application
Engineering. The generator finally connects Domain and Application Engineering by
taking inputs from both branches and constructing the final product. As input, it takes
the set of reusable software artifacts implemented in the Domain Engineering process,
together with the above-mentioned feature selection, which tells the generator which
artifacts are used in the construction process. The outcome, finally, is a derivative of the
product line, tailored to the customer's specific needs.

Figure 2.1: Product-line development process (Domain Engineering: feature model and reusable artifacts; Application Engineering: feature selection, generator, product).

As already explained, the way of constructing programs in product-line development
differs clearly from traditional approaches. The division into Domain and Application
Engineering, and the fact that this involves implementing software artifacts that satisfy
the needs of a whole domain, do not make product-line development look very cost-
efficient at first glance. However, SPLs unfold their full potential when many similar
products have to be developed, which can profit from the high degree of reusability
that product-line development provides.

Figure 2.2: Time/cost-effectiveness of different development approaches (development time/cost over the number of products, for traditional and product-line development).

The benefit of software product-line development is illustrated in Figure 2.2.
While developing a product line initially produces higher costs, with an increasing number
of products that can be constructed using SPL reusability, the cost curve rises more
slowly than in traditional software development. The higher initial costs
in product-line development are closely related to the Domain Engineering process.
This phase, however, is executed only once, and further products profit from already
existing software components that just have to be woven together. This explains
why software product-line development receives increasing attention. Beyond
a certain number of products, every further product increases the benefit of only having
to reuse existing components. The exact number at which the advantage turns towards
product-line development depends on the size and properties of a product line.

2.2 Variability in Product Lines


In Section 2.1, Domain Engineering was introduced as a process where domain knowl-
edge is used to create a set of reusable software artifacts, which can afterwards be tailored
together by a generator according to a certain feature selection. Creating a set of reusable
software artifacts requires a method to map specific code fragments to features. This
is necessary for the generator to pick up the correct pieces of code when creating the
final product. In the past, multiple ways of implementing variability in SPLs have proven
viable. This work uses preprocessor directives because of their simplicity. The follow-
ing list provides an overview of established ways of implementing variability and helps
to relate the approach used in this work to other approaches of implementing
variability.

• Runtime Variability
This topic includes command-line parameters, configuration files and global con-
figuration variables. Configurations are checked at runtime and there is no explicit
generator, as unused functionality is still in the source code, even if not used due
to runtime checks.
• Build Systems and Version Control Systems [Staples and Hill, 2004]
When implementing variability with build systems or version control systems, pro-
gram variants get mapped to build scripts in the case of build systems, or branches
in the case of version control systems. Existing tools provide good support
for this method of building SPLs, but the downside is that developers operate on
variants, not on features in isolation. Merging different variants requires high effort,
and arbitrary combinations of features are not possible.
• Aspect-Oriented Programming (AOP) [AsJ, Kiczales et al., 2001]
Aspects in AOP are able to manipulate existing class hierarchies and intercept
method calls or field access without having to change existing code. This so-
called principle of obliviousness enables developers to add feature functionality as
independent aspects to an existing code base. As aspects are mapped one-to-one
to features, a product can be created by including only the selected features, and
thus aspects, in the compilation process.
• Feature-Oriented Programming (FOP) [Batory et al., 2003, Kuhlemann et al., 2007]
FOP also follows the principle of obliviousness, as an existing code base can be
refined in feature modules. Refining a class, developers can add new methods or add
code to existing methods. The idea is also to have a one-to-one mapping between
features and feature modules. When creating a product, a special composition
algorithm processes all refinements that were included by a certain feature selection.
That way, the existing code base is modified only by the refinements chosen through
the feature selection.
• Preprocessors [Liebig et al., 2010]
Using preprocessors, the source code is transformed before the compiler processes
it. Developers specify code fragments that will either be left or removed according
to a specific configuration when the preprocessor is called on a source file. This
method guarantees, in contrast to Runtime Variability, that only the code of a
specific variant gets delivered. Preprocessors do not separate the code that belongs
to features from the common code base, which can lead to messy code, but they
provide an easy-to-learn way to implement variability.

This work focuses on preprocessor directives, more specifically on ifdef variability, to
specify feature contexts. This type of preprocessor directive receives its name because
feature contexts are specified using #ifdef statements around variable code fragments.
Listing 2.1 shows an example of a variable assignment.

#ifdef A
x = 1;
#endif

Listing 2.1: Exemplary ifdef statement.

Although there are other approaches to realize variability in programs (see above),
ifdef variability offers a very simple and easily understandable way to specify feature
contexts in source code. Additionally, using ifdef variability is quite common in
product-line development, with the Linux kernel as its most prominent example. These
are the reasons for focusing on the preprocessor approach in this work.

2.3 Testing Product Lines


Software testing is the process of examining the quality of a software product or
service. Tests are often conducted to prove that a certain program meets some
predefined requirements or satisfies the customer's needs. A special method of software
testing is unit testing [Olan, 2003], where individual parts of the program (units) are
tested separately. Unit tests are often created by programmers during the development
process, relying on unit-testing frameworks, which are available for most common
programming languages.

Testing is also important for software product lines, because many different
program variants can be derived from one set of software components, and in the best
case the test result should provide an exhaustive output for the whole product line.
However, testing a whole product line is different from testing a single program, because
product lines include feature contexts next to the normal source code, which requires a
more sophisticated approach.

The easiest way to conduct unit tests on product lines is to perform them on the
products. In the Application Engineering process, the final outcome is a configured
program, which contains a subset of the product line's shared set of software compo-
nents, according to a specific feature selection. This product no longer contains
variability, because the generation process removes feature-context declarations after
checking whether their nested code block is included in the final product or not.
For that reason, a default unit test can be executed on the generated product.
The results of all test executions of all products that can be derived from a product
line provide an exhaustive test result for the product line as a whole.

When developing a product line, the number of features determines the possible
number of different products which can be derived. Another factor that influences
the maximum number of products are dependencies between features. For example,
when the selection of one feature implies that another feature has already been selected,
the maximum number of derivatives decreases. Assuming no dependencies between
features, a product line with n optional features has 2^n possible products. This leads to
huge numbers of products that have to be tested when the number of features increases.
Performing product-line analysis by testing all possible products is therefore considered
a brute-force approach. High variability leads to high complexity when testing product
lines. A product line with 320 optional, independent features would already have
more possible products than the estimated number of atoms in the universe. Even
with great computing power, testing a product line with about 10000 features (2^10000
variants), like the Linux kernel, would not provide a result in a reasonable amount of
time when using the brute-force approach.

A possible solution for still getting a result when testing a product line with such
a high number of features is to reduce the number of products being tested. Sampling
strategies [Cabral et al., 2010] focus on selecting some of the product-line variants for
testing. Variants are selected randomly or by a specific criterion, with all criteria aiming
at providing a more meaningful result by choosing the tested variants wisely. Though
sampling strategies can provide a result where the brute-force approach would not be
able to do so in an acceptable amount of time, their result is not fully exhaus-
tive. Even when choosing the variants under test wisely, there is no guarantee of getting
an expressive result, because still not all variants have been investigated. Therefore,
sampling strategies do not provide an appropriate solution for getting exhaustive test
results. They only solve performance issues by trading off result quality against
performance, which is not desirable. This work addresses the problem of getting an
exhaustive result while still not checking every possible variant.

2.4 Variability-Aware Analysis


Because of the obvious performance issues presented in Section 2.3, researchers have
drawn attention to an approach where analyses are not performed on the products of a
product line. Instead, they focus on a method where analyses are conducted on the not
yet configured software components of a product line, including variability in the anal-
ysis process. Analyses following this scheme are called variability-aware analyses, which
reflects their intent to handle variability while analyzing the product line. Existing ex-
amples for the use of variability-aware analysis are type checking [Kästner et al., 2011a],
static analysis [Kreutzer, 2012] and model checking [Lauenroth et al., 2009]. This work
omits further details concerning these use cases, as they can be taken from the referenced
publications. Variability-aware analyses address the above-mentioned performance issues
in that product-line code is analyzed only once, instead of checking every pos-
sible product. In doing so, variability-aware analyses exploit the fact that the variants of a
product line normally share many similar code passages, which would be analyzed over and
over again in the brute-force approach. Variability-aware analyses improve performance
because they analyze equal code only once and still take variability into account when
needed. That way, additional effort is only needed for differing code passages.
Figure 2.3 illustrates the difference between variability-aware analysis and the
brute-force approach by showing two possible ways (red/green) of getting an analysis
result for a product line. The red path shows the brute-force analysis, which initially
configures all possible products (Step 1) and then conducts analyses on all of them
(Step 2). In contrast, the green path describes the variability-aware analysis, which
is performed on the entire product line instead of on configured products (Step 3). The
outcome of the analysis process is a result enriched with variability, because it has to
store the value of all investigated properties for each configuration. Afterwards, the
result for one specific configuration can be deduced from the variability-enriched result
(Step 4, top). To get a result which describes the whole product line, it is also possible
to aggregate all results from the brute-force analysis (Step 4, bottom). Though the
results of both approaches should be equivalent, the variability-aware approach (green)
is expected to be faster than the brute-force approach (red), because common code
segments are analyzed only once.

Figure 2.3: Variability-aware analysis in comparison to brute-force analysis.

2.4.1 Abstract syntax trees

When performing variability-aware analysis, a structural representation of code is
needed. An abstract syntax tree (AST) offers this kind of structure, as it is a repre-
sentation of the source code in tree form, based on the syntax of a certain programming
language. When a specific construct of the programming language's syntax is used in
the source code, it is denoted as a tree node. Figure 2.4 shows an exemplary code
fragment together with its corresponding AST. The root of the AST contains all top-
level statements that are in the source code: an assignment to variable x (Line 2) and
an if statement (Lines 4-6). The left branch encodes the assignment with an Assign node
that has two children, Id and Num. Id nodes represent access to a variable and have one
further child node for the name of the accessed variable (x in this case). The if construct
is encoded in the right branch of the AST, with its subtrees for the condition and the
statement that gets executed when the condition evaluates to true.
Abstract syntax trees form the basic structural representation of source code in this
work. However, when developing product lines, the source code gets enriched with
variability. This brings up the need for a structural representation of variability as well,
which is addressed by extending the standard AST with constructs for variability. How
variability is represented in the AST is further explained in Section 2.4.2.

1 begin
2   x = 1;
3
4   if (x > 0) {
5     y = 1 + 2;
6   }
7 end

Figure 2.4: Exemplary code fragment with its corresponding AST (an Assign node with children Id and Num for Line 2, and an If node with a Greater condition and a Block containing the assignment to y for Lines 4-6).

2.4.2 Variability representation


When parsing product-line source code [Kästner et al., 2011b], the parsing process pro-
vides another kind of information next to the normal source code: the feature context
in which an instruction should be applied. Therefore, variability also needs to be structurally
represented when parsing variability-enriched code. In this work, compile-time variability
is woven into abstract syntax trees. More precisely, the feature context in which a node
should be present is also stored in the AST. This is done by wrapping nodes with a pres-
ence condition, a propositional formula over existing features, whose result determines
the presence or absence of the node in the final product.
1  begin
2  #ifdef A
3  x = 1;
4  #endif
5
6  if (x > 0) {
7    y = 1 +
8    #ifdef B
9    2
10   #else
11   3
12   #endif
13   ;
14 }
15 end

Figure 2.5: Variable code fragment and the corresponding AST with variability (optional nodes wrapped in Opt, conditional values expressed with Choice and One).

Figure 2.5 continues the example from the last section, but now introduces ifdef
variability in the source code and variability nodes in the AST. Variability
nodes can be optional (Opt) or conditional (Choice). For example, the first assignment
shall only be executed in variants with feature A selected (Lines 2-4). In the corre-
sponding AST, the assignment is wrapped with an Opt node that also includes the
feature context in which the Assign node later influences the execution.

Conditional nodes are used when different configurations yield different values, as
with the assignment to y (Lines 7-13). In configurations where feature B is selected,
the second operand of the addition has the value 2; otherwise it yields 3, which is denoted
using a Choice construct. In contrast, when a conditional value is equal in all
configurations, it is denoted using One, e.g. the right-hand side of the assignment to x (Line 3).

After looking abstractly at the constructs to represent variability, the following para-
graphs describe how they are realized as constructs of an object-oriented programming
language (Scala). Figure 2.6 shows that there are two constructs available to handle
variability: Opt[T] and Conditional[T]. Opt[T] is used for optional elements of type T,
e.g. a list of statements where some of them are only present in specific configurations
(List[Opt[Statement]]). Its two fields, feature and entry, specify the presence condition, which
determines in which configurations the element is present, and the element of type T
itself, which is wrapped by the Opt[T] class.
Presence conditions are of type FeatureExpr, a class especially designed for
propositional formulas. FeatureExpr provides useful functions on propositional formulas,
e.g. querying whether the formula is a tautology or a contradiction, concatenating two formulas
(and, or), negating the formula (not) or checking whether a specific feature selection satisfies
the formula. True indicates that a node is present in every configuration.
While Opt[T] manages optional elements and mainly occurs in lists, Conditional[T] is
the construct for elements that have different values in specific configura-
tions. Lists of optional elements can have an arbitrary number of elements in different
configurations, but conditional elements have exactly one value for each configuration.
Conditional[T] is realized as an abstract class that has two subclasses. One[T] simply
is a container for elements of type T. Choice[T] is used to choose between two values
depending on a presence condition. More specifically, if the presence condition evaluates
to true, the value of this choice object for the current configuration is in the then-branch,
otherwise it is in the else-branch. thenBranch and elseBranch are of type Conditional[T]
again, which makes it possible to build nested choices. As a result, not only can a choice
be made between two values; specifying multiple alternative elements of type T is also
possible.

Figure 2.6: UML model of conditional classes.
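
To make this model concrete, the following Scala sketch mirrors the classes described above. It is an illustration, not the thesis's actual implementation: in particular, the FeatureExpr stub is a strong simplification of the real TypeChef class, which is backed by a SAT solver and additionally offers queries such as isTautology and isContradiction.

// Propositional formula over feature names, here represented as a predicate on
// feature selections (a simplified stand-in for TypeChef's FeatureExpr).
case class FeatureExpr(holds: Set[String] => Boolean) {
  def and(that: FeatureExpr): FeatureExpr = FeatureExpr(sel => holds(sel) && that.holds(sel))
  def or(that: FeatureExpr): FeatureExpr  = FeatureExpr(sel => holds(sel) || that.holds(sel))
  def not(): FeatureExpr                  = FeatureExpr(sel => !holds(sel))
}

object FeatureExpr {
  val True: FeatureExpr  = FeatureExpr(_ => true)    // holds in every configuration
  val False: FeatureExpr = FeatureExpr(_ => false)
  def feature(name: String): FeatureExpr = FeatureExpr(sel => sel.contains(name))
}

// Optional element of type T, present only in configurations satisfying `feature`.
case class Opt[+T](feature: FeatureExpr, entry: T)

// A value that may differ between configurations.
sealed abstract class Conditional[+T]
case class One[+T](value: T) extends Conditional[T]    // the same value in all configurations
case class Choice[+T](feature: FeatureExpr,
                      thenBranch: Conditional[T],
                      elseBranch: Conditional[T]) extends Conditional[T]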



2.5 TypeChef Variability-Aware Parser Framework


The TypeChef Variability-Aware Parser Framework [Kästner et al., 2011b] provides
classes for parsing variability-enriched code as illustrated in Figure 2.5. This work
uses the TypeChef parser as a black-box framework for creating ASTs with variabil-
ity, which can then be used in the variability-aware interpretation process. Concerning
#ifdef variability, the framework contains a Java parser that is capable of reading feature
declarations inside of comments (//#ifdef and //#if), which is used out of the box. Creat-
ing ASTs with variability is done by using the parse methods provided by the framework
that are capable of reading feature contexts. The following list briefly presents some of
the possibilities provided by the TypeChef parser:

• parse conditional elements


An element can be parsed conditionally, which means that it can be wrapped with an
#ifdef statement specifying its feature context.

• parse optional lists


A list of optional elements can be parsed and the parser accepts #ifdef statements
around each element of the list.

• parse sequences
Parsing sequences of elements is used to parse compound constructs, such as a while
loop, which contains a keyword, a condition and a body.

• map parse results


Parsed elements can be mapped to a certain type. This is, for example, used to
convert parsed tokens to custom AST nodes.

Further details on how exactly variability contexts are parsed using the TypeChef
Variability-Aware Parser Framework have already been published in previous work and
are not a topic of this thesis. How the framework is used programmatically is illus-
trated in Section 3.3.

Chapter 3

Variability-Aware Interpreter

3.1 Problem Statement

In order to analyze whole product lines, a sophisticated approach is needed that provides
an exhaustive result in a reasonable amount of time. Right now, analyzing whole product
lines is performed either with brute-force analyses, which require analyzing up to 2^#Features
configurations, or with sampling strategies, which reduce the number of tests but cannot
yield an exhaustive result for the whole product line. When focusing on analyzing
configured products, the dependency between the number of features and the effort
needed to receive a test result remains, which scales badly for product lines with
many features. Researchers therefore draw their attention to variability-aware analysis,
a technique to perform analyses on the product-line code itself, taking variability into
account and reducing test effort by analyzing shared code passages only once.
The above-mentioned performance problems for product lines with high numbers
of features can be transferred to the interpretation of source code. When interpreting
variability-enriched product-line code, developers want to be able to check a certain
property, for example with an assertion. Following brute-force analysis, this problem
would be solved by interpreting every possible variant of the product line individually,
which means also executing the assertion in every variant and thus getting an exhaustive
result for the whole product line.
This method, however, leads to performance issues for product lines with many pos-
sible configurations, as has already been explained above. Therefore, the goal is to
transfer the benefits of variability-aware analysis, which already provides strategies for
reducing analysis effort in comparison to the brute-force approach, to the interpretation
of source code. The resulting approach, variability-aware interpretation, takes variability
into account when interpreting source code; as with variability-aware analysis, source
code is interpreted only once and a speedup is gained by not interpreting common code
passages again and again. To perform variability-aware interpretation on source code,
an interpreter must be implemented that supports reasoning about variability in the
manner of variability-aware analysis.
This work proposes to research the practicability and performance of such a
variability-aware interpreter that accepts a While language and that applies the prin-
ciples of variability-aware analysis to interpreting source code.

3.2 Concept
This section describes the concept of a variability-aware interpreter that uses the principles
of variability-aware analysis to perform testing on product lines as a whole. Interpreters
are programs that execute code of a certain programming language at runtime. Tradi-
tional interpreters operate on ASTs (Sec. 2.4.1), which are the representation of a certain
code fragment. Additionally, they use a store, where the values of all used variables are
kept.
On executing a code fragment, the interpreter reads from the store when variables are
accessed and writes to the store when variables are assigned, thus returning an updated
store at the end of the execution. This process can be described with the following
function: execute(s: Statement, store: Store): Store. However, when interpreting source code
in a variability-aware fashion, the components used in the interpretation process differ.
As shown in Section 2.4.2, ASTs are now enriched with variability. The interpreter also
must be aware of the variability context it is currently operating in, so this variability
context is another input for the variability-aware approach. Finally, the variability-
aware interpreter keeps its variables in a variability-aware store that maps variable
names to conditional values instead of plain values. Using function notation, the
variability-aware interpretation would be described with: execute(s: Conditional[Statement],
store: VAStore): VAStore.

Figure 3.1: Architecture of the variability-aware interpreter. Its inputs are an AST with variability and a variability context; the interpreter reads from and writes to a variability-aware store.

Figure 3.1 describes the architecture of the variability-aware interpreter. Next to a
different AST and store, which are adjusted to support variability, the main difference
to traditional interpreters is the use of a variability context in the interpretation process.
On executing instructions, it enables the variability-aware interpreter to be aware of its
current variability context.
Variability contexts are propositional formulas that describe the configurations in
which the current code fragment gets executed. For example, an assignment to variable
x within an #ifdef A ∧ B block will result in the interpreter executing the assignment
with variability context A ∧ B. This means that the value of the assignment is only
stored for configurations where feature A and feature B are selected. To be able
to save variables depending on a certain configuration, the store used in variability-
aware interpretation is not a mapping from variable names to values (Map[String, Value])
as in traditional interpretation; rather, it maps variable names to conditional values
(Map[String, Conditional[Value]]).
One important thing that speeds up variability-aware interpretation in comparison to
brute-force analysis is the locality of variability. Looking at the variability-aware store,
variability is stored as locally as possible. A less local implementation of a variability-aware
store would be a conditional mapping of names to values (Conditional[Map[String, Value]]),
which causes the problem that a whole map must be stored for each configuration.
Keeping variability local, however, provides the ability to exploit sharing, which means
that structures are used for all configurations. When a value changes in a single con-
figuration, a store of type Conditional[Map[String, Value]] would create a new modified map
for that configuration, thus redundantly storing all values that remain the same. If,
afterwards, a value changes in all configurations, the update has to be executed on every
existing map. By storing variability as locally as possible (Map[String, Conditional[Value]]),
updates on values affect only the value itself, because variability is stored at value level.
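
The following sketch illustrates such a store, reusing the conditional classes sketched in Section 2.4.2 and using plain integers as values for brevity (the actual interpreter uses a richer value representation, see Listing 3.6, and additionally simplifies the resulting Choice structures):

// Variability-aware store: variability is kept local, at the level of single values.
class VAStore {
  private var env: Map[String, Conditional[Int]] = Map()

  // Read a variable (error handling for undefined variables is omitted here).
  def get(name: String): Conditional[Int] = env(name)

  // Assign `value` only in configurations that satisfy `ctx`; all other
  // configurations keep the previous value of the variable.
  def put(name: String, ctx: FeatureExpr, value: Conditional[Int]): Unit = {
    val previous = env.getOrElse(name, value)
    env = env.updated(name, Choice(ctx, value, previous))
  }
}

Note that an assignment under the context True would still produce a Choice(True, ...) in this simplified sketch; the real store simplifies such cases so that shared values stay shared, which is exactly the locality discussed above.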
How values are stored in a variability-aware fashion when executing statements is shown
in Figure 3.2. While the assignments to x have no variability context, the first assignment
to y is executed in a specific variability context: in configurations where feature A is
selected, the value 2 is assigned to y (Line 4), and when feature A is not selected, the value
of y is 3 (Line 6). Below the code fragment, the state of the store after the variability-
aware interpretation process is illustrated. While the value for variable x is unique and
thus the same in all configurations, the value for y is conditional.
1  begin
2  x = 1;
3  //#ifdef A
4  y = 2;
5  //#else
6  y = 3;
7  //#endif
8  x = x + 1;
9  y = y + 1;
10 end

Resulting variability-aware store: x -> 2; y -> 3 (A), 4 (¬A)

Figure 3.2: Storing conditional values.

Programmatically, the final values are represented as One(2) and Choice(A, One(3), One(4)),
as described in Section 2.4.2. Figure 3.2 also shows how sharing can reduce analysis effort
in comparison to the brute-force approach. When x is incremented in Line 8, again with
no specific variability context, this requires only one execution, because x does not yet
depend on a certain configuration, which means that it is shared between configurations.
In brute-force analysis, the assignment would be executed twice, because one selectable
feature yields two possible products, each with one incrementation. Incrementing y, on
the other hand (Line 9), requires the variability-aware interpretation process to update
y's value for all stored configurations, which are in this case A and ¬A, resulting in both
approaches requiring two instructions. In other words, only code passages with variability
that affects the analysis result cause additional effort during the variability-aware
interpretation process. Code passages that are shared between configurations therefore
make up the difference that speeds up variability-aware interpretation compared to
checking all configurations individually.

Evaluating expressions. Another functionality of the variability-aware inter-
preter, which has also been used in the above example, is the evaluation of expressions.
Conceptually, the interpreter takes a conditional expression and returns a conditional
value. In this case, every alternative expression must be evaluated separately in its own
feature context. For example, when variable y is incremented (Line 9), the interpreter
has to evaluate the expression Add(Var("y"), Num(1)). At first, the current value of y is
read from the store, which returns Choice(A, One(2), One(3)). Now, the important part is
that the addition is performed twice, incrementing both sides of the Choice construct.
When incrementing x, on the other hand, the interpreter behaves differently. In this
case the interpreter has to evaluate Add(Var("x"), Num(1)), but looking up x in the store
returns just One(1), which does not depend on a certain configuration. In this case,
the variability-aware interpreter executes the addition only once, thus acting like a
traditional interpreter and saving effort compared to the brute-force approach.
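
The central operation here is mapping the evaluation over a conditional value so that each alternative is computed only once. The following is an illustrative sketch under the same assumptions as above; mapCond and mapCond2 are hypothetical helper names, not the interpreter's actual API (see Listing 3.7 for that):

// Apply a function to every alternative of a conditional value; a shared value
// (One) is processed exactly once.
def mapCond[A, B](c: Conditional[A])(f: A => B): Conditional[B] = c match {
  case One(v)           => One(f(v))
  case Choice(fe, t, e) => Choice(fe, mapCond(t)(f), mapCond(e)(f))
}

// Combine two conditional operands, e.g. for evaluating Add(l, r).
def mapCond2[A, B, C](l: Conditional[A], r: Conditional[B])(f: (A, B) => C): Conditional[C] =
  l match {
    case One(lv)          => mapCond(r)(rv => f(lv, rv))
    case Choice(fe, t, e) => Choice(fe, mapCond2(t, r)(f), mapCond2(e, r)(f))
  }

// Incrementing y from the example: mapCond(Choice(A, One(2), One(3)))(_ + 1)
// yields Choice(A, One(3), One(4)), while mapCond(One(1))(_ + 1) yields just One(2).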

Figure 3.2 illustrates how the variability-aware interpreter handles assignments and
stores conditional values in a variability-aware store. However, to provide a more com-
prehensive approach, the variability-aware interpreter must support more language con-
structs, such as control structures. The variability-aware interpreter is actually imple-
mented to support a While language including functions. Though it is desirable to
support high-level programming languages such as Java, C or C++ in the future, this
work presents a prototype interpreter for initial insights on viability and performance.
Adapting the structures presented in this work to high-level programming languages is
left open as a future research topic.
While executing assignments is a relatively straightforward task for the variability-
aware interpreter, handling control structures requires more complicated reasoning
about variability. For example, consider a while loop where the number of iterations
depends on the configuration. As a result, all statements in the loop body must
be executed in a specific feature context that describes in which configurations the
current loop iteration is still performed. A more detailed example for while loops is
shown below. The following paragraphs step through all language constructs, except
assignments, which have already been shown, and describe how the variability-aware
interpreter executes them, taking variability into account during interpretation.

If statements. On interpreting if statements, traditional interpreters evaluate a
condition and then execute either the then branch or the else branch of the statement,
according to the result of the condition evaluation. Looking at code without variability,
the condition always evaluates to exactly one result: true or false. When interpreting
variability-enriched code, the value of the condition may depend on the configuration,
which implies that possibly both branches must be visited.
The variability-aware interpreter solves this problem by first checking in which
configurations the condition yields true. The outcome of this query is subsequently used
as the variability context in the execution of the then branch. If there is an else branch,
it is executed with the variability context in which the condition yields false. Due to that
behavior, the variability-aware interpreter covers all configurations concerning if
statements. An example of variability being propagated by if statements is shown in
Figure 3.3. The value of variable c is 2 in configurations where A is selected and 3 when
A is not selected. As a result, the outcome of the if statement (Lines 8-12) also depends
on the configuration. The variability-aware interpreter executes the then branch (Line 9)
in the variability context where the condition yields true and the else branch (Line 11)
in the context where it yields false, which is the negation of the context for the then
branch. Because c is only greater than 0 when A is not selected, the context for the then
branch is ¬A and the context for the else branch is A. Thus, the assignments store their
values only in the variability context in which they have been executed, resulting in the
illustrated store, where res has value 2 if feature A is selected, and value 3 otherwise.

Figure 3.3: Variability propagation in if statements (resulting store: res -> 2 (A), 3 (¬A)).
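
A sketch of this rule, reusing the structures from above: whenTrue computes the feature context in which a conditional boolean value is true, while eval and execute stand for the interpreter's expression evaluation and statement execution, which are assumed here and not shown (the actual implementation is discussed in Section 3.3).

// Feature context in which a conditional boolean value is true.
def whenTrue(c: Conditional[Boolean]): FeatureExpr = c match {
  case One(b)           => if (b) FeatureExpr.True else FeatureExpr.False
  case Choice(fe, t, e) => (fe and whenTrue(t)) or (fe.not() and whenTrue(e))
}

// If statement under an outer variability context `ctx` (pseudocode):
//   val trueCtx = whenTrue(eval(condition, ctx, store))
//   execute(thenBranch, ctx and trueCtx, store)
//   execute(elseBranch, ctx and trueCtx.not(), store)   // only if an else branch exists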

While loops. The fact that conditions can depend on configurations causes the
problem that while loops may have different numbers of iterations in variability-aware
interpretation. Therefore, while loops are executed until there is no further possible
configuration in which the loop would perform another iteration. Additionally, the
statements in the loop body are executed in the variability context where the condition
yields true. Considering a case where the condition is true in one configuration and
false in a different one, the variability-aware interpreter performs the statements
in the loop body only in the variability context with the condition being true.
Figure 3.4 shows an example for while loops where the condition depends on feature A.
In every iteration, the interpreter resolves the feature context in which the condition
yields true. For the first three iterations, i > 0 is true in every configuration. The query
for the feature context in which the condition holds therefore results in the propositional
formula true. However, in the fourth iteration, the condition only yields true in
variability context ¬A. Due to that, the incrementation of res (Line 9) is executed in
variability context ¬A, which propagates the variability of variable i to res. After the
assignment, the value of res has been increased by one, but only in configurations where
feature A is not selected, which is shown in the store next to the code fragment.

Figure 3.4: Variability propagation in while loops (resulting store: res -> 3 (A), 4 (¬A)).
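
The loop rule can be sketched generically: the loop keeps iterating as long as there is at least one configuration in which the condition still holds. The helpers passed in (condCtx, body, isSatisfiable) are placeholders for the interpreter's condition evaluation, body execution and the satisfiability check offered by the real FeatureExpr class; they are assumptions of this sketch, not the actual implementation.

// Execute a while loop under outer context `ctx`. condCtx recomputes, for the
// current store, the feature context in which the loop condition is true.
def executeWhile(ctx: FeatureExpr,
                 condCtx: () => FeatureExpr,
                 body: FeatureExpr => Unit,
                 isSatisfiable: FeatureExpr => Boolean): Unit = {
  var loopCtx = ctx and condCtx()
  while (isSatisfiable(loopCtx)) {   // some configuration performs another iteration
    body(loopCtx)                    // the body affects only those configurations
    loopCtx = ctx and condCtx()
  }
}

For the loop in Figure 3.4, loopCtx is true during the first three iterations and ¬A in the fourth; after that it becomes unsatisfiable and the loop has terminated in all configurations.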

Block statements. On executing block statements, it is important to propagate the
variability of a whole block to its inner statements. When blocks are executed in a
specific variability context, all inner statements share this context on execution. As
statements in blocks are also optional, their own variability context must be merged
with the outer context. In other words, the execution of statements in a block depends
on the block's variability context and the statements' contexts. Because variability
contexts are propositional formulas, they can be concatenated with the and operator.
Just like presence conditions, which have been described in Section 2.4.2, variability
contexts are represented by the type FeatureExpr, which already provides functions for
concatenating propositional formulas out of the box.
In Figure 3.5, multiple assignments are executed in different variability contexts.
The code fragment shows a block in feature context A (Lines 3-10) with two inner
assignments. On executing the block, the variability context of the block, which
is A, has to be merged with the context of its inner statements. The first inner
assignment has no local variability context, which means that its context is True. It
is therefore executed in variability context A ∧ True. The second assignment in the
block is analogously executed in variability context A ∧ B. This finally results in three
different values for x. As the illustrated store shows, x has value 2 when features
A and B are selected, value 1 if only A is selected, and 0 when none of the two is selected.

Figure 3.5: Variability propagation in blocks (resulting store: x -> 2 (A ∧ B), 1 (A ∧ ¬B), 0 (¬A)).
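
This merging of contexts can be sketched in a few lines; execute again stands for the statement-execution function and is passed in as a parameter, so the sketch stays independent of the interpreter's actual AST types:

// Execute an optional statement list under the block's variability context `ctx`:
// each statement runs in the conjunction of the block context and its own
// presence condition.
def executeBlock[S](ctx: FeatureExpr,
                    statements: List[Opt[S]],
                    execute: (S, FeatureExpr) => Unit): Unit =
  for (Opt(feature, stmt) <- statements)
    execute(stmt, ctx and feature)

For the block in Figure 3.5, the two inner assignments would thus be executed under A ∧ True and A ∧ B, respectively.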

Assertions. To perform analyses on whole product lines, a construct for checking
specific properties is needed. Therefore, programmers may use the assert statement to
check whether a certain condition holds.
However, assertions are statements and therefore may be executed in a certain vari-
ability context. This requires the interpreter to take the feature context in which the
assertion was executed into account when checking a property. Figure 3.6 shows a
program fragment with three assertions. Two of them are executed in variability context
A, just after an assignment to variable x (Line 5). After this assignment, x has value 5
in configurations where A is selected. Checking for x == 0 in Line 6 yields false, because
the assertion itself is executed in variability context A. For the same reason, checking
the property x == 5 (Line 7) yields true when executed in the same context.

Figure 3.6: Variability affecting assertions.
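
Checking an assertion then amounts to a logical implication: in every configuration that satisfies the assertion's variability context, the asserted condition must be true. The following sketch reuses whenTrue from the if-statement sketch; isTautology is a placeholder for the tautology check of the real FeatureExpr class and is passed in as a parameter.

// An assertion executed under context `ctx` holds iff `ctx` implies that the
// asserted condition is true, i.e. ¬ctx ∨ whenTrue(condValue) is a tautology.
def checkAssertion(ctx: FeatureExpr,
                   condValue: Conditional[Boolean],
                   isTautology: FeatureExpr => Boolean): Boolean =
  isTautology(ctx.not() or whenTrue(condValue))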

After looking at the language constructs supported by the variability-aware
interpreter, a more complicated example is shown, where all of the above-mentioned con-
structs are used. Figure 3.7 shows the partial trace of a full program execution,
allowing to see the difference between the interpretation of two exemplary configura-
tions of the program and the variability-aware interpretation. All program variables
are tracked and shown at five points of program execution. After the first assignment
(Line 2), all variables are shared between configurations, but when y is assigned, the
effects of variability become visible (Line 8). In the case of the configurations, different
values are stored for y, and the variability-aware interpretation stores y as a Choice. At
Line 10, the condition's value depends on the configuration, which results in the value of x
also being conditional. The last snapshot of the variables illustrates the different kinds
of results which both approaches provide. While the configurations both have a unique
result, the variability-aware interpretation yields the result for all configurations at once.
The assertion in Line 24 checks the property res < 15, which is true in all cases. While
this outcome is obvious in the case of the configurations, because their result is unique,
the variability-aware interpretation checks all values of res to be less than 15. In this
case, res can have the values 0, 10 and 12. Therefore, the assertion evaluates to true.

The trace compares the configurations {B} and {A, B} with the variability-aware interpretation:

Snapshot 1:
  {B}:     x = 4,  y = undef,  res = undef
  {A, B}:  x = 4,  y = undef,  res = undef
  VA:      x = One(4),  y = One(undef),  res = One(undef)

Snapshot 2:
  {B}:     x = 4,  y = 3,  res = undef
  {A, B}:  x = 4,  y = 2,  res = undef
  VA:      x = One(4),  y = Choice(A, One(2), One(3)),  res = One(undef)

Snapshot 3:
  {B}:     x = 4,  y = 3,  res = undef
  {A, B}:  x = 5,  y = 2,  res = undef
  VA:      x = Choice(A, One(5), One(4)),  y = Choice(A, One(2), One(3)),  res = One(undef)

Snapshot 4:
  {B}:     x = 4,  y = 3,  res = 0
  {A, B}:  x = 5,  y = 2,  res = 0
  VA:      x = Choice(A, One(5), One(4)),  y = Choice(A, One(2), One(3)),  res = One(0)

Snapshot 5:
  {B}:     x = 4,  y = 0,  res = 12
  {A, B}:  x = 5,  y = 0,  res = 10
  VA:      x = Choice(A, One(5), One(4)),  y = One(0),  res = Choice(B, Choice(A, One(10), One(12)), One(0))

Assertion: true (in {B}), true (in {A, B}), true (variability-aware)

Figure 3.7: Partial trace of program execution for two specific variants and in variability-aware fashion.

It has already been said that variability-aware interpretation is expected to be faster than checking all possible configurations individually, because common code segments are executed only once. Two special cases of this effect, which emphasize the benefit of variability-aware interpretation, are shown below.

Early joining. An effect that occurs when previously configuration-dependent values are joined again is called early joining. Joining values in this case means that the number of configurations in which they are equal increases. The benefit of joining as early as possible is that fewer subsequent calculations must be performed to provide a result for all configurations. An example of early joining is shown in Listing 3.1. In this code fragment, two functions are declared: max(a,b) returns the maximum value of a and b, while expensive(n) is a placeholder representing any sort of CPU-intensive calculation. When calling the max function (Line 10), the value of x is conditional. However, the max function returns the same value for every configuration and therefore x is shared between configurations again after this assignment. Due to that fact, the expensive calculation (Line 11) has to be applied only once, increasing the variability-aware interpretation speedup.

 1 begin
 2 def expensive(n) { ... }
 3 def max(a, b) { ... }
 4
 5 x = 1;
 6 //#ifdef A
 7 x = 2;
 8 //#endif
 9
10 x = max(x, 42);
11 res = expensive(x);
12 end

Listing 3.1: Code example for early joining.

Late splitting. When expensive calculations differ only slightly between configu-
rations, intermediate results may have to be calculated only once. The goal is to keep
values shared between configurations as long as possible and only split them for the
configuration-dependent part of the calculation. This effect is called late splitting and
in contrast to early joining, which is implemented as a special feature, it occurs by
construction. Late splitting is illustrated in Listing 3.2.
The code fragment shows the calculation of the sum of the first n natural numbers. The calculation mainly consists of a while loop summing all numbers between 1 and x. In every iteration the intermediate result is stored in variable sum. Previously, x has been assigned (Lines 2-5) and affects the number of iterations the while loop will perform (Line 9). Above, it has been mentioned that the variability-aware interpreter executes while loop bodies in the context where the while condition yields true. In Listing 3.2, this context is always True, except for the last iteration. As a consequence, the intermediate result stored in sum is updated in a single shared context in nearly all iterations. Only the last iteration is executed in variability context A, thus updating sum conditionally. The additional effort for handling variability is therefore only required in this last iteration. In comparison to the brute-force approach, which would execute the whole while loop in every possible product, late splitting makes up a significant performance difference.

 1 begin
 2 x = 100;
 3 //#ifdef A
 4 x = 101;
 5 //#endif
 6
 7 sum = 0;
 8 i = 1;
 9 while (i <= x) {
10   sum = sum + i;
11   i = i + 1;
12 }
13 end

Listing 3.2: Code example for late splitting.

3.3 Implementation

In this section the concept of the variability-aware interpreter is picked up again and its implementation is shown. In particular, the implementations of the AST classes, the parser, the execution of statements, the evaluation of expressions and the variability-aware store are presented. The variability-aware interpreter is written in Scala to enable compatibility with the TypeChef libraries, which are also written in Scala.

AST

In Section 2.4.1 it has been described how the source code of programs is represented using ASTs. Afterwards, Section 2.4.2 showed how variability is woven into ASTs when the code they represent is enriched with variability. In the following, it is illustrated how the AST nodes are implemented as Scala constructs.

abstract class Stmt
case class Assign(n: String, e: Conditional[Expr]) extends Stmt
case class If(c: Conditional[Expr], thn: Block, els: Block) extends Stmt
case class While(c: Conditional[Expr], b: Block) extends Stmt
case class Block(s: List[Opt[Stmt]]) extends Stmt
case class FuncDec(n: String, a: List[Opt[String]], b: Block) extends Stmt

abstract class Expr
case class Num(i: Int) extends Expr
case class Var(name: String) extends Expr
case class Call(n: String, a: List[Opt[Expr]]) extends Expr
case class Add(e1: Expr, e2: Expr) extends Expr
case class Sub(e1: Expr, e2: Expr) extends Expr
...

Listing 3.3: Language constructs of a variability-enriched While language in Scala.

As shown in Listing 3.3, the While language constructs are divided into statements and expressions. Statements include assignments (Assign), which have a field for the variable name and an expression that will be evaluated and stored as the value of the corresponding variable. If and While statements have a condition that is evaluated before their bodies are executed. While loops have one block statement that contains the statements which will be repeatedly executed. If statements have one block for their then branch and another block for the else branch. Function declarations also count as statements and include the function name, a list of function parameters and the function body as a block statement. Expressions can be numbers (Num), the value of a variable (Var) or function calls (Call). Calls have the name of the called function as a field, as well as a list of parameter expressions. Expressions can also be operators (Add/Sub/...), which contain a left and a right expression. The abstract syntax of the While language is extended with variability by using the Opt and Conditional constructs, which have been described in Section 2.4.2. This provides the ability to express, for example, function declarations with configuration-dependent arguments.
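As a small, hand-built illustration of how these classes combine with the variability constructs, the following fragment represents the annotated assignment x = 2 inside feature A; the factory call used to create the feature expression is an assumption about the TypeChef API and may differ in detail.

    // Sketch: "x = 2" as an optional AST node that is only present when A is selected.
    val fA = FeatureExprFactory.createDefinedExternal("A")   // assumed factory call for feature A
    val optionalAssign = Opt(fA, Assign("x", One(Num(2))))
    // A block whose first statement is unconditional and whose second depends on A:
    val prog = Block(List(Opt(True, Assign("x", One(Num(1)))), optionalAssign))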

Parser

To translate source code into its corresponding representation in the above-mentioned classes, the code is processed by a parser. The concept of the parser has already been described in Section 2.5. In this section, a small example illustrates how a textual code representation is finally transferred into objects of the abstract syntax classes.

1 def whileStmt = "while" ~ "(" ~ (expr!) ~ ")" ~ block ^^ {
2   case _ ~ _ ~ c ~ _ ~ b => While(c, b)
3 }
4 def blockStmt = "{" ~ (stmt*) ~ "}" ^^ {
5   case _ ~ stmtlst ~ _ => Block(stmtlst)
6 }

Listing 3.4: Parsing while and block statements with the TypeChef Parser Framework.

Listing 3.4 is an excerpt from the parser implemented for the purposes of this work. It shows how parsing while and block statements is realized using auxiliary operators of the TypeChef Parser Framework [Kästner et al., 2011b]. How exactly these operators work is not part of this work and can be looked up in the referenced literature. Concerning this work, TypeChef's parser functions and operators have been used out of the box, as already stated in Section 2.5.

A While statement (Lines 1-3) consists of the keyword "while", a condition in braces that is parsed using expr, and a block. A pair of curly braces that wraps a list of optional statements is the construct for a Block (Lines 4-6). The different elements that belong to these constructs are concatenated using a sequence operator (~). The condition in a while loop is conditional, which means that it can differ between configurations; this is achieved using the ! operator after expr. An example for the usage of optional lists (*) is given in the definition of a block statement, as it contains an arbitrary number of optional statements. Both constructs are mapped to their corresponding AST node representation on parsing. Therefore, the ^^ operator is used to create a While object that contains the parsing result of its condition and its block, and a Block object that contains the parsing result of its list of statements.

program    begin {statement | funcDec}* end
statement  assign | if | while | block
assign     identifier = {expr} ;
if         if ( {expr} ) block [else block]
while      while ( {expr} ) block
block      { {statement}* }
funcDec    def identifier ( {identifier}* ) block
expr       add | sub | mul | div | grt | let | goe | leo | and | or | eq | neq | neg | var | num | bool | call | par
add        expr + expr
sub        expr - expr
mul        expr * expr
div        expr / expr
grt        expr > expr
let        expr < expr
goe        expr >= expr
leo        expr <= expr
and        expr && expr
or         expr || expr
eq         expr == expr
neq        expr != expr
neg        ! expr
var        identifier
num        integer
bool       true | false
call       identifier ( {identifier}* ) ;
par        ( expr )

{ }* - list of optional elements
{ }  - conditional element
foo  - language construct / keyword

Figure 3.8: While language syntax.

Finally, Figure 3.8 describes the syntax of the variability-enriched While language
that is accepted by the parser of this work. A program contains a list of optional state-
ments and function declarations. It starts with ”begin” and is closed with ”end”. All
these constructs are implemented using the TypeChef Parser Framework and its helper
functions, analogously to the example presented in Listing 3.4.

Statements

After parsing the source code of a program, it is stored as a list of optional statements. Running the program means that the interpreter successively processes all statements and accordingly updates its store. Listing 3.5 shows the implementation of statement execution in the variability-aware interpreter. First of all, the interpreter does not execute any statement if the variability context is a contradiction (Line 2). The handling of the remaining statement types is explained in the following.
Assignments. On interpreting an assignment (Lines 4-6), the right-hand side of the assignment is first evaluated to a conditional value. Afterwards this value is saved in the store. The value is saved as a Choice to express that the assigned variable receives its value only in the variability context the assignment has been called with. Finally, if this would yield an unnecessarily complicated structure, it is simplified with the corresponding function (Line 6). An example of this process would be simplifying Choice(True, One(1), One(2)) to One(1).
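The simplification itself comes with the Conditional library; purely as an illustration of the idea (not the actual TypeChef implementation, whose rules are more extensive), a sketch could look like this:

    // Sketch of the simplification idea: choices over trivial contexts or with
    // equal alternatives collapse to a simpler conditional value.
    def simplifySketch[T](c: Conditional[T]): Conditional[T] = c match {
      case Choice(f, a, b) if f.isTautology()     => simplifySketch(a)  // Choice(True, One(1), One(2)) => One(1)
      case Choice(f, a, b) if f.isContradiction() => simplifySketch(b)
      case Choice(f, a, b) =>
        val (sa, sb) = (simplifySketch(a), simplifySketch(b))
        if (sa == sb) sa else Choice(f, sa, sb)                         // equal alternatives are merged
      case one => one
    }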
Blocks. When interpreting blocks, all statements included in a block are executed in the variability context of the block itself and their own context. To achieve this, the interpreter is recursively called with variability context fe ∧ vctx (Line 8).
While loops. The concept of how while loops have to be handled concerning variability has been described in Section 3.2. In short, every iteration of the loop needs to query the variability context in which the while condition yields true. Looking at the code, this behavior is achieved by calling whenTrue on the result of evaluating the condition in every iteration (Line 13). The result of whenTrue (Lines 35-40) is a feature expression that describes all configurations in which the evaluated condition is true. The loop is exited when there is no configuration left in which another iteration could be performed.
If statements. For interpreting if statements, the variability context in which the if condition is true is also queried (Line 19). When this context is satisfiable, the then branch is executed with it (Line 20). If there are statements in the else branch, these are executed in the negation of the queried context, which is the context where the condition yields false (Line 21).
Assertions. When interpreting assertions, an exception is thrown if the variability context in which the assertion condition yields true is not equivalent to the context the assertion has been called in (Line 25).
Function declarations. When the interpreted statement is a FuncDec (Lines 29-31), it is stored in the function store of the execution. The behavior is the same as with assignments to variables, which has been explained above.

 1 def execute(s: Stmt, vctx: FeatureExpr, sto: VAStore, fSto: VAFuncStore) {
 2   if (vctx.isContradiction()) return
 3   s match {
 4     case Assign(name, expr) =>
 5       sto.put(name, Choice(vctx, eval(expr, sto, fSto), sto.get(name))
 6         .simplify)
 7     case Block(stmts) => {
 8       for (Opt(fe, stm) <- stmts) execute(stm, fe and vctx, sto, fSto)
 9     }
10     case While(c, block) => {
11       var isSat = true
12       while (isSat) {
13         val x = whenTrue(eval(c, sto, fSto))
14         isSat = (vctx and x).isSatisfiable
15         if (isSat) execute(block, vctx and x, sto, fSto)
16       }
17     }
18     case If(c, s1, s2) => {
19       val x = whenTrue(eval(c, sto, fSto))
20       if (x.isSatisfiable) execute(s1, vctx and x, sto, fSto)
21       if (!(s2.stmts.isEmpty)) execute(s2, vctx andNot x, sto, fSto)
22     }
23     case Assert(c) => {
24       val x = whenTrue(eval(One(c), sto, fSto))
25       if (!(x.equivalentTo(vctx))) {
26         throw new AssertionError("violation of " + printNode(c))
27       }
28     }
29     case FuncDec(name, args, body) =>
30       fSto.put(name, Choice(vctx, One(FDef(args, body)), fSto.get(name))
31         .simplify)
32   }
33 }
34
35 def whenTrue(cv: Conditional[Value]): FeatureExpr =
36   cv match {
37     case One(ErrorValue()) => False
38     case One(v) => if (v.getBool) True else False
39     case Choice(f, a, b) => (whenTrue(a) and f) or (whenTrue(b) andNot f)
40   }

Listing 3.5: Method for processing statements in the variability-aware interpreter.

 1 trait Value {
 2   def getInt(): Int
 3   def getBool(): Boolean
 4 }
 5
 6 case class IVal(i: Int) extends Value {
 7   def getInt() = i
 8   def getBool() = throw new IllegalCallException()
 9 }
10
11 case class UndefVal extends Value {
12   def getInt() = throw new IllegalCallException()
13   def getBool() = throw new IllegalCallException()
14 }

Listing 3.6: Value representation in the variability-aware interpreter.



Expressions

On executing statements, the variability-aware interpreter relies on evaluating expressions. The outcome of evaluating an expression can be a conditional boolean or integer value. In the variability-aware interpreter, values are implemented as subtypes of the trait Value. As shown in Listing 3.6, these subtypes can be queried for integer and boolean values. However, the example of IVal shows that calling the wrong getter results in an exception (Line 8). There is also a type UndefVal that represents an undefined value. When undefined values occur in a calculation, the error propagates. This behavior is preferable to throwing an exception, because continuing the evaluation makes it possible to detect error-containing configurations afterwards and separate them from others that provided a correct result.
The evaluation of conditional expressions is shown in Listing 3.7. To map over a conditional structure, the method mapr from the TypeChef library is used, which provides the possibility to process every alternative expression in its separate variability context. An example of this behavior is shown in Figure 3.9, where mapr is called with the functional parameter store.get(_) on a conditional variable name (left). This results in all alternative variable names being looked up in the store separately, preserving the variability context C of the queried variable y (right).
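The following sketch illustrates the idea behind such a mapping; it is a simplified re-implementation for illustration only, and the actual mapr provided by the TypeChef library may differ in signature and in additional simplification steps.

    // Sketch: applying a function f that itself returns a Conditional to every
    // alternative of a Conditional, while preserving the feature contexts.
    def maprSketch[T, U](c: Conditional[T])(f: T => Conditional[U]): Conditional[U] =
      c match {
        case One(v)           => f(v)                                           // single value: apply f directly
        case Choice(fe, a, b) => Choice(fe, maprSketch(a)(f), maprSketch(b)(f)) // keep context fe
      }

    // Applied to the example of Figure 3.9 (store contents as shown there):
    //   maprSketch(Choice(A, One("x"), One("y")))(store.get(_))
    //   == Choice(A, One(0), Choice(C, One(1), One(2)))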
Listing 3.7 also shows the handling of the different expression types. When the
expression is a plain number (Num), the corresponding result is a single IVal (Line 3).
When the type of the expression is a variable name (Var), the value of this variable is
looked up in the store (Line 4). For evaluating composite expressions (Lines 5-6), an
auxiliary function calcVal is used, which first evaluates the inner expressions, propagates
possible error values and finally executes the given operation on the outcome of its inner
evaluations. In Listing 3.7, only Add and Sub are shown, because the handling for other
composite expressions is the same.
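A sketch of how such a helper could be written is given below; the name, the explicit store parameters and the error handling are assumptions based on the description above and on the calls visible in Listing 3.7, where the stores are captured from the enclosing eval instead of being passed explicitly.

    // Sketch: evaluate both operand expressions, propagate undefined values,
    // and otherwise apply the given operation op to the two plain values.
    def calcValSketch(e1: Expr, e2: Expr, sto: VAStore, fSto: VAFuncStore,
                      op: (Value, Value) => Value): Conditional[Value] =
      eval(One(e1), sto, fSto).mapr(v1 =>
        eval(One(e2), sto, fSto).mapr(v2 =>
          (v1, v2) match {
            case (u: UndefVal, _) => One(u)        // error in the left operand propagates
            case (_, u: UndefVal) => One(u)        // error in the right operand propagates
            case _                => One(op(v1, v2))
          }))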
Finally, the processing of function calls is shown. In this case another call to mapr is needed, because looking up a function in the function store yields a conditional function. This is desirable, because one function can have alternative implementations according to a certain configuration. However, when there are alternative implementations, the call must be processed for each implementation of the function. It may also happen that a function is not defined at all in a certain configuration, which results in an UndefVal being returned (Line 11). When the function is actually defined, a new local store for the function execution is created (Line 16) that includes the variables from the function definition mapped to the evaluated expressions from the function call. Because optional function parameters are supported for the definition as well as the call, these variability contexts have to be merged beforehand (Lines 19-22). Afterwards, the function body is executed using the new local store (Line 23). Return values are realized as assignments to a variable res inside the function body. If there is no such variable, the call returns an UndefVal, informing that the returned value is of type void.

Figure 3.9: Exemplary mapping over conditional structures. The conditional variable name Choice(A, One(x), One(y)) is mapped with store.get(_) over a store containing x -> One(0) and y -> Choice(C, One(1), One(2)), which yields Choice(A, One(0), Choice(C, One(1), One(2))).

 1 def eval(exp: Conditional[Expr], sto: VAStore, fSto: VAFuncStore) = {
 2   exp.mapr(_ match {
 3     case Num(n) => One(IVal(n))
 4     case Var(name) => sto.get(name)
 5     case Add(e1, e2) => calcVal(e1, e2, (a, b) => IVal(a.getInt + b.getInt))
 6     case Sub(e1, e2) => calcVal(e1, e2, (a, b) => IVal(a.getInt - b.getInt))
 7     ...
 8
 9     case Call(cname, cargs) => {
10       fSto.get(cname).mapr(_ match {
11         case FErr(msg) => One(UndefVal(msg))
12         case FDef(fargs, fbody) => {
13           if (cargs.size != fargs.size)
14             One(UndefVal("illegal arguments size"))
15           else {
16             val locStore = new VAStore()
17             for (i <- 0 until cargs.size)
18               locStore.put(fargs(i).entry,
19                 Choice(cargs(i).feature and fargs(i).feature,
20                   eval(One(cargs(i).entry), sto, fSto),
21                   One(UndefVal("undef func arg"))
22                 ).simplify)
23             execute(fbody, True, locStore, fSto)
24             if (!locStore.contains("res"))
25               One(UndefVal("'" + cname + "' of type void"))
26             else
27               locStore.get("res")
28           }
29         }
30       })
31     }
32   })
33 }

Listing 3.7: Method for evaluating expressions in the variability-aware interpreter.

Store

To map values to variables, the interpreter interacts with a variability-aware store. The implementation of the store is shown in Listing 3.8. Basically, the store consists of a simple Map[String, Conditional[Value]], but behaves differently when a variable is queried that is not yet in the store. In this case, a single UndefVal is returned (Line 6). Also, it is possible to get the value of a variable in a specific variability context. This is implemented using the TypeChef helper function findSubtree, which looks for values that fit the variability context vctx in a conditional construct (Line 12).

 1 class VAStore {
 2   val vars = Map.empty[String, Conditional[Value]]
 3
 4   def put(key: String, value: Conditional[Value]) = vars.put(key, value)
 5
 6   def get(key: String) = vars.getOrElse(key, One(UndefVal(s)))
 7
 8   def getByContext(key: String, vctx: FeatureExpr) =
 9     if (!vars.contains(key))
10       One(UndefVal(s))
11     else
12       findSubtree(vctx, vars.get(key).get)
13 }

Listing 3.8: Implementation of the variability-aware store.
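A short, hypothetical usage example of this store is given below; the feature-expression factory call is an assumption about the TypeChef API.

    // Sketch: storing and retrieving a variable whose value depends on feature A.
    val store = new VAStore()
    val fA = FeatureExprFactory.createDefinedExternal("A")   // assumed factory call
    store.put("x", Choice(fA, One[Value](IVal(1)), One[Value](IVal(0))).simplify)

    store.get("x")               // Choice(A, One(IVal(1)), One(IVal(0)))
    store.getByContext("x", fA)  // One(IVal(1)), resolved via findSubtree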

Plain interpreter

Next to the variability-aware interpreter, a plain interpreter has been implemented that is able to interpret programs without variability contexts. Reasons for implementing a second interpreter are the possibility to perform comparisons between both implementations (Chapter 4) and the fact that the existing variability-enriched AST classes can be reused with an adjusted handling. When the plain interpreter is called, it completely ignores variability, avoiding all checks for variability contexts and thus providing a traditional interpreter. However, the fact that it reuses the existing AST structures requires some modifications, which are exemplarily shown for assignments and while loops in Listing 3.9. For example, on interpreting assignments, the interpreter assumes unambiguous expressions (Line 3) and uses a plain store, which maps variable names to values instead of conditional values (Map[String, Value]). Also, conditions in while loops and if statements are unambiguous. For that reason, the evaluated condition directly determines whether the loop is continued or not (Lines 6-12). In contrast, the variability-aware interpreter would at this point need to check in which variability context further iterations are executed.

 1 def execute(s: Stmt, sto: PlainStore, fSto: PlainFuncStore) {
 2   s match {
 3     case Assign(name, One(expr)) => sto.put(name, eval(expr, sto, fSto))
 4     case While(One(c), s) => {
 5       var cnd = true
 6       while (cnd) {
 7         cnd = eval(c, sto, fSto) match {
 8           case ErrorValue() => false
 9           case v => v.getBool
10         }
11         if (cnd) execute(s, sto, fSto)
12       }
13     }
14     ...
15   }
16 }

Listing 3.9: Execution of statements in the plain interpreter.



Chapter 4

Evaluation

In the following chapter the variability-aware interpreter is evaluated. The evaluation focuses on providing a comparison between the traditional brute-force approach and the interpreter introduced in this work. Therefore, three benchmarks are performed. Two of them focus on benchmarking specific cases, while the third one is applied to a set of 100 randomly generated test product lines. In every benchmark, the speedup of the variability-aware interpreter in relation to the complexity of the tested product line is shown. Initially, Section 4.1 describes the evaluation method used in this work. Afterwards, the strategy used for comparing both approaches is explained in Section 4.2. Section 4.3 describes which subjects are used for the evaluation and why, before the actual results are shown in Section 4.5. Finally, Section 4.6 discusses the results and points out possible threats to validity.

4.1 Method Selection


When performing empirical evaluations, a variety of possibilities to measure the quality of an approach exist. The problem addressed in this work is to provide an approach for testing whole product lines which yields an exhaustive result but scales better in performance than brute-force analysis for high numbers of features. Therefore, the quantity under investigation is the runtime of both approaches on interpreted product lines. In search of an adequate evaluation method, performance benchmarks have been chosen, because runtimes on a real system are required. Performance benchmarks as an evaluation method focus on actually executing programs on such a system. This method fits the evaluation of this work, because interpreters are programs that interpret and execute source code, and they can be executed on a test system to measure the time elapsed for completing the interpretation of a certain code fragment.

4.2 Comparing Runtimes


To compare the runtime of the variability-aware interpreter and the brute-force analysis, a specific comparison method is necessary. Gathering the correct runtimes requires more than just invoking two interpreters on a certain code fragment, because the brute-force approach must be applied to all possible variants of a product line under test. Applying the brute-force approach in this case means invoking the plain interpreter, which has already been described in Section 3.3. The plain interpreter has been designed especially for evaluation purposes: if the variability-aware interpreter were also used to execute the variants in the brute-force approach, the overhead of variability calculations would distort the benchmark results, even though these programs no longer contain variability. When using the plain interpreter, however, all statements and expressions are assumed unambiguous by construction. The interpreter does not take a variability context, nor does it contain any functions for processing variability. It handles statements and expressions like a traditional interpreter, which is desired when comparing variability-aware interpretation with the brute-force approach. The strategy we used for obtaining the correct runtime values for both approaches can be described in the following major steps:

1. Generate possible variants.

In the first step, all possible variants of a certain product line under test have to be created. This is done by looking up all distinct features that have been used in the corresponding product line and creating the set of all possible feature selections that can be made (a sketch of this enumeration is given after this list). Afterwards, a variant of the product line is generated for each of these feature selections. After removing duplicates, this process yields all possible variants that can be derived from the product line under test.

2. Execute the interpreters.

After generating all possible products, the actual runtime values are collected. This work measures the wall-clock time of the pure program execution, i.e., excluding parsing. The exact method is shown in Listing 4.1. As it shows, the system time is measured before and after program execution and the difference is taken as the runtime of the program execution. This way of testing is called differential testing. In case of the variability-aware interpreter, only one execution of the program has to be measured. Concerning the plain interpreter, the runtime of every variant execution is measured and the values are summed up.
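As announced in the first step, a simple sketch of how all feature selections could be enumerated is shown below; it builds the power set of the distinct features and is illustrative only, the function name being an assumption rather than the actual implementation.

    // Sketch: enumerating all feature selections of a product line as the power set
    // of its distinct features; Set("A", "B") yields {}, {A}, {B} and {A, B}.
    def allFeatureSelections(features: Set[String]): Set[Set[String]] =
      features.subsets.toSet

    // Each selection is then used to derive one variant; removing duplicate variants
    // afterwards yields the set of products that the plain interpreter has to execute.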

Figure 4.1 shows a minimal example for generating all possible program variants. Though four possible configurations exist when combining the optional features A and B, all configurations where at least one feature is selected ({A}, {B} and {A, B}) yield the same variant. Only when no feature is selected ({}), the resulting program is empty, which results in a total of 2 possible variants.

Figure 4.1: Generating all possible program variants.

At the end of the second step, the results for both approaches are finally collected. This way of comparing both approaches also conforms to the comparison of variability-aware analysis and brute-force analysis (Figure 2.3), which has been described in Section 2.4.

start = System.nanoTime()
program.run()
stop  = System.nanoTime()
total = (stop - start)

Listing 4.1: Measuring wall-clock time of interpreter execution.
In the following sections, the expression "speedup value" is frequently used. It refers to the runtime of the brute-force approach divided by the runtime of the variability-aware interpretation for a certain product line. The speedup value describes the factor by which the variability-aware approach is faster than the brute-force approach.

4.3 Subject Selection


Empirical evaluations always need subjects on which the evaluation can be performed. In case of qualitative analyses, this can be a group of participants. This work, however, uses performance benchmarks as its evaluation method and therefore needs some kind of data on which the benchmarks can be performed. The kind of data addressed by this work are product lines. Therefore, adequate product lines for conducting performance benchmarks have to be selected. Which product lines serve as subjects for the evaluation is explained in the following.

4.3.1 Benchmarking specific cases


In Section 3.2, it has already been explained why keeping variability as local as possible and thus sharing structures between configurations are benefits of the variability-aware approach in comparison to the brute-force approach. Early joining and late splitting have been introduced as positive effects that occur when code fragments follow a specific pattern. In the evaluation of this work, a benchmark is conducted for both cases, to provide a speedup value for each effect in isolation. These cases have also been selected because the variability-aware interpreter can unfold its full potential when the above-mentioned effects appear, which should be emphasized with some actual speedup values.

Early joining

Early joining exploits the fact that values are joined before expensive calculations are executed on them. In this benchmark, a function for the nth Fibonacci number represents the expensive calculation. The value of variable x is joined when the function max(a, b) is executed, because it was less than 25 in all configurations before. The benchmark is conducted with multiple starting values of x. Initially, the code is interpreted without any variability. Each further execution adds another assignment of a value below 25 to x to the code, which is executed in a specific variability context, thus increasing the expected benefit from early joining in favor of variability-aware interpretation. Further assignments increase the assigned value by one and use the next alphabetical character as variability context. Listing 4.2 shows an example of the benchmark, where x currently depends on two features. An increasing speedup is expected, because the expensive calculation (Line 28) is executed once by the variability-aware interpreter, while it has to be executed multiple times, according to the number of possible products, in the brute-force approach.

 1 begin
 2 def fibo(a) {
 3   if (a == 1 || a == 2) {
 4     res = 1;
 5   } else {
 6     res = fibo(a-1) + fibo(a-2);
 7   }
 8 }
 9
10 def max(a, b) {
11   if (a > b) {
12     res = a;
13   } else {
14     res = b;
15   }
16 }
17
18 x = 1;
19 //#ifdef A
20 x = 2;
21 //#endif
22 //#ifdef B
23 x = 3;
24 //#endif
25 ...
26
27 x = max(x, 25);
28 res = fibo(x);
29 end

Listing 4.2: Code fragment used for measuring the effect of early joining.

Late splitting

When calculations use a single variability context as long as possible and only split when variability actually affects the calculation result, this effect is called late splitting. In this benchmark the impact of late splitting is measured based on the code fragment in Listing 4.3. The code fragment used for benchmarking the impact of late splitting is very close to the code that has been used for explaining the effect in Section 3.2. Still, the calculation of the sum of the first n numbers is used. However, in this benchmark the number of assignments to x is increased step by step. Starting off with no specific variability context, each further execution adds another assignment to x in an alphabetically increasing variability context. In this way, the impact of late splitting is expected to increase, because in variability-aware interpretation the calculation is executed in a single variability context until 100 and only then splits for the different variability contexts. In contrast, the brute-force approach executes the whole calculation for every possible product. Because an increasing number of different variability contexts increases the calculation effort significantly less for variability-aware interpretation compared to the brute-force approach, the speedup of the variability-aware interpreter is also expected to increase.

begin
x = 100;
//#ifdef A
x = 101;
//#endif
//#ifdef B
x = 102;
//#endif
...

sum = 0;
i = 1;
while (i < x) {
  sum = sum + i;
  i = i + 1;
}
end

Listing 4.3: Code fragment used for measuring the effect of late splitting.

4.3.2 Benchmarking generated product lines


The third benchmark performed in this work includes testing 100 randomly generated test product lines. In contrast to the previously described benchmarks, which show the variability-aware interpreter's potential in favorable cases, this benchmark aims at providing a comparison for a variety of different code fragments. To provide a realistic result, the set of test subjects contains many structurally different product lines. The following list provides a more detailed insight into which structural attributes are varied within this set.

• Number of statements. The size of the test product lines is continually increased by extending the number of generated statements.

• Type of statements. Not only the number of statements changes over the set of subjects, but also the types of statements that are used. Possible types of statements are assignments, if statements and while loops.

• Number of features. In the set of test product lines, statements are randomly annotated with variability contexts. For this purpose, a subset of the features A-F is used. Product lines of greater size use correspondingly more features.

• Structure of feature expressions. In addition to varying the number of feature expressions, their structure is also varied. Structural variations include how many distinct features are used in a complete feature expression and how these distinct features are combined. For example, a feature expression can be just one distinct feature (X) or the conjunction of many distinct features by an and operation (X ∧ Y ∧ ...).

• Usage of functions. Functions have also been incorporated in the set of test product lines. The number of declared and used functions for a certain product line is determined by its size, i.e., the number of statements. In other words, larger product lines use more functions. Though function definitions as well as function calls are randomly generated with a certain variability context, the actual function body is taken from a static set of predefined functions, e.g. sum, sub, min, max, etc.

The generator for these product lines is written with ScalaCheck [Nilsson], a library for automated testing and test case generation. On generating product lines, ScalaCheck automatically covers a variety of special cases, such as using one of the above-mentioned constructs excessively (e.g. many while loops) or, on the other hand, not using it at all (e.g. an empty product line). Therefore, the comparison of both approaches over the whole set of generated product lines is expected to yield a representative result for the speedup of variability-aware interpretation. The number of 100 randomly generated product lines has been borrowed from the standard settings of the ScalaCheck test case generator.
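To give an impression of how such a generator can be assembled, a simplified sketch using ScalaCheck generators is shown below; the statement shapes and feature names follow the description above, but the actual generator used in this work is more elaborate.

    import org.scalacheck.Gen

    // Sketch: generate a random assignment and optionally wrap it in an
    // #ifdef annotation with one of the features A-F.
    val genFeature: Gen[String] = Gen.oneOf("A", "B", "C", "D", "E", "F")

    val genAssign: Gen[String] = for {
      name  <- Gen.oneOf("x", "y", "z")
      value <- Gen.choose(0, 100)
    } yield name + " = " + value + ";"

    val genOptStmt: Gen[String] = for {
      stmt      <- genAssign
      annotated <- Gen.oneOf(true, false)
      feature   <- genFeature
    } yield {
      if (annotated) "//#ifdef " + feature + "\n" + stmt + "\n//#endif"
      else stmt
    }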

4.4 Setup
The following section briefly describes the setup used for benchmark execution. All benchmarks in this work have been performed on the same workstation. The hardware and virtual machine settings of the workstation are shown in Figure 4.2. To minimize the impact of confounding factors, all unnecessary background processes have been closed prior to benchmark execution. Also, the garbage collector has been launched manually before each benchmark run and one warmup run was executed before the actual measurement. Additionally, all benchmarks were performed three times, and the results shown in the next section represent mean values of these three runs.

CPU : Intel Core i7 3770K (4 x 3.5GHz)
RAM : 16GB DDR3-1600
LANG: Scala 2.9.2
VM  : Java HotSpot(TM) Client VM (1.6.0_32)
HEAP: 512MB (default) / 1024MB (max)

Figure 4.2: Hardware setup of the benchmark system.

4.5 Results
4.5.1 Single cases
Early joining

In the first benchmark, the impact of early joining on variability-aware interpretation speedup was the target of the evaluation. The intuition was that an increasing number of features would also increase the variability-aware interpretation speedup, which led to the following hypotheses:

Null hypothesis (H0): In case of early joining, the number of used features has no relation to the speedup of the variability-aware interpreter.

Alternative hypothesis (H1): For early joining, a relation between the number of features and the speedup of the variability-aware interpreter exists.

Figure 4.3 shows the results for the early joining benchmark. The variability-aware interpreter performs slower than the brute-force approach when the joined variable is shared between configurations or dependent on only one feature. However, starting at the use of two variability contexts, the variability-aware interpreter is faster and also increases its speedup for the further executions of this benchmark. When the joined variable was dependent on 6 features before joining, this benchmark yields a speedup of over 20 times for the variability-aware interpreter. The absolute values are around 850ms for all runs of the variability-aware interpreter and range from 280ms to 17700ms for the runs of the brute-force approach.

Figure 4.3: Results for early joining benchmark (speedup plotted over the number of features).

Late splitting

The second benchmark addressed the impact of late splitting on the speedup of the variability-aware approach. Again, the intuition was that an increasing number of features affects the variability-aware interpretation speedup positively, because calculations are mostly performed in a single variability context.

Null hypothesis (H0): When late splitting occurs, the number of used features has no relation to the speedup of the variability-aware interpreter.

Alternative hypothesis (H1): On occurrence of late splitting, a relation between the number of features and the speedup of the variability-aware interpreter exists.

The results for the late splitting benchmark are shown in Figure 4.4. Compared to the results of the previous benchmark, the variability-aware interpreter crosses the mark where it performs faster than the brute-force approach at a later point. With a growing number of features, however, the speedup also increases, just like in the previous benchmark. The variability-aware interpreter reaches a speedup of over 20 times when the variable that gets split after 100 loop iterations is dependent on 10 different features. In this benchmark, the absolute values range from 54ms to 433ms in case of the variability-aware interpreter and from 14ms to 10100ms in case of the brute-force approach.

Figure 4.4: Results for late splitting benchmark (speedup plotted over the number of features).

4.5.2 Generated product lines


Finally, both interpreters were compared over the set of 100 generated product lines. In this third benchmark, the evaluation provides a more comprehensive result because the product lines under test differ in a variety of structural attributes. Though there may be some critical cases where the variability-aware interpreter performs poorly, the expectation was that it outperforms the brute-force approach regarding the whole set of product lines. Therefore we raised the following hypotheses:

Null hypothesis (H0 ): Comparing the runtimes of all 100 generated product lines
under test, the variability-aware interpreter does not perform better than the brute-force
approach.

Alternative hypothesis (H1 ): The variability-aware interpreter outperforms the


brute-force approach, when taking all collected runtime values into account.
Figure 4.5: Boxplot of runtimes for the generated product lines benchmark; the boxes compare the brute-force (BF) and variability-aware (VA) interpreters, with the runtime in ms on the horizontal axis (0 to 1400). Some outliers omitted.

A comparison of both approaches concerning plain runtimes is shown by the boxplot in Figure 4.5. The boxes on the left include fifty percent of all collected runtimes, while their inner vertical line shows the median of each approach's runtimes. Some outliers for the brute-force approach have been omitted. The boxplot shows that the majority of collected runtime values for the variability-aware interpretation lies between slightly over 0ms and 350ms. On the contrary, the majority of runtimes for the brute-force approach is spread over a greater interval, up to 1300ms. The observed medians are at about 40ms for the variability-aware interpreter and about 120ms for the brute-force approach.
Figure 4.6 shows the speedup value of the variability-aware interpreter dependent on the number of variants. In the previous benchmarks, the number of features directly indicated the number of variants, because there was exactly one statement in the variability context of each feature, resulting in 2^#features variants. In this benchmark, the subjects under test are generated randomly and it is possible that some of the product lines yield equal variants for different feature selections, which lowers the number of possible variants, though the number of features might be the same. Therefore, the speedup is shown in relation to the number of variants in this benchmark. As the generated product lines used at most 6 distinct features, the number of variants caps at 2^6 = 64. The blue line in the plot marks the point at which the variability-aware interpreter outperforms the brute-force analysis. While the brute-force approach performs better for very small product lines (<= 8 variants), the variability-aware interpreter is faster in 76 out of 83 cases with more than 8 variants, receiving speedup factors of up to 18.

Figure 4.6: Variability-aware interpretation speedup in relation to the number of variants for the generated product lines benchmark.

4.6 Discussion
In the evaluation, three benchmarks have been conducted to compare the variability-aware interpreter with the brute-force approach. The first two benchmarks addressed cases where an advantage in favor of the variability-aware approach was expected beforehand. The null hypotheses were rejected, because an increasing number of features indeed affected the variability-aware interpretation speedup positively. As a consequence, the alternative hypotheses are valid. In the third benchmark, a set of 100 generated product lines was used to compare both approaches. Performing a Welch two-sample t-test on the two measurement series yields that they differ from each other (significance level 0.99). In other words, it is most likely that the variability-aware interpreter is actually faster than the brute-force approach, which also rejects the null hypothesis for this third benchmark.

All performed empirical evaluations show that the variability-aware interpreter is slower when interpreting code with few variability contexts. This outcome is caused by the variability checks that are invoked by the variability-aware interpreter. For example, in every while loop iteration it must be calculated in which variability context the condition yields true. This also explains why the variability-aware approach does not outperform the brute-force analysis at a lower number of features in the late splitting benchmark. Compared to a simple addition, it takes more time to query the variability context in which the while condition yields true, which is not required in the plain interpreter. With a growing number of features, and thus an increasing number of variants, this initial penalty is compensated.

However, this work addresses the problem that brute-force analysis scales poorly for high numbers of features. Variability-aware interpretation is primarily designed to provide a suitable approach when brute-force analysis cannot yield a result in a reasonable amount of time. Therefore, performance problems that only apply to very small product lines are negligible, and in summary it can be stated that variability-aware interpretation outperforms the brute-force approach in total.

4.6.1 Threats to validity


As every evaluation this work’s empirical study must confront certain threats to
validity. Possible threats can be divided into threats to internal validity and threats to
external validity. While threats to internal validity represent factors that would provide
alternative explanations for the benchmark results presented in Section 4.5, threats
to external validity concern the generalizability of the variability-aware interpretation
approach. In this section both types are discussed.

Internal validity. Circumstances that could influence the benchmark results are possible threats to internal validity. This includes an alternative explanation for the speedup values of the variability-aware interpreter. Such a threat is the impact of confounding factors in a benchmark. Examples of confounding factors are background processes, the execution of benchmarks on different systems or with different settings, and in some cases garbage collection. As described in Section 4.4, this evaluation has reacted to these threats by using the same hardware setup and Java virtual machine settings for every benchmark, as well as by closing every background process that was not needed for the benchmark to be carried out. Additionally, the garbage collector has been invoked manually before every benchmark. To control the impact of outliers, a warmup run has been executed before the actual benchmarks and all reported results represent the mean of three runs.
External validity. Factors that limit the generalizability of the approach presented in this work are threats to external validity. In the first two benchmarks of our evaluation we investigated favorable cases of the variability-aware interpreter. This may pose a possible threat to external validity because a speedup was already expected beforehand. However, these benchmarks served to provide some actual values for these cases, while the intention of the third benchmark was to provide a representative result. In this case, a possible threat to external validity would be that the generated test product lines were not suitable as a set of subjects. For controlling this threat, a special test case generator, ScalaCheck [Nilsson], has been used to cover a variety of different cases concerning the structure of the subjects, such as empty product lines or product lines with excessive use of a certain structure, e.g. many statements of the same type. Thus, a great variety of test cases is covered, and the set of subjects is regarded as sufficiently representative.

Chapter 5

Future Work

The variability-aware interpreter presented in this work serves as a prototype, showing that applying the principles of variability-aware analysis to interpreting source code is an approach worth further research. However, future work concerning the interpreter mainly focuses on extending the supported language constructs. The used While language served well for showing that variability-aware interpretation actually works, but in the presence of today's modern object-oriented programming languages it cannot provide a practicable solution.
There is already an experimental branch of the variability-aware interpreter that tries to add structures for supporting class definitions and objects. The extended interpreter is already able to handle simple class definitions, as shown in Listing 5.1. Concerning object orientation, another store for class definitions is used and objects are treated as subclasses of type Value. Representing variability inside of classes, e.g. optional method definitions, is done by incorporating variability into these values, which technically means that object values have their own variability-aware store and function store.

class Mult {
  const FACTOR = 2;

  def calc(x) {
    res = this.FACTOR * x;
  }
}
o = new Mult();
v = o.calc(3);

Listing 5.1: Object orientation for the variability-aware interpreter.
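In a very simplified and purely hypothetical form, such an object value could look like the following sketch, where an instance carries its own variability-aware stores for fields and methods; the class name and its fields are assumptions, not the actual experimental implementation.

    // Hypothetical sketch: an object as a Value subtype that carries its own
    // variability-aware store for fields and function store for methods.
    case class ObjVal(fields: VAStore, methods: VAFuncStore) extends Value {
      def getInt()  = throw new IllegalCallException()   // objects are neither integers
      def getBool() = throw new IllegalCallException()   // nor booleans
    }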
The difficulty in supporting objects is the great number of locations where variability can appear. In this work, it has already been explained that it is desirable to store variability as locally as possible. However, this also implies great complexity. For example, classes can have variable fields, variable constructors and variable functions, which in turn can have optional parameters or optional statements in their bodies. Therefore, future work would primarily aim at combining all these different variability contexts when interpreting programs. Combining in this case means that all locations where variability can be specified have to be taken into account when an object is accessed. For example, invoking a method of an object requires handling the variability context of the object itself, of the function declaration, of possibly referenced variables and of the method parameters. As a result, future evaluations may also be able to take existing product lines, e.g. the Graph Product Line, as subjects.

Chapter 6

Related work

Software product-line testing. Testing is a first-class concern in software development and a common method for maintaining the quality of software products. However, testing software product lines brings up the problem that all variants of a product line have to be tested to receive an exhaustive test result, which leads to performance issues for product lines with high numbers of features.

One approach to address these performance issues are sampling strategies [Cabral et al., 2010], which focus on reducing the number of variants under test by selecting test cases according to a specific selection criterion. Because sampling strategies cannot provide exhaustive test results when testing only a subset of all products, a different approach for handling the performance issues of brute-force analysis is needed.

As already explained in this work, variability-aware analyses provide a promising approach to address these performance issues. Developing a variability-aware interpreter, which has been done in this work, is one approach among others for product-line testing that is based on the principles of variability-aware analysis.
Parts of this thesis have already been published in a paper at the 4th International Workshop on Feature-Oriented Software Development [Kästner et al., 2012]. In this paper, the white-box approach of developing a variability-aware interpreter from scratch is compared to using an existing model checker as a black-box approach for product-line testing. When using the model checker, variability has been encoded with normal control-flow mechanisms, e.g. if statements. Kästner et al. also introduce a "gray-box" strategy, which adds an extension to the standard model checker to improve its handling of variability, e.g. joining paths after variability-related splits. Further references to related work concerning product-line testing can also be taken from the FOSD paper.
Variability-aware analysis. The variability-aware interpreter introduced in this work is an extension of the variability-aware analysis strategy, implemented to support interpreting source code. During its development we used parts of other approaches that have presented strategies related to variability-aware analysis, for example the TypeChef Parser [Kästner et al., 2011b], which provides the ability to parse variability-enriched code.

Other approaches related to variability-aware analysis concern type checking [Kästner et al., 2011a], static analysis of product lines [Kreutzer, 2012] and model checking [Lauenroth et al., 2009]. An overview of existing product-line analysis strategies is also available [Thüm et al., 2012]; there, variability-aware analyses are referred to as family-based analyses.

Symbolic execution. Another approach related to this work is symbolic execution


[Cadar et al., 2011]. This technique uses symbolic values as input when executing pro-
grams and returns functions that describe the output of the program in relation to its
symbolic inputs. When conditional statements occur, the values of program variables
are maintained using a path condition for every possible path.
The variability-aware interpreter uses symbolic execution as well as default concrete execution. Consider the following conditional statement: if (a == 0) {...}. When the evaluation of the condition yields a result that does not depend on a certain configuration, concrete execution is used, as the interpreter follows only one path, namely the one given by the actual result of evaluating the condition. However, if evaluating the condition does not provide the same result for all configurations, both branches are executed in the variability context that describes in which configurations the corresponding branch is entered, thus performing symbolic execution. Also, the variability context in which each branch is executed can be compared to the path conditions in the symbolic execution approach.

Chapter 7

Conclusion

In this work, variability-aware interpretation has been presented as an alternative to traditional approaches to testing software product lines. First, the problems of existing approaches have been described. Brute-force analysis tests all possible variants of a product line, thereby analyzing common code over and over again and running into performance problems for product lines with many features. Sampling strategies reduce the number of tests by selecting the tested variants according to a coverage criterion. While this mitigates the performance problems, sampling cannot provide an exhaustive test result and still operates on individual variants of a product line.
Therefore, the concept of variability-aware analysis has been described, along with how it addresses these issues. Variability-aware analysis takes variability into account when analyzing product lines; a single analysis run processes common code only once and yields a result for all configurations of a product line. Abstract syntax trees have been introduced as the structure on which variability-aware analysis is performed. It was further shown how variability can be injected into ASTs and how the TypeChef parser, an existing library for parsing variability-enriched code, is used to transform source code into its corresponding AST representation.
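The following small sketch recalls, in simplified form, how such variability can be embedded in an AST. It is illustrative only and does not reproduce the TypeChef data structures; in particular, the feature expression guarding a node is abbreviated to a single feature name.

    object VariableAstSketch {
      sealed trait Expr
      case class Lit(n: Int)                                      extends Expr
      case class Var(name: String)                                extends Expr
      case class Add(l: Expr, r: Expr)                            extends Expr
      // Alternative subtrees: left if the feature is enabled, right otherwise.
      case class Choice(feature: String, left: Expr, right: Expr) extends Expr

      sealed trait Stmt
      case class Assign(name: String, value: Expr)                extends Stmt
      case class Block(stmts: List[Stmt])                         extends Stmt
      // Optional statement: present only in variants where the feature is enabled.
      case class Opt(feature: String, stmt: Stmt)                 extends Stmt

      // "#ifdef A  x = 1  #else  x = 2  #endif" could be represented as:
      val example: Stmt = Assign("x", Choice("A", Lit(1), Lit(2)))
    }

A variability-aware analysis traverses such a tree only once, restricting its variability context at Choice and Opt nodes instead of generating and analyzing one plain tree per variant.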
As the main contribution of this work, a prototype of a variability-aware interpreter has been presented that applies the concept of variability-aware analysis to interpreting source code and is able to execute test cases for all configurations. The interpreter has been described both conceptually and in terms of its implementation. The benefits of variability-aware analysis carry over to the interpreter: for example, it executes programs in a single variability context for as long as possible, so that common code is executed only once. The interpreter accepts a simple While language, and it has been shown how variability is handled in each of its language constructs.
To measure the performance of the variability-aware interpreter, a method for comparing the variability-aware approach with the brute-force approach has been presented. To this end, a plain interpreter was introduced that ignores all variability and represents the traditional way of interpreting source code. Three benchmarks have been conducted: two demonstrate the potential of the variability-aware approach in favorable cases, and a third runs over a set of randomly generated product lines to provide a comparison across a variety of subjects.
The benchmark results have shown that the variability-aware interpreter outperforms the brute-force approach overall, especially when late splitting and early joining can be exploited. The variability-aware interpreter's performance is limited when interpreting very small product lines or, more generally, whenever the effort of interpreting a code fragment is lower than the effort of checking the variability context of the execution.
These results demonstrate the viability of variability-aware interpretation for the While language used here. However, to become an established approach for software product-line testing, support for object-oriented programming is essential. The variability-aware interpreter has been developed as a white-box approach from scratch; its implementation is extensible and allows further language constructs to be added. It has also been shown that first steps toward supporting object orientation have already been taken in an experimental branch.

All in all, variability-aware interpretation offers a promising alternative to traditional software product-line testing approaches and is worth further research.

Bibliography

The AspectJ project. http://www.eclipse.org/aspectj/. Accessed: 06.10.2012.


Don Batory, Jacob Neal Sarvela, and Axel Rauschmayer. Scaling step-wise refinement.
In Proceedings of the 25th International Conference on Software Engineering, ICSE
’03, pages 187–197, Washington, DC, USA, 2003. IEEE Computer Society. ISBN
0-7695-1877-X.
Isis Cabral, Myra B. Cohen, and Gregg Rothermel. Improving the testing and testability
of software product lines. In Proceedings of the 14th International Conference on
Software Product Lines, SPLC’10, pages 241–255, Berlin, Heidelberg, 2010. Springer-
Verlag. ISBN 3-642-15578-2, 978-3-642-15578-9.
Cristian Cadar, Patrice Godefroid, Sarfraz Khurshid, Corina S. Păsăreanu, Koushik
Sen, Nikolai Tillmann, and Willem Visser. Symbolic execution for software testing in
practice: preliminary assessment. In Proceedings of the 33rd International Conference
on Software Engineering, ICSE ’11, pages 1066–1071, New York, NY, USA, 2011.
ACM. ISBN 978-1-4503-0445-0.
Gregor Kiczales, Erik Hilsdale, Jim Hugunin, Mik Kersten, Jeffrey Palm, and William G.
Griswold. An overview of AspectJ. In Proceedings of the 15th European Conference on
Object-Oriented Programming, ECOOP '01, pages 327–353, London, UK, 2001.
Springer-Verlag. ISBN 3-540-42206-4.
Markus Kreutzer. Statische Analyse von Produktlinien. Bachelor thesis, April 2012.
Christian Kästner. Software product lines. Lecture, http://www.uni-marburg.de/fb12/ps/teaching/ss11/spl, 2011. Accessed: 06.10.2012.
Christian Kästner, Sven Apel, Thomas Thüm, and Gunter Saake. Type checking
annotation-based product lines. ACM Transactions on Software Engineering and
Methodology (TOSEM), 2011a.
Christian Kästner, Paolo G. Giarrusso, Tillmann Rendel, Sebastian Erdweg, Klaus Os-
termann, and Thorsten Berger. Variability-aware parsing in the presence of lexical
macros and conditional compilation. In Proceedings of the 26th Annual ACM SIG-
PLAN Conference on Object-Oriented Programming, Systems, Languages, and Appli-
cations (OOPSLA), pages 805–824, New York, NY, USA, October 2011b. ACM Press.
ISBN 978-1-4503-0940-0.
Christian Kästner, Alexander von Rhein, Sebastian Erdweg, Jonas Pusch, Sven Apel,
Tillmann Rendel, and Klaus Ostermann. Toward variability-aware testing. In Pro-
ceedings of the 4th International Workshop on Feature-Oriented Software Development,
FOSD ’12, pages 1–8, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1309-4.

Martin Kuhlemann, Sven Apel, and Thomas Leich. Streamlining feature-oriented de-
signs. In Proceedings of the 6th International Conference on Software Composition,
SC’07, pages 168–175, Berlin, Heidelberg, 2007. Springer-Verlag. ISBN 3-540-77350-9,
978-3-540-77350-4.

Kim Lauenroth, Klaus Pohl, and Simon Toehning. Model checking of domain artifacts
in product line engineering. In Proceedings of the 2009 IEEE/ACM International
Conference on Automated Software Engineering, ASE ’09, pages 269–280, Washington,
DC, USA, 2009. IEEE Computer Society. ISBN 978-0-7695-3891-4.

Jörg Liebig, Sven Apel, Christian Lengauer, Christian Kästner, and Michael Schulze.
An analysis of the variability in forty preprocessor-based software product lines. In
Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering
- Volume 1, ICSE ’10, pages 105–114, New York, NY, USA, 2010. ACM. ISBN 978-1-
60558-719-6.

Rickard Nilsson. ScalaCheck GitHub repository. http://github.com/rickynils/scalacheck. Accessed: 06.10.2012.

Michael Olan. Unit testing: test early, test often. J. Comput. Sci. Coll., 19(2):319–328,
December 2003. ISSN 1937-4771.

Mark Staples and Derrick Hill. Experiences adopting software product line development
without a product line architecture. In Proceedings of the 11th Asia-Pacific Software
Engineering Conference, APSEC ’04, pages 176–183, Washington, DC, USA, 2004.
IEEE Computer Society. ISBN 0-7695-2245-9.

Thomas Thüm, Sven Apel, Christian Kästner, Martin Kuhlemann, Ina Schaefer, and
Gunter Saake. Analysis strategies for software product lines. Technical Report FIN-
004-2012, School of Computer Science, University of Magdeburg, April 2012.

Selbstständigkeitserklärung (Declaration of Authorship)

I hereby declare that I have written this bachelor thesis independently and have used no sources or aids other than those stated. Passages taken directly or indirectly from other sources have been marked as such.

This thesis has not been submitted in the same or a similar form to any other examination board, nor has it been published.

Marburg, October 11, 2012


Jonas Pusch
