A Short Tutorial On Feature-Oriented Programming
1 Introduction
Fig. 2. Preprocessor directives in the code of Femto OS: Black lines represent preprocessor directives such as #ifdef, white lines represent C code; comment lines are not shown [49].
than the estimated number of atoms in the universe). Instead of a single product,
product-line developers implement millions of variants in parallel. To support
them in dealing with this complexity and to prevent or detect errors (even those
that occur only in one variant with a specific feature combination, out of millions),
many researchers have proposed means for variability-aware analysis that lifts
existing analyses to the product-line world. So far, variability-aware analysis
has been explored, for example, for type checking, parsing, model checking, and
verification. Instead of analyzing each of millions of variants in a brute-force
fashion, variability-aware analysis seeks mechanisms to analyze the entire product
line. We introduce the idea behind variability-aware analysis and illustrate it
with the example of type checking, both for annotations and composition.
This tutorial gives a gentle introduction to FOSD. It is structured as follows:
First, we introduce product lines, such as feature models and the process of
domain engineering. Second, we exemplify feature-oriented programming with
FeatureHouse to separate the implementation of features into distinct modules.
Third, we introduce the idea of virtual separation of concerns, an approach that,
4 Christian Kästner and Sven Apel
[Figure: overview of domain engineering and application engineering — domain knowledge is mapped (incl. scoping and variability modeling) to common implementation artifacts (models, source code, ...); new requirements and selected features enter application engineering.]
practice, the scope is often iteratively refined; domain engineering and application
engineering are rarely strictly sequential and separated steps. For example, it is
common not to implement all features upfront, but incrementally, when needed.
Furthermore, requirements identified in domain engineering may be incomplete,
so new requirements arise in application engineering, which developers must
either feed back into the domain-engineering process or address with custom
development during the application engineering of a specific variant [30].
Tooling. There are many languages and tools to manage feature models or draw
feature diagrams, ranging from dozens of academic prototypes to fully fledged
commercial systems such as Gears (http://www.biglever.com/solution/product.html) and pure::variants (http://www.pure-systems.com; a limited community edition is available free of charge, and the authors are open for research collaborations). For a research setting,
Feature-Oriented Software Development 9
3 Feature-oriented programming
The key idea of feature-oriented programming is to decompose a system’s design
and code along the features it provides [16, 77]. Feature-oriented programming
follows a disciplined language-oriented approach, based on feature composition.
literals, and Add for representing addition. Each class defines a single operation
toString for pretty printing. The collaboration Eval adds the new operation eval,
which evaluates an expression. Evaluation is a crosscutting concern because eval
must be defined by adding a method to each of the three classes. A collaboration
bundles these changes.
Fig. 6. Containment hierarchy (left) and feature model (right) of the expression-
evaluator example.
3.3 Jak
Feature Expr
abstract class Expr {
  abstract String toString();
}
class Val extends Expr {
  int val;
  Val(int n) { val = n; }
  String toString() { return String.valueOf(val); }
}
class Add extends Expr {
  Expr a; Expr b;
  Add(Expr e1, Expr e2) { a = e1; b = e2; }
  String toString() { return a.toString() + "+" + b.toString(); }
}
Derivative Mult#Eval
refines class Mult {
  int eval() { return a.eval() * b.eval(); }
}
The derivative ‘Mult#Eval’ is present when both features Mult and Eval are
present.
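For comparison, composing feature Expr with feature Eval (restricted to Val and Add) yields a program that corresponds to the following plain Java; the eval bodies are our own reconstruction from the description of the Eval collaboration above:

```java
// Sketch of the composed variant Expr • Eval: each class now carries both
// the base operation toString and the refinement-introduced operation eval.
abstract class Expr {
    public abstract String toString();
    abstract int eval();
}

class Val extends Expr {
    int val;
    Val(int n) { val = n; }
    public String toString() { return String.valueOf(val); }
    int eval() { return val; }                  // assumption: a literal evaluates to itself
}

class Add extends Expr {
    Expr a, b;
    Add(Expr e1, Expr e2) { a = e1; b = e2; }
    public String toString() { return a.toString() + "+" + b.toString(); }
    int eval() { return a.eval() + b.eval(); }  // assumption: evaluate operands, then add
}

public class ComposedDemo {
    public static void main(String[] args) {
        Expr e = new Add(new Val(1), new Val(2));
        System.out.println(e + " = " + e.eval());  // prints "1+2 = 3"
    }
}
```

Note that a variant without Eval simply lacks the eval methods; the composer, not the developer, assembles the class from the two collaborations.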
3.4 AHEAD
AHEAD is an architectural model of feature-oriented programming [16]. With
AHEAD, each feature is represented by a containment hierarchy, which is a directory that maintains a substructure organizing the feature's artifacts (cf. Fig. 6).
Composing features means composing containment hierarchies and, to this end,
composing corresponding artifacts recursively by name and type (see Fig. 10
for an example), much like the mechanisms of hierarchy combination [70, 89],
mixin composition [20, 24, 37, 38, 85], and superimposition [21, 22]. In contrast to
these earlier approaches, for each artifact type, a different implementation of the
composition operator ‘•’ has to be provided in AHEAD (i.e., different tools that
perform the composition, much like Jak for Java artifacts). The background is
that a complete software system does not just involve Java code. It also involves
many non-code artifacts. For example, the simple expression evaluator of Figure 7 may be paired with a grammar specification, providing concrete syntax
for expressions, and documentation in XHTML. For grammar specifications and
XML based languages, the AHEAD tool suite has dedicated composition tools.
Feature Expr
Expr: Val | Expr Oper Expr;
Oper: '+';
Val: INTEGER;
Fig. 8. A Bali grammar with separate features for addition and multiplication.
Bali is similar to Jak in its use of the keyword Super: the expression Super.Oper refers to the original definition of Oper.
Xak. Xak is a language and tool for composing various kinds of XML docu-
ments [2]. It enhances XML by a module structure useful for refinement. This
way, a broad spectrum of software artifacts can be refined à la Jak, (e.g., UML
diagrams, build scripts, service interfaces, server pages, or XHTML).
Figure 9 depicts an XHTML document that contains documentation for our
expression evaluator. The base documentation file describes addition only, but
we refine it to add a description of evaluation and multiplication as well. The
tag xak:module labels a particular XML element with a name that allows the
element to be refined by subsequent features. The tag xak:extends overrides an
element that has been named previously, and the tag xak:super refers to the
original definition of the named element, just like the keyword Super in Jak and
Bali.
AHEAD tool suite. Jak, Xak, and Bali are each designed to work with a
particular kind of software artifact. The AHEAD tool suite brings these separate
Feature Expr
<html xmlns:xak="http://www.onekin.org/xak" xak:artifact="Expr" xak:type="xhtml">
  <head><title>A Simple Expression Evaluator</title></head>
  <body bgcolor="white">
    <h1 xak:module="Contents">A Simple Expression Evaluator</h1>
    <h2>Supported Operations</h2>
    <ul xak:module="Operations">
      <li>Addition of integers</li>
      <!-- a description of how integers are added -->
    </ul>
  </body>
</html>
Fig. 9. A Xak/XHTML document with separate features for addition, evaluation, and
multiplication.
tools together into a system that can handle many different kinds of software
artifacts.
In AHEAD, a piece of software is represented as a directory of files. Composing
two directories together will merge subdirectories and files with the same name.
AHEAD will select different composition tools for different kinds of files. Merging
Java files will invoke Jak to refine the classes, whereas merging XML files will
invoke Xak to combine the XML documents, and so on, as illustrated in Figure 10.
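The directory-merging step can be sketched as follows; the hierarchy representation and the dispatch by file extension are our own simplification of what the AHEAD composer does (real AHEAD invokes external tools such as Jak and Xak here):

```java
import java.util.HashMap;
import java.util.Map;

public class HierarchyComposer {
    // A containment hierarchy maps a name to either a nested hierarchy
    // (a subdirectory, represented as a Map) or an artifact's content (a String).
    @SuppressWarnings("unchecked")
    static Map<String, Object> compose(Map<String, Object> base, Map<String, Object> refinement) {
        Map<String, Object> result = new HashMap<>(base);
        refinement.forEach((name, artifact) -> result.merge(name, artifact, (b, r) -> {
            if (b instanceof Map && r instanceof Map)   // subdirectories: merge recursively
                return compose((Map<String, Object>) b, (Map<String, Object>) r);
            return composeArtifacts(name, (String) b, (String) r);  // same name, same type
        }));
        return result;
    }

    // Dispatch to an artifact-specific composition operator by file type
    // (crude stand-ins for Jak and Xak; the point is the per-type dispatch).
    static String composeArtifacts(String name, String base, String refinement) {
        if (name.endsWith(".jak")) return base + "\n" + refinement;
        if (name.endsWith(".xak")) return base + "\n<!-- " + refinement + " -->";
        return refinement;                              // unknown type: refinement wins
    }
}
```

Entries that occur in only one hierarchy are copied unchanged; only name collisions trigger the artifact-specific composers.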
3.5 FeatureHouse
Recently, following the philosophy of AHEAD, the FeatureHouse tool suite has been developed, which allows programmers to rapidly enhance given languages with support for feature-oriented programming (e.g., C#, C, JavaCC, Haskell, Alloy, and UML [7]).
FeatureHouse is a framework for software composition supported by a cor-
responding tool chain. It provides facilities for feature composition based on
a language-independent model of software artifacts and an automatic plugin
mechanism for the integration of new artifact languages. FeatureHouse improves
Fig. 11. Superimposition of feature structure trees (excerpt of the expression example).
What code elements are represented as inner nodes and leaves? This depends
on the language and on the level of granularity at which software artifacts are to
be composed [50]. Different granularities are possible and might be desired in
different contexts. For Java, we could represent only packages and classes but not
methods or fields as FST nodes (a coarse granularity), or we could also represent
statements or expressions as FST nodes (a fine granularity). In any case, the
structural elements not represented in the FST are text content of terminal nodes
(e.g., the body of a method). In our experience, the granularity of Figure 11 is
usually sufficient for composition of Java artifacts.
Recently, several researchers have taken a different path to tackle more disciplined
product-line implementations. Instead of inventing new languages and tools that
support feature decomposition, they stay close to the concept of conditional
compilation with preprocessors, but improve it at a tooling level. The goal is
to keep the familiar and simple mechanisms of annotating code fragments in
a common implementation (e.g., as with the C preprocessor), but to emulate
modularity with tool support and to provide navigation facilities as well as error
diagnostics. We work around the limitations for which traditional preprocessors
are typically criticized.
usage, but preprocessors are a different story. Overall, the flexibility of lexical
preprocessors allows undisciplined use that is hard to understand, to debug, and
to analyze.
To overcome the above problems, we require a disciplined use of preprocessors. By disciplined use, we mean that annotations (in the simplest form, #ifdef flags)
must correspond to feature names in a feature model and that annotations align
with the syntactic structure of the underlying language [50, 54, 64]. For example,
annotating an entire statement or an entire function is considered disciplined; the
annotation aligns with the language constructs of the host language. In contrast,
we consider annotating an individual bracket or just the return type of a function
as undisciplined. In Figure 14, we illustrate several examples of disciplined and
undisciplined annotations from the code of the text editor vim. A restriction
to disciplined annotations enables easy parsing of the source code [17, 64, 66]
and hence makes the code available to automated analysis (including variability-
aware analysis, as discussed in Sec. 5). Code with disciplined annotations can be
represented in the choice calculus [33], which opens the door for formal reasoning
and for developing a mathematical theory of annotation-based FOSD. As a side
effect, it guarantees that all variants are syntactically correct [54].
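For illustration, consider an annotation on Java code in the comment-based style of the Antenna preprocessor (the class itself is our own example): annotating an entire member or an entire statement aligns with a syntactic unit and is disciplined, whereas annotating, say, only a method's return type would cut into the syntax tree and is undisciplined.

```java
public class Store {
    //#ifdef ACCESSCONTROL
    private boolean sealed = false;   // disciplined: the whole field declaration
    //#endif

    private Object value;

    Object read() {
        //#ifdef ACCESSCONTROL
        if (sealed)                    // disciplined: one complete statement
            throw new RuntimeException("Access denied!");
        //#endif
        return value;
    }
}
```

Because both annotated regions are complete syntactic units, removing them for a variant without ACCESSCONTROL still yields a syntactically correct class.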
There are different ways to enforce annotation discipline. For example, we can
introduce conditional compilation facilities into a programming language, instead
of using an external preprocessor, as done in D (http://www.digitalmars.com/d/) and rbFeatures [41]. Similarly, syntactic preprocessors allow only transformations based on the underlying structure [23, 66, 97]. Alternatively, we can check discipline after the fact by
running additional analysis tools (however, even though Linux has a script to
check preprocessor flags against a feature model, Tartler et al. report several
problems in Linux with incorrect config flags as the tool is apparently not
used [90]). Finally, in our tool CIDE, we map features to code fragments entirely
at the tool level, such that the tool allows only disciplined annotations; hence, a
developer is not able to make an undisciplined annotation in the first place [50].
Enforcing annotation discipline limits the expressive power of annotations
and may require somewhat higher effort from developers who need to rewrite
some code fragments. Nevertheless, experience has shown that the restriction to disciplined annotations is not a serious limitation in practice [17, 54, 64, 96].
Developers can usually rewrite undisciplined annotations locally into disciplined
ones – there is even initial research to automate this process [39,56]. Furthermore,
developers usually prefer disciplined annotations anyway (and sometimes, e.g., in
Linux, have corresponding guidelines), because they understand the threats to
code comprehension from undisciplined usage. Liebig et al. have shown that 84 %
of all #ifdef directives in 40 substantial C programs are already in a disciplined
form [64]. So, we argue that enforcing discipline, at least for new projects, should
be a viable path that eliminates many problems of traditional preprocessors.
Disciplined usage of annotations opens annotation-based implementations
to many forms of analysis and tool support, some of which we describe in the
following. Many of them would not have been possible with traditional lexical
preprocessors.
4.3 Views
One of the key motivations of modularizing features (for example, with feature-
oriented programming) is that developers can find all code of a feature in one
spot and reason about it without being distracted by other concerns. Clearly, a
scattered, preprocessor-based implementation, as in Figure 2, does not support
this kind of lookup and reasoning, but the core question “what code belongs to
this feature” can still be answered by tool support in the form of views [44,58,84].
With relatively simple tool support, it is possible to create an (editable) view
on the source code by hiding all irrelevant code of other features. In the simplest
case, we hide files from the file browser in an IDE. Developers will only see files
that contain code of certain features selected interactively by the user. This way,
developers can quickly explore all code of a feature without global code search.
In addition, views can filter code within a file (technically, this can be imple-
mented like code folding in modern IDEs).10 In Figure 15, we show an example
10: Although editable views are harder to implement than read-only views, they are more useful since users do not have to go back to the original code to modify it. Implementations of editable views have been discussed intensively in work on database or model-roundtrip engineering. Furthermore, a simple but effective solution, which we apply in our tools, is to leave a marker indicating hidden code [50]. Thus, modifications occur before or after the marker and can be unambiguously propagated to the original location.
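A view with such markers can be sketched as a simple filter over annotated code lines; the Line representation and the marker text are our own choices (tools like CIDE track annotations inside the IDE instead of in the text):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class FeatureViews {
    // A code line together with the features annotating it (empty set: shared code).
    record Line(String text, Set<String> features) {}

    // Hide all lines that belong exclusively to unselected features,
    // leaving a single marker per hidden region.
    static List<String> view(List<Line> code, Set<String> selected) {
        List<String> result = new ArrayList<>();
        boolean hiding = false;
        for (Line line : code) {
            boolean visible = line.features().isEmpty()
                || !Collections.disjoint(line.features(), selected);
            if (visible) {
                result.add(line.text());
                hiding = false;
            } else if (!hiding) {
                result.add("// [hidden code]");   // marker indicating hidden code
                hiding = true;
            }
        }
        return result;
    }
}
```

Edits made before or after a marker can be mapped back to unambiguous positions in the original file, which is what makes such views editable.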
Traditional preprocessors have a reputation for obfuscating source code such that
the resulting code is difficult to read and maintain. The reason is that preprocessor
directives and statements of the host language are intermixed. When reading
source code, many #ifdef and #endif directives distract from the actual code
and can destroy the code layout (with cpp, every directive must be placed on its
own line). There are cases in which preprocessor directives entirely obfuscate the source code, as illustrated in Figure 16 and in our previous FemtoOS example
in Figure 2. Furthermore, nested preprocessor directives and multiple directives
belonging to different features as in Figure 1 are other typical causes of obfuscated
code.
While language-based mechanisms such as feature-oriented programming avoid
this obfuscation by separating feature code, researchers have explored several
ways to improve the representation in the realm of preprocessors: First, textual
annotations with a less verbose syntax that can be used within a single line could
help, and can be used with many tools. Second, views can help programmers to
focus on the relevant code, as discussed above. Third, visual means can be used to differentiate annotations from source code: Just as some IDEs for PHP use different font styles or background colors to emphasize the difference between HTML and PHP in a single file, different graphical means can be used to distinguish
11: In the example in Figure 16, preprocessor directives are used for Java code at a fine granularity [50], annotating not only statements but also parameters and parts of expressions. We need to add eight additional lines just for preprocessor directives. Together with additional necessary line breaks, we need 21 instead of 9 lines for this code fragment.
4.5 Summary
There are many directions from which we can improve annotation-based imple-
mentations without replacing them with alternative implementation approaches,
such as feature-oriented programming. Disciplined annotations remove many
low-level problems and open the implementation for further analysis; views emu-
late modularity by providing a virtual separation of concerns; and visualizations
reduce the code cluttering. At the same time, we keep the flexibility and simplicity
of preprocessors: Developers still just mark and optionally remove code fragments
from a common implementation.
Together, these improvements can turn traditional preprocessors into a viable
alternative to composition-based approaches, such as feature-oriented programming. Still, there are trade-offs: For example, virtual separation does not support
true modularity and corresponding benefits such as separate compilation, whereas
compositional approaches have problems at a fine granularity. Even combining
the two approaches may yield additional benefits. We have explored these differ-
ences and synergies elsewhere [47, 48]. Recently, we have also explored automated
Tooling. Basic preprocessors are widely available for most languages. For Java, Antenna is a good choice, for which tool integration in Eclipse and NetBeans is also available. Most advanced concepts discussed here have been implemented in our tool CIDE as an Eclipse plugin. CIDE uses the feature-model editor and reasoning engine from FeatureIDE. CIDE is open source and comes with a number of examples and a video tutorial. Visualizations have been explored further in View Infinity and FeatureCommander, the latter of which comes with Xenomai (a real-time extension for Linux with 700 features) as an example. For graphical models, FeatureMapper provides similar functionality.
5 Variability-aware analysis
The analysis of product lines is difficult. The exponential explosion (up to 2^n variants for n features) makes a brute-force approach infeasible. At the same
time, checking only sampled variants or variants currently shipped to customers
leads to the effect that errors can lurk in the system for a long time. Errors
are detected late, only when a specific feature combination is requested for the
first time (when the problem is more expensive to find and fix). While this may work for in-house development with only a few products per year (e.g., software bundled with a hardware product line), checking variants in isolation obviously does not scale, especially in systems in which users can freely select features (e.g., Linux).
Variability-aware analysis is the idea to lift an analysis mechanism for a single
system to the product-line world. Variability-aware analysis extends traditional
analysis by reasoning about variability. Hence, instead of checking variants, vari-
ability is checked locally where it occurs inside the product-line implementation
(without variant generation). Variability-aware analysis has been proposed for
many different kinds of analysis, including type checking [5, 53, 92], model check-
ing [12, 27, 60, 76], theorem proving [95], and parsing [56]; other kinds of analyses
can probably be lifted similarly. There are very different strategies, but the key
idea is usually similar. We will illustrate variability-aware analysis with type
checking, first for annotation-based implementations, then for composition-based
ones. Subsequently, we survey different general strategies.
1  #include <stdio.h>
2
3  #ifdef WORLD
4  char *msg = "Hello World\n";
5  #endif
6  #ifdef BYE
7  char *msg = "Bye bye!\n";
8  #endif
9
10 main() {
11 #if defined(SLOW) && defined(WORLD)
12   sleep(10);
13 #endif
14
15   println(msg);
16 }
From this code, we can generate eight different variants (with any combination of WORLD, BYE, and SLOW).
Quite obviously, some of these programs are incorrect: Selecting neither WORLD
nor BYE leads to a dangling variable access in the println parameter (msg has not
been declared); selecting both WORLD and BYE leads to a variable declared
twice.
To detect these errors with a brute-force approach, we would simply generate
and type check all eight variants individually. While brute force seems acceptable
in this example, it clearly does not scale for implementations with many features.
Instead, variability-aware type checking uses a lifted type system that takes
variability into account.
As a first step, we need to reason about the conditions under which certain code fragments are included. Czarnecki and Pietroszek coined the term presence condition for this purpose: a propositional formula describes the condition under which a code fragment is included (a code line is included iff its presence condition evaluates to true) [31]. In our example, the formulas are trivial:
WORLD for Line 4, BYE for Line 7, SLOW ∧ WORLD for Line 12, and true for
all other lines. With more complex #ifdef conditions and nesting, the formulas
become more complex as described in detail elsewhere [83].
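Mechanically, the presence condition of a line is the conjunction of the conditions of all #ifdef blocks enclosing it; a minimal sketch (parsing of the directives is elided, and the class and method names are our own):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PresenceCondition {
    // Stack of conditions of the currently open #ifdef blocks.
    private final Deque<String> open = new ArrayDeque<>();

    void enterIfdef(String condition) { open.push(condition); }   // on #ifdef / #if
    void leaveIfdef() { open.pop(); }                             // on #endif

    // Presence condition of the current line: conjunction of all enclosing
    // conditions; 'true' for code outside any #ifdef.
    String current() {
        return open.isEmpty() ? "true" : String.join(" && ", open);
    }
}
```

For the Hello World example, tracking the directives this way yields WORLD for Line 4, BYE for Line 7, and the conjunction SLOW && WORLD for Line 12.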
Now, we can formulate type rules based on presence conditions. For example,
whenever we find an access to a local variable, we need to make sure that we
can reach at least one declaration. In our example, we require that the presence
condition of accessing msg (i.e., true) implies the presence condition of either
declaration of msg (i.e., WORLD and BYE): true ⇒ (WORLD ∨ BYE). Since this
formula is not a tautology, we detect that a variant selecting neither feature is not
type correct. Similar reachability conditions for function calls are straightforward
and uninteresting, because the target declaration in a header file has presence
condition true. As an additional check, we require that multiple definitions
with the same name must be mutually exclusive: ¬(WORLD ∧ BYE). This check
reports an error for variants with both features. If the product line has a feature
model describing the valid variants, we are only interested in errors in valid
[Figure: the Hello World code from above, annotated with the conditions checked by a variability-aware type system: fm ⇒ ¬(WORLD ∧ BYE) for the two declarations of msg; fm ⇒ (SLOW ∧ WORLD ⇒ true) for the call to sleep in Line 12; fm ⇒ (true ⇒ (WORLD ∨ BYE)) for the access to msg in Line 15; and fm ⇒ (true ⇒ true) for the call to println.]
Abstracting from the example, we can define generic reachability and unique-
ness conditions. A reachability condition between a caller and multiple targets
is:
fm ⇒ (pc(caller) ⇒ ∨_{t ∈ targets} pc(t))
Even for complex presence conditions and feature models, we can check whether these constraints hold efficiently with SAT solvers; Thaker et al. provide a good description of how to encode and implement this [92]. (Other logics and other solvers are possible, but SAT solvers seem to provide a sweet spot between performance and expressiveness [67].)

So, how does variability-aware type checking improve over the brute-force approach? Instead of just checking reachability and unique definitions in a single variant, we formulate conditions over the space of all variants. The important benefit of this approach is that we check variability locally, where it occurs. In our example, we do not need to check the combinations of SLOW and BYE, which are simply not relevant for typing. Technically, variability-aware
type checking requires lookup functions to return all possible targets and their
presence conditions. Furthermore, we might need to check alternative types of
a variable. Still, in large systems, we do not check the surface complexity of 2^n variants, but analyze the source code more closely to find the essential complexity,
where variability actually matters. We cannot always avoid exponential blowup,
but practical source code is usually well behaved and has comparably little
local variability. Also, caching of SAT-solver queries is a viable optimization
lever. Furthermore, the reduction to SAT problems enables efficient reasoning in
practice, even in the presence of complex presence conditions and large feature
models [53, 67, 92].
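The reachability and uniqueness checks from the example can be reproduced by testing the formulas under every selection of the involved features; enumerating assignments stands in for the SAT solver here, and the helper names are our own:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class TypeCheckSketch {
    static boolean implies(boolean p, boolean q) { return !p || q; }

    // Does the formula hold under every selection of the given features?
    // (Brute force over the few features occurring in the formula; real tools
    // hand the formula's negation to a SAT solver instead.)
    static boolean isTautology(List<String> features, Predicate<Map<String, Boolean>> formula) {
        int n = features.size();
        for (int bits = 0; bits < (1 << n); bits++) {
            Map<String, Boolean> sel = new HashMap<>();
            for (int i = 0; i < n; i++)
                sel.put(features.get(i), (bits & (1 << i)) != 0);
            if (!formula.test(sel)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> fs = List.of("WORLD", "BYE");
        // Reachability of msg: true => (WORLD v BYE) -- fails for the empty selection.
        System.out.println(isTautology(fs,
            s -> implies(true, s.get("WORLD") || s.get("BYE"))));  // prints false
        // Uniqueness of msg: !(WORLD & BYE) -- fails when both are selected.
        System.out.println(isTautology(fs,
            s -> !(s.get("WORLD") && s.get("BYE"))));              // prints false
    }
}
```

Only the two features occurring in each formula need to be enumerated, which is exactly the local treatment of variability described above.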
In prior work, we have described variability-aware type checking in more
detail and with more realistic examples; we have formalized the type system and proven it sound (when the type system judges a product line as well-typed, all variants are well-typed); and we have reported experience from practice [53].
The same concept of introducing variability into type checking can also be
applied to feature-oriented programming. To that end, we first need to define a
type system for our new language (as, for example, FFJ [6]) and then make it
variability-aware by introducing reachability checks (as, for example, FFJPL [5]).
Since the type-checking mechanisms are conceptually similar for annotation-based and composition-based product lines, we restrict our explanation to a simple example: an object store with two basic implementations (example from [93]), each of which can be extended with a feature AccessControl (Figure 20). Lookup of function calls works across feature boundaries, and checking presence conditions is reduced to checking relationships between features.
More interestingly, the separation of features into distinct modules allows
us to check some constraints within a feature. Whereas the previous approaches
assume a closed world in which all features are known, separation of features
encourages modular type checking in an open world. As illustrated in Figure 21,
we can perform checks regarding fragments that are local to the feature. At the
same time, we derive interfaces, which specify the constraints that have to be
checked against other features. To check constraints between features, we can use
brute force (check on composition) or just another variability-aware mechanism.
Modular type checking paves the way to true feature modularity, in which we distinguish between the public interface of a feature and private, hidden
implementations. Modular analysis of a feature reduces analysis effort, because
we need to check each feature’s internals only once and need to check only
interfaces against interfaces of other features (checking interfaces usually is much
faster than checking the entire implementation). Furthermore, we might be able
to establish guarantees about features, without knowing all other features (open-
world reasoning). For an instantiation of modular type checking of features, see
the work on gDeep [3] and delta-oriented programming [81]. Li et al. explored a
similar strategy for model checking [63].
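A minimal sketch of such a composition-time interface check follows; the record and method names are our own invention, not the gDeep or delta-oriented formalism:

```java
import java.util.Set;

public class InterfaceCheck {
    // A feature's interface: the members it provides and those it requires
    // from the features it is composed with.
    record FeatureInterface(String name, Set<String> provides, Set<String> requires) {}

    // Composition-time check against interfaces only; each feature's internals
    // were already checked once, modularly.
    static boolean composes(FeatureInterface base, FeatureInterface extension) {
        return base.provides().containsAll(extension.requires());
    }

    public static void main(String[] args) {
        var single = new FeatureInterface("SingleStore", Set.of("read", "set"), Set.of());
        var multi  = new FeatureInterface("MultiStore", Set.of("read", "readAll", "set"), Set.of());
        var access = new FeatureInterface("AccessControl", Set.of(), Set.of("read", "readAll", "set"));
        System.out.println(composes(multi, access));   // prints true
        System.out.println(composes(single, access));  // prints false: readAll is missing
    }
}
```

The failing check for SingleStore mirrors the constraint fm ⇒ (AccessControl ⇒ MultiStore) from the object-store example: AccessControl's requirement of readAll is satisfied only by MultiStore.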
Feature SingleStore
class Store {
  private Object value;
  Object read() { return value; }
  void set(Object nvalue) { value = nvalue; }
}

Feature MultiStore
class Store {
  private LinkedList values = new LinkedList();
  Object read() { return values.getFirst(); }
  Object[] readAll() { return values.toArray(); }
  void set(Object nvalue) { values.addFirst(nvalue); }
}

Feature AccessControl
refines class Store {
  private boolean sealed = false;
  Object read() {
    if (!sealed) { return Super().read(); }
    else { throw new RuntimeException("Access denied!"); }
  }
  Object[] readAll() {
    if (!sealed) { return Super().readAll(); }
    else { throw new RuntimeException("Access denied!"); }
  }
  void set(Object nvalue) {
    if (!sealed) { Super(Object).set(nvalue); }
    else { throw new RuntimeException("Access denied!"); }
  }
}

Conditions checked: fm ⇒ (AccessControl ⇒ SingleStore ∨ MultiStore) for the references to read and set; fm ⇒ (AccessControl ⇒ MultiStore) for the reference to readAll.
Fig. 20. Checking whether references to read and readAll are well-typed in all valid
products.
Feature SingleStore
class Store {
  private Object value;
  Object read() { return value; }
  void set(Object nvalue) { value = nvalue; }
}

Interface of SingleStore
provides Object read();
provides void set(Object);

Feature MultiStore
class Store {
  private LinkedList values = new LinkedList();
  Object read() { return values.getFirst(); }
  Object[] readAll() { return values.toArray(); }
  void set(Object nvalue) { values.addFirst(nvalue); }
}

Interface of MultiStore
Fig. 21. References to field sealed can be checked entirely within feature AccessControl (left); references to read and readAll cut across feature boundaries and are checked at composition time based on the features' interfaces (right).
Tooling. Most variability-aware analyses we are aware of are research prototypes; see the corresponding references for further information. Our environment for virtual separation of concerns, CIDE, contains a variability-aware type system that covers large parts of Java. The safegen tool implements part of a variability-aware type system for the feature-oriented language Jak and is available as part of the AHEAD tool suite. We are currently integrating such a type system into the Fuji compiler for feature-oriented programming in Java (http://fosd.net/fuji), and afterward into FeatureIDE, and we are developing a type system for C code with #ifdefs as part of the TypeChef project (https://github.com/ckaestne/TypeChef).

6 Open challenges

So far, we have illustrated different strategies to implement features in product lines. They all encourage disciplined implementations that alleviate many problems
7 Conclusion
With this tutorial, we have introduced FOSD. Beginning with basic concepts from
the field of software product line engineering, we have introduced two approaches
to FOSD: feature-oriented programming à la AHEAD and FeatureHouse and
virtual separation of concerns. Subsequently, we have introduced the subfield of variability-aware analysis, which highlights promising avenues for further work.
We have covered only the basic concepts and a few methods, tools, and techniques,
with a focus on techniques that can be readily explored. For further information,
we recommend a recent survey, which also covers related areas including feature interactions, feature design, optimization, and FOSD theories [4, 49].
References
1. B. Adams, B. Van Rompaey, C. Gibbs, and Y. Coady. Aspect mining in the presence
of the C preprocessor. In Proc. AOSD Workshop on Linking Aspect Technology and
Evolution (LATE), pages 1–6. ACM Press, 2008.
2. F. I. Anfurrutia, O. Diaz, and S. Trujillo. On refining XML artifacts. In Int’l
Conf. Web Engineering, volume 4607 of Lecture Notes in Computer Science, pages
473–478. Springer-Verlag, 2007.
3. S. Apel and D. Hutchins. A calculus for uniform feature composition. ACM Trans.
Program. Lang. Syst. (TOPLAS), 32(5):1–33, 2010.
4. S. Apel and C. Kästner. An overview of feature-oriented software development. J.
Object Technology (JOT), 8(5):49–84, 2009.
5. S. Apel, C. Kästner, A. Größlinger, and C. Lengauer. Type safety for feature-
oriented product lines. Automated Software Engineering, 17(3):251–300, 2010.
6. S. Apel, C. Kästner, and C. Lengauer. Feature Featherweight Java: A calculus
for feature-oriented programming and stepwise refinement. In Proc. Int’l Conf.
Generative Programming and Component Engineering (GPCE), pages 101–112.
ACM Press, 2008.
7. S. Apel, C. Kästner, and C. Lengauer. FeatureHouse: Language-independent,
automated software composition. In Proc. Int’l Conf. Software Engineering (ICSE),
pages 221–231. IEEE Computer Society, 2009.
8. S. Apel, S. Kolesnikov, J. Liebig, C. Kästner, M. Kuhlemann, and T. Leich. Access
control in feature-oriented programming. Science of Computer Programming (Special
Issue on Feature-Oriented Software Development), 77(3):174–187, Mar. 2012.
9. S. Apel, T. Leich, M. Rosenmüller, and G. Saake. FeatureC++: On the symbiosis of
feature-oriented and aspect-oriented programming. In Proc. Int’l Conf. Generative
Programming and Component Engineering (GPCE), volume 3676 of Lecture Notes
in Computer Science, pages 125–140. Springer-Verlag, 2005.
10. S. Apel, T. Leich, and G. Saake. Aspectual feature modules. IEEE Trans. Softw.
Eng. (TSE), 34(2):162–180, 2008.
11. S. Apel and C. Lengauer. Superimposition: A language-independent approach to
software composition. In Proc. ETAPS Int’l Symposium on Software Composition,
number 4954 in Lecture Notes in Computer Science, pages 20–35. Springer-Verlag,
2008.
12. S. Apel, H. Speidel, P. Wendler, A. von Rhein, and D. Beyer. Detection of feature
interactions using feature-aware verification. In Proc. Int’l Conf. Automated Software
Engineering (ASE), pages 372–375. IEEE Computer Society, 2011.
13. D. L. Atkins, T. Ball, T. L. Graves, and A. Mockus. Using version control data to
evaluate the impact of software tools: A case study of the Version Editor. IEEE
Trans. Softw. Eng. (TSE), 28(7):625–637, 2002.
14. L. Bass, P. Clements, and R. Kazman. Software Architecture in Practice. Addison-
Wesley, Boston, MA, 1998.