Software Architecture - 10th European Conference Proceedings
Software Architecture
10th European Conference, ECSA 2016
Copenhagen, Denmark, November 28 – December 2, 2016
Proceedings
Lecture Notes in Computer Science 9839
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7408
Bedir Tekinerdogan · Uwe Zdun
Editors

Bedir Tekinerdogan
Wageningen University
Wageningen, The Netherlands

Ali Babar
University of Adelaide
Adelaide, SA, Australia

Uwe Zdun
University of Vienna
Vienna, Austria
collocated events. We also thank the workshop organizers and tutorial presenters, who also made significant contributions to the success of ECSA.
Owing to unfortunate events, the conference had to be relocated from Istanbul, Turkey, to Copenhagen, Denmark. We are grateful to the local Organizing Committee at Kültür University in Istanbul for the initial organization. We would like to thank the management of the IT University of Copenhagen, Denmark, for smoothly taking over the local organization and for providing its facilities and professionally trained staff for the organization of ECSA 2016.
The ECSA 2016 submission and review process was extensively supported by the
EasyChair Conference Management System. We acknowledge the prompt and pro-
fessional support from Springer, who published these proceedings in printed and
electronic volumes as part of the Lecture Notes in Computer Science series.
General Chair
M. Ali Babar University of Adelaide, Australia and IT University
of Copenhagen, Denmark
Program Co-chairs
Bedir Tekinerdogan Wageningen University, The Netherlands
Uwe Zdun University of Vienna, Austria
Workshops Co-chairs
Rainer Weinreich Johannes Kepler University Linz, Austria
Rami Bahsoon The University of Birmingham, UK
Publicity Chair
Matthias Galster University of Canterbury, New Zealand
Web Masters
Lina Maria Garcés Rodriguez University of São Paulo, Brazil
Tiago Volpato University of São Paulo, Brazil
Steering Committee
Muhammad Ali Babar University of Adelaide, Australia
Paris Avgeriou University of Groningen, The Netherlands
Ivica Crnkovic Mälardalen University, Sweden
Carlos E. Cuesta Rey Juan Carlos University, Spain
Khalil Drira LAAS-CNRS – University of Toulouse, France
Patricia Lago VU University Amsterdam, The Netherlands
Tomi Männistö University of Helsinki, Finland
Raffaela Mirandola Politecnico di Milano, Italy
Flavio Oquendo IRISA – University of South Brittany, France (Chair)
Bedir Tekinerdogan Wageningen University, The Netherlands
Danny Weyns Linnaeus University, Sweden
Uwe Zdun University of Vienna, Austria
Program Committee
Bedir Tekinerdogan Wageningen University, The Netherlands
Muhammad Ali Babar IT University of Copenhagen, Denmark
Uwe Zdun University of Vienna, Austria
Antonia Bertolino ISTI-CNR, Italy
Mehmet Aksit University of Twente, The Netherlands
Eduardo Almeida CESAR, Brazil
Jesper Andersson Linnaeus University, Sweden
Paris Avgeriou University of Groningen, The Netherlands
Rami Bahsoon University of Copenhagen, Denmark
Thais Batista Federal University of Rio Grande do Norte, Brazil
Stefan Biffl TU Wien, Austria
Jan Bosch Chalmers and Gothenburg University, Sweden
Tomas Bures Charles University, Czech Republic
Rafael Capilla Universidad Rey Juan Carlos, Madrid
Roger Champagne École de technologie supérieure, Canada
Michel Chaudron Chalmers and Gothenburg University, Sweden
Ivica Crnkovic Chalmers and Gothenburg University, Sweden
Carlos E. Cuesta Rey Juan Carlos University, Spain
Software Architecture Challenges and Emerging Research in Software-Intensive Systems-of-Systems

Flavio Oquendo
1 Introduction
The complexity of software and the complexity of systems reliant on software have grown at a staggering rate. In particular, software-intensive systems have rapidly evolved from being stand-alone systems in the past, to being part of networked systems in the present, to increasingly becoming systems-of-systems in the coming future [18].
The notion of system and the related notion of software-intensive system are well
known and defined in the ISO/IEC/IEEE Standard 42010. A system is a combination of
components organized to accomplish a specific behavior for achieving a mission.
Hence, a system exists to fulfill a mission in an environment. A software-intensive
system is a system where software contributes essential influences to the design,
construction, deployment, and evolution of the system as a whole [17].
The notion of software-intensive system-of-systems is however relatively new,
being the result of the ubiquity of computation and pervasiveness of communication
networks.
A System-of-Systems (SoS, for short) is a combination of constituents, which are themselves systems, that forms a more complex system to fulfill a mission: the composition forms a larger system that performs a mission not performable by any of the constituent systems alone [23], i.e. it creates emergent behavior.
For intuitively distinguishing an SoS from a single system, it is worth recalling that every constituent system of an SoS fulfills its own mission in its own right, and continues to operate to fulfill that mission during its participation in the SoS as well as when disassembled from the encompassing SoS.
For instance, an airport, e.g. Paris-Charles-de-Gaulle, is an SoS, but an airplane alone, e.g. an Airbus A380, is not. Indeed, if an airplane is disassembled into components, no component is a system in itself. In the case of an airport, the constituent systems are independent systems that will continue to operate, e.g. the air traffic control and the airlines, even if the airport is disassembled into its constituents.
Operationally, an airport is an SoS spanning multiple organizations, categorized
into major facilities: (i) passenger, (ii) cargo, and (iii) aircraft departure, transfer and
arrival. Each facility is shared and operated by different organizations, including air
navigation services providers, ground handling, catering, airlines, various supporting
units and the airport operator itself. The airport facilities are geographically distributed,
managed by independent systems, and fall under multiple legal jurisdictions in regard
to occupational health and safety, customs, quarantine, and security. For the airport to
operate, these numerous constituent systems work together to create the emergent
behavior that fulfills the airport mission.
As a software-intensive SoS, an airport is composed of independent systems that
enable passengers, cargo, airplanes, information and services to be at the right place at
the right time via the seamless collaboration of these constituent systems, from
check-in, to security, to flight information displays, to baggage, to boarding, stream-
lining airport operations.
It is worth noting that the level of decentralization in the control of the constituent systems of an SoS varies, e.g. regarding airports, the level of subordination in a military airport and in a civil airport is very different. It is also worth noting that in some cases the SoS has a central management, as is the case for civil and military airports, and in others it does not, as is the case e.g. in a metroplex, i.e. the set of airports in close proximity sharing the airspace serving a city.
SoSs may be classified into four categories according to the levels of subordination and awareness of the constituent systems with respect to the SoS [8, 23]:
• Directed SoS: an SoS that is centrally managed and whose constituent systems have been specially developed or acquired to fit specific purposes in the SoS – the constituent systems maintain the ability to operate independently, but their actual operation is subordinated to the central SoS management (i.e. the management system of the coalition of constituent systems); for instance, a military airport.
• Acknowledged SoS: an SoS that is centrally managed and whose constituent systems operate under loose subordination – the constituent systems retain their independent ownership; for instance, a civil airport.
• Collaborative SoS: an SoS in which there is no central management and constituent
systems voluntarily agree to fulfill a central mission – the constituent systems
operate under the policies set by the SoS; for instance, a metroplex.
• Virtual SoS: an SoS in which there is no central management or centrally agreed
mission – the constituent systems operate under local, possibly shared, policies; for
instance, the airports of a continent such as Europe.
These different categories of SoSs bring the need to architect SoSs in which the local interactions of constituent systems influence the global desired behavior of the SoS, taking into account the levels of subordination and awareness of the constituent systems with respect to the SoS.
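To make the axes underlying this classification easier to compare at a glance (whether there is a central SoS management, whether there is a centrally agreed mission, and how constituent systems are subordinated to it), the following minimal Python sketch encodes the four categories; the attribute names and encoding are ours for illustration and are not part of the cited classification [8, 23].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SoSCategory:
    """Illustrative encoding of an SoS category; field names are ours, not from [8, 23]."""
    name: str
    central_management: bool   # is there a central SoS management?
    agreed_mission: bool       # is there a centrally agreed mission?
    subordination: str         # how constituent systems relate to the SoS mission
    example: str

SOS_CATEGORIES = [
    SoSCategory("Directed", True, True,
                "operation subordinated to the central SoS management", "military airport"),
    SoSCategory("Acknowledged", True, True,
                "loose subordination, independent ownership retained", "civil airport"),
    SoSCategory("Collaborative", False, True,
                "voluntary agreement to fulfill a central mission", "metroplex"),
    SoSCategory("Virtual", False, False,
                "local, possibly shared, policies only", "airports of a continent such as Europe"),
]
```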
Currently, the research on software-intensive SoSs is still in its infancy [14, 21]. In addition, SoSs are mostly developed on a case-by-case basis, addressing neither cross-cutting concerns nor common foundations across SoS application domains [7].
Actually, the relevance and timeliness of progressing the state of research on developing critical software-intensive SoSs from now on are highlighted in several roadmaps targeting the year 2020 and beyond [9, 11, 41]. The need for research on software-intensive SoSs has been addressed in different studies carried out at the initiative of the European Commission in the H2020 Program, as part of the European Digital Agenda [7].
More precisely, in 2014, two roadmaps for SoSs (supported by the European Commission) were issued by the CSAs ROAD2SoS (Development of strategic research and engineering roadmaps in Systems-of-Systems) [9] and T-Area-SoS (Transatlantic research and education agenda in Systems-of-Systems) [11]. In 2015, the CSA CPSoS [16] presented a research agenda for developing cyber-physical SoSs.
All these roadmaps show the importance of progressing from the current situation,
where software-intensive SoSs are basically developed in ad-hoc ways in specific
application sectors, to a scientific approach providing rigorous theories, languages,
tools, and methods for mastering the complexity of SoSs in general (transversally to
application domains).
These roadmaps highlight that now is the right time to initiate research efforts on
SoS to pave the way for developing critical software-intensive SoSs in particular
regarding architectural solutions for trustworthily harnessing emergent behaviors to
master the complexity of SoSs.
Overall, the long-term grand challenge raised by critical software-intensive SoSs
calls for a novel paradigm and novel scientific approaches for specifying, architecting,
analyzing, constructing, and evolving SoSs deployed in unpredictable open environ-
ments while assuring their continuous correctness.
In Europe, this effort intensified in 2010, when the European Commission launched a first Call for Research Projects addressing SoS as the main objective of study; in 2013 another Call for Projects again had SoS as an objective, and in 2016 the third was opened. The projects funded in the first European Call have now
ended: COMPASS (Comprehensive modelling for advanced Systems-of-Systems, from
Oct. 2010 to Sept. 2014) [3] and DANSE (Designing for adaptability and evolution in
System-of-Systems engineering, from Nov. 2010 to Oct. 2014) [4]. The projects of the
second Call started in 2014 [7]: AMADEOS (Architecture for multi-criticality agile
dependable evolutionary open System-of-Systems), DYMASOS (Dynamic manage-
ment of coupled Systems-of-Systems), and LOCAL4GLOBAL (System-of-Systems
that act locally for optimizing globally).
Regarding other parts of the world, in the USA, different research programs specifically target SoS, in particular at the Software Engineering Institute [42] and Sandia National Laboratories [41], among others. In these programs, it is interesting to pinpoint the different research actions that have evaluated current technologies developed for single systems in terms of their suitability and limitations for architecting and engineering SoSs. In addition, prospective studies have highlighted the overwhelming complexity of ultra-large-scale SoSs [12].
Note also that different industrial studies and studies from the industrial viewpoint
have highlighted the importance, relevance and timeliness of software-intensive SoSs
[10, 22].
Due to their inherently complex nature, architecting SoSs is a grand research challenge, in particular in the case of critical software-intensive SoSs.
Precisely, an SoS is defined as a system constituted of systems having the following
five intrinsic characteristics [23]:
• Operational independence: every constituent system of an SoS operates independently from the others to fulfill its own mission;
• Managerial independence: every constituent system of an SoS is managed independently, and may decide to evolve in ways that were not foreseen when the systems were originally combined;
• Geographical distribution: the constituent systems of an SoS are physically decoupled (in the sense that only information can be transmitted between constituent systems, neither mass nor energy);
• Emergent behavior: the SoS fulfills missions through behaviors that emerge from the interactions among the constituent systems and that do not reside in any constituent system alone;
• Evolutionary development: the SoS is never fully formed once and for all, but evolves over time as constituent systems and their interactions are added, removed, and modified.
Undoubtedly, the main difference between an SoS and a single system is the nature
of their constituent systems, specifically their level of independence, and the exhibition
of emergent behavior.
Complexity is thereby innate to SoSs as they inherently exhibit emergent behavior:
in SoSs, missions are achieved through emergent behavior drawn from the local
interaction among constituent systems. In fact, an SoS is conceived to create desired emergent behaviors for fulfilling specific missions and may, as a side effect, create undesirable behaviors possibly violating safety, which need to be avoided. A further
complicating factor is that these behaviors may be ephemeral because the systems
constituting the SoS evolve independently, which may impact their availability.
Additionally, the environment in which an SoS operates is generally known only
partially at design-time and almost always is too unpredictable to be summarized
within a fixed set of specifications (thereby there will inevitably be novel situations,
possibly violating safety, to deal with at run-time).
Overall, major research challenges raised by software-intensive SoSs are fundamentally architectural: they concern how to organize the local interactions among constituent systems to enable the emergence of SoS-wide behaviors and properties derived from local behaviors and properties (by acting only on their interactions, without being able to act on the constituent systems themselves), subject to evolutions that are not controlled by the SoS due to the independent nature of the constituents. Therefore, being able to describe SoS architectures is a grand research challenge.
Table 2. (continued)

• Note that the concept of abstract architecture is different from, and does not have the same purpose as, the concept of architectural style or pattern. Both style and pattern are codifications of design decisions used as architectural knowledge for designing abstract or concrete architectures. An abstract architecture is the expression of all possible valid concrete architectures in declarative terms. A concrete architecture is the actual architecture that operates at run-time

Run-time use of the architectural concepts

Concrete architecture defined by extension: Once an SoS is initiated, concrete systems complying with the specified system abstractions need to be identified to create concrete coalitions at run-time with the assistance of mediators
• Note that a concrete system may enter or leave the SoS at run-time by its own decision (the SoS has no control over concrete systems); mediators, oppositely, are dynamically created and evolve under the control of the SoS
Engineering Conference (SoSE 2016) which presents its formal definition and
operational semantics).
• A novel formal architectural language embodying the SoS architectural concepts of constituent system, mediator, and coalition: grounded on the π-calculus for SoS, we conceived a novel ADL based on the separation of concerns between architectural abstractions at design-time and architectural concretions at run-time (for details see [30] in the proceedings of the 2016 IEEE SoS Engineering Conference (SoSE 2016), which presents the concepts and notation of this novel ADL, named SosADL).
• A novel temporal logic for expressing correctness properties of highly dynamic software architectures (including SoS architectures) and verifying these properties with statistical model checking: we conceived a novel temporal logic, named DynBLTL, for supporting analysis of SoS architectures (for details on this temporal logic see [36] in the proceedings of the 2016 International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISoLA 2016)); in addition, we developed a novel statistical model checking method for verifying properties expressed in DynBLTL on architecture descriptions based on the π-calculus (for details see [2] in these proceedings of ECSA 2016); a generic sketch of the sampling idea behind statistical model checking is given after this list.
• A novel formalization for checking the architectural feasibility of SoS abstract
architecture descriptions and for creating concrete architectures from SoS abstract
architectures: it supports automated creation of concrete architectures from an
abstract architecture given selected concrete constituent systems as well as supports
the evolution of concrete architectures by automated constraint solving mechanisms
(for details see [15] in the proceedings of the 2016 IEEE SoS Engineering Con-
ference (SoSE 2016) which presents this novel formal system mechanizing the
solving of concurrent constraints of SosADL).
• A novel approach for modeling SoS missions in terms of goals relating them to
mediators and required SoS emergent behaviors (for details see [38] in the pro-
ceedings of the 2015 IEEE SoS Engineering Conference (SoSE 2015) which pre-
sents the SoS mission description notation and the supporting tool).
• The field validation of SosADL and its underlying π-calculus for SoS drew on a real pilot project and related case study of a Flood Monitoring and Emergency Response SoS, summarized in the next section (for details see [32] in the proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2016)).
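As a rough illustration of the estimation principle behind statistical model checking (not of the actual PLASMA/DynBLTL machinery), the following Python sketch estimates the probability that a property holds on a randomly simulated trace, using plain Monte Carlo sampling with the standard Chernoff-Hoeffding (Okamoto) bound to choose the sample size; `simulate_trace` and `check_property` are placeholders to be supplied by the user.

```python
import math
import random

def smc_estimate(simulate_trace, check_property, eps=0.05, delta=0.01, seed=0):
    """Monte Carlo sketch of statistical model checking.

    Estimates the probability that `check_property(trace)` holds for traces drawn
    from `simulate_trace(rng)`, within additive error `eps` at confidence 1 - delta.
    Illustrative only; real SMC tools add sequential tests and richer trace logics.
    """
    rng = random.Random(seed)
    # Chernoff-Hoeffding (Okamoto) bound on the number of independent samples.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(bool(check_property(simulate_trace(rng))) for _ in range(n))
    return hits / n, n

# Hypothetical usage: traces are finite runs of a simulated SoS architecture and the
# property checks, e.g., that every reconfiguration step preserves coalition connectivity.
# probability, samples = smc_estimate(my_simulator, my_property)
```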
Additionally, we have developed an SoS Architecture Development Environment
(SosADE) for supporting the architecture-centric formal evolutionary development of
SoSs using SosADL and associated analysis languages and tools. This toolset provides
a model-driven architecture development environment where the SosADL meta-model
is transformed to different analysis meta-models and converted to input languages of
analysis tools, e.g. Kodkod for concurrent constraint solving, UPPAAL for model
checking, DEVS for simulation, and PLASMA for statistical model checking.
Fig. 2. Monjolinho river crossing the city of Sao Carlos with deployed wireless river sensors
This Flood Monitoring and Emergency Response SoS has the five defining characteristics of an SoS. Let us now briefly present this in vivo field study in Table 3.
The aim of this field study was to assess the fitness for purpose and the usefulness of SosADL to support the architectural design of real-scale SoSs.
The result of the assessment based on this pilot project showed that SosADL met the requirements for describing SoS architectures. As expected, using a formal ADL compels the SoS architects to study different architectural alternatives and take key architectural decisions based on SoS architecture analyses.
Learning SosADL in its basic form was quite straightforward; however, using the advanced features of the language required interactions with the SosADL expert group. The SoS architecture editor and simulator were in practice the main tools used to learn and use SosADL, and the SoS architecture model finder and model checker were the key tools to show the added value of formally describing SoS architectures.
In fact, a key identified benefit of using SosADL was the ability, thanks to its formal foundation, to validate and verify the studied SoS architectures very early in the application lifecycle with respect to the SoS correctness properties, in particular taking into account emergent behavior in a critical setting such as that of flash floods.
Table 3. Field study of SosADL on a flood monitoring and emergency response SoS

Field study for architecting a WSN-based flood monitoring and emergency response SoS

Purpose: The aim of this field study of a Flood Monitoring and Emergency Response SoS was to assess the fitness for purpose and the usefulness of SosADL and its underlying formal foundation to support the architectural design of real-scale SoSs.

Stakeholders: The SoS stakeholder is the DAEE (Sao Paulo's Water and Electricity Department), a government organization of the State of Sao Paulo, Brazil, responsible for managing water resources, including flood monitoring of urban rivers. Stakeholders of the constituent systems are the councils of the different cities crossed by the Monjolinho river and the police and fire departments of the city of Sao Carlos, which own Unmanned Aerial Vehicles (UAVs) and have cars equipped with Vehicular Ad-hoc Networks (VANETs). The population, by downloading an app from the DAEE, is involved as the target of the alert actions; people may also register to get alert messages by SMS.

Mission: The mission of this SoS is to monitor potential flash floods and to handle related emergencies.

Emergent behaviors: In order to fulfil its mission, this monitoring SoS needs to create and maintain an emergent behavior where sensor nodes (each including a sensor mote and an analog depth sensor) and UAVs (each including communication devices) coordinate to enable effective monitoring of the river and, once a risk of flood is detected, to prepare the emergency response for warning vehicles with VANETs and drivers with smartphones approaching the flood area, as well as inhabitants who live in potential flooding zones. Resilience of the SoS, even in case of failure of sensors and UAVs, needs to be managed, as well as its operation in an energy-efficient way. The emergency response involves warning the police and fire departments as well.

SoS architecture: The architecture of this Flood Monitoring and Emergency Response SoS was described in SosADL as a Collaborative SoS having a self-organizing architecture based on mediators for connecting sensors and forming multihop ad-hoc networks for both flood monitoring and emergency response. The designed SoS architecture allows for continuous connections and reconfigurations around broken or blocked paths, supported by the SoS evolutionary architecture with possible participation of UAVs and VANETs (see [32] for details on the SoS architecture description).
The experimentation and the corresponding assessment have shown that SosADL
and its toolset, SosADE, are de facto suitable for formally describing and analyzing
real-scale SoS architectures.
1 We conducted automatic searches on the major publication databases related to the SoS domain (IEEE Xplore, ISI Web of Science, Science Direct, Scopus, SpringerLink, and ACM Digital Library), after having defined the SLR protocol (see [14] for details on the SLR).
This paper introduced the notion of software-intensive SoS, raised key software
architecture challenges in particular related to SoS architecture description and briefly
surveyed emerging research on ADLs for SoS addressing these challenges based on a
paradigm shift from single systems to systems-of-systems.
In contrast to single systems, SoSs exhibit emergent behavior. Hence, whereas the behavior of a single system can be understood as the sum of the behaviors of its components, in SoSs this reductionism fails: an SoS behaves in ways that cannot be predicted from analyzing exclusively its individual constituents. In addition, an SoS is characterized by evolutionary development, enabling it to maintain emergent behavior for sustaining SoS missions.
Software-intensive SoS has become a hotspot in the last 5 years, from both the
research and industry viewpoints. Indeed, various aspects of our lives and livelihoods
have progressively become overly dependent on some sort of software-intensive SoS.
While SoS is a field well established in Systems Engineering and SoS architecture has been studied for two decades, it is yet in its infancy in Software Engineering and particularly in Software Architecture. Only three years ago the first workshop on the architecture and engineering of software-intensive SoSs was launched: the first ACM SIGSOFT/SIGPLAN International Workshop on Software Engineering for Systems-of-Systems was organized with ECSA 2013 [34] and since 2015 has been organized with ACM/IEEE ICSE, being in 2016 in its fourth edition. The first conference track dedicated to software-intensive SoS, SiSoS, will be organized only in 2017, at ACM SAC.
References
1. Cavalcante, E., Batista, T.V., Oquendo, F.: Supporting dynamic software architectures: from
architectural description to implementation. In: Proceedings of the 12th Working IEEE/IFIP
Conference on Software Architecture (WICSA), Montreal, Canada, pp. 31–40, May 2015
2. Cavalcante, E., Quilbeuf, J., Traonouez, L.M., Oquendo, F., Batista, T., Legay, A.:
Statistical model checking of dynamic software architectures. In: Tekinerdogan, B., et al.
(eds.) ECSA 2016. LNCS, vol. 9839, pp. 185–200. Springer, Heidelberg (2016)
3. COMPASS: Comprehensive Modelling for Advanced Systems of Systems. http://www.compass-research.eu
4. DANSE: Designing for Adaptability and Evolution in System-of-Systems Engineering. http://www.danse-ip.eu
5. Lemos, R., et al.: Software engineering for self-adaptive systems: a second research roadmap. In: Lemos, R., Giese, H., Müller, H.A., Shaw, M. (eds.). LNCS, vol. 7475, pp. 1–32. Springer, Heidelberg (2013). doi:10.1007/978-3-642-35813-5_1
6. ERCIM: Special Theme: Trustworthy Systems-of-Systems, ERCIM News, vol. 102, July 2015. http://ercim-news.ercim.eu/en102/
7. European Commission (EC) - Horizon 2020 Framework Program: H2020 Digital Agenda on Systems-of-Systems. https://ec.europa.eu/digital-agenda/en/system-systems
8. Firesmith, D.: Profiling systems using the defining characteristics of systems of systems
(SoS), software engineering institute. SEI Technical report: CMU/SEI-2010-TN-001, 87 p.,
February 2010
9. FP7 CSA Road2SoS (Roadmaps to Systems-of-Systems Engineering) (2011–2013): Commonalities in SoS Applications Domains and Recommendations for Strategic Action. http://road2sos-project.eu/
29. Oquendo, F.: π-ADL: architecture description language based on the higher-order typed π-calculus for specifying dynamic and mobile software architectures. ACM SIGSOFT Softw. Eng. Notes 29(3), 1–14 (2004)
30. Oquendo, F.: Formally describing the software architecture of systems-of-systems with
SosADL. In: Proceedings of the 11th IEEE System-of-Systems Engineering Conference
(SoSE), June 2016
31. Oquendo, F.: π-calculus for SoS: a foundation for formally describing software-intensive systems-of-systems. In: Proceedings of the 11th IEEE System-of-Systems Engineering Conference (SoSE), June 2016
32. Oquendo, F.: Case study on formally describing the architecture of a software-intensive
system-of-systems with SosADL. In: Proceedings of 15th IEEE International Conference on
Systems, Man, and Cybernetics (SMC), October 2016
33. Oquendo, F., Warboys, B., Morrison, R., Dindeleux, R., Gallo, F., Garavel, H., Occhipinti, C.: ArchWare: architecting evolvable software. In: Oquendo, F., Warboys, B.C., Morrison, R. (eds.) EWSA 2004. LNCS, vol. 3047, pp. 257–271. Springer, Heidelberg (2004). doi:10.1007/978-3-540-24769-2_23
34. Oquendo, F., et al.: Proceedings of the 1st ACM International Workshop on Software
Engineering for Systems-of-Systems (SESoS), Montpellier, France, July 2013
35. Ozkaya, M., Kloukinas, C.: Are we there yet? Analyzing architecture description languages for formal analysis, usability, and realizability. In: Proceedings of the 39th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Santander, Spain, pp. 177–184, September 2013
36. Quilbeuf, J., Cavalcante, E., Traonouez, L.-M., Oquendo, F., Batista, T., Legay, A.: A logic
for the statistical model checking of dynamic software architectures. In: Margaria, T.,
Steffen, B. (eds.) ISoLA 2016. LNCS, vol. 9952, pp. 806–820. Springer, Heidelberg (2016).
doi:10.1007/978-3-319-47166-2_56
37. SAE Standard AS5506-2012: Architecture Analysis & Design Language (AADL), 398 p.,
September 2012
38. Silva, E., Batista, T., Oquendo, F.: A mission-oriented approach for designing
system-of-systems. In: Proceedings of the 10th IEEE System-of-Systems Engineering
Conference (SoSE), pp. 346–351, May 2015
39. SysML: Systems Modeling Language. http://www.omg.org/spec/SysML
40. UML: Unified Modeling Language. http://www.omg.org/spec/UML
41. US Sandia National Laboratories, Roadmap: Roadmap for the Complex Adaptive Systems-of-Systems (CASoS) Engineering Initiative. http://www.sandia.gov/
42. US Software Engineering Institute/Carnegie Mellon University: System-of-Systems Program. http://www.sei.cmu.edu/sos/
43. Wirsing, M., Hölzl, M.: Rigorous Software Engineering for Service-Oriented Systems, 748
p. Springer, Heidelberg (2015)
44. Wirsing, M., et al.: Software Engineering for Collective Autonomic Systems, 537
p. Springer, Heidelberg (2015)
Software Architecture Design Reasoning:
A Card Game to Help Novice Designers
1 Introduction
There has been some software design reasoning research, but there is no sim-
ple and comprehensive method that can be used in practice. In this paper, we
propose a card game for this purpose. The card game combines techniques in a
simple form, and is developed to help novice designers to consider certain rea-
soning steps during design. This card game has been tested in an experiment
involving students of a Software Architecture course. The aim of the experiment
is to assess any obvious differences between the control and test groups, and to establish whether the card game had a positive influence on the logical reasoning process using qualitative analysis.
The card game is based on the reasoning techniques described in Tang and
Lago [25]. The main reasoning techniques are the identification of assumptions,
risks and constraints. The students also need to carry out trade-offs and artic-
ulate the contexts, problems, and solutions. Novice designers can forget that
they need to reason about certain things. Razavian et al. [15] proposed to use
a reflection system to remind designers. In their experiment, reflective thinking
was applied using reasoning techniques to trigger student designers to use log-
ical reasoning. In our experiment, cards were used to remind novice designers
to use reasoning techniques. The choice for cards instead of a software tool was
made because many software tools, such as those used for Design Rationale,
have been developed but are not prevalently used in the design world. The most
common reason for this is that the adoption and use of these systems take too
much time and are too costly to be effectively used [24]. The cost-effectiveness of such a system is in fact the most important characteristic for consideration by software companies [12]. Cards do not cost much to produce or to use, and depending on the rules do not need extensive time to learn. Additionally, cards are not unfamiliar in software design, as several card games already exist and designers are familiar with their uses. For example, IDEO method cards are used for stimulating creativity [8], Smart Decisions cards are used for learning about architecture design [3], and planning poker is played during release planning to estimate the time needed to implement a requirement [5]. Our card game additionally includes several reflection periods during the experiment to encourage participants to explicitly reason about the design.
We compare novice designers equipped with reasoning techniques to the con-
trol groups using natural design thinking. The results show that the test groups
came up with many more design ideas than the control group.
In the following sections, more background information on design reasoning is given, together with reasoning techniques which stimulate logical reasoning
(Sect. 2). Next, we introduce the student experiment, how it has been performed,
and how validation of the results has been reached (Sect. 3). After this the results
of the experiment will be explained, ending with an analysis of the results and
further discussion on how the card game can be used for further experiments
(Sect. 4). Threats to validity are discussed in Sect. 5. Section 6 concludes the
paper.
2 Design Reasoning
Design reasoning depends on logical and rational thinking to support arguments and come to a decision. In previous research, it has been found that many designers do not reason systematically and satisfice results easily [29]. Designers
often use a naturalistic process when making decisions where experience and
intuition play a much larger role [31]. As such, people need to be triggered to
consider their decisions in a more rational manner. This can be done by using
reasoning techniques. Previous research has shown that supporting designers
with reasoning analysis techniques can positively influence design reasoning [15,
27,28]. There are other issues with design thinking, which can be categorized as:
cognitive bias, illogical reasoning, and low quality premises.
Cognitive bias occurs when judgments are distorted because the probability of
something occurring is not inferred correctly or there is an intuitive bias. This can
be seen with representativeness bias and availability bias, where the probability
of an event is mistaken because it either looks more typical, or representative,
or because it is more easily envisioned. An example is anchoring, where software
designers choose solutions for a design problem based on familiarity, even when
it is ill-suited to solve the problem [21,23].
Illogical reasoning occurs when the design reasoning process is not used and problems occur with identifying the relevant requirements. The basic premises and arguments being used in the design discussion are not based on facts.
Low quality premises for design argumentation can be caused by missing
assumptions, constraints or context. Premises of poor quality can be caused by
either an inaccurate personal belief or the premise being incomplete or missing.
Much of reasoning depends on the quality of the premises themselves; if these are not explicitly stated or questioned, software designers are more likely to make incorrect decisions [23,26]. The basis of how such reasoning problems can develop
lies in the difference between the two design thinking approaches: the naturalistic
decision making, and the rational decision making. This is sometimes referred
to as a dual thinking system: System 1 is fast and intuitive with unconscious
processes, i.e., naturalistic decision making. System 2 is slow and deliberate with
controlled processes, i.e., rational decision making [9]. People naturally defer to
System 1 thinking, and so in the case of software design designers need to be
triggered to use System 2 thinking for decision making. This is done by invoking
reflective thinking or prompting, which in the simplest sense is thinking about
what you are doing, meaning that the person consciously evaluates their ideas
and decisions [18,22].
simple form to teach and remind students to consider certain reasoning steps
during design is new.
The reasoning techniques chosen for this experiment are not exhaustive and
are instead a selection of common techniques already used in software architec-
ture: problem structuring, option generation, constraint analysis, risk analysis,
trade-off analysis and assumption analysis.
Problem structuring is the process of constructing the problem space by
decomposing the design into smaller problems. These then lead to reasoning
about requirements and the unknown aspects of the design [19]. This reasoning
technique focuses on identifying design problems and how these can be resolved
when the situation is one the designer is unfamiliar with. It is used to identify
the problem space and the key issues in design by asking questions related to
the problem, such as what are the key issues. Its aim is to prevent the designer
from overlooking key issues because of unfamiliarity with the problem. The more time spent on problem structuring, the more rational an approach the designer uses.
Solution option generation is a technique specifically directed at the problem of anchoring by designers, in which the first solution that comes to mind is implemented without considering other options. With option analysis, the designer looks, at each decision point, at what options are available to solve a design problem.
Constraint analysis looks at the constraints exerted by the requirements,
context and earlier design decisions and how they impact the design. These
constraints are often tacit and should be explicitly expressed in order to take
them into account. Trade-offs can come from conflicting constraints.
Risk analysis is a technique to identify any risks or unknowns which might
adversely affect the design. Risks can come from the designer not being aware
if the design would satisfy the requirements, in which case the design needs
to be detailed in order to understand these risks and mitigate them. Or the
design might not be implementable because designers are unaware of the business
domain, technology being used and the skill set of the team. These risks should
be explicated and estimated.
Trade-off analysis is a technique to help assess and make compromises when
requirements or design issues conflict. It can be used for prioritization of prob-
lems and to weigh the pros and cons of a design which can be applied to all key
decisions in a design [25].
Assumption analysis is a technique used to question the validity and accuracy
of the premise of an argument or the requirements. It focusses mainly on finding
hidden assumptions. It is a general technique which can be used in combination
with the other reasoning techniques [23].
We propose a simple method that combines the main design reasoning tech-
niques, and use a card game to prompt novice designers. In our research, we test
the effectiveness of this technique.
3 Student Experiment
The theory studied in this paper is that applying reasoning techniques, through
the use of a card game, has a positive influence on design reasoning with software
designers. This theory is tested using an experiment focusing on inexperienced
designers or novices. This experiment involved test and control groups carrying
out a design. The results of the two groups are compared to one another. We
use simple descriptive statistics and qualitative analysis to analyse the results.
The subjects for the experiment are 12 teams of Master students from the
University of Utrecht, following a Software Architecture course. These were split
into 6 control teams and 6 test teams, with most having three designers working
together, two teams with two designers, and one team with four designers. Based
on an earlier assessment, the teams were ranked and then randomly assigned to the test or control groups to ensure an equal distribution of skill.
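The paper does not spell out the assignment procedure; one plausible way to turn such a skill ranking into balanced groups is to pair adjacently ranked teams and randomly split each pair, as in this purely hypothetical Python sketch.

```python
import random

def assign_groups(ranked_teams, seed=2016):
    """Hypothetical stratified assignment: split adjacently ranked teams between groups."""
    rng = random.Random(seed)
    test, control = [], []
    for i in range(0, len(ranked_teams), 2):
        pair = list(ranked_teams[i:i + 2])
        rng.shuffle(pair)
        test.append(pair[0])
        if len(pair) > 1:
            control.append(pair[1])
    return test, control

# e.g. assign_groups([f"team{i:02d}" for i in range(1, 13)]) -> 6 test and 6 control teams
```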
periods were added evenly spread throughout the session when cards have to
be played. Lastly, the card rules were simplified to remove restrictions on fluid
discussion. This resulted in the following set of playing rules:
The assignment used in the experiment is the same as used in the Irvine
experiment performed at the University of California [30]. This assignment is
well known in the field of design reasoning, as participants to the workshop
analysed the transcripts made and submitted papers on the subject [13]. The
assignment is to design a traffic simulator. Designers are provided with a problem
description, requirements, and a description of the desired outcomes. The design
session takes two hours. The assignment was slightly adjusted to include several
viewpoints as end products in order to conform to the course material [1]. The
sessions were recorded with audio only and transcribed by two researchers.
The card game is constructed based on an earlier experiment [15] which incorporated the reasoning techniques as reflective questions posed by an external observer who served as a reflection advocate. The card game replaces these questions with cards. Four of the reasoning techniques previously given were made directly into cards: constraint, assumption, risk, and trade-off. Problem structuring and option generation would be triggered by using these techniques and looking at the design activities: context, problem, and solution.
Three reflection periods were created at 15 min, 45 min, and 1 h and 45 min. At these pre-set times, the students in the test groups were asked to use the cards to prompt discussion and support collaboration. The cards were paired with a table showing suggested questions to ask. Combining the cards enables different questions, such as: which constraints cause design problems? The control groups performed the same assignment without the card game and without pre-set reflection periods to revise their discussions.
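To illustrate how the four cards and the three design activities combine into prompts, a minimal sketch might pair them as follows; apart from the constraint/problem question quoted above, the prompt wordings are hypothetical and are not taken from the question table actually used in the experiment.

```python
# Cards and design activities as named in the paper; the combined prompt wordings
# below are illustrative only, except the first, which is quoted from the text.
CARDS = ["constraint", "assumption", "risk", "trade-off"]
ACTIVITIES = ["context", "problem", "solution"]

PROMPTS = {
    ("constraint", "problem"): "Which constraints cause design problems?",            # quoted
    ("assumption", "solution"): "Which assumptions does this solution rely on?",       # hypothetical
    ("risk", "solution"): "Which risks could prevent this solution from working?",     # hypothetical
    ("trade-off", "problem"): "What are the pros and cons of the options for this problem?",  # hypothetical
}

def prompt(card: str, activity: str) -> str:
    """Return a reflective question for a card/activity pair (illustrative fallback wording)."""
    return PROMPTS.get((card, activity), f"How does this {card} relate to the current {activity}?")
```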
A deductive analysis approach is used for coding the transcripts. The coding
scheme is based on the design activities and reasoning techniques. The results
of the experiment are analysed using qualitative measures, in this case with
discourse analysis performed on the transcripts.
3.2 Results
In this section, the results of the experiment are shown. The results show that there are significant differences between the control and test groups, supporting the theory that reasoning techniques influence design reasoning in a positive manner by having designers use these techniques more.
Design Session Length. The first and most obvious difference is the time
taken for the design session between the control and test groups. Though all groups were given two hours to complete their session, it was mostly the test groups which took full advantage of this (Table 1). Half of the control groups
finished their design before the 1.5 h mark. Only one test group did the same.
From the audio recording, we conclude that this was due to a misunderstanding,
as the group believed they had already breached the two hour mark. One test
group even surpassed the two hour mark by almost half an hour.
Our results show that assumptions and risks occur with a similar frequency as their corresponding reasoning techniques. The constraints are shown to have an even more similar frequency across the test and control groups; there is hardly any difference at all. Although trade-off analysis shows an obvious difference, it is the lowest in frequency for both test and control groups. This is a surprising result, as option generation shows a much greater difference in frequency; however, trade-off analysis, which concerns options, does not. To investigate these results, we need to look at the elements which make up trade-offs: pros and cons (Table 4). Taking a closer look at the results, the differences between the test and control groups become more obvious. The frequencies of pros and cons more closely match those of option generation. More pros and cons for various options are given; only the combination of both a pro and a con is scarce. The fact that the coding scheme requires a trade-off to have both a pro and a con for an option explains why trade-off analysis has such low frequencies. Interestingly, in comparison to the control groups, the test groups not only use more pros (53 % more), but also far more cons to argue about their options, more than tripling the amount (269 % more) compared to the control groups.
T1 T2 T3 T4 T5 T6 Total
Tradeoff analysis 5 2 2 4 2 1 16
Pros 17 4 10 8 4 3 46
Cons 10 2 8 13 9 6 48
C1 C2 C3 C4 C5 C6 Total
Tradeoff analysis 1 1 0 3 0 1 6
Pros 2 4 5 12 3 4 30
Cons 1 2 0 4 2 4 13
T1 T2 T3 T4 T5 T6 Total
Design problems 29 10 17 17 8 13 94
Design options 42 9 33 28 18 18 148
Design solutions 29 10 17 17 8 11 92
C1 C2 C3 C4 C5 C6 Total
Design problems 3 8 13 19 16 11 70
Design options 5 10 14 18 25 23 95
Design solutions 4 9 13 20 17 11 74
Looking at the identified design problems, options, and solutions, we find that mostly the design options have increased in the test groups compared to the control groups, with a percentage difference of 56 % (Table 5). This corroborates the increase in option generation established before. The identified design problems and solutions have also increased in the test groups, but not by much: a percentage difference of 34 % for design problems and 24 % for design solutions.
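For clarity, the percentage differences quoted here and for the pros and cons above appear to be the relative increase of the test-group total over the control-group total from Tables 4 and 5; under that reading:

```latex
\Delta = \frac{T_{\mathrm{test}} - T_{\mathrm{control}}}{T_{\mathrm{control}}} \times 100\%,
\qquad \text{e.g.}\quad \frac{148 - 95}{95} \approx 56\% \ \text{(options)},
\qquad \frac{48 - 13}{13} \approx 269\% \ \text{(cons)}.
```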
4 Discussion
The results of the experiment show significant differences in applying reason-
ing between the control and test groups. The cards overall trigger more design
reasoning in the test groups. More assumptions and risks are identified, more
options are generated and more key issues are defined with problem structuring.
In this section we analyse the results and discuss their meaning.
T5 (1:52:06-1:52:15)
PERSON 2: So we have we got everything. I think maybe only the traffic light is not taken into account and that’s
connected to intersection.
PERSON 1: Yeah. Definitely need to be there just make it here. And do we also model dependencies.
PERSON 2: Okay I think we don’t have the time to put in. Maybe we can sketch it.
C5 (1:16:38-1:17:19)
PERSON 2: Oh ok. Do we have to say something more? Are we done actually? Or do they actually also wanna know
how we include the notation and such, because-
PERSON 1: No they also get the documents, so they can see
PERSON 2: Yeah ok, but maybe how we come up with the- I don't know. No? Isn't necessary?
PERSON 3: Mm
PERSON 1: It's just use UML notation, for all
PERSON 2: For all?
PERSON 1: No, and lifecycle model, and petri net. No, no petri net
PERSON 2: Perhaps petri net. Ok, shall we- shall I just?
PERSON 1: Yeah
PERSON 2: Ok
T3 (0:20:31-0:21:10)
PERSON 2: HTML 5 yeah? Information would of course [inaudible] constraints or risk or trade-offs, we have to
make- a risk might be of course that- of course there is a [inaudible] so while you are travelling. For example, when
you have an older device that could be a problem of course. So then you couldn't use the navigation maybe, the- well,
[inaudible] right?
PERSON 1: What do you mean exactly? For example.
PERSON 2: Yeah well, for example, if you are travelling and you want to use the
application. You want to use the traffic simulator, then of course that might be the case that your device is not suitable
for it. For example. So, on the other hand
T2 (0:28:14-0:28:28)
PERSON 1: So this was a problem
PERSON 3: This was a problem
PERSON 1: Yeah
PERSON 2: Yeah. Because [inaudible]
PERSON 1: And a risk right
PERSON 2: A constraint? Yeah but it was also like an assumption that you have a minimum length. That is our
assumption right or-
PERSON 3: Yeah we created that now, and that's ok because it's our own system
T4 (1:25:13-1:26:05)
PERSON 1: So that’s the trade-off. The other side is good to have in the cloud because you can easily push a new
update every hour if you want but you need really really strong server for all this simulations. Now professor did not
say how much money she has. So it can be also. There can be also an option to pay for usage of this server for every
simulation or for every hour of simulation.
PERSON 2: I don’t think so.
PERSON 1: There can be an option. But it can be also very expensive so when I think about everything I think that is
cheaper and easier to have local stand-alone version.
PERSON 2: Yeah.
PERSON 3: Yeah.
A main purpose of the reasoning card game is to prompt the students to consider
design elements. The results of the experiment show that especially risks and
assumptions are considered more by the test groups. Trade-off analysis does not show much difference, whereas constraint analysis remained the same.
In many cases, the test groups considered the design scope to be clear at
first glance. But when they started using the cards and thought more about the
design topics, they found out that it actually is more complicated than they first
realized. The designers reflect on their previous ideas, discuss and redefine them,
which clearly shows that the cards trigger reasoning in designers.
For the control groups it is clear that considering assumptions and risks
for decision making about the design is not at the forefront of their minds, as
indicated by their low distinct element frequencies. With the test groups, the
cards remind the designers to take these considerations into account, as again
can be seen in T3 where person 2 lists the cards, which prompts him to identify
a risk (Fig. 2).
For the trade-off analysis few pros and cons were discussed, contributing to
the low number of trade-offs. However, the test groups generated many pros,
and especially more cons to argue against the solution options than the control
group. The control groups also generate many pros, but fewer cons (Table 4).
This suggests that the control groups are more concerned with arguments that
support their options, or arguing why these are good, instead of looking at
potential problems that could invalidate their options (cons). The test groups
are more critical of their choices and look at options from different viewpoints.
The extract of T4 shows part of a larger trade-off analysis in which several options are heavily discussed: mostly whether to have a standalone program, or one which is cloud- or web-based (Fig. 2). In this part, person 1 mentions that a
pro for a cloud based program would be that you can update every hour, but
a con is that a strong server is necessary which would be costly. The person
then proceeds to suggest another option to ask users to pay for the usage of
the server. This is not well-received by the group and person 1 admits that this
option would still be a very expensive one and gives a pro to their first option:
a local standalone version to which the others agree.
Even though the group eventually went with their first option, they took the
time to explore multiple options and critically assess them by providing both
pros and cons. The control groups had fewer of such discussions.
The effect of the card game is to combat satisficing behaviour and lack of design
reasoning by stimulating the designers to reconsider their options and decisions,
ultimately taking more time to delve into the issues. And yet, when we look at the
design problems and design solutions identified by both groups, the percentage
difference is much lower than that of the other elements, such as options and problem structuring. The cards prompt designers to consider their problems and explore more of the design, but problems are not identified by the test groups as much as the other reasoning techniques are used.
Design problem identification can be influenced by other factors, such as design strategy and designer experience. Design strategies, such as problem-oriented or solution-oriented, can influence the information-seeking behaviour
of designers [16]. The approach used for problem-solving, whether to focus on
finding problems or solutions first, seems to have more of an impact on the design
problems being identified. When comparing groups with similar strategies, the
influence of the cards becomes clearer.
As an example, we have groups T2 and C1. Both use a satisficing strategy,
where they actively avoided going into the details. They preferred to view a
problem as being outside of their scope. Their option generation and trade-off
analysis results are very similar. But the problem structuring, risk, assumption
and constraint analysis of T2 is at least double that of C1. Despite their
adherence to a minimum satisficing strategy, the cards prompted T2 to recognize
problems which often resulted from identified risks and constraints, for which
they made assumptions to simplify the problem and solution.
It seems that the design strategy used by the groups is a clearer indicator
for how many and what kind of design problems are identified, while the cards
influence how the designers solve these problems. This supports our finding that
problem identification depends more on the design strategy than on
the card game.
require minimal effort to find. It is easy to see why these constraints would be in the text as requirements, as it is to the client's benefit to give clear instructions on what the program should and should not do. This means that constraints
are easier to identify, causing the cards to have little influence, as these are
given as requirements. The card game provides no noticeable difference
in constraint identification. The other techniques, such as assumptions and
risks, must all be inferred from the text and are not clearly given. The effect of
the cards is more obviously shown there.
5 Threats to Validity
We recognise the threats to validity in this research, especially those revolving
around generalization. For the transcripts, discourse analysis was used to inter-
pret the text, which in itself is subjective and reliant on the view of the researcher
[7]. This paper reports empirical research in the form of an experiment involving an example assignment. Empirical research is one of the main research methods within the software architecture research field, relying on evidence to support the research results. We address the internal and external validity of the results, acknowledging any limitations which may apply [4].
5.3 Reliability
Reliability is about ensuring that the results found in the study are consistent, and would be the same if the study were conducted again. To ensure that the coding of the transcripts is reliable, it was tested for inter-rater reliability using Cohen's kappa coefficient [2] to measure the level of agreement. The transcripts were each coded by two researchers using NVivo 10. The average kappa coefficient of each of the transcripts was above 0.6, which is considered to show a good level of agreement. The average over all transcripts combined is 0.64.
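As a minimal illustration of the agreement measure, the sketch below computes unweighted Cohen's kappa from two coders' labels for the same transcript segments; the label values are made up for the example, and the study itself reports (weighted) kappa per Cohen [2] computed over the NVivo coding.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Unweighted Cohen's kappa between two coders' labels for the same segments (sketch)."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n          # observed agreement
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)              # agreement expected by chance
              for label in set(codes_a) | set(codes_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-segment codes from two researchers:
print(cohen_kappa(["risk", "option", "risk", "assumption"],
                  ["risk", "option", "constraint", "assumption"]))   # ≈ 0.67
```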
6 Conclusions
Software design is a complicated problem-solving process, which due to its effect
on the later stages of software development, is one of the most important stages
to consider. Problems occurring at this stage which are not solved immediately
will result in problems later during development or implementation, costing
money and time. Problems with software design can result from problematic
design decisions, which are easily influenced by designer biases. These biases can
be avoided by using more logical reasoning.
In this paper, we propose a simple card game to help novice designers use design reasoning. Design reasoning means using logic and rational thinking in order to make decisions, something which people as a whole find difficult due to the usual way they think. In order to prompt design reasoning, several common reasoning techniques were chosen to be represented by the card game. These techniques are: problem structuring, option generation, constraint analysis, risk analysis, trade-off analysis, and assumption analysis.
To study the effect of the card game, we designed an experiment based on 12
student groups following a software architecture course. These 12 groups were
divided into 6 control and 6 test groups. The 12 groups were asked to construct
a software design. The transcripts of these experiments were analysed using
discourse analysis. The results show a notable difference between the test and
control groups in the usage of nearly all techniques. The effect of the cards is to trigger
the designers to use design reasoning techniques to reason about different aspects
of the design, to prompt new discussion topics, or to reconsider previous discussions.
In all cases, the cards trigger reasoning and lead to more discussion and
reconsideration of previous decisions. Those who use the card game generally
identify more distinct design elements and spend more time reasoning about the
design. Only the constraint analysis technique shows no obvious difference.
Further research includes studying the effect of the card game on professional
designers, i.e., those who are experienced in the field. It would be interesting
to observe how such a simple card game works with people who are more aware
of design techniques. The card game could also be used as a learning tool for
novice designers, to further their understanding of software architecture and to
learn about design issues from a reasoning perspective.
References
1. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Series in
Software Engineering. Addison Wesley, Reading (2012)
2. Cohen, J.: Weighted kappa: nominal scale agreement with provision for scaled
disagreement or partial credit. Psychol. Bull. 70, 213–220 (1968)
3. Smart Decisions: A software architecture design game. http://smartdecisionsgame.
com/
4. Galster, M., Weyns, D.: Empirical research in software architecture. In: 13th Work-
ing IEEE/IFIP Conference on Software Architecture, Venice, Italy, pp. 11–20.
IEEE Computer Society (2016)
5. Grenning, J.: Planning poker or how to avoid analysis paralysis while release plan-
ning. Hawthorn Woods Renaiss. Softw. Consult. 3, 1–3 (2002)
6. van Heesch, U., Avgeriou, P., Tang, A.: Does decision documentation help junior
designers rationalize their decisions? A comparative multiple-case study. J. Syst.
Softw. 86, 1545–1565 (2013)
7. Horsburgh, D.: Evaluation of qualitative research. J. Clin. Nurs. 12, 307–312 (2003)
8. IDEO: IDEO Method Cards. https://www.ideo.com/by-ideo/method-cards/
9. Kahneman, D.: Thinking, Fast and Slow. Penguin Books, London (2012)
10. Klein, G.: Naturalistic decision making. Hum. Factors J. Hum. Factors Ergon. Soc.
50, 456–460 (2008)
11. Lago, P., van Vliet, H.: Explicit assumptions enrich architectural models. In: 27th
International Conference on Software Engineering, ICSE 2005, pp. 206–214. ACM
(2005)
12. Lee, J.: Design rationale systems: understanding the issues. IEEE Expert. Syst.
Appl. 12, 78–85 (1997)
13. Petre, M., van der Hoek, A.: Software Designers in Action: A Human-Centric Look
at Design Work. CRC Press, Boca Raton (2013)
14. Poort, E.R., van Vliet, H.: Architecting as a risk- and cost management discipline.
In: 9th Working IEEE/IFIP Conference on Software Architecture, WICSA 2011,
pp. 2–11. IEEE Computer Society (2011)
15. Razavian, M., Tang, A., Capilla, R., Lago, P.: In two minds: how reflections influ-
ence software design thinking. J. Softw. Evol. Process 6, 394–426 (2016)
16. Restrepo, J., Christiaans, H.: Problem structuring, information access in design.
J. Des. Res. 4, 1551–1569 (2004)
17. Rittel, H.W.J., Webber, M.M.: Dilemmas in a general theory of planning. Policy
Sci. 4, 155–168 (1973)
18. Schön, D.A.: The Reflective Practitioner: How Professionals Think in Action. Basic
Books, New York (1983)
19. Simon, H.A.: The structure of ill structured problems. Artif. Intell. 4, 181–201
(1973)
20. Simon, H.A.: Rationality as process and as product of thought. Am. Econ. Rev.
68, 1–16 (1978)
21. Stacy, W., MacMillan, J.: Cognitive bias in software engineering. Commun. ACM.
38, 57–63 (1995)
22. Stanovich, K.E.: Distinguishing the reflective, algorithmic, autonomous minds: is
it time for a tri-process theory? In: Two Minds: Dual Processes and Beyond, pp.
55–88. Oxford University Press (2009)
23. Tang, A.: Software designers, are you biased? In: 6th International Workshop on
SHAring and Reusing Architectural Knowledge, pp. 1–8. ACM, New York (2011)
24. Tang, A., Babar, M.A., Gorton, I., Han, J.: A survey of architecture design ratio-
nale. J. Syst. Softw. 79, 1792–1804 (2006)
25. Tang, A., Lago, P.: Notes on design reasoning techniques. SUTICT-TR.01,
Swinburne University of Technology (2010)
26. Tang, A., Lau, M.F.: Software architecture review by association. J. Syst. Softw.
88, 87–101 (2014)
27. Tang, A., Tran, M.H., Han, J., van Vliet, H.: Design reasoning improves software design
quality. In: Becker, S., Plasil, F., Reussner, R. (eds.) QoSA 2008. LNCS, vol. 5281,
pp. 28–42. Springer, Heidelberg (2008). doi:10.1007/978-3-540-87879-7 2
28. Tang, A., van Vliet, H.: Software architecture design reasoning. In: Babar, M.A.,
Dingsøyr, T., Lago, P., van Vliet, H. (eds.) Software Architecture Knowledge Man-
agement, pp. 155–174. Springer, Heidelberg (2009)
29. Tang, A., van Vliet, H.: Software designers satisfice. In: Weyns, D., Mirandola, R.,
Crnkovic, I. (eds.) ECSA 2015. LNCS, vol. 9278, pp. 105–120. Springer, Heidelberg
(2015). doi:10.1007/978-3-319-23727-5 9
30. UCI: Studying professional software design. http://www.ics.uci.edu/
design-workshop/
31. Zannier, C., Chiasson, M., Maurer, F.: A model of design decision making based
on empirical results of interviews with software designers. Inf. Softw. Technol. 49,
637–653 (2007)
A Long Way to Quality-Driven Pattern-Based
Architecting
1 Introduction
Architectural patterns and styles are recurrent solutions to common problems.
Among others, they include knowledge on quality attributes (QAs) [1]. For the
sake of simplicity, throughout the paper we use the term architectural pattern
to mean both. In fact, according to Buschmann [2], patterns and styles are very
similar as every architectural style can be described as an architectural pattern.
However, some differences can be considered essential, the most relevant being
that patterns are more problem-oriented, while styles do not refer to a specific
design situation [2]. Accordingly, in our analysis we make explicit if and why
authors adopt the term pattern or style. We observe a similar problem with the
definition of quality attribute. Again, for the sake of simplicity, we adopt the
term quality attribute. In our analysis, if necessary, we make explicit the term
In the literature there are several works that, to various degrees, address the
interaction between architectural patterns and quality attributes. Many have
been included as primary studies of our SLR. In this section, we focus on
two additional works, Buschmann [2] and Harrison and Avgeriou [1], holis-
tic in nature and hence providing an excellent starting point for our SLR.
Buschmann [2] is the cornerstone of architectural patterns and many later pub-
lications refer to its taxonomy of patterns. The approach is holistic. Firstly,
software architecture design is considered more than a simple activity with a
limited scope. Software architecture design has system-wide goals. Secondly, it
aims at providing systematic support beyond that of a single pattern. As the title
of the book suggests, patterns are framed in a system of patterns. For our pur-
pose, we have considered the work of Buschmann in a pattern-quality interaction
perspective, i.e. with a special focus on such interaction. In particular, the rela-
tionship between patterns and quality is based on a quality model that includes
Changeability, Interoperability, Reliability, Efficiency, Testability and Reusabil-
ity. Several quality attributes (called non-functional properties in [2]) present
one or more sub-characteristics. Each quality attribute has been exemplified
by means of scenarios. Some well-fitting pattern-quality attribute solutions
are given; for instance, the Reflection pattern is given as a fitting solution for
changeability. Trade-off and prioritization of quality properties have been
mentioned. Non-functional properties can be classified according to the architec-
tural techniques for their achievement. Patterns provide support for building
high-quality software systems in a systematic way, given some quality properties
and functionalities. According to [2], the final assessment of quality properties in
software architecture is still a difficult task. Indeed, although quality properties
are crucial for the design, we still have to solve problems in their measurement.
The lack of quantification makes the choice mostly based on the intuition and
knowledge of software architects [2]. Similar to [2], Harrison and Avgeriou [1]
3 Study Design
Our systematic literature review has been carried out according to the guidelines
by Kitchenham [7]. Few studies focus exactly on the interaction between quality
attributes and architectural patterns. Therefore, we have decided to carry out
this SLR with the motivation of detecting this widespread knowledge and organizing
it into a systematic theoretical framework.
gathered the frequency of both patterns and attributes. Figure 1 shows the pat-
terns identified in the primary studies with the related frequency of appearance.
According to Fig. 1, the most frequent patterns (with a frequency of five
or higher) are the ten patterns listed there plus a set of pattern combinations
(see Combination). In addition, there are 44 further patterns (not displayed in
the figure) with a frequency between 1 and 5. Among these less frequent patterns,
multi-agent system patterns show a good potential for further research. Our
online protocol provides the frequency table for the full list of patterns.
We performed a similar analysis for the quality attributes found, which provides
a similar picture with an extended landscape of exotic quality attributes.
In this case we found 43 quality attributes (plus a residual category of
unrecognizable QAs) with a mean frequency of 15.6. Figure 2 includes only the
quality attributes that appear at least 13 times in our primary studies. As for
patterns, the less frequent QAs are available in the online protocol1. Finally, we
combined the two data pools (Patterns-QAs Frequency, see Table 2). In
particular, we identified 711 pattern-QA couples. Of these, 422 (62 %) are couples
composed of one of the most frequent patterns and one of the most frequent
quality attributes. Interestingly, 166 couples out of 711 (23 %) are composed of
a less frequent pattern and one of the most frequent quality attributes listed in
Fig. 2. The other combinations are much rarer: for instance, the couple "most
frequent patterns-less frequent quality attributes" appears in just 62 cases (9 %)
and, as expected, the couple "less frequent patterns-less frequent QAs" appears
in even fewer cases (41, corresponding to 6 %).
Regarding the frequency of patterns and QAs, we observe that the set of
most frequent quality attributes covers 85 % of all identified pattern-QA couples.
Only 70 % of the identified couples are composed of a pattern belonging to
the set of most frequent patterns. This might suggest that the set of most fre-
quent QAs is mature enough to be considered as a backbone for an architectural
quality model. On the other hand, patterns as a category is to be considered as
1 www.s2group.cs.vu.nl/gianantonio-me/.
Real World needs shed light on an intrinsic goal of the other methodologies,
namely to design systems that match business process needs.
We have identified 13 different types of approaches. However, overlaps are
very common (e.g. knowledge-based and decision-making). A reason for this
overlap is that the approaches are at different levels of abstraction. For instance,
scenario-based approaches provide the space where patterns and quality will be
assessed; functionality-oriented approaches zoom in on how the considered pattern
satisfies both functionality and quality, down to the implementation level. Over-
all we noticed that Decision-Making elements are widespread in all the identi-
fied approaches. Many studies have the goal of providing support for decisions,
so decision-making can be considered as a cross-characterizing element for all
the methodologies. In the same line of reasoning, knowledge-based approaches
present a body of reusable knowledge for adopting decisions. In general, we
observe redundant elements proposed as new/different methodologies. Table 4
proposes a possible way of reading the holistic relations we uncovered among the 13
approaches we identified.
In looking for the interaction between architectural patterns and QAs it emerged
that a quality-driven combination of architectural patterns is among the most
important challenges in developing modern software systems. We zoomed into
the effect that combining multiple patterns may have on the overall quality deliv-
ered by the combination. That is, while two patterns may individually contribute to (or
hinder) a certain quality attribute, their combination might have a positive (or
conflicting) impact on that same attribute. "Combination of patterns" can find a place
in our Unified Framework among the Knowledge-based elements. Interestingly
enough, among our 99 primary studies, we found only 8 papers mentioning such
a combination, as described in the following.
Background Works on Combination of Patterns. Study [1] regards the
research on combinations of patterns as a great challenge, given the lack
Lee et al. [14] present a method for evaluating quality attributes. It uses conjoint
analysis in order to quantify QA preferences and can be used in combination
with the ATAM. In this study the decision for a Layered + MVC architecture is
the result of a composition of customers' needs. The approach of [14] suggests
a conceptual building block where combinations of patterns reflect the result of
negotiation between stakeholders.
In [15] the authors provide a knowledge base for architecting wireless ser-
vices. They propose a knowledge-based model with a service taxonomy, a ref-
erence architecture, and basic services as backbone. Regarding combinations of
patterns, in [15] the focus is on the service sub-domain. Combinations of patterns are
solutions to achieve quality attributes in a specific sub-domain. They are applied to
basic services and shape the reference architecture. The approach of [15] selects
Layered as the main pattern for building the software architecture. The ratio-
nale lies in the type of quality attributes supported and the popularity among
engineers. This study offers the conceptual idea that a combination of patterns
can be classified according to the main pattern.
In [17] the authors aim at an architectural pattern language for
embedded middleware systems. The core architecture is a Layered + Microkernel combination.
In [18] the authors proposed a framework for the early estimation of energy con-
sumption, according to particular architectural styles, in distributed software
systems. In their experiment styles were first tested in isolation; then, one
combination of them was assessed regarding energy consumption. The com-
bination of patterns (called hybrids in the study) showed lower energy consumption
and overhead for the same amount of shared data compared to each single pat-
tern. In that case the impact on the quality attributes is not merely additive;
indeed, combining the patterns reduces the energy consumption of each single pattern.
In [16] a full model of the interaction between architectural patterns and tactics has been
analyzed, with the aim of linking strategic decisions (decisions that affect the
In spite of its systematic nature, the SLR does not provide enough knowledge for
building univocal types of interactions either between given pattern-QA couples
or for pattern combinations. Usually, if a given pattern addresses a particular QA
positively, that interaction is replicated in the combination. In general, all
the combinations reported above address QAs in the same way as each single
pattern. That means, for instance, that the Layered pattern addresses Portability
positively (according to [1]) also when Layered is combined with other pat-
terns. Similarly, combination 3 [15] supports portability as well. The only clear
(reported) exception regards Energy Efficiency, for which the QA measure seems
better if the patterns are combined instead of implemented in isolation. More evi-
dence is needed to confirm this result [21]. Finally, an interesting and promising
research path is to consider combinations of patterns as specific design-solutions
for real world problems.
5 Threats to Validity
As customary, for the analysis we followed an SLR protocol. However, there
are potential threats to validity. Firstly, the search string might not catch all
the relevant papers available. We mitigated this risk by adding a snowballing
phase, checking the references of the primary studies. Secondly, the application
of the inclusion and exclusion criteria was conducted by only one researcher.
Thirdly, there are almost no studies that explicitly address the focus of our
analysis. This means that the knowledge is spread across a heterogeneous spec-
trum of studies. Relevant information might be hidden in studies not detectable
by a sound search string. To cope with this issue we performed a pilot study
for testing and refining the search string. Finally, the threats to validity for the
analysis results and conclusions might be considered as a problem of general-
ization. Since we are in search of a theory, our results should be generalizable
to different contexts. Our mitigation strategy for this issue has been the adop-
tion of a coding procedure. However, the context specifications still represent an
important challenge for this research work.
6 Conclusion
We performed a systematic literature review in order to shed light on the inter-
action between architectural patterns and quality attributes. We answered three
main research questions. For the first research question we identified the ways
of addressing the interaction between quality attributes and patterns. We dis-
covered that this relation remains mainly unexplored, with a high number of studies
showing an Undetermined type of interaction. We also analyzed the frequency
of recurring patterns and recurring quality attributes. The main finding was
that the set of most frequent quality attributes covers 85 % of the identified
pattern-QA couples. We can conclude that the set of quality attributes we
found can act as backbone for a quality model. The second research question
was answered by identifying different types of approaches for addressing quality
through architectural patterns. We observed redundancy and overlap, so we
described basic elements of pattern-quality-driven architecting and unified
them in a holistic framework. The third research question, about challenges in
quality and patterns interaction, allowed us to explore combinations of patterns.
We realized that we still lack extended knowledge on this specific challenge in
particular. Overall, the knowledge gathered so far lays the basis for the further
development of a theory of pattern-quality-driven architecting. However, in spite
of architectural patterns and quality attributes both being widely explored and
practiced, there is still a lot to learn about their interaction: a long way to go.
References
1. Harrison, N.B., Avgeriou, P.: Leveraging architecture patterns to satisfy qual-
ity attributes. In: Oquendo, F. (ed.) ECSA 2007. LNCS, vol. 4758, pp. 263–270.
Springer, Heidelberg (2007). doi:10.1007/978-3-540-75132-8 21
2. Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., Stal, M.: A System of
Patterns. Wiley, Hoboken (1996)
3. Babar, M.A.: Scenarios, quality attributes, and patterns: capturing and using their
synergistic relationships for product line architectures. In: Software Engineering
Conference, 11th Asia-Pacific, pp. 574–578. IEEE (2004)
4. Zdun, U.: Systematic pattern selection using pattern language grammars and
design space analysis. Softw.-Pract. Exp. 37(9), 983 (2007)
5. Weyns, D.: Capturing expertise in multi-agent system engineering with architec-
tural patterns. In: Weyns, D. (ed.) Architecture-Based Design of Multi-Agent Sys-
tems, pp. 27–53. Springer, Heidelberg (2010)
6. Costa, B., Pires, P.F., Delicato, F.C., Merson, P.: Evaluating REST architectures:
approach, tooling and guidelines. J. Syst. Softw. 112, 156–180 (2016)
1 Introduction
Design Diversity is “the approach in which the hardware and software elements
that are to be used for multiple computations are not copies, but are indepen-
dently designed to meet a system’s requirements” [2]. It is the generation of
functionally equivalent versions of a software system, but implemented differ-
ently [2]. Design diversification has the potential to mitigate risks and improve
the dependability in design for situations exhibiting uncertainty in operation,
usage, etc. On the other hand, architecture sustainability is “the architecture’s
capacity to endure different types of change through efficient maintenance and
orderly evolution over its entire life cycle” [1]. In this paper, we argue that we
can link diversity and sustainability from a value-based perspective. The link
can summarize the success of engineering and evolution decisions in meeting
current and future changes to user, system, and environment requirements. We
are concerned with how to employ diversity in the architecture as a mechanism
to better support future changes. This requires rethinking architecture design
decisions by looking at their link to long-term value creation in enabling change
and reducing their debt, etc. The focus is on how we can sustain the architec-
ture, which requires treatment for not only short-term costs and benefits but
also for long-term ones and their likely debts. As the valuation must take
uncertainty into consideration, we appeal to options thinking [7] to answer the above
question. Our novel contribution is an architecture-centric method, which builds
on the Cost-Benefit Analysis Method (CBAM) [5] and options theory [7] to evaluate
and reason about how architectural diversification decisions can be employed
and how they contribute to long-term value creation. In particular, the approach
uses real options analysis [7] to quantify the long-term contribution of these
decisions to value and determine how that value can assist decision-makers and
software architects in reasoning about sustainability in software. Our exploratory
case analysis is based on provisional data gathered from the GridStix prototype,
deployed on the River Ribble in North West England [4].
2 Background
4 The Approach
The proposed approach is a CBAM-based method for evaluating diversified
architectural options (DAO) with real options theory, as illustrated in Fig. 2.
Step 1: Choosing the business goals, Scenarios and DAOs. Our method
focuses on QAs and their responses with respect to scenarios of interest that are
related to sustainability. DAOs are the architectural options that deal with these
scenarios. In our approach, DAOs are represented as a portfolio of options. Exer-
cising each DAO can be formulated as a call option [7], with an exercise price and
an uncertain value. We aim to provide a good trade-off between the benefit and cost
of applying diversified options on the system's QAs over time, given the following:
(1) a set of diversified architectural options {DAO_1, DAO_2, ..., DAO_n}, where each DAO
is composed of integrated architectural strategies among candidate diversified ones
{AS_1, AS_2, ..., AS_m}; (2) one or more ASs are selected as candidates for diversification,
AS_k, as shown in Fig. 2; (3) the diversified ASs are denoted by AS_ka, where
0 <= k <= x and 0 <= a <= y; (4) each DAO comes with a cost Cost_DAOi(t) and a
benefit Benefit_DAOi(t), which may vary over time.
Among the business goals which we consider to illustrate our approach are
the accuracy of flood anticipation and a reasonable warning time prior to the
flood. In our method, we mainly test and evaluate the application of diversity
versus no diversity. Therefore, the non-diversified option = WiFi + FH and
DAO_1 = WiFi + BT + FH are employed for evaluation, together with the following
scenario: message transmission between any given sensor node and the gateway
should arrive in ≤ 30 ms (addressing the performance QA). We set a 60 % target
for the improvement of average network latency, backed up by [4].
Step 2: Assessing the relative importance of QAs (Elicit QAWeight_j).
The architect assigned a weight to each QA according to the equation in Table 1.
Step 3: Quantifying the benefits of the DAOs (Elicit ContribScore_j).
The impacts of the non-diversified option and of each DAO_i on the QAs are elicited
from the stakeholders with respect to the Benefit_DAOi equation in Table 1.
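Since Table 1 is not reproduced here, the exact elicitation equations are not shown; assuming a CBAM-style aggregation in which an option's benefit is the QA-weighted sum of its elicited contribution scores, a minimal Python sketch with hypothetical weights and scores could look as follows.

# Hypothetical CBAM-style aggregation for Steps 2-3; the exact equations
# are given in Table 1 of the paper and may differ from this sketch.
qa_weights = {"performance": 0.5, "energy_efficiency": 0.3, "safety": 0.2}  # Step 2: QAWeight_j

# Step 3: elicited contribution scores of each option to each QA (illustrative values).
contrib_scores = {
    "non_diversified": {"performance": 40, "energy_efficiency": 30, "safety": 20},
    "DAO1":            {"performance": 70, "energy_efficiency": 45, "safety": 60},
}

def benefit(option):
    # Benefit_DAOi as the QA-weighted sum of the elicited contribution scores.
    return sum(qa_weights[qa] * score for qa, score in contrib_scores[option].items())

for option in contrib_scores:
    print(option, benefit(option))  # non_diversified 33.0, DAO1 60.5 (illustrative only)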
Step 4: Quantifying the costs of DAOs and Incorporating Scheduling
implications. Classical CBAM uses the common measures for determining the
costs, which involve the implementation costs only. Unlike CBAM, our approach
embraces the switching costs between decisions, which are equivalent to the
up-front payment required for purchasing a stock option. This is in addition to the
costs of deploying DAOs, configuration costs, and maintenance costs, which
correspond to the exercise price, denoted by Cost(DAO_i). It is essential to note that CBAM
implements the ASs with high benefit and low cost [5]. On the other hand, we
believe that some ASs could initially come with high cost and low benefit, or with high
cost and high benefit, but provide a much higher benefit in the long term that outweighs
the cost. The long-term benefit is the key factor for AS evaluation.
Step 5: Calculate the Return of each DAO for the scenarios. We used
the binomial option pricing calculation [7] and steps inspired by [6]. The binomial option
pricing model is a constructive aid that shows the suitable time slot for
exercising an option, i.e., the cost-benefit of diversified options over time. For
each step of the binomial tree, the up and down node values are important in
determining the rise and fall of the system value, which is ultimately used to calculate
the option price. Our method aims to determine the impact of applying each
DAO (i.e., its utility) on the system QAs, which is computed at every time slot t,
where t = l indicates that the time equals l units of the time of interest, i.e., months in
GridStix. For example, the approximate number of currently deployed GridStix
nodes is 14 [4]. It is likely that adding extra nodes may improve the system's
safety due to the presence of backup nodes and wider network coverage.
This in turn promotes the accuracy of flood prediction, satisfying our main
business driver and thus sustaining the GridStix software. Figure 3 shows the
enhanced utility gained with/without diversification versus reporting latency
when offering up to 20 nodes, based on the graphs in [4]. The following
steps are necessary for the valuation of options using the binomial option pricing
model.
5 Preliminary Evaluation
Without Diversification Outcome: A preliminary analysis of the method
without diversifying ASs is necessary. The architecture comprising WiFi and FH
was evaluated. The utility values for the implementation of this architecture
are depicted in Fig. 4, along with the utilities of the other DAOs, which are elicited from
stakeholders. Decision makers can vary the base value at cell A (guided by the
chart in Fig. 4a) to perform what-if analysis. In this example, possible values
range from $400 to $1500. The likely value of each architecture is different.
The valuation of the non-diversified option over varying time slots for the uncertainty
of implementing additional nodes is clearly shown in Fig. 5. In this example,
Vs is $1750. For a detailed analysis, consider cell D for the evaluation at two time
units as presented in Fig. 5. The upper cell value is S_non-div(2) =
Vs + Benefit_non-div(2) = 1750 + 1000 = $2750. The lower cell value is computed
as follows: f_non-div(2) = max(0, S_uu - Cost_non-div) = max(0, 2750 - 1250) =
$1500. The option price f of the non-diversified option is: f_non-div = f_non-div1 +
f_non-div2 + f_non-div3 = 905.47 + 910.79 + 915.60 = $2732.22.
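To make the cell computation above concrete, the following Python sketch reproduces the S and f values of cell D and rolls one binomial step back; the lower-node benefit, the risk-neutral probability, and the discount rate are assumed for illustration only, so the result does not reproduce the reported option price of $2732.22.

# Sketch of the binomial cell computation; tree parameters are assumed.
Vs = 1750      # base system value
cost = 1250    # Cost_non-div (exercise price)

def node(benefit):
    # Return (S, f) for a tree node, given the elicited benefit at that node.
    S = Vs + benefit        # e.g. S_non-div(2) = 1750 + 1000 = 2750
    f = max(0, S - cost)    # e.g. f_non-div(2) = max(0, 2750 - 1250) = 1500
    return S, f

S_up, f_up = node(1000)     # upper child (cell D in Fig. 5)
S_dn, f_dn = node(400)      # lower child (assumed benefit)

p, r = 0.6, 0.05            # assumed risk-neutral probability and discount rate
f_parent = (p * f_up + (1 - p) * f_dn) / (1 + r)
print(S_up, f_up, round(f_parent, 2))   # 2750 1500 1200.0 (illustrative only)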
Diversification Outcome: DAO_1 is employed for the method evaluation. The pre-
dicted utility values for the implementation of DAO_1 are shown in Fig. 4b; they
are elicited from stakeholders. By applying the same logic used to calculate the
option value for the non-diversified decision, the valuation of DAO_1 over varying
time slots for the uncertainty of implementing additional nodes is shown in Fig. 6,
where the orange cells represent f_DAOi(t) and the green cells denote S_DAOi(t).
For a detailed analysis, consider cell D for the evaluation at two time units as
presented in Fig. 6. The upper cell value is S_DAO1(2) = Vs + Benefit_DAO1(2) =
1750 + 1300 = $3050. The lower cell
6 Conclusion
We have described an approach which makes a novel extension of CBAM. The
approach reasons about diversification in software architecture design decisions
using real options. The fundamental premise is that diversification embeds flex-
ibility in an architecture. This flexibility can have value under uncertainty and
can be reasoned about using real options. In particular, the approach can be used by
the architect and the decision maker to appraise the value of architecting for sus-
tainability via diversification based on binomial trees. For instance, the method
can be used to inform whether an architecture decision needs to be diversified
and what the trade-offs between cost and long-term value resulting from diversi-
fication are. This trade-off can be used to reflect on sustainability. Our case study
illustrates that the method can provide a systematic assessment of the interplay
between sustainability and diversity using value-based reasoning. In the future,
we plan to evaluate our model at run-time using machine learning techniques as
well as apply it on several case studies.
References
1. Avgeriou, P., Stal, M., Hilliard, R.: Architecture sustainability [guest editors’ intro-
duction]. IEEE Softw. 30(6), 40–44 (2013)
2. Avizienis, A., Kelly, J.P.J.: Fault tolerance by design diversity: concepts and experi-
ments. Computer 17(8), 67–80 (1984). http://dx.doi.org/10.1109/MC.1984.1659219
3. Becker, C., Chitchyan, R., Duboc, L., Easterbrook, S., Mahaux, M., Penzenstadler,
B., Rodriguez-Navas, G., Salinesi, C., Seyff, N., Venters, C., et al.: The Karlskrona
manifesto for sustainability design (2014). arXiv preprint arXiv:1410.6968
4. Grace, P., Hughes, D., Porter, B., Blair, G.S., Coulson, G., Taiani, F.: Experiences
with open overlays: a middleware approach to network heterogeneity. ACM SIGOPS
Oper. Syst. Rev. 42(4), 123–136 (2008)
5. Kazman, R., Asundi, J., Klein, M.: Quantifying the costs and benefits of archi-
tectural decisions. In: Proceedings of 23rd International Conference on Software
Engineering, pp. 297–306. IEEE Computer Society (2001)
6. Ozkaya, I., Kazman, R., Klein, M.: Quality-attribute based economic valuation of
architectural patterns. In: 1st International Workshop on Economics of Software
and Computation, ESC 2007, p. 5. IEEE (2007)
7. Trigeorgis, L.: Real Options: Managerial Flexibility and Strategy in Resource Allo-
cation. MIT Press, Cambridge (1996)
Software Architecture Documentation
Towards Seamless Analysis of Software
Interoperability: Automatic Identification
of Conceptual Constraints in API
Documentation
University of Kaiserslautern,
Gottlieb-Daimler-Straße 47, 67663 Kaiserslautern, Germany
{abukwaik,rombach}@cs.uni-kl.de, [email protected]
1 Introduction
Interoperating with externally developed black-box Web Service or Platform
APIs is restricted by their conceptual interoperability constraints (COINs),
which are defined as the characteristics controlling the exchange of data or func-
tionalities at the following conceptual classes: Syntax, Semantics, Structure,
2 Background
In this section we present a brief introduction to conceptual interoperability
constraints and the machine learning techniques used in our research.
3 Related Work
A number of previous works proposed automating the identification of some
interoperability constraints from API documents. Wu et al. [19] targeted parameter
dependency constraints, Pandita et al. [13] inferred formal specifications
for methods pre/post conditions, and Zhong et al. [20] recognized resource spec-
ifications. We complement these works and elaborate on the idea of Abukwaik et al. [2]
to extract a comprehensive set of conceptual interoperability constraints.
On a broader scope, other works proposed retrieving information to assist
software architects in different tasks. Anvaari and Zimmermann [3] retrieved
architectural knowledge from documents for architectural guidance purposes.
Figueiredo et al. [7] and Lopez et al. [12] searched for architectural knowledge in
emails, meeting notes, and wikis for proper documentation purposes. Although
these are important achievements, they do not meet our goal of assisting archi-
tects in interoperability analysis tasks.
In general, our work and the aforementioned related works intersect in the
utilization of natural language processing techniques to retrieve specific kinds
of information from documents. However, they used rule-based and ontology-
based retrieval approaches, while we explored ML classification algorithms that
are helpful for information retrieval in natural language text. In addition, our
systematic research contributed a reusable ground truth dataset for all COIN
types that enables the replication of related research and the comparison of results.
4 Research Methodology
In order to achieve the stated goal and answer the aforementioned questions,
we performed our research in two main parts as follows:
SC1: Mashup Score. This is a published statistical value1 for the popularity of
a Web Service API in terms of its integration frequency into new bigger APIs.
SC2: API Type. This can be either Web Service API or Platform API.
SC3: API Domain. This is the application domain for the considered API doc-
ument (e.g., social blogging, audio, software development, etc.).
Analysis Unit. Our case study has a holistic design, which means that we
have a single unit of analysis. This unit is “the sentences in API documents that
1 Programmable web: http://www.programmableweb.com/apis/directory.
Study Protocol. Our multiple-case study protocol includes three main activ-
ities that are adapted from the process proposed by Runeson [17]. The study
activities are case selection, case execution, and cross-case analysis, as we sum-
marize in Fig. 1 below and describe in detail in the next subsection.
Data Preparation. We started this step with fetching the API documentation
for the selected case from its online website. Then, we read the documents and
determined the webpages that had textual content offering conceptual software
description and constraints (e.g., the Overview, Introduction, Developer Guide,
API Reference, Summary, etc.). Subsequently, we started processing the text in
chosen webpages by performing the following:
- Automatic Filtering. We implemented a simple PHP script using the Simple HTML
DOM Parser2 library to filter out the text noise (i.e., headers, images, tags,
symbols, HTML code, and JavaScript code). Thus, we passed the URL of the
chosen webpage (input) to our script and got back a .txt file containing the
textual content of the webpage (output); an illustrative sketch of this step is
given after the filtering steps below.
- Manual Filtering. The automatic filtering fell short in excluding specific types
of noise (e.g., text and code mixture, references like “see also”, “for more infor-
mation”, “related topics”, copyrights, etc.). These sentences could mislead the
machine learning in our later research steps, so we removed them manually.
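The sketch referenced above is given here purely for illustration: it is a rough Python equivalent of the automatic filtering step, assuming the requests and BeautifulSoup libraries rather than the PHP Simple HTML DOM Parser actually used, and the URL is hypothetical.

# Illustrative Python equivalent of the automatic filtering step (not the authors' PHP code).
import requests
from bs4 import BeautifulSoup

def extract_text(url, out_file):
    # Fetch an API documentation page and write its plain textual content to a .txt file.
    html = requests.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop non-textual noise such as scripts, style blocks, and images.
    for tag in soup(["script", "style", "img"]):
        tag.decompose()
    lines = [line.strip() for line in soup.get_text(separator="\n").splitlines() if line.strip()]
    with open(out_file, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

# extract_text("https://example.com/api/overview", "overview.txt")  # hypothetical input URL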
2 Simple HTML DOM: http://simplehtmldom.sourceforge.net/.
Data Collection. In this step, we cut the content of the text file resulting from
the previous step into single sentences within our designed data extraction sheet
(.xls file) described in Subsect. 5.1. We completed all the fields of the
data sheet for each sentence except for the "COIN class" field, which we did within
the next step. Note that we maintained a data storage in which we stored the
original HTML webpages of the selected API documentations, their text files, and
their Excel sheets. This enables later replication of our work by other researchers,
as documentation changes frequently.
The aim of building these two versions of the corpus is to better investigate
the performance results of the ML algorithms in the later research experiments.
We explain this in more details in Sect. 6.
COIN-Share in the Contributed Ground Truth Dataset. In Fig. 2, we illustrate
the distribution of sentences among the COIN classes within the Seven-COIN
Corpus (on the left) and the Two-COIN Corpus (on the right). It is noticed that
the Not-COIN class, which expresses technical constraints rather than concep-
tual ones, is dominant over the other six classes (i.e., 42 %). The Dynamic
and Semantic classes have the second and third biggest shares. Remarkably, the
Structure, Syntax, Quality, and Context instances are very few with convergent
shares ranging between 1 % and 5 % of the dataset.
COIN-Share in the Cases. On a finer level, we have investigated the state of
COINs in each case rather than in the whole ground truth dataset. We found that
the content of each API document was focused on the Not-COIN, Dynamic and
Semantic classes similarly as in the aggregated findings on the complete dataset
seen in Fig. 2. For example, in the case of AppleWatch documentation, 40.8 % of
the content is for Not-COIN, 26.1 % for Dynamic, and 25 % for Semantic. In addition,
all cases devoted less than 10 % of their content to the Structure, Syntax, Quality,
and Context classes (e.g., Eclipse-Plugin gave them 8.5 %).
5.3 Discussion
Technical-Oriented API Documentations. The Not-COIN class accounts for
42 % of the total sentences in the investigated parts of the API documents that
were supposed to be conceptual (i.e., overview and introduction sections). A
noteworthy example is the GoogleMaps case, which took it to an extreme level
of focus on the technical information (i.e., 63 % of its content was under the Not-
COIN class, 11.2 % for Dynamic class, 13.1 % for Semantic class, and the rest is
shared by the other classes). Accordingly, it is important to raise a flag about
the lack of sufficient information about the conceptual aspects of interopera-
ble software units or APIs (e.g., usage context, terminology definitions, quality
attributes, etc.). This concern needs to be brought to the notice of researchers
and practitioners who care about the usefulness and adequacy of content in API
documentations. This obviously has a direct influence on the effectiveness of
architects and analysts in conceptual interoperability analysis activities.
of constraints within the verbose text. For example, it would be easier to skim
the text if the API goal were separated from its interaction protocol rather than
blended into long paragraphs. This would offer architects and analysts a
better experience and it would consequently enhance their analysis results.
- Patterns of the Semantic Class. We noticed repeated terms and organized them
into: “Input/Output Terms” (e.g., return, receive, display, response, send, result,
etc.), which appear in 18.8 % of the Semantic COIN sentences, and "Goal Terms" (e.g.,
allow, enable, let, grant, permit, facilitate, etc.), which appear in 16.4 %. For example,
the sentence “A dynamic notification interface lets you provide a more enriched
notification experience for the user” has a Semantic COIN stating a goal.
Researcher Bias. To build our ground truth dataset in a way that guarantees
result accuracy and impartiality, we replicated the manual classification of the
case sentences by two researchers separately, based on the COINs Model as
interpretation criterion. In multiple discussion sessions, the researchers compared
their classification decisions and resolved conflicts based on consensus.
Experiments Goal. This part of our research aims at answering the second
research question RQ2 that we stated in Sect. 4. In order to do so, we needed to
examine ML techniques to discover their potential in supporting architects and
analysts in automatically identifying the COINs in the text of API documents.
Feature Selection. After processing the text, we identified the most represen-
tative features or keywords for the COIN classes within the COINs Corpus using
the Bag-of-Words (BOWs) and N-Gram approaches, which we explained in the
background section. That is, each sentence was represented as a collection of
words. Then, each single word and each n-combination of words in the sentence
were considered as features, where N was between 1 and 3. For example, in a
3 Weka: http://www.cs.waikato.ac.nz/ml/weka.
Feature Modeling. In this stage, the whole COINs Corpus was transformed
into a mathematical model. That is, it was represented as a matrix, in which
headers contained all extracted features from the previous phase, while each row
represented a sentence of the corpus. Then, we weighted the matrix, where each
cell [row, column] held the weight of a feature in a specific sentence. For weight-
ing, we used the Term Frequency-Inverse Document Frequency (TF-IDF) [15],
which is often used for text retrieval. The result of this was the COINs Feature
Model (or the classification model), which is a reusable asset preserving knowl-
edge about conceptual interoperability constraints in API documents.
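The experiments themselves were run in Weka; purely as an illustration of the same pipeline (1-3 gram bag-of-words features, TF-IDF weighting, and a Complement Naive Bayes classifier evaluated with cross-validation), an equivalent sketch in Python with scikit-learn and hypothetical sentences could look as follows.

# Illustrative scikit-learn pipeline mirroring the Weka-based setup; the
# sentences and labels are hypothetical, not taken from the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

sentences = [
    "The service returns a JSON object describing the user profile.",
    "Requests must be authenticated before any resource can be accessed.",
    "This API lets you display enriched notifications to the user.",
    "The library requires Java 8 or higher and 512 MB of free memory.",
] * 5  # repeated only so that cross-validation has enough samples
labels = ["Semantic", "Dynamic", "Semantic", "Not-COIN"] * 5

# Features: 1-3 grams weighted with TF-IDF, fed to a Complement Naive Bayes classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), lowercase=True),
    ComplementNB(),
)
scores = cross_val_score(model, sentences, labels, cv=5, scoring="f1_macro")
print(round(scores.mean(), 2))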
with almost 11 % compared to the results in the Seven-COIN case with the
ComplementNaïveBayes algorithm. That is, the precision increased to 81.9 %,
recall to 82.0 %, and F-measure to 81.9 %. Similar to the previous case,
NaïveBayesMultinomialUpdateable came in second rank and the 2-Nearest
Neighbor algorithm had the worst results, as seen in Table 3. Note that we have
achieved an improvement in accuracy compared to our preliminary investigation
results [1], in which we had an F-measure of 62.2 % using the NaïveBayes algorithm.
Efficiency of Identifying the COINs Using ML Algorithms. Obviously, the
machine beats human performance in terms of the time spent analyzing
the text. As we mentioned earlier, analyzing the documents cost us about 44
working hours, while the machine took far less time. For example, training
and testing NaïveBayesMultinomialUpdateable took about 5 s on our complete
corpus of 2283 sentences. This efficiency would improve further on machines
with a faster and more powerful CPU (we ran the experiments on a machine with
an Intel Core i5-460M CPU at 2.5 GHz).
Fig. 4. Example of the tool identification for a Structure COIN in an API document
We implemented the prototype as a plugin for the Chrome web browser using
Java and JavaScript languages. The functionality is offered as a Web Service and
all communication is over the Simple Object Access Protocol (SOAP). The tool
design includes: (1) a front-end component that we developed using JavaScript to
provide the graphical user interface; and (2) a back-end component that we developed
using Java and the Weka APIs, responsible for locating our service on the server,
passing it the input sentence, and carrying back the response.
pursued by this work was to utilize ML algorithms for effective and efficient iden-
tification of conceptual interoperability constraints in text of API documents.
Our systematic empirically-based research included a multiple case study that
resulted in the ground truth dataset. Then, we built a ML classification model
that we evaluated in experiments using different ML algorithms. The results
showed that we achieved up to 70.0 % accuracy for identifying seven classes of
interoperability constraints, and it increased to 81.9 % for two classes.
In the future, we plan to automate the manual filtering part of the data
preparation. We will also analyze further API documents to advance the gen-
eralizability of our results. This would enrich the ground truth dataset as well,
allowing better training for the ML algorithms and accordingly better accuracy
in identifying the conceptual interoperability constraints. With regard to the
tool, we will extend it to generate full reports about all interoperability con-
straints in a webpage and to collect instant feedback from users about automa-
tion results. In addition, we plan to empirically evaluate our ideas in industrial
case studies.
References
1. Abukwaik, H., Abujayyab, M., Humayoun, S.R., Rombach, D.: Extracting concep-
tual interoperability constraints from API documentation using machine learning.
In: ICSE 2016 (2016)
2. Abukwaik, H., Naab, M., Rombach, D.: A proactive support for conceptual inter-
operability analysis in software systems. In: WICSA 2015 (2015)
3. Anvaari, M., Zimmermann, O.: Semi-automated design guidance enhancer
(SADGE): a framework for architectural guidance development. In: Avgeriou, P.,
Zdun, U. (eds.) ECSA 2014. LNCS, vol. 8627, pp. 41–49. Springer, Heidelberg
(2014). doi:10.1007/978-3-319-09970-5 4
4. Banko, M., Brill, E.: Scaling to very very large corpora for natural language dis-
ambiguation. In: Proceedings of 39th Annual Meeting of the Association for Com-
putational Linguistics (2001)
5. Basili, V.R., Caldiera, G., Rombach, H.D.: The goal question metric approach.
Encyclopedia of Software Engineering 2, 528–532 (1994)
6. Chu, W., Lin, T.Y.: Foundations and advances in data mining (2005)
7. Figueiredo, A.M., Dos Reis, J.C., Rodrigues, M.A.: Improving access to software
architecture knowledge: an ontology-based search approach (2012)
8. Garlan, D., Allen, R., Ockerbloom, J.: Architectural mismatch or why it’s hard to
build systems out of existing parts. In: ICSE 1995 (1995)
9. Hallé, S., Bultan, T., Hughes, G., Alkhalaf, M., Villemaire, R.: Runtime verification
of web service interface contracts. Computer 43(3), 59–66 (2010)
10. John, G.H., Langley, P.: Estimating continuous distributions in Bayesian classifiers.
In: Conference on Uncertainty in Artificial Intelligence (1995)
11. Kohavi, R., et al.: A study of cross-validation and bootstrap for accuracy estimation
and model selection. In: IJCAI, vol. 14 (1995)
12. López, C., Codocedo, V., Astudillo, H., Cysneiros, L.M.: Bridging the gap between
software architecture rationale formalisms and actual architecture documents: an
ontology-driven approach. Sci. Comput. Program. 77, 66–80 (2012)
13. Pandita, R., Xiao, X., Zhong, H., Xie, T., Oney, S., Paradkar, A.: Inferring method
specifications from natural language API descriptions. In: ICSE 2012 (2012)
14. Powers, D.M.: Evaluation: from precision, recall and F-measure to ROC, informed-
ness, markedness and correlation (2011)
15. Robertson, S.: Understanding inverse document frequency: on theoretical argu-
ments for IDF. J. Documentation 60(5), 503–520 (2004)
16. Oakes, M.P., Ji, M. (eds.): Quantitative Methods in Corpus-Based Translation
Studies: A Practical Guide to Descriptive Translation Research, vol. 51. John
Benjamins Publishing, Amsterdam (2012)
17. Runeson, P., Höst, M.: Guidelines for conducting and reporting case study research
in software engineering. Empirical Softw. Eng. 14(2), 131–164 (2009)
18. Tong, S., Koller, D.: Support vector machine active learning with applications to
text classification. J. Mach. Learn. Res. 2, 45–66 (2001)
19. Wu, Q., Wu, L., Liang, G., Wang, Q., Xie, T., Mei, H.: Inferring dependency
constraints on parameters for web services. In: WWW 2013 (2013)
20. Zhong, H., Zhang, L., Xie, T., Mei, H.: Inferring resource specifications from nat-
ural language API documentation. In: ASE 2009 (2009)
Design Decision Documentation: A Literature
Overview
1 Introduction
Software architecture comprises non-trivial design decisions, and their docu-
mentation is crucial for improved system evolution [1–6]. However, while there is
usually at least some kind of architectural model available, the underlying design
decisions are seldom documented in practice [7–9]. On the other side, there is
plenty of research work on the documentation of design decisions, whose practical
adoption seems to be still sparse [8,10–15]. To understand this issue we have
conducted a literature review on documentation of design decisions covering 96
publications dating from 2004 to 2015.
In this work, we treat “software architecture” and “software design” as syn-
onyms. Under software architecture we understand "a set of principal design
decisions made about the system" [2], or "a structure or structures of the
system, which comprise software elements, the externally visible properties of
those elements, and the relationships among them" [16]. An architectural design
decision is "an outcome of a design process during the initial construction or
the evolution of a software system, which is a primary representation of archi-
tecture” (adopted from Tyree and Akerman [17], Jansen and Bosch [18] and
Kruchten [19]).
3 Overview Methodology
To structure the overview, classification dimensions are required that are (1) rel-
evant for the industrial adoption of the decision documentation approaches, and
(2) obtainable from publications. Inspired by the related work in Sect. 2, we propose
the following six dimensions: 1. Goal – what is the goal of the decision documentation.
2. Formalisation – what formalisation approach is used in the publication.
3. Extent – does the work attempt to capture all decisions or does it apply
some selection criteria. 4. Context – what additional artefacts or trace links to
other artefacts are captured together with the decisions. 5. Tool-support – is
there any tool support, and if yes, of what kind. 6. Evaluation – what type of
evaluation is described in the publication (Table 1).
4 Overview Results
We have found 432 publications that matched our keywords and the search
string. The preliminary evaluation in process step 4 reduced the number to
160 publications, and a full-text evaluation in step 5 reduced the number to
96 publications that truly matched the overview scope. The top-represented
venues are (Fig. 1): ECSA (29), SHARK (19), WICSA (15) and QoSA (10).
Furthermore, 56 papers were identified as "publication lines" and clustered into
18 main representative publications. Thus, overall, we report on 58 unique
approaches in the remainder of this section. A unique publication does not
mean unique authors, but rather a unique approach presented in the publication.
1. Goal. While all the publications deal with documentation of design deci-
sions, they follow different goals: a. Documentation/Capture – focus solely on
documentation of architectural knowledge including decisions, only on design
decisions and/or on design rationale. b. Consistency/Compliance – documenta-
tion of decisions in order to enable architecture consistency or compliance checks.
as a pure industrial case study. c. Research Case Study – a research case study.
d. Research Example – a research example, which in contrast to a case study
means a smaller application, usually in a detached context. e. Empirical –
an empirical evaluation of the proposed approach. f. Not available – no evaluation
or no information available.
Publications reporting several types of evaluation were added to multiple cat-
egories. While evaluation is important, publications with no evaluation or with
evaluation based on a research example comprise 48 %. About 28 % of the pub-
lications report evaluation in an industrial context and 7 % report real-life case studies,
which is a prerequisite for industrial applicability. However, despite the
reported positive feedback, there seem to be no follow-up actions or reports on
a more long-term application by the involved companies. We can conclude that
the adoption of approaches in organisations likely remains a problem even if the
first positive results were achieved.
Q2. Brownfield Support. Few publications explicitly define whether they tar-
get a new system development or an existing system. Implicitly, the majority of
the approaches deal with new development (so-called “greenfield” development),
except for [49–51]. To the best of our knowledge, system development in industry often
includes brownfield development. Therefore, we see the support of legacy systems as a highly
reasonable research direction ("brownfield" support).
of stages of design decisions is a common feature (e.g. via status as new, accepted
or deprecated), documentation of decision evolution is not sufficiently covered.
Some exceptions are the works by Zimmermann et al. [34], Dragomir et al. [11], van
Heesch et al. [52], Szlenk et al. [53] and Nowak and Pautasso [54].
5 Discussion
This section discusses limitations of the overview and the lessons learned.
5.1 Limitations
We have learned several lessons that we would like to share with the community:
Acknowledgement. The authors would like to thank Ralf Reussner for his valuable
input. The work has been partially supported by the FP7 European project Seaclouds.
References
1. Ozkaya, I., Wallin, P., Axelsson, J.: Architecture knowledge management during
system evolution: observations from practitioners. In: SHARK (2010)
2. Taylor, R.N., Medvidovic, N., Dashofy, E.M.: Software Architecture: Foundations,
Theory, and Practice. Wiley, New York (2009)
3. Kruchten, P., Capilla, R., Dueñas, J.C.: The decision view's role in software architec-
ture practice. IEEE Softw. 26, 36–42 (2009)
4. Burge, J.E., Carroll, J.M., McCall, R., Mistrik, I.: Rationale-Based Software Engi-
neering. Springer, Heidelberg (2008)
5. Tang, A., Babar, M.A., Gorton, I., Han, J.: A survey of architecture design ratio-
nale. J. Syst. Softw. 79, 1792–1804 (2006)
6. Babar, M., Tang, A., Gorton, I., Han, J.: Industrial perspective on the usefulness
of design rationale for software maintenance: a survey. In: 2006 6th International
Conference on Quality Software (QSIC), pp. 201–208 (2006)
7. Manteuffel, C., Tofan, D., Koziolek, H., Goldschmidt, T., Avgeriou, P.: Indus-
trial implementation of a documentation framework for architectural decisions. In:
WICSA (2014)
8. Nkwocha, A., Hall, J.G., Rapanotti, L.: Design rationale capture for process
improvement in the globalised enterprise: an industrial study. Softw. Syst. Model.
12, 825–845 (2013)
9. Burge, J.E., Brown, D.C.: Software engineering using RATionale. J. Syst. Softw.
81, 395–413 (2008)
10. Anvaari, M., Zimmermann, O.: Semi-automated Design Guidance Enhancer
(SADGE): a framework for architectural guidance development. In: Avgeriou, P.,
Zdun, U. (eds.) ECSA 2014. LNCS, vol. 8627, pp. 41–49. Springer, Heidelberg
(2014). doi:10.1007/978-3-319-09970-5_4
11. Dragomir, A., Lichter, H., Budau, T.: Systematic architectural decision manage-
ment: a process-based approach. In: WICSA (2014)
12. Falessi, D., Briand, L.C., Cantone, G., Capilla, R., Kruchten, P.: The value of
design rationale information. ACM Trans. Softw. Eng. Methodol. 22(3), 21 (2013)
13. Tofan, D., Galster, M., Avgeriou, P.: Difficulty of architectural decisions – a survey
with professional architects. In: Drira, K. (ed.) ECSA 2013. LNCS, vol. 7957, pp.
192–199. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39031-9_17
14. Tang, A., Avgeriou, P., Jansen, A., Capilla, R., Babar, M.A.: A comparative study
of architecture knowledge management tools. J. Syst. Softw. 83, 352–370 (2010)
15. Babar, M., de Boer, R., Dingsoyr, T., Farenhorst, R.: Architectural knowledge
management strategies: approaches in research and industry. In: SHARK (2007)
16. Rozanski, N., Woods, E.: Software Systems Architecture: Working with Stake-
holders Using Viewpoints and Perspectives. Addison-Wesley Professional, Upper
Saddle River (2009)
17. Tyree, J., Akerman, A.: Architecture decisions: demystifying architecture. IEEE
Softw. 22(2), 19–27 (2005)
18. Jansen, A., Bosch, J.: Software architecture as a set of architectural design deci-
sions. In: 5th Working IEEE/IFIP Conference on Software Architecture, WICSA
(2005)
19. Kruchten, P.: An ontology of architectural design decisions in software intensive
systems. In: 2nd Groningen Workshop on Software Variability (2004)
20. Regli, W., Hu, X., Atwood, M., Sun, W.: A survey of design rationale systems:
approaches, representation, capture and retrieval. Eng. Comput. 16, 209–235
(2000)
21. Shahin, M., Liang, P., Khayyambashi, M.: Architectural design decision: existing
models and tools. In: WICSA/ECSA (2009)
22. Farenhorst, R., Lago, P., Vliet, H.: Effective tool support for architectural knowl-
edge sharing. In: Oquendo, F. (ed.) ECSA 2007. LNCS, vol. 4758, pp. 123–138.
Springer, Heidelberg (2007). doi:10.1007/978-3-540-75132-8_11
23. Bu, W., Tang, A., Han, J.: An analysis of decision-centric architectural design
approaches. In: SHARK (2009)
24. Henttonen, K., Matinlassi, M.: Open source based tools for sharing and reuse of
software architectural knowledge. In: WICSA/ECSA (2009)
25. Hoorn, J.F., Farenhorst, R., Lago, P., van Vliet, H.: The lonesome architect. J.
Syst. Softw. 84(9), 1424–1435 (2011)
26. Kitchenham, B.: Procedures for performing systematic reviews. Keele University
Technical report TR/SE-0401 and NICTA Technical report 0400011T.1 (2004)
27. de Boer, R.C., Farenhorst, R.: In search of ‘architectural knowledge’. In: SHARK
(2008)
28. Koziolek, H.: Performance evaluation of component-based software systems: a sur-
vey. Perform. Eval. 67, 634–658 (2010)
29. Aleti, A., Buhnova, B., Grunske, L., Koziolek, A., Meedeniya, I.: Software archi-
tecture optimization methods: a systematic literature review. IEEE Trans. Softw.
Eng. 39, 658–683 (2013)
70. Dı́az, J., Pérez, J., Garbajosa, J., Wolf, A.L.: Change impact analysis in
product-line architectures. In: Crnkovic, I., Gruhn, V., Book, M. (eds.) ECSA
2011. LNCS, vol. 6903, pp. 114–129. Springer, Heidelberg (2011). doi:10.1007/
978-3-642-23798-0_12
71. Egyed, A., Wile, D.: Support for managing design-time decisions. IEEE Trans.
Softw. Eng. 32, 299–314 (2006)
72. Garcia, A., Batista, T., Rashid, A., Sant’Anna, C.: Driving and managing archi-
tectural decisions with aspects. SIGSOFT Softw. Eng. Notes 31, 6 (2006)
73. Gerdes, S., Lehnert, S., Riebisch, M.: Combining architectural design decisions and
legacy system evolution. In: Avgeriou, P., Zdun, U. (eds.) ECSA 2014. LNCS, vol.
8627, pp. 50–57. Springer, Heidelberg (2014). doi:10.1007/978-3-319-09970-5_5
74. Gu, Q., Lago, P.: SOA process decisions: new challenges in architectural knowledge
modeling. In: SHARK (2008)
75. Gu, Q., van Vliet, H.: SOA decision making - what do we need to know. In: SHARK
(2009)
76. Habli, I., Kelly, T.: Capturing and replaying architectural knowledge through
derivational analogy. In: SHARK (2007)
77. Harrison, N.B., Avgeriou, P., Zdun, U.: Using patterns to capture architectural
decisions. IEEE Softw. 24, 38–45 (2007)
78. Jansen, A., Bosch, J., Avgeriou, P.: Documenting after the fact: recovering archi-
tectural design decisions. J. Syst. Softw. 81, 536–557 (2008)
79. Jansen, A., Avgeriou, P., van der Ven, J.S.: Enriching software architecture docu-
mentation. J. Syst. Softw. 82, 1232–1248 (2009)
80. Lee, L., Kruchten, P.: A tool to visualize architectural design decisions. In: Becker,
S., Plasil, F., Reussner, R. (eds.) QoSA 2008. LNCS, vol. 5281, pp. 43–54. Springer,
Heidelberg (2008). doi:10.1007/978-3-540-87879-7_3
81. Lytra, I., Tran, H., Zdun, U.: Supporting consistency between architectural design
decisions and component models through reusable architectural knowledge trans-
formations. In: Drira, K. (ed.) ECSA 2013. LNCS, vol. 7957, pp. 224–239. Springer,
Heidelberg (2013). doi:10.1007/978-3-642-39031-9_20
82. Cuesta, C.E., Navarro, E., Perry, D.E., Roda, C.: Evolution styles: using archi-
tectural knowledge as an evolution driver. J. Softw. Evol. Process 25(9), 957–980
(2013)
83. Sinnema, M., van der Ven, J.S., Deelstra, S.: Using variability modeling principles
to capture architectural knowledge. SIGSOFT Softw. Eng. Notes 31, 5 (2006)
84. Soliman, M., Riebisch, M., Zdun, U.: Enriching architecture knowledge with tech-
nology design decisions. In: WICSA (2015)
85. Tibermacine, C., Zernadji, T.: Supervising the evolution of web service orches-
trations using quality requirements. In: Crnkovic, I., Gruhn, V., Book, M. (eds.)
ECSA 2011. LNCS, vol. 6903, pp. 1–16. Springer, Heidelberg (2011). doi:10.1007/
978-3-642-23798-0_1
86. Tibermacine, C., Dony, C., Sadou, S., Fabresse, L.: Software architecture con-
straints as customizable, reusable and composable entities. In: Babar, M.A., Gor-
ton, I. (eds.) ECSA 2010. LNCS, vol. 6285, pp. 505–509. Springer, Heidelberg
(2010). doi:10.1007/978-3-642-15114-9_51
87. Tofan, D., Galster, M., Avgeriou, P.: Capturing tacit architectural knowledge using
the repertory grid technique (NIER track). In: ICSE (2011)
88. Trujillo, S., Azanza, M., Diaz, O., Capilla, R.: Exploring extensibility of architec-
tural design decisions. In: SHARK (2007)
89. Wu, W., Kelly, T.: Managing architectural design decisions for safety-critical soft-
ware systems. In: Hofmeister, C., Crnkovic, I., Reussner, R. (eds.) QoSA 2006.
LNCS, vol. 4214, pp. 59–77. Springer, Heidelberg (2006). doi:10.1007/11921998_9
90. Zdun, U., Avgeriou, P., Hentrich, C., Dustdar, S.: Architecting as decision making
with patterns and primitives. In: SHARK (2008)
91. Zhu, L., Gorton, I.: UML profiles for design decisions and non-functional require-
ments. In: SHARK (2007)
92. Li, Z., Liang, P., Avgeriou, P.: Architectural technical debt identification based on
architecture decisions and change scenarios. In: WICSA (2015)
93. de Boer, R., Lago, P., Telea, A., van Vliet, H.: Ontology-driven visualization of
architectural design decisions. In: WICSA/ECSA (2009)
94. Burge, J.E., Brown, D.C.: SEURAT: integrated rationale management. In: Pro-
ceedings of the 30th International Conference on Software Engineering, ICSE
(2008)
Task-Specific Architecture Documentation
for Developers
Why Separation of Concerns in Architecture
Documentation is Counterproductive for Developers
1 Introduction
development is becoming an increasingly distributed and globalized activity, which often delays or even prevents direct communication. In such settings, architecture documentation is a vital communication vehicle that allows a consistent realization of the architecture.
Architecture documentation for such systems can become large. In our experience,
for large-scale projects several hundred pages are realistic. Working with such
documentation can be difficult, in particular for developers, who use it as the basis for
their implementation activities, for two main reasons:
First, the perspectives of architects and developers on the system diverge. Archi-
tects focus on the system as a whole, designing the overall principles of the system for a
multitude of stakeholders. They break down the big and complex problem of the
complete system into smaller parts, i.e. apply the principles of divide and conquer and
separation of concerns, to create concepts that address architecture drivers in a con-
sistent and uniform way. Examples are concepts for exception handling, validation,
scaling, etc. For a medium-sized project this can easily lead to 20–50 different concepts.
Our central insight is that while separation of concerns is vital for architects, who
deal with a problem too large to handle as a whole, it is actually counter-productive
for a developer working on a task with a narrow focus on single entities, because the
separated concerns need to be located and synthesized again. When developers
implement single modules, they need to know and consider several architecture con-
cepts and realize them in their specific context. Such concepts are normally not explicitly described for every element that needs to realize them, but once, in a general way, and then instantiated throughout the system (e.g. which interfaces to implement in which way for the security concept, for transaction handling, …). This means that every single developer needs to be aware of, or search for, the concepts relevant to the development task at hand (cf. Fig. 1).
Both aspects lead to an architecture realization that is, on the one hand, less efficient, because developers are required to search for and identify relevant concepts in a large amount of architecture information. On the other hand, it is error-prone, because developers under high time pressure might not take the time to consult the architecture documentation, causing architecture violations and consequently architecture erosion [2].
To address these problems, we propose an approach of automatically generating
architecture documentation specific for tasks of individual developers. An overview of
the approach is presented in Sect. 3 before we describe its details in Sect. 4. To get a
better understanding, we present an example in Sect. 5 and conclude in Sect. 6 with
validation and future work.
2 Related Work
3 Approach Overview
internal structure, interfaces to provide, location in the source code, and relations to create. These pieces of information are combined so that a meaningful view on a very specific part of the system is created. Thus, the architecture documentation for developers contains only a minimum of overhead information, in a form that allows direct realization. Manually creating such documentation is economically infeasible; hence, task-specific architecture documentation needs to be generated fully automatically by a tool.
highlight that are specific to our approach and influence the documentation generation. First, when we design architectures, we explicitly differentiate between runtime and development time. We often see people drawing boxes and lines, mixing the two arbitrarily, without understanding the differences. Runtime elements (components) can be instantiated and deployed multiple times. They are realized with development-time entities (modules), which for example represent classes. Different mappings are possible between these entities. Components are normally realized by multiple modules; to optimize reuse, one module can be used in the realization of many components.
The second aspect is template elements. They result from the idea of making
architecture modeling more efficient by grouping similarities. To eliminate the neces-
sity to describe a concept every time it is applied in the system, template elements are
used to describe a concept once. For example, if, as a part of the validation concept in a
system, we wanted to express that for every backend service in the system, there has to
be a corresponding validator component to validate the data received from clients, we
could model the template components T_Backend Service and T_Backend Service
Validator as shown in Fig. 3 and link them to concrete instances. With this idea, the
required amount of modeling to describe the architecture concepts of the system can be
reduced.
(Figure content: the template «Component»s T_Backend Service and T_Backend Service Validator are connected by a validate client data «use» relation; the «Task» Implement staff data service, «assigned to» the «Developer» Daniel, «create»s the Staff Data Service.)
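To make the idea of template elements more concrete, the following Java sketch (hypothetical classes and names, not the authors' modeling tool) shows how a concept that is attached once to a template element can be resolved for every concrete component linked to that template, so the concept never has to be repeated per element.

import java.util.ArrayList;
import java.util.List;

// A component in the architecture model; template components carry concepts
// that apply to every concrete component linked to them.
class ModelComponent {
    final String name;
    final List<String> concepts = new ArrayList<>();          // e.g. "use a corresponding validator"
    final List<ModelComponent> templates = new ArrayList<>(); // templates this component instantiates

    ModelComponent(String name) { this.name = name; }

    // A concrete component inherits all concepts described on its templates.
    List<String> effectiveConcepts() {
        List<String> all = new ArrayList<>(concepts);
        for (ModelComponent t : templates) {
            all.addAll(t.effectiveConcepts());
        }
        return all;
    }
}

class TemplateExample {
    public static void main(String[] args) {
        ModelComponent tBackendService = new ModelComponent("T_Backend Service");
        tBackendService.concepts.add("provide a T_Backend Service Validator to validate client data");
        ModelComponent staffDataService = new ModelComponent("Staff Data Service");
        staffDataService.templates.add(tBackendService);      // concept modeled only once, linked here
        System.out.println(staffDataService.effectiveConcepts());
    }
}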
4 Detailed Approach
4.2 Selection
Selection is the automatic process of analyzing the architecture model and identifying the model elements that are relevant for the subsequent steps. The starting point for the selection is always one or more focus elements. The following elements are included in the selection process: developers need to know all details about the focus elements, so all their occurrences are included with their properties and descriptions. The hierarchy of the elements' template elements needs to be included because all concepts involving the templates are relevant for the focus elements as well. The mapped development-time elements are included because they describe how an element needs to be realized. In our modeling approach, design decisions are created in the architecture model as well, and relevant ones are included in the selection process. Finally, descriptions of all elements and diagrams are included as well.
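A minimal Java sketch of this selection step is given below (hypothetical model classes, not the authors' implementation); starting from the focus elements, it collects the template hierarchy and the mapped development-time elements whose descriptions and decisions then end up in the generated documentation.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class ModelElement {
    String name;
    List<ModelElement> templates = new ArrayList<>();      // template hierarchy of the element
    List<ModelElement> mappedModules = new ArrayList<>();  // mapped development-time elements
    List<String> designDecisions = new ArrayList<>();      // decisions recorded in the model
}

class Selection {
    static Set<ModelElement> select(Collection<ModelElement> focusElements) {
        Set<ModelElement> selected = new LinkedHashSet<>();
        Deque<ModelElement> work = new ArrayDeque<>(focusElements);
        while (!work.isEmpty()) {
            ModelElement e = work.pop();
            if (!selected.add(e)) continue;       // already selected
            work.addAll(e.templates);             // concepts defined on templates are relevant too
            work.addAll(e.mappedModules);         // they describe how the element is realized
        }
        return selected;                          // descriptions and diagrams of these elements follow
    }
}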
5 Example
The following example illustrates the main ideas of the approach. The context is a farm
management system, a system with which farmers manage and plan machines, grain
supply, etc. [12]. The next user story to be implemented in the project is managing
staff. This includes tasks to create the corresponding database structure, the user interfaces, etc. One task for a developer is to implement the backend service (cf.
Fig. 4).
Figure 5 shows an overview of the different kinds of services that are provided by
the backend. The services that provide the different kinds of data to the applications
running on the Farming Client are represented by the template component T_Data
Service.
The relevance of this template becomes clear when looking at Fig. 6. The diagram
shows the different kinds of services used in the application and an explicit mapping to
DT. In this case, for every service, a corresponding package in the services package, together with the processor and configuration classes, has to be created. As one example of an architecture concept, Fig. 3 shows a simple validation concept that prescribes every backend service to use a corresponding validator component.
The result of the generation process is depicted in Fig. 7. Colors denote corre-
sponding elements. The focus element Staff Data Service has been shifted to DT
according to the explicit mapping shown in Fig. 6, resulting in the Staff Data Service
package with the two contained modules. The relation target elements of these two
modules, the two framework interfaces, have been integrated. The interface provided by
the T_Data Service has been shifted and integrated with a realization relation. The
validation concept has been interleaved by adding the shifted validator module. Where
possible, the names of templates have been replaced by the name of the focus element.
«Package»
FarmingBackend
«Package»
Services
«Package»
Service Framework
«Package»
validate client data «use»
Staff Data Service
References
1. Fairbanks, G.: Just Enough Software Architecture: A Risk-Driven Approach. Marshall &
Brainerd (2010)
2. Perry, D.E., Wolf, A.L.: Foundations for the study of software architecture. ACM SIGSOFT
Softw. Eng. Notes 17, 40–52 (1992)
3. Clements, P., Bachmann, F., Bass, L., Garlan, D., Ivers, J., Little, R., Merson, P., Nord, R.,
Stafford, J.: Documenting Software Architectures: Views and Beyond. Addison-Wesley
Professional, Boston (2002)
4. Hofmeister, C., Nord, R., Soni, D.: Applied Software Architecture. Addison-Wesley
Professional, Boston (1999)
5. Bayer, J., Muthig, D.: A view-based approach for improving software documentation
practices. In: 13th Annual IEEE International Symposium and Workshop on Engineering of
Computer-Based Systems, ECBS 2006, pp. 269–278 (10 p.) (2006)
6. Capilla, R., Jansen, A., Tang, A., Avgeriou, P., Babar, M.A.: 10 years of software
architecture knowledge management: practice and future. J. Syst. Softw. 116, 191–205
(2015)
7. Farenhorst, R., Lago, P., van Vliet, H.: EAGLE: effective tool support for sharing
architectural knowledge. Int. J. Coop. Inf. Syst. 16, 413–437 (2007)
8. Chen, L., Babar, M.A., Liang, H.: Model-centered customizable architectural design
decisions management. In: 2010 21st Australian Software Engineering Conference, pp. 23–
32. IEEE (2010)
9. Manteuffel, C., Tofan, D., Koziolek, H., Goldschmidt, T., Avgeriou, P.: Industrial
implementation of a documentation framework for architectural decisions. In: 2014
IEEE/IFIP Conference on Software Architecture, pp. 225–234. IEEE (2014)
10. Rost, D.: Generation of task-specific architecture documentation for developers. In: Proceedings
of the 17th International Doctoral Symposium on Components and Architecture - WCOP 2012,
p. 1. ACM Press, New York (2012)
11. Rost, D., Naab, M., Lima, C., Flach Garcia Chavez, C.: Software architecture documentation
for developers: a survey. In: Drira, K. (ed.) ECSA 2013. LNCS, vol. 7957, pp. 72–88.
Springer, Heidelberg (2013). doi:10.1007/978-3-642-39031-9_7
12. Naab, M., Braun, S., Lenhart, T., Hess, S., Eitel, A., Magin, D., Carbon, R., Kiefer, F.: Why
data needs more attention in architecture design - experiences from prototyping a large-scale
mobile app ecosystem. In: 2015 12th Working IEEE/IFIP Conference on Software
Architecture, pp. 75–84. IEEE (2015)
Runtime Architecture
Architectural Homeostasis in Self-Adaptive
Software-Intensive Cyber-Physical Systems
1 Introduction
Cyber-Physical Systems (CPS) [1] are large, complex systems that rely more and more on software for their operation; they are becoming software-intensive CPS [2, 3]. Such systems, e.g., intelligent transportation systems and smart grids, typically comprise several million lines of code. A high-level view, achieved by focusing on software architecture abstractions, is thus becoming increasingly important for dealing with such scale and complexity during development, deployment, and maintenance.
These systems continuously sense physical properties in order to actuate physical processes. Due to their close connection to the physical world, which is hard to predict at design time and to control at runtime, they encounter a high level of uncertainty in their
Fig. 2. Excerpt from DSL of DEECo components of the cleaning robots example.
Components do not interact with each other directly. Their interaction is dependent on
their membership in dynamic groups called ensembles. An ensemble is dynamically
created/disbanded depending on which components satisfy its membership condition.
The key task of an ensemble is to periodically exchange knowledge parts between its
coordinator and member components (determined by their roles). At design-time, an
ensemble specification consists of (i) ensemble roles that the member and coordinator components should feature, (ii) a membership condition prescribing the condition under which components should interact (Fig. 3, lines 45–51), and (iii) a knowledge
exchange function, which specifies the knowledge exchange that takes place between the
components in the ensemble (lines 52–53). For instance, the components featuring the
Dockable role (e.g. Robot) can form an ensemble with components featuring the
Dock role (i.e. with a DockingStation) to coordinate on the docking activity (lines
36–37).
Matching of a component role and an ensemble role can be interpreted as estab-
lishing a connector in a classical component model; such a connector lasts only until
the next evaluation of the membership condition. This semantics provides for a software architecture that is dynamically adapted to the components' current knowledge values.
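As a rough illustration (hypothetical Java classes; this is not the actual DEECo/jDEECo API), an ensemble can be thought of as a membership predicate over the knowledge of a coordinator and a member plus a knowledge exchange that the runtime executes periodically for every pair satisfying the predicate.

import java.util.Map;

// Component knowledge represented as a plain map of named knowledge fields.
interface Ensemble {
    boolean membership(Map<String, Object> coordinator, Map<String, Object> member);
    void knowledgeExchange(Map<String, Object> coordinator, Map<String, Object> member);
}

// Hypothetical docking ensemble: a robot (member) interacts with a docking
// station (coordinator) only while it is close enough to it.
class DockingEnsemble implements Ensemble {
    public boolean membership(Map<String, Object> dock, Map<String, Object> robot) {
        return (double) robot.get("distanceToDock") < 5.0;
    }
    public void knowledgeExchange(Map<String, Object> dock, Map<String, Object> robot) {
        robot.put("dockFree", dock.get("free"));   // copy the station's availability to the robot
    }
}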
Self-adaptation in DEECo. The semantics of switching modes within a component
reflects the idea of the MAPE-K self-adaptation loop (Monitor-Analyze-Plan-Execute
over Knowledge) [15]: Consider a component C and its associated mode-state machine
MC. In MC, the transition guards from the current state are periodically evaluated based
on monitoring the variables (knowledge parts featured in the guards) of C; then it is
analyzed which of the eligible transitions should be selected so that the next mode is planned. Finally, the next mode is brought to action (executed).
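The following Java sketch (hypothetical names, not the authors' implementation) illustrates one periodic step of this loop: the guards of the transitions leaving the current mode are evaluated over the monitored knowledge, and the first eligible transition determines the next mode.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// A transition of the mode-state machine: a target mode guarded by a condition
// over the component knowledge.
class ModeTransition {
    final String targetMode;
    final Predicate<Map<String, Object>> guard;

    ModeTransition(String targetMode, Predicate<Map<String, Object>> guard) {
        this.targetMode = targetMode;
        this.guard = guard;
    }
}

class ModeStateMachine {
    String currentMode;
    Map<String, List<ModeTransition>> outgoing;   // transitions leaving each mode

    // Monitor/analyze the guards, plan the next mode, and execute the switch.
    void step(Map<String, Object> knowledge) {
        for (ModeTransition t : outgoing.getOrDefault(currentMode, List.of())) {
            if (t.guard.test(knowledge)) {
                currentMode = t.targetMode;
                return;
            }
        }
        // no guard holds: stay in the current mode
    }
}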
Fig. 3. Excerpt from DSL of DEECo ensembles of the cleaning robots example.
a. Data collection
(i) On a real system
I. Preventively
II. Ex-post
(ii) By simulation
b. Acquiring dependency relation
(i) Regression/machine learning
(ii) Empirical knowledge
Fig. 6. DSL excerpt from specification of collaborative sensing ensemble of the cleaning robots.
Fig. 7. Mode-state machine capturing the mode switching logic of the Robot component. Each
state (mode) is associated with several processes. Transitions are guarded by conditions upon the
Robot’s knowledge. Changes introduced by the EMS H-mechanism are marked in (bold) green –
transitions are now guarded by a condition/probability pair. States that are not allowed to have
incoming transitions are marked in grey background. (Color figure online)
Fig. 8. Scenarios considered in the controlled experiment. Simulation duration is 600 s (with
extra 300 s “learning phase” in scenarios 7 & 9), environment size is 20 × 20, number of robots
is 4.
1 Available at: https://github.com/d3scomp/uncertain-architectures.
Scenario 1 represents the vanilla case (no faults – no H-mechanism active), acting
as the baseline. Not surprisingly, in other scenarios the 90th percentile of the time to
clean a tile increases when a fault occurs and is not counteracted by an H-mechanism
(scenarios 2, 4, 6, 8). When an H-mechanism counteracts the fault (scenarios 3, 5, 7, 9),
the overall utility improves, but does not reach that of the baseline scenario. Below we comment in more detail on CS and EMS, since the application of FCIA was straightforward.
As to the application of CS (scenario 2), a dependency relation (Sect. 3.1) was
identified such that the closeness of the positions of Robot components implied
similar values in their dirtinessMaps. This resulted in the creation and deployment
of the DirtinessMapExchange of Fig. 6. The metrics used, the tolerable distances, and the confidence levels are depicted in Fig. 10.
Fig. 10. Distance metrics, tolerable distances, and confidence levels in Robot knowledge fields.
The effect of EMS is illustrated in scenarios 6 and 7. In both scenarios, only a single docking station is active, corresponding to the situation that one of the two docking stations becomes unavailable at run-time. When EMS is applied (scenario 7), due to the introduced probabilistic mode switching, the robots started visiting the docking station at different times. Hence, the overall queueing time was reduced and the overall utility
increased. EMS needs time to auto-calibrate (set to 300 s) as it searches for the
probability value for the added transitions that yields the highest fitness value following
a simulated annealing algorithm. In Fig. 9, the results have been split into the learning
phase (7a) and the execution run with learned values (7b). The solution naturally
underperforms in the learning phase compared to the case without EMS (6) because of
the trial and error that the learning involves. However, once the learning period is over
and EMS uses the learned values, it yields a significantly better behavior compared to
(6). The fitness value was calculated as the inverse of the average time it takes the robot
to clean a tile after it discovered the dirt. Since EMS was running independently on each robot, the local searches returned a different optimal probability for each robot (with values close to 0.0001).
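The following Java sketch outlines how such an auto-calibration could look (hypothetical code, not the authors' implementation): a simulated-annealing search over the probability attached to the added transitions, where the fitness of a candidate probability is assumed to be obtained by running or simulating the robot with that probability.

import java.util.Random;
import java.util.function.DoubleUnaryOperator;

class EmsCalibration {
    // fitness.applyAsDouble(p) is assumed to return the observed fitness (inverse of the
    // average dirt-discovery-to-cleaning time) when probability p is used for the transitions.
    static double calibrate(DoubleUnaryOperator fitness, long seed) {
        Random rnd = new Random(seed);
        double p = 0.001;                              // current probability
        double current = fitness.applyAsDouble(p);
        double bestP = p, bestFitness = current;
        double temperature = 1.0;
        while (temperature > 1e-4) {
            // propose a small random change, kept inside (0, 1)
            double candidate = Math.min(0.999, Math.max(1e-6, p + (rnd.nextDouble() - 0.5) * 0.001));
            double f = fitness.applyAsDouble(candidate);
            // accept improvements always, worse candidates with a temperature-dependent probability
            if (f > current || rnd.nextDouble() < Math.exp((f - current) / temperature)) {
                p = candidate;
                current = f;
            }
            if (current > bestFitness) { bestFitness = current; bestP = p; }
            temperature *= 0.95;                       // cooling schedule
        }
        return bestP;                                  // learned transition probability
    }
}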
In scenario (8) all the faults are introduced and in (9) they are handled by all the
three H-mechanisms; this illustrates that all of them can be active at the same time
without worsening the overall utility of the system. Since the fitness function in EMS
was selected in such a way that it does not depend on the faults triggering CS and
FCIA, all three H-mechanisms behaved orthogonally.
Discussion. We use two distinct architectural layers: “standard” self-adaptation and adaptation of the self-adaptation strategies (the task of the H-mechanisms). Hence, our solution basically follows the principle of architectural hoisting [17], i.e. separating concerns by assigning the responsibility for a global system property (here self-adaptation) to the system architecture. Even though the H-mechanisms layer can be interpreted as (high-level) exception handling in self-adaptation settings and can be implemented at the same level of abstraction as the self-adaptation itself, achieving the same functionality without the H-mechanism layer would make the code of ensembles and components very clumsy. Architectural hoisting makes the separation of these concerns much easier and more elegant.
Depending on the particular fitness function applied, EMS may be triggered in a
situation that is also covered by other H-mechanisms (e.g. by CS). In such a case it is
important to address this interference and state which H-mechanism has precedence in
order to avoid unnecessary side effects. This is the task of the H-Adaptation Manager.
Limitations. In general, the extra layer demands additional computational load, since monitoring of the triggering events is inherent to all three H-mechanisms. Even though this load is minor for CS and FCIA, in the case of EMS it depends on the complexity of the associated fitness function. Obviously, the most computationally demanding step is the data collection in CS if done preventively at runtime. This can be reduced by limiting the time window for collecting data, or by starting the collection ex-post, i.e. only when needed.
Another limitation of the work presented in this paper is that the proposed
H-mechanisms have been only evaluated so far with DEECo self-adaptation strategies.
Investigating the generalizability of our homeostasis concept with other self-adaptation
approaches (e.g. Stitch) is an interesting topic of our future work.
5 Related Work
6 Conclusions
Acknowledgements. This work was partially supported by the project no. LD15051 from
COST CZ (LD) programme by the Ministry of Education, Youth and Sports of the Czech
Republic; by Charles University institutional fundings SVV-2016-260331 and PRVOUK; by
Charles University Grant Agency project No. 391115. This work is part of the TUM Living Lab
Connected Mobility project and has been funded by the Bayerisches Staatsministerium für
Wirtschaft und Medien, Energie und Technologie.
References
1. Kim, B.K., Kumar, P.R.: Cyber-physical systems: a perspective at the centennial. Proc. IEEE
100, 1287–1308 (2012)
2. Hölzl, M., Rauschmayer, A., Wirsing, M.: Engineering of software-intensive systems: state
of the art and research challenges. In: Wirsing, M., Banâtre, J.-P., Hölzl, M., Rauschmayer,
A. (eds.) Software-Intensive Systems. LNCS, vol. 5380, pp. 1–44. Springer, Heidelberg
(2008)
3. Beetz, K., Böhm, W.: Challenges in engineering for software-intensive embedded systems.
In: Pohl, K., Hönninger, H., Achatz, R., Broy, M. (eds.) Model-Based Engineering of
Embedded Systems, pp. 3–14. Springer, Heidelberg (2012)
4. Ramirez, A.J., Jensen, A.C., Cheng, B.H.: A taxonomy of uncertainty for dynamically
adaptive systems. In: SEAMS 2012, pp. 99–108. IEEE (2012)
5. Cheng, B.H.C.: Software engineering for self-adaptive systems: a research roadmap. In:
Cheng, B.H.C., de Lemos, R., Giese, H., Inverardi, P., Magee, J. (eds.) Self-adaptive
Systems. LNCS, vol. 5525, pp. 1–26. Springer, Heidelberg (2009)
6. Cheng, S.-W., Garlan, D., Schmerl, B.: Stitch: a language for architecture-based
self-adaptation. J. Syst. Softw. 85, 1–38 (2012)
7. Iftikhar, M.U., Weyns, D.: ActivFORMS: active formal models for self-adaptation. In:
SEAMS 2014, pp. 125–134. ACM Press (2014)
8. Weyns, D., Malek, S., Andersson, J.: FORMS: a formal reference model for self-adaptation.
In: Proceedings of the 7th International Conference on Autonomic Computing, pp. 205–214.
ACM, New York (2010)
9. Floch, J., Hallsteinsen, S., Stav, E., Eliassen, F., Lund, K., Gjorven, E.: Using architecture
models for runtime adaptability. IEEE Softw. 23, 62–70 (2006)
10. Brun, Y., et al.: Engineering self-adaptive systems through feedback loops. In: Cheng, B.H.,
de Lemos, R., Giese, H., Inverardi, P., Magee, J. (eds.) Self-adaptive Systems. LNCS, vol.
5525, pp. 48–70. Springer, Heidelberg (2009)
11. Gerostathopoulos, I., Bures, T., Hnetynka, P., Hujecek, A., Plasil, F., Skoda, D.:
Meta-adaptation strategies for adaptation in cyber-physical systems. In: Weyns, D.,
Mirandola, R., Crnkovic, I. (eds.) ECSA 2015. LNCS, vol. 9278, pp. 45–52. Springer,
Heidelberg (2015). doi:10.1007/978-3-319-23727-5_4
12. Cheng, B.H.C., Sawyer, P., Bencomo, N., Whittle, J.: A goal-based modeling approach to
develop requirements of an adaptive system with environmental uncertainty. In: Schürr, A.,
Selic, B. (eds.) MODELS 2009. LNCS, vol. 5795, pp. 468–483. Springer, Heidelberg
(2009). doi:10.1007/978-3-642-04425-0_36
13. Shaw, M.: “Self-healing”: softening precision to avoid brittleness. In: Proceedings of the
First Workshop on Self-healing Systems, pp. 111–114. ACM (2002)
14. Bures, T., Gerostathopoulos, I., Hnetynka, P., Keznikl, J., Kit, M., Plasil, F.: DEECo – an
ensemble-based component system. In: Proceedings of CBSE 2013, pp. 81–90. ACM (2013)
15. Kephart, J., Chess, D.: The vision of autonomic computing. Computer 36, 41–50 (2003)
16. Perrouin, G., Morin, B., Chauvel, F., Fleurey, F., Klein, J., Traon, Y.L., Barais, O., Jezequel,
J.-M.: Towards flexible evolution of dynamically adaptive systems. In: Proceedings of ICSE
2012, pp. 1353–1356. IEEE (2012)
17. Fairbanks, G.: Architectural hoisting. IEEE Softw. 31, 12–15 (2014)
18. Ramirez, A.J., Cheng, B.H., Bencomo, N., Sawyer, P.: Relaxing claims: coping with
uncertainty while evaluating assumptions at run time. In: France, R.B., Kazmeier, J., Breu,
R., Atkinson, C. (eds.) MODELS 2012. LNCS, vol. 7590, pp. 53–69. Springer, Heidelberg
(2012)
19. Esfahani, N., Kouroshfar, E., Malek, S.: Taming uncertainty in self-adaptive software. In:
Proceedings of SIGSOFT/FSE 2011, pp. 234–244. ACM (2011)
20. Knauss, A., Damian, D., Franch, X., Rook, A., Müller, H.A., Thomo, A.: ACon: a
learning-based approach to deal with uncertainty in contextual requirements at runtime. Inf.
Softw. Technol. 70, 85–99 (2016)
21. Oreizy, P., Medvidovic, N., Taylor, R.N.: Architecture-based runtime software evolution. In:
Proceedings of ICSE 1998, pp. 177–186. IEEE (1998)
22. Cheng, S., Huang, A., Garlan, D., Schmerl, B., Steenkiste, P.: Rainbow: architecture-based
self-adaptation with reusable infrastructure. IEEE Comput. 37, 46–54 (2004)
23. Elkhodary, A., Esfahani, N., Malek, S.: FUSION: a framework for engineering self-tuning
self-adaptive software systems. In: Proceedings of FSE 2010, pp. 7–16. ACM (2010)
24. Villegas, N.M., Tamura, G., Müller, H.A., Duchien, L., Casallas, R.: DYNAMICO: a
reference model for governing control objectives and context relevance in self-adaptive
software systems. In: de Lemos, R., Giese, H., Müller, H.A., Shaw, M. (eds.) Self-adaptive
Systems. LNCS, vol. 7475, pp. 265–293. Springer, Heidelberg (2013)
Executing Software Architecture Descriptions
with SysADL
2 UFRN – Federal University of Rio Grande do Norte, Natal, Brazil
{jair,thais}@dimap.ufrn.br
1 Introduction
1 http://www.omg.org/spec/UML/.
2 http://www.omg.org/spec/SysML/.
has been increasingly used by systems engineers, inheriting the popularity of UML.
It enriches UML with new concepts and diagrams, and it has been widely adopted to design software-intensive systems. However, in terms of architectural description, SysML inherits the limitations of UML: architectural constructs are basically the same as in UML, with the exception of richer features for the definition of ports.
The abovementioned problems motivated us to define SysADL as a specialization
of SysML to the architectural description domain, with the aim of bringing together the
expressive power of ADLs for architecture description with a standard language widely
accepted by the industry, which itself provides hooks for specialization. SysADL reconciles the expressive power of ADLs with the use of a common notation in line with the SysML standard for modeling software-intensive systems, while also complying with the ISO/IEC/IEEE 42010 standard in terms of multiple viewpoints.
SysADL has a rigorous operational semantics, which allows the analysis (in terms
of verification of both safety and liveness properties) and execution (in terms of sim-
ulation for validation) of the architecture. It is structured according to three viewpoints:
(i) structural; (ii) behavioral; and (iii) executable. In a previous paper [2], we presented
the profile for the structural viewpoint with stereotypes to represent the well-known
architectural concepts of component, connector, port, and configuration. In another
previous paper [7], we presented the behavioral viewpoint, which complements the
structural viewpoint with the specification of behaviors for each structural element of
the architecture. However, these descriptions are not executable. To be able to execute
the architecture description, an action semantics is needed for all of them. In fact, most
ADLs lack explicit support for executing an architecture description. In the execution
view, the runtime behavior of an architecture is simulated to validate its logic regarding
satisfaction of behavioral requirements. Thus, an architecture description can be exe-
cuted, debugged, tested, and analyzed.
In this paper our focus is on the SysADL executable viewpoint that provides the
constructs to describe the execution of a software architecture. SysADL provides its
executable viewpoint by defining the execution semantics of the structural constructs
(components, connectors, and configurations), and of the behavioral constructs (actions
and activities). It also defines the data and control flow concepts for describing the body
of actions and activities. For this viewpoint, SysADL provides an extended action
language subsuming the ALF action language (see footnote 3) based on fUML (see footnote 4), adapted for SysML.
We use a Room Temperature Controller (RTC) system as a case study to illustrate the
concepts. We investigated the applicability of SysADL through two case studies and
interviews with software architecture specialists.
This paper is structured as follows. Section 2 briefly summarizes the SysADL
structural and behavioral viewpoints. Section 3 presents the executable viewpoint
of SysADL. Section 4 presents related work. Section 5 contains our concluding
remarks.
3 http://www.omg.org/spec/ALF/.
4 http://www.omg.org/spec/FUML.
progress from one action to another within an activity. Actions are atomic behaviors that
execute from beginning to end receiving parameters and returning a result. The
behavior also encompasses the protocol of ports and constraints.
Protocol. The behavioral specification of a port is expressed by a protocol in an
activity diagram. For instance, in the case of energy management of the temperature
sensor in the RTC System, the CTemperatureOPT port is specified to notify when the
energy level is low (represented by a threshold value).
The behavior of the SensorsMonitorCP component, depicted in Fig. 2, is defined in
terms of an activity, CalculateAverageTemperatureAC, specified in the behavioral
view. It declares the input and output parameters that are directly associated with the ports of the component. This activity itself calls an action, CalculateAverageTemperatureAN. The behavior of this component is as follows: it repeatedly waits to receive a temperature value in °C from port S1 and another value from port S2 (in any order) and, after calculating the average by calling the action CalculateAverageTemperatureAN, it sends the result through its port average. Both the activity and the action are specified in the behavioral view. In the execution view, we need to complement them with the body implementing the action, which is specified in terms of pre- and post-conditions.
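As a plain Java approximation of this described behavior (not SysADL/ALF notation; the class and field names are only illustrative), the component can be pictured as a loop over blocking ports.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SensorsMonitor {
    final BlockingQueue<Double> s1 = new ArrayBlockingQueue<>(1);      // port S1
    final BlockingQueue<Double> s2 = new ArrayBlockingQueue<>(1);      // port S2
    final BlockingQueue<Double> average = new ArrayBlockingQueue<>(1); // port average

    void run() throws InterruptedException {
        while (true) {
            double t1 = s1.take();           // wait for a temperature in °C from S1
            double t2 = s2.take();           // and for a value from S2 (arrival order does not matter)
            average.put((t1 + t2) / 2.0);    // CalculateAverageTemperatureAN, result sent on 'average'
        }
    }
}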
Equation. An equation specifies the constraints that all executions of actions and activities must satisfy. It is defined by a logical expression over the input/output parameters, where an output parameter is calculated from the input parameters. SysADL extends OCL (part of UML, adapted to SysML) to express equation constraints.
After having overviewed the structural and behavioral viewpoints in the previous sec-
tion, we now present the executable viewpoint of SysADL. In the behavior viewpoint
we saw how to express the activities and interactions to achieve the required system
functionality. However, that behavior is not executable. To make an architecture exe-
cutable, the executable viewpoint provides the SysADL constructs to describe the
execution semantics of the body of actions. It comprises the data and control flow
concepts. The executable viewpoint is expressed by describing the body of the actions
expressing the computation. The SysADL notation to represent the body of the actions is
based on ALF (See footnote 3), part of UML and SysML. The architect will, then, be
able to run the executable architecture description for understanding the dynamics of the
structure and observing the specified behavior via concrete executions. For filling the
behavioral semantic gap of SysML for architecture description, we defined the opera-
tional semantics of SysADL based on the π-calculus [5]. We have enhanced it with
datatypes for expressing data values and data structures, and with logical assertions for
specifying constraints, as defined in the extended π-calculus, named π-ADL [6].
In Fig. 6 we show two examples of executable bodies. In the first example in this figure, a search for an element (searchedTemp) is performed in a sequence of temperature values of the RTC System, the sequence being stored in a variable named temps. A while loop keeps the search running until the searched temperature is found, and an if evaluates whether the searched element has been found. The second example in the same figure computes the sum of a sequence of temperature values of the RTC System, stored in a variable named temps. A for loop declares a variable t that iterates over all the elements of the temps sequence. In each iteration, t refers to an element of the sequence (from the first to the last), and its value is added to the current sum.
SysADL provides a complete action language for describing the executable body of
actions. It is worth noting that the execution view is given by the interleaving of the
operational semantics of the description of structure, behavior and executable bodies.
found = false;
i = 1;
while(not found) {
if (searchedTemp == temps[i]) found = true;
else i++;
}
sum = 0;
for (t in temps) {
sum = sum + t;
}
The executable viewpoint, jointly with the structural and behavioral ones, is supported by a tool and has been validated as presented in this section. We have developed
SysADL Studio as a plug-in for Eclipse, an open source IDE. The applicability of the
executable viewpoint presented in this paper was validated by the description of two
executable architectures: the one of the RTC System and the one of a Parking system.
More details can be found at http://consiste.dimap.ufrn.br/sysadl.
4 Related Work
5 Conclusion
References
1. ADLs: current architectural languages, June 2016. http://www.di.univaq.it/malavolta/al/
2. Leite, J., Oquendo, F., Batista, T.: SysADL: a SysML profile for software architecture
description. In: Proceedings of 7th European Conference on Software Architecture (ECSA),
Montpellier, France, pp. 106–113 (2013)
3. Malavolta, I., Lago, P., Muccini, H., Pelliccione, P., Tang, A.: What industry needs from
architectural languages: a survey. IEEE Trans. Softw. Eng. 39(6), 869–891 (2013)
4. Medvidovic, N., et al.: A classification and comparison framework for software architecture
description languages. IEEE Trans. Softw. Eng. 26(1), 70–93 (2000)
5. Milner, R.: Communicating and Mobile Systems: The π-Calculus. Cambridge University
Press, Cambridge (1999)
6. Oquendo, F.: π-ADL: an architecture description language based on the higher-order typed
π-Calculus for specifying dynamic and mobile software architectures. ACM SIGSOFT SEN
29(3), 1–14 (2004)
7. Oquendo, F., Leite, J., Batista, T.: Specifying architecture behavior with SysADL. In:
Proceedings of 13th Working IEEE/IFIP Conference on Software Architecture (WICSA),
Venice (2016)
Towards an Architecture for an UI-Compositor
for Multi-OS Environments
1 Introduction
Automotive UIs have changed a lot over the last decades. Comparing car dash-
boards from 30 years ago to today’s dashboards leaves no doubt about the sig-
nificance and impact of modern UIs. The complexity of automotive software
rises with every new generation [3]. Up to 70 % of newly introduced features are
software-related [1] and categorized into domains: safety-related features (e.g.
ASP, ESP), driver assistance (e.g. distance checking, lane assist) and comfort
(e.g. entertainment, navigation).
The increasing amount of features hence influences the dashboards of cur-
rent and future cars, which is possible through advances in technology (e.g. freely
programmable instrument clusters (FPKs), touch-screens, etc.). System and soft-
ware architectures have to cope with the correlated complexity and increasing
dependencies. A current approach is the separation through hardware/software
virtualization to reduce complexity and dependencies and to mitigate the risks
of interference by separating safety-critical and non-safety-critical applications.
A domain represents an OS, which encapsulates domain-specific applications
and services. This is also known as Multi-OS environment. When applications
from different domains share resources, such as a common UI, a component called
2 Related Work
One of the strongest motivations for virtualization in embedded systems is prob-
ably security [4]. Embedded systems are highly integrated and their subsystems
have to cooperate to contribute to the overall function of the system. Heiser [4]
states that “isolating them from each other interferes with the functional require-
ments of the system”. While isolation, i.e. the separation through virtualization,
increases security, it is still necessary to control or interact with subsystems.
Multi-OS environments use a type-1 hypervisor to run different OS types concurrently on multi-core hardware [10, p. 167]. If OSs of mixed criticality are used, the result is categorized as a mixed-criticality system. A hypervisor assigns hardware resources, such as peripheral devices or hardware components, to a certain OS. Accessing resources of other OSs is therefore only possible via inter-VM communication. Multi-OS environments aim to improve the software architecture attributes modifiability, security, availability and testability. Interoperability is required to some extent, but every interconnection might weaken the encapsulation of an OS and lead to a security risk [9].
Compositing can take place in different layers, such as Hardware, OS, Appli-
cation or UI layer. Higher layers depend on lower layers and lower layers define
3 UI-Compositor
An UI-Compositor assembles (i.e. blends, scales or places) the different UI-Artefacts and provides user interaction with them. This includes the redirection of input
events to one or more designated applications and the composition of different
types of output.
If multiple applications use the same type of UI, the UI-Compositor has to
decide how the composition is done. Figure 1 depicts a composition of multiple
UI-Artefacts based on an example of three UIs.
The UI-Compositor is also responsible for providing the primary UI-Logic, which includes for example the mapping of input events to a certain application. When a user clicks inside one of many displayed GUIs, the UI-Compositor has to calculate/map the actual position of the mouse click to a coordinate relative to that GUI. The application does not know whether its GUI is displayed in an UI-Compositor or not.
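A minimal Java sketch of this mapping is given below (hypothetical names, not taken from an existing compositor): an absolute click position on the composed display is translated into a coordinate relative to the GUI it falls into, and the owning application is identified.

class Surface {
    final String application;
    final int x, y, width, height;   // placement of the GUI on the composed display

    Surface(String application, int x, int y, int width, int height) {
        this.application = application;
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    boolean contains(int px, int py) {
        return px >= x && px < x + width && py >= y && py < y + height;
    }
}

class PointerDispatch {
    // Returns e.g. "navigation:120,45", or null if the click hits no GUI.
    static String dispatch(Iterable<Surface> surfacesTopmostFirst, int px, int py) {
        for (Surface s : surfacesTopmostFirst) {
            if (s.contains(px, py)) {
                int relX = px - s.x;         // coordinate relative to the GUI
                int relY = py - s.y;
                return s.application + ":" + relX + "," + relY;
            }
        }
        return null;
    }
}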
If two applications on the same hierarchical layer have to negotiate their mutual priorities, e.g. whose window is to be displayed topmost, this usually ends in a tie. It also requires interconnections among all participating applications, which raises the complexity and the dependencies.
A solution is a delegation of the decision to a higher instance (e.g. a UI-
Compositor, window manager, etc.), where contextual information is available.
This can be achieved through an implementation of UI-Logic. In case of the
Windows Icon Menu Pointer (WIMP) interaction style, the UI-Logic inside the
window manager handles all windows, including the currently active window. Storing the information about the currently active window makes it possible to determine where keyboard events have to be sent.
The number and diversity of available interaction techniques, devices and interaction styles affect the UI-Compositor's decision making. The following enumeration provides some examples that outline the general problem.
Hard-Keys or Keyboard. For key events in the WIMP interaction style, the
decision making already requires contextual information: the current active
window. A user selects a window and then uses the keyboard to enter text.
Another variant would be to assign a key directly to an application. This
e.g. could be a hard key to always start the navigation program. In this case
the compositor would always redirect this specific input event directly to the
respective OS.
Stream Based Input. Stream-based input types, such as an interaction via
voice or speech, have no clear action points, such as e.g. button pressed
or button released, and are more difficult to handle. A microphone records
sound waves and whether or not those sound waves include a voice or speech
has to be determined by a speech recognition component. This component
translates or interprets the given raw data continuously. Delegating would
require to pre-interpret the raw data to determine its meaning. In order to
select a certain domain, a user would have to say a keyword to choose the
domain. Afterwards, an OS might use an own speech recognition to handle
incoming raw data.
Another variant would be to only have one speech recognition compo-
nent for all domains. However, this would cause more dependencies, because
a common protocol or interface between this component and each applica-
tion would be necessary. Applications would have to define which speech
commands they expect, so that an UI-Compositor could redirect those com-
mands. Nevertheless, multiple applications could use the same speech com-
mand, which again causes a delegation.
Hard Coded Presets. Instead of a component that decides how events are distributed/redirected, all input and output events could be assigned in a fixed manner as part of a specification. For example, KeyX is assigned to AppY and thus will always be redirected to this single application. Conflicts are avoided by specification, but this also results in less flexibility.
Priority Based. The chain of responsibility pattern, for example, could be used to redirect an input event to a certain domain (see the sketch after this list). To this end, all domains are sorted based on their assigned priority. An input event is always sent to the domain with the highest priority first. Domains may consume an event, which causes the event not to be sent to another domain. Therefore, high-priority applications always receive events.
Broadcast. Another approach could be a broadcast of all input events to all
domains at the same time. Multiple applications may expect and receive
the same input events and trigger functions simultaneously. This will render the UI unusable, because the user loses the ability to make distinct decisions for a certain application.
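The following Java sketch (hypothetical names, not a concrete implementation) illustrates the priority-based variant mentioned above using the chain of responsibility pattern: domains are ordered by priority, and an input event travels down the chain until one domain consumes it.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

interface Domain {
    int priority();
    boolean consume(String inputEvent);   // true if this domain handled the event
}

class PriorityDispatcher {
    private final List<Domain> chain = new ArrayList<>();

    void register(Domain domain) {
        chain.add(domain);
        chain.sort(Comparator.comparingInt(Domain::priority).reversed()); // highest priority first
    }

    void dispatch(String inputEvent) {
        for (Domain domain : chain) {
            if (domain.consume(inputEvent)) {
                return;                    // consumed: lower-priority domains never see the event
            }
        }
    }
}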
4 Architecture
In the previous sections we introduced Multi-OS environments, the definition
of an UI-Compositor, and showed how the hierarchic structure of a Multi-OS environment influences the distribution of input events and the composition of outputs. In this section we compare a standard client/server architecture with our new architecture and show the advantages as well as the disadvantages of both architectures.
5 Conclusion
Interaction styles play an important role in the architecture of Multi-OS environ-
ments. They define the common interfaces between application, OS and Compos-
itor. Without knowing the type of UI used by an application, an unwanted flexi-
bility in protocols has to be implemented. An exact definition of an application’s
UI, in terms of its inputs and outputs, makes it possible to use minimal inter-connections
and well-defined interfaces, which reduces the overall complexity.
In Multi-OS environments the separation and secure encapsulation of
domains is the primary goal. Inter-connections between domains cause unwanted
dependencies and raise the complexity, which was supposed to be decreased
through separation.
Using the compositor architecture proposed herein allows a loose coupling between the UI-Compositor and applications from all domains by applying the publish/subscribe and data-container architecture. Therefore applications are not
6 Future Work
Based on our research, basic prototypes to verify the suggested approach were implemented [5]. However, a fully working UI-Compositor for Multi-OS environments, with support for different interaction styles for applications from different OSs, is a complex task that will be implemented in the future. Voice-controlled UIs in a Multi-OS environment are also a subject of further research.
References
1. Bosch, J.: Continuous software engineering: an introduction. In: Bosch, J. (ed.)
Continuous Software Engineering. Springer, Heidelberg (2014)
2. Coulouris, G., Dollimore, J., Kindberg, T., Blair, G.: Distributed Systems: Con-
cepts and Design, 5th edn. Addison-Wesley Publishing Company, Hoboken (2011)
3. Ebert, C., Jones, C.: Embedded software facts, figures, and future. Computer
42(4), 42–52 (2009)
4. Heiser, G.: The role of virtualization in embedded systems. In: Proceedings of the
1st Workshop on Isolation and Integration in Embedded Systems, IIES 2008, pp.
11–16. ACM, New York (2008)
5. Holstein, T., Weißbach, B., Wietzke, J.: Towards a HTML-UI-compositor by intro-
ducing the wayland-protocol into a browser-engine. In: IEEE/IFIP Conference on
Software Architecture (WICSA), April 2016
6. Holstein, T., Wallmyr, M., Wietzke, J., Land, R.: Current challenges in compositing
heterogeneous user interfaces for automotive purposes. In: Kurosu, M. (ed.) HCI
2015. LNCS, vol. 9170, pp. 531–542. Springer, Heidelberg (2015). doi:10.1007/
978-3-319-20916-6_49
7. Holstein, T., Wietzke, J.: Contradiction of separation through virtualization and
inter virtual machine communication in automotive scenarios. In: Proceedings of
the 2015 European Conference on Software Architecture Workshops, ECSAW 2015,
pp. 4:1–4:5. ACM, New York (2015)
8. Knirsch, A.: Improved composability of software components through parallel hard-
ware platforms for in-car multimedia systems. Ph.D. thesis, Plymouth University,
Plymouth, UK (2015)
9. Schnarz, P., Wietzke, J.: It-sicherheits-eigenschaften für eng gekoppelte, asyn-
chrone multi-betriebssysteme im automotiven umfeld. In: Halang, W.A. (ed.) Funk-
tionale Sicherheit. Informatik aktuell, pp. 29–38. Springer, Heidelberg (2013)
10. Wietzke, J.: Embedded Technologies: Vom Treiber bis zur Grafik-Anbindung.
Springer, Heidelberg (2012)
11. Wietzke, J., Tran, M.T.: Automotive Embedded Systeme - Effizientes Framework -
Vom Design zur Implementierung. Springer Xpert.press, Heidelberg (2005)
Software Architecture Evolution
Inferring Architectural Evolution from Source
Code Analysis
A Tool-Supported Approach for the Detection of
Architectural Tactics
1 Introduction
Throughout the life of a software system, developers and maintainers will modify the
source code in order to add new features and to correct or prevent defects. In doing so, they
will apply many simple coding techniques and patterns but they will also occasionally
introduce higher level elements that will be meaningful at an architectural level. While
there are many proposals concerned with evolution data at a low level [1], few
approaches have been proposed to analyze and interpret this information at the
architectural level. Even though several approaches that tackle the understanding and
formalization of architecture evolution have emerged (e.g., [2–8]), there exist very few
tools to help designers track and group a set of low-level source code changes and
translate them into a more concise high-level architectural intention. A key challenge is
that some architectural elements may not be traced easily and directly to code elements
(e.g., architectural constraints). In fact, architectural elements include extensional ele-
ments (e.g., module or component) and intensional ones (e.g., design decisions,
rationale, invariants) while source code elements are extensional [9, 10]. This con-
tributes to the absence of the architectural intention at the source code level and the
divergence of the source code from this intention. Moreover, architectural decisions are
non-local [9] and often define and constrain the structure and the interactions of several
code elements. If the developer is aware of the architectural decisions and constraints,
the changes she makes to the source code will be consistent with them. In fact, some of these changes may derive from the architectural evolution of the software, and they reveal intentions at the architectural level.
Thus, in this work, we hypothesize that some of the architectural intentions can be
inferred from the analysis of the evolution of the source code. Clustering a set of
changes made to the source code and analyzing the results may reveal a high-level
decision. We focus on object-oriented (OO) systems and modifiability tactics [11, 12]
as they involve changes that can be detected through the analysis of different releases of
a software system. Thus we propose an approach that enables detecting tactics’
application (or cancellation) in an OO system and inferring an architectural evolution
trend through the system’s evolution. To do so, we map high level descriptions of
tactics, as introduced in [11], to a number of operational representations (i.e., source
code transformations). Tactics are intensional and thus may have several operational
representations. An operational representation is a pattern of evolution described using
elementary actions on source code entities (e.g., adding a class to a package, moving a
class from a package to another, etc.) and a set of constraints describing the structure of
the system before or after these actions. Using these operational representations, we
analyze available evolution data about the source code to retrieve architectural tactics
that were applied or cancelled during development or maintenance. We developed a
prototype tool that supports our approach and experimented with a set of modifiability
tactics and a number of versions of a Java open source project.
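To give an idea of what an elementary action underlying such an operational representation can look like, the following Java sketch (a hypothetical model, not the authors' prototype) detects "move class between packages" actions by comparing the package of each class in two releases; a tactic's operational representation would then combine such actions with structural constraints.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class MoveClassAction {
    final String className, fromPackage, toPackage;
    MoveClassAction(String className, String fromPackage, String toPackage) {
        this.className = className;
        this.fromPackage = fromPackage;
        this.toPackage = toPackage;
    }
}

class EvolutionAnalysis {
    // Each map associates a class name with the package containing it in one release.
    static List<MoveClassAction> detectMoves(Map<String, String> oldRelease,
                                             Map<String, String> newRelease) {
        List<MoveClassAction> actions = new ArrayList<>();
        for (Map.Entry<String, String> e : oldRelease.entrySet()) {
            String newPackage = newRelease.get(e.getKey());
            if (newPackage != null && !newPackage.equals(e.getValue())) {
                actions.add(new MoveClassAction(e.getKey(), e.getValue(), newPackage));
            }
        }
        return actions;
    }
}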
The paper is organized as follows. Section 2 provides background and related
work about architectural tactics and evolution. Section 3 presents an overview of our
approach, while Sects. 4 and 5 detail two key aspects of our proposal: the definition of
operational representations of tactics and the detection of their occurrences, respec-
tively. Section 6 presents a case study for our approach and a discussion of the obtained
results. Finally, Sect. 7 summarizes our proposal and outlines future work.
availability and security. Bass et al. [11] introduced the concept of an architectural
tactic as an architecture transformation that supports the achievement of a single quality
attribute. They catalogued a set of common tactics that address availability, interop-
erability, modifiability, performance, security, testability and usability. This catalog of
tactics aims to support systematic design. For instance, performance tactics aim at
ensuring that the system responds to arriving events within some time constraints while
security tactics aim at resisting, detecting and recovering from attacks [11]. Examples
of performance tactics include increasing computational efficiency, managing the event
rate and introducing concurrency. Common security tactics include authenticating users
and maintaining data confidentiality. The designer chooses the appropriate tactics
according to the system’s context and trade-offs, and the cost to implement these
tactics.
enhanced with colors to depict the impact of the code changes under study on the
entities and relationships of the system (e.g., added, deleted, etc.). D’Ambros et al. [13]
describe a general schema to analyze software repositories for studying software
evolution. This schema includes three essential steps: (1) modeling various aspects of
the software system and its evolution, (2) retrieving and processing the information
from the relevant data sources, and (3) analyzing the modeled and retrieved data using
appropriate techniques depending on the targeted software evolution problem. Though
we do not target the visualization of architecture evolution, our approach follows this
general schema and we also aim to help designers and developers understand and be
aware of the architectural evolution of a given system.
Le et al. [8] propose an approach called ARCADE (Architecture Recovery,
Change, And Decay Evaluator) which relies on various architecture recovery
techniques to build different views of the analyzed system and three metrics for
quantifying architectural changes at the system-level and component-level. ARCADE
was used in an empirical study. An interesting outcome of this study was that con-
siderable architectural change is introduced both between two major versions and
across minor versions. In [4], a metric-based approach is proposed to evaluate archi-
tectural stability. To do so, the approach starts by analyzing different releases of the
system under study and extracting facts from these releases. These facts are then
analyzed using some software metrics that are indicators of architectural stability (e.g.,
change rate, growth rate, cohesion and coupling). Our approach can be complementary
to these metric-based approaches as it relies on the detection of tactics applications or
cancellations to assess the architectural evolution of software systems.
Kim et al. [14] proposed Ref-Finder, an Eclipse plug-in that automatically detects
refactorings applied between two versions of a given program. To do so,
Ref-Finder extracts logic facts from each program version and uses predefined logic
queries to match program differences with the constraints of the refactorings under
study. This approach focuses on the refactorings introduced in Fowler's book
[15]. Unlike Ref-Finder, our goal is to detect evolution patterns that match architectural
tactics and to support the designer in defining any evolution pattern that might be of
interest in her context/domain.
Figure 1 presents an overview of our approach which defines two processes. The
first enables the designer to specify operational representations of a given tactic; this
process is described in Sect. 4. The second process aims at supporting the designer in
analyzing the evolution trend of a software system. It uses the operational represen-
tations of tactics and the available versions of the system under study and proceeds in
three steps (numbered 1 to 3 in Fig. 1). In the first step, a differencing tool is applied to
multiple versions of the system and generates deltas that are expressed using a number
of source code changes (e.g., removed package, added package, added class, removed
class, moved class, etc.). For this purpose, our approach uses MADMatch [16], a tool
that performs many-to-many approximate diagram matching. The second
step matches the generated deltas to the operational representations of tactics to detect
applied or cancelled tactics. We designed and implemented a tool, TacMatch, which
generates detection algorithms on the fly from the operational representations of tactics
and executes these algorithms to find occurrences of tactics in the analyzed
delta of the source code. In the third step, the resulting occurrences are analyzed by the
designer to infer the architectural evolution trend of the analyzed system. The whole
process is described in detail in Sect. 5.
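For illustration, the following minimal sketch shows one possible way to represent such deltas as data structures; the type and change names are hypothetical assumptions made for this sketch and do not reflect the actual data model of TacMatch or MADMatch.

// Hypothetical Java types for a delta between two versions, expressed as a list of
// elementary source-code changes such as those listed above.
import java.util.List;

enum ChangeKind {
    ADDED_PACKAGE, REMOVED_PACKAGE, ADDED_CLASS, REMOVED_CLASS,
    MOVED_CLASS, ADDED_INHERITANCE, REMOVED_INHERITANCE
}

// One elementary change, e.g. "class X moved from package A to package B".
record Change(ChangeKind kind, String element, String from, String to) { }

// A delta groups the elementary changes detected between two versions of the system.
record Delta(String oldVersion, String newVersion, List<Change> changes) { }

An operational representation of a tactic can then be matched against such a delta by selecting and filtering groups of changes, as described in Sect. 5.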
Fig. 2. A high-level representation of Abstract Common Services, adapted from [12].
Table 1 presents the high level description of the ACS tactic in terms of actions on
architectural components and connectors.
specify the constraints on the selected elements; and (4) the preview zone that displays
the tactic’s specification in a form similar to an SQL query1. Figure 3 displays an
example of the ACS tactic (i.e., the variant described in row 3 of Table 2) where
multiple constraints were defined by the user using the filter zone (the “+” button
adds one constraint at a time to the specification). These declarative specifi-
cations are used by our tool TacMatch to generate on the fly (when the user launches an
analysis of a given system) the algorithm that retrieves the set of elements (from deltas)
that match the tactic’s application. This process is described in detail in Sect. 5.2.
Using the operational representations of tactics and two different versions of the
software system under study, TacMatch supports the designer in detecting occurrences
of these tactics in the system. To do so, TacMatch relies on MADMatch [16], a tool
that enables diagram matching, to compute the deltas between two different versions of
the same system. TacMatch uses the operational representations to generate on the fly
detection algorithms for the tactics selected by the designer in the current analysis of
the system. TacMatch executes these algorithms on the analyzed delta of the system
and returns tactics’ occurrences or cancellations. These occurrences can be used by the
designer to carry out different types of analysis and to evaluate the architectural evo-
lution of the analyzed system.
1
For lack of space, we do not discuss in this paper the predefined actions and constraints that
TacMatch provides, nor the specification language used to describe the tactics.
Fig. 4. Core classes of TacMatch: TacMatchEngine, FilterFactory, Selector (select(): List&lt;Occurrences&gt;, with a firstFilter reference), Filter (doFilter(List&lt;Occurrences&gt;): List&lt;Occurrences&gt;), and the Cardinality and Existence filter subclasses.
on the delta. To generate the detection algorithms, TacMatch relies on a set of classes
that read the specification of a tactic and generate different parts of the corresponding
algorithm. Figure 4 gives an excerpt of the core classes of TacMatch, which were
organized using the Chain of Responsibility (CoR) design pattern [19]. The Selector
class selects the occurrences of changes undergone by the system that
correspond to those specified in the Select clause of the operational representation of
the tactic (e.g., see the first line of the preview in Fig. 3). The Filter type defines an
interface for filtering occurrences of the changes undergone by the system according to
a given constraint; i.e., sub-classes of Filter implement different constraints. We used
the CoR design pattern so that we can instantiate and configure, at runtime, the subset
of filters that correspond to the constraints defined by the tactic at hand. Moreover,
using the CoR design pattern makes it easy to add new filters (i.e., constraints).
TacMatch’s entry point is the class TacMatchEngine which reads the tactic’s
specification as entered by the designer and generates a collection of commands cor-
responding to the lines of the specification. These commands are then used to create an
ordered list of objects that starts with an instance of the Selector class followed by a
chain of the appropriate subset of the filters. This is done using the createChain method
which relies on the FilterFactory class to instantiate and set the appropriate filter for
each command2. The appropriate selector object and chain of filters are instantiated and
ordered in a dynamic way according to the operational representation of a tactic. This
corresponds to generating on the fly the skeleton of the detection algorithm for the
given tactic. For instance, given the operational representation described in the preview
zone of Fig. 3, TacMatch generates a selector object that is set to retrieve inheritance
relationships grouped by their superclass followed by a chain of two instances of the
Existence filter3 and one instance of the Cardinality filter.
The method executeChain executes the detection algorithm related to a
given tactic. This method takes as input the selector object corresponding to the tactic
and first calls the select() method of this object to retrieve the relevant occurrences of
2
Both the Selector and the filter classes have their own fields which are set during their respective
instantiation using the command parameter received by their respective constructor.
3
In some tactics, the same filter class can be instantiated more than once using different parameters
(i.e., commands). Moreover, we use a filter class to instantiate a constraint or its opposite depending
on the tactic’s definition.
changes from the delta. These occurrences are then sent to the first filter referenced by
the selector object and from one filter to its successor in the chain; each filter filters the
occurrences according to the constraint it implements (i.e., using the doFilter() method)
and passes the resulting occurrences to its successor in the chain.
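To make this organization concrete, the following simplified sketch illustrates how a selector and a chain of filters could be wired and executed; it is our own minimal illustration under assumed class shapes and method bodies, not TacMatch's actual implementation.

// Simplified sketch (not TacMatch's actual code) of the Chain of Responsibility organization
// described above: a Selector retrieves candidate occurrences from the delta and a chain of
// Filter objects applies the constraints of the tactic's operational representation.
import java.util.ArrayList;
import java.util.List;

class Occurrence { /* a group of elementary changes retrieved from the delta */ }

abstract class Filter {
    Filter next;                                          // successor in the chain
    abstract boolean satisfies(Occurrence o);             // the constraint this filter implements
    List<Occurrence> doFilter(List<Occurrence> in) {
        List<Occurrence> kept = new ArrayList<>();
        for (Occurrence o : in) if (satisfies(o)) kept.add(o);   // keep occurrences meeting the constraint
        return next == null ? kept : next.doFilter(kept);        // pass the result to the successor
    }
}

class Selector {
    Filter firstFilter;                                   // head of the generated filter chain
    List<Occurrence> select() {                           // occurrences named in the Select clause
        return new ArrayList<>();                         // stub: would query the analyzed delta
    }
}

class TacMatchEngine {
    List<Occurrence> executeChain(Selector selector) {    // run the generated detection algorithm
        List<Occurrence> initial = selector.select();
        return selector.firstFilter == null ? initial : selector.firstFilter.doFilter(initial);
    }
}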
4
In this three-sequence versioning scheme, the first number is the major version (incremented when there
are significant changes to the system), the second is the minor version (incremented when
there are minor changes to the system or significant bug fixes), and the last is the revision
number (incremented when minor bugs are fixed).
positives). To reduce the size of the table, we have omitted the deltas that do not have
any occurrences. In purely quantitative terms, if we consider the total numbers of the
tactics that were applied (85) and those cancelled (19) through all the analyzed ver-
sions, cancellations represent 22 % of applications. We further investigated the
observed cancellations to understand the causes of such a high percentage.
Our analysis revealed that out of the 19 cancellations of tactics, 11 cancellations
were related to tactics already present in the first available release 0.5.6 while 8 can-
cellations were related to tactics that were introduced during subsequent versions. For
instance, in the revision from versions 0.9.16 to 0.9.17, the class org.jfree.chart.ren-
derer.AbstractSeriesRenderer was introduced as a superclass for two other existing
sub-classes but was deleted two revisions later (i.e., in 0.9.19). We also observed an
interesting evolution pattern which involves the introduction, through different ver-
sions, of a number of super-classes that centralize a number of common constants and
the deletion of these classes later in other versions. For instance, from 0.8.1 to 0.9.0, the
classes CategoryPlotConstants and ChartPanelConstants (both in the package com.
jrefinery.chart) were created to centralize a number of constants. CategoryPlotCon-
stants was deleted later in the revision from 0.9.9 to 0.9.10 and its content was moved
back to the class com.jrefinery.chart.CategoryPlot. Likewise ChartPanelConstants was
deleted later in the transition from 0.9.20 to 1.0.0 and its content was moved to org.
jfree.chart.ChartPanel. This tendency to apply and cancel tactics raises some questions
about the consistency of the evolution of the system in general and its conformance to
architectural decisions in particular. In fact, this could be construed as a motivational
case for the importance of detecting architectural tactics and reminding developers of
them (especially in open-source and collaborative settings) in order to prevent
seemingly erratic modifications.
We also compared the results of our detection process when applied to the deltas
from two successive minor (respectively major) releases versus those generated by the
intermediate revisions between these minor (respectively major) versions. We presume
that if the developer consistently evolves the system through the intermediate revisions
between two successive minor (respectively major) versions, the aggregated results of
our detection process through these revisions would lead to the same result as the one
generated using the two minor (respectively major) versions. Table 4 displays the
number of occurrences of both applications and cancellations of tactics generated from
successive minor or major versions. Like Table 3, Table 4 displays true positives
and omits minor and major releases for which no occurrences were found (e.g., from
0.6.0 to 0.7.0) as well as successive minor releases for which there were no intermediate
revisions (e.g., from 0.5.6 to 0.6.0).
From 0.7.0 to 0.8.0, the only tactic occurrence (out of 7) that was detected in the
delta between these two minor versions but not in the revisions between them is an
incremental application of the Use Encapsulation (UE) tactic; i.e., a class (Sig-
nalsDataset) was created in 0.7.1 and an inheritance relationship was added later in
0.7.2 between this class and an existing subclass (SubSeriesDataset). As for the
detected tactics applications and cancellations from 0.8.0 to 0.9.0 (i.e., 9 occur-
rences), they match the aggregated results of the detection when applied to the revi-
sions from 0.8.0 to 0.8.1 and from 0.8.1 to 0.9.0. Finally, we found 34 occurrences of
applications and cancellations of tactics from 0.9.0 to 1.0.0 which is a major revision.
Table 3. Number of tactics applied or cancelled per delta generated from successive versions
Delta   Application of tactics (SR UE ACS IC)   Cancellation of tactics (SR UE ACS IC)
v0.5.6_v0.6.0 1 2 1
v0.7.0_v0.7.1 1
v0.7.3_v0.7.4 1 2
v0.7.4_v0.8.0 1
v0.8.0_v0.8.1 1
v0.8.1_v0.9.0 3 1 1 1 1 1
v0.9.1_v0.9.2 1
v0.9.2_v0.9.3 1
v0.9.4_v0.9.5 3 5 2 1
v0.9.6_v0.9.7 1 4 1
v0.9.8_v0.9.9 1 1 1 6
v0.9.9_v0.9.10 1 1 1
v0.9.11_v0.9.12 1 1 1 2
v0.9.12_v0.9.13 1 2
v0.9.13_v0.9.14 2 1
v0.9.14_v0.9.15 1 1
v0.9.15_v0.9.16 1 1
v0.9.16_v0.9.17 2 5
v0.9.18_v0.9.19 3 2 2 1
v0.9.19_v0.9.20 1
v0.9.20_v1.0.0 9 3 2 4 2 1
v1.0.2_v1.0.3 1
v1.0.4_v1.0.5 1
v1.0.5_v1.0.6 1
Table 4. Number of tactics applied or cancelled per delta generated from successive minor or
major versions
Delta   Application of tactics (SR UE ACS IC)   Cancellation of tactics (SR UE ACS IC)   Total
v0.7.0_v0.8.0 2 5 7
v0.8.0_v0.9.0 4 1 1 1 1 1 9
v0.9.0_v1.0.0 10 5 14 1 4 34
However, the aggregation of the results from all the intermediate revisions between
0.9.0 and 1.0.0 yields 85 occurrences. We identified three main reasons for this dis-
crepancy, some of which were already discussed above. First, some tactics were applied
through one or several revisions but all the entities involved in these tactics appear as
added in the major revision (i.e., the evolution pattern is visible through revisions but
not at the major versions level). For example, the UE tactic was incrementally applied
by adding a set of classes (e.g., ObjectList) in the revision from 0.9.9 to 0.9.10 and their
superclass (AbstractObjectList) in the revision from 0.9.11 to 0.9.12. This whole
evolution pattern is not detectable when we analyze the delta from 0.9.0 to 1.0.0; the
entire inheritance hierarchy appears to be newly created at the same time. Second, some
tactics were applied in an incremental way through changes spread over several revi-
sions starting from the revision 0.9.0. These occurrences are only detectable when we
analyze the delta from 0.9.0 to 1.0.0. Finally, as discussed before, several tactics were
applied and then cancelled through the revisions; these tactics are not present at major
versions level.
External validity: Our case study was carried out on a subset of the modifiability
tactics that we were able to detect through static analysis of different releases of a
software system. This is possible for most of the modifiability tactics and some other
tactics such as exception handling (for availability) and creating additional threads or
reducing the number of iterations (for performance). However, other tactics may
require a dynamic analysis of the code or may not even be present in the source code (e.g.,
increasing computational efficiency or maintaining multiple copies of data). Thus, our
approach is limited to those tactics that have an observable impact on the source code.
As future work, we plan to extend our work to other tactics and identify precisely the
type of tactics to which our approach may be applied.
Internal validity: Some tactics (e.g., ACS) may be composed of several other more
elementary tactics (e.g., SR). Since we have not yet implemented a mechanism to
relate and aggregate detected tactics across a number of releases, we tend to
interpret each detected tactic locally and individually. This may have an impact on our
interpretation of the overall architectural evolution trend. Thus, as discussed in
Sect. 6.1, future work is needed to define the relationships between operational rep-
resentations and exploit these relationships to correctly aggregate and interpret a
number of successive applications of related tactics. Finally, our results are dependent
on the effectiveness of the other tools used, notably MADMatch, which was used to
compute the deltas. We selected MADMatch because it is a recent tool that com-
pared favorably to other techniques [16], but other tools may provide different (better or
worse) results. Future work is planned to experiment with different parameters of
MADMatch and different tools.
In this paper, we present a first iteration of a tool-supported approach that allows the
definition and detection of architectural tactics or more general evolution patterns using
basic changes extractable from the differencing of software versions. Once these
References
1. Negara, S., Vakilian, M., Chen, N., Johnson, R.E., Dig, D.: Is it dangerous to use version
control histories to study source code evolution? In: Noble, J. (ed.) ECOOP 2012. LNCS,
vol. 7313, pp. 79–103. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31057-7_5
2. Garlan, D., Barnes, J.M., Schmerl, B.R., Celiku, O.: Evolution styles: foundations and tool
support for software architecture evolution. In: WICSA/ECSA, pp. 131–140 (2009)
3. Garlan, D., Schmerl, B.: Ævol: a tool for defining and planning architecture evolution. In:
the 31st International Conference on Software Engineering, pp. 591–594 (2009)
4. Tonu, S.A., Ashkan, A., Tahvildari, L.: Evaluating architectural stability using a metric-based
approach. In: CSMR 2006, 22–24 March 2006, p. 10, 270 (2006)
5. McNair, A., German, D.M., Weber-Jahnke, J.: Visualizing software architecture evolution
using change-sets. In: WCRE 2007, 28–31 October 2007, pp. 130–139 (2007)
6. Abi-Antoun, M., Aldrich, J., Nahas, N., Schmerl, B., Garlan, D.: Differencing and merging
of architectural views. ASE 15(1), 35–74 (2008)
7. Breivold, H.P., Crnkovic, I., Larsson, M.: A systematic review of software architecture
evolution research. IST 54(1), 16–40 (2012)
8. Le, D.M., Behnamghader, P., Garcia, J., Link, D., Shahbazian, A., Medvidovic, N.: An
empirical study of architectural change in open-source software systems. In: IEEE/ACM
12th Working Conference on Mining Software Repositories, Florence, pp. 235–245 (2015)
9. Eden, A.H., Kazman, R.: Architecture, design, implementation. In: 25th International
Conference on Software Engineering, pp. 149–159 (2003)
10. Fairbanks, G.: Just Enough Software Architecture: A Risk-Driven Approach. Marshall &
Brainerd, Boulder (2010)
11. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Addison-Wesley,
Boston (2003)
12. Bachmann, F., et al.: Modifiability tactics. Technical report CMU/SEI-2007-TR-002,
Software Engineering Institute, Carnegie Mellon University (2007)
13. D’Ambros, M., Gall, H., Lanza, M., Pinzger, M.: Analysing software repositories to
understand software evolution. In: D’Ambros, M., Gall, H., Lanza, M., Pinzger, M. (eds.)
Software Evolution, pp. 37–67. Springer, Heidelberg (2008)
14. Kim, M., Gee, M., Loh, A., Rachatasumrit, N.: Ref-Finder: a refactoring reconstruction tool
based on logic query templates. In: FSE 2010, Santa Fe, New Mexico, USA, pp. 371–372
(2010)
15. Fowler, M.: Refactoring: Improving the Design of Existing Code. Addison-Wesley, Boston
(1999)
16. Kpodjedo, S., et al.: MADMatch: many-to-many approximate diagram matching for design
comparison. IEEE Trans. Softw. Eng. 39(8), 1090–1111 (2013)
17. Aldrich, J., Sazawal, V., Chambers, C., Notkin, D.: Language support for connector
abstractions. In: Cardelli, L. (ed.) ECOOP 2003. LNCS, vol. 2743, pp. 74–102. Springer,
Heidelberg (2003). doi:10.1007/978-3-540-45070-2_5
18. Gueheneuc, Y.G., Antoniol, G.: DeMIMA: a multilayered approach for design pattern
identification. IEEE Trans. Softw. Eng. 34(5), 667–684 (2008)
19. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable
Object-Oriented Software. Addison-Wesley, Reading (1995)
20. Barnes, J.M., Garlan, D.: Challenges in developing a software architecture evolution tool as
a plug-in. In: 3rd International Workshop on Developing Tools as Plug-ins (TOPI), San
Francisco, CA, pp. 13–18 (2013)
Evolution Style: Framework for Dynamic
Evolution of Real-Time Software Architecture
1 Introduction
With daily changes in technologies and business environments, software systems
must evolve in order to adapt to the new requirements arising from these changes. Gen-
erally, software evolution is a complex process that requires a great deal
of knowledge and skills. This is due to the fact that all artifacts produced and
used in the software development life-cycle are subject to change. Since software
systems change fairly frequently, it is essential that their architectures be
restructured accordingly. With the increase in size and complexity of software systems, the
computing community acknowledges the importance of software architecture as
a central artifact in the lifecycle of a software system. In this respect, the archi-
tecture is specified early in the software lifecycle, and constitutes the model that
drives the engineering process [10]. In the evolution process, architecture can
elucidate the reason behind design decisions that guided the building of the sys-
tem. Moreover, it can permit planning and system restructuring at a high level
of modeling, where business goals and quality requirements can be ensured and
where an alternative scenario of evolution can be explored. Modeling architec-
ture evolution process can support architects in representing reusable practices
2 Related Work
The necessity of introducing change at runtime has resulted in different architec-
ture centric approaches for dynamic evolution. Dowling and Cahill [17] present
the K-Component model as a reflective framework for building self-adaptive
systems. K-Components are components with an architecture meta-model and
adaptation contracts to support their dynamic reconfiguration. Cuesta et al.
[16] present a reflective Architecture Description Language (ADL) named PiLar
which provides a framework to describe the dynamic change in software architec-
ture. It consists of a structural part and a dynamic part, which defines patterns
of change. Costa-Soria et al. [12] define a reflective approach for supporting
dynamic evolution of architectural types in a decentralized and independent
way. Their approach is applied to ADL, in particular to the PRISMA meta-
model, in order to develop an evolvable component type that is provided with
One of the main issues that must be considered when handling dynamic evolution
is to leave systems in a consistent state after a change is performed. Evolving
an artifact at runtime without considering its thread may disrupt or suspend
its service for an arbitrarily long time, which can lead real-time tasks to miss
some deadlines. Detecting when it is safe to actually evolve the artifacts is
crucial to guaranteeing that the system will not enter an inconsistent state.
Therefore, various strategies have been defined in order to tackle this issue,
namely Quiescence [4] and Tranquility [5]. They differentiate the passive state
from the active state of a software artifact and assume that an affected artifact
should be placed into a passive state before performing the evolution operation.
Generally, a real-time system consists mainly of a set of elements which
provide and/or create real-time services (threads). These real-time tasks can be
periodic, aperiodic or sporadic [15], depending on how their corresponding jobs
are activated. The passive elements in a real-time system are those that do not
have any execution thread, but typically provide services for other elements.
The quiescence and tranquility techniques can fit the dynamic evolution of these
passive elements. In contrast, the active elements are those that have active
real-time threads; these techniques [4,5] require that such elements be shifted
into a passive state in order to be modified. This means that the real-time
threads in the element would need to be suspended, thus potentially resulting
in deadline misses for the threads of the element being under evolution. This
behavior is, of course, undesirable for hard real-time systems. Therefore, the
evolution operation should respect the timing constraints of the active element
that is subject to change. In this respect, the evolution operation execution time
is part of the timing constraints of the real-time system itself. Therefore, it must
not exceed the safe state time of this element, i.e. the maximum duration of the
evolution operation should be less than the minimum separation between two
consecutive jobs of this element’s task.
The issue of the timing constraints is more important when we handle a dynamic
evolution of a hard real-time system, which needs to maintain high levels of
application availability. In fact, whatever change management system is
used, the dynamic evolution operation is considered a real-time task, and
once this unscheduled task occurs, it should not affect the timing constraints of
the system's tasks.
System tasks are scheduled and executed according to their dynamic pri-
orities. Indeed, whatever the system tasks are, the evolution operation should
interact/behave without compromising the system tasks’ completion. Generally,
tasks in a hard real-time system have higher priority than the evolution oper-
ation, which is usually handled as a background task (with lower priority). In
this aspect, if a background priority task is used to evolve an element, this evo-
lution task can be preempted by any higher priority task (including a task from
the element under evolution), which can lead to an unsafe state or loss of the
internal state of the element. Therefore, an evolution task should directly derive
its priority from the element that undergoes its change. Thus, the management
change should be able to safely handle the evolution tasks while still guaran-
teeing the timing constraints of the system, e.g. it should dynamically prioritize
this unexpected event (evolution operation) within the system threads.
Furthermore, in the replacement and addition operations, it is necessary
to guarantee that the new element threads have taken over the role of the
old element threads without deadline violation, which also requires a dynamic
rescheduling in order to integrate the new element threads with the rest of the
system’s threads in the scheduler.
The architectural element that can be changed in its active state must also
have suitable (dynamic) interfaces to provide the required parameters in
order to be dynamically evolved. An interface is needed to handle the internal
state of the element during the replacement operation. Another interface is also
needed to provide the time parameters for guaranteeing the safe stopping. These
scheduling parameters are required by the scheduler to dynamically schedule the
evolution operation and the threads of the new elements: Worst Case Execution
Time (WCET), deadline and release time. These parameters allow the schedu-
lability analysis of dynamic evolution of hard real-time systems.
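As an illustration only, the sketch below shows what such a dynamic interface and the timing check discussed above could look like; the names and the simple feasibility test are assumptions made for this sketch, not part of the proposed framework.

// Assumed names, for illustration only: a dynamic interface exposing the scheduling
// parameters needed to plan an evolution operation on an active element.
interface DynamicSchedulingInterface {
    long wcetMillis();             // worst-case execution time of the element's job
    long deadlineMillis();         // relative deadline of the element's task
    long releaseTimeMillis();      // next release time of the element's task
    long minSeparationMillis();    // minimum separation between two consecutive jobs
}

final class EvolutionPlanner {
    // The evolution operation may only be scheduled if its maximum duration fits the
    // safe-state window, i.e. it is shorter than the minimum separation between two
    // consecutive jobs of the element's task (cf. the timing constraint above).
    static boolean fitsSafeState(long evolutionWcetMillis, DynamicSchedulingInterface element) {
        return evolutionWcetMillis < element.minSeparationMillis();
    }
}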
Role: Generally, a Role is responsible for the evolution operation that performs
the changes. Managing and performing evolution at run-time requires a highly interactive
Role (external agent or internal instrument).
Dynamic Operation: A dynamic evolution operation can be a simple evolu-
tion like add or delete, or a composite one like replacement. A dynamic evolution
process should be expressed in such a way that it supports both kinds of active-
ness, namely proactive and reactive. This can be achieved by separating evolution
requests (Events) from the evolution mechanisms (Actions). Therefore, the con-
struct of an evolution operation is based on ECA rules “On Event If Condition
Do Action At Time”, which means: when an evolution Event occurs, if the Condi-
tion is verified, then execute the suitable Action at the appropriate time (an illustrative
sketch of such a rule is given after this list of elements). The dynamic
operation must offer a dynamic interface (plan) which provides relevant run-time
parameters that are needed to schedule the operation as soon as possible within
the system tasks.
Dynamic Architecture Elements: An architecture element must be open to
evolution, which means it has an interface with the necessary parameters
that enables it to react dynamically to an evolution operation. An element should
be able to provide its scheduling parameters to allow the Role to dynamically
effect the changes without breaking the timing constraints of the system.
Interaction: In fact, the dynamic evolution is a real-time task, so the inter-
action element must guarantee that evolution Operations are subject to the
timing constraints. The interaction element ensures the availability of required
interfaces and parameters among elements (Operation, Architecture Element,
Role) in the process.
Dynamic Interface: A dynamic element should have an appropriate interface
that provides the required parameters to interact efficiently at run-time. Such
an interface is required, for example, to allow the Instrument (the Role in self-
managing system) to observe an architecture element in order to detect any
evolution event or to determine the appropriate time to effect the changes.
Process: Represents the dynamic configuration of the evolution elements which
transfers a software architecture from its current architecture style to a target
style. This configuration provides the temporal and topological organization of
evolution operations while respecting the consistency and integrity of the archi-
tecture elements.
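The following sketch, announced under Dynamic Operation above, shows one possible and purely illustrative encoding of an ECA evolution rule “On Event If Condition Do Action At Time”; the types are assumptions for this sketch and not the style's actual notation.

// Illustrative only: one possible encoding of the rule "On Event If Condition Do Action At Time".
import java.util.function.Predicate;

interface EvolutionEvent { }                         // an evolution request
interface ArchitectureElement { }                    // the element subject to change

@FunctionalInterface
interface EvolutionAction {                          // the evolution mechanism
    void apply(ArchitectureElement target);
}

record EcaRule(Class<? extends EvolutionEvent> onEvent,      // On Event
               Predicate<ArchitectureElement> condition,     // If Condition
               EvolutionAction action,                       // Do Action
               long atReleaseTimeMillis) {                   // At Time

    // The rule fires when the event occurs and the condition holds on the target element;
    // scheduling the action at the given release time is left to the change management system.
    boolean triggeredBy(EvolutionEvent event, ArchitectureElement target) {
        return onEvent.isInstance(event) && condition.test(target);
    }
}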
5 Conclusions
In this paper, we propose a dynamic evolution style for specifying the dynamic
evolution of software architectures. Our intent is to provide a style sufficiently
rich to model the dynamic changes in software architecture of a real-time system
and to be able to represent the potential ways of performing these changes. To
better realize this intent, we integrate the behavior concepts of dynamic changes
into the MES so we can have a sound understanding of dynamic evolution issues
and constraints, which is a prerequisite to developing a modeling environment
that supports dynamic evolution styles. Our ongoing work is devoted to devel-
oping this environment.
References
1. Hassan, A., Oussalah, M.: Meta-evolution style for software architecture evolution.
In: Freivalds, R.M., Engels, G., Catania, B. (eds.) SOFSEM 2016. LNCS, vol. 9587,
pp. 478–489. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49192-8_39
2. Oussalah, M., Tamzalit, D., Le Goaer, O., Seriai, A.: Updating styles challenge
updating needs within component-based software architectures. In: SEKE (2006)
3. Oreizy, P.: Issues in modeling and analyzing dynamic software architectures. In:
Proceedings of the International Workshop on the Role of Software Architecture
in Testing and Analysis (1998)
4. Kramer, J., Magee, J.: The evolving philosophers problem: dynamic change man-
agement. IEEE TSE 16(11), 1293–1306 (1990)
5. Vandewoude, Y., Ebraert, P., Berbers, Y., D’Hondt, T.: Tranquility: a low dis-
ruptive alternative to quiescence for ensuring safe dynamic updates. IEEE Trans.
Softw. Eng. 33(12), 856–868 (2007)
6. Oreizy, P., Medvidovic, N., Taylor, R.N.: Runtime software adaptation: framework,
approaches, and styles. In: Companion of the 30th International Conference on
Software Engineering, pp. 899–910. ACM (2008)
7. Buckley, J., Mens, T., Zenger, M., Rashid, A., Kniesel, G.: Towards a taxonomy
of software change. J. Softw. Maint. Evol. Res. Pract. 17(5), 309–332 (2005)
8. Liu, C.L., Layland, J.W.: Scheduling algorithms for multiprogramming in a hard-
real-time environment. J. ACM (JACM) 20(1), 46–61 (1973)
9. Richardson, T.: Developing dynamically reconfigurable real-time systems with real-
time OSGi (RT-OSGi). Ph.D. dissertation, University of York (2011)
10. Garlan, D., Perry, D.E.: Introduction to the special issue on software architecture.
IEEE Trans. Softw. Eng. 21(4), 269–274 (1995)
11. Chetto, H., Chetto, M.: Some results of the earliest deadline scheduling algorithm.
IEEE Trans. Softw. Eng. 15(10), 1261–1269 (1989)
12. Costa-Soria, C., Hervás-Muñoz, D., Pérez, J., Carsí, J.Á.: A reflective approach for
supporting the dynamic evolution of component types. In: 14th IEEE International
Conference, pp. 301–310 (2009)
13. Romero, C., J.Á: Contributions to the safe execution of dynamic component-based
real-time systems. Ph.D. dissertation, Carlos III University of Madrid (2012)
14. Spuri, M., Buttazzo, G.: Scheduling aperiodic tasks in dynamic priority systems.
Real-Time Syst. 10(2), 179–210 (1996)
15. Li, Q., Yao, C.: Real-Time Concepts for Embedded Systems. CRC Press, Boca
Raton (2003)
16. Cuesta, C.E., de la Fuente, P., Barrio-Solórzano, M., Beato, E.: Coordination in
a reflective architecture description language. In: Arbab, F., Talcott, C. (eds.)
COORDINATION 2002. LNCS, vol. 2315, pp. 141–148. Springer, Heidelberg
(2002). doi:10.1007/3-540-46000-4_15
17. Dowling, J., Cahill, V.: The K-component architecture meta-model for self-
adaptive software. In: Yonezawa, A., Matsuoka, S. (eds.) Reflection 2001. LNCS,
vol. 2192, pp. 81–88. Springer, Heidelberg (2001). doi:10.1007/3-540-45429-2_6
Retrofitting Controlled Dynamic
Reconfiguration into the Architecture
Description Language MontiArcAutomaton
1 Introduction
Component & connector (C&C) architecture description languages (ADLs) [1,2]
combine the benefits of component-based software engineering with model-driven
engineering (MDE) to abstract from the accidental complexities [3] and nota-
tional noise [4] of general-purpose programming languages (GPLs). They employ
abstract component models to describe software architectures as hierarchies of
connected components. This makes it possible to abstract from ADL implementation
details to a conceptual level applicable to multiple C&C ADLs.
In many ADLs, including MontiArcAutomaton [5], the configuration of C&C
architectures is fixed at design time. The environment or the current goal of the
system might however change during runtime and require dynamic adaptation
of the system [6] to a new configuration that only includes a subset of already
2 Example
Automatic transmission is a commonly used type of vehicle transmission, which
can automatically change gear ratios as a vehicle moves. The driver may choose
from different transmission operating modes (TOMs) such as Park, Reverse,
Neutral, Drive, Sport, or Manual while driving. Depending on the chosen TOM,
a transmission control system decides when to shift gears.
A C&C architecture might provide one component for each different shift-
ing behavior. If the architecture is static, components must exchange control
information at runtime to decide whether they take over the shifting behav-
ior. The architect then has to define and implement inter-component protocols
for switching between different behaviors. Dynamic reconfiguration enables modeling
structural flexibility in composed software components explicitly. Here,
the transmission control system’s architecture uses only components related to
the selected transmission operating mode by reconfiguring connections between
components as well as by dynamic component activation and instantiation.
Figure 1 (top) depicts a C&C model showing the composed component
ShiftController. It contains the three subcomponents manual, auto, and
sport for the execution of different gear shifting behaviors and the subcom-
ponent scs for providing sensor data comprising the current revolutions per
minute (rpm), the vehicle inclination (vi), and the throttle pedal inclination
(tpi) encoded as integers. The component ShiftController has an interface
of type TOM to receive the currently selected TOM and one interface of type
GSCmd to emit commands for shifting gears. Immediately after engine start-up,
all subcomponents are neither active nor connected (top configuration). Once the
currently selected TOM is known to component ShiftController, it changes
its configuration accordingly and starts the contributing subcomponents (bottom
configurations). While the currently selected TOM is Sport (bottom left con-
figuration), only subcomponents scs and sport are active to emit sensor data
and commands for shifting gears, whereas only subcomponent manual is active
when the currently selected TOM is Manumatic (bottom right configuration).
Making the active components and connectors explicit increases comprehensibil-
ity of the architecture. The deactivation of components at runtime has further
practical benefits, such as saving computation time and power consumption.
MontiArcAutomaton
component ShiftController {
  port in TOM tom, out GSCmd cmd;

  component ManShiftCtrl manual;
  component AutoShiftCtrl auto;
  component SCSensors scs;

  mode Idle {} mode Manumatic { /* ... */ } mode Auto { /* ... */ }

  mode Sport, Kickdown {
    activate scs;
    component SportShiftCtrl sport;
    connect scs.rpm -> sport.rpm; connect scs.vi -> sport.vi;
    connect scs.tpi -> sport.tpi; connect sport.cmd -> cmd;
  }

  modetransitions {
    initial Idle;
    Idle -> Auto [tom == DRIVE];
    Auto -> Kickdown [scs.tpi > 90 && tom == DRIVE];
    Kickdown -> Auto [scs.tpi < 90 && tom == DRIVE]; // further transitions
  }
}
time only. Specific transitions control when components may change their con-
figuration. While this restricts arbitrary reconfiguration (cf. π-ADL [11], Arch-
Java [10]), it increases comprehensibility and guarantees static analyzability.
Dynamic reconfiguration can be programmed or ad-hoc [21]. In programmed
reconfiguration (e.g., ACME/Plastik [13], AADL [15], ArchJava [10]), conditions
and effects specified at design time are applied at runtime. Ad-hoc reconfigura-
tion (e.g., C2 SADL [9], Fractal [14], ACME/Plastik [13]) does not necessarily
have to be specified at design time and takes place at runtime, e.g., invoked by
reconfiguration scripts. It introduces greater flexibility, but component models
do not reflect the reconfiguration options. This enables simulating unforeseen
changes to test an architecture’s robustness, but it complicates analysis and
evolution. For the latter reason MontiArcAutomaton’s concept solely includes
programmed reconfiguration.
Besides modeling dynamic removal and establishment of connectors, Monti-
ArcAutomaton supports dynamic instantiation and removal of components. In
ACME/Plastik [13], so-called actions can remove and create connectors and com-
ponents. ArchJava [10] embeds architectural elements in Java and, hence, enables
instantiating corresponding component classes as Java objects. C2 SADL [9] sup-
ports ad-hoc instantiation and removal of components. Fractal [14] provides sim-
ilar concepts in its aspect-oriented Java implementation. π-ADL’s [11] language
constructs enable instantiation, removal, and movement of components.
5 Conclusion
References
1. Medvidovic, N., Taylor, R.: A classification and comparison framework for software
architecture description languages. IEEE Trans. Softw. Eng. 26, 70–93 (2000)
2. Malavolta, I., Lago, P., Muccini, H., Pelliccione, P., Tang, A.: What industry needs
from architectural languages: a survey. IEEE Trans. Softw. Eng. 39, 869–891 (2013)
3. France, R., Rumpe, B.: Model-driven development of complex software: a research
roadmap. In: 2007 Future of Software Engineering. ICSE (2007)
4. Wile, D.S.: Supporting the DSL spectrum. Comput. Inf. Technol. 9, 263–287 (2001)
5. Ringert, J.O., Roth, A., Rumpe, B., Wortmann, A.: Language and code genera-
tor composition for model-driven engineering of robotics component & connector
systems. J. Softw. Eng. Robot. (JOSER) 6, 33–57 (2015)
6. Salehie, M., Tahvildari, L.: Self-adaptive software: landscape and research chal-
lenges. ACM Trans. Auton. Adapt. Syst. (TAAS) 4, 14–15 (2009)
7. Ringert, J.O., Rumpe, B., Wortmann, A.: Architecture and behavior modeling of
cyber-physical systems with MontiArcAutomaton. Shaker Verlag (2014)
8. Lim, W.Y.P.: PADL-a packet architecture description language. Massachusetts
Institute of Technology, Laboratory for Computer Science (1982)
9. Medvidovic, N.: ADLs and dynamic architecture changes. In: Joint Proceedings of
the Second International Software Architecture Workshop (ISAW-2) and Interna-
tional Workshop on Multiple Perspectives in Software Development (Viewpoints
1996) on SIGSOFT 1996 Workshops (1996)
10. Aldrich, J., Chambers, C., Notkin, D.: ArchJava: connecting software architec-
ture to implementation. In: Proceedings of the 24th International Conference on
Software Engineering (ICSE) (2002)
11. Oquendo, F.: π-ADL: an architecture description language based on the higher-
order typed π-calculus for specifying dynamic and mobile software architectures.
ACM SIGSOFT Softw. Eng. Notes 29, 1–14 (2004)
12. Cuesta, C.E., de la Fuente, P., Barrio-Solórzano, M., Beato, M.E.G.: An “abstract
process” approach to algebraic dynamic architecture description. J. Log. Algebr.
Program. 63, 177–214 (2005)
13. Joolia, A., Batista, T., Coulson, G., Gomes, A.T.: Mapping ADL specifications
to an efficient and reconfigurable runtime component platform. In: 5th Working
IEEE/IFIP Conference on Software Architecture, WICSA 2005 (2005)
14. Bruneton, E., Coupaye, T., Leclercq, M., Quéma, V., Stefani, J.: The fractal com-
ponent model and its support in Java. Softw. Pract. Exp. 36, 1257–1284 (2006)
15. Feiler, P.H., Gluch, D.P.: Model-Based Engineering with AADL: An Introduction
to the SAE Architecture Analysis & Design Language. Addison-Wesley, Upper
Saddle River (2012)
16. AutoFocus 3 Website. http://af3.fortiss.org/. Accessed: 18 Jan 2016
17. Aravantinos, V., Voss, S., Teufl, S., Hölzl, F., Schätz, B.: AutoFOCUS 3: tooling
concepts for seamless, model-based development of embedded systems. In: Joint
Proceedings of ACES-MB 2015 – Model-Based Architecting of Cyber-physical and
Embedded Systems and WUCOR 2015 – UML Consistency Rules (2015)
18. Cassou, D., Koch, P., Stinckwich, S.: Using the DiaSpec design language and
compiler to develop robotics systems. In: Proceedings of the Second Interna-
tional Workshop on Domain-Specific Languages and Models for Robotic Systems
(DSLRob) (2011)
19. Becker, S., Koziolek, H., Reussner, R.: Model-based performance prediction with
the palladio component model. In: Proceedings of the 6th International Workshop
on Software and Performance (2007)
20. Khare, R., Guntersdorfer, M., Oreizy, P., Medvidovic, N., Taylor, R.N.: xADL:
enabling architecture-centric tool integration with XML. In: Proceedings of the
34th Annual Hawaii International Conference on System Sciences (2001)
21. Bradbury, J.S.: Organizing definitions and formalisms for dynamic software archi-
tectures. Technical report, School of Computing, Queen’s University (2004)
Verification and Consistency
Management
Statistical Model Checking of Dynamic Software
Architectures
1 Introduction
One of the major challenges in software engineering is to ensure correctness
of software-intensive systems, especially as they have become increasingly com-
plex and used in many critical domains. Addressing these concerns becomes even more
important when evolving these systems since such verification needs
to be performed before, during, and after evolution. Software architectures play
an essential role in this context since they represent an early blueprint for the
system construction, deployment, execution, and evolution.
The critical nature of many complex software systems calls for rigorous archi-
tectural models (such as formal architecture descriptions) as means of supporting
the automated verification and enforcement of architectural properties. However,
architecture descriptions should not cover only structure and behavior of a soft-
ware architecture, but also the required and desired architectural properties, in
particular the ones related to consistency and correctness [15]. For instance, after
describing a software architecture, a software architect might want to verify if it
is complete, consistent, and correct with respect to architectural properties.
In order to foster the automated verification of architectural properties based
on architecture descriptions, they need to be formally specified. Despite the
inherent difficulty of pursuing formal methods, the advantage of a formal ver-
ification is to precisely determine if a software system can satisfy properties
related to user requirements. Additionally, automated verification provides an
efficient method to check the correctness of architectural design. As reported by
Zhang et al. [19], one of the most popular formal methods for analyzing software
architectures is model checking, an exhaustive, automatic verification technique
whose general goal is to verify if an architectural specification satisfies architec-
tural properties [8]. It takes as inputs a representation of the system (e.g., an
architecture description) and a set of property specifications expressed in some
notation. The model checker returns true if the properties are satisfied, or false
in case a given property is violated.
Despite its wide and successful use, model checking faces a critical challenge
with respect to scalability. Holzmann [10] remarks that no currently available
traditional model checking approach is exempted from the state space explosion
problem, that is, the exponential growth of the state space. This problem is exac-
erbated in the contemporary dynamic software systems for two main reasons,
namely (i) the non-determinism of their behavior caused by concurrency and
(ii) the unpredictable environmental conditions in which they operate. In spite
of the existence of a number of techniques aimed at reducing the state space,
such a problem remains intractable for some software systems, thereby making
the use of traditional model checking techniques a prohibitive choice in terms of
execution time and computational resources. As a consequence, software archi-
tects have to trade-off the risks of possibly undiscovered problems related to the
violation of architectural properties against the practical limitations of applying
a model checking technique on a very large architectural model.
In order to tackle the aforementioned issues, this paper proposes the use of
statistical model checking (SMC) to support the formal verification of dynamic
software architectures while striving to reduce computational resources and time
for performing this task. SMC is a probabilistic, simulation-based technique
intended to verify, at a given confidence level, if a certain property is satis-
fied during the execution of a system [13]. Unlike model checking, SMC does
not analyze the internal logic of the target system, thereby not suffering from
the state space explosion problem [12]. Furthermore, an SMC-based approach
promotes better scalability and less consumption of computational resources,
important factors to be considered when analyzing software architectures for
under verification, a model checker for verifying properties, and a statistical ana-
lyzer responsible for calculating probabilities and performing statistical tests. It
receives three inputs: (i) an executable stochastic model of the target system M ;
(ii) a formula ϕ expressing a bounded property to be verified, i.e., a property
that can be decided over a finite execution of M ; and (iii) user-defined precision
parameters determining the accuracy of the probability estimation. The model
M is stochastic in the sense that the next state is probabilistically chosen among
the states that are reachable from the current one. Depending on the probabilis-
tic choices made during the executions of M , some executions will satisfy ϕ and
others will not. The simulator executes M and generates an execution trace σi
composed of a sequence of states. Next, the model checker determines if σi sat-
isfies ϕ and sends the result (either success or failure) to the statistical analyzer,
which in turn estimates the probability p for M to satisfy ϕ. The simulator
repeatedly generates other execution traces σi+1 until the analyzer determines
that enough traces have been analyzed to produce an estimation of p satisfying
the precision parameters. A higher accuracy of the answer provided by the model
checker requires generating more execution traces through simulations.
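As a minimal illustration of this loop, the sketch below estimates p by Monte Carlo simulation; the stopping rule shown (a Chernoff-Hoeffding style sample bound for absolute error eps and confidence 1 - delta) is a common choice in SMC tools and is used here only as an assumption about how the precision parameters may be interpreted.

// Sketch of the statistical model checking loop described above (assumed stopping rule).
import java.util.function.BooleanSupplier;

final class SmcEstimator {
    // simulateAndCheck: runs the stochastic model M once and reports whether the
    // generated execution trace satisfies the bounded property phi.
    static double estimate(BooleanSupplier simulateAndCheck, double eps, double delta) {
        // Number of traces required by a Chernoff-Hoeffding bound for the given precision.
        long n = (long) Math.ceil(Math.log(2.0 / delta) / (2.0 * eps * eps));
        long successes = 0;
        for (long i = 0; i < n; i++) {
            if (simulateAndCheck.getAsBoolean()) successes++;   // checker verdict for trace sigma_i
        }
        return (double) successes / n;   // estimate of the probability p that M satisfies phi
    }
}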
non-local action, it informs the scheduler and blocks until the scheduler responds.
The scheduler responds with the action executed (if the component has submit-
ted a choice between several actions) and a return value, corresponding either
to the receiving side of a communication or a decomposed architecture.
Figure 2 depicts the behavior of the scheduler. The scheduler waits until all
components and connectors have indicated their possible actions. At this step,
the scheduler builds a list of possible rendezvous by checking which declared uni-
fications have both sender and receiver ready to communicate. For this purpose,
the scheduler maintains a list of the active architectures and the correspond-
ing unifications. The possible communications are added to the list of possible
actions and the scheduler chooses one of them according to a probabilistic choice
function. The scheduler then executes the action and outputs its effect to the
statistical model checker. Finally, the scheduler notifies the components and
connectors involved in the action.
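The scheduler's cycle described above can be restated in code form as follows; the data structures and method names are assumptions made for illustration and do not correspond to the generated Go program or to PLASMA's implementation.

// Illustrative sketch of one scheduler step as described above (assumed types).
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class DeclaredAction { }                 // an action a component or connector declared as possible
class Unification {                      // a declared unification between an output and an input
    boolean ready(List<DeclaredAction> declared) { return false; }  // stub: both sides ready?
}

final class Scheduler {
    private final List<Unification> unifications = new ArrayList<>(); // for the active architectures
    private final Random choice = new Random();

    // Invoked once all components and connectors have declared their possible actions.
    DeclaredAction step(List<DeclaredAction> declaredActions) {
        List<DeclaredAction> possible = new ArrayList<>(declaredActions);
        // Add a rendezvous for each unification whose sender and receiver are both ready.
        for (Unification u : unifications) {
            if (u.ready(declaredActions)) possible.add(new DeclaredAction());
        }
        if (possible.isEmpty()) return null;            // nothing enabled in this step
        // Probabilistic choice among the possible actions (uniform here for simplicity).
        DeclaredAction chosen = possible.get(choice.nextInt(possible.size()));
        // Executing the action, reporting its effect to the statistical model checker, and
        // notifying the involved components and connectors are omitted in this stub.
        return chosen;
    }
}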
We have developed two plug-ins atop the PLASMA platform, namely (i) a
simulator plug-in that interprets execution traces produced by the generated Go
program and (ii) a checker plug-in that implements DynBLTL. With this tool-
chain, a software architect is able to evaluate the probability of a π-ADL archi-
tectural model to satisfy a given property specified in DynBLTL. The developed
tools are publicly available at http://plasma4pi-adl.gforge.inria.fr.
6 Case Study
6.1 Description
A flood monitoring system can support monitoring urban rivers and create alert
messages to notify authorities and citizens about the risks of an imminent flood,
thereby fostering effective predictions and improving warning times. This system
is typically based on a wireless sensor network composed of sensors that measure
the water level in flood-prone areas near the river. In addition, a gateway station
analyzes data measured by motes, makes such data available, and can trigger
alerts when a flood condition is detected. The communication among these ele-
ments takes place by using wireless network connections, such as WiFi, ZigBee,
GPRS, Bluetooth, etc.
Figure 4 shows the main architecture of the system. Sensor components com-
municate with each other through ZigBee connectors and a gateway component
receives all measurements to evaluate the current risk. Each measure from a sen-
sor is propagated to its neighbors via ZigBee connectors until reaching the gateway.
The environment is modeled through the Env component and the SensorEnv and
Budget connectors. Env is responsible for synchronizing the model by defining
cycles corresponding to the frequency at which measures are taken by sensors.
A cycle consists of: (i) signaling Budget that a new cycle has started; (ii) updating
the river status; (iii) registering deployed sensors; (iv) signaling each SensorEnv
connector to deliver a new measure; and (v) waiting for each SensorEnv connec-
tor to confirm that a new measure has been delivered. The Sensor, SensorEnv,
and ZigBee elements can be added and removed during the execution of the system
through reconfigurations triggered by the gateway component.
Fig. 4. Overview of the main architecture for the flood monitoring system.
Figure 5 shows an excerpt of the π-ADL description for the sensor component.
The behavior of this component comprises choosing between two alternatives,
either obtaining a new measure (i) from the environment via the sense input
connection or (ii) from a neighbor sensor via the pass input connection. After
receiving the gathered value, it is transmitted through the measure output con-
nection. Reading a negative value indicates a failure of the sensor, so that it
becomes a FailingSensor, which simply ignores all incoming messages.
We have modeled two reconfigurations, namely adding and removing a sensor,
as depicted in Fig. 6. The gateway component decides to add a sensor if the
coverage of the river is not optimal and the budget is sufficient to deploy a
new sensor. This operation is triggered by sending a message to Reconf via the
newS connection, with the desired location for the new sensor. The new sensor is
connected to other sensors in range via a ZigBee connector, as shown in Fig. 6(a).
During this operation, Reconf decomposes the main architecture to include the
new elements and unifications before recomposing it. The reconfiguration uses
the position of each sensor to determine which links have to be created. After
triggering the reconfiguration, the gateway indicates to the Budget connector
that it has spent the price of a sensor.
loses all messages. This new connection prevents deadlocks that occur when the
last element of the isolated chain cannot propagate its message. When a sensor
is removed, the connected ZigBee and SensorEnv are composed in a separated
architecture. This architecture connects the killZb connection of the sensor to
the die connections of the ZigBee connectors, which allows another branch of
the behavior to properly terminate these components.
6.2 Requirements
This property characterizes a false negative: the gateway predicts a low risk and
a flood occurs in the next Y time units. The parameters of this formula are X,
the time during which the system is monitored, and Y , the time during which
the prediction of the gateway should hold.
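The DynBLTL formula for this property is not reproduced in this excerpt. A plausible sketch, mirroring the false-positive formula shown next and assuming a hypothetical alert value "low risk", would be:

eventually before X time units { // FalseNegative(X, Y) -- illustrative sketch only
  gw.alert = "low risk"          // assumed alert value, not taken from the paper
  and eventually before Y time units env.flood
}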
Similarly, a false positive occurs when the system predicts a flood that does
not actually occur:
eventually before X time units {  // FalsePositive(X, Y)
  gw.alert = "flood detected"
  and always during Y time units not env.flood
}
The system is correct if there are no false negatives or false positives for the
expected prediction anticipation (parameter Y).
These two formulas are actually BLTL formulas as they involve simple pred-
icates on the state. However, DynBLTL allows expressing properties about the
dynamic architecture of the system. For example, suppose that one wants to
check that if a sensor sends a message indicating that it is failing, then it must
be removed from the system in a reasonable amount of time. This disconnection
is needed because the sensor in failure will not pass incoming messages. We char-
acterize the removal of a sensor by a link on the end connection, corresponding
to the initiation of the sensor termination (not detailed here).
In our dynamic system, sensors may appear and disappear during execution.
Therefore, the temporal pattern needs to be dynamically instantiated at each
step for each existing sensor:
always during X time units {  // RemoveSensor(X, Y)
  forall s : allOfType(Sensor) {
    (isTrue s.measure < 0) implies {
      eventually before Y time units {
        exists st : allOfType(StartTerminate)
          areLinked(st.start, s.end)
      }
    }
  }
}
This property cannot be stated in BLTL since it does not have a construct such
as forall for instantiating a variable number of temporal sub-formulas depending
on the current state.
Another property of interest consists in checking if a sensor is available, i.e.,
at least one sensor is connected to the gateway. More precisely, there must be
a ZigBee connector between the gateway and a sensor. If not, we require that
such a sensor appear in less than Y time units:
always during X time units {  // SensorAvailable(X, Y)
  (not (exists zb : allOfType(ZigBee) areLinked(zb.output, gw.pass)
    and (exists s : allOfType(Sensor) areLinked(s.measure, zb.input))))
  implies (eventually before Y time units {
    exists zb : allOfType(ZigBee) areLinked(zb.output, gw.pass)
    and (exists s : allOfType(Sensor) areLinked(s.measure, zb.input))
  })
}
A confidence of 95 % was chosen, and the precision ranged over 0.02, 0.03, 0.04,
0.05, and 0.1, respectively requiring 4612, 2050, 1153, 738, and 185 simulations.
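For reference, these simulation counts are consistent with the Chernoff-Hoeffding bound commonly used to dimension statistical model checking experiments (stated here as a sketch; the exact bound applied by PLASMA is assumed to be equivalent):
\[ N \;\geq\; \frac{\ln(2/\delta)}{2\,\varepsilon^{2}}, \qquad 1-\delta = 0.95, \]
where \(\varepsilon\) is the precision. For instance, \(\varepsilon = 0.02\) gives \(N \geq \ln(40)/(2 \cdot 0.0004) \approx 4611.2\), i.e., 4612 simulations.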
Figure 7(a) shows how the average analysis time (in seconds) increases when the
precision increases, i.e., when the error decreases. As highlighted in Sect. 2, a higher
accuracy of the answer provided by the statistical model checker requires gener-
ating more execution traces through simulations, thereby increasing the analysis
time. The property regarding the sensor availability evaluated over a window of
50 time units requires less time than the other properties evaluated over a win-
dow of 100 time units because the analysis of each trace is faster. In Fig. 7(b),
it is possible to observe that the average amount of RAM (in megabytes)
required to perform the analyses remains nearly constant, meaning that the
precision has no strong influence on RAM consumption. This can be
explained by the fact that SMC only analyzes one trace at a time. Therefore, we
can conclude that our SMC approach and toolchain can be regarded as efficient
with respect to both execution time and RAM consumption.
Fig. 7. Effects of the variation in the precision in the analysis of three properties upon
analysis time (a) and RAM consumption (b).
7 Conclusion
In this paper, we have presented our approach to using statistical model
checking (SMC) to verify properties in dynamic software architectures. Our main
contribution is an SMC-based toolchain for specifying and verifying such proper-
ties atop the PLASMA platform. The inputs for this process are a probabilistic
Consistent Inconsistency Management:
A Concern-Driven Approach
Jasper Schenkhuizen1, Jan Martijn E.M. van der Werf1(B), Slinger Jansen1,
and Lambert Caljouw2
1 Department of Information and Computing Science,
Utrecht University, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
{j.schenkhuizen,j.m.e.m.vanderwerf,slinger.jansen}@uu.nl
2 Unit4, Papendorpseweg 100, 3528 BJ Utrecht, The Netherlands
[email protected]
1 Introduction
Inconsistency is prevalent in software development and software architecture
(SA) [7]. Although inconsistency in software architecture is not necessarily a
bad thing [18], undiscovered inconsistency leads to all kinds of problems [17,20].
Inconsistency is present if two or more statements made about a system or its
architecture are not jointly satisfiable [9], mutually incompatible, or conflicting
[3]. Examples of inconsistency include the failure of a syntactic equivalence test,
non-conformance to a standard or constraint [9], or two developers implementing a
non-relational and a relational database technology for the same database.
In SA, inconsistency has a wide range of dimensions, such as inconsistency
in code, inconsistent requirements, or model inconsistency. We refer to these
types of inconsistency as ‘tangible’ inconsistency. In contrast, an ‘intangible’
inconsistency, commonly referred to as a conflict, remains undocumented or
unspecified: for example, inconsistent design decisions or concerns. In architecture, a conflict
between concerns occurs if their associated design decisions are mutually incom-
patible, or negatively affect each other. That is, a conflict (intangible inconsis-
tency) can potentially manifest itself as a tangible inconsistency. Thus, if design
2 Inconsistency Management in SA
Fig. 1. This figure describes the four phases of the CDIM, with the seven corresponding
activities. The activities are performed in a cyclic manner. Each activity consists
of different sub-steps.
1. a unique identifier,
2. a short, concise name,
3. a comprehensive definition and explanation of the concern,
4. the concern’s priority,
5. related stakeholders that have an interest in the concern,
6. the perspective or category to which the concern belongs, and
7. possible associated architectural requirements, which can be used during
discussion.
During the execution of the CDIM, concern-cards are used to assist the architect
in collecting and understanding the different concerns, by making these explicit.
3.3 Do Phase
The Do phase consists of two activities: identify and discover. In identify the
architect tries to identify possible conflicts through a workshop with the relevant
stakeholders. Input of the workshop is the previously constructed matrix. The
principal idea behind the cells of the matrix is that these provide the hotspots:
areas in the architecture where concerns could possibly overlap or conflict. Mul-
tiple overlaps or conflicts may be contained in each cell, as visualized in Fig. 2.
The “hotspots” are discussed by the architect and stakeholders. Their role is
to aid the evaluator in deciding how conflicts affect or could affect the
architecture, and which conflicts could be problematic.
The outcome is a completed matrix, presenting the architect with the important areas
where inconsistencies may arise.
In discover, the architect goes through the existing architecture, possibly together
with relevant stakeholders, to discover whether the identified conflicts are actual
inconsistencies. The architect uses a combination of expertise and knowledge of the
system to systematically search for important inconsistencies, using the conflicts
identified in the matrix. Given the deliberate simplicity of CDIM and the complexity
of a software architecture context, the steps in this phase inevitably rely on judgment
rather than quantifiable assessment. The drawback is that this approach is less explicit
and relies more on subjective factors such as intuition and experience. The advantage
is that it is easier and more flexible. The main
outcome of this activity is a list of important inconsistencies in the architecture.
Fig. 2. A tool can be beneficial to the extent that the architect can use an overview
of the number of inconsistencies that are still open (red), that are carefully ignored
(bright yellow), tolerated (cream yellow), or that have been solved (green) to indicate
what needs to be done. (Color figure online)
In the final phase of CDIM, the architect creates and executes handling actions
for each inconsistency if needed (execute), and determines how to proceed (fol-
low up). In execute, the architect handles discovered inconsistency based on
five actions for settling inconsistencies: ‘resolve’, ‘ignore’, ‘postpone’, ‘bypass’,
and ‘improve’. It is important to note that handling an inconsistency is always
context-specific and requires human insight. ‘Resolving’ the inconsistency is
recommended if both the impact and the business value are high, regardless of the
engineering effort. Solving an inconsistency could be relatively simple (adding
or deleting information from a description or view). However, in some cases
References
1. Babar, M.A., Zhu, L., Jeffery, R.: A framework for classifying and comparing soft-
ware architecture evaluation methods. In: 15th Australian Software Engineering
Conference, pp. 309–319. IEEE Computer Society (2004)
2. Blanc, X., Mounier, I., Mougenot, A., Mens, T.: Detecting model inconsistency
through operation-based model construction. In: 30th International Conference on
Software Engineering, pp. 511–520. ACM (2008)
3. Dashofy, E.M., Taylor, R.N.: Supporting stakeholder-driven, multi-view software
architecture modeling. Ph.D. thesis, University of California, Irvine (2007)
4. Easterbrook, S.: Handling conflict between domain descriptions with computer-
supported negotiation. Knowl. Acquis. 3(3), 255–289 (1991)
5. Finkelstein, A.: A foolish consistency: technical challenges in consistency manage-
ment. In: Ibrahim, M., Küng, J., Revell, N. (eds.) DEXA 2000. LNCS, vol. 1873,
pp. 1–5. Springer, Heidelberg (2000). doi:10.1007/3-540-44469-6 1
6. Finkelstein, A., Spanoudakis, G., Till, D.: Managing interference. In: 2nd Interna-
tional Software Architecture Workshop (ISAW-2) and International Workshop on
Multiple Perspectives in Software Development, pp. 172–174 (1996)
7. Ghezzi, C., Nuseibeh, B.: Guest editorial: introduction to the special section -
managing inconsistency in software development. IEEE Trans. Softw. Eng. 25(6),
782–783 (1999)
8. Grenning, J.: Planning Poker or How to Avoid Analysis Paralysis While Release
Planning, vol. 3. Renaissance Software Consulting, Hawthorn Woods (2002)
9. Herzig, S.J.I., Paredis, C.J.J.: A conceptual basis for inconsistency management in
model-based systems engineering. Procedia CIRP 21, 52–57 (2014)
10. Hilliard, R.: Lessons from the unity of architecting. In: Software Engineering in
the Systems Context, pp. 225–250 (2015)
11. Johnson, C.N.N.: The benefits of PDCA. Qual. Prog. 35(3), 120 (2002)
12. Kazman, R., Bass, L., Klein, M.: The essential components of software architecture
design and analysis. J. Syst. Softw. 79(8), 1207–1216 (2006)
13. Kruchten, P., Lago, P., Vliet, H.: Building up and reasoning about architectural
knowledge. In: Hofmeister, C., Crnkovic, I., Reussner, R. (eds.) QoSA 2006. LNCS,
vol. 4214, pp. 43–58. Springer, Heidelberg (2006). doi:10.1007/11921998 8
14. Lago, P., Avgeriou, P., Hilliard, R.: Guest editors’ introduction: software architec-
ture: framing stakeholders’ concerns. IEEE Softw. 27(6), 20–24 (2010)
15. Lucassen, G., Dalpiaz, F., van der Werf, J.M.E.M., Brinkkemper, S.: The use and
effectiveness of user stories in practice. In: Daneva, M., Pastor, O. (eds.) REFSQ
2016. LNCS, vol. 9619, pp. 205–222. Springer, Heidelberg (2016). doi:10.1007/
978-3-319-30282-9 14
16. Luinenburg, L., Jansen, S., Souer, J., van de Weerd, I., Brinkkemper, S.: Designing
web content management systems using the method association approach. In: 4th
International Workshop on Model-Driven Web Engineering, pp. 106–120 (2008)
17. Muskens, J., Bril, R.J., Chaudron, M.R.V.: Generalizing consistency checking
between software views. In: 5th Working IEEE/IFIP Conference on Software Archi-
tecture, pp. 169–180. IEEE Computer Society (2005)
18. Nentwich, C., Capra, L., Emmerich, W., Finkelstein, A.: xlinkit: a consistency
checking and smart link generation service. ACM Trans. Internet Technol. 2(2),
151–185 (2002)
19. Nuseibeh, B.: To be, not to be: on managing inconsistency in software development.
In: 8th International Workshop on Software Specification and Design, p. 164. IEEE
Computer Society (1996)
20. Nuseibeh, B., Easterbrook, S.M., Russo, A.: Making inconsistency respectable in
software development. J. Syst. Softw. 58(2), 171–180 (2001)
21. Robinson, W.N., Pawlowski, S.D.: Managing requirements inconsistency with
development goal monitors. IEEE Trans. Softw. Eng. 25(6), 816–835 (1999)
22. Rozanski, N., Woods, E.: Software Systems Architecture: Working with Stakehold-
ers Using Viewpoints and Perspectives. Addison-Wesley, Reading (2012)
23. Schenkhuizen, J.: Consistent inconsistency management: a concern-driven app-
roach. Technical report, Utrecht University (2016). http://dspace.library.uu.nl/
bitstream/handle/1874/334223/thesisv1 digitaal.pdf
24. Spanoudakis, G., Zisman, A.: Inconsistency management in software engineering:
survey and open research issues. Handb. Softw. Eng. Knowl. Eng. 1, 329–380 (2001)
Formal Verification of Software-Intensive
Systems Architectures Described with Piping
and Instrumentation Diagrams
1 Introduction
diagram describing the architecture of the system porting the system in its entire
cycle of development [3]. Piping and Instrumentation Diagrams (P&IDs) are
technical diagrams widely used in the process industry.
P&ID is a detailed graphic description of the architecture in terms of process
flows showing all the pipes, the devices, and the instrumentation associated
with a given system [16]. The P&ID is the standard diagram that maps all
the components and connections of an industrial process. Each component is
represented by a symbol defined in ANSI/ISA 5.1 [9]. These components are
connected by physical connectors such as pipes and cables, or software links
[3]. The exchanged data (instrumentation) between the physical system and the
control programs are also represented in this diagram.
Currently, designers of socio-technical systems describe system architectures
manually and informally, which leads to several errors that are detected in
the testing phases. Verification steps must be integrated into the design process
in order to prevent errors and minimize costs. Despite the standardization efforts
of P&ID by ANSI/ISA-5.1 [9], there is currently no formal definition of these
diagrams enabling analysis. Verifying the P&ID already in the architectural
phase can significantly reduce costs and errors [17]. We propose in this paper
a three-step formal approach to verify the “total correctness” in terms of com-
pleteness, consistency, compatibility, and correctness of P&ID. In the first step,
we propose to formalize the P&ID as an architectural style with Alloy. This
architectural style provides a common representation vocabulary and rules for
architectures described in terms of P&ID [22]. In the second step, in order to ver-
ify the architectural models in P&ID, we generate from these diagrams (by using
MDE) a formal model in Alloy. In the third step, we verify the compatibility of
the generated models with the style defined in the first step, its completeness,
consistency, and correctness using the Alloy Analyzer.
The remainder of this paper is organized as follows: in Sect. 2, we present
the state of the art on the formal verification of P&ID, including modeling
and analysis techniques applied to software architectures, and introduce the
Alloy language. In Sect. 3, we present our approach for the formal verification of
P&ID. We present our formalization of the P&ID architectural style with Alloy
in Sect. 4. The automated generation of Alloy models from the P&ID is shown in
Sect. 5. In Sect. 6, we illustrate our approach through an industrial case study. In
the final section, we present our conclusion and perspectives for future research.
Few studies have focused on the formal verification of the P&ID. Yang [25]
proposes a semi-automatic approach to build SMV models from plants CDEP
and P&ID. These models are used to verify safety properties written in CTL by
model checking (SMV). Krause et al. [15] propose a method to extract, from the
P&ID, the data related to the safety and reliability of systems. These data are
extracted in two graphs: Netgraph and a reliability graph. NetGraph represents
all the information related to devices and connections (structure). The reliability
2.2 Alloy
Alloy is a formal declarative language based on relations and first-order logic
[10]. The idea behind Alloy is to enable system modeling with an abstract model
representing the important functionalities of the system (micro model).
Alloy logic is based on the concepts of atoms and relations. An atom
represents a basic, indivisible entity characterized by a type. A relation
is a set of tuples linking atoms. These relations are combined with operators to
form expressions. There are three types of Alloy operators: set operators such
as union (+), difference (−), intersection (&), inclusion (in) and equality (=);
relational operators such as product (->), join (.), transpose (~), transitive
closure (^) and reflexive transitive closure (*); logical operators such as negation
(!), conjunction (&&), disjunction (||), implication (=>) and bi-implication (<=>).
Quantified constraints have the form: Q x : e | F , with F a constraint on the set
x, e an expression on the elements of the type x and Q a quantifier that can
take one of the values: all (each element in x), some (at least one element), no
(no element), lone (at most one element) and one (exactly one element). For
example, all x : e | F is true if all elements in x satisfy F . Declarations, in Alloy,
are in the form: relation: expression, with expression a constraint bounding the
relation elements. For example, r : A n -> m B, with n, m ∈ {set (set of elements),
one, lone, some} and A, B sets of atoms. The relation r defines that each
element of the set A is mapped to m elements of the set B, and that each element
in the set B is mapped to n elements of the set A.
An Alloy model, organized in modules, consists of a set of signatures,
constraints, and commands. A signature, declared by sig, introduces a set of atoms
and defines the basic types, through a set of attributes and relations with other
signatures. It matches the notion of class in object-oriented modeling in that
it can be abstract and inherit other signatures [11]. Constraints, organized in
facts (fact), predicates (pred), functions (fun), and assertions (assert) [11],
restrict the space of model instances. The fact is a Boolean expression that each
instance of the model must satisfy. A pred is a reusable constraint that returns
Boolean values (true, false) when it is invoked. A fun is a reusable expression
that can be invoked in the model. It may have parameters and returns Boolean
or relational values. An assert is a theorem that has no arguments and that
requires verification. Commands describe the purpose of the verification. The
Alloy Analyzer [1] can be used as a simulator, with the run command, to obtain
a solution (instance) that satisfies all the constraints, or as a checker, with the
check command, for searching a counterexample that violates an assertion [11].
The model is transformed by the parser into Boolean expressions that can be
verified by different solvers (Kodkod, SAT4J, etc.). A scope is necessary to limit
(bound) the search space.
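As a small illustration of these constructs (not taken from the paper; all names are hypothetical):

sig Device {}
sig Pipe { ends: set Device }                   -- a field declares a relation from Pipe to Device
fact TwoEnds { all p: Pipe | #p.ends = 2 }      -- every pipe connects exactly two devices
pred shared [d: Device] {
  some disj p1, p2: Pipe | d in p1.ends and d in p2.ends }
assert NoEmptyPipe { no p: Pipe | no p.ends }
run shared for 4          -- simulator: find an instance in which some device is shared
check NoEmptyPipe for 4   -- checker: search for a counterexample within a scope of 4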
The Alloy language has been used to specify architectural styles [14,24] and
to model and verify model architectures [4,13]. The atoms and relations are used
to model the design vocabulary (components and connectors). The constraints
(fact, pred, fun, assert) are used to manage style invariants (rules) that describe
the allowable configurations from its design vocabulary. In the next sections, we
present our formal approach to verify the P&ID.
3 Proposed Approach
In our previous study, we defined a standard library for building the P&ID of a
fluid management system [3]. We used the Microsoft Visio tool to capture the struc-
ture of different P&ID components and connectors. Figure 1 shows an extract of
the P&ID metamodel. The P&ID is an assembly of shapes and bonds. Each shape
represents a component of the standard ANSI/ISA-5.1 [9]. It is characterized by
a name and id that correspond respectively to the component type and compo-
nent identifier. The shape contains a set of ports that represent its interaction
points. The bonds types can be: process (Process, in the enumeration TypeCon),
electrical (ElectricalSignal), instrument (InstrumentSupply), and software links
(InternalSystemLink). A bond ensures the interaction between components;
it comprises at least two extremities, and each extremity is connected to one
port.
As the P&ID is an architectural diagram, it must be complete, consistent,
compatible with an architectural style, and correct with respect to the system requirements
[22]. To this end, we propose a three-step approach to formally verify the P&ID.
In the first step, we defined an architectural style in Alloy, based on ANSI/ISA-
5.1 [9] standard, to describe the different constraints that the P&ID must satisfy.
In the second step, we used the concepts of MDE to generate a configuration in
Alloy from the P&ID. The Alloy analyzer was used, in the third step, to verify
the completeness and consistency of the P&ID, its compatibility with the style,
and its correctness with respect to the system requirements.
Two modules were used for checking the P&ID. The first module, called
library, represents the architectural style and its invariants. The second module,
automatically generated from a P&ID, describes a configuration of a fluid man-
agement system. We show below the P&ID architectural style formalization and
the automatic transformation of P&ID into Alloy.
4.1 Component
We model the components by the Component and Port signatures (Listing 1).
The component is an abstract signature (abstract sig) that contains a set of
ports, described by the relation ports: set Port, and a set of actions. Actions,
modeled by the relation actions: Port set -> set Port, represent the com-
ponent’s actions on the fluid through its ports, i.e. the routing of the fluid, in
the component, from port A to port B. The constraint actions.Port in ports
means that the join of the relation actions with the set Port is included in the set of
component ports. In other words, the component actions concern just its ports.
The Port signature describes a port related to one and only one component,
modeled by the relation component: one Component.
Listing 1
abstract sig Component {
  ports : set Port,
  actions : Port set -> set Port
}{ this = ports.component
   actions.Port in ports
   actions[Port] in ports }

abstract sig Port {
  component : one Component
}{ this in component.ports }

abstract sig Process extends Component {}

abstract sig Instrument extends Component {}

abstract sig PP extends Port {}
abstract sig IP extends Port {}
abstract sig EP extends Port {}
abstract sig SP extends Port {}

abstract sig V3VM extends Process {
  p1 : lone PP, p2 : lone PP,
  p3 : lone PP, p4 : lone EP
}{ p1 + p2 + p3 + p4 = ports
   actions = (p1->p2) + (p2->p1) + (p1->p3) + (p3->p1)
   lone ports & p1
   lone ports & p2
   lone ports & p3
   lone ports & p4 }
The constraint lone ports & p1, for example, states that the valve contains at most one port of the type p1. In the same way, we
modeled 30 other components.
4.2 Connector
The connectors are described by two abstract signatures: Connector and Role
(Listing 2). A connector (abstract sig Connector) is a set of roles. Each role
(abstract sig Role) is related to a single connector. The relation connected:
one Port describes the fact that each role is connected to a single port. The
different connectors in the P&ID are: process link (PL, in Listing 2), instrument
link (IL), electrical link (EL), or software link (SL).
Listing 2
abstract sig Connector {
  roles : set Role
}{ this = roles.connector }

abstract sig Role {
  connector : one Connector,
  connected : one Port
}{ this in connector.roles }

abstract sig PL extends Connector {}
4.3 Configuration
To model the style invariants, derived from the ANSI/ISA-5.1 [9] standard, we
use the expressions presented in Listing 4. The pred isTyped returns true if
an element e1 is of type e2, and false otherwise. The pred Attached returns
true if a role r is attached to port p by the relation connected. Finally, the
pred isCompatible determines if a connector c is compatible with the attached
port p.
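The bodies of these predicates are not reproduced in this excerpt; a minimal sketch of what they might look like, using the port and connector types of Sect. 4 (the concrete bodies and the connector-to-port mapping below are assumptions, not the authors' definitions):

pred isTyped [e1: univ, e2: set univ] { e1 in e2 }     -- e1 is of type e2
pred Attached [r: Role, p: Port] { r.connected = p }   -- role r is attached to port p
pred isCompatible [c: Connector, p: Port] {
  (c in PL implies p in PP) and (c in IL implies p in IP)
  and (c in EL implies p in EP) and (c in SL implies p in SP) }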
Different style invariants are modeled as a fact (each element of the Alloy
model must satisfy it) named StyleInvariants (Listing 4); these invariants
concern:
Listing 4
fact StyleInvariants {
  (1) all r: Role | some p: Port | Attached[r, p] &&
        isCompatible[r.connector, p]
  (2) all disj r1, r2: Role | (r1.connector = r2.connector)
        => (r1.connected.component != r2.connected.component)
  (3) all c: Component | !(c.ports.~connected = none)
  (4) all disj c1, c2: Connector |
        no (c1.roles.connected & c2.roles.connected) }
Fig. 2. Style consistency checking: (a) Alloy instance; (b) corresponding diagram
transformed into the Role. For example, in Listing 5, the process link connector
PL_3 is composed of the roles PL_3_1 and PL_3_2.
ExtremityToSignature. Each extremity is transformed into a role (extends
Role). It is composed of two fields: connector and connected (Listing 5). The
connector field defines the containing connector. The connected field deter-
mines the port to which the role is connected. In Listing 5, for example, the role
PL_3_1 is connected to the port p1 of the component V3VM1.
Listing 5
one sig EdS extends Configuration {}
{ components = V3VM1 + ...
  connectors = PL_3 + ... }

one sig PL_3 extends PL {}
{ PL_3_1 + PL_3_2 = roles }

one sig PL_3_1 extends Role {}
{ connector = PL_3
  connected = V3VM1.p1 }

one sig V3VM1 extends V3VM {}
{ p1 = V3VM1_P1
  p2 = V3VM1_P2
  p3 = V3VM1_P3 }

one sig V3VM1_P1 extends PP {}
{ component = V3VM1 }
6 Case Study
Our work is in the maritime field. Specifically, we examine the system of produc-
tion, storage, and distribution of fresh water called EdS (standing for Eau douce
Sanitaire in French, or sanitary freshwater in English). An extract of the P&ID
of this system is illustrated in Fig. 3. This diagram must meet the architectural
style defined above and several requirements that come from the specifications,
standards, business rules, etc. We followed the protocol described by Wohlin
et al. [23] to perform an exploratory study of this case and extract the
requirements that this system must meet.
and each step was validated by three professors and two industry experts. We
used an Excel document to identify and categorize requirements.
The information collected by the methods listed above is qualitative. The
different requirements are categorized and classified in the following sections.
pumps (H1, H2, H3). This functional requirement is shown in the P&ID (Fig. 3)
by a path (in bold) between the tank St1 and the tank St2 through the pump
H2. To achieve this, the designer sequenced the following components: the tank
St1, the pump H2, the chlorination unit TR1 used to treat water by chlorine, and
finally the tank St2. These components are isolated by motorized two-way valves
(V2VM02, V2VM05) and a motorized three-way valve (V3VM02) to facilitate
routing and maintenance. Check valves (Cl6, Cl8) are also used to prevent
the fluid from circulating against the flow direction.
To check that this diagram ensures the different functions, we used the
funs and preds in Listing 6. The function ComponentConnected returns the
tuples of components that communicate through the same connector. The pred
ExistPath returns true if a component destination dest can be reached from a
source component sc, passing through the pas components. The pred isTransfer deter-
mines if a path exists from a tank (St) source to a tank destination, passing by
the pump (HP).
Listing 6
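The body of Listing 6 is not reproduced in this excerpt; the following is only an illustrative sketch of such definitions, assuming that connectivity can be derived from the roles attached to component ports (the names match the description above, but the bodies are not the authors'):

fun ComponentConnected : Component -> Component {
  { c1, c2: Component | c1 != c2 and
      some k: Connector | (c1 + c2) in k.roles.connected.component } }

pred ExistPath [sc: Component, pas: set Component, dest: Component] {
  pas in sc.^(ComponentConnected)      -- the intermediate components are reachable from the source
  dest in pas.^(ComponentConnected) }  -- and the destination is reachable through them

pred isTransfer [src: set St, pump: set HP, dest: set St] {
  some s: src, d: dest | ExistPath [s, pump, d] }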
– Overall efficiency: for example, the flow in the pipe should not exceed 6 bars
or the production of 30 m3 of freshwater.
– Dependability: these requirements determine the necessary material redun-
dancies (3 pumps) and the operating time of components.
– Maintainability: each component must be preceded and followed by an isolat-
ing element (a valve) to facilitate maintenance.
– Safety: these requirements include the presence of check valves to prevent
flows in opposite directions (leading to a collision of flows). There must
never be a check valve upstream of a pump (see the sketch below).
pred IsolatedComponent {
  let valves = V2VM + V3VM | all s: TR + St + HP |
    some (IsolatedComponentUp[s] & valves) &&
    some (IsolatedComponentDown[s] & valves) }
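The IsolatedComponent predicate above encodes the maintainability requirement. The safety requirement (no check valve upstream of a pump) could be stated in the same style; a hedged sketch, assuming Cl denotes the check-valve signature and reusing the IsolatedComponentUp helper (whose exact semantics is not shown in this excerpt):

pred NoCheckValveUpstreamOfPump {
  all p: HP | no (IsolatedComponentUp[p] & Cl) }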
Compatibility. The EdS P&ID (Fig. 3) must be compatible with the style
defined in Sect. 4. To check the P&ID compatibility, we used the Alloy Analyzer
as a simulator. If the simulator finds a solution for the generated model, including
the library module (the style formalization), the diagram is compatible with the style.
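Concretely, this amounts to asking for an instance of the generated module; a minimal sketch (the generated module's header below is an assumption):

open library    -- import the architectural style of Sect. 4
-- ... generated signatures such as those in Listing 5 ...
run {} for 1    -- an instance found by the Analyzer means the P&ID is compatible with the style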
Correctness. The EdS system (Fig. 3) must meet its functional and non-
functional properties. We modeled these requirements as an assertion (Listing 8)
and checked for the existence of a counterexample with the command check
Correctness_Analysis for 1. The assertion (Correctness_Analysis) checks
that each component is isolated (IsolatedComponent) by the valves and that
the diagram ensures a transfer function (isTransfer[St2,HP,St1]) between
tanks St1 and St2.
Listing 8
assert Correctness_Analysis {
  IsolatedComponent && isTransfer[St1, HP, St2] }
check Correctness_Analysis for 1
Construct validity reflects the fact that the completed study must address
the problems identified. In this study, this threat may concern the interpretation
of the questions by subjects and researchers. To reduce this threat, we worked
with the technical terminology experts who had participated in the interviews.
Internal validity concerns the examination of causal relations in studies [21].
This study is explorative, hence less susceptible to this type of threat. Another
potential threat involves the small number of experts (five). To counter this
threat, we used semi-structured interviews to extract the maximum of data and
complemented them by archival data consisting of project documentation and
standards.
External validity refers to the generalization of the study findings to other
cases. We proposed an architectural style based on the ANSI/ISA-5.1 [9] stan-
dard. Hence, it can be used for all P&ID based on this standard. Another poten-
tial threat is that the interviews and the extraction data were performed by
the same researchers. To reduce the risk of bias, the interview questions were
reviewed and corrected by an independent expert. The data transcription and
extraction were also reviewed by this independent expert.
Reliability addresses the possibility that other researchers can replicate the
study. To facilitate this replication, we transcribed the interviews and placed the
complete project documentation on the company network.
7 Conclusion
In this paper, we proposed three contributions for the formal verification of
the physical architecture of an industrial process. This architecture is modeled
by P&ID, which captures the physical components and connectors constitut-
ing the system. First, we proposed the formalization, with Alloy language, of
an architectural style for the ANSI/ISA-5.1 [9] standard. Second, to facilitate
the use of formal methods in industry, we presented the MDE-based approach
to generate formal models, in Alloy, from the P&ID. The third contribution of
this paper includes the formal verification of the generated models. We verified
the compatibility of these models with the defined architectural style. We also
checked their completeness, consistency, and correctness with regard to the sys-
tem requirements. These contributions are illustrated through an industrial case
study: a system of production, storage, and distribution of freshwater on a ship.
For this case study, we carried out a survey with five experts to determine the
requirements to check. Then, we analyzed the P&ID of the EdS system.
In the near future, we intend to display the counterexample, returned by
the Alloy Analyzer, on the P&ID to facilitate the interpretation of errors by
the engineers. In complement of the presented work, we formalized the behavior
of the architectural elements [18,19] and we are now studying compositional
frameworks for verifying system behavioral properties driven by the formalized
architecture.
References
1. The alloy analyzer. http://alloy.mit.edu/
2. Allen, R.: A formal approach to software architecture. Ph.D. thesis, Carnegie
Mellon, School of Computer Science (1997)
3. Bignon, A., Rossi, A., Berruet, P.: An integrated design flow for the joint generation
of control and interfaces from a business model. Comput. Ind. 64(6), 634–649
(2013)
4. Brunel, J., Rioux, L., Paul, S., Faucogney, A., Vallée, F.: Formal safety and security
assessment of an avionic architecture with alloy. In: Proceedings 3rd International
Workshop on Engineering Safety and Security Systems, ESSS 2014, pp. 8–19 (2014)
5. Debruyne, V., Simonot-Lion, F., Trinquet, Y.: EAST-ADL an architecture descrip-
tion language. In: Michel, P., Vernadat, F. (eds.) WADL 2005. IFIP, vol. 176, pp.
181–195. Springer, Boston (2005)
6. Garis, A., Paiva, A.C.R., Cunha, A., Riesco, D.: Specifying UML protocol state
machines in alloy. In: Derrick, J., Gnesi, S., Latella, D., Treharne, H. (eds.) IFM
2012. LNCS, vol. 7321, pp. 312–326. Springer, Heidelberg (2012). doi:10.1007/
978-3-642-30729-4 22
7. Garlan, D., Monroe, R., Wile, D.: ACME: an architecture description interchange
language. In: CASCON First Decade High Impact Papers, pp. 159–173. IBM Corp.
(2010)
8. Haskins, C.: INCOSE Systems Engineering Handbook v. 3.2. International Council
on Systems Engineering, San Diego (2010)
9. ISA: 5.1 instrumentation symbols and identification (1992)
10. Jackson, D.: Alloy: a lightweight object modelling notation. ACM Trans. Softw.
Eng. Methodol. (TOSEM) 11(2), 256–290 (2002)
11. Jackson, D.: Software Abstractions: Logic, Language, and Analysis. MIT Press,
Cambridge (2012)
12. Jouault, F., Allilaire, F., Bézivin, J., Kurtev, I., Valduriez, P.: ATL: a QVT-like
transformation language. In: Companion to 21st ACM SIGPLAN Symposium on
Object-oriented Programming Systems, Languages, and Applications, OOPSLA
2006, pp. 719–720. ACM (2006)
13. Khoury, J., Abdallah, C.T., Heileman, G.L.: Towards formalizing network architec-
tural descriptions. In: Frappier, M., Glässer, U., Khurshid, S., Laleau, R., Reeves,
S. (eds.) ABZ 2010. LNCS, vol. 5977, pp. 132–145. Springer, Heidelberg (2010).
doi:10.1007/978-3-642-11811-1 11
14. Kim, J.S., Garlan, D.: Analyzing architectural styles. J. Syst. Softw. 83(7), 1216–
1235 (2010)
15. Krause, A., Obst, M., Urbas, L.: Extraction of safety relevant functions from CAE
data for evaluating the reliability of communications systems. In: Proceedings
of 2012 IEEE 17th International Conference on Emerging Technologies Factory
Automation (ETFA 2012), pp. 1–7 (2012)
16. McAvinew, T., Mulley, R.: Control System Documentation: Applying Symbols and
Identification. ISA-The Instrumentation, Systems, and Automation Society (2004)
17. Medvidovic, N., Taylor, R.N.: A classification and comparison framework for soft-
ware architecture description languages. IEEE Trans. Softw. Eng. 26(1), 70–93
(2000)
18. Mesli-Kesraoui, S., Bignon, A., Kesraoui, D., Toguyeni, A., Oquendo, F.,
Pascal, B.: Vérification formelle de chaı̂nes de contrôle-commande d’éléments de
conception standardisés. In: MOSIM 2016, 11ème Conférence Francophone de
Modélisation, Optimisation et Simulation (2016)
19. Mesli-Kesraoui, S., Toguyeni, A., Bignon, A., Oquendo, F., Kesraoui, D.,
Berruet, P.: Formal and joint verification of control programs and supervision inter-
faces for socio-technical systems components. In: IFAC HMS, pp. 1–6 (2016)
20. Oquendo, F.: π-ADL: an architecture description language based on the higher-
order typed π-calculus for specifying dynamic and mobile software architectures.
ACM SIGSOFT Softw. Eng. Notes 29, 1–14 (2004)
21. Runeson, P., Höst, M.: Guidelines for conducting and reporting case study research
in software engineering. Empirical Softw. Eng. 14(2), 131–164 (2009)
22. Taylor, R.N., Medvidovic, N., Dashofy, E.M.: Software Architecture: Foundations,
Theory, and Practice. Wiley, Hoboken (2009)
23. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Exper-
imentation in Software Engineering. Springer Science & Business Media, Berlin
(2012)
24. Wong, S., Sun, J., Warren, I., Sun, J.: A scalable approach to multi-style architec-
tural modeling and verification. In: 13th IEEE International Conference on Engi-
neering of Complex Computer Systems, ICECCS 2008, pp. 25–34 (2008)
25. Yang, S., Stursberg, O., Chung, P., Kowalewski, S.: Automatic safety analysis of
computer-controlled plants. Comput. Chem. Eng. 25(46), 913–922 (2001)
The Software Architect’s Role
and Concerns
Architects in Scrum: What Challenges
Do They Face?
Abstract. Context: Even though Scrum (the most popular agile software
development approach) does not consider architecting an explicit activity,
research and professional literature provide insights into how to approach
architecting in agile development projects. However, challenges faced by
architects in Scrum when performing tasks relevant to the architects’ role are
still unexplored. Objective: We aim at identifying challenges that architects face
in Scrum and how they tackle them. Method: We conducted a case study
involving interviews with architects from six Dutch companies. Results: Chal-
lenges faced by architects are mostly related to the autonomy of development
teams and expected competences of Product Owners. Conclusions: The results
presented in this paper help architects understand potential pitfalls that might
occur in Scrum and what they can do to mitigate or to avoid them.
1 Introduction
Agile software development has gained substantial popularity in recent years [1]. Even
though software architecture is considered crucial for software project success [2],
architecting activities and the role of software architects are not explicitly considered in
agile development methods [3]. Numerous publications have discussed the role of the
architecture in agile projects, and how software architecting could be approached in
agile projects [4]. However, there is currently little attention on the role of architects.
Few publications discuss the types of architects (e.g., generic types such as solution and
implementation architects [5], enterprise/domain and application architects [6]), and
their responsibilities and tasks in agile projects (e.g., [5, 7]).
In a development context where software architecting is not explicit, architects may
face particular challenges. However, few publications discuss such challenges. Faber
describes on a high level experiences from architecting and the role of architects in
agile projects at a specific company [7]: Architects should actively guide (but not
dominate) developers and be open to suggestions from the developers to deviate from
originally proposed design solutions. Woods reports that “difficulties frequently arise
when agile development teams and software architects work together” and proposes
general architecture practices (e.g., work in teams) that encourage collaborative
architecture work in agile development [8], but does not describe challenges faced by
architects. Martini et al. note that architecting in large agile projects is challenging and
propose a number of architectural roles to improve architecting practices [9].
In this paper, we identify challenges that architects face in Scrum when they
perform their tasks and how they address these challenges. Our study provides insights
for architects on how to improve their work in agile projects (e.g., by focusing their
attention on particular pitfalls) and will better prepare less experienced (or novice)
architects for challenges in agile projects. We focus on Scrum projects as Scrum is the
most often used agile development framework [1].
2 Background
The main roles in Scrum are the Scrum Master (SM), the team (including testers and
developers) and the Product Owner (PO). In Scrum, there is no explicit architect role
[10] and an architecture may emerge during a project rather than being designed
upfront. However, in many organizations that follow Scrum, architects create archi-
tecture designs and communicate their decisions to development teams [10]. The setup
in which architects are involved in agile projects (and Scrum) can differ [11]:
• In a “Team architect” setup, the architect is part of the development team [12]. If
there is no dedicated architecture expert, the role of the architect can be taken by the
whole team [13].
• In an “External architect” setup, the architect is not part of the agile team. He/she
might work with multiple agile teams and partner with other architects (for example,
as a project architecting team or as a member of an architecture board) [7].
• In a “Two architects” setup, there is an external and an internal architect. Abrahamsson
et al. define the types of architects as: architects who focus on big decisions, facing
requirements and acting as external coordinators, and architects who focus on internal
team communications, mentoring, troubleshooting and code [3].
3 Research Method
We conducted a multiple-case study with six cases. Our research is exploratory as we are looking into an unexplored
phenomenon [15]. Our unit of analysis is the architecting process in Scrum projects.
Our sampling method is quota sampling (because we included two cases for each of the
three setups described in Sect. 2) augmented with convenience sampling (because we
selected cases based on their accessibility) [16]. We selected representative projects
from six organizations from the Netherlands with established software development
practices.
Preparation for Data Collection: Data for each case was collected via
semi-structured interviews on-site and follow-up phone calls and e-mails. To avoid
terminology conflicts, we have selected interviewees based on their involvement in
architecting activities instead of solely focusing on the job title of individuals. We
asked questions about tasks which architects perform, using task descriptions in [17,
18] and questions based on the potential challenges that may occur in the setups as
described in Sect. 2. For each task, we asked if it was performed, if
challenges/problems were observed when performing it, and what was done to address
the challenge. The interview guideline can be found online1.
Analysis of Collected Data: The transcripts and the recordings were analysed and
information was clustered using open coding [19]. After initial coding, we looked at
groups of code phrases and merged them into concepts and related them to the research
questions. Codes and concepts emerged during the analysis. Since our data is context
sensitive, we performed iterative content analysis to make inferences from collected
data in its context [20]. Data were analysed by all authors.
4 Study Results
In Table 1, we provide an overview of the six cases. Next, we introduce the cases and
present the main challenges that we identified and their resolutions.
1 https://sites.google.com/site/samuilangelov/InterviewQuestions.docx.
4.1 Cases
Resolution in case 1: Architectural decisions and activities are presented by the team to
the PO in a simplified form. For gathering and providing architecting information, the
team communicates directly with external stakeholders. The PO accepts this “loss of
control” over communication and accepts architecting activities and decisions without
understanding them.
Resolution in cases 2, 4, 5, 6: The PO is not involved in the architectural discussions.
The architect (team) talks to other project stakeholders on architectural issues.
Challenge 2: PO lacks Scrum skills (reported in case 2): Insufficient competence of
POs regarding their responsibilities leads to incomplete information from stakeholders, and the
PO then provides incomplete information to the team.
Resolution in case 2: Some external stakeholders approach the team directly, cir-
cumventing the PO. The team architect also approaches external stakeholders (in-
cluding end users) and discusses functional and non-functional requirements with
them. Some POs are against architects approaching end users, and the architect has to
justify talking directly to end users.
Challenge 3: Unavailability of PO (reported in cases 2, 3, 4, 5): The PO is sometimes
unavailable and the architect cannot obtain and provide information from/to the PO.
Resolution in case 2, 4: The team discusses the problem of not being available with the
PO. If this does not help, the problem is escalated to higher management.
Resolution in case 3: The external architect is empowered to remove user stories from
the sprint backlog which the PO did not elaborate on in sufficient detail.
Resolution in case 5: To compensate for the PO's unavailability, the architect com-
municates with other stakeholders for providing and getting relevant information.
Challenge 1: External architect(s) and team cannot agree (reported in cases 3, 5, and
6): Architects face challenges in conveying their ideas to the team. Sometimes, the
team and architect disagree.
Resolution in case 3: The external and team architects discuss and agree on the
architectural decisions. Then, the external architect explains the reasoning behind
architectural decisions to the team at the beginning of each sprint. During demos he
reviews whether architectural directions are followed.
Resolution in case 5: The architect offers an architecture to the team leader. The
architect and team leader together introduce the architecture to the team. If the architect
and team disagree, the dispute is escalated to external managers.
Resolution in case 6: The architect tries to explain his choices to the team in a cour-
teous way. The architect would try to use the already convinced team members to
influence the rest of the team. In cases where parts of the team still disagree, the
architect has to escalate the problem to external managers. On one occasion the
architect disagreed with the manager’s decision and left the project.
Challenge 2: External architect(s) cannot easily reach team members (reported in
case 3): Reaching all team members is difficult for the external architect, as the team
members are spread across multiple locations around the world.
Resolution in case 3: The external architect still tries to meet them physically 1–2 times
a year. This results in additional effort (e.g., travelling) and time required.
Challenge 3: External architect(s) provide insufficient input to the team (reported in
cases 4 and 5): The team has to align its decisions with the external architect(s) but does
not receive enough input or guidance during the project.
Resolution in case 4: A developer from the architectural team joins the Scrum team
during the first months of a project.
Resolution in case 5: The external architect talks to the team leader as a first point of
communication about architectural issues. He is “trying to make the team leader a sort
of architect in the team”.
Challenge 4: Teams struggle to provide documentation to external architect(s) (re-
ported in case 4): The architect expects documentation from the team. This often
conflicts with the team’s perception of agile practices.
Resolution in case 3: The team architects are encouraged to communicate with each
other on architectural decisions that span across teams and impact multiple teams.
4.6 Discussion
In all cases, the POs were reported to lack certain competences. In cases 1, 2, 4, 5, 6
they lack technical and architecting knowledge. POs performed their core activities
insufficiently in four of the cases: insufficient communication with external stake-
holders in case 2 and insufficient time for the team in cases 2, 3, 4, 5. Resolving these
challenges causes overhead (time, effort) for architects in reaching external stake-
holders and excluding the PO from architectural decisions in cases 2, 4, 5, 6.
In cases 1, 3, 4, 5, and 6, architects face conflicting situations with respect to
architectural decisions. Cases 3, 5, and 6 report disagreements between external
architects and teams. In cases 1 and 4 disagreements between management and teams
occurred. Resolutions to disagreements between external and team architects are about
getting buy-in from teams and team leaders or escalating to managers. To mitigate
problems caused by interfering management in case 4, an isolated (from management)
architecture prototyping project is started prior to the actual project. Challenges related
to architectural decisions were reported in all cases except case 2. This could be because the
organization of case 2 strictly follows Scrum practices (advocating team autonomy and
team architecting).
External architects fail to provide sufficient information to external stakeholders
about architectural decisions made (cases 3, 4) and to teams (cases 4, 5). This is
unexpected as this is one of the reasons for establishing an external architect. A pos-
sible explanation can be their high and diverse workloads (reported in case 5), geo-
graphical distance between a team and external architect (reported in case 3), or lack of
understanding for the value of architecting at external stakeholders (cases 3 and 5).
analytical generalization (i.e., our results are generalizable to other organizations that
have similar characteristics as the cases in our case study and use Scrum). The list of
challenges is based on six cases, which is insufficient for drawing major conclusions. How-
ever, the presented study is a first of its kind. With regards to reliability, we recorded
interviews and interview data, and reviewed data collection and analysis procedures
before conducting the study. Our study does not make any claims about causal rela-
tionships and therefore internal validity is not a concern.
5 Conclusions
We studied six cases involving companies that apply Scrum practices to identify
challenges that architects face in Scrum projects and what architects do about these
challenges. The cases were chosen to cover different setups of how architects can be
involved in Scrum. Main challenges found in the cases are (a) busy and incompetent
POs, (b) conflicts between architects and teams, and architects and management,
(c) failure of architects outside the Scrum teams to provide sufficient information to
stakeholders. The challenges reported in this paper increase architects’ awareness and
can be used to proactively address potential problems. As further research, we plan to
extend the number of cases and provide more general conclusions.
References
1. VersionOne: 9th Annual State of Agile Survey (2015)
2. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Addison-Wesley
Professional, Boston (2012)
3. Abrahamsson, P., Babar, M.A., Kruchten, P.: Agility and architecture: can they coexist?
IEEE Softw. 27, 16–22 (2010)
4. Yang, C., Liang, P., Avgeriou, P.: A systematic mapping study on the combination of
software architecture and agile development. J. Syst. Softw. 111, 157–184 (2016)
5. Babar, M.A.: An exploratory study of architectural practices and challenges in using agile
software development approaches. In: Joint Working IEEE/IFIP Conference on Software
Architecture, 2009 and European Conference on Software Architecture, WICSA/ECSA
2009, pp. 81–90 (2009)
6. van der Ven, J.S., Bosch, J.: Architecture decisions: who, how, and when? In: Babar, M.A.,
Brown, A.W., Mistrik, I. (eds.) Agile Software Architecture, chap. 5, pp. 113–136. Morgan
Kaufmann, Boston (2014)
7. Faber, R.: Architects as service providers. IEEE Softw. 27, 33–40 (2010)
8. Woods, E.: Aligning architecture work with agile teams. IEEE Softw. 32, 24–26 (2015)
9. Martini, A., Pareto, L., Bosch, J.: Towards introducing agile architecting in large companies:
the CAFFEA framework. In: Lassenius, C., Dingsøyr, T., Paasivaara, M. (eds.) XP 2015.
LNBIP, vol. 212, pp. 218–223. Springer, Heidelberg (2015). doi:10.1007/978-3-319-18612-
2_20
10. Eloranta, V.-P., Koskimies, K.: Lightweight architecture knowledge management for agile
software development. In: Babar, M.A., Brown, A.W., Mistrik, I. (eds.) Agile Software
Architecture, chap. 8, pp. 189–213. Morgan Kaufmann, Boston (2014)
11. Rost, D., Weitzel, B., Naab, M., Lenhart, T., Schmitt, H.: Distilling best practices for agile
development from architecture methodology. In: Weyns, D., Mirandola, R., Crnkovic, I.
(eds.) ECSA 2015. LNCS, vol. 9278, pp. 259–267. Springer, Heidelberg (2015). doi:10.
1007/978-3-319-23727-5_21
12. Schwaber, K., Beedle, M.: Agile Software Development with Scrum. Prentice Hall, Upper
Saddle River (2002)
13. Beck, K.: Extreme Programming Explained (1999)
14. Fowler, M.: Who needs an architect? IEEE Softw. 20, 11–13 (2003)
15. Runeson, P., Höst, M.: Guidelines for conducting and reporting case study research in
software engineering. Empirical Softw. Eng. 14, 131–164 (2008)
16. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimen-
tation in Software Engineering. Springer, Heidelberg (2012)
17. Kitchenham, B., Pfleeger, S.L.: Principles of survey research: part 5: populations and
samples. SIGSOFT Softw. Eng. Notes 27, 17–20 (2002)
18. Kruchten, P.: What do software architects really do? J. Syst. Softw. 81(12), 2413–2416
(2008)
19. Miles, M., Huberman, M., Saldana, J.: Qualitative Data Analysis. Sage Publications,
Thousand Oaks (2014)
20. Krippendorff, K.: Content Analysis: An Introduction to its Methodology. Sage Publications,
Thousand Oaks (2003)
An Empirical Study on Collaborative
Architecture Decision Making
in Software Teams
1 Introduction
Software architecture serves as the intellectual centrepiece that not only governs
software development and evolution but also determines the overall characteristics of
the resulting software system [1]. It provides support for various aspects of software
system development by facilitating functions such as enabling the main quality attri-
butes of the system, managing changes, enhancing communication among the system
stakeholders and improving cost and schedule estimates [2]. Architecture decisions
stand out from the rest because they dictate all downstream design choices; thus, they
have far-reaching consequences and are hard to change [3]. Making the right archi-
tecture decisions, understanding their rationale and interpreting them correctly during
software system development are essential to building a system that satisfies stake-
holder expectations. As the system evolves, making new architecture decisions and
removing obsolete ones to satisfy changing requirements while maintaining harmony
with the existing decisions are crucial to keeping the system on course [4].
2 Background
Although the importance of architecture decisions has long been recognised, they only
began to gain prominence in software architecture about a decade ago [4]. Since then,
architecture decisions and the rationale behind them have been considered first-class
entities. Reasons such as dependencies between decisions, considerable business impact,
possible negative consequences and a large amount of effort required for analysing
alternatives are also recognised as factors contributing to the difficulty of architectural
decisions [8]. Due to the importance and complexity of architecture decision making, the
research community has given considerable attention to the topic, and a number of
techniques, tools and processes have been proposed to assist in different phases of the
architecture decision-making process [2]. Even though some attempts have been made to
develop group decision-making (GDM) solutions for architecture decision making, most of
the solutions, including the most widely used ones, are not developed from a GDM perspective [11].
The groups can choose different decision-making methods such as consensus
decision making, majority rule, decisions by an internal expert and decisions by an
external expert, to reach a decision [12]. Based on the interaction between the team
leader and the team, the decision styles in teams can also be classified into many
different categories [13–15]. GDM has advantages such as increased acceptance, a
large amount of collective knowledge, multiple approaches provided by the different
perspectives and better comprehension of the problem and the solution [16]. At the
same time, there are also some weaknesses that undermine the use of GDM in cer-
tain situations. Liabilities such as high time and resource consumption, vulner-
ability to social pressure, possible individual domination, and the pursuit of conflicting
secondary goals can result in low-quality, compromised solutions [16]. One of the
major weaknesses of GDM is groupthink [17], where the group makes faulty decisions
without exploring the solutions objectively because of the social pressure to reach a
consensus and maintain the group solidarity.
3 Case Study
In this research, the case study approach was selected for two main reasons. First, the
case study is recommended for the investigation of a phenomenon when the current
perspectives seem inadequate because they have little empirical evidence [18].
Although generic GDM is a well-researched area, few empirical studies have been
conducted on GDM in software architecture. Second, in the case of decision making, the
context in which the decision is made is essential to understanding the decision fully
[19]. Since the case study allows us to study a phenomenon in its natural setting, it
makes it possible to gather insights about the phenomenon itself as well as
its interactions with its surroundings.
titles, all of them perform duties as software architects in their respective teams. The
software architects are located in three different sites: the headquarters (HQ) and two
development centres (DC1 and DC2).
A set of questions divided into different themes was used to guide the interviews.
Each interview began with questions related to the context and then gradually focused
on software architecture and architecture decision making. The interview questions
later addressed the challenges faced and the possible solutions to these challenges.
The interviews were conducted by two researchers. Most of the interviews were carried
out face to face on site. Skype was used for three interviews due to travelling and
scheduling issues. Each interview lasted about 1.5 h. All interviews were recorded with
the consent of the interviewees.
It is clear that most of the software teams in the company follow GDM to make
architecture decisions. The decision-making process appears to be informal. However,
each team has some form of structured decision-making practice, as all the intervie-
wees were able to describe it during the interviews. The software architecture
decision-making process in the case company is mainly a two-fold process composed
of team-level and organisational-level decision making. In addition to that, there is also
individual-level decision making, as each decision-maker makes individual decisions
while participating in team-level or organisational-level decision-making sessions. Even
though software teams have the freedom to make architecture decisions regarding their
own software components, architecture steering groups and the TC are involved in
making high-impact decisions that can affect other teams or the company’s business
performance.
Architecture decision-making styles in each software team are based on the pref-
erences of the software architect and the team members. However, all the interviewees
made it clear that they selected the decision-making style based on the context, since
there is no “one size fits all” kind of solution. Meanwhile, the decisions related to tasks
that have an impact beyond the scope of the team are escalated to the architecture
steering groups or the TC. Figure 1 shows the most commonly used architecture
decision-making style of each team. According to that, consultative decision-making is
the most commonly used decision-making style; 8 teams (53 %) claimed to use that as
their primary decision-making style. One notable fact brought up during the
interviews was that the majority of those following the consultative decision-making
style are willing to reach consensus during the consultation process if possible. However,
they keep consultative decision-making as the primary decision-making approach, as it
allows them to avoid deadlocks and make timely decisions as the projects demand.
The interviewees provided arguments for choosing and not choosing each
decision-making style. The arguments in favour of collaborative decision-making styles
are that they increase team motivation, promote continuous knowledge sharing and
identify team members who have expertise in the problem domain. The main arguments
against these styles are that they are time-consuming and that it is difficult for team
members to come to an agreement. Clarity of responsibility and saving time and money
were given as reasons for using architect-driven decision-making styles. Others claimed
that architecture decision making is too complex to be handled by one person. It can
Fig. 1. The most commonly used architecture decision-making style of each team: Consultative
(53 %), Consensus (20 %), Persuasive (13 %), Authoritative (7 %), Delegative (7 %).
limit the creativity of the solutions and introduce bias into the decisions, since all the
interviewees use personal characteristics such as experience and intuition for individual-
level decision making. The only reason given for opting for the delegative decision-making
style is the architect’s unwillingness to take responsibility for the design process.
The consultative decision-making style, which is preferred by the majority, brings
the right balance into the decision-making process, as it allows the software teams to
make decisions promptly while taking the opinions of the team members into con-
sideration. This style makes it easier to attribute a certain decision to the
decision-maker, hence maintaining the design rationale to some extent. The consul-
tation process also helps to share information and spread knowledge within the
team. Since the majority of those who use consultative decision-making are open to
reaching consensus during the consultation, there is a possibility of making collective
decisions when there are no demanding constraints. Eleven out of fifteen software teams
use either the consultative or the consensus decision-making style; thus it is possible to
claim that the collaborative way of decision making has a strong presence in the case company.
Despite the availability of various architecture decision-making techniques, none of
the teams use any standard technique to make architecture decisions. Although a few
teams use software tools to create diagrams that can be used for decision making and
communication, the whiteboard is the standard tool for architecture decision making in
the case company. Despite being an external entity, the majority of interviewees view
architecture steering groups as useful bodies that support them in decision making. One
of the main reasons given for this view is that these groups support the teams by
reducing the complexity of the decision problem. Most of the time, software teams or
their representatives take the initiative to consult the steering group. That can also have
an impact on the teams’ view on steering groups, as consulting the steering group is
voluntary rather than forced upon the team.
required for informed decision making, including considering all possible alternatives,
evaluating risks, examining decision objectives and seeking information related to the
decision problem [17]. Based on the origin, the challenges are classified into three
different groups: organisational, process and human. Table 3 shows identified chal-
lenges and their impact on architecture decision-making.
6 Conclusion
The study revealed that the majority of software teams in the company use a consul-
tative decision-making approach to make architecture decisions. We were able to
identify challenges related to three different aspects (organisational, process and
human) and their impact on architecture decision making. While discussing the overall
results, we also uncovered the existence of groupthink, which is known to influence group
decision-making activities. The next logical step is to identify the relationship between
the type of architecture decisions and the decision-making style followed. Identifying
decision-making patterns that should be applied in different contexts will help software
architects and teams select the best possible course of action to make their decisions.
We are currently planning to cross-analyse our previous case study findings [20] with
the findings of this study to assess the generalisability.
Acknowledgements. This research is funded by ITEA2 and Tekes, the Finnish Funding Agency
for Innovation, via the MERgE project, which we gratefully acknowledge. We would also like to
thank all the interviewees and the management of the case company.
References
1. Taylor, R.N., Medvidovic, N., Dashofy, E.M.: Software Architecture: Foundations, Theory,
and Practice. Wiley, Hoboken (2010)
2. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. Addison-Wesley
Professional, Reading (2012)
3. Clements, P.: A survey of architecture description languages. In: Proceedings of 8th
International Workshop on Software Specification and Design, pp. 16–25 (1996)
4. Jansen, A., Bosch, J.: Software architecture as a set of architectural design decisions. In: 5th
Working IEEE/IFIP Conference on Software Architecture (WICSA 2005). pp. 109–120
(2005)
5. Kruchten, P.: Software architecture and agile software development: a clash of two
cultures? In: 2010 ACM/IEEE 32nd International Conference Software Engineering, vol. 2,
pp. 497–498 (2010)
6. Shore, J.: Continuous design. IEEE Softw. 21, 20–22 (2004)
7. Abrahamsson, P., Ali Babar, M., Kruchten, P.: Agility and architecture: can they coexist?
IEEE Softw. 27, 16–22 (2010)
8. Tofan, D., Galster, M., Avgeriou, P.: Difficulty of architectural decisions – a survey with
professional architects. In: Drira, K. (ed.) ECSA 2013. LNCS, vol. 7957, pp. 192–199.
Springer, Heidelberg (2013)
9. Rekha, V.S., Muccini, H.: A study on group decision-making in software architecture. In:
2014 IEEE/IFIP Conference on Software Architecture, pp. 185–194 (2014)
10. Tofan, D., Galster, M., Lytra, I., Avgeriou, P., Zdun, U., Fouche, M.-A., de Boer, R., Solms, F.:
Empirical evaluation of a process to increase consensus in group architectural decision making.
Inf. Softw. Technol. 72, 31–47 (2016)
11. Rekha V.S., Muccini, H.: Suitability of software architecture decision making methods for
group decisions. In: Avgeriou, P., Zdun, U. (eds.) ECSA 2014. LNCS, vol. 8627, pp. 17–32.
Springer, Heidelberg (2014)
12. Beebe, S.A., Masterson, J.T.: Communication in Small Groups: Principles and Practice.
Pearson Education Inc., New York (2009)
13. Hersey, P., Blanchard, K.H., Johnson, D.E.: Management of Organizational Behavior.
Pearson, New York (2012)
14. Tannenbaum, R., Schmidt, W.H.: How to choose a leadership pattern. Harv. Bus. Rev. 36,
95–101 (1958)
15. Stewart, L.P., Gudykunst, W.B., Ting-Toomey, S., Nishida, T.: The effects of
decision-making style on openness and satisfaction within Japanese organizations. Commun.
Monogr. 53, 236–251 (1986)
16. Schachter, S., Singer, J.E.: Assets and liabilities in group problem solving: the need for an
integrative function. Psychol. Rev. 74, 239–249 (1967)
17. Janis, I.L.: Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Cengage
Learning, Boston (1982)
18. Eisenhardt, K.M.: Building theories from case study research. Acad. Manag. Rev. 14, 532–
550 (1989)
19. Fantino, E., Stolarz-Fantino, S.: Decision-making: context matters. Behav. Process. 69, 165–
171 (2005)
20. Dasanayake, S., Markkula, J., Aaramaa, S., Oivo, M.: Software architecture decision-making
practices and challenges: an industrial case study. In: Proceedings of 24th Australasian
Software Engineering Conference (2015)
Architecture Enforcement Concerns
and Activities - An Expert Study
1 Introduction
Software architecture [1] captures the high-level design of a software
system and provides the basis for its implementation. It defines the fundamental
rules and guidelines that developers have to follow to ensure that quality
attributes such as performance or security are achieved.
In the software engineering literature and community, the role of the architect is
widely discussed, especially in the context of agile development processes. For
example, McBride [14] defined the role of the architect as being “responsible
for the design and technological decisions in the software development process”.
However, the software architect role [7,12] is not only limited to making architec-
ture design decisions [10]. Additionally, the software architect is also responsible
for “sharing the results of the decision making with the stakeholders and the
project team, and getting them accepted” [23]. This task is called Architecture
Enforcement.
– RQ1: What are the concerns which architects consider during the
enforcement process? With this question we investigate which categories
of concerns software architects usually consider important. This will give us
further directions for our research activities in terms of detection and pri-
oritization of architectural violations concerning decisions that are especially
important for software practitioners.
– RQ2: What are the activities performed by the architects in order
to enforce and validate those concerns? The answer to this question
gives us a basis for developing appropriate approaches that best integrate
with methods that are currently used in practice in order to gain acceptance
by practitioners for new approaches.
In this section we present topics that are related to our study. We first present
related work concerning architecture decision enforcement. After that we present
related studies that investigate the concerns and activities of software architects
and discuss to which degree those studies consider architecture enforcement.
In [5] the authors conducted an empirical study about architects’ concerns.
They present some interesting findings. For example, they found that “People
quality is as important as structure quality”. This is also confirmed by our study,
but we investigate in more detail what the actual dimensions
of “people quality” are. Additionally, the authors’ understanding of “architects’
concerns” is somewhat more general than ours. While they regard all the
phases (i.e. architecture analysis, evaluation, architecture design, realization, etc.)
of the software engineering process as architects’ concerns, we focus
specifically on the concerns that architects have with respect to the architecture
enforcement process.
The study of Caracciolo et al. [4] is similar to ours. They investigated how
quality attributes are specified and validated by software architects and thus
also what the important concerns for architects are in terms of
quality attributes. They also conducted expert interviews as part of their study.
They identified several quality attributes that are important to software archi-
tects. However, they concentrate solely on quality attributes and do
not specifically address architecture enforcement and the activities by which it is
achieved.
3 Study Design
In our study we followed a qualitative research approach by applying a process
with two main phases: Practitioners Interviews and Literature Categories’ Inte-
gration. The main purpose of the first phase is to explore the important aspects
in the current state of the practice regarding architecture enforcement, while
the second phase complements and relates the interviews’ findings with existing
concepts from the current state of the art. The two phases will be explained in
the following sub-sections.
Table 1. List of study participants, their domain and their years of experience.
Data Analysis Phase. For further analyses, all interviews were transcribed
word for word. After transcribing the interviews and checking them for cor-
rectness and completeness, we followed an inductive method for data analysis.
Instead of defining codes before analyzing the interviews, we let the categories
emerge directly from the data. For this, we first adapted Open Coding [19]. In
this step, phenomena in the data are identified and labeled using a code that
1 http://swk-www.informatik.uni-hamburg.de/~schroeder/ECSA2016/.
summarizes the meaning of the data. During this process, emerging codes are
compared with earlier ones in order to find similarities and possibly merge
similar codes. Then we compared the codes with each other and, where possible,
aggregated them into higher-level categories. We used AtlasTi 2 to support
the coding process.
4 Results
Because of space limitations we discuss only the most interesting aspects in more
detail. The complete discussion with the data used is provided as supplementary
material and can be accessed through the paper’s website (see footnote 1).
important how you regard it. For me there do exist basically two views about
how software is built. First you have the global view [. . . ] There I decide how
I design my software, for example using Domain Oriented Design or SOA.”
(code: two different views of architecture, Participant D) and another partici-
pant reported: “. . . then we have the micro architecture, this is the architecture
within each team. A team can decide for its own component for which it is respon-
sible which libraries it wants to use.” (code: two different views of architecture,
macro architecture, micro architecture; Participant K). Those two views define
what architects basically consider important for architecture enforcement in
different ways. The architects report being concerned with macro-architecture
issues and consider the micro architecture the developers’ responsibility, except
for coding style, because of its relevance for maintainability: “. . . architecture is
also present in a single code statement. Code styles belong to it. Or simple things
like how do I define an interface. . . ” (code: micro architecture, Participant J).
Fig. 1. Overview of the identified categories of Enforcement Concerns from the inter-
views and the corresponding participant. Concerns marked with an asterisk are not
explained in detail in this paper but are available in the supplementary on the paper’s
website (see footnote 1).
Patterns. The architect may want to ensure that patterns are implemented
as intended. Patterns related to the macro-architecture view have to be
enforced and validated, while patterns on the micro level are mostly considered
a developers’ concern. But sometimes, pattern implementations are also checked
by architects on the micro-architecture level, e.g. in order to discover which types
of design and architecture patterns are implemented and whether they fit the spe-
cific context: “which patterns are used and in which context. Are they only used
just because I have seen it in a book or because I wanted to try it or is it really
reasonable at this place. . . ” (code: pattern suitability, Participant C).
5 Discussion
In this section we discuss the results of the study and, additionally, impor-
tant implications of these results for future approaches to architecture
enforcement.
5.1 Limitations
References
1. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice, 3rd edn.
Addison-Wesley Professional, Boston (2012)
2. Brunet, J., Serey, D., Figueiredo, J.: Structural conformance checking with design
tests: an evaluation of usability and scalability. In: 27th IEEE International Con-
ference on Software Maintenance (ICSM), pp. 143–152. IEEE Computer Society,
Washington, DC, September 2011
3. Caracciolo, A., Lungu, M.F., Nierstrasz, O.: A unified approach to architecture
conformance checking. In: 12th Working IEEE/IFIP Conference on Software Archi-
tecture (WICSA), pp. 41–50. IEEE Computer Society, Washington, DC, May 2015
4. Caracciolo, A., Lungu, M.F., Nierstrasz, O.: How do software architects spec-
ify and validate quality requirements? In: Avgeriou, P., Zdun, U. (eds.) ECSA
2014. LNCS, vol. 8627, pp. 374–389. Springer, Switzerland (2014). doi:10.1007/
978-3-319-09970-5 32
5. Christensen, H.B., Hansen, K.M., Schougaard, K.R.: An empirical study of software
architects’ concerns. In: 16th Asia-Pacific Software Engineering Conference, pp.
111–118. IEEE Computer Society, Washington, DC, December 2009
6. Fairbanks, G.: Just Enough Software Architecture: A Risk-Driven Approach. Mar-
shall & Brainerd, Boulder (2010)
7. Fowler, M.: Who needs an architect? IEEE Softw. 20(5), 11–13 (2003)
8. Gasson, S.: Rigor in grounded theory research: an interpretive perspective on gen-
erating theory from qualitative field studies. In: The Handbook of Information
Systems Research, pp. 79–102 (2004)
9. Hove, S.E., Anda, B.: Experiences from conducting semi-structured interviews in
empirical software engineering research. In: 11th IEEE International Software Met-
rics Symposium (METRICS 2005), pp. 10–23, September 2005
10. Jansen, A., Bosch, J.: Software architecture as a set of architectural design deci-
sions. In: 5th Working IEEE/IFIP Conference on Software Architecture, WICSA
2005, pp. 109–120. IEEE Computer Society, Washington, DC (2005)
11. Jansen, A., van der Ven, J., Avgeriou, P., Hammer, D.K.: Tool support for archi-
tectural decisions. In: Sixth Working IEEE/IFIP Conference on Software Archi-
tecture, WICSA 2007, p. 4. IEEE Computer Society, Washington, DC (2007)
12. Kruchten, P.: What do software architects really do? J. Syst. Softw. 81(12), 2413–
2416 (2008)
13. Malavolta, I., Lago, P., Muccini, H., Pelliccione, P., Tang, A.: What industry needs
from architectural languages: a survey. IEEE Trans. Softw. Eng. 39(6), 869–891
(2013)
14. McBride, M.R.: The software architect. Commun. ACM 50(5), 75–81 (2007)
15. Mirakhorli, M., Cleland-Huang, J.: Detecting, tracing, and monitoring architec-
tural tactics in code. IEEE Trans. Softw. Eng. 42(3), 205–220 (2016)
16. Murphy, G.C., Notkin, D., Sullivan, K.: Software reflexion models: bridging the gap
between source and high-level models. In: Proceedings of the 3rd ACM SIGSOFT
Symposium on Foundations of Software Engineering, SIGSOFT 1995, pp. 18–28.
ACM, New York (1995)
17. Perry, D.E., Wolf, A.L.: Foundations for the study of software architecture. SIG-
SOFT Softw. Eng. Notes 17(4), 40–52 (1992)
18. Sangal, N., Jordan, E., Sinha, V., Jackson, D.: Using dependency models to manage
complex software architecture. In: Proceedings of the 20th Annual ACM SIGPLAN
Conference on Object-Oriented Programming, Systems, Languages, and Applica-
tions, pp. 167–176. ACM, New York (2005)
19. Strauss, A., Corbin, J., et al.: Basics of Qualitative Research, vol. 15. Sage,
Newbury Park (1990)
20. Taylor, R.N., Medvidovic, N., Dashofy, E.M.: Software Architecture: Foundations,
Theory, and Practice. Wiley Publishing, Chichester (2009)
21. Terra, R., Valente, M.T.: A dependency constraint language to manage object-
oriented software architectures. Softw. Pract. Exper. 39(12), 1073–1094 (2009)
22. Vogel, O., Arnold, I., Chughtai, A., Kehrer, T.: Software Architecture: A Compre-
hensive Framework and Guide for Practitioners. Springer, Heidelberg (2011)
23. Zimmermann, O., Gschwind, T., Küster, J., Leymann, F., Schuster, N.: Reusable
architectural decision models for enterprise application development. In: Overhage,
S., Szyperski, C.A., Reussner, R., Stafford, J.A. (eds.) QoSA 2007. LNCS, vol. 4880,
pp. 15–32. Springer, Heidelberg (2007). doi:10.1007/978-3-540-77619-2 2
Software Architectures for Web
and Mobile Systems
The Disappearance of Technical Specifications
in Web and Mobile Applications
A Survey Among Professionals
1 Introduction
In recent years, we have been observing a paradigm shift in the software engineer-
ing community. Professional software development projects traditionally relied
on upfront planning and design, distinct software phases and often a clear sepa-
ration of roles and responsibilities within project teams. Ever-growing time-to-
market constraints, however, lead to high innovation pressure, which has brought
forth methods and techniques like agile project management, continuous deliv-
ery, and DevOps that break with the traditional way of approaching software
projects. Apart from this, primarily for web and mobile application development,
developers now need to deal with an increasingly heterogeneous tool and lan-
guage stack. In that domain in particular, software is often designed ad hoc, or at
best by drawing informal sketches on a whiteboard while discussing with peers.
The reasons for this are manifold: where applications must be developed and
rolled out quickly, designers do not seem to see the value of spending much time
on modeling and documenting solutions. A second reason is that software engi-
neering has no guidelines for efficient modeling of heterogeneous multi-paradigm
applications. This is partly due to the fact that software engineering curricula
at universities are still heavily focused on traditional object-oriented analysis
and design using UML, which is not a good fit for applications that are not
2 Study Design
In this section, we present our research questions and the study design.
RQ2 originates from the conjecture that development teams partially reuse
architectural design from previous projects. Architectural reuse improves devel-
opment efficiency, which contributes to fast time-to-market of features. Further-
more, reuse is a means for risk mitigation [1]. Here, we want to find out which
parts of an architecture are typically re-used.
The third question concerns the timespan developers expect their applica-
tions to reside on the market before they are discarded or subjected to a major
re-engineering. This is relevant because the cost-effectiveness of documentation
effort is proportional to the expected lifetime of an application.
2.2 Methodology
A survey was conducted to collect data for our research questions. We chose to
use a web-based questionnaire over individual interviews, because we wanted to
reach a sufficiently large subject population. Questionnaire-based surveys addi-
tionally exhibit a higher degree of external validity than interviews [2]. When
used with closed-ended questions and fixed response options, data gathered with
questionnaires can easily be processed automatically. This is in contrast to inter-
views, which have high costs in terms of time per interview, traveling and process-
ing the results. On the other hand, interviews provide greater flexibility and
allow for more in-depth exploration of the respondents’ answers. As described
in Sect. 5, we plan to conduct interviews with a small number of the subjects at
a later stage to get more in-depth insight into the phenomena we observe through
the questionnaire.
We conducted a pilot study with three members from the target population
to improve the wording, order and answering options of the questions.
study. Our study database, which contains the questionnaire and all responses,
can be found on http://2question.com/q1q3/.
The questionnaire has four sections. The first section (A) includes questions
about the organization, role of the respondent and previous experience. The
second section (B) concerns the applications developed. The third section (C)
addresses the design, development and maintenance. The last section (D) con-
cerns priorities regarding software development and software maintenance.
In the remainder of this section, we present the results and most relevant
answers for every research question. Additionally, we discuss the results of supporting
and control questions. The section ends with an interpretation of the
results, a discussion, and the expected and remarkable outcomes.
3.1 Analysis RQ1: How are Web and Mobile Applications Designed
and How is Design Knowledge About These Applications
Preserved in the Industry?
The questions most relevant to RQ1 are “What types of tools do you use dur-
ing your design process?” (C3) and “How do you ensure that knowledge about
features, implementations, design decisions etc. is maintained?” (C5).
In question C3, we asked participants to specify the tools2 used for design, the
time they spend on each of these tools (as a percentage of the total time spent on
design activities), and the quantity of results (number of occurrences or number
of produced deliverables). The design approaches on which participants spend most
time are “Experimenting, building proofs of concept” (26 %), “Documented
concepts in written language like Word documents” (22 %) and “Sketches like
annotated block/line diagrams” (19 %). With 11 % of total design time, technical
documentation (e.g. UML, SysML, ERD, Database models) received the lowest
score. In terms of quantity, the top three answers were “Verbal communication”
(14), “Sketches like annotated block/line diagrams” (6) and “Experimenting,
building proofs of concept” (3).
For knowledge preservation (question C5), the top three methods used in
terms of spent time (percentage of overall time spent on knowledge preservation)
are “Documented concepts in written language like Word documents” (26 %),
“Documented code (with tools like JavaDoc, JSDoc or no tools)” (26 %) and
“Verbal communication” (17 %).
Additionally, we asked participants about their top three priorities dur-
ing software development (question D1) and software maintenance (D2). Dur-
ing development time, the top three priorities are “Quality” (7.2 %), “(Func-
tional) requirements” and “time-to-market” (both 6.6 %), and “Maintainabil-
ity” (4.8 %). During maintenance, the top three priorities are “Documentation”
(18 %), “Code quality” (17 %) and both “Architecture” and “Maintainability”
(6.8 %).
2 The term tool is used in a wide sense here, covering among others UML, free text,
but also conversations and informal whiteboard sketches.
less time on technical documentation, because they are reluctant to spend time
on non-engineering activities, i.e. activities that are not an integral part of the build
process.
In question C5, we assume a typical division between development and main-
tenance, in which developers in a project are not responsible for deployment
and maintenance of applications. In this scenario, documentation is crucial for
deployment and maintenance, as well as for managing responsibilities [5]. How-
ever, most participants chose “Verbal communication” as the primary method
for handing over the code to other team members. In discussions with software
engineers in the pilot group and remarks from participants, we found that engi-
neers rely on proven practices in their teams, rather than on formal methods,
to design, develop and maintain applications. One of these proven practices is
the use of verbal communication in weekly team meetings to discuss code and
design. These discussions aim at improving the quality of the code by reviewing
the contributions for that week and sharing the concepts and implementations.
In line with [6–8], we did not expect that web service APIs (SOAP, RESTful)
would be the most re-used architectural assets (question B5). We had rather
expected that data would have a higher value both for business and for software
engineers and thus would be re-used more often than services.
With B6, we expected that the average lifetime of an application would be
within 3 to 5 years (as in [9]). This is related to IT expenditures that are typ-
ically budgeted from capital expenditures. Capital expenditures have a typical
amortization of 5 years. Nowadays, companies do not have to invest in costly
server infrastructure anymore (capital expenditure). Instead, web and mobile
applications are typically deployed in cloud environments, in which infrastruc-
ture is paid for as a service and is thus operational expenditure [10]. Further-
more, software engineers typically change their employer or job role every 2
[11] to 4.6 years [12]. Finally, software engineers typically favor build-
ing from scratch over brown-field applications that have been patched over the
years. In the latter case, the technical debt exceeds the cost of re-building from
scratch.
4 Threats to Validity
In this section, we discuss possible threats to the internal and external validity
of our findings. A common threat to internal validity in questionnaire-based
surveys stems from poorly understood questions and a low coverage of constructs
under study. The former threat was mitigated to a large extent by piloting the
questionnaire with three participants from the target population. We asked these
participants to fill in the questionnaire. Afterwards, they were asked to describe
their interpretations of the questions and their answers. We used this input in
multiple iterations to revise the questions and answering options. We addressed
construct validity by explicitly mapping the questions of our questionnaire to the
research questions (see Table 1) and by making sure that each research question
is covered by multiple questions in the questionnaire.
External validity is concerned with the degree to which the study results can
be generalized to a larger subject population [13]. We used statistical methods
to analyze whether our results are significant. Mason postulates that, as a
rule of thumb, questionnaires require between 30 and 150 responses in order to
yield valid results [14].
We had a total of 73 respondents, 39.7 % of whom answered all questions.
Thus, we suppose that the number of respondents is sufficient.
Two remarkable outcomes from the questionnaire (questions C3 and C5) are
(1) that technical documentation is less popular than plain text documentation
and (2) that continuity of knowledge is achieved primarily through verbal com-
munication. We calculated the variance and standard deviation of our responses.
For C3 the variance is 0.2 and thus very low; for C5 the calculated mean is 423,
the standard deviation is 193 and the weighted value for verbal communication
is 425. The actual weighted value deviates by 2 points only. Thus, the results
with respect to our most surprising outcomes are statistically significant.
References
1. van Heesch, U., Avgeriou, P.: Mature architecting - a survey about the reasoning
process of professional architects. In: Proceedings of the Ninth Working IEEE/IFIP
Conference on Software Architecture, pp. 260–269. IEEE Computer Society (2011)
2. Ciolkowski, M., Laitenberger, O., Vegas, S., Biffl, S.: Practical experiences in
the design and conduct of surveys in empirical software engineering. In: Con-
radi, R., Wang, A.I. (eds.) Empirical Methods and Studies in Software Engi-
neering. LNCS, vol. 2765, pp. 104–128. Springer, Heidelberg (2003). doi:10.1007/
978-3-540-45143-3 7
3. Teddlie, C., Yu, F.: Mixed methods sampling: a typology with examples. J. Mixed
Methods Res. 1(1), 77–100 (2007)
4. Sonnenburg, S.: Creativity in communication: a theoretical framework for collab-
orative product creation. Creativity Innov. Manage. 13(4), 254–262 (2004)
5. Carzaniga, A., Fuggetta, A., Hall, R.S., Heimbigner, D., van der Hoek, A.,
Wolf, A.L.: A Characterization Framework for Software Deployment Technologies.
Colorado State Univ Fort Collins Dept of Computer Science (1998)
6. Teece, D.J.: Capturing value from knowledge assets: the new economy, markets for
know-how, and intangible assets. Calif. Manage. Rev. 40(3), 55–79 (1998)
7. Rayport, J.F., Sviokla, J.J.: Exploiting the virtual value chain. Harvard Bus. Rev.
73(6), 75 (1995)
8. Howard, R.A.: Information value theory. IEEE Trans. Syst. Sci. Cybern. 2(1),
22–26 (1966)
9. Tamai, T., Torimitsu, Y.: Software lifetime and its evolution process over gener-
ations. In: Proceedings, Conference on Software Maintenance, pp. 63–69. IEEE
(1992)
10. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Lee, G.,
Patterson, D., Rabkin, A., Stoica, I., et al.: A view of cloud computing. Commun.
ACM 53(4), 50–58 (2010)
11. Eriksson, T., Ortega, J.: The adoption of job rotation: testing the theories. Ind.
Labor Relat. Rev. 59(4), 653–666 (2006)
12. U.S. Bureau of Labor Statistics. Employee tenure summary, September 2014.
http://www.bls.gov/news.release/tenure.nr0.htm. Accessed 28 Mar 2016
13. Kitchenham, B.A., Pfleeger, S.L., Pickard, L.M., Jones, P.W., Hoaglin, D.C.,
El Emam, K., Rosenberg, J.: Preliminary guidelines for empirical research in
software engineering. IEEE Trans. Softw. Eng. 28(8), 721–734 (2002)
14. Mason, M.: Sample size and saturation in PhD studies using qualitative interviews.
Forum Qual. Sozialforschung/Forum: Qual. Soc. Res. 11(3), Art. 8 (2010). http://
nbn-resolving.de/urn:nbn:de:0114-fqs100387
Architecture Modeling and Analysis
of Security in Android Systems
1 Introduction
can be sent either to other activities within the app, or to activities that belong
to other apps. There are two forms of intent: explicit and implicit. Senders of
explicit intents specify the intended recipients, which can be in the same or
another app. For implicit intents, a recipient is not specified. Instead, Android
conducts intent resolution that matches the intent with intent patterns speci-
fied by activities. So, for example, an activity can request that a web page be
displayed, but can allow that web page to be displayed by third party browsing
apps that may be unknown at the time the requesting app is developed.
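To make the distinction concrete, the following Java sketch shows both intent forms using the standard Android API; the component and package names are hypothetical and the snippet is illustrative only, not taken from any app studied in this paper.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;

public class SenderActivity extends Activity {

    // Hypothetical target, declared here only to keep the sketch self-contained.
    public static class CheckoutActivity extends Activity { }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Explicit intent: the sender names the recipient component directly;
        // the target may be in the same app or, if exported, in another app.
        Intent explicit = new Intent(this, CheckoutActivity.class);
        explicit.putExtra("orderId", 42);
        startActivity(explicit);

        // Implicit intent: no recipient is named; Android's intent resolution
        // matches the action and data against the intent filters declared by
        // activities, e.g. any installed browser can display the web page.
        Intent implicit = new Intent(Intent.ACTION_VIEW, Uri.parse("http://example.org"));
        startActivity(implicit);
    }
}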
While intents provide a great deal of flexibility, they are also the source of a
number of security vulnerabilities such as intent spoofing, privilege escalation,
and unauthorized intent receipt [10]. To some degree, these vulnerabilities can
be uncovered by analyzing apps and performing static analysis to see how intents
are used, what checks are made on senders and receivers of intents, and so on [21].
However, Android is an extendable platform that allows users to dynamically
download, update, and delete apps, which makes a full static analysis impossible.
Separ [7] provides an automatic scheme for formal synthesis of Android inter-
component security policies, allowing end-users to safeguard the apps installed
on their device from inter-component vulnerabilities. It relies on a constraint
solver to synthesize possible security exploits, from which fine-grained security
policies are derived. Such fine-grained, yet system-specific, policies can then be
enforced at run time to protect a given device.
Bagheri et al. conduct a bounded verification of the Android permission
protocol modeled in terms of architectural-level operations [4,5]. The results
of this study reveal a number of flaws in the permission protocol that cause
serious security defects, in some cases allowing the attacker to entirely bypass
the Android permission checks.
Secoria [1] provides security analysis for architectures and conformance
for systems with an underlying object-oriented implementation. Through static
analysis, a data flow architecture of a system is constructed as an instance of
a data flow architecture style defined in Acme [18]. Components are assigned
a trust level and data read and write permissions are specified on data stores.
Security constraints particular to a software system (such as “Access to the
key vault [...] should be granted to only security officers and the cryptographic
engine”) are captured as Acme rules. In [16], this dataflow style is extended with
constraints for analyzing a subset of the STRIDE vulnerabilities. We show in
Sect. 6.2 how this latter approach can be applied to analyze vulnerabilities in
Android.
To evaluate the security of Android apps, the core Android architectural struc-
tures need to be represented in an architecture style that is expressive enough
to capture security properties. Android app component types, such as activi-
ties, services, and content providers, form the building blocks of all apps. Each
Android component type possesses properties that are critical for security assess-
ment. For example, activities can be designated as “exported” if they can be
referenced outside of the app to which they belong. Exported activities are a
common source of security vulnerabilities, thus a security-focused architectural
model must include information about whether an activity is exported. Android
apps are distinct, yet they share many commonalities necessary for app creation
and interaction. A major consequence of this design is that boundaries between
apps are loosely defined and enforced. To identify and evaluate potential secu-
rity issues that emerge from app interaction on a device, all apps and their
connections deployed on the device must be made explicit in the architecture.
Furthermore, because apps can be updated, installed, and removed during the
lifetime of the device, the architecture model must be flexible and easy to modify.
Since a significant number of Android security issues arise from unexpected
interactions between apps, modeling communication pathways between apps on
a device is perhaps the most critical requirement for security analysis. At the
device level, each individual app is essentially a subsystem that operates in the
So far we have discussed how we have modeled elements of an app, but not
the app itself. Because most vulnerabilities involve inter-app communication,
and apps themselves specify additional information (e.g., which activities are
exported), we need a way to explicitly represent them. One way to do this
would be via hierarchy: make each app a separate component with a subsystem
that is composed of the activities, services, etc. This would mean that we could
represent a device as a collection of App components, where the structure is
hidden in the hierarchy. However, this complicates security checking, because it
involves analyzing communication that is directly between activities and services
4 http://developer.android.com/guide/components/services.html.
(and not apps); in such cases, any analysis would inevitably need to traverse the
hierarchy, complicating rules and pathways that are directly between constituent
components.
Alternatively, Acme has a notion of groups, which are architectural elements
that contain components and connectors as members. Like other architectural
elements, groups can be typed and can define properties and rules. So, we use
groups to model apps. The AppGroupT group type defines the permissions
that an Android app has as a property. It then specifies its members as instances
of the component types described above. Rules check that member elements
do not require permissions that are not required by the app itself, providing
some consistency checking about permission usage in the app. Groups naturally
capture Android apps as collections of activities, services, content providers,
etc., as well as the case where communication easily crosses app boundaries by
referring directly to activities that may be external to the app. Groups are shown
in Fig. 1 as dashed lines around the set of components that are provided by the
app. Furthermore, for security analysis, groups form natural trust boundaries –
communication within the app can be trusted because permissions are specified
at the app level; communication outside the app should be analyzed because
information flows to apps that may have different permissions. Therefore we also
capture the permissions that are specified by apps as properties of the group.
An application (group) specifies the set of permissions that an app is granted;
activities specify the permissions that are required for them to be used.
One of the key requirements for enabling security analysis with formal models
is being able to explicitly capture inter-app communication. All intents use the
same underlying mechanism, but the semantics of implicit and explicit intents
are markedly different. Explicit intents require the caller to specify the target of
the intent, and hence are more like peer-to-peer communication. Implicit intents
require apps that can process the intent to specify their interest via subscription.
Senders of the intent do not name a receiver, and instead Android (or the user)
selects which of the interested apps should process it through a process called
intent resolution. This communication is like publish-subscribe. Because these
different semantics are susceptible to different vulnerabilities, they need to be
distinguished in the style.
Explicit intents are modeled as point-to-point connectors (pairwise rectangu-
lar connectors in Fig. 1), where there is one source of the intent and one target.
On the other hand, we model implicit intent communication via publish sub-
scribe. We model one implicit intent bus per device. Implicit intents sent from
components in all apps are connected to this bus; publishers specify the kind of
intent that is being published (i.e., the intent’s action), whereas subscribers spec-
ify the intent filter being matched against. In Fig. 1 we can see one device-wide
implicit intent bus as the filled-in long rectangle in the middle of the figure. Ele-
ments from all apps connect to this bus (the intent type and intent subscriptions
are specified as properties on the ports of connected components).
Using different connector types for each intent-messaging type allows for more
nuanced and in-depth reasoning about security properties than if they were
modeled using the same type. For example, identifying unintended recipients of
implicit intents is easier if implicit intents are first-order connectors.
Android also has a notion of broadcasts (intents sent to broadcast receivers
in apps). We did not define a separate connector for broadcasts because, for
the purposes of security analysis, broadcast communication is done by sending
intents (though via different APIs). Subscribing to broadcasts is also done by
registering an intent filter, making both the sending and receiving for broadcasts
the same as for intents.
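As an illustration of the subscription mechanism discussed above (not part of the architectural style itself), the Java sketch below registers a broadcast receiver at run time; the receiver class and its logging behaviour are hypothetical. The subscription is expressed through an IntentFilter, which is why broadcasts can be mapped onto the same implicit intent bus in the model.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.util.Log;

public class BatteryLowLogger extends BroadcastReceiver {

    // Called by Android whenever a matching broadcast intent is delivered.
    @Override
    public void onReceive(Context context, Intent intent) {
        Log.i("BatteryLowLogger", "Received broadcast: " + intent.getAction());
    }

    // Registering an IntentFilter is the subscription step: it plays the same role
    // for broadcasts as an activity's intent filter does for implicit intents.
    public static BatteryLowLogger register(Context context) {
        BatteryLowLogger receiver = new BatteryLowLogger();
        context.registerReceiver(receiver, new IntentFilter(Intent.ACTION_BATTERY_LOW));
        return receiver;
    }
}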
5 Architecture Discovery
The model extractor relies on the Soot [24] static analysis framework to
capture an abstract model of each individual app. The captured model encodes
the high-level, static structure of each app, as well as possible intra- or inter-
app communications. To obtain an app model, the model extractor first extracts
information from the manifest, including an app’s components and their types,
permissions that the app requires, and permissions enforced by each component
to interact with other components. It also extracts public interfaces exposed
by each app, which are entry points defined in the manifest file through Intent
Filters of components. Furthermore, the model extractor obtains complementary
information latent in the application bytecode using static code analysis [6]. This
additional information, such as intent creation and transmission, or database
queries, is necessary for further security analysis.
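The Java fragment below is only a simplified sketch of the kind of information gathered from the manifest; it assumes a decoded, plain-text AndroidManifest.xml (e.g. produced by a tool such as apktool), whereas the actual model extractor works on APK files and bytecode through Soot.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ManifestSketch {
    public static void main(String[] args) throws Exception {
        // args[0]: path to a decoded, plain-text AndroidManifest.xml
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));
        NodeList activities = doc.getElementsByTagName("activity");
        for (int i = 0; i < activities.getLength(); i++) {
            Element activity = (Element) activities.item(i);
            // Attributes of interest for the security model: component name,
            // whether it is exported, and the permission required to use it.
            String name = activity.getAttribute("android:name");
            String exported = activity.getAttribute("android:exported");
            String permission = activity.getAttribute("android:permission");
            boolean hasIntentFilter =
                    activity.getElementsByTagName("intent-filter").getLength() > 0;
            System.out.printf("%s exported=%s permission=%s intentFilter=%b%n",
                    name,
                    exported.isEmpty() ? "(default)" : exported,
                    permission.isEmpty() ? "(none)" : permission,
                    hasIntentFilter);
        }
    }
}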
Once the generic model of an app (App Model in Fig. 2) is obtained, the
template engine translates it to an Acme architecture. The input and
output of this phase are models extracted from apps’ APK files in an XML
format and their corresponding architecture descriptions in the Acme language,
respectively. Our template engine, which is based on the FreeMarker framework,5
needs a template (i.e., Acme Template in Fig. 2) that specifies the mapping
between an app’s extracted entities and the elements of the Acme architectural
style for Android (cf. Sect. 4).
The model transformation process consists of multiple iterations over three
elements of apps (i.e., components, intents, and database queries) extracted by
the model extractor. It first iterates over the components of an app, and generates
a component element whose type corresponds to one of the four component types
of Android. The properties of the generated components are further set based on
the extracted information from the manifest (e.g., component name, permissions,
etc.). If the type of a component is ContentProvider, a provider port is added
to the component. Moreover, if a component has defined any public interface
through IntentFilters, a receiver port is added and connected to the Implicit
Intent Bus. Afterwards, it iterates over intents of the given app model. For
explicit intents, two ports are added to the sender and receiver components of
the intent, and an Explicit Intent connector is generated to connect those ports.
For implicit intents, however, only one port is added to the component sending
5 http://freemarker.org/.
the message; this port is then attached to the Implicit Intent bus. Moreover, to
capture data sharing communications, the tool iterates over database queries,
and adds a port to the components calling a ContentProvider. This port is then
connected to the other port, previously defined for the called ContentProvider,
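For illustration, a database query of the kind the transformation looks for might appear in an app as follows; the content URI and its authority (com.example.notes.provider) are hypothetical. It is this authority that links the port added to the querying component with the port previously defined for the matching ContentProvider.

import android.app.Activity;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;

public class NotesReaderActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The authority in the content URI identifies the ContentProvider
        // (possibly belonging to another app) that exposes the shared data.
        Uri notesUri = Uri.parse("content://com.example.notes.provider/notes");
        Cursor cursor = getContentResolver().query(notesUri, null, null, null, null);
        if (cursor != null) {
            while (cursor.moveToNext()) {
                // Read the first column of each row of the shared data.
                Log.i("NotesReader", cursor.getString(0));
            }
            cursor.close();
        }
    }
}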
which is resolved based on the specified authority in the database query.
Finally, after translating the app models of all APK files, generated archi-
tectures are combined together and with the architecture style we developed for
the Android framework (Android Family), which are then fed as a whole into
AcmeStudio as the architecture of the entire system. This recovered architec-
ture is further analyzed to identify flaws and vulnerabilities that could lead to
security breaches in the system.
– All implicit intents are attached to the global implicit intent bus.
invariant forall c1 :! IntentFilteringApplicationElementT in self.components |
size (c1.intentFilters) > 0 -> connected (ImplicitIntentBus, c1);
– Activities and services that are not exported by an app are not connected to
other apps.
invariant forall g1 :! AndroidApplicationGroupT in self.groups |
forall g2 :! AndroidApplicationGroupT in self.groups |
forall a1 :! IntentFilteringApplicationElementT in g1.members |
forall a2 :! AppElement in g2.members |
((a1 != a2 and connected(a1, a2) and !a1.exported) -> g1 == g2);
Using these rules, Acme is able to check the architecture of individual apps, as
well as a set of apps deployed together on an Android device. In Acme, invariants
are used to specify rules that must be satisfied, whereas heuristics represent
rules-of-thumb that should be followed. When used in a forward engineering
setting, where a model of an app is constructed prior to its implementation, the
analysis can find flaws early in the development cycle. When used in a reverse
engineering setting, where a model of an app is recovered using the techniques
described in Sect. 2, the rules can be applied to identify flaws latent in the
implemented software or introduced as the system evolves.
A certain class of threats facing a system can be classified using STRIDE [23],
which captures six kinds of threat categories: Spoofing, Tampering, Repudiation,
Information Disclosure, Denial of Service, and Elevation of Privilege. According to
STRIDE, a system faces security threats when it has information or computing
elements that may be of value to a stakeholder. Such components or information
are termed the assets of the system. Furthermore, most threats occur when there
is a mismatch of trust between entities producing and those consuming the data.
This approach conforms to the security level approach mismatch idea proposed
in [13,14] and used by others since then (e.g., [8,15]).
STRIDE is often applied in the context of a larger threat modeling activ-
ity where the system is represented as a dataflow diagram. This representation
is particularly useful for evaluating Android security issues that emerge from
unintended intent passing. Viewing apps and the data they access as assets in
terms of data flow exposes situations when possibly sensitive data passes between
apps in an insecure way. For each data path between apps on a device, careful
analysis can be performed to identify vulnerabilities, such as spoofing and ele-
vation of privilege issues. Intent spoofing is a known class of threat common
in Android systems that occurs when a malicious activity is able to forge an
intent to achieve an otherwise unexpected behavior. In one scenario the tar-
geted app contains exported activities capable of receiving the spoofed intent.
Once the intent is processed by the victim app, it can be leveraged to elevate the
privileges of the malicious app, possibly providing access to protected resources.
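As a concrete illustration (the action string, extra key, and package names below are hypothetical), a malicious activity can forge such an intent with the standard Android API; any exported activity whose intent filter matches the action will receive it:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical malicious component forging an implicit intent. If a victim app
// exports an activity handling this action and performs a privileged operation
// on receipt, the sender indirectly gains that capability.
public class SpoofingActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent spoofed = new Intent("com.example.victim.ACTION_APPLY_SETTINGS"); // assumed action
        spoofed.putExtra("command", "disable_lock");                             // assumed extra
        startActivity(spoofed); // delivered to any exported activity matching the filter
    }
}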
Acme provides a framework for reasoning about app security. The properties
needed to reason about these threats are present in terms of Android structures
and data flow concerns. For example, Acme handles inter-app communication
and exposes security properties about apps, such as whether they are exported
and what permissions they possess. With this information in the model, auto-
matically detecting app arrangements that may allow intent spoofing, informa-
tion disclosure, and elevation of privilege can be written as first order predicate
rules over the style. Consider Listing 1 which shows how information disclosure
vulnerabilities are detected. Each application group is assigned a trust level,
based on the category of the app - for example, banking and finance apps would
be more trusted than game apps; apps from certain providers like Google would
have higher trust. The rule specifies that if a source application sends an implicit
intent to a target application, then the source application's trust level must be
lower than or equal to the recipient's. These rules for STRIDE are consistent with
the approach taken in [16] for general data-flow architectures.
This rule (and others that are being checked) highlights potential pathways of
concern and may generate false positives. This is one reason why in the style we
specify the rule as a heuristic, rather than as an invariant. These pathways would
need to be more closely monitored at run time than other pathways that do not
fail the heuristic, to determine whether the information should be transmitted.
As described in Sect. 4, the architectural style for the Android framework represents the foun-
dation upon which Android apps are constructed. Our formalization of these
concepts includes a set of rules that lay this foundation (e.g., applications, compo-
nents, messages), specify how these elements behave, and how they interact with each other.
We regard vulnerability signatures as a set of assertions used to reify security
vulnerabilities in Android, such as privilege escalation. All the specifications are
uniformly captured in the Alloy language. As a concrete example, we illustrate
the semantics of one of these vulnerabilities in the following. The others are
evaluated similarly.
assert privilegeEscalation{
no disj src, dst: Component, i:Intent|
(src in i.sender) &&
(dst in src.^transitiveIPC) &&
(some p: dst.app.usesPermissions |
not (p in src.app.usesPermissions) &&
not ((p in dst.permissions) ||(p in dst.app.appPermissions)))
}
Listing 2 presents an excerpt from an Alloy assertion that specifies the ele-
ments involved in and the semantics of the privilege escalation vulnerability. In
essence, the assertion states that the victim component (dst) has access to a per-
mission (usesPermission) that is missing in the src component (malicious), and
that permission is not being enforced in the source code of the victim component,
nor by the application embodying the victim component. As a consequence, an
application with fewer permissions (a non-privileged caller) is not restricted from
accessing components of a more privileged application (a privileged callee) [11].
The analysis is conducted by exhaustive enumeration over a bounded scope
of model instances. Here, the exact scope of each element, such as Application
and Component, required to instantiate each vulnerability type is automati-
cally derived from the system architectural model. If an assertion does not hold,
the analyzer reports it as a counterexample, along with the information help-
ful in locating the root cause of the violation. A counterexample is a certain
model instance that makes the assertion false, and encompasses an exact sce-
nario (states of all elements, such as components and intents) leading to the
violation.
7 Performance Analysis
To evaluate the performance of our approach, we randomly selected and down-
loaded 15 popular Android apps of different categories from the Google Play
repository, and ran the experiments on a computer with a 2.2 GHz Intel Core
i7 processor and 16 GB DDR3 RAM. We repeated our experiments 33 times,
the minimum number of repetitions needed to accurately measure the average
execution time overhead at a 95 % confidence level. Table 1 summarizes the perfor-
mance measurements for the architecture discovery process described in Sect. 5,
divided into the time of model extraction and architecture generation.
The first column shows the number of instructions in the Smali assembly
code6 of the apps under analysis, representing their size in lieu of their corre-
sponding lines of code, due to the unavailability of their source code. Moreover, as an
architectural metric, the number of components, categorized by their types (i.e.,
Activity, Service, Broadcast Receiver, Content Provider), and explicit connec-
tors, are provided in the table.
As shown in Table 1, there is a relationship between size (number of instruc-
tions) of the apps and model extraction time – apps with more instructions
require more time to capture their model. On the other hand, the performance
of the second part of the process, i.e., translating the extracted model to an
Acme architecture, depends on the total number of components and connectors,
as the translator iterates over each of them.
Our approach can be used to facilitate this combination of static and dynamic
analysis. We are in the process of connecting our tool-suite to the Rainbow self-
adaptive framework [9,17], where the vulnerabilities found statically can be used
to choose adaptation strategies to change communication behavior in Android.
We are in the process of addressing some of the challenges in integrating these
two approaches, including disconnected operation and prevention of behaviors
rather than reaction to behaviors.
For the modeling aspect, we have concentrated on understanding the archi-
tecture of the applications on the device, and the communication pathways. How-
ever, many apps are part of a large ecosystem with diverse back ends that are not
on the device. Many of these apps may have information flows that affect secu-
rity. How we model this, and how much, is an area of future work. Furthermore,
security aspects are context-sensitive in the domain of mobile devices, where the
degree of analysis required might change depending on whether devices are, for
example, being used in a public coffee bar, or at home. We focused on analyzing
Android and extensions to it. In future work we plan to apply this type of rea-
soning to other plugin frameworks, and assess how we might inform the design
of new frameworks for which security is a concern.
References
1. Abi-Antoun, M., Barnes, J.M.: Analyzing security architectures. In: Proceedings
of the IEEE/ACM International Conference on Automated Software Engineering,
ASE 2010, pp. 3–12. ACM, New York (2010)
2. Almorsy, M., Grundy, J., Ibrahim, A.S.: Automated software architecture security
risk analysis using formalized signatures. In: 2013 35th International Conference
on Software Engineering (ICSE), pp. 662–671, May 2013
3. Bagheri, H., Garcia, J., Sadeghi, A., Malek, S., Medvidovic, N.: Software architec-
tural principles in contemporary mobile software: from conception to practice. J.
Syst. Softw. 119, 31–44 (2016)
4. Bagheri, H., Kang, E., Malek, S., Jackson, D.: Detection of design flaws in
the Android permission protocol through bounded verification. In: Bjørner, N.,
de Boer, F. (eds.) FM 2015. LNCS, vol. 9109, pp. 73–89. Springer, Heidelberg
(2015). doi:10.1007/978-3-319-19249-9 6
5. Bagheri, H., Kang, E., Malek, S., Jackson, D.: A formal approach for detection of
security flaws in the Android permission system. Formal Aspects Comput. (2016)
6. Bagheri, H., Sadeghi, A., Garcia, J., Malek, S.: COVERT: compositional analysis of
Android inter-app permission leakage. IEEE Trans. Software Eng. 41(9), 866–886
(2015)
7. Bagheri, H., Sadeghi, A., Jabbarvand, R., Malek, S.: Practical, formal synthesis
and automatic enforcement of security policies for Android. In: Proceedings of the
46th IEEE/IFIP International Conference on Dependable Systems and Networks
(DSN), pp. 514–525 (2016)
8. Bodei, C., Degano, P., Nielson, F., Nielson, H.R.: Security analysis using flow logics.
In: Current Trends in Theoretical Computer Science, pp. 525–542. World Scientific
(2000)
9. Cheng, S.-W.: Rainbow: cost-effective software architecture-based self-adaptation.
PhD thesis, Carnegie Mellon University, Institute for Software Research Technical
Report CMU-ISR-08-113, May 2008
10. Chin, E., Felt, A.P., Greenwood, K., Wagner, D.: Analyzing inter-application com-
munication in Android. In: Proceedings of the 9th International Conference on
Mobile Systems, Applications, and Services, MobiSys 2011, pp. 239–252. ACM,
New York (2011)
11. Davi, L., Dmitrienko, A., Sadeghi, A.-R., Winandy, M.: Privilege escalation attacks
on Android. In: Proceedings of the 13th International Conference on Information
Security (ISC) (2010)
12. Deng, Y., Wang, J., Tsai, J.J.P., Beznosov, K.: An approach for modeling, analysis
of security system architectures. IEEE Trans. Knowl. Data Eng. 15(5), 1099–1119
(2003)
13. Denning, D.E.: A lattice model of secure information flow. Commun. ACM 19(5),
236–243 (1976)
14. Denning, D.E., Denning, P.J.: Certification of programs for secure information
flow. Commun. ACM 20(7), 504–513 (1977)
15. Fernandez, E.B., Larrondo-Petrie, M.M., Sorgente, T., VanHilst, M.: A methodol-
ogy to develop secure systems using patterns. In: Integrating Security and Software
Engineering: Advances and Future Visions. Idea Group Inc. (2007)
16. Garg, K., Garlan, D., Schmerl, B.: Architecture based information flow analy-
sis for software security (2008). http://acme.able.cs.cmu.edu/pubs/uploads/pdf/
ArchSTRIDE08.pdf
17. Garlan, D., Cheng, S.-W., Huang, A.-C., Schmerl, B., Steenkiste, P.: Rainbow:
Architecture-based self-adaptation with reusable infrastructure. IEEE Comput.
37(10), 46–54 (2004)
18. Garlan, D., Monroe, R.T., Wile, D.: Acme: architectural description of component-
based systems. In: Foundations of Component-Based Systems, pp. 47–67.
Cambridge University Press, New York (2000)
19. Jackson, D.: Software Abstractions: Logic, Language, and Analysis, 2nd edn. MIT Press,
London (2012)
20. Ren, J., Taylor, R.: A secure software architecture description language. In: Work-
shop on Software Security Assurance Tools, Techniques, and Metrics, pp. 82–89
(2005)
21. Sadeghi, A., Bagheri, H., Malek, S.: Analysis of Android inter-app security vul-
nerabilities using COVERT. In: Proceedings of the 37th International Conference
on Software Engineering, ICSE 2015, vol. 2, pp. 725–728. IEEE Press, Piscataway
(2015)
22. Shaw, M., Garlan, D.: Software Architecture: Perspectives on an Emerging Dis-
cipline. Prentice Hall, Englewood Cliffs, NJ (1996)
23. Swiderski, F., Snyder, W.: Threat Modeling. Microsoft Press, Redmond (2004)
24. Vallée-Rai, R., Co, P., Gagnon, E., Hendren, L., Lam, P., Sundaresan, V.: Soot-a
Java bytecode optimization framework. In: Proceedings of the Conference of the
Centre for Advanced Studies on Collaborative Research, p. 13. IBM Press (1999)
Towards a Framework for Building SaaS
Applications Operating in Diverse
and Dynamic Environments
1 Introduction
Software-as-a-Service (SaaS) - a delivery model for software applications -
attracts customers by presenting features such as no up-front cost, on-demand
provisioning at application-level granularity, and freedom from maintenance
[3,12]. In the SaaS model, the service provider is responsible for managing all ser-
vice components (software and hardware) and ensuring application-level quality
attributes desired by a customer. These SaaS customers - “tenants” - may oper-
ate in diverse environments and may demand different levels of qualities (e.g.,
low or high availability) from the application [4,15]. For example, considering
an ERP SaaS, a small organization may need low availability (95 %) and an
enterprise may demand high availability (99.99 %). Similarly, a tenant may also
operate in a dynamic environment where expectations from the application may
change at run-time to accommodate changes in the environment. In our sce-
nario, the small organization may desire to have high availability for a time
period such as peak load and business events. The motivation behind the need
for such dynamic quality requirements is the fact that some quality attributes
have an impact on the operational cost of the application, and the application
may not require high values of these quality attributes all the time. For example,
if an application achieves high availability by replicating to a redundant server,
this additional server will increase the operational cost. Figure 1 depicts a case
of quality expectations of tenants of a SaaS.
Fig. 1. An example showing diverse and dynamic quality expectations from a SaaS
To make the offering attractive to the tenants, a SaaS should have the abil-
ity to address diverse and dynamic quality requirements. From an architectural
perspective, the two most common patterns [4,14] for building a SaaS are: (1) Multi-
tenant, where all tenants share a common instance along with the code com-
ponents, and (2) Multi-instance, where every tenant has a dedicated instance
allocated to it.
For building a SaaS operating in diverse and dynamic environments, Multi-
tenancy would be beneficial in terms of operational cost and maintenance. How-
ever, this pattern requires designing tenant-aware components that can increase
development cost and time to market. Although the development cost would
be high, it might be compensated by lower operational cost [6]. In contrast,
the benefits of Multi-instance are: less time to market, lower design cost, and
flexibility for customization. However, Multi-instance pattern may have high
operational cost and high maintenance if there are a large number of tenants.
A service provider can select a pattern by analyzing these parameters in the
context of its business goals and policies. One can also use a combination of
these patterns where a group of tenants shares a common instance.
One thing to note here is that addressing diversity issues of the tenants in
Multi-tenancy may create a very complex architecture and design that can create
issues for maintaining the service. In some scenarios, it may be easier to maintain
multiple simple instances than a single complex instance. In this paper, we focus
The rest of the paper is organized as follows. Section 2 describes the problem
statement and our approach. Section 3 explains the Chameleonic-SaaS frame-
work in detail. Section 4 presents an example by building a MOOC applica-
tion. Section 5 provides a brief summary of existing work related to this paper.
Section 6 discusses benefits and limitations of our approach. Section 7 concludes
the paper with scope of future work.
This section defines the problem statement and describes our approach.
– Service should have the ability to address diverse quality requirements of dif-
ferent tenants at provisioning-time.
– Service should have the ability to change quality attributes for a particular
tenant dynamically at run-time.
– It should be easy to maintain the service.
2.2 Approach
Our idea is to expose quality attributes as scriptable resources to the tenants
of a SaaS. Using such resources, a tenant can customize the set of quality
attribute values provided by an application instance. Such customization can
occur either at provisioning-time or dynamically at run-time on demand basis.
Here, customizations in the quality attribute values are achieved by modifying
architecture-level decisions of the application.
This leads to the question of what architectural decisions need to be changed
in the architecture. Such architectural decisions should only impact quality
attributes of the application. We use the architectural tactics as the architectural
decisions that can be modified at run-time. A tactic is an architectural tool that
can be used to improve a particular quality attribute of an application [2]. For
example, Ping & Echo [2] is a tactic to improve the availability of an application
by detecting failures such as network failure. Thus, to modify quality attributes
of a tenant’s instance, SaaS can add or remove tactics in its architecture. The
approach mentioned above leads to a natural question:
3 Chameleonic-SaaS Framework
Findings of our investigation on building SaaS applications for diverse and
dynamic environments are formulated as a methodological framework called
Chameleonic-SaaS. Applications built using this framework can provision
instances with different quality attributes to address diverse quality requirements
of SaaS-users. Quality attributes of such instances can also be changed dynami-
cally at run-time, to accommodate dynamic operating environments. This frame-
work abstracts out the responsibilities involved and provides architectural guide-
lines for building such SaaS applications. Steps of the framework (depicted in
Fig. 2) are explained in the following sections.
4 Example
Fig. 6. Service base model with variation points bound to the VSpecs
choice that further has two child choices: FaultDetection and FaultRecovery.
There is also a constraint specifying that FaultRecovery requires FaultDetection
to be present in the instance. FaultDetection has PingEcho as a child choice that
is linked to various Variation Points relating to the existence of components
(PingSender, PingReceiver, etc.) and links. Figure 7 depicts resolution model for
QAS-1 where PassiveRedundancy choice is True but ColdSpare is False. Figure 8
depicts architectures of the various instances generated by the service depending
upon the resolution models.
For variation triggered by the events in the application environment, a sensor
to monitor events - course material release, quiz period and self-pace mode - is
implemented in the application that triggers the Event Monitor component of
the Adaptation Manager. These events are analyzed to check the occurrence of any
QAS. The Adaptation Manager also exposes an API through which a customer can
directly request an availability value (default, low, moderate, or high) for
an existing instance. Depending upon the current architecture of the instance
and the desired QAS, Adaptation Manager modifies the instance architecture by
adding or removing components.
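A minimal sketch of what such an API could look like is given below; the interface name, the enumeration of availability levels, and the method signature are our own assumptions, not the framework's actual code.

// Hypothetical programming interface of the Adaptation Manager described above.
// A tenant, or an event handler reacting to a QAS, requests an availability level
// for a running instance; the manager then adds or removes tactic components
// (e.g., PingEcho, PassiveRedundancy, ColdSpare) to reach that level.
public interface AdaptationManager {
    enum AvailabilityLevel { DEFAULT, LOW, MODERATE, HIGH }

    // Returns true if the instance architecture was (or already is) adapted
    // to the requested level.
    boolean requestAvailability(String instanceId, AvailabilityLevel level);
}

A tenant anticipating a peak-load period would then simply call, for instance, manager.requestAvailability("tenant-42", AvailabilityLevel.HIGH) and revert to the default level afterwards.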
Figure 9 presents the results of experiments conducted by dynamically provisioning
availability values to an MMS instance. In our setup, the service is offered by cre-
ating MMS instances over Linux containers (LXC). The containers were set up on
a virtual machine (1CPU Core, 2 GB RAM) running Ubuntu operating system.
For deployment of tactics components, we used the Puppet tool [18]. The results
show that adding quality to an existing instance is fast due to quick creation
of containers. Also, Passive Redundancy has less fault recovery time compared
to the Cold Spare tactic, as the latter requires creating a new container to recover.
These timings directly depend on our execution environment and should not be
used as benchmarks.
Fig. 9. Experiment results for availability scenarios in MMS (a) Provisioning time in
adding or removing the QASs, and (b) Availability benefits in terms of time consumed
in fault detection and fault recovery
5 Related Work
The demand for tenant-specific customization of a service has been highlighted
by several researchers [1,12,17]. Here, customization is desired in features, work-
flow, user interface, etc., and facilitated using virtualization techniques [4].
In the context of SaaS applications, researchers have identified some architectural
patterns such as Multi-tenancy and Multi-instance and discussed their impact on
the quality attributes [4,6,14]. Koziolek [11] discussed various quality require-
ments from a SaaS such as resource sharing, scalability, maintainability, cus-
tomizability, and usability. The work also includes an architectural style called
SPOSAD, based on the multi-tier style. Software engineering issues with developing
SaaS applications have also been discussed [5].
Variability has been presented as a quality attribute of architecture [2] and
has been extensively used in Product Line Engineering (PLE). However, vari-
ability in quality attributes (performance variability, availability variability, etc.)
has not been used much and requires more exploration [7].
Several researchers have proposed techniques to design a SaaS as a Product
Line Architecture by introducing variability in the architecture [1,15,17]. Matar
et al. [1] discussed different kinds of variability for a SaaS such as application vari-
ability, business process variability and provisioning variability. However, most
of these works are focused only on the variations in the feature models. These
approaches are also unable to handle environmental changes that demand
variations only in quality attributes. Horcas et al. [9] presented a technique to
inject functional quality attributes (that results in functional components) in an
application. In our work, our focus is on varying only the quality attributes of a
SaaS instance, by changing architectural decisions at a tactic level granularity.
6 Discussion
Quality attributes exposed as scriptable resources enable variation in their values
for a running instance. As run-time quality attributes have an impact on the
operational cost of the instance, a tenant can exploit such resources to achieve
cost-efficiency by dynamically migrating between different offerings of quality
attributes on demand.
Modeling quality related concerns separately from the functional concerns
provides reusability of the quality concerns across multiple applications, and
modifiability of these concerns. For example, Tactics Model can be shared
between multiple applications. Similarly, in our MMS application, we can add
a new tactic such as Rollback without modifying the Application Model as the
support required by this new tactic (StateManager) is already exposed by the
Application Model. In our approach, all instances of the SaaS are generated
using a single architecture which makes the maintenance easier compared to the
approach where every instance is designed and built separately.
In our framework, tactics are modeled as first-class concepts in the Variability
Model. As tactics are standard validated tools to improve quality attributes, such
modeling helps in evaluating the variations in an instance architecture in terms
of their impact on the quality attributes.
The framework only considers variations in the architectural decisions of an
instance, and does not cover other decisions such as deployment-level decisions
(e.g., sizing of hardware resources, etc.), implementation-level details (e.g., code,
logging, etc.), or application functionality. We do not aim to replace the other
techniques but to augment their capability to reach more diverse levels of quality
attributes. In a holistic approach, variability at different levels (architecture,
deployment, implementation, features) can be combined.
Another limitation of our work is that we presented a methodological frame-
work where several steps of the framework like Identify Tactics, merging the
Application Model with the Tactic Model, etc. are not automated. In this paper,
we explored adding tactics at the top level of application architecture. How-
ever, variations may be desired at a lower-level architectural element, such as a
microservice. Our framework can be further extended to handle such scenarios.
Not all quality attributes can be modeled as scriptable resources. For exam-
ple, quality attributes not discernible at run-time such as modifiability cannot
be changed using our approach. The capability of our approach to change qual-
ity attributes depends on the number of tactics supported by the application
architecture for dynamic addition (in terms of application-specific tactic compo-
nents exposed by the application). Our approach has an impact on design and
development cost of the application. Re-using the tactics related concerns can
help in reducing such overhead.
References
1. Abu Matar, M., Mizouni, R., Alzahmi, S.: Towards software product lines based
cloud architectures. In: 2014 IEEE International Conference on Cloud Engineering
(IC2E), pp. 117–126, March 2014
2. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice, 3rd edn.
Addison-Wesley Professional, Boston (2012)
3. Benlian, A., Hess, T.: Opportunities and risks of software-as-a-service: Findings
from a survey of IT executives. Decis. Support Syst. 52(1), 232–246 (2011)
4. Bezemer, C.P., Zaidman, A.: Multi-tenant SaaS applications: maintenance dream
or nightmare? In: Proceedings of the Joint ERCIM Workshop on Software Evo-
lution (EVOL) and International Workshop on Principles of Software Evolution
(IWPSE), IWPSE-EVOL 2010, pp. 88–92. ACM, New York (2010)
5. Cai, H., Wang, N., Zhou, M.J.: A transparent approach of enabling SaaS multi-
tenancy in the cloud. In: 2010 6th World Congress on Services (SERVICES-1), pp.
40–47, July 2010
6. Chong, F., Carraro, G., Wolter, R.: Multi-tenant data architecture, June 2006.
http://msdn.microsoft.com/en-us/library/aa479086.aspx
7. Galster, M.: Architecting for variability in quality attributes of software systems.
In: Proceedings of the 2015 European Conference on Software Architecture Work-
shops, ECSAW 2015, pp. 23:1–23:4. ACM, New York (2015)
8. Haugen, O., et al.: Common variability language (CVL). OMG Submission (2012)
9. Horcas, J.M., Pinto, M., Fuentes, L.: Injecting quality attributes into software
architectures with the common variability language. In: Proceedings of the 17th
International ACM Sigsoft Symposium on Component-based Software Engineering,
CBSE 2014, pp. 35–44. ACM, New York (2014)
10. Jacob, B., Lanyon-Hogg, R., Nadgir, D.K., Yassin, A.F.: A practical guide to the
IBM autonomic computing toolkit, April 2004. http://www.redbooks.ibm.
com/redbooks/pdfs/sg246635.pdf
11. Koziolek, H.: The SPOSAD architectural style for multi-tenant software applications.
In: 2011 9th Working IEEE/IFIP Conference on Software Architecture (WICSA),
pp. 320–327, June 2011
12. La, H.J., Kim, S.D.: A systematic process for developing high quality SaaS cloud
services. In: Jaatun, M.G., Zhao, G., Rong, C. (eds.) CloudCom 2009. LNCS, vol.
5931, pp. 278–289. Springer, Heidelberg (2009). doi:10.1007/978-3-642-10665-1 25
13. Metzger, A., Pohl, K.: Software product line engineering and variability manage-
ment: achievements and challenges. In: Proceedings of the Future of Software
Engineering, FOSE 2014, pp. 70–84. ACM, New York (2014)
14. Mietzner, R., Unger, T., Titze, R., Leymann, F.: Combining different multi-tenancy
patterns in service-oriented applications. In: Proceedings of the 13th IEEE Inter-
national Conference on Enterprise Distributed Object Computing, EDOC 2009,
pp. 108–117. IEEE Press, Piscataway (2009)
15. Ruehl, S.T., Andelfinger, U.: Applying software product lines to create customiz-
able software-as-a-service applications. In: Proceedings of the 15th International
Software Product Line Conference, SPLC 2011, vol. 2, pp. 16:1–16:4. ACM,
New York (2011)
16. Scott, J., Kazman, R.: Realizing and refining architectural tactics: availability.
Technical report CMU/SEI-2009-TR-006, ESC-TR-2009-006 (2009)
17. Tekinerdogan, B., Ozturk, K., Dogru, A.: Modeling and reasoning about design
alternatives of software as a service architectures. In: 2011 9th Working IEEE/IFIP
Conference on Software Architecture (WICSA), pp. 312–319, June 2011
18. Puppet tool. http://puppetlabs.com/. Accessed Apr 2016
Software Architecture Reconstruction
Materializing Architecture Recovered
from Object-Oriented Source Code
in Component-Based Languages
1 Introduction
Component-Based Software Development (CBSD) has been recognized as a com-
petitive methodology for developing modular software systems [4]. It
enforces the dependencies between components to be explicit through provided
and required interfaces. Moreover, it provides coarse grained high-level archi-
tecture views for component-based (CB) applications. These views facilitate the
communication between software architects and programmers during develop-
ment, maintenance and evolution phases [11].
In contrast, object-oriented (OO) software consists of fine-grained entities with complex and
numerous implicit dependencies [7]. Usually, such systems do not have explicit archi-
tectures, or even have "drifted" ones. This adversely affects software com-
prehension and makes these systems hard to maintain and reuse [6].
Thus, migrating OO software to CB software should help it gain the benefits
of CBSD [9].
2 Problem Statement
To better illustrate our approach, which aims to transform OO code into CB code,
we first introduce in this section an example of a simple Java application. Second,
we present the expected architecture recovered by analyzing this application.
Finally, we illustrate the problem of OO code transformation guided by this
architecture.
Figure 3 shows the result of the architecture recovery step applied to our exam-
ple. The recovery step identifies four clusters (components), where each cluster
may contain one or several classes. We consider a component-based architecture
as a set of components connected via interfaces, where interfaces are identified
from boundary classes. For example, the component DisplayedInformation is con-
nected to the ContentProvider component through two interfaces. The first interface
declares the getCurrentTime and getContent methods, both placed in class Clock. The
second one declares the getContent method from class
Message.
(1) The component instance has three different releases (Fig. 4(a), (b) and (c)).
(2) The component instance could have many class instances of the same type.
For example, Fig. 4(c) has two class instances of type E (e1 and e2).
(3) The client needs references to the class instances that provide ser-
vices/methods to it. For example, the classes that implement the pro-
vided component services are A and B; the client thus needs to reference
instances of types A and B to obtain the required services. Instances of types
A and B are in turn responsible for communicating with other instances to
complete their services. Therefore, the classes that carry the component's
provided services are considered the only entry points to the component instance.
Now, we can simply create an instance of the component through its constructor,
using OO instantiation, and then initialize the instance using the appropriate initial-
izer. For example, an instance of the DisplayedInformation component is created
by its constructor using the new keyword. Listing 1.5 contrasts the refactored code
resulting from our approach (ComponentClient) with the original source code
(ClassClient).
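Listing 1.5 itself is not reproduced here, but its essence can be sketched as follows; apart from DisplayedInformation, getCurrentTime, and getContent, which appear in the running example, all names and bodies below are our own placeholders.

// Sketch of the refactored component facade and its client: the recovered
// component is instantiated as one unit and used only through its provided
// services, while the internal Clock and Message instances stay hidden.
class DisplayedInformation {
    private Object clock;    // stands in for the internal Clock instance
    private Object message;  // stands in for the internal Message instance

    void initialize() {      // assumed initializer creating and wiring internals
        clock = new Object();
        message = new Object();
    }
    String getCurrentTime() { return "12:00"; } // provided service (from Clock)
    String getContent()     { return "news";  } // provided service (from Message)
}

public class ComponentClient {
    public static void main(String[] args) {
        DisplayedInformation info = new DisplayedInformation(); // OO instantiation of the component
        info.initialize();                                      // set up the internal class instances
        System.out.println(info.getCurrentTime() + " " + info.getContent());
    }
}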
In this section we describe how our proposed solution is easily mapped onto
existing component models. We have chosen two well-known component models,
OSGi and SOFA, to illustrate the ease of the mapping.
OSGi is a set of specifications that define a component model for a set of Java
classes [23]. It enables component encapsulation by hiding component implementations
from other components behind services. The services are defined by standard
Java classes and interfaces that are registered into a service registry. A compo-
nent (bundle) can register and use services through the service registry.
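For readers less familiar with OSGi, the registry usage can be sketched as follows; the Greeter interface and its lambda implementation are purely illustrative and not part of our approach.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Illustrative bundle activator: publishes a service under an interface and
// looks it up again through the OSGi service registry.
public class Activator implements BundleActivator {
    public interface Greeter { String greet(String name); } // example service interface

    @Override
    public void start(BundleContext context) {
        // Publish an implementation under the Greeter interface (no properties).
        context.registerService(Greeter.class, name -> "Hello, " + name, null);

        // Any bundle can now discover and use the service via the registry.
        ServiceReference<Greeter> ref = context.getServiceReference(Greeter.class);
        if (ref != null) {
            System.out.println(context.getService(ref).greet("OSGi"));
            context.ungetService(ref);
        }
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically on stop.
    }
}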
5 Discussion
We can deploy recovered clusters of classes directly onto existing component
models without using our approach. Indeed, we can transform each class into a
component and then assemble the components that belong to the same cluster
into a composite component using the component composition property. However, to
compare our approach with the composite component approach, we first need
to study the component composition types and the component models that support
these types. Table 2 shows the selected object-based component models and the
composition types they support. There are two types of component composition:
the first is horizontal composition, and the second is vertical composi-
tion. Horizontal composition means that components can be bound through
their interfaces to construct component applications. The second type, vertical
composition, describes the mechanism of constructing a new component from
two or more other components. The new component is then called a composite
because it is itself made of more elementary components, called inter-
nal components. Internal components may be accessible or visible to clients
(delegation) or not (aggregation).
We can observe from Table 2 that five component models do
not support vertical composition at all (EJB, JavaBeans, OSGi, CCM and Pal-
ladio). Four of the models provide vertical aggregation composition and six models
support vertical delegation composition. However, vertical delegation composi-
tion is not appropriate because clients can access or view the internal components
(which violates component encapsulation). Consequently, the vertical aggregation com-
position could be replaced by our approach, but there are just two component
models that support it.
Table 2. Component composition types supported by object-based component models (excerpt).
Component models      EJB  Fractal  JavaBeans  COM  OpenCOM  OSGi  SOFA 2.0  CCM  COMPO  Palladio  PECOS
Vertical composition  No   Yes      No         Yes  Yes      No    Yes       No   Yes    No        Yes
Aggregation
Delegation
6 Related Work
Transforming OO applications to CB ones has two types of related work. The
first relates to CB architecture recovery, and the second relates to code trans-
formation from OO applications to component-oriented ones. Many works have
been proposed for recovering CB architectures from OO legacy code. A survey of
these works is presented in [5,17]. However, only a few works have proposed
a transformation from OO code to CB code.
The approach proposed in [14] transforms Java applications to
OSGi. It uses OO concepts and patterns to wrap clusters of classes into
components. However, it does not deal with component instantiation;
instances are still handled in OO terms. Another approach for transforming Java
applications into the JavaBeans framework is proposed in [7]. They developed
an approach that can generate components from OO programs using a class
relations graph. This method does not treat a component as a set of classes;
the authors assume that each class is transformed into a component. Therefore, it
cannot handle the clusters of classes recovered by architecture recovery methods.
One of the closest works to our approach is proposed by [24]. They used
dynamic analysis to define component interfaces and component instances. The
idea of their work consists of four steps. The first one is an extraction of object
call graphs. The second step is transforming the object call graph into a com-
ponent call graph. The third step identifies component interfaces based on the
connections between component instances. The last step deals with component
constructors and their parameters. In contrast to our work, they use dynamic
analysis and execution traces, assuming that use cases of the recov-
ered applications exist and fully cover all execution scenarios. Moreover, they
allow two component instances to have intersecting states, where a class
instance can be shared between two components, which violates the principle of
component encapsulation.
7 Conclusion
In this paper, we proposed an approach to transform recovered components from
object-oriented applications to be easily mapped to component-based models.
We refactored clusters of classes (recovered component) to behave as a single
unit of behavior to enable component instantiation. Our approach guarantees
component-based principles by resolving component encapsulation and compo-
nent composition using component instances. The encapsulation of components
is guaranteed by transforming the OO dependencies between recovered compo-
nents, as proposed in our previous work [20]. Moreover, both principles are
applied by refactoring a recovered component's source code to be instantiable,
where the provided services are consumed by the component instance through
its interfaces (component binding). We have shown that the source code resulting
from our approach can be easily projected onto object-based component mod-
els. We illustrated the mapping onto two well-known component models, OSGi
and SOFA. The results show that our approach facilitates the trans-
formation process from OO applications into CB ones. Moreover, it effectively
reduces the gap between recovered component architectures and their implemen-
tation source code.
References
1. Box, D.: Essential COM. Object Technology Series (1997)
2. Oracle E.E. Group: JSR 220: Enterprise JavaBeans, version 3.0 - EJB core contracts
and requirements, final release, May 2006
3. Shatnawi, A., Seriai, A., Sahraoui, H., Al-Shara, Z.: Mining software com-
ponents from object-oriented APIs. In: Schaefer, I., Stamelos, I. (eds.) ICSR
2015. LNCS, vol. 8919, pp. 330–347. Springer, Heidelberg (2014). doi:10.1007/
978-3-319-14130-5 23
4. Bertolino, A., et al.: An architecture-centric approach for producing quality sys-
tems. QoSA/SOQUA 3712, 21–37 (2005)
5. Birkmeier, D., Overhage, S.: On component identification approaches – classifica-
tion, state of the art, and comparison. In: Lewis, G.A., Poernomo, I., Hofmeister,
C. (eds.) CBSE 2009. LNCS, vol. 5582, pp. 1–18. Springer, Heidelberg (2009).
doi:10.1007/978-3-642-02414-6 1
6. Constantinou, E., et al.: Extracting reusable components: a semi-automated app-
roach for complex structures. Inf. Process. Lett. 115(3), 414–417 (2015)
7. Washizaki, H., et al.: A technique for automatic component extraction from object-
oriented programs by refactoring. Sci. Comput. Program. 56(1–2), 99–116 (2005)
8. Crnkovic, I., et al.: A classification framework for software component models.
IEEE Trans. Softw. Eng. 37(5), 593–615 (2011)
9. Lau, K., et al.: Software component models. IEEE Trans. Softw. Eng. 33(10),
709–724 (2007)
10. Clarke, M., Blair, G.S., Coulson, G., Parlavantzas, N.: An efficient component
model for the construction of adaptive middleware. In: Guerraoui, R. (ed.) Mid-
dleware 2001. LNCS, vol. 2218, pp. 160–178. Springer, Heidelberg (2001). doi:10.
1007/3-540-45518-3 9
11. Shaw, M., et al.: Software Architecture: Perspectives on an Emerging Discipline.
Prentice-Hall Inc., Upper Saddle River (1996)
12. Spacek, P., et al.: A component-based meta-level architecture and prototypical
implementation of a reflective component-based programming and modeling lan-
guage. In: Proceedings of the 17th International ACM Sigsoft Symposium on
Component-Based Software Engineering, CBSE 2014. ACM (2014)
13. Kazman, R., et al.: Requirements for integrating software architecture and reengineer-
ing models: CORUM II. In: Proceedings of Reverse Engineering (1998)
14. Allier, S., et al.: From object-oriented applications to component-oriented appli-
cations via component-oriented architecture. In: Software Architecture (WICSA)
(2011)
15. Becker, S., et al.: Model-based performance prediction with the palladio compo-
nent model. In: Proceedings of the 6th International Workshop on Software and
Performance, WOSP 2007. ACM (2007)
16. Chardigny, S., et al.: Extraction of component-based architecture from object-
oriented systems. In: Software Architecture, WICSA (2008)
17. Ducasse, S., et al.: Software architecture reconstruction: a process-oriented taxon-
omy. IEEE Trans. Softw. Eng. 35(4), 573–591 (2009)
18. Kebir, S., et al.: Quality-centric approach for software component identification
from object-oriented code. In: Software Architecture (WICSA) and European Con-
ference on Software Architecture (ECSA) (2012)
19. Bures, T., et al.: SOFA 2.0: balancing advanced features in a hierarchical component
model. In: Software Engineering Research, Management and Applications (2006)
20. Alshara, Z., et al.: Migrating large object-oriented applications into component-
based ones: instantiation and inheritance transformation. In: Proceedings of the
2015 ACM SIGPLAN International Conference on Generative Programming: Con-
cepts and Experiences, GPCE 2015. ACM (2015)
21. Sun Microsystems: JavaBeans specification (1997)
Using Hypergraph Clustering for Software Architecture Reconstruction
1 Introduction
Modularity is one of the key properties of software design [16]. Large-scale
software systems, especially, need to have a modular structure. Otherwise, the main-
tainability and evolvability of the system suffer. A modular structure can be
attained by decomposing the system into cohesive units that are loosely cou-
pled. Software architecture design [3,22] defines the gross-level decomposition of
2 Background
1 www.oracle.com.
2.2 Hypergraphs
A hypergraph H = (V, N ) is defined as a set of vertices V and a set of nets
(hyperedges) N among those vertices. A net n ∈ N is a subset of vertices and
the vertices in n are called its pins. The number of pins of a net is called the
size of it, and the degree of a vertex is equal to the number of nets it belongs to.
We use pins[n] and nets[v] to represent the pin set of a net n, and the set of nets
containing a vertex v, respectively. In this work, we assume unit weights for all
nets and vertices.
A K-way partition of a hypergraph H is a partition of its vertex set, which
is denoted as Π = {V1, V2, . . . , VK}, where
– parts are pairwise disjoint, i.e., Vk ∩ Vl = ∅ for all 1 ≤ k < l ≤ K,
– each Vk is a nonempty subset of V, i.e., Vk ⊆ V and Vk ≠ ∅ for 1 ≤ k ≤ K,
– the union of the K parts is equal to V, i.e., V1 ∪ V2 ∪ · · · ∪ VK = V.
In our modeling approach, we represent each PL/SQL procedure as a vertex
and each database table as a net. A net has several vertices as its pins if the
corresponding procedures access the database table represented by the net. We
convert this model to a weighted graph model and apply modularity clustering
as explained in the following subsection.
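A compact sketch of this modeling, under our own simplifying assumptions (the table and procedure names are illustrative, and counting co-accessed tables is only one plausible way to weight the derived graph), is shown below.

import java.util.*;

// Sketch: build the hypergraph (tables as nets, procedures as pins) and derive
// a weighted graph between procedures for subsequent modularity clustering.
public class HypergraphSketch {
    public static void main(String[] args) {
        // Net -> pins: which procedures access which database table (toy data).
        Map<String, Set<String>> pins = new HashMap<>();
        pins.put("T1", new HashSet<>(Arrays.asList("P1", "P15", "P47")));
        pins.put("T2", new HashSet<>(Arrays.asList("P1", "P26", "P47")));
        pins.put("T67", new HashSet<>(Collections.singletonList("P8")));

        // Edge weight (u, v) = number of nets that contain both u and v.
        Map<String, Integer> edgeWeights = new HashMap<>();
        for (Set<String> net : pins.values()) {
            List<String> ps = new ArrayList<>(net);
            Collections.sort(ps);
            for (int i = 0; i < ps.size(); i++)
                for (int j = i + 1; j < ps.size(); j++)
                    edgeWeights.merge(ps.get(i) + "-" + ps.get(j), 1, Integer::sum);
        }
        System.out.println(edgeWeights); // e.g. {P1-P15=1, P1-P47=2, P15-P47=1, ...}
        // A modularity clustering algorithm would then partition this weighted graph.
    }
}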
3 Related Work
There exist many techniques [8] for SAR. Several of them use DSM for reasoning
about architectural dependencies [2,18–20]. Some focus on analyzing the runtime
behavior for reconstructing execution scenarios [4] and behavioral views [12].
There are also tools that construct both structural and behavioral views [10,21]
which are mainly developed for reverse engineering C/C++ or Java programs.
Some tools are language independent; they take abstract inputs like module
dependency graphs [13] or execution traces [4]. However, hypergraphs have not
been utilized for SAR to the best of our knowledge.
There exist only a few studies [7,11] that focus on reverse engineering
PL/SQL programs. They mainly aim at deriving business rules [7] and data
flow graphs [11]. Recently, we proposed an approach for clustering PL/SQL pro-
cedures [2]. The actual coupling among these procedures can only be revealed
based on their dependencies on database elements. In our previous work, we
employed DSM [9] for representing these dependencies. In this work, we employ
hypergraphs, which can more naturally model such dependencies and lead to
more accurate results.
Table 1. A sample list of nets and the set of vertices they interconnect (pins) in the
generated hypergraph for the CRM case study.
Net Vertices
T1 P119, P101, P1, P47, P15, P48
T2 P119, P57, P47, P26, P1
... ...
T11 P27, P26, P7, P1, P117, P119, P115, P111, P110, P109, ...
... ...
T67 P8
References
1. Oracle Database Online Documentation 11g Release developing and using stored
procedures. http://docs.oracle.com/. Accessed Mar 2016
2. Altinisik, M., Sozer, H.: Automated procedure clustering for reverse engineering
PL/SQL programs. In: Proceedings of the 31st ACM Symposium on Applied Com-
puting, pp. 1440–1445 (2016)
3. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice, 3rd edn.
Addison-Wesley, Boston (2003)
4. Callo, T., America, P., Avgeriou, P.: A top-down approach to construct execution
views of a large software-intensive system. J. Softw. Evol. Process 25(3), 233–260
(2013)
5. Çatalyürek, Ü.V., Kaya, K., Langguth, J., Uçar, B.: A partitioning-based divisive
clustering technique for maximizing the modularity. In: Proceedings of the 10th
DIMACS Implementation Challenge Workshop - Graph Partitioning and Graph
Clustering, pp. 171–186 (2012)
6. Çatalyürek, U., Kaya, K., Langguth, J., Uçar, B.: A partitioning-based divisive
clustering technique for maximizing the modularity. In: Graph Partitioning and
Graph Clustering. Contemporary Mathematics, vol. 588. American Mathematical
Society (2013)
7. Chaparro, O., Aponte, J., Ortega, F., Marcus, A.: Towards the automatic extrac-
tion of structural business rules from legacy databases. In: Proceedings of the 19th
Working Conference on Reverse Engineering, pp. 479–488 (2012)
8. Ducasse, S., Pollet, D.: Software architecture reconstruction: a process-oriented
taxonomy. IEEE Trans. Soft. Eng. 35(4), 573–591 (2009)
9. Eppinger, S., Browning, T.: Design Structure Matrix Methods and Applications.
MIT Press, Cambridge (2012)
10. Guo, G., Atlee, J., Kazman, R.: A software architecture reconstruction method.
In: Proceedings of the 1st Working Conference on Software Architecture, pp. 15–34
(1999)
11. Habringer, M., Moser, M., Pichler, J.: Reverse engineering PL/SQL legacy code:
an experience report. In: Proceedings of the IEEE International Conference on
Software Maintenance and Evolution, pp. 553–556 (2014)
12. Li, Q., et al.: Architecture recovery and abstraction from the perspective of
processes. In: WCRE, pp. 57–66 (2005)
13. Mitchell, B., Mancoridis, S.: On the automatic modularization of software systems
using the bunch tool. IEEE Trans. Soft. Eng. 32(3), 193–208 (2006)
14. Murphy, G., Notkin, D., Sullivan, K.: Software reflexion models: Bridging the gap
between design and implementation. IEEE Trans. Soft. Eng. 27(4), 364–380 (2001)
15. Clements, P., et al.: Documenting Software Architectures. Addison-Wesley, Boston
(2002)
16. Parnas, D.L.: On the criteria to be used in decomposing systems into modules.
Commun. ACM 15(12), 1053–1058 (1972)
17. Eick, S., et al.: Does code decay? assessing the evidence from change management
data. IEEE Trans. Soft. Eng. 27(1), 1–12 (2001)
18. Sangal, N., Jordan, E., Sinha, V., Jackson, D.: Using dependency models to manage
complex software architecture. In: Proceedings of the 20th Conference on Object-
Oriented Programming, Systems, Languages and Applications, pp. 167–176 (2005)
19. Sangwan, R., Neill, C.: Characterizing essential and incidental complexity in soft-
ware architectures. In: Proceedings of the 3rd European Conference on Software
Architecture, pp. 265–268 (2009)
20. Sullivan, K., Cai, Y., Hallen, B., Griswold, W.: The structure and value of modu-
larity in software design. In: Proceedings of the 8th European Software Engineering
Conference, pp. 99–108 (2001)
21. Sun, C., Zhou, J., Cao, J., Jin, M., Liu, C., Shen, Y.: ReArchJBs: a tool for auto-
mated software architecture recovery of javabeans-based applications. In: Proceed-
ings of the 16th Australian Software Engineering Conference, pp. 270–280 (2005)
22. Taylor, R., Medvidovic, N., Dashofy, E.: Software Architecture: Foundations, The-
ory, and Practice. Wiley, Hoboken (2009)
SeaClouds: An Open Reference Architecture
for Multi-cloud Governance
This work has been partly supported by the EU-FP7-ICT-610531 SeaClouds project.
1 https://jclouds.apache.org.
2 https://www.docker.com.
3 http://www.seaclouds-project.eu.
Figure 1 presents the reference architecture and the design of the SeaClouds
platform. The platform features a GUI used by two main stakeholders: Design-
ers and Deployment Managers, and it considers Cloud Providers offering
cloud resources. From the standpoint of the SeaClouds platform functionalities, we can
identify five major components in the architecture, plus a RESTful harmonized
and unified SeaClouds API layer used for the deployment, management and
monitoring of simple cloud-based applications through different and heteroge-
neous cloud providers, exploiting a Dashboard.
interchangeably using IaaS and PaaS services, which can be used for their deployment
using (an extended version of) Apache Brooklyn.4
The SeaClouds project has provided a solution which can be downloaded from
the GitHub repository5. The consortium identified Apache Brooklyn as the tool to
deploy SeaClouds. To ensure a good level of quality assurance, a free Continuous
Integration (CI) and Continuous Distribution pipeline was set up on travis-ci.org. The SeaClouds
platform is built using the Java language, distributing the artefacts generated from
the source code (jar files, war files, etc.) to a well-known public managed Maven
repository hosted by Sonatype (free for open source projects).
The SeaClouds solution has been evaluated in several examples, with the
main focus on two real use cases: i) Atos Software application and ii) Nuro
Gaming application6, both consisting of several components (servers, databases)
and distributed across heterogeneous cloud providers (IaaS and PaaS).
We have presented the SeaClouds platform, which provides an open source frame-
work to address the problem of deploying, managing and reconfiguring complex
applications over multiple clouds. The SeaClouds solution has addressed the
main functionalities presented in the previous section. As future work, the consor-
tium has agreed to create the SeaClouds Alliance in order to continue working on
some aspects, such as the improvement of the unification of providers supported
4 Apache Brooklyn: https://brooklyn.apache.org/.
5 https://github.com/SeaCloudsEU/SeaCloudsPlatform.
6 Deliverables 6.3.3 and D6.4.3: http://www.seaclouds-project.eu/deliverables.
by the deployment, and the reconfiguration covering replanning actions and data
synchronization in databases. Also, SeaClouds is an open source project, so it is
open to receiving further contributions and extensions from the community.
Author Index