Article
Developing Microservice-Based Applications Using the Silvera
Domain-Specific Language
Alen Suljkanović 1,*, Branko Milosavljević 2, Vladimir Inđić 2 and Igor Dejanović 2,*
1 Typhoon HIL, 21000 Novi Sad, Serbia
2 Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia; [email protected] (B.M.);
[email protected] (V.I.)
* Correspondence: [email protected] (A.S.); [email protected] (I.D.)
Abstract: Microservice Architecture (MSA) is a rising trend in software architecture design. Applications based on MSA are distributed applications whose components are microservices. MSA has already been adopted with great success by numerous companies, and a significant number of published papers discuss its advantages. However, the results of recent studies show that there are several important challenges in the adoption of microservices, such as finding the right decomposition approach, heterogeneous technology stacks, lack of relevant skills, and out-of-date documentation. In this paper, we present Silvera, a Domain-Specific Language (DSL), and a compiler for accelerating the development of microservices. Silvera is a declarative language that allows users to model the architecture of microservice-based systems. It is designed so that it can be used by both inexperienced and experienced developers. The following characteristics distinguish Silvera from similar tools: (i) it is a lightweight and editor-agnostic language, (ii) it is built with heterogeneity in mind, (iii) it uses microservice-tailored metrics to evaluate the architecture of the designed system, and (iv) it automatically generates the documentation. Silvera’s retargetable compiler transforms models into runnable code and produces the documentation for each microservice in the model. The compiler can produce code for any programming language or framework since code generators are registered as plugins. We present a case study that illustrates the use of Silvera and also discuss some current limitations and development directions. To evaluate Silvera, we conducted a survey based on A Framework for Qualitative Assessment of DSLs (FQAD), where we focused on the following DSL characteristics: functional suitability, usability, reliability, productivity, extendability, and expressiveness. Overall, the survey results show that Silvera satisfies these characteristics.

Keywords: domain-specific languages; microservice architecture; model-driven engineering; software architecture
of the specification to runnable code is performed by the compiler. The compiler supports
an arbitrary number of code generators where each code generator produces code for a
corresponding programming language and/or framework.
To better explain the language usage, we build a small MSA application in Section 4
that serves as a short demonstration of Silvera’s capabilities. Even though it is small, the
application is complete and demonstrates the implementation of almost all design patterns
supported in Silvera. The application model contains definitions of microservices, their
respective APIs, domain models, and a description of inter-service communication.
Furthermore, to evaluate Silvera’s quality characteristics, we conducted a survey based
on FQAD where we focused on the following DSL characteristics: functional suitability,
usability, reliability, productivity, extendability, and expressiveness. The results of the study
and its limitations are discussed in Sections 5–7.
Silvera is a free and open-source project, and it is hosted on GitHub (Silvera project—
https://github.com/alensuljkanovic/silvera (accessed on 1 June 2022)).
The remainder of the paper is organized as follows. Section 2 presents related work.
Section 3 presents the implemented language, whereas Section 4 shows how Silvera can
be used during the implementation of microservice-based applications. Section 5 presents
results of the evaluation of the Silvera language. Threats to validity are discussed in Section 6.
Section 7 discusses the limitations of the current approach and implementation. Section 8
concludes the paper and discusses future improvements of Silvera.
2. Related Work
2.1. Comparison of MSA with Other Architectural Styles
The terms architecture and architecture description are introduced by the ISO/IEC/IEEE
42010:2011 standard [14]. An architecture description expresses an architecture of a system-of-
interest [14]. Architecture encompasses fundamental concepts or properties of a system in its
environment embodied in its elements and relationships and in the principles of its design
and evolution [14]. Architecture descriptions are used by software teams to improve commu-
nication and cooperation among stakeholders, enabling them to work in a mutually understood
and coherent manner. An architecture description comprises architecture views and models.
Gorski [15] presents the software architecture model 1+5 that encompasses various archi-
tectural views for modeling business processes, describing use cases and their realizations,
interaction and contract agreements between services, and deployment.
MSA is a service-based architecture, just like Service-Oriented Architecture (SOA). Even
though MSA and SOA represent very different architectural styles, they share many charac-
teristics. Services are a primary architecture component used to implement and perform
business and nonbusiness functionality in both MSA and SOA [16]. Both MSA and SOA are
generally distributed architectures and also lend themselves to more loosely coupled and
modular applications [16]. Furthermore, the implementation of a service is hidden behind
its publicly available API. Due to this, the implementation of a service can be entirely
changed without affecting the rest of the system as long as the API changes are backward
compatible [16].
Although MSA and SOA both rely on services as the main architecture component,
they vary greatly in terms of service characteristics [16]. Differences are shown in service
taxonomy (i.e., how services are classified within an architecture), service ownership, and
service granularity.
As shown in Table 1, microservices have limited service taxonomy. There are only two
service types: functional services and infrastructure services. Functional services implement
specific business operations or functions, whereas infrastructure services implement non-
functional tasks such as authentication, authorization, auditing, logging, and monitoring.
In MSA, infrastructure services are not exposed to the outside world and are only available
internally to other services [16]. On the other hand, in SOA, there are four types of services.
Business services are abstract, coarse-grained services that define core business operations.
These services are devoid of implementation details and usually only contain information
about a service name and expected inputs and outputs. Enterprise services are concrete,
coarse-grained services that implement the functionality defined by the business services.
These services are generalized and shared across the organization [16]. Enterprise services
can contain business functionality, but usually, they rely on application and infrastructure
services. Application services are fine-grained services that are bound to a specific application
context. They provide functionality not found in the enterprise services. Infrastructure
services implement the same tasks as in MSA. In addition, there is a significant difference
in service ownership as development teams are responsible for full support and devel-
opment of a service throughout its life cycle (also known as the “you build it, you run it”
principle) [17]. Microservices are small, fine-grained services (hence “micro”), whereas
services in SOA range in size from very small to large enterprise services.
MSA and SOA also differ with regard to data sharing and service coordination. In the
case of data sharing, MSA promotes a style where microservices share as little data as
possible, whereas SOA promotes the diametrically opposed concept of sharing as much
data as possible [16]. For this reason, a microservice and its associated data represent a
single unit with minimal dependencies, which facilitates maintenance and deployment
of the microservice (i.e., microservice can be changed and redeployed without affecting
the rest of the system). Multiple services can be composed together as a new service. This
process is known as service composition. Service composition implies the existence of service
coordination. For service coordination, MSA focuses on service choreography, whereas
SOA relies on both service choreography and service orchestration [16]. The term service
orchestration refers to the coordination of multiple services through a centralized mediator,
whereas service choreography lacks the mediator [16]. Implementation of service coordination
is a non-trivial and error-prone task. However, there are several tools that allow developers
to automate this process [18,19]. As a result of this approach, microservices are responsible
for interaction with others [20]. Of course, one can still utilize orchestration; however,
this is not a typical approach [20]. Due to the mentioned differences, systems built on
SOA tend to be slower than microservices and require more time and effort to develop,
test, deploy, and maintain [16]. In addition, a service composition should be efficient with
minimal execution time and energy consumption. An approach for execution-time- and
energy-efficient service composition is presented by Li et al. in [21]. The order in which service
tasks are executed is determined by service scheduling [22]. Resource allocation is another
important process that affects the performance. The goal of resource allocation is to select
resources for component instances and determine the number of component instances
needed to meet performance and reliability requirements [22]. Usually, microservices have
more components with less functionality that require fewer resources.
MSA also differs from Monolithic Architecture (MA). In MA, components rely on the
sharing of resources of the same machine (memory, databases, or files) and are therefore
not independently executable [23]. Monolithic applications are usually internally split into
multiple services and/or components, but they are all deployed as a single solution. Unless
the application becomes too big, monolithic applications are easier to develop [24].
However, large monolithic applications suffer from the following problems [17]: they are
difficult to maintain and evolve, it is hard to add or update libraries due to dependency hell,
change in only one module requires rebooting the whole application, etc. Microservices
succeed in mitigating these problems and are gaining in popularity due to the follow-
ing characteristics:
• Size: Because microservices implement a limited amount of functionalities, their code
bases are small which limits the scope of a bug [17]. The small size also provides
benefits in terms of service maintainability and extendability. A small service can be
easily modified or rebuilt from scratch with limited resources and in limited time [25].
• Independence: Each microservice in MSA is operationally independent of others and
communicates with other microservices through their published interfaces [25]. This
has several benefits: (i) microservices are independently deployed, (ii) the new version
of microservice can co-exist with the old version, and the microservices that use the old
microservice can be gradually modified to use the new microservice [17], (iii) changes
in one microservice do not require the reboot of the whole system, and (iv) scaling
MSA implies deploying or disposing of instances of microservices with respect to their
load [26].
Table 2. Microservice design patterns. Descriptions of all presented design patterns are given
by Richardson [27].
External API patterns describe exactly how external clients will communicate with
microservices. An API Gateway provides a single entry point for all external clients. To avoid
a single point of failure, multiple instances of an API gateway are usually deployed [28].
Additionally, a separate API gateway can be provided for each kind of client (web app,
mobile, etc.). This variation of the API Gateway pattern is known as the Back-End for Front-End pattern.
Microservices are built to fail [30]. The problem of preventing the failure of one
microservice from cascading to other microservices is addressed by a Reliability pattern
named Circuit Breaker.
studies suggest that the use of DSLs increases flexibility, productivity, reliability, and
usability [35,36].
The study performed by Johanson and Hasselbring [37] shows that domain experts
achieve significantly higher accuracy and spend less time solving tasks when using their
DSL instead of the comparable GPL-based solution. A study performed by Kosar et al. [38]
shows that participants are more effective and efficient with DSLs than with GPLs, es-
pecially in domains where participants were less experienced. The study also suggests
that in cases where a DSL is available, developers will perform better when using the
DSL than using a GPL. In this study, we did not explicitly compare our DSL with GPLs,
but we showed that the use of DSLs gives good results compared to the direct use of the
technologies with which respondents were experienced.
JHipster is a tool for generating, developing, and deploying web applications and
MSAs. MSAs in JHipster are modeled by using the JHipster domain language (JDL). Mod-
eling of microservices in JDL can be performed quickly due to its user-friendly syntax. JDL
specifications are transformed automatically into runnable Java applications by the JHipster
code generator. JHipster can be used online or installed locally with NPM (NPM—https:
//www.npmjs.com/ (accessed on 18 January 2022)), Yarn (Yarn—https://yarnpkg.com/
(accessed on 18 January 2022)), or Docker (Docker–https://www.docker.com/ (accessed on
18 January 2022)). However, the online version is quite limited by default, and to achieve
the full experience, users must grant access to either their GitHub or GitLab accounts. Installing
JHipster locally can be troublesome, but it is the preferred way.
LEMMA (Language Ecosystem for Modeling Microservice Architecture) is a set of
Eclipse-based modeling languages and model transformations for developing MSA [43].
These languages provide different modeling viewpoints for different roles in a microservice
development team [45]. By introducing explicit modeling viewpoints, LEMMA decom-
poses the system into smaller, more specialized parts. Because of this, each role is presented
only with the information relevant for that role [45]. Just like AjiL, LEMMA is also based
on the Eclipse ecosystem.
Jolie is a service-oriented programming language [44]. A service, in Jolie, is composed
of two parts: behavior and deployment. A behavior part defines the implementation of the
service’s functionalities, whereas the deployment part defines the necessary information
for establishing communication links between services [44]. The Jolie interpreter is imple-
mented in Java, and it comes with a Java API to interact with it.
Because TheArchitect and MicroBuilder are not publicly available, unlike the rest
of the presented tools (which are all open-source projects), we were unable to study them in
greater detail.
The comparison of available features in Silvera, MAGMA, AjiL, JHipster, LEMMA,
and Jolie is shown in Table 3, whereas a comparison of implemented microservice design
patterns is shown in Table 4. Table 4 contains only patterns implemented by at least one of
the tools.
Silvera is a lightweight language. Because of this, it can be used in any text editor,
whereas AjiL is tightly coupled with its GUI editor. The simple textual notation also enables
easier collaboration through version control systems. Silvera shares this characteristic
with JDL.
All tools presented in this section, except Jolie, implement the API Gateway pattern
from the External API group of patterns. Similarly, these tools also implement the Circuit
Breaker pattern from the Reliability group of patterns. In Jolie, these patterns are not imple-
mented as part of the language. However, Jolie offers composition primitives that can be
used for manual implementation of these patterns.
Unlike other tools presented in this section, both Silvera and LEMMA are built with
heterogeneity in mind. LEMMA provides a Technology Modeling Language (TML), where
users model custom technology aspects. The disadvantage of TML is that it does not
support versioning, meaning that only one version of the Technology Language is supported.
On the other hand, Silvera supports an arbitrary number of programming languages and
their versions (see Section 3.4). Various implementations of microservices, API gateways,
service registries, and message brokers can also be supported easily. This allows developers
to use the best tool for the job and also supports experimentation.
Silvera uses microservice-tailored metrics to evaluate the architecture of the designed
system. Besides TheArchitect, no other tool presented in this section supports architecture
evaluation. However, metrics used by TheArchitect are mainly derived from Object-Oriented
design and, as such, are not fully applicable to microservice-based systems [46]. Addition-
ally, in Silvera, users can easily provide custom evaluation functions (see Section 3.4.3).
Table 3. Comparison of available features in Silvera, MAGMA, AjiL, JHipster, LEMMA, and Jolie.

Tool       Textual Notation   GUI Editor   Arch. Evaluation   Database Support                    Target Language
Silvera    SilveraDSL         no           yes                any                                 any
MAGMA      Java               yes          no                 MySQL                               Java
AjiL       XML                yes          no                 MySQL, MongoDB                      Java
JHipster   JDL                yes          no                 MySQL, MongoDB, PostgreSQL, etc.    Java
LEMMA      Several DSLs       yes          no                 MariaDB, MongoDB                    Java, Python
Jolie      Jolie              yes          no                 All supported by JDBC driver        Java, Javascript
Table 4. Comparison of microservice design patterns implemented in Silvera with other tools.
Another characteristic that distinguishes Silvera from the rest of the presented tools
is the simple installation procedure. Silvera comes with a small number of dependencies
and is available at the Python Package Index (Python Package Index—https://pypi.org/
(accessed on 18 January 2022)). To install Silvera, use the pip install silvera command (pip
is the package installer for Python, available at https://pypi.org/project/pip/ (accessed
on 18 January 2022)).
3. Silvera
In this section, we give an overview of the Silvera language.
Silvera is a declarative language developed for the domain of microservice software
architecture development. We call these types of DSL “technical DSLs” or “horizontal
DSLs”. The language is designed in a way that directly implements design patterns related
to the domain of MSA.
Silvera was developed in four successive phases: Analysis, Decision, Design, and Implementation.
Analysis phase. During the analysis phase, we used the analysis patterns from Table 5.
Most of the domain knowledge was gathered by analyzing the available literature and the
code and documentation of available systems (the Extract from Code pattern). We analyzed the
domain in an informal way (the Informal pattern), and we gathered the literature mostly via
the snowballing approach. We gathered a body of relevant papers, searched the papers
that were in the reference list of these starting papers (Backward Snowballing [49]), and the
papers that cite these starting papers (Forward Snowballing [49]). The output of this phase
consisted of domain-specific terminology and semantics in more or less abstract form [48].
Decision phase. During this phase, we used the decision patterns from Table 6.
The decision to create a new DSL stemmed from the fact that we wanted to auto-
matically generate the infrastructure code (the Task Automation pattern) and also to be able
to perform domain-specific analysis and evaluation of the designed microservice-based
system (the AVOPT pattern).
Design phase. This phase can be characterized along two orthogonal dimensions: the
relationship between the DSL and existing languages and the formal nature of the design
description [48]. During this phase, we used patterns shown in Table 7.
The easiest way to design a DSL is to base it on the existing language. DSLs built
this way are called Internal DSLs. The advantage of this approach is that no new language
infrastructure has to be built, but the downside is the limited flexibility since a DSL has to
be expressed by using concepts of the host language [50]. Another approach is to create a
so-called External DSL. An external DSL is a completely independent language built from
scratch. As external DSLs are independent of any other language, they need their own
infrastructures such as parsers, linkers, compilers, or interpreters [50]. Silvera is an external
DSL with no relationship to any existing language (the Language Invention pattern).
Mernik et al. [48] distinguish between informal and formal designs. In an informal
design, the language specification is usually in some form of natural language [48]. In a
formal design, a language’s syntax is usually specified via regular expressions and grammars,
whereas its semantics are specified via attribute grammars, rewrite systems, and abstract
state machines [48]. A formal design has several benefits [48]: (i) it brings problems to
light before the DSL is actually implemented, and (ii) it can be implemented automatically
by language development tools, which significantly reduces the implementation effort.
Silvera’s syntax specification is defined in the form of a PEG (Parsing Expression Grammar)
(the Formal pattern). However, Silvera’s semantic specification is defined by the code
generators (the Formal pattern).
Implementation phase. In this phase, we considered multiple implementation patterns,
as shown in Table 8.
We chose the Compiler/Application Generator pattern over other patterns, such as Interpreter,
Embedding, Extensible Compiler/Interpreter, and the Commercial-Off-The-Shelf approach. A disad-
vantage of this approach is the higher cost of building the compiler from scratch. However,
this approach also yields advantages such as closer syntax to the notation used by domain
experts, good error reporting [48], and minimized user effort to write correct programs [51].
The Compiler/Application Generator and Interpreter patterns offer similar advantages
and disadvantages [48], but we chose the former due to execution speed.
<module_path>.<module_name>.<declaration_name>    (1)
The API (APIDecl) declaration consists of function definitions and the definition of
service-specific objects used for modeling microservice business entities. Each function
definition consists of a function name, function parameters, return type, and annotation
(optional), whereas the definition of the service-specific object (TypeDef ) is given by its
name and one or more fields (TypeField). Every field has its name and a data type, but can
also have a special ID attribute, which is used to identify fields during serialization and
deserialization of messages in a binary format and to ensure backward compatibility for
newer versions of the API. For that reason, once assigned, an ID attribute should not
be changed.
In Silvera, each microservice has a particular communication style. Communication
style defines a protocol used to send and receive messages. Currently, it is possible to choose
between RPC-based and messaging-based communication styles. The format of messages
that can be sent and received from a microservice is defined by its API. Microservices
that use RPC to call methods from other microservices must define those microservices as
dependencies. RPC-based communication is synchronous by default, but Silvera supports
asynchronous RPC communication as well.
Since microservices can fail at any time, MSAs must be designed to cope with fail-
ures [1]. The failure of one microservice should not take down the whole system. One of
the design patterns that helps in mitigating such problems is the Circuit Breaker pattern
(see Section 2.2), which is directly supported in Silvera. Failure recovery must be defined
for every API function that the start microservice calls on the end microservice. Table 9
shows failure recovery strategies supported by Silvera.
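To illustrate the behavior that the Circuit Breaker pattern provides, the following short Python sketch shows a generic circuit breaker wrapping a remote call; the class, threshold, and fallback names are illustrative and are not part of the code generated by Silvera.

import time

class CircuitBreaker:
    """Minimal circuit breaker: trips to OPEN after max_failures consecutive
    failures and allows a retry after reset_timeout seconds."""

    def __init__(self, call, fallback, max_failures=3, reset_timeout=30.0):
        self.call = call                  # remote API function being protected
        self.fallback = fallback          # failure-recovery strategy
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None             # None means the circuit is CLOSED

    def __call__(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                return self.fallback(*args, **kwargs)   # OPEN: fail fast
            self.opened_at = None                       # HALF-OPEN: allow one trial call
        try:
            result = self.call(*args, **kwargs)
            self.failures = 0                           # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()            # trip the circuit
            return self.fallback(*args, **kwargs)

# Hypothetical usage: protect a call to another microservice and fall back to a cached value.
# user_email = CircuitBreaker(call=user_client.user_email, fallback=cached_user_email)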
Message channels are logical addresses in the messaging system; how they are actually
implemented depends on the messaging system product and its implementation.
In MSA, client applications usually need to collect data from more than one mi-
croservice. If the communication is direct, the client needs to communicate with multiple
microservices to collect the data. Such communication is inefficient and increases the
coupling between the client and the microservices [27]. An alternative is to implement
an API Gateway. An API gateway represents a single entry point for all clients, and it can
handle requests in one of two ways: (a) requests are routed to the appropriate service,
and (b) requests are fanned out to multiple microservices. In Silvera, an API gateway is a
special service implemented in the form of an APIGateway object. Its attribute gateway_for
determines which microservices will be put behind the gateway. In the current implemen-
tation of Silvera, the API gateway only serves as a router of requests. In the future, we plan
to expand on this implementation by providing security features, implementing the API
Composition [27] pattern and adding the option to restrict services’ APIs to a certain set of
operations. The API Composition pattern uses an API composer, or aggregator, to implement
a query by invoking individual microservices that own the data and then combine the
results by performing an in-memory join [27].
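For illustration, the following Python sketch shows the in-memory join performed by an API composer; the service clients, method names, and field names are hypothetical and serve only to convey the idea.

def get_order_details(order_id, order_client, user_client, meal_client):
    """API composer: query the microservices that own the data and join the
    partial results in memory into a single response."""
    order = order_client.read_order(order_id)          # e.g. {"userId": ..., "items": [...]}
    user = user_client.read_user(order["userId"])      # owner of the order
    meals = [meal_client.read_meal(item["mealName"])   # one call per ordered item
             for item in order["items"]]
    return {"order": order, "user": user, "meals": meals}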
In Silvera, each microservice can define its specific deployment requirements. De-
ployment is managed by the Deployment object, with the following attributes: version, url,
port, lang, packaging, host, replicas, and restart_policy. Attribute version defines a version of
a microservice. Attributes url and port define a location of a microservice on a computer
network. Attributes lang and packaging define a programming language in which the
microservice will be implemented and in which form it will be used (source or binary). At-
tribute host defines whether a microservice will run on a physical host, virtual machine, or
inside a container, whereas attribute replicas defines a number of instances of a microservice.
Finally, the attribute restart_policy defines when a microservice should be restarted (after
failure, always, etc.). This attribute can be currently used only if the host is a container.
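The attributes of the Deployment object can be summarized with the following conceptual Python data class; the attribute names and the default URL follow the description in the text, while the remaining default values are illustrative rather than Silvera’s exact defaults.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Deployment:
    """Conceptual view of a microservice's deployment requirements."""
    version: str                          # version of the microservice
    url: str = "http://localhost"        # network location (default applied when omitted)
    port: int = 8080
    lang: str = "java"                    # target programming language
    packaging: str = "jar"                # form in which the service is used: source or binary
    host: str = "container"               # physical host, virtual machine, or container
    replicas: int = 1                     # number of instances to deploy
    restart_policy: Optional[str] = None  # currently applicable only when host is a container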
A service registry is a special service that contains information about a number of
instances and locations of each microservice in the system. In Silvera, the service reg-
istry is implemented in the form of a ServiceRegistryDecl object. This object contains the
following attributes: tool, which defines which tool will be used as the service registry, and
client_mode, which defines whether the service registry can be registered within another
service registry. Since it is a special type of microservice, a service registry can also be
deployed in various ways by using the Deployment object. The microservice is registered
within the service registry by providing a reference to a ServiceRegistryDecl object to its
service_registry attribute.
In Silvera, microservices can draw configuration files from an external configuration
server. The configuration server is implemented in the form of a ConfigServerDecl object.
1 ServiceDecl:
2 ’service’ name=ID ’{’
3 (’config_server’ ’=’ config_server=FQN)?
4 (’service_registry’ ’=’ service_registry=FQN)?
5 (deployment=Deployment)?
6 ’communication_style’ ’=’ comm_style=CommunicationStyle
7 (api=APIDecl)?
8 ’}’
9 ;
The ServiceDecl rule starts with the keyword service, followed by the attribute name
matched by the textX built-in rule ID, which is in turn followed by the literal string match “{”
(line 2). The body of the microservice declaration starts with the definition of two optional
variables. First, we have a variable that keeps a reference to the configuration server. Its
definition starts with the config_server keyword followed by the literal string match “=”,
after which comes the attribute config_server matched by the rule FQN (line 3). Second,
we have a similarly defined variable that keeps a reference to the service registry (line
4). Then the optional variable assignment matched by the rule Deployment (line 5) follows.
Next, we have the definition of the communication style, matched by the rule CommunicationStyle (line 6). In the
end, another optional attribute api is matched by the rule APIDecl (line 7). The closing
curly brace ends the microservice declaration.
Listing 2 shows how to define a simple User microservice in Silvera. User microservice
is registered within ServiceRegistry (line 3), and it communicates with the rest of the
system by using RPC (line 4). This microservice is deployed inside a container, and it listens
to HTTP requests on HTTP port 8080. Since the url attribute is not defined in the
deployment section, its default value, http://localhost, will be applied.
The API of this microservice consists of the User domain object and several publicly
available methods. CRUD methods for the User domain object are generated automatically
due to the @crud annotation. This annotation represents a shortcut, and the same effect
can be achieved by using @create, @read, @update, and @delete annotations. In addition
to the CRUD methods, we have three additional methods: listUsers, userExists, and
userEmail. All API methods for this microservice are exposed over REST. URL mapping
for each of the methods will be auto-generated based on the microservice URL, method
name, and HTTP method defined by the corresponding @rest annotation. For example, the
method listUsers can be accessed with the following URL: http://localhost:8080/user/
listusers. It is, however, possible to set a custom URL mapping for the API method by using
the mapping attribute of the annotation: @rest(method=GET, mapping=<user_defined_mapping>).
This attribute is not currently available when
using CRUD annotations.
1 service User {
2
3 service_registry=ServiceRegistry
4 communication_style=rpc
5
6 deployment{
7 version="0.1"
8 port=8080
9 host=container
10 }
11
12 api{
13 @crud
14 typedef User[
15 @id str username
16 @required str password
17 @required @unique str email
18
19 int age //optional
20 ]
21
22 @rest (method=GET)
23 list<User> listUsers ()
24
25 @rest(method=GET)
26 bool userExists (str username)
27
28 @rest (method=GET)
29 str userEmail(str username)
30 }
31 }
1 service-registry ServiceRegistry {
2 tool=eureka
3 client_mode=False
4 deployment {
5 version="0.0.1"
6 port=9091
7 url="http://registry.example.com"
8 host=container
9 }
10 }
1 api-gateway EntryGateway {
2
3 service_registry=ServiceRegistry
4
5 deployment {
6 version="0.0.1"
7 port=9095
8 url="http://entry.example.com"
9 }
10
11 communication_style=rpc
12
13 gateway-for{
14 User as /api/u
15 }
16 }
1 msg-pool{
2 group UserMsgGroup [
3 msg UserAdded [
4 str userId
5 str userEmail
6 ]
7 ...
8 ]
9 }
Listing 7 shows how to define a message broker named Broker. This message broker
has three typed message channels: EV_USER_ADDED_CHANNEL channel for the UserAdded
message, EV_USER_UPDATED_CHANNEL channel for the UserUpdated message, and EV_USER
_DELETED_CHANNEL channel for the UserDeleted message. When instantiating a channel,
the FQN of a message must be used.
1 msg-broker Broker {
2
3 channel EV_USER_ADDED_CHANNEL(UserMsgGroup.UserAdded)
4 channel EV_USER_UPDATED_CHANNEL(UserMsgGroup.UserUpdated)
5 channel EV_USER_DELETED_CHANNEL(UserMsgGroup.UserDeleted)
6 }
Listing 8 shows how the User microservice should be changed to use the messaging
communication style. In the example, the User microservice publishes a message every time
a user is added, updated, or deleted. Not only CRUD methods can produce or consume
messages; regular API functions can use @producer and @consumer annotations that use
the same syntax as shown in the example.
1 service User {
2 ...
3 communication_style=messaging
4 ...
5
6 api {
7 @create(UserMsgGroup.UserAdded -> Broker.EV_USER_ADDED_CHANNEL)
8 @read
9 @update(UserMsgGroup.UserUpdated ->
10 Broker.EV_USER_UPDATED_CHANNEL)
11 @delete(UserMsgGroup.UserDeleted ->
12 Broker.EV_USER_DELETED_CHANNEL)
13 typedef User [
14 ...
15 ]
16 ...
17 }
18 }
3.4. Compiler
The Silvera compiler consists of two logically separated parts: the front-end and the back-end.
The front-end consists of modules for analysis and evaluation, whereas the back-end
comprises a set of language-specific code generators.
3.4.1. Front-End
The compiler’s front-end performs lexical analysis, parsing, semantic analysis, and trans-
lation to an intermediate representation. The parser is produced by textX [52] based on the
Silvera grammar. textX is an open-source tool for fast DSL development in Python that
is IDE agnostic and provides a fast round-trip from grammar change to testing [52]. Since
DSLs are susceptible to changes [48], we chose textX because it provides easy language
evolution. The parser created by textX parses Silvera programs and creates a graph of
Python objects (model) where each object is an instance of a corresponding class from the
metamodel. This way, instead of an Abstract Syntax Tree (AST), textX returns a Model object
(Section 3.2).
The front-end detects both syntax and semantic errors. Since Silvera is IDE-independent,
errors are detected only at compile time. Syntax errors are detected early on,
during parsing, by textX, which comes with extensive error reporting and debugging
support [52].
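The following minimal sketch illustrates how a textX-based front-end turns a grammar into a graph of Python objects instead of an AST; the toy grammar fragment below is a simplified stand-in and not Silvera’s actual grammar.

from textx import metamodel_from_str

# A toy, Silvera-like grammar: a model is a sequence of service declarations.
GRAMMAR = r"""
Model: services*=ServiceDecl;
ServiceDecl: 'service' name=ID '{' 'port' '=' port=INT '}';
"""

mm = metamodel_from_str(GRAMMAR)                        # build the metamodel once
model = mm.model_from_str("service User { port = 8080 }")

# textX returns a model: a graph of Python objects whose classes correspond
# to the grammar rules (here, Model and ServiceDecl).
svc = model.services[0]
print(svc.name, svc.port)                               # -> User 8080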
Before the Model object is passed to the compiler’s back-end, it is processed by the
communication resolving processor (CRP) and the architecture evaluation processor (AEP). Since
microservices created in Silvera can have multiple communication styles, the primary
purpose of the CRP is to validate the Silvera model according to the corresponding commu-
nication style and enrich the model with communication-style-specific information needed
by the compiler’s back-end. Each communication style comes with a specific CRP.
The purpose of the AEP is to provide a metrics-based evaluation of the microservices-
based system implemented in Silvera. Evaluation metrics applicable to microservice-based
systems are defined by Bogner et al. [46]. Even though the AEP is an independent module,
by utilizing it in the front-end, we are implementing the AVOPT pattern (see Section 3.1).
This way, evaluation results can be used by the back-end to generate optimized applications.
For the evaluation, the AEP is using the following metrics: Weighted Service Interface
Count (WSIC), Number of Versions per Service (NVS), Services Interdependence in the System
(SIY), Absolute Importance of the Service (AIS), Absolute Dependence of the Service (ADS),
and Absolute Criticality of the Service (ACS).
WSIC(S) [53] is the number of exposed API functions of microservice S. Lower values
for WSIC are more favorable for the maintainability of a microservice. As absolute values
for this metric are not conclusive on their own [46], the system-wide average WSIC_AVG is
calculated. By comparing values with the average, the largest microservices in the system
can be identified and potentially split.
NVS(S) [53] is the number of versions of microservice S currently used in the system.
A large NVS_AVG value indicates high complexity and degrades maintainability [46].
SIY [54] is the number of microservice pairs that are bi-directionally dependent on
each other. According to [54], interdependent pairs should be avoided as they attest to poor
services’ design. If such pairs exist, it can be a feasible solution to merge each of them into
a single microservice [46].
AIS(S) [54] is the number of consumer microservices that depend on the microservice
S. AIS of every microservice is compared to a system-wide AIS_AVG, which can be used to
identify very important microservices in the system.
ADS(S) [54] is the number of microservices that microservice S depends on. Again,
ADS_AVG is calculated and can be used for comparison.
ACS(S) combines AIS(S) and ADS(S) to find the most critical and potentially prob-
lematic parts of the system. According to [54], the most critical microservices are those that
are called from many different clients as well as those that invoke a lot of other microservices.
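Most of these metrics can be computed directly from the service dependency graph. The following Python sketch is an illustration rather than the AEP’s actual implementation: it derives AIS, ADS, and SIY from a mapping of each microservice to the set of microservices it depends on, and takes ACS as the product of AIS and ADS, one common way to combine the two; the service names in the example are hypothetical.

def dependency_metrics(deps):
    """deps maps each microservice to the set of microservices it calls,
    e.g. {"A": {"B"}, "B": {"A", "C"}, "C": set()}."""
    services = set(deps) | {d for targets in deps.values() for d in targets}
    ads = {s: len(deps.get(s, set())) for s in services}        # outgoing dependencies
    ais = {s: sum(s in targets for targets in deps.values())    # incoming dependencies
           for s in services}
    acs = {s: ais[s] * ads[s] for s in services}                # criticality taken as AIS * ADS
    siy = sum(1 for a in services for b in services             # interdependent pairs
              if a < b and b in deps.get(a, set()) and a in deps.get(b, set()))
    return ais, ads, acs, siy

# Hypothetical system with one interdependent pair (A <-> B).
ais, ads, acs, siy = dependency_metrics({"A": {"B"}, "B": {"A", "C"}, "C": set()})
print(ads["B"], ais["C"], siy)                                  # -> 2 1 1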
3.4.2. Back-End
After the front-end processes the Model object, the object is passed to the com-
piler’s back-end as input.
The back-end of the Silvera compiler iterates over each module in the model and
passes declarations (Decl objects) to code generators. The number of code generators is
not limited. The Silvera compiler allows custom code generators to be registered
as plugins.
For every REST-based microservice, the back-end generates an OpenAPI document
named openapi.json. OpenAPI files provide information about where to reach an API,
which operations are available, what the expected inputs and outputs are, etc.
The built-in code generator. The built-in code generator uses template-based model-
to-text transformations to produce Java applications based on the Spring Boot (Spring
Boot—https://spring.io/projects/spring-boot (accessed on 24 December 2021)) framework.
The template-based code generation is a synthesis technique that produces code from high-
level specifications called templates [55]. A template is an abstract representation of the
textual output it describes. It has a static part, text fragments that appear in the output “as
is”, and a dynamic part embedded with splices of meta-code that encode the generation
logic [55]. Templates lend themselves to iterative development as they can be easily derived
from examples. Each declaration has a corresponding set of templates. The appropriate set
of templates is chosen based on the declaration type and its target programming language.
In the case of the built-in code generator, the target language is always Java 17. Once
the appropriate set of templates is chosen, the code generator analyses the declaration
and extracts relevant data. The data are subsequently used to fill the dynamic parts of
the template.
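The following tiny Python sketch conveys the idea of template-based model-to-text transformation; the real generator uses full Java/Spring Boot templates, and the declaration structure shown here is a simplified assumption.

SERVICE_TEMPLATE = """\
// Generated for microservice {name}; do not edit, see the impl module instead.
public interface {name}Api {{
{methods}
}}
"""

def render_service(decl):
    """Fill the dynamic parts of the template from a (simplified) service declaration."""
    methods = "\n".join(
        "    {} {}({});".format(f["ret"], f["name"], ", ".join(f["params"]))
        for f in decl["functions"])
    return SERVICE_TEMPLATE.format(name=decl["name"], methods=methods)

# Hypothetical declaration extracted from the model by the compiler front-end.
print(render_service({"name": "User",
                      "functions": [{"ret": "boolean", "name": "userExists",
                                     "params": ["String username"]}]}))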
The built-in code generator generates a Spring Boot application for every microservice
present in the model. Most of the code is generated automatically; however, in some
cases, developers must implement business logic manually. To ensure that the manually
added code is preserved between successive code generations, the built-in code generator
implements the generation gap pattern [56]. The implementation of this pattern ensures
that manually written code can be added non-invasively using inheritance, where the
manually added classes inherit the generated classes. A guide on adding manual changes
to the generated code is part of Silvera’s documentation (Introduce manual changes to
the generated code—https://alensuljkanovic.github.io/silvera/compilation/#introduce-
manual-changes-to-the-generated-code (accessed on 1 June 2022)).
We adhered to the best practices defined by Hofmann et al. [57], so each generated
Spring Boot application has the following modules: domain, controller, repository, and service.
Microservices that use messaging communication style contain two additional modules:
config and messages.
The domain module contains classes that specify the application’s domain model
(business entities). These classes are derived from the type definitions (typedefs) located
in the microservice’s API. The service module contains classes that specify applications’
business rules. These modules contain two sub-modules: base and impl. The base module
contains a definition of the Java interface with methods defined in API, whereas the impl
module contains a class that implements the base interface. The impl module is different
from the rest of the generated modules because files in the impl module preserve manual
changes in the code between successive code generations. In contrast, the rest of the
generated files are always rewritten.
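The generation gap pattern behind the base and impl modules can be sketched as follows (in Python for brevity; the generated Silvera code applies the same idea to Java classes, and the class names here are illustrative).

# Regenerated on every compilation; never edited by hand.
class UserServiceBase:
    def user_exists(self, username):
        raise NotImplementedError("business logic is provided by the subclass")

# Written once by the developer and preserved between successive code
# generations; it extends the generated class non-invasively via inheritance.
class UserService(UserServiceBase):
    def __init__(self, repository):
        self.repository = repository

    def user_exists(self, username):
        return self.repository.find_by_username(username) is not None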
The repository module contains the implementation of the Repository pattern in the
form of MongoRepository provided by MongoDB. Currently, the built-in code generator by
default supports only MongoDB. However, support for an arbitrary database can be added
either by extending the existing code generator or by registering a new one.
All messages defined in the message pool are generated as classes in the messages
module. Messages are sent through the network as JSON objects.
The config module contains classes used to define how the generated microservice
application will communicate with a message broker. These classes are: (i) the KafkaConfig
class that defines how the application is registered within the Kafka cluster, and (ii) the
MessageDeserializer class that defines how messages received as JSON objects will be trans-
formed into message objects defined in the messages module. MessageDeserializer is optional
and will be generated only if the application consumes messages from the message broker.
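The role of the MessageDeserializer can be illustrated with a short sketch (written in Python rather than the generated Java): a JSON payload received from the message broker is turned into a message object defined in the messages module. The UserAdded fields mirror the message defined in the message pool shown earlier; the deserialization function itself is illustrative.

import json
from dataclasses import dataclass

@dataclass
class UserAdded:                      # mirrors the UserAdded message from the message pool
    userId: str
    userEmail: str

def deserialize(payload, msg_cls=UserAdded):
    """Turn a JSON payload received from the message broker into a message object."""
    data = json.loads(payload.decode("utf-8"))
    return msg_cls(**data)

# deserialize(b'{"userId": "42", "userEmail": "user@example.com"}')
# -> UserAdded(userId='42', userEmail='user@example.com')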
The controller module contains a class that specifies the REST API of the generated
microservice application. The class contains methods that belong to both the public and
internal API of the microservice. Methods from the internal API are private and cannot be
accessed from the outside.
In addition to the modules mentioned above, pom.xml and bootstrap.properties files are
generated for each microservice. The file bootstrap.properties is used to set up Spring Boot
applications, whereas Maven uses pom.xml files to manage dependencies.
API gateways and service registries are also generated as separate Spring Boot ap-
plications. The API gateway is generated as a Zuul Proxy (Netflix Zuul—https://github.
com/Netflix/zuul (accessed on 24 December 2021)) server. Zuul is a gateway service
developed by Netflix, and it provides dynamic routing, monitoring, resiliency, security, etc.
Generated code for the API gateway is simple because it contains only one class with the
main function and the application.properties file, which defines how the API gateway will
perform request routing and whether it will contact the service registry to retrieve the URL
of the corresponding microservice. The service registry is also a simple application, with
one class with the main function and the bootstrap.properties file.
The built-in code generator produces a special run script and a Docker file (if the
deployment host is set to the container) for each microservice, API gateway, or service
registry. The run script utilizes Maven to produce and run the jar file.
Listing 9. Implementation of a GeneratorDesc object and the prototype of the generate function.
The second step is to make the code generator discoverable by Silvera. To do this, we
must register the GeneratorDesc object in the setup.py entry point named silvera_generators,
as shown in Listing 10.
Listing 10. Making the new code generator discoverable by Silvera by using silvera_generators
entry point.
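As the exact prototypes are given in Listings 9 and 10 and in Silvera’s documentation, the following is only a rough sketch of what the two steps can look like; the import path, the GeneratorDesc fields, and the generate signature are assumptions, and only the silvera_generators entry-point name is taken from the text above.

# my_generator.py -- sketch of a custom code generator (names are assumptions).
from silvera.generator.gen_reg import GeneratorDesc   # assumed import path

def generate(decl, output_dir):
    """Produce code for a single declaration from the Silvera model."""
    ...

python_gen = GeneratorDesc(language_name="python",     # assumed constructor fields
                           language_ver="3",
                           description="Demo Python code generator",
                           gen_func=generate)

# setup.py -- register the generator through the silvera_generators entry point.
from setuptools import setup

setup(name="silvera-demo-generator",
      py_modules=["my_generator"],
      entry_points={"silvera_generators": ["python = my_generator:python_gen"]})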
Table 10. Microservices that compose Eat and Drink application and their descriptions.
Microservice Description
User Provides user-related operations.
Meal Provides means for adding meals to the menu.
Order Provides means for creating orders.
Storage Contains information about availability of ingredients used for preparing a meal.
EmailNotifier Notifies users about the order status via email.
The Order microservice has two entities: Order and OrderItem. Every order has infor-
mation about the user who created the order, a list of OrderItem entities, and a calculated
price. OrderItem contains information about the meal that is ordered, such as the meal name
and the amount. The API of the Order microservice contains CRUD methods for the Order
entity and the listOrders method, which retrieves all orders from the database. Methods
createOrder, updateOrder, and deleteOrder publish corresponding messages to the message
broker after they are successfully completed. These messages are used by the EmailNotifier
microservice to notify the user about the order status.
The EmailNotifier microservice has one Notification entity that contains an order ID
and a user’s email as attributes. The API of this microservice has listNotifications and
listHistoryForUser methods. The first method retrieves all notifications from the database,
whereas the second method retrieves notifications only for a selected user. In addition to
these publicly available methods, this microservice also contains three internal API methods:
(i) orderCreated, (ii) orderUpdated, and (iii) orderDeleted. These methods are triggered after an
order in the Order microservice is created, updated, or deleted.
Figure 3. The architecture of the Eat and Drink application. Solid, black connections represent
dependencies. The label on a dependency connection shows which method is required by the dependent microservice.
Green, dashed connections represent service registrations. Orange, dashed connections represent a
message publishing, and labels on these connections represent messages that are being published.
Table 11. The results of the metrics-based evaluation of the Eat and Drink application.

Metric   System Average   User   Meal   Storage   Order   EmailNotifier
WSIC     3.2              4      5.5    3         1       2.5
NVS      1                1      1      1         1       1
AIS      0.6              1      1      1         0       0
ADS      0.6              0      0      0         3       0
ACS      0                0      0      0         0       0
The microservice Order depends on three other microservices: User, Meal, and Stor-
age. Based on the ADS value calculated for the Order microservice, we can see that this
microservice could be problematic due to high coupling with its dependencies. However,
this is expected since the Order microservice is a central part of the application where the
main business logic is implemented. Since no other microservice depends on the Order
microservice, its AIS and ACS values are both 0.
The microservices User, Meal, Storage, and EmailNotifier have no dependencies on
other microservices, hence the zeros for their ADS and ACS values.
Table 12. The comparison of the number of automatically generated lines of code and manually
written lines of code for the Eat and Drink application.
Kelly and Tolvanen [58] have shown that DSLs lead to better quality applications
because of two main reasons. First, DSLs can include correctness rules of the domain
that ensure that the user cannot create illegal specifications. Elimination of bugs in the
beginning is far better than finding and correcting them later [58]. Second, code generators
automatically convert DSL specifications to a lower abstraction level (normally code),
and the generated result does not need to be edited afterward. A study by Kieburtz et al. [59]
compared the reliability of the software built manually and by using the DSL approach.
The study compared the number of failed tests for the manual approach and the DSL
approach and showed that the DSL approach yielded significantly fewer errors.
Based on these studies and the results from Table 12, we can conclude that Silvera,
while accelerating the development of microservice-based distributed systems, also leads to
better quality applications. The improved quality comes as a result of: (a) correctness rules
built into Silvera that ensure that the user cannot create illegal specifications, and (b) the fact
that most of the code is generated automatically and does not need to be edited afterward.
To check Silvera’s performance, we compiled the Eat and Drink application 10 times
in a row and collected the results with the built-in Linux shell time command. Results
showed that it takes approximately 0.5 s to compile the application (The test was executed
on a laptop with an Intel Core™ i7-6700HQ CPU and 8 GB of RAM).
5. Evaluation of Silvera
In addition to the Eat and Drink application presented in Section 4, we developed
multiple applications to test Silvera. However, to achieve an objective assessment of the
language, we needed feedback from users that were not involved in the development of Sil-
vera. Therefore, we conducted a survey based on the FQAD framework [13] to understand
whether we achieved our goals. FQAD is based on the ISO/IEC 25010:2011 standard, and it
defines a set of quality characteristics that should be considered when creating a DSL. Many
stakeholders can be involved in the assessment of the DSL. Each stakeholder forms a per-
spective of what characteristics the DSL should have [13]. Stakeholders can choose between
5.1. Scoping
The scope of the experiment was set by defining its goals [60]. Here we follow the
GQM template for goal definition, originally presented by Basili and Rombach [62].
The goal of the study was to analyze Silvera for the purpose of evaluation with respect
to the following DSL quality characteristics: functional suitability, usability, reliability,
productivity, extendability, and expressiveness. The perspective is from the researcher’s
point of view. The study was run using students and software developers with experience
in MSA or software architectures in general.
5.2. Hypotheses
Before we shared the survey with participants, we defined the following hypotheses.
Hypothesis 1. Silvera is not appropriate for the specification of MSA. Alternative Hypothesis H1_1: Silvera is appropriate for the specification of MSA.

Hypothesis 3. The concepts and notation of the Silvera language are not learnable and rememberable. Alternative Hypothesis H1_3: The concepts and notation of the Silvera language are learnable and rememberable.

Hypothesis 4. Silvera is not appropriate for users’ needs when developing MSA. Alternative Hypothesis H1_4: Silvera is appropriate for users’ needs when developing MSA.

Hypothesis 5. Silvera does not protect users against making errors. Alternative Hypothesis H1_5: Silvera protects users against making errors.

Hypothesis 6. Silvera does not shorten the time needed to develop MSAs. Alternative Hypothesis H1_6: Silvera shortens the time needed to develop MSAs.

Hypothesis 7. Silvera does not reduce the number of human resources used to develop MSAs. Alternative Hypothesis H1_7: Silvera reduces the number of human resources used to develop MSAs.

Hypothesis 8. Silvera does not provide one and only one good way to express every concept of interest. Alternative Hypothesis H1_8: Silvera provides one and only one good way to express every concept of interest.

Hypothesis 9. Each construct in Silvera is not used to represent exactly one distinct concept in the domain. Alternative Hypothesis H1_9: All constructs in Silvera represent exactly one distinct concept in the domain.

Hypothesis 10. Silvera contains conflicting elements. Alternative Hypothesis H1_10: Silvera does not contain conflicting elements.
Even though the task is simple, it covers all the basic concepts of Silvera. To success-
fully solve the task, participants needed to learn how to define a microservice and its API,
an API gateway, and a service registry. Then, they needed to determine how to register
a microservice within the service registry and how to make the microservice available in
the API gateway. To successfully implement the business logic, participants needed to use
both the messaging and RPC communication styles. For the messaging communication style,
they needed to define messages and a message pool. To make the RPC work, participants
needed to define dependencies between the appropriate microservices.
The task needed to be implemented both in Silvera and manually. In the case of
manual implementation, participants had the freedom to choose a programming language,
framework, and tooling in which they implemented the task. Participants were asked to
record the time needed to complete each implementation.
Table 13. The survey questions. All questions were closed type, and only one answer could be
provided.
For questions from the Experience group, users were able to select answers ranging
from 1 (Inexperienced) to 5 (Experienced). For the rest of the questions, answers ranged
from 1 (Strongly disagree) to 5 (Strongly agree). At the end of the questionnaire, we added a
special field where participants could leave their comments. The data collected from the
Likert scale in the study are quantitative.
expertise in MSAs varied to a great extent, high standard deviation values for the task
completion times were expected.
The completeness was measured by running the set of automated tests. As depicted
in Table 15, both approaches yielded similar results, with Silvera having an average task
completion rate of ∼95% compared to the ∼91% of the manual approach. While the
maximum task completion rate was the same for both approaches, the minimum completion
rate was slightly higher for the manual approach. Values for the minimum completion rate
were lower than expected because some participants did not name their functions as stated
in the task, which caused the failure of some of the automated tests. The median value was
the same for both approaches. However, the mode value (the most commonly occurring
value) showed that the task completion rate was 100% for the majority of participants when
using Silvera. Indeed, 50% of the participants had a task completion of 100% when using
Silvera, as opposed to 33.33% with the manual approach. We did not calculate the mode
value for columns related to time because all time durations reported by users were unique.
As shown in Table 16, the majority of participants agreed that Silvera is functionally
suitable for the specification of distributed systems based on microservices. However,
participants also pointed out the following problems:
• Syntax error. One participant reported that the generated code could not be compiled
due to a syntax error. This was caused by faulty template implementation and was
fixed immediately.
• Code formatting. A few participants complained about the formatting of the generated
code. In some cases, the spacing between lines of code was too big. We addressed this
problem, and we will pay special attention to the code formatting in the future.
• Underutilized Spring Boot framework. Some participants noted that Spring Boot comes with
built-in features that could simplify some parts of the generated code. For example,
we could use Feign client to implement the communication between microservices.
• Notation consistency. One participant commented that he was confused with the mixed
usage of brackets and braces. We plan to take this into account in future versions of
the tool.
• Documentation. One participant commented that video tutorials and documentation
are not up-to-date. Video tutorials were recorded for the previous version of Silvera,
but we did describe all the backward-incompatible changes both in the videos
and the documentation. Still, we will address this issue in the future.
Table 16. Percentages of responses (n = 18) for Functional suitability, Usability, Reliability, Productivity,
Extendability and Expressiveness groups of questions.
Table 17. Analysis of central tendency and dispersion of the responses (n = 18). Values for median and mode show the central tendency, while the Inter-Quartile Range (IQR) shows the dispersion of the responses.

Question | Quantiles (0%, 25%, 50%, 75%, 100%) | Mode | IQR
Q4: Silvera is appropriate for MSA specification needs. | 3, 4, 4, 4, 5 | 4 | 0
Q5: Silvera language elements are understandable. | 4, 4.25, 5, 5, 5 | 5 | 0.75
Q6: The concepts and notation of the Silvera language are learnable and rememberable. | 4, 4, 5, 5, 5 | 5 | 1
Q7: Silvera is appropriate for needs of MSA specification. | 3, 4, 4, 5, 5 | 4 | 1
Q8: Silvera protects users against making errors. | 3, 4, 4, 4.75, 5 | 4 | 0.75
Q9: Silvera shortens the time needed to develop microservice software architectures. | 3, 4, 5, 5, 5 | 5 | 1
Q10: Silvera reduces the amount of human resources used to develop microservice software architectures. | 2, 4, 4, 4, 5 | 4 | 0
Q11: Silvera provides one and only one good way to express every concept of interest. | 2, 3, 4, 5, 5 | 4 | 1
Q12: Each construct in Silvera is used to represent exactly one distinct concept in the domain. | 3, 4, 4, 4, 5 | 4 | 0
Q13: Silvera does not contain conflicting elements. | 3, 3, 4, 4, 5 | 4 | 0
Table 18. The corresponding test statistic, p-value and effect size for every question from groups
Functional suitability, Usability, Reliability, Productivity, Extendability and Expressiveness.
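For readers who wish to reproduce this kind of analysis, the sketch below runs a one-sample Wilcoxon signed-rank test [64] with SciPy and derives an effect size as r = Z/sqrt(n). It is illustrative only: the responses are hypothetical, and the comparison against the neutral score of 3 and the effect-size convention are assumptions about the setup rather than a description of the exact procedure used in the study.

import math
import numpy as np
from scipy import stats

# Hypothetical Likert responses (1-5) for a single question; NOT the study data.
responses = np.array([4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 4, 4])

# One-sample Wilcoxon signed-rank test against the neutral score of 3
# (an assumed null hypothesis, used here only for illustration).
diffs = responses - 3
res = stats.wilcoxon(diffs, zero_method="wilcox", alternative="two-sided")

n = np.count_nonzero(diffs)           # zero differences are discarded by the test
z = stats.norm.isf(res.pvalue / 2)    # |Z| recovered from the two-sided p-value
r = z / math.sqrt(n)                  # effect size r = Z / sqrt(n) (assumed convention)

print(f"W = {res.statistic:.1f}, p = {res.pvalue:.4f}, r = {r:.2f}")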
6. Threats to Validity
The aim of our study was to develop a language that will accelerate the development
of microservices. Although we conducted a thorough survey, there are some threats to
validity, which we discuss in this section. We distinguish between threats to construct
validity, internal validity, and external validity.
Since Silvera currently lacks IDE support, errors in a model could be detected only during the compilation. To mitigate this, we paid special attention to error reporting
and performance (see Section 4.3) when developing Silvera. This enabled participants to
quickly discover and fix errors. However, we plan to add IDE support in the future to
enhance the user experience (see Section 8).
Due to the presented threats to external validity, we cannot claim that the results of the
study can be generalized to other participant populations or settings. To mitigate this, we
plan to perform more detailed studies in the future (see Section 8).
7. Study Limitations
In this section, we discuss the limitations of the current approach and implementation.
In addition, we give some ideas for future work.
Additionally, this work had to be repeated for each development tool, as each tool provides
different APIs for implementing the same feature. To support multiple IDEs, we plan to
add support for the Language Server Protocol (LSP) (The Language Server Protocol—https:
//microsoft.github.io/language-server-protocol/overviews/lsp/overview/ (accessed on
18 May 2022)). The LSP standardizes the communication between a language server, which
provides language data, and development tools. For all textX-based DSLs, the language
server is provided by the open-source textX-LS project (textX-LS—https://github.com/textX/textX--LS
(accessed on 18 May 2022)), and LSP support only needs to be added
on the client side (text editor or IDE).
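To illustrate what such editor support involves, the following sketch shows a minimal LSP server that parses a Silvera document when it is opened and publishes syntax errors as diagnostics. It is a sketch of the general approach only, not the textX-LS implementation: it assumes the pygls (>= 1.0) and textX packages, and the grammar file name silvera.tx is a hypothetical placeholder.

from lsprotocol.types import (
    TEXT_DOCUMENT_DID_OPEN,
    Diagnostic,
    DiagnosticSeverity,
    DidOpenTextDocumentParams,
    Position,
    Range,
)
from pygls.server import LanguageServer
from textx import metamodel_from_file
from textx.exceptions import TextXError

server = LanguageServer("silvera-ls", "v0.1")
silvera_mm = metamodel_from_file("silvera.tx")  # hypothetical grammar file


@server.feature(TEXT_DOCUMENT_DID_OPEN)
def validate(ls: LanguageServer, params: DidOpenTextDocumentParams) -> None:
    """Parse the opened document and report syntax errors as LSP diagnostics."""
    diagnostics = []
    try:
        silvera_mm.model_from_str(params.text_document.text)
    except TextXError as err:
        pos = Position(line=max((err.line or 1) - 1, 0),
                       character=max((err.col or 1) - 1, 0))
        diagnostics.append(Diagnostic(range=Range(start=pos, end=pos),
                                      message=str(err),
                                      severity=DiagnosticSeverity.Error))
    ls.publish_diagnostics(params.text_document.uri, diagnostics)


if __name__ == "__main__":
    server.start_io()  # the editor (LSP client) starts this server over stdio

An editor plugin then only has to start this process and connect it to the LSP client support it already provides.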
In the current implementation, Silvera does not generate unit tests. By generating unit
tests, Silvera could further reduce the need for manual intervention.
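As a rough sketch of how such a feature could fit the existing template-based generation approach, the snippet below renders a JUnit test skeleton for every API function of a service. The Service and Function classes, the Jinja2 template, and the Bookstore example are illustrative assumptions and are not part of the current Silvera compiler.

from dataclasses import dataclass, field
from jinja2 import Template


@dataclass
class Function:
    name: str


@dataclass
class Service:
    name: str
    api: list = field(default_factory=list)


# Template for a JUnit skeleton; one empty test per API function.
TEST_TEMPLATE = Template("""\
package {{ service.name | lower }};

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.fail;

public class {{ service.name }}Test {
{% for fn in service.api %}
    @Test
    public void {{ fn.name }}Test() {
        // TODO: call {{ fn.name }}() and assert on the result
        fail("Not yet implemented");
    }
{% endfor %}
}
""")


def generate_tests(service: Service) -> str:
    """Render a JUnit test skeleton for each API function of the given service."""
    return TEST_TEMPLATE.render(service=service)


if __name__ == "__main__":
    bookstore = Service(name="Bookstore",
                        api=[Function("listBooks"), Function("orderBook")])
    print(generate_tests(bookstore))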
Author Contributions: Conceptualization, A.S. and I.D.; methodology, A.S., V.I. and I.D.; software,
A.S.; validation, A.S., B.M., V.I. and I.D.; investigation, A.S., B.M., V.I. and I.D.; writing—original draft
preparation, A.S.; writing—review and editing, B.M., V.I. and I.D.; visualization, A.S.; supervision,
B.M. and I.D. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Silvera source code, examples, documentation and video materials
are available at https://alensuljkanovic.github.io/silvera/, accessed on 2 June 2022. Additional
information is available from corresponding authors upon a reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Fowler, M.; Lewis, J. Microservices. 2014. Available online: https://www.martinfowler.com/articles/microservices.html
(accessed on 30 April 2017).
2. Di Francesco, P.; Malavolta, I.; Lago, P. Research on architecting microservices: Trends, focus, and potential for industrial adoption.
In Software Architecture (ICSA), Proceedings of the 2017 IEEE International Conference on Software Architecture (ICSA), Gothenburg,
Sweden, 3–7 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 21–30.
3. Fritzsch, J.; Bogner, J.; Wagner, S.; Zimmermann, A. Microservices migration in industry: Intentions, strategies, and challenges.
In Proceedings of the 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), Cleveland, OH, USA,
29 September–4 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 481–490.
4. Baškarada, S.; Nguyen, V.; Koronios, A. Architecting microservices: Practical opportunities and challenges. J. Comput. Inf. Syst.
2018, 60, 1–9. [CrossRef]
5. Bogner, J.; Fritzsch, J.; Wagner, S.; Zimmermann, A. Microservices in industry: Insights into technologies, characteristics, and
software quality. In Proceedings of the 2019 IEEE International Conference on Software Architecture Companion (ICSA-C),
Hamburg, Germany, 25–26 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 187–195.
6. Knoche, H.; Hasselbring, W. Drivers and barriers for microservice adoption: A survey among professionals in Germany. Enterp.
Model. Inf. Syst. Archit. (EMISAJ)—Int. J. Concept. Model. 2019, 14, 1–35.
7. Wang, Y.; Kadiyala, H.; Rubin, J. Promises and challenges of microservices: An exploratory study. Empir. Softw. Eng. 2021,
26, 1–44. [CrossRef]
8. Lenarduzzi, V.; Lomio, F.; Saarimäki, N.; Taibi, D. Does migrating a monolithic system to microservices decrease the technical
debt? J. Syst. Softw. 2020, 169, 110710. [CrossRef]
9. Kleehaus, M.; Matthes, F. Challenges in Documenting Microservice-Based IT Landscape: A Survey from an Enterprise Architecture
Management Perspective. In Proceedings of the 2019 IEEE 23rd International Enterprise Distributed Object Computing Conference
(EDOC), Paris, France, 28–31 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 11–20.
10. Bushong, V.; Abdelfattah, A.S.; Maruf, A.A.; Das, D.; Lehman, A.; Jaroszewski, E.; Coffey, M.; Cerny, T.; Frajtak, K.; Tisnovsky, P.;
et al. On Microservice Analysis and Architecture Evolution: A Systematic Mapping Study. Appl. Sci. 2021, 11, 7856. [CrossRef]
11. Waseem, M.; Liang, P.; Shahin, M. A systematic mapping study on microservices architecture in devops. J. Syst. Softw. 2020,
170, 110798. [CrossRef]
12. Voelter, M.; Benz, S.; Dietrich, C.; Engelmann, B.; Helander, M.; Kats, L.C.; Visser, E.; Wachsmuth, G. DSL Engineering:
Designing, Implementing and Using Domain-Specific Languages; CreateSpace Independent Publishing Platform, 2013. Available
online: dslbook.org (accessed on 20 June 2022).
13. Kahraman, G.; Bilgen, S. A framework for qualitative assessment of domain-specific languages. Softw. Syst. Model. 2015,
14, 1505–1526. [CrossRef]
14. ISO/IEC/IEEE. ISO/IEC/IEEE Systems and Software Engineering—Architecture Description; ISO/IEC/IEEE 42010:2011(E) (Revision
of ISO/IEC 42010:2007 and IEEE Std 1471-2000); IEEE: Piscataway, NJ, USA, 2011; pp. 1–46. [CrossRef]
15. Górski, T. The 1+5 Architectural Views Model in Designing Blockchain and IT System Integration Solutions. Symmetry 2021,
13, 2000. [CrossRef]
16. Richards, M. Microservices vs. Service-Oriented Architecture; O’Reilly Media: Newton, MA, USA, 2015.
17. Dragoni, N.; Giallorenzo, S.; Lafuente, A.L.; Mazzara, M.; Montesi, F.; Mustafin, R.; Safina, L. Microservices: Yesterday, today, and
tomorrow. arXiv 2016, arXiv:1606.04036.
18. Autili, M.; Di Salle, A.; Gallo, F.; Pompilio, C.; Tivoli, M. CHOReVOLUTION: Service choreography in practice. Sci. Comput.
Program. 2020, 197, 102498. [CrossRef]
19. Serhani, M.A.; El-Kassabi, H.T.; Shuaib, K.; Navaz, A.N.; Benatallah, B.; Beheshti, A. Self-adapting cloud services orchestration
for fulfilling intensive sensory data-driven IoT workflows. Future Gener. Comput. Syst. 2020, 108, 583–597. [CrossRef]
20. Cerny, T.; Donahoo, M.J.; Trnka, M. Contextual understanding of microservice architecture: Current and future directions. ACM
SIGAPP Appl. Comput. Rev. 2018, 17, 29–45. [CrossRef]
21. Li, J.; Zhong, Y.; Zhu, S.; Hao, Y. Energy-aware service composition in multi-Cloud. J. King Saud Univ.-Comput. Inf. Sci. 2022,
in press. [CrossRef]
22. Gorski, T.; Woźniak, A.P. Optimization of business process execution in services architecture: a systematic literature review. IEEE
Access 2021, 9, 111833–111852. [CrossRef]
23. Bucchiarone, A.; Dragoni, N.; Dustdar, S.; Larsen, S.T.; Mazzara, M. From monolithic to microservices: An experience report from
the banking domain. IEEE Softw. 2018, 35, 50–55. [CrossRef]
24. Namiot, D.; Sneps-Sneppe, M. On micro-services architecture. Int. J. Open Inf. Technol. 2014, 2, 24–27.
25. Dragoni, N.; Lanese, I.; Larsen, S.T.; Mazzara, M.; Mustafin, R.; Safina, L. Microservices: How to make your application scale.
arXiv 2017, arXiv:1702.07149.
26. Gabbrielli, M.; Giallorenzo, S.; Guidi, C.; Mauro, J.; Montesi, F. Self-reconfiguring microservices. In Theory and Practice of Formal
Methods; Springer: Berlin/Heidelberg, Germany, 2016; pp. 194–210.
27. Richardson, C. Microservice Patterns; Manning Publications: Shelter Island, NY, USA, 2017.
28. Karabey Aksakalli, I.; Çelik, T.; Can, A.; Tekinerdogan, B. Deployment and communication patterns in microservice architectures:
A systematic literature review. J. Syst. Softw. 2021, 180, 111014. [CrossRef]
29. Houmani, Z.; Balouek-Thomert, D.; Caron, E.; Parashar, M. Enhancing microservices architectures using data-driven service
discovery and QoS guarantees. In Proceedings of the 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and
Internet Computing (CCGRID), Melbourne, VIC, Australia, 11–14 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 290–299.
30. Newman, S. Building Microservices; O’Reilly Media: Newton, MA, USA, 2015.
31. Van Deursen, A.; Klint, P.; Visser, J. Domain-specific languages: An annotated bibliography. ACM Sigplan Not. 2000, 35, 26–36.
[CrossRef]
32. Visser, E. WebDSL: A case study in domain-specific language engineering. In International Summer School on Generative and
Transformational Techniques in Software Engineering; Springer: Berlin/Heidelberg, Germany, 2007; pp. 291–373.
33. Kosar, T.; Lu, Z.; Mernik, M.; Horvat, M.; Črepinšek, M. A Case Study on the Design and Implementation of a Platform for Hand
Rehabilitation. Appl. Sci. 2021, 11, 389. [CrossRef]
34. Dejanović, I.; Dejanović, M.; Vidaković, J.; Nikolić, S. PyFlies: A Domain-Specific Language for Designing Experiments in
Psychology. Appl. Sci. 2021, 11, 7823. [CrossRef]
35. Wile, D. Lessons learned from real DSL experiments. In Proceedings of the 36th Annual Hawaii International Conference on
System Sciences, Big Island, HI, USA, 6–9 January 2003; IEEE: Piscataway, NJ, USA, 2003; p. 10.
36. Gray, J.; Karsai, G. An examination of DSLs for concisely representing model traversals and transformations. In Proceedings of
the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003; IEEE: Piscataway,
NJ, USA, 2003; p. 10.
37. Johanson, A.N.; Hasselbring, W. Effectiveness and efficiency of a domain-specific language for high-performance marine
ecosystem simulation: A controlled experiment. Empir. Softw. Eng. 2017, 22, 2206–2236. [CrossRef]
38. Kosar, T.; Gaberc, S.; Carver, J.C.; Mernik, M. Program comprehension of domain-specific and general-purpose languages:
Replication of a family of experiments using integrated development environments. Empir. Softw. Eng. 2018, 23, 2734–2763.
[CrossRef]
39. Wizenty, P.; Sorgalla, J.; Rademacher, F.; Sachweh, S. MAGMA: Build management-based generation of microservice infrastruc-
tures. In Proceedings of the 11th European Conference on Software Architecture: Companion Proceedings, Canterbury, UK,
11–15 September 2017; ACM: New York, NY, USA, 2017; pp. 61–65.
40. Sorgalla, J. Ajil: A graphical modeling language for the development of microservice architectures. In Proceedings of the
Microservices 2017 Conference, Extended Abstracts, Odense, Denmark, 25–26 October 2017.
41. Perera, K.; Perera, I. A Rule-based System for Automated Generation of Serverless-Microservices Architecture. In Proceedings of
the 2018 IEEE International Systems Engineering Symposium (ISSE), Rome, Italy, 1–3 October 2018; IEEE: Piscataway, NJ, USA,
2018; pp. 1–8.
42. Terzić, B.; Dimitrieski, V.; Kordić, S.; Milosavljević, G.; Luković, I. Development and evaluation of MicroBuilder: A Model-Driven
tool for the specification of REST Microservice Software Architectures. Enterp. Inf. Syst. 2018, 1–24. [CrossRef]
43. Sorgalla, J.; Wizenty, P.; Rademacher, F.; Sachweh, S.; Zündorf, A. Applying Model-Driven Engineering to Stimulate the Adoption
of DevOps Processes in Small and Medium-Sized Development Organizations. SN Comput. Sci. 2021, 2, 1–25. [CrossRef]
44. Montesi, F.; Guidi, C.; Zavattaro, G. Service-Oriented Programming with Jolie. In Web Services Foundations; Springer: Berlin/Hei-
delberg, Germany, 2014; pp. 81–107.
45. Rademacher, F.; Sachweh, S.; Zündorf, A. Aspect-oriented modeling of technology heterogeneity in microservice architecture. In
Proceedings of the 2019 IEEE International Conference on Software Architecture (ICSA), Hamburg, Germany, 25–29 March 2019;
IEEE: Piscataway, NJ, USA, 2019; pp. 21–30.
46. Bogner, J.; Wagner, S.; Zimmermann, A. Automatically measuring the maintainability of service-and microservice-based systems:
A literature review. In Proceedings of the 27th International Workshop on Software Measurement and 12th International
Conference on Software Process and Product Measurement, Gothenburg, Sweden, 25–27 October 2017; pp. 107–115.
47. Spinellis, D. Notable design patterns for domain-specific languages. J. Syst. Softw. 2001, 56, 91–99. [CrossRef]
48. Mernik, M.; Heering, J.; Sloane, A.M. When and how to develop domain-specific languages. ACM Comput. Surv. (CSUR) 2005,
37, 316–344. [CrossRef]
49. Jalali, S.; Wohlin, C. Systematic literature studies: Database searches vs. backward snowballing. In Proceedings of the 2012
ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Lund, Sweden, 20–21 September
2012; IEEE: Piscataway, NJ, USA, 2012; pp. 29–38.
50. Fowler, M. Domain-Specific Languages; Addison-Wesley Professional: Boston, MA, USA, 2010.
51. Kosar, T.; Martínez López, P.E.; Barrientos, P.A.; Mernik, M. A preliminary study on various implementation approaches of domain-specific
language. Inf. Softw. Technol. 2008, 50, 390–405. [CrossRef]
52. Dejanović, I.; Vaderna, R.; Milosavljević, G.; Vuković, Ž. TextX: A Python tool for Domain-Specific Languages implementation.
Knowl.-Based Syst. 2017, 115, 1–4. [CrossRef]
53. Hirzalla, M.; Cleland-Huang, J.; Arsanjani, A. A metrics suite for evaluating flexibility and complexity in service oriented
architectures. In Proceedings of the International Conference on Service-Oriented Computing; Springer: Berlin/Heidelberg, Germany,
2008; pp. 41–52.
54. Rud, D.; Schmietendorf, A.; Dumke, R. Product metrics for service-oriented infrastructures. In Proceedings of the 16th International
Workshop on Software Measurement and DASMA Metrik Kongress; IWSM/MetriKon: Potsdam, Germany, 2006; pp. 161–174.
55. Syriani, E.; Luhunu, L.; Sahraoui, H. Systematic mapping study of template-based code generation. Comput. Lang. Syst. Struct.
2018, 52, 43–62. [CrossRef]
56. Vlissides, J. Pattern Hatching: Design Patterns Applied; Addison-Wesley Longman Ltd.: Boston, MA, USA, 1998.
57. Hofmann, M.; Schnabel, E.; Stanley, K. Microservices Best Practices for Java; IBM Redbooks: Armonk, NY, USA, 2017.
58. Kelly, S.; Tolvanen, J.P. Domain-Specific Modeling: Enabling Full Code Generation; Wiley–IEEE Computer Society Pr.: Hoboken, NJ,
USA, 2008.
59. Kieburtz, R.B.; McKinney, L.; Bell, J.M.; Hook, J.; Kotov, A.; Lewis, J.; Oliva, D.P.; Sheard, T.; Smith, I.; Walton, L. A software
engineering experiment in software component generation. In Proceedings of the IEEE 18th International Conference on Software
Engineering, Berlin, Germany, 25–30 March 1996; IEEE: Piscataway, NJ, USA, 1996; pp. 542–552.
60. Wohlin, C.; Runeson, P.; Höst, M.; Ohlsson, M.C.; Regnell, B.; Wesslén, A. Experimentation in Software Engineering: An Introduction;
Kluwer Academic Publishers: Alphen aan den Rijn, The Netherlands, 2000.
61. Jedlitschka, A.; Ciolkowski, M.; Pfahl, D. Reporting Experiments in Software Engineering. In Guide to Advanced Empirical Software
Engineering; Springer: Berlin/Heidelberg, Germany, 2008; pp. 201–228.
62. Basili, V.R.; Rombach, H.D. The TAME project: Towards improvement-oriented software environments. IEEE Trans. Softw. Eng.
1988, 14, 758–773. [CrossRef]
63. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013.
64. Woolson, R. Wilcoxon signed-rank test. In Wiley Encyclopedia of Clinical Trials; Wiley: Hoboken, NJ, USA, 2007; pp. 1–3.
65. Ghosh, A.; Mukherjee, A.; Misra, S. SEGA: Secured Edge Gateway Microservices Architecture for IIoT-based Machine Monitoring.
IEEE Trans. Ind. Inform. 2021, 18, 1949–1956. [CrossRef]
66. Yarygina, T.; Bagge, A.H. Overcoming security challenges in microservice architectures. In Proceedings of the 2018 IEEE
Symposium on Service-Oriented System Engineering (SOSE), Bamberg, Germany, 26–29 March 2018; IEEE: Piscataway, NJ, USA,
2018; pp. 11–20.
67. Belafia, R.; Jeanjean, P.; Barais, O.; Le Guernic, G.; Combemale, B. From Monolithic to Microservice Architecture: The Case
of Extensible and Domain-Specific IDEs. In Proceedings of the 2021 ACM/IEEE International Conference on Model Driven
Engineering Languages and Systems Companion (MODELS-C), Fukuoka, Japan, 10–15 October 2021; IEEE: Piscataway, NJ, USA,
2021; pp. 454–463.
68. El-Ghareeb, H.A. Neutrosophic-based domain-specific languages and rules engine to ensure data sovereignty and consensus
achievement in microservices architecture. In Optimization Theory Based on Neutrosophic and Plithogenic Sets; Elsevier: Amsterdam,
The Netherlands, 2020; pp. 21–43.
69. Aggarwal, K.; Singh, Y.; Kaur, A.; Malhotra, R. Empirical Study of Object-Oriented Metrics. J. Object Technol. 2006, 5, 149–173.
[CrossRef]
70. Athanasopoulos, D.; Zarras, A.V.; Miskos, G.; Issarny, V.; Vassiliadis, P. Cohesion-driven decomposition of service interfaces
without access to source code. IEEE Trans. Serv. Comput. 2014, 8, 550–562. [CrossRef]
71. Engel, T.; Langermeier, M.; Bauer, B.; Hofmann, A. Evaluation of microservice architectures: A metric and tool-based approach.
In International Conference on Advanced Information Systems Engineering; Springer: Berlin/Heidelberg, Germany, 2018; pp. 74–89.
72. Haupt, F.; Leymann, F.; Scherer, A.; Vukojevic-Haupt, K. A framework for the structural analysis of REST APIs. In Proceedings of
the 2017 IEEE International Conference on Software Architecture (ICSA), Gothenburg, Sweden, 3–7 April 2017; IEEE: Piscataway,
NJ, USA, 2017; pp. 55–58.