
2017 IEEE International Conference on Software Maintenance and Evolution

Supporting Microservice Evolution


Adalberto R. Sampaio Jr.∗, Harshavardhan Kadiyala§, Bo Hu§,
John Steinbacher†, Tony Erwin‡, Nelson Rosa∗, Ivan Beschastnikh§, Julia Rubin§
∗Federal University of Pernambuco, Brazil  §University of British Columbia, Canada  †IBM, Canada  ‡IBM, USA

Abstract—Microservices have become a popular pattern for deploying scale-out application logic and are used at companies like Netflix, IBM, and Google. An advantage of using microservices is their loose coupling, which leads to agile and rapid evolution and continuous re-deployment. However, developers are tasked with managing this evolution and largely do so manually by continuously collecting and evaluating low-level service behaviors. This is tedious, error-prone, and slow. We argue for an approach based on service evolution modeling in which we combine static and dynamic information to generate an accurate representation of the evolving microservice-based system. We discuss how our approach can help engineers manage service upgrades, architectural evolution, and changing deployment trade-offs.

I. INTRODUCTION

Cloud platforms offer pay-as-you-go resource elasticity and virtually unbounded resources. However, to take advantage of these features, developers must judiciously distribute business logic on the platforms. Microservices [1] are a popular pattern for distributing functionality. A Microservice-Based Application (μApp) is a distributed system that consists of small, loosely coupled, mono-functional services (microservices) that communicate using REST-like interfaces over a network. Microservices are typically developed and deployed independently, resulting in polyglot μApps that rapidly evolve and are continuously re-deployed.

Understanding a single microservice may be straightforward, but μApps often contain dozens of inter-dependent microservices that continuously change. Monitoring and logging stacks for microservices, such as the ELK stack1, are essential to understanding the microservices in a μApp and are broadly adopted. Unfortunately, logs produced by such stacks contain low-level information for a single deployment. Reconciling the view of the deployed version of the system with the historical view of changes being introduced requires interpretation by the developer.

For example, a log may record a failing REST invocation against a particular URL, but it is up to the developer to determine if this invocation was introduced in a recent change and requires fixing, or if it indicates an undesirable dependency that should rather be eliminated. Furthermore, non-trivial tasks require piecing together logged information from multiple sources, such as multiple system logs, container infrastructure data, real-time communication messages, and more; collecting and analyzing such information in the context of an evolving system relies on non-trivial knowledge and effort.

In collaboration with our industrial partner, IBM, we identified several evolution-related maintenance tasks that are challenging for microservice developers. Supporting these and similar tasks is the focus of our work. Next, we overview the tasks and briefly outline the challenges they entail.

• Checking for upgrade consistency. Microservices are developed and evolve independently, yet the μApp must remain coherent and functional. Determining compatibility and consistency between microservice versions is a continuous challenge for developers. Today, developers manually identify microservice dependencies and either engage with other developers who own that microservice or evaluate the dependency through code inspection.

• Identifying architectural improvements. An evolving μApp will experience software architectural corrosion, such as a decrease in cohesion and an increase in coupling between related services. Today, detecting such architectural problems and evolving microservice architectures are manual and highly involved processes that require global knowledge of microservice inter-dependencies.

• Evaluating changing deployment trade-offs. Microservices offer extensive deployment flexibility. For example, two services can be co-located as two containers on the same machine, as two containers in one VM, or as two VMs on the same machine. A poor deployment choice can increase cost and hurt performance, scalability, and fault tolerance. Furthermore, these decisions must be re-evaluated as the μApp evolves. Today, developers evaluate changing deployment trade-offs through trial and error, without a systematic strategy or much tool support.

In this paper, we propose an approach for combining structural, deployment, and runtime information about evolving microservices in one coherent space, which we refer to as a service evolution model. By aggregating and analyzing information in the model, we aim to provide actionable insights, assisting μApp developers with maintenance and evolution tasks.

In Section III, we introduce the proposed model. We also describe a preliminary design of a system for populating the model by collecting information from a variety of sources, both static and dynamic. In Section IV, we discuss how the information captured in the model helps developers address

1 https://logz.io/learn/complete-guide-elk-stack/

978-1-5386-0992-7/17 $31.00 © 2017 IEEE


DOI 10.1109/ICSME.2017.63
Authorized licensed use limited to: POLITECHNIKI WARSZAWSKIEJ. Downloaded on December 05,2023 at 13:09:32 UTC from IEEE Xplore. Restrictions apply.
Fig. 1. Service evolution model. (Class diagram spanning three panels: Architectural Layer, Infrastructure Layer, and Instance Layer; elements include Application, ApplicationVersion, Service, ServiceVersion, Operation, OperationVersion, Scenario, Message, ServiceReplica, Host, Location, Provider, Metric, and Environment.)

the above tasks. Next, we overview the requisite background on microservices.

II. BACKGROUND ON MICROSERVICES

High decoupling is a cornerstone of the microservice pattern [1], an architectural pattern of service-oriented computing [2]. While a consensus has not been reached on what exactly differentiates a microservice from traditional service-oriented architecture (SOA) services [3], most agree that a microservice can be defined as decoupled and autonomous software, having a specific functionality in a bounded context. Microservices are independently managed and upgraded. They communicate using lightweight protocols and are usually deployed inside containers, a lightweight alternative to traditional virtual machines.

The decoupling provided by microservices, together with agile software delivery and deployment processes [4], [5], decreases the complexity of tasks like upgrades and replication. At the same time, using microservices typically increases the number of interrelated components that make up an application, which creates new consistency issues and poses challenges to evolving microservice-based applications.

To make matters worse, an important feature of microservices is their ability to scale in/out by removing/creating microservice replicas as necessary. This causes microservice instances to have a short lifetime, inducing further dynamism and complexity.

III. SERVICE EVOLUTION MODEL

We propose a model for microservices and their evolution in Fig. 1. This model is divided into three layers: the Architectural layer (unshaded elements) captures the topology of a μApp. The Instance layer (black elements) captures information about service replicas and upgrades, and the flow of μApp messages; this layer links the topology outlined in the Architectural layer with deployed microservice instances. The Infrastructure layer (gray elements) captures deployment parameters.

Next, we describe each layer (Section III-A) and how we populate the model with concrete information from a μApp deployment (Section III-B).

As our running example we use a simplified version of an open-source ToDo μApp2 in Fig. 2, which consists of three microservices: Frontend, Processing, and Database, each deployed in its own container. Frontend allows new users to log in (via the \login\<username> operation) and, for already logged in users, to retrieve the list of their todo items (\list-todos\<username>). Frontend communicates with the Processing microservice to obtain information about a specific user (\users\<username>) and to retrieve all todo lists of a specific user from the database (\todos\<username>). The database access is managed by the Database microservice, which provides access to the list of all users (\users) and all todo items (\todos).

Fig. 2. ToDo application architecture. (Diagram: Frontend, Processing, and Database containers on one host, with their operations.)

A. Model Description

A μApp is represented by the Application element in Fig. 1, which consists of a set of Services, each exposing a set of Operations. For the example in Fig. 2, the Frontend service exposes two operations: \login\<username> and \list-todos\<username>. External services are marked using the isExternal flag; these are “black-box” services managed by third-party organizations.

2 https://github.com/h4xr/todo
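As a rough illustration only (not the authors' implementation; the class and field names below are our own guesses based on the element names in Fig. 1), the Architectural-layer elements just described might be sketched as plain data classes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    # An operation exposed by a service, e.g. "\login\<username>".
    path: str

@dataclass
class Service:
    # A microservice in the uApp; is_external marks "black-box"
    # services managed by third-party organizations.
    name: str
    is_external: bool = False
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Application:
    # A uApp is a named collection of services.
    name: str
    services: List[Service] = field(default_factory=list)

# The ToDo example from Fig. 2: Frontend exposes two operations.
frontend = Service("Frontend", operations=[
    Operation(r"\login\<username>"),
    Operation(r"\list-todos\<username>"),
])
todo_app = Application("ToDo", services=[frontend])
```

This mirrors only the Architectural layer; version, replica, and infrastructure elements would attach to these classes in a similar fashion.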

Fig. 3. Two versions of Frontend, differing in their supported operations. (Diagram: v1 exposes \login\<username> and \list-todos; v2 adds \create-todo\<todo>.)

Fig. 4. An example deployment of the Frontend service in Fig. 2. (Diagram: ToDo ApplicationVersions 1 and 2 on the IBM Bluemix Provider; ServiceVersion frontend.1 runs replica Frontend.1.A on Host vm1 in the EastCoast Location; ServiceVersion frontend.2 runs replicas Frontend.2.A and Frontend.2.B on Host vm2 in the WestCoast Location.)

A Scenario describes a high-level use case of the application and is realized by an ordered list of operations executed by services. In the ToDo application, such scenarios are users logging into the application and retrieving their todos. The login scenario is realized by the Frontend \login\<username> operation, followed by the Processing \users\<username> operation and the Database \users operation. A scenario specifies the allowed order of operations, helping to detect faulty behaviors: there is no scenario where the Processing \todo\<username> operation precedes the Frontend \login\<username> operation.

The ServiceVersion and OperationVersion elements keep track of changes in services and their interfaces. Any upgrade of a microservice creates a new ServiceVersion element. In addition, if the upgrade involves an operation change, a new OperationVersion element is created and attached to that new ServiceVersion element. For example, adding the \create-todo\<todo> operation in the Frontend microservice, as shown in Fig. 3, will create new ServiceVersion and OperationVersion instances.

In canary releases3, multiple services and multiple versions of the same service can run in parallel, as part of the same application. The ApplicationVersion element groups all service versions in a particular configuration. A sequence of ApplicationVersions captures the evolution of an application over time.

To model scale-in and scale-out of services, multiple identical instances of a service version are represented by the ServiceReplica element. ServiceReplicas are Hosted by containers or by physical and virtual machines, depending on the Environment made available by the cloud Provider. Common cloud providers are Amazon AWS, Microsoft Azure, IBM Bluemix, and Google Cloud Platform, each offering several hosting environments.

Hosts can be deployed in multiple geographic Locations. Fig. 4 shows a snippet of the deployment model for the Frontend service of the ToDo application. In this example, the Frontend.1 version is hosted by VM1 on the East Coast and runs one replica: Frontend.1.A. The Frontend.2 version is hosted by VM2 on the West Coast and runs two replicas: Frontend.2.A and Frontend.2.B. All replicas, on both coasts, correspond to different versions of the Frontend service.

To optimize deployment options as the application evolves, we periodically monitor and store Metrics related to hosts' CPU load, memory utilization, traffic and latency of requests from a certain area, etc.

A core element of our model is Message. Each Message represents a uniquely-identified call issued by the source microservice to a particular API (i.e., Operation) exposed by the destination microservice. For the example in Fig. 2, the Processing microservice exposes the \users\<username> operation, which can be called by the \login\<username> operation of the Frontend microservice. Each Message carries the timestamp of the request, the total time elapsed between issuing the request and obtaining the response, and the time spent in processing the request by each downstream microservice.

Messages realizing the same Scenario are grouped together via a correlationId. For example, when both User A and User B log into the ToDo application, they execute the same login scenario, which involves the same sequence of messages but with different correlation ids: all messages corresponding to the User A login are correlated with each other and are distinct from those of User B.

B. Towards Populating the Model

We generate the model by using information from system logs, container infrastructure data, and messages over protocols like HTTP. More specifically, we extract information about microservices from hosts' meta-data and configuration files, such as deployment files in Kubernetes4. To identify operations and their association with services, we rely on a variety of sources: when available, we extract information from API gateways combined with service discovery tools, such as Zuul5. We also inspect documentation in tools such as Swagger6, if that information was published by the developers. We correlate and augment the extracted information by monitoring HTTP messages between services to reveal the used operations.

We generate message elements by using distributed tracing mechanisms, such as Zipkin [6]. We use correlationIds in

3 https://martinfowler.com/bliki/CanaryRelease.html
4 https://kubernetes.io/
5 https://github.com/Netflix/zuul
6 http://swagger.io/

Fig. 5. Retrospective and prospective model analysis. (Diagram: a timeline of model snapshots — Init Model, …, Prev Model, Curr. Model at "Now", and Future Models 1 and 2 — with retrospective analysis spanning current and past models and prospective analysis spanning current and future models.)

Fig. 6. Refactored ToDo application architecture. (Diagram: the Processing microservice from Fig. 2 split into separate Todo and User microservices, alongside Frontend and Database.)

HTTP requests if developers follow the Correlation Identifier pattern [1]. In case this information is missing, we plan to implement dynamic information flow analysis techniques to correlate input messages with the outgoing requests they trigger.

Scenarios can be identified by grouping requests with the same correlationId. In such an implementation, each operation that is the first point of contact for a user will generate a new scenario. To identify scenarios, we plan to analyze application test cases, with the assumption that all messages generated by a test contribute to one high-level scenario.

By querying cloud provider APIs, we plan to extract information about the properties of the provider, such as the data center location, hosts, etc. Interfaces provided by container orchestration systems, such as Kubernetes, can also be used to obtain notifications of new versions and newly created replicas. Performance metrics, like network throughput, CPU, memory, and disk usage, can be periodically collected using monitoring mechanisms such as cAdvisor7.

Feasibility. To assess the feasibility of the proposed approach, we implemented an initial prototype of the data collection system on top of Kubernetes, the ELK stack, and an HTTP monitor. One major challenge for our collection and, at a later stage, analysis system is the sheer amount of data that we collect. We intend to utilize graph databases, such as IBM Graph8, which are designed to store large and complex networks of inter-related data. For host and network metrics, we intend to use data stores built for time-series data, such as InfluxDB9. Moreover, we intend to periodically compress historical data, keeping only aggregated summaries and statistics.

IV. EVOLUTION USE CASES REVISITED

Our generated model captures information about an evolving μApp (Fig. 5). We envision two types of automated model analyses: retrospective (considering current and past models) and prospective (considering current and future models). Next, we describe how these analyses support the use cases from the introduction: checking upgrade consistency, suggesting architectural improvements, and evaluating deployment trade-offs.

Retrospective Analysis. Individual microservices depend on each other to function; a failure in a service might be caused by a change in an entirely different, dependent service. Comparing the sequence of messages in the faulty application scenario to that before the failure occurred, i.e., in the failing and the previous versions of the model, helps detect modified downstream service(s) involved in the scenario. Such services are more likely to be responsible for the fault and should thus be inspected first when checking for upgrade inconsistencies.

We can also use retrospective analysis to recommend architectural improvements. For example, we can use prior models to identify changes in the μApp topology w.r.t. its communication patterns. Changing coupling and cohesion of services can trigger topology re-organization, e.g., by merging interdependent services.

Splitting "imbalanced" microservices, whose operations exhibit different workloads, can help to scale these operations more accurately. For example, in our ToDo application, once users log in, they create and modify numerous todo items. As such, the login endpoint is underutilized compared to the endpoint that manages todos. Splitting this microservice into two separate entities, as shown in Fig. 6, makes it possible to scale up the Todo microservice while avoiding simultaneous scaling of the User microservice.

The history of metrics stored in our model, correlated with the information on services and their locations, can be used to suggest deployment improvements. For example, the Frontend and Todo microservices in Fig. 6 are tightly coupled; we can thus suggest that these microservices be located close to one another. Yet the Todo and User microservices do not need such proximity. Likewise, if we observe a sudden decrease in the number of users logging in from a certain geographic location, we can recommend removing the replica at that location, saving money and resources.

In our work, we plan to identify a set of desired architectural and deployment patterns, as in the examples above, and monitor their preservation as the application evolves. That can be achieved by analyzing the collected information on services, operations, the messages they exchange, networking and CPU metrics, etc. Whenever the application's integrity or quality of service is compromised, our monitoring and analysis system will recommend appropriate improvements, such as replacing a microservice or moving microservices to different hosts.

7 https://github.com/google/cadvisor
8 https://www.ibm.com/in-en/marketplace/graph
9 https://www.influxdata.com/
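The scenario-identification step described above — grouping requests by correlationId and ordering each group by timestamp — could look like the following in outline. This is a minimal sketch under assumed inputs: the record schema (`correlationId`, `timestamp`, `operation` keys) and the function name are our own hypothetical choices, not the authors' implementation.

```python
from collections import defaultdict

def group_into_scenarios(messages):
    """Group message records by correlationId and order each group
    by timestamp, yielding one candidate scenario (an ordered list
    of operations) per correlation id."""
    groups = defaultdict(list)
    for msg in messages:
        groups[msg["correlationId"]].append(msg)
    return {
        cid: [m["operation"]
              for m in sorted(msgs, key=lambda m: m["timestamp"])]
        for cid, msgs in groups.items()
    }

# Two users executing the same login scenario produce the same
# operation sequence under different correlation ids.
trace = [
    {"correlationId": "A", "timestamp": 3, "operation": r"\users"},
    {"correlationId": "A", "timestamp": 1, "operation": r"\login\<username>"},
    {"correlationId": "B", "timestamp": 4, "operation": r"\login\<username>"},
    {"correlationId": "A", "timestamp": 2, "operation": r"\users\<username>"},
]
scenarios = group_into_scenarios(trace)
```

Comparing such per-version operation sequences is also the core of the retrospective upgrade-consistency check: a sequence that diverges from the previous model's points at the modified downstream service.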

Prospective Analysis. Furthermore, we can use our model as a "sandbox" for exploring the space of possible architectural and deployment refactorings. We would instantiate several possible refactorings as new snapshots of the model (Future Models in Fig. 5) and evaluate their ability to handle the collected real-life μApp scenarios. That is, we will assess potential improvement suggestions by replaying the traces corresponding to the scenarios from the current model in the new model. If the new model withstands a battery of tests, we will issue a recommendation for the change/refactoring to the developers responsible for the relevant microservices.

V. RELATED WORK

Evolving architecture has been researched since the notion of software architecture was articulated. A key approach in this space that combines static analysis, dependency modeling, and evolving architectural concerns is by Sangal et al. [7]. Our approach is similar but targets the microservices domain, which is dynamic and requires runtime analysis.

Work on microservices. There is increasing interest in applying techniques from the software engineering [8], [9], [10], formal methods [11], and self-adaptive [12], [13], [14] communities to microservices. Our proposal is most similar to app-bisect [8], which models the evolution of microservices to help repair bugs in deployment. Our proposal is more general, as it addresses additional evolution-related maintenance tasks, such as deployment and architectural refactorings.

Modeling. Dependency modeling of services is an established topic [15]. Most recently, Düllmann and van Hoorn described a top-down approach to generate a μApp from a model [16]. By contrast, we propose a bottom-up approach that is closer to the work of Leitner et al. [17] and Brown et al. [18]. However, both these approaches only use network interactions between services to generate models and do not model microservice evolution.

Log analysis. Logs are a popular means of monitoring and analyzing software, particularly in the cloud [19]. State-of-the-art log processing systems are high-throughput, real-time, and capable of reconstructing rich session-level data from logs [6], [20]. Our work builds on these systems.

Supporting microservice evolution. Version consistency has been considered for runtime reconfiguration of distributed systems [21], fault-tolerant execution [22], and in other domains. We plan to build on this work and perform upgrade consistency checking at both the model and code levels.

Deployment trade-offs. Previous work considered deployment trade-offs in general distributed systems [23]. Recently, Tarvo et al. described a monitoring tool to support canary deployment [10], and Ji and Liu presented a deployment framework that accounts for SLAs [24]. We are interested in connecting evolving software engineering concerns with deployment trade-offs.

VI. CONCLUSION

Microservices offer a flexible and scalable means of distributing business logic. However, there are few tools to support developers in evolving microservices and the μApps they comprise. In this paper, we proposed a vision for combining structural, deployment, and runtime information about a μApp to help with evolution-related tasks. Our approach relies on distributed tracing, log analysis, and program analysis techniques, and we plan to fully realize and evaluate it in our future work.

ACKNOWLEDGMENTS

This work is supported by the IBM Canada CAS program and by CAPES Brazil, grant 88881.132774/2016-01.

REFERENCES

[1] S. Newman, Building Microservices. O'Reilly Media, Inc., 2015.
[2] F. Casati, "Service-oriented computing," SIGWEB Newsl., vol. 2007, no. Winter, 2007.
[3] O. Zimmermann, "Microservices Tenets: Agile Approach to Service Development and Deployment," Computer Science - Research and Development, vol. 32, no. 3, pp. 301–310, 2016.
[4] M. Hüttermann, DevOps for Developers. Apress, 2012.
[5] L. Bass, I. Weber, and L. Zhu, DevOps: A Software Architect's Perspective. Addison-Wesley, 2015.
[6] "Zipkin," last accessed: June 2017. [Online]. Available: http://zipkin.io/
[7] N. Sangal, E. Jordan, V. Sinha, and D. Jackson, "Using Dependency Models to Manage Complex Software Architecture," SIGPLAN Not., vol. 40, no. 10, pp. 167–176, 2005.
[8] S. Rajagopalan and H. Jamjoom, "App-Bisect: Autonomous Healing for Microservice-Based Apps," in HotCloud, 2015.
[9] V. Heorhiadi, S. Rajagopalan, H. Jamjoom, M. K. Reiter, and V. Sekar, "Gremlin: Systematic Resilience Testing of Microservices," in ICDCS, 2016, pp. 57–66.
[10] A. Tarvo, P. F. Sweeney, N. Mitchell, V. Rajan, M. Arnold, and I. Baldini, "CanaryAdvisor: A Statistical-Based Tool for Canary Testing (demo)," in ISSTA, 2015.
[11] A. Panda, M. Sagiv, and S. Shenker, "Verification in the Age of Microservices," in HotOS, 2017.
[12] S. Hassan and R. Bahsoon, "Microservices and Their Design Trade-Offs: A Self-Adaptive Roadmap," in SCC, 2016.
[13] G. Toffetti, S. Brunner, M. Blöchlinger, F. Dudouet, and A. Edmonds, "An Architecture for Self-managing Microservices," in AIMC, 2015.
[14] L. Florio and E. Di Nitto, "Gru: An Approach to Introduce Decentralized Autonomic Behavior in Microservices Architectures," in ICAC, 2016.
[15] C. Ensel, "Automated Generation of Dependency Models for Service Management," in OVUA, 1999.
[16] T. F. Düllmann and A. van Hoorn, "Model-driven Generation of Microservice Architectures for Benchmarking Performance and Resilience Engineering Approaches," in ICPE, 2017.
[17] P. Leitner, J. Cito, and E. Stöckli, "Modelling and Managing Deployment Costs of Microservice-based Cloud Applications," in UCC, 2016.
[18] A. Brown, G. Kar, and A. Keller, "An Active Approach to Characterizing Dynamic Dependencies for Problem Determination in a Distributed Environment," in IM, 2001.
[19] A. Oliner, A. Ganapathi, and W. Xu, "Advances and Challenges in Log Analysis," CACM, vol. 55, no. 2, pp. 55–61, Feb. 2012.
[20] Z. Chothia, J. Liagouris, D. Dimitrova, and T. Roscoe, "Online Reconstruction of Structural Information from Datacenter Logs," in EuroSys, 2017.
[21] X. Ma, L. Baresi, C. Ghezzi, V. Panzica La Manna, and J. Lu, "Version-consistent Dynamic Reconfiguration of Component-based Distributed Systems," in ESEC/FSE, 2011.
[22] P. Hosek and C. Cadar, "Safe Software Updates via Multi-version Execution," in ICSE, 2013.
[23] R. Guerraoui, M. Pavlovic, and D.-A. Seredinschi, "Trade-offs in Replicated Systems," IEEE Data Engineering Bulletin, vol. 39, no. 1, pp. 14–26, 2016.
[24] Z.-L. Ji and Y. Liu, "A Dynamic Deployment Method of Micro Service Oriented to SLA," IJCS, vol. 13, no. 6, pp. 8–14, 2016.