Best Practices in Implementing A Secure Microservices Architecture
Integrating Microservices Security
Considerations into the Engineering of
Trustworthy Secure Systems
The permanent and official location for Cloud Security Alliance Application Containers and
Microservices research is:
https://cloudsecurityalliance.org/research/working-groups/containerization/
© 2020 Cloud Security Alliance – All Rights Reserved. You may download, store, display on your
computer, view, print, and link to the Cloud Security Alliance at https://cloudsecurityalliance.org
subject to the following: (a) the draft may be used solely for your personal, informational, non-
commercial use; (b) the draft may not be modified or altered in any way; (c) the draft may not be
redistributed; and (d) the trademark, copyright or other notices may not be removed. You may quote
portions of the draft as permitted by the Fair Use provisions of the United States Copyright Act,
provided that you attribute the portions to the Cloud Security Alliance.
Team Leaders/Authors:
Anil Karmel
Marina Bregkou
Aradhna Chetal
Mark Yanalitis
Michael Roza
Authors:
Yeu Wen Mak
James Turrell
Atul Chaturvedi
Vishwas Manral
William Gafford
Michael Green
Ramaswamy Chandramouli
Alex Rebo
Amit Maheshwari
Shachaf Levy
Jane Odero Greene
Vrettos Moulos
Ricardo Ferreira
CSA Staff:
Hillary Baron
Marina Bregkou
Reviewers:
Marina Bregkou
Rohit Sansiya
Michael Roza
Vrettos Moulos
Prabath Siriwardena
Suhas Bhat
Carlos Villavicencio Sánchez
Ricardo Ferreira
David Cantrell
The earliest architecture for application systems is the “monolith” in which the entire application
is designed to run as a single process and is hosted on a resource-intensive computing platform
called the “server.” Although the application may be structured as different modules, a change in
any module requires the recompilation and redeployment of the entire application. Communication
between the modules is carried out by local procedure/function calls.
The next evolution of the application architecture is the “service-oriented architecture” (SOA). In
SOA, the entire gamut of solutions (e.g. supporting a business process) is broken up into multiple
parts or components called services. This approach makes the development, maintenance and
deployment of the entire application easier as operations can be limited to a specific service
rather than to an entire application. However, enabling services to work together to deliver the
required solution necessitates heavyweight middleware such as an “Enterprise Service Bus” and
communication protocols (e.g. web services). Ensuring the security of these middleware components
is a complex process. Furthermore, the nature of the connections provided by these middleware
components requires that the attributes of individual services, such as interfaces, be tightly
controlled, thus creating tight coupling between services.
The design of a microservices architecture is intended to address the limitations of SOA by enabling
the individual microservices to communicate with each other using lightweight protocols such as
Representational State Transfer (REST). Furthermore, the individual microservices can be developed
in platforms best suited for them, allowing for heterogeneity in addition to independent scalability
and deployment due to loose coupling between individual microservices. However, this approach
presents new security challenges, such as an increased attack surface due to the larger number of
components, and secure service discovery, since the location of service instances changes
dynamically.
NIST SP800-180 defines a microservice as “a basic element that results from the architectural
decomposition of an application’s components into loosely coupled patterns consisting of self-
contained services that communicate with each other using a standard communications protocol
and a set of well-defined APIs, independent of any vendor, product or technology.” Microservices
are built around capabilities, as opposed to services, and are typically deployed inside Application
Containers [NIST SP800-180].
The logical and architectural differences between monolithic and microservices architecture are
described in section 1.1. An example application to illustrate the differences in building blocks
between the two architectures is also provided. Section 1.2 analyzes the SOA landscape and
highlights its characteristics, constraints and deficiencies as well as describes how the MSA extends
some of the SOA concepts and addresses those deficiencies.
Section 1.3 describes the benefits of microservices architecture and provides specific use cases
where it enjoys advantages over SOA. The security and configuration challenges identified for MSA
form the basis for the issues addressed in the rest of this document.
Let’s clarify below the definitions of “Service,” “Service Orientation,” and “Architecture.”
A service is a self-contained piece of business functionality, with a clear purpose [Stojanovic et al.,
2004]. Services are modeled after one specific business process such as billing.
Service Orientation, according to [Erl, 2005; Rosen et al., 2012] is a collection of logic integrated in a
software component. This component is provided as a service to the component consumers, where the
consumer of the service can also be another service. These independent components, when connected,
can provide the full support for a specific business process, such as “managing customer account”.
Architecture is the description of the subsystems and the components of a software system and the
relationship between them.
The successful deployment of the SOA life cycle (modeling, assembly, deployment and management),
is done in the context of what is known as SOA governance. SOA governance is the process of
establishing the chain of responsibilities and communications, policies, measurements, and control
mechanisms that allow people to carry out their responsibilities.
• SOA governance is a process you implement, not a product you buy [1]. [Oracle 2013]
• SOA governance itself has a set of phases: plan, define, enable, and measure.
• SOA defines 4 basic service types:
Business services are coarse-grained services that define core business operations. They are usually
represented through XML, Web Services Definition Language (WSDL) or Business Process Execution
Language (BPEL).
Enterprise services implement the functionality defined by business services. They rely on application
services and infrastructure services to fulfill business requests.
Application services are fine-grained services that are bound to a specific application context. These
services can be invoked directly through a dedicated user interface.
Infrastructure services implement utility functionality, such as logging, auditing, and security, that is
not specific to any one business process.
1. Interoperability - Services should use standards that allow subscribers to use the service. This
allows for easier integration.
2. Loose Coupling - Services minimize dependencies on each other.
3. Knowledge Curtain / Service Abstraction - Services hide the logic they encapsulate from the
outside world.
4. Resource management/ Service reusability - Logic is divided into services with the intent of
maximizing reuse.
5. Service Discovery - Services should and can be discovered (usually in a service registry).
6. Structural Independence / Service Autonomy - Services should have control over the
resources the service consumes / relies upon.
7. Service Composition / Composability - Services “break” big tasks into smaller ones.
8. Granularity / Service Statelessness - Ideally, services should be stateless.
9. Service Quality - Services adhere to a Service-Level Agreement between the service provider
and the client.
10. High Cohesion - Services should ideally cater to a single task or group similar tasks as part of
the same module.
1. https://www.oracle.com/assets/implement-soa-governance-1970613.pdf
2. Erl cited 7 principles; others today list the 10 above. We include all 10 because we believe each of these principles deserves mention.
Since service orientation means different things to different people, the SOA challenges can be
viewed through three domain areas: business, technical/engineering, and operations.
There is dramatic variation in the implementation, attributes, descriptions, and datatypes of services;
therefore, it remains problematic to effectively manage the services.
In order to explore an initial classification of challenge areas related to service-oriented architecture
systems, the table below presents identified challenges for SOA implementation categorized in each
of the three domains3: [Lund University, 2015] [Beydoun et al, 2013]
Domain | Challenges

3. In SOA Adoption Challenges [Beydoun et al.], challenges fall into two domains: Technical and Organizational-Business.
1. Legacy application security: Legacy capabilities can be externalized with the help of the SOA-
based adapter. While doing so is feasible, the designer should factor in the limitations of the
capability’s existing security model. The proposed SOA adapter might not have insight into
the model’s proprietary nature.
2. Loose coupling of services and applications: SOA security should not violate the overall
software design principles, such as the solution component’s loose coupling. The service is
intended to provide a sustainable interface to a capability, shielding the service consumer
from the design and implementation details.
3. Services that operate across organizational boundaries: Traditional perimeter security
(enterprise security boundary) might be insufficient to mitigate risks presented by
transorganizational interactions. A set of compensating controls might be necessary to
maintain compliance with enterprise security policies.
4. Dynamic trust relationships: SOA participants have to establish mutual trust, possibly
including the parties responsible for the hosting and maintenance of the SOA registries.
Since these relations are of a dynamic nature, trust is also likely to have a dynamic nature.
5. Composite services: Service composition and aggregation are two forms of service
association. Service composition, however, might violate the SOA independence principle.
Hence, service orchestration and choreography are the preferred service association strategies.
6. Need to be compliant with a growing list of standards: SOA standardization is a concern
due to the growing number of security standards and regulations. However, this need for
compliance is no different from any other capability integration realms.
7. SOA flexibility: SOA solutions are intended to be flexible and customizable, which can improve
time-to-market for IT-supported processes and business solutions. Portfolio gap analysis,
transition planning, and architectural governance provide enterprise architecture with
opportunities for change and merge strategic business and IT objectives. By leveraging service-
oriented portfolio gap analysis, the enterprise planning cycle strategy can be transformed into a
roadmap of specific change initiatives, and the execution of that resulting roadmap can be
governed. The SOA lifecycle then drives solution delivery in the context of one or more specific
projects in the roadmap. Therefore, SOA solutions should be customized and extended as
appropriate in order to make the business processes relevant, personalized and
responsive. [Simplicable, 2011] [The SOA Source Book]
SOA deficiencies:
While SOA is considered a modern abstraction approach, it still assumes a centrally managed
abstraction. Some of the potential deficiencies align with the constraints above and with what
microservices deliver:
1. Using a service bus to deliver on SOA's promises of reliability, scalability, and communications
disparity comes with certain limitations (e.g. tight coupling, dependent processes) that are
uncharacteristic of microservices.
2. SOA is perceived as having a performance impact; however, this is partly attributable to
the protocol rather than to a properly designed service itself. Poorly orchestrated or
choreographed services, however, might violate SLA / QSA.
In a way, SOA can be seen as a stepping stone that helps with the implementation of other common
architectural patterns (i.e. strangler facades [4], anti-corruption layers [5]) on the path from a
monolithic to a microservices architecture (MSA), when an architect is designing DevSecOps
processes and the associated level of freedom. In this way, service boundaries are already clearly
defined and services can independently evolve into a microservices architecture.
Microservices are an emerging architectural approach that extends SOA practices and overcomes
traditional SOA deficiencies. [Haddad, CIOReview]
From an implementation developer's perspective, if not a computer science one, the shift to a
Microservices Architecture (MSA) can be extremely radical, although nothing radically new has
been introduced in the microservices architecture. Microservices architecture is the logical
evolution of SOA and supports modern business use cases.
The additional principles and patterns help teams deliver autonomous, loosely coupled, resilient,
and highly cohesive solutions. The term that describes this is Cloud-Native software development [7]
(see Section 2).
A microservices approach constrains SOA with additional principles and practices as mentioned
above, including: context mapping, loosely coupled/high cohesion, shared nothing architecture,
dynamic deployment, and parallel development.
4. Fowler M., (2004): StranglerFigApplication. https://martinfowler.com/bliki/StranglerFigApplication.html
5. Kalske (2017), University of Helsinki: Transforming monolithic architecture towards microservice architecture. https://core.ac.uk/download/pdf/157587910.pdf
6. Robert C. Martin (2005): https://drive.google.com/file/d/0ByOwmqah_nuGNHEtcU5OekdDMkk/view
7. These solutions are engineered to benefit from a cloud-native architecture. One of the four underlying pillars within each cloud application is: Microservices (along with Containers, Dynamic Orchestration, and Continuous Delivery).
SOA | Microservices
Uses an Enterprise Service Bus (ESB) for communication | Uses a simpler, less elaborate messaging system
Required to modify the monolith for some system changes | Creates a new service for some system changes

8. https://www.infoq.com/articles/engstrand-microservice-threading/
9. https://medium.com/@raycad.seedotech/monolithic-vs-microservice-architecture-e74bd951fc14
Microservices, however, enable a different approach to scaling. The increase in resources can
be applied selectively to those services whose performance is less than desirable, thus providing
flexibility in scalability efforts.
Some monolithic applications may be constructed modularly but may not have semantic or logical
modularity. By modular construction, what is meant is that the application may be built from a large
number of components and libraries that may have been supplied by different vendors and some
components (such as database) may also be distributed across the network.
In such monolithic applications, the design and specification of APIs (Application programming
interface) may be similar to that in microservices architecture. However, the difference between such
Another important difference is that using modules in a monolith is a weaker form of isolation
since the modules all run in the same process (generally) and developers can more easily create
cross-module dependencies, thus weakening cohesion. Comparatively, microservices establish
a strong isolation through a network boundary which is also usually reinforced by microservices
being maintained by separate development teams, making cross-microservice dependencies more
difficult, resulting in cleaner separation of responsibilities. This clean separation in turn is what makes
it possible to evolve microservices independently, since monolith module changes still require re-
generation/re-testing/re-deployment of the entire monolith’s artifacts. [Newman, 2015]
10. It should be clarified here that this is simultaneously a benefit AND a drawback, since moving to a microservices architecture is essentially moving to a distributed, networked system-of-systems architecture, with significant increases in complexity that come with the increases in flexibility. A lot of the MSA best practices are efforts to manage the complexity or to accept and work within its limitations.
To illustrate the logical differences discussed above, let's explore an example of a small web-shop or
online retail application. The main functions of this application include the following:
• A module that displays the catalog of products offered by the web shop with a picture of the
product, product number, product name and the unit price;
• A module for processing customer orders gathering information about the customer (name,
address, etc.) and the details of the order (name of the product from the catalog, quantity, unit
price, total price, etc.) as well as creating a bin containing all the items ordered in that session;
• A module for preparing the order for shipping, specifying the total bill of lading (the total
package to be shipped with the different items in the order and the quantity of each item),
the type of shipping (one-day, two-day, etc.), the shipping address, etc.;
• A module for invoicing the customer with a built-in feature for making payments by credit
card or bank account.
The differences in the design of the above web-shop application when it is designed as a monolith
and microservices-based are explored in table 4 below:
Application Construct: Communication between functional modules

Monolith: All communications are in the form of procedure calls or some internal data structures
(e.g., socket); for example, the module handling the order processing makes a procedural call to the
module handling the shipping function and waits for successful completion.

Microservices-based: The shipping functionality and the order processing functionality are designed
as independent services. Communication takes place as an API call across the network using a web
protocol. The order processing service can put the details of the order to be shipped in a message
queue to be picked up asynchronously by the shipping application, which has subscribed to the event.
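The asynchronous order-to-shipping handoff described above can be sketched with an in-process stand-in for a message broker. The service and field names here are illustrative only, not taken from the document; a real deployment would use a broker such as a message queue service rather than an in-memory queue.

```python
import queue

# Stand-in for a real message broker; the shipping service would normally
# subscribe over the network rather than share process memory.
shipping_queue = queue.Queue()

def order_service_place_order(order):
    """Order service: process the order, then publish a shipping event."""
    # ...payment processing and order persistence would happen here...
    shipping_queue.put({"order_id": order["id"], "items": order["items"]})

def shipping_service_poll():
    """Shipping service: asynchronously pick up the next shipping event."""
    event = shipping_queue.get()
    return f"shipping order {event['order_id']} ({len(event['items'])} items)"

order_service_place_order({"id": 42, "items": ["book", "pen"]})
print(shipping_service_poll())  # → shipping order 42 (2 items)
```

The key property mirrored here is that the order service does not wait for the shipping function to complete, unlike the monolith's blocking procedure call.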
This document takes the view that the difference between SOA and microservices does not
concern the architectural style as asserted in [Zimmerman 2016]. Hence the basic architectural
principles for MSA and SOA are identical and include the following: decomposition of application into
technology-agnostic self-contained modules called Services which embody technical concepts such
as Interoperability (services call each other directly without the intervention of a link library), High
Cohesion (services cater to only a single task or several tasks which are similar in nature) and Loose
Coupling (lack of dependency – a situation where a change in one service does not require a change
in any other service). Thus, the architectural concepts discussed for microservices below are directly
inherited from SOA.
Since each of the modules operates without coordination with others, the overall system state is
unknown to individual nodes. Further, in the absence of any global information or global variable
values to go by, the individual nodes make decisions based on locally available information
[Tanenbaum et al., 2007].
11. When presenting best practices in relation to security, as this document does, microservices architecture best practices deserve mention as well. One such best practice is that each microservice manage its own DB independently of all others; allowing 2..N microservices to share one DB introduces inter-service coupling at the DB layer. [Newman, 2015]
Business Drivers | Technical Drivers | Operational Drivers
High rate of churn or update of individual services expected. | If the API definition, service structure, and operational requirements are not stable, MSA may be a better candidate. | High rates of service update will likely be more complicated to support in a traditional SOA infrastructure.
Application development/deployment cycle is short. | MSA offers low overhead and rapid parallel development. | A more complex full SOA may take longer to develop and deploy.
New application (or business area) vs. building on top of a legacy application | New application or business area | Building on top of a legacy application
1.3.2 BENEFITS
As alluded to in Section 1.1.2., the characteristic differences between a Microservices Architecture and
SOA bring about advantages beyond those provided by SOA as follows:
• Business benefits
- Alignment with the business
- A microservice is based on a single atomic business function realizing atomic
business capabilities, owned by a single business unit. Thus, it is able to be updated
in concert with changes to the business unit.
- Agile and Responsive to business needs
- An entire application does not have to be taken down just to update or scale a
component. Each service can be updated or scaled independently. This gives the
business the ability to respond faster.
12. https://martinfowler.com/articles/microservices.html
Managing a multitude of distributed services at scale is difficult for the following reasons:
• Business challenges
- Project teams need to easily discover services as potential reuse candidates. These
services should provide documentation, test consoles, etc. so re-use is significantly
easier than building from scratch
• Technical challenges
- Interdependencies between services need to be closely monitored. Downtime of
services, service outages, service upgrades, etc. can all have cascading downstream
effects.
Even with the above challenges and drawbacks, deploying microservices makes sense when
applications are complex and continuously evolving.
As more organizations shift away from the monolith towards distributed, microservice-based
architectures, security concerns are increasing. In a microservices architecture, the attack surface
increases significantly and security concerns are exacerbated due to the various network connections
and APIs used to establish communication channels between all those components, creating additional
methods for attack, interception of in-transit data and manipulation of the data at rest. Microservices-
based architectures also expose a lot more of the system’s functionality directly to the network
which in turn increases the attack surface. The fact that multiple small containers may spread across
many different systems/hosts, etc., and that they may have to function cohesively means the threat
landscape is significantly increased. This also means each container has to be properly maintained,
managed, and secured, all of which is extremely time consuming without proper tools. Furthermore,
all controls are implemented by software and service developers, which can lead to inconsistencies
and gaps in security controls. Special tools and techniques are needed to achieve consistent
enforcement of security controls and of development processes for containers.
Another issue is that the standardized replicable nature of containers also means that a vulnerability
in one microservice can be quickly replicated many times over as the source code is reused.
With a traditional app/service monolith, the app/service components are typically hosted on one or
more servers in a network, which makes it easier to focus on the exposed ports and APIs, to identify
an IP address, and to configure a perimeter around it. With microservices, this gets much more
complex due to the many exposed ports, the API gateways, and a wider attack surface as the
economy of APIs grows; authentication is also very distributed. This essentially means that running a
distributed set of services will require enforcement of security controls in a distributed manner and
different sets of stakeholders will have to play their part and be onboard to implement a successful
security ecosystem for containers and microservices.
There are some clear security benefits for containers and microservices, especially given that the
applications' components and services are isolated. Microservice security can be implemented at a
much more granular level, with controls applied to specific services, APIs, and network
communication pathways.
Microservices do provide the ability to implement a defense in depth strategy, but the way
security controls are implemented is a huge shift from traditional methods. Within a microservices
architecture, there are multiple transactions and interactions. Thus, the security of the app/service
Threat models for microservices running in virtual machines are well known and some of them are
identified below.
There are many other threats in the VM environment. More details can be found in NIST 800-125 Rev 1.
Threat models and associated best practices for microservices running in containers are presented in
the following table:
# | Identifier | Weakness | Threat Impact | Mitigations to Consider

(Ref: http://gzs715.github.io/pubs/SECNAMESPACE_SEC18.pdf)

11 | Base Image Vulnerabilities | A container could be performing well but could have an older version of Java or other libraries running in it. | The older versions could lead to compromises. | Having a minimal base image is essential to reduce the attack surface. Use software from distributors who guarantee updates, preferably progressive rolling updates. Data should be separated from images.

14 | App Code Integration with Runtime Libraries in a Container | Any changes made to app code can impact the integrity of the software stack. | Multiple app code configs and the dependencies on infrastructure can lead to inconsistencies. | Build once and deploy immutable containers. When code is updated, build a new image and deploy to runtime through the CI/CD pipeline. Automated builds and deployments guarantee configuration uniformity and reduce risk to a great extent. Configuration should be applied to packages in the CI/CD pipeline.

Deployment
Network Isolation
Threats of running microservices in a Container PaaS remain essentially the same as those for
containers. With that said, developers of the microservices don't have to worry about the underlying
threats in the host infrastructure, since the platform provider is responsible for securing it.
Many cloud providers offer serverless compute (e.g. AWS Lambda). Following are some best
practices for microservices running in serverless environments.
It is important to invest in crafting suitable, minimal roles for each of the functions.
Additionally, it is important to ensure that each of the functions executes with the
smallest viable set of privileges, so the damage from any holes that slip through is
minimal. Roles and functions should be reviewed as often as possible. In serverless,
things that were once well-configured can suddenly be sub-optimal, as others might
have changed a role, policy, or function that makes some other part of the application
vulnerable. Developers should consider emerging technologies that can help craft these
policies and issue an alert any time things change.
• Secure Application Dependencies
- Functions often include dependencies, pulled in from npm (Node.js), PyPI (Python),
Maven (Java) or other relevant repositories. Application dependencies are prevalent
and frequently vulnerable, with new vulnerabilities disclosed regularly. The nature of
serverless makes managing third-party dependencies manually particularly challenging.
Furthermore, interdependent code bases can compound this issue (e.g. dependency
A in codebase imports dependency B that imports dependency C, which contains a
vulnerability). Securing application dependencies requires access to a good database of known
vulnerabilities.
Like any facet of cybersecurity, securing serverless applications requires a variety of tactics
throughout the entire application development lifecycle and supply chain. Stringent adherence
to best practices will help improve the overall security posture. However, proper development is
not enough. To achieve ideal protection, it is important to leverage tools that support and provide
continuous security assurance, attack prevention and detection, and deception.
2.1.3 AUTHENTICATION
In a monolithic environment, authentication and authorization are handled within the application
using a module that validates the incoming requests for required authentication and authorization
information, as shown in the following diagram; this module also allows authorized users to define
the roles and permissions and assign them to other users in the system to allow them access to the
secured resource.
In a microservices-based application, each microservice is deployed in isolation and must not have
the responsibility of maintaining a separate user database or session database. Authentication has
to be two-fold: first, a user authenticating to a service; and second, application-to-application
authentication, which is usually achieved using JWT tokens and mTLS. Moreover, there is a need
for a standard way of authenticating and authorizing users across microservices; Open Policy Agent
(OPA) can be used for this purpose. OPA decouples policy decision-making from policy enforcement.
When a microservice needs to make a policy decision, it queries OPA and provides structured
data (e.g. JSON) as the input. OPA accepts arbitrary structured data as input and generates policy
decisions by evaluating the query input against policies and data. OPA and Rego are domain-
agnostic, so microservices can describe almost any kind of invariant in the policies (e.g., which users
can access which resources, which subnets egress traffic is allowed to, which clusters a workload must
be deployed to, which registries binaries can be downloaded from, which OS capabilities a container
can execute with, and which times of day the system can be accessed). Policy decisions are not
limited to simple yes/no or allow/deny answers; like query inputs, the policies can generate arbitrary
structured data as output. Ideally, the authentication and authorization responsibility should be
separated into a dedicated Auth service that owns the user database and authenticates and
authorizes requests on behalf of the other services.
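The decoupling that OPA provides can be illustrated with a toy stand-in: the service hands structured input to a policy evaluator and enforces whatever decision comes back, instead of hard-coding authorization logic. The `evaluate_policy` function below is an illustrative substitute for a real Rego policy evaluated by OPA, and all names are hypothetical.

```python
def evaluate_policy(query_input: dict) -> dict:
    """Toy policy decision point: admins may do anything; users may only read.

    In a real deployment this logic would live in a Rego policy and be
    evaluated by OPA, queried with JSON input; here it is plain Python
    purely to show the decision/enforcement split.
    """
    role = query_input.get("role")
    action = query_input.get("action")
    if role == "admin":
        return {"allow": True}
    if role == "user" and action == "read":
        return {"allow": True}
    return {"allow": False}

def handle_request(user: dict, action: str, resource: str) -> str:
    """Policy enforcement point inside a hypothetical microservice."""
    decision = evaluate_policy(
        {"role": user["role"], "action": action, "resource": resource}
    )
    return "200 OK" if decision["allow"] else "403 Forbidden"

print(handle_request({"role": "user"}, "read", "/orders/42"))   # → 200 OK
print(handle_request({"role": "user"}, "write", "/orders/42"))  # → 403 Forbidden
```

Because the decision logic is external to the request handler, the policy can change (or return richer structured output) without touching the enforcing service.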
Different types of tokens can be used for authentication and authorization (e.g. JWT, JSON Web Signature
(JWS), JSON Web Encryption (JWE)). An encrypted token can be parsed only after it is decrypted with
the key that was used to encrypt it. Instead of sharing the key across microservices, it is beneficial
to send the token to the Auth service directly, letting it decrypt the token and authorize the request
on behalf of the service. The performance impact of this can be reduced by caching the pre-validated
tokens at each microservice for a minimum configurable amount of time, decided based on the
expiry time of the token. Expiration time is an important criterion when working with authentication
tokens: tokens with a very long expiry time should be avoided to reduce the risk of attacks that use
stolen tokens. Some examples of authentication token formats can be found in JWT RFC 7519,
JWS RFC 7515, and JWE RFC 7516. It is important to ensure the identity of the containers in which
services are hosted; communication between two containers should use mTLS, which authenticates
both endpoints. This approach also helps mitigate man-in-the-middle attacks that could lead to
tokens being stolen and reused to authenticate to the service.
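The validate-then-cache pattern above can be sketched with a deliberately simplified HMAC-signed token rather than a full JWT; the secret, claim names, and 60-second cache window are all illustrative assumptions, and a production service would use a vetted JWT library and managed keys instead.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; never hard-code real keys

def sign_token(claims: dict) -> str:
    """Issue a toy token: base64(JSON claims) + '.' + HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

_cache: dict = {}  # token -> (claims, cached_until)

def validate_token(token: str) -> dict:
    """Validate a token, caching the result for at most 60s (never past expiry)."""
    now = time.time()
    if token in _cache and _cache[token][1] > now:
        return _cache[token][0]  # pre-validated and still fresh
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] <= now:
        raise ValueError("token expired")
    # Cache no longer than the token's remaining lifetime.
    _cache[token] = (claims, min(now + 60, claims["exp"]))
    return claims

tok = sign_token({"sub": "order-service", "exp": time.time() + 300})
print(validate_token(tok)["sub"])  # → order-service
```

Capping the cache lifetime at the token's `exp` claim reflects the document's guidance that the cache window should be decided based on the token's expiry time.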
Interface inputs and outputs can be shielded and internal to the application’s various modules, or
exposed outside the application boundary as an interface that receives and transmits data to other
externalized interfaces.
All functional and nonfunctional requirements (NFR) should be supported by a common set of
software architecture principles that guide the design. Traceability between requirements and the
principles guiding the construction of an API is a functional requirement in itself.
API best practices that can serve as the foundation for security guidance include the following:
• Utilize both coarse- and fine-grained access control for interfaces and data, in combination
with token-based authorization and session-based access.
• By default, developers should practice service and interface segmentation.
• Developers should favor solutions that are highly partitioned, modular in design, and
composed of decoupled services.
• Standards-based messaging and encryption protocols for communication should be utilized.
In a microservices architecture, business functions are packaged and deployed as services within
containers and communicate with each other using API calls; it is recommended to implement a
lightweight message bus that supports different interaction styles. Service discovery
implementations vary along two dimensions: (a) the way clients access the service registry and
(b) whether the service registry is centralized or distributed.
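The service registry pattern can be reduced to two core operations, register and lookup. A minimal in-process sketch, with illustrative names (real registries such as Consul or Eureka add health checks, TTLs, and access control):

```python
class ServiceRegistry:
    """A minimal centralized service registry: services register their
    network location under a name, and clients look up instances by name."""

    def __init__(self):
        self._services = {}  # name -> set of "host:port" instances

    def register(self, name, instance):
        self._services.setdefault(name, set()).add(instance)

    def deregister(self, name, instance):
        self._services.get(name, set()).discard(instance)

    def lookup(self, name):
        # Returns all known instances; a client-side balancer picks one.
        return sorted(self._services.get(name, set()))
```

The registry is the component that must be hardened, as discussed next: whoever can write to it controls where client traffic goes.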
Enterprises have a choice of having a centralized service registry wherein all services are published
and registered at one point. With that said, this approach can potentially become a single point of
failure impacting confidentiality, integrity and availability. Compromise of the registry can result in
the deployment of malicious services which appear legitimate to other services. Service registries
should be protected from unauthorized users via appropriate access controls. Proper detective
controls should also be in place for unauthorized changes to the service registry, as a malicious entity
could potentially publish services which are not legitimate or approved for use and use them to
compromise other services, thus impacting the integrity of the enterprise platforms.
Authentication and authorization in a microservices architecture, however, involve more complex
scenarios, spanning both user-to-service communications and machine-to-service (app-to-service
and service-to-service) communications [Medium, 2018].
The following architectural approaches address authentication and authorization in
microservices:
• Authorization Handling
• Authorization between Services
• API Access Control
• Firewall
• Secrets Management
The term “authorization” refers to what one is allowed to do (for example, access, edit, or delete
certain documents) and takes place after identity verification passes.
Note that authorization is a separate step from authentication, which confirms the identity of the
party taking the action.
While entitlements are now implemented in Docker as a proof of concept, these mechanisms are not
ready to be used in production and are still a work in progress. [LWN.net, 2018]
Labels-based Identity. Labels define which containers can access which resources, requiring a
container to have the label of any secret it accesses. Labels allow users to consult a predefined policy
and authorize containers to receive secrets. Containers and their labels should be bound together.
Client Token with API Gateway. Here, contrary to the basic process of token authentication, the API
Gateway is added as the entry point for external requests. All requests pass through the API
gateway, which effectively hides the microservices behind it. On request, the API gateway translates
the original user token into an opaque token that only it can resolve. The API gateway can revoke
the token when the user logs out; hiding the real token from the client also protects it from being
decrypted.
Executing authorization in every microservice enables fine-grained object permissions, as well as
different user authentication mechanisms for different microservices, such as:
• API token
• OAuth
• Federation
API Token: Using an API token instead of the username/password directly to access the API reduces
the risk of exposing the user’s password, and allows the token’s permissions to be revoked at any
time without changing the password. Here, a third party uses an application-issued API token to
access the application’s data. The token is generated by the user in the application and provided
for use by third-party applications.
OAuth: One of the best approaches is the OAuth delegation protocol combined with a JSON Web Token
(JWT). JWT is an open standard (RFC 7519) that defines a compact and self-contained way
for securely transmitting information between parties as a JSON object. This information can be
verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC
algorithm) or a public/private key pair using RSA or ECDSA.
The token can be signed with a “private/public key” method. This way, other microservices only need
to contain the code for checking the signature and knowing the public key. As the token is sent with
the authorization header as a bearer token, it can be evaluated by the microservices. Because
validation relies only on the signature, there are no restrictions regarding URLs. Hence, cross-site
authorization is also possible, which in turn supports single sign-on (SSO) and is quite useful to
users. [LeanIX, 2017]
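The HMAC-signed JWT flow mentioned above can be sketched with the standard library alone. This is a minimal illustration under the HS256 (shared secret) variant, not a production implementation; real services should use a vetted JWT library that also validates the header and rejects algorithm-confusion attacks:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Builds a compact HS256-signed JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Checks the signature and expiry, returning the claims on success."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

With the public/private key variant described above, `sign_jwt` would use the private key and each microservice would hold only the public verification key.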
The API Gateway best practice here is “least privilege,” which restricts access to only the
resources a user requires to perform their immediate job functions.
SPIFFE/SPIRE
SPIFFE (pronounced Spiffy) stands for Secure Production Identity Framework For Everyone. It is
an open-source workload identity framework that supports distributed systems deployed in on-
premises, private cloud, and public cloud environments. SPIFFE provides a secure identity, in the
form of a specially crafted X.509 certificate, to every workload in a modern production environment.
SPIRE (the SPIFFE Runtime Environment) is a software system that exposes an API (the SPIFFE
Workload API) to other running software systems (workloads) so they can retrieve their identity, as
well as documents (SVIDs) to prove it to other workloads, at run-time. This proof of identity can then
be used as the primary authentication mechanism for a workload when it accesses other systems.
SPIRE is open source and enables organizations to provision, deploy, and manage SPIFFE identities
throughout their heterogeneous production infrastructure.
The Workload API here (similar to other systems’ APIs), does not require a calling workload to have any
a priori knowledge of its own identity or possess any authentication token when calling the API. This
avoids the need to co-deploy any authentication secrets with the workload.
Unlike many other APIs, however, the SPIFFE Workload API runs on and across multiple platforms
and can identify running services at both the process level and the kernel level, which makes it
suitable for use with container schedulers such as Kubernetes.
In SPIRE, all private keys (and corresponding certificates) are short-lived, rotated frequently and
automatically in order to minimize the possibility of a key being leaked or compromised. Workloads can
request new keys and trust bundles from the Workload API before the corresponding key(s) expire.
Firewall
To achieve a much smaller attack surface, consider platforms that allow egress traffic to be
controlled with a firewall. [Apriorit, 2018]
A distributed firewall (a service mesh) with centralized control, which gives users more granular
control over each and every microservice, is also an important component. In this way, developers
can define with fine granularity which connections are allowed and which are not. In practice,
each service has its own micro-firewall; hence, if one service is breached, the rest remain
secure. [Project Calico]
Security policies that allow fine-grained security of containers and services, protecting services
from each other and protecting the orchestrator itself, are a good approach to keep services
secure. Operators can turn the high-level security policy definitions into a set of firewall rules for
each container, and then verify and maintain the firewall rules for each of those containers. Lastly,
Secrets are credentials like API tokens, SSH keys, passwords, etc. which a service needs to authenticate
and communicate with other services.
Authorization for APIs needs to be implemented in a distributed manner, which can be challenging,
and secrets management and key distribution for distributed applications introduce another
headache for the developer. Putting secrets in the container image exposes them to many users and
processes and puts them in jeopardy of being misused.
Automated credential management systems lean toward a Security by Design approach to create
microservices environments. This gives developers powerful tools (i.e. DevOps Audit Toolkit13) to
automate secret management immediately while maintaining critical separation of duties.
• Centrally managing secrets and container access to them, with real-time visibility into
which containers are using those secrets.
• Injecting secrets into the container at runtime, and ensuring that the secrets are stored in
memory and accessible only to the designated container.
• Ensuring that the secret is delivered encrypted to the container that needs it. [Aquablog, 2019]
In order to centrally manage secrets, a tool for secrets management, encryption as a service, and
privileged access management can be used.
• Static Secrets: control who can access secrets. Secrets should be encrypted prior to being
written to persistent storage, so that gaining access to the raw storage is not enough to read them.
• Secret Engines: store, generate, or encrypt data. Some secret engines simply store and read
data; other secret engines connect to other services and generate dynamic credentials on
demand. Data should be encrypted and decrypted without being stored. This allows security
teams to define encryption parameters while developers store encrypted data in a location
such as SQL without having to design their own encryption methods.
• Secret as a Service: Dynamic Secrets generate database credentials on-demand so that
each application or system can obtain its own credentials, and its permissions can be tightly
controlled. Dynamic secrets are automatically revoked once their lease expires.
• Database Root Credential Rotation enables the rotation of the database root credentials
for those managed by a system. The longer a secret exists, the higher the chance for it to be
compromised. Frequent rotation helps to reduce the secret lifespan and with that the risk of
exposure.
13
Separation of Duties in the DevOps Audit Toolkit: https://www.oreilly.com/library/view/devopssec/9781491971413/ch05.html
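The “Secret as a Service” and credential rotation bullets above describe lease-bound dynamic credentials. A minimal sketch, assuming an in-memory lease table and illustrative class and method names (a real engine would also create and drop the database user):

```python
import secrets
import time

class DynamicSecretEngine:
    """Issues short-lived database credentials on demand and revokes them
    when the lease expires, shrinking each secret's lifespan and with it
    the window of exposure."""

    def __init__(self, lease_seconds=3600):
        self.lease_seconds = lease_seconds
        self._leases = {}  # username -> lease expiry timestamp

    def issue(self, app_name, now=None):
        now = time.time() if now is None else now
        # Each application gets its own unique, tightly scoped credential.
        username = f"{app_name}-{secrets.token_hex(4)}"
        password = secrets.token_urlsafe(16)
        self._leases[username] = now + self.lease_seconds
        return {"username": username, "password": password}

    def revoke_expired(self, now=None):
        now = time.time() if now is None else now
        expired = [u for u, exp in self._leases.items() if exp <= now]
        for u in expired:
            del self._leases[u]  # a real engine would also drop the DB user
        return expired

    def is_valid(self, username, now=None):
        now = time.time() if now is None else now
        return self._leases.get(username, 0) > now
```

Because every application holds a distinct credential, revoking one lease affects only that application, and stolen credentials age out on their own.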
As with the cloud services model, microservices involve a shared responsibility between customer
and CSP depending on the deployment style chosen; this is shown in the microservices responsibility
model below. However, customer responsibility and cloud location are not the only decisions involved
in the style and strategy of microservices deployment. Other considerations of the deployment are
speed, portability and control. Balancing these factors will be key in the style of deployment. Strategy
around the deployment will be based heavily on the workload of the microservice and its suitability for
the organization to move it to the Public or Private cloud exposing APIs for integration.
A microservices-based application may contain stateful services in the form of a relational database
management system (RDBMS), Not Only SQL (NoSQL) databases, and file systems. These microservices
are packaged as containers with unique attributes. Technologies
supporting creation of a separate persistence layer that is not host-dependent should be used.
This helps address the host portability problem and maximize availability with minimal delay on
consistency.
Typically, stateful services offload persistence to the host or use highly available cloud data stores
to provide a persistence layer. Both approaches introduce complications: offloading to the host
makes it difficult to port containers from one host to another, and highly available data stores trade
consistency for availability, meaning that developers have to design for eventual consistency in their
data model.
In a stateless scenario, microservices do not keep a record of state, making it difficult to manage
session level security.
Because the expiration time is embedded in the token, active sessions cannot be revoked before
they expire. One best practice for dealing with this concern is to define short expiration times
for each token along with managing the confidentiality of the token.
Since the security module of a microservices architecture is typically responsible for renewing the
token at each request, this should not pose a problem for service consumers.
To make the above happen, it is typical to have a container store data by attaching to storage outside
the container. This can be achieved by adding storage and volumes for a container. Adding storage
space to a host does not automatically make more storage available to containers running on that
host. There are, however, ways to mount and use host storage volumes within a container. Volumes
mounted on one container can also be made available to other containers. These are called data
volume containers, allowing services to share a volume from a source container with one or more
target containers. This approach has some advantages including sharing persistent storage across
several containers and providing a layer of abstraction around the bind mount.
Features for mounting storage volumes to containers allow developers to keep containers simple and
portable, while making the data accessible outside each container. Because images and containers
themselves consume space on the host system, being able to expand disk space on the host is beneficial.
Traditional VM tools do not support containers and cannot provide detection for them. A new class
of container security tools is available to help with runtime detection. In short, at runtime it
is important to follow the best practices below for effective security. These best practices assume
that readers have selected a microservices deployment model leveraging containers and followed the
best practices outlined in chapter two of this document.
Developers and Operators should ensure microservices performing similar functions and similar
sensitivities are hosted together to segregate them from apps with different sensitivities. Consistent
policies and controls should be applied based on the sensitivity of applications running on containers
on the same host as well as across the fleet.
Most vulnerability scanning tools are built for VMs; for containers it is critical to use tools that
support scanning of containers. Vulnerabilities must be actively addressed to reduce replication of
the same vulnerabilities at image/container deployment.
In a container build process, an applications’ components are built and placed into an image. The
image is a package that contains all the files required to run a container. An app image should include
only the executables and libraries required by the app, with all required OS functionality provided
by the underlying OS Images. App developers should use layering and master images in read-only
mode where practicable so no one can change the master image and introduce vulnerabilities.
Changes required to the master image should go through the CI/CD pipeline and security validation
processes. The goal should be to simplify the build process for application developers so they do not
have to worry about the underlying infrastructure and patching.
The build creation process is usually managed by developers, and as such, they are responsible for
packaging an app for handoff to testing and validation. Security bug bars and quality gates should
be employed and enforced in the CI/CD pipeline before a build can be promoted to the next stage.
Known as DevSecOps, this paradigm incorporates security from inception, instead of adding it after
the application is completed. “Shift Left” security ensures developers address security and quality
issues at the earliest stages of the pipeline, with secure, high quality code deployed into production
to the farthest right. Having feedback loops at every stage of this pipeline and employing automation
and remediation for previous steps decreases the probability of security defects reaching production.
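A security bug bar gate of the kind described above can be reduced to a small check run in the pipeline before a build is promoted. The findings format, function name, and thresholds here are illustrative:

```python
def enforce_bug_bar(findings, bug_bar):
    """Fails a build when scan findings exceed the security bug bar.

    `findings` is a list of scanner results, each with a "severity" field;
    `bug_bar` maps a severity level to the maximum count allowed before the
    build may be promoted to the next stage."""
    counts = {}
    for f in findings:
        sev = f["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = [
        f"{sev}: {counts.get(sev, 0)} found, max {limit}"
        for sev, limit in bug_bar.items()
        if counts.get(sev, 0) > limit
    ]
    # (ok, reasons): the pipeline fails the build and reports the reasons.
    return (len(violations) == 0, violations)
```

A CI step would run this after the scan stage and fail the job when `ok` is false, so no build with outstanding critical findings reaches production.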
After image creation, images should be tested and accredited using test automation tools to validate
the functionality of the final application. Security teams should certify the images by ensuring all
mitigations have been applied that were found in static and dynamic code scans, and ensuring final
images are digitally signed. If there are vulnerabilities identified in the application services which
do not align with the security bug bars, then the complete build should fail. Ideally, this should be
enforced through an automated CI/CD pipeline with enforced security bug bars. While tools are the
preferred method, human verification is vital as tools are not infallible and fail to understand context.
Context is paramount to assess security defects in code reviews correctly, as a human can triage and
accurately calculate the risk to the enterprise. As such, using a code review framework can improve
collaboration, reduce security defects, and enhance standardization. There are several types of code
reviews; they generally fall into either formal code reviews (e.g. Fagan inspection) or lightweight
code reviews (synchronous, asynchronous, or instant code reviews).
Image Storage
Storing and accessing images in a central location (governed by IAM permissions) simplifies their
management and their deployment to multiple environments as needed. Image registries allow
developers to easily store images as they are created, tag and catalog images for identification
and version control to aid in discovery and reuse, and find and download images that other
developers have created. Registries may be private or third-party provided public registries.
Registries provide APIs for automation (e.g. automating the push of images that have been tested
and have met the security and quality bug bars).
It needs to be noted that third-party public repositories cede control over the supply chain to
an external party. This may be acceptable but needs to be thoroughly assessed for risk so an
organization can understand and have the opportunity to mitigate the risk they are taking on.
Security of images at rest for most enterprise registries is done by the service provider as part of
their software suite. For example, Docker DTR has Docker Security Scanning, which employs active
image scanning. In the case of public images, it is recommended to follow best practices such as
Enterprises typically have a central artifact repository that provides a single source of truth for all
components of an application. This allows build artifacts and release candidates to be managed in
one central location, provides transparency into component quality and security, supports license
management and phased workflows such as staging and release, and helps scale DevSecOps delivery
with high availability and clustering.
Security testing of code is an integral part of the CI/CD pipeline. There should be a gatekeeper
process in the pipeline that enables secure testing of code as it moves from one stage to another.
Apart from this, developers should be enabled to run as much testing as possible locally prior to
committing to the shared repository. Early detection is made possible by allowing tests to be run
early and often by developers alongside their code changes.
The CI/CD system itself should be containerized. An ephemeral environment possesses various
advantages: it executes the security testing on containers which abstracts the host systems and
provides standard APIs. This approach ensures that residual side effects from testing are not inherited
by subsequent runs of the test cases. It should be noted that stateless microservices make testing
much easier; therefore, they are more likely to be verifiably secure when delivered for deployment.
It is critical to ensure that CI/CD pipelines are isolated from the rest of the network and that
developers are not able to circumvent the pipeline partially or completely. At the same time,
identity and access management of the build pipeline is a critical aspect to implement. Appropriate
authentication and access permissions should be set up based on the roles of the Development,
Security, Operations, and QA/Test teams.
As opposed to monolithic applications, where components operate and communicate internally and
present a single point of failure, Microservice applications operate and communicate both internally
and externally. This decomposition or breaking down of the application into individual services, while
increasing the application’s resiliency, increases the attack surface that needs to be monitored and
protected.
Additionally, decomposing an application into separate components, which gives Microservices its
advantages in speed, resiliency, flexibility and scalability, also serves to provide security challenges
with respect to the orchestrated distributed system. To meet these security challenges, secure
development, implementation, operations and ongoing monitoring need to consider Policies,
Logging, Monitoring and Alerts as well as Incident Management and Platforms.
3.2.1 POLICIES
Policies are a necessary precursor to logging, monitoring, alerts and incident resolution as they
establish a baseline for action. Policies are standards that a monitoring system uses to evaluate
behavior, events, and status. Because applications have been broken down into components as
microservices, which may or may not be stored in containers, the specific function and behavior
of this combination or separation can be well understood and therefore predicted. This makes
setting specific operational policies and the monitoring of malicious behavior (deviation from
standard behavior) possible. The implementation of any security policies should be defined as part
of the configuration artifact generated for execution (configuration files should have the policies
configured in them for any tools to enforce them). These policies can include microsegmentation,
communication, configurations, interfaces, logging and alerting targets, etc.
Developers and Operators should employ the following best policy practices:
1. Set policies at the OS, network, host, container, and application/microservice levels;
2. Protect services using service-level policies to ensure security regardless of the number of
containers, microservices/applications, etc.;
3. Set up a central configuration server from which microservices can load properties files via
discovery.
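As noted earlier in this document, policy decisions need not be bare allow/deny answers: policies can produce structured data. A minimal sketch of a policy evaluation function in that spirit, similar in intent to engines such as Open Policy Agent (the policy schema and field names here are illustrative):

```python
def evaluate_policy(request, policy):
    """Evaluates a workload request against a declarative policy and
    returns a structured decision (allow flag, reasons, and constraints)
    rather than a bare yes/no."""
    allowed_registries = policy.get("allowed_registries", [])
    decision = {
        "allow": request["registry"] in allowed_registries,
        "reasons": [],
        # Structured output: constraints the caller must also enforce.
        "constraints": {"egress_subnets": policy.get("egress_subnets", [])},
    }
    if not decision["allow"]:
        decision["reasons"].append(
            f"registry {request['registry']!r} is not in the allow-list"
        )
    return decision
```

Keeping the policy in a configuration artifact, as recommended above, lets the same rules be enforced uniformly by whatever tool consumes them.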
3.2.2 LOGGING
One of the most important, if not the most important, tool available to determine what has happened
in a system taking advantage of a microservice/container strategy is the logging system. Logging is
a critical part of keeping microservices alive and healthy. Logs contain valuable information such as
stack traces and information about where data is coming from and can help with reassembly such
as in the event of a server crash. SSHing into a machine and looking at STDOUT may work for a
single running instance, but in the world of highly available and redundant microservices, all those
logs need to be aggregated in a common place where they can be queried against different criteria.
For example, if a single instance is bad, a SRE (site reliability engineer) might want to look at the
process ID (PID), hostname, and port. Aggregated logs provide unique insight and context given they
originate from services owned by different teams within the organization.
A log file is a record of what occurred (an event, transaction, message). A logging solution provides
for storing, recording, analyzing and reporting of these actions as they occur. One of the major
challenges to logging microservice/container activity is the ephemeral nature of this distributed
architecture. Simply stated, containers and the contained microservices do not exist indefinitely and
a well architected logging system needs to consider this.
Developers and Operators should employ the following best logging practices:
1. Tag all microservice request calls with unique IDs so any error can be traced to that call and
back to the server/container and or microservice (application) from which the error originated
even after destruction;
2. Code error responses with a unique ID so they (container and microservices errors) can more
easily be grouped and analyzed;
3. Structure Container and Microservice log data in a standard format (e.g. JSON);
4. Make all log fields in the chosen standard format searchable;
5. Log UTC time to order messages for aggregation, analysis and reporting purposes;
6. Log more data rather than less to avoid missing important information; if logging creates
excessive demands due to container and microservice data volumes, developers and
operators should be prepared to cut back and/or work with offline storage and analysis tools;
7. View the logs as a stream of data flowing via a dedicated log shipping container;
8. Forward, via a dedicated log shipping container, all logs to a centralized location to make
container and microservice activity reporting and monitoring easier;
9. Store logs externally to host due to container microservice unavailability (e.g. if container is
destroyed) and storage space resource availability;
10. Employ a tool purpose-built for containers and microservices (fast, able to handle large data
volumes, with visualization and AI) to store, aggregate, and report log activity.
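Several of the practices above (unique request IDs, UTC timestamps, a standard searchable structured format) can be combined in a small helper. The field names are illustrative:

```python
import json
import uuid
from datetime import datetime, timezone

def make_log_record(service, message, level="INFO", request_id=None):
    """Builds a single structured JSON log line: UTC timestamp for
    ordering during aggregation, a unique request ID for tracing a call
    across services, and standard searchable fields."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "service": service,
        # Reuse the caller's correlation ID when one exists, so the same
        # ID appears in every service the request passed through.
        "request_id": request_id or str(uuid.uuid4()),
        "message": message,
    })
```

Each line would then be forwarded by a log shipping container to centralized storage, where any field is queryable.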
Monitoring is used to ensure the system is operating as designed and the data is secure. Again,
a major challenge to monitoring and alerting microservice / container activity is the fact that
containers and microservices do not exist indefinitely.
Another major challenge is that traditional tools which were used for monitoring and alerting on VMs
are not very effective for containers; hence it is pertinent to implement tools which are container
aware and have visibility of the whole ecosystem, can detect malware infections, perform runtime
validation of container images, enforce network microsegmentation policies, identify container
breakouts, enforce resource policies, etc.
Alerts are messages/notifications generated by monitoring systems and sent to responsible
parties, notifying them of events or status that need to be reviewed in order to determine the who,
what, where, when, and why, and what mitigating actions, if any, are necessary.
Developers and Operators should employ the following best Monitoring Systems & Alerts practices:
Anyone who has ever dealt with information security knows the prevalence of security incidents and how
incredibly stressful they are for everyone involved. At the same time, preparation with tools, techniques
for identifying incidents, and the ability to contain and respond to them quickly is just as critical.
Just as technologies evolve, so do the challenges faced by incident response teams and digital forensic
investigators. Answers can be found in many places, not just on the hard disks of computers or servers.
Cloud storage and services, while highly convenient and cost-effective from a business perspective, are a
game changer from an evidence preservation perspective.
There is a definite overlap between the realms of incident response and digital forensics. Knowing how
to recognize this overlap and understand when an incident response is turning into a digital forensics
investigation, or, conversely, when an investigation requires that an incident response be triggered,
will unquestionably help teams deliver value to their organizations. Just as risk is balanced in other
aspects of information security, one should also balance the risk of not completing a digital forensics
investigation as a follow-up to an incident.
Containers are designed to be ephemeral, and in fact, multiple container security products play up this
property as a security feature. If a container runs for only five minutes, even if an attacker compromises
it, they will have access for only five minutes at a time. This property of containers runs contrary to the
fundamental forensics need to preserve evidence. Container images that start and stop constantly
represent not just moving targets, but targets that frequently cease to exist. However, the majority of
container platforms use a copy-on-write file system, which helps forensic investigators tremendously
when they are working with a running container. The underlying container image is stored in one
location; this image contains the configuration data and applications that form the container image.
Any changes made while the container is running will be written to a separate file and can actually be
committed into a new image on the fly, without affecting the running container. If a container is believed
to be compromised, a forensic investigator can run that newly committed image to explore its contents.
It must be noted that such an action results in the creation of a new container from the image, not the
exact same copy of the container that was originally committed. This differs from a VM snapshotting
approach, since running processes in the target container are not included in the image.
There are several examples of vulnerabilities that would permit malware to escape a container image
and access resources on the host machine. Security updates to container management platforms are
frequent; as soon as a bug is discovered it is patched. Therefore, if a container is believed to be affected
by some sort of malware, or other malicious actor, an investigator may consider simply treating the
container as ‘just another compromised application’ and can image the entire host machine.
Applications may be broken down into tens if not hundreds of microservices with different life
cycles, each interacting with one or more other microservices that may or may not be hosted in
containers in the cloud and that do not exist indefinitely. This makes the identification, logging,
categorization, prioritization, investigation, diagnosis, escalation, workaround or resolution, and
closure of incidents a challenge. This challenge can be most efficiently and effectively addressed
with a platform that combines monitoring with artificial intelligence.
Some companies (e.g. IBM X-Force, Facebook ThreatExchange, and AlienVault) provide access to
these platforms. Open Threat Exchange (OTX) offers free access to threat intelligence as well.
Microservices can leverage third-party community-powered threat data, to microsegment the
various services, effectively enabling collaborative defense.
There are bound to be times when a microservice's users face an error, and it is important to know what caused it. Developers should therefore code the response the client receives so that it contains a unique ID along with any other useful information about the error. This unique ID can be the same one used to correlate the requests. Having a unique ID in the response payload helps the consuming service identify problems more quickly, and the parameters of the request (date, time, and other details) help incident responders better understand the problem. It is also recommended to give support engineers access to this data via an open-source search and analytics engine.
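As a sketch of this practice, the following example builds such an error payload. The field names (errorId, detail, timestamp) and the behavior are illustrative assumptions, not a prescribed schema; the essential point is reusing the request's correlation ID.

```python
import json
import uuid
from datetime import datetime, timezone

def build_error_response(correlation_id, status, message):
    """Build a JSON error payload carrying the request's correlation ID.

    Field names here are illustrative, not a standard; the key idea is
    reusing the ID that already correlates the request, so responders
    can join error responses with logs across services.
    """
    body = {
        "errorId": correlation_id,  # same ID used to correlate the request
        "status": status,
        "detail": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(body)

# Normally generated when the request first enters the system.
cid = str(uuid.uuid4())
payload = json.loads(build_error_response(cid, 503, "inventory service unavailable"))
print(payload["errorId"] == cid)  # the consumer can tie the error to its request
```

A consuming service that logs this `errorId` alongside its own events lets responders trace one failed request across every hop.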
It is almost impossible to have a single defined format for log data; some logs might need more fields than others, and those that do not need the excess fields would waste bytes. Microservice architectures compound this issue by using different technology stacks, which affects the log format of each service: one service might use a comma to separate fields, while others use pipes or spaces. All of this can get complicated. Hence, it is important to simplify the parsing of logs by structuring log data in a standard format like JavaScript Object Notation (JSON). JSON allows multiple levels of data so that, when necessary, analysts can get more semantic information in a single log event; alternatively, schema-on-read can format semi-structured data on the fly. Parsing is also more straightforward than dealing with service-specific log formats: with structured data, the format of the logs is standard even though individual logs may have different fields.
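A minimal sketch of this idea using Python's standard logging module: every record is emitted as one JSON object, so the envelope stays uniform even when individual events carry different fields. The envelope fields and the `context` convention are illustrative choices, not a mandated format.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render every log record as a single JSON object.

    Fields can differ per event (callers attach extras), but the
    envelope stays uniform, so one parser handles all services.
    """
    def format(self, record):
        event = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge any structured context the caller attached via `extra=`.
        event.update(getattr(record, "context", {}))
        return json.dumps(event)

logger = logging.getLogger("orders")  # hypothetical service name
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Two events with different fields, one uniform envelope:
logger.info("payment declined", extra={"context": {"orderId": "o-17", "retries": 2}})
logger.info("startup complete")
```

Any JSON-aware pipeline can now parse both events with the same code path, regardless of which service emitted them.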
Contextualizing Logs
Developers and Operators should employ the following log best practices, ensuring logs include:
Microservice communication styles are mainly about how services send or receive data from one another. The most common communication styles used in microservices are synchronous and asynchronous.
REST does not depend on any particular implementation protocol, but the most common implementation uses the HTTP application protocol. When a user accesses RESTful resources over HTTP, the URL of the resource serves as the resource identifier, and GET, PUT, DELETE, POST, and HEAD are the standard HTTP operations performed on that resource. The REST architectural style is inherently based on synchronous messaging.
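The verb-to-operation mapping can be sketched as follows. The in-memory store and the route are hypothetical; a real service would of course run behind an HTTP server, but the mapping of URL-as-identifier plus verb-as-operation is the same.

```python
# Minimal in-memory sketch of REST semantics: the URL identifies the
# resource and the HTTP verb selects the operation. A real service
# would sit behind an HTTP framework; this only shows the verb mapping.
resources = {}

def handle(method, url, body=None):
    if method == "GET":
        return resources.get(url)        # read the resource
    if method == "PUT":
        resources[url] = body            # create or replace it
        return body
    if method == "DELETE":
        return resources.pop(url, None)  # remove it
    raise ValueError("unsupported method: " + method)

handle("PUT", "/orders/42", {"item": "book", "qty": 1})
print(handle("GET", "/orders/42"))  # {'item': 'book', 'qty': 1}
```

Because the caller blocks on `handle` and receives the result in the same call, this is synchronous messaging in the sense described above.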
In asynchronous communication, the client does not block waiting for a response. The client may not receive a response at all, or the response may arrive asynchronously via a different channel. This messaging between microservices is implemented using a lightweight, "dumb" message broker: there is no business logic in the broker, and it is a centralized, highly available entity.
There are two main types of asynchronous messaging styles—single receiver and multiple receivers.
a. Single receiver. Each request is processed by exactly one receiver or service. An example of this style is the Command pattern.
b. Multiple receivers. Each request can be processed by zero to many receivers. This type of communication should be asynchronous. An example is the publish/subscribe mechanism used in patterns like event-driven architecture, which relies on an event-bus interface or message broker to propagate data updates between multiple microservices through events. It is usually implemented through a service bus or similar artifact, such as Microsoft Azure Service Bus, using topics and subscriptions.
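The multiple-receiver style can be illustrated with an in-process stand-in for a broker topic. The class, topic, and subscriber names are invented for the sketch; a production system would use a real broker such as a service-bus topic rather than in-memory dispatch.

```python
from collections import defaultdict

class EventBus:
    """In-process stand-in for a broker's publish/subscribe topic.

    Each published event reaches zero or more subscribers; the
    publisher never learns who (if anyone) handled it.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:  # zero..N receivers
            handler(event)

bus = EventBus()
audit, billing = [], []
bus.subscribe("order.created", audit.append)   # two independent consumers
bus.subscribe("order.created", billing.append)
bus.publish("order.created", {"orderId": "o-1"})
print(len(audit), len(billing))  # 1 1
```

Note the loose coupling: the publisher of `order.created` has no reference to either consumer, which is precisely what lets services evolve independently.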
A microservice-based application will often use a combination of these communication styles. The most common is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a regular Web API HTTP service. Microservices also typically use messaging protocols for asynchronous communication between microservices. Microservices can additionally take advantage of HTTP/2 to improve the security and speed of their workflows; because it is multiplexed, it can exploit parallelism over a single connection.
Other protocols, such as the Advanced Message Queuing Protocol (AMQP), which is supported by many operating systems and cloud environments, use asynchronous messages: the sender simply dispatches the message, as when sending to a RabbitMQ queue or any other message broker.
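As an in-process stand-in for such an asynchronous, single-receiver queue (no real broker is involved), the sketch below uses Python's standard-library queue with one consumer thread: the producer fires messages and continues without waiting, and each message is taken off the queue exactly once, as a RabbitMQ work queue would deliver it.

```python
import queue
import threading

# In-process stand-in for an AMQP-style work queue. The producer is
# fire-and-forget; exactly one consumer takes each message.
messages = queue.Queue()
processed = []

def consumer():
    while True:
        msg = messages.get()
        if msg is None:           # sentinel: shut the consumer down
            break
        processed.append(msg)     # stand-in for real message handling
        messages.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

for i in range(3):
    messages.put({"event": "shipment", "seq": i})  # sender does not block
messages.put(None)
worker.join()
print(len(processed))  # 3
```

The producer never inspects a reply; if a response were needed, it would arrive later over a separate channel, matching the asynchronous style described above.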
Microservices governance comprises the people, processes, and technologies that are coordinated to implement a real-world solution. Most of these concepts are not new; they are already used successfully in SOA governance and are equally applicable under the microservices architecture.
Service Definition
Any microservice that is developed should have enough information to uniquely identify itself,
its functionality, and how a consumer may consume it. It should have a mechanism to specify the
service definition, and it should be readily available to the service consumers.
There are several technologies, such as OpenAPI, gRPC-Web, and Protocol Buffers, that help in defining service interfaces. These technologies allow developers to define service identifiers, service interfaces, and service message models (e.g., service requests and responses). Other service metadata, such as service ownership and service level agreements with their accompanying SLOs (service level objectives) and SLIs (service level indicators, or targets), can also be part of the service definition. Service definitions are usually stored in a central repository to which service owners publish and from which consumers can read.
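One way to picture such a service-definition record is sketched below. The field names, registry functions, and example values are illustrative assumptions, not any specific repository's schema; the point is that identity, ownership, the interface document, and SLOs travel together.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ServiceDefinition:
    """Illustrative record of what a central service repository might hold."""
    service_id: str
    version: str
    owner: str
    interface_url: str                        # e.g. link to the OpenAPI document
    slos: dict = field(default_factory=dict)  # e.g. {"availability": "99.9%"}

catalog = {}

def publish(defn):
    catalog[(defn.service_id, defn.version)] = defn  # owners publish...

def lookup(service_id, version):
    return catalog.get((service_id, version))        # ...consumers read

publish(ServiceDefinition(
    "payments", "1.2.0", "commerce-team",
    "https://example.internal/payments/openapi.json",  # hypothetical URL
    {"availability": "99.9%", "p99_latency_ms": 250},
))
print(lookup("payments", "1.2.0").owner)  # commerce-team
```

Keying the catalog by (service, version) mirrors the need for consumers to bind to a specific published interface rather than to "whatever is current."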
API Management
API gateways and API management play a key role in realizing several microservices governance aspects. As part of API management, it is important to apply security, service versioning, throttling, caching, monetization, etc., to services during runtime. Most of these capabilities have to be applied centrally for service invocations; therefore, API gateways should be centrally governed or managed, while the application of those capabilities can be either centralized or decentralized. API gateways can serve external or internal consumers, and these capabilities apply equally when microservices talk to each other via an internal API gateway. API management solutions often work hand in hand with service registries, both to discover services and to use the registry as the API repository. This is also quite useful when existing services are composed into new APIs. Finally, API management solutions provide rich capabilities to discover and consume APIs, so it is possible to leverage API management to manage all microservices.
Service Observability
When an application interacts with multiple microservices, it is vital to have metrics, tracing, logging,
and visualization as well as detection and alerting capabilities for all services so that there is a clear
picture of their interactions for supportability and troubleshooting purposes.
All these requirements are consolidated under one concept, called observability. In a microservices architecture there are likely to be hundreds or thousands of services communicating with each other. The ability to get service metrics, trace messages and service interactions, collect service logs, understand runtime dependencies of services, troubleshoot in the event of a failure, and set alerts for anomalies all fall under the umbrella of observability.
The approach to breaking apart a monolithic application into smaller parts involves decomposition
and refactoring. Decomposition is a strategic work effort to analyze the code base and organize
the application into component parts. Refactoring restructures existing computer code without
changing its external behavior. Decomposition targets discovery of static dependencies and
minimizes their impact, while refactoring software targets improvement of software non-functional
attributes. Non-functional requirement examples include resiliency, security, modifiability, or
maintainability. Together decomposition and refactoring create a paradigm for re-organizing software
code and favorably altering its internal characteristics.
Cases exist where a monolith is a better short-term choice. A monolith might be better suited when there is a need to get a product to market quickly, or to close a capability gap for expediency. From a software development point of view, a monolith can be built with a microservice refactor in mind. Although this can be interpreted as "build to throw away," transitional architectures have value: they often uncover requirements that were not known until release, or reveal requirements and capabilities sensitive to initial conditions. As long as the initial architectural quality attributes exemplified by the monolithic application's NFRs (non-functional requirements) carry forward into the microservice refactoring, with careful attention to preserving modularity, software construction becomes a matter of selecting an architectural style and not solely of development execution.
NFRs are often referred to as the "-ilities." A more complete treatment can be found in Bass, Clements, and Kazman's textbook Software Architecture in Practice, 3rd Edition (SEI Series in Software Engineering). Once the software architect and DEV lead receive the NFRs key to application success, the DEV lead should carry those NFRs forward so that later code inspection reveals software constructions manifesting them. The microservice architecture style addresses problems of scale and modularity; developers should consider adopting it based on an application's current and expected scale needs. The style also introduces new architectural quality attributes (NFRs) into a monolith code-base refactor, such as testability and adaptability. Developers should not hesitate to ask, "What is the simplest service
Developer’s Viewpoint:
1. As an application becomes more fragmented, the developer has less visibility into
microservices, creating a challenge to ensure that each component works flawlessly with
others. The absence of proper function increases the risk of data spillage, data destruction,
and availability issues.
2. A developer writing or maintaining a microservice interfacing with several other
microservices may be challenged to ensure interoperability. Reliable test cases are needed.
Weak test cases can increase risk of availability and performance issues later in the software
development cycle.
3. Monolith decomposition results in the categorization of internal software modules into
those that face-forward as part of the customer experience, those that are internal utility
or intermediary functions related to inter-process communications, and those that face
rearward toward data stores which are either systems of record, or systems of reference.
Decomposition will also produce an accounting of interfaces, TCP/IP protocol usage, data store locations, the presence of statically declared variables and credentials, and the revelation of multiple programming languages used together to accomplish particular tasks, independent of the language used to compile or interpret them. Any one of these discoveries can be a driver of microservice adoption, or a constraint preventing it.
4. The microservice architectural approach is a RESTful, method-driven style, and not all opportunities fit easily into it. SOAP backend connections may use the WS-Security or WS-Reliability SOAP extensions, and any SOAP interface refactor will require REST to carry such security and reliability NFRs forward into a development style that offers only eventual data consistency. Developers will have to work with SECOPS teams to ensure that the capabilities offered by the RESTful style can duplicate those offered by SOAP. Additionally, legacy backend data stores and other applications may not support REST, which requires the developer to wrap the SOAP endpoint in REST and generally adds performance overhead. In cases where the monolith uses RPC (remote procedure call) or NFS (network file share), the usage can be difficult to resolve. For example, the developer will have to assess the use of HTTP/TLS for file transfer to replace NFS, forcing a re-evaluation and potential replacement of the authentication and access methods NFS used. RPC, for its part, cannot easily use a TLS transport: the desire to use JSON-RPC likely forces a full rewrite of the module, and RPC over SSL/TLS is a difficult problem to tackle. Standards have responded by removing RPC/TLS support in favor of OS-specific tunneling mechanisms for remote procedure calls over untrusted networks. Historically, RPC has specific vulnerabilities related to denial of service, access, and authorization.
Operator’s Viewpoint:
Architect’s Viewpoint:
1. Architects will be challenged to find a balance between the costs and benefits of rebuilding
to a microservice architecture and then orchestrating those microservices. The absence
of balance can result in either cost overruns, or an application that does not fully benefit
from a microservice architecture. Architecture means different things to different people.
In general, an experienced and diversely skilled software project manager, senior developer,
or business process analyst capable of stepping back and filling the communication gap
between the business stakeholder and the multiple technical teams can fulfill the “architect”
role. But all architects have to remain close to the code: the architect should write code, whether for educational or leisure purposes, though not necessarily as a vocation.
Application decomposition is a software domain experience; as a result “the architect” is
really functioning as a software architect working between Agile development crews, crew
program managers and scrum masters, technical teams, and the business stakeholder. Not
every information technology architect, whether it be a solution, infrastructure, security,
data, or network architect can function in this role. Application decomposition and refactor
does not always need a software architect to work as a boundary spanner between groups,
but it helps to elevate the work above the next sprint horizon, maintains focus on the non-
functional requirements, and keeps the developer leads and product managers focused
on the work and on the big picture. Software architects who do not play a role in software testing and QA processes have no visibility into whether the code satisfies the agreed-upon architectural quality attributes (i.e., NFRs). While developers control the fulfillment of functional requirements, architects control and influence the fulfillment of non-functional requirements. If the architect is not empowered in this role, assessing whether NFRs such as security, resiliency, modifiability, modularity, and availability are met falls back to the Agile team or the testing organization, neither of which may be compelled to take NFRs into account.
1. Each service runs in its own process, and the delivery model is one service per container;
2. Optimize as-built services for one and only one business function per service. In other words,
a microservice should have one and only one reason to change.
3. Inter-service communication occurs via message brokers, distributed loosely coupled data
stores, and RESTful APIs. Caution should be exercised with data stores, for stores work
counter to loose coupling.
4. Microservices can be expected to evolve at differing rates. The system can evolve, but
software architecture principles guide the development over time.
5. Microservices rely upon distinct per-service high availability and clustering decisions. Not
all services need to scale; some will require auto-scaling, and it is unlikely that a common
scaling policy will fit all microservice profiles.
Portions of a monolith can be peeled or sliced off by dividing out service capability from a business
point of view. The internal technical representation should not change the external business
behavior. Developers should examine the code base for delineated or well-defined interfaces without
many inter-dependencies, or alternatively look for areas of the code base that are organized around a
specific language or specific type of data store. When examining data stores, it must be determined
if the data is tightly-coupled to the monolith code base or loosely coupled such that the monolith
can function in the presence of variable data consistency and data latency. Counter-intuitively,
developers should analyze the code base for bottlenecks that can benefit from a refactor as long
as the refactor can deploy, scale, and fail independently. Last, an assessment must be made as to
whether future feature sets are better cast as microservice enhancements rather than upgrades
of the monolithic code base. Depending upon the size of the monolith, the program manager
and dev lead may have to allow for co-existence between legacy monolithic applications and new
microservice style application. Only so many points can fit into a sprint, and sprints into an epic.
Regrettably, constraints in scope, cost, time, risk, and money all lead to circumstances where a hard
cut-over to the new microservice application carries too much risk. A parallel implementation may
present less business disruption potential.
Developer’s Viewpoint:
Operator’s Viewpoint:
1. Preferably, a robust service mesh should be in place before microservices are delivered into the production environment, handling the sidecar functions necessary for proper discovery of microservices and providing other needed capabilities such as proxying, load balancing, encryption, authorization and access, and circuit-breaker support. A service mesh integrates within the container cluster and facilitates service-to-service communication independent of the microservice business logic. Otherwise, these IT functions have to be provided by a host of discrete appliances, where managing the application runtime is much more difficult and opaque.
Architect’s Viewpoint:
1. An architect functioning in the software architecture role has to operate at two or sometimes
three different levels in the institution. Software architects have to provide direct support to
the developer leads and program managers, requiring them to be close to the code. Software
architects also have to function at a higher level in the institution to coordinate solution
delivery containing features and functionality that are not germane to the developers’
workspace, but essential for a functional platform upon which to deploy microservices.
The architect in the microservice workspace has “two speeds”: one geared to providing
support to multiple Agile development teams and working ahead of them to sort out future
challenges, prototype capability, and render in UML upcoming integrations between the
code and the infrastructure, and another speed to communicate with upper management
and business stakeholders through the use of traditional business-technology alignment
viewpoints.
2. Ensuring that a microservice architecture provides means for authentication and authorization of access to a microservice can be a challenge, due to the variety of IAM solutions and their compatibility with the hosting architecture of the microservice. As microservices each provide smaller, usually quicker services that are accessed more often, they require authentication and authorization functions that are equally efficient. As the number of microservices scales, the amount of time spent just authenticating and authorizing API callers increases.
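To make the per-call cost concrete, here is a standard-library-only sketch of verifying an HS256-signed, JWT-style token, the kind of check that runs on every inter-service call. The shared secret and claims are purely illustrative; production systems should use a maintained JWT library and properly managed (ideally asymmetric) keys.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # illustrative only; never hard-code secrets

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(claims):
    """Produce a compact header.payload.signature token (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify(token):
    """Check the HMAC and return the claims, or raise PermissionError."""
    header, payload, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + payload,
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        raise PermissionError("bad signature")
    pad = b"=" * (-len(payload) % 4)            # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload + pad))

token = sign({"sub": "svc-orders", "scope": "read:inventory"})
print(verify(token)["sub"])  # svc-orders
```

Each verification is one HMAC plus a base64 decode, cheap per call, but multiplied across thousands of service-to-service hops it becomes the scaling cost the paragraph above describes.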
3. Presently, few standardized methods to render architectural viewpoints that can scale
down to the code, while also being able to scale up to the technical or business alignment
point of view, exist. Most often, the resulting artifact mix is a combination of UML and
proprietary viewpoints rendered to meet the needs and culture of the business constituency.
It is common for architects to have a stable set of standard UML templates to convey
sequence, class, and component diagrams. One such example is to use UML to understand
how cryptography and identity and access management (IAM) interact and validate
service authorization. But when an architect tries to understand the technical interaction between the platforms supporting the code, UML gives way to network-centric representations. In a business setting, the visual representations and knowledge domain remain entirely consistent with the underlying microservice interaction despite being very far from the development team. Institutions that employ architects expect them to have different modes of operation, know when to change modes, and be successful in each mode; hence the need for two-speed architects engaged in the microservice architectural style.
A number of architectural patterns layer upon one another to provide defense in depth, given that the API alone is not capable of providing all the control needed to ensure its integrity. API integrity is not restricted to the cryptographic capability used to assign identity to API interfaces; integrity applies to other fundamentals of API construction that ensure the data being passed are of high integrity, fidelity, and veracity. Cryptography is joined by identity and access control and by traditional network pattern adjuncts such as gateway proxying, offloading, aggregation, translation, and routing. A specific challenge exists when essential security and network policy backstops do not exist: the microservice diaspora inherits backplane control weaknesses, and it cannot make up for their absence through defensive software programming alone.
APIs expose the underlying implementation of the application. The REST API is standard and generic, offering predictable entry points that can be used for many different functions. In the first generation of API-enabled applications, a GET/fetch of data was typically a single send followed by a corresponding request-reply. Modern REST API development in the API-based application context now involves many fetches occurring simultaneously, with the corresponding raw data and parameters returning to the application. The client is the rendering agent and controls the majority of client behavior and the maintenance of user state, while the server functions as a proxy capability fronting the API. Software languages do offer specific security models whose code interacts with server-side settings (the AngularJS framework's security model is an example), but by and large the modern client handles requests and data responses by applying its own security model (the document object model [DOM] and content security policy [CSP], for example).
The developer is accountable for producing secure API code of high quality, free of anti-pattern usage and of defects. However, it is the engineering and operations staff that have to provide, and be capable of supporting, the security and policy controls that the code cannot. The developer, operator, and architect have to work together to ensure that the core software-architecture non-functional attributes of microservices (autonomy, ubiquity, loose coupling, reusability, composability, fault tolerance, discoverability, and business process alignment) are neither compromised nor constrained by layered security and network policy controls.
Developer’s Viewpoint:
1. Weak code leads to security vulnerabilities. It is generally less expensive to resolve coding defects early in the development process than after merge or release. Process immaturity and developer unpreparedness and unawareness are root causes of insecure software. Vulnerability disclosure post-release creates data disclosure risk at a point in time where it
Operator’s Viewpoint:
Architect’s Viewpoint:
1. Three different architecture roles are in play. Enterprise architects offer direction in the
domain of business objectives to technology alignment but offer no input into the platform
or solution. A solution architect works directly with operations, security, network, and
application development leads to engineer a platform capable of supporting containers,
microservices, and the chosen bootstrap framework. A software architect works closest
to the code and frequently joins a specialist in information security who focuses on secure
coding and scripting practices; this is inclusive of developer leads. The collective goal is to
produce secure microservices. A particular challenge is that many institutions do not have
this level of bench strength and rely on the DevSecOps team, or DevSecOps person to fulfill
all architecture functions.
2. It is improbable to find multiple skill sets in a single person, and that is a challenge unique to architect work in general: it has either become highly specialized, or the work gravitates to common denominators. To mitigate such weaknesses, institutions need to employ third-party consulting or packaged education tailored to specific needs such as secure coding, defensive software programming, and environment setup. Without this support, organizations risk organically growing a microservice architecture of low fidelity and integrity that cannot scale, as a result of its weak control-plane infrastructure rather than of the microservice code itself.
Deciding how to partition a system into a set of services is very much an art, but there are a number
of strategies that can help. One approach is to partition services by verb or use case. Another
partitioning approach is to partition the system by nouns or resources. This approach finds services
responsible for all operations that operate on entities/resources of a given type. A third method
is to follow the input and output paths and understand the locations where data creation, editing,
updating, or deleting occurs. By working backwards from the data source, the services that
consume the data reveal themselves. Ideally, in the design of microservices, each service should have only a small set of responsibilities, following the Single Responsibility Principle (SRP). The SRP defines a responsibility of a class as a reason to change, and holds that a class should have only one reason to change. In application decomposition, developers need to refactor existing applications into component microservices. The SRP is another way to restate the primary attribute of microservices: every constructed service executes only one business rule, logic set, or process.
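A toy sketch of SRP-driven decomposition, with invented service and method names: each class below has exactly one reason to change, and the former monolithic order flow becomes a composition of the two.

```python
class InventoryService:
    """Only reason to change: how stock is tracked."""
    def __init__(self):
        self._stock = {"book": 5}  # illustrative seed data

    def reserve(self, item):
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False

class InvoiceService:
    """Only reason to change: how charges are computed."""
    PRICES = {"book": 12.50}

    def charge(self, item):
        return self.PRICES[item]

def place_order(item, inventory, billing):
    """The former monolithic order flow, now composing the two services."""
    if not inventory.reserve(item):
        raise RuntimeError("out of stock")
    return billing.charge(item)

print(place_order("book", InventoryService(), InvoiceService()))  # 12.5
```

A pricing change now touches only `InvoiceService`, and a stock-tracking change only `InventoryService`, which is the "one reason to change" property restated in code.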
A complete treatment of web scalability is explored in Abbott and Fisher's The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise, 2nd Edition, Addison-Wesley Professional, 2015.
Developer’s Viewpoint:
A microservice is an API. Effective and secure API design leads to repeatable and reliable client
experiences whether that client is a person or another machine. APIs produce machine-readable
content that can be used for any number of repeated tasks. Microservices are stateless; what used
to be a stateful interface inside a monolith’s application boundary will now become a stateless RESTful
API. In 2014, James Lewis and Martin Fowler described the conceptual interpretation of microservices
as “Service oriented, independently deployable, independently manageable, ephemeral, and elastic.”
Since that time, microservices have become inseparable from API communications and container
technologies. Securing a microservice happens on three planes: the API, the code itself, and the container platform. All three layers act in concert to preserve the integrity of the microservice.
1. Understand the underlying data model, and provide new capabilities for processing data
without state information. REST APIs can offer only eventual consistency and this is a
programming challenge when data crosses multiple network segments and security zones.
2. Build REST APIs from templates, scaffolding, and skeleton frameworks to increase
productivity. Not only does doing so standardize API construction, it provides a “tree and
leaf” approach to manually populating the API specification with the remainder of the
software functionality without concern about what functionality goes into making a “tree.”
3. Use code generators to create client stubs for the service under construction. Client
Operator’s Viewpoint:
In the modern organizational context, Development (Dev), Security (Sec), and Operations (Ops) are joined into a hybrid of all three called DevSecOps, which can be a person or a team of people. DevSecOps aspires to improve efficiency, much like the merging of storage, server, and networking
1. The CI/CD services used by an organization should scale as the organization moves from
traditional applications and services to microservices. This is especially important as
microservices should have more unit tests to ensure compatibility with each other.
2. Secure communication channels should be leveraged. What used to be sensitive data transferred within a monolithic application now traverses the network. The operator needs to provide secure network communications when the application itself cannot encrypt/decrypt traffic; this can be done, for example, by providing SSL/TLS termination at provider load balancers.
3. Network segmentation should be leveraged to create secure communication areas. To
minimize exposure, microservice architectures should be implemented with a “need to know”
model. Third parties should have access only to published services, not communication
between microservices and their dependencies. The API gateway pattern can be leveraged to have clients send their requests to a single entry point, thereby shielding the underlying microservices while security measures are implemented at that entry point.
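The gateway pattern above can be sketched minimally as follows. The route table, token check, and response shapes are illustrative assumptions; the point is that authentication happens at the single entry point and unpublished internal routes are simply unreachable from outside.

```python
# Minimal sketch of the API gateway pattern: clients hit one entry
# point, which authenticates the call and forwards only to published
# routes, so internal service-to-service paths stay hidden.
PUBLISHED_ROUTES = {
    "/orders": lambda req: {"status": "ok", "orders": []},  # hypothetical backend
}
VALID_TOKENS = {"client-token-1"}  # stand-in for real credential validation

def gateway(path, token, request):
    if token not in VALID_TOKENS:        # security enforced at the edge
        return {"status": 401}
    handler = PUBLISHED_ROUTES.get(path)
    if handler is None:                  # internal routes are not exposed
        return {"status": 404}
    return handler(request)

print(gateway("/orders", "client-token-1", {})["status"])              # ok
print(gateway("/internal/inventory", "client-token-1", {})["status"])  # 404
```

Because only `PUBLISHED_ROUTES` is reachable, third parties see exactly the published surface and nothing of the microservices' internal dependency graph.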
4. Microservices and containers can be short-lived, so operators and engineers need to consider how to preserve the application's state. The result will be a better platform with fewer points of failure. By providing reliable orchestration that can scale microservices and restart them in the event of failure, the operator allows developers to gain trust in the platform running the microservices they develop and consume.
5. In conjunction with the security organization, the operator should establish trusted signing and certificate authorities. These ensure trust within the microservice environment by securing communications and providing the basis for verifiable code and container images. Operators need to provide reliable signing authorities to ensure trust within the microservice-based environment; what was once trust within application packages and code is now trust between microservices and containers on a network.
Architect’s Viewpoint:
IAM/IdAM Identity and Access Management
OS Operating System
VM Virtual Machines
Container lifecycle events The main events in the life cycle of a container are: create container, run docker container, pause container, unpause container, start container, stop container, restart container, kill container, and destroy container.
Lateral movement [LIGHTCYBER 2016] Lateral action from a compromised internal host to strengthen the attacker's foothold inside the organizational network, to control additional machines, and eventually to control strategic assets.
Microservices Systems Software Development The process of breaking down an application into component (microservice) elements, through code extraction or greenfield rewrite, to create a microservice architecture of self-contained services that achieve a business objective.
Enterprise Operator The individual or organization responsible for the set of processes to
deploy and manage IT services. They ensure the smooth functioning
of the infrastructure and operational environments that support
application deployment to internal and external customers, including
the network infrastructure, server and device management,
computer operations, IT infrastructure library (ITIL) management,
and help desk services.14
Container resources The four resources required for containers to operate are CPU, memory (+swap), disk (space + speed), and network.
14 https://en.wikipedia.org/wiki/Information_technology_operations
Container resource limit: The maximum amount of resources (CPU, memory (+swap), and disk
(space + speed)) that the system will allow a container to use.
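As a concrete (hypothetical) example, Docker exposes such limits as flags on `docker run`; the image name and values below are placeholders, not recommendations:

```shell
# Cap the container at 1.5 CPUs, 512 MiB of RAM, and 1 GiB of RAM+swap.
# A container that exceeds its memory limit is killed by the kernel's OOM killer.
docker run --cpus=1.5 --memory=512m --memory-swap=1g my-service:latest
```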
Service Registry: The registry contains the network locations of available service instances.
Service instances are registered with the service registry on startup and deregistered on
shutdown. Clients of the service and/or routers query the service registry to find the
available instances of a service.
Client-Side Discovery: The client queries the service registry directly for the network
locations of available service instances.
Server-Side Discovery: The client makes its request via a load balancer, which queries the
service registry for the network locations of available service instances and forwards the
request to one of them.
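The registry and client-side discovery entries above can be sketched in a few lines. The class and service names here are hypothetical, standing in for real registries such as Consul or Eureka:

```python
import random

class ServiceRegistry:
    """Minimal in-memory service registry (a stand-in for Consul, Eureka, etc.)."""

    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" locations

    def register(self, service: str, location: str) -> None:
        # Called by a service instance on startup.
        self._instances.setdefault(service, set()).add(location)

    def deregister(self, service: str, location: str) -> None:
        # Called by a service instance on (graceful) shutdown.
        self._instances.get(service, set()).discard(location)

    def lookup(self, service: str) -> list:
        # Client-side discovery step 1: fetch all live locations.
        return sorted(self._instances.get(service, set()))

def choose_instance(registry: ServiceRegistry, service: str) -> str:
    # Client-side discovery step 2: the client load-balances itself
    # (here with a simple random choice).
    instances = registry.lookup(service)
    if not instances:
        raise LookupError(f"no live instances of {service}")
    return random.choice(instances)

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
registry.deregister("orders", "10.0.0.5:8080")
# choose_instance(registry, "orders") now always returns "10.0.0.6:8080"
```

In server-side discovery, the lookup-and-choose logic moves out of the client into a load balancer or router; the registry itself is unchanged.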
[NIST SP 800-160] NIST Special Publication (SP) 800-160, Systems Security Engineering:
Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure
Systems, National Institute of Standards and Technology, Gaithersburg, Maryland,
November 2016, 257pp. http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-160.pdf
[NIST SP 800-123] NIST Special Publication (SP) 800-123, Guide to General Server Security,
National Institute of Standards and Technology, Gaithersburg, Maryland, July 2008, 53pp.
http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-123.pdf
[NIST SP 800-64 Rev 2] NIST Special Publication (SP) 800-64 Rev 2, Security Considerations
in the System Development Life Cycle, National Institute of Standards and Technology,
Gaithersburg, Maryland, October 2008, 67pp.
http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-64r2.pdf
[NIST SP 800-190] NIST Special Publication (SP) 800-190, Application Container Security Guide,
National Institute of Standards and Technology, Gaithersburg, Maryland, September 2017, 63pp.
https://doi.org/10.6028/NIST.SP.800-190
[NIST SP 800-204] NIST Special Publication (SP) 800-204, Security Strategies for Microservices-
based Application Systems, by Ramaswamy Chandramouli, Computer Security Division,
Information Technology Laboratory, National Institute of Standards and Technology,
Gaithersburg, Maryland, August 2019, 33pp.
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-204.pdf [accessed 8/26/2019]
[Apriorit, 2018] Apriorit. (2018). Microservice and Container Security: 10 Best Practices,
by Bryk, A. https://www.apriorit.com/dev-blog/558-microservice-container-security-best-practices
[accessed 9/10/2019]
[API University Press, 2016] Biehl, M. (2016). API-University Series Volume 3: RESTful API
Design, First Edition. API University Press.
[Aquablog, 2017] Aqua. (2017). Managing Secrets in Docker Containers: The Challenges of
Docker Secrets Management, by Jerbi, A. Aqua Blog.
https://blog.aquasec.com/managing-secrets-in-docker-containers [accessed 9/4/2019]
[DZone] Microservices Logging Best Practices.
https://dzone.com/articles/microservices-logging-best-practices
[Beydoun et al., (2013)] Beydoun, G., Xu, D., Sugumaran, V. (2013). Service Oriented
Architecture (SOA) Adoption Challenges. International Journal of Intelligent Technologies. 55.
[BMC Software, 2017] BMC Blogs. Microservices vs. SOA: How Are They Different?, by Watts, S.,
2017. https://blogs.bmc.com/microservices-vs-soa-whats-difference/?print=pdf [accessed 9/5/2019]
[Cerny et al., (2017)] Cerny, T., Donahoo, J. M., Pechanec, J. (2017). Disambiguation and
Comparison of SOA, Microservices and Self-Contained Systems. RACS '17: Proceedings of the
International Conference on Research in Adaptive and Convergent Systems, pages 228-235.
[Erl (2005, p. 54, p. 263)] Erl, T. (2005). Service-Oriented Architecture: Concepts,
Technology, and Design. Prentice Hall.
[HashiCorp Vault] HashiCorp Vault. Learn about secrets management and data protection with
HashiCorp Vault. Operations and Development Tracks.
https://www.vaultproject.io/guides/secret-mgmt/index.html [accessed 9/13/2019]
[LIGHTCYBER 2016] LightCyber. (2016). Cyber Weapons Report. Ramat Gan, Israel. 14pp.
http://lightcyber.com/cyber-weapons-report-network-traffic-analytics-reveals-attacker-tools/
[accessed 5/11/17]
[Lund University, 2015] Lund University. (2015). Knutsson, M., Glennow, T. Challenges of
Service-Oriented Architecture (SOA) from the Public Sector Perspective. School of Economics
and Management, Department of Informatics.
https://pdfs.semanticscholar.org/23ef/7b2abcfaed46f37ba7330c1f4409ca8aff6a.pdf
[accessed 9/4/2019]
[LWN.net, 2018] LWN.net. (2018). Easier Container Security with Entitlements, by Beaupré, A.
https://lwn.net/Articles/755238/ [accessed 9/10/2019]
[Nordic APIs, 2017] Nordic APIs. (2017). API Security: The 4 Defenses of The API Stronghold,
by Sandoval, K. https://nordicapis.com/api-security-the-4-defenses-of-the-api-stronghold/
[accessed 11/11/2019]
[O’Reilly Safari Publications, 2018] McLarty, M., Wilson, R., and Morrison, S. (2018).
Securing Microservice APIs: Sustainable and Scalable Access Control, First Edition.
O’Reilly Safari Publications.
[Stojanovic et al., (2004)] Stojanovic, Z., Dahanayake, A., Sol, H.G. (2004). Modeling and
Design of Service-Oriented Architecture. Proceedings of the IEEE International Conference on
Systems, Man & Cybernetics, The Hague, Netherlands.
[Tanenbaum et al., (2007)] Tanenbaum, A. and Van Steen, M. (2007). Distributed Systems:
Principles and Paradigms. Pearson Prentice Hall.
[Wei et al., (2018)] Wei, F., Ramkumar, M., Mohanty, S.D. (2018). A Scalable, Trustworthy
Infrastructure for Collaborative Container Repositories. ArXiv, abs/1810.07315.
[Yarygina et al., 2018] Yarygina, T. and Bagge, A.H. (2018). Overcoming Security Challenges
in Microservice Architecture. Proceedings of the 2018 IEEE Symposium on Service-Oriented
System Engineering.