What Is Serverless?
Table of Contents

Preface

1. Introducing Serverless
   Setting the Stage
   Defining Serverless
   An Evolution, with a Jolt

3. Benefits of Serverless
   Reduced Labor Cost
   Reduced Risk
   Reduced Resource Cost
   Increased Flexibility of Scaling
   Shorter Lead Time

4. Limitations of Serverless
   Inherent Limitations
   Implementation Limitations
   Conclusion

5. Differentiating Serverless
   The Key Traits of Serverless
   Is It Serverless?
   Is PaaS Serverless?
   Is CaaS Serverless?

6. Looking to the Future
   Predictions
   Conclusion
Preface
Fifteen years ago most companies were entirely responsible for the
operations of their server-side applications, from custom engineered
programs down to the configuration of network switches and fire‐
walls, from management of highly available database servers down
to the consideration of power requirements for their data center
racks.
But then the cloud arrived. What started as a playground for hobbyists has become a business generating more than $10 billion in annual revenue for Amazon alone. The cloud has revolutionized how we think
about operating applications. No longer do we concern ourselves
with provisioning network gear or making a yearly capital plan of
what servers we need to buy. Instead we rent virtual machines by the
hour, we hand over database management to a team of folks whom
we’ve never met, and we give as much thought to how much electricity our systems require as to how to use a rotary telephone.
But one thing remains: we still think of our systems in terms of
servers—discrete components that we allocate, provision, set up,
deploy, initialize, monitor, manage, shut down, redeploy, and reiniti‐
alize. The problem is most of the time we don’t actually care about
any of those activities; all we (operationally) care about is that our
software is performing the logic we intend it to, and that our data is
safe and correct. Can the cloud help us here?
Yes it can, and in fact the cloud is turning our industry on its head all over again. In late 2012, people started thinking about what
it would mean to operate systems and not servers—to think of
applications as workflow, distributed logic, and externally managed
data stores. We describe this way of working as Serverless, not
because there aren’t servers running anywhere, but because we don’t
need to think about them anymore.
This way of working first became realistic with mobile applications
being built on top of hosted database platforms like Google Firebase.
It then started gaining mindshare with server-side developers when
Amazon launched AWS Lambda in 2014, and became viable for
some HTTP-backed services when Amazon added API Gateway in
2015. By 2016 the hype machine was kicking in, but a Docker-like
explosion of popularity failed to happen. Why? Because while from
a management point of view Serverless is a natural progression of
cloud economics and outsourcing, from an architectural point of
view it requires new design patterns, new tooling, and new
approaches to operational management.
In this report we explain what Serverless really means and what its
significant benefits are. We also present its limitations, both inherent and implementation-specific. We close by looking at the future of
Serverless. The goal of this report is to answer the question, “Is Serv‐
erless the right choice for you and your team?”
CHAPTER 1
Introducing Serverless
just in time, with the delay from requesting a machine to its availability being on the order of minutes.
EC2’s five key advantages are:
Reduced labor cost
Before Infrastructure as a Service, companies needed to hire
specific technical operations staff who would work in data cen‐
ters and manage their physical servers. This meant everything
from power and networking, to racking and installing, to fixing
physical problems with machines like bad RAM, to setting up
the operating system (OS). With IaaS all of this goes away and
instead becomes the responsibility of the IaaS service provider
(AWS in the case of EC2).
Reduced risk
When managing their own physical servers, companies are
exposed to problems caused by unplanned incidents like failing
hardware. This introduces downtime periods of highly volatile
length since hardware problems are usually infrequent and can
take a long time to fix. With IaaS, the customer, while still having some work to do in the event of a hardware failure, no longer needs to know how to fix the hardware. Instead the
customer can simply request a new machine instance, available
within a few minutes, and re-install the application, limiting
exposure to such issues.
Reduced infrastructure cost
In many scenarios a connected EC2 instance is cheaper than running your own hardware when you take into account power, networking, etc. This is especially true when
you only want to run hosts for a few days or weeks, rather than
many months or years at a stretch. Similarly, renting hosts by
the hour rather than buying them outright allows different
accounting: EC2 machines are an operating expense (Opex) rather than the capital expense (Capex) of physical machines, which typically allows more favorable accounting treatment.
Scaling
Infrastructure costs drop significantly when considering the
scaling benefits IaaS brings. With IaaS, companies have far more
flexibility in scaling the numbers and types of servers they run.
There is no longer a need to buy 10 high-end servers up front
Infrastructural Outsourcing
Using IaaS is a technique we can define as infrastructural outsourc‐
ing. When we develop and operate software, we can break down the
requirements of our work into two groups: those that are specific to our needs, and those that are the same for other teams and organizations
working in similar ways. This second group of requirements we can
define as infrastructure, and it ranges from physical commodities,
such as the electric power to run our machines, right up to common
application functions, like user authentication.
Infrastructural outsourcing can typically be provided by a service
provider or vendor. For instance, electric power is provided by an
electricity supplier, and networking is provided by an Internet Ser‐
vice Provider (ISP). A vendor is able to profitably provide such a
service through two types of strategies: economic and technical, as
we now describe.
Economy of Scale
Almost every form of infrastructural outsourcing is at least partly
enabled by the idea of economy of scale—that doing the same thing
many times in aggregate is cheaper than the sum of doing those
things independently due to the efficiencies that can be exploited.
For instance, AWS can buy the same specification server for a lower
price than a small company because AWS is buying servers by the
thousand rather than individually. Similarly, hardware support cost
per server is much lower for AWS than it is for a company that owns
a handful of machines.
Common Benefits
Infrastructural outsourcing typically echoes the five benefits of IaaS:
Defining Serverless
As soon as we get into any level of detail about Serverless, we hit the
first confusing point: Serverless actually covers a range of techni‐
ques and technologies. We group these ideas into two areas: Back‐
end as a Service (BaaS) and Functions as a Service (FaaS).
Backend as a Service
BaaS is all about replacing server side components that we code
and/or manage ourselves with off-the-shelf services. It’s closer in
concept to Software as a Service (SaaS) than it is to things like vir‐
tual instances and containers. SaaS is typically about outsourcing
business processes, though—think HR or sales tools, or, on the technical side, products like GitHub—whereas with BaaS we’re breaking
up our applications into smaller pieces and implementing some of
those pieces entirely with external products.
BaaS services are domain-generic remote components (i.e., not in-
process libraries) that we can incorporate into our products, with an
API being a typical integration paradigm.
BaaS has become especially popular with teams developing mobile
apps or single-page web apps. Many such teams are able to rely sig‐
nificantly on third-party services to perform tasks that they would
otherwise have needed to do themselves. Let’s look at a couple of
examples.
First up we have services like Google’s Firebase (and before it was
shut down, Parse). Firebase is a database product that is fully man‐
aged by a vendor (Google in this case) that can be used directly from
a mobile or web application without the need for our own interme‐
diary application server. This represents one aspect of BaaS: services
that manage data components on our behalf.
When we traditionally deploy server-side software, we start with a
host instance, typically a virtual machine (VM) instance or a con‐
tainer (see Figure 1-1). We then deploy our application within the
host. If our host is a VM or a container, then our application is an
operating system process. Usually our application contains code
for several different but related operations; for instance, a web ser‐
vice may allow both the retrieval and updating of resources.
Once the function has finished executing, the FaaS platform is free
to tear it down. Alternatively, as an optimization, it may keep the
function around for a little while until there’s another event to be
processed.
FaaS is inherently an event-driven approach. Beyond providing a
platform to host and execute code, a FaaS vendor also integrates
with various synchronous and asynchronous event sources. An
example of a synchronous source is an HTTP API Gateway. An
example of an asynchronous source is a hosted message bus, an object store, or a scheduled event (similar to cron).
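As a concrete illustration, here is a minimal sketch of a FaaS function: a Python handler for AWS Lambda, triggered by a synchronous HTTP event from an API Gateway. The proxy-style event shape is real, but the handler logic and names are purely illustrative:

    # A minimal sketch of a FaaS function: an AWS Lambda handler in
    # Python. The platform instantiates a container, calls this function
    # with the triggering event, and is then free to tear it down.
    import json

    def handler(event, context):
        # With an API Gateway proxy-style event, the HTTP body arrives
        # as a JSON string; other event sources use different shapes.
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name + "!"}),
        }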
AWS Lambda was launched in the Fall of 2014 and since then has
grown in maturity and usage. While some usages of Lambda are
very infrequent, just being executed a few times a day, some compa‐
nies use Lambda to process billions of events per day. At the time of
writing, Lambda is integrated with more than 15 different types of
event sources, enabling it to be used for a wide variety of different
applications.
Beyond AWS Lambda there are several other commercial FaaS
offerings from Microsoft, IBM, Google, and smaller providers like
Auth0. Just as with the various other Compute-as-a-Service platforms we discussed earlier (IaaS, PaaS, CaaS), there are also open
source projects that you can run on your own hardware or on a
public cloud. This private FaaS space is busy at the moment, with no
clear leader, and many of the options are fairly early in their devel‐
opment at time of writing. Examples are Galactic Fog, IronFunc‐
tions, Fission (which uses Kubernetes), as well as IBM’s own
OpenWhisk.
Now that we’re well grounded in what the term Serverless means,
and we have an idea of what various Serverless components and
services can do, how do we combine all of these things into a com‐
plete application? What does a Serverless application look like, espe‐
cially in comparison to a non-Serverless application of comparable
scope? These are the questions that we’re going to tackle in this
chapter.
A Reference Application
The application that we’ll be using as a reference is a multiuser, turn-
based game. It has the following high-level requirements:
Non-Serverless Architecture
Given those requirements, a non-Serverless architecture for our
game might look something like Figure 2-1.
Why Change?
This simple architecture seems to meet our requirements, so why
not stop there and call it good? Lurking beneath those bullet points
are a host of development challenges and operational pitfalls.
In building our game, we’ll need to have expertise in iOS and Java
development, as well as expertise in configuring, deploying, and
operating Java application servers. We’ll also need to configure and
operate the relational database server. Even after accounting for the
application server and database, we need to configure and operate
their respective host systems, regardless of whether those systems
How to Change?
Now that we’ve uncovered some of the challenges of our legacy
architecture, how might we change it? Let’s look at how we can take
our high-level requirements and use Serverless architectural pat‐
terns and components to address some of the challenges of the pre‐
vious approach.
As we learned in Chapter 1, Serverless components can be grouped
into two areas, Backend as a Service and Functions as a Service.
Looking at the requirements for our game, some of those can be
addressed by BaaS components, and some by FaaS components.
backend gameplay logic in a secure, scalable manner. Each distinct
operation can then be encapsulated in a FaaS function.
CHAPTER 3
Benefits of Serverless
Serverless has elements of all five of these. The first four are all, to a
greater or lesser extent, about cost savings, and this is what Server‐
less is best known for: how to do the same thing you’ve done before,
but cheaper.
However, for us the cost savings are not the most exciting part of
Serverless. What we get our biggest kick from is how much it reduces the time from conception to implementation: in other words, how it lets you do new things, faster.
In this chapter we’re going to dig into all these benefits and see how
Serverless can help us.
Reduced Labor Cost
We said in Chapter 1 that Serverless was fundamentally about no
longer needing to look after your own server processes—you care
about your application’s business logic and state, and you let some‐
one else look after whatever else is necessary for those to work.
The first obvious benefit here is that there is less operations work.
You’re no longer managing operating systems, patch levels, database
version upgrades, etc. If you’re using a BaaS database, message bus,
or object store, then congratulations—that’s another piece of infra‐
structure you’re not operating anymore.
With other BaaS services the labor benefits are even more clearly
defined—you have less logic to develop yourself. We’ve already talked
a couple of times about authentication services. The benefits to
using one of these are that you have less code to define, develop, test,
deploy, and operate, all of which takes engineering time and cost.
Another example is a service like Mailgun, which removes most of
the hard work of processing the sending and receiving of email.
FaaS also has significant labor cost benefits over a traditional
approach. Software development with FaaS is simplified because
much of the infrastructural code is moved out to the platform. An
example here is in the development of HTTP API Services—here all
of the HTTP-level request and response processing is done for us by
the API Gateway, as we described in Chapter 2.
Deployment with FaaS is easier because we’re just uploading basic
code units—zip files of source code in the case of JavaScript or
Python, and plain JAR files in the case of JVM-based languages.
There are no Puppet, Chef, Ansible, or Docker configurations to
manage. Other types of operational activity become simpler too, beyond just those we mentioned earlier in this section. For example,
since we’re no longer looking after an “always on” server process, we
can limit our monitoring to more application-oriented metrics.
These are statistics such as execution duration and customer-
oriented metrics, rather than free disk space or CPU usage.
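As an illustration of how basic those deployment units are, here is a sketch of deploying new code to an existing Lambda function using AWS’s boto3 library; the function name and zip file path are placeholders:

    # Sketch: FaaS deployment is just uploading a zip of source code.
    import boto3

    lambda_client = boto3.client("lambda")

    with open("function.zip", "rb") as f:   # zip of our source code
        lambda_client.update_function_code(
            FunctionName="my-function",     # placeholder function name
            ZipFile=f.read(),
        )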
Reduced Risk
When we think about risk and software applications we often con‐
sider how susceptible we are to failures and downtime. The larger
the number of different types of systems, or components, our teams
are responsible for managing, the larger the exposure to problems
occurring. Instead of managing systems ourselves we can outsource
them, as we’ve described previously in this report, and also out‐
source having to solve problems in those systems.
While overall we’re still exposed to failure across all of the elements
of the application, we’ve chosen to manage the risk differently—we
are now relying on the expertise of others to solve some of those
failures rather than fixing them ourselves. This is often a good idea
since certain elements of a technical “stack” are ones that we might
change rarely, and when failure does occur in them, the length of the
downtime can be significant and indeterminate.
With Serverless we are significantly reducing the number of differ‐
ent technologies we are responsible for directly operating. Those
that we do still manage ourselves are typically ones that our teams
are working with frequently, and so we are much more able to han‐
dle failures with confidence when they occur.
A specific example here is managing a distributed NoSQL database.
Once such a component is set up, it might be relatively rare that a
failure in a node occurs, but when it does, what happens? Does your
team have the expertise to quickly and efficiently diagnose, fix, and
recover from the problem? Maybe, but oftentimes not. Instead, a
team can opt to use a Serverless NoSQL database service, such as
Amazon DynamoDB. While outages in DynamoDB do occasionally
happen, they are both relatively rare and managed effectively since
Amazon has entire teams dedicated to this specific service.
As such, we say that risk is reduced when Serverless technologies are
used since the expected downtime of components is reduced, and
the time for them to be fixed is less volatile.
Over the last few years we’ve seen great advances in improving the
incremental cycle time of development through practices such as
continuous delivery and automated testing, and technologies like
CHAPTER 4
Limitations of Serverless
So far we’ve talked about what Serverless is and how we got here,
shown you what Serverless applications look like, and told you the
many wonderful ways that Serverless will make your life better. So
far it’s been all smiles, but now we need to tell you some hard truths.
Serverless is a different way of building and operating systems, and
just like with most alternatives, there are limitations as well as
advantages. Add to that the fact that Serverless is still new—AWS
Lambda is the most mature FaaS platform, and its first, very limited
version was only launched in late 2014.
All of this innovation and novelty means some big caveats—not everything works brilliantly well, and even for the parts that do, we haven’t yet figured out the best ways of using them. Furthermore, there
are some implicit tradeoffs of using such an approach, which we dis‐
cuss first.
Inherent Limitations
Some of the limitations of Serverless just come with the territory—
we’re never going to completely get around them. These are inherent
limitations. Over time we’ll learn better how to work around these,
or in some cases even to embrace them.
State
It may seem obvious, but in a Serverless application, the manage‐
ment of state can be somewhat tricky. Aside from the components
that are explicitly designed to be data stores, most Serverless compo‐
nents are effectively stateless. While this chapter is specifically about
limitations, it’s worth mentioning that one benefit of that stateless‐
ness is that scaling those components simply becomes a matter of
increasing concurrency, rather than giving each instance of a com‐
ponent (like an AWS Lambda function) more resources.
However, the limitations are certainly clear as well. Stateless compo‐
nents must, by definition, interact with other, stateful components to
persist any information beyond their immediate lifespan. As we’ll
talk about in the very next section, that interaction with other com‐
ponents inevitably introduces latency, as well as some complexity.
What’s more, stateful Serverless components may have very different
ways of managing information between vendors. For example, a
BaaS product like Firebase, from Google, has different data expiry
mechanisms and policies than a similar product like DynamoDB,
from AWS.
Also, while statelessness is the fundamental rule in many cases,
oftentimes specific implementations, especially FaaS platforms, do
preserve some state between function invocations. This is purely an
optimization and cannot be relied upon as it depends heavily on the
underlying implementation of the platform. Unfortunately, it can
also confuse developers and muddy the operational picture of a sys‐
tem. One knock-on effect of this opportunistic state optimization is
that of inconsistent performance, which we’ll touch on later.
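To make this concrete, here is a small sketch (in Python, for AWS Lambda) of how module-scope state may survive into later invocations when the platform happens to reuse an instance; nothing about this behavior is guaranteed:

    # Sketch: module-scope state *may* survive between invocations when
    # the platform reuses an instance, but this is an optimization only.
    invocation_count = 0

    def handler(event, context):
        global invocation_count
        invocation_count += 1
        # On a reused ("warm") instance this counts up; on a fresh
        # ("cold") instance it starts again at 1. Never rely on it for
        # correctness; persist real state in a stateful service instead.
        return {"count_on_this_instance": invocation_count}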
Latency
In a non-Serverless application, if latency between application com‐
ponents is a concern, those components can generally be reliably co-
located (within the same rack, or on the same host instance), or can
even be brought together in the same process. Also, communication
channels between components can be optimized to reduce latency,
using specialized network protocols and data formats.
Successful early adopters of Serverless, however, advocate having
small, single-purpose FaaS functions, triggered by events from other
Local Testing
The difficulty of local testing is one of the most jarring limitations of
Serverless application architectures. In a non-Serverless world,
developers often have local analogs of application components (like
databases, or message queues) which can be integrated for testing in
much the same way the application might be deployed in produc‐
tion. Serverless applications can, of course, rely on unit tests, but
more realistic integration or end-to-end testing is significantly more
difficult.
The difficulties in local testing of Serverless applications can be clas‐
sified in two ways. Firstly, because much of the infrastructure is
abstracted away inside the platform, it can be difficult to connect the
application components in a realistic way, incorporating
production-like error handling, logging, performance, and scaling
characteristics. Secondly, Serverless applications are inherently dis‐
tributed, and consist of many separate pieces, so simply managing
the myriad functions and BaaS components is challenging, even
locally.
Instead of trying to perform integration testing locally, we recom‐
mend doing so remotely. This makes use of the Serverless platform
directly, although that too has limitations, as we’ll describe in the
next section.
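As a sketch of what remote integration testing can look like, the following test invokes a deployed Lambda function through the platform itself using boto3; the function name and payload are illustrative assumptions:

    # Sketch: an integration test run against the real platform rather
    # than a local analog of it.
    import json
    import boto3

    def test_deployed_function_returns_greeting():
        client = boto3.client("lambda")
        response = client.invoke(
            FunctionName="my-function",  # placeholder function name
            Payload=json.dumps({"body": '{"name": "test"}'}),
        )
        result = json.loads(response["Payload"].read())
        assert result["statusCode"] == 200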
Loss of Control
Many of the limitations of Serverless are related to the reality that
the FaaS or BaaS platform itself is developed and operated by a third
party.
In a non-Serverless application, the entirety of the software stack
may be under our control. If we’re using open source software, we
can even download and alter components from the operating system
boot loader to the application server. However, such breadth of con‐
trol is a double-edged sword. By altering or customizing our soft‐
ware stack, we take on implicit responsibility for that stack and all of
the attendant bug fixes, security patches, and integration. For some
use cases or business models, this makes sense, but for most, owner‐
ship and control of the software stack distracts focus from the busi‐
ness logic.
Going Serverless inherently involves giving up full control of the
software stack on which code runs. We’ll describe how that mani‐
fests itself in the remainder of this section.
“the service had a few sporadic errors” to “an earthquake destroyed a
data center.” While it may seem like an understatement, it is a testament to the global scale and resilience of the AWS infrastructure that the loss of a data center is not necessarily a catastrophic event.
Implementation Limitations
In contrast to all of the previous inherent limitations, implementation
limitations are those that are a fact of Serverless life for now, but which should see rapid improvement as the Serverless ecosystem matures and as the wider Serverless community gains experience with these new technologies.
Cold Starts
As we alluded to earlier, Serverless platforms can have inconsistent
and poorly documented performance characteristics.
One of the most common performance issues is referred to as a cold
start. On the AWS Lambda platform, this refers to the instantiation
of the container in which our code is run, as well as some initializa‐
tion of our code. These slower cold starts occur when a Lambda
function is invoked for the first time or after having its configura‐
tion altered, when a Lambda function scales out (to more instances
Tooling Limitations
Given the newness of Serverless technologies, it’s no surprise that
tooling around deployment, management, and development is still
in a state of infancy. While there are some tools and patterns out
there right now, it’s hard to say which tools and patterns will ulti‐
mately emerge as future “best practices.”
As the FaaS platform’s underlying hardware gets more powerful, we
can expect these resource limits to increase (as they already have in
some cases). Further, designing a system to work comfortably within
these limits often leads to a more scalable architecture.
Vendor Lock-In
Vendor lock-in seems like an obviously inherent limitation of Serv‐
erless applications. However, different Serverless platform vendors
enforce different levels of lock-in, through their choice of integra‐
tion patterns, APIs, and documentation. Application developers can
also limit their use of vendor-specific features, admittedly with vary‐
ing degrees of success depending on the platform.
For example, AWS services, while mostly closed-source and fully
managed, are well documented, and at a high level can be thought of
in abstract terms. DynamoDB can be thought of as simply a high-
performance key-value store. SQS is simply a message queue, and
Kinesis is an ordered log. Now, there are many specifics around the
implementation of those services which make them AWS-specific,
but as high-level components within a larger architecture, they
could be switched out for other, similar components from other
vendors.
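As an illustration of how that kind of switch can be kept cheap, here is a sketch that hides DynamoDB behind a minimal key-value interface using boto3; the table name and key schema (a partition key named "pk") are assumptions for the example:

    # Sketch: DynamoDB treated as "simply a high-performance key-value
    # store," isolated behind a thin interface so that only this class
    # is AWS-specific.
    import boto3

    class KeyValueStore:
        def __init__(self, table_name):
            self._table = boto3.resource("dynamodb").Table(table_name)

        def put(self, key, value):
            self._table.put_item(Item={"pk": key, "value": value})

        def get(self, key):
            item = self._table.get_item(Key={"pk": key}).get("Item")
            return item["value"] if item else None

    # Swapping vendors later means reimplementing this one class,
    # not rewriting every caller.
    store = KeyValueStore("game-state")  # placeholder table name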
That being said, we of course must also acknowledge that much of
the value of using a single Serverless vendor is that the components
are well integrated, so to some extent the vendor lock-in is not nec‐
essarily in the components themselves, but in how they can be tied
together easily, performantly, and securely.
On the other side of the vendor spectrum from AWS are platforms
like Apache OpenWhisk, which is completely open source and not
ostensibly tied to any single vendor (although much of its develop‐
ment is done by IBM to enable their fully-managed platform).
BaaS components, though, are somewhat more of a mixed bag. For
example, AWS’s S3 service has a published API specification, and
other vendors like Dreamhost provide object storage systems that
are API-compatible with S3.
Immaturity of Services
Some types of Serverless services, especially FaaS, work better with a
good ecosystem around them. We see that clearly with the various
services that AWS has built, or extended, to work well with Lambda.
Some of these services are new and still need to have a few more
revisions before they cover a lot of what we might want to throw at
them. API Gateway, for example, has improved substantially in its
first 18 months but still doesn’t support certain features we might expect from a general-purpose web server (e.g., WebSockets), and some features it does have are difficult to work with.
Similarly, we see brand-new services (at time of writing) like AWS
Step Functions. This is a product that’s clearly trying to solve an
architectural gap in the Serverless world, but is very early in its
capabilities.
Conclusion
We’ve covered the inherent and implementation limitations of Serv‐
erless in a fairly exhaustive way. The inherent limitations, as we dis‐
cussed, are simply the reality of developing and operating Serverless
applications in general, and some of these limitations are related to
the loss of control inherent in using a Serverless or cloud platform.
CHAPTER 5
Differentiating Serverless
Service. There’s no “standardization committee” to back up these
opinions, but in our experience they are all good areas to consider
when choosing a technology.
A Serverless service:
Is It Serverless?
Given the above criteria of Serverless, we can now consider whether
a whole raft of technologies and architectural styles are, or are not,
Serverless. Again, we are absolutely not saying that if a technology
isn’t Serverless it should be discounted for your particular problem.
What we are saying is that if a technology is Serverless, you should
expect it to have the previous list of qualities.
• By default the capacity for a Dynamo table, and the cost, don’t
scale automatically with load.
• The cost is never zero—even a minimally provisioned table
incurs a small monthly cost.
We’re going to give it the benefit of the doubt, though, because (1)
capacity scaling can be automated using third-party tools, and (2)
while costs can’t quite reach zero, a small, minimally provisioned
table costs less than a dollar per month.
Kinesis is another messaging product from Amazon, similar to Apache Kafka. Like DynamoDB, capacity doesn’t scale automatically
with load, and costs never reach zero. However, also like Dyna‐
moDB, scaling can be automated, and the cost of a basic Kinesis
stream is about 10 dollars per month.
Serverless/Non-Serverless Hybrid Architectures
Sometimes it’s not possible to build a purely Serverless system. Per‐
haps we need to integrate with existing non-Serverless endpoints.
Or maybe there’s an element of our architecture for which no cur‐
rent Serverless product is sufficient, possibly due to performance or
security requirements.
When this happens, it’s perfectly reasonable to build a hybrid archi‐
tecture of both Serverless and non-Serverless components. For
instance, using AWS, you may want to call an RDS database from a
Lambda function, or invoke Lambda functions from triggers in
RDS databases that use Amazon’s home-grown Aurora engine.
When building a hybrid architecture, it’s important to identify
which elements are Serverless and which aren’t in order to manage
the effects of scaling. If you have a non-Serverless element down‐
stream of a Serverless element, there is a chance the downstream
element may be overwhelmed if the upstream one scales out wider
than expected. As an example, this can happen if you call a rela‐
tional database from a Lambda function.
When you have a situation like this, it is wise to wrap the non-
Serverless element with another component that can throttle the
request flow—perhaps a message bus or a custom service that you
deploy traditionally.
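As a sketch of that throttling pattern, a Lambda function can enqueue work on a message bus (Amazon SQS here) rather than writing to the database directly, leaving a traditionally deployed consumer to drain the queue at a rate the database can tolerate; the queue URL is a placeholder:

    # Sketch: protect a non-Serverless downstream (e.g., a relational
    # database) by buffering work through a queue instead of writing to
    # the database from every function invocation.
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/writes"  # placeholder

    def handler(event, context):
        # Enqueue the work; a separately deployed consumer reads from
        # the queue at a controlled rate and performs the database writes.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
        return {"statusCode": 202}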
Is PaaS Serverless?
Platform as a Service (PaaS) has many overlapping features and ben‐
efits with FaaS. It abstracts the underlying infrastructure and lets
you focus on your application, simplifying deployment and opera‐
tions concerns considerably. So is FaaS just another term for PaaS?
Adrian Cockcroft, whom we also quoted in Chapter 3, tweeted this in 2016:
“If your PaaS can efficiently start instances in 20 ms that run for
half a second, then call it serverless.”
—Adrian Cockcroft
Is CaaS Serverless?
FaaS is not the only recent trend to push on the idea of abstracting
the underlying host that you run your software on. Docker exploded
into our world only a few years ago and has been extremely popular.
More recently, Kubernetes has taken up the challenge of how to
deploy and orchestrate entire applications, and suites of applica‐
tions, using Docker, without the user having to think about many
deployment concerns. And finally, Google Container Engine pro‐
vides a compelling cloud-hosted container environment (Containers
as a Service, or CaaS), using Kubernetes.
But is CaaS Serverless?
The simple answer is no. Containers, while providing an extremely
lean operating environment, are still based on an idea of running
long-lived applications and server processes. Serverless properties like auto-scaling (to zero) and auto-provisioning are also generally not present in CaaS platforms. CaaS is starting to catch up here, though, with tools like the Cluster Autoscaler in Google Container Engine.
We mentioned earlier in this report that there are also a few FaaS
projects being built on top of Kubernetes. As such, even though
CaaS itself isn’t Serverless, the entire Kubernetes platform might
offer a very interesting hybrid environment in a year or two.
CHAPTER 6
Looking to the Future
To close out this report we’re going to gaze into our crystal ball and
imagine what changes may happen with Serverless, and the organi‐
zations that use it, over the coming months and years.
Predictions
It’s apparent that Serverless tools and platforms will mature signifi‐
cantly. We’re still on the “bleeding edge” of many of these technolo‐
gies. Deployment and configuration will become far easier, we’ll
have great monitoring tools for understanding what is happening
across components, and the platforms will provide greater flexibility.
As an industry, we’ll also collectively learn how to use these technol‐
ogies. Ask five different teams today how to build and operate a
moderately complex Serverless application, and you’ll get five differ‐
ent answers. As we gain experience, we’ll see some ideas and pat‐
terns form about what “good practice” looks like in common
situations.
Our final prediction is that companies will change how they work in
order to make the most of Serverless. As we said in Chapter 3, “With
the right organizational support, innovation…can become the default
way of working for all businesses.” Those first five words are key—
engineering teams need autonomy in order to exploit the lead time
and experimentation advantages of Serverless. Teams gain this
autonomy by having true product ownership and an ability to
develop, deploy, and iterate on new applications without having to
wait on process roadblocks. Organizations that can manage themselves like this, while still maintaining budgetary control and data safety, will
find that their engineers are able to grow and be more effective by
forming a deep understanding of what makes their customers awe‐
some.
We give a more in-depth analysis of future trends in Serverless in
our article, “The Future of Serverless Compute”.
Conclusion
The goal of this report was to answer the question, “Is Serverless the
right choice for you and your team?” For those of you who have, or
can, embrace the public cloud; who have a desire to embrace experi‐
mentation as the way to produce the best product; and who are will‐
ing to roll up your sleeves and deal with a little lack of polish in your
tools, we hope you’ll agree with us that the answer is “yes.”
Your next step is to jump in! Evaluating Serverless technology
doesn’t require an enormous investment of money or time, and
most providers have a generous free tier of services. We provide a
wealth of information and resources on our Symphonia website and
blog, and we’d be excited to hear from you!