API Driven Devops
Nordic APIs
This book is for sale at http://leanpub.com/api-driven-devops
Preface
There once was a time when software products were launched: physically shipped on a CD-ROM to a storefront, purchased, and then likely left to rust after the user’s initial installation.
Nowadays, nearly all code is shipped over the web, meaning
that continuous software updates are not only achievable, but
expected, whether for mobile, browser, or desktop user experiences.
Especially as the digital services we use embrace a subscription billing format, the process of continually delivering many fine-tuned iterations has become increasingly strategic.
Thus, philosophies around this development style have proliferated throughout the industry in the past decade. DevOps embodies this shift. The historical boundaries between development and operations teams have ebbed, and as continuous deployment becomes the norm, the tooling space has exploded to help startups and enterprise developers alike embrace more automation and more efficient product cycles.
So, throughout early 2016 we admittedly followed some industry
trends and wrote a lot on DevOps and relevant tooling. In this
compendium we include curated lists of tools and analyses of specific areas like:
• Continuous integration/deployment
• Docker containers
• Automated testing
• Configuration management
• IoT continuous integration
• DevOps as a corporate role
• Automated code generation
• and more…
Defining the Emerging Role of DevOps
What is DevOps?
Implementing DevOps
DevOps Tools
DevOps as a Career
and innovative. Not adopting it? Well, it could either have no effect
whatsoever, or leave you in the dust.
The choice is pretty clear.
10 Continuous Integration Tools to Spur API Development
Abao
DHC
Dredd, by Apiary
Dredd to execute their tests, with the results posted in the Apiary
development console.
Obviously, using Apiary and/or Dredd for continuous integration
necessitates the use of API Blueprint to describe the API. Apiary
also does not provide CI server integration out of the box, but again,
scripting the execution of Dredd from a build job definition would
not be difficult to achieve.
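For instance, a build job might simply shell out to the Dredd command-line tool. The sketch below assembles such an invocation; the Blueprint filename and endpoint URL are placeholder values, not details from the original text:

```python
import subprocess

def build_dredd_command(blueprint_path, api_endpoint):
    """Assemble the Dredd CLI invocation for a CI build step."""
    return ["dredd", blueprint_path, api_endpoint]

def run_contract_tests(blueprint_path="apiary.apib",
                       api_endpoint="http://localhost:3000"):
    """Run Dredd against a locally started API instance.

    A non-zero exit code fails the build, which is exactly what a
    CI job wants from a contract-testing step.
    """
    cmd = build_dredd_command(blueprint_path, api_endpoint)
    return subprocess.call(cmd)
```

In a Jenkins or Travis job, this amounts to one shell line plus a check of the exit status.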
APIMATIC
Chakram
Frisby.js
Postman
Runscope
configure the webhook, what they call the “Trigger URL” for a series
of CI servers and platforms. They also provide functionality that helps users analyze their results by providing integrations with analytics platforms, allowing detailed digestion of test results. Finally,
Runscope has developed a plugin for Jenkins that is available in the
Jenkins plugin repository, and has provided a walkthrough of the
implementation.
Swagger Diff
Reaching DevOps Zen: Tracking the Progress of Continuous Integration
Traditional CI
CI in the cloud
Fast forward to 2011, and many dev teams were tired of self-hosting
their own continuous integration system, as configuring it often
proved costly in time and resources — time that could be better
spent working on applications. SaaS solutions quickly proliferated
to fill this gap in the market.
Travis CI is a hosted continuous integration service built on top
of the GitHub API. It lets dev teams build any project, provided
that the code is hosted on GitHub. Travis reacts to triggers within
a GitHub repository such as a commit or a pull request using
webhooks to start tasks such as building a code base. It uses
GitHub not only to fetch the code but to authenticate users and
organizations.
Teams using cloud-based solutions like GitHub and Travis CI for
version control and CI no longer face the headaches of managing
these tools themselves as was necessary only a few years ago.
What’s more, since all actions on both Travis CI and GitHub can
be triggered via their respective APIs, this workflow can be entirely
API-driven.
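As an illustration of that API-driven workflow, the sketch below constructs (but does not send) a build-trigger request against the Travis CI v3 API; the repository slug and token are placeholders, and the exact endpoint shape should be checked against Travis CI's current documentation:

```python
import json

def travis_build_request(repo_slug, branch, token):
    """Construct a Travis CI v3 build-trigger request.

    The repository slug is URL-encoded ("owner%2Frepo") as the
    v3 API expects. Returns the URL, headers, and JSON body;
    actually sending it is left to the caller's HTTP client.
    """
    url = "https://api.travis-ci.org/repo/{}/requests".format(
        repo_slug.replace("/", "%2F"))
    headers = {
        "Travis-API-Version": "3",
        "Content-Type": "application/json",
        "Authorization": "token " + token,
    }
    body = json.dumps({"request": {"branch": branch}})
    return url, headers, body
```

The same request could equally be fired by a GitHub webhook handler, keeping the whole chain machine-driven.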
Another advantage of hosted CI solutions is that they can provide
much broader testing facilities. Browser and OS testing used to be a
tedious affair — workstations and staff had to be dedicated to ensure
that bugs didn’t appear within a certain environment. In contrast,
a hosted solution can maintain a set of cloud-based servers with different configurations for this purpose. Travis allows testing on
Linux, Windows and Mac environments. Travis CI supports a range
of programming languages such as PHP, Ruby, Node.js, Scala, Go,
C and Clojure. It now supports both public and private repositories,
but it was traditionally associated with open source software —
Travis CI’s free version is itself an open source project hosted on
GitHub.
Similar tools out there include:
Mobile CI
Future of CI
Introducing Docker Containers
What is Docker?
Why Docker?
POST /containers/create
--name=""
--mac-address=""
You can even change the way the container functions with the
server itself, assigning the container to run on specific CPU cores
by ID:
--cpuset-cpus=""
These two calls will first assign the container to port 8910 of the IP
192.168.0.1, and then expose that port to forward facing traffic (in
effect opening the port completely for API functionality).
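As a hedged sketch of what such a request carries, the helper below builds the kind of JSON body that POST /containers/create accepts for a port assignment, following the Docker Remote API's ExposedPorts / PortBindings schema; the image name is hypothetical:

```python
import json

def container_create_body(port, host_ip):
    """JSON body for POST /containers/create binding a container
    port to a specific host IP, per the Docker Remote API schema."""
    key = "{}/tcp".format(port)
    return json.dumps({
        "Image": "my-api-image",  # hypothetical image name
        "ExposedPorts": {key: {}},
        "HostConfig": {
            "PortBindings": {
                key: [{"HostIp": host_ip, "HostPort": str(port)}]
            }
        },
    })
```

POSTing this body to the daemon, then calling the start endpoint, yields a container reachable at the given IP and port.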
In order to make these containers functional, of course, a container
needs to connect to an image. These images are built utilizing the
“build” call:
This call lists the entirety of the Docker image library without
truncation, which can then be called and utilized using the run
variables.
Docker avoids much of the dependency loading inherent in the API process, simplifying code and reducing network and system utilization. For instance, consider a theoretical set of custom imports in Golang:
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "customlib"
    "main"
    "golang-local/issr"
    "functionreader"
    "payment_processor"
    "maths"
)
Caveat Emptor
Digging into Docker Architecture
While these tools are usually wielded day-to-day from the com-
mand line, they have all sprouted APIs, and developers are in-
creasingly building API clients to manage the DevOps workflows
at technology companies just as they do within their own products.
Out of this set of emerging technologies, one of them has taken the
world of DevOps by storm in the last three years: Docker.
Virtual Containers
Docker Architecture
the persistent process that runs on each host and listens to API
calls. Both the client and the daemon can share a single host, or
the daemon can run in a remote host.
Docker images are read-only templates from which containers are
generated. An image consists of a snapshot of a Linux distribution
like Ubuntu or Fedora — and maybe a set of applications or runtime
environments, like Apache, Java, or ElasticSearch. Users can create
their own Docker images, or reuse one of the many images created
by other users and available on the Docker Hub.
Docker registries are repositories from which one can download or
upload Docker images. The Docker Hub is a large public registry,
and can be used to pull images within a Docker workflow, but
more often teams prefer to have their own registry containing the
relevant subset of public Docker images that it requires along with
its own private images.
Docker containers are directories containing everything needed
for the application to run, including an operating system and a
file system, leveraging the underlying system’s kernel but without
relying on anything environment-specific. This enables containers
to be created once and moved from host to host without risk of
configuration errors. In other words, the exact same container will
work just as well on a developer’s workstation as it will on a remote
server.
A Docker workflow is a sequence of actions on registries, images
and containers. It allows a team of developers to create containers
based on a customized image pulled from a registry, and deploy
and run them on a host server. Every team has its own workflow
— potentially integrating with a continuous integration server
like Jenkins, configuration management tools like Chef or Puppet,
and maybe deploying to cloud servers like Amazon Web Services.
The daemon on each Docker host enables further actions on the containers — they can be stopped, deleted or moved. The results of all of these actions are called lifecycle events.
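To illustrate, the daemon's /events endpoint streams lifecycle events as newline-delimited JSON objects; a minimal parser for such a stream might look like the following (the sample events are fabricated for illustration):

```python
import json

def parse_lifecycle_events(stream_lines):
    """Parse newline-delimited JSON from Docker's /events endpoint
    and collect the sequence of lifecycle statuses per container."""
    events = {}
    for line in stream_lines:
        evt = json.loads(line)
        events.setdefault(evt["id"], []).append(evt["status"])
    return events

# Fabricated sample of the event stream format.
sample = [
    '{"status": "create", "id": "abc123", "from": "ubuntu:latest", "time": 1466000000}',
    '{"status": "start", "id": "abc123", "from": "ubuntu:latest", "time": 1466000001}',
    '{"status": "die", "id": "abc123", "from": "ubuntu:latest", "time": 1466000050}',
]
```

A monitoring tool built on the Remote API would consume exactly this kind of stream to track containers across their lifecycle.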
Since it arrived onto the scene in 2013, Docker has seen widespread
adoption at technology companies. Interestingly, whereas early
adopters for most new technologies are typically limited to small
startups, large enterprises were quick to adopt Docker as they
benefit more from the gains in efficiency that it enables, and from
the microservices architecture that it encourages. Docker’s adopters
include Oracle, Cisco, Zenefits, Sony, GoPro, Oculus and Harvard
University.
Docker’s growth has been phenomenal during the last three years,
its adoption numbers are impressive, and it has managed to attract
investments from top-tier venture capital funds.
Docker Book, “the recreation of state may often be cheaper than the
remediation of state”.
Of course, Docker lacks the flexibility afforded by tools like Chef
and Puppet, and using it by itself assumes that your team operates
only with containers. If this isn’t the case and your applications
straddle both container-based processes and bare metal or VM-
based apps, then configuration management tools retain their usefulness. Furthermore, immutable infrastructure doesn’t work when state is essential to the application, as in the case of a database. It can also be frustratingly heavy-handed for small changes.
In these cases, or if Chef or Puppet are an important part of a team’s
architecture prior to introducing Docker, it is quite easy to integrate
these tools within a Docker container, or even to orchestrate Docker
containers using a Chef cookbook or a Puppet module.
Continuous integration software like Jenkins can work with Docker
to build images which can then be published to a Docker Registry.
Docker also enables artifact management by versioning images.
In that way the Docker Hub acts a bit like Maven Central or public
GitHub artifact repositories.
All of the events listed in the previous section can be triggered via
the Docker command line interface, which remains the weapon of
choice for many system engineers.
But daemons can also be accessed through a TCP socket using
Docker’s Remote API, enabling applications to trigger and monitor
Tools Built on Top of the Docker API
In this chapter we review these projects to see how they are using
the Docker API.
Dogfooding
The foremost user of the Docker API is Docker itself — they host a series of tools to combine and orchestrate Docker containers in useful configurations. Docker Compose (https://docs.docker.com/compose/) facilitates the deployment of multi-container applications, while Docker Swarm allows the creation of clusters of Docker containers.
While Docker itself is active in this area, they welcome the contri-
bution of other actors in orchestrating Docker containers. Orches-
tration is a broad term, but we can break it down into scheduling,
clustering, service discovery, and other tasks.
It is usually undesirable to have several processes running inside the same Docker container, for reasons of efficiency and transparency, and to avoid tight coupling of dependencies. It’s much more practical
for each container to remain limited to a single responsibility and
to offer a clearly defined service to the rest of the infrastructure.
A complete application therefore usually involves a collection of
Docker containers. This introduces complexity for which new solu-
tions abound.
Scheduling
Cluster Management
Service Discovery
Networking
Storage
life cycle. A data volume container can be used when sharing data
across multiple containers.
Data volumes can be backed up and restored; products like Flocker by ClusterHQ, an open source data volume manager, manage data volumes and containers, and perform data migration in order to support container-based production databases.
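The data volume container pattern can be sketched as a pair of Docker Remote API request bodies — one container that owns the volume, and one that mounts it via HostConfig.VolumesFrom; the image names here are illustrative:

```python
import json

def data_volume_container(mount_point):
    """Body for POST /containers/create: a container whose only job
    is to own a data volume at mount_point. (The container's name is
    passed as a query parameter, not in the body.)"""
    return json.dumps({"Image": "busybox", "Volumes": {mount_point: {}}})

def consumer_container(volume_container_name):
    """Body for a container that mounts another container's volumes
    via HostConfig.VolumesFrom."""
    return json.dumps({
        "Image": "postgres",  # hypothetical database image
        "HostConfig": {"VolumesFrom": [volume_container_name]},
    })
```

Several consumer containers can reference the same data container, giving them a shared, persistent view of the volume.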
Continuous Integration
Log Aggregation
Monitoring
Configuration Management
Security Auditing
PaaS
Full-blown OS
Description-Agnostic API Development with API Transformer
In order to test the use cases above we took the concept of a de-
veloper portal where API Transformer is used to create translations
of an API description. The API provider that owns the developer
portal in this use case specifies their APIs using Swagger, and publishes the specification in different formats as a convenience to the developer community. In this scenario we envisaged this
being a facet of the development process, embedded in continuous
integration: When code is published to the git repository for the
API, a process is executed that creates translations of the Swagger
description in several pre-defined formats. The steps in the process
are:
For brevity we then created a simple job that triggered when a commit was made to a local git repository (in reality we would obviously add the test suite and check for content changes for each new version). When triggered, a shell script build step is executed that initializes an instance of our demo API, downloads a copy of the Swagger JSON, and then loops through our target alternative format types:
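The loop might be sketched as follows; the conversion endpoint URL, parameter names, and format identifiers are assumptions for illustration, not API Transformer’s documented interface:

```python
def conversion_requests(swagger_url, formats):
    """Build one conversion request description per target format.

    The endpoint and parameter names below are hypothetical; a real
    build step would substitute API Transformer's actual interface
    and then send each request with its HTTP client of choice.
    """
    base = "https://apitransformer.com/api/transform"  # hypothetical
    return [
        {"url": base, "params": {"input": swagger_url, "output": fmt}}
        for fmt in formats
    ]

# Hypothetical target formats for the demo portal.
targets = ["api_blueprint", "raml", "wadl"]
```

Each resulting description would then be uploaded to the portal’s storage, e.g. the S3 bucket described next.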
S3 Bucket
The Present and Future of Configuration Management
CM in the Cloud
The advent of the cloud meant that servers moved out of on-premise data centers and into those of cloud hosting vendors. While the inherent complexities of running an on-premise infrastructure disappeared, new problems arose as well.
Cloud technologies have enabled teams to deploy software to
hundreds if not thousands of servers concurrently to satisfy the
demands of software usage in the internet age. Managing that
many servers requires automation on a different scale, and a more
systematic approach. This is where an API-driven approach to configuration management comes in.
Puppet and Chef are the most mature and the most popular CM
tools at the moment. The packaging and deploying of applications
used to be the sole province of system engineers. By enabling
developers to take part in this process, Puppet and Chef have
together defined a new category of CM solutions — infrastructure
as code.
Both are open source projects and based on Ruby (although signifi-
cant portions of the Chef architecture have been rewritten in Erlang
for performance reasons). They both have an ecosystem of plugin
developers as well as a supporting company offering enterprise
solutions. Each of them features a client-server architecture, with
a master server pushing configuration items to agents running on
each node.
Puppet
Chef
SaltStack
Ansible
In-House CM tools
Security for Continuous Delivery Environments
Auditing Security
Segmentation of Services
API Testing: Using Virtualization for Advanced Mockups
Want to see how error messages function? How rate limits work?
From a user’s perspective, a virtual API looks and behaves like a
real service. However, distanced from live runtime, virtualization
can be used for simulating drastic scenarios. Use an emulator to
simulate real world behavior like downtime, slow or erratic API
responses to see how an app behaves when confronted with these
dilemmas.
A great test can show what happens when a client calls an API that suddenly responds strangely, and do so in a neutral, risk-free setting.
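A toy sketch of the idea: a virtual API that returns canned responses for configured failure scenarios. The scenario names and response shapes here are invented for illustration, not drawn from any particular virtualization product:

```python
def virtual_api_response(scenario, payload=None):
    """Simulate the responses a virtualized API might return under
    various failure scenarios (names are illustrative)."""
    if scenario == "downtime":
        return {"status": 503, "body": "Service Unavailable"}
    if scenario == "rate_limited":
        return {"status": 429, "body": "Too Many Requests"}
    if scenario == "slow":
        # A real virtual service would actually delay the response;
        # here we just tag the intended latency.
        return {"status": 200, "body": payload, "delay_ms": 5000}
    return {"status": 200, "body": payload}
```

Pointing a client at such a stand-in lets a test suite exercise its error handling without ever touching the production API.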
Hjmel sees a very similar development cycle within the API market:
One may think that this is a difficult process, but it can be implemented in a few steps. In a previous post, we used SmartBear’s ReadyAPI service to mock the endpoint of a REST API and create a virtual service in about 20 steps.
Automated Testing for the Internet of Things
The Internet of Things (IoT) is upon us. Every object you see around you, whether it’s your fridge, your electric toothbrush, your car, or even your clothes, is about to acquire a form of sentience.
Some of them already have. Fitbit watches, Nest thermostats and
Apple TVs are just the tip of the iceberg when it comes to the
Internet of Things. Sensors, embedded systems and cloud back-
ends are coming together to bestow smartness upon a bewildering
array of previously inanimate, dumb objects. IoT will be a trillion-
dollar market by 2020 and every big player in the hardware space
is jockeying for position, along with a flood of new IoT startups.
While embedded systems have been around for a very long time
in the form of consumer electronics, IoT has given them a new
dimension. Previously these systems were essentially self-contained
and could work in isolation. But connected objects now need to
converse with each other and rely on each other. Developers have
had to start thinking about device-to-device (D2D) and device-to-
server (D2S) communication and of course human interaction that
comes into play as home appliances and a host of other everyday
objects essentially become an extension of the Internet.
To complicate things even further, IoT comes with its own protocols
like MQTT, CoAP and ZigBee in addition to Wi-Fi and Bluetooth.
Furthermore, embedded systems are subjected to regulatory re-
quirements such as IEC 61508 and MISRA to ensure the safety and
reliability of programmable electronic devices.
Programming languages used in embedded systems tend to be
either C or C++. These languages, more low-level than those used in
New Frontiers
In the age of the Internet of Things, slow testing cycles and poorly tested products are no longer acceptable. Companies have adapted their internal processes to meet these new expectations and thrive with products blending state-of-the-art software and hardware.
Smart objects like unmanned aerial drones will be subjected to deep
scrutiny and regulations, and users will become less accepting of
glitches in their smart home appliances. More companies will offer
IoT-specific testing software, like SmartBear whose Ready! API
solution enables API testing with support for MQTT and CoAP. As
a side effect, test automation job opportunities will likely increase
in the embedded world, creating new career prospects for engineers
who straddle the line between software and hardware.
Expectations around new software deployment and availability of
new capabilities on existing hardware have been greatly increased
by recent advances in mobile and automotive firmware delivery
processes. Until recently, buying any electronic device constituted
a decreasing value proposition — the device would gradually lose
value over time and consumers would be pressured to buy new
versions of the hardware to benefit from new features.
But Over The Air (OTA) firmware updates have changed that. OTA is a deployment method where software updates are pushed from a central cloud service to a range of devices anywhere in the world, typically via Wi-Fi but also via mobile broadband, or even IoT-specific protocols like ZigBee.
Smartphones were the first connected devices to feature OTA
updates, leading to longer-lived devices and a diminished feeling of
planned obsolescence. Cars came next — Tesla famously designs
their cars so that software upgrades can fix problems and enable
new capabilities via OTA updates. This requires careful planning
and a systematic approach to software delivery. One recent example is the auto-pilot feature on the Tesla Model S that was
made available to existing cars after an OTA update. The cars had
already been equipped with all the hardware necessary for the
autopilot to function (cameras, sensors, radar), and a pure software
update was then enough to enable the new feature.
The fact that they are able to confidently ship these changes to a
product like a car, for which safety and usability are paramount,
speaks volumes about the level of planning and test automation
that they’ve put in place. The sensitivity of these updates will
only increase in the era of self-driving vehicles, when artificial
intelligence will replace humans at the controls.
Analysis
Final Thoughts
The API Economy: Tune into case studies as we explore how agile
businesses are using APIs to disrupt industries and outperform
competitors.
The API Lifecycle: An agile process for managing the life of an API
- the secret sauce to help establish quality standards for all API and
microservice providers.
Programming APIs with the Spark Web Framework: Learn how to
master Spark Java, a free open source micro framework that can be
used to develop powerful APIs alongside JVM-based programming
languages.
Securing the API Stronghold: The most comprehensive freely available deep dive into the core tenets of modern web API security, identity control, and access management.
Developing The API Mindset: Distinguishes Public, Private, and
Partner API business strategies with use cases from Nordic APIs
events.