Exploring Opportunities: Containers and OpenStack
OPENSTACK WHITE PAPER
Executive Summary
"The important thing for us as a community is to think about OpenStack as
an integration engine that's agnostic," Collier said. "That puts users in the best
position for success. Just like we didn't reinvent the wheel when it comes to
compute, storage and networking, we'll do the same with containers."
- Mark Collier, COO, OpenStack Foundation
Containers are certainly a hot topic. The OpenStack User Survey indicates over half of the
respondents are interested in containers in conjunction with their OpenStack clouds for
production uses. Thanks to new open source initiatives, primarily Docker, containers have
gained significant popularity lately among Developer and Ops communities alike.
The Linux kernel has supported containers for several years, and now even Microsoft
Windows is following suit. However, container use in the enterprise remains an emerging
opportunity since standards are still being formed, the toolset ecosystem around containers
is relatively new, and ROI is uncertain.
Containers are an evolving technology and OpenStack is evolving to support them, just as it
has supported other emerging technologies in the past. Rather than create new vertical silos
to manage containers in their data centers, IT organizations find value in OpenStack
providing a cross-platform API to manage virtual machines, containers and bare metal.
Trevor Pott, writing for The Register, provides perspective.
OpenStack is not a cloud. It is not a project or a product. It is not a
virtualization system or an API or a user interface or a set of standards.
OpenStack is all of these things and more: it is a framework for doing IT
infrastructure - all IT infrastructure - in as interchangeable and
interoperable a way as we are ever likely to know how.[1]
Container support is another example of the basic value proposition of OpenStack:
by using OpenStack as the foundation of a cloud strategy, you can adopt new, even
experimental, technologies and then deploy them to production when the time is right, all
on one underlying cloud infrastructure - without compromising multi-tenant security and
isolation, management and monitoring, storage and networking, and more.
In order to support accelerating interest in containers and highlight opportunities, this paper
offers readers a comprehensive understanding of containers and container management in
the context of OpenStack. This paper will describe how various services related to containers
are being developed as first-class resources in current and upcoming releases of OpenStack.
1 http://www.theregister.co.uk/2015/07/09/openstack_overview/
Containers offer deployment speed advantages over virtual machines because they're
smaller: megabytes instead of gigabytes. Typical application containers start in seconds,
whereas virtual machines often take minutes. Containers also allow direct access to device drivers
through the kernel, which makes I/O operations faster than with a hypervisor approach where
those operations must be virtualized. Even in environments with hundreds or thousands of
containers, this speed advantage is significant and contributes to overall responsiveness: new
workloads can be brought online quickly, and boot storms become a thing of the past.
Containers create a proliferation of compute units, and without robust monitoring, management,
and orchestration, IT administrators will be coping with container sprawl, where containers are
left running, misplaced, or forgotten. As a result, some third-party ecosystem tools have become
so closely associated with containers that they need to be mentioned in the context of OpenStack.
The three most common are Docker Swarm, Kubernetes, and Mesos.
Docker[2] popularized the idea of the container image. Docker provides a straightforward way for
developers to package an application and its dependencies in a container image that can run on
any modern Linux, and soon Windows, server. Docker also offers additional tools for container
deployments, including Docker Machine, Docker Compose, and Docker Swarm. At the highest
level, Machine makes it easy to spin up Docker hosts, Compose makes it easier to deploy complex
distributed apps on Docker, and Swarm enables native clustering for Docker.
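To make the packaging idea concrete, here is a minimal sketch using the present-day Docker SDK for Python, a tool this paper does not itself cover; the image name and port mapping are illustrative placeholders.

    # Illustrative only: image name and port mapping are placeholders.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # The application and all of its dependencies travel inside the image;
    # starting it takes seconds rather than minutes.
    container = client.containers.run(
        "nginx:latest",          # placeholder application image
        detach=True,             # return immediately rather than block
        ports={"80/tcp": 8080},  # expose container port 80 on host port 8080
    )
    print(container.id)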
Kubernetes[3] (originally developed by Google, now contributed to the Cloud Native Computing
Foundation[4]) is an open source orchestration system for Docker containers. It handles scheduling onto nodes in
a compute cluster and actively manages workloads to ensure that their state matches the user's
declared intentions.
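As an illustration of "declared intentions," the following sketch uses the present-day Kubernetes Python client and Deployment API, both of which postdate this paper; all names are placeholders. The user declares a desired state of three replicas, and Kubernetes schedules them and restores that state whenever reality drifts from it.

    # Illustrative only: uses the official Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # read credentials from the local kubeconfig
    apps = client.AppsV1Api()

    # Declare the desired state: three replicas of one container image.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="web", image="nginx:latest"),
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)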
Apache Mesos[5] can be used to deploy and manage application containers in large-scale clustered
environments. It allows developers to conceptualize their applications as jobs and tasks. Mesos, in
combination with a job system like Marathon, takes care of scheduling and running jobs and tasks.
OpenStack refers to these three options as Container Orchestration Engines (COEs). All three
COE systems are supported in OpenStack Magnum, the containers service for OpenStack,
which allows your choice of COE to be automatically provisioned on a collection of compute
instances where your containers are run.
2 https://opensource.com/resources/whatdocker
3 http://kubernetes.io/
4 http://www.linuxfoundation.org/newsmedia/announcements/2015/07/newcloudnativecomputingfoundationdrivealignmentamong
5 http://opensource.com/business/14/9/opensourcedatacentercomputingapachemesos
It's worth mentioning that the container ecosystem, even for companies like Docker, remains a
work in progress. For example, a fundamental standard for container images is still under
development. In June 2015, 21 companies formed the Open Container Initiative[6] to address
this issue. Docker is donating its container format and runtime, runC, to the OCI to serve as the
cornerstone of this new effort. As container technology matures, a fundamental goal for
ongoing OpenStack development is to ensure that tools like Docker, Kubernetes and Mesos
work well within OpenStack. OpenStack, as a fundamental framework for IT infrastructure,
remains hardware and software agnostic so it can manage everything.
6 https://www.opencontainers.org/
Containers provide deterministic software packaging and fit nicely with an immutable
infrastructure model.
Containers are excellent for encapsulating microservices.
Containers are portable across OpenStack virtual machines and bare metal servers
(Ironic) using a single, lightweight image.
One of the benefits of using an orchestration framework with containers is that it allows
switching between OpenStack and bare metal environments at any given point in time, abstracting
the application away from the infrastructure. In this way, either option can be selected by
pointing the orchestration engine to the target environment of choice. The OpenStack Orchestration
Service (Heat) has provided support for Docker orchestration since the Icehouse release.
With Google's recent sponsorship of the OpenStack Foundation[8] and developer contributions,
the Kubernetes orchestration engine is integrated with OpenStack as well. In fact, with OpenStack
Magnum containers-as-a-service, the default bay type is a Kubernetes bay.
7 https://cloud.google.com/compute/docs/containers/container_vms
8 http://www.openstack.org/blog/2015/07/google-bringing-container-expertise-to-openstack/
There are noticeable cost advantages as well. Essentially, administrators can create and destroy
container resources in their data center without worrying about costs. With typical data center
utilization at 30%, it is easy to bump up that number by deploying additional containers on the
same hardware. Also, because containers are small and launch quickly, clusters can scale up and
down in more affordable and granular ways, simply by running or stopping additional containers.
To that point, containers also enable density improvements. Instead of running a dozen or two
dozen virtual machines per server, it's possible to run hundreds of application containers per server.
There are a few implications to this possibility. One is that enterprises might be able to make use of
older or lower-performing hardware, thereby reducing costs. Another is that an
enterprise might be able to use fewer servers, or smaller cloud instances, to accomplish
its objectives.
There are a number of sophisticated users who have started to use containers at scale, in
production. Rackspace is using OpenStack to provision containers at scale in production products,
including Rackspace Private Cloud, Rackspace Public Cloud, and Rackspace Cloud Databases.
Pantheon, a website management platform serving over 100,000 Drupal and WordPress sites, is
powered by 1,000,000+ containers on virtual machines and bare metal, provisioned in exactly the
same way with their OpenStack-based CLI and RESTful API.
Magnum
Magnum is an OpenStack API service that adds multi-tenant integration of prevailing container
orchestration software for use in OpenStack clouds. Magnum allows multiple container
technologies in OpenStack to be used concurrently, on a variety of Nova instance types.
Magnum makes orchestration engines, including Docker Swarm, Kubernetes, and Mesos,
available through first-class resources in OpenStack. Magnum provides container-specific
features that are beyond the scope of Nova's API, and implements its own API to surface these
features in a way that is consistent with other OpenStack services. It also allows for native APIs
and native tools to be used directly, so that container tooling (like the Docker CLI client) does
not need to be redesigned to work with OpenStack.
Containers started by Magnum are run on top of an OpenStack resource called a bay. Bays are
collections of Nova instances that are created using Heat. Magnum uses Heat to orchestrate an
OS image which contains Docker Swarm, Kubernetes or Mesos, and runs that image in either
virtual machines or bare metal in a cluster configuration. Magnum simplifies the required
integration with OpenStack, and allows cloud users who can already launch cloud resources
such as Nova instances, OpenStack Block Storage (Cinder) volumes, OpenStack Database
Service (Trove) databases, etc. to create bays where they can start application containers.
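As a rough sketch of this workflow, the hypothetical Python wrapper below drives the Kilo-era Magnum CLI; the flag names follow the Magnum quickstart of that period, and the image, keypair, and network values are placeholders, not taken from this paper.

    # Hypothetical sketch: flag values are placeholders.
    import subprocess

    def run(cmd):
        """Echo and execute a CLI command, raising on failure."""
        print("$", " ".join(cmd))
        subprocess.check_call(cmd)

    # A baymodel is the template describing the COE, image, and keypair.
    run(["magnum", "baymodel-create",
         "--name", "k8s-baymodel",
         "--image-id", "fedora-21-atomic",   # placeholder Glance image
         "--keypair-id", "testkey",          # placeholder Nova keypair
         "--external-network-id", "public",
         "--coe", "kubernetes"])

    # Creating a bay from the model makes Heat build the Nova instances
    # and install the chosen COE on them.
    run(["magnum", "bay-create",
         "--name", "k8s-bay",
         "--baymodel", "k8s-baymodel",
         "--node-count", "2"])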
Magnum leverages Docker Swarm, Kubernetes, and Mesos as components, but differs in that
Magnum also offers an asynchronous OpenStack API that uses OpenStack Identity Service
(Keystone), and includes a complete multi-tenancy implementation. It does not perform
orchestration internally, and instead relies on Heat.
The same identity credentials used to create IaaS resources can be used to run containerized
applications using Magnum, via built-in integration with Keystone. Some examples of
advanced features available with Magnum are the ability to scale an application to a specified
number of instances, to cause your application to automatically re-spawn an instance in the
event of a failure, and to pack applications together more tightly than would be possible using
virtual machines.
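A minimal sketch of this credential reuse, assuming a keystoneauth1-based client; the endpoint and credential values are placeholders.

    # Placeholder endpoint and credentials throughout.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="http://controller:5000/v3",  # placeholder Keystone endpoint
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    # The same authenticated session object can be handed to the Nova,
    # Cinder, or Magnum client libraries, so containers and IaaS resources
    # are created under one identity.
    print(sess.get_token())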
The second release of Magnum (Kilo-2) is available for download[12]. It includes significant test
code coverage, multi-tenancy support, scalable bays, support for CoreOS nodes, 8-bit character
support, and 61 other enhancements, bug fixes, and technical debt elimination.
Magnum is at the point today, prior to Liberty, where OpenStack-based public cloud providers
can start to leverage it. These users are contributing to Magnum, making containers accessible
for individual IT departments later this year and beyond. Experienced users assert it is very
reliable and really helps to run an app in an immutable infrastructure.
Magnum Networking
Magnum leverages OpenStack Networking (Neutron) capability when creating bays. This
allows each node in a bay to communicate with the other nodes. If the Kubernetes bay type is
used, a Flannel overlay network allows Kubernetes to assign IP addresses to
containers in the bay while allowing multi-host communication between containers. Work is
currently underway to leverage new networking features from the Docker community
(libnetwork) to provide a native client experience while leveraging OpenStack networking, and
to offer an experience consistent with containers running outside of OpenStack cloud
environments. This capability is expected in the OpenStack Mitaka release timeframe.
Magnum Security and Multi-tenancy
Resources such as containers, services, pods, and bays started by Magnum can only be viewed
and accessed by users of the tenant that created them. Bays are not shared, meaning that
containers will not run on the same kernel as those of neighboring tenants. This is a key security feature
that allows containers belonging to the same tenant to be tightly packed within the same pods
and bays, while running separate kernels (in separate Nova instances) for different tenants.
This differs from using a system like Kubernetes without Magnum, which is intended to be
used only by a single tenant and leaves the security isolation design up to the implementer.
12 https://github.com/openstack/magnum/releases/tag/2015.1.0b2
Using Magnum provides the same level of security isolation that Nova provides when running
virtual machines belonging to different tenants on the same compute nodes.
If Nova is currently trusted to isolate workloads between multiple tenants in a cloud using a
hypervisor, then Magnum can also be trusted to provide equivalent isolation of containers
between multiple tenants. This is because Magnum uses Nova instances to compose bays,
and does not share bays between different tenants.
For more information or to get involved, please visit: https://wiki.openstack.org/wiki/Magnum
Kolla
Deploying and upgrading OpenStack has always been a complex task. With the advent of the
core and projects structure of the OpenStack software[13], there is no longer an integrated
OpenStack release. Projects can be approved and released on their own cadence, with most
projects still opting to release at the end of the six-month development cycles. Operators can
select from a number of projects to custom-build their deployment. This introduces
more flexibility and choice, but also complexity in deployment and ongoing operations.
Kolla is a new approach to deploying OpenStack within containers that results in fast, reliable,
and composable building blocks. Kolla simplifies deployment and ongoing operations by
packaging each service, for the most part, as a micro-service in a Docker container.
Containerized micro-services and Ansible orchestration allow operators to upgrade a service by
building a new Docker container and redeploying the system. Different versions and packaging
mechanisms, such as distribution packaging, RDO, and from-source, can be supported individually.
Unlike other configuration management tools, deploying OpenStack with Kolla does not
reconfigure anything else in the deployed operating system, which lowers risk. And Kolla containers
are easy to create.
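The following illustrative sketch is not Kolla's actual tooling; it shows the upgrade-by-replacement pattern this model relies on, using the Docker SDK for Python with placeholder service and image names.

    # Illustrative only: not Kolla's tooling; names are placeholders.
    import docker

    client = docker.from_env()
    SERVICE = "glance_api"                      # placeholder service name
    IMAGE = "example/centos-binary-glance-api"  # placeholder image name

    client.images.pull(IMAGE, tag="new")  # fetch the rebuilt service image

    old = client.containers.get(SERVICE)  # locate the running service
    old.stop()
    old.remove()

    # Start the replacement under the same name; the running container is
    # never patched in place, it is replaced wholesale.
    client.containers.run(IMAGE + ":new", name=SERVICE, detach=True)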
Kolla provides immutability, because the only things that change are the configuration files
loaded into a container and how those configuration changes modify the behavior of the
OpenStack services. This turns OpenStack from an imperative system into a declarative one:
either the container runs or it doesn't. Kolla is implemented with data containers that can be
mounted on the host operating system. For example, databases, OpenStack Image Service
(Glance) information, Nova compute VMs, and other persistent data can be stored in data
containers that can be backed up and restored individually, again lowering risk and extending
immutability throughout the OpenStack infrastructure.
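A hypothetical sketch of the data-container idea, again using the Docker SDK for Python with placeholder names: persistent state lives in a named volume, so the service container can be destroyed and recreated without losing data.

    # Illustrative only: image, volume, and path names are placeholders.
    import docker

    client = docker.from_env()

    # A named volume holds the database files outside any one container.
    client.volumes.create(name="mariadb_data")

    # Mount the volume into the service container. Replacing the container
    # later leaves the volume, and therefore the data, intact; the volume
    # can also be backed up and restored on its own.
    client.containers.run(
        "mariadb:10",  # placeholder database image
        name="mariadb",
        detach=True,
        environment={"MYSQL_ROOT_PASSWORD": "secret"},
        volumes={"mariadb_data": {"bind": "/var/lib/mysql", "mode": "rw"}},
    )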
13 https://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
Kolla comes out of the box as a highly opinionated deployment system that requires the
configuration of just four pieces of information and deploys with the execution of one operation.
For more experienced operators, Kolla can be completely customized by configuration
augmentation, enabling an operator to tailor their deployment to suit their needs as their
experience with OpenStack increases.
Kolla's lead container distribution is CentOS, but containers are built and tested for CentOS,
Fedora, Oracle Linux, Red Hat Enterprise Linux, and Ubuntu container runtimes in both source
and distro packaging models. Deployment occurs to an operating system that matches the
container runtime operating system to preserve system call, IOCTL, and netlink compatibility.
Deployment to micro-operating systems such as CoreOS, Fedora Atomic, and Red Hat Enterprise
Linux Atomic is planned for the future.
Kolla implements Ansible deployment of the following infrastructure and basic services with a
high availability model using redundancy to protect against faults:
RabbitMQ
MariaDB with Galera Replication
Keepalived
HAProxy
Keystone
Glance
Magnum
Nova
Neutron with both OVS and LinuxBridge support
Horizon
Heat
Cinder
Swift
Ceilometer
Kolla is ready for evaluation by operators, but not yet for production deployment. A
deployment-ready Kolla is eagerly awaited by operators worldwide.
For more information or to get involved, please visit: https://wiki.openstack.org/wiki/Kolla
or meet the developers and users on #kolla on Freenode IRC.
Murano
Murano is an OpenStack project that provides an application catalog, letting app developers and
cloud administrators publish cloud-ready applications in a browsable, categorized repository
available within the OpenStack Dashboard (Horizon), and letting administrators easily obtain
additional apps from public repositories such as the OpenStack Community App Catalog
(apps.openstack.org), Google Container Registry, and Docker Hub/Registry. Murano provides
developers and operators with the ability to control full application lifecycles, while allowing
users - including inexperienced ones - a simple, self-service way to deploy reliable application
environments with the push of a button.
Murano thus enables management and self-service delivery of both conventional application
stacks and container-oriented environments and PaaS solutions - including Kubernetes, Mesos,
Cloud Foundry, and Swarm - on OpenStack. It can coordinate the use of all the Docker drivers within
the context of an application through Heat or Python plugins. App and service developers use
containers to run services with the container management tools of their choice, and Murano then
acts as a lightweight abstraction so that the developer can provide guidance to the operator about
how to handle app/service lifecycle actions such as upgrade, scale-up/down, backup, and recovery.
Murano is available today as OpenStack packages in Juno and Kilo. An organization's cloud
application catalog can be populated by importing Murano packages from a local repository,
or from the OpenStack Community Application Catalog.
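A minimal sketch of populating the catalog from Python by shelling out to the Murano CLI (python-muranoclient); the package path is a placeholder.

    # Hypothetical sketch: the package path is a placeholder.
    import subprocess

    # Import a locally built application package into the catalog; packages
    # can equally be obtained from the OpenStack Community App Catalog.
    subprocess.check_call(["murano", "package-import", "/path/to/my-app.zip"])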
Murano environments may be as simple as a single VM, or may be complex, multi-tier applications
with auto-scaling and self-healing.
Because each application and service definition includes all of the information the system needs
for deployment, users will not have to work through various IT departments in order to provision a
cloud service, nor are users required to provide detailed IT specifications. They are only required to
provide business and organization requirements.
Installing third party services and applications can be difficult in any environment, but the dynamic
nature of an OpenStack environment can make this problem worse. Murano is designed to solve
this problem by providing an additional integration layer between third-party components and
OpenStack infrastructure. This makes it possible to provide both
Infrastructure-as-a-Service and Platform-as-a-Service from a single control plane.
For users, the application catalog is a place to find and self-provision third-party applications and
services, integrate them into their environment, and track usage information and costs. The single
control plane becomes a single interface from which they can provision an entire fully-functional
cloud-based application environment.
From the third-party tool developer's perspective, the application catalog provides a way to
publish applications and services, including deployment rules and requirements, suggested
configuration, output parameters, and billing rules. It will also provide a way to track billing and
usage information. In this way, third-party developers can enrich the OpenStack ecosystem
to make it more attractive for users, and users can get more out of their OpenStack clusters more
easily, fostering adoption of OpenStack itself.
OpenStack is a registered trademark of the OpenStack Foundation in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows and Hyper-V are trademarks of Microsoft Corporation in the United States, other countries,
or both.
VMware and VMware vSphere are trademarks and registered trademarks of VMware, Inc. in the United States
and certain other countries.
Other product and service names might be trademarks of other companies.