OpenStack Security Guide
Table of Contents
Preface
    Conventions
    Document change history
1. Introduction
    Acknowledgments
    Why and how we wrote this book
    Introduction to OpenStack
    Security boundaries and threats
    Introduction to case studies
2. System documentation
    System documentation requirements
    Case studies
3. Management
    Continuous systems management
    Integrity life-cycle
    Management interfaces
    Case studies
4. Secure communication
    Introduction to TLS and SSL
    TLS proxies and HTTP services
    Secure reference architectures
    Case studies
5. API endpoints
    API endpoint configuration recommendations
    Case studies
6. Identity
    Authentication
    Authentication methods
    Authorization
    Policies
    Tokens
    Future
    Federated Identity
    Checklist
7. Dashboard
    Basic web server configuration
    HTTPS
    HTTP Strict Transport Security (HSTS)
    Front end caching
    Domain names
    Static media
List of Figures
1.1. Attack types
9.1. An example diagram from the OpenStack Object Storage Administration Guide (2013)
9.2. Object Storage network architecture with a management node (OSAM)
Preface
Conventions
Document change history
Conventions
The OpenStack documentation uses several typesetting conventions.
Notices
Notices take these forms:
Note
A handy tip or reminder.
Important
Something you must be aware of before proceeding.
Warning
Critical information about the risk of data loss or security issues.
Command prompts
$ prompt  Any user, including the root user, can run commands that are prefixed with the $ prompt.

# prompt  The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.
Document change history

- April 29, 2015: Final prep for Kilo release.
- Havana release: This book has been extensively reviewed and updated. Chapters have been rearranged and a glossary has been added.
- July 2, 2013: Initial creation...
1. Introduction
Acknowledgments
Why and how we wrote this book
Introduction to OpenStack
Security boundaries and threats
Introduction to case studies
The OpenStack Security Guide is the result of a five-day sprint of collaborative work by many individuals. The purpose of this document is to provide best practice guidelines for deploying a secure OpenStack cloud. It is a living document that is updated as new changes are merged into the repository, and is meant to reflect the current state of security within the OpenStack community and provide frameworks for decision making where listing specific security controls is not feasible due to complexity or other environment-specific details.
Acknowledgments
The OpenStack Security Group would like to acknowledge contributions from the organizations that were instrumental in making this book possible.

Why and how we wrote this book
Objectives
- Identify the security domains in OpenStack
- Provide guidance to secure your OpenStack deployment
- Highlight security concerns and potential mitigations in present day OpenStack
- Discuss upcoming security features
- Provide a community driven facility for knowledge capture and dissemination
How
As with the OpenStack Operations Guide, we followed the book sprint
methodology. The book sprint process allows for rapid development and
production of large bodies of written work. Coordinators from the OpenStack Security Group re-enlisted the services of Adam Hyde as facilitator.
Corporate support was obtained and the project was formally announced
during the OpenStack summit in Portland, Oregon.
The team converged in Annapolis, MD, due to the close proximity of some key members of the group. This was a remarkable collaboration between public sector intelligence community members, Silicon Valley startups and some large, well-known technology companies. The book sprint ran during the last week in June 2013 and the first edition was created in five days.
Cody Bunch, Rackspace
Cody Bunch is a Private Cloud architect with Rackspace. Cody has co-authored an update to "The OpenStack Cookbook" as well as books on VMware automation.
Malini Bhandaru, Intel
Malini Bhandaru is a security architect at Intel. She has a varied background, having worked on platform power and performance at Intel,
speech products at Nuance, remote monitoring and management at
ComBrio, and web commerce at Verizon. She has a Ph.D. in Artificial Intelligence from the University of Massachusetts, Amherst.
Gregg Tally, Johns Hopkins University Applied Physics Laboratory
Gregg Tally is the Chief Engineer at JHU/APL's Cyber Systems Group
within the Asymmetric Operations Department. He works primarily in
systems security engineering. Previously, he has worked at SPARTA,
McAfee, and Trusted Information Systems where he was involved in cyber security research projects.
Eric Lopez, VMware
Eric Lopez is Senior Solution Architect at VMware's Networking and Security Business Unit where he helps customers implement OpenStack
and VMware NSX (formerly known as Nicira's Network Virtualization
Platform). Prior to joining VMware (through the company's acquisition
of Nicira), he worked for Q1 Labs, Symantec, Vontu, and Brightmail. He
has a B.S. in Electrical Engineering/Computer Science and Nuclear Engineering from U.C. Berkeley and an MBA from the University of San Francisco.
Shawn Wells, Red Hat
Shawn Wells is the Director, Innovation Programs at Red Hat, focused
on improving the process of adopting, contributing to, and managing
open source technologies within the U.S. Government. Additionally,
Shawn is an upstream maintainer of the SCAP Security Guide project
which forms virtualization and operating system hardening policy with
the U.S. Military, NSA, and DISA. Formerly an NSA civilian, Shawn developed SIGINT collection systems utilizing large distributed computing infrastructures.
Ben de Bont, HP
Ben de Bont is the CSO for HP Cloud Services. Prior to his current role
Ben led the information security group at MySpace and the incident response team at MSN Security. Ben holds a master's degree in Computer
Science from the Queensland University of Technology.
Nathanael Burton, National Security Agency
Nathanael Burton is a Computer Scientist at the National Security Agency. He has worked for the Agency for over 10 years, working on distributed systems, large-scale hosting, open source initiatives, operating systems, security, storage, and virtualization technology. He has a B.S. in
Computer Science from Virginia Tech.
Vibha Fauver
Vibha Fauver, GWEB, CISSP, PMP, has over fifteen years of experience in
Information Technology. Her areas of specialization include software engineering, project management and information security. She has a B.S.
in Computer & Information Science and a M.S. in Engineering Management with specialization and a certificate in Systems Engineering.
Eric Windisch, Cloudscaling
Eric Windisch is a Principal Engineer at Cloudscaling where he has been
contributing to OpenStack for over two years. Eric has been in the
trenches of hostile environments, building tenant isolation and infrastructure security through more than a decade of experience in the web
hosting industry. He has been building cloud computing infrastructure
and automation since 2007.
Andrew Hay, CloudPassage
Andrew Hay is the Director of Applied Security Research at CloudPassage, Inc. where he leads the security research efforts for the company
and its server security products purpose-built for dynamic public, private,
and hybrid cloud hosting environments.
Adam Hyde
Adam facilitated this Book Sprint. He also founded the Book Sprint
methodology and is the most experienced Book Sprint facilitator
around. Adam founded FLOSS Manuals, a community of some 3,000 individuals developing Free Manuals about Free Software. He is also the
founder and project manager for Booktype, an open source project for
writing, editing, and publishing books online and in print.
During the sprint we also had help from Anne Gentle, Warren Wang, Paul
McMillan, Brian Schott and Lorin Hochstein.
This book was produced in a five-day book sprint. A book sprint is an intensely collaborative, facilitated process which brings together a group to produce a book in 3-5 days. It is a strongly facilitated process with a specific
methodology founded and developed by Adam Hyde. For more information visit the book sprint web page at http://www.booksprints.net.
After initial publication, the following added new content:
Rodney D. Beede, Seagate Technology
Rodney D. Beede is the Cloud Security Engineer for Seagate Technology.
He contributed the missing chapter on securing OpenStack Object Storage (swift). He holds a M.S. in Computer Science from the University of
Colorado.
Introduction to OpenStack
This guide provides security insight into OpenStack deployments. The intended audience is cloud architects, deployers, and administrators. In
addition, cloud users will find the guide both educational and helpful in
provider selection, while auditors will find it useful as a reference document to support their compliance certification efforts. This guide is also
recommended for anyone interested in cloud security.
Each OpenStack deployment embraces a wide variety of technologies,
spanning Linux distributions, database systems, messaging queues, OpenStack components themselves, access control policies, logging services, security monitoring tools, and much more. It should come as no surprise that
the security issues involved are equally diverse, and their in-depth analysis would require several guides. We strive to find a balance, providing
enough context to understand OpenStack security issues and their handling, and provide external references for further information. The guide
could be read from start to finish or sampled as necessary like a reference.
We briefly introduce the kinds of clouds (private, public, and hybrid) before
presenting an overview of the OpenStack components and their related
security concerns in the remainder of the chapter.
Cloud types
OpenStack is a key enabler in adoption of cloud technology and has several common deployment use cases. These are commonly known as Public,
Private, and Hybrid models. The following sections use the National Institute of Standards and Technology (NIST) definition of cloud to introduce
these different types of cloud as they apply to OpenStack.
Public cloud
According to NIST, a public cloud is one in which the infrastructure is open
to the general public for consumption. OpenStack public clouds are typically run by a service provider and can be consumed by individuals, corporations, or any paying customer. A public cloud provider may expose a full
set of features, such as software-defined networking and block storage, in addition to multiple instance types. Due to the nature of public clouds, they
are exposed to a higher degree of risk. As a consumer of a public cloud
you should validate that your selected provider has the necessary certifications, attestations, and other regulatory considerations. As a public cloud
provider, depending on your target customers, you may be subject to one
or more regulations. Additionally, even if not required to meet regulatory
requirements, a provider should ensure tenant isolation as well as protect management infrastructure from external attacks.
Private cloud
At the opposite end of the spectrum is the private cloud. As NIST defines
it, a private cloud is provisioned for exclusive use by a single organization
comprising multiple consumers, such as business units. It may be owned,
managed, and operated by the organization, a third-party, or some combination of them, and it may exist on or off premises. Private cloud use cases are diverse and, as such, their individual security concerns vary.
Community cloud
NIST defines a community cloud as one whose infrastructure is provisioned
for the exclusive use by a specific community of consumers from organizations that have shared concerns (for example, mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
Hybrid cloud
A hybrid cloud is defined by NIST as a composition of two or more distinct
cloud infrastructures, such as private, community, or public, that remain
unique entities, but are bound together by standardized or proprietary
technology that enables data and application portability, such as cloud
bursting for load balancing between clouds. For example, an online retailer
may have their advertising and catalogue presented on a public cloud that
allows for elastic provisioning. This would enable them to handle seasonal loads in a flexible, cost-effective fashion. Once a customer begins to process their order, they are transferred to the more secure private cloud back
end that is PCI compliant.
For the purposes of this document, we treat community and hybrid clouds similarly, dealing explicitly only with the extremes of public and private clouds from a security perspective. Your security measures depend on where your deployment falls on the private-public continuum.
Compute
OpenStack Compute service (nova) provides services to support the management of virtual machine instances at scale: instances that host multi-tiered applications, dev/test environments, "Big Data" Hadoop clusters, or high performance computing workloads.
Object Storage
The OpenStack Object Storage service (swift) provides support for storing
and retrieving arbitrary data in the cloud. The Object Storage service provides both a native API and an Amazon Web Services S3 compatible API.
The service provides a high degree of resiliency through data replication
and can handle petabytes of data.
It is important to understand that object storage differs from traditional
file system storage. It is best used for static data such as media files (MP3s,
images, videos), virtual machine images, and backup files.
Object security should focus on access control and encryption of data in
transit and at rest. Other concerns may relate to system abuse, illegal or
malicious content storage, and cross authentication attack vectors.
Block Storage
The OpenStack Block Storage service (cinder) provides persistent block
storage for compute instances. The Block Storage service is responsible for
managing the life-cycle of block devices, from the creation and attachment
of volumes to instances, to their release.
Security considerations for block storage are similar to those for object storage.
Networking
The OpenStack Networking service (neutron, previously called quantum)
provides various networking services to cloud users (tenants) such as IP address management, DNS, DHCP, load balancing, and security groups (network access rules, like firewall policies). It provides a framework for software defined networking (SDN) that allows for pluggable integration with
various networking solutions.
OpenStack Networking allows cloud tenants to manage their guest network configurations. Security concerns with the networking service include
network traffic isolation, availability, integrity and confidentiality.
Dashboard
The OpenStack dashboard (horizon) provides a web-based interface for
both cloud administrators and cloud tenants. Through this interface administrators and tenants can provision, manage, and monitor cloud resources. Horizon is commonly deployed in a public facing manner with all
the usual security concerns of public web portals.
Identity service
The OpenStack Identity service (keystone) is a shared service that provides
authentication and authorization services throughout the entire cloud infrastructure. The Identity service has pluggable support for multiple forms
of authentication.
Security concerns here pertain to trust in authentication, management of
authorization tokens, and secure communication.
Image service
The OpenStack Image service (glance) provides disk image management
services. The Image service provides image discovery, registration, and delivery services to the Compute service, as needed.
Trusted processes for managing the life cycle of disk images are required,
as are all the previously mentioned issues with respect to data security.
Security domains
A security domain comprises users, applications, servers or networks that
share common trust requirements and expectations within a system. Typically they have the same authentication and authorization (AuthN/Z) requirements and users.
Although you may desire to break these domains down further (we later
discuss where this may be appropriate), we generally refer to four distinct
security domains which form the bare minimum that is required to deploy
any OpenStack cloud securely. These security domains are:
1. Public
2. Guest
3. Management
4. Data
We selected these security domains because they can be mapped independently or combined to represent the majority of the possible areas of trust
within a given OpenStack deployment. For example, some deployment
topologies may consist of a combination of guest and data domains onto
one physical network while other topologies have these domains separated. In each case, the cloud operator should be aware of the appropriate
security concerns. Security domains should be mapped out against your
specific OpenStack deployment topology. The domains and their trust requirements depend upon whether the cloud instance is public, private, or
hybrid.
Public
The public security domain is an entirely untrusted area of the cloud infrastructure. It can refer to the Internet as a whole or simply to networks over
which you have no authority. Any data that transits this domain with confidentiality or integrity requirements should be protected using compensating controls.
Guest
Typically used for compute instance-to-instance traffic, the guest security
domain handles compute data generated by instances on the cloud but
not services that support the operation of the cloud, such as API calls.
Public and private cloud providers that do not have stringent controls on
instance use or allow unrestricted internet access to VMs should consider
this domain to be untrusted. Private cloud providers may want to consider this network as internal and therefore trusted only if the proper controls are implemented to assert that the instances and all associated tenants can be trusted.
Management
The management security domain is where services interact. Sometimes
referred to as the "control plane", the networks in this domain transport
confidential data such as configuration parameters, user names, and passwords. Command and Control traffic typically resides in this domain, which
necessitates strong integrity requirements. Access to this domain should be
highly restricted and monitored. At the same time, this domain should still
employ all of the security best practices described in this guide.
In most deployments this domain is considered trusted. However, when
considering an OpenStack deployment, there are many systems that
bridge this domain with others, potentially reducing the level of trust you
can place on this domain. See the section called Bridging security domains for more information.
Data
The data security domain is concerned primarily with information pertaining to the storage services within OpenStack. Most of the data transmitted across this network requires high levels of integrity and confidentiality. In some cases, depending on the type of deployment, there may also be
strong availability requirements.
The trust level of this network is heavily dependent on deployment decisions and as such we do not assign this any default level of trust.
Bridging security domains
The diagram above shows a compute node bridging the data and management domains; as such, the compute node should be configured to meet
the security requirements of the management domain. Similarly, the API
Endpoint in this diagram is bridging the untrusted public domain and the
management domain, which should be configured to protect against attacks from the public domain propagating through to the management
domain.
In some cases deployers may want to consider securing a bridge to a higher standard than any of the domains in which it resides. Given the above
example of an API endpoint, an adversary could potentially target the API
endpoint from the public domain, leveraging it in the hopes of compromising or gaining access to the management domain.
The design of OpenStack is such that separation of security domains is difficult: because core services usually bridge at least two domains, special consideration must be given when applying security controls to them.
Threat actors
A threat actor is an abstract way to refer to a class of adversary that you
may attempt to defend against. The more capable the actor, the more expensive the security controls that are required for successful attack mitigation and prevention. Security is a trade-off between cost, usability and defense.
Privacy concerns for public and private cloud users are typically diametrically opposed. The data generated and stored in private clouds is normally owned by the operator of the cloud, who is able to deploy technologies
such as data loss prevention (DLP) protection, file inspection, deep packet
inspection and prescriptive firewalling. In contrast, privacy is one of the primary barriers for the adoption of public cloud infrastructures, as many of
the previously mentioned controls do not exist.
Attack types
The diagram below shows the types of attacks that may be expected from the actors described in the previous section. Note that there will always be exceptions to this diagram, but in general it describes the sorts of attacks that are typical for each actor.
Figure 1.1. Attack types
The prescriptive defense for each form of attack is beyond the scope of
this document. The above diagram can assist you in making an informed
decision about which types of threats, and threat actors, should be protected against. For commercial public cloud deployments this might include prevention against serious crime. For those deploying private clouds
for government use, more stringent protective mechanisms should be in
place, including carefully protected facilities and supply chains. In contrast, those standing up basic development or test environments will likely require less restrictive controls (middle of the spectrum).
2. System documentation
System documentation requirements
Case studies
The system documentation for an OpenStack cloud deployment should follow the templates and best practices for the Enterprise Information Technology System in your organization. Organizations often have compliance
requirements which may require an overall System Security Plan to inventory and document the architecture of a given system. There are common
challenges across the industry related to documenting the dynamic cloud
infrastructure and keeping the information up-to-date.
System inventory
Documentation should provide a general description of the OpenStack
environment and cover all systems used (production, development, test,
etc.). Documenting system components, networks, services, and software
often provides the bird's-eye view needed to thoroughly cover and consider security concerns, attack vectors and possible security domain bridging
points. A system inventory may need to capture ephemeral resources such
as virtual machines or virtual disk volumes that would otherwise be persistent resources in a traditional IT system.
Hardware inventory
Clouds without stringent compliance requirements for written documentation might benefit from having a Configuration Management Database
(CMDB). CMDBs are normally used for hardware asset tracking and overall
life-cycle management. By leveraging a CMDB, an organization can quickly identify cloud infrastructure hardware such as compute nodes, storage
nodes, or network devices. A CMDB can assist in identifying assets that
exist on the network which may have vulnerabilities due to inadequate
maintenance, inadequate protection or being displaced and forgotten. An
OpenStack provisioning system can provide some basic CMDB functions if
the underlying hardware supports the necessary auto-discovery features.
Software inventory
As with hardware, all software components within the OpenStack deployment should be documented. Examples include:
- System databases, such as MySQL or mongoDB
- OpenStack software components, such as Identity or Compute
- Supporting components, such as load-balancers, reverse proxies, DNS or DHCP services
An authoritative list of software components may be critical when assessing the impact of a compromise or vulnerability in a library, application or
class of software.
Network topology
A network topology should be provided with highlights specifically calling out the data flows and bridging points between the security domains.
Network ingress and egress points should be identified along with any
OpenStack logical system boundaries. Multiple diagrams may be needed
to provide complete visual coverage of the system. A network topology
document should include virtual networks created on behalf of tenants by
the system along with virtual machine instances and gateways created by
OpenStack.
Services, protocols and ports
The level of detail contained in this type of table can be beneficial as the
information can immediately inform, guide, and assist with validating security requirements. Standard security components such as firewall configuration, service port conflicts, security remediation areas, and compliance
become easier to maintain when concise information is available. An example of this type of table is provided below:
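The following sketch illustrates such a table using well-known OpenStack default ports; the actual services, protocols, and ports are deployment-specific assumptions to be verified against your installation:

Service                        Protocol  Port   Purpose
OpenStack dashboard (horizon)  HTTPS     443    Web portal for tenants and administrators
OpenStack Identity (keystone)  HTTPS     5000   Public authentication endpoint
OpenStack Identity (keystone)  HTTPS     35357  Admin authentication endpoint
OpenStack Compute (nova) API   HTTPS     8774   Compute API
OpenStack Image (glance) API   HTTPS     9292   Image API
MySQL                          TCP       3306   System database
AMQP (RabbitMQ)                TCP       5672   Message queue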
Referencing a table of services, protocols and ports can help in understanding the relationship between OpenStack components. It is highly recommended that OpenStack deployments have information similar to this
on record.
Case studies
Earlier, in the section called Introduction to case studies, we introduced the Alice and Bob case studies, where Alice is deploying a government cloud and Bob is deploying a public cloud, each with different security
requirements. Here we discuss how Alice and Bob would address their system documentation requirements. The documentation suggested above
includes hardware and software records, network diagrams, and system
configuration details.
3. Management
Continuous systems management
Integrity life-cycle
Management interfaces
Case studies
A cloud deployment is a living system. Machines age and fail, software becomes outdated, vulnerabilities are discovered. When errors or omissions
are made in configuration, or when software fixes must be applied, these
changes must be made in a secure, but convenient, fashion. These changes
are typically solved through configuration management.
Likewise, it is important to protect the cloud deployment from being configured or manipulated by malicious entities. With many systems in a cloud
employing compute and networking virtualization, there are distinct challenges applicable to OpenStack which must be addressed through integrity
lifecycle management.
Finally, administrators must perform command and control over the cloud
for various operational functions. It is important these command and control facilities are understood and secured.
Vulnerability management
For announcements regarding security relevant changes, subscribe to the
OpenStack Announce mailing list. The security notifications are also posted through the downstream packages, for example, through Linux distributions that you may be subscribed to as part of the package updates.
The OpenStack components are only a small fraction of the software in
a cloud. It is important to keep up to date with all of these other components, too. While certain data sources will be deployment specific, it is important that a cloud administrator subscribe to the necessary mailing lists
in order to receive notification of any security updates applicable to the organization's environment.
Note
OpenStack releases security information through two channels.
OpenStack Security Advisories (OSSA) are created by
the OpenStack Vulnerability Management Team (VMT).
They pertain to security holes in core OpenStack services.
More information on the VMT can be found here: https://wiki.openstack.org/wiki/Vulnerability_Management
OpenStack Security Notes (OSSN) are created by the OpenStack Security Group (OSSG) to support the work of the
VMT. OSSN address issues in supporting software and
common deployment configurations. They are referenced
throughout this guide. Security Notes are archived at
https://launchpad.net/ossn/
Triage
After you are notified of a security update, the next step is to determine
how critical this update is to a given cloud deployment. In this case, it is
useful to have a pre-defined policy. Existing vulnerability rating systems
such as the common vulnerability scoring system (CVSS) v2 do not properly
account for cloud deployments.
In this example we introduce a scoring matrix that places vulnerabilities
in three categories: Privilege Escalation, Denial of Service and Information
Disclosure. Understanding the type of vulnerability and where it occurs in
your infrastructure will enable you to make reasoned response decisions.
Privilege Escalation describes the ability of a user to act with the privileges of some other user in a system, bypassing appropriate authorization
checks. A guest user performing an operation that allows them to conduct
unauthorized operations with the privileges of an administrator is an example of this type of vulnerability.
Denial of Service refers to an exploited vulnerability that may cause service or system disruption. This includes both distributed attacks to overwhelm network resources, and single-user attacks that are typically caused
through resource allocation bugs or input induced system failure flaws.
Information Disclosure vulnerabilities reveal information about your system or operations. These vulnerabilities range from debugging information to exposure of critical security data, such as authentication credentials and passwords.

                                    External         Cloud user       Cloud admin      Control plane
Privilege elevation (one level)     Critical         n/a              n/a              n/a
Privilege elevation (two levels)    Critical         Critical         n/a              n/a
Privilege elevation (three levels)  Critical         Critical         Critical         n/a
Denial of service                   High             Medium           Low              Low
Information disclosure              Critical / high  Critical / high  Medium / low     Low
This table illustrates a generic approach to measuring the impact of a vulnerability based on where it occurs in your deployment and the effect. For
example, a single level privilege escalation on a Compute API node potentially allows a standard user of the API to escalate to have the same privileges as the root user on the node.
We suggest that cloud administrators use this table as a model to help define which actions to take for the various security levels. For example, a
critical-level security update might require the cloud to be upgraded quickly whereas a low-level update might take longer to be completed.
Configuration management
A production quality cloud should always use tools to automate configuration and deployment. This eliminates human error, and allows the cloud to
scale much more rapidly. Automation also helps with continuous integration and testing.
When building an OpenStack cloud it is strongly recommended to approach your design and implementation with a configuration management tool or framework in mind. Configuration management allows you
to avoid the many pitfalls inherent in building, managing, and maintaining
an infrastructure as complex as OpenStack. By producing the manifests,
cookbooks, or templates required for a configuration management utility,
you are able to satisfy a number of documentation and regulatory reporting requirements. Further, configuration management can also function as
part of your business continuity plan (BCP) and data recovery (DR) plans
wherein you can rebuild a node or service back to a known state in a DR
event or given a compromise.
Additionally, when combined with a version control system such as Git or SVN, you can track changes to your environment over time and remediate unauthorized changes that may occur. For example, if a nova.conf file
or other configuration file falls out of compliance with your standard, your
configuration management tool can revert or replace the file and bring
your configuration back into a known state. Finally a configuration management tool can also be used to deploy updates; simplifying the security
patch process. These tools have a broad range of capabilities that are useful in this space. The key point for securing your cloud is to choose a tool
for configuration management and use it.
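As a minimal sketch of this idea, assume the contents of /etc/nova are tracked in a local Git repository (the paths and workflow are illustrative only):

# cd /etc/nova
# git status --short            # detect unauthorized local edits
 M nova.conf
# git checkout -- nova.conf     # revert the file to the last reviewed revision

In practice, a configuration management tool would converge the file back to the desired state automatically on its next run.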
There are many configuration management solutions; at the time of this
writing there are two in the marketplace that are robust in their support
of OpenStack environments: Chef and Puppet. A non-exhaustive listing of
tools in this space is provided below:
- Chef
- Puppet
- Salt Stack
- Ansible
Policy changes
Whenever a policy or configuration management is changed, it is good
practice to log the activity, and backup a copy of the new set. Often, such
policies and configurations are stored in a version controlled repository
such as Git.
Secure backup and recovery

Security considerations

- Ensure only authenticated users and backup clients have access to the backup server.
- Use data encryption options for storage and transmission of backups; a minimal sketch follows this list.
- Use a dedicated and hardened backup server. The logs for the backup server must be monitored daily and accessible by only a few individuals.
- Test data recovery options regularly. One of the things that can be restored from secured backups is the images. In case of a compromise, the best practice is to terminate running instances immediately and then relaunch the instances from the images in the secured backup repository.
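One possible approach, sketched with GnuPG (the recipient key, file names, and destination host are assumptions for illustration):

# tar -czf /tmp/config-backup.tar.gz /etc/nova /etc/keystone
# gpg --encrypt --recipient backup@example.com /tmp/config-backup.tar.gz
# scp /tmp/config-backup.tar.gz.gpg backup.example.com:/srv/backups/

Only the encrypted archive leaves the host, and only holders of the corresponding private key can restore it.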
References

- OpenStack Operations Guide on backup and recovery
- http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515
- OpenStack Security Primer, an entry in the music piracy blog by a former member of the original NASA project team that created nova
Security auditing tools

Security auditing tools can complement configuration management by verifying that a
number of security controls are satisfied for a given system configuration. These tools help to bridge the gap from security configuration guidance documentation (for example, the STIG and NSA Guides) to a specific system installation. For example, SCAP can compare a running system
to a pre-defined profile. SCAP outputs a report detailing which controls
in the profile were satisfied, which ones failed, and which ones were not
checked.
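For example, an OpenSCAP evaluation might look like the following sketch (the profile name and content file path are assumptions that vary by distribution):

# oscap xccdf eval --profile stig-rhel7-server \
    --results /tmp/scan-results.xml \
    --report /tmp/scan-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

The generated report lists each control as pass, fail, or not checked, matching the outputs described above.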
Configuration management and security auditing tools are powerful in combination: the auditing tools highlight deployment concerns, and the configuration management tools simplify the process of changing each system to address the audit concerns. Used together in this
fashion, these tools help to maintain a cloud that satisfies security requirements ranging from basic hardening to compliance validation.
Configuration management and security auditing tools will introduce another layer of complexity into the cloud. This complexity brings additional
security concerns with it. We view this as an acceptable risk trade-off, given their security benefits. Securing the operational use of these tools is beyond the scope of this guide.
Integrity life-cycle
We define integrity life cycle as a deliberate process that provides assurance that we are always running the expected software with the expected
configurations throughout the cloud. This process begins with secure bootstrapping and is maintained through configuration management and security monitoring. This chapter provides recommendations on how to approach the integrity life-cycle process.
Secure bootstrapping
Nodes in the cloud, including compute, storage, network, service, and hybrid nodes, should have an automated provisioning process. This ensures
that nodes are provisioned consistently and correctly. This also facilitates
security patching, upgrading, bug fixing, and other critical changes. Since
this process installs new software that runs at the highest privilege levels
in the cloud, it is important to verify that the correct software is installed.
This includes the earliest stages of the boot process.
There are a variety of technologies that enable verification of these early
boot stages. These typically require hardware support such as the trusted
platform module (TPM), Intel Trusted Execution Technology (TXT), dynamic root of trust measurement (DRTM), and Unified Extensible Firmware Interface (UEFI) secure boot. In this book, we will refer to all of these collectively as secure boot technologies. We recommend using secure boot,
while acknowledging that many of the pieces necessary to deploy this require advanced technical skills in order to customize the tools for each environment. Utilizing secure boot will require deeper integration and customization than many of the other recommendations in this guide. TPM technology, while common in business class laptops and desktops for several years, is only now becoming available in servers together with supporting BIOS. Proper planning is essential to a successful secure boot deployment.
A complete tutorial on secure boot deployment is beyond the scope of this
book. Instead, here we provide a framework for how to integrate secure
boot technologies with the typical node provisioning process. For additional details, cloud architects should refer to the related specifications and
software configuration manuals.
Node provisioning
Nodes should use Preboot eXecution Environment (PXE) for provisioning.
This significantly reduces the effort required for redeploying nodes. The typical process involves the node receiving various boot stages, that is, progressively more complex software to execute, from a server.
We recommend using a separate, isolated network within the management security domain for provisioning. This network will handle all PXE
traffic, along with the subsequent boot stage downloads depicted above.
Note that the node boot process begins with two insecure operations:
DHCP and TFTP. Then the boot process uses TLS to download the remaining information required to deploy the node. This may be an operating system installer, a basic install managed by Chef or Puppet, or even a complete file system image that is written directly to disk.
While utilizing TLS during the PXE boot process is somewhat more challenging, common PXE firmware projects, such as iPXE, provide this support. Typically this involves building the PXE firmware with knowledge of
the allowed TLS certificate chain(s) so that it can properly validate the server certificate. This raises the bar for an attacker by limiting the number of
insecure, plain text network operations.
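A sketch of such a build, embedding the deployment CA so that boot stage downloads over HTTPS are validated (the certificate path and embedded script name are assumptions):

$ git clone https://git.ipxe.org/ipxe.git
$ cd ipxe/src
$ make bin/undionly.kpxe TRUST=/path/to/deployment-ca.crt EMBED=boot.ipxe

The TRUST= build parameter compiles the given root certificate(s) into the firmware image, so only servers presenting a certificate chain to that CA are accepted during the remaining boot stage downloads.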
Verified boot
In general, there are two different strategies for verifying the boot process. Traditional secure boot will validate the code run at each step in the
process, and stop the boot if code is incorrect. Boot attestation will record
which code is run at each step, and provide this information to another
machine as proof that the boot process completed as expected. In both
cases, the first step is to measure each piece of code before it is run. In this
context, a measurement is effectively a SHA-1 hash of the code, taken before it is executed. The hash is stored in a platform configuration register
(PCR) in the TPM.
Note: SHA-1 is used here because this is what the TPM chips support.
Each TPM has at least 24 PCRs. The TCG Generic Server Specification, v1.0,
March 2005, defines the PCR assignments for boot-time integrity measurements. The table below shows a typical PCR configuration. The context indicates if the values are determined based on the node hardware
(firmware) or the software provisioned onto the node. Some values are influenced by firmware versions, disk sizes, and other low-level information.
Therefore, it is important to have good practices in place around configuration management to ensure that each system deployed is configured exactly as desired.
Register          What is measured                                                            Context
PCR-00            Core Root of Trust Measurement (CRTM), BIOS code, host platform extensions  Hardware
PCR-01            Host platform configuration                                                 Hardware
PCR-02            Option ROM code                                                             Hardware
PCR-03            Option ROM configuration and data                                           Hardware
PCR-04            Initial Program Loader (IPL) code, for example, master boot record          Software
PCR-05            IPL code configuration and data                                             Software
PCR-06            State transition and wake events                                            Software
PCR-07            Host platform manufacturer control                                          Software
PCR-08            Platform specific, often kernel, kernel extensions, and drivers             Software
PCR-09            Platform specific, often initramfs                                          Software
PCR-10 to PCR-23  Platform specific                                                           Software
At the time of this writing, very few clouds are using secure boot technologies in a production environment. As a result, these technologies are still
somewhat immature. We recommend planning carefully in terms of hardware selection. For example, ensure that you have a TPM and Intel TXT support. Then verify how the node hardware vendor populates the PCR values, for example, which values will be available for validation. Typically the PCR values listed under the software context in the table above are
the ones that a cloud architect has direct control over. But even these may
change as the software in the cloud is upgraded. Configuration management should be linked into the PCR policy engine to ensure that the validation is always up to date.
Each manufacturer must provide the BIOS and firmware code for their
servers. Different servers, hypervisors, and operating systems will choose to
populate different PCRs. In most real world deployments, it will be impossible to validate every PCR against a known good quantity ("golden measurement"). Experience has shown that, even within a single vendor's product line, the measurement process for a given PCR may not be consistent.
We recommend establishing a baseline for each server and monitoring the
PCR values for unexpected changes. Third-party software may be available
to assist in the TPM provisioning and monitoring process, depending upon
your chosen hypervisor solution.
The initial program loader (IPL) code will most likely be the PXE firmware,
assuming the node deployment strategy outlined above. Therefore, the
secure boot or boot attestation process can measure all of the early stage
boot code, such as BIOS, firmware, the PXE firmware, and the kernel image. Ensuring that each node has the correct versions of these pieces installed provides a solid foundation on which to build the rest of the node
software stack.
Depending on the strategy selected, in the event of a failure the node will
either fail to boot or it can report the failure back to another entity in the
cloud. For secure boot, the node will fail to boot and a provisioning service within the management security domain must recognize this and log
the event. For boot attestation, the node will already be running when the
failure is detected. In this case the node should be immediately quarantined by disabling its network access. Then the event should be analyzed
for the root cause. In either case, policy should dictate how to proceed after a failure. A cloud may automatically attempt to re-provision a node a
certain number of times. Or it may immediately notify a cloud administrator to investigate the problem. The right policy here will be deployment
and failure mode specific.
Node hardening
At this point we know that the node has booted with the correct kernel
and underlying components. There are many paths for hardening a given operating system deployment. The specifics on these steps are outside
of the scope of this book. We recommend following the guidance from a
hardening guide specific to your operating system. For example, the security technical implementation guides (STIG) and the NSA guides are useful
starting places.
The nature of the nodes makes additional hardening possible. We recommend the following additional steps for production nodes:

- Use a read-only file system where possible. Ensure that writeable file systems do not permit execution. This can be handled through the mount options provided in /etc/fstab, as shown in the sketch after this list.
- Use a mandatory access control policy to contain the instances, the node services, and any other critical processes and data on the node. See the discussions on sVirt / SELinux and AppArmor below.
- Remove any unnecessary software packages. This should result in a very stripped down installation because a compute node has a relatively small number of dependencies.
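A minimal /etc/fstab sketch of such mount options (device names and mount points are assumptions):

# Writeable file systems carry nodev,nosuid,noexec; the root file system is mounted read-only
/dev/sda1  /              ext4   ro                            0 1
/dev/sda3  /var/lib/nova  ext4   defaults,nodev,nosuid,noexec  0 2
tmpfs      /tmp           tmpfs  nodev,nosuid,noexec           0 0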
Finally, the node kernel should have a mechanism to validate that the rest
of the node starts in a known good state. This provides the necessary link
from the boot validation process to validating the entire system. The steps
for doing this will be deployment specific. As an example, a kernel module could verify a hash over the blocks comprising the file system before
mounting it using dm-verity.
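A minimal dm-verity sketch using veritysetup (device names are assumptions; the root hash is recorded at provisioning time and supplied to the node through a trusted channel):

# veritysetup format /dev/sda2 /dev/sda3        # build the hash tree, note the printed root hash
# veritysetup create vroot /dev/sda2 /dev/sda3 <root_hash>
# mount -o ro /dev/mapper/vroot /mnt            # blocks are verified against the hash tree as read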
Runtime verification
Once the node is running, we need to ensure that it remains in a good
state over time. Broadly speaking, this includes both configuration management and security monitoring. The goals for each of these areas are
different. By checking both, we achieve higher assurance that the system
is operating as desired. We discuss configuration management in the management section, and security monitoring below.
The following open source projects implement a variety of host-based intrusion detection and file monitoring features:
- OSSEC
- Samhain
- Tripwire
- AIDE
Network intrusion detection tools complement the host-based tools. OpenStack doesn't have a specific network IDS built-in, but OpenStack Networking provides a plug-in mechanism to enable different technologies through
the Networking API. This plug-in architecture will allow tenants to develop
API extensions to insert and configure their own advanced networking services like a firewall, an intrusion detection system, or a VPN between the
VMs.
Similar to host-based tools, the selection and configuration of a network-based intrusion detection tool is deployment specific. Snort is the
leading open source networking intrusion detection tool, and a good starting place to learn more.
There are a few important security considerations for network and hostbased intrusion detection systems.
- It is important to consider the placement of the Network IDS on the cloud (for example, adding it to the network boundary and/or around sensitive networks). The placement depends on your network environment, but make sure to monitor the impact the IDS may have on your services depending on where you choose to add it. Encrypted traffic, such as TLS, cannot generally be inspected for content by a Network IDS. However, the Network IDS may still provide some benefit in identifying anomalous unencrypted traffic on the network.
- In some deployments it may be required to add host-based IDS on sensitive components on security domain bridges. A host-based IDS may detect anomalous activity by compromised or unauthorized processes on the component. The IDS should transmit alert and log information on the Management network.
Server hardening
Servers in the cloud, including undercloud and overcloud infrastructure,
should implement hardening best practices. As OS and server hardening is
common, applicable best practices including but not limited to logging, user account restrictions, and regular updates will not be covered here, but
should be applied to all infrastructure.
Management interfaces
It is necessary for administrators to perform command and control over
the cloud for various operational functions. It is important these command
and control facilities are understood and secured.
OpenStack provides several management interfaces for operators and tenants:
- OpenStack dashboard (horizon)
- OpenStack API
- Secure shell (SSH)
- OpenStack management utilities such as nova-manage and glance-manage
- Out-of-band management interfaces, such as IPMI
Dashboard
The OpenStack dashboard (horizon) provides administrators and tenants with a web-based graphical interface to provision and access cloud-based resources.
Capabilities
- As a cloud administrator, the dashboard provides an overall view of the size and state of your cloud. You can create users and tenants/projects, assign users to tenants/projects, and set limits on the resources available for them.
- The dashboard provides tenant-users a self-service portal to provision their own resources within the limits set by administrators.
- The dashboard provides GUI support for routers and load-balancers. For example, the dashboard now implements all of the main Networking features.
- It is an extensible Django web application that allows easy plug-in of third-party products and services, such as billing, monitoring, and additional management tools.
- The dashboard can also be branded for service providers and other commercial vendors.
Security considerations
- The dashboard requires cookies and JavaScript to be enabled in the web browser.
- The web server that hosts the dashboard should be configured for TLS to ensure data is encrypted; a configuration sketch follows this list.
- Both the horizon web service and the OpenStack API it uses to communicate with the back end are susceptible to web attack vectors such as denial of service and must be monitored.
- It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user's hard disk to the OpenStack Image service through the dashboard. For multi-gigabyte images it is still strongly recommended that the upload be done using the glance CLI.
- Create and manage security groups through the dashboard. Security groups allow L3-L4 packet filtering for security policies to protect virtual machines.
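A minimal Apache sketch for serving the dashboard over HTTPS (the ServerName, certificate paths, and WSGI location are assumptions that vary by distribution and horizon packaging):

<VirtualHost *:80>
    ServerName dashboard.example.com
    # Redirect all plain HTTP requests to the TLS endpoint
    Redirect permanent / https://dashboard.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName dashboard.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/dashboard.crt
    SSLCertificateKeyFile /etc/ssl/private/dashboard.key
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
</VirtualHost>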
References
Icehouse Release Notes
OpenStack API
The OpenStack API is a RESTful web service endpoint to access, provision
and automate cloud-based resources. Operators and users typically access
the API through command-line utilities (for example, nova or glance), language-specific libraries, or third-party tools.
Capabilities
- To the cloud administrator, the API provides an overall view of the size and state of the cloud deployment and allows the creation of users, tenants/projects, assigning users to tenants/projects, and specifying resource quotas on a per tenant/project basis.
- The API provides a tenant interface for provisioning, managing, and accessing their resources.
Security considerations
- The API service should be configured for TLS to ensure data is encrypted.
- As a web service, the OpenStack API is susceptible to familiar web site attack vectors such as denial of service attacks.
Secure shell (SSH)
All SSH daemons have private host keys and, upon connection, offer a host
key fingerprint. This host key fingerprint is the hash of an unsigned public key. It is important these host key fingerprints are known in advance
of making SSH connections to those hosts. Verification of host key fingerprints is instrumental in detecting man-in-the-middle attacks.
Typically, when an SSH daemon is installed, host keys will be generated. It is necessary that the hosts have sufficient entropy during host key generation. Insufficient entropy during host key generation can make it possible to eavesdrop on SSH sessions.
Once the SSH host key is generated, the host key fingerprint should be
stored in a secure and queryable location. One particularly convenient solution is DNS using SSHFP resource records as defined in RFC-4255. For this
to be secure, it is necessary that DNSSEC be deployed.
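For example, SSHFP records can be generated from a host's public keys and published in its DNS zone (the host name is an assumption):

$ ssh-keygen -r host.example.com

Each output line is a DNS resource record of the form "host.example.com IN SSHFP <algorithm> <fp-type> <fingerprint>". Clients can then verify host keys against DNS:

$ ssh -o VerifyHostKeyDNS=yes host.example.com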
Management utilities
The OpenStack Management Utilities are open-source Python command-line clients that make API calls. There is a client for each OpenStack
service (for example, nova, glance). In addition to the standard CLI
client, most of the services have a management command-line utility which
makes direct calls to the database. These dedicated management utilities
are slowly being deprecated.
Security considerations
- The dedicated management utilities (*-manage) in some cases use the direct database connection.
- Ensure that the .rc file which has your credential information is secured; a minimal sketch follows this list.
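Assuming the RC file was downloaded from the dashboard (the file name is an assumption):

$ chmod 600 ~/admin-openrc.sh     # readable and writeable only by its owner
$ source ~/admin-openrc.sh        # load the credentials into the environment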
References
OpenStack End User Guide section command-line clients overview
OpenStack End User Guide section Download and source the OpenStack
RC file
and reboot servers whether the operating system is running or the system
has crashed.
Security considerations
Use strong passwords and safeguard them, or use client-side TLS authentication.
Ensure that the network interfaces are on their own private (management or separate) network. Segregate management domains with firewalls or other network gear.
If you use a web interface to interact with the BMC/IPMI, always use the TLS interface, such as HTTPS on port 443. This TLS interface should NOT use self-signed certificates, as is often the default, but should have trusted certificates using correctly defined fully qualified domain names (FQDNs).
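As a quick check (the host name and CA bundle path are illustrative assumptions), you can confirm that a BMC presents a certificate that chains to your trusted CA:

# Inspect the certificate offered by the BMC's HTTPS interface:
$ openssl s_client -connect bmc01.mgmt.example.com:443 \
    -CAfile /etc/ssl/certs/internal-ca.pem < /dev/null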
Monitor the traffic on the management network. The anomalies might
be easier to track than on the busier compute nodes.
Out-of-band management interfaces also often include graphical machine console access. It is often possible, although not necessarily the default, for these interfaces to be encrypted. Consult your system software documentation for encrypting these interfaces.
References
Hacking servers that are turned off
Case studies
Previously we discussed typical OpenStack management interfaces and associated backplane issues. We will now approach these issues by returning
to the Alice and Bob case studies (See the section called Introduction to
case studies [21] ) where Alice is deploying a government cloud and Bob
is deploying a public cloud each with different security requirements. In
this section, we will look into how both Alice and Bob will address:
Cloud administration
Self service
Data replication and recovery
4. Secure communication
Introduction to TLS and SSL ................................................................. 45
TLS proxies and HTTP services .............................................................. 48
Secure reference architectures .............................................................. 55
Case studies ......................................................................................... 59
that SSL is disabled in all cases, unless compatibility with obsolete browsers
or libraries is required.
Public Key Infrastructure (PKI) is the framework for securing communication in a network. It consists of a set of systems and processes to ensure
traffic can be sent securely while validating the identities of the parties.
The core components of PKI are:
End entity
Repository
Relying party
Certification authorities
Many organizations have an established Public Key Infrastructure with their own certification authority (CA), certificate policies, and management, which they should use to issue certificates for internal OpenStack users or services. Organizations in which the public security domain is Internet facing will additionally need certificates signed by a widely recognized public CA. For cryptographic communications over the management network, it is recommended that one not use a public CA. Instead, we expect and recommend that most deployments deploy their own internal CA.
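A minimal sketch of bootstrapping such an internal CA with the openssl command-line tool follows; key sizes, lifetimes, file names, and the subject name are illustrative assumptions, and a production CA warrants an offline, encrypted key or a hardware security module:

# Create a root key and a self-signed CA certificate (10 years):
$ openssl genrsa -aes256 -out internal-ca.key 4096
$ openssl req -x509 -new -key internal-ca.key -days 3650 \
    -subj "/CN=Example Internal OpenStack CA" -out internal-ca.crt

# Sign a certificate request for an internal service (2 years):
$ openssl x509 -req -in keystone.csr -CA internal-ca.crt \
    -CAkey internal-ca.key -CAcreateserial -days 730 -out keystone.crt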
It is recommended that the OpenStack cloud architect consider using separate PKI deployments for internal systems and customer facing services.
This allows the cloud deployer to maintain control of their PKI infrastructure and among other things makes requesting, signing and deploying certificates for internal systems easier. Advanced configurations may use separate PKI deployments for different security domains. This allows deployers
to maintain cryptographic separation of environments, ensuring that certificates issued to one are not recognized by another.
Certificates used to support TLS on internet facing cloud endpoints (or customer interfaces where the customer is not expected to have installed anything other than standard operating system provided certificate bundles)
should be provisioned using Certificate Authorities that are installed in the
operating system certificate bundle. Typical well known vendors include
Verisign and Thawte but many others exist.
There are many management, policy, and technical challenges around creating and signing certificates. This is an area where cloud architects or operators may wish to seek the advice of industry leaders and vendors in addition to the guidance recommended here.
TLS libraries
Various components, services, and applications within the OpenStack
ecosystem or dependencies of OpenStack are implemented and can be
configured to use TLS libraries. The TLS and HTTP services within OpenStack are typically implemented using OpenSSL which has a module that
has been validated for FIPS 140-2. However, keep in mind that each application or service can still introduce weaknesses in how they use the
OpenSSL libraries.
Summary
Given the complexity of the OpenStack components and the number of
deployment possibilities, you must take care to ensure that each component gets the appropriate configuration of TLS certificates, keys, and CAs.
Subsequent sections discuss the following services:
Compute API endpoints
Identity API endpoints
Networking API endpoints
Storage API endpoints
Messaging server
Database server
Dashboard
network. It is highly recommended that all of these requests, both internal and external, operate over TLS. To achieve this goal, API services must
be deployed behind a TLS proxy that can establish and terminate TLS sessions. The following table offers a non-exhaustive list of open source software that can be used for this purpose:
Pound
Stud
nginx
Apache httpd
In cases where software termination offers insufficient performance, hardware accelerators may be worth exploring as an alternative option. It is important to be mindful of the size of requests that will be processed by any
chosen TLS proxy.
Examples
Below we provide sample recommended configuration settings for enabling TLS in some of the more popular web servers/TLS terminators.
Before we delve into the configurations, we briefly discuss the ciphers' configuration element and its format. A more exhaustive treatment on available ciphers and the OpenSSL cipher list format can be found at: ciphers.
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
or
ciphers = "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!
LOW:!MEDIUM"
Cipher string options are separated by ":", while "!" provides negation of
the immediately following element. Element order indicates preference
unless overridden by qualifiers such as HIGH. Let us take a closer look at
the elements in the above sample strings.
kEECDH:kEDH     Ephemeral Elliptic Curve Diffie-Hellman (abbreviated
                EECDH or ECDHE) and ephemeral Diffie-Hellman
                (abbreviated EDH or DHE) key exchange. Both provide
                forward secrecy.
HIGH            Selects highest possible security cipher in the
                negotiation phase. These typically have keys of length
                128 bits or longer.
!RC4            Disallows the RC4 stream cipher, which is considered
                weak.
!MD5            Disallows cipher suites that rely on the MD5 hash,
                which is vulnerable to collision attacks.
!aNULL:!eNULL   Disallows cipher suites that offer no authentication
                (anonymous key exchange) or no encryption.
!EXP            Disallows export encryption algorithms, which by design
                tend to be weak, typically using 40 and 56 bit keys.
                US export restrictions on cryptography systems have
                been lifted and these algorithms no longer need to be
                supported.
!LOW:!MEDIUM    Disallows cipher suites classified by OpenSSL as low or
                medium strength, leaving only the high-strength suites
                selected by HIGH.
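To see exactly which cipher suites a given string selects against your installed OpenSSL, the string can be expanded locally; this is a useful sanity check before deploying it:

# List the suites matched by the sample cipher string:
$ openssl ciphers -v 'HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM'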
Protocols
Pound
This Pound example enables AES-NI acceleration, which helps to improve
performance on systems with processors that support this feature.
## see pound(8) for details
daemon      1
######################################################################
## global options:
User        "swift"
Group       "swift"
#RootJail   "/chroot/pound"
## Logging: (goes to syslog by default)
##  0   no logging
##  1   normal
##  2   extended
##  3   Apache-style (common log format)
LogLevel    0
## turn on dynamic scaling (off by default)
# Dyn Scale 1
## check backend every X secs:
Alive       30
## client timeout
#Client     10
## allow 10 second proxy connect time
ConnTO      10
## use hardware-acceleration card supported by openssl(1):
SSLEngine   "aesni"
# poundctl control socket
Control "/var/run/pound/poundctl.socket"
######################################################################
## listen, redirect and ... to:
## redirect all swift requests on port 443 to local swift proxy
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/cert.pem"
    ## Certs to accept from clients
    ## CAlist "CA_file"
    ## Certs to use for client verification
    ## VerifyList "Verify_file"
    ## Request client cert - don't verify
    ## Ciphers "AES256-SHA"
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    NoHTTPS11 0
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    xHTTP 1
    Service
        BackEnd
            Address 127.0.0.1
            Port    80
        End
    End
End
Stud
The ciphers line can be tweaked based on your needs; however, this is a reasonable starting place.
# SSL x509 certificate file.
pem-file = "
# SSL protocol.
tls = on
ssl = off
# List of allowed SSL ciphers.
# OpenSSL's high-strength ciphers which require authentication
# NOTE: forbids clear text, use of RC4 or MD5, or LOW and MEDIUM strength ciphers
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
# Enforce server cipher list order
prefer-server-ciphers = on
# Number of worker processes
workers = 4
# Listen backlog size
backlog = 1000
# TCP socket keepalive interval in seconds
keepalive = 3600
# Chroot directory
chroot = ""
# Set uid after binding a socket
user = "www-data"
# Set gid after binding a socket
group = "www-data"
# Quiet execution, report only error messages
quiet = off
# Use syslog for logging
syslog = on
# Syslog facility to use
syslog-facility = "daemon"
# Run as daemon
daemon = off
# Report client address using SENDPROXY protocol for haproxy
# Disabling this until we upgrade to HAProxy 1.5
write-proxy = off
nginx
This nginx example requires TLS v1.1 or v1.2 for maximum security. The ssl_ciphers line can be tweaked based on your needs; however, this is a reasonable starting place.
server {
    listen <port> ssl;
    ssl_certificate <path to certificate>;
    ssl_certificate_key <path to key>;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM;
    ssl_session_tickets off;
    server_name _;
    keepalive_timeout 5;
    location / {
    }
}
Apache
<VirtualHost <ip address>:80>
    ServerName <site FQDN>
    RedirectPermanent / https://<site FQDN>/
</VirtualHost>
<VirtualHost <ip address>:443>
    ServerName <site FQDN>
    SSLEngine On
    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
    SSLCertificateFile    /path/<site FQDN>.crt
    SSLCACertificateFile  /path/<site FQDN>.crt
    SSLCertificateKeyFile /path/<site FQDN>.key
    WSGIScriptAlias / <WSGI script location>
    WSGIDaemonProcess horizon user=<user> group=<group> processes=3 threads=10
    Alias /static <static files location>
    <Directory <WSGI dir>>
        # For http server 2.2 and earlier:
        Order allow,deny
        Allow from all
        # Or, in Apache http server 2.4 and later:
        # Require all granted
    </Directory>
</VirtualHost>
Compute API SSL endpoint in Apache, which you must pair with a short
WSGI script.
<VirtualHost <ip address>:8447>
    ServerName <site FQDN>
    SSLEngine On
    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
    SSLCertificateFile    /path/<site FQDN>.crt
    SSLCACertificateFile  /path/<site FQDN>.crt
    SSLCertificateKeyFile /path/<site FQDN>.key
    SSLSessionTickets Off
    WSGIScriptAlias / <WSGI script location>
    WSGIDaemonProcess osapi user=<user> group=<group> processes=3 threads=10
    <Directory <WSGI dir>>
        # For http server 2.2 and earlier:
        Order allow,deny
        Allow from all
        # Or, in Apache http server 2.4 and later:
        # Require all granted
    </Directory>
</VirtualHost>
Start with a short timeout of 1 day during testing, and raise it to one year
after testing has shown that you have not introduced problems for users.
Note that once this header is set to a large timeout, it is (by design) very
difficult to disable.
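For reference, the header can be set from the Apache virtual hosts above using mod_headers; the max-age values here simply mirror the testing and production lifetimes suggested in the text:

# During testing, a one-day HSTS lifetime:
Header add Strict-Transport-Security "max-age=86400"
# After testing has shown no problems for users, raise to one year:
# Header add Strict-Transport-Security "max-age=31536000"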
tickets options to help mitigate some of these concerns. Real-world deployments may desire to enable this feature for improved performance. This
can be done securely, but would require special consideration around key
management. Such configurations are beyond the scope of this guide. We
suggest reading How to botch TLS forward secrecy by ImperialViolet as a
starting place for understanding the problem space.
vulnerability allows them to break out of the hypervisor, they will have access to your management network. Using SSL/TLS on the management
network can minimize the damage that an attacker can cause.
Some of the concerns with the use of SSL/TLS proxies as pictured above:
Native SSL/TLS in OpenStack services does not perform/scale as well as
SSL proxies (particularly for Python implementations like Eventlet).
Native SSL/TLS in OpenStack services is not as well scrutinized or audited as more proven solutions.
Native SSL/TLS configuration is difficult (not well documented, tested, or
consistent across services).
Privilege separation (OpenStack service processes should not have direct
access to private keys used for SSL/TLS).
Traffic inspection needs for load balancing.
All of the above are valid concerns, but none of them prevent SSL/TLS from being used on the management network. Let us consider the next deployment model.
This is very similar to the "SSL/TLS in front model" but the SSL/TLS proxy
is on the same physical system as the API endpoint. The API endpoint
would be configured to only listen on the local network interface. All remote communication with the API endpoint would go through the SSL/
TLS proxy. With this deployment model, we address a number of the bullet
points in "SSL/TLS in front model" . A proven SSL implementation that performs well would be used. The same SSL proxy software would be used for
all services, so SSL configuration for the API endpoints would be consistent.
The OpenStack service processes would not have direct access to the private keys used for SSL/TLS, as you would run the SSL proxies as a different
user and restrict access using permissions (and additionally mandatory access controls using something like SELinux). We would ideally have the API
endpoints listen on a Unix socket such that we could restrict access to it using permissions and mandatory access controls as well. Unfortunately, this
does not seem to work currently in Eventlet from our testing. It is a good
future development goal.
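As a sketch of this co-located proxy model (the option names are standard nova settings, but the values are illustrative), the Compute API can be bound to the loopback interface so that only the local TLS proxy can reach it:

# /etc/nova/nova.conf
[DEFAULT]
# Listen only on loopback; remote clients connect through the
# co-located TLS proxy bound to the public interface.
osapi_compute_listen = 127.0.0.1
osapi_compute_listen_port = 8774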
to just pass the HTTPS traffic straight through to the API endpoint systems
in this case:
As with most things, there are trade-offs. The main trade-off is going to be
between security and performance. Encryption has a cost, but so does being hacked. The security and performance requirements are going to be
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case study where Alice is deploying a government
cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address deployment of PKI certification authorities (CA) and certificate management.
5. API endpoints
API endpoint configuration recommendations ...................................... 61
Case studies ......................................................................................... 63
Interaction with an OpenStack cloud begins with querying an API endpoint. While there are different challenges for public and private endpoints, these are high-value assets that can pose a significant risk if compromised.
This chapter recommends security enhancements for both public and private-facing API endpoints.
Namespaces
Many operating systems now provide compartmentalization support. Linux supports namespaces to assign processes into independent domains.
Other parts of this guide cover system compartmentalization in more detail.
Network policy
Because API endpoints typically bridge multiple security domains, you must
pay particular attention to the compartmentalization of the API processes.
See the section called Bridging security domains [15] for additional information in this area.
With careful modeling, you can use network ACLs and IDS technologies to
enforce explicit point to point communication between network services.
As a critical cross domain service, this type of explicit enforcement works
well for OpenStack's message queue service.
To enforce policies, you can configure services, host-based firewalls (such
as iptables), local policy (SELinux or AppArmor), and optionally global network policy.
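As an illustration (the port and subnet are assumptions for a RabbitMQ deployment on a dedicated management network), host-based firewall rules can restrict the message queue to explicit peers:

# Accept AMQP connections only from the management network,
# and drop everything else destined for the queue port:
# iptables -A INPUT -p tcp --dport 5672 -s 192.168.10.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 5672 -j DROP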
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies where Alice is deploying a private
government cloud and Bob is deploying a public cloud each with different
security requirements. Here we discuss how Alice and Bob would address
endpoint configuration to secure their private and public clouds. Alice's
cloud is not publicly accessible, but she is still concerned about securing the
endpoints against improper use. Bob's cloud, being public, must take measures to reduce the risk of attacks by external adversaries.
6. Identity
Authentication ..................................................................................... 65
Authentication methods ...................................................................... 66
Authorization ...................................................................................... 68
Policies ................................................................................................ 70
Tokens ................................................................................................ 72
Future ................................................................................................. 73
Federated Identity ............................................................................... 74
Checklist .............................................................................................. 85
Identity service (keystone) provides identity, token, catalog, and policy services for use specifically by services in the OpenStack family. Identity service
is organized as a group of internal services exposed on one or many endpoints. Many of these services are used in a combined fashion by the frontend, for example an authenticate call will validate user/project credentials
with the identity service and, upon success, create and return a token with
the token service. Further information can be found by reading the Keystone Developer Documentation.
Authentication
The OpenStack Identity service (keystone) supports multiple methods of authentication, including user name and password, LDAP, and external authentication methods. Upon successful authentication, the Identity service provides the user with an authorization token used for subsequent service requests.
Transport Layer Security (TLS) provides authentication between services
and persons using X.509 certificates. Although the default mode for TLS is
server-side only authentication, certificates may also be used for client authentication.
tempts. The account then may only be unlocked with further side-channel
intervention.
If prevention is not an option, detection can be used to mitigate damage.
Detection involves frequent review of access control logs to identify unauthorized attempts to access accounts. Possible remediation would include
reviewing the strength of the user password, or blocking the network
source of the attack through firewall rules. Firewall rules on the keystone
server that restrict the number of connections could be used to reduce the
attack effectiveness, and thus dissuade the attacker.
In addition, it is useful to examine account activity for unusual login times and suspicious actions, and take corrective actions such as disabling the account. Oftentimes this approach is taken by credit card providers for fraud detection and alerting.
Multi-factor authentication
Employ multi-factor authentication for network access to privileged user
accounts. The Identity service supports external authentication services
through the Apache web server that can provide this functionality. Servers
may also enforce client-side authentication using certificates.
This recommendation provides insulation from brute force, social engineering, and both spear and mass phishing attacks that may compromise administrator passwords.
Authentication methods
Internally implemented authentication methods
The Identity service can store user credentials in an SQL Database, or may
use an LDAP-compliant directory server. The Identity database may be separate from databases used by other OpenStack services to reduce the risk
of a compromise of the stored credentials.
When you use a user name and password to authenticate, Identity does
not enforce policies on password strength, expiration, or failed authentication attempts as recommended by NIST Special Publication 800-118 (draft).
Organizations that desire to enforce stronger password policies should
consider using Identity extensions or external authentication services.
Note
There is an OpenStack Security Note (OSSN) regarding
keystone.conf permissions.
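For example (ownership and mode follow the checklist later in this chapter), the file can be locked down so that only the keystone user and group may read it:

# chown keystone:keystone /etc/keystone/keystone.conf
# chmod 640 /etc/keystone/keystone.conf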
There is an OpenStack Security Note (OSSN) regarding potential DoS attacks.
Authorization
The Identity service supports the notion of groups and roles. Users belong to groups, and each group has a list of roles. OpenStack services reference the roles of the user attempting to access the service. The OpenStack policy enforcer middleware considers the policy rule associated with each resource together with the user's group membership and roles to determine whether access to the requested resource is allowed.
The policy enforcement middleware enables fine-grained access control to
OpenStack resources. Only admin users can provision new users and have
access to various management functionality. The cloud users would only be able to spin up instances, attach volumes, and perform other operational tasks.
Service authorization
Cloud administrators must define a user with the role of admin for each service, as described in the OpenStack Cloud Administrator Guide.
Note
We recommend that you use client authentication with TLS for
the authentication of services to the Identity service.
The cloud administrator should protect sensitive configuration files, such as /etc/keystone/keystone.conf and X.509 certificates, from unauthorized modification. This can be achieved with mandatory access control frameworks such as SELinux.
Client authentication with TLS requires certificates be issued to services. These certificates can be signed by an external or internal certificate authority. OpenStack services check the validity of certificate signatures against trusted CAs by default, and connections fail if the signature is not valid or the CA is not trusted. Cloud deployers may use self-signed certificates; in this case, the validity check must be disabled or the certificate should be marked as trusted. To disable validation of self-signed certificates, set insecure=True in the [filter:authtoken] section of the /etc/nova/api.paste.ini file. This setting also disables certificate validation for other components.
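Rather than disabling validation, a deployment with an internal CA can usually point the token middleware at the CA bundle instead; the path below is an illustrative assumption:

[filter:authtoken]
# Validate service certificates against the internal CA bundle
# instead of disabling verification:
cafile = /etc/ssl/certs/internal-ca.pem
insecure = false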
Administrative users
We recommend that admin users authenticate using Identity service and
an external authentication service that supports 2-factor authentication,
such as a certificate. This reduces the risk from passwords that may be
compromised. This recommendation is in compliance with NIST 800-53
IA-2(1) guidance in the use of multi-factor authentication for network access to privileged accounts.
End users
The Identity service can directly provide end-user authentication, or can
be configured to use external authentication methods to conform to an
organization's security policies and requirements.
Policies
Each OpenStack service has a policy file in JSON format, called
policy.json. The policy file specifies rules, and the rule that governs
each resource. A resource could be API access, the ability to attach to a volume, or to fire up instances.
The policies can be updated by the cloud administrator to further control
access to the various resources. The middleware could also be further customized. Note that your users must be assigned to groups/roles that you
refer to in your policies.
Below is a snippet of the Block Storage service policy.json file.
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "admin_api": "is_admin:True",
    "volume:create": "",
    "volume:get_all": "",
    "volume:get_volume_metadata": "",
    "volume:get_volume_admin_metadata": "rule:admin_api",
    "volume:delete_volume_admin_metadata": "rule:admin_api",
    "volume:update_volume_admin_metadata": "rule:admin_api",
    "volume:get_snapshot": "",
    "volume:get_all_snapshots": "",
    "volume:extend": "",
    "volume:update_readonly_flag": "",
    "volume:retype": "",
    "volume_extension:types_manage": "rule:admin_api",
    "volume_extension:types_extra_specs": "rule:admin_api",
    "volume_extension:volume_type_encryption": "rule:admin_api",
    "volume_extension:volume_encryption_metadata": "rule:admin_or_owner",
    "volume_extension:extended_snapshot_attributes": "",
    "volume_extension:volume_image_metadata": "",
    "volume_extension:quotas:show": "",
    "volume_extension:quotas:update": "rule:admin_api",
    "volume_extension:quota_classes": "",
    "volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:backup_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_detach": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume_completion": "rule:admin_api",
    "volume_extension:volume_host_attribute": "rule:admin_api",
    "volume_extension:volume_tenant_attribute": "rule:admin_or_owner",
    "volume_extension:volume_mig_status_attribute": "rule:admin_api",
    "volume_extension:hosts": "rule:admin_api",
    "volume_extension:services": "rule:admin_api",
    "volume_extension:volume_manage": "rule:admin_api",
    "volume_extension:volume_unmanage": "rule:admin_api",
    "volume:services": "rule:admin_api",
    "volume:create_transfer": "",
    "volume:accept_transfer": "",
    "volume:delete_transfer": "",
    "volume:get_all_transfers": "",
    "volume_extension:replication:promote": "rule:admin_api",
    "volume_extension:replication:reenable": "rule:admin_api",
    "backup:create": "",
    "backup:delete": "",
    "backup:get": "",
    "backup:get_all": "",
    "backup:restore": "",
    "backup:backup-import": "rule:admin_api",
    "backup:backup-export": "rule:admin_api",
    "snapshot_extension:snapshot_actions:update_snapshot_status": "",
    "consistencygroup:create": "group:nobody",
    "consistencygroup:delete": "group:nobody",
    "consistencygroup:get": "group:nobody",
    "consistencygroup:get_all": "group:nobody",
    "consistencygroup:create_cgsnapshot": "",
    "consistencygroup:delete_cgsnapshot": "",
    "consistencygroup:get_cgsnapshot": "",
    "consistencygroup:get_all_cgsnapshots": "",
    "scheduler_extension:scheduler_stats:get_pools": "rule:admin_api"
}
Note the default rule specifies that the user must be either an admin or
the owner of the volume. It essentially says only the owner of a volume or
the admin may create/delete/update volumes. Certain other operations
such as managing volume types are accessible only to admin users.
Tokens
Once a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable life span; however, since the release of OpenStack Icehouse the default expiry has been reduced to one hour. The expiry should be set to the lowest value that still allows enough time for internal services to complete tasks. If the token expires before tasks complete, the cloud may become unresponsive or stop providing services. An example of time consumed during use is the time needed by the Compute service to transfer a disk image onto the hypervisor for local caching.
The following example shows a PKI token. Note that token id values are
typically 3500 bytes. In this example, the value has been truncated.
{
    "token": {
        "expires": "2013-06-26T16:52:50Z",
        "id": "MIIKXAY...",
        "issued_at": "2013-06-25T16:52:50.622502",
        "tenant": {
            "description": null,
            "enabled": true,
            "id": "912426c8f4c04fb0a07d2547b0704185",
            "name": "demo"
        }
    }
}
Future
Domains are high-level containers for projects, users and groups. As such,
they can be used to centrally manage all keystone-based identity components. With the introduction of account domains, server, storage and other resources can now be logically grouped into multiple projects (previously called tenants) which can themselves be grouped under a master account-like container. In addition, multiple users can be managed within an
account domain and assigned roles that vary for each project.
The Identity v3 API supports multiple domains. Users of different domains may be represented in different authentication back ends and may even have different attributes that must be mapped to a single set of roles and privileges, which are used in the policy definitions to access the various service resources.
Where a rule may specify access to only admin users and users belonging
to the tenant, the mapping may be trivial. In other scenarios the cloud administrator may need to approve the mapping routines per tenant.
Federated Identity
Federated Identity is a mechanism to establish trusts between Identity Providers and Service Providers (SP), in this case, between Identity
Providers and the services provided by an OpenStack Cloud.
Federated Identity provides a way to securely use existing credentials to access cloud resources such as servers, volumes, and databases, across multiple endpoints provided in multiple authorized clouds using a single set of
credentials, without having to provision additional identities or log in multiple times. The credential is maintained by the user's Identity Provider.
Some important definitions:
Service Provider (SP)
Identity Provider (IdP)   A directory service, such as LDAP, RADIUS, or
                          Active Directory, which allows users to log in
                          with a user name and password, is a typical
                          source of authentication tokens (for example,
                          passwords) at an identity provider.
SAML assertion
Mapping
Protocol
Unscoped token
Allows a user to use all OpenStack services apart from the Identity service.
Enabling Federation
To enable Federation, perform the following steps:
1. Enable TLS support. Install mod_nss according to your distribution, then apply the following patch and restart HTTPD:
--- /etc/httpd/conf.d/nss.conf.orig    2012-03-29 12:59:06.319470425 -0400
+++ /etc/httpd/conf.d/nss.conf         2012-03-29 12:19:38.862721465 -0400
@@ -17,7 +17,7 @@
 # Note: Configurations that use IPv6 but not IPv4-mapped addresses need two
 #       Listen directives: "Listen [::]:8443" and "Listen 0.0.0.0:443"
 #
-Listen 8443
+Listen 443
 ##
 ## SSL Global Context
@@ -81,7 +81,7 @@
 ## SSL Virtual Host Context
 ##
-<VirtualHost _default_:8443>
+<VirtualHost _default_:443>
 #
b.
Note this needs to be added before your reject-all rule, which might be:
-A INPUT -j REJECT --reject-with icmp-host-prohibited
c. Copy the httpd/wsgi-keystone.conf file to the appropriate location for your Apache server, for example, /etc/httpd/conf.d/wsgi-keystone.conf.
d.
Note
This path is Ubuntu-specific. For other distributions,
replace with appropriate path.
e. If you are running with SELinux enabled, ensure that the file has the appropriate SELinux context to access the linked file. For example, if you have the file in the /var/www/cgi-bin location, you can do this by running:
# restorecon /var/www/cgi-bin
Make sure you use either the SQL or the memcached driver for
tokens, otherwise the tokens will not be shared between the processes of the Apache HTTPD server.
For SQL, in /etc/keystone/keystone.conf , set:
[token]
driver = keystone.token.backends.sql.Token
In both cases, all servers that are storing tokens need a shared
back end. This means either that both point to the same
database server, or both point to a common memcached instance.
g. Install Shibboleth:
# apt-get install libapache2-mod-shib2
Note
The apt-get command is Ubuntu specific. For other
distributions, replace with appropriate command.
h. Configure the Identity service virtual host and adjust the config to properly handle the SAML2 workflow.
Add a WSGIScriptAliasMatch directive to your vhost configuration:
WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1
i.
Note
The option saml2 may be different in your deployment, but do not use a wildcard value. Otherwise every Federated protocol will be handled by Shibboleth.
l. Restart Apache:
# service apache2 restart
Note
The service apache2 restart command is
Ubuntu-specific. For other distributions, replace with
appropriate command.
2. Once you have your Identity service virtual host ready, configure Shibboleth and upload your metadata to the Identity Provider.
If new certificates are required, they can be easily created by executing:
$ shib-keygen -y NUMBER_OF_YEARS
d. The Identity service enforces external authentication when the REMOTE_USER environment variable is present, so make sure Shibboleth does not set the REMOTE_USER environment variable. To do so, scan through the /etc/shibboleth/shibboleth2.xml configuration file and remove the REMOTE_USER directives.
e. Examine your attribute map in the /etc/shibboleth/attributes-map.xml file and adjust it to your requirements if needed. For more information see Shibboleth Attributes.
3. Add the Federation extension driver to the [federation] section in the keystone.conf file. For example:
[federation]
driver = keystone.contrib.federation.backends.sql.Federation
b.
Note
The external method should be dropped to avoid
any interference with some Apache and Shibboleth
SP setups, where a REMOTE_USER environment variable is always set, even as an empty value.
Ideally, to test that the Identity Provider and the Identity service are communicating, navigate to the protected URL and attempt to sign in. If you get a response back from keystone, even an error response, the two are communicating.
Configuring Federation
Now that the Identity Provider and Identity service are communicating,
you can start to configure the OS-FEDERATION extension.
Note
It is assumed that the keystone service is running on
port 5000.
or
# curl -X GET \
  -H "X-Auth-Token: <unscoped token>" \
  http://localhost:5000/v3/OS-FEDERATION/domains
idp_contact_company=example_company
idp_contact_name=John
idp_contact_surname=Smith
[email protected]
idp_contact_telephone=555-55-5555
idp_contact_type=technical
Generate metadata
In order to create a trust between the Identity Provider and the Service
Provider, metadata must be exchanged. To create metadata for your Identity service, run the keystone-manage command and pipe the output to a
file. For example:
$ keystone-manage saml_idp_metadata > /etc/keystone/saml2_idp_metadata.xml
Note
The file location should match the value of the configuration
option idp_metadata_path that was assigned in the list of
[saml] updates.
At this point the SAML Assertion can be sent to the Service Provider keystone, and a valid OpenStack token, issued by a Service Provider keystone,
will be returned.
Future
Currently, the CLI supports the Enhanced Client or Proxy (ECP) flow, that is, non-browser support, for keystoneclient from an API perspective. So, if you are using keystoneclient, you can create a client instance and use the SAML authorization plugin. There is presently no support for the dashboard. With upcoming OpenStack releases, Federated Identity should be supported with both the CLI and the dashboard.
Checklist
Check-Identity-01: Is user and group ownership of
Identity configuration files set to keystone?
Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it would cause a severe availability issue resulting in denial of service to the other end users. Thus user and group ownership of such critical configuration files must be set to that component owner.
Run the following commands:
$ stat -L -c "%U %G" /etc/keystone/keystone.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/keystone-paste.ini | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/policy.json | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/logging.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/signing_cert.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/private/signing_key.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/ca.pem | egrep "keystone keystone"
Pass: If user and group ownership of all these configuration files is set to keystone; the above commands produce output of keystone keystone.
Fail: If the above commands do not return any output; the user or group ownership might have been set to a user other than keystone.
Recommended in: the section called Internally implemented authentication methods [66]
$ stat -L -c "%a" /etc/keystone/keystone.conf
$ stat -L -c "%a" /etc/keystone/keystone-paste.ini
$ stat -L -c "%a" /etc/keystone/policy.json
$ stat -L -c "%a" /etc/keystone/logging.conf
$ stat -L -c "%a" /etc/keystone/ssl/certs/signing_cert.pem
$ stat -L -c "%a" /etc/keystone/ssl/private/signing_key.pem
$ stat -L -c "%a" /etc/keystone/ssl/certs/ca.pem
Check-Identity-04: Does Identity use strong hashing algorithms for PKI tokens?
MD5 is a weak and deprecated hashing algorithm. It can be cracked using brute-force attacks. Identity tokens are sensitive and need to be protected with a stronger hashing algorithm to prevent unauthorized disclosure and subsequent access.
Pass: If the value of the hash_algorithm parameter in the [token] section of /etc/keystone/keystone.conf is set to SHA256.
Fail: If the value of the hash_algorithm parameter in the [token] section is set to MD5.
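A minimal sketch of the passing configuration in /etc/keystone/keystone.conf:

[token]
# Use SHA256 instead of the weaker MD5 default for PKI token hashing:
hash_algorithm = sha256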
Recommended in: the section called Tokens [72]
7. Dashboard
Basic web server configuration ............................................................. 89
HTTPS ................................................................................................... 90
HTTP Strict Transport Security (HSTS) ................................................... 90
Front end caching ................................................................................ 90
Domain names ..................................................................................... 90
Static media ......................................................................................... 91
Secret key ............................................................................................ 92
Session back end ................................................................................. 92
Allowed hosts ...................................................................................... 92
Cross Site Request Forgery (CSRF) ........................................................ 93
Cookies ................................................................................................ 93
Cross Site Scripting (XSS) ...................................................................... 93
Cross Origin Resource Sharing (CORS) .................................................. 93
Horizon image upload ......................................................................... 94
Upgrading ............................................................................................ 94
Debug .................................................................................................. 94
Horizon is the OpenStack dashboard that provides users a self-service portal to provision their own resources within the limits set by administrators.
These include provisioning users, defining instance flavors, uploading VM
images, managing networks, setting up security groups, starting instances,
and accessing the instances through a console.
The dashboard is based on the Django web framework, therefore secure
deployment practices for Django apply directly to horizon. This guide provides a popular set of Django security recommendations. Further information can be found by reading the Django documentation.
The dashboard ships with reasonable default security settings, and has
good deployment and configuration documentation.
HTTPS
Deploy the dashboard behind a secure HTTPS server by using a valid, trusted certificate from a recognized certificate authority (CA). Private organization-issued certificates are only appropriate when the root of trust is preinstalled in all user browsers.
Configure HTTP requests to the dashboard domain to redirect to the fully
qualified HTTPS URL.
Note
If you are using an HTTPS proxy in front of your web server,
rather than using an HTTP server with HTTPS functionality,
modify the SECURE_PROXY_SSL_HEADER variable; refer to the Django documentation for details on this setting.
See the chapter on PKI/SSL Everywhere for more specific recommendations and server configurations for HTTPS configurations, including the
configuration of HSTS.
Domain names
Many organizations typically deploy web applications at subdomains of an
overarching organization domain. It is natural for users to expect a domain
of the form openstack.example.org. In this context, there are often
many other applications deployed in the same second-level namespace, often serving user-controlled content. This name structure is convenient and
simplifies name server maintenance.
Static media
The dashboard's static media should be deployed to a subdomain of the
dashboard domain and served by the web server. The use of an external
content delivery network (CDN) is also acceptable. This subdomain should
not set cookies or serve user-provided content. The media should also be
served with HTTPS.
Django media settings are documented in the Django documentation.
The dashboard's default configuration uses django_compressor to compress and minify CSS and JavaScript content before serving it. This compression should be performed statically in a non-production build environment before the dashboard is deployed, with the resulting files copied along with the deployed code or to the CDN server, rather than relying on the default in-request dynamic compression. If this is not practical, we recommend disabling resource compression entirely. Online compression dependencies (less, Node.js) should not be installed on production machines.
Secret key
The dashboard depends on a shared SECRET_KEY setting for some security functions. The secret key should be a randomly generated string at
least 64 characters long, which must be shared across all active dashboard
instances. Compromise of this key may allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and
caching. Do not commit this key to public repositories.
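For illustration (the settings file location varies by distribution), a suitable key can be generated with Python's os.urandom and placed in local_settings.py:

# Generate a 64-byte random value, hex encoded:
$ python -c "import os, binascii; print(binascii.hexlify(os.urandom(64)))"

# local_settings.py (shared verbatim by every dashboard instance):
SECRET_KEY = '<output of the command above>'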
Allowed hosts
Configure the ALLOWED_HOSTS setting with the domain or domains
where the dashboard is available. Failure to configure this setting (especially if not following the recommendation above regarding second level domains) opens the dashboard to a number of serious attacks. Wild card domains should be avoided.
For further details, see the Django documentation.
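A minimal example in local_settings.py, with a hypothetical domain:

ALLOWED_HOSTS = ['dashboard.example.com']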
Cookies
Session Cookies should be set to HTTPONLY:
SESSION_COOKIE_HTTPONLY = True
Never configure CSRF or session cookies to have a wildcard domain with a leading dot. Horizon's session and CSRF cookies should be secured when deployed with HTTPS:
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
Access-Control-Allow-Origin: https://example.com/
Upgrading
Django security releases are generally well tested and aggressively backwards compatible. In almost all cases, new major releases of Django are
also fully backwards compatible with previous releases. Dashboard implementers are strongly encouraged to run the latest stable release of Django
with up-to-date security releases.
Debug
Make sure DEBUG is set to False in production. In Django, DEBUG displays
stack traces and sensitive web server state information on any exception.
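A minimal check in local_settings.py (TEMPLATE_DEBUG mirrors DEBUG by convention in Django releases of this era):

DEBUG = False
TEMPLATE_DEBUG = DEBUG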
8. Compute
How to select virtual consoles .............................................................. 95
The Compute service (nova) is one of the more complex OpenStack services. It runs in many locations throughout the cloud and interacts with a
variety of internal services. For this reason, most of our recommendations
regarding best practices for Compute service configuration are distributed
throughout this book. We provide specific details in the sections on Management, API Endpoints, Messaging, and Database.
Capabilities
The OpenStack dashboard (horizon) can provide a VNC console for instances directly on the web page using the HTML5 noVNC client. This
requires the nova-novncproxy service to bridge from the public network to the management network.
The nova command-line utility can return a URL for the VNC console for access by the nova Java VNC client. This requires the nova-xvpvncproxy service to bridge from the public network to the
management network.
Security considerations
The nova-novncproxy and nova-xvpvncproxy services by default
open public-facing ports that are token authenticated.
By default, the remote desktop traffic is not encrypted. TLS can be enabled to encrypt the VNC traffic. Please refer to Introduction to TLS and
SSL for appropriate recommendations.
Bibliography
blog.malchuk.ru, OpenStack VNC Security. 2013. Secure Connections to
VNC ports
OpenStack Mailing List, [OpenStack] nova-novnc SSL configuration - Havana. 2014. OpenStack nova-novnc SSL Configuration
Redhat.com/solutions, Using SSL Encryption with OpenStack nova-novncproxy. 2014. OpenStack nova-novncproxy SSL encryption
Capabilities
SPICE is supported by the OpenStack dashboard (horizon) directly on
the instance web page. This requires the nova-spicehtml5proxy service.
The nova command-line utility can return a URL for the SPICE console for access by a SPICE HTML5 client.
Limitations
Although SPICE has many advantages over VNC, the spice-html5 browser integration does not yet allow administrators to take advantage of most of them. To use SPICE features such as multi-monitor support and USB pass-through, administrators are recommended to use a standalone SPICE client within the management network.
Security considerations
The nova-spicehtml5proxy service by default opens public-facing
ports that are token authenticated.
The functionality and integration are still evolving. We will assess the features in the next release and make recommendations.
As is the case for VNC, at this time we recommend using SPICE from the management network, in addition to limiting use to a few individuals.
Bibliography
OpenStack Configuration Reference - Havana. SPICE Console. SPICE Console
bugzilla.redhat.com, Bug 913607 - RFE: Support Tunnelling SPICE over
websockets. 2013. Red Hat bug 913607
9. Object Storage
First thing to secure: the network ...................................................... 100
Securing services: general ................................................................... 102
Securing storage services .................................................................... 103
Securing proxy services ....................................................................... 104
Object Storage authentication ............................................................ 105
Other notable items ............................................................................ 106
Note
An Object Storage installation does not have to necessarily
be on the Internet and could also be a private cloud with the
"Public Switch" being part of the organization's internal network infrastructure.
Caution
Object Storage does not employ encryption or authentication
with inter-node communications. This is why you see a "Private
Switch" or private network ([V]LAN) in the architecture diagrams. This data domain should be separate from other OpenStack data networks as well. For further discussion on security
domains please see the section called Security boundaries and
threats [12].
Tip
Rule: Use a private (V)LAN network segment for your storage
nodes in the data domain.
This necessitates that the proxy nodes have dual interfaces (physical or virtual):
1. One as a "public" interface for consumers to reach
2. Another as a "private" interface with access to the storage nodes
The following figure demonstrates one possible network architecture.
File permissions
The /etc/swift directory contains information about the ring topology and environment configuration. The following permissions are recommended:
# chown -R root:swift /etc/swift/*
# find /etc/swift/ -type f -exec chmod 640 {} \;
This allows only root to modify the configuration files, while the services can still read them through their group membership in the swift group.
Service            Port   Type
Account service    6002   TCP
Container service  6001   TCP
Object service     6000   TCP
Rsync (a)          873    TCP

(a) If ssync is used instead of rsync, the Object service port is used
for maintaining durability.
Authentication does not take place at the storage nodes. If someone was
able to connect to a storage node on one of these ports they could access
or modify data without authentication. In order to secure against this issue you should follow the recommendations given previously about using
a private storage network.
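To illustrate (the storage-network subnet is an assumption), host-based firewall rules on the storage nodes can limit these ports to the private storage network:

# Allow the storage service ports only from the private storage net,
# and drop everything else:
# iptables -A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 \
#     -s 192.168.100.0/24 -j ACCEPT
# iptables -A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 -j DROP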
Container: A collection of objects. Metadata on the container is available for ACLs. The meaning of ACLs is dependent on the authentication system used.
Tip
Another way of thinking about the above would be: A single
shelf (account) holds zero or more buckets (containers) which
each hold zero or more objects. A garage (Object Storage cluster) may have multiple shelves (accounts) with each shelf belonging to zero or more users.
At each level you may have ACLs that dictate who has what type of access. ACLs are interpreted based on what authentication system is in use.
The two most common types of authentication providers used are Identity service (keystone) and TempAuth. Custom authentication providers
are also possible. Please see the section called Object Storage authentication [105] for more information.
Load balancer
If using Apache is not feasible, or if for performance you wish to offload your TLS work, you may employ a dedicated network device load balancer. This is also the common way to provide redundancy and load balancing when using multiple proxy nodes.
If you choose to offload your TLS, ensure that the network link between the load balancer and your proxy nodes is on a private (V)LAN segment such that other nodes on the network (possibly compromised) cannot wiretap (sniff) the unencrypted traffic. If such a breach were to occur, the attacker could gain access to end-point client or cloud administrator credentials and access the cloud data.
The authentication service you use, such as Identity service (keystone) or
TempAuth, will determine how you configure a different URL in the responses to end-point clients so they use your load balancer instead of an
individual proxy node.
TempAuth
TempAuth is the default authentication for Object Storage. In contrast to Identity, it stores the user accounts, credentials, and metadata in object storage itself. More information can be found in the section The Auth System of the Object Storage (swift) documentation.
Keystone
Keystone is the commonly used Identity provider in OpenStack. It may also be used for authentication in Object Storage. Securing keystone is covered in Chapter 6, Identity [65].
11. Networking
Networking architecture ..................................................................... 109
Networking services ............................................................................ 113
Securing OpenStack Networking services ............................................ 117
Networking services security best practices ......................................... 119
Case studies ........................................................................................ 121
Networking architecture
OpenStack Networking is a standalone service that often deploys several processes across a number of nodes. These processes interact with each
other and other OpenStack services. The main process of the OpenStack
Networking service is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of
plug-ins for additional processing.
The OpenStack Networking components are:
neutron server (neutron-server and neutron-*-plugin)
plug-in agent (neutron-*-agent)    Runs on each compute node to manage
                                   the local virtual switch (vswitch)
                                   configuration. The plug-in that you
                                   use determines which agents run. This
                                   service requires message queue access
                                   and depends on the plug-in used.
DHCP agent (neutron-dhcp-agent)    Provides DHCP services to tenant
                                   networks. This agent is the same
                                   across all plug-ins and is responsible
                                   for maintaining DHCP configuration.
                                   The neutron-dhcp-agent requires
                                   message queue access.
L3 agent (neutron-l3-agent)
Guest network
External network
API network        Exposes all OpenStack APIs, including the OpenStack
                   Networking API, to tenants. The IP addresses on this
                   network should be reachable by anyone on the
                   Internet. This may be the same network as the
                   external network, as it is possible to create a
                   subnet for the external network that uses IP
                   allocation ranges smaller than the full range of IP
                   addresses in an IP block. This network is considered
                   the Public Security Domain.
Networking services
In the initial architectural phases of designing your OpenStack Network infrastructure it is important to ensure appropriate expertise is available to
assist with the design of the physical networking infrastructure, to identify
proper security controls and auditing mechanisms.
OpenStack Networking adds a layer of virtualized network services which
gives tenants the capability to architect their own virtual networks. Currently, these virtualized services are not as mature as their traditional networking counterparts. Consider the current state of these virtualized services before adopting them as it dictates what controls you may have to
implement at the virtualized and traditional network boundaries.
VLANs
VLANs are realized as packets on a specific physical network containing
IEEE 802.1Q headers with a specific VLAN ID (VID) field value. VLAN networks sharing the same physical network are isolated from each other at
L2, and can even have overlapping IP address spaces. Each distinct physical
network supporting VLAN networks is treated as a separate VLAN trunk,
with a distinct space of VID values. Valid VID values are 1 through 4094.
VLAN configuration complexity depends on your OpenStack design requirements. To allow OpenStack Networking to use VLANs efficiently, you must allocate a VLAN range (one for each tenant) and turn each compute node physical switch port into a VLAN trunk port.
Note
If you intend for your network to support more than 4094 tenants, VLAN is probably not the correct option for you, as multiple 'hacks' are required to extend the VLAN tags beyond this limit.
L2 tunneling
Network tunneling encapsulates each tenant/network combination with
a unique "tunnel-id" that is used to identify the network traffic belonging
to that combination. The tenant's L2 network connectivity is independent
of physical locality or underlying network design. By encapsulating traffic
inside IP packets, that traffic can cross Layer-3 boundaries, removing the
need for preconfigured VLANs and VLAN trunking. Tunneling adds a layer
of obfuscation to network data traffic, reducing the visibility of individual
tenant traffic from a monitoring point of view.
OpenStack Networking currently supports both GRE and VXLAN encapsulation.
The choice of technology to provide L2 isolation is dependent upon the
scope and size of tenant networks that will be created in your deployment. If your environment has limited VLAN ID availability or will have a
large number of L2 networks, it is our recommendation that you utilize
tunneling.
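For example, a deployment selecting VXLAN tunneling with the ML2 plug-in might configure a tunnel identifier pool as follows; this is a minimal sketch and the VNI range shown is an illustrative assumption:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# pool of VXLAN network identifiers handed out to tenant networks
vni_ranges = 65537:69999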
Network services
The choice of tenant network isolation affects how the network security
and control boundary is implemented for tenant services. The following
additional network services are either available or currently under development to enhance the security posture of the OpenStack network architecture.
Security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. Security group rules are stateful L2-L4 traffic filters. Note that legacy nova-network security groups are applied to all virtual interface ports on an instance using iptables.
When using the Networking service, we recommend that you enable security groups in this service and disable them in the Compute service.
Load balancing
Another feature in OpenStack Networking is Load-Balancer-as-a-Service (LBaaS). The LBaaS reference implementation is based on HAProxy. There are third-party plug-ins in development for extensions in OpenStack Networking to provide extensive L4-L7 functionality for virtual interface ports.
Firewalls
Firewall-as-a-Service (FWaaS) is considered an experimental feature for the Kilo release of OpenStack Networking. FWaaS addresses the need to manage and leverage the rich set of security features provided by typical firewall products, which are typically far more comprehensive than what is currently provided by security groups. Both Freescale and Intel developed third-party plug-ins as extensions in OpenStack Networking to support this component in the Kilo release. Documentation for the administration of FWaaS is available in the OpenStack Networking documentation.
It is critical during the design of an OpenStack Networking infrastructure to understand the current features and limitations of the network services that are available. Understanding where the boundaries of your virtual and physical networks lie will help you add the required security controls in your environment.
OpenStack Networking supports multiple L3 and DHCP agents with load balancing. However, tight coupling of the agents to the location of the virtual machine is not supported.
To isolate sensitive data communication between the OpenStack Networking services and other OpenStack core services, configure these communication channels to only allow communication over an isolated management network.
Note
It is important to review the default networking resource policy and modify the policy appropriately for your security posture.
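For reference, networking resource policy is defined in /etc/neutron/policy.json. The fragment below is a minimal sketch of the kind of rule to review; the entries shown are a subset of the defaults and restrict creation of shared networks to administrators:

{
    "context_is_admin": "role:admin",
    "admin_only": "rule:context_is_admin",
    "create_network": "",
    "create_network:shared": "rule:admin_only"
}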
If your deployment of OpenStack provides multiple external access points into different security domains, it is important that you limit the tenant's ability to attach multiple vNICs to multiple external access points; this would bridge these security domains and could lead to unforeseen security compromise. It is possible to mitigate this risk by utilizing the host aggregates functionality provided by OpenStack Compute or through splitting the tenant VMs into multiple tenant projects with different virtual network configurations.
Security groups
The OpenStack Networking service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into OpenStack Compute. Thus, when using OpenStack Networking, nova.conf should always disable built-in security groups and proxy all security group calls to the OpenStack Networking API. Failure to do so results in conflicting security policies being simultaneously applied by both services. To proxy security groups to OpenStack Networking, use the following configuration values:
firewall_driver must be set to
nova.virt.firewall.NoopFirewallDriver so that nova-compute does not perform iptables-based filtering itself.
security_group_api must be set to neutron so that all security
group requests are proxied to the OpenStack Networking service.
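In nova.conf these two settings take the following form:

[DEFAULT]
# disable iptables-based filtering in nova-compute
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# proxy all security group requests to OpenStack Networking
security_group_api = neutron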
A security group is a container for security group rules. Security groups and
their rules allow administrators and tenants the ability to specify the type
of traffic and direction (ingress/egress) that is allowed to pass through a
virtual interface port. When a virtual interface port is created in OpenStack
Networking it is associated with a security group. If a security group is not
specified, the port will be associated with a 'default' security group. By default this group will drop all ingress traffic and allow all egress. Rules can
be added to this group in order to change the behaviour.
When using the security group API through OpenStack Compute, security groups are applied to all virtual interface ports on an instance. The reason for this is that OpenStack Compute security group APIs are instance based, not virtual interface port based as in OpenStack Networking.
Quotas
Quotas provide the ability to limit the number of network resources available to tenants. You can enforce default quotas for all tenants. The /etc/neutron/neutron.conf file includes these options for quota:
[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
# default number of resources allowed per tenant, minus means unlimited
#default_quota = -1
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 50
# number of security groups allowed per tenant, and minus means unlimited
quota_security_group = 10
# number of security group rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver
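Note that the ConfDriver shown above enforces one set of limits for every tenant. If per-tenant overrides are needed, the database-backed driver (neutron.db.quota_db.DbQuotaDriver) must be used instead, after which individual tenants can be adjusted from the command line, for example:

$ neutron quota-update --tenant-id TENANT_ID --network 5 --port 30
$ neutron quota-show --tenant-id TENANT_ID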
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address providing networking services to the user.
Messaging security
This section discusses security hardening approaches for the three most
common message queuing solutions used in OpenStack: RabbitMQ, Qpid,
and ZeroMQ.
RabbitMQ Configuration
RabbitMQ SSL
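A typical rabbitmq.config fragment for SSL-only operation looks like the following sketch; the listener port and certificate paths are illustrative assumptions:

[
  {rabbit, [
    %% disable the cleartext listener entirely
    {tcp_listeners, [] },
    {ssl_listeners, [5671] },
    {ssl_options, [
      {cacertfile, "/etc/ssl/cacert.pem"},
      {certfile,   "/etc/ssl/rabbit-server-cert.pem"},
      {keyfile,    "/etc/ssl/rabbit-server-key.pem"},
      {verify,     verify_peer},
      {fail_if_no_peer_cert, true}]}
  ]}
].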
On the RabbitMQ server, for each OpenStack service or node that communicates with the message queue, set up user accounts and privileges:
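A hedged sketch of such account setup with rabbitmqctl, using an illustrative account name and placeholder password (tighter vhost permissions should be applied where practical):

$ rabbitmqctl add_user compute01 RABBIT_PASS
$ rabbitmqctl set_permissions compute01 ".*" ".*" ".*"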
Optionally, if using SASL with Qpid, specify the SASL mechanisms in use by adding:

qpid_sasl_mechanisms=<space separated list of SASL mechanisms to use for auth>
Namespaces
Network namespaces are highly recommended for all services running on OpenStack Compute hypervisors. This helps prevent the bridging of network traffic between VM guests and the management network.
When using ZeroMQ messaging, each host must run at least one ZeroMQ
message receiver to receive messages from the network and forward messages to local processes through IPC. It is possible and advisable to run an
independent message receiver per project within an IPC namespace, along
with other services within the same project.
Network policy
Queue servers should only accept connections from the management network. This applies to all implementations. This should be implemented
through configuration of services and optionally enforced through global
network policy.
When using ZeroMQ messaging, each project should run a separate ZeroMQ receiver process on a port dedicated to services belonging to that
project. This is equivalent to the AMQP concept of control exchanges.
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address security concerns around the messaging service.
The message queue is a critical piece of infrastructure that supports a number of OpenStack services but is most strongly associated with the Compute service. Due to the nature of the message queue service, Alice and
Bob have similar security concerns. One of the larger concerns that remains
is that many systems have access to this queue and there is no way for a
consumer of the queue messages to verify which host or service placed the
messages on the queue. An attacker who is able to successfully place messages on the queue is able to create and delete VM instances, attach the block storage of any tenant, and perform a myriad of other malicious actions. There
are a number of solutions anticipated in the near future, with several proposals for message signing and encryption making their way through the
OpenStack development process.
Architecture
The following diagram presents a conceptual view of how the Data processing service fits into the greater OpenStack ecosystem.
The Data processing service makes heavy use of the Compute, Orchestration, Image, and Block Storage services during the provisioning of clusters.
It will also use one or more networks, created by the Networking service,
provided during cluster creation for administrative access to the instances.
While users are running framework applications, the controller and the clusters will be accessing the Object Storage service. Given these service usages, we recommend following the instructions outlined in Chapter 2, System documentation [23] for cataloging all the components of an installation.
Technologies involved
The Data processing service is responsible for the deployment and management of several applications. For a complete understanding of the security options provided we recommend that operators have a general familiarity with these applications. The list of highlighted technologies is broken into two sections: first, high priority applications that have a greater impact on security, and second, supporting applications with a lower impact.
Higher impact:
Hadoop (see the Hadoop secure mode docs)
HDFS
Spark (see Spark Security)
Storm
Zookeeper

Lower impact:
Oozie
Hive
Pig
These technologies comprise the core of the frameworks that are deployed with the Data processing service. In addition to these technologies, the service also includes bundled frameworks provided by third-party vendors. These bundled frameworks are built using the same core pieces described above, plus configurations and applications that the vendors include. For more information on the third-party framework bundles please see the following links:
Cloudera CDH
Hortonworks Data Platform
MapR
Deployment
The Data processing service is deployed, like many other OpenStack services, as an application running on a host connected to the stack. As of the Kilo release, it has the ability to be deployed in a distributed manner with several redundant controllers. Like other services, it also requires a database to store information about its resources; see Chapter 14, Databases [141]. It is important to note that the Data processing service will need to manage several Identity service trusts, communicate directly with the Orchestration and Networking services, and potentially create users in a proxy domain. For these reasons the controller will need access to the control plane, and as such we recommend installing it alongside other service controllers.
The Data processing service interacts directly with several OpenStack services:
Compute
Identity
Networking
Object Storage
Orchestration
Block Storage (optional)
We recommend documenting all the data flows and bridging points between these services and the data processing controller. See Chapter 2, System documentation [23].
The Object Storage service is used by the Data processing service to store
job binaries and data sources. Users wishing to have access to the full Data processing service functionality will need an object store in the projects
they are using.
The Networking service plays an important role in the provisioning of clusters. Prior to provisioning, the user is expected to provide one or more networks for the cluster instances. The action of associating networks is similar to the process of assigning networks when launching instances through
the dashboard. These networks are used by the controller for administrative access to the instances and frameworks of its clusters.
Also of note is the Identity service. Users of the Data processing service will need appropriate roles in their projects to allow the provisioning of instances for their clusters. Installations that use the proxy domain configuration require special consideration; see the section called Proxy domains [135]. Specifically, the Data processing service will need the ability to create users within the proxy domain.
TLS
The Data processing service controller, like many other OpenStack controllers, can be configured to require TLS connections.
Pre-Kilo releases will require a TLS proxy as the controller does not allow
direct TLS connections. Configuring TLS proxies is covered in the section
called TLS proxies and HTTP services [48], and we recommend following
the advice there to create this type of installation.
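As one possible sketch, an nginx TLS-termination proxy in front of a controller bound to the loopback interface could look like the following; the server name, certificate paths, and the assumption that the controller listens on its default port 8386 are illustrative:

server {
    listen 443 ssl;
    server_name sahara.example.com;

    ssl_certificate     /etc/nginx/ssl/sahara.crt;
    ssl_certificate_key /etc/nginx/ssl/sahara.key;

    location / {
        # forward decrypted requests to the controller on localhost
        proxy_pass http://127.0.0.1:8386;
    }
}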
From the Kilo release onward the data processing controller allows direct TLS connections, which require some small adjustments to the controller configuration file. For any Kilo or later installation we recommend enabling direct TLS connections in the controller configuration.
Policies govern the creation, modification, and retrieval of Data processing service resources. Operators who need to restrict access within a project should be fully aware that there will need to be alternative means for users to gain access to the core functionality of the service (for example, provisioning clusters).
Security groups
The Data processing service allows for the association of security groups
with instances provisioned for its clusters. With no additional configuration the service will use the default security group for any project that provisions clusters. A different security group may be used if requested, or
an automated option exists which instructs the service to create a security
group based on ports specified by the framework being accessed.
For production environments we recommend controlling the security
groups manually and creating a set of group rules that are appropriate
for the installation. In this manner the operator can ensure that the default security group will contain all the appropriate rules. For an expanded discussion of security groups please see the section called Security
groups [120].
Proxy domains
When using the Object Storage service in conjunction with data processing it is necessary to add credentials for the store access. With proxy domains, the Data processing service can instead use a delegated trust from the Identity service to allow store access via a temporary user created in the domain. For this delegation mechanism to work the Data processing service must be configured to use proxy domains, and the operator must configure an identity domain for the proxy users.
The data processing controller retains temporary storage of the username
and password provided for object store access. When using proxy domains
the controller will generate this pair for the proxy user, and the access of
this user will be limited to that of the identity trust. We recommend using
proxy domains in any installation where the controller or its database have
routes to or from public networks.
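A minimal sketch of enabling proxy domains in the controller configuration file; the domain and role names are illustrative and must match the identity domain created by the operator:

[DEFAULT]
use_domain_for_proxy_users = true
proxy_user_domain_name = sahara_proxy
proxy_user_role_names = Member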
Indirect access
For installations in which the controller will have limited access to all the instances of a cluster, due to limits on floating IP addresses or security rules,
indirect access may be configured. This allows some instances to be designated as proxy gateways to the other instances of the cluster.
This configuration can only be enabled while defining the node group templates that will make up the data processing clusters. It is provided as a run-time option to be enabled during the cluster provisioning process.
Rootwrap
When creating custom topologies for network access it can be necessary to
allow non-root users the ability to run the proxy commands. For these situations the oslo rootwrap package is used to provide a facility for non-root
users to run privileged commands. This configuration requires the user associated with the data processing controller application to be in the sudoers list and for the option to be enabled in the configuration file. Optionally, an alternative rootwrap command can be provided.
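A hedged sketch of this setup: the controller configuration enables rootwrap, and a sudoers entry allows the (illustrative) sahara service user to invoke it without a password:

[DEFAULT]
use_rootwrap = true
rootwrap_command = 'sudo sahara-rootwrap /etc/sahara/rootwrap.conf'

sahara ALL = (root) NOPASSWD: /usr/bin/sahara-rootwrap /etc/sahara/rootwrap.conf *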
For more information on the rootwrap project, please see the official documentation:
https://wiki.openstack.org/wiki/Rootwrap
Logging
Monitoring the output of the service controller is a powerful forensic tool, as described more thoroughly in Chapter 18, Monitoring and logging [193]. The Data processing service controller offers a few options for setting the location and level of logging.
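These are the standard oslo logging options; for example, in the controller configuration file (paths illustrative):

[DEFAULT]
verbose = true
debug = false
log_dir = /var/log/sahara
log_file = sahara.log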
References
Sahara project documentation: http://docs.openstack.org/developer/sahara
Case studies
Continuing with the studies described in the section called Introduction to
case studies [21], we present Alice and Bob's approaches to deploying the
Data processing service for their users.
the cluster instances, allowing users to gain access to the instances in the event of errors. She enables the use of proxy domains to prevent the users from needing to enter their credentials into the Data processing service.
14. Databases
Database back end considerations ..................................................... 141
Database access control ..................................................................... 142
Database transport security ............................................................... 147
Case studies ....................................................................................... 149
Nova-conductor
The compute nodes are the least trusted of the services in OpenStack because they host tenant instances. The nova-conductor service has been
introduced to serve as a database proxy, acting as an intermediary between the compute nodes and the database. We discuss its ramifications
later in this chapter.
We strongly recommend:
Isolating all database communications to a management network
Securing communications using TLS
Creating unique database user accounts per OpenStack service endpoint (illustrated below)
Privileges
A separate database administrator (DBA) account should be created and protected. This account should have full privileges to create/drop databases, create user accounts, and update user privileges. This simple means of separating responsibility helps prevent accidental misconfiguration, lowers risk, and limits the scope of compromise.
The database user accounts created for the OpenStack services and for
each node should have privileges limited to just the database relevant to
the service where the node is a member.
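For example, with MySQL a per-service account can be confined to its own database and required to use SSL; the service, host, and password values below are illustrative:

GRANT ALL ON nova.* TO 'nova'@'node1' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SSL;

With PostgreSQL the equivalent restriction is expressed as an entry in pg_hba.conf; again, the database, user, and network values are illustrative:

hostssl nova nova 10.0.0.0/24 md5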
Note that the hostssl entry only adds the ability to communicate over SSL and is non-exclusive. Other access methods that may allow unencrypted transport should be disabled so that SSL is the sole access method.
The md5 parameter defines the authentication method as a hashed password. We provide a secure authentication example in the section below.
Nova-conductor
OpenStack Compute offers a sub-service called nova-conductor which proxies database connections; its primary purpose is to have the nova compute nodes interface with nova-conductor to meet data persistence needs, rather than communicating directly with the database.
Nova-conductor receives requests over RPC and performs actions on behalf
of the calling service without granting granular access to the database, its
tables, or data within. Nova-conductor essentially abstracts direct database
access away from compute nodes.
This abstraction offers the advantage of restricting services to executing
methods with parameters, similar to stored procedures, preventing a large
number of systems from directly accessing or modifying database data.
This is accomplished without having these procedures stored or executed
within the context or scope of the database itself, a frequent criticism of
typical stored procedures.
Unfortunately, this solution complicates the task of more fine-grained access control and the ability to audit data access. Because the nova-conductor service receives requests over RPC, it highlights the importance of improving the security of messaging. Any node with access to the message queue may execute these methods provided by nova-conductor and effectively modify the database.
Note that, as nova-conductor only applies to OpenStack Compute, direct database access from compute hosts may still be necessary for the operation of other OpenStack components such as Telemetry (ceilometer), Networking, and Block Storage.
To disable the nova-conductor, place the following into your
nova.conf file (on your compute hosts):
[conductor]
use_local = true
Database transport
In addition to restricting database communications to the management
network, we also strongly recommend that the cloud administrator configure their database back end to require TLS. Using TLS for the database
client connections protects the communications from tampering and
eavesdropping. As will be discussed in the next section, using TLS also
provides the framework for doing database user authentication through
X.509 certificates (commonly referred to as PKI). Below is guidance on
how TLS is typically configured for the two popular database back ends
MySQL and PostgreSQL.
Note
When installing the certificate and key files, ensure that the file
permissions are restricted, for example chmod 0600, and the
ownership is restricted to the database daemon user to prevent unauthorized access by other processes and users on the
database server.
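For example (illustrative paths and daemon user):

$ chown mysql:mysql /etc/mysql/ssl/server-key.pem
$ chmod 0600 /etc/mysql/ssl/server-key.pem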
In my.cnf:

[mysqld]
...
ssl-ca=/path/to/ssl/cacert.pem
ssl-cert=/path/to/ssl/server-cert.pem
ssl-key=/path/to/ssl/server-key.pem
Optionally, you can restrict the set of SSL ciphers used for the encrypted connection; see http://www.openssl.org/docs/apps/ciphers.html for a list of ciphers and the syntax for specifying the cipher string:

ssl-cipher='cipher:list'
Similarly, for PostgreSQL you can restrict the set of SSL ciphers in postgresql.conf; see the same reference for the cipher string syntax:

ssl_ciphers = 'cipher:list'
The server certificate, key, and certificate authority (CA) files should be
placed in the $PGDATA directory in the following files:
$PGDATA/server.crt - Server certificate
$PGDATA/server.key - Private key corresponding to server.crt
$PGDATA/root.crt - Trusted certificate authorities
$PGDATA/root.crl - Certificate revocation list
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address database selection and configuration for their respective private and public clouds.
Data disposal
OpenStack operators should strive to provide a certain level of tenant data
disposal assurance. Best practices suggest that the operator sanitize cloud
system media (digital and non-digital) prior to disposal, release out of organization control or release for reuse. Sanitization methods should implement an appropriate level of strength and integrity given the specific security domain and sensitivity of the information.
"The sanitization process removes information from the
media such that the information cannot be retrieved or reconstructed. Sanitization techniques, including clearing,
purging, cryptographic erase, and destruction, prevent the
disclosure of information to unauthorized individuals when
such media is reused or released for disposal." NIST Special
Publication 800-53 Revision 4
The following general data disposal and sanitization guidelines are adapted from the NIST recommended security controls. Cloud operators should:
1. Track, document and verify media sanitization and disposal actions.
2. Test sanitization equipment and procedures to verify proper performance.
3. Sanitize portable, removable storage devices prior to connecting such
devices to the cloud infrastructure.
4. Destroy cloud system media that cannot be sanitized.
If a back-end plug-in is being used, there may be independent ways of doing encryption or non-standard overwrite solutions. Plug-ins to OpenStack Block Storage will store data in a variety of ways. Many plug-ins are specific to a vendor or technology, whereas others are more DIY solutions around filesystems such as LVM or ZFS. Methods to securely destroy data will vary from one plug-in to another, from one vendor's solution to another, and from one filesystem to another.
Some back ends such as ZFS will support copy-on-write to prevent data exposure. In these cases, reads from unwritten blocks will always return zero. Other back ends such as LVM may not natively support this, thus the Block Storage plug-in takes the responsibility to overwrite previously written blocks before handing them to users. It is important to review what assurances your chosen volume back end provides and to see what mitigations may be available for those assurances not provided.
Finally, while not a feature of OpenStack, vendors and implementors may
choose to add or support encryption of volumes. In this case, destruction
of data is as simple as throwing away the key.
When using LVM backed ephemeral storage, which is block-based, it is necessary that the OpenStack Compute software securely erases blocks to prevent information disclosure. There have in the past been information disclosure vulnerabilities related to improperly erased ephemeral block storage devices.
Filesystem storage is a more secure solution for ephemeral block storage
devices than LVM as dirty extents cannot be provisioned to users. However, it is important to be mindful that user data is not destroyed, so it is suggested to encrypt the backing filesystem.
Data encryption
The option exists for implementers to encrypt tenant data wherever it is
stored on disk or transported over a network, such as the OpenStack volume encryption feature described below. This is above and beyond the
general recommendation that users encrypt their own data before sending
it to their provider.
The importance of encrypting data on behalf of tenants is largely related to the risk assumed by a provider that an attacker could access tenant data. There may be requirements here in government, as well as requirements per-policy, in private contract, or even in case law in regard to private contracts for public cloud providers. It is recommended that a risk assessment be performed and legal counsel consulted before choosing tenant encryption policies.
Per-instance or per-object encryption is preferable over, in descending order, per-project, per-tenant, per-host, and per-cloud aggregations. This recommendation is inverse to the complexity and difficulty of implementation. Presently, in some projects it is difficult or impossible to implement encryption as loosely granular as even per-tenant. We recommend implementors make a best effort in encrypting tenant data.
Often, data encryption relates positively to the ability to reliably destroy
tenant and per-instance data, simply by throwing away the keys. It should
be noted that in doing so, it becomes of great importance to destroy those
keys in a reliable and secure manner.
Opportunities to encrypt data for users are present:
Object Storage objects
Network data
Volume encryption
A volume encryption feature in OpenStack supports privacy on a per-tenant basis. As of the Kilo release, the following features are supported:
Creation and usage of encrypted volume types, initiated through the
dashboard or a command line interface
Enable encryption and select parameters such as encryption algorithm
and key size
Volume data contained within iSCSI packets is encrypted
Supports encrypted backups if the original volume is encrypted
Dashboard indication of volume encryption status. Includes indication
that a volume is encrypted, and includes the encryption parameters such
as algorithm and key size
Interface with the Key management service through a secure wrapper
Volume encryption is supported by back-end key storage for enhanced security (for example, a Hardware Security Module (HSM) or a
KMIP server can be used as a Barbican back-end secret store)
Ephemeral disk encryption addresses the risk that sensitive data may be accessed on the ephemeral disk and that vestigial information could remain after the disk is unmounted. As of the Kilo release, the following ephemeral disk encryption features are supported:
Creation and usage of encrypted LVM ephemeral disks
Compute configuration enables encryption and specifies encryption
parameters such as algorithm and key size
Interface with the Key management service through a secure wrapper
Key management service will support data isolation by providing
ephemeral disk encryption keys on a per-tenant basis
Ephemeral disk encryption is supported by back-end key storage for
enhanced security (for example, an HSM or a KMIP server can be used
as a Barbican back-end secret store)
With the Key management service, when an ephemeral disk is no
longer needed, simply deleting the key may take the place of overwriting the ephemeral disk storage area
For the purpose of performance, many storage protocols are unencrypted. Some protocols such as iSCSI can provide authentication and encrypted sessions; it is our recommendation to enable these features.
As both Block Storage and Compute support LVM-backed storage, we can easily provide an example applicable to both systems. In deployments using LVM, encryption may be performed against the backing physical volumes. An encrypted block device would be created using the standard Linux tools, with the LVM physical volume (PV) created on top of the decrypted block device using pvcreate. Then, the vgcreate or vgextend tool may be used to add the encrypted physical volume to an LVM volume group (VG).
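A hedged sketch of that sequence, assuming an illustrative device /dev/sdb and the conventional cinder-volumes volume group name:

# create and open the encrypted block device
$ cryptsetup luksFormat /dev/sdb
$ cryptsetup luksOpen /dev/sdb crypt-pv
# place the LVM physical volume on the decrypted mapping
$ pvcreate /dev/mapper/crypt-pv
$ vgcreate cinder-volumes /dev/mapper/crypt-pv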
Network data
Tenant data for compute could be encrypted over IPsec or other tunnels.
This is not functionality common or standard in OpenStack, but is an option available to motivated and interested implementors.
Likewise, encrypted data will remain encrypted as it is transferred over the
network.
Key management
To address the often mentioned concern of tenant data privacy and limiting cloud provider liability, there is greater interest within the OpenStack community to make data encryption more ubiquitous. It is relatively easy for an end-user to encrypt their data prior to saving it to the cloud,
and this is a viable path for tenant objects such as media files and database archives, among others. In some instances, client-side encryption is utilized
to encrypt data held by the virtualization technologies which requires
client interaction, such as presenting keys, to decrypt data for future use.
To seamlessly secure the data and have it accessible without burdening
the client with having to manage their keys and interactively provide them
calls for a key management service within OpenStack. Providing encryption
and key management services as part of OpenStack eases data-at-rest security adoption and addresses customer concerns about privacy or misuse
of data, while also limiting cloud provider liability. This can help reduce a
provider's liability when handling tenant data during an incident investigation in multi-tenant public clouds.
The volume encryption and ephemeral disk encryption features rely on a key management service (for example, Barbican) for the creation and secure storage of keys. The key manager is pluggable to facilitate deployments that need a third-party Hardware Security Module (HSM) or the use of the Key Management Interchange Protocol (KMIP), which is supported by an open-source project called PyKMIP.
Bibliography:
OpenStack.org, Welcome to Barbican's Developer Documentation!.
2014. Barbican developer documentation
oasis-open.org, OASIS Key Management Interoperability Protocol
(KMIP). 2014. KMIP
PyKMIP library https://github.com/OpenKMIP/PyKMIP
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we dive into their particular tenant data privacy requirements. Specifically, we will look into how Alice and Bob both handle tenant data, data destruction, and data encryption.
Hypervisor selection
Hypervisors in OpenStack
Whether OpenStack is deployed within private data centers or as a public cloud service, the underlying virtualization technology provides enterprise-level capabilities in the realms of scalability, resource efficiency, and
uptime. While such high-level benefits are generally available across many
OpenStack-supported hypervisor technologies, there are significant differences in the security architecture and features for each hypervisor, particularly when considering the security threat vectors which are unique to elastic OpenStack environments. As applications consolidate into single Infrastructure-as-a-Service (IaaS) platforms, instance isolation at the hypervisor
level becomes paramount. The requirement for secure isolation holds true
across commercial, government, and military communities.
Within the OpenStack framework, you can choose among many hypervisor
platforms and corresponding OpenStack plug-ins to optimize your cloud
environment. In the context of this guide, hypervisor selection considerations are highlighted as they pertain to feature sets that are critical to security. However, these considerations are not meant to be an exhaustive investigation into the pros and cons of particular hypervisors. NIST provides
additional guidance in Special Publication 800-125, "Guide to Security for
Full Virtualization Technologies".
Selection criteria
As part of your hypervisor selection process, you must consider a number
of important factors to help increase your security posture. Specifically,
you must become familiar with these areas:
Team expertise
Product or project maturity
Common criteria
Certifications and attestations
Hardware concerns
Hypervisor vs. baremetal
Additional security features
Additionally, the following security-related criteria are highly encouraged
to be evaluated when selecting a hypervisor for OpenStack deployments:
Has the hypervisor undergone Common Criteria certification? If so, to
what levels?
Is the underlying cryptography certified by a third-party?
Team expertise
The most important aspect in hypervisor selection is likely the expertise of your staff in managing and maintaining a particular hypervisor platform. The more familiar your team is with a given product, its configuration, and its eccentricities, the fewer the configuration mistakes. Additionally, having staff expertise spread across an organization on a given hypervisor increases availability of your systems, allows segregation of duties, and mitigates problems in the event that a team member is unavailable.
Common criteria
Common Criteria is an internationally standardized software evaluation process, used by governments and commercial companies to validate that software technologies perform as advertised. In the government sector, NSTISSP No. 11 mandates that U.S. Government agencies only procure software which has been Common Criteria certified, a policy which has been in place since July 2002. It should be specifically noted that OpenStack has not undergone Common Criteria certification, however many of the available hypervisors have.
In addition to validating a technology's capabilities, the Common Criteria process evaluates how technologies are developed.
How is source code management performed?
How are users granted access to build systems?
Audit
Discretionary Access Control (DAC): Access to objects is restricted based on Access Control Lists (ACLs) that include the standard UNIX permissions for user, group and others. Access control mechanisms also protect IPC objects from unauthorized access. The system includes the ext4 file system, which supports POSIX ACLs. This allows defining access rights to files within this type of file system down to the granularity of a single user.

Mandatory Access Control (MAC): MAC restricts access to objects based on labels assigned to subjects and objects. Sensitivity labels are automatically attached to processes and objects. The access control policy enforced using these labels is derived from the Bell-LaPadula access control model. SELinux categories are attached to virtual machines and their resources. The access control policy enforced using these categories grants virtual machines access to resources if the category of the virtual machine is identical to the category of the accessed resource. The TOE implements non-hierarchical categories to control access to virtual machines.
Object Reuse
Security Management: The management of the security critical parameters of the system is performed by administrative users. A set of commands that require root privileges (or specific roles when RBAC is used) are used for system management. Security parameters are stored in specific files that are protected by the access control mechanisms of the system against unauthorized access by users that are not administrative users.
Secure Communication
TSF Protection: While in operation, the kernel software and data are protected by the hardware memory protection mechanisms. The memory and process management components of the kernel ensure a user process cannot access kernel storage or storage belonging to other processes.

Non-kernel TSF software and data are protected by DAC and process isolation mechanisms. In the evaluated configuration, the reserved user ID root owns the directories and files that define the TSF configuration. In general, files and directories containing internal TSF data, such as configuration files and batch job queues, are also protected from reading by DAC permissions.

The system and the hardware and firmware components are required to be physically protected from unauthorized access. The system kernel mediates all access to the hardware mechanisms themselves, other than program visible CPU instruction functions.

In addition, mechanisms for protection against stack overflow attacks are provided.
Cryptography standards
Several cryptography algorithms are available within OpenStack for identification and authorization, data transfer, and protection of data at rest. When selecting a hypervisor, the following are recommended algorithms and implementation standards that the virtualization layer should support:
Algorithm: AES. Key length: 128, 192, or 256 bits. Usage: encryption / decryption.
Algorithm: TDES. Key length: 168 bits. Usage: encryption / decryption (RFC 4253).
Algorithm: RSA. Usage: identification and authentication, protected data transfer.
Algorithm: DSA. Key length: L=1024, N=160 bits. Usage: identification and authentication, protected data transfer.
Algorithm: Serpent. Key length: 128, 192, or 256 bits. Usage: encryption / decryption, protection of data at rest (http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf).
Algorithm: Twofish. Key length: 128, 192, or 256 bits. Usage: encryption / decryption, protection of data at rest (http://www.schneier.com/paper-twofish-paper.html).
Algorithm: SHA-1. Usage: message digest.
Algorithm: SHA-2 (224, 256, 384, or 512 bits). Usage: message digest.
FIPS 140-2
In the United States, the National Institute of Standards and Technology (NIST) certifies cryptographic algorithms through a process known as the Cryptographic Module Validation Program. NIST certifies algorithms for conformance against Federal Information Processing Standard 140-2 (FIPS 140-2), which ensures:
Products validated as conforming to FIPS 140-2 are accepted by the Federal agencies of both countries [United States
and Canada] for the protection of sensitive information
(United States) or Designated Information (Canada). The
goal of the CMVP is to promote the use of validated cryptographic modules and provide Federal agencies with a security metric to use in procuring equipment containing validated cryptographic modules.
Hardware concerns
Further, when you evaluate a hypervisor platform, consider the supportability of the hardware on which the hypervisor will run. Additionally, consider the additional features available in the hardware and how those features are supported by the hypervisor you chose as part of the OpenStack
deployment. To that end, hypervisors each have their own hardware compatibility lists (HCLs). When selecting compatible hardware it is important
to know in advance which hardware-based virtualization technologies are
important from a security perspective.
Description               Technology      Explanation
I/O MMU                   VT-d / AMD-Vi   Manages direct memory access (DMA) from devices; required for secure PCI passthrough
Network virtualization    VT-c            Improves performance of network I/O on hypervisors
When reusing a node, you must provide assurances that the hardware has not been tampered with or otherwise compromised.
Note
While OpenStack has a baremetal project, a discussion of the
particular security implications of running baremetal is beyond
the scope of this book.
Finally, due to the time constraints around a book sprint, the team chose
to use KVM as the hypervisor in our example implementations and architectures.
Note
There is an OpenStack Security Note pertaining to the use of
LXC in Compute.
When a duplicate memory page is found, the Xen Virtual Machine Monitor (VMM) discards one of the duplicates and records the reference of the second one.
[Table: support matrix for the security features sVirt, TXT, AppArmor, cgroups, and MAC Policy across the KVM, Xen, ESXi, and Hyper-V hypervisors.]
Bibliography
Irazoqui Apecechea, Gorka, Mehmet Sinan Inci, Thomas Eisenbarth, Berk Sunar. Fine Grain Cross-VM Attacks on Xen and VMware are possible!. 2014. https://eprint.iacr.org/2014/248.pdf
PCI passthrough allows an instance to have direct access to hardware on the node, such as GPUs offering the compute unified device architecture (CUDA) for high performance computation. This feature carries two types of security risks: direct memory access and hardware infection.
Direct memory access (DMA) is a feature that permits certain hardware devices to access arbitrary physical memory addresses in the host computer.
Often video cards have this capability. However, an instance should not be
given arbitrary physical memory access because this would give it full view
of both the host system and other instances running on the same node.
Hardware vendors use an input/output memory management unit (IOMMU) to manage DMA access in these situations. Therefore, cloud architects
should ensure that the hypervisor is configured to utilize this hardware feature.
KVM: How to assign devices with VT-d in KVM
Xen: VTd Howto
Note
The IOMMU feature is marketed as VT-d by Intel and AMD-Vi
by AMD.
A hardware infection occurs when an instance makes a malicious modification to the firmware or some other part of a device. As this device is used
by other instances or the host OS, the malicious code can spread into those
systems. The end result is that one instance can run code outside of its security domain. This is a significant breach as it is harder to reset the state of
physical hardware than virtual hardware, and can lead to additional exposure such as access to the management network.
Solutions to the hardware infection problem are domain specific. The strategy is to identify how an instance can modify hardware state, then determine how to reset any modifications when the instance is done using the hardware. For example, one option could be to re-flash the firmware after use. Clearly there is a need to balance hardware longevity with security, as some firmware will fail after a large number of writes. TPM technology, described in the section called Secure bootstrapping [32], provides
a solution for detecting unauthorized firmware changes. Regardless of the
strategy selected, it is important to understand the risks associated with
this kind of hardware sharing so that they can be properly mitigated for a
given deployment scenario.
Additionally, due to the risk and complexities associated with PCI passthrough, it should be disabled by default. If enabled for a specific need, you will need to have appropriate processes in place to ensure the hardware is clean before re-issue.
Unneeded features can be removed at build time using the QEMU configure script. For a complete list of up-to-date options, simply run ./configure --help from within the QEMU source directory. Decide what is needed for your deployment, and disable the remaining options.
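For instance, a build that does not need graphical consoles might drop them at configure time; the flags below are an illustrative subset:

$ ./configure --disable-vnc --disable-sdl --disable-curses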
Compiler hardening
The next step is to harden QEMU using compiler hardening options. Modern compilers provide a variety of compile time options to improve the security of the resulting binaries. These features, which we will describe in
more detail below, include relocation read-only (RELRO), stack canaries,
never execute (NX), position independent executable (PIE), and address
space layout randomization (ASLR).
Many modern Linux distributions already build QEMU with compiler hardening enabled, so you may want to verify your existing executable before
proceeding with the information below. One tool that can assist you with
this verification is called checksec.sh.
RELocation Read-Only (RELRO)
Hardens the data sections of an executable. Both full and partial RELRO
modes are supported by gcc. For QEMU full RELRO is your best choice. This
will make the global offset table readonly and place various internal data
sections before the program data section in the resulting executable.
Stack canaries
Places values on the stack and verifies their presence to help prevent buffer overflow attacks.
Never eXecute (NX)
Also known as Data Execution Prevention (DEP), ensures that data sections of the executable can not be executed.
The following compiler options are recommended for GCC when compiling QEMU:
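One commonly used combination of the hardening options described above is sketched below; the exact flags should be validated against your compiler version:

$ CFLAGS="-fstack-protector-all -fPIE -D_FORTIFY_SOURCE=2 -O2" \
  LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now" ./configure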
sVirt isolation is provided regardless of the guest operating system running inside the virtual machine; Linux or Windows VMs can be used. Additionally, many Linux distributions provide SELinux within the operating system, allowing the virtual machine to protect internal virtual resources from threats.
The svirt_image_t label uniquely identifies image files on disk, allowing for the SELinux policy to restrict access. When a KVM-based Compute
image is powered on, sVirt appends a random numerical identifier to the
image. sVirt is capable of assigning numeric identifiers to a maximum of
524,288 virtual machines per hypervisor node, however most OpenStack
deployments are highly unlikely to encounter this limitation.
This example shows the sVirt category identifier:
system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2
Booleans
To ease the administrative burden of managing SELinux, many enterprise
Linux platforms utilize SELinux Booleans to quickly change the security posture of sVirt.
sVirt SELinux boolean    Description
virt_use_comm            Allow virt to use serial/parallel communication ports.
virt_use_fusefs          Allow virt to read FUSE mounted files.
virt_use_nfs             Allow virt to manage NFS mounted files.
virt_use_samba           Allow virt to manage CIFS mounted files.
virt_use_sanlock         Allow confined virtual guests to interact with the sanlock.
virt_use_sysfs           Allow virt to manage device configuration (PCI).
virt_use_usb             Allow virt to use USB devices.
virt_use_xserver         Allow virtual machine to interact with the X Window System.
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would ensure that their instances are properly isolated. First we consider hypervisor selection, and then techniques for hardening QEMU and applying mandatory access controls.
Alice also uses the Intel TXT support in Xen to measure the hypervisor launch in the TPM.
Custom criteria
Multiple filters can be applied at once, such as the ServerGroupAffinity filter to ensure an instance is created on a member of a specific set of hosts, and the ServerGroupAntiAffinity filter to ensure that same instance is not created on another specific set of hosts. These filters should be analyzed carefully to ensure they do not conflict with each other and result in rules that prevent the creation of instances.
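A hedged sketch of enabling these filters in nova.conf and using them from the command line; the filter list, image, flavor, and group values are illustrative:

[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter

$ nova server-group-create group1 affinity
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID instance1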
Trusted images
In a cloud environment, users work with either pre-installed images or images they upload themselves. In both cases, users should be able to ensure
the image they are utilizing has not been tampered with. This requires a
method of validation, such as a checksum for the known good image as
well as verification of a running instance. While there are current best practices around these actions there are also several gaps in the process.
You likely already have a process by which you install and harden operating systems. Thus, the following items will provide additional guidance on how to ensure your images are transferred securely into OpenStack. There are a variety of options for obtaining images. Each has specific steps that help validate the image's provenance.
The first option is to obtain boot media from a trusted source.

$ mkdir -p /tmp/download_directory
$ cd /tmp/download_directory
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/ubuntu-12.04.2-server-amd64.iso
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg
$ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451
$ gpg --verify SHA256SUMS.gpg SHA256SUMS
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
The second option is to use the OpenStack Virtual Machine Image Guide. In this case, you will want to follow your organization's OS hardening guidelines or those provided by a trusted third party such as the Linux STIGs.
The final option is to use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool worth investigating: disk-image-builder. We have not evaluated this tool from a security perspective.
This example of RHEL 6 CCE-26976-1 will help implement NIST 800-53 Section AC-19(d) in Oz.
<template>
<name>centos64</name>
<os>
<name>RHEL-6</name>
<version>4</version>
<arch>x86_64</arch>
<install type='iso'>
<iso>http://trusted_local_iso_mirror/isos/x86_64/RHEL-6.4-x86_64-bin-DVD1.iso</iso>
</install>
<rootpw>CHANGE THIS TO YOUR ROOT PASSWORD</rootpw>
</os>
<description>RHEL 6.4 x86_64</description>
<repositories>
<repository name='epel-6'>
<url>http://download.fedoraproject.org/pub/epel/6/$basearch</url>
<signed>no</signed>
</repository>
</repositories>
<packages>
<package name='epel-release'/>
<package name='cloud-utils'/>
<package name='cloud-init'/>
</packages>
<commands>
<command name='update'>
yum update
yum clean all
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
chkconfig --level 0123456 autofs off
service autofs stop
</command>
</commands>
</template>
It is recommended to avoid the manual image building process, as it is complex and prone to error. Additionally, using an automated system like Oz for image building or a configuration management utility like Chef or Puppet for post-boot image hardening gives you the ability to produce a consistent image as well as track compliance of your base image to its respective hardening guidelines over time.
If subscribing to a public cloud service, you should check with the cloud
provider for an outline of the process used to produce their default images. If the provider allows you to upload your own images, you will want
to ensure that you are able to verify that your image was not modified before using it to create an instance. To do this, refer to the following section
on Image Provenance.
Compute nodes cache expanded images, and subsequent instances may be launched from the same expanded image. Since this expanded image is not re-verified before launching, it could be tampered with and the user would not have any way of knowing, beyond a manual inspection of the files in the resulting image.
We hope that future versions of Compute and/or the Image service will offer support for validating the image hash before each instance launch. An alternative option that would be even more powerful would be to allow users to sign an image and then have the signature validated when the instance is launched.
Instance migrations
OpenStack and the underlying virtualization layers provide for the live migration of instances between OpenStack nodes, allowing you to seamlessly perform rolling upgrades of your OpenStack compute nodes without instance downtime. However, live migrations also carry significant risk. To understand the risks involved, the following are the high-level steps performed during a live migration:
1. Start instance on destination host
2. Transfer memory
3. Stop the guest & sync disks
4. Transfer state
5. Start the guest
Migration network
As a general practice, live migration traffic should be restricted to the management security domain, see the section called Management [14]. With
live migration traffic, due to its plain text nature and the fact that you are
transferring the contents of disk and memory of a running instance, it is
recommended you further separate live migration traffic onto a dedicated
network. Isolating the traffic to a dedicated network can reduce the risk of
exposure.
Patches should be applied as soon as possible to ensure both stability and resolution of the issue behind the patch.
Case studies
Earlier in the section called Introduction to case studies [21] we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would architect their clouds with respect to instance entropy, scheduling instances, trusted images, and instance migrations.
Host-based intrusion detection tools provide logs that show when and how an attack or intrusion took place. Deploying these tools on the cloud machines provides value and protection. Cloud users, those running instances on the cloud, may also want to run such tools on their instances.
Bibliography
Siwczak, Piotr. Some Practical Considerations for Monitoring in the OpenStack Cloud. 2012. http://www.mirantis.com/blog/openstack-monitoring
blog.sflow.com, sflow: Host sFlow distributed agent. 2012. http://blog.sflow.com/2012/01/host-sflow-distributed-agent.html
blog.sflow.com, sflow: LAN and WAN. 2009. http://blog.sflow.com/2009/09/lan-and-wan.html
blog.sflow.com, sflow: Rapidly detecting large flows sFlow vs. NetFlow/IPFIX. 2013. http://blog.sflow.com/2013/01/rapidly-detecting-large-flows-sflow-vs.html
Case studies
Earlier, in the section called Introduction to case studies [21], we introduced the Alice and Bob case studies, where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address monitoring and logging in the public versus the private cloud. In both cases, time synchronization and a centralized store of logs are extremely important for performing proper assessments and troubleshooting anomalies. Just collecting logs is not very useful; a robust monitoring system must be built to generate actionable events.
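For example, forwarding all logs from each node to a central collector can be as simple as one rsyslog rule on every host; the server name below is hypothetical:

    # /etc/rsyslog.d/99-central.conf
    # "@@" forwards over TCP; a single "@" would use UDP.
    *.* @@logs.example.com:514

The central store then becomes the input for the monitoring system that turns raw logs into actionable events.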
ing process; therefore, she should continue to define use cases and alerts in order to have a better understanding of network traffic activity and usage over time.
19. Compliance
Compliance overview .......................................................................... 197
Understanding the audit process ......................................................... 201
Compliance activities ........................................................................... 203
Certification and compliance statements ............................................. 206
Privacy ................................................................................................ 211
Case studies ........................................................................................ 211
An OpenStack deployment may require compliance activities for many purposes, such as regulatory and legal requirements, customer needs, privacy considerations, and security best practices. The compliance function is important for the business and its customers. Compliance means adhering to regulations, specifications, standards, and laws. It is also used when describing an organization's status regarding assessments, audits, and certifications. Compliance, when done correctly, unifies and strengthens the other security topics discussed in this guide.
This chapter has several objectives:
• Review common security principles.
• Discuss common control frameworks and certification resources to achieve industry certifications or regulator attestations.
• Act as a reference for auditors when evaluating OpenStack deployments.
• Introduce privacy considerations specific to OpenStack and cloud environments.
Compliance overview
Security principles
Industry standard security principles provide a baseline for compliance certifications and attestations. If these principles are considered and referenced throughout an OpenStack deployment, certification activities may
be simplified.
Layered defenses
Fail securely: In the case of failure, systems should be configured to fail into a closed, secure state. For example, TLS certificate verification should fail closed by severing the network connection if the certificate's common name (CN) does not match the server's DNS name. Software often fails open in this situation, allowing the connection to proceed without a CN match, which is less secure and not recommended; see the sketch after this list.
Least privilege
Compartmentalize
Promote privacy: The amount of information that can be gathered about a system and its users should be minimized.
Logging capability
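The following minimal Python sketch illustrates the fail-closed behavior described under Fail securely: the default ssl context verifies the certificate chain and host name, raising an exception and severing the connection on any mismatch rather than proceeding. The host name is illustrative:

    import socket
    import ssl

    # create_default_context() already enables certificate and host
    # name verification; the two assignments below simply make the
    # fail-closed intent explicit.
    context = ssl.create_default_context()
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    with socket.create_connection(("example.com", 443)) as sock:
        # Raises ssl.CertificateError or ssl.SSLError on mismatch,
        # so an unverified connection is never used.
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["subject"])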
Audit reference
OpenStack is innovative in many ways; however, the process used to audit an OpenStack deployment is fairly common. Auditors evaluate a control by two criteria: whether the control is designed effectively and whether the control is operating effectively. How an auditor evaluates whether a control is designed and operating effectively is discussed in the section called Understanding the audit process [201].
The most common frameworks for auditing and evaluating a cloud deployment include the previously mentioned ISO 27001/2 Information Security standard, ISACA's Control Objectives for Information and Related Technology (COBIT) framework, Committee of Sponsoring Organizations of
the Treadway Commission (COSO), and Information Technology Infrastructure Library (ITIL). It is very common for audits to include areas of focus
from one or more of these frameworks. Fortunately there is a lot of overlap between the frameworks, so an organization that adopts one will be in
a good position come audit time.
provider. The CSA CCM provides a controls framework that maps to many industry-accepted standards and regulations, including ISO 27001/2, ISACA COBIT, PCI, NIST, Jericho Forum, and NERC CIP.
The SCAP Security Guide is another useful reference. This is still an emerging source, but we anticipate that this will grow into a tool with controls
mappings that are more focused on the US federal government certifications and recommendations. For example, the SCAP Security Guide currently has some mappings for security technical implementation guides
(STIGs) and NIST 800-53.
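For example, SCAP Security Guide content can be evaluated with the oscap tool. The profile identifier and content path below are illustrative and vary by distribution and SSG version:

    $ oscap xccdf eval --profile stig-rhel7-server-upstream \
        --results /tmp/results.xml \
        /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml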
These control mappings will help identify common control criteria across
certifications, and provide visibility to both auditors and auditees on problem areas within control sets for particular compliance certifications and attestations.
Internal audit
Once a cloud is deployed, it is time for an internal audit. This is the time to compare the controls you identified above with the design, features, and deployment strategies utilized in your cloud. The goal is to understand how each control is handled and where gaps exist. Document all of the findings for future reference.
When auditing an OpenStack cloud it is important to appreciate the multi-tenant environment inherent in the OpenStack architecture. Some critical areas for concern include data disposal, hypervisor security, node hardening, and authentication mechanisms.
Selecting an auditor can be challenging. Ideally, you are looking for someone with experience in cloud compliance audits. OpenStack experience is
another big plus. Often it is best to consult with people who have been
through this process for referrals. Cost can vary greatly depending on the
scope of the engagement and the audit firm considered.
External audit
This is the formal audit process. Auditors will test security controls in scope for a specific certification, and demand evidence that these controls were also in place for the audit window (for example, SOC 2 audits generally evaluate security controls over a 6-12 month period). Any control failures are logged and will be documented in the external auditor's final report. Depending on the type of OpenStack deployment, these reports may be viewed by customers, so it is important to avoid control failures. This is why audit preparation is so important.
Compliance maintenance
The process does not end with a single external audit. Most certifications require continual compliance activities, which means repeating the audit process periodically. We recommend integrating automated compliance verification tools into the cloud to ensure that it is compliant at all times. This should be done in addition to other security monitoring tools. Remember that the goal is both security and compliance; failing on either of these fronts will significantly complicate future audits.
Compliance activities
There are a number of standard activities that will greatly assist with the
compliance process. In this chapter we outline some of the most common
compliance activities. These are not specific to OpenStack, but we
provide references to relevant sections in this book as useful context.
Risk assessment
A risk assessment framework identifies risks within an organization or service, and specifies ownership of these risks, along with implementation and mitigation strategies. Risks apply to all areas of the service, from technical controls to environmental disaster scenarios and human elements, for example a malicious insider (or rogue employee). Risks can be rated using a variety of mechanisms, for example likelihood versus impact, as illustrated below. An OpenStack deployment risk assessment can include the control gaps that are described in this book.
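As a simple illustration of likelihood-versus-impact rating, risks are often scored as the product of the two dimensions. The scales and thresholds in this Python sketch are illustrative, not prescriptive:

    # Illustrative likelihood x impact scoring on 1-5 scales.
    def risk_score(likelihood, impact):
        return likelihood * impact

    def risk_band(score):
        if score >= 15:
            return "high"    # mitigate immediately
        if score >= 8:
            return "medium"  # mitigate on a defined schedule
        return "low"         # accept and monitor

    # A malicious insider: moderately likely, severe impact.
    print(risk_band(risk_score(3, 5)))  # -> high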
Security training
Annual, role-specific security training is a mandatory requirement for almost all compliance certifications and attestations. To optimize the effectiveness of security training, a common method is to tailor it to specific audiences, for example developers, operational personnel, and non-technical employees. Additional cloud security or OpenStack security training based on this hardening guide would be ideal.
Security reviews
As OpenStack is a popular open source project, much of the codebase and
architecture has been scrutinized by individual contributors, organizations
Vulnerability management
Security updates are critical to any IaaS deployment, whether private or public. Vulnerable systems expand attack surfaces and are obvious targets for attackers. Common scanning technologies and vulnerability notification services can help mitigate this threat. It is important that scans are authenticated and that mitigation strategies extend beyond simple perimeter hardening. Multi-tenant architectures such as OpenStack are particularly exposed to hypervisor vulnerabilities, making the hypervisor a critical focus of vulnerability management. See the section on instance isolation for additional details.
Data classification
Data classification defines a method for classifying and handling information, often to protect customer information from accidental or deliberate theft, loss, or inappropriate disclosure. Most commonly this involves classifying information as sensitive or non-sensitive, or as personally identifiable information (PII). Depending on the context of the deployment, various other classification criteria may be used (government, health care, and so on). The underlying principle is that data classifications are clearly defined and in use. The most common protective mechanisms include industry standard encryption technologies; a brief example follows. See the data security section for additional details.
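As one sketch of applying industry standard encryption to data classified as sensitive, the Python cryptography library's Fernet interface (AES-based authenticated encryption) can protect a PII field before it is stored. Key management is deliberately out of scope here and would need to come from a proper key management system:

    from cryptography.fernet import Fernet

    # In production the key must come from a key management system,
    # never be generated ad hoc next to the data it protects.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"jane.doe@example.com")  # ciphertext to store
    assert f.decrypt(token) == b"jane.doe@example.com"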
Exception process
An exception process is an important component of an information security management system (ISMS). When certain actions are not compliant with security policies that an organization has defined, they must be logged. Appropriate justification, description, and mitigation details need to be included, and signed off by appropriate authorities. OpenStack default configurations may vary in meeting various compliance criteria; areas that fail to meet compliance requirements
Commercial standards
For commercial deployments of OpenStack, it is recommended that SOC 1/2 combined with ISO 27001/2 be considered as a starting point for OpenStack certification activities. The required security activities mandated by these certifications facilitate a foundation of security best practices and common control criteria that can assist in achieving more stringent compliance activities, including government attestations and certifications.
After completing these initial certifications, the remaining certifications are more deployment specific. For example, clouds processing credit card transactions will need PCI-DSS, clouds storing health care information require HIPAA, and clouds within the federal government may require FedRAMP/FISMA and ITAR certifications.
SOC 2
Service Organization Controls (SOC) 2 is a self attestation of controls that affect the security, availability, and processing integrity of the systems a service organization uses to process users' data, and the confidentiality and privacy of information processed by these systems. Examples of users are those responsible for governance of the service organization; customers of the service organization; regulators; business partners; suppliers; and others who have an understanding of the service organization and its controls.
There are two types of SOC 2 reports:
• Type 1: reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date.
• Type 2: reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period.
For more details see the AICPA Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality
or Privacy.
SOC 3
Service Organization Controls (SOC) 3 is a trust services report for service
organizations. These reports are designed to meet the needs of users who
want assurance on the controls at a service organization related to security, availability, processing integrity, confidentiality, or privacy but do not
have the need for or the knowledge necessary to make effective use of a
SOC 2 Report. These reports are prepared using the AICPA/Canadian Institute of Chartered Accountants (CICA) Trust Services Principles, Criteria, and
ISO 27001/2
The ISO/IEC 27001/2 standards replace BS7799-2, and are specifications for an Information Security Management System (ISMS). An ISMS is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets. These risks are based upon the confidentiality, integrity, and availability (CIA) of user information. The CIA security triad has been used as a foundation for much of the content in this book.
For more details see ISO 27001.
HIPAA / HITECH
The Health Insurance Portability and Accountability Act (HIPAA) is a United States congressional act that governs the collection, storage, use, and destruction of patient health records. The act states that Protected Health Information (PHI) must be rendered "unusable, unreadable, or indecipherable" to unauthorized persons and that encryption for data 'at-rest' and 'in-flight' should be addressed.
HIPAA is not a certification, but rather a guide for protecting health care data. Similar to the PCI-DSS, the most important issue with both PCI and HIPAA is that a breach of credit card information or health data does not occur. In the instance of a breach, the cloud provider will be scrutinized for compliance with PCI and HIPAA controls. If proven compliant, the provider can be expected to immediately implement remedial controls, breach notification responsibilities, and significant expenditure on additional compliance activities. If not compliant, the cloud provider can expect on-site audit teams, fines, potential loss of merchant ID (PCI), and massive reputation impact.
Users or organizations that possess PHI must support HIPAA requirements and are HIPAA covered entities. If an entity intends to use a service, or in this case, an OpenStack cloud that might use, store, or have access to that PHI, then a Business Associate Agreement (BAA) must be signed. The BAA is a contract between the HIPAA covered entity and the OpenStack service
provider that requires the provider to handle that PHI in accordance with HIPAA requirements. If the service provider does not handle the PHI appropriately, for example through suitable security controls and hardening, then it is subject to HIPAA fines and penalties.
OpenStack architects interpret and respond to HIPAA statements, with data encryption remaining a core practice. Currently this would require any protected health information contained within an OpenStack deployment to be encrypted with industry standard encryption algorithms. Potential future OpenStack projects, such as object encryption, would facilitate compliance with the act.
For more details see the Health Insurance Portability And Accountability
Act.
PCI-DSS
The Payment Card Industry Data Security Standard (PCI DSS) is defined by the Payment Card Industry Standards Council, and was created to increase controls around cardholder data to reduce credit card fraud. Annual compliance validation is assessed by an external Qualified Security Assessor (QSA) who creates a Report on Compliance (ROC), or by a Self-Assessment Questionnaire (SAQ), dependent on the volume of cardholder transactions.
OpenStack deployments that store, process, or transmit payment card details are in scope for the PCI-DSS. All OpenStack components that are not properly segmented from systems or networks that handle payment data fall under the guidelines of the PCI-DSS. Segmentation in the context of PCI-DSS is not satisfied by multi-tenancy alone; it requires physical separation (host/network).
For more details see PCI security standards.
Government standards
FedRAMP
"The Federal Risk and Authorization Management Program (FedRAMP) is a
government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services". NIST 800-53 is the basis for both FISMA and FedRAMP, which mandates security controls specifically selected to provide protection in cloud environments. FedRAMP can be extremely intensive from specifici
ITAR
The International Traffic in Arms Regulations (ITAR) is a set of United
States government regulations that control the export and import of defense-related articles and services on the United States Munitions List
(USML) and related technical data. ITAR is often approached by cloud
providers as an "operational alignment" rather than a formal certification.
This typically involves implementing a segregated cloud environment following practices based on the NIST 800-53 framework, as per FISMA requirements, complemented with additional controls restricting access to
"U.S. Persons" only and background screening.
For more details see https://www.pmddtc.state.gov/regulations_laws/
itar.html.
FISMA
The Federal Information Security Management Act (FISMA) requires that government agencies create a comprehensive plan to implement numerous government security standards, and was enacted within the E-Government Act of 2002. FISMA outlines a process that, utilizing multiple NIST publications, prepares an information system to store and process government data.
This process is broken apart into three primary categories:
• System categorization: the information system receives a security category as defined in Federal Information Processing Standards Publication 199 (FIPS 199). These categories reflect the potential impact of system compromise.
• Control selection: based upon the system security category as defined in FIPS 199, an organization utilizes FIPS 200 to identify specific security control requirements for the information system. For example, if a system is categorized as "moderate", a requirement may be introduced to mandate "secure passwords".
• Control tailoring: once system security controls are identified, an OpenStack architect utilizes NIST 800-53 to extract a tailored control selection, for example, specification of what constitutes a "secure password".
Privacy
Privacy is an increasingly important element of a compliance program.
Businesses are being held to a higher standard by their customers, who
have increased interest in understanding how their data is treated from a
privacy perspective.
An OpenStack deployment will likely need to demonstrate compliance with an organization's privacy policy, with the U.S.-E.U. Safe Harbor framework, with the ISO/IEC 29100:2011 privacy framework, or with other privacy-specific guidelines. In the U.S., the AICPA has defined 10 privacy areas of focus; OpenStack deployments within a commercial environment may desire to attest to some or all of these principles.
To aid OpenStack architects in the protection of personal data, it is recommended that OpenStack architects review the NIST publication 800-122, titled "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)." This guide steps through the process of protecting:
"any information about an individual maintained by an
agency, including (1) any information that can be used to
distinguish or trace an individual's identity, such as name,
social security number, date and place of birth, mother's
maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as
medical, educational, financial, and employment information"
Comprehensive privacy management requires significant preparation, thought, and investment. Additional complications are introduced when building global OpenStack clouds, for example navigating the differences between U.S. and more restrictive E.U. privacy laws. In addition, extra care needs to be taken when dealing with sensitive PII that may include information such as credit card numbers or medical records. This sensitive data is subject not only to privacy laws but also to industry and governmental regulations. By deferring to established best practices, including those published by governments, a holistic privacy management policy can be created and practiced for OpenStack deployments.
Case studies
Earlier, in the section called Introduction to case studies [21], we introduced the Alice and Bob case studies, where Alice is deploying a private
government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address common compliance requirements. This chapter refers to a wide variety of compliance certifications and standards. Alice will address compliance in a private cloud, while Bob will be focused on compliance for a public cloud.
Appendix A. Community support
Table of Contents
Documentation ................................................................................... 215
ask.openstack.org ............................................................................... 216
OpenStack mailing lists ........................................................................ 217
The OpenStack wiki ............................................................................ 217
The Launchpad Bugs area ................................................................... 217
The OpenStack IRC channel ................................................................ 218
Documentation feedback .................................................................... 219
OpenStack distribution packages ......................................................... 219
The following resources are available to help you run and use OpenStack.
The OpenStack community constantly improves and adds to the main features of OpenStack, but if you have any questions, do not hesitate to ask.
Use the following resources to get OpenStack support, and troubleshoot
your installations.
Documentation
For the available OpenStack documentation, see docs.openstack.org.
To provide feedback on documentation, join and use the
<[email protected]> mailing list at OpenStack
Documentation Mailing List, or report a bug.
The following books explain how to install an OpenStack cloud and its associated components:
• Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12
• Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 21
• Installation Guide for Ubuntu 14.04 (LTS)
The following books explain how to configure and run an OpenStack
cloud:
ask.openstack.org
During the set up or testing of OpenStack, you might have questions
about how a specific task is completed or be in a situation where a feature
does not work correctly. Use the ask.openstack.org site to ask questions
and get answers. When you visit the http://ask.openstack.org site, scan
the recently asked questions to see whether your question has already
been answered. If not, ask a new question. Be sure to give a clear, concise
summary in the title and provide as much detail as possible in the description. Paste in your command output or stack traces, links to screen shots,
and any other information which might be useful.
The OpenStack IRC channel
If you are new to the channel and want to share code or command output, the generally accepted method is to use a Paste Bin. The OpenStack project has one at http://paste.openstack.org. Just paste your longer amounts of text or logs in the web form and you get a URL that you can paste into the channel. The OpenStack IRC channel is #openstack on irc.freenode.net. You can find a list of all OpenStack IRC channels at https://wiki.openstack.org/wiki/IRC.
Documentation feedback
To provide feedback on documentation, join and use the
<[email protected]> mailing list at OpenStack
Documentation Mailing List, or report a bug.
Glossary
access control list
A list of permissions attached to an object. An ACL specifies which users or system processes have access to objects. It also defines which operations can be performed on specified objects. Each entry in a typical ACL specifies a subject and an
operation. For instance, the ACL entry (Alice, delete) for a file gives Alice
permission to delete the file.
ACL
See access control list.
Advanced Message Queuing Protocol (AMQP)
The open standard messaging protocol used by OpenStack components for intra-service communications, provided by RabbitMQ, Qpid, or ZeroMQ.
API
Application programming interface.
Bell-LaPadula model
A security model that focuses on data confidentiality and controlled access to classified information. This model divides entities into subjects and objects. The clearance of a subject is compared to the classification of the object to determine if the subject is authorized for the specific access mode. The clearance or classification scheme is expressed in terms of a lattice.
Block Storage
The OpenStack core project that enables management of volumes, volume snapshots, and volume types. The project name of Block Storage is cinder.
BMC
Baseboard Management Controller. The intelligence in the IPMI architecture,
which is a specialized micro-controller that is embedded on the motherboard of
a computer and acts as a server. Manages the interface between system management software and platform hardware.
CA
Certificate Authority or Certification Authority. In cryptography, an entity that issues digital certificates. The digital certificate certifies the ownership of a public
key by the named subject of the certificate. This enables others (relying parties)
to rely upon signatures or assertions made by the private key that corresponds
to the certified public key. In this model of trust relationships, a CA is a trusted
third party for both the subject (owner) of the certificate and the party relying
upon the certificate. CAs are characteristic of many public key infrastructure (PKI)
schemes.
Chef
An operating system configuration management tool supporting OpenStack deployments.
cinder
A core OpenStack project that provides block storage services for VMs.
CMDB
Configuration Management Database.
Compute
The OpenStack core project that provides compute services. The project name of
Compute service is nova.
DAC
Discretionary access control. Governs the ability of subjects to access objects,
while enabling users to make policy decisions and assign security attributes. The
traditional UNIX system of users, groups, and read-write-execute permissions is
an example of DAC.
dashboard
The web-based management interface for OpenStack. An alternative name for
horizon.
Data processing service
OpenStack project that provides a scalable data-processing stack and associated
management interfaces. The code name for the project is sahara.
DHCP
Dynamic Host Configuration Protocol. A network protocol that configures devices connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses, from a DHCP server.
Django
A web framework used extensively in horizon.
DNS
Domain Name Server. A hierarchical and distributed naming system for computers, services, and resources connected to the Internet or a private network. Associates a human-friendly names to IP addresses.
federated identity
A method to establish trusts between identity providers and the OpenStack
cloud.
glance
A core project that provides the OpenStack Image service.
horizon
OpenStack project that provides a dashboard, which is a web interface.
HTTPS
Hypertext Transfer Protocol Secure (HTTPS) is a communications protocol for secure communication over a computer network, with especially wide deployment
on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/
TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP
communications.
identity provider
A directory service, which allows users to log in with a user name and password. It is a typical source of authentication tokens.
Identity Service
The OpenStack core project that provides a central directory of users mapped
to the OpenStack services they can access. It also registers endpoints for OpenStack services. It acts as a common authentication system. The project name of
the Identity Service is keystone.
Image service
An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image service is glance.
keystone
The project that provides OpenStack Identity services.
Networking
A core OpenStack project that provides a network connectivity abstraction layer
to OpenStack Compute. The project name of Networking is neutron.
neutron
A core OpenStack project that provides a network connectivity abstraction layer
to OpenStack Compute.
nova
OpenStack project that provides compute services.
Object Storage
The OpenStack core project that provides eventually consistent and redundant
storage and retrieval of fixed digital content. The project name of OpenStack Object Storage is swift.
OpenStack
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a
dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0.
Puppet
An operating system configuration-management tool supported by OpenStack.
Qpid
Message queue software supported by OpenStack; an alternative to RabbitMQ.
RabbitMQ
The default message queue software used by OpenStack.
sahara
OpenStack project that provides a scalable data-processing stack and associated
management interfaces.
SAML assertion
Contains information about a user as provided by the identity provider. It is an indication that a user has been authenticated.
scoped token
An Identity Service API access token that is associated with a specific tenant.
service provider
A system that provides services to other system entities. In case of federated
identity, OpenStack Identity is the service provider.
SPICE
The Simple Protocol for Independent Computing Environments (SPICE) provides
remote desktop access to guest virtual machines. It is an alternative to VNC.
SPICE is supported by OpenStack.
swift
An OpenStack core project that provides object storage services.
unscoped token
Alternative term for an Identity Service default token.