Building the Infrastructure for Cloud Security
Raghuram Yeluri and Enrique Castro-Leon
Apress, 2014
ISBN: 9781430261452, 1430261455
Contents at a Glance

About the Authors
About the Technical Reviewers
Acknowledgments
Foreword
Introduction

Chapter 1: Cloud Computing Basics
Chapter 2: The Trusted Cloud: Addressing Security and Compliance
Chapter 3: Platform Boot Integrity: Foundation for Trusted Compute Pools
Chapter 4: Attestation: Proving Trustability
Chapter 5: Boundary Control in the Cloud: Geo-Tagging and Asset Tagging
Chapter 6: Network Security in the Cloud
Chapter 7: Identity Management and Control for Clouds
Chapter 8: Trusted Virtual Machines: Ensuring the Integrity of Virtual Machines in the Cloud
Chapter 9: A Reference Design for Secure Cloud Bursting

Index

Introduction

Security is an ever-present consideration for applications and data in the cloud. It is a
concern for executives trying to come up with criteria for migrating an application, for
marketing organizations trying to position the company in a good light as an enlightened
technology adopter, for application architects attempting to build a safe foundation, and
for operations staff making sure the bad guys don't have a field day. It does not matter
whether an application is a candidate for migration to the cloud or already runs using
cloud-based components. It does not even matter that an application has managed to run
for years in the cloud without a major breach: an unblemished record does not entitle an
organization to claim to be home free in matters of security; its executives are acutely
aware that resting on their laurels is an invitation to disaster; and certainly past
performance is no predictor of future gains.
Irrespective of whom you ask, security is arguably the biggest inhibitor for the
broader adoption of cloud computing. Many organizations will need to apply best
practices security standards that set a much higher bar than that for on-premise systems,
in order to dislodge that incumbent on-premise alternative. The migration or adoption of
cloud services then can provide an advantage, in that firms can design, from the ground
up, their new cloud-based infrastructures with security “baked-in;” this is in contrast to
the piecemeal and “after the fact” or “bolted-on” nature of security seen in most data
centers today. But even a baked-in approach has its nuances, as we shall see in Chapter 1.
Cloud service providers are hard at work building a secure infrastructure as the foundation
for enabling multi-tenancy and providing the instrumentation, visibility, and control that
organizations demand. They are beginning to treat security as an integration concern to be
addressed as a service like performance, power consumption, and uptime. This provides
a flexibility and granularity wherein solution architects design in as much security as
their particular situation demands: security for a financial services industry (FSI) or an
enterprise resource planning (ERP) application will be different from security for a bunch
of product brochures, yet they both may use storage services from the same provider,
which demands a high level of integrity, confidentiality, and protection.
Some practices—for instance, using resources in internal private clouds as opposed
to public, third-party hosted clouds—while conferring some tactical advantages do
not address fundamental security issues, such as perimeter walls made of virtual Swiss
cheese where data can pass through anytime. We would like to propose a different
approach: to anchor a security infrastructure in the silicon that runs the volume servers in
almost every data center. However, end users running mobile applications don’t see the
servers. What we’ll do is define a logical chain of trust rooted in hardware, in a manner
not unlike a geometry system built out of a small set of axioms. We use the hardware
to ensure the integrity of the firmware: BIOS code running in the chipset and firmware
taking care of the server’s housekeeping functions. This provides a solid platform on
which to run software: the hypervisor environment and operating systems. Each software
component is “measured” initially and verified against a “known good” with the root
of trust anchored in the hardware trust chain, thereby providing a trusted platform to
launch applications.
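
To make the idea of "measuring" concrete, here is a minimal sketch in Python of what a
measured launch does: each component in the boot chain is hashed, the hash is folded into a
cumulative register in the style of a TPM platform configuration register (PCR), and the
final value is compared against a known-good reference recorded during trusted
provisioning. The component names and reference value are illustrative assumptions; a real
platform performs these steps in hardware and firmware (TPM and Intel TXT), not in
application code.

import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Fold a component's hash into the running measurement (PCR-extend style)."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measured_launch(components, known_good: bytes) -> bool:
    """Measure each boot component in order; launch only if the chain matches."""
    register = b"\x00" * 32                      # register starts in a known state
    for blob in components:                      # e.g., BIOS, firmware, hypervisor images
        register = extend(register, blob)
    return register == known_good

# The known-good value would come from a trusted provisioning step.
boot_chain = [b"BIOS image", b"platform firmware", b"hypervisor image"]
reference = b"\x00" * 32
for blob in boot_chain:
    reference = extend(reference, blob)

print(measured_launch(boot_chain, reference))                                  # True
print(measured_launch([b"BIOS image", b"rootkit", b"hypervisor image"], reference))  # False
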
We assume that readers are already familiar with cloud technology and are
interested in a deeper exploration of security aspects. We’ll cover some cloud technology
principles, primarily with the purpose of establishing a vocabulary from which to build a
discussion of security topics (offered here with no tutorial intent). Our goal is to discuss
the principles of cloud security, the challenges companies face as they move into the
cloud, and the infrastructure requirements to address security requirements. The content
is intended for a technical audience and provides architectural, design, and code samples
as needed to show how to provision and deploy trusted clouds. While documentation
for low-level technology components such as trusted platform modules and the
basics of secure boot is not difficult to find from vendor specifications, the contextual
perspective—a usage-centric approach describing how the different components are
integrated into trusted virtualized platforms—has been missing from the literature. This
book is a first attempt at filling this gap through actual proof of concept implementations
and a few initial commercial implementations. The implementation of secure platforms is
an emerging and fast-evolving issue. This is not a definitive treatment by any measure,
and trying to compile one at this early juncture would be unrealistic. Timeliness is a
more pressing consideration, and the authors hope that this material will stimulate the
curiosity of the reader and encourage the community to replicate the results, leading to
new deployments and, in the process, advancing the state of the art.
There are three key trends impacting security in the enterprise and cloud
data centers:
• The evolution of IT architectures. This is pertinent especially with
the adoption of virtualization and now cloud computing.
Multi-tenancy and consolidation are driving significant
operational efficiencies, enabling multiple lines of business
and tenants to share the infrastructure. This consolidation and
co-tenancy provide a new dimension and attack vector.
How do you ensure the same level of security and control
in an infrastructure that is not owned and operated by
you? Outsourcing, cross-business, and cross-supply chain
collaboration are breaking through the perimeter of traditional
security models. These new models are blurring the distinction
between data “inside” an organization and that which exists
“outside” of those boundaries. The data itself is the new perimeter.

• The sophistication of attacks. No longer are attacks targeted only at
software, and no longer are the hackers intent merely on gaining bragging
rights. Attacks are sophisticated and aimed at gaining
control of assets and at staying hidden. These attacks have
progressively moved closer to the lower layers of the platform:
firmware, BIOS, and the hypervisor hosting the virtual machine
operating environment. Traditionally, controls in these lower
layers are few, allowing malware to hide. With multi-tenancy and
consolidation through virtualization, taking control of a platform
could provide significant leverage and a large attack surface.
How does an organization get out of this quandary and institute
controls to verify the integrity of the infrastructure on which its
mission-critical applications run? How does it prove to its
auditors that the security controls and procedures in effect are
still enforced even when its information systems are hosted at a
cloud provider?
• The growing legal and regulatory burden. Compliance
requirements have increased significantly for IT practitioners and
line-of-business owners. The cost of securing data and the risks
of unsecured personally identifiable data, intellectual property,
or financial data, as well as the implications of noncompliance with
regulations, are very high. Additionally, the number of regulations
and mandates involved is putting additional burdens on IT
organizations.
Clearly, cloud security is a broad area with cross-cutting concerns that involve
technology, products, and solutions spanning mobility, network security, web security,
messaging security, protection of data or content and storage, identity management,
hypervisor and platform security, firewalls, and audit and compliance, among other
concerns. Looking at security from a tools and products perspective is an interesting
approach. However, an IT practitioner in an enterprise or at a cloud service provider
is compelled to look at usages and needs at the infrastructure level, and to provide a set
of cohesive solutions that address business security concerns and requirements. Equally
intriguing is to look at the usages that a private cloud or a public cloud must support so
as to address the following needs:
• For service providers to deliver enterprise-grade solutions. What
does this compliant cloud look like? What are its attributes and
behaviors?
• For developers, service integrators, and operators to deliver
protected applications and workloads from and in the cloud.
Irrespective of the type of cloud service, how does a service
developer protect the static and the dynamic workload contents
and data?
• For service components and users alike to granularly manage,
authenticate, and assign trust for both devices and users.

Intel has been hard at work with its partners, as fellow travelers, in providing
comprehensive solution architectures and a cohesive set of products to not only address
these questions but also deploy these solutions in private clouds and in public clouds at scale.
This book brings together the contributions of various Intel technologists, architects,
engineers, and marketing and solution development managers, as well as a few key
architects from our partners.
The book has roughly four parts:
• Chapters 1 and 2 cover the context of cloud computing and the
idea of security, introducing the concept of trusted clouds. They
discuss the key usage models to enable and instantiate the trusted
infrastructure, which is foundational for those trusted clouds.
Additionally, these chapters cover the use models with solution
architectures and component exposition.
• Chapters 3, 4, and 5 cover use-cases, solution architectures, and
technology components for enabling the trusted infrastructure,
with emphasis on trusted compute, the role of attestation, and
attestation solutions, as well as geo-fencing and boundary control
in the cloud.
• Chapters 6 and 7 provide an interesting view of identity
management and control in the cloud, as well as network security
in the cloud.
• Chapter 8 extends the notion of trust to the virtual machines
and workloads, with reference architecture and components
built on top of the trusted compute pools discussed in earlier
chapters. Then, Chapter 9 provides a comprehensive exposition
of secure cloud bursting reference architecture and a real-world
implementation that brings together all the concepts and usages
discussed in the preceding chapters.
These chapters take us on a rewarding journey. It starts with a set of basic technology
ingredients rooted in hardware, namely the ability to carry out the secure launch of
programs: not just software programs, but also those implemented in firmware in server
platforms, such as the BIOS and the system firmware. We have also added other platform
sensors and devices to the mix, such as TPMs and location sensors. Eventually it will be
possible to integrate information from other security-related telemetry in the platform:
encryption accelerators, secure random number generators for keys, secure containers,
compression accelerators, and other related entities.
With a hardened platform defined, it now becomes possible to extend the scope of
the initial set of security features to cloud environments. We extend the initial capability
for boot integrity and protection to the next goal of data protection during its complete
life cycle: data at rest, in motion, and during execution. Our initial focus is on the server
platform side. In practical terms, we use an approach similar to building a mathematical
system, starting with a small set of assertions or axioms and slowly extending the
scope of the assertions until the scope becomes useful for cloud deployments. On the
compute side we extend the notion of protected boot to hypervisors and operating
systems running on bare metal, followed by the virtual machines running on top of the
hypervisors. Given the intense need in the industry for secure platforms, we hope this need
will motivate application vendors and system integrators to extend this chain of trust all
the way to the application points of consumption.
The next abstraction beyond the trust established by secure boot is to measure the level
of trust for applications running on the platform. This leads to a discussion of attestation
and of the frameworks and processes to accomplish attestation. Beyond that, there are a
number of practical functions needed in working deployments, including geo-location
monitoring and control (geo-fencing), extending trust to workloads, the protected launch
of workloads, and ensuring the run-time integrity of workloads and data.
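
As a rough sketch of what an attestation exchange boils down to, the fragment below shows a
verifier comparing the measurements reported by a host against a whitelist of known-good
values and returning a trust verdict. Production attestation adds TPM-signed quotes, nonces
against replay, and certificate validation, none of which appear here; all names and values
are hypothetical.

from typing import Dict

# Known-good measurements per component, populated from a trusted source (hypothetical values).
WHITELIST: Dict[str, str] = {
    "bios": "sha256-of-known-good-bios",
    "hypervisor": "sha256-of-known-good-hypervisor",
}

def attest(reported: Dict[str, str]) -> bool:
    """Trust the host only if every whitelisted component matches its reported measurement."""
    return all(reported.get(name) == value for name, value in WHITELIST.items())

host_report = {"bios": "sha256-of-known-good-bios",
               "hypervisor": "sha256-of-known-good-hypervisor"}
print("trusted" if attest(host_report) else "untrusted")     # trusted
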
The cloud presents a much more dynamic environment than previous operating
environments, including consolidated virtualized environments. For instance, virtual
machines may get migrated for performance or business reasons, and within the
framework of secure launch, it is imperative to provide security for these virtual machines
and their data while they move and where they land. This leads to the notion of trusted
compute pools.
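
In scheduling terms, a trusted compute pool is simply a placement constraint: a sensitive
virtual machine may be started on, or migrated to, only those hosts whose boot integrity has
been attested and, if a geo-fencing policy applies, whose location tag falls inside an
allowed boundary. The sketch below expresses that filter in Python; the host attributes are
hypothetical and would in practice come from an attestation service and from provisioned
geo and asset tags.

from dataclasses import dataclass
from typing import List, Set

@dataclass
class Host:
    name: str
    attested_trusted: bool      # outcome of platform attestation
    geo_tag: str                # provisioned location tag, e.g., a country code

def eligible_hosts(pool: List[Host], allowed_geos: Set[str]) -> List[Host]:
    """Hosts a sensitive workload may land on: attested trusted AND inside the boundary."""
    return [h for h in pool if h.attested_trusted and h.geo_tag in allowed_geos]

pool = [
    Host("host-01", attested_trusted=True,  geo_tag="US"),
    Host("host-02", attested_trusted=False, geo_tag="US"),   # failed attestation
    Host("host-03", attested_trusted=True,  geo_tag="SG"),   # outside the boundary
]
print([h.name for h in eligible_hosts(pool, {"US", "DE"})])   # ['host-01']
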
Security aspects for networks come next. One aspect left to be developed is the
role of hardened network appliances taking advantage of secure launch to complement
present safe practices. Identity management is an ever-present challenge due to the
distributed nature of the cloud, even more so than in its prior incarnation, grid computing,
because distribution, multi-tenancy, and dynamic behaviors are carried out well beyond
the practices of grid computing.
Along with the conceptual discussions we sprinkle in a number of case studies in
the form of proofs of concept and even a few deployments by forward-thinking service
providers. For architects integrating a broad range of technology components beyond
those associated with the secure launch foundation, these projects provide invaluable
proofs of existence, an opportunity to identify technology and interface gaps, and a way
to provide very precise feedback to standards organizations. This will help accelerate the
technology learning curve for the industry as a whole, enabling a rapid reduction in the
cost and time to deploy specific implementations.
The compute side is only one aspect of the cloud. We'll need to figure out how to extend
this protection to the network and storage capabilities in the cloud. The experience of
building a trust chain starting from a secure boot foundation helps: network and storage
appliances also run on the same components used to build servers. We believe that if
we follow the same rigorous approach used to build a compute trust chain, it should be
possible to harden network and storage devices to the same degree we attained with the
compute subsystem. From this perspective, the long journey ahead begins to look less like
a trailblazing path and more like a well-marked trail.
Some readers will shrewdly note that the IT infrastructure in data centers
encompasses more than servers; it also includes networks and storage equipment. The
security constructs discussed in this book relate mostly to application stacks running
on server equipment, and they are still evolving. It must be noted that network and
storage equipment also runs on computing equipment, and therefore one strategy
for securing network and storage equipment will be precisely to build analogous trust
chains applicable to the equipment. These topics are beyond the scope of this book but
are certainly relevant to industry practitioners and therefore are excellent subjects for
subject-matter experts to document in future papers and books.

The authors acknowledge the enormous amount of work still to be done, but by
the same token, these are enormously exciting areas to explore, with the potential of
delivering equally enormous value to a beleaguered security industry—an industry that
has been rocked by a seemingly endless stream of ever-more sophisticated and brazen
exploits. We invite industry participants in any role, whether executive, architecture,
engineering, system integration, or development, to join us in broadening this path.
Actually, the path to innovation will never end—this is the essence of security. However,
along the way, industry participants will build a much more robust foundation to the
cloud, bringing some well-deserved assurances to customers.

Chapter 1

Cloud Computing Basics

In this chapter we go through some basic concepts with the purpose of providing context
for the discussions in the chapters that follow. Here, we review briefly the concept of the
cloud as defined by the U.S. National Institute of Standards and Technology, and the
familiar terms of IaaS, PaaS, and SaaS under the SPI model. What is not often discussed is
that the rise of cloud computing comes from strong historical motivations and addresses
shortcomings of predecessor technologies such as grid computing, the standard enterprise
three-tier architecture, or even the mainframe architecture of many decades ago.
From a security perspective, the main subjects for this book—perimeter and
endpoint protection—were pivotal concepts in security strategies prior to the rise of
cloud technology. Unfortunately these abstractions were inadequate to prevent recurrent
exploits, such as leaks of customer credit card data, even before cloud technology
became widespread in the industry. We’ll see in the next few pages that, unfortunately
for this approach, along with the agility, scalability, and cost advantages of the cloud,
the distributed nature of these third-party-provided services also introduced new risk
factors. Within this scenario we would like to propose a more integrated approach to
enterprise security, one that starts with server platforms in the data center and builds
up to the hypervisor, operating system, and applications that fall under the notion of trusted
compute pools, covered in the chapters that follow.

Defining the Cloud


We will use the U.S. government's National Institute of Standards and Technology (NIST)
cloud framework for purposes of our discussions in the following chapters. This provides
a convenient, broadly understood frame of reference, without any attempt to treat it
as a definitive definition or to exclude other perspectives. These definitions are stated
somewhat tersely in The NIST Definition of Cloud Computing1 and have been elaborated
by the Cloud Security Alliance.2

1. Peter Mell and Timothy Grance, The NIST Definition of Cloud Computing, NIST Special
Publication 800-145, September 2011.
2. Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Security Alliance,
rev. 2.1 (2009).


The model consists of three main layers (see Figure 1-1), laid out in a top-down
fashion: global essential characteristics that apply to all clouds, the service models by
which cloud services are delivered, and how the services are instantiated in the form of
deployment models. There is a reason for this structure that’s rooted in the historical
evolution of computer and network architecture and in the application development and
deployment models. Unfortunately most discussions of the cloud gloss over this aspect.
We assume readers of this book are in a technology leadership role in their respective
fields, and very likely are influential in the future direction of cloud security. Therefore, an
understanding of the dynamics of technology evolution will be helpful for the readers in
these strategic roles. For this purpose, the section that follows covers the historical context
that led to the creation of the cloud.

Figure 1-1. NIST cloud computing definition
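
For reference, the structure sketched in Figure 1-1 can also be written out as data; the
five essential characteristics, three service models, and four deployment models below are
taken directly from NIST SP 800-145:

NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": ["SaaS", "PaaS", "IaaS"],                        # the SPI model
    "deployment_models": ["public", "private", "community", "hybrid"],
}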

The Cloud’s Essential Characteristics


The main motivation behind the pervasive adoption of cloud use today is economic.
Cloud technology allows taking a very expensive asset, such as a $200 million data center,
and delivering its capabilities to individual users for a few dollars per month, or even
for free, in some business models. This feat is achieved through resource pooling, which
is essentially treating an asset like a server as a fungible resource; a resource-intensive
application might take a whole server, or even a cluster of servers, whereas the needs of
users with lighter demands can be packed as hundreds or even thousands to a server.
This dynamic range in the mapping of applications to servers has been achieved
through virtualization technology. Every intervening technology and the organizations
needed to run them represent overhead. However, the gains in efficiency are so large
that this inherent overhead is rarely in question. With applications running on bare-
metal operating systems, it is not unusual to see load factors in the single digits. Cloud
applications running on virtualized environments, however, typically run utilizations up
to 60 to 80 percent, increasing the application yield of a server by several-fold.
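
That "several-fold" figure is easy to put numbers on. Taking the utilization figures in the
text at face value (single digits on bare metal versus 60 to 80 percent under
virtualization), the back-of-the-envelope calculation below shows roughly how much more
useful work a virtualized server yields; the specific percentages chosen are illustrative
assumptions.

bare_metal_utilization = 0.08     # a typical single-digit load factor on bare metal
virtualized_utilization = 0.70    # 60 to 80 percent is common on a consolidated host

# Bare-metal servers' worth of useful work absorbed by one virtualized server.
consolidation_ratio = virtualized_utilization / bare_metal_utilization
print(f"roughly {consolidation_ratio:.0f}x the application yield per physical server")   # ~9x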


Cloud applications are inherently distributed, and hence they are necessarily
delivered over a network. The largest applications may involve millions of users, and
the conveyance method is usually the Internet. An example is media delivery through
Netflix, using infrastructure from Amazon Web Services. Similarly, cloud applications are
expected to have automated interfaces for setup and administration. This usually means
they are accessible on demand through a self-service interface. This is usually the case, for
instance, with email accounts through Google Gmail or Microsoft Outlook.com.
With the self-service model, it is imperative to establish methods for measuring
service. This measuring includes guarantees of service provider performance,
measurement of services delivered for billing purposes, and very important from the
perspective of our discussion, measurement of security along multiple vectors. The
management information exchanged between a service provider and consumers is
defined as service metadata. This information may be facilitated by auxiliary services or
metaservices.
The service provider needs to maintain a service pool large enough to address
the needs of the largest customer during peak demand. The expectation is that, with
a large customer base, most local peaks and valleys will cancel out. In order to get the
same quality of service (QoS), an IT organization would need to size the equipment for
expected peak demand, leading to inefficient use of capital. Under some circumstances,
large providers can smooth out even regional peaks and valleys by coordinating their
geographically dispersed data centers, a luxury that mid-size businesses might not be able
to afford.
The expectation for cloud users, then, is that compute, network, and data resources
in the cloud should be provided in short order. This property is known as elasticity. For
instance, virtual machines should be available on demand in seconds, or no more than
minutes, compared to the normal physical server procurement process that could take
anywhere from weeks to years.
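
A simple worked example shows why elasticity and measured service matter economically.
Assume, purely for illustration, a workload that needs 100 servers at its daily peak but only
20 on average, and an IaaS price of $0.50 per server-hour; an owned deployment must be sized
(and paid for) at the peak, while an elastic, metered service bills only what is consumed.
Treating owned capacity as if it were billed at the same hourly rate flatters the owned
case, but the comparison still makes the point.

peak_servers = 100         # capacity an owned deployment must provision up front
average_servers = 20       # what the workload actually uses, averaged over the day
hourly_rate = 0.50         # assumed IaaS price per server-hour
hours_per_month = 730

owned_cost = peak_servers * hours_per_month * hourly_rate        # paid whether used or not
elastic_cost = average_servers * hours_per_month * hourly_rate   # billed only when consumed

print(f"owned (sized for peak): ${owned_cost:,.0f} per month")    # $36,500
print(f"elastic (pay per use):  ${elastic_cost:,.0f} per month")  # $7,300
print(f"utilization if owned:   {average_servers / peak_servers:.0%}")   # 20%
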
At this point, we have covered the what question—namely, the essential
characteristics of the cloud. The next section covers service models, which is essentially
the how question.

The Cloud Service Models


The unit of delivery for cloud technology is a service. NIST defines three service models,
affectionately known as the SPI model, for SaaS, PaaS, and IaaS, or, respectively, software,
platform, and infrastructure services.
Under the SaaS service model, applications run at the service provider or delegate
services under the service network paradigm described below. Users access their
applications through a browser, thin client, or mobile device. Examples are Google Docs,
Gmail, and MySAP.
PaaS refers to cloud-based application development environments, compilers, and
tools. The cloud consumer does not see the hardware or network directly, but is able to
determine the application configuration and the hosting environment configuration.
IaaS usually refers to cloud-based compute, network, and storage resources. These
resources are generally understood to be virtualized. For simplicity, some providers may
require running pre-configured or highly paravirtualized operating system images. This is
how a pool of physical hosts is able to support 500 or more virtual machines each. Some
providers may provide additional guarantees—for instance, physical hosts shared with no
one else or direct access to a physical host from a pool of hosts.
The bottom layer of the NIST framework addresses where cloud resources are
deployed, which is covered in the next section.

The Cloud Deployment Models


The phrase cloud deployment models refers to the environment or placement of cloud
services as deployed. The quintessential cloud is the multi-tenant public cloud, where
the infrastructure is pooled and made available to all customers. Cloud customers
don’t have a say in the selection of the physical host where their virtual machines land.
This environment is prone to the well-known noisy and nosy neighbor problems, with
multiple customers sharing a physical host.
The noisy neighbor problem might manifest when a customer’s demand on host
resources impacts the performance experienced by another customer running on the
same host; an application with a large memory footprint may cause the application from
another customer to start paging and to run slowly. An application generating intense I/O
traffic may starve another customer trying to use the same resource.
As for the nosy neighbor problem, the hypervisor enforces a high level of isolation
between tenants through the virtual machine abstraction—much higher, for instance,
than inter-process isolation within an operating system. However, there is no absolute
proof that the walls between virtual machines belonging to unrelated customers are
completely airtight. Service-level agreements for public clouds usually do not provide
assurances against tenants sharing a physical host. Without a process to qualify tenants,
a virtual machine running a sensitive financial application could end up sharing the
host with an application that has malicious intent. To minimize the possibility of such
breaches, customers with sensitive workloads will, as a matter of practice, decline to run
them in public cloud environments, choosing instead to run them in corporate-owned
infrastructure. These customers need to forfeit the benefits of the cloud, no matter how
attractive they may seem.
As a partial remedy for the nosy neighbor problem, an entity may operate a cloud for
exclusive use, whether deployed on premises or operated by a third party. These clouds
are said to be private clouds. A variant is a community cloud, operated not by one entity
but by more than one with shared affinities, whether corporate mission, security, policy,
or compliance considerations, or a mix thereof.
The community cloud is the closest to the model under which a predecessor
technology, grid computing, operated. A computing grid was operated by an affinity group.
This environment was geared toward high-performance computing usages, emphasizing
the allocation of multiple nodes—namely, computers or servers to run a job of limited
duration—rather than an application running for indefinite time that might use a
fractional server.
The broad adoption of the NIST definition for cloud computing allows cloud
service providers and consumers alike to establish an initial set of expectations about
management, security, and interoperability, as well as determine the value derived from
use of cloud technology. The next section covers these aspects in more detail.


The Cloud Value Proposition


The NIST service and deployment models—the latter being public, private, and hybrid—get realized
through published APIs, whether open or proprietary. It is through these APIs that customers
can elicit capabilities related to management, security, and interoperability for cloud
computing. The APIs get developed through diverse industry efforts, including the Open
Cloud Computing Interface Working Group, Amazon EC2 API, VMware’s DMTF-submitted
vCloud API, Rackspace API, and GoGrid’s API, to name just a few. In particular, open,
standard APIs will play a key role in cloud portability, federation, and interoperability, as
will common container formats such as the DMTF’s Open Virtualization Format or OVF, as
specified by the Cloud Security Alliance in the citation above.
Future flexibility, security, and mobility of the resultant solution, as well as its
collaborative capabilities, are first-order considerations in the design of cloud-based
solutions. As a rule of thumb, de-perimeterized solutions have the potential to be more
effective than perimeterized solutions relying on the notion of an enterprise perimeter to
be protected, especially in cloud-based environments that have no clear notion of inside
or outside. The reasons are complex. Some are discussed in the section “New Enterprise
Security Boundaries,” later in this chapter. Careful consideration should also be given to
the choice between proprietary and open solutions, for similar reasons.
The NIST definition emphasizes the flexibility and convenience of the cloud,
enabling customers to take advantage of computing resources and applications that they
do not own for advancing their strategic objectives. It also emphasizes the supporting
technological infrastructure, considered an element of the IT supply chain managed to
respond to new capacity and technological service demands without the need to acquire
or expand in-house complex infrastructures.
Understanding the dependencies and relationships between the cloud computing
deployment and the service models is critical for assessing cloud security risks and
controls. With PaaS and SaaS built on top of IaaS, as described in the NIST model above,
inherited or imported capabilities introduce security issues and risks. In all cloud models,
the risk profile for data and security changes, and this is an essential factor in deciding which
models are appropriate for an organization. The speed of adoption depends on how fast
security and trust in the new cloud models can be established.
Cloud resources can be created, moved, migrated, and multiplied in real time to
meet enterprise computing needs. A trusted cloud can be an application accessible
through the Web or a server provisioned as available when needed. It can involve a
specific set of users accessing it from a specific device on the Internet. The cloud model
delivers convenient, on-demand access to shared pools of hardware and infrastructure,
made possible by sophisticated automation, provisioning, and virtualization
technologies. This model decouples data and software from the servers, networks, and
storage systems. It makes for flexible, convenient, and cost-effective alternatives to
owning and operating an organization’s own servers, storage, networks, and software.
However, it also blurs many of the traditional, physical boundaries that help define
and protect an organization’s data assets. As cloud- and software-defined infrastructure
becomes the new standard, the security that depends on static elements like hardware,
fixed network perimeters, and physical location won’t be guaranteed. Enterprises seeking
the benefits of cloud-based infrastructure delivery need commensurate security and
compliance. Covering this topic is the objective for this book. The new perimeter is
defined in terms of data, its location, and the cloud resources processing it, given that the
old definition of on-premise assets no longer applies.
Let’s now explore some of the historical drivers of the adoption of cloud technology.

Historical Context
Is it possible to attain levels of service in terms of security, reliability, and performance
for cloud-based applications that rival implementations using corporate-owned
infrastructure? Today it is challenging not only to achieve this goal but also to measure
that success except in a very general sense. For example, consider doing a cost rollup at
the end of a fiscal year. There’s no capability today to establish operational metrics and
service introspection. A goal for security in the cloud, therefore, is not to just match this
baseline but to surpass it. In this book, we'd like to claim that this is possible.
Cloud technology enables the disaggregation of compute, network, and storage
resources in a data center into pools of resources, as well as the partitioning and
re-aggregation of these resources according to the needs of consumers down the supply
chain. These capabilities are delivered through a network, as explained earlier in the
chapter. A virtualization layer may be used to smooth out the hardware heterogeneity and
enable configurable software-defined data centers that can deliver a service at a quality
level that is consistent with a pre-agreed SLA.
The vision for enterprise IT is to be able to run varied workloads on a software-defined
data center, with the ability for developers, operators, or, in fact, any responsible entity to use
self-service unified management tools and automation software. The software-defined
data center must be abstracted from, but still make best use of, physical infrastructure
capability, capacity, and level of resource consumption across multiple data centers and
geographies. For this vision to be realized, it is necessary that enterprise IT have products,
tools, and technologies to provision, monitor, remediate, and report on the service level
of the software-defined data center and the underlying physical infrastructure.

Traditional Three-Tier Architecture


The three-tier architecture shown in Figure 1-2 is well established in data centers
today for application deployment. It is highly scalable, whereby each of the tiers can be
expanded independently by adding more servers to remove choke points as needed, and
without resorting to a forklift upgrade.


Figure 1-2. Three-tier application architecture

While the traditional three-tier architecture did fine in the scalability department, it
was not efficient in terms of cost and asset utilization. This was because of the
reality of procuring a physical asset. If new procurement needs to go through a budgetary
cycle, the planning horizon can be anywhere from six months to two years. Meanwhile,
capacity needs to be sized for the expected peak demand, plus a generous allowance
for demand growth over the system’s planning and lifecycle, which may or may not
be realized. This defensive practice leads to chronically low utilization rates, typically
in the 5 to 15 percent range. Managing infrastructure in this overprovisioned manner
represents a sunk investment, with a large portion of the capacity not used during most
of the infrastructure’s planned lifetime. The need for overprovisioning would be greatly
alleviated if supply could somehow be matched with demand in terms of near-real
time—perhaps on a daily or even an hourly basis.
Server consolidation was a technique adopted in data centers starting in the early
2000s, which addressed the low-utilization problem using virtualization technology to
pack applications into fewer physical hosts. While server consolidation was successful at
increasing utilization, it brought significant technical complexity and was a static scheme,
as resource allocation was done only at planning or deployment time. That is, server
consolidation technology offered limited flexibility in changing the machine allocations
during operations, after an application was launched. Altering the resource mix required
significant retooling and application downtime.

Software Evolution: From Stovepipes to Service Networks


The low cost of commodity servers made it easy to launch application instances.
However, little thought was given to how the different applications would interact with
one another. For instance, the information about the employee roster in an organization
is needed for applications as diverse as human resources, internal phone directory,
expense reporting, and so on. Having separate copies of these resources meant allocating
infrastructure to run these copies, and running an infrastructure was costly in terms of
extra software licensing fees. Having several copies of the same data also introduced the
problem of keeping data synchronized across the different copies.

■■Note Cloud computing has multiplied the initial gains in efficiency delivered by server
consolidation by allowing dynamic rebalancing of workloads at run time, not just at planning
or deployment time.

The initial state of IT applications circa 2000 ran in stovepipes, shown in Figure 1-3
on the left, with each application running on assigned hardware. Under cloud computing,
capabilities common across multiple stacks, such as the company’s employee database,
are abstracted out in the form of a service or of a limited number of service instances that
would certainly be smaller than the number of application instances. All applications
needing access to the employee database, for instance, get connected to the employee
database service.

Figure 1-3. Transition from stovepipes to a service network ecosystem

Under these circumstances, duplicated stacks characterizing stovepiped applications
now morph into a graph, with each node representing a coalesced capability. The
capability is implemented as a reusable service. The abstract connectivity of the service
components making up an application can be represented as a network—a service
network. The stovepipes, thus, have morphed into service networks, as depicted on the
right side of Figure 1-3. We call these nodes servicelets; they are service components
designed primarily to be building blocks for cloud-based applications, but they are not
necessarily self-contained applications.


With that said, we have an emerging service ecosystem with composite applications
that freely use both internal and third-party servicelets. A strong driver for this
application architecture has been the consumerization of IT and the need to make
existing corporate applications available through mobile devices.
For instance, front-end services have gone through a notable evolution, whereby
the traditional PC web access has been augmented to enable application access
through mobile devices. A number of enterprises have opened applications for public
access, including travel reservation systems, supply chain, and shopping networks. The
capabilities are accessible to third-party developers through API managers that make it
relatively easy to build mobile front ends to cloud capabilities; this is shown in Figure 1-4.
A less elegant version of this scheme is the “lipstick on a pig” approach of retooling
a traditional three-tier application and slapping a REST API on top, to “servitize” the
application and make it accessible as a component for integration into other third-party
applications. As technology evolves, we can expect more elegantly architected servicelets
built from the ground up to function as such.
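
Even the "lipstick on a pig" variant is mostly glue code. The sketch below, using only the
Python standard library, wraps a hypothetical legacy lookup function behind a minimal HTTP
endpoint so that other applications can consume it as a servicelet; an API manager in front
of it would add the authentication, throttling, and versioning a production deployment needs.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(employee_id: str) -> dict:
    """Stand-in for a call into the existing three-tier application's business logic."""
    return {"id": employee_id, "name": "Jane Doe", "department": "Finance"}

class ServiceletHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")            # expects /employees/<id>
        if len(parts) == 2 and parts[0] == "employees":
            body = json.dumps(legacy_lookup(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ServiceletHandler).serve_forever()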

Figure 1-4. Application service networks

So, in Figure 1-4 we see a composite application with an internal API built out of
four on-premise services hosted in an on-premise private cloud, the boundary marked
by the large, rounded rectangle. The application uses four additional services offered by
third-party providers and possibly hosted in a public cloud. A fifth service, shown in the
lower right corner, uses a third-party private cloud, possibly shared with other corporate
applications from the same company.
Continuing on the upper left corner of Figure 1-4, note the laptop representing a
client front end for access by nomadic employees. The mobile device on the lower left
represents a mobile app developed by a third-party ISV accessing another application API
posted through an API manager. An example of such an application could be a company’s
e-commerce application. The mobile app users are the company’s customers, able to
check stock and place purchase orders. However, API calls for inventory restocking and
visibility into the supply chain are available only through the internal API. Quietly, behind
the scenes, the security mechanisms to be discussed in the following chapters are acting
to ensure the integrity of the transactions throughout.


In this section we have covered the evolution of application architecture from
application stovepipes to the current service paradigm. IT processes have been evolving
along with the architecture. Process evolution is the subject of the next section.

The Cloud as the New Way of Doing IT


The cloud represents a milestone in technology maturity for the way IT services are
delivered. This has been a common pattern, with more sophisticated technologies taking
the place of earlier ones. The automobile industry is a fitting example. At the dawn of the
industry, the thinking was to replace horses with the internal combustion engine. There
was little realization then of the real changes to come, including a remaking of the energy
supply chain based on petroleum and the profound ripple effects on our transportation
systems. Likewise, servicelets will become more than server replacements; they will
be key components for building new IT capabilities unlimited by underlying physical
resources.

■■Note An important consideration is that the cloud needs to be seen as more than just a
drop-in replacement for the old stovepipes. This strategy of using new technology to
re-implement existing processes would probably work, but can deliver only incremental
benefits, if any at all. The cloud represents a fundamental change in how IT gets done and
delivered. Therefore, it also presents an opportunity for making a clean break with the
past, bringing with it the potential for a quantum jump in asset utilization and, as we hope
to show in this book, in greater security.

Here are some considerations:

• Application development time scales are compressing, yet the
scope of these applications keeps expanding, with new user
communities being brought in. IT organizations need to be ready
to use applications and servicelets from which to easily build
customized applications in a fraction of the time it takes today.
Unfortunately, the assets constituting these applications will
be owned by a slew of third parties: the provider may be a SaaS
provider using a deployment assembled by a systems integrator;
the systems integrator will use offerings from different software
vendors; IaaS providers will include network, computing, and
storage resources.


• A high degree of operational transparency is required to build
a composite application out of servicelets—that is, in terms of
application quantitative monitoring and control capability.
A composite application built from servicelets must offer
end-to-end service assurance better than the same application
built from traditional, corporate-owned assets. The composite
application needs to be more reliable and secure than incumbent
alternatives if it’s to be accepted. Specific to security, operational
transparency means it can be used as a building block for
auditable IT processes, an essential security requirement.
• QoS constitutes an ever-present concern and a barrier; today’s
service offerings do not come even close to reaching this goal,
and that limits the migration of a sizable portion of corporate
applications to the cloud. We can look at security as one of the most
important QoS issues for applications, on a par with performance.

On the last point, virtually all service offerings available today are not only opaque
when it comes to providing quantifiable QoS, but the providers also seem to run in the
opposite direction of customer desires and interests. Typical
messages, including those from large, well-known service providers, have such
unabashed clauses as the following:
“Your access to and use of the services may be suspended . . .
for any reason . . .”
“We will not be liable for direct, indirect or consequential
damages . . .”
“The service offerings are provided ‘as is’ . . . ”
“We shall not be responsible for any service interruptions . . . ”
These customer agreements are written from the perspective of the service provider.
The implicit message is that the customer comes as second priority, and the goal of
the disclaimers is to protect the provider from liability. Clearly, there are supply gaps
in capabilities and unmet customer needs with the current service offerings. Providers
addressing the issue head on, with an improved ability to quantify their security risks and
the capability of providing risk metrics for their service products, will have an advantage
over their competition, even if their products are no more reliable than comparable
offerings. We hope the trusted cloud methods discussed in the following chapters will
help providers deliver a higher level of assurance in differentiated service offerings. We’d
like to think that these disclaimers reflect service providers’ inability, considering the
current state of the art, to deliver the level of security and performance needed, rather
than any attempts to dodge the issue.
Given that most enterprise applications run on servers installed in data centers, the
first step is to take advantage of the sensors and features already available in the server
platforms. The next chapters will show how, through the use of Intel Trusted Execution
Technology (TXT) and geolocation sensors, it is possible to build more secure platforms.


We believe that the adoption, deployment, and application of the emerging
technologies covered in this book will help the industry address current quandaries with
service-level agreements (SLAs) and enable new market entrants. Addressing security
represents a baby step toward cloud service assurance. There is significant work taking place
in other areas, including application performance and power management, which will
provide a trove of material for future books.

Security as a Service
What would be a practical approach to handling security in a composite application
environment? Should it be baked-in—namely, every service component handling its own
security—or should it be bolted on after integration? As explained above, we call these
service components servicelets, designed primarily to function as application building
blocks rather than as full-fledged, self-contained applications.
Unfortunately, neither approach constitutes a workable solution. A baked-in
approach requires the servicelet to anticipate every possible circumstance for every
customer during the product’s lifetime. This comprehensive approach may be overkill
for most applications. It certainly burdens the service developer trying to quickly bring a
lightweight product to market with overwrought security features. The developer may see
this effort as a distraction from the main business. Likewise, a bolted-on approach makes
it difficult both to retrofit security on the servicelet and to implement consistent security
policies across the enterprise.
One possible approach out of this maze is to look at security as a horizontal
capability, to be handled as another service. This approach assumes the notion of a
virtual enterprise service boundary.
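
One way to picture security as a horizontal capability is as a policy check that every
servicelet call passes through, supplied by a shared security service rather than baked into
each component or bolted on afterward. The decorator below is a deliberately simplified
sketch of that idea; the policy table and caller identities are hypothetical placeholders,
not a recommendation of any particular product or protocol.

from functools import wraps

# Hypothetical shared policy: which callers may invoke which servicelet operations.
POLICY = {
    "inventory.read": {"mobile-app", "partner-portal", "internal-erp"},
    "inventory.restock": {"internal-erp"},                 # internal API only
}

def secured(operation: str):
    """Wrap a servicelet entry point with a check against the shared security service."""
    def decorator(func):
        @wraps(func)
        def wrapper(caller: str, *args, **kwargs):
            if caller not in POLICY.get(operation, set()):
                raise PermissionError(f"{caller} may not call {operation}")
            return func(caller, *args, **kwargs)
        return wrapper
    return decorator

@secured("inventory.restock")
def restock(caller: str, sku: str, quantity: int) -> str:
    return f"restocked {quantity} units of {sku}"

print(restock("internal-erp", "SKU-42", 10))   # allowed
# restock("mobile-app", "SKU-42", 10)          # would raise PermissionError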

New Enterprise Security Boundaries


The notion of a security perimeter for the enterprise is essential for setting up a first line
of defense. The perimeter defines the notion of what is inside and what is outside the
enterprise. Although insider attacks can’t be ruled out, let’s assume for the moment that
we’re dealing with a first line of defense to protect the “inside” from outsider attacks.
In the halcyon days, the inside coincided with a company’s physical assets. A common
approach was to lay out a firewall to prevent unauthorized access between the trusted
inside and untrusted outside networks.
Ideally, a firewall can provide centralized control across distributed assets with
uniform and consistent policies. Unfortunately, these halcyon days actually never existed.
Here’s why:
• A firewall only stands a chance of stopping threats that attempt to
cross the boundary.
• Large companies, and even smaller companies after a merger
or acquisition, have or end up having a geographically dispersed
IT infrastructure. This makes it difficult to set up single network
entry points, and it stretches the notion of what "inside" means.


• The possibility of composite applications with externalized
solution components literally turns the concept of "inside"
inside out. In an increasingly cloud-oriented world, composite
applications are becoming the rule more than the exception.
• Mobile applications have become an integral part of corporate IT.
In the mobile world, certain corporate applications get exposed to
third-party consumers, so it's not just a matter of considering what
to do with external components supporting internal applications;
also, internal applications become external from the application-
consumer perspective.
The new enterprise security perimeter has different manifestations depending on the
type of cloud architecture in use—namely, whether private, hybrid, or public under the
NIST classification.
The private cloud model is generally the starting point for many enterprises, as they
try to reduce data center costs by using a virtualized pooled infrastructure. The physical
infrastructure is entirely on the company’s premises; the enterprise security perimeter is
the same as for the traditional, vertically owned infrastructure, as shown in Figure 1-5.

Figure 1-5. Traditional security perimeter


The next step in sophistication is the hybrid cloud, shown in Figure 1-6. A hybrid
cloud constitutes the more common example of an enterprise using an external cloud
service in a targeted manner for a specific business need. This model is hybrid because the
core business services are left in the enterprise perimeter, and some set of cloud services
are selectively used for achieving specific business goals. There is additional complexity, in
that we have third-party servicelets physically outside the traditional enterprise perimeter.


Figure 1-6. Security perimeter in the hybrid cloud

The last stage of sophistication comes with the use of public clouds, shown in
Figure 1-7. Using public clouds brings greater rewards for the adoption of cloud
technology, but also greater risks. In its pure form, unlike the hybrid cloud scenario,
the initial on-premise business core may become vanishingly small. Only end users
remain in the original perimeter. All enterprise services may get offloaded to external
cloud providers on a strategic and permanent basis. Application components become
externalized, physically and logically.

Figure 1-7. Generalized cloud security perimeter (elements shown: end user, existing
enterprise information systems, virtualized infrastructure, and PaaS/SaaS cloud
service providers)


Yet another layer of complexity is the realization that the enterprise security
perimeter as demarcation for an IT fortress was never a realistic concept. For instance,
allowing employee access to the corporate network through VPN is tantamount to
extending a bubble of the internal network to the worker in the field. However, in
practical situations, that perimeter must be semipermeable, allowing a bidirectional flow
of information.
A case in point is a company’s website. An initial goal may have been to provide
customers with product support information. Beyond that, a CIO might be asked to
integrate the website into the company’s revenue model. Examples might include
supply-chain integration: airlines exposing their scheduling and reservation systems,
or hotel chains publishing available rooms, not only for direct consumption through
browsers but also as APIs for integration with other applications. Any of these extended
capabilities will have the effect of blurring the security boundaries by bringing in external
players and entities.

■■Note  An IT organization developing an application is not exclusively a servicelet
consumer; it also makes the company a servicelet provider in the pursuit of incremental
revenue. The enterprise security boundary becomes an entity enforcing the rules for
information flow in order to prevent a free-for-all, including corporate secrets flying out
the window.

If anything, the fundamental security concerns that existed with IT delivered out of
corporate-owned assets also apply when IT functions, processes, and capabilities migrate
to the cloud. The biggest challenge is to define, devise, and carry these concepts over
into the new cloud-federated environment in a way that is more or less transparent to
the community of users. An added challenge is that, because of the broader reach of the
cloud, the community of users expands by several orders of magnitude. A classic example
is the airline reservation system, such as the AMR Sabre passenger reservation system,
later spun out as an independent company. Initially it was the purview of corporate staff.
Travel agents in need of information or making reservations phoned to access the airline
information indirectly. Eventually travel agents were able to query and make reservations
directly. Under the self-service model of the cloud today, it is customary for consumers
to make reservations themselves through dozens of cloud-based composite applications
using web-enabled interfaces from personal computers and mobile devices.
Indeed, security imperatives have not changed in the brave new world of cloud
computing. Perimeter management was an early attempt at security management, and it
is still in use today. The cloud brings new challenges, though, such as the nosy neighbor
problem mentioned earlier. To get started in cloud environments, the concept of
trust in a federated environment needs to be generalized. The old concept of inside vs.
outside the firewall has long been obsolete and provides little comfort. On the one hand,
the federated nature of the cloud brings the challenge of ensuring trust across logically
and geographically distributed components. On the other hand, we believe that the goal
for security in the cloud is to match current levels of security in the enterprise, preferably
by removing some of the outstanding challenges. For instance, the service abstraction
used internally provides additional opportunities for checks and balances in terms of
governance, risk management, and compliance (GRC) not possible in earlier monolithic
environments.
We see this transition as an opportunity to raise the bar, as is expected when any
new technology displaces the incumbent. Two internal solution components may
trust each other, and therefore their security relationships are said to be implicit. If
these components become servicelets, the implicit relationship becomes explicit:
authentication needs to happen and trust needs to be measured. If these actions can’t be
formalized, though, the provider does not deliver what the customer wants. The natural
response from the provider is to put liability-limiting clauses in place of an SLA. Yet there
is trouble when the state-of-the-art can’t provide what the customer wants. This inability
by service providers to deliver security assurances leads to the brazen disclaimers
mentioned above.
Significant progress has been achieved in service performance management. Making
these contractual relationships explicit in turn makes it possible to deliver predictable
cost and performance in ways that were not possible before. This dynamic introduces the
notion of service metadata, described in Chapter 10. We believe security is about to cross
the same threshold. As we’ve mentioned, this is the journey we are about to embark on
during the next few chapters.
The transition from a corporate-owned infrastructure to cloud technology poses
a many-layered challenge: every new layer addressed brings a fresh one to the fore.
Today we are well past the initial technology viability objections, and hence the challenge
du jour is security, cited as a main roadblock on the way to cloud adoption.

A Roadmap for Security in the Cloud


Now that we have covered the fundamentals of cloud technology and expressed some
lingering security issues, as well as the dynamics that led to the creation of the cloud, we
can start charting the emerging technology elements and see how they can be integrated
in a way that can enhance security outcomes. From a security perspective, there are
two necessary conditions for the cloud to be accepted as a mainstream medium for
application deployment. We covered the first: essentially embracing its federated nature
and using it to advantage. The second is having an infrastructure that directly supports
the security concerns inherent in the cloud, offering an infrastructure that can be trusted.
In Chapter 2, we go one level deeper, exploring the notion of “trusted cloud.” The trusted
cloud infrastructure is not just about specific features. It also encompasses processes
such as governance, assurance, compliance, and audits.
In Chapter 3, we introduce the notions of trusted infrastructure and trusted
distributed resources under the umbrella of trusted compute pools and enforcement of
security policies stemming from a hardware-based root of trust. Chapter 4 deals with the
idea of attestation, an essential operational capability allowing the authentication of
computational resources.
In a federated environment, location may be transparent. In other cases, because
of the distributed nature of the infrastructure, location needs to be explicit: policies
prescribing where data sets and virtual machines can travel, as well as useful ex post facto
audit trails. The topic of geolocation and geotagging is covered in Chapter 5. Chapter 6
surveys security considerations for the network infrastructure that links cloud resources.
Chapter 7 considers issues of identity management in the cloud. And Chapter 8 discusses
the idea of identity in a federated environment. The latter is not a new problem; federated
identity management was an important feature of the cloud’s predecessor technology,
grid computing. However, as we’ll show, considerations of federation for the cloud are
much different.

Summary
We started this chapter with a set of commonly understood concepts. We also observed
the evolution of security as IT moved from corporate-owned assets to an infrastructure
augmented with externalized resources. The security model likewise evolved from an
implicit, essentially “security by obscurity” approach involving internal assets to one that
is explicit across assets crossing corporate boundaries. This federation brings new
challenges, but it also offers the possibility of raising the bar in terms of security for
corporate applications. This
new beginning can be built upon a foundation of trusted cloud infrastructure, which is
discussed in the rest of this book.

Chapter 2

The Trusted Cloud: Addressing Security and Compliance

In Chapter 1 we reviewed the essential cloud concepts and took a first look at cloud
security. We noted that the traditional notion of perimeter or endpoint protection
left much to be desired in the traditional architecture with enterprise-owned
assets. Such a notion is even less adequate today when we add the challenges
that application developers, service providers, application architects, data center
operators, and users face in the emerging cloud environment.
In this chapter we’ll bring the level of discourse one notch tighter and focus
on defining the issues that drive cloud security. We’ll go through a set of initial
considerations and common definitions as prescribed by industry standards. We’ll also
look at current pain points in the industry regarding security and the challenges involved
in addressing those pains.
Beyond these considerations, we first take a look at the solution space: the concept
of a trusted infrastructure and usages to be implemented in a trusted cloud, starting
with a trust chain that consists of hardware that supports boot integrity. Then, we take
advantage of that trust chain to implement data protection at rest, in motion, and during
application execution, to support application run-time integrity and offer
protection in the top layer.
Finally, we look briefly at some of the “to be” scenarios for users who are able to put
these recommendations into practice.

Security Considerations for the Cloud


One of the biggest barriers to broader adoption of cloud computing is security—the real
and perceived risks of providing, accessing, and controlling services in a multi-tenant
cloud environment. IT managers would like to see higher levels of assurance before they
can declare their cloud-based services and data ready for prime time, similar to the level
of trust they have in corporate-owned infrastructure. Organizations require their compute
platforms to be secure and compliant with relevant rules, regulations, and laws. These
requirements must be met, whether deployment uses a dedicated service available via a
private cloud or is a service shared with other subscribers via a public cloud. There’s no
margin for error when it comes to security. According to a research study conducted by
the Ponemon Institute and Symantec, the average cost to an organization of a data breach
in 2013 was $5.4 million, and the corresponding cost of lost business came to about
$3 million.1 It is the high cost of such data breaches and the inadequate security monitoring
capabilities offered as part of the cloud services that pose the greatest threats to wider
adoption of cloud computing and that create resistance within organizations to public
cloud services.
From an IT manager’s perspective, cloud computing architectures bypass or work
against traditional security tools and frameworks. The ease with which services are
migrated and deployed in a cloud environment brings significant benefits, but it is
a bane from a compliance and security perspective. Therefore, this chapter focuses
on the security challenges involved in deploying and managing services in a cloud
infrastructure. To serve as an example, we describe work that Intel is doing with partners
and the software vendor ecosystem to enable a security-enhanced platform and solutions
with security anchored and rooted in hardware and firmware. The goal of this effort is to
increase security visibility and control in the cloud.
Cloud computing describes the pooling of an on-demand, self-managed virtual
infrastructure, consumed as a service. This approach abstracts applications from the
complexity of the underlying infrastructure, allowing IT to focus on enabling greater
business value and innovation instead of getting bogged down by technology deployment
details. Organizations welcome the presumed cost savings and business flexibility
associated with cloud deployments. However, IT practitioners unanimously cite
security, control, and IT compliance as primary issues that slow the adoption of cloud
computing. These considerations often denote general concerns about privacy, trust,
change management, configuration management, access controls, auditing, and logging.
Many customers also have specific security requirements that mandate control over data
location, isolation, and integrity. These requirements have traditionally been met through
a fixed hardware infrastructure.
At the current state of cloud computing, the means to verify a service’s compliance
are labor-intensive, inconsistent, non-scalable, or just plain impractical to implement.
The necessary data, APIs, and tools are not available from the provider. Process
mismatches occur when service providers and consumers work under different
operating models. For these reasons, many corporations deploy less critical applications
in the public cloud and restrict their sensitive applications to dedicated hardware
and traditional IT architecture running in a corporate-owned vertical infrastructure.
For business-critical applications and processes, and for sensitive data, third-party
attestations of security controls usually aren’t enough. In such cases, it is absolutely
critical for organizations to be able to ascertain that the underlying cloud infrastructure is
secure enough for the intended use.

1
https://www4.symantec.com/mktginfo/whitepaper/053013_GL_NA_WP_Ponemon-2013-Cost-of-a-Data-Breach-Report_daiNA_cta72382.pdf


This requirement thus drives the next frontier of cloud security and compliance:
implementing a level of transparency at the lowest layers of the cloud, through the
development of standards, instrumentation, tools, and linkages to monitor and prove
that the IaaS cloud’s physical and virtual servers are actually performing as they should
be and that they meet defined security criteria. The expectation is that the security of a
cloud service should match or exceed the equivalent in-house capabilities before it can be
considered an appropriate replacement.
Today, security mechanisms in the lower stack layers (for example, hardware,
firmware, and hypervisors) are almost absent. The demand for security is higher for
externally sourced services. In particular, the requirements for transparency are higher:
while certain monitoring and logging capabilities might not have been deemed necessary
for an in-house component, they become absolute necessities when sourced from
third parties to support operations, meet SLA compliance, and have audit trails should
litigation and forensics become necessary. On the positive side, the use of cloud services
will likely drive the re-architecting of crusty applications with much higher levels of
transparency and scalability with, we hope, moderate cost impact due to the greater
efficiency the cloud brings.
Cloud providers and the IT community are working earnestly to address these
requirements, allowing cloud services to be deployed and managed with predictable
outcomes, with controls and policies in place to monitor trust and compliance of these
services in cloud infrastructures. Specifically, Intel Corporation and other technology
companies have come together to enable a highly secure cloud infrastructure based on
a hardware root of trust, providing tamper-proof measurements of physical and virtual
components in the computing stack, including hypervisors. These collaborations are
working to develop a framework that integrates the secure hardware measurements
provided by the hardware root of trust with adjoining virtualization and cloud
management software. The intent is to improve visibility, control, and compliance for
cloud services. For example, making the trust and integrity of the cloud servers visible
will allow cloud orchestrators to provide improved controls for onboarding services for
their more sensitive workloads, offering more secure hardware and subsequently better
control over the migration of workloads and greater ability to deliver on security policies.
Security requirements for cloud use are still a work in progress, let alone the security
mechanisms proper. Let’s look at some of the security issues being captured, defined,
and specified by government and standards organizations.

Cloud Security, Trust, and Assurance


There is significant focus on and activity across various standards organizations and
forums to define the challenges facing cloud security, as well as solutions to those
challenges. The Cloud Security Alliance (CSA), NIST, and the Open Cloud Computing
Interface (OCCI) are examples of organizations promoting cloud security standards. The
Open Data Center Alliance (ODCA), an alliance of customers, recognizes that security
is the biggest challenge organizations face as they plan for migration to cloud services.
The ODCA is developing usage models that provide standardized definitions for security
in the cloud services and detailed procedures for service providers to demonstrate
compliance with those standards. These attempts seek to give organizations an ability to
validate adherence to security standards within the cloud services.


Here are some important considerations dominating the current work on cloud
security:

•	 Visibility, compliance, and monitoring. Ways are needed to
provide seamless access to security controls, conditions, and
operating states within a cloud’s virtualization and hardware
layers, the bottom-most infrastructure layers of the cloud, so
that providers can be audited. The measured evidence enables
organizations to comply with security policies and with regulated
data standards and controls such as FISMA and DPA (NIST 2005).
• Data discovery and protection. Cloud computing places data in
new and different places—not just user data but also application
and VM data, including source code. Key issues include data location and
segregation, data footprints, backup, and recovery.
• Architecture. Standardized infrastructure and applications
provide opportunities to exploit a single vulnerability many times
over. This is the BORE (Break Once, Run Everywhere) principle at
work. Considerations for the architecture include:
• Protection. Protecting against attacks with standardized
infrastructure when the same vulnerability can exist at many
places, owing to the standardization.
• Support for multi-tenant environments. Ensuring that
systems and applications from different tenants are isolated
from one another appropriately.
• Security policies. Making sure that security policies are
accurately and fully implemented across cloud architectures.
• Identity management. Identity management (IdM) is described
as “the management of individual identities, their authentication,
authorization, roles, and privileges/permissions within or across
system and enterprise boundaries, with the goal of increasing
security and productivity while decreasing cost, downtime, and
repetitive tasks.” From a cloud security perspective, questions like,
“How do you control passwords and access tokens in the cloud?”
and “How do you federate identity in the cloud?” are very real,
thorny questions for cloud providers and subscribers.
• Automation and policy orchestration. The efficiency, scale,
flexibility, and cost-effectiveness that cloud computing brings
are because of the automation—the ability to rapidly deploy
resources, and to scale up and scale down with processes,
applications, and services provisioned securely “on demand.”
A high degree of automation and policy evaluation and
orchestration are required so that security controls and
protections are handled correctly, with minimal errors and
minimal intervention needed.


Trends Affecting Data Center Security


The industry working groups that are addressing the issues identified above are carrying
on their activities with some degree of urgency, driven as they are by a number of
circumstances and events. There are three overriding security considerations applicable
to data centers, namely:
• New types of attacks
• Changes in IT systems architecture as a transformation to the
cloud environment takes place
• Increased governmental and international compliance
requirements because of the exploits
The nature and types of attacks on information systems are changing dramatically.
That is, the threat landscape is changing. Attackers are evolving from being hackers
working on their own and looking for personal fame into organized, sophisticated
attackers targeting specific types of data and seeking to gain and retain control of assets.
These attacks are concerted, stealthy, and organized. The attacks have predominantly
targeted operating systems and application environments, but new attacks are no longer
confined to software and operating systems. Increasingly, they are moving lower down
in the solution stacks to the platform, and they are affecting entities such as the BIOS,
various firmware sites in the platform, and the hypervisor running on the bare-metal
system. The attackers find it is easy to hide there, and the number of controls at that level
is still minimal, so leverage is significant. Imagine, in a multi-tenant cloud environment,
what impact malware can have if it gets control of a hypervisor.
Similarly, the evolving IT architecture is creating new security challenges. Risks
exist anywhere there are connected systems. It does not help that servers, whether in
a traditional data center or in a cloud implementation, were designed to be connected
systems. Today, there is an undeniable trend toward virtualization, outsourcing, and
cross-business and cross-supply chain collaboration, which blurs the boundaries
between data “inside” an organization and data “outside” that organization. Drawing
perimeters around these abstract and dynamic models is quite a challenge, and that
may not even be practical anymore. The traditional perimeter-defined models aren’t
as effective as they once were. Perhaps they never were, but the cloud brings these
issues to the point they can’t be ignored anymore. The power of cloud computing
and virtualization lies in the abstraction, whereby workloads can migrate for efficiency,
reliability, and optimization.
This fungibility of infrastructure, therefore, compounds the security and compliance
problems. A vertically owned infrastructure at least provided the possibility of running
critical applications with high security and with successfully meeting compliance
requirements. But this view becomes unfeasible in a multi-tenant environment. With the
loss of visibility comes the question of how to verify the integrity of the infrastructure on
which an organization’s workloads are instantiated and run.
Adding to the burden of securing more data in these abstract models is a growing
legal or regulatory compliance demand to secure personally identifiable data, intellectual
property, or financial data. The risks (and costs) of non-compliance continue to grow.
The Federal Information Security Management Act (FISMA) and the Federal Risk
and Authorization Management Program (FedRAMP) are two programs where
non-compliance prevents cloud service providers from competing in the public sector.
But even if cloud providers aren’t planning to compete in the public sector by offering
government agencies their cloud services, it’s still important that they have at least a basic
understanding of both programs. That’s because the federal government is the largest
single producer, collector, consumer, and disseminator of information in the United
States. Any changes in regulatory requirements that affect government agencies will also
have the potential of significantly affecting the commercial sector. These trends have
major bearing on the security and compliance challenges that organizations face as they
consider migrating their workloads to the cloud.
As mentioned, corporate-owned infrastructure can presumably provide a security
advantage by virtue of its being inside the enterprise perimeter. The first defense is
security by obscurity. Resources inside the enterprise, especially inside a physical
perimeter, are difficult for intruders to reach. The second defense is genetic diversity.
Given that IT processes vary from company to company, an action that breaches one
company’s security may not work for another company’s. However, these presumed
advantages are unintended, and therefore difficult to quantify; in practice, they offer little
comfort or utility.

Security and Compliance Challenges


The four basic security and compliance challenges that organizations face are as follows:

•	 Governance. Cloud computing abstracts the infrastructure, and
in order to prove compliance and satisfy audit requirements,
organizations rely on the cloud providers to supply logs, reports,
and attestation. When companies outsource parts of their IT
infrastructure to cloud providers, they effectively give up some
control of their information infrastructure and processes, even
as they are required to bear greater responsibility for data
confidentiality and compliance. While enterprises still get to
define how their information is handled, who gets access to that
information, and under what conditions in their private or hybrid
clouds, they must largely take cloud providers at their word,
trusting that the security policies and conditions in their SLAs
are being met. Even then, service customers may have to
compromise to get the capabilities that cloud providers can
deliver. The organization’s ability to monitor actual activities and
verify security conditions within the cloud is usually very limited,
and there are no standards or commercial tools to validate
conformance to policies and SLAs.


•	 Co-Tenancy and Noisy or Adversarial Neighbors. Cloud
computing introduces new risks resulting from multi-tenancy, an
environment in which different users within a cloud share physical
resources to run their virtual machines. Creating secure partitions
between co-resident virtual machines has proved challenging
for many cloud providers. Results range from the unintentional
noisy-neighbor syndrome, whereby workloads that consume more
than their fair share of compute, storage, or I/O resources starve
the other virtual tenants on that host, to deliberately malicious
efforts, such as when malware is injected into the virtualization
layer, enabling hostile parties to monitor and control any of
the virtual machines residing on the system. To test this idea,
researchers at UCSD and MIT were able to pinpoint the physical
server used by programs running on the EC2 cloud, and then
extract small amounts of data from these programs by inserting
their own software and launching a side-channel attack.2
• Architecture and Applications. Cloud services are typically
virtualized, which adds a hypervisor layer to a traditional IT
application stack. This new layer introduces opportunities for
improvements in security and compliance, but it also creates
new attack surfaces and different risk exposure. Organizations
must evaluate the new monitoring opportunities and the risks
presented by the hypervisor layer, and account for them in their
policy definition and compliance reporting.
• Data. Cloud services raise access and protection issues for user
data and applications, including source code. Who has access, and
what is left behind when an organization scales down a service?
How is corporate confidential data protected from the virtual
infrastructure administrators and cloud co-tenants? Encryption
of data at rest, in transit, and eventually in use becomes a basic
requirement, yet it comes with a performance penalty.
If we truly want to encrypt everywhere, how is it done in a
cost-effective and efficient manner? Finally, data destruction
at end of life is a subject not often discussed. There are clear
regulations on how long data has to be retained. The assumption
is that data gets destroyed or disposed of once the retention period
expires. Examples of these regulations include Sarbanes-Oxley
Act (SOX), Section 802: seven years (U.S. Security and Exchange
Commission 2003); HIPAA, 45 C.F.R. §164.530(j): six years; and
FACTA Disposal Rule (Federal Trade Commission 2005).

2
S. Curry, J. Darbyshire, Douglas Fisher, et al., RSA Security Brief, March 2010. Also, T. Ristenpart,
E. Tromer, et al., Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party
Compute Clouds, CCS’09, Chicago.


With many organizations using cloud services today for non-mission-critical
operations or for low-confidentiality applications, security and compliance challenges
seem manageable, but this is a policy of avoidance. These services don’t deal with
data and applications governed by strict information security policies such as health
regulations, FISMA regulations, and the Data Protection Act in Europe. But the security
and compliance challenges mentioned above would become central to cloud providers
and subscribers once these higher-value business functions and data begin migrating
to private and hybrid clouds. Industry pundits believe that the cloud value
proposition will increasingly drive the migration of these higher value applications, as
well as information and business processes, to cloud infrastructures. As more and more
sensitive data and business-critical processes move to these cloud environments, the
implications for security officers in these organizations will be to provide a transparent
and compliant framework for information security, with monitoring.
So how do IT people address these challenges and requirements? With the concept
of trusted clouds. This answer addresses many of these challenges and provides the
ability for organizations to migrate both regular and mission-critical applications so as to
leverage the benefits of cloud computing.

Trusted Clouds
There are many definitions and industry descriptions for the term trusted cloud, but at the
core these definitions all have four foundational pillars:
• A trusted computing infrastructure
•	 Trusted cloud identity and access management
• Trusted software and applications
• Operations and risk management
Each of these pillars is broad and goes deep, with a rich cohort of technologies,
patterns of development, and of course security considerations. It is not possible to cover
all of them in one book. Since this book deals with the infrastructure for cloud security,
we focus on the first pillar, the trusted infrastructure, and leave the others for future
work. (Identity and access management are covered very briefly within the context of
the trusted infrastructure.) But before we delve into this subject, let’s review some key
security concepts to ensure clarity in the discussion. These terms lay the foundation for
what visibility, compliance, and monitoring entail, and we start with baseline definitions
for trust and assurance.
• Trust. The assurance and confidence that people, data, entities,
information, and processes will function or behave in expected
ways. Trust may be human-to-human, machine-to-machine
(e.g., handshake protocols negotiated within certain protocols),
human-to-machine (e.g., when a consumer reviews a digital
signature advisory notice on a website), or machine-to-human.
At a deeper level, trust might be regarded as a consequence of
progress toward achieving security or privacy objectives.

26

www.it-ebooks.info
CHAPTER 2 ■ The Trusted Cloud: Addressing Security and Compliance

•	 Assurance. Evidence or grounds for confidence that the security
controls implemented within an information system are effective
in their application. Assurance can be shown in:
• Actions taken by developers, implementers, and operators
in the specification, design, development, implementation,
operation, and maintenance of security controls.
• Actions taken by security control assessors to determine the
extent to which those controls are implemented correctly,
operating as intended, and producing the desired outcomes
with respect to meeting the security requirements for the
system.
With these definitions established, let’s now take a look at the trusted computing
infrastructure, where computing infrastructure embraces three domains: compute,
storage, and network.

Trusted Computing Infrastructure


Trusted computing infrastructure systems consistently behave in expected ways, with
hardware and software working together to enforce these behaviors. The behaviors are
consistent across the compute (server), storage, and network elements in the data center.
In the traditional infrastructure, hardware is a bystander to security measures, as
most of the malware prevention, detection, and remediation is handled by software in the
operating system, applications, or services layers. This approach is no longer adequate,
however, as software layers have become more easily circumvented or corrupted. To
deliver on the promise of trusted clouds, a better approach is the creation of a root of
trust at the most foundational layer of a system—that is, in the hardware. Then, that root
of trust grows upward, into and through the operating system, applications, and services
layers. This new security approach is known as hardware-based or hardware-assisted
security, and it becomes the basis for enabling the trusted clouds.
Trusted computing relies on cryptographic and measurement techniques to
enforce a selected behavior by authenticating the launch and authorizing processes.
This authentication allows an entity to verify that only authorized code runs on a system.
Though this typically covers initial booting, it may also include applications and scripts.
Establishing trust for a particular component implies also an ability to establish trust for
that component relative to other trusted components. This transitive trust path is known
as the chain of trust, with the initial component being the root of trust.
A system of geometry is built on a set of postulates assumed to be true. Likewise, a
trusted computing infrastructure starts with a root of trust that contains a set of trusted
elemental functions assumed to be immune from physical and other attacks. Since an
important requirement for trust is that conditions be tamper-proof, cryptography or some
immutable unique signature is used to identify a component. The hardware platform is
usually a good proxy for the root of trust; for most attackers, the risk, cost, and difficulty of
tampering with hardware exceeds the potential benefits of attempting to do so.


With the use of hardware as the initial root of trust, you can then measure (which
means taking a hash, like an MD5 or SHA1, of the image of a component or components)
the software, such as the hypervisor or operating system, to determine whether
unauthorized modifications have been made to it. In this way, a chain of trust relative
to the hardware can be established. Trust techniques include hardware encryption,
signing, machine authentication, secure key storage, and attestation. Encryption and
signing are well-known techniques, but these are hardened by the placement of keys in
protected hardware storage. Machine authentication provides a user with a higher level
of assurance, as the machine is indicated as known and authenticated. Attestation, which
is covered in Chapter 4, provides the means for a third party (also called a trusted third
party) to affirm that loaded firmware and software are correct, true, or genuine. This is
particularly important for cloud architectures based on virtualization.
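
To make the notion of measurement and a chain of trust concrete, the short Python sketch below hashes a sequence of component images and folds each digest into a running register, in the spirit of a TPM PCR extend operation. It is only an illustration of the principle: the file names, the choice of SHA-256, and the all-zeros starting register are assumptions made for the example, not details of any particular TPM or platform.

import hashlib

def measure(path):
    # Measure a component by hashing its binary image.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def extend(register, measurement):
    # Fold a measurement into the running register, as a TPM PCR extend does:
    # new_register = H(old_register || measurement)
    return hashlib.sha256(register + measurement).digest()

# Hypothetical launch components, measured in boot order.
components = ["bios.bin", "firmware.bin", "hypervisor.bin"]

register = b"\x00" * 32  # registers start from a known value after platform reset
for component in components:
    register = extend(register, measure(component))

print("Cumulative launch measurement:", register.hex())

Because each step depends on all of the previous ones, any change to an earlier component yields a different final value, which is what makes later verification against known-good measurements possible.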

Trusted Cloud Usage Models


In this abstracted and fungible cloud environment, the focus needs to be on enabling
security across the three infrastructure domains. Only then can an enterprise have
an infrastructure that is trusted to enable the broad migration of critical applications.
Mitigating risk becomes more complex, as cloud use introduces an ever-expanding,
transient chain of custody for sensitive data and applications. Only when security is
addressed in a transparent and auditable way can enterprises and developers have:
• Confidence that their applications and workloads are equally safe
in multi-tenant clouds
• Greater visibility and control of the operational state of the
infrastructure, to balance the loss of physical control that comes
with this abstracted environment
• Capability to continuously monitor for compliance
Cloud consumers may not articulate the needs in this fashion. From their
perspective, there are key mega-needs, such as:
• How can I trust the cloud enough to use it?
• How can I protect my application and workloads in the
cloud—and from the cloud?
• How can I broker between device and cloud services to ensure
trust and security?
A cloud provider has to address these questions in a meaningful way for its
tenants. These needs translate into a set of foundational usage models for trusted
clouds that apply across the three infrastructure domains, as shown in Figure 2-1.


Figure 2-1. A framework for the trusted cloud (layers shown, from bottom to top: boot
integrity and protection for servers and the OS/VMM; data protection at rest, in motion,
and in execution; and run-time integrity and protection for VMs/workloads, spanning the
compute, network, and storage domains under common management)

1. Boot integrity and protection
2. Data governance and protection, at rest, in motion, and during execution
3. Run-time integrity and protection
The scope and semantics of these usage models change across the three
infrastructure domains, but the purpose and intent are the same. How they manifest and
are implemented in each of the domains could differ. For example, data protection in the
context of the compute domain entails protection (both confidentiality and integrity)
of the virtual machines at rest, in motion, and during execution; this applies to their
configuration, state, secrets, keys, certificates, and other entities stored within. The same
data-protection usage for the network domain has a different focus; it is on protection
of the network flows, network isolation, confidentiality on the pipe, tenant-specific IPS,
IDS, firewalls, deep packet inspection, and so on. In the storage domain, data protection
pinpoints strong isolation/segregation, confidentiality, sovereignty, and integrity. Data
confidentiality, which is a key part of data protection across the three domains, uses the
same technological components and solutions—that is, encryption.
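
As a minimal illustration of the encryption building block shared across the three domains, the sketch below applies AES-GCM authenticated encryption to a data blob at rest. The use of the Python cryptography package and the inline key generation are assumptions made for the example; in a trusted cloud the key would come from protected hardware storage or a key management service rather than being created next to the data.

# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_at_rest(key, plaintext, context):
    # Encrypt with AES-GCM, binding the ciphertext to a context label
    # (for example, a tenant or volume identifier) via associated data.
    nonce = os.urandom(12)                      # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
    return nonce + ciphertext                   # store the nonce with the ciphertext

def decrypt_at_rest(key, blob, context):
    nonce, ciphertext = blob[:12], blob[12:]
    # Decryption fails if either the data or the context label was tampered with.
    return AESGCM(key).decrypt(nonce, ciphertext, context)

# Illustrative only: a real deployment would obtain the key from a protected key store.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_at_rest(key, b"tenant configuration and secrets", b"tenant-42/volume-7")
assert decrypt_at_rest(key, blob, b"tenant-42/volume-7") == b"tenant configuration and secrets"

The same primitive serves confidentiality in each domain; what differs is where the keys live and which entity is allowed to use them.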


As a solution provider, methodical development and instantiation of these usage
models across all the domains will provide the necessary assurance for organizations
migrating their critical applications to a cloud infrastructure, and will enable
establishment of the foundational pillar for trusted clouds.
In the rest of this chapter, we provide an exposition of the usage models listed above.
We include enough definition of these four usage models for them to provide a broad
overview. Subsequent chapters go into greater detail on each of these models and offer
solutions, including the solution architecture and a reference implementation using
commercial software and management components.

The Boot Integrity Usage Model


Boot integrity represents the first step toward achieving a trusted infrastructure. This
model applies equally well to the compute, network, and storage domains. As illustrated
in Figure 2-1, every network switch, router, or storage controller (in a SAN or NAS) runs
a compute layer operating a specialized OS to provide networking and storage functions,
so this model enables a service provider to make claims about the boot integrity of the
network, storage, and compute platforms, as well as the operating system and hypervisor
instances running in them. As discussed earlier, boot integrity supported in the hardware
makes the system robust and less vulnerable to tampering and targeted attacks. It enables
an infrastructure service provider to make quantifiable claims about the boot-time
integrity of the pre-launch and the launch components. This provides a means, therefore,
to observe and measure the integrity of the infrastructure. In a cloud infrastructure, these
security features refer to the virtualization technology in use, which comprises two layers:
• The boot integrity of the BIOS, firmware, and hypervisor. We
identify this capability as trusted platform boot.
• The boot integrity of the virtual machines that host the workloads
and applications. We want these applications to run on trusted
virtual machines.

Understanding the Value of Platform Boot Integrity


To attain trusted computing, cloud users need systems hardened against emerging
threats such as rootkits. Historically, many have viewed these threats as someone else’s
problem or as a purely hypothetical issue. This position is untenable in view of today’s
threats.
The stealthy, low-level threats are real and they occur in actual operating
environments. The recent Mebromi BIOS rootkit low-level attack on a shipping platform
was an eye-opener, as it took the industry by surprise. Unfortunately, as is often the case,
it takes an actual exploit to change the mindset and drive change. And indeed, there
are many more IT managers and security professionals taking action to improve the
situation. As of 2012, a growing number of entities, including the U.S. National Institute
of Standards and Technologies (NIST), are developing recommendations for protecting
a system’s boot integrity. These recommendations contain measures for securing very
basic, but highly privileged platform components.


Given the crucial role played by the hypervisor as essential software responsible
for managing the underlying hardware and allocating resources such as processor,
disk, memory, and I/O to the guest virtual machines and arbitrating the accesses and
privileges among guests, it is imperative to have the highest levels of assurance that it
is uncompromised. This was the rationale for conducting the survey shown in Figure 2-2.
With this growing awareness and concern has come a corresponding growth in vendors
looking to define the solutions.

Figure 2-2. Survey results showing concerns over hypervisor integrity across regions

For the various devices/nodes across the infrastructure domains (compute, storage,
and network), the integrity of the pre-launch and launch environment can be asserted
anytime during the execution’s lifecycle. This is done by verifying that the identity and
values of the components have not changed unless there has been a reset or a reboot of
the platform by the controlling software. This assertion of integrity is deferred to a trusted
third party that fulfills the role of a trust authority, and the verification process is known
as trust attestation. The trust authority service is an essential component of a trusted
cloud solution architecture.
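
A simplified sketch of the verification step a trust authority performs is shown below: the measurements reported by a platform are compared against a whitelist of known-good values, and the platform is treated as trusted only if every expected component matches. The component names and digests are placeholders invented for the example; a real attestation service, as discussed in Chapter 4, would first verify that the report itself is signed by the platform's TPM before trusting its contents.

def verify_launch(reported, known_good):
    # Return (trusted, mismatches): trusted only if every expected component
    # reports exactly the known-good measurement.
    mismatches = [name for name, expected in known_good.items()
                  if reported.get(name) != expected]
    return (len(mismatches) == 0, mismatches)

# Placeholder known-good measurements a trust authority might hold for one platform type.
known_good = {"bios": "a3f1c2", "firmware": "77d0e9", "hypervisor": "5b8c44"}

# Placeholder measurements reported by a platform at attestation time.
reported = {"bios": "a3f1c2", "firmware": "77d0e9", "hypervisor": "000000"}

trusted, mismatches = verify_launch(reported, known_good)
print("Platform trusted:", trusted, "| mismatched components:", mismatches)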

The Trusted Virtual Machine Launch Usage Model


A trusted platform boot capability provides a safe launch environment for provisioning
virtual machines running workloads. This environment has the mechanisms to evaluate
the integrity of pre-launch and launch components on a platform, from the BIOS to the
operating system and hypervisor. The service provider thus attests to the trust-ability
of the launch environment. However, no specific claims can be made about the virtual
machines being launched, other than indicating that they are being launched on a
measured and attested hypervisor platform. Although virtual machine monitors (VMM)
or hypervisors are naturally good at isolating workloads from each other because they
mediate all access to physical resources by virtual machines, they cannot by themselves
attest and assert the state of the virtual machine that is launched.
The trusted virtual machine launch usage model applies the same level of trust-
ability to the pre-launch and launch environment of the virtual machines and workloads.
Each virtual machine launched on a virtual machine manager and hypervisor platform
benefits from a hardware root of trust by storing the launch measurements of the virtual
machines and by providing sealing and remote attestation capabilities. However, this requires virtualizing
the TPM, with a virtual TPM (vTPM) for each of the virtual machines. Each of these
vTPM instances then emulates the functions of a hardware TPM. Currently,
there are no real virtualized TPM implementations available, owing to the challenges
related to virtualizing the TPM. The difficulty lies not in providing the low-level TPM
instructions but in ensuring that the security properties are supported and established
with an appropriate level of trust. Specifically, we have to extend the chain of trust from
the physical TPM to each virtual TPM by carefully managing the signing keys, certificates,
and lifecycle of all necessary elements. An added dimension is the mobility of the virtual
machines and how these virtual TPMs would migrate with the virtual machines.
There are other ways of enabling a measured launch of virtual machines, such as
storing the measurements in memory as part of a trusted hypervisor platform without
the use of virtual TPMs but still ensuring that the chain of trust is extended from the
physical TPM. Irrespective of the design approach, day-to-day operations on virtual
machines—such as suspend and resume, creating snapshots of running virtual machines,
and playing them back on other platforms or live migration of virtual machines—become
challenging to implement.
There are no real production-quality implementations of these architectures.
There are a few academic and research implementations of vTPMs and other memory
structure–based approaches, each with its own pros and cons. Trusted virtual machine
usages are still evolving at the time of this writing; hence it’s not possible to be definitive.
Chapter 8 covers aspects of the measured VM launch and some architectural elements.
Chapter 3 covers in depth the matter of boot integrity and trusted boot of platforms and
the hypervisors, as well as the associated trusted compute pools concept that aggregates
systems so specific policies can be applied to those pools. The discussion also includes
the solution architecture, and a snapshot of industry efforts to support the enabling
of trusted compute pools. Chapter 4 covers the trust attestation or remote attestation
architecture, including a reference implementation.

The Data Protection Usage Model


This usage model is about protecting data in the cloud that is at rest, in motion, and
undergoing execution. It applies uniformly across infrastructure domains (compute,
storage, and network). In the compute domain, the protection is for the virtual machines
and workloads, which contain the applications, configurations, state, keys, and secrets,
along with the mechanisms needed to ensure confidentiality and integrity.

32

www.it-ebooks.info
Random documents with unrelated
content Scribd suggests to you:
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing


access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™


electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in paragraph
1.F.3, the Project Gutenberg Literary Archive Foundation, the owner
of the Project Gutenberg™ trademark, and any other party
distributing a Project Gutenberg™ electronic work under this
agreement, disclaim all liability to you for damages, costs and
expenses, including legal fees. YOU AGREE THAT YOU HAVE NO
REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF
WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE
FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of receiving it,
you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or entity
that provided you with the defective work may elect to provide a
replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth in
paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and distribution
of Project Gutenberg™ electronic works, harmless from all liability,
costs and expenses, including legal fees, that arise directly or
indirectly from any of the following which you do or cause to occur:
(a) distribution of this or any Project Gutenberg™ work, (b)
alteration, modification, or additions or deletions to any Project
Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of


Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the


assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.

Section 3. Information about the Project


Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,


Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many small
donations ($1 to $5,000) are particularly important to maintaining tax
exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.
