NoHype
ABSTRACT

Cloud computing is a disruptive trend that is changing the way we use computers. The key underlying technology in cloud infrastructures is virtualization – so much so that many consider virtualization to be one of the key features rather than simply an implementation detail. Unfortunately, the use of virtualization is the source of a significant security concern. Because multiple virtual machines run on the same server and since the virtualization layer plays a considerable role in the operation of a virtual machine, a malicious party has the opportunity to attack the virtualization layer. A successful attack would give the malicious party control over the all-powerful virtualization layer, potentially compromising the confidentiality and integrity of the software and data of any virtual machine. In this paper we propose removing the virtualization layer, while retaining the key features enabled by virtualization. Our NoHype architecture, named to indicate the removal of the hypervisor, addresses each of the key roles of the virtualization layer: arbitrating access to CPU, memory, and I/O devices, acting as a network device (e.g., Ethernet switch), and managing the starting and stopping of guest virtual machines. Additionally, we show that our NoHype architecture may indeed be "no hype" since nearly all of the needed features to realize the NoHype architecture are currently available as hardware extensions to processors and I/O devices.

Categories and Subject Descriptors

C.1.0 [Processor architectures]: General; D.4.6 [Operating systems]: Security and protection—invasive software

General Terms

Design, Management, Security

Keywords

Cloud computing, Multi-core, Security, Hypervisor, Virtualization

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ISCA'10, June 19–23, 2010, Saint-Malo, France.
Copyright 2010 ACM 978-1-4503-0053-7/10/06 ...$10.00.

1. INTRODUCTION

There is no doubt that "cloud computing" has tremendous promise. The end user of a service running "in the cloud" is unaware of how the infrastructure is architected – it just works. The provider of that service (the cloud customer in Fig. 1(a)) is able to dynamically provision infrastructure to meet the current demand by leasing resources from a hosting company (the cloud provider). The cloud provider can leverage economies of scale to provide dynamic, on-demand infrastructure at a favorable cost.

While there is debate over the exact definition, the main idea behind cloud computing, common to all approaches, is enabling a virtual machine to run on any server. Since there are many customers and many servers, the management of the infrastructure must be highly automated – a customer can request the creation (or removal) of a virtual machine and, without human intervention, a virtual machine is started (or stopped) on one of the servers. To take advantage of the economic benefits, the cloud providers use multi-tenancy, where virtual machines from multiple customers share a server.

Unfortunately, this multi-tenancy is the source of a major security concern with cloud computing, as it gives malicious parties direct access to the server where their victim may be executing in the cloud. The malicious party can actively attack the virtualization layer. If successful, as many vulnerabilities have shown to be possible [1, 2, 3, 4, 5, 6], the attacker has an elevated level of execution capabilities on a system running other virtual machines. The malicious party can then inspect the memory, exposing confidential information such as encryption keys and customer data, or even modify the software a virtual machine is running. Even without compromising the hypervisor, multi-tenancy exposes side-channels that can be used to learn confidential information [7]. These security risks make companies hesitant to use hosted virtualized infrastructures [8].

In fact, if not for this security concern, running applications in the cloud can actually be more secure than running them in private facilities. Commonly cited are the economic benefits that the economies of scale provide to the cloud infrastructure providers [9]. There is a similar principle with regard to security that is, however, not often discussed. In many organizations, physical security is limited to a locked closet which stores the servers in the company's office. Since cloud services are served out of large data centers, there are surveillance cameras, extra security personnel, and, by the very nature of the environment, access is much more controlled. That level of physical security is cost prohibitive for a single organization, but when spread out across many, it almost comes for free to the customer of the cloud. Similarly, in a private organization, network security of servers is commonly limited to a firewall. Cloud providers can install and maintain special intrusion detection (or prevention) systems which inspect packets for matches to known attacks that exploit bugs in commonly used software. As with physical security, these devices may be cost prohibitive for a single organization but can be provided by the cloud provider for a small cost.
Rather than attempting to make the virtualization layer more secure by reducing its size or protecting it with additional hardware [10], we instead take the position that the virtualization layer should be removed altogether. In this paper we propose getting rid of the virtualization layer (the hypervisor) running beneath each guest operating system (OS) in order to make running a virtual machine in the cloud as secure as running it in the customer's private facilities – and possibly even more secure. As a side benefit, removing the active hypervisor removes the 'virtualization tax' which is incurred when needing to invoke a hypervisor for many operations. We argue that today's virtualization technology is used as a convenience, but is not necessary for what cloud providers really want to achieve. We believe that the key capabilities are automation, which eases management of the cloud infrastructure by enabling the provisioning of new virtual machines on-the-fly, and multi-tenancy, which allows the cloud provider to gain the financial benefits of sharing a server.

To remove the virtualization layer, we present the NoHype architecture, which addresses, and renders unnecessary, each of the responsibilities of today's hypervisors^1. This architecture is an entire system solution combining processor technology, I/O technology, and software in order to realize the same capabilities enabled by today's virtualization in a cloud infrastructure, yet done without an active virtualization layer running under a potentially malicious guest operating system. The main components of the architecture revolve around resource isolation. The NoHype architecture automatically dedicates resources to each guest VM, and the guest VM has full control of these resources throughout its runtime on the physical machine. The important features of our architecture are:

• One VM per Core – Each processor core is dedicated to a single VM. This prevents interference between different VMs, mitigates side-channels which exist with shared resources (e.g., the L1 cache), and simplifies billing in terms of discrete compute units^2. Yet, multi-tenancy is still possible as there are multiple cores on a chip.

• Memory Partitioning – Hardware-enforced partitioning of physical memory ensures that each VM can only access the assigned physical memory and only in a fair manner.

• Dedicated Virtual I/O Devices – I/O device modifications to support virtualization enable each VM to be given direct access to a dedicated (virtual) I/O device. The memory management facilities along with chipset support ensure that only the authorized VM can access the memory-mapped I/O and only at a given rate.

^1 We will use the term hypervisor to mean hypervisor, VMM (virtual machine monitor) or other similar virtualization technology.
^2 A compute unit is a definition of a unit of computational resources that the customer purchases, e.g.: 1 GHz CPU core, 2 GB of RAM and 160 GB of disk space.

Removing the active virtualization layer brings significant security benefits. Doing so comes at the cost of not being able to (i) sell in extremely fine-grained units (e.g., selling 1/8th of a core) and (ii) highly over-subscribe a physical server (i.e., sell more resources than are available). However, we do not see either of these as a serious limitation. As processors have increasingly many cores on them, the granularity of a single core will become finer with each multi-core processor generation (e.g., a single core in a 4-core device is 25% of the system's compute power; in a 128-core device it is less than 1% of the compute power, and it is doubtful applications have such consistent load that expansion and contraction of compute units at a finer granularity than core units would even make sense). Also, attempting to over-subscribe a server is counter to the model of what cloud computing provides. As opposed to private infrastructures, where the goal is to maximize utilization of the server by using techniques that adjust the allocation of resources across virtual machines based on current usage [11], in cloud computing the customer is promised the ability to use a certain amount of resources. The customer chooses the amount of resources it needs to match its application. If the customer needs some resources, it pays for them; if not, the cloud provider can assign them to another customer – but does not sell to another customer the resources already promised to a customer. That said, we do not require that the entire cloud infrastructure use the NoHype architecture – a provider may offer a discount to customers willing to forgo either the security benefits or the resource guarantees, enabling the provider to over-subscribe a subset of servers.

The NoHype name comes from the radical idea of removing the hypervisor. It also implies that it is not hype, but indeed implementable. In this paper, we also discuss currently available hardware and show that nearly all of what is needed to realize the NoHype architecture is available from computer hardware vendors. As such, we are proposing a commercially viable solution. This does not mean that the current architectures are ideal – we also wish to stimulate the architecture community to think along the lines of additional hardware features that extend the concepts behind NoHype to increase performance and achieve an even greater level of security.

The remainder of the paper is organized as follows. In Section 2 we discuss the security threats that arise from the use of virtualization and formulate our threat model. In Section 3 we discuss the role of the virtualization layer. In Section 4 we propose our NoHype architecture which removes the virtualization layer. We then discuss the security benefits of moving to a cloud infrastructure which uses NoHype in Section 5. Then in Section 6 we take a look at currently available hardware technology and assess the possibility of realizing the NoHype architecture. Finally, we wrap up with related work in Section 7 and conclude in Section 8.

2. SECURITY THREATS

The problem we are addressing with the NoHype architecture is that of multi-tenancy in a hosted (public) cloud environment. Essentially, the goal is to make running a virtual machine in the cloud as secure as running it in the customer's private facilities. In this section we briefly explain why this is not the case with today's virtualization-based cloud architecture. We also detail our threat model.
2.1 Concerns with Current Virtualization

Shown in Figure 1(a) is a diagram depicting the cloud customer's interaction with the cloud provider to control VM setup, as well as an end user then using the service offered by the cloud customer. The cloud customer makes a request with a description of the VM (e.g., provides a disk image with the desired OS) to the cloud provider's cloud manager. The cloud manager chooses a server to host the VM, fetches the image from storage, and forwards the request to the control software running on the server to be handled locally. Once the cloud customer's VM is running, end users can begin using the cloud customer's service (end users can be external users, as in the case of a web service, or internal customers, as in the case where the cloud is used to expand an internal IT infrastructure). Shown in Figure 1(b) is a generic diagram of one of the servers in today's infrastructures – consisting of a hypervisor along with several virtual machines. The virtual machine labeled "root context" is a special virtual machine which contains control software that interacts with the cloud manager and has elevated privileges that, for example, allow it to access devices and control the startup and shutdown of virtual machines. As such, it is considered part of the virtualization layer (along with the hypervisor)^3. The virtual machines labeled VM1 and VM2 are the guest virtual machines running on the system and can be unmodified, as the hypervisor provides the abstraction that they are running directly on the processor – in a hosted infrastructure, these would be the customer's virtual machines.

^3 Note that the functionality of the "root context" could be included in the hypervisor for performance benefits. That has no impact on our argument.

VM1 and VM2 should be completely isolated from one another. They should not be able to (i) inspect each other's data or software, (ii) modify each other's data or software, or (iii) affect the availability of each other (either by hogging resources or triggering extra work to be done by each other). As the virtualization layer is privileged software in charge of administering the physical system, a compromised virtualization layer can affect the running VMs. As it manages the virtual to physical mapping of memory addresses, confidential data may be exposed or the executable or data can be modified. As it manages the scheduling of VMs, a virtual machine can be interrupted (switching to the hypervisor's code) – exposing the current registers to inspection and modification, and allowing control flow modification (e.g., by making the virtual machine return to a different location in the executable).

Unfortunately, securing the virtualization layer (hypervisor plus root context) is getting more difficult as hypervisors become more complex. Xen's hypervisor is about 100k lines of code and the dom0 kernel can be 1500k lines of code [12]. While the size of VMWare's solution is not publicly available, it is likely to match or exceed Xen's size. This complexity makes the current virtualization solutions difficult to verify and vulnerable to attacks – numerous vulnerabilities have already been shown in [1, 2, 3, 4, 5, 6, 7]. As the virtualization layer is complex, having many responsibilities as discussed in Section 3, we fully expect many new vulnerabilities to emerge.

To exploit one of these vulnerabilities, an attacker needs only to gain access to a guest OS and run software that can attack the hypervisor or root context (via the hypervisor) – since the guest OS interacts with both for many functions, there is a large attack surface. Getting access to a guest OS is simple, as the malicious party can lease a VM directly from the cloud provider. Further, if the attacker is targeting a specific party (e.g., company A wanting to disrupt its competitor, company B), it can check whether its VM is located on the same physical server as the targeted victim's VM using network-based co-residence checks such as matching small packet round-trip times (between the attacker's VM and the victim's VM) and numerically close IP addresses [7].

To keep a guest VM from being able to exploit the vulnerabilities of the hypervisor, in the NoHype architecture we propose removing the hypervisor completely, as shown in Figure 1(c), by removing extraneous functionality not needed in the cloud computing scenario and by transferring some functionality to virtualization-aware hardware and firmware. We acknowledge that the hardware and firmware may not be completely free of bugs; however, we feel that the extensive testing and relatively non-malleable nature of the hardware and firmware configurations (when compared to software) makes them more difficult to attack. Of course, we will need to retain system management software used to start and stop VMs at the behest of the cloud manager. This software will be privileged; however, to attack it will require going through the cloud manager first – guest VMs cannot directly invoke system management software in our NoHype architecture.

2.2 Threat Model

First, we assume the cloud provider is not malicious. The cloud provider's business model centers around providing the hosted infrastructure, and any purposeful deviation from the stated service agreements would effectively kill its reputation and shut down its business. To that end, we assume that sufficient physical security controls are employed to prevent hardware attacks (e.g., probing on the memory buses of physical servers) through surveillance cameras and restricted access to the physical data center facilities.

Second, we make no assumptions about the security of guest operating systems. The cloud provider can restrict what OS the customer can run, but even that OS can have security holes. Therefore, we simply assume customers can run whatever software they desire.

Third, the security and correctness of the cloud management software is out of scope for this paper – we assume it is secure. The cloud management software runs on dedicated servers and is the interface that the cloud customers use to request and relinquish resources. From this, we also assume the system management software (the system manager shown in Figure 1(c) and the core managers) running on each server is also secure, as the only communication it has is with the cloud management software. The guest VMs are isolated from it (no shared memory, disk or network), and do not interact with it.

Figure 1: (a) High-level view of the cloud customer's interaction with the cloud provider's management to start a VM and the end user's interaction with the service offered by the cloud customer (a web service in this case). (b) Generic virtualization of one server – arrows indicate interaction between the guest OS and hypervisor, host OS and the hypervisor, guest OS and the host OS (via the hypervisor), and the host OS and the I/O devices. (c) A server in the NoHype architecture after the removal of the hypervisor: the direct interaction between VMs and management software is removed.

3. THE VIRTUALIZATION LAYER'S ROLE

We propose the NoHype architecture which removes the virtualization layer in the multi-tenant server setting. To better understand the implications of this proposal, we need to first understand the role the virtualization layer plays in today's technology. Below, we discuss the many functions of today's virtualization layers (as used in cloud infrastructures).
Scheduling Virtual Machines: Since in today's typical virtualized environment multiple virtual machines are running on a single processor, the hypervisor needs to arbitrate the access to the processor cycles. Much as an OS controls the CPU allocation of running processes, the hypervisor controls the CPU allocation of running virtual machines. Whenever a timer expires, I/O is performed, or a VM exit occurs, the hypervisor's scheduler routine is run to decide which VM to run next.

Memory Management: The hypervisor takes care of managing the physical memory available on the system. Memory is a limited resource which the hypervisor needs to arbitrate and share among the guest VMs. To help with the illusion that each guest VM has its own physical memory, the hypervisor presents each guest VM with its guest physical memory. In order to maximize utilization across all of the VMs, the hypervisor can coax one VM to page some memory to disk in order to be able to allocate that physical memory to another VM [11]. The hypervisor then maps the guest physical memory to the host physical memory, which is the actual physical memory available. Through this remapping of memory, the hypervisor is able to achieve isolation between the guest VMs. Each VM thinks that it has some physical memory, the guest physical memory, and is only able to access that memory. This prevents VMs from accessing the memory of other VMs.

Emulating I/O Devices and Arbitrating Access to Them: Access to the physical devices is essential, as the I/O of the system is how a program interacts with the real world. Whether sending packets to a remote computer or writing to storage, programs require I/O. As the machine is shared, the virtualization layer must arbitrate accesses to each device. Additionally, the virtualization layer can present a generic view of the underlying device rather than the actual device in order to enable a virtual machine to run on servers with equivalent but different devices. In this scenario, the root context virtual machine runs the real device drivers and emulates the generic device the VMs access.

Network Packet Processing (switching, NAT, and access control): VM-to-VM communication is essential to enable communication between the services running inside the VMs. Since VMs can run on the same server, VM-to-VM communication may not need to go through the cloud provider's network. Because of this, modern virtualization technology typically runs a software Ethernet switch in the root context virtual machine, with similar functionality as separate hardware switches would have [13].

Starting/Stopping/Migrating Virtual Machines: The hypervisor emulates the underlying hardware, giving each virtual machine the view that it is running on its own machine. Management software in the root context can interact with the hypervisor to create a new virtual machine and control the power to the virtual machine (e.g., power on or reset). The hypervisor emulates all of the aspects of the server, enabling the OS to run through its boot sequence. The management software provides an interface to a central cloud manager which controls the creation and placement of virtual machines based on requests from customers, typically through a web interface. The management software may also be involved in live migration – the process of moving a running VM from one physical server to another while maintaining the appearance that the VM is continuously active to those that are interacting with it.
4. NOHYPE ARCHITECTURE: REMOVING THE HYPERVISOR

Our NoHype architecture removes the virtualization layer yet retains the management capabilities needed by cloud infrastructures. To do this, recall the major functions of the virtualization layer: arbitrating access to memory, CPU, and devices, providing important network functionality, and controlling the execution of virtual machines. Our architecture, shown in Figure 2, addresses each of these issues in order to remove the virtualization layer. Note that the cloud architecture remains unchanged, where the servers in Figure 1(a) are now NoHype-enabled servers.

The main point is that each of the guest VMs runs directly on the hardware without an underlying active hypervisor. Of course we cannot remove management software completely – there is the need for a management entity that can start and stop a virtual machine based on requests from the cloud provider's management software. Unlike today's virtualization architectures, in the NoHype architecture, once a VM is started it runs uninterrupted and has direct access to devices. The guest OS does not interact with any management software which is on the server, and there are no tasks that a virtualization layer must do while the VM is running. In the following sub-sections we will discuss each of the roles of a typical hypervisor, and how the NoHype architecture removes the need for it.

4.1 CPU: One VM per core

In the NoHype architecture, each core can run only one VM. That is, cores are not shared among different guest VMs, which removes the need for the active VM scheduling done by the hypervisor. As an added benefit, dedicating a core to a single VM eliminates the potential software cache-based side channel which exists when sharing an L1 cache [14, 15].

Dedicating a core to each VM may not seem reasonable, as many associate virtualization with maxing out resources on each physical server to run as many VMs on it as possible. However, we believe that is counter to (i) the trend in computing towards multi-core, and (ii) the cloud computing model. The trend in processors is to increase the number of cores on the chip with each generation (as opposed to using clock frequency as the main source of performance improvements). Already, 8-core devices are available today [16] and there are predictions of 16-core devices becoming available in the next few years [17]^4. With each generation, the number of VMs a server can support would grow with the number of cores, and sharing of cores to support more VMs will not be necessary.

^4 We're referring to server-class processors. Network processors and other specialized chips have on the order of 100 cores already.

Furthermore, we view running extra VMs to fill in for idle VMs (i.e., over-subscribing) to be counter to the model of cloud computing. The cloud infrastructure is dynamic in nature and the number of virtual machines needed by the customer can be scaled with the demand of the application. Therefore, idleness is handled by the customer by shutting down some of its virtual machines, instead of the cloud provider over-subscribing. While such over-subscribing can be done at a discount for customers who do not value security, it may not be necessary for customers who demand secure cloud computing, especially with tens or hundreds of cores in future many-core chips.

4.2 Memory: Hardware support for partitioning and enforcing fairness

Unlike the processor, which is already moving toward divisible units, memory is still a unified resource. In the NoHype architecture, we propose using partitioning of physical memory. Here, we capitalize on the fact that modern servers are supporting more and more RAM – 256 GB is not unheard of [18].

The ideal way to partition memory is to give each guest OS a view of memory where the OS has a dedicated and guaranteed fraction of physical memory (the guest physical memory) on the host system. Each VM can be assigned a different amount as decided by the customer when requesting a new VM to be started – any 'underutilization' is within each VM, based on the customer requesting, and paying for, more than was needed. Given a portion of the memory, the OS will then be able to manage its own memory as it does today (managing the mapping from virtual memory locations to physical memory locations and swapping pages to/from disk as needed). Hardware support in the processor then performs the mapping between the guest physical memory address and the host physical memory address and restricts memory operations to the assigned range. In Figure 2, this responsibility falls to the multi-core memory controller (MMC) and the hardware page table mechanisms (inside each core), which will have built-in support for performing these re-mappings.
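The paper does not prescribe the exact remapping structure the hardware would use. As a purely illustrative sketch (our assumption, not a mechanism defined here), a per-core base-and-bound record is enough to express such a static partition; all names below are hypothetical.

    /* Illustrative only: a base-and-bound style remapping that the MMC or
     * per-core page-table hardware could apply.  Written once by the core
     * manager when the VM is started, then never modified. */
    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_CORES 16

    typedef struct {
        uint64_t host_base;   /* start of the host physical region given to this core's VM */
        uint64_t size;        /* bytes of guest physical memory assigned to the VM */
    } mem_partition_t;

    static mem_partition_t partition[NUM_CORES];

    /* Translate a guest physical address issued by the VM running on 'core'.
     * Accesses outside the assigned range fault instead of translating. */
    static bool translate_gpa(int core, uint64_t gpa, uint64_t *hpa)
    {
        if (gpa >= partition[core].size)
            return false;                         /* fault; the core manager aborts the VM */
        *hpa = partition[core].host_base + gpa;
        return true;
    }

Because the assignment never changes while the VM runs, such a table can be treated as read-only after VM start, which is what lets the guest OS manage its own memory with no virtualization layer underneath.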
Not only must access to memory be partitioned, it must be fair. In the context of multi-tenant servers, rather than optimizing for memory bus utilization, the MMC must be designed to provide fairness among each of the VMs. As there can only be one VM assigned to a core, the MMC can instead provide fairness among each of the cores. This greatly simplifies the MMC, as it can be designed for the number of cores the device has, rather than the number of VMs that may run on the system (which in traditional virtualization is undetermined and variable, and can be many times more than the number of cores). Including fairness in the multi-core memory controller and running one VM per core creates a system where one cloud customer's VM is greatly limited in how it can affect the performance of other cloud customers' VMs.

4.3 Devices: Per-VM virtualized devices and rate-limited I/O

An additional aspect of the physical system that needs to be partitioned is the access to I/O devices. In today's virtualization, operating systems interact with virtualized devices in the virtualization layer. This hides the details of the actual device and also allows the hypervisor to arbitrate access to the real device. Therefore, when the guest operating system tries to access a device, this causes a switch over to the hypervisor and then to the root context VM. In the NoHype architecture, each guest operating system is assigned its own physical device and given direct access to it.

Of course, this relies on the assumption that there are enough devices to assign at least one per virtual machine. We believe that the view of multiple physical devices should be realized by the device itself supporting virtualization – that is, the device would be a single physical device, but tell the system that it is N separate devices. Each VM will interact only with the virtual device(s) assigned to it. As seen in Figure 2, a virtual device can have one or more queues dedicated to it. This forms the interface that is seen by the associated VM. The primary devices needed in a cloud computing scenario are the network interface card (NIC) and the disk. Other devices, such as graphics processing units (GPUs), could also be virtualized, thus removing a need for having N separate devices.

Providing this view requires some support in the MMC and I/O MMU^5. For writes/reads to/from the device initiated by the cores, each device will be mapped to a different range in memory (as is done today) and each would be allowed to access only its memory ranges. This would enable the guest OS to interact directly (and only) with its assigned devices. From the device side, the I/O MMU would enforce the DMA to/from memory, so a device assigned to one core would only be able to access that core's memory range. For interrupts, since each VM will be assigned dedicated access to a given device, the interrupts associated with that device will be assigned to the guest virtual machine such that when an interrupt occurs, the guest OS handles it, not the virtualization layer.

^5 The I/O MMU (Input/Output Memory Management Unit) is responsible for enforcing memory protection for transactions coming from I/O devices.

One complication introduced by enabling direct access to devices is that the bandwidth of the shared I/O bus (e.g., PCIe) is limited. When all device accesses go through the virtualization layer, the single piece of software can arbitrate access and prevent any single VM from overloading the shared bus. However, giving each VM the ability to access the devices directly means that there is no ability for software to prevent a VM from overloading the I/O bus. As such, in the NoHype architecture rate-limited access to each I/O bus is achieved via a flow-control mechanism where the I/O device controls the rate of transmission. Each VM's assigned device is configured with the rate at which the device can be accessed by the VM. The device uses this to limit the amount of data it sends over the peripheral bus and uses a feedback signal to limit the amount of data sent to it by the I/O MMU.
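The text above fixes only the requirement – the device enforces a configured per-VM rate in both directions – not the policy. One simple policy that fits this description is a credit (token-bucket) scheme maintained per virtual device; the sketch below is our illustration under that assumption, with hypothetical names.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t credits;        /* bytes the virtual device may currently transfer */
        uint64_t max_credits;    /* bucket depth (burst allowance) */
        uint64_t refill_rate;    /* bytes added per tick = configured rate for this VM */
    } vdev_limiter_t;

    /* Called by device firmware on a periodic tick (e.g., every millisecond). */
    void limiter_tick(vdev_limiter_t *l)
    {
        l->credits += l->refill_rate;
        if (l->credits > l->max_credits)
            l->credits = l->max_credits;
    }

    /* Called before the virtual device places 'bytes' on the peripheral bus, or
     * before it signals the I/O MMU (over the feedback channel) to accept 'bytes'. */
    bool limiter_admit(vdev_limiter_t *l, uint64_t bytes)
    {
        if (bytes > l->credits)
            return false;        /* defer the transfer until more credits accumulate */
        l->credits -= bytes;
        return true;
    }

Because the limiter lives in the device rather than in software on the server, no hypervisor has to run for the rate limit to be enforced.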
4.4 Networking: Do networking in the network, not on the server

In the NoHype architecture, we argue that the virtualization layer should be removed and instead its functionality should be provided through modifications to system components. For networking, this means that the Ethernet switches in the data center network should perform the switching and security functions, not a software switch in the virtualization layer. This is consistent with giving VMs direct access to the network interfaces – because they bypass the virtualization layer, the software Ethernet switch is also bypassed. Doing so has numerous benefits: (i) it simplifies management, as it removes an extra type of switch and layer in the switch hierarchy, (ii) it frees up the processor on the server, as it no longer has to perform, in software, Ethernet switching for an increasingly large number of VMs, and (iii) it allows the use of all of the features of the Ethernet switch, not just the ones also supported by the software switch.

Further, we argue that the software Ethernet switch is not doing anything special. Instead, it is merely used as an initial solution that enables using the Ethernet switches that are currently available. These switches are not designed for virtualized infrastructures, but instead designed for enterprise networks. For example, because of Ethernet's history as a shared medium, Ethernet switches drop packets that would be forwarded out of the same port as they arrived – a situation which would occur in virtualized infrastructures when two communicating VMs are located on the same server. Integrating support into hardware Ethernet switches for capabilities such as allowing a packet to be forwarded out of the same port as it was received would eliminate the need for the software switch.

While one may argue that requiring packets to go to the dedicated Ethernet switch has performance limitations, we note that this is only the case for the special situation of communication between two VMs located on the same server. In the cloud environment, the customers' intended use of VMs is not known to the cloud provider. Making assumptions about the amount of traffic between co-resident VMs is equivalent to attempting to over-subscribe the system, which we argue is not a good idea, as the customer is paying for guaranteed resource usage. By not over-subscribing, the peripheral bus bandwidth and the bandwidth from the NIC to the top-of-rack switches must be provisioned to be able to support the guaranteed traffic bandwidth. With the software Ethernet switch, latency will be reduced for co-resident VM communication. However, it incurs the extra latency of going through an extra, lower-performance (since it is in software) switch for all other packets. Instead, by bypassing the software Ethernet switch we are providing better average latency. While the cloud provider could attempt to maximize this situation by placing the VMs from a given customer on a single server, doing so comes at the cost of increasing the impact that a single server failure will have on that customer.

4.5 Starting/Stopping/Migrating Virtual Machines: Decouple VM management from VM operation

In the NoHype architecture we removed the hypervisor, yet we still need the ability to start and stop virtual machines on demand. To do this, we decouple the VM management from the VM operation – the management code is active before a VM is started and after it is stopped, but during the life of the VM, the guest OS never interacts with the management code on the core, or on the server (i.e., with the system manager running on a separate core). When a server starts up, one core is randomly selected as the bootstrap processor, as is done in today's multi-core systems. The code that starts executing on that core is the trusted NoHype system manager^6. The system manager initially starts up in hyper-privileged mode to set up the server. It is then responsible for accepting commands from the cloud manager software (via its network interface) and issuing commands to individual cores to start/stop guest VMs via inter-processor interrupts (IPIs). The sending, and masking, of IPIs is controlled through memory-mapped registers of the core's local APIC^7. The memory management can be used to restrict the access to these memory regions to only software running in hyper-privileged mode, and therefore prevent VMs from issuing or masking IPIs. Upon receiving an IPI, the core will jump to a predefined location to begin executing the core manager code to handle the VM management. Figure 3 summarizes the actual procedures for starting and stopping VMs, with a detailed description in the following paragraphs.

Figure 3: Outline of steps used to start, stop and abort a VM.

^6 The whole system manager does not have to be trusted, but for clarity of presentation and space reasons we will not explore issues of trust of the system manager here and hence we make this simplifying statement.
^7 APIC is the Advanced Programmable Interrupt Controller.

Starting a VM: Before a VM is started, the system manager must receive a command from the cloud manager. The instructions to the cloud manager are issued by the customer, who specifies how many VMs and of what type he or she wants. The cloud manager then provides both a description of the VM (e.g., the amount of memory it is assigned) and the location of the disk image to the system manager. Next, the system manager maps the to-be-assigned VM's memory and disk into its space to allow the manager to access the resources and initialize them. The disk image is then downloaded by the manager and stored on the local disk, and the memory assigned to the core allocated to this VM is zeroed out. This brings the guest OS image into the VM. Next, the to-be-assigned VM's disk and memory are un-mapped from the system manager's space so it no longer has access to them. Finally, a 'start' inter-processor interrupt (IPI) is issued to the core where the VM is to start. Upon receiving a start IPI the core comes online and starts executing code which is stored at a predefined location. The code that executes is the core manager, which starts running in the hyper-privileged mode and initializes the core (sets up the memory mapping and maps the virtual NIC and disk devices). To start the guest OS, the core manager performs a VM exit which switches the core out of the hyper-privileged mode and starts the execution of the guest OS from the image now stored locally on the disk. On bootup the guest OS reads the correct system parameters (e.g., the amount of memory that it has been assigned) and starts execution.
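The start-up sequence can be summarized as two cooperating routines, one in the system manager and one in the core manager. The C-style outline below is a sketch under the assumptions of this section; every type and helper name is a hypothetical placeholder standing in for firmware or hardware operations, not an existing API.

    #include <stdint.h>

    typedef struct { const char *image_location; uint64_t mem_bytes; } vm_desc_t;
    enum ipi { IPI_START, IPI_STOP };

    /* Hypothetical helpers standing in for the operations described in the text. */
    void map_vm_resources(const vm_desc_t *d, int core);
    void unmap_vm_resources(const vm_desc_t *d, int core);
    void zero_assigned_memory(int core);
    void fetch_disk_image(const char *location, int core);
    void send_ipi(int core, enum ipi kind);
    void setup_guest_physical_mapping(int core);
    void map_virtual_nic_and_disk(int core);
    void enter_guest(int core);

    /* System manager side: invoked after a 'start VM' command from the cloud manager. */
    void system_manager_start_vm(const vm_desc_t *desc, int core)
    {
        map_vm_resources(desc, core);                  /* temporarily map the VM's memory and disk */
        zero_assigned_memory(core);                    /* clear the memory assigned to this core */
        fetch_disk_image(desc->image_location, core);  /* store the guest image on the local disk */
        unmap_vm_resources(desc, core);                /* drop access before the guest ever runs */
        send_ipi(core, IPI_START);                     /* wake the core that will run the VM */
    }

    /* Core manager side: entered at a predefined location when the start IPI arrives. */
    void core_manager_on_start_ipi(int core)
    {
        setup_guest_physical_mapping(core);  /* initialize the guest-physical to host-physical tables */
        map_virtual_nic_and_disk(core);      /* hand the core its dedicated virtual devices */
        enter_guest(core);                   /* leave hyper-privileged mode and boot the guest OS */
    }

Note that the system manager releases its mappings before the start IPI is sent, so once the guest is running, no software on the server retains access to the VM's memory or disk.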
Stopping a VM: A guest OS can exit when a stop command is issued by the system manager (e.g., the system manager receives a message from the cloud manager that the customer does not need the VM anymore or the instance-hour(s) purchased by the customer have been used up). In this situation, the system manager sends a 'stop' IPI to the core running the VM that is to be shut down. This interrupt causes the core to switch to the hyper-privileged mode and jump to a predefined code location in the core manager's code. Next, the core manager optionally zeros out the memory space assigned to the VM, and potentially the assigned disk space if the customer's VM uploads its data to some storage before termination. The core manager also optionally saves the disk image of the VM, depending on the Service Level Agreement (SLA) for continuing service of this VM at a later time. Finally, the core manager puts the core in sleep mode (to wait for the next start IPI) and the system manager notifies the cloud manager of completion.

Aborting a VM: A guest OS can be aborted when the guest OS performs an illegal operation (e.g., trying to access memory not assigned to it)^8. An illegal operation will cause a trap, which in turn causes entrance into the hyper-privileged mode and execution of code located at a predefined location in the core manager. At this point, the core manager sends an 'end' IPI to the system manager to inform the system manager that the VM exited abnormally. Optionally, the core manager can zero out the memory and the disk to prevent data leaks. The memory and I/O are un-mapped. The core is then put into sleep mode (waiting for a start IPI) and the system manager notifies the cloud manager of the aborted VM's status change.

^8 Illegal operations also include the use of some processor instructions, which may require the OS to be altered. For example, we do not support nested virtualization, so any attempts to do so by the OS would be illegal.
Live Migration of a VM: A live migration operation is initiated by the cloud manager, which instructs the system manager on the source server to migrate a particular VM to a given target server. In a simplistic implementation, the system manager would send a 'migrate' IPI to the core on which the VM is running. The interrupt handler located in the core manager would stop the execution of the VM, capture the entire state of the VM, and hash and encrypt it. The system manager would then take the state and send it to the target server, where the system manager would send an IPI to the core manager, which would check the hash and decrypt the state, re-start the new VM and continue execution.

Of course, this process can take a while, and in order to minimize downtime, optimizations using iterative approaches have been developed [19]. In these approaches, a current snapshot is taken, but the VM is not stopped. When the state is done transferring, the difference between the current snapshot and the previous snapshot is sent. The process may be repeated until the difference is sufficiently small to minimize actual downtime. In NoHype, we have the memory management unit track which pages have been modified. This enables the system manager to periodically send an IPI to obtain only the differences (for the first time accessing it, the difference will be a comparison to when the VM started, so it will be an entire snapshot). With this, we do not introduce a hypervisor which is actively involved. For each iteration, the system manager forwards any data to the target server – on the target server the system manager forwards data to the core manager, and the core manager updates the memory, disk and other state it receives. The last step is for the core manager on the source server to send one last set of differences, shutting down the VM on the last 'migrate' IPI. After all the state is replicated on the target server, the system manager on the target sends a 'start' IPI to start the VM. It should be noted that while the downtime can be reduced from what would be seen with the simplistic approach, it cannot be eliminated altogether. In Xen, this may range from 60 ms for an ideal application to 3.5 s for a non-ideal case [19]. The nature of cloud computing makes it such that (i) how long the downtime will actually be for a given VM cannot be known, and (ii) the downtime the customer is willing to tolerate is not known. As such, an alternate approach would be to involve the customer in the process, enabling them to gracefully 'drain' any ongoing work or perform the migration themselves [20].

For all four actions, during the runtime of the guest OS, the guest OS does not invoke either the core manager or the system manager. Even during a live migration, no action the guest OS performs causes any management software to run – each iteration is initiated by the system manager. Hence, the guest OS has no opportunity to directly corrupt these trusted software components. Interaction with the cloud manager is from servers external to the cloud infrastructure (i.e., the customer's server). Securing this interaction is not the focus of this paper.

5. SECURITY BENEFITS OF NOHYPE

Because of the numerous benefits that hosted cloud infrastructures provide, many organizations want to use the cloud. However, concerns over security are holding some of them back. The NoHype architecture targets these concerns, creating an architecture where customers are given comparable security to running their VMs in their own virtualized infrastructure – and even improved security when considering the extra physical security and protection against malware that cloud providers can provide.

To achieve a comparable level of security, no VM should be able to (i) affect the availability of another VM, (ii) access the data/software of another VM (either to read or modify), or (iii) learn confidential information through side channels. Note that this does not mean that the cloud customer is completely protected, as vulnerabilities in the cloud customer's own applications and operating system could still be present – end users can attack a vulnerable server independent of whether it is running in the cloud or not.

Availability: Availability can be attacked in one of three ways in current hypervisor-based virtualization architectures – (i) altering the hypervisor's scheduling of VMs, (ii) interrupting a core running a VM, or (iii) performing extraordinary amounts of memory or I/O reads/writes to gain a disproportionate share of the bus and therefore affect the performance of another VM. By dedicating a core to a single VM and removing the hypervisor from making any scheduling decisions, we eliminate the first attack. With hardware masking for inter-processor and device interrupts, there is no possible way to interrupt another VM, eliminating the second attack. Through additions to the multi-core memory controller for providing fairness [21] and through the chipset to rate-limit access to I/O, we eliminate the third attack. As such, with the NoHype architecture, a VM has no ability to disrupt the execution of another VM.
Confidentiality/integrity of data and software: In order to modify or inspect another VM's software or data, one would need to have access to either registers or physical memory. Since cores are not shared and since there is no hypervisor that runs during the entire lifetime of the virtual machine, there is no possible way to access the registers. Memory access violations are also mitigated. Since the NoHype architecture enforces memory accesses in hardware, the only way a VM could access physical memory outside of the assigned range would be to alter the tables specifying the mapping of guest physical addresses to host physical addresses. To do so would require compromising the system manager software and altering the code performing the start/stop functions. This would first require compromising the cloud manager, which we assume is trusted, as the system manager only interacts with the cloud manager and the core managers, and is isolated from the guest VMs.

Side-channels: Side-channels exist whenever resources are shared among multiple pieces of software. Ignoring any side-channels that are based on physical analysis (e.g., examining power usage), which falls outside of our threat model, side-channels are typically based on the timing of operations (e.g., hits in caches are shorter than misses, so one can determine if a particular cache line was accessed by timing an access to a specific memory location). While completely eliminating side-channels is a near impossible feat for a shared infrastructure, reducing them to be of little use is possible. In the NoHype architecture, since L1 caches are not shared, some of the most damaging attacks to date have been eliminated (using cache-based side-channel attacks to infer a cryptographic key) [22]. Since NoHype provides fair access to memory [21] and rate-limited accesses to I/O, a bandwidth-based side channel is likely very limited in each of those cases as well.

6. CURRENT HARDWARE SUPPORT

In Section 4, we presented our NoHype architecture. In this section, we examine current technology and whether it is possible to actually realize NoHype. A lot of hardware extensions, both in the processors and in the devices, have been introduced in recent years which aid computing in the virtualized model. Interestingly, while these extensions are mostly designed to address the performance overhead, we are able to leverage these in the NoHype architecture that we propose. We will present extensions found in Intel products – similar solutions are available from AMD and NoHype should be implementable on AMD platforms as well.

6.1 Isolating Each Core

As multi-tenancy requires running multiple virtual machines on the same processor, they need to share the processor resources. In order to remove the need for the hypervisor to schedule each VM, the NoHype architecture assigns a single VM per core. To fully isolate each VM from other VMs, however, we need to be able to restrict a core running one VM from sending IPIs to other cores. This can be achieved through use of the advanced programmable interrupt controller (APIC). Modern x86 processors have an interrupt mechanism based on the APIC. As shown in Figure 4, each core has a local APIC and there is a single I/O APIC for the entire system.

Figure 4: Selected parts of a modern x86 architecture which are relevant to NoHype.

The APIC can be used by a core to send an interrupt to another core. These IPIs^9 are generated by programming the interrupt command register (ICR) in the core's local APIC. By writing to this register, a message will be sent over the APIC bus with the destination field of the ICR indicating which core should receive the interrupt. While one VM can attempt to interrupt another VM with this mechanism, each core can mask out maskable interrupts that may be sent from other cores, to prevent them from interrupting the core.

^9 The "processor" in "inter-processor interrupt" actually refers to a processor core.
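For concreteness, the sketch below shows how software running in hyper-privileged mode could send a fixed-vector IPI by programming the memory-mapped ICR of the local xAPIC. It assumes the default APIC base address and the conventional xAPIC register offsets; it is our illustration of the mechanism described above, not code from the paper.

    #include <stdint.h>

    #define LAPIC_BASE   0xFEE00000UL   /* default xAPIC MMIO base address */
    #define LAPIC_ICR_LO 0x300          /* vector and delivery mode; the write triggers the IPI */
    #define LAPIC_ICR_HI 0x310          /* destination APIC ID in bits 31:24 */

    static inline void lapic_write(uint32_t reg, uint32_t val)
    {
        *(volatile uint32_t *)(LAPIC_BASE + reg) = val;
    }

    /* Send a fixed-delivery IPI with the given vector to the core whose local
     * APIC ID is 'dest'.  Writing the low half of the ICR sends the message. */
    void lapic_send_ipi(uint8_t dest, uint8_t vector)
    {
        lapic_write(LAPIC_ICR_HI, (uint32_t)dest << 24);
        lapic_write(LAPIC_ICR_LO, vector);   /* delivery mode 000b = fixed */
    }

Since the ICR is reached through these memory-mapped registers, the same memory partitioning that protects ordinary RAM can deny a guest VM any mapping of this region, which is exactly how NoHype keeps guests from issuing or masking IPIs.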
6.2 Memory Partitioning

A key to isolating each virtual machine is making sure that each VM can access its own guest physical memory and not be allowed to access the memory of other VMs. In order to remove the virtualization layer, we must have the hardware enforce the access isolation. Here, we capitalize on the paging protection available in modern processors, along with advances in providing these protections to each VM.

The Extended Page Tables (EPT) mechanism supported by the VT-x technology from Intel can be used to enforce the memory isolation. The EPT logically adds another layer to the familiar virtual-to-physical memory translation page tables. The OS running on a core manages the translation from guest virtual to guest physical addresses, using the familiar mechanisms (e.g., the CR3 register is used to set the page directory location). The EPT mechanisms are then used to translate from the guest physical to the host physical address – the set of page tables which define this mapping is initialized by the core manager. Once initialized, the translation does not have to be modified throughout the lifetime of the OS instance running on a given core. A hardware page table walker uses the tables to automatically perform the translation. Access to a virtual address not currently in the guest physical memory will cause a page fault that is handled by the guest OS – a hypervisor layer is not needed. Access to memory outside of the guest physical address range pre-assigned to the VM will cause a new fault which is handled by the core manager and causes the VM to be killed – as any action that would exit to a virtualization layer is assumed by the NoHype architecture to be due to a misbehaving guest OS.
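Conceptually, every guest memory reference now goes through two table walks, both performed by hardware: the guest's own page tables (rooted at the guest's CR3) and the EPT built once by the core manager. The pseudocode below is a simplified illustration of the resulting behavior, with hypothetical helper names; the real walk happens entirely in the page-table walker.

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint64_t gva_t, gpa_t, hpa_t;

    /* Hypothetical helpers standing in for the hardware walker and fault paths. */
    uint64_t read_guest_cr3(void);
    bool guest_walk(uint64_t cr3, gva_t va, gpa_t *gpa);  /* guest OS's own page tables */
    bool ept_walk(gpa_t gpa, hpa_t *hpa);                 /* tables built once by the core manager */
    void deliver_page_fault_to_guest(gva_t va);
    void kill_vm(void);

    /* What one memory reference effectively does under EPT. */
    bool translate(gva_t va, hpa_t *hpa)
    {
        gpa_t gpa;
        if (!guest_walk(read_guest_cr3(), va, &gpa)) {
            deliver_page_fault_to_guest(va);   /* ordinary page fault: no hypervisor involvement */
            return false;
        }
        if (!ept_walk(gpa, hpa)) {
            kill_vm();                         /* EPT violation: access outside the pre-assigned range */
            return false;
        }
        return true;
    }

The first check keeps the guest OS in full control of its own paging; only the second check, which never changes while the VM runs, is owned by the core manager.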
An open issue is memory fairness and preventing cores from interfering with each other through use of the memory subsystem. Currently, a processor core can hog the memory bandwidth, based on current memory controllers' scheduling algorithms which optimize overall memory performance rather than being fair to all cores. We propose using a solution similar to the one presented by Mutlu et al. [21], whereby the memory controller is modified to help achieve a more fair sharing of memory bandwidth.

Another open issue is the optimal support for live migration that is completely transparent to the customer (involving the customer requires no support from the system). The simplistic stop-and-copy approach is fully supported. However, using optimizations which iteratively copy differences during execution requires tracking dirty pages. Current processors do not support this, so software would be needed. In this case, each time a page is written, an exception would occur and be handled by the core manager, which would simply mark a bit in a table. As this interaction is extremely limited (no information is passed and the required function is extremely simple), the vulnerability that is opened is very limited (though still not ideal).
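A minimal sketch of that software path, under the assumption that pages are temporarily mapped read-only in the EPT so that the first write to each page traps to the core manager (structure and function names are hypothetical):

    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define MAX_PAGES  (1UL << 24)             /* enough bits for 64 GB of 4 KB pages */

    static uint64_t dirty_bitmap[MAX_PAGES / 64];

    /* Hypothetical helpers. */
    void ept_make_writable(uint64_t gpa_page);
    void resume_guest(void);

    /* Core manager handler for a write fault taken while dirty logging is enabled. */
    void on_write_fault(uint64_t gpa)
    {
        uint64_t page = gpa >> PAGE_SHIFT;
        dirty_bitmap[page / 64] |= 1ULL << (page % 64);  /* record the page as dirty */
        ept_make_writable(page);                         /* later writes proceed without trapping */
        resume_guest();                                  /* no data is exchanged with the guest */
    }

The handler takes no input from the guest beyond the faulting address supplied by hardware, which is what keeps the exposed attack surface so small.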
6.3 I/O Devices

The NoHype architecture relies on each virtual machine having direct access to its own physical device(s). Fortunately, I/O performance is an important factor in overall system performance, and as such, technologies are available to make this a reality.

6.3.1 Direct Access to I/O Devices

In the NoHype architecture the virtual machine must be able to access its own devices without going through the virtualization layer. This involves three aspects: (i) the VM being able to DMA to the device, (ii) the device being able to DMA to the VM, and (iii) the device being able to send an interrupt to the VM.

For the VM being able to DMA to the device, the memory partitioning discussed in Section 6.2 is the enabling technology. Each VM is assigned a device, with each device being assigned a memory-mapped I/O range. The guest physical to host physical memory mapping enables VMs to directly interact with memory-mapped I/O. The fact that the guest VM can only write to particular physical memory ranges, enforced by the hardware, means that each VM can write only to its device.

In the reverse direction, the I/O device sending data to the VM requires support from the chipset which sits between the device and the system memory. Here the I/O MMU provides protection by creating multiple DMA protection domains. These domains are the memory regions that a given device is allowed to DMA into. This would correspond to the assigned physical memory that the VM was given. The chipset enforces this by using address-translation tables. When a device attempts to write to a memory location, the I/O MMU performs a lookup in its address-translation tables for the access permission of that device. If the device is attempting to write to a location outside of the permitted range, the access is blocked.

The final aspect of interacting with devices without the involvement of the virtualization layer is the ability for interrupts from a device to be directly handled by the virtual machine associated with that device. The MSI (Message Signaled Interrupts) specification and its MSI-X extension [23] serve just this purpose. With MSI-X, a device can request an interrupt by writing to a particular memory address. The value written to that address indicates which interrupt is to be raised. By setting up the APIC tables, a given interrupt can be directed to a specific core on the processor (via its local APIC). The memory protection mechanisms can be used to block devices from writing certain memory locations, consequently limiting which device can interrupt which core.
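On x86, an MSI-X message is effectively just a DMA write: the address selects the target local APIC and the data carries the vector. The sketch below programs one MSI-X table entry using the conventional x86 message address/data layout; it is shown as an illustration of the mechanism, and the exact field definitions should be taken from the MSI-X and APIC specifications rather than from this sketch.

    #include <stdint.h>

    /* One entry of a device's MSI-X table (the table lives in a BAR-mapped region). */
    typedef struct {
        uint32_t addr_lo;
        uint32_t addr_hi;
        uint32_t data;
        uint32_t vector_ctrl;   /* bit 0 = mask */
    } msix_entry_t;

    /* Fill an entry so the device's interrupt is delivered as 'vector' to the
     * core whose local APIC ID is 'dest' (fixed delivery, edge triggered). */
    void msix_program(volatile msix_entry_t *e, uint8_t dest, uint8_t vector)
    {
        e->addr_lo     = 0xFEE00000u | ((uint32_t)dest << 12);  /* APIC interrupt message address */
        e->addr_hi     = 0;
        e->data        = vector;                                /* delivery mode 000b = fixed */
        e->vector_ctrl = 0;                                     /* unmask the entry */
    }

Because the table entry is itself just memory, the same protection mechanisms that confine each VM's memory-mapped I/O also determine which software may retarget a device's interrupts.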
enforced by the hardware, means that each VM can write Storage: Unfortunately, there are currently no available
only to its device. storage devices which support the notion of virtual devices.
In the reverse direction, the I/O device sending data to the VM requires support from the chipset which sits between the device and the system memory. Here the I/O MMU provides protection by creating multiple DMA protection domains. These domains are the memory regions that a given device is allowed to DMA into. This would correspond to the assigned physical memory that the VM was given. The chipset enforces this by using address-translation tables. When a device attempts to write to a memory location, the I/O MMU performs a lookup in its address-translation tables for access permission of that device. If the device is attempting to write to a location outside of the permitted range, the access is blocked.
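Conceptually, the check the I/O MMU performs can be sketched as follows (hypothetical types and names; real hardware uses its address-translation tables rather than a software table walk):

    #include <stdint.h>
    #include <stdbool.h>

    /* One DMA protection domain: the memory region a given device may DMA into.
     * In the NoHype setting this is simply the owning VM's physical partition. */
    struct dma_protection_domain {
        uint16_t device_id;     /* bus/device/function of the assigned device */
        uint64_t region_base;   /* host-physical base the device may write    */
        uint64_t region_size;
    };

    /* Conceptual version of the I/O MMU's check on every device write. */
    bool iommu_dma_allowed(const struct dma_protection_domain *domains, int n,
                           uint16_t device_id, uint64_t target, uint64_t len)
    {
        for (int i = 0; i < n; i++) {
            const struct dma_protection_domain *d = &domains[i];
            if (d->device_id != device_id)
                continue;
            return target >= d->region_base &&
                   target + len <= d->region_base + d->region_size;
        }
        return false;   /* unknown device: block the access */
    }
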
The final aspect of interacting with devices without the involvement of the virtualization layer is the ability for interrupts from a device to be directly handled by the virtual machine associated with that device. For this, the MSI (Message Signaled Interrupts) and extension MSI-X specifications allow the device to deliver its interrupts directly to the core running the associated VM.
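As a rough illustration of how an interrupt is steered to a particular core (the struct is a simplified stand-in for a real MSI-X table entry; the constants follow the standard x86 message-address encoding, but the helper is ours):

    #include <stdint.h>

    /* Simplified view of one MSI-X table entry (the real entry has four 32-bit
     * fields in device memory; this struct is for illustration only). */
    struct msix_entry {
        uint32_t message_addr;   /* where the interrupt write is posted */
        uint32_t message_data;   /* which vector is raised              */
    };

    /* Fill an entry so the device interrupts the given core (local APIC ID)
     * with the given vector, e.g. the vector the guest's driver installed. */
    void msix_target_core(struct msix_entry *e, uint8_t apic_id, uint8_t vector)
    {
        e->message_addr = 0xFEE00000u | ((uint32_t)apic_id << 12);  /* x86 MSI address */
        e->message_data = vector;                                   /* fixed delivery  */
    }
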
While it is possible to give each VM its own physical device by having multiple physical devices, as the number of cores (and therefore the number of VMs) grows, this will become impractical. The advantage we have here when considering the hosted cloud infrastructure setting is that there is only a small subset of devices that are actually of interest: the network interface card, the storage device, and a timer, but not, for example, a monitor or keyboard. Here, we can take advantage of a standard specified by PCI-SIG called single root I/O virtualization (SR-IOV). SR-IOV enables a single device to advertise that it is actually multiple devices, each with independent configuration and memory space.

To do this, the devices present two types of interfaces: one for configuration and the other for regular device access. The system software can use the configuration interface to specify how many virtual devices the physical device should act as and the options for each virtual device. Each virtual device is assigned a separate address range, enabling the system manager to map a virtual device to a particular virtual machine. The guest VM can then interact directly with the device through the regular device interface presented by the physical device. Only the system manager has the privileges to use the configuration interface and the VMs can only access the interfaces assigned to them.
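A sketch of this division of labor, with all names ours and the SR-IOV capability reduced to a single configuration step, follows; only the system manager ever calls the configuration function, while each guest sees just its own register range:

    #include <stdint.h>

    #define MAX_VFS 16

    /* Hypothetical model of an SR-IOV capable device: one configuration
     * interface (held by the system manager) and per-virtual-device register
     * ranges (each handed to exactly one guest VM). */
    struct virtual_function {
        uint64_t mmio_base;     /* register range the guest maps and uses directly */
        uint64_t mmio_size;
        int      owner_vm;      /* which VM this virtual device belongs to         */
    };

    struct sriov_device {
        int num_vfs;                          /* set through the config interface */
        struct virtual_function vf[MAX_VFS];
    };

    /* System-manager-only step: decide how many virtual devices the physical
     * device should act as, and record which VM owns each one. Guests never
     * call this; they only ever touch vf[i].mmio_base..mmio_base+mmio_size. */
    void configure_sriov(struct sriov_device *dev, int num_vfs,
                         uint64_t bar_base, uint64_t vf_stride)
    {
        dev->num_vfs = num_vfs;
        for (int i = 0; i < num_vfs; i++) {
            dev->vf[i].mmio_base = bar_base + (uint64_t)i * vf_stride;
            dev->vf[i].mmio_size = vf_stride;
            dev->vf[i].owner_vm  = i;   /* one VM per core, one virtual device per VM */
        }
    }
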
Network: Intel has a network interface card that supports SR-IOV [24]. To distinguish between each of the virtual devices, the NIC has multiple queues on it. The NIC performs classification on the packets to direct each packet to the correct queue. Then, because of the memory partitioning, only the OS on the appropriate core can read directly from that virtual device, or write to it.
Storage: Unfortunately, there are currently no available storage devices which support the notion of virtual devices. LSI, however, recently announced a demonstration of an SR-IOV capable storage controller [25]. Additionally, more should be coming as disk drives already have a lot of functionality in the form of firmware on the drive itself – for example, reordering requests to optimize read/write performance. Furthermore, there are disk devices, such as those from Fusion-io, which have a PCIe interface (and therefore can implement SR-IOV) and a programmable FPGA on board (and therefore can be upgraded). Until the LSI chip is integrated into disk drives or firmware is modified, multiple physical disk drives can be used. Server motherboards already have multiple SATA (Serial Advanced Technology Attachment) connections. When used in combination with network storage, this provides an acceptable, though not ideal, solution until SR-IOV enabled disk drives become available.
Timers: Timers are an essential part of operating systems and many programs which perform periodic tasks (such as scheduling processes). Each local APIC contains a 32-bit timer that is software configurable, and therefore provides each VM with its own timer. However, higher-precision timers require an external chip – the HPET (High Precision Event Timer). The HPET communicates with the I/O APIC, which as previously mentioned can be configured to deliver (timer) interrupts to a specific core via its local APIC. The HPET specification allows for up to 8 blocks of 32 timers and the HPET configuration is memory mapped – the map for each block can be aligned on a 4 kilobyte boundary so paging mechanisms can easily be used to restrict which core has access to which block of timers. In this way, the core can have full control over its timers which are not affected by other VMs.
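For concreteness, a minimal sketch of a guest arming its own per-core timer (the register offsets are the standard local APIC ones; the fixed mapping address and helper are assumptions for illustration only, not code from this paper):

    #include <stdint.h>

    /* Standard x86 local APIC timer registers, as offsets from the APIC's
     * memory-mapped base (assumed here to be mapped at its usual address). */
    #define APIC_BASE           0xFEE00000UL
    #define APIC_LVT_TIMER      0x320   /* vector + periodic/one-shot mode */
    #define APIC_TIMER_INIT     0x380   /* initial count                   */
    #define APIC_TIMER_DIVIDE   0x3E0   /* divide configuration            */
    #define APIC_TIMER_PERIODIC (1u << 17)

    static inline void apic_write(uint32_t reg, uint32_t val)
    {
        *(volatile uint32_t *)(APIC_BASE + reg) = val;
    }

    /* Each core owns its local APIC, so a guest can arm a periodic timer
     * without involving any other VM or a hypervisor. */
    void arm_periodic_timer(uint8_t vector, uint32_t initial_count)
    {
        apic_write(APIC_TIMER_DIVIDE, 0x3);                       /* divide by 16   */
        apic_write(APIC_LVT_TIMER, APIC_TIMER_PERIODIC | vector);
        apic_write(APIC_TIMER_INIT, initial_count);               /* starts counting */
    }
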
Both the networking and disk devices' bandwidth usage need to be shared among the VMs. If a VM is able to use a disproportionately large amount of the I/O bandwidth, other VMs may be denied service very easily. Fortunately, the PCI Express (PCIe) interconnection network uses credit-based flow control. Each device has an input queue and senders consume tokens when sending packets. When the queue in the device associated with the sending core becomes full, the sender no longer has tokens for sending data. A device, e.g. the NIC, with virtual queues will take packets from the queues in round-robin fashion and return tokens to the sender. If a VM fills up its assigned queue, it will not be able to send more packets, which will allow other VMs to regain their share of the bandwidth.
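The fairness argument can be sketched with a toy model (ours, not the actual PCIe credit protocol or any driver): each sender spends a credit per packet, the device drains its per-sender queues round-robin, and a credit is returned only when a packet is drained, so a VM that fills its queue simply stalls without affecting the others.

    #include <stdbool.h>

    #define NUM_SENDERS 4      /* one sending core/VM per queue    */
    #define QUEUE_DEPTH 8      /* credits available to each sender */

    /* Toy model of credit-based flow control with one device queue per sender. */
    struct link_state {
        int credits[NUM_SENDERS];   /* tokens each sender may still spend     */
        int queued[NUM_SENDERS];    /* packets waiting in the device's queues */
    };

    void link_init(struct link_state *s)
    {
        for (int i = 0; i < NUM_SENDERS; i++) {
            s->credits[i] = QUEUE_DEPTH;
            s->queued[i]  = 0;
        }
    }

    /* A sender may transmit only while it holds credits; filling its own queue
     * exhausts its credits but never consumes another sender's. */
    bool try_send(struct link_state *s, int sender)
    {
        if (s->credits[sender] == 0)
            return false;           /* this VM stalls; others are unaffected */
        s->credits[sender]--;
        s->queued[sender]++;
        return true;
    }

    /* The device drains the queues round-robin and returns one credit per packet
     * drained, so a well-behaved VM regains its share of the bandwidth. */
    void device_drain_round_robin(struct link_state *s)
    {
        for (int q = 0; q < NUM_SENDERS; q++) {
            if (s->queued[q] > 0) {
                s->queued[q]--;
                s->credits[q]++;
            }
        }
    }
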
6.4 Virtualization-aware Ethernet Switches

In the NoHype architecture, the networking is performed by networking devices and not by the software running on the server. This requires the Ethernet switches that are connected to the servers (e.g., the top-of-rack switches) to be virtualization aware. The switches need the abstraction that they have a connection to each virtual interface, rather than the single connection to the NIC shared across multiple virtual machines. As the market for virtualized infrastructures (not just hosted cloud infrastructures) is potentially very large, commercial Ethernet switch vendors are already moving in this direction. The Cisco Virtual Network Link (VN-link) [26] technology is supported in Cisco Nexus switches [13], where it is used to simplify the software switch and offload switching functionality to a physical switch, but this requires some changes to the Ethernet format. In contrast, HP has proposed the Virtual Ethernet Port Aggregator (VEPA) [27], which does not require changes to the Ethernet format, but instead overcomes the limitation of not being able to forward a packet out of the same port on which it was received. Commercial availability of hardware switches which support VEPA is forthcoming, and Linux has support for VEPA for software-based switches [28] (not suitable for use in the data center, but useful for experimentation).
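To make that forwarding limitation concrete, a toy decision function (ours, not the VEPA specification or any switch implementation): a standard bridge refuses to send a frame back out its ingress port, while a VEPA-style "hairpin" allows it, letting the external switch forward traffic between two VMs that share one NIC uplink.

    #include <stdbool.h>
    #include <stdint.h>

    #define NO_PORT (-1)

    struct forwarding_table {
        int      num_entries;
        uint64_t mac[64];    /* learned destination MAC addresses   */
        int      port[64];   /* switch port each MAC was learned on */
    };

    static int lookup_port(const struct forwarding_table *t, uint64_t dst_mac)
    {
        for (int i = 0; i < t->num_entries; i++)
            if (t->mac[i] == dst_mac)
                return t->port[i];
        return NO_PORT;
    }

    /* Decide the egress port for a frame. A classic bridge drops frames whose
     * egress equals the ingress port; with hairpin forwarding enabled, the frame
     * may be sent back out the same port, which is what allows an external
     * switch to connect two VMs behind a single shared NIC connection. */
    int egress_port(const struct forwarding_table *t, uint64_t dst_mac,
                    int ingress_port, bool hairpin_enabled)
    {
        int out = lookup_port(t, dst_mac);
        if (out == NO_PORT)
            return NO_PORT;                 /* flood/drop policy omitted */
        if (out == ingress_port && !hairpin_enabled)
            return NO_PORT;                 /* standard bridge behavior  */
        return out;
    }
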
7. RELATED WORK

We should note that trusting the virtualization layer usually comes with the disclaimer that the virtualization layer must be minimal (i.e., minimizing the Trusted Computing Base, TCB). A number of projects have worked on minimizing the TCB (for example Flicker [32]), which attests to the importance of having the TCB as small as possible. None, however, went so far as to completely remove the hypervisor that actively runs under the OS, as we have.

Finally, hardware architectures have been proposed to enable running applications without trusting the operating system [33, 34, 35, 36, 10]. While these could perhaps be extended, or applied, to support running a trusted virtual machine on an untrusted hypervisor, we instead took an approach which proposes minimal changes to the processor core architecture along with modifications to system peripherals and software.
8. CONCLUSIONS

While cloud computing has tremendous promise, there are concerns over the security of running applications in a hosted cloud infrastructure. One significant source of the problem is the use of virtualization, where an increasingly complex hypervisor is involved in many aspects of running the guest virtual machines. Unfortunately, virtualization is also a key technology of the cloud infrastructure, enabling multi-tenancy and the ability to use highly automated processes to provision infrastructure. Rather than focus on making virtualization more secure, we remove the need for an active virtualization layer in the first place.

We presented the NoHype architecture, which provides benefits on par with those of today's virtualization solutions without the active virtualization layer. Instead, our architecture makes use of hardware virtualization extensions in order to remove any need for a virtualization layer to run during the lifetime of a virtual machine. The extensions are used to flexibly partition resources and isolate guest VMs from each other. Key aspects of the architecture include (i) running one VM per core, (ii) hardware-enforced memory partitioning, and (iii) dedicated (virtual) devices that each guest VM controls directly without intervention of a hypervisor. We also discussed the fact that current processor and I/O devices support much of the functionality required by the NoHype architecture. While there are clearly more architectural, implementation, software, performance and security issues to be resolved, we believe that the architecture is not hype but is a very viable architecture that can be implemented in the near future. We also hope that this paper will stimulate further research into many-core architectural features, at both the processor and system levels, that support using the many cores for security as in our NoHype architecture, rather than just for performance and power improvements.