It Takes Virtualization To Make An Agile Infrastructure: Part IV of A Series of AMD White Papers On 64-Bit Computing
AGILE INFRASTRUCTURE
Just sixty years ago, scientists powered up the ENIAC,¹ generally regarded as the first electronic computer. ENIAC's clock ran at five kilohertz, and the system's 17,468 vacuum tubes consumed over 150 kilowatts of power.² ENIAC's software environment was primitive by today's standards. Programs consisted of sequences of register settings, entered via dials on control panels, and small modifications to the internal circuits, implemented like the connections in operator-assisted telephone switchboards.

The industry has come a long way in sixty years. First, the transistor and, later, the integrated circuit enabled the creation of inexpensive microprocessors containing hundreds of millions of transistors, running at multi-gigahertz frequencies, and consuming less than 100 watts. Advances in software technology enabled the productive deployment of these powerful systems. Technological evolution both drives, and is driven by, ever-increasing levels of abstraction in hardware and software architectures. High-level programming languages like Fortran, COBOL, BASIC, C, and Java allowed programmers to implement software algorithms in a manner divorced from underlying machine architectures. Operating systems provided abstractions that freed programs from the complex and varied details needed to manage memory and I/O devices. Contemporary application software, swaddled within layers of middleware and dynamic link libraries, must work overtime to determine the physical characteristics of the hardware on which it runs.

Although application packages and middleware have become blissfully unaware of the vagaries of specific hardware implementations, the operating systems that provide this isolation must themselves be totally cognizant of the hardware on which they reside. Details like MAC and IP addresses, SAN LUN assignments, physical memory configurations, processor counts, and system serial numbers become enmeshed within the OS state at the time of system installation. This stateful information locks the OS to the specific hardware on which it was installed and complicates hardware fault recovery, system upgrades, and application consolidation.

¹ The name ENIAC was an acronym for Electronic Numerical Integrator and Computer.
² Transistor technology had yet to be invented at the time of ENIAC's design. Since power cycling shortened the life of the vacuum tubes, operators soon decided to leave the system powered up, even when it had nothing to do.
Of course, the same hardware and software architects who created the abstract environments in which today's software and middleware packages reside were not about to let a few annoying details like stateful information stand in their way. They realized that if they could abstract the hardware as seen by the operating system, then they could finesse software's view of the physical configuration on which it was installed. They called their approach "virtualization," and it turns out to be harder than you might imagine. Operating system software likes to think it owns the hardware on which it runs, and does not like to be fooled. It is harder still, given that the x86 architecture came into existence long before notions of virtualization entered the industry, and does not lend itself easily to such notions. In the remainder of this document, we will survey the kinds of problems virtualization can address and how it addresses them. We will review the features AMD added to Next-Generation AMD Opteron server processors and AMD Athlon 64 processors to make AMD Virtualization (AMD-V) an industry leader. We will look at how AMD's ecosystem partners are adapting their software to take advantage of these new virtualization extensions. Finally, we'll delve into the future of virtual technology, and assess the impact it's likely to have on the industry in years to come.

Who Needs Virtual Technology?
Practically everyone who uses or supports computer systems stands to benefit from the emergence of advanced virtual technology that enhances the agility and efficiency of server and client systems alike. Until now, the benefits from virtualization came at some cost in terms of application performance, but the latest AMD Athlon 64 processor and AMD Opteron processor, as well as AMD Turion 64 mobile technology, all include AMD-V enhanced hardware that helps reduce the software overhead needed to support virtual machine environments. Suppliers like Microsoft, Virtual Iron, VMware, and XenSource are rushing to incorporate support for AMD-V in upcoming software releases. Why all the fuss and effort around virtualization?

The x86-based server market grew exponentially over the past decade, driven largely by a philosophy of "one application, one server." This approach filled datacenters with rack after rack of over-provisioned systems, most operating at less than 15 percent of capacity, but consuming power and generating heat on a 24x7 basis. Even with these low utilization rates, IT managers often need to dedicate three separate systems to each application: one to run the application, one to back up the first system in the event of a hardware failure, and one to serve as a development platform for ongoing development and problem analysis. These systems operate under a variety of current and legacy OS environments, including Windows NT, Windows 2000, Windows Server 2003, Solaris, UNIX, and Linux. IT managers would love to consolidate these disparate workloads onto a smaller number of hardware systems, but are understandably wary of the potential software problems that can arise when they make several independent applications share a single instance of an operating system. Virtualization provides a mechanism to consolidate these applications, along with their existing OS, middleware, and communications environments, on a larger shared system. Each workload continues to see a virtual environment that corresponds exactly to the physical environment of its earlier dedicated system, eliminating the need to change OS or communications parameters. These virtual machines stand ready to respond to external requests, but consume almost no machine resources or power in the absence of such requests, a far cry from the real resources consumed in real, or non-virtual, deployments.

In addition to consolidating existing workloads, virtualization also facilitates the introduction of new
applications in the datacenter. Once IT gets the go-ahead on a new project, development can begin immediately on brand new virtual machines added to an existing physical server. Virtualization essentially allows the enterprise to base its hardware acquisition plans on aggregate demand, rather than the vagaries of any given program. Although not strictly a portion of virtual technology per se, most virtual software environments include utilities that facilitate operational tasks such as the provisioning of new virtual servers, the allocation (and reallocation) of virtual resources, and the assignment of virtual machines (VMs) to physical systems. These utilities simplify the scheduling of planned hardware outages as well as the recovery from unplanned outages. To accommodate planned outages, IT management merely relocates the VMs running on a particular hardware configuration to an alternate system, allowing the original hardware to be taken off-line for service. For unplanned outages, the system operator simply reinitiates the relevant VMs on an operational system until the failed hardware configuration can be restored to service.

Virtualization is changing the way software developers work. Developers must often adapt their code for operation in a wide variety of operating system environments, and then test those codes in the relevant environments. To accomplish this, they would often dedicate specific developer machines to different versions of Windows, UNIX, and Linux. When pursuing software anomalies, they would find the machine with the OS environment on which the bug had been observed, and attempt to come up with a correction. Of course, if the particular system with the required environment had not been used for a while, there was no assurance it would be in good working order when needed. Virtualization lets development organizations maintain a library of virtual machines corresponding to all the specific hardware and software environments on which their software runs. Then, if and when they need to pursue a software anomaly, they merely load the proper virtual machine, and they're ready to pursue the problematic code.

Virtualization greatly eases the task of application migration to new versions of operating systems for both client and server applications. IT can install multiple VMs, each running different versions of the OS, and migrate specific applications to the new and improved OS at a pace convenient to IT and the end-user, rather than on the all-or-nothing basis that has characterized software transitions in the past.

Virtualization may even alter the way organizations deploy desktop technology. The recently formed Virtual Desktop Infrastructure Alliance allows IT administrators to create and manage desktop virtual machines on servers within datacenters. End-users can access these desktop environments at any time and from any place, using thin client devices (or thin client access utilities on more fully configured systems). Even old, underpowered systems can use the Remote Desktop Protocol (RDP) to access more powerful virtual PC desktops arrayed with up-to-date software versions. This approach to client
deployment can lower support expenses, as well as hardware acquisition costs, since the virtual PCs reside on centralized servers in a managed IT environment, eliminating the need to visit the client's actual desktop system for most maintenance activities.

Last, but far from least, virtualization will play an ever-increasing role in creating more secure and robust client environments. For example, on one client system, IT might install two virtual machines, one that handles sensitive corporate data, and a second for less secure end-user tasks. It could block the first VM from downloading unauthorized applications, screen savers, and other security threats, while allowing the second VM access to less secure downloads and material.

The Technology of Virtualization
Anyone who has ever mastered a magic trick like sawing a woman in half or pulling a rabbit out of a hat knows that creating an illusion requires planning and deft execution. Convincing an operating system like Windows or Linux that it has exclusive control of a computer system that, in fact, it shares with other operating systems also requires the same degree of planning and execution. Architects have pursued three different strategies in their pursuit of virtual technology. All require the presence of hypervisor software that allocates basic machine resources, including CPU time and memory. All consider the OS software running within each virtual machine to be a guest of the hypervisor.

[Figure: the virtualization layer interposed between guest software and the hardware. Labels, from top to bottom: Applications; Operating Systems; Virtualization Layer; x86 Architecture.]

The first strategy, full virtualization, relies on the hypervisor to trap the privileged instructions and I/O operations a guest attempts and to emulate their effects in software. The good news is that since this approach operates invisibly from the perspective of the guest OS, it requires no changes to the guest OS or the applications running under the guest. Off-the-shelf versions of software developed long before virtualization ever came to the x86 world, like MS-DOS, Windows 3.1, or Windows NT 3.5, can be installed as guest operating systems. The bad news is that all that instruction trapping and emulation can reduce overall system performance significantly in I/O-intensive environments. EMC's VMware, the best known and most popular virtual environment for x86 processors, has used variations on this software approach in all the products it has delivered to this point. Microsoft's Virtual PC and Virtual Server packages use a similar approach.
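To make the trapping and emulation at the heart of full virtualization concrete, here is a minimal C sketch of the dispatch loop such a hypervisor runs. It is illustrative only: the exit reasons, the canned guest, and the virtual serial port are invented for this example and correspond to no particular product.

    /* A minimal, illustrative sketch of the trap-and-emulate loop at the heart
     * of full virtualization. Everything here is hypothetical and simplified;
     * a real hypervisor saves full register state and emulates real devices. */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { EXIT_PRIVILEGED_INSN, EXIT_IO_PORT, EXIT_HALT } exit_reason;

    typedef struct {
        exit_reason reason;
        uint16_t    io_port;   /* valid when reason == EXIT_IO_PORT */
        uint8_t     io_value;
    } vmexit;

    /* Stand-in for "run the guest at reduced privilege until it traps".
     * Here we just replay a canned sequence of guest-caused exits. */
    static vmexit run_guest_until_trap(void) {
        static const vmexit script[] = {
            { EXIT_PRIVILEGED_INSN, 0,     0   }, /* guest writes a control register */
            { EXIT_IO_PORT,         0x3F8, 'A' }, /* guest writes to a serial port   */
            { EXIT_HALT,            0,     0   },
        };
        static int i = 0;
        return script[i++];
    }

    int main(void) {
        for (;;) {
            vmexit e = run_guest_until_trap();
            switch (e.reason) {
            case EXIT_PRIVILEGED_INSN:
                /* Emulate the instruction against the VM's virtual CPU state. */
                puts("hypervisor: emulated privileged instruction");
                break;
            case EXIT_IO_PORT:
                /* Redirect the access to a virtual device model. */
                printf("hypervisor: virtual serial port 0x%x <- '%c'\n",
                       (unsigned)e.io_port, e.io_value);
                break;
            case EXIT_HALT:
                puts("hypervisor: guest halted");
                return 0;
            }
            /* Every such round trip costs time, which is why I/O-intensive
             * guests suffer most under pure software virtualization. */
        }
    }

The point to notice is that every privileged operation costs a full round trip through the hypervisor, which is exactly the overhead the hardware assists described below aim to reduce.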
The second strategy, para-virtualization, takes the opposite tack: it modifies the guest operating systems themselves, so that guests know the hardware they see is rigged and know how to play along to maintain the virtual illusion. This approach precludes the ability to run off-the-shelf and legacy operating software in para-virtual environments. Xen, the open-source community's approach to virtualization, uses para-virtualization as the basis for its technology.

The third strategy, hardware-assisted virtualization, relies on hardware extensions to the x86 system architecture to eliminate much of the hypervisor overhead associated with trapping and emulating I/O operations and status instructions executed within a guest OS. The latest processors from both AMD and Intel include hardware virtualization assists known as AMD-V and Intel VT. Key hypervisor suppliers (Microsoft, VMware, and XenSource) support these features in their software. CPU extensions solve only part of the virtualization problem, since they still require the hypervisor to do lots of work to finesse input-output operations, adding overhead to each I/O call. A complete solution requires the virtual mapping of I/O devices, which in turn requires changes to the chipsets and I/O bridges that link system processors to I/O buses like PCI Express. AMD and Intel have each issued specifications for chipset extensions consistent with their system architectures, and hardware incorporating these extensions will likely emerge in 2007.
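The virtual mapping of I/O devices mentioned above can be pictured as an address-translation table that the chipset consults on every device DMA access. The following C sketch is a hypothetical simplification: the table layout, page size, and function names are invented, and the actual AMD and Intel chipset specifications differ in detail.

    /* Illustrative sketch of I/O address remapping (the idea behind an IOMMU).
     * A device issues DMA to "device virtual" addresses; the chipset consults
     * the per-device table the hypervisor filled in and rewrites the address.
     * Structures and sizes here are hypothetical simplifications. */
    #include <stdint.h>
    #include <stdio.h>

    #define IO_PAGE_SHIFT  12          /* assume 4 KB I/O pages */
    #define IO_TABLE_SLOTS 256

    typedef struct {
        uint64_t host_page_base[IO_TABLE_SLOTS]; /* host physical base of each granted page */
        uint8_t  valid[IO_TABLE_SLOTS];          /* has the hypervisor granted this page?   */
    } io_map_t;

    /* Chipset-side translation: fails (returns 0) if the device touches a
     * page the hypervisor never granted it -- this is what isolates guests. */
    static int iommu_translate(const io_map_t *m, uint64_t dev_addr, uint64_t *host_addr) {
        uint64_t page = dev_addr >> IO_PAGE_SHIFT;
        uint64_t off  = dev_addr & ((1u << IO_PAGE_SHIFT) - 1);
        if (page >= IO_TABLE_SLOTS || !m->valid[page])
            return 0;                  /* blocked: fault reported to the hypervisor */
        *host_addr = m->host_page_base[page] | off;
        return 1;
    }

    int main(void) {
        io_map_t map = {0};
        map.host_page_base[2] = 0x7f000ULL << IO_PAGE_SHIFT; /* grant one page */
        map.valid[2] = 1;

        uint64_t host;
        if (iommu_translate(&map, (2u << IO_PAGE_SHIFT) + 0x10, &host))
            printf("DMA remapped to host physical 0x%llx\n", (unsigned long long)host);
        if (!iommu_translate(&map, 5u << IO_PAGE_SHIFT, &host))
            puts("DMA to ungranted page blocked");
        return 0;
    }

Because the translation happens in the chipset, the guest can program its device directly, and the hypervisor no longer has to intercept each I/O call.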
The good news is that once these processor and chipset extensions, and the software that supports them, reach the market, end-users will have access to advanced virtualization technology that rings in an era of enhanced agility in IT operations with little incremental software overhead. There is no bad news in this regard.

The Virtual Ecosystem
The delivery of all important industry-standard technologies, including virtualization, relies on a network of technology providers. Software developers like VMware and Connectix (now part of Microsoft) offered the first packages that enabled x86 virtual environments. Over the years, VMware has augmented its basic hypervisor product with a suite of utilities that allows end-users to create, provision, and manage an array of virtual machines within a datacenter. XenSource and Virtual Iron have emerged with Xen-based virtualization schemes principally targeted at the Linux market. Some have assumed that the introduction of hardware virtualization assists in processors will narrow or eliminate the need for virtualization software, but this is far from the actual case. All virtual environments require a hypervisor of some form to allocate real machine resources to virtual machines; processor and chipset extensions assist the hypervisor, but cannot replace it. So, as enhanced hardware shrinks the overhead associated with virtualization, end-user adoption of the technology will increase and create new opportunities for these ISVs.

AMD has extended its architectures to improve virtualization performance, but there remains much room for subsequent innovation in this area. Unlike instruction set enhancements (like AMD's pioneering 64-bit extensions to the 32-bit x86 architecture), AMD-V operates behind the scenes, and directly impacts only a few areas of operating system and hypervisor code. This allows CPU designers to innovate with regard to virtualization support, both to enhance functionality and to improve performance. End users should not assume that competitors' virtualization extensions do the same things and offer similar levels of performance. The devil is in the details, a few of which are outlined in the feature box. Even if system purchasers do not lift the hood and check out the engine, they should ascertain virtual (as opposed to real) machine performance as part of their acquisition checklist.
Chipset suppliers like ATI, Nvidia, and VIA plan to incorporate I/O virtualization capabilities into their I/O bridges. AMD's I/O virtualization spec differs from Intel's, which becomes just one more way the chipsets designed for AMD's revolutionary Direct Connect Architecture must differ from those designed to tie into Intel's traditional front-side bus approach.

Suppliers of system management software like CA's Unicenter and HP's Systems Insight Manager have extended their packages to manage virtual as well as physical resources. The shift to virtual environments will enable IT managers to consolidate workloads and allocate system resources with a far greater degree of granularity than they have at present. This in turn will create the need for software to meter, provision, and allocate system resources in a more autonomic manner.

AMD's Real Roadmap for Virtual Technology
Just in case you haven't been paying attention up to now, the goal is to make sure you understand that AMD believes virtualization will play a big part in the future of client and server computing, and that AMD is backing up its words with a solid roadmap of virtualization solutions.

AMD added hardware virtualization capabilities (along with DDR2 support and a few other enhancements) to the Next-Generation AMD Opteron processor, AMD Athlon 64 processor, and AMD Athlon 64 X2 Dual-Core processor, as well as AMD Turion 64 X2 dual-core mobile technology. Of course, AMD is not planning on resting on its laurels. It is working with chipset partners to incorporate the features outlined in the IOMMU (I/O Memory Management Unit) spec it issued early in 2006. And it's working with ecosystem partners to help make sure they will be ready to support these new features on a timely basis.

AMD is not stopping there. Its quad-core processor architecture plans include features that extend the capabilities of AMD-V and improve its performance. AMD plans to demonstrate once again how its Direct Connect Architecture enables AMD processor-based systems to outperform those based on legacy front-side bus approaches. You won't have to wait too long to try these processors yourself: AMD plans to start shipping quad-core processors in 2007. Because of AMD's commitment to platform stability, its OEM system partners will be able to drop these quad-core processors into the DDR2-based platforms they started shipping in 2006.

The Future: Virtually at Your Fingers
Virtual technology is the latest in a long line of technical advancements that have increased the level of system abstraction and enabled IT users to harness ever-increasing levels of computer performance. Virtualization essentially decouples users and applications from the specific hardware characteristics of the systems they use to perform computational tasks. This change will usher in an entirely new wave of hardware and software innovation in years to come. Virtualization will simplify system upgrades (and in some cases may eliminate the need for such upgrades) by capturing the state of a virtual machine and transporting that state in its entirety from the old host system to the new one. Virtualization will enable a generation of more energy-efficient computing. Processor, memory, and storage resources that today must be delivered in fixed amounts determined by real hardware system configurations will be delivered with finer granularity via dynamically tuned virtual machines in the future. The combination of virtual technology and powerful AMD technology-based processors allows users to deploy computing resources in more agile, efficient, and cost-effective ways. AMD is proud to participate in this process and assist its customers in extracting ever-increasing value from their IT investments.
MEMORY: THE FIRST VIRTUAL RESOURCE
Long before computer scientists came up with the notion of virtualizing an entire system, architects had already invented techniques to virtualize memory management.¹ Virtual memory technology lets a system with a limited amount of physical memory look much larger to application software. To create this illusion, the OS stores the full memory image of the application and its data on the system's hard drive, and transfers required pieces of this image into the system's DRAM memory as the program executes. The system you are using to read this document on-line (assuming you didn't get a printed version) undoubtedly uses some form of virtual memory management.
To translate the virtual addresses seen by each application into physical DRAM memory
addresses, the system relies on a map (known as a page table) that contains vectors linking
chunks of virtual memory to real memory. Modern x86 processors include hardware features
known as translation look-aside buffers (TLBs) that cache the translation vectors for recently
accessed chunks of memory, thus speeding up the process. TLBs play a role in virtually all
memory references, so the manner in which they perform their translations can play a big role
in determining overall system performance.
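As a concrete illustration of the mechanism just described, here is a toy C model of translation through a page table with a small TLB in front of it. It is a sketch under simplifying assumptions: a one-level table, a direct-mapped TLB, and invented sizes; real x86 translation uses multi-level page tables and far more sophisticated TLBs.

    /* Toy model of virtual-to-physical translation: a small TLB is consulted
     * first; on a miss, the page table supplies the mapping and the TLB caches
     * it. The one-level table and all sizes are simplifications. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12              /* 4 KB pages */
    #define NUM_PAGES  1024            /* toy address space */
    #define TLB_SIZE   8

    static uint32_t page_table[NUM_PAGES];   /* virtual page -> physical page */

    typedef struct { uint32_t vpage, ppage; int valid; } tlb_entry;
    static tlb_entry tlb[TLB_SIZE];

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpage = vaddr >> PAGE_SHIFT;
        uint32_t off   = vaddr & ((1u << PAGE_SHIFT) - 1);
        int slot = vpage % TLB_SIZE;         /* direct-mapped TLB */

        if (!tlb[slot].valid || tlb[slot].vpage != vpage) {
            /* TLB miss: walk the (one-level) table and cache the vector. */
            tlb[slot] = (tlb_entry){ vpage, page_table[vpage], 1 };
        }
        return (tlb[slot].ppage << PAGE_SHIFT) | off;
    }

    int main(void) {
        page_table[3] = 42;                  /* map virtual page 3 -> physical page 42 */
        uint32_t va = (3u << PAGE_SHIFT) + 0x2A;
        printf("virtual 0x%x -> physical 0x%x\n", (unsigned)va, (unsigned)translate(va));
        printf("second access hits the TLB: 0x%x\n", (unsigned)translate(va));
        return 0;
    }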
Architects soon learned that TLB design can seriously impact multitasking system operation.
Most tasks in such systems have unique page tables. This forces the operating system to reset
(or, more colorfully, flush) the TLB each time it switches from one task to another. Then, as
the new task executes, its page table entries fill up the TLB, at least until the next task switch.
This constant flushing and reloading can really eat into performance, especially if each task runs
for only a few milliseconds before the next switch.
To mitigate the impact of task switching, architects added a task ID field to each TLB entry.
This allows the system to retain the mapping information in the TLB when switching tasks, since
it only uses the entries for the task actually executing at any point, which in turn eliminates the
need for performance-inhibiting TLB flushes. At least until virtualization entered the scene.
Since the guest OS running on a virtual machine is unaware of other guests, it can only assign unique task IDs within its own environment. Thus, multiple VMs can have tasks with the same ID, confusing the TLB and making a real mess. There's a simple solution to this problem: the hypervisor merely flushes the TLB every time it switches from one VM to another. This forces the tasks executing in the next VM to reload the TLB with their own page table entries. Unfortunately, this approach seriously impacts virtual system performance, giving architects everywhere that déjà vu feeling all over again.
AMD's architects had a better idea. They merely added a new, VM-specific tag called an address space identifier (ASID) to the TLBs in their new AMD-V enhanced processors. Each VM has a unique ASID value, known only to the hypervisor and the TLB hardware. The ASID is invisible to the guest OS, thus eliminating the need to modify the guest, preserving the virtual illusion and avoiding any performance degradation. Sounds simple, doesn't it?
¹ The Atlas computer at the University of Manchester was the first system to incorporate virtual memory technology and became operational in 1962.
© 2006 Advanced Micro Devices, Inc. All rights reserved. AMD Athlon, AMD Turion, AMD, the AMD arrow logo, AMD Opteron, AMD Virtualization, AMD-V, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Microsoft and Windows are registered trademarks of Microsoft Corporation in the U.S. and/or other jurisdictions. Other names are for identification purposes only and may be trademarks of their respective companies. 41632A