Virtualization Guide
This guide contains information on configuring, creating, and monitoring guest operating systems on Red Hat Enterprise Linux 5, using virsh, xm, vmm, and xend.
If you find an error in the Red Hat Enterprise Linux Virtualization Guide, or if you have thought
of a way to make this manual better, we would like to hear from you! Submit a report in Bugzilla
(http://bugzilla.redhat.com/bugzilla/) against the product Red Hat Enterprise Linux and the
component Virtualization_Guide.
Copyright © 2007 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).
Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.
Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder.
Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United States and other
countries.
All other trademarks referenced herein are the property of their respective owners.
Table of Contents
1. Red Hat Virtualization System Architecture
2. Operating System Support
3. Hardware Support
4. Red Hat Virtualization System Requirements
5. Booting the System
6. Configuring GRUB
7. Booting a Guest Domain
8. Starting/Stopping a Domain at Boot Time
9. Configuration Files
10. Managing CPUs
11. Migrating a Domain
12. Configuring for Use on a Network
13. Securing Domain0
14. Storage
15. Managing Virtual Machines with virsh
    1. Connecting to a Hypervisor
    2. Creating a Virtual Machine
    3. Configuring an XML Dump
    4. Suspending a Virtual Machine
    5. Resuming a Virtual Machine
    6. Saving a Virtual Machine
    7. Restoring a Virtual Machine
    8. Shutting Down a Virtual Machine
    9. Rebooting a Virtual Machine
    10. Terminating a Domain
    11. Converting a Domain Name to a Domain ID
    12. Converting a Domain ID to a Domain Name
    13. Converting a Domain Name to a UUID
    14. Displaying Virtual Machine Information
    15. Displaying Node Information
    16. Displaying the Virtual Machines
    17. Displaying Virtual CPU Information
    18. Configuring Virtual CPU Affinity
    19. Configuring Virtual CPU Count
    20. Configuring Memory Allocation
    21. Configuring Maximum Memory
16. Managing Virtual Machines Using xend
17. Managing Virtual Machines Using xm
    1. xm Configuration File
        1.1. Configuring vfb
    2. Creating and Managing Domains with xm
        2.1. Connecting to a Domain
        2.2. Creating a Domain
        2.3. Saving a Domain
        2.4. Terminating a Domain ID
    1. Useful Websites
    2. Installed Documentation
A. Revision History
B. Lab 1
C. Lab 2
Chapter 1. Red Hat Virtualization System Architecture
A functional Red Hat Virtualization system is multi-layered and is driven by the privileged Red Hat Virtualization component. Red Hat Virtualization can host multiple guest operating systems. Each guest operating system runs in its own domain, and Red Hat Virtualization schedules virtual CPUs within the virtual machines to make the best use of the available physical CPUs. Each guest operating system handles its own applications and schedules each application accordingly.
You can deploy Red Hat Virtualization in one of two modes: full virtualization or paravirtualization. Full virtualization provides total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application (the guest OS or application is not aware of the virtualized environment and runs normally). Paravirtualization requires user modification of the guest operating systems that run on the virtual machines (these guest operating systems are aware that they are running on a virtual machine) and provides near-native performance. You can deploy both paravirtualization and full virtualization across your virtualization infrastructure.
The first domain, known as domain0 (dom0), is automatically created when you boot the system. Domain0 is the privileged guest, and it possesses management capabilities that can create new domains and manage their virtual devices. Domain0 handles the physical hardware, such as network cards and hard disk controllers. Domain0 also handles administrative tasks such as suspending, resuming, or migrating guest domains to other physical hosts.
The hypervisor (Red Hat's Virtual Machine Monitor) is a virtualization platform that allows multiple operating systems to run on a single host simultaneously within a full virtualization environment. A guest is an operating system (OS) that runs on a virtual machine in addition to the host or main OS.
With Red Hat Virtualization, each guest's memory comes from a slice of the host's physical memory. For paravirtual guests, you can set both the initial memory and the maximum size of the virtual machine. You can add (or remove) physical memory from the virtual machine at runtime without exceeding the maximum size you specify. This process is called ballooning.
You can configure each guest with a number of virtual cpus (called vcpus). The Virtual Machine
Manager schedules the vcpus according to the workload on the physical CPUs.
You can grant a guest any number of virtual disks. The guest sees these as either hard disks or (for full virtual guests) as CD-ROM drives. Each virtual disk is served to the guest from a block device or from a regular file on the host. The device on the host contains the entire full disk image for the guest, and usually includes partition tables, multiple partitions, and potentially LVM physical volumes.
Virtual networking interfaces run on the guest. Other interfaces can run on the guest as well, such as virtual ethernet interface cards (VNICs). These network interfaces are configured with a persistent virtual media access control (MAC) address. The default installation of a new guest installs the VNIC with a MAC address selected at random from a reserved pool of over 16 million addresses, so it is unlikely that any two guests will receive the same MAC address. Complex sites with a large number of guests can allocate MAC addresses manually to ensure that they remain unique on the network.
Each guest has a virtual text console that connects to the host. You can redirect guest logins and console output to the text console.
You can configure any guest to use a virtual graphical console that corresponds to the normal video console on the physical host. You can do this for full virtual and paravirtual guests. It employs the features of the standard graphic adapter, such as boot messaging, graphical booting, and multiple virtual terminals, and can launch the X Window System. You can also use the graphical console to configure the virtual keyboard and mouse.
Guests can be identified by any of three identifiers: domain name (domain-name), identity (domain-id), or UUID. The domain-name is a text string that corresponds to a guest configuration file. The domain-name is used to launch the guest, and when the guest runs the same name is used to identify and control it. The domain-id is a unique, non-persistent number that gets assigned to an active domain and is used to identify and control it. The UUID is a persistent, unique identifier that is controlled from the guest's configuration file and ensures that the guest is identified over time by system management tools. It is visible to the guest when it runs. A new UUID is automatically assigned to each guest by the system tools when the guest first installs.
Chapter 2. Operating System Support
Red Hat Virtualization's paravirtualization mode allows you to utilize high performance virtualization on architectures that are potentially difficult to virtualize, such as x86 based systems. To deploy paravirtualization across your operating system(s), you need access to the paravirtual guest kernels that are available from a respective Red Hat distribution (for example, RHEL 4.0, RHEL 5.0, etc.). Whilst your operating system kernels must support Red Hat Virtualization, it is not necessary to modify user applications or libraries.
Red Hat Virtualization allows you to run an unmodified guest kernel if you have Intel VT or AMD SVM CPU hardware. You do not have to port your operating system to deploy this architecture on your Intel VT or AMD SVM systems. Red Hat Virtualization supports:
• Intel VT-x (Vanderpool) or AMD-V (Pacifica) technology for full virtualization and paravirtualization.
• Linux and UNIX operating systems, including NetBSD, FreeBSD, and Solaris.
• Microsoft Windows as an unmodified guest operating system with Intel Vanderpool or AMD's Pacifica technology.
To run fully virtualized guests on Hardware-assisted Virtual Machine (HVM) Intel or AMD platforms, you must check to ensure that your CPUs have the capabilities needed to do so.
To check if you have the CPU flags for Intel support, enter the following:
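A typical check greps /proc/cpuinfo for the vmx flag (a sketch; a flags line similar to the output below indicates Intel VT support):
# grep vmx /proc/cpuinfo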
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht
To check if you have the CPU flags for AMD support, enter the following:
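A typical check greps /proc/cpuinfo for the svm flag (a sketch):
# grep svm /proc/cpuinfo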
3
Note
In addition to checking the CPU flags, you should enable full virtualization within your system BIOS.
Chapter 3. Hardware Support
Red Hat Virtualization supports multiprocessor systems and allows you to run Red Hat Virtualization on x86 architecture systems with a P6 class (or newer) processor, such as:
• Celeron
• Pentium II
• Pentium III
• Pentium IV
• Xeon
• AMD Athlon
• AMD Duron
With Red Hat Virtualization, 32-bit hosts run only 32-bit paravirtual guests, and 64-bit hosts run only 64-bit paravirtual guests. A 64-bit full virtualization host runs 32-bit, 32-bit PAE, or 64-bit guests. A 32-bit full virtualization host runs both PAE and non-PAE full virtualization guests.
The Red Hat Enterprise Linux Virtualization kernel does not support more than 32GB of memory for x86_64 systems. If you need to boot the virtualization kernel on systems with more than 32GB of physical memory installed, you must append the kernel command line with mem=32G. This example shows how to enable the proper parameters in the grub.conf file:
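A sketch of such an entry, borrowing the kernel and module names from the grub.conf examples later in this guide:
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5 mem=32G
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img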
PAE (Physical Address Extension) is a technology that increases the amount of physical or virtual memory available to user applications. Red Hat Virtualization requires that PAE is active on your systems. The Red Hat Virtualization 32 bit architecture with PAE supports up to 16 GB of physical memory. It is recommended that you have at least 256 megabytes of RAM for every guest you have running on the system. Red Hat Virtualization enables x86/64 machines to address up to 64 GB of physical memory. The Red Hat Virtualization kernels will not run on a non-PAE system. To determine if a system supports PAE, type the following command:
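A sketch of the check (the pae flag in /proc/cpuinfo indicates PAE support):
# grep pae /proc/cpuinfo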
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmtext 3dnowext 3dnow
If your output matches (or is similar to) the above, then your CPU supports PAE. If the command returns nothing, then your CPU does not support PAE.
Chapter 4. Red Hat Virtualization System Requirements
The items listed below are required by the Red Hat Virtualization system:
• Root access
• initscripts
Chapter 5. Booting the System
After installing the Red Hat Virtualization components, you must reboot the system. When the boot completes, log into your system as usual. Then, before you start Red Hat Virtualization, you must log in as root. The xend control daemon should already be initiated by initscripts, but to start xend manually, enter:
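service xend start
(The xend initscript also accepts restart, stop, and status; Chapter 16 uses service xend restart.)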
You can also use chkconfig xend when installing to enable xend at boot time.
The xend node control daemon performs system management functions that relate to virtual machines. This daemon controls the virtualized resources, and xend must be running to interact with virtual machines. Before you start xend, you must specify the operating parameters by editing the xend configuration file xend-config.sxp, which is located in the /etc/xen directory.
Chapter 6. Configuring GRUB
GNU Grand Unified Boot Loader (GRUB) is a program that enables the user to select which installed operating system or kernel to load at system boot time. It also allows the user to pass arguments to the kernel. The GRUB configuration file (located at /boot/grub/grub.conf) is used to create a list of operating systems to boot in GRUB's menu interface. When you install the kernel-xen RPM, a post script adds kernel-xen entries to the GRUB configuration file. You can edit the grub.conf file and enable the following GRUB parameter:
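A sketch of the kernel-xen entry that the post script adds (names are taken from the grub.conf examples later in this guide):
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img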
If you set your Linux grub entries to reflect this example, the boot loader loads the hypervisor, initrd image, and Linux kernel. Since the kernel entry is on top of the other entries, the kernel loads into memory first. The boot loader sends (and receives) command line arguments to and from the hypervisor and Linux kernel. This example entry shows how you would restrict the Domain0 linux kernel memory to 800 MB:
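A sketch of the restriction appended to the xen kernel line:
kernel /xen.gz-2.6.17-1.2519.4.21.el5 dom0_mem=800M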
You can use these GRUB parameters to configure the Virtualization hypervisor:
mem
This limits the amount of memory available to the hypervisor kernel.
com1=115200, 8n1
This enables the first serial port in the system to act as serial console (com2 is assigned for the next port, and so on).
dom0_mem
This limits the amount of memory that is available for domain0.
dom0_max_vcpus
This limits the number of virtual CPUs visible to domain0.
acpi
This switches the ACPI hypervisor to the hypervisor and domain0. The ACPI parameter options include:
noacpi
This disables ACPI for interrupt routing.
Chapter 7. Booting a Guest Domain
You can boot guest domains by using the xm application. You can also use virsh and the Virtual Machine Manager to boot the guests. A prerequisite for booting a guest domain is to install a guest first. This example uses the xm create subcommand:
# xm create -c guestdomain1
guestdomain1 is the configuration file for the domain you are booting. The -c option connects to the actual console after booting.
Chapter 8. Starting/Stopping a Domain at Boot Time
You can start or stop running domains at any time. Domain0 waits for all running domains to shut down before restarting. You must place the configuration files of the domains you wish to shut down in the /etc/xen/ directory. All the domains that you want to start at boot time must be symlinked to /etc/xen/auto.
chkconfig xendomains on
The chkconfig xendomains on command does not automatically start domains; instead, it starts the domains on the next boot.
chkconfig xendomains off
This terminates all running Red Hat Virtualization domains. The chkconfig xendomains off command shuts down the domains on the next boot.
Chapter 9. Configuration Files
Red Hat Virtualization configuration files contain the following standard variables. Configuration items within these files must be enclosed in quotes ("). These configuration files reside in the /etc/xen directory.
Item Description
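For orientation, a minimal paravirtual guest configuration using such variables might look like the following (a sketch modeled on the rhel5vm01 example in the troubleshooting chapter; the name, device path, and MAC address are illustrative):
name = "guest1"
memory = "512"
disk = ['phy:/dev/VolGroup00/guest1,xvda,w']
vif = ['mac=00:16:3e:00:00:01, bridge=xenbr0']
vcpus = 1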
Chapter 10. Managing CPUs
Red Hat Virtualization allows a domain's virtual CPUs to associate with one or more host CPUs. This can be used to allocate real resources among one or more guests. This approach allows Red Hat Virtualization to make optimal use of processor resources when employing dual-core, hyperthreading, or other advanced CPU technologies. If you are running I/O intensive tasks, it is typically better to dedicate either a hyperthread or an entire core to run domain0. The Red Hat Virtualization credit scheduler automatically rebalances virtual CPUs between physical ones to maximize system use. The Red Hat Virtualization system allows the credit scheduler to move CPUs around as necessary, as long as the virtual CPU is not pinned to a physical CPU.
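For example, to pin domain0's first VCPU to physical CPU 0, a sketch using the xm vcpu-pin subcommand covered in Chapter 17:
# xm vcpu-pin Domain0 0 0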
Chapter 11. Migrating a Domain
Migration is the transferral of a running virtual domain from one physical host to another. Red Hat Virtualization supports two varieties of migration — offline and live. Offline migration moves a virtual machine from one host to another by pausing it, transferring its memory, and then resuming it on the destination host. Live migration does the same thing, but does not directly affect the domain. When performing a live migration, the domain continues its usual activities and, from the user perspective, the move is unnoticeable. To initiate a live migration, both hosts must be running Red Hat Virtualization and the xend daemon. The destination host must have sufficient resources (such as memory capacity) to accommodate the domain after the migration. Both the source and destination machines must have the same architecture and virtualization extensions (such as i386-VT, x86-64-VT, x86-64-SVM, etc.) and must be on the same L2 subnet.
When a domain migrates, its MAC and IP addresses move with it. Only virtual machines on the same layer-2 network and subnet will successfully migrate. If the destination node is on a different subnet, the administrator must manually configure a suitable EtherIP or IP tunnel in the remote node of domain0. The xend daemon stops the domain and copies the job over to the new node and restarts it. The Red Hat Virtualization RPM does not enable migration from any host except the localhost (see the /etc/xen/xend-config.sxp file for information). To allow the migration target to accept incoming migration requests from remote hosts, you must modify the target's xend-relocation-hosts-allow parameter. Be sure to carefully restrict which hosts are allowed to migrate, since there is no authentication.
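The relevant xend-config.sxp entries look like the following (a sketch; the hostname pattern is illustrative):
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-hosts-allow '^localhost$ ^host1\\.example\\.com$')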
Because domains can have large memory allocations, this process can be time consuming. If you migrate a domain with open network connections, they will be preserved on the destination host, and SSH connections should still function. The default Red Hat Virtualization iptables rules will not permit incoming migration connections. To allow this, you must create explicit iptables rules.
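A live migration is then initiated with the xm migrate subcommand described in Chapter 17 (a sketch; the domain and host names are illustrative):
# xm migrate --live guestdomain1 destination.example.com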
You may need to reconnect to the domain's console on the new machine. You can use the xm
console command to reconnect.
Chapter 12. Configuring for Use on a Network
Integrating Red Hat Virtualization into your network architecture is a complicated process and, depending upon your infrastructure, may require custom configuration to deploy multiple ethernet interfaces and set up bridging.
Each domain network interface is connected to a virtual network interface in dom0 by a point to point link. These devices are named vif<domid>.<vifid>: vif1.0 for the first interface in domain 1, vif3.1 for the second interface in domain 3.
Domain0 handles traffic on these virtual interfaces by using standard Linux conventions for bridging, routing, rate limiting, etc. The xend daemon employs two shell scripts to perform initial configuration of your network and new virtual interfaces. These scripts configure a single bridge for all virtual interfaces. You can configure additional routing and bridging by customizing these scripts.
Red Hat Virtualization's virtual networking is controlled by two shell scripts, network-bridge and vif-bridge. xend calls these scripts when certain events occur. Arguments can be passed to the scripts to provide additional contextual information. These scripts are located in the /etc/xen/scripts directory. You can change script properties by modifying the xend-config.sxp configuration file, located in the /etc/xen directory.
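The stock entries in xend-config.sxp that name these scripts look like the following (a sketch of the defaults):
(network-script network-bridge)
(vif-script vif-bridge)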
network-bridge — When xend is started or stopped, this script initializes or shuts down the virtual network. The configuration initialization creates the bridge xenbr0 and moves eth0 onto that bridge, modifying the routing accordingly. When xend finally exits, it deletes the bridge and removes eth0, thereby restoring the original IP and routing configuration.
vif-bridge is a script that is invoked for every virtual interface on the domain. It configures firewall rules and can add the vif to the appropriate bridge.
There are other scripts that you can use to help in setting up Red Hat Virtualization to run on your network, such as network-route, network-nat, vif-route, and vif-nat. These scripts can also be replaced with customized variants.
Chapter 13. Securing Domain0
When deploying Red Hat Virtualization on your corporate infrastructure, you must ensure that domain0 cannot be compromised. Domain0 is the privileged domain that handles system management. If domain0 is insecure, all other domains in the system are vulnerable. There are several ways to implement security you should know about when integrating Red Hat Virtualization into your systems. Together with other people in your organization, you should create a 'deployment plan' that contains the operating specifications and services that will run on Red Hat Virtualization, and what is needed to support these services. Here are some security issues to consider when putting together a deployment plan:
• Run the lowest number of necessary services. You do not want to include too many jobs and services in domain0. The fewer things running on domain0, the higher the level of security.
• Use a firewall to restrict traffic to domain0. You can set up a firewall with default-reject rules that will help protect domain0 from attacks (see the sketch after this list). It is also important to limit network facing services.
• Do not allow normal users to access domain0. If you do permit normal users domain0 access, you run the risk of rendering domain0 vulnerable. Remember, domain0 is privileged, and granting unprivileged accounts access may compromise the level of security.
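A minimal default-reject sketch for the firewall point, assuming only SSH to domain0 should remain reachable:
# iptables -P INPUT DROP
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT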
Chapter 14. Storage
There are several ways to manage virtual machine storage. You can export a domain0 physical block device (hard drive or partition) to a guest domain as a virtual block device (VBD). You can also export a partitioned disk image directly as a file-backed VBD. Red Hat Virtualization enables LVM and blktap by default during installation. You can also employ standard network protocols such as NFS, CLVM, or iSCSI to provide storage for virtual machines.
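In a guest configuration file, these choices appear on the disk line; both forms below are sketches (device and file paths are illustrative):
disk = ['phy:/dev/VolGroup00/guest1,xvda,w']
disk = ['file:/var/lib/xen/images/guest1.dsk,xvda,w']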
Chapter 15. Managing Virtual Machines with virsh
You can use the virsh application to manage virtual machines. This utility is built around the libvirt management API and operates as an alternative to the xm tool or the graphical Virtual Machine Manager. Unprivileged users can employ this utility for read-only operations. If you plan on running xend/qemu, you should enable xend/qemu to run as a service. After modifying the respective configuration file, reboot the system, and xend/qemu will run as a service. You can use virsh to script virtual machine work. Like the xm tool, you run virsh from the command line.
1. Connecting to a Hypervisor
You can use virsh to initiate a hypervisor session:
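A sketch, assuming the connect subcommand available in this virsh release:
# virsh connect <name>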
Where <name> is the machine name of the hypervisor. If you want to initiate a read-only connection, append the above command with --readonly.
This command outputs the domain information (in XML) to stdout. If you save the data to a file, you can use the create option to recreate the virtual machine.
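A sketch of the dump-and-recreate cycle (the domain name and file are illustrative):
# virsh dumpxml guestdomain1 > guestdomain1.xml
# virsh create guestdomain1.xml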
5. Resuming a Virtual Machine
When a domain is in a suspended state, it still consumes system RAM. There is no disk or network I/O while a domain is suspended. The suspend operation is immediate, and the virtual machine must be restarted with the resume option.
The resume operation is immediate, and the virtual machine parameters are preserved across a suspend and resume cycle.
Saving a virtual machine stops it and saves its data to a file, which may take some time given the amount of memory in use by your virtual machine. You can restore the state of the virtual machine with the restore option.
Restoring restarts the saved virtual machine, which may take some time. The virtual machine's name and UUID are preserved, but a new id is allocated.
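Sketches of the four subcommands described above (domain names and file names are illustrative):
# virsh suspend guestdomain1
# virsh resume guestdomain1
# virsh save guestdomain1 guestdomain1.state
# virsh restore guestdomain1.state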
9. Rebooting a Virtual Machine
You can control the behavior of a virtual machine on shutdown by modifying the on_shutdown parameter of the xmdomain.cfg file.
You can control the behavior of the rebooting virtual machine by modifying the on_reboot parameter of the xmdomain.cfg file.
The destroy option does an immediate, ungraceful shutdown and stops any guest domain sessions (which could potentially lead to corrupted filesystems still in use by the virtual machine). You should use the destroy option only when the virtual machine's operating system is non-responsive. For a paravirtualized virtual machine, you should use the shutdown option instead.
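Sketches of the corresponding subcommands (the domain name is illustrative):
# virsh shutdown guestdomain1
# virsh reboot guestdomain1
# virsh destroy guestdomain1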
14. Displaying Virtual Machine Information
You can use virsh to display information for a given virtual machine identified by its domain ID,
domain name, or UUID:
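A sketch of the subcommand (virsh dominfo):
# virsh dominfo [domain-id]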
virsh nodeinfo
This displays the node information and the machines that support the virtualization process.
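The options described next apply to the virsh list subcommand; a typical invocation (a sketch):
# virsh list --all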
The --inactive option lists inactive domains (domains that have been defined but are not currently active). The --all option lists all domains, whether active or not. Your output should resemble this example:
ID Name State
————————————————
0 Domain0 running
1 Domain202 paused
2 Domain010 inactive
3 Domain9600 crashed
Here are the six domain states:
Where [vcpu] is the virtual CPU number and [cpulist] lists the physical CPUs to which it may be bound.
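The subcommand takes the form (a sketch):
# virsh vcpupin [domain-id] [vcpu] [cpulist]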
Note that the new count cannot exceed the amount you specified when you created the virtual machine.
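A sketch of the subcommand (virsh setvcpus):
# virsh setvcpus [domain-id] [count]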
You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably will not work. You can adjust the virtual machine memory as necessary.
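A sketch of the subcommand (virsh setmem):
# virsh setmem [domain-id] [count]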
You must specify the [count] in kilobytes. The new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably will not work. The maximum memory does not affect the current use of the virtual machine (unless the new value is lower, which shrinks memory usage).
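A sketch of the subcommand (virsh setmaxmem):
# virsh setmaxmem [domain-id] [count]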
Chapter 16. Managing Virtual Machines Using xend
The xend node control daemon performs certain system management functions that relate to virtual machines. This daemon controls the virtualized resources, and xend must be running to interact with virtual machines. Before you start xend, you must specify the operating parameters by editing the xend configuration file xend-config.sxp, which is located in the /etc/xen directory. Here are the parameters you can enable or disable in the xend-config.sxp configuration file:
Item Description
After setting these operating parameters, you should verify that xend is running and, if not, initialize the daemon. At the command prompt, you can start the xend daemon by entering the following:
service xend start
To restart a running daemon, enter:
service xend restart
Chapter 17. Managing Virtual Machines Using xm
The xm application is a robust management tool that allows you to configure your Red Hat Virtualization environment. As a prerequisite to using xm, you must ensure that the xend daemon is running on your system.
1. xm Configuration File
The operating parameters that you must modify reside within the xmdomain.cfg file, which is located in the /etc/xen directory. Here are the parameters you can enable or disable in the xmdomain.cfg configuration file:
Item Description
1.1. Configuring vfb
Item Description
2. Creating and Managing Domains with xm
You can further configure your vfb environment by incorporating the options shown in Table
16.2:
Item Description
xm console domain-id
This causes the console to attach to the domain-id's text console.
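The corresponding create command (a sketch following the form used in Chapter 7):
# xm create -c domain001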
This creates a domain named domain001, with the configuration file residing in the /etc/xen/ directory. The -c option aids with troubleshooting by allowing you to connect to the text console.
xm destroy [domain-id]
This instantly terminates the domain-id. If you prefer another method of safely terminating your
session, you can use the shutdown parameter instead.
xm shutdown [domain-id] [ -a | -w ]
The [-a] option shuts down all domains on your system. The [-w] option waits for a domain to completely shut down.
xm restore [state-file]
2.7. Suspending a Domain
You can use xm to suspend a domain:
xm suspend [domain-id]
xm resume [domain-id]
xm reboot [domain-id] [ -a | -w ]
The [-a] option reboots all domains on your system. The [-w] option waits for a domain to completely reboot. You can control the behavior of the rebooting domain by modifying the on_reboot parameter of the xmdomain.cfg file.
Domain renaming will keep the same settings (same hard disk, same memory, etc.).
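A sketch, assuming the xm rename subcommand available in this release:
# xm rename [domain-name] [new-domain-name]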
xm pause [domain-id]
xm unpause [domain-id]
xm domid [domain-name]
xm domname [domain-id]
Note
You cannot grow a domain's memory beyond the maximum amount you specified when you first created the domain.
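The pinning subcommand takes the form (a sketch):
# xm vcpu-pin [domain-id] [vcpu] [cpus]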
Where [vcpu] is the VCPU that you want to attach, and [cpus] is the target physical CPU list. Pinning ensures that certain VCPUs can only run on certain CPUs.
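The migration subcommand takes the form (a sketch):
# xm migrate [domain-id] [host] [options]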
Where [domain-id] is the domain you want to migrate, and [host] is the target host. The [options] include --live (or -l) for a live migration, and --resource (or -r) to specify the maximum speed of the migration (in Mb/s).
To ensure a successful migration, you must ensure that the xend daemon is running on all host domains. All hosts must also be running Red Hat Enterprise Linux 5.0 or later and have the migration TCP ports open to accept connections from the source hosts.
xm top [domain-id]
You can specify a specific domain (or domains) by name. The [--long] option provides a more detailed breakdown of the domain you specified. The [--label] option adds an additional column that displays label status. The output displays:
State Description
4. Displaying Uptime
You can use xm to display the uptime:
xm uptime [domain-id]
Name ID Uptime
Domain0 0 4:45:02
Domain202 1 3:32:00
Domain9600 2 0:09:14
DomainR&D 3 2:21:41
xm vcpu-list [domain-id]
Specify the domain whose VCPUs you want to list. If you do not specify a domain, the VCPUs are displayed for all domains.
xm info
7. Displaying TPM Devices
host : redhat83-157.brisbane.redhat.com
release : 2.6.18-1.2714.el5xen
version : #1 SMP Mon Oct 21 17:57:21 EDT 2006
machine : x86_64
nr_cpus : 8
nr_nodes : 1
sockets_per_node : 2
cores_per_socket : 2
threads_per_core : 2
cpu_mhz : 2992
hw_caps : bfeebbef:20100000:00000000:00000000
total_memory : 1022
free_memory : 68
xen_major : 3
xen_minor : 0
xen_extra : -unstable
xen_caps : xen-3.0-x86_64
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
cc_compiler : gcc compiler version 4.1.1 20060928
cc_compile_by : brewbuilder
cc_compile_domain : build.redhat.com
cc_compile_date : Mon Oct 2 17:00 EDT 2006
xend_config_format : 2
The [--long] option provides a more detailed breakdown of the domain you specified.
xm log
xm dmesg
10. Displaying ACM State Information
xm dumppolicy [policy.bin]
xm vnet-list [-l | --long]
List Vnets
-l, --long List Vnets as SXP
The output displays the block devices for the domain you specify.
The output displays the network interfaces for the domain you specify.
Parameter Description
xm vnet-create [configfile]
xm dry-run [configfile]
This checks each resource listed in your configfile. It lists the status of each resource and the final security decision.
xm resources
The output displays the resources for the domains on your system.
You can configure Weight with the [-w] option. You can configure Cap with the [-c] option.
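For example, to give a domain twice the default weight and cap it at one full CPU, a sketch using the xm sched-credit subcommand (the domain name is illustrative):
# xm sched-credit -d guestdomain1 -w 512 -c 100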
You can attach (or detach) virtual devices even if guests are running. The five parameter options are defined below:
Parameter Description
24. Security
24.1. Removing a Domain Security Label
You can use xm to remove a domain security label:
24.2. Creating a Resource Security Label
Label Description
Simple Type Enforcement — The ACP interprets the labels and assigns access requests to domains that require virtual (or physical) access. The security policy controls access between domains and assigns the proper labels to the respective domain. By default, access to domains with Simple Type Enforcement domains is not enabled.
A policy name is a dot-separated list of names that translates into a local path and points to the policy XML file (relative to the global policy root directory). For instance, the domain file chinese_wall.client_V1 pertains to the policy file /example/chinese_wall.client_v1.xml.
Red Hat Virtualization includes these parameters that allow you to manage security policies and assign labels to domains:
xm makepolicy [policy]
This creates the binary policy and saves it as binary file [policy.bin].
xm loadpolicy [policy.bin]
xm cfgbootpolicy [kernelversion]
This copies the binary policy into the /boot directory and modifies the corresponding line in the /boot/grub/menu.lst file.
This adds a security label to a domain configuration file. It also verifies that the respective policy definition matches the corresponding label name.
ACM_SECURITY ?= y
ACM_DEFAULT_SECURITY_POLICY ?= ACM_CHINESE_WALL_AND_SIMPLE_TYPE_ENFORCEMENT_POLICY
xm makepolicy chinesewall_ste.client_v1
xm loadpolicy example.chwall_ste.client_v1
24.16. Displaying Security Labels
xm cfgbootpolicy chinesewall_ste.client_v1
This causes the ACM to use this label to boot Red Hat Virtualization.
dom_StorageDomain
dom_SystemManagement
dom_NetworkDomain
dom_QandA
dom_R&D
Attaching the security label ensures that the domain does not share data with other non-SoftwareDev user domains. In this example, the myconfig.xm configuration file represents a domain that runs workloads related to SoftwareDev's infrastructure.
Edit your respective configuration file and verify that the addlabel command correctly added the
access_control entry (and associated parameters) to the end of the file:
If anything does not appear correct, make the necessary modifications and save the file.
Chapter 18. Managing Virtual Machines with Virtual Machine Manager
This section describes the Red Hat Virtualization Virtual Machine Manager (VMM) windows, dialog boxes, and various GUI controls.
3. Virtual Machine Manager Window
5. Virtual Machine Graphical Console
This window displays graphs and statistics of a guest's live resource utilization data, available from the Red Hat Virtualization Virtual Machine Manager. The UUID field displays the globally unique identifier for the virtual machine(s).
Figure 18.4. Graphical Console window
Your local desktop can intercept key combinations (for example, Ctrl+Alt+F11) to prevent them from being sent to the guest machine. You can use the Virtual Machine Manager's 'sticky key' capability to send these sequences. You must press any modifier key (like Ctrl or Alt) three times, and the key you specify is then treated as active until the next non-modifier key is pressed. You can then send Ctrl-Alt-F11 to the guest by entering the key sequence 'Ctrl Ctrl Ctrl Alt+F11'.
• Summarize running domains with live performance and resource utilization statistics.
• Display graphs that show performance and resource utilization over time.
• Use the embedded VNC client viewer which presents a full graphical console to the guest
domain.
Note:
You must install Red Hat Enterprise Linux 5.0, virt-manager, and the kernel packages on all systems that require virtualization. All systems then must be booted and running the Red Hat Virtualization kernel.
These are the steps required to install a guest operating system on Red Hat Enterprise Linux 5 using the Virtual Machine Manager:
1. From the Applications menu, select System Tools and then Virtual Machine Manager.
3. Click Forward.
7. Creating a New Virtual Machine
4. Enter the name of the new virtual system and then click Forward.
Figure 18.9. Naming the Virtual System
5. Enter the location of your install media. The location of the kickstart file is optional. Then click Forward.
Figure 18.10. Locating the Installation Media
6. Install either to a physical disk partition or install to a virtual file system within a file.
Note
This example installs a virtual system within a file.
The default SELinux policy permits xen disk images to reside in /var/lib/xen. If you have SELinux enabled and want to specify a custom path for the virtual disk, you need to change the SELinux policies accordingly.
Open a terminal, create the /xen directory, and set the SELinux policy with the command restorecon -v /xen. Specify your location and the size of the virtual disk, then click Forward.
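If you prefer a persistent labeling rule rather than a one-off restorecon, a sketch using semanage (assuming the policycoreutils tooling is installed; the /xen path matches the note above):
# semanage fcontext -a -t xen_image_t "/xen(/.*)?"
# restorecon -R -v /xen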
7. Select the memory to allocate to the guest and the number of virtual CPUs, then click Forward.
10. Type xm create -c xen-guest to start the Red Hat Enterprise Linux 5.0 guest. Right-click on the guest in the Virtual Machine Manager and choose Open to open a virtual console.
11. Enter user name and password to continue using the Virtual Machine Manager.
8. Restoring A Saved Machine
4. Click Open.
The saved virtual system appears in the Virtual Machine Manager main window.
Figure 18.18. The Restored Virtual Machine Manager Session
1. In the Virtual Machine Manager main window, highlight the virtual machine that you want to
view.
2. From the Virtual Machine Manager Edit menu, select Machine Details (or click the Details
button on the bottom of the Virtual Machine Manager main window).
Figure 18.20. Displaying Virtual Machine Details Menu
The Virtual Machine Details Overview window appears. This window summarizes CPU and
memory usage for the domain(s) you specified.
4. On the Hardware tab, click on Processor to view or change the current processor allocation.
5. On the Hardware tab, click on Memory to view or change the current RAM allocation.
6. On the Hardware tab, click on Disk to view or change the current hard disk configuration.
7. On the Hardware tab, click on Network to view or change the current network configuration.
10. Configuring Status Monitoring
2. From the Status monitoring area selection box, specify the time (in seconds) that you want
the system to update.
3. From the Consoles area, specify how to open a console and specify an input device.
To view the domain IDs for all virtual machines on your system:
2. The Virtual Machine Manager lists the Domain IDs for all domains on your system.
12. Displaying Virtual Machine Status
2. The Virtual Machine Manager lists the status of all virtual machines on your system.
13. Displaying Virtual CPUs
1. From the View menu, select the Virtual CPUs check box.
Figure 18.33. Displaying Virtual CPUs
2. The Virtual Machine Manager lists the Virtual CPUs for all virtual machines on your system.
1. From the View menu, select the CPU Usage check box.
Figure 18.35. Displaying CPU Usage
2. The Virtual Machine Manager lists the percentage of CPU in use for all virtual machines on
your system.
15. Displaying Memory Usage
1. From the View menu, select the Memory Usage check box.
2. The Virtual Machine Manager lists the percentage of memory in use (in megabytes) for all
virtual machines on your system.
Chapter 19. Red Hat Virtualization Troubleshooting
This section covers potential issues you may experience in the installation, management, and general day-to-day operation of your Red Hat Virtualization system(s). This troubleshooting section covers the error messages, log file locations, system tools, and general approaches to researching data and analyzing problems.
• The Red Hat Virtualization main configuration directory is /etc/xen/. This directory contains the xend daemon and other virtual machine configuration files. The networking script files reside here as well (in the /scripts subdirectory).
• All of the log files that you will consult for troubleshooting purposes reside in the /var/log/xen directory.
• You should also know that the default directory for all virtual machine file-based disk images is the /var/lib/xen directory.
• Red Hat Virtualization information for the /proc file system resides in the /proc/xen/ directory.
2. Logfile Descriptions
Red Hat Virtualization features the xend daemon and the qemu-dm process, two utilities that write multiple log files to the /var/log/xen/ directory:
• xend.log is the logfile that contains all the data collected by the xend daemon, whether it is a normal system event or an operator initiated action. All virtual machine operations (such as create, shutdown, destroy, etc.) appear here. xend.log is usually the first place to look when you track down event or performance problems. It contains detailed entries and conditions of the error messages.
• xend-debug.log is the logfile that contains records of event errors from xend and the virtualization subsystems (such as framebuffer, Python scripts, etc.).
• xen-hotplug.log is the logfile that contains data from hotplug events. If a device or a network script does not come online, the event appears here.
• qemu-dm.[PID].log is the logfile created by the qemu-dm process for each fully virtualized guest. When using this logfile, you must retrieve the given qemu-dm process PID by using the ps command to examine the process arguments and isolate the qemu-dm process belonging to the virtual machine. Note that you must replace the [PID] symbol with the actual PID of the qemu-dm process.
If you encounter any errors with the Virtual Machine Manager, you can review the generated data in the virt-manager.log file that resides in the ~/.virt-manager directory. Note that every time you start the Virtual Machine Manager, it overwrites the existing logfile contents. Make sure to back up the virt-manager.log file before you restart the Virtual Machine Manager after a system error.
3. Important Directory Locations
• When you restart the xend daemon, it updates the xend-database that resides in the /var/lib/xen/xend-db directory.
• Virtual machine dumps (that you perform with the xm dump-core command) reside in the /var/lib/xen/dumps directory.
• The /etc/xen directory contains the configuration files that you use to manage system resources. The xend daemon configuration file is called xend-config.sxp, and you can use this file to implement system-wide changes and configure the networking callouts.
• The proc commands are another resource that allows you to gather system information. These proc entries reside in the /proc/xen directory:
/proc/xen/capabilities
/proc/xen/balloon
/proc/xen/xenbus/
4. Troubleshooting Tools
This section summarizes the System Administrator applications, the networking utilities, and the
Advanced Debugging Tools (for more information on using these tools to configure the Red Hat
Virtualization services, see the respective configuration documentation). You can employ these
standard System Administrator Tools and logs to assist with troubleshooting:
• xentop
• xm dmesg
• xm log
• vmstat
• iostat
• lsof
You can employ these Advanced Debugging Tools and logs to assist with troubleshooting:
• XenOprofile
• SystemTap
• crash
• sysrq
• sysrq t
• sysrq w
• ifconfig
• tcpdump
• brctl
brctl is a networking tool that inspects and configures the ethernet bridge configuration in the virtualization linux kernel. You must have root access before performing these example commands:
# brctl show
xenbr0
bridge-id 8000.feffffffffff
designated-root 8000.feffffffffff
root-port 0 path-cost 0
hello-time 2.00 bridge-hello-time 2.00
ageing-time 300.01
[2006-12-27 02:23:02 xend] ERROR (SrvBase:163) op=create: Error creating domain: (0, 'Error')
Traceback (most recent call last)
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvBase.py", line 107, in _perform val = op_method(
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 71, in op_create
    raise XendError("Error creating domain: " + str(ex))
XendError: Error creating domain: (0, 'Error')
The other log file, xend-debug.log, is very useful to system administrators since it contains even more detailed information than xend.log. Here is the same error data for the same kernel domain creation problem:
Always include a copy of both of these log files when contacting technical support.
The sync_console parameter can help determine a problem that causes hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. The parameters "console=ttyS0" and "console=tty" mean that kernel errors get logged on both the normal VGA console and the serial console. Then you can install and set up ttywatch to capture the data on a remote host connected by a standard null-modem cable. For example, on the remote host you could type:
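A sketch of such an invocation (assuming ttywatch's --name and --port options; the host name is illustrative):
ttywatch --name myhost --port /dev/ttyS0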
This pipes the output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.
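To connect to a guest's virtual text console, use xm console (a sketch):
# xm console domain100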
Where domain100 represents a running name or number. You can also use the Virtual Machine
Manager to display the virtual text console. On the Virtual Machine Details window, select Serial
Console from the View menu.
xm console
You can also use the Virtual Machine Manager to display the serial console. On the Virtual Ma-
chine Details window, select Serial Console from the View menu.
If your system is not using multipath, you can use udev to implement LUN persistence. Before implementing LUN persistence in your system, ensure that you acquire the proper UUIDs. Once you acquire these, you can configure LUN persistence by editing the scsi_id file that resides in the /etc directory. Once you have this file open in a text editor, you must comment out this line:
# options=-b
Then add this line:
options=-g
This tells udev to monitor all system SCSI devices for returning UUIDs. To determine the system UUIDs, type:
# scsi_id -g -s /block/sdc
This long string of characters is the UUID. To get the device names to key off the UUID, check each device path to ensure that the UUID number is the same for each device. The UUIDs do not change when you add a new device to your system. Once you have checked the device paths, you must create rules for the device naming. To create these rules, edit the 20-names.rules file that resides in the /etc/udev/rules.d directory. The device naming rules you create here should follow this format:
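A sketch of the rule format (the udev syntax of this era; UUID and devicename are placeholders):
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id", RESULT="UUID", NAME="devicename"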
Replace your existing UUID and devicename with the UUID retrieved above. So the rule should resemble the following:
This causes the system to check all devices that match /dev/sd* for the given UUID. When it finds a matching device, it creates a device node called /dev/devicename. For this example, the device node is /dev/mydevice. Finally, append the rc.local file that resides in the /etc directory with this path:
/sbin/start_udev
To implement LUN persistence in a multipath environment, you must define the alias names for the multipath devices. For this example, you must define four device aliases by editing the multipath.conf file that resides in the /etc/ directory:
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp1
}
multipath {
wwid 3600a0b80001327510000015427b6
alias oramp2
}
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp3
}
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp4
}
The boolean parameter xend_disable_trans puts xend in unconfined mode after restarting the daemon. It is better to disable protection for a single daemon than for the whole system. You should avoid re-labeling directories as xen_image_t if you will use them elsewhere.
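A sketch of setting that boolean persistently with setsebool:
# setsebool -P xend_disable_trans 1
# service xend restart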
You can use the kpartx application to handle partitioned disks or LVM volume groups:
To access LVM volumes on a second partition, you must rescan LVM with vgscan and activate the volume group on the partition (called VolGroup00 by default) by using the vgchange -ay command:
# kpartx -a /dev/xen/guest1
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
# vgchange -ay VolGroup00
2 logical volume(s) in volume group "VolGroup00" now active.
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
LogVol00 VolGroup00 -wi-a- 5.06G
LogVol01 VolGroup00 -wi-a- 800.00M
# mount /dev/VolGroup00/LogVol00 /mnt/
....
# umount /mnt/
# vgchange -an VolGroup00
# kpartx -d /dev/xen/guest1
You must remember to deactivate the logical volumes with vgchange -an, remove the partitions with kpartx -d, and delete the loop device with losetup -d when you finish.
If you try to run xend start manually and receive more errors:
Error: Could not obtain handle on privileged command interfaces (2 = No such file or directory)
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvDaemon.py", line 26, in ?
    import images
    xc = xen.lowlevel.xc.xc()
What most likely happened here is that you rebooted your host into a kernel that is not a xen-hypervisor kernel. To correct this, you must select the xen-hypervisor kernel at boot time (or set the xen-hypervisor kernel as the default in your grub.conf file).
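The loop device example referenced next is a modprobe.conf line of the following form (a sketch):
options loop max_loop=64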
This example uses 64, but you can specify another number to set the maximum loop value. You may also have to implement loop device backed guests on your system. To employ loop device backed guests for a paravirtual system, use the phy: block device or tap:aio parameters. To employ loop device backed guests for a fully virtualized system, use the phy: device or file: file parameters.
If you do a yum update and receive a new kernel, the grub.conf default kernel switches right back to the bare-metal kernel instead of the virtualization kernel.
To correct this problem, you must modify the default kernel setting in the /etc/sysconfig/kernel file. You must also ensure that kernel-xen is set as the default option in your grub.conf file.
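A sketch of the relevant line in /etc/sysconfig/kernel (assuming the DEFAULTKERNEL variable used by the kernel packaging scripts):
DEFAULTKERNEL=kernel-xen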
15. Serial Console Errors
These changes to the grub.conf file should enable your serial console to work correctly. You should be able to use any number for ttyS, and it should work like ttyS0.
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=10.1.1.1
GATEWAY=10.1.1.254
ARP=yes
Edit /etc/xen/xend-config.sxp and add a line for your new network bridge script (this example uses network-xen-multi-bridge).
In the xend-config.sxp file, the new line should reflect your new script:
network-script network-xen-multi-bridge
replacing the default entry:
network-script network-bridge
If you want to create multiple Xen bridges, you must create a custom script. The example below creates two Xen bridges (called xenbr0 and xenbr1) and attaches them to eth0 and eth1, respectively:
#!/bin/sh
# network-xen-multi-bridge
# Exit if anything goes wrong
set -e
# First arg is the operation.
OP=$1
shift
script=/etc/xen/scripts/network-bridge.xen
case ${OP} in
start)
    $script start vifnum=1 bridge=xenbr1 netdev=eth1
    $script start vifnum=0 bridge=xenbr0 netdev=eth0
    ;;
stop)
    $script stop vifnum=1 bridge=xenbr1 netdev=eth1
    $script stop vifnum=0 bridge=xenbr0 netdev=eth0
    ;;
status)
    $script status vifnum=1 bridge=xenbr1 netdev=eth1
    $script status vifnum=0 bridge=xenbr0 netdev=eth0
    ;;
*)
    echo 'Unknown command: ' ${OP}
    echo 'Valid commands are: start, stop, status'
    exit 1
esac
If you want to create additional bridges, just use the example script and copy/paste the file ac-
cordingly.
17. Laptop Configurations
The idea here is to create a 'dummy' network interface for Red Hat Virtualization to use.
This technique allows you to use a hidden IP address space for your guests and virtual machines. To do this operation successfully, you must use static IP addresses, as DHCP does not listen for IP addresses on the dummy network. You also must configure NAT/IP masquerading to enable network access for your guests and virtual machines. You should attach a static IP when you create the 'dummy' network interface.
For this example, the interface is called dummy0 and the IP used is 10.1.1.1. The file is called ifcfg-dummy0 and resides in the /etc/sysconfig/network-scripts/ directory:
DEVICE=dummy0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=10.1.1.1
ARP=yes
You should bind xenbr0 to dummy0 to allow network connection even when disconnected from
the physical network.
You will need to make additional modifications to the xend-config.sxp file. You must locate the (network-script 'network-bridge' bridge=xenbr0) section and append this to the end of the line:
netdev=dummy0
You must also make some modifications to your guest's domU networking configuration to en-
able the default gateway to point to dummy0. You must edit the DomU 'network' file that resides
in the /etc/sysconfig/ directory to reflect the example below:
NETWORKING=yes
HOSTNAME=localhost.localdomain
GATEWAY=10.1.1.1
IPADDR=10.1.1.10
NETMASK=255.255.255.0
It is a good idea to enable NAT in domain0 so that domU can access the public network. This way,
even wireless users can work around the Red Hat Virtualization wireless limitations. To do this,
you must modify the S99XenLaptopNAT file that resides in the /etc/rc3.d directory to reflect the
example below:
#!/bin/bash
PATH=/usr/bin:/sbin:/bin:/usr/sbin
export PATH
GATEWAYDEV=$(ip route | grep default | awk '{print $5}')
iptables -F
case "$1" in
start)
if test -z "$GATEWAYDEV"; then
echo "No gateway device found"
else
echo "Masquerading using $GATEWAYDEV"
/sbin/iptables -t nat -A POSTROUTING -o $GATEWAYDEV -j MASQUERADE
fi
echo "Enabling IP forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "IP forwarding set to $(cat /proc/sys/net/ipv4/ip_forward)"
echo "done"
;;
*)
echo "Usage: $0 {start | restart | status}"
;;
esac
If you want the network set up automatically at boot time, you must create a symbolic link to the
script at /etc/rc3.d/S99XenLaptopNAT.
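A sketch of creating that link, assuming the NAT script itself is saved as /etc/init.d/XenLaptopNAT (the location is illustrative):
ln -s /etc/init.d/XenLaptopNAT /etc/rc3.d/S99XenLaptopNAT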
When modifying the modprobe.conf file, you must include these lines:
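Assuming the dummy0 interface is provided by the standard dummy network driver, the lines would resemble the following sketch:
alias dummy0 dummy
options dummy numdummies=1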
You can configure your guests to start automatically when you boot the system. To do this, you
must add symbolic links in the /etc/xen/auto directory that point to the configuration files of the
guests you want started automatically. The startup process is serialized, meaning that the higher
the number of guests, the longer the boot process takes. This example shows how to use a
symbolic link for the guest rhel5vm01:
[root@python auto]# ln -s /etc/xen/rhel5vm01 .
The grub.conf example below includes the serial console settings described earlier:
# boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5 com1=115200,8n1
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img
For example, if you need to change your dom0 hypervisor's memory to 256MB at boot time, you
must edit the 'xen' line and append the correct entry, 'dom0_mem=256MB'. This example
shows the resulting grub.conf xen entry:
# boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5 com1=115200,8n1 dom0_mem=256MB
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img
21. Cloning the Guest Configuration Files
This example shows a typical configuration file for a paravirtualized guest:
name = "rhel5vm01"
memory = "2048"
disk = ['tap:aio:/xen/images/rhel5vm01.dsk,xvda,w',]
vif = ['type=ioemu, mac=00:16:3e:09:f0:12, bridge=xenbr0',
'type=ioemu, mac=00:16:3e:09:f0:13',]
vnc = 1
vncunused = 1
uuid = "302bd9ce-4f60-fc67-9e40-7a77d9b4e1ed"
bootloader = "/usr/bin/pygrub"
vcpus=2
on_reboot = "restart"
on_crash = "restart"
Note that serial="pty" is the default for the configuration file. This configuration file ex-
ample is for a fully virtualized guest:
name = "rhel5u5-86_64"
builder = "hvm"
memory = 500
disk = ['file:/xen/images/rhel5u5-x86_64.dsk.hda,w [../../../home/mhideo/.evolution//xen/images/rhel5u5-
vif = [ 'type=ioemu, mac=00:16:3e:09:f0:12, bridge=xenbr0', 'type=ieomu, mac=00:16:3e:09:f0:13, bridge=x
uuid = "b10372f9-91d7-ao5f-12ff-372100c99af5'
device_model = "/usr/lib64/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader/"
vnc = 1
vncunused = 1
apic = 1
acpi = 1
pae = 1
vcpus =1
serial ="pty" # enable serial console
on_boot = 'restart'
You must also modify these system configuration settings on your guest. You must modify the
HOSTNAME entry of the /etc/sysconfig/network file to match the new guest's hostname.
Red Hat Virtualization can generate a MAC address for each virtual machine at the time of cre-
ation. Since there is a nearly unlimited number of addresses available on the same subnet, it is
unlikely that two guests will receive the same MAC address. If you need to generate a MAC
address yourself, you can also write a script to do so. This example script contains the logic to
generate a MAC address:
#!/usr/bin/python
# macgen.py script generates a MAC address for Xen guests
#
import random
mac = [ 0x00, 0x16, 0x3e,
random.randint(0x00, 0x7f),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff) ]
print ':'.join(map(lambda x: "%02x" % x, mac))
The script prints a generated MAC address, for example 00:16:3e:66:f5:77, to stdout.
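To use the generated address, you might run the script and paste its output into the guest's vif entry; a sketch, reusing the example output above:
# python macgen.py
00:16:3e:66:f5:77
vif = [ 'mac=00:16:3e:66:f5:77, bridge=xenbr0' ]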
(xend-relocation-server yes)
The default for this parameter is 'no', which keeps the relocation/migration server deactiv-
ated. It should remain deactivated unless you are on a trusted network, because domain virtual
memory is exchanged in raw form without encryption.
(xend-relocation-port 8002)
This parameter sets the port that xend uses for migration. The default value is correct; just
make sure to remove the comment that precedes it.
(xend-relocation-address )
This parameter specifies the address that the relocation server listens on for socket connec-
tions, once you enable xend-relocation-server. Setting it restricts migration traffic to a par-
ticular interface.
(xend-relocation-hosts-allow )
This parameter controls which hosts may communicate with the relocation port. If the value is
empty, then all incoming connections are allowed. Otherwise, you must change this to a space-
separated sequence of regular expressions (such as xend-relocation-hosts-allow
'^localhost\\.localdomain$' ). A host with a fully qualified domain name or IP address that
matches one of these expressions is accepted.
After you configure these parameters, you must reboot the host for Red Hat Virtualization to
accept the new parameters.
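Taken together, a working migration block in xend-config.sxp might resemble the following sketch (the host expression is an illustrative placeholder):
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^localhost$ ^host1\\.example\\.com$')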
24. Interpreting Error Messages
A domain can fail to start if there is not enough RAM available and domain0 does not balloon
down enough to provide space for the newly created guest. You can check the xend.log file for
this error:
[2006-12-21 20:33:31 xend 3198] DEBUG (balloon:133) Balloon: 558432 KiB free; 0 to scrub; need 1048576;
[2006-12-21 20:33:31 xend.XendDomainInfo 3198] ERROR (XendDomainInfo:202) Domain construction failed
You can check the amount of memory in use by domain0 with the xm list Domain-0 com-
mand. If domain0 is not ballooned down, you can use the command xm mem-set Domain-0 New-
MemSize to reduce its memory allocation.
An unsupported-kernel error message indicates that you are trying to run an unsupported guest
kernel image on your hypervisor. This happens when you try to boot a non-PAE paravirtual
guest kernel on a RHEL 5.0 hypervisor. Red Hat Virtualization only supports guest kernels with
PAE and 64bit architectures.
If you need to run a 32bit non-PAE kernel, you will need to run your guest as a fully virtualized
virtual machine. For paravirtualized guests, if you need to run a 32bit PAE guest, then you must
have a 32bit PAE hypervisor. For paravirtualized guests, if you need to run a 64bit guest, then
you must have a 64bit hypervisor. For fully virtualized guests, you must run a 64bit guest with a
64bit hypervisor. The 32bit PAE hypervisor that comes with RHEL 5 i686 only supports running
32bit PAE paravirtualized and 32bit fully virtualized guest operating systems. The 64bit hyper-
visor only supports 64bit paravirtualized guests.
This error happens when you move a fully virtualized HVM guest onto a RHEL 5.0 system. Your
guest may fail to boot and you will see an error on the console screen. Check the PAE entry in
your configuration file and ensure that pae=1. You should use a 32bit distribution.
This error occurs when the virt-manager application fails to launch because there is no
localhost entry in the /etc/hosts configuration file. Check the file and verify that the localhost
entry is present. Here is an example of an incorrect localhost entry:
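A sketch of what the file might contain when the entry is wrong, with the corrected form for comparison:
# Incorrect: the loopback address is missing
localhost.localdomain localhost
# Correct:
127.0.0.1 localhost.localdomain localhost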
This error happens when the guest's bridge is incorrectly configured, which forces the Xen hotplug
scripts to time out. If you move configuration files between hosts, you must ensure that you up-
date the guest configuration files to reflect network topology and configuration modifications.
When you attempt to start a guest that has an incorrect or non-existent Xen bridge configura-
tion, you will receive the following errors:
initrd: /initrd-2.6.18-1.2747.el5xen.img
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
[2006-11-14 15:07:08 xend 3875] DEBUG (DevController:143) Waiting for devices vif
[2006-11-14 15:07:08 xend 3875] DEBUG (DevController:149) Waiting for 0
[2006-11-14 15:07:08 xend 3875] DEBUG (DevController:464) hotplugStatusCallback
/local/domain/0/backend/vif/2/0/hotplug-status
To resolve this problem, you must edit your guest configuration file, and modify the vif entry.
When you locate the vif entry of the configuration file, assuming you are using xenbr0 as the
default bridge, ensure that the proper entry resembles the following:
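A minimal sketch of a corrected entry, reusing the example MAC address from the cloning section:
vif = [ 'mac=00:16:3e:09:f0:12, bridge=xenbr0' ]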
Python generates these messages when it encounters an invalid (or incorrect) configuration file.
To resolve this problem, you must modify the incorrect configuration file or generate a new one.
Chapter 20. Additional Resources
To learn more about Red Hat Virtualization, refer to the following resources.
1. Useful Websites
• http://www.libvirt.org/ — The official website for the libvirt virtualization API that interacts
with the virtualization framework of a host OS.
• http://www.openvirtualization.com/
• Red Hat Enterprise Linux 5 Beta 2 Documentation:
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/index.html
• http://virt-manager.et.redhat.com/
• http://www.xensource.com/xen/xen/
• http://virt.kernelnewbies.org/
• http://et.redhat.com/
2. Installed Documentation
Appendix A. Revision History
Revision 5.0.0-8 Thu Apr 05 2007
Resolves: #235311
Clarifying SELinux installation procedure
Revision 5.0.0-7 Wed Feb 07 2007
Resolves: #221137
Fix to broken rpm
Appendix B. Lab 1
Xen Guest Installation
Prerequisites: A workstation installed with Red Hat Enterprise Linux 5.0 with the Virtualization
component.
For this lab, you will configure and install RHEL 3, 4, or 5 and Windows XP Xen guests using
various virtualization tools.
You must determine whether your system has PAE support. Red Hat Virtualization supports
x86_64 or ia64 based CPU architectures to run para-virtualized guests. To run i386 guests, the
system requires a CPU with PAE extensions. Many older laptops (particularly those based on
Pentium Mobile or Centrino) do not support PAE.
Lab Sequence 1: Checking for PAE support.
1. To determine whether your CPU has PAE support, type the following command:
grep pae /proc/cpuinfo
2. The following output shows a CPU that has PAE support. If the command returns nothing,
then the CPU does not have PAE support. All the lab exercises require an i386 CPU with the
PAE extension, or an x86_64 or ia64 CPU, in order to proceed.
flags :
fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat clflush dts acpi
mmx fxsr sse sse2 ss tm pbe nx up est tm2
Lab Sequence 2: Installing RHEL5 Beta 2 Xen para-virtualized guest using virt-install.
For this lab, you must install a Red Hat Enterprise Linux 5 Beta 2 Xen guest using virt-install.
1. To install your Red Hat Enterprise Linux 5 Beta 2 Xen guest, at the command prompt type:
virt-install.
6. Type 6 for the size of your disk (guest image).
10. After the installation completes, edit /etc/xen/rhel5b2-pv1 and make the following
changes:
#vnc=1
#vncunused=1
sdl=1
11. Use a text editor to modify /etc/inittab so that the system boots into runlevel 5:
#id:3:initdefault:
id:5:initdefault:
Lab Sequence 3: Installing RHEL5 Beta 2 Xen para-virtualized guest using virt-manager.
For this lab, you will install a Red Hat Enterprise Linux 5 Beta 2 Xen paravirtualized guest using
virt-manager.
1. To install your Red Hat Enterprise Linux 5 Beta 2 Xen guest, at the command prompt type:
virt-manager.
2. On the Open Connection window, select Local Xen host, and click on Connect.
3. Start Red Hat's Virtual Machine Manager application, and from the File menu, click on
New.
4. Click on Forward.
7. Type nfs:server:/path/to/rhel5b2 for your install media URL, and click Forward.
8. Select Simple File, type /xen/rhel5b2-pv2.img for your file location. Choose 6000 MB, and
click Forward.
9. Choose 500 for your VM Startup and Maximum Memory, and click Forward.
The Virtual Machine Console window appears. Proceed as normal and finish up the installation.
Lab Sequence 4: Checking for Intel-VT or AMD-V hardware support.
For this lab, you must determine if your system supports Intel-VT or AMD-V hardware. Your sys-
tem must support Intel-VT or AMD-V enabled CPUs to successfully install the fully virtualized
guest operating systems. Red Hat Virtualization incorporates a generic HVM layer to support
these CPU vendors.
1. To determine if your CPU has Intel-VT or AMD-V support, type the following command:
egrep -e 'vmx|svm' /proc/cpuinfo
2. The following output shows a CPU that supports Intel-VT:
flags :
fpu tsc msr pae mce cx8 apic mtrr mca cmov pat clflush dts acpi mmx fxsr sse
sse2 ss ht tm pbe constant_tsc pni monitor vmx est tm2 xtpr
If the command returns nothing, then the CPU does not support Intel-VT or AMD-V.
3. To determine whether Intel-VT or AMD-V support is enabled, type the following command:
cat /sys/hypervisor/properties/capabilities
4. The following output shows that Intel-VT support has been enabled in the BIOS. If the com-
mand returns nothing, go into the BIOS Setup Utility and look for a setting related to
'Virtualization', for example 'Intel(R) Virtualization Technology' under the 'CPU' section on an
IBM T60p. Enable and save the setting, then power the machine off and back on for the
change to take effect.
Lab Sequence 5: Installing RHEL5 Beta 2 Xen fully virtualized guest using virt-install.
For this lab, you will install a Red Hat Enterprise Linux 5 Beta 2 Xen fully virtualized guest using
virt-install:
1. To install your Red Hat Enterprise Linux 5 Beta 2 Xen guest, at the command prompt type:
virt-install.
9. The VNC viewer appears within the installation window. If there is an error message that
says “main: Unable to connect to host: Connection refused (111)”, then type the following
command to proceed: vncviewer localhost:5900. VNC port 5900 refers to the first Xen
guest that is running on VNC. If it doesn't work, you might need to use 5901, 5902, etc.
The installation begins. Proceed as normal with the installation.
Lab Sequence 6: Installing RHEL5 Beta 2 Xen fully virtualized guest using virt-manager.
For this lab, you will install a Red Hat Enterprise Linux 5 Beta 2 Xen fully virtualized guest using
virt-manager:
1. To install your Red Hat Enterprise Linux 5 Beta 2 Xen guest, at the command prompt type:
virt-manager.
2. On the Open Connection window, select Local Xen host, and click on Connect.
3. Start Red Hat's Virtual Machine Manager application, and from the File menu, click on New.
4. Click on Forward.
7. Specify either CD-ROM or DVD, and enter the path to install media. Specify ISO Image loc-
ation if you will install from an ISO image. Click Forward.
8. Select Simple File, type /xen/rhel5b2-fv2.img for your file location. Specify 6000 MB, and
click Forward.
9. Choose 500 for your VM Startup and Maximum Memory, and click Forward.
Lab Sequence 7: Installing RHEL3 Xen fully virtualized guest using virt-manager.
For this lab, you will install a Red Hat Enterprise Linux 3 Xen guest using virt-manager:
Lab Sequence 8: Installing RHEL4 Xen fully virtualized guest using virt-manager
For this lab, you will install a Red Hat Enterprise Linux 4 Xen guest using virt-manager :
Lab Sequence 9: Installing Windows XP Xen fully virtualized guest using virt-manager.
For this lab, you will install a Windows XP Xen fully virtualized guest using virt-manager:
1. To install your Windows XP guest on your Red Hat Enterprise Linux 5 host, at the command
prompt type: virt-manager.
2. On the Open Connection window, select Local Xen host, and click on Connect.
3. Start Red Hat's Virtual Machine Manager application, and from the File menu click on New.
4. Click on Forward.
7. Specify either CD-ROM or DVD, and enter the path to install media. Specify ISO Image loc-
ation if you will install from an ISO image. Click Forward.
8. Select Simple File, type /xen/winxp.img for your file location. Specify 6000 MB, and click
Forward.
9. Select 1024 for your VM Startup and Maximum Memory, and select 2 for VCPUs. Click
Forward .
11. The Virtual Machine Console window appears. Proceed as normal and finish up the install-
ation.
12. Choose to format the C:\ partition in FAT file system format. Red Hat Enterprise Linux 5
does not come with NTFS kernel modules, so mounting or writing files to the Xen guest image
may not be as straightforward if you format the partition in NTFS file system format.
13. After you reboot the system for the first time, edit the winxp guest image:
losetup /dev/loop0 /xen/winxp.img
kpartx -av /dev/loop0
mount /dev/mapper/loop0p1 /mnt
cp -prv $WINDOWS/i386 /mnt/
This fixes a problem that you may face in the later part of the Windows installation.
15. In the Virtual Machine Manager window, select the winxp Xen guest and click Open.
16. The Virtual Machine Console window appears. Proceed as normal and finish up with the in-
stallation.
17. Whenever a 'Files Needed' dialog box appears, change the path GLOBALROOT\DEVICE\CDROM0\I386
to C:\I386. Depending on your installation, you may or may not see this problem. You may be
prompted for missing files during the installation. Changing the path to C:\I386 should
compensate for this problem.
18. If the Xen guest console freezes, click shutdown and make the following changes in
/etc/xen/winxp:
#vnc=1
#vncunused=1
sdl=1
#vcpus=2
Appendix C. Lab 2
Live Migration
Prerequisite: Two workstations installed with Red Hat Enterprise Linux 5.0 Beta 2 with the Virtual-
ization Platform, and a Fedora Core 6 Xen guest on one of the two workstations.
For this lab, you will configure the migration and execute a live migration between the two hosts.
For this lab, you will need two Virtualization hosts, a Xen guest, and shared storage. You must
connect the two Virtualization hosts with a UTP cable. One of the Virtualization hosts exports
shared storage via NFS. You must configure both of the Virtualization hosts so that they migrate
successfully. The Xen guest resides on the shared storage. On the Xen guest, you should in-
stall a streaming server. You must make sure that the streaming server runs without any in-
terruption on the Xen guest while the live migration takes place between one Virtualization host
and the other. For Lab 2, the two Virtualization hosts are referred to as host1 and host2.
In this lab procedure, you configure xend to start up as an HTTP server and a relocation server.
The xend daemon does not initiate the HTTP server by default; it starts only the UNIX domain
socket management server, which xm uses to communicate with xend. To enable cross-machine
live migration, you must configure xend to support it:
(xend-unix-server yes)
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
#(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
For this lab procedure, you will configure NFS and use it to export a shared storage.
1. Edit /etc/exports and include the line:
/xen *(rw,sync,no_root_squash)
2. Save /etc/exports and restart the NFS server. Make sure that the NFS server starts by de-
fault:
service nfs start
chkconfig nfs on
3. After starting the NFS server on host1, you can then mount it on host2:
mount host1:/xen /xen
4. Now start the Xen guest on host1 and select fc6-pv1 (or fc6-pv2 from Lab 1):
xm create -c fc6-pv1
For this lab step, you will install a streaming server, gnump3d, for demonstration purposes.
gnump3d was selected because it supports OGG Vorbis files and is easy to install, configure,
and modify.
2. Create a /home/mp3 directory and copy TruthHappens.ogg from Red Hat's Truth Happens
page into it:
mkdir /home/mp3
wget -c http://www.redhat.com/v/ogg/TruthHappens.ogg
3. Start the streaming server with the command:
gnump3d
4. On either one of the two Xen hosts, start running the Movie Player. If it is not installed, then
install the totem and iso-codecs rpms before running the Movie Player. Click Applications,
then Sound & Video, and finally Movie Player.
3. Open multiple terminal windows on both Xen hosts with the following command:
4. Observe as the live migration begins. Note how long it takes for the migration to complete.
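The migration itself is initiated from the host currently running the guest; a minimal sketch, assuming the fc6-pv1 guest and the host names used above:
xm migrate --live fc6-pv1 host2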
Challenge Sequence: Configuring VNC server from within the Xen guest
If time permits, from within the Xen guest, configure the VNC server to initiate when gdm starts
up. Run VNC viewer and connect to the Xen guest. Play with the Xen guest when the live mi-
gration occurs. Attempt to pause/resume, and save/restore the Xen guest and observe what
happens to the VNC viewer. If you connect to the VNC viewer via localhost:590x, and do a live
migration, you won't be able to connect to the VNC viewer again when it dies. This is a known
bug.