Virtualization Guide
Red Hat Enterprise Linux 6 Virtualization Guide
Guide to Virtualization on Red Hat Enterprise Linux 6
Edition 3.2
Copyright 2008, 2009, 2010, 2011 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries. All other trademarks are the property of their respective owners.

1801 Varsity Drive, Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
The Red Hat Enterprise Linux Virtualization Guide contains information on installing, configuring, administering, and troubleshooting the virtualization technologies included with Red Hat Enterprise Linux. Current version: 3.2-064 - Review for 6.1 release.
Table of Contents

Preface
  1. Document Conventions
    1.1. Typographic Conventions
    1.2. Pull-quote Conventions
    1.3. Notes and Warnings
  2. We need your feedback
1. Introduction
  1.1. What is virtualization?
  1.2. KVM and virtualization in Red Hat Enterprise Linux
  1.3. libvirt and the libvirt tools
  1.4. Virtualized hardware devices
    1.4.1. Virtualized and emulated devices
    1.4.2. Para-virtualized drivers
    1.4.3. Physical host devices
    1.4.4. Guest CPU models
  1.5. Storage
    1.5.1. Storage pools
  1.6. Virtualization security features
  1.7. Migration
  1.8. Virtualized to virtualized migration (V2V)
I. Requirements and restrictions
  2. System requirements
  3. KVM Guest VM compatibility
  4. Virtualization restrictions
    4.1. KVM restrictions
    4.2. Application restrictions
    4.3. Other restrictions
II. Installation
  5. Installing the virtualization packages
    5.1. Installing KVM with a new Red Hat Enterprise Linux installation
    5.2. Installing KVM packages on an existing Red Hat Enterprise Linux system
  6. Virtualized guest installation overview
    6.1. Virtualized guest prerequisites and considerations
    6.2. Creating guests with virt-install
    6.3. Creating guests with virt-manager
    6.4. Installing guests with PXE
  7. Installing Red Hat Enterprise Linux 6 as a fully virtualized guest on Red Hat Enterprise Linux 6
    7.1. Creating a Red Hat Enterprise Linux 6 guest with local installation media
    7.2. Creating a Red Hat Enterprise Linux 6 guest with a network installation tree
    7.3. Creating a Red Hat Enterprise Linux 6 guest with PXE
  8. Installing Red Hat Enterprise Linux 6 as a Xen para-virtualized guest on Red Hat Enterprise Linux 5
    8.1. Using virt-install
    8.2. Using virt-manager
  9. Installing a fully-virtualized Windows guest
    9.1. Using virt-install to create a guest
III. Configuration
  10. Network Configuration
    10.1. Network Address Translation (NAT) with libvirt
    10.2. Bridged networking with libvirt
  11. KVM Para-virtualized Drivers
    11.1. Using the para-virtualized drivers with Red Hat Enterprise Linux 3.9 guests
    11.2. Installing the KVM Windows para-virtualized drivers
      11.2.1. Installing the drivers on an installed Windows guest
      11.2.2. Installing drivers during the Windows installation
    11.3. Using KVM para-virtualized drivers for existing devices
    11.4. Using KVM para-virtualized drivers for new devices
  12. PCI device assignment
    12.1. Adding a PCI device with virsh
    12.2. Adding a PCI device with virt-manager
    12.3. PCI device assignment with virt-install
  13. SR-IOV
    13.1. Introduction
    13.2. Using SR-IOV
    13.3. Troubleshooting SR-IOV
  14. KVM guest timing management
IV. Administration
  15. Server best practices
  16. Security for virtualization
    16.1. Storage security issues
    16.2. SELinux and virtualization
    16.3. SELinux
    16.4. Virtualization firewall information
  17. sVirt
    17.1. Security and Virtualization
    17.2. sVirt labeling
  18. Remote management of virtualized guests
    18.1. Remote management with SSH
    18.2. Remote management over TLS and SSL
    18.3. Transport modes
  19. Overcommitting with KVM
  20. KSM
  21. Hugepage support
  22. Migrating to KVM from other hypervisors using virt-v2v
    22.1. Preparing to convert a virtualized guest
    22.2. Converting virtualized guests
      22.2.1. virt-v2v
      22.2.2. Converting a local Xen virtualized guest
      22.2.3. Converting a remote Xen virtualized guest
      22.2.4. Converting a VMware ESX virtualized guest
      22.2.5. Converting a virtualized guest running Windows
    22.3. Running converted virtualized guests
    22.4. Configuration changes
      22.4.1. Configuration changes for Linux virtualized guests
      22.4.2. Configuration changes for Windows virtualized guests
  23. Miscellaneous administration tasks
    23.1. Automatically starting guests
    23.2. Using qemu-img
    23.3. Verifying virtualization extensions
    23.4. Setting KVM processor affinities
    23.5. Generating a new unique MAC address
    23.6. Improving guest response time
    23.7. Very Secure ftpd
    23.8. Disable SMART disk monitoring for guests
    23.9. Configuring a VNC Server
    23.10. Gracefully shutting down guests
    23.11. Virtual machine timer management with libvirt
V. Virtualization storage topics
  24. Storage concepts
    24.1. Storage pools
    24.2. Volumes
  25. Storage pools
    25.1. Creating storage pools
      25.1.1. Dedicated storage device-based storage pools
      25.1.2. Partition-based storage pools
      25.1.3. Directory-based storage pools
      25.1.4. LVM-based storage pools
      25.1.5. iSCSI-based storage pools
      25.1.6. NFS-based storage pools
  26. Volumes
    26.1. Creating volumes
    26.2. Cloning volumes
    26.3. Adding storage devices to guests
      26.3.1. Adding file based storage to a guest
      26.3.2. Adding hard drives and other block devices to a guest
    26.4. Deleting and removing volumes
  27. Miscellaneous storage topics
    27.1. Creating a virtualized floppy disk controller
    27.2. Configuring persistent storage in Red Hat Enterprise Linux 6
    27.3. Accessing data from a guest disk image
  28. N_Port ID Virtualization (NPIV)
    28.1. Enabling NPIV on the switch
      28.1.1. Identifying HBAs in a Host System
      28.1.2. Verify NPIV is used on the HBA
VI. Host virtualization tools
  29. Managing guests with virsh
  30. Managing guests with the Virtual Machine Manager (virt-manager)
    30.1. Starting virt-manager
    30.2. The Virtual Machine Manager main window
    30.3. The virtual hardware details window
    30.4. Virtual Machine graphical console
    30.5. Adding a remote connection
    30.6. Restoring a saved machine
    30.7. Displaying guest details
    30.8. Performance monitoring
    30.9. Displaying guest identifiers
    30.10. Displaying a guest's status
    30.11. Displaying CPU usage
    30.12. Displaying Disk I/O
    30.13. Displaying Network I/O
    30.14. Displaying memory usage
  31. Guest disk access with offline tools
    31.1. Introduction
    31.2. Terminology
    31.3. Installation
    31.4. The guestfish shell
      31.4.1. Viewing file systems with guestfish
      31.4.2. Modifying files with guestfish
      31.4.3. Other actions with guestfish
      31.4.4. Shell scripting with guestfish
      31.4.5. Augeas and libguestfs scripting
    31.5. Other commands
    31.6. virt-rescue: The rescue shell
      31.6.1. Introduction
      31.6.2. Running virt-rescue
    31.7. virt-df: Monitoring disk usage
      31.7.1. Introduction
      31.7.2. Running virt-df
    31.8. virt-resize: resizing guests offline
      31.8.1. Introduction
      31.8.2. Expanding a disk image
    31.9. virt-inspector: inspecting guests
      31.9.1. Introduction
      31.9.2. Installation
      31.9.3. Running virt-inspector
    31.10. virt-win-reg: Reading and editing the Windows Registry
      31.10.1. Introduction
      31.10.2. Installation
      31.10.3. Using virt-win-reg
    31.11. Using the API from Programming Languages
      31.11.1. Interaction with the API via a C program
    31.12. Troubleshooting
    31.13. Where to find further documentation
  32. Virtual Networking
    32.1. Virtual network switches
      32.1.1. Network Address Translation
    32.2. DNS and DHCP
    32.3. Other virtual network switch routing types
    32.4. The default configuration
    32.5. Examples of common scenarios
      32.5.1. Routed mode
      32.5.2. NAT mode
      32.5.3. Isolated mode
    32.6. Managing a virtual network
    32.7. Creating a virtual network
  33. libvirt configuration reference
  34. Creating custom libvirt scripts
    34.1. Using XML configuration files with virsh
VII. Troubleshooting
  35. Troubleshooting
    35.1. Debugging and troubleshooting tools
    35.2. kvm_stat
    35.3. Log files
    35.4. Troubleshooting with serial consoles
    35.5. Virtualization log files
    35.6. Loop device errors
    35.7. Enabling Intel VT and AMD-V virtualization hardware extensions in BIOS
    35.8. KVM networking performance
A. Additional resources
  A.1. Online resources
  A.2. Installed documentation
B. Revision History
C. Colophon
Preface
Welcome to the Red Hat Enterprise Linux 6 Virtualization Guide. This guide covers all aspects of using and managing virtualization products included with Red Hat Enterprise Linux 6. This book is divided into 7 parts:
- System Requirements
- Installation
- Configuration
- Administration
- Reference
- Troubleshooting
- Appendixes
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information. In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
https://fedorahosted.org/liberation-fonts/
The first paragraph highlights the particular keycap to press. The second highlights two key combinations (each a set of three keycaps with each set pressed simultaneously).

If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:

File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.

Proportional Bold

This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:

Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.

The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.

Mono-spaced Bold Italic or Proportional Bold Italic

Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:

To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.

Note the words in bold italics above: username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.

Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:

Publican is a DocBook publishing system.
Pull-quote Conventions
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
Chapter 1. Introduction
This chapter introduces various virtualization technologies, applications and features, and explains how they work.
Overcommitting
The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the available physical resources on the system. Memory overcommitting allows hosts to utilize memory and virtual memory to increase guest densities.
Important
A single guest cannot use more CPUs or memory than are physically available. Overcommitting does, however, support the operation of multiple guests that have a total CPU and/or memory requirement greater than the physical host. Overcommitting involves possible risks to system stability. For more information on overcommitting with KVM, and the precautions that should be taken, refer to Chapter 19, Overcommitting with KVM.
KSM
Kernel SamePage Merging (KSM) is used by the KVM hypervisor to allow KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. For more information on KSM, refer to Chapter 20, KSM.
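On a running host, a quick way to see whether KSM is active is to query the kernel interface and the KSM services; this is an illustrative check rather than a step from this chapter:

# cat /sys/kernel/mm/ksm/run
# service ksm status
# service ksmtuned status

A value of 1 from /sys/kernel/mm/ksm/run indicates that page merging is enabled.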
virsh
The virsh command-line tool is built on the libvirt management API and operates as an alternative to the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, provides full administration functionality. The virsh command is ideal for scripting virtualization administration.

The virsh tool is included in the libvirt-client package.
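For example, a few typical virsh operations look like the following; the guest name is a placeholder:

# virsh list --all
# virsh dominfo GuestName
# virsh start GuestName

The listing and information commands are typical of read-only use, while starting a guest requires administrative access.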
virt-manager
virt-manager is a graphical desktop tool for managing virtualized guests. It can be used to perform virtualization administration, virtualized guest creation, migration and configuration tasks and allows access to graphical guest consoles. The ability to view virtualized guests, host statistics, device information and performance graphs is also provided. The local hypervisor and remote hypervisors can be managed through a single interface.
RHEV-M
Red Hat Enterprise Virtualization Manager (RHEV-M) provides a graphical user interface to administer the physical and logical resources within the virtual environment infrastructure. It can be used to manage provisioning, connection protocols, user sessions, virtual machine pools, images and high availability/clustering as an alternative to the virsh and virt-manager tools. It runs on Windows Server 2008 R2 in clustered mode, with active-standby configuration. For more information on RHEV-M, refer to the Red Hat Enterprise Virtualization documentation at http://docs.redhat.com.
The emulated floppy disk drive driver
The emulated floppy disk drive driver is used for creating virtualized floppy disk drives.
SR-IOV
SR-IOV (Single Root I/O Virtualization) is a PCI Express standard that extends a single physical PCI function to share its PCI resources as separate, virtual functions (VFs). Each of these functions is capable of being used by a different guest via PCI device assignment. An SR-IOV capable PCI-e device provides a Single Root Function (for example, a single Ethernet port), and presents multiple, separate virtual devices as separate, unique PCI device functions, each with its own unique PCI configuration space, memory-mapped registers and separate (MSI-based) interrupts. For more information on SR-IOV, refer to Chapter 13, SR-IOV.
NPIV
N_Port ID Virtualization (NPIV) is a function available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides functionality for Fibre Channel Host Bus Adapters (HBAs) similar to what SR-IOV provides for PCIe interfaces. With NPIV, virtualized guests can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs). NPIV can provide high density virtualized environments with enterprise-level storage solutions. For more information on NPIV, refer to Chapter 28, N_Port ID Virtualization (NPIV).
CPU name: cpu64-rhel6
Description: Red Hat Enterprise Linux 6 supported QEMU Virtual CPU version (cpu64-rhel6)
1.5. Storage
Storage for virtualized guests is abstracted from the physical storage used by the guest. Storage is attached to virtualized guests using the para-virtualized (Section 1.4.2, Para-virtualized drivers) or emulated block device drivers (Emulated storage drivers).
Storage Volumes
Storage pools are further divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware. For more information on storage and virtualization refer to Part V, Virtualization storage topics.
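As an aside, libvirt-managed pools and volumes can be inspected from the command line; a brief sketch using the default pool name:

# virsh pool-list --all
# virsh vol-list default

Part V, Virtualization storage topics covers creating and managing pools and volumes in detail.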
sVirt
sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtualized guests. sVirt improves security and hardens the system against bugs in the hypervisor that might be used as an attack vector against the host or another virtualized guest. For more information on sVirt, refer to Chapter 17, sVirt.
1.7. Migration
Migration is the term for the process of moving a virtualized guest from one host to another. Migration is a key feature of virtualization as software is completely separated from hardware. Migration is useful for:
- Load balancing - guests can be moved to hosts with lower usage when a host becomes overloaded.
- Hardware failover - when hardware devices on the host start to fail, guests can be safely relocated so the host can be powered down and repaired.
- Energy saving - guests can be redistributed to other hosts and host systems powered off to save energy and cut costs in low usage periods.
- Geographic migration - guests can be moved to another location for lower latency or in serious circumstances.

Migration only moves the virtualized guest's memory. The guest's storage is located on networked storage which is shared between the source host and destination hosts. Shared, networked storage must be used for storing guest images. Without shared storage, migration is not possible. It is recommended to use libvirt managed storage pools for shared storage.
Offline migration
An offline migration suspends the guest, and then moves an image of the guest's memory to the destination host. The guest is then resumed on the destination host and the memory used by the guest on the source host is freed.
Live migration
Live migration is the process of migrating a running guest from one physical host to another physical host.
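As a sketch of what a migration looks like in practice (the guest and destination host names are placeholders, and shared storage is assumed as described above):

# virsh migrate --live GuestName qemu+ssh://destination.example.com/system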
Part I. Requirements and restrictions
System requirements and restrictions for virtualization with Red Hat Enterprise Linux 6
These chapters outline the system requirements and restrictions for virtualization on Red Hat Enterprise Linux 6.
Chapter 2. System requirements
This chapter lists system requirements for successfully running virtualized guest operating systems with Red Hat Enterprise Linux 6. Virtualization is available for Red Hat Enterprise Linux 6 on the Intel 64 and AMD64 architectures. The KVM hypervisor is provided with Red Hat Enterprise Linux 6. For information on installing the virtualization packages, read Chapter 5, Installing the virtualization packages.

Minimum system requirements:
- 6GB free disk space
- 2GB of RAM

Recommended system requirements:
- 6GB plus the required disk space recommended by the guest operating system per guest. For most operating systems more than 6GB of disk space is recommended.
- One processor core or hyper-thread for each virtualized CPU and one for the host.
- 2GB of RAM plus additional RAM for virtualized guests.
KVM overcommit
KVM can overcommit physical resources for virtualized guests. Overcommitting resources means the total virtualized RAM and processor cores used by the guests can exceed the physical RAM and processor cores on the host. For information on safely overcommitting resources with KVM refer to Chapter 19, Overcommitting with KVM.
KVM requirements
The KVM hypervisor requires: an Intel processor with the Intel VT and the Intel 64 extensions, or an AMD processor with the AMD-V and the AMD64 extensions. Refer to Section 23.3, Verifying virtualization extensions to determine if your processor has the virtualization extensions.
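As a quick illustration of that check, the following commands report whether the CPU flags are present and whether the KVM modules are loaded; a non-zero count from the first command indicates the extensions are available (they may still need to be enabled in the BIOS):

# egrep -c '(vmx|svm)' /proc/cpuinfo
# lsmod | grep kvm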
Storage support
The working guest storage methods are:
- files on local storage
- physical disk partitions
- locally connected physical LUNs
- LVM partitions
- NFS shared file systems
- iSCSI
- GFS2 clustered file systems
- Fibre Channel-based LUNs
- SRP devices (SCSI RDMA Protocol), the block export protocol used in InfiniBand and 10GbE iWARP adapters
Chapter 3. KVM Guest VM compatibility
Chapter 4. Virtualization restrictions
This chapter covers additional support and product restrictions of the virtualization packages in Red Hat Enterprise Linux 6.
Storage restrictions
Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Virtualized guests with access to block devices may be able to access other block devices on the system or modify volume labels, which can be used to compromise the host system. Use partitions (for example, /dev/sdb1) or LVM volumes to prevent this issue.

SR-IOV restrictions
SR-IOV is only thoroughly tested with the following devices (other SR-IOV devices may work but have not been tested at the time of release):
- Intel 82576NS Gigabit Ethernet Controller (igb driver)
- Intel 82576EB Gigabit Ethernet Controller (igb driver)
- Neterion X3100 Series 10GbE PCIe (vxge driver)
- Intel 82599ES 10 Gigabit Ethernet Controller (ixgbe driver)
- Intel 82599EB 10 Gigabit Ethernet Controller (ixgbe driver)

PCI device assignment restrictions
PCI device assignment (attaching PCI devices to guests) requires host systems to have AMD IOMMU or Intel VT-d support to enable device assignment of PCI-e devices. For parallel/legacy PCI, only single devices behind a PCI bridge are supported. Multiple PCIe endpoints connected through a non-root PCIe switch require ACS support in the PCIe bridges of the PCIe switch. This restriction can be disabled in /etc/libvirt/qemu.conf by setting relaxed_acs_check=1. Red Hat Enterprise Linux 6 has limited PCI configuration space access by guest device drivers. This limitation could cause drivers that are dependent on PCI configuration space to fail configuration.
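Relaxing the ACS check is a single setting in /etc/libvirt/qemu.conf; a minimal sketch of the relevant line is shown below, noting that libvirtd must be restarted afterwards for the change to take effect:

relaxed_acs_check = 1

# service libvirtd restart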
Other restrictions
Chapter 5. Installing the virtualization packages
5.1. Installing KVM with a new Red Hat Enterprise Linux installation
This section covers installing the virtualization tools and KVM packages as part of a fresh Red Hat Enterprise Linux installation.
1. Start an interactive Red Hat Enterprise Linux installation from the Red Hat Enterprise Linux Installation CD-ROM, DVD or PXE.
2. You must enter a valid installation number when prompted to receive access to the virtualization and other Advanced Platform packages.
3. Complete the other steps up to the package selection step.
   Select the Virtual Host server role to install a platform for virtualized guests. Alternatively, select the Customize Now radio button to specify individual packages.
4. Select the Virtualization package group. This selects the KVM hypervisor, virt-manager, libvirt and virt-viewer for installation.
5. Customize the packages (if required)
   Customize the Virtualization group if you require other virtualization packages.
6. Press the Close button, then the Next button, to continue the installation.
Note
You require a valid RHN virtualization entitlement to receive updates for the virtualization packages.
More information on Kickstart files can be found on Red Hat's website, redhat.com, in the Installation Guide.
http://www.redhat.com/docs/manuals/enterprise/
5.2. Installing KVM packages on an existing Red Hat Enterprise Linux system
This section describes the steps for installing the KVM hypervisor on a working Red Hat Enterprise Linux 6 or newer system.
Your system is now entitled to receive the virtualization packages. The next section covers installing these packages.
Now, install additional virtualization management packages.

Recommended virtualization packages:

python-virtinst
   Provides the virt-install command for creating virtual machines.

libvirt
   The libvirt package provides the server and host-side libraries for interacting with hypervisors and host systems. The libvirt package provides the libvirtd daemon that handles the library calls, manages virtualized guests and controls the hypervisor.
libvirt-python
   The libvirt-python package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API.

virt-manager
   virt-manager, also known as Virtual Machine Manager, provides a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API.

libvirt-client
   The libvirt-client package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command line tool to manage and control virtualized guests and hypervisors from the command line or a special virtualization shell.

Install the other recommended virtualization packages:
# yum install virt-manager libvirt libvirt-python python-virtinst libvirt-client
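After installation it is worth confirming that the libvirtd daemon is running and that virsh can reach the local hypervisor; a brief, illustrative check:

# service libvirtd start
# chkconfig libvirtd on
# virsh -c qemu:///system version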
Chapter 6. Virtualized guest installation overview
The virt-install man page also documents each command option and important variables. qemu-img is a related command which may be used before virt-install to configure storage options. An important option is the --vnc option, which opens a graphical window for the guest's installation.

Example 6.1. Using virt-install to install a RHEL 5 guest
This example creates a RHEL 5 guest:
virt-install \
   --name=guest1-rhel5-64 \
   --disk path=/var/lib/libvirt/images/guest1-rhel5-64,size=8 \
   --nonsparse --vnc \
   --vcpus=2 --ram=2048 \
   --location=http://example1.com/installation_tree/RHEL5.6-Server-x86_64/os \
   --network bridge=br0 \
   --os-type=linux \
   --os-variant=rhel5.4
Note
When installing a Windows guest with virt-install, the --os-type=windows option is recommended. This option prevents the CD-ROM from disconnecting when rebooting during the installation procedure. The --os-variant option further optimizes the configuration for a specific guest operating system. Refer to man virt-install for more examples.
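For illustration only, a fully virtualized Windows installation started with virt-install might look like the sketch below; the guest name, disk path, sizes and ISO location are hypothetical values, not taken from this guide:

virt-install \
   --name=guest1-win2k8 \
   --ram=2048 --vcpus=2 \
   --disk path=/var/lib/libvirt/images/guest1-win2k8.img,size=16 \
   --cdrom=/var/lib/libvirt/images/win2k8-install.iso \
   --vnc \
   --os-type=windows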
Figure 6.1. Virtual Machine Manager window

4. New VM wizard
   The New VM wizard breaks down the guest creation process into five steps:
   1. Naming the guest and choosing the installation type
   2. Locating and configuring the installation media
   3. Configuring memory and CPU options
   4. Configuring the guest's storage
   5. Configuring networking, hypervisor type, architecture, and other hardware settings
   Ensure that virt-manager can access the installation media (whether locally or over the network).

5. Specify name and installation type
   The guest creation process starts with the selection of a name and installation type. Virtual machine names can have underscores (_), periods (.), and hyphens (-).
Figure 6.2. Step 1

Type in a virtual machine name and choose an installation type:

Local install media (ISO image or CDROM)
   This method uses a CD-ROM, DVD, or image of an installation disk (e.g. .iso).

Network Install (HTTP, FTP, or NFS)
   Network installing involves the use of a mirrored Red Hat Enterprise Linux or Fedora installation tree to install a guest. The installation tree must be accessible through either HTTP, FTP, or NFS.

Network Boot (PXE)
   This method uses a Preboot eXecution Environment (PXE) server to install the guest. Setting up a PXE server is covered in the Deployment Guide. To install via network boot, the guest must have a routable IP address or shared network device. For information on the required networking configuration for PXE installation, refer to Chapter 10, Network Configuration.

Import existing disk image
   This method allows you to create a new guest and import a disk image (containing a preinstalled, bootable operating system) to it.

Click Forward to continue.
6. Configure installation
   Next, configure the OS type and Version of the installation. Depending on the method of installation, provide the install URL or existing storage path.
Figure 6.4. Import existing disk image (configuration)

7. Configure CPU and memory
   The next step involves configuring the number of CPUs and amount of memory to allocate to the virtual machine. The wizard shows the number of CPUs and amount of memory you can allocate; configure these settings and click Forward.
Figure 6.5. Configuring CPU and Memory

8. Configure storage
   Assign storage to the guest.
Figure 6.6. Configuring virtual storage

   If you chose to import an existing disk image during the first step, virt-manager will skip this step. Assign sufficient space for your virtualized guest and any applications the guest requires, then click Forward to continue.

9. Final configuration
   Verify the settings of the virtual machine and click Finish when you are satisfied; doing so will create the guest with default networking settings, virtualization type, and architecture.
Figure 6.7. Verifying the configuration

If you prefer to further configure the virtual machine's hardware first, check the Customize configuration before install box before clicking Finish. Doing so will open another wizard (Figure 6.8, Virtual hardware configuration) that will allow you to add, remove, and configure the virtual machine's hardware settings.
Figure 6.8. Virtual hardware configuration

After configuring the virtual machine's hardware, click Apply. virt-manager will then create the guest with your specified hardware settings.

This concludes the general process for creating guests with virt-manager. Chapter 6, Virtualized guest installation overview contains step-by-step instructions for installing a variety of common operating systems.
Warning
The line, TYPE=Bridge, is case-sensitive. It must have uppercase 'B' and lower case 'ridge'.
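The bridge's own configuration file is not reproduced in this excerpt; as a sketch only, assuming the bridge is named installation and obtains its address with DHCP, /etc/sysconfig/network-scripts/ifcfg-installation would contain lines such as:

DEVICE=installation
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes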
b. Start the new bridge by restarting the network service. The ifup installation command can start the individual bridge, but it is safer to verify that the entire network restarts properly.
# service network restart
c. There are no interfaces added to the new bridge yet. Use the brctl show command to view details about network bridges on the system.
# brctl show
bridge name     interfaces
installation
virbr0
   The virbr0 bridge is the default bridge used by libvirt for Network Address Translation (NAT) on the default Ethernet device.

2. Add an interface to the new bridge
   Edit the configuration file for the interface. Add the BRIDGE parameter to the configuration file with the name of the bridge created in the previous steps.
# Intel Corporation Gigabit Network Connection
DEVICE=eth1
BRIDGE=installation
BOOTPROTO=dhcp
HWADDR=00:13:20:F7:6E:8E
ONBOOT=yes
3. Security configuration
   Configure iptables to allow all traffic to be forwarded across the bridge.
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
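A related, optional step (not shown in the excerpt above) is to keep bridged traffic out of iptables processing entirely. Assuming the standard bridge netfilter sysctls are available on the host, append the following to /etc/sysctl.conf and reload it:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# sysctl -p /etc/sysctl.conf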
A DHCP request is sent and, if a valid PXE server is found, the guest installation process will start.
Chapter 7. Installing Red Hat Enterprise Linux 6 as a fully virtualized guest on Red Hat Enterprise Linux 6
This chapter covers how to install Red Hat Enterprise Linux 6 as a fully virtualized guest on Red Hat Enterprise Linux 6. This procedure assumes that the KVM hypervisor and all other required packages are installed and the host is configured for virtualization. For more information on installing the virtualization packages, refer to Chapter 5, Installing the virtualization packages.
7.1. Creating a Red Hat Enterprise Linux 6 guest with local installation media
This procedure covers creating a virtualized Red Hat Enterprise Linux 6 guest with a locally stored installation DVD or DVD image. DVD images are available from rhn.redhat.com for Red Hat Enterprise Linux 6.

Procedure 7.1. Creating a Red Hat Enterprise Linux 6 guest with virt-manager
1. Optional: Preparation
   Prepare the storage environment for the virtualized guest. For more information on preparing storage, refer to Part V, Virtualization storage topics.
Note
Various storage types may be used for storing virtualized guests. However, for a guest to be able to use migration features the guest must be created on networked storage.
   Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide.

2. Open virt-manager and start the wizard
   Open virt-manager by executing the virt-manager command as root or opening Applications -> System Tools -> Virtual Machine Manager.
Figure 7.1. The main virt-manager window

   Press the create new virtualized guest button (see Figure 7.2, The create new virtualized guest button) to start the new virtualized guest wizard.
Figure 7.2. The create new virtualized guest button

   The Create a new virtual machine window opens.

3. Name the virtualized guest
   Guest names can contain letters, numbers and the following characters: '_', '.' and '-'. Guest names must be unique for migration.
   Choose the Local install media (ISO image or CDROM) radio button.
Figure 7.3. The Create a new virtual machine window - Step 1

   Press Forward to continue.

4. Select the installation media
   Select the installation ISO image location or a DVD drive with the installation disc inside. This example uses an ISO image of the Red Hat Enterprise Linux 6.0 installation DVD.
Select the operating system type and version which match the installation media you have selected.
Figure 7.5. The Create a new virtual machine window - Step 2

   Press Forward to continue.

5. Set RAM and virtual CPUs
   Choose appropriate values for the virtualized CPUs and RAM allocation. These values affect the host's and guest's performance. Memory and virtualized CPUs can be overcommitted; for more information on overcommitting refer to Chapter 19, Overcommitting with KVM.
   Virtualized guests require sufficient physical memory (RAM) to run efficiently and effectively. Red Hat supports a minimum of 512MB of RAM for a virtualized guest. Red Hat recommends at least 1024MB of RAM for each logical core.
   Assign sufficient virtual CPUs for the virtualized guest. If the guest runs a multithreaded application, assign the number of virtualized CPUs the guest will require to run efficiently. You cannot assign more virtual CPUs than there are physical processors (or hyper-threads) available on the host system. The number of virtual CPUs available is noted in the Up to X available field.
Figure 7.6. The Create a new virtual machine window - Step 3

   Press Forward to continue.

6. Storage
   Enable and assign storage for the Red Hat Enterprise Linux 6 guest. Assign at least 5GB for a desktop installation or at least 1GB for a minimal installation.
Migration
Live and offline migrations require guests to be installed on shared network storage. For information on setting up shared storage for guests refer to Part V, Virtualization storage topics.
   a. With the default local storage
      Select the Create a disk image on the computer's hard drive radio button to create a file-based image in the default storage pool, the /var/lib/libvirt/images/ directory. Enter the size of the disk image to be created. If the Allocate entire disk now check box is selected, a disk image of the size specified will be created immediately. If not, the disk image will grow as it becomes filled.
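The same choice between a sparse and a preallocated image can be made ahead of time with qemu-img; the paths and sizes here are illustrative only:

# qemu-img create -f raw /var/lib/libvirt/images/guest1.img 8G
# qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/guest1.qcow2 8G

The first command creates a sparse raw image that grows as the guest writes data; the second creates a qcow2 image with metadata preallocation.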
Figure 7.7. The Create a new virtual machine window - Step 4

   b. With a storage pool
      Select Select managed or other existing storage to use a storage pool.
Figure 7.8. The Locate or create storage volume window

      i.   Press the browse button to open the storage pool browser.
      ii.  Select a storage pool from the Storage Pools list.
      iii. Optional: Press the New Volume button to create a new storage volume. Enter the name of the new storage volume.
      iv.  Press the Choose Volume button to select the volume for the virtualized guest.
Figure 7.9. The Create a new virtual machine window - Step 4

   Press Forward to continue.

7. Verify and finish
   Verify there were no errors made during the wizard and everything appears as expected.
   Select the Customize configuration before install check box to change the guest's storage or network devices, to use the para-virtualized drivers, or to add additional devices.
   Press the Advanced options down arrow to inspect and modify advanced options. For a standard Red Hat Enterprise Linux 6 installation, none of these options require modification.
Figure 7.10. The Create a new virtual machine window - Step 5

   Press Finish to continue into the Red Hat Enterprise Linux installation sequence. For more information on installing Red Hat Enterprise Linux 6 refer to the Red Hat Enterprise Linux 6 Installation Guide.

A Red Hat Enterprise Linux 6 guest is now created from an ISO installation disc image.
7.2. Creating a Red Hat Enterprise Linux 6 guest with a network installation tree
Procedure 7.2. Creating a Red Hat Enterprise Linux 6 guest with virt-manager
1. Optional: Preparation
   Prepare the storage environment for the virtualized guest. For more information on preparing storage, refer to Part V, Virtualization storage topics.
Note
Various storage types may be used for storing virtualized guests. However, for a guest to be able to use migration features the guest must be created on networked storage.
   Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide.

2. Open virt-manager and start the wizard
   Open virt-manager by executing the virt-manager command as root or opening Applications -> System Tools -> Virtual Machine Manager.
Figure 7.11. The main virt-manager window

   Press the create new virtualized guest button (see Figure 7.12, The create new virtualized guest button) to start the new virtualized guest wizard.
Figure 7.12. The create new virtualized guest button

   The Create a new virtual machine window opens.

3. Name the virtualized guest
   Guest names can contain letters, numbers and the following characters: '_', '.' and '-'. Guest names must be unique for migration.
   Choose the installation method from the list of radio buttons.
Figure 7.13. The Create a new virtual machine window - Step 1

   Press Forward to continue.

7.3. Creating a Red Hat Enterprise Linux 6 guest with PXE
Note
Various storage types may be used for storing virtualized guests. However, for a guest to be able to use migration features the guest must be created on networked storage.
   Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide.
2. Open virt-manager and start the wizard
   Open virt-manager by executing the virt-manager command as root or opening Applications -> System Tools -> Virtual Machine Manager.
Figure 7.14. The main virt-manager window

   Press the create new virtualized guest button (see Figure 7.15, The create new virtualized guest button) to start the new virtualized guest wizard.
Figure 7.15. The create new virtualized guest button

   The Create a new virtual machine window opens.

3. Name the virtualized guest
   Guest names can contain letters, numbers and the following characters: '_', '.' and '-'. Guest names must be unique for migration.
   Choose the installation method from the list of radio buttons.
Figure 7.16. The Create a new virtual machine window - Step 1

   Press Forward to continue.
Chapter 8. Installing Red Hat Enterprise Linux 6 as a Xen para-virtualized guest on Red Hat Enterprise Linux 5
This section describes how to install Red Hat Enterprise Linux 6 as a Xen para-virtualized guest on Red Hat Enterprise Linux 5. Para-virtualization is only available for Red Hat Enterprise Linux 5 hosts. Red Hat Enterprise Linux 6 uses the PV-opts features of the Linux kernel to appear as a compatible Xen para-virtualized guest.
Red Hat Enterprise Linux can be installed without a graphical interface or manual input. Use a Kickstart file to automate the installation process. This example extends the previous example with a Kickstart file, located at http://example.com/kickstart/ks.cfg, to fully automate the installation.
# virt-install --name rhel6pv-64 \
   --disk /var/lib/xen/images/rhel6pv-64.img,size=6 \
   --nonsparse --nographics --paravirt --vcpus=2 --ram=2048 \
   --location=http://example.com/installation_tree/RHEL6-x86/ \
   -x "ks=http://example.com/kickstart/ks.cfg"
The graphical console opens showing the initial boot phase of the guest:
After your guest has completed its initial boot, the standard installation process for Red Hat Enterprise Linux 6 starts. Refer to the Red Hat Enterprise Linux 6 Installation Guide for more information on installing Red Hat Enterprise Linux 6.
Press Forward to continue.
4. Name the virtual machine
Provide a name for your virtualized guest. Guest names can contain letters, numbers and the following characters: '_', '.' and '-'.
Press Forward to continue.
5. Select the installation method
Red Hat Enterprise Linux can be installed using one of the following methods:
Local install media, either an ISO image or physical optical media.
Select Network install tree if you have the installation tree for Red Hat Enterprise Linux hosted somewhere on your network via HTTP, FTP or NFS.
PXE can be used if you have a PXE server configured for booting Red Hat Enterprise Linux installation media. Configuring a server to PXE boot a Red Hat Enterprise Linux installation is not covered by this guide. However, most of the installation steps are the same after the media boots.
Set OS Type to Linux and OS Variant to Red Hat Enterprise Linux 6 as shown in the screenshot.
Press Forward to continue. 6. Locate installation media Enter the location of the installation tree.
7. Storage setup
Assign a physical storage device (Block device) or a file-based image (File). Assign sufficient space for your virtualized guest and any applications the guest requires.
Migration
Live and offline migrations require guests to be installed on shared network storage. For information on setting up shared storage for guests refer to Part V, Virtualization storage topics.
8. Network setup
Select either Virtual network or Shared physical device. The virtual network option uses Network Address Translation (NAT) to share the default network device with the virtualized guest. The shared physical device option uses a network bridge to give the virtualized guest full access to a network device.
Press Forward to continue.
9. Memory and CPU allocation
The Memory and CPU Allocation window displays. Choose appropriate values for the virtualized CPUs and RAM allocation. These values affect the host's and guest's performance.
Virtualized guests require sufficient physical memory (RAM) to run efficiently and effectively. Choose a memory value which suits your guest operating system and application requirements. Remember, Xen guests use physical RAM. Running too many guests or leaving insufficient memory for the host system results in significant usage of virtual memory and swapping. Virtual memory is significantly slower, which causes degraded system performance and responsiveness. Ensure you allocate sufficient memory for all guests and the host to operate effectively.
Assign sufficient virtual CPUs for the virtualized guest. If the guest runs a multithreaded application, assign the number of virtualized CPUs the guest will require to run efficiently. Do not assign more virtual CPUs than there are physical processors (or hyper-threads) available on the host system. It is possible to over-allocate virtual processors; however, over-allocating VCPUs has a significant, negative effect on Xen guest and host performance.
Press Forward to continue. 10. Verify and start guest installation Verify the configuration.
Press Finish to start the guest installation procedure.
11. Installing Red Hat Enterprise Linux
Complete the Red Hat Enterprise Linux installation sequence. The installation sequence is covered by the Red Hat Enterprise Linux 6 Installation Guide. Refer to Red Hat Documentation for the Red Hat Enterprise Linux 6 Installation Guide.
Chapter 9.
Important
Before creating the guest, consider first if the guest needs to use KVM Windows para-virtualized drivers. If it does, keep in mind that you can do so during or after installing the Windows operating system on the guest. For more information about para-virtualized drivers, refer to Chapter 11, KVM Para-virtualized Drivers. For instructions on how to install KVM para-virtualized drivers, refer to Section 11.2, Installing the KVM Windows para-virtualized drivers.
It is possible to create a fully-virtualized guest with only a single command. To do so, simply run the following program (replace the values accordingly):
# virt-install \
   --name=guest-name \
   --network network=default \
   --disk path=path-to-disk,size=disk-size \
   --cdrom=path-to-install-disk \
   --vnc --ram=1024
The path-to-disk must be a device (e.g. /dev/sda3) or image file (/var/lib/libvirt/images/name.img). It must also have enough free space to support the disk-size.
Important
All image files should be stored in /var/lib/libvirt/images/. Other directory locations for file-based images are prohibited by SELinux. If you run SELinux in enforcing mode, refer to Section 16.2, SELinux and virtualization for more information on installing guests.
You can also run virt-install interactively. To do so, use the --prompt option, as in:
# virt-install --prompt
Once the fully-virtualized guest is created, virt-viewer will launch the guest and run the operating system's installer. Refer to the relevant Microsoft installation documentation for instructions on how to install the operating system.
Chapter 10.
Network Configuration
This chapter provides an introduction to the common networking configurations used by libvirt-based applications. For additional information consult the libvirt network architecture documentation: http://libvirt.org/intro.html.
Red Hat Enterprise Linux 6 supports the following networking setups for virtualization:
virtual networks using Network Address Translation (NAT)
directly allocated physical devices using PCI device assignment or SR-IOV
bridged networks
You must enable NAT, network bridging or directly share a physical device to allow external hosts access to network services on virtualized guests.
Host configuration
Every standard libvirt installation provides NAT based connectivity to virtual machines out of the box. This is the so called 'default virtual network'. Verify that it is available with the virsh net-list --all command.
# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
If it is missing, the example XML configuration file can be reloaded and activated:
# virsh net-define /usr/share/libvirt/networks/default.xml
The default network is defined from /usr/share/libvirt/networks/default.xml.
Mark the default network to automatically start:
# virsh net-autostart default
Network default marked as autostarted
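If the network is defined but not yet active, it can also be started manually; this is standard virsh usage rather than a step reproduced from the original procedure:

# virsh net-start default
Network default started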
Once the libvirt default network is running, you will see an isolated bridge device. This device does not have any physical interfaces added. The new device uses NAT and IP forwarding to connect to the outside world. Do not add new interfaces.
# brctl show
bridge name     bridge id               interfaces
virbr0          8000.000000000000
libvirt adds iptables rules which allow traffic to and from guests attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains. libvirt then attempts to enable the ip_forward parameter. Some other applications may disable ip_forward, so the best option is to add the following to /etc/sysctl.conf.
net.ipv4.ip_forward = 1
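The setting can be applied immediately, without a reboot, with the sysctl command; this verification step is an illustration and is not part of the original text:

# sysctl -p /etc/sysctl.conf
net.ipv4.ip_forward = 1
# cat /proc/sys/net/ipv4/ip_forward
1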
Guest configuration
Once the host configuration is complete, a guest can be connected to the virtual network based on its name. To connect a guest to the 'default' virtual network, the following could be used in the XML configuration file (such as /etc/libvirt/qemu/myguest.xml) for the guest:
Note
Defining a MAC address is optional. A MAC address is automatically generated if omitted. Manually setting the MAC address may be useful to maintain consistency or easy reference throughout your environment, or to avoid the very small chance of a conflict.
<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
</interface>
Disable NetworkManager
NetworkManager does not support bridging. NetworkManager must be disabled to use networking with the network scripts (located in the /etc/sysconfig/network-scripts/ directory).
# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop
# service network start
Note
Instead of turning off NetworkManager, add "NM_CONTROLLED=no" to the ifcfg-* scripts used in the examples.
2. Modify a network interface to make a bridge
Edit the network script for the network device you are adding to the bridge. In this example, /etc/sysconfig/network-scripts/ifcfg-eth0 is used. This file defines eth0, the physical network interface which is set as part of a bridge:
DEVICE=eth0
# change the hardware address to match the hardware address your NIC uses
HWADDR=00:16:76:D6:C9:45
ONBOOT=yes
BRIDGE=br0
Tip
You can configure the device's Maximum Transfer Unit (MTU) by appending an MTU variable to the end of the configuration file.
MTU=9000
3. Create the bridge script
Create a new network script in the /etc/sysconfig/network-scripts directory called ifcfg-br0 or similar. br0 is the name of the bridge; this can be anything as long as the name of the file matches the DEVICE parameter and the bridge name used in step 2.
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
Warning
The line, TYPE=Bridge, is case-sensitive. It must have uppercase 'B' and lower case 'ridge'.
4.
5. Configure iptables
Configure iptables to allow all traffic to be forwarded across the bridge.
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
# service iptables restart
6. Restart the libvirt service
Restart the libvirt service with the service command.
# service libvirtd reload
7. Verify the bridge
Verify the new bridge is available with the bridge control command (brctl).
# brctl show
bridge name     interfaces
virbr0
br0             eth0
A "Shared physical device" is now available through virt-manager and libvirt, to which guests can be attached for full network access. Note, the bridge is completely independent of the virbr0 bridge. Do not attempt to attach a physical device to virbr0. The virbr0 bridge is only for Network Address Translation (NAT) connectivity.
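As an illustration, a new guest can be connected to this bridge at installation time with the virt-install --network bridge= option. The guest name, memory size and image paths below are hypothetical placeholders, not values from this guide:

# virt-install --name guest1 --ram 1024 \
   --disk path=/var/lib/libvirt/images/guest1.img,size=8 \
   --network bridge=br0 \
   --cdrom /var/lib/libvirt/images/rhel6-install.iso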
Chapter 11.
KVM Para-virtualized Drivers
Note
PCI devices are limited by the virtualized system architecture. Out of the 32 available PCI devices for a guest, 4 are not removable. This means there are up to 28 free PCI slots available for additional devices per guest. Each PCI device in a guest can have up to 8 functions.
The following Microsoft Windows versions are expected to function normally using KVM para-virtualized drivers:
Windows XP (32-bit only)
Windows Server 2003 (32-bit and 64-bit versions)
Windows Server 2008 (32-bit and 64-bit versions)
Windows 7 (32-bit and 64-bit versions)
11.1. Using the para-virtualized drivers with Red Hat Enterprise Linux 3.9 guests
Para-virtualized drivers for Red Hat Enterprise Linux 3.9 consist of five kernel modules: virtio, virtio_blk, virtio_net, virtio_pci and virtio_ring. All five modules must be loaded to use both the para-virtualized block and network device drivers.
Note
For Red Hat Enterprise Linux 3.9 guests, the kmod-virtio package is a requirement for the virtio module.
Note
To use the network device driver only, load the virtio, virtio_net and virtio_pci modules. To use the block device driver only, load the virtio, virtio_ring, virtio_blk and virtio_pci modules.
if [ -f /etc/rc.modules ]; then
   /etc/rc.modules
fi

modprobe virtio
modprobe virtio_ring   # Comment this out if you do not need block driver
modprobe virtio_blk    # Comment this out if you do not need block driver
modprobe virtio_net    # Comment this out if you do not need net driver
modprobe virtio_pci
Disable the GSO and TSO options with the following commands on the host:
# ethtool -K interface gso off # ethtool -K interface tso off
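To keep these settings across host reboots, one common approach (an assumption, not a step from this guide) is to append the same commands to /etc/rc.d/rc.local, replacing interface with the actual device name:

# cat >> /etc/rc.d/rc.local <<'EOF'
ethtool -K interface gso off
ethtool -K interface tso off
EOF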
The para-virtualized drivers use the /dev/vd* naming convention, not the /dev/hd* naming convention. To resolve this issue modify the incorrect swap entries in the /etc/fstab file to use the /dev/vd* convention, for example:
/dev/vda3    swap    swap    defaults    0 0
Save the changes and reboot the virtualized guest. The guest should now correctly have swap partitions.
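An alternative, not taken from this guide, is to reference the swap partition by UUID so the entry does not depend on the device naming convention. Query the UUID with blkid:

# blkid /dev/vda3

and use it in /etc/fstab, replacing the placeholder with the reported value:

UUID=<uuid-from-blkid>   swap   swap   defaults   0 0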
The drivers are also available from Microsoft (windowsservercatalog.com). Note that the Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise Linux are created on the same code base so the drivers for the same version (for example, Red Hat Enterprise Virtualization Hypervisor 2.2 and Red Hat Enterprise Linux 5.5) are supported for both environments.
The virtio-win package installs a CD-ROM image, virtio-win.iso, in the /usr/share/virtio-win/ directory.
2. Install the para-virtualized drivers
It is recommended to install the drivers on the guest before attaching or modifying a device to use the para-virtualized drivers.
For block devices storing root file systems or other block devices required for booting the guest, the drivers must be installed before the device is modified. If the drivers are not installed on the guest and the driver is set to the virtio driver the guest will not boot.
Installing the drivers on an installed Windows guest
Procedure 11.1. Installing from the driver CD-ROM image with virt-manager
1. Open virt-manager and the guest
Open virt-manager, select your virtualized guest from the list by double clicking the guest name.
2. Open the hardware window
Click the blue Information button at the top to view guest details. Then click the Add Hardware button at the bottom of the window.
3.
Select the device type This opens a wizard for adding the new device. Select Storage from the dropdown menu.
Click the Forward button to proceed.
4. Select the ISO file
Select Select managed or other existing storage and set the file location of the para-virtualized drivers .iso image file. The default location for the latest version of the drivers is /usr/share/virtio-win/virtio-win.iso.
Change the Device type to IDE cdrom and click the Forward button to proceed.
5. Finish adding virtual hardware
Press the Finish button to complete the wizard.
6. Reboot
Reboot or start the guest to begin using the driver disc. Virtualized IDE devices require a restart for the guest to recognize the new device.
Once the CD-ROM with the drivers is attached and the guest has started, proceed with Procedure 11.2, Windows installation. Procedure 11.2. Windows installation 1. Open My Computer On the Windows guest, open My Computer and select the CD-ROM drive.
2. Select the correct installation files
There are four files available on the disc. Select the drivers you require for your guest's architecture: the para-virtualized block device driver (RHEV-Block.msi for 32-bit guests or RHEV-Block64.msi for 64-bit guests), the para-virtualized network device driver (RHEV-Network.msi for 32-bit guests or RHEV-Network64.msi for 64-bit guests), or both the block and network device drivers.
Double click the installation files to install the drivers.
3. Install the block device driver
a. Start the block device driver installation
Double click RHEV-Block.msi or RHEV-Block64.msi.
Press Install to continue. b. Confirm the exception Windows may prompt for a security exception.
Press Finish to complete the installation. 4. Install the network device driver a. Start the network device driver installation Double click RHEV-Network.msi or RHEV-Network64.msi.
Press Next to continue.
b. Performance setting
This screen configures advanced TCP settings for the network driver. TCP timestamps and TCP window scaling can be enabled or disabled. The default is 1, for window scaling to be enabled.
TCP window scaling is covered by IETF RFC 1323. The RFC defines a method of increasing the receive window size to a size greater than the default maximum of 65,535 bytes up to a new maximum of 1 gigabyte (1,073,741,824 bytes). TCP window scaling allows networks to transfer at closer to theoretical network bandwidth limits. Larger receive windows may not be supported by some networking hardware or operating systems.
TCP timestamps are also defined by IETF RFC 1323. TCP timestamps are used to better calculate round trip time estimates by embedding timing information in packets. TCP timestamps help the system to adapt to changing traffic levels and avoid congestion issues on busy networks.
Value   Action
0       Disable TCP timestamps and window scaling.
1       Enable TCP window scaling.
2       Enable TCP timestamps.
3       Enable TCP timestamps and window scaling.
Press Next to continue. c. Confirm the exception Windows may prompt for a security exception.
Press Finish to complete the installation. 5. Reboot Reboot the guest to complete the driver installation.
Change an existing device to use the para-virtualized drivers (Section 11.3, Using KVM paravirtualized drivers for existing devices) or install a new device using the para-virtualized drivers (Section 11.4, Using KVM para-virtualized drivers for new devices).
Creating guests
Create the guest, as normal, without starting the guest. Follow one of the procedures below.
2. Creating the guest with virsh
This method attaches the para-virtualized driver floppy disk to a Windows guest before the installation.
If the guest is created from an XML definition file with virsh use the virsh define command not the virsh create command.
a. Create, but do not start, the guest. Refer to Chapter 29, Managing guests with virsh for details on creating guests with the virsh command.
b. Add the driver disk as a virtualized floppy disk with the virsh command. This example can be copied and used if there are no other virtualized floppy devices attached to the virtualized guest.
# virsh attach-disk guest1 /usr/share/virtio-win/virtio-win.vfd fda --type floppy
3. Creating the guest with virt-manager
a. At the final step of the virt-manager guest creation wizard, check the Customize configuration before install checkbox.
Press the Finish button to continue. b. Add the new device Select Storage from the Hardware type list. Click Forward to continue.
c. Select the driver disk
Select Select managed or existing storage. Set the location to /usr/share/virtio-win/virtio-win.vfd. Change Device type to Floppy disk.
Press the Forward button to continue. d. Confirm the new device Click the Finish button to confirm the device setup and add the device to the guest.
Press the green tick button to add the new device.
4. Creating the guest with virt-install
Append the following parameter exactly as listed below to add the driver disk to the installation with the virt-install command:
--disk path=/usr/share/virtio-win/virtio-win.vfd,device=floppy
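For context, a complete command might resemble the following sketch. The guest name, memory size, installation ISO and disk image are hypothetical placeholders and are not taken from this guide:

# virt-install --name win2k8 --ram 2048 \
   --cdrom /var/lib/libvirt/images/win2k8-install.iso \
   --disk path=/var/lib/libvirt/images/win2k8.img,size=20 \
   --disk path=/usr/share/virtio-win/virtio-win.vfd,device=floppy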
5. During the installation, additional steps are required to install drivers, depending on the type of Windows guest.
a. Windows Server 2003 and Windows XP
Before the installation blue screen appears, repeatedly press F6 to load third-party drivers.
Press Enter to continue the installation.
b. Windows Server 2008
Install the guest as described in Section 9.1, Using virt-install to create a guest. When the installer prompts you for the driver, click on Load Driver, point the installer to Drive A: and pick the driver that suits your guest operating system and architecture.
2.
3.
Change the entry to use the para-virtualized device by modifying the bus= entry to virtio. Note that if the disk was previously IDE it will have a target similar to hda, hdb, or hdc and so on. When changing to bus=virtio the target needs to be changed to vda, vdb, or vdc accordingly.
<disk type='file' device='disk'>
4.
Remove the address tag inside the disk tags. This must be done for this procedure to work. Libvirt will regenerate the address tag appropriately the next time the guest is started.
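For illustration only, a disk definition after these changes might resemble the following sketch; the source file path is a hypothetical example, not a value from this guide:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/guest1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>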
Please refer to the libvirt wiki: http://wiki.libvirt.org/page/Virtio for more details on using Virtio.
Procedure 11.3. Starting the new device wizard 1. Open the virtualized guest by double clicking on the name of the guest in virt-manager. 2. Open the Information tab by pressing the i information button.
Figure 11.1. The information tab button
3. In the information tab, press the Add Hardware button.
4. In the Adding Virtual Hardware tab select Storage or Network for the type of device. The storage and network device wizards are covered in Procedure 11.4, Adding a storage device using the para-virtualized storage driver and Procedure 11.5, Adding a network device using the para-virtualized network driver.
Procedure 11.4. Adding a storage device using the para-virtualized storage driver 1. Select hardware type Select Storage as the Hardware type.
Press Forward to continue. 2. Select the storage device and driver Create a new disk image or select a storage pool volume. Set the Device type to Virtio Disk to use the para-virtualized drivers.
Press Forward to continue. 3. Finish the procedure Confirm the details for the new device are correct.
Press Finish to complete the procedure. Procedure 11.5. Adding a network device using the para-virtualized network driver 1. Select hardware type Select Network as the Hardware type.
Press Forward to continue. 2. Select the network device and driver Select the network device from the Host device list. Create a custom MAC address or use the one provided. Set the Device model to virtio to use the para-virtualized drivers.
Press Forward to continue. 3. Finish the procedure Confirm the details for the new device are correct.
Press Finish to complete the procedure. Once all new devices are added, reboot the guest. Windows guests may not recognize the devices until the guest is rebooted.
Chapter 12.
PCI device assignment
Note
In the Red Hat Enterprise Linux 6.0 release, operating system drivers in KVM guests have limited access to a device's standard and extended configuration space. These limitations have been significantly reduced in the Red Hat Enterprise Linux 6.1 release, which enables a much larger set of PCI Express devices to be successfully assigned to KVM guests.
PCI device assignment is only available on hardware platforms supporting either Intel VT-d or AMD IOMMU. These Intel VT-d or AMD IOMMU extensions must be enabled in the BIOS for PCI device assignment to function.
Red Hat Enterprise Linux 6.0 and newer supports hot plugging assigned PCI devices into virtualized guests. Virtualized guests require PCI hot plug support to be enabled in order to hot plug PCI devices from the host to the guest. Statically defining assigned PCI devices to virtualized guests at startup works the same as any other device presented to the virtualized guest (PCI-IDE, PCI-rtl8139, PCI-e1000e).
Out of the 32 available PCI devices for a guest, 4 are not removable. This means there are only 28 PCI slots available for additional devices per guest. Every para-virtualized network or block device uses one slot. Each guest can use up to 28 additional devices made up of any combination of para-virtualized network devices, para-virtualized disk devices, or other PCI devices using VT-d.
Procedure 12.1. Preparing an Intel system for PCI device assignment
1. Enable the Intel VT-d extensions
The Intel VT-d extensions provide hardware support for directly assigning physical devices to guests. The VT-d extensions are required for PCI device assignment with Red Hat Enterprise Linux. The extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default. These extensions are often referred to by different terms in the BIOS, depending on the manufacturer. Consult your system manufacturer's documentation.
2. Activate Intel VT-d in the kernel
Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the kernel line in the /boot/grub/grub.conf file. The example below is a modified grub.conf file with Intel VT-d activated.
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
   root (hd0,0)
   kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet intel_iommu=on
   initrd /initrd-2.6.32-36.x86-64.img
3.
Ready to use Reboot the system to enable the changes. Your system is now PCI device assignment capable.
Procedure 12.2. Preparing an AMD system for PCI device assignment
1. Enable AMD IOMMU extensions
The AMD IOMMU extensions are required for PCI device assignment with Red Hat Enterprise Linux. The extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default.
2. Enable IOMMU kernel support
Append amd_iommu=on to the kernel line in /boot/grub/grub.conf so that AMD IOMMU extensions are enabled at boot.
In the output from this command, each PCI device is identified by a string, as shown in the following example output:
2. Run lspci and look for the device (or filter the output with the grep command) to determine the identifier code for your device.
Record the PCI device number; the number is needed in other steps. 3. Convert slot and function values to hexadecimal values (from decimal) to get the PCI bus addresses. Append "0x" to the beginning of the output to tell the computer that the value is a hexadecimal number. For example, if bus = 0, slot = 26 and function = 7 run the following:
$ printf %x 0
0
$ printf %x 26
1a
$ printf %x 7
7
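The three conversions can also be done with a single printf call; this one-liner is an illustration rather than part of the original procedure:

$ printf "bus=0x%x slot=0x%x function=0x%x\n" 0 26 7
bus=0x0 slot=0x1a function=0x7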
4.
Run virsh edit (or virsh attach-device) and add a device entry in the <devices> section to attach the PCI device to the guest.
# virsh edit win2k3

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1a' function='0x7'/>
  </source>
</hostdev>
5.
Toggle an SELinux Boolean to allow the management of the PCI device from the guest:
$ setsebool -P virt_use_sysfs 1
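The new Boolean value can be confirmed with getsebool; this check is an illustration and is not part of the original procedure:

$ getsebool virt_use_sysfs
virt_use_sysfs --> on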
6.
The PCI device should now be successfully attached to the guest and accessible to the guest operating system.
2.
Add the new device Select Physical Host Device from the Hardware type list. Click Forward to continue.
3.
Select a PCI device Select an unused PCI device. Note that selecting PCI devices presently in use on the host causes errors. In this example a PCI to USB interface device is used.
4.
Confirm the new device Click the Finish button to confirm the device setup and add the device to the guest.
The setup is complete and the guest can now use the PCI device.
In the output from this command, each PCI device is identified by a string, as shown in the following example output:
2.
Add the device
Use the PCI identifier output from the virsh nodedev command as the value for the --host-device parameter.
# virt-install \
   -n hostdev-test -r 1024 --vcpus 2 \
   --os-variant fedora11 -v \
   -l http://download.fedoraproject.org/pub/fedora/linux/development/x86_64/os \
   -x 'console=ttyS0 vnc' --nonetworks --nographics \
   --disk pool=default,size=8 \
   --debug --host-device=pci_8086_10bd
3.
Complete the installation Complete the guest installation. The PCI device should be attached to the guest.
Chapter 13.
SR-IOV
13.1. Introduction
The PCI-SIG (PCI Special Interest Group) developed the Single Root I/O Virtualization (SR-IOV) specification. The PCI-SIG Single Root IOV specification is a standard for a type of PCI device assignment which natively shares a single device with multiple guests. SR-IOV reduces hypervisor involvement by specifying virtualization compatible memory spaces, interrupts and DMA streams. SR-IOV improves device performance for virtualized guests.
Figure 13.1. How SR-IOV works
SR-IOV enables a Single Root Function (for example, a single Ethernet port) to appear as multiple, separate, physical devices. A physical device with SR-IOV capabilities can be configured to appear in the PCI configuration space as multiple functions; each device has its own configuration space complete with Base Address Registers (BARs) and (MSI-based) interrupts.
SR-IOV uses two new PCI functions:
Physical Functions (PFs) are full PCIe devices that include the SR-IOV capabilities. Physical Functions are discovered, managed, and configured as normal PCI devices. Physical Functions configure and manage the SR-IOV functionality by instantiating Virtual Functions.
Virtual Functions (VFs) are simple PCIe functions that only process I/O. Each Virtual Function is derived from a Physical Function. The number of Virtual Functions a device supports is limited by the device hardware. Virtual Functions can be assigned to virtualized guests.
The hypervisor can map one or more Virtual Functions to a virtualized guest. The Virtual Function's configuration space is mapped to the configuration space presented to the virtualized guest by the hypervisor. Each Virtual Function can only be mapped to a single guest at a time, as Virtual Functions require real hardware resources. A virtualized guest can have multiple Virtual Functions. A Virtual Function appears as a PCI device in the same manner as a normal PCI device would appear to an operating system.
The SR-IOV (VF) drivers are implemented in the guest VM. The core implementation is contained in the PCI subsystem, but there must also be driver support for both the Physical Function (PF) and Virtual Function (VF) devices. With an SR-IOV capable device one can allocate VFs from a PF. The VFs appear as PCI devices which are backed on the physical PCI device by resources (queues, and register sets).
Advantages of SR-IOV
SR-IOV devices can share a single physical port with multiple virtualized guests. Virtual Functions have near-native performance and provide better performance than para-virtualized drivers and emulated access. Virtual Functions provide data protection between virtualized guests on the same physical server as the data is managed and controlled by the hardware. These features allow for increased virtualized guest density on hosts within a data center. SR-IOV is better able to utilize the bandwidth of devices with multiple guests.
Note that the output has been modified to remove all other devices. 3. Start the SR-IOV kernel modules and Virtual Functions If the device is supported the driver kernel module should be loaded automatically by the kernel. Optional parameters can be passed to the module using the modprobe command. The Intel 82576 network interface card uses the igb driver kernel module. The max_vfs parameter of the igb module specifies the maximum number of Virtual Functions. The max_vfs parameter causes the driver to spawn Virtual Functions. For this particular card the valid range is 0 to 7. Start the module with the max_vfs set to 1 or any number of Virtual Functions up to the maximum supported by your device.
# modprobe igb max_vfs=7
4.
Make the Virtual Functions persistent Add the modprobe command to the /etc/modprobe.d/igb.conf file:
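For example, a line such as the following keeps the max_vfs value chosen in the previous step across reboots; the options syntax is standard modprobe.d usage rather than text reproduced from the original listing:

options igb max_vfs=7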
5.
Inspect the new Virtual Functions Using the lspci command, list the newly added Virtual Functions attached to the Intel 82576 network device.
# lspci | grep 82576
0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
The identifier for the PCI device is found with the -n parameter of the lspci command. The Physical Functions correspond to 0b:00.0 and 0b:00.1. All the Virtual Functions have Virtual Function in the description.
6. Add the Virtual Function to the guest
a. Shut down the guest.
b. Open the XML configuration file with the virsh edit command. This example edits a guest named MyGuest.
# virsh edit MyGuest
c.
The default text editor will open the libvirt configuration file for the guest. Add the new device to the devices section of the XML configuration file.
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address bus='0x03' slot='0x10' function='0x01'/>
  </source>
</hostdev>
d.
7. Restart
Restart the guest to complete the installation.
# virsh start MyGuest
The guest should start successfully and detect a new network interface card. This new card is the Virtual Function of the SR-IOV device.
This error is often caused by a device that is already assigned to another guest or to the host itself.
Chapter 14.
KVM guest timing management
NTP
The Network Time Protocol (NTP) daemon should be running on the host and the guests. Enable the ntpd service:
# service ntpd start
Using the ntpd service should minimize the effects of clock skew in all cases.
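Whether the processor provides a constant Time Stamp Counter can be checked by looking for the constant_tsc flag in /proc/cpuinfo; the exact command below is an illustration rather than the original listing:

# grep constant_tsc /proc/cpuinfo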
If any output is given your CPU has the constant_tsc bit. If no output is given follow the instructions below.
Note
These instructions are for AMD revision F cpus only.
If the CPU lacks the constant_tsc bit, disable all power management features (BZ#513138). Each system has several timers it uses to keep time. The TSC is not stable on the host, which is sometimes caused by cpufreq changes, deep C state, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel using deep C states append processor.max_cstate=1 to the kernel boot options in the grub.conf file on the host:
title Red Hat Enterprise Linux (2.6.32-36.x86-64)
   root (hd0,0)
   kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1
Disable cpufreq (only necessary on hosts without the constant_tsc) by editing the /etc/sysconfig/cpuspeed configuration file and change the MIN_SPEED and MAX_SPEED variables to the highest frequency available. Valid limits can be found in the /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies files.
Using the para-virtualized clock with Red Hat Enterprise Linux guests
For certain Red Hat Enterprise Linux guests, additional kernel parameters are required. These parameters can be set by appending them to the end of the kernel line in the /boot/grub/grub.conf file of the guest. The table below lists versions of Red Hat Enterprise Linux and the parameters required for guests on systems without a constant Time Stamp Counter.

Red Hat Enterprise Linux                                  Additional guest kernel parameters
6.0 AMD64/Intel 64 with the para-virtualized clock        Additional parameters are not required
6.0 AMD64/Intel 64 without the para-virtualized clock     notsc lpj=n
5.5 AMD64/Intel 64 with the para-virtualized clock        Additional parameters are not required
5.5 AMD64/Intel 64 without the para-virtualized clock     divider=10 notsc lpj=n
5.5 x86 with the para-virtualized clock                   Additional parameters are not required
5.5 x86 without the para-virtualized clock                divider=10 clocksource=acpi_pm lpj=n
5.4 AMD64/Intel 64                                        divider=10 notsc
5.4 x86                                                   divider=10 clocksource=acpi_pm
5.3 AMD64/Intel 64                                        divider=10 notsc
5.3 x86                                                   divider=10 clocksource=acpi_pm
4.8 AMD64/Intel 64                                        notsc divider=10
4.8 x86                                                   clock=pmtmr divider=10
3.9 AMD64/Intel 64                                        Additional parameters are not required
3.9 x86                                                   Additional parameters are not required
https://bugzilla.redhat.com/show_bug.cgi?id=513138
Using the Real-Time Clock with Windows Server 2003 and Windows XP guests
Windows uses both the Real-Time Clock (RTC) and the Time Stamp Counter (TSC). For Windows guests the Real-Time Clock can be used instead of the TSC for all time sources, which resolves guest timing issues. To enable the Real-Time Clock for the PMTIMER clock source (the PMTIMER usually uses the TSC) add the following line to the Windows boot settings. Windows boot settings are stored in the boot.ini file. Add the following line to the boot.ini file:
/use pmtimer
For more information on Windows boot settings and the pmtimer option, refer to Available switch options for the Windows XP and the Windows Server 2003 Boot.ini files.
Using the Real-Time Clock with Windows Vista, Windows Server 2008 and Windows 7 guests
Windows uses both the Real-Time Clock (RTC) and the Time Stamp Counter (TSC). For Windows guests the Real-Time Clock can be used instead of the TSC for all time sources, which resolves guest timing issues. The boot.ini file is no longer used from Windows Vista and newer. Windows Vista, Windows Server 2008 and Windows 7 use the Boot Configuration Data Editor (bcdedit.exe) to modify the Windows boot parameters. This procedure is only required if the guest is having time keeping issues. Time keeping issues may not affect guests on all host systems.
1. Open the Windows guest.
2. Open the Accessories menu of the start menu. Right click on the Command Prompt application, select Run as Administrator.
3. Confirm the security exception, if prompted.
4. Set the boot manager to use the platform clock. This should instruct Windows to use the PM timer for the primary clock source. The system UUID ({default} in the example below) should be changed if the system UUID is different than the default boot device.
C:\Windows\system32>bcdedit /set {default} USEPLATFORMCLOCK on The operation completed successfully
This fix should improve time keeping for Windows Vista, Windows Server 2008 and Windows 7 guests.
http://support.microsoft.com/kb/833721
Chapter 15.
Remove or disable any unnecessary services such as AutoFS, NFS, FTP, HTTP, NIS, telnetd, sendmail and so on.
Only add the minimum number of user accounts needed for platform management on the server and remove unnecessary user accounts.
Avoid running any unessential applications on your host. Running applications on the host may impact virtual machine performance and can affect server stability. Any application which may crash the server will also cause all virtual machines on the server to go down.
Use a central location for virtual machine installations and images. Virtual machine images should be stored under /var/lib/libvirt/images/. If you are using a different directory for your virtual machine images make sure you add the directory to your SELinux policy and relabel it before starting the installation.
Installation sources, trees, and images should be stored in a central location, usually the location of your vsftpd server.
Chapter 16.
Security for virtualization
2. Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3.
# mke2fs -j /dev/volumegroup/NewVolumeName
3.
Create a new directory for mounting the new logical volume. This directory can be anywhere on your file system. It is advised not to put it in important system directories (/etc, /var, /sys) or in home directories (/home or /root). This example uses a directory called /virtstorage
# mkdir /virtstorage
4.
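Mount the new logical volume on the directory created in the previous step. A minimal example, assuming the volume and directory names used above (the mount is not persistent unless an /etc/fstab entry is also added):

# mount /dev/volumegroup/NewVolumeName /virtstorage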
5.
Set the correct SELinux type for the libvirt image location.
# semanage fcontext -a -t virt_image_t "/virtstorage(/.*)?"
If the targeted policy is used (targeted is the default policy) the command appends a line to the /etc/selinux/targeted/contexts/files/file_contexts.local file which makes the change persistent. The appended line may resemble this:
/virtstorage(/.*)? system_u:object_r:virt_image_t:s0
6.
Run the command to change the type of the mount point (/virtstorage) and all files under it to virt_image_t (the restorecon and setfiles commands read the files in /etc/selinux/targeted/contexts/files/).
# restorecon -R -v /virtstorage
Verify the file has been relabeled using the following command:
# sudo ls -Z /virtstorage -rw-------. root root system_u:object_r:virt_image_t:s0 newfile
The output shows that the new file has the correct attribute, virt_image_t.
16.3. SELinux
This section contains topics to consider when using SELinux with your virtualization deployment. When you deploy system changes or add devices, you must update your SELinux policy accordingly. To configure an LVM volume for a guest, you must modify the SELinux context for the respective underlying block device and volume group.
ICMP requests must be accepted. ICMP packets are used for network testing. You cannot ping guests if ICMP packets are blocked.
Port 22 should be open for SSH access and the initial installation.
Ports 80 or 443 (depending on the security settings on the RHEV Manager) are used by the vdsm-reg service to communicate information about the host.
Ports 5634 to 6166 are used for guest console access with the SPICE protocol.
Ports 49152 to 49216 are used for migrations with KVM. Migration may use any port in this range depending on the number of concurrent migrations occurring.
Enabling IP forwarding (net.ipv4.ip_forward = 1) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable so it will be enabled when the virtualization packages are installed unless it was manually disabled.
Note
Note that enabling IP forwarding is not required for physical bridge devices. When a guest is connected through a physical bridge, traffic only operates at a level that does not require IP configuration such as IP forwarding.
Chapter 17.
sVirt
sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtualized guests. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed toward the host or to another virtualized guest. This chapter describes how sVirt integrates with virtualization technologies in Red Hat Enterprise Linux 6.
Non-virtualized environments
In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, consisting of services such as a web server, or a DNS server. These services communicate directly to their own user space, host kernel and physical host, offering their services directly to the network. The following image represents a non-virtualized environment:
Virtualized environments
In a virtualized environment, several operating systems can run on a single host kernel and physical host. The following image represents a virtualized environment:
131
SELinux introduces a pluggable security framework for virtualized instances in its implementation of Mandatory Access Control (MAC). The sVirt framework allows guests and their resources to be uniquely labeled. Once labeled, rules can be applied which can reject access between different guests.
# ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520    27950 ?  00:00:17 qemu-kvm
system_u:system_r:svirt_t:s0:c639,c757   27989 ?  00:00:06 qemu-system-x86
The actual disk images are automatically labeled to match the processes, as shown in the following output:
# ls -lZ /var/lib/libvirt/images/*
system_u:object_r:svirt_image_t:s0:c87,c520   image1
The following table outlines the different labels that can be assigned when using sVirt:
Table 17.1. sVirt labels
SELinux Context                                 Description
system_u:system_r:svirt_t:MCS1                  Virtualized guest processes. MCS1 is a randomly selected MCS field. Currently approximately 500,000 labels are supported.
system_u:object_r:svirt_image_t:MCS1            Only svirt_t processes with the same MCS fields are able to read/write these image files and devices.
system_u:object_r:svirt_image_t:s0              All svirt_t processes are allowed to write to the svirt_image_t:s0 files and devices.
system_u:object_r:svirt_content_t:s0            All svirt_t processes are able to read files/devices with this label.
system_u:object_r:virt_content_t:s0             System default label used when an image exits. No svirt_t virtual processes are allowed to read files/devices with this label.
It is also possible to perform static labeling when using sVirt. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtualized guest. Administrators who run statically-labeled virtualized guests are responsible for setting the correct label on the image files. The virtualized guest will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. This allows the sVirt component to run in an MLS environment. You can also run multiple virtualized guests with different sensitivity levels on a system, depending on your requirements.
Chapter 18.
Remote management of virtualized guests
1.
Optional: Changing user Change user, if required. This example uses the local root user for remotely managing the other hosts and the local host.
$ su -
2.
Generating the SSH key pair
Generate a public key pair on the machine where virt-manager is used. This example uses the default key location, in the ~/.ssh/ directory.
$ ssh-keygen -t rsa
3.
Copying the keys to the remote hosts Remote login without a password, or with a passphrase, requires an SSH key to be distributed to the systems being managed. Use the ssh-copy-id command to copy the key to root user at the system address provided (in the example, [email protected]).
$ ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected] [email protected]'s password:
Now try logging into the machine, with the ssh [email protected] command and check in the .ssh/authorized_keys file to make sure unexpected keys have not been added. Repeat for other systems, as required. 4. Optional: Add the passphrase to the ssh-agent Add the passphrase for the SSH key to the ssh-agent, if required. On the local host, use the following command to add the passphrase (if there was one) to enable password-less login.
# ssh-add ~/.ssh/id_rsa
After libvirtd and SSH are configured you should be able to remotely access and manage your virtual machines. You should also be able to access your guests with VNC at this point.
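As an illustration of such a remote connection, virsh can contact the managed host over SSH using the URI format described later in this chapter; the host name follows the earlier example and the /system path selects the privileged libvirt instance:

# virsh -c qemu+ssh://[email protected]/system list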
Transport modes
UNIX sockets
Unix domain sockets are only accessible on the local machine. Sockets are not encrypted, and use UNIX permissions or SELinux for authentication. The standard socket names are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (for read-only connections).
SSH
Transported over a Secure Shell protocol (SSH) connection. Requires Netcat (the nc package) installed. The libvirt daemon (libvirtd) must be running on the remote machine. Port 22 must be open for SSH access. You should use some sort of SSH key management (for example, the ssh-agent utility) or you will be prompted for a password.
ext
The ext parameter is used for any external program which can make a connection to the remote machine by means outside the scope of libvirt. This parameter is unsupported.
tcp
Unencrypted TCP/IP socket. Not recommended for production use, this is normally disabled, but an administrator can enable it for testing or use over a trusted network. The default port is 16509.
The default transport, if no other is specified, is tls.
Remote URIs
A Uniform Resource Identifier (URI) is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts. libvirt URIs take the general form (content in square brackets, "[]", represents optional functions):
driver[+transport]://[username@][hostname][:port]/[path][?extraparameters]
The transport method or the hostname must be provided to target an external location. Examples of remote management parameters Connect to a remote KVM host named server7, using SSH transport and the SSH username ccurran.
qemu+ssh://ccurran@server7/
Connect to a remote KVM hypervisor on the host named server7 using TLS.
qemu://server7/
Connect to a remote KVM hypervisor on host server7 using TLS. The no_verify=1 instructs libvirt not to verify the server's certificate.
qemu://server7/?no_verify=1
Testing examples Connect to the local KVM hypervisor with a non-standard UNIX socket. The full path to the Unix socket is supplied explicitly in this case.
qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
Connect to the libvirt daemon with an unencrypted TCP/IP connection to the server with the IP address 10.1.1.10 on port 5000. This uses the test driver with default settings.
test+tcp://10.1.1.10:5000/default
hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly.

command (transport modes: ssh and ext)
   The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command.
   Example usage: command=/opt/openssh/bin/ssh

socket (transport modes: unix and ssh)
   The path to the UNIX domain socket, which overrides the default. For ssh transport, this is passed to the remote netcat command (see netcat).
   Example usage: socket=/opt/libvirt/run/libvirt/libvirt-sock

netcat (transport mode: ssh)
   The netcat command can be used to connect to remote systems. The default netcat parameter uses the nc command. For SSH transport, libvirt constructs an SSH command using the form: command -p port [-l username] hostname netcat -U socket. The port, username and hostname parameters can be specified as part of the remote URI. The command, netcat and socket come from other extra parameters.
   Example usage: netcat=/opt/netcat/bin/nc

no_verify (transport mode: tls)
   If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration.
   Example usage: no_verify=1

no_tty (transport mode: ssh)
   If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (for using ssh-agent or similar). Use this when you do not have access to a terminal - for example in graphical programs which use libvirt.
   Example usage: no_tty=1
Chapter 19.
Overcommitting memory
Most operating systems and applications do not use 100% of the available RAM all the time. This behavior can be exploited with KVM. KVM can allocate more memory for virtualized guests than the host has physically available.
With KVM, virtual machines are Linux processes. Guests on the KVM hypervisor do not have dedicated blocks of physical RAM assigned to them; instead guests function as Linux processes. The Linux kernel allocates each process memory when the process requests more memory. KVM guests are allocated memory when requested by the guest operating system. The guest only requires slightly more physical memory than the virtualized operating system reports as used.
The Linux kernel swaps infrequently used memory out of physical memory and into virtual memory. Swapping decreases the amount of memory required by virtualized guests. When physical memory is completely used or a process is inactive for some time, Linux moves the process's memory to swap. Swap is usually a partition on a hard disk drive or solid state drive which Linux uses to extend virtual memory. Swap is significantly slower than RAM due to the throughput and response times of hard drives and solid state drives.
As KVM virtual machines are Linux processes, underused or idle memory of virtualized guests is moved by default to swap. The total memory used by guests can be overcommitted, which is to use more than the physically available host memory. Overcommitting requires sufficient swap space for all guests and all host processes. Without sufficient swap space for all processes in virtual memory, the kernel's out-of-memory (OOM) killer starts and terminates processes to free memory so the system does not crash. The OOM killer may destroy virtualized guests or other system processes, which may cause file system errors and may leave virtualized guests unbootable. This can cause issues if virtualized guests use their total RAM.
Warning
If sufficient swap is not available guest operating systems will be forcibly shut down. This may leave guests inoperable. Avoid this by never overcommitting more memory than there is swap available.
141
Important
The example below is provided as a guide for configuring swap only. The settings listed may not be appropriate for your environment.
Example 19.1. Memory overcommit example ExampleServer1 has 32GB of RAM. The system is being configured to run 56 guests with 1GB of virtualized memory. The host system rarely uses more than 4GB of memory for system processes, drivers and storage caching. 32GB minus 4GB for the host leaves 28GB of physical RAM for virtualized guests. Each guest uses 1GB of RAM, a total of 56GB of virtual RAM is required for the guests. The Red Hat Knowledgebase recommends 8GB of swap for a system with 32GB of RAM. To safely overcommit memory there must be sufficient virtual memory for all guests and the host. The host has 28GB of RAM for guests (which need 56GB of RAM). Therefore, the system needs at least 28GB of swap for the guests. ExampleServer1 requires at least 36GB (8GB for the host and 28GB for the guests) of swap to safely overcommit for all 56 guests. It is possible to overcommit memory over ten times the amount of physical RAM in the system. This only works with certain types of guest, for example, desktop virtualization with minimal intensive usage or running several identical guests with KSM. Configuring swap and memory overcommit is not a formula, each environment and setup is different. Your environment must be tested and customized to ensure stability and performance. For more information on KSM and overcommitting, refer to Chapter 20, KSM.
http://kbase.redhat.com/faq/docs/DOC-15252
Overcommitting symmetric multiprocessing guests over the physical number of processing cores will cause significant performance degradation. Assigning guests VCPUs up to the number of physical cores is appropriate and works as expected. For example, running virtualized guests with four VCPUs on a quad core host. Guests with less than 100% loads should function effectively in this setup.
Chapter 20.
KSM
The concept of shared memory is common in modern operating systems. For example, when a program is first started it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write. KSM is a new Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple references to multiple identical memory pages to a single reference to a single page. This page is then marked copy on write. If the contents of the page is modified, a new page is created. This is useful for virtualization with KVM. When a virtualized guest is started, it only inherits the memory from the parent qemu-kvm process. Once the guest is running the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM only identifies and merges identical pages which does not interfere with the guest or impact the security of the host or the guests. KSM allows KVM to request that these identical guest memory regions be shared. KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests which allows for higher densities and greater utilization of resources. Red Hat Enterprise Linux uses two separate methods for controlling KSM: The ksm service starts and stops the KSM kernel thread. The ksmtuned service controls and tunes the ksm, dynamically managing same-page merging. The ksmtuned service starts ksm and stops the ksm service if memory sharing is not necessary. The ksmtuned service must be told with the retune parameter to run when new virtualized guests are created or destroyed. Both of these services are controlled with the standard service management tools.
The ksm service can be added to the default startup sequence. Make the ksm service persistent with the chkconfig command.
# chkconfig ksm on
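As a quick sanity check (a minimal sketch; the sysfs counter shown is a standard KSM kernel interface rather than anything specific to this guide), the services can be started with the usual service tools and the kernel's current merge statistics read from /sys/kernel/mm/ksm:

# service ksm start
# service ksmtuned start
# cat /sys/kernel/mm/ksm/pages_sharing

A non-zero pages_sharing value indicates that identical guest pages are being merged.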
The ksmtuned service can be tuned with the retune parameter. The retune parameter instructs ksmtuned to run tuning functions manually. The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file.
# Configuration file for ksmtuned.

# How long ksmtuned should sleep between tuning adjustments
# KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
# KSM_SLEEP_MSEC=10

# KSM_NPAGES_BOOST=300
# KSM_NPAGES_DECAY=-50
# KSM_NPAGES_MIN=64
# KSM_NPAGES_MAX=1250

# KSM_THRES_COEF=20
# KSM_THRES_CONST=2048

# uncomment the following to enable ksmtuned debug information
# LOGFILE=/var/log/ksmtuned
# DEBUG=1
pages_volatile
    Number of volatile pages.
run
    Whether the KSM process is running.
sleep_millisecs
    Sleep milliseconds.

KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings.

The /etc/sysconfig/ksm file can manually set a number of pages, or all pages, used by KSM as not swappable.

1. Open the /etc/sysconfig/ksm file with a text editor.
# The maximum number of unswappable kernel pages
# which may be allocated by ksm (0 for unlimited)
# If unset, defaults to half of total memory
# KSM_MAX_KERNEL_PAGES=
2. Uncomment the KSM_MAX_KERNEL_PAGES line to manually configure the number of unswappable pages for KSM. Setting this variable to 0 configures KSM to keep all identical pages in main memory, which can improve performance if the system has sufficient main memory.
# The maximum number of unswappable kernel pages
# which may be allocated by ksm (0 for unlimited)
# If unset, defaults to half of total memory
KSM_MAX_KERNEL_PAGES=0
Deactivating KSM
KSM has a performance overhead which may be too large for certain environments or host systems. KSM can be deactivated by stopping the ksm service and the ksmtuned service. Stopping the services deactivates KSM, but the change does not persist after restarting.
# service ksm stop
Stopping ksm:                                              [  OK  ]
# service ksmtuned stop
Stopping ksmtuned:                                         [  OK  ]
Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
Chapter 21.
Hugepage support
Introduction
x86 CPUs usually address memory in 4kB pages, but they are capable of using larger pages known as huge pages. KVM guests can be deployed with huge page memory support in order to reduce memory consumption and improve performance by reducing CPU cache usage. By using huge pages for a KVM guest, less memory is used for page tables and TLB (Translation Lookaside Buffer) misses are reduced, thereby significantly increasing performance, especially for memory-intensive situations. Transparent Hugepage Support is a kernel feature that reduces the number of TLB entries needed for an application. By also allowing all free memory to be used as cache, performance is increased.
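As an illustration (a minimal sketch, not a tuning recommendation; the page count shown is an arbitrary example value), current huge page usage can be inspected through /proc/meminfo, and a static huge page pool can be reserved with the vm.nr_hugepages kernel parameter:

# grep Huge /proc/meminfo
# sysctl -w vm.nr_hugepages=1024

Guests defined through libvirt can then request huge page backing with the memoryBacking element in the guest XML configuration.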
Chapter 22.
Installing virt-v2v
To install virt-v2v from RHN, ensure the system is subscribed to the appropriate channel, then run:
yum install virt-v2v
Figure 22.2. The storage tab

Click the plus sign (+) button to add a new storage pool.
Figure 22.3. Adding a storage pool

2. Create local network interfaces.
   The local machine must have an appropriate network to which the converted virtualized guest can connect. This is likely to be a bridge interface (see the example configuration after this list). A bridge interface can be created using standard tools on the host. Since version 0.8.3, virt-manager can also create and manage bridges.

3. Specify network mappings in virt-v2v.conf. This step is optional, and is not required for most use cases.
   If your virtualized guest has multiple network interfaces, /etc/virt-v2v.conf must be edited to specify the network mapping for all interfaces. You can specify an alternative virt-v2v.conf file with the -f parameter. If your virtualized guest only has a single network interface, it is simpler to use the --network or --bridge parameters, rather than modifying virt-v2v.conf.
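For reference, a host bridge on Red Hat Enterprise Linux is typically defined with ifcfg files under /etc/sysconfig/network-scripts. The following is a minimal sketch only; the interface name eth0, the bridge name br0, and the use of DHCP are illustrative assumptions, not values required by virt-v2v:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

Restart the network service after creating the files so the bridge comes up.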
virt-v2v will display an error similar to Example 22.1, Missing Package error if software it depends upon for a particular conversion is not available under /var/lib/virt-v2v/software/.

Example 22.1. Missing Package error
virt-v2v: Installation failed because the following files referenced in the
configuration file are required, but missing:
rhel/5/kernel-2.6.18-128.el5.x86_64.rpm
rhel/5/ecryptfs-utils-56-8.el5.x86_64.rpm
rhel/5/ecryptfs-utils-56-8.el5.i386.rpm
To obtain the relevant RPMs for your environment, follow these steps:
1. Log in to Red Hat Network: https://rhn.redhat.com/
2. In the Customer Portal section of RHN (https://rhn.redhat.com/rhn/software/channels/All.do), select the Software Channels tab.
3. Select your product, version and architecture to enter the correct channel.
4. Select the Packages tab.
5. Use the Filter by Package function to locate the missing package. Select the package exactly matching the one shown in the error message. For the example shown in Example 22.1, Missing Package error, the first package is kernel-2.6.18-128.el5.x86_64.
6. Select Download Packages at the bottom of the package details page.
7. Save the downloaded package to the appropriate directory in /var/lib/virt-v2v/software. For example, the Red Hat Enterprise Linux 5 directory is /var/lib/virt-v2v/software/rhel/5.
Important
Virtualized guests running Windows can only be converted for output to Red Hat Enterprise Virtualization. The conversion procedure depends on post-processing by the Red Hat Enterprise Virtualization Manager for completion. See Section 22.4.2, Configuration changes for Windows virtualized guests for details of the process. Virtualized guests running Windows cannot be converted for output to libvirt.
1. Obtain the Guest Tools ISO
   As part of the conversion process for virtualized guests running Windows, the Red Hat Enterprise Virtualization Manager will install drivers using the Guest Tools ISO. See Section 22.4.2, Configuration changes for Windows virtualized guests for details of the process. The Guest Tools ISO is obtained as follows:
   1. From the Red Hat Enterprise Virtualization Manager, log in to Red Hat Network.
   2. Click on Download Software.
   3. Select the Red Hat Enterprise Virtualization (x86-64) channel.
   4. Select the Red Hat Enterprise Virt Manager for Desktops (v.2 x86) channel, as appropriate for your subscription.
   5. Download the Guest Tools ISO for 2.2 and save it locally.
2. Upload the Guest Tools ISO to the Red Hat Enterprise Virtualization Manager
   Upload the Guest Tools ISO using the ISO Uploader. See the Red Hat Enterprise Virtualization for Servers Administration Guide for instructions.
3. Ensure that the libguestfs-winsupport package is installed on the host running virt-v2v. This package provides support for NTFS, which is used by many Windows systems. If you attempt to convert a virtualized guest using NTFS without the libguestfs-winsupport package installed, the conversion will fail.
4. Ensure that the virtio-win package is installed on the host running virt-v2v. This package provides para-virtualized block and network drivers for Windows guests. If you attempt to convert a virtualized guest running Windows without the virtio-win package installed, the conversion will fail giving an error message concerning missing files.
This will require booting into a Xen kernel to obtain the XML, as libvirt needs to connect to a running Xen hypervisor to obtain its metadata. The conversion process is optimized for KVM, so obtaining domain data while running a Xen kernel, then performing the conversion using a KVM kernel will be more efficient than running the conversion on a Xen kernel.
22.2.1. virt-v2v
virt-v2v converts guests from a foreign hypervisor to run on KVM, managed by libvirt.
virt-v2v -i libvirtxml -op pool --bridge brname vm-name.xml
virt-v2v -op pool --network netname vm-name
virt-v2v -ic esx://esx.example.com/?no_verify=1 -op pool --bridge brname vm-name
Parameters
-i input
    Specifies the input method to obtain the guest for conversion. The default is libvirt. Supported options are:
    libvirt
        Guest argument is the name of a libvirt domain.
    libvirtxml
        Guest argument is the path to an XML file containing a libvirt domain.

-ic URI
    Specifies the connection to use when using the libvirt input method. If omitted, this defaults to qemu:///system. virt-v2v can currently automatically obtain guest storage from local libvirt connections, ESX connections, and connections over SSH. Other types of connection are not supported.

-o method
    Specifies the output method. If no output method is specified, the default is libvirt. Supported output methods are:
    libvirt
        Create a libvirt guest. See the -oc and -op options. -op must be specified for the libvirt output method.
    rhev
        Create a guest on a Red Hat Enterprise Virtualization Export storage domain, which can later be imported using the manager. The Export storage domain must be specified with -osd for the rhev output method.

-oc URI
    Specifies the libvirt connection to use to create the converted guest. If omitted, this defaults to qemu:///system. Note that virt-v2v must be able to write directly to storage described by this libvirt connection. This makes writing to a remote connection impractical at present.

-op pool
    Specifies the pool which will be used to create new storage for the converted guest.

-osd domain
    Specifies the path to an existing Red Hat Enterprise Virtualization Export storage domain. The domain must be in the format <host>:<path>; for example, storage.example.com:/rhev/export. The NFS export must be mountable and writable by the machine running virt-v2v.

-f file | --config file
    Load the virt-v2v configuration from file. Defaults to /etc/virt-v2v.conf if it exists.

-n network | --network network
    Map all guest bridges or networks which don't have a mapping in the configuration file to the specified network. This option cannot be used in conjunction with --bridge.

-b bridge | --bridge bridge
    Map all guest bridges or networks which don't have a mapping in the configuration file to the specified bridge. This option cannot be used in conjunction with --network.

--help
    Display brief help.
--version
    Display version number and exit.
Where pool is the local storage pool to hold the image, brname is the name of a local network bridge to connect the converted guest's network to, and vm-name.xml is the path to the virtualized guest's exported XML. You may also use the --network parameter to connect to a locally managed network, or specify multiple mappings in /etc/virt-v2v.conf. If your guest uses a Xen para-virtualized kernel (it would be called something like kernel-xen or kernel-xenU), virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which won't reference a hypervisor in its name, alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion.
Where vmhost.example.com is the host running the virtualized guest, pool is the local storage pool to hold the image, brname is the name of a local network bridge to connect the converted guest's network to, and vm-name is the domain of the Xen virtualized guest. You may also use the --network parameter to connect to a locally managed network, or specify multiple mappings in /etc/virt-v2v.conf. If your guest uses a Xen para-virtualized kernel (it would be called something like kernel-xen or kernel-xenU), virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which won't reference a hypervisor in its name, alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion.
Where esx.example.com is the VMware ESX server, pool is the local storage pool to hold the image, brname is the name of a local network bridge to connect the converted guest's network to,
and vm-name is the name of the virtualized guest. You may also use the --network parameter to connect to a locally managed network, or specify multiple mappings in /etc/virt-v2v.conf.
.netrc permissions
The .netrc file must have a permission mask of 0600 to be read correctly by virt-v2v.
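For illustration only (the hostname, user and password are placeholders, and the single-line layout is the standard .netrc format rather than anything virt-v2v-specific), a .netrc entry for an ESX server and the matching permission fix might look like:

machine esx.example.com login root password s3cret

# chmod 0600 ~/.netrc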
This example demonstrates converting a local (libvirt-managed) Xen virtualized guest running Windows for output to Red Hat Enterprise Virtualization. Ensure that the virtualized guest's XML is available locally, and that the storage referred to in the XML is available locally at the same paths. To convert the virtualized guest from an XML file, run:
virt-v2v -i libvirtxml -o rhev -osd storage.example.com:/exportdomain --network rhevm vm-name.xml
Where vm-name.xml is the path to the virtualized guest's exported XML, and storage.example.com:/exportdomain is the export storage domain. You may also use the --network parameter to connect to a locally managed network, or specify multiple mappings in /etc/virt-v2v.conf. If your guest uses a Xen para-virtualized kernel (it would be called something like kernel-xen or kernel-xenU), virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which won't reference a hypervisor in its name,
alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion.
Configuration changes are also made to the converted Linux guest for X reconfiguration, initrd, and SELinux.
Table 22.2. Configured drivers in a Linux Guest

Para-virtualized driver type      Driver module
Display                           cirrus
Storage                           virtio_blk
Network                           virtio_net

In addition, initrd will preload the virtio_pci driver.

Other drivers                     Driver module
Display                           cirrus
Block                             Virtualized IDE
Network                           Virtualized e1000
virt-v2v can convert virtualized guests running Windows XP, Windows Vista, Windows 7, Windows Server 2003 and Windows Server 2008. The conversion process for virtualized guests running Windows is slightly different to the process for virtualized guests running Linux. Windows virtualized guest images are converted as follows:
1. virt-v2v installs virtio block drivers.
2. virt-v2v installs the CDUpgrader utility.
3. virt-v2v copies the virtio block and network drivers to %SystemRoot%\Drivers\VirtIO. The virtio-win package does not include drivers for Windows 7 and Windows XP. When using these operating systems, rtl8139 network drivers are used. Support for rtl8139 must be already available in the guest.
4. virt-v2v adds %SystemRoot%\Drivers\VirtIO to DevicePath, meaning this directory is automatically searched for drivers when a new device is connected.
5. virt-v2v makes registry changes to include the virtio block drivers in the CriticalDeviceDatabase section of the registry, and ensures the CDUpgrader service is started at the next boot.
At this point, virt-v2v has completed the conversion. The converted virtualized guest is now bootable, but does not yet have all the drivers installed necessary to function correctly. The conversion must be finished by the Red Hat Enterprise Virtualization Manager. The Manager performs the following steps: 1. The virtualized guest is imported and run on the Manager. See the Red Hat Enterprise Virtualization for Servers Administration Guide for details.
Important
The first boot stage can take several minutes to run, and must not be interrupted. It will run automatically without any administrator intervention other than starting the virtualized guest. To ensure the process is not interrupted, no user should log in to the virtualized guest until it has quiesced. You can check for this in the Manager GUI.
2. The Guest Tools ISO is detected and, if found, all the virtio drivers from it are installed, including a reinstall of the virtio block drivers.
Note
As the Windows conversion can copy drivers from virtio-win directly to the guest, the Guest Tools ISO is not required. It is however recommended as the included tools will be kept up to date, and additional tools that are not included in virtio-win can be installed.
Chapter 23.
The guest now automatically starts with the host. To stop a guest from automatically booting, use the --disable parameter:
# virsh autostart --disable TestServer
Domain TestServer unmarked as autostarted
Create
Create the new disk image filename of size size and format format.
# qemu-img create [-6] [-e] [-b base_image] [-f format] filename [size]
If base_image is specified, then the image will record only the differences from base_image. No size needs to be specified in this case. base_image will never be modified unless you use the commit command.
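For example (a sketch only; the file names are hypothetical), a copy-on-write image that records only its differences from an existing base image could be created with:

# qemu-img create -f qcow2 -b rhel6-base.img rhel6-guest1.qcow2

No size is given because the new image inherits the virtual size of its base image.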
Commit
Commit any changes recorded in the file name filename with format format into its base image with the qemu-img commit command:
# qemu-img commit [-f format] filename
Convert
The convert option is used to convert one recognized image format to another image format. Command format:
# qemu-img convert [-c] [-e] [-f format] filename [-O output_format] output_filename
Convert the disk image filename to disk image output_filename using format output_format. The disk image can be optionally encrypted with the -e option or compressed with the -c option.

Only the qcow2 format supports encryption or compression. The compression is read-only: if a compressed sector is rewritten, it is rewritten as uncompressed data. The encryption uses the AES format with secure 128-bit keys. Use a long password (over 16 characters) to get maximum protection.

Image conversion is also useful to get a smaller image when using a format which can grow, such as qcow or cow. The empty sectors are detected and suppressed from the destination image.
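For example (the file names are hypothetical), a raw image can be converted to a compressed qcow2 image as follows:

# qemu-img convert -c -f raw -O qcow2 rhel6-guest1.img rhel6-guest1.qcow2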
Info
The info parameter displays information about a disk image filename. The format for the info option is as follows:
# qemu-img info [-f format] filename
This command is often used to discover the size reserved on disk which can be different from the displayed size. If snapshots are stored in the disk image, they are displayed also.
Resize
Change the disk image filename as if it had been created with size size.
# qemu-img resize filename [+ | -] size
Warning
Before using this command to shrink a disk image, you must use file system and partitioning tools inside the VM itself to reduce allocated file systems and partition sizes accordingly. Failure to do so will result in data loss. After using this command to grow a disk image, you must use file system and partitioning tools inside the VM to actually begin using the new space on the device.
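As an example (hypothetical file name; this grows the image, so no prior file system shrinking is required), an image can be enlarged by 10GB with:

# qemu-img resize rhel6-guest1.qcow2 +10G

The new space must still be partitioned and formatted from inside the guest, as described in the warning above.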
Snapshot
List (-l), apply (-a), create (-c), or delete (-d) snapshot snapshot in image filename.
# qemu-img snapshot [-l | -a snapshot | -c snapshot | -d snapshot ] filename
Supported formats
The format of an image is usually guessed automatically. The following formats are supported:

raw
    Raw disk image format (default). This format has the advantage of being simple and easily exportable to all other emulators. If your file system supports holes (for example ext2 or ext3 on Linux, or NTFS on Windows), then only the written sectors will reserve space. Use qemu-img info to obtain the real size used by the image, or ls -ls on Unix/Linux.

qcow2
    QEMU image format, the most versatile format. Use it to have smaller images (useful if your file system does not support holes, for example on Windows), optional AES encryption, zlib-based compression and support of multiple VM snapshots.

qcow
    Old QEMU image format. Only included for compatibility with older versions.

cow
    User Mode Linux Copy On Write image format. The cow format is included only for compatibility with previous versions. It does not work with Windows.

vmdk
    VMware 3 and 4 compatible image format.

cloop
    Linux Compressed Loop image, useful only to reuse directly compressed CD-ROM images present for example in the Knoppix CD-ROMs.
2. Analyze the output. The following output contains a vmx entry indicating an Intel processor with the Intel VT extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
The following output contains an svm entry indicating an AMD processor with the AMD-V extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc
If any output is received, the processor has the hardware virtualization extensions. However, in some circumstances manufacturers disable the virtualization extensions in BIOS. The "flags:" output content may appear multiple times, once for each hyperthread, core or CPU on the system. The virtualization extensions may be disabled in the BIOS. If the extensions do not appear or full virtualization does not work, refer to Procedure 35.1, Enabling virtualization extensions in BIOS.
3. For users of the KVM hypervisor
   If the kvm package is installed, as an additional check, verify that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd then the kvm hardware virtualization modules are loaded and your system meets requirements.
Additional output
If the libvirt package is installed, the virsh command can output a full list of virtualization system capabilities. Run virsh capabilities as root to receive the complete list.
This system has eight CPUs in two sockets; each processor has four cores. The output shows that the system has a NUMA architecture. NUMA is more complex and requires more data to accurately interpret. Use the virsh capabilities command to get additional output data on the CPU configuration.
# virsh capabilities
<capabilities>
  <host>
    <cpu>
The output shows two NUMA nodes (also known as NUMA cells), each containing four logical CPUs (four processing cores). This system has two sockets, therefore it can be inferred that each socket is a separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to physical CPUs 0 to 3, or 4 to 7, to avoid accessing non-local memory, which is significantly slower than accessing local memory. If a guest requires eight virtual CPUs, as each NUMA node only has four physical CPUs, better utilization may be obtained by running a pair of four virtual CPU guests and splitting the work between them, rather than using a single 8 CPU guest. Running across multiple NUMA nodes significantly degrades performance for physical and virtualized tasks.
If a guest requires 3 GB of RAM allocated, then the guest should be run on NUMA node (cell) 1. Node 0 only has 2.2GB free, which is probably not sufficient for certain guests.
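The free memory in a given NUMA cell can be checked directly with virsh (the cell number here is illustrative); the command prints the amount of free memory in that cell:

# virsh freecell 1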
2. Observe that node 1, <cell id='1'>, has physical CPUs 4 to 7.
3. The guest can be locked to a set of CPUs by appending the cpuset attribute to the configuration file.
   a. While the guest is offline, open the configuration file with virsh edit.
   b. Locate where the guest's virtual CPU count is specified. Find the vcpus element.
<vcpus>4</vcpus>
The guest in this example has four CPUs.
   c. Add a cpuset attribute with the CPU numbers for the relevant NUMA cell.
<vcpus cpuset='4-7'>4</vcpus>
4. The virsh vcpuinfo output (the yyyyyyyy value of CPU Affinity) shows that the guest can presently run on any CPU. To lock the virtual CPUs to the second NUMA node (CPUs four to seven), run the following commands:
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7
Information from the KVM processes can also confirm that the guest is now running on the second NUMA node.
# grep pid /var/run/libvirt/qemu/guest1.xml
<domstatus state='running' pid='4907'>
# grep Cpus_allowed_list /proc/4907/task/*/status
/proc/4907/task/4916/status:Cpus_allowed_list: 4
/proc/4907/task/4917/status:Cpus_allowed_list: 5
/proc/4907/task/4918/status:Cpus_allowed_list: 6
/proc/4907/task/4919/status:Cpus_allowed_list: 7
The script above can also be implemented as a script file as seen below.
#!/usr/bin/env python
# -*- mode: python; -*-
print ""
print "New UUID:"
import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())
print "New MAC:"
import virtinst.util ; print virtinst.util.randomMAC()
print ""
To make this change permanent, remove swap lines from the /etc/fstab file and restart the host system.
2. Verify that vsftpd is not enabled using the chkconfig --list vsftpd command:
$ chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:off   4:off   5:off   6:off
3. Run the chkconfig --levels 345 vsftpd on command to start vsftpd automatically for run levels 3, 4 and 5.
4. Use the chkconfig --list vsftpd command to verify that the vsftpd daemon is enabled to start during system boot:
$ chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:on    4:on    5:on    6:off
5. Use the service vsftpd start command to start the vsftpd service:
$ service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]
1. Edit the ~/.vnc/xstartup file to start a GNOME session whenever vncserver is started. The first time you run the vncserver script it will ask you for a password to use for your VNC session.
2. A sample xstartup file:
#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
#xsetroot -solid grey
#vncconfig -iconic &
#xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#twm &
if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
    eval `dbus-launch --sh-syntax --exit-with-session`
    echo "D-BUS per-session daemon address is: \
    $DBUS_SESSION_BUS_ADDRESS"
fi
exec gnome-session
Procedure 23.1. Workaround for Red Hat Enterprise Linux 6
1. Install the acpid package
   The acpid service listens for and processes ACPI requests. Log into the guest and install the acpid package on the guest:
# yum install acpid
2. Enable the acpid service
   Set the acpid service to start during the guest boot sequence and start the service:
# chkconfig acpid on
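Because chkconfig only affects future boots, the service can also be started immediately with the standard service tool (a straightforward assumption about the workflow rather than an extra requirement):

# service acpid start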
The guest is now configured to shut down when the virsh shutdown command is used.
<clock>
The clock element is used to determine how the guest clock is synchronized with the host clock. The clock element has the following attributes:

offset
    Determines how the guest clock is offset from the host clock. The offset attribute has the following possible values:

    Table 23.1. Offset attribute values
    Value       Description
    utc         The guest clock will be synchronized to UTC when booted.
    localtime   The guest clock will be synchronized to the host's configured timezone when booted, if any.
    timezone    The guest clock will be synchronized to a given timezone, specified by the timezone attribute.
    variable    The guest clock will be synchronized to an arbitrary offset from UTC. The delta relative to UTC is specified in seconds, using the adjustment attribute. The guest is free to adjust the Real Time Clock (RTC) over time and expect that it will be honored following the next reboot. This is in contrast to utc mode, where any RTC adjustments are lost at each reboot.
timezone
    The timezone to which the guest clock is to be synchronized.

adjustment
    The delta for guest clock synchronization. In seconds, relative to UTC.

Example 23.1. Always synchronize to UTC
<clock offset="utc" />
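For comparison (a sketch; the adjustment value is an arbitrary example), a guest clock kept at a fixed offset from UTC would use the variable mode described in Table 23.1:

<clock offset="variable" adjustment="3600" />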
<timer>
A clock element can have zero or more timer elements as children. The timer element specifies a time source used for guest clock synchronization. The timer element has the following attributes. Only the name is required; all other attributes are optional.

name
    The name of the time source to use.

    Table 23.2. name attribute values
    Value      Description
    platform   The master virtual time source which may be used to drive the policy of other time sources.
    pit        Programmable Interval Timer - a timer with periodic interrupts.
    rtc        Real Time Clock - a continuously running timer with periodic interrupts.
    hpet       High Precision Event Timer - multiple timers with periodic interrupts.
    tsc        Time Stamp Counter - counts the number of ticks since reset, no interrupts.
wallclock
    Specifies whether the wallclock should track host or guest time. Only valid for a name value of platform or rtc.

    Table 23.3. wallclock attribute values
    Value   Description
    host    RTC wallclock always tracks host time.
    guest   RTC wallclock always tracks guest time.

tickpolicy
    The policy used to pass ticks on to the guest.
Table 23.4. tickpolicy attribute values
Value     Description
none      Continue to deliver at normal rate (i.e. ticks are delayed).
catchup   Deliver at a higher rate to catch up.
merge     Ticks merged into one single tick.
discard   All missed ticks are discarded.
frequency
    Used to set a fixed frequency, measured in Hz. This attribute is only relevant for a name value of tsc. All other timers operate at a fixed frequency (pit, rtc), or at a frequency fully controlled by the guest (hpet).

mode
    Determines how the time source is exposed to the guest. This attribute is only relevant for a name value of tsc. All other timers are always emulated.

    Table 23.5. mode attribute values
    Value      Description
    auto       Native if safe, otherwise emulated.
    native     Always native.
    emulate    Always emulate.
    paravirt   Native + para-virtualized.
present
    Used to override the default set of timers visible to the guest. For example, to enable or disable the HPET.

    Table 23.6. present attribute values
    Value   Description
    yes     Force this timer to be visible to the guest.
    no      Force this timer to not be visible to the guest.
Example 23.5. Clock synchronizing to local time with RTC and PIT timers, and the HPET timer disabled
<clock offset="localtime">
  <timer name="rtc" tickpolicy="catchup" wallclock="guest" />
  <timer name="pit" tickpolicy="none" />
  <timer name="hpet" present="no" />
</clock>
Chapter 24.
Storage concepts
This chapter introduces the concepts used for describing and managing storage devices.
Local storage
Local storage is directly attached to the host server. Local storage includes local directories, directly attached disks, and LVM volume groups on local storage devices.
Networked storage
Networked storage covers storage devices shared over a network using standard protocols. Networked storage includes shared storage devices using Fibre Channel, iSCSI, NFS, GFS2, and SCSI RDMA protocols. Networked storage is a requirement for migrating virtualized guests between hosts.
Storage volumes
Storage pools are divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware.
24.2. Volumes
Storage pools are divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware.
Referencing volumes
To reference a specific volume, three approaches are possible: The name of the volume and the storage pool A volume may be referred to by name, along with an identifier for the storage pool it belongs in. On the virsh command line, this takes the form --pool storage_pool volume_name. For example, a volume named firstimage in the guest_images pool.
# virsh vol-info --pool guest_images firstimage
Name:           firstimage
Type:           block
Capacity:       20.00 GB
Allocation:     20.00 GB
The full path to the storage on the host system A volume may also be referred to by its full path on the file system. When using this approach, a pool identifier does not need to be included. For example, a volume named secondimage.img, visible to the host system as /images/ secondimage.img. The image can be referred to as /images/secondimage.img.
# virsh vol-info /images/secondimage.img
Name:           secondimage.img
Type:           file
Capacity:       20.00 GB
Allocation:     136.00 KB
The unique volume key
When a volume is first created in the virtualization system, a unique identifier is generated and assigned to it. The unique identifier is termed the volume key. The format of this volume key varies depending upon the storage used. When used with block based storage such as LVM, the volume key may follow this format:
c3pKz4-qPVc-Xf7M-7WNM-WJc8-qSiz-mtvpGn
When used with file based storage, the volume key may instead be a copy of the full path to the volume storage.
/images/secondimage.img
virsh provides commands for converting between a volume name, volume path, or volume key:

vol-name
    Returns the volume name when provided with a volume path or volume key.
# virsh vol-name /dev/guest_images/firstimage
firstimage
# virsh vol-name Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
vol-path
    Returns the volume path when provided with a volume key, or a storage pool identifier and volume name.
# virsh vol-path Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
/dev/guest_images/firstimage
# virsh vol-path --pool guest_images firstimage
/dev/guest_images/firstimage
vol-key
    Returns the volume key when provided with a volume path, or a storage pool identifier and volume name.
# virsh vol-key /dev/guest_images/firstimage
Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
# virsh vol-key --pool guest_images firstimage
Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
Chapter 25.
Storage pools
25.1. Creating storage pools
25.1.1. Dedicated storage device-based storage pools
This section covers dedicating storage devices to virtualized guests.
Warning
Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device. Back up the storage device before commencing the following procedure.
1. Create a GPT disk label on the disk
   The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions, up to 128 partitions, on each device. GPT partition tables can store partition data for far more partitions than the msdos partition table.
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) quit
Information: You may need to update /etc/fstab.
#
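The new label can be confirmed before continuing (a minimal sketch; the print subcommand simply reports the partition table, which should now show gpt):

# parted /dev/sdb print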
2. Create the storage pool configuration file
   Create a temporary XML text file containing the storage pool information required for the new device. The file must be in the format shown below, and contain the following fields:

   <name>guest_images_disk</name>
       The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
   <device path='/dev/sdb'/>
       The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb.

   <target> <path>/dev</path>
       The file system target parameter with the path sub-parameter determines the location on the host file system to attach volumes created with this storage pool. For example, sdb1, sdb2, sdb3. Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3.

   <format type='gpt'/>
       The format parameter specifies the partition table type. This example uses gpt, to match the GPT disk label type created in the previous step.

   Create the XML file for the storage pool device with a text editor.

   Example 25.1. Dedicated storage device storage pool
<pool type='disk'>
  <name>guest_images_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
3. Attach the device
   Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step.
# virsh pool-define ~/guest_images_disk.xml
Pool guest_images_disk defined from /root/guest_images_disk.xml
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    inactive   no
4. Start the storage pool
   Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command.
# virsh pool-start guest_images_disk
Pool guest_images_disk started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    active     no
5. Turn on autostart
   Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_disk
Pool guest_images_disk marked as autostarted
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    active     yes
6. Verify the storage pool configuration
   Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
# virsh pool-info guest_images_disk
Name:           guest_images_disk
UUID:           551a67c8-5f2a-012c-3844-df29b167431c
State:          running
Capacity:       465.76 GB
Allocation:     0.00
Available:      465.76 GB
# ls -la /dev/sdb
brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
7. Optional: Remove the temporary configuration file
   Remove the temporary storage pool XML configuration file if it is not needed.
# rm ~/guest_images_disk.xml
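At this point, volumes can be allocated from the new pool. As an illustration (the volume name and size are arbitrary examples), virsh can carve a volume out of the pool and then list it:

# virsh vol-create-as guest_images_disk volume1 8G
# virsh vol-list guest_images_disk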
2. Create the new storage pool
   a. Add a new pool (part 1)
      Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
      Choose a Name for the storage pool. This example uses the name guest_images_fs. Change the Type to fs: Pre-Formatted Block Device.
Press the Forward button to continue.
   b. Add a new pool (part 2)
      Change the Target Path, Format, and Source Path fields.
Target Path
    Enter the location to mount the source device for the storage pool in the Target Path field. If the location does not already exist, virt-manager will create the directory.

Format
    Select a format from the Format list. The device is formatted with the selected format. This example uses the ext4 file system, the default Red Hat Enterprise Linux file system.

Source Path
    Enter the device in the Source Path field. This example uses the /dev/sdc1 device.

Verify the details and press the Finish button to create the storage pool.

3. Verify the new storage pool
   The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 458.20 GB Free in this example. Verify the State field reports the new storage pool as Active.
   Select the storage pool. In the Autostart field, click the On Boot checkbox. This will make sure the storage device starts whenever the libvirtd service starts.
The storage pool is now created. Close the Host Details window.
Security warning
Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb). Guests should not be given write access to whole disks or block devices. Only use this method to assign partitions (for example, /dev/sdb1) to storage pools.
Procedure 25.2. Creating pre-formatted block device storage pools using virsh
1. Create the storage pool definition
   Use the virsh pool-define-as command to create a new storage pool definition. There are three options that must be provided to define a pre-formatted disk as a storage pool:

   Partition name
       The name parameter determines the name of the storage pool. This example uses the name guest_images_fs.

   device
       The device parameter with the path attribute specifies the device path of the storage device. This example uses the partition /dev/sdc1.

   mountpoint
       The mountpoint on the local file system where the formatted device will be mounted. If the mount point directory does not exist, the virsh command can create the directory.
The directory /guest_images is used in this example.
# virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - "/guest_images"
Pool guest_images_fs defined
The new pool and mount points are now created. 2. Verify the new pool List the present storage pools.
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_fs      inactive   no
3. Create the mount point
   Use the virsh pool-build command to create a mount point for a pre-formatted file system storage pool.
# virsh pool-build guest_images_fs
Pool guest_images_fs built
# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_fs      inactive   no
4. Start the storage pool
   Use the virsh pool-start command to mount the file system onto the mount point and make the pool available for use.
# virsh pool-start guest_images_fs
Pool guest_images_fs started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_fs      active     no
5. Turn on autostart
   By default, a storage pool defined with virsh is not set to automatically start each time libvirtd starts. Turn on automatic start with the virsh pool-autostart command. The storage pool is now automatically started each time libvirtd starts.
# virsh pool-autostart guest_images_fs
Pool guest_images_fs marked as autostarted
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
6. Verify the storage pool
   Verify the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running. Verify there is a "lost+found" directory in the mount point on the file system, indicating the device is mounted.
# virsh pool-info guest_images_fs
Name:           guest_images_fs
UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
State:          running
Capacity:       458.39 GB
Allocation:     197.91 MB
Available:      458.20 GB
# mount | grep /guest_images
/dev/sdc1 on /guest_images type ext4 (rw)
# ls -la /guest_images
total 24
drwxr-xr-x.  3 root root  4096 May 31 19:47 .
dr-xr-xr-x. 25 root root  4096 May 31 19:38 ..
drwx------.  2 root root 16384 May 31 14:18 lost+found
b. Set directory ownership
   Change the user and group ownership of the directory. The directory must be owned by the root user.
# chown root:root /guest_images
c.
d. Verify the changes
   Verify the permissions were modified. The output shows a correctly configured empty directory.
# ls -la /guest_images
2. Configure SELinux file contexts
   Configure the correct SELinux context for the new directory.
# semanage fcontext -a -t virt_image_t /guest_images
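semanage only records the context rule; to apply it to the existing directory immediately (a small workflow assumption rather than an instruction from this guide), restorecon can be run against the path:

# restorecon -R -v /guest_images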
3. Open the storage pool settings
   a. In the virt-manager graphical interface, select the host from the main window.
      Open the Edit menu and select Host Details.
4. Create the new storage pool
   a. Add a new pool (part 1)
      Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
      Choose a Name for the storage pool. This example uses the name guest_images_dir. Change the Type to dir: Filesystem Directory.
Press the Forward button to continue.
   b. Add a new pool (part 2)
      Change the Target Path field. This example uses /guest_images.
Verify the details and press the Finish button to create the storage pool.

5. Verify the new storage pool
   The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 36.41 GB Free in this example. Verify the State field reports the new storage pool as Active.
   Select the storage pool. In the Autostart field, click the On Boot checkbox. This will make sure the storage pool starts whenever the libvirtd service starts.
The storage pool is now created. Close the Host Details window.
2. Verify the storage pool is listed
   Verify the storage pool object is created correctly and the state reports it as inactive.
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     inactive   no
3. Create the local directory
   Use the virsh pool-build command to build the directory-based storage pool. The virsh pool-build command sets the required permissions and SELinux settings for the directory and creates the directory if it does not exist.
# virsh pool-build guest_images_dir
Pool guest_images_dir built
# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     inactive   no
4. Start the storage pool
   Use the virsh command pool-start for this. pool-start enables a directory storage pool, allowing it to be used for volumes and guests.
# virsh pool-start guest_images_dir
Pool guest_images_dir started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     active     no
5. Turn on autostart
   Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_dir
Pool guest_images_dir marked as autostarted
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     active     yes
6. Verify the storage pool configuration
   Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
# virsh pool-info guest_images_dir
Name:           guest_images_dir
UUID:           779081bf-7a82-107b-2874-a19a9c51d24c
State:          running
Capacity:       49.22 GB
Allocation:     12.80 GB
Available:      36.41 GB
# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
#
LVM
Please refer to Chapter 3 of the Red Hat Enterprise Linux Storage Administration Guide for more details on LVM: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-lvm.html
Warning
LVM-based storage pools require a full disk partition. This partition will be formatted and all data presently stored on the disk device will be erased. Back up the storage device before commencing the following procedure.
Warning
This procedure will remove all data from the selected storage device.
a. Create a new partition
   Use the fdisk command to create a new disk partition from the command line. The following example creates a new partition that uses the entire disk on the storage device /dev/sdb.
# fdisk /dev/sdb
Command (m for help):
c. Choose an available partition number. In this example the first partition is chosen by entering 1.
d.
e. Select the size of the partition. In this example the entire disk is allocated by pressing Enter.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
f.
g. Choose the partition you created in the previous steps. In this example, the partition number is 1.
Partition number (1-4): 1
h.
i.
j. Create a new LVM volume group
   Create a new LVM volume group with the vgcreate command. This example creates a volume group named guest_images_lvm.
# vgcreate guest_images_lvm /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
  Volume group "guest_images_lvm" successfully created
The new LVM volume group, guest_images_lvm, can now be used for an LVM-based storage pool.

2. Open the storage pool settings
   a. In the virt-manager graphical interface, select the host from the main window.
      Open the Edit menu and select Host Details.
3. Create the new storage pool
   a. Start the Wizard
      Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. We use guest_images_lvm for this example. Then change the Type to logical: LVM Volume Group.
Press the Forward button to continue.
   b. Add a new pool (part 2)
      Fill in the Target Path and Source Path fields, then tick the Build Pool check box.
      Use the Target Path field to either select an existing LVM volume group or as the name for a new volume group. The default format is /dev/storage_pool_name. This example uses a new volume group named /dev/guest_images_lvm.
      The Source Path field is optional if an existing LVM volume group is used in the Target Path. For new LVM volume groups, input the location of a storage device in the Source Path field. This example uses a blank partition /dev/sdc.
      The Build Pool checkbox instructs virt-manager to create a new LVM volume group. If you are using an existing volume group you should not select the Build Pool checkbox. This example is using a blank partition to create a new volume group so the Build Pool checkbox must be selected.
Verify the details and press the Finish button to format the LVM volume group and create the storage pool.
   c. Confirm the device to be formatted
      A warning message appears.
Press the Yes button to proceed to erase all data on the storage device and create the storage pool.

4. Verify the new storage pool
   The new storage pool will appear in the list on the left after a few seconds. Verify the details are what you expect, 465.76 GB Free in our example. Also verify the State field reports the new storage pool as Active.
   It is generally a good idea to have the Autostart check box enabled, to ensure the storage pool starts automatically with libvirtd.
http://en.wikipedia.org/wiki/ISCSI
Procedure 25.3. Creating an iSCSI target
1. Install the required packages
   Install the scsi-target-utils package and all dependencies:
# yum install scsi-target-utils
2. Start the tgtd service
   The tgtd service hosts SCSI targets and exports them using the iSCSI protocol. Start the tgtd service and make the service persistent after restarting with the chkconfig command.
# service tgtd start
# chkconfig tgtd on
3. Optional: Create LVM volumes
   LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be beneficial for virtualized guests. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting virtualized guests with iSCSI.
   a. Create the RAID array
      Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux Deployment Guide.
   b. Create the LVM volume group
      Create a volume group named virtstore with the vgcreate command.
# vgcreate virtstore /dev/md1
   c. Create an LVM logical volume
      Create a logical volume named virtimage1 on the virtstore volume group with a size of 20GB using the lvcreate command.
# lvcreate --size 20G -n virtimage1 virtstore
The new logical volume, virtimage1, is ready to use for iSCSI.

4. Optional: Create file-based images
   File-based storage is sufficient for testing but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image named virtimage2.img for an iSCSI target.
   a. Create a new directory for the image
      Create a new directory to store the image. The directory must have the correct SELinux contexts.
# mkdir -p /var/lib/tgtd/virtualization
   b. Create the image file
      Create an image named virtimage2.img with a size of 10GB.
# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
   c. Configure SELinux file contexts
      Configure the correct SELinux context for the new image and directory.
# restorecon -R /var/lib/tgtd
The new file-based image, virtimage2.img, is ready to use for iSCSI.

5. Create targets
   Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format:
iqn.yyyy-mm.reversed domain name:optional identifier text
Where: yyyy-mm represents the year and month the device was started (for example: 2010-05); reversed domain name is the host's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1); and optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware.
This example creates iSCSI targets for the two types of images created in the optional steps on server1.example.com with an optional identifier trial. Add the following to the /etc/tgt/targets.conf file.
<target iqn.2010-05.com.example.server1:trial>
   backing-store /dev/virtstore/virtimage1                     #LUN 1
   backing-store /var/lib/tgtd/virtualization/virtimage2.img   #LUN 2
   write-cache off
</target>
Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI. The driver uses iSCSI by default.
Important
This example creates a globally accessible target without access control. Refer to the scsi-target-utils documentation for information on implementing secure access.
6. Restart the tgtd service
   Restart the tgtd service to reload the configuration changes.
# service tgtd restart
7. iptables configuration
   Open port 3260 for iSCSI access with iptables.
# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
8. Verify the new targets
   View the new targets to ensure the setup was successful with the tgt-admin --show command.
# tgt-admin --show
Target 1: iqn.2010-05.com.example.server1:trial
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 20000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/virtstore/virtimage1
        LUN: 2
            Type: disk
            SCSI ID: IET 00010002
            SCSI SN: beaf12
            Size: 10000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
    Account information:
    ACL information:
        ALL
Security warning
The ACL list is set to all. This allows all systems on the local network to access this device. It is recommended to set host access ACLs for production environments.
9. Optional: Test discovery
   Test whether the new iSCSI device is discoverable.
# iscsiadm --mode discovery --type sendtargets --portal server1.example.com
127.0.0.1:3260,1 iqn.2010-05.com.example.server1:trial1
10. Optional: Test attaching the device
    Attach the new device (iqn.2010-05.com.example.server1:trial1) to determine whether the device can be attached.
# iscsiadm -d2 -m node --login
iscsiadm: Max file limits 1024 1024
Logging in to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260]
Login to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] successful.
c. Open the Edit menu and select Host Details.
d. Click on the Storage tab of the Host Details window.
2. Add a new pool (part 1)
   Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a name for the storage pool, change the Type to iscsi, and press Forward to continue.
3. Add a new pool (part 2)
   Enter the target path for the device, the host name of the target and the source path (the IQN).
   The Format option is not available as formatting is handled by the guests. It is not advised to edit the Target Path. The default target path value, /dev/disk/by-path/, adds the drive path to that directory. The target path should be the same on all hosts for migration.
   Enter the hostname or IP address of the iSCSI target. This example uses server1.example.com.
   Enter the source path, the IQN for the iSCSI target. This example uses iqn.2010-05.com.example.server1:trial1.
1. With a text editor, create an XML file for the iSCSI storage pool. This example uses an XML definition named trial1.xml. The path attribute of the <device> element must contain the IQN for the iSCSI server, for example <device path='iqn.2010-05.com.example.server1:trial1'/>.

<pool type='iscsi'>
  <name>trial1</name>
  <uuid>afcc5367-6770-e151-bcb3-847bc36c5e28</uuid>
  <source>
    <host name='server1.example.com'/>
    <device path='iqn.2010-05.com.example.server1:trial1'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
Use the pool-define command to define the storage pool but not start it.
# virsh pool-define trial1.xml Pool trial1 defined
2. Alternative step: Use pool-define-as to define the pool from the command line. Storage pool definitions can be created with the virsh command line tool. Creating storage pools with virsh is useful for systems administrators using scripts to create multiple storage pools. The virsh pool-define-as command has several parameters which are accepted in the following format:
virsh pool-define-as name type source-host source-path source-dev source-name target
The type, iscsi, defines this pool as an iSCSI-based storage pool. The name parameter must be unique and sets the name for the storage pool. The source-host and source-path parameters are the hostname and iSCSI IQN respectively. The source-dev and source-name parameters are not required for iSCSI-based pools; use a - character to leave each field blank. The target parameter defines the location for mounting the iSCSI device on the host. The example below creates the same iSCSI-based storage pool as the previous step.

# virsh pool-define-as trial1 iscsi server1.example.com iqn.2010-05.com.example.server1:trial1 - - /dev/disk/by-path
Pool trial1 defined
3. Verify the storage pool is listed. Verify the storage pool object is created correctly and that the state reports as inactive.

# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
trial1               inactive   no

4. Start the storage pool. Use the virsh pool-start command. pool-start enables the storage pool, allowing it to be used for volumes and guests.

# virsh pool-start trial1
Pool trial1 started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
trial1               active     no
5. Turn on autostart. Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart trial1 Pool trial1 marked as autostarted
6. Verify the storage pool configuration. Verify the storage pool was created correctly, the sizes are reported correctly, and the state reports as running.

# virsh pool-info trial1
Name:           trial1
UUID:           afcc5367-6770-e151-bcb3-847bc36c5e28
State:          running
Persistent:     unknown
Autostart:      yes
Capacity:       100.31 GB
Allocation:     0.00
Available:      100.31 GB
c. Open the Edit menu and select Host Details.
d. Click on the Storage tab of the Host Details window.

2. Create a new pool (part 1). Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a name for the storage pool and press Forward to continue.
3. Create a new pool (part 2). Enter the target path for the device, the hostname and the NFS share path. Set the Format option to NFS or auto (to detect the type). The target path must be identical on all hosts for migration.

Enter the hostname or IP address of the NFS server. This example uses server1.example.com.

Enter the NFS path. This example uses /nfstrial.
Chapter 26.
Volumes
26.1. Creating volumes
This section shows how to create disk volumes inside a block based storage pool.
# virsh vol-create-as guest_images_disk volume1 8G
Vol volume1 created
# virsh vol-create-as guest_images_disk volume2 8G
Vol volume2 created
# virsh vol-create-as guest_images_disk volume3 8G
Vol volume3 created

# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
volume1              /dev/sdb1
volume2              /dev/sdb2
volume3              /dev/sdb3

# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 2      17.4kB  8590MB  8590MB               primary
 3      8590MB  17.2GB  8590MB               primary
 1      21.5GB  30.1GB  8590MB               primary
b. Non-sparse, pre-allocated files are recommended for file-based storage images. To create a non-sparse file, execute:
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M count=4096
Both commands create a 4GB file which can be used as additional storage for a virtualized guest.

2. Dump the configuration for the guest. In this example the guest is called Guest1 and the file is saved in the user's home directory.
# virsh dumpxml Guest1 > ~/Guest1.xml
3. Open the configuration file (Guest1.xml in this example) in a text editor. Find the <disk> elements; these elements describe storage devices. The following is an example disk element:

<disk type='file' device='disk'>
  <driver name='virtio' cache='none'/>
  <source file='/var/lib/libvirt/images/Guest1.img'/>
  <target dev='sda'/>
</disk>
4. Add the additional storage by duplicating or writing a new <disk> element. Ensure you specify a device name for the virtual block device attributes. These attributes must be unique for each guest configuration file. The following example is a configuration file section which contains an additional file-based storage container named FileName.img.

<disk type='file' device='disk'>
  <driver name='virtio' cache='none'/>
  <source file='/var/lib/libvirt/images/Guest1.img'/>
  <target dev='sda'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/FileName.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>

5. Redefine the guest from the updated configuration file, then start the guest.

# virsh define ~/Guest1.xml
# virsh start Guest1
6. The following steps are Linux guest specific. Other operating systems handle new storage devices in different ways. For other systems, refer to that operating system's documentation.

The guest now uses the file FileName.img as the device called /dev/sdb. This device requires formatting from the guest. On the guest, partition the device into one primary partition for the entire device, then format the device.

a. Press n for a new partition.
# fdisk /dev/sdb Command (m for help):
b. Press p for a primary partition.

c. Choose an available partition number. In this example the first partition is chosen by entering 1.
Partition number (1-4): 1
d. Press Enter to accept the default first cylinder.

e. Select the size of the partition. In this example the entire disk is allocated by pressing Enter.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
f. Set the type of the partition by pressing t.

g. Choose the partition you created in the previous steps. In this example, the partition number is 1.
Partition number (1-4): 1
h. Enter 83 for a Linux partition.

i. Write the changes to disk and quit by pressing w.

j. Format the new partition on the guest, for example with mke2fs or mkfs.ext4.
7.
Replace the following values in the attach command (a hedged sketch of the full command follows this list):

myguest with the name of the guest.
/dev/sdb1 with the device on the host to add.
sdc with the location on the guest where the device should be added. It must be an unused device name. Use the sd* notation for Windows guests as well; the guest will recognize the device correctly.
Only include the --mode readonly parameter if the device should be read-only to the guest.

Additionally, there are optional arguments that may be added:

Append the --type hdd parameter to the command for CD-ROM or DVD devices.
Append the --type floppy parameter to the command for floppy devices.

4. The guest now has a new hard disk device called /dev/sdb on Linux, or D: drive, or similar, on Windows. This device may require formatting.
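The attach command itself did not survive in this extract; the following is a minimal sketch, assuming the virsh attach-disk command and the example names used above (myguest, /dev/sdb1, sdc):

# virsh attach-disk myguest /dev/sdb1 sdc --mode readonly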
Chapter 27. Miscellaneous storage topics
This example uses a guest created with virt-manager running a fully virtualized Fedora installation with an image located in /var/lib/libvirt/images/Fedora.img. 1. Create the XML configuration file for your guest image using the virsh command on a running guest.
# virsh dumpxml Fedora > Fedora.xml
This saves the configuration settings as an XML file which can be edited to customize the operations and devices used by the guest. For more information on using the virsh XML configuration files, refer to Chapter 34, Creating custom libvirt scripts. 2. Create a floppy disk image for the guest.
# dd if=/dev/zero of=/var/lib/libvirt/images/Fedora-floppy.img bs=512 count=2880
3. Add the content below, changing where appropriate, to your guest's configuration XML file. This example is an emulated floppy device using a file-based image.

<disk type='file' device='floppy'>
  <source file='/var/lib/libvirt/images/Fedora-floppy.img'/>
  <target dev='fda'/>
</disk>
4. Force the guest to stop. To shut down the guest gracefully, use the virsh shutdown command instead.
# virsh destroy Fedora
5. Start the guest again from the updated XML configuration file (see the sketch below).
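The command for this step is not present in this extract; a minimal sketch, assuming the edited Fedora.xml from step 1 is used to create the guest:

# virsh create Fedora.xml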
The floppy device is now available in the guest and stored as an image file on the host.
This sets the default options for scsi_id, ensuring returned UUIDs contain no spaces. The IET iSCSI target otherwise returns spaces in UUIDs, which can cause problems.

2. To display the UUID for a given device, run the scsi_id --whitelisted --replace-whitespace --device=/dev/sd* command. For example:
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc 1IET_00010001
The output may vary from the example above. The output in this example displays the UUID of the device /dev/sdc.

3. Verify the UUID output from the scsi_id --whitelisted --replace-whitespace --device=/dev/sd* command is correct and as expected.

4. Create a rule to name the device. Create a file named 20-names.rules in the /etc/udev/rules.d directory. Add new rules to this file. All rules are added to the same file using the same format. Rules follow this format:
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="UUID", NAME="devicename"
Replace UUID and devicename with the UUID retrieved above, and a name for the device. This is an example of the rule above for three example iSCSI LUNs:
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="1IET_00010001", NAME="rack4row16lun1"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="1IET_00010002", NAME="rack4row16lun2"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="1IET_00010003", NAME="rack4row16lun3"
The udev daemon now searches all devices named /dev/sd* for a matching UUID in the rules. When a matching device is connected to the system the device is assigned the name from the rule. For example:
# ls -la /dev/rack4row16*
brw-rw---- 1 root disk 8, 18 May 25 23:35 /dev/rack4row16lun1
brw-rw---- 1 root disk 8, 34 May 25 23:35 /dev/rack4row16lun2
brw-rw---- 1 root disk 8, 50 May 25 23:35 /dev/rack4row16lun3
5. Networked storage devices with configured rules now have persistent names on all hosts where the files were updated. This means you can migrate guests between hosts using the shared storage, and the guests can access the storage devices in their configuration files.
2. Create the multipath configuration file, /etc/multipath.conf. In it create a defaults section, and disable the user_friendly_names option unless you have a specific need for it. It is also a good idea to configure the default arguments for the getuid_callout option. This is generally a useful start:

defaults {
    user_friendly_names no
    getuid_callout "/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
3. Below the defaults section add a multipaths section (note the plural spelling). In this section add each of the WWIDs identified from the scsi_id command above, with an alias for each. For example (the WWIDs and aliases here match the multipath -ll output shown later in this procedure):

multipaths {
    multipath {
        wwid    1IET_00010004
        alias   oramp1
    }
    multipath {
        wwid    1IET_00010005
        alias   oramp2
    }
    multipath {
        wwid    1IET_00010006
        alias   oramp3
    }
    multipath {
        wwid    1IET_00010007
        alias   oramp4
    }
}
Multipath devices are created in the /dev/mapper directory. The above example creates four devices named /dev/mapper/oramp1, /dev/mapper/oramp2, /dev/mapper/oramp3 and /dev/mapper/oramp4.

4. Enable the multipathd daemon to start at system boot.
# chkconfig multipathd on
# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
5. The mapping of these device WWIDs to their alias names will now persist across reboots. For example:

# ls -la /dev/mapper/oramp*
brw-rw---- 1 root disk 253, 6 May 26 00:17 /dev/mapper/oramp1
brw-rw---- 1 root disk 253, 7 May 26 00:17 /dev/mapper/oramp2
brw-rw---- 1 root disk 253, 8 May 26 00:17 /dev/mapper/oramp3
brw-rw---- 1 root disk 253, 9 May 26 00:17 /dev/mapper/oramp4
# multipath -ll
oramp1 (1IET_00010004) dm-6 IET,VIRTUAL-DISK
size=20.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 8:0:0:4 sde 8:64   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 9:0:0:4 sdbl 67:240 active ready running
oramp3 (1IET_00010006) dm-8 IET,VIRTUAL-DISK
size=20.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 8:0:0:6 sdg 8:96   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 9:0:0:6 sdbn 68:16  active ready running
oramp2 (1IET_00010005) dm-7 IET,VIRTUAL-DISK
size=20.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 8:0:0:5 sdf 8:80   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 9:0:0:5 sdbm 68:0   active ready running
oramp4 (1IET_00010007) dm-9 IET,VIRTUAL-DISK
size=20.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 8:0:0:7 sdh 8:112  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
Warning
Guests must be offline before their files can be read. Editing or reading files of an active guest is not possible and may cause data loss or damage.
Procedure 27.1. Accessing guest image data 1. Install the kpartx package.
# yum install kpartx
2. Use kpartx to list partition device mappings attached to a file-based storage image. This example uses an image file named guest1.img.

# kpartx -l /var/lib/libvirt/images/guest1.img
loop0p1 : 0 409600 /dev/loop0 63
loop0p2 : 0 10064717 /dev/loop0 409663

guest1 is a Linux guest. The first partition is the boot partition and the second partition is an EXT3 file system containing the root partition.

3. Add the partition mappings to the recognized devices in /dev/mapper/.
# kpartx -a /var/lib/libvirt/images/guest1.img
Test that the partition mapping worked. There should be new devices in the /dev/mapper/ directory:
# ls /dev/mapper/ loop0p1 loop0p2
The mappings for the image are named in the format loopXpY.
http://fedoraproject.org/wiki/EPEL
227
Chapter 27. Miscellaneous storage topics 4. Mount the loop device which to a directory. If required, create the directory. This example uses / mnt/guest1 for mounting the partition.
# mkdir /mnt/guest1 # mount /dev/mapper/loop0p1 /mnt/guest1 -o loop,ro
5. The files are now available for reading in the /mnt/guest1 directory. Read or copy the files.

6. Unmount the device so the guest image can be reused by the guest. If the device is mounted the guest cannot access the image and therefore cannot start.
# umount /mnt/guest1
7. Remove the partition mappings so the image can be used by the guest again, for example:
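A minimal sketch of this step, assuming the kpartx -d option is used to remove the mappings added earlier:

# kpartx -d /var/lib/libvirt/images/guest1.img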
2. In this example the LVM volumes are on a second partition. The volumes require a rescan with the vgscan command to find the new volume groups.

# vgscan
Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
3. Activate the volume group on the partition (called VolGroup00 by default) with the vgchange -ay command.

# vgchange -ay VolGroup00
2 logical volumes in volume group VolGroup00 now active.
4. Use the lvs command to display information about the new volumes. The volume names (the LV column) are required to mount the volumes.

# lvs
LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%
LogVol00 VolGroup00 -wi-a-   5.06G
LogVol01 VolGroup00 -wi-a- 800.00M
5. Mount the required logical volume. This example mounts the volume on /mnt/guestboot.
6. The files are now available for reading in the /mnt/guestboot directory. Read or copy the files.

7. Unmount the device so the guest image can be reused by the guest. If the device is mounted the guest cannot access the image and therefore cannot start.
# umount /mnt/guestboot
8. Deactivate the volume group so the mappings can be removed, for example with the vgchange -an command.

9. Disconnect the partition mappings from the image, for example with the kpartx -d command.
Chapter 28. N_Port ID Virtualization (NPIV)
Example:
admin> portCfgNPIVPort 0 1
QLogic Example
# ls /proc/scsi/qla2xxx
Emulex Example
# ls /proc/scsi/lpfc
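The command that creates the virtual HBA is not present in this extract; a minimal sketch, assuming the vport_create sysfs attribute on the parent HBA (host5) and mirroring the WWPN:WWNN pair used in the deletion example below:

# echo '1111222233334444:5555666677778888' > /sys/class/fc_host/host5/vport_create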
If the creation is successful, a new HBA appears in the system with the next available host number.
Note
The virtual HBAs can be destroyed with the following command:
# echo '1111222233334444:5555666677778888' > /sys/class/fc_host/host5/vport_delete
2. Gather parent HBA device data. Output the XML definition for each required HBA. This example uses the HBA pci_10df_fe00_scsi_host.

# virsh nodedev-dumpxml pci_10df_fe00_scsi_host
<device>
  <name>pci_10df_fe00_scsi_host</name>
  <parent>pci_10df_fe00</parent>
  <capability type='scsi_host'>
    <host>5</host>
    <capability type='fc_host'>
      <wwnn>20000000c9848140</wwnn>
      <wwpn>10000000c9848140</wwpn>
    </capability>
    <capability type='vport_ops' />
  </capability>
</device>
HBAs capable of creating virtual HBAs have a capability type='vport_ops' in the XML definition.

3. Create the XML definition for the virtual HBA. With the information gathered in the previous step, create an XML definition for the virtual HBA. This example uses a file named newHBA.xml.

<device>
  <parent>pci_10df_fe00_0_scsi_host</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwpn>1111222233334444</wwpn>
      <wwnn>5555666677778888</wwnn>
    </capability>
  </capability>
</device>

The <parent> element is the name of the parent HBA listed by the virsh nodedev-list command. The <wwpn> and <wwnn> elements are the WWPN and WWNN for the virtual HBA, respectively.
4. Create the virtual HBA. Create the virtual HBA with the virsh nodedev-create command using the file from the previous step.
# virsh nodedev-create newHBA.xml
Node device pci_10df_fe00_0_scsi_host_0_scsi_host created from newHBA.xml
The new virtual HBA should be detected and available to the host. The create command output gives you the node device name of the newly created device.
Part VI. Host virtualization tools

Virtualization commands, system tools, applications and additional systems reference
These chapters provide detailed descriptions of virtualization commands, system tools, and applications included in Red Hat Enterprise Linux 6. These chapters are designed for users requiring information on advanced functionality and other features.
Chapter 29. Managing guests with virsh
The following virsh command options manage guest and hypervisor resources:

Table 29.2. Resource management options
setmem: Sets the allocated memory for a guest. Refer to the virsh manpage for more details.
setmaxmem: Sets the maximum memory limit for a guest. Refer to the virsh manpage for more details.
setvcpus: Changes the number of virtual CPUs assigned to a guest. Refer to the virsh manpage for more details.
vcpuinfo: Displays virtual CPU information about a guest.
vcpupin: Controls the virtual CPU affinity of a guest.
domblkstat: Displays block device statistics for a running guest.
domifstat: Displays network interface statistics for a running guest.
attach-device: Attaches a device to a guest, using a device definition in an XML file.
attach-disk: Attaches a new disk device to a guest.
attach-interface: Attaches a new network interface to a guest.
update-device: Detaches a disk image from a guest's CD-ROM drive. See Attaching and updating a device with virsh for more details.
detach-device: Detaches a device from a guest; takes the same kind of XML descriptions as the attach-device command.
detach-disk: Detaches a disk device from a guest.
detach-interface: Detaches a network interface from a guest.
The virsh commands for managing and creating storage pools and volumes. For more information on using storage pools with virsh, refer to http://libvirt.org/formatstorage.html.

Table 29.3. Storage Pool options
find-storage-pool-sources: Returns the XML definition for all storage pools of a given type that could be found.
find-storage-pool-sources host port: Returns data on all storage pools of a given type that could be found as XML. If the host and port are provided, this command can be run remotely.
pool-autostart: Sets the storage pool to start at boot time.
pool-build: The pool-build command builds a defined pool. This command can format disks and create partitions.
pool-create: pool-create creates and starts a storage pool from the provided XML storage pool definition file.
pool-create-as name: Creates and starts a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-define: Creates a storage pool from an XML definition file but does not start the new storage pool.
pool-define-as name: Creates, but does not start, a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-destroy: Permanently destroys a storage pool in libvirt. The raw data contained in the storage pool is not changed and can be recovered with the pool-create command.
pool-delete: Destroys the storage resources used by a storage pool. This operation cannot be recovered. The storage pool still exists after this command but all data is deleted.
pool-dumpxml: Prints the XML definition for a storage pool.
pool-edit: Opens the XML definition file for a storage pool in the user's default text editor.
pool-info: Returns information about a storage pool.
pool-list: Lists storage pools known to libvirt. By default, pool-list lists pools in use by active guests. The --inactive parameter lists inactive pools and the --all parameter lists all pools.
pool-undefine: Deletes the definition for an inactive storage pool.
pool-uuid: Returns the UUID of the named pool.
pool-name: Prints a storage pool's name when provided the UUID of a storage pool.
pool-refresh: Refreshes the list of volumes contained in a storage pool.
pool-start: Starts a storage pool that is defined but inactive.

Table 29.4. Volume options
vol-create: Create a volume from an XML file.
vol-create-from: Create a volume using another volume as input.
vol-create-as: Create a volume from a set of arguments.
vol-clone: Clone a volume.
vol-delete: Delete a volume.
vol-wipe: Wipe a volume.
vol-dumpxml: Show volume information in XML.
vol-info: Show storage volume information.
vol-list: List volumes.
vol-pool: Returns the storage pool for a given volume key or path.
vol-path: Returns the volume path for a given volume name or key.
vol-name: Returns the volume name for a given volume key or path.
vol-key: Returns the volume key for a given volume name or path.
Table 29.5. Secret options
secret-define: Define or modify a secret from an XML file.
secret-dumpxml: Show secret attributes in XML.
secret-set-value: Set a secret value.
secret-get-value: Output a secret value.
secret-undefine: Undefine a secret.
secret-list: List secrets.

Table 29.6. Network filter options
nwfilter-define: Define or update a network filter from an XML file.
nwfilter-undefine: Undefine a network filter.
nwfilter-dumpxml: Show network filter information in XML.
nwfilter-list: List network filters.
nwfilter-edit: Edit XML configuration for a network filter.
This table contains virsh command options for snapshots:

Table 29.7. Snapshot options
snapshot-create: Create a snapshot.
snapshot-current: Get the current snapshot.
snapshot-delete: Delete a domain snapshot.
snapshot-dumpxml: Dump XML for a domain snapshot.
snapshot-list: List snapshots for a domain.
snapshot-revert: Revert a domain to a snapshot.
This table contains miscellaneous virsh commands:

Table 29.8. Miscellaneous options
version: Displays the version of virsh.
nodeinfo: Outputs information about the hypervisor.
# virsh attach-disk <GuestName> sample.iso hdc --type cdrom --mode readonly
Disk attached successfully
2. Create an XML file to update a specific device. To detach, remove a line for the source device:
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
  <alias name='ide0-1-0'/>
  <address type='drive' controller='0' bus='1' unit='0'/>
</disk>
Where {name} is the machine name (hostname) or URL of the hypervisor. To initiate a read-only connection, append the above command with --readonly.
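The connect command this paragraph refers to is not shown in this extract; a minimal sketch:

# virsh connect {name}

For example, virsh connect qemu:///system connects to the local KVM hypervisor.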
This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml:
# virsh dumpxml GuestID > guest.xml
This file guest.xml can recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests. Refer to Section 34.1, Using XML configuration files with virsh for more information on modifying files created with virsh dumpxml. An example of virsh dumpxml output:
# virsh dumpxml r5b2-mySQL01 <domain type='kvm' id='13'>
This opens a text editor. The default text editor is defined by the $EDITOR shell parameter (set to vi by default).
Suspending a guest
Suspend a guest with virsh:
# virsh suspend {domain-id, domain-name or domain-uuid}
When a guest is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest is suspended. This operation is immediate and the guest can be restarted with the resume (Resuming a guest) option.
Resuming a guest
Restore a suspended guest with virsh using the resume option:
# virsh resume {domain-id, domain-name or domain-uuid}
This operation is immediate and the guest parameters are preserved for suspend and resume operations.
Save a guest
Save the current state of a guest to a file using the virsh command:
# virsh save {domain-name, domain-id or domain-uuid} filename
This stops the guest you specify and saves the data to a file, which may take some time given the amount of memory in use by your guest. You can restore the state of the guest with the restore (Restore a guest) option. Save is similar to pause; instead of just pausing a guest, the present state of the guest is saved.
Restore a guest
Restore a guest previously saved with the virsh save command (Save a guest) using virsh:
# virsh restore filename
This restarts the saved guest, which may take some time. The guest's name and UUID are preserved, but a new id is allocated.

You can control the behavior of the guest on shutdown by modifying the on_shutdown parameter in the guest's configuration file.
Rebooting a guest
Reboot a guest using virsh command:
# virsh reboot {domain-id, domain-name or domain-uuid}
You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file.
This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.

Memory size:     1046528 kb
This displays the node information and the machines that support the virtualization process.
Note
The default editor is defined by the $VISUAL or $EDITOR environment variables, and default is vi.
Other options available include: the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and the --all option lists all guests. For example:
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed

The output from virsh list is categorized as one of the six states (listed below).

The running state refers to guests which are currently active on a CPU.

Guests listed as blocked are blocked, and are not running or runnable. This is caused by a guest waiting on I/O (a traditional wait state) or guests in a sleep mode.

The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.

The shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.

Domains in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed.

crashed guests have failed while running and are no longer running. This state can only occur if the guest has been configured not to restart on crash.
The domain-id parameter is the guest's ID number or name. The vcpu parameter denotes the number of virtualized CPUs allocated to the guest. The vcpu parameter must be provided. The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on.

The new count value cannot exceed the amount specified when the guest was created.

You must specify the count in kilobytes. The new count value cannot exceed the amount you specified when you created the guest. Values lower than 64 MB are unlikely to work with most guest operating systems. A higher maximum memory value does not affect an active guest. If the new value is lower, the available memory will shrink and the guest may crash.
The --live parameter is optional. Add the --live parameter for live migrations. The GuestName parameter represents the name of the guest which you want to migrate. The DestinationURL parameter is the URL or hostname of the destination system. The destination system requires: Red Hat Enterprise Linux 5.4 (ASYNC update 4) or newer, the same hypervisor version, and the libvirt service must be started. Once the command is entered you will be prompted for the root password of the destination system.
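The migration command these parameters describe is not shown in this extract; a minimal sketch, assuming a destination reachable over SSH (DestinationURL is a placeholder for the destination host):

# virsh migrate --live GuestName qemu+ssh://DestinationURL/system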
Other virsh commands used in managing virtual networks are:

virsh net-autostart network-name: autostarts a network specified as network-name.
virsh net-create XMLfile: generates and starts a new network using an existing XML file.
virsh net-define XMLfile: generates a new network device from an existing XML file without starting it.
virsh net-destroy network-name: destroys a network specified as network-name.
virsh net-name networkUUID: converts a specified networkUUID to a network name.
virsh net-uuid network-name: converts a specified network-name to a network UUID.
virsh net-start nameOfInactiveNetwork: starts an inactive network.
virsh net-undefine nameOfInactiveNetwork: removes the definition of an inactive network.
Chapter 30. Managing guests with the Virtual Machine Manager (virt-manager)
Figure 30.1. Starting virt-manager

Alternatively, virt-manager can be started remotely using ssh as demonstrated in the following command:

ssh -X host's address
[remotehost]# virt-manager
Using ssh to manage virtual machines and hosts is discussed further in Section 18.1, Remote management with SSH.
Figure 30.3. The virtual hardware details icon

Clicking the icon displays the virtual hardware details window.
Your local desktop can intercept key combinations (for example, Ctrl+Alt+F11) to prevent them from being sent to the guest machine. You can use virt-manager's sticky key capability to send these sequences. You must press any modifier key (Ctrl or Alt) 3 times and the key you specify gets treated as active until the next non-modifier key is pressed. You can then send Ctrl-Alt-F11 to the guest by entering the key sequence 'Ctrl Ctrl Ctrl Alt+F11'.

SPICE is an alternative to VNC available for Red Hat Enterprise Linux.

Figure 30.6. Add Connection

3. Enter the root password for the selected host when prompted.
A remote host is now connected and appears in the main virt-manager window.
Restoring a saved machine

1. From the File menu, select Restore a saved machine.

Figure 30.8. Restoring a virtual machine

2. The Restore Virtual Machine main window appears.
3. Navigate to the correct directory and select the saved session file.
4. Click Open.
The saved virtual system appears in the Virtual Machine Manager main window.
Displaying guest details

1. In the Virtual Machine Manager main window, highlight the virtual machine that you want to view.
2. From the Virtual Machine Manager Edit menu, select Virtual Machine Details.

Figure 30.11. Displaying the virtual machine details

On the Virtual Machine window, select Overview from the navigation pane on the left hand side. The Overview view shows a summary of configuration details for the virtualized guest.
Figure 30.12. Displaying guest details overview

3. Select Performance from the navigation pane on the left hand side. The Performance view shows a summary of guest performance, including CPU and Memory usage.
4. Select Processor from the navigation pane on the left hand side. The Processor view allows you to view or change the current processor allocation.

5. Select Memory from the navigation pane on the left hand side. The Memory view allows you to view or change the current memory allocation.
6. Each virtual disk attached to the virtual machine is displayed in the navigation pane. Click on a virtual disk to modify or remove it.

Figure 30.16. Displaying disk configuration

7. Each virtual network interface attached to the virtual machine is displayed in the navigation pane. Click on a virtual network interface to modify or remove it.
1. From the Edit menu, select Preferences.

2. From the Stats tab, specify the update time in seconds and the stats polling options.
1. From the View menu, select the Domain ID check box.

Figure 30.20. Viewing guest IDs

2. The Virtual Machine Manager lists the Domain IDs for all domains on your system.
1. From the View menu, select the Status check box.

Figure 30.22. Selecting a virtual machine's status

2. The Virtual Machine Manager lists the status of all virtual machines on your system.
1. From the View menu, select Graph, then the CPU Usage check box.

Figure 30.24. Selecting CPU usage

2. The Virtual Machine Manager shows a graph of CPU usage for all virtual machines on your system.
1. From the View menu, select Graph, then the Disk I/O check box.

Figure 30.26. Selecting Disk I/O

2. The Virtual Machine Manager shows a graph of Disk I/O for all virtual machines on your system.
1. From the View menu, select Graph, then the Network I/O check box.

Figure 30.28. Selecting Network I/O

2. The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system.
Displaying memory usage

1. From the View menu, select the Memory Usage check box.

Figure 30.30. Selecting Memory Usage

2. The Virtual Machine Manager lists the percentage of memory in use (in megabytes) for all virtual machines on your system.
Chapter 31. Guest disk access with offline tools

Warning

You must never use these tools to write to a guest or disk image which is attached to a running virtual machine, not even to open such a disk image in write mode. Doing so will result in disk corruption of the guest. The tools try to prevent you from doing this; however, they do not catch all cases. If there is any suspicion that a guest might be running, it is strongly recommended that the tools not be used, or at least always use the tools in read-only mode.
31.2. Terminology
This section explains the terms used throughout this chapter.

libguestfs (GUEST FileSystem LIBrary) - the underlying C library that provides the basic functionality for opening disk images, reading and writing files and so on. You can write C programs directly to this API, but it is quite low level.

guestfish (GUEST Filesystem Interactive SHell) is an interactive shell that you can use from the command line or from shell scripts. It exposes all of the functionality of the libguestfs API.

Various virt tools are built on top of libguestfs, and these provide a way to perform specific single tasks from the command line. Tools include virt-df, virt-rescue, virt-resize and virt-edit.

hivex and Augeas are libraries for editing the Windows Registry and Linux configuration files respectively. Although these are separate from libguestfs, much of the value of libguestfs comes from the combination of these tools.

guestmount is an interface between libguestfs and FUSE. It is primarily used to mount file systems from disk images on your host. This functionality is not necessary, but can be useful.
31.3. Installation
To install libguestfs, guestfish, the libguestfs tools, guestmount and support for Windows guests, run the following command:
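The install command itself is not present in this extract; a minimal sketch, assuming the standard Red Hat Enterprise Linux 6 package names (these names are an assumption):

# yum install libguestfs guestfish libguestfs-tools libguestfs-mount libguestfs-winsupport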
To install every libguestfs-related package including the language bindings, run the following command:
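Again the command is missing from this extract; one common approach, assuming a wildcard package match is acceptable:

# yum install '*guestf*'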
--ro means that the disk image is opened read-only. This mode is always safe but does not allow write access. Only omit this option when you are certain that the guest is not running, or the disk image is not attached to a live guest. It is not possible to use libguestfs to edit a live guest, and attempting to do so will result in irreversible disk corruption.

/path/to/disk/image is the path to the disk. This can be a file, a host logical volume (such as /dev/VG/LV), a host device (/dev/cdrom) or a SAN LUN (/dev/sdf3).
Note
libguestfs and guestfish do not require root privileges. You only need to run them as root if the disk image being accessed needs root to read and/or write.
guestfish --ro -a /path/to/disk/image

Welcome to guestfish, the libguestfs filesystem interactive shell for
 editing virtual machine filesystems.

 Type: 'help' for help on commands
       'man' to read the manual
       'quit' to quit the shell

><fs>
At the prompt, type run to initiate the library and attach the disk image. This can take up to 30 seconds the first time it is done. Subsequent starts will complete much faster.
Note
libguestfs will use hardware virtualization acceleration such as KVM (if available) to speed up this process.
Once the run command has been entered, other commands can be used, as the following section demonstrates.
><fs> run ><fs> list-filesystems /dev/vda1: ext3 /dev/VolGroup00/LogVol00: ext3 /dev/VolGroup00/LogVol01: swap
Other useful commands are list-devices, list-partitions, lvs, pvs, vfs-type and file. You can get more information and help on any command by typing help command, as shown in the following output:
><fs> help vfs-type
 NAME
    vfs-type - get the Linux VFS type corresponding to a mounted device

 SYNOPSIS
     vfs-type device

 DESCRIPTION
    This command gets the filesystem type corresponding to the filesystem on
    "device".

    For most filesystems, the result is the name of the Linux VFS module
    which would be used to mount this filesystem if you mounted it without
    specifying the filesystem type. For example a string such as "ext3" or
To view the actual contents of a file system, it must first be mounted. This example uses one of the Windows partitions shown in the previous output (/dev/vda2), which in this case is known to correspond to the C:\ drive:
><fs> mount-ro /dev/vda2 /
><fs> ll /
total 1834753
[directory listing truncated in this extract]
You can use guestfish commands such as ls, ll, cat, more, download and tar-out to view and download files and directories.
Note
There is no concept of a current working directory in this shell. Unlike ordinary shells, you cannot, for example, use the cd command to change directories. All paths must be fully qualified starting at the top with a forward slash (/) character. Use the Tab key to complete paths.
guestfish --ro -a /path/to/disk/image -i

Welcome to guestfish, the libguestfs filesystem interactive shell for
 editing virtual machine filesystems.

 Type: 'help' for help on commands
       'man' to read the manual
       'quit' to quit the shell

 Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8)
 /dev/VolGroup00/LogVol00 mounted on /
 /dev/vda1 mounted on /boot

><fs> ll /
total 210
[directory listing truncated in this extract]
[etc]

Because guestfish needs to start up the libguestfs back end in order to perform the inspection and mounting, the run command is not necessary when using the -i option. The -i option works for many common Linux and Windows guests.
guestfish -d RHEL3 -i

Welcome to guestfish, the libguestfs filesystem interactive shell for
 editing virtual machine filesystems.

 Type: 'help' for help on commands
       'man' to read the manual
       'quit' to quit the shell

 Operating system: Red Hat Enterprise Linux AS release 3 (Taroon Update 9)
 /dev/vda2 mounted on /
 /dev/vda1 mounted on /boot

><fs> edit /boot/grub/grub.conf
Commands to edit files include edit, vi and emacs. Many commands also exist for creating files and directories, such as write, mkdir, upload and tar-in.
#!/bin/bash
set -e
guestname="$1"
guestfish -d "$1" -i --ro <<'EOF'
aug-init / 0
aug-get /files/etc/sysconfig/keyboard/LAYOUT
EOF
Augeas can also be used to modify configuration files. You can modify the above script to change the keyboard layout:
#!/bin/bash
set -e
guestname="$1"
guestfish -d "$1" -i <<'EOF'
aug-init / 0
aug-set /files/etc/sysconfig/keyboard/LAYOUT '"gb"'
aug-save
EOF
Note the three changes between the two scripts: 1. The --ro option has been removed in the second example, giving the ability to write to the guest. 2. The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes). 3. The aug-save command is used here so Augeas will write the changes out to disk.
Note
More information about Augeas can be found on the website http://augeas.net.
guestfish can do much more than we can cover in this introductory document. For example, creating disk images from scratch:
guestfish -N fs
Other commands
virt-edit is similar to the guestfish edit command. It can be used to interactively edit a single file within a guest. For example, you may need to edit the grub.conf file in a Linux-based guest that will not boot:
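The example command is not shown in this extract; a minimal sketch, assuming a libvirt guest named GuestName:

# virt-edit GuestName /boot/grub/grub.conf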
virt-edit has another mode where it can be used to make simple non-interactive changes to a single file. For this, the -e option is used. This command, for example, changes the root password in a Linux guest to having no password:
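A hedged sketch of this non-interactive mode; the Perl expression here is illustrative and assumes the password field is edited in the guest's /etc/passwd:

# virt-edit GuestName /etc/passwd -e 's/^root:.*?:/root::/'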
virt-ls is similar to the guestfish ls, ll and find commands. It is used to list a directory or directories (recursively). For example, the following command would recursively list files and directories under /home in a Linux guest:
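The command itself is missing here; a minimal sketch, assuming a virt-ls version that supports the -R (recursive) option:

# virt-ls -R GuestName /home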
virt-rescue GuestName
virt-rescue /path/to/disk/image
(where the path can be any file, any logical volume, LUN, or so on) containing a guest disk. You will first see output scroll past, as virt-rescue boots the rescue VM. In the end you will see:
Welcome to virt-rescue, the libguestfs rescue shell.

Note: The contents of / are the rescue appliance.
You have to mount the guest's partitions under /sysroot
before you can examine them.

bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
><rescue>
The shell prompt here is an ordinary bash shell, and a reduced set of ordinary Red Hat Enterprise Linux commands is available. For example, you can enter:
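The example command did not survive extraction; a minimal sketch, assuming the guest disk appears as /dev/vda inside the rescue appliance (as in the mount example below):

><rescue> fdisk -l /dev/vda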
The previous command will list disk partitions. To mount a file system, it is suggested that you mount it under /sysroot, which is an empty directory in the rescue machine for you to mount anything you like. Note that the files under / are files from the rescue VM itself:
><rescue> mount /dev/vda1 /sysroot/
EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
><rescue> ls -l /sysroot/grub/
 total 324
 -rw-r--r--. 1 root root     63 Sep 16 18:14 device.map
 -rw-r--r--. 1 root root  13200 Sep 16 18:14 e2fs_stage1_5
 -rw-r--r--. 1 root root  12512 Sep 16 18:14 fat_stage1_5
 -rw-r--r--. 1 root root  11744 Sep 16 18:14 ffs_stage1_5
 -rw-------. 1 root root   1503 Oct 15 11:19 grub.conf
 [...]
When you are finished rescuing the guest, exit the shell by entering exit or Ctrl+d.

virt-rescue has many command line options. The options most often used are:

--ro: Operate in read-only mode on the guest. No changes will be saved. You can use this to experiment with the guest. As soon as you exit from the shell, all of your changes are discarded.

--network: Enable network access from the rescue shell. Use this if you need to, for example, download RPM or other files into the guest.
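The virt-df invocation that the next paragraph explains is not shown in this extract; a minimal sketch using the disk image path named there:

# virt-df /dev/vg_guests/RHEL4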
(Where /dev/vg_guests/RHEL4 is a Red Hat Enterprise Linux 4 guest disk image. The path in this case is the host logical volume where this disk image is located.)

You can also use virt-df on its own to list information about all of your guests (that is, those known to libvirt). The virt-df command recognizes some of the same options as the standard df such as -h (human-readable) and -i (show inodes instead of blocks).

virt-df also works on Windows guests:
# virt-df -h
Filesystem                                  Size
F14x64:/dev/sda1                          484.2M
F14x64:/dev/vg_f14x64/lv_root               7.4G
RHEL6brewx64:/dev/sda1                    484.2M
RHEL6brewx64:/dev/vg_rhel6brewx64/lv_root  13.3G
Win7x32:/dev/sda1                         100.0M
Win7x32:/dev/sda2                          19.9G
[the Used, Available and Use% columns were lost in extraction]
Note
You can use virt-df safely on live guests, since it only needs read-only access. However, you should not expect the numbers to be precisely the same as those from a df command running inside the guest. This is because what is on disk will be slightly out of synch with the state of the live guest. Nevertheless it should be a good enough approximation for analysis and monitoring purposes.
virt-df is designed to allow you to integrate the statistics into monitoring tools, databases and so on. This allows system administrators to generate reports on trends in disk usage, and alerts if a guest is about to run out of disk space. To do this you should use the --csv option to generate machine-readable Comma-Separated-Values (CSV) output. CSV output is readable by most databases, spreadsheet software and a variety of other tools and programming languages. The raw CSV looks like the following:
For resources and ideas on how to process this output to produce trends and alerts, refer to the following URL: http://virt-tools.org/learning/advanced-virt-df/.
This example will demonstrate how to:

Increase the size of the first (boot) partition, from approximately 100MB to 500MB.
Increase the total disk size from 8GB to 16GB.
Expand the second partition to fill the remaining space.
Expand /dev/VolGroup00/LogVol00 to fill the new space in the second partition.

1. Make sure the guest is shut down.

2. Rename the original disk as the backup. How you do this depends on the host storage environment for the original disk. If it is stored as a file, use the mv command. For logical volumes (as demonstrated in this example), use lvrename:
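The lvrename command itself is missing from this extract; a minimal sketch, assuming the guest disk is the logical volume /dev/vg_guests/RHEL4 mentioned earlier and that the backup simply gets a .backup suffix (both names are assumptions):

# lvrename /dev/vg_guests/RHEL4 /dev/vg_guests/RHEL4.backup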
3. Create the new disk. The requirements in this example are to expand the total disk size up to 16GB. Since logical volumes are used here, the following command is used:
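The commands for this step and for running virt-resize (step 4) are not present in this extract. A hedged sketch, assuming the volume group and names from the previous step; the virt-resize options are the ones described in the following paragraph:

# lvcreate -L 16G -n RHEL4 vg_guests
# virt-resize /dev/vg_guests/RHEL4.backup /dev/vg_guests/RHEL4 \
    --resize /dev/sda1=500M --expand /dev/sda2 --LV-expand /dev/VolGroup00/LogVol00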
The first two arguments are the input disk and output disk. --resize /dev/sda1=500M resizes the first partition up to 500MB. --expand /dev/sda2 expands the second partition to fill all remaining space. --LV-expand /dev/VolGroup00/LogVol00 expands the guest logical volume to fill the extra space in the second partition. virt-resize describes what it is doing in the output:
Summary of changes:
   /dev/sda1: partition will be resized from 101.9M to 500.0M
   /dev/sda1: content will be expanded using the 'resize2fs' method
   /dev/sda2: partition will be resized from 7.9G to 15.5G
   /dev/sda2: content will be expanded using the 'pvresize' method
   /dev/VolGroup00/LogVol00: LV will be expanded to maximum size
   /dev/VolGroup00/LogVol00: content will be expanded using the 'resize2fs' method
Copying /dev/sda1 ...
[#####################################################]
Copying /dev/sda2 ...
[#####################################################]
Expanding /dev/sda1 using the 'resize2fs' method
Expanding /dev/sda2 using the 'pvresize' method
Expanding /dev/VolGroup00/LogVol00 using the 'resize2fs' method
5. Try to boot the virtual machine. If it works (and after testing it thoroughly) you can delete the backup disk. If it fails, shut down the virtual machine, delete the new disk, and rename the backup disk back to its original name.

6. Use virt-df and/or virt-list-partitions to show the new size.
Resizing guests is not an exact science. If virt-resize fails, there are a number of tips that you can review and attempt in the virt-resize(1) man page. For some older Red Hat Enterprise Linux guests, you may need to pay particular attention to the tip regarding GRUB.
Note

Red Hat Enterprise Linux 6.1 ships with two variations of this program: virt-inspector is the original program as found in Red Hat Enterprise Linux 6.0 and is now deprecated upstream. virt-inspector2 is the same as the new upstream virt-inspector program.
31.9.2. Installation
To install virt-inspector and the documentation, enter the following command:
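The command is not shown in this extract; a minimal sketch, assuming the tools and documentation ship in the libguestfs-tools and libguestfs-devel packages (package names are an assumption, consistent with the /usr/share/doc/libguestfs-devel-*/ path mentioned below):

# yum install libguestfs-tools libguestfs-devel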
To process Windows guests you must also install libguestfs-winsupport. The documentation, including example XML output and a Relax-NG schema for the output, will be installed in /usr/share/doc/libguestfs-devel-*/ where "*" is replaced by the version number of libguestfs.
Or as shown here:
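The invocations themselves are missing from this extract; a minimal sketch of the form used later in this section (the disk-image variant is an assumption):

virt-inspector --xml GuestName > report.xml
virt-inspector --xml disk.img > report.xml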
The result will be an XML report (report.xml). The main components of the XML file are a top-level <operatingsystems> element containing usually a single <operatingsystem> element, similar to the following:
<operatingsystems>
  <operatingsystem>
    <!-- the type of operating system and Linux distribution -->
    <name>linux</name>
    <distro>fedora</distro>
    <!-- the name, version and architecture -->
    <product_name>Fedora release 12 (Constantine)</product_name>
    <major_version>12</major_version>
    <arch>x86_64</arch>
    <!-- how the filesystems would be mounted when live -->
    <mountpoints>
      <mountpoint dev="/dev/vg_f12x64/lv_root">/</mountpoint>
      <mountpoint dev="/dev/sda1">/boot</mountpoint>
    </mountpoints>
    <!-- the filesystems -->
    <filesystems>
      <filesystem dev="/dev/sda1">
        <type>ext4</type>
      </filesystem>
      <filesystem dev="/dev/vg_f12x64/lv_root">
        <type>ext4</type>
      </filesystem>
      <filesystem dev="/dev/vg_f12x64/lv_swap">
        <type>swap</type>
      </filesystem>
    </filesystems>
    <!-- packages installed -->
    <applications>
      <application>
        <name>firefox</name>
        <version>3.5.5</version>
        <release>1.fc12</release>
      </application>
    </applications>
  </operatingsystem>
</operatingsystems>
Processing these reports is best done using W3C standard XPath queries. Red Hat Enterprise Linux 6.1 comes with a command line program (xpath) which can be used for simple instances; however, for long-term and advanced usage, you should consider using an XPath library along with your favorite programming language. As an example, you can list out all file system devices using the following XPath query:
virt-inspector --xml GuestName | xpath //filesystem/@dev
Found 3 nodes:
-- NODE --
dev="/dev/sda1"
-- NODE --
dev="/dev/vg_f12x64/lv_root"
-- NODE --
dev="/dev/vg_f12x64/lv_swap"
The version of virt-inspector in Red Hat Enterprise Linux 6.1 has a number of shortcomings. It has limited support for Windows guests and the XML output is over-complicated for Linux guests. These limitations will be addressed in future releases.
31.10.2. Installation
To use virt-win-reg you must run the following:
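The command is not present in this extract; a minimal sketch, assuming virt-win-reg is provided by the libguestfs-tools and libguestfs-winsupport packages (package names are an assumption):

# yum install libguestfs-tools libguestfs-winsupport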
The output is in the standard text-based format used by .REG files on Windows.
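For reference, a hedged sketch of the kind of invocation that produces such output; the guest name and registry key here are purely illustrative:

# virt-win-reg WindowsGuest 'HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall' > uninstall.reg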
Note
Hex-quoting is used for strings because the format does not properly define a portable encoding method for strings. This is the only way to ensure fidelity when transporting .REG files from one machine to another. You can make hex-quoted strings printable by piping the output of virt-win-reg through this simple Perl script:
perl -MEncode -pe's?hex\((\d+)\):(\S+)?$t=$1;$_=$2;s,\,,,g;"str($t): \"".decode(utf16le=>pack("H*",$_))."\""?eg'
To merge changes into the Windows Registry of an offline guest, you must first prepare a .REG file. There is a great deal of documentation about doing this available from MSDN, and there is a good summary in the following Wikipedia page: https://secure.wikimedia.org/wikipedia/en/wiki/Windows_Registry#.REG_files. When you have prepared a .REG file, enter the following:
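The merge command is missing from this extract; a minimal sketch, assuming the guest is offline and the prepared file is named changes.reg:

# virt-win-reg --merge WindowsGuest changes.reg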
The binding for each language is essentially the same, but with minor syntactic changes. A C statement:
guestfs_launch (g);

appears as follows in Perl:

$g->launch ()

and as follows in OCaml:

g#launch ()
In the C and C++ bindings, you must manually check for errors. In the other bindings, errors are converted into exceptions; the additional error checks shown in the examples below are not necessary for other languages, but conversely you may wish to add code to catch exceptions.

Refer to the following list for some points of interest regarding the architecture of the libguestfs API:

The libguestfs API is synchronous. Each call blocks until it has completed. If you want to make calls asynchronously, you have to create a thread.

The libguestfs API is not thread safe: each handle should be used only from a single thread, or if you want to share a handle between threads you should implement your own mutex to ensure that two threads cannot execute commands on one handle at the same time.

You should not open multiple handles on the same disk image. It is permissible if all the handles are read-only, but still not recommended.

You should not add a disk image for writing if anything else could be using that disk image (for example, a live VM). Doing this will cause disk corruption.

Opening a read-only handle on a disk image which is currently in use (for example, by a live VM) is possible; however, the results may be unpredictable or inconsistent, particularly if the disk image is being heavily written to at the time you are reading it.
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>

int
main (int argc, char *argv[])
{
  guestfs_h *g;

  g = guestfs_create ();
  if (g == NULL) {
    perror ("failed to create libguestfs handle");
    exit (EXIT_FAILURE);
  }

  /* ... */

  guestfs_close (g);

  exit (EXIT_SUCCESS);
}
Save this program to a file (test.c). Compile this program and run it with the following two commands:
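The two commands are not shown in this extract; a minimal sketch, assuming the libguestfs development files are installed so the program can link against libguestfs:

gcc -Wall test.c -o test -lguestfs
./test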
At this stage it should print no output. The rest of this section demonstrates an example showing how to extend this program to create a new disk image, partition it, format it with an ext4 file system, and create some files in the file system. The disk image will be called disk.img and be created in the current directory.
The outline of the program is:

Create the handle.
Add disk(s) to the handle.
Launch the libguestfs back end.
Create the partition, file system and files.
Close the handle and exit.

Here is the modified program:
int
main (int argc, char *argv[])
{
  guestfs_h *g;
  size_t i;

  g = guestfs_create ();
  if (g == NULL) {
    perror ("failed to create libguestfs handle");
    exit (EXIT_FAILURE);
  }

  /* Create a raw-format sparse disk image, 512 MB in size. */
  int fd = open ("disk.img", O_CREAT|O_WRONLY|O_TRUNC|O_NOCTTY, 0666);
  if (fd == -1) {
    perror ("disk.img");
    exit (EXIT_FAILURE);
  }
  if (ftruncate (fd, 512 * 1024 * 1024) == -1) {
    perror ("disk.img: truncate");
    exit (EXIT_FAILURE);
  }
  if (close (fd) == -1) {
    perror ("disk.img: close");
    exit (EXIT_FAILURE);
  }

  /* Set the trace flag so that we can see each libguestfs call. */
  guestfs_set_trace (g, 1);

  /* Set the autosync flag so that the disk will be synchronized
   * automatically when the libguestfs handle is closed. */
  guestfs_set_autosync (g, 1);

  /* Add the disk image to libguestfs. */
  if (guestfs_add_drive_opts (g, "disk.img",
                              GUESTFS_ADD_DRIVE_OPTS_FORMAT, "raw",   /* raw format */
                              GUESTFS_ADD_DRIVE_OPTS_READONLY, 0,     /* for write */
                              -1 /* this marks end of optional arguments */ )
      == -1)
    exit (EXIT_FAILURE);
  for (i = 0; partitions[i] != NULL; ++i)
    free (partitions[i]);
  free (partitions);

  /* Closing the handle synchronizes the disk because autosync was set. */
  guestfs_close (g);

  exit (EXIT_SUCCESS);
}
Compile and run this program with the following two commands:
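These are the same commands used for the first example; assuming the same file name:

gcc -Wall test.c -o test -lguestfs
./test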
If the program runs to completion successfully then you should be left with a disk image called disk.img, which you can examine with guestfish:
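A typical read-only inspection session, assuming the partition created by the program is /dev/sda1, looks like this:

$ guestfish --ro -a disk.img -m /dev/sda1

><fs> ll /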
By default (for C and C++ bindings only), libguestfs prints errors to stderr. You can change this behavior by setting an error handler. The guestfs(3) man page discusses this in detail.
31.12. Troubleshooting
A test tool is available to check that libguestfs is working. Run the following command after installing libguestfs (root access not required) to test for normal operation:
$ libguestfs-test-tool
This tool prints a large amount of text to test the operation of libguestfs. If the test is successful, the following text will appear near the end of the output:
===== TEST FINISHED OK =====
Chapter 32.
Virtual Networking
This chapter introduces the concepts needed to create, start, stop, remove and modify virtual networks with libvirt.
Figure 32.1. Virtual network switch with two guests

Linux host servers represent a virtual network switch as a network interface. When the libvirt daemon is first installed and started, the default network interface representing the virtual network switch is virbr0.
Figure 32.2. Linux host with an interface to a virtual network switch

This virbr0 interface can be viewed with the ifconfig and ip commands like any other interface:
$ ifconfig virbr0
virbr0    Link encap:Ethernet  HWaddr 1B:C4:94:CF:FD:17
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:3097 (3.0 KiB)
$ ip addr show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
Figure 32.3. Virtual network switch using NAT with two guests
Warning
Virtual network switches use NAT configured by iptables rules. Editing these rules while the switch is running is not recommended, as incorrect rules may result in the switch being unable to communicate.
Routed mode
When using routed mode, the virtual switch connects to the physical LAN connected to the host, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, all of the virtual machines are in their own subnet, routed through a virtual switch. This situation is not always ideal, as no other hosts on the physical network are aware of the virtual machines without manual physical router configuration and cannot access the virtual machines. Routed mode operates at Layer 3 of the ISO networking model.
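In libvirt, routed networks are defined with a forward mode of route. As an illustration only, a minimal definition of this kind might resemble the following; the network name routed-example, the physical device eth0 and the 192.168.100.0/24 range are placeholder assumptions:

<network>
  <name>routed-example</name>
  <forward mode='route' dev='eth0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>

A definition like this is typically loaded with virsh net-define and started with virsh net-start.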
Isolated mode
When using isolated mode, guests connected to the virtual switch can communicate with each other and with the host, but their traffic will not pass outside of the host, nor can they receive traffic from outside the host. Using dnsmasq in this mode is required for basic functionality such as DHCP. However, even though this network is isolated from any physical network, DNS names are still resolved. Therefore, a situation can arise where DNS names resolve but ICMP echo request (ping) commands fail.
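In libvirt, an isolated network is simply a network definition with no <forward> element. A minimal sketch follows; the name isolated-example and the 192.168.200.0/24 range are chosen purely for illustration:

<network>
  <name>isolated-example</name>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.2' end='192.168.200.254'/>
    </dhcp>
  </ip>
</network>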
Hosts in a DMZ typically provide services to WAN (external) hosts as well as LAN (internal) hosts. Because they must be accessible from multiple locations, and because these locations are controlled and operated in different ways according to their security and trust level, routed mode is the best configuration for this environment.
Consider a virtual server hosting company that has several hosts, each with two physical network connections. One interface is used for management and accounting, the other for the virtual machines to connect through. Each guest has its own public IP address, but the hosts use private IP addresses, as management of the guests can only be performed by internal administrators. Refer to the following diagram to understand this scenario:
When the host has a public IP address and the virtual machines have static public IPs, bridged networking cannot be used, as the provider only accepts packets from the MAC address of the public host. The following diagram demonstrates this:
2. This will open the Host Details menu. Click the Virtual Networks tab.
Figure 32.9. Virtual network configuration

3. All available virtual networks are listed in the left-hand box of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit.
1. Open the Host Details menu (refer to Section 32.6, Managing a virtual network) and click the Add Network button, identified by a plus sign (+) icon.
Figure 32.10. Virtual network configuration

This will open the Create a new virtual network window. Click Forward to continue.
2. Enter an appropriate name for your virtual network and click Forward.
3. Enter an IPv4 address space for your virtual network and click Forward.
4. Define the DHCP range for your virtual network by specifying a Start and End range of IP addresses. Click Forward to continue.
5. Select how the virtual network should connect to the physical network.
Figure 32.15. Connecting to physical network

If you select Forwarding to physical network, choose whether the Destination should be Any physical device or a specific physical device. Also select whether the Mode should be NAT or Routed. Click Forward to continue.
6. You are now ready to create the network. Check the configuration of your network and click Finish.
7. The new virtual network is now available in the Virtual Networks tab of the Host Details window.
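The same result can be achieved from the command line. As an illustration (the file name network.xml and the network name routed-example are assumptions), a network definition can be loaded, started and set to start automatically with:

# virsh net-define network.xml
# virsh net-start routed-example
# virsh net-autostart routed-example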
Chapter 33.
disk
Chapter 34.
Run virsh attach-device to attach the ISO as hdc to a guest called "satellite":
# virsh attach-device satellite satelliteiso.xml
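The satelliteiso.xml file referenced above is an ordinary libvirt disk device definition. A minimal sketch of what such a file can contain is shown below; the ISO path /var/lib/libvirt/images/satellite.iso is a placeholder:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/satellite.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>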
http://www.redhat.com/docs/manuals/enterprise/
Chapter 35.
Troubleshooting
This chapter covers common problems and solutions for Red Hat Enterprise Linux 6 virtualization issues. It aims to give you, the reader, the background needed to identify where problems with virtualization technologies lie. Troubleshooting takes practice and experience that are difficult to learn from a book alone. It is recommended that you experiment and test virtualization on Red Hat Enterprise Linux 6 to develop your troubleshooting skills. If you cannot find the answer in this document, there may be an answer online from the virtualization community. Refer to Section A.1, Online resources for a list of Linux virtualization websites.
# brctl showmacs virtbr0
port-no  mac-addr
1        fe:ff:ff:ff:ff:
2        fe:ff:ff:fe:ff:

# brctl showstp virtbr0
virtbr0
bridge-id              8000.fefffffffff
designated-root        8000.fefffffffff
root-port              0
max-age                20.00
hello-time             2.00
forward-delay          0.00
aging-time             300.01
hello-timer            1.43
topology-change-timer  0.00
Listed below are some other useful commands for troubleshooting virtualization.

strace is a command which traces system calls and events received and used by another process.

vncviewer: connects to a VNC server running on your server or a virtual machine. Install vncviewer using the yum install vnc command.

vncserver: starts a remote desktop on your server. Gives you the ability to run graphical user interfaces, such as virt-manager, via a remote session. Install vncserver using the yum install vnc-server command.
35.2. kvm_stat
The kvm_stat command is a python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm, in particular performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported. The kvm_stat command requires that the kvm kernel module is loaded and that debugfs is mounted. If either of these features is not enabled, the command will output the required steps to enable debugfs or the kvm module. For example:
# kvm_stat Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug') and ensure the kvm modules are loaded
kvm_stat output
The kvm_stat command outputs statistics for all virtualized guests and the host. The output is updated until the command is terminated (using Ctrl+c or the q key).
# kvm_stat

kvm statistics

 efer_reload                  94       0
 exits                   4003074   31272
 fpu_reload              1313881   10796
 halt_exits                14050     259
 halt_wakeup                4496     203
 host_state_reload       1638354   24893
 hypercalls                    0       0
 insn_emulation          1093850    1909
 insn_emulation_fail           0       0
 invlpg                    75569       0
 io_exits                1596984   24509
 irq_exits                 21013     363
 irq_injections            48039    1222
 irq_window                24656     870
 largepages                    0       0
 mmio_exits                11873       0
 mmu_cache_miss            42565       8
 mmu_flooded               14752       0
 mmu_pde_zapped            58730       0
 mmu_pte_updated               6       0
 mmu_pte_write            138795       0
 mmu_recycled                  0       0
 mmu_shadow_zapped         40358       0
 mmu_unsync                  793       0
 nmi_injections                0       0
 nmi_window                    0       0
 pf_fixed                 697731    3150
 pf_guest                 279349       0
 remote_tlb_flush              5       0
 request_irq                   0       0
 signal_exits                  1       0
 tlb_flush                200190       0
Explanation of variables:

efer_reload
    The number of Extended Feature Enable Register (EFER) reloads.

exits
    The count of all VMEXIT calls.

fpu_reload
    The number of times a VMENTRY reloaded the FPU state. The fpu_reload is incremented when a guest is using the Floating Point Unit (FPU).

halt_exits
    Number of guest exits due to halt calls. This type of exit is usually seen when a guest is idle.

halt_wakeup
    Number of wakeups from a halt.

host_state_reload
    Count of full reloads of the host state (currently tallies MSR setup and guest MSR reads).

hypercalls
    Number of guest hypervisor service calls.

insn_emulation
    Number of guest instructions emulated by the host.

insn_emulation_fail
    Number of failed insn_emulation attempts.

io_exits
    Number of guest exits from I/O port accesses.
irq_exits
    Number of guest exits due to external interrupts.

irq_injections
    Number of interrupts sent to guests.

irq_window
    Number of guest exits from an outstanding interrupt window.

largepages
    Number of large pages currently in use.

mmio_exits
    Number of guest exits due to memory mapped I/O (MMIO) accesses.

mmu_cache_miss
    Number of KVM MMU shadow pages created.

mmu_flooded
    Detection count of excessive write operations to an MMU page. This counts detected write operations, not individual write operations.

mmu_pde_zapped
    Number of page directory entry (PDE) destruction operations.

mmu_pte_updated
    Number of page table entry (PTE) destruction operations.

mmu_pte_write
    Number of guest page table entry (PTE) write operations.

mmu_recycled
    Number of shadow pages that can be reclaimed.

mmu_shadow_zapped
    Number of invalidated shadow pages.

mmu_unsync
    Number of non-synchronized pages which are not yet unlinked.

nmi_injections
    Number of Non-maskable Interrupt (NMI) injections to the guest.

nmi_window
    Number of guest exits from (outstanding) Non-maskable Interrupt (NMI) windows.

pf_fixed
    Number of fixed (non-paging) page table entry (PTE) maps.

pf_guest
    Number of page faults injected into guests.

remote_tlb_flush
    Number of remote (sibling CPU) Translation Lookaside Buffer (TLB) flush requests.
request_irq
    Number of guest interrupt window request exits.

signal_exits
    Number of guest exits due to pending signals from the host.

tlb_flush
    Number of tlb_flush operations performed by the hypervisor.
Note
The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files located in the /sys/kernel/debug/kvm/ directory.
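As an illustration, the raw counters can also be read directly. For example, assuming debugfs is mounted at /sys/kernel/debug and the kvm module is loaded, the following command prints the total exit count used by kvm_stat:

# cat /sys/kernel/debug/kvm/exits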
Reboot the guest. On the host, access the serial console with the following command (where GUEST is the guest's name):

# virsh console GUEST
You can also use virt-manager to display the virtual text console. In the guest console window, select Serial Console from the View menu.
This example uses 64, but you can specify another number to set the maximum loop value. You may also have to implement loop device backed guests on your system. To use loop device backed guests for a fully virtualized system, use the phy: device or file: file commands.
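The value this paragraph refers to is the loop module's max_loop parameter. A sketch of how it is typically raised, assuming the loop driver is built as a module and is not already loaded:

# modprobe loop max_loop=64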
a. Open the Processor submenu. The processor settings menu may be hidden in the Chipset, Advanced CPU Configuration or Northbridge menus.

b. Enable Intel Virtualization Technology (also known as Intel VT). AMD-V extensions cannot be disabled in the BIOS and should already be enabled. The virtualization extensions may be labeled Virtualization Extensions, Vanderpool or various other names depending on the OEM and system BIOS.

c. Enable Intel VT-d or AMD IOMMU, if the options are available. Intel VT-d and AMD IOMMU are used for PCI device assignment.

d. Select Save & Exit.

3. Reboot the machine.

4. When the machine has booted, run cat /proc/cpuinfo | grep -E "vmx|svm". If the command produces output, the virtualization extensions are now enabled. If there is no output, your system may not have the virtualization extensions or the correct BIOS setting enabled.
Note
Note that the virtualized Intel PRO/1000 (e1000) driver is also supported as an emulated driver choice. To use the e1000 driver, replace virtio in the procedure below with e1000. For the best performance it is recommended to use the virtio driver.
Procedure 35.2. Switching to the virtio driver
1. Shut down the guest operating system.
2. Edit the guest's configuration file with the virsh command (where GUEST is the guest's name):
# virsh edit GUEST
The virsh edit command uses the $EDITOR shell variable to determine which editor to use.
3. Find the network interface section of the configuration. This section resembles the snippet below:
<interface type='network'>
  [output truncated]
  <model type='rtl8139' />
</interface>
4. Change the type attribute of the model element from 'rtl8139' to 'virtio'. This will change the driver from the rtl8139 driver to the virtio driver.
<interface type='network'>
  [output truncated]
  <model type='virtio' />
</interface>
5. Save the changes and exit the text editor.
6. Restart the guest operating system.
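To confirm the change took effect, the model line can be checked again after editing; this is a suggested check rather than a step from the procedure above:

# virsh dumpxml GUEST | grep "model type"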
2. Copy and edit the XML file and update the unique fields: virtual machine name, UUID, disk image, MAC address, and any other unique parameters. Note that you can delete the UUID and MAC address lines and virsh will generate a UUID and MAC address.
# cp /tmp/guest-template.xml /tmp/new-guest.xml
# vi /tmp/new-guest.xml
3.
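The final step typically registers the edited definition with libvirt and starts the new guest. A sketch, assuming the file name used above and that the guest was renamed new-guest in the XML:

# virsh define /tmp/new-guest.xml
# virsh start new-guest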
Revision 3.2-057, Thu Apr 21 2011: Drop KVM Live Migration chapter.
Revision 3.2-056, Fri Apr 15 2011: Updated screenshots for 6.3, 6.4 and 11.
Revision 3.2-052-draft, Wed Apr 13 2011, Scott Radvan [email protected]: Update steps of Windows guest conversion: 23.4.2.
Revision 3.2-051-draft, Tue Apr 12 2011, Scott Radvan [email protected]: Refer to virsh manpage in Table 30.2. List supported guest O/S for virt-v2v in Chapter 23.
Revision 3.2-048-draft
Revision 3.2-047-draft, Mon Apr 04 2011, Scott Radvan [email protected]: Fix adding of file-based storage to work with virtio bus.
Revision 3.2-045-draft
Add volume, secret, and nwfilter command options to virsh chapter.
Revision 3.2-043-draft, Wed Mar 23 2011, Scott Radvan [email protected]: Rename `virtualization reference guide' Part to `host virtualization tools'.
Revision 3.2-040-draft, Mon Mar 7 2011, Scott Radvan [email protected]: Added a description and steps for attaching and updating a disk image with virsh.
Revision 3.2-039-draft, Fri Mar 4 2011, Scott Radvan [email protected]: Use "size=8" on the disk path line for guest creation.
Revision 3.2-038-draft, Thu Mar 3 2011: Minor virt-install command errors fixed (6.2).
Revision 3.2-037-draft, Wed Mar 2 2011: Fix `virsh restart' command in 27.3.1.
Revision 3.2-034-draft, Wed Feb 23 2011, Scott Radvan [email protected]: PCI management from guest now uses `virt_use_sysfs'.
Revision 3.2-033-draft, Thu Feb 17 2011, Scott Radvan [email protected]: Add Transparent Hugepage Support to Chapter 22, describe turning off swap in 24.6.
Revision 3.2-032-draft, Thu Feb 17 2011: Fix storage size for virtual disk in 8.2.
Revision 3.2-028-draft
Revision 3.2-021-draft, Mon Feb 07 2011, Scott Radvan [email protected]: Add table showing virsh command options, other minor fixes.
Revision 3.2-020-draft, Fri Feb 04 2011: Remove unsupported guest CPU models.
Revision 3.2-019-draft, Fri Feb 04 2011, Scott Radvan [email protected]: Add table of supported CPUs that can be presented to guests.
Revision 3.2-017-draft, Thu Feb 03 2011, Scott Radvan [email protected]: Fix description for the 'define' guest management command (#657337).
Revision 3.2-017-draft, Thu Feb 03 2011, Scott Radvan [email protected]: Add --os-type and --os-variant descriptions to virt-install with Windows guests.
Revision 3.2-016-draft, Wed Feb 02 2011, Scott Radvan [email protected]: Should be virtio-win.vfd; specify IP forwarding not required for physical bridges; dd calculation fix.
Revision 3.2-015-draft, Tue Feb 01 2011: Several wording and flow issues.
Revision 3.2-014-draft, Mon Jan 31 2011, Scott Radvan [email protected]: Minor fixes in storage chapter; remove text describing storage pool volume allocation.
Revision 3.2-013-draft
Remove Fedora from list of supported KVM guests.
Revision 3.2-012-draft, Tue Jan 25 2011, Scott Radvan [email protected]: Add device assignment restrictions and fix minor wording issues.
Revision 3.2-011-draft, Fri Jan 21 2011, Scott Radvan [email protected]: Add PCI-e passthrough limitations in Introduction and link to related chapter. Start `Virtual networking' chapter.
Revision 3.2-007, Wed Jan 19 2011, Scott Radvan [email protected]: Replace `passthrough' with `device assignment', but reference old term as meaning the same thing.
Revision 3.2-006, Tue Jan 18 2011, Scott Radvan [email protected]: Fix command used when running grep on /proc/cpuinfo for CPU extensions.
Revision 3.2-005, Tue Jan 18 2011, Scott Radvan [email protected]: Resolve small typographical errors and add balloon driver details.
Revision 3.2-004, Tue Jan 18 2011, Scott Radvan [email protected]: Add more corrections for Introduction and Part I; include draft watermark.
Revision 3.2-003, Mon Jan 17 2011: Corrections for Introduction and Part I.
Revision 6.0-24, Fri Sep 03 2010, Christopher Curran [email protected]: Updated para-virtualized driver usage procedures. BZ#621740.
Revision 6.0-22, Fri May 14 2010, Christopher Curran [email protected]: Fixes BZ#587911, which expands supported storage devices. Updated Introduction chapter. Updated Troubleshooting chapter. Updated KSM chapter. Updated overcommitting guidance.
Revision 6.0-11, Tue Apr 20 2010: Beta version update. Various fixes included.
Revision 6.0-10, Thu Apr 15 2010, Christopher Curran [email protected]: Forward-ported the following fixes from the Red Hat Enterprise Linux 5.5 release: Fixes BZ#573558 and expands SR-IOV content. Fixes BZ#559052, expands the KVM para-virtualized drivers chapter. Fixes BZ#578342. Fixes BZ#573553. Fixes BZ#573556. Fixes BZ#573549. Fixes BZ#534020. Fixes BZ#573555.
Appendix C. Colophon
This manual was written in the DocBook XML v4.3 format.

This book is based on the original work of Jan Mark Holzer, Justin Clift and Chris Curran.

This book is edited and maintained by Scott Radvan. Other writing credits go to:

Daniel Berrange contributed various sections on libvirt.
Don Dutile contributed technical editing for the para-virtualized drivers section.
Barry Donahue contributed technical editing for the para-virtualized drivers section.
Rick Ring contributed technical editing for the Virtual Machine Manager section.
Michael Kearey contributed technical editing for the sections on using XML configuration files with virsh and virtualized floppy drives.
Marco Grigull contributed technical editing for the software compatibility and performance section.
Eugene Teo contributed technical editing for the Managing Guests with virsh section.

Publican, the publishing tool which produced this book, was written by Jeffrey Fearn.
Translators
Due to technical limitations, the translators credited in this section are those who worked on previous versions of the Red Hat Enterprise Linux Virtualization Guide and the Fedora Virtualization Guide. To find out who translated the current version of the guide, visit https://fedoraproject.org/wiki/Fedora_13_Documentation_Translations_-_Contributors. These translators will receive credit in subsequent versions of this guide.

Simplified Chinese: Leah Wei Liu
Traditional Chinese: Chester Cheng, Terry Chuang
Japanese: Kiyoto Hashida
Korean: Eun-ju Kim
Dutch: Geert Warrink
French: Sam Friedmann
German: Hedda Peters
Greek: Nikos Charonitakis
Italian: Silvio Pierro, Francesco Valente
Brazilian Portuguese: Glaucia de Freitas, Leticia de Lima
Spanish: Domingo Becker, Héctor Daniel Cabrera, Angela Garcia, Gladys Guerrero
Russian: Yuliya Poyarkova