Oracle Linux KVM User's Guide
F29966-20
September 2022
Upgrading Virtualization Packages 2-13
Switching Application Streams on Oracle Linux 8 2-14
Switching to the Oracle KVM Stack 2-14
Switching to the Default KVM Stack 2-14
Validating the Host System 2-15
3 KVM Usage
Checking the Libvirt Daemon Status 3-1
Oracle Linux 7 and Oracle Linux 8 3-1
Oracle Linux 9 3-1
Working With Virtual Machines 3-2
Creating a New Virtual Machine 3-2
Starting and Stopping Virtual Machines 3-3
Starting a VM 3-3
Shutting Down a VM 3-3
Rebooting a VM 3-4
Suspending a VM 3-4
Resuming a Suspended VM 3-4
Forcefully Stopping a VM 3-4
Deleting a Virtual Machine 3-4
Configuring a Virtual Machine With a Virtual Trusted Platform Module 3-5
Working With Storage for KVM Guests 3-6
Storage Pools 3-7
Creating a Storage Pool 3-7
Listing Storage Pools 3-8
Starting a Storage Pool 3-8
Stopping a Storage Pool 3-9
Removing a Storage Pool 3-9
Storage Volumes 3-9
Creating a New Storage Volume 3-9
Viewing Information About a Storage Volume 3-10
Cloning a Storage Volume 3-10
Deleting a Storage Volume 3-10
Resizing a Storage Volume 3-11
Managing Virtual Disks 3-11
Adding or Removing a Virtual Disk 3-11
Removing a Virtual Disk 3-12
Extending a Virtual Disk 3-12
Working With Memory and CPU Allocation 3-13
Configuring Virtual CPU Count 3-13
Configuring Memory Allocation 3-14
Setting Up Networking for KVM Guests 3-15
Setting Up and Managing Virtual Networks 3-16
Adding or Removing a vNIC 3-17
Bridged and Direct vNICs 3-18
Interface Bonding for Bridged Networks 3-20
Cloning Virtual Machines 3-20
Preparing a Virtual Machine for Cloning 3-21
Cloning a Virtual Machine by Using the Virt-Clone Command 3-23
Cloning a Virtual Machine by Using Virtual Machine Manager 3-23
Preface
Oracle Linux: KVM User's Guide provides information about how to install, configure, and use the Oracle Linux KVM packages to run guest systems on top of a bare metal Oracle Linux system. This documentation provides information on using KVM on a standalone platform in an unmanaged environment. Typical usage in this mode is for development and testing purposes, although production level deployments are supported. Oracle recommends that customers use Oracle Linux Virtualization Manager for more complex deployments of a managed KVM infrastructure.
Conventions
The following text conventions are used in this document:
Convention  Meaning
boldface    Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic      Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace   Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at https://www.oracle.com/corporate/accessibility/.
For information about the accessibility of the Oracle Help Center, see the Oracle Accessibility Conformance Report at https://www.oracle.com/corporate/accessibility/templates/t2-11535.html.
1 About Oracle Linux KVM
This chapter provides a high-level overview of the Kernel-based Virtual Machine (KVM)
feature on Oracle Linux, the user space tools that are available for installing and managing a
standalone instance of KVM, and the differences between KVM usage in this mode and
usage within a managed environment provided by Oracle Linux Virtualization Manager.
Chapter 1
Guest Operating System Requirements
Important:
* cloud-init is unavailable for 32-bit architectures.
You can download Oracle Linux ISO images and disk images from Oracle Software
Delivery Cloud: https://edelivery.oracle.com/linux.
Chapter 1
System Requirements and Recommendations
Caution:
Microsoft Windows 7 is no longer supported by Microsoft. See https://docs.microsoft.com/en-us/lifecycle/products/windows-7 for more information.
Microsoft Windows 8 is no longer supported by Microsoft. See https://docs.microsoft.com/en-us/lifecycle/products/windows-8 for more information.
Microsoft Windows 8.1 falls out of extended support by Microsoft in January 2023. See https://docs.microsoft.com/en-us/lifecycle/products/windows-81 for more information.
Note:
Oracle recommends that you install the Oracle VirtIO Drivers for Microsoft Windows
in Windows VMs for improved performance for network and block (disk) devices
and to resolve common issues. The drivers are paravirtualized drivers for Microsoft
Windows guests running on Oracle Linux KVM hypervisors.
Testing of all Microsoft Windows guests on KVM is performed by using the Oracle VirtIO
Drivers for Microsoft Windows.
For instructions on how to obtain and install the drivers, see Oracle Linux: Oracle VirtIO
Drivers for Microsoft Windows for use with KVM.
Chapter 1
About Virtualization Packages
• libvirt: This package provides an interface to KVM, as well as the libvirtd daemon for
managing guest VMs.
• qemu-kvm: This package installs the QEMU emulator that performs hardware virtualization
so that guests can access host CPU and other resources.
• virt-install: This package provides command line utilities for creating and provisioning
guest VMs.
• virt-viewer: This package provides a graphical utility that can be loaded into a desktop
environment to access the graphical console of a guest VM.
As an alternative to installing virtualization packages individually, you can install virtualization
package groups.
The Virtualization Host package group contains the minimum set of packages that are
required for a virtualization host. If your Oracle Linux system includes a GUI environment,
you can also choose to install the Virtualization Client package group.
Note that the Cockpit web console also provides a graphical interface to interact with KVM
and libvirtd to set up and configure VMs on a system. See Oracle Linux: Using the Cockpit
Web Console for more information.
2 Installing KVM User Space Packages
This chapter describes how to configure the appropriate ULN channels or yum repositories
and how to install user space tools to manage a standalone instance of KVM. A final check is
performed to validate whether the system is capable of hosting guest VMs.
Oracle Linux 7
Due to the availability of several very different kernel versions, and the requirement for more recent versions of user space tools that may break compatibility with RHCK, several different yum repositories and ULN channels exist across the supported architectures for Oracle Linux 7. Packages in the different channels have different use cases and different levels of support. This section describes the available yum repositories and ULN channels for each architecture.
Repositories and Channels That Are Available for x86_64 Platforms
Chapter 2
Configuring Yum Repositories and ULN Channels
Note:
The ol7_kvm_utils and ol7_x86_64_kvm_utils channels distribute 64-bit packages only. If you manually installed any 32-bit packages, for example, libvirt-client, yum updates from these channels will fail. To use the ol7_kvm_utils and ol7_x86_64_kvm_utils channels, you must first remove any 32-bit versions of the packages distributed by these channels that are installed on your system.
Caution:
Virtualization packages may also be available in the ol7_developer_EPEL yum repository or the ol7_arch_developer_EPEL ULN channel. These packages are unsupported, contain features that might never be tested on Oracle Linux, and may conflict with virtualization packages from other channels. If you intend to use packages from any of the previously listed repositories or channels, first uninstall any virtualization packages that were installed from this repository. You can also disable this repository or channel, or set exclusions to prevent virtualization packages from being installed from this repository.
Depending on your use case and support requirements, you must enable the repository or
ULN channel that you require before installing the virtualization packages from that repository
or ULN channel.
If you want to prevent yum from installing the package versions from a particular repository,
you can set an exclude option on these packages for that repository. For instance, to prevent
yum from installing the virtualization packages in the ol7_developer_EPEL repository, use the
following command:
sudo yum-config-manager --setopt="ol7_developer_EPEL.exclude=libvirt* qemu*" --save
Oracle Linux 8
The number of options available on Oracle Linux 8 is significantly reduced because the available kernels are newer and there are fewer options from which to choose.
Repositories and Channels That Are Available for Oracle Linux 8
Since the Application Stream repository or channel is required for system software on Oracle
Linux 8, it is enabled by default on any Oracle Linux 8 system.
If you intend to use the virt:kvm_utils2 application stream for improved functionality and
integration with newer features released within UEK, you must subscribe to the
ol8_kvm_appstream yum repository or ol8_base_arch_kvm_utils ULN channel. Note that the
virt:kvm_utils application stream is now a legacy stream on Oracle Linux 8.
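Assuming standard Oracle Linux 8 tooling, subscribing and selecting the newer stream can be sketched as follows (the repository and stream names are taken from the text above; confirm them against your system's configured repositories):

```shell
# Enable the KVM utilities repository (requires the dnf-plugins-core package)
sudo dnf config-manager --enable ol8_kvm_appstream

# Select the virt:kvm_utils2 application stream described above
sudo dnf module enable virt:kvm_utils2
```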
Oracle Linux 9
The number of options available on Oracle Linux 9 is significantly reduced because the available kernels are newer and there are fewer options from which to choose. Note also that unlike Oracle Linux 8, the packages for Oracle Linux 9 are not released as part of a DNF module.
Repositories and Channels That Are Available for Oracle Linux 9
Note:
You must remove all existing virtualization packages before enabling this channel or repository.
Chapter 2
Installing Virtualization Packages
Since the Application Stream repository or channel is required for system software on Oracle
Linux 9, it is enabled by default on any Oracle Linux 9 system.
Specify the appropriate package groups for the installation type in the %packages
section of the kickstart file by using the @GroupID format:
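As a sketch, a %packages section that selects the virtualization package groups might look like the following. The group IDs shown are assumptions for illustration; check the actual IDs with yum grouplist ids or dnf group list --ids on the target release before using them.

```
%packages
@virtualization-host
@virtualization-client
%end
```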
Note:
If the target host system is running Oracle Linux 9 and you intend to use the virtualization packages available in ol9_kvm_utils, you must first remove any existing virtualization packages that may already be installed:
a. Run the following command to remove packages:
sudo dnf remove libvirt qemu-kvm edk2
3. Update the system so that it has the most recent packages available.
• If you are using Oracle Linux 7, run the yum update command.
• If you are using Oracle Linux 8 or Oracle Linux 9, run the dnf update command.
4. Install virtualization packages on the system.
• If you are using Oracle Linux 7 run the following commands to install the base
virtualization packages and additional utilities:
sudo yum groupinstall "Virtualization Host"
sudo yum install qemu-kvm virt-install virt-viewer
• If you are using Oracle Linux 8 run the following commands to install the base
virtualization packages and additional utilities:
sudo dnf module install virt
sudo dnf install virt-install virt-viewer
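For Oracle Linux 9, where the packages are not delivered as a DNF module, the equivalent step is typically a direct package install (a sketch; the exact package set may vary with your chosen repository):

```shell
sudo dnf install qemu-kvm libvirt virt-install virt-viewer
```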
Additional steps are required to start virtualization services on Oracle Linux 9 after
installation. For more details, see Validating the Host System.
For more information about DNF modules and application streams, see Oracle Linux:
Managing Software on Oracle Linux.
2. Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y
Caution:
Pre-existing guests that were created by using the default KVM stack may
not be compatible and may not start using the Oracle KVM stack.
Note that although you are able to switch to the Oracle KVM stack and install the
packages while using RHCK, the stack is not compatible. You must be running a
current version of UEK to use this software.
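Under the assumption that the virt:kvm_utils2 stream delivers the Oracle KVM stack, as described earlier, the switch might be sketched as:

```shell
# Reset the module so that no stream is enabled or disabled
sudo dnf module reset virt -y

# Enable and install the Oracle KVM stack stream
sudo dnf module install virt:kvm_utils2

# Synchronize installed packages to the versions in the new stream
sudo dnf distro-sync
```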
Chapter 2
Validating the Host System
2. Reset the virt module state so that it is neither enabled nor disabled:
sudo dnf module reset virt -y
Caution:
Pre-existing guests that were created by using the Oracle KVM stack are not
compatible and may not start using the default KVM stack.
If all of the checks return a PASS value, the system can host guest VMs. If any of the tests fail,
a reason is provided and information is displayed on how to resolve the issue, if such an
option is available.
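The checks described above are typically produced by the virt-host-validate utility, which ships with the libvirt client tools. For example, to run only the QEMU/KVM checks:

```shell
virt-host-validate qemu
```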
Note:
If the following message is displayed, the system is not capable of functioning as a
KVM host:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are
available, performance will be significantly limited)
In the event that this message is displayed, attempts to create or start a VM on the
host are likely to fail.
3 KVM Usage
Several tools exist for administering the libvirt interface with KVM. In most cases, a variety
of different tools are capable of performing the same operation. This document focuses on
the tools that you can use from the command line. However, if you are using a desktop environment, you might consider using a graphical user interface (GUI), such as the VM Manager, to create and manage VMs. For more information about VM Manager, see https://virt-manager.org/.
The Cockpit web console also provides a graphical interface to interact with KVM and
libvirtd to set up and configure VMs on a system. See Oracle Linux: Using the Cockpit
Web Console for more information.
The output should indicate that the libvirtd daemon is running, as shown in the following
example output:
* libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset:
enabled)
Active: active (running) since time_stamp; xh ago
After you verify that the libvirtd service is running, you can start provisioning guest
systems.
Oracle Linux 9
Individual libvirt functional components or drivers are modularized into separate daemons
that are exposed using three systemd sockets for each driver.
The following systemd daemons are defined for individual drivers within libvirt specifically
for KVM:
• virtqemud: the QEMU management daemon, for running virtual machines on KVM.
Chapter 3
Working With Virtual Machines
All of the virtualization daemons must be running to expose the full virtualization
functionality available in libvirt. There is a single service and three UNIX sockets for
each daemon to expose different levels of access to the daemon. To enable all access
levels and to start all daemons, run:
for drv in qemu network nodedev nwfilter secret storage interface;
do
sudo systemctl enable virt${drv}d.service
sudo systemctl enable virt${drv}d{,-ro,-admin}.socket;
sudo systemctl start virt${drv}d{,-ro,-admin}.socket;
done
You do not need to start the service for each daemon, as the service is automatically
started when the first socket is established.
To see a list of all of the sockets started and their current status, run:
sudo systemctl list-units --type=socket virt*
The following example illustrates the creation of a simple VM and assumes that virt-viewer is installed and available to load the installer in a graphical environment:
virt-install --name guest-ol8 --memory 2048 --vcpus 2 \
--disk size=8 --location OracleLinux-R8.iso --os-variant ol8.0
The following are detailed descriptions of each of the options that are specified in the
example:
• --name is used to specify a name for the VM. This name is registered as a domain
within libvirt.
• --memory is used to specify the RAM available to the VM and is specified in MB.
• --vcpus is used to specify the number of virtual CPUs (vCPUs) that should be available
to the VM.
• --disk is used to specify hard disk parameters. In this case, only the size is specified, in GB. If a path is not specified, the disk image is created automatically as a qcow file. If virt-install is run as root, the disk image is created in /var/lib/libvirt/images/ and is named using the name specified for the VM at install. If virt-install is run as an ordinary user, the disk image is created in $HOME/.local/share/libvirt/images/.
• --location is used to provide the path to the installation media. The location can be an
ISO file, or an expanded installation resource hosted at a local path or remotely on an
HTTP or NFS server.
• --os-variant is an optional specification but provides some default parameters for each
VM that can help improve performance for a specific operating system or distribution. For
a complete list of options available, run osinfo-query os.
When you run the command, the VM is created and automatically starts to boot using the
install media specified in the location parameter. If you have the virt-viewer package
installed and the command is run in a terminal within a desktop environment, the graphical
console opens automatically and you can proceed with the guest operating system
installation within the console.
Use the virsh help command to view available options and syntax. For example, to find out more about the options available for listing VMs, run virsh help list. This command shows options to view listings of VMs that are stopped, paused, or currently active.
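For example, to list all VMs on the host, including those that are not currently running:

```shell
virsh list --all
```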
Starting a VM
To start a VM, run the following command:
virsh start guest-ol8
Shutting Down a VM
To gracefully shut down a VM, run the following command:
virsh shutdown guest-ol8
Rebooting a VM
To reboot a VM, run the following command:
virsh reboot guest-ol8
Suspending a VM
To suspend a VM, run the following command:
virsh suspend guest-ol8
Resuming a Suspended VM
To resume a suspended VM, run the following command:
virsh resume guest-ol8
Forcefully Stopping a VM
To forcefully stop a VM, run the following command:
virsh destroy guest-ol8
Chapter 3
Configuring a Virtual Machine With a Virtual Trusted Platform Module
This step is helpful if you are unsure of the path where the disk for the VM is located.
2. Shut down the VM, if possible, by running the following command:
virsh shutdown guest-ol8
If the VM cannot be shut down gracefully you can force it to stop by running:
virsh destroy guest-ol8
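The configuration removal described next is typically performed with the virsh undefine command, using the same guest name as in the previous steps:

```shell
virsh undefine guest-ol8
```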
This step removes all configuration information about the VM from libvirt. Storage
artifacts such as virtual disks are left intact. If you need to remove these as well, you can
delete them manually from their location returned in the first step in this procedure, for
example:
rm /home/testuser/.local/share/libvirt/images/guest-ol8-1.qcow2
Note:
It is not possible to delete a VM if it has snapshots. You should remove any
snapshots using the virsh snapshot-delete command before attempting to
remove a VM that has any snapshots defined.
Note:
Virtual Trusted Platform Module is available on Oracle Linux 7, Oracle Linux 8, and
Oracle Linux 9 KVM guests, but not on QEMU.
To provide a vTPM to an existing Oracle Linux 7, Oracle Linux 8 or Oracle Linux 9 KVM VM,
follow the steps below.
1. Install the vTPM packages:
Chapter 3
Working With Storage for KVM Guests
• Modify the KVM VM's XML to include the TPM, as shown in the tpm section in
the following example:
<devices>
...
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
<graphics type='vnc' port='-1' autoport='yes'>
<listen type='address'/>
</graphics>
...
</devices>
Note that if you are creating a new VM, the virt-install command on Oracle
Linux 8 and Oracle Linux 9 also provides a --tpm option that enables you to
specify the vTPM information at installation time, for example:
virt-install --name guest-ol8-tpm2 --memory 2048 --vcpus 2 \
--disk path=/systest/images/guest-ol8-tpm2.qcow2,size=20 \
--location /systest/iso/ol8.iso --os-variant ol8 \
--network network=default --graphics vnc,listen=0.0.0.0 \
--tpm emulator,model=tpm-crb,version=2.0
If you are using Oracle Linux 7, the virt-install command does not provide
this option, but you can manually edit the configuration after the VM is created.
4. Start the KVM VM.
Oracle recommends using Oracle Linux Virtualization Manager to easily manage and
configure complex storage requirements for KVM environments.
Storage Pools
Storage pools provide logical groupings of storage types that are available to host the
volumes that can be used as virtual disks by a set of VMs. A wide variety of different storage
types are provided. Local storage can be used in the form of directory based storage pools,
file system storage and disk based storage. Other storage types such as NFS and iSCSI
provide standard network based storage, while RBD and Gluster types provide support for distributed storage mechanisms. More information is provided at https://libvirt.org/storage.html.
Storage pools help to abstract underlying storage resources from the VM configurations. This
abstraction is particularly useful if you suspect that resources such as virtual disks may
change physical location or media type. Abstraction becomes even more important when
using network based storage because target paths, DNS or IP addressing may change over
time. By abstracting this configuration information, you can manage resources in a
consolidated way without needing to update multiple VM configurations.
You can create transient storage pools that are available until the host reboots, or you can
define persistent storage pools that are restored after a reboot.
Transient storage pools are started automatically as soon as they are created, and the volumes that are within them are made available to VMs immediately. However, any configuration information about a transient storage pool is lost after the pool is stopped, the host reboots, or the libvirtd service is restarted. The storage itself is unaffected, but VMs configured to use resources in a transient storage pool lose access to these resources. Transient storage pools are created using the virsh pool-create command.
For most use cases, you should consider creating persistent storage pools. Persistent
storage pools are defined as a configuration entry that is stored within /etc/libvirt.
Persistent storage pools can be stopped and started and can be configured to start when the
host system boots. Libvirt can take care of automatically mounting and enabling access to
network based resources when persistent storage is configured. Persistent storage pools are
created using the virsh pool-define command, and usually need to be started after they
have been created before you are able to use them.
You can create other storage pool types by using the same virsh pool-define-as command. The options that you use with this command depend on the storage type that you select when you create your storage pool. For example, to create file system based storage that mounts a formatted block device, /dev/sdc1, at the mount point /share/storage_mount, you can run:
virsh pool-define-as pool_fs fs --source-dev /dev/sdc1 --target /share/storage_mount
Similarly, you can add an NFS share as a storage pool, for example:
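A sketch of an NFS (netfs) pool definition follows; the server name nfs.example.com, the export path, and the local mount point are placeholders for illustration:

```shell
# Define a persistent pool backed by an NFS export
virsh pool-define-as pool_nfs netfs \
  --source-host nfs.example.com --source-path /export/storage \
  --target /share/nfs_mount
```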
It is also possible to create an XML file representation of the storage pool configuration and load the configuration information from the file by using the virsh pool-define command. For example, you could create a storage pool for a Gluster volume by creating an XML file named gluster_pool.xml with the following content:
<pool type='gluster'>
<name>pool_gluster</name>
<source>
<host name='192.0.2.1'/>
<dir path='/'/>
<name>gluster-vol1</name>
</source>
</pool>
The previous example assumes that a Gluster server is already configured and
running on a host with IP address 192.0.2.1 and that a volume named gluster-vol1 is
exported. Note that the glusterfs-fuse package must be installed on the host and
you should verify that you are able to mount the Gluster volume before attempting to
use it with libvirt.
Run the following command to load the configuration information from the
gluster_pool.xml file into libvirt:
virsh pool-define gluster_pool.xml
Note that Oracle recommends using Oracle Linux Virtualization Manager when
attempting to use complex network based storage such as Gluster.
For more information on the XML format for a storage pool definition, see https://libvirt.org/formatstorage.html#StoragePool.
Use this command after you create a new storage pool to verify that the storage pool is available.
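A typical listing invocation, where the --all option also includes pools that are defined but not currently active:

```shell
virsh pool-list --all
```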
Storage Volumes
Storage volumes are created within a storage pool and represent the virtual disks that can be
loaded as block devices within one or more VMs. Some storage pool types do not need storage volumes to be created individually because the storage mechanism may present these as block devices already. For example, iSCSI storage pools present the individual logical unit numbers (LUNs) for an iSCSI target as separate block devices.
In some cases, such as when using directory or file system based storage pools, storage
volumes are individually created for use as virtual disks. In these cases, several disk image
formats are supported although some formats, such as qcow2, may require additional tools
such as qemu-img for creation.
For disk based pools, standard partition type labels are used to represent individual volumes;
while for pools based on the logical volume manager, the volumes themselves are presented
individually within the pool.
Note that storage volumes can be sparsely allocated when they are created by setting the
allocation value for the initial size of the volume to a value lower than the capacity of the
volume. The allocation indicates the initial or current physical size of the volume, while the
capacity indicates the size of the virtual disk as it is presented to the VM. Sparse allocation is
frequently used to over-subscribe physical disk space where VMs may ultimately require
more disk space than is initially available. For a non-sparsely allocated volume, the allocation
matches or exceeds the capacity of the volume. Exceeding the capacity of the disk provides
space for metadata, if required.
Note that you can use the --pool option if you have volumes with matching names in
different pools on the same system and you need to specify the pool to use for any virsh
volume operation. This practice is replicated across subsequent examples.
The XML for a volume may depend on the pool type and the volume that is being created, but
in the case of a sparsely allocated 10 GB image in qcow2 format, the XML might look similar
to the following:
<volume>
<name>volume1</name>
<allocation>0</allocation>
<capacity unit="G">10</capacity>
<target>
<path>/home/testuser/.local/share/libvirt/images/volume1.qcow2</path>
<permissions>
<owner>107</owner>
<group>107</group>
<mode>0744</mode>
<label>virt_image_t</label>
</permissions>
</target>
</volume>
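Given an XML definition like the one above saved as, for example, volume1.xml, the volume can then be created in an existing pool with the virsh vol-create command; the pool name here is illustrative:

```shell
virsh vol-create storage_pool1 volume1.xml
```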
It is generally not advisable to reduce the size of an existing volume, as doing so can risk
destroying data. However, if you attempt to resize a volume to reduce it, you must specify the
--shrink option with the new size value.
You can equally use virt-install to create a virtual disk as a volume within an existing
storage pool automatically at install. For example, to create a new disk image as a volume
within the storage pool named storage_pool1:
virt-install --name guest --disk pool=storage_pool1,size=10
...
Tools to attach a volume to an existing VM are limited and it is generally recommended that
you use a GUI tool like virt-manager or cockpit to assist with this operation. If you
expect that you may need to work with volumes a lot, consider using Oracle Linux
Virtualization Manager.
You can use the virsh attach-disk command to attach a disk image to an existing VM. This command requires that you provide the path to the disk image when you attach it to the VM. If the disk image is a volume, you can obtain its correct path by running the virsh vol-list command first.
virsh vol-list storage_pool_1
Attach the disk image within the existing VM configuration so that it is persistent and attaches
itself on each subsequent restart of the VM:
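As a sketch, assuming the volume path returned by virsh vol-list and a free target device name vdb inside the guest:

```shell
virsh attach-disk guest1 \
  /home/testuser/.local/share/libvirt/images/volume1.qcow2 vdb \
  --persistent
```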
Note that you can use the --live option with this command to temporarily attach a disk image to a running VM; or you can use the --persistent option to attach a disk image to a running VM and also update its configuration so that the disk is attached on each subsequent restart.
Note that you can use the --live option with this command to temporarily detach a disk image from a running VM; or you can use the --persistent option to detach a disk image from a running VM and also update its configuration so that the disk is permanently detached from the VM on subsequent restarts.
Where disks are attached as block devices within a guest VM, you can obtain a listing
of the block devices attached to a guest so that you are able to identify the disk target
that is associated with a particular source image file, by running the virsh
domblklist command, for example:
virsh domblklist guest1
Detaching a virtual disk from the VM does not delete the disk image file or volume from the host system. If you need to delete a virtual disk, you can either manually delete the source image file or delete the volume from the host.
You can verify that the resize has worked by checking the block device information for the running VM, using the virsh domblkinfo command. For example, to list all block devices attached to guest1 in human readable format:
virsh domblkinfo guest1 --all --human
The virsh blockresize command enables you to scale up a disk on a live VM, but
it does not guarantee that the VM is able to immediately identify that the additional disk
resource is available. For some guest operating systems, restarting the VM may be
required before the guest is capable of identifying the additional resources available.
Individual partitions and file systems on the block device are not scaled using this command. You need to perform these operations manually from within the guest, as required.
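As a sketch, extending the vda disk of guest1 to 20 GB on a live VM might look like the following (the guest name, target device, and size are illustrative):

```shell
virsh blockresize guest1 vda 20G
```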
Chapter 3
Working With Memory and CPU Allocation
For example, run the following command to set the number of vCPUs on a running VM, where domain is the VM name, ID, or UUID, and count is the new number of vCPUs:
virsh setvcpus domain count --live
Note that the count value cannot exceed the number of CPUs assigned to the guest VM. The
count value also might be limited by the host, hypervisor, or from the original description of
the guest VM.
The following command options are available:
• domain
A string value representing the VM name, ID or UUID.
• count
A number value representing the number of vCPUs.
• --maximum
Controls the maximum number of vCPUs that can be hot plugged the next time the guest
VM is booted. This option can only be used with the --config option.
• --config
Changes the stored XML configuration for the guest VM and takes effect when the guest
is started.
• --live
The guest VM must be running and the change takes place immediately, thus hot
plugging a vCPU.
• --current
Affects the current guest VM.
• --guest
Modifies the CPU state in the current guest VM.
• --hotpluggable
Marks vCPUs added by this command as hot-pluggable, which means that they can later
be hot-unplugged.
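The persistent vCPU settings are stored in the guest's domain XML. For example, a guest defined with a maximum of four vCPUs, two of which are active at boot, carries an element similar to the following (the values shown are illustrative):

```xml
<vcpu placement='static' current='2'>4</vcpu>
```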
Configuring Memory Allocation
To change the amount of memory that is allocated to a VM, run the virsh setmem command:
virsh setmem domain-name_id_or_uuid size
You must specify the size as a scaled integer in kibibytes, and the new value cannot
exceed the amount that you specified for the VM. Values lower than 64 MB are unlikely to
work with most VM operating systems. A higher maximum memory value does not
affect active VMs. If the new value is lower than the available memory, the memory
allocation shrinks, possibly causing the VM to crash.
The following command options are available:
• domain
A string value representing the VM name, ID or UUID.
• size
A number value representing the new memory size, as a scaled integer. The
default unit is KiB, but you can select from other valid memory units:
– b or bytes for bytes
– KB for kilobytes (10^3 or blocks of 1,000 bytes)
– k or KiB for kibibytes (2^10 or blocks of 1,024 bytes)
– MB for megabytes (10^6 or blocks of 1,000,000 bytes)
– M or MiB for mebibytes (2^20 or blocks of 1,048,576 bytes)
– GB for gigabytes (10^9 or blocks of 1,000,000,000 bytes)
– G or GiB for gibibytes (2^30 or blocks of 1,073,741,824 bytes)
– TB for terabytes (10^12 or blocks of 1,000,000,000,000 bytes)
– T or TiB for tebibytes (2^40 or blocks of 1,099,511,627,776 bytes)
• --config
Changes the stored XML configuration for the guest VM and takes effect when the
guest is started.
• --live
The guest VM must be running and the change takes place immediately, thus hot
plugging memory.
• --current
Affects the memory on the current guest VM.
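The unit arithmetic above can be sanity-checked with shell arithmetic; the following shows the difference between 2 GiB and 2 GB when expressed in KiB, the default unit:

```shell
# 2 GiB in KiB: 2 * 2^30 bytes, divided by 2^10 bytes per KiB
echo $(( 2 * 1024 * 1024 ))                 # 2097152
# 2 GB (decimal) in KiB: 2 * 10^9 bytes, divided by 2^10 bytes per KiB
echo $(( 2 * 1000 * 1000 * 1000 / 1024 ))   # 1953125
```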
To set the maximum memory that can be allocated to a VM, run:
virsh setmaxmem domain-name_id_or_uuid size --current
You must specify the size as a scaled integer in kibibytes unless you also specify a
supported memory unit, which are the same as for the virsh setmem command.
All other options for virsh setmaxmem are the same as for virsh setmem, with one
caveat: if you specify the --live option, be aware that not all hypervisors support live
changes of the maximum memory limit.
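These two settings correspond to the <memory> (maximum) and <currentMemory> (current allocation) elements in the guest's domain XML, for example (the values shown are illustrative):

```xml
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
```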
Setting Up Networking for KVM Guests
KVM can use SR-IOV for passthrough networking where a PCIe interface
supports this functionality. The SR-IOV hardware must be properly set up and
configured on the host system before you can attach the device to a VM and
configure the network to use it.
Where network configuration is likely to be complex, Oracle recommends using Oracle
Linux Virtualization Manager. Simple networking configurations and operations are
described here to facilitate the majority of basic deployment scenarios.
You can find out more about a network using the virsh net-info command. For
example, to find out about the default network, run:
virsh net-info default
Note that the virtual network uses a network bridge, called virbr0, not to be confused
with traditional bridged networking. The virtual bridge is not connected to a physical
interface and relies on NAT and IP forwarding to connect VMs to the physical network
beyond. Libvirt also handles IP address assignment for VMs using DHCP. The default
network is typically in the range 192.168.122.1/24. To see the full configuration
information about a network, use the virsh net-dumpxml command:
virsh net-dumpxml default
...
  <mac address='52:54:00:82:75:1d'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
The optional model parameter controls the model of the network device that is
presented to the VM. By default, the virtio model is used, but alternate models, such
as e1000 or rtl8139, are available. Run virsh help attach-interface for more
information, or refer to the VIRSH(1) man page.
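For example, the following sketch attaches a vNIC on the default virtual network using the e1000 model instead of virtio (guest1 is a placeholder VM name):

```shell
# Attach a persistent vNIC on the 'default' network with the e1000 model.
# Add --live as well to apply the change to a running VM immediately.
virsh attach-interface --domain guest1 --type network \
  --source default --model e1000 --config
```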
Remove a vNIC from a VM using the virsh detach-interface command, for
example:
virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --config
Note that the domain or VM name and type are required parameters. If the VM has
more than one vNIC attached, you must specify the mac parameter to provide the MAC
address of the vNIC that you wish to remove. You can obtain this value by listing the
vNICs that are currently attached to a VM. For example, you can run:
virsh domiflist guest
Once the bridge is created, you can attach it by using the virsh attach-
interface command as described in Adding or Removing a vNIC.
There are several issues that you might need to be aware of when using traditional Linux
bridged networking for KVM guests. For instance, it is not simple to set up a bridge on
a wireless interface because of the limited number of addresses available in 802.11 frames.
Furthermore, the complexity of the code to handle software bridges can result in
reduced throughput, increased latency, and additional configuration complexity. The
main advantage that this approach offers is that it allows the host system to
communicate across the network stack directly with any guests configured to use
bridged networking.
Most of the issues related to using traditional Linux bridges can be easily overcome by
using the macvtap driver, which simplifies virtualized bridged networking significantly. For
most bridged network configurations in KVM, this is the preferred approach because it
offers better performance and is easier to configure. The macvtap driver is used
when the network type is set to direct.
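In the domain XML, a macvtap-backed vNIC appears as a direct type interface, where the mode attribute selects one of the behaviors described below (eth0 is a placeholder for the host's physical interface):

```xml
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
```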
The macvtap driver creates endpoint devices that follow the tun/tap ioctl interface model to
extend an existing network interface so that KVM can use it to connect to the physical
network interface directly to support different network functions. These functions can be
controlled by setting a different mode for the interface. The following modes are available:
• vepa (Virtual Ethernet Port Aggregator) is the default mode and forces all data from a
vNIC out of the physical interface to a network switch. If the switch supports hairpin
mode, different vNICs connected to the same physical interface are able to communicate
via the switch. Many switches currently do not support hairpin mode, which means that
VMs with direct connection interfaces running in VEPA mode are unable to communicate,
but can connect to the external network via the switch.
• bridge mode connects all vNICs directly to each other so that traffic between VMs using
the same physical interface is not sent out to the switch but is handled directly. This
mode is the most useful option when using switches that do not support hairpin mode,
and when you need maximum performance for communications between VMs. It is
important to note that when configured in this mode, unlike a traditional software bridge,
the host is unable to use this interface to communicate directly with the VM.
• private mode behaves like a VEPA mode vNIC in the absence of a switch supporting
hairpin mode. However, even if the switch does support hairpin mode, two VMs
connected to the same physical interface are unable to communicate with each other.
This option has limited use cases.
• passthrough mode attaches a physical interface device or an SR-IOV Virtual Function
(VF) directly to the vNIC without losing the migration capability. All packets are sent
directly to the configured network device. There is a one-to-one mapping between
network devices and VMs when configured in passthrough mode because a network
device cannot be shared between VMs in this configuration.
Unfortunately, the virsh attach-interface command does not provide an option to
specify the different modes that are available when attaching a direct type interface that
uses the macvtap driver, and it defaults to vepa mode. The graphical virt-manager utility
makes setting up bridged networks using macvtap significantly easier and provides options
for each mode.
Nonetheless, it is not difficult to change the configuration of a VM by editing its XML
definition directly. The following steps configure a bridged network using the macvtap
driver on an existing VM:
2. Dump the XML for the VM configuration and copy it to a file that you can edit:
virsh dumpxml guest1 > /tmp/guest1.xml
3. Edit the XML for the VM to change the vepa mode interface to use bridge mode. If there
are many interfaces connected to the VM, or you want to review your changes, you can
do this in a text editor. If you are happy to make this change globally, run:
sed -i "s/mode='vepa'/mode='bridge'/g" /tmp/guest1.xml
4. Remove the existing configuration for this VM and replace it with the modified
configuration in the XML file:
virsh undefine guest1
virsh define /tmp/guest1.xml
5. Restart the VM for the changes to take effect. The direct interface is attached in
bridge mode, is persistent, and is automatically started when the VM boots.
Cloning Virtual Machines
Note:
For more information on how to use the virt-sysprep utility to prepare a VM and
understand the available options, see https://libguestfs.org/virt-sysprep.1.html.
1. Build the VM that you want to use for the clone or template.
a. Install any needed software.
b. Configure any non-unique operating system and application settings.
2. Remove any persistent or unique network configuration details.
a. Run the following command to remove any persistent udev rules:
rm -f /etc/udev/rules.d/70-persistent-net.rules
Note:
If you do not remove the udev rules, the name of the first NIC might be
eth1 instead of eth0.
b. Remove unique network details from the /etc/sysconfig/network-scripts/ifcfg-eth[x]
scripts. After modification, each ifcfg file should not include a HWADDR entry or any
unique information, and at a minimum should include the following lines:
DEVICE=eth[x]
ONBOOT=yes
Important:
You must remove the HWADDR entry because if its address does not
match the new guest's MAC address, the ifcfg file is ignored.
Note:
Ensure that any additional unique information is removed from the
ifcfg files.
3. If the guest VM from which you want to create a clone is registered with ULN, you
must de-register it. For more information, see the Oracle Linux: Unbreakable Linux
Network User's Guide for Oracle Linux 6 and Oracle Linux 7.
4. Run the following command to remove any sshd public/private key pairs:
rm -rf /etc/ssh/ssh_host_*
Note:
Removing ssh keys prevents problems with ssh clients not trusting these
hosts.
• For Oracle Linux 7, run the following commands to enable the first boot and
initial-setup wizards:
sed -i 's/RUN_FIRSTBOOT=NO/RUN_FIRSTBOOT=YES/' /etc/sysconfig/firstboot
systemctl enable firstboot-graphical
systemctl enable initial-setup-graphical
Note:
The wizards that run on the next boot depend on the configurations
that have been removed from the VM. Additionally, we recommend that
you change the hostname on the first boot of the clone.
Important:
Before proceeding with cloning, shut down the VM. You can clone a VM using
virt-clone or virt-manager.
Run virt-clone --help to see a complete list of options, or refer to the VIRT-CLONE(1)
man page.
Run the following command to clone a VM on the default connection, automatically
generating a new name and disk clone path:
virt-clone --original vm-name --auto-clone
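Alternatively, you can name the clone and its disk image explicitly. The following is a sketch with placeholder VM names and disk paths:

```shell
# Clone 'guest1' to a new VM named 'guest1-clone', writing the cloned
# disk image to the given path (names and path are placeholders).
virt-clone --original guest1 --name guest1-clone \
  --file /var/lib/libvirt/images/guest1-clone.qcow2
```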
4
Known Issues for Oracle Linux KVM
This chapter provides information about known issues for Oracle Linux KVM. If a workaround
is available, that information is also provided.
To work around this issue so that KVM guests can run the updated qemu version, edit the
XML file of each KVM guest and add the caching_mode='on' attribute to the driver
sub-element of the iommu element, as shown in the following example:
<iommu model='intel'>
  <driver aw_bits='48' caching_mode='on'/>
</iommu>
(Bug ID 32312933)