Red Hat Virtualization 4.4 Planning and Prerequisites Guide
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides requirements, options, and recommendations for Red Hat Virtualization
environments.
Table of Contents

PREFACE

CHAPTER 1. RED HAT VIRTUALIZATION ARCHITECTURE
    1.1. SELF-HOSTED ENGINE ARCHITECTURE
    1.2. STANDALONE MANAGER ARCHITECTURE

CHAPTER 2. REQUIREMENTS
    2.1. RED HAT VIRTUALIZATION MANAGER REQUIREMENTS
        2.1.1. Hardware Requirements
        2.1.2. Browser Requirements
        2.1.3. Client Requirements
        2.1.4. Operating System Requirements
    2.2. HOST REQUIREMENTS
        2.2.1. CPU Requirements
            2.2.1.1. Checking if a Processor Supports the Required Flags
        2.2.2. Memory Requirements
        2.2.3. Storage Requirements
        2.2.4. PCI Device Requirements
        2.2.5. Device Assignment Requirements
        2.2.6. vGPU Requirements
    2.3. NETWORKING REQUIREMENTS
        2.3.1. General requirements
        2.3.2. Network range for self-hosted engine deployment
        2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing
        2.3.4. Red Hat Virtualization Manager Firewall Requirements
        2.3.5. Host Firewall Requirements
        2.3.6. Database Server Firewall Requirements
        2.3.7. Maximum Transmission Unit Requirements

CHAPTER 3. CONSIDERATIONS
    3.1. HOST TYPES
        3.1.1. Red Hat Virtualization Hosts
        3.1.2. Red Hat Enterprise Linux hosts
    3.2. STORAGE TYPES
        3.2.1. NFS
        3.2.2. iSCSI
        3.2.3. Fibre Channel
        3.2.4. Fibre Channel over Ethernet
        3.2.5. Red Hat Hyperconverged Infrastructure
        3.2.6. POSIX-Compliant FS
        3.2.7. Local Storage
    3.3. NETWORKING CONSIDERATIONS
    3.4. DIRECTORY SERVER SUPPORT
    3.5. INFRASTRUCTURE CONSIDERATIONS
        3.5.1. Local or Remote Hosting
        3.5.2. Remote Hosting Only

CHAPTER 4. RECOMMENDATIONS
    4.1. GENERAL RECOMMENDATIONS
    4.2. SECURITY RECOMMENDATIONS
    4.3. HOST RECOMMENDATIONS
    4.4. NETWORKING RECOMMENDATIONS

APPENDIX A. LEGAL NOTICE
PREFACE
Red Hat Virtualization is made up of connected components that each play different roles in the
environment. Planning and preparing for their requirements in advance helps these components
communicate and run efficiently.
CHAPTER 1. RED HAT VIRTUALIZATION ARCHITECTURE

1.1. SELF-HOSTED ENGINE ARCHITECTURE
One Red Hat Virtualization Manager virtual machine that is hosted on the self-hosted engine
nodes. The RHV-M Appliance is used to automate the installation of a Red Hat Enterprise Linux
8 virtual machine, and the Manager on that virtual machine.
A minimum of two self-hosted engine nodes for virtual machine high availability. You can use
Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent)
runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. The HA
services run on all self-hosted engine nodes to manage the high availability of the Manager
virtual machine.
One storage service, which can be hosted locally or on a remote server, depending on the
storage type used. The storage service must be accessible to all hosts.
1.2. STANDALONE MANAGER ARCHITECTURE

One Red Hat Virtualization Manager machine. The Manager is typically deployed on a physical
server. However, it can also be deployed on a virtual machine, as long as that virtual machine is
hosted in a separate environment. The Manager must run on Red Hat Enterprise Linux 8.
A minimum of two hosts for virtual machine high availability. You can use Red Hat Enterprise
Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to
facilitate communication with the Red Hat Virtualization Manager.
One storage service, which can be hosted locally or on a remote server, depending on the
storage type used. The storage service must be accessible to all hosts.
CHAPTER 2. REQUIREMENTS
Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat
Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware
certification?. To confirm whether specific hardware items are certified for use with Red Hat Enterprise
Linux, see Red Hat certified hardware .
Network Interface: 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps (both the minimum and the recommended configuration).
Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red
Hat Engineering is committed to fixing issues with browsers on this tier.
Tier 2: Browser and operating system combinations that are partially tested, and are likely to
work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with
browsers on this tier.
Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier.
You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You
can install the QXLDOD graphical driver in the guest operating system to improve the functionality of
SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels.
NOTE
SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified
nor tested.
Do not install any additional packages after the base installation, as they may cause dependency issues
when attempting to install the packages required by the Manager.
Do not enable additional repositories other than those required for the Manager installation.
For more information on the requirements and limitations that apply to guests, see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization.
AMD
Opteron G4
Opteron G5
EPYC
Intel
Nehalem
Westmere
SandyBridge
IvyBridge
Haswell
Broadwell
Skylake Client
Skylake Server
Cascadelake Server
IBM
POWER8
POWER9
For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example:

Intel Cascadelake Server Family
Secure Intel Cascadelake Server Family

The Secure CPU type contains the latest updates. For details, see BZ#1731395.
You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure
that the change is applied.
Procedure
1. At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and
select the Boot or Boot with serial console entry from the list.
2. Press Tab to edit the kernel parameters for the selected option.
3. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue.
4. Press Enter to boot into rescue mode.
5. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command:
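The command itself did not survive extraction; a minimal sketch of the standard check, which reads the CPU flags from /proc/cpuinfo (vmx indicates Intel VT-x, svm indicates AMD-V):

```shell
# Print any virtualization flags present; output such as "vmx" or "svm"
# means the processor supports hardware virtualization.
grep -E -o 'svm|vmx' /proc/cpuinfo | sort -u
```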
If any output is shown, the processor is hardware virtualization capable. If no output is shown, your
processor may still support hardware virtualization; in some circumstances manufacturers disable the
virtualization extensions in the BIOS. If you believe this to be the case, consult the system’s BIOS and
the motherboard manual provided by the manufacturer.
However, the amount of RAM required varies depending on guest operating system requirements, guest
application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM
for virtualized guests, allowing you to provision guests with RAM requirements greater than what is
physically present, on the assumption that the guests are not all working concurrently at peak load. KVM
does this by only allocating RAM for guests as required and shifting underutilized guests into swap.
The minimum storage requirements of RHVH are documented in this section. The storage requirements
for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing
configuration but are expected to be greater than those of RHVH.
The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space.
/ (root) - 6 GB
/home - 1 GB
/tmp - 1 GB
/boot - 1 GB
/var - 5 GB
/var/crash - 10 GB
/var/log - 8 GB
/var/log/audit - 2 GB
/var/tmp - 10 GB
swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details.
Anaconda reserves 20% of the thin pool size within the volume group for future metadata
expansion. This is to prevent an out-of-the-box configuration from running out of space under
normal usage conditions. Overprovisioning of thin pools during installation is also not supported.
If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at
least 10 GB.
If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization.
For information about how to use PCI Express and conventional PCI devices with Intel Q35-based
virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine .
CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by
default.
All PCIe switches and bridges between the PCIe device and the root port should support ACS.
For example, if a switch does not support ACS, all devices behind that switch share the same
IOMMU group, and can only be assigned to the same virtual machine.
For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based
NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics
devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of
the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation
and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the
NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
Check vendor specification and datasheets to confirm that your hardware meets these requirements.
The lspci -v command can be used to print information for PCI devices already installed on a system.
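A quick way to inspect these capabilities on an existing system (a sketch; command availability and output vary by distribution and hardware, and dmesg may require root):

```shell
# List PCI devices with capability details; look for ACS support on
# switches and bridges between the device and the root port.
lspci -vv
# Confirm the kernel enabled the IOMMU (Intel VT-d logs DMAR entries,
# AMD logs AMD-Vi entries).
dmesg | grep -i -e DMAR -e IOMMU
```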
A vGPU-compatible GPU is required. Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Host Devices tab of the virtual machine in the Administration Portal.
When installing the self-hosted engine using the command line, you can set the deployment script to use
an alternate /24 network range with the option --ansible-extra-vars=he_ipv4_subnet_prefix=PREFIX,
where PREFIX is the prefix for the default range. For example:
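A sketch of such a command, assuming the standard hosted-engine deployment script (the prefix value is a placeholder; substitute your own):

```shell
# Deploy the self-hosted engine using the 192.0.2.0/24 range instead of
# the default range.
hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.0.2
```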
NOTE
You can only set another range by installing Red Hat Virtualization as a self-hosted
engine using the command line.
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination
address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP
servers.
IMPORTANT
The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host
and Red Hat Enterprise Linux host) must have a fully qualified domain name and
full, perfectly-aligned forward and reverse name resolution.
Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file
typically requires more work and has a greater chance for errors.
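One way to spot-check that forward and reverse resolution align, using a hypothetical FQDN and address (substitute your own):

```shell
# Forward lookup: the FQDN should resolve to the host's IP address
dig +short host01.example.com
# Reverse lookup: the IP address should resolve back to the same FQDN
dig +short -x 192.0.2.10
```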
By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If
you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers.
Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to
connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an
error (network error, storage error…) and cannot function as hosts, they must be able to connect to
other hosts in the data center.
The specific port number depends on the type of the fence agent you are using and how it is configured.
The firewall requirement tables in the following sections do not represent this option.
NOTE
A port for the OVN northbound database (6641) is not listed because, in the
default configuration, the only client for the OVN northbound database (6641) is
ovirt-provider-ovn. Because they both run on the same host, their
communication is not visible to the network.
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on
any destination address. If you disable outgoing traffic, make exceptions for the
Manager to send requests to DNS and NTP servers. Other nodes may also
require DNS and NTP. In that case, consult the requirements for those nodes and
configure the firewall accordingly.
To disable automatic firewall configuration when adding a new host, clear the Automatically configure
host firewall check box under Advanced Parameters.
To customize the host firewall rules, see RHV: How to customize the Host’s firewall rules? .
H11: Port 54322, TCP. Source: Red Hat Virtualization Manager (ovirt-imageio service). Destination: Red Hat Virtualization Hosts and Red Hat Enterprise Linux hosts. Purpose: Required for communication with the ovirt-imageio service. Encrypted by default: Yes.
H15: Port 4500, TCP and UDP. Source: Red Hat Virtualization Hosts. Destination: Red Hat Virtualization Hosts. Purpose: Internet Security Protocol (IPSec). Encrypted by default: Yes.
NOTE
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any
destination address. If you disable outgoing traffic, make exceptions for the Red Hat
Virtualization Hosts and Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes
may also require DNS and NTP. In that case, consult the requirements for those nodes
and configure the firewall accordingly.
Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the
database must allow connections from that system.
D1: Port 5432, TCP and UDP. Source: Red Hat Virtualization Manager and Data Warehouse service. Destination: Manager (engine) database server and Data Warehouse (ovirt-engine-history) database server. Purpose: Default port for PostgreSQL database connections. Encrypted by default: No, but can be enabled.
D2: Port 5432, TCP and UDP. Source: External systems. Destination: Data Warehouse (ovirt-engine-history) database server. Purpose: Default port for PostgreSQL database connections; disabled by default. Encrypted by default: No, but can be enabled.
CHAPTER 3. CONSIDERATIONS
This chapter describes the advantages, limitations, and available options for various Red Hat
Virtualization components.
All managed hosts within a cluster must have the same CPU type. Intel and AMD CPUs cannot co-exist
within the same cluster.
For information about supported maximums and limits, such as the maximum number of hosts that the
Red Hat Virtualization Manager can support, see Supported Limits for Red Hat Virtualization .
RHVH is included in the subscription for Red Hat Virtualization. Red Hat Enterprise Linux hosts
may require additional subscriptions.
RHVH is deployed as a single image. This results in a streamlined update process; the entire
image is updated as a whole, as opposed to packages being updated individually.
Only the packages and services needed to host virtual machines or manage the host itself are
included. This streamlines operations and reduces the overall attack vector; unnecessary
packages and services are not deployed and, therefore, cannot be exploited.
The Cockpit web interface is available by default and includes extensions specific to Red Hat
Virtualization, including virtual machine monitoring tools and a GUI installer for the self-hosted
engine. Cockpit is supported on Red Hat Enterprise Linux hosts, but must be manually installed.
Red Hat Enterprise Linux hosts are highly customizable, so may be preferable if, for example,
your hosts require a specific file system layout.
Red Hat Enterprise Linux hosts are better suited for frequent updates, especially if additional
packages are installed. Individual packages can be updated, rather than a whole image.
A storage domain can be made of either block devices (iSCSI or Fibre Channel) or a file system.
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can
provide better performance, especially when using large files, and it is also necessary when you use tools
that require 4K compatibility, such as VDO.
IMPORTANT
Red Hat Virtualization currently does not support block storage with a block size of 4K.
You must configure block storage in legacy (512b block) mode.
The storage types described in the following sections are supported for use as data storage domains.
ISO and export storage domains only support file-based storage types. The ISO domain supports local
storage when used in a local storage data center.
3.2.1. NFS
NFS versions 3 and 4 are supported by Red Hat Virtualization 4. Production workloads require an
enterprise-grade NFS server, unless NFS is only being used as an ISO storage domain. When enterprise
NFS is deployed over 10GbE, segregated with VLANs, and individual services are configured to use
specific ports, it is both fast and secure.
As NFS exports are grown to accommodate more storage needs, Red Hat Virtualization recognizes the
larger data store immediately. No additional configuration is necessary on the hosts or from within Red
Hat Virtualization. This provides NFS a slight edge over block storage from a scale and operational
perspective.
See:
Network File System (NFS) in the Red Hat Enterprise Linux Storage Administration Guide .
3.2.2. iSCSI
Production workloads require an enterprise-grade iSCSI server. When enterprise iSCSI is deployed over
10GbE, segregated with VLANs, and utilizes CHAP authentication, it is both fast and secure. iSCSI can
also use multipathing to improve high availability.
Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than
300 LUNs are permitted.
See:
Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide .
3.2.3. Fibre Channel

Fibre Channel is both fast and secure, and should be taken advantage of if it is already in use in the target data center. It also has the advantage of low CPU overhead as compared to iSCSI and NFS. Fibre Channel can also use multipathing to improve high availability.
Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than
300 LUNs are permitted.
See:
Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide .
3.2.4. Fibre Channel over Ethernet

Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than
300 LUNs are permitted.
See:
Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide .
How to Set Up Red Hat Virtualization Manager to Use FCoE in the Administration Guide.
3.2.6. POSIX-Compliant FS
Other POSIX-compliant file systems can be used as storage domains in Red Hat Virtualization, as long
as they are clustered file systems, such as Red Hat Global File System 2 (GFS2), and support sparse
files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O,
making it incompatible with Red Hat Virtualization.
3.2.7. Local Storage

Local storage is set up on an individual host, using the host’s own resources. When you set up a host to
use local storage, it is automatically added to a new data center and cluster that no other hosts can be
added to. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.
For Red Hat Virtualization Hosts, local storage should always be defined on a file system that is separate
from / (root). Use a separate logical volume or disk.
3.3. NETWORKING CONSIDERATIONS

Logical networks may be supported using physical devices such as NICs, or logical devices such as
network bonds. Bonding improves high availability, and provides increased fault tolerance, because all
network interface cards in the bond must fail for the bond itself to fail. Bonding modes 1, 2, 3, and 4
support both virtual machine and non-virtual machine network types. Modes 0, 5, and 6 only support
non-virtual machine networks. Red Hat Virtualization uses Mode 4 by default.
It is not necessary to have one device for each logical network, as multiple logical networks can share a
single device by using Virtual LAN (VLAN) tagging to isolate network traffic. To make use of this
feature, VLAN tagging must also be supported at the switch level.
The limits that apply to the number of logical networks that you may define in a Red Hat Virtualization
environment are:
The number of logical networks attached to a host is limited to the number of available network
devices combined with the maximum number of Virtual LANs (VLANs), which is 4096.
The number of networks that can be attached to a host in a single operation is currently limited
to 50.
The number of logical networks in a cluster is limited to the number of logical networks that can
be attached to a host as networking must be the same for all hosts in a cluster.
The number of logical networks in a data center is limited only by the number of clusters it
contains in combination with the number of logical networks permitted per cluster.
IMPORTANT
Take additional care when modifying the properties of the Management network
(ovirtmgmt). Incorrect changes to the properties of the ovirtmgmt network may cause
hosts to become unreachable.
IMPORTANT
If you plan to use Red Hat Virtualization to provide services for other environments,
remember that the services will stop if the Red Hat Virtualization environment stops
operating.
Red Hat Virtualization is fully integrated with Cisco Application Centric Infrastructure (ACI), which
provides comprehensive network management capabilities, thus mitigating the need to manually
configure the Red Hat Virtualization networking infrastructure. The integration is performed by
configuring Red Hat Virtualization on Cisco’s Application Policy Infrastructure Controller (APIC) version
3.1(1) and later, according to Cisco’s documentation.
3.4. DIRECTORY SERVER SUPPORT

You can also attach an external directory server to your Red Hat Virtualization environment and use it as
an external domain. User accounts created on external domains are known as directory users.
Attachment of more than one directory server to the Manager is also supported.
The following directory servers are supported for use with Red Hat Virtualization. For more detailed
information on installing and configuring a supported directory server, see the vendor’s documentation.
OpenLDAP
IMPORTANT
A user with permissions to read all users and groups must be created in the directory
server specifically for use as the Red Hat Virtualization administrative user. Do not use
the administrative user for the directory server as the Red Hat Virtualization
administrative user.
To migrate Data Warehouse post-installation, see Migrating Data Warehouse to a Separate Machine
in the Data Warehouse Guide.
You can also host the Data Warehouse service and the Data Warehouse database separately from one
another.
Manager database
To host the Manager database on the Manager, select Local when prompted by engine-setup.
To host the Manager database on a remote machine, see Preparing a Remote PostgreSQL Database
in Installing Red Hat Virtualization as a standalone Manager with remote databases before running
engine-setup on the Manager.
To migrate the Manager database post-installation, see Migrating the Engine Database to a Remote
Server Database in the Administration Guide.
Websocket proxy
To host the websocket proxy on the Manager, select Yes when prompted by engine-setup.
IMPORTANT
Self-hosted engine environments use an appliance to install and configure the Manager
virtual machine, so Data Warehouse, the Manager database, and the websocket proxy can
only be made external post-installation.
DNS
Due to the extensive use of DNS in a Red Hat Virtualization environment, running the environment’s
DNS service as a virtual machine hosted in the environment is not supported.
Storage
With the exception of local storage , the storage service must not be on the same machine as the
Manager or any host.
Identity Management
IdM (ipa-server) is incompatible with the mod_ssl package, which is required by the Manager.
CHAPTER 4. RECOMMENDATIONS
This chapter describes configuration that is not strictly required, but may improve the performance or
stability of your environment.
Avoid running any service that Red Hat Virtualization depends on as a virtual machine in the same environment. If you do, plan carefully to minimize downtime in case the virtual machine running that service becomes unavailable.
Ensure the bare-metal host or virtual machine that the Red Hat Virtualization Manager will be
installed on has enough entropy. Values below 200 can cause the Manager setup to fail. To
check the entropy value, run cat /proc/sys/kernel/random/entropy_avail. To increase entropy,
install the rng-tools package and follow the steps in How can I customize rngd service startup? .
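The entropy check described above can be run directly:

```shell
# Check the kernel's available entropy; values below 200 can cause
# the Manager setup to fail.
cat /proc/sys/kernel/random/entropy_avail
```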
You can automate the deployment of hosts and virtual machines using PXE, Kickstart, Satellite,
CloudForms, Ansible, or a combination thereof. However, installing a self-hosted engine using
PXE is not supported. See:
Automating Red Hat Virtualization Host Deployment for the additional requirements for
automated RHVH deployment using PXE and Kickstart.
Set the system time zone for all machines in your deployment to UTC. This ensures that data
collection and connectivity are not interrupted by variations in your local time zone, such as
daylight saving time.
Use Network Time Protocol (NTP) on all hosts and virtual machines in the environment in order
to synchronize time. Authentication and certificates are particularly sensitive to time skew.
Previously, NTP could be implemented using chrony (chronyd) or ntp (ntpd) but in Red Hat
Enterprise Linux 8, only chrony is supported.
For information about migrating from ntp to chrony, see Migrating to chrony.
For more information on chrony, see Using the Chrony Suite to configure NTP .
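A minimal setup sketch, assuming a Red Hat Enterprise Linux 8 host with access to the standard repositories (the package and service names are the usual chrony ones):

```shell
# Install and enable chronyd, then verify synchronization status
dnf install -y chrony
systemctl enable --now chronyd
chronyc tracking
```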
Document everything, so that anyone who works with the environment is aware of its current
state and required procedures.
Register all hosts and Red Hat Enterprise Linux virtual machines to either the Red Hat Content
Delivery Network or Red Hat Satellite in order to receive the latest security updates and errata.
Create individual administrator accounts, instead of allowing many people to use the default
admin account, for proper activity tracking.
Limit access to the hosts and create separate logins. Do not create a single root login for
everyone to use. For specific information about managing users, groups, and root permissions,
see Configuring Basic System Settings .
When deploying the Red Hat Enterprise Linux hosts, only install packages and services required
to satisfy virtualization, performance, security, and monitoring requirements. Production hosts
should not have additional packages such as analyzers, compilers, or other components that add
unnecessary security risk.
Although you can use both Red Hat Enterprise Linux host and Red Hat Virtualization Host in the
same cluster, this configuration should only be used when it serves a specific business or
technical requirement.
Configure fencing devices at deployment time. Fencing devices are required for high
availability.
Use separate hardware switches for fencing traffic. If monitoring and fencing go over the same
switch, that switch becomes a single point of failure for high availability.
If bonds will be shared with other network traffic, proper quality of service (QoS) is required for
storage and other network traffic.
For optimal performance and simplified troubleshooting, use VLANs to separate different
traffic types and make the best use of 10 GbE or 40 GbE networks.
If the underlying switches support jumbo frames, set the MTU to the maximum size (for
example, 9000) that the underlying switches support. This setting enables optimal throughput,
with higher bandwidth and reduced CPU usage, for most applications. The default MTU is
determined by the minimum size supported by the underlying switches. If you have LLDP
enabled, you can see the MTU supported by the peer of each host in the NIC’s tool tip in the
Setup Host Networks window.
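If you configure the host network manually before adding the host to the Manager, you can set the MTU with nmcli as in the following sketch (the connection name eth0 is a placeholder; although nmcli is used, you can use any tool):

```shell
# nmcli connection modify eth0 802-3-ethernet.mtu 9000
# nmcli connection up eth0
```

For hosts already managed by Red Hat Virtualization, set the MTU on the logical network in the Administration Portal instead, so that the Manager keeps the host configuration in sync.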
IMPORTANT
If you change the network’s MTU settings, you must propagate this change to
the running virtual machines on the network: Hot unplug and replug every virtual
machine’s vNIC that should apply the MTU setting, or restart the virtual
machines. Otherwise, these interfaces fail when the virtual machine migrates to
another host. For more information, see After network MTU change, some VMs
and bridges have the old MTU and seeing packet drops and BZ#1766414.
1 GbE networks should only be used for management traffic. Use 10 GbE or 40 GbE for virtual
machines and Ethernet-based storage.
If additional physical interfaces are added to a host for storage use, clear the VM network option
so that the VLAN is assigned directly to the physical interface.
IMPORTANT
Always use the RHV Manager to modify the network configuration of hosts in your
clusters. Otherwise, you might create an unsupported configuration. For details, see
Network Manager Stateful Configuration (nmstate).
If your network environment is complex, you may need to configure a host network manually before
adding the host to the Red Hat Virtualization Manager.
Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli.
If a network is not required for a self-hosted engine deployment or for adding a host to the
Manager, configure the network in the Administration Portal after adding the host to the
Manager. See Creating a New Logical Network in a Data Center or Cluster.
Use network bonding. Network teaming is not supported in Red Hat Virtualization and will cause
errors if the host is used to deploy a self-hosted engine or added to the Manager.
If the ovirtmgmt network is not used by virtual machines, the network may use any
supported bonding mode.
If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when
used with a bridge that virtual machine guests or containers connect to?
Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation. If
your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1)
Active-Backup. See Bonding Modes for details.
Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you
can use any tool):
# nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50
# nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254
Configure a VLAN on a bond as in the following example (although nmcli is used, you can use
any tool):
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore
# nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond
# nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond
# nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50
# nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254
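After bringing the bond up, you can confirm its mode and the state of each port from the kernel's bonding report (bond0 matches the example above):

```shell
# cat /proc/net/bonding/bond0
# nmcli connection show --active
```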
Customize the firewall rules in the Administration Portal after adding the host to the Manager.
See Configuring Host Firewall Rules.
A storage domain dedicated to the Manager virtual machine is created during self-hosted
engine deployment. Do not use this storage domain for any other virtual machines.
If you are anticipating heavy storage workloads, separate the migration, management, and
storage networks to reduce the impact on the Manager virtual machine’s health.
Although there is technically no hard limit on the number of hosts per cluster, limit self-hosted
engine nodes to 7 nodes per cluster. Distribute the servers in a way that allows better resilience
(such as in different racks).
All self-hosted engine nodes should have the same CPU family so that the Manager virtual
machine can safely migrate between them. If you intend to mix CPU families, begin the
installation with the lowest one.
If the Manager virtual machine shuts down or needs to be migrated, there must be enough
memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate
to it.
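On any self-hosted engine node, you can check where the Manager virtual machine is currently running and each node's reported status with the hosted-engine tool:

```shell
# hosted-engine --vm-status
```

The output lists, per node, the engine status and score, which helps verify that at least one other node has the capacity to host the Manager virtual machine.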
Licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. Derived from
documentation for the oVirt Project. If you distribute this document or an adaptation of it, you must
provide the URL for the original version.