Storage Best Practices For KVM On Netapp
2 Best Practices for KVM and Red Hat Enterprise Linux on NetApp Storage
1 PURPOSE OF THIS DOCUMENT
This technical report presents prescriptive best practices for setting up a virtual server environment built around the Kernel-Based Virtual Machine (KVM) hypervisor on Red Hat® Enterprise Linux® and NetApp® storage. Regardless of the application or applications to be supported, the KVM environment described in this technical report offers a solid foundation.
This technical report underscores the requirements of security, separation, and redundancy. These
requirements are emphasized in the three major layers of the environment—server, network, and storage.
1.2 TERMINOLOGY
The following terms are used in this technical report:
• Host node. The physical server or servers that host one or more virtual servers.
• Virtual server. A guest instance that resides on a host node.
• Shared storage. A common pool of disk space, file- or LUN-based, available to two or more host nodes
simultaneously.
• KVM environment. A general term that encompasses KVM, RHEL, network, and NetApp storage as
described in this technical report.
• Cluster. A group of related host nodes that support the same virtual servers.
• Virtual local area network (VLAN). A layer-2 switching construct used to segregate broadcast domains and to simplify the physical aspects of managing a network.
• Virtual interface (VIF). A means of bonding two or more physical NICs for purposes of redundancy or
aggregation.
• Channel bond. Red Hat’s naming convention for bonding two or more physical NICs for purposes of
redundancy or aggregation.
Guests under KVM operate as processes just like any other application, service, or script. This means that
KVM administrators can use the traditional top command to monitor things like utilization and process state.
Also, although KVM uses full virtualization, it includes paravirtualized drivers (virtio) for Windows® guests to boost block I/O and network performance.
Red Hat’s implementation of KVM includes the KVM kernel module and a processor-specific module for Intel® or AMD, plus QEMU, which is a CPU emulator.
3 SYSTEM REQUIREMENTS
Requirements to launch the hypervisor are conservative; however, overall system performance depends on
the nature of the workload.
3.4 KVM REQUIREMENTS
The KVM hypervisor requires a 64-bit Intel processor with the Intel VT extensions or a 64-bit AMD processor
with the AMD-V extensions. It may be necessary to first enable the hardware virtualization support from the
system BIOS.
Run the following command from within Linux to verify that the CPU virtualization extensions are available.
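The command itself does not survive in this copy of the report; a standard check is to read the CPU feature flags from /proc/cpuinfo (vmx indicates Intel VT, svm indicates AMD-V):

```shell
# Look for hardware virtualization flags in the CPU feature list.
# vmx = Intel VT; svm = AMD-V. No match means the extensions are
# either absent or disabled in the system BIOS.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "Hardware virtualization extensions available"
else
    echo "No virtualization extensions found (check the system BIOS)"
fi
```

If nothing is found on hardware known to support virtualization, enable the virtualization support in the BIOS as described above.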
3.8 NETWORK ARCHITECTURE
Although this technical report does not discuss the specifics of setting up an Ethernet or switched fabric, the
following considerations do need to be addressed.
MULTIPATHING
Regardless of how the network is laid out, the concept of multipathing should be incorporated. That is, if
each host node has a public interface and a data interface, each interface should have two or more physical
NICs managed by a virtual interface. Red Hat refers to this as a channel bond. On the NetApp side, this is
referred to as a VIF (virtual interface). To extend the redundancy, each path should go to its own switch.
When using Fibre Channel Protocol (FCP), the same best practices apply. Multiple Fibre Channel HBAs should be used on the host nodes, with each path directed to a separate Fibre Channel switch. The hosts manage the multiple paths with Device Mapper Multipath I/O (DM-MPIO), which is also supported for use with iSCSI.
Finally, if the different IP networks are to be combined on a 10Gb Ethernet network, there still need to be multiple 10GbE NICs managed by a channel bond, with each path going to its own switch. The separation is managed by VLAN segmentation, which is described in the following section.
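On the host-node side, a channel bond is defined with interface configuration files such as the following sketch; the device names, addresses, and bonding mode are illustrative (mode=1 is active-backup, a common choice for redundancy):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
IPADDR=192.168.100.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (one of the bonded NICs)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

An equivalent file is created for each additional slave NIC, and each slave is cabled to a different switch.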
SEPARATION
Keeping the different networks separated is important for both performance and security. The best way to
provide this separation is by using VLAN segmentation. Simply put, VLAN segmentation allows a single
switch to carry and separate several IP networks simultaneously.
In the case of a host node having separate channel bonds for public and data traffic, the primary paths for each could run to the same switch. In the case of a single 10GbE channel bond, VLAN segmentation is the best way to separate the traffic on the same wire.
The server shown in Figure 3 has access to the public (primary) network as well as to the private (data)
network. The NetApp FAS controllers are accessible only from the private network. In addition, there are
redundant NICs (or HBAs) and paths for each network managed by a channel bond on the servers. Channel
bonds allow multiple physical interfaces to be managed as one virtual interface for purposes of redundancy.
The private (data) network in Figure 3 could represent NFS, iSCSI, or FCP.
Note: A bonded pair of 10GbE NICs could carry all of the traffic. The separation of public and data traffic occurs by way of VLANs. This separation has the added benefits of faster throughput and fewer cables.
4 BEST PRACTICES FOR NETAPP FAS CONTROLLER CONFIGURATION
The first disk aggregate, aggr0, should be running on the default three-disk group along with the default
volume, vol0. Do not put any other user data in aggr0 or vol0. Create a separate aggregate for user data.
When creating aggregates, it is best to use the defaults for most items such as RAID groups and RAID level. The default RAID group size is 16 disks, used in a RAID-DP® configuration. RAID-DP is NetApp’s high-performance implementation of RAID 6. Also, when creating aggregates and volumes, allow Data ONTAP to automatically choose disks, and always maintain a hot spare in the storage array.
The data volumes should be named something meaningful, such as kvm_vol or vol_kvm_store. If
multiple environments are to be stored on the same NetApp FAS controller, then extend the volume name to
be even more descriptive, such as vol_hr_nfs or vol_mrktg_fcp.
In the example deployment described in this technical report, there are only two physical servers, which
service a number of virtual guests. A pair of NetApp FAS controllers backs the environment. The group of
two servers is referred to as a cluster. If there are multiple clusters, each one should have its own flexible
volume. This allows a more secure approach to each cluster’s shared storage.
NetApp Snapshot™ copies provide a point-in-time, read-only copy of a flexible volume that incurs no performance hit to the server or storage in the virtual environment. Further, a Snapshot copy takes very little space, so it has little impact on storage consumption.
impact on storage consumption. A Snapshot copy usually takes less than a second to make, and up to 255
Snapshot copies can be stored per volume. In the context of the KVM virtual environment, the copy can be
used to recover virtual servers and data affected by human error or software corruption. The use of separate
flexible volumes for each KVM environment also makes more efficient use of the Snapshot copy.
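On the storage side, the Snapshot operations are simple. The following is an illustrative sketch from the Data ONTAP 7-mode console; the volume and copy names are examples, not values from this report:

```
# Take a manual Snapshot copy of the KVM volume
snap create kvm_vol before_patching

# List the Snapshot copies on the volume
snap list kvm_vol

# Schedule automatic copies: 0 weekly, 2 nightly, 6 hourly
snap sched kvm_vol 0 2 6
```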
4.3 SIZING
Sizing depends on the number of VMs to be deployed and on projected growth. To maximize storage
efficiency, NetApp deduplication and thin provisioning should be employed on the volume and LUN,
respectively. NetApp deduplication increases storage efficiency by folding identical blocks from different
virtual machine images into a single instance on the storage controller. Thin provisioning allocates space for
a LUN without actually reserving the space all at once. When using NetApp deduplication with LUN-based storage, the thinly provisioned LUN should be twice the size of its containing volume in order to realize the full benefit of deduplication. Volumes allocated for NFS-based storage do not need the same consideration.
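A sketch of enabling both features from the Data ONTAP 7-mode console follows; the volume name, LUN path, and size are illustrative:

```
# Enable and start deduplication on the volume
sis on /vol/kvm_vol
sis start -s /vol/kvm_vol

# Create a thinly provisioned LUN (no space reservation)
lun create -s 500g -t linux -o noreserve /vol/kvm_vol/kvm_lun
```

The -s flag on sis start scans the existing data so that blocks written before deduplication was enabled are also folded.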
A single NetApp FAS controller can also be used to store multiple clusters. For instance, a single group of
host nodes might support a number of databases while another group of host nodes supports application
servers, all using the same NetApp FAS controller. By using VLAN segregation, they operate side by side
without interacting with each other.
The secure separation can be further extended with the use of NetApp MultiStore® on the NetApp FAS controller. This allows the partitioning of a single storage device into multiple logical storage devices. For more information on MultiStore, refer to http://www.netapp.com/us/products/platform-os/multistore.html.
an iSCSI HBA. For a highly available host node environment, iSCSI can also be used in conjunction with
GFS or GFS2.
For an environment that already includes Fibre Channel switches and the required cables and adapters,
FCP may be an attractive choice. But for a data center that has not yet invested in FCP, the initial cost may
outweigh the advantages. Like iSCSI, FCP can be used in conjunction with GFS or GFS2.
Unfortunately, the default values used by tools like fdisk do not align the partition offsets properly for use in a virtual environment.
To quote TR-3747, “NetApp uses 4KB blocks (4 x 1,024 = 4,096 bytes) as its basic storage building block.
Write operations can consume no less than a single 4KB block and can consume many 4KB blocks
depending on the size of the write operation. Ideally, the guest/child OS should align its file system(s) such
that writes are aligned to the storage device's logical blocks. The problem of unaligned LUN I/O occurs when
the partitioning scheme used by the host OS doesn't match the block boundaries inside the LUN.”
Without this proper alignment, significant latency occurs because the storage controller has to perform
additional reads and writes for the misaligned blocks. For example, most modern operating systems such as
RHEL and Windows 2000 and 2003 use a starting offset of sector 63. Pushing the offset to sector 64 or
sector 128 causes the blocks to align properly with the layers below.
Figure 6 shows proper alignment of the guest file system, through the host node file system, and down to
the LUN on the NetApp FAS controller.
Aligning the underlying storage is discussed in each of the storage cases later in this technical report.
Aligning the disk images is discussed in more detail in the companion deployment guide, “Deployment
Guide for KVM and Red Hat Enterprise Linux on NetApp Storage,” as well as in NetApp TR-3747. From a
high level, it involves pushing the offset of the first disk partition to a number divisible by 8 sectors, with each
subsequent partition aligning with a starting sector that is also divisible by 8.
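As a quick sanity check, a starting sector is aligned when it is divisible by 8 (8 × 512-byte sectors = one 4KB block). The helper below is a sketch for illustration, not a tool from this report:

```shell
# Report whether a partition's starting sector falls on a 4KB boundary.
# Sector 63 (the legacy fdisk default) is misaligned; 64 and 128 are aligned.
is_aligned() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: misaligned"
    fi
}
is_aligned 63
is_aligned 64
is_aligned 128
```

The starting sector of each existing partition can be listed with fdisk -lu, which reports offsets in sectors rather than cylinders.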
Most important, the NFS traffic must be on a separate network of at least 1Gb Ethernet and must employ a separate VLAN. A 10GbE network is preferable, in which case all traffic can exist on the same wire that carries the different VLANs. With the NFS traffic kept separate, the benefits of jumbo frames can be taken advantage of without affecting the other networks. A jumbo frame is an Ethernet frame that carries a payload larger than the standard 1,500-byte MTU. By comparison, a typical NFS datagram is 8,400 bytes in size.
When using the default datagram size of 1500 bytes, NFS datagrams get fragmented on the network and
additional cycles are required to piece them back together, which can lead to serious performance
degradation. By increasing the maximum transmission unit (MTU) to 9000 bytes, each NFS datagram can
be sent across the wire in one piece.
On the NetApp controller, jumbo frames are best configured on a per VLAN basis. Following the networking
best practice of setting up VIFs first, simply create a new VLAN to be associated with a VIF and adjust the
MTU to 9000. From there, each switch port on the way to the host node should also be configured for jumbo
frames in addition to the proper VLAN.
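A sketch of both ends follows; the interface names, VLAN ID, and addresses are illustrative:

```
# NetApp controller: create a VLAN interface on the VIF and raise its MTU
vlan create vif0 100
ifconfig vif0-100 192.168.100.20 netmask 255.255.255.0 mtusize 9000

# RHEL host node: set the MTU in the channel bond's configuration file
# (/etc/sysconfig/network-scripts/ifcfg-bond0)
MTU=9000
```

Every switch port along the path must also be configured for an MTU of at least 9000, or frames are dropped or fragmented.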
The naming portion of the NFS best practices is simple, and the process of creating an NFS export on a NetApp FAS controller is straightforward:
1. Create a flexible volume.
2. Create a qtree on that volume (optional).
3. Export the volume or qtree.
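The three steps above might look like the following on the Data ONTAP 7-mode console; the names, size, and host addresses are illustrative:

```
# 1. Create a flexible volume (500GB on aggregate aggr1)
vol create kvm_vol aggr1 500g

# 2. Optionally create a qtree on that volume
qtree create /vol/kvm_vol/nfs_stor

# 3. Export the qtree to the two host nodes only
exportfs -p rw=192.168.100.11:192.168.100.12,root=192.168.100.11:192.168.100.12 /vol/kvm_vol/nfs_stor
```

Note that the export list names the specific host nodes rather than a whole subnet, in keeping with the access guidance later in this section.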
When creating the volume for use as an NFS export, the space guarantee can be None, and the Snapshot reserve should be left at 20% to account for the Snapshot copies. NetApp highly recommends using Snapshot copies. If Snapshot copies are not used, then enter 0% for the reserve.
A qtree is a special subdirectory of a volume that can be exported as an NFS share. The key benefits of qtrees are that they allow UNIX®- and Linux-style quotas and that the NetApp SnapVault and SnapMirror products can easily use them. In addition, a qtree can be assigned a security style that affects only its own directories and files, not the entire volume. The use of qtrees is optional.
Give each volume or qtree a meaningful name, such as /vol/kvm_vol/nfs_stor or
/vol/vol_mktg/qt_nfs_storage. The naming convention should be descriptive and consistent across
volumes and exports. Storage and system administrators should be able to look at an export name and
immediately recognize its type and use.
Finally, when creating the export, part of the process involves specifying which hosts and/or subnets have
access to the NFS export. Be sure to allow access only to the specific hosts that need access. Don't give All
Hosts access, and don't grant access to an entire subnet (unless all hosts on a subnet are going to mount
the export).
created, it is mapped to the LUN. Name igroups in the same meaningful way that LUNs and exports are
named, using a name such as ig_iscsi_kvm.
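The igroup steps referenced above can be sketched as follows on the Data ONTAP 7-mode console; the initiator names and LUN path are illustrative:

```
# Create an iSCSI igroup for the Linux host nodes
igroup create -i -t linux ig_iscsi_kvm iqn.1994-05.com.redhat:hostnode1

# Add the second host node's initiator to the same igroup
igroup add ig_iscsi_kvm iqn.1994-05.com.redhat:hostnode2

# Map the LUN to the igroup as LUN ID 0
lun map /vol/kvm_vol/kvm_lun ig_iscsi_kvm 0
```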
6.3 PACKAGE SELECTION
The package selection should be particular and minimal for the host nodes. Do not install anything related to
a purely desktop machine; games and office productivity packages have no place on a host node. The
packages and package groups should be limited to items such as the base installation, text editors, and the
KVM-related packages. As a case in point, the servers used to test and develop this document had the
following packages listed in their kickstart files:
• @admin-tools. Group of Linux administrative tools
• @base. Group of base Linux packages
• @core. Group of packages required for the smallest installation
• @editors. Group of text editors
• @text-internet. Group of nongraphical Internet tools
• @gnome-desktop. Group of packages for GUI (to demo GUI-based tools)
• device-mapper-multipath. Package for multipathing of LUN-based storage
• Various packages associated with KVM and QEMU
• Various packages associated with Red Hat Cluster Suite and GFS2
There are many other packages (805 in total) that are installed for dependencies, but these are very specific
choices. Anything under 900 packages is considered streamlined.
The decision of whether to install the graphical packages is best left to the business needs and requirements of the group responsible for maintaining the virtual environment. From a resource and security standpoint, it is better not to install any of the graphical packages; however, the graphical Virtual Machine Manager tool may appeal to many system administrators and engineers. One compromise is to install and run the graphical packages on only one of the host nodes.
6.4 SECURITY
Properly securing the host nodes is of paramount importance. This includes proper use of iptables for
packet filtering and Security-Enhanced Linux (SELinux) for file-level security.
The firewall provided by iptables should allow only the ports needed to operate the virtual environment as
well as communicate with the NetApp FAS controller. Under no circumstances should iptables be disabled.
For a list of ports, see "Appendix A: Ports to Allow in IPtables Firewall."
SELinux was developed largely by the NSA (and later incorporated into the 2.6 Linux kernel in 2003) to comply with U.S. government computer security policy enforcement. SELinux is built into the kernel and provides a mandatory access control (MAC) mechanism, which allows the administrator to define how all processes interact with items such as files, devices, and other processes.
For example, the default directory for disk images in a KVM environment is /var/lib/libvirt/images.
SELinux has a default rule that gives that directory a security context consistent with virtualization. If
someone or something creates a disk image in /etc/root/kit, for example, SELinux does not allow it to
run without the proper security context. This provides a very granular level of security. For more information
on SELinux, see the "Red Hat Enterprise Linux 5 Deployment Guide," referenced in Appendix C.
Unless Red Hat Cluster Suite is used, SELinux should remain in its default state of “enabled” and “targeted.”
As of this writing, SELinux and Red Hat Cluster Suite are not supported by Red Hat when used together.
Separately, however, they are fully supported by Red Hat.
The primary means of connecting to a virtual server and the virsh (virtual shell) console is by way of SSH
and an SSH tunnel, respectively. It is also possible to configure communication to the virsh console with the
use of TLS, but that is outside the scope of this technical report. For more information, see the "Red Hat
Enterprise Linux 5 Virtualization Guide," referenced in Appendix C.
The default means of communicating with virsh is by way of an SSH tunnel. Essentially, a URI is called (qemu+ssh://<host_node>/system) and the tunnel is opened. Although it is possible to enter a
password for each connection, the best practice is to use SSH keys. A key pair is created on the remote
host, and the public key is distributed to each of the host nodes. This enables encrypted communication to
the host nodes without the use of passwords. This is very important when considering the automation of live
migrating virtual servers.
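The key setup described above can be sketched as follows. The host node name is hypothetical; only the key generation runs locally, so the distribution and virsh steps are shown as comments:

```shell
# Generate a passphrase-less key pair for automated connections.
key="$(mktemp -u)"
ssh-keygen -q -t rsa -N '' -f "$key"

# Distribute the public key to each host node (hypothetical host):
#   ssh-copy-id -i "$key.pub" root@hostnode1
# Connect to the virsh console through the SSH tunnel, no password:
#   virsh -c qemu+ssh://root@hostnode1/system list --all
```

In practice the key pair lives under the administrative user's ~/.ssh directory rather than a temporary path; the temporary path here simply keeps the sketch self-contained.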
In the storage pool method, libvirtd handles the mounting and unmounting of the storage. Storage pools
can be created and managed from the Virtual Machine Manager graphical tool as well as from the virsh
command line tool. In both cases, an XML file is created and must be copied to each of the host nodes. For
more information on storage pools, refer to the "Red Hat Enterprise Linux 5 Virtualization Guide," referenced
in Appendix C.
Figure 7 illustrates the virtual guests accessing the network by way of a public (virtual) bridge. Because the virtual guests have their own two-way access to the network, the only access that the host node needs is for management purposes.
8 HOST NODE CONFIGURATION OF NFS SHARED STORAGE
The configuration of NFS-based shared storage typically involves the network configuration, defining specific ports for NFS, and the mount options. There are also some SELinux considerations.
8.4 SELINUX CONSIDERATIONS FOR NFS-BASED SHARED STORAGE
The default location of the disk images is in /var/lib/libvirt/images. It is assumed that subdirectories
will be created under the images directory to keep things organized. Common subdirectories include one
for each operating system, as well as a place for storing golden images and a place for storing ISO images.
Some virtualization administrators prefer to move the images directory directly under the root directory (/).
Many decisions depend on the business needs of the virtual environment as well as the preferences of the
administrators and engineers responsible for the environment. Regardless of whether the images directory
is moved or subdirectories are created, SELinux must be configured to allow access for KVM activity.
It is a minor matter to update the security context of the KVM image subdirectories so that any new files
created under them also inherit the proper context.
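A sketch of the context update using the RHEL 5 SELinux tools follows; the directory layout is illustrative, and the virt_image_t type is the context RHEL associates with virtualization images:

```
# Record the virtualization context for the image tree, including new files
semanage fcontext -a -t virt_image_t "/var/lib/libvirt/images(/.*)?"

# Apply the context recursively to what already exists
restorecon -R -v /var/lib/libvirt/images
```

Because the rule is recorded with semanage rather than applied with chcon alone, it survives a full file system relabel.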
Note: When using Red Hat Cluster Suite, SELinux must be disabled.
To configure the Red Hat Cluster Suite with GFS2, follow these steps.
1. Install the host nodes as described in section 7, with the addition of the GFS2-specific packages.
2. Create the LUN to be used for the shared storage (iSCSI or FCP).
3. Configure the multipathing.
4. Install the cluster management piece on the remote administration host (described in section 12).
5. Configure the cluster.
6. Using clustered LVM (part of the GFS2-related packages), create a volume using the entire device
(/dev/sdb, not /dev/sdb1) for proper disk alignment.
7. Create the GFS2 file system with the noatime option.
8. Configure the mount in fstab or storage pool.
Note: Although the use of LVM is usually optional for the host nodes, it is required for GFS and GFS2.
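Steps 6 and 7 can be sketched as follows; the multipathed device name, cluster name, and journal count are illustrative (GFS2 needs one journal per host node that mounts the file system):

```
# Clustered LVM: use the whole multipathed device for proper alignment
pvcreate /dev/mapper/mpath0
vgcreate vg_kvm /dev/mapper/mpath0
lvcreate -l 100%FREE -n lv_gfs2 vg_kvm

# GFS2 file system: DLM locking, cluster "kvmcluster", 2 journals
mkfs.gfs2 -p lock_dlm -t kvmcluster:gfs2store -j 2 /dev/vg_kvm/lv_gfs2

# Mount with noatime (or add the equivalent line to /etc/fstab)
mount -o noatime /dev/vg_kvm/lv_gfs2 /var/lib/libvirt/images
```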
corruption. The Red Hat-recommended fencing device is a networked power switch that, when triggered,
forces the host node to reboot. A complete list of supported fencing devices is available on the Red Hat site.
Because only changed blocks are replicated and bandwidth impact is limited to the rate of data change on
the primary storage controller, both SnapMirror and SnapVault are excellent choices to replicate data over
generally slow WANs to increase data protection options. Each replication option is highly configurable to
meet business requirements.
SnapMirror can be configured to replicate data in asynchronous mode, semisynchronous mode, and full
synchronous mode. SnapVault replicates NetApp Snapshot copies, and the frequency of the Snapshot
replication process can be configured during initial SnapVault configuration or changed as needed
afterward.
As a critical piece of disaster recovery planning and implementation, the best practice is to choose one of
the products for data replication. In addition, replicating the data to a second tier of storage allows faster
backup and recovery in comparison to traditional tape backups. Although having a local second tier of
storage is good, NetApp highly recommends having a second tier of storage at a remote site.
After a product is chosen and implemented, it is important to stagger the transfers and schedule them for non-peak-load times. For SnapMirror, NetApp also recommends throttling the bandwidth of transfers.
the LUN needs to expand. Snapshot Auto Delete deletes the oldest Snapshot copies when the volume reaches a soft limit and is nearing capacity. The recommended soft limit is 5% remaining space. Finally, Fractional Reserve is a policy that defines any additional space to reserve for LUN writes if the volume becomes full. When Auto Size and Auto Delete are in use, Fractional Reserve should be set to 0%.
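A sketch of the three settings on the Data ONTAP 7-mode console follows; the volume name and sizes are illustrative:

```
# Let the volume grow automatically, up to 600GB in 20GB increments
vol autosize kvm_vol -m 600g -i 20g on

# Delete the oldest Snapshot copies as the volume nears capacity
snap autodelete kvm_vol on

# With Auto Size and Auto Delete in use, no fractional reserve is needed
vol options kvm_vol fractional_reserve 0
```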
When using deduplication with NFS, the storage efficiency is seen immediately. Beyond the initial enabling
of deduplication, there are no other considerations and no other configurations to be made.
More information is available in the "NetApp Deduplication for FAS and V-Series Deployment and
Implementation Guide" referenced in Appendix C.
15 CONCLUSION
Although KVM does not have the robust management tools included in RHEV, it offers a highly configurable
and high-performance virtual environment that is easy to learn. This makes it a primary candidate for IT
infrastructures that already have their own tools, a foundation of Linux or Linux skills, and the need for a
solid virtualization platform that plugs in to an existing environment.
In a matter of minutes, a simple KVM environment can be set up and tested. A more complex production KVM infrastructure can be planned and deployed in a few short weeks. The graphical tools enable newcomers to quickly grasp the concepts, and the command line tools are easily integrated into automation, management, and monitoring applications and tools.
From a storage and data efficiency standpoint, NetApp FAS controllers offer a unified, flexible approach to
storage. The ability to deliver NFS, iSCSI, and FCP to multiple KVM environments simultaneously means
that the storage scales nondisruptively with the KVM environment. Multiple KVM environments with different
storage needs can be supported from the same NetApp FAS controller.
Additional NetApp products like Snapshot, SnapMirror, and deduplication offer the protection and storage
efficiency required in any infrastructure.
Using the best practices in this guide provides a robust virtual infrastructure based on KVM and NetApp that serves as a solid foundation for many applications.
16 APPENDIXES
Port  Protocol  Service
22    TCP       SSH
Cluster-Related Ports
Table 3) Remote host ports.
Port  Protocol  Service
22    TCP       SSH
APPENDIX C: REFERENCES
Home page for KVM
http://www.linux-kvm.org
Red Hat and Microsoft Virtualization Interoperability
http://www.redhat.com/promo/svvp/
KVM – Kernel-Based Virtual Machine
http://www.redhat.com/f/pdf/rhev/DOC-KVM.pdf
Red Hat Enterprise Linux 5 Virtualization Guide
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/index.html
Red Hat Enterprise Linux 5 Deployment Guide
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Deployment_Guide/index.html
Red Hat Enterprise Linux 5 Installation Guide
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Installation_Guide/index.html
Red Hat Enterprise Linux 5.5 Online Storage Guide
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/html/Online_Storage_Reconfiguration_Guide/index.html
Best Practices for File System Alignment in Virtual Environments
http://www.netapp.com/us/library/technical-reports/tr-3747.html
Using the Linux NFS Client with NetApp Storage
http://www.netapp.com/us/library/technical-reports/tr-3183.html
Storage Best Practices and Resiliency Guide
http://media.netapp.com/documents/tr-3437.pdf
KVM Known Issues
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Technical_Notes/Known_Issues-kvm.html
NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide
http://www.netapp.com/us/library/technical-reports/tr-3505.html
SnapMirror Async Overview and Best Practices Guide
http://www.netapp.com/us/library/technical-reports/tr-3446.html
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be
obtained by the use of the information or observance of any recommendations provided herein. The
information in this document is distributed AS IS, and the use of this information or the implementation of
any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s
ability to evaluate and integrate them into the customer’s operational environment. This document and
the information contained herein may be used solely in connection with the NetApp products discussed
in this document.
© 2010 NetApp. All rights reserved. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster,
Data ONTAP, FlexClone, MultiStore, NOW, RAID-DP, Snapshot, SnapMirror, and SnapVault are trademarks or registered
trademarks of NetApp, Inc. in the United States and/or other countries. Intel is a registered trademark of Intel Corporation. Java is a
trademark of Oracle Corporation. Linux is a registered trademark of Linus Torvalds. Windows and Windows Server are registered
trademarks of Microsoft Corporation. UNIX is a registered trademark of the Open Group. All other brands or products are trademarks
or registered trademarks of their respective holders and should be treated as such. TR-3858