Dell EMC Ready Architecture for Red Hat HCI Architecture Guide
Contents

List of Figures
List of Tables
Trademarks
Notes, cautions, and warnings
Chapter 1: Overview
    Executive summary
    Key benefits
    Key differentiators
Chapter 5: Deployment
    Before you begin
        Tested BIOS and firmware versions
        Disk layout
Appendix C: References
    To learn more
List of Figures
Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform key differentiators
List of Tables
Table 1: RHOSP deployment elements
Table 2: Solution Admin Host hardware configuration – Dell EMC PowerEdge R640
Trademarks
Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved.
The information in this publication is provided “as is.” Dell EMC makes no representations or warranties of any kind
with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Red Hat®, Red Hat Enterprise Linux®, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE
are trademarks or registered trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the
registered trademark of Linus Torvalds in the U.S. and other countries. Oracle® and Java® are registered trademarks of
Oracle Corporation and/or its affiliates.
Intel® and Xeon® are registered trademarks of Intel Corporation.
Dell EMC believes the information in this document is accurate as of its publication date. The information is subject
to change without notice.
Spirent Temeva®, Cloudstress®, MethodologyCenter® and TrafficCenter® are registered trademarks of Spirent
Communication Inc. All rights reserved. Specifications subject to change without notice.
DISCLAIMER: The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used
with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack
Foundation or the OpenStack community.
Notes, cautions, and warnings
CAUTION: A Caution indicates potential damage to hardware or loss of data if instructions are not
followed.
Warning: A Warning indicates a potential for property damage, personal injury, or death.
This document is for informational purposes only and may contain typographical errors and technical inaccuracies.
The content is provided as is, without express or implied warranties of any kind.
Chapter 1: Overview
Topics:
• Executive summary
• Key benefits
• Key differentiators

Dell EMC and Red Hat have worked closely together to build an enterprise-scale hyper-converged infrastructure architecture guide ideally suited for customers who are looking for performance and ease of management.
This architecture guide provides prescriptive guidance and recommendations, including complete configuration, sizing, bill-of-materials, and deployment details.
Executive summary
This architecture guide describes a Hyper-Converged Infrastructure approach built on Red Hat OpenStack Platform 13 and Red Hat Ceph Storage, in which each node runs both OpenStack Nova Compute and Ceph storage services on the same hardware.
Communication Service Providers inherently have distributed operating environments, whether multiple large-scale core datacenters, hundreds or thousands of central offices and Edge locations, or even customer premises equipment running the same infrastructure services in remote and branch offices as in the core datacenter. However, remote and branch offices can have unique challenges, such as limited space, power, and cooling, and few (or no) on-site technical staff.
Organizations in this situation require powerful integrated services in a single, easily scaled environment.
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed to
address these challenges by integrating compute and storage together on a single aggregated cluster, making it a
well-suited solution for low-footprint remote or central office installations and Edge computing. Dell EMC Hyper-
Converged Infrastructure for Red Hat OpenStack Platform is designed to enable organizations to deploy and manage
distributed infrastructure centrally, enabling remote locations to benefit from high-performing systems without
requiring extensive or highly specialized on-site technical staff.
This architecture guide defines hardware and software building block details including but not limited to Red Hat
OpenStack Platform configuration, network switch configuration, and all software and hardware components.
This all-NVMe configuration is optimized for block storage performance.
Key benefits
The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform offers several benefits that help Service Providers reduce CAPEX and OPEX (capital and operating expenditures) and simplify planning and procurement:
• Infrastructure consolidation. A smaller hardware footprint eases power, cooling, and deployment requirements, reducing CAPEX.
• Operational efficiency. A single supported rack is easier to train personnel to manage and configure, resulting in lower OPEX overhead.
• Fully engineered, validated, tested, and documented by Dell EMC.
• Based on Dell EMC PowerEdge R-Series servers, specifically the Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd recommended for this architecture guide, equipped with Intel Xeon processors, Intel NVMe disks, and Intel 25GbE network interface cards.
Key differentiators
Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack
Platform key differentiators
The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform includes some major enhancements over the standard Dell EMC Ready Architecture Guide.
• Implementation model. Compute and storage are deployed in a Hyper-Converged Infrastructure approach. As a result, compute and storage, along with their associated OpenStack services, are deployed and managed as a single entity.
• Server model. Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd servers are used in this architecture guide. They represent the most cutting-edge range of the Dell EMC PowerEdge R-Series, with technology optimized for all kinds of workloads and sophisticated built-in protection at every step.
• Hardware resources. Optimized for Hyper-Converged Infrastructure, this Ready Architecture Guide combines
scalability, robustness, and efficiency by leveraging the following Intel components:
• Intel Xeon Platinum 8160 Skylake processors. Used for compute and storage needs. This 64-bit, 24-core x86 multi-socket high-performance server microprocessor provides 48 cores per node, which maximizes the concurrent execution of multi-threaded applications.
• Intel 25GbE adapters. Used for all network communications. The flexible and scalable Intel XXV710 network adapter offers broad interoperability, critical performance optimizations, and increased agility. Two ports have also been reserved for future use with NFV-oriented optimizations such as SR-IOV or OVS-DPDK.
• Intel P4600 NVMe drives. Serve as the foundation of the Red Hat Ceph Storage backend. This NAND SSD drive is optimized for the data caching needs of cloud storage and, more particularly, software-defined solutions. It helps modernize the data center by combining performance, capacity, manageability, and scalability.
• RAM optimized. Memory is a key concern when it comes to virtualization, and even more so with a Hyper-Converged Infrastructure. Each compute/storage server is configured with 384GB of RAM, delivering optimal performance and available resources for both compute and storage services.
The following sections cover these key differentiators in detail.
Chapter 2: Architecture overview
Topics:
• Overview
• Software
• Hardware
• Network layout
• Physical network

Undercloud and Overcloud deployment elements are part of Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform. This Ready Architecture Guide uses Red Hat OpenStack Platform (RHOSP) 13. The Red Hat OpenStack Platform implementation of Hyper-Converged Infrastructure (HCI) uses Red Hat Ceph Storage version 3.2 as the storage provider.
This overview of the deployment process for Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform on Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd server hardware and network outlines the following:
• Software requirements
• Hardware requirements
• Dell EMC networking switch requirements
Overview
This chapter describes the complete architecture for Red Hat OpenStack Platform version 13 over Hyper-Converged Infrastructure. Figure 2: Architecture for RHOSP version 13 over HCI on page 15 illustrates the Undercloud and Overcloud components in detail, along with the SAH node and its logical networks. This section also describes the hardware and software components.
Undercloud. The Undercloud is the OSP director (TripleO) node. It is a single-VM OpenStack installation that
includes components for provisioning and managing the Overcloud.
HCI Overcloud. The Overcloud is the end-user RHOSP environment created using the Undercloud. The HCI Overcloud has only two types of roles:
• Controller. A node that provides administration, networking, and high availability for the OpenStack Overcloud environments.
• ComputeHCI. A truly hyper-converged role designed to run compute and storage services, such as OpenStack Nova Compute and Ceph storage, in tandem. This role has a direct application in Edge computing for Telcos. We refer to this role as converged throughout this architecture guide.
Table 1: RHOSP deployment elements on page 16 describes the basic RHOSP 13 deployment sequence.
Glance (Undercloud, Overcloud)
• In the Undercloud, Glance stores the images used by Overcloud bare-metal machines during introspection and Overcloud deployment.
• In the Overcloud, Glance stores VM images that other OpenStack services use as templates to deploy VM instances.
Neutron (Undercloud, Overcloud)
• In the Undercloud, Neutron controls networking for managing bare-metal machines.
• In the Overcloud, Neutron offers networking capabilities in a complex cloud environment. It also helps ensure that the components of an OpenStack environment can communicate with each other quickly and efficiently.
Ansible (Undercloud)
• Ansible is used by the OSP director to install and configure the Undercloud. When deploying the Overcloud, it is also used by ceph-ansible to deploy and configure the Ceph cluster.
Software
Red Hat Ceph Storage is an essential component of Hyper-Converged Infrastructure (HCI). Refer to Red Hat Ceph Storage for HCI on page 24 for detailed information.
Hardware
Network layout
Figure 4: HCI network layout on page 21 illustrates the network layout for Hyper-Converged Infrastructure.
The Bond0 interface is associated with the p1p1 and p2p1 physical interfaces and is paired with the br-tenant and br-int virtual bridges. Bond0 is used to communicate with the Internal API, Tenant, and Storage networks.
The Bond1 interface is associated with the p1p2 and p2p2 physical interfaces and with the br-ex bridge, and is paired with the br-int virtual bridge, which communicates with instances. The Bond1 interface is used for the external network and the storage cluster network. Open vSwitch bridges are mapped between physical and virtual interfaces.
The Bond2 interface is reserved for NFV workloads, with dedicated NICs for either SR-IOV or OVS-DPDK. This interface handles NFV workloads on hyper-converged nodes.
Table 5: Logical Networks on page 21 describes the network layout:
Note: All the VLANs described here are used exclusively for the Overcloud network and may differ according to end-user configuration. The environment file network-environment.yaml, which is passed to the Undercloud to deploy and manage the Overcloud, configures these VLANs (an example excerpt follows the table below).
iDRAC network (VLAN 110)
• Used to manage bare-metal nodes remotely using dracclient.
Storage Clustering network (VLAN 180)
• The backend storage network to which Ceph routes its heartbeat, object replication, and recovery traffic. The Ceph OSDs use this network to balance data according to the replication policy. This private network only needs to be accessed by the OSDs.
Storage network (VLAN 170)
• The frontend storage network where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph monitors operate on this network.
Tenant network (VLAN 130)
• The network for allocating IP addresses to tenant instances. OpenStack tenants create private networks provided by VLANs configured on the underlying physical switch. This network facilitates communication across tenant instances.
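As an illustration, the VLAN IDs above map to the standard tripleo-heat-templates network parameters in network-environment.yaml roughly as shown below. This is a sketch only; the Internal API VLAN ID (140) is inferred from the physical wiring description in the next section, and all values must match the end-user network design.
  parameter_defaults:
    TenantNetworkVlanID: 130        # Tenant network, VLAN 130
    StorageNetworkVlanID: 170       # Storage (frontend) network, VLAN 170
    StorageMgmtNetworkVlanID: 180   # Storage Clustering (backend) network, VLAN 180
    InternalApiNetworkVlanID: 140   # assumed Internal API VLAN; see Physical network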
Physical network
Figure 5: Physical network on page 23 illustrates the physical network wiring for an RHOSP deployment on a converged infrastructure.
A stack of S5248-ON switches uplinks to an external network, alongside an S3048-ON management switch. The deployment also consists of a SAH node, a cluster of three Controller nodes, and three Converged nodes. For seamless communication, the interfaces are wired as follows:
1. iDRAC interfaces connect to the S3048-ON switch for all nodes. This connection is used to access the iDRAC session for each node.
2. Interface em3 to S3048-ON switch via VLAN 120 for all nodes to provision bare-metal servers.
3. A bridge Bond0 is set up between the first port of the two 25G NICs for all nodes. VLAN 130, 140 and 170 also
use this interface. This bridge is connected as a Link Aggregation Control Protocol (LACP) connection.
4. A bridge Bond1 is set up between the second port of the 25G NICs for all nodes. VLAN 180 uses this interface
for Converged Nodes whereas Controllers access Public Network through this bridge. This bridge is connected as
a Link Aggregation Control Protocol (LACP) connection.
5. A bridge Bond2 is set up between two of the four remaining ports (two interfaces of two ports each) of the 25G NICs on Converged nodes. It remains available for future NFV operations such as SR-IOV or OVS-DPDK but is not currently used, and the remaining two ports stay free (a configuration sketch follows this list).
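For reference, an LACP bond of the kind described above is typically expressed in the TripleO nic-config templates roughly as follows. This is only a sketch; the bridge name, interface names, and bonding options are assumptions and must match the actual JetPack templates in use.
  - type: ovs_bridge
    name: br-tenant
    members:
      - type: ovs_bond
        name: bond0
        # LACP, as described in the wiring steps above
        ovs_options: "bond_mode=balance-tcp lacp=active"
        members:
          - type: interface
            name: p1p1
            primary: true
          - type: interface
            name: p2p1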
Chapter 3: Red Hat Ceph Storage for HCI
Topics:
• Introduction to Red Hat Ceph Storage
• Hyper-Converged Infrastructure (HCI) Ceph storage

Ceph is a widely used open-source storage platform. It provides high performance, reliability, and scalability. The Ceph distributed storage system provides an interface for object, block, and file-level storage.
This chapter describes Red Hat Ceph Storage and its integration with the Controller and Converged nodes in a Hyper-Converged Infrastructure.
RADOS
The RADOS system stores data as objects in logical storage pools and utilizes the Controlled Replication Under Scalable Hashing (CRUSH) data placement algorithm to automatically determine where each object should be stored.
RADOS, the Ceph storage backend, is based on the following daemons, which can be easily scaled to meet the requirements of any deployed architecture:
• Monitors (MONs). Daemons responsible for maintaining a master copy of the cluster map, which contains information about the state of the Ceph cluster and its configuration. When the number of active monitors falls below the threshold, the entire cluster becomes inaccessible for any client operation that requires data integrity.
• Object Storage Devices (OSDs). The building blocks of a Ceph storage cluster. They connect a storage device to the Ceph storage cluster. An individual storage server may run multiple OSD daemons and can provide multiple OSDs to the cluster. Each OSD daemon provides a storage device, which is normally formatted with an Extents File System (XFS). A newer feature called BlueStore, introduced in Red Hat Ceph Storage, permits raw access to local storage devices. The replication of objects to multiple OSDs is handled automatically. One OSD is called the primary OSD, and a Ceph client reads or writes data from the primary OSD. Secondary OSDs play an important role in ensuring the resilience of data in the event of a failure in the cluster. Primary OSD functions are:
• Serves I/O requests.
• Replicates and protects the data.
• Rebalances the data to ensure performance.
• Recovers the data in case of a failure.
Secondary OSDs always operate under the control of a primary OSD, and each is capable of becoming the primary OSD.
• Ceph Managers (MGRs). Gather a collection of statistics about the Ceph storage cluster. There is no impact on client I/O operations if the Ceph Manager daemon fails. However, to avoid this scenario, a minimum of two Ceph Managers is recommended.
Pools
Ceph pools are logical partitions of the Ceph storage cluster, and are used to store objects under a common name tag.
Each pool is assigned a specific number of hash buckets used to group objects together for storage. These hash buckets are called Placement Groups (PGs).
The number of placement groups assigned to each pool can be configured independently to fit any type of data. This number is configured at the time of pool creation; it can be increased dynamically but can never be decreased.
The CRUSH algorithm is used to select the OSDs that will serve the data for a pool. Permissions such as read, write
or execute can be set at the pool level.
When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. A pool provides
you with:
• Resilience
• Placement Groups
• CRUSH rules
• Snapshots
• Set ownership
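For illustration only (pool creation in this architecture is typically handled automatically during deployment), a replicated pool can be created and sized with the standard Ceph CLI. The pool name and PG count below are arbitrary examples:
$ ceph osd pool create vms 512 512 replicated
$ ceph osd pool set vms size 2    # replica count; two is the value recommended for NVMe disks in this guide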
Placement Groups
A Placement Group (PG) aggregates a series of objects into a hash bucket, or group, and is mapped to a set of OSDs.
An object belongs to only one PG, and all objects belonging to the same PG return the same hash result.
The placement strategy is known as the CRUSH placement rule. When a client writes an object to a pool, it uses the pool's CRUSH placement rule to determine the object's placement group, and uses the cluster map to calculate which OSD(s) the object is written to.
When OSDs are added or removed from a cluster, placement groups are automatically rebalanced between
operational OSDs.
You can set the number of placement groups for the pool. The number of placement groups per OSD in a Hyper-Converged Infrastructure (HCI) environment is set to 200 for optimal usage of the OSDs/NVMe SSDs.
CRUSH
When you store data in a pool, a CRUSH rule set mapped to the pool enables CRUSH to identify a rule for the
placement of the object and its replicas (or chunks for erasure coded pools) in your cluster. CRUSH rules can be
customized.
Objectstore
Ceph is an ecosystem of technologies offering three different storage models - object storage, block storage and
filesystem storage. Ceph’s approach is to treat object storage as its foundation, and provide block and filesystem
capabilities as layers built upon that foundation. Objectstores store data in a flat non-hierarchical namespace where
each piece of data is identified by an arbitrary unique identifier. Any other details about the piece of data are stored
along with the data itself, as metadata.
Objectstore is an abstract interface for storing data and has two implementations: FileStore (legacy) and BlueStore. BlueStore stores objects directly on block devices without any file system interface, which improves the performance of the cluster; FileStore is the legacy approach to storing objects in Ceph.
Red Hat Ceph Storage provides a dashboard to perform management and monitoring of Ceph storage through a web-based application. Deployment and configuration of the dashboard use JetPack scripts.
Note: Refer to https://github.com/dsp-jetpack/JetPack for more information on JetPack 13.1. Refer to Red Hat Ceph Storage Dashboard deployment and configuration (optional) on page 58 for deployment and configuration of the Red Hat Ceph Storage Dashboard.
The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform uses Red Hat
Ceph Storage as the storage provider. The architecture includes co-located compute and Ceph storage services.
Ceph cluster configuration features:
• The Ceph monitor daemon running on Controller nodes maintains a master copy of the cluster map.
• The OSD daemon running on Converged nodes stores objects in Ceph; these nodes also run the KVM module required for instance spawning.
HCI uses Red Hat Ceph Storage features to deliver a highly reliable, scalable, easily manageable, and performance-optimized Ready Architecture for HCI. Features include:
• Usage of NVMe SSDs reduces latency, delivers higher IOPS, and potentially lowers power consumption.
• An optimal count of two Ceph OSDs per NVMe SSD, based on performance statistics.
• BlueStore. A new and high performance backend Objectstore for OSD.
Chapter 4: Deployment considerations
Topics:
• Converged Nodes with integrated Ceph storage
• Resource isolation
• Performance tuning

This section highlights the key elements that were covered during the design phase, as well as the reasoning behind those choices.
Resource isolation
Resource isolation for Ceph OSDs
Limiting the amount of CPU and memory for each Ceph OSD is important, so resources are free for the OpenStack
Nova Compute process. When reserving memory and CPU Resources for Ceph on hyper-converged nodes, each
containerized OSD should be limited in GB of RAM and vCPUs.
ceph_osd_docker_memory_limit constrains the memory available to a container. If the host supports swap memory, the limit can be larger than physical RAM. If a limit of 0 is specified, the container's memory is not limited. To allow maximum performance while preserving memory for other usage, a maximum of 10GB can be allocated for each OSD container.
ceph_osd_docker_cpu_limit limits the CPU usage of a container. By default, containers run with full CPU resources. This flag tells the kernel to restrict the container's CPU usage to the quota you specify. A maximum of four vCPUs per OSD can be allocated (eight vCPUs per NVMe physical disk).
This architecture guide uses the following parameters to optimize Ceph OSDs containers:
CephAnsibleExtraConfig:
ceph_osd_docker_memory_limit: 10g
ceph_osd_docker_cpu_limit: 4
The deployment applies these parameters by modifying the two values above in the dell-environment.yaml heat template.
Performance tuning
Nova reserved memory
Nova reserved memory is the amount of memory reserved for the host to perform its own operations. This memory should normally be tuned to maximize the number of guests while protecting the host. For a Hyper-Converged Infrastructure, it should maximize guests while protecting both the host and Ceph.
The reserved memory can be derived from the number of OSDs, the memory consumed per OSD, the average guest memory size, and the per-guest hypervisor overhead, following the approach recommended by Red Hat (a sketch of the calculation follows this section).
This architecture guide follows this approach with values and calculation as described below:
Given our nodes with 384GB of RAM and 16 OSDs per node, and assuming that each OSD consumes 3GB of RAM, 48GB of RAM is used for Ceph, leaving 336GB of RAM for Nova Compute.
If each guest uses an average of 2GB of RAM, the system could in principle host 168 guest machines. However, there is additional overhead for each guest machine running on the hypervisor. Assuming this overhead is 500MB, the maximum number of 2GB guest machines that can be run is 134.
Thus, reserved_host_memory_mb would equal 115000. The parameter value must be in megabytes (MB).
This value is defined in the dell-environment.yaml file as described below:
nova::compute::reserved_host_memory: 115000
Given our nodes with 48 cores and 16 OSDs per node, and reserving one core per OSD (with Hyper-Threading enabled), 32 cores are left for Nova:
32 = 48 - (1 * 16)
Assuming that each guest machine utilizes 10% of one core, we end up with 320 available vCPUs:
320 = 32 / 0.1
Dividing the available vCPUs by the total number of physical cores gives the CPU allocation ratio:
6.667 = 320 / 48
nova::cpu_allocation_ratio: 6.7
Note: Red Hat provides a script, nova_mem_cpu_calc.py, that performs all of these calculations.
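A minimal sketch of the calculation in Python, using the values from this guide (384GB of RAM, 48 cores, 16 OSDs, 3GB per OSD, 2GB average guest, 0.5GB per-guest overhead, 10% average guest CPU utilization). It mirrors the approach of nova_mem_cpu_calc.py but is not the Red Hat script itself:
    total_mem_gb = 384          # RAM per converged node
    total_cores = 48            # physical cores per node
    osds = 16                   # OSD containers per node
    gb_per_osd = 3              # RAM consumed by each OSD
    avg_guest_gb = 2            # average guest memory
    guest_overhead_gb = 0.5     # hypervisor overhead per guest
    avg_guest_cpu_util = 0.1    # average guest CPU utilization

    # Memory: guests that fit, and memory reserved for the host and Ceph
    ceph_mem_gb = osds * gb_per_osd                                     # 48
    nova_mem_gb = total_mem_gb - ceph_mem_gb                            # 336
    num_guests = int(nova_mem_gb / (avg_guest_gb + guest_overhead_gb))  # 134
    reserved_host_memory_mb = 1000 * (ceph_mem_gb + int(num_guests * guest_overhead_gb))  # 115000

    # CPU: cores left for Nova and the resulting allocation ratio
    nova_cores = total_cores - 1 * osds                 # 32
    guest_vcpus = nova_cores / avg_guest_cpu_util       # 320
    cpu_allocation_ratio = guest_vcpus / total_cores    # ~6.7

    print(reserved_host_memory_mb, round(cpu_allocation_ratio, 1))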
To size each placement group, we can use the Ceph PG calculator (https://ceph.com/pgcalc/) to identify the optimal value. We use this tool to calculate the number of PGs per pool.
Where:
• Pool name: Name of the pool.
• Replication factor: Number of replicas the pool will have. Two is the recommended value when using NVMe disks.
• OSD #: Number of OSDs on which this pool will have PGs. 48 is the entire cluster OSD count.
• %Data: This value represents the approximate percentage of data that will be contained in this pool for that specific OSD set.
• Target PGs/OSD: The target number of PGs per OSD, chosen based on the expected future growth of the cluster OSD count.
• PG #: Number of PGs to create.
Note: Keep in mind that the PG count can be increased, but NEVER decreased without destroying and
recreating the pool.
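A sketch of the calculation the PG calculator performs, that is, PGs per pool = (Target PGs/OSD x OSD # x %Data) / replication factor, rounded up to a power of two. The pool and the data percentage below are illustrative only:
    def pg_count(target_pgs_per_osd, osd_count, pct_data, replicas):
        # Raw pgcalc value, then round up to the next power of two
        raw = target_pgs_per_osd * osd_count * (pct_data / 100.0) / replicas
        power = 1
        while power < raw:
            power *= 2
        return power

    # Hypothetical pool holding ~40% of the data on the 48-OSD cluster,
    # replication factor 2, targeting 200 PGs per OSD as used in this guide
    print(pg_count(200, 48, 40, 2))   # 2048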
The values calculated above are defined in the dell-environment.yaml file as described below.
BlueStore
BlueStore is a new storage backend for Ceph. It gives better performance (roughly 2x for writes), full data
checksumming, and built-in compression. BlueStore stores objects directly on the block devices without any file
system interface, which improves the performance of the cluster. It provides features like efficient block device usage,
direct management of storage devices, metadata management with RocksDB, multi device support, no large double
writes, efficient copy-on-write and inline compression.
The BlueStore backend supports three storage devices: the primary storage device, the write-ahead log (WAL) device, and the database device. It can manage one, two, or all three storage devices.
Modify the dell-environment.yaml with the following parameters to enable BlueStore as the Ceph backend.
CephAnsibleDisksConfig:
osd_scenario: lvm
devices:
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
- /dev/nvme3n1
- /dev/nvme4n1
- /dev/nvme5n1
- /dev/nvme6n1
- /dev/nvme7n1
CephAnsibleExtraConfig:
osd_objectstore: bluestore
osds_per_device: 2
Chapter 5: Deployment
Topics:
• Before you begin
• Deployment workflow
• Solution Admin Host
• Red Hat OpenStack Platform Director
• Undercloud Deployment
• Configure and deploy the Overcloud
• Red Hat Ceph Storage Dashboard deployment and configuration (optional)

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform utilizes Dell EMC PowerEdge R-Series servers for the deployment of RHOSP version 13. Key features of Dell EMC PowerEdge R-Series rack servers:
• Automate productivity
• Comprehensive security
This chapter also describes BIOS and network configuration and installation prerequisites, with the proper configuration for Controller and Converged nodes. Additionally, the entire deployment process, including manual creation of the SAH and Director nodes for the Undercloud and deployment of the Overcloud, is part of this chapter. Lastly, this chapter describes performance tuning parameters, resource isolation, and how node placement works.
Before you begin
Tested BIOS and firmware versions
Components and versions:
• BIOS: 1.6.13
• iDRAC with Lifecycle Controller: 3.30.30.30
• Power supply: 00.1B.53
• Intel(R) Ethernet 25G 2P XXV710 adapter: 18.8.9
• Intel(R) Gigabit 4P X710/I350 rNDC: 18.8.9
• PERC H740P Mini: 50.5.0-1750
Switch firmware:
• S3048-ON firmware: Cumulus Linux OS 3.7.1
• S5248-ON firmware: Cumulus Linux OS 3.7.1
Disk layout
Disk layout for Controller node and Converged node
Note: The displayed output can differ depending on the node when verifying the Virtual Disk.
Software requirements
Software requirements includes:
• Red Hat Enterprise Linux 7.6
• Red Hat OpenStack Platform version 13
• Red Hat Ceph Storage 3.2
Note:
The user needs to be aware of the subscription Pool IDs at this stage.
Please contact a Dell EMC sales representative for any software components required to perform these steps.
Deployment workflow
Figure 8: Workflow for RHOSP deployment over HCI on page 37 illustrates the workflow of an RHOSP deployment over Hyper-Converged Infrastructure. The activity involves deployment of the SAH node, configuration and installation of the Undercloud, and finally deployment of the Overcloud. This chapter describes the deployment in detail.
Variables and descriptions:
• HostName: The FQDN of the server, e.g., sah.acme.com.
• SystemPassword: The root user password for the system.
• SubscriptionManagerUser: The user credential when registering with Subscription Manager.
• SubscriptionManagerPassword: The user password when registering with Subscription Manager.
• SubscriptionManagerPool: The pool ID used when attaching the system to an entitlement.
• SubscriptionManagerProxy: Optional proxy server to use when attaching the system to an entitlement.
• SubscriptionManagerProxyPort: Optional port for the proxy server.
• SubscriptionManagerProxyUser: Optional user name for the proxy server.
• SubscriptionManagerProxyPassword: Optional password for the proxy server.
• Gateway: The default gateway for the system.
• NameServers: A comma-separated list of nameserver IP addresses.
• NTPServers: A comma-separated list of time servers. These can be IP addresses or FQDNs.
• TimeZone: The time zone in which the system resides.
• anaconda_interface: The public interface that allows connection to Red Hat Subscription services. For 10GbE or 25GbE Intel NICs, "em4" (the fourth NIC on the motherboard) should be used.
• extern_bond_name: The name of the bond that provides access to the external network.
• extern_boot_opts: The boot options for the bond on the external network. Typically, there is no need to change this variable.
• extern_bond_opts: The bonding options for the bond on the external network. Typically, there is no need to change this variable.
• extern_ifaces: A space-delimited list of interface names to bond together for the bond on the external network.
• internal_bond_name: The name of the bond that provides access for all internal networks.
• internal_boot_opts: The boot options for the bond on the internal network. Typically, there is no need to change this variable.
• internal_bond_opts: The bonding options for the bond on the internal network. Typically, there is no need to change this variable.
• internal_ifaces: A space-delimited list of interface names to bond together for the bond on the internal network.
• mgmt_bond_name: The VLAN interface name for the management network. Typically, there is no need to change this variable.
• prov_bond_name: The VLAN interface name for the provisioning network.
• prov_boot_opts: The boot options for the provisioning VLAN interface. Typically, there is no need to change this variable.
• stor_bond_name: The VLAN interface name for the storage network.
• stor_boot_opts: The boot options for the storage VLAN interface. Typically, there is no need to change this variable.
• pub_api_bond_name: The VLAN interface name for the public API interface.
• pub_api_boot_opts: The boot options for the public API VLAN interface. Typically, there is no need to change this variable.
• priv_api_bond_name: The VLAN interface name for the private API interface.
• priv_api_boot_opts: The boot options for the private API VLAN interface. Typically, there is no need to change this variable.
• br_mgmt_boot_opts: The bonding options, IP address, and netmask for the management bridge.
• br_prov_boot_opts: The bonding options, IP address, and netmask for the provisioning bridge.
• br_stor_boot_opts: The bonding options, IP address, and netmask for the storage bridge.
• br_pub_api_boot_opts: The bonding options, IP address, and netmask for the public API bridge.
• br_priv_api_boot_opts: The bonding options, IP address, and netmask for the private API bridge.
• prov_network: The network IP address for the provisioning network for use by the NTP server.
• prov_netmask: The netmask for the provisioning network for use by the NTP server.
Creating image
1. Create the .img file.
2. Create a filesystem on the .img file.
3. Create a directory on which to mount the filesystem:
$ mkdir /mnt/usb
4. Mount the filesystem in the usb directory.
5. Copy the kickstart file to the mounted filesystem:
$ cp osp_sah.ks /mnt/usb/
6. Unmount the filesystem:
$ umount /mnt/usb
7. Copy the .img file to a host from which you have access to the SAH node iDRAC user interface.
When booting the SAH node installer, append the kickstart file location to the kernel boot parameters:
inst.ks=hd:sdb:/osp_sah.ks
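A sketch of the full image-creation sequence for the steps above; the image size, filesystem type, and the sdb device name seen by the installer are assumptions that depend on the environment:
$ dd if=/dev/zero of=osp_sah.img bs=1M count=10   # step 1: create a small .img file
$ mkfs.ext4 -F osp_sah.img                        # step 2: create a filesystem on the image
$ mkdir /mnt/usb                                  # step 3: create a mount point
$ sudo mount -o loop osp_sah.img /mnt/usb         # step 4: mount the filesystem
$ cp osp_sah.ks /mnt/usb/                         # step 5: copy the kickstart file
$ sudo umount /mnt/usb                            # step 6: unmount the filesystem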
Variables and descriptions:
• rootpassword: The root user password for the system.
• timezone: The time zone the system is in.
• smuser: The user credential when registering with Subscription Manager.
• smpassword: The user password when registering with Subscription Manager. The password must be enclosed in single quotes if it contains certain special characters.
• smpool: The Red Hat OpenStack Platform Director (Virtual Node) pool ID used when attaching the system to an entitlement.
• hostname: The FQDN of the Director Node.
• gateway: The default gateway for the system.
• nameserver: A comma-separated list of nameserver IP addresses.
• ntpserver: The SAH node's provisioning IP address. The SAH node is an NTP server and will synchronize to the NTP servers specified in the SAH node's kickstart file.
• user: The ID of an admin user to create for installing Red Hat OpenStack Platform Director. The default admin user is osp_admin.
• password: The password for the osp_admin user.
• eth0: Specifies the IP address and network mask for the public API network. The line begins with eth0, followed by at least one space, the IP address of the VM on the public API network, another set of spaces, and then the network mask.
• eth1: Specifies the IP address and network mask of the VM on the provisioning network, in the same format as eth0.
• eth2: Specifies the IP address and network mask of the VM on the management network, in the same format as eth0.
• eth3: Specifies the IP address and network mask of the VM on the private API network, in the same format as eth0.
6. Save the file under /tmp.
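A hypothetical director.cfg illustrating the format described above; every value below is a placeholder, not a value from this guide, and the simple key/value line format is assumed from the ethN descriptions:
rootpassword Ch4ngeMe
timezone America/Chicago
smuser example-rhn-user
smpassword 'example-rhn-password'
smpool 0123456789abcdef0123456789abcdef
hostname director.example.local
gateway 192.168.110.1
nameserver 8.8.8.8,8.8.4.4
ntpserver 192.168.120.12
user osp_admin
password Ch4ngeMe
eth0 192.168.110.20 255.255.255.0
eth1 192.168.120.20 255.255.255.0
eth2 192.168.110.21 255.255.255.0
eth3 192.168.140.20 255.255.255.0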
1. Create the iso directory where the RHEL ISO file will be stored.
$ mkdir -p /store/data/iso
2. Create the images directory where the VM image will be located.
$ mkdir /store/data/images
3. Copy the RHEL 7.6 ISO file to the /store/data/iso directory.
4. Create the director VM as follows:
Note: Please refer Overview on page 15 for overview of the network bridges.
5. Once the deployment is triggered, progress can be monitored using virt-viewer.
$ virt-viewer director
Note: The VM will appear as shut off when the installation completes.
7. Once the VM is installed and shut off, start the Director VM.
Undercloud Deployment
$ su -l osp_admin
4. Create directories for templates and images.
$ mkdir ~/images
$ mkdir ~/templates
5. Copy the Undercloud configuration sample file and save it as undercloud.conf.
$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
6. Modify the undercloud.conf file according to your needs.
Parameters and descriptions:
• undercloud_hostname: Defines the fully-qualified host name for the Undercloud.
• local_ip: The IP address and prefix of the Director Node on the provisioning network in Classless Inter-Domain Routing (CIDR) format (xx.xx.xx.xx/yy). This must be the IP address used for eth1 in director.cfg. The prefix used here must correspond to the netmask for eth1 as well (usually 24).
• subnets: List of routed network subnets for provisioning and introspection.
• local_subnet: Name of the local subnet where the PXE and DHCP interfaces reside.
• local_interface: Name of the network interface responsible for PXE booting the Overcloud instances.
• masquerade_network: The network address and prefix of the Director Node on the provisioning network in CIDR format (xx.xx.xx.xx/yy). This must be the network used for eth1 in director.cfg. The prefix used here must correspond to the netmask for eth1 as well (usually 24).
• inspection_enable_uefi: Supports the UEFI boot method.
• enable_ui: Enables the TripleO user interface.
• ipxe_enabled: Supports iPXE for deployment and introspection.
• scheduler_max_attempts: The maximum number of attempts (30) when deploying the Overcloud instances.
• clean_nodes: Wipes the disks of the Converged nodes when data already exists.
• cidr: Network CIDR for the Neutron-managed subnet for Overcloud instances.
• dhcp_start: The starting IP address on the provisioning network to use for Overcloud nodes. Note: Ensure the IP address of the Director Node is not included.
Note: For more modification details, refer to the appendix Undercloud configuration file on page 92.
7. Save and exit the file.
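A hypothetical undercloud.conf excerpt illustrating these parameters; all addresses and names below are placeholders and must be adapted to the actual environment:
[DEFAULT]
undercloud_hostname = director.example.local
local_ip = 192.168.120.13/24
subnets = ctlplane-subnet
local_subnet = ctlplane-subnet
local_interface = eth1
masquerade_network = 192.168.120.0/24
inspection_enable_uefi = true
enable_ui = false
ipxe_enabled = true
scheduler_max_attempts = 30
clean_nodes = true

[ctlplane-subnet]
cidr = 192.168.120.0/24
dhcp_start = 192.168.120.100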
Install Undercloud
1. Start the undercloud installation.
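With the stock TripleO tooling, this step is performed as the osp_admin user with:
$ openstack undercloud install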
These files are needed to interact with the OpenStack services, and should be secured.
#############################################################################
3. Download images which are required to install Overcloud.
$ source stackrc
$ sudo yum install rhosp-director-images rhosp-director-images-ipa
4. Extract the images to the /home/osp_admin/images directory.
$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do tar -xvf $i ; done
5. Upload images to Glance.
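The upload is typically done with the standard TripleO image upload command, for example:
$ openstack overcloud image upload --image-path /home/osp_admin/images/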
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
| ID                                   | Name           | Disk Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full | qcow2       | 1347420160 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
| ID                                   | Name             | Disk Format | Size    | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel | aki         | 6639920 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
| ID                                   | Name              | Disk Format | Size      | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk | ari         | 420527022 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel       | active |
| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk      | active |
| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full         | active |
| 94e8715e-232a-4ca0-bad3-e674dfb6264e | overcloud-full-initrd  | active |
| 74abf8fe-686f-47d2-8fe4-12c92c88c2a5 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
+--------------------------------------+-----------------+--------------------------------------+------------------+
| ID                                   | Name            | Network                              | Subnet           |
+--------------------------------------+-----------------+--------------------------------------+------------------+
| 23f88d67-0f42-42a4-b508-19b411404ee3 | ctlplane-subnet | 26ee3e35-accb-4e16-8b53-1f06288c6ed1 | 192.168.120.0/24 |
+--------------------------------------+-----------------+--------------------------------------+------------------+
$ source ~/stackrc
2. Register the nodes in the Undercloud.
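Node registration is driven by a JSON file describing each node's iDRAC address and credentials. JetPack provides its own registration scripts; with the stock TripleO tooling, the equivalent step is typically (the instackenv.json path is an assumption):
$ openstack overcloud node import ~/instackenv.json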
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| 1cc47781-2b62-4fe2-8ce4-6a10c609ff4a | control-0    | None          | power off   | available          | False       |
| adaca128-1f2c-4eb6-a1bb-ae0b93fe2d2f | control-1    | None          | power off   | available          | False       |
| 2effb861-e759-4e5b-8744-16ab9d859cf0 | control-2    | None          | power off   | available          | False       |
| 26cb068a-84d6-4fb6-9433-9d061a7c6adb | computeHCI-0 | None          | power off   | available          | False       |
| e3c2906f-d590-4ae0-927f-4267312545b6 | computeHCI-1 | None          | power off   | available          | False       |
| 48309885-ebcd-41dd-8a9f-7cecfe3f1a0e | computeHCI-2 | None          | power off   | available          | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
Configure networking
To configure network environment parameters:
1. On the Director VM, from the osp_admin home directory, copy all files needed for the upcoming deployment.
$ cp -R JetPack/templates/ templates/
2. Edit the templates/overcloud/network-environment.yaml file.
3. Search the CHANGEME section to make changes.
4. Make changes as described in the following table.
Note: The IP, VLAN, and MTU values are examples; the user can configure and deploy them as per their network requirements.
Configure cluster
To configure cluster environment parameters:
1. On the Director VM, from the osp_admin home directory, edit the templates/dell-environment.yaml file.
2. Search the CHANGEME section to make changes.
3. Make changes as described in the following table.
$ cp -R /usr/share/openstack-tripleo-heat-templates/* ~/templates/overcloud
2. Generate a custom roles_data.yaml file that includes the HCI role.
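With the stock TripleO tooling, a roles file containing the ComputeHCI role can typically be generated as follows; JetPack may wrap this step in its own scripts:
$ cd ~/templates/overcloud
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI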
Starting install...
Retrieving file .treeinfo... | 1.9 kB 00:00:00
Retrieving file vmlinuz... | 6.3 MB 00:00:00
Retrieving file initrd.img... | 52 MB 00:00:00
Allocating 'dashboard.img' | 100 GB 00:00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
13 dashboard running
$ cp -R JetPack/src/pilot .
4. Copy the network-environment.yaml file to the pilot/templates directory.
$ cp /home/osp_admin/templates/overcloud/network-environment.yaml /home/osp_admin/pilot/templates/network-environment.yaml
5. Copy the undercloud.conf file to the pilot directory.
$ cp /home/osp_admin/undercloud.conf /home/osp_admin/pilot/
6. Replace ceph-storage with computeHCI in the pilot/subscription.json file
$ cd ~/pilot
9. Run the script provided as part of JetPack to configure the dashboard.
Chapter 6: Validation and testing
Topics:
• Manual validation
• Tempest test suite

This chapter illustrates the optional manual sanity-check procedure and includes instructions for configuring and running the Tempest test suite.
Tempest is OpenStack's official test suite for all OpenStack services post deployment.
Tempest validates the Dell EMC Ready Architecture Guide for the deployment of Red Hat OpenStack Platform over Hyper-Converged Infrastructure.
Manual validation
The following illustrates post-deployment validation of the Overcloud OpenStack services through the creation and validation of networks, subnets, and instances.
This section includes instructions for creating the networks and testing a majority of your RHOSP environment using Glance (configured with Red Hat Ceph Storage), Cinder, and Nova.
Note: You must complete these steps prior to creating instances and volumes and testing the functional operations of OpenStack.
1. Source the Overcloud credentials file.
$ cd ~/
$ source <overcloud_name>rc
2. Set up a new project.
+--------------------------------------+----------------------------------------------------+--------------------------------------+
| ID                                   | Name                                               | Subnets                              |
+--------------------------------------+----------------------------------------------------+--------------------------------------+
| 4164e0ba-07fe-4b2e-b5fa-01181987ab9f | public                                             | fd43cb2b-b746-4443-9a81-ad99e36431df |
| 8e36a5dd-383e-4415-9be6-d91d9fedb023 | HA network tenant 8b6fe7f3af074ccb9285043bb2f3cf5b | 86fd1a5b-d02d-42a9-9808-b3bfdac6f422 |
| e44c6fe7-19d4-40a3-ae13-330ee7fb49cf | tenant_net1                                        | cfc4cbec-ea71-4384-9179-9dc7bc6d8c9e |
+--------------------------------------+----------------------------------------------------+--------------------------------------+
8. Add the tenant network interface between the router and the tenant network.
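A sketch of the commands that typically implement these steps; the subnet range, router name, and external network name are placeholders rather than values mandated by this guide:
$ openstack network create tenant_net1
$ openstack subnet create --network tenant_net1 --subnet-range 192.168.200.0/24 tenant_subnet1
$ openstack router create tenant_router1
$ openstack router set --external-gateway public tenant_router1
$ openstack router add subnet tenant_router1 tenant_subnet1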
Note: MY_KEY.pem is an output file created by the Nova keypair-add command, and will be used later.
5. Create an instance using the Nova boot command.
Note: Change the IDs to your IDs, and update the nameofinstance and the key_name.
6. List the instance you created:
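A sketch of the keypair, boot, and list steps with the legacy nova CLI referenced above; the flavor, image, and network IDs as well as the instance name are placeholders:
$ nova keypair-add MY_KEY > MY_KEY.pem
$ nova boot --flavor <flavor_id> --image <image_id> --nic net-id=<tenant_net_id> --key-name MY_KEY nameofinstance
$ nova list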
$ swift stat
Expected output.
Account: AUTH_6f049e55ab9b49ca9ee342ed4c17a86b
Containers: 13
Objects: 2066
Bytes: 10602595
Containers in policy "policy-0": 13
Objects in policy "policy-0": 2066
Bytes in policy "policy-0": 10602595
Meta Temp-Url-Key: 60b16566fd14c10017ce78124af6e028
X-Account-Project-Domain-Id: default
X-openstack-Request-Id: tx584d13dce54d4462b5e91-005c3f8a46
X-Timestamp: 1546424543.31946
X-Trans-Id: tx584d13dce54d4462b5e91-005c3f8a46
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes
$ cd ~/
$ source stackrc
$ nova list (make note of the Controller IPs)
$ ssh heat-admin@<controller ip>
$ sudo -i
# pcs cluster status
+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                  | Status | Task State | Power State | Networks                 |
+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+
| cfe21aea-91be-49bb-931f-5061e4be397d | r139-hci-computehci-0 | ACTIVE | -          | Running     | ctlplane=192.168.120.134 |
| 64b94937-7a29-4950-af9e-d9980502d90d | r139-hci-computehci-1 | ACTIVE | -          | Running     | ctlplane=192.168.120.135 |
| e14e34ae-fdce-4865-bd8c-a9e5a6dbf9af | r139-hci-computehci-2 | ACTIVE | -          | Running     | ctlplane=192.168.120.136 |
| 8d1ecfde-47f0-4112-baf8-8877416a8a82 | r139-hci-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.120.141 |
| fd7f6bf6-b6e8-4154-b68f-7f92c274a29b | r139-hci-controller-1 | ACTIVE | -          | Running     | ctlplane=192.168.120.127 |
| a419f86d-5e86-490a-a583-62e14d7c5508 | r139-hci-controller-2 | ACTIVE | -          | Running     | ctlplane=192.168.120.129 |
+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+
2. Initiate an SSH session to the active Controller, as heat-admin.
3. Find the instance namespaces by executing the following commands:
$ sudo -i
$ ip netns
qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 (id: 2)
qrouter-bb00b972-f67c-45ba-a573-ad5d7e8debc5 (id: 1)
qdhcp-2e43972b-0778-4cc3-be64-9dcc9789863b (id: 0)
4. Access an instance namespace by executing the following command:
$ ip a
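For example, the addresses inside one of the namespaces listed above can be inspected as follows (the namespace name is illustrative, taken from the ip netns output):
$ ip netns exec qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 ip a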
Configure Tempest
1. Log in to the OSP director VM as the osp_admin user.
2. Clone the Tempest repository from GitHub into the home directory /home/osp_admin/.
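For example (the upstream repository URL is shown; a Red Hat packaged Tempest may be used instead):
$ cd /home/osp_admin/
$ git clone https://github.com/openstack/tempest.git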
$ tempest --version
5. Source the overcloud admin credentials.
$ source ~/overcloudrc
6. Create and initialize the tempest workspace.
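A minimal sketch of this step, using the workspace name cloud-01 that is referenced below:
$ tempest init cloud-01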
$ cd cloud-01
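Once the workspace is configured, a smoke test run can be launched, for example:
$ tempest run --smoke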
Summary
The main objective of Tempest API tests is to ensure that our Dell EMC Ready Architecture for Hyper-Converged
Infrastructure on Red Hat OpenStack Platform is compatible with the OpenStack APIs. Tempest API tests ensure that
deployment of the HCI cloud does not interrupt any OpenStack API functionality.
Chapter 7: Performance measuring

Topics:
• Overview
• Performance tools
• Test cases and test reports
• Conclusion

This chapter details the testing methodology used throughout the experimentation. It includes the tools and workloads used and the rationale for their choice. It also provides the benchmark results along with the bottleneck analysis.
Overview
The rapid development of cloud computing and networking technologies delivers a wide variety of choices for an equally diverse group of information system organizations.
Cloud service performance significantly impacts the functionality and execution of the information infrastructure. A thorough evaluation of cloud service performance is therefore crucial and beneficial to both service providers and consumers.
This chapter outlines the performance test methodology and the graphical representation of the results in three areas:
• Network performance: evaluation of the throughput, latency, and jitter of network traffic between virtual machines.
• Compute performance: evaluation of memory throughput and memory latency.
• Storage performance: evaluation of IOPS and latency.
Performance tools
The performance of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is measured using the following tools:
1. Spirent: Spirent Temeva (Test, Measure, Validate) is a Software-as-a-Service (SaaS) platform that provides a dashboard to configure, measure, analyze, and share test metrics. This tool measures network and compute performance.
2. FIO: a popular open source I/O workload generator used for performance benchmarking. This tool measures storage performance.
Network performance
Test case 1
Objective: Measure network throughput (Mbits/sec) and L2/L3 network latency (ms) between instances on the same or different compute hosts and networks.
Description: The test calculates network throughput between instances using four combinations:
1. Two VMs residing on the same compute host and the same network
2. Two VMs residing on the same compute host and different networks
3. Two VMs residing on different compute hosts and the same network
4. Two VMs residing on different compute hosts and different networks
Unidirectional network traffic is generated between the two VMs (referred to as East/West traffic) for a range of frame sizes - 64B, 256B, 512B, 1024B, and 1518B - each iteration running for 60 seconds. For same-network or different-network traffic, the learning mode is configured as L2 or L3 respectively. The number of flows per port is set to 1000.
The test results are plotted with standard Ethernet frame sizes on the x-axis and the corresponding maximum network throughput on the y-axis. Figure 12: Network throughput vs frame size on page 72 depicts the network latency behavior, with standard Ethernet frame sizes on the x-axis and the corresponding latency on the y-axis.
Analysis and inference: Network throughput increases as the frame size increases and reaches its maximum at the standard Ethernet frame size of 1518B. Latency is highest for small frame sizes and lowest at the standard Ethernet frame size of 1518B.
When two VMs are present on the same compute host but on different networks, packets flow through the Linux bridge, OVS, and virtual routers. Since routing is involved, traffic traverses the network layer (layer 3), which incurs higher latency.
When two VMs are on different compute hosts and the same network, packets flow through the Linux bridge, OVS, and the physical infrastructure. Activity is limited to layer 2, with no substantial delay comparable to layer 3 routing.
Frame sizes greater than or equal to 1024B yield maximum throughput together with minimum, consistent L2/L3 network latency.
Throughput between VMs on the same compute host increases approximately linearly with frame size, up to about 75% of the maximum throughput. The placement of VMs on hyper-converged nodes is the bottleneck to higher network throughput.
Enabling NFV features such as OVS-DPDK in the Hyper-Converged Infrastructure enhances packet processing and forwarding rates, further optimizing the HCI solution.
Test case 2
Objective: Measure L2/L3 network jitter (ms) and L2/L3 network latency (ms) between instances on the same or different compute hosts and networks.
Description: The test calculates network jitter between instances using two combinations:
1. Both VMs spawned on the same network (L2).
2. The two VMs spawned on different networks (L3).
Unidirectional network traffic is generated between the two VMs (referred to as East/West traffic) for a range of frame sizes - 64B, 256B, 512B, 1024B, and 1518B - each running for 60 seconds.
The learning mode is configured as L2 or L3 for same-network or different-network traffic respectively, with 1000 flows per port.
The test results are plotted with standard Ethernet frame sizes on the x-axis and the corresponding network jitter on the y-axis.
Analysis and inference: When the VMs are on different networks, packets traverse the Linux bridge, OVS, and virtual routers. Packets moving through multiple path elements increase the packet delay variation (jitter).
When the VMs are on the same network, packets traverse only the Linux bridge and OVS, following a single path and reducing the delay variation.
Compute performance
Test case 3
Objective: Measure compute performance in terms of memory IOPS (millions) and latency (us).
Description: The test is performed on an increasing number of agent VMs, each with four vCPUs, 4GB RAM, and a 20GB disk, hosted on a single compute node. Memory read and write IOPS at a 4KB block size, with both random and sequential access patterns, are stressed to their maximum value on groups of VMs ranging from 1 to 30, which yields the maximum IOPS supported by the infrastructure at that point in time.
The test results are plotted with the number of instances (1 to 30) on the x-axis and the average memory read IOPS (millions) on the y-axis.
A second plot shows the number of instances (1 to 30) on the x-axis and the average memory write IOPS (millions) on the y-axis.
A third plot shows the number of instances (1 to 30) on the x-axis and the corresponding memory latency (us) for random and sequential read and write operations on the y-axis.
Analysis and inference: The memory read/write IOPS for both access patterns - random and sequential - depicted in Figure 17: 4KB memory latency on page 75 show the maximum possible IOPS per instance. Increasing the number of OpenStack instances decreases the average IOPS. The bends in the graph denote the bottleneck for a particular number of instances under test for memory IOPS. IOPS can be improved significantly by enabling huge pages and NUMA awareness on the hyper-converged compute nodes.
A write operation consumes more IOPS than a read operation regardless of the memory access pattern. The latency of both random and sequential writes at a 4KB block size is correspondingly higher than that of random and sequential reads.
Storage performance
Test case 4
Objective: Measure storage performance in terms of IOPS and latency (ms).
Description: The test measures storage IOPS and latency on a Ceph cluster configured with 16 OSDs per node (2 OSDs per NVMe SSD), i.e., 48 OSDs in total across the three nodes.
The test performs a series of FIO benchmarks on groups of VMs ranging from 40 to 240. The VMs under test are configured with one vCPU, 1024MB RAM, and a 10GB disk on a single compute host.
FIO benchmark details follow:
• I/O Engine : libaio
• I/O mode : direct I/O
• Block size : 4KB
• I/O Depth : 64
• Number of jobs per VM : 8
• File size : 512MB
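A representative FIO invocation matching these parameters is sketched below (the job name, target file, and the random-write pattern are illustrative assumptions; the read and sequential runs vary only the --rw option):
$ fio --name=hci-randwrite --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=64 --numjobs=8 --size=512m --filename=/mnt/test/fio.dat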
The test results are plotted with the number of instances (40 to 240) on the x-axis and random read/write IOPS on the y-axis.
A second plot shows the number of instances (40 to 240) on the x-axis and sequential read/write IOPS on the y-axis.
Additional plots show the number of instances (40 to 240) on the x-axis and sequential read/write IOPS on the y-axis.
Analysis and inference: Write operations are more expensive than read operations for smaller block sizes. Ceph acknowledges a client only after the data has been entirely written to the configured number of OSDs, which in this case is 2x replication, i.e., a primary and a secondary OSD.
For read operations, the client communicates only with the acting/primary OSD. The result for up to 240 VMs illustrates a gradual but minimal decrease in average IOPS as the number of compute resources increases. Latency is higher for the random pattern than for the sequential pattern for a given number of instances.
Random latency follows a consistent curve, while sequential latency increases linearly as virtual compute resources increase.
Faster NVMe drives could improve performance but may shift the load to the CPUs.
Conclusion
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed in a
hyper-converged approach by colocating Ceph and compute services.
Dell EMC PowerEdge R740xd and Dell EMC PowerEdge R640 servers with Intel 25GbE networking provide a
concrete performance baseline and state-of-the-art hardware.
Software-defined storage Red Hat Ceph Storage 3.2 with BlueStore backend enabled is well suited for use cases
where performance is a critical element.
Finally, Intel NVMe drives offer robustness and an improved model of performance-driven SSD drives.
The performance testing methodology for the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform was supplied by Spirent, a trusted Dell EMC partner.
The biggest challenge of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is the optimal tuning of memory, CPU cores, and the Ceph OSD-to-disk ratio to address resource distribution and contention. Section Performance tuning on page 31 illustrates the performance tuning parameters that define a flexible and optimized architecture; each parameter can be adjusted to the customer's infrastructure and use case requirements. The simulated testing methodology for measuring the various performance metrics applies to a wide range of devices.
Performance can be improved further by enabling NFV-oriented features such as Huge Pages for memory I/O intensive applications and CPU pinning on NUMA-aware nodes, along with additional functionality such as OVS-DPDK for intelligent packet forwarding and processing.
Appendix A: Bill of Materials

Topics:
• Bill of Materials - SAH node
• Bill of Materials - 3 Controller nodes
• Bill of Materials - 3 Converged nodes
• Bill of Materials - 1 Dell EMC Networking S3048-ON switch
• Bill of Materials - 2 Dell EMC Networking S5248-ON switches

This appendix provides the bill of materials information necessary to purchase the proper hardware to deploy the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform.
Bill of Materials - SAH node

Function Description
Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G, 12C/24T, 10.4GT/s, 19.25M Cache, Turbo, HT (125W) DDR4-2666
RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP
Storage controller PERC H740P
RAID layout RAID10
Bill of Materials - 3 Controller nodes

Function Description
Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G, 12C/24T, 10.4GT/s, 19.25M Cache, Turbo, HT (125W) DDR4-2666
RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP
Storage controller PERC H740P
RAID layout RAID10
Bill of Materials - 3 Converged nodes

Function Description
Platform Dell EMC PowerEdge R740xd
CPU 2 x Intel® Xeon® Platinum 8160 2.1G, 24C/48T, 10.4GT/s, 33M Cache, Turbo, HT (150W) DDR4-2666
RAM 384GB RAM (12 x 32GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 4 x Intel XXV710 DP 25GbE DA/SFP+
Disk 8 x 3.2TB, NVMe, Mxd Use Expr Flash, P4610
2 x 240GB, SSD SATA, 2.5, HP, S4600
Bill of Materials - 1 Dell EMC Networking S3048-ON switch

Product Description
S3048-ON 48 line-rate 1000BASE-T ports, 4 line-rate 10GbE SFP+ ports
Redundant power supply AC or DC power supply
Fans Fan Module, I/O Panel to PSU Airflow or PSU to I/O Panel Airflow
Bill of Materials - 2 Dell EMC Networking S5248-ON switches

Product Description
S5248-ON 100GbE, 40GbE, and 25GbE ports
Redundant power supply AC or DC power supply
Fans Fan Module, I/O Panel to PSU Airflow or PSU to I/O Panel Airflow
Appendix B: Environment Files

Topics:
• Heat templates and environment yaml files
• Nodes registration json file
• Undercloud configuration file

This appendix provides details of the files that must be modified for deployment of the overcloud:
• yaml files (.yaml)
• instackenv file (.json)
• undercloud (.conf)
network-environment.yaml
resource_registry:
OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/ctlplane_vip.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml
parameter_defaults:
# CHANGEME: Change the following to the desired MTU for Neutron Networks
NeutronGlobalPhysnetMtu: 1500
# CHANGEME: Change the following to the CIDR for the Management network
ManagementNetCidr: 192.168.110.0/24
# CHANGEME: Change the following to the CIDR for the Private API network
InternalApiNetCidr: 192.168.140.0/24
# CHANGEME: Change the following to the CIDR for the Tenant network
TenantNetCidr: 192.168.130.0/24
# CHANGEME: Change the following to the CIDR for the Storage network
StorageNetCidr: 192.168.170.0/24
# CHANGEME: Change the following to the CIDR for the Storage Clustering network
StorageMgmtNetCidr: 192.168.180.0/24
# CHANGEME: Change the following to the CIDR for the External network
ExternalNetCidr: 100.67.139.0/26
# CHANGEME: Change the following to the DHCP ranges for the iDRACs to use on the Management network
ManagementAllocationPools: [{'start': '192.168.110.30', 'end': '192.168.110.45'}]
# Tenant network
# Storage network
StorageAllocationPools: [{'start': '192.168.170.50', 'end': '192.168.170.120'}]
# External network
ExternalAllocationPools: [{'start': '100.67.139.20', 'end': '100.67.139.50'}]
# CHANGEME: Set to the DNS servers to use for the overcloud nodes (maximum 2)
DnsServers: ["8.8.8.8"]
# External network
ExternalNetworkVlanID: 1391
# CHANGEME: Change the following to mtu size used for the floating network
ExtraConfig:
neutron::plugins::ml2::physical_network_mtus: ['physext:1500']
# neutron::plugins::ml2::physical_network_mtus: physext:1500
# CHANGEME: Set to empty string for External VLAN, br-ex if on native VLAN of br-ex
NeutronExternalNetworkBridge: "''"
ServiceNetMap:
NeutronTenantNetwork: tenant
CeilometerApiNetwork: internal_api
AodhApiNetwork: internal_api
GnocchiApiNetwork: internal_api
MongoDbNetwork: internal_api
CinderApiNetwork: internal_api
CinderIscsiNetwork: storage
GlanceApiNetwork: storage
GlanceRegistryNetwork: internal_api
KeystoneAdminApiNetwork: ctlplane # allows undercloud to config endpoints
KeystonePublicApiNetwork: internal_api
NeutronApiNetwork: internal_api
HeatApiNetwork: internal_api
NovaApiNetwork: internal_api
NovaMetadataNetwork: internal_api
NovaVncProxyNetwork: internal_api
SwiftMgmtNetwork: storage # Changed from storage_mgmt
SwiftProxyNetwork: storage
SaharaApiNetwork: internal_api
HorizonNetwork: internal_api
MemcachedNetwork: internal_api
RabbitMqNetwork: internal_api
RedisNetwork: internal_api
MysqlNetwork: internal_api
CephClusterNetwork: storage_mgmt
CephPublicNetwork: storage
CephRgwNetwork: storage
ControllerHostnameResolveNetwork: internal_api
ComputeHostnameResolveNetwork: internal_api
BlockStorageHostnameResolveNetwork: internal_api
ObjectStorageHostnameResolveNetwork: internal_api
CephStorageHostnameResolveNetwork: storage
NovaColdMigrationNetwork: internal_api
NovaLibvirtNetwork: internal_api
static-vip-environment.yaml
resource_registry:
OS::TripleO::Network::Ports::NetVipMap: ./network/ports/net_vip_map_external.yaml
OS::TripleO::Network::Ports::ExternalVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::InternalApiVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::StorageVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::RedisVipPort: ./network/ports/from_service.yaml
parameter_defaults:
ServiceVips:
# CHANGEME: Change the following to the VIP for the redis service on the
# Private API/internal_api network.
# Note that this IP must lie outside the InternalApiAllocationPools range
# specified in network-environment.yaml.
redis: 192.168.140.49
ControlPlaneIP: 192.168.120.251
# CHANGEME: Change the following to the VIP on the Private API network.
# Note that this IP must lie outside the InternalApiAllocationPools range
# specified in network-environment.yaml.
InternalApiNetworkVip: 192.168.140.121
# CHANGEME: Change the following to the VIP on the Public API network.
# Note that this IP must lie outside the ExternalAllocationPools range
# specified in network-environment.yaml.
ExternalNetworkVip: 100.67.139.62
StorageNetworkVip: 192.168.170.121
StorageMgmtNetworkVip: 192.168.120.252
static-ip-environment.yaml
resource_registry:
OS::TripleO::Controller::Ports::ExternalPort: ./network/ports/external_from_pool.yaml
OS::TripleO::Controller::Ports::InternalApiPort: ./network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: ./network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml
OS::TripleO::Controller::Ports::TenantPort: ./network/ports/tenant_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::ExternalPort: ./network/ports/noop.yaml
OS::TripleO::ComputeHCI::Ports::InternalApiPort: ./network/ports/internal_api_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StoragePort: ./network/ports/storage_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: ./network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::TenantPort: ./network/ports/tenant_from_pool.yaml
parameter_defaults:
# Specify the IPs for the overcloud nodes on the indicated networks below.
# The IPs are listed in the order: node0, node1, node2 for each network.
#
# Note that the IPs chosen must lie outside the allocation pools defined in
# network-environment.yaml, and must not collide with the IPs assigned to
# other nodes or networking equipment on the network, such as the SAH,
# OSP Director node, Ceph Storage Admin node, etc.
ControllerIPs:
tenant:
- 192.168.130.12
- 192.168.130.13
- 192.168.130.14
internal_api:
- 192.168.140.12
- 192.168.140.23
- 192.168.140.14
storage:
- 192.168.170.12
- 192.168.170.13
- 192.168.170.14
external:
- 100.67.139.12
- 100.67.139.13
- 100.67.139.14
ComputeHCIIPs:
tenant:
- 192.168.130.15
- 192.168.130.16
- 192.168.130.17
internal_api:
- 192.168.140.15
- 192.168.140.16
- 192.168.140.17
storage:
- 192.168.170.15
- 192.168.170.16
- 192.168.170.17
storage_mgmt:
- 192.168.180.15
- 192.168.180.16
- 192.168.180.17
nic_environment.yaml
resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig: ./controller.yaml
#############To be modified by EndUser######################
OS::TripleO::ComputeHCI::Net::SoftwareConfig: ./computeHCI.yaml
parameter_defaults:
# CHANGEME: Change the interface names in the following lines for the
# controller nodes provisioning interface and to include in the controller
# nodes bonds
ControllerProvisioningInterface: em3
ControllerBond0Interface1: p1p1
ControllerBond0Interface2: p2p1
ControllerBond1Interface1: p1p2
ControllerBond1Interface2: p2p2
# The bonding mode to use for controller nodes
ControllerBondInterfaceOptions: mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1
# CHANGEME: Change the interface names in the following lines for the
# compute nodes provisioning interface and to include in the compute
# nodes bonds
ComputeHCIProvisioningInterface: em3
ComputeHCIBond0Interface1: p3p1
ComputeHCIBond0Interface2: p2p1
ComputeHCIBond1Interface1: p3p2
ComputeHCIBond1Interface2: p2p2
ComputeHCIBond2Interface1: p6p1
ComputeHCIBond2Interface2: p7p1
# The bonding mode to use for compute nodes
ComputeHCIBondInterfaceOptions: mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1
##############Modification Ends Here#####################
dell-environment.yaml
resource_registry:
OS::TripleO::NodeUserData: /home/osp_admin/templates/wipe-disks.yaml
parameter_defaults:
CloudDomain: oss.labs
# Configure Ceph Placement Group (PG) values for the indicated pools
CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096}, {"name": "vms", "pg_num": 1024, "pgp_num": 1024}, {"name": "images", "pg_num": 512, "pgp_num": 512}]
CephAnsiblePlaybookVerbosity: 1
CephPoolDefaultSize: 2
CephConfigOverrides:
journal_size: 10240
journal_collocation: true
CephAnsibleDisksConfig:
osd_scenario: lvm
devices:
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
- /dev/nvme3n1
- /dev/nvme4n1
- /dev/nvme5n1
- /dev/nvme6n1
- /dev/nvme7n1
CephAnsibleExtraConfig:
osd_objectstore: bluestore
osds_per_device: 2
osd_recovery_op_priority: 3
osd_recovery_max_active: 3
osd_max_backfills: 1
ceph_osd_docker_memory_limit: 10g
ceph_osd_docker_cpu_limit: 4
ComputeHCIExtraConfig:
cpu_allocation_ratio: 6.7
ComputeHCIParameters:
NovaReservedHostMemory: 115000
NovaComputeExtraConfig:
nova::migration::libvirt::live_migration_completion_timeout: 800
nova::migration::libvirt::live_migration_progress_timeout: 150
ControllerExtraConfig:
nova::api::osapi_max_limit: 10000
nova::rpc_response_timeout: 180
nova::keystone::authtoken::revocation_cache_time: 300
neutron::rpc_response_timeout: 180
neutron::keystone::authtoken::revocation_cache_time: 300
cinder::keystone::authtoken::revocation_cache_time: 300
glance::api::authtoken::revocation_cache_time: 300
tripleo::profile::pacemaker::database::mysql::innodb_flush_log_at_trx_commit: 0
tripleo::haproxy::haproxy_default_maxconn: 10000
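These environment files are passed to the overcloud deployment command. A minimal sketch of how they might be referenced (the template paths and any additional required environment files are illustrative and depend on your deployment):
$ openstack overcloud deploy --templates \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/static-ip-environment.yaml \
  -e ~/templates/static-vip-environment.yaml \
  -e ~/templates/nic_environment.yaml \
  -e ~/templates/dell-environment.yaml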
instackenv.json
{
"nodes": [
{
"name": "control-0",
"capabilities": "node:control-0,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.12",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "control-1",
"capabilities": "node:control-1,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.13",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "control-2",
"capabilities": "node:control-2,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.14",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "computeHCI-0",
"capabilities": "node:computeHCI-0,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.15",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "computeHCI-1",
"capabilities": "node:computeHCI-1,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.16",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "computeHCI-2",
"capabilities": "node:computeHCI-2,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.17",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
}
]
}
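The node registration file is imported into the undercloud with a command such as the following (a sketch; the exact path depends on where the file is stored):
$ source ~/stackrc
$ openstack overcloud node import ~/instackenv.json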
undercloud.conf
[DEFAULT]
#
# From instack-undercloud
#
# DNS domain name to use when deploying the overcloud. The overcloud
# parameter "CloudDomain" must be set to a matching value. (string
# value)
#overcloud_domain_name = localdomain
# Name of the local subnet, where the PXE boot and DHCP interfaces for
# overcloud instances is located. The IP address of the
# local_ip/local_interface should reside in this subnet. (string
# value)
local_subnet = ctlplane-subnet
# The kerberos principal for the service that will use the
# certificate. This is only needed if your CA requires a kerberos
# principal. e.g. with FreeIPA. (string value)
#service_principal =
[auth]
#
# From instack-undercloud
#
# Password used for MySQL root user. If left unset, one will be
# automatically generated. (string value)
#undercloud_db_password = <None>
#undercloud_gnocchi_password = <None>
[ctlplane-subnet]
#
# From instack-undercloud
#
# End of DHCP allocation range for PXE and DHCP of Overcloud instances
# on this network. (string value)
# Deprecated group/name - [DEFAULT]/dhcp_end
dhcp_end = 192.168.120.250
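The undercloud configuration file is consumed when the undercloud is installed, for example:
$ openstack undercloud install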
Appendix C: References

Topics:
• To learn more

Additional information can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-architecture-for-red-hat-openstackplatform?lang=en
Note: If you need additional services or implementation help, please contact your Dell EMC representative.
To learn more
Additional information on the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat
OpenStack Platform can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-
architecture-for-red-hat-openstackplatform?lang=en or by emailing [email protected]
Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved. Trademarks and trade names may be used in this
document to refer to either the entities claiming the marks and names or their products. Specifications are correct at
date of publication but are subject to availability or change without notice at any time. Dell EMC and its affiliates
cannot be responsible for errors or omissions in typography or photography. Dell EMC’s Terms and Conditions of
Sales and Service apply and are available on request. Dell EMC service offerings do not affect consumer’s statutory
rights.
Dell EMC, the DELL EMC logo, the DELL EMC badge, and PowerEdge are trademarks of Dell EMC.