
Dell EMC Ready Architecture for Hyper-Converged

Infrastructure on Red Hat OpenStack Platform


Architecture Guide
Version 1

Dell EMC Service Provider Solutions



Contents
List of Figures................................................................................................................................... v

List of Tables................................................................................................................................... vi

Trademarks..................................................................................................................................... viii
Notes, cautions, and warnings......................................................................................................... ix

Chapter 1: Overview........................................................................................................................10
Executive summary.............................................................................................................................................11
Key benefits........................................................................................................................................................ 11
Key differentiators.............................................................................................................................................. 12

Chapter 2: Architecture overview................................................................................................... 14


Overview............................................................................................................................................................. 15
Software...............................................................................................................................................................18
Red Hat Enterprise Linux Server 7.6.....................................................................................................18
Red Hat OpenStack Platform version 13...............................................................................................18
Red Hat Ceph Storage 3.2......................................................................................................................18
Hardware............................................................................................................................................................. 19
Solution Admin Host..............................................................................................................................19
OpenStack Controller nodes...................................................................................................................19
OpenStack Converged nodes..................................................................................................................19
Intel NVMe 3.2TB P4600...................................................................................................................... 20
Intel XXV710 Dual Port 25GbE............................................................................................................20
Network layout................................................................................................................................................... 21
Physical network.................................................................................................................................................23

Chapter 3: Red Hat Ceph Storage for HCI.....................................................................................24


Introduction to Red Hat Ceph Storage.............................................................................................................. 25
RADOS................................................................................................................................................... 25
Pools........................................................................................................................................................ 26
Placement Groups................................................................................................................................... 26
CRUSH....................................................................................................................................................26
Objectstore...............................................................................................................................................26
Red Hat Ceph Storage Dashboard introduction.....................................................................................26
Hyper-Converged Infrastructure (HCI) Ceph storage........................................................................................27

Chapter 4: Deployment considerations........................................................................................... 29


Converged Nodes with integrated Ceph storage................................................................................................30
Resource isolation...............................................................................................................................................30
Performance tuning.............................................................................................................................................31

Chapter 5: Deployment....................................................................................................................34
Before you begin................................................................................................................................................ 35
Tested BIOS and firmware versions...................................................................................................... 35
Disk layout.............................................................................................................................................. 35


Virtual Disk Creation............................................................................................................................. 36


Software requirements............................................................................................................................ 37
Deployment workflow........................................................................................................................................ 37
Solution Admin Host..........................................................................................................................................38
Prepare Solution Admin Host................................................................................................................ 38
SAH deployment overview.................................................................................................................... 38
Kickstart file customization....................................................................................................................38
Creating image........................................................................................................................................ 40
Presenting the image to the RHEL OS installation process.................................................................. 41
Deploy SAH node...................................................................................................................................42
Red Hat OpenStack Platform Director...............................................................................................................43
Kickstart file customization....................................................................................................................43
Red Hat OpenStack Platform Director as VM deployment...................................................................44
Undercloud Deployment.....................................................................................................................................45
Configure the Undercloud...................................................................................................................... 45
Install Undercloud...................................................................................................................................46
Configure and deploy the Overcloud................................................................................................................. 49
Prepare the nodes registration file..........................................................................................................50
Register and introspect the nodes.......................................................................................................... 51
Configure networking............................................................................................................................. 52
Configure cluster.....................................................................................................................................54
Configure the static IPs.......................................................................................................................... 55
Configure the Virtual IPs....................................................................................................................... 56
Configure the NIC interfaces................................................................................................................. 56
Prepare and deploy the Overcloud......................................................................................................... 57
Red Hat Ceph Storage Dashboard deployment and configuration (optional)....................................................58
Red Hat Ceph Storage Dashboard deployment..................................................................................... 58
Ceph dashboard VM configuration........................................................................................................ 60

Chapter 6: Validation and testing....................................................................................................63


Manual validation............................................................................................................................................... 64
Test Glance image service..................................................................................................................... 65
Testing Nova compute provisioning service..........................................................................................65
Test Cinder block storage service.......................................................................................................... 66
Test Swift object storage service........................................................................................................... 66
Accessing the instance test.....................................................................................................................66
Tempest test suite............................................................................................................................................... 68
Configure Tempest..................................................................................................................................68
Run Tempest tests...................................................................................................................................69
Summary................................................................................................................................................. 69

Chapter 7: Performance measuring................................................................................................. 70


Overview............................................................................................................................................................. 71
Performance tools............................................................................................................................................... 71
Test cases and test reports..................................................................................................................................71
Network performance............................................................................................................................. 71
Compute performance.............................................................................................................................74
Storage performance............................................................................................................................... 76
Conclusion...........................................................................................................................................................78

Appendix A: Bill of Materials........................................................................................................ 80


Bill of Materials - SAH node............................................................................................................................ 81
Bill of Materials - 3 Controller nodes............................................................................................................... 81


Bill of Materials - 3 Converged nodes.............................................................................................................. 81


Bill of Materials - 1 Dell EMC Networking S3048-ON switch........................................................................82
Bill of Materials - 2 Dell EMC Networking S5248-ON switches.....................................................................82

Appendix B: Environment Files......................................................................................................83


Heat templates and environment yaml files.......................................................................................................84
network-environment.yaml..................................................................................................................... 84
static-vip-environment.yaml....................................................................................................................87
static-ip-environment.yaml......................................................................................................................88
nic_environment.yaml.............................................................................................................................89
dell-environment.yaml............................................................................................................................ 89
Nodes registration json file................................................................................................................................ 91
instackenv.json........................................................................................................................................ 91
Undercloud configuration file............................................................................................................................ 92
undercloud.conf.......................................................................................................................................92

Appendix C: References.................................................................................................................. 99
To learn more................................................................................................................................................... 100


List of Figures
Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat
OpenStack Platform key differentiators.....................................................................................12

Figure 2: Architecture for RHOSP version 13 over HCI...............................................................15

Figure 3: SAH node layout.............................................................................................................16

Figure 4: HCI network layout........................................................................................................ 21

Figure 5: Physical network............................................................................................................. 23

Figure 6: HCI Ceph storage........................................................................................................... 27

Figure 7: Verify Virtual Disk......................................................................................................... 37

Figure 8: Workflow for RHOSP deployment over HCI................................................................ 37

Figure 9: Check RHEL iso and image file.....................................................................................41

Figure 10: Install Redhat................................................................................................................ 42

Figure 11: Provide kickstart file.....................................................................................................42

Figure 12: Network throughput vs frame size................................................................................72

Figure 13: Network latency vs frame size..................................................................................... 72

Figure 14: Network jitter vs frame size......................................................................................... 73

Figure 15: 4KB memory read IOPs............................................................................................... 74

Figure 16: 4KB memory write IOPs.............................................................................................. 75

Figure 17: 4KB memory latency.................................................................................................... 75

Figure 18: 4K storage random IOPs...............................................................................................76

Figure 19: 4K storage sequential IOPs...........................................................................................77

Figure 20: 4K storage random latency........................................................................................... 77

Figure 21: 4K storage sequential latency....................................................................................... 78


List of Tables
Table 1: RHOSP deployment elements.......................................................................................... 16

Table 2: Solution Admin Host hardware configuration – Dell EMC PowerEdge R640................ 19

Table 3: Controller nodes hardware configuration – Dell EMC PowerEdge R640........................19

Table 4: OpenStack Converged nodes hardware configuration – Dell EMC PowerEdge R740xd.......................... 19

Table 5: Logical Networks............................................................................................................. 21

Table 6: Placement Group configuration summary........................................................................32

Table 7: Validated firmware versions............................................................................................ 35

Table 8: BIOS configuration.......................................................................................................... 35

Table 9: Switches firmware version............................................................................................... 35

Table 10: Disk configuration for Controller and SAH Node......................................................... 36

Table 11: Disk configuration for Converged Node........................................................................36

Table 12: Kickstart file variables................................................................................................... 38

Table 13: Kickstart File Variables..................................................................................................43

Table 14: Undercloud variables......................................................................................................45

Table 15: List of environment files................................................................................................50

Table 16: instackenv.json.yaml file parameters............................................................................. 50

Table 17: network-environment.yaml file parameters....................................................................52

Table 18: dell-environment.yaml file parameters...........................................................................54

Table 19: static-ip-environment.yaml file parameters.................................................................... 55

Table 20: static-vip-environment.yaml file parameters.................................................................. 56

Table 21: nic_environment.yaml file parameters........................................................................... 57

Table 22: dashboard.cfg file parameters.........................................................................................59

Table 23: Bill Of Materials - SAH node........................................................................................81

Table 24: Bill Of Materials - 3 Controller nodes...........................................................................81

Table 25: Bill Of Materials - 3 Converged nodes..........................................................................81


Table 26: Bill Of Materials - 1 Dell EMC Networking S3048-ON switch....................................82

Table 27: Bill Of Materials - 2 Dell EMC Networking S5248-ON switches.................................82


Trademarks
Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved.
The information in this publication is provided “as is.” Dell EMC makes no representations or warranties of any kind
with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Red Hat®, Red Hat Enterprise Linux®, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE
are trademarks or registered trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the
registered trademark of Linus Torvalds in the U.S. and other countries. Oracle® and Java® are registered trademarks of
Oracle Corporation and/or its affiliates.
Intel® and Xeon® are registered trademarks of Intel Corporation.
Dell EMC believes the information in this document is accurate as of its publication date. The information is subject
to change without notice.
Spirent Temeva®, Cloudstress®, MethodologyCenter® and TrafficCenter® are registered trademarks of Spirent
Communication Inc. All rights reserved. Specifications subject to change without notice.
DISCLAIMER: The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used
with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack
Foundation or the OpenStack community.


Notes, cautions, and warnings


Note: A Note indicates important information that helps you make better use of your system.

CAUTION: A Caution indicates potential damage to hardware or loss of data if instructions are not
followed.
Warning: A Warning indicates a potential for property damage, personal injury, or death.

This document is for informational purposes only and may contain typographical errors and technical inaccuracies.
The content is provided as is, without express or implied warranties of any kind.


Chapter 1: Overview

Topics:
• Executive summary
• Key benefits
• Key differentiators

Dell EMC and Red Hat have worked closely together to build an enterprise-scale hyper-converged infrastructure architecture guide ideally suited for customers who are looking for performance and ease of management.
This architecture guide provides prescriptive guidance and recommendations for complete configuration, sizing, bill of materials, and deployment details.


Executive summary
This architecture guide describes a Hyper-Converged Infrastructure configuration in which each node runs both OpenStack Nova Compute and Ceph storage services, built with Red Hat OpenStack Platform 13 and Red Hat Ceph Storage.
Communication Service Providers inherently operate distributed environments: multiple large-scale core datacenters, hundreds or thousands of central offices and Edge locations, and even customer-premises equipment that must run the same infrastructure services in remote and branch offices as in the core datacenter. Remote and branch offices, however, bring unique challenges such as limited space, power, and cooling, and few (or no) technical staff on site. Organizations in this situation require powerful, integrated services on a single, easily scaled environment.
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed to address these challenges by integrating compute and storage on a single aggregated cluster, making it a well-suited solution for low-footprint remote or central office installations and Edge computing. It enables organizations to deploy and manage distributed infrastructure centrally, so that remote locations benefit from high-performing systems without requiring extensive or highly specialized on-site technical staff.
This architecture guide defines hardware and software building block details including but not limited to Red Hat
OpenStack Platform configuration, network switch configuration, and all software and hardware components.
This all-NVMe configuration is optimized for block storage performance.

Key benefits
The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform offers several benefits that help Service Providers reduce CAPEX/OPEX (capital and operating expenditures) and simplify planning and procurement:
• Infrastructure Consolidation. A smaller hardware footprint reduces power, cooling, and deployment costs, lowering CAPEX.
• Operational Efficiency. A single supported rack is easier for personnel to learn to manage and configure, resulting in lower OPEX overhead.
• Fully engineered, validated, tested, and documented by Dell EMC.
• Based on Dell EMC PowerEdge R-Series servers, specifically the Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd recommended for this architecture guide, equipped with Intel Xeon processors, Intel NVMe disks, and Intel 25GbE network interface cards.


Key differentiators

Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack
Platform key differentiators

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform includes several major enhancements over the regular Dell EMC Ready Architecture Guide.
• Implementation model. Compute and storage are deployed in a Hyper-Converged Infrastructure approach. As a
result, both services and their associated OpenStack services are deployed and managed as a single entity.
• Server model. Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd servers are used in this architecture guide. They represent the latest generation of the Dell EMC PowerEdge R-Series, with technology optimized for all kinds of workloads and sophisticated built-in protection at every step.
• Hardware resources. Optimized for Hyper-Converged Infrastructure, this Ready Architecture Guide combines
scalability, robustness, and efficiency by leveraging the following Intel components:
• Intel Platinum 8160 Skylake processors. Used for both compute and storage needs. This 64-bit, 24-core x86 server processor provides 48 cores per dual-socket node, maximizing the concurrent execution of multi-threaded applications.
• Intel 25GbE adapters. Used for all network communications. The flexible and scalable Intel XXV710 network adapter offers broad interoperability, critical performance optimizations, and increased agility. Two ports have also been reserved for future use with NFV-oriented optimizations such as SR-IOV or OVS-DPDK.


• Intel P4600 NVMe drives. These serve as the backend devices for Red Hat Ceph Storage. This NAND SSD is optimized for the data caching needs of cloud storage, and of software-defined solutions in particular. It helps modernize the data center by combining performance, capacity, manageability, and scalability.
• RAM Optimized. Memory is a key concern for virtualization, and even more so for a Hyper-Converged Infrastructure. Each compute/storage server is configured with 384GB of RAM, delivering optimal performance and sufficient resources for both compute and storage services.
The following sections cover these key differentiators in detail.


Chapter 2: Architecture overview

Topics:
• Overview
• Software
• Hardware
• Network layout
• Physical network

Undercloud and Overcloud deployment elements are part of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform. This Ready Architecture Guide uses Red Hat OpenStack Platform (RHOSP) 13. The Red Hat OpenStack Platform implementation of Hyper-Converged Infrastructure (HCI) uses Red Hat Ceph Storage version 3.2 as the storage provider.
This overview of the deployment process for the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform on Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd server hardware and networking outlines the following:
• Software requirements
• Hardware requirements
• Dell EMC networking switch requirements

Overview
This chapter describes the complete architecture for Red Hat OpenStack Platform version 13 over Hyper-Converged Infrastructure, including its hardware and software components. Figure 2: Architecture for RHOSP version 13 over HCI on page 15 illustrates the components used in the Undercloud and Overcloud in detail, together with the SAH node and the logical networks.

Figure 2: Architecture for RHOSP version 13 over HCI

Note: For optimal performance and high availability, we recommend a minimal configuration of seven nodes:

1. Solution Admin Host (SAH)
2. Controller node one
3. Controller node two
4. Controller node three
5. Converged node one
6. Converged node two
7. Converged node three
Solution Admin Host (SAH)
The Solution Admin Host (SAH) is the central administration server. It is a physical server with internal bridged networks that hosts the Undercloud virtual machines (VMs) needed to deploy and operate the OpenStack Overcloud, and it communicates with the node managers to perform management operations across the domain. Figure 3: SAH node layout on page 16 shows the layout of the SAH node and the networks it is physically connected to. For more details on these networks, refer to Table 5: Logical Networks on page 21.


Figure 3: SAH node layout

Undercloud. The Undercloud is the OSP director (TripleO) node. It is a single-VM OpenStack installation that
includes components for provisioning and managing the Overcloud.
HCI Overcloud. The Overcloud is the end-user RHOSP environment created using the Undercloud. The HCI Overcloud has only two types of roles:
• Controller. A node that provides administration, networking, and high availability for the OpenStack Overcloud
environments.
• ComputeHCI. A hyper-converged role in which compute and storage services, OpenStack Nova Compute and Ceph storage, run in tandem on the same node. This role has a direct application in Edge computing for Telcos. We refer to this role as converged throughout this architecture guide. (A minimal scale sketch for the two roles follows this list.)
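The two roles are typically scaled through per-role node-count parameters in an environment file. The following is a minimal, hypothetical sketch only; ComputeHCICount assumes the ComputeHCI role from the standard TripleO roles data, and the actual values for this design are set during the deployment described in Chapter 5.

# Hypothetical scale sketch for the HCI Overcloud (not this guide's actual file)
parameter_defaults:
  ControllerCount: 3     # three Controller nodes for high availability
  ComputeHCICount: 3     # three converged compute/storage nodes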
Table 1: RHOSP deployment elements on page 16 describes the basic RHOSP 13 deployment sequence.

Table 1: RHOSP deployment elements

Deployment Layer | Deployment element | Description

Undercloud | RHEL 7.6
• Dell EMC PowerEdge R-Series servers need an operating system that can handle high-density networking. Red Hat Enterprise Linux is the best-optimized operating system for these servers.

Undercloud | KVM
• An open-source virtualization technology that is part of Linux-flavored operating systems and turns the host into a hypervisor running a myriad of individual virtual machines. This virtualization layer runs on the SAH and hosts the director VM as well as the dashboard VM. These two VMs share the Solution Admin Host resources to run their applications.

Undercloud | Swift
• Swift provides object storage for multiple OpenStack Platform components, including:
  • Image storage for the Glance Image service
  • Introspection data for the Ironic Baremetal service
  • Deployment plans for the Mistral Workflow service

Undercloud, Overcloud | Keystone
• In the Undercloud, Keystone is an identity service that provides authentication and grants access to the director's components.
• In the Overcloud, it also maps users and groups to projects and resources, based on a catalog in which every OpenStack service endpoint is referenced.

Undercloud | Ironic
• Ironic provides bare metal as a service (BaaS) to provision and manage physical machines for end users. Ironic uses iPXE to provision physical machines. This solution uses the Ironic iDRAC driver to manage all Overcloud servers.

Undercloud, Overcloud | Glance
• In the Undercloud, Glance stores the images used by bare-metal machines during introspection and Overcloud deployment.
• In the Overcloud, it is also used to store the VM images that other OpenStack services use as templates to deploy VM instances.

Undercloud, Overcloud | Nova
• In the Undercloud, the Nova service is used to manage the bare-metal instances that make up the infrastructure services used by the Overcloud administrator.
• In the Overcloud, it is used for deploying and managing large numbers of virtual machines and other instances to handle computing tasks. It enables enterprises and service providers to offer on-demand computing resources by provisioning and managing large networks of virtual machines. Compute resources are accessible via APIs for developers and users.

Undercloud, Overcloud | Neutron
• In the Undercloud, Neutron controls the networking used to manage bare-metal machines.
• In the Overcloud, Neutron offers networking capabilities in complex cloud environments. It also helps ensure that the components of an OpenStack environment can communicate with each other quickly and efficiently.

Undercloud, Overcloud | Heat
• In the Undercloud, Heat provides orchestration and configuration of nodes based on customized templates.
• In the Overcloud, it allows developers to store the requirements of a cloud application as templates that contain the definitions of the resources necessary for a particular application. The flexible template language can specify compute, storage, and networking configurations, as well as detailed post-deployment activity, automating the full provisioning of infrastructure for services and applications. (A minimal example template follows this table.)

Overcloud | Ceph
• Ceph is used as the backend storage by Cinder for persistent block storage and exposes a Swift-compatible API for object-level access. It also stores Glance images and provides ephemeral storage for Nova.

Undercloud | Ansible
• Ansible is used by the OSP director to install and configure the Undercloud. When deploying the Overcloud, it is also used by ceph-ansible to deploy and configure the Ceph cluster.
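To illustrate the Heat row above, the following is a minimal, hypothetical Heat Orchestration Template (HOT) that defines a single tenant network with a subnet. It is only a sketch of the template format; it is not one of the templates used by this Ready Architecture (those are listed in Appendix B).

heat_template_version: queens

description: >
  Minimal illustrative template: one tenant network with a subnet.

resources:
  example_net:
    type: OS::Neutron::Net
    properties:
      name: example-net

  example_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: example_net }
      cidr: 192.168.100.0/24

outputs:
  network_id:
    description: ID of the created network
    value: { get_resource: example_net }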

Software

Red Hat Enterprise Linux Server 7.6


Dell EMC PowerEdge R-Series servers need an operating system that can handle high-density networking and offer a scalable, fault-tolerant platform for cloud-enabled workloads that require high throughput. Red Hat Enterprise Linux is the best-optimized operating system for these servers. Dell EMC recommends Red Hat Enterprise Linux Server 7.6 for deployment, which provides:
• Security and compliance
• Performance and efficiency
• Platform manageability
• Stability and reliability
• Multi-platform support
• Application experience
Note: Check the RHEL version in use against the standard compatibility requirements.

Red Hat OpenStack Platform version 13


Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on Red Hat Enterprise Linux. Red Hat OpenStack Platform 13 (RHOSP 13) is based on the OpenStack Queens release. It includes additional features packaged to turn available physical hardware into a private, public, or hybrid cloud platform, including:
• Overcloud node provisioning
• Fully containerized services
• Bare-metal (Ironic) service
• Default support for Ceph storage
• Integration of real-time KVM (RT-KVM, based on the Kernel-based Virtual Machine) with the Compute service
• High availability

Red Hat Ceph Storage 3.2


Ceph is open-source, distributed object, block, and file storage that runs on readily available hardware. It is designed to be scalable and to have no single point of failure.
Reasons to choose Ceph storage over traditional alternatives are:
• Unified Storage - It supports block, object, and file access in one system.
• Flexible configuration - It can be adjusted as application load and deployment demands change in the cloud.
• Open foundation - It is built on a shared, open development process and ecosystem rather than a proprietary one.


Red Hat Ceph Storage is an essential component of Hyper-Converged Infrastructure (HCI). For detailed information, refer to Red Hat Ceph Storage for HCI on page 24.

Hardware

Solution Admin Host


Table 2: Solution Admin Host hardware configuration – Dell EMC PowerEdge R640

Machine function Configurations


Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25M
Cache,Turbo,HT (125W) DDR4-2666
RAM (Minimum) 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in Network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 6TB SAS (10 x 600GB 15k RPM, SAS 12Gbps)
Storage Controller PERC H740P RAID Controller
RAID RAID 10

OpenStack Controller nodes


Table 3: Controller nodes hardware configuration – Dell EMC PowerEdge R640

Machine function Configurations


Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25M Cache,
Turbo, HT (125W) DDR4-2666
RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in Network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 6TB SAS (10 x 600GB 15k RPM, SAS 12Gbps)
Storage Controller PERC H740P RAID Controller
RAID Layout RAID 10

OpenStack Converged nodes


Table 4: OpenStack Converged nodes hardware configuration – Dell EMC PowerEdge R740xd

Machine function Configurations


Platform Dell EMC PowerEdge R740xd


CPU 2 x Intel® Xeon® Platinum 8160 2.1G,24C/48T,10.4GT/s, 33M
Cache,Turbo,HT (150W) DDR4-2666
RAM 384GB RAM (12 x 32GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in Network 4 x Intel XXV710 DP 25GbE DA/SFP+
OS Disk 240GB (2 x 240GB, SSD SATA, 2.5, HP, S4600)
Disk 25.6TB NVMe (8 x 3.2TB NVMe, Mixed Use, P4610)
Storage Controller PERC H740P RAID Controller
RAID Layout RAID 1

Intel NVMe 3.2TB P4600


The Intel P4600 Mainstream NVMe SSDs are advanced data center SSDs optimized for mixed read-write performance, endurance, and strong data protection. They are designed for greater performance and endurance in a cost-effective design, and to support a broader set of workloads. NVMe drives are also optimized for heavy multi-threaded workloads through internal parallelism and many other improvements, such as enlarged I/O queues. The Intel P4600 NVMe drives have the following key characteristics:
• Suitable for mixed read-write workloads.
• Variable sector size and end-to-end data-path protection.
• Enhanced power-loss data protection.
Note: Our performance testing was conducted with the P4600 because the P4610 was not orderable at the time the servers were acquired. Please use the P4610 instead of the P4600.

Intel XXV710 Dual Port 25GbE


The Intel XXV710 Dual Port 25GbE delivers excellent performance for 25GbE connectivity that is backwards
compatible to 1/10GbE, making migration to higher speeds easier. It also features a foundation for server
connectivity, providing broad interoperability, critical performance optimization, and increased agility for
Telecommunications, Cloud, and Enterprise IT network solutions.
Interoperability. Multiple speeds and media types for broad compatibility backed by extensive testing and
validation.
Agility. Both kernel and Data Plane Development Kit (DPDK) drivers for scalable packet processing.


Network layout

Figure 4: HCI network layout

Figure 4: HCI network layout on page 21 illustrates the network layout for the Hyper-Converged Infrastructure.
The Bond0 interface is built from the p1p1 and p2p1 physical interfaces and is attached to the br-tenant and br-int virtual bridges. Bond0 carries the Internal API, Tenant, and Storage networks.
The Bond1 interface is built from the p1p2 and p2p2 physical interfaces and is attached to the br-ex bridge, which is paired with the br-int virtual bridge that communicates with instances. Bond1 carries the External network and the Storage Clustering network. Open vSwitch bridges map between the physical and virtual interfaces.
The Bond2 interface is reserved for NFV workloads, with dedicated NICs for either SR-IOV or OVS-DPDK; it handles NFV workloads on the hyper-converged nodes. (A simplified sketch of how Bond0 could be written in a NIC template follows.)
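As an illustration only, Bond0 could be expressed in an os-net-config/TripleO NIC template roughly as follows. The interface names and VLAN IDs come from this guide, but the fragment is a simplified sketch; the template actually used is nic_environment.yaml in Appendix B, and real templates parameterize the VLAN IDs and IP addresses rather than hard-coding them.

# Simplified sketch of Bond0 (values hard-coded for readability only)
- type: ovs_bridge
  name: br-tenant
  members:
    - type: ovs_bond
      name: bond0
      ovs_options: "bond_mode=balance-tcp lacp=active"   # LACP bond
      members:
        - type: interface
          name: p1p1
          primary: true
        - type: interface
          name: p2p1
    - type: vlan
      vlan_id: 140   # Internal API network
    - type: vlan
      vlan_id: 170   # Storage (frontend) network
    - type: vlan
      vlan_id: 130   # Tenant network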
Table 5: Logical Networks on page 21 describes the network layout.
Note: All the VLANs described here are used for the Overcloud networks and may differ per end-user configuration. The network-environment.yaml environment file, which is passed to the Undercloud when deploying and managing the Overcloud, configures these VLANs; an illustrative excerpt follows the table.

Table 5: Logical Networks

Network | VLAN | Description


Internal API network | 140
• The Internal API network is used for communication between the OpenStack services.

iDRAC network | 110
• It is used to manage bare-metal nodes remotely using dracclient.

Provisioning/IPMI network | 120
• The provisioning network is used to provision bare-metal servers so that they can communicate with the Networking service for DHCP, PXE boot, and other requirements.




Public/External network | Untagged
• The OSP director VM uses the Public/External network to download software updates for the Overcloud.
• The administrator of the OSP director VM uses this network to access the Undercloud and manage the Overcloud.
• This network also carries the floating IP addresses assigned to tenant instances, through which external users can access them.
• The converged nodes do not need to be directly connected to the external network; their instances communicate via the Tenant network with the Controllers, which then route external traffic to the external network on their behalf.

Storage Clustering network | 180
• The backend storage network to which Ceph routes its heartbeat, object replication, and recovery traffic. The Ceph OSDs use this network to balance data according to the replication policy. This private network only needs to be accessed by the OSDs.

Storage network | 170
• The frontend storage network where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph monitors operate on this network.

Tenant network | 130
• This is the network for allocating IP addresses to tenant instances. OpenStack tenants create private networks provided by VLANs configured on the underlying physical switch. This network facilitates communication across tenant instances.
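The VLAN IDs in the table above map onto TripleO network parameters. The excerpt below is an illustrative sketch built from the values in Table 5; the complete file used by this guide is reproduced as network-environment.yaml in Appendix B.

# Illustrative VLAN mapping (sketch only; see Appendix B for the full file)
parameter_defaults:
  InternalApiNetworkVlanID: 140
  TenantNetworkVlanID: 130
  StorageNetworkVlanID: 170
  StorageMgmtNetworkVlanID: 180   # Storage Clustering (replication) network
  # The Public/External network is untagged in this design, so no VLAN ID is set.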


Physical network

Figure 5: Physical network

Figure 5: Physical network on page 23 illustrates the physical network wiring for an RHOSP deployment on converged infrastructure.
A stack of S5248-ON switches uplinks to an external network, alongside an S3048-ON management switch. The deployment also consists of the SAH node and a cluster of three Controller nodes and three Converged nodes. For seamless communication, the interfaces are wired as follows:
1. iDRAC interfaces connect to the S3048-ON switch on all nodes. This is used to access iDRAC sessions on all nodes.
2. Interface em3 connects to the S3048-ON switch via VLAN 120 on all nodes to provision bare-metal servers.
3. Bond0 is set up across the first port of each of the two 25G NICs on all nodes. VLANs 130, 140, and 170 use this interface. This bond is connected as a Link Aggregation Control Protocol (LACP) connection.
4. Bond1 is set up across the second port of each of the two 25G NICs on all nodes. VLAN 180 uses this interface on the Converged nodes, whereas the Controllers access the Public network through it. This bond is also connected as an LACP connection.
5. On the Converged nodes, Bond2 is set up across two of the four remaining 25G ports (one port on each of two NICs). It is currently unused and remains available for future NFV operations such as SR-IOV or OVS-DPDK, while the other two ports remain free.


Chapter 3: Red Hat Ceph Storage for HCI

Topics:
• Introduction to Red Hat Ceph Storage
• Hyper-Converged Infrastructure (HCI) Ceph storage

Ceph is a widely used open-source storage platform. It provides high performance, reliability, and scalability. The Ceph distributed storage system provides an interface for object, block, and file-level storage.
This chapter describes Red Hat Ceph Storage and its integration with the Controller and Converged nodes in the Hyper-Converged Infrastructure.


Introduction to Red Hat Ceph Storage


Red Hat Ceph Storage is an open-source, petabyte-scale distributed storage system primarily designed for object
and block data, which can be scaled with commodity hardware. It can provide unstructured object storage for cloud
workloads which can be accessed either through a native API or by using Amazon S3 or OpenStack Object Storage
(Swift) API protocols. Block based storage is managed by either a native block based protocol or by using the iSCSI
protocol. The latest version of Red Hat Ceph Storage can also provide client access to a Network File system (NFS)
by using its native Ceph File System (CephFS).
Red Hat Ceph Storage offers the following features:
• Multi-site and disaster recovery options.
• Flexible storage policies.
• Data durability and resiliency via erasure coding or replication.
• Deployment in a containerized environment.
Red Hat Ceph Storage can be integrated with OpenStack: Nova Compute stores cloud instance disks on Ceph, Glance manages the images used to deploy cloud instances, and Cinder provides persistent storage for cloud instances. (A sketch of how this integration is typically enabled appears after the list below.)
Red Hat Ceph Storage is based on a modular and distributed architecture that contains the following:
• An object storage backend named Reliable Autonomic Distributed Object Store (RADOS).
• A variety of access methods to interact with RADOS – RADOS Block Device (RBD), RADOS Gateway (RGW)
and CephFS.
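This integration with Nova, Glance, and Cinder is typically switched on through TripleO parameters similar to the hedged sketch below; the exact settings used by this guide are part of the environment files in Appendix B.

# Hypothetical sketch of enabling Ceph RBD backends for the OpenStack services
parameter_defaults:
  NovaEnableRbdBackend: true     # ephemeral instance disks stored in Ceph
  CinderEnableRbdBackend: true   # persistent volumes backed by Ceph
  GlanceBackend: rbd             # images stored in Ceph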

RADOS
The RADOS system stores data as objects in logical storage pools and uses the Controlled Replication Under Scalable Hashing (CRUSH) data placement algorithm to automatically determine where each object should be stored.
RADOS, the Ceph storage backend, is based on the following daemons, which can be easily scaled to meet the requirements of any deployed architecture.
• Monitors (MONs). Daemons responsible for maintaining a master copy of the cluster map, which contains information about the state of the Ceph cluster and its configuration. When the number of active monitors falls below the quorum threshold, the entire cluster becomes inaccessible to clients, protecting data integrity.
• Object Storage Devices (OSDs). The building blocks of a Ceph storage cluster. They connect storage devices to the Ceph storage cluster. An individual storage server may run multiple OSD daemons and thereby provide multiple OSDs to the cluster. Each OSD daemon provides one storage device, which is normally formatted with an Extents File System (XFS). A newer feature called BlueStore, introduced in Red Hat Ceph Storage, permits raw access to local storage devices. The replication of objects to multiple OSDs is handled automatically. One OSD is called the primary OSD, and a Ceph client reads or writes data from the primary OSD. Secondary OSDs play an important role in ensuring the resilience of data in the event of a failure in the cluster. Primary OSD functions are:
• Serves I/O requests.
• Replicates and protects the data.
• Rebalances the data to ensure performance.
• Recovers the data in case of a failure.
Secondary OSDs always operate under the control of a primary OSD, and each is capable of becoming the primary OSD.
• Ceph Managers (MGRs). Gather statistics about the Ceph storage cluster. There is no impact on client I/O operations if the Ceph Manager daemon fails; however, a minimum of two Ceph Managers is recommended for redundancy.


Pools
Ceph pools are logical partitions of the Ceph storage cluster, used to store objects under a common name tag. Each pool is assigned a specific number of hash buckets that group objects together for storage. These hash buckets are called Placement Groups (PGs).
The number of placement groups assigned to each pool can be configured independently to fit any type of data. This number is configured at creation time and can be increased dynamically, but it can never be decreased. The CRUSH algorithm is used to select the OSDs that serve the data for a pool. Permissions such as read, write, or execute can be set at the pool level.
When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. A pool provides you with the following (an illustrative pool declaration follows the list):
• Resilience
• Placement Groups
• CRUSH rules
• Snapshots
• Set ownership
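In a TripleO-deployed cluster such as this one, pools and their placement group counts can be declared up front. The snippet below is a hedged example using the CephPools parameter; the pool names and pg_num values are illustrative only and are not the values used by this guide.

# Illustrative pool declaration; names and pg_num values are examples only
parameter_defaults:
  CephPools:
    - name: volumes        # Cinder block storage pool
      pg_num: 512
      application: rbd
    - name: images         # Glance image pool
      pg_num: 128
      application: rbd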

Placement Groups
A Placement Group (PG) aggregates a series of objects into a hash bucket, or group, and is mapped to a set of OSDs.
An object belongs to only one PG, and all objects belonging to the same PG return the same hash result.
The placement strategy is known as the CRUSH placement rule. When a client writes an object to a pool, it uses the pool's CRUSH placement rule to determine the object's placement group, and then uses the cluster map to calculate which OSD(s) the object is written to.
When OSDs are added or removed from a cluster, placement groups are automatically rebalanced between
operational OSDs.
You can set the number of placement groups for each pool. In a Hyper-Converged Infrastructure (HCI) environment, the number of placement groups per OSD is set to 200 for optimal usage of the OSDs/NVMe SSDs. (A rough sizing sketch follows.)
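As a rough, assumption-laden sketch of how the 200 PGs-per-OSD target translates into pool sizing for this design (3 converged nodes, 8 NVMe drives each, 2 OSDs per drive, replica count 3):

# Rough PG budget (assumptions stated above; the values actually used are in
# dell-environment.yaml in Appendix B):
#
#   OSDs      = 3 nodes x 8 drives x 2 OSDs per drive       = 48
#   PG budget = (48 OSDs x 200 PGs per OSD) / 3 replicas    = 3200
#
# The budget is then divided across pools, rounding each pool to a power of
# two, for example through TripleO defaults such as:
parameter_defaults:
  CephPoolDefaultSize: 3      # replica count
  CephPoolDefaultPgNum: 512   # illustrative per-pool default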

CRUSH
When you store data in a pool, a CRUSH rule set mapped to the pool enables CRUSH to identify a rule for the
placement of the object and its replicas (or chunks for erasure coded pools) in your cluster. CRUSH rules can be
customized.

Objectstore
Ceph is an ecosystem of technologies offering three different storage models - object storage, block storage and
filesystem storage. Ceph’s approach is to treat object storage as its foundation, and provide block and filesystem
capabilities as layers built upon that foundation. Objectstores store data in a flat non-hierarchical namespace where
each piece of data is identified by an arbitrary unique identifier. Any other details about the piece of data are stored
along with the data itself, as metadata.
Objectstore is an abstract interface for storing data. It has two implementations: FileStore, the legacy approach that
stores objects as files on a file system, and BlueStore, which stores objects directly on the block devices without any
file system interface and thereby improves the performance of the cluster.

Red Hat Ceph Storage Dashboard introduction


The Red Hat Ceph Storage Dashboard is a built-in web-based Ceph management and monitoring application. It is
used to administer various aspects and objects of the cluster. This web-based dashboard application runs on a virtual
machine, known as the Red Hat Ceph Storage Dashboard, which is deployed on the Solution Admin Host. Because the
Hyper-Converged Infrastructure uses Red Hat Ceph Storage as its storage mechanism, it is important to be able to
manage and monitor the Ceph cluster. After the Overcloud is deployed, the user can deploy the Red Hat Ceph Storage
Dashboard to manage and monitor Ceph storage through a web-based application. Deployment and configuration of
the dashboard uses JetPack scripts.
Note: Please refer to https://github.com/dsp-jetpack/JetPack for more information on JetPack 13.1. Please refer
to Red Hat Ceph Storage Dashboard deployment and configuration (optional) on page 58 for deployment
and configuration of the Red Hat Ceph Storage Dashboard.

Hyper-Converged Infrastructure (HCI) Ceph storage


A traditional OpenStack deployment requires three types of nodes:
• Controller
• Compute
• Storage
A separate network node may also be required. The hardware footprint of such an infrastructure is high. Hyper-
Converged Infrastructure addresses this problem by integrating compute and storage services on the same hardware,
reducing cost and effort significantly.
Ceph is a preferred and proven storage choice for HCI because it supports both block and object storage, and it runs
all of its services within a single cluster in an HCI environment. This solution is ideal for 5G and Edge computing,
where applications require small footprints that perform high-volume operations.

Figure 6: HCI Ceph storage


The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform uses Red Hat
Ceph Storage as the storage provider. The architecture includes co-located compute and Ceph storage services.
Ceph cluster configuration features:
• The Ceph Monitor daemon, running on the Controller nodes, maintains a master copy of the cluster map.
• The OSD daemons, running on the Converged nodes, store objects in Ceph; these nodes also host the KVM
hypervisor required for instance spawning.
HCI uses Red Hat Ceph Storage features to deliver a highly reliable, scalable, easily manageable and performance-
optimized Ready Architecture for HCI. Features include:
• NVMe SSDs, which reduce latency, provide higher IOPS and potentially lower power consumption.
• An optimal count of two Ceph OSDs per NVMe SSD, based on performance statistics.
• BlueStore, a new and high-performance backend Objectstore for the OSDs.


Chapter 4

Deployment considerations

This section highlights the key elements covered during the design phase, as well as the reasoning behind those
choices.

Topics:
• Converged Nodes with integrated Ceph storage
• Resource isolation
• Performance tuning


Converged Nodes with integrated Ceph storage


1. Replication Factor - NVMe SSDs have higher reliability than rotational disks. They also have a higher MTBF
(Mean Time Between Failures) and a lower bit error rate. 2x replication is recommended in production when
deploying OSDs on NVMe, versus the 3x replication used with legacy rotational disks. This architecture guide
uses a 2x replication factor.
2. OSD Sizing - To reduce tail latency on 4K write transfers, it is recommended to run more than one OSD per
drive for NVMe SSDs, because a single OSD cannot take full advantage of NVMe SSD bandwidth. Four OSD
partitions per SSD drive gives the best possible performance. The downside is that each additional OSD adds
memory overhead to the system, which leaves less RAM available for Nova. This architecture guide uses two
OSDs per NVMe drive, which is a good compromise that provides enough performance without unduly reducing
RAM availability for the compute engine.
3. Ceph Journal consideration - In an all-flash Ceph cluster using Intel NVMe SSDs, separating the journal from
the OSD datastore usually does not produce additional benefits. In these all-flash configurations, a Ceph journal is
frequently co-located on the same NVMe drive in a different partition from the OSD data. This maintains a simple-
to-use configuration and also limits the scope of any drive failure in an operational cluster. This architecture
guide uses a co-located journal on each NVMe disk.
4. CPU sizing - Ceph OSD processes can consume a large amount of CPU while doing block operations, and the
recommended ratio of CPUs per NVMe is 10:1, assuming a 2GHz CPU. Because the Ceph OSDs are hosted on the
same node as the compute engine, it is even more important to have enough CPUs available to allow intensive
workloads and to protect the VMs run by Nova. With the Intel Xeon Platinum 8160 at 2.1GHz, 48 cores are
available to both OpenStack Nova Compute and Ceph storage, which yields 96 vCores with Hyper-Threading
enabled. This architecture guide uses a ratio of 12:1 CPUs per NVMe.
5. Memory sizing - OSDs do not require much RAM for regular operations (about 500MB of RAM per daemon
instance); however, during recovery they need significantly more RAM (~3GB per 1TB of storage per daemon).
Generally, more RAM is better. With that in mind, RAM usage by the OSDs should not exceed 24GB in the case
of a severe failure in a node, which still leaves plenty of RAM available for Nova. This architecture guide
uses 384GB of RAM on each Converged node.
6. Networking - A 25GbE network is required to leverage the maximum performance benefits of an NVMe-based
Hyper-Converged Infrastructure platform. This architecture guide uses the Intel XXV710 25GbE adapter on all nodes.

Resource isolation
Resource isolation for Ceph OSDs
Limiting the amount of CPU and memory for each Ceph OSD is important so that resources remain free for the
OpenStack Nova Compute process. When reserving memory and CPU resources for Ceph on hyper-converged nodes, each
containerized OSD should be limited in both GB of RAM and vCPUs.
ceph_osd_docker_memory_limit constrains the memory available to a container. If the host supports swap
memory, then the limit can be larger than physical RAM. If a limit of 0 is specified, the container's memory is
not limited. To allow maximum performance while preserving memory for other usage, a maximum of 10GB is allocated
to each OSD container.
ceph_osd_docker_cpu_limit limits the CPU usage of a container. By default, containers run with full access to the
host's CPU resources. This flag tells the kernel to restrict the container's CPU usage to the quota you specify. A
maximum of four vCPUs per OSD is allocated (eight vCPUs per NVMe physical disk).
This architecture guide uses the following parameters to optimize Ceph OSDs containers:

CephAnsibleExtraConfig:
ceph_osd_docker_memory_limit: 10g
ceph_osd_docker_cpu_limit: 4

The deployment applies these parameters when the two values above are set in the dell-environment.yaml heat
template.


Performance tuning
Nova reserved memory
Nova reserved memory is the amount of memory set aside for the host itself to perform its own operations. This
value is normally tuned to maximize the number of guests while protecting the host. For a Hyper-Converged
Infrastructure, it must maximize guests while protecting both the host and Ceph.
We can figure out the reserved memory with the following formula, which is the approach recommended by Red Hat:

left_over_mem = mem - (GB_per_OSD * osds)
number_of_guests = left_over_mem / (average_guest_size + GB_overhead_per_guest)
nova_reserved_memory_MB = 1024 * ((number_of_guests * GB_overhead_per_guest) + (GB_per_OSD * osds))

This architecture guide follows this approach with values and calculation as described below:

336 = 384 - (3 * 16)

Given our nodes with 384GB of RAM and 16 OSDs per node, and assuming that each OSD consumes 3GB of RAM, 48GB of
RAM is set aside for Ceph, leaving 336GB of RAM for Nova Compute.

134 = 336 / (2 + 0.5)

If the average guest uses 2GB of RAM, the system could in principle host 168 guest machines. However, there is
additional overhead for each guest machine running on the hypervisor. Assuming this overhead is 500MB, the
maximum number of 2GB guest machines that could be run is 134.

115000 = 1000 * ((134 * 0.5 ) + (3 * 16))

Thus, reserved_host_memory_mb would equal 115000 (this guide approximates 1GB as 1000MB in the calculation above). The parameter value must be in megabytes (MB).
This value is defined in the dell-environment.yaml file as described below:

nova::compute::reserved_host_memory: 115000

CPU allocation ratio


This option specifies the virtual CPU to physical CPU allocation ratio. Nova's cpu_allocation_ratio
parameter is used by the Nova scheduler when choosing which compute nodes will run guest machines, and it
determines the point at which a node is considered unable to handle any more guest machines. Tuning it is
necessary because the Nova scheduler does not take into account the CPU needs of the Ceph OSD services running
on the same node. Modifying the cpu_allocation_ratio parameter allows Ceph to have the CPU resources it needs
to operate effectively without those CPU resources being given to Nova Compute.
We can determine the cpu_allocation_ratio with the following formula, which is the approach recommended by Red
Hat:

num_of_non_ceph_cores = total_num_of_cores - (num_of_cores_per_osd * num_osds)
num_of_guest_machines_vcpus = num_of_non_ceph_cores / avg_guest_machine_cpu_utilization
cpu_allocation_ratio = num_of_guest_machines_vcpus / total_num_of_cores

Given our nodes with 48 cores (96 vCores with Hyper-Threading enabled) and 16 OSDs per node, with one core per OSD, 32 cores are left for Nova.

32 = 48 - (1 * 16)


Assuming that each guest machine utilizes 10% of its core, we end up with 320 available vCPUs.

320 = 32 / 0.1

Finally, we can consider using a cpu_allocation_ratio of 6.667

6.667 = 320 / 48

This value is defined in the dell-environment.yaml file as described below:

nova::cpu_allocation_ratio: 6.7

Note: Red Hat provides a script to do all the calculations called nova_mem_cpu_calc.py

Ceph placement group


A Placement Group (PG) aggregates a series of objects into a hash bucket, or group, and is mapped to a set of OSDs.
An object belongs to only one PG, and all objects belonging to the same PG return the same hash result.
Note: For detailed information please refer to Placement Groups on page 26.

To size the placement groups, the Ceph PG calculator at https://ceph.com/pgcalc/ can be used to identify the
optimal values. This tool is used to calculate the number of PGs per pool.

Table 6: Placement Group configuration summary

Pool name    Replication factor    OSD #    %Data    Target PGs/OSD    PG #
volumes      2                     48       63.00    200               4096
vms          2                     48       25.00    200               1024
images       2                     48       12.00    200               512

Where
• Pool name: Name of the pool.
• Replication factor: Number of replicas the pool will have. Two is the recommended value when using NVMe
disks.
• OSD #: Number of OSDs across which this pool's PGs are distributed. 48 is the entire cluster OSD count.
• %Data: This value represents the approximate percentage of data which will be contained in this pool for that
specific OSD set.
• Target PGs/OSD: Target number of placement groups per OSD, chosen with future scaling of the cluster in mind
(200 is used when the OSD count is expected to grow).
• PG #: Number of PGs to create.
Note: Keep in mind that the PG count can be increased, but NEVER decreased without destroying and
recreating the pool.
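The PG counts in the table are consistent with the formula commonly documented for the PG calculator,
(Target PGs/OSD * OSD # * %Data) / replication factor, with the result rounded to a suitable power of two. The
arithmetic below is an illustrative sketch of that calculation, not output from the tool itself:

volumes: (200 * 48 * 0.63) / 2 = 3024  ->  4096
vms:     (200 * 48 * 0.25) / 2 = 1200  ->  1024
images:  (200 * 48 * 0.12) / 2 =  576  ->   512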
The values calculated above are defined in the dell-environment.yaml file as described below.

CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096},
            {"name": "vms", "pg_num": 1024, "pgp_num": 1024},
            {"name": "images", "pg_num": 512, "pgp_num": 512}]

BlueStore
BlueStore is a new storage backend for Ceph. It gives better performance (roughly 2x for writes), full data
checksumming, and built-in compression. BlueStore stores objects directly on the block devices without any file
system interface, which improves the performance of the cluster. It provides features like efficient block device usage,
direct management of storage devices, metadata management with RocksDB, multi-device support, no large double
writes, efficient copy-on-write and inline compression.
The BlueStore backend supports three storage devices: the primary storage device, a write-ahead log (WAL) device,
and a database (DB) device. It can manage one, two, or all three storage devices.
Modify the dell-environment.yaml with the following parameters to enable BlueStore as the Ceph backend.

CephAnsibleDisksConfig:
osd_scenario: lvm
devices:
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
- /dev/nvme3n1
- /dev/nvme4n1
- /dev/nvme5n1
- /dev/nvme6n1
- /dev/nvme7n1
CephAnsibleExtraConfig:
osd_objectstore: bluestore
osds_per_device: 2


Chapter 5

Deployment

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform utilizes Dell EMC
PowerEdge R-Series servers for the deployment of RHOSP version 13. Key features of Dell EMC PowerEdge R-Series
rack servers include:
• Automated productivity
• Comprehensive security
This chapter describes the BIOS and network configuration and the installation prerequisites, with the proper
configuration for the Controller and Converged nodes. It also covers the entire deployment process, including the
manual creation of the SAH and the Director node for the Undercloud and the deployment of the Overcloud. Lastly,
this chapter describes the performance tuning parameters, resource isolation, and how node placement works.

Topics:
• Before you begin
• Deployment workflow
• Solution Admin Host
• Red Hat OpenStack Platform Director
• Undercloud Deployment
• Configure and deploy the Overcloud
• Red Hat Ceph Storage Dashboard deployment and configuration (optional)


Before you begin


This chapter outlines the requirements for setting up an environment to provision Red Hat OpenStack Platform 13. It
covers the requirements for setting up the director and accessing it, and the hardware requirements for the hosts that
the director provisions for OpenStack services. The Dell EMC Ready Architecture for Hyper-Converged Infrastructure
on Red Hat OpenStack Platform is a companion to this deployment; it provides a detailed description of the underlying
Red Hat OpenStack Platform for this ready architecture, its hardware and software components, and its deployment
methodology.

Tested BIOS and firmware versions


Table 7: Validated firmware versions

Components Versions
BIOS version 1.6.13
iDRAC with Lifecycle Controller 3.30.30.30
Power supply 00.1B.53
Intel(R) ethernet 25G 2P XXV710 adapter 18.8.9
Intel(R) Gigabyte 4P X710/I350 rNDC 18.8.9
PERC H740P Mini 50.5.0-1750

Table 8: BIOS configuration

Parameter                  Controller node                      Converged node                       SAH node
PXE Device settings        Device 3 enabled                     Device 3 enabled                     Default
PXE device interface       Integrated NIC 1 Port 3              Integrated NIC 1 Port 3              Default
                           partition 1                          partition 1
Virtualization technology  Default                              Enabled                              Enabled
UEFI Boot sequence         Integrated RAID Controller 1: red    Integrated RAID Controller 1: red    Integrated RAID Controller 1: red
                           PXE Device 3: integrated NIC 1       PXE Device 3: integrated NIC 1       PXE Device 3: integrated NIC 1
                           Port 3 partition                     Port 3 partition                     Port 3 partition

Note: The check boxes for the UEFI Boot sequence entries should be selected.

Table 9: Switches firmware version

Product Version
S3048-ON firmware Cumulus Linux OS 3.7.1
S5248-ON firmware Cumulus Linux OS 3.7.1

Disk layout
Disk layout for the Controller, SAH and Converged nodes


Table 10: Disk configuration for Controller and SAH Node

Components Disk information


Layout RAID-10
Media type HDD
Capacity 2791 GB
Physical disk 10
Physical disk name Physical Disk 0:1:0 to Physical Disk 0:1:9

Table 11: Disk configuration for Converged Node

Components Disk information


Layout RAID-1
Media type SSD
Capacity 223 GB
Physical disk 2
Physical disk name Solid State Disk 0:1:0
Solid State Disk 0:1:1

Virtual Disk Creation


The following procedure describes the steps required for successfully creating the virtual disk on which the Operating
System will be installed. Depending on the type of node you are preparing, the virtual disk layout may differ. Refer to
the tables presented in the previous section for detailed information.
Virtual Disk creation procedure for Controller, SAH and Converged nodes:
1. Log in to the iDRAC web user interface via the iDRAC IP of the physical host
2. Expand the Configuration tab
3. Select Storage Configuration
4. Make sure the PERC H740P Mini Controller is selected in the drop-down list
5. Under Virtual Disk Configuration, click on Create Virtual Disk
6. Give a name to the Virtual Disk, select the appropriate RAID Layout
7. Select appropriate Physical Disks and click on Add to Pending Operations
8. Once back on the Configuration page, click on Apply now
9. Watch the status of the creation on the Job Queue
To verify that the Virtual Disks have been created:
1. Go to the iDRAC and log in with the given credentials
2. On the Dashboard, select Storage
3. Click on Virtual Disks; the Virtual Disk details are displayed, as in the screenshot below.


Figure 7: Verify Virtual Disk

Note: The display can differ depending on which node's Virtual Disk is being verified.

Software requirements
Software requirements include:
• Red Hat Enterprise Linux 7.6
• Red Hat OpenStack Platform version 13
• Red Hat Ceph Storage 3.2
Note:
The user needs to know the subscription pool IDs at this stage.
Please contact your Dell EMC sales representative for any software components required to perform these
steps.

Deployment workflow

Figure 8: Workflow for RHOSP deployment over HCI

Figure 8: Workflow for RHOSP deployment over HCI on page 37 illustrates the workflow of the RHOSP deployment
over Hyper-Converged Infrastructure. The activity involves deployment of the SAH node, configuration and
installation of the Undercloud and, finally, deployment of the Overcloud. The rest of this chapter describes
each of these steps in detail.


Solution Admin Host

Prepare Solution Admin Host


The SAH (Solution Admin Host) node holds the Director VM and serves the purpose of configuring and deploying the
OpenStack platform services to the Controller and Converged nodes. For detailed information please refer to the
Overview on page 15 section.
Preparing the Solution Admin Host requires a jump host, either Linux or Windows. Preparation of the Solution Admin
Host begins with the installation of Red Hat Enterprise Linux Server 7.6.
Creating the Virtual Disks is an essential prerequisite for SAH node deployment.
Note: For virtual disk layout please refer Virtual disk section in Virtual Disk Creation on page 36.

SAH deployment overview


A kickstart mechanism provides automated deployment of the SAH node. The installation process can be accomplished
using virtual CD/DVD media. The SAH node is then used to kick off the Undercloud deployment and to create the
Dashboard VM required for the Overcloud.

Kickstart file customization


This kickstart file performs the following steps when properly configured:
• Partitions the disks
• Sets SELinux to permissive mode
• Disables firewall, and uses iptables
• Disables NetworkManager
• Configures networking for the rest of the nodes, including:
• Bonding
• Bridges
• Static IP addresses
• Gateway
• Name resolution
• NTP service
• Registers the system using the Red Hat Subscription Manager
Additionally, there are some requirements that must be satisfied prior to installation of the Operating System:
• A Red Hat subscription
• Access to the Subscription manager hosts
1. From a Linux host, clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/
using Git.
2. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13
3. Edit the osp_sah.ks located in the kickstart folder.
4. Modify it according to your needs and set the following variables:

Table 12: Kickstart file variables

Variables Description
HostName The FQDN of the server, e.g., sah.acme.com.
SystemPassword The root user password for the system.

SubscriptionManagerUser The user credential when registering with Subscription
manager.
SubscriptionManagerPassword The user password when registering with Subscription
manager.
SubscriptionManagerPool The pool ID used when attaching the system to an
entitlement.
SubscriptionManagerProxy Optional proxy server to use when attaching the system to
an entitlement.
SubscriptionManagerProxyPort Optional port for the proxy server.
SubscriptionManagerProxyUser Optional user name for the proxy server.
SubscriptionManagerProxyPassword Optional password for the proxy server.
Gateway The default gateway for the system.
NameServers A comma-separated list of nameserver IP addresses.
NTPServers A comma-separated list of time servers. This can be IP
addresses or FQDNs.
TimeZone The time zone in which the system resides.
anaconda_interface The public interface that allows connection to Red Hat
Subscription services. For 10GbE or 25GbE Intel NICs,
"em4" (the fourth nic on the motherboard) should be
used.
extern_bond_name The name of the bond that provides access to the external
network.
extern_boot_opts The boot options for the bond on the external network.
Typically, there is no need to change this variable.
extern_bond_opts The bonding options for the bond on the external
network. Typically, there is no need to change this
variable.
extern_ifaces A space delimited list of interface names to bond together
for the bond on the external network.
internal_bond_name The name of the bond that provides access for all internal
networks.
internal_boot_opts The boot options for the bond on the internal network.
Typically, there is no need to change this variable.
internal_bond_opts The bonding options for the bond on the internal network.
Typically, there is no need to change this variable.
internal_ifaces A space delimited list of interface names to bond together
for the bond on the internal network.
mgmt_bond_name The VLAN interface name for the management network.
mgmt_boot_opts The boot options for the management VLAN interface.
Typically, there is no need to change this variable.
prov_bond_name The VLAN interface name for the provisioning network.

prov_boot_opts The boot options for the provisioning VLAN interface.
Typically, there is no need to change this variable.
stor_bond_name The VLAN interface name for the storage network.
stor_boot_opts The boot options for the storage VLAN interface.
Typically, there is no need to change this variable.
pub_api_bond_name The VLAN interface name for the public API interface.
pub_api_boot_opts The boot options for the public API VLAN interface.
priv_api_bond_name The VLAN interface name for the private API interface.
priv_api_boot_opts The boot options for the private API VLAN interface.
Typically, there is no need to change this variable.
br_mgmt_boot_opts The bonding options, IP address and netmask for the
management bridge.
br_prov_boot_opts The bonding options, IP address and netmask for the
provisioning bridge.
br_stor_boot_opts The bonding options, IP address and netmask for the
storage bridge.
br_pub_api_boot_opts The bonding options, IP address and netmask for the
public API bridge.
br_priv_api_boot_opts The bonding options, IP address and netmask for the
private API bridge.
prov_network The network IP address for the provisioning network for
use by the NTP server.
prov_netmask The netmask for the provisioning network for use by the
NTP server.

Creating image
1. Create .img file

$ dd if=/dev/zero of=osp_ks.img bs=1M count=1


2. Create file system

$ mkfs.ext3 -F osp_ks.img

mke2fs 1.42.9 (28-Dec-2013)


Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
128 inodes, 1024 blocks
51 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=1048576
1 block group
8192 blocks per group, 8192 fragments per group


128 inodes per group


Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
3. Create a directory where the image file will be mounted

$ mkdir /mnt/usb
4. Mount the filesystem in the usb directory

$ mount -o loop osp_ks.img /mnt/usb


5. Copy the SAH kickstart file to the usb directory

$ cp osp_sah.ks /mnt/usb/
6. Unmount the filesystem

$ umount /mnt/usb
7. Copy the .img file to a host from which you have access to the SAH node iDRAC user interface.

Presenting the image to the RHEL OS installation process


Note: At this stage, the RHEL 7.6 ISO file needs to be downloaded to the host from which you will access
the SAH iDRAC user interface.
From a host that has access to the SAH iDRAC user interface,
1. Connect to the SAH iDRAC using appropriate credentials
2. Refer to Table 8: BIOS configuration on page 35 and verify that BIOS parameters are set appropriately.
3. Click on Launch virtual.
4. On the console, click on Connect Virtual Media.
5. On the Virtual Media window, browse for the RHEL 7.6 ISO image and click on Map Device. This will map the
RHEL 7.6 ISO as a virtual DVD.
6. On the same window, browse to the image containing the kickstart file created in the previous section and click on
Map Device. This will map the osp_ks.img file as a virtual removable disk.

Figure 9: Check RHEL iso and image file


Deploy SAH node


1. From the console, click on Boot and select Virtual CD/DVD/ISO
2. Click on power then select Power on System.
3. Boot on the DVD.
• Select Install Red Hat Enterprise Linux 7.6 and press ‘e’ to edit the selected item.

Figure 10: Install Redhat


• Edit the kernel command line when prompted, appending the following:

inst.ks=hd:sdb:/osp_sah.ks

Figure 11: Provide kickstart file

• Exit and continue the boot process (Ctrl+X).


• After successful deployment of SAH node, from the console, click on Virtual Media and Disconnect
Virtual Media.


Red Hat OpenStack Platform Director


Director is a single-system OpenStack installation that comprises components for provisioning and managing
the OpenStack nodes that form your OpenStack environment (Overcloud). For detailed information on Red Hat
OpenStack Platform Director, please refer to Red Hat OpenStack Platform Director on page 43

Kickstart file customization


This kickstart file performs the following steps when properly configured:
• Partitions the disk
• Sets SELinux to enforcing mode
• Configures iptables to ensure the following services can pass traffic:
• HTTP
• HTTPS
• DNS
• TFTP
• TCP port 8140
• Configures networking, including:
• Static IP addresses
• Gateway
• Name resolution
• NTP time service
• Registers the system using the Red Hat Subscription Manager
• Installs the Red Hat OpenStack Platform Director installer
1. Login to the SAH node as root.
2. Clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/ using Git.
3. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13
4. Edit the director.ks located in the kickstart folder.
5. From the director Kickstart template provided as part of the Ready Architecture, modify it according to your needs
and set the following variables:

Table 13: Kickstart File Variables

Variables Description
rootpassword The root user password for the system.
timezone The timezone the system is in.
smuser The user credential when registering with Subscription manager.
smpassword The user password when registering with Subscription manager. The password
must be enclosed in single quotes if it contains certain special characters.
smpool Red Hat OpenStack Platform Director (Virtual Node) pool ID used when
attaching the system to an entitlement.
hostname The FQDN of the Director Node.
gateway The default gateway for the system.
nameserver A comma-separated list of nameserver IP addresses.

ntpserver The SAH node's provisioning IP address. The SAH node is an NTP server,
and will synchronize to the NTP servers specified in the SAH node's kickstart
file.
user The ID of an admin user to create for installing Red Hat OpenStack
Platform Director. The default admin user is osp_admin.
password The password for the osp admin user.
eth0 This line specifies the IP address and network mask for the public API
network. The line begins with eth0, followed by at least one space, the IP
address of the VM on the public API network, another set of spaces, and then
the network mask.
eth1 This line specifies the IP address and network mask for the provisioning
network. The line begins with eth1, followed by at least one space, the IP
address of the VM on the provisioning network, another set of spaces, and
then the network mask.
eth2 This line specifies the IP address and network mask for the management
network. The line begins with eth2, followed by at least one space, the IP
address of the VM on the management network, another set of spaces, and
then the network mask.
eth3 This line specifies the IP address and network mask for the private API
network. The line begins with eth3, followed by at least one space, the IP
address of the VM on the private API network, another set of spaces, and then
the network mask.
6. Save the file under /tmp.
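For illustration, the four eth lines described above might look like the following sketch. The format (interface name,
IP address, network mask separated by spaces) comes from the table; the eth1 address matches the Director's
provisioning IP used elsewhere in this guide, while the other addresses and masks are placeholder examples that must
match your own public API, management and private API networks:

eth0  100.67.139.15   255.255.255.192
eth1  192.168.120.13  255.255.255.0
eth2  192.168.110.13  255.255.255.0
eth3  192.168.140.13  255.255.255.0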

Red Hat OpenStack Platform Director as VM deployment


1. Create the iso directory where the RHEL 7.6 iso file will be located.

$ mkdir -p /store/data/iso
2. Create the images directory where the VM image will be located.

$ mkdir /store/data/images
3. Copy RHEL 7.6 iso file to /store/data/iso directory.
4. Create the director VM as follows:

$ virt-install --name director --ram 32768 --vcpus 8 --hvm \
  --os-type linux --os-variant rhel7 \
  --disk /store/data/images/director.img,bus=virtio,size=80 \
  --network bridge=br-pub-api --network bridge=br-prov \
  --network bridge=br-mgmt --network bridge=br-priv-api \
  --initrd-inject /tmp/director.ks --extra-args ks=file:/director.ks \
  --noautoconsole --graphics spice --autostart \
  --location /store/data/iso/rhel-server-7.6-x86_64-dvd.iso

Note: Please refer Overview on page 15 for overview of the network bridges.
5. Once the deployment is triggered, progress can be monitored using virt-viewer.

$ virt-viewer director


6. The status of the VM can be verified by using the virsh list command.

$ virsh list --all


Id Name State
----------------------------------------------------
- director shut off

Note: The VM will appear as shut off when the installation completes.
7. Once the VM is installed and shut off, start the Director VM.

$ virsh start director

Note: It can take a few minutes before the Director VM is pingable.


8. After successful installation, the Director VM is accessible through SSH using the appropriate credentials.
Note: Use the credentials that you specified in the kickstart file.

Undercloud Deployment

Configure the Undercloud


1. Login to director VM as root.
2. Install python-tripleoclient and ceph-ansible.

$ yum -y install python-tripleoclient ceph-ansible


3. Change to the osp_admin user (configured with passwordless access).

$ su -l osp_admin
4. Create directories for templates and images.

$ mkdir ~/images
$ mkdir ~/templates
5. Copy the Undercloud configuration sample file and save it as undercloud.conf.

$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
6. Modify the undercloud.conf file according to your needs.

Table 14: Undercloud variables

Parameters Description
undercloud_hostname Defines the fully-qualified host name for the Undercloud.
local_ip The IP address and prefix of the Director Node on the
provisioning network in Classless Inter-Domain Routing
(CIDR) format (xx.xx.xx.xx/yy). This must be the IP address
used for eth1 in director.cfg. The prefix used here must
correspond to the netmask for eth1 as well (usually 24).
subnets List of routed network subnet for provisioning and
introspection.

local_subnet Name of the local subnet where PXE and DHCP interfaces
reside.
local_interface Name of the network interface responsible for PXE booting the
Overcloud instances.
masquerade_network The network address and prefix of the Director Node on the
provisioning network in CIDR format (xx.xx.xx.xx/yy). This
must be the network used for eth1 in director.cfg. The prefix
used here must correspond to the netmask for eth1 as well
(usually 24).
inspection_enable_uefi To support UEFI boot method.
enable_ui To enable TripleO user interface.
ipxe_enabled To support iPXE for deploy and introspection.
scheduler_max_attempts 30 maximum attempts when deploying the Overcloud
instances.
clean_nodes To swipe out disks of the Converged nodes when data already
exists.
cidr Network CIDR for the Neutron-managed subnet for Overcloud
instances.
dhcp_start The starting IP address on the provisioning network to use for
Overcloud nodes.
Note: Ensure the IP address of the Director Node is
not included.

dhcp_end The ending IP address on the provisioning network for
Overcloud nodes.
inspection_iprange An IP address range on the provisioning network to use during
node inspection.
Note: This should not overlap with the dhcp_start/
dhcp_end range.

gateway IP address of the gateway used by the Overcloud instances.
Generally the undercloud ctl-plane IP.

Note: For more modification details, please refer to the appendix Undercloud configuration file on page 92.
7. Save and exit the file.
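As a reference, a minimal undercloud.conf might look like the following sketch. The section layout and the values
shown are illustrative assumptions based on the parameters in Table 14 and on undercloud.conf.sample; the hostname
and local_ip match the Director values used elsewhere in this guide, while the interface name, boolean flags and the
DHCP/inspection ranges are placeholders that must be adapted to your provisioning network:

[DEFAULT]
undercloud_hostname = director.oss.labs
local_ip = 192.168.120.13/24
subnets = ctlplane-subnet
local_subnet = ctlplane-subnet
local_interface = eth1
inspection_enable_uefi = true
enable_ui = true
ipxe_enabled = true
scheduler_max_attempts = 30
clean_nodes = true

[ctlplane-subnet]
cidr = 192.168.120.0/24
dhcp_start = 192.168.120.130
dhcp_end = 192.168.120.180
inspection_iprange = 192.168.120.30,192.168.120.60
gateway = 192.168.120.13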

Install Undercloud
1. Start the undercloud installation.

$ openstack undercloud install


2. When successful the displayed output will be similar to the following.

2019-01-29 10:48:29,615 INFO: Logging to /home/osp_admin/.instack/install-undercloud.log
2019-01-29 10:48:29,730 INFO: Checking for a FQDN hostname...
2019-01-29 10:48:29,748 INFO: Static hostname detected as director.oss.labs
2019-01-29 10:48:29,764 INFO: Transient hostname detected as director.oss.labs
2019-01-29 11:00:47,078 INFO: Created flavor "swift-storage" with profile "swift-storage"
2019-01-29 11:00:47,078 INFO: Configuring Mistral workbooks
2019-01-29 11:00:59,358 INFO: Mistral workbooks configured successfully
2019-01-29 11:01:50,544 INFO: Configuring an hourly cron trigger for tripleo-ui logging
2019-01-29 11:01:52,118 INFO:
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/osp_admin/undercloud-passwords.conf.

There is also a stackrc file at /home/osp_admin/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.

#############################################################################
3. Download images which are required to install Overcloud.

$ source stackrc
$ sudo yum install rhosp-director-images rhosp-director-images-ipa
4. Extract images to osp_admin/images directory.

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar \
    /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; \
  do tar -xvf $i ; done
5. Upload images to Glance.

$ openstack overcloud image upload --image-path /home/osp_admin/images

Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+------------+--------+
| ID                                   | Name                   | Disk Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------+--------+
| 74abf8fe-686f-47d2-8fe4-12c92c88c2a5 | overcloud-full-vmlinuz | aki         | 6639920    | active |
+--------------------------------------+------------------------+-------------+------------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+------------------------+-------------+------------+--------+
| ID                                   | Name                   | Disk Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------+--------+
| 94e8715e-232a-4ca0-bad3-e674dfb6264e | overcloud-full-initrd  | ari         | 62457227   | active |
+--------------------------------------+------------------------+-------------+------------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+------------------------+-------------+------------+--------+
| ID                                   | Name                   | Disk Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------+--------+
| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full         | qcow2       | 1347420160 | active |
+--------------------------------------+------------------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------------+-------------+------------+--------+
| ID                                   | Name                   | Disk Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------+--------+
| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel       | aki         | 6639920    | active |
+--------------------------------------+------------------------+-------------+------------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+------------------------+-------------+------------+--------+
| ID                                   | Name                   | Disk Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------+--------+
| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk      | ari         | 420527022  | active |
+--------------------------------------+------------------------+-------------+------------+--------+

6. Verify successful uploading of Overcloud images to Glance.

$ openstack image list

+--------------------------------------+------------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------------+--------+
| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel | active |
| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk | active |
| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full | active |
| 94e8715e-232a-4ca0-bad3-e674dfb6264e | overcloud-full-initrd | active |
| 74abf8fe-686f-47d2-8fe4-12c92c88c2a5 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

7. Get ID of subnet used as ctl-plane.

$ openstack subnet list

+--------------------------------------+-----------------+--------------------------------------+------------------+
| ID                                   | Name            | Network                              | Subnet           |
+--------------------------------------+-----------------+--------------------------------------+------------------+
| 23f88d67-0f42-42a4-b508-19b411404ee3 | ctlplane-subnet | 26ee3e35-accb-4e16-8b53-1f06288c6ed1 | 192.168.120.0/24 |
+--------------------------------------+-----------------+--------------------------------------+------------------+

8. Add a DNS entry to the subnet.

$ openstack subnet set 23f88d67-0f42-42a4-b508-19b411404ee3 --dns-nameserver 8.8.8.8
9. Create registries for Overcloud and Ceph images in containerized environment.

$ openstack overcloud container image prepare \
  --output-env-file /home/osp_admin/overcloud_images.yaml \
  --namespace=registry.access.redhat.com/rhosp13 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --tag-from-label 13-0

Configure and deploy the Overcloud


This topic describes the steps needed to successfully deploy the Overcloud. The following procedures are discussed in
the order they need to be executed:
1. Prepare the nodes registration file.
2. Configure networking.
3. Configure cluster.
4. Configure the static IPs.
5. Configure the virtual IPs.
6. Configure the nodes NICs.
7. Register and introspect the nodes.
8. Prepare and deploy the Overcloud.
In order to deploy the Overcloud successfully, a number of heat template files are needed. Some of them have to be
altered, while others keep their default values.
The directory structure stores the files by functionality and application. The templates that the user needs to modify
are in two directories: ~/templates/overcloud/ and ~/templates/nic-configs/.
Please refer to https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html-single/hyper-
converged_infrastructure_guide/index for overall deployment configuration management.
Note: Only the files that you need to modify are covered in this chapter. However, you can refer to the
Environment Files on page 83 appendix for a complete overview of the environment files used with this
architecture guide.
The following table describes the functionality of each environment file.


Table 15: List of environment files

Filename                                 Location                         Description                               User customized
network-environment.yaml on page 84      ~/templates/overcloud/           Defines the network environment for       Yes
                                                                          all OpenStack services
dell-environment.yaml on page 89         ~/templates/                     Defines all cluster parameters,           Yes
                                                                          including Ceph
static-ip-environment.yaml on page 88    ~/templates/overcloud/           Defines the IPs reserved for each node    Yes
                                                                          per OpenStack network
static-vip-environment.yaml on page 87   ~/templates/overcloud/           Defines the IPs reserved per OpenStack    Yes
                                                                          service requiring HA
storage-environment.yaml                 ~/templates/overcloud/           Defines the storage backend used by       No
                                         environments/                    Nova and Cinder
nic_environment.yaml on page 89          ~/templates/nic-configs/         Defines the network interface and         Yes
                                                                          bonding mappings
controller.yaml                          ~/templates/nic-configs/         Defines the network configuration for     No
                                                                          Controller nodes
computeHCI.yaml                          ~/templates/nic-configs/         Defines the network configuration for     Yes
                                                                          Converged nodes
overcloud_images.yaml                    ~/                               Defines the docker image locations for    No
                                                                          all OpenStack related services
puppet-pacemaker.yaml                    ~/templates/overcloud/           Defines the HA OpenStack configuration    No
                                         environments/                    with Pacemaker

Prepare the nodes registration file


In order to register the physical nodes on which the Overcloud will be deployed, the following actions need to be
executed:
1. On the Director VM, navigate to the osp_admin home directory.
2. Clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/ using Git.
3. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13
4. Edit the instackenv.json file.
5. Collect the physical information for each node described in the following table.
Note: The IP, VLAN and MTU values shown here are examples; configure and deploy them according to your own
network requirements.

Table 16: instackenv.json file parameters

Parameter Name    Sample Value                                        Description
name              control-0                                           Name of the node.
capabilities      node:control-0, boot_option:local,                  Capabilities of the node. UEFI boot mode enabled.
                  boot_mode:uefi
root_device       {"size":"2791"}                                     Size of the virtual boot disk. It should be either
                                                                      2791 or 223 depending on the type of node.
pm_addr           192.168.110.12                                      IPMI iDRAC IP address.
pm_password       xxxxx                                               IPMI iDRAC password.
pm_type           pxe_drac                                            DRAC mode.
pm_user           root                                                IPMI iDRAC user.
6. Save the file under the osp_admin home directory.
Note: Please refer to Appendix B for more details about the file content.
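For illustration, a single node entry built from the parameters above might look like the following sketch. The
values are the sample values from Table 16 and must be replaced with your own; the exact schema should follow the
instackenv.json file shipped in the Ready Architecture repository:

{
  "nodes": [
    {
      "name": "control-0",
      "capabilities": "node:control-0,boot_option:local,boot_mode:uefi",
      "root_device": {"size": "2791"},
      "pm_type": "pxe_drac",
      "pm_addr": "192.168.110.12",
      "pm_user": "root",
      "pm_password": "xxxxx"
    }
  ]
}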

Register and introspect the nodes


To register and introspect the nodes:
Note: At this time, all physical nodes need to be configured properly, including the BIOS settings and the virtual
disks created, as described in the Before you begin on page 35 section.
1. From the osp_admin home directory, source the Undercloud environment file.

$ source ~/stackrc
2. Register the nodes in the Undercloud.

$ openstack overcloud node import ~/instackenv.json


3. Wait for successful registration.

Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution ID: 1a85356c-7059-47c2-8342-0a7209e5d62d
Waiting for messages on queue 'tripleo' with no timeout.
6 node(s) successfully moved to the "manageable" state.
Successfully registered node UUID 2a71f666-61ac-498d-a1e5-be6b340c3f69
Successfully registered node UUID 518ea65f-a415-47fe-8839-637e7ab0f83f
Successfully registered node UUID 7b4f7223-4fe3-4100-9a35-8175ace438b4
Successfully registered node UUID f5a1b33a-0017-4712-8638-35d070a418a1
Successfully registered node UUID 374e9765-2cdf-4986-b9cb-fc8ae35e4de0
Successfully registered node UUID ba730382-2453-4a5b-b5db-1542dee710bc
4. Launch the introspection.

$ openstack overcloud node introspect --all-manageable --provide


5. Monitor introspection while it is running.

$ journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq \
    -u openstack-ironic-conductor -f
6. When the introspection ends, all bare-metal nodes should be marked as available.

$ openstack baremetal node list


+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| 1cc47781-2b62-4fe2-8ce4-6a10c609ff4a | control-0    | None          | power off   | available          | False       |
| adaca128-1f2c-4eb6-a1bb-ae0b93fe2d2f | control-1    | None          | power off   | available          | False       |
| 2effb861-e759-4e5b-8744-16ab9d859cf0 | control-2    | None          | power off   | available          | False       |
| 26cb068a-84d6-4fb6-9433-9d061a7c6adb | computeHCI-0 | None          | power off   | available          | False       |
| e3c2906f-d590-4ae0-927f-4267312545b6 | computeHCI-1 | None          | power off   | available          | False       |
| 48309885-ebcd-41dd-8a9f-7cecfe3f1a0e | computeHCI-2 | None          | power off   | available          | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+

Configure networking
To configure network environment parameters:
1. On the Director VM, from the osp_admin home directory, copy all files needed for the upcoming deployment.

$ cp -R JetPack/templates/ templates/
2. Edit the templates/overcloud/network-environment.yaml file.
3. Search the CHANGEME section to make changes.
4. Make changes as described in the following table.
Note: The IP, VLAN and MTU values shown here are examples; configure and deploy them according to your own
network requirements.

Table 17: network-environment.yaml file parameters

Parameter Name Default Value Description


NeutronGlobalPhysnetMtu 1500 MTU value for Neutron networks
ManagementNetCidr 192.168.110.0/24 CIDR block for the Management
network
InternalApiNetCidr 192.168.140.0/24 CIDR block for the Private API
network.
TenantNetCidr 192.168.130.0/24 CIDR block for the Tenant
network. For future support of
Generic Routing Encapsulation
(GRE) or VXLAN networks.
StorageNetCidr 192.168.170.0/24 CIDR block for the Storage
network.
StorageMgmtNetCidr 192.168.180.0/24 CIDR block for the Storage
Clustering network.
ExternalNetCidr 100.67.139.0/26 CIDR block for the External
network.
ManagementAllocationPools [{'start': IP address range on the
'192.168.110.30', Management network for use by
'end': the iDRAC DHCP server.
'192.168.110.45'}]

InternalApiAllocationPools [{'start': IP address range for the Private
'192.168.140.50', API network.
'end':
'192.168.140.120'}]
TenantAllocationPools [{'start': IP address range for the Tenant
'192.168.130.50', network. Not used unless you
'end': wish to configure Generic Routing
'192.168.130.120'}] Encapsulation (GRE) or VXLAN
networks.
StorageAllocationPools [{'start': IP address range for the Storage
'192.168.170.50', network.
'end':
'192.168.170.120'}]
StorageMgmtAllocationPools [{'start': IP address range for the Storage
'192.168.180.50', Clustering network.
'end':
'192.168.180.120'}]
ExternalAllocationPools [{'start': IP address range for the External
'100.67.139.20', 'end': network.
'100.67.139.50'}]
ExternalInterfaceDefaultRoute 100.67.139.1 Router gateway on the External
network.
ManagementNetworkGateway 192.168.110.1 The IP address of the gateway on
the Management network.
ProvisioningNetworkGateway 192.168.120.1 The IP address of the gateway on
the Provisioning network, which
allows access to the Management
network.
ControlPlaneDefaultRoute 192.168.120.13 Router gateway on the
provisioning network (or
Undercloud IP address).
ControlPlaneSubnetCidr 24 CIDR of the control plane network.
EC2MetadataIp 192.168.120.13 IP address of the Undercloud.
DnsServers ["8.8.8.8"] DNS servers for the Overcloud
nodes to use.
InternalApiNetworkVlanID 140 VLAN ID of the Private API
network.
StorageNetworkVlanID 170 VLAN ID of the Storage network.
StorageMgmtNetworkVlanID 180 VLAN ID of the Storage
Clustering network.
TenantNetworkVlanID 130 VLAN ID of the Tenant network.
For future support of Generic
Routing Encapsulation (GRE) or
VXLAN networks.
ExternalNetworkVlanID 1391 VLAN ID of the External network.

NeutronExternalNetworkBridge "" Empty string for External VLAN,
or br-ex if on the native VLAN.
ExternalNetworkMTU 1500 MTU Value for External network
InternalApiMTU 1500 MTU Value for Internal API
network
StorageNetworkMTU 1500 MTU Value for Storage network
TenantNetworkMTU 1500 MTU Value for Tenant network
ProvisioningNetworkMTU 1500 MTU Value for Provisioning
network
ManagementNetworkMTU 1500 MTU Value for Management
network
DefaultBondMTU 1500 MTU Value for Default bonds
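As an illustration, the CHANGEME entries end up as ordinary values in network-environment.yaml. The fragment below
is a sketch built only from the defaults in Table 17, not the complete file, and it assumes the standard
parameter_defaults layout of a heat environment file:

parameter_defaults:
  NeutronGlobalPhysnetMtu: 1500
  InternalApiNetCidr: 192.168.140.0/24
  StorageNetCidr: 192.168.170.0/24
  ExternalNetCidr: 100.67.139.0/26
  InternalApiAllocationPools: [{'start': '192.168.140.50', 'end': '192.168.140.120'}]
  ExternalInterfaceDefaultRoute: 100.67.139.1
  ControlPlaneDefaultRoute: 192.168.120.13
  EC2MetadataIp: 192.168.120.13
  DnsServers: ["8.8.8.8"]
  InternalApiNetworkVlanID: 140
  StorageNetworkVlanID: 170
  ExternalNetworkVlanID: 1391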

Configure cluster
To configure cluster environment parameters:
1. On the Director VM, from the osp_admin home directory, edit the templates/dell-
environment.yaml file
2. Search the CHANGEME section to make changes.
3. Make changes as described in the following table.

Table 18: dell-environment.yaml file parameters

Parameter Name Default Value Description


NeutronPublicInterface bond1 Bond interface for external access
NeutronNetworkType vlan Tenant network type
OvercloudComputeHCIFlavor baremetal Flavor used for Converged role
OvercloudControllerFlavor baremetal Flavor used for Controller role
ComputeHCICount 3 Number of Converged nodes
ControllerCount 3 Number of Controller nodes
CephPools [{"name": "volumes", Ceph PG values
"pg_num": 4096,
"pgp_num": 4096},
{"name": "vms",
"pg_num": 1024,
"pgp_num": 1024},
{"name": "images",
"pg_num": 512,
"pgp_num": 512}]
CephPoolDefaultSize 2 Default Ceph pool size
osd_scenario lvm Storage backend scenario

devices - /dev/nvme0n1 - /dev/ List of NVMe disks to use as
nvme1n1 - /dev/nvme2n1 OSDs
- /dev/nvme3n1 - /dev/
nvme4n1 - /dev/nvme5n1
- /dev/nvme6n1 - /dev/
nvme7n1
osd_objectstore bluestore Objectstore to used
osds_per_device 2 Number of OSDs per NVMe disk
osd_max_backfills 1 Limit backfill process running
simultaneously
ceph_osd_docker_memory_limit 10 Limit the amount of memory for
each OSD docker process
ceph_osd_docker_cpu_limit 4 Limit the number of vCPUs for
each OSD docker process
cpu_allocation_ratio 6.7 CPU Allocation ratio
NovaReservedHostMemory 115000 Amount of memory reserved for
Nova
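Putting these values together, the relevant fragment of dell-environment.yaml looks roughly like the following
sketch. Parameter names and defaults are taken from Table 18; the surrounding structure of the actual template may
differ, so treat this as orientation only:

parameter_defaults:
  NeutronPublicInterface: bond1
  NeutronNetworkType: vlan
  OvercloudComputeHCIFlavor: baremetal
  OvercloudControllerFlavor: baremetal
  ComputeHCICount: 3
  ControllerCount: 3
  CephPoolDefaultSize: 2
  NovaReservedHostMemory: 115000
  CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096},
              {"name": "vms", "pg_num": 1024, "pgp_num": 1024},
              {"name": "images", "pg_num": 512, "pgp_num": 512}]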

Configure the static IPs


To configure static IPs isolation:
1. On the Director VM, from the osp_admin home directory, edit the templates/overcloud/static-ip-
environment.yaml file.
2. Make changes as described in the following table.
Note: The IP, VLAN and MTU values shown here are examples; configure and deploy them according to your own
network requirements.

Table 19: static-ip-environment.yaml file parameters

Parameter Name Sub-parameter Name Default Value Description


ControllerIPs tenant - 192.168.130.12 List of tenant IPs for
- 192.168.130.13 Controller nodes on
- 192.168.130.14 tenant network
ControllerIPs internal_api - 192.168.140.12 List of Internal IPs for
- 192.168.140.13 Controller nodes on
- 192.168.140.14 Internal API Network
ControllerIPs storage - 192.168.170.12 List of storage IPs for
- 192.168.170.13 Controller nodes on
- 192.168.170.14 storage network
ControllerIPs external - 100.67.139.12 List of external IPs for
- 100.67.139.13 - Controller nodes on
100.67.139.14 external network
ComputeHCIIPs tenant - 192.168.130.15 List of tenant IPs for
- 192.168.130.16 Converged nodes on
- 192.168.130.17 tenant network

ComputeHCIIPs internal_api - 192.168.140.15 List of Internal IPs for
- 192.168.140.16 Converged nodes on
- 192.168.140.17 Internal API network
ComputeHCIIPs storage - 192.168.170.15 List of storage IPs for
- 192.168.170.16 Converged nodes on
- 192.168.170.17 storage network
ComputeHCIIPs storage_mgmt - 192.168.180.15 List of storage mgmt IPs
- 192.168.180.16 for Converged nodes
- 192.168.180.17 on storage management
network

Configure the Virtual IPs


To configure the service Virtual IPs:
1. On the Director VM, from the osp_admin home directory, edit the templates/overcloud/static-
vip-environment.yaml file.
2. Search the CHANGEME section to make changes.
3. Make changes as described in the following table.
Note: These IP, VLAN, and MTU values are examples; configure and deploy them according to your network
requirements.

Table 20: static-vip-environment.yaml file parameters

Parameter Name Default Value Description


redis 192.168.140.49 Virtual IP for the redis service
ControlPlaneIP 192.168.120.121 Virtual IP for the provisioning
network
InternalApiNetworkVip 192.168.140.121 Virtual IP for the Internal API
network
ExternalNetworkVip 100.67.139.62 Virtual IP for the Public API
network
StorageNetworkVip 192.168.170.121 Virtual IP for the storage network
StorageMgmtNetworkVip 192.168.180.122 Virtual IP for the storage
management network
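Note that each Virtual IP must lie outside the DHCP allocation pools defined in network-environment.yaml. A hedged, optional cross-check before deployment (commands are illustrative):

$ grep -n 'AllocationPools' ~/templates/overcloud/network-environment.yaml
$ grep -n 'Vip\|ControlPlaneIP\|redis' ~/templates/overcloud/static-vip-environment.yaml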

Configure the NIC interfaces


To configure the NIC interfaces:
1. On the Director VM, from the osp_admin home directory, edit the templates/nic-configs/
nic_environment.yaml file.
2. Search the CHANGEME section to make changes.
3. Make changes as described in the following table.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Deployment | 57

Table 21: nic_environment.yaml file parameters

Parameter Name Default Value Description


ControllerProvisioningInterface em3 Provisioning interface
name for Controller
nodes
ControllerBond0Interface1 p1p1 Bond 0 Interface #1 name
for Controller nodes
ControllerBond0Interface2 p2p1 Bond 0 Interface #2 name
for Controller nodes
ControllerBond1Interface1 p1p2 Bond 1 Interface #1 name
for Controller nodes
ControllerBond1Interface2 p2p2 Bond 1 Interface #2 name
for Controller nodes
ControllerBondInterfaceOptions mode=802.3ad miimon=100 Bonding mode for
xmit_hash_policy=layer3+4 Controller nodes
lacp_rate=1
ComputeHCIProvisioningInterface em3 Provisioning interface
name for Converged
nodes
ComputeHCIBond0Interface1 p2p1 Bond 0 Interface #1 name
for Converged nodes
ComputeHCIBond0Interface2 p3p1 Bond 0 Interface #2 name
for Converged nodes
ComputeHCIBond1Interface1 p2p2 Bond 1 Interface #1 name
for Converged nodes
ComputeHCIBond1Interface2 p3p2 Bond 1 Interface #2 name
for Converged nodes
ComputeHCIBondInterfaceOptions mode=802.3ad miimon=100 Bonding mode for
xmit_hash_policy=layer3+4 Converged nodes
lacp_rate=1
ComputeHCIBond2Interface1 p6p1 Bond 2 interface #1 name
for Converged nodes
(for SRIOV/OVS-DPDK
future usage)
ComputeHCIBond2Interface2 p7p1 Bond 2 interface #2 name
for Converged nodes
(for SRIOV/OVS-DPDK
future usage)
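Once the Overcloud is deployed, the bonding options above can be spot-checked on any node; a hedged example, assuming the bond under inspection is named bond0:

$ cat /proc/net/bonding/bond0 | grep -iE 'bonding mode|hash policy|lacp'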

Prepare and deploy the Overcloud


Note: Before beginning the Overcloud deployment, increase the default value of the following parameter to
avoid a power-cycling timeout that can occur during deployment.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
58 | Deployment

1. On the Director VM, change the value of post_deploy_get_power_state_retry_interval to 60.

# sudo sed -i 's/#post_deploy_get_power_state_retry_interval.*/post_deploy_get_power_state_retry_interval = 60/' /etc/ironic/ironic.conf
2. Restart all Ironic services.

# sudo systemctl restart openstack-ironic*
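A quick, optional verification that the change took effect and that the services restarted cleanly (commands are illustrative):

$ sudo grep '^post_deploy_get_power_state_retry_interval' /etc/ironic/ironic.conf
$ sudo systemctl status 'openstack-ironic*' --no-pager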

To prepare and deploy the Overcloud:


1. Copy the TripleO heat-templates directory to the osp_admin templates directory.

$ cp -R /usr/share/openstack-tripleo-heat-templates/* ~/templates/overcloud
2. Generate a custom roles_data.yaml file that includes the HCI role.

$ openstack overcloud roles generate -o /home/osp_admin/templates/roles_data.yaml Controller ComputeHCI
3. Finally, launch the Overcloud deployment.

$ openstack overcloud deploy --log-file ~/overcloud_deployment.log \
-t 120 --stack R139-HCI --templates ~/templates/overcloud \
-e ~/templates/overcloud/environments/ceph-ansible/ceph-ansible.yaml \
-e ~/templates/overcloud/environments/ceph-ansible/ceph-rgw.yaml \
-r ~/templates/roles_data.yaml -e ~/templates/overcloud/environments/
network-isolation.yaml \
-e ~/templates/overcloud/network-environment.yaml \
-e /home/osp_admin/templates/nic-configs/nic_environment.yaml \
-e ~/templates/overcloud/static-ip-environment.yaml \
-e ~/templates/overcloud/static-vip-environment.yaml \
-e ~/templates/overcloud/node-placement.yaml \
-e ~/templates/overcloud/environments/storage-environment.yaml \
-e ~/overcloud_images.yaml \
-e ~/templates/dell-environment.yaml \
-e ~/templates/overcloud/environments/puppet-pacemaker.yaml \
--libvirt-type kvm --ntp-server 192.168.120.8
4. Wait for the Overcloud deployment to complete successfully.

Stack R139-HCI CREATE_COMPLETE


Host 100.67.139.62 not found in /home/osp_admin/.ssh/known_hosts
Started Mistral Workflow tripleo.deployment.v1.get_horizon_url. Execution
ID: 9d0e7fad-2cef-4552-9079-9b07f91cea13
Overcloud Endpoint: http://100.67.139.62:5000/
Overcloud Horizon Dashboard URL: http://100.67.139.62:80/dashboard
Overcloud rc file: /home/osp_admin/R139-HCIrc
Overcloud Deployed
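Before continuing, a few hedged sanity commands can confirm that the Overcloud APIs respond; the rc file name comes from the deployment output above:

$ source ~/R139-HCIrc
$ openstack service list
$ openstack hypervisor list
$ openstack network agent list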

Red Hat Ceph Storage Dashboard deployment and configuration (optional)


The following section illustrates the steps to deploy and configure the Red Hat Ceph Storage Dashboard VM, which is
optional and based on the JetPack 13.1 deployment.

Red Hat Ceph Storage Dashboard deployment


1. Login to SAH node as root.
2. From the directory where the JetPack repository has been cloned, switch to the master branch.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Deployment | 59

3. Change directory to JetPack/src/mgmt .

[root@sah ~]# cd JetPack/src/mgmt/


4. Edit the dashboard.cfg configuration file in the JetPack/src/mgmt/ directory with the following settings.

Table 22: dashboard.cfg file parameters

Parameter Name Description


rootpassword The root user password for the Ceph Dashboard VM.
smuser The user credential when registering with Subscription
manager.
smpassword The user password when registering with Subscription
manager. The password must be enclosed in single quotes if
it contains certain special characters.
smpool The pool ID used when attaching the Ceph Dashboard VM
to an entitlement.
hostname The FQDN of the Ceph Dashboard VM.
gateway The default gateway for the Ceph Dashboard VM.
nameserver A comma-separated list of nameserver IP addresses.
eth0 This line specifies the IP address and network mask for the
public API network. The line begins with eth0, followed by
at least one space, the IP address, another set of spaces, and
then the network mask.
eth1 (if required) This line specifies the IP address and network mask for the
storage network. The line begins with eth1, followed by at
least one space, the IP address, another set of spaces, and
then the network mask.
5. Deploy the Ceph dashboard VM.

$ python deploy-dashboard-vm.py dashboard.cfg /store/data/iso/rhel-server-7.6-x86_64-dvd.iso

Starting install...
Retrieving file .treeinfo... | 1.9 kB 00:00:00
Retrieving file vmlinuz... | 6.3 MB 00:00:00
Retrieving file initrd.img... | 52 MB 00:00:00
Allocating 'dashboard.img' | 100 GB 00:00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.

6. After deployment, the VM will be in a shut-off state. Start the dashboard VM.

$ virsh start dashboard


Domain dashboard started

7. Verify the Ceph dashboard VM is up and running.

$ virsh list --all


Id Name State
----------------------------------------------------
1 director running

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
60 | Deployment

13 dashboard running

Ceph dashboard VM configuration


1. Log in to the Director VM as osp_admin.
2. From the directory where the JetPack repository has been cloned, switch to the master branch.
3. Copy the pilot directory from the JetPack folder to the osp_admin home directory.

$ cp -R JetPack/src/pilot .
4. Copy the network-environment.yaml file to the pilot/templates directory.

$ cp /home/osp_admin/templates/overcloud/network-environment.yaml /home/
osp_admin/pilot/templates/network-environment.yaml
5. Copy the undercloud.conf file to the pilot directory.

$ cp /home/osp_admin/undercloud.conf /home/osp_admin/pilot/
6. Replace ceph-storage with computeHCI in the pilot/subscription.json file

$ sed -i 's/ceph-storage/computeHCI/' pilot/subscription.json


7. Replace storage with computeHCI in the pilot/config_dashboard.py file

$ sed -i 's/"storage" in node.fqdn/"computehci" in node.fqdn/' pilot/config_dashboard.py
8. Browse to the pilot directory located under the osp_admin home directory.

$ cd ~/pilot
9. Run the script provided as part of JetPack to configure the dashboard.

$ python config_dashboard.py <dashboard-public-ip> <password> <smuser> <smpasswd> <physical-pool-id> <ceph-pool-id>
INFO:config_dashboard:Configuring Ceph Storage Dashboard on 100.67.139.185
(dashboard.labs.dell)
INFO:config_dashboard:Identifying Ceph nodes (Monitor and OSD nodes)
INFO:config_dashboard:r139a-hci-controller-0 (192.168.170.12) is a Ceph
node
INFO:config_dashboard:r139a-hci-controller-1 (192.168.170.13) is a Ceph
node
INFO:config_dashboard:r139a-hci-controller-2 (192.168.170.14) is a Ceph
node
INFO:config_dashboard:r139a-hci-computehci-1 (192.168.170.16) is a Ceph
node
INFO:config_dashboard:r139a-hci-computehci-2 (192.168.170.17) is a Ceph
node
INFO:config_dashboard:r139a-hci-computehci-0 (192.168.170.15) is a Ceph
node
INFO:config_dashboard:Preparing the subscription json file.
INFO:config_dashboard:Register the overcloud nodes.
INFO:__main__:Registering control 192.168.120.104 with CDN
INFO:__main__:Disabling all repos on control 192.168.120.104
INFO:__main__:Enabling the following repos on control 192.168.120.104:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-tools-rpms']
INFO:__main__:Registering control 192.168.120.124 with CDN

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Deployment | 61

INFO:__main__:Disabling all repos on control 192.168.120.124


INFO:__main__:Enabling the following repos on control 192.168.120.124:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-tools-rpms']
INFO:__main__:Registering control 192.168.120.129 with CDN
INFO:__main__:Disabling all repos on control 192.168.120.129
INFO:__main__:Enabling the following repos on control 192.168.120.129:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.115 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.115
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.115:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.108 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.108
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.108:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.112 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.112
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.112:
[u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-
rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-
openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-
server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:config_dashboard:Preparing hosts file on Ceph Storage Dashboard.
INFO:config_dashboard:Preparing hosts file on Ceph nodes.
INFO:config_dashboard:Preparing remote access on the Ceph Storage
Dashboard.
INFO:config_dashboard:Preparing remote access on the Ceph Storage
Dashboard.
INFO:config_dashboard:Preparing ansible host file on Ceph Storage
Dashboard.
INFO:config_dashboard:Adding Monitors Stanza to Ansible hosts file
INFO:config_dashboard:Adding RadosGW Stanza to Ansible hosts file
INFO:config_dashboard:Adding OSD Stanza to Ansible hosts file
INFO:config_dashboard:Adding MGR Stanza to Ansible hosts file
INFO:config_dashboard:Adding Graphana Stanza to Ansible hosts file
INFO:config_dashboard:Preparing /etc/ceph/ceph.conf file on Ceph nodes.
INFO:config_dashboard:Preparing the Ceph Storage Cluster for data
collection.
INFO:config_dashboard:Installing the Ceph Storage Dashboard.
INFO:config_dashboard:Ceph Storage Dashboard configuration is complete
INFO:config_dashboard:You may access the Ceph Storage Dashboard at:
INFO:config_dashboard: http://100.67.139.185:3000,
INFO:config_dashboard:with user 'admin' and password 'admin'.
INFO:config_dashboard:Add new ports to iptables ceph nodes
INFO:config_dashboard:Unregister the overcloud nodes.
INFO:register_overcloud:Unregistering control 192.168.120.104 with CDN
INFO:register_overcloud:Unregistering control 192.168.120.124 with CDN
INFO:register_overcloud:Unregistering control 192.168.120.129 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.115 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.108 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.112 with CDN

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
62 | Deployment

INFO:config_dashboard:Preparing the Ceph Storage Cluster prometheus


service.
INFO:config_dashboard:Restarting prometheus service on node
(dashboard.labs.dell)
10. Browse to the URL generated in the previous log output and verify the Ceph dashboard.
11. You can change the default login credentials (admin/admin).

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Validation and testing | 63

Chapter

6
Validation and testing
Topics:
• Manual validation
• Tempest test suite

This chapter illustrates the optional manual deployment of the sanity procedure, including instructions for
configuring and running the Tempest test suite.
Tempest is OpenStack's official test suite for all OpenStack services post deployment.
Tempest validates the Dell EMC Ready Architecture for the deployment of Red Hat OpenStack Platform over
Hyper-Converged Infrastructure.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
64 | Validation and testing

Manual validation
The following illustrates post-deployment validation of the Overcloud OpenStack services through the creation and
validation of networks, subnets, and instances.
This section includes instructions for creating the networks and testing a majority of your RHOSP environment using
Glance (configured with Red Hat Ceph Storage), Cinder, and Nova.
Note: You must complete these steps prior to creating instances and volumes and testing the functional
operations of OpenStack.

Testing OpenStack services


1. Log in to the Director VM as osp_admin using the user name and password specified when creating the node,
and source the overcloud rc file, which is named after the stack defined when deploying the overcloud.

$ cd ~/
$ source <overcloud_name>rc
2. Set up a new project.

$ openstack project create <project name>


3. Create a new user for sanity testing.

$ openstack user create --project <project name> --password <password> --email <email id> <user name>
4. Create the tenant network by executing the following commands.

$ openstack network create <tenant_network_name>


5. Create the tenant subnet on the tenant network.

$ openstack subnet create --network <tenant_network_name> \
  --subnet-range <tenant_subnet_cidr> <tenant_subnet_name>
6. Create the Router.

$ openstack router create <tenant_router>


7. Before you add the tenant network interface, you will need the subnet ID. Execute the following command to
display the networks and their subnets.

$ openstack network list

+--------------------------------------
+----------------------------------------------------
+--------------------------------------+
| ID | Name
| Subnets |
+--------------------------------------
+----------------------------------------------------
+--------------------------------------+
| 4164e0ba-07fe-4b2e-b5fa-01181987ab9f | public
| fd43cb2b-b746-4443-9a81-ad99e36431df |
| 8e36a5dd-383e-4415-9be6-d91d9fedb023 | HA network tenant
8b6fe7f3af074ccb9285043bb2f3cf5b | 86fd1a5b-d02d-42a9-9808-b3bfdac6f422 |
| e44c6fe7-19d4-40a3-ae13-330ee7fb49cf | tenant_net1
| cfc4cbec-ea71-4384-9179-9dc7bc6d8c9e |

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Validation and testing | 65

+--------------------------------------
+----------------------------------------------------
+--------------------------------------+
8. Add the tenant network interface between the router and the tenant network.

$ openstack router add subnet <tenant_router> <subnet_id>


9. Create the external network by executing the following commands.

$ openstack network create --external --provider-network-type vlan \
  --provider-physical-network physext --provider-segment <external_vlan_id> \
  <external_network_name>
10. Create the external subnet with floating IP addresses on the external network

$ openstack subnet create --network <external_network_name> \
  --subnet-range <external_vlan_network> \
  --allocation-pool start=<start_ip>,end=<end_ip> \
  --no-dhcp --gateway <gateway_ip> <external_subnet_name>
11. Set the external network gateway for the router.

$ openstack router set --external-gateway <external_network_name> <tenant_router_name>

Test Glance image service


1. Create and upload the Glance image (a concrete example follows this procedure).

$ openstack image create --disk-format <format> --container-format <format> \
  --public --file <file_path> <image_name>
2. List available images to verify that your image was uploaded successfully.

$ openstack image list


3. To view more detailed information about an image, use the identifier of the image from the output of the
OpenStack image list command above.

$ openstack image show <id>
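As a concrete, hedged illustration of the three steps above using a CirrOS test image (the image name, format values, and download URL are illustrative and not part of this architecture):

$ curl -o cirros.img http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
$ openstack image create --disk-format qcow2 --container-format bare \
  --public --file cirros.img cirros
$ openstack image list
$ openstack image show cirros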

Testing Nova compute provisioning service


Launch an instance using the boot image that you uploaded:
1. Get the ID of the flavor you will use.

$ openstack flavor list


2. Get the image ID.

$ openstack image list


3. Get the tenant network ID.

$ openstack network list


4. Generate a key pair. The command below generates a new key pair; if you try using an existing key pair in the
command, it fails.

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub <key_name>

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
66 | Validation and testing

Note: MY_KEY.pem is an output file created by the Nova keypair-add command, and will be used later.
5. Create an instance using the following command.

$ openstack server create --flavor <flavor_id> --image <imageid> \
  --nic net-id=<tenantNetID> --key-name <key_name> <nameofinstance>

Note: Replace the IDs, <nameofinstance>, and <key_name> with your own values.
6. List the instance you created:

$ openstack server list

Test Cinder block storage service


1. Create a new volume.

$ openstack volume create --image centos --bootable --read-write --size 10 $VOL_NAME-$i
2. Verify list of volumes created.

$ openstack volume list


3. Attach the newly created volume to the instance.

$ openstack server add volume $server_id $volume_id --device /dev/vdb

Test Swift object storage service


Verify operation of the Object Storage service.
• Show the service status.

$ swift stat

Expected output.

Account: AUTH_6f049e55ab9b49ca9ee342ed4c17a86b
Containers: 13
Objects: 2066
Bytes: 10602595
Containers in policy "policy-0": 13
Objects in policy "policy-0": 2066
Bytes in policy "policy-0": 10602595
Meta Temp-Url-Key: 60b16566fd14c10017ce78124af6e028
X-Account-Project-Domain-Id: default
X-openstack-Request-Id: tx584d13dce54d4462b5e91-005c3f8a46
X-Timestamp: 1546424543.31946
X-Trans-Id: tx584d13dce54d4462b5e91-005c3f8a46
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes
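A further optional functional check is to create a container and upload a small object; the container and file names are illustrative:

$ swift post sanity-container
$ echo "swift object test" > sanity.txt
$ swift upload sanity-container sanity.txt
$ swift list sanity-container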

Accessing the instance test


1. Find the active Controller by executing the following commands from the Director Node.

$ cd ~/
$ source stackrc
$ nova list (make note of the Controller IPs)
$ ssh heat-admin@<controller ip>

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Validation and testing | 67

$ sudo -i
# pcs cluster status

The output of the nova list command will be similar to the following.

+--------------------------------------+-----------------------+--------
+------------+-------------+--------------------------+
| ID | Name | Status |
Task State | Power State | Networks |
+--------------------------------------+-----------------------+--------
+------------+-------------+--------------------------+
| cfe21aea-91be-49bb-931f-5061e4be397d | r139-hci-computehci-0 | ACTIVE |
- | Running | ctlplane=192.168.120.134 |
| 64b94937-7a29-4950-af9e-d9980502d90d | r139-hci-computehci-1 | ACTIVE |
- | Running | ctlplane=192.168.120.135 |
| e14e34ae-fdce-4865-bd8c-a9e5a6dbf9af | r139-hci-computehci-2 | ACTIVE |
- | Running | ctlplane=192.168.120.136 |
| 8d1ecfde-47f0-4112-baf8-8877416a8a82 | r139-hci-controller-0 | ACTIVE |
- | Running | ctlplane=192.168.120.141 |
| fd7f6bf6-b6e8-4154-b68f-7f92c274a29b | r139-hci-controller-1 | ACTIVE |
- | Running | ctlplane=192.168.120.127 |
| a419f86d-5e86-490a-a583-62e14d7c5508 | r139-hci-controller-2 | ACTIVE |
- | Running | ctlplane=192.168.120.129 |
+--------------------------------------+-----------------------+--------
2. Initiate an SSH session to the active Controller, as heat-admin.
3. Find the instance network namespaces by executing the following commands:

$ sudo -i
$ ip netns

The displayed output will be similar to the following.

qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 (id: 2)
qrouter-bb00b972-f67c-45ba-a573-ad5d7e8debc5 (id: 1)
qdhcp-2e43972b-0778-4cc3-be64-9dcc9789863b (id: 0)
4. Access an instance namespace by executing the following command:

$ ip netns exec <namespace> bash


5. Verify that the namespace is the desired tenant network, by executing the following command.

$ ip a

The displayed output will be similar to the following.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue


link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen
1000
link/ether fa:16:3e:5b:59:d2 brd ff:ff:ff:ff:ff:ff
inet 192.168.202.36/24 brd 192.168.202.255 scope global eth0
inet6 fe80::f816:3eff:fe5b:59d2/64 scope link
valid_lft forever preferred_lft forever

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
68 | Validation and testing

6. Ping the IP address of the instance.

$ ip netns exec qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 ping 192.168.202.36

PING 192.168.202.36 (192.168.202.36) 56(84) bytes of data.


64 bytes from 192.168.202.36: icmp_seq=1 ttl=64 time=1.06 ms
64 bytes from 192.168.202.36: icmp_seq=2 ttl=64 time=0.158 ms
^C
--- 192.168.202.36 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.158/0.611/1.064/0.453 ms
7. SSH into the instance, as the centos user, using the keypair generated above.

$ sudo ip netns exec qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 ssh -i ~/MY_KEY.pem centos@192.168.202.36

Tempest test suite


Tempest is OpenStack's official test suite which includes a set of integration tests to run against an OpenStack cluster.
Tempest automatically runs against every service in every project of OpenStack to avoid failures that could occur
during merged changes. Verified post-installation Tempest API tests include:
• Under project creation operation - Scenarios include creation of project, update, deletion of project, creation of
project by unauthorized name, empty name, duplicate name list projects, and deletion of non-existent project.
• User creation operation - List current users, get users, and list users with names.
• Network - Create, update and delete networks, create port on non-existent network, show non-existent network,
bulk create, and delete network.
• Image upload and launch - Activate, deactivate image, delete, register and upload image
• Create floating IPs in private network - External network, fixed IP address, and a list of floating IPs.
• Volume creation and attachment to VM - Create, get, update, and delete.

Configure Tempest
1. Log in to the OSP Director VM as the osp_admin user.
2. Clone the Tempest repository from GitHub into the home directory /home/osp_admin/.

$ git clone https://git.openstack.org/openstack/tempest


3. Install tempest.

$ sudo pip install tempest/


4. Verify version.

$ tempest --version
5. Source the admin credentials in Overcloud.

$ source ~/overcloudrc
6. Create and initialize the tempest workspace.

$ tempest init cloud-01

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Validation and testing | 69

7. List the existing workspace.

$ tempest workspace list


8. Generate the /etc/tempest.conf file.

$ discover-tempest-config --deployer-input ~/tempest-deployer-input.conf \
  --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD \
  --network-id 3ba5a660-5172-4ea0-bb1c-72c0c14f87b6
9. Verify and modify the tempest.conf file according to the network, image, and URL details (a sketch of typical settings follows this procedure).
10. Verify configuration file.

$ tempest verify-config -o /etc/tempest.conf
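The exact values in tempest.conf depend on the environment; the following is only a hedged sketch of the kinds of settings step 9 refers to, with the endpoint taken from the deployment output earlier and all IDs shown as placeholders:

[identity]
uri_v3 = http://100.67.139.62:5000/v3

[compute]
image_ref = <glance_image_id>
flavor_ref = <flavor_id>

[network]
public_network_id = <external_network_id>
floating_network_name = <external_network_name>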

Run Tempest tests


Run Tempest for services from the existing Tempest workspace

$ cd cloud-01

$ stestr run --black-regex '\[.*\bslow\b.*\]' '^tempest\.(api)'
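To shorten iterative runs, a hedged alternative is to execute only the smoke-tagged subset from the same workspace:

$ tempest run --smoke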

Summary
The main objective of Tempest API tests is to ensure that our Dell EMC Ready Architecture for Hyper-Converged
Infrastructure on Red Hat OpenStack Platform is compatible with the OpenStack APIs. Tempest API tests ensure that
deployment of the HCI cloud does not interrupt any OpenStack API functionality.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
70 | Performance measuring

Chapter

7
Performance measuring
Topics:
• Overview
• Performance tools
• Test cases and test reports
• Conclusion

This chapter details the testing methodology used throughout the experimentation. It includes the tools and
workloads used and the rationale for their choice. It also provides the benchmark results along with the
bottleneck analysis.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Performance measuring | 71

Overview
The rapid development of cloud computing and networking technology delivers a wide variety of choices for an
equally diverse group of information system organizations.
Cloud services performance significantly impacts future functionality and execution of information infrastructure.
A thorough evaluation of Cloud service performance is crucial and beneficial to both service providers and
consumers.
The following chapter outlines performance test methodology and the graphical representation of the results in three
different areas:
• Network performance: evaluation of network performance by evaluating throughput, latency and jitter of the
network traffic between virtual machines.
• Compute performance: evaluation of compute performance by evaluating memory throughput and memory
latency.
• Storage performance: evaluation of storage performance by evaluating IOPs and Latency.

Performance tools
The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform performance
is measured using the following tools:
1. Spirent: Spirent Temeva (Test Measure Validate) is a revolutionary new platform providing a Software-as-a-
Service (SaaS) based dashboard to configure, measure, analyze and share valuable test metrics. This tool measures
network and compute performance.
2. FIO: A popular open source I/O workload generator used for performance benchmarking. This tool measures storage
performance.

Test cases and test reports


The following test scenarios are performed after setup configuration. Refer to https://www.spirent.com/products/temeva
for how to set up and configure Temeva.
The following sections detail the test case experimentation and corresponding analytical metrics performed.

Network performance
Test case 1
Objective: Measure network throughput (Mbits/sec) and L2/L3 network latency (ms) between instances on same/
different compute hosts and network.
Description: The test is performed to calculate network throughput between instances using four sets of
combinations:
1. Two VMs residing on same compute host and same network
2. Two VMs residing on same compute host and different network
3. Two VMs residing on different compute host and same network
4. Two VMs residing on different compute host and different network
A unidirectional network traffic is generated between two VMs (referred as East/West traffic) for a range of frame
sizes - 64B, 256B, 512B, 1024B and 1518B each iterating over a duration of 60 seconds. Considering the same
network traffic or different network traffic, learning mode is configured as L2 or L3 respectively. The number of
flows per port is set to 1000.
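The measurements below were produced with Spirent Temeva. Purely as an informal, hedged illustration of the traffic pattern, and not the methodology used here, a unidirectional 60-second UDP stream at the largest frame size could be approximated between two VMs with iperf3; a 1472-byte UDP payload corresponds to a 1518-byte Ethernet frame (1500-byte MTU minus 20-byte IP and 8-byte UDP headers, plus 18 bytes of Ethernet header and FCS):

$ iperf3 -s                                        # on the receiving VM
$ iperf3 -c <receiver_ip> -u -b 0 -l 1472 -t 60    # on the sending VM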

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
72 | Performance measuring

The test results are represented graphically. Figure 12: Network throughput vs frame size on page 72 shows standard
Ethernet frame sizes on the x-axis and the corresponding maximum network throughput on the y-axis. Figure 13:
Network latency vs frame size depicts network latency behavior, with standard Ethernet frame sizes on the x-axis and
the corresponding latency on the y-axis.

Figure 12: Network throughput vs frame size

Figure 13: Network latency vs frame size

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Performance measuring | 73

Analysis and inference: Network throughput increases as the frame size increases and reaches its maximum at the
standard Ethernet frame size, i.e., 1518B. Latency is high for small frame sizes and is at its minimum at the standard
Ethernet frame size, i.e., 1518B.
When two VMs are present in the same compute host with different networks, packets flow through Linux Bridge,
OVS, and virtual routers. Since routing is involved, it requires a network layer (layer 3) with higher latency.
Packet flows through Linux Bridge, OVS, and physical infrastructure when two VMs are in different compute and the
same network. Activity is limited to layer 2 data with no substantial delay like layer 3.
Frame size greater than or equal to 1024B yields maximum throughput and minimum consistent with L2/L3 network
latency.
Throughput rises almost linearly with frame size, reaching about 75% of the maximum throughput between VMs on
the same compute host. The placement of VMs on hyper-converged nodes is the bottleneck to high network throughput
performance.
Enabling NFV features like OVS-DPDK in the Hyper-Converged Infrastructure enhances the packet processing and
forwarding rate, optimizing the HCI solution.
Test case 2
Objective: Measure L2/L3 network jitter (ms) and L2/L3 network latency (ms) between instances on the same/
different compute hosts and networks.
Description: The test is performed to calculate network jitter between instances using two sets of
combinations:
1. Both VMs spawned in the same network (L2).
2. The two VMs spawned in different networks (L3).
Unidirectional network traffic generated between two VMs (referred as east/west traffic) for a range of frame sizes -
64B, 256B, 512B, 1024B and 1518B running over 60 seconds.
Learning mode is configured as L2 or L3 for either same or different network traffic for 1000 flows per port.
The test result is represented graphically, with standard Ethernet frame sizes on the x-axis and the corresponding
network jitter on the y-axis.

Figure 14: Network jitter vs frame size

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
74 | Performance measuring

Analysis and inference: When the VMs are on different networks, packets go through Linux Bridge, OVS,
and virtual routers. Packets moving through these different paths increase the packet delay variation (jitter).
When the VMs are on the same network, a packet goes through Linux Bridge and OVS only, following a single path
and reducing delay variation.

Compute performance
Test Case 3
Objective: Measure compute performance to memory IOPs (millions) and latency (us).
Description: The test is performed on an increasing count of agent VMs, each with four vCPUs, 4GB RAM,
and a 20GB disk, hosted on a single compute node. Read and write memory IOPs with a 4KB block size and different
access patterns, i.e., random and sequential, are stressed to the maximum value on a group of VMs ranging from 1 to
30, which yields the maximum IOPs supported by the infrastructure at that point in time.
The test result is represented graphically, with the number of instances (1 to 30) on the x-axis and the average
memory read IOPs in millions on the y-axis.

Figure 15: 4KB memory read IOPs

The test result is represented graphically, with the number of instances (1 to 30) on the x-axis and the average
memory write IOPs in millions on the y-axis.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Performance measuring | 75

Figure 16: 4KB memory write IOPs

The test result is represented graphically, with the number of instances (1 to 30) on the x-axis and the corresponding
memory latency (us) for random/sequential read and write operations on the y-axis.

Figure 17: 4KB memory latency

Analysis and inference: The memory read/write IOPs for both access patterns, random and sequential, shown in
Figure 15: 4KB memory read IOPs and Figure 16: 4KB memory write IOPs, indicate the maximum possible IOPs per
instance. Increasing the number of OpenStack instances decreases the average IOPs. The bends in the graph denote the
bottleneck for a particular number of instances under test for memory IOPs. IOPs can be improved significantly by
enabling huge pages and NUMA on the hyper-converged compute nodes.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
76 | Performance measuring

A write operation consumes more IOPs than a read operation regardless of the memory access pattern. The latency of
both random write and sequential write for 4KB block size is consequently higher than random read and sequential
read.

Storage performance
Test case 4:
Objective: Measure storage performance in terms of IOPs and latency (ms).
Description: The test measures storage performance in terms of IOPs and latency on a configured Ceph cluster that
uses 16 OSDs (2 OSDs per NVMe SSD disk) per node, i.e., 48 OSDs in total across three nodes.
The test performs a series of FIO benchmarks on a group of VMs ranging from 40 to 240. The VMs under test are
configured with one vCPU, 1024MB RAM, and a 10GB disk on a single compute host.
FIO benchmark details follow (a sample invocation appears after this list):
• I/O Engine : libaio
• I/O mode : direct I/O
• Block size : 4KB
• I/O Depth : 64
• Number of jobs per VM : 8
• File size : 512MB
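A hedged example of a single FIO invocation matching the parameters above (the job name and target file path are illustrative):

$ fio --name=rand-read-4k --ioengine=libaio --direct=1 --rw=randread \
      --bs=4k --iodepth=64 --numjobs=8 --size=512M \
      --filename=/mnt/test/fio.dat --group_reporting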
The test result is represented graphically, with the number of instances (40 to 240) on the x-axis and the random
read/write IOPs on the y-axis.

Figure 18: 4K storage random IOPs

The test result is represented graphically, with the number of instances (40 to 240) on the x-axis and the sequential
read/write IOPs on the y-axis.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Performance measuring | 77

Figure 19: 4K storage sequential IOPs

The test result is represented graphically, with the number of instances (40 to 240) on the x-axis and the random
read/write latency (ms) on the y-axis.

Figure 20: 4K storage random latency

The test result is represented graphically, with the number of instances (40 to 240) on the x-axis and the sequential
read/write latency (ms) on the y-axis.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
78 | Performance measuring

Figure 21: 4K storage sequential latency

Analysis and inference: Write operations are more expensive than read operations for smaller block sizes. Ceph
acknowledges a client only after the data has been entirely written to a given number of OSDs, which in our case is 2x
replication, i.e., a primary and a secondary OSD.
For read operations, the client communicates only with the acting/primary OSD. The results for up to 240 VMs on a
single node illustrate a gradual but minimal decrease in average IOPs as the number of compute resources increases.
Latency is higher for a random pattern than for a sequential pattern for a given number of instances.
There is a consistent graphical curve for random latency. Sequential latency shows a linear incremental curve with an
increase in virtual compute resources.
Faster NVMe drives could improve performance but may shift the load to CPUs.

Conclusion
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed in a
hyper-converged approach by colocating Ceph and compute services.
Dell EMC PowerEdge R740xd and Dell EMC PowerEdge R640 servers with Intel 25GbE networking provide a
concrete performance baseline and state-of-the-art hardware.
Software-defined storage Red Hat Ceph Storage 3.2 with BlueStore backend enabled is well suited for use cases
where performance is a critical element.
Finally, Intel NVMe drives offer robustness and an improved model of performance-driven SSD drives.
The performance testing methodology for the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on
Red Hat OpenStack Platform was supplied by Spirent, a trusted Dell EMC partner.
The biggest challenge of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat
OpenStack Platform is optimal tuning of memory, CPU cores, and Ceph OSD-disk ratio to address the resource
distribution and contention. Section Performance tuning on page 31 illustrates performance tuning parameters
defining a flexible and optimized architecture. Each use case requirement can be adapted to the customer's
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Performance measuring | 79

infrastructure. The simulated testing methodology for measuring the various performance metrics applies to a wide
range of devices.
Performance improves by enabling NFV oriented features like Huge Pages for high memory I/O applications and
CPU pinning for NUMA aware nodes with additional functionality like OVS-DPDK for intelligent packet forwarding
and processing.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
80 | Bill of Materials

Appendix

A
Bill of Materials
Topics:
• Bill of Materials - SAH node
• Bill of Materials - 3 Controller nodes
• Bill of Materials - 3 Converged nodes
• Bill of Materials - 1 Dell EMC Networking S3048-ON switch
• Bill of Materials - 2 Dell EMC Networking S5248-ON switches

This appendix provides the bill of materials information necessary to purchase the proper hardware to deploy the
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Bill of Materials | 81

Bill of Materials - SAH node


Table 23: Bill Of Materials - SAH node

Function Description
Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25M
Cache,Turbo,HT (125W) DDR4-2666
RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP
Storage controller PERC H740P
RAID layout RAID10

Bill of Materials - 3 Controller nodes


Table 24: Bill Of Materials - 3 Controller nodes

Function Description
Platform Dell EMC PowerEdge R640
CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25M
Cache,Turbo,HT (125W) DDR4-2666
RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+
Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP
Storage controller PERC H740P
RAID layout RAID10

Bill of Materials - 3 Converged nodes


Table 25: Bill Of Materials - 3 Converged nodes

Function Description
Platform Dell EMC PowerEdge R740xd
CPU 2 x Intel® Xeon® Platinum 8160 2.1G,24C/48T,10.4GT/s, 33M
Cache,Turbo,HT (150W) DDR4-2666
RAM 384GB RAM (12 x 32GB RDIMM, 2666MT/s, Dual Rank)

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
82 | Bill of Materials

Function Description
LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network 4 x Intel XXV710 DP 25GbE DA/SFP+
Disk 8 x 3.2TB, NVMe, Mxd Use Expr Flash, P4610
2 x 240GB, SSD SATA, 2.5, HP, S4600

Storage Controller PERC H740P


RAID layout RAID 1 (Operating System disk only)

Bill of Materials - 1 Dell EMC Networking S3048-ON switch


Table 26: Bill Of Materials - 1 Dell EMC Networking S3048-ON switch

Product Description
S3048-ON 48 line-rate 1000BASE-T ports, 4 line-rate 10GbE SFP+ ports
Redundant power supply AC or DC power supply
Fans Fan Module I/O Panel to PSU Airflow
or
Fan Module I/O Panel to PSU Airflow

Validated operating system Cumulus Linux OS 3.7.1

Bill of Materials - 2 Dell EMC Networking S5248-ON switches


Table 27: Bill Of Materials - 2 Dell EMC Networking S5248-ON switches

Product Description
S5248-ON 100GbE, 40GbE, and 25GbE ports
Redundant power supply AC or DC power supply
Fans Fan Module I/O Panel to PSU Airflow
or
Fan Module I/O Panel to PSU Airflow

Validated operating system Cumulus Linux OS 3.7.1

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Environment Files | 83

Appendix

B
Environment Files
Topics:
• Heat templates and environment yaml files
• Nodes registration json file
• Undercloud configuration file

This appendix provides modification details for the files that must be edited to deploy the Overcloud:
• yaml files (.yaml)
• instackenv file (.json)
• undercloud (.conf)

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
84 | Environment Files

Heat templates and environment yaml files


There are two types of yaml files: heat templates and environment files. Heat templates are responsible for
OpenStack orchestration. An environment file affects the runtime behavior of a template; it provides a way to
override the resource implementations and a mechanism to supply parameters that the service needs.
Note: These are sample files. The actual configurations are site specific; the IP address blocks and
configuration parameter values shown must be set according to each site.

network-environment.yaml

resource_registry:
OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/
ctlplane_vip.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml

parameter_defaults:
# CHANGEME: Change the following to the desired MTU for Neutron Networks
NeutronGlobalPhysnetMtu: 1500

# CHANGEME: Change the following to the CIDR for the Management network
ManagementNetCidr: 192.168.110.0/24

# CHANGEME: Change the following to the CIDR for the Private API network
InternalApiNetCidr: 192.168.140.0/24

# CHANGEME: Change the following to the CIDR for the Tenant network
TenantNetCidr: 192.168.130.0/24

# CHANGEME: Change the following to the CIDR for the Storage network
StorageNetCidr: 192.168.170.0/24

# CHANGEME: Change the following to the CIDR for the Storage Clustering
network
StorageMgmtNetCidr: 192.168.180.0/24

# CHANGEME: Change the following to the CIDR for the External network
ExternalNetCidr: 100.67.139.0/26

# CHANGEME: Change the following to the DHCP ranges for the iDRACs to use
# on the Management network
ManagementAllocationPools: [{'start': '192.168.110.30', 'end':
'192.168.110.45'}]

# The allocation pools below are used to dynamically assign DHCP IP addresses
# to the various networks on the overcloud nodes as they are provisioned.
# If using static IPs instead, see static-ip-environment.yaml.

# CHANGEME: Change the following to the DHCP range to use on the


# Private API network
InternalApiAllocationPools: [{'start': '192.168.140.50', 'end':
'192.168.140.120'}]

# CHANGEME: Change the following to the DHCP range to use on the

# Tenant network

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Environment Files | 85

TenantAllocationPools: [{'start': '192.168.130.50', 'end': '192.168.130.120'}]

# CHANGEME: Change the following to the DHCP range to use on the

# Storage network
StorageAllocationPools: [{'start': '192.168.170.50', 'end':
'192.168.170.120'}]

# CHANGEME: Change the following to the DHCP range to use on the

# Storage Clustering network


StorageMgmtAllocationPools: [{'start': '192.168.180.50', 'end':
'192.168.180.120'}]

# CHANGEME: Change the following to the DHCP range to use on the

# External network
ExternalAllocationPools: [{'start': '100.67.139.20', 'end':
'100.67.139.50'}]

# CHANGEME: Set to the router gateway on the external network


ExternalInterfaceDefaultRoute: 100.67.139.1

# CHANGEME: Set to the router gateway on the management network


ManagementNetworkGateway: 192.168.110.1

# CHANGEME: Set to the IP of the gateway on the provisioning network which


# will allow access to the management network
ProvisioningNetworkGateway: 192.168.120.1

# CHANGEME: Set to the router gateway on the provisioning network (or
# Undercloud IP)
ControlPlaneDefaultRoute: 192.168.120.13

# CHANGEME: Set to the CIDR of the control plane network


ControlPlaneSubnetCidr: "24"

# CHANGEME: Set to the IP address of the Undercloud


EC2MetadataIp: 192.168.120.13

# CHANGEME: Set to the DNS servers to use for the overcloud nodes (maximum 2)
DnsServers: ["8.8.8.8"]

# CHANGEME: Change the following to the VLAN ID to use on the


# Private API network
InternalApiNetworkVlanID: 140

# CHANGEME: Change the following to the VLAN ID to use on the


# Storage network
StorageNetworkVlanID: 170

# CHANGEME: Change the following to the VLAN ID to use on the


# Storage Clustering network
StorageMgmtNetworkVlanID: 180

# CHANGEME: Change the following to the VLAN ID to use on the


# Tenant network
TenantNetworkVlanID: 130

# CHANGEME: Change the following to the VLAN ID to use on the

# External network

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
86 | Environment Files

ExternalNetworkVlanID: 1391

# CHANGEME: Change the following to the MTU value to use on the


# External network
ExternalNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# internal network
InternalApiMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# Storage network
StorageNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# StorageMgmtNetwork network
StorageMgmtNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# TenantNetwork network
TenantNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# Provisioning network
ProvisioningNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# Management network
ManagementNetworkMTU: 1500
# CHANGEME: Change the following to the MTU value to use on the
# Default Bonds MTU
DefaultBondMTU: 1500

# CHANGEME: Change the following to mtu size used for the floating network
ExtraConfig:
neutron::plugins::ml2::physical_network_mtus: ['physext:1500']
# neutron::plugins::ml2::physical_network_mtus: physext:1500
# CHANGEME: Set to empty string for External VLAN, br-ex if on native VLAN
# of br-ex
NeutronExternalNetworkBridge: "''"
ServiceNetMap:
NeutronTenantNetwork: tenant
CeilometerApiNetwork: internal_api
AodhApiNetwork: internal_api
GnocchiApiNetwork: internal_api
MongoDbNetwork: internal_api
CinderApiNetwork: internal_api
CinderIscsiNetwork: storage
GlanceApiNetwork: storage
GlanceRegistryNetwork: internal_api
KeystoneAdminApiNetwork: ctlplane # allows undercloud to config
endpoints
KeystonePublicApiNetwork: internal_api
NeutronApiNetwork: internal_api
HeatApiNetwork: internal_api
NovaApiNetwork: internal_api
NovaMetadataNetwork: internal_api
NovaVncProxyNetwork: internal_api
SwiftMgmtNetwork: storage # Changed from storage_mgmt
SwiftProxyNetwork: storage
SaharaApiNetwork: internal_api
HorizonNetwork: internal_api
MemcachedNetwork: internal_api
RabbitMqNetwork: internal_api
RedisNetwork: internal_api
MysqlNetwork: internal_api
CephClusterNetwork: storage_mgmt
CephPublicNetwork: storage
CephRgwNetwork: storage
ControllerHostnameResolveNetwork: internal_api

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Environment Files | 87

ComputeHostnameResolveNetwork: internal_api
BlockStorageHostnameResolveNetwork: internal_api
ObjectStorageHostnameResolveNetwork: internal_api
CephStorageHostnameResolveNetwork: storage
NovaColdMigrationNetwork: internal_api
NovaLibvirtNetwork: internal_api

static-vip-environment.yaml
resource_registry:
OS::TripleO::Network::Ports::NetVipMap: ./network/ports/
net_vip_map_external.yaml
OS::TripleO::Network::Ports::ExternalVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::InternalApiVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::StorageVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/noop.yaml
OS::TripleO::Network::Ports::RedisVipPort: ./network/ports/
from_service.yaml

parameter_defaults:
ServiceVips:
# CHANGEME: Change the following to the VIP for the redis service on the
# Private API/internal_api network.
# Note that this IP must lie outside the InternalApiAllocationPools
range
# specified in network-environment.yaml.

redis: 192.168.140.49

# CHANGEME: Change the following to the VIP on the Provisioning network


# Note that this IP must lie outside the dhcp_start/dhcp_end range
# specified in undercloud.conf.

ControlPlaneIP: 192.168.120.251

# CHANGEME: Change the following to the VIP on the Private API network.
# Note that this IP must lie outside the InternalApiAllocationPools range
# specified in network-environment.yaml.

InternalApiNetworkVip: 192.168.140.121

# CHANGEME: Change the following to the VIP on the Public API network.
# Note that this IP must lie outside the ExternalAllocationPools range
# specified in network-environment.yaml.

ExternalNetworkVip: 100.67.139.62

# CHANGEME: Change the following to the VIP on the Storage network


# Note that this IP must lie outside the StorageAllocationPools range
# specified in network-environment.yaml.

StorageNetworkVip: 192.168.170.121

# CHANGEME: Change the following to the VIP on the Provisioning network.


# The Storage Clustering network is not connected to the controller nodes,
# so the VIP for this network must be mapped to the provisioning network.
# Note that this IP must lie outside the dhcp_start/dhcp_end range
# specified in undercloud.conf.

StorageMgmtNetworkVip: 192.168.120.252

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
88 | Environment Files

static-ip-environment.yaml
resource_registry:
OS::TripleO::Controller::Ports::ExternalPort: ./network/ports/
external_from_pool.yaml
OS::TripleO::Controller::Ports::InternalApiPort: ./network/ports/
internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: ./network/ports/
storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml
OS::TripleO::Controller::Ports::TenantPort: ./network/ports/
tenant_from_pool.yaml

OS::TripleO::ComputeHCI::Ports::ExternalPort: ./network/ports/noop.yaml
OS::TripleO::ComputeHCI::Ports::InternalApiPort: ./network/ports/
internal_api_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StoragePort: ./network/ports/
storage_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: ./network/ports/
storage_mgmt_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::TenantPort: ./network/ports/
tenant_from_pool.yaml

parameter_defaults:
# Specify the IPs for the overcloud nodes on the indicated networks below.
# The IPs are listed in the order: node0, node1, node2 for each network.
#
# Note that the IPs chosen must lie outside the allocation pools defined
in
# network-environment.yaml, and must not collide with the IPs assigned to
# other nodes or networking equipment on the network, such as the SAH,
# OSP Director node, Ceph Storage Admin node, etc.
ControllerIPs:
tenant:
- 192.168.130.12
- 192.168.130.13
- 192.168.130.14
internal_api:
- 192.168.140.12
- 192.168.140.13
- 192.168.140.14
storage:
- 192.168.170.12
- 192.168.170.13
- 192.168.170.14
external:
- 100.67.139.12
- 100.67.139.13
- 100.67.139.14

ComputeHCIIPs:
tenant:
- 192.168.130.15
- 192.168.130.16
- 192.168.130.17
internal_api:
- 192.168.140.15
- 192.168.140.16
- 192.168.140.17
storage:
- 192.168.170.15
- 192.168.170.16
- 192.168.170.17

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1
Environment Files | 89

storage_mgmt:
- 192.168.180.15
- 192.168.180.16
- 192.168.180.17

nic_environment.yaml

resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig: ./controller.yaml
#############To be modified by EndUser######################

OS::TripleO::ComputeHCI::Net::SoftwareConfig: ./computeHCI.yaml

parameter_defaults:
# CHANGEME: Change the interface names in the following lines for the
# controller nodes provisioning interface and to include in the controller
# nodes bonds
ControllerProvisioningInterface: em3
ControllerBond0Interface1: p1p1
ControllerBond0Interface2: p2p1
ControllerBond1Interface1: p1p2
ControllerBond1Interface2: p2p2
# The bonding mode to use for controller nodes
ControllerBondInterfaceOptions: mode=802.3ad miimon=100
xmit_hash_policy=layer3+4 lacp_rate=1

# CHANGEME: Change the interface names in the following lines for the
# compute nodes provisioning interface and to include in the compute
# nodes bonds
ComputeHCIProvisioningInterface: em3
ComputeHCIBond0Interface1: p3p1
ComputeHCIBond0Interface2: p2p1
ComputeHCIBond1Interface1: p3p2
ComputeHCIBond1Interface2: p2p2
ComputeHCIBond2Interface1: p6p1
ComputeHCIBond2Interface2: p7p1
# The bonding mode to use for compute nodes
ComputeHCIBondInterfaceOptions: mode=802.3ad miimon=100
xmit_hash_policy=layer3+4 lacp_rate=1
##############Modification Ends Here#####################

dell-environment.yaml

resource_registry:
OS::TripleO::NodeUserData: /home/osp_admin/templates/wipe-disks.yaml

parameter_defaults:

# Defines the interface to bridge onto br-ex for network nodes
NeutronPublicInterface: bond1
# The tenant network type for Neutron
NeutronNetworkType: vlan
## >neutron-disable-tunneling no mapping.

# The neutron ML2 and Open vSwitch VLAN mapping ranges to support.
NeutronNetworkVLANRanges: physint:201:250,physext

# The logical to physical bridge mappings to use.
# Defaults to mapping the external bridge on hosts (br-ex) to a physical
# name (datacenter).
# You would use this for the default floating network.
NeutronBridgeMappings: physint:br-tenant,physext:br-ex

# Flavor to use for the HCI Compute nodes
OvercloudComputeHCIFlavor: baremetal

# Flavor to use for the Controller nodes
OvercloudControllerFlavor: baremetal
# Flavor to use for the Ceph Storage nodes
OvercloudCephStorageFlavor: baremetal
# Flavor to use for the Swift storage nodes
OvercloudSwiftStorageFlavor: baremetal
# Flavor to use for the Cinder nodes
OvercloudBlockStorageFlavor: baremetal

# Number of HCI Compute nodes
ComputeHCICount: 3

# Number of Controller nodes
ControllerCount: 3

# To customize the domain name of the overcloud nodes, change
# "localdomain" in the following line to the desired domain name.
CloudDomain: oss.labs

# Set to true to enable Nova usage of Ceph for ephemeral storage.
# If set to false, Nova uses the storage local to the compute.
NovaEnableRbdBackend: true
# devices:
# - /dev/sda2

# Configure Ceph Placement Group (PG) values for the indicated pools
CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096}, {"name": "vms", "pg_num": 1024, "pgp_num": 1024}, {"name": "images", "pg_num": 512, "pgp_num": 512}]
CephAnsiblePlaybookVerbosity: 1

CephPoolDefaultSize: 2

CephConfigOverrides:
journal_size: 10240
journal_collocation: true

# The following parameter is added for the HCI environment
mon_max_pg_per_osd: 300
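# Illustrative sizing check: 3 converged nodes x 8 NVMe devices x 2 OSDs per
# device = 48 OSDs. The pools above total 4096 + 1024 + 512 = 5632 PGs; with
# CephPoolDefaultSize: 2 that is (5632 x 2) / 48 = ~235 PGs per OSD, which is
# why mon_max_pg_per_osd is raised above its default here.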

CephAnsibleDisksConfig:
osd_scenario: lvm
devices:
- /dev/nvme0n1
- /dev/nvme1n1
- /dev/nvme2n1
- /dev/nvme3n1
- /dev/nvme4n1
- /dev/nvme5n1
- /dev/nvme6n1
- /dev/nvme7n1
CephAnsibleExtraConfig:
osd_objectstore: bluestore

osds_per_device: 2
osd_recovery_op_priority: 3
osd_recovery_max_active: 3
osd_max_backfills: 1
ceph_osd_docker_memory_limit: 10g
ceph_osd_docker_cpu_limit: 4

ComputeHCIExtraConfig:
cpu_allocation_ratio: 6.7

ComputeHCIParameters:
NovaReservedHostMemory: 115000
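# Illustrative: NovaReservedHostMemory is expressed in MB and holds host RAM
# back from Nova for the colocated Ceph OSDs plus per-guest overhead. Assuming
# the commonly cited guidance of roughly 5 GB per BlueStore OSD, the 16 OSDs on
# each converged node account for about 80 GB of the ~115 GB reserved here,
# with the remainder covering hypervisor and guest overhead.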

NovaComputeExtraConfig:
nova::migration::libvirt::live_migration_completion_timeout: 800
nova::migration::libvirt::live_migration_progress_timeout: 150
ControllerExtraConfig:
nova::api::osapi_max_limit: 10000
nova::rpc_response_timeout: 180
nova::keystone::authtoken::revocation_cache_time: 300
neutron::rpc_response_timeout: 180
neutron::keystone::authtoken::revocation_cache_time: 300
cinder::keystone::authtoken::revocation_cache_time: 300
glance::api::authtoken::revocation_cache_time: 300

tripleo::profile::pacemaker::database::mysql::innodb_flush_log_at_trx_commit: 0
tripleo::haproxy::haproxy_default_maxconn: 10000
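
The environment files above are passed to the overcloud deployment command on the Director node. The following is a minimal sketch, assuming the templates are staged under the deployment user's home directory as referenced in dell-environment.yaml; the exact file list, ordering, and additional options used by the Dell EMC deployment automation may differ:

source ~/stackrc
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/static-ip-environment.yaml \
  -e ~/templates/nic_environment.yaml \
  -e ~/templates/dell-environment.yaml \
  --ntp-server <ntp-server>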

Nodes registration json file

instackenv.json

{
"nodes": [
{
"name": "control-0",
"capabilities": "node:control-0,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.12",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "control-1",
"capabilities": "node:control-1,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.13",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},
{
"name": "control-2",
"capabilities": "node:control-2,boot_option:local,boot_mode:uefi",
"root_device": {"size":"2791"},
"pm_addr": "192.168.110.14",

"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},

{
"name": "computeHCI-0",
"capabilities": "node:computeHCI-0,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.15",
"pm_password": "xxxxxxxx",

"pm_type": "pxe_drac",
"pm_user": "root"
},

{
"name": "computeHCI-1",
"capabilities": "node:computeHCI-1,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.16",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
},

{
"name": "computeHCI-2",
"capabilities": "node:computeHCI-2,boot_option:local,boot_mode:uefi",
"root_device": {"size":"223"},
"pm_addr": "192.168.110.17",
"pm_password": "xxxxxxxx",
"pm_type": "pxe_drac",
"pm_user": "root"
}
]
}
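
With instackenv.json populated, the nodes are registered and introspected from the Director node using the standard TripleO workflow. A minimal sketch, assuming the deployment user's credentials in ~/stackrc and the file in that user's home directory:

source ~/stackrc
openstack overcloud node import ~/instackenv.json
openstack overcloud node introspect --all-manageable --provide
openstack baremetal node list    # every node should end up in the "available" state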

Undercloud configuration file

undercloud.conf
[DEFAULT]

#
# From instack-undercloud
#

# Fully qualified hostname (including domain) to set on the
# Undercloud. If left unset, the current hostname will be used, but
# the user is responsible for configuring all system hostname settings
# appropriately. If set, the undercloud install will configure all
# system hostname settings. (string value)
undercloud_hostname = director.OSS.LABS

# IP information for the interface on the Undercloud that will be
# handling the PXE boots and DHCP for Overcloud instances. The IP
# portion of the value will be assigned to the network interface
# defined by local_interface, with the netmask defined by the prefix
# portion of the value. (string value)
local_ip = 192.168.120.13/24

# Virtual IP or DNS address to use for the public endpoints of
# Undercloud services. Only used with SSL. (string value)
# Deprecated group/name - [DEFAULT]/undercloud_public_vip
#undercloud_public_host = 192.168.24.2

# Virtual IP or DNS address to use for the admin endpoints of
# Undercloud services. Only used with SSL. (string value)
# Deprecated group/name - [DEFAULT]/undercloud_admin_vip
#undercloud_admin_host = 192.168.24.3

# DNS nameserver(s) to use for the undercloud node. (list value)
#undercloud_nameservers =

# List of ntp servers to use. (list value)
#undercloud_ntp_servers =

# DNS domain name to use when deploying the overcloud. The overcloud
# parameter "CloudDomain" must be set to a matching value. (string
# value)
#overcloud_domain_name = localdomain

# List of routed network subnets for provisioning and introspection.
# Comma separated list of names/tags. For each network a section/group
# needs to be added to the configuration file with these parameters
# set: cidr, dhcp_start, dhcp_end, inspection_iprange, gateway and
# masquerade. Note: The section/group must be placed before or after
# any other section. (See the example section [ctlplane-subnet] in the
# sample configuration file.) (list value)
subnets = ctlplane-subnet

# Name of the local subnet, where the PXE boot and DHCP interfaces for
# overcloud instances is located. The IP address of the
# local_ip/local_interface should reside in this subnet. (string
# value)
local_subnet = ctlplane-subnet

# Certificate file to use for openstack service SSL connections.
# Setting this enables SSL for the openstack API endpoints, leaving it
# unset disables SSL. (string value)
#undercloud_service_certificate =

# When set to True, an SSL certificate will be generated as part of
# the undercloud install and this certificate will be used in place of
# the value for undercloud_service_certificate. The resulting
# certificate will be written to
# /etc/pki/tls/certs/undercloud-[undercloud_public_host].pem. This
# certificate is signed by CA selected by the
# "certificate_generation_ca" option. (boolean value)
#generate_service_certificate = false

# The certmonger nickname of the CA from which the certificate will be
# requested. This is used only if the generate_service_certificate
# option is set. Note that if the "local" CA is selected the
# certmonger's local CA certificate will be extracted to /etc/pki/ca-
# trust/source/anchors/cm-local-ca.pem and subsequently added to the
# trust chain. (string value)
#certificate_generation_ca = local

# The kerberos principal for the service that will use the
# certificate. This is only needed if your CA requires a kerberos
# principal. e.g. with FreeIPA. (string value)
#service_principal =

# Network interface on the Undercloud that will be handling the PXE
# boots and DHCP for Overcloud instances. (string value)
local_interface = eth1

# MTU to use for the local_interface. (integer value)
local_mtu = 1500

# DEPRECATED: Network that will be masqueraded for external access, if
# required. This should be the subnet used for PXE booting. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: With support for routed networks, masquerading of the
# provisioning networks is moved to a boolean option for each subnet.
masquerade_network = 192.168.120.0/24

# Path to hieradata override file. If set, the file will be copied
# under /etc/puppet/hieradata and set as the first file in the hiera
# hierarchy. This can be used to custom configure services beyond what
# undercloud.conf provides (string value)
#hieradata_override =

# Path to network config override template. If set, this template will
# be used to configure the networking via os-net-config. Must be in
# json format. Templated tags can be used within the template, see
# instack-undercloud/elements/undercloud-stack-config/net-
# config.json.template for example tags (string value)
#net_config_override =

# Network interface on which inspection dnsmasq will listen. If in
# doubt, use the default value. (string value)
# Deprecated group/name - [DEFAULT]/discovery_interface
#inspection_interface = br-ctlplane

# Whether to enable extra hardware collection during the inspection
# process. Requires python-hardware or python-hardware-detect package
# on the introspection image. (boolean value)
inspection_extras = true

# Whether to run benchmarks when inspecting nodes. Requires
# inspection_extras set to True. (boolean value)
# Deprecated group/name - [DEFAULT]/discovery_runbench
inspection_runbench = false

# Whether to support introspection of nodes that have UEFI-only
# firmware. (boolean value)
inspection_enable_uefi = true

# Makes ironic-inspector enroll any unknown node that PXE-boots
# introspection ramdisk in Ironic. By default, the "fake" driver is
# used for new nodes (it is automatically enabled when this option is
# set to True). Set discovery_default_driver to override.
# Introspection rules can also be used to specify driver information
# for newly enrolled nodes. (boolean value)
#enable_node_discovery = false

# The default driver or hardware type to use for newly discovered
# nodes (requires enable_node_discovery set to True). It is
# automatically added to enabled_drivers or enabled_hardware_types
# accordingly. (string value)
#discovery_default_driver = ipmi

# Whether to enable the debug log level for Undercloud openstack
# services. (boolean value)
#undercloud_debug = true

# Whether to update packages during the Undercloud install. (boolean
# value)
#undercloud_update_packages = true

# Whether to install Tempest in the Undercloud. (boolean value)
enable_tempest = true

# Whether to install Telemetry services (ceilometer, gnocchi, aodh,
# panko) in the Undercloud. (boolean value)
#enable_telemetry = false

# Whether to install the TripleO UI. (boolean value)
enable_ui = true

# Whether to install requirements to run the TripleO validations.
# (boolean value)
enable_validations = true

# Whether to install the Volume service. It is not currently used in
# the undercloud. (boolean value)
#enable_cinder = false

# Whether to install novajoin metadata service in the Undercloud.
# (boolean value)
#enable_novajoin = false

# Array of host/port combinations of docker insecure registries.
# (list value)
#docker_insecure_registries =

# One Time Password to register Undercloud node with an IPA server.
# Required when enable_novajoin = True. (string value)
#ipa_otp =

# Whether to use iPXE for deploy and inspection. (boolean value)
# Deprecated group/name - [DEFAULT]/ipxe_deploy
ipxe_enabled = true

# Maximum number of attempts the scheduler will make when deploying
# the instance. You should keep it greater or equal to the number of
# bare metal nodes you expect to deploy at once to work around
# potential race condition when scheduling. (integer value)
# Minimum value: 1
scheduler_max_attempts = 30

# Whether to clean overcloud nodes (wipe the hard drive) between
# deployments and after the introspection. (boolean value)
clean_nodes = false

# DEPRECATED: List of enabled bare metal drivers. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Please switch to hardware types and the
# enabled_hardware_types option.
#enabled_drivers = pxe_ipmitool,pxe_drac,pxe_ilo

# List of enabled bare metal hardware types (next generation drivers).
# (list value)
#enabled_hardware_types = ipmi,redfish,ilo,idrac

# An optional docker 'registry-mirror' that will be configured in
# /etc/docker/daemon.json. (string value)
#docker_registry_mirror =

# List of additional architectures enabled in your cloud environment.
# The list of supported values is: ppc64le (list value)
#additional_architectures =

# Enable support for routed ctlplane networks. (boolean value)
#enable_routed_networks = false

[auth]

#
# From instack-undercloud
#

# Password used for MySQL root user. If left unset, one will be
# automatically generated. (string value)
#undercloud_db_password = <None>

# Keystone admin token. If left unset, one will be automatically
# generated. (string value)
#undercloud_admin_token = <None>

# Keystone admin password. If left unset, one will be automatically
# generated. (string value)
#undercloud_admin_password = <None>

# Glance service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_glance_password = <None>

# Heat db encryption key (must be 16, 24, or 32 characters). If left
# unset, one will be automatically generated. (string value)
#undercloud_heat_encryption_key = <None>

# Heat service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_heat_password = <None>

# Heat cfn service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_heat_cfn_password = <None>

# Neutron service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_neutron_password = <None>

# Nova service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_nova_password = <None>

# Ironic service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_ironic_password = <None>

# Aodh service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_aodh_password = <None>

# Gnocchi service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_gnocchi_password = <None>

# Ceilometer service password. If left unset, one will be
# automatically generated. (string value)
#undercloud_ceilometer_password = <None>

# Panko service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_panko_password = <None>

# Ceilometer metering secret. If left unset, one will be automatically
# generated. (string value)
#undercloud_ceilometer_metering_secret = <None>

# Ceilometer snmpd read-only user. If this value is changed from the
# default, the new value must be passed in the overcloud environment
# as the parameter SnmpdReadonlyUserName. This value must be between 1
# and 32 characters long. (string value)
#undercloud_ceilometer_snmpd_user = ro_snmp_user

# Ceilometer snmpd password. If left unset, one will be automatically
# generated. (string value)
#undercloud_ceilometer_snmpd_password = <None>

# Swift service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_swift_password = <None>

# Mistral service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_mistral_password = <None>

# Rabbitmq cookie. If left unset, one will be automatically generated.
# (string value)
#undercloud_rabbit_cookie = <None>

# Rabbitmq password. If left unset, one will be automatically
# generated. (string value)
#undercloud_rabbit_password = <None>

# Rabbitmq username. If left unset, one will be automatically
# generated. (string value)
#undercloud_rabbit_username = <None>

# Heat stack domain admin password. If left unset, one will be
# automatically generated. (string value)
#undercloud_heat_stack_domain_admin_password = <None>

# Swift hash suffix. If left unset, one will be automatically
# generated. (string value)
#undercloud_swift_hash_suffix = <None>

# HAProxy stats password. If left unset, one will be automatically
# generated. (string value)
#undercloud_haproxy_stats_password = <None>

# Zaqar password. If left unset, one will be automatically generated.
# (string value)
#undercloud_zaqar_password = <None>

# Horizon secret key. If left unset, one will be automatically
# generated. (string value)
#undercloud_horizon_secret_key = <None>

# Cinder service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_cinder_password = <None>

# Novajoin vendordata plugin service password. If left unset, one will
# be automatically generated. (string value)
#undercloud_novajoin_password = <None>

[ctlplane-subnet]

#
# From instack-undercloud
#

# Network CIDR for the Neutron-managed subnet for Overcloud instances.
# (string value)
# Deprecated group/name - [DEFAULT]/network_cidr
cidr = 192.168.120.0/24

# Start of DHCP allocation range for PXE and DHCP of Overcloud
# instances on this network. (string value)
# Deprecated group/name - [DEFAULT]/dhcp_start
dhcp_start = 192.168.120.121

# End of DHCP allocation range for PXE and DHCP of Overcloud instances
# on this network. (string value)
# Deprecated group/name - [DEFAULT]/dhcp_end
dhcp_end = 192.168.120.250

# Temporary IP range that will be given to nodes on this network
# during the inspection process. Should not overlap with the range
# defined by dhcp_start and dhcp_end, but should be in the same ip
# subnet. (string value)
# Deprecated group/name - [DEFAULT]/inspection_iprange
inspection_iprange = 192.168.120.21,192.168.120.120

# Network gateway for the Neutron-managed network for Overcloud
# instances on this network. (string value)
# Deprecated group/name - [DEFAULT]/network_gateway
gateway = 192.168.120.13

# The network will be masqueraded for external access. (boolean value)
#masquerade = false
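
This undercloud.conf is placed in the deployment user's home directory on the Director node before installing the undercloud. A minimal sketch of the standard installation and verification steps (prerequisite package installation and subscription steps from the deployment guide are assumed to be complete):

openstack undercloud install
source ~/stackrc
openstack catalog list    # the undercloud service endpoints should be listed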


Appendix C

References

Topics:
• To learn more

Additional information can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-architecture-for-red-hat-openstackplatform?lang=en

Note: If you need additional services or implementation help, please contact your Dell EMC representative.


To learn more
Additional information on the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat
OpenStack Platform can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-architecture-for-red-hat-openstackplatform?lang=en, or by emailing [email protected].
Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved. Trademarks and trade names may be used in this
document to refer to either the entities claiming the marks and names or their products. Specifications are correct at
date of publication but are subject to availability or change without notice at any time. Dell EMC and its affiliates
cannot be responsible for errors or omissions in typography or photography. Dell EMC’s Terms and Conditions of
Sales and Service apply and are available on request. Dell EMC service offerings do not affect consumer’s statutory
rights.
Dell EMC, the DELL EMC logo, the DELL EMC badge, and PowerEdge are trademarks of Dell EMC.
