Technical white paper | Red Hat Enterprise Linux 7 OpenStack Platform 5 on HP ConvergedSystem 700x
Executive summary
This paper provides information about an HP lab implementation of Red Hat Enterprise Linux (RHEL) OpenStack Platform
5.0 on HP ConvergedSystem 700x.
OpenStack makes offering enterprise Infrastructure as a Service (IaaS) Private Cloud a reality. RHEL OpenStack Platform
makes implementing and managing OpenStack easier but does not specify hardware deployment or optimization. This
white paper includes specific recommendations and best practices for deploying a small but scalable OpenStack cloud on an
HP ConvergedSystem 700x system.
HP ConvergedSystem 700x is part of a family of solutions offering simplified, efficient, and reliable application deployment
platforms. This solution is built on HP Converged Infrastructure, with integrated and optimized models for RHEL and Red Hat
Enterprise Virtualization (RHEV) virtualized workloads. Based on a modular design, ConvergedSystem 700x provides options
for components and services to meet a broad set of requirements, deliver seamless scalability, and provide an open on-ramp to the cloud.
Target audience: This document is intended for data center administrators, managers, and staff wishing to learn more
about Red Hat OpenStack Platform on ConvergedSystem 700x deployment. A working knowledge of Linux, OpenStack,
DHCP, VLANs, iptables, HP Virtual Connect, iLO and virtualization is recommended.
Document purpose: The purpose of this document is to describe our lab environment and offer ideas on how you can
streamline and optimize your deployment.
This white paper describes a test deployment performed in July 2014.
Introduction
About OpenStack
OpenStack is an open source platform that lets you build an Infrastructure as a Service (IaaS) cloud that runs on commodity
hardware. OpenStack is designed for scalability so you can easily add new compute and storage resources to grow your
cloud over time. Large organizations such as HP have built massive public clouds on top of OpenStack.
OpenStack is more than a standard software package; it lets you integrate a number of different technologies to construct a
cloud. Although the number of options to do this may appear daunting at first, the OpenStack approach provides the
greatest amount of flexibility to the users.
The Red Hat Enterprise Linux OpenStack Platform IaaS cloud is implemented by a collection of interacting services that
control its computing, storage, and networking resources. The cloud is managed using a web-based interface that allows
administrators to control, provision, and automate OpenStack resources. Additionally, the OpenStack infrastructure is
facilitated through an extensive API, which is also available to end users of the cloud.
ConvergedSystem 700x provides standardized building blocks of server, storage, networking, rack and power, and HP
innovation. At its core, ConvergedSystem 700x includes:
HP ProLiant BL460c Gen8 servers in an HP BladeSystem c7000 enclosure with HP Virtual Connect FlexFabric
interconnects for the simplest, most cost-efficient virtualization platform (requiring 95 percent fewer cables, NICs and
switches than the competition).
HP 3PAR StoreServ 7000 or 10000 series, for efficient, flexible and easy-to-manage storage with non-disruptive scaling
lifecycle support.
Overview
This white paper has been created to provide guidance in the deployment of RHEL OpenStack Platform 5.0 on the HP
ConvergedSystem 700x.
The ConvergedSystem 700x has been chosen, and we describe the steps necessary to successfully install RHEL OpenStack
Platform 5.0 on this hardware, providing a small private cloud that may be scaled up by adding compute nodes.
This document presents an architectural view of a RHEL OpenStack Platform private cloud and describes this as
implemented on an HP ConvergedSystem 700x. This document has been written as a companion to the RHEL OpenStack
Platform and OpenStack.org documentation for a dual purpose.
1. To examine best practices, deployment, and integration excellence with:
Ensured business continuity through ease of deployment and consistent high availability
Comprehensive strategies for backup, disaster recovery, and security
Greater storage versatility and value
Superior networking innovation
End-to-end support ownership
2. To examine how to lower costs and provide greater investment protection with:
Greater efficiencies from a solution architecture of HP ProLiant servers, HP 3PAR StoreServ arrays, HP FlexNetwork
Intended audience
To be successful with this guide it is expected that:
You are familiar with the Red Hat distribution of Linux, OpenStack and virtualization.
You are comfortable administering and configuring multiple Linux machines for networking.
You are familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables.
You have access to configure HP Virtual Connect, switches and routers.
You are comfortable installing and maintaining a MySQL database, and occasionally running SQL queries against it.
Helpful information
OpenStack Foundation documentation is available at http://docs.OpenStack.org. The OpenStack Operations Guide
provides invaluable insights and guidance to consider as you design and create your RHEL OpenStack Platform cloud. You
can also find information on installation, configuration, training, user guides and even how to develop applications and
contribute code.
Additional documentation for the Red Hat Enterprise Linux OpenStack Platform in the Red Hat customer portal is available
at: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform.
The following documents are included:
Administration user guide
How-to procedures for administering Red Hat Enterprise Linux OpenStack Platform environments
Configuration reference guide
Configuration options and sample configuration files for each OpenStack component
End user guide
How-to procedures for using Red Hat Enterprise Linux OpenStack Platform environments
Getting started guide
Packstack deployment procedures for a Red Hat Enterprise Linux OpenStack Platform cloud, as well as brief instructions
for getting your cloud up and running
Installation and configuration guide
Deployment procedures for a Red Hat Enterprise Linux OpenStack Platform cloud; procedures for both a manual and a
Foreman installation are included. Also included are brief procedures for validating and monitoring the installation.
Release notes
Information about the current release, including notes about technology previews, recommended practices, and known
issues
Technical notes
These Technical Notes are provided to supplement the information contained in the text of Red Hat Enterprise Linux
OpenStack Platform errata advisories released through Red Hat Network
Please download the OpenStack HP 3PAR StoreServ Block Storage Drivers Configuration Best Practices document,
available at http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA5-1930ENW as we will reference this
document later in the deployment.
Other documentation related to configuring your HP servers will be referenced when required.
Components
OpenStack architecture
OpenStack is designed to be massively horizontally scalable, which allows all services to be distributed widely. However, to
simplify this guide we have decided to discuss services of a more central nature using the concept of a single cloud
controller. As described in this guide, the cloud controller is a single node that hosts the databases, message queue service,
authentication and authorization service, image management service, and externally accessible API endpoints for
OpenStack services.
Cloud controller
The cloud controller provides the central management system for multi-node OpenStack deployments. Typically, the cloud
controller manages authentication and sends messages to all the systems through a message queue. For our example, the
cloud controller has a collection of nova-* components that represent the global state of the cloud, talk to services such as
authentication, maintain information about the cloud in a database, communicate with all compute nodes and storage
workers through a queue, and provide API access. Each service running on a designated cloud controller may be broken out
into separate nodes for scalability or availability. It's also possible to use virtual machines for all or some of the services that
the cloud controller manages, such as the message queuing.
In this reference architecture we used a single cloud controller server to host the OpenStack management services. By doing
this we are trading off fault tolerance for simplicity. It's possible to configure a fully redundant and highly available cloud
controller configuration by replicating services and clustering the database storage and message queue capability. We have
chosen an implementation that runs all services directly on the cloud controller. This provides a simple and scalable
configuration that works well for small to medium size clouds.
Database
Most OpenStack Compute central services, and currently also the nova-compute nodes, use the database for stateful
information. Loss of database availability leads to errors. As a result, in a production deployment you should consider
clustering your databases in some way to make them failure tolerant. The reference architecture explained in this white
paper does not implement a clustered database configuration.
Message queue
Most OpenStack Compute services communicate with each other using the Message Queue. In general, if the message
queue fails or becomes inaccessible, the cluster grinds to a halt and ends up in a read-only state, with information stuck at
the point where the last message was sent. In a large production OpenStack environment it is recommended that you
cluster the message queue; RabbitMQ has built-in abilities to do this. However, implementation of a clustered message
queue is beyond the scope of this white paper.
Scheduler
Fitting various sized virtual machines (different flavors) into different sized physical nova-compute nodes is a challenging
problem. To support your scheduling choices, OpenStack Compute provides several different types of scheduling drivers, a
full discussion of which is found in the reference manual (http://docs.openstack.org/trunk/openstack-ops/content/cloud_controller_design.html#scheduling). The reference architecture uses the default libvirt-based scheduler
with Kernel-based Virtual Machine (KVM) for virtualization.
For availability purposes, or for very large or high-schedule frequency installations, you should consider running multiple
nova-scheduler services. No special load balancing is required, as the nova-scheduler communicates entirely using the
message queue.
Images
The OpenStack Image Service consists of two parts: glance-api and glance-registry. The former is responsible for the
delivery of images; the compute node uses it to download images from the back-end. The latter maintains the metadata
information associated with virtual machine images and requires a database.
The glance-api part is an abstraction layer that allows a choice of back-end. Currently, it supports:
OpenStack Object Storage: Allows you to store images as objects.
File system: Uses any traditional file system to store the images as files.
S3: Allows you to fetch images from Amazon S3.
HTTP: Allows you to fetch images from a web server. You cannot write images by using this mode.
This reference architecture uses HP 3PAR to provide a file system to store images. You can make use of advanced HP 3PAR
features for thin provisioning and replication for this file system.
Dashboard
The OpenStack Dashboard is implemented as a Python web application that runs in the Apache web-server (httpd). It is
accessed using a web browser via the traditional HTTP protocol. Because it uses the service APIs for the other OpenStack
components, it must also be able to reach the API servers (including their admin endpoints) over the network.
Authentication and authorization
The concepts supporting OpenStack authentication and authorization are derived from well understood and widely used
systems of a similar nature. Users have credentials they can use to authenticate, and they can be a member of one or more
groups (known as projects or tenants interchangeably).
For example, a cloud administrator might be able to list all instances in the cloud, whereas a user can only see those in their
current group. Resource quotas, such as the number of cores that can be used, disk space, etc., are associated with a
project.
The OpenStack Identity Service (Keystone) is the point that provides the authentication decisions and user attribute
information, which is then used by the other OpenStack services to perform authorization. Policy is set in the
policy.json file.
The Identity Service supports different plugins for back-end authentication decisions, and storing information. These range
from pure storage choices to external systems, and currently include:
In-memory Key-Value Store
SQL database
PAM
LDAP
Many deployments use the SQL database; however, LDAP is also a popular choice for those with an existing authentication
infrastructure that needs to be integrated. In organizations that have a centralized LDAP server for authentication, using
LDAP allows synchronizing its use with the HP Integrated Lights-Out (iLO) credentials used to access the server's iLO
management controller, so it is a good choice in this case. This reference architecture uses a SQL database for the identity
storage instead of depending on LDAP being present. If LDAP is available, the OpenStack Operations Guide shows how you
can configure LDAP to enable its use with the OpenStack Identity Service.
Network considerations
Because the cloud controller handles so many different services, it must be able to handle the amount of traffic that hits it.
For example, if you choose to host the OpenStack Image Service on the cloud controller, the cloud controller should be
able to support the transferring of the images at an acceptable speed. We recommend that you use a fast NIC, such as
10 GbE. This reference architecture makes use of 10 GbE network connections via HP Virtual Connect FlexFabric modules.
Reference architecture
When implementing a Red Hat Enterprise Linux OpenStack Platform cloud you will need to make many choices that
influence the resulting implementation. For this document we've made some decisions that allow for a small-to-medium
size cloud installation that scales well. In this reference architecture implementation, the following design has been
considered:
One blade server acts as the cloud controller by hosting many services including the dashboard and API services.
Another blade server acts as the network node by hosting OpenStack Networking (neutron) services.
All other blade servers act as compute nodes by hosting nova services.
One rack server acts as a client node.
We have specified a set of compute nodes with a uniform configuration. Adding additional compute capacity is as simple as
adding additional compute nodes. The sections below provide more details on the hardware, software, and procedures used
to configure this reference architecture in the lab.
The supporting hardware in this configuration serves the following purposes:
Storage back-end for the Glance Image service and Cinder Block Storage service (HP 3PAR)
Fibre Channel switches for SAN connectivity between servers and the 3PAR array
Ethernet switches for LAN connectivity
Note
For this reference architecture an additional server installed with Microsoft Windows Server 2008 R2 operating system
was used as a jumpstation. This server was used to download or install any necessary software components, and connect to
iLOs, Virtual Connect Manager and Onboard Administrator. HP 3PAR Management Console was installed on this server to
manage the HP 3PAR used for this reference architecture.
Software requirements
All servers must meet the following software requirements:
Running Red Hat Enterprise Linux 7
Registered to Red Hat Network (RHN) or the Red Hat Content Delivery Network (CDN)
Subscribed to the following repositories:
Red Hat Enterprise Linux 7
Red Hat Enterprise Linux OpenStack Platform 5.0
OpenStack services
The image below depicts the RHEL OpenStack Platform services and their interactions with each other.
Figure 3. OpenStack services
OpenStack Compute is composed of many services that work together to provide the full functionality. The
openstack-nova-cert and openstack-nova-consoleauth services handle authorization. The openstack-nova-api responds to service
requests and the openstack-nova-scheduler dispatches the requests to the message queue. The openstack-nova-conductor
service updates the state database, which limits direct access to the state database by compute nodes for
increased security. The openstack-nova-compute service creates and terminates virtual machine instances on the compute
nodes. Finally, openstack-nova-novncproxy provides a VNC proxy for console access to virtual machines via a standard web
browser.
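As a quick sanity check, a minimal sketch of verifying these services on the cloud controller (assuming a systemd-based RHEL 7 node and that admin credentials have been sourced) is:

# Check the local Compute service units
$ systemctl status openstack-nova-api openstack-nova-scheduler openstack-nova-conductor
# List all registered Nova services and their state across the cloud
$ source keystonerc_admin
$ nova service-list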
Cinder Block Storage service
While the OpenStack Compute service provisions ephemeral storage for deployed instances based on their hardware
profiles, the OpenStack Block Storage service provides compute instances with persistent block storage. Block storage is
appropriate for performance sensitive scenarios such as databases or frequently accessed file systems. Persistent block
storage can survive instance termination. It can also be moved between instances like any external storage device. This
service can be backed by a variety of enterprise storage platforms or simple NFS servers. This service's features include:
Persistent block storage devices for compute instances
Self-service volume creation, attachment, and deletion
A unified interface for numerous storage platforms
Volume snapshots
The Block Storage service is comprised of openstack-cinder-api, which responds to service requests, and
openstack-cinder-scheduler, which assigns tasks to the queue. The openstack-cinder-volume service interacts with various storage providers
to allocate block storage for virtual machines. By default the Block Storage server shares local storage via the iSCSI tgtd
daemon.
Neutron Network service
OpenStack Networking is a scalable API-driven service for managing networks and IP addresses. OpenStack Networking
gives users self-service control over their network configurations. Users can define, separate, and join networks on demand.
This allows for flexible network models that can be adapted to fit the requirements of different applications.
OpenStack Networking has a pluggable architecture that supports numerous virtual networking technologies as well as
native Linux networking mechanisms including Open vSwitch and linuxbridge. OpenStack Networking is composed of
several services. The neutron-server exposes the API and responds to user requests. The neutron-l3-agent provides L3
functionality, such as routing, through interaction with the other networking plugins and agents. The neutron-dhcp-agent
provides DHCP to tenant networks. There are also a series of network agents that perform local networking configuration
for the node's virtual machines.
This reference architecture is based on the Open vSwitch plugin, which uses the neutron-openvswitch-agent.
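Once the deployment is up, a hedged way to confirm that the networking agents have registered with neutron-server (assuming admin credentials are available on the node) is:

$ source keystonerc_admin
$ neutron agent-list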
Horizon Dashboard
The OpenStack Dashboard is an extensible web-based application that allows cloud administrators and users to control and
provision compute, storage, and networking resources. Administrators can use the Dashboard to view the state of the cloud,
create users, assign them to tenants, and set resource limits. The OpenStack Dashboard runs as an Apache web server via
the httpd service.
Supporting technologies
This section describes the supporting technologies used to develop this reference architecture beyond the OpenStack
services and core operating system. Supporting technologies include:
MySQL
A state database resides at the heart of an OpenStack deployment. This SQL database stores most of the build-time and
run-time state information for the cloud infrastructure including available instance types, networks, and the state of running
instances in the compute fabric. Although OpenStack theoretically supports any SQLAlchemy-compliant database, Red Hat
Enterprise Linux OpenStack Platform uses MySQL, a widely used open source database packaged with Red Hat Enterprise
Linux.
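As an illustrative check (a sketch using the MySQL admin credentials that Packstack sets in its answer file), you can list the OpenStack databases on the controller:

# Lists the per-service databases (nova, glance, cinder, keystone, neutron, ...)
$ mysql -u root -p -e 'SHOW DATABASES;'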
RabbitMQ
RabbitMQ is open source message broker software that implements the Advanced Message Queuing Protocol (AMQP).
AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, either RabbitMQ or Qpid, sits between
any two OpenStack components and allows them to communicate in a loosely coupled fashion. Red Hat Enterprise Linux
OpenStack Platform 5 uses RabbitMQ as the default open source enterprise messaging broker.
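A minimal health check for the broker on the controller (a sketch, assuming the default RabbitMQ installation done by Packstack) is:

# Broker status and a glance at the OpenStack queues
$ rabbitmqctl status
$ rabbitmqctl list_queues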
KVM
Kernel-based Virtual Machine (KVM) is a full virtualization solution for Linux on x86 and x86_64 hardware containing
virtualization extensions for both Intel and AMD processors. It consists of a loadable kernel module that provides the core
virtualization infrastructure. Red Hat Enterprise Linux OpenStack Platform Compute uses KVM as its underlying hypervisor
to launch and control virtual machine instances.
Packstack
Packstack is a Red Hat Enterprise Linux OpenStack Platform 5 installer utility. Packstack uses Puppet modules to install
OpenStack packages via SSH. Puppet modules ensure OpenStack can be installed and expanded in a consistent and
repeatable manner. This reference architecture uses Packstack for a multi-server deployment. Through the course of this
reference architecture, the initial Packstack installation is modified with OpenStack Network and Storage service
enhancements.
Open vSwitch
Open vSwitch is a production-quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is
designed to enable massive network automation through programmatic extension, while still supporting standard
management interfaces and protocols. In addition, it is designed to support distribution across multiple physical servers.
Red Hat Enterprise Linux OpenStack Platform 5 provides an Open vSwitch plugin for Neutron that provides next-generation
software networking infrastructure for both public and private clouds.
Deployment model
Network topology
Figure 5 shows the network topology used for this reference architecture.
Figure 5. Network topology
All servers are connected over the Lab Network switch 10.64.80.0/20. This network is used for client requests to the API
servers as well as service communication between the OpenStack services.
The network node and compute nodes are connected via a 10 GbE network on the Data network. This network carries the
communication between virtual machines in the cloud and also carries all communications between the software-defined
networking components. In this specific reference architecture, it is a switch configured to trunk a range of VLAN tags
between the compute and network nodes.
The controller and compute nodes are connected to HP 3PAR via a storage area network. HP 3PAR provides the back-end
storage for the image service (glance) as well as persistent storage for the VMs via block storage service (cinder).
OpenStack Service placement
The table below shows the final service placement for all OpenStack services. The API-listener services (including
neutron-server) run on the cloud controller in order to field client requests. The Network node runs all other Network services except
for those necessary for Nova client operations, which also run on the Compute nodes.
Table 2. OpenStack final service placement

Hostname: controller
Role: Cloud Controller
Services: openstack-cinder-api, openstack-cinder-scheduler, openstack-cinder-volume, openstack-glance-api, openstack-glance-registry, openstack-keystone, openstack-nova-api, openstack-nova-cert, openstack-nova-conductor, openstack-nova-consoleauth, openstack-nova-novncproxy, openstack-nova-scheduler, neutron-server, openstack-ceilometer-alarm-evaluator, openstack-ceilometer-alarm-notifier, openstack-ceilometer-api, openstack-ceilometer-central, openstack-ceilometer-collector, openstack-ceilometer-notification, httpd

Hostname: neutron
Role: Network node
Services: neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent, neutron-openvswitch-agent, neutron-ovs-cleanup

Hostnames: nova1 to nova6
Role: Compute node
Services: neutron-openvswitch-agent, neutron-ovs-cleanup, openstack-ceilometer-compute, openstack-nova-compute

Hostname: cr1-mgmt1 (HP ProLiant DL360p Gen8)
Role: Client
Note
Install the required Python client packages on the Client node if you need to remotely manage OpenStack services via CLI.
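For example (the package names below are an assumption for RHEL OSP 5 channels; adjust to what your repositories provide), the CLI clients can be installed with:

$ yum install python-novaclient python-neutronclient python-glanceclient python-cinderclient python-keystoneclient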
Installation
HP hardware configuration
HP Integrated Lights-Out (iLO)
ProLiant servers provide exceptional remote management capabilities through the HP Integrated Lights-Out (iLO) solution.
Make sure that you connect each system's iLO to your management network. Some key features that you may find helpful
during OpenStack deployment include the Integrated Remote Console (IRC) and remote reset and power control. Console
access via the integrated remote console (IRC) can be especially valuable during remote network configuration and
troubleshooting. For more information about iLO configuration and features you can go to the general iLO web page at
hp.com/go/ilo or visit the support page for your individual server.
Storage configuration for boot disk
All servers in this reference architecture are specified with multiple 300 GB physical drives. Each server is configured with an
HP Smart Array controller, and we will use that to configure the available physical drives into a logical drive with your
preferred RAID configuration. As shown in Figure 6, this logical drive will be used as a boot disk in this implementation.
Figure 6. Smart Array controller configuration
This configuration provides good I/O performance and data protection for the server boot drive, database, message queue
and services on the controller. For the Compute services, the RAID 50 configuration is a benefit because nova services use
local storage on the boot disk.
Storage connection to blades
Controller and compute nodes need block storage access. The glance service running on the controller node needs storage
space to store images. An HP 3PAR volume must be created and presented to the controller node. Compute nodes which
run VM instances must have a path to HP 3PAR for VMs to access persistent storage.
Virtual Connect Manager is used to configure SAN Fabrics that define storage connections from server blades to HP 3PAR, as
shown in Figure 7.
Figure 7. Virtual Connect SAN Fabric
Table 3 describes the VLANs used for this reference architecture. Define the VLANs listed in Table 3 using the +Add
button in the Associated Networks (VLAN tagged) section as shown in Figure 9.
Table 3. VLANs used in reference architecture for Network Topology

Network: Lab; Name: CR1_E1_IC1_DC_Lab; VLAN: 64
Network: Data; Name: CR1_E1_IC1_Data; VLAN: 120
Network: Tenants; Name: ovs_vlan10xx; VLANs: 1000-1050; Purpose: Data network for tenants. Define a VLAN for every OpenStack tenant.
Next, configure the blade servers to make use of the defined Ethernet and SAN fabric connections. Using Virtual Connect
Manager, define a Server profile as shown in Figure 10. Specify the Lab, Data and Tenant network under the Ethernet
Adapter Connections. For SAN connections, specify SAN fabric under FCoE HBA Connections. Create server profiles for all
blade servers. Do not define SAN fabrics for the blade hosting the network (neutron) services.
Figure 10. Virtual Connect Server Profile
While defining Ethernet connections in a server profile, configure Multiple Networks for the second Ethernet connection. This
connection must be updated for every new tenant VLAN you create. Ensure you create enough VLANs and add them under
the Multiple Networks as shown in Figure 11.
Figure 11. Edit Multiple Networks
Note
Other methods of installation, such as using a PXE server, can also be employed. Ensure a consistent installation on all
servers.
After Red Hat Enterprise Linux 7 installation is complete, configure hostnames and NICs on servers as shown in Table 4.
Configure /etc/hosts or DNS to reflect these settings.
Table 4. Host names and IP addresses

Hostname: controller; Role: Cloud controller (Cinder, Glance & Dashboard); Interfaces: Lab/eno1, Data/eno2; IP: 10.64.80.83
Hostname: neutron; Role: Network (Neutron); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.84
Hostname: nova1; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.85
Hostname: nova2; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.86
Hostname: nova3; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.87
Hostname: nova4; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.88
Hostname: nova5; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.89
Hostname: nova6; Role: Compute (Nova); Interfaces: Lab/eno1, Data/eno2 (VLANs 1000-1050); IP: 10.64.80.90
Hostname: cr1-mgmt1; Role: Client; Interface: Lab/eno1; IP: 10.64.80.81
HP 3PAR; Network: Lab; IP: 10.64.80.237
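If you are not using DNS, a minimal /etc/hosts sketch reflecting Table 4 might look like:

10.64.80.83   controller
10.64.80.84   neutron
10.64.80.85   nova1
10.64.80.86   nova2
10.64.80.87   nova3
10.64.80.88   nova4
10.64.80.89   nova5
10.64.80.90   nova6
10.64.80.81   cr1-mgmt1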
Note
Be sure to enable the corresponding VLAN IDs on all Ethernet switches as necessary. Otherwise, connections to the servers or
to the VM instances deployed using OpenStack will not be available.
Configure the eno1 interface on all nodes to start on boot and use a static IP. The interface configuration file
/etc/sysconfig/network-scripts/ifcfg-eno1 for the controller node is shown below.
DEVICE=eno1
HWADDR=00:17:A4:77:7C:00
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.64.80.83
NETMASK=255.255.240.0
GATEWAY=10.64.80.1
Specifically on the network node (neutron), configure a bridge interface br-ex, which OpenStack will use as the external
network. The br-ex interface is defined in the file /etc/sysconfig/network-scripts/ifcfg-br-ex as shown below.
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.64.80.84
NETMASK=255.255.240.0
GATEWAY=10.64.80.1
The eno1 interface on the network node must be defined as an Open vSwitch port as shown below in the file
/etc/sysconfig/network-scripts/ifcfg-eno1.
DEVICE=eno1
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
NM_CONTROLLED=no
BOOTPROTO=none
OVS_BRIDGE=br-ex
Restart networking after changes:
$ service network restart
Key point
Red Hat documentation suggests disabling NetworkManager and setting NM_CONTROLLED=no. However, it has been observed
that with NetworkManager disabled and NM_CONTROLLED=no, the VM instance IP address can become inaccessible. In
your environment, if VM instances are unreachable, try setting NM_CONTROLLED=yes, restart NetworkManager, and check
whether the VM instances become reachable.
Note
A provider network can also be used instead of the bridge configuration shown above. A provider network maps directly to a
physical network in the data center and is used to give tenants direct access to public networks.
Table 5. Required repositories

Repository Name: rhel-7-server-openstack-5.0-rpms
Repository Name: rhel-7-server-rpms
You can now verify that the above channels are subscribed by examining the output of the yum repolist command.
Table 6 lists the repos that must appear in the output of the command.
Table 6. Repositories for command output

Repo ID: rhel-7-server-openstack-5.0-rpms/7server/x86_64
Repo ID: rhel-7-server-rpms/7server/x86_64
For more details on how to add channels and subscriptions refer to section 2.1.2 in the Red Hat Enterprise Linux OpenStack
Platform 5 Getting Started Guide.
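As a sketch of the subscription steps on each server (the pool ID is hypothetical; find yours with subscription-manager list --available):

$ subscription-manager register
$ subscription-manager attach --pool=<pool_id>
$ subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-openstack-5.0-rpms
$ yum repolist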
Finally, update all servers.
$ yum -y update
Configure multipath
Install, configure and enable multipath on all servers that need connection to storage on HP 3PAR. Use the sample
configuration below, /etc/multipath.conf, as a reference.
devices {
    device {
        vendor "3PARdata"
        product "VV"
        no_path_retry 18
        features "0"
        hardware_handler "0"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        rr_weight uniform
        rr_min_io_rq 1
        path_checker tur
        failback immediate
    }
}
Enable and restart the multipathd service after the configuration is applied to the controller and compute nodes. Reboot
nodes as necessary.
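A hedged sequence to apply this on RHEL 7 (assuming device-mapper-multipath is not yet installed) is:

$ yum install device-mapper-multipath
$ mpathconf --enable
$ systemctl enable multipathd
$ systemctl restart multipathd
# Confirm the 3PAR paths are visible
$ multipath -ll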
Configure HP 3PAR
Create a Domain rhos_d0 on HP 3PAR to host all volumes that are created for use by the Red Hat OpenStack services.
Launch the HP 3PAR Management Console installed on the jumpstation. Navigate to Actions > Security & Domains > Domains >
Create Domain. This will pop up a window to create the domain.
Figure 13. HP 3PAR domain creation
On this window, specify the domain name and, optionally, any comments. Click the Add button below the comments input
box. This will add the domain to the list of new domains. Click OK to confirm and add the new domain.
Figure 14. Create Domain
Next, create a 3PAR common provisioning group (CPG) under the newly created domain and name it cpg_rhos. It is under
this CPG that volumes are provisioned by OpenStack cinder.
Figure 15. Create CPG
Create a virtual volume under the rhos_d0 domain and present it to the cloud controller server. It is on this controller server
that glance services run and are configured to store all images on this newly created virtual volume.
Figure 16. Create Virtual Volume
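If you prefer the HP 3PAR CLI to the Management Console, the equivalent steps look roughly like the following (a sketch; the volume name and size are assumptions, and exact option syntax may vary with your InForm OS release):

# Create the domain and a CPG within it
createdomain rhos_d0
createcpg -domain rhos_d0 cpg_rhos
# Create a virtual volume in the CPG and export it to the controller host
createvv cpg_rhos glance_vv 500g
createvlun glance_vv auto controller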
Installing Packstack
A multi-server Packstack deployment is similar to an all-in-one install, except you may use one or more additional hardware nodes for running virtual machines.
Packstack is provided by the openstack-packstack package. Follow this procedure to install the openstack-packstack
package on the client server.
1. Use the yum command to install Packstack
$ yum install openstack-packstack
2. Verify Packstack is installed
$ which packstack
/usr/bin/packstack
Running Packstack deployment utility
The steps below outline the procedure to run Packstack. Run the following commands on the controller node.
1. Generate packstack answer file.
$ packstack --gen-answer-file=packstack.txt
2. Edit the Packstack answer file to key in your values. Refer to Appendix A for the values that were used for this reference
architecture.
$ vi packstack.txt
3. Run the packstack utility providing the answer file as input.
$ packstack --answer-file=packstack.txt
4. After the run is complete, you should see a success message and no errors displayed. The run may take a few minutes
depending on the number of compute servers to be configured. Observe the progress on the console.
**** Installation completed successfully ******
5. Reboot all servers.
6. Packstack creates a demo tenant and configures a password as provided in the answer file.
7. When the servers come back up, log in to the Horizon dashboard at http://10.64.80.83/dashboard as user demo to verify
the installation.
8. Packstack creates a keystonerc_admin file for the admin user in the home directory of the node where Packstack is run.
Create a new identity for the demo user by copying the keystonerc_admin file to keystonerc_demo. Edit the file to
change the user from admin to demo, and change the password as appropriate. These files are sourced when running OpenStack
commands for authentication purposes. If there is no demo user or an associated tenant, use the commands below to
configure the demo user.
$ source keystonerc_admin
$ keystone tenant-create --name demo-tenant
$ keystone user-create --name demo --pass password
$ keystone role-create --name Member
$ keystone user-role-add --user-id demo --tenant-id demo-tenant --role-id Member
Key point
The Red Hat Enterprise Linux OpenStack Platform 5 Packstack utility is ideal for installing a proof-of-concept OpenStack
deployment. Such installations may not be suitable for production environments. Follow the Red Hat Enterprise Linux
OpenStack Platform 5 Installation and Configuration Guide for a complete manual installation.
Note
You can also run Packstack interactively and provide input on the command line. Use the answer file as a reference and
key in input accordingly.
Configure Glance
Configure Glance to use the virtual volume that was created earlier on HP 3PAR. In this reference architecture the glance
service is hosted on the controller node.
1. Configure a filesystem on the new disk on the controller node.
$ mkfs.ext4 /dev/mapper/mpatha
2. Glance places all images under /var/lib/glance/images. Mount the new disk on the path /var/lib/glance/images. (The
multipath device name, mpatha here, may differ on your system; the mkfs and mount commands must reference the same device.)
$ mount /dev/mapper/mpatha /var/lib/glance/images
3. Log in to https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952 with your Customer Portal
user name and password and download the KVM Guest Image
4. Switch to demo identity
$ source keystonerc_demo
Note
You can use the dashboard UI to upload the image. Log in as the admin or demo user and upload the downloaded image. Add
any additional images that you may need for testing, for example, the CirrOS 0.3.1 image in qcow2 format.
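Alternatively, the image can be uploaded from the CLI; a hedged example (the file name is an assumption based on the downloaded KVM Guest Image):

$ source keystonerc_demo
$ glance image-create --name rhel-guest --disk-format qcow2 --container-format bare --file rhel-guest-image-7.0.x86_64.qcow2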
-HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port
   Enabled        8008      Enabled        8080
Note
For more details on HP 3PAR StoreServ block storage drivers and to configure multiple HP 3PAR storage backends refer to
the OpenStack HP 3PAR StoreServ Block Storage Drivers Configuration Best Practices document available at
http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA5-1930ENW. That guide also covers more advanced
configuration with Volume Types and creating OpenStack cinder type-keys.
The HP3PARFCDriver is based on the Block Storage (Cinder) plug-in architecture. The driver executes the volume operations
by communicating with the HP 3PAR storage system over HTTP/HTTPS and SSH connections. The HTTP/HTTPS
communications use the hp3parclient, a separate Python package available from the Python Package Index (PyPI).
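The full driver setup is covered in the best practices document above; as orientation, a minimal /etc/cinder/cinder.conf sketch for this environment might look like the following (the 3PAR credentials are placeholders, the array IP comes from Table 4, and the WSAPI port matches the 3PAR settings shown earlier):

volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
hp3par_api_url=https://10.64.80.237:8080/api/v1
hp3par_username=<3par_user>
hp3par_password=<3par_password>
hp3par_cpg=cpg_rhos
san_ip=10.64.80.237
san_login=<3par_user>
san_password=<3par_password>

Restart the openstack-cinder-volume service after editing the file.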
Configure security group rules
Security groups control access to VM instances. Define protocol-level access to VM instances using security groups. Navigate
to Manage Compute > Access & Security > Security Groups. Edit the default security group. Click the +Add Rule button
to add new rules into the default security group as shown below. Ensure the SSH and ICMP protocols are configured to allow
traffic from the public and private networks.
Figure 17. Add Rule
Note
For troubleshooting purposes add Custom TCP Rules for both Ingress and Egress directions allowing the port range 1-65535
to CIDR 0.0.0.0/0.
During the Packstack installation all necessary Open vSwitch configurations will be created on the neutron server.
Ensure the following entries are already configured under the OVS section in the
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file.
[OVS]
vxlan_udp_port=4789
network_vlan_ranges=physnet1:1000:1050
tenant_network_type=vlan
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet1:br-eno2
Run the command below to ensure eno1 exists as a port under bridge br-ex.
[root@neutron ~]# ovs-vsctl show
00c91a3f-47a5-439a-b27a-648db5b1e7c0
    Bridge "br-eno2"
        Port "eno2"
            Interface "eno2"
        Port "phy-br-eno2"
            Interface "phy-br-eno2"
        Port "br-eno2"
            Interface "br-eno2"
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eno2"
            Interface "int-br-eno2"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eno1"
            Interface "eno1"
    ovs_version: "1.11.0"
At this point, we are ready to create OpenStack networking elements. The steps below list all commands to run to create
public and private networks, create public_sub and priv_sub subnets, create a virtual router, and create routing between
private and public networks.
1. Switch to admin identity:
[root@neutron ~]# source keystonerc_admin
2. Create a public network:
[root@neutron ~(keystone_admin)]# neutron net-create public --shared --router:external=True
3. Create a subnet under public network:
[root@neutron ~(keystone_admin)]# neutron subnet-create --name public_sub --enable-dhcp=False --allocation-pool start=10.64.80.200,end=10.64.80.250 --gateway=10.64.80.1 public 10.64.80.0/20
4. Switch to demo identity:
[root@neutron ~(keystone_admin)]# source keystonerc_demo
5. Create a private network:
[root@neutron ~(keystone_demo)]# neutron net-create private
6. Create a subnet under private network for VM traffic:
[root@neutron ~(keystone_demo)]# neutron subnet-create --name priv_sub --enable-dhcp=True private 192.168.32.0/24
7. Create a virtual router:
[root@neutron ~(keystone_demo)]# neutron router-create router01
8. Add the private subnet to the router:
[root@neutron ~(keystone_demo)]# neutron router-interface-add router01 priv_sub
9. Switch back to admin identity:
[root@neutron ~(keystone_demo)]# source keystonerc_admin
10. Set the public network as the gateway on the router:
[root@neutron ~(keystone_admin)]# neutron router-gateway-set router01 public
Determine the router network namespace on the network node. In this reference architecture, the network node is
the neutron server.
[root@CR1-Mgmt1 ~(keystone_demo)]# qroute_id=$(ssh neutron ip netns list | grep qrouter)
Ping the external interface of the router within the network namespace on the network node. This proves network
connectivity between the server and the router.
[root@CR1-Mgmt1 ~(keystone_demo)]# ssh neutron ip netns exec $qroute_id ping -c 2 $router_ip
PING 192.168.32.1 (192.168.32.1) 56(84) bytes of data.
64 bytes from 192.168.32.1: icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from 192.168.32.1: icmp_seq=2 ttl=64 time=0.034 ms
--- 192.168.32.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.034/0.049/0.065/0.017 ms
Validation
Launch an instance
At this point, the OpenStack cloud is deployed and should be functioning. Point your browser to the public address of the
OpenStack dashboard node, http://10.64.80.83/dashboard, and log in as user demo.
As a first step, create a public keypair for SSH access to the instances. Navigate to Manage Compute > Access & Security >
Keypairs and click the + Create Keypair button. Key in the keypair name as demokey. Download this keypair file and copy it
to the client server from which instances can be accessed.
Figure 19. Creation of SSH Keypair
Next, navigate to Manage Compute > Instances and click the + Launch Instance button. This will pop up a window as
shown below. Click the Launch button to create an instance of the RHEL 6.5 image that was uploaded earlier.
Figure 20. Launch instance Details tab
Under the Access & Security tab, select the demokey and check the default security group.
Figure 21. Launch instance Access and Security tab
Under the Networking tab, configure the instance to use the private network by selecting and dragging up the private network name.
Figure 22. Launch instance Networking
Once the instance is launched, the power state will be set to Running if there were no errors during instance creation. Wait
for the VM instance to boot completely. Click on the instance name rhelvm1 to view more details. On the same
page navigate to the Console tab to view the VM instance console.
Figure 23. Instance status
Verify routing
Follow the steps below to test network connectivity to the newly created instance from the client server on which you have
copied the demokey keypair.
1. Determine the gateway IP of the router using the command below. The IP 10.64.80.200 is the gateway IP.
[root@CR1-Mgmt1 ~(keystone_demo)]# ssh neutron 'ip netns exec $(ip netns | grep
qrouter) ip a | grep 10.64.80'
inet 10.64.80.200/20 brd 10.64.95.255 scope global qg-e0836894-7e
2. Add a route to the private network on the public network via the router's interface:
[root@CR1-Mgmt1 ~(keystone_demo)]# route add -net 192.168.32.0 netmask 255.255.255.0 gw 10.64.80.200
On the same window, you will now see the newly created floating IP. Click the Associate button under the Actions
column. Select the rhelvm1 port from the dropdown list and click Associate.
Figure 25. Map floating IP
The Instances page will now show the floating IP associated with the rhelvm1 instance.
Figure 26. Instance status with floating IP
Test the connectivity to the floating IP from the same client server.
[root@CR1-Mgmt1 ~]# ssh -i demokey.pem [email protected] uptime
04:31:47 up 6 min, 0 users, load average: 0.00, 0.00, 0.00
Create multiple instances to test the setup. After multiple instances are launched, the network topology will look as shown
below.
Figure 27. Network topology
Volume management
Volumes are block devices that can be attached to instances. The HP 3PAR drivers for OpenStack cinder execute the volume
operations by communicating with the HP 3PAR storage system over HTTP/HTTPS and SSH connections. Volumes are
carved out from HP 3PAR StoreServ and presented to the instances. Use the dashboard to create and attach the volumes to
the instances.
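If you prefer the CLI to the dashboard for the steps below, a hedged equivalent (the 10 GB volume size is an assumption) is:

$ source keystonerc_demo
$ cinder create --display-name data_vol 10
# Attach the volume to the running instance; the device name is chosen automatically
$ nova volume-attach rhelvm1 <volume_id> auto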
1. Log in to the dashboard as the demo user. Navigate to Manage Compute > Volumes and click the + Create Volume button.
Key in the volume name and required size. Click the Create Volume button.
Figure 28. Create new volume
2. Verify the creation on HP 3PAR Management Console. Note that there are no Hosts mappings shown in the lower part of
the figure below.
Figure 29. 3PAR Virtual Volumes display
3. From the dashboard, click Edit Attachments for the newly created volume data_vol. This will pop up a
Manage Volume Attachments page to configure the instance to which this volume must be attached. Choose the
rhelvm1 instance that was created earlier and click the Attach Volume button at the bottom. Once attached, you can
see the status on the dashboard.
Figure 30. Volumes status
4. Verify on HP 3PAR Management Console. You should now see the Hosts mappings populated. The volume will be
presented to the compute node that hosts the rhelvm1 instance.
Figure 31. Volume Mapping to Host
5. Verify from within the instance. Log in to the VM instance and run the fdisk command as shown below. The disk /dev/vdb
is the newly attached volume.
[root@CR1-Mgmt1 ~(keystone_demo)]# ssh -i demokey.pem [email protected]
[cloud-user@rhelvm1 ~]$ sudo fdisk -l
Disk /dev/vda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000397ec
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           1        1959    15728640   83  Linux
Create a mountpoint:
[cloud-user@rhelvm1 ~]$ sudo mkdir /DATA
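To use the attached volume inside the guest, a minimal sketch (assuming /dev/vdb is the new disk, as shown by fdisk above) is:

[cloud-user@rhelvm1 ~]$ sudo mkfs.ext4 /dev/vdb
[cloud-user@rhelvm1 ~]$ sudo mount /dev/vdb /DATA
[cloud-user@rhelvm1 ~]$ df -h /DATA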
Bill of materials
Note
Part numbers are at time of publication and subject to change. The bill of materials does not include complete support
options or other rack and power requirements. If you have questions regarding ordering, please consult with your HP
Reseller or HP Sales Representative for more details. hp.com/large/contact/enterprise/index.html
Part number: 727178-B21; Description: HP ConvergedSystem 700x
Implementing a proof-of-concept
As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test
environment that matches as closely as possible the planned production environment. In this way, appropriate performance
and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HP Services representative
(hp.com/large/contact/enterprise/index.html) or your HP partner.
Summary
After understanding and working through the steps we've described, you should have a working small cloud that is scalable
through the addition of compute and network nodes. OpenStack is a complex suite of software and may be configured in
many different ways. This reference architecture should provide a baseline for implementation and can serve as a functional
environment for many workloads. We recommend the excellent documentation on the OpenStack website if you want to
learn more about the individual components and architectural choices available to you when setting up and running
OpenStack.
The HP ConvergedSystem 700x is an excellent platform for implementation of OpenStack. It provides powerful, dense
compute and storage capabilities for this reference architecture; and the iLO management capability is indispensable in
managing a small cluster of this kind.
Enjoy your OpenStack Cloud!
Appendix A: Packstack answer file
CONFIG_DEBUG_MODE=n
# The IP address of the server on which to install OpenStack services
# specific to controller role such as API servers, Horizon, etc.
CONFIG_CONTROLLER_HOST=10.64.80.83
# The list of IP addresses of the server on which to install the Nova
# compute service
CONFIG_COMPUTE_HOSTS=10.64.80.85,10.64.80.86,10.64.80.87,10.64.80.88,10.64.80.89,10.64.80.90
# The list of IP addresses of the server on which to install the
# network service such as Nova network or Neutron
CONFIG_NETWORK_HOSTS=10.64.80.84
# Set to 'y' if you want to use VMware vCenter as hypervisor and
# storage. Otherwise set to 'n'.
CONFIG_VMWARE_BACKEND=n
# The IP address of the VMware vCenter server
CONFIG_VCENTER_HOST=
# The username to authenticate to VMware vCenter server
CONFIG_VCENTER_USER=
# The password to authenticate to VMware vCenter server
CONFIG_VCENTER_PASSWORD=
# The name of the vCenter cluster
CONFIG_VCENTER_CLUSTER_NAME=
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=n
# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_PW
CONFIG_RH_USER=
# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_USER
CONFIG_RH_PW=
# To enable RHEL optional repos use value "y"
CONFIG_RH_OPTIONAL=y
# To subscribe each server with RHN Satellite, fill Satellite's URL
# here. Note that either satellite's username/password or activation
# key has to be provided
CONFIG_SATELLITE_URL=
# Username to access RHN Satellite
CONFIG_SATELLITE_USER=
# Password to access RHN Satellite
CONFIG_SATELLITE_PW=
# Activation key for subscription to RHN Satellite
CONFIG_SATELLITE_AKEY=
# Specify a path or URL to a SSL CA certificate to use
CONFIG_SATELLITE_CACERT=
# If required specify the profile name that should be used as an
# identifier for the system in RHN Satellite
CONFIG_SATELLITE_PROFILE=
# Comma separated list of flags passed to rhnreg_ks. Valid flags are:
# novirtinfo, norhnsd, nopackages
CONFIG_SATELLITE_FLAGS=
# Specify a HTTP proxy to use with RHN Satellite
CONFIG_SATELLITE_PROXY=
# Specify a username to use with an authenticated HTTP proxy
CONFIG_SATELLITE_PROXY_USER=
# Specify a password to use with an authenticated HTTP proxy.
CONFIG_SATELLITE_PROXY_PW=
# Set the AMQP service backend. Allowed values are: qpid, rabbitmq
CONFIG_AMQP_BACKEND=rabbitmq
# The IP address of the server on which to install the AMQP service
CONFIG_AMQP_HOST=10.64.80.83
# Enable SSL for the AMQP service
CONFIG_AMQP_ENABLE_SSL=n
# Enable Authentication for the AMQP service
CONFIG_AMQP_ENABLE_AUTH=n
# The password for the NSS certificate database of the AMQP service
CONFIG_AMQP_NSS_CERTDB_PW=adc34cdc773c46f2b42b878fcb73d7e7
# The port in which the AMQP service listens to SSL connections
CONFIG_AMQP_SSL_PORT=5671
# The filename of the certificate that the AMQP service is going to
# use
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
# The filename of the private key that the AMQP service is going to
# use
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
# Auto Generates self signed SSL certificate and key
CONFIG_AMQP_SSL_SELF_SIGNED=y
# User for amqp authentication
CONFIG_AMQP_AUTH_USER=amqp_user
# Password for user authentication
CONFIG_AMQP_AUTH_PASSWORD=c989b5f5b2df48bd
# The IP address of the server on which to install MySQL or IP
# address of DB server to use if MySQL installation was not selected
CONFIG_MYSQL_HOST=10.64.80.83
# Username for the MySQL admin user
CONFIG_MYSQL_USER=root
# Password for the MySQL admin user
CONFIG_MYSQL_PW=password
# The password to use for the Keystone to access DB
CONFIG_KEYSTONE_DB_PW=22ff2be708a44cb9
# The token to use for the Keystone service api
CONFIG_KEYSTONE_ADMIN_TOKEN=dbe640130f0e420aa2c0f981f37d696b
# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW=password
# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=password
# Keystone token format. Use either UUID or PKI
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
# The password to use for the Glance to access DB
CONFIG_GLANCE_DB_PW=6fef64ea0c944f27
# The password to use for the Glance to authenticate with Keystone
CONFIG_GLANCE_KS_PW=c8445f4867e140dc
# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW=b8f782ee12654e4a
CONFIG_SWIFT_STORAGE_SIZE=2G
# Whether to provision for demo usage and testing. Note that
# provisioning is only supported for all-in-one installations.
CONFIG_PROVISION_DEMO=n
# Whether to configure tempest for testing
CONFIG_PROVISION_TEMPEST=n
# The name of the Tempest Provisioning user. If you don't provide a
# user name, Tempest will be configured in a standalone mode
CONFIG_PROVISION_TEMPEST_USER=
# The password to use for the Tempest Provisioning user
CONFIG_PROVISION_TEMPEST_USER_PW=5a69af604a13433c
# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=54179705a4eb48b0
# The encryption key to use for authentication info in database
CONFIG_HEAT_AUTH_ENC_KEY=e1d351151d86456e
# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=2a934681a2294947
# Set to 'y' if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
# Set to 'y' if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n
# Name of Keystone domain for Heat
CONFIG_HEAT_DOMAIN=heat
# Name of Keystone domain admin user for Heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
# Password for Keystone domain admin user for Heat
CONFIG_HEAT_DOMAIN_PASSWORD=9136e64a26f24906
# Secret key for signing metering messages
CONFIG_CEILOMETER_SECRET=b4d902a7c2ed4e05
# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=374486a577ce4b83
# The IP address of the server on which to install MongoDB
CONFIG_MONGODB_HOST=10.64.80.83
# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=b9d3a8fbcc504e17
Appendix B: Troubleshooting
1. Check the security group rules assigned to the VM instance. Verify that the rules allow the ICMP and SSH protocols. Enable
all protocols from all networks for troubleshooting purposes.
If the VM IP is unreachable, ping the private gateway IP:
$ ip netns exec qrouter-71e12c86-97d9-4dd7-9765-6cd584385916 ping -c 2 <Gateway IP>
If the gateway IP is also not reachable, verify the VLAN configuration starting from the Virtual Connect server profiles,
Ethernet profiles and switch configurations. Finally, try disabling the firewall with the iptables -F command.
2. Problem: Unable to attach a volume to an instance. The /var/log/cinder/cinder.log shows the error KeyError:
'wwpns'.
Solution: A possible cause is that the sysfsutils and sg3_utils packages are not installed on the compute node. Install these
packages and try to attach the volume again.
Portions of this white paper are used with permission from Red Hat, namely: Deploying and Using Red Hat Enterprise Linux
OpenStack Platform 3 by Jacob Liberman, Principal Software Engineer, and the Red Hat Enterprise Linux OpenStack Platform 5
Getting Started Guide
WARRANTY DISCLAIMER
HP MAKES NO EXPRESS OR IMPLIED WARRANTY OF ANY KIND REGARDING THE SYSTEM AND SOFTWARE DESCRIBED IN THIS
WHITE PAPER, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT. HP SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES,
WHETHER BASED ON CONTRACT, TORT OR ANY OTHER LEGAL THEORY, IN CONNECTION WITH OR ARISING OUT OF THE
FURNISHING, PERFORMANCE OR USE OF THE SYSTEM AND SOFTWARE DESCRIBED IN THIS WHITE PAPER.