Linux Academy
Red Hat Certified Engineer in Red Hat OpenStack
Contents
Introduction
Course Prerequisites
Ceph
Architectural Overview
CRUSH
Ceph Troubleshooting
Glossary
Lab Setup
Linux Bridges
OS Installation
Post Installation
Managing Projects
Managing Users
Managing Roles
Neutron
Floating IPs
Security Groups
LBaaS
Network Namespaces
Nova Compute
Server Flavors
Glance
Ceph Troubleshooting
Starting Over
Introduction
• Set quotas
• Integrate Ceph block devices with OpenStack services such as Glance, Cinder, and Nova
Requirements
• To earn the RHCE in Red Hat OpenStack certification, you must have a current RHCSA in Red Hat
OpenStack certification.
• Red Hat recommends that candidates first earn the Red Hat Certified System Administrator (RHCSA, EX200) certification before attempting the RHCE in Red Hat OpenStack, but it is not required.
• Note: Only certain locations are able to deliver the EX310K individual exam at this time. As of the creation of this course, the following locations were verified by Red Hat as equipped to deliver the EX310K:
• Canberra, Australia
• Lisboa, Portugal
• Paris, France
• Stockholm, Sweden
• Vienna, Austria
Course Prerequisites
Before starting this course, it is recommended that students have current Red Hat Certified System Administrator (EX200) and Red Hat Certified System Administrator in Red Hat OpenStack (EX210) certifications or equivalent experience.
Students should have an intermediate grasp of the following topics before beginning this course.
• RHEL Satellite
• systemd
• OpenStack administration
• CLI
• Keystone
+ User management
+ Project management
• Nova
• Glance
• Neutron
• Swift
• Cinder
• Networking
• VLANs
• Routing
• IP addresses (CIDRs)
• DHCP
Recommended Courses
The following Linux Academy courses are recommended, but not required, before starting this course:
• OpenStack Essentials
Ceph
Introduction to Ceph
Ceph was initially created in 2007 by Sage Weil for his doctoral dissertation, with sponsorships from the US Department of Energy, Oak Ridge National Laboratory, Intel, Microsoft, and others. If you're interested, that dissertation can be found in PDF format online.
After graduating in 2007, Sage continued development and support of Ceph full time, with Yehuda Weinraub and Gregory Farnum as the core development team. In 2012, Weil created Inktank Storage to provide professional services and support for Ceph.
Ceph is a self-healing, self-managing open source storage system that replicates data and makes it fault tolerant using commodity hardware, requiring no specific hardware support. It provides object, file, and block storage under a unified system.
The Ceph client was merged into the Linux kernel by Linus Torvalds on March 19, 2010.
What is Ceph?
Ceph is open source software designed to provide highly scalable object, block, and file-based storage under a unified system.
Architectural Overview
Requirements
• All Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and
the Ceph Storage Cluster.
• A Ceph Storage Cluster requires at least one Ceph Monitor and at least two Ceph OSD Daemons.
The Ceph Metadata Server is essential when running Ceph Filesystem clients.
Ceph Components
• Ceph OSDs: A Ceph OSD Daemon (Ceph OSD) stores data; handles data replication, recovery,
backfilling, and rebalancing; and provides some monitoring information to Ceph Monitors by checking
other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD
Daemons to achieve an active and clean state when the cluster makes two copies of your data (Ceph
makes three copies by default, but you can adjust it).
• Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the
OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an
"epoch") of each state change in the Ceph Monitors, Ceph OSD Daemons, and PGs.
• MDSs: A Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (i.e.,
Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers make it
feasible for POSIX file system users to execute basic commands like ls, find, etc. without placing an
enormous burden on the Ceph Storage Cluster.
OSDs
• 10s to 10000s in a cluster
Monitors
• Maintain cluster membership and state
Keyrings
When cephx is used, authentication is handled with keyrings generated by Ceph for each service.
• ceph.mon.keyring - Used by all Mon nodes to communicate with other Mon nodes
• Ceph is a scalable, high-performance distributed file system designed for reliability and scalability.
• Aims for a completely distributed operation without a single point of failure, is scalable to the
exabyte-level, and is freely available
• Ceph code is completely open source, released under the LGPL license
• Block storage - Physical storage media appear to computers as a series of sequential blocks of a uniform size (the oldest of the three paradigms). Ceph's RADOS Block Device (RBD) allows you to mount Ceph as a thin-provisioned block device. Writes to Ceph using a block device are automatically striped and replicated across the cluster.
• File storage - File systems allow users to organize data stored in blocks using hierarchical folders
and files. The Ceph file system runs on top of the same object storage system that provides object
storage and block device interfaces.
• Object storage - Object stores distribute data algorithmically throughout a cluster of media without a rigid structure (the newest of the three paradigms).
Ceph Concepts
• Monitors - Otherwise known as mons, maintain maps (CRUSH, PG, OSD, etc.) and cluster state.
Monitors use Paxos to establish consensus.
• Storage or OSD node - Provides one or more OSDs. Each OSD represents a disk and has a running
daemon process controlled by systemctl. Ceph has two disk types:
• data
• journal - Enables Ceph to commit small writes quickly and guarantees atomic compound
operations.
• Calamari (optional) - Runs on one of the monitors to gather statistics on the Ceph cluster and provides a REST endpoint. The Red Hat Storage Console (RHSC) talks to Calamari.
• Clients - Each client requires authentication if cephx is enabled. cephx operates much like Kerberos.
• Fuse client
• RADOS GW
• Gateways (optional) - Ceph is based on RADOS. The RADOS gateway is a web server that
provides S3 and Swift endpoints and sends those requests to Ceph via RADOS. Similarly, there is an
iSCSI gateway that provides iSCSI target to clients and talks to Ceph via RADOS.
CRUSH
Ceph Storage Clusters are designed to run on commodity hardware, using the CRUSH (Controlled Replication Under Scalable Hashing) algorithm to ensure that data is distributed evenly across the cluster and that all cluster nodes can retrieve data quickly without any centralized bottlenecks.
CRUSH Algorithm
• Ceph stores a client's data as objects within storage pools. Using the CRUSH algorithm, Ceph
calculates which placement group should contain the object and further calculates which Ceph OSD
Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to
scale, rebalance, and recover dynamically.
• CRUSH distributes data evenly across available object storage devices in what is often described as
a pseudo-random manner.
• Distribution is controlled by a hierarchical cluster map called a CRUSH map. The map, which can
be customized by the storage administrator, informs the cluster about the layout and capacity of nodes
in the storage network and specifies how redundancy should be managed. By allowing cluster nodes
to calculate where a data item has been stored, CRUSH avoids the need to look up data locations in
a central directory. CRUSH also allows for nodes to be added or removed, moving as few objects as
possible while still maintaining balance across the new cluster configuration.
• CRUSH was designed for Ceph, an open source software designed to provide object-, block- and
file-based storage under a unified system. Because CRUSH allows clients to communicate directly
with storage devices without the need for a central index server to manage data object locations, Ceph
clusters can store and retrieve data very quickly and scale up or down quite easily.
CRUSH Maps
Source: Ceph Docs
• The CRUSH algorithm determines how to store and retrieve data by computing data storage
locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a
centralized server or broker.
• With an algorithmically determined method of storing and retrieving data, Ceph avoids a single
point of failure, a performance bottleneck, and a physical limit to its scalability.
• CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and
retrieve data in Object Storage Devices with a uniform distribution of data across the cluster.
• CRUSH maps contain a list of OSDs, a list of 'buckets' for aggregating the devices into physical
locations, and a list of rules that tell CRUSH how it should replicate data in a Ceph cluster's pools.
• By reflecting the underlying physical organization of the installation, CRUSH can model—and
thereby address—potential sources of correlated device failures.
• Typical sources include physical proximity, a shared power source, and a shared network. By
encoding this information into the cluster map, CRUSH placement policies can separate object replicas
across different failure domains while still maintaining the desired distribution. For example, to address
the possibility of concurrent failures, it may be desirable to ensure that data replicas are on devices
using different shelves, racks, power supplies, controllers, and/or physical locations.
• When you create a configuration file and deploy Ceph with ceph-deploy, Ceph generates a
default CRUSH map for your configuration. The default CRUSH map is fine for your Ceph sandbox
environment; however, when you deploy a large-scale data cluster, you should give significant
consideration to developing a custom CRUSH map, because it will help you manage your Ceph cluster,
improve performance, and ensure data safety.
CRUSH Location
• The location of an OSD in terms of the CRUSH map's hierarchy is referred to as a 'crush location'.
This location specifier takes the form of a list of key and value pairs describing a position.
• For example, if an OSD is in a particular row, rack, chassis, and host and is part of the 'default' CRUSH tree, its crush location could be described as root=default row=a rack=a2 chassis=a2a host=a2a1
• The default CRUSH hierarchy types include:
• datacenter
• room
• row
• pod
• pdu
• rack
• chassis
• host
Default CRUSH types can be customized to be anything appropriate by modifying the CRUSH map.
Not all keys need to be specified. For example, by default, Ceph automatically sets a Ceph OSD Daemon's
location to be root=default host=HOSTNAME (based on the output from hostname -s).
CRUSH Rules
Example policy:
rule flat {
ruleset 0
type replicated
min_size 1
max_size 10
step take root
step choose firstn 0 type osd
step emit
}
In the above example, firstn declares how replicas are selected. type has been specified as osd. firstn is set to 0, which means CRUSH chooses as many OSDs as the caller needs (the pool size). The next example sets rules by host:
rule by-host {
ruleset 0
type replicated
min_size 1
max_size 10
step take root
step choose firstn 0 type host
step choose firstn 1 type osd
step emit
}
This example first chooses a set of hosts, using firstn 0. As we learned in the first example, this means the rule chooses as many hosts as the caller needs. The rule then chooses one OSD from each host with firstn 1 type osd.
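Custom rules like these live in the cluster's CRUSH map. A minimal sketch of the usual edit cycle, assuming the commands run on a node with the admin keyring:
ceph osd getcrushmap -o crushmap.bin        # extract the current compiled map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
vim crushmap.txt                            # add or edit rules
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the new map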
Cluster Expansion
• Stable mapping
• Elastic placement
Weighted Devices
• OSDs may be different sizes
• Different capacities
• HDD or SSD
• Standard practice
• Weight = size in TB
Data Imbalance
• CRUSH placement is pseudo-random
Reweighting
• OSDs get data proportional to their weight, unless the OSD has failed
• 0 = device receives no data
• 1 = device is OK
Reweight-by-Utilization
• Find OSDs with highest utilization
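A short, hedged example of the related commands (the OSD id 2 and the 0.85 override are placeholders):
ceph osd reweight 2 0.85            # temporary override weight between 0.0 and 1.0
ceph osd reweight-by-utilization    # automatically lower the weight of the most utilized OSDs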
How It Works
• An object is hashed by name into a placement group (PG), and that PG is mapped to a set of OSDs where the object is replicated (three times by default)
• Placement is a function:
• hash(object name) --> PG ID; CRUSH(PG ID, cluster topology) --> OSD.165, OSD.67
• In the case of a device failure, Ceph avoids the failed device and rebalances itself, moving the
replicated data to another, functional OSD
• CRUSH generates n distinct target devices (OSDs). These may be replicas or erasure coding shards.
• The size of a failure domain depends on the cluster: disk, host (NIC, RAM, power supply), rack (ToR switch, PDU), or row (distribution switch)
• http://ceph.com/pgcalc/
• https://access.redhat.com/labs/cephpgc/
Ceph Troubleshooting
Health Monitoring
It is best practice to check the health of your cluster before performing any reads or writes of data. This can
be done with the command ceph health.
Watching a Cluster
You can watch ongoing events with the command ceph -w, which prints each event and contains the
following information:
• Cluster ID
• The monitor map epoch and the status of the monitor quorum
• The notional amount of data stored and the number of objects stored
Adjust weights:
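The command itself was not included here; a minimal example, assuming osd.2 and a CRUSH weight of 1.0:
ceph osd crush reweight osd.2 1.0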
Glossary
• iSCSI - An extension of the standard SCSI storage interface that allows SCSI commands to be sent
over an IP-based network.
• Linux Bridge - A way to connect two Ethernet segments together in a protocol independent way.
• Ceph Platform - All Ceph software, which includes any piece of code hosted in the Ceph GitHub
repository
• Ceph Cluster Map - The set of maps comprising the monitor map, OSD map, PG map, MDS map, and CRUSH map.
• Cephx - The Ceph authentication protocol. Cephx operates like Kerberos but has no single point of failure.
• Ceph OSD Daemons - The Ceph OSD software which interacts with a logical disk (OSD).
Sometimes "OSD" is used to refer to the "Ceph OSD Daemon", but the proper term is "Ceph OSD".
• Object Storage Device (OSD) - A physical or logical storage unit. Sometimes confused with
Ceph OSD Daemon.
• RADOS (Reliable Atomic Distributed Object Store) Cluster - The core set of storage software
which stores the user's data (Mon+OSD)
• Controlled Replication Under Scalable Hashing (CRUSH) - The algorithm Ceph uses to
compute object storage locations.
• Ruleset - A set of CRUSH data placement rules that applies to one or more pools.
Lab Setup
Linux Bridges
Bridging is the process of transparently connecting two network segments together so that packets can pass between the two as if they were a single logical network. Bridging is performed at the data link layer; hence, it is independent of the network protocol being used, as the bridge operates on raw Ethernet frames. Typically, in a non-bridged situation, a computer with two network cards would be connected to a separate network on each; while the computer itself may or may not route packets between the two, in the IP realm each network interface would have a different address and a different network number. When bridging is used, however, each network segment is effectively part of the same logical network: the two network cards are logically merged into a single bridge device, and devices connected to both network segments are assigned addresses from the same network address range.
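As a hedged illustration (not part of the original lab steps), a bridge can be built by hand with iproute2; br0 and eth0 are placeholder names:
ip link add name br0 type bridge    # create the bridge device
ip link set dev eth0 master br0     # enslave a physical NIC to the bridge
ip link set dev br0 up              # bring the bridge up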
sudo virt-manager
Create a new virtual network under Menu --> Edit --> Connection Details.
Step 1
Step 2
• Network: 192.168.10.0/24
• Enable DHCPv4
• Start: AUTOMATIC
• End: AUTOMATIC
Step 3
Step 4
• Mode: NAT
In the Virtual Machine Manager menu, go to File --> New virtual machine.
Step 1
Step 2
• Use ISO image: Browse to your hard drive to select the downloaded RHEL7.2 ISO
Step 3
• CPUs: 1
Step 4
• Choose storage pool, then click the blue + by Volumes to create a new volume
• Format: qcow2
• Max Capacity: Controller, Compute: 40GiB; Network: 20GiB; Ceph{1,2,3}: 238GiB (30GB root disk, 2 x 100GB ephemeral data disks, 8GB optional swap)
Step 5
OS Installation
• Select regional settings, then continue
• Software:
• System
• Configure:
• Method: Manual
• Addresses:
• Netmask: 24 (probably)
• Begin installation
• While install runs, set root password and create a user with admin privileges
• After install completes, select the Reboot option, then log in through a terminal
subscription-manager register
Update system:
yum -y update
systemctl reboot
Controller
Install openstack-packstack:
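The install command was omitted in the source; assuming the RDO/OSP repositories are already enabled, it is typically:
yum -y install openstack-packstack
Then generate an answer file: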
packstack --gen-answer-file=/root/answers.txt
CONFIG_DEFAULT_PASSWORD=openstack
CONFIG_NETWORK_HOST=$NETWORK_IP
CONFIG_KEYSTONE_ADMIN_TOKEN=linuxacademy123
CONFIG_KEYSTONE_ADMIN_PW=openstack
CONFIG_KEYSTONE_DEMO_PW=openstack
CONFIG_PROVISION_DEMO=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_LBAAS_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_HEAT_CLOUDWATCH_INSTALL=y
CONFIG_HEAT_CFN_INSTALL=y
Populate /etc/hosts:
Network Node
Attach subscription:
subscription-manager register
Update system:
yum -y update
systemctl reboot
Populate /etc/hosts:
Controller Node
Kick off installation:
packstack --answer-file=/root/answers.txt
NOTE: PackStack installations can take upwards of 30 minutes to complete depending on system resources.
• Admin console - This is the node that hosts the UI and CLI used for managing the Ceph cluster.
• Monitors - Monitor health of the Ceph cluster. One or more monitors forms a Paxos Part-Time
Parliament, providing extreme reliability and durability of cluster membership. Monitors maintain
monitor, OSD, placement group (PG), and CRUSH maps and are installed on all nodes.
• OSD - The object storage daemon handles storing data, recovery, backfilling, rebalancing, and replication. OSDs sit on top of a disk or filesystem. BlueStore enables OSDs to bypass the filesystem but is not an option in Ceph 1.3. Installed on all nodes.
NOTE: Verify that SELinux permissions are correct on the folder storing the disk files.
for i in {1,2,3};
do
fallocate -l 100G /var/lib/libvirt/qemu/disk$i.img;
done
for i in {1,2,3};
do
virsh attach-device --config ceph$i ~/disk$i.xml;
done
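The disk$i.xml files referenced above are not shown in the source; a plausible sketch for disk1.xml (raw image attached as vdb over virtio):
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/qemu/disk1.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>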
systemctl reboot
# ceph nodes
$IP1 ceph1 ceph1.$FQDN
$IP2 ceph2 ceph2.$FQDN
$IP3 ceph3 ceph3.$FQDN
firewalld Configuration
Permanently open ports 80, 2003, 4505, 4506, 6789, and 6800-7300:
#!/bin/bash
#
# easy firewalld script for OpenStack Ceph to accompany the
# Linux Academy Red Hat Certified Engineer in Red Hat OpenStack prep course
# Opens ports 80, 2003, 4505, 4506, 6789, 6800-7300
# June 13, 2017
# This script comes with no guarantees.
#
systemctl enable firewalld && systemctl start firewalld
sleep 3
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=2003/tcp --permanent
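# The remaining rules were cut off in the source; assuming they mirror the
# ports listed in the header comment and the later firewalld section:
firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload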
NTP Configuration
Install NTP:
Start NTP:
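The commands were not shown in the source; a minimal sketch, assuming the RHEL 7 ntp package and ntpd service:
yum -y install ntp
systemctl enable ntpd && systemctl start ntpd
Verify synchronization: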
ntpq -p
Create the ceph user and grant passwordless sudo:
useradd ceph
echo openstack | passwd --stdin ceph
cat << EOF > /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
EOF
su - ceph
ssh-keygen
Load ssh keys for Ceph1, Ceph2, and Ceph3 on each node:
for i in {1,2,3};
do
ssh-copy-id ceph@ceph$i;
done
Modify the ~/.ssh/config file of your admin node so that it defaults to logging in as the ceph user across your Ceph environment when no username is specified:
Host ceph1
Hostname ceph1.fqdn-or-ip-address.com
User ceph
Host ceph2
Hostname ceph2.fqdn-or-ip-address.com
User ceph
Host ceph3
Hostname ceph3.fqdn-or-ip-address.com
User ceph
mkdir ~/ceph-config
cd ceph-config
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
[mons]
$IP or hostname
[osds]
$ip or hostname
$IP or hostname
$IP or hostname
[rgws]
$IP or hostname
ssh-copy-id ceph@ceph1
ssh-copy-id ceph@ceph2
ssh-copy-id ceph@ceph3
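The source does not show where ceph-ansible itself comes from; one common approach (an assumption, not confirmed by the original) is to clone the upstream repository:
git clone https://github.com/ceph/ceph-ansible.git ~/ceph-ansible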
cd ~/ceph-ansible/group_vars
cp all.yml.sample all.yml
cp mons.yml.sample mons.yml
cp osds.yml.sample osds.yml
...
ceph_origin: upstream
....
# COMMUNITY VERSION
ceph_stable: true
ceph_mirror: http://download.ceph.com
ceph_stable_key: https://download.ceph.com/keys/release.asc
ceph_stable_release: kraken
ceph_stable_repo: https://download.ceph.com/rpm-kraken/el7/x86_64
...
ceph_stable_redhat_distro: el7
...
cephx_requires_signature: false
...
# CEPH CONFIGURATION #
cephx: true
...
## Monitor Options
monitor_interface: $NETWORK
monitor_address: $SERVER_IP
ip_version: ipv4
...
## OSD options
journal_size: 10240
public_network: $SERVER_IP/24
...
########################
# CEPH OPTIONS
########################
devices:
- /dev/vdb
...
journal_collocation: true
cd ..
Start installation:
ansible-playbook site.yml
Post Installation
Cephx Keyrings
If Cephx is enabled, ceph-ansible generates initial auth keyrings for your user, which can be retrieved from a new directory created under the ceph-ansible/fetch/ directory. If gatherkeys complains about locating ceph.keyring files on ceph1, you can manually relocate the initial keyrings created by ceph-ansible from the fetch directory. Under fetch, there is a directory named with a UUID, and under this are etc/ and var/ directories. The var/ directory contains the missing keyrings. Just manually copy these to the /etc/ceph/ directory.
Install ceph-deploy:
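The install command was omitted; assuming the ceph_stable repository configured earlier provides the package:
sudo yum -y install ceph-deploy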
su - ceph
Move to ceph-config:
cd ceph-config
su - ceph
sudo ceph osd pool create images 64
sudo ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring
vim /etc/ceph/ceph.conf
...
[client.images]
keyring = /etc/ceph/ceph.client.images.keyring
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig
Update /etc/glance/glance-api.conf:
...
[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
Restart Glance:
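The restart command was not shown; on an RDO/Packstack controller the Glance services are typically restarted with:
systemctl restart openstack-glance-api openstack-glance-registry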
Download a test image:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Convert from QCOW2 to RAW. It is recommended that Ceph use RAW format:
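The conversion and upload commands were omitted; a hedged sketch using qemu-img and the unified OpenStack client (the image name cirros is an example):
qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw
openstack image create --disk-format raw --container-format bare \
  --file cirros-0.3.4-x86_64-disk.raw --public cirros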
su - ceph
sudo rbd ls images
$UUID RESPONSE
sudo rbd info images/$UUID
Cinder is the block storage service in OpenStack. Cinder provides an abstraction around block storage and
allows vendors to integrate by providing a driver. In Ceph, each storage pool can be mapped to a different
Cinder backend. This allows for creating storage services such as gold, silver, or bronze. You can decide,
for example, that gold should be fast SSD disks that are replicated three times, while silver only should be
replicated two times and bronze should use slower disks with erasure coding.
Create a file that contains only the authentication key on OpenStack controller(s):
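A minimal sketch, assuming the client.volumes user already exists and its keyring has been copied to the controller:
ceph auth get-key client.volumes | sudo tee /etc/ceph/client.volumes.key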
Edit /etc/ceph/ceph.conf:
...
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring
On the compute node, create a secret in virsh so KVM can access the Ceph pool for Cinder volumes.
Edit /etc/ceph/cinder.xml:
<secret ephemeral='no' private='no'>
  <uuid>ce6d1549-4d63-476b-afb6-88f0b196414f</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
virsh secret-define --file /etc/ceph/cinder.xml
virsh secret-set-value --secret ce6d1549-4d63-476b-afb6-88f0b196414f \
--base64 $(cat /etc/ceph/client.volumes.key)
Edit /etc/cinder/cinder.conf:
...
[DEFAULT]
...
enabled_backends = ceph
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = volumes
rbd_secret_uuid = ce6d1549-4d63-476b-afb6-88f0b196414f # This must match the secret set in /etc/ceph/cinder.xml
Create volume:
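The create command was omitted; for example, a 1GB volume named test-volume:
cinder create --name test-volume 1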
cinder list
List volume using the rbd CLI from the Ceph admin node (node1):
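A hedged example; the Cinder RBD driver names volume images volume-<UUID>:
sudo rbd -p volumes ls
sudo rbd info volumes/volume-$UUID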
Nova is the compute service within OpenStack. By default, Nova stores the virtual disk images associated with running VMs locally on the hypervisor under /var/lib/nova/instances. There are a few drawbacks to using local storage on compute nodes for virtual disk images:
• Images are stored under the root filesystem. Large images can cause the file system to fill up, thus
crashing compute nodes.
• A disk crash on the compute node could cause loss of virtual disk, and as such, a VM recovery
would be impossible.
Ceph is one of the storage backends that can integrate directly with Nova. In this section, we see how to
configure that.
OpenStack Controller
Set permissions of the keyring file to allow access by Nova:
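The exact commands were not shown; a plausible sketch, assuming the keyring is /etc/ceph/ceph.client.nova.keyring and the nova group exists on the node:
sudo chgrp nova /etc/ceph/ceph.client.nova.keyring
sudo chmod 0640 /etc/ceph/ceph.client.nova.keyring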
Edit /etc/ceph/ceph.conf:
...
[client.nova]
keyring = /etc/ceph/ceph.client.nova.keyring
Create a secret in virsh so KVM can access the Ceph pool for Cinder volumes.
Edit /etc/ceph/nova.xml:
<secret ephemeral='no' private='no'>
  <uuid>c89c0a90-9648-49eb-b443-c97adb538f23</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
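Mirroring the Cinder steps above, the secret is then defined and populated; /etc/ceph/client.nova.key is an assumed path for a file holding the client.nova key:
virsh secret-define --file /etc/ceph/nova.xml
virsh secret-set-value --secret c89c0a90-9648-49eb-b443-c97adb538f23 \
  --base64 $(cat /etc/ceph/client.nova.key)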
Run:
cp /etc/nova/nova.conf /etc/nova/nova.conf.orig
Edit /etc/nova/nova.conf:
...
force_raw_images = True
disk_cachemodes = writeback
...
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = c89c0a90-9648-49eb-b443-c97adb538f23
Restart Nova.
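The restart command was not shown; on an RDO compute node this is typically:
systemctl restart openstack-nova-compute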
Launch a new ephemeral instance using the image uploaded during Glance/Ceph integration.
After build completes, list the images in the Ceph VMS pool from the Ceph admin node (node1):
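A minimal example of the listing command:
sudo rbd -p vms ls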
• 4096MB RAM
• 1 vCPU
• Network
• Shadowman
• Subscription
• Repositories
• rhel-7-server-rpms
Repositories
Register with Red Hat Subscription Manager:
subscription-manager register
Verify synchronization:
ntpq -p
Disable SELinux:
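The commands were omitted in the source; a typical sketch (set permissive immediately and persist the change):
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config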
FirewallD
Permanently open ports 80, 2003, 4505-4506, 6789, 6800-7300, 7480:
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=2003/tcp --permanent
firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --zone=public --add-port=7480/tcp --permanent
firewall-cmd --reload
Ceph User
Create Ceph user:
useradd ceph
echo openstack | passwd --stdin ceph
Create /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
SSH Keys
Drop to ceph user:
su - ceph
ssh-keygen
for i in {1..3}; do ssh-copy-id ceph@ceph$i; done
EPEL Repository
Install Extra Packages for Enterprise Linux (EPEL) repository:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.
rpm
rpm -ivh epel-release-latest-7.noarch.rpm
ceph_stable Repository
Manually create /etc/yum.repos.d/ceph_stable.repo:
[ceph_stable]
baseurl = http://download.ceph.com/rpm-kraken/el7/$basearch
gpgcheck = 1
gpgkey = https://download.ceph.com/keys/release.asc
name = Ceph Stable repo
Host ceph4
Hostname ceph4.fqdn
User ceph
ssh-copy-id ceph@ceph4
ceph-deploy --version
Version 1.5.25 of ceph-deploy is available in the Ceph Kraken repository but has a few bugs that will interrupt the build of your Ceph RGW node. This has been resolved in a later release. Before we can install the Ceph RGW on ceph4, we need to update ceph-deploy to v1.5.36 or later using python2-pip.
Install python2-pip:
Uninstall ceph-deploy-1.5.25:
Upgrade ceph-deploy:
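The exact commands were not included; a hedged sketch of the upgrade path described above:
sudo yum -y install python2-pip     # from EPEL
sudo yum -y remove ceph-deploy      # drop the 1.5.25 package
sudo pip install ceph-deploy        # pulls a newer release from PyPI
# The gateway itself is then typically deployed with something like:
# ceph-deploy install --rgw ceph4 && ceph-deploy rgw create ceph4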
Verify installation:
curl http://ceph4:7480
If the gateway is operational, the response should look a bit like this:
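The anonymous S3 response from Civetweb typically looks something like the following (sample only):
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>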
As of Ceph Firefly (v0.80), the Ceph Object Gateway runs on Civetweb (embedded into the ceph-radosgw daemon) instead of Apache and FastCGI. Using Civetweb simplifies the Ceph Object Gateway installation and configuration.
Civetweb runs on port 7480 by default. To change the default port (e.g., to port 80), modify your Ceph configuration file in the working directory of your administration server. Add a section entitled [client.rgw.<hostname>], replacing <hostname> with the short node name of your Ceph Object Gateway node (i.e., the output of hostname -s):
[client.rgw.ceph4]
rgw_frontends = "civetweb port=80"
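The follow-up steps were not shown; the updated configuration is then typically pushed to the gateway node and the radosgw service restarted (the unit name is an assumption):
ceph-deploy --overwrite-conf config push ceph4
ssh ceph4 sudo systemctl restart ceph-radosgw@rgw.ceph4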
Managing Projects
Projects, previously known as tenants or accounts, are organizational units in the cloud comprised of zero
or more users. Projects own specific resources in the OpenStack environment. Projects, users, and roles can
be managed independently of one another, and, during setup, the operator defines at least one project, user,
and role.
Create a project:
Disable/enable a project:
Delete a project:
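The commands for the three operations above were not included; a hedged sketch using the unified OpenStack client (the project name demo-project is an example):
openstack project create --description "Demo project" demo-project
openstack project set --disable demo-project
openstack project set --enable demo-project
openstack project delete demo-project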
Managing Users
List users:
Disable/enable a user:
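A hedged sketch of the user operations above (the user name demo-user is an example):
openstack user list
openstack user set --disable demo-user
openstack user set --enable demo-user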
Managing Roles
A Keystone role is a personality, including a set of rights and privileges, that a user assumes to perform a specific set of operations. Roles permit users to be members of multiple projects. Users are assigned to more than one project by defining a role, then assigning that role to a user/project pair.
OpenStack identity (Keystone) defines a user's role on a project, but it is up to the individual service
(policy) to define what that role means. To change a policy, edit the file /etc/$service/policy.json.
Create role:
Remove a role:
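The role commands were omitted; a hedged sketch covering creation, assignment, and removal (names are examples):
openstack role create ex310k-role
openstack role add --project demo-project --user demo-user ex310k-role
openstack role remove --project demo-project --user demo-user ex310k-role
openstack role delete ex310k-role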
Neutron
Neutron is an OpenStack project to provide "network connectivity as a service" between interface devices
(e.g., vNICs) managed by other OpenStack services (e.g., Nova). The Networking service, code-named
Neutron, provides an API that lets you define network connectivity and addressing in the cloud.
Advanced network services such as FWaaS, LBaaS, and VPNaaS can be inserted either as VMs that route
between networks or as API extensions.
Create new subnet for the ex310k-private network named ex310k-sub on 192.168.1.0/24:
Create subnet:
Create port:
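The neutron commands were not shown; a hedged sketch matching the names above:
neutron net-create ex310k-private
neutron subnet-create --name ex310k-sub ex310k-private 192.168.1.0/24
neutron port-create ex310k-private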
Floating IPs
Create new server:
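The commands were omitted; a hedged sketch of booting a server and attaching a floating IP (the flavor, image, network, and address values are examples):
nova boot --flavor m1.small --image cirros --nic net-id=$NET_ID test-server
neutron floatingip-create public
nova floating-ip-associate test-server 192.168.0.50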
Security Groups
Neutron security groups filter both ingress and egress traffic at the hypervisor level. For ingress traffic (to an instance), only traffic matching security group rules is allowed; when no rule is defined, all ingress traffic is dropped. For egress traffic (from an instance), only traffic matching security group rules is allowed; when no rule is defined, all egress traffic is dropped.
When a new security group is created, rules to allow all egress traffic are automatically added and the
"default" security group is defined for each tenant.
For the default security group, a rule which allows intercommunication among hosts associated with the
default security group is defined by default. As a result, all egress traffic and intercommunication in the
default group are allowed and all ingress from outside of the default group is dropped by default (in the
default security group).
- 47 -
Linux Academy Red Hat Certified Engineer in Red Hat OpenStack Linux Academy
LBaaS
Neutron Load Balancing as a Service (LBaaS) is an advanced service in Neutron that allows proprietary and open source load balancing technologies to drive the actual load balancing of requests, enabling Neutron networking to distribute incoming requests evenly among designated instances. Neutron LBaaS ensures that the workload is shared predictably among instances and enables more effective use of system resources.
• Monitors: Implemented to determine whether pool members are available to handle requests
• Session persistence: Supports routing decisions based on cookies & source IP addresses
• Management: LBaaS is managed using a variety of tool sets. The REST API is available for
programmatic administration and scripting. Users perform administrative management of load balancers
through either the CLI (neutron) or the OpenStack Dashboard.
LBaaS Algorithms
• Round Robin: Rotates requests evenly between multiple instances
• Source IP: Requests from a unique source IP address are consistently directed to the same instance
• Least Connections: Allocates requests to the instance with the least number of active connections
Pre-configuration
Install dependencies for OpenStackCLI and Horizon:
Enable the load balancer panel in /etc/openstack-dashboard/local_settings:
'enable_lb': True
Restart Apache:
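On RHEL the Apache service is httpd:
systemctl restart httpd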
Create LB listener:
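The LBaaS commands were not included; a heavily hedged sketch, assuming the LBaaS v2 extension and the subnet created earlier:
neutron lbaas-loadbalancer-create --name ex310k-lb ex310k-sub
neutron lbaas-listener-create --name ex310k-listener \
  --loadbalancer ex310k-lb --protocol HTTP --protocol-port 80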
ovs-vsctl show
ovs-vsctl list-br
ovs-vsctl emer-reset
Network Namespaces
Linux network namespaces are a kernel feature the networking service uses to support multiple isolated
layer-2 networks with overlapping IP address ranges. The support may be disabled, but it is on by default.
If it is enabled in your environment, your network nodes will run their dhcp-agents and l3-agents in isolated
namespaces. Network interfaces and traffic on those interfaces will not be visible in the default namespace.
ip netns
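For example, to run commands inside the namespaces Neutron creates (named qdhcp-<network-UUID> and qrouter-<router-UUID>; the IDs are placeholders):
ip netns exec qdhcp-$NETWORK_ID ip addr
ip netns exec qrouter-$ROUTER_ID ip route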
Nova Compute
OpenStack Nova is open source software designed to provision and manage large networks of virtual machines, creating a redundant and scalable cloud computing platform and giving you control of an IaaS cloud. Nova provides drivers that interact with the underlying virtualization mechanisms running on your host OS and exposes functionality over a web-based API. Hardware and hypervisor agnostic, Nova currently supports a variety of standard hardware configurations and seven major hypervisors.
Server Flavors
List flavors:
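The command was omitted; with the unified client this is simply:
openstack flavor list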
Glance
OpenStack Compute (Nova) relies on an external image service to store virtual machine images and maintain a catalog of available images. Glance is a core OpenStack service that accepts API requests for disk or server images and metadata definitions from end users or OpenStack Compute components, and provides a service that allows users to upload and discover data assets, including images and metadata definitions, meant to be used with other services. Glance supports several backends, including but not limited to:
• Regular filesystems
• VMware
List images:
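The command was omitted; with the unified client:
openstack image list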
Ceph Troubleshooting
Check Ceph cluster health:
ceph health
ceph -w
ceph status
ceph df
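The quorum/monmap JSON shown below is the kind of output returned by a monitor quorum query; a minimal example of the command:
ceph quorum_status --format json-pretty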
Output:
{
"election_epoch": 6,
"quorum": [
0,
1,
2
],
"quorum_names": [
"ceph1",
"ceph2",
"ceph3"
],
"quorum_leader_name": "ceph1",
"monmap": {
"epoch": 1,
"fsid": "188aff9b-7da5-46f3-8eb8-465e014a472e",
"modified": "0.000000",
"created": "0.000000",
"mons": [
{
"rank": 0,
"name": "ceph1",
"addr": "192.168.0.31:6789\/0"
},
{
"rank": 1,
"name": "ceph2",
"addr": "192.168.0.32:6789\/0"
},
{
"rank": 2,
"name": "ceph3",
"addr": "192.168.0.33:6789\/0"
}
]
}
}
Repair an OSD:
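The repair commands were not included; typical examples (the IDs are placeholders):
ceph osd repair 2       # ask an OSD to repair its inconsistent placement groups
ceph pg repair 1.4f     # repair a specific placement group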
Delete an OSD:
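The removal procedure was not shown; the usual sequence for removing osd.2 (an example ID) is:
ceph osd out 2
systemctl stop ceph-osd@2        # on the node hosting the OSD
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2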
To allow pool deletion, set the following in /etc/ceph/ceph.conf:
[mon]
mon_allow_pool_delete = True
systemctl reboot
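With that option active on the monitors, a pool can then be deleted (testpool is a placeholder; the name must be given twice plus the confirmation flag):
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it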
Starting Over
If you want to redeploy Ceph without needing to reinstall the OS, the ceph-deploy command line client
provides three easy commands to refresh your environment.
Remove keyrings:
ceph-deploy forgetkeys
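forgetkeys removes the locally cached keyrings; the other two commands referred to above are purge and purgedata, which strip packages and data from the target nodes:
ceph-deploy purge ceph1 ceph2 ceph3        # remove Ceph packages
ceph-deploy purgedata ceph1 ceph2 ceph3    # remove /etc/ceph and /var/lib/ceph data
ceph-deploy forgetkeys                     # delete local keyrings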