
LFS252

OpenStack
Administration
Fundamentals
Version 2018-09-11

© Copyright the Linux Foundation 2018. All rights reserved.

The training materials provided or developed by The Linux Foundation in connection with the training services are protected
by copyright and other intellectual property rights.

Open source code incorporated herein may have other copyright holders and is used pursuant to the applicable open source
license.

The training materials are provided for individual use by participants in the form in which they are provided. They may not be
copied, modified, distributed to non-participants or used to provide training to others without the prior written consent of The
Linux Foundation.

No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without express prior
written consent.

Published by:

The Linux Foundation

http://www.linuxfoundation.org

No representations or warranties are made with respect to the contents or use of this material, and any express or implied
warranties of merchantability or fitness for any particular purpose are specifically disclaimed.

Although third-party application software packages may be referenced herein, this is for demonstration purposes only and
shall not constitute an endorsement of any of these software applications.

Linux is a registered trademark of Linus Torvalds. Other trademarks within this course material are the property of their
respective owners.

If there are any questions about proper and fair use of the material herein, please contact:
[email protected]

Contents

1 Introduction
  1.1 Labs

2 Cloud Fundamentals
  2.1 Labs

3 Managing Guest Virtual Machines with OpenStack Compute
  3.1 Labs

4 Components of an OpenStack Cloud
  4.1 Labs

5 Components of a Cloud - Part Two
  5.1 Labs

6 Reference Architecture
  6.1 Labs

7 Deploying Prerequisite Services
  7.1 Labs

8 Deploying Services Overview
  8.1 Labs

9 Advanced Software Defined Networking with Neutron
  9.1 Labs

10 Advanced Software Defined Networking with Neutron - Part Two
  10.1 Labs

11 Distributed Cloud Storage with Ceph
  11.1 Labs

12 OpenStack Object Storage with Swift
  12.1 Labs

13 High Availability in the Cloud
  13.1 Labs

14 Cloud Security with OpenStack
  14.1 Labs

15 Monitoring and Metering
  15.1 Labs

16 Cloud Automation
  16.1 Labs

17 Conclusion
  17.1 Labs

List of Figures

2.1 Katacoda Welcome Page
2.2 Katacoda Second Terminal
2.3 Katacoda Horizon Login
2.4 Browser User Interface (BUI)

3.1 Katacoda Horizon Login
3.2 Project Creation
3.3 Project Quotas
3.4 Adding a user
3.5 Network Tab
3.6 Subnet Tab
3.7 Subnet Details Tab
3.8 View router ID
3.9 Add SSH Ingress
3.10 Deploying a new instance
3.11 Viewing Resources
3.12 Create a Security Group
3.13 Rules for SSH and HTTP

4.1 Katacoda Horizon Login
4.2 Create Snapshot
4.3 Create Volume Type Encryption

8.1 Katacoda Horizon Login

10.1 Katacoda Horizon Login
10.2 Add Interface
10.3 Katacoda Horizon Login
10.4 Connecting Instances

11.1 Katacoda Horizon Login
11.2 Katacoda Horizon Login

12.1 Katacoda Horizon Login

16.1 Katacoda Horizon Login
16.2 Katacoda Horizon Login
16.3 Katacoda Horizon Login

Chapter 1

Introduction

1.1 Labs

Exercise 1.1: Configuring the System for sudo


It is very dangerous to run a root shell unless absolutely necessary: a single typo or other mistake can cause serious (even
fatal) damage.

Thus, the sensible procedure is to configure things such that single commands may be run with superuser privilege, by using
the sudo mechanism. With sudo the user only needs to know their own password and never needs to know the root password.

If you are using a distribution such as Ubuntu, you may not need to do this lab to get sudo configured properly for the course.
However, you should still make sure you understand the procedure.

To check if your system is already configured to let the user account you are using run sudo, just do a simple command like:

$ sudo ls

You should be prompted for your user password and then the command should execute. If instead, you get an error message
you need to execute the following procedure.

Launch a root shell by typing su and then giving the root password, not your user password.

On all recent Linux distributions you should navigate to the /etc/sudoers.d subdirectory and create a file, usually with the
name of the user to whom root wishes to grant sudo access. However, this convention is not actually necessary as sudo will
scan all files in this directory as needed. The file can simply contain:

student ALL=(ALL) ALL

if the user is student.

An older practice (which certainly still works) is to add such a line at the end of the file /etc/sudoers. It is best to do so using
the visudo program, which is careful about making sure you use the right syntax in your edit.
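
For example, from the root shell you could simply run:

# visudo

and append the line shown above at the end of the file. On most distributions visudo opens the vi editor unless the EDITOR variable is set to something else.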

You probably also need to set proper permissions on the file by typing:

$ chmod 440 /etc/sudoers.d/student


(Note some Linux distributions may require 400 instead of 440 for the permissions.)

After you have done these steps, exit the root shell by typing exit and then try to do sudo ls again.

There are many other ways an administrator can configure sudo, including granting only certain permissions to certain
users, limiting search paths, etc. The /etc/sudoers file itself is well documented.
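
As a purely illustrative example (not part of this course's setup), an entry such as the following would allow the student user to run only the apt-get command as root; the exact path may vary by distribution:

student ALL=(root) /usr/bin/apt-get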

However, there is one more setting we highly recommend, even if your system already has sudo configured. Most
distributions establish a different path for finding executables for normal users as compared to root users. In particular, the
directories /sbin and /usr/sbin are not searched, since sudo inherits the PATH of the user rather than that of the root user.

Thus, in this course we would have to be constantly reminding you of the full path to many system administration utilities;
any enhancement to security is probably not worth the extra typing and figuring out which directories these programs are in.
Consequently, we suggest you add the following line to the .bashrc file in your home directory:

PATH=$PATH:/usr/sbin:/sbin

If you log out and then log in again (you don’t have to reboot) this will be fully effective.
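
If you would rather not log out, you can also read the file into the current shell, for example:

$ source ~/.bashrc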

Chapter 2

Cloud Fundamentals

2.1 Labs

Exercise 2.1: Installing DevStack

Overview

All access to lab systems takes place via the Katacoda browser interface. In some labs the deployed cloud will include several
instances. Access to secondary instances will take place through the browser interface using SSH from one virtual instance
to another.

The suggested and tested browser to use is Chrome, although others may work. The course material includes a URL for lab
access. You will use your Linux Foundation login and password to gain access. It may take up to 24 hours after registration
for your email to be added to the lab environment.

Each URL will bring you to an environment which has been pre-configured with the lab steps up to that point. This allows you
to work on a lab again, without having to redo all the steps up to that point.

Some labs will use the Horizon BUI to manage the cloud graphically. The Katacoda page offers a second tab to access the
BUI, named OpenStack Dashboard. The URL can also be found in the /opt/host file on the instance. There will not be a
web page until you have successfully installed OpenStack.
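
For example, from the terminal you can view the file directly; the exact contents will vary per lab environment:

$ cat /opt/host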

Please be sure to use the Shutdown Cluster link when finished with the lab to release the resources. It will ask if you want to
shut down the cluster; answer with y for yes.


Figure 2.1: Katacoda Welcome Page

Should you want a second terminal to test or view real-time output you can select the plus sign +, which will show a drop-down
menu. From that menu choose Open New Terminal.

Figure 2.2: Katacoda Second Terminal

There are two OpenStack deployments in this course, using two distributions. DevStack will be deployed on Ubuntu for the
early labs and RDO will be deployed on CentOS for later labs.

Different lab equipment may be available for each lab, so be sure to begin each lab by choosing the link provided for each
exercise section.

Add a non-root user to the System

The DevStack installer must be run as a non-root user. If a suitable user does not already exist, add a new user and configure
the account to use sudo without being prompted for a password.


1. Check your user ID and whether you can use sudo to become root. If the ubuntu user already exists you won't need to create
it in the following steps. The prompt may be an indication as well.
ubuntu@base-xenial:~$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),....

ubuntu@base-xenial:~$ sudo -i
$ id
uid=0(root) gid=0(root) groups=0(root)
$ exit

2. Add the new ubuntu user, if it does not already exist. If the user exists you’ll receive an error.
$ useradd -m -d /home/ubuntu -s /bin/bash ubuntu
useradd: user ’ubuntu’ already exists

3. Assign a password for the new user, in this case we’ll use LFtrain! as the password. You won’t see the output as you
type the password for security reasons.
$ passwd ubuntu
Enter new UNIX password: LFtrain!
Retype new UNIX password: LFtrain!
passwd: password updated successfully

4. Update the /etc/sudoers file to allow the ubuntu user full sudo access, without requiring a password. There may be a
stack or sudo user listed with the same ability. It may be easiest to copy, paste, and edit that line.
$ vim /etc/sudoers
....
%sudo ALL=(ALL) NOPASSWD:ALL
stack ALL=(ALL) NOPASSWD:ALL
ubuntu ALL=(ALL) NOPASSWD:ALL # Add this line

5. Become the ubuntu user and test sudo usage. You should be able to view the contents of a protected directory without
error. Note that the prompt will change to show the user, node name and current directory.
$ su - ubuntu

ubuntu@openstack:~$ sudo ls -l /root


total 0

6. While the installation script will choose a primary network interface it is good practice to configure the interface and IP
address to use. Begin by finding the IP address of the primary interface. In the example below the IP is 172.17.0.13,
your IP may be different.
ubuntu@openstack:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether 02:42:ac:11:00:0d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.13/16 brd 172.17.255.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:d/64 scope link
valid_lft forever preferred_lft forever


7. Be aware that as these labs could run in a variety of places the specific interface and IP addresses may be different.
The following labs will use a generic prompt. The use of devstack-cc is to indicate the command should be run on
the DevStack cloud controller node. The use of compute-node will indicate the command should be run on an added
worker node instead.


Exercise 2.2: Working with DevStack


DevStack is not typically considered safe for production, but can be useful for testing and learning. It is easy to configure and
reconfigure. While other distributions may be more stable they tend to be difficult to reconfigure, with a fresh installation being
the easiest option. DevStack can be rebuilt in place with just a few commands.
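
For example, a typical rebuild run from the devstack directory might look like the following sketch; the unstack.sh and clean.sh scripts are examined later in this exercise:

$ ./unstack.sh
$ ./clean.sh
$ ./stack.sh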

DevStack is under active development. What you download could be different from a download made just minutes later. While
most updates are benign, there is a chance that a new version could render a system difficult or impossible to use. Never
deploy DevStack on an otherwise production machine.

Install the git Command and DevStack Software

1. Before we can download the software we will need to update the package information and install a version control system
command, git.

ubuntu@devstack-cc:~$ sudo apt-get update


<output_omitted>

ubuntu@devstack-cc:~$ sudo apt-get install git -y


<output_omitted>

2. Now to retrieve the DevStack software:

ubuntu@devstack-cc:~$ pwd
/home/ubuntu
ubuntu@devstack-cc:~$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/pike
Cloning into ’devstack’...
<output_omitted>

3. The newly installed software can be found in a new sub-directory named devstack. Installation is performed by a shell
script called stack.sh. Take a look at the file:

ubuntu@devstack-cc:~$ cd devstack
ubuntu@devstack-cc:~/devstack$ less stack.sh

4. There are several files and scripts to investigate. If you have issues during installation and configuration you can use the
unstack.sh and clean.sh scripts to (usually) return the system to the starting point:

ubuntu@devstack-cc:~/devstack$ less unstack.sh


ubuntu@devstack-cc:~/devstack$ less clean.sh

5. We will need to create a configuration file for the installation script. A sample has been provided to review. Use the
contents of the file to answer the following questions.

ubuntu@devstack-cc:~/devstack$ less samples/local.conf

6. What is the location of script output logs?

7. There are several test and exercise scripts available, found in sub-directories of the same name. A good, general test is
the run_tests.sh script.
Due to the constantly changing nature of DevStack these tests are not always useful or consistent. You can expect
to see errors but be able to use OpenStack without issue. For example missing software should be installed by the
upcoming stack.sh script.
Keep the output of the tests and refer back to it as a place to start troubleshooting if you encounter an issue.

ubuntu@devstack-cc:~/devstack$ ./run_tests.sh
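
One way to keep a copy of the test output for later reference is to pipe it through tee; the output file name here is arbitrary:

ubuntu@devstack-cc:~/devstack$ ./run_tests.sh 2>&1 | tee ~/run_tests.out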


Create a local.conf File

While there are many possible options we will do a simple OpenStack deployment. Create a ~/devstack/local.conf file.
Parameters not found in this file will use default values, ask for input at the command line or generate a random value.

1. OpenStack is written in Python, and as such there may be extra steps required when either project updates. In our
environment we need to install a particular Python package using a Python tool instead of the default apt-installed
package. Begin by removing the OS-packaged version:
ubuntu@openstack:~/devstack$ sudo apt-get remove python-psutil
Reading package lists... Done
Building dependency tree
Reading state information... Done

2. Now install the pip tool.


ubuntu@openstack:~/devstack$ sudo apt-get install python-pip -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
<output_omitted>

3. Use the pip installer to install the psutil package. There may be some warnings in the output having to do with directory
ownership and the version of pip. These warnings can be safely ignored.
ubuntu@openstack:~/devstack$ sudo pip install psutil
<output_omitted>
Downloading https://files.pythonhosted.org/packages/14/a2/8ac7dda36
e03950ec2668ab1b466314403031c83a95c5efc81d2acf163/psutil-5.4.5.tar.gz
100% || 419kB 1.7MB/s
Installing collected packages: psutil
Running setup.py install for psutil ... done
Successfully installed psutil-5.4.5
You are using pip version 8.1.1, however version 10.0.1 is available.
You should consider upgrading via the ’pip install --upgrade pip’ command.

4. We will create a basic configuration file. In our labs we'll use ens3 and its IP address, found in an earlier step, when
you create the following file.
ubuntu@devstack-cc:~devstack$ vim local.conf

[[local|localrc]]
HOST_IP=172.17.0.13
FLAT_INTERFACE=ens3
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret
# Use the following to explore a new project
enable_plugin barbican https://git.openstack.org/openstack/barbican stable/pike

Install and Configure OpenStack

The following command will generate a lot of output to the terminal window. The stack.sh script will run for 20 to 37 minutes.


1. Start the installation script:

ubuntu@devstack-cc:~devstack$ ./stack.sh
<output_omitted>

2. View the directory where various logs have been made. If the logs are not present you may have an issue with the
syntax of the local.conf file:

ubuntu@devstack-cc:~devstack$ ls -l /opt/stack/logs

3. Review the output from the stack.sh script:


ubuntu@devstack-cc:~devstack$ less /opt/stack/logs/stack.sh.log

DevStack runs under a user account. DevStack is not meant to be durable, so there is no longer a rejoin script. If the node
reboots, you must run stack.sh again.
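
For example, after a reboot of the node you would run something like:

ubuntu@devstack-cc:~$ cd ~/devstack
ubuntu@devstack-cc:~/devstack$ ./stack.sh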

Log into the OpenStack Browser User Interface

The Horizon software produces a web page for management. By logging into this Browser User Interface (BUI) we can
configure almost everything in OpenStack. The look and feel may be different from what you see in the book, as the project
and vendors update the interface often.

1. With Katacoda we use a browser interface for command line access as well as HTTP access. You can either
use the second tab on the page, which will open another browser page, or the URL found in /opt/host.

Figure 2.3: Katacoda Horizon Login

2. Log into the BUI with a username of admin and a password of openstack. Using the tabs on the left, navigate to the drop-
down named Project. You will find three other drop-downs, Compute, Volumes and Network. Choose the Compute
drop-down, then the Overview tab. It should look something like the following:


Figure 2.4: Browser User Interface (BUI)

3. Navigate to the Admin -> Compute -> Hypervisors page. Use the Hypervisor and Compute Host sub-tabs to
answer the following questions.
a. How many hypervisors are there?
b. How many VCPUs are used?
c. How many VCPUs total?
d. How many compute hosts are there?
e. What is its state?

4. Navigate to the Admin -> Compute -> Instances page.


a. How many instances are there currently?

5. Navigate to the Identity -> Projects page.


a. How many projects exist currently?

6. Navigate through the other tabs and subtabs to become familiar with the BUI.

Solution 2.2

Install the git Command and DevStack Software

6. Console only unless LOGFILE is set, ours is set to /opt/stack/logs/stack.sh.log


Log into the OpenStack Browser User Interface

3. a. 1
b. 0
c. 2
d. 1
e. up

4. a. 0

5. a. 6

Chapter 3

Managing Guest Virtual Machines with OpenStack Compute

3.1 Labs

Exercise 3.1: Deploying and Managing an Instance

Overview

In a previous exercise you deployed an All-In-One DevStack instance, running on Ubuntu. Use the provided link to begin a
new lab with the previous configurations already completed.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 3.1: Katacoda Horizon Login

In this exercise we will investigate available resources, configure our cloud and deploy a virtual machine. You may see the pop-up
error "Error: Unable to retrieve usage information." This can safely be ignored.

All access to lab systems takes place through a browser type interface. In some labs the deployed cloud will include several
instances. Access to secondary instances will take place through the browser interface using SSH from the command line.

This lab uses DevStack running on Ubuntu. Later labs will use the RDO version of OpenStack running on CentOS.

Logging into the Dashboard

During the OpenStack installation, a configuration file is created with login and environmental information. DevStack creates
a file .localrc.auto with the password information from the stack.sh script and the local.conf information.

1. Use the browser link to connect to the command line terminal of the DevStack instance.

2. Change into the devstack directory and find the password to log into the BUI as the user admin.
ubuntu@devstack-cc:~/devstack$ grep ADMIN_PASSWORD .localrc.auto
ADMIN_PASSWORD=openstack

3. Use the second tab of the Katacoda window and find the Horizon BUI. You can also look inside /opt/host.

4. Log into the BUI with a username of admin and the password output of the previous grep command.

5. There are two drop-downs across the top of the BUI. One says admin, the other alt_demo or demo. What does each
represent?
a. Left drop down:
b. Right drop down:

6. How many projects appear accessible currently?

Create A Project

A project, once known as a tenant, is a collection of resources available to a user or customer. It allows an OpenStack
administrator to delegate resources and the ability to control them.


1. Navigate to the Identity -> Projects page.

2. Select the +Create Project button.

3. Fill out the Project Information tab with these values. Reference the following graphic:
Name: SoftwareTesters
Description: A project for software testers

Figure 3.2: Project Creation

4. Modify the Quotas tab with the following values. Leave the others as default. Reference the following graphic for any
unpopulated fields which require a value (It may look slightly different depending on distribution):

VCPUs: 5
Instances: 5
Floating IPs: 2


Figure 3.3: Project Quotas

5. Once you have completed editing both tabs select Create Project. You should have returned to the Projects page.

6. Find the newly created line for SoftwareTesters. Select the drop-down next to Manage Members and select
Edit Project. Notice you can edit any of the settings you have made.

Add A User

While the admin user is able to manage the infrastructure of OpenStack we will create a user with member privileges for a
project to deploy and manage an instance.

1. Navigate to the Identity -> Users page.

2. Select the +Create User button. Fill it out to match the following graphic, then select the Create User button.


Figure 3.4: Adding a user

3. Select the Users tab on the left and verify the new user is in the list. Use the button in the upper right to sign out as the
user admin and log back in as the user developer1 with the password you set, openstack

4. Using the information at the top of the BUI, what project is developer1 working with?

5. Working through the tabs on the left, what are some differences in the developer1 view?
a.
b.
c.
d.

Deploy a New Network and Router

Before we launch an instance we will create a network and router to attach it to.


1. To create a new network, remaining logged in as developer1, navigate to the


Project -> Network -> Network Topology page. Select the +Create Network button in the upper right.
2. Work across the three tabs of the pop-up using the following graphics. When finished select Create.

Figure 3.5: Network Tab

Figure 3.6: Subnet Tab


Figure 3.7: Subnet Details Tab

3. Now to create a router. The UUID of the network namespace (covered in a later chapter) is derived from the router the
instance network is attached to. We will find this UUID to know which network to use. First we create the router. Select
the +Create Router button in the upper right. Enter net-router as the name, then select Create Router in the lower
right.

4. Use the mouse to hover over the router icon (looks like a small X). Select the blue link: View Router Details.

5. Select the second tab over, for Interfaces then +Add interface. Select the drop-down for the Net1: 10.0.0.64/25
(sub-net1) subnet then the Submit button. When the screen refreshes the Status will show as Down. After a minute if
you reload the page it should show as Active.

6. Change to the Overview tab when the interface has been added and take note of the ID. In the following graphic the ID
begins with 4fd279 and ends with e0d2. Yours will be different.


Figure 3.8: View router ID

7. Return to the command line and view the newly created namespace. Use the ip netns list command. We will learn
more about namespaces in a later chapter. Note the line which begins with qrouter- and has the same ID as the router
we just created. You will use this information to connect to an instance on this network in a future step.

ubuntu@devstack-cc:~/devstack$ ip netns list


qrouter-4fd279c4-b125-4611-956d-adc67432e0d2
qdhcp-ae0db819-c3a6-4680-9f51-b2817060bdd6
qrouter-f8b5a5ee-00fb-4db1-9bbf-347d9983ad58
qdhcp-4482bf67-2827-48df-99d7-2664cd83f76a

Allow SSH Access to Default Security Group

By default only egress is allowed to an instance. We will add ssh ingress to the default group.

1. Use the BUI to navigate to the Project -> Network -> Security Groups page. Select the Manage Rules button on
the default line.

2. Select the Add Rule button. Fill it out as per the following graphic. You will need to select the Rule drop down and scroll
to see the SSH option.


Figure 3.9: Add SSH Ingress
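
For reference, an equivalent SSH ingress rule can also be added from the command line with the openstack client, covered in a later exercise. A minimal sketch, assuming the openrc credentials have been sourced and the default group name resolves to your current project:

ubuntu@devstack-cc:~/devstack$ openstack security group rule create --ingress --protocol tcp --dst-port 22 default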

Log into a New Instance

Now that we have a new network and router, we can deploy a new instance. Once it has fully spawned you will log into the
instance via the new namespace.

1. Navigate to the Project -> Compute -> Instances page.

2. Select the Launch Instance button in the upper right. Fill it out according to the following graphic. You will need to work
with each tab marked with an asterisk. If you receive the error: Error: Host <name> is not mapped to any cell
return to the command line and type: nova-manage cell_v2 discover_hosts. This is a hiccup with Pike.
Note that you must first select Boot from Image in the Instance Boot Source drop-down before you will be able to
select an image to use. Also set the Delete Volume on Instance Delete slider to Yes to avoid running out of space.
The icon to move a resource from Available to Allocated is either an arrow or a plus sign.

3. When you have worked through the tabs and entered the necessary fields select the Launch Instance button. It will
be grayed out until all requirements are met. We will revisit the other tabs in later exercises. The instance should now
be spawning.
Details:
Instance Name: devOS1
Source:
Select Boot Source: Image
Set Delete Volume on Instance Delete to Yes
Add the cirros image using small up arrow, lower right
Flavor:
Flavor: m1.tiny


Figure 3.10: Deploying a new instance

4. At first the new instance will show a Task state of scheduling, Block Device Mapping then Spawning. When that finishes,
usually within a minute, the status should change from Build to Active. Take note of the listed IP Address:

5. The name of the instance is a blue link. Select devOS1 link.

6. Navigate from the Overview tab to Action Log reviewing the information available. The Console tab may not work
because of the nature of the lab environment. If we were local to devstack-cc we would be able to log into the
instance.

7. From the Log tab, does it appear the system booted?

8. If so, what is the listed username and password, shown right above the login: prompt?

9. Return to the command line. Using sudo get a list of network namespaces. Look for a line beginning with qrouter-
containing the UUID of the router we created earlier via the BUI.
ubuntu@devstack-cc:~/devstack$ sudo ip netns list
qrouter-4fd279c4-b125-4611-956d-adc67432e0d2
qdhcp-f7695fb9-577b-45fd-bcc4-75b3dc0d7c74
qrouter-8112815b-45d4-4c7c-8af2-4c06e9e86994
qdhcp-dea04a0d-3deb-419b-8955-9f6d3a2fa5e4

10. Now that we know which namespace to use, again use sudo and ip netns exec to run the ssh command in that
namespace. Use the IP Address for your instance, which may be different than the example below. The command is on
three lines for readability. Once you log into the instance run a few commands and create a file to be used in a future
lab:
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \
qrouter-4fd279c4-b125-4611-956d-adc67432e0d2 \
ssh cirros@10.0.0.74
The authenticity of host ’10.0.0.74 (10.0.0.74)’ can’t be established.
RSA key fingerprint is 27:6b:b3:f0:4e:44:01:70:51:e8:ad:1b:28:31:e0:aa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ’10.0.0.74’ (RSA) to the list of known hosts.
cirros@10.0.0.74's password: cubswin:)
$ uname -a


Linux devos1 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC....


$ uname -a > uname.out
$ ls
uname.out
$ sudo -i
# exit
$ exit
Connection to 10.0.0.74 closed.
ubuntu@devstack-cc:~/devstack$

Congratulations, you just deployed your first instance and logged in!

Viewing Resources

With an instance deployed we should see some usage on various BUI screens.

1. Navigate to the Project -> Compute -> Overview page.

2. Record the following values:


a. Instances used:
b. VCPUs used:
c. RAM Used:
d. Of how much RAM total:

3. Sign out of the BUI and log back in as admin.

4. Navigate to the Admin -> Compute -> Hypervisors page. It should look something like this, but the exact numbers
don't matter as much as the difference between this admin view and the project view we will see in a few steps:

Figure 3.11: Viewing Resources

5. Review the VCPU, Memory and other reported usage.

6. a. What can we know about the difference between an admin view and a project view?
b. How many VCPUs do you have remaining?

7. Sign out of the BUI as admin and back in as developer1. We will deploy two more instances and view resource usage.
Note that the Hypervisor Summary as admin indicates we are currently using one of four VCPUs. The developer1
view shows the quota totals not the actual resources, even if the quota is much larger than the actual resources.

8. After logging in, navigate to the Project -> Compute -> Instances page.


9. Select the Launch Instance button and deploy two instances. Work down the tabs on the left filling in the necessary
information. Select the Launch Instance button in the lower right once the fields have been updated.
Details:
Instance Name: devOS2
Count: 2
Source:
Select Boot Source: Image
Set Delete Volume on Instance Delete to Yes
Allocated: cirros image using small up arrow, lower right
Flavor:
Allocated: m1.tiny

10. When you have entered the appropriate information select the Launch Instance button. Wait until the instances finish
spawning. Did you receive any errors?

11. What are the names of the new instances?

12. Navigate to the Project -> Compute -> Overview page.


a. How many instances are listed?
b. How many VCPUs are in use?
c. Any errors?

13. Log out as developer1 and back in as the user admin.

14. Navigate to the Admin -> Compute -> Hypervisors page. How many VCPUs in use?

15. Navigate to the Admin -> Compute -> Instances page.

16. Select devos2-1 and devos2-2 then select the red Delete Instances button.

17. When the pop-up asks for confirmation select the Delete Instances button.

18. You should have one remaining instance. You can verify this from the command line:
ubuntu@devstack-cc:~$ sudo virsh list --all
Id Name State
----------------------------------------------------
2 instance-00000001 running

Solution 3.1

Logging into the Dashboard

5. a. Left drop down:


Which project or tenant the BUI will affect
b. Right drop down:
The user I am currently logged in as

6. Three, with alt_demo the current selection.

Add A User

4. SoftwareTesters

5. a. No admin tab
b. Insufficient privilege to add users or projects
c. View only that project's resources, which do not reflect the actual system at all


Log into a New Instance

4. 10.0.0.74

7. yes

8. cirros and cubswin:)

Viewing Resources

2. a. 1
b. 1
c. 512
d. 50G

6. a. Those with admin ability see the actual usage, the project view represents a view of quota not real resources
b. 1

10. No

11. devos2-1 and devos2-2


12. a. 3
b. 3
c. It’s in red

14. 3

Exercise 3.2: Add a Compute Host

Overview

In a previous exercise you deployed an All-In-One DevStack instance, running on Ubuntu. You then configured a project and
user and deployed a new virtual machine.

Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.2. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Use the ip
command, as shown in a previous task, to determine the IP address for eth0 for the new instance and reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.

Install Software on the New Compute Node

In this exercise we will grow our cloud by adding a Nova compute node. Connect to the terminal via the browser. The only
way to connect to compute-node is via the devstack-cc node.

An SSH public key for the Ubuntu user has been implemented and the compute-node has been pre-populated. If asked to
accept the SSH fingerprint choose yes. Use exit to return to devstack-cc when necessary. For example:

ubuntu@devstack-cc:~$ ssh compute-node


ubuntu@compute-node:~$ exit
ubuntu@devstack-cc:~$

The backslash in the git command below indicates that it is a single command continued on the next line.


1. Install the git command and pull down the DevStack software.

ubuntu@devstack-cc:~$ ssh compute-node

ubuntu@compute-node:~$ sudo apt-get update

ubuntu@compute-node:~$ sudo apt-get install git vim


<output_omitted>
After this operation, 21.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
<output_omitted>

ubuntu@compute-node:~$ git clone \


https://git.openstack.org/openstack-dev/devstack -b stable/pike
<output_omitted>

2. Find the private IP address of the compute node. Update the table at the beginning of the lab for future reference. Your
IP may be different than the example below.
ubuntu@compute-node:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP
group default qlen 1000
link/ether 02:1f:91:1e:db:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.97.2/20 brd 172.31.47.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::1f:91ff:fe1e:db18/64 scope link
valid_lft forever preferred_lft forever

3. We need to create another local.conf file, similar to but different from the one on the first node. This file will point to the IP
address of the first node so that the script can sign in to the various services. We will also limit which services are enabled on
the new node. Note the flat interface may be different. Nodes dedicated to compute services don't need access to the
same networks as a head node or the network node and may use a data network instead.
ubuntu@compute-node:~$ cd devstack ; vim local.conf
[[local|localrc]]
HOST_IP=192.168.97.2 # IP for compute-node
SERVICE_HOST=192.168.97.1 # devstack-cc IP, first node you used
FLAT_INTERFACE=ens3
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret
DATABASE_TYPE=mysql
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

4. Before running the stack.sh script, save the output of the ip command for later comparison:
ubuntu@compute-node:~devstack$ ip addr show > ~/ip.before.out

5. Install the DevStack software on the second node. If there are issues, double-check and edit the local.conf configu-
ration file, run ./unstack.sh and ./clean.sh and try again. Ask for assistance if you continue to receive errors.


ubuntu@compute-node:~devstack$ ./stack.sh
<output_omitted>

6. Once the script has finished, check to see if you have a second hypervisor. As admin, navigate to
Admin -> Compute -> Hypervisors. The Hypervisor tab should show two hostnames, as does the Compute Host
tab.
If not, you will need to use a five-step process to enable the new node. You may see some output about Python code
deprecation. This can be ignored if the node is added. Your hostnames and IP addresses may be different. Below we
find only one hypervisor after adding the compute node.
ubuntu@compute-node:~devstack$ source openrc admin

ubuntu@compute-node:~devstack$ openstack hypervisor list


+----+---------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+---------------+-------+
| 1 | devstack-cc | QEMU | 192.168.71.1 | up |
+----+---------------------+-----------------+---------------+-------+

7. Return to the devstack-cc node and enable the new hypervisor.


(a) Return to the devstack-cc node. Source the config file as admin
ubuntu@devstack-cc:~$ cd devstack/

ubuntu@devstack-cc:~/devstack$ source openrc admin


WARNING: setting legacy OS_TENANT_NAME to support cli tools.
(b) Verify the compute-host was added and is up.
ubuntu@devstack-cc:~/devstack$ nova service-list --binary nova-compute
+--------------------------------------+--------------+--------------+------+--------
+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status
| State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+--------------+--------------+------+--------
-+-------+----------------------------+-----------------+-------------+
| 32fa0ccd-45a8-45f5-b2e7-2d84e7377eb3 | nova-compute | devstack-cc | nova | enabled
| up | 2017-12-19T19:49:35.000000 | - | False |
| 9fc15015-d150-478f-ba4f-764fe4ed03c9 | nova-compute | compute-node | nova | enabled
| up | 2017-12-19T19:49:35.000000 | - | False |
+--------------------------------------+--------------+--------------+------+--------
-+-------+----------------------------+-----------------+-------------+
(c) Verify the hypervisor has not yet been added.
ubuntu@devstack-cc:~/devstack$ openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+--------------+-------+
| 1 | devstack-cc | QEMU | 192.168.97.1 | up |
+----+---------------------+-----------------+--------------+-------+
(d) Use a script to join the hypervisor to the cloud.
ubuntu@devstack-cc:~/devstack$ ./tools/discover_hosts.sh
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (1287, u"’@@t
x_isolation’ is deprecated and will be removed in a future release. Please use ’@@tra
nsaction_isolation’ instead")
result = self._query(query)
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell ’cell1’: 79bd3053-a007-469d-ba72-d7b106d08568
Found 1 unmapped computes in cell: 79bd3053-a007-469d-ba72-d7b106d08568
Checking host mapping for compute host ’compute-node’: b3caa6f3-fe33-49af-839a-375813
8af2b1
Creating host mapping for compute host ’compute-node’: b3caa6f3-fe33-49af-839a-375813
8af2b1


(e) Verify the compute-host has been added.


ubuntu@devstack-cc:~/devstack$ openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+--------------+-------+
| 1 | devstack-cc | QEMU | 192.168.97.1 | up |
| 2 | compute-node | QEMU | 192.168.97.2 | up |
+----+---------------------+-----------------+--------------+-------+

8. Return to the compute-node. Save the output of the ip command again to a new file. Compare how the networking on
the node has changed. Note the new bridges and interfaces created.
ubuntu@compute-node:~devstack$ ip addr show > ~/ip.after.out

ubuntu@compute-node:~devstack$ diff ~/ip.before.out ~/ip.after.out

9. We will create another instance from the BUI. After it has finished spawning run the ip command again and view the
differences again.

• On your local system open a browser and point it at the public IP Address of your devstack-cc node.
• Log into BUI as developer1 with the password openstack.
• Navigate to Project -> Compute -> Instances. Select Launch Instance.
• Use the name devOS3 and boot from the available cirros image. Select the m1.tiny flavor. When the fields are
filled select Launch
• When it finishes spawning check the differences in IP information on the new compute host.

ubuntu@compute-node:~devstack$ ip addr show > ~/ip.devos3.out

ubuntu@compute-node:~devstack$ diff ~/ip.after.out \


~/ip.devos3.out

10. Log into the BUI as the user admin with the password openstack

11. Navigate to the Admin -> Compute -> Hypervisors page.

12. Select the hypervisor tab. You should see a second hypervisor listed. Also a second compute host listed under the
Compute Host tab.
13. Navigate to the Admin -> Compute -> Instances page. You should find that each compute host has one instance
running.

14. Return to the command line. Use exit to return to the devstack-cc system. Using the same command and namespace
as before, but with the IP Address for devOS3 try to log into the new instance. Your instance IP may be different.
ubuntu@compute-node:~/devstack$ exit
logout
Connection to compute-node closed.

ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \


qrouter-4fd279c4-b125-4611-956d-adc67432e0d2 \
ssh cirros@10.0.0.10
The authenticity of host ’10.0.0.10 (10.0.0.10)’ can’t be established.
RSA key fingerprint is f8:3f:2a:07:d4:31:51:66:ee:a7:00:5c:22:f8:ce:c3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ’10.0.0.10’ (RSA) to the list of known hosts.
cirros@10.0.0.10's password: cubswin:)
$ exit
Connection to 10.0.0.10 closed.

15. You can delete the devOS3 instance, as well, to conserve resources.


Create a Security Group

The private IP Address allows access to an instance from the host machine. In order to allow outside access to an instance a
new security group must be created and rules for access added.

1. Log into the BUI as developer1.

2. Navigate to the Project -> Network -> Security Groups page.

3. Select the +Create Security Group button. Fill it out as found in the following graphic, then select the
Create Security Group button.

Figure 3.12: Create a Security Group

4. Select the button Manage Rules under the Actions column on the right of the newly created line for the Basic group.

5. Select the +Add Rule button. Add rules for ssh and HTTP access. To add ssh access, under the top drop-down scroll
to the bottom and select SSH, then the Add button.

6. Follow the same steps to add a rule for HTTP. After adding the rule your page should look something like this:


Figure 3.13: Rules for SSH and HTTP

7. After adding the rules navigate back to the Project -> Compute -> Instances page.
8. Click on the drop-down under the Actions field for your longest running instance, devOS1, and select
Edit Security Groups.
9. Select the blue plus sign to add the Basic group to this instance, then Save.

Use a Floating IP Address

Now that we have associated a new security group which allows ssh, let's test our work. First we add a gateway so our private
network can access the public network, allocate an IP to the project, then associate it with a port on an instance.

1. Navigate to the Project -> Network -> Routers page. Select the Set Gateway button. Choose the drop-down and
select public as the External Network. Then select Submit.
2. Navigate to the Project -> Network -> Floating IPs page.
3. Select the Allocate IP to Project button. Use the drop-down to select the public pool. Then the Allocate IP
button. A new address should be listed, but in a Down status.
4. Navigate to the Project -> Compute -> Instances page.
5. Click on the drop-down under the Actions field for devOS1 and select Associate Floating IP.
6. Use the drop-down to select the newly allocated IP address. Then the Associate button.
7. When the BUI updates, write down the newly assigned floating IP address:
8. Return to the command line of your cloud controller and log into the instance, but without using a namespace. Instead
using the newly assigned floating IP address. Your IP address will be different than the example following.
ubuntu@devstack-cc:~/devstack$ ssh cirros@192.168.42.141
<output_omitted>
cirros@192.168.42.141's password: cubswin:)
$ uname -a
Linux devos1 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ exit
Connection to 192.168.42.141 closed.
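
The same floating IP workflow can also be performed with the openstack command-line client, covered in the next exercise. A minimal sketch, assuming the external network is named public and the instance is devOS1; the address allocated in your lab will differ:

ubuntu@devstack-cc:~/devstack$ openstack floating ip create public
ubuntu@devstack-cc:~/devstack$ openstack server add floating ip devOS1 192.168.42.141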

Solution 3.2

Use a Floating IP Address

7. 192.168.42.141

Exercise 3.3: Explore Command Line Tools

Overview

Everything the BUI can do is possible from the command line. DevStack has moved to a new Python-based tool called
openstack. It can run individual commands or act as an interactive utility. Commands run within the openstack utility will not
show up in your bash history. Some underlying service commands remain, although not as many as you will find in more
stable deployments.
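
For example, the same request can be issued either as a one-off command or from within the interactive utility; this sketch assumes the openrc credentials have been sourced as shown in the next steps:

ubuntu@devstack-cc:~/devstack$ openstack flavor list

ubuntu@devstack-cc:~/devstack$ openstack
(openstack) flavor list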

Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.3. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.

Use the openstack Utility

Commands can be run one at a time or within the utility. We will reproduce some of the BUI functions via the command line.

1. Let’s begin by sourcing the openrc file. If this file is not read into the current shell you will need to set requested
parameters by hand.
ubuntu@devstack-cc:~$ cd ~/devstack

ubuntu@devstack-cc:~/devstack$ source openrc admin

2. Start the openstack utility. Notice the prompt changes to reflect you are no longer entering commands to the bash
shell. Then create a new project.
ubuntu@devstack-cc:~/devstack$ openstack

(openstack) project create CallCenter


+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | 05425440ce5147b2be06efa40713807a |
| is_domain | False |
| name | CallCenter |
| parent_id | default |
+-------------+----------------------------------+

3. Create a new user who is a member of CallCenter. This is a single, long command, not two.
(openstack) user create --email ubuntu@localhost --project CallCenter \
--password openstack operator1
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | 05425440ce5147b2be06efa40713807a |
| domain_id | default |
| email | ubuntu@localhost |
| enabled | True |
| id | faab415e3ee142d79d83169c0b5be193 |


| name | operator1 |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

4. OpenStack services locate and communicate with one another using registered service URLs called endpoints.


(openstack) endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------....
| ID | Region | Service Name | Service Type | Enabled | Interface ....
+----------------------------------+-----------+--------------+----------------+---------+-----------....
| 148a1abb542644d8b5c327138cc37bae | RegionOne | placement | placement | True | public ....
| 3f3f5359f1384facac2de7a87e833284 | RegionOne | glance | image | True | public ....
| 56e9920cfd1b4ec683688c73311a587b | RegionOne | cinderv3 | volumev3 | True | public ....
| 5927935062f448b9af90d77352ca3dce | RegionOne | keystone | identity | True | admin ....
<output_omitted>
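The endpoint listing can also be filtered when you only care about one service; a quick sketch, assuming the
--service and --interface filters available in recent clients:

(openstack) endpoint list --service identity --interface public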

5. Get a list of instances. We sourced the openrc file as admin, and the admin user doesn't have any running instances. You can
pass options such as the project name on the command line, as the second command shows.
(openstack) server list

(openstack) server list --project SoftwareTesters


+-------------------+--------+--------+-------------------+------------+
| ID | Name | Status | Networks | Image Name |
+-------------------+--------+--------+-------------------+------------+
| 956e2617-4ff2-41c | devOS1 | ACTIVE | Net1=10.0.0.5, | |
| 9-af29-f25b37126a | | | 192.168.42.132 | |
| 95 | | | | |
+-------------------+--------+--------+-------------------+------------+

6. View the running hypervisors. The output below shows alias hostnames; yours will look different, perhaps something like
ip-172-31-45-74.
(openstack) hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+--------------+-------+
| 1 | devstack-cc | QEMU | 172.31.4.94 | up |
| 2 | compute-node | QEMU | 172.31.6.143 | up |
+----+---------------------+-----------------+--------------+-------+

7. Because OpenStack is a collection of federated services, the various hosts each support different OpenStack services.


(openstack) host list
+-----------------+-------------+----------+
| Host Name | Service | Zone |
+-----------------+-------------+----------+
| devstack-cc | scheduler | internal |
| devstack-cc | consoleauth | internal |
| devstack-cc | conductor | internal |
| devstack-cc | conductor | internal |
| devstack-cc | compute | nova |
| compute-node | compute | nova |
+-----------------+-------------+----------+

8. View the OS images uploaded to glance via the stack.sh script.


(openstack) image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 138f3ff3-0427-444b-a3fd-218fe9a088af | cirros-0.3.5-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+


9. View the currently configured instance flavors.


(openstack) flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
<output_omitted>

10. Retrieve IDs for current users.


(openstack) user list
+----------------------------------+---------------------+
| ID | Name |
+----------------------------------+---------------------+
| 0e09bc6a5f4c49859e2f37e975ac24c3 | project_a_auditor |
| 0ebabdf9fceb4036a55cb002fe2838fb | project_b_creator |
| 136fea04f6ef4222aaec423ea3e461cf | project_b_auditor |
| 241d8a07b5e44b0794419222e4cbf54f | cinder |
| 2c219e9f70464e909e1751d7bc0f7d7c | nova |
<output_omitted>

11. We have not configured any secondary roles yet, but you can still list the primary role. Note that the IDs in the output
are wrapped across lines. (A sketch that prints names instead of IDs follows this step.)
(openstack) role assignment list --user admin --project demo
+----------------------------------+--------------------------
| Role | User
| Group | Project | Domain | Inherited |
+----------------------------------+--------------------------
| f617b324f31d400eb82500a285e6ce8d | 32eab78f89d94d40b406bc94c1447c81
| 7f779f3c9d964123a619ff1e6c0caf27 | | False |
+----------------------------------+---------------------------------

(openstack) role show f617b324f31d400eb82500a285e6ce8d


+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | f617b324f31d400eb82500a285e6ce8d |
| name | admin |
+-----------+----------------------------------+
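If you prefer names over wrapped IDs, the role assignment listing can resolve them for you; a sketch using the
--names option of the same command:

(openstack) role assignment list --user admin --project demo --names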

12. Get a list of current projects:


(openstack) project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 290e37418bed4f759508f5f9b159240f | demo |
| 31e4bd52e6de415e9306adaecd8a6a14 | project_b |
| 323015bca49045039c72a36a071d1d62 | alt_demo |
| 39901d32c34b4a65a631847398af7196 | admin |
| 6d408be3105141b6aa72145153c27b95 | SoftwareTesters |
| 910b5a25cf74463c98228617ccaa04bd | service |
| bc56b093cb4c43bfab205d1916f7576d | CallCenter |
| cbb10bade4aa4fe48f3b69201ce747be | invisible_to_admin |
| ee3481cd26644985a36cff7a6a263bf3 | project_a |
+----------------------------------+--------------------+

13. View neutron networking.


(openstack) network list


+-----------------------------+---------+------------------------------+
| ID | Name | Subnets |
+-----------------------------+---------+------------------------------+
| 2ce75fa5-624c-4883-b389-185 | Net1 | bc195f2c- |
| 9beab3f41 | | fd57-46ec-8797-33ec695e59ab |
| 68915a7c-00b2-409f- | private | 2293ddd8-2394-4a88-865a- |
| 87e9-a6908ba4958c | | a4f92d2402b4, 7c0fa080-eba1- |
| | | 4c51-a075-d8506c47941b |
| b9994d3d-83f2-4e0b-b95c- | public | 5863e06e-fa7b-480c- |
| ea02ccdbdb04 | | bb63-1550cdfdf342, |
| | | 5f14a0d7-65d1-418a- |
| | | 8ce0-286052c16b86 |
+-----------------------------+---------+------------------------------+

14. View some of the REST API addresses. The list is long.


(openstack) catalog list
+----------+------------+--------------------------------------------------------------------------------+
| Name | Type | Endpoints |
+----------+------------+--------------------------------------------------------------------------------+
| nova | compute | RegionOne |
| | | publicURL: http://172.31.26.105:8774/v2/4f37966692ba4b90b7d497fe68fe40c8 |
| | | internalURL: http://172.31.26.105:8774/v2/4f37966692ba4b90b7d497fe68fe40c8 |
| | | adminURL: http://172.31.26.105:8774/v2/4f37966692ba4b90b7d497fe68fe40c8 |
| | |

<content_omitted>

| keystone | identity | RegionOne |


| | | publicURL: http://172.31.26.105:5000/v2.0 |
| | | internalURL: http://172.31.26.105:5000/v2.0 |
| | | adminURL: http://172.31.26.105:5000/v2.0
<content_omitted>

15. Create a new 1GB volume.


(openstack) volume create --size 1 volumeA
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-02-24T19:41:13.762604 |
| description | None |
| encrypted | False |
| id | 8d708fa5-b53e-490f-8999-a49cbd706196 |
| migration_status | None |
| multiattach | False |
| name | volumeA |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | lvmdriver-1 |
| updated_at | None |
| user_id | afe6ce8f863b4561887ab555975edfc7 |
+---------------------+--------------------------------------+

16. Create a snapshot of the volume and verify it.


(openstack) volume snapshot create --volume volumeA volA-snap1
+-------------+--------------------------------------+


| Field | Value |
+-------------+--------------------------------------+
| created_at | 2017-11-05T00:33:28.292292 |
| description | None |
| id | 0e332e2c-1ec1-4a93-9179-08bff761f506 |
| name | volA-snap1 |
| properties | |
| size | 1 |
| status | creating |
| updated_at | None |
| volume_id | 174df03b-060d-4465-a820-97ec18846400 |
+-------------+--------------------------------------+

(openstack) volume snapshot list


+----------------------------------+---------------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+----------------------------------+---------------+-------------+-----------+------+
| 18187daf- | volA-snap1 | None | available | 1 |
| 014b-4887-9f44-284426bd7b7e | | | | |
+----------------------------------+---------------+-------------+-----------+------+

17. Use the --debug option to see the back-end communication, which can be helpful for troubleshooting. There is a lot of
output, so we will write everything to a file for easier viewing. (A sketch for picking out the API calls follows this step.)
(openstack) quit
ubuntu@devstack-cc:~/devstack$ openstack --debug server list &> debug.out
ubuntu@devstack-cc:~/devstack$ less debug.out
<output-omitted>
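Most of the interesting lines in the debug output are the HTTP requests and responses; a quick way to pick them out,
assuming the client logs them with REQ: and RESP: prefixes as recent versions do:

ubuntu@devstack-cc:~/devstack$ grep -E "REQ:|RESP" debug.out | less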

18. Review all the possible utility sub-commands:


ubuntu@devstack-cc:~/devstack$ openstack help |less
<output_omitted>

Exercise 3.4: Decommission a Compute Node

Overview

In a previous exercise you deployed an All-In-One DevStack instance running on Ubuntu, then configured a project and a
user and deployed a new virtual machine.

Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.4. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.
In this exercise we will first disable the services on a node, which is safe, then remove a node fully from OpenStack, which
may not be safe. There is no official process to fully decommission a node in OpenStack yet; it is being worked on, along
with in-place upgrades, which became part of the Kilo software release.

The safe operation, disabling services on a particular node, prevents new instances from being scheduled to that node. The
node will still show up in the BUI and command-line output as disabled, and errors concerning that node will also continue.

If you are knowledgeable and experienced at editing a database in mariadb you could remove the node entirely. Any mistake
with the database could render the whole OpenStack deployment useless.

DO NOT DO THIS IN PRODUCTION AND/OR ON ANY SYSTEM YOU WANT TO CONTINUE USING.


Disable a Node in OpenStack – Safe

To begin we must move any deployed instances off the node we intend to remove. We will use the node compute-node as
the removal target in this example; your system will be different. Once there are no instances running on the node we will
disable it.

1. You cannot see which hypervisor an instance is running on as a member of a project. Log out of the BUI as developer1
and back in as admin.

2. Navigate to the Admin -> Compute -> Instances page. The second column, Host, shows which hypervisor each
instance is running on.

3. Select each instance running on the target node, the compute-node for example, then the drop-down on the right side
of the line. Choose Terminate Instance, Migrate Instance or Live Migrate Instance depending on your needs
and current configurations. Our current configuration won’t allow us to migrate so we will Terminate Instance.

4. Once the hypervisor is without instances navigate to Admin -> Compute -> Hypervisors page.

5. Select the Compute Host tab. Find the host with no instances running and select the Disable Service button.

6. Fill in a reason such as "Upgrade hardware" and select the Disable Service button. You'll notice the
hypervisor summary updates to show roughly half as many resources available.

7. Verify the state by navigating to Admin -> System -> System Information page.
Select the second tab Compute Services. It should show Status as Disabled, with a recent state change.

8. To give lab 4 more resources, you may want to enable the node again. (A command-line equivalent of disabling and
re-enabling the compute service is sketched below.)
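The same disable and enable operations can be performed from the command line; a sketch using the unified client,
assuming compute-node is the host you disabled in the BUI:

ubuntu@devstack-cc:~/devstack$ openstack compute service set \
    --disable --disable-reason "Upgrade hardware" compute-node nova-compute

ubuntu@devstack-cc:~/devstack$ openstack compute service list

ubuntu@devstack-cc:~/devstack$ openstack compute service set --enable compute-node nova-compute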

OPTIONAL LAB: Remove a Node from OpenStack - Dangerous

The following steps are optional. There is no formal way to remove a node completely. The following steps involve editing a
database manually, and any mistake could render the cloud unusable. (A sketch for backing up the database first appears
just before the steps.)

Please wait until all labs using DevStack have been completed before attempting these steps. The next chapter has more
DevStack labs. After completing those, you could return for this task.
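Before touching the database by hand it is prudent to take a backup you can restore from; a sketch using mysqldump,
with the password being the one found in the first step below:

ubuntu@devstack-cc:~/devstack$ mysqldump -u root -p nova_cell1 > nova_cell1-backup.sql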

1. Get the database password from the local.conf file.


ubuntu@devstack-cc:~/devstack$ grep DATABASE_PASSWORD local.conf
DATABASE_PASSWORD=db-secret

2. Log into the mysql database on the keystone database node.


ubuntu@devstack-cc:~/devstack$ mysql -u root -p
Enter password: db-secret
<output_omitted>
mysql>

3. View the available databases. This list has changed over time so the output may be slightly different.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| barbican |
| cinder |


| glance |
| keystone |
| mysql |
| neutron |
| nova_api |
| nova_cell0 |
| nova_cell1 |
| performance_schema |
| sys |
+--------------------+

4. Select the nova_cell1 database.


mysql> use nova_cell1
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

5. View the current tables of this database. Find the compute nodes.
mysql> show tables;
+--------------------------------------------+
| Tables_in_nova_cell1 |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
<output_omitted>

6. View the current compute node information. There will be a lot of output. It may help to copy and paste the output and
search for the IP addresses or node names. Find the line that matches the node you want to remove. In the following
example the node name is ip-172-31-39-120. Your node name will be different.
mysql> select * from compute_nodes;
+---------------------+---------------------+------------+----+-
-----------+-------+-----------+----------+------------+--------
<output_omitted>
172.31.39.120 | [["armv7l", "qemu", "hvm"], ["aarch64", "qemu",
<output_omitted>
2 rows in set (0.00 sec)

7. Delete the disabled host from the list of compute nodes. Please note that the value given must be enclosed by single
quotes. If not you will receive a SQL syntax error. When completed the Hypervisor tab in the BUI should no longer
show it as a hypervisor, but it will remain as a Compute Host.
mysql> DELETE QUICK FROM compute_nodes WHERE host_ip='172.31.39.120';
Query OK, 1 row affected (0.01 sec)

mysql> select * from compute_nodes;


+---------------------+---------------------+------------+----+-----
<output_omitted>
1 row in set (0.00 sec)

8. Next we will remove the node as a Compute Host. View the current service nodes and look for the nova-compute lines.
Then delete the node using its host entry. The value must be enclosed in single quotes, and your host name will be different.
mysql> select * from services;
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
| created_at | updated_at | deleted_at | id | host | binary | topic | report_count | disabled | deleted | disabled_reason | last_seen_up | forced_down |
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
| 2018-05-21 04:49:02 | 2018-06-08 21:01:23 | NULL | 1 | ip-172-31-45-74 | nova-conductor | conductor | 161358 | 0 | 0 | NULL | 2018-06-08 21:01:23 | 0 |
| 2018-05-21 04:49:13 | 2018-06-08 21:01:25 | NULL | 2 | ip-172-31-45-74 | nova-compute | compute | 161338 | 0 | 0 | NULL | 2018-06-08 21:01:25 | 0 |
| 2018-05-23 17:00:48 | 2018-06-08 21:01:26 | NULL | 3 | ip-172-31-39-120 | nova-compute | compute | 139679 | 1 | 0 | NULL | 2018-06-08 21:01:26 | 0 |
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
3 rows in set (0.00 sec)

mysql> DELETE QUICK FROM services WHERE host='ip-172-31-39-120';


Query OK, 1 row affected (0.01 sec)


9. Exit from the database.


mysql> quit
Bye
ubuntu@devstack-cc:~/devstack$

10. Verify the change in the BUI. Log into the BUI as admin with a password of openstack. Navigate to
System -> Hypervisors and view the Compute Host tab. The node should not be in the list; you may have to perform
a no-cache refresh of the page to see the changes. Errors indicate something has gone wrong, which may be why this
process is not yet supported and remains under development.

11. Verify from the command line that the node is fully removed from OpenStack. If the BUI shows errors, this command may show errors as well.
ubuntu@devstack-cc:~/devstack$ nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+--------------------------------------+---------------------+-------+---------+
| 93b03401-1128-49c6-8d41-dd743267ecb2 | devstack | up | enabled |
+--------------------------------------+---------------------+-------+---------+

Chapter 4

Components of an OpenStack Cloud

4.1 Labs

Exercise 4.1: Working with Images, Snapshots and Volumes

Overview

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 4.1: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

This lab uses DevStack running on Ubuntu. Later labs will use RDO running on CentOS.

Working with Volumes

In this lab we will create a snapshot from a running instance. We will create a new volume from the snapshot, then use it to
launch a new instance and create a new image.

We will also work with encryption of volumes and volume types.

Create the Snapshot

In this task we will use an existing instance to create a snapshot, then an image in a different tenant.

1. Logged in as admin navigate to the Admin -> Compute -> Instances page. Select the drop-down under the Actions
column of the devOS1 instance, then Create Snapshot. Give a name of dev-snap1 and then Create Snapshot into
the pop-up window.


Figure 4.2: Create Snapshot

2. Notice that upon finishing Horizon changed to the Project -> Compute -> Images page. Select the drop-down under
Actions for the newly created snapshot. If the image appears queued for more than a minute refresh the page. Once it
shows as active notice there are several options including launch. Select Edit Image. Note the options under Format.
Then find and change the Image Sharing Visibility to Public if not already set.

3. After noting the visibility now shows Public go to the Project -> Compute -> Images page. Use the drop down to
edit dev-snap1. It should look similar to the Admin.

4. Go to the top of the BUI and change the current project to be demo instead of alt_demo.

5. Select the Launch button on the dev-snap1 line.

6. Fill in the following values for the new instance. The source should already be set. Select Launch Instance when
complete.
Instance Name: golden
Source: Image
Allocated: dev-snap1
Flavor: m1.tiny
Networks: Private

7. Navigate to the Project -> Compute -> Instances page. Once the new instance becomes active and has had time to
boot, take note of the assigned IP address and log in. The username and password remain the same as the source instance.
Even though it was created from a snapshot, this instance was launched by a different user, in a different project, on a
different network. Use ip netns list and the previous steps to find the correct namespace from which to access the instance.
An example of using ssh on the private network follows; your namespace will be different. Remember this is a different
project, so you will need to add an ssh rule to its security group.
The historical password for CirrOS images has been cubswin:). Now that the Cubs have actually won, it is changing
to gocubsgo. Should one password not work, try the other.
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \
qrouter-0701411b-91d8-4871-8191-7c808b1c1144 \
ssh [email protected]
<output_omitted>
[email protected]’s password: cubswin:)
$


8. Look for existing files and verify the new node name. You should see the file created in a previous lab, prior to creating
the snapshot. Note the different node name.
$ ls
uname.out
$ cat uname.out
Linux devos1 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ uname -a
Linux golden 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ exit

Create an Encrypted Volume

Now we enable encryption and create a new encrypted volume. Some volume drivers may not set the encrypted flag; these
cannot use encrypted volumes. We will review how the BUI can be used, but perform the steps from the command line.

1. You can create new volume types from the BUI. Navigate to the Admin -> Volumes -> Volume Types page. In the
Actions column select the Create Encryption button. Read through the Description on the right.

Figure 4.3: Create Volume Type Encryption

2. Return to the command line. Make sure you have sourced the admin file.
ubuntu@devstack-cc:~/devstack$ source openrc admin

3. Use the openstack utility to create the new volume type.


ubuntu@devstack-cc:~/devstack$ openstack volume type create LUKS
<output_omitted>

4. Use the output of the cinder help command to view the syntax.
ubuntu@devstack-cc:~/devstack$ cinder help encryption-type-create
<output_omitted>

5. Use the cinder command to create the encryption type and assign a cipher and key size. (A verification sketch follows this step.)
ubuntu@devstack-cc:~/devstack$ cinder encryption-type-create \
--cipher aes-xts-plain64 \
--key_size 256 \
--control_location front-end LUKS \
LuksEncryptor

<output_omitted>
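To confirm the encryption type was associated with the LUKS volume type, you can list the configured encryption types; a
quick verification sketch using the same cinder client:

ubuntu@devstack-cc:~/devstack$ cinder encryption-type-list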


6. Now that we have the type we can create a new encrypted volume.
ubuntu@devstack-cc:~/devstack$ openstack volume create --size 1 --type LUKS crypt-vol
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
<output_omitted>

7. View the newly created volume. Verify you can see the encrypted setting.
ubuntu@devstack-cc:~/devstack$ cinder show crypt-vol
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-12-30T07:12:48.000000 |
| description | None |
| encrypted | True |
| id | b133e0dd-177c-44f2-a8d8-418269e0211b |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | crypt-vol |
| os-vol-host-attr:host | devstack-cc@lvmdriver-1#lvmdriver-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 8e806b4eeada4305a4a327341a3f44dd |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2016-12-30T07:12:50.000000 |
| user_id | 534ab9b6f27c4be281bab1ffe94cf023 |
| volume_type | LUKS |
+--------------------------------+--------------------------------------+

8. Now we add the volume to a running instance. Begin by viewing instance information. Take note of the ID.
ubuntu@devstack-cc:~/devstack$ openstack server list
+--------------------------------------+----------+--------+---------------------------------------------------------+------------
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+----------+--------+---------------------------------------------------------+------------
| e743fc56-ee0f-4858-9ce7-e0a796154319 | golden | ACTIVE | private=fd12:74d3:437f:0:f816:3eff:fe35:dddf, 10.0.0.10 | dev-snap1 |
+--------------------------------------+----------+--------+---------------------------------------------------------+------------

9. View the volume information. You can either list all volumes or view the details of a particular one. Take note of the ID for crypt-vol.
ubuntu@devstack-cc:~/devstack$ openstack volume list
+--------------------------------------+--------------+-----------+------+---------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+---------------------------------+
| b133e0dd-177c-44f2-a8d8-418269e0211b | crypt-vol | available | 1 | |
| 3f7d187e-0160-4a04-ba83-ceb21ca99317 | | in-use | 1 | Attached to golden on /dev/vda |
+--------------------------------------+--------------+-----------+------+---------------------------------+

ubuntu@devstack-cc:~/devstack$ openstack volume show crypt-vol


<output-omitted>


10. Now use the openstack utility to attach the volume to the golden instance. Pass first the ID for the instance then the ID
for the volume. The command is on multiple lines for ease of reading.
ubuntu@devstack-cc:~/devstack$ openstack server add volume \
e743fc56-ee0f-4858-9ce7-e0a796154319 \
b133e0dd-177c-44f2-a8d8-418269e0211b \
--device /dev/vdb

11. Verify the item shows a status of in-use.


ubuntu@devstack-cc:~/devstack$ openstack volume list
+--------------------------------------+--------------+--------+------+---------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+---------------------------------+
| b133e0dd-177c-44f2-a8d8-418269e0211b | crypt-vol | in-use | 1 | Attached to golden on /dev/vdb |

12. Log into the instance and verify the volume can be seen. (A sketch for creating a filesystem on the volume follows this step.)
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec qrouter-0701411b-91d8-4871-8191-7c808b1c1144 \
ssh [email protected]
[email protected]’s password: cubswin:)
$ sudo fdisk -l | grep vdb
Disk /dev/vdb doesn’t contain a valid partition table
Disk /dev/vdb: 1071 MB, 1071644672 bytes
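To actually use the attached volume from inside the guest you would create a filesystem and mount it; a sketch, assuming
the image ships mkfs.ext3 as recent CirrOS releases do:

$ sudo mkfs.ext3 /dev/vdb
$ sudo mkdir -p /mnt/vol
$ sudo mount /dev/vdb /mnt/vol
$ df -h /mnt/vol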

Chapter 5

Components of a Cloud - Part Two

5.1 Labs

There is no lab to complete for this chapter.

Chapter 6

Reference Architecture

6.1 Labs

There is no lab to complete for this chapter.

Chapter 7

Deploying Prerequisite Services

7.1 Labs

There is no lab to complete for this chapter.

Chapter 8

Deploying Services Overview

8.1 Labs

Exercise 8.1: Installing and Configuring the RDO OpenStack deployment

Overview

In this exercise we will be deploying RDO onto a new CentOS system. This will begin as an All-In-One deployment. Later we
will add nodes for Ceph. Once it has been configured compare and contrast with the DevStack systems.

You cannot use the Ubuntu nodes for this lab. The course material includes a URL for lab access. You will use your Linux
Foundation login and password to gain access. After successfully logging in you will be presented a new page and a virtual
machine instance will be created.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 8.1: Katacoda Horizon Login

The suggested and supported browser to use is Chrome, although others may work. Unlike DevStack you must complete
steps in RDO as the root user.

Installing the Software

The RDO distribution of OpenStack has an easy-to-use installer called Packstack. It leverages purpose-built Puppet scripts,
written and maintained by Red Hat. To access the new CentOS system we will use a similar process to the one used for the
previous Ubuntu system.

1. Log into your rdo-cc system. Note that the login user for the CentOS nodes is centos.

2. Become root after logging in, then install the RDO yum repository. We will be using the Pike release. To install the latest
nightly build you could instead use https://rdo.fedorapeople.org/rdo-release.rpm.
If a URL below is divided across two lines with a backslash and does not work properly, type the URL on one line.

[centos@rdo-cc ~]$ sudo -i

[root@rdo-cc ~]# yum install -y centos-release-openstack-pike


<output_omitted>
---> Package centos-release-openstack-pike.x86_64 0:1-1.el7 will be installed
<output_omitted>

3. Install the packstack command, and vim if you want to take advantage of it.
[root@rdo-cc ~]# yum install -y \
openstack-packstack vim

4. If you received an error in the previous step, something like


failure: repodata/repomd.xml from centos-qemu-ev: [Errno 256]

you will need to edit the /etc/yum.repos.d/CentOS-QEMU-EV.repo file and change the architecture variable, which
may not match the website. Use a browser to find the proper URL, as it may change again. Recent testing showed that
the architecture needs to be aarch64. This may be a typo by Red Hat and fixed in the future. You may have to edit this
file again to get the yum update to work properly.


[root@rdo-cc ~]# vi /etc/yum.repos.d/CentOS-QEMU-EV.repo


....
[centos-qemu-ev]
name=CentOS-$releasever - QEMU EV
baseurl=http://mirror.centos.org/$contentdir/$releasever/virt/aarch64/kvm-common
....

5. Update the node using the newly installed repositories.

[root@rdo-cc ~]$ yum update -y


<output_omitted>
Complete!

6. Generate the answer configuration file for packstack.


[root@rdo-cc ~]# packstack --gen-answer-file rdo.txt
Packstack changed given value to required value /root/.ssh/id_rsa.pub

Configuring packstack

While there are hundreds of options to configure inside the answer file, we will start with the following minimal changes.

1. Edit the newly created answer file and modify the following parameters.
[root@rdo-cc ~]# vim rdo.txt
CONFIG_HEAT_INSTALL=y
CONFIG_NTP_SERVERS=0.pool.ntp.org
CONFIG_DEBUG_MODE=y
CONFIG_KEYSTONE_ADMIN_PW=openstack

2. Review the rest of the configuration file. (One way to count the entries is sketched after this step.)


a. How many entries have PW_PLACEHOLDER set?
b. How many parameters are in the current answer file?
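One way to answer the questions above is with grep; a quick sketch (the second count is an approximation, since it counts
every line containing an equals sign):

[root@rdo-cc ~]# grep -c PW_PLACEHOLDER rdo.txt
[root@rdo-cc ~]# grep -c = rdo.txt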

Allowing Root Access

The packstack script uses ssh to connect to each node, so we must configure access. Public-key access may be easiest,
and is more common in a production environment; in this example we will allow standard root password access. Double-check
your edits of the sshd_config file: a mistake here will keep the daemon from starting and render the instance unreachable.
A backslash indicates that a command continues on the next line. (A sketch for pushing a key to additional nodes by hand
appears after these steps.)

1. Become root, if not already, to easier read following commands.


[centos@rdo-cc ~]$ sudo -i

2. Use sed to allow root to log in, then double check your work.

[root@rdo-cc ~]# sed -i \
's/\#PermitRootLogin\ yes/PermitRootLogin\ yes/' /etc/ssh/sshd_config

[root@rdo-cc ~]# grep PermitRoot /etc/ssh/sshd_config


PermitRootLogin yes
# the setting of "PermitRootLogin without-password".


3. Allow users to log in without an existing public key. On a production system remember to return and disable this after
we push keys in a few steps.
[root@rdo-cc ~]# sed -i \
's/PasswordAuthentication\ no/PasswordAuthentication\ yes/' \
/etc/ssh/sshd_config

[root@rdo-cc ~]# grep PasswordAuth /etc/ssh/sshd_config
#PasswordAuthentication yes
PasswordAuthentication yes
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication, then enable this but set PasswordAuthentication

4. Restart sshd for the changes to take effect.


[root@rdo-cc ~]# systemctl restart sshd
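If you later add additional nodes, you could also pre-seed the root public key yourself rather than relying on password
access; a sketch, where compute1 is a hypothetical additional node name:

[root@rdo-cc ~]# ssh-copy-id root@compute1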

Installing RDO Using packstack

When you are ready to install RDO, execute the packstack command and pass the answer file you created. It is not
uncommon for an issue to prevent the script from finishing. Puppet is an end-state-focused tool, so simply run the command a
second time. If it errors at the same place again, fix the answer file and continue running the script until the installation is
successful.

The script can take up to 25 minutes to run. Take a short break and check back for errors mid-run.

If you receive another repodata error 256, edit the /etc/yum.repos.d/CentOS-QEMU-EV.repo file again. Change the
architecture from aarch64 to x86_64. Then run the packstack script again.

1. Run the packstack script. Use of the equal sign to point at the file is optional.
[root@rdo-cc ~]# packstack --answer-file rdo.txt
<output-omitted>
**** Installation completed successfully ******
<output-omitted>

2. Find your public IP address or FQDN. With this information add a ServerAlias parameter to match the inbound address
request, then restart the web server and memcached services. The example below shows a URL of 288278-8-ollie3.
openstack-environments.katacoda.com; yours will be different. This information can be found by opening the OpenStack
Dashboard tab on the Katacoda page. Once open you should see the default Apache welcome page. This
indicates you are connecting to the newly installed web server, but it is not yet aware of how to handle the URL being
requested. Copy the URL to create the ServerAlias. You may also find this URL inside the /opt/host file, if it exists.
[root@rdo-cc ~]$ vim /etc/httpd/conf.d/15-horizon_vhost.conf
<...>
## Server aliases
ServerAlias 288278-8-ollie3.openstack-environments.katacoda.com
ServerAlias 172.17.0.14
ServerAlias localhost
<...>

[root@rdo-cc ~]$ systemctl restart httpd ; systemctl restart memcached

3. Log into the BUI with the username admin and the password openstack. You may need to refresh the web page once
httpd and memcached finish their restart.
4. Navigate around the RDO BUI. Compare and contrast with the Devstack deployment. Notice anything different?

5. Create a new project named rdo1. Reference the steps from the previous lab for assistance. Change the number of
vCPUs to 10. How different are the actual steps?


6. Create a new user named operator, with the following settings:


a. Email of centos@localhost
b. A password of openstack
c. Primary project of rdo1
d. Keep them as a _member_

7. Navigate through the rest of the tabs on the left of the BUI. What tabs have the word network or networks on them.
Compare to the DevStack systems. Are they the same?
a.
b.
c.

8. Using the drop down in the upper left corner, how many projects can be selected?

9. Navigate to the Identity -> Projects page. How many projects do you see listed?
a. Is this the same behavior as DevStack?

Solution 8.1

Configuring packstack

2. a. 19
b. 334, found using grep = rdo.txt | wc -l

Installing RDO Using packstack

4. Lots and lots

5. Not very

7. Various differences, which change over time, such as:
a. Project -> Network -> Network Topology
b. Project -> Networks -> Networks
c. Admin -> System -> Networks

8. Just one

9. 4
a. No


Chapter 9

Advanced Software Defined Networking with


Neutron

9.1 Labs

There is no lab to complete for this chapter.

Chapter 10

Advanced Software Defined Networking with


Neutron - Part Two

10.1 Labs

Exercise 10.1: Neutron Networking

Overview

We will perform several familiar steps and learn more about the CLI tools and capabilities of Neutron networking. This
exercise uses the RDO OpenStack deployment running on CentOS.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 10.1: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Deploy Neutron networks and instances into RDO OpenStack

Explore the BUI as Non-admin User

We will explore the BUI as a member of a project to compare and contrast with DevStack.

Remember you can use the second tab found on the Katacoda page, or use your own browser and go to the URL found in the
/opt/host file.

1. Log into the BUI as the operator user, with a password of openstack.

2. Using the tabs on the left, explore the BUI, compare to what the admin user had available.

3. Using the drop-down in the upper left of the window, how many projects do you see?

Create A New Private Network

We cannot deploy an instance on the existing private network. First we have to create a new private network, a network we
will call Accounting Internal.

1. Remaining logged in as operator user, navigate to the Project -> Network -> Network Topology page. Select the
+Create Network button. Enter in the name Accounting Internal on the first tab Network, then select Next.
2. Fill out the Subnet tab with a name of acct-sub-internal, a network address of 192.168.0.0/16 with the gateway
of 192.168.0.200. Then select Next.

3. Enter into the Allocation Pools box the addresses 192.168.0.10,192.168.0.20. Then select Create.


Create a Router

While we could give each newly created instance an interface directly on a public network, adding a software router for
traffic instead allows more administrative flexibility and potentially better security.

1. Navigate to the Project -> Network -> Network Topology page. In the upper right, select the +Create Router
button.

2. Type in the name Accounting-1, then select Create Router. The router should appear on the topology, but is not
associated yet with any interfaces.

3. Use your mouse to select the newly created router. Select the View Router Details link. Then select the Interfaces
tab, then the Add Interface button.

4. Work through the wizard with the values from the following graphic. When it matches, select the Add Interface button.
The interface may show down for a moment. Refresh the page to check that the status is Active.

Figure 10.2: Add Interface

5. Only a user with admin capability can add an interface directly to the public network, but you can set external access as a
gateway. Select the Set Gateway button in the upper right of the new window.
6. Use the dropdown to select the network public. Then select Submit.

7. Navigate to the Network Topology page. Do you see the Accounting Internal network and the new router? Are they
connected?

Launch an Instance

Use the BUI to launch another instance.


1. Navigate to Project -> Compute -> Instances. Select the Launch Instance button.

2. Fill in an instance name of acct1. Under the Source tab select boot source of Image, then select the up-arrow icon to
add the cirros image.

3. Move to the Flavor tab. Choose the m1.tiny flavor.

4. Select the Networks tab. The network should already be assigned. If not select the up-arrow icon next to
Accounting Internal. If you have multiple networks, you will have to choose at least one prior to launch.
5. Select Launch Instance. The new page should show the instance in a Spawning state. Depending on resources being
requested and other activity it can take a minute or two for the instance to finish its build and become active.

6. Verify by returning to the Network Topology page. You should see the newly created instance attached to the
Accounting Internal network.

Create a Project and User Using the CLI

Similar to what we accomplished from the BUI, we now perform the same tasks from the command line.

1. Log into the node and become root. Create a new project named finance.
[root@rdo-cc ~]# source keystonerc_admin

[root@rdo-cc ~(keystone_admin)]# openstack project create finance


+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | e600e54c56b145848d9287474f196be4 |
| is_domain | False |
| name | finance |
| parent_id | default |
+-------------+----------------------------------+

2. Create a new user named tester who is a member of the finance project:
[root@rdo-cc ~(keystone_admin)]# openstack user create --project finance --password openstack \
--email centos@localhost tester
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | e600e54c56b145848d9287474f196be4 |
| domain_id | default |
| email | centos@localhost |
| enabled | True |
| id | 0d895ead0f344b93aa3789a14d119576 |
| name | tester |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

3. While the previous command should have set the project, it does not make the user a member of that project in Pike.
We will need to add the user manually. (A quick verification is sketched after this step.)
[root@rdo-cc ~(keystone_admin)]# openstack role add --user tester \
--project finance _member_
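A quick way to confirm the role was actually granted is to list the role assignments for the new user; a verification sketch
using the --names option:

[root@rdo-cc ~(keystone_admin)]# openstack role assignment list --user tester \
--project finance --names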

4. Verify the addition of the project and the user:


[root@rdo-cc ~(keystone_admin)]# openstack project list


+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 04a3cfa3290c424d8d30b35cfb544753 | services |
| 4a31b72d4fe0437781c745b18e277f00 | admin |
| c9443e46cec44c94a5a906ea682296dd | demo |
| e573d66eba6e4b1c8dcffb246a47d305 | finance |
| ea0a8b68277a494ea3e284a466de29e3 | rdo1 |
+----------------------------------+----------+

[root@rdo-cc ~(keystone_admin)]# openstack user list


<output_omitted>
| d2ad3783b31f473c952be0f255d29809 | tester |
<output_omitted>

5. Note the networks the admin user can view:


[root@ip-172-31-19-175 ~(keystone_admin)]# neutron net-list
+--------------------------+---------------------+---------------------------+
| id | name | subnets |
+--------------------------+---------------------+---------------------------+
| 2e575d3c-55e6-4c2c-b8fc- | Accounting Internal | 690777c8-6019-4f06-8ff8-e |
| 99bbf9590aa2 | | 9e8fd4221ec |
| | | 192.168.0.0/16 |
| 667ede37-5ead- | private | 29c63f44-bc04-426d-8124-1 |
| 4dc0-bfa1-dd4d15ca9b7a | | 0361eea0f9a 10.0.0.0/24 |
| ed906677-4918-472c-a28f- | public | 45bc3127-fef4-49e0-88a5-b |
| 4249c840d575 | | 08f78f965bb |
| | | 172.24.4.224/28 |
+--------------------------+---------------------+---------------------------+

Create a Network and Router Using the CLI

Neutron is a fast-evolving service whose CLI functions and features supersede the capabilities of the BUI. The
default installation uses Open vSwitch for networking functionality. To see all the work being done you may want to read up
on development here: https://wiki.openstack.org/wiki/NeutronDevelopment.

Just as the login to the BUI affects what resources can be seen, so too do the username and tenant name settings in the
keystonerc files. We will begin by creating a new file for the finance group. You may want to log into the BUI as tester
and view the network topology change; the BUI will update within a minute to show the changes to the network.

1. Copy the admin file and edit three of the parameters to match the new user:
[root@rdo-cc ~(keystone_admin)]# cp keystonerc_admin \
keystonerc_finance

[root@rdo-cc ~(keystone_admin)]# vim keystonerc_finance


<...>
export OS_USERNAME=tester #Edit this line

export PS1='[\u@\h \W(keystone_tester)]\$ ' #Edit this line

export OS_PROJECT_NAME=finance #Edit this line


<...>

2. Source the newly created file and note the change in prompt. (A quick credential check is sketched after this step.)


[root@rdo-cc ~(keystone_admin)]# source keystonerc_finance
[root@rdo-cc ~(keystone_tester)]#
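To confirm the new credentials actually authenticate, any read-only command scoped to the project will do; a quick sketch:

[root@rdo-cc ~(keystone_tester)]# openstack token issue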


3. Create a new network called finance-internal. Make a note of the network id in the output for when we launch an
instance in a later step, or copy and paste it to a file. (Alternatively, capture it in a shell variable as sketched after this step.)
[root@rdo-cc ~(keystone_tester)]# openstack network create finance-internal
Created a new network:
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |

<output-omitted>

| id | ffe41f70-962f-4693-9014-2275080cd44a |

<output-omitted>
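Instead of copying and pasting, you could capture the ID into a shell variable; a sketch using the client's value formatter:

[root@rdo-cc ~(keystone_tester)]# NET_ID=$(openstack network show -f value -c id finance-internal)
[root@rdo-cc ~(keystone_tester)]# echo $NET_ID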

4. Create a new subnet for the network with a network address of 10.0.0.0/24 and a gateway of 10.0.0.1.
[root@rdo-cc ~(keystone_tester)]# openstack subnet create sub-financial-int \
--subnet-range 10.0.0.0/24 --network finance-internal
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| allocation_pools | 10.0.0.2-10.0.0.254 |
| cidr | 10.0.0.0/24 |
| created_at | 2018-06-11T21:30:56Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| host_routes | |
| id | 17a3c73a-aea4-4833-a0f7-047efb61713c |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | sub-financial-int |
| network_id | 544e7326-c416-4a2c-9025-e2361b435c1d |
| project_id | e600e54c56b145848d9287474f196be4 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-06-11T21:30:56Z |
| use_default_subnet_pool | None |
+-------------------------+--------------------------------------+

5. Create a new router called finance-router. Make sure the status reports as active.
[root@rdo-cc ~(keystone_tester)]# openstack router create finance-router
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-06-11T21:33:59Z |
| description | |
| distributed | False |
| external_gateway_info | None |
| flavor_id | None |
| ha | False |
| id | 335699ea-8324-494e-bd1c-200b181124bc |


| name | finance-router |
| project_id | e600e54c56b145848d9287474f196be4 |
| revision_number | None |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2018-06-11T21:33:59Z |
+-------------------------+--------------------------------------+

6. Set a gateway for the new router to use the shared public network.
[root@rdo-cc ~(keystone_tester)]# openstack router set --external-gateway public finance-router

Add an interface to the sub-financial-int network.


[root@rdo-cc ~(keystone_tester)]# openstack router add subnet \
finance-router sub-financial-int

7. Log into the BUI as tester. Navigate to the Network Topology page. It should show the new network attached via the
router to an exterior network.

Exercise 10.2: Use Neutron to Connect Instances in RDO OpenStack

Overview

This exercise uses the RDO OpenStack deployment running on CentOS. The lab ties together the previously used steps to
deploy multiple Neutron networks and instances and configure connectivity.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 10.3: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Deploy a New Instance from the CLI

We will add some common tasks to launching an instance that we have not done via the BUI: we will generate an SSH key
for easy access and create a network security group.

1. For ease of access we will generate a new public/private SSH keypair. Press the Enter key twice to accept the default
of no passphrase.
[root@rdo-cc ~]# source keystonerc_finance
[root@rdo-cc ~(keystone_tester)]# ssh-keygen -f ~/.ssh/finance-key
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): <enter>
Enter same passphrase again: <enter>
Your identification has been saved in /root/.ssh/finance-key.
Your public key has been saved in /root/.ssh/finance-key.pub.
The key fingerprint is:
fe:e9:f3:6c:78:b5:2a:ad:c2:75:46:61:e7:56:bc:9a \
[email protected]
The key’s randomart image is:
+--[ RSA 2048]----+
| . |
| o . o|
| . + ..|
| . o. |
| S . .o |
| . . oE. |
| ... = . . |
| o.+o+ . |
| o=B+. |
+-----------------+


2. Add the key to the Nova compute service and verify it. Some small images do not contain cloud-init and may not accept
the key.
[root@rdo-cc ~(keystone_tester)]# nova keypair-add \
--pub-key ~/.ssh/finance-key.pub finance-key

[root@rdo-cc ~(keystone_tester)]# nova keypair-list


+-------------+------+-------------------------------------------------+
| Name | Type | Fingerprint |
+-------------+------+-------------------------------------------------+
| finance-key | ssh | fe:e9:f3:6c:78:b5:2a:ad:c2:75:46:61:e7:56:bc:9a |
+-------------+------+-------------------------------------------------+

3. Review the flavors currently configured for instances.


[root@rdo-cc ~(keystone_tester)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

4. Create a new flavor and verify it. By default only admins can do this. We will give it the name smallfry, an ID of 6,
512MB of memory, 2GB of disk and 1 vCPU. (An equivalent command using the unified openstack client is sketched
after this step.)
[root@rdo-cc ~(keystone_tester)]# source keystonerc_admin

[root@rdo-cc ~(keystone_admin)]# nova flavor-create smallfry 6 512 2 1


+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6 | smallfry | 512 | 2 | 0 | | 1 | 1.0 | True |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
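The same flavor could be created with the unified openstack client instead of the legacy nova command; a rough
equivalent sketch:

[root@rdo-cc ~(keystone_admin)]# openstack flavor create --id 6 --ram 512 --disk 2 --vcpus 1 smallfry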

5. Return to running commands as the tester user.


[root@rdo-cc ~(keystone_admin)]# source keystonerc_finance

6. Verify the tester user can view the new flavor.


[root@rdo-cc ~(keystone_tester)]# nova flavor-list
<output_omitted>
| 6 | smallfry | 512 | 2 | 0 | | 1 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

7. Review the available security groups.


[root@rdo-cc ~(keystone_tester)]# openstack security group list
+----------------------------+---------+------------------------+------------------------------+
| ID | Name | Description | Project |
+----------------------------+---------+------------------------+------------------------------+
| 3399f232-5876-4781-b12d- | default | Default security group | dd25b7768fb84b43a09b9b9b9019 |
| f3000dcce041 | | | e91e |
+----------------------------+---------+------------------------+------------------------------+

8. Take a look at the current rules inside the default group listed above.


[root@rdo-cc ~(keystone_tester)]# openstack security group rule list default


+-----------------------------+-------------+----------+------------+------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+-----------------------------+-------------+----------+------------+------------------------------+
| 00bd46e8-35df-4e39-af0b- | None | None | | None |
| 763bdd8a450d | | | | |
| 5a56d96c-caac-4003-90bc- | None | None | | 3399f232-5876-4781-b12d- |
| 9e4d286dbcbe | | | | f3000dcce041 |
| 6a8a7784-6a5c-4ffc- | None | None | | None |
| bb29-9820832d7485 | | | | |
| 724f2f12-ccdb-4766-ae9c- | None | None | | 3399f232-5876-4781-b12d- |
| b8aba1fac88a | | | | f3000dcce041 |
+-----------------------------+-------------+----------+------------+------------------------------+

9. Run the following commands to create a new security group and add rules to allow SSH and web traffic. Begin by
entering the openstack utility.
[root@rdo-cc ~(keystone_tester)]# openstack

(openstack) security group create --description "Allow http and ssh traffic" web-ssh
+-----------------+--------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------+
| created_at | 2017-01-29T01:14:18Z |
| description | Allow http and ssh traffic |
| headers | |
| id | 28c1056e-d07e-46cc-9092-09c661137a77 |
| name | web-ssh |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| revision_number | 1 |
| rules | created_at=’2017-01-29T01:14:18Z’, direction=’egress’, ethertype=’IPv4’, |
| | id=’3925e8f5-ea72-4c09-ac8e-20e7b8f4298f’, |
| | project_id=’dd25b7768fb84b43a09b9b9b9019e91e’, revision_number=’1’, |
| | updated_at=’2017-01-29T01:14:18Z’ |
| | created_at=’2017-01-29T01:14:18Z’, direction=’egress’, ethertype=’IPv6’, id |
| | =’a5dcb42c-1fee-4e88-8aa5-221b1ab28f67’, |
| | project_id=’dd25b7768fb84b43a09b9b9b9019e91e’, revision_number=’1’, |
| | updated_at=’2017-01-29T01:14:18Z’ |
| updated_at | 2017-01-29T01:14:18Z |
+-----------------+--------------------------------------------------------------------------------+

10. Now add rules to allow SSH and HTTP traffic.


(openstack) security group rule create --protocol tcp --ingress --dst-port 22 web-ssh
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-01-29T01:19:35Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 95d62129-0a86-47a9-84f8-c504f86d93e1 |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 28c1056e-d07e-46cc-9092-09c661137a77 |


| updated_at | 2017-01-29T01:19:35Z |
+-------------------+--------------------------------------+

(openstack) security group rule create --protocol tcp --ingress --dst-port 80 web-ssh
<output-omitted>

11. Verify the new group has both rules:


(openstack) security group rule list web-ssh
+------------------------+-------------+-----------+------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+------------------------+-------------+-----------+------------+-----------------------+
| 3925e8f5-ea72-4c09 | None | None | | None |
| -ac8e-20e7b8f4298f | | | | |
| 51f7c948-0bae-4ffb- | tcp | 0.0.0.0/0 | 80:80 | None |
| b61f-405997f8e0f2 | | | | |
| 95d62129-0a86-47a9-84f | tcp | 0.0.0.0/0 | 22:22 | None |
| 8-c504f86d93e1 | | | | |
| a5dcb42c-1fee- | None | None | | None |
| 4e88-8aa5-221b1ab28f67 | | | | |
+------------------------+-------------+-----------+------------+-----------------------+

12. Exit the openstack utility.


(openstack) exit
[root@rdo-cc ~(keystone_tester)]#

13. View the available images:


[root@rdo-cc ~(keystone_tester)]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 8bb0a060-ad5b-4a5b-9ea2-9f1ea9e826a7 | cirros |
+--------------------------------------+--------+

14. Launch a new instance, called bc1 with the recently configured settings. You will need the network ID
for the finance-internal network. Run openstack network list if you had not saved it from an earlier exercise.
[root@rdo-cc ~(keystone_tester)]# nova boot --flavor smallfry --image cirros \
--security-group web-ssh --key-name finance-key \
--nic net-id=ffe41f70-962f-4693-9014-2275080cd44a bc1
<output_omitted>
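If you prefer the unified client, an equivalent launch is sketched below; it assumes the same flavor, image, security group, key and network ID, and should be run instead of, not in addition to, the nova boot above:

[root@rdo-cc ~(keystone_tester)]# openstack server create --flavor smallfry --image cirros \
--security-group web-ssh --key-name finance-key \
--nic net-id=ffe41f70-962f-4693-9014-2275080cd44a bc1
<output_omitted>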

15. Verify the instance is running. It may take a few seconds to change from build state to active. Take note of the IP
Address. We will use the IP in a following step to gain access.
[root@rdo-cc ~(keystone_tester)]# nova list
<some output_omitted>
| 3a911544-a229-46d7-bbef-ac2cfd832e76 | bc1 | ACTIVE | - | Running \
| finance-internal=10.10.0.6

[root@rdo-cc ~(keystone_tester)]# nova show bc1


<output_omitted>

16. Log into the instance. Look at the list of configured IP namespaces. Typically the last namespace created is the first one
listed, and multiple namespaces may have the same IP range. If you cannot SSH to the instance, check whether another
network also has a 10.0.0.0/24 subnet, then double-check the network security groups are in place and have a rule
which allows SSH access. In Pike the peer id will also show in the ip netns list command; this is a bug listed as fixed
in the Ocata release. Once we find the correct namespace we will log into the instance. If the public key does not work,
the login is cirros with a password of cubswin:)


[root@rdo-cc ~(keystone_tester)]# ip netns list


qrouter-2bd990fc-6b46-4247-9bdc-94464334207f (id: 4)
qrouter-308f5d7f-6d9a-41dd-a1ce-b1b351dd5d4b (id: 5)
qrouter-335699ea-8324-494e-bd1c-200b181124bc (id: 3)
qdhcp-bc968da4-8c2c-4ce3-bebf-c5ad20621824 (id: 2)
qdhcp-c944aaa6-7193-4bdb-b3fb-d22c3adda846 (id: 1)
qdhcp-544e7326-c416-4a2c-9025-e2361b435c1d (id: 0)

[root@rdo-cc ~(keystone_tester)]# ip netns exec \


qrouter-2bd990fc-6b46-4247-9bdc-94464334207f ip a
<output_omitted>
24: qr-c301fda8-d4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ef:bb:09 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/24 brd 10.10.0.255 scope global qr-c301fda8-d4

17. Now that we have found the correct namespace, try to log in. Change the IP to match the nova list output. The password
is only required if the public key was not properly inserted:
[root@rdo-cc ~(keystone_tester)]# ip netns exec \
qrouter-2bd990fc-6b46-4247-9bdc-94464334207f ssh -i ~/.ssh/finance-key \
[email protected]
[email protected]’s password: cubswin:)
$

Connect from One Instance to Another

Task Goal: To tie the concepts together, we will deploy a second instance, then connect between the two instances across a router.
Neutron replaces only the switch side of the network. Our lab environment has some settings that make exterior access complicated,
so we will deploy a new internal network, router and instance. Once we have two instances we will update each route table
and test by connecting via ssh from one instance to the other.

1. Return to the BUI and log in as tester if you are not already. Navigate to the
Project -> Network -> Network Topology page.
2. Select the +Create Network button and create a network called back-office. Assign a subnet called sub-bk-off
with a network address of 192.168.5.0/24. Default values otherwise.

3. Select the +Create Router icon. Give the router the name bk-router. Default values otherwise.

4. When it has been created, use the mouse to select the router. Select the View Router Details button. Then select the
Interfaces tab. Select the +Add Interface button to create two interfaces. Attach one to back-office, with default
values. Attach the second interface to finance-internal, specifying the IP address 10.10.0.10.
The Network Topology should look something like the graphic that follows. You may need to select the Graph tab
followed by Toggle Labels to see all the details.


Figure 10.4: Connecting Instances

5. Now we will add a second instance on a different network. Get a list of networks in order to launch the new instance in
the back-office network.
[root@rdo-cc ~(keystone_tester)]# openstack network list
<output_omitted>
| 580b9d4e-c3da-4215-b9e7-91f349e581c6 | back-office | beeccd33...

6. View the IP addresses of the bk-router ports. Use grep to narrow down the output to only ports on back-office.
[root@rdo-cc ~(keystone_tester)]# openstack port list |grep beeccd33
| 23585c62-3701-4fbb-a0a6-8eabb348d3b3 | | fa:16:3e:74:69:98 | ip_address=’192.168.5.1’, \
subnet_id=’beeccd33-7d86-475e-aed6-163d4acd0cc0’ | ACTIVE |
| 40294a98-bc04-41ec-88e0-8c67561cdd81 | | fa:16:3e:38:58:b9 | ip_address=’192.168.5.2’, \
subnet_id=’beeccd33-7d86-475e-aed6-163d4acd0cc0’ | ACTIVE |

7. Launch an instance named bc2 with the back-office net-id.


[root@rdo-cc ~(keystone_tester)]# nova boot --flavor smallfry --image cirros \
--security-group web-ssh --key-name finance-key \
--nic net-id=580b9d4e-c3da-4215-b9e7-91f349e581c6 bc2
<output_omitted>

8. Find the VM IP address and the correct namespace, then log into the newly deployed instance. Remember it may take a minute
to finish the build and boot. First find the correct namespace, using the id of the router and prepending qrouter- to it.
[root@rdo-cc ~(keystone_tester)]# openstack router show bk-router |grep id
| flavor_id | None |
| id | e7886409-bc48-4877-af10-2de3752f4c67 |
| project_id | e600e54c56b145848d9287474f196be4 |


[root@rdo-cc ~(keystone_tester)]# ip netns exec \


qrouter-e7886409-bc48-4877-af10-2de3752f4c67 ssh -i ~/.ssh/finance-key [email protected]
The authenticity of host ’192.168.5.2 (192.168.5.2)’ can’t be established.
RSA key fingerprint is 03:79:27:9f:1f:72:71:91:5e:2c:cc:f1:6e:e0:1e:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ’192.168.5.2’ (RSA) to the list of known hosts.
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:64:17:63 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.2/24 brd 192.168.5.255 scope global eth0
inet6 fe80::f816:3eff:fe64:1763/64 scope link
valid_lft forever preferred_lft forever

9. Find the current routes. Then add a route for bc1’s network.
$ ip route
default via 192.168.5.1 dev eth0
192.168.5.0/24 dev eth0 src 192.168.5.2
$ sudo -i
# ip route add 10.10.0.0/24 via 192.168.5.1 dev eth0
# exit ; exit

10. Now we need to configure routing back to the other VM, bc2. Remember to use the IP of the router port, not the VM.
Log into bc1 again.
[root@rdo-cc ~(keystone_tester)]# ip netns exec \
qrouter-27bcb5f9-8af5-419f-a0ff-9d109314c8b8 ssh [email protected]
[email protected]’s password: cubswin:)
$ sudo -i
# ip route
default via 10.10.0.1 dev eth0
10.10.0.0/24 dev eth0 src 10.10.0.2
# ip route add 192.168.5.0/24 via 10.10.0.10 dev eth0

11. We should be able to ssh back to the other instance, bc2 using the internal IP Address.
# ssh [email protected]
Host ’192.168.5.2’ is not in the trusted hosts file.
(fingerprint md5 03:79:27:9f:1f:72:71:91:5e:2c:cc:f1:6e:e0:1e:21)
Do you want to continue connecting? (y/n) y
[email protected]’s password: cubswin:)
$ uname -n
bc2

Solution 10.2

Explore the BUI as Non-admin User

3. 1

Create a Router

8. Yes.

Chapter 11

Distributed Cloud Storage with Ceph

11.1 Labs

Exercise 11.1: RDO Openstack Deployment on CentOS

Overview

This exercise uses the RDO OpenStack deployment running on CentOS.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 11.1: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Three new nodes will be made available for use. They will each have an extra disk which we will partition into two equally
sized partitions. We will use one partition on each node to deploy a ceph OSD and leave the other for possible swift proxy
installation. While a ceph cluster has no single node in charge, we will be using our cloud controller as a ceph admin node as
well as a MON node.

RDO Cloud Controller: rdo-cc Admin,MON

storage1
New OSD nodes: storage2
storage3

In our lab environment the only way to connect to the storage nodes is via rdo-cc. Use the browser to connect to rdo-cc,
then use ssh to connect. A public key has already been configured for ease of access, although the steps to duplicate the task
are included for you.


Deploy Ceph into RDO OpenStack

Prepare nodes to support ceph

In addition to updating and installing software we need to make sure that time is in sync between nodes.

Begin on your cloud controller, or ceph admin node. Note on the baseurl line we will be using ceph release Luminous for
CentOS 7, or el7, as in Red Hat Enterprise Linux 7. Other options may be available.

1. Begin by adding the Extra Packages for Enterprise Linux repository.


[centos@rdo-cc ~]$ sudo -i

[root@rdo-cc ~]# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm


<output-omitted>

2. Configure a repository for the ceph software. Note that the URL contains el7: the letter "e", the letter "l", then the digit seven.
[root@rdo-cc ~]# vim /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

3. Use yum to install the ceph-deploy package. Should yum not work, due to ongoing issues with Python dependencies,
you may need to use pip.
[root@rdo-cc ~]# sudo yum -y install ceph-deploy
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
<output_omitted>

4. Ensure that NTP is enabled and synchronized:


[root@rdo-cc ~]# timedatectl
Local time: Mon 2015-04-20 22:06:36 UTC
Universal time: Mon 2015-04-20 22:06:36 UTC
RTC time: Mon 2015-04-20 22:06:35
Timezone: UTC (UTC, +0000)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

5. The ceph-deploy script cannot be used as root. Add a non-root user:


[root@rdo-cc ~]# useradd -d /home/ceph -m ceph

[root@rdo-cc ~]# id ceph


uid=1001(ceph) gid=1001(ceph) groups=1001(ceph)

6. Assign a password for the new user. While ceph is not a great password, it is easy to remember for the lab. We can use
the echo command to set it with a single command.

[root@rdo-cc ~]# echo ceph | passwd --stdin ceph


Changing password for user ceph.
passwd: all authentication tokens updated successfully.


7. Assign the new user the ability to run password-less sudo commands:


[root@rdo-cc ~]# echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph

[root@rdo-cc ~]# chmod 0400 /etc/sudoers.d/ceph

8. Verify you can now run a command using sudo that wouldn’t work without it. The contents of the directory may be
slightly different.
[root@rdo-cc ~]# su - ceph

[ceph@rdo-cc ~]$ ls -a /root


ls: cannot open directory /root: Permission denied

[ceph@rdo-cc ~]$ sudo ls -a /root


keystonerc_admin keystonerc_demo keystonerc_prodtesting rdo.txt
<output-omitted>

9. If you do not already have public-key access, allow password authentication so the key can be copied the first time.
Don't forget to restart sshd after editing and verifying the update.
[ceph@rdo-cc ~]$ sudo \
sed -i ’s/PasswordAuthentication\ no/PasswordAuthentication\ yes/’ /etc/ssh/sshd_config

[ceph@rdo-cc ~]$ sudo grep PasswordAuth /etc/ssh/sshd_config


#PasswordAuthentication yes
PasswordAuthentication yes
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication, then enable this but set PasswordAuthentication

[ceph@rdo-cc ~]$ sudo systemctl restart sshd

10. Repeat steps 5 through 10 on each of the ceph storage nodes. All four nodes will need the user who can run sudo and
password-less ssh. If multiple terminal and PuTTY sessions are possible you can copy and paste between them.
Otherwise connect via SSH. Be careful: the prompts look similar.
[ceph@rdo-cc ~]$ exit
logout
[root@rdo-cc ~]# ssh storage1
The authenticity of host ’storage1 (192.168.98.2)’ can’t be established.
ECDSA key fingerprint is
cc:bc:85:34:fa:ff:0f:60:1f:78:0d:c2:57:68:f8:51.
Are you sure you want to
continue connecting (yes/no)? yes
Warning: Permanently added ’storage1,192.168.98.2’ (ECDSA) to the list of known hosts.

[root@storage1 ~]# #<steps 5-10 on this node>

[root@storage1 ~]# exit


logout
Connection to storage1 closed.
[root@rdo-cc ~]# ssh storage2 # complete the steps 5-10, repeat for each node

11. Return to the ceph admin node. Become the ceph user. Generate a new ssh key-pair for ease of inter-node communi-
cation:
[ceph@rdo-cc ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): <enter>
Created directory ’/home/ceph/.ssh’.
Enter passphrase (empty for no passphrase): <enter>
Enter same passphrase again: <enter>
Your identification has been saved in /home/ceph/.ssh/id_rsa.


Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.


The key fingerprint is:
6d:8f:45:a1:e1:2f:2a:80:b8:d6:22:5c:b1:3e:b9:a4 \
[email protected]

The key’s randomart image is:


+--[ RSA 2048]----+
| . . |
| . o . |
| . o . |
| . .o . o |
|. .o. S + o |
|..+ .. o = |
|o+ * . . . . |
|o + o . |
| E . |
+-----------------+

12. Networking and the proper use of short hostnames are essential to how ceph keeps track of cluster membership. While
a manual install is more flexible, the ceph-deploy script must use the short hostnames. Verify that /etc/hosts includes
each node's short hostname and IP. The following names and IP addresses are examples; use ones that match your
assigned systems. The output of the hostname -s command shows the short hostname. Use that output to populate
the hosts file. Make sure all four nodes have the same /etc/hosts file.
[ceph@ ~]$ hostname -s
rdo-cc

[ceph@rdo-cc ~]$ sudo vim /etc/hosts


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.98.1 rdo-cc
192.168.98.2 storage1
192.168.98.3 storage2
192.168.98.4 storage3

13. Copy the public key to all four nodes, including the ceph admin node itself. Use the short hostname, not the IP, to
double check the hosts entries. The Ceph deployment command will only work with names. Start by copying the key to
storage1.
[ceph@rdo-cc ~]$ ssh-copy-id ceph@storage1
The authenticity of host ’storage1 (192.168.98.2)’ can’t be established.
ECDSA key fingerprint is 17:8a:8f:89:fa:a8:cf:64:fd:a9:0d:b4:63:5a:d6:a8.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that \
are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is \
to install the new keys
ceph@cephnode1’s password:

Number of key(s) added: 1


Now try logging into the machine, with: ssh 'ceph@cephnode1'
and check to make sure that only the key(s) you wanted were added.

14. Add the key to the other nodes, including the node you’re on. The use of a for loop could be helpful as well.
[ceph@rdo-cc ~]$ ssh-copy-id ceph@storage2

[ceph@rdo-cc ~]$ ssh-copy-id ceph@storage3

[ceph@rdo-cc ~]$ ssh-copy-id ceph@rdo-cc
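A for loop can save some typing here; a sketch that assumes the same four node names used above:

[ceph@rdo-cc ~]$ for node in rdo-cc storage1 storage2 storage3; do ssh-copy-id ceph@$node; done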


15. You may need to allow remote sudo commands to run on all four nodes as well. Newer versions of Ubuntu no longer
have this setting.
[ceph@rdo-cc ~]$ sudo sed -i ’s/requiretty/\!requiretty/’ /etc/sudoers

16. The firewall should already be off. Disable SELinux as well until the service has been properly configured. Run the
following commands on all four nodes. If a node reboots you will need to disable SELinux again.
[ceph@rdo-cc ~]$ sudo setenforce 0; sudo yum -y install \
yum-plugin-priorities

17. You may also need to allow traffic through your firewall. Use a log statement to monitor traffic and add rules as necessary.
Typically this will be an issue when the storage nodes try to connect to the monitor during activation. When ready for
production remember to return and lock down ssh access, firewall and SELinux.
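One possible approach, sketched here on the assumption that plain iptables is in use, is to log and then allow traffic to the monitor port (6789/tcp) while watching the system log:

[ceph@rdo-cc ~]$ sudo iptables -I INPUT -p tcp --dport 6789 -j LOG --log-prefix "ceph-mon: "
[ceph@rdo-cc ~]$ sudo iptables -I INPUT -p tcp --dport 6789 -j ACCEPT
[ceph@rdo-cc ~]$ sudo tail -f /var/log/messages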

18. Update all the nodes. The rdo-cc node may throw an error about a package issue with python and zeromq. In this lab
the error can be ignored. The unused storage nodes should have no such issue.
[ceph@rdo-cc ~]$ sudo yum update -y

Deploy a Monitor

We will use the cloud controller both as the ceph admin node and a monitor node.

1. Create a directory to hold cluster configuration files:


[ceph@rdo-cc ~]$ mkdir ceph-cluster

[ceph@rdo-cc ~]$ cd ceph-cluster/

2. Before we deploy a monitor we will need to create various configuration files. Review the output of the command. Notice
the information about creating a new cluster named ceph. The Debug, Info and Warning output is expected. Watch for
Errors, often in red if your output shows color. If you receive a traceback error like "import pkg_resources" you may be
encountering a missing dependency. Install the python-pip package with yum. After installation try the ceph-deploy again.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy new rdo-cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy new rdo-cc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7ff6b
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephde
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : [’rdo-cc’]
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
<output-omitted>

3. Add a line to the global section reducing the required number of osds to two. The configuration file currently accepts values
with and without underscores.


[ceph@rdo-cc ceph-cluster]$ vim ceph.conf


...
osd pool default size = 2

4. Install the ceph software on each of the four nodes. We will wait to deploy a third ceph OSD storage node, but may as
well install the software now. You may receive an error on the rdo-cc node. This is to be expected, and is handled in the
next command. Even though there is an error, the script creates the file it will need to continue.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy install --release luminous \
rdo-cc storage1 storage2 storage3
<output_omitted>
[rdo-cc][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: ’ceph’

5. View the differences between various yum repo files. Remove the .rpmnew file and install again.
[ceph@rdo-cc ceph-cluster]$ sudo ls -l /etc/yum.repos.d/ceph*

[ceph@rdo-cc ceph-cluster]$ sudo rm /etc/yum.repos.d/ceph.repo.rpmnew

6. Recall or retype the same command to install Ceph; it should complete without errors this time.
[ceph@rdo-cc my-cluster]$ ceph-deploy install --release luminous \
rdo-cc storage1 storage2 storage3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy install --release
luminous rdo-cc storage1 storage2 storage3
<output_omitted>
[rdo-cc][DEBUG ] Complete!
[rdo-cc][INFO ] Running command: sudo ceph --version
[rdo-cc][DEBUG ] ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a)
luminous (stable)

7. Create at least one monitor. Three to five are suggested for Paxos quorum. They should not be nodes used for OSD.
We will create a single monitor in this exercise.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy mon create-initial
<output_omitted>

8. Only if the command fails: to run it again, use the --overwrite-conf option.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy --overwrite-conf mon create-initial

9. Use of cephx requires keys to be used by every node in the cluster. Deploy the keys to all nodes, including the rdo-cc.
[ceph@rdo-cc my-cluster]$ ceph-deploy admin rdo-cc storage1 storage2 storage3
<output_omitted>

10. The created keyring is not readable by anyone but root, by default. In order to run commands we need to add read
access.
[ceph@rdo-cc my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

11. Starting with the Luminous release there needs to be a Ceph manager, mgr running. This daemon collects information
about the cluster.
[ceph@rdo-cc my-cluster]$ ceph-deploy mgr create rdo-cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy mgr
create rdo-cc
<output_omitted>


12. At this point we can test our configuration and keys by looking at the cluster health. It should report HEALTH_OK. There
should be one mon and one mgr on the rdo-cc node, but zero osds. If you get errors, check the existence of and access to the
keyrings. It can be helpful to open a second terminal or PuTTY session and run ceph -w in that window while working
through the following steps.
[ceph@rdo-cc my-cluster]$ ceph -s
cluster:
id: 16975b02-b6d9-4ea7-97ab-85fdebdf32d0
health: HEALTH_OK

services:
mon: 1 daemons, quorum rdo-cc
mgr: rdo-cc(active)
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs:

Deploy OSD nodes for the cluster

The suggestion is to use a 1 terabyte disk dedicated to ceph. In our exercise we will use a second, 20G disk attached to our
storage nodes, /dev/xvdb.

1. Create an OSD on your first storage node. Review the output to understand the various steps. Once verified we will
create the second and third OSDs, as sketched after the next step.
[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy osd create
--data /dev/xvdb storage1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None

<output_omitted>

[storage1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat


--format=json
[ceph_deploy.osd][DEBUG ] Host storage1 is now ready for osd use.

2. Verify the OSD shows as up and in.


[ceph@rdo-cc my-cluster]$ ceph -s
cluster:
id: 16975b02-b6d9-4ea7-97ab-85fdebdf32d0
health: HEALTH_OK
<output_omitted>

osd: 1 osds: 1 up, 1 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 1024 MB used, 19451 MB / 20476 MB avail
pgs:
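With the first OSD verified, the second and third OSDs can be created the same way; a sketch assuming the spare disk is also /dev/xvdb on the other storage nodes:

[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage2
[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage3
[ceph@rdo-cc my-cluster]$ ceph -s
<output_omitted>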


3. View the detailed status of the cluster. Note that the -w option means watch. It will capture the window until you use
ctrl-c to stop the command. This can be helpful for troubleshooting and watching activity in a second terminal.
[ceph@rdo-cc ceph-cluster]$ ceph -s
[ceph@rdo-cc ceph-cluster]$ ceph -w
[ceph@rdo-cc ceph-cluster]$ ceph health
[ceph@rdo-cc ceph-cluster]$ ceph health detail

4. From the status outputs, how much space is in the ceph cluster? (This value will depend on disks added by the trainer)

Add some data to the cluster

To ensure the ceph cluster is working we will add some test data using rados.

1. View default pools. You will probably only see one pool.
[ceph@rdo-cc ceph-cluster]$ ceph osd lspools
0 rbd,

2. Create a pool and verify it. We will call the pool test and configure 100 placement groups:
[ceph@rdo-cc ceph-cluster]$ ceph osd pool create test 100
pool ’test’ created

[ceph@rdo-cc ceph-cluster]$ ceph osd lspools


0 rbd,1 test,

3. Create an object to store:


[ceph@rdo-cc ceph-cluster]$ echo "Hello World" > /tmp/hello.txt

4. Add the object to ceph using rados:


[ceph@rdo-cc ceph-cluster]$ rados put try-1 /tmp/hello.txt --pool test

5. Verify object existence and placement. Note that it will not return an error because it is not a standard filesystem:
[ceph@rdo-cc ceph-cluster]$ rados -p test ls try-1

[ceph@rdo-cc ceph-cluster]$ ceph osd map test try-1


osdmap e30 pool ’test’ (1) object ’try-1’ -> pg 1.6b948dec (1.2c) -> \
up ([0,4], p0) acting ([0,4], p0)

6. Write the object try-1 out to a new file, called /tmp/newfile.


[ceph@rdo-cc ceph-cluster]$ rados get try-1 /tmp/newfile --pool=test
[ceph@rdo-cc ceph-cluster]$ cat /tmp/newfile
Hello World

7. Remove the object.


[ceph@rdo-cc ceph-cluster]$ rados rm try-1 --pool test


(Optional) Remove an OSD from the cluster

You may need to remove an OSD from the cluster for storage upgrades or other maintenance. This may cause the current
lab cluster to show warnings for being undersized. Add the OSD back to the cluster, as shown in the previous task, to remove the
warnings.

1. Verify the state of the cluster is healthy. Make sure you have enough replicas and space prior to OSD removal.

[ceph@rdo-cc ceph-cluster]$ ceph -s


cluster 3b8ea299-28ce-44ab-bfd2-d4e3ccb2bc35
health HEALTH_OK
monmap e1: 1 mons at {rdo-cc=172.31.41.239:6789/0}
election epoch 2, quorum 0 rdo-cc
osdmap e28: 3 osds: 3 up, 3 in
pgmap v86: 64 pgs, 1 pools, 0 bytes data, 0 objects
23242 MB used, 36093 MB / 59335 MB avail
64 active+clean

[ceph@rdo-cc ceph-cluster]$ ceph osd stat


osdmap e28: 3 osds: 3 up, 3 in

2. View the current configuration. Note which node osd.2 is on.


[ceph@rdo-cc ceph-cluster]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02998 root default
-2 0.00999 host ip-172-31-0-23
0 0.00999 osd.0 up 1.00000 1.00000
-3 0.00999 host ip-172-31-0-29
1 0.00999 osd.1 up 1.00000 1.00000
-4 0.00999 host storage1
2 0.00999 osd.2 up 1.00000 1.00000

3. Mark the OSD out of the cluster. It may take a while to migrate the placement groups. Use the ceph -w command to view the migration.
If the migration seems to be taking too long, as happens with small clusters, you may have to re-weight the OSD.
Reference online documentation for these steps.
[ceph@rdo-cc ceph-cluster]$ ceph osd out osd.2

[ceph@rdo-cc ceph-cluster]$ ceph -w

4. Stop the OSD daemon on the storage node. Connect to the storage node hosting the OSD you are trying to remove.

[root@storage1 ~]# systemctl stop ceph-osd@2

5. Return to the admin node and remove the OSD from the crush map.
[ceph@rdo-cc ceph-cluster]$ ceph osd crush remove osd.2

6. Remove the authorization from the OSD.


[ceph@rdo-cc ceph-cluster]$ ceph auth del osd.2

7. Remove the OSD from ceph.

[ceph@rdo-cc ceph-cluster]$ ceph osd rm osd.2

8. Verify the OSD is not in the config file. Remove it if it is.


[ceph@rdo-cc ceph-cluster]$ sudo vim /etc/ceph/ceph.conf
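A quick check before opening the file is to grep for the OSD; a sketch:

[ceph@rdo-cc ceph-cluster]$ sudo grep -n "osd.2" /etc/ceph/ceph.conf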

Solution 11.1

Deploy two OSD nodes for the cluster

10. 20460 MB

Add an OSD to a ceph cluster

6. 30690 MB avail or 10 GB more

Exercise 11.2: Configure Glance to Use Ceph

Overview

Now that we have a working ceph cluster, we can use it as a backend for several other services.

This exercise uses the RDO OpenStack deployment running on CentOS.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.

Figure 11.2: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Three new nodes will be made available for use. They will each have an extra disk which we will partition into two equally
sized partitions. We will use one partition on each node to deploy a ceph OSD and leave the other for possible swift proxy


installation. While a ceph cluster has no single node in charge, we will be using our cloud controller as a ceph admin node as
well as a MON node.

RDO Cloud Controller: rdo-cc Admin,MON

storage1
New OSD nodes: storage2
storage3

In our lab environment the only way to connect to the storage nodes is via rdo-cc. Use the browser to connect to rdo-cc,
then use ssh to connect. A public key has already been configured for ease of access, although the steps to duplicate the task
are included for you.

Configure ceph as a backend to Glance

1. Create a pool of 100 placement groups for the glance service to use.
[ceph@rdo-cc ceph-cluster]$ ceph osd pool create images 100
pool ’images’ created

2. Generate a keyring for glance and make it persistent. Be very careful with ceph auth commands. If you make a mistake,
the only way to make changes is to disable security and restart ceph on every node in the cluster.
[ceph@rdo-cc ceph-cluster]$ ceph auth get-or-create client.glance mon ’allow r’ \
osd ’allow class-read object_prefix rbd_children, allow rwx pool=images’

[ceph@rdo-cc ceph-cluster]$ ceph auth get-or-create client.glance \


| sudo tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQDMJD9Vo88sBRAAxa2kmmibqnv9eBSIuq0h4w==

[ceph@rdo-cc ceph-cluster]$ sudo chown glance:glance \


/etc/ceph/ceph.client.glance.keyring

3. Edit /etc/glance/glance-api.conf. Uncomment and edit the following parameters. It is important that each variable
remain in the correct part of the file; appending these values to the end may not work. Remember that when you switch
the store to Ceph you will need to re-import images from the current store. You can download them with glance
image-download, as sketched after this step.
[ceph@rdo-cc ceph-cluster]$ sudo vim /etc/glance/glance-api.conf
default_store=rbd
show_image_direct_url=True
stores=rbd
rbd_store_chunk_size = 8
rbd_store_pool=images
rbd_store_user=glance
rbd_store_ceph_conf=/etc/ceph/ceph.conf
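Existing images are not migrated automatically; a sketch of re-importing one after the restart in the next step. The image ID placeholder must be replaced with a real ID from glance image-list, the name cirros-rbd is only an illustration, and the disk format should match the original image:

[root@rdo-cc ~(keystone_admin)]# glance image-download --file /tmp/cirros-orig.img <existing-image-id>
[root@rdo-cc ~(keystone_admin)]# glance image-create --name=cirros-rbd --disk-format=qcow2 \
--container-format=bare --progress < /tmp/cirros-orig.img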

4. Restart glance for the changes to take effect:


[ceph@rdo-cc ceph-cluster]$ sudo systemctl restart openstack-glance-api

5. Create a file showing data usage inside the ceph cluster before uploading a new image.
[ceph@rdo-cc ceph-cluster]$ ceph -s > /tmp/ceph.before

6. Install wget if not already installed and download a small image to the cloud controller:

[ceph@rdo-cc ceph-cluster]$ sudo -i

[root@rdo-cc ceph-cluster]# yum -y install wget


7. Download a small test image. The URL is one long path. The image version may change over time. If the download
does not work verify the path with a browser and use the new version in the following commands.
[root@rdo-cc ceph-cluster]# wget \
http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

8. Source the keystone file:


[root@rdo-cc ceph-cluster]# source keystonerc_admin

9. Import the image into glance.


[root@rdo-cc ~(keystone_admin)]# glance image-create --name=wceph \
--disk-format=raw --container-format=bare \
--progress < cirros-0.4.0-x86_64-disk.img
<output-omitted>

10. When it finishes verify there is a wceph image along with the previous cirros.
[root@rdo-cc ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| efd776bc-344a-4d5f-9207-f2fea2b447aa | wceph |
| 09b92efc-6567-4e07-b9af-24cc6bc85f85 | cirros |
+--------------------------------------+--------+

11. Record the new ceph cluster status.


[root@rdo-cc ~(keystone_admin)]# ceph -s > /tmp/ceph.after

12. Compare the before and after files. The after file should show about 12 MB of usage.
[root@rdo-cc ~(keystone_admin)]# diff /tmp/ceph.before /tmp/ceph.after
5c5
< osdmap e21: 3 osds: 3 up, 3 in
---
> osdmap e24: 3 osds: 3 up, 3 in
7,8c7,8
< pgmap v60: 364 pgs, 4 pools, 0 bytes data, 0 objects
< 15463 MB used, 99677 MB / 112 GB avail
---
> pgmap v70: 364 pgs, 4 pools, 12859 kB data, 7 objects
> 15511 MB used, 99628 MB / 112 GB avail

13. Now that we know Ceph works as an image store, enable the previous store so that existing images are available. Edit the
stores line in the glance-api.conf file, then restart the service.
[ceph@rdo-cc ceph-cluster]$ sudo vim /etc/glance/glance-api.conf
....
stores=rbd,file,http
....

[ceph@rdo-cc ceph-cluster]$ sudo systemctl restart openstack-glance-api

Chapter 12

OpenStack Object Storage with Swift

12.1 Labs

Exercise 12.1: Configure Object Storage with Swift

Overview

Prior to ceph, the common network-based object storage implementation was swift. Leveraging memcached, it allows for
fast access to data both from a single node and via a proxy service. We deployed swift on a local loopback device via
packstack in an earlier lab.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 12.1: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

This lab uses RDO running on CentOS.

Start Using Swift

As the Swift project predates OpenStack it has many features and capabilities. We will begin with a basic view of the tool,
creating a container and uploading an object.

1. We installed swift using packstack, which creates a loopback device. Begin by looking at the device and through some
of the options the swift command will accept.
[root@rdo-cc ~]# source keystonerc_admin

[root@rdo-cc ~(keystone_admin)]# df -ha |grep swift


/dev/loop1 1.9G 6.3M 1.7G 1% /srv/node/swiftloopback

2. The BUI, the openstack utility, curl, and the swift command can manage object storage. Let's begin with swift. Run
the command without any arguments to get the help output.
[root@rdo-cc ~(keystone_admin)]# swift
usage: swift [--version] [--help] [--os-help] [--snet] [--verbose]
<output-omitted>

3. Create a new container called orders, perhaps to hold online orders for a website.
[root@rdo-cc ~(keystone_admin)]# swift post orders

4. Verify the container was created.


[root@rdo-cc ~(keystone_admin)]# swift list
orders


5. View the basic swift status. Note the Bytes currently used is zero as no objects have been uploaded. There should be
one container and no objects or bytes.
[root@rdo-cc ~(keystone_admin)]# swift stat
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1486070098.65823
X-Trans-Id: txcbdfeeb5920141b48d95a-005893a168
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes

6. Look at the details of the orders container. There should be no ACLs set.
[root@rdo-cc ~(keystone_admin)]# swift stat orders
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL:
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:14:59 GMT
X-Timestamp: 1486070098.68344
X-Trans-Id: tx3ac71ad62694409189c25-005893a18f
Content-Type: text/plain; charset=utf-8

7. We will look deeper using the -v option. Note the storage URL, which can be used with curl commands.
[root@rdo-cc ~(keystone_admin)]# swift stat -v
StorageURL: http://172.31.20.51:8080/v1/AUTH_e1e7401f7e9744a390b5ea5252a70903
Auth Token: eb3cb8058c2546ae924a624e32ab1be5
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1486070098.65823
X-Trans-Id: tx57ccef5091244936a74f7-005893a433
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
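As a sketch of using that storage URL directly, substituting your own token and URL from the swift stat -v output above:

[root@rdo-cc ~(keystone_admin)]# curl -i -H "X-Auth-Token: eb3cb8058c2546ae924a624e32ab1be5" \
http://172.31.20.51:8080/v1/AUTH_e1e7401f7e9744a390b5ea5252a70903/orders

A successful request returns an HTTP 2xx status and lists any objects in the orders container.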

View and Set Access Control Lists (ACL)

Swift allows for granular assignment of read and write access based on project and user, among other metadata. In this
task we will set, modify and remove access control lists.

1. Objects can have complex read and write access control lists. Begin by allowing read access by everyone.
[root@rdo-cc ~(keystone_admin)]# swift post orders -r ".r:*"


2. Verify the ACL has been set.


[root@rdo-cc ~(keystone_admin)]# swift stat orders
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL: .r:*
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: tx2a9608f97e4c42f5bee1b-005893a1bc
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:16:40 GMT
X-Timestamp: 1486070098.68344
Content-Type: text/plain; charset=utf-8

3. Narrow down read permissions to members of the SoftwareTesters group. Then verify the ACL has been set. Watch
what happens to the previous ACL.
[root@rdo-cc ~(keystone_admin)]# swift post orders -r "SoftwareTesters:*"

[root@rdo-cc ~(keystone_admin)]# swift stat orders


Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL: SoftwareTesters:*
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: txb59a81084c904a7daf36b-005893a23a
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:18:46 GMT
X-Timestamp: 1486070098.68344
Content-Type: text/plain; charset=utf-8

4. Set a write ACL to be just a single user, developer1 in the SoftwareTesters group.
[root@rdo-cc ~(keystone_admin)]# swift post orders -w "SoftwareTesters:developer1"

5. Verify the ACLs.


[root@rdo-cc ~(keystone_admin)]# swift stat orders
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL: SoftwareTesters:*
Write ACL: SoftwareTesters:developer1
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: txaa6077eb425b47898cee2-005893a298
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:20:17 GMT
X-Timestamp: 1486070098.68344
Content-Type: text/plain; charset=utf-8

6. Update the write ACL with a comma-separated list of projects and users. Configure the ACL so that only developer2 from
SoftwareTesters, plus all members of the Admin group, can write. Verify the setting.


[root@rdo-cc ~(keystone_admin)]# swift post orders -w "SoftwareTesters:developer2,Admin:*"

[root@rdo-cc ~(keystone_admin)]# swift stat orders


Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL: SoftwareTesters:*
Write ACL: SoftwareTesters:developer2,Admin:*
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: tx4cc152d83c644949bb074-005893a2db
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:21:27 GMT
X-Timestamp: 1486070098.68344
Content-Type: text/plain; charset=utf-8

7. List all uploaded objects. We have not uploaded anything so there should be no output.
[root@rdo-cc ~(keystone_admin)]# swift list orders

8. Upload a file to the orders container. We’ll use the /etc/hosts file as it’s common.
[root@rdo-cc ~(keystone_admin)]# swift upload orders /etc/hosts
etc/hosts

9. Verify the container has grown and contains an object.


[root@rdo-cc ~(keystone_admin)]# swift stat orders
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 1
Bytes: 159
Read ACL: SoftwareTesters:*
Write ACL: SoftwareTesters:developer2,Admin:*
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Trans-Id: tx2c3eff3f8ec54947bed97-005893a3e6
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:25:49 GMT
X-Timestamp: 1486070098.68344
Content-Type: text/plain; charset=utf-8

10. View the default metadata for the newly uploaded object.
[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:25:50 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
Meta Mtime: 1456161427.668295
Accept-Ranges: bytes
X-Timestamp: 1486070749.21636
X-Trans-Id: tx6fd1c49c371841d68d5cf-005893a3fa

11. Configure the existing object to expire after ten minutes. The command accepts time in seconds.
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-After:600"


12. Verify the time. Note that it does not show the time set or a countdown, but the epoch time in seconds at which the object will
expire. Also note the overall number of fields is the same.
[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:28:43 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
X-Delete-At: 1486071522
Accept-Ranges: bytes
X-Timestamp: 1486070922.15140
X-Trans-Id: tx6ba1017928694f8a86520-005893a495

13. Set the object to expire at a particular time in the future. First determine the current epoch time in seconds.
[root@rdo-cc ~(keystone_admin)]# date +’%s’
1486070948
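Rather than adding the thousand seconds by hand, the shell can compute the target epoch used in the next step; a sketch based on the example value above:

[root@rdo-cc ~(keystone_admin)]# echo $(( $(date +'%s') + 1000 ))
1486071948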

14. Add a thousand seconds to the reported time and verify the new setting.
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-At:1486071948"

15. Verify the value has been set.


[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:29:59 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
X-Delete-At: 1486071948
Accept-Ranges: bytes
X-Timestamp: 1486070998.55971
X-Trans-Id: txbee72fc120434e0395c8c-005893a4de

16. If we decide we don’t want the object to expire we can pass the X-Remove-Delete-At parameter with no value after the
colon.

[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts \


-H "X-Remove-Delete-At:"

17. Verify the Delete-At times have been removed and the X-Timestamp shows instead.

[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts


Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:31:02 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
Accept-Ranges: bytes
X-Timestamp: 1486071061.74091
X-Trans-Id: txe4e446f3259b49b6a359a-005893a524


Use the BUI to Manage Swift

While less powerful than the command line the BUI offers easy access to objects and containers. In this task we will explore
using the OpenStack dashboard.

1. Use the BUI to verify the previously uploaded file exists and view its settings. Log in as admin to your OpenStack
dashboard. Navigate to the Project -> Object Store -> Containers. Select the orders container.

2. Notice that the object rests in a directory structure. Select the etc link. Work through the drop-down options on the
hosts line without making changes. Note there is no mention of the expiration time or ability to change it.
3. In the orders container box note that the Public Access box is not selected and shows as disabled. Check the box.
The word disabled should be replaced with a link. Right-click on the link and copy the link location.

4. Paste the URL into a new browser window. Edit the URL. Replace the internal IP address (172.24.xx.yy type address)
with the Public IP address (54.212.aa.bb type address). Once the edit is complete press enter to retrieve the page. The
page should show the XML for orders container.

5. Edit the URL again. Append the file name etc/hosts. The browser should show a pop-up window, prompting you to
download a file. Download the file. Locate the file and open it with a text editor. You should see the contents of your
/etc/hosts file.
6. Now download the file via the command line. Return to your terminal session. Use the swift command to download the
file to the current directory and change the file’s name.
[root@rdo-cc ~(keystone_admin)]# swift download orders etc/hosts -o localfile

7. Verify the file has what we expect. The file should look the same as what was downloaded via the BUI.
[root@rdo-cc ~(keystone_admin)]# cat localfile
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

8. Configure the container to allow web access and set the type to listing.css. We will verify it later using the openstack
utility.
[root@rdo-cc ~(keystone_admin)]# swift post -m ’web-listings: true orders’

[root@rdo-cc ~(keystone_admin)]# swift post -m ’web-listings-css:listing.css’ orders

9. We will again set an expire time, then check to see if the object still exists. Set the timer for 30 seconds.

[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-After:30"

10. Use the sleep command to make sure 30 seconds has passed then view the status of the object.
[root@rdo-cc ~(keystone_admin)]# sleep 30

[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts


Object HEAD failed: http://172.31.20.51:8080/v1/AUTH_e1e7401f7e9744a390b5ea5252a70903/orders/etc/hosts \
404 Not Found
Failed Transaction ID: txb1612900191a4b3fb218e-005893a575

Manage Object Using the openstack Utility

The openstack utility gains more features of previous per-service commands with each release. We will explore the
current capabilities.

11. Open the openstack utility and view the object sub-commands using the help command.


[root@rdo-cc ~(keystone_admin)]# openstack


(openstack)
(openstack) help object store account set
usage: object store account set [-h] --property <key=value>

Set account properties

optional arguments:
-h, --help show this help message and exit
--property <key=value>
Set a property on this account (repeat option to set
multiple properties)

12. Upload the /etc/group file to the orders container.


(openstack) object create orders /etc/group
+------------+-----------+----------------------------------+
| object | container | etag |
+------------+-----------+----------------------------------+
| /etc/group | orders | bdca0add22e8e2395684d8563f39f942 |
+------------+-----------+----------------------------------+

13. View the objects in the orders container. Note the path was picked up differently. In Pike there appears to be an
undocumented issue, where python says "ascii codec can't decode byte 0xe2 in position 17: ordinal not in range(128)".
This can be ignored; the following commands show the objects are actually there.
(openstack) object list orders
+------------+
| Name |
+------------+
| /etc/group |
| etc/hosts |
+------------+

14. View the newly uploaded object.


(openstack) object show orders /etc/group
+----------------+---------------------------------------+
| Field | Value |
+----------------+---------------------------------------+
| account | AUTH_e1e7401f7e9744a390b5ea5252a70903 |
| container | orders |
| content-length | 1076 |
| content-type | application/octet-stream |
| etag | bdca0add22e8e2395684d8563f39f942 |
| last-modified | Thu, 02 Feb 2017 21:36:32 GMT |
| object | /etc/group |
+----------------+---------------------------------------+

15. View the object store information. Note the Web-Listings parameter we set in a previous task.
(openstack) object store account show
+------------+---------------------------------------+
| Field | Value |
+------------+---------------------------------------+
| Account | AUTH_e1e7401f7e9744a390b5ea5252a70903 |
| Bytes | 1811 |
| Containers | 1 |
| Objects | 2 |
| properties | Web-Listings=’true orders’ |
+------------+---------------------------------------+

16. Delete the group file.


(openstack) object delete orders /etc/group

17. View the current object store account information. Note it may take a while for the Objects output to update. The
background daemon typically runs once a minute.
(openstack) object store account show
+------------+---------------------------------------+
| Field | Value |
+------------+---------------------------------------+
| Account | AUTH_e1e7401f7e9744a390b5ea5252a70903 |
| Bytes | 1235 |
| Containers | 1 |
| Objects | 1 |
| properties | Web-Listings=’true orders’ |
+------------+---------------------------------------+

18. Explore openstack object and openstack object store account commands as time permits. Using command output
and online resources build a list of various metadata settings possible for an object or a container.
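For example, arbitrary metadata can be attached to a container with the set sub-command; a sketch, where department=finance is just an illustrative key/value pair:

(openstack) container set --property department=finance orders

(openstack) container show orders
<output_omitted>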

Chapter 13

High Availability in the Cloud

13.1 Labs

There is no lab to complete for this chapter.

Chapter 14

Cloud Security with OpenStack

14.1 Labs

There is no lab to complete for this chapter.

Chapter 15

Monitoring and Metering

15.1 Labs

There is no lab to complete for this chapter.

Chapter 16

Cloud Automation

16.1 Labs

Exercise 16.1: Create our first heat stack

Overview

This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 16.1: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Downloading Template files

You are encouraged to write the YAML files by hand, to learn the proper syntax. A collection of files has been made available
to use as well. They may still require some editing to match your UUIDs. You can download them using wget:

[root@rdo-cc ~]# cd
[root@rdo-cc ~]# wget https://training.linuxfoundation.org/cm/LFS452/heat-templates.tar \
--user=LFtraining --password=Penguin2014

These files may be unpacked with:

[root@rdo-cc ~]# tar xvf heat-templates.tar

Gather system information

1. Before we can deploy an instance using a simple heat stack we need to choose a network to join. Just as when the
BUI is used, if there is more than one network available, one must be chosen to launch an instance. We will use the
Accounting Internal network for our new instance. Note the network ID for later use. This example begins with
a9b90a59
[root@rdo-cc ~]# source keystonerc_admin

[root@rdo-cc ~(keystone_admin)]# openstack network list |grep Accounting


| a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f | Accounting Internal | 9cd5357f-de10-....
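
If you prefer not to copy the UUID by hand, the ID can be captured into a shell variable. This is an optional convenience; the NET_ID variable name is just an example and the ID returned in your lab will differ:
[root@rdo-cc ~(keystone_admin)]# NET_ID=$(openstack network list --name "Accounting Internal" -f value -c ID)
[root@rdo-cc ~(keystone_admin)]# echo $NET_ID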

2. Create a YAML file for a simple, one-instance stack. Syntax is very important. If you do not indent whitespace
properly you will receive an error, something like Error parsing template, with further sections calling out blocks
such as expected <block end>, but found '<block mapping start>'. Edit the file so that similar sections are equally
indented.
The network ID should match the output from the previous openstack network list command.
[root@rdo-cc ~(keystone_admin)]# vim hello_world.yaml

heat_template_version: 2015-04-30

description: Simple template to deploy a single compute instance

resources:
server:
type: OS::Nova::Server
properties:
image: cirros
flavor: m1.tiny
networks:
- network: a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f
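
Before creating the stack you may want to check the template for syntax problems. The following optional command, provided by the orchestration plugin of the openstack client, prints the parsed template on success and an error message on failure:
[root@rdo-cc ~(keystone_admin)]# openstack orchestration template validate -t hello_world.yaml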

3. Connect to the BUI using a web browser in a new window. Log in as Admin. Make sure the Accounting Internal network
is shared. Navigate to Admin -> System -> Networks. Select the Edit Network button on the Accounting Internal
line. Click Shared, then Save Changes.
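
If you prefer the command line, the same change can be made with a single command; this is simply an alternative to the BUI steps above:
[root@rdo-cc ~(keystone_admin)]# openstack network set --share "Accounting Internal"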

4. To view the stack being created, navigate to the Network Topology page. The following commands should cause the page
to update automatically as resources are created and destroyed.

5. Using the CLI, use the openstack stack create command to deploy a new instance. Watch the BUI for updates.
[root@rdo-cc ~(keystone_admin)]# openstack stack create -t hello_world.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | 5a9c5109-3484-431c-a5f1-dee90eeb0574 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2017-02-17T23:09:50Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+-----------------------------------------------------+

6. After a few seconds the instance should finish deploying. Verify the stack status. Keep trying until it reports CREATE_COMPLETE.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 007328c3-9dd4-48fa-8a2a-2a4cd59ce171 | stack1     | f20dbe1137784471855e893154253f48 | CREATE_COMPLETE | 2018-06-15T15:40:24Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
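
If the stack remains in CREATE_IN_PROGRESS, the per-resource events can show what Heat is waiting on. An optional way to review them:
[root@rdo-cc ~(keystone_admin)]# openstack stack event list stack1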

7. Use a nova command to verify the instance was created. Note the name is a derivative of the stack used to create it.
[root@rdo-cc ~(keystone_admin)]# nova list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name                       | Status | Task State | Power State | Networks                         |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | -          | Running     | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+


8. Review the details of the stack and settings.


[root@rdo-cc ~(keystone_admin)]# openstack stack show stack1
+---------------------+------------------------------------------------------+
| Field               | Value                                                |
+---------------------+------------------------------------------------------+
| id                  | 007328c3-9dd4-48fa-8a2a-2a4cd59ce171                 |
| stack_name          | stack1                                               |
| description         | Simple template to deploy a single compute instance  |
| creation_time       | 2018-06-15T15:40:24Z                                 |
| updated_time        | None                                                 |
| stack_status        | CREATE_COMPLETE                                      |
<output_omitted>
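
Heat also stores the template that was submitted. If you want to see exactly what the stack was built from, this optional command prints it back:
[root@rdo-cc ~(keystone_admin)]# openstack stack template show stack1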

Exercise 16.2: More complex stack

Create a more complex stack

This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.

Figure 16.2: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.


1. Now add more resources to the stack. Review a list of possible resource types. Use the BUI to navigate to
Project -> Orchestration -> Resource Types and look through the various types. View types that begin with
OS::Neutron. Look at the details for OS::Neutron::Subnet.
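
The same resource type information is available from the command line. As an optional alternative to the BUI you could run:
[root@rdo-cc ~(keystone_admin)]# openstack orchestration resource type list | grep Neutron
[root@rdo-cc ~(keystone_admin)]# openstack orchestration resource type show OS::Neutron::Subnet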
2. Create another YAML file and populate it with the instance information we used before, adding network, router, and
interface information. Again note that proper whitespace indentation and syntax are essential. The following command
opens a complete file; consider trying on your own first.
[root@rdo-cc ~(keystone_admin)]# vim netandserver.yaml

heat_template_version: 2015-04-30

description: Instance, router and network

resources:
internal_net:
type: OS::Neutron::Net

internal_subnet:
type: OS::Neutron::Subnet
properties:
network_id: { get_resource: internal_net }
cidr: "10.8.1.0/24"
dns_nameservers: [ "8.8.8.8", "8.8.4.4" ]
ip_version: 4

internal_router:
type: OS::Neutron::Router
properties:
external_gateway_info: { network: public }

internal_interface:
type: OS::Neutron::RouterInterface
properties:
router_id: { get_resource: internal_router }
subnet: { get_resource: internal_subnet }

server:
type: OS::Nova::Server
properties:
image: cirros
flavor: m1.tiny
networks:
- network: { get_resource: internal_net }

3. Deploy the more complex stack.


[root@rdo-cc ~(keystone_admin)]# openstack stack create \
-t netandserver.yaml stack2
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | 70fe1b28-bb30-4059-b26c-96524621bdf7 |
| stack_name | stack2 |
| description | Instance, router and network |
| creation_time | 2018-06-15T16:38:25Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+--------------------------------------+

4. View the status of the stacks. Depending on how fast you read and type, the stack may already have finished being created.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 70fe1b28-bb30-4059-b26c-96524621bdf7 | stack2     | f20dbe1137784471855e893154253f48 | CREATE_COMPLETE | 2018-06-15T16:38:25Z | None         |
| 007328c3-9dd4-48fa-8a2a-2a4cd59ce171 | stack1     | f20dbe1137784471855e893154253f48 | CREATE_COMPLETE | 2018-06-15T15:40:24Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+

5. Look at the details of the stack.


[root@rdo-cc ~(keystone_admin)]# openstack stack show stack2
+-----------------------+------------------------------------------------------+
| Field                 | Value                                                |
+-----------------------+------------------------------------------------------+
| id                    | 70fe1b28-bb30-4059-b26c-96524621bdf7                 |
| stack_name            | stack2                                               |
| description           | Instance, router and network                         |
| creation_time         | 2018-06-15T16:38:25Z                                 |
| updated_time          | None                                                 |
| stack_status          | CREATE_COMPLETE                                      |
| stack_status_reason   | Stack CREATE completed successfully                  |
<output_omitted>
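
To see each of the resources Heat created for this stack, along with their individual status, an optional check is:
[root@rdo-cc ~(keystone_admin)]# openstack stack resource list stack2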

6. Open the BUI and verify the newly created resources. If you view the Network Topology you should find a new router,
network and instance.

7. Now shut down the stack and release the deployed resources.
[root@rdo-cc ~(keystone_admin)]# openstack stack delete stack2
Are you sure you want to delete this stack(s) [y/N]? y

8. Verify the resources are no longer in use from the CLI and BUI. Depending on how fast you type, you may see stack2
in a DELETE_COMPLETE state before it is fully removed.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 5a9c5109-3484-431c-a5f1-dee90eeb0574 | stack1 | CREATE_COMPLETE | 2016-10-28T17:24:31 | None |
+--------------------------------------+------------+-----------------+---------------------+--------------+

Exercise 16.3: Snapshots and updating stacks

This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.

The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.

Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.

The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.


Figure 16.3: Katacoda Horizon Login

The suggested and tested browser to use is Chrome, although others may work.

Work with snapshots and updating stacks

1. A snapshot allows us to save a stack configuration and roll back to that point of configuration. Begin by creating a
snapshot of stack1.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot create stack1
+---------------+--------------------------------------+
| Field | Value |
+---------------+--------------------------------------+
| ID | b5987b66-082d-49c9-b0f2-f9a4831ea44c |
| name | None |
| status | IN_PROGRESS |
| status_reason | None |
| data | None |
| creation_time | 2017-02-17T23:30:57Z |
+---------------+--------------------------------------+

2. Verify the new snapshot for stack1.


[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot list stack1
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| id | name | status | status_reason | creation_time |
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| b5987b66-082d-49c9-b0f2-f9a4831ea44c | None | COMPLETE | Stack SNAPSHOT completed successfully | 2017-02-17T17:34:40 |
+--------------------------------------+------+----------+---------------------------------------+---------------------+

Update a stack

1. Update the YAML file to create a cinder volume and attach it to the existing instance. Again be mindful of the indentation.
To keep both versions easy to use, first copy the file.


[root@rdo-cc ~(keystone_admin)]# cp hello_world.yaml hello_world-2.yaml

[root@rdo-cc ~(keystone_admin)]# vim hello_world-2.yaml

heat_template_version: 2015-04-30

description: Simple template to deploy a single compute instance

resources:
server:
type: OS::Nova::Server
properties:
image: cirros
flavor: m1.tiny
networks:
- network: a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f

cinder_volume:
type: OS::Cinder::Volume
properties:
size: 1
volume_attachment:
type: OS::Cinder::VolumeAttachment
properties:
volume_id: { get_resource: cinder_volume }
instance_uuid: { get_resource: server }
mountpoint: /dev/sdb
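
If you would like to preview what the update would change before applying it, recent heat clients support a dry run. This step is optional and the output format may vary between releases:
[root@rdo-cc ~(keystone_admin)]# openstack stack update --dry-run -t hello_world-2.yaml stack1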

2. Update the stack, passing the newly updated YAML file and the name of the stack to update. Note that the command
outputs the stack condition before the update has actually been applied.
root@rdo-cc ~(keystone_admin)# openstack stack update -t hello_world-2.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | de645167-4478-4ea4-a1f4-622035b50dd6 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2017-02-17T23:08:23Z |
| updated_time | 2017-02-17T23:34:42Z |
| stack_status | UPDATE_IN_PROGRESS |
| stack_status_reason | Stack UPDATE started |
+---------------------+-----------------------------------------------------+

3. Wait a moment and verify the update took place.


[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| de645167-4478-4ea4-a1f4-622035b50dd6 | stack1 | UPDATE_COMPLETE | 2017-02-17T23:08:23Z | 2017-02-17T23:34:42Z |
+--------------------------------------+------------+-----------------+----------------------+----------------------+

4. Verify the instance continues to run, using the openstack server list (or nova list) command. If it is not running, use openstack server start to start it.
[root@rdo-cc ~(keystone_admin)]# openstack server list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name                       | Status | Task State | Power State | Networks                         |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | -          | Running     | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+

5. Use the openstack server command to view the newly attached storage device.


[root@rdo-cc ~(keystone_admin)]# openstack server show stack1-server-mgvaepymkyqa


+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| Accounting Internal network | 192.168.0.11 |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | rdo-cc |
| OS-EXT-SRV-ATTR:hypervisor_hostname | rdo-cc.us-west-2.compute.internal |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-10-28T17:24:40.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2016-10-28T17:24:32Z |
| flavor | m1.tiny (1) |
| hostId | abdae7be68521c557f12caa85588a8c97f79ea1ef84a6ae2ae602038 |
| id | 7c31cfe9-92c9-4c67-9b4f-61e971626296 |
| image | cirros (12eab556-b969-47e8-86fa-8bb261e2bd86) |
| key_name | - |
| metadata | {} |
| name | stack1-server-mgvaepymkyqa |
| os-extended-volumes:volumes_attached | [{"id": "8a272a63-1f7f-4cbf-8256-126d86168603"}] |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | fabac54826614401b57a7b1cb7dab941 |
| updated | 2016-10-28T17:34:46Z |
| user_id | 5858b7e81cc240b885efd554bdf33367 |
+--------------------------------------+----------------------------------------------------------+

6. Using the previous output, verify the storage exists and note which instance ID it is attached to.
[root@rdo-cc ~(keystone_admin)]# cinder list
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
| ID                                   | Status | Migration Status | Name                              | Size | Volume Type | Bootable | Multiattach | Attached to                          |
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
| 8a272a63-1f7f-4cbf-8256-126d86168603 | in-use | -                | stack1-cinder_volume-qni4na3exrot | 1    | -           | false    | False       | 7c31cfe9-92c9-4c67-9b4f-61e971626296 |
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
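
The same information is available through the unified client, which can be easier to read than the wide cinder table; an optional equivalent is:
[root@rdo-cc ~(keystone_admin)]# openstack volume list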

7. Look at the volume details. Note that the server_id matches the instance.
[root@rdo-cc ~(keystone_admin)]# cinder show stack1-cinder_volume-qni4na3exrot
+---------------------------------------+------------------------------------------------------------------------------+
| Property                              | Value                                                                        |
+---------------------------------------+------------------------------------------------------------------------------+
| attachments                           | [{u'server_id': u'7c31cfe9-92c9-4c67-9b4f-61e971626296',                     |
|                                       |   u'attachment_id': u'4061289f-d891-4ecf-8cbc-3560ca5a43dd',                 |
|                                       |   u'host_name': None,                                                        |
|                                       |   u'volume_id': u'8a272a63-1f7f-4cbf-8256-126d86168603',                     |
|                                       |   u'device': u'/dev/vdb',                                                    |
|                                       |   u'id': u'8a272a63-1f7f-4cbf-8256-126d86168603'}]                           |
| availability_zone                     |                                                                              |
| bootable                              |                                                                              |
<output omitted>

Revert/Roll back a snapshot

The process of snapshots and rollbacks is changing. Currently the rollback causes the instance to error out. While there are
bug reports, the documentation suggests using templates for each stage instead of a snapshot. By using multiple
YAML files you can select a particular stack state without having taken a snapshot.


1. Use the original version of the template file and revert the instance. The volume should be deleted, and the instance
should continue to run. Log into the instance and verify the uptime extends back before the most recent update, showing
the instance was not rebuilt. Your network namespace will be different. Also remember Cirros has been changing its default
password from 'cubswin:)' to 'gocubsgo'. If one does not work, try the other.
[root@rdo-cc ~(keystone_admin)]# openstack stack update -t hello_world.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | 1d072060-5589-436d-a7a4-aadda61bc240 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2018-06-15T20:12:18Z |
| updated_time | 2018-06-15T20:46:35Z |
| stack_status | UPDATE_IN_PROGRESS |
| stack_status_reason | Stack UPDATE started |
+---------------------+-----------------------------------------------------+

[root@rdo-cc ~(keystone_admin)]# ip netns list


qrouter-ec0d864f-3dcb-47d6-8dd2-32268db720df (id: 3)
qdhcp-dc40445f-6f5a-4c2d-868c-10d833a68f6d (id: 2)
qdhcp-8d80de63-c3fc-4203-8616-b3ed6e3e40eb (id: 1)
qrouter-ae030f72-e163-4a83-9d37-6297d610da4a (id: 0)

[root@rdo-cc ~(keystone_admin)]# ip netns exec \


qrouter-ec0d864f-3dcb-47d6-8dd2-32268db720df ssh [email protected]
[email protected]'s password: gocubsgo
$ uptime
21:50:06 up 37 min, 1 users, load average: 0.00, 0.00, 0.00

2. Verify via command line or BUI, the volume should no longer be attached or exist.

3. Now try the process of using the snapshot. Again this does not seem to work in the Pike version at time of writing. Verify
the status of the snapshot.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot list stack1
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| id | name | status | status_reason | creation_time |
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| 75e88260-3c7c-4658-bdf9-f96fd3b21b8a | None | COMPLETE | Stack SNAPSHOT completed successfully | 2016-10-28T17:34:40 |
+--------------------------------------+------+----------+---------------------------------------+---------------------+

4. Look at the details of the snapshot. Note there is no reference to the cinder volumes.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot show stack1 75e88260-3c7c-4658-bdf9-f96fd3b21b8a
snapshot:
creation_time: ’2017-02-17T23:30:57Z’
data:
action: SNAPSHOT
environment:
<output-omitted>

5. Shut down the instance before rolling back. Verify it has shut down before continuing. The stack-restore process will
not check and could leave the instance unusable.
[root@rdo-cc ~(keystone_admin)]# openstack server stop stack1-server-mgvaepymkyqa

[root@rdo-cc ~(keystone_admin)]# openstack server show stack1-server-mgvaepymkyqa

+--------------------------------------+----------------------------------------------------------+
| Field | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | rdo-cc.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | rdo-cc.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
| OS-EXT-STS:power_state | Shutdown |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | stopped |
<output ommited>


6. Get the ID of the snapshot to use.


[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot list stack1
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| id | name | status | status_reason | creation_time |
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| 75e88260-3c7c-4658-bdf9-f96fd3b21b8a | None | COMPLETE | Stack SNAPSHOT completed successfully | 2016-10-28T17:34:40 |
+--------------------------------------+------+----------+---------------------------------------+---------------------+

7. Using the snapshot ID and the name of the stack to roll back, undo whatever has changed since the snapshot was taken.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot restore stack1 75e88260-3c7c-4658-bdf9-f96fd3b21b8a

8. Verify the roll back took place. Look at the stack_status.


[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+------------------+---------------------+---------------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+------------------+---------------------+---------------------+
| 5a9c5109-3484-431c-a5f1-dee90eeb0574 | stack1 | RESTORE_COMPLETE | 2016-10-28T17:24:31 | 2016-10-28T20:05:55 |
+--------------------------------------+------------+------------------+---------------------+---------------------+

9. Use the BUI, cinder and nova commands to verify you have the instance, but no longer have an attached volume. You
may have to start the instance if it is not running. Note: You may have an error instead. This worked prior to Pike.
[root@rdo-cc ~(keystone_admin)]# nova list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | - | Running | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+

[root@rdo-cc ~(keystone_admin)]# cinder show stack1-cinder_volume-qni4na3exrot


ERROR: No volume with a name or ID of ’stack1-cinder_volume-qni4na3exrot’ exists.

10. Now we can delete the remaining stack. Answer yes when asked if you want to delete the stack.
[root@rdo-cc ~(keystone_admin)]# openstack stack delete stack1
+--------------------------------------+------------+------------------+---------------------+---------------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+------------------+---------------------+---------------------+
| 5a9c5109-3484-431c-a5f1-dee90eeb0574 | stack1 | RESTORE_COMPLETE | 2016-10-28T17:24:31 | 2016-10-28T20:05:55 |
+--------------------------------------+------------+------------------+---------------------+---------------------+

[root@rdo-cc ~(keystone_admin)]# openstack stack list


+----+------------+--------------+---------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+----+------------+--------------+---------------+--------------+
+----+------------+--------------+---------------+--------------+

11. Verify the newly created instance is not in the nova list.
[root@rdo-cc ~(keystone_admin)]# nova list | grep stack1

Chapter 17

Conclusion

17.1 Labs

There is no lab to complete for this chapter.
