LFS252 OpenStack OCA
OpenStack Administration Fundamentals
Version 2018-09-11
© Copyright the Linux Foundation 2018. All rights reserved.
Open source code incorporated herein may have other copyright holders and is used pursuant to the applicable open source
license.
The training materials are provided for individual use by participants in the form in which they are provided. They may not be
copied, modified, distributed to non-participants or used to provide training to others without the prior written consent of The
Linux Foundation.
No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without express prior
written consent.
Published by:
No representations or warranties are made with respect to the contents or use of this material, and any express or implied
warranties of merchantability or fitness for any particular purpose are specifically disclaimed.
Although third-party application software packages may be referenced herein, this is for demonstration purposes only and
shall not constitute an endorsement of any of these software applications.
Linux is a registered trademark of Linus Torvalds. Other trademarks within this course material are the property of their
respective owners.
If there are any questions about proper and fair use of the material herein, please contact:
[email protected]
Contents
1 Introduction
1.1 Labs
2 Cloud Fundamentals
2.1 Labs
6 Reference Architecture
6.1 Labs
17 Conclusion
17.1 Labs
Chapter 1
Introduction
1.1 Labs
Thus, the sensible procedure is to configure things such that single commands may be run with superuser privilege, by using
the sudo mechanism. With sudo the user only needs to know their own password and never needs to know the root password.
If you are using a distribution such as Ubuntu, you may not need to do this lab to get sudo configured properly for the course.
However, you should still make sure you understand the procedure.
To check if your system is already configured to let the user account you are using run sudo, just do a simple command like:
$ sudo ls
You should be prompted for your user password and then the command should execute. If instead, you get an error message
you need to execute the following procedure.
Launch a root shell by typing su and then giving the root password, not your user password.
On all recent Linux distributions you should navigate to the /etc/sudoers.d subdirectory and create a file, usually with the
name of the user to whom root wishes to grant sudo access. However, this convention is not actually necessary as sudo will
scan all files in this directory as needed. The file can simply contain:
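For example, assuming the user account is named student (substitute your own user name), the file could contain just the single line:
student ALL=(ALL) ALL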
An older practice (which certainly still works) is to add such a line at the end of the file /etc/sudoers. It is best to do so using
the visudo program, which is careful about making sure you use the right syntax in your edit.
You probably also need to set proper permissions on the file by typing:
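For example, assuming the file was created as /etc/sudoers.d/student:
chmod 440 /etc/sudoers.d/student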
(Note some Linux distributions may require 400 instead of 440 for the permissions.)
After you have done these steps, exit the root shell by typing exit and then try to do sudo ls again.
There are many other ways an administrator can configure sudo, including specifying only certain permissions for certain
users, limiting searched paths etc. The /etc/sudoers file is very well self-documented.
However, there is one more setting we highly recommend you do, even if your system already has sudo configured. Most
distributions establish a different path for finding executables for normal users as compared to the root user. In particular, the
directories /sbin and /usr/sbin are not searched, since sudo inherits the PATH of the invoking user, not that of the root user.
Thus, in this course we would have to be constantly reminding you of the full path to many system administration utilities;
any enhancement to security is probably not worth the extra typing and figuring out which directories these programs are in.
Consequently, we suggest you add the following line to the .bashrc file in your home directory:
PATH=$PATH:/usr/sbin:/sbin
If you log out and then log in again (you don’t have to reboot) this will be fully effective.
Chapter 2
Cloud Fundamentals
2.1 Labs
Overview
All access to lab systems takes place via the Katacoda browser interface. In some labs the deployed cloud will include several
instances. Access to secondary instances will take place through the browser interface using SSH from one virtual instance
to another.
The suggested and tested browser to use is Chrome, although others may work. The course material includes a URL for lab
access. You will use your Linux Foundation login and password to gain access. It may take up to 24 hours after registration
for your email to be added to the lab environment.
Each URL will bring you to an environment which has been pre-configured with the lab steps up to that point. This allows you
to work on a lab again, without having to redo all the steps up to that point.
Some labs will use the Horizon BUI to manage the cloud graphically. The Katacoda page offers a second tab to access the
BUI, named OpenStack Dashboard. The URL can also be found in the /opt/host file on the instance. There will not be a
web page until you have successfully installed OpenStack.
Please be sure to use the Shutdown Cluster link when finished with the lab to release the resources. It will ask if you want to
shut down the cluster; answer with y for yes.
Should you want a second terminal to test or view real-time output you can select the plus sign +, which will show a drop-down
menu. From that menu choose Open New Terminal.
There are two OpenStack deployments in this course, using two distributions. DevStack will be deployed on Ubuntu for the
early labs and RDO will be deployed on CentOS for later labs.
Different lab equipment may be available for each lab, so be sure to begin each lab by choosing the link provided for each
exercise section.
The DevStack installer must be run as a non-root user. If a suitable user does not already exist, add a new user and configure
it to use sudo without needing to provide a password.
1. Check your user ID and whether you can use sudo to become root. If the ubuntu user already exists you won’t need to
create it in the following steps. The prompt may be an indication as well.
ubuntu@base-xenial:~$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),....
ubuntu@base-xenial:~$ sudo -i
$ id
uid=0(root) gid=0(root) groups=0(root)
$ exit
2. Add the new ubuntu user, if it does not already exist. If the user exists you’ll receive an error.
$ useradd -m -d /home/ubuntu -s /bin/bash ubuntu
useradd: user ’ubuntu’ already exists
3. Assign a password for the new user, in this case we’ll use LFtrain! as the password. You won’t see the output as you
type the password for security reasons.
$ passwd ubuntu
Enter new UNIX password: LFtrain!
Retype new UNIX password: LFtrain!
passwd: password updated successfully
4. Update the /etc/sudoers file to allow the ubuntu user full sudo access, without requiring a password. There may be a
stack or sudo user listed with the same ability. It may be easiest to copy, paste, and edit that line.
$ vim /etc/sudoers
....
%sudo ALL=(ALL) NOPASSWD:ALL
stack ALL=(ALL) NOPASSWD:ALL
ubuntu ALL=(ALL) NOPASSWD:ALL # Add this line
5. Become the ubuntu user and test sudo usage. You should be able to view the contents of a protected directory without
error. Note that the prompt will change to show the user, node name and current directory.
$ su - ubuntu
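For example, listing a protected directory such as /root (the directory choice here is arbitrary):
$ sudo ls /root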
6. While the installation script will choose a primary network interface, it is good practice to configure the interface and IP
address to use. Begin by finding the IP address of the primary interface. In the example below the IP is 172.17.0.13;
your IP may be different.
ubuntu@openstack:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether 02:42:ac:11:00:0d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.13/16 brd 172.17.255.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:d/64 scope link
valid_lft forever preferred_lft forever
7. Be aware that as these labs could run in a variety of places the specific interface and IP addresses may be different.
The following labs will use a generic prompt. The use of devstack-cc is to indicate the command should be run on
the DevStack cloud controller node. The use of compute-node will indicate the command should be run on an added
worker node instead.
DevStack is under active development. What you download could be different from a download made just minutes later. While
most updates are benign, there is a chance that a new version could render a system difficult or impossible to use. Never
deploy DevStack on an otherwise production machine.
1. Before we can download the software we will need to update the package information and install a version control system
command, git.
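On Ubuntu this is typically done with:
ubuntu@devstack-cc:~$ sudo apt-get update
ubuntu@devstack-cc:~$ sudo apt-get install -y git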
ubuntu@devstack-cc:~$ pwd
/home/ubuntu
ubuntu@devstack-cc:~$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/pike
Cloning into ’devstack’...
<output_omitted>
3. The newly installed software can be found in a new sub-directory named devstack. Installation is performed by a shell
script called stack.sh. Take a look at the file:
ubuntu@devstack-cc:~$ cd devstack
ubuntu@devstack-cc:~/devstack$ less stack.sh
4. There are several files and scripts to investigate. If you have issues during installation and configuration you can use the
unstack.sh and clean.sh scripts to (usually) return the system to the starting point:
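For example:
ubuntu@devstack-cc:~/devstack$ ./unstack.sh
ubuntu@devstack-cc:~/devstack$ ./clean.sh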
5. We will need to create a configuration file for the installation script. A sample has been provided to review. Use the
contents of the file to answer the following questions.
7. There are several test and exercise scripts available, found in sub-directories of the same name. A good, general test is
the run_tests.sh script.
Due to the constantly changing nature of DevStack these tests are not always useful or consistent. You can expect
to see errors but still be able to use OpenStack without issue. For example, missing software should be installed by the
upcoming stack.sh script.
Keep the output of the tests and refer back to it as a place to start troubleshooting if you encounter an issue.
ubuntu@devstack-cc:~/devstack$ ./run_tests.sh
While there are many possible options we will do a simple OpenStack deployment. Create a ~/devstack/local.conf file.
Parameters not found in this file will use default values, prompt for input at the command line, or be given a random value.
1. OpenStack is written in Python, and as such there may be extra steps required when either project updates. In our
environment we will need to install a particular Python package using a Python tool instead of the default apt-installed
version. Begin by removing the OS-packaged tool:
ubuntu@openstack:~/devstack$ sudo apt-get remove python-psutil
Reading package lists... Done
Building dependency tree
Reading state information... Done
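If the pip command itself is not yet available it would need to be installed first; on this Ubuntu release that is typically done with:
ubuntu@openstack:~/devstack$ sudo apt-get install -y python-pip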
3. Use the pip installer to install the psutil package. There may be some warnings in the output having to do with directory
ownership and the version of pip. These warnings can be safely ignored.
ubuntu@openstack:~/devstack$ sudo pip install psutil
<output_omitted>
Downloading https://files.pythonhosted.org/packages/14/a2/8ac7dda36
e03950ec2668ab1b466314403031c83a95c5efc81d2acf163/psutil-5.4.5.tar.gz
100% || 419kB 1.7MB/s
Installing collected packages: psutil
Running setup.py install for psutil ... done
Successfully installed psutil-5.4.5
You are using pip version 8.1.1, however version 10.0.1 is available.
You should consider upgrading via the ’pip install --upgrade pip’ command.
4. We will create a basic configuration file. In our labs we’ll use ens3 and its IP address, found in an earlier step, when
you create the following file.
ubuntu@devstack-cc:~devstack$ vim local.conf
[[local|localrc]]
HOST_IP=172.17.0.13
FLAT_INTERFACE=ens3
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret
# Use the following to explore new project
enable_plugin barbican https://git.openstack.org/openstack/barbican stable/pike
The following command will generate a lot of output to the terminal window. The stack.sh script will run for 20 to 37 minutes.
ubuntu@devstack-cc:~devstack$ ./stack.sh
<output_omitted>
2. View the directory where various logs have been made. If the logs are not present you may have an issue with the
syntax of the local.conf file:
ubuntu@devstack-cc:~devstack$ ls -l /opt/stack/logs
DevStack runs under a user account. DevStack is not meant to be durable, so there is no longer a rejoin script. If the node
reboots, you must run stack.sh again.
The Horizon software produces a web page for management. By logging into this Browser User Interface (BUI) we can
configure almost everything in OpenStack. The look and feel may be different from what you see in the book, as the project
and vendors update it often.
1. With Katacoda we are using a browser interface to access the command line as well as HTTP access. You can either
use the second tab on the page, which will open another browser page or the URL found in /opt/host.
2. Log into the BUI with a username of admin and a password of openstack. Using the tabs on the left, navigate to the
drop-down named Project. You will find three other drop-downs: Compute, Volumes and Network. Choose the Compute
drop-down, then the Overview tab. It should look something like the following:
3. Navigate to the Admin -> Compute -> Hypervisors page. Use the Hypervisor and Compute Host sub-tabs to
answer the following questions.
a. How many hypervisors are there?
b. How many VCPUs are used?
c. How many VCPUs total?
d. How many compute hosts are there?
e. What is its state?
6. Navigate through the other tabs and subtabs to become familiar with the BUI.
Solution 2.2
3. a. 1
b. 0
c. 2
d. 1
e. up
4. a. 0
5. a. 6
Chapter 3
Managing Guests: Virtual Machines with OpenStack Compute
3.1 Labs
Overview
In a previous exercise you deployed an All-In-One DevStack instance, running on Ubuntu. Use the provided link to begin a
new lab with the previous configurations already completed.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
In this exercise we will investigate available resources, configure our cloud and deploy a virtual machine. You may see the
pop-up error "Error: Unable to retrieve usage information." This can safely be ignored.
All access to lab systems takes place through a browser type interface. In some labs the deployed cloud will include several
instances. Access to secondary instances will take place through the browser interface using SSH from the command line.
This lab uses DevStack running on Ubuntu. Later labs will use the RDO version of OpenStack running on CentOS.
During the OpenStack installation, a configuration file is created with login and environmental information. DevStack creates
a file, .localrc.auto, with the password information from the stack.sh script and the local.conf information.
1. Use the browser link to connect to command line terminal of the DevStack instance.
2. Change into the devstack directory and find the password to log into the BUI as the user admin.
ubuntu@devstack-cc:~/devstack$ grep ADMIN_PASSWORD .localrc.auto
ADMIN_PASSWORD=openstack
3. Use the second tab of the Katacoda window and find the Horizon BUI. You can also look inside /opt/host.
4. Log into the BUI with a username of admin and the password output of the previous grep command.
5. There are two drop-downs across the top of the BUI. One says admin, the other alt_demo or demo. What does each
represent?
a. Left drop down:
b. Right drop down:
Create A Project
A project, once known as a tenant, is a collection of resources available to a user or customer. It allows an OpenStack
administrator to delegate resources and the ability to control them.
3. Fill out the Project Information tab with these values. Reference the following graphic:
Name: SoftwareTesters
Description: A project for software testers
4. Modify the Quotas tab with the following values. Leave the others as default. Reference the following graphic for any
unpopulated fields which require a value (It may look slightly different depending on distribution):
VCPUs: 5
Instances: 5
Floating IPs: 2
5. Once you have completed editing both tabs select Create Project. You should have returned to the Projects page.
6. Find the newly created line for SoftwareTesters. Select the drop-down next to Manage Members and select
Edit Project. Notice you can edit any of the settings you have made.
Add A User
While the admin user is able to manage the infrastructure of OpenStack, we will create a user with member privileges for a
project to deploy and manage an instance.
2. Select the +Create User button. Fill it out to match the following graphic, then select the Create User button.
3. Select the Users tab on the left and verify the new user is in the list. Use the button in the upper right to sign out as the
user admin and log back in as the user developer1 with the password you set, openstack
4. Using the information at the top of the BUI, what project is developer1 working with?
5. Working through the tabs on the left, what are some differences of the developer1 view?
a.
b.
c.
d.
Before we launch an instance we will create a network and router to attach it to.
3. Now to create a router. The UUID of the network namespace (covered in a later chapter) is derived from the router the
instance network is attached to. We will find this UUID to know which network to use. First we create the router. Select
the +Create Router button in the upper right. Enter net-router as the name, then select Create Router in the lower
right.
4. Use the mouse to hover over the router icon (looks like a small X). Select the blue link: View Router Details.
5. Select the second tab over, for Interfaces then +Add interface. Select the drop-down for the Net1: 10.0.0.64/25
(sub-net1) subnet then the Submit button. When the screen refreshes the Status will show as Down. After a minute if
you reload the page it should show as Active.
6. Change to the Overview tab when the interface has been added and take note of the ID. In the following graphic the ID
begins with 4fd279 and ends with e0d2. Yours will be different.
7. Return to the command line and view the newly created namespace. Use the ip netns list command. We will learn
more about namespaces in a later chapter. Note the line which begins with qrouter- and has the same ID as the router
we just created. You will use this information to connect to an instance on this network in a future step.
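For example (the IDs on your system will differ):
ubuntu@devstack-cc:~/devstack$ sudo ip netns list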
By default only egress is allowed to an instance. We will add ssh ingress to the default group.
1. Use the BUI to navigate to the Project -> Network -> Security Groups page. Select the Manage Rules button on
the default line.
2. Select the Add Rule button. Fill it out as per the following graphic. You will need to select the Rule drop down and scroll
to see the SSH option.
Now that we have a new network and router, we can deploy a new instance. Once it has fully spawned you will log into the
instance via the new namespace.
2. Select the Launch Instance button in the upper right. Fill it out according to the following graphic. You will need to work
with each tab marked with an asterisk. If you receive the error Error: Host <name> is not mapped to any cell,
return to the command line and type: nova-manage cell_v2 discover_hosts. This is a hiccup with Pike.
Note that you must first select Boot from Image in the Instance Boot Source drop-down before you will be able to
select an image to use. Also set the Delete Volume on Instance Delete slider to Yes to avoid running out of space.
The icon to move a resource from Available to Allocated is either an arrow or a plus sign.
3. When you have worked through the tabs and entered the necessary fields select the Launch Instance button. It will
be grayed out until all requirements are met. We will revisit the other tabs in later exercises. The instance should now
be spawning.
Details:
Instance Name: devOS1
Source:
Select Boot Source: Image
Set Delete Volume on Instance Delete to Yes
Add the cirros image using small up arrow, lower right
Flavor:
Flavor: m1.tiny
4. At first the new instance will show a Task state of scheduling, Block Device Mapping then Spawning. When that finishes,
usually within a minute, the status should change from Build to Active. Take note of the listed IP Address:
6. Navigate from the Overview tab to Action Log reviewing the information available. The Console tab may not work
because of the nature of the lab environment. If we were local to devstack-cc we would be able to log into the
instance.
8. If so, what is the listed username and password, right above the login: prompt?
9. Return to the command line. Using sudo get a list of network namespaces. Look for a line beginning with qrouter-
containing the UUID of the router we created earlier via the BUI.
ubuntu@devstack-cc:~/devstack$ sudo ip netns list
qrouter-4fd279c4-b125-4611-956d-adc67432e0d2
qdhcp-f7695fb9-577b-45fd-bcc4-75b3dc0d7c74
qrouter-8112815b-45d4-4c7c-8af2-4c06e9e86994
qdhcp-dea04a0d-3deb-419b-8955-9f6d3a2fa5e4
10. Now that we know which namespace to use, again use sudo and ip netns exec to run the ssh command in that
namespace. Use the IP Address for your instance, which may be different than the example below. The command is on
three lines for readability. Once you log into the instance run a few commands and create a file to be used in a future
lab:
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \
qrouter-4fd279c4-b125-4611-956d-adc67432e0d2 \
ssh [email protected]
The authenticity of host ’10.0.0.74 (10.0.0.74)’ can’t be established.
RSA key fingerprint is 27:6b:b3:f0:4e:44:01:70:51:e8:ad:1b:28:31:e0:aa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ’10.0.0.74’ (RSA) to the list of known hosts.
[email protected]’s password: cubswin:)
$ uname -a
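A later lab expects a file named uname.out on this instance, so a reasonable set of commands to run before logging out is:
$ uname -a > uname.out
$ exit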
Congratulations, you just deployed your first instance and logged in!
Viewing Resources
With an instance deployed we should see some usage on various BUI screens.
4. Navigate to the Admin -> Compute -> Hypervisors page. It should look something like this, but the exact numbers
don’t matter as much as the difference in view from this and the admin user view we will see in a few steps:
6. a. What can we know about the difference between an admin view and a project view?
b. How many VCPUs do you have remaining?
7. Sign out of the BUI as admin and back in as developer1. We will deploy two more instances and view resource usage.
Note that the Hypervisor Summary as admin indicates we are currently using one of four VCPUs. The developer1
view shows the quota totals not the actual resources, even if the quota is much larger than the actual resources.
8. After logging in, navigate to the Project -> Compute -> Instances page.
9. Select the Launch Instance button and deploy two instances. Work down the tabs on the left filling in the necessary
information. Select the Launch Instance button in the lower right once the fields have been updated.
Details:
Instance Name: devOS2
Count: 2
Source:
Select Boot Source: Image
Set Delete Volume on Instance Delete to Yes
Allocated: cirros image using small up arrow, lower right
Flavor:
Allocated: m1.tiny
10. When you have entered the appropriate information select the Launch Instance button. Wait until the instances finish
spawning. Did you receive any errors?
14. Navigate to the Admin -> Compute -> Hypervisors page. How many VCPUs in use?
16. Select devos2-1 and devos2-2 then select the red Delete Instances button.
17. When the pop-up asks for confirmation select the Delete Instances button.
18. You should have one remaining instance. You can verify this from the command line:
ubuntu@devstack-cc:~$ sudo virsh list --all
Id Name State
----------------------------------------------------
2 instance-00000001 running
Solution 3.1
Add A User
1. Software Testers
2. a. No admin tab
b. Insufficient privilege to add users or projects
c. View only that project's resources, which do not reflect the actual system at all
1. 10.0.0.74
4. yes
5. cirros cubswin:)
2. a. 1
b. 1
c. 512
d. 50G
6. a. Those with admin ability see the actual usage, the project view represents a view of quota not real resources
b. 1
10. No
14. 3
Overview
In a previous exercise you deployed an All-In-One DevStack instance running on Ubuntu, then configured a project and user
and deployed a new virtual machine.
Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.2. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Use the ip
command, as shown in a previous task, to determine the IP address for eth0 for the new instance and reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.
In this exercise we will grow our cloud by adding a Nova compute node. Connect to the terminal via the browser. The only
way to connect to compute-node is via the devstack-cc node.
An SSH key pair for the ubuntu user has been created and its public key pre-populated on the compute-node. If asked to
accept the SSH fingerprint, choose yes. Use exit to return to devstack-cc when necessary. For example:
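Assuming the node is reachable by the name compute-node, as the prompts later in this lab suggest:
ubuntu@devstack-cc:~$ ssh compute-node
ubuntu@compute-node:~$ exit
logout
Connection to compute-node closed.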
The backslash in the git command following is to indicate that the command should be on one line.
1. Install the git command and pull down the DevStack software.
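This mirrors the steps used on the controller; for example, assuming the same repository and branch as the earlier clone:
ubuntu@compute-node:~$ sudo apt-get update ; sudo apt-get install -y git
ubuntu@compute-node:~$ git clone \
https://git.openstack.org/openstack-dev/devstack -b stable/pike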
2. Find the private IP address of the compute node. Update the table at the beginning of the lab for future reference. Your
IP may be different than the example below.
ubuntu@compute-node:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP
group default qlen 1000
link/ether 02:1f:91:1e:db:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.97.2/20 brd 172.31.47.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::1f:91ff:fe1e:db18/64 scope link
valid_lft forever preferred_lft forever
3. We need to create another local.conf file, similar to the first one but with some differences. This file will point to the IP Address
of the first node so that the script can sign in to the various services. We will also limit which services are enabled on
the new node. Note the flat interface may be different. Nodes dedicated to compute services don’t need access to the
same networks as a head node or the network node and may use a data network instead.
ubuntu@compute-node:~$ cd devstack ; vim local.conf
[[local|localrc]]
HOST_IP=192.168.97.2 # IP for compute-node
SERVICE_HOST=192.168.97.1 # devstack-cc IP, first node you used
FLAT_INTERFACE=ens3
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret
DATABASE_TYPE=mysql
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
4. Before running the stack.sh script, save the output of the ip command for later comparison:
ubuntu@compute-node:~devstack$ ip addr show > ~/ip.before.out
5. Install the DevStack software on the second node. If there are issues, double-check and edit the local.conf configu-
ration file, run ./unstack.sh and ./clean.sh and try again. Ask for assistance if you continue to receive errors.
ubuntu@compute-node:~devstack$ ./stack.sh
<output_omitted>
6. Once the script has finished, check to see if you have a second hypervisor. As admin, navigate to
Admin -> Compute -> Hypervisors. The Hypervisor tab should show two hostnames, as does the Compute Host
tab.
If not, you will need to use a five-step process to enable the new node. You may see some output about Python code
deprecation. This can be ignored if the node is added. Your hostnames and IP addresses may be different. Below we
find only one hypervisor after adding the compute node.
ubuntu@compute-node:~devstack$ source openrc admin
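A key part of that process is typically having Nova discover the new compute host from the controller, as mentioned earlier in
this chapter; for example:
ubuntu@devstack-cc:~/devstack$ nova-manage cell_v2 discover_hosts --verbose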
8. Return to the compute-node. Save the output of the ip command again to a new file. Compare how the networking on
the node has changed. Note the new bridges and interfaces created.
ubuntu@compute-node:~devstack$ ip addr show > ~/ip.after.out
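One simple way to compare the before and after files:
ubuntu@compute-node:~$ diff ~/ip.before.out ~/ip.after.out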
9. We will create another instance from the BUI. After it has finished spawning run the ip command again and view the
differences again.
• On your local system open a browser and point it at the public IP Address of your devstack-cc node.
• Log into BUI as developer1 with the password openstack.
• Navigate to Project -> Compute -> Instances. Select Launch Instance.
• Use the name devOS3 and boot from the available cirros image. Select the m1.tiny flavor. When the fields are
filled select Launch
• When it finishes spawning check the differences in IP information on the new compute host.
10. Log into the BUI as the user admin with the password openstack
12. Select the Hypervisor tab. You should see a second hypervisor listed. There should also be a second compute host
listed under the Compute Host tab.
13. Navigate to the Admin -> Compute -> Instances page. You should find that each compute host has one instance
running.
14. Return to the command line. Use exit to return to the devstack-cc system. Using the same command and namespace
as before, but with the IP Address for devOS3 try to log into the new instance. Your instance IP may be different.
ubuntu@compute-node:~/devstack$ exit
logout
Connection to compute-node closed.
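Reusing the earlier pattern, with placeholder values you must replace with the router ID and instance IP from your own
environment:
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \
qrouter-<router-id> \
ssh cirros@<devOS3-IP>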
15. You can delete the devOS3 instance, as well, to conserve resources.
The private IP Address allows access to an instance from the host machine. In order to allow outside access to an instance a
new security group must be created and rules for access added.
3. Select the +Create Security Group button. Fill it out as found in the following graphic, then select the
Create Security Group button.
4. Select the button Manage Rules under the Actions column on the right of the newly created line for the Basic group.
5. Select the +Add Rule button. Add rules for ssh and HTTP access. To add ssh access, under the top drop-down scroll
to the bottom and select SSH, then the Add button.
6. Follow the same steps to add a rule for HTTP. After adding the rule your page should look something like this:
7. After adding the rules navigate back to the Project -> Compute -> Instances page.
8. Click on the drop-down under the Actions field for your longest running instance, devOS1, and select
Edit Security Groups.
9. Select the blue plus sign to add the Basic group to this instance, then Save.
Now that we have associated a new security group which allows ssh, let’s test our work. First we add a gateway so our private
network can access the public network, allocate an IP to the Project, then associate it with a port of an instance.
1. Navigate to the Project -> Network -> Routers page. Select the Set Gateway button. Choose the drop-down and
select public as the External Network. Then select Submit.
2. Navigate to the Project -> Network -> Floating IPs page.
3. Select the Allocate IP to Project button. Use the drop-down to select the public pool. Then the Allocate IP
button. A new address should be listed, but in a Down status.
4. Navigate to the Project -> Compute -> Instances page.
5. Click on the drop-down under the Actions field for devOS1 and select Associate Floating IP.
6. Use the drop-down to select the newly allocated IP address. Then the Associate button.
7. When the BUI updates, write down the newly assigned floating IP address:
8. Return to the command line of your cloud controller and log into the instance, but without using a namespace. Instead,
use the newly assigned floating IP address. Your IP address will be different from the following example.
ubuntu@compute-node:~/devstack$ ssh [email protected]
<output_omitted>
[email protected]’s password: cubswin:)
$ uname -a
Linux devos1 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ exit
Connection to 192.168.42.141 closed.
Solution 3.2
2. 192.168.42.141
Overview
Everything the BUI can do is possible from the command line. DevStack has moved to a new Python-based tool called
openstack. It can run individual commands or act as an interactive utility. Commands run within the openstack utility will not
show up in your bash history. Some underlying service commands remain, although not as many as you will find in more
stable deployments.
Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.3. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.
Commands can be run one at a time or within the utility. We will reproduce some of the BUI functions via the command line.
1. Let’s begin by sourcing the openrc file. If this file is not read into the current shell you will need to set requested
parameters by hand.
ubuntu@devstack-cc:~$ cd ~/devstack
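For example, reading it in as the admin user, as done elsewhere in these labs:
ubuntu@devstack-cc:~/devstack$ source openrc admin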
2. Start the openstack utility. Notice the prompt changes to reflect that you are no longer entering commands to the bash
shell. Then create a new project.
ubuntu@devstack-cc:~/devstack$ openstack
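Creating the project referenced in the next step would look something like this; the description text is illustrative:
(openstack) project create --description "Call Center project" CallCenter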
3. Create a new user who is a member of CallCenter. This is a single, long command, not two.
(openstack) user create --email ubuntu@localhost --project CallCenter \
--password openstack operator1
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | 05425440ce5147b2be06efa40713807a |
| domain_id | default |
| email | ubuntu@localhost |
| enabled | True |
| id | faab415e3ee142d79d83169c0b5be193 |
| name | operator1 |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
5. Get a list of instances. We sourced the openrc file as admin. The admin doesn’t have any running instances. You can
pass variables from the command line, as we see from the second command.
(openstack) server list
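The second command referred to is one run outside the utility with credentials passed inline; an illustrative form (the user,
password, and project values are assumptions) is:
ubuntu@devstack-cc:~/devstack$ openstack server list --os-username developer1 \
--os-password openstack --os-project-name SoftwareTesters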
6. View the running hypervisors. The output below shows the alias names; yours will look different, perhaps like ip-172-31-45-74.
(openstack) hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+--------------+-------+
| 1 | devstack-cc | QEMU | 172.31.4.94 | up |
| 2 | compute-node | QEMU | 172.31.6.143 | up |
+----+---------------------+-----------------+--------------+-------+
11. We have not configured any secondary roles yet, but you can still list the primary role. Note the ID of each is wrapped
on the line.
(openstack) role assignment list --user admin --project demo
+----------------------------------+--------------------------
| Role | User
| Group | Project | Domain | Inherited |
+----------------------------------+--------------------------
| f617b324f31d400eb82500a285e6ce8d | 32eab78f89d94d40b406bc94c1447c81
| 7f779f3c9d964123a619ff1e6c0caf27 | | False |
+----------------------------------+---------------------------------
<content_omitted>
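The output below comes from creating a volume snapshot; the command takes the following form, where the volume name
volA is inferred from the snapshot name:
(openstack) volume snapshot create --volume volA volA-snap1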
| Field | Value |
+-------------+--------------------------------------+
| created_at | 2017-11-05T00:33:28.292292 |
| description | None |
| id | 0e332e2c-1ec1-4a93-9179-08bff761f506 |
| name | volA-snap1 |
| properties | |
| size | 1 |
| status | creating |
| updated_at | None |
| volume_id | 174df03b-060d-4465-a820-97ec18846400 |
+-------------+--------------------------------------+
17. Use the --debug option to see the back-end communication, which can be helpful for troubleshooting. There is a lot of
output. We will write everything to a file for ease of viewing.
(openstack) quit
ubuntu@devstack-cc:~/devstack$ openstack --debug server list &> debug.out
ubuntu@devstack-cc:~/devstack$ less debug.out
<output-omitted>
Overview
In a previous exercise you deployed an All-In-One DevStack instance running on Ubuntu, then configured a project and user
and deployed a new virtual machine.
Connect to the terminal of your cloud controller, devstack-cc, via the provided link for lab3.4. You will be presented a new
Katacoda environment. The new instance may have a different public IP address and URL for BUI access. Reference the file
/opt/host for the URL to the Horizon BUI. You can also use the OpenStack Dashboard tab on the Katacoda page.
In this exercise we will first disable the services on a node, which is safe, then remove a node fully from OpenStack, which
may not be safe. There is not an official process to fully decommission a node in OpenStack yet. It is something being
worked on, along with in-place upgrades, which became part of the Kilo software release.
The safe operation of disabling services on a particular node will prevent new services from running on that node. It will still
show up in the BUI and command-line output as disabled. Errors concerning that node will also continue.
If you are knowledgeable and experienced at editing a database in mariadb you could remove the node entirely. Any mistake
with the database could render the whole OpenStack deployment useless.
DO NOT DO THIS IN PRODUCTION AND/OR ON ANY SYSTEM YOU WANT TO CONTINUE USING.
To begin we must move any deployed instances from the node we intend to remove. We will target the compute-node
for removal as an example. Your system will be different. Once there are no services in use on the node we will disable it.
1. You cannot see which hypervisor an instance is running on as a member of a project. Log out of the BUI as developer1
and back in as admin.
2. Navigate to the Admin -> Compute -> Instances page. The second column, Host, shows which hypervisor each
instance is running on.
3. Select each instance running on the target node, the compute-node for example, then the drop-down on the right side
of the line. Choose Terminate Instance, Migrate Instance or Live Migrate Instance depending on your needs
and current configurations. Our current configuration won’t allow us to migrate so we will Terminate Instance.
4. Once the hypervisor is without instances navigate to Admin -> Compute -> Hypervisors page.
5. Select the Compute Host tab. Find the host with no instances running and select the Disable Service button.
6. Fill in a reason such as “Upgrade hardware” and select the Disable Service button. You’ll notice the
hypervisor summary will update to show half as many resources available.
7. Verify the state by navigating to Admin -> System -> System Information page.
Select the second tab Compute Services. It should show Status as Disabled, with a recent state change.
8. To make lab 4 have more resources, you may want to enable the node again.
The following steps are optional. There is not a formal way to remove a node completely. The following steps involve editing a
database manually. Any mistake could render the cloud unusable.
Please wait until all labs using DevStack have been completed before attempting these steps. The next chapter has more
DevStack labs. After completing those, you could return for this task.
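The first steps of this task, connecting to the database, use the password set in local.conf earlier; for example:
ubuntu@devstack-cc:~$ mysql -u root -pdb-secret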
3. View the available databases. This list has changed over time so the output may be slightly different.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| barbican |
| cinder |
| glance |
| keystone |
| mysql |
| neutron |
| nova_api |
| nova_cell0 |
| nova_cell1 |
| performance_schema |
| sys |
+--------------------+
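The status line below is the result of selecting the nova cell database:
mysql> use nova_cell1;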
Database changed
5. View the current tables of this database. Find the compute nodes.
mysql> show tables;
+--------------------------------------------+
| Tables_in_nova_cell1 |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
<output_omitted>
6. View the current compute node information. There will be a lot of output. It may help to copy and paste the output, then
search for the IP addresses or node names. Find the line that matches the node you want to remove. In the following
example the node name is ip-172-31-39-120. Your node name will be different.
mysql> select * from compute_nodes;
+---------------------+---------------------+------------+----+-
-----------+-------+-----------+----------+------------+--------
<output_omitted>
172.31.39.120 | [["armv7l", "qemu", "hvm"], ["aarch64", "qemu",
<output_omitted>
2 rows in set (0.00 sec)
7. Delete the disabled host from the list of compute nodes. Please note that the value given must be enclosed by single
quotes. If not you will receive a SQL syntax error. When completed the Hypervisor tab in the BUI should no longer
show it as a hypervisor, but it will remain as a Compute Host.
mysql> DELETE QUICK FROM compute_nodes WHERE host_ip='172.31.39.120';
Query OK, 1 row affected (0.01 sec)
8. Next we will remove the node as a Compute Host. View the current service nodes. Look for the nova-compute lines.
Then delete using the host entry. The value must be enclosed in single quotes, and your host name will be different.
mysql> select * from services;
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
| created_at | updated_at | deleted_at | id | host | binary | topic | report_count | disabled | deleted | disabled_reason | last_seen_up | forced_down |
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
| 2018-05-21 04:49:02 | 2018-06-08 21:01:23 | NULL | 1 | ip-172-31-45-74 | nova-conductor | conductor | 161358 | 0 | 0 | NULL | 2018-06-08 21:01:23 | 0 |
| 2018-05-21 04:49:13 | 2018-06-08 21:01:25 | NULL | 2 | ip-172-31-45-74 | nova-compute | compute | 161338 | 0 | 0 | NULL | 2018-06-08 21:01:25 | 0 |
| 2018-05-23 17:00:48 | 2018-06-08 21:01:26 | NULL | 3 | ip-172-31-39-120 | nova-compute | compute | 139679 | 1 | 0 | NULL | 2018-06-08 21:01:26 | 0 |
+---------------------+---------------------+------------+----+------------------+----------------+-----------+--------------+----------+---------+-----------------+---------------------+-------------+-
3 rows in set (0.00 sec)
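The delete statement follows the same pattern as the compute_nodes delete above; for the disabled host in this example it
would be (your host name will differ):
mysql> DELETE QUICK FROM services WHERE host='ip-172-31-39-120';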
10. Verify the change in the BUI. Log into the BUI as admin with a password of openstack. Navigate to
System -> Hypervisors and view the Compute Host tab. The node should not be in the list; you may have to perform
a no-cache refresh of the page to see the changes. Errors indicate something has gone wrong, which may be why this
process is not officially supported and is still under development.
11. Verify from the command line that the node is fully removed from OpenStack. If the BUI shows an error, this may error as well.
ubuntu@devstack-cc:~/devstack$ nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+--------------------------------------+---------------------+-------+---------+
| 93b03401-1128-49c6-8d41-dd743267ecb2 | devstack | up | enabled |
+--------------------------------------+---------------------+-------+---------+
Chapter 4
Components of an OpenStack Cloud
4.1 Labs
Overview
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
This lab uses DevStack running on Ubuntu. Later labs will use RDO running on CentOS.
In this lab we will create a snapshot from a running instance. We will create a new volume from the snapshot, then use it to
launch a new instance and create a new image.
In this task we will use an existing instance to create a snapshot, then an image in a different tenant.
1. Logged in as admin navigate to the Admin -> Compute -> Instances page. Select the drop-down under the Actions
column of the devOS1 instance, then Create Snapshot. Give a name of dev-snap1 and then Create Snapshot into
the pop-up window.
2. Notice that upon finishing Horizon changed to the Project -> Compute -> Images page. Select the drop-down under
Actions for the newly created snapshot. If the image appears queued for more than a minute refresh the page. Once it
shows as active notice there are several options including launch. Select Edit Image. Note the options under Format.
Then find and change the Image Sharing Visibility to Public if not already set.
3. After noting the visibility now shows Public, go to the Project -> Compute -> Images page. Use the drop-down to
edit dev-snap1. It should look similar to the Admin view.
4. Go to the top of the BUI and change the current project to demo instead of alt_demo.
6. Fill in the following values for the new instance. The source should already be set. Select Launch Instance when
complete.
Instance Name: golden
Source: Image
Allocated: dev-snap1
Flavor: m1.tiny
Networks: Private
7. Navigate to the Project -> Compute -> Instances page. Once the new instance becomes active and has had time to
boot, take note of the assigned IP address and log in. The username and password remain the same as the source instance.
Even though it was created from a snapshot, the instance was created by a different user, in a different project, on a different
network. Use ip netns list and the previous steps to find the correct namespace to access the instance. For example, to
ssh on the Private network, see the following. Your namespace will be different. Remember this is a different project; you will
need to add ssh to the security group.
The historical password for Cirros images has been cubswin:). Now that the Cubs have actually won, it is changing
to gocubsgo. Should one password not work, try the other.
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec \
qrouter-0701411b-91d8-4871-8191-7c808b1c1144 \
ssh [email protected]
<output_omitted>
[email protected]’s password: cubswin:)
$
8. Look for existing files and verify the new node name. You should see the file created in a previous lab, prior to creating
the snapshot. Note the different node name.
$ ls
uname.out
$ cat uname.out
Linux devos1 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ uname -a
Linux golden 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
$ exit
Now we enable encryption and create a new encrypted volume. Some volume drivers may not set the encrypted flag. These
cannot use encrypted volumes. We will review how the BUI can be used but perform the steps from the command line.
1. You can create new volume types from the BUI. Navigate to the Admin -> Volumes -> Volume Types page. In the
Actions column select the Create Encryption button. Read through the Description on the right.
2. Return to the command line. Make sure you have sourced the admin file.
ubuntu@devstack-cc:~/devstack$ source openrc admin
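The encryption type created below is attached to a volume type named LUKS; if that volume type does not already exist it
can be created first, for example:
ubuntu@devstack-cc:~/devstack$ openstack volume type create LUKS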
4. Use the output of the cinder help command to view the syntax.
ubuntu@devstack-cc:~/devstack$ cinder help encryption-type-create
<output_omitted>
5. Use the cinder command to create the encryption type and assign a cipher and key size.
ubuntu@devstack-cc:~/devstack$ cinder encryption-type-create \
--cipher aes-xts-plain64 \
--key_size 256 \
--control_location front-end LUKS \
LuksEncryptor
<output_omitted>
6. Now that we have the type we can create a new encrypted volume.
ubuntu@devstack-cc:~/devstack$ openstack volume create --size 1 --type LUKS crypt-vol
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
<output_omitted>
7. View the newly created volume. Verify you can see the encrypted setting.
ubuntu@devstack-cc:~/devstack$ cinder show crypt-vol
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-12-30T07:12:48.000000 |
| description | None |
| encrypted | True |
| id | b133e0dd-177c-44f2-a8d8-418269e0211b |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | crypt-vol |
| os-vol-host-attr:host | devstack-cc@lvmdriver-1#lvmdriver-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 8e806b4eeada4305a4a327341a3f44dd |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2016-12-30T07:12:50.000000 |
| user_id | 534ab9b6f27c4be281bab1ffe94cf023 |
| volume_type | LUKS |
+--------------------------------+--------------------------------------+
8. Now we add the volume to a running instance. Begin by viewing instance information. Take note of the ID.
ubuntu@devstack-cc:~/devstack$ openstack server list
+--------------------------------------+----------+--------+---------------------------------------------------------+------------
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+----------+--------+---------------------------------------------------------+------------
| e743fc56-ee0f-4858-9ce7-e0a796154319 | golden | ACTIVE | private=fd12:74d3:437f:0:f816:3eff:fe35:dddf, 10.0.0.10 | dev-snap1 |
+--------------------------------------+----------+--------+---------------------------------------------------------+------------
9. View the volume information. You can either list all or view the details of a particular. Take note of the ID for crypt-vol.
ubuntu@devstack-cc:~/devstack$ openstack volume list
+--------------------------------------+--------------+-----------+------+---------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+---------------------------------+
| b133e0dd-177c-44f2-a8d8-418269e0211b | crypt-vol | available | 1 | |
| 3f7d187e-0160-4a04-ba83-ceb21ca99317 | | in-use | 1 | Attached to golden on /dev/vda |
+--------------------------------------+--------------+-----------+------+---------------------------------+
10. Now use the openstack utility to attach the volume to the golden instance. Pass first the ID for the instance then the ID
for the volume. The command is on multiple lines for ease of reading.
ubuntu@devstack-cc:~/devstack$ openstack server add volume \
e743fc56-ee0f-4858-9ce7-e0a796154319 \
b133e0dd-177c-44f2-a8d8-418269e0211b \
--device /dev/vdb
12. Log into the instance and verify the volume can be seen.
ubuntu@devstack-cc:~/devstack$ sudo ip netns exec qrouter-0701411b-91d8-4871-8191-7c808b1c1144\
ssh [email protected]
[email protected]’s password: cubswin:)
$ sudo fdisk -l | grep vdb
Disk /dev/vdb doesn’t contain a valid partition table
Disk /dev/vdb: 1071 MB, 1071644672 bytes
Chapter 5
Components of a Cloud - Part Two
5.1 Labs
Chapter 6
Reference Architecture
6.1 Labs
Chapter 7
Deploying Prerequisite Services
7.1 Labs
Chapter 8
Deploying Services Overview
8.1 Labs
Overview
In this exercise we will be deploying RDO onto a new CentOS system. This will begin as an All-In-One deployment. Later we
will add nodes for Ceph. Once it has been configured compare and contrast with the DevStack systems.
You cannot use the Ubuntu nodes for this lab. The course material includes a URL for lab access. You will use your Linux
Foundation login and password to gain access. After successfully logging in you will be presented a new page and a virtual
machine instance will be created.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and supported browser to use is Chrome, although others may work. Unlike DevStack you must complete
steps in RDO as the root user.
The RDO distribution of OpenStack has an easy-to-use installer called Packstack. It leverages purpose-built Puppet scripts written and maintained by Red Hat. To access the new CentOS system we will use a process similar to the one used for the previous Ubuntu system.
1. Log into your rdo-cc system. Note that the log in for the CentOS nodes is the user centos.
2. After logging in, become root and install the RDO yum repository. We will be using the Pike release. To install the
latest nightly build you can instead use https://rdo.fedorapeople.org/rdo-release.rpm.
The URL below is divided to appear on two lines. If the backslash does not work properly, type the URL on one line.
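The repository installation itself is a single yum command. As a minimal sketch, assuming the centos-release-openstack-pike package provides the Pike repositories (verify the package name or RPM URL against the current RDO documentation):
[root@rdo-cc ~]# yum install -y centos-release-openstack-pike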
3. Install the packstack command, and vim if you want to take advantage of it.
[root@rdo-cc ~]# yum install -y \
openstack-packstack vim
You will need to edit the /etc/yum.repos.d/CentOS-QEMU-EV.repo file and change the architecture variable, which
may not match the website. Use a browser to find the proper URL, as it may change again. Recent testing showed that
the architecture needs to be aarch64. This may be a typo by Red Hat and may be fixed in the future. You may have to edit this
file again to get the yum update to work properly.
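One way to make the architecture edit is with sed; this is a sketch only, assuming the repository file still uses the $basearch variable on its baseurl line (adjust the strings to whatever the mirror currently requires):
[root@rdo-cc ~]# sed -i 's/\$basearch/aarch64/' /etc/yum.repos.d/CentOS-QEMU-EV.repo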
Configuring packstack
While there are hundreds of options to configure inside the answer file, we will start with the following minimal changes.
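If you have not yet generated an answer file, a minimal sketch, assuming the rdo.txt file name used in the next step, is:
[root@rdo-cc ~]# packstack --gen-answer-file=rdo.txt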
1. Edit the newly created answer file and modify the following parameters.
[root@rdo-cc ~]# vim rdo.txt
CONFIG_HEAT_INSTALL=y
CONFIG_NTP_SERVERS=0.pool.ntp.org
CONFIG_DEBUG_MODE=y
CONFIG_KEYSTONE_ADMIN_PW=openstack
The packstack script uses ssh to connect to each node, so we must configure access. Public key access may be easiest,
and is more common in a production environment. In this example we will allow standard root access. Double-check your edits
of the sshd_config file; a mistake here will keep the daemon from starting and render the instance unreachable. The backslash
indicates that the commands are to be run as one line.
2. Use sed to allow root to log in, then double check your work.
Allow users to log in without an existing public key. On a production system remember to return and disable this after
we push keys in a few steps.
3. [root@rdo-cc ~]# sed -i \
’s/PasswordAuthentication\ no/PasswordAuthentication\ yes/’ \
/etc/ssh/sshd_config
When you are ready to install RDO, execute the packstack command and pass the answer file you created. It is not terribly
uncommon that an issue prevents the script from finishing. Puppet is an end-state focused tool, so run the command a second
time. If it errors at the same place again, fix the answer file and continue running the script until the installation is successful.
The script can take up to 25 minutes to run. Take a short break and check back for errors mid-run.
If you receive another repodata error 256, edit the /etc/yum.repos.d/CentOS-QEMU-EV.repo again. Change the architec-
ture from aarch64 to x86_64. Then run the packstack script again.
1. Run the packstack script. Use of the equal sign to point at the file is optional.
[root@rdo-cc ~]# packstack --answer-file rdo.txt
<output-omitted>
**** Installation completed successfully ******
<output-omitted>
2. Find your public IP address or FQDN. With this information, add a ServerAlias parameter to match the inbound address
request, then restart the web server and memcached services. The example below shows a URL of 288278-8-ollie3.
openstack-environments.katacoda.com; yours will be different. This information can be found by opening the Open-
Stack Dashboard tab on the Katacoda page. Once open you should see the default Apache welcome page. This
indicates you are connecting to the newly installed web server, but it is not yet aware of how to handle the URL being
requested. Copy the URL to create the ServerAlias. You may also find this URL inside the /opt/host file, if it exists.
[root@rdo-cc ~]$ vim /etc/httpd/conf.d/15-horizon_vhost.conf
<...>
## Server aliases
ServerAlias 288278-8-ollie3.openstack-environments.katacoda.com
ServerAlias 172.17.0.14
ServerAlias localhost
<...>
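The restart can be done with systemd; a sketch, assuming the stock service names httpd and memcached:
[root@rdo-cc ~]# systemctl restart httpd memcached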
3. Log into the BUI with the username admin and the password openstack. You may need to refresh the web page once
httpd and memcached finish their restart.
4. Navigate around the RDO BUI. Compare and contrast with the DevStack deployment. Do you notice anything different?
5. Create a new project named rdo1. Reference the steps from the previous lab for assistance. Change the number of
vCPUs to 10. How different are the actual steps?
7. Navigate through the rest of the tabs on the left of the BUI. Which tabs have the word network or networks on them?
Compare to the DevStack systems. Are they the same?
a.
b.
c.
8. Using the drop down in the upper left corner, how many projects can be selected?
9. Navigate to the Identity -> Projects page. How many projects do you see listed?
a. Is this the same behavior as DevStack?
Solution 8.1
Configuring packstack
2. a. 19
b. 334, using grep = rdo.txt | wc -l
5. Not very
6. Various differences, which change over time, like: a. Project -> Network -> Network Topology
b. Project -> Networks -> Networks
c. Admin -> System -> Networks
7. Just one
8. 4
a. no
Chapter 9
Advanced Software Defined Networking with Neutron
9.1 Labs
Chapter 10
Advanced Software Defined Networking with Neutron - Part Two
10.1 Labs
Overview
We will perform several familiar steps and learn more about the CLI tools and capabilities of Neutron networking. This
exercise uses the RDO OpenStack deployment running on CentOS.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
We will explore the BUI as a member of a project to compare and contrast with DevStack.
Remember you can use the second tab found on the Katacoda page, or use your own browser and go to the URL found in the
/opt/host file.
1. Log into the BUI as the operator user, with a password of openstack.
2. Using the tabs on the left, explore the BUI, compare to what the admin user had available.
3. Using the drop-down in the upper left of the window, how many projects do you see?
We cannot deploy an instance on the existing private network. First we have to create a new private network, a network we
will call Accounting Internal.
1. Remaining logged in as operator user, navigate to the Project -> Network -> Network Topology page. Select the
+Create Network button. Enter in the name Accounting Internal on the first tab Network, then select Next.
2. Fill out the Subnet tab with a name of acct-sub-internal, a network address of 192.168.0.0/16 with the gateway
of 192.168.0.200. Then select Next.
3. Enter into the Allocation Pools box the addresses 192.168.0.10,192.168.0.20. Then select Create.
Create a Router
While we could provide an interface to each newly created instance directly on a public network, adding a software router
for traffic instead allows more administrative flexibility and potentially better security.
1. Navigate to the Project -> Network -> Network Topology page. In the upper right, select the +Create Router
button.
2. Type in the name Accounting-1, then select Create Router. The router should appear on the topology, but is not
associated yet with any interfaces.
3. Use your mouse to select the newly created router. Select the View Router Details link. Then select the Interfaces
tab, then the Add Interface button.
4. Work through the wizard with the values from the following graphic. When it matches, select the Add Interface button.
The interface may show down for a moment. Refresh the page to check that the status is Active.
5. Only a user with admin capability can add an interface to the public network, but you can set access via a gateway. Select the
Set Gateway button in the upper right of the new window.
6. Use the dropdown to select the network public. Then select Submit.
7. Navigate to the Network Topology page. Do you see the Accounting Internal network and the new router? Do they
connect?
Launch an Instance
1. Navigate to Project -> Compute -> Instances. Select the Launch Instance button.
2. Fill in an instance name of acct1. Under the Source tab select boot source of Image, then select the up-arrow icon to
add the cirros image.
4. Select the Networks tab. The network should already be assigned. If not, select the up-arrow icon next to
Accounting Internal. If you have multiple networks, you will have to choose at least one prior to launch.
5. Select Launch Instance. The new page should show the instance in a Spawning state. Depending on resources being
requested and other activity it can take a minute or two for the instance to finish its build and become active.
6. Verify by returning to the Network Topology page. You should see the newly created instance attached to the
Accounting Internal network.
Similar to what we accomplished from the BUI, we will now perform the same tasks from the command line.
1. Log into the node and become root. Create a new project named finance.
[root@rdo-cc ~]# source keystonerc_admin
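A minimal sketch of the project creation with the openstack client:
[root@rdo-cc ~(keystone_admin)]# openstack project create finance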
2. Create a new user named tester who is a member of the finance project:
[root@rdo-cc ~(keystone_admin)]# openstack user create --project finance --password openstack \
--email centos@localhost tester
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | e600e54c56b145848d9287474f196be4 |
| domain_id | default |
| email | centos@localhost |
| enabled | True |
| id | 0d895ead0f344b93aa3789a14d119576 |
| name | tester |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
3. While the previous command should have set the project, it does not make the user a member of that project in Pike.
We will need to add the user manually.
[root@rdo-cc ~(keystone_admin)]# openstack role add --user tester \
--project finance _member_
Neutron is a fast-evolving service whose CLI offers functions and features which supersede the capabilities of the BUI. The
default installation uses Open vSwitch for networking functionality. To see all the work being done you may want to read up
on development here: https://wiki.openstack.org/wiki/NeutronDevelopment.
Just as the login to the BUI affects what resources can be seen, so too do the username and tenant name settings in the
keystonerc files. We will begin by creating a new file for the finance group. You may want to log into the BUI as tester
and view the network topology change. The BUI will update within a minute to show the changes to the network.
1. Copy the admin file and edit three of the parameters to match the new user:
[root@rdo-cc ~(keystone_admin)]# cp keystonerc_admin \
keystonerc_finance
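The exact contents depend on how packstack generated keystonerc_admin; as a sketch, assuming the usual OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME and PS1 entries, the edited keystonerc_finance would contain something like:
export OS_USERNAME=tester
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=finance
export PS1='[\u@\h \W(keystone_tester)]\$ '
Source the new file, for example with source keystonerc_finance, so that the prompt shown in the following steps appears.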
3. Create a new network called finance-internal. Make a note of the network id on the line for when we launch an
instance in a later step, or copy and paste it to a file:
[root@rdo-cc ~(keystone_tester)]# openstack network create finance-internal
Created a new network:
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |
<output-omitted>
| id | ffe41f70-962f-4693-9014-2275080cd44a |
<output-omitted>
4. Create a new subnet for the network with a network address of 10.10.0.0/24 and a gateway of 10.10.0.1.
[root@rdo-cc ~(keystone_tester)]# openstack subnet create sub-financial-int \
--subnet-range 10.10.0.0/24 --network finance-internal
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| allocation_pools        | 10.10.0.2-10.10.0.254                |
| cidr                    | 10.10.0.0/24                         |
| created_at | 2018-06-11T21:30:56Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip              | 10.10.0.1                            |
| host_routes | |
| id | 17a3c73a-aea4-4833-a0f7-047efb61713c |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | sub-financial-int |
| network_id | 544e7326-c416-4a2c-9025-e2361b435c1d |
| project_id | e600e54c56b145848d9287474f196be4 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-06-11T21:30:56Z |
| use_default_subnet_pool | None |
+-------------------------+--------------------------------------+
5. Create a new router called finance-router. Make sure the status reports as active.
[root@rdo-cc ~(keystone_tester)]# openstack router create finance-router
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-06-11T21:33:59Z |
| description | |
| distributed | False |
| external_gateway_info | None |
| flavor_id | None |
| ha | False |
| id | 335699ea-8324-494e-bd1c-200b181124bc |
| name | finance-router |
| project_id | e600e54c56b145848d9287474f196be4 |
| revision_number | None |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2018-06-11T21:33:59Z |
+-------------------------+--------------------------------------+
6. Set a gateway for the new router to use the shared public exterior network.
[root@rdo-cc ~(keystone_tester)]# openstack router set --external-gateway public finance-router
7. Log into the BUI as tester. Navigate to the Network Topology page. It should show the new network attached via the
router to an exterior network.
Overview
This exercise uses the RDO OpenStack deployment running on CentOS. The lab ties together the previously used steps to
deploy multiple Neutron networks and instances and configure connectivity.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
We will add some common tasks to launching an instance that we have not done via the BUI. We will generate an SSH key for
easy access and a network security group.
1. For ease of access we will generate a new public/private SSH keypair. Press the Enter key twice to accept the default of
no passphrase.
[root@rdo-cc ~]# source keystonerc_finance
[root@rdo-cc ~(keystone_tester)]# ssh-keygen -f ~/.ssh/finance-key
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): <enter>
Enter same passphrase again: <enter>
Your identification has been saved in /root/.ssh/finance-key.
Your public key has been saved in /root/.ssh/finance-key.pub.
The key fingerprint is:
fe:e9:f3:6c:78:b5:2a:ad:c2:75:46:61:e7:56:bc:9a \
[email protected]
The key’s randomart image is:
+--[ RSA 2048]----+
| . |
| o . o|
| . + ..|
| . o. |
| S . .o |
| . . oE. |
| ... = . . |
| o.+o+ . |
| o=B+. |
+-----------------+
2. Add the key to the Nova compute service and verify it. Some small images do not contain cloud-init and may not accept
the key.
[root@rdo-cc ~(keystone_tester)]# nova keypair-add \
--pub-key ~/.ssh/finance-key.pub finance-key
4. Create a new flavor and verify it. By default only admins can do this. We will give it the name smallfry, an ID of 6,
512MB of memory, 2GB of disk and 1 vCPU.
[root@rdo-cc ~(keystone_tester)]# source keystonerc_admin
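A minimal sketch of the flavor creation with the stated values, using the openstack client as admin:
[root@rdo-cc ~(keystone_admin)]# openstack flavor create --id 6 --ram 512 \
--disk 2 --vcpus 1 smallfry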
8. Take a look at the current rules inside the default group listed above.
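A sketch of viewing the rules with the openstack client, assuming the group is named default as in the earlier listing:
[root@rdo-cc ~(keystone_tester)]# openstack security group rule list default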
9. Run the following commands to create a new security group and add rules to allow SSH and web traffic. Begin by
entering the openstack utility.
[root@rdo-cc ~(keystone_tester)]# openstack
(openstack) security group create --description "Allow http and ssh traffic" web-ssh
+-----------------+--------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------+
| created_at | 2017-01-29T01:14:18Z |
| description | Allow http and ssh traffic |
| headers | |
| id | 28c1056e-d07e-46cc-9092-09c661137a77 |
| name | web-ssh |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| project_id | dd25b7768fb84b43a09b9b9b9019e91e |
| revision_number | 1 |
| rules | created_at=’2017-01-29T01:14:18Z’, direction=’egress’, ethertype=’IPv4’, |
| | id=’3925e8f5-ea72-4c09-ac8e-20e7b8f4298f’, |
| | project_id=’dd25b7768fb84b43a09b9b9b9019e91e’, revision_number=’1’, |
| | updated_at=’2017-01-29T01:14:18Z’ |
| | created_at=’2017-01-29T01:14:18Z’, direction=’egress’, ethertype=’IPv6’, id |
| | =’a5dcb42c-1fee-4e88-8aa5-221b1ab28f67’, |
| | project_id=’dd25b7768fb84b43a09b9b9b9019e91e’, revision_number=’1’, |
| | updated_at=’2017-01-29T01:14:18Z’ |
| updated_at | 2017-01-29T01:14:18Z |
+-----------------+--------------------------------------------------------------------------------+
| updated_at | 2017-01-29T01:19:35Z |
+-------------------+--------------------------------------+
(openstack) security group rule create --protocol tcp --ingress --dst-port 80 web-ssh
<output-omitted>
14. Launch a new instance, called bc1 with the recently configured settings. You will need the network ID
for the finance-internal network. Run openstack network list if you had not saved it from an earlier exercise.
[root@rdo-cc ~(keystone_tester)]# nova boot --flavor smallfry --image cirros \
--security-group web-ssh --key-name finance-key \
--nic net-id=ffe41f70-962f-4693-9014-2275080cd44a bc1
<output_omitted>
15. Verify the instance is running. It may take a few seconds to change from build state to active. Take note of the IP
Address. We will use the IP in a following step to gain access.
[root@rdo-cc ~(keystone_tester)]# nova list
<some output_omitted>
| 3a911544-a229-46d7-bbef-ac2cfd832e76 | bc1 | ACTIVE | - | Running \
| finance-internal=10.10.0.6
16. Log into the instance. Look at the list of configured IP namespaces. Typically the last created namespace is the first one
listed. Multiple namespaces may have the same IP range. If you cannot SSH to the instance, check to see if another
network has an overlapping IP range, such as 10.0.0.0/24. Then double-check the network security groups are in place and have a rule
which allows SSH access. In Pike the peer id will also show in the ip netns list command. This is a bug listed as fixed
in the Ocata release. Once we find the correct namespace we will log into the instance. If the public key does not work
the login is cirros with a password of cubswin:)
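A sketch of listing the namespaces; the qrouter- entry whose UUID matches the finance router is the one to use in the next step:
[root@rdo-cc ~(keystone_tester)]# ip netns list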
17. Now that we have found the correct namespace, try to log in. Change the IP to match the nova list output. The password
would only be required if the public key was not properly inserted:
[root@rdo-cc ~(keystone_tester)]# ip netns exec \
qrouter-2bd990fc-6b46-4247-9bdc-94464334207f ssh -i ~/.ssh/finance-key \
[email protected]
[email protected]’s password: cubswin:)
$
Task Goal: To tie the concepts together, we will deploy a second instance, then connect between the two instances across a router. Neutron
replaces only the switch side of the network. Our lab environment has some settings that make exterior access complicated,
so we will deploy a new internal network, router and instance. Once we have two instances we will update each route table
and test by connecting via ssh from one instance to the other.
1. Return to the BUI and log in as tester if you are not already. Navigate to the
Project -> Network -> Network Topology page.
2. Select the +Create Network button and create a network called back-office. Assign a subnet called sub-bk-off
with a network address of 192.168.5.0/24. Default values otherwise.
3. Select the +Create Router icon. Give the router the name bk-router. Default values otherwise.
4. When it has been created use the mouse to select the router. Select the View Router Details button. Then select the
Interfaces tab. Select the +Add Interface button to create two interfaces. Attach one to back-office, with default
values. Attach the second interface to finance-internal, specifying the IP address 10.10.0.10.
The Network Topology should look something like the graphic that follows. You may need to select the Graph tab
followed by Toggle Labels to see all the details.
5. Now we will add a second instance on a different network. Get a list of networks in order to launch the new instance in
the back-office network.
[root@rdo-cc ~(keystone_tester)]# openstack network list
<output_omitted>
| 580b9d4e-c3da-4215-b9e7-91f349e581c6 | back-office | beeccd33...
6. View the IP addresses of the bk-router ports. Use grep to narrow down the output to only ports on back-office.
[root@rdo-cc ~(keystone_tester)]# openstack port list |grep beeccd33
| 23585c62-3701-4fbb-a0a6-8eabb348d3b3 | | fa:16:3e:74:69:98 | ip_address=’192.168.5.1’, \
subnet_id=’beeccd33-7d86-475e-aed6-163d4acd0cc0’ | ACTIVE |
| 40294a98-bc04-41ec-88e0-8c67561cdd81 | | fa:16:3e:38:58:b9 | ip_address=’192.168.5.2’, \
subnet_id=’beeccd33-7d86-475e-aed6-163d4acd0cc0’ | ACTIVE |
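The launch of the second instance is not shown in the steps above; a minimal sketch, reusing the earlier flavor, key and security group and the back-office network ID from the listing, with the instance name bc2 assumed from the later steps:
[root@rdo-cc ~(keystone_tester)]# nova boot --flavor smallfry --image cirros \
--security-group web-ssh --key-name finance-key \
--nic net-id=580b9d4e-c3da-4215-b9e7-91f349e581c6 bc2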
8. Find the VM IP address and correct namespace. Log into the newly deployed instance. Remember it may take a minute
to finish the build and boot. First find the correct namespace by using the id of the router, prepending qrouter- to it.
[root@rdo-cc ~(keystone_tester)]# openstack router show bk-router |grep id
| flavor_id | None |
| id | e7886409-bc48-4877-af10-2de3752f4c67 |
| project_id | e600e54c56b145848d9287474f196be4 |
9. Find the current routes. Then add a route for bc1’s network.
$ ip route
default via 192.168.5.1 dev eth0
192.168.5.0/24 dev eth0 src 192.168.5.2
$ sudo -i
# ip route add 10.10.0.0/24 via 192.168.5.1 dev eth0
# exit ; exit
10. Now we need to configure routing back to the other VM, bc2. Remember to use the IP of the router port, not the VM.
Log into bc1 again.
[root@rdo-cc ~(keystone_tester)]# ip netns exec \
qrouter-27bcb5f9-8af5-419f-a0ff-9d109314c8b8 ssh [email protected]
[email protected]’s password: cubswin:)
$ sudo -i
# ip route
default via 10.10.0.1 dev eth0
10.10.0.0/24 dev eth0 src 10.10.0.2
# ip route add 192.168.5.0/24 via 10.10.0.10 dev eth0
11. We should be able to ssh back to the other instance, bc2 using the internal IP Address.
# ssh [email protected]
Host ’192.168.5.2’ is not in the trusted hosts file.
(fingerprint md5 03:79:27:9f:1f:72:71:91:5e:2c:cc:f1:6e:e0:1e:21)
Do you want to continue connecting? (y/n) y
[email protected]’s password: cubswin:)
$ uname -n
bc2
Solution 10.2
3. 1
Create a Router
8. Yes.
Chapter 11
Distributed Cloud Storage with Ceph
11.1 Labs
Overview
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
Three new nodes will be made available for use. They will each have an extra disk which we will partition into two equally
sized partitions. We will use one partition on each node to deploy a ceph OSD and leave the other for possible swift proxy
installation. While a ceph cluster has no single node in charge, we will be using our cloud controller as a ceph admin node as
well as a MON node.
New OSD nodes: storage1, storage2, storage3
In our lab environment the only way to connect to the storage nodes is via rdo-cc. Use the browser to connect to rdo-cc,
then use ssh to connect. A public key has already been configured for ease of access, although the steps to duplicate the task
are included for you.
In addition to updating and installing software we need to make sure that time is in sync between nodes.
Begin on your cloud controller, or ceph admin node. Note that on the baseurl line we will be using the ceph Luminous release for
CentOS 7, or el7, as in Red Hat Enterprise Linux 7. Other options may be available.
2. Configure a repository for the ceph software. Note that the el7 in the URL is spelled with the letter "e", the letter "l", and the number seven.
[root@rdo-cc ~]# vim /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
3. Use yum to install the ceph-deploy package. Should yum not work, due to ongoing issues with Python dependencies,
you may need to use pip.
[root@rdo-cc ~]# sudo yum -y install ceph-deploy
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
<output_omitted>
6. Assign a password for the new user. While ceph is not a great password, it is easy to remember for the lab. We can use
the echo command to set it with a single command.
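A sketch of setting the password non-interactively on CentOS, assuming the new user created in the omitted step is named ceph:
[root@rdo-cc ~]# echo ceph | passwd --stdin ceph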
8. Verify you can now run a command using sudo that wouldn’t work without it. The contents of the directory may be
slightly different.
[root@rdo-cc ~]# su - ceph
9. If you do not already have public-key access, allow password access so the key can be copied the first time.
Don't forget to restart sshd after editing and verifying the update.
[ceph@rdo-cc ~]$ sudo \
sed -i ’s/PasswordAuthentication\ no/PasswordAuthentication\ yes/’ /etc/ssh/sshd_config
10. Repeat steps 5 through 10 on each of the ceph storage nodes. All four will need the user who can use sudo and
password-less ssh. If multiple terminal and PuTTY sessions are possible you can copy and paste between them.
Otherwise connect via SSH. Be careful, the prompts look similar.
[ceph@rdo-cc ~]$ exit
logout
[root@rdo-cc ~]# ssh storage1
The authenticity of host ’storage1 (192.168.98.2)’ can’t be established.
ECDSA key fingerprint is
cc:bc:85:34:fa:ff:0f:60:1f:78:0d:c2:57:68:f8:51.
Are you sure you want to
continue connecting (yes/no)? yes
Warning: Permanently added ’storage1,192.168.98.2’ (ECDSA) to the list of known hosts.
11. Return to the ceph admin node. Become the ceph user. Generate a new ssh key-pair for ease of inter-node communi-
cation:
[ceph@rdo-cc ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): <enter>
Created directory ’/home/ceph/.ssh’.
Enter passphrase (empty for no passphrase): <enter>
Enter same passphrase again: <enter>
Your identification has been saved in /home/ceph/.ssh/id_rsa.
12. Networking and the proper use of short hostnames are essential to how ceph keeps track of cluster membership. While
a manual install is more flexible, the ceph-deploy script must use the short hostnames. Verify that /etc/hosts includes
each node's short hostname and IP. The following names and IP addresses are examples; use ones that match your
assigned systems. The output of the hostname -s command shows the short hostname. Use that output to populate
the hosts file. Make sure all four nodes have the same /etc/hosts file.
[ceph@ ~]$ hostname -s
rdo-cc
13. Copy the public key to all four nodes, including the ceph admin node itself. Use the short hostname, not the IP, to
double check the hosts entries. The Ceph deployment command will only work with names. Start by copying the key to
storage1.
[ceph@rdo-cc ~]$ ssh-copy-id ceph@storage1
The authenticity of host ’storage1 (192.168.98.2)’ can’t be established.
ECDSA key fingerprint is 17:8a:8f:89:fa:a8:cf:64:fd:a9:0d:b4:63:5a:d6:a8.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that \
are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is \
to install the new keys
ceph@cephnode1’s password:
14. Add the key to the other nodes, including the node you’re on. The use of a for loop could be helpful as well.
[ceph@rdo-cc ~]$ ssh-copy-id ceph@storage2
15. You may need to allow remote sudo commands to run on all four nodes as well, by disabling the requiretty setting. Newer
distribution releases no longer set this by default.
[ceph@rdo-cc ~]$ sudo sed -i ’s/requiretty/\!requiretty/’ /etc/sudoers
16. The firewall should already be off. Disable SELinux as well until the service has been properly configured. Run the
following commands on all four nodes. If a node reboots you will need to disable SELinux again.
[ceph@rdo-cc ~]$ sudo setenforce 0; sudo yum -y install \
yum-plugin-priorities
17. You may also need to allow traffic through your firewall. Use a log statement to monitor traffic and add rules as necessary.
Typically this will be an issue when the storage nodes try to connect to the monitor during activation. When ready for
production remember to return and lock down ssh access, firewall and SELinux.
18. Update all the nodes. The rdo-cc node may throw an error about a package issue with python and zeromq. In this lab
the error can be ignored. The unused storage nodes should have no such issue.
[ceph@rdo-cc ~]$ sudo yum update -y
Deploy a Monitor
We will use the cloud controller both as the ceph admin node and a monitor node.
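The working directory seen in the prompts below must be created first; a sketch, assuming the ceph-cluster name shown in those prompts:
[ceph@rdo-cc ~]$ mkdir ~/ceph-cluster
[ceph@rdo-cc ~]$ cd ~/ceph-cluster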
2. Before we deploy a monitor we will need to create various configuration files. Review the output of the command. Notice
the information about creating a new cluster named ceph. The Debug, Info and Warning output is expected. Watch for
Errors, often in red if your output shows color. If you receive a traceback error like "import pkg_resources" you may be
encountering a missing dependency. Install python-pip with yum. After installation, try the ceph-deploy command again.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy new rdo-cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy new rdo-cc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7ff6b
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephde
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : [’rdo-cc’]
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
<output-omitted>
3. Add a line to the global section reducing the required number of osds to two. The configuration file will accept values
with and without underscores currently.
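A sketch of the edit, assuming the ceph.conf generated in the working directory and the commonly used osd pool default size parameter (an assumption; adjust if your course materials specify a different setting):
[ceph@rdo-cc ceph-cluster]$ vim ceph.conf
[global]
...
osd pool default size = 2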
4. Install the ceph software on each of the four nodes. We will wait to deploy a third ceph OSD storage node, but may as
well install the software now. You may receive an error on the rdo-cc node. This is to be expected, and handled in the
next command. Even though there is an error, the script creates the file it will need to continue.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy install --release luminous \
rdo-cc storage1 storage2 storage3
<output_omitted>
[rdo-cc][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: ’ceph’
5. View the differences between various yum repo files. Remove the .rpmnew file and install again.
[ceph@rdo-cc ceph-cluster]$ sudo ls -l /etc/yum.repos.d/ceph*
6. Recall or type in the same command to install Ceph, it should complete without errors this time.
[ceph@rdo-cc my-cluster]$ ceph-deploy install --release luminous rdo-cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy install --release
luminous rdo-cc storage1 storage2 storage3
<output_omitted>
[rdo-cc][DEBUG ] Complete!
[rdo-cc][INFO ] Running command: sudo ceph --version
[rdo-cc][DEBUG ] ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a)
luminous (stable)
7. Create at least one monitor. Three to five are suggested for Paxos quorum. They should not be nodes used for OSD.
We will create a single monitor in this exercise.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy mon create-initial
<output_omitted>
8. Only if the command fails: to run it again, use the --overwrite-conf option.
[ceph@rdo-cc ceph-cluster]$ ceph-deploy --overwrite-conf mon create-initial
9. Use of cephx requires keys to be used by every node in the cluster. Deploy the keys to all nodes, including the rdo-cc.
[ceph@rdo-cc my-cluster]$ ceph-deploy admin rdo-cc storage1 storage2 storage3
<output_omitted>
10. The created keyring is not readable by anyone but root, by default. In order to run commands we need to add read
access.
[ceph@rdo-cc my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
11. Starting with the Luminous release there needs to be a Ceph manager, mgr running. This daemon collects information
about the cluster.
[ceph@rdo-cc my-cluster]$ ceph-deploy mgr create rdo-cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy mgr
create rdo-cc
<output_omitted>
12. At this point we can test our configuration and keys by looking at the cluster health. It should report HEALTH OK. There
should be one mon, one mgr set to the rdo-cc node but zero osds. If you get errors check existence and access to the
keyrings. It can be helpful to open a second terminal or PuTTY session and run ceph -w in that window while working
through the following steps.
[ceph@rdo-cc my-cluster]$ ceph -s
cluster:
id: 16975b02-b6d9-4ea7-97ab-85fdebdf32d0
health: HEALTH_OK
services:
mon: 1 daemons, quorum rdo-cc
mgr: rdo-cc(active)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs:
The suggestion is to use a 1 terabyte disk dedicated to ceph. In our exercise we will use a second, 20G disk attached to our
storage nodes, /dev/xvdb.
1. Create an OSD on your first storage node. Review the output to understand the various steps. Once verified we will
create the second and third OSD.
[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy osd create
--data /dev/xvdb storage1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
<output_omitted>
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 1024 MB used, 19451 MB / 20476 MB avail
pgs:
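Following the note in step 1, the second and third OSDs can be created the same way once the first looks healthy; a sketch, assuming the same /dev/xvdb device on each node:
[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage2
[ceph@rdo-cc my-cluster]$ ceph-deploy osd create --data /dev/xvdb storage3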
3. View the detailed status of the cluster. Note that the -w option means watch. It will capture the window until you use
ctrl-c to stop the command. This can be helpful for troubleshooting and watching activity in a second terminal.
[ceph@rdo-cc ceph-cluster]$ ceph -s
[ceph@rdo-cc ceph-cluster]$ ceph -w
[ceph@rdo-cc ceph-cluster]$ ceph health
[ceph@rdo-cc ceph-cluster]$ ceph health detail
4. From the status outputs, how much space is in the ceph cluster? (This value will depend on disks added by the trainer)
To ensure the ceph cluster is working we will add some test data using rados.
1. View default pools. You will probably only see one pool.
[ceph@rdo-cc ceph-cluster]$ ceph osd lspools
0 rbd,
2. Create a pool and verify it. We will call the pool test and configure 100 placement groups:
[ceph@rdo-cc ceph-cluster]$ ceph osd pool create test 100
pool ’test’ created
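The upload itself is a single rados put; a sketch, assuming the object name try-1 referenced in the listing below and any small local file such as /etc/hosts:
[ceph@rdo-cc ceph-cluster]$ rados put try-1 /etc/hosts --pool=test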
5. Verify object existence and placement. Note that it will not return an error because it is not a standard filesystem:
[ceph@rdo-cc ceph-cluster]$ rados -p test ls try-1
You may need to remove an OSD from the cluster for storage upgrades or other maintenance. This may cause the current
lab cluster to show warnings for being undersized. Add the OSD back to the cluster, as shown in the previous task, to remove the
warnings.
1. Verify the state of the cluster is healthy. Make sure you have enough replicas and space prior to OSD removal.
3. Take the OSD out of the cluster. It may take a while to migrate the placement groups. Use the ceph -w command to view the migration.
If the migration seems to be taking too long, as happens with small clusters, you may have to re-weight the OSD.
Reference online documentation for these steps.
[ceph@rdo-cc ceph-cluster]$ ceph osd out osd.2
4. Stop the OSD daemon on the storage node. Connect to the storage node hosting the OSD you are trying to remove.
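A sketch of stopping the daemon with systemd, assuming the OSD ID 2 used above; run it on whichever storage node hosts that OSD:
[ceph@<storage-node> ~]$ sudo systemctl stop ceph-osd@2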
5. Return to the admin node and remove the OSD from the crush map.
[ceph@rdo-cc ceph-cluster]$ ceph osd crush remove osd.2
Solution 11.1
10. 20460 MB
Overview
Now that we have a working ceph cluster, we can use it as a backend for several other services.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
Three new nodes will be made available for use. They will each have an extra disk which we will partition into two equally
sized partitions. We will use one partition on each node to deploy a ceph OSD and leave the other for possible swift proxy
installation. While a ceph cluster has no single node in charge, we will be using our cloud controller as a ceph admin node as
well as a MON node.
New OSD nodes: storage1, storage2, storage3
In our lab environment the only way to connect to the storage nodes is via rdo-cc. Use the browser to connect to rdo-cc,
then use ssh to connect. A public key has already been configured for ease of access, although the steps to duplicate the task
are included for you.
1. Create pools of 100 placement groups for the glance service to use.
[ceph@rdo-cc ceph-cluster]$ ceph osd pool create images 100
pool ’images’ created
2. Generate a keyring for glance and make it persistent. Be very careful with ceph auth commands. If you make a mistake
the only way to make changes is to disable security and restart ceph on every node in the cluster.
[ceph@rdo-cc ceph-cluster]$ ceph auth get-or-create client.glance mon ’allow r’ \
osd ’allow class-read object_prefix rbd_children, allow rwx pool=images’
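To persist the key, it can be written to a keyring file; a sketch, assuming the conventional /etc/ceph/ceph.client.glance.keyring path and the glance service user created by packstack:
[ceph@rdo-cc ceph-cluster]$ ceph auth get-or-create client.glance | \
sudo tee /etc/ceph/ceph.client.glance.keyring
[ceph@rdo-cc ceph-cluster]$ sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring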
3. Edit /etc/glance/glance-api.conf. Uncomment and edit the following parameters. It is important that each variable
remain in the correct part of the file; appending these values to the end may not work. Remember that when you switch
the store to Ceph you will need to re-import images from the current store; you can download them with glance image-
download.
[ceph@rdo-cc ceph-cluster]$ sudo vim /etc/glance/glance-api.conf
default_store=rbd
show_image_direct_url=True
stores=rbd
rbd_store_chunk_size = 8
rbd_store_pool=images
rbd_store_user=glance
rbd_store_ceph_conf=/etc/ceph/ceph.conf
5. Create a file showing data usage inside the ceph cluster before uploading a new image.
[ceph@rdo-cc ceph-cluster]$ ceph -s > /tmp/ceph.before
6. Install wget if not already installed and download a small image to the cloud controller:
7. Download a small test image. The URL is one long path. The image version may change over time. If the download
does not work verify the path with a browser and use the new version in the following commands.
[root@rdo-cc ceph-cluster]# wget \
http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
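Uploading the downloaded file into Glance under the name wceph is a single image-create; a minimal sketch, assuming the admin credentials have been sourced and the usual qcow2/bare formats for a CirrOS image:
[root@rdo-cc ~(keystone_admin)]# openstack image create --disk-format qcow2 \
--container-format bare --file cirros-0.4.0-x86_64-disk.img wceph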
10. When it finishes verify there is a wceph image along with the previous cirros.
[root@rdo-cc ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| efd776bc-344a-4d5f-9207-f2fea2b447aa | wceph |
| 09b92efc-6567-4e07-b9af-24cc6bc85f85 | cirros |
+--------------------------------------+--------+
12. Compare the before and after files. The after file should show about 12MB of usage.
[root@rdo-cc ~(keystone_admin)]# diff /tmp/ceph.before /tmp/ceph.after
5c5
< osdmap e21: 3 osds: 3 up, 3 in
---
> osdmap e24: 3 osds: 3 up, 3 in
7,8c7,8
< pgmap v60: 364 pgs, 4 pools, 0 bytes data, 0 objects
< 15463 MB used, 99677 MB / 112 GB avail
---
> pgmap v70: 364 pgs, 4 pools, 12859 kB data, 7 objects
> 15511 MB used, 99628 MB / 112 GB avail
13. Now that we know Ceph works as an image store, enable the previous store so that existing images are available. Edit
the stores line in the glance-api.conf file, then restart the service.
[ceph@rdo-cc ceph-cluster]$ sudo vim /etc/glance/glance-api.conf
....
stores=rbd,file,http
....
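A sketch of the restart, assuming the RDO service name openstack-glance-api:
[ceph@rdo-cc ceph-cluster]$ sudo systemctl restart openstack-glance-api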
Chapter 12
OpenStack Object Storage with Swift
12.1 Labs
Overview
Prior to Ceph, the common network-based object storage implementation was Swift. Leveraging memcached, it allows for
fast access to data both from a single node and via a proxy service. We deployed Swift on a local loopback device via
packstack in an earlier lab.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
As the Swift project predates OpenStack, it has many features and capabilities. We will begin with a basic view of the tool,
creating a container and uploading an object.
1. We installed swift using packstack, which creates a loopback device. Begin by looking at the device and through some
of the options the swift command will accept.
[root@rdo-cc ~]# source keystonerc_admin
2. The BUI, the openstack utility, curl, and the swift command can manage object storage. Let's begin with swift. Run
the command without any arguments to get the help output.
[root@rdo-cc ~(keystone_admin)]# swift
usage: swift [--version] [--help] [--os-help] [--snet] [--verbose]
<output-omitted>
3. Create a new container called orders, perhaps to hold online orders for a website.
[root@rdo-cc ~(keystone_admin)]# swift post orders
5. View the basic swift status. Note the Bytes currently used is zero as no objects have been uploaded. There should be
one container and no objects or bytes.
[root@rdo-cc ~(keystone_admin)]# swift stat
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1486070098.65823
X-Trans-Id: txcbdfeeb5920141b48d95a-005893a168
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
6. Look at the details of the orders container. There should be no ACLs set.
[root@rdo-cc ~(keystone_admin)]# swift stat orders
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Objects: 0
Bytes: 0
Read ACL:
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
Last-Modified: Thu, 02 Feb 2017 21:14:59 GMT
X-Timestamp: 1486070098.68344
X-Trans-Id: tx3ac71ad62694409189c25-005893a18f
Content-Type: text/plain; charset=utf-8
7. We will look deeper using the -v option. Note the storage URL, which can be used with curl commands.
[root@rdo-cc ~(keystone_admin)]# swift stat -v
StorageURL: http://172.31.20.51:8080/v1/AUTH_e1e7401f7e9744a390b5ea5252a70903
Auth Token: eb3cb8058c2546ae924a624e32ab1be5
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1486070098.65823
X-Trans-Id: tx57ccef5091244936a74f7-005893a433
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
Swift allows for granular assignment of read and write access based on project and user, among other metadata. In this
task we will set, modify and remove access control lists.
1. Objects can have complex read and write access control lists. Begin by allowing read access by everyone.
[root@rdo-cc ~(keystone_admin)]# swift post orders -r ".r:*"
3. Narrow down read permissions to members of the SoftwareTesters group. Then verify the ACL has been set. Watch
what happens to the previous ACL.
[root@rdo-cc ~(keystone_admin)]# swift post orders -r "SoftwareTesters:*"
4. Set a write ACL to be just a single user, developer1 in the SoftwareTesters group.
[root@rdo-cc ~(keystone_admin)]# swift post orders -w "SoftwareTesters:developer1"
6. Update the write ACL with a comma-separated list of projects and users. Configure the ACL so that developer2 from
SoftwareTesters and all members of the admin project can write. Verify the setting.
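A sketch of one possible command for this ACL; the project name for the admin group (admin here) is an assumption:
[root@rdo-cc ~(keystone_admin)]# swift post orders -w "SoftwareTesters:developer2,admin:*"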
7. List all uploaded objects. We have not uploaded anything so there should be no output.
[root@rdo-cc ~(keystone_admin)]# swift list orders
8. Upload a file to the orders container. We’ll use the /etc/hosts file as it’s common.
[root@rdo-cc ~(keystone_admin)]# swift upload orders /etc/hosts
etc/hosts
10. View the default metadata for the newly uploaded object.
[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:25:50 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
Meta Mtime: 1456161427.668295
Accept-Ranges: bytes
X-Timestamp: 1486070749.21636
X-Trans-Id: tx6fd1c49c371841d68d5cf-005893a3fa
11. Configure the existing object to expire after ten minutes. The command accepts time in seconds.
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-After:600"
12. Verify the time. Note that it does not show the time set or a countdown, but the epoch time in seconds at which the object will
expire. Also note that the overall number of fields is the same.
[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts
Account: AUTH_e1e7401f7e9744a390b5ea5252a70903
Container: orders
Object: etc/hosts
Content Type: application/octet-stream
Content Length: 159
Last Modified: Thu, 02 Feb 2017 21:28:43 GMT
ETag: 3d2fd8331483d30d32d70431b70233ef
X-Delete-At: 1486071522
Accept-Ranges: bytes
X-Timestamp: 1486070922.15140
X-Trans-Id: tx6ba1017928694f8a86520-005893a495
13. Set the object to expire at a particular time in the future. First determine the current epoch time in seconds.
[root@rdo-cc ~(keystone_admin)]# date +’%s’
1486070948
14. Add a thousand seconds to the reported time and verify the new setting.
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-At:1486071948"
16. If we decide we don’t want the object to expire we can pass the X-Remove-Delete-At parameter with no value after the
colon.
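A sketch of removing the expiration:
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Remove-Delete-At:"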
17. Verify the Delete-At times have been removed and the X-Timestamp shows instead.
While less powerful than the command line, the BUI offers easy access to objects and containers. In this task we will explore
using the OpenStack dashboard.
1. Use the BUI to verify the previously uploaded file exists and view its settings. Log in as admin to your OpenStack
dashboard. Navigate to the Project -> Object Store -> Containers. Select the orders container.
2. Notice that the object rests in a directory structure. Select the etc link. Work through the drop-down options on the
hosts line without making changes. Note there is no mention of the expiration time or ability to change it.
3. In the orders container box note that the Public Access box is not selected and shows as disabled. Check the box.
The word disabled should be replaced with a link. Right-click on the link and copy the link location.
4. Paste the URL into a new browser window. Edit the URL. Replace the internal IP address (172.24.xx.yy type address)
with the Public IP address (54.212.aa.bb type address). Once the edit is complete press enter to retrieve the page. The
page should show the XML for orders container.
5. Edit the URL again. Append the file name etc/hosts. The browser should show a pop-up window, prompting you to
download a file. Download the file. Locate the file and open it with a text editor. You should see the contents of your
/etc/hosts file.
6. Now download the file via the command line. Return to your terminal session. Use the swift command to download the
file to the current directory and change the file’s name.
[root@rdo-cc ~(keystone_admin)]# swift download orders etc/hosts -o localfile
7. Verify the file has what we expect. The file should look the same as what was downloaded via the BUI.
[root@rdo-cc ~(keystone_admin)]# cat localfile
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
8. Configure the container to allow web access and set the type to listing.css. We will verify it later using the openstack
utility. Note that, as quoted below, the container name ends up inside the metadata value and the property is applied at the
account level, which is why it later appears as Web-Listings='true orders'.
[root@rdo-cc ~(keystone_admin)]# swift post -m 'web-listings: true orders'
9. We will again set an expire time, then check to see if the object still exists. Set the timer for 30 seconds.
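The command for this step is not shown; a likely form, reusing the X-Delete-After header from earlier, is:
[root@rdo-cc ~(keystone_admin)]# swift post orders etc/hosts -H "X-Delete-After:30"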
10. Use the sleep command to make sure 30 seconds has passed then view the status of the object.
[root@rdo-cc ~(keystone_admin)]# sleep 30
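Once the object has expired, checking it should report that it no longer exists (swift stat returns a 404 Not Found):
[root@rdo-cc ~(keystone_admin)]# swift stat orders etc/hosts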
With each release, the openstack utility gains more of the features previously provided by the per-service commands. We
will explore the current capabilities.
11. Open the openstack utility and view the object sub-commands using the help command.
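The exact invocation is not shown. One plausible sequence (an assumption) is to start the interactive shell and request help
for a specific object store sub-command, which produces output like the fragment that follows:
[root@rdo-cc ~(keystone_admin)]# openstack
(openstack) help object store account set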
optional arguments:
-h, --help show this help message and exit
--property <key=value>
Set a property on this account (repeat option to set
multiple properties)
13. View the objects in the orders container. Note the path was picked up differently. In Pike there appears to be an
undocumented quirk, where Python reports "ascii codec can't decode byte 0xe2 in position 17: ordinal not in range(128)".
This can be ignored; the following commands show the objects are actually there.
(openstack) object list orders
+------------+
| Name |
+------------+
| /etc/group |
| etc/hosts |
+------------+
15. View the object store information. Note the Web-Listings parameter we set in a previous task.
(openstack) object store account show
+------------+---------------------------------------+
| Field | Value |
+------------+---------------------------------------+
| Account | AUTH_e1e7401f7e9744a390b5ea5252a70903 |
| Bytes | 1811 |
| Containers | 1 |
| Objects | 2 |
| properties | Web-Listings='true orders'            |
+------------+---------------------------------------+
17. View the current object store account information. Note it may take a while for the Objects output to update. The
background daemon typically runs once a minute.
(openstack) object store account show
+------------+---------------------------------------+
| Field | Value |
+------------+---------------------------------------+
| Account | AUTH_e1e7401f7e9744a390b5ea5252a70903 |
| Bytes | 1235 |
| Containers | 1 |
| Objects | 1 |
| properties | Web-Listings='true orders'            |
+------------+---------------------------------------+
18. Explore the openstack object and openstack object store account commands as time permits. Using command output
and online resources, build a list of the various metadata settings possible for an object or a container.
Chapter 13
High Availability in the Cloud
13.1 Labs

Chapter 14
Cloud Security with OpenStack
14.1 Labs

Chapter 15
Monitoring and Metering
15.1 Labs
Chapter 16
Cloud Automation
16.1 Labs
Overview
This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
You are encouraged to write the YAML files by hand, to learn the proper syntax. A collection of files has been made available
to use as well. They may still require some editing to match UUIDs. You can download them using wget:
[root@rdo-cc ~]# cd
[root@rdo-cc ~]# wget https://training.linuxfoundation.org/cm/LFS452/heat-templates.tar \
--user=LFtraining --password=Penguin2014
1. Before we can deploy an instance using a simple heat stack, we need to choose a network to join. Just as when the
BUI is used, if there is more than one network available, one must be chosen to launch an instance. We will use the
Accounting Internal network for our new instance. Note the network ID for later use. In this example it begins with
a9b90a59.
[root@rdo-cc ~]# source keystonerc_admin
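The listing command itself is not shown; a later step refers to a neutron net-list command, so presumably something like:
[root@rdo-cc ~(keystone_admin)]# neutron net-list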
2. Create a YAML file for a simple, one-instance stack. Syntax is very important. If you do not indent whitespace
properly you will receive an error, something like Error parsing template, with further sections calling out blocks:
expected <block end>, but found '<block mapping start>'. Edit the file such that similar sections are equally
indented.
The network ID should match the output from the previous neutron net-list command.
[root@rdo-cc ~(keystone_admin)]# vim hello_world.yaml
heat_template_version: 2015-04-30
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f
3. Connect to the BUI using a web browser in a new window. Log in as admin. Make sure the Accounting Internal network
is shared. Navigate to Admin -> System -> Networks. Select the Edit Network button on the Accounting Internal
line. Click Shared, then Save Changes.
4. To view the stack being created, navigate to the Network Topology page. The following commands should cause the page
to update automatically as resources are created and destroyed.
5. Using the CLI, use the openstack stack create command to deploy a new instance. Watch the BUI for updates.
[root@rdo-cc ~(keystone_admin)]# openstack stack create -t hello_world.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | 5a9c5109-3484-431c-a5f1-dee90eeb0574 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2017-02-17T23:09:50Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+-----------------------------------------------------+
6. After a few seconds the instance should finish deployment. Verify the stack status. Keep trying until it reports CREATE_COMPLETE.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 007328c3-9dd4-48fa-8a2a-2a4cd59ce171 | stack1     | f20dbe1137784471855e893154253f48 | CREATE_COMPLETE | 2018-06-15T15:40:24Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
7. Use a nova command to verify the instance was created. Note the name is a derivative of the stack used to create it.
[root@rdo-cc ~(keystone_admin)]# nova list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name                       | Status | Task State | Power State | Networks                         |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | -          | Running     | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
1. Now add more resources to the stack. Review a list of possible resource types. Use the BUI to navigate to
Project -> Orchestration -> Resource Types and look through the various types. View types that begin with
OS::Neutron. Look at the details for OS::Neutron::Subnet.
2. Create another YAML file and populate it with the instance information we used before, adding network, router, and
interface information. Again note that proper whitespace indentation and syntax are essential. The following command
opens a complete file. Consider trying on your own first.
[root@rdo-cc ~(keystone_admin)]# vim netandserver.yaml
heat_template_version: 2015-04-30
resources:
  internal_net:
    type: OS::Neutron::Net
  internal_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: internal_net }
      cidr: "10.8.1.0/24"
      dns_nameservers: [ "8.8.8.8", "8.8.4.4" ]
      ip_version: 4
  internal_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: public }
  internal_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: internal_router }
      subnet: { get_resource: internal_subnet }
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: { get_resource: internal_net }
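The creation command for this stack is not shown; judging by the stack2 name used when deleting it later, it was presumably:
[root@rdo-cc ~(keystone_admin)]# openstack stack create -t netandserver.yaml stack2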
4. View the status of the stacks. Depending on how fast you read and type, the stack may have already finished being created.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
6. Open the BUI and verify the newly created resources. If you view the Network Topology you should find a new router,
network and instance.
7. Now shut down the stack and release the deployed resources.
[root@rdo-cc ~(keystone_admin)]# openstack stack delete stack2
Are you sure you want to delete this stack(s) [y/N]? y
8. Verify the resources are no longer in use from the CLI and BUI. Depending on how fast you type, you may see stack2
in a DELETE_COMPLETE state before it is fully removed.
[root@rdo-cc ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 5a9c5109-3484-431c-a5f1-dee90eeb0574 | stack1 | CREATE_COMPLETE | 2016-10-28T17:24:31 | None |
+--------------------------------------+------------+-----------------+---------------------+--------------+
Exercise 16.3: Snapshots and updating stacks

This exercise uses the RDO OpenStack deployment running on CentOS. The lab instructions use the node name alias of
rdo-cc.
The course material includes a URL for lab access. You will use your Linux Foundation login and password to gain access.
After successfully logging in you will be presented a new page and a virtual machine instance will be created. It may take a
minute or two for previous steps to be completed. You will see a line saying Configuring OpenStack and a twirling cursor
while the configuration takes place.
Use the OpenStack Dashboard tab via the Katacoda page to access the Horizon BUI. The Horizon URL can also be found
by looking in the /opt/host file. Your URLs may be different than the example shown.
The plus sign (+) icon in the menu bar can be used to open more terminals for testing or viewing of log files in real time.
Select the Shutdown cluster tab when finished with the lab.
The suggested and tested browser to use is Chrome, although others may work.
1. A snapshot allows us to save a stack configuration and roll back to that point of configuration. Begin by creating a
snapshot of stack1.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot create stack1
+---------------+--------------------------------------+
| Field | Value |
+---------------+--------------------------------------+
| ID | b5987b66-082d-49c9-b0f2-f9a4831ea44c |
| name | None |
| status | IN_PROGRESS |
| status_reason | None |
| data | None |
| creation_time | 2017-02-17T23:30:57Z |
+---------------+--------------------------------------+
Update a stack
1. Update the YAML file to create a cinder volume and attach it to the existing instance. Again, be mindful of the indentation.
To keep both versions easy to use, first copy the file.
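The copy and edit commands are not shown; given the hello_world-2.yaml name used in the next step, presumably:
[root@rdo-cc ~(keystone_admin)]# cp hello_world.yaml hello_world-2.yaml
[root@rdo-cc ~(keystone_admin)]# vim hello_world-2.yaml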
heat_template_version: 2015-04-30
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f
  cinder_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1
  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: cinder_volume }
      instance_uuid: { get_resource: server }
      mountpoint: /dev/sdb
2. Update the stack, passing the newly updated YAML file and the name of the stack to update. Note that the command
outputs the stack condition before the update has actually been applied.
[root@rdo-cc ~(keystone_admin)]# openstack stack update -t hello_world-2.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | de645167-4478-4ea4-a1f4-622035b50dd6 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2017-02-17T23:08:23Z |
| updated_time | 2017-02-17T23:34:42Z |
| stack_status | UPDATE_IN_PROGRESS |
| stack_status_reason | Stack UPDATE started |
+---------------------+-----------------------------------------------------+
4. Verify the instance continues to run, using the openstack server list (or nova list) command. If it is not running, use
nova start to start it.
[root@rdo-cc ~(keystone_admin)]# openstack server list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name                       | Status | Task State | Power State | Networks                         |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | -          | Running     | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
5. Use the openstack server command to view the newly attached storage device.
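The command is not shown; one reasonable choice is openstack server show, whose volumes_attached field lists the new
volume:
[root@rdo-cc ~(keystone_admin)]# openstack server show stack1-server-mgvaepymkyqa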
6. Using the previous output, verify the storage exists and note which instance ID it is attached to.
[root@rdo-cc ~(keystone_admin)]# cinder list
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
| ID                                   | Status | Migration Status | Name                              | Size | Volume Type | Bootable | Multiattach | Attached to                          |
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
| 8a272a63-1f7f-4cbf-8256-126d86168603 | in-use | -                | stack1-cinder_volume-qni4na3exrot | 1    | -           | false    | False       | 7c31cfe9-92c9-4c67-9b4f-61e971626296 |
+--------------------------------------+--------+------------------+-----------------------------------+------+-------------+----------+-------------+--------------------------------------+
7. Look at the volume details. Note that the server_id matches the instance.
[root@rdo-cc ~(keystone_admin)]# cinder show stack1-cinder_volume-qni4na3exrot
+---------------------------------------+----------------------------------------------------------------------------------+
| Property                              | Value                                                                            |
+---------------------------------------+----------------------------------------------------------------------------------+
| attachments                           | [{u'server_id': u'7c31cfe9-92c9-4c67-9b4f-61e971626296',                         |
|                                       | u'attachment_id': u'4061289f-d891-4ecf-8cbc-3560ca5a43dd', u'host_name': None,   |
|                                       | u'volume_id': u'8a272a63-1f7f-4cbf-8256-126d86168603', u'device': u'/dev/vdb',   |
|                                       | u'id': u'8a272a63-1f7f-4cbf-8256-126d86168603'}]                                 |
| availability_zone                     |
| bootable                              |
<output omitted>
The process of snapshots and rollbacks is changing. Currently the rollback causes the instance to error out. While there are
bug reports, the documentation seems to suggest using templates for each stage instead of a snapshot. By using multiple
YAML files you can select a particular stack state without having taken a snapshot.
1. Use the original version of the template file and revert the instance. The volume should be deleted, and the instance
should continue to run. Log into the instance and verify the uptime spans the most recent update, showing the instance
was not rebooted. Your network namespace will be different. Also remember that CirrOS changed its default password
from 'cubswin:)' to 'gocubsgo'. If one does not work, try the other.
[root@rdo-cc ~(keystone_admin)]# openstack stack update -t hello_world.yaml stack1
+---------------------+-----------------------------------------------------+
| Field | Value |
+---------------------+-----------------------------------------------------+
| id | 1d072060-5589-436d-a7a4-aadda61bc240 |
| stack_name | stack1 |
| description | Simple template to deploy a single compute instance |
| creation_time | 2018-06-15T20:12:18Z |
| updated_time | 2018-06-15T20:46:35Z |
| stack_status | UPDATE_IN_PROGRESS |
| stack_status_reason | Stack UPDATE started |
+---------------------+-----------------------------------------------------+
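The login check is not shown. One way to reach the instance from the controller is through the DHCP namespace of the
Accounting Internal network; the namespace name below is hypothetical, so list yours first with ip netns:
[root@rdo-cc ~(keystone_admin)]# ip netns
[root@rdo-cc ~(keystone_admin)]# ip netns exec qdhcp-a9b90a59-f28d-4fd3-a5db-3ec6a1fc881f ssh [email protected]
$ uptime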
2. Verify via the command line or the BUI that the volume is no longer attached and no longer exists.
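A quick check from the command line; the volume list should now be empty:
[root@rdo-cc ~(keystone_admin)]# cinder list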
3. Now try the process of using the snapshot. Again, this does not seem to work in the Pike version at the time of writing.
Verify the status of the snapshot.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot list stack1
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| id | name | status | status_reason | creation_time |
+--------------------------------------+------+----------+---------------------------------------+---------------------+
| 75e88260-3c7c-4658-bdf9-f96fd3b21b8a | None | COMPLETE | Stack SNAPSHOT completed successfully | 2016-10-28T17:34:40 |
+--------------------------------------+------+----------+---------------------------------------+---------------------+
4. Look at the details of the snapshot. Note there is no reference to the cinder volumes.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot show stack1 75e88260-3c7c-4658-bdf9-f96fd3b21b8a
snapshot:
creation_time: '2017-02-17T23:30:57Z'
data:
action: SNAPSHOT
environment:
<output-omitted>
5. Shut down the instance before rolling back. Verify it has shut down before continuing. The stack-restore process will
not check and could leave the instance unusable.
[root@rdo-cc ~(keystone_admin)]# openstack server stop stack1-server-mgvaepymkyqa
+--------------------------------------+----------------------------------------------------------+
| Field | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | rdo-cc.localdomain |
| OS-EXT-SRV-ATTR:hypervisor_hostname | rdo-cc.localdomain |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
| OS-EXT-STS:power_state | Shutdown |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | stopped |
<output omitted>
7. Using the snapshot ID and the name of the stack to roll back, undo whatever has changed since the snapshot was taken.
[root@rdo-cc ~(keystone_admin)]# openstack stack snapshot restore stack1 75e88260-3c7c-4658-bdf9-f96fd3b21b8a
9. Use the BUI, cinder and nova commands to verify you have the instance, but no longer have an attached volume. You
may have to start the instance if it is not running. Note: You may have an error instead. This worked prior to Pike.
[root@rdo-cc ~(keystone_admin)]# nova list
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
| 7c31cfe9-92c9-4c67-9b4f-61e971626296 | stack1-server-mgvaepymkyqa | ACTIVE | - | Running | Accounting Internal=192.168.0.11 |
+--------------------------------------+----------------------------+--------+------------+-------------+----------------------------------+
10. Now we can delete the remaining stack. Answer yes when asked if you want to delete the stack.
[root@rdo-cc ~(keystone_admin)]# openstack stack delete stack1
+--------------------------------------+------------+------------------+---------------------+---------------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+------------------+---------------------+---------------------+
| 5a9c5109-3484-431c-a5f1-dee90eeb0574 | stack1 | RESTORE_COMPLETE | 2016-10-28T17:24:31 | 2016-10-28T20:05:55 |
+--------------------------------------+------------+------------------+---------------------+---------------------+
11. Verify the newly created instance is not in the nova list.
[root@rdo-cc ~(keystone_admin)]# nova list | grep stack1
Chapter 17
Conclusion
17.1 Labs