Backing Up and Restoring Undercloud and Control Plane Nodes
Creating and restoring backups of the undercloud and the overcloud control plane nodes
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This guide explains how to create and restore backups of the undercloud and control plane nodes,
and how to troubleshoot backup and restore problems. Backups are required when you upgrade or
update Red Hat OpenStack Platform. You can also optionally create periodic backups of your
environment to minimize downtime in case of issues.
Table of Contents
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
CHAPTER 1. BACKING UP THE UNDERCLOUD NODE
    1.1. SUPPORTED BACKUP FORMATS AND PROTOCOLS
    1.2. CONFIGURING THE BACKUP STORAGE LOCATION
    1.3. INSTALLING AND CONFIGURING AN NFS SERVER ON THE BACKUP NODE
    1.4. INSTALLING REAR ON THE UNDERCLOUD NODE
    1.5. CREATING A STANDALONE DATABASE BACKUP OF THE UNDERCLOUD NODES
    1.6. CONFIGURING OPEN VSWITCH (OVS) INTERFACES FOR BACKUP
    1.7. CREATING A BACKUP OF THE UNDERCLOUD NODE
    1.8. SCHEDULING UNDERCLOUD NODE BACKUPS WITH CRON
CHAPTER 2. BACKING UP THE CONTROL PLANE NODES
    2.1. SUPPORTED BACKUP FORMATS AND PROTOCOLS
    2.2. INSTALLING AND CONFIGURING AN NFS SERVER ON THE BACKUP NODE
    2.3. INSTALLING REAR ON THE CONTROL PLANE NODES
    2.4. CONFIGURING OPEN VSWITCH (OVS) INTERFACES FOR BACKUP
    2.5. CREATING A BACKUP OF THE CONTROL PLANE NODES
    2.6. SCHEDULING CONTROL PLANE NODE BACKUPS WITH CRON
CHAPTER 3. RESTORING THE UNDERCLOUD AND CONTROL PLANE NODES
    3.1. PREPARING A CONTROL PLANE WITH COLOCATED CEPH MONITORS FOR THE RESTORE PROCESS
    3.2. RESTORING THE UNDERCLOUD NODE
    3.3. RESTORING THE CONTROL PLANE NODES
    3.4. RESTORING THE GALERA CLUSTER MANUALLY
    3.5. RESTORING THE UNDERCLOUD NODE DATABASE MANUALLY
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
1. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
2. Click the following link to open the Create Issue page: Create Issue
3. Complete the Summary and Description fields. In the Description field, include the
documentation URL, chapter or section number, and a detailed description of the issue. Do not
modify any other fields in the form.
4. Click Create.
CHAPTER 1. BACKING UP THE UNDERCLOUD NODE
You can use the Relax-and-Recover (ReaR) tool to create backups of the undercloud node. You must back up the undercloud node before you perform updates or upgrades, so that you can use the backups to restore the undercloud node to its previous state if an error occurs during an update or upgrade.
1.1. SUPPORTED BACKUP FORMATS AND PROTOCOLS
The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane.
ISO
SFTP
NFS
1.2. CONFIGURING THE BACKUP STORAGE LOCATION
Procedure
In the bar-vars.yaml file, configure the backup storage location. Follow the appropriate steps
for your NFS server or SFTP server.
If you use an NFS server, add the following parameters to the bar-vars.yaml file:
tripleo_backup_and_restore_server: <ip_address>
tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path>
tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
If you use an SFTP server, add the following parameters to the bar-vars.yaml file:
tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/
tripleo_backup_and_restore_backup_url: iso:///backup/
Replace <user>, <password>, and <backup_node> with the backup node URL and credentials.
1.3. INSTALLING AND CONFIGURING AN NFS SERVER ON THE BACKUP NODE
IMPORTANT
If you previously installed and configured an NFS or SFTP server, you do not need
to complete this procedure. You enter the server information when you set up
ReaR on the node that you want to back up.
By default, the Relax and Recover (ReaR) IP address parameter for the NFS
server is 192.168.24.1. You must add the parameter
tripleo_backup_and_restore_server to set the IP address value that matches
your environment.
Procedure
2. On the undercloud node, create an inventory file for the backup node:
Replace <ip_address> and <user> with the values that apply to your environment.
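A minimal sketch of what this backup-node inventory file typically contains, assuming an INI-style Ansible inventory and a group name of BackupNode (the group name and host alias are assumptions, not confirmed by this guide):
[BackupNode]
<backup_node> ansible_host=<ip_address> ansible_user=<user>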
3. Copy the public SSH key from the undercloud node to the backup node.
Replace <backup_node> with the path and name of the backup node.
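A hedged sketch of the key copy and of the NFS setup command that typically completes this procedure, assuming the stack user's default SSH key and a backup-node inventory file named /home/stack/nfs-inventory.yaml (both paths are assumptions):
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>
$ openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml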
1.4. INSTALLING REAR ON THE UNDERCLOUD NODE
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS
server on the backup node”.
Procedure
If you use a custom stack name, add the --stack <stack_name> option to the tripleo-ansible-
inventory command.
2. If you have not done so before, create an inventory file and use the tripleo-ansible-inventory
command to generate a static inventory file that contains hosts and variables for all the
overcloud nodes:
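A hedged sketch of the inventory generation and of the ReaR setup command that typically follows it, assuming the heat-admin SSH user and the file path /home/stack/tripleo-inventory.yaml (both are assumptions):
$ tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory /home/stack/tripleo-inventory.yaml
$ openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml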
4. If your system uses the UEFI boot loader, perform the following steps on the undercloud node:
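A minimal sketch of the UEFI preparation, assuming that the required tooling is dosfstools and efibootmgr and that ReaR reads the standard /etc/rear/local.conf file (both are assumptions):
$ sudo dnf install dosfstools efibootmgr
Then set USING_UEFI_BOOTLOADER=1 in /etc/rear/local.conf so that ReaR creates a UEFI-capable rescue image.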
1.5. CREATING A STANDALONE DATABASE BACKUP OF THE UNDERCLOUD NODES
You can optionally include standalone undercloud database backups in your routine backup schedule to provide additional data security. A full backup of an undercloud node includes a database backup of the undercloud node. However, if a full undercloud restoration fails, you might lose access to the database portion
of the full undercloud backup. In this case, you can recover the database from a standalone undercloud
database backup.
Procedure
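A hedged sketch of the database-only backup command, assuming the openstack undercloud backup plugin's --db-only option; the resulting SQL dump is typically written to the stack user's home directory (the output location is an assumption):
$ openstack undercloud backup --db-only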
Additional resources
Section 3.5, “Restoring the undercloud node database manually”
1.6. CONFIGURING OPEN VSWITCH (OVS) INTERFACES FOR BACKUP
Procedure
Replace <command_1> and <command_2> with commands that configure the network
interface names or IP addresses. For example, you can add the ip link add br-ctlplane type
bridge command to configure the control plane bridge name or add the ip link set eth0 up
command to set the name of the interface. You can add more commands to the parameter
based on your network configuration.
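NETWORKING_PREPARATION_COMMANDS is a standard Relax-and-Recover array of shell commands that ReaR runs while it prepares the network in the rescue environment. A hedged sketch of the format, using the example commands from the paragraph above (how TripleO passes the value to ReaR is an assumption here):
NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip link set eth0 up')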
1.7. CREATING A BACKUP OF THE UNDERCLOUD NODE
If you are upgrading your Red Hat OpenStack Platform environment from 13 to 16.2, you must create a
separate database backup after you perform the undercloud upgrade and before you perform the
Leapp upgrade process on the overcloud nodes. For more information, see Section 1.5, “Creating a
standalone database backup of the undercloud nodes”.
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS
server on the backup node”.
You have installed ReaR on the undercloud node. For more information, see Section 1.4,
“Installing ReaR on the undercloud node”.
If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces.
For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup” .
Procedure
5. If you have not done so before, create an inventory file and use the tripleo-ansible-inventory
command to generate a static inventory file that contains hosts and variables for all the
overcloud nodes:
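The inventory command for this step matches the one sketched in the ReaR installation section. A hedged sketch of the backup command that typically completes this procedure, assuming the inventory file path /home/stack/tripleo-inventory.yaml (an assumption):
$ openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml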
1.8. SCHEDULING UNDERCLOUD NODE BACKUPS WITH CRON
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS
server on the backup node”.
You have installed ReaR on the undercloud and control plane nodes. For more information, see
Section 2.3, “Installing ReaR on the control plane nodes” .
You have sufficient available disk space at your backup location to store the backup.
Procedure
1. To schedule a backup of your undercloud node, run the backup command with its cron option; see the examples after this list. The default schedule is Sundays at midnight.
To change the default backup schedule, pass a different cron schedule on the
tripleo_backup_and_restore_cron parameter:
To define additional parameters that are added to the backup command when cron runs the
scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the
backup command, as shown in the following example:
To change the default user that executes the backup, pass the
tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in
the following example:
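Hedged sketches of the scheduling commands that correspond to the options above, assuming the openstack undercloud backup plugin's --cron option and Ansible-style JSON --extra-vars overrides (the exact override syntax is an assumption):
$ openstack undercloud backup --cron
$ openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}'
$ openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra": "--extra-vars /home/stack/bar-vars.yaml"}'
$ openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "stack"}'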
CHAPTER 2. BACKING UP THE CONTROL PLANE NODES
You can use the Relax-and-Recover (ReaR) tool to create backups of the control plane nodes. You must back up the control plane nodes before you perform updates or upgrades, so that you can use the backups to restore the control plane nodes to their previous state if an error occurs during an update or upgrade.
2.1. SUPPORTED BACKUP FORMATS AND PROTOCOLS
The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane.
ISO
SFTP
NFS
2.2. INSTALLING AND CONFIGURING AN NFS SERVER ON THE BACKUP NODE
IMPORTANT
If you previously installed and configured an NFS or SFTP server, you do not need
to complete this procedure. You enter the server information when you set up
ReaR on the node that you want to back up.
By default, the Relax and Recover (ReaR) IP address parameter for the NFS
server is 192.168.24.1. You must add the parameter
tripleo_backup_and_restore_server to set the IP address value that matches
your environment.
Procedure
2. On the undercloud node, create an inventory file for the backup node:
Replace <ip_address> and <user> with the values that apply to your environment.
3. Copy the public SSH key from the undercloud node to the backup node.
Replace <backup_node> with the path and name of the backup node.
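The inventory file and SSH key copy mirror the undercloud procedure in Chapter 1. A hedged sketch of the key copy and of the NFS setup command that typically completes this procedure, assuming the stack user's default SSH key and a backup-node inventory file named /home/stack/nfs-inventory.yaml (both paths are assumptions):
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>
$ openstack overcloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml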
2.3. INSTALLING REAR ON THE CONTROL PLANE NODES
IMPORTANT
Due to a known issue, the ReaR backup of overcloud nodes continues even if a Controller
node is down. Ensure that all your Controller nodes are running before you run the ReaR
backup. A fix is planned for a later Red Hat OpenStack Platform (RHOSP) release. For
more information, see BZ#2077335 - Back up of the overcloud ctlplane keeps going
even if one controller is unreachable.
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 2.2, “Installing and configuring an NFS
server on the backup node”.
Procedure
2. If you have not done so before, create an inventory file and use the tripleo-ansible-inventory
command to generate a static inventory file that contains hosts and variables for all the
overcloud nodes:
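A hedged sketch of the inventory command for this step, assuming the heat-admin SSH user and the file path /home/stack/tripleo-inventory.yaml (both are assumptions):
$ tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory /home/stack/tripleo-inventory.yaml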
3. In the bar-vars.yaml file, configure the backup storage location. Follow the appropriate steps
for your NFS server or SFTP server.
a. If you use an NFS server, add the following parameters to the bar-vars.yaml file:
tripleo_backup_and_restore_server: <ip_address>
tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path>
tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
b. If you use an SFTP server, add the following parameters to the bar-vars.yaml file:
tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/
tripleo_backup_and_restore_backup_url: iso:///backup/
Replace <user>, <password>, and <backup_node> with the backup node URL and credentials.
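A hedged sketch of the ReaR setup command that typically follows this configuration, assuming the inventory file path /home/stack/tripleo-inventory.yaml (an assumption):
$ openstack overcloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml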
5. If your system uses the UEFI boot loader, perform the following steps on the control plane
nodes:
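A brief hedged sketch, mirroring the undercloud UEFI preparation: install the tooling and set USING_UEFI_BOOTLOADER=1 in /etc/rear/local.conf on each control plane node (the package names and parameter are assumptions):
$ sudo dnf install dosfstools efibootmgr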
2.4. CONFIGURING OPEN VSWITCH (OVS) INTERFACES FOR BACKUP
Procedure
Replace <command_1> and <command_2> with commands that configure the network
interface names or IP addresses. For example, you can add the ip link add br-ctlplane type
bridge command to configure the control plane bridge name or add the ip link set eth0 up
command to set the name of the interface. You can add more commands to the parameter
based on your network configuration.
2.5. CREATING A BACKUP OF THE CONTROL PLANE NODES
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 2.2, “Installing and configuring an NFS
server on the backup node”.
You have installed ReaR on the control plane nodes. For more information, see Section 2.3,
“Installing ReaR on the control plane nodes”.
If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces.
For more information, see Section 2.4, “Configuring Open vSwitch (OVS) interfaces for
backup”.
Procedure
2. On each control plane node, back up the config-drive partition of each node as the root user:
Replace <config_drive_partition> with the name of the config-drive partition that you located
in step 1.
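Hedged sketches of how the config-drive partition is typically located (step 1) and copied (step 2), assuming the standard config-2 filesystem label and /mnt/config-drive as the copy target (both are assumptions):
# blkid -t LABEL="config-2" -odevice
# dd if=<config_drive_partition> of=/mnt/config-drive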
4. If you have not done so before, use the tripleo-ansible-inventory command to generate a
static inventory file that contains hosts and variables for all the overcloud nodes:
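The inventory command matches the sketch in the ReaR installation section. A hedged sketch of the backup command that typically follows, assuming the inventory file path /home/stack/tripleo-inventory.yaml (an assumption):
$ openstack overcloud backup --inventory /home/stack/tripleo-inventory.yaml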
The backup process runs sequentially on each control plane node without disrupting the service
to your environment.
2.6. SCHEDULING CONTROL PLANE NODE BACKUPS WITH CRON
Prerequisites
You have an NFS or SFTP server installed and configured on the backup node. For more
information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS
server on the backup node”.
You have installed ReaR on the undercloud and control plane nodes. For more information, see
Section 2.3, “Installing ReaR on the control plane nodes” .
You have sufficient available disk space at your backup location to store the backup.
Procedure
1. To schedule a backup of your control plane nodes, run the backup command with its cron option; see the examples after this list. The default schedule is Sundays at midnight.
To change the default backup schedule, pass a different cron schedule on the
tripleo_backup_and_restore_cron parameter:
To define additional parameters that are added to the backup command when cron runs the
scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the
backup command, as shown in the following example:
To change the default user that executes the backup, pass the
tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in
the following example:
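Hedged sketches of the scheduling commands that correspond to the options above, assuming the openstack overcloud backup plugin's --cron option and Ansible-style JSON --extra-vars overrides (the exact override syntax is an assumption):
$ openstack overcloud backup --cron
$ openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}'
$ openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra": "--extra-vars /home/stack/bar-vars.yaml"}'
$ openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "stack"}'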
CHAPTER 3. RESTORING THE UNDERCLOUD AND CONTROL PLANE NODES
3.1. PREPARING A CONTROL PLANE WITH COLOCATED CEPH MONITORS FOR THE RESTORE PROCESS
IMPORTANT
If you cannot back up the /var/lib/ceph directory, you must contact the Red Hat
Technical Support team to rebuild the ceph-mon index. For more information, see Red
Hat Technical Support Team.
Prerequisites
You have created a backup of the undercloud node. For more information, see Section 1.7,
“Creating a backup of the undercloud node”.
You have created a backup of the control plane nodes. For more information, see Section 2.5,
“Creating a backup of the control plane nodes”.
If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
Replace <file_type> and <device_disk> with the type and location of the backup file. Normally,
the file type is xfs and the location is /dev/vda2.
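A minimal sketch of the kind of archive script that this note refers to, assuming that the Ceph monitor data in /var/lib/ceph is packed into /tmp/ceph.tar.gz so that the post-restore script below can unpack it (the exact tar options are assumptions):
#!/bin/bash
# Mount the disk that holds /var/lib/ceph and archive it with extended attributes and ACLs.
mkdir -p /mnt/local
mount -t <file_type> <device_disk> /mnt/local
cd /mnt/local
tar -zcv --xattrs --acls -f /tmp/ceph.tar.gz var/lib/ceph
cd /
umount <device_disk>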
if [ -f "/tmp/ceph.tar.gz" ]; then
  rm -rf /mnt/local/var/lib/ceph/*
  tar xvC /mnt/local -f /tmp/ceph.tar.gz var/lib/ceph --xattrs --xattrs-include='*.*'
fi
Additional resources
3.2. RESTORING THE UNDERCLOUD NODE
Prerequisites
You have created a backup of the undercloud node. For more information, see Section 1.7, “Creating a backup of the undercloud node”.
If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
1. Power off the undercloud node. Ensure that the undercloud node is powered off completely
before you proceed.
NOTE
If your system uses UEFI, select the Relax-and-Recover (no Secure Boot)
option.
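A hedged sketch of the restore step that typically follows, assuming that the node was booted from the backup ISO, that the Recover entry was selected in the boot menu, and that you are logged in to the ReaR rescue shell as root:
# rear recover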
When the undercloud node restoration process completes, the console displays a message that confirms the recovery finished successfully.
3.3. RESTORING THE CONTROL PLANE NODES
To restore the control plane, you must restore all control plane nodes to ensure state consistency.
You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or
download it to the undercloud node through Integrated Lights-Out (iLO) remote access.
NOTE
Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as
Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information
about third-party SDNs, refer to the third-party SDN documentation.
Prerequisites
You have created a backup of the control plane nodes. For more information, see Section 2.5,
“Creating a backup of the control plane nodes”.
If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 2.4, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
1. Power off each control plane node. Ensure that the control plane nodes are powered off
completely before you proceed.
2. Boot each control plane node with the corresponding backup ISO image.
3. When the Relax-and-Recover boot menu displays, on each control plane node, select Recover
<control_plane_node>. Replace <control_plane_node> with the name of the corresponding
control plane node.
NOTE
If your system uses UEFI, select the Relax-and-Recover (no Secure Boot)
option.
4. On each control plane node, log in as the root user and restore the node:
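A hedged sketch of the restore command, assuming the standard ReaR rescue shell on each node:
# rear recover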
A status message displays while the restoration runs. When the control plane node restoration process completes, the console displays a message that confirms the recovery finished successfully.
5. When the command line console is available, restore the config-drive partition of each control
plane node:
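A hedged sketch of the config-drive restore, assuming that the partition was copied to /mnt/config-drive during the backup and that the restored file system is mounted at /mnt/local in the rescue environment (both are assumptions):
# dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>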
7. Set the boot sequence to the normal boot device. On boot up, the node resumes its previous
state.
8. To ensure that the services are running correctly, check the status of pacemaker. Log in to a
Controller node as the root user and enter the following command:
# pcs status
9. To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For
more information, see Validating your OpenStack cloud with the Integration Test Suite
(tempest).
Troubleshooting
To clear resource alarms and STONITH fencing action errors that are displayed by pcs status, run the pcs cleanup commands.
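Hedged examples of these cleanup commands, assuming current pcs syntax:
# pcs resource cleanup
# pcs stonith history cleanup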
3.4. RESTORING THE GALERA CLUSTER MANUALLY
NOTE
In this procedure, you must perform some steps on one Controller node. Ensure that you
perform these steps on the same Controller node as you go through the procedure.
Procedure
2. Disable the database connections through the virtual IP on all Controller nodes:
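A hedged sketch of how these connections are typically blocked, assuming an iptables rule that drops traffic to the MySQL port on the database virtual IP (<mysql_vip> is a placeholder for your environment's virtual IP):
$ sudo iptables -I INPUT -p tcp --destination-port 3306 -d <mysql_vip> -j DROP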
$ sudo podman container stop $(sudo podman container ls --all --format "{{.Names}}" --filter=name=galera-bundle)
$ sudo podman container start $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
$ sudo podman exec -i $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "mysql_install_db --datadir=/var/lib/mysql --user=mysql --log_error=/var/log/mysql/mysql_init.log"
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "mysqld_safe --skip-networking --wsrep-on=OFF --log-error=/var/log/mysql/mysql_safe.log" &
11. Move the .my.cnf Galera configuration file on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "mv /root/.my.cnf /root/.my.cnf.bck"
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "mysql -uroot -e'use mysql;update user set password=PASSWORD(\"$ROOTPASSWORD\") where User=\"root\";flush privileges;'"
13. Restore the .my.cnf Galera configuration file inside the Galera container on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "mv /root/.my.cnf.bck /root/.my.cnf"
NOTE
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_FILE\""
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_GRANT_FILE\""
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "mysqladmin shutdown"
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid \
  --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
  --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
  --wsrep-cluster-address=gcomm:// &
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "clustercheck"
Ensure that the message “Galera cluster node is synced” is displayed. Otherwise, you must recreate the node.
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" |
awk '{print $3}'
20. On each of the remaining Controller nodes, start the database and validate the cluster:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid \
  --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
  --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
  --wsrep-cluster-address=$CLUSTER_ADDRESS &
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
--filter=name=galera-bundle) bash -c "clustercheck"
Ensure that the message “Galera cluster node is synced” is displayed. Otherwise, you must recreate the node.
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) /usr/bin/mysqladmin -u root shutdown
22. On all Controller nodes, remove the following firewall rule to allow database connections
through the virtual IP address:
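A hedged sketch of the rule removal, assuming the same iptables rule that was added earlier in this procedure (<mysql_vip> is a placeholder):
$ sudo iptables -D INPUT -p tcp --destination-port 3306 -d <mysql_vip> -j DROP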
$ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
$ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=clustercheck)
Verification
1. To ensure that services are running correctly, check the status of pacemaker:
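For example, run the same check that is shown earlier in this chapter, as the root user on a Controller node:
# pcs status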
2. To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For
more information, see Validating your OpenStack cloud with the Integration Test Suite
(tempest).
3. If you suspect an issue with a particular node, check the state of the cluster with clustercheck:
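For example, run clustercheck inside the Galera container, as shown earlier in this procedure:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
  --filter=name=galera-bundle) bash -c "clustercheck"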
3.5. RESTORING THE UNDERCLOUD NODE DATABASE MANUALLY
Prerequisites
You have created a standalone backup of the undercloud database. For more information, see
Section 1.5, “Creating a standalone database backup of the undercloud nodes” .
Procedure
3. Ensure that no containers are running on the server by entering the following command:
If any containers are running, enter the following command to stop the containers:
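Hedged examples of these checks, assuming standard podman commands on the undercloud (the container name is a placeholder):
$ sudo podman ps
$ sudo podman stop <container_name>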
4. Create a backup of the current /var/lib/mysql directory and then delete the directory:
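A minimal sketch, assuming a sibling directory /var/lib/mysql_bck as the copy target (the directory name is an assumption):
$ sudo cp -a /var/lib/mysql /var/lib/mysql_bck
$ sudo rm -rf /var/lib/mysql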
5. Recreate the database directory and set the SELinux attributes for the new directory:
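A hedged sketch of the directory recreation, assuming the kolla mysql UID and GID of 42434 and the container_file_t SELinux type used for containerized services (both are assumptions):
$ sudo mkdir /var/lib/mysql
$ sudo chown 42434:42434 /var/lib/mysql
$ sudo chmod 0755 /var/lib/mysql
$ sudo chcon -t container_file_t /var/lib/mysql
$ sudo chcon -r object_r /var/lib/mysql
$ sudo chcon -u system_u /var/lib/mysql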
6. Create a local tag for the mariadb image. Replace <image_id> and
<undercloud.ctlplane.example.com> with the values applicable in your environment:
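A hedged sketch of how the tag is typically created, assuming that you look up <image_id> with podman and that the target reference embeds the undercloud registry name <undercloud.ctlplane.example.com>; the exact repository path is environment specific:
$ sudo podman images | grep mariadb
$ sudo podman tag <image_id> <local_tag>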
8. Copy the database backup file that you want to import to the database:
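A minimal sketch, assuming a dump file produced by the standalone database backup (the file name is a placeholder):
$ sudo cp <database_backup>.sql /var/lib/mysql/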