
MySQL High Availability using Pacemaker and DRBD on

RHEL/CentOS 8
Brian Hellman;Ryan Ronnander
1.0, 2020-07-09
Table of Contents
1. Introduction
2. Installation Overview
   2.1. Register Nodes and Repository Configuration
   2.2. Installing DRBD
   2.3. Installing Pacemaker and Corosync
   2.4. Installing MySQL
3. Initial Configuration
   3.1. System Configurations
   3.2. Firewall Configuration
   3.3. SELinux
   3.4. Configuring DRBD
   3.5. Creating a Filesystem
   3.6. MySQL Data Directory Configuration
   3.7. Configuring Corosync
   3.8. Creating a Basic Pacemaker Configuration
4. Configuring Cluster Resources
   4.1. Basic Configuration
   4.2. Adding Network Connectivity Monitoring
5. Using the HA MySQL Server
   5.1. Securing the MySQL Installation
   5.2. Importing Data
   5.3. Accessing Databases
6. Failure Modes
   6.1. Node Failure
   6.2. Storage Subsystem Failure
   6.3. MySQL Service Failure
   6.4. Network Failure
7. Special Considerations
   7.1. MySQL Storage Engine Recommendations
   7.2. InnoDB Buffer Pool Size
8. Conclusion
9. Feedback
Appendix A: Additional Information and Resources
Appendix B: Legalese
   B.1. Trademark Notice
   B.2. License Information


MySQL High Availability using Pacemaker and DRBD on RHEL/CentOS 8: Chapter 1. Introduction

Chapter 1. Introduction
This guide describes an approach for designing and implementing a highly available (HA) MySQL cluster using DRBD,
Pacemaker, and Corosync on RHEL 8 or CentOS 8 servers.

There are several options available to achieve HA for MySQL using this particular software stack, all of which may be
combined:

• Shared storage cluster - This deployment type relies on a single, shared data "silo" which holds the data files
associated with the MySQL database. This option, while creating redundancy at the server level, relies on a
single instance of data which itself is typically not highly available. Clusters of this type may use a fibre channel
or iSCSI based storage area network (SAN). When properly configured, shared storage clusters guarantee
transaction integrity across failover.
• DRBD based shared-nothing cluster - This cluster type makes use of local, directly attached storage whose
content is synchronously replicated between cluster nodes. This adds an additional layer of redundancy in that
MySQL’s data storage is available on more than one node. Like shared storage clusters, DRBD based clusters
guarantee transaction integrity across failover.
• MySQL Replication based shared-nothing cluster - This cluster type makes use of local, directly attached
storage and uses MySQL replication to propagate database events across the cluster. Like DRBD, this adds
redundancy in that MySQL’s data storage is available on more than one node. As of DRBD 9, both DRBD and
MySQL replication can utilize multiple secondaries with a single primary and can replicate asynchronously,
which makes them highly suitable for scale-out solutions. MySQL Replication, however, does not make any
guarantees about not losing updates upon failover.

This technical guide describes the DRBD based shared-nothing option. This approach has some advantages over the
shared storage based one:

• In a shared storage cluster, while there exists server redundancy and the cluster can always tolerate the failure
of a cluster node, the shared storage itself is often not redundant. Thus, as soon as the cluster loses access to
its data — which may or may not involve data destruction — the cluster as a whole is out of service. By
contrast, in a DRBD based shared-nothing cluster, every node has access to its own replica of the data. This
allows DRBD to provide redundancy at both the data as well as the node levels.
• As a consequence, DRBD based shared-nothing clusters may be deployed in their entirety across separate
hypervisors, networks, locations, or any other such divide. By contrast, a shared storage cluster may deploy
its nodes across such boundaries, but typically cannot do so for its data storage.

[Figure: DRBD in the Linux I/O stack. Each node runs the same layers (service, page cache, file system, raw device); DRBD sits above the I/O scheduler and disk driver and replicates writes over the network stack and NIC driver to the peer node.]

Shared-nothing clusters can also be achieved by using storage replication mechanisms other than
DRBD. Such synchronous replication solutions, however, are typically proprietary and strongly
coupled to a specific set of storage hardware. DRBD is entirely software based, open source, and
hardware agnostic.


Chapter 2. Installation Overview


Configure LINBIT repositories before installing DRBD, Pacemaker, and Corosync.

In order to create a highly available MySQL cluster, you will need to install the following software packages:

• MySQL is an open source relational database management system (RDBMS). This guide assumes that you are
using MySQL version 8.0 or greater.
• Pacemaker is a cluster resource management framework which you will use to automatically start, stop,
monitor, and migrate resources. Distributions typically bundle Pacemaker in a package simply named
pacemaker. This guide assumes that you are using Pacemaker 2.0.2 or greater installed from LINBIT’s
[pacemaker-2] repository.
• Corosync is the cluster messaging layer that Pacemaker uses for communication and membership. In
distributions, the Corosync package is usually simply named corosync. This guide assumes that you are using
Corosync version 3.0.2 or greater installed from LINBIT’s [pacemaker-2] repository.
• DRBD is a kernel block-level synchronous replication facility which serves as an important shared-nothing
cluster building block. This guide assumes use of DRBD 9.0 installed from LINBIT’s [drbd-9.0] repository.
LINBIT support customers can get pre-compiled binaries from the official repositories. As always, the source can
be found at https://github.com/LINBIT/drbd.

You may be required to install packages other than the above-mentioned ones due to package
dependencies. However, when using a package management utility such as dnf, these
dependencies should be resolved automatically.

2.1. Register Nodes and Repository Configuration


We will install DRBD from LINBIT’s repositories. To access those repositories, you will need to have been set up in
LINBIT’s system and have access to the LINBIT customer portal.

Once you have access to the customer portal, you can register and configure your node’s repository access by using
the Python command line tool outlined in the "REGISTER NODES" section of the portal.

To register the cluster nodes and configure LINBIT’s repositories, run the following on all nodes, one at a time:

# curl -O https://my.linbit.com/linbit-manage-node.py
# chmod +x ./linbit-manage-node.py
# ./linbit-manage-node.py


If no python interpreter found :-( is displayed when running linbit-manage-node.py,
install Python 3 using the following command: # dnf install python3.

The script will prompt you for your LINBIT portal username and password. Once provided, it will list cluster nodes
associated with your account (none at first).

After you tell the script which cluster to register the node with, you will be asked a series of questions regarding which
repositories you’d like to enable.

Be sure to say yes to the questions regarding installing LINBIT’s public key to your keyring and writing the repository
configuration file.

After that, you should be able to run # dnf info kmod-drbd and see dnf pulling package information from LINBIT’s
repository.



Before installing packages, make sure to only pull the cluster stack packages from LINBIT’s
repositories.

To ensure we only pull cluster packages from LINBIT, we will need to add the following exclude line to our repository
files:

exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

2.1.1. RHEL 8 Repository Configuration


The x86_64 architecture repositories are used in the following examples. Adjust accordingly if
your system architecture is different.

Add the exclude line to both the [rhel-8-for-x86_64-baseos-rpms] and
[rhel-8-for-x86_64-appstream-rpms] repositories. The default location for all repositories in RHEL 8 is
/etc/yum.repos.d/redhat.repo. The modified repository configuration should look like this:

# '/etc/yum.repos.d/redhat.repo' example:

[rhel-8-for-x86_64-baseos-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/baseos/os
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/<your_key_here>.pem
sslclientcert = /etc/pki/entitlement/<your_cert_here>.pem
metadata_expire = 86400
enabled_metadata = 1
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

[rhel-8-for-x86_64-appstream-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/appstream/os
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/redhat-uep.pem
sslclientkey = /etc/pki/entitlement/<your_key_here>.pem
sslclientcert = /etc/pki/entitlement/<your_cert_here>.pem
metadata_expire = 86400
enabled_metadata = 1
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

If the Red Hat High Availability Add-On is enabled, either add the exclude line to the
[rhel-8-for-x86_64-highavailability-rpms] section or consider disabling the repository. LINBIT
provides most of the packages available in the HA repository.


2.1.2. CentOS 8 Repository Configuration


Add the exclude line to both the [BaseOS] section of /etc/yum.repos.d/CentOS-Base.repo and the
[AppStream] section of /etc/yum.repos.d/CentOS-AppStream.repo. The modified
repository configuration should look like this:

# /etc/yum.repos.d/CentOS-Base.repo example:

[BaseOS]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

# /etc/yum.repos.d/CentOS-AppStream.repo example:

[AppStream]
name=CentOS-$releasever - AppStream
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*

If the [HighAvailability] repo is enabled in /etc/yum.repos.d/CentOS-HA.repo,
either add the exclude line to the [HighAvailability] section or consider disabling the
repository. LINBIT provides most of the packages available in the HA repository.
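Editing several repository files by hand is error-prone. As a convenience, the exclude line can be appended to every section of a repo file with a short script. This helper is our own sketch, not part of LINBIT's procedure; it is demonstrated against a temporary file for safety, so point REPO_FILE at the real file (such as /etc/yum.repos.d/redhat.repo) when you are ready:

```shell
# Append the LINBIT exclude line to every [section] of a repo file.
# Demonstrated on a temporary copy; adjust REPO_FILE for your system.
EXCLUDE='exclude=cluster* corosync* drbd kmod-drbd libqb* pacemaker* resource-agents*'
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
[BaseOS]
name=CentOS-8 - Base
enabled=1

[AppStream]
name=CentOS-8 - AppStream
enabled=1
EOF
# Print the exclude line before each subsequent section header and once more
# at end-of-file, so every section ends up with exactly one exclude line.
awk -v ex="$EXCLUDE" '
    /^\[/ && NR > 1 { print ex; print "" }
    { print }
    END { print ex }
' "$REPO_FILE" > "$REPO_FILE.tmp" && mv "$REPO_FILE.tmp" "$REPO_FILE"
grep -c '^exclude=' "$REPO_FILE"   # prints: 2
```

The same script works for the RHEL repository sections; only the file path and section names differ.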

2.2. Installing DRBD


Install DRBD using the following command:

# dnf install drbd kmod-drbd

Now prevent DRBD from starting at boot; Pacemaker will be responsible for starting the DRBD service:

# systemctl disable drbd

2.3. Installing Pacemaker and Corosync


This section will cover installing Pacemaker and Corosync. Issue the following command to install and enable the
necessary packages:


# dnf install pacemaker corosync crmsh

# systemctl enable pacemaker


Created symlink /etc/systemd/system/multi-user.target.wants/pacemaker.service to
/usr/lib/systemd/system/pacemaker.service.

# systemctl enable corosync


Created symlink /etc/systemd/system/multi-user.target.wants/corosync.service to
/usr/lib/systemd/system/corosync.service.

2.4. Installing MySQL


MySQL and all required dependencies can be installed from the mysql module:

# dnf module install mysql

Ensure the MySQL service is not started at boot, as Pacemaker will be responsible for starting the MySQL service:

# systemctl disable mysqld


Chapter 3. Initial Configuration


This section describes the initial configuration of a two node highly available MySQL database in the context of the
Pacemaker cluster manager.

3.1. System Configurations


Table 1. Node Configuration Overview

Hostname | LVM Device | Volume Group | Logical Volume | External Interface | External IP     | Crossover Interface | Crossover IP
node-a   | /dev/vdb   | vg_drbd      | lv_mysql       | enp1s0             | 192.168.122.201 | enp9s0              | 172.16.0.201
node-b   | /dev/vdb   | vg_drbd      | lv_mysql       | enp1s0             | 192.168.122.202 | enp9s0              | 172.16.0.202


We’ll need a virtual IP for MySQL services to bind to. For this guide we will use
192.168.122.200.

The Logical Volume Manager (LVM) commands used in the creation of this guide are included for your convenience,
adjust accordingly:

# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.

# vgcreate vg_drbd /dev/vdb


  Volume group "vg_drbd" successfully created

# lvcreate -L 20G -n lv_mysql vg_drbd


  Logical volume "lv_mysql" created.

3.2. Firewall Configuration


Refer to firewalld documentation for how to open/allow ports. You will need the following ports open in order for
your cluster to function properly.

Table 2. Required Ports

Component | Protocol | Port
DRBD      | TCP      | 7788
Corosync  | UDP      | 5404, 5405
MySQL     | TCP      | 3306
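For example, with firewalld (the RHEL/CentOS 8 default), the ports above could be opened as follows. This is a sketch that assumes firewalld manages your ruleset and that the default zone is in use; adjust zones and sources for your environment:

```
# firewall-cmd --permanent --add-port=7788/tcp
# firewall-cmd --permanent --add-port=5404-5405/udp
# firewall-cmd --permanent --add-port=3306/tcp
# firewall-cmd --reload
```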

3.3. SELinux
If you have SELinux enabled and are having issues, consult your distribution's documentation for how to properly
configure it, or disable it (not recommended).
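If you keep SELinux enforcing, one adjustment that is commonly needed for this kind of setup (our suggestion, not a step from the original procedure) is ensuring the MySQL data directory on the DRBD backed filesystem carries the correct context after it is mounted. The semanage utility is provided by the policycoreutils-python-utils package:

```
# semanage fcontext -a -t mysqld_db_t "/var/lib/mysql(/.*)?"
# restorecon -Rv /var/lib/mysql
```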

3.4. Configuring DRBD


First, it is necessary to configure a DRBD resource to serve as a backing device for the MySQL database. It is
recommended to use a separate DRBD resource for each database. The suggested method is to use individual logical
volumes as the backing device for each DRBD resource. While this is not absolutely necessary, it will provide a degree
of granularity and allow you to distribute databases across different nodes, rather than having all databases always run
on a single node.



For more detailed instructions regarding initial configuration, see chapter 4 of the DRBD
User’s Guide.

It is highly recommended that you put your resource configurations in separate resource files that reside in the
/etc/drbd.d directory, whose name is identical to that of the resource such as /etc/drbd.d/mysql.res. Its
contents should look similar to this:

resource mysql {
  protocol C;
  device /dev/drbd0;
  disk /dev/vg_drbd/lv_mysql;
  meta-disk internal;
  on node-a {
  address 172.16.0.201:7788;
  }
  on node-b {
  address 172.16.0.202:7788;
  }
}

Copy this configuration to both DRBD nodes. Next it will be necessary to bring the DRBD resource named mysql up
and online as it will serve as the backing storage for the database.

First create the metadata for the DRBD resource. This step must be done on both nodes:

# drbdadm create-md mysql

Then load the DRBD module and bring the resource up on both nodes:

# drbdadm up mysql

The DRBD resource should now be in the connected state, the Secondary role on both nodes, and show a disk state of
Inconsistent on both nodes. Verify their state by typing the following on either node:

# drbdadm status
mysql role:Secondary
  disk:Inconsistent
  node-b role:Secondary
  peer-disk:Inconsistent

At this point we may either begin the initial device synchronization, or, as this is a brand new volume and identical on
both nodes already (empty), we can safely skip the initial synchronization with the --clear-bitmap option. Run the
following command on one node:

# drbdadm --clear-bitmap new-current-uuid mysql/0

The /0 at the end of the above command specifies the volume number of the resource. Even
though the above examples do not utilize the multi-volume support in DRBD, it is still required to
specify a volume number, 0 being the default.
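If you prefer the full initial synchronization instead (for example, if the backing devices might not be byte-identical), the usual alternative is to force one node into the Primary role, which starts the sync. This step is our addition and is unnecessary if you used the --clear-bitmap approach above; run it on one node only:

```
# drbdadm primary --force mysql
```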

The DRBD resource should now be in the connected state, the Secondary role on both nodes, and show a disk state of
UpToDate on both nodes. Verify their state by typing the following on either node:

# drbdadm status
mysql role:Secondary
  disk:UpToDate
  node-b role:Secondary
  peer-disk:UpToDate

Now that the resource disk state is no longer inconsistent, promote the resource to Primary on the node you wish to
use for creation of the filesystem:

# drbdadm primary mysql

3.5. Creating a Filesystem


Once the DRBD resource has been created and initialized, you can create a filesystem on the new block device. This
example assumes xfs as the filesystem type:

# mkfs.xfs /dev/drbd0


You only need to create the filesystem on the Primary node. Other filesystems such as ext4 and
btrfs may be deployed instead of xfs.

3.6. MySQL Data Directory Configuration


After the filesystem has been created it can be mounted over the default MySQL data directory /var/lib/mysql.
The /var/lib/mysql directory will be empty if the mysqld service has not yet been started.

Temporarily mount the filesystem on the Primary node:

# mount /dev/drbd0 /var/lib/mysql

Change the file ownership for the newly mounted filesystem on the Primary node:

# chown mysql:mysql /var/lib/mysql

Next, start the mysqld service on the Primary node:

# systemctl start mysqld

The MySQL data directory preparation is now complete and its contents reside on the DRBD backed filesystem. Since
filesystem mounting and the mysqld service will be managed by Pacemaker, run the following commands on the
Primary node:

# systemctl stop mysqld

# umount /var/lib/mysql



To improve MySQL’s security, mysql_secure_installation will be invoked in a subsequent
section.


MySQL configuration files such as /etc/my.cnf are not synchronized by DRBD and should be
identical across all nodes.
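As an illustration only (these contents are an assumption on our part, not a configuration taken from this guide), a minimal /etc/my.cnf kept identical on both nodes might look like:

```
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
# bind-address=192.168.122.200   # optionally bind only to the cluster VIP
```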

3.7. Configuring Corosync


An excellent example corosync.conf file can be found in the appendix of Clusterlabs’ Clusters from Scratch
document. It is highly recommended to take advantage of the support for redundant rings that was introduced in
version 1.4. This is done by enabling redundant ring support with the rrp_mode option and then adding another
interface sub-section within the totem section.

Create and edit the file /etc/corosync/corosync.conf so that it looks like this:

totem {
  version: 2
  secauth: off
  cluster_name: cluster
  transport: knet
  rrp_mode: passive
}

nodelist {
  node {
  ring0_addr: 172.16.0.201
  ring1_addr: 192.168.122.201
  nodeid: 1
  name: node-a
  }
  node {
  ring0_addr: 172.16.0.202
  ring1_addr: 192.168.122.202
  nodeid: 2
  name: node-b
  }
}

quorum {
  provider: corosync_votequorum
  two_node: 1
}

logging {
  to_syslog: yes
}

Now that Corosync has been configured we can start the Corosync and Pacemaker services:

# systemctl start corosync


# systemctl start pacemaker

Verify that everything has been started and is working correctly by issuing the following command; you should see
output similar to what is below:

# crm_mon -rf -n1


Stack: corosync
Current DC: node-a (version 2.0.2.linbit-3.0.el8-744a30d655) - partition with quorum
Last updated: Tue Jul 7 15:20:14 2020
Last change: Mon Jul 6 10:44:21 2020 by hacluster via crmd on node-a

2 nodes configured
0 resources configured

Node node-a: online


Node node-b: online

No inactive resources

Migration Summary:
* Node node-a:
* Node node-b:

3.8. Creating a Basic Pacemaker Configuration


In a highly available 2 node cluster using DRBD, you should:

• Disable STONITH.
• Set Pacemaker’s "no quorum policy" to ignore loss of quorum.
• Set the default resource stickiness to 200.

To do so, issue the following commands from the CRM shell accessible from the crm command on either node (not
both):

# crm
crm(live)# configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults resource-stickiness="200"
crm(live)configure# commit
crm(live)configure# exit
bye

While STONITH is not strictly necessary, as DRBD is a shared-nothing solution, it is highly
recommended to prevent split brain and the potential loss of data on the split brain victim. For
brevity, this guide disables STONITH. An excellent guide on STONITH and its configuration can be
found on Clusterlabs’ site, or you may always contact LINBIT for more information.


Chapter 4. Configuring Cluster Resources


This section assumes you are about to configure a highly available MySQL database with the following configuration
parameters:

• The DRBD resource to be used as your database storage area is named mysql, and it manages the device
/dev/drbd0.
• The DRBD device holds an xfs filesystem which is to be mounted to /var/lib/mysql - the default MySQL
data directory.

The MySQL database will utilize that filesystem, and listen on a dedicated cluster IP address, 192.168.122.200.

4.1. Basic Configuration


In order to create the appropriate cluster resources, open the crm configuration shell as root and issue the following
commands:

crm(live)# configure
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
  params drbd_resource="mysql" \
  op start interval="0s" timeout="240s" \
  op stop interval="0s" timeout="100s" \
  op monitor interval="29s" role="Master" \
  op monitor interval="31s" role="Slave"

crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
  meta master-max="1" master-node-max="1" \
  clone-max="2" clone-node-max="1" \
  notify="true"

crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem \
  params device="/dev/drbd0" \
  directory="/var/lib/mysql" \
  fstype="xfs" \
  op start interval="0" timeout="60s" \
  op stop interval="0" timeout="60s" \
  op monitor interval="20" timeout="40s"

crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
  params ip="192.168.122.200" cidr_netmask="24" \
  op start interval="0s" timeout="20s" \
  op stop interval="0s" timeout="20s" \
  op monitor interval="20s" timeout="20s"

crm(live)configure# primitive p_mysql ocf:heartbeat:mysql \
  params binary="/usr/sbin/mysqld" \
  op start interval="0s" timeout="120s" \
  op stop interval="0s" timeout="120s" \
  op monitor interval="20s" timeout="30s"

You must set appropriate shutdown and startup timeouts based on your database utilization and
expected workload. Failure to do so will cause Pacemaker to prematurely consider operations
timed out and initiate recovery operations.


crm(live)configure# group g_mysql \
  p_fs_mysql p_ip_mysql p_mysql
crm(live)configure# colocation c_mysql_on_drbd \
  inf: g_mysql ms_drbd_mysql:Master
crm(live)configure# order o_drbd_before_mysql \
  ms_drbd_mysql:promote g_mysql:start
crm(live)configure# commit
crm(live)configure# exit

Once this configuration has been committed, Pacemaker will:

• Start DRBD on both cluster nodes.
• Select one node for promotion to the DRBD Primary role.
• Mount the filesystem, configure the cluster IP address, and start the MySQL server instance on the same node.
• Commence resource monitoring.

4.2. Adding Network Connectivity Monitoring


Finally, Pacemaker may be configured to monitor the upstream network and ensure that MySQL runs only on nodes
that have connectivity to clients. In order to do so, pick one or more IP addresses that the cluster node can expect to
always be accessible, such as the subnet’s default gateway, a core switch, or similar. The following example uses the
default gateway with IP address 192.168.122.1.

Add the ping resources as follows:

crm(live)# configure
crm(live)configure# primitive p_ping_gw ocf:pacemaker:ping \
  params host_list="192.168.122.1" \
  dampen="5s" \
  multiplier="1000" \
  op start interval="0s" timeout="60s" \
  op stop interval="0s" timeout="60s" \
  op monitor interval="15s" timeout="60s"

crm(live)configure# clone cl_ping p_ping_gw \
  meta interleave="true"

Finally, add a location constraint to tie the Primary role of your DRBD resource to a node with upstream network
connectivity:

crm(live)configure# location l_drbd_primary_on_ping ms_drbd_mysql \
  rule $role="Master" \
  -inf: not_defined pingd or pingd lte 0
crm(live)configure# commit
crm(live)configure# exit

Once these changes have been committed, Pacemaker will:

• Monitor upstream IP addresses from both cluster nodes.
• Periodically update a node attribute for each node with a value corresponding to the number of reachable
upstream hosts.


• Move resources away from any node that loses connectivity to upstream IP addresses.


Ensure ICMP echo is not blocked by any firewall or IDS filtering on the IP addresses used for ping
monitoring. Additional firewall rules or modification of existing allow lists may be required.


Chapter 5. Using the HA MySQL Server


The highly available MySQL server instance is now ready for use.

5.1. Securing the MySQL Installation


You should first secure your database against unauthorized access.

An initial recommended step is to run the mysql_secure_installation security utility from the
command line. This utility allows you to set a password for the MySQL root user, disallow root logins over
remote connections, and remove the test database.

Database security is an advanced topic beyond the scope of this guide. Your security requirements
may call for additional configuration steps beyond running mysql_secure_installation.
Consult with a MySQL expert if needed.
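Where an interactive run of mysql_secure_installation is impractical, for example in scripted deployments, roughly equivalent SQL statements can be issued directly. The following is a sketch only; the password shown is a placeholder, and the exact set of statements appropriate for your MySQL version may differ:

```shell
# Approximate, non-interactive equivalent of mysql_secure_installation.
# Replace the example password before use.
mysql --user=root --password <<'SQL'
ALTER USER 'root'@'localhost' IDENTIFIED BY 'ChangeMe!123';
DELETE FROM mysql.user WHERE User='';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL
```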

5.2. Importing Data


After installing, configuring and securing MySQL, you can set up a new database. You may do so with an installation
script, import an existing database dump, or execute any other configuration steps your application may require. This
step typically includes configuration of a database user.

The following example assumes the import of a database backup named database_backup.sql created with the
mysqldump utility:

# mysql --user=root --password < database_backup.sql
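The backup file itself can be created on the source server with mysqldump. As an illustrative example, assuming a source database named example, a dump suitable for the import command above could be produced as follows:

```shell
# Dump the "example" database in a consistent snapshot (InnoDB),
# including stored routines and triggers.
mysqldump --user=root --password --single-transaction \
  --routines --triggers example > database_backup.sql
```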

5.3. Accessing Databases


Assuming the database configuration allows for a user named dbuser to access a database named example from
any host, on a single node installation this database can be accessed via the MySQL UNIX socket:

$ mysql --database=example --user=dbuser --password

From a remote client, assuming the database is running on host node-a with an IP address of 192.168.122.201,
access the MySQL installation as follows:

$ mysql --host=192.168.122.201 --database=example --user=dbuser --password

However, if the database is a highly available one managed by Pacemaker, then it is vital that remote clients connect to
it only via the virtual floating cluster IP address of 192.168.122.200:

$ mysql --host=192.168.122.200 --database=example --user=dbuser --password

Additionally, the MySQL server bind-address attribute may be specified. It can either be
defined in additional_parameters for the ocf:heartbeat:mysql resource in the
Pacemaker cluster configuration, or as a parameter defined under the [mysqld] section
commonly found in /etc/my.cnf.
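For example, to restrict the server to the virtual floating cluster IP address, the following fragment could be placed in /etc/my.cnf. Note that binding to the floating IP only works because Pacemaker starts mysqld on the node that currently holds that address:

```ini
[mysqld]
bind-address = 192.168.122.200
```

The equivalent effect can be achieved in the cluster configuration by setting additional_parameters="--bind-address=192.168.122.200" on the ocf:heartbeat:mysql resource.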


Client connections in the above scenario will always connect to the correct node: the one that currently
holds the DRBD resource in the Primary role, has mounted the file system, advertises the virtual
floating cluster IP address, and is actively running the mysqld service.


Chapter 6. Failure Modes


This section highlights specific failure modes and the cluster’s corresponding reaction to the events.

6.1. Node Failure


When one cluster node suffers an outage, the cluster shifts all resources to the other node. Since DRBD provides a
synchronous replica of all MySQL data to the peer node, MySQL will resume serving database contents from the peer.

In this scenario Pacemaker moves the failed node from the Online to the Offline state, and starts the affected
resources on the surviving peer node.

 Details of node failure are also explained in chapter 6 of the DRBD User’s Guide.

Node failure does entail MySQL database recovery on the node taking over the service. See MySQL Storage Engine
Recommendations and InnoDB Buffer Pool Size for important considerations applying to database recovery.

6.2. Storage Subsystem Failure


In case the storage subsystem backing a DRBD-enabled node fails, DRBD transparently detaches from its backing
device, and continues to serve data over the DRBD replication link from its peer node.

 Details of this functionality are explained in chapter 2 of the DRBD User’s Guide.

6.3. MySQL Service Failure


In case of an unexpected mysqld shutdown, a segmentation fault, or a similar failure, the monitor operation for the p_mysql
resource detects the failure and restarts the service.

6.4. Network Failure


When upstream connectivity is lost, Pacemaker will automatically move resources away from nodes with failed
network links. This requires that ping monitoring is set up as explained in Adding Network Connectivity Monitoring.

If the DRBD replication link fails, DRBD continues to serve data from the Primary node, and re-synchronizes the DRBD
resource automatically as soon as network connectivity is restored.

 Details of this functionality are explained in chapter 2 of the DRBD User’s Guide.


Chapter 7. Special Considerations


With a highly available database managed by Pacemaker, a few considerations apply that do not exist on standalone
databases. This section highlights some of these considerations.

7.1. MySQL Storage Engine Recommendations


Highly available systems are, by definition, designed to gracefully recover from a hard server failure. In database
applications, this means that the database must support transactions, and be crash safe. In MySQL, the MyISAM
storage engine does not fulfill these requirements, and should thus be avoided.

Highly available MySQL installations should always utilize the InnoDB storage engine. This is the default storage
engine.

7.2. InnoDB Buffer Pool Size


MySQL performance tuning guides often call for selecting a large InnoDB buffer pool size (typically around 80% of the
available physical memory on the machine). While this reduces I/O load and is generally a good approach on a
standalone machine, it does have drawbacks on a highly available system.

A large buffer pool increases InnoDB recovery time after a hard server failure, such as a node crash or forced failover.
A properly configured InnoDB database will eventually recover from such a condition, but possibly after a lengthy
recovery process, potentially over the course of many hours. This may lead to extended and unexpected system
outages.

It is thus often necessary to accept somewhat reduced performance on the highly available MySQL system by
selecting a smaller buffer pool, to ensure proper failover times in return. Proper values for this setting vary greatly
based on both hardware and application load, and users should always consult with a MySQL high availability expert to
select a good value.

The InnoDB buffer pool size is set in the MySQL configuration file, typically located in /etc/my.cnf:

[mysqld]
innodb-buffer-pool-size = <value>
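As an illustrative starting point only (the 50% factor below is an assumption for this sketch, not a tuning recommendation), the total physical memory can be read from /proc/meminfo and a candidate value computed:

```shell
# Print a candidate innodb-buffer-pool-size line, computed as 50% of
# physical RAM, expressed in bytes. Adjust the factor for your workload.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "innodb-buffer-pool-size = $(( mem_kb / 2 * 1024 ))"
```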


Chapter 8. Conclusion
Building a highly available MySQL server cluster managed by Pacemaker while leveraging DRBD as the backing storage
device is an elegant solution to ensure the uptime of your databases in the event of hardware failure.

If you have any questions or concerns regarding deployment of the methods outlined in this document with your
existing MySQL databases, or with your unique system(s), you can always consult with the experts at LINBIT®. See
https://www.linbit.com/contact-us/ for contact information.


Chapter 9. Feedback
Feedback regarding any errors, as well as suggestions or comments about this document, is encouraged and
very much appreciated. You can contact the authors directly using the email address(es) listed on the title page when
present, or [email protected].

For a public discussion about the concepts mentioned in this white paper, you are invited to subscribe and post to the
drbd-user mailing list. See https://lists.linbit.com/ for subscription details.


Appendix A: Additional Information and Resources


• LINBIT’s GitHub Organization: https://github.com/LINBIT/
• Join LINBIT’s Community on Slack: https://www.linbit.com/join-the-linbit-drbd-linstor-slack/
• The DRBD® and LINSTOR® User’s Guide: https://docs.linbit.com/
• The DRBD® and LINSTOR® Mailing Lists: https://lists.linbit.com/

◦ drbd-announce: Announcements of new releases and critical bugs found
◦ drbd-user: General discussion and community support
◦ drbd-dev: Coordination of development
• ClusterLabs’ Documentation Wiki: https://www.clusterlabs.org/wiki/Documentation
• MySQL RDBMS: https://dev.mysql.com/


Appendix B: Legalese
B.1. Trademark Notice
DRBD® and LINBIT® are trademarks or registered trademarks of LINBIT in Austria, the United States, and other
countries. Other names mentioned in this document may be trademarks or registered trademarks of their respective
owners.

B.2. License Information


The text and illustrations in this document are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike
3.0 Unported license ("CC BY-NC-SA").

• A summary of CC BY-NC-SA is available at https://creativecommons.org/licenses/by-nc-sa/3.0/.
• The full license text is available at https://creativecommons.org/licenses/by-nc-sa/3.0/legalcode.
• In accordance with CC BY-NC-SA, if you modify this document, you must indicate if changes were made. You
may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.