Huawei SAP HANA Appliance Two Node Installation Guide (CH121&CH242&2288H&2488H&9008 V5) 08
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Purpose
This guide describes the installation and configuration of the two-node SAP HANA high
availability (HA) cluster solution.
You can refer to this guide to learn SAP HANA and HA cluster solutions and perform HA
cluster installation planning, operating system (OS) and database installation, network
configuration, NTP service configuration, system replication configuration, and cluster HA
configuration on the two SAP HANA server nodes.
Intended Audience
This document is intended for:
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 08 (2019-05-10)
This issue is the eighth official release. Updated 8.6 Configuring Cluster Resources and 11 OS Lifecycle.
Issue 07 (2019-03-28)
This issue is the seventh official release:
Issue 06 (2018-12-26)
This is the sixth official release. Updated Solution Overview and Installation Planning.
Issue 05 (2018-09-25)
This is the fifth official release. Added 12 Responsibility Matrix and Problem Handling
Process.
Issue 04 (2018-08-10)
This is the fourth official release. Updated some information in the document.
Issue 03 (2018-07-06)
This issue is the third official release. Added the description about SUSE Linux 12.3.
Issue 02 (2018-05-22)
This issue is the second official release.
Issue 01 (2018-04-03)
This issue is the first official release.
Contents
4 Network Configuration
4.1 Configuring a Blade Server (CH242 V5)
4.1.1 Configuring the MM910 Management Network Out Mode
4.1.2 Configuring Switch Module Stacking
4.2 Setting IP Addresses (SLES)
4.3 Setting IP Addresses (RHEL)
4.4 Modifying the hosts File
4.5 Configuring the SSH Password-free Interconnection Service
11 OS Lifecycle
11.1 SLES for SAP Lifecycle
11.2 RHEL for SAP Lifecycle
A Appendix 1
B Software Version List of Red Hat 7.4 Patch Packages in OneBox
1 Solution Overview
Figure 1-1 shows the typical networking of an SAP HANA two-node cluster. The two servers
with the same hardware configuration form a two-node cluster to provide services externally.
Figure 1-1 SAP HANA two-node cluster networking (CH121 V5&CH242 V5)
NOTE
2 Installation Planning
Internal cluster network port bond_sr (10GE network port)
l Node1: 10.5.5.131; Node2: 10.5.5.142
l Used to bond the two 10GE NICs to function as a system replication channel for database synchronization and as the cluster heartbeat.
l The IP address of the internal cluster network port bond_sr cannot be in the same IP network segment as that of the BMC management network port or OS management network port.
NOTE
You are advised to select one 10GE port from each NIC and bond the two 10GE ports.
Service network port bond_vip (10GE network port)
l Node1: 126.126.126.131; Node2: 126.126.126.142; VIP: 126.126.126.223
l Used to bond the two 10GE NICs of the server to provide services externally. The VIP must be in the same network segment as the two nodes and is used for external systems to access SAP HANA.
l The IP address of the service network port bond_vip cannot be in the same IP network segment as that of bond_sr, the BMC management network port, or the OS management network port.
NOTE
You are advised to select one 10GE port from each NIC and bond the two 10GE ports.
NOTE
For details about how to configure the IP addresses of the E9000 management modules and switch
modules, see the E9000 Server Deployment Guide and CX320 Switch Module Configuration Guide.
udev-228-150.32.1.x86_64.rpm
NOTE
l tuned-utils-2.8.0-5.el7.noarch.rpm
Step 3: To download the corresponding kernel, click the search box in the upper right corner
and enter the kernel name to find the kernel and download it.
NOTE
Step 4: To download the corresponding RPM package, click RPM Package Search on the
right and enter the software package name, for example, dracut-033-502.el7.x86_64.rpm.
Do not enter any space or other characters along with the software package name.
Install the OS and database on the two SAP HANA servers. For operation details, see the
Huawei SAP HANA Appliance Single Node System Installation Guide
(CH121&CH242&2288H&2488H&9008 V5). After the installation is complete, ensure that:
l The database instance numbers of the two servers are the same.
l The database SIDs of the two servers are the same.
l The database passwords of the two servers are the same.
3.1 Installing HA Software Packages (RHEL)
Install an operating system (OS), and then install the database. For details, see the Huawei SAP HANA
Appliance Single Node System Installation Guide (CH121&CH242&2288H&2488H&9008 V5). After
the database is installed, install the software packages required by HA.
Step 5 Download the following RPM packages from the official SAP website and upload them to /
home on the server:
l perl-Sys-Syslog-0.33-3.el7.x86_64
l resource-agents-3.9.5-82.el7_3.6.x86_64
l resource-agents-sap-3.9.5-82.el7_3.6.x86_64
l resource-agents-sap-hana-3.9.5-82.el7_3.6.x86_64
Step 6 Install software packages in /home.
yum install -y pacemaker corosync
yum localinstall resource-agents-sap-3.9.5-105.el7_4.2.x86_64.rpm
yum localinstall resource-agents-sap*
yum install pcs fence-agents-all
yum install gtk2 libicu xulrunner sudo tcsh libssh2 expect cairo graphviz iptraf-ng \
krb5-workstation krb5-libs libpng12 nfs-utils lm_sensors rsyslog openssl \
PackageKit-gtk3-module libcanberra-gtk2 libtool-ltdl xorg-x11-xauth numactl \
xfsprogs net-tools bind-utils openssl098e tuned tuned-utils libtool-ltdl ntp
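As an optional quick check (not part of the original procedure), you can confirm that the key HA packages were installed before continuing:
rpm -q pacemaker corosync pcs fence-agents-all resource-agents-sap-hana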
----End
4 Network Configuration
Step 2 Run the smmset -l smm -d outportmode -v 0 command to configure the login to the
management plane from the switch module.
----End
Log in to the switch modules in slots 2X and 3X over SSH. (The default user name is root and the
default password is Huawei12#$.)
Step 4 Run the undo shutdown command to enable the stack ports.
*************SWI2 configuration*********************
<HUAWEI> system-view
[*HUAWEI]interface stack-port 2/1
[*HUAWEI-Stack-Port2/1]undo shutdown
[*HUAWEI]commit
*************SWI3 configuration*********************
<HUAWEI> system-view
[*HUAWEI]interface stack-port 3/1
[*HUAWEI-Stack-Port3/1]undo shutdown
[*HUAWEI]commit
<HUAWEI>display stack
<HUAWEI>display stack configuration all
----End
1. Log in to the two server nodes as the root user and perform the following operations to set the IP
addresses.
2. The following operations use the system replication port bond_sr (physical ports eth3 and eth4 of
different NICs) as an example.
3. Each compute node has two mezzanine cards. You are advised to use one network port on each
mezzanine card and connect the ports to different switch modules to improve network redundancy.
For details about the E9000 internal networking, see A Appendix 1.
Step 2 Run yast2. In the System area, click Network Settings.
Step 3 Select the first 10GE network port and click Edit.
NOTE
On the switch, run the display mac command to view the mappings between ports and MAC addresses,
and obtain the MAC address of the network port to be configured. On the server, run the ifconfig
command to obtain the target Ethernet port, and configure the port bonding.
Step 4 Select No Link and IP Setup (Bonding Slaves) and click Next.
Step 5 Repeat the preceding steps to configure the second 10GE port.
Step 7 Select Bond in the Device Type drop-down list, enter the name bond_sr, and click Next.
Step 8 On the Address tab page, select Statically assigned IP Address, and enter the planned IP
address and subnet mask.
Step 9 Click the Bond Slaves tab, select the two network ports (one from each NIC) to be bonded. In
Bond Driver Options, select mode=2 (or mode=balance-xor), and click Next.
NOTE
The configured bonding mode must be consistent with the bonding mode of the switch port on the
network.
Step 10 Repeat the preceding steps to configure other bond network ports. Then, click OK to save the
configuration.
Step 11 Log in to server node 2 and set IP addresses in the same way.
----End
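For reference, the bond created through YaST in the preceding steps is written to a configuration file under /etc/sysconfig/network/. A minimal sketch of what ifcfg-bond_sr typically looks like on SLES is shown below; the prefix length, slave port names, and miimon value are assumptions and must match the actual plan:
# /etc/sysconfig/network/ifcfg-bond_sr (illustrative)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.5.5.131/24'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth3'
BONDING_SLAVE1='eth4'
BONDING_MODULE_OPTS='mode=balance-xor miimon=100'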
1. Log in to the two server nodes as the root user and perform the following operations to set the IP
addresses.
2. The following operations use the system replication port bond_sr (physical ports ens4f0 and ens6f0)
as an example.
3. If you use the same method to configure bond_vip, change the bond name and IP address in the
commands.
In this example, the slave ports are ens4f0 and ens6f0. Change them based on the actual
situation.
Step 6 Create and configure bond_vip and bond_os in the same way.
Step 8 Log in to server node 2 and set IP addresses in the same way.
----End
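On RHEL, the same bond can alternatively be created from the command line with nmcli. The following is a sketch under the assumptions that NetworkManager manages the ports and that a /24 prefix and miimon=100 are used; adjust the connection names, slave ports, and addresses to the actual plan:
# Create the bond interface with a static IP address (values are examples).
nmcli connection add type bond con-name bond_sr ifname bond_sr bond.options "mode=balance-xor,miimon=100" ipv4.method manual ipv4.addresses 10.5.5.131/24
# Attach the two slave ports to the bond.
nmcli connection add type bond-slave ifname ens4f0 master bond_sr
nmcli connection add type bond-slave ifname ens6f0 master bond_sr
# Activate the bond.
nmcli connection up bond_sr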
l Service network segment: use the server host name defined by the customer.
l Directly connected network segment for HANA system replication (SR): use the format
of "Host name + SR".
l OS management network segment: use the format of "Host name + MG".
NOTE
Log in to the two server nodes as the root user and modify the hosts file to enable host name resolution
between the two server nodes.
Step 3 Press i to make the hosts file editable and add the following information to the file.
The host names and IP addresses in the following text are examples only and must be changed
as required.
10.5.5.142 hw00002SR
10.5.5.131 hw00001SR
192.126.126.131 hw00001
192.126.126.142 hw00002
126.126.126.142 hw00002MG
126.126.126.131 hw00001MG
Step 4 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type wq and press Enter to save the modification and exit the vi editor.
Step 5 Log in to server node 2 and repeat the preceding steps to edit the hosts file.
10.5.5.142 hw00002SR
10.5.5.131 hw00001SR
192.126.126.131 hw00001
192.126.126.142 hw00002
126.126.126.142 hw00002MG
126.126.126.131 hw00001MG
----End
Log in to the two server nodes as the root user and perform the following operations to enable the SSH
password-free interconnection service for the two service nodes.
Step 1 Run the ssh-keygen -t rsa and ssh-keygen -t dsa commands to generate public keys for
authentication.
Server node 1
hw00001: # ssh-keygen -t rsa
hw00001: # ssh-keygen -t dsa
Enter same passphrase again: //Press Enter to retain the default value.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:8DeBxFJO5TwveFcvnzN8PpjVnxO5wScZugg9Om+bhz0 root@hw00001
The key's randomart image is:
+---[DSA 1024]----+
| o+.. |
| .+.+ |
| ..o = . |
| o . + ... |
| S.= o.oo+|
| .oo+. +O+|
| o = .+*X|
| o +.Eo *=|
| ++o . +|
+----[SHA256]-----+
Step 2 Copy the public key for authentication from the local node to the peer node.
NOTE
Before performing this operation, you must enable the host name and IP address resolution in the /etc/
hosts file.
hw00001 and hw00002 are examples only. Change them based on the actual situation.
Server node 1
ssh hw00001 "echo $(cat /root/.ssh/id_dsa.pub) >> /root/.ssh/authorized_keys"
ssh hw00002 "echo $(cat /root/.ssh/id_dsa.pub) >>/root/.ssh/authorized_keys"
ssh hw00001 "echo $(cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"
ssh hw00002 "echo $(cat /root/.ssh/id_rsa.pub) >>/root/.ssh/authorized_keys"
Enter the password of the root user when prompted, as shown in the following example.
hw00001: # ssh hw00001 "echo $(cat /root/.ssh/id_dsa.pub) >> /root/.ssh/
authorized_keys"
The authenticity of host 'hw00001 (192.168.10.100)' can't be established.
ECDSA key fingerprint is ee:4c:78:4b:d8:5f:8d:44:85:c5:46:9c:90:9d:13:bd [MD5].
Are you sure you want to continue connecting (yes/no)? //Enter yes.
Warning: Permanently added 'hw00001,192.168.10.100' (ECDSA) to the list of known
hosts.
Password: //Enter the password of user root.
Server node 2
ssh hw00001 "echo $(cat /root/.ssh/id_dsa.pub) >> /root/.ssh/authorized_keys"
ssh hw00002 "echo $(cat /root/.ssh/id_dsa.pub) >>/root/.ssh/authorized_keys"
ssh hw00001 "echo $(cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"
ssh hw00002 "echo $(cat /root/.ssh/id_rsa.pub) >>/root/.ssh/authorized_keys"
On the two server nodes, use SSH to log in to each other. If the logins are successful without
entering the password, the trust relationship is established.
l On server node 1, run the ssh hw00002 command to log in to server node 2 without entering the password.
l On server node 2, run the ssh hw00001 command to log in to server node 1 without entering the password.
----End
Before installing and configuring cluster HA, configure the NTP service on the two SAP
HANA server nodes and synchronize time between the two server nodes.
5.1 Configuring an NTP Server
5.2 Configuring an NTP Client
l This section uses a Linux OS as an example to describe how to configure the NTP server. If an NTP
server exists on the live network, skip this section and configure the NTP client.
l Before configuring NTP, set the time zone in the OS to the local time zone. This document uses the
time zone of China Shanghai as an example. On the customer site, use the actual time zone.
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Step 1 Log in to the NTP server as the root user. Right-click in the blank space on the desktop and
choose Open Terminal from the shortcut menu.
Step 2 Run the systemctl enable ntpd.service command to configure automatic startup for the NTP
service.
# systemctl enable ntpd.service
Step 3 Run the vi /etc/ntp.conf command to open the NTP configuration file.
# vi /etc/ntp.conf
Step 4 Press i to make the ntp.conf file editable and add NTP client IP address restriction to the file.
Format:
server 127.127.1.0
fudge 127.127.1.0 stratum 10
restrict <NTP client IP address> mask <NTP client subnet mask> nomodify notrap
This command allows an NTP client with the specified IP address to use the current host as
the NTP server and synchronize time from this host.
Example:
restrict 192.126.126.0 mask 255.255.255.0 nomodify
This example command allows NTP clients with the IP addresses 192.126.126.1–
192.126.126.254 to synchronize time from the NTP server.
In this command, set this parameter to the IP address of the OS management network port.
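Putting the format and the example together, the relevant part of /etc/ntp.conf on the NTP server would look like the following (192.126.126.0/24 is the example subnet used above):
server 127.127.1.0
fudge 127.127.1.0 stratum 10
restrict 192.126.126.0 mask 255.255.255.0 nomodify notrap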
Step 5 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type wq and press Enter to save the modification and exit the vi editor.
Step 6 Run the systemctl restart ntpd.service command to restart the NTP service.
# systemctl restart ntpd.service
----End
Log in to the two server nodes as the root user and perform the following operations.
Step 2 Run the systemctl enable ntpd.service command to configure automatic startup for the NTP
service.
# systemctl enable ntpd.service
Step 3 Run the vi /etc/ntp.conf command to open the NTP configuration file.
# vi /etc/ntp.conf
Step 4 Press i to make the ntp.conf file editable and add the NTP server IP address to the file.
Format:
The following uses 192.126.126.131 as an example. Replace it with the actual NTP server IP
address.
server 192.126.126.131
Step 5 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type wq and press Enter to save the modification and exit the vi editor.
Step 6 Run the systemctl restart ntpd.service command to restart the NTP service.
# systemctl restart ntpd.service
Step 7 Run the ntpq -p command to check the NTP running status.
hw00001: # ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
192.126.126.131 .LOCL. 1 u 2 64 1 0.417 5.373 0.000
The parameters in the output are described as follows:
remote: Indicates the name of the NTP server that responds to the request.
poll: Indicates the interval (in seconds) at which the local NTP client synchronizes time with the remote NTP server.
offset: Indicates the time offset (in milliseconds) between the local NTP client and the time source.
NOTE
If the time difference (offset) is greater than 1000 seconds, you need to
stop the ntpd service, manually synchronize the time, and start the ntpd
service.
systemctl stop ntpd.service
ntpdate 192.126.126.131 //NTP server IP address.
Replace it with the actual NTP server IP address.
systemctl start ntpd.service
----End
During Cluster HA installation and configuration on SLES 15, CHRONY is used as the
default time synchronization tool for the SAP HANA server. If the customer can provide a
CHRONY server, perform steps in this chapter to configure CHRONY time synchronization.
Otherwise, use the NTP tool for time synchronization.
6.1 Configuring a CHRONY Server
6.2 Configuring a CHRONY Client
This section uses a Linux OS as an example to describe how to configure the CHRONY server. If a
CHRONY server exists on the live network, skip this section and configure the CHRONY client.
Step 1 Log in to the CHRONY server as the root user. Right-click in the blank space on the desktop
and choose Open Terminal from the shortcut menu.
Step 2 Run the systemctl enable chronyd.service command to configure automatic startup for the
CHRONY service.
# systemctl enable chronyd.service
Step 3 Run the vi /etc/chrony.conf command to open the CHRONY configuration file.
# vi /etc/chrony.conf
Step 4 Press i to make the chrony.conf file editable and add client IP address restriction to the file.
Format:
server 127.0.0.1 iburst
allow <CHRONY client network address>/<prefix length>
This command allows a CHRONY client with the specified IP address to use the current
host as the CHRONY server and synchronize time from this host.
Example:
allow 192.126.126.0/24
This example command allows CHRONY clients with the IP addresses 192.126.126.1–
192.126.126.254 to synchronize time from the CHRONY server.
In this command, set this parameter to the IP address of the OS management network port.
Step 5 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type wq and press Enter to save the modification and exit the vi editor.
Step 6 Run the systemctl restart chronyd.service command to restart the CHRONY service.
# systemctl restart chronyd.service
----End
Log in to the two server nodes as the root user and perform the following operations.
Step 2 Run the systemctl enable chronyd.service command to configure automatic startup for the
CHRONY service.
# systemctl enable chronyd.service
Step 3 Run the vi /etc/chrony.conf command to open the CHRONY configuration file.
# vi /etc/chrony.conf
Step 4 Type i to enter the edit mode, add the IP address of the CHRONY server to the file, comment
out IP addresses of other servers, and write the following command to the file:
Format:
server <CHRONY server IP address> iburst
The following uses 192.126.126.131 as an example. Replace it with the actual CHRONY
server IP address.
server 192.126.126.131 iburst
Step 5 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type wq and press Enter to save the modification and exit the vi editor.
Step 6 Run the systemctl restart chronyd.service command to restart the CHRONY service.
# systemctl restart chronyd.service
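As an additional check (not part of the original procedure), you can verify that the client has selected the configured server as its synchronization source; the server line should be marked with ^* once synchronization is established:
chronyc sources -v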
----End
l The database software version must match the client version. The SAP_HANA_Client version
described in this section is used as an example.
l There are two installation methods depending on the installation source obtaining method of the
SAP HANA Client.
l Log in to the two server nodes as the root user and perform the operations described in this section.
Installation Method 1
Step 1 Download the SAP HANA Client installation package (x86_64), for example,
IMDB_CLIENT20_002_36-80002082.SAR, and the decompression file
SAPCAR_816-80000935.exe, from the official SAP website.
Step 2 Upload the installation package and decompression software to a directory, such as /home, on
the SAP HANA server.
Step 3 Log in to server node 1.
Step 4 Run the chmod +x * command to add the execute permission.
hw00001:/home# chmod +x *
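Steps 5 and 6 are not reproduced in this extract. Typically, the SAR archive is unpacked with SAPCAR before hdbinst is run; a sketch using the file names from Step 1 is shown below (the name of the extracted directory is an assumption and depends on the package version):
hw00001:/home # ./SAPCAR_816-80000935.exe -xvf IMDB_CLIENT20_002_36-80002082.SAR
hw00001:/home # cd SAP_HANA_CLIENT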
Step 7 Run the ./hdbinst command to install the SAP HANA Client.
Step 8 Log in to server node 2 and install the SAP HANA Client in the same way.
----End
Installation Method 2
Download the SAP HANA Client installation package from the official SAP website.
Decompress it to the local PC and upload the HDB_CLIENT_LINUX_X86_64 installation
package generated after the decompression to the SAP HANA server nodes. The procedure is
as follows:
Step 1 Download the SAP HANA Client installation package (x86_64), for example,
IMDB_CLIENT20_002_36-80002082.SAR, from the official SAP website.
Step 2 Decompress the package on the local computer to obtain the SAP HANA Client installation
package HDB_CLIENT_LINUX_X86_64 (The package name depends on the actual
situation).
Step 7 Run the ./hdbinst command to install the SAP HANA Client.
Step 8 Log in to server node 2 and install the SAP HANA Client in the same way.
----End
Step 3 In the /usr/sap/hdbclient directory, run the following command to create a user:
./hdbsql -i 00 -u system -p <SYSTEM user password> -n localhost:30015 "create user
rhelhasync password <password>"
Step 5 Create the user key for the rhelhasync user to log in to the database.
./hdbuserstore SET SAPHANAS00SR localhost:30015 rhelhasync <password>
Step 6 Run the ./hdbuserstore list command to check whether the user key is created successfully.
If ENV contains port 30015 (or 30013 in the multi-tenant scenario) and USER is the name of the newly
created user, the user key is created successfully.
[root@hw00001 hdbclient]# ./hdbuserstore list
DATA FILE : /root/.hdb/hw00001/SSFS_HDB.DAT
KEY FILE : /root/.hdb/hw00001/SSFS_HDB.KEY
KEY SAPHANAS00SR
ENV : localhost:30015
USER: rhelhasync
Step 7 Run the ./hdbsql -U SAPHANAS00SR "select * from dummy" command to check whether
the user key is correct.
NOTE
Before you run the command, ensure that the database is running in primary mode and is not
registered with another database as a secondary. Otherwise, the command execution fails.
Step 8 Log in to server node 2 and create a user key in the same way.
----End
Before you perform the following operations, ensure that the database is running in primary mode and
is not registered with another database as a secondary. Otherwise, the command execution
fails. Log in to the two server nodes as the root user and use the SAP HANA studio management tool or
command-line interface (CLI) to perform the following operations.
l Multi-tenant scenario:
Run the following commands to back up the SYSTEMDB and tenant databases:
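The exact commands are not reproduced in this extract. As a hedged illustration, a full data backup of the system database and a tenant database (assumed here to be S00, instance 00) can be taken with hdbsql roughly as follows; the user and the backup prefix are assumptions:
./hdbsql -i 00 -u SYSTEM -d SYSTEMDB "BACKUP DATA USING FILE ('initial_backup')"
./hdbsql -i 00 -u SYSTEM -d SYSTEMDB "BACKUP DATA FOR S00 USING FILE ('initial_backup')"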
NOTE
Step 4 Log in to server node 2 and create a database backup in the same way.
----End
Log in to the two server nodes as the root user and perform the operations described in this section.
s00adm is the database user name automatically generated during the database installation,
and s00 is the database SID in lowercase. Change them based on the actual situation.
hw00001:/ # su - s00adm
Step 4 Run the exit command to switch back to the root user.
hw00001:/usr/sap/S00/HDB00> exit
hw00001:/ #
The SID must be changed based on the actual situation. In this example, the SID is S00.
hw00001:/ # vim /hana/shared/S00/global/hdb/custom/config/global.ini
Step 6 Press i to make the global.ini file editable and add the following information to the file.
10.5.5.142 is the IP address of the dedicated system replication 10GE port of the peer server,
and hw00002 is the host name of the peer server. Change them based on the actual situation.
[system_replication_communication]
listeninterface = .global
[system_replication_hostname_resolution]
10.5.5.142 = hw00002
Step 7 Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to switch to the
bottom line mode. Type :wq and press Enter to save the modification and exit the vi editor.
Step 8 Log in to server node 2, and edit the global.ini file in the same way. (The peer IP address and
host name need to be modified accordingly.)
[system_replication_communication]
listeninterface = .global
[system_replication_hostname_resolution]
10.5.5.131 = hw00001
----End
Step 4 Run the hdbnsutil -sr_enable --name=hw00001 command to enable system replication.
name=hw00001 is the local host name. Change it based on the actual situation.
s00adm@hw00001:/usr/sap/S00/HDB00> hdbnsutil -sr_enable --name=hw00001
checking for active nameserver ...
nameserver is active, proceeding ...
successfully enabled system as system replication source site
done.
Step 5 Run the hdbnsutil -sr_state command to check the database system replication status.
The current database mode is primary, which indicates that the local node is the active node.
s00adm@hw00001:/usr/sap/S00/HDB00> hdbnsutil -sr_state
checking for active or inactive nameserver ...
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~
online: true
mode: primary
operation mode: primary
site id: 1
site name: hw00001
Host Mappings:
~~~~~~~~~~~~~~
Site Mappings:
~~~~~~~~~~~~~~
hw00001 (primary/)
Tier of hw00001: 1
done.
----End
Step 3 Run the HDB stop command to stop the database on the standby node.
hw00002:/usr/sap/S00/HDB00> HDB stop
Stopping instance using: /usr/sap/S00/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00
-function Stop 400
12.10.2015 15:11:23
Stop
OK
Waiting for stopped instance using: /usr/sap/S00/SYS/exe/hdb/sapcontrol -prot
NI_HTTP -nr 00 -function WaitforStopped 600 2
12.10.2015 15:11:23
WaitforStopped
OK
hdbdaemon is stopped.
Step 4 Copy the SSFS_<SID>.DAT and SSFS_<SID>.KEY files from the active node to the
standby node.
In SAP HANA 2.0, the data and log transmission channel needs to be authenticated during the
system replication process. Therefore, the system PKI SSFS storage certificate is required.
SSFS_<SID>.DAT and SSFS_<SID>.KEY are stored in /usr/sap/<SID>/SYS/global/security/rsecssfs/data
and /usr/sap/<SID>/SYS/global/security/rsecssfs/key respectively. SID is the
database system ID. In this example, the SID is S00. Replace it with the actual SID.
1. On the standby node, run the following commands to back up the files:
[root@hw00002 ~]# cd /usr/sap/S00/SYS/global/security/rsecssfs/data
[root@hw00002 data]# mv SSFS_S00.DAT SSFS_S00.DAT.bak
[root@hw00002 ~]# cd /usr/sap/S00/SYS/global/security/rsecssfs/key
[root@hw00002 key]# mv SSFS_S00.KEY SSFS_S00.KEY.bak
2. On the active node, run the following commands to synchronize files to the standby
node:
hw00001:cd /usr/sap/S00/SYS/global/security/rsecssfs/key
hw00001:/usr/sap/S00/SYS/global/security/rsecssfs/key # scp SSFS_S00.KEY
s00adm@hw00002:/usr/sap/S00/SYS/global/security/rsecssfs/key
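The SSFS_S00.DAT file is copied in the same way (only the key-file copy is shown above); the corresponding commands follow the same pattern:
hw00001:cd /usr/sap/S00/SYS/global/security/rsecssfs/data
hw00001:/usr/sap/S00/SYS/global/security/rsecssfs/data # scp SSFS_S00.DAT s00adm@hw00002:/usr/sap/S00/SYS/global/security/rsecssfs/data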
The parameters in the command are described as follows:
remoteHost: Indicates the host name of the active node. In this example, the host name is hw00001. Replace it with the actual host name.
name: Indicates the host name of the standby node. In this example, the host name is hw00002. Replace it with the actual host name.
remoteInstance: Indicates the HANA database instance ID. Replace it with the actual instance ID.
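The registration command itself (Step 5) is not reproduced in this extract. Based on the parameter descriptions above and the registration command shown later in this guide, it is run on the standby node as the <sid>adm user and takes roughly the following form (adjust host names, instance number, and operation mode as required):
hdbnsutil -sr_register --remoteHost=hw00001 --remoteInstance=00 --replicationMode=sync --name=hw00002 --operationMode=logreplay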
Step 6 Run the HDB start command to start the database on the standby node.
s00adm@hw00002:/usr/sap/S00/HDB00> HDB start
StartService
Impromptu CCC initialization by 'rscpCInit'.
See SAP note 1266393.
OK
OK
Starting instance using: /usr/sap/S00/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00
-function StartWait 2700 2
12.10.2015 15:16:53
Start
OK
12.10.2015 15:17:27
StartWait
OK
Step 7 Run the hdbnsutil -sr_state command to check the system replication status on the standby
node.
If the database mode is sync and both nodes are displayed, the standby node is
registered successfully in synchronous replication mode.
s00adm@hw00002:/usr/sap/S00/HDB00> hdbnsutil -sr_state
online: true
mode: sync
operation mode: logreplay
site id: 2
site name: hw00002
Host Mappings:
~~~~~~~~~~~~~~
Site Mappings:
~~~~~~~~~~~~~~
hw00001 (primary/primary)
|---hw00002 (sync/logreplay)
Tier of hw00001: 1
Tier of hw00002: 2
----End
Step 1 Log in to the active node as the root user. Run the su - s00adm command to switch to the
HANA database user. Replace s00 in the command with the actual database SID in lowercase.
Step 2 Run the cdpy command as a database user to go to the python directory.
Step 3 On the active node, run the python systemReplicationStatus.py command using a database
account to check the data synchronization status.
If the displayed status is ACTIVE, the database has completed synchronization and remains
in sync mode, and you can perform the database takeover operation.
If the status is Initializing, the database is synchronizing data and is not ready for the
takeover operation.
The following is an example of the command output:
s00adm@hw00001:/usr/sap/POC/HDB00> cdpy
s00adm@hw00001:/usr/sap/S00/HDB00/exe/python_support> python systemReplicationStatus.py
The output is a wide table; its content in this example is as follows:
l Database SYSTEMDB: host hw00001, port 30001, service nameserver, volume ID 1, site ID 1, site name hw00001; secondary host hw00002, secondary port 30001, secondary site ID 2, secondary site name hw00002, secondary active status YES; replication mode SYNC, replication status ACTIVE.
l Database TD1: host hw00001, port 30040, service indexserver, volume ID 2, site ID 1, site name hw00001; secondary host hw00002, secondary port 30040, secondary site ID 2, secondary site name hw00002, secondary active status YES; replication mode SYNC, replication status ACTIVE.
mode: PRIMARY
site id: 1
site name: hw00001
s00adm@hw00001:/usr/sap/S00/HDB00/exe/python_support>
You can also use HANA studio to connect to the HANA database on the active node and
check the system replication status on the Overview tab page. If the status icon is green
(All services are active and in sync), the synchronization is complete; if the icon is
yellow, the synchronization is still in progress.
----End
l This section describes how to install and configure the cluster HA component on SLES 12 SP3 for SAP. The procedure on other SLES 12 for SAP service packs is similar.
l Log in to the active and standby nodes of the SAP HANA database as the root user and perform the
following operations.
Step 3 Right-click the desktop and open a terminal. On the terminal, run the yast2 command to open
the YaST Control Center.
Step 4 On the YaST Control Center, choose Software > Add-On Products.
Step 9 Click the Patterns tab, select High Availability, and click Accept.
Step 11 Log in to the standby node and install the cluster HA component in the same way.
----End
Step 1 Download the SAP HANA System Replication patch package from the official website based
on the OS version. The following patch packages are for reference only.
l SAPHanaSR-0.152.22-1.1.noarch.rpm
l SAPHanaSR-0.152.22-1.1.src.rpm
l SAPHanaSR-doc-0.152.22-1.1.noarch.rpm
Step 2 Upload the patch packages to any directories on the active and standby nodes.
Step 3 Log in to the active node and go to the directory where the patch packages are stored.
Step 4 Run the rpm -Uvh *.rpm command to install the patch packages.
rpm -Uvh *.rpm
Step 5 Log in to the standby node and install the patch packages in the same way.
----End
Before initialization, ensure that you have configured SSH mutual trust and NTP.
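The initialization steps on the active node are not reproduced in this extract. On SLES, the cluster bootstrap is typically started on the active node with the following command (a sketch; answer the prompts according to the plan in this guide, and decline SBD if IPMI is used as the split-brain mechanism, as in the join step that follows):
hw00001:~ # ha-cluster-init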
----End
In this example, hw00002 is the host name of the standby node. Replace it with the actual
host name or IP address of the standby node.
scp /etc/corosync/corosync.conf hw00002:/etc/corosync/
Step 2 Log in to the standby node of the cluster as the root user, and run the following commands to
add the default route. In the commands, eth2 is the OS management network port of the
standby node. Change it based on the site requirements.
hw00002 :~ # ip route add default via 192.126.126.142 dev eth2
hw00002 :~ # /sbin/ip route
Step 3 Log in to the standby node as the root user and run the ha-cluster-join command to add the
standby node to the cluster.
hw00002:~ # ha-cluster-join
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to
start without a watchdog.
Do you want to continue anyway? [y/N] y //Enter y to use IPMI instead of SBD as
the split-brain mechanism.
WARNING: Could not detect network address for eth0
Join This Node to Cluster:
You will be asked for the IP address of an existing node, from which
configuration will be copied. If you have not already configured
passwordless ssh between nodes, you will be prompted for the root
password of the existing node.
IP address or hostname of existing node (e.g.: 192.168.1.1) [] 192.168.1.143 //
Enter the OS management IP address of the active node.
Enabling sshd service
/root/.ssh/id_rsa already exists - overwrite? [y/N] n//Enter n, which indicates
not to overwrite id_rsa.
/root/.ssh/id_dsa already exists - overwrite? [y/N] n//Enter n, which indicates
not to overwrite id_dsa.
Configuring csync2
Enabling csync2 service
Enabling xinetd service
Merging known_hosts
Probing for new partitions......done
Enabling hawk service
HA Web Konsole is now running, to see cluster status go to:
https://10.5.2.11:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Enabling openais service
Waiting for cluster...done
Done (log saved to /var/log/sleha-bootstrap.log)
hw00002:~ #
NOTE
If the message "WARNING: csync2 of /etc/csync2/csync2.cfg failed - file may not be in sync on all
nodes;" is displayed, the installation fails. Perform the following operations:
1. On the active node, run the scp /etc/csync2/csync2.cfg hw00002:/etc/csync2/ command to manually
synchronize the csync2.cfg file from the active node to the standby node. In the command, hw00002
is the host name of the standby node. Replace it with the actual host name or IP address of the
standby node.
2. On the standby node, run the ha-cluster-join command.
----End
Step 3 Select Communication Channels to set the parameters, and click Finish.
1. Set Transport to Unicast.
2. Set Channel to the active heartbeat channels.
– Bind Network Address: Enter the OS management network segment, for example,
192.126.126.0.
– Port: Retain the default value 5405.
3. Select Redundant Channel to configure the standby heartbeat channel.
– Bind Network Address: Enter the network segment for direct connections with the
system replication databases, for example, 10.5.5.0.
Step 4 Choose Security, select Enable Security Auth, and click Generate Auth Key File.
Step 5 Choose Configure Csync2, click Generate Pre-Shared-Keys to generate the key, and click
Add Suggested Files to add files to be synchronized. Select hw00002 in the Sync Host list
box, and delete /etc/multipath.conf in the Sync File list box. Delete the same file for
hw00001. (If the /etc/multipath.conf file is synchronized, the server will enter the emergency
mode.)
NOTE
Ensure that the button reads Turn csync2 OFF, which indicates that csync2 is currently enabled.
Step 6 Choose Configure conntrackd, set the parameters as shown in Table 8-1, and click
Generate *.conf.
Dedicated Interface: Select the OS management network port. In this example, the value is eth0:192.126.126.131.
Step 7 Choose Service, set the parameters as shown in Table 8-2, and click Finish.
Switch On and Off: Set this parameter to Start pacemaker Now, which indicates that pacemaker is enabled.
Step 8 Manually synchronize the shared key file after configuring basic cluster parameters. Copy
the /etc/csync2/key_hagroup and /etc/csync2/csync2.cfg files from the active node to
the /etc/csync2 directory on the standby node. hw00002 is the host name of the standby node.
Replace it with the actual host name or IP address of the standby node.
scp /etc/csync2/key_hagroup /etc/csync2/csync2.cfg hw00002:/etc/csync2
Step 9 On the active node, run the following commands to enable the csync2 and xinetd functions:
systemctl enable csync2.socket
systemctl enable xinetd
systemctl restart xinetd
Step 10 On the standby node, run the following commands to enable the csync2 and xinetd functions:
systemctl enable csync2.socket
systemctl enable xinetd
systemctl restart xinetd
Step 11 On the active node, run the csync2 -xv command to synchronize the active and standby
configuration files. Check the multipath file multipath.conf in the /etc directory on the
standby node. If the multipath file of the active node is synchronized to the standby node,
reconfigure the multipath file of the standby node according to the Huawei SAP HANA
Appliance Single Node System Installation Guide (CH121&CH242&2288H&2488H&9008
V5). Otherwise, the standby node will restart and enter the maintenance mode due to
multipath issues.
csync2 -xv
NOTE
Step 12 On the active node, run the systemctl status pacemaker and systemctl restart pacemaker
commands to restart the cluster service.
systemctl status pacemaker
systemctl restart pacemaker
Step 13 On the standby node, run the systemctl status pacemaker and systemctl restart pacemaker
commands to restart the cluster service.
systemctl status pacemaker
systemctl restart pacemaker
Step 14 On the active node, run the crm_mon -r command to check the cluster status. The active and
standby nodes are online.
hw00001:~ # crm_mon -r
Stack: corosync
Current DC: hw00001 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Wed May 9 11:20:09 2018
Last change: Wed May 9 11:19:42 2018 by root via crm_attribute on hw00001
2 nodes configured
7 resources configured
Online: [ hw00001 hw00002 ]
Step 15 On the active node, run the corosync-cfgtool -s command to view the cluster heartbeat status.
The active and standby heartbeats exist.
NOTE
If the cluster is configured for the first time, there may be only one heartbeat cable (ring ID). In this
case, restart both servers and check the heartbeat status again.
hw00001:~ # corosync-cfgtool -s
Printing ring status.
Local node ID 1084752650
RING ID 0
id= 10.5.5.131
status= ring 0 active with no faults
RING ID 1
id= 192.126.126.131
status= ring 1 active with no faults
Step 16 Log in to the iBMC, choose Configuration > Local Users, and grant the IPMI LAN login
interface permission to the Administrator user. The permission has been granted if the icon
is green in the IPMI column.
If the permission is not granted, click the edit icon in the Operation column and select IPMI.
On the active and standby nodes, use IPMItool to connect to the iBMCs of the active and
standby nodes respectively, and check the power status. If Chassis Power is on is displayed,
the IPMI connection is working properly.
Active node:
hw00001:~ # ipmitool -I lanplus -H 192.126.126.14 -U Administrator -P Admin@9000
chassis power status
Chassis Power is on
Standby node:
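(The standby-node command mirrors the one for the active node; the example below uses the standby iBMC address shown in the corresponding SLES 15 section later in this guide. Replace it with the actual iBMC address.)
hw00002:~ # ipmitool -I lanplus -H 192.168.1.13 -U Administrator -P Admin@9000 chassis power status
Chassis Power is on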
-U Administrator: Indicates the user name for logging in to the iBMC. Change it based on the actual situation.
NOTE
If the message "Get Device ID command failed: 0xc1 Invalid command" is displayed, ignore it.
----End
The content of the crm-stonith-cs.txt file is as follows: (Change hw00001 and hw00002 to
the actual host names.)
# enter the following to crm-stonith-cs.txt
location loc_hana01_stonith rsc_hana01_stonith -inf: hw00001
location loc_hana02_stonith rsc_hana02_stonith -inf: hw00002
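The content of the crm-stonith.txt file (the STONITH primitives referenced by the location constraints above) is not reproduced in this extract. With IPMI fencing through the iBMC, the primitives typically look like the following sketch; the external/ipmi agent, its parameter values, the iBMC addresses, and the credentials are assumptions and must match your environment:
# enter the following to crm-stonith.txt (illustrative sketch)
primitive rsc_hana01_stonith stonith:external/ipmi \
    params hostname=hw00001 ipaddr=192.126.126.14 userid=Administrator passwd=<iBMC password> interface=lanplus \
    op monitor interval=1800 timeout=20
primitive rsc_hana02_stonith stonith:external/ipmi \
    params hostname=hw00002 ipaddr=<standby iBMC IP address> userid=Administrator passwd=<iBMC password> interface=lanplus \
    op monitor interval=1800 timeout=20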
Step 2 Run the following commands to import the configuration files to the cluster.
crm configure load update crm-stonith.txt
crm configure load update crm-stonith-cs.txt
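Step 3, which creates the bootstrap options file crm-bs.txt, is not reproduced in this extract. Based on the property values reproduced in the SLES 15 section later in this guide, a typical crm-bs.txt looks like the following sketch:
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
    no-quorum-policy="ignore" \
    stonith-enabled="true" \
    stonith-action="reboot" \
    stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
    resource-stickiness="1000" \
    migration-threshold="5000"
op_defaults $id="op-options" \
    timeout="600"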
Step 4 Run the crm configure load update crm-bs.txt command to import the configuration file to
the cluster.
crm configure load update crm-bs.txt
----End
NOTE
If a certificate warning is displayed when you attempt to access the URL for the first time, it indicates
that a self-signed certificate is used. By default, the self-signed certificate is not considered as a trusted
certificate. Click Continue to this website (not recommended) or add an exception in the browser to
eliminate the warning message.
Step 2 Choose Wizards > SAP > SAP HANA SR Scale-Up Performance-Optimized.
Step 3 Set the database SID, instance number, and virtual IP address, and click Verify.
Step 4 Verify the configuration of the SAP HANA database resource and click Apply.
NOTE
In SAPHanaSR-0.152.21-1.1, VIP resources are bound to nic=eth0, which conflicts with the solution
networking plan bond_vip. You need to manually delete the nic=eth0.
Modification method:
Log in to any cluster node as the root user and run the crm config edit command.
Delete nic=eth0 from primitive rsc_ip_S00_HDB00.
NOTE
In the preceding configuration, the database automatic takeover function is enabled. That is, if the active
node database is faulty when data has been synchronized between the active and standby nodes, the
standby node database can automatically take over the services. The automatic registration function is
disabled. That is, after the standby node takes over the services, the original active node, upon a restart,
cannot automatically register to the original standby node. After rectifying the fault on the original
active node, run the following command to register the original active node to the original standby node:
hw00001 and hw00002 are the host names of the active and standby nodes. Replace them with the
actual host names.
hdbnsutil -sr_register --remoteHost=hw00002 --remoteInstance=00 --
replicationMode=sync --name=hw00001 --operationMode=logreplay
----End
The virtual IP address is used for the SAP application server to access the database and must
be bound to the master database node. If the virtual IP address is not bound to the master
database node, the cluster status is abnormal.
You can also run the crm_mon -r command to check the cluster status.
Stack: corosync
Current DC: hw00001 (version 1.1.15-19.15-e174ec8) - partition with quorum
Last updated: Wed Sep 6 09:58:52 2017
Last change: Wed Sep 6 09:58:47 2017 by root via crm_attribute on hw00001
2 nodes configured
7 resources configured
Step 4 Run the crm configure command to switch to the CLI mode.
crm configure
Step 5 Run the following commands to add network monitoring resources for the upstream service
port:
In this example, 126.126.126.254 is the gateway of the service port and rsc_ip_S00_HDB00
is the name of the virtual IP. Change them based on the actual situation.
crm(live)configure# primitive r_ping ocf:pacemaker:ping params multiplier=100
dampen=5 name=pingdtest host_list=126.126.126.254 op monitor interval=15
timeout=60 op start interval=0 timeout=60
crm(live)configure# clone r_ping-clone r_ping
crm(live)configure# location loc_r_ping rsc_ip_S00_HDB00 rule -inf: not_defined
pingdtest or pingdtest lte 0 //Add constraint
----End
Step 1 Run the SAPHanaSR-showAttr command to check the System Replication active/standby
status.
hw00001:/home # SAPHanaSR-showAttr
Host \ Attr   clone_state  remoteHost  roles                             site     srmode  sync_state  vhost    lpa_sle_lpt
--------------------------------------------------------------------------------------------------------------------------
hw00001       PROMOTED     hw00002     4:P:master1:master:worker:master  hw00001  sync    PRIM        hw00001  1416991408
hw00002       DEMOTED      hw00001     4:S:master1:master:worker:master  hw00002  sync    SOK         hw00002  30
----End
Step 1 Run the SAPHanaSR-showAttr command to check the active/standby status of the SAP
HANA system replication.
Step 2 If data synchronization is not complete, automatic HA cluster takeover will not occur. For
details about manual takeover, see the SAP notes. When the synchronization status of all
services is not Active, SAP does not recommend the takeover operation because this means
that data may be lost.
a. SAP note: 2578019 - Service Crashes in
DataAccess::PersistenceManagerImpl::endOfDataRecovery
Do not perform a takeover unless the REPLICATION_STATUS in
M_SERVICE_REPLICATION shows ACTIVE for all services. See SAP Note 2063657 for details.
https://launchpad.support.sap.com/#/notes/2578019
b. SAP note: 2580302 - Emergency Shutdown of Indexserver Due to Log Position
Inconsistency Upon Takeover
With the fix the takeover in this case can succeed, but since one service wasn't in sync this
implies data loss. You should follow the takeover decision guide of SAP Note 2063657 to
assess whether a takeover in this state is a feasible option.
https://launchpad.support.sap.com/#/notes/2580302
c. SAP note: 2063657 - SAP HANA System Replication Takeover Decision Guideline
You are advised to determine whether to perform a takeover based on the site requirements.
https://launchpad.support.sap.com/#/notes/2063657
----End
After a takeover, register the original active node by performing Step 1 to Step 2 below. After the
registration is complete, clear the failure count of the resource so that the resource can run again.
NOTE
SLES 12 updates the failure counting mechanism. After an HA takeover, the database must be
registered again and the SAP HANA resources must be cleaned up.
Step 1 Assume that the original active node is hw00001 and the original standby node is hw00002.
After the takeover to hw00002 is complete, the virtual IP resources are migrated to hw00002.
Step 2 On the original active node hw00001, run the hdbnsutil -sr_register --remoteHost=hw00002
--remoteInstance=00 --replicationMode=sync --name=hw00001 --operationMode=logreplay
command as the database user (<sid>adm) to register hw00001 as the standby node of hw00002.
Step 3 Log in to SUSE Hawk and click Cleanup in the Operations column of the SAP HANA
resource record to clear the resource failure count.
Step 4 After the failure count is cleared, the database automatically starts on hw00001.
Log in to the active and standby nodes of the SAP HANA database as the root user and perform the
following operations.
Step 3 On the desktop, click Activities and open a terminal. On the terminal, run the yast2
command to open the YaST Control Center.
Step 4 On the YaST Control Center, choose Software > Add-On Products.
Step 7 Click Next. Insert the add-on product DVD and click Continue.
Step 9 Click the Patterns tab, select High Availability, and click Accept.
Step 11 Log in to the standby node and install the cluster HA component in the same way.
----End
If the OS is SLES, log in to the active and standby nodes of the SAP HANA server as the root user.
(You can specify the active and standby nodes based on the site requirements or use the NTP server and
CHRONY server as the active node.) Perform the following operations.
Step 1 Download the SAP HANA System Replication patch package from the official website based
on the OS version. The following patch packages are for reference only.
l SAPHanaSR-0.152.22-4.3.2.noarch.rpm
l SAPHanaSR-0.152.22-4.3.2.src.rpm
l SAPHanaSR-doc-0.152.22-4.3.2.noarch.rpm
Step 2 Upload the patch packages to any directories on the active and standby nodes.
Step 3 Open the YaST2 interface and check whether crmsh and crmsh-script are installed.
Step 4 Log in to the active node and go to the directory where the patch packages are stored.
Step 5 Run the rpm -Uvh *.rpm command to install the patch packages.
rpm -Uvh *.rpm
Step 6 Log in to the standby node and install the patch packages in the same way.
----End
Before initialization, ensure that you have configured SSH mutual trust, NTP, and CHRONY.
Step 2 Add the default route. In the command, eth6 is the OS management network port of the active
node. Change it based on the actual situation, and then run the /sbin/ip route command to verify the route.
hw00001:# ip route add default via 192.168.2.171 dev eth6
hw00001:/home # /sbin/ip route
Step 3 On SLES 15, if the NTP time synchronization tool is used, delete the CHRONY service
automatically installed when Cluster HA is installed. Skip this step on SLES 12.
mv /usr/lib/systemd/system/chronyd.service /home
----End
In this example, hw00002 is the host name of the standby node. Replace it with the actual
host name or IP address of the standby node.
scp /etc/corosync/corosync.conf hw00002:/etc/corosync/
Step 2 Log in to the standby node of the cluster as the root user, and run the following commands to
add the default route. In the commands, eth2 is the OS management network port of the
standby node. Change it based on the site requirements.
hw00002 :~ # ip route add default via 192.126.126.142 dev eth2
hw00002 :~ # /sbin/ip route
Step 3 Log in to the standby node as the root user and run the ha-cluster-join command to add the
standby node to the cluster.
hw00002:~ # ha-cluster-join
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to
start without a watchdog.
Do you want to continue anyway? [y/N] y //Enter y to use IPMI instead of SBD as
the split-brain mechanism.
WARNING: Could not detect network address for eth0
Join This Node to Cluster:
You will be asked for the IP address of an existing node, from which
configuration will be copied. If you have not already configured
passwordless ssh between nodes, you will be prompted for the root
password of the existing node.
IP address or hostname of existing node (e.g.: 192.168.1.1) [] 192.168.1.143 //
Enter the OS management IP address of the active node.
Enabling sshd service
/root/.ssh/id_rsa already exists - overwrite? [y/N] n//Enter n, which indicates
not to overwrite id_rsa.
/root/.ssh/id_dsa already exists - overwrite? [y/N] n//Enter n, which indicates
not to overwrite id_dsa.
Configuring csync2
Enabling csync2 service
Enabling xinetd service
Merging known_hosts
Probing for new partitions......done
Enabling hawk service
HA Web Konsole is now running, to see cluster status go to:
https://10.5.2.11:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Enabling openais service
Waiting for cluster...done
Done (log saved to /var/log/sleha-bootstrap.log)
hw00002:~ #
NOTE
If the message "WARNING: csync2 of /etc/csync2/csync2.cfg failed - file may not be in sync on all
nodes;" is displayed, the installation fails. Perform the following operations:
1. On the active node, run the scp /etc/csync2/csync2.cfg hw00002:/etc/csync2/ command to manually
synchronize the csync2.cfg file from the active node to the standby node. In the command, hw00002
is the host name of the standby node. Replace it with the actual host name or IP address of the
standby node.
2. On the standby node, run the ha-cluster-join command.
----End
Step 3 Select Communication Channels to set the parameters, and click Finish.
1. Set Channel to the active heartbeat channels.
– Bind Network Address: Enter the OS management network segment, for example,
192.126.126.0.
– Port: Retain the default value 5405.
2. Select Redundant Channel to configure the standby heartbeat channel.
– Bind Network Address: Enter the network segment for direct connections with the
system replication databases, for example, 10.5.5.0.
– Port: Enter 5407.
3. Set Transport to Unicast.
4. Configure Member Address. Click Add to add the IP addresses of the OS management
network ports of the two servers in the cluster (192.126.126.131 and 192.126.126.142 in
this example) and the IP addresses for the system replication direct connections
(10.5.5.131 and 10.5.5.142 in this example).
5. Configure Cluster Name.
6. Retain the default value of Expected Votes.
7. Set rrp mode to passive.
8. Select Auto Generate Node ID.
Step 4 Choose Security, select Enable Security Auth, and click Generate Auth Key File.
Step 5 Choose Configure Csync2, click Generate Pre-Shared-Keys to generate the key, and click
Add Suggested Files to add files to be synchronized. Select hw00002 in the Sync Host list
box, and delete /etc/multipath.conf in the Sync File list box. Delete the same file for
hw00001. (If the /etc/multipath.conf file is synchronized, the server will enter the emergency
mode.)
NOTE
Ensure that the button reads Turn csync2 OFF, which indicates that csync2 is currently enabled.
Step 6 Choose Configure conntrackd, set the parameters as shown in Table 9-1, and click
Generate *.conf.
Dedicated Interface: Select the OS management network port. In this example, the value is eth0:192.126.126.131.
Step 7 Choose Service, set the parameters as shown in Table 9-2, and click Finish.
Switch On and Off: Set this parameter to Start pacemaker Now, which indicates that pacemaker is enabled.
Step 8 Manually synchronize the shared key file after configuring basic cluster parameters. Copy
the /etc/csync2/key_hagroup and /etc/csync2/csync2.cfg files from the active node to
the /etc/csync2 directory on the standby node. hw00002 is the host name of the standby node.
Replace it with the actual host name or IP address of the standby node.
scp /etc/csync2/key_hagroup /etc/csync2/csync2.cfg hw00002:/etc/csync2
Step 9 On the active node, run the following command to enable csync2:
systemctl enable csync2.socket
Step 10 On the standby node, run the following command to enable csync2:
systemctl enable csync2.socket
Step 11 On the active node, run the csync2 -xv command to synchronize the active and standby
configuration files. Check the multipath file multipath.conf in the /etc directory on the
standby node. If the multipath file of the active node is synchronized to the standby node,
reconfigure the multipath file of the standby node according to the Huawei SAP HANA
Appliance Single Node System Installation Guide (CH121&CH242&2288H&2488H&9008
V5). Otherwise, the standby node will restart and enter the maintenance mode due to
multipath issues.
csync2 -xv
Step 12 On the active node, run the systemctl status pacemaker and systemctl restart pacemaker
commands to restart the cluster service.
systemctl status pacemaker
systemctl restart pacemaker
Step 13 On the standby node, run the systemctl status pacemaker and systemctl restart pacemaker
commands to restart the cluster service.
systemctl status pacemaker
systemctl restart pacemaker
Step 14 On the active node, run the crm_mon -r command to check the cluster status. The active and
standby nodes are online.
hw00001:~ # crm_mon -r
Stack: corosync
Current DC: hw00001 (version 1.1.18+20180430.b12c320f5-1.14-b12c320f5) -
partition with quorum
Last updated: Wed Feb 13 15:14:27 2019
Last change: Wed Feb 13 14:49:36 2019 by hacluster via cibadmin on hw00001
2 nodes configured
0 resources configured
No resources
Step 15 On the active node, run the corosync-cfgtool -s command to view the cluster heartbeat status.
The active and standby heartbeats exist.
NOTE
If the cluster is configured for the first time, there may be only one heartbeat cable (ring ID). In this
case, restart both servers and check the heartbeat status again.
hw00001:~ # corosync-cfgtool -s
Printing ring status.
Local node ID 1084752650
RING ID 0
id= 10.5.5.131
status= ring 0 active with no faults
RING ID 1
id= 192.126.126.131
status= ring 1 active with no faults
Step 16 Log in to the iBMC, choose Configuration > Local Users, and grant the IPMI LAN login
interface permission to the Administrator user. The permission has been granted if the icon
is green in the IPMI column.
If the permission is not granted, click the edit icon in the Operation column and select IPMI.
On the active and standby nodes, use IPMItool to connect to the iBMCs of the active and
standby nodes respectively, and check the power status. If Chassis Power is on is displayed,
the IPMI connection is working properly.
Active node:
hw00001:~ # ipmitool -I lanplus -H 192.126.126.14 -U Administrator -P Admin@9000
chassis power status
Chassis Power is on
Standby node:
hw00002:~ # ipmitool -I lanplus -H 192.168.1.13 -U Administrator -P Admin@9000
chassis power status
Chassis Power is on
-U Administrator: Indicates the user name for logging in to the iBMC. Change it
based on the actual situation.
NOTE
If the message "Get Device ID command failed: 0xc1 Invalid command" is displayed, ignore it.
----End
Step 2 Run the following commands to import the configuration files to the cluster.
crm configure load update crm-stonith.txt
crm configure load update crm-stonith-cs.txt
Step 3 On the active node, create the crm-bs.txt configuration file with the following content:
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
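The options above form the body of the crm-bs.txt file imported in the next step. For reference, comparable SUSE SAPHanaSR bootstrap files usually begin with a property header line; the following complete sketch is an assumption based on the options shown above and on common SUSE examples, not a verbatim copy of this appliance's file.
property $id="cib-bootstrap-options" \
    no-quorum-policy="ignore" \
    stonith-enabled="true" \
    stonith-action="reboot" \
    stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
    resource-stickiness="1000" \
    migration-threshold="5000"
op_defaults $id="op-options" \
    timeout="600"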
Step 4 Run the crm configure load update crm-bs.txt command to import the configuration file to
the cluster.
crm configure load update crm-bs.txt
----End
NOTE
If a certificate warning is displayed when you attempt to access the URL for the first time, it indicates
that a self-signed certificate is used. By default, the self-signed certificate is not considered as a trusted
certificate. Click Continue to this website (not recommended) or add an exception in the browser to
eliminate the warning message.
Step 2 Choose Wizards > SAP > SAP HANA SR Scale-Up Performance-Optimized.
Step 3 Set the database SID, instance number, and virtual IP address, and click Verify.
Step 4 Verify the configuration of the SAP HANA database resource and click Apply.
NOTE
In the preceding configuration, the database automatic takeover function is enabled. That is, if the active
node database is faulty when data has been synchronized between the active and standby nodes, the
standby node database can automatically take over the services. The automatic registration function is
disabled. That is, after the standby node takes over the services, the original active node, upon a restart,
cannot automatically register to the original standby node. After rectifying the fault on the original
active node, run the following command to register the original active node to the original standby node:
hw00001 and hw00002 are the host names of the active and standby nodes. Replace them with the
actual host names.
hdbnsutil -sr_register --remoteHost=hw00002 --remoteInstance=00 --
replicationMode=sync --name=hw00001 --operationMode=logreplay
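After the registration completes, you can verify the replication state from the database user before the node rejoins the cluster. The following is a minimal check, assuming the example database user s00adm used elsewhere in this guide:
su - s00adm
hdbnsutil -sr_state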
----End
The stonith resource needs to run on the peer node so that it can reboot the peer node when a
cluster split brain occurs.
In this example, the stonith resource of hw00001 must run on hw00002, and the stonith resource
of hw00002 must run on hw00001.
The virtual IP address is used for the SAP application server to access the database and must
be bound to the master database node. If the virtual IP address is not bound to the master
database node, the cluster status is abnormal.
You can also run the crm_mon -r command to check the cluster status.
Stack: corosync
Current DC: hw00001 (version 1.1.15-19.15-e174ec8) - partition with quorum
Last updated: Wed Sep 6 09:58:52 2017
Last change: Wed Sep 6 09:58:47 2017 by root via crm_attribute on hw00001
2 nodes configured
7 resources configured
Step 4 Run the crm configure command to switch to the CLI mode.
crm configure
Step 5 Run the following commands to add network monitoring resources for the upstream service
port:
In this example, 126.126.126.254 is the gateway of the service port and rsc_ip_S00_HDB00
is the name of the virtual IP. Change them based on the actual situation.
crm(live)configure# primitive r_ping ocf:pacemaker:ping params multiplier=100
dampen=5 name=pingdtest host_list=126.126.126.254 op monitor interval=15
timeout=60 op start interval=0 timeout=60
crm(live)configure# clone r_ping-clone r_ping
crm(live)configure# location loc_r_ping rsc_ip_S00_HDB00 rule -inf: not_defined
pingdtest or pingdtest lte 0 //Add constraint
crm(live)configure# commit
----End
Step 1 Run the SAPHanaSR-showAttr command to check the System Replication active/standby
status.
hw00001:/home # SAPHanaSR-showAttr
Host \ Attr clone_state remoteHost roles site srmode
sync_state vhost lpa_sle_lpt
----------------------------------------------------------------------------------
------------------------------
hw00001 PROMOTED hw00002 4:P:master1:master:worker:master hw00001
sync PRIM hw00001 1416991408
hw00002 DEMOTED hw00001 4:S:master1:master:worker:master hw00002
sync SOK hw00002 30
----End
Step 1 Run the SAPHanaSR-showAttr command to check the active/standby status of the SAP
HANA system replication.
Run the SAPHanaSR-showAttr command on the active node.
The result description is as follows:
sync_state:
PRIM indicates that the node is the active node.
SOK indicates that the node is ready and synchronization is complete.
SFAIL indicates that synchronization is not complete.
The command output is similar to the following:
HW00001:/home # SAPHanaSR-showAttr
Host \ Attr clone_state remoteHost roles site srmode sync_state
vhost lpa_sle_lpt
----------------------------------------------------------------------------------
-----
Step 2 If data synchronization is not complete, automatic HA cluster takeover will not occur. For
details about manual takeover, see the SAP notes. When the synchronization status of all
services is not Active, SAP does not recommend the takeover operation because this means
that data may be lost.
a. SAP note: 2578019 - Service Crashes in
DataAccess::PersistenceManagerImpl::endOfDataRecovery
https://launchpad.support.sap.com/#/notes/2578019
b. SAP note: 2580302 - Emergency Shutdown of Indexserver Due to Log Position
Inconsistency Upon Takeover
With the fix the takeover in this case can succeed, but since one service wasn't in sync this
implies data loss. You should follow the takeover decision guide of SAP Note 2063657 to
assess if a takeover in this state is a feasible option.
https://launchpad.support.sap.com/#/notes/2580302
c. SAP note: 2063657 - SAP HANA System Replication Takeover Decision Guideline
You are advised to determine whether to perform a takeover based on the site requirements.
https://launchpad.support.sap.com/#/notes/2063657
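If, after checking the SAP notes above, a manual takeover is judged acceptable, it is normally triggered on the node that should become primary, as the database user. The following is a minimal sketch using this guide's example database user s00adm:
su - s00adm
hdbnsutil -sr_takeover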
----End
NOTE
SLES 12 updates the failure counting mechanism. After an HA takeover, the database needs to be
registered again and the SAP HANA resource failure count needs to be cleaned up.
Step 1 Assume that the original active node is HW00001 and the original standby node is
HW00002. After HW00002 takeover is complete, virtual IP resources are migrated to
HW00002.
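Before the failure count is cleaned up, the original active node has to be registered as the new standby (see the NOTE above). The following is a sketch of this registration, based on the registration command shown earlier in this chapter; the host names, instance number, and site name follow this guide's examples and must be adapted to the actual environment. Run it on HW00001 as the database user:
su - s00adm
hdbnsutil -sr_register --remoteHost=HW00002 --remoteInstance=00 --replicationMode=sync --name=HW00001 --operationMode=logreplay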
Step 3 Log in to SUSE Hawk, choose Cleanup in the Operations column of the SAP HANA
resource record to clear the resource failure count.
Step 4 After the failure count is cleaned up, the database automatically starts on HW00001.
----End
For details about how to configure the yum source, see 3 OS and Database Installation.
yum list
yum install -y pacemaker corosync
yum localinstall resource-agents-sap-3.9.5-105.el7_4.2.x86_64.rpm
yum localinstall resource-agents-sap*
yum install pcs fence-agents-all
yum install gtk2 libicu xulrunner sudo tcsh libssh2 expect cairo graphviz iptraf-ng \
krb5-workstation krb5-libs libpng12 nfs-utils lm_sensors rsyslog openssl \
PackageKit-gtk3-module libcanberra-gtk2 libtool-ltdl xorg-x11-xauth numactl \
xfsprogs net-tools bind-utils openssl098e tuned tuned-utils ntp
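After the installation, you can optionally confirm that the core cluster packages are present, for example:
rpm -q pacemaker corosync pcs fence-agents-all resource-agents-sap
Each package should be reported with its installed version; a "package ... is not installed" line indicates a missing package.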
Step 2 Run the HDB -info command on the active and standby nodes to check whether the SAP
HANA database is running properly. If the hdbnameserver, hdbcompileserver, and
hdbpreprocessor processes are displayed, the database is running properly.
Active node:
s00adm@hw00001:/usr/sap/S00/HDB00> HDB -info
USER PID PPID %CPU VSZ RSS COMMAND
s00adm 21117 21116 0.8 116304 2972 -sh
s00adm 21256 21117 0.0 113256 1644 \_ /bin/sh /usr/sap/S00/HDB00/HDB -
info
s00adm 21288 21256 0.0 139504 1644 \_ ps fx -U s00adm -o
user,pid,ppid,pcpu,vsz,rss,args
s00adm 9493 1 0.0 23616 1712 sapstart pf=/usr/sap/S00/SYS/profile/
S00_HDB00_hw00001
s00adm 9520 9493 0.1 349668 33244 \_ /usr/sap/S00/HDB00/hw00001/trace/
hdb.sapS00_HDB00 -d -nw -f /usr/sap/S00/HDB00/hw00001/daemon.ini
pf=/usr/sap/S00/SYS/profile/S00_HDB00_hw00001
s00adm 9542 9520 39.1 8876924 4973812 \_ hdbnameserver
s00adm 9885 9520 29.7 4398064 1449272 \_ hdbcompileserver
s00adm 9887 9520 5.7 4152936 471824 \_ hdbpreprocessor
Standby node:
s00adm@hw00002:/usr/sap/S00/HDB00> HDB -info
Step 3 On the active node, run the ./hdbsql -u system -i 00 "select value from
"SYS"."M_INIFILE_CONTENTS" where key='log_mode'" command to check that log_mode
is set to normal for SAP HANA.
Go to the /usr/sap/hdbclient directory and run the following command. Enter the database
password when prompted. In this example, the password is Huawei12#$.
[root@hw00001 hdbclient]# ./hdbsql -u system -i 00 "select value from
"SYS"."M_INIFILE_CONTENTS" where key='log_mode'"
Password:
VALUE
"normal"
1 row selected (overall time 133.117 msec; server time 111.782 msec)
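If the query returns a value other than "normal", log_mode can typically be changed with an SQL statement like the one below. This is a sketch only; after switching log_mode to normal, SAP HANA requires a full data backup before log backups resume.
[root@hw00001 hdbclient]# ./hdbsql -u system -i 00 "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','log_mode') = 'normal' WITH RECONFIGURE"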
Step 4 On the active node, run the grep Autostart /usr/sap/S00/SYS/profile/* command to check
whether the automatic startup function of the SAP HANA database is disabled.
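Autostart should be disabled (set to 0) so that the cluster, not sapinit, controls the database. The expected output is similar to the following line; the exact profile path follows this guide's example SID and host name and will differ in other environments:
/usr/sap/S00/SYS/profile/S00_HDB00_hw00001:Autostart = 0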
Step 5 Run the systemctl start pcsd.service and systemctl enable pcsd.service commands on the
active and standby nodes to start the pcs service and enable the automatic startup function.
Active node:
[root@hw00001 hdbclient]# systemctl start pcsd.service
[root@hw00001 hdbclient]# systemctl enable pcsd.service
Standby node:
[root@hw00002 hdbclient]# systemctl start pcsd.service
[root@hw00002 hdbclient]# systemctl enable pcsd.service
Step 6 Run the passwd hacluster command on the active and standby nodes to set the password of
the hacluster cluster user.
In this example, hacluster is the cluster user name and linux is the password. Change the password
based on the actual situation.
Active node:
[root@hw00001 ~]# passwd hacluster
Changing password for user hacluster.
New password: (Enter the password.)
Retype new password: (Confirm the password.)
passwd: all authentication tokens updated successfully.
[root@hw00001 ~]#
Standby node:
[root@hw00002 ~]# passwd hacluster
Changing password for user hacluster.
New password: (Enter the password.)
Retype new password: (Confirm the password.)
passwd: all authentication tokens updated successfully.
[root@hw00002 ~]#
Step 7 Run the pcs cluster auth 10.5.5.131 10.5.5.142 command on the active and standby nodes to
perform authentication.
hacluster is the cluster user name, linux is the password (configured in Step 6), and 10.5.5.131
and 10.5.5.142 are the SR channel network port IP addresses of the active and standby nodes.
Change them based on the actual situation.
[root@hw00002 ~]# pcs cluster auth 10.5.5.131 10.5.5.142
Username: hacluster
Password:
10.5.5.142: Authorized
10.5.5.131: Authorized
Step 8 Run the pcs cluster setup --name hacluster --start 10.5.5.131,192.126.126.131
10.5.5.142,192.126.126.142 --transport udpu command on the active and standby nodes to
initialize clusters and set the cluster communication mode to unicast.
In this example, the OS management network port is used as the cluster communication
network port. Before configuration, ensure that the OS management network port is normal.
10.5.5.131 and 10.5.5.142 are the SR channel network port IP addresses of the two hosts in
the cluster. 192.126.126.131 and 192.126.126.142 are OS management network port IP
addresses of the two hosts in the cluster. Change them based on the actual situation.
Active node:
[root@hw00001 ~]# pcs cluster setup --name hacluster --start
10.5.5.131,192.126.126.131 10.5.5.142,192.126.126.142 --transport udpu
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
hw00001: Succeeded
hw00002: Succeeded
Starting cluster on nodes: hw00001, hw00002...
hw00001: Starting Cluster...
hw00002: Starting Cluster...
Synchronizing pcsd certificates on nodes hw00001, hw00002...
hw00002: Success
hw00001: Success
Standby node:
[root@hw00002 ~]# pcs cluster setup --name hacluster --start
10.5.5.131,192.126.126.131 10.5.5.142,192.126.126.142 --transport udpu --force
Step 9 Run the pcs cluster auth hw00001 hw00002 command on the active and standby nodes to
perform authentication.
In this example, hacluster is the cluster user name, linux is the password set in Step 6,
hw00001 is the host name of the active node, and hw00002 is the host name of the standby
node. Change them based on the actual situation.
[root@hw00002 ~]# pcs cluster auth hw00001 hw00002
Username: hacluster
Password:
hw00002: Authorized
hw00001: Authorized
Step 10 Run the pcs cluster setup --name hacluster --start hw00001,hw00001sr
hw00002,hw00002sr --transport udpu command on the active and standby nodes to
initialize the cluster and set the cluster communication mode to unicast (UDPU).
This example uses the upper-layer service ports as cluster communication ports. Before the
configuration, ensure that the upper-layer service ports of the cluster nodes can communicate
with each other properly.
hw00001 and hw00002 are the host names that map to the upper-layer service ports of the two
cluster nodes, and hw00001sr and hw00002sr are the host names that map to the cluster SR
channel ports. Change them based on the actual situation.
Active node:
[root@hw00001 ~]# pcs cluster setup --name hacluster --start hw00001,hw00001sr
hw00002,hw00002sr --transport udpu
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
hw00001: Succeeded
hw00002: Succeeded
Starting cluster on nodes: hw00001, hw00002...
hw00001: Starting Cluster...
hw00002: Starting Cluster...
Synchronizing pcsd certificates on nodes hw00001, hw00002...
hw00002: Success
hw00001: Success
Standby node:
[root@hw00002 ~]# pcs cluster setup --name hacluster --start hw00001,hw00001sr
hw00002,hw00002sr --transport udpu --force
Step 11 On the active node, run the following commands to start the services related to the cluster:
[root@hw00001 ~]# systemctl start pcsd.service
[root@hw00001 ~]# systemctl start corosync.service
[root@hw00001 ~]# systemctl start pacemaker.service
Step 12 On the active node, run the following commands to enable the automatic startup function for
pacemaker, corosync, and pcsd.
[root@hw00001 ~]# systemctl enable pcsd.service
[root@hw00001 ~]# systemctl enable corosync.service
[root@hw00001 ~]# systemctl enable pacemaker.service
Step 13 On the active node, run the pcs status command to check the two-node cluster status.
[root@hw00001 ~]# pcs status
Cluster name: hacluster
WARNING: no stonith devices and stonith-enabled is not false
WARNING: corosync and pacemaker node names do not match (IPs used in setup?)
Last updated: Fri Oct 13 16:53:31 2017Last change: Fri Oct 13 16:45:38 2017 by
hacluster via crmd on hw00001
Stack: corosync
Current DC: hw00001 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 0 resources configured
PCSD Status:
hw00001: Online
hw00002: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@hw00001 ~]#
Step 14 On the active node, run the corosync-cfgtool -s command to check the heartbeat status.
[root@hw00001 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id= 10.5.1.10
status= ring 0 active with no faults
RING ID 1
id= 10.5.2.10
status= ring 1 active with no faults
[root@hw00001 ~]#
----End
Step 2 On the active node, run the following commands to configure the basic two-node cluster
resource parameters.
[root@hw00001 ~]# pcs property set no-quorum-policy="stop"
[root@hw00001 ~]# pcs resource defaults resource-stickiness=1000
[root@hw00001 ~]# pcs resource defaults migration-threshold=5000
[root@hw00001 ~]# pcs resource op defaults timeout=600s
Step 3 On the active node, run the following commands to configure the stonith resource instances to
restart the peer server through IPMI when a split brain occurs. (The \ at the end of a line continues
the command on the next line, and > is the shell continuation prompt. If you copy the text, remove
the > prompts before executing it.)
[root@hw00001 ~]# pcs stonith create st_ipmi_hw00001 fence_ipmilan \
> ipaddr=126.126.126.14 \
> lanplus=on \
> login="Administrator" \
> passwd="Admin@9000" \
> pcmk_host_list="hw00001"
[root@hw00001 ~]# pcs stonith create st_ipmi_hw00002 fence_ipmilan \
> ipaddr=126.126.126.16 \
> lanplus=on \
> login="Administrator" \
> passwd="Admin@9000" \
> pcmk_host_list="hw00002"
Step 4 On the active node, run the following commands to configure the stonith resource constraints
so that the resource st_ipmi_hw00001 does not run on node hw00001 and the resource
st_ipmi_hw00002 does not run on node hw00002.
[root@hw00001 ~]# pcs constraint location st_ipmi_hw00001 avoids hw00001
[root@hw00001 ~]# pcs constraint location st_ipmi_hw00002 avoids hw00002
Step 5 On the active node, run the pcs status command to check that the stonith resource is
configured successfully.
[root@hw00001 ~]# pcs status
Cluster name: hacluster
WARNING: corosync and pacemaker node names do not match (IPs used in setup?)
Last updated: Fri Oct 13 18:12:00 2017Last change: Fri Oct 13 18:11:35 2017 by
root via cibadmin on hw00001
Stack: corosync
Current DC: hw00001 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 2 resources configured
st_ipmi_hw00001(stonith:fence_ipmilan):Started hw00002
st_ipmi_hw00002(stonith:fence_ipmilan):Started hw00001
PCSD Status:
hw00001: Online
hw00002: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@hw00001 ~]#
Step 6 On the active and standby nodes, check whether the cluster iBMC IPMI link is connected
properly.
Run the ipmitool -I lanplus -H 192.168.1.232 -U root -P Huawei12#$ chassis power status
command on the active and standby nodes to check the power status of the peer node (replace
the IP address, user name, and password with those of the peer node's iBMC). If the
command output contains "Chassis Power is on", the IPMI link is connected properly.
Active node:
[root@hw00001 ~]# ipmitool -I lanplus -H 126.126.126.16 -U Administrator -P
Admin@9000 chassis power status
Get Device ID command failed: 0xc1 Invalid command
Chassis Power is on
Standby node:
[root@hw00002 ~]# ipmitool -I lanplus -H 126.126.126.14 -U Administrator -P
Admin@9000 chassis power status
Get Device ID command failed: 0xc1 Invalid command
Chassis Power is on
[root@hw00002 ~]#
-U root: Indicates the user name for logging in to the iBMC of the peer node.
-P Huawei12#$: Indicates the password for logging in to the iBMC of the peer node.
Replace it with the actual password.
Step 7 On the active node, run the pcs resource create rsc_ip_SAPHana_S00_HDB00 IPaddr2 \
command to configure the virtual IP address. The virtual IP address and service IP address
must be in the same network segment.
[root@hw00001 ~]# pcs resource create rsc_ip_SAPHana_S00_HDB00 IPaddr2 \
> ip="10.5.1.12" \
> iflabel=0
[root@hw00001 ~]#
rsc_ip_SAPHana_<SID>_HDB00: Indicates the name of the virtual IP address resource. The SID
must be changed based on the actual situation. In this example, the SID is S00.
Step 8 On the active node, run the pcs resource create rsc_SAPHanaTopology_S00_HDB00
SAPHanaTopology \ command to create a clone resource.
[root@hw00001 yum.repos.d]# pcs resource create rsc_SAPHanaTopology_S00_HDB00
SAPHanaTopology \
> SID=S00 \
> InstanceNumber=00 \
> op start timeout=600 \
rsc_SAPHanaTopology_<SID>_HDB00: Indicates the name of the clone resource. The SID must be
changed based on the actual situation. In this example, the SID is S00.
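For reference, the SAPHanaTopology create command in comparable RHEL SAPHanaSR setups usually also defines stop and monitor operations in addition to op start. The following is a sketch of a typical complete invocation; the stop and monitor timeout values here are assumptions, not values taken from this guide:
[root@hw00001 ~]# pcs resource create rsc_SAPHanaTopology_S00_HDB00 SAPHanaTopology \
> SID=S00 \
> InstanceNumber=00 \
> op start timeout=600 \
> op stop timeout=300 \
> op monitor interval=10 timeout=600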
Step 9 On the active node, run the pcs resource clone rsc_SAPHanaTopology_S00_HDB00 \
command to configure clone resource parameters.
[root@hw00001 yum.repos.d]# pcs resource clone rsc_SAPHanaTopology_S00_HDB00 \
> meta is-managed=true clone-node-max=1 target-role="Started" interleave=true
rsc_SAPHanaTopology_<SID>_HDB00: Indicates the name of the clone resource. The SID must be
changed based on the actual situation. In this example, the SID is S00.
Step 10 On the active node, run the pcs resource create rsc_SAPHana_S00_HDB00 SAPHana \
command to create the SAP HANA active and standby resource.
[root@hw00001 ~]# pcs resource create rsc_SAPHana_S00_HDB00 SAPHana \
> SID=S00 \
> InstanceNumber=00 \
> PREFER_SITE_TAKEOVER=true \
> DUPLICATE_PRIMARY_TIMEOUT=7200 \
> AUTOMATED_REGISTER=false \
> op start timeout=3600 \
> op stop timeout=3600 \
> op promote timeout=3600 \
> op demote timeout=3600 \
> op monitor interval=59 role="Master" timeout=700 \
> op monitor interval=61 role="Slave" timeout=700
[root@hw00001 ~]#
rsc_SAPHana_S00_HDB00: Indicates the name of the SAP HANA resource. The SID must
be changed based on the actual situation. In this example, the SID is S00.
Step 11 On the active node, run the pcs resource master msl_rsc_SAPHana_S00_HDB00
rsc_SAPHana_S00_HDB00 \ command to configure the active and standby resource
parameters.
The SID must be changed based on the actual situation. In this example, the SID is S00.
[root@hw00001 hdbclient]# pcs resource master msl_rsc_SAPHana_S00_HDB00
rsc_SAPHana_S00_HDB00 \
> meta is-managed=true notify=true clone-max=2 clone-node-max=1 \
> target-role="Started" interleave=true
[root@hw00001 hdbclient]#
Step 12 On the active node, run the following commands to configure the resource constraint and
resource sequence.
The SID must be changed based on the actual situation. In this example, the SID is S00.
[root@hw00001 ~]# pcs constraint colocation add rsc_ip_SAPHana_S00_HDB00 with
master msl_rsc_SAPHana_S00_HDB00 2000
[root@hw00001 ~]# pcs constraint order rsc_SAPHanaTopology_S00_HDB00-clone then
msl_rsc_SAPHana_S00_HDB00 symmetrical=false
Adding rsc_SAPHanaTopology_S00_HDB00-clone msl_rsc_SAPHana_S00_HDB00 (kind:
Mandatory) (Options: first-action=start then-action=start symmetrical=false)
[root@hw00001 ~]#
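You can review the colocation and order constraints configured above with, for example, the following command, which lists all location, colocation, and order constraints:
[root@hw00001 ~]# pcs constraint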
Step 13 On the active node, run the pcs resource create bond_vip-monitor ethmonitor
interface=bond_vip --clone command to create a clone resource for the upstream service
port.
In this example, bond_vip is the upstream service port. Change it based on the actual
situation.
[root@hw00001 ~]# pcs resource create bond_vip-monitor ethmonitor
interface=bond_vip --clone
Step 14 On the active node, run the pcs constraint location rsc_ip_SAPHana_S00_HDB00 rule
score=-INFINITY ethmonitor-bond_vip ne 1 command to configure resource constraint to
forbid the virtual IP address to run on the node when the upstream service port is faulty.
In this example, rsc_ip_SAPHana_S00_HDB00 is the name of the virtual IP address resource and
bond_vip is the upstream service port. Change them based on the actual situation.
[root@hw00001 ~]# pcs constraint location rsc_ip_SAPHana_S00_HDB00 rule score=-
INFINITY ethmonitor-bond_vip ne 1
Step 15 On the active node, run the following commands to enable the automatic startup function for
pacemaker, corosync, and pcsd.
[root@hw00001 ~]# systemctl enable pcsd.service
[root@hw00001 ~]# systemctl enable corosync.service
[root@hw00001 ~]# systemctl enable pacemaker.service
Step 16 On the active node, run the pcs resource cleanup command to obtain the latest resource
status.
[root@hw00001 ~]# pcs resource cleanup
Step 17 On the active node, run the pcs status command to check the two-node cluster status.
NOTE
It may take about 5 minutes for the output of pcs resource cleanup to be refreshed. Check the two-node
cluster status after the latest resource status is obtained.
[root@hw00001 yum.repos.d]# pcs status
Cluster name: hacluster
WARNING: corosync and pacemaker node names do not match (IPs used in setup?)
Last updated: Mon Oct 16 11:19:36 2017Last change: Mon Oct 16 11:19:21 2017 by
root via cibadmin on hw00001
Stack: corosync
Current DC: hw00001 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 9 resources configured
st_ipmi_hw00001(stonith:fence_ipmilan):Started hw00002
st_ipmi_hw00002(stonith:fence_ipmilan):Started hw00001
rsc_ip_SAPHana_S00_HDB00(ocf::heartbeat:IPaddr2):Started hw00001
Master/Slave Set: msl_rsc_SAPHana_S00_HDB00 [rsc_SAPHana_S00_HDB00]
Masters: [ hw00001 ]
Slaves: [ hw00002 ]
Clone Set: bond_vip-monitor-clone [bond_vip-monitor]
Started: [ hw00001 hw00002 ]
Clone Set: rsc_SAPHanaTopology_S00_HDB00-clone [rsc_SAPHanaTopology_S00_HDB00]
Started: [ hw00001 hw00002 ]
PCSD Status:
hw00001: Online
hw00002: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
----End
Step 1 Log in to the active node as the root user, and run the su - s00adm command to switch to the
HANA database user.
NOTE
Replace s00 in the command with the actual database SID in lowercase.
Step 2 Run the cdpy command as a database user to go to the python directory.
Step 3 On the active node, run the python systemReplicationStatus.py command using a database
account to check the data synchronization status.
If the displayed status is ACTIVE, the database has completed synchronization and remains
in sync mode, and you can perform the database takeover operation.
If the status is Initializing, the database is synchronizing data and is not ready for the
takeover operation.
The following is an example of the command output.
s00adm@hw00001:/usr/sap/POC/HDB00> cdpy
s00adm@hw00001:/usr/sap/S00/HDB00/exe/python_support> python
systemReplicationStatus.py
| Database | Host | Port | Service Name | Volume ID | Site ID | Site Name |
Secondary | Secondary | Secondary | Secondary | Secondary | Replication |
Replication | Replication |
| | | | | | | |
Host | Port | Site ID | Site Name | Active Status | Mode |
Status | Status Details |
| -------- | ---- | ----- | ------------ | --------- | ------- | --------- |
--------- | --------- | --------- | --------- | ------------- | ----------- |
----------- | -------------- |
| SYSTEMDB | hw00001 | 30001 | nameserver | 1 | 1 | hw00001
| hw00002 | 30001 | 2 | hw00002 | YES |
SYNC | ACTIVE | |
| TD1 | hw00001 | 30040 | indexserver | 2 | 1 | hw00001
| hw00002 | 30040 | 2 | hw00002 | YES |
SYNC | ACTIVE | |
mode: PRIMARY
site id: 1
site name: hw00001
s00adm@hw00001:/usr/sap/S00/HDB00/exe/python_support>
Step 4 If data synchronization is not complete, automatic HA cluster takeover will not occur. For
details about manual takeover, see the SAP notes. When the synchronization status of all
services is not Active, SAP does not recommend the takeover operation because this means
that data may be lost.
a. SAP note: 2578019 - Service Crashes in
DataAccess::PersistenceManagerImpl::endOfDataRecovery
Do not perform a takeover if not for all services the REPLICATION_STATUS in
M_SERVICE_REPLICATION shows ACTIVE. See SAP Note 2063657 for details.
https://launchpad.support.sap.com/#/notes/2578019
b. SAP note: 2580302 - Emergency Shutdown of Indexserver Due to Log Position
Inconsistency Upon Takeover
With the fix the takeover in this case can succeed, but since one service wasn't in sync this
implies data loss. You should follow the takeover decision guide of SAP Note 2063657 to
assess if a takeover in this state is a feasible option
https://launchpad.support.sap.com/#/notes/2580302
c. SAP note: 2063657 - SAP HANA System Replication Takeover Decision Guideline
You are advised to determine whether to perform a takeover based on the site requirements.
https://launchpad.support.sap.com/#/notes/2063657
----End
11 OS Lifecycle
13-year lifecycle
A total 13-year support period is provided: 10 years of general and ESPOS support, and
3 years of LTSS.
l For each of the SPs:
– Generally, SPs are released at around a 12-month cadence. When a new SP is
released, the previous SP continues to be supported for about 18 months, providing
enough time for customers to test and migrate to the new SP.
– Each SP, except the last one, has around 18 months of general support and a 12-month
Extended SP Overlay Support (ESPOS) period. After that, 2 years of
LTSS support can be provided by SUSE if the customer purchases LTSS support
in addition to their subscription.
– The last SP receives longer general and ESPOS support than previous SPs, until the
end of the 10th year after the release of the major version.
NOTE
For detailed information about the SLES for SAP lifecycle, visit:
https://scc.suse.com/docs/userguide
13-year lifecycle
– Generally, SPs are released at around a 12-month cadence. When a new SP is
released, the previous SP continues to be supported for about 3.5 years, providing
enough time for customers to test and migrate to the new SP.
– Each SP, except the last one, has around 18 months of general support and a 42-month
Extended SP Overlay Support (ESPOS) period. ESPOS is now included in
the SLES for SAP subscription.
– The last SP receives longer general support than previous SPs, until the end of the
10th year after the release of the major version. A 3-year LTSS period will be
available for the last SP.
– The release plan for SP5 and SP6, including whether these SPs will be released and
when, is subject to change based on the actual situation around the time of SP3/SP4.
NOTE
For detailed information about the SLES for SAP lifecycle, visit:
https://scc.suse.com/docs/userguide
For detailed information about the RHEL for SAP lifecycle, visit:
https://access.redhat.com/support/policy/updates/errata/
https://help.sap.com/doc/eb3777d5495d46c5b2fa773206bbfb46/2.0.00/en-US/
e243909cbb571014a135a7faa61d61f4.html
l If a SLES OS problem occurs, SAP will forward the problem to SUSE.
l If an SAP HANA problem occurs, SAP will handle the problem.
l If a hardware or RHEL OS problem occurs, SAP will forward the problem to Huawei for
analysis and handling. If the problem is an evident hardware or RHEL OS problem, you
can also submit a trouble ticket through the Huawei 400 hotline. Huawei R&D and
maintenance engineers will handle the problem.
Huawei
SUSE
Red Hat
Step 2 On the home page, click the icon in the upper right corner to log in to the SAP website.
Enter the subscription account and password to log in.
Step 3 Click Report an incident. The page for generating a trouble ticket is displayed.
Step 4 Enter any character string in Enter search term and press Enter. Click Contact SAP
Support at the lower right corner.
Step 5 Select a product, for example, SAP ERP, and click Search.
Step 6 Go to the problem description page, fill in the trouble ticket details, and click Submit at the
lower right corner.
l Language: English is preferred.
l Priority: Set this parameter based on the impact on customer services. If an incident has
affected customer services, set the parameter to High or Very high. For a root cause
analysis incident, set the parameter to Medium.
l Subject: Briefly describe the symptom, for example, "database does not respond" or "system hung".
l Component: If the incident is related to the operating system, select BC-OP-LNX.
l Description: Describe the symptom in detail.
After filling in the details, click Submit at the lower right corner.
----End
Step 2 On the Support Tickets page, click Open a new ticket, as shown in Figure 12-3.
Step 3 On the New Support Ticket page, fill in the ticket and submit it.
1. Set Account, Entitlement, and Product, and click Next.
3. Select Platform and Severity, describe the problem in the Description text box, and
click Next.
4. Select a contact method and click Create Support Ticket to create the ticket.
----End
a. Log in to the Red Hat Customer Portal and go to the page for creating a support
case: There are two ways to go to the page.
i. Click Support Cases on the white bar at the top of the page, and then click
Open a New Support Case.
ii. Click Open a Support Case on the Red Hat Customer Portal.
b. Select the product involved.
c. Select a proper version from the list.
d. Enter the short description of the problem to be resolved in Case Name.
After you enter the problem description, the recommended solutions are displayed
on the right. You can check whether the solutions can resolve your problem.
e. Enter the details of the problem in Case Description. This helps the Red Hat
engineer to resolve the problem in a timely manner.
f. You are advised to provide logs or other diagnostic files to help support engineers
quickly resolve your problems. You can click Attach Files under Case Description
to attach a file.
g. Select a support level, which depends on the service level you purchase.
h. Select a proper severity for the problem. For details, see the severity definition.
i. (Optional) Specify a user who has a Red Hat login account to receive the email
notification about the support case.
j. (Optional) Select the support case group for the support case.
k. Click Submit at the bottom.
Fill in all the required fields before submitting the support case. If the Submit
button is unavailable, check whether all required fields are set. After this form is
submitted, your support case will be recorded and the case details page is displayed. On this
page, you can view the case records and add updates in the Case Discussion section. To add an
update, enter the content in the text field and click Post.
At the same time, after you submit the support case, you will receive updates about the
support case by email. You can reply directly to the email (you are advised to remove the
original email content) to update the support case and interact with the Red Hat support
engineers. To add an attachment, log in to the Red Hat Customer Portal and submit the
information on the Case page.
If the problem has been resolved, you can close the support case. After the support case
is closed, you may receive a satisfaction survey email from Red Hat. Please participate in
the survey and provide your comments and suggestions. This will help Red Hat
serve you better in the future.
If the problem persists after the case is closed, you can find the case, post an update, and
then reopen the case.
For details about how to create and manage support cases, see the following document:
How do I open and manage a support case on the Customer Portal?
Online Service
Red Hat offers online services. You can click Support Cases or Open a New Support Case,
and click the chat support button in the upper right corner of the support case list or creation
page to initiate an online service session, as shown in the following figure.
For details about Red Hat online support, see the information at the following addresses:
Red Hat Chat Support: https://access.redhat.com/articles/313583
Red Hat Technical Support reference guide:
https://access.redhat.com/sites/default/files/attachments/chinese-
reference_guide_to_engaging_with_red_hat_support_brochure.pdf
2. In the dialog box, be as specific as you can regarding your area of concern, business
impact and expectations. Click Submit.
A support manager will contact you within 4 hours. Your Red Hat sales representative or
technical account manager (if applicable) can also escalate on your behalf.
News
For notices about product life cycles, warnings, and updates, visit Product Bulletins.
Cases
Learn about server applications at Knowledge Base.
A Appendix 1
Step 1 Open the browser, enter https://192.168.34.112 in the address box, and log in to the MM910
WebUI.
Step 2 Select a compute node and click Network Card Physical Connection Diagram.
Step 4 Select the network ports of two MZ312 cards to view the connections between the NICs and
switch modules. Select different NICs and bind them to different switch modules to improve
network redundancy.
Step 5 Identify the corresponding network port in the OS based on the MAC address of the NIC.
----End
audit-2.8.1-3.el7.x86_64.rpm
audit-libs-2.8.1-3.el7.x86_64.rpm
audit-libs-python-2.8.1-3.el7.x86_64.rpm
autogen-libopts-5.18-5.el7.x86_64.rpm
checkpolicy-2.5-6.el7.x86_64.rpm
clufter-bin-0.77.0-2.el7.x86_64.rpm
clufter-common-0.77.0-2.el7.noarch.rpm
compat-sap-c++-5-5.3.1-10.el7_3.x86_64.rpm
compat-sap-c++-6-6.3.1-1.el7_3.x86_64.rpm
corosync-2.4.3-2.el7_5.1.x86_64.rpm
corosynclib-2.4.3-2.el7_5.1.x86_64.rpm
cpp-4.8.5-28.el7_5.1.x86_64.rpm
dbus-1.10.24-7.el7.x86_64.rpm
dbus-libs-1.10.24-7.el7.x86_64.rpm
dwz-0.11-3.el7.x86_64.rpm
expect-5.45-14.el7_1.x86_64.rpm
fence-agents-all-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-amt-ws-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-apc-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-apc-snmp-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-bladecenter-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-brocade-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-cisco-mds-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-cisco-ucs-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-common-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-compute-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-drac5-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-eaton-snmp-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-emerson-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-eps-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-heuristics-ping-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-hpblade-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ibmblade-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ifmib-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ilo2-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ilo-moonshot-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ilo-mp-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ilo-ssh-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-intelmodular-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ipdu-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-ipmilan-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-kdump-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-mpath-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-rhevm-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-rsa-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-rsb-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-sbd-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-scsi-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-vmware-rest-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-vmware-soap-4.0.11-86.el7_5.2.x86_64.rpm
fence-agents-wti-4.0.11-86.el7_5.2.x86_64.rpm
fence-virt-0.3.2-13.el7.x86_64.rpm
flac-libs-1.3.0-5.el7_1.x86_64.rpm
gcc-4.8.5-28.el7_5.1.x86_64.rpm
gd-2.0.35-26.el7.x86_64.rpm
ghostscript-9.07-28.el7_4.2.x86_64.rpm
ghostscript-fonts-5.50-32.el7.noarch.rpm
glibc-2.17-222.el7.x86_64.rpm
glibc-common-2.17-222.el7.x86_64.rpm
glibc-devel-2.17-222.el7.x86_64.rpm
glibc-headers-2.17-222.el7.x86_64.rpm
gnutls-dane-3.3.26-9.el7.x86_64.rpm
gnutls-utils-3.3.26-9.el7.x86_64.rpm
graphviz-2.30.1-21.el7.x86_64.rpm
gsm-1.0.13-11.el7.x86_64.rpm
gstreamer1-1.10.4-2.el7.x86_64.rpm
ipmitool-1.8.18-7.el7.x86_64.rpm
iptraf-ng-1.1.4-6.el7.x86_64.rpm
kernel-3.10.0-862.2.3.el7.x86_64.rpm
kernel-devel-3.10.0-862.2.3.el7.x86_64.rpm
kmod-kvdo-6.1.0.168-16.el7_5.x86_64.rpm
krb5-libs-1.15.1-19.el7.x86_64.rpm
krb5-workstation-1.15.1-19.el7.x86_64.rpm
libasyncns-0.8-7.el7.x86_64.rpm
libcanberra-0.30-5.el7.x86_64.rpm
libcanberra-gtk2-0.30-5.el7.x86_64.rpm
libcanberra-gtk3-0.30-5.el7.x86_64.rpm
liberation-fonts-common-1.07.2-16.el7.noarch.rpm
liberation-sans-fonts-1.07.2-16.el7.noarch.rpm
libfontenc-1.1.3-3.el7.x86_64.rpm
libgcc-4.8.5-28.el7_5.1.x86_64.rpm
libgomp-4.8.5-28.el7_5.1.x86_64.rpm
libICE-1.0.9-9.el7.x86_64.rpm
libicu-50.1.2-15.el7.x86_64.rpm
libkadm5-1.15.1-19.el7.x86_64.rpm
libmpc-1.0.1-3.el7.x86_64.rpm
libqb-1.0.1-6.el7.x86_64.rpm
libselinux-2.5-12.el7.x86_64.rpm
libselinux-python-2.5-12.el7.x86_64.rpm
libselinux-utils-2.5-12.el7.x86_64.rpm
libsemanage-2.5-11.el7.x86_64.rpm
libsemanage-python-2.5-11.el7.x86_64.rpm
libsepol-2.5-8.1.el7.x86_64.rpm
libSM-1.2.2-2.el7.x86_64.rpm
libsndfile-1.0.25-10.el7.x86_64.rpm
libstdc++-4.8.5-28.el7_5.1.x86_64.rpm
libstdc++-devel-4.8.5-28.el7_5.1.x86_64.rpm
libvpx-1.3.0-5.el7_0.x86_64.rpm
libwsman1-2.6.3-3.git4391e5c.el7.x86_64.rpm
libXaw-1.0.13-4.el7.x86_64.rpm
libXfont-1.5.2-1.el7.x86_64.rpm
libXmu-1.1.2-2.el7.x86_64.rpm
libXpm-3.5.12-1.el7.x86_64.rpm
libXt-1.1.5-3.el7.x86_64.rpm
libyaml-0.1.4-11.el7_0.x86_64.rpm
linux-firmware-20180220-62.git6d51311.el7.noarch.rpm
lm_sensors-3.4.0-4.20160601gitf9185e5.el7.x86_64.rpm
mozilla-filesystem-1.9-11.el7.x86_64.rpm
mpfr-3.1.1-4.el7.x86_64.rpm
net-snmp-libs-5.7.2-33.el7_5.2.x86_64.rpm
net-snmp-utils-5.7.2-33.el7_5.2.x86_64.rpm
nfs-utils-1.3.0-0.54.el7.x86_64.rpm
ntp-4.2.6p5-28.el7.x86_64.rpm
ntpdate-4.2.6p5-28.el7.x86_64.rpm
numactl-2.0.9-7.el7.x86_64.rpm
OpenIPMI-modalias-2.0.23-2.el7.x86_64.rpm
openssl-1.0.2k-12.el7.x86_64.rpm
openssl-libs-1.0.2k-12.el7.x86_64.rpm
openwsman-python-2.6.3-3.git4391e5c.el7.x86_64.rpm
overpass-fonts-2.1-1.el7.noarch.rpm
pacemaker-1.1.18-11.el7_5.2.x86_64.rpm
pacemaker-cli-1.1.18-11.el7_5.2.x86_64.rpm
pacemaker-cluster-libs-1.1.18-11.el7_5.2.x86_64.rpm
pacemaker-libs-1.1.18-11.el7_5.2.x86_64.rpm
PackageKit-glib-1.1.5-2.el7_5.x86_64.rpm
PackageKit-gtk3-module-1.1.5-2.el7_5.x86_64.rpm
patch-2.7.1-10.el7_5.x86_64.rpm
pcs-0.9.162-5.el7_5.1.x86_64.rpm
perl-srpm-macros-1-8.el7.noarch.rpm
perl-Thread-Queue-3.02-2.el7.noarch.rpm
perl-TimeDate-2.30-2.el7.noarch.rpm
pexpect-2.3-11.el7.noarch.rpm
policycoreutils-2.5-22.el7.x86_64.rpm
policycoreutils-python-2.5-22.el7.x86_64.rpm
poppler-data-0.4.6-3.el7.noarch.rpm
pulseaudio-libs-10.0-5.el7.x86_64.rpm
python-clufter-0.77.0-2.el7.noarch.rpm
python-inotify-0.9.4-4.el7.noarch.rpm
python-IPy-0.75-6.el7.noarch.rpm
python-suds-0.4.1-5.el7.noarch.rpm
PyYAML-3.10-11.el7.x86_64.rpm
redhat-rpm-config-9.1.0-80.el7.noarch.rpm
resource-agents-3.9.5-124.el7.x86_64.rpm
resource-agents-sap-hana-3.9.5-124.el7.x86_64.rpm
rpm-4.11.3-32.el7.x86_64.rpm
rpm-build-4.11.3-32.el7.x86_64.rpm
rpm-build-libs-4.11.3-32.el7.x86_64.rpm
rpm-libs-4.11.3-32.el7.x86_64.rpm
rpm-python-4.11.3-32.el7.x86_64.rpm
rsyslog-8.24.0-16.el7_5.4.x86_64.rpm
ruby-2.0.0.648-33.el7_4.x86_64.rpm
rubygem-bigdecimal-1.2.0-33.el7_4.x86_64.rpm
rubygem-io-console-0.4.2-33.el7_4.x86_64.rpm
rubygem-json-1.7.7-33.el7_4.x86_64.rpm
rubygem-psych-2.0.0-33.el7_4.x86_64.rpm
rubygem-rdoc-4.0.0-33.el7_4.noarch.rpm
rubygems-2.0.14.1-33.el7_4.noarch.rpm
ruby-irb-2.0.0.648-33.el7_4.noarch.rpm
ruby-libs-2.0.0.648-33.el7_4.x86_64.rpm
setools-libs-3.3.8-2.el7.x86_64.rpm
sg3_utils-1.37-12.el7.x86_64.rpm
sound-theme-freedesktop-0.8-3.el7.noarch.rpm
startup-notification-0.12-8.el7.x86_64.rpm
subscription-manager-1.20.11-1.el7_5.x86_64.rpm
subscription-manager-plugin-container-1.20.11-1.el7_5.x86_64.rpm
subscription-manager-rhsm-1.20.11-1.el7_5.x86_64.rpm
subscription-manager-rhsm-certificates-1.20.11-1.el7_5.x86_64.rpm
sudo-1.8.19p2-13.el7.x86_64.rpm
tcl-8.5.13-8.el7.x86_64.rpm
telnet-0.17-64.el7.x86_64.rpm
tuned-2.9.0-1.el7.noarch.rpm
tuned-profiles-sap-hana-2.9.0-1.el7.noarch.rpm
unbound-libs-1.6.6-1.el7.x86_64.rpm
urw-fonts-2.4-16.el7.noarch.rpm
vdo-6.1.0.168-18.x86_64.rpm
xcb-util-0.4.0-2.el7.x86_64.rpm
xorg-x11-font-utils-7.5-20.el7.x86_64.rpm
xorg-x11-xauth-1.0.9-1.el7.x86_64.rpm
xulrunner-31.6.0-2.el7_1.x86_64.rpm
yum-utils-1.1.31-45.el7.noarch.rpm
zlib-devel-1.2.7-17.el7.x86_64.rpm