FusionStorage 8.0.1 Block Storage HyperReplication Feature Guide 05
Issue 05
Date 2021-02-05
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://e.huawei.com
Purpose
HyperReplication is the remote replication feature in FusionStorage storage
systems developed by Huawei. This document describes its working principles,
application scenarios, planning and preparation, software installation, as well as
configuration and management.
Intended Audience
This document is intended for:
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Change History
Contents
3 Software Installation
4 Configuring HyperReplication
4.1 Configuration Process
4.2 Checking the License
4.3 Configuring Basic Services
4.4 Creating a Replication Cluster
4.5 Adding a Remote Device
4.6 Creating a Remote Replication Pair
4.7 Creating a Remote Replication Consistency Group
5 Managing HyperReplication
5.1 Managing a Remote Replication Pair
5.1.1 Viewing Information About a Remote Replication Pair
5.1.2 Modifying Properties of a Remote Replication Pair
5.1.3 Synchronizing a Remote Replication Pair
5.1.4 Splitting a Remote Replication Pair
5.1.5 Performing a Primary/Secondary Switchover
5.1.6 Enabling Protection for Secondary Resources
A Appendix
A.1 Modifying the RoCE Configuration File
A.2 Configuring the Replication Network by Modifying the Configuration File
1 Feature Description
1.1 Overview
This section describes the background, definition, and benefits of the
HyperReplication feature.
Background
As digitalization advances across industries, data has become critical to
enterprise operations, and customers impose increasingly demanding
requirements on storage system stability. Even storage devices that offer
extremely high stability cannot prevent irrecoverable damage to production
systems in the event of a natural disaster.
Definition
HyperReplication implements asynchronous remote replication to periodically
synchronize data between the primary and secondary storage systems to support
system DR. This minimizes service performance deterioration caused by the
latency of long-distance data transmission.
Benefits
The HyperReplication feature provides the following benefits:
1.2 Availability
License Requirement
The HyperReplication feature requires a standard license or an advanced license.
Table 1-1 shows their application scenarios.
Version Support
Both the primary and secondary storage systems must run FusionStorage 8.0.0 or
later.
Pair
A pair is the data replication relationship between a primary volume and a
secondary volume. One primary volume in the primary storage system and one
secondary volume in the secondary storage system form a pair.
Consistency Group
A consistency group is a collection of pairs that have a service relationship with
each other. For example, the primary storage system has three primary volumes
that respectively store service data, logs, and change records of a database. If data
on any of the three volumes becomes invalid, the data on all three volumes
becomes unusable. You can create a consistency group for the three pairs to
which these volumes belong. In actual configuration, create the consistency
group first and then manually add the three pairs to it.
Synchronization
Synchronization is the process of copying data from a primary volume to a
secondary volume.
Splitting
Splitting is the process of suspending the remote replication relationship between
primary and secondary volumes.
Primary/Secondary Switchover
A primary/secondary switchover is the process of exchanging the roles of the
primary and secondary volumes in a pair.
Replication Cluster
A replication cluster consists of a replication control cluster and a replication
service cluster:
● A replication control cluster manages cluster nodes and metadata and
contains three, five, seven, or nine nodes. The CCDB process of the replication
control cluster requires local storage space to record metadata information.
● A replication service cluster manages remote replication pairs and consistency
groups and contains 3 to 64 nodes.
1. In normal cases, the primary volume carries data read and write services.
Before data synchronization, data at the primary and secondary sites is
inconsistent.
2. After data synchronization, data at the primary and secondary sites is
consistent. The following describes data replication when the primary volume
becomes faulty.
NOTICE
– If the primary volume becomes faulty and data at the primary and
secondary sites is consistent:
i. Disable secondary resource protection. The secondary volume takes
over data read and write services.
ii. After the primary volume recovers, perform a primary/secondary
switchover to designate the secondary volume as the primary one.
Then enable secondary resource protection and synchronize data.
– If the primary volume becomes faulty and data at the primary and
secondary sites is inconsistent:
i. Disable secondary resource protection. The secondary volume takes
over data read and write services.
ii. After the primary volume recovers, perform a primary/secondary
switchover to designate the secondary volume as the primary one.
Then enable secondary resource protection and synchronize data.
Data to be synchronized from the original primary volume to the
original secondary volume will be lost.
– If the primary volume becomes faulty:
i. The secondary volume directly takes over data read and write
services.
ii. After the primary volume recovers, it takes back the data read and
write services. Data at the primary and secondary sites is
inconsistent.
iii. Enable secondary resource protection and synchronize data. Data
written to the secondary volume when the secondary volume carries
services will be lost.
Before the deployment, plan each network and ensure that the hardware meets
the requirements. Prepare required software packages and tools.
Before configuring HyperReplication, you can plan related data in advance
according to the FusionStorage Block Storage LLD Configuration Template.
2.1 Typical Networking
This section describes network concepts and deployment solutions of
HyperReplication.
2.2 System Requirements
2.3 Required Software Packages and Documentation
NOTE
When deploying the HyperReplication feature, configure IPv4 addresses instead of IPv6
addresses for the following networks.
Management network: Used for system management and maintenance. At least one
GE network port is needed.
If the number of storage nodes is less than or equal to 16, converged deployment
is recommended. If the number of storage nodes is greater than 16, independent
deployment is recommended.
If the independent deployment mode is used, you are advised not to deploy all
nodes of the replication cluster in the same cabinet, and not to deploy all
control nodes of the replication cluster in the same cabinet. This prevents
the entire replication cluster from failing if a cabinet is powered off.
NOTE
Management nodes can be deployed on VMs, external physical servers, or storage nodes.
In the networking schemes provided in this section, the management nodes are deployed
on external physical servers.
Solution 1
Figure 2-1 and Figure 2-2 show the deployment scheme and physical networking,
respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on storage nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are converged.
Solution 2
Figure 2-3 and Figure 2-4 show the deployment scheme and physical networking,
respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on storage nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are deployed
independently.
Solution 3
Figure 2-5 and Figure 2-6 show the deployment scheme and physical networking,
respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on compute nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are converged.
Solution 4
Figure 2-7 and Figure 2-8 show the deployment scheme and physical networking,
respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on compute nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are deployed
independently.
Solution 5
Figure 2-9 and Figure 2-10 show the deployment scheme and physical
networking, respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on compute nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are converged.
Solution 6
Figure 2-11 and Figure 2-12 show the deployment scheme and physical
networking, respectively. In this solution:
Basic networking: Compute nodes and storage nodes are deployed independently,
VBS is deployed on compute nodes, and front-end and back-end storage networks
are converged.
Replication network: The replication service and storage service are deployed
independently.
CPU and memory resources occupied by the replication software:
● Replication and storage nodes deployed on the same servers: Each node
requires 8 GB of extra memory to deploy the replication service.
● Replication nodes deployed on independent servers: Each node requires 20
vCPUs and 64 GB of memory.

Metadata space:
Space used by CCDB: Two partitions are required, each with at least 10 GB of
capacity. It is recommended that each partition have 25 GB of capacity.

Number of control nodes in a replication cluster:
A replication cluster can contain three, five, seven, or nine control nodes
(see the sizing sketch after this list):
● If the number of nodes in the replication service cluster is less than 27,
you are advised to select three control nodes to form the replication cluster.
● If the number of nodes in the replication service cluster is greater than or
equal to 27 but less than 45, you are advised to select five control nodes to
form the replication cluster.
● If the number of nodes in the replication service cluster is greater than or
equal to 45 but less than or equal to 64, you are advised to select seven
control nodes to form the replication cluster.
● A replication control cluster can be configured with a maximum of nine nodes
and can tolerate a maximum of four faulty nodes, improving cluster reliability.
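The sizing rules above map the replication service cluster size to a
recommended number of control nodes. The following shell sketch illustrates
that mapping only; it is a hypothetical planning helper, not a product tool or
command:

#!/bin/bash
# Hypothetical planning helper: recommend the number of replication control
# nodes for a replication service cluster of a given size (3 to 64 nodes).
recommend_control_nodes() {
  local service_nodes=$1
  if (( service_nodes < 3 || service_nodes > 64 )); then
    echo "invalid: a replication service cluster contains 3 to 64 nodes" >&2
    return 1
  elif (( service_nodes < 27 )); then
    echo 3    # fewer than 27 service nodes: three control nodes
  elif (( service_nodes < 45 )); then
    echo 5    # 27 to 44 service nodes: five control nodes
  else
    echo 7    # 45 to 64 service nodes: seven control nodes
  fi
}

recommend_control_nodes 30    # prints 5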
3 Software Installation
NOTE
● If the replication service and storage service are deployed on different nodes, you do not
need to install replication nodes during software installation. Select the replication
nodes when creating a replication cluster.
● If the replication service and storage service are deployed on the same nodes, select a
desired number of storage nodes to run the replication service when creating a
replication cluster.
4 Configuring HyperReplication
Context
You must check the license on both the primary and secondary storage systems.
The HyperReplication feature requires a standard license or an advanced license.
Table 4-1 shows their application scenarios.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Settings > License Management.
The current license information is displayed.
----End
Follow-Up Procedure
If no HyperReplication license is available, apply for and import a license file. For
details about how to apply for and import a license file, see the license operation
guide of the corresponding product model.
NOTE
It is recommended that the configurations of storage pools and volumes in primary and
secondary storage systems be the same.
The sizes of volumes in the primary and secondary storage systems must be the same.
Context
A replication cluster consists of a replication control cluster and a replication
service cluster:
● A replication control cluster manages cluster nodes and metadata and
contains three, five, seven, or nine nodes. The CCDB process of the replication
control cluster requires local storage space to record metadata information.
● A replication service cluster manages remote replication pairs and consistency
groups and contains 3 to 64 nodes.
Prerequisites
The IP address of the port that connects each replication cluster node to the
replication network has been obtained. To obtain the IP address, perform the
following operations:
Procedure
Step 1 Log in to DeviceManager.
NOTE
▪ The size of the metadata disk must be greater than 105 GB.
▪ If a system disk partition is used as the replication service metadata disk, the
system disk must meet the following requirements:
○ If the number of pools is less than or equal to 4, the system disk is a SAS
disk or SSD.
○ If the number of pools is greater than 4, the system disk is an SSD.
▪ If you select Physical Disk, specify the disk type and slot selection
mode.
d. Click Submit.
e. Wait until the creation is successful, and then click Next.
Step 5 Create a replication service cluster.
1. Select the nodes to create the replication service cluster.
2. Click Submit.
3. Wait until the creation is successful, and then click Next.
Step 6 Configure the replication network.
NOTE
The system supports two methods for configuring the replication network. Method 2 is
recommended.
● Method 1: Modify the network port configuration file to configure the replication
network in advance (for details, see A.2 Configuring the Replication Network by
Modifying the Configuration File), and then read the existing storage network IP
address by following the instructions in this section.
● Method 2: Configure the replication network by following the instructions provided in
this section.
2. Click Preview.
The system will query the replication IP address assigned to each replication
node.
3. Click Submit.
4. Click Next.
NOTE
Log in to a replication node and run the ifconfig command. If the replication node does not
have an IP address for the replication network, click next to the replication node and
choose Modify to configure an IP address.
In the scenario of interconnecting with FusionCompute, you are not allowed to modify the
configuration information by clicking next to the replication node because the network
configuration of the remote replication port has been completed on FusionCompute.
----End
Follow-Up Procedure
If different replication ports on a replication node use IP addresses on the same
network segment, you must configure policy-based routes for these ports.
Otherwise, these ports cannot be accessed.
1. Log in to the replication node using its management IP address.
2. Configure route tables.
NOTE
The value range of the route priority is from 0 to 255. Check the priorities of existing
routes written in the /etc/iproute2/rt_tables file, and ensure that the priority values
of their routing tables do not conflict.
For example, run the following two commands to configure route table
rep_eth0 with priority 200 for eth0 and configure route table rep_eth1 with
priority 201 for eth1. Configure this parameter as required.
[root@localhost ~]# echo "200 rep_eth0" >> /etc/iproute2/rt_tables
[root@localhost ~]# echo "201 rep_eth1" >> /etc/iproute2/rt_tables
NOTE
If the data returned by the replication plane can be received, the
policy-based routing configuration has taken effect. Otherwise, check whether
the default gateway configuration of the management plane and the
policy-based routing configuration of the replication plane are correct.
Example verification commands are provided after this procedure.
iii. Repeat 5.i and 5.ii to write the policy-based routing configurations of
other network adapters to the configuration file.
– For SUSE:
i. Run the vi /etc/sysconfig/network/ifroute-eth0 command to edit
the configuration file. Enter the following information and save the
file:
ip route flush table rep_eth0
ip route add 192.168.10.0/24 dev eth0 src 192.168.10.31 table rep_eth0
ip route add default dev eth0 via 192.168.10.1 table rep_eth0
ip rule del table rep_eth0
ip rule add from 192.168.10.31 table rep_eth0
ii. Modify the permission on the policy-based routing file and write it to
the eth0 network configuration file.
[root@localhost ~]# chmod +x /etc/sysconfig/network/ifroute-eth0
[root@localhost ~]# echo "POST_UP_SCRIPT=\`/etc/sysconfig/network/ifroute-eth0\`"
>> /etc/sysconfig/network/ifcfg-eth0
iii. Repeat 5.i and 5.ii to write the policy-based routing configurations of
other network adapters to the configuration file.
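To perform the verification mentioned in the note above, you can, for example,
list the rules and routes of each replication table and ping a peer node from
the local replication IP address. The addresses below are illustrative;
192.168.10.32 stands for a peer replication IP address:

[root@localhost ~]# ip rule list | grep rep_eth0
[root@localhost ~]# ip route show table rep_eth0
[root@localhost ~]# ping -c 3 -I 192.168.10.31 192.168.10.32

If the ping replies are received, replication plane traffic is using the
policy-based routes.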
Prerequisites
The local and remote storage systems communicate properly.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Select the cluster for which you want to add a remote device, click , and select
Add Remote Device.
----End
Prerequisites
● The remote replication license is valid.
● The local and remote storage systems support the remote replication
function.
● The remote volume of the remote replication pair is not mapped to the host.
● The local volume is not configured with features that are mutually exclusive
with the remote replication function.
● The remote resource ID has been obtained. Obtain the remote resource ID:
Log in to the remote device, choose Services > Block Service > Volume, click
, select the desired resource ID, and check the ID of the remote resource
whose capacity is the same as the local resource capacity.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Click Create.
The Create Remote Replication Pair dialog box is displayed.
Step 4 Configure basic information about the remote replication pair. Table 4-7 describes
related parameters.
Step 5 Select the local and remote resources of the remote replication pair.
1. Select a replication cluster in Local Device and Remote Device respectively.
2. In the Local Resource pane, select the desired local resource and enter the
remote resource ID.
NOTE
Log in to the remote device, choose Services > Block Service > Volume, click ,
select the desired resource ID, and check the ID of the remote resource.
3. Click Synchronization Parameters and set advanced properties of the remote
replication pair. Table 4-8 describes related parameters. Then click OK.
Prerequisites
● Role of the remote replication pair to be added to a consistency group is
Primary.
Precautions
If a remote replication pair is not in split status, the system will split the remote
replication pair first and then add it to a consistency group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 4 Set the basic information about the consistency group. Table 4-9 describes related
parameters.
Step 5 Optional: Add a remote replication pair to the remote replication consistency
group.
1. Select Add Remote Replication Pair.
2. In the Available Remote Replication Pairs pane, select one or more remote
replication pairs and click to add the selected objects to the Selected
Remote Replication Pairs pane.
Step 6 Click Advanced to set advanced properties of the remote replication consistency
group. Table 4-10 describes related parameters.
----End
5 Managing HyperReplication
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 View information about an existing remote replication pair. Table 5-1 describes
related parameters.
Step 4 Optional: Click to view the local resource name, remote resource name,
recovery policy, speed, start and end time of synchronization, and
synchronization progress of each remote replication pair.
----End
Precautions
The properties of a remote replication pair which has been added to a consistency
group cannot be modified.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the desired remote replication pair, click on the right, and select Modify.
The Modify page is displayed on the right.
Step 4 Modify properties of the remote replication pair. Table 5-2 describes related
parameters.
----End
Prerequisites
● The remote replication pair for which you want to perform data
synchronization has not been added to a consistency group.
● Secondary resource protection has been enabled.
● Running Status of the remote replication pair supports synchronization.
Table 5-3 shows in which status a remote replication pair can be
synchronized.
Running Status        Synchronization Supported
Normal                √
Split                 √
To be recovered       √
Synchronizing         ×
Invalid               ×
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the remote replication pair that you want to synchronize and click
Synchronize.
NOTE
You can also click on the right of the remote replication pair and select Synchronize.
----End
Prerequisites
● The remote replication pair that you want to split has not been added to a
consistency group.
● Running Status of the remote replication pair supports splitting. Table 5-4
shows in which status a remote replication pair can be split.
Running Status        Splitting Supported
Normal                √
Split                 ×
To be recovered       √
Synchronizing         √
Invalid               ×
√: Supported
×: Not supported
Precautions
If the initial synchronization of a remote replication pair is not completed, splitting
the remote replication pair may cause the unavailability of secondary resources.
Therefore, exercise caution when performing this operation.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the remote replication pair that you want to split and click Split.
NOTE
You can also click on the right of the remote replication pair and select Split.
----End
Prerequisites
● The remote replication pair on which you want to perform a primary/secondary
switchover has not been added to a consistency group.
● A primary/secondary switchover can be performed on an asynchronous
remote replication pair when the following requirements are met:
Running Status of the pair is set to Split, Secondary Resource Data Status
is set to Consistent, and Secondary Resource Protection is set to Disable.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the desired remote replication pair, click , and select Primary/Secondary
Switchover.
----End
Prerequisites
Running Status and Secondary Resource Data Status of the remote replication
pair support protection enablement. Table 5-5 shows in which status protection
for secondary resources in a remote replication pair can be enabled.
Table 5-5 Status requirements for enabling protection for secondary resources in a
remote replication pair
Running Status        Secondary Resource Data Status        Protection Enablement Supported
Split                 Consistent                            √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the desired remote replication pair, click on the right, and select Enable
Protection for Secondary Resource.
----End
Prerequisites
Running Status and Secondary Resource Data Status of the remote replication
pair support protection disablement. Table 5-6 shows in which status protection
for secondary resources in a remote replication pair can be disabled.
Table 5-6 Status requirements for disabling protection for secondary resources in
a remote replication pair
Running Status        Secondary Resource Data Status        Protection Disablement Supported
Split                 Consistent                            √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the desired remote replication pair, click on the right, and select Disable
Protection for Secondary Resource.
----End
Prerequisites
● The remote replication pair to be deleted has not been added to a consistency
group.
● Running Status of the remote replication pair supports deletion. Table 5-7
shows in which status a remote replication pair can be deleted.
Running Status        Deletion Supported
Normal                ×
Split                 √
Interrupted           √
To be recovered       ×
Synchronizing         ×
Invalid               √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication Pair.
Step 3 Select the desired remote replication pair and click Delete.
NOTE
You can also click on the right of the remote replication pair and select Delete.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 View the basic information of an existing remote replication consistency group.
Table 5-8 describes related parameters.
----End
Precautions
After the properties of a remote replication consistency group are modified,
the properties of the pairs in the consistency group remain unchanged.
However, the pairs work according to the properties of the consistency group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the remote replication consistency group you want to modify, click on the
right, and select Modify.
The Modify page is displayed on the right.
Step 4 Modify properties of the remote replication consistency group. Table 5-9 describes
related parameters.
----End
Prerequisites
● This operation can be performed only on the primary device of a remote
replication consistency group.
● Secondary resource protection has been enabled.
● Running Status of the remote replication consistency group supports
synchronization. Table 5-10 shows in which status a remote replication
consistency group can be synchronized.
Running Status        Synchronization Supported
Normal                √
Split                 √
To be recovered       √
Invalid               ×
Synchronizing         ×
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the remote replication consistency group you want to synchronize and click
Synchronize.
NOTE
You can also click on the right of the remote replication consistency group and select
Synchronize.
----End
Precautions
Running Status of the remote replication consistency group supports splitting.
Table 5-11 shows in which status remote replication pairs in a consistency group
can be split.
Running Status        Splitting Supported
Normal                √
Split                 ×
To be recovered       √
Invalid               ×
Synchronizing         √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the remote replication consistency group you want to split and click Split.
NOTE
You can also click on the right of the remote replication consistency group and select
Split.
----End
Prerequisites
Running Status of the remote replication consistency group and Secondary
Resource Data Status of remote replication pairs in the consistency group
support a primary/secondary switchover. Table 5-12 shows in which status a
primary/secondary switchover can be performed for a remote replication
consistency group.
Running Status        Secondary Resource Data Status        Switchover Supported
Normal                Consistent                            ×
Normal                Inconsistent                          ×
Split                 Consistent                            √
Split                 Inconsistent                          ×
Interrupted           Consistent                            ×
Interrupted           Inconsistent                          ×
To be recovered       Consistent                            ×
To be recovered       Inconsistent                          ×
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the desired remote replication consistency group, click on the right, and
select Primary/Secondary Switchover.
----End
Prerequisites
Running Status of the remote replication consistency group and Secondary
Resource Data Status of remote replication pairs in the consistency group
support protection enablement. Table 5-13 shows in which status protection for
secondary resources in a remote replication consistency group can be enabled.
Table 5-13 Status requirements for enabling protection for secondary resources in
a remote replication consistency group
Running Status        Secondary Resource Data Status        Protection Enablement Supported
Split                 Consistent                            √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the desired remote replication consistency group, click on the right, and
select Enable Protection for Secondary Resource.
----End
Prerequisites
Running Status of the remote replication consistency group and Secondary
Resource Data Status of remote replication pairs in the consistency group
support protection disablement. Table 5-14 shows in which status protection for
secondary resources in a remote replication consistency group can be disabled.
Table 5-14 Status requirements for disabling protection for secondary resources in
a remote replication consistency group
Running Status        Secondary Resource Data Status        Protection Disablement Supported
Split                 Consistent                            √
√: Supported
×: Not supported
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the desired remote replication consistency group, click on the right, and
select Disable Protection for Secondary Resource.
----End
Prerequisites
● Replication Mode of the remote replication pair to be added to a consistency
group is the same as that of the consistency group.
● Role of the consistency group to which the remote replication pair is to be
added is Primary.
● Role of the remote replication pair to be added to a consistency group is
Primary.
● The number of remote replication pairs in a consistency group has not
reached the upper limit.
Precautions
● If a consistency group is in split status, when a remote replication pair is
added, the system will split the remote replication pair first and then add it
into the consistency group.
● If a consistency group is not in split status, when a remote replication pair is
added, the system will split the consistency group first, then split the remote
replication pair, add it into the group, and synchronize the consistency group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the desired remote replication consistency group, click on the right, and
select Add Remote Replication Pair.
Step 4 In the pair list on the left, select one or more remote replication pairs and click
to add the selected pairs to the pair list on the right.
----End
Prerequisites
● The consistency group has been split.
● Role of the consistency group from which the remote replication pair is to be
removed is Primary.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the desired remote replication consistency group, click on the right, and
select Remove Remote Replication Pair.
The Remove Remote Replication Pair page is displayed on the right.
Step 4 In the pair list on the left, select one or more remote replication pairs and click
to add the selected pairs to the pair list on the right.
Step 5 Click OK.
Confirm your operation as prompted.
----End
Prerequisites
Running Status of the remote replication consistency group supports deletion.
Table 5-15 shows in which status a remote replication consistency group can be
deleted.
Running Status        Deletion Supported
Normal                ×
Split                 √
Interrupted           √
To be recovered       ×
Synchronizing         ×
Invalid               √
√: Supported
×: Not supported
Precautions
● Deleting a remote replication consistency group deletes the consistency
group configuration information stored on both the local and remote storage
devices.
● If a remote replication link fails, you must forcibly delete the consistency
group from the local and remote devices respectively.
● After a consistency group is deleted, you cannot centrally manage the remote
replication pairs in the group.
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Remote Replication > Remote Replication
Consistency Group.
Step 3 Select the remote replication consistency group you want to delete and click
Delete.
NOTE
You can also click on the right of the remote replication consistency group and select
Delete.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 3 Select the desired replication cluster, click on the right, and select Expand
Capacity.
2. Click Preview.
3. Click Submit.
Step 6 Wait until the submission is successful, and then click Cancel.
----End
Prerequisites
The number of nodes to be removed from a replication cluster is greater than or
equal to 4.
Procedure
Step 1 Log in to DeviceManager.
Step 3 Click on the right of the desired replication cluster and select Reduce Capacity.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 3 In the Remote Device pane, select the desired remote device, click , and select
Delete.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 3 Select the desired cluster, click , and select Configure Cluster Service
Credential.
Step 4 Set Pre-Shared Key Label, Pre-Shared Key, and Confirm Pre-Shared Key.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 3 Select a replication cluster, click on the right, and select Change Name.
----End
Procedure
Step 1 Log in to DeviceManager.
Step 2 Choose Data Protection > Replication Cluster.
Step 3 Click on the right of the desired replication cluster and select Delete.
Step 4 Confirm your operation as prompted.
----End
A Appendix
Step 3 After the modification is complete, press Esc to exit the editing mode and
enter :wq! to save the modification and exit.
Step 4 Restart the node where the NIC is located for the modification to take effect.
----End
Prerequisites
● You have planned two replication plane IP addresses.
● You have obtained the password of user root for logging in to the server.
● You have obtained the management plane IP address of the server.
Procedure
Step 1 Open a browser on the local PC, enter http://BMC IP address of the server in the
address bar, and press Enter to go to the login page.
Step 2 On the login page that is displayed, enter the user name and password to log in
to the BMC system.
The default BMC user name is root and the default password is Huawei@123.
For some Huawei servers, you need to choose Remote Control > Remote Virtual Console
(sharing mode) to open the remote control window.
Step 5 Run the following commands to go to the path where the configuration file is
saved:
● For Red Hat, Oracle, CentOS, and distributed storage OS, run cd /etc/sysconfig/network-scripts.
● For SUSE, run the cd /etc/sysconfig/network command.
Step 6 Use the vi editor to open the configuration file for modifying network port
information. Table A-2 provides an example.
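As an illustration only, a Red Hat-style configuration file for a replication
port, such as /etc/sysconfig/network-scripts/ifcfg-eth0, typically contains
entries like the following. The device name and addresses are placeholders and
must match the IP addresses planned for your replication network:

DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.31
NETMASK=255.255.255.0
ONBOOT=yes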
Step 7 Run the following commands to make the configuration take effect, and
then run the ifconfig command to check whether the configuration has taken
effect. If the IP addresses configured in the preceding step are displayed in
the command output, the configuration has taken effect.
ifdown eth0; ifup eth0
ifdown eth1; ifup eth1
ifconfig
----End