Solution for Databases: Reference Architecture for Oracle RAC Database 12c with GAD Using HDID
By Amol Bhoite
August 2019
Feedback
Hitachi Vantara welcomes your feedback. Please share your thoughts by sending an email message to
[email protected]. To assist the routing of this message, use the paper number in the subject and the title
of this white paper in the text.
Contents
Solution Design
Storage Architecture
Server and Application Architecture
SAN Architecture
Network Architecture
Global-active Device Setup Pre-configuration
Solution Implementation
Deploy the Solution
Solution Execution
Benefits of using HDID Versus Using Manual Commands for Global-active Device Setup and Configuration
Engineering Validation
Test Methodology
Test Results
Use this reference architecture guide to design a solution with Hitachi Data Instance Director (HDID) to protect Hitachi
Unified Compute Platform for non-multitenant Oracle Database 12c. This solution is for Oracle Real Application Clusters on
Extended Distance (Stretched) clusters in a two-site environment using global-active device in Hitachi Virtual Storage
Platform.
This guide explains how to use HDID to deploy global-active device to add backup and recovery capabilities in an Oracle environment and achieve a zero recovery point objective (RPO) and recovery time objective (RTO). Use global-active device in a two-site replication environment with Virtual Storage Platform storage to provide data protection for Oracle Database. This guide also explains how to use HDID to perform an automated two-datacenter swap of global-active device replication on demand and automated recovery of global-active device replication in an error or suspended state.
This Hitachi Unified Compute Platform CI architecture for Oracle Database is engineered, pre-tested, and qualified to
provide predictable performance and the highest reliability in demanding, dynamic Oracle environments. This solution is
validated to ensure consistent, predictable results.
This proven solution optimizes your Oracle database environment, and integrates servers, storage systems, network, and
storage software. This provides reliability, high availability, scalability, and performance while processing small-scale to
large-scale OLTP workloads. The dedicated servers run Oracle Database 12c Release 2 with the Oracle Real Application
Cluster option. The operating system is Red Hat Enterprise Linux 7.6.
Tailor your implementation of these best practices to meet your specific data backup and recovery needs.
The practices in this guide are valid for all storage systems that support global-active device and are not limited to the
storage environment used to validate these best practices.
This reference architecture document is for you if you are in one of the following roles:
Database administrator
Storage administrator
IT professional with the responsibility of planning and deploying an Oracle Database solution
To use this reference architecture guide, you need familiarity with the following:
Note — Testing of this configuration was in a lab environment. Many things affect production environments beyond
prediction or duplication in a lab environment. Follow the recommended practice of conducting proof-of-concept testing
for acceptable results in a non-production, isolated test environment that otherwise matches your production
environment before your production implementation of this solution.
Solution Overview
This reference architecture implements Hitachi Unified Compute Platform CI for Oracle Real Application Clusters on
Extended Distance clusters on four nodes using Hitachi Virtual Storage Platform G900. This environment addresses the
high availability, performance, and scalability requirements for OLTP and OLAP workloads. Your solution implementation
can be tailored to meet your specific needs.
Continuous application availability in traditional and cloud designs requires continuous storage. This solution uses the
unique Hitachi Storage Virtualization Operating System (SVOS) and enterprise-class Hitachi Virtual Storage Platform G-
series systems for the following:
Configuring Oracle Real Application Clusters on extended distance with global-active device allows you to create and
maintain synchronous, remote copies of data volumes on Hitachi Virtual Storage Platform F or VSP G series storage.
Business Benefits
This reference architecture provides the following benefits:
Continuous server I/O when an unplanned outage, such as disaster or hardware failure, prevents access to a data
volume of the database
Automated configuration of global-active device pairs and quick recovery of pairs in an error or suspended state using the web-based HDID UI, without requiring knowledge of manual Hitachi HORCM configuration
Automated pause, resume, two-datacenter replication swap, dissociate, revert, teardown, and delete operations for global-active device pairs using HDID for planned outages
Easy-to-understand global-active device internal operations through informative HDID log messages, which help you quickly identify problems and complete troubleshooting
The configuration of Virtual Storage Platform G900 and Hitachi Advanced Server DS220 has the following characteristics:
Figure 1
To avoid any performance impact to the production database, Hitachi Vantara recommends using a configuration with the
following:
Note — In the lab environment the management server setup was configured at Site 3. In the customer environment,
the management server can be configured at Site 1 or Site 2.
Hitachi Advanced Server DS220 servers (quantity: 4)
2 × Intel Xeon Gold 6140 CPU @ 2.30 GHz
768 GB (64 GB × 12) DDR4 DIMM, Synchronous Registered (Buffered), 2666 MHz
BIOS: 3A10.H8; BMC: 4.23.06; CPLD: 10
Firmware: 6.02, 0x80003620, 1.1747.0
Firmware: 11.4.204.34
Hitachi Advanced Server DS120 server (quantity: 2)
2 × Intel Xeon Silver 4110 processor, 8-core, 2.1 GHz, 85 W
8 × 32 GB DDR4 R-DIMM, 2666 MHz (256 GB total)
1 × 64 GB SATADOM
BIOS: 3A10.H8; BMC: 4.23.06; CPLD: 10
Firmware: 5.51, 0x80002bca, 1.1568.0
Firmware: 11.2.156.27
Brocade G620 Fibre Channel switches (quantity: 4)
48-port Fibre Channel switch, 16 Gbps SFPs
Kernel: 2.6.34.6; Fabric OS: v8.2.0b
VMware ESXi, version 6.7.0 build 10302608: ESXi for management nodes
Oracle Enterprise Manager Cloud Control 13c plug-ins, 13c Release 2: Hitachi Storage and Server OEM plug-ins
Virtual SVP (vSVP), microcode dependent: storage management software
This solution uses Virtual Storage Platform F900/G900, which supports Oracle Real Application Clusters.
Global-active device enables you to create and maintain synchronous, remote copies of data volumes. A virtual storage
machine is configured in the primary and secondary storage systems using the actual information of the primary storage
system, and the global-active device primary and secondary volumes are assigned the same virtual LDEV number in the
virtual storage machine. This enables the host to see the pair volumes as a single volume on a single storage system, and
both volumes receive the same data from the host.
A quorum disk, which can be located in a third and external storage system or in an iSCSI-attached host server, is used to
monitor the global-active device pair volumes. The quorum disk acts as a heartbeat for the global-active device pair, with
both storage systems accessing the quorum disk to check on each other.
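HDID creates and manages these pairs automatically. For orientation only, the following is a minimal sketch of the equivalent manual Hitachi Command Control Interface (CCI) steps that HDID removes the need for; the HORCM device group name (oraHA), the example LDEV entry, and the quorum disk ID 0 are hypothetical placeholders rather than values from this environment.

    # Sketch only; assumes HORCM instances are already defined and started on both sites,
    # with a device group "oraHA" that maps the Oracle VVols on each VSP G900.
    # A HORCM_LDEV entry would look similar to:
    #   oraHA  oradata01  443230  10:00  0
    export HORCMINST=0

    # Create the global-active device pairs; -jq names the quorum disk ID on the
    # external storage system and "-f never" sets the fence level.
    paircreate -g oraHA -f never -vl -jq 0

    # Monitor pair status until every volume reports PAIR.
    pairdisplay -g oraHA -fxce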
vCenter
HDCA Probe
Other management applications may be installed on additional virtual machines depending on customer needs and
requirements.
These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing
aggregates the I/O paths, creating a new device that consists of the aggregated paths.
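On the Red Hat Enterprise Linux 7.6 hosts used in this solution, Device Mapper Multipath is one way this aggregation is presented to the operating system. The following sketch assumes the device-mapper-multipath package and uses a device name (mpathfa) matching the naming shown later in Table 8; it is an illustration, not the validated configuration.

    # Enable multipathing with a default /etc/multipath.conf and start multipathd.
    yum -y install device-mapper-multipath
    mpathconf --enable --with_multipathd y

    # List an aggregated device; healthy paths report "active ready running",
    # the state referenced in the failover validation later in this guide.
    multipath -ll /dev/mapper/mpathfa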
Use Hitachi Infrastructure Analytics Advisor to register resources (storage systems, hosts, servers, and volumes), and set
service-level thresholds. You are alerted to threshold violations and possible performance problems (bottlenecks). Using
analytics tools, you find which resource has a problem and analyze its cause to help solve the problem. The Infrastructure
Analytics Advisor ensures the performance of your storage environment based on real-time SLOs.
For Hitachi Advanced servers, it provides visibility into the components, including their status, health, and attributes. In
addition, the adapter supplies information about any Oracle database instances running on the servers. Both RAC and non-
RAC databases are supported.
Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database. It is based on a comprehensive
high-availability stack that can be used as the foundation of a database cloud system, as well as a shared infrastructure.
This ensures high availability, scalability, and agility for any application.
Oracle Automatic Storage Management (Oracle ASM) is a volume manager and a file system for Oracle database files.
This supports single-instance Oracle Database and Oracle Real Application Clusters configurations. Oracle ASM is the
recommended storage management solution that provides an alternative to conventional volume managers, file systems,
and raw devices.
Oracle Enterprise Manager is a complete cloud lifecycle management solution. It includes self-service provisioning balanced against centralized, policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environments from applications to disk.
This solution uses Oracle Enterprise Manager Cloud Control, version 13c release 2. This allows you to use these cloud
management features:
VMware ESXi
VMware ESXi is the next-generation hypervisor, providing a new foundation for virtual infrastructure. This innovative
architecture operates independently from any general-purpose operating system, offering improved security, increased
reliability, and simplified management.
vCenter Appliance
The vCenter Server Appliance is a preconfigured Linux virtual machine, which is optimized for running VMware vCenter
Server and the associated services on Linux.
vCenter Server Appliance comes as an Open Virtualization Format (OVF) template. The appliance is imported to an ESXi
host and configured through the web-based interface. It comes pre-installed with all the components needed to run a
vCenter Server, including vCenter SSO (Single Sign-on), Inventory Service, vSphere Web Client, and the vCenter Server
itself.
Brocade Switches
Brocade and Hitachi Vantara partner to deliver storage networking and data center solutions. These solutions reduce
complexity and cost, as well as enable virtualization and cloud computing to increase business agility.
SAN switches are optional and direct connect is also possible under certain circumstances, but customers should check
the support matrix to ensure support prior to implementation.
The solution uses the Brocade G620, 48 port Fibre Channel switch.
Cisco Switches
The Cisco Nexus Switch product line provides a series of solutions that can make it easier to connect and manage
disparate data center resources with software-defined networking (SDN). Leveraging the Cisco Unified Fabric, which
unifies storage, data and networking (Ethernet/IP) services, the Nexus Switches create an open, programmable network
foundation built to support a virtualized data center environment.
Solution Design
This describes the reference architecture environment to implement Hitachi Unified Compute Platform CI for Oracle Real
Application Clusters on Extended Distance clusters on four nodes using Hitachi Virtual Storage Platform. The environment
used for testing and validation of this solution used Hitachi Virtual Storage Platform G900.
Site 1
Oracle RAC Servers — Two server nodes were configured in an Oracle Real Application Cluster.
Storage System — There are VVOLs mapped to each port that are presented to the server as LUNs.
SAN Connections — There are SAN connections to connect the Fibre Channel HBA ports to the storage
through Brocade G620 switches.
Site 2
Oracle RAC Servers — Two server nodes were configured in an Oracle Real Application Cluster.
Storage System — There are VVols mapped to each port that are presented to the server as LUNs.
SAN Connection — There are SAN connections to connect the Fibre Channel HBA ports to the storage through
Brocade G620 switches.
Site 3
Quorum Site
Storage System — The Hitachi Virtual Storage Platform G350 used as the quorum device had an LDEV mapped to two ports and presented as an external volume to the Virtual Storage Platform G900 at Site 1 and at Site 2.
Note — Testing used a separate Hitachi Virtual Storage Platform G350 storage system for the quorum device. When
implementing this, you may use any other supported storage system.
Testing used a quorum disk located in a third storage system to monitor the global-active device pair volumes. A global-active device active-active configuration without a quorum disk is also supported with the latest SVOS version.
A proxy node virtual machine that manages and monitors global-active device pair operations is required for the P-VOLs only.
SAN Connection — Each 16 Gb/sec Fibre Channel HBA port was connected to the storage front-end ports through a
switched SAN fabric.
Storage Architecture
This describes the storage architecture for this solution.
Storage Configuration
The storage configuration takes into consideration Hitachi Vantara best practices for Hitachi Virtual Storage Platform and Oracle recommended best practices for the design and deployment of database storage.
The high-level storage configuration diagram for this solution is shown in Figure 2.
Figure 2
Table 4 shows the storage pool configuration used for this solution. In the current configuration OS and Oracle LDEVs are
in different storage pools; however, users can create a single pool for OS and Oracle LDEVs.
Number of drives: 4 / 60 / 4 / 60
Number of pool volume LDEVs: 1 / 64 / 1 / 64
On Site 3 VSP G350 storage there is an additional RAID group consisting of four 6 TB 7.2 krpm SAS drives configured as
RAID-10 (2D+2D).
This is used as shared storage for the management server cluster and for the quorum device. A single 6 TB LUN is mapped to four storage ports for the management server, and a 20 GB LDEV is used as the quorum device. Additional LUNs can
be mapped if required. While the test environment was configured using a dedicated SAS RAID group for the management
server cluster, this can be configured as a dedicated SSD RAID group, a dedicated HDP pool, or it can use capacity on the
HDP pool configured for the Oracle environment depending on customer requirements.
Database Layout
The database layout design uses recommended best practices from Hitachi Vantara for Hitachi Virtual Storage Platform
G900 for small random I/O traffic, such as OLTP transactions. The layout also takes into account the Oracle ASM best
practices when using Hitachi storage. Base the storage design for database layout needs on the requirements of a specific
application implementation. The design can vary greatly from one implementation to another based on the RAID
configuration and number of drives used during the implementation. The components in this solution set have the flexibility
for use in various deployment scenarios to provide the right balance between performance and ease of management for a
given scenario.
Data and Indexes Tablespace — Assign an ASM diskgroup with external redundancy for the data and index
tablespaces.
TEMP Tablespace — Place the TEMP tablespace in this configuration in the Data ASM diskgroup.
Undo Tablespace — Create an UNDO tablespace in this configuration within the Oracle Data ASM diskgroup. Assign
one UNDO tablespace for each node in the Oracle RAC environment.
Online Redo Logs — Create an ASM diskgroup with external redundancy for Oracle online redo logs.
Oracle Cluster Registry and Voting Disk — Create an ASM diskgroup with normal redundancy to contain the OCR
and voting disks and to protect against single disk failure to avoid loss of cluster availability. Place each of these files in
this configuration in the OCR ASM diskgroups.
Database Block Size Settings — Set the database block size to 8 KB.
ASM File System I/O Settings — Set the Oracle ASM I/O operations for database files as follows: FILESYSTEMIO_OPTIONS = setall. (A disk group creation sketch follows this list.)
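The disk group layout described in this list might be created as follows. This is a minimal sketch run with SQL*Plus against an ASM instance; the disk group names (DATA, REDO, OCRVOTE) and the /dev/mapper device paths are illustrative assumptions consistent with the layout above, not a transcript of this validated environment.

    # Illustrative sketch; run as the grid infrastructure owner on one RAC node.
    export ORACLE_SID=+ASM1
    sqlplus -S / as sysasm <<'SQL'
    -- Data, index, TEMP, and UNDO tablespaces use an external-redundancy disk group.
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/mpathfa', '/dev/mapper/mpathfb';

    -- Online redo logs in their own external-redundancy disk group.
    CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
      DISK '/dev/mapper/mpathga';

    -- OCR and voting disks use normal redundancy to survive a single disk failure.
    CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
      DISK '/dev/mapper/mpathha', '/dev/mapper/mpathhb', '/dev/mapper/mpathhc';
    SQL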
Table 6 shows the Oracle RAC Database Settings.
Setting Value
DB_BLOCK_SIZE 8 KB
SGA_TARGET 400 GB
PGA_AGGREGATE_TARGET 192 GB
DB_CACHE_SIZE 172 GB
DB_KEEP_CACHE_SIZE 96 GB
DB_RECYCLE_CACHE_SIZE 24 GB
INMEMORY_SIZE 48 GB
USE_LARGE_PAGES TRUE
FILESYSTEMIO_OPTIONS SETALL
DB_FILE_MULTIBLOCK_READ_COUNT 64
DISK_ASYNCH_IO TRUE
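As a sketch of how the Table 6 settings might be applied, the following assumes an spfile-managed RAC database named orcl; the SCOPE and SID choices are assumptions, and the static parameters take effect only after a restart.

    # Illustrative only; the values mirror Table 6.
    export ORACLE_SID=orcl1
    sqlplus -S / as sysdba <<'SQL'
    ALTER SYSTEM SET sga_target = 400G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET pga_aggregate_target = 192G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET db_cache_size = 172G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET db_keep_cache_size = 96G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET db_recycle_cache_size = 24G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET inmemory_size = 48G SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET use_large_pages = 'TRUE' SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE=SPFILE SID='*';
    -- DB_BLOCK_SIZE (8 KB) cannot be altered; it is set at database creation time.
    SQL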
Figure 3 shows the relationships between disk groups and replication pairs.
Figure 3
Table 8 shows the details of the disk mappings from the LUNs to the ASM disk groups for Oracle RAC Database
tablespaces.
TABLE 8. LUNS AND ORACLE ASM DISK MAPPINGS FOR ORACLE DATABASE IN SITE 1 AND SITE 2
/dev/mapper/mpathfa - /dev/mapper/mpathfp
/dev/mapper/mpathga - /dev/mapper/mpathgp
This provides the compute power for the Oracle RAC database to handle complex database queries and a large volume of
transaction processing in parallel. Table 9 describes the details of the server configuration for this solution.
This reference architecture uses two Hitachi Advanced Server DS120 servers for VMware ESXi management server
configuration.
Site 1, DS220: Oracle Server1, oracle-rac-01, Oracle RAC node 1, 36 cores, 768 GB (64 GB × 12)
Site 1, DS220: Oracle Server2, oracle-rac-02, Oracle RAC node 2, 36 cores, 768 GB (64 GB × 12)
Site 2, DS220: Oracle Server3, oracle-rac-03, Oracle RAC node 3, 36 cores, 768 GB (64 GB × 12)
Site 2, DS220: Oracle Server4, oracle-rac-04, Oracle RAC node 4, 36 cores, 768 GB (64 GB × 12)
TABLE 9. HITACHI ADVANCED SERVER DS220 AND DS120 SERVER SPECIFICATIONS (CONTINUED)
Hitachi Storage Advisor VM
Hitachi Infrastructure Analytics Advisor VM
Oracle Enterprise Manager Cloud Control 13c VM
SAN Architecture
Map the provisioned LDEVs to multiple ports on Hitachi Virtual Storage Platform G900 (VSP G900). These LDEV port
assignments provide multiple paths to the storage system from the host for high availability.
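In this solution the mapping is handled by HDID and the pre-configuration described later in Table 17. For orientation, mapping an LDEV to multiple front-end ports with the raidcom CLI looks roughly like the sketch below; the port names, host group name, and LDEV ID are hypothetical placeholders.

    # Sketch with assumed values; requires an authenticated raidcom session
    # against the VSP G900 command device.
    raidcom add lun -port CL1-A oracle_rac_hg -ldev_id 0x1000
    raidcom add lun -port CL2-A oracle_rac_hg -ldev_id 0x1000

    # Confirm the LUN paths that were just created.
    raidcom get lun -port CL1-A oracle_rac_hg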
Site 1
16 SAN switch connections are being used for VSP G900 host ports.
16 SAN switch connections are being used for server HBA ports.
Site 2
16 SAN switch connections are being used for VSP G900 host ports.
16 SAN switch connections are being used for server HBA ports.
Site 3
4 SAN switch connections are being used for VSP G350 host ports.
4 SAN switch connections are being used for server HBA ports.
Table 10 shows details of the Fibre Channel switch connect configuration on the Hitachi Virtual Storage Platform G900
ports.
TABLE 10. SAN HBA CONNECTION CONFIGURATION BETWEEN DS220 AND VSP G900, DS120, AND VSP G350
Columns: Site, Server, HBA Ports, Storage Host Group, Switch Zone, Connection, Storage System, Port, Brocade G620 Switch
TABLE 11. SAN SWITCH ARCHITECTURE BETWEEN VSP G900 STORAGE SYSTEMS
SAN Switch Architecture between Site 1, Site 2 G900 storage systems and Site 3 VSP G350 storage system.
SAN Switch Architecture Between two G900 Storage Systems and ESXi cluster.
TABLE 13. SAN SWITCH ARCHITECTURE BETWEEN VSP G900 STORAGE SYSTEMS AND ESXI CLUSTER
Site 1: ESXi Cluster, HBA1_1, host group MN1_HBA1_1, zone MN1_HBA1_1_ASE_43_230_8A, Site 1 VSP G900, command device
Site 2: ESXi Cluster, HBA1_1, host group MN2_HBA1_1, zone MN2_HBA1_1_ASE_43_236_8A, Site 2 VSP G900, command device
Note — In a production environment, it is recommended to use separate storage ports for the management servers and quorum disks to avoid impact on the database performance. Shared storage ports can be used; however, port utilization should be monitored to avoid performance issues in high-performance environments.
Network Architecture
This architecture requires the following separate networks:
Private Network (also called cluster interconnect) — This network must be scalable. In addition, it must meet the
low latency needs of the network traffic generated by the cache synchronization of Oracle Real Application Clusters
and inter-node communication among the nodes in the cluster.
Public Network — This network provides client connections to the applications and Oracle Real Application Clusters.
BMC/management network — The Baseboard Management Controller (BMC) provides remote management
capabilities including console redirection, logging, and power control.
Hitachi Vantara recommends using pairs of 25 Gbps NICs for the cluster interconnect network and public network.
Observe these points when configuring private and public networks in your environment:
For each server in the clusterware configuration, use at least two identical, high-bandwidth, low-latency NICs for the
interconnection.
Use NIC bonding to provide failover and load balancing of interconnections within a server.
Use at least two public NICs for client connections to the application and database.
When creating NIC bonding pairs, use ports on different cards to avoid a single point of failure (SPoF). It is recommended that BMC connections go to a separate switch on the management network.
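One way to build such a bond on Red Hat Enterprise Linux 7.6 is with nmcli, as in the sketch below. The interface names, bond mode, and IP address are assumptions for illustration and are not the exact settings used in this validated environment.

    # Sketch: create bond0 from two 25 GbE ports on different cards (assumed names ens1f0/ens2f0).
    nmcli connection add type bond con-name bond0 ifname bond0 mode active-backup
    nmcli connection add type bond-slave con-name bond0-port1 ifname ens1f0 master bond0
    nmcli connection add type bond-slave con-name bond0-port2 ifname ens2f0 master bond0

    # Assign a private interconnect address (the 192.168.208.x subnet is reused here
    # only as an illustrative value) and bring the bond up.
    nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.208.101/24
    nmcli connection up bond0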
Columns: Server, NIC Ports, VLAN/Subnet, NIC Bond, IP Address, Network, Bandwidth (Gbps), Cisco Nexus 93180YC-EX Switch, Switch Port Number
NIC-3: 25 Gbps, switch 2
BMC-Dedicated NIC: VLAN 244, 192.168.244.xx, Public Management, 1 Gbps
DS220 Server2, NIC-0: VLAN 208, Bond0, 192.168.208.xx, Private, 25 Gbps, switch 1, port 43
NIC-2: 25 Gbps, switch 2
Database Server 1 (DS220 2): 192.168.242.xx, 192.168.242.xx
Database Server 3 (DS220 3): 192.168.242.xx, 192.168.242.xx
Database Server 4 (DS220 4): 192.168.242.xx, 192.168.242.xx
Table 16 lists the virtual machine configuration running on the management server cluster.
Table 17 shows the manual pre-configuration needed on Site 1 and Site 2 VSP G900 storage before setting up global-
active device using HDID.
Refer to Table 10, “SAN HBA Connection Configuration Between DS220 and VSP G900, DS120, and VSP G350,” for details of owner (local) and non-owner (remote) paths.
Solution Implementation
Deploy the Solution
Implementing this solution requires doing the following high-level procedures:
Figure 4
1. Policy-OracleDB-GAD was created to replicate the Oracle database VVols with global-active device. While creating the policy, select the appropriate Oracle database from the added Oracle RAC nodes. In this case, rac01, rac02, rac03, and rac04 are the added Oracle RAC nodes and orcl is the Oracle database SID.
2. Policy-Oracle-OCR was created to replicate the OCR VVols with global-active device. Users need to specify the OCR VVols in the Serial/LDEV_ID format because HDID does not replicate OCR VVols as part of the Oracle database VVols.
Figure 5 shows the complete HDID policy details.
Figure 5
Figure 6
Solution Execution
Execution of this solution consists of the following procedures:
Perform global-active device replication for the Oracle database and OCR disks to the secondary VSP G900 storage
using HDID.
Recover Oracle Database After Storage Replication Link Failure Between Site 1 and Site 2 Storage Systems.
Perform Global-active Device Replication for the Oracle Database and OCR Disks to the Secondary VSP G900
Storage
This is how to perform global-active device replication for the Oracle database and OCR disks to the secondary storage
using Hitachi Data Instance Director.
To execute the HDID data flow, activate the Hitachi Data Instance Director data flow:
1. Select the appropriate data flow and click the (Activate) button. The Activate Data Flow(s) dialog box displays the data flow compilation details.
2. Click the Activate button to execute the data flow.
Figure 7
3. On the Monitor menu, the user can monitor the progress of the HDID data flow operation.
4. After the HDID data flow activation, the source Oracle and OCR P-VOLs are replicated to the secondary VSP G900 storage.
5. Users can see global-active device pairs using HDID. Click Dashboard > Storage > G900-Site2 > Replication and Clones to see the replications on the Site 2 VSP G900 storage.
Figure 8
6. Click on any of the replication pairs to see the global-active device pairing and progress details.
Figure 9
Recover Oracle Database After Storage Replication Link Failure Between Site 1 and Site 2 Storage Systems
Objective for Use Case: Recover from storage replication link failure between site 1 and site 2 storage systems.
Figure 10
(2) Verified that all the VVol pairs were in PAIR status using HDID.
Figure 11
Figure 12
(2) Global-active device pair status after remote replication failure at site 1.
(3) Figure 13 shows that, by using HDID, users can see the VVol pair status in the 'PSUE' state after a replication link failure between the two storage sites.
Figure 13
(4) Verified that all the database instances were in an online state.
(5) Verified that the number of Swingbench user connections was 20.
(6) Checked the database for errors. There were no errors in the alert logs.
4. Recovery Procedure using HDID
(1) At site 1, enabled the Fibre Channel switch ports used for remote connections. Users need to resolve hardware issues before recovering replication using HDID.
(2) Click Dashboard > Monitor > Dataflow-HDID-OracleDB-GAD.
(3) Select GAD-OracleDB-orcl node.
(4) Click on Trigger.
(5) Select the policy Policy-OracleDB-GAD on the next screen and click Run Now to trigger the replication.
This will bring global-active device replication into PAIR state from the PSUE state. Figure 14 shows how to start the trigger
operation on the Monitor screen.
Figure 14
Note — For global-active device pairs in the PSUE error state or PSUS suspend state, users need to resolve the hardware issue on the storage side first, and then perform a 'Trigger Operation' using the HDID monitor screen, which brings global-active device replication into the PAIR state from the PSUE error state or the PSUS suspend state.
(1) Observed that the path status on all the Oracle RAC hosts was 'active ready running'.
Figure 15
Figure 16
Figure 17
This list provides details of options used for the Replications stored on a Block Storage node.
Mount: Used to mount the replication to the operating system and add it to host groups
Unmount: Used to unmount the replication from the operating system and delete it from host groups
Pause: Pauses the replication. If the replication is live, then it can be paused.
Unsuspend: If a Swap operation cannot be completed due to a P-VOL or data link fault between the primary and
secondary device, then the replication will enter the SSWS state (suspended for swapping) indicating that the swap is
not yet complete. Unsuspend enables the replication process to be re-established once the cause has been rectified
Add to additional Host groups: This enables LDEVs to be added to host groups in addition to the default
HDIDProvisionedHostGroup used by HDID
Remove from Host Groups: This enables LDEVs to be removed from host groups, including the default
HDIDProvisionedHostGroup used by HDID
Transfer RBAC permissions to another node: Allows RBAC ownership to be transferred from the current node to
another node
Dissociate: Dissociates a replication that was previously adopted by HDID. Removes the selected replication(s) from
HDID including state information such as direction and mount location. The replication remains active on the hardware
device(s).
Teardown: Tears down a replication using HDID and removes the volume pairings on the array.
Delete: Deletes the replication record from HDID. The replication is also removed from the block storage device.
A Swap operation may be performed to move array processing load from the primary to the secondary device. If both
P-VOL and S-VOL are operable and the link between the two sites is available, the secondary array will assume the
higher processing load.
If the replication cannot be established because the pair has entered an error or suspended state, then once the
problem is resolved, the site with the most recent data must be used to re-establish the replication. Because the
replication is active-active and cross-path set-ups are possible, depending on the nature of the fault, the P-VOL or S-
VOL could contain the most recent data:
If the P-VOL contains the most recent data, no swap is required:
i. If necessary, unsuspend and resume the replication.
ii. Resynchronize the replication (via manual trigger or data flow reactivation).
If the S-VOL contains the most recent data:
iii. Swap the replication to copy the data from the S-VOL to the P-VOL.
iv. Swap the replication again to restore the original direction. This is optional, but highly recommended.
The swap operation will result in both the P-VOL and the S-VOL remaining writable. A command-level sketch of the swap operation follows.
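HDID drives the swap and resynchronization from its UI. For orientation only, the rough Hitachi Command Control Interface (CCI) equivalent is sketched below; the device group name oraHA is a hypothetical placeholder, and the exact commands issued by HDID are not documented here.

    # Sketch with an assumed device group "oraHA"; run on the host attached to the current S-VOL.
    # Swap the copy direction so data flows from the former S-VOL back to the former P-VOL.
    pairresync -g oraHA -swaps

    # Check the pair status; once resynchronized, all volumes should report PAIR.
    pairdisplay -g oraHA -fxce

    # Optionally, swap again from the original primary side to restore the original direction.
    pairresync -g oraHA -swapp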
To perform a two-datacenter global-active device replication swap operation:
Figure 18
This figure shows the results of the global-active device replication swap operation using HDID. The S-VOL takes over the
role of the primary volume and the P-VOL takes over the role of the secondary volume.
Figure 19
Benefits of using HDID Versus Using Manual Commands for Global-active Device Setup
and Configuration
Table 18 shows the benefits of using HDID versus using manual commands for global-active device setup and
configuration.
TABLE 18. COMPARISON OF HDID VS MANUAL COMMANDS FOR GLOBAL-ACTIVE DEVICE SETUP
Columns: Global-active Device Setup and Configuration, Using Manual Commands, Using HDID
Engineering Validation
This summarizes the key observations from the test results for the Hitachi Unified Compute Platform CI architecture for
Oracle Real Application Clusters on Extended Distance clusters in a two-site environment using HDID and global-active
device in Hitachi Virtual Storage Platform.
The validation covered an Oracle RAC deployment with Hitachi Virtual Storage Platform G900 and Hitachi Advanced Server DS220.
Test Methodology
The test results were demonstrated using the Swingbench tool.
Swingbench
The workload generation application was Swingbench. Swingbench is a free load generator (and benchmark tool)
designed to stress test an Oracle database. Swingbench consists of a load generator, a coordinator, and a cluster
overview. The software enables a load to be generated and the transactions/response times to be charted.
Swingbench can be used to demonstrate and test technologies such as Real Application Clusters, online table rebuilds, standby databases, and online backup and recovery. Refer to the Swingbench documentation for more information about Swingbench.
Workload Configuration
Testing ran simulated and synthetic workloads using Swingbench. This simulated the workloads for Hitachi Virtual Storage
Platform G900 with Storage Virtualization Operating System to test the global-active device.
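A typical way to drive such a workload from the command line is the Swingbench charbench client. The connect string, schema credentials, and the user count of 20 in the sketch below are illustrative assumptions (the user count matches the connection count verified during the link-failure test), not the exact parameters used in this validation.

    # Sketch: run an Order Entry (SOE) workload against the RAC SCAN address for one hour.
    # Assumes the SOE schema was already built and that soeconfig.xml exists.
    ./charbench -c soeconfig.xml \
                -cs //rac-scan.example.com:1521/orcl \
                -u soe -p soe \
                -uc 20 \
                -rt 1:00 \
                -v users,tpm,tps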
Test Results
Hitachi Vantara
Corporate Headquarters Contact Information
2535 Augustine Drive Phone: 1-800-446-0744
Santa Clara, CA 95054 USA Sales: 1-858-225-2095
HitachiVantara.com | community.HitachiVantara.com HitachiVantara.com/contact
© Hitachi Vantara Corporation, 2019. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be
offered by Hitachi Vantara Corporation.
MK-SL-119-02, August 2019