Oracle RAC Database on PowerStore T Storage
Enabled by Dell EMC PowerEdge R740/R640, Red Hat Enterprise Linux 8.2, Oracle RAC database 19c R7, VMware vSphere 7
Design Guide H18617.1
Abstract
This design guide describes best practices for designing and
implementing a VMware-based virtualized Oracle Real Application
Clusters database solution using Dell EMC PowerStore T storage. The
solution incorporates Dell EMC PowerEdge servers and the VMware
vSphere virtualization platform.
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks
of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.
Published in the USA 01/21 Design Guide H18617.1.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.
Contents
Introduction
Solution architecture
Solution components
Compute design
Network design
Storage design
Oracle 19c RAC database design and deployment
Snapshot database
Virtualized database high availability
Conclusion
References
Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters
Appendix B: Oracle RAC Database Initialization Settings
Introduction
Business case
Everyone from the boardroom to the server room is talking about the value of data. To
harvest the maximum value from their data, businesses worldwide have invested in an
impressive array of analytical tools for performing analysis, communicating insights, and
informing decisions. The abundance of data sources combined with the velocity of new
data generation has created a complex landscape of data management silos.
Many companies are considering the latest generation of scalable storage solutions to
help address these complexities. Storage systems that are equipped with data reduction
technologies can achieve greater value by controlling cost in the face of the ongoing
demand for more space. Dell Technologies systems with always-on deduplication and
compression enable organizations to accommodate the demands of data growth long
after the initial storage investment.
The Dell EMC PowerStore T family of storage arrays offers features that support
increased performance, scalability, and storage efficiency. These arrays can enhance the
value of enterprise-class database management solutions such as Oracle 19c Real
Application Clusters (RAC). The benefits of Oracle RAC include the ability to scale
workloads across multiple active servers. This ability improves system performance and
enhances the server cluster’s ability to continue database services even during a system
failure. PowerStore T arrays hosting Oracle RAC-supported applications enable
organizations to lower the costs of providing high availability and accommodate the
demands of data growth over the life cycle of applications.
Document purpose
This guide describes how to design and implement a VMware-based virtualized Oracle RAC database solution that uses PowerStore T storage. The guide provides an overview of the architecture design of the entire stack, including the compute, network, and storage design. It describes best practices for deploying an Oracle RAC database on PowerStore T storage.
Audience
This guide is intended for infrastructure architects, storage administrators, virtualization
administrators, system administrators, IT managers, and other personnel who evaluate,
acquire, manage, maintain, or operate Oracle database environments.
Deployment workflow
This design for Oracle databases with PowerStore storage encompasses the following phases:
• Phase I, Hardware stack setup and configuration—Setting up all the
component hardware, including PowerEdge servers; Ethernet network switches
and cable connections; the PowerStore storage array, Fibre Channel (FC)
network switches, and cable connections.
• Phase II, Installation and configuration—Establishing the environment for
running Oracle databases on top of the hardware stack that is set up in Phase I.
Phase ll tasks include setting up VMware ESXi on management and database
servers, installing VMware vCenter Server Appliance (VCSA), creating and
configuring PowerStore storage volumes, creating virtual machines (VMs), and
configuring storage and networks for the VMs. Tasks to accomplish in this
phase also include creating snapshots manually on the PowerStore storage
array.
• Phase III, Oracle Grid and database configuration—Establishing the Oracle
19c RAC production database environment that was set up in Phase II. Tasks to
accomplish in this phase include:
▪ Preparing to install Oracle Grid Infrastructure and Oracle database software
▪ Installing Oracle Grid Infrastructure and Oracle database software in each
environment
▪ Creating and configuring the Oracle databases in each environment
▪ For snapshot databases, taking PowerStore snapshots of the Oracle
production database volumes and integrating the database configuration with
the snapshot database server to bring up the snapshot database
We value your feedback
Dell Technologies and the authors of this document welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by
email or provide your comments by completing our documentation survey.
Note: For links to additional documentation for this solution, see the Dell Technologies Solutions
Info Hub for Oracle.
Solution architecture
Overview
This section provides an overview of the architecture and design of the Dell EMC Oracle
RAC database solution on Dell EMC PowerStore T storage arrays.
This solution design implements the Oracle RAC database in a VMware virtualized
environment. It leverages the PowerStore snapshot feature to enable the creation of a
snapshot database for a development (DEV) database.
For this solution, the Dell Technologies engineering team tested and validated the
following Oracle database environments:
• Two-node Oracle 19c Oracle RAC online transaction processing (OLTP) production
(PROD) database running in a VMware environment
• Two-node Oracle 19c Oracle RAC OLTP DEV database that we created based on
the snapshot of the two-node RAC PROD database
Logical architecture
The following figure shows the logical architecture of the Oracle RAC database environment. The figure includes the infrastructure component layers, the software layers,
and the management infrastructure and associated tools. The infrastructure components
include the server layer, network, and storage.
The storage layer consists of the PowerStore T array as the FC SAN storage for the
PROD and DEV databases and the data stores for VM operating system volumes. For a
description of the storage volume design for this virtualized Oracle RAC database
environment, see Oracle 19c RAC database design and deployment.
Solution components
Storage hardware
This solution uses PowerStore T storage with a base enclosure, which has 21 Non-Volatile Memory Express (NVMe) SSD drives and two NVMe NVRAM drives that are designed for the PowerStore caching system. The following table describes the PowerStore T hardware components:
[Table: PowerStore T hardware components; columns: Component, Details]
The solution uses the following software components:
Hypervisor operating system: VMware ESXi 7.0 (Dell EMC customized ISO image) with vCenter 7.0
Guest operating systems (for PROD and DEV database nodes): Red Hat Enterprise Linux 8.2 with kernel 4.18.0-193.el8.x86_64
Oracle Grid Infrastructure: Oracle Grid Infrastructure 19c (19.0.0.0) with Release Update (RU) 19.7
Oracle Database Enterprise Edition: Oracle Real Application Clusters 19c (19.0.0.0) with RU 19.7
Data center requirements
To support this solution, the data center environment must have:
• Ethernet switching infrastructure that is capable of layer 3 routing and supports
trunking, layer 2 VLAN ID and tagging (IEEE 802.1Q), Link Aggregation Control
Protocol (LACP IEEE 802.3ad), and the Spanning Tree PortFast feature.
• Domain Name System (DNS) and Network Time Protocol (NTP) services. A
Dynamic Host Configuration Protocol (DHCP) server is recommended.
• Sufficient power and cooling to support all components. To determine accurate
power and cooling needs, see the product documentation for the corresponding
component.
Compute design
Physical servers and ESXi hosts
To configure the physical servers that host the PROD and snapshot database VMs:
• Enable the following BIOS settings for the PowerEdge servers:
▪ Memory: Default options
▪ Processor: Default options
▪ System Profile: Performance (the nondefault option)
• Create a single virtual disk in a RAID 1 configuration with two local disks (SSDs
or HDDs) per server to install the bare-metal operating system (ESXi 7.0).
• Install ESXi 7.0 by using the Dell EMC customized ISO image (Dell version A00,
build number 16324942). Download the image from Dell Technologies Online
Support at VMware ESXi 7.x.
• Create two VMs for the PROD Oracle RAC database, one on each of the
PowerEdge R740 servers, and two VMs for the snapshot Oracle RAC database,
one on each of the PowerEdge R640 servers.
• Configure, monitor, and maintain the ESXi host, virtual networking, and the VM
by using the VMware vSphere web client and VCSA, which is deployed as a VM
on the management server.
• Zone two dual-port 32 Gb/s HBAs—four initiators in total—and configure these
HBAs with the PowerStore T front-end FC ports for high bandwidth, load-
balancing, and highly available SAN traffic.
• Set the LUN queue depth to 128 for the Emulex HBAs that are used in both the PROD and snapshot servers (a command example follows this list).
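One way to apply this setting from the ESXi shell is shown below; this is a sketch that assumes the Emulex lpfc driver is in use, and the change takes effect only after the host reboots:
# Set the LUN queue depth for the Emulex (lpfc) driver on each ESXi host
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=128"
# Verify the parameter value (applied after a reboot)
esxcli system module parameters list -m lpfc | grep lpfc_lun_queue_depth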
Multipath configuration
We configured multipathing on the ESXi 7.0 host according to the following best practices:
• Use vSphere Native Multipathing (NMP) as the multipathing software.
• Retain the default selection of round-robin for the native path selection policy (PSP)
on the PowerStore T volumes that are presented to the ESXi host.
• Change the NMP round-robin path switching frequency from the default value of
1,000 I/O packets to 1 packet.
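The following sketch illustrates how the round-robin policy and the one-I/O path switching frequency can be checked or set per device from the ESXi shell; the device identifier is a placeholder, and in practice a SATP claim rule (using the vendor and model strings from the Dell EMC PowerStore Host Configuration Guide) is usually preferred so that all PowerStore volumes inherit the policy automatically:
# Confirm which path selection policy each PowerStore device is using
esxcli storage nmp device list
# Set round-robin and switch paths after every I/O for one device (placeholder identifier)
esxcli storage nmp device set --device naa.68ccf098xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.68ccf098xxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 1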
Virtual machines
We used the following design principles to create the VMs:
• SCSI controllers—We created multiple SCSI controllers to optimize and balance
the input/output (I/O) for the different database disks, as shown in the following
table. We chose the controller type VMware Paravirtual for optimal performance.
[Table: database VM configuration; columns include vCPU and vMem]
Red Hat Enterprise Linux 8 operating system
We configured Red Hat Enterprise Linux 8.2 with kernel 4.18.0-193.el8.x86_64 for the Oracle RAC database by following these steps:
1. Before starting the installation of the Oracle Grid Infrastructure and Oracle RAC database, set the operating system environment variable CV_ASSUME_DISTID to OL7:
$ export CV_ASSUME_DISTID=OL7
2. Create the text file /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf containing the Oracle database-related Linux kernel parameter settings.
3. Run sysctl --system to load these settings into the running kernel. For the parameter values, see Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters.
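The following sketch consolidates these steps; the parameter values shown are typical Oracle 19c preinstall defaults and are illustrative only (use the values in Appendix A for this solution):
$ export CV_ASSUME_DISTID=OL7
# /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf (excerpt, typical values)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
# Load the settings into the running kernel
$ sysctl --system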
Network design
Overview
This section describes the VMware ESXi-based virtual and physical LAN design and best
practices that the solution implements.
Note: This section provides design examples and best practices for a two-node Oracle RAC
database PROD environment. The same network design best practices can be used and scaled
out for other Oracle database environments such as DEV or scaled up for an Oracle RAC
environment with more nodes.
vDS for network traffic
We created three separate vSphere distributed switches (vDS), one each for:
• VMware ESXi hosts management network traffic
• Oracle database public, vSphere vMotion, and applications network traffic
• Oracle database RAC private interconnect network traffic
LAN design for ESXi host management
We implemented a virtual and 1 GbE physical network design for ESXi-based server infrastructure management. The solution uses a dedicated management and applications server (R630-Mgmt-App-Srvr) to deploy the centralized VCSA and vDS network.
As the following figure shows, there is redundancy at every level for high availability and
network load balance. This redundancy is achieved by creating multiple distributed virtual
uplinks (DVUplinks) and redundant and dedicated 1 GbE network adapter ports within
each ESXi host. These ports in turn are connected to redundant ToR 1 GbE switches.
The management traffic is separated by a dedicated VLAN network (this guide uses
VLAN 379 for illustration purposes) for both security and network performance.
LAN design for Oracle public, vMotion, and applications
We implemented a virtual and 25 GbE physical network for the Oracle public, vMotion, and HammerDB benchmarking applications.
As the following figure shows, there is redundancy at every level for high availability and network load balance:
Figure 3. Network design for Oracle public, vSphere vMotion, and applications traffic
For performance reasons, we implemented end-to-end jumbo frames for vMotion traffic
by:
• Configuring physical ports with an MTU size of 9,216 bytes on the 25 Gb
Ethernet switches to which the Oracle public and vMotion network adapter ports
are connected
• Creating a vDS with an MTU size of 9,000 bytes for Oracle public and vMotion
(vDS-02-OraPub-vM) traffic
• Creating a VMkernel with an MTU size of 9,000 bytes on each ESXi (database)
server for vMotion traffic
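As an illustrative check, the VMkernel MTU and end-to-end jumbo-frame connectivity can be verified from the ESXi shell; the VMkernel interface name and target IP address below are placeholders:
# List VMkernel interfaces and confirm the 9,000-byte MTU on the vMotion interface
esxcfg-vmknic -l
# Send an 8,972-byte, non-fragmented ping (8,972 bytes of payload plus headers equals 9,000 bytes)
vmkping -I vmk1 -d -s 8972 192.168.10.22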
LAN design for Oracle private interconnect traffic
For security and network performance, we created a dedicated vDS for the Oracle private interconnect traffic. As shown in the following figure, the Oracle private network design is configured with dedicated VLAN networks (VLAN 101 and 102) to separate the interconnect traffic from other network traffic. For high availability and load balance, the
design implements multiple distributed port group uplinks and 25 GbE physical network
adapter ports that are connected to two separate 25 GbE ToR switches.
Network design for Oracle Grid Infrastructure and Oracle RAC
We set up the network interfaces for the Oracle Grid Infrastructure and Oracle RAC database as follows:
• Oracle public IP addresses and the database server’s FQDN were registered in the DNS server.
• One SCAN NAME and three SCAN virtual IP addresses were registered in the
DNS server.
• The DNS server’s name and IP address were registered in the
/etc/resolv.conf file of each of the Oracle RAC database nodes for name
resolution.
• All Oracle private interconnect IP addresses and interface names were registered in the /etc/hosts file of each Oracle RAC database node (an illustrative example follows this list).
• Oracle virtual IP (VIP) addresses for each of the Oracle RAC nodes that are in
the same network as the Oracle public IP addresses were provided during the
Oracle Grid installation.
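For illustration only, the name-resolution entries on each RAC node can look like the following; all host names, domain names, and IP addresses here are placeholders, and the SCAN name should resolve through DNS rather than /etc/hosts:
# /etc/resolv.conf
search example.local
nameserver 10.10.10.5
# /etc/hosts entries for the private interconnect on each RAC node
192.168.101.11  racnode1-priv1.example.local  racnode1-priv1
192.168.102.11  racnode1-priv2.example.local  racnode1-priv2
192.168.101.12  racnode2-priv1.example.local  racnode2-priv1
192.168.102.12  racnode2-priv2.example.local  racnode2-priv2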
SAN design for PowerStore T
The following figure shows the recommended physical and logical zoning of the two-node Oracle RAC database nodes to the PowerStore T appliance. The following best practices
are applied in the SAN design and implementation:
• Use two separate fabrics or FC switches for high availability and load balance.
• Use a single initiator zoning scheme—one database server initiator is zoned
with one storage target.
• Each database server or ESXi host has four paths per storage volume.
• Four HBA ports in each database server or ESXi host are zoned with two
storage ports per storage node for maximum high availability and load balance.
For additional SAN guidelines and zoning best practices with PowerStore arrays, see the
Dell EMC PowerStore Host Configuration Guide.
Storage design
Introduction
The Dell EMC PowerStore system is designed with a purpose-built, 2U, two-node Intel
Xeon platform. The array is available in two different product models:
• PowerStore T—These models support multiple storage products. The platform
starts with the PowerStore 1000T or 1000X model and scales up to the
PowerStore 9000T or 9000X model. PowerStore T models are bare-metal,
unified storage arrays that can service block, file, and vVol resources along with
numerous data services and efficiencies. PowerStore T models are ideal for
traditional and modern workloads such as relational databases.
• PowerStore X—These model appliances enable running applications directly
on the appliance through the AppsON capability. A native VMware ESXi layer
runs embedded applications alongside the PowerStore operating system, all in
the form of VMs. This feature is in addition to the traditional storage functionality
of PowerStore X model appliances, which supports serving external block and
vVol storage to servers with FC and iSCSI.
This Oracle RAC database solution uses the PowerStore T model with a base enclosure
that has 21 NVMe SSD drives and two extreme high-performance NVMe NVRAM drives
that are designed for the PowerStore caching system.
For the Oracle database storage volumes and related VM operating system storage
volumes, we applied the following design principles to create volumes:
• Three volumes for OCR for the Oracle RAC databases clusterware
• Four volumes for DATA
• Four volumes for REDO
• One volume each for FRA and TEMP
We used the storage volumes design shown in the following figure for the PROD RAC
database on two nodes (on the left in the figure) and the DEV RAC database on two
nodes (on the right in the figure):
Production database storage design
For the PROD Oracle RAC database, we created three volume groups:
• VM_OCR volume group with five volumes—Two 250 GB volumes, one for each VM operating system, and three 50 GB LUNs for Oracle clusterware including
Oracle Cluster Registry (OCR), the voting disk, and the Grid Infrastructure
Management Repository (GIMR).
• ORA_DATA volume group with nine volumes—Four 600 GB volumes for the
PROD database datafile, four 25 GB REDO volumes, and one 100 GB volume
for FRA.
• ORA_TEMP volume group with one volume—One 500 GB volume for the
database TEMP tablespace is required for HammerDB TPC-C schema creation
and index creation.
We created a dedicated ORA_DATA volume group that included DATA volumes, REDO
volumes, and FRA volumes and put the TEMP in a separate volume group, ORA_TEMP.
This design meets the requirement for a consistent snapshot of all the DATA, REDO, and
FRA volumes to enable the creation of a snapshot database based on these volumes.
The TEMP volume is not required to be a part of the snapshot.
The following table shows the volumes for the PROD Oracle RAC database:
[Table: PROD Oracle RAC database volumes; columns: Volume group, Name, Purpose, Size (GB), Host group]
We created a host group to map these production database volumes to two PROD
database servers. As shown in the following figure, this host group consists of two PROD
database hosts, R740_92J-21 and R740-92-J19:
Each of the PROD database hosts has four initiators that are based on the HBAs installed
on each of the hosts, as shown in the following figure:
Figure 8. Two PROD hosts with their FC initiators
Development database storage design
For the DEV Oracle RAC database, we created six volumes for the corresponding volume groups:
• VM_OCR volume group: Five volumes, including two 250 GB volumes (one for each VM operating system) and three 50 GB volumes for Oracle clusterware (Oracle Cluster Registry (OCR), the voting disk, and GIMR)
• ORA_TEMP volume group: One 100 GB DEV_TEMP volume for the TEMP
tablespace
Note: The table does not include the DATA, FRA, and REDO volumes of the DEV databases that
are based on the snapshot of the corresponding volumes of the PROD database (see Creating a
snapshot of the PROD database ).
Like the volumes for the PROD database, the DEV database volumes are assigned to
ORA_DEV_HOST_Group to map them to two DEV database servers. As shown in the
following figure, the host group consists of two DEV database hosts, R640_92J-30 and
R640-92-29, each of which has two initiators that are based on the HBAs that we installed
on each of the DEV database hosts:
VM operating system data store and RDM devices design
To create database VMs, we created four Virtual Machine File System (VMFS) data stores to host VM operating systems by using the storage volumes that are presented to the database server ESXi hosts. The following table shows the details:
Table 9. Operating system volumes for PROD and DEV VMs
[Table columns: Volume name, Purpose, Data store name, Accessible by ESXi hosts]
Both ESXi hosts must have access to the VM OS data store volumes. For example, two
PROD hosts (PROD Host1 and PROD Host2) share access to both the PROD VM OS
data stores (Prod_VM1_OS_DS and Prod_VM2_OS_DS). Then, the two database VMs
built on these data stores can be migrated between two PROD ESXi hosts (see
Virtualized database high availability).
All Oracle-related disks that are presented to the ESXi host from the PowerStore storage
array are mapped as raw devices (RDM) to the database VM. To optimize and balance
the I/O for the different database disks, we also created multiple SCSI controllers, each of
which is responsible for the I/O of a set of disks.
The following table shows the VM hard disk configuration with the corresponding SCSI
controllers in both the PROD database VMs. Aside from hard disk 1, all the hard disks are
RDM, with multi-writer sharing enabled on the two database VMs. Hard disk 1, which holds the guest operating system, is configured as a Thick Provision Lazy Zeroed VMFS disk.
[Table: PROD VM hard disk configuration; columns: Hard disk, Purpose, Size (GB), Disk type, Sharing, SCSI controller, Compatibility]
Operating system devices and Oracle ASM disk group design
After the database volumes are presented to the database VMs as virtual disks, perform the following steps to prepare these virtual disks for Oracle ASM disk group creation (a consolidated command example follows this procedure):
1. For each virtual disk, create a single partition that spans the entire disk and has a starting offset of 2,048 sectors.
2. Use the UDEV rule to assign Linux 0660 permission and ownership of these disks
to the grid user, that is, the owner of the Oracle GI and Oracle ASM instance. In
the following code sample, the UDEV rule is set for one of the disks as an entry of
the custom UDEV rule script in /etc/udev/rules.d/95_oracleasm.rules:
KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block",
PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="368ccf09800cd6a0eaf106e5a2155cf92",
SYMLINK+="oracleasm/disks/ora-redo4", OWNER="grid",
GROUP="asmadmin", MODE="0660"
3. Run the udevadm trigger command to set the ownership of the device
/dev/sdj1 as grid:asmadmin with 0660 permission and also create a soft link
alias /dev/oracleasm/disks/ora-redo4 that points to /dev/sdj1. You can
use this soft link alias as the path of this device to create an Oracle ASM disk
group.
4. Follow steps 2 through 3 to create UDEV rules for all the database disks in the
custom UDEV rule script /etc/udev/rules.d/95_oracleasm.rules. For
example, creating these rules creates the device links
/dev/oracleasm/disks/ora-XXX as shown in the following list:
# ls -l /dev/oracleasm/disks
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data2 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data3 -> ../../sdj1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data4 -> ../../sdk1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-fra -> ../../sdf1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr3 -> ../../sde1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo2 -> ../../sdm1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo3 -> ../../sdn1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo4 -> ../../sdo1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-temp -> ../../sdg
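The following sketch consolidates steps 1 through 3 for a single disk; the device name /dev/sdj and the alias ora-redo4 are taken from the example above, and parted is only one of several tools that can create the partition at the 2,048-sector offset:
# Create a single partition that starts at sector 2048 and spans the whole disk (step 1)
parted -s /dev/sdj mklabel gpt mkpart primary 2048s 100%
# Confirm the WWID that the UDEV rule matches on (used when writing the rule in step 2)
/usr/lib/udev/scsi_id -g -u -d /dev/sdj
# Reload and trigger the UDEV rules, then verify ownership and the alias (step 3)
udevadm control --reload-rules
udevadm trigger
ls -l /dev/sdj1 /dev/oracleasm/disks/ora-redo4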
The Oracle ASM disk groups are created based on these raw devices. To use the soft link
alias for the devices, we set the DISK Discovery Path as /dev/oracleasm/disks/*,
as shown in the following table. While the OCR disk group uses the normal redundancy
setting (with triple mirroring), all other disk groups use the external redundancy setting.
We used the coarse striping setting for the DATA, FRA, and OCR disk groups and the
fine-grain striping setting for the REDO and TEMP disk groups.
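As a sketch of how these disk groups can be created from the UDEV aliases, the following SQL (run in the ASM instance) shows the OCR group with normal redundancy and the DATA group with external redundancy; the exact disk lists and any additional attributes used in the validated configuration are assumptions:
SQL> CREATE DISKGROUP OCR NORMAL REDUNDANCY
       DISK '/dev/oracleasm/disks/ora-ocr1',
            '/dev/oracleasm/disks/ora-ocr2',
            '/dev/oracleasm/disks/ora-ocr3';
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
       DISK '/dev/oracleasm/disks/ora-data1',
            '/dev/oracleasm/disks/ora-data2',
            '/dev/oracleasm/disks/ora-data3',
            '/dev/oracleasm/disks/ora-data4';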
When ASM rebalances the disk group, a compacting phase at the end of the rebalance
moves the data to the higher-performing tracks of the spinning disks. Because the
PowerStore storage array uses all-flash devices, compacting the data might not provide
any benefits. Effective from Oracle 12c, it is possible to disable compacting on individual
ASM disk groups by setting the _rebalance_compact attribute to FALSE:
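For example, the attribute can be set per disk group from SQL*Plus connected to the ASM instance; the disk group name here is illustrative, and because this is an underscore (hidden) attribute, confirm its use with Oracle Support before applying it in production:
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE '_rebalance_compact'='FALSE';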
Oracle 19c RAC database design and deployment
Overview
This section describes the design and deployment of an Oracle 19c RAC database stack
on the Red Hat Enterprise Linux 8 operating system. This Oracle stack consists of Oracle
Grid Infrastructure (GI) 19c and Oracle RAC 19c database.
Deploying Oracle 19c RAC with Red Hat Enterprise Linux 8.2 on vSphere 7
The certified Oracle Database 19c version for Red Hat Enterprise Linux 8 is Oracle Enterprise Edition 19.0.0.0 at minimum RU 19.7 (or 19.6 with patches) with a minimum kernel version of 4.18.0-80.el8.x86_64. For more information, see the Oracle Database 19.0.0.0 certification for Red Hat Enterprise Linux 8 on Linux x86-64. In accordance with the Oracle certification, this solution uses Red Hat Enterprise Linux 8.2 with kernel version 4.18.0-193.el8.x86_64 for the guest operating system of the database VMs and Oracle RAC Enterprise Edition 19cRU7 for the Oracle RAC database.
The Oracle 19cRU7 stack consists of Oracle GI 19cRU7 and Oracle RAC 19cRU7.
Because the base versions of the Oracle GI 19c and Oracle RAC database 19c software were released before the Red Hat Enterprise Linux 8.x certification, install the GI 19c and RAC 19c base software and then apply patches to upgrade both to 19cRU7.
The following table shows these software patches and their corresponding Oracle support
document IDs:
Table 12. Oracle GI 19c and Oracle RAC 19c patch list
Download the required software images from the Oracle software download site and the
patch images from the Oracle support site, which are listed in the following table:
Figure 10. Reuse private and public keys existing in the user home
9. Select Reuse private and public keys existing in the user home and click
Setup to set up the passwordless SSH connectivity:
This selection lasts through the remainder of the Oracle GI 19cRU7 installation
and configuration.
Figure 11. Reuse private and public keys existing in the user home
3. Select Reuse private and public keys existing in the user home and click
Test.
The following message is displayed:
6. Apply the patch to upgrade the Oracle RAC home from 19c base version to
19cRU7 by following the instructions in Oracle Support Document 30899722.8.
Run the opatchauto command as the Linux root user on each RAC node.
7. Follow the instructions in the 30805684 readme file to apply the patch to Oracle
RAC home on each node.
Creating the Oracle 19c RAC PROD database
Using the Database Configuration Assistant (DBCA), we created a two-node 19c RAC database by distributing the constituent files as shown in Table 11:
• DATA disk group: Database datafiles and control files
• REDO disk group: Online redo logs
• TEMP disk group: TEMP file
• FRA disk group: Archive log files
For the database initialization parameter settings, see Appendix B: Oracle RAC Database
Initialization Settings.
Snapshot database
Introduction
PowerStore storage arrays enable you to create snapshots of storage volumes. These
snapshots provide point-in-time copies of the data that is stored in the volumes and can
be used as a local data protection solution within a PowerStore system. Snapshots are
not a full copy of the volumes. Instead, they consist of pointers to the data blocks and use
redirect-on-write technology. Accordingly, snapshots are space-efficient and do not
consume any extra space until either the new data is written to the thin clone that is
created from them or new data is written to the source database.
Hosts cannot access snapshots unless a thin clone is created from the snapshot and
presented to the host.
As shown in the following figure, we took a snapshot of the PROD database, created a
thin clone of the snapshot, and finally mounted the thin clone to the DEV environment as
the DEV database:
Figure 13. Taking a snapshot of the PROD database for a DEV database
Taking the snapshot at the volume-group level ensures that the snapshots of all datafiles, control files, redo log files, and archive log files are consistent, so that the PROD database snapshot is a consistent point-in-time copy of the PROD database.
Create a snapshot of the PROD database
To create a PROD database snapshot, follow these steps:
1. Put the RAC database in backup mode by running the following SQL command:
SQL> ALTER DATABASE BEGIN BACKUP;
2. Log in to PowerStore Manager and select Storage > Volume Groups >
ORA_DATA volume group.
The following figure is displayed:
Creating a thin clone of the snapshot
We created the snapshot of the PROD database as a point-in-time consistent copy of the PROD database. The snapshot can be mounted to the DEV database VM that hosts a DEV or TEST database. Because a host cannot access the snapshot itself, we
created a thin clone of the snapshot and mounted the thin clone to the DEV database VM
host. As shown in the following figure, we created the RAC database snapshot based on
the ORA_DATA storage group, which consists of DATA, FRA, and REDO volumes. To
mount the snapshot to the DEV database nodes, we created a clone of the snapshot
called ORA_DATA_Thin_clone.
2. From the MORE ACTIONS dropdown menu, select Create Thin Clone Using
Snapshot.
3. Complete the Create Thin Clone wizard and click Clone.
A thin clone of the snapshot is created, as shown in the following figure:
Ensure that you enable the option to Apply write-order consistency to protect all volume group members, as shown in the following figure:
Figure 19. Apply write-order consistency to protect all volume group members
Creating a DEV database from the PROD snapshot
After creating a thin clone from the snapshot, you can create a DEV database that is based on the snapshot by mounting the thin clone to both the DEV VMs through their ESXi hosts.
To create a DEV database using the thin clone snapshot of ORA_DATA:
1. Create the Oracle 19cRU7 RAC configuration on both the DEV VMs (DEV_RAC
DB Node1 and DEV_DB Node2) by following the steps you used for the PROD
RAC database (see Deploying Oracle 19c RAC with Red Hat Enterprise Linux 8.2
on vSphere 7).
2. In PowerStore Manager, map the thin clone volumes to the host group of two
DEV ESXi hosts. The following table shows all the DEV database volumes or
volume groups that are mapped to DEV hosts. The volumes in VM_OCR and
ORA_TEMP are the storage volumes. The volumes in the ORA_Thin_clone
volume group are the thin clone snapshots of the corresponding storage volumes
in the PROD database.
Table 14. DEV database volumes
3. Present all the volumes that correspond to these thin clones to both the DEV
database VMs as RDM disks.
4. Add the corresponding entries for the new RDM disk devices to the /etc/udev/rules.d/95_oracleasm.rules script.
For example, the entry for the snapshot thin clone of the PROD_DATA1 is:
KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block",
PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="368ccf0980009e5e46f1b8a8486185d3a",
SYMLINK+="oracleasm/disks/ora-data1", OWNER="grid",
GROUP="asmadmin", MODE="0660"
7. Create a TEMPDEV disk group that uses the DEV_TEMP volume by running the
following SQL command in the ASM instance:
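A minimal sketch of such a command, assuming the DEV_TEMP volume is exposed through a UDEV alias such as /dev/oracleasm/disks/ora-tempdev, is:
SQL> CREATE DISKGROUP TEMPDEV EXTERNAL REDUNDANCY
       DISK '/dev/oracleasm/disks/ora-tempdev';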
Note: Because these devices are essentially a point-in-time copy of the original PROD database volumes, each volume has the same metadata as the corresponding PROD database volume, such as the ASM disk group name and ASM disk name.
9. Rename the ASM disk groups and disks for the DEV environment (for example, DATA becomes DATADEV, REDO becomes REDODEV, and FRA becomes FRADEV), mounting each renamed disk group in restricted mode as required for the rename:
$asmcmd mount --restrict DATADEV
$asmcmd mount --restrict REDODEV
$asmcmd mount --restrict FRADEV
9. Change the ASM disk group names in the file paths for all the database files,
control files, and redo logs.
Note: You will need to change these names in all references to file paths and in the
destinations setting in the spfile. These file paths reference the old disk group names,
which are no longer valid.
For example, you can rename the database file by running the following SQL
command:
SQL>alter database rename file '+DATA/PROD/DATAFILE/users.261.986035293'
to '+DATADEV/PROD/DATAFILE/users.261.986035293';
10. Mount the database and take it out of backup mode by running:
SQL> ALTER DATABASE END BACKUP;
11. Drop all the old temp files that are pointing to the old TEMP tablespace and
create a TEMP tablespace on the new TEMP disk group. Then open the
database by running:
SQL> ALTER DATABASE OPEN;
12. Change the database name and the DBID by using the DBNEWID utility. This
utility lets you assign a new database name and DBID to replace the name and DBID that were inherited from the PROD database. You must also change the db_name parameter in the spfile.
13. Open the new database by running the ALTER DATABASE OPEN RESETLOGS command. Disable archive log mode if the new database is for development and archived logs are not required. Create a TEMP tablespace on the new TEMP disk group. (A consolidated example of steps 12 and 13 follows this procedure.)
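A minimal single-instance sketch of steps 12 and 13 follows; the new database name devdb is a placeholder, and in a RAC configuration the cluster_database parameter must be set to FALSE before running the DBNEWID (nid) utility and set back to TRUE afterward:
# Run nid with the database mounted exclusively; nid changes the DB name and DBID, then shuts the database down
SQL> STARTUP MOUNT;
$ nid TARGET=SYS DBNAME=devdb
# Update db_name in the spfile before mounting under the new name
SQL> STARTUP NOMOUNT;
SQL> ALTER SYSTEM SET DB_NAME=devdb SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN RESETLOGS;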
Virtualized database high availability
vMotion migration
vMotion migration enables you to migrate a VM from one system resource to another. This migration can be a cold migration or a hot migration. With cold migration, the
VM is taken offline to perform the migration. With hot migration, the VM remains
operational, as does the application running on the VM guest operating system.
As a part of the prerequisites for vMotion migration, the storage that contains the virtual
machine disks must be shared between the source host and target hosts.
The shared storage and network design in this solution meets these requirements for vMotion migration. Both ESXi hosts meet the shared storage requirement: all the storage volumes, including the VM operating system volumes and the RAC database volumes (OCR, TEMP, DATA, REDO, and FRA), are on PowerStore storage that is shared by the source and target ESXi hosts. Both source and target hosts are connected through the vMotion network.
Live migration of the Oracle instance for planned downtime
For a two-node Oracle RAC database with one instance running on each of two VMs on separate ESXi hosts, you can migrate one VM with its Oracle RAC database instance from one ESXi host to another. During a cold migration, the VM is offline and the Oracle RAC database instance that is running in the VM guest is down. During a hot migration, both the VM and the Oracle RAC database instance are online and operational. Use the hot migration feature to avoid planned downtime of the database instance during server
hardware or software maintenance of the ESXi host by moving a live VM with its
operational Oracle database instance to a different ESXi host.
Hot migration
To launch a hot migration by vMotion:
1. Log in to vCenter and locate the VM that you want to migrate.
2. Right-click the VM and select Migrate.
The migration options are displayed, as shown in the following figure:
The following figures show the database throughput transactions per minute (TPM)
before, during, and after vMotion migrations. As shown, no performance change occurred
due to the vMotion operation.
Figure 24. HammerDB throughput TPM before and after vMotion starts
VM2 migration
The following figures show the VM2 migration.
Before the migration, VM2 was running on ESXi host2 (IP address ending with 23):
After the migration, VM2 was moved to ESXi host1 (IP address ending with 22):
CPU utilization
We also reviewed the CPU utilization changes for both ESXi hosts.
The following figure shows the CPU utilization of ESXi host2 (with an IP address ending in
23) as:
• 40% before migration
• 70% during migration
• 1.8% after migration
The following figure shows the CPU utilization of ESXi host1 (with an IP address ending
22), the host to which the VM was migrated, as:
• 35% before migration
• 39% during migration
• 65% after migration
Figure 27. CPU utilization change during the migration: ESXi host1
Conclusion
The PowerStore T platform provides new capabilities to organizations that count on
Oracle RAC for high availability and performance. For example, storage-based snapshots
and thin clones enable multiple Oracle RAC clusters that are connected to a single
PowerStore T array to efficiently access near-instantaneous copies of databases. This
platform is available in a range of configurations, from the entry-level PowerStore 1000T
model to the highly scalable PowerStore 9000T model. The PowerStore T family enables
organizations to have a consistent management experience from PROD systems all the
way to DEV systems.
References
Dell Technologies documentation
The following Dell Technologies documentation provides additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your Dell Technologies representative.
VMware documentation
The following VMware documentation provides additional and relevant information:
• VMware ESXi 7.0 Installation and Setup
• VMware vSphere 7.0 Installation and Setup
• Oracle Databases on VMware Best Practices Guide
Oracle documentation
The following Oracle documentation provides additional and relevant information:
• Oracle Database 19 Installation Guide
• Oracle Real Application Clusters 19c Installation Guide
• Oracle Grid Infrastructure Installation and Upgrade Guide
Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters
Appendix B: Oracle RAC Database Initialization Settings
racdb2.__pga_aggregate_target=21474836480
racdb1.__sga_target=137438953472
racdb2.__sga_target=137438953472
racdb1.__shared_io_pool_size=268435456
racdb2.__shared_io_pool_size=268435456
racdb1.__shared_pool_size=15300820992
racdb2.__shared_pool_size=15300820992
racdb1.__streams_pool_size=0
racdb2.__streams_pool_size=0
racdb1.__unified_pga_pool_size=0
racdb2.__unified_pga_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/racdb/adump'
*.audit_trail='NONE'
*.cluster_database=TRUE
*.compatible='19.0.0'
*.control_files='+DATA/RACDB/CONTROLFILE/current.257.1049809915'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_file_multiblock_read_count=1
*.db_name='racdb'
*.db_writer_processes=8
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racdbXDB)'
*.filesystemio_options='SETALL'
family:dw_helper.instance_mode='read-only'
racdb1.instance_number=1
racdb2.instance_number=2
*.local_listener='-oraagent-dummy-'
*.log_archive_dest_1='LOCATION=+FRA/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.pga_aggregate_target=20360m
*.processes=2000
*.remote_login_passwordfile='exclusive'
*.sga_max_size=137438953472
*.sga_target=137438953472
racdb2.thread=2
racdb1.thread=1
*.undo_tablespace='UNDOTBS1'
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'