
Oracle RAC Database on PowerStore T Storage

Enabled by Dell EMC PowerEdge R740/R640, Red Hat Enterprise Linux 8.2, Oracle RAC database 19c R7, VMware vSphere 7

January 2021

H18617.1

Design Guide

Abstract
This design guide describes best practices for designing and implementing a VMware-based virtualized Oracle Real Application Clusters database solution using Dell EMC PowerStore T storage. The solution incorporates Dell EMC PowerEdge servers and the VMware vSphere virtualization platform.

Dell Technologies Solutions


Copyright

The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks
of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.
Published in the USA 01/21 Design Guide H18617.1.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.

Contents
Introduction ..................................................................................................................................... 4

Solution architecture ...................................................................................................................... 6

Solution components...................................................................................................................... 7

Compute design ............................................................................................................................ 10

Network design ............................................................................................................................. 12

Storage design .............................................................................................................................. 16

Oracle 19c RAC database design and deployment .................................................................... 23

Snapshot database ....................................................................................................................... 27

Virtualized database high availability.......................................................................................... 35

Conclusion ..................................................................................................................................... 40

References ..................................................................................................................................... 41

Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters .................... 42

Appendix B: Oracle RAC Database Initialization Settings ........................................................ 44

Introduction

Business case

Everyone from the boardroom to the server room is talking about the value of data. To harvest the maximum value from their data, businesses worldwide have invested in an impressive array of analytical tools for performing analysis, communicating insights, and informing decisions. The abundance of data sources combined with the velocity of new data generation has created a complex landscape of data management silos.

Many companies are considering the latest generation of scalable storage solutions to
help address these complexities. Storage systems that are equipped with data reduction
technologies can achieve greater value by controlling cost in the face of the ongoing
demand for more space. Dell Technologies systems with always-on deduplication and
compression enable organizations to accommodate the demands of data growth long
after the initial storage investment.

The Dell EMC PowerStore T family of storage arrays offers features that support
increased performance, scalability, and storage efficiency. These arrays can enhance the
value of enterprise-class database management solutions such as Oracle 19c Real
Application Clusters (RAC). The benefits of Oracle RAC include the ability to scale
workloads across multiple active servers. This ability improves system performance and
enhances the server cluster’s ability to continue database services even during a system
failure. PowerStore T arrays hosting Oracle RAC-supported applications enable
organizations to lower the costs of providing high availability and accommodate the
demands of data growth over the life cycle of applications.

The productivity of IT storage administrators managing highly available database


environments depends significantly on the integration between their storage management
tools and the database platform features. One of the ways companies can increase the
productivity of their IT professionals is by automating and orchestrating a highly available
database system. PowerStore T storage automation can quickly create copies of
application data by using the snapshot and cloning services that are built into the array
architecture. PowerStore T automation provides IT administrators with orchestration that
enables full application life-cycle support, including provisioning of on-demand databases
and point-in-time consistent refreshes.

The traditional approach to provisioning and refreshing database application


environments uses the native backup and restore features of popular database
management systems such as Oracle database management systems. According to our
research, this traditional approach takes considerably longer and is more likely to fail or
produce errors. PowerStore T storage systems increase productivity by providing a
complete set of array-based snapshot and cloning features. These features are available
through REST APIs for automation and can be integrated with other code to orchestrate
complex database deployment and refresh tasks. All PowerStore T models also provide a
common set of enterprise features for configuring and managing an Oracle RAC system.
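As an illustration of the REST-based automation described above, the following sketch shows how an array snapshot might be requested with curl. The endpoint paths and header names follow the PowerStore REST API as we understand it; the management address, credentials, and volume-group ID are hypothetical placeholders, so verify everything against the PowerStore REST API Reference for your array's software version.

```shell
# Placeholders - substitute your own values.
PSTORE=https://powerstore-mgmt.example.local
VG_ID=00000000-0000-0000-0000-000000000000   # volume group holding the database volumes

# Authenticate and capture the session token returned in the
# DELL-EMC-TOKEN response header (endpoint and header names assumed
# from the PowerStore REST API; confirm for your version).
TOKEN=$(curl -sk -u admin:password -D - "$PSTORE/api/rest/login_session" -o /dev/null \
        | awk -F': ' 'tolower($1)=="dell-emc-token" {print $2}' | tr -d '\r')

# Request a consistent snapshot of the whole volume group.
curl -sk -X POST "$PSTORE/api/rest/volume_group/$VG_ID/snapshot" \
     -H "DELL-EMC-TOKEN: $TOKEN" -H "Content-Type: application/json" \
     -d '{"name": "prod_db_refresh_snap"}'
```

The same calls can be issued from any orchestration tool that can make HTTPS requests, which is how the snapshot and refresh tasks described above are scripted.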

Document purpose

This guide describes how to design and implement a VMware-based virtualized Oracle RAC database solution that uses PowerStore T storage. The guide provides an overview of the architecture design of the entire stack, including the compute, network, and storage design. It describes best practices for deploying an Oracle RAC database on PowerStore T storage in a virtualized environment and how to leverage PowerStore snapshot features to build an Oracle RAC database in a test and development environment. The guide also describes the high availability that this solution provides by leveraging VMware vSphere vMotion to avoid planned downtime.

Audience

This guide is intended for infrastructure architects, storage administrators, virtualization administrators, system administrators, IT managers, and other personnel who evaluate, acquire, manage, maintain, or operate Oracle database environments.

Deployment workflow

This design for Oracle databases with PowerStore storage encompasses the following phases:
• Phase I, Hardware stack setup and configuration—Setting up all the component hardware, including PowerEdge servers, Ethernet network switches and cable connections, the PowerStore storage array, and Fibre Channel (FC) network switches and cable connections.
• Phase II, Installation and configuration—Establishing the environment for running Oracle databases on top of the hardware stack that is set up in Phase I. Phase II tasks include setting up VMware ESXi on management and database servers, installing VMware vCenter Server Appliance (VCSA), creating and configuring PowerStore storage volumes, creating virtual machines (VMs), and configuring storage and networks for the VMs. Tasks in this phase also include creating snapshots manually on the PowerStore storage array.
• Phase III, Oracle Grid and database configuration—Establishing the Oracle 19c RAC production database environment on the infrastructure that was set up in Phase II. Tasks in this phase include:
▪ Preparing to install Oracle Grid Infrastructure and Oracle database software
▪ Installing Oracle Grid Infrastructure and Oracle database software in each environment
▪ Creating and configuring the Oracle databases in each environment
▪ For snapshot databases, taking PowerStore snapshots of the Oracle production database volumes and integrating the database configuration with the snapshot database server to bring up the snapshot database

We value your feedback

Dell Technologies and the authors of this document welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by email or provide your comments by completing our documentation survey.

Authors: Kai Yu, Ramamohan Reddy, Naveen Iyengar

Contributor: Aighne Kearney

Note: For links to additional documentation for this solution, see the Dell Technologies Solutions
Info Hub for Oracle.

Solution architecture

Overview

This section provides an overview of the architecture and design of the Dell EMC Oracle RAC database solution on Dell EMC PowerStore T storage arrays.

This solution design implements the Oracle RAC database in a VMware virtualized
environment. It leverages the PowerStore snapshot feature to enable the creation of a
snapshot database for a development (DEV) database.

For this solution, the Dell Technologies engineering team tested and validated the
following Oracle database environments:
• Two-node Oracle 19c Oracle RAC online transaction processing (OLTP) production
(PROD) database running in a VMware environment
• Two-node Oracle 19c Oracle RAC OLTP DEV database that we created based on
the snapshot of the two-node RAC PROD database

Logical architecture

The following figure shows the logical architecture of the Oracle RAC database environment. The figure includes the infrastructure component layers, the software layers, and the management infrastructure and associated tools. The infrastructure components include the server layer, network, and storage.

Figure 1. Solution architecture and components

The server layer consists of:


• PROD virtual database servers—Two R740 PowerEdge servers running ESXi
7.0 hypervisors for two VMs running the Red Hat Enterprise Linux 8.2 guest
operating system. The PROD Oracle 19c RAC database is deployed on these
two VMs.


• DEV database servers—Two R640 PowerEdge servers running ESXi 7.0 hypervisors for two VMs running the Red Hat Enterprise Linux 8.2 guest operating system. The Oracle 19c RAC DEV database is deployed on these two VMs.
• R640 management/tool server—A R640 server running VMware ESXi 7.0 as
the hypervisor to host multiple VMs, as follows:
▪ VMware vCenter Server Appliance (VCSA) deployed as a VM.
▪ VM with the Red Hat Enterprise Linux 8.2 guest operating system installed to
run the HammerDB test tool.
▪ VM with the Windows guest operating system installed to run Dell EMC
AppSync software for snapshot creation and management automation.
The network layer consists of the following types of connectivity:
• Management network between vCenter and ESXi hosts. This network also
connects the iDRAC Ethernet ports of all the servers.
• vMotion network for VM migration using vMotion between ESXi hosts.
• Public network for connecting Oracle database instances.
• Private network for internode communication between Oracle RAC nodes.
• SAN FC network that connects database servers and PowerStore SAN storage.

The following switches are used to implement the networks:


• One 1 GbE switch for the management network
• Two 10 GbE ToR switches for the Oracle private network
• Two 16/32 Gbps FC switches for FC SAN connectivity

The storage layer consists of the PowerStore T array as the FC SAN storage for the
PROD and DEV databases and the data stores for VM operating system volumes. For a
description of the storage volume design for this virtualized Oracle RAC database
environment, see Oracle 19c RAC database design and deployment.

Solution components

Hardware components

Server hardware
The solution incorporates two PowerEdge R740 and two PowerEdge R640 servers. The following tables describe these components:

Table 1. PowerEdge R740 hardware component details

Component Details
Processors 2 x Intel Xeon Gold 6246R CPU @ 3.10 GHz, 18C/36T
Memory 1.5 TB, 24 x 64 GB 2666 MT/s
NIC Broadcom 5720 Quad-Port GbE Rack NDC, 2 x Intel 10 GbE 2P X710 adapter
HBA 2 x 32 Gb Emulex LPe35000 FC adapter

Table 2. PowerEdge R640 hardware component details

Component Details
Processors 2 x Intel Xeon Gold 5118 CPU @ 2.30 GHz, 12C/24T
Memory 768 GB, 12 x 64 GB 2666 MT/s
NIC Broadcom 2P 25 GbE SFP rNDC, Intel 10 GbE 2P X520 adapter
HBA 1 x 16 Gb Emulex LPe32000 FC adapter

Storage hardware
This solution uses PowerStore T storage with a base enclosure, which has 21 Non-Volatile Memory Express (NVMe) SSD drives and two NVMe NVRAM drives that are designed for the PowerStore caching system. The following table describes the PowerStore T hardware components:

Table 3. PowerStore T hardware components

Component Details
PowerStore model PowerStore T
Processor 4 x Intel Xeon Silver 4108 CPU @ 1.80 GHz, 16C/32T per node
Memory 192 GB per node and 2 x 8 GB NVMe NVRAM for caching
Drives 21 NVMe SSD drives, each 1.92 TB
Total capacity 28.3 TB


Software components

The following table shows the software components in this solution:

Table 4. Software components

Component Description
Hypervisor operating system VMware ESXi 7.0 (Dell EMC customized ISO image) with vCenter 7.0
Guest operating systems (for PROD and DEV database nodes) Red Hat Enterprise Linux 8.2 with kernel 4.18.0-193.el8.x86_64
Oracle Grid Infrastructure Oracle Grid Infrastructure 19c (19.0.0.0) with Release Update (RU) 19.7
Oracle Database Enterprise Edition Oracle Real Application Clusters 19c (19.0.0.0) with RU 19.7
Dell EMC PowerStore Operating system version 1.0.1.0.5.002

Data center requirements

To support this solution, the data center environment must have:
• Ethernet switching infrastructure that is capable of layer 3 routing and supports trunking, layer 2 VLAN ID and tagging (IEEE 802.1Q), Link Aggregation Control Protocol (LACP, IEEE 802.3ad), and the Spanning Tree PortFast feature.
• Domain Name System (DNS) and Network Time Protocol (NTP) services. A Dynamic Host Configuration Protocol (DHCP) server is recommended.
• Sufficient power and cooling to support all components. To determine accurate power and cooling needs, see the product documentation for the corresponding component.

Compute design

Physical servers and ESXi hosts

To configure the physical servers that host the PROD and snapshot database VMs:
• Enable the following BIOS settings for the PowerEdge servers:
▪ Memory: Default options
▪ Processor: Default options
▪ System Profile: Performance (the nondefault option)
• Create a single virtual disk in a RAID 1 configuration with two local disks (SSDs or HDDs) per server to install the bare-metal operating system (ESXi 7.0).
• Install ESXi 7.0 by using the Dell EMC customized ISO image (Dell version A00, build number 16324942). Download the image from Dell Technologies Online Support at VMware ESXi 7.x.
• Create two VMs for the PROD Oracle RAC database, one on each of the PowerEdge R740 servers, and two VMs for the snapshot Oracle RAC database, one on each of the PowerEdge R640 servers.
• Configure, monitor, and maintain the ESXi hosts, virtual networking, and the VMs by using the VMware vSphere web client and VCSA, which is deployed as a VM on the management server.
• Zone two dual-port 32 Gb/s HBAs—four initiators in total—and configure these HBAs with the PowerStore T front-end FC ports for high bandwidth, load balancing, and highly available SAN traffic.
• Set the LUN queue depth to 128 for the Emulex HBAs that are used in both PROD and snapshot servers.
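On ESXi, the queue-depth setting above is applied through the Emulex driver module on each host. The following is a minimal command sketch, assuming the standard lpfc driver used by Emulex LPe-series HBAs; verify the module and parameter names on your ESXi build before applying, and note that the change requires a host reboot.

```shell
# Set the per-LUN queue depth for the Emulex (lpfc) driver to 128.
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=128"

# Confirm the value that will take effect after the next reboot:
esxcli system module parameters list -m lpfc | grep lun_queue_depth
```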
Multipath configuration
We configured multipathing on the ESXi 7.0 host according to the following best practices:
• Use vSphere Native Multipathing (NMP) as the multipathing software.
• Retain the default selection of round-robin for the native path selection policy (PSP)
on the PowerStore T volumes that are presented to the ESXi host.
• Change the NMP round-robin path switching frequency from the default value of
1,000 I/O packets to 1 packet.
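The multipathing best practices above can be applied per device, or once per host through a SATP claim rule so that newly presented PowerStore volumes inherit the policy automatically. The sketch below assumes the DellEMC/PowerStore vendor and model strings recommended in the PowerStore host configuration documentation, and naa.xxxx is a placeholder device identifier; confirm both against your environment.

```shell
# Per-device: set round robin on an existing PowerStore volume and
# change the path-switching frequency from 1,000 I/Os to 1 I/O.
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
      --device naa.xxxx --type iops --iops 1

# Per-host: add a claim rule so future PowerStore volumes default
# to round robin with IOPS=1.
esxcli storage nmp satp rule add -s VMW_SATP_ALUA \
      -V DellEMC -M PowerStore -P VMW_PSP_RR -O "iops=1"
```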

Virtual machines

We used the following design principles to create the VMs:
• SCSI controllers—We created multiple SCSI controllers to optimize and balance the input/output (I/O) for the different database disks, as shown in the following table. We chose the controller type VMware Paravirtual for optimal performance.

Table 5. SCSI controller properties in the database VMs

Controller Purpose SCSI bus sharing Type
SCSI 0 Guest operating system disk None VMware Paravirtual
SCSI 1 Oracle DATA disks Physical VMware Paravirtual
SCSI 2 Oracle REDO disks Physical VMware Paravirtual
SCSI 3 Oracle OCR, FRA, TEMP disks Physical VMware Paravirtual

• Hard disk drives—We assigned the following properties to all database-related virtual disks (such as DATA, REDO, FRA, OCR, and TEMP):
▪ Raw Device Mapping (RDM)—All Oracle-related disks that are presented to the ESXi host from the PowerStore T storage array are mapped directly as raw devices to the database VM.
▪ Virtual Device Node—For load balancing and optimal performance, the SCSI controllers are assigned as described in Table 5.
• VM vCPU and vMem—We assigned virtual CPU (vCPU) and virtual memory (vMem) amounts to the PROD and snapshot database VMs, as shown in the following table:
Table 6. VM configuration vCPU and vMem details

VM Number of vCPUs vMem reservation (GB) vMem total (GB) vMem limit (MB)
Production 8 256 256 Unlimited
Snapshot 8 256 256 Unlimited

Red Hat Enterprise Linux 8 operating system

We configured Red Hat Enterprise Linux 8.2 with kernel 4.18.0-193.el8.x86_64 for the Oracle RAC database by following these steps:
1. Before starting the installation of the Oracle Grid Infrastructure and Oracle RAC database, set the operating system environment variable CV_ASSUME_DISTID by running export CV_ASSUME_DISTID=OL7.
2. Create a text file containing the Oracle database-related Linux kernel parameter settings in /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf.
3. Run sysctl --system to load the settings into the active kernel. See Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters.
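The steps above can be sketched as a short shell session. The kernel parameter values shown are the common Oracle 19c preinstall recommendations rather than values taken from Appendix A, so treat them as illustrative; the file is written under /tmp here so the sketch can run without root, while on a real node it belongs in /etc/sysctl.d.

```shell
# Step 1: work around the RHEL 8 distribution check in the 19c installers.
export CV_ASSUME_DISTID=OL7

# Step 2: create the kernel-parameter file (demo path; the real path is
# /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf).
CONF=/tmp/99-oracle-database-preinstall-19c-sysctl.conf
cat > "$CONF" <<'EOF'
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

# Step 3: on the real node (as root), load the settings into the
# running kernel:
#   sysctl --system
echo "wrote $(wc -l < "$CONF") parameters to $CONF"
```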

Network design

Overview

This section describes the VMware ESXi-based virtual and physical LAN design and best practices that the solution implements.

Note: This section provides design examples and best practices for a two-node Oracle RAC
database PROD environment. The same network design best practices can be used and scaled
out for other Oracle database environments such as DEV or scaled up for an Oracle RAC
environment with more nodes.

vDS for network traffic

We created three separate vSphere distributed switches (vDS), one each for:
• VMware ESXi host management network traffic
• Oracle database public, vSphere vMotion, and applications network traffic
• Oracle RAC database private interconnect network traffic

LAN design for ESXi host management

We implemented a virtual and 1 GbE physical network design for ESXi-based server infrastructure management. The solution uses a dedicated management and applications server (R630-Mgmt-App-Srvr) to deploy the centralized VCSA and vDS network.

As the following figure shows, there is redundancy at every level for high availability and network load balance. This redundancy is achieved by creating multiple distributed virtual uplinks (DVUplinks) and redundant, dedicated 1 GbE network adapter ports within each ESXi host. These ports in turn are connected to redundant ToR 1 GbE switches. The management traffic is separated by a dedicated VLAN network (this guide uses VLAN 379 for illustration purposes) for both security and network performance.

Figure 2. Network design for ESXi hosts management


LAN design for Oracle public, vMotion, and applications traffic

We implemented a virtual and 25 GbE physical network for the Oracle public, vMotion, and HammerDB benchmarking applications traffic.

As the following figure shows, there is redundancy at every level for high availability and network load balance:

• Multiple distributed port groups (dPG) and DVUplinks


• Redundant, dedicated 25 GbE network adapter ports within each ESXi host
• Redundant ToR and core switches
At the physical database server level (R740-prod-n1 and R740-prod-n2), both the 25 GbE
network ports are available to carry Oracle public and vMotion traffic. The traffic is,
however, logically separated at the vDS level by dedicated VLAN networks (VLAN 382 for
Oracle public traffic, VLAN 400 for vMotion traffic) and dedicated DVUplink paths (Uplink
1 is active for Oracle public traffic while Uplink 2 is active for vMotion traffic). Similarly, the
applications traffic (in this case, HammerDB benchmarking application traffic) has its own
dPG and active/active DVUplink paths to the dedicated management and applications
server (R630-Mgmt-App-Srvr).

Figure 3. Network design for Oracle public, vSphere vMotion, and applications traffic

For performance reasons, we implemented end-to-end jumbo frames for vMotion traffic
by:
• Configuring physical ports with an MTU size of 9,216 bytes on the 25 Gb
Ethernet switches to which the Oracle public and vMotion network adapter ports
are connected


• Creating a vDS with an MTU size of 9,000 bytes for Oracle public and vMotion
(vDS-02-OraPub-vM) traffic
• Creating a VMkernel with an MTU size of 9,000 bytes on each ESXi (database)
server for vMotion traffic
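The VMkernel step above can be performed and verified from the ESXi shell. This is a sketch that assumes vmk1 is the vMotion VMkernel interface and uses a placeholder peer address; the vDS MTU itself is set through vCenter rather than esxcli.

```shell
# Set a 9,000-byte MTU on the vMotion VMkernel interface (vmk1 here).
esxcli network ip interface set -i vmk1 -m 9000

# Verify jumbo frames end to end: 8972 bytes of payload plus 28 bytes
# of IP/ICMP headers equals 9,000; -d forbids fragmentation, so the
# ping only succeeds if every hop passes the full frame.
vmkping -I vmk1 -d -s 8972 192.168.40.12
```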

LAN design for Oracle private interconnect traffic

For security and network performance, we created a dedicated vDS for the Oracle private interconnect traffic. As shown in the following figure, the Oracle private network design is configured with dedicated VLAN networks (VLAN 101 and 102) to separate the interconnect traffic from other network traffic. For high availability and load balance, the design implements multiple distributed port group uplinks and 25 GbE physical network adapter ports that are connected to two separate 25 GbE ToR switches.

Figure 4. Network design for Oracle Private Interconnect traffic

We implemented best practices at each level by:
• Configuring the physical ports on the 25 GbE switches to which the private interconnect network adapter ports are connected as spanning-tree port type edge with an MTU of 9,216 bytes
• Creating the vDS for the private interconnect (vDS-03-OraPrivate) with an MTU size of 9,000 bytes
• Setting MTU = 9000 on the vNICs for the private interconnect inside the database guest VMs
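Inside each database guest VM, the vNIC MTU from the last bullet can be set persistently through NetworkManager on RHEL 8. The interface name below is hypothetical; match it to the vNIC that carries your private interconnect traffic.

```shell
# Raise the MTU on the private interconnect vNIC (interface name
# ens224 is a placeholder) and re-activate the connection.
nmcli connection modify ens224 802-3-ethernet.mtu 9000
nmcli connection up ens224

# Quick check that the running interface picked up the new MTU:
ip link show ens224 | grep 'mtu 9000'
```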

Network design for Oracle Grid Infrastructure and Oracle RAC

We set up the network interfaces for the Oracle Grid Infrastructure and Oracle RAC database as follows:
• Oracle public IP addresses and the database servers' FQDNs were registered in the DNS server.


• One SCAN NAME and three SCAN virtual IP addresses were registered in the
DNS server.
• The DNS server’s name and IP address were registered in the
/etc/resolv.conf file of each of the Oracle RAC database nodes for name
resolution.
• All Oracle private interconnect IP addresses and interface names were
registered in the /etc/hosts file of each Oracle RAC database node.
• Oracle virtual IP (VIP) addresses for each of the Oracle RAC nodes that are in
the same network as the Oracle public IP addresses were provided during the
Oracle Grid installation.
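The name-resolution bullets above translate into ordinary resolv.conf and hosts entries on each RAC node. The sketch below uses entirely hypothetical hostnames, domain, and addresses, and writes to /tmp so it can run anywhere; on a real node the contents belong in /etc/resolv.conf and /etc/hosts.

```shell
# DNS client settings: the DNS server holding the public, VIP, and
# SCAN records (hypothetical address).
cat > /tmp/resolv.conf.example <<'EOF'
search example.local
nameserver 192.168.38.5
EOF

# Private interconnect addresses, registered locally on every node
# (hypothetical addresses on the two interconnect VLANs for a
# two-node cluster).
cat > /tmp/hosts.example <<'EOF'
192.168.101.11  prod-n1-priv1
192.168.102.11  prod-n1-priv2
192.168.101.12  prod-n2-priv1
192.168.102.12  prod-n2-priv2
EOF
echo "example files written"
```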

SAN design for PowerStore T

The following figure shows the recommended physical and logical zoning of the two-node Oracle RAC database nodes to the PowerStore T appliance. The following best practices are applied in the SAN design and implementation:
• Use two separate fabrics or FC switches for high availability and load balance.
• Use a single initiator zoning scheme—one database server initiator is zoned
with one storage target.
• Each database server or ESXi host has four paths per storage volume.
• Four HBA ports in each database server or ESXi host are zoned with two
storage ports per storage node for maximum high availability and load balance.

Figure 5. SAN design: Physical and logical view of FC zoning

For additional SAN guidelines and zoning best practices with PowerStore arrays, see the
Dell EMC PowerStore Host Configuration Guide.

Storage design

Introduction

The Dell EMC PowerStore system is designed with a purpose-built, 2U, two-node Intel Xeon platform. The array is available in two different product models:
• PowerStore T—These models support multiple storage products. The platform starts with the PowerStore 1000T or 1000X model and scales up to the PowerStore 9000T or 9000X model. PowerStore T models are bare-metal, unified storage arrays that can service block, file, and vVol resources along with numerous data services and efficiencies. PowerStore T models are ideal for traditional and modern workloads such as relational databases.
• PowerStore X—These model appliances enable running applications directly on the appliance through the AppsON capability. A native VMware ESXi layer runs embedded applications alongside the PowerStore operating system, all in the form of VMs. This feature is in addition to the traditional storage functionality of PowerStore X model appliances, which support serving external block and vVol storage to servers over FC and iSCSI.

This Oracle RAC database solution uses the PowerStore T model with a base enclosure
that has 21 NVMe SSD drives and two extreme high-performance NVMe NVRAM drives
that are designed for the PowerStore caching system.
For the Oracle database storage volumes and related VM operating system storage
volumes, we applied the following design principles to create volumes:
• Three volumes for OCR for the Oracle RAC databases clusterware
• Four volumes for DATA
• Four volumes for REDO
• One volume each for FRA and TEMP

We used the storage volumes design shown in the following figure for the PROD RAC
database on two nodes (on the left in the figure) and the DEV RAC database on two
nodes (on the right in the figure):


Figure 6. Storage design of the PROD and DEV databases

Production database storage design

For the PROD Oracle RAC database, we created three volume groups:
• VM_OCR volume group with five volumes—Two 250 GB volumes for the VM operating systems and three 50 GB volumes for Oracle clusterware, including the Oracle Cluster Registry (OCR), the voting disk, and the Grid Infrastructure Management Repository (GIMR).
• ORA_DATA volume group with nine volumes—Four 600 GB volumes for the PROD database datafiles, four 25 GB REDO volumes, and one 100 GB volume for FRA.
• ORA_TEMP volume group with one volume—One 500 GB volume for the database TEMP tablespace, which is required for HammerDB TPC-C schema creation and index creation.

We created a dedicated ORA_DATA volume group that included DATA volumes, REDO
volumes, and FRA volumes and put the TEMP in a separate volume group, ORA_TEMP.
This design meets the requirement for a consistent snapshot of all the DATA, REDO, and
FRA volumes to enable the creation of a snapshot database based on these volumes.
The TEMP volume is not required to be a part of the snapshot.

The following table shows the volumes for the PROD Oracle RAC database:


Table 7. PROD database volume design

Volume group Name Purpose Size (GB) Host group
VM_OCR PROD_VM(1-2) Guest VM operating system 2 x 250 ORA_PROD_HOST_GROUP
VM_OCR PROD_OCR(1-3) Voting disk/GIMR 3 x 50 ORA_PROD_HOST_GROUP
ORA_DATA PROD_DATA(1-4) Data files 4 x 600 ORA_PROD_HOST_GROUP
ORA_DATA PROD_REDO(1-4) REDO logs 4 x 25 ORA_PROD_HOST_GROUP
ORA_DATA PROD_FRA FRA 100 ORA_PROD_HOST_GROUP
ORA_TEMP PROD_TEMP TEMP 500 ORA_PROD_HOST_GROUP

We created a host group to map these production database volumes to two PROD
database servers. As shown in the following figure, this host group consists of two PROD
database hosts, R740_92J-21 and R740-92-J19:

Figure 7. PROD host group

Each of the PROD database hosts has four initiators that are based on the HBAs installed
on each of the hosts, as shown in the following figure:


Figure 8. Two PROD hosts with their FC initiators


Development database storage design

For the DEV Oracle RAC database, we created six volumes in the corresponding volume groups:
• VM_OCR volume group: Five volumes, including two 250 GB volumes, one for each VM operating system, and three 50 GB volumes for Oracle clusterware (the Oracle Cluster Registry (OCR), the voting disk, and GIMR)
• ORA_TEMP volume group: One 100 GB DEV_TEMP volume for the TEMP tablespace

The following table shows the volumes that we added:

Note: The table does not include the DATA, FRA, and REDO volumes of the DEV databases that
are based on the snapshot of the corresponding volumes of the PROD database (see Creating a
snapshot of the PROD database ).

Table 8. DEV database volume design

Volume group  Name          Purpose                    Size (GB)  Host group
VM_OCR        DEV_VM(1-2)   Guest VM operating system  2x250      ORA_DEV_HOST_GROUP
VM_OCR        DEV_OCR(1-3)  Voting disk/GIMR           3x50       ORA_DEV_HOST_GROUP
ORA_TEMP      DEV_TEMP      TEMP                       100        ORA_DEV_HOST_GROUP

Like the volumes for the PROD database, the DEV database volumes are assigned to ORA_DEV_HOST_GROUP to map them to the two DEV database servers. As shown in the following figure, the host group consists of two DEV database hosts, R640_92J-30 and R640-92-29, each of which has two initiators that are based on the HBAs that we installed on each of the DEV database hosts:

Figure 9. DEV host group

VM operating system data store and RDM devices design

To create the database VMs, we created four Virtual Machine File System (VMFS) data stores to host the VM operating systems by using the storage volumes that are presented to the database server ESXi hosts. The following table shows the details:
Table 9. Operating system volumes for PROD and DEV VMs

Volume name  Purpose      Data store name  Accessible by ESXi hosts
PROD_VM_1    PROD VM1 OS  Prod_VM1_OS_DS   PROD Host1, PROD Host2
PROD_VM_2    PROD VM2 OS  Prod_VM2_OS_DS   PROD Host1, PROD Host2
DEV_VM_1     DEV VM1 OS   DEV_VM1_OS_DS    DEV Host1, DEV Host2
DEV_VM_2     DEV VM2 OS   DEV_VM2_OS_DS    DEV Host1, DEV Host2

Both ESXi hosts must have access to the VM OS data store volumes. For example, the two PROD hosts (PROD Host1 and PROD Host2) share access to both PROD VM OS data stores (Prod_VM1_OS_DS and Prod_VM2_OS_DS). The two database VMs built on these data stores can then be migrated between the two PROD ESXi hosts (see Virtualized database high availability).

All Oracle-related disks that are presented to the ESXi host from the PowerStore storage
array are mapped as raw devices (RDM) to the database VM. To optimize and balance
the I/O for the different database disks, we also created multiple SCSI controllers, each of
which is responsible for the I/O of a set of disks.

The following table shows the VM hard disk configuration with the corresponding SCSI controllers in both PROD database VMs. Aside from hard disk 1, all the hard disks are RDM, with multi-writer sharing enabled on the two database VMs. Hard disk 1, the guest operating system disk, is configured as Thick Provision Lazy Zeroed on VMFS.

Table 10. VM hard disk configuration

Hard disk       Purpose           Size (GB)  SCSI controller  Disk type  Sharing       Compatibility
Hard disk 1     Guest OS disk     250        0                VMFS       No sharing    -
Hard disk 2-4   Voting disk/GIMR  50         0                RDM        Multi-writer  Physical
Hard disk 5-8   DATA              600        1                RDM        Multi-writer  Physical
Hard disk 9-12  REDO              25         2                RDM        Multi-writer  Physical
Hard disk 13    FRA               100        3                RDM        Multi-writer  Physical
Hard disk 14    TEMP              500        3                RDM        Multi-writer  Physical

Operating system devices and Oracle ASM disk group design

After the database volumes are presented to the database VMs as virtual disks, perform the following steps to prepare these virtual disks for Oracle ASM disk group creation:
1. For each virtual disk, create a single partition that spans the entire disk and has a starting offset of 2,048 sectors.
2. Use a UDEV rule to assign Linux 0660 permission and ownership of these disks to the grid user, that is, the owner of the Oracle GI and Oracle ASM instance. In the following code sample, the UDEV rule is set for one of the disks as an entry in the custom UDEV rule script /etc/udev/rules.d/95_oracleasm.rules:
KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block",
PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",


RESULT=="368ccf09800cd6a0eaf106e5a2155cf92",
SYMLINK+="oracleasm/disks/ora-redo4", OWNER="grid",
GROUP="asmadmin", MODE="0660"

The string "368ccf09800cd6a0eaf106e5a2155cf92" is the SCSI ID of the device.

3. Run the udevadm trigger command to set the ownership of the device
/dev/sdj1 as grid:asmadmin with 0660 permission and also create a soft link
alias /dev/oracleasm/disks/ora-redo4 that points to /dev/sdj1. You can
use this soft link alias as the path of this device to create an Oracle ASM disk
group.
4. Repeat steps 2 and 3 to create UDEV rules for all the database disks in the custom UDEV rule script /etc/udev/rules.d/95_oracleasm.rules. Creating these rules creates the device links /dev/oracleasm/disks/ora-XXX, as shown in the following list:
# ls -l /dev/oracleasm/disks
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data2 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data3 -> ../../sdj1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-data4 -> ../../sdk1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-fra -> ../../sdf1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-ocr3 -> ../../sde1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo2 -> ../../sdm1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo3 -> ../../sdn1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-redo4 -> ../../sdo1
lrwxrwxrwx 1 root root 10 Nov 9 10:21 ora-temp -> ../../sdg
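Because the per-disk rule entries differ only in the SCSI ID and the alias, they can be generated rather than hand-written. The following is a minimal shell sketch; the ID/alias pair is the example from step 2, and the partitioning command appears only as a comment because it is destructive:

```shell
# Generate 95_oracleasm.rules entries from "<scsi_id> <alias>" pairs.
# Collect real IDs with: /usr/lib/udev/scsi_id -g -u /dev/sdX
# (Step 1 partitioning for one disk would be:
#   parted -s /dev/sdX mklabel gpt mkpart primary 2048s 100%
# -- destructive, so it is not run here.)
rules=""
while read -r scsi_id alias; do
  line="KERNEL==\"sd[a-z]*[1-9]\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d /dev/\$parent\", RESULT==\"$scsi_id\", SYMLINK+=\"oracleasm/disks/$alias\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
  rules="$rules$line
"
done <<'EOF'
368ccf09800cd6a0eaf106e5a2155cf92 ora-redo4
EOF
printf '%s' "$rules"
```

Append additional "ID alias" lines to the here-document for the remaining disks, then redirect the output into /etc/udev/rules.d/95_oracleasm.rules and run udevadm trigger as in step 3.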

The Oracle ASM disk groups are created based on these raw devices. To use the soft link
alias for the devices, we set the DISK Discovery Path as /dev/oracleasm/disks/*,
as shown in the following table. While the OCR disk group uses the normal redundancy
setting (with triple mirroring), all other disk groups use the external redundancy setting.
We used the coarse striping setting for the DATA, FRA, and OCR disk groups and the
fine-grain striping setting for the REDO and TEMP disk groups.

Table 11. ASM disk group configuration

ASM disk group  Purpose                 ASM striping  Disk group size (GB)  Device alias in /dev/oracleasm/disks        Virtual disk
OCR             OCR, voting disk, GIMR  Coarse        50                    ora-ocr1, ora-ocr2, ora-ocr3                Hard Disk 2-4
DATA            Various database files  Coarse        2400                  ora-data1, ora-data2, ora-data3, ora-data4  Hard Disk 5-8
REDO            Online redo logs        Fine-grain    100                   ora-redo1, ora-redo2, ora-redo3, ora-redo4  Hard Disk 9-12
FRA             Archive logs            Coarse        100                   ora-fra                                     Hard Disk 13
TEMP            Temp files              Fine-grain    500                   ora-temp                                    Hard Disk 14

When ASM rebalances the disk group, a compacting phase at the end of the rebalance
moves the data to the higher-performing tracks of the spinning disks. Because the
PowerStore storage array uses all-flash devices, compacting the data might not provide
any benefits. Effective from Oracle 12c, it is possible to disable compacting on individual
ASM disk groups by setting the _rebalance_compact attribute to FALSE:

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE '_rebalance_compact'='FALSE';
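If you apply the setting to every disk group in this design, the statements can be generated with a small shell loop. This is a sketch only: trim the disk group list as needed, and run the emitted SQL in SQL*Plus as SYSASM:

```shell
# Emit one ALTER DISKGROUP statement per disk group defined in Table 11.
sql=""
for dg in OCR DATA REDO FRA TEMP; do
  sql="${sql}ALTER DISKGROUP $dg SET ATTRIBUTE '_rebalance_compact'='FALSE';
"
done
printf '%s' "$sql"
```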


Oracle 19c RAC database design and deployment

Overview

This section describes the design and deployment of an Oracle 19c RAC database stack on the Red Hat Enterprise Linux 8 operating system. This Oracle stack consists of Oracle Grid Infrastructure (GI) 19c and an Oracle RAC 19c database.

Deploying Oracle 19c RAC with Red Hat Enterprise Linux 8.2 on vSphere 7

The certified Oracle database 19c version for Red Hat Enterprise Linux 8+ is Oracle Enterprise Edition 19.0.0.0 at minimum RU 19.7 (or 19.6 with patches) with kernel version 4.18.0-80.el8.x86_64 or later. For more information, see Oracle Database 19.0.0.0 certification on Linux x86-64 Red Hat Enterprise Linux 8. In accordance with the Oracle certification, this solution uses Red Hat Enterprise Linux 8.2 with kernel version 4.18.0-193.el8.x86_64 for the guest operating system of the database VMs and Oracle RAC Enterprise Edition 19cRU7 for the Oracle RAC database.

The Oracle 19cRU7 stack consists of Oracle GI 19cRU7 and Oracle RAC 19cRU7. Because the base versions of the Oracle GI 19c and Oracle RAC database 19c software were released before the Red Hat Enterprise Linux 8.x certification, install the GI 19c and RAC 19c base software and then apply patches to upgrade both to 19cRU7.

The following table shows these software patches and their corresponding Oracle support
document IDs:

Table 12. Oracle GI 19c and Oracle RAC 19c patch list

Stack name  Patch number   Purpose                                  Document ID
Oracle GI   19.3.0.0 base  Oracle GI 19c base software              -
Oracle GI   6880880        Upgrade OPatch                           274526.1
Oracle GI   30189609       Fix setup passwordless SSH connectivity  30189609.8
Oracle GI   30899722       Oracle GI RU 19.7                        30899722.8
Oracle RAC  19.3.0.0 base  Oracle RAC 19c base software             -
Oracle RAC  6880880        Upgrade OPatch                           274526.1
Oracle RAC  30899722       Oracle RAC RU 19.7                       30899722.8
Oracle RAC  30805684       Oracle JavaVM Component RU 19.7          30805684.8

Download the required software images from the Oracle software download site and the
patch images from the Oracle support site, which are listed in the following table:


Table 13. Required base software images and patch images

Software image/patch name               Description
LINUX.X64_193000_grid_home.zip          Oracle GI 19c base software 19.3
p6880880_200000_Linux-x86-64.zip        Patch 6880880
p30189609_195000OCWRU_Linux-x86-64.zip  Patch 30189609
LINUX.X64_193000_db_home.zip            Oracle RAC 19c base software 19.3
p30783556_190000_Linux-x86-64.zip       Patch 30783556 combo that includes both patch 30899722 and patch 30805684

Install Oracle GI 19cRU7


To install and configure Oracle GI 19cRU7 on the two database nodes:
1. Extract LINUX.X64_193000_grid_home.zip in the first database node.
2. Set the operating system environment variable $CV_ASSUME_DISTID=OL7 and
install the Oracle GI 19c base version to the Oracle GI home of the first node by
running: $gridSetup.sh.
3. Apply patch 6880880 to the Oracle GI home to upgrade the OPatch utility from 12.2.0.1.17 to 12.2.0.1.19.
4. Apply patch p30189609 to the Oracle GI home for bug 30189609.8: cvu fails to
detect the passwordless ssh and to set up passwordless SSH connectivity.
5. Extract the p30783556_190000_Linux-x86-64.zip combination patch,
which includes both the 30899722 and 30805684 patches. Apply patch 30899722
to the Oracle GI 19c home to upgrade it from the 19c base version to 19cRU7 by
following the instructions in Doc ID 30899722.8.
6. Install and configure the upgraded Oracle GI 19cRU7 by running the following
command: $gridSetup.sh
7. Click Configure Oracle Grid Infrastructure for a New Cluster and select both
nodes.
8. If you continue to experience the error [INS-06003] Failed to setup passwordless SSH connectivity, create the passwordless SSH manually by running a command such as:
$ORACLE_HOME/deinstall/sshUserSetup.sh -user grid -hosts "hostname1 hostname2" -noPromptPassphrase -confirm -advanced
The dialog shown in the following figure appears:

Figure 10. Reuse private and public keys existing in the user home


9. Select Reuse private and public keys existing in the user home and click
Setup to set up the passwordless SSH connectivity:
This selection lasts through the remainder of the Oracle GI 19cRU7 installation
and configuration.
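The RHEL 8 installer workaround from step 2 can be condensed as the following shell sketch. The GI home path is a hypothetical example, and the gridSetup.sh launch is commented out because it is interactive:

```shell
# CV_ASSUME_DISTID=OL7 makes the 19.3 base installer treat RHEL 8 as a
# supported distribution (Oracle Linux 7); without it, gridSetup.sh aborts
# its prerequisite checks on RHEL 8.
export CV_ASSUME_DISTID=OL7
# cd /u01/app/19.0.0/grid   # hypothetical GI home (extracted base zip)
# ./gridSetup.sh            # launches the GI configuration wizard
echo "CV_ASSUME_DISTID=$CV_ASSUME_DISTID"
```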

Install Oracle RAC 19cRU7


To install and configure Oracle RAC 19cRU7:
1. After extracting Oracle RAC 19c base version, install the base software by setting
the environment variable $export CV_ASSUME_DISTID=OL7 and running the
runInstaller command.
2. During the SSH connectivity check phase, if you experience an error message
such as [INS-30132] Initial setup required for the execution
of Installer validation failed on nodes: node2, manually create
the passwordless SSH connectivity by running a command such as:
$ORACLE_HOME/deinstall/sshUserSetup.sh -user oracle -hosts "node1 node2" -noPromptPassphrase -confirm -advanced

The following dialog is displayed:

Figure 11. Reuse private and public keys existing in the user home

3. Select Reuse private and public keys existing in the user home and click
Test.
The following message is displayed:

Figure 12. Passwordless SSH connectivity test success message

4. Click OK and continue with the Oracle RAC software installation.


5. Follow the instructions in the patch 6880880 readme file to apply the patch to the Oracle RAC home on each RAC node to upgrade the OPatch utility from 12.2.0.1.17 to 12.2.0.1.19.


6. Apply the patch to upgrade the Oracle RAC home from 19c base version to
19cRU7 by following the instructions in Oracle Support Document 30899722.8.
Run the opatchauto command as the Linux root user on each RAC node.
7. Follow the instructions in the 30805684 readme file to apply the patch to Oracle
RAC home on each node.

Creating the Oracle 19c RAC PROD database

Using the Database Configuration Assistant (DBCA), we created a two-node 19c RAC database by distributing the constituent files as shown in Table 11:
• DATA disk group: Database datafiles and control files
• REDO disk group: Online redo logs
• TEMP disk group: TEMP files
• FRA disk group: Archive log files

For the database initialization parameter settings, see Appendix B: Oracle RAC Database
Initialization Settings.
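As an alternative to the DBCA wizard, a comparable layout can be expressed with DBCA in silent mode. The sketch below only builds the command as a string: the node names are placeholders, and the flag set is an assumption to verify against the 19c DBCA command reference before use:

```shell
# Sketch of a silent-mode DBCA invocation mirroring the disk group layout in
# Table 11 (node1/node2 are placeholder host names):
dbca_cmd="dbca -silent -createDatabase \
 -templateName General_Purpose.dbc \
 -gdbName PROD -databaseConfigType RAC -nodelist node1,node2 \
 -storageType ASM -datafileDestination +DATA \
 -recoveryAreaDestination +FRA"
echo "$dbca_cmd"
```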


Snapshot database
Introduction

PowerStore storage arrays enable you to create snapshots of storage volumes. These snapshots provide point-in-time copies of the data that is stored in the volumes and can be used as a local data protection solution within a PowerStore system. Snapshots are not a full copy of the volumes. Instead, they consist of pointers to the data blocks and use redirect-on-write technology. Accordingly, snapshots are space-efficient and do not consume extra space until new data is written either to a thin clone that is created from them or to the source database.

Hosts cannot access snapshots unless a thin clone is created from the snapshot and presented to the host.

As shown in the following figure, we took a snapshot of the PROD database, created a
thin clone of the snapshot, and finally mounted the thin clone to the DEV environment as
the DEV database:

Figure 13. Taking a snapshot of the PROD database for a DEV database

Creating a snapshot of the PROD database

PowerStore snapshot design principles

Use the following PowerStore design principles:
• Before taking the snapshot, put the PROD database into backup mode to ensure that the database is in a consistent state.
• Take a snapshot of the ORA_DATA volume group, which consists of four DATA volumes, four REDO volumes, and one FRA volume. (It is not necessary for the database TEMP file to be part of the snapshot.) This step
ensures that the snapshot of all datafiles, control files, redo files, and archive
logs files are consistent so that the PROD database snapshot is a consistent
point-in-time copy of the PROD database.
Create a snapshot of the PROD database
To create a PROD database snapshot, follow these steps:
1. Put the RAC database in backup mode by running the following SQL command:
SQL> ALTER DATABASE BEGIN BACKUP;
2. Log in to PowerStore Manager and select Storage > Volume Groups >
ORA_DATA volume group.
The following figure is displayed:

Figure 14. Take a snapshot of PROD database volumes

3. On the PROTECTION tab, click TAKE SNAPSHOT.


4. Make the selections as shown in the following figure and click TAKE
SNAPSHOT:


Figure 15. Take Snapshot of Volume Group wizard

In a few seconds, the PowerStore system creates a snapshot, as shown in the following figure:

Figure 16. Snapshot is created

5. Take the RAC database out of backup mode by running:
SQL> ALTER DATABASE END BACKUP;
We ran the HammerDB OLTP workloads during the snapshot creation. No
performance impact occurred.
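The backup-mode bracket around the snapshot lends itself to scripting. The sketch below emits the SQL as text and shows the PowerStore call only as a commented curl example; the REST endpoint, payload, and credentials are assumptions to verify against the PowerStore REST API reference for your array:

```shell
# SQL bracket to run in SQL*Plus on the PROD database, before and after
# the volume-group snapshot:
begin_sql="ALTER DATABASE BEGIN BACKUP;"
end_sql="ALTER DATABASE END BACKUP;"
snap_name="ORA_DATA_$(date +%Y%m%d_%H%M%S)"
# Hypothetical REST call (commented out; check endpoint and payload against
# the PowerStore REST API reference for your PowerStoreOS version):
# curl -k -u admin:secret -H "Content-Type: application/json" \
#   -d "{\"name\":\"$snap_name\"}" \
#   "https://<powerstore>/api/rest/volume_group/<ORA_DATA-id>/snapshot"
printf '%s\n-- take the ORA_DATA snapshot "%s" here --\n%s\n' \
  "$begin_sql" "$snap_name" "$end_sql"
```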

Creating a thin clone of the snapshot

We created the snapshot of the PROD database as a point-in-time consistent copy of the PROD database. The snapshot can be mounted to the VM that hosts a DEV or TEST database. Because a host cannot access the snapshot itself, we created a thin clone of the snapshot and mounted the thin clone to the DEV database VM host. As shown in the following figure, we created the RAC database snapshot based on the ORA_DATA storage group, which consists of DATA, FRA, and REDO volumes. To mount the snapshot to the DEV database nodes, we created a clone of the snapshot called ORA_DATA_Thin_clone.

To create a thin clone of a snapshot:
1. Select the checkbox for the snapshot.
The following window is displayed:

Figure 17. Creating a thin clone

2. From the MORE ACTIONS dropdown menu, select Create Thin Clone Using
Snapshot.
3. Complete the Create Thin Clone wizard and click Clone.
A thin clone of the snapshot is created, as shown in the following figure:


Figure 18. Thin clone snapshot created

Ensure that you enable the option Apply write-order consistency to protect all volume group members, as shown in the following figure:

Figure 19. Apply write-order consistency to protect all volume group members


Creating a DEV database from the PROD snapshot

After creating a thin clone from the snapshot, you can create a DEV database that is based on the snapshot by mounting the thin clone to both DEV VMs through their ESXi hosts.

To create a DEV database using the thin clone snapshot of ORA_DATA:
1. Create the Oracle 19cRU7 RAC configuration on both DEV VMs (DEV_RAC DB Node1 and DEV_DB Node2) by following the steps you used for the PROD RAC database (see Deploying Oracle 19c RAC with Red Hat Enterprise Linux 8.2 on vSphere 7).
2. In PowerStore Manager, map the thin clone volumes to the host group of the two DEV ESXi hosts. The following table shows all the DEV database volumes and volume groups that are mapped to the DEV hosts. The volumes in VM_OCR and ORA_TEMP are standard storage volumes; the volumes in the ORA_Thin_clone volume group are the thin clone snapshots of the corresponding storage volumes in the PROD database.
Table 14. DEV database volumes

Volume group    Volume name                   Description                Size (GB)  Host group
VM_OCR          DEV_VM                        Guest VM operating system  250        ORA_DEV_HOST_GROUP
VM_OCR          DEV_OCR(1-3)                  Voting disk/GIMR           50         ORA_DEV_HOST_GROUP
ORA_TEMP        DEV_TEMP                      TEMP                       100        ORA_DEV_HOST_GROUP
ORA_Thin_clone  Thin_clone of PROD_DATA(1-4)  Data files                 4x600      ORA_DEV_HOST_GROUP
ORA_Thin_clone  Thin_clone of PROD_REDO(1-4)  REDO logs                  4x25       ORA_DEV_HOST_GROUP
ORA_Thin_clone  Thin_clone of PROD_FRA        FRA                        100        ORA_DEV_HOST_GROUP

3. Present all the volumes that correspond to these thin clones to both the DEV
database VMs as RDM disks.
4. Add the corresponding entries for the new RDM disk devices to the /etc/udev/rules.d/95_oracleasm.rules script.
For example, the entry for the snapshot thin clone of the PROD_DATA1 is:
KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block",
PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="368ccf0980009e5e46f1b8a8486185d3a",
SYMLINK+="oracleasm/disks/ora-data1", OWNER="grid",
GROUP="asmadmin", MODE="0660"

5. Run the udevadm trigger command to create the device links /dev/oracleasm/disks/ora-XXX, as shown in the following code sample:
[root@snapracn1 rules.d]# ls -l /dev/oracleasm/disks
total 0
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-data1 -> ../../sdf1


lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-data2 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-data3 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-data4 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-fra -> ../../sdj1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-ocr1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-ocr2 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-ocr3 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-redo1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-redo2 -> ../../sdl1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-redo3 -> ../../sdm1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-redo4 -> ../../sdn1
lrwxrwxrwx 1 root root 10 Oct 1 20:03 ora-temp -> ../../sde1

6. After the /dev/oracleasm/disks/ora-** devices are presented to the database VM as the ASM disks, mount the ASM disk groups DATA, REDO, and FRA.

7. Create a TEMPDEV disk group that uses the DEV_TEMP volume by running the following SQL command in the ASM instance:

SQL> create diskgroup TEMPDEV external redundancy disk '/dev/oracleasm/disks/ora-temp';

8. Rename the ASM disk groups.

Note: Because these devices are essentially a point-in-time copy of the original PROD database volumes, each volume carries the same metadata as the PROD database, such as the ASM disk group name and ASM disk names.

$renamedg dgname=DATA newdgname=DATADEV \
asm_diskstring=/dev/oracleasm/disks/* verbose=TRUE
$renamedg dgname=REDO newdgname=REDODEV \
asm_diskstring=/dev/oracleasm/disks/* verbose=TRUE
$renamedg dgname=FRA newdgname=FRADEV \
asm_diskstring=/dev/oracleasm/disks/* verbose=TRUE

9. Mount the renamed disk groups in restricted mode, rename the ASM disks, and then dismount the disk groups by running the following commands:
$asmcmd mount --restrict DATADEV
$asmcmd mount --restrict REDODEV
$asmcmd mount --restrict FRADEV

SQL> alter diskgroup DATADEV rename disks all;
SQL> alter diskgroup REDODEV rename disks all;
SQL> alter diskgroup FRADEV rename disks all;

SQL> alter diskgroup DATADEV dismount;
SQL> alter diskgroup REDODEV dismount;
SQL> alter diskgroup FRADEV dismount;


SQL> alter diskgroup DATADEV mount;


SQL> alter diskgroup REDODEV mount;
SQL> alter diskgroup FRADEV mount;

10. Change the ASM disk group names in the file paths for all the database files, control files, and redo logs.

Note: You must change these names in all references to file paths and in the destinations setting in the spfile. These file paths reference the old disk group names, which are no longer valid.

For example, you can rename a database file by running the following SQL command:
SQL> alter database rename file '+DATA/PROD/DATAFILE/users.261.986035293'
to '+DATADEV/PROD/DATAFILE/users.261.986035293';

11. Mount the database and take it out of backup mode by running:
SQL> ALTER DATABASE END BACKUP;
12. Drop all the old temp files that point to the old TEMP tablespace and create a TEMP tablespace on the new TEMP disk group. Then open the database by running:
SQL> ALTER DATABASE OPEN;
13. Change the database name and the DBID by using the DBNEWID utility. This utility lets you assign a new database name and ID to replace the original production database name from the PROD database. You must also change the dbname in the spfile.
14. Open the new database by running the ALTER DATABASE OPEN RESETLOGS command. Disable archive log mode if the new database is for development and an archive log is not required. Create a TEMP tablespace on the new TEMP disk group.
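The per-file renames in the "Change the ASM disk group names" step are mechanical and can be generated. The following sketch handles the single datafile from the example above; in practice, build the file list from V$DATAFILE, V$CONTROLFILE, and V$LOGFILE on the mounted clone:

```shell
# Emit RENAME FILE statements that swap the old disk group prefix for the
# new one; run the output in SQL*Plus against the mounted clone database.
old_dg="+DATA"
new_dg="+DATADEV"
sql=""
for f in "+DATA/PROD/DATAFILE/users.261.986035293"; do
  sql="${sql}ALTER DATABASE RENAME FILE '$f' TO '$new_dg${f#"$old_dg"}';
"
done
printf '%s' "$sql"
```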


Virtualized database high availability

vMotion migration

vMotion migration enables you to migrate a VM from one system resource to another. The migration can be a "cold migration" or a "hot migration." With cold migration, the VM is taken offline to perform the migration. With hot migration, the VM remains operational, as does the application running on the VM guest operating system.

As a prerequisite for vMotion migration, the storage that contains the virtual machine disks must be shared between the source and target hosts.

The shared storage and network design meets these requirements: both ESXi hosts meet the shared storage requirements, because all the storage volumes, including the VM operating system volumes and the RAC database volumes (OCR, TEMP, DATA, REDO, and FRA), are on PowerStore storage that is shared by the source and target ESXi hosts, and both hosts are connected through the vMotion network.

The following figure shows this design:

Figure 20. High availability design based on vMotion

Live migration of the Oracle instance for planned downtime

For a two-node Oracle RAC database with its two instances running on VMs on separate ESXi hosts, you can migrate one VM with its Oracle RAC database instance from one ESXi host to another. During a cold migration, the VM is offline and the Oracle RAC database instance that runs in the VM guest is down. During a hot migration, both the VM and the Oracle RAC database instance remain online and operational. Use the hot migration feature to avoid planned downtime of the database instance during server


hardware or software maintenance of the ESXi host by moving a live VM with its
operational Oracle database instance to a different ESXi host.

Hot migration
To launch a hot migration by vMotion:
1. Log in to vCenter and locate the VM that you want to migrate.
2. Right-click the VM and select Migrate.
The migration options are displayed, as shown in the following figure:

Figure 21. VM migration using vMotion

3. Select Change compute resource only and click Next.


4. Select a compute resource, network, and vMotion for the migration.
The migration wizard validates the compatibility of the target resource. If any
compatibility problem exists, it is displayed in the Compatibility panel. Resolve the
issue or select another host or cluster.
5. Click Finish to complete the migration.
6. Monitor the migration progress in the Recent Tasks window, as shown in the
following figure:

Figure 22. Migration progress

7. While the VM on which the database instance is running is being migrated to the other ESXi host, log in to the database instance to view the number of user sessions that are connected to it.
We performed this check to confirm that the database instance was functioning
during the migration. The database instance showed 10 active connections from
TPC-C users. This number was the same as before the migration, as shown in
the following figure:


Figure 23. Viewing database connections during a migration

Performance and CPU utilization tests

Transaction throughput

We ran the HammerDB OLTP workload test on both RAC database nodes during the migration.

The following figures show the database throughput transactions per minute (TPM)
before, during, and after vMotion migrations. As shown, no performance change occurred
due to vMotion operation.

Figure 24. HammerDB throughput TPM before and after vMotion starts

Figure 25. HammerDB throughput TPM during a vMotion migration

Figure 26. HammerDB throughput TPM after vMotion migration completes


VM2 migration
The following figures show the VM2 migration.

Before the migration, VM2 was running on ESXi host2 (IP address ending with 23):

Figure 27. Premigration: VM2 on ESXi host2

After the migration, VM2 was moved to ESXi host1 (IP address ending with 22):

Figure 28. Postmigration: VM2 on ESXi host1

CPU utilization
We also reviewed the CPU utilization changes for both ESXi hosts.

The following figure shows the CPU utilization of ESXi host2 (with an IP address ending in
23) as:
• 40% before migration
• 70% during migration
• 1.8% after migration


Figure 29. CPU utilization change during migration: ESXi host2

The following figure shows the CPU utilization of ESXi host1 (with an IP address ending
in 22), the host to which the VM was migrated, as:
• 35% before migration
• 39% during migration
• 65% after migration


Figure 30. CPU utilization change during the migration: ESXi host1

Conclusion
The PowerStore T platform provides new capabilities to organizations that count on
Oracle RAC for high availability and performance. For example, storage-based snapshots
and thin clones enable multiple Oracle RAC clusters that are connected to a single
PowerStore T array to efficiently access near-instantaneous copies of databases. This
platform is available in a range of configurations, from the entry-level PowerStore 1000T
model to the highly scalable PowerStore 9000T model. The PowerStore T family enables
organizations to have a consistent management experience from PROD systems all the
way to DEV systems.

References

Dell Technologies documentation
The following Dell Technologies documentation provides additional and relevant
information. Access to these documents depends on your login credentials. If you do not
have access to a document, contact your Dell Technologies representative.

• Dell EMC PowerStore: Introduction to the Platform
• Dell EMC PowerStore: Oracle Database Best Practices
• Dell EMC PowerStore: Best Practices Guide
• Dell EMC PowerStore Host Configuration Guide

VMware documentation
The following VMware documentation provides additional and relevant information:
• VMware ESXi 7.0 Installation and Setup
• VMware vSphere 7.0 Installation and Setup
• Oracle Databases on VMware Best Practices Guide

Oracle documentation
The following Oracle documentation provides additional and relevant information:
• Oracle Database 19c Installation Guide
• Oracle Real Application Clusters 19c Installation Guide
• Oracle Grid Infrastructure Installation and Upgrade Guide

Appendix A: Red Hat Enterprise Linux 8 Operating System Kernel Parameters

Linux kernel parameters for the Oracle RAC database
The /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf text
file contains the following Oracle database-related Linux kernel parameter settings:

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744

# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128

# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096

# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824

# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104

# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1

# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144

# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304


# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144

# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576

# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2

# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2

# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576

# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
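As a quick sanity check on the shared-memory values in this listing: kernel.shmall is expressed in pages (4 KiB on x86_64) while kernel.shmmax is expressed in bytes, and the two settings here describe the same 4 TiB limit. A small illustrative Python check:

```python
# Illustrative check: kernel.shmall (pages) x page size should match
# kernel.shmmax (bytes) when both express the same overall limit.
PAGE_SIZE = 4096  # default x86_64 page size in bytes

shmall_pages = 1073741824       # kernel.shmall from the listing above
shmmax_bytes = 4398046511104    # kernel.shmmax from the listing above

assert shmall_pages * PAGE_SIZE == shmmax_bytes
print(f"shmmax = {shmmax_bytes / 2**40:.0f} TiB")  # prints "shmmax = 4 TiB"
```

If you tune one of these values, adjust the other to keep the pages-versus-bytes relationship consistent.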

Appendix B: Oracle RAC Database Initialization Settings

Initialization parameter settings for the Oracle RAC database
Use the following settings in the database init.ora file or spfile:

racdb2.__pga_aggregate_target=21474836480
racdb1.__sga_target=137438953472
racdb2.__sga_target=137438953472
racdb1.__shared_io_pool_size=268435456
racdb2.__shared_io_pool_size=268435456
racdb1.__shared_pool_size=15300820992
racdb2.__shared_pool_size=15300820992
racdb1.__streams_pool_size=0
racdb2.__streams_pool_size=0
racdb1.__unified_pga_pool_size=0
racdb2.__unified_pga_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/racdb/adump'
*.audit_trail='NONE'
*.cluster_database=TRUE
*.compatible='19.0.0'
*.control_files='+DATA/RACDB/CONTROLFILE/current.257.1049809915'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_file_multiblock_read_count=1
*.db_name='racdb'
*.db_writer_processes=8
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racdbXDB)'
*.filesystemio_options='SETALL'
family:dw_helper.instance_mode='read-only'
racdb1.instance_number=1
racdb2.instance_number=2
*.local_listener='-oraagent-dummy-'
*.log_archive_dest_1='LOCATION=+FRA/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.pga_aggregate_target=20360m
*.processes=2000
*.remote_login_passwordfile='exclusive'
*.sga_max_size=137438953472
*.sga_target=137438953472
racdb2.thread=2
racdb1.thread=1
*.undo_tablespace='UNDOTBS1'
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'
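In the listing above, parameters prefixed with *. apply to all instances, while racdb1. and racdb2. prefixes scope a setting to one instance, with the instance-specific value overriding the *. default. The following Python sketch (a hypothetical helper, not an Oracle tool) illustrates how such entries resolve for a given instance:

```python
# Minimal sketch (not an Oracle utility): resolve spfile-style entries
# where "*." applies to every instance and "<sid>." overrides it for
# that instance.
def resolve(entries, instance):
    params = {}
    for line in entries:
        scope, _, rest = line.partition(".")
        key, _, value = rest.partition("=")
        if scope == "*":
            params.setdefault(key, value)    # cluster-wide default
    for line in entries:
        scope, _, rest = line.partition(".")
        key, _, value = rest.partition("=")
        if scope == instance:
            params[key] = value              # instance-specific override
    return params

entries = [
    "*.undo_tablespace='UNDOTBS1'",
    "racdb2.undo_tablespace='UNDOTBS2'",
    "racdb1.thread=1",
    "racdb2.thread=2",
]
print(resolve(entries, "racdb2"))  # racdb2 gets UNDOTBS2 and thread 2
```

This is why the listing can set *.undo_tablespace='UNDOTBS1' and still give racdb2 its own undo tablespace through the racdb2.undo_tablespace entry.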
