Best Practices

Dell Unity: Oracle Database Best Practices


All-flash arrays

Abstract
This document provides best practices for deploying Oracle databases with Dell
Unity All-Flash arrays, including recommendations and considerations for
performance, availability, and scalability.

April 2022

H16765
Revisions
Date Description
November 2017 Initial release for Dell Unity OE version 4.2
June 2019 Updated with new format and content for Dell Unity x80F arrays
April 2022 Inclusive language changes, rebranding, updated links, and other edits

Acknowledgments
Authors: Mark Tomczik, Henry Wong

This document may contain language from third party content that is not under Dell's control and is not consistent with Dell's current guidelines for Dell's
own content. When such third party content is updated by the relevant third parties, this document will be revised accordingly.

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2017–2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.

Table of contents
Revisions
Acknowledgments
Table of contents
Executive summary
Audience
Storage configuration
  I/O module options
  Dynamic storage pools
Dell Unity features
  FAST VP
  FAST cache
  Data reduction
  Data at Rest Encryption
  Host I/O limits
Oracle database design considerations
  OLTP workloads
  OLAP or DSS workloads
  Mixed workloads
  Storage pools
  Testing and monitoring
Deploying Oracle databases on Dell Unity storage
  Linux setup and configuration
  Oracle Automatic Storage Management
  Linux LVM
  File systems
Dell Unity file storage
  Dell Unity front-end Ethernet connectivity for file storage
  Dell Unity NAS servers
  Dell Unity NFS file system
  Scalability
  Storage efficiency
  Quotas
  NFS protocol
  Dell Unity NFS share
  Verify access to the Dell Unity NFS share
  Dell Unity file system and Oracle ASM
Oracle Disk Manager
  NFS traffic
Oracle Direct NFS
  Benefits of dNFS
  Creating NFS client mount points
  Mount options for NFS share
  Ethernet networks and dNFS
  Jumbo frames
  Single network path for dNFS
  Multiple network paths for dNFS
  Shared subnets
  Configuring LACP
  Database server: NFS client network interface configuration
  Oracle dNFS configuration file: oranfstab
  Enabling and disabling Oracle dNFS
  Verify if dNFS is being used
  Oracle dynamic dNFS views
Dell Unity features with Oracle databases
  Data reduction
  Advanced deduplication
  Snapshots
  Thin clones
  Replication
Data protection
  AppSync
  RecoverPoint virtual edition
File system mount options
Dell Unity x80F specifications
Technical support and resources
  Related resources

Executive summary
This paper delivers straightforward guidance to customers using Dell Unity All-Flash storage systems in an
Oracle 12c database environment on Linux operating systems. Oracle is a robust product that can be used in
various solutions. The relative priorities of critical design goals such as performance, manageability, and
flexibility depend on your specific environment. This paper provides considerations and recommendations to
help meet your design goals.

This paper was developed using the Dell Unity 880F All-Flash array, but the information is also applicable to
other Dell Unity All-Flash array models (x80F and x50F). Oracle Linux (OL) 7 was used for this paper, but the
content also applies to Oracle Linux (OL) 6 and to Red Hat Enterprise Linux 6 and 7.

These guidelines are recommended, but some recommendations may not apply to all environments. For
questions about the applicability of these guidelines in your environment, contact your Dell Technologies
representative.

Dell Unity x80F models provide an excellent storage solution for Oracle workloads regardless of the
application characteristics and whether file or block storage is required. This paper discusses the best
practices and performance of the Dell Unity 880F array with block storage, but also presents best practices
with native NFS or Oracle dNFS.

In addition to file and block storage support, the Dell Unity x80F arrays provide several other features. Some
of the standard features are point-in-time snapshots, replication (local and remote), built-in encryption,
compression, and extensive integration capabilities for an Oracle standalone or RAC environment.

Audience
This document is intended for Dell Unity administrators, database administrators, architects, partners, and
anyone responsible for configuring Dell Unity storage systems. It is assumed readers have prior experience
with or training in the following areas:

• Dell Unity storage systems
• Linux operating environment
• Multipath software
• Oracle Automatic Storage Management (ASM)
• Oracle standalone or RAC environment

We welcome your feedback along with any recommendations for improving this document. Send comments
to [email protected].

Storage configuration
Dell Unity storage is a virtually provisioned, flash-optimized storage system designed for ease of use. This
paper covers the All-Flash array models with emphasis on Dell Unity x80F arrays. This section describes the
foundational array technologies that support the application-specific sections that follow. For general Dell
Unity best practices, see the Dell Unity: Best Practices Guide and the documentation listed in the appendix.

I/O module options


All Dell Unity All-Flash arrays provide an embedded I/O module (optional on some models) and two optional
I/O modules per storage processor (SP). This section provides an overview of the different options available
in each of the I/O modules. Recommendations on choosing I/O modules for an Oracle environment where
performance and throughput are of interest are also given.

I/O ports used for transferring user data are available on the embedded module or additional I/O modules on
each SP.

Embedded I/O module


• Dell Unity 300, 350F, 380, 380F, 400F, 450F, 500F, 550F, 600F, 650F:
  - Converged Network Adapter (CNA): two ports, configurable as 8/16 Gb Fibre Channel (FC), 4/8/16 Gb FC, 16 Gb FC (single mode), or 1/10 Gb IP/iSCSI
  - Ethernet: two ports, 10 GbE Base-T
  - SAS: two ports, mini-HD

• Dell Unity 480, 480F, 680, 680F, 880, 880F:
  - Converged Network Adapter (CNA): N/A
  - Ethernet: optional 4-port mezzanine card with either 25 GbE optical (no auto-negotiation) using 10 Gb or 25 Gb SFPs (mixing allowed) or TwinAx (active/passive), or 10 GbE Base-T RJ45; if no four-port card is installed, a blank filler is required
  - SAS: two ports, 12 Gb/s

For Oracle environments that require FC front-end connectivity on the Dell Unity 480F, 680F, or 880F models,
consider using a filler blank in place of the embedded I/O module. Also consider using one or two optional I/O
modules that support FC.

If Oracle dNFS is used, consider using the optional four-port mezzanine card on the Dell Unity 480F, 680F, or
880F models. A four-port mezzanine card can be used in both Link Aggregation Control Protocol (LACP) and
fail-safe networking (FSN) configurations. LACP is discussed in the section Configuring LACP.


Optional I/O modules for Dell Unity All-Flash arrays


• Dell Unity 300F, 350F, 400F, 450F:
  - Fibre Channel I/O module: 4-port 16 Gb/s
  - Ethernet Base-T I/O module: 4-port 1 GbE, or 4-port 10 GbE
  - Ethernet/iSCSI optical I/O module: 4-port 10 GbE, or 2-port 10 GbE offloading
  - SAS I/O module: N/A

• Dell Unity 500F, 550F, 600F, 650F:
  - Fibre Channel I/O module: 4-port 16 Gb/s
  - Ethernet Base-T I/O module: 4-port 1 GbE, or 4-port 10 GbE
  - Ethernet/iSCSI optical I/O module: 4-port 10 GbE, or 2-port 10 GbE offloading
  - SAS I/O module: 4-port mini-HD (back end)

• Dell Unity 380, 380F:
  - Fibre Channel I/O module: 4-port 16 Gb/s
  - Ethernet Base-T I/O module: 4-port 10 GbE Base-T RJ45 (auto-negotiates to 1 GbE)
  - Ethernet/iSCSI optical I/O module: 4-port 25 GbE optical for Ethernet and iSCSI block traffic; either 10 Gb or 25 Gb SFPs (no auto-negotiation, mixed SFPs allowed) or TwinAx (active or passive)
  - SAS I/O module: N/A

• Dell Unity 480, 480F, 680, 680F, 880, 880F:
  - Fibre Channel I/O module: 4-port 16 Gb/s
  - Ethernet Base-T I/O module: 4-port 10 GbE Base-T RJ45 (auto-negotiates to 1 GbE)
  - Ethernet/iSCSI optical I/O module: 4-port 25 GbE optical for Ethernet and iSCSI block traffic; either 10 Gb or 25 Gb SFPs (no auto-negotiation, mixed SFPs allowed) or TwinAx (active or passive)
  - SAS I/O module: 4-port 12 Gb SAS (back end)

In high-demand Oracle environments where IOPS, latency, or capacity are a concern, consider the option of
using a 4-port 12 Gb SAS I/O module. The 4-port module increases the number of configurable drives in the
array, which can help lower latency and increase IOPS and capacity.

I/O modules must be installed in pairs (one in SPA and one in SPB), must be of the same type, and must
reside in the same slot on each SP.

With Dell Unity 480F, 680F, and 880F models, slot 0 I/O modules have x16 PCIe lanes while slot 1 has x8
PCIe lanes. For this reason, slot 0 should be reserved for environments needing greater bandwidth.

The Ethernet/iSCSI card can be used in both Link Aggregation Control Protocol (LACP) and fail-safe networking
(FSN) configurations.

Once the Dell Unity array is configured, all I/O modules are persistent and cannot change type.

Dynamic storage pools


Dell Unity storage supports two types of storage pools on All-Flash storage systems: traditional pools and
dynamic pools. Dynamic pools were introduced in Dell Unity OE version 4.2 for all-flash storage models and
became the default pool type in Dell Unisphere. While traditional pools are still supported on all-flash models,
they can only be created through the Unisphere CLI or REST API.

Dynamic pools offer many benefits over traditional pools. The new pool structure eliminates the need to add
drives in multiples of the RAID width, allowing for greater flexibility in managing and expanding the pool.
Dedicated hot spare drives are also no longer required with dynamic pools. Data space and replacement
space are spread across the drives within the pool. Spreading space across drives within the pool provides
better drive utilization, improves application I/O, and speeds up the proactive copying of failing drives and the
rebuild operation of failed drives.

In general, create dynamic pools with large numbers of drives of the same type, and use as few storage pools
as possible within the Dell Unity system. However, it may be appropriate to configure additional storage pools
in the following instances:

• Separate workloads and resources from competing databases or applications
• Dedicate resources to meet specific performance goals
• Create smaller failure domains

Additional information can be found in the documents, Dell Unity: Dynamic Pools and Dell Unity: Configuring
Pools.

Storage pool capacity


Storage pool capacity is used for multiple purposes:

• To store all data written into storage objects — LUNs, file systems, datastores, and VMware vSphere
Virtual Volumes (vVols) — in that pool
• To store data that is needed for snapshots of storage objects in the pool
• To track changes to replicated storage objects in that pool

Storage pools must maintain free capacity to operate properly. By default, a Dell Unity system raises an alert
if a storage pool has less than 30% free capacity, and it begins to automatically invalidate snapshots and
replication sessions if free capacity falls below 5%. Dell Technologies recommends that a storage pool always
has at least 10% free capacity.

More drives can be added to a storage pool online. However, to optimize the performance and efficiency of
the storage, add drives with the same specification, type, and capacity as the existing drives in the pool.
Though not required, add a number of drives equal to the RAID width + 1, which allows the new capacity to be
immediately available. Data is automatically rebalanced in the pool when drives are added.

Note: Once drives are added to a storage pool, they cannot be removed unless the storage pool is deleted.

All-flash pool
All-flash pools provide the highest level of performance in Dell Unity systems. Use an all-flash pool when the
application requires the highest storage performance at the lowest response time. Note the following
considerations with all-flash pools:

• All-flash pools consist of either all SAS flash 3 or all SAS flash 4 drives of the same capacity.
• Dell EMC FAST Cache and FAST VP are not applicable to all-flash pools.
• Compression is only supported on an all-flash pool.
• Snapshots and replication operate most efficiently in all-flash pools.
• Dell Technologies recommends using only a single drive size and a single RAID width within an all-
flash pool.

For example: For an all-flash pool, use 800 GB SAS flash 3 drives and configure them all with RAID 5
8+1. For supported drive types in an all-flash pool, see the appendix.

Hybrid pool
Hybrid pools (including a combination of flash drives and hard disk drives) are not supported with Dell Unity
All-Flash arrays.

Dell Unity features


This section describes some of the native features available on the Dell Unity platform. Not all features are
applicable to Dell Unity All-Flash arrays; exceptions are noted in this document. Additional information about
each of these features can be found in the Dell Unity: Best Practices Guide.

FAST VP
Dell FAST VP accelerates the performance of a specific storage pool by automatically moving data within that
pool to the appropriate drive technology based on data access patterns. FAST VP is only applicable to hybrid
pools within a Dell Unity Hybrid flash system.

FAST cache
FAST Cache is a single global resource that can improve the performance of one or more hybrid pools within
a Dell Unity Hybrid flash system. FAST Cache can only be created with SAS Flash 2 drives and is only
applicable to hybrid pools. FAST Cache is not applicable to all-flash arrays.

Data reduction
Dell Unity compression reduces the amount of physical storage needed to save a dataset in an all-flash pool
for block LUNs and VMFS datastores. This capability was added to Dell Unity OE version 4.1 for thin block
storage resources and was called Dell Unity Compression. Thin file storage resource support was added in
Dell Unity OE version 4.2 for file systems and NFS datastores in an all-flash pool.

In Dell Unity OE version 4.3, the Dell Unity Data Reduction feature replaces compression. It adds more
space-savings logic to the system, including zero-block detection and deduplication. In Dell Unity OE
version 4.5, data reduction includes an optional feature called Advanced Deduplication, which expands the
deduplication capabilities of the data reduction algorithm. With data reduction, the amount of space required
to store a dataset is reduced for data-reduction-enabled storage resources when savings are achieved. Data
reduction and advanced deduplication are supported on LUNs, file systems, and NFS and VMFS datastores.
Starting with OE 4.5, an 8 KB Dell Unity block within a resource is subject to compression. The block will be
compressed if a savings of 1% or higher can be obtained.

Dell Unity Data Reduction savings are not only achieved on the storage resource it is enabled on, but on
snapshots and thin clones of those resources. Snapshots and thin clones inherit the data reduction setting of
the source storage resource, which helps to increase the space savings that they can provide.

If Dell Data Reduction is enabled, the storage system intelligently controls it. Configuring data reduction and
reporting savings is simple, and can be done through Unisphere, Unisphere CLI, or REST API.

Dell Unity Data Reduction is licensed with all physical Dell Unity systems at no additional cost. Data reduction
is not available on the Dell Unity VSA version of the Dell Unity platform as data reduction requires write
caching within the system. Dell Unity must be at OE version 4.3 or later to use data reduction with block and
file resources (thin LUNs, thin file systems, VMware vStorage VMFS datastores, and NFS).

By offering multiple technologies of space saving, Dell Unity provides flexibility for the best balance of space
savings and performance.


Dell Unity all-flash arrays: data reduction and advanced deduplication


Dell
Dell Unity pool
Unity Dell Unity model Technology
type
OE
300, 400, 500, 600 All flash* Data reduction
4.3/4.4 300F, 400F, 500F, 600F
350F, 450F, 550F, 650F
300, 400, 500, 600 All flash* Data reduction
300F, 400F, 500F, 600F
4.5 350F, 450F, 550F, 650F
450F, 550F, 650F All flash* Data reduction and advanced
deduplication on dynamic pools only
300, 400, 500, 600 All flash* Data reduction
300F, 400F, 500F, 600F
350F, 450F, 550F, 650F
5.0 380, 480, 680, 880,
380F, 480F, 680F, 880F
450F, 550F, 650F, All flash* Data reduction and advanced
380, 480, 680, 880, deduplication
380F, 480F, 680F, 880F
* Resource can be created on either a traditional or a dynamic pool (for systems that support dynamic pools).

Note: Data reduction is disabled by default and needs to be enabled before advanced deduplication is an
available option. After enabling data reduction, advanced deduplication is available, but is disabled by default.

While data reduction helps to optimize storage investments by maximizing drive utilization, data reduction:

• Increases the overall CPU load on the Dell Unity system when storage objects service reads or writes
of compressible data, and
• May increase latency when accessing the data.

Consider these best practices before enabling data reduction on a storage object:

• Monitor the system to ensure it has available resources to support data reduction. See “Hardware
Capability Guidelines” section and Table 2 in the Dell Unity: Best Practices Guide.
• Enable data reduction on a few storage objects at a time. Then monitor the system to be sure it is still
within the recommended operating ranges before enabling data reduction on more storage objects.
• With Dell Unity x80F models, consider that data reduction will provide space savings if the data on
the storage block is at least 1% compressible. Before the new x80F models and OE 5.0, data
reduction would provide space savings if the data on the storage block was at least 25%
compressible.
• Before enabling data reduction on a storage object, determine if it contains data that will compress.
Do not enable data reduction on a storage object if there will be no space savings.
• Contact your Dell Technologies representative for tools that can analyze the data compressibility.

For more information regarding compression, see the Dell Unity: Data Reduction document.

For additional information about Dell Unity Data Reduction, see the Dell Unity: Data Reduction Overview and
Dell Unity: Data Reduction Analysis.

Advanced deduplication
Advanced deduplication, an optional extension of data reduction released in OE 4.5, increases the capacity
efficiency of data reduction. Advanced deduplication can be enabled on storage resources and is only
performed on compressed blocks. With OE 4.5, that meant advanced deduplication would be performed on
compressed blocks that achieved as little as a 1% savings. In cases where a Dell Unity block did not
compress, advanced deduplication would not be performed. So if there were multiple copies of an
uncompressed block, deduplication would not be applied to those copies to realize further storage savings.

This restriction of not performing advanced deduplication on blocks with less than a 1% compression savings
no longer exists in OE 5.0. With OE 5.0, advanced deduplication can deduplicate to an uncompressed block
whenever a write or overwrite occurs on the block, even if the block has 0% compression savings.

For more information regarding advanced deduplication, see the Dell Unity: Data Reduction white paper and
the Dell Unity: Best Practices Guide.

Data at Rest Encryption


Many Oracle database applications have data encryption requirements. Data at Rest Encryption (D@RE) is a
controller-based encryption solution that can be used for Oracle databases without requiring any database or
application changes.

Note: D@RE is a licensable feature: it must be selected during the ordering process and licensed at
system initialization. D@RE can only be enabled at the time of system installation with the appropriate license
and cannot be enabled later.

If encryption is enabled, Dell Technologies recommends making external backups of the encryption keys after
system installation, and immediately following any change in the system’s drive configuration. Changes to the
system’s drive configuration would include such things as creating or expanding a storage pool, adding new
drives, or replacing a faulted drive.

For more information about D@RE, see the Dell Unity: Data at Rest Encryption document.

Host I/O limits


Typically, a Dell Unity system is used to service multiple hosts and applications. These applications can have
different service levels and different storage demands. In addition, a single array can provide services to
multiple environments such as development, test, and production. Traditionally, these scenarios have been
difficult to manage: critical applications must get the resources they need while less critical applications must
be kept from overconsuming.

Host I/O limits, like Quality of Service (QoS), provide an excellent means to manage these types of
workloads. Instead of trying to manage workloads with multiple storage pools, use host I/O limits. Host I/O
limits allow LUNs to be restricted to a specified amount of IOPS or bandwidth, so they do not adversely
impact other applications. Host I/O limits allow storage administrators to ensure applications and
environments adhere to budgeted limits, which greatly simplifies planning and management.

Host I/O limits are recommended for Oracle database environments for several reasons. First, storage
administrators can ensure that demanding Oracle database instances do not overwhelm the entire array by
setting limits on database volumes. Also, if the Oracle database is the priority application, they can set limits

on other LUNs on the system to ensure that the Oracle database gets the required resources. Another great
component of host I/O limits is the ability to burst for a given limit for a specific period, which is user
configurable. In this way, small exceptions can still be allowed while maintaining balanced performance.

In development and testing environments, it can be difficult to determine if an application meets performance
requirements. Typically, these environments are smaller than production environments and it is not always
feasible to keep a copy of production data in these environments due to costs or privacy concerns. An issue
with smaller datasets is that the application can run faster and then encounter serious performance issues
when deployed on a real dataset in production.

Host I/O limits can be used to restrict the I/O on smaller datasets to highlight I/O-intensive queries. Setting
limits on databases in development and testing environments will help identify problem areas so they can be
resolved before production deployment. The result is improved Oracle database service levels and greater
scalability.

For additional information, see the Dell Unity: Unisphere Overview document.

Oracle database design considerations


The storage system is a critical component of any Oracle database environment. Sizing and configuring a
storage system without understanding the I/O requirements can have adverse consequences. This section
discusses the types of database workloads and some of the common tools available to measure, collect, and
analyze database system performance, which helps define the I/O requirements. For an Oracle environment,
capacity requirements can be as important as the number of I/Os per second (IOPS) and throughput
requirements.

OLTP workloads
An online transaction processing (OLTP) workload typically consists of small random reads and writes. The
I/O sizes are generally equivalent to the database block size. The primary goal of designing a storage system
for this type of workload is to maximize the number of IOPS while keeping the latency as low as possible.
Depending on the business and application requirement, a latency of less than 1 millisecond is typical in a
high performing environment.

Consider using 16 Gb Fibre Channel (FC) or 25 GbE optical I/O modules in each SP. If higher drive counts are
necessary to achieve higher IOPS, use 12 Gb SAS I/O modules for back-end connectivity, from the
storage processors (controllers) to the disk enclosures. 12 Gb SAS is only available in the 480F, 680F, and
880F models, and 25 GbE optical is only available in Dell Unity x80/x80F models.

OLTP performance
For best results, capture performance statistics for at least 24 hours, including the system's peak workload period.

An OLTP workload typically consists of small random reads and writes. The backend storage system
servicing this type of workload is primarily sized based on capacity and the number of IOPS required.

OLAP or DSS workloads


Unlike an OLTP workload, an online analytic processing (OLAP) or decision support system (DSS) workload
typically has a relatively low volume of transactions. Most of the activity involves complex queries and
aggregation over large datasets. The volume of data tends to grow steadily over time and is kept available for
a longer time. OLAP workloads generally have large sequential reads or writes.

The primary goal of designing a storage system that services this type of workload is to optimize the I/O
throughput. The design needs to consider all components in the entire I/O path between the hosts and the
drives in the Dell Unity system. For best throughput, consider using:

• 16 Gbps FC or 25 GbE optical (10 GbE is also an option) iSCSI connectivity to the array, and
• 12 Gbps SAS connectivity from the controllers to the disk enclosures.

To meet high throughput requirements, multiple HBAs may be required on the server, the array, or both.

Mixed workloads
Oracle database workloads may not have I/O patterns that can be strictly categorized as OLTP or OLAP
because several applications can reside within the same database. Multiple databases with different
workloads can also co-exist on the same host. Choose and design a storage system that can handle different

types of workloads. When testing the I/O systems, the combined workload of these databases should be
accounted for and measured against the expected performance objectives.

The Dell all-flash midrange storage portfolio offers storage systems that scale in both IOPS and throughput.
Combined with its advanced architecture and storage-saving features, the Dell all-flash midrange platform is
ideal for any type of Oracle workload.

Storage pools
In general, it is recommended to use fewer storage pools within Dell Unity systems. Fewer storage pools
reduce complexity and increase flexibility. Dell Technologies recommends using a single virtual disk pool for
hosting volumes for Oracle databases. A single virtual pool provides better performance by leveraging the
aggregate I/O bandwidth of all disks to service I/O requests from Oracle databases. A single drive pool is also
easier to manage, allowing an administrator to easily adapt the storage system to satisfy the ever-changing
workloads that are common in Oracle database environments. Before creating multiple storage pools to
separate workloads, understand the various Dell Unity features that are available for managing and throttling
specific workloads.

RAID configurations
By default, the Dell Unity system chooses RAID 5 as the protection level when creating a storage pool, which
contradicts traditional guidance advocating RAID 1/0 for database workloads. However, this traditional
guidance assumes the storage system contains spinning disks and does not consider SSDs or flash-
optimized storage such as Dell Unity systems. Testing Dell Unity All-Flash systems in most RAID 5 and RAID
1/0 configurations has shown negligible performance differences unless the workload is extremely write-intensive
for an extended period. Usually, the small performance gain of RAID 1/0 is not worth the reduced capacity,
and therefore it is recommended to use the default configuration of RAID 5. For heavy write workloads where
maximum write performance is required, RAID 1/0 can be used.

The I/O requirements need to be clearly defined to size storage correctly. The RAID type chosen is
determined by comparing availability and performance requirements. The small footprint and high I/O density
of flash drives typically allow a smaller drive size, reducing drive rebuild times, which means RAID 5 is usually
preferred over RAID 6.

Testing and monitoring


Once the I/O requirements have been defined, I/O performance should be validated and tested before putting
the environment into full production mode. After the system goes into production, it is imperative to continue
to collect and analyze performance data periodically to ensure the storage system is meeting the
expected performance level. Ensure that a baseline is established and recorded so that comparisons can be
made.

There are many tools in the market that provide a comprehensive set of features to exercise and measure the
storage system and other components in the I/O stack. It is up to administrators to decide the testing
requirements and which tools work best for their environment. When choosing tools, consider the capabilities
in the following subsections. Several performance testing utilities are shown below:

• I/O subsystem

- dd
- Iozone (fs benchmarking)

- Iometer
- ORION
- winsat
- FIO (fs benchmarking)
- Vdbench
- bonnie

• Relational Database Management System (RDBMS) level

- SLOB
- Oracle Database I/O calibration feature (DBMS_RESOURCE_MANAGER.CALIBRATE_IO; see the example after this list)
- dbbenchmark

• Application Level testing tools (database side)

- Benchmark factory
- HammerDB: Supports TPC-C and TPC-H workloads
- Swingbench
- Simora: Mines Oracle SQL Trace files and generates SQL to be run to reproduce the load
- Oracle Real Application Testing: An enterprise database option from Oracle that records a
database load on the source system and replays it on a destination environment

• Application Level testing tools (app side)

- HP LoadRunner
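As a minimal sketch of the database-level option above, the following runs Oracle's I/O calibration from a shell prompt. The parameter values (num_physical_disks => 16, max_latency => 20) are placeholders to adjust for your environment, and CALIBRATE_IO requires asynchronous I/O and timed_statistics=TRUE in the database.

# sqlplus / as sysdba <<'EOF'
SET SERVEROUTPUT ON
DECLARE
   l_max_iops PLS_INTEGER;   -- highest sustainable IOPS observed
   l_max_mbps PLS_INTEGER;   -- highest sustainable throughput (MB/s)
   l_latency  PLS_INTEGER;   -- average latency (ms) at maximum IOPS
BEGIN
   DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      num_physical_disks => 16,  -- placeholder: approximate drive count
      max_latency        => 20,  -- placeholder: tolerated latency in ms
      max_iops           => l_max_iops,
      max_mbps           => l_max_mbps,
      actual_latency     => l_latency);
   DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                        ' max_mbps=' || l_max_mbps ||
                        ' actual_latency=' || l_latency);
END;
/
EOF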

Testing the I/O path


The first item to test on a new configuration is the path between the server and the array. Running a large
block sequential read test using small files should saturate the path between the server and the array. This
test verifies that all paths are fully functional and can be used for I/O traffic. Run this test on a dedicated
server and array; using a production system could cause significant performance issues.

To validate the I/O path, run a large block sequential read test using the following guidelines as a starting
point, and vary as necessary (a sample command implementing these settings follows the list):

• Create one LUN per storage processor
• Format the volumes using a 64 KB allocation unit
• Use a block size of 512 KB for the test
• Configure the test for 32 outstanding I/Os
• Use multiple threads. Eight is the recommended starting point
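A sample FIO command implementing these guidelines is shown below; the device path /dev/mapper/ORA-DATA-00 is a placeholder for one of the test LUNs, and nothing is written in this read-only test.

# fio --name=path-test --filename=/dev/mapper/ORA-DATA-00 \
      --rw=read --bs=512k --iodepth=32 --numjobs=8 \
      --direct=1 --ioengine=libaio --runtime=300 --time_based \
      --group_reporting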

If the throughput matches the expected throughput for the number of HBA ports in the server, the paths
between the server and Dell Unity array are set up correctly.

Testing the drives


Once the I/O path has been validated, the next step is to test the drives. For best results when testing drives
on a Dell Unity array, use the following guidelines when configuring the test.

• In a dual-controller system, use at least one volume per controller to ensure that I/O is
distributed across both controllers. Using both controllers more closely simulates real-world activity.
For best results, use the same number of volumes on each controller. More LUNs might be better and
may be required to achieve maximum performance.
• When performing I/O tests on any storage platform, use files that are larger than the controller
cache. For more accurate results, use a file size that matches the amount of data being stored. If
using files that large is not practical due to a large dataset, use a file size of at least 100 GB.
• Some I/O test tools (Oracle ORION is an example) generate files full of zeros. This behavior causes
inaccurate results when testing. Avoid using test utilities that write zeros for drive validation, or
configure the tool to avoid writing zeros.

The purpose of this type of testing is to validate that the storage design will provide the required throughput
and IOPS with acceptable latency. It is important that the test does not exceed the designed capacity of the
array. For example, an array designed for a workload of 5,000 IOPS is likely to perform poorly with a workload
of 10,000 IOPS. If a test is generating a workload higher than the designed capacity, adjust the workload
being generated by reducing the number of threads, outstanding I/Os, or both.

The results of the Live Optics analysis provide an I/O target to simulate using these tests. To get an idea of
the performance capabilities of the array, run I/O tests with a range of I/O sizes commonly seen with Oracle.
When testing random I/O, test with I/O sizes of 8 KB, 16 KB, and 32 KB. When testing sequential I/O, test
with 8 KB, 16 KB, 32 KB, and 64 KB. Since processes like read-ahead scans and backups can issue larger
sequential I/O, it is a good idea to also test block sizes larger than 32 KB. To truly test the array, simulate the
designed workload at a minimum, and slightly higher if possible. To ensure the array has headroom for load
spikes, test throughput slightly beyond estimated production loads.
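As a sketch, the series of random-read tests at the suggested block sizes can be scripted with FIO; the device path is a placeholder, and the queue depth and run time should be tuned so the generated workload stays within the array's designed capacity.

# for bs in 8k 16k 32k; do
    fio --name=rand-read-${bs} --filename=/dev/mapper/ORA-DATA-00 \
        --rw=randread --bs=${bs} --iodepth=16 --numjobs=4 \
        --direct=1 --ioengine=libaio --runtime=180 --time_based \
        --group_reporting
  done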

I/O simulation
The primary objective of I/O simulation is to stress the storage system. I/O simulation tools are typically easy
to use and configure because they do not require a fully configured database. These tools generally allow
workloads to increase or decrease during the tests by specifying different parameters:

• I/O block size
• Queue depth (outstanding requests)
• Number of test files and sizes
• Number of I/O threads
• Read/write patterns (read only, write only, read/write mix)
• Ability to easily step through a series of tests with various settings (for example, run sets of 8k tests,
16k tests, 32k tests)
• Ability to write random data (non-zero) to the storage
• Ability to generate test reports or export data to other applications (such as Microsoft Excel)

Oracle ORION, Vdbench, and FIO are three I/O simulation tools; all are free to download and use.
Oracle ORION has a unique advantage over the others because it is explicitly designed to simulate Oracle
database I/O workloads using the same I/O software stack as Oracle. It also provides both OLTP and OLAP
simulation modes, which simplifies the setup and execution of the test. ORION is bundled with the
Oracle database software and can be found in the $ORACLE_HOME/bin directory. See the appendix for
references to these tools. For more information about how to configure and run ORION, see the chapter,
Calibration with the Oracle Orion Calibration Tool, in the Oracle Database Performance Tuning Guide.
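As a minimal sketch, an OLTP-mode ORION run against two multipath devices might look like the following. ORION reads the list of LUNs from a file named <testname>.lun in the current directory; the device paths and the -num_disks value are placeholders.

# cat unity_oltp.lun
/dev/mapper/ORA-DATA-00
/dev/mapper/ORA-DATA-01

# $ORACLE_HOME/bin/orion -run oltp -testname unity_oltp -num_disks 16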

Database transaction generation


These tools focus on generating a combination of workloads with different types of database transactions to
simulate a typical OLTP workload, OLAP workload, or both. They require a higher degree of configuration and
customization in the tools and databases. SLOB, HammerDB, Swingbench, and Quest Benchmark Factory are
commonly used to perform database transactional I/O benchmarks. Except for Quest Benchmark Factory, all
these tools are freely available for multiple operating systems.

Performance monitoring
Performance data can be monitored at the operating system level, on the Dell Unity system, and in Oracle
databases. Ideally, monitoring performance continuously offers the most detail and allows in-depth analysis of
the environment. At a minimum, performance statistics should be captured for at least 24 hours and during
the periods of heaviest activity. The following subsections describe popular software and cloud-based
platforms for monitoring and analyzing performance.

Operating system monitoring


The following utilities are freely available from the operating system vendors and can perform basic system
and I/O monitoring. See the operating system manual and the online resources for each tool to find more
information; typical invocations are shown after the list.

• sar (Linux)
• iostat (Linux)
• top (Linux)
• atop and netatop (Linux)
• collectl (Linux)
• Performance Monitor (Microsoft Windows)
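For example, the following common invocations report extended device statistics every 5 seconds (iostat until interrupted, sar for 12 samples):

# iostat -xm 5
# sar -d -p 5 12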

Oracle database monitoring


For an Oracle database, the utilities used most are Statspack and the Automatic Workload Repository (AWR).
AWR is preferred over the older Statspack, but either one can provide abundant performance statistics for a
database. Both utilities come bundled with the database software.
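For example, an AWR report for a chosen pair of snapshots can be generated with the script bundled under $ORACLE_HOME (Statspack provides the analogous spreport.sql):

# sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql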

Oracle Enterprise Manager (OEM) is a separate application offered by Oracle. It provides a centralized
management and monitoring platform for many Oracle applications and databases. Configuration and
performance data are collected through Oracle agents running on the individual hosts and stored in a common
management database. OEM provides a plethora of performance and utilization charts and many other
advanced features to manage the environment.

Additional information can be found at the Oracle Enterprise Manager page.

Unisphere performance dashboard


This web-based unified management software comes with every Dell Unity storage system and can manage
every aspect of the storage. The performance dashboard in Dell Unisphere provides both real-time and
historical performance charts. Administrators can easily modify existing dashboards and charts or add new
dashboards and charts according to their needs.

Metric data ages over time and gets aggregated into longer sampling intervals. The data is kept for historical
referencing for up to 90 days.

Additional information can be found in the Dell Unity: Unisphere Overview document.

Dell Live Optics


Dell's Performance Analysis Collection Kit (DPACK) has evolved into a new product called Dell Live Optics, a
platform-agnostic analysis service freely available from Dell Technologies. It works in Linux,
Microsoft Windows, and VMware environments and collects performance data such as processor utilization,
memory utilization, storage utilization, IOPS, and I/O throughput. Live Optics analyzes this data and
provides a comprehensive, in-depth report on server workloads and capacity.

Find additional information at the Live Optics for Service Providers page on Dell.com, with a download
available from the Live Optics site.

Dell CloudIQ
Dell CloudIQ is a software as a service (SaaS) application that is freely available. When it is enabled for the
Dell Unity storage system, it allows administrators to monitor multiple Dell Unity storage systems remotely.
CloudIQ provides continuous monitoring of performance, capacity, configuration, and data protection, and
enables administrators to manage storage proactively by receiving advance notification of potential issues.

Find additional information in the CloudIQ Overview document.

Deploying Oracle databases on Dell Unity storage


This section discusses best practices for architecting and configuring Dell Unity storage for Oracle databases
to realize optimal performance and manageability of the environment.

Linux setup and configuration


Oracle databases are commonly deployed on Linux operating systems. The following subsections describe
best practices when working with Dell Unity storage systems on Linux operating systems.

Discovering and identifying Dell Unity LUNs on a host


After creating LUNs and enabling host access to them in the Dell Unity system, the host operating system
needs to scan for these LUNs before they can be used. On Linux, install the following rpm packages, which
contain useful utilities to discover and identify LUNs: sg3_utils and lsscsi.
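On OL/RHEL systems, these packages can typically be installed with yum:

# yum install -y sg3_utils lsscsi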

Identifying LUN IDs on Dell Unity storage


The Dell Unity storage system automatically assigns LUN IDs, starting from 0 and incrementing by 1
thereafter, when enabling access to a host. LUN 0 typically represents the first LUN allowed access to a host.

Perform the following to view the LUN ID information:

1. In Unisphere, click Access > Hosts.
2. Select the host check box > Host Properties (pencil icon) > LUNs tab.
3. If the Host LUN ID column is hidden from the default view, click the columns filter (gear icon) >
Columns > Host LUN ID. See Figure 1.

Figure 1. LUN ID information

Scanning for LUNs


LUNZ is a placeholder host LUN, not a real LUN. LUNZ appears on the Linux operating system to make the
Dell Unity system visible to the host when no LUNs have been assigned to the host. LUNZ always takes
up ID 0 on the host. Any I/Os sent down the LUNZ paths result in errors.

To see if there are any LUNZ on the system, run the lsscsi command.

# lsscsi|egrep DGC
[13:0:0:0] disk DGC LUNZ 4201 /dev/sde
[13:0:1:0] disk DGC LUNZ 4201 /dev/sdaf
[14:0:0:0] disk DGC LUNZ 4201 /dev/sdd
[14:0:1:0] disk DGC LUNZ 4201 /dev/sdag
[snipped]

If the host detects LUNZ at LUN ID 0, run rescan-scsi-bus.sh with the --forcerescan option. LUNZ will be
removed, allowing the real LUN 0 to show up on the host. For example:

# /usr/bin/rescan-scsi-bus.sh --forcerescan

# lsscsi |egrep DGC


[13:0:0:0] disk DGC VRAID 4201 /dev/sde
[13:0:1:0] disk DGC VRAID 4201 /dev/sdaf
[14:0:0:0] disk DGC VRAID 4201 /dev/sdd
[14:0:1:0] disk DGC VRAID 4201 /dev/sdag
[snipped]

When the Dell Unity LUNs have non-zero IDs, use the -a option instead.

# /usr/bin/rescan-scsi-bus.sh -a

Note: Omitting the --forcerescan option might prevent the operating system from discovering LUN 0 because
of the LUNZ conflict.

Identifying LUNs by WWNs


The most accurate way to identify a LUN on the host operating system is by its WWN. The Dell Unity system
assigns a unique WWN for each LUN. The WWN information can be found in Unisphere > Access > Hosts
> Host Properties > LUNs. If the WWN column is hidden from the default view, enable it through the
columns filter. See Figure 1.

Querying WWNs using scsi_id command


To query the WWN on a Linux operating system, run the following commands against the device file.

Oracle Linux or Red Hat Enterprise Linux 6.x

# /sbin/scsi_id --page=0x83 --whitelisted --device=<device>

Oracle Linux or Red Hat Enterprise Linux 7.x

# /usr/lib/udev/scsi_id --page=0x83 --whitelisted --device=<device>

In these examples, <device> can be one of the following:

• Single path device (/dev/sde)
• Linux multipath device (/dev/mapper/mpathe)
• Dell PowerPath device (/dev/emcpowerc)

The string returned by the scsi_id command is the WWN of the Dell Unity LUN, prefixed with a 3. For
example:

36006016010e0420093a88859586140a5
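To map every SCSI disk device to its WWN in one pass, a small shell loop such as the following sketch can be used (the scsi_id path shown is for OL/RHEL 7):

# for d in /dev/sd*[a-z]; do
    echo "$d $(/usr/lib/udev/scsi_id --page=0x83 --whitelisted --device=$d)"
  done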

Querying WWNs using multipath command


If the system has Linux device-mapper-multipath software enabled, the multipath command displays the
multipath device properties including the WWN. For example:

# multipath -ll
mpatha (36006016010e0420093a88859586140a5) dm-0 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1
alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 13:0:1:0 sdaf 65:240 active ready running
| |- 14:0:1:0 sdag 66:0 active ready running
| |- 15:0:1:0 sdbv 68:144 active ready running
| |- 16:0:1:0 sdcx 70:80 active ready running
| |- 17:0:1:0 sddz 128:16 active ready running
| `- 18:0:1:0 sdfb 129:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 13:0:0:0 sde 8:64 active ready running
|- 14:0:0:0 sdd 8:48 active ready running
|- 15:0:0:0 sdbh 67:176 active ready running
|- 16:0:0:0 sdcj 69:112 active ready running
|- 17:0:0:0 sddl 71:48 active ready running
`- 18:0:0:0 sden 128:240 active ready running

Querying WWNs using powermt command


Similarly, when PowerPath is enabled on the system, the powermt command displays the multipath device
properties including the WWN.

# powermt display dev=all

Multipathing
Multipathing is a software solution implemented at the host operating system level. While multipathing is
optional, it provides path redundancy, failover, and performance-enhancing capabilities. It is recommended to
deploy a multipath solution in production and in any environment where availability and performance are
critical.

Main benefits of using an MPIO solution


• Increase database availability by providing automatic path failover and failback
• Enhance database I/O performance by providing automatic load balancing and capabilities for
multiple parallel I/O paths
• Ease administration by providing persistent user-friendly names for the storage devices across cluster
nodes

Multipath software solutions


There are several multipath software solutions to choose from. It is up to the administrator to decide which
solution is best for the environment. The following list provides a brief description of some of these solutions.

• Native Linux multipath (device-mapper-multipath)
• Dell PowerPath
• Symantec Veritas Dynamic Multipathing (VxDMP)

The native Linux multipath solution is supported and bundled with most popular Linux distributions in use
today. Because the software is widely and readily available at no additional cost, many administrators prefer
it over third-party solutions.

Unlike the native Linux multipath solution, both Dell PowerPath and Symantec VxDMP provide extended
capabilities for some storage platforms and software integrations. Both solutions also offer support for
numerous operating systems in addition to Linux.

Only one multipath software solution should be enabled on a host, and the same solution should be
deployed on all nodes of a cluster.

Refer to the vendor's multipath solution documentation for more information. For information about operating
systems supported by Dell PowerPath, see the Dell Simple Support Matrix. The appendix provides links to
additional resources for these solutions.

Connectivity guidelines
The following list provides a summary of array-to-host connectivity best practices. It is recommended to
review the documents, Configuring Hosts to Access Fibre Channel (FC) or iSCSI Storage and Dell Unity: High
Availability.

• Have at least two FC/iSCSI HBAs or ports to provide path redundancy.


• Connect the same port on both Dell Unity storage processors (SP) to the same switch because the
Dell Unity system matches the physical port assignment on both SPs.
• Use multiple switches to provide switch redundancy.

Configuration file
To ease deployment, the native Linux multipath software comes with a set of default settings that cover
storage models from different vendors, including the Dell Unity system. The default settings allow the software
to work with the Dell Unity system without additional configuration. However, these settings might not be
optimal for all situations and should be reviewed and modified if necessary.

The multipath daemon configuration file needs to be created on newly installed systems. A basic template can
be copied from /usr/share/doc/device-mapper-multipath-<version>/multipath.conf to /etc/multipath.conf
as a starting point. Any settings that are not defined explicitly in /etc/multipath.conf assume the default
values. The full list of settings (explicitly set and default values) can be obtained using the following
command. Dell Unity-specific settings can be found by searching the output for DGC. The default
settings generally work without any issues.

# multipathd -k"show config"

Creating aliases
It is generally a good idea, though not mandatory, to assign meaningful names (aliases) to the multipath
devices. For example, create aliases based on the application type and the environment it is in. The following
snippet in the multipaths section assigns the alias ORA-DATA-00 to the Dell Unity LUN with the WWN
36006016010e04200271a8a594a34d845.

multipaths {
multipath {
wwid "36006016010e04200271a8a594a34d845"
alias ORA-DATA-00
}
}
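After editing /etc/multipath.conf, reload the configuration so that the new aliases take effect. One way, using the same interactive multipathd shell shown earlier, is:

# multipathd -k"reconfigure"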

Asymmetric Logical Unit Access


Dell Unity systems support asymmetric logical unit access (ALUA) for host access. ALUA allows the host
operating system to distinguish optimized paths from nonoptimized paths. Optimized paths are the ones
connected to the LUN's SP owner, and they are assigned a higher priority. The default multipath settings
reflect the ALUA support of Dell Unity storage. The following example shows LUN ORA-DATA-00 with 12
paths divided into two groups. The optimized paths have a priority of 50, and the non-optimized paths have a
priority of 10.

Example: multipath -ll shows groups of paths with different priorities


# multipath -ll ORA-DATA-00
ORA-DATA-00 (36006016010e04200271a8a594a34d845) dm-18 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1
alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 13:0:0:1 sdf 8:80 active ready running
| |- 14:0:0:1 sdg 8:96 active ready running
| |- 15:0:0:1 sdbi 67:192 active ready running
| |- 16:0:0:1 sdck 69:128 active ready running
| |- 17:0:0:1 sddm 71:64 active ready running
| `- 18:0:0:1 sdeo 129:0 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 13:0:1:1 sdah 66:16 active ready running
|- 14:0:1:1 sdai 66:32 active ready running
|- 15:0:1:1 sdbw 68:160 active ready running
|- 16:0:1:1 sdcy 70:96 active ready running
|- 17:0:1:1 sdea 128:32 active ready running
`- 18:0:1:1 sdfc 129:224 active ready running

I/Os are sent down the optimized paths when possible. If I/Os are sent down the nonoptimized paths, the peer
SP redirects them to the primary SP through the internal bus. When the Dell Unity system senses a large
amount of non-optimized I/O, it automatically trespasses the LUN from the primary SP to the peer SP to
optimize the data paths.


LUN partition
A LUN can be used as a whole, or it can be divided into multiple partitions. Certain software, such as Oracle
ASMLib, recommends partitioning over whole LUNs. Dell Technologies recommends configuring whole LUNs
without partitions wherever appropriate because doing so offers the most flexibility for configuring and
managing the underlying storage.

See the sections Oracle Automatic Storage Management and File systems for choosing a strategy to grow
storage space.

Partition alignment
When partitioning a LUN, it is recommended to align the partition on a 1 MB boundary. Either fdisk or parted
can be used to create the partition; however, only parted can create partitions larger than 2 TB.

Creating partition using parted


Before creating the partition, label the device as GPT. Then, specify the partition offset at sector 2048 (1 MB).
The following commands create a single partition that takes up the entire LUN. Once the partition is created,
the partition device /dev/mapper/orabin-std1 should be used for creating the file system or ASMLib volume.

# parted /dev/mapper/orabin-std
GNU Parted 3.1
Using /dev/mapper/orabin-std
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) quit
Information: You may need to update /etc/fstab.

# parted /dev/mapper/orabin-std mkpart primary 2048s 100%


Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
Information: You may need to update /etc/fstab.

Note: A misalignment warning might appear, which can be safely ignored.
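To confirm that the resulting partition is aligned, parted provides an align-check command; a quick sketch follows (the trailing 1 is the partition number):

# parted /dev/mapper/orabin-std align-check optimal 1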

Partitioned devices and filesystems


When creating a file system, create it on the properly aligned partition device.

# mkfs.ext4 /dev/mapper/orabin-std1

I/O scheduler for Oracle ASM devices


Oracle recommends using the deadline I/O scheduler for the best performance of Oracle ASM. For Oracle
Linux, the deadline I/O scheduler is enabled by default in Oracle Unbreakable Enterprise Kernel. For other
Linux operating systems, verify the I/O scheduler and make any necessary updates.

To verify the I/O scheduler, use the following command:

# egrep "*" /sys/block/sd*/queue/scheduler


/sys/block/sdaa/queue/scheduler:noop [deadline] cfq
/sys/block/sdab/queue/scheduler:noop [deadline] cfq
/sys/block/sdac/queue/scheduler:noop [deadline] cfq
/sys/block/sdad/queue/scheduler:noop [deadline] cfq
/sys/block/sdae/queue/scheduler:noop [deadline] cfq
/sys/block/sdaf/queue/scheduler:noop [deadline] cfq

To set the I/O scheduler persistently, create a udev rule that updates the devices. See the section Linux
dynamic device management (udev) for more information about using udev to set persistent ownership and
permissions.

The following example shows setting the deadline I/O scheduler on all /dev/sd* devices. The rule is appended
to the 99-oracle-asmdevices.rules file.

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
ACTION=="add|change", KERNEL=="sd*", RUN+="/bin/sh -c '/bin/echo deadline >
/sys$env{DEVPATH}/queue/scheduler'"

# udevadm control --reload-rules


# udevadm trigger

Oracle Automatic Storage Management


Dell Technologies and Oracle recommend using Oracle Automatic Storage Management (ASM) to manage
Dell Unity LUNs for the database and clusterware. This section reviews the general guidelines and additional
considerations for an Oracle database.

Preparing storage for Oracle ASM


LUNs intended for Oracle ASM must have their ownership and permissions set correctly. The group and
owner must match those of the ASM instance, and permissions must be set to read/write. For example, if
user grid with primary group oinstall owns the ASM instance, grid:oinstall should be assigned to the
LUNs. There are different methods to set the ownership and permissions and keep these settings persistent
across host reboots.

Persistent device ownership and permissions


Persistent device ownership and permissions can be managed through various software. The following list
describes some of the commonly used options on a Linux host.

• Linux dynamic device management (udev)


• Oracle ASMLib
• Oracle ASMFD

Linux dynamic device management (udev)


The Linux udev facility comes with every Linux distribution and is easy to set up for persistent device
ownership and permissions by creating rules in a udev rule file. System rule files are in the directory
/usr/lib/udev/rules.d, and user-defined rule files are in the directory /etc/udev/rules.d. There are many ways to
define a device in a rule file. Two examples follow.

Example 1: Set device ownership and permission by WWNs

Define a rule for each Dell Unity LUN using its unique WWN. With this approach, each LUN requires a udev
rule. Rules are defined in /etc/udev/rules.d/99-oracle-asmdevices.rules. The following example shows a
udev rule that sets grid:oinstall ownership and 660 permission on a dm (multipath) device that matches the
WWN 36006016010d04200b584ce59557ba84a.

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="dm-*", PROGRAM=="/lib/udev/scsi_id --whitelisted --device=/dev/$name", RESULT=="36006016010d04200b584ce59557ba84a", ACTION=="add|change", OWNER="grid", GROUP="oinstall", MODE="0660"


# udevadm control --reload-rules


# udevadm trigger

If PowerPath is used, change KERNEL=="dm-*" to KERNEL=="emcpower*".

Example 2: Set device ownership and permission by device name pattern

The udev rule can be simplified if multipath device aliases are created with a consistent string pattern. For
example, use the prefix ORA- in all multipath device aliases for LUNs intended for Oracle ASM. A single udev
rule can then set ownership and permissions on all ORA* multipath devices. See the section Creating
aliases for details on creating a multipath device alias.

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="dm-*",ENV{DM_NAME}=="ORA*",OWNER="grid",GROUP="oinstall",MODE="0660"

# udevadm control --reload-rules


# udevadm trigger

The advantage of this approach is that only multipath.conf needs to be updated when new LUNs are added to
the system for Oracle ASM.

Oracle ASMLib
Oracle ASMLib simplifies storage management and reduces kernel resource usage. It provides device file
name, ownership, and permission persistency, and it reduces the number of open file handles required by the
database processes. No udev rules are required when ASMLib is used.

When LUNs are initialized with ASMLib, special device files are created in the /dev/oracleasm/disks folder
with proper ownership and permission automatically. When the system reboots, the ASMLib driver restarts
and re-creates the device files. ASMLib consists of three packages:

• oracleasm-support-version.arch.rpm
• oracleasm-kernel-version.arch.rpm
• oracleasmlib-version.arch.rpm

Each Linux vendor maintains its own oracleasm kernel driver (oracleasm-kernel-version.arch.rpm). With Oracle
Linux, the kernel driver is already included in the Oracle Linux Unbreakable Enterprise Kernel. For more
information about ASMLib and to download the software, go to
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html.


The ownership of the ASMLib devices is defined in the /etc/sysconfig/oracleasm configuration file, which is
initially generated by running /etc/init.d/oracleasm configure. Update the configuration file, if necessary, to
reflect the proper ownership and the disk-scanning order.

# cat /etc/sysconfig/oracleasm
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=grid

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=oinstall

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"

This configuration file indicates grid:oinstall ownership, and it searches for multipath devices (dm) and
excludes any single-path devices (sd). If PowerPath devices are used, set
ORACLEASM_SCANORDER="emcpower".

Note: The asterisk (*) cannot be used in the value for ORACLEASM_SCANORDER and
ORACLEASM_SCANEXCLUDE.

Oracle requires the LUNs to be partitioned for ASMLib use. First, create a partition with parted, and then use
oracleasm to label the partition. ASMLib does not provide multipath capability; it relies on native or third-party
multipath software for that function. The following example shows creating an ASMLib device on
a partition of a Linux multipath device. The oracleasm command writes the ASMLib header to
/dev/mapper/mpathap1 and generates the ASMLib device file /dev/oracleasm/disks/DATA01 with the
ownership indicated in the /etc/sysconfig/oracleasm file.

# oracleasm createdisk DATA01 /dev/mapper/mpathap1
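To confirm that the device was created and is visible to ASMLib, the standard oracleasm utilities can be used; for example (DATA01 as labeled above):

# oracleasm listdisks
# oracleasm querydisk -p DATA01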

Oracle ASM Filter Driver


Oracle ASM Filter Driver (ASMFD) is a kernel module that sits between the operating system kernel and
Oracle ASM. Oracle intends to replace Oracle ASMLib with ASMFD and recommends using ASMFD in
Oracle 12c and above. ASMFD includes all the ASMLib benefits of storage device name, ownership, and
permission persistency, and better kernel usage by reducing the number of open file handles. Additionally, it
provides storage protection by rejecting non-oracle I/Os and hence prevents inadvertent overwrite of the ASM
disks.

In a cluster environment, without ASMFD, when a cluster node is fenced, the host must be rebooted to ensure
the integrity of the data. With ASMFD, the fenced node does not need to be rebooted. It is possible to restart
the clusterware stack which reduces the time to recover the node.


Unlike ASMLib, ASMFD comes with the Grid Infrastructure software and there is no additional software to
download. Starting with Oracle ASM 12c Release 2, the installation and configuration for Oracle ASMFD have
been simplified by integrating the option into the Oracle Grid Infrastructure installation. Administrators need to
select the option Configure Oracle ASM Filter Driver during the Grid installation.

The installation of ASMFD automatically creates a udev rule file, /etc/udev/rules.d/53-afd.rules, that sets
the afd devices with the proper ownership and permissions. Do not attempt to modify or delete this file directly.
Use the asmcmd afd_configure command to make updates instead.

# cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="oinstall", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="oinstall", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="oinstall", MODE="0664"

Either whole LUNs or LUN partitions can be used for ASMFD devices. Dell Technologies recommends using
whole LUNs because of certain restrictions with partitions which affect database availability during storage
expansion. See section Expand Oracle ASM storage for more details.

The following example shows creating an ASMFD device on a Linux multipath device. The asmcmd
afd_label command writes the ASMFD header to /dev/mapper/mpatha and generates the ASMFD device
file in /dev/oracleafd/disks/DATA01. The udev rule ensures that the afd devices are set to grid:oinstall and
0664 permission.

# asmcmd afd_label DATA01 /dev/mapper/mpatha
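To verify the label, the asmcmd afd_lsdsk command lists the AFD-labeled disks and their backing paths:

# asmcmd afd_lsdsk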

The other advantage of using ASMFD is that it supports thin-provisioned disk groups starting with Oracle
release 12.2.0.1.

To find out which OS platforms ASMFD is supported on, see Oracle KB Doc ID 2034681.1 at Oracle Support.

For more information on installing and configuring ASMFD, refer to the Oracle Automatic Storage
Management Administrator’s Guide.

Setting the asm_diskstring ASM instance parameter


The asm_diskstring ASM instance parameter tells ASM the location of the ASM devices. During the Grid
Infrastructure installation, it defaults to null, and it should be updated to reflect the correct location of the
device files.

Example of asm_diskstring settings

Device files           | asm_diskstring setting
Linux native multipath | asm_diskstring='/dev/mapper/ORA*'
Dell PowerPath         | asm_diskstring='/dev/emcpower*'
Oracle ASMLib          | asm_diskstring='ORCL:*'
Oracle ASMFD           | asm_diskstring='AFD:*'
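After installation, one way to update the parameter from the ASM instance is with ALTER SYSTEM; this sketch assumes the Linux native multipath aliases from the table:

SQL> ALTER SYSTEM SET asm_diskstring = '/dev/mapper/ORA*' SCOPE=BOTH;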


Oracle ASM guidelines


Dell Technologies and Oracle recommend using Oracle ASM as the preferred storage management solution
for either a single-instance database or Real Application Clusters (RAC). ASM takes the place of the traditional
Linux volume manager and file system. It takes over the management of the disks and creates disk groups
where data files reside.

Benefits of using Oracle ASM


ASM offers many advantages over the traditional Linux storage management solution such as Logical Volume
Manager (LVM). The main benefits include:

• Automatic file management
• Online rebalancing of data files across ASM disks
• Online addition and removal of ASM disks without downtime
• Single solution for both volume and file management, integrated with Oracle software
• Improved I/O performance because ASM stripes all files across all disks in a disk group
• Transparent integration with Dell Unity system features such as snapshots, thin provisioning, thin
clones, compression, and Data at Rest Encryption

ASM disk and disk group guidelines


When creating an Oracle ASM disk group, consider the following guidelines:

• For ultimate flexibility and maintaining configuration consistency, create separate disk groups for each
of the following:

- Create a disk group for the Oracle Cluster Registry (OCR) and voting files.
- Create a disk group for Grid Infrastructure Management Repository (GIMR).
- Use one or more disk groups for database data files for each database.
- Use a disk group for a fast recovery area for each database.
- A database can span multiple disk groups, but each disk group should be mounted and used
by one database exclusively. This provides the ability to independently optimize the storage
and snapshot configuration for each individual database.

• Create LUNs with the same capacity and the same services (such as compression, consistency
group, and snapshot schedule) in the same disk group
• Use fewer but larger LUNs to reduce the number of objects to be managed
• Create a minimum of two LUNs for each disk group. Distribute the LUNs evenly on both Dell Unity
storage processors to allow even I/O distribution to both processors, hence, maximizing the
performance and I/O bandwidth for the environment.
• To take an array-based snapshot on a multivolume Oracle database, ensure that all LUNs belonging
to the same database are snapped together. To snap the LUNs together, group the LUNs in a
Consistency Group (see more information in section Consistency group).
• While ASM can provide software-level mirroring, it is not necessary because data protection is
integrated into Dell Unity RAID protection. Use external redundancy for ASM disk groups to enable
substantial storage savings, reduce overall IOPS from ASM, and achieve better I/O performance.
• For best storage efficiency, create thin-provisioned LUNs in the Dell Unity system for ASM use. When
creating datafiles on ASM disk groups, administrators can set an initial size for each datafile and
specify the autoextend clause with an extent size for growth. The Dell Unity system allocates
storage for the initial datafile size and then as data is written to the datafiles. When needed, more
space is allocated in increments of the autoextend size. An example of the CREATE TABLESPACE
statement is shown in the following:

SQL> create tablespace DATATS datafile '+DATADG' size 10G autoextend on next 1024M maxsize unlimited;

• By default, each LUN has unlimited I/O limits in the Dell Unity system. When a database requires
higher performance and another does not, consider creating different host I/O limit policies in Dell
Unisphere that limit I/O performance based on IOPS and bandwidth. Assign the policy to the LUNs
corresponding to the level of performance required. The host I/O limit is applied on the LUN level.
• Starting with Oracle 12c, ASMFD supports thin-provisioned ASM disk groups. This feature allows
unused space to be released back to the Dell Unity system after deleting or shrinking the datafiles. To
enable the feature, set the THIN_PROVISIONED attribute to 'TRUE' on the disk group. For example:

SQL> ALTER DISKGROUP DATADG SET ATTRIBUTE 'THIN_PROVISIONED'='TRUE';

• When ASM rebalances a disk group, data is moved to the higher-performing tracks of spinning disks
during the compact phase. Because the Dell Unity system virtualizes the physical storage devices, and
with the use of flash devices, there is no real benefit to compacting the data. In Oracle 12c, it is
possible to disable the compact phase on an individual disk group by setting the
_rebalance_compact attribute to 'FALSE'.

SQL> ALTER DISKGROUP DATADG SET ATTRIBUTE '_rebalance_compact'='FALSE';

For Oracle pre-12c releases, _rebalance_compact can only be disabled on the ASM instance level
which affects all disk groups. For database environments that have different storage types, turning off
the compact phase might have adverse performance implication.

For more information about ASM compact phase rebalancing, see Oracle KB Doc ID 1902001.1 on
Oracle Support.

Table 5 demonstrates an example of how ASM disk groups are organized. Figure 2 illustrates the storage
layout on the database, ASM disk group, and Dell Unity system levels.

Example ASM disk group configuration

Database | ASM disk group | Number of LUNs | LUN size | Dell Unity consistency group | Description
Clusterware | GIDATA | 2 | 10 GB | N/A | Clusterware-related information such as the OCR and voting files
Grid Infrastructure Management Repository | MGMT | 2 | 50 GB | mgmt_cg | In 12cR2, a separate disk group created for the GI Management Repository data
Test database (testdb) | DATADG | 2 | 200 GB | testdb_cg | Disk group that holds the database files, temporary table space, and online redo logs; contains system-related table spaces such as SYSTEM and UNDO. Contains only testdb data.
Test database (testdb) | FRADG | 2 | 100 GB | testdb_cg | Disk group that holds the database archive logs and backup data. Contains only testdb logs.
Development database (devdb) | DATA2DG | 2 | 200 GB | devdb_cg | Disk group that holds the database files, temporary table space, and online redo logs; contains system-related table spaces such as SYSTEM and UNDO. Contains only devdb data.
Development database (devdb) | FRA2DG | 2 | 100 GB | devdb_cg | Disk group that holds the database archive logs and backup data. Contains only devdb logs.

Oracle ASM storage layout on the Dell Unity system


Consistency group
For performance reasons, it is common for a database to span across multiple LUNs to increase I/O
parallelism to the storage devices. Dell Technologies recommends grouping the LUNs into a consistency
group for a database to ensure data consistency when taking storage snapshots. The Dell Unity system
snapshot feature is a quick and space-efficient way to create a point-in-time snapshot of the entire database.
Later sections discuss using Dell Unity system snapshots and thin clones to reduce database recovery
time and create space-efficient copies of the database.

In Figure 2, for example, the RAC database consists of disk groups +DATADG and +FRADG. All ASM
volumes in those disk groups are configured in a single consistency group, testdb_cg. The single instance
database consists of disk groups +DATA2DG and +FRA2DG. The ASM devices of both disk groups are
configured in a consistency group, devdb_cg.

The consistency group feature allows taking a database-consistent snapshot across multiple LUNs. On the
database side, use the ALTER DATABASE BEGIN BACKUP clause before the snapshot is taken and END
BACKUP clause after the snapshot is taken.
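For example, a hot-backup-style sequence for a consistency group snapshot might look like the following sketch; the snapshot itself is taken in Unisphere (or through the Dell Unity CLI), and testdb_cg is the example consistency group from the table above:

SQL> ALTER DATABASE BEGIN BACKUP;
-- Take the snapshot of consistency group testdb_cg in Unisphere here.
SQL> ALTER DATABASE END BACKUP;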

Note: Storage snapshots taken on a multiple-LUN database without a consistency group might be
irrecoverable by Oracle during database recovery.

Expand Oracle ASM storage


As storage consumption grows over time, it becomes necessary to increase the existing storage
capacity both in the Dell Unity system and in the database. It is most desirable to add capacity online with
minimal business interruption. The Dell Unity system has the flexibility to expand the current storage system
with no interruption to the application. The following nondisruptive operations can be performed online in
Unisphere:

• Adding flash devices


• Expanding the storage pool
• Increasing the size of existing LUNs
• Creating and adding new LUNs to existing hosts

The following subsections discuss the different ways to increase ASM storage capacity. Each method has its
pros and cons.

Increase Oracle ASM storage by adding new LUNs


More storage capacity can be added to an ASM disk group by adding new LUNs to the disk group. The
advantage of this method is that the process is relatively simple and safe because no changes are made to
the existing LUNs.

The following outlines the general process:

1. Create LUNs in Unisphere.


2. Ensure that the size of the new LUNs and other features, such as compression and consistency
group membership, match those of the existing LUNs.
3. Allow access to new LUNs to the host systems.
4. Perform a SCSI scan on the host systems (see section Discovering and identifying Dell Unity LUNs
on a host).
5. Configure multipath for the new devices (see section Multipathing).
6. Prepare the LUNs for ASM (see section Preparing storage for Oracle ASM).


7. Add the LUNs to the ASM disk group.

Since ASM automatically rebalances the data after new LUNs are added, it is recommended to add
the LUNs in a single operation to minimize the amount of rebalancing work. The following example
shows the ALTER DISKGROUP ADD DISK statement to add multiple devices to a disk group.

ALTER DISKGROUP DATADG ADD DISK 'AFD:DATADG_VOL1', 'AFD:DATADG_VOL2' REBALANCE POWER 10 NOWAIT;

8. Verify the status and capacity of the disk group.

# asmcmd lsdsk -gk -G datadg
# asmcmd lsdg -g datadg

9. If the existing LUNs are in a consistency group, add the new LUNs to the same consistency group.

Note: Adding or removing LUNs in a consistency group is not allowed when there are existing snapshots of
the consistency group. To add or remove LUNs in a consistency group, delete all snapshots and retry the
operation.

Increase Oracle ASM storage by resizing current LUNs


The Dell Unity system can extend the size of existing LUNs online. However, depending on the operating
system, disk partition configuration, and Oracle software chosen, resizing ASM disks online might not be
possible. Table 6 summarizes the online resize capability of some configurations. It does not cover all possible
configuration variations. Customers should consult each vendor to fully understand the capabilities and
limitations of their software.

Resize Oracle ASM device online support matrix

Oracle version | Without ASMLib and ASMFD, using non-partition LUNs | ASMFD using non-partition LUNs | ASMLib using partition LUNs
12.2.0.1 | Yes | Yes | No
12.1.0.1 | Yes | No | No
11.2.0.4 | Yes | No | No

Note: Resizing LUNs on the operating system can cause loss of data or corruption. It is recommended to
back up all data before attempting to resize the LUNs.

Resize ASM devices without ASMFD and ASMLib online


When not using ASMFD or ASMLib, and only using whole LUNs, it is possible to resize the devices online on
a wide range of operating systems and Oracle versions. See Table 6.

The following outlines the general steps to resize ASM devices online without ASMFD and ASMLib.

1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems and refresh the partition table on each LUN path.
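For example, using the rescan-scsi-bus.sh utility (from the sg3_utils package) that the other resize procedures in this document also use:

# rescan-scsi-bus.sh --resize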


4. Reload multipath devices.

# multipathd -k"resize map ORA-TEST3"

For PowerPath, the new size is automatically updated.

5. Verify the new LUN size.

# multipath -ll ORA-TEST3 | egrep size
size=400G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw

# multipath -ll ORA-TEST3 | awk '/sd/ {print $(NF-4)}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"
Disk /dev/sdfd: 429.5 GB, 429496729600 bytes, 838860800 sectors
Disk /dev/sdff: 429.5 GB, 429496729600 bytes, 838860800 sectors
Disk /dev/sdfh: 429.5 GB, 429496729600 bytes, 838860800 sectors
[snipped]

For PowerPath, use the following commands:

# fdisk -l /dev/emcpowerc

# powermt display dev=all | awk '/sd/ {print $3}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"

6. To determine the maximum size of the LUN, run asmcmd lsdsk and extract the OS_MB value. Use
this value with the ALTER DISKGROUP RESIZE DISK clause.

# asmcmd lsdsk -k

Inst_ID Total_MB Free_MB OS_MB Name Failgroup Site_Name


Site_GUID Site_Status Failgroup_Type Library
Label Failgroup_Label Site_Label UDID Product Redund Path
1 204800 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3
2 204800 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3

Total_MB represents the current size before the resize operation.

OS_MB represents the new maximum size ASM can expand to.

7. Resize the ASM device.

SQL> ALTER DISKGROUP TEST3DG RESIZE DISK TEST3DG_0000 SIZE 409600M REBALANCE POWER 10;

8. Verify the new ASM device size. After the resize operation completes, run asmcmd lsdsk to confirm
the Total_MB value matches OS_MB value.


# asmcmd lsdsk -k
Inst_ID Total_MB Free_MB OS_MB Name Failgroup Site_Name
Site_GUID Site_Status Failgroup_Type Library
Label Failgroup_Label Site_Label UDID Product Redund Path
1 409600 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3
2 409600 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3

Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.

# asmcmd lsdg
Inst_ID State Type Rebal Sector Logical_Sector Block AU
Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
1 MOUNTED EXTERN N 512 512 4096 4194304
409600 409464 0 409464 0
N TEST3DG/
2 MOUNTED EXTERN N 512 512 4096 4194304
409600 409464 0 409464 0
N TEST3DG/

Resize ASM devices with ASMFD


When Oracle 12.2 ASMFD is used with ASM devices, it is possible to resize the ASM devices online without
impacting the database.

Note: The afd_refresh option used in this procedure is available only in Oracle 12.2; it is not available in
Oracle 12.1.

Resize ASM devices online:

The following outlines the general steps to resize ASM devices with ASMFD online.

1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems and refresh the partition table on each LUN path.
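As in the previous procedure, the rescan can be performed with the rescan-scsi-bus.sh utility:

# rescan-scsi-bus.sh --resize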
4. Reload multipath devices.

# multipathd -k"resize map ORA-TEST3"

For PowerPath, the new size is automatically updated.

5. Verify the new LUN size.

# multipath -ll ORA-TEST3 | egrep size

# multipath -ll ORA-TEST3 | awk '/sd/ {print $(NF-4)}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"


For PowerPath, use the following commands:

# fdisk -l /dev/emcpowerc

# powermt display dev=all | awk '/sd/ {print $3}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"

6. Refresh the ASMFD devices.

# asmcmd afd_refresh

7. To determine the maximum size of the LUN, run asmcmd lsdsk and extract the OS_MB value. Use
this value with the ALTER DISKGROUP RESIZE DISK clause.

# asmcmd lsdsk -k

8. Resize the ASM device.

SQL> ALTER DISKGROUP DATADG RESIZE DISK DATA03 SIZE $OS_MB REBALANCE POWER 10;

9. Verify the new size in the ASM device and disk group. After the resize operation completes, run
asmcmd lsdsk to confirm the Total_MB value matches OS_MB value.

# asmcmd lsdsk -k

Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.

# asmcmd lsdg

Resize ASM devices offline:

As mentioned previously in this section, the online resize capability of ASMFD is available in Oracle 12.2.
With Oracle 12.1, either restart the host to refresh the LUN size, or restart the clusterware, ASM instance, and
the AFD driver on the host to minimize the outage window. In a cluster environment, refreshing the LUN size
can be done in a rolling fashion to further minimize the impact of the outage.

1. Follow steps 1 through 5 in the Resize ASM devices online procedure above.


2. Stop all databases on the host.
3. Stop CRS.

# crsctl stop crs

4. Reload the AFD driver.

# afdload stop
# afdload start

5. Rescan the AFD devices.

# asmcmd afd_scan

6. Restart CRS.

# crsctl start crs


7. Restart the databases.


8. Continue with steps 6 through 8 in the Resize ASM devices online procedure above.
9. Repeat the process on other cluster nodes.

Another alternative to restarting the node or software is to unlabel and label the AFD devices. The database
associated with the devices must be stopped, and the disk groups and devices must be unmounted before
they can be relabeled. This approach increases the risk of data loss and corruption and requires extra
caution.

Resize an ASM device with ASMLib


Oracle recommends partitioning LUNs for ASMLib. To increase the size of the partition after expanding the
LUN, the partition is first removed and then re-created with the new size. The database associated with the
device is impacted.

The following outlines the general steps to resize ASM devices with ASMLib.

1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of the existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems, refresh the partition table on each LUN path, and reload
the multipath devices.

# rescan-scsi-bus.sh --resize

4. Reload the multipath devices.

# multipathd -k"resize map TEST67_ASMLIB"

For PowerPath, the new size is automatically updated.

5. Verify the new LUN size.

# multipath -ll TEST67_ASMLIB | egrep size

# multipath -ll TEST67_ASMLIB | awk '/sd/ {print $(NF-4)}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"

For PowerPath, use the following commands.

# fdisk -l /dev/emcpowerc

# powermt display dev=all | awk '/sd/ {print $3}' | xargs -i fdisk -l /dev/{} | egrep "^Disk"

6. Stop the database across the cluster.

$ srvctl stop database -d demodb

7. Dismount the disk group on all cluster nodes.

SQL> ALTER DISKGROUP TEST67_ASMLIBDG DISMOUNT;

8. Remove and re-create the partition.

# parted /dev/mapper/TEST67_ASMLIB rm 1
# parted /dev/mapper/TEST67_ASMLIB mkpart primary 2048s 100%

9. Rescan the LUNs on all cluster nodes.

# rescan-scsi-bus.sh --resize

10. Refresh the multipath device on all cluster nodes.

# multipathd -k"resize map TEST67_ASMLIB"

For PowerPath, the new size is automatically updated.

11. Update the partition on all cluster nodes.

# partprobe /dev/mapper/TEST67_ASMLIB

# parted /dev/mapper/TEST67_ASMLIB u GB p

The parted command should show the new size.

12. Rescan the ASMLib devices on all cluster nodes.

# oracleasm scandisks
# oracleasm listdisks

13. Mount the disk group on all cluster nodes.

SQL> ALTER DISKGROUP TEST67_ASMLIBDG MOUNT;

14. Run asmcmd lsdsk to extract the OS_MB value to determine the maximum LUN size.

# asmcmd lsdsk -k -g -G TEST67_ASMLIBDG

15. Resize the ASM device.

SQL> ALTER DISKGROUP TEST67_ASMLIBDG RESIZE DISK TEST67_ASMLIB SIZE $OS_MB REBALANCE POWER 10;

16. Verify the new size in the ASM device and disk group. After the resize operation completes, run
asmcmd lsdsk to confirm the Total_MB value matches the OS_MB value.

# asmcmd lsdsk -k

Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.

# asmcmd lsdg

17. Restart the database.

$ srvctl start database -d demodb

Space reclamation
The Dell Unity system supports the SCSI TRIM/UNMAP feature, which allows the operating system to inform
the array which data blocks are no longer in use and can be released for other uses. For space reclamation to
work, the LUNs must be thin provisioned in the Dell Unity system, and the Linux kernel and Oracle ASM must
also support the feature. The TRIM/UNMAP feature was introduced in Linux kernel 2.6.28-25 and newer. With
Oracle 12.2 ASMFD, thin-provisioned ASM disk groups allow deleted space in datafiles to be reclaimed.

To verify the availability of the feature on the Linux operating system, query
/sys/block/$disk/queue/discard_granularity. A value of zero means the device does not support
discard functionality. For example, since device sdx has a non-zero discard_granularity value, its free space
can be reclaimed with TRIM/UNMAP.

# cat /sys/block/sdx/queue/discard_granularity
8192
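To check all SCSI disks in one pass, the same sysfs attribute can be read in bulk; any device reporting 0 does not support discard:

# grep . /sys/block/sd*/queue/discard_granularity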

Prepare ASM disk group for space reclamation


1. Ensure LUNs are thin provisioned in the Dell Unity storage system.
2. Create data files with an initial size and enable autoextend on the data files.
3. Set the THIN_PROVISIONED attribute to 'TRUE' on the disk group.
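Combining the statements shown earlier in this document, the preparation might look like the following sketch (DATADG, DATATS, and the sizes are the example values used previously):

SQL> create tablespace DATATS datafile '+DATADG' size 10G autoextend on next 1024M maxsize unlimited;
SQL> ALTER DISKGROUP DATADG SET ATTRIBUTE 'THIN_PROVISIONED'='TRUE';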

Reclaim space in ASM disk group


The following outlines the general steps to reclaim storage space in Oracle ASM.

1. Delete rows, tables, objects, or tablespaces.


2. Enable the ROW MOVEMENT attribute on the tables.

SQL> ALTER TABLE <$TABLE_NAME> ENABLE ROW MOVEMENT;

3. After deleting objects in a table, run the ALTER TABLE SHRINK SPACE command to repack the rows,
move the high water mark, and release unused extents in the datafiles.

SQL> ALTER TABLE <$TABLE_NAME> SHRINK SPACE;

4. Determine the high water mark (HWM) of each data file and prepare the resize statements using the
following script provided by Oracle. The original post can be found in the following Oracle article:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:766625833673

# cat find_datafile_hwm.sql
set verify off line 200 pages 100
column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report

column value new_val blksize
select value from v$parameter where name = 'db_block_size'
/

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024) currsize,
       ceil( blocks*&&blksize/1024/1024) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
       from dba_extents
       group by file_id ) b
where a.file_id = b.file_id(+)
/

column cmd format a85 word_wrapped

select 'alter database datafile ''' || file_name || ''' resize ' ||
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) || 'm;' cmd
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
       from dba_extents
       group by file_id ) b
where a.file_id = b.file_id(+)
and ceil( blocks*&&blksize/1024/1024) -
    ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0
/

The script generates output similar to the following.

VALUE
--------------------------------------------------------------------------
8192

                                               Smallest
                                                   Size  Current    Poss.
FILE_NAME                                         Poss.     Size  Savings
---------------------------------------------- -------- -------- --------
+DATADG/DEMODB/DATAFILE/system.257.952265543        841      850        9
+DATADG/DEMODB/DATAFILE/demots.289.952606307     16,328   17,014      686
+DATADG/DEMODB/DATAFILE/undotbs1.259.952265593   12,932   24,708   11,776
+DATADG/DEMODB/DATAFILE/undotbs2.265.952265669       37    1,024      987
+DATADG/DEMODB/DATAFILE/demots.337.952606345     17,220   17,652      432
+DATADG/DEMODB/DATAFILE/demots.320.952606331     16,712   17,462      750
+DATADG/DEMODB/DATAFILE/sysaux.258.952265577      2,008    2,030       22
+DATADG/DEMODB/DATAFILE/users.260.952265593           1        5        4
                                                                 --------
sum                                                                14,666

8 rows selected.

CMD
-------------------------------------------------------------------------------------
alter database datafile '+DATADG/DEMODB/DATAFILE/system.257.952265543' resize 841m;
alter database datafile '+DATADG/DEMODB/DATAFILE/demots.289.952606307' resize 16328m;
alter database datafile '+DATADG/DEMODB/DATAFILE/undotbs1.259.952265593' resize 12932m;
alter database datafile '+DATADG/DEMODB/DATAFILE/undotbs2.265.952265669' resize 37m;
alter database datafile '+DATADG/DEMODB/DATAFILE/demots.337.952606345' resize 17220m;
alter database datafile '+DATADG/DEMODB/DATAFILE/demots.320.952606331' resize 16712m;
alter database datafile '+DATADG/DEMODB/DATAFILE/sysaux.258.952265577' resize 2008m;
alter database datafile '+DATADG/DEMODB/DATAFILE/users.260.952265593' resize 1m;

8 rows selected.

5. To resize the data files, copy and run the generated ALTER DATABASE DATAFILE ... RESIZE
statements for the data files of interest. For example, the demots statements above shrink only the
demots tablespace that resides in the DATADG disk group.
6. Manually rebalance the disk group.

SQL> ALTER DISKGROUP DATADG REBALANCE POWER 10;

7. Confirm the release of the space in Unisphere by observing the Capacity and Space Used
information on the LUN properties page. It might take several minutes to see the changes depending
on the amount of data and how busy the system is at the time.

Note: Deleted space is not released until either the data files are deleted or shrunk and a rebalance operation
is run against the ASM disk groups.

Linux LVM
Linux Logical Volume Manager (LVM) is a common general-purpose storage manager in all popular Linux
distributions. Because ASM does not support storing Oracle software, the software must be installed on a
Linux file system, which can be configured on top of LVM. LVM mirroring is not necessary because Dell Unity
systems provide storage protection. Multiple LUNs can be grouped into a single LVM volume group, and
logical volumes can then be created that span these LUNs. When taking Dell Unity system snapshots on a
multi-LUN volume group, ensure the LUNs are configured in a consistency group.

A file system is created on a logical volume where the Oracle binary is installed. More space can be added to
the volume groups, logical volumes, and file systems either by adding new LUNs or by expanding existing
LUNs in the volume groups. Once volume groups and logical volumes are expanded, the file systems can be
resized to use the newly added space. LVM and many file systems, such as ext4 and xfs, allow on-demand
expansion without taking down the applications.

Unlike ASM, administrators must configure LVM striping, and data is not rebalanced when extending the
volume group.

LVM guidelines
• Use whole LUNs for volume groups.
• Create a dedicated volume group for storing each copy or version of Oracle software. A dedicated
volume group simplifies management and allows greater flexibility on array-based snapshots on
individual Oracle software copies.
• Use two or more LUNs in a volume group when performance is of concern.
• Configure all LUNs with the same size in the same volume group and group them in the same
consistency group.
• In an Oracle RAC configuration, use a dedicated local volume group for each cluster node.

Physical volume data alignment


When initializing LUNs in LVM, use the --dataalignment argument to indicate that the alignment starts at 1 MB.

The following example shows the tasks to create an Oracle software file system on LVM:


# pvcreate --dataalignment 1m /dev/mapper/orabin-rac
# vgcreate vgoracle /dev/mapper/orabin-rac
# lvcreate -L 50g -n lv-oracle-bin vgoracle
# mkfs.xfs /dev/vgoracle/lv-oracle-bin (for xfs)
# mkfs.ext4 /dev/vgoracle/lv-oracle-bin (for ext4)

Note: If --dataalignment is not specified, mkfs might report a warning message (see below). Reinitialize the
LUN with --dataalignment to ensure proper alignment.

Misalignment warning for mkfs.xfs:

warning: device is not properly aligned /dev/vgoracle/lv-oracle-logs
Use -f to force usage of a misaligned device

Misalignment warning for mkfs.ext4:

mke2fs 1.42.9 (28-Dec-2013)
/dev/vgoracle/lv-oracle-bin alignment is offset by 512 bytes.
This may result in very poor performance, (re)-partitioning suggested.

File systems
A local file system is preferred for storing Oracle software and diagnostic logs. Datafiles can be stored in local
file systems, but it is recommended to use Oracle ASM on block devices or Oracle DirectNFS. Later sections
discuss using the Dell Unity NFS service with Oracle DirectNFS.

The Dell Unity system supports a wide range of file systems on Linux. This section focuses on two popular
and stable file systems: ext4 and xfs.

For additional information about supported file systems and feature limitations, see the Dell Host Connectivity
Guide for Linux.

File system layout


The file system can be created on top of a LUN, a LUN partition, or a logical volume in LVM. For ease of
management, Dell Technologies recommends using either the whole LUN without a partition or a logical
volume in LVM.

It can be beneficial to separate the Oracle software and Oracle diagnostic logs. To do so, create a separate
volume group or assign a different LUN to store the Oracle diagnostic log files. The diagnostic logs can
consume a large amount of space in a short time. Isolating the logs in a different file system reduces the risk
of the logs filling up the storage space and affecting the operation of the software. Because the diagnostic
logs are not mission critical to the software operation, it is not essential to enable snapshots on the LUNs
used by the logs. The diagnostic logs are also good candidates for compression to reduce storage
consumption. Table 7 shows an example of using separate file systems for software and diagnostic logs.


An example of file system layout for Oracle software and diagnostic logs

Volume group | Logical volume | File system mount point | Dell Unity snapshot | Dell Unity compression
vggrid | lv-grid-bin | /u01 | Enable | Disable
vgoracle121 | lv-oracle-bin | /u01/app/oracle/product/12.1.0 | Enable | Disable
vgoracle122 | lv-oracle-bin | /u01/app/oracle/product/12.2.0 | Enable | Disable
vgoraclelog | lv-grid-log | /u01/app/grid/diag | Disable | Enable
vgoraclelog | lv-oracle-log | /u01/app/oracle/diag | Disable | Enable

File system mount options


When mounting a file system, consider the following options and guidelines.

• Identify the file system by its UUID or LVM LV device in the /etc/fstab file. Avoid using any non-
persistent device paths such as /dev/sd*.
• Query the UUID with the blkid command.

# blkid /dev/vgoracle/lv-oracle-rac-home
/dev/vgoracle/lv-oracle-rac-home: UUID="83cf5726-f842-448b-a143-
5f77eb0d9b37" TYPE="xfs"

• Include discard in the mount options to enable space reclamation support for the file system. More
information is provided in the Space reclamation section below.
• Include nofail in the mount options if the Linux operating system experiences mount issues during
system boot. The nofail option prevents a failed mount from interrupting the boot process and
requiring manual intervention.
• For the xfs file system, disable the boot-time file system check (fsck) in /etc/fstab because xfs does not
perform any check or repair automatically at boot time. The xfs journaling feature ensures file
system integrity and leaves data in a consistent state after an abrupt shutdown. If a manual check or
repair is necessary, use the xfs_repair utility to repair a damaged file system.
• Set a value of 0 in the sixth field of /etc/fstab to disable the fsck check. Here is an example of an xfs
file system entry in /etc/fstab:

UUID="83cf5726-f842-448b-a143-5f77eb0d9b37" /u01 xfs


defaults,discard,nofail 0 0

Expand storage for the file system


Certain file system types, such as ext4 and xfs, support online resize operations. The following outlines the
general steps to resize a file system online, assuming non-partition LUNs are used.

1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems and refresh the partition table on each LUN path.

# rescan-scsi-bus.sh --resize

4. Reload the multipath devices.


# multipathd -k"resize map orabin-rac"

For PowerPath, the new size is automatically updated.

5. Expand the logical volume if the file system is on top of LVM.

# lvresize -L $NEW_SIZE /dev/vgoracle/lv-oracle-rac-home

6. Extend the file system size to the maximum size, automatically and online.

# xfs_growfs -d /u01 (for xfs)
# resize2fs /dev/mapper/orabin-rac (for ext4)

Space reclamation
For file system types that support the online SCSI TRIM/UNMAP command, enable the discard mount option
in /etc/fstab or include -o discard with the manual mount command. The discard option allows space to be
released back to the storage pool in the Dell Unity system when files are deleted from the file system.
Administrators should review the file system documentation to confirm the availability of the feature.

The LUNs must be thin provisioned in the Dell Unity storage system for space reclamation to work. As new data
is written to the file system, space is allocated in the Dell Unity system. When files are deleted from the file
system, the operating system informs the Dell Unity system which data blocks can be released. The release
of storage is automatic and requires no additional steps. To confirm the release of space in the Dell Unity
system, monitor the Total Pool Space Used on the LUN properties page in Unisphere.


Dell Unity file storage


Dell Unity storage can serve file data through virtual file servers (NAS servers) while providing many of the
advanced capabilities of Dell Unity systems. Some of these capabilities are shown in the following list, while
others are mentioned in the remainder of this section:

• Advanced static routing


• Packet reflect
• IP Multitenancy
• NAS server mobility
• Configurable Dell Unity system parameters

Dell Unity x80F storage systems support NAS connections on multiple 10 GbE and 25 GbE ports. In an Oracle
NFS environment, 25 Gb/s is recommended for the best performance. If possible, configure jumbo frames
(MTU 9000) on all ports in the end-to-end network path to provide the best performance.

When using Oracle Direct NFS (dNFS), it is recommended to configure Link Aggregation Control Protocol
(LACP) across the same multiple Ethernet ports on each SP. Link aggregation provides path redundancy
between clients and NAS servers. Combine LACP with redundant switches to provide the highest network
availability. LACP can be configured across all available Ethernet interfaces and between the I/O modules.
See Figure 30, Figure 31, and Figure 32 for examples.

For additional information pertaining to this section, see the Dell Unity: NAS Capabilities, Dell Unity: Best
Practices Guide, and Dell Unity: Service Commands documents.

Dell Unity front-end Ethernet connectivity for file storage


Dell Unity storage provides multiple options for 10 Gb/s Ethernet front-end connectivity, through onboard
ports directly on the DPE and through optional I/O modules. In general, front-end ports need to be connected
and configured symmetrically across the two SPs to facilitate high availability and continued connectivity if
there is an SP failure. For best performance, use all front-end ports so that the workload is spread across as
many resources as possible, and use the Dell Unity 10/25 Gb/s ports for dNFS data traffic.

Dell Unity 480F, 680F, and 880F front-end Ethernet ports: embedded 4-port mezzanine GbE card, embedded 2-port GbE service port, and optional 4-port I/O modules (see Table 2 for options)

Dell Unity NAS servers


The Dell Unity virtual NAS servers are each assigned to a single SP. All file systems serviced by a NAS server
have their I/O processed by the SP on which the NAS server currently resides. If multiple NAS servers


are required for multiple Oracle environments, distribute the NAS servers so that the front-end NFS I/O is
spread roughly evenly between the SPs. Take care not to overprovision either SP, so that the peer SP does
not become overloaded in the event of a failover.

Because each NAS server is logically separate, NFS clients of one NAS server cannot access data on
another NAS server. Logically separate NAS servers can provide database isolation and protection across
multiple NFS clients (database servers). To create a NAS server, in Dell Unisphere select File > NAS
Servers > + and supply the necessary information as shown in the following screens.

Starting the Create a NAS Server wizard

Creating a NAS server


Specifying network information for the NAS Server interface

Defining sharing protocols for the NAS Server

When creating a NAS server for an Oracle database, enable NFSv4 if possible, and do not set up the UNIX
Directory Service and NAS server DNS if they are not needed. After a NAS server is created, Dell Unity
NFS file systems can be created, followed by Dell Unity NFS shares.

NAS server interfaces can be configured either as production interfaces or as backup and DR testing
interfaces. The interface type dictates the type of activity that can be performed. Table 8 displays the
characteristics of the interface types.


NAS server interface types

Interface type: Production
• Allows CIFS, NFS, and FTP access
• Replicated during replication sessions
• During replication, is active only on the source

Interface type: Backup and DR test
• Can be used for backup and DR testing
• Allows NFS access only
• Not replicated during a replication session
• Is active in both source and destination replication modes

If throughput is constrained by a single Ethernet interface, consider configuring multiple Ethernet interfaces
for the NAS server by selecting File > NAS Servers > (select the checkbox for the NAS server) > Network > +
and adding additional Ethernet interfaces.

Defining multiple Ethernet interfaces for the NAS server

Dell Unity NFS file system


The Dell Unity file system contains several improvements over existing NAS file system technologies and is
well suited for Oracle. The improved areas include scalability and maximum system size, flexible file system,
storage efficiency, security, isolation, availability, recoverability, virtualization, and performance.

To create a file system in Unisphere, select File > File Systems > + and supply the desired configuration.


Creating a file system on the NAS server

For Oracle database files, the NFS file system can host Oracle datafiles that reside in ASM, in a file
system, or both. See Figure 10 and Figure 11.

NFS file system hosting raw files for ASM


NFS file system hosting Oracle datafiles in a file system.

Scalability
Dell Unity file systems provide scalability in several areas, including maximum file system size, which makes
Dell Unity storage ideal for Oracle environments. Dell Unity OE version 4.2 increases the maximum file
system size from 64 TB to 256 TB for all file systems. File systems can also be shrunk or extended to any
size within the supported limits. Dell Technologies recommends configuring storage objects that are 100 GB
at a minimum and preferably 1 TB in size or greater.

Storage efficiency
Dell Unity storage supports thin-provisioned file systems. Starting with Dell Unity OE version 4.2, Unisphere
can also create thick file systems. When using Dell Unity file storage with Oracle, consider using thin-
provisioned file systems. Dell Unity also provides increased storage flexibility with manual or automatic file
system extension and shrinking with reclaim.

Quotas
Dell Unity includes full quota support to allow administrators to limit the amount of space that can be
consumed by a user of an NFS file system or directory. When working with Oracle, quotas are not usually
necessary. If quotas are used, carefully consider their impact on managing the Oracle environment.

NFS protocol
Dell Unity storage supports NFSv3 through NFSv4.1, including secure NFS.

All Dell Unity OE versions support Oracle dNFS in single-node configurations. Starting with OE version 4.2,
Oracle Real Application Clusters (RAC) is also supported. To use Oracle RAC, the nfs.transChecksum
parameter must be enabled. This parameter ensures that each transaction carries a unique ID and avoids the
possibility of conflicting IDs that result from the reuse of relinquished ports.

For more information about NAS server parameters and how to configure them, see the Dell Unity Service
Commands document.

NFSv4 is a version of the NFS protocol that differs considerably from previous implementations. Unlike
NFSv3, NFSv4 is a stateful protocol, meaning that it maintains session state rather than treating each
request as an independent transaction. NFSv4 also handles all network traffic in the underlying transport
protocol, as opposed to the application layer as in NFSv3, which can reduce the overall load on the Oracle
database server (NFS client). NFSv4 is preferred because of these improvements over NFSv3. Some
advantages of NFSv4 are:

• Ability to use TCP more thoroughly
• Ability to bundle metadata operations
• An integrated, more functional lock manager
• Conditional file delegation

While Dell Unity storage fully supports most of the NFSv4 and NFSv4.1 functionality described in the relevant
RFCs, directory delegation and parallel NFS (pNFS) are not supported. Therefore, do not configure Oracle
dNFS to use pNFS. For increased performance, consider using NFSv4 and Oracle Direct NFS (dNFS)
with multiple network interfaces for load-balancing purposes, as sketched below.
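The NFS version that dNFS uses is selected with the nfs_version directive in the oranfstab file, which is
covered in section Oracle dNFS configuration file: oranfstab. A minimal sketch, reusing the NAS server and
mount point from the examples in this paper:

server: ORA-ASM-NFS
local: XX.XX.XX.57 path: XX.XX.XX.87
nfs_version: nfsv4
export: /ORA-ASM-NFS mount: /oraasmnas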

Sharing protocols

Dell Unity NFS share


After creating the NAS server and file system, the NFS share can be created. To create the NFS share, select
File > NFS Shares > + and supply the necessary information.


Creating an NFS share

Assigning a file system to an NFS share

When defining the NFS share name, ensure Allow SUID is selected; it is required for Oracle software
mount points.


Allow SUID for Oracle.

For NFS shares intended for Oracle, set the NFS export options for the NFS share by setting Default Access
to Read/Write, allow Root.

Specify R/W and allow root access on the NFS share.

Verify access to the Dell Unity NFS share


After file storage has been configured, log in to the database server and verify access to the NFS share
through all the IPs defined for the NFS share. To verify access, use the Linux showmount command against
each IP shown in the list of Exported Paths. If any IP does not provide access to the NFS share, resolve
the issue before configuring the NFS client and Oracle dNFS. A loop that checks all the IPs is sketched
after the example below.


Configure IPs and mount names for the NFS share.

The following showmount command only illustrates its usage on the first IP in the list of Exported Paths.

[root ~]# showmount -e XX.XX.XX.91
Export list for XX.XX.XX.91:
/ORA-ASM-NFS (everyone)
ora-asm-nfs (everyone)
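To check every IP in one pass, a short shell loop can be used; a minimal sketch, assuming the NAS server
interface addresses used in the examples in this paper:

for ip in XX.XX.XX.87 XX.XX.XX.88 XX.XX.XX.89 XX.XX.XX.90 XX.XX.XX.91; do
    showmount -e $ip || echo "No access through $ip"
done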

Dell Unity file system and Oracle ASM


To use ASM on top of the Dell Unity file system, use the following process (change values where necessary):

1. Create the Dell Unity NAS share.


2. Create the mount point in Linux and set the permissions and ownership on the mount point:

mkdir /oraasmnas
chmod 770 /oraasmnas
chown grid:oinstall /oraasmnas

3. Mount the Dell Unity NAS share.

mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas

4. Change the permissions and ownership of the root directory on the NFS share:

chmod 770 /oraasmnas
chown grid:oinstall /oraasmnas

5. Create the raw files for the ASM disk groups and set their permissions and ownership.

dd if=/dev/zero of=/oraasmnas/nfsasm-ocrvote-disk01 bs=4096 count=2621440
dd if=/dev/zero of=/oraasmnas/nfsasm-data-disk01 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-data-disk02 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-data-disk03 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-data-disk04 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-data-disk05 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-fra-disk01 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-fra-disk02 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-fra-disk03 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-fra-disk04 bs=8192 count=524288
dd if=/dev/zero of=/oraasmnas/nfsasm-fra-disk05 bs=8192 count=524288

chown grid:oinstall /oraasmnas/nfsasm*
chmod 660 /oraasmnas/nfsasm*

6. Create the ASM disk groups.

SQL> create diskgroup nfsdata external redundancy disk
  2  '/oraasmnas/nfsasm-data-disk01',
  3  '/oraasmnas/nfsasm-data-disk02',
  4  '/oraasmnas/nfsasm-data-disk03',
  5  '/oraasmnas/nfsasm-data-disk04',
  6  '/oraasmnas/nfsasm-data-disk05';
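The remaining disk groups follow the same pattern. A sketch for the FRA disk group, assuming the
nfsasm-fra files created in step 5 (the OCR/voting disk group is created the same way from the
nfsasm-ocrvote file, with the redundancy level required by the clusterware configuration):

SQL> create diskgroup nfsfra external redundancy disk
  2  '/oraasmnas/nfsasm-fra-disk01',
  3  '/oraasmnas/nfsasm-fra-disk02',
  4  '/oraasmnas/nfsasm-fra-disk03',
  5  '/oraasmnas/nfsasm-fra-disk04',
  6  '/oraasmnas/nfsasm-fra-disk05';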


Oracle Disk Manager


The Oracle standard ODM library ($ORACLE_HOME/lib/libodm12.so) manages Oracle I/O and its file
management infrastructure. If the ODM library containing the embedded Oracle NFS client
($ORACLE_HOME/rdbms/lib/odm/libnfsodm12.so) is enabled, ODM allows the use of NFS devices without
going through the native Linux NFS kernel client (kNFS).

NFS traffic
Generally, NFS traffic can be classified as either control/management traffic or I/O traffic on application
data. At the operating-system level, regardless of whether the Oracle ODM NFS client library is enabled, the
Linux kernel NFS client (kNFS) driver manages the control/management operations of NFS devices, such as:

• get attribute
• set attribute
• access
• create
• mkdir
• rmdir
• mount
• umount

When the ODM library is enabled, the Oracle environment is said to be using Oracle Direct NFS (dNFS), and
all database I/O (NFS data traffic) flows through the dNFS driver. When the ODM library containing the
embedded Oracle NFS client is disabled, all database I/O flows through the kNFS client driver.


Oracle Direct NFS


Oracle Direct NFS (dNFS) is an optimized NFS client from Oracle for database I/O and resides in the ODM
library as part of the Oracle database kernel. dNFS improves the stability and reliability of NFS storage
devices over TCP/IP beyond what the native Linux NFS driver (kNFS) provides. dNFS also improves
performance to NFS storage devices by bypassing the kNFS I/O stack. When mounting the database data
files, Oracle first loads the dNFS functionality if the Direct NFS client ODM library is enabled. If dNFS
cannot access an NFS storage device, dNFS silently reverts to using the kNFS client. However, to ensure
this reversion occurs, the kNFS client mount options rsize and wsize must be used.

While Dell Unity 4.2, Oracle 12cR1, and 12cR2 dNFS all support NFSv3 and the stateful NFSv4 and NFSv4.1
protocol, Dell Unity does not provide functionality for pNFS. Therefore, do not configure pNFS in Oracle.

It is recommended to use dNFS if NFS storage devices are used so that the performance optimizations built
into Oracle can be exploited.

Benefits of dNFS
The advantage of using Oracle dNFS lies in the fact that it is part of the Oracle database kernel. dNFS
services all I/O to NFS storage devices. dNFS gives Oracle the ability to manage the best possible
configuration, tune itself, use the Oracle buffer cache, and use available resources for optimal multipath
NFS data traffic I/O.

Creating NFS client mount points


An Oracle installation requests the intended locations for storing the software and its components. Usually
these locations can reside on NFS shares; some exceptions are discussed later in this document.

Table 9 provides examples of different Oracle directories that could reside on a NFS share. Once NFS shares
are identified for Oracle use, create the necessary mount points for the NFS shares and create the NFS
shares in Dell Unity storage. Also, set the privileges, owner, and group of the Linux mount points and root
directory on the NFS share per Oracle requirements.

Example directories that could be serviced by NFS


• Oracle base ($ORACLE_BASE=/u01/app/oracle/): The top-level directory for installations. Subsequent
installations can either use the same Oracle base or a different one.

• Oracle inventory (/u01/app/oraInventory/ or $ORACLE_BASE/<srv>/oraInventory/): All Oracle installations
use the same Oracle inventory directory for the installation repository or metadata. If possible, Oracle
recommends that the inventory directory reside on a local file system (/u01/app/oraInventory). If a NAS
device must be used for the inventory, create a unique directory for each database server
($ORACLE_BASE/<srv>/oraInventory) to prevent multiple systems from writing to the same inventory.

• Oracle home ($ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1/): Contains the binaries,
libraries, configuration files, and other files from a single release of one product; it cannot be shared with
other releases or other Oracle products.

• Database file directory ($ORACLE_BASE/oradata/): The location that holds the database. Use a different
NFS mount point for database files to provide the ability to mount the NFS file system with different mount
options, and to distribute database I/O.

• Oracle recovery directory ($ORACLE_BASE/fast_recovery_area/): Oracle recommends that recovery files
and database files do not exist on the same file system.

• Oracle product directory ($ORACLE_BASE/product): This mount point can be used to install software from
different releases, for example:
/u01/app/oracle/product/12.1.0/dbhome_1/
/u01/app/oracle/product/12.2.0/dbhome_1/

• Oracle release directory ($ORACLE_BASE/product/<version>/): This mount point can be used to install
different Oracle products from the same version, for example:
$ORACLE_BASE/product/11gR2/dbhome_1
$ORACLE_BASE/product/11gR2/client_1
Even though this is an option, it is not recommended to install both the rdbms and the client on the database
server. If the client is required, it is recommended that a separate NFS share be defined and a non-database
server be used to host the client install.

Mount options for NFS share


Before configuring or using the dNFS driver on a NAS share, the NFS share must first be mounted using the
kNFS driver. Specific mount options are required when mounting an NFS share for dNFS usage. If the NFS
volume is used for Oracle and must be automatically remounted when the server restarts, specify the NFS
volume in /etc/fstab. In an Oracle RAC cluster, ensure that all nodes in the cluster use the same mount
options for each identical NFS mount point.

After the share is mounted using kNFS, dNFS mounts and unmounts the volume logically as needed.
Because dNFS uses a logical mount, the volume can still be accessed through kNFS after dNFS unmounts
the share, which guarantees that other Oracle databases or users can continue to use files from the share
as necessary.


If NFS is used for database files, the NFS buffer size for reads (rsize) and writes (wsize) must be set to at
least 16,384. Oracle recommends a value of 32,768. These values are set in /etc/fstab, or when explicitly
mounting an NFS volume. Since a dNFS write size (v$dnfs_servers.wtmax) of 32,768 or larger is supported in
Dell Unity storage, dNFS does not fall back to the traditional kNFS kernel path. dNFS clients issue writes with
v$dnfs_servers.wtmax granularity to the NFS server.
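To confirm the write and read sizes that dNFS negotiated with the NAS server on a running instance, query
v$dnfs_servers; a small sketch (wtmax and rtmax are documented columns of this view):

SQL> select svrname, wtmax, rtmax from v$dnfs_servers;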

The following lists the required mount options for NFS mount points used by Oracle standalone, Oracle RAC,
RMAN, and Oracle binaries running on Linux x86-64 version 2.6 and above. For more NFS share mount
options, see Oracle MOS note Mount Options for Oracle files for RAC databases and Clusterware when used
with NFS on NAS devices (Doc ID 359515.1) at Oracle Support.

Linux kernel 2.6 x86-64 NFS mount options for Oracle 12c RAC and Oracle standalone:

• Mount options for binaries (ORACLE_HOME, CRS_HOME) and database files1,2:

rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers={3|4},timeo=600,actimeo=0

• Mount options for CRS voting disk and OCR2:

rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers={3|4},timeo=600,actimeo=0,noac

1 The mount options are applicable only if ORACLE_HOME is shared. Oracle also recommends that the Oracle inventory
directory be kept on a local file system. If it must be placed on a NAS device, create a specific directory for each system to
prevent multiple systems from writing to the same inventory directory. Oracle Clusterware is not certified on dNFS.

2 Do not replace tcp with udp; udp should never be used. dNFS cannot serve an NFS server with a write size of less than
32768. Set option vers to either 3 or 4 and ensure that the NFS sharing protocol on the Dell Unity NAS server is set
accordingly. In 12cR2, both OCR and voting disks must reside in ASM; see Oracle MOS note 2201844.1 for additional
information. dNFS is RAC aware: even though NFS is a shared file system, and NFS devices for Oracle must otherwise be
mounted with the noac option, dNFS automatically recognizes RAC instances and takes the appropriate action for
datafiles without additional user configuration. This eliminates the need to specify noac when mounting NFS file systems
for Oracle datafiles or binaries. This exception does not apply to CRS voting disks or OCR files on NFS; NFS file systems
hosting CRS voting disks and OCR files must be mounted with noac. Do not use noac for RMAN backup sets, image
copies, or Data Pump dump files, because RMAN and Data Pump do not check this option and specifying it can adversely
affect performance.


When configuring an Oracle RAC environment that uses NFS, ensure the entry in /etc/fstab is the same on
each node. The following snippet from /etc/fstab mounts an NFS mount point for ORACLE_HOME binaries
(/u01), and a database that will use ASM.

XX.XX.XX.63:/ora-bin /u01 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,defaults 0 0
XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,defaults 0 0

When adding multiple mount options for a specific mount point in /etc/fstab, do not insert spaces after options
because the operating system may not properly parse the options.

Mount options timeo, hard, soft, and intr control the NFS client behavior if the NFS server becomes
temporarily unreachable. Whenever the NFS client sends a request to the NFS server, it expects the
operation to finish within a given interval (specified by the timeo option). If no confirmation is received
within this time, a minor timeout occurs and the operation is retried with the timeout interval doubled. After
a maximum timeout of 60 seconds is reached, a major timeout occurs. By default, a major timeout causes the
NFS client to print a message to the console and start over with an initial timeout interval twice that of the
previous cascade; this cascade can potentially repeat indefinitely. Volumes that retry an operation until the
server becomes available again are called hard-mounted.

See File system mount options for a description of the mount options used in this paper.

Ethernet networks and dNFS


Using a dedicated 1 Gb/s interface between the NFS client and NAS server for NFS control traffic and a
dedicated 10 Gb/s interface for NFS data traffic may be prudent. Extra 10 Gb/s Ethernet interfaces may be
required for increased load balancing, availability, and performance in environments with high expected NFS
I/O database activity. Using 10 Gb/s is the best way to use the full capability of Dell Unity file storage. Other
considerations when setting up the network are: NIC speed, full duplex settings, end-to-end MTU setting, NFS
data transfer buffer sizes, and using bonded NIC interface for NFS control traffic.

If NFS and network redundancy are a concern, all interfaces (database server, Ethernet switch, and Dell Unity
storage) used for NFS control traffic should be bonded. This bonded interface could be the bonded public
network, if it exists, or even the bonded interface for the RAC interconnect in a RAC environment.

Directing NFS control and data traffic to different NICs may not always be possible because of a limited
number of NICs or other infrastructure limitations. In such cases, it is possible to share an unbonded
interface for both NFS control and data traffic. However, this may cause network performance issues under
heavy loads because the server will not perform network load balancing, and it limits database availability
because there are no redundant NICs.

When using dNFS, Oracle supports one to five network paths for NFS traffic between a NAS server and NFS
client: one path for NFS control traffic and up to four paths for NFS data traffic. When using dNFS, use
multiple network paths. Ensure each NFS network path belongs to a subnet that is not being used for any
other NIC interface on the NFS client. Also ensure each NFS network path does not use the subnet of the
public network for NFS. Using unique subnets simplifies configuration of dNFS and ensures that dNFS
benefits are fully exploited.

Fewer available subnets may exist than intended dNFS paths. If so, dNFS paths on the NFS client can be set
up to use existing subnets already in use on the NFS client. However, using existing subnets already in use
on the NFS client requires additional configuration in:


• The operating-system network layer (relaxing ingress filtering for multihomed networks and adding static
routes)
• Oracle (file oranfstab), if dNFS data traffic needs to be on one or more dedicated subnets

This configuration prevents the operating system from determining the default dynamic route and allows
multiple NIC interfaces in the same server to use the same subnet. See section IPv4 network routing filters
for information about using multiple NIC interfaces in the same subnet.

If the operating system chooses the dynamic route, it will invariably use the first best-matched route in the
routing table for all defined paths. Usually, that route is incorrect, and using an incorrect route results in
dNFS load balancing, scalability, and failover not working as expected. To ensure that NFS data traffic flows
as expected when using multiple paths in the same subnet, configure static operating-system routes for each
dNFS network path. See section Static routing for more information.

If different subnets are used for NFS traffic, routing is handled automatically by the native network
driver and the default route entries in the routing table. Creating static routes is not necessary when using
different subnets for dNFS traffic.

All IP end-points between the NFS client and Dell Unity NAS server must appear in oranfstab when dNFS
data traffic is being isolated to one or more dedicated IP addresses. For additional information, see section
Database server: NFS client network interface configuration and Oracle dNFS configuration file: oranfstab.

Jumbo frames
Jumbo frames, which refers to raising the maximum transfer unit (MTU) from the default of 1,500 to
9,000 bytes, are advised for the entire network path: database servers (NFS clients), Ethernet switches, and
Dell Unity storage. Using jumbo frames allows the network stack to bundle transfers into larger frames and
reduces the TCP protocol overhead. The frame size used for any given transfer depends on the immediate
needs of the network session established between the NFS client and server. Raising the limit to 9,000 from
end point to end point in the network path allows the session to take advantage of a wider range of frame
sizes.
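As a minimal client-side sketch, assuming RHEL-style network scripts and interface p1p1 (the switch ports
and the Dell Unity interfaces in the path must be set to MTU 9000 as well):

echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-p1p1
ip link set dev p1p1 mtu 9000

# Verify the end-to-end path: 8972 = 9000 bytes minus 28 bytes of IP and ICMP headers
ping -M do -s 8972 XX.XX.XX.87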


Single network path for dNFS


For the simplest dNFS configuration, it is recommended to set up dNFS as a single network path between the
NAS server and NFS client.

Single Ethernet path between the NFS client and Dell Unity NAS server for NFS traffic

Both NFS control/management traffic and data traffic use this path. The path requires setting up the Dell
Unity NAS server and NFS share, the kNFS mount for the NFS share, and the NIC interface, and then
enabling dNFS.

Dell Unity NAS server with a single Ethernet path for NFS traffic


Dell Unity NFS share

NFS client (database server) NIC interface configuration:

[root@r730xd-1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1


<Snippet>
NAME=p1p1
DEVICE=p1p1
IPADDR=XX.XX.XX.57
PREFIX=20
GATEWAY=XX.XX.XX.1

dNFS uses two kinds of NFS mounts: the native operating-system mount of NFS (also referred to as the
kernel or kNFS mount) and the Oracle database NFS mount (dNFS mount). When using a single network
path for dNFS, the file oranfstab is not necessary because Oracle dNFS gleans the required information for
the matching mounted NFS share from the file /etc/mtab. If dNFS is unable to find the necessary information
in /etc/mtab, control is handed back to the database and file access is attempted through kNFS.

Information regarding Dell Unity NFS share /ORA-ASM-NFS is shown in /etc/mtab:

[root ~]# grep oraasmnas /etc/mtab
XX.XX.XX.87:/ORA-ASM-NFS /oraasmnas nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=XX.XX.XX.87,mountvers=3,mountport=1234,mountproto=tcp,local_lock=none,addr=XX.XX.XX.87 0 0

If the IP used in the single network path is in the same subnet used by any other NIC interface in the
database server, see section Static routing for additional requirements. For additional information about file
oranfstab, see section Oracle dNFS configuration file: oranfstab.


Multiple network paths for dNFS


If multiple network paths are intended for NFS traffic, consider using one path for NFS control/management
traffic and the remaining NFS paths for NFS data traffic.

Dedicated interfaces for dNFS control and dNFS data traffic (kNFS control traffic and dNFS data traffic use
separate NAS server ports and separate client interfaces, p1p1 and p1p2)

If the architecture cannot support dedicated paths for all dNFS data traffic, dNFS control and data traffic can
share a path.

Shared interface for dNFS control and data traffic (client interface p1p1 carries both kNFS control traffic and
dNFS data traffic, while p1p2 carries additional dNFS data traffic)

If multiple dNFS paths are defined for data traffic and one data path fails, dNFS reissues requests over any
of the remaining dNFS data paths, improving database availability. Multiple data paths also give Oracle the
ability to automatically tune the data paths to the NFS storage devices, avoiding the need to manually tune
NFS network performance at the operating-system level. Because dNFS implements multipath I/O internally,
there is no need to configure LACP channel-bonded interfaces for dNFS data traffic, whether through
active-backup or link aggregation. If the LACP protocol is configured on the NIC interfaces intended for
dNFS data traffic, remove the channel bond on those interfaces so that they operate as independent ports.

If a single interface is used for the operating-system kNFS mount, NFS control traffic can be blocked should
the interface go down or the network cable be unplugged. This blocked NFS traffic causes the database to
appear unavailable. To mitigate this single point of failure in the network, configure the LACP protocol on
multiple interfaces to create a channel-bonded interface for NFS control/management traffic. A channel-
bonded interface for NFS control/management traffic is the recommended configuration because it provides
increased database availability and additional network bandwidth.

For additional information about channel-bonded interfaces for NFS control traffic, see section Configuring
LACP.

When configuring dNFS with multiple network paths, the recommendation is to use a unique network for each
of the paths. When multiple unique networks are not available, or not wanted, multiple IPs from the same
subnet can be used for each of the network paths. See section Shared subnets for additional requirements if
a shared subnet is used for dNFS data traffic.

Shared subnets
If dNFS will use a subnet that at least one other network interface also uses, additional configuration is
required: relaxing the IPv4 network routing filters and defining static routes.

IPv4 network routing filters


Linux 6 and 7 follow the recommendations of ingress filtering for multihomed networks
(http://tools.ietf.org/html/rfc3704). These routing filters must be relaxed for multiple NIC interfaces in the
same server to use the same subnet. Before configuring network interfaces to use the same subnet, ensure
that the routing filters are relaxed.

If the Oracle 12c preinstall rpm is used to configure the operating system before installing Oracle, the routing
filters are relaxed appropriately. Beginning with Oracle Database 12c release 2, Oracle changed the name of
this rpm so that it corresponds to the version of Oracle being installed:

• Oracle Database 12cR2: oracle-database-server-12cR2-preinstall.rpm
• Oracle Database 12cR1: oracle-rdbms-server-12cR1-preinstall.rpm

Both rpms are in the ol7_latest repository for Oracle Linux 7 on the Oracle Linux yum server and from ULN.
Recent releases of Oracle Linux 7 include the proper yum configuration for these rpms by default. If the rpm
is missing from the operating system, install it as follows:

Oracle 12cR1:

yum install oracle-rdbms-server-12cR1-preinstall

Oracle 12cR2:

yum install oracle-database-server-12cR2-preinstall


To verify that the IPv4 routing filters have been relaxed in the running operating system, run the
following on the database server as a privileged operating-system user. The values of the returned
parameters should be 2 if the IPv4 routing filters have been relaxed.

[root ~]# sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 2
[root ~]# sysctl net.ipv4.conf.default.rp_filter
net.ipv4.conf.default.rp_filter = 2

If the filters are not set correctly, update /etc/sysctl.conf with the settings so they are persistent across
reboots, and reload the system configuration:

[root ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
[root ~]# sysctl -p

Static routing
When any path for dNFS traffic shares a subnet already in use on the NFS client, static routing must be
configured for each of the paths used by dNFS traffic. A static route for dNFS data traffic must also be
configured if it uses the same subnet as dNFS control traffic. If static routes are not defined, the automatic
load balancing and performance tuning of dNFS will not operate as expected per the oranfstab file, and
NFS data traffic will flow through an unexpected network path.

The remainder of this section covers two examples of static routing. The first example considers two
interfaces sharing a subnet, and the second example considers four network interfaces sharing a subnet.

Shared subnet on two interfaces:

Figure 23 illustrates the path taken for dNFS data traffic when a subnet is shared on two interfaces with
default routing. dNFS traffic flows through interface em1 rather than through the intended interface p1p1.


Incorrect path taken by dNFS data traffic on shared subnet and default routing (traffic intended for client
interface p1p1 instead flows through the public interface em1)

Default routing table – single interface for dNFS:

[root ~]# route -n
Kernel IP routing table
Destination   Gateway      Genmask      Flags Metric Ref Use Iface
0.0.0.0       XX.XX.XX.1   0.0.0.0      UG    0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   p1p1

If default routing is used, the operating system searches the routing table for the route that best matches the
destination address and mask, and it will use that route. In the example above, interfaces em1 and p1p1
share the same subnet (column Destination) and mask (column Genmask). The operating system considers
both entries as best-matched for the target address (XX.XX.XX.87 /20 – port 0 of Dell Unity storage). Since
em1 precedes the entry for p1p1, the operating system will use interface em1 for dNFS data traffic rather than
the route to the intended interface p1p1.

To mitigate the issue of sending dNFS data traffic across the wrong path, a static route must be added to the
route table. That static route forces dNFS data traffic to flow between the intended p1p1 interface and target
address (XX.XX.XX.87 /20). The following command adds the necessary route to the routing table:

ip route add XX.XX.XX.87 dev p1p1

After adding the static route, verify the routing table is updated with the appropriate route:

[root ~]# route -n
Kernel IP routing table
Destination   Gateway      Genmask       Flags Metric Ref Use Iface
0.0.0.0       XX.XX.XX.1   0.0.0.0       UG    0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0    U     0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0    U     0      0   0   p1p1
XX.XX.XX.87   0.0.0.0      XX.XX.XX.255  UH    0      0   0   p1p1

With the necessary route in place, Figure 24 shows dNFS traffic flowing through the intended interface p1p1.

Correct path taken for dNFS data traffic (with the static route in place, traffic flows through the intended
interface p1p1)

Routes added with the ip route command take effect immediately but are not persistent across reboots.
Static routes can also be defined in interface routing scripts (route-<interface> files) in directory
/etc/sysconfig/network-scripts; routes defined there persist across reboots and can be applied immediately
with the ifup-routes script. For example, for interface p1p1 (the route-file entry below is a device route
matching the ip route add command above):

echo "XX.XX.XX.87 dev p1p1" > /etc/sysconfig/network-scripts/route-p1p1
/etc/sysconfig/network-scripts/ifup-routes p1p1
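To confirm which interface the kernel selects for a given NAS server address, the ip route get command can
be used; a quick check (output abbreviated):

[root ~]# ip route get XX.XX.XX.87
XX.XX.XX.87 dev p1p1 src XX.XX.XX.57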

Shared subnet on four network interfaces:

This example illustrates how default and static routing change the paths taken on four interfaces (p<s>p<p>)
configured with IPs from the same subnet for dNFS data traffic.

If the following default routing is used, all dNFS traffic would again flow through interface em1 because it is
the first best-matched entry in the routing table.

[root ~]# route -n
Kernel IP routing table
Destination   Gateway      Genmask      Flags Metric Ref Use Iface
0.0.0.0       XX.XX.XX.1   0.0.0.0      UG    0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   em1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   p1p1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   p1p2
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   p2p1
XX.XX.XX.0    0.0.0.0      XX.XX.XX.0   U     0      0   0   p2p2


To direct dNFS traffic to flow through all intended interfaces, the following static routes are needed:

ip route add XX.XX.XX.87 dev p1p1
ip route add XX.XX.XX.88 dev p1p2
ip route add XX.XX.XX.89 dev p2p1
ip route add XX.XX.XX.90 dev p2p2

Routes that best match the destination and mask must be defined between the interfaces and the four
end-point IP addresses in the Dell Unity NAS server. After these routes are defined, they are chosen over
the less-specific subnet routes.

[root ~]# route -n
Kernel IP routing table
Destination   Gateway   Genmask       Flags Metric Ref Use Iface
<snippet>
XX.XX.XX.0    0.0.0.0   XX.XX.XX.0    U     0      0   0   p1p1
XX.XX.XX.0    0.0.0.0   XX.XX.XX.0    U     0      0   0   p1p2
XX.XX.XX.0    0.0.0.0   XX.XX.XX.0    U     0      0   0   p2p1
XX.XX.XX.0    0.0.0.0   XX.XX.XX.0    U     0      0   0   p2p2
XX.XX.XX.87   0.0.0.0   XX.XX.XX.255  UH    0      0   0   p1p1
XX.XX.XX.88   0.0.0.0   XX.XX.XX.255  UH    0      0   0   p1p2
XX.XX.XX.89   0.0.0.0   XX.XX.XX.255  UH    0      0   0   p2p1
XX.XX.XX.90   0.0.0.0   XX.XX.XX.255  UH    0      0   0   p2p2

Correct path taken for multiple dNFS data paths

As noted earlier, routes added with the ip route command are not persistent across reboots, whereas static
routes defined in interface routing scripts in directory /etc/sysconfig/network-scripts persist across
reboots:


echo "XX.XX.XX.57 via XX.XX.XX.87" > /etc/sysconfig/network-scripts/route-p1p1


echo "XX.XX.XX.61 via XX.XX.XX.88" > /etc/sysconfig/network-scripts/route-p1p2
echo "XX.XX.XX.62 via XX.XX.XX.89" > /etc/sysconfig/network-scripts/route-p2p1
echo "XX.XX.XX.72 via XX.XX.XX.90" > /etc/sysconfig/network-scripts/route-p2p2
/etc/sysconfig/network-scripts/ifup-routes p1p1
/etc/sysconfig/network-scripts/ifup-routes p1p2
/etc/sysconfig/network-scripts/ifup-routes p2p1
/etc/sysconfig/network-scripts/ifup-routes p2p2

Static routing can also be defined in Dell Unity storage when adding or updating the configuration of a NAS
server. See section Dell Unity NAS servers for more information.

See Table 10 for examples of IP address mapping from end point to end point, dNFS traffic type, and LACP.

Configuring LACP
In environments requiring high availability, a bonded NIC interface for NFS control traffic is recommended.

NFS client (database server) and channel-bonding configuration


If an unbonded interface is used for NFS control traffic and that interface sustains an outage, the database
can appear unavailable under certain operations. To mitigate this single point of failure, LACP protocol should
be configured on multiple interfaces to create a channel-bonded interface for NFS control/management traffic.
This bonded interface could be the bonded public network or even the bonded interface for the RAC
interconnect in a RAC environment. Having a dedicated-bonded network for NFS control traffic should not be
necessary as the NFS control or metadata traffic should be minimal.

Bond interface for NFS control traffic

If LACP is configured on the NFS client for NFS control traffic, LACP must be configured in the Dell Unity
system by creating link aggregations. Port channels in the Ethernet switches connecting the Dell Unity and
NFS client interfaces must also be configured. Link aggregations with Dell Unity interfaces provide
redundancy and additional bandwidth especially when multiple NFS database clients exist. In practice, link
aggregations in Dell Unity storage should be done only if the second link is needed for highly available
configurations.

If the channel-bonded interface on the NFS client is dedicated to NFS control traffic, it is recommended
to use 1 GbE network interfaces; using 10 GbE links for a dedicated control-traffic bond may be a waste of
interface resources. The dedicated channel-bonded interface for NFS control traffic does provide increased
availability: should one interface member of the channel bond suffer an outage, traffic can still flow through
the remaining working interface. A client-side bonding sketch follows.
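The following is a minimal client-side sketch of an LACP bond for NFS control traffic, assuming RHEL-style
network scripts and member interfaces em1 and em2 (mode 802.3ad is the kernel's LACP mode; the
connected switch ports must be configured as an LACP port channel):

[root network-scripts]# cat ifcfg-bond0
TYPE=Bond
DEVICE=bond0
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
IPADDR=XX.XX.XX.26
PREFIX=20
<snippet>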

NAS server (Dell Unity) and link aggregation configuration


If NFS traffic will flow through bonded interfaces on the NFS client, Dell Unity system front-end connectivity
must be configured to support the bonded interfaces of the NFS client. Bonded interfaces must be configured
in both SP A and SP B modules before the Dell Unity system will start either of the interface members of the
bonded interface. If not, Dell Unisphere displays a status of Link Down for the interface members of the link
aggregate.

Both bonded interfaces must also use the same ports from both SPs. Using the same ports from both SPs is
necessary because if there is failover, the peer SP uses the same ports. LACP can be configured across the
ports from the same I/O module but cannot be configured on ports that are also used for iSCSI connections.
In earlier Dell Unity All Flash arrays, LACP could be configured across the on-board Ethernet ports.

If a link aggregate contains two interfaces, four switch interfaces will be required: two switch interfaces for the
two SP A interfaces in the link aggregate, and two switch interfaces for the two SP B interfaces in the link
aggregate. See Figure 30.

Link aggregation in Dell Unity storage is configured from within the Update system settings wizard. To start
the Update system settings wizard, select the gear icon in the menu bar:

Update system settings wizard.

In the Settings wizard, select Access > High Availability to manage or view link aggregations. Then, select
+ from the Link aggregations section to configure a bonded Dell Unity interface.

Creating a Dell Unity link aggregation for NFS control traffic


The first step is to set the primary and secondary ports of the bonded interface.

If the bond interface is needed for dedicated NFS control traffic, MTU 1500 may be sufficient, but consider
using Jumbo frames (MTU 9000). See section Jumbo frames for additional information.

The link aggregate can be added to the NFS server from the network properties in Unisphere: click File >
NAS Servers > edit (pencil icon) > Network > Interfaces & Routes > + > Production IP interface. Set
Ethernet Port: to the link aggregate created for the NFS traffic and provide the necessary networking
information (IP address, subnet mask/prefix length (or CIDR), gateway) for the link aggregate.

Defining network information for the link aggregation

Then, when mounting the NFS share on the NFS client, mount the NFS share with the IP address specified in
the link aggregate interface.

mount -o <options> XX.XX.XX.91:/ora-asm-nfs-test /oraasmnas-test

Ethernet switch and port channel configuration


If NFS control traffic will flow through a bonded NFS client (database server) NIC interface and a link
aggregate in Dell Unity storage, Ethernet switch ports must use LACP. If the candidate switch interfaces for
the bonded interfaces are in a VLAN, remove them from the VLAN before configuring the port channel.

Figure 30 illustrates how switch interfaces were configured as port channels in a Dell Networking S5000
switch. The port channels are used for NFS control traffic: port channel 1 is used with Dell Unity SP
module A, and port channel 2 is used with Dell Unity SP module B. A switch-side sketch follows.
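The following is a hedged sketch of the switch-side configuration for one of these port channels. The
interface names, port-channel number, and exact command syntax are assumptions for a Dell Networking
OS9-style CLI and must be adapted to the switch and ports in use:

configure
interface range tengigabitethernet 0/1 - 2
 port-channel-protocol lacp
  port-channel 1 mode active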


Cabling between Dell Unity 650F storage and an Ethernet switch


Cabling between Dell Unity 480F, 680F, and 880F storage and an Ethernet switch (mezzanine card ports 0
and 1 on each SP form a link aggregation, xx.xx.xx.91, for kNFS control traffic through switch port channels;
ports 2 and 3, xx.xx.xx.89 and xx.xx.xx.90, carry dNFS data traffic)


Cabling between Dell Unity 480F, 680F, and 880F storage and an Ethernet switch (the 4-port 10GbE
mezzanine card ports, xx.xx.xx.87 through xx.xx.xx.90, on each SP carry dNFS data traffic; a link
aggregation, xx.xx.xx.91, across 4-port 10GbE I/O module ports carries kNFS control traffic through switch
port channels)

Switch interfaces that will be connected to the channel-bond interfaces of the NFS client (database server)
must also be configured with LACP.

For additional network redundancy for NFS traffic, use redundant switches to provide greater network
availability.


Database server: NFS client network interface configuration


For best performance, configure the database server with 10 Gb/s ports for dNFS data traffic and,
optionally, 1 Gb/s ports for NFS control traffic. If possible, configure these ports, including all end-to-end
ports servicing dNFS data traffic, for jumbo frames (MTU 9000).

For NFS control/management traffic, either 1 Gb/s or 10 Gb/s ports can be used. For Oracle environments
that require path redundancy for NFS control traffic, use LACP across multiple interfaces from end point to
end point.

Ethernet connectivity between NFS client (database server) and Ethernet switch

The following snippets are for the interfaces shown previously and correspond to the interface addresses
used in the operating-system static routes and the dNFS channels defined in file oranfstab:

[root ~]# cd /etc/sysconfig/network-scripts


[root network-scripts]# cat ifcfg-em1
TYPE=Ethernet
DEFROUTE=yes
NAME=em1
DEVICE=em1
SLAVE=yes
MASTER=bond0
<snippet>

[root network-scripts]# cat ifcfg-em2


TYPE=Ethernet
DEFROUTE=yes
NAME=em2
DEVICE=em2
SLAVE=yes
MASTER=bond0
<snippet>

[root network-scripts]# cat ifcfg-bond0


TYPE=Bond
DEFROUTE=yes
DEVICE=bond0
USERCTL=no
IPADDR=XX.XX.XX.26
PREFIX=20
GATEWAY=XX.XX.XX.1
BONDING_MASTER=yes
<snippet>

[root network-scripts]# cat ifcfg-p1p1


TYPE=Ethernet
DEFROUTE=no
NAME=p1p1
DEVICE=p1p1
IPADDR=XX.XX.XX.57
PREFIX=20
GATEWAY=XX.XX.XX.1
<snippet>

[root network-scripts]# cat ifcfg-p1p2


TYPE=Ethernet
DEFROUTE=no
NAME=p1p2
DEVICE=p1p2
IPADDR=XX.XX.XX.61
PREFIX=20
GATEWAY=XX.XX.XX.1
<snippet>

[root network-scripts]# cat ifcfg-p2p1


TYPE=Ethernet
DEFROUTE=no
NAME=p2p1
DEVICE=p2p1
IPADDR=XX.XX.XX.62
PREFIX=20
GATEWAY=XX.XX.XX.1
<snippet>

[root network-scripts]# cat ifcfg-p2p2


TYPE=Ethernet
DEFROUTE=no
NAME=p2p2
DEVICE=p2p2
IPADDR=XX.XX.XX.72
PREFIX=20
GATEWAY=XX.XX.XX.1
<snippet>

Example of end-point mappings, dNFS traffic type, and LACP


NAS server port              NAS server IP  Host interface  Host interface IP  NFS traffic type  LACP
2                            XX.XX.XX.89    p2p1            XX.XX.XX.75        Data              No
3                            XX.XX.XX.90    p2p2            XX.XX.XX.76        Data              No
Link aggregation 1 (port 0)  XX.XX.XX.91    bond0 (em1)     XX.XX.XX.117       Control           Yes
Link aggregation 1 (port 1)  XX.XX.XX.91    bond0 (em2)     XX.XX.XX.117       Control           Yes

For additional information about bonded interfaces, see the section Configuring LACP.

Oracle dNFS configuration file: oranfstab


The oranfstab file determines which mount points are available to dNFS and how the dNFS network paths
(referred to as channels) between the NFS servers and the dNFS client are configured.

If oranfstab is missing, and assuming the NFS file systems are mounted, dNFS mounts and creates a single
dNFS channel for the entries found in /etc/mtab that the database requires. The dNFS channel in Oracle has
a name equal to the IP address of the mount entry in /etc/mtab. No additional configuration is required.

The following shows the /etc/fstab and /etc/mtab entries for a single NFS share:

[root ~]# grep ORA-ASM-NFS /etc/fstab
XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

[root ~]# grep ORA-ASM-NFS /etc/mtab
XX.XX.XX.87:/ORA-ASM-NFS /oraasmnas nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=XX.XX.XX.87,mountvers=3,mountport=1234,mountproto=tcp,local_lock=none,addr=XX.XX.XX.87 0 0

SQL> select distinct svrname, path, ch_id, svr_id from v$dnfs_channels;

SVRNAME         PATH            CH_ID      SVR_ID
--------------- --------------- ---------- ----------
XX.XX.XX.87     XX.XX.XX.87     0          1

If multiple channels to a NAS server are needed for increased dNFS bandwidth, automatic dNFS data-traffic
load balancing, or automatic dNFS channel failover, the file oranfstab is required. dNFS automatically
performs load balancing across all specified available channels, and if one channel fails, dNFS reissues I/O
commands over any remaining available channel for that NAS server.

oranfstab can reside in either /etc or $ORACLE_HOME/dbs. If oranfstab resides in /etc, its contents will be
global to all databases running on that server regardless of which ORACLE_HOME they are running from. If
oranfstab resides in $ORACLE_HOME/dbs, it will be global to any database running from that
ORACLE_HOME. If ORACLE_HOME is shared between RAC nodes, all RAC databases running from the
shared $ORACLE_HOME will use the same $ORACLE_HOME/dbs/oranfstab.

dNFS searches for mount entries in the following order and uses the first matching entry as the mount point:

• $ORACLE_HOME/dbs/oranfstab
• /etc/oranfstab
• /etc/mtab

If a database uses dNFS mount points configured in oranfstab, Oracle first verifies kNFS mount points by
cross-checking entries in mtab and oranfstab. If a match does not exist, dNFS logs a message and fails to
operate.

The following oranfstab file contains four dNFS data paths to each of two NAS server aliases; each NAS
server alias is for a different database. The format of data paths can vary within oranfstab:

server: ORA-NAS01
local: XX.XX.XX.57 path: XX.XX.XX.63
local: XX.XX.XX.62 path: XX.XX.XX.65
local: XX.XX.XX.61 path: XX.XX.XX.64
local: XX.XX.XX.72 path: XX.XX.XX.66
mnt_timeout: 60
export: /ORA-FS1 mount: /ora1db
#
server: ORA-ASM-NFS
local: XX.XX.XX.57 path: XX.XX.XX.87
local: XX.XX.XX.62 path: XX.XX.XX.88
local: XX.XX.XX.61 path: XX.XX.XX.89
local: XX.XX.XX.72 path: XX.XX.XX.90
mnt_timeout: 60
export: /ORA-ASM-NFS mount: /oraasmnas

The following channels for ORA-ASM-NFS will be created. Channels for ORA-NAS01 are not shown because
the current database relies only on ORA-ASM-NFS:

SQL> select distinct svrname
  2  , path
  3  , ch_id
  4  , svr_id
  5  from v$dnfs_channels
  6  order by ch_id;

SVRNAME         PATH            CH_ID      SVR_ID
--------------- --------------- ---------- ----------
ORA-ASM-NFS     XX.XX.XX.87     0          1
ORA-ASM-NFS     XX.XX.XX.88     1          1
ORA-ASM-NFS     XX.XX.XX.89     2          1
ORA-ASM-NFS     XX.XX.XX.90     3          1

If any NFS data path (column PATH) uses an IP existing in a subnet used by any client NIC interface, static
routes must be defined for that NFS data path. See section Shared subnets for more information.

Table 11 presents the available configuration parameters for oranfstab.

oranfstab configuration parameters


• server: The value can be any name. It begins a group of directives for dNFS. The directives control how
dNFS operates on the mounted NFS shares indicated by the pairs of export and mount values in the group.
The value of server is also used as an identifier in the v$dnfs views and in logging. For readability and
supportability, it is recommended to set the value of server to the name of the NAS server specified in the
mount command.

• local: Defines the IP of the interface on the database server designated for NFS data traffic. The values of
local and path define the end-to-end path taken by NFS data traffic. Up to four local-path pairs can be
specified. If there is more than one local-path pair, automatic load balancing and failover on the dNFS data
paths are enabled.

• path: Defines the IP of the interface on the NAS server that is used with the above local IP. The values of
path and local define the end-to-end path taken by NFS data traffic.

• export and mount: Define a pair of values that cannot be broken between lines. Both values must match
the corresponding paired values in /etc/mtab and /etc/fstab. The number of export-mount pairs within a
server stanza is unlimited.

• dontroute: Not applicable in Linux; if specified, it is ignored. It is intended for POSIX-related operating
systems and instructs the operating system to ignore the routes specified in the operating-system routing
table, guaranteeing that dNFS uses the routes specified by local and path in this file. To ensure proper
routing occurs in Linux, use static routing; see section Shared subnets for additional information.

• mnt_timeout (optional): Defines the time in seconds that dNFS waits for a successful mount before timing
out. The default is 600 seconds.

• nfs_version (optional): For 12c, specifies the version of NFS. Valid values are nfsv4 and nfsv3 (default).

• management (optional): For 12c, use the management interface for SNMP.

• community (optional): For 12c, defines the community string for SNMP.

Enabling and disabling Oracle dNFS


After installing the 12c RDBMS, dNFS is enabled or disabled by running the following commands as the
Linux user that owns the ORACLE_HOME:

To enable dNFS:


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

To disable dNFS:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off

Verify if dNFS is being used


When there is I/O against the database, the following checks can be used to verify that the Oracle instance is
using dNFS channels and that the Ethernet network has been configured correctly.

If the alert log contains the string running with ODM, dNFS has been enabled and the instance was started
with the ODM library containing the Direct NFS driver:

[oracle trace]$ grep 'instance running with ODM' alert_dbnfsasm.log
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

The local IP and path IP shown in the alert log should match all the records in oranfstab for the appropriate
NAS server hosting the database. If Oracle automatically detects the local host interface because oranfstab is
not defined, ensure the chosen interface is intended for the dNFS channel.

[oracle trace]$ grep 'Direct NFS: channel id' alert_dbnfsasm.log | tail -4
Direct NFS: channel id [0] path [XX.XX.XX.87] to filer [ORA-ASM-NFS] via local [XX.XX.XX.57] is UP
Direct NFS: channel id [1] path [XX.XX.XX.88] to filer [ORA-ASM-NFS] via local [XX.XX.XX.62] is UP
Direct NFS: channel id [2] path [XX.XX.XX.89] to filer [ORA-ASM-NFS] via local [XX.XX.XX.61] is UP
Direct NFS: channel id [3] path [XX.XX.XX.90] to filer [ORA-ASM-NFS] via local [XX.XX.XX.72] is UP

When there is database activity, there should be Ethernet activity either on:

• Interfaces corresponding to the local IPs defined in oranfstab, or
• The interface used to mount the NAS share (assuming dNFS control and dNFS data traffic are routed
through the same interface).

The activity is displayed as changes to the RX-OK and TX-OK values from netstat. The following lists send
(TX) and receive (RX) statistics for all interfaces, refreshed every 5 seconds:

[root ~]# netstat -i 5
Kernel Interface table
Iface MTU  RX-OK      RX-ERR RX-DRP RX-OVR TX-OK      TX-ERR TX-DRP TX-OVR Flg
<snippet>
p1p1  1500 138666544  0      9950   0      74959859   0      0      0      BMRU
p1p2  1500 150865538  0      9950   0      70397464   0      0      0      BMRU
p2p1  1500 151938399  0      9859   0      131360148  0      0      0      BMRU
p2p2  1500 133923499  0      9859   0      67984417   0      0      0      BMRU
<snippet>


The database compares the datafile names with the NFS mount to see if dNFS can be used. Any datafile that
dNFS can work with will reside in v$dnfs_files. Verify that the database sees all database files residing on the
NFS share and that there is activity on the dNFS channels.

select * from v$dnfs_files;

select pnum, svrname, path, local, ch_id, svr_id, sends, recvs
from v$dnfs_channels;

Oracle dynamic dNFS views


Eight dNFS dynamic performance views are available in Oracle 12c to monitor ODM NFS storage devices.
Four of the eight views are for Oracle standalone deployments, and four are for RAC deployments. A full
description of the dynamic performance views for Oracle standalone deployments can be found in the Oracle
Database Reference 12cR2 at https://docs.oracle.com/database/122/REFRN/REFRN.pdf:

dNFS dynamic performance views in an Oracle standalone deployment:


dNFS dynamic performance view   Description
v$dnfs_channels                 Displays open network paths (channels) to servers for which dNFS is providing files
v$dnfs_files                    Displays open files using dNFS
v$dnfs_servers                  Displays servers accessed using dNFS
v$dnfs_stats                    Displays performance statistics for dNFS
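
For example, the following sketch queries v$dnfs_servers to confirm which NAS servers dNFS has
negotiated with and the read and write transfer maximums (rtmax/wtmax) in effect; the column list is
illustrative and can be adjusted:

select svrname, dirname, mntport, nfsport, wtmax, rtmax
from v$dnfs_servers;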


Dell Unity features with Oracle databases


Several features of the Dell Unity system can provide additional benefits in an Oracle database
environment. The following subsections provide best practices for these features and their integration with
Oracle databases.

Data reduction
Data reduction is a Dell Unity feature that includes zero detection, compression, and advanced
deduplication. By offering multiple levels of space saving, Dell Unity provides flexibility for the best balance of
space savings and performance.

Oracle provides database-level compression. When database-level compression is enabled, it is unlikely that
the Dell Unity system can further reduce the space consumed by the already-compressed data. It is
recommended that compression is applied by either the array or the database engine, but not both. Certain
types of data, such as video, audio, image, and binary, usually benefit little from compression.

Compression requires CPU resources, and at high throughput levels it can start to affect performance. The
heavy write ratio of OLAP workloads can also reduce the benefits of compression for Oracle databases. File
data can compress well, so selective volume compression should be considered.

Since both the Dell Unity system and Oracle offer data compression, there are several factors to consider.
The best recommendation depends on factors such as the database contents, the amount of available CPU
on both the storage system and the database servers, and the available I/O resources.

The following lists the benefits of using Dell Unity compression over database-level compression:

• Dell Unity compression offloads CPU resources associated with compression, allowing more CPU
resources available to the operating system and databases.
• Dell Unity compression is transparent to the databases. Any versions of the database can benefit
from it.
• The cost to enable compression for all applications on a Dell Unity system can be lower compared to
the cost to enable compression for a database.
• Dell Technologies guarantees 4:1 storage efficiency for all-flash configurations. For more information,
go to Future-Proof Program.
• The Oracle and Grid user home directories on Linux are candidates for compression, but evaluate the
benefits before compressing them.
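
As a minimal sketch, data reduction can be enabled on an existing LUN through the Unisphere CLI. The
array address, credentials, and LUN ID below are hypothetical, and the exact syntax should be verified in the
Unisphere CLI User Guide for the OE version in use:

# Enable data reduction on an existing LUN (illustrative values)
uemcli -d 192.0.2.10 -u admin -p MyPassword /stor/prov/luns/lun -id lun_1 set -dataReduction yes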

Advanced deduplication
Advanced deduplication extends data reduction and can be enabled only when data reduction is enabled. It
reduces the storage needed for data by keeping only a few copies (often one copy) of a block with a given
content.

The deduplication scope is a single LUN. When choosing the storage layout, choose fewer LUNs for better
deduplication, or more LUNs for better performance.

This level of space saving can provide the greatest return in most environments, but it also requires the most
CPU on the Dell Unity system. Because user data commonly includes copies of the same content, there is
typically duplicate data to reclaim. Test this feature with a sample of database data and workload before
enabling it in production.
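
Extending the previous sketch, advanced deduplication can be enabled on the same LUN once data
reduction is on; again, the values are hypothetical and the syntax should be verified for the OE version in
use:

# Enable advanced deduplication in addition to data reduction (illustrative values)
uemcli -d 192.0.2.10 -u admin -p MyPassword /stor/prov/luns/lun -id lun_1 set -dataReduction yes -advancedDedup yes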


Advanced deduplication was first introduced in OE 4.5. It was an optional addition to the data reduction logic
available with certain models, and it could only be performed on Dell Unity blocks that were compressed. With
OE 5.0, advanced deduplication (if enabled) will deduplicate any block (compressed or uncompressed). For
more information, see the Dell Unity: Data Reduction and Dell Unity: Best Practices Guide.

Snapshots
Snapshots provide a fast and space-efficient way to protect Oracle databases. When using snapshots with
Oracle databases, there are important considerations to ensure a successful database recovery.

• All LUNs of an Oracle database must be protected as a set using the consistency group feature. The
consistency group ensures that the snapshot is taken at exactly the same time on all LUNs in that
group. For NFS file systems that support an Oracle database, the entire database must reside on the
file system being snapped to ensure database consistency.
• Snapshots do not replace Oracle RMAN for regular database backups. However, they offer additional
protection for the database and allow offloading RMAN processing to an alternate host.
• Snapshots can be taken on demand manually, or automatically based on a schedule defined on the
LUN or file system. It is recommended to put the database in hot backup mode before taking a snapshot
and to end backup mode after the snapshot is taken, as shown in the following sketch.
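
A minimal sketch from SQL*Plus (the timing of the snapshot between the two commands is site-specific):

SQL> alter database begin backup;
-- take the consistency group or file system snapshot in Unisphere or with uemcli
SQL> alter database end backup;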

Note: Snapshots increase the overall CPU load on the system and increase the overall drive IOPS in the
storage pool. Snapshots also use pool capacity to store the older data in the snapshot, which increases the
amount of capacity used in the pool until the snapshot is deleted. Consider the overhead of snapshots when
planning both performance and capacity requirements for the storage pool.

• Before enabling snapshots on a storage object, it is recommended to monitor the system and ensure
that existing resources can meet the additional workload requirements. (See “Hardware Capability
Guidelines” section and Table 2 in the Dell Unity: Best Practices Guide.)
• Enable snapshots on a few storage objects at a time, and then monitor the system to be sure it is still
within the recommended operating ranges before enabling more snapshots. Additional information
can be found in the Dell Unity: Snapshots and Thin Clones document.

When recovering the database from a snapshot, the Dell Unity system offers two methods to recover a point-
in-time copy of the database: restore and attach to host.

Restoring from a snapshot


With the restore method, the data of the original LUNs is replaced in place on the original server where the
snapshot was taken. The restore method is the simplest and fastest recovery method because no data is
copied and no configuration changes are required. The overall process to restore a snapshot is as follows.

1. Terminate all user connections and shut down the database to be restored.
2. In Dell Unisphere, identify and select the snapshot in the LUN Snapshots properties page or in the
Consistency Group Snapshots properties page. See Figure 34.
3. Choose Restore from the More Actions drop-down menu.
4. After the restore operation is completed, restart the database on the host.
5. Oracle automatically performs database recovery during the startup.
6. Verify the data in the database.
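
From the database side, steps 1 and 4 through 6 condense to the following sketch (the verification query is
illustrative):

SQL> shutdown immediate;
-- perform the snapshot Restore in Unisphere, then:
SQL> startup
SQL> select name, open_mode from v$database;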


Figure 34. Snapshot properties page

Recovering from a snapshot


When only a subset of the data needs to be restored, a point-in-time copy of the database can be mounted on
an alternate host, and administrators can export and import the data back into the original database. The
original database can remain running, but access to the corrupted data should be restricted. To the operating
system, the LUNs presented by the snapshot have different WWNs than the original LUNs. However, the
ASM headers are the same as those of the original ASM devices. Attaching these LUNs to the same host as
the original database is not recommended: doing so confuses Oracle and increases the risk of accidentally
writing to the wrong LUNs. The overall recovery process:

1. Prepare the destination host to receive the LUNs.

a. Configure the operating system similarly to the original host.


b. Install the same version of Oracle software on the destination host.
c. Install the same storage software (ASMLib, ASMFD) on the destination host.

2. In Unisphere, identify and select the snapshot in the LUN Snapshots properties page or in the
Consistency Group Snapshots properties page. See Figure 34.
3. Choose Attach to host from the More Actions drop-down menu.
4. Select the destination host and allow Read/Write access.
5. After the snapshot is attached to the host, scan for the LUNs using rescan-scsi-bus.sh
--forcerescan or -a.
6. Set ownership, group membership, and permission on the LUNs. It is possible to set ownership and
group membership with chown and permission with chmod, but the change will not persist across
reboots (see the udev example after this procedure).
7. Scan for ASM devices.


For ASMFD:

# asmcmd afd_scan
# asmcmd afd_lsdsk

For ASMLib:

# oracleasm scandisks
# oracleasm listdisks

8. Mount the ASM disk groups.


9. Copy the database init parameter file to the destination host.
10. Create the database log directories on the destination host.
11. Start up the database in sqlplus.
12. Restore data using one of the following methods. Data is extracted from the destination database and
imported into the original database.

a. RMAN
b. Data Pump
c. Copying data over a database link

Find more information in the Oracle Backup and Recovery User's Guide.

13. Once the recovery is complete, shut down the database copy.
14. Dismount the ASM disk groups.
15. Remove the snapshot LUNs from the destination host.
16. Remove host access to the snapshot LUNs in Unisphere.
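
One common way to make the ownership and permissions from step 6 persist across reboots is a udev rule.
The following is a minimal sketch that assumes multipath devices; the rule file name, device-mapper name,
and owner/group are hypothetical and must match the environment:

# /etc/udev/rules.d/99-oracle-asm.rules (illustrative)
# Assign grid:asmadmin ownership and 0660 permissions to the multipath device named asm_data1
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="asm_data1", OWNER="grid", GROUP="asmadmin", MODE="0660"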

Thin clones
Thin clones are based on snapshot technology and are the preferred way to make read/write copies of
databases. As with regular LUNs, many data services, such as snapshots, replication, and host I/O limits, are
also available to thin clones. When thin clones are first created, they consume no storage because they
initially share the same blocks as their parent snapshot. As new data is written or changes are made to
existing data, new data blocks are allocated and tracked separately from the parent. The data on the thin
clones is the same as on the parent LUNs, but the thin clones have different LUN IDs and WWNs. To the
operating system, they appear to be different LUNs. However, when Oracle scans the ASM headers, they
contain the same labels and disk group information as the original LUNs. Therefore, it is recommended to
attach thin clones to an alternate host to avoid confusing Oracle and risking overwriting data on the wrong
LUNs.

Thin clone use cases:

• Create full-size development and test environments from production


• Test new code, patches, or data changes in a production replica
• Offload backup and restore processing

Additional information about thin clones can be found in the Dell Unity: Snapshots and Thin Clones
document.


Creating a database copy with thin clones


The following shows the overall process to create a database copy using a thin clone.

1. Prepare the destination host to receive the LUNs.

a. Configure the operating system similarly to the original host.


b. Install the same version of Oracle software on the destination host.
c. Install the same storage software (ASMLib, ASMFD) on the destination host.

2. In Unisphere, identify and select the LUN or the consistency group.


3. Choose Clone from the More Actions drop-down menu.
4. Select a snapshot to clone from.

Note: Only snapshots with no auto-delete policy and no expiration time are eligible for selection.
Remove the auto-delete policy and expiration time on the snapshot before attempting the Clone
action.

5. Follow the wizard to configure the thin clone’s name, host I/O Limit, host access, snapshot policy, and
replication.
6. After the thin clone LUNs are attached to the destination host, scan for the LUNs using
rescan-scsi-bus.sh --forcerescan or -a.
7. If the database clone is intended for long-term use, configure multipath (see the multipath example
after this procedure) and persistent ownership and permission on the thin clone LUNs.
8. Scan for ASM devices.
9. Mount the ASM disk groups.
10. Copy the database init parameter file to the destination host.
11. Create the database log directories on the destination host.
12. Start up the database in sqlplus.
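
For step 7, the following is a minimal /etc/multipath.conf sketch that gives a thin clone LUN a stable alias; the
WWID shown is hypothetical and should be taken from multipath -ll output:

multipaths {
    multipath {
        # WWID of the thin clone LUN as reported by multipath -ll (hypothetical value)
        wwid  36006016012345678901234567890abcd
        alias ora_clone_data1
    }
}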

Refresh thin clones


Thin clones can be refreshed from the same snapshot or from a different snapshot, providing an easy way to
reset an environment to a consistent baseline or to switch to a different point-in-time copy. The refresh
process is quick and simple because only pointers are updated and no data is copied. The process to refresh
a thin clone is as follows:

1. Shut down the database copy that is using the thin clone LUNs.
2. Dismount the ASM disk groups.
3. In Unisphere, select the thin clone and select the Refresh action in the More Actions menu. See
Figure 35.
4. A snapshot of the thin clone is automatically created to preserve the thin clone data.
5. Select a snapshot to refresh from.

Note: Snapshots that have auto-delete policy or expiration time set are not eligible for selection.
Remove the auto-delete policy and expiration time on the snapshot first before starting the Refresh
action.

6. After the refresh is completed, perform a rescan on the operating system.


7. Perform a rescan on ASM devices.
8. Mount the ASM disk group.


9. Start up the database.


10. Verify the data in the database.
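
The host-side steps around a refresh condense to the following sketch; the disk group name DATA is
illustrative, and the commands assume ASM is administered with asmcmd:

SQL> shutdown immediate;
$ asmcmd umount DATA
-- refresh the thin clone in Unisphere, rescan the operating system and ASM, then:
$ asmcmd mount DATA
SQL> startup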

Figure 35. Refresh action in Unisphere

Replication
Creating a high-availability solution for Oracle databases often involves creating a copy of the data on another
storage device and synchronizing that data in some manner. Dell Unity replication provides data
synchronization between Dell Unity systems. Data is replicated at the consistency-group level or at the LUN
and file system level, providing a choice of replication settings on a per-volume basis. Using Dell Unity
replication can be an effective way to protect Oracle databases due to the flexibility and configuration options
that it provides. The variety of options provides a robust way to develop a replication scheme with the proper
mix of performance and bandwidth efficiency while still meeting RTO and RPO requirements.


When using Dell Unity replication to protect Oracle databases that are on multiple volumes, contain all ASM
devices for a database within a consistency group. Then configure replication on the consistency group.

Dell Unity storage supports both asynchronous and synchronous replication. A flash tier is recommended (in
a hybrid pool) for both source and destination pools where replication will be active.

Asynchronous replication
Asynchronous replication takes snapshots on the replicated storage objects to create the point-in-time copy,
determine the changed data to transfer, and maintain consistency during the transfer. Consider the overhead
of snapshots when planning performance and capacity requirements for a storage pool that will contain
replicated objects.

Setting smaller RPO values on replication sessions will not make them transfer data more quickly but will
result in more snapshot operations. Choosing larger RPOs, or manually synchronizing during non-production
hours, may provide more predictable levels of performance. Additional information can be found in the Dell
Unity: Replication Technologies and Dell Unity: Configuring Replication documents.

Synchronous replication
Synchronous replication transfers data to the remote system over the first Fibre Channel port on each SP.
When planning to use synchronous replication, it may be appropriate to reduce the number of host
connections on this port. When the CNA ports are configured as FC, CNA port 4 is defined as the
synchronous replication port. If the CNA ports are configured as 10 GbE, port 0 of the lowest numbered FC
I/O module is the replication port. Additional information can be found in the Dell Unity: Replication
Technologies and Dell Unity: Configuring Replication documents.


Data protection
In addition to the snapshots and replication provided by Dell Unity systems, Dell Technologies offers
additional data protection software that integrates with the Dell Unity data protection features. The software is
optional and can be used to enhance the overall application protection.

AppSync
Dell AppSync is software that enables integrated Copy Data Management (iCDM) with Dell primary storage
systems, including Dell Unity arrays. It supports many applications, including Oracle, and a range of storage
replication technologies. For the latest support information, refer to the AppSync Support Matrix at the Dell
EMC E-Lab Navigator.

AppSync simplifies and automates the process of creating and using snapshots of production data. By
abstracting the underlying storage and replication technologies, and through application integration, AppSync
empowers application owners to manage data copy needs themselves. The storage administrator, in turn,
need only be concerned with initial setup and policy management, resulting in a more agile environment.

Additional information about AppSync can be found in the AppSync User and Administration Guide and the
AppSync Performance and Scalability Guidelines.

RecoverPoint virtual edition


Dell RecoverPoint virtual edition provides continuous data protection with multiple recovery points to restore
applications instantly to a specific point in time. RecoverPoint virtual edition consists of RecoverPoint
Appliance (RPA) software deployed as a virtual appliance in an existing VMware ESXi environment.
RecoverPoint virtual edition is a flexible deployment option that offers maximum simplicity, has no
dependency on a physical appliance, and can lower TCO.


File system mount options


The following table describes the file system mount options used in this paper.

Mount options
Mount option   Description
rw Mounts the file system for both reading and writing operations
bg Defines a background mount to occur if a timeout or failure occurs. bg causes the
mount command to fork a child that continues to attempt to mount the export, while
the parent process returns immediately with a zero status.
hard Explicitly marks the volume as hard-mounted and determines the recovery behavior of
the NFS client after an NFS request times out. hard is enabled by default and prevents
NFS from returning short-write errors by retrying the request indefinitely at timeo=<nn>
intervals; short writes would otherwise cause the database to crash. The server reports
a message to the console when a major timeout occurs and continues to attempt the
operation indefinitely.
nointr Without this option, signals such as kill -9 can interrupt an NFS call, which causes
data corruption in datafiles because the in-flight writes are abruptly terminated.
rsize Specifies the maximum size (in bytes) of each read request that the NFS client can
issue when reading data from a file on an NFS server. The default depends on the
kernel version but is generally 1,024 bytes. The data payload size of each NFS read
request is equal to or smaller than the rsize setting, with a maximum payload size of
1,048,576. Values lower than 1,024 are replaced with 4,096, and values larger than
1,048,576 are replaced with 1,048,576. If the specified value is within the supported
range but not a multiple of 1,024, it is rounded down to the nearest multiple of 1,024.
If a value is not specified, or if the value is larger than the supported maximum on
either the client or server, the server and client negotiate the largest rsize they can
both support. The rsize specified on the mount appears in /etc/mtab; however, the
effective rsize negotiated by the server and client appears in /proc/mounts. For Oracle,
the value must be equal to or a larger multiple of the Oracle block size (init:
db_block_size, default 8 KB) to prevent fractured blocks in Oracle. rsize must be set to
at least 16,384; however, Oracle recommends setting the value to 32,768.
wsize Identical to rsize, but for write requests sent from the NFS client. wsize must be set to
at least 16,384; however, Oracle recommends setting the value to 32,768. Oracle
dNFS clients issue writes at wtmax granularity to the NFS filer. If the dNFS client is
used and the NFS server does not support a write size (wtmax) of 32,768 or larger,
NFS reverts to the native kernel NFS path.
tcp Defines the transport protocol name and family that the NFS client uses to transmit
requests to the NFS server. tcp also controls how the mount command communicates
with the server's rpcbind and mountd services. If an NFS server has both an IPv4 and
an IPv6 address, using a specific netid forces the use of IPv4 or IPv6 networking to
communicate with the server. Specifying tcp forces all traffic from the mount command
and the NFS client to use TCP. The tcp option is an alternative to specifying proto=tcp.
Do not use NFS over UDP for any reason.
vers Specifies the NFS protocol version number used to contact the server's NFS service.
Use a value of either 3 or 4. The vers option is an alternative to the nfsvers option and
is provided for compatibility with other operating systems.

timeo Defines the time (in tenths of a second) that an NFS client waits for a request to
complete before it retries the request. With NFS over TCP, the default value is 60
seconds; otherwise, the default value is 0.7 seconds. If a timeout occurs, the behavior
depends on whether hard or soft was used to mount the file system.
actimeo This option is required whenever datafiles can AUTOEXTEND. It ensures that the
behavior of AUTOEXTEND is propagated to all nodes in a cluster by disabling all NFS
attribute caching. actimeo sets the values of acregmin, acregmax, acdirmin, and
acdirmax to the same value. Without actimeo, NFS caches the old file size, causing
incorrect behavior. Oracle depends on file system messaging to advertise a change in
the size of a datafile; therefore, this setting is necessary.
noac Prevents NFS clients from caching file attributes so that applications can more quickly
detect file changes on the NFS server.
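
Combining these options, the following is an illustrative /etc/fstab entry for a file system holding Oracle
datafiles; the NAS server name, export, and mount point are hypothetical, and the option set mirrors the
Oracle recommendations in Doc ID 359515.1:

# /etc/fstab entry for Oracle datafiles on NFS (illustrative names)
nas01:/oradata_fs  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0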


Dell Unity x80F specifications


The following table lists specifications of Dell Unity x80F All-Flash arrays.

Dell Unity x80F All-Flash array specifications


                            Dell Unity 380F         Dell Unity 480F         Dell Unity 680F         Dell Unity 880F
CPU per SP                  1x 6-core @ 1.7 GHz     2x 8-core @ 1.8 GHz     2x 12-core @ 2.1 GHz    2x 16-core @ 2.1 GHz
                            Broadwell               Skylake (4108)          Skylake (4116)          Skylake (6130)
Memory per SP               64 GB (4x 16 GB)        96 GB (12x 8 GB)        192 GB (12x 16 GB)      384 GB (12x 32 GB)
Min/Max drives              5 / 500                 5 / 750*                5 / 1000*               5 / 1500*
Embedded SAS ports per SP   2x 4-lane 12 Gb/s SAS   2x 4-lane 12 Gb/s SAS   2x 4-lane 12 Gb/s SAS   2x 4-lane 12 Gb/s SAS
Optional SAS ports per SP   N/A                     4x 4-lane or 2x 8-lane 12 Gb/s SAS I/O module (480F, 680F, and 880F)
Embedded 10 GbE BaseT       2 ports                 N/A                     N/A                     N/A
ports per SP
Embedded CNA ports per SP   2 ports: 8/16 Gb FC,    N/A                     N/A                     N/A
                            10 GbE Opt, 1 GbE
                            BaseT, or empty
Supported 4-port            N/A                     4x 10/25 GbE Opt, 4x 10 GbE BaseT, or empty (480F, 680F, and 880F)
mezzanine cards per SP
Supported I/O modules       4x 10 GbE BaseT,        4x 10 GbE BaseT, 4x 16 Gb FC, 4x 10/25 GbE Opt, and
(two slots per SP)          4x 16 Gb FC,            4x 12 Gb/s SAS (480F, 680F, and 880F)
                            4x 10/25 GbE Opt
Supported DAEs              2.5" 25-drive, 3.5" 15-drive, and 2.5" 80-drive (all models)

*Requires the 4-port 12 Gb SAS backend I/O module to reach the maximum drive count.


Technical support and resources


Dell.com/support is focused on meeting customer needs with proven services and support.

Storage technical documents and videos provide expertise that helps to ensure customer success on Dell
storage platforms.

The Dell Unity area of the Dell Technologies Info Hub provides white papers and videos.

Related resources
The following referenced or recommended Dell Technologies publications and resources are at Dell.com or
infohub.delltechnologies.com.

• Dell Unity XT: Introduction to the Platform


• Dell Unity: Best Practices Guide
• Dell Unity: Dynamic Pools
• Dell Unity Family Configuring Pools
• Dell Unity: Compression - Overview
• Dell Unity: Data Reduction – Overview
• Dell Unity: Data Reduction Analysis
• Dell Unity: Performance Metrics - A Detailed Review
• Dell Unity: Replication Technologies - A Detailed Review
• Dell Unity: Snapshots and Thin Clones - A Detailed Review
• Dell Unity: Unisphere Overview - Simplified Storage Management
• Dell Unity: Data at Rest Encryption - A Detailed Review
• Dell Unity Drive Support Matrix
• Dell EMC Host Connectivity Guide for Linux
• Dell Unity: NAS Capabilities - A Detailed Review
• Dell Unity: High Availability - A Detailed Review
• AppSync User and Administration Guide
• PowerPath Installation and Administration Guide

The following referenced or recommended Veritas resources are at Veritas Online Support:

• Storage Foundation Administrator’s Guide


• Storage Foundation Tuning Guide

The following referenced or recommended Oracle resources are at the Oracle Online Documentation Portal:

• Oracle Automatic Storage Management Administrator’s Guide


• Oracle Database Administration Documentation Library
• Oracle Performance Guide

The following referenced or recommended Oracle notes are at My Oracle Support (Oracle support license
required):

• Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS
devices (Doc ID 359515.1)
• Creating File Devices On NAS/NFS FileSystems For ASM Diskgroups. (Doc ID 1620238.1)


• Direct NFS: FAQ (Doc ID 954425.1)


• How to configure DNFS to use multiple IPs (Doc ID 1552831.1)
• How to Setup Direct NFS Client Multipaths in Same Subnet (Doc ID 822481.1)
• How to configure DNFS to use multiple IPs using different subnets (Doc ID 1528148.1)
• Best Practices: How to configure DNFS client using Single and Multiple Subnets (Doc ID 2246252.1)
• Step by Step - Configure Direct NFS Client (dNFS) on Linux (Doc ID 762374.1)
• How To Setup dNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)

Referenced or recommended I/O benchmark resources:

• vdbench download and documentation


• FIO download
• FIO output explained
• atop and netatop homepage
• collectl sourceforge homepage
• SLOB Blog
• HammerDB homepage
• Swingbench homepage
