Dell Unity Storage With Oracle Databases
Best Practices
Abstract
This document provides best practices for deploying Oracle databases with Dell
Unity All-Flash arrays, including recommendations and considerations for
performance, availability, and scalability.
April 2022
H16765
Revisions
Date Description
November 2017 Initial release for Dell Unity OE version 4.2
June 2019 Updated with new format and content for Dell Unity x80F arrays
April 2022 Inclusive language changes, rebranding, updated links, and other edits
Acknowledgments
Authors: Mark Tomczik, Henry Wong
This document may contain language from third party content that is not under Dell's control and is not consistent with Dell's current guidelines for Dell's
own content. When such third party content is updated by the relevant third parties, this document will be revised accordingly.
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2017–2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Table of contents
Revisions
Acknowledgments
Table of contents
Executive summary
Audience
Storage configuration
I/O module options
Dynamic storage pools
Dell Unity features
FAST VP
FAST cache
Data reduction
Data at Rest Encryption
Host I/O limits
Oracle database design considerations
OLTP workloads
OLAP or DSS workloads
Mixed workloads
Storage pools
Testing and monitoring
Deploying Oracle databases on Dell Unity storage
Linux setup and configuration
Oracle Automatic Storage Management
Linux LVM
File systems
Dell Unity file storage
Dell Unity front-end Ethernet connectivity for file storage
Dell Unity NAS servers
Dell Unity NFS file system
Scalability
Storage efficiency
Quotas
NFS protocol
Dell Unity NFS share
Executive summary
This paper delivers straightforward guidance to customers using Dell Unity All-Flash storage systems in an
Oracle 12c database environment on Linux operating systems. Oracle is a robust product that can be used in
various solutions. The relative priorities of critical design goals such as performance, manageability, and
flexibility depend on your specific environment. This paper provides considerations and recommendations to
help meet your design goals.
This paper was developed using the Dell Unity 880F All-Flash array, but the information is also applicable to
other Dell Unity All-Flash array models (x80F and x50F). Oracle Linux (OL) 7 was used for this paper but the
content is applicable to Oracle Linux (OL) 6, and Red Hat Enterprise Linux 6 and 7.
These guidelines are recommended, but some recommendations may not apply to all environments. For
questions about the applicability of these guidelines in your environment, contact your Dell Technologies
representative.
Dell Unity x80F models provide an excellent storage solution for Oracle workloads regardless of the
application characteristics and whether file or block storage is required. This paper discusses the best
practices and performance of the Dell Unity 880F array with block storage, but also presents best practices
with native xNFS or Oracle dNFS.
In addition to file and block storage support, the Dell Unity x80F arrays provide several other features. Some of the standard features are point-in-time snapshots, local and remote replication, built-in encryption, compression, and extensive integration capabilities for an Oracle standalone or RAC environment.
Audience
This document is intended for Dell Unity administrators, database administrators, architects, partners, and
anyone responsible for configuring Dell Unity storage systems. It is assumed readers have prior experience
with or training in the following areas:
We welcome your feedback along with any recommendations for improving this document. Send comments
to [email protected].
Storage configuration
Dell Unity storage is a virtually provisioned, flash-optimized storage system designed for ease of use. This
paper covers the All-Flash array models with emphasis on Dell Unity x80F arrays. This section describes the
foundational array technologies that support the application-specific sections that follow. For general Dell
Unity best practices, see the Dell Unity: Best Practices Guide and the documentation listed in the appendix.
I/O ports used for transferring user data are available on the embedded module or additional I/O modules on
each SP.
For Oracle environments that require FC front-end connectivity on the Dell Unity 480F, 680F, or 880F models,
consider using a filler blank in place of the embedded I/O module. Also consider using one or two optional I/O
modules that support FC.
If Oracle dNFS is used, consider using the optional four-port mezzanine card on the Dell Unity 480F, 680F, or 880F models. The four-port mezzanine card can be used in both Link Aggregation Control Protocol (LACP) and fail-safe networking (FSN) configurations. LACP is discussed in section Configuring LACP.
I/O module and mezzanine card options for the Dell Unity 480, 480F, 680, 680F, 880, and 880F models:
• 4-port 16 Gb/s FC
• 4-port 10 GbE BaseT RJ45 (auto-negotiates to 1 GbE)
• 4-port 25 GbE optical for Ethernet and iSCSI block traffic; either 10 Gb or 25 Gb SFPs (no auto negotiation, mixed SFPs OK), or TwinAx (active or passive)
• 4-port 12 Gb SAS backend
In high-demand Oracle environments where IOPS, latency, or capacity are a concern, consider the option of using a 4-port 12 Gb SAS I/O module. The 4-port module increases the number of configurable drives in the array, which can help lower latency and increase IOPS and capacity.
I/O modules must be installed in pairs (one in SPA and one in SPB), must be of the same type, and must reside in the same slot in each SP.
With Dell Unity 480F, 680F, and 880F models, slot 0 I/O modules have x16 PCIe lanes while slot 1 has x8
PCIe lanes. For this reason, slot 0 should be reserved for environments needing greater bandwidths.
The Ethernet/iSCSI card can be used in both Link Aggregation Control Protocol (LACP) and fail-safe networking (FSN) configurations.
Once the Dell Unity array is configured, all I/O modules are persistent and cannot be changed to a different type.
Dynamic pools offer many benefits over traditional pools. The new pool structure eliminates the need to add
drives in the multiples of the RAID width, allowing for greater flexibility in managing and expanding the pool.
Dedicated hot spare drives are also no longer required with dynamic pools. Data space and replacement
space are spread across the drives within the pool. Spreading space across drives within the pool provides
better drive utilization, improves application I/O, and speeds up the proactive copying of failing drives and the
rebuild operation of failed drives.
In general, create dynamic pools with large numbers of drives of the same type and use few storage pools
within the Dell Unity system. However, it may be appropriate to configure additional storage pools in the
following instances:
Additional information can be found in the documents Dell Unity: Dynamic Pools and Dell Unity: Configuring Pools.
A storage pool's capacity is used in the following ways:
• To store all data written into storage objects — LUNs, file systems, datastores, and VMware vSphere Virtual Volumes (vVols) — in that pool
• To store data that is needed for snapshots of storage objects in the pool
• To track changes to replicated storage objects in that pool
Storage pools must maintain free capacity to operate properly. By default, a Dell Unity system raises an alert if a storage pool has less than 30% free capacity, and it begins to automatically invalidate snapshots and replication sessions if the storage pool has less than 5% free capacity. Dell Technologies recommends that a storage pool always has at least 10% free capacity.
More drives can be added to a storage pool online. However, to optimize the performance and efficiency of the storage, add drives with the same specification, type, and capacity as the existing drives in the pool. Though not required, adding a number of drives equal to the RAID width + 1 allows the new capacity to be immediately available. Data is automatically rebalanced in the pool when drives are added.
Note: Once drives are added to a storage pool, they cannot be removed unless the storage pool is deleted.
All-flash pool
All-flash pools provide the highest level of performance in Dell Unity systems. Use an all-flash pool when the
application requires the highest storage performance at the lowest response time. Note the following
considerations with all-flash pools:
• All-flash pools consist of either all SAS flash 3 or all SAS flash 4 drives of the same capacity.
• Dell EMC FAST Cache and FAST VP are not applicable to all-flash pools.
• Compression is only supported on an all-flash pool.
• Snapshots and replication operate most efficiently in all-flash pools.
• Dell Technologies recommends using only a single drive size and a single RAID width within an all-
flash pool.
For example: For an all-flash pool, use 800 GB SAS flash 3 drives and configure them all with RAID 5 8+1. For supported drive types in an all-flash pool, see the appendix.
Hybrid pool
Hybrid pools (including a combination of flash drives and hard disk drives) are not supported with Dell Unity
All-Flash arrays.
FAST VP
Dell FAST VP accelerates the performance of a specific storage pool by automatically moving data within that
pool to the appropriate drive technology based on data access patterns. FAST VP is only applicable to hybrid
pools within a Dell Unity Hybrid flash system.
FAST cache
FAST Cache is a single global resource that can improve the performance of one or more hybrid pools within
a Dell Unity Hybrid flash system. FAST Cache can only be created with SAS Flash 2 drives and is only
applicable to hybrid pools. FAST Cache is not applicable to all-flash arrays.
Data reduction
Dell Unity compression reduces the amount of physical storage needed to save a dataset in an all-flash pool
for block LUNs and VMFS datastores. This capability was added to Dell Unity OE version 4.1 for thin block
storage resources and was called Dell Unity Compression. Thin file storage resource support was added in
Dell Unity OE version 4.2 for file systems and NFS datastores in an all-flash pool.
In Dell Unity OE version 4.3, the Dell Unity Data Reduction feature replaces compression. It provides more
space savings logic to the system with the addition of zero-block detection and deduplication. In Dell Unity OE
version 4.5, data reduction includes an optional feature called Advanced Deduplication, which expands the
deduplication capabilities of the data reduction algorithm. With data reduction, the amount of space required to store a dataset for data reduction-enabled storage resources is reduced when savings are achieved. Data reduction and advanced deduplication are supported on LUNs, file systems, and NFS and VMFS datastores.
Starting with OE 4.5, an 8 KB Dell Unity block within a resource is subject to compression. The block will be
compressed if a 1% savings or higher can be obtained.
Dell Unity Data Reduction savings are not only achieved on the storage resource it is enabled on, but on
snapshots and thin clones of those resources. Snapshots and thin clones inherit the data reduction setting of
the source storage resource, which helps to increase the space savings that they can provide.
If Dell Data Reduction is enabled, the storage system intelligently controls it. Configuring data reduction and
reporting savings is simple, and can be done through Unisphere, Unisphere CLI, or REST API.
Dell Unity Data Reduction is licensed with all physical Dell Unity systems at no additional cost. Data reduction
is not available on the Dell Unity VSA version of the Dell Unity platform as data reduction requires write
caching within the system. Dell Unity must be at OE version 4.3 or later to use data reduction with block and
file resources (thin LUNs, thin file systems, VMware vStorage VMFS datastores, and NFS).
By offering multiple technologies of space saving, Dell Unity provides flexibility for the best balance of space
savings and performance.
Note: Data reduction is disabled by default and needs to be enabled before advanced deduplication is an
available option. After enabling data reduction, advanced deduplication is available, but is disabled by default.
While data reduction helps to optimize storage investments by maximizing drive utilization, data reduction:
• Increases the overall CPU load on the Dell Unity system when storage objects service reads or writes
of compressible data, and
• May increase latency when accessing the data.
Consider these best practices before enabling data reduction on a storage object:
• Monitor the system to ensure it has available resources to support data reduction. See “Hardware
Capability Guidelines” section and Table 2 in the Dell Unity: Best Practices Guide.
• Enable data reduction on a few storage objects at a time. Then monitor the system to be sure it is still
within the recommended operating ranges before enabling Data Reduction on more storage objects.
• With Dell Unity x80F models, consider that data reduction will provide space savings if the data on
the storage block is at least 1% compressible. Before the new x80F models and OE 5.0, data
reduction would provide space savings if the data on the storage block was at least 25%
compressible.
• Before enabling data reduction on a storage object, determine if it contains data that will compress.
Do not enable data reduction on a storage object if there will be no space savings.
• Contact your Dell Technologies representative for tools that can analyze the data compressibility.
For more information regarding compression, see the Dell Unity: Data Reduction document.
For additional information about Dell Unity Data Reduction, see the Dell Unity: Data Reduction Overview and
Dell Unity: Data Reduction Analysis.
Advanced deduplication
Advanced deduplication, an optional extension of data reduction released in OE 4.5, increases the capacity efficiency of data reduction. Advanced deduplication can be enabled on storage resources and is only performed on compressed blocks. With OE 4.5, this meant that advanced deduplication was performed only on blocks that compressed, that is, blocks that achieved a savings of 1% or more. If a Dell Unity block did not compress, advanced deduplication was not performed on it, so multiple copies of an uncompressed block could not be deduplicated for further storage savings.
This restriction no longer exists in OE 5.0. With OE 5.0, advanced deduplication can deduplicate to an uncompressed block whenever a write or overwrite occurs on the block, even if the block has 0% compression savings.
For more information regarding advanced deduplication, see the Dell Unity: Data Reduction white paper and
the Dell Unity: Best Practices Guide.
Data at Rest Encryption
Note: D@RE is a licensable feature and must be selected during the ordering process and licensed at system initialization. D@RE can only be enabled at the time of system installation with the appropriate license and cannot be enabled later.
If encryption is enabled, Dell Technologies recommends making external backups of the encryption keys after
system installation, and immediately following any change in the system’s drive configuration. Changes to the
system’s drive configuration would include such things as creating or expanding a storage pool, adding new
drives, or replacing a faulted drive.
For more information about D@RE, see the Dell Unity: Data at Rest Encryption document.
Host I/O limits
Host I/O limits, also known as Quality of Service (QoS), provide an excellent means to manage competing or demanding workloads. Instead of trying to manage workloads with multiple storage pools, use host I/O limits. Host I/O limits allow LUNs to be restricted to a specified amount of IOPS or bandwidth so they do not adversely impact other applications. Host I/O limits allow storage administrators to ensure that applications and environments adhere to budgeted limits, which greatly simplifies planning and management.
Host I/O limits are recommended for Oracle database environments for several reasons. First, storage administrators can ensure that demanding Oracle database instances do not overwhelm the entire array by setting limits on database volumes. Also, if the Oracle database is the priority application, they can set limits on other LUNs on the system to ensure that the Oracle database gets the required resources. Another useful capability of host I/O limits is the ability to burst above a given limit for a specific period, which is user configurable. In this way, small exceptions can still be allowed while maintaining balanced performance.
In development and testing environments, it can be difficult to determine if an application meets performance
requirements. Typically, these environments are smaller than production environments and it is not always
feasible to keep a copy of production data in these environments due to costs or privacy concerns. An issue
with smaller datasets is that the application can run faster and then encounter serious performance issues
when deployed on a real dataset in production.
Host I/O limits can be used to restrict the I/O on smaller datasets to highlight I/O-intensive queries. Setting
limits on databases in development and testing environments will help identify problem areas so they can be
resolved before production deployment. The result is improved Oracle database service levels and greater scalability.
For additional information, see the Dell Unity: Unisphere Overview document.
OLTP workloads
An online transaction processing (OLTP) workload typically consists of small random reads and writes. The
I/O sizes are generally equivalent to the database block size. The primary goal of designing a storage system
for this type of workload is to maximize the number of IOPS while keeping the latency as low as possible.
Depending on the business and application requirement, a latency of less than 1 millisecond is typical in a
high performing environment.
Consider using 16 Gb Fibre Channel (FC) or 25 GbE optical I/O modules in each SP. If higher drive counts are necessary to achieve higher IOPS, use 12 Gb SAS I/O modules for backend connectivity, such as from the storage processors (controllers) to the disk enclosures. 12 Gb SAS is only available in the 480F, 680F, and 880F models, and 25 GbE optical is only available in Dell Unity x80/x80F models.
OLTP performance
For best results, capture performance statistics for at least 24 hours, including the system peak workload.
An OLTP workload typically consists of small random reads and writes. The backend storage system
servicing this type of workload is primarily sized based on capacity and the number of IOPS required.
The primary goal of designing a storage system that services this type of workload is to optimize the I/O
throughput. The design needs to consider all components in the entire I/O path between the hosts and the
drives in the Dell Unity system. For best throughput, consider using:
• 16 Gbps FC or 25 GbE optical (10 GbE is also an option) iSCSI connectivity to the array, and
• 12 Gbps SAS connectivity from the controllers to the disk enclosures.
To meet high throughput requirements, multiple HBAs may be required on the server, the array, or both.
Mixed workloads
Oracle database workloads may not have I/O patterns that can be strictly categorized as OLTP or OLAP
because several applications can reside within the same database. Multiple databases with different
workloads can also co-exist on the same host. Choose and design a storage system that can handle different
types of workloads. When testing the I/O systems, the combined workload of these databases should be
accounted for and measured against the expected performance objectives.
The Dell all-flash midrange storage portfolio offers storage systems that scale in both IOPS and throughput. Combined with its advanced architecture and storage-saving features, the Dell all-flash midrange platform is ideal for any type of Oracle workload.
Storage pools
In general, it is recommended to use fewer storage pools within Dell Unity systems. Fewer storage pools reduce complexity and increase flexibility. Dell Technologies recommends using a single virtual disk pool for
hosting volumes for Oracle databases. A single virtual pool provides better performance by leveraging the
aggregate I/O bandwidth of all disks to service I/O requests from Oracle databases. A single drive pool is
easier to manage, allowing an administrator to easily adapt the storage system to satisfy ever-changing
workloads that are common in Oracle databases environments. Before creating multiple storage pools to
separate workloads, understand the various Dell Unity features that are available for managing and throttling
specific workloads.
RAID configurations
By default, the Dell Unity system chooses RAID 5 as the protection level when creating a storage pool which
contradicts traditional guidance advocating RAID 1/0 for database workloads. However, this traditional
guidance assumes the storage system contains spinning disks and does not consider SSDs or flash-
optimized storage such as Dell Unity systems. Testing Dell Unity All-Flash systems in most RAID 5 and RAID
1/0 configurations has shown negligible performance gains unless the workload is extremely write-intensive
for an extended period. Usually, the small performance gain of RAID 1/0 is not worth the reduced capacity
and therefore it is recommended to use the default configuration of RAID 5. For heavy write workloads where
maximum write performance is required, RAID 1/0 can be used.
The I/O requirements need to be clearly defined to size storage correctly. The RAID type chosen is determined by comparing availability and performance requirements. The small footprint and high I/O density of flash drives typically allow a smaller drive size, reducing drive rebuild times, which usually means RAID 5 is preferred over RAID 6.
There are many tools in the market that provide a comprehensive set of features to exercise and measure the
storage system and other components in the I/O stack. It is up to administrators to decide the testing
requirements and which tools work best for their environment. When choosing tools, consider the capabilities
in the following subsections. Several performance testing utilities are shown below:
• I/O subsystem
- dd
- Iozone (fs benchmarking)
- Iometer
- ORION
- winsat
- FIO (fs benchmarking)
- Vdbench
- bonnie
- SLOB
- Oracle Database I/O calibration feature
- DBMS_RESOURCE_MANAGER.CALIBRATE_IO
- dbbenchmark
- Benchmark Factory
- HammerDB: Supports TPC-C and TPC-H workloads
- Swingbench
- Simora: Mines Oracle SQL Trace files and generates SQL to be run to reproduce the load
- Oracle Real Application Testing: An enterprise database option from Oracle that records a
database load on the source system and replays it on a destination environment
- HP LoadRunner
To validate the I/O path, run a large block sequential read test using the following guidelines as a starting point, and vary them as necessary. If the throughput matches the expected throughput for the number of HBA ports in the server, the paths between the server and the Dell Unity array are set up correctly.
• In a dual-controller system, use at least one volume per controller to ensure that I/O is distributed across both controllers. Using both controllers more closely simulates real-world activity. For best results, use the same number of volumes on each controller. More LUNs might be better and may be required to achieve maximum performance.
• When performing I/O tests on any storage platform, use files that are larger than the Dell Unity controller cache. For more accurate results, use a file size that matches the amount of data being stored. If using larger files is not practical due to a large dataset, use a file size of at least 100 GB.
• Some I/O test tools (Oracle ORION is an example) generate files full of zeros. This behavior causes inaccurate results when testing. Avoid using test utilities that write zeros for drive validation, or configure the tool to avoid writing zeros.
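As an illustration, a large-block sequential read test of this kind could be sketched with FIO. The device path, block size, queue depth, and runtime below are assumptions to adjust for the environment; reads are nondestructive, but any write test would overwrite data on the device.
# fio --name=seqread --filename=/dev/mapper/ORA-DATA-00 --rw=read --bs=1M --direct=1 --ioengine=libaio --iodepth=16 --numjobs=2 --runtime=300 --time_based --group_reporting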
The purpose of this type of testing is to validate that the storage design will provide the required throughput
and IOPS with acceptable latency. It is important that the test does not exceed the designed capacity of the
array. For example, an array designed for a workload of 5,000 IOPS is likely to perform poorly with a workload
of 10,000 IOPS. If a test is generating a workload higher than the designed capacity, adjust the workload
being generated by reducing the number of threads, outstanding I/Os, or both.
The results of the Live Optics analysis provide an I/O target to simulate using these tests. To get an idea of
the performance capabilities of the array, run I/O tests with a range of I/O sizes commonly seen with Oracle.
When testing random I/O, test with I/O sizes of 8 KB, 16 KB, and 32 KB. When testing sequential I/O, test
with 8 KB, 16 KB, 32 KB, and 64 KB. Since processes like read-ahead scans and backups can issue larger sequential I/O, it is a good idea to also test block sizes larger than 32 KB. To truly test the array, the designed workload should be simulated at a minimum, and slightly higher if possible. To ensure the array has headroom for load spikes, test the throughput slightly beyond estimated production loads.
I/O simulation
The primary objective of I/O simulation is to stress the storage system. I/O simulation tools are typically easy
to use and configure because they do not require a fully configured database. These tools generally allow workloads to increase or decrease during the tests by specifying different parameters, such as the number of threads, outstanding I/Os, and I/O sizes.
Oracle ORION, Vdbench, and FIO are three I/O simulation tools that are free to download and use. Oracle ORION has a unique advantage over the others because it is explicitly designed to simulate Oracle database I/O workloads using the same I/O software stack as Oracle. It also provides both OLTP and OLAP simulation modes, which simplifies the setup and execution of the test. ORION is bundled with the Oracle database software and can be found in the $ORACLE_HOME/bin directory. See the appendix for references to these tools. For more information about how to configure and run ORION, see the chapter, Calibration with the Oracle Orion Calibration Tool, in the Oracle Performance Guide.
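For example, a minimal OLTP-style ORION run might look like the following. The .lun file name and device paths are illustrative; add write testing only on LUNs that contain no data, because ORION writes destroy existing content.
# cat dbdata.lun
/dev/mapper/ORA-DATA-00
/dev/mapper/ORA-DATA-01
# $ORACLE_HOME/bin/orion -run oltp -testname dbdata -num_disks 2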
Performance monitoring
Performance data can be monitored by the operating system, the Dell Unity system, and in Oracle databases.
Ideally, monitoring the performance continuously offers the most detail and allows in-depth analysis of the
environments. At a minimum, performance statistics should be captured for at least 24 hours and during the periods of heaviest activity. The following subsections describe popular software and
cloud-based platforms for monitoring and analyzing performance.
• sar (Linux)
• iostat (Linux)
• top (Linux)
• atop and netatop (Linux)
• collectl (Linux)
• Performance Monitor (Microsoft Windows)
Oracle Enterprise Manager (OEM) is a separate application offered by Oracle. It provides a centralized
management and monitoring platform for many Oracle applications and databases. Configuration and performance data are collected by Oracle agents running on individual hosts and stored in a common management database. OEM provides a wide range of performance and utilization charts and many other
advanced features to manage the environment.
Metric data ages over time and gets aggregated into longer sampling intervals. The data is kept for historical
referencing for up to 90 days.
Additional information can be found in the Dell Unity: Unisphere Overview document.
Additional information is available on the Live Optics for Service Providers page on the Dell website, and the software can be downloaded from the Live Optics website.
Dell CloudIQ
Dell CloudIQ is a software as a service (SaaS) application that is freely available. When it is enabled for the
Dell Unity storage system, it allows administrators to monitor multiple Dell Unity storage systems remotely.
CloudIQ provides continuous monitoring of performance, capacity, configuration, and data protection, and enables administrators to manage storage proactively by receiving advance notification of potential issues.
LUN ID information
To see if there are any LUNZ on the system, run the lsscsi command.
# lsscsi|egrep DGC
[13:0:0:0] disk DGC LUNZ 4201 /dev/sde
[13:0:1:0] disk DGC LUNZ 4201 /dev/sdaf
[14:0:0:0] disk DGC LUNZ 4201 /dev/sdd
[14:0:1:0] disk DGC LUNZ 4201 /dev/sdag
[snipped]
If the host detects LUN ID 0 and LUNZ, run rescan-scsi-bus.sh with the --forcerescan option. LUNZ will be removed, allowing the real LUN 0 to show up on the host. For example:
# /usr/bin/rescan-scsi-bus.sh --forcerescan
When the Dell Unity LUNs have non-zero IDs, use the -a option instead.
# /usr/bin/rescan-scsi-bus.sh -a
Note: Omitting the --forcerescan option might prevent the operating system from discovering LUN 0 because
of the LUNZ conflict.
The string returned by the scsi_id command is the WWN of the Dell Unity LUN prefixed with a 3, as shown in the following example.
36006016010e0420093a88859586140a5
# multipath -ll
mpatha (36006016010e0420093a88859586140a5) dm-0 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1
alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 13:0:1:0 sdaf 65:240 active ready running
| |- 14:0:1:0 sdag 66:0 active ready running
| |- 15:0:1:0 sdbv 68:144 active ready running
| |- 16:0:1:0 sdcx 70:80 active ready running
| |- 17:0:1:0 sddz 128:16 active ready running
| `- 18:0:1:0 sdfb 129:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 13:0:0:0 sde 8:64 active ready running
|- 14:0:0:0 sdd 8:48 active ready running
|- 15:0:0:0 sdbh 67:176 active ready running
|- 16:0:0:0 sdcj 69:112 active ready running
|- 17:0:0:0 sddl 71:48 active ready running
`- 18:0:0:0 sden 128:240 active ready running
Multipathing
Multipathing is a software solution implemented at the host operating system level. While multipathing is
optional, it provides path redundancy, failover, and performance-enhancing capabilities. It is recommended to
deploy the solution in a production environment or any environments where availability and performance are
critical.
The native Linux multipath solution is supported and bundled with most popular Linux distributions in use
today. Because the software is widely and readily available at no additional cost, many administrators prefer it over third-party solutions.
Unlike the native Linux multipath solution, both Dell PowerPath and Symantec VxDMP provide extended
capabilities for some storage platforms and software integrations. Both solutions also offer support for
numerous operating systems in addition to Linux.
Only one multipath software solution should be enabled on the host and the same solution should be
deployed in a cluster on all cluster hosts.
Refer to the vendor's multipath solution documentation for more information. For information about operating
systems supported by Dell PowerPath, see the Dell Simple Support Matrix. The appendix provides links to additional resources for these solutions.
Connectivity guidelines
The following list provides a summary of array-to-host connectivity best practices. It is recommended to
review the documents, Configuring Hosts to Access Fibre Channel (FC) or iSCSI Storage and Dell Unity: High
Availability.
Configuration file
To ease deployment, native Linux multipath comes with a set of default settings that include storage models from different vendors, including the Dell Unity system. The default settings allow the software to work with the Dell Unity system without additional configuration. However, these settings might not be optimal for all situations and should be reviewed and modified if necessary.
The multipath daemon configuration file needs to be created on newly installed systems. A basic template can be copied from /usr/share/doc/device-mapper-multipath-<version>/multipath.conf to /etc/multipath.conf as a starting point. Any settings that are not defined explicitly in /etc/multipath.conf assume the default values. The full list of settings (explicitly set and default values) can be obtained with the command shown below. Specific Dell Unity settings can be found by searching for DGC in the output. The default settings generally work without any issues.
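One way to display the merged configuration is shown here (an illustrative invocation; multipath -t prints the built-in defaults in a similar form):
# multipathd show config | grep -A 20 DGC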
Creating aliases
It is generally a good idea to assign meaningful names (aliases) for the multipath devices though it is not
mandatory. For example, create aliases based on the application type and environment it is in. The following
snippet in the multipaths section assigns an alias of ORA-DATA-00 to the Dell Unity LUN with the WWN
36006016010e04200271a8a594a34d845.
multipaths {
multipath {
wwid "36006016010e04200271a8a594a34d845"
alias ORA-DATA-00
}
}
I/Os are sent down the optimized paths when possible. If I/Os are sent down the nonoptimized paths, the peer
SP redirects the I/Os to the primary SP through the internal bus. When the Dell Unity system senses large
amounts of non-optimized I/Os, it automatically trespasses the LUN from the primary SP to the peer SP to
optimize the data paths.
LUN partition
A LUN can be used as a whole partition or it can be divided into multiple partitions. Certain applications, such
as Oracle ASMLib, recommend partitioning over whole LUNs. Dell Technologies recommends configuring
whole LUNs without partitions wherever appropriate because it offers the most flexibility for configuring and
managing the underlying storage.
See sections Oracle Automatic Storage Management and File systems for guidance on choosing a strategy to grow storage space.
Partition alignment
When partitioning a LUN, it is recommended to align the partition on the 1M boundary. Either fdisk or parted
can be used to create the partition. However, only parted can create partitions larger than 2 TB.
# parted /dev/mapper/orabin-std
GNU Parted 3.1
Using /dev/mapper/orabin-std
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 1MiB 100%
(parted) quit
Information: You may need to update /etc/fstab.
# mkfs.ext4 /dev/mapper/orabin-std1
To set the I/O scheduler persistently, create a udev rule that updates the devices. See the section Linux dynamic device management (udev) for more information about using udev to set persistent ownership and permission.
The following example shows setting the deadline I/O scheduler on all /dev/sd* devices. The rule is appended
to the 99-oracle-asmdevices.rule file.
# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
ACTION=="add|change", KERNEL=="sd*", RUN+="/bin/sh -c '/bin/echo deadline >
/sys$env{DEVPATH}/queue/scheduler'"
Define a rule for each Dell Unity LUN using its unique WWN. With this approach, each LUN requires a udev rule. Rules are defined in /etc/udev/rules.d/99-oracle-asmdevices.rules. The following example shows a udev rule that sets grid:oinstall ownership and 660 permission on a dm (multipath) device that matches the WWN 36006016010d04200b584ce59557ba84a.
# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="dm-*",PROGRAM=="/lib/udev/scsi_id --whitelisted --
device=/dev/$name",RESULT=="36006016010d04200b584ce59557ba84a",ACTION=="add|chan
ge",OWNER="grid",GROUP="oinstall",MODE="0660"
…
The udev rule can be simplified if multipath device aliases are created with a consistent string pattern. For example, use the prefix ORA- in all multipath device aliases for LUNs intended for Oracle ASM. A single udev rule can then be used to set ownership and permission on all ORA* multipath devices. See section Creating aliases for information about creating a multipath device alias.
# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="dm-*",ENV{DM_NAME}=="ORA*",OWNER="grid",GROUP="oinstall",MODE="0660"
The advantage of this approach is that only multipath.conf needs to be updated when new LUNs are added to
the system for Oracle ASM.
Oracle ASMLib
Oracle ASMLib simplifies storage management and reduces kernel resource usage. It provides device file name, ownership, and permission persistency and reduces the number of open file handles required by the database processes. No udev rules are required when ASMLib is used.
When LUNs are initialized with ASMLib, special device files are created in the /dev/oracleasm/disks folder
with proper ownership and permission automatically. When the system reboots, the ASMLib driver restarts
and re-creates the device files. ASMLib consists of three packages:
• oracleasm-support-version.arch.rpm
• oracleasm-kernel-version.arch.rpm
• oracleasmlib-version.arch.rpm
Each Linux vendor maintains its own oracleasm kernel driver (oracleasm-kernel-version.arch.rpm). With Oracle Linux, the kernel driver is already included with the Oracle Linux Unbreakable Enterprise Kernel. For more
information about ASMLib and to download the software, go to
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html.
The ownership of the ASMLib devices is defined in the /etc/sysconfig/oracleasm configuration file, which is generated by running /etc/init.d/oracleasm configure initially. Update the configuration file, if necessary, to reflect the proper ownership and the disk scanning order.
# cat /etc/sysconfig/oracleasm
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
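The ownership and scan-order entries that the next paragraph refers to would typically resemble the following (values shown are illustrative):
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"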
This configuration file indicates grid:oinstall for the ownership, and it searches for multipath devices (dm) and
excludes any single path devices (sd). If PowerPath devices are used, set
ORACLEASM_SCANORDER="emcpower".
Note: The asterisk (*) cannot be used in the value for ORACLEASM_SCANORDER and
ORACLEASM_SCANEXCLUDE.
Oracle requires the LUNs to be partitioned for ASMLib use. First, create a partition with parted, and then use
oracleasm to label the partition. ASMLib does not provide multipath capability and relies on native or third-
party multipath software to provide the function. The following example shows creating an ASMLib device on
a partition of a Linux Multipath device. The oracleasm command writes the ASMLib header to
/dev/mapper/mpathap1 and generates the ASMLib device file in /dev/oracleasm/disks/DATA01 with
ownership as indicated in the /etc/sysconfig/oracleasm file.
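A sketch of the labeling step described above (run as root; the partition and disk names are illustrative):
# oracleasm createdisk DATA01 /dev/mapper/mpathap1
Writing disk header: done
Instantiating disk: done
# ls -l /dev/oracleasm/disks/DATA01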
Oracle ASM Filter Driver (ASMFD)
In a cluster environment without ASMFD, when a cluster node is fenced, the host must be rebooted to ensure the integrity of the data. With ASMFD, the fenced node does not need to be rebooted. It is possible to restart the clusterware stack instead, which reduces the time to recover the node.
Unlike ASMLib, ASMFD comes with the Grid Infrastructure software and there is no additional software to
download. Starting with Oracle ASM 12c Release 2, the installation and configuration for Oracle ASMFD have
been simplified by integrating the option into the Oracle Grid Infrastructure installation. Administrators need to
select the option Configure Oracle ASM Filter Driver during the Grid installation.
The installation of ASMFD automatically creates a udev rule file in /etc/udev/rules.d/53-afd.rules that sets the afd devices with the proper ownership and permission. Do not attempt to modify or delete this file directly. Use the asmcmd afd_configure command to make updates instead.
# cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="oinstall", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="oinstall", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="oinstall", MODE="0664"
Either whole LUNs or LUN partitions can be used for ASMFD devices. Dell Technologies recommends using
whole LUNs because of certain restrictions with partitions which affect database availability during storage
expansion. See section Expand Oracle ASM storage for more details.
The following example shows creating an ASMFD device on a Linux multipath device. The asmcmd
afd_label command writes the ASMFD header to /dev/mapper/mpathb and generates the ASMFD device
file in /dev/oracleafd/disks/DATA01. The udev rule ensures the afd devices are set to grid:oinstall and
0664 permission.
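A sketch of the commands described above (the device and label names are illustrative):
# asmcmd afd_label DATA01 /dev/mapper/mpathb
# asmcmd afd_lslbl /dev/mapper/mpathb
# ls -l /dev/oracleafd/disks/DATA01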
The other advantage of using ASMFD is that it supports thin-provisioned disk groups starting in Oracle release 12.2.0.1.
To find out which OS platforms ASMFD is supported on, see Oracle KB Doc ID 2034681.1 at Oracle Support.
For more information on installing and configuring ASMFD, refer to the Oracle Automatic Storage
Management Administrator’s Guide.
• For ultimate flexibility and maintaining configuration consistency, create separate disk groups for each
of the following:
- Create a disk group for the Oracle Cluster Registry (OCR) and voting files.
- Create a disk group for Grid Infrastructure Management Repository (GIMR).
- Use one or more disk groups for database data files for each database.
- Use a disk group for a fast recovery area for each database.
- Configure a database which can span across multiple disk groups but with each disk group
mounted and used by one database exclusively. This provides the ability to independently
optimize the storage and snapshot configuration for each individual database.
• Create LUNs with the same capacity and the same services (such as compression, consistency group, and snapshot schedule) in the same disk group.
• Use fewer but larger LUNs to reduce the number of objects to be managed
• Create a minimum of two LUNs for each disk group. Distribute the LUNs evenly on both Dell Unity
storage processors to allow even I/O distribution to both processors, hence, maximizing the
performance and I/O bandwidth for the environment.
• To take an array-based snapshot on a multivolume Oracle database, ensure that all LUNs belonging
to the same database are snapped together. To snap the LUNs together, group the LUNs in a
Consistency Group (see more information in section Consistency group).
• While ASM can provide software-level mirroring, it is not necessary because data protection is integrated in Dell Unity RAID protection. Using External Redundancy for ASM disk groups enables substantial storage savings, reduces overall IOPS from ASM, and results in better I/O performance.
• For best storage efficiency, create thin-provisioned LUNs in the Dell Unity system for ASM use. When creating datafiles on ASM disk groups, administrators can set an initial size for each datafile and specify the autoextend clause to include an extent size for growth. The Dell Unity system allocates storage for the initial datafile size as the data is written to the datafiles. When needed, more space is allocated in increments of the autoextend size. An example of the CREATE TABLESPACE statement is shown in the following:
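A statement of that form might look like this (the tablespace name, disk group, and sizes are illustrative):
SQL> CREATE TABLESPACE demots DATAFILE '+DATADG' SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;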
• By default, each LUN has unlimited I/O limits in the Dell Unity system. When a database requires
higher performance and another does not, consider creating different host I/O limit policies in Dell
Unisphere that limit I/O performance based on IOPS and bandwidth. Assign the policy to the LUNs
corresponding to the level of performance required. The host I/O limit is applied on the LUN level.
• On Oracle 12c releases, ASMFD supports thin-provisioned ASM disk groups. The feature allows unused space to be released back to the Dell Unity system after deleting or shrinking the datafiles. To enable the feature, set the THIN_PROVISIONED attribute to 'TRUE' on the disk group. For example:
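An illustrative statement (the disk group name is an assumption):
SQL> ALTER DISKGROUP DATADG SET ATTRIBUTE 'thin_provisioned' = 'TRUE';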
• When ASM rebalances the disk group, data is moved to higher performing tracks of spinning disks
during the compact phase. Since the Dell Unity system virtualizes the physical storage devices, and
with the use of flash devices, there is no real benefit to compacting the data. In Oracle 12c, it is now possible to disable the compact phase on an individual disk group by setting the _rebalance_compact attribute to 'FALSE'.
For Oracle pre-12c releases, _rebalance_compact can only be disabled at the ASM instance level, which affects all disk groups. For database environments that have different storage types, turning off the compact phase might have adverse performance implications.
For more information about ASM compact phase rebalancing, see Oracle KB Doc ID 1902001.1 on
Oracle Support.
Table 5 demonstrates an example of how ASM disk groups are organized. Figure 2 illustrates the storage
layout on the database, ASM disk group, and Dell Unity system levels.
Consistency group
For performance reasons, it is common for a database to span across multiple LUNs to increase I/O
parallelism to the storage devices. Dell Technologies recommends grouping the LUNs into a consistency
group for a database to ensure data consistency when taking storage snapshots. The Dell Unity system
snapshot feature is a quick and space-efficient way to create a point-in-time snapshot of the entire database.
Other sections of this paper discuss using Dell Unity system snapshots and thin clones to reduce database recovery time and create space-efficient copies of the database.
In Figure 2, for example, the RAC database consists of disk group +DATADG and +FRADG. All ASM
volumes in those disk groups are configured in a single consistency group, testdb_cg. The single instance
database consists of disk groups +DATA2DG and +FRA2DG. The ASM devices of both disk groups are
configured in a consistency group, devdb_cg.
The consistency group feature allows taking a database-consistent snapshot across multiple LUNs. On the
database side, use the ALTER DATABASE BEGIN BACKUP clause before the snapshot is taken and END
BACKUP clause after the snapshot is taken.
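A minimal sketch of that sequence, with the consistency group snapshot taken in Unisphere, Unisphere CLI, or the REST API between the two statements:
SQL> ALTER DATABASE BEGIN BACKUP;
-- take the consistency group snapshot of all LUNs belonging to the database here
SQL> ALTER DATABASE END BACKUP;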
Note: Storage snapshots taken on a multiple-LUN database without a consistency group might be
irrecoverable by Oracle during database recovery.
The following subsections discuss the different ways to increase ASM storage capacity. Each method has its
pros and cons.
Since ASM automatically rebalances the data after new LUNs are added, it is recommended to add
the LUNs in a single operation to minimize the amount of rebalancing work. The following example
shows the ALTER DISKGROUP ADD DISK statement to add multiple devices to a disk group.
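The statement might look like the following (the disk group name and device paths are illustrative, based on the alias convention used earlier in this paper):
SQL> ALTER DISKGROUP DATADG ADD DISK '/dev/mapper/ORA-DATA-02', '/dev/mapper/ORA-DATA-03' REBALANCE POWER 10;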
9. If the existing LUNs are in a consistency group, add the new LUNs to the same consistency group.
Note: Adding or removing LUNs in a consistency group is not allowed when there are existing snapshots of
the consistency group. To add or remove LUNs in a consistency group, delete all snapshots and retry the
operation.
Note: Resizing LUNs on the operating system can cause loss of data or corruption. It is recommended to
back up all data before attempting to resize the LUNs.
The following outlines the general steps to resize ASM devices online without ASMFD and ASMLib.
1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems and refresh partition table on each LUN path and reload
multipath devices.
# fdisk -l /dev/emcpowerc
6. To determine the maximum size of the LUN, run asmcmd lsdsk and extract the OS_MB value. Use
this value with the ALTER DISKGROUP RESIZE DISK clause.
# asmcmd lsdsk -k
OS_MB represents the new maximum size ASM can expand to.
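The resize statement itself follows the same pattern shown in the ASMFD procedure later in this section; for example, with illustrative disk group and disk names:
SQL> ALTER DISKGROUP DATADG RESIZE DISK DATA03 SIZE $OS_MB REBALANCE POWER 10;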
8. Verify the new ASM device size. After the resize operation completes, run asmcmd lsdsk to confirm
the Total_MB value matches OS_MB value.
# asmcmd lsdsk -k
Inst_ID Total_MB Free_MB OS_MB Name Failgroup Site_Name
Site_GUID Site_Status Failgroup_Type Library
Label Failgroup_Label Site_Label UDID Product Redund Path
1 409600 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3
2 409600 409464 409600 TEST3DG_0000 TEST3DG_0000
00000000000000000000000000000000 REGULAR System
UNKNOWN /dev/mapper/ORA-TEST3
Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.
# asmcmd lsdg
Inst_ID State Type Rebal Sector Logical_Sector Block AU
Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
1 MOUNTED EXTERN N 512 512 4096 4194304
409600 409464 0 409464 0
N TEST3DG/
2 MOUNTED EXTERN N 512 512 4096 4194304
409600 409464 0 409464 0
N TEST3DG/
The following outlines the general steps to resize ASM devices with ASMFD online.
1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems and refresh partition table on each LUN path and reload
multipath devices.
4. Reload multipath devices.
# fdisk -l /dev/emcpowerc
# asmcmd afd_refresh
7. To determine the maximum size of the LUN, run asmcmd lsdsk and extract the OS_MB value. Use
this value with the ALTER DISKGROUP RESIZE DISK clause.
# asmcmd lsdsk -k
SQL> ALTER DISKGROUP DATADG RESIZE DISK DATA03 SIZE $OS_MB REBALANCE POWER
10;
9. Verify the new size in the ASM device and disk group. After the resize operation completes, run
asmcmd lsdsk to confirm the Total_MB value matches OS_MB value.
# asmcmd lsdsk -k
Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.
# asmcmd lsdg
As mentioned previously in this section, the online resize capability of ASMFD is available in Oracle 12.2.
With Oracle 12.1, either restart the host to refresh the LUN size, or restart the clusterware, ASM instance, and
the AFD driver on the host to minimize the outage window. In a cluster environment, refreshing the LUN size
can be done in a rolling fashion to further minimize the impact of the outage.
# afdload stop
# afdload start
# asmcmd afd_scan
6. Restart CRS.
Another alternative to restarting the node or software is to unlabel and label the AFD devices. The database
associated with the devices must be stopped, and the disk groups and devices must be unmounted before
they can be relabeled. This approach increases the risk of data loss and corruption and requires extra
caution.
The following outlines the general steps to resize ASM devices with ASMLib online.
1. Take manual snapshots of LUNs that are going to be expanded. See section Snapshots for more
information about taking snapshots and recovering from snapshots.
2. Expand the size of the existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems, refresh the partition table on each LUN path, and reload
the multipath devices.
# rescan-scsi-bus.sh --resize
# fdisk -l /dev/emcpowerc
# parted /dev/mapper/TEST67_ASMLIB rm 1
# rescan-scsi-bus.sh --resize
# partprobe /dev/mapper/TEST67_ASMLIB
# parted /dev/mapper/TEST67_ASMLIB u GB p
# oracleasm scandisks
# oracleasm listdisks
14. Run asmcmd lsdsk to extract the OS_MB value to determine the maximum LUN size.
16. Verify the new size in the ASM device and disk group. After the resize operation completes, run
asmcmd lsdsk to confirm the Total_MB value matches the OS_MB value.
# asmcmd lsdsk -k
Run asmcmd lsdg to confirm the Total_MB value on the disk group has increased.
# asmcmd lsdg
Space reclamation
The Dell Unity system supports the SCSI TRIM/UNMAP feature, which allows the operating system to inform the array which data blocks are no longer in use and can be released for other uses. For space reclamation to work, the LUNs must be thin provisioned in the Dell Unity system, and the Linux kernel and Oracle ASM must also support the feature. The TRIM/UNMAP feature was introduced in Linux kernel 2.6.28-25. With Oracle 12.2 ASMFD, thin-provisioned ASM disk groups allow deleted space in datafiles to be reclaimed.
To verify the availability of the feature on the Linux operating system, query
/sys/block/$disk/queue/discard_granularity. If the value is zero, it means the device does not support
discard functionality. For example, since device sdx has a non-zero discard_granularity value, its free space
will be reclaimed with TRIM/UNMAP.
# cat /sys/block/sdx/queue/discard_granularity
8192
3. After deleting objects in a table, run the ALTER TABLE ... SHRINK SPACE command to repack the rows, move the high-water mark, and release unused extents in the datafiles (see the example after this procedure).
4. Determine the HWM of each data file and prepare the resize statements using the following script provided by Oracle. The original post can be found in the following Oracle article:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:766625833673
# cat find_datafile_hwm.sql
set verify off line 200 pages 100
column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report
select file_name,
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
ceil( blocks*&&blksize/1024/1024) currsize,
ceil( blocks*&&blksize/1024/1024) -
ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)
/
VALUE
--------------------------------------------------------------------------
8192

                                               Smallest
                                                   Size  Current    Poss.
FILE_NAME                                         Poss.     Size  Savings
---------------------------------------------- -------- -------- --------
+DATADG/DEMODB/DATAFILE/system.257.952265543        841      850        9
+DATADG/DEMODB/DATAFILE/demots.289.952606307     16,328   17,014      686
+DATADG/DEMODB/DATAFILE/undotbs1.259.952265593   12,932   24,708   11,776
+DATADG/DEMODB/DATAFILE/undotbs2.265.952265669       37    1,024      987
+DATADG/DEMODB/DATAFILE/demots.337.952606345     17,220   17,652      432
+DATADG/DEMODB/DATAFILE/demots.320.952606331     16,712   17,462      750
+DATADG/DEMODB/DATAFILE/sysaux.258.952265577      2,008    2,030       22
+DATADG/DEMODB/DATAFILE/users.260.952265593           1        5        4
                                                                  --------
sum                                                                 14,666

8 rows selected.
CMD
-------------------------------------------------------------------------------------
alter database datafile '+DATADG/DEMODB/DATAFILE/system.257.952265543' resize 841m;
alter database datafile '+DATADG/DEMODB/DATAFILE/demots.289.952606307' resize 16328m;
alter database datafile '+DATADG/DEMODB/DATAFILE/undotbs1.259.952265593' resize 12932m;
8 rows selected.
5. To resize the data files, copy and run the ALTER DATABASE ... RESIZE statements associated with the data files to be shrunk. For example, to shrink only the demots tablespace that resides in the DATADG disk group, run only the statements for the demots data files.
6. Manually rebalance the disk group.
7. Confirm the release of the space in Unisphere by observing the Capacity and Space Used
information on the LUN properties page. It might take several minutes to see the changes depending
on the amount of data and how busy the system is at the time.
Note: Deleted space is not released until either the data files are deleted or shrunk and a rebalance operation
is run against the ASM disk groups.
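The following is a minimal sketch of steps 3, 5, and 6 for a single table and data file. The table name DEMO_TAB is an illustrative placeholder; the data file name and size are taken from the report above, row movement must be enabled before a shrink, and the REBALANCE statement is run against the ASM instance.
SQL> ALTER TABLE demo_tab ENABLE ROW MOVEMENT;
SQL> ALTER TABLE demo_tab SHRINK SPACE;
SQL> ALTER DATABASE DATAFILE '+DATADG/DEMODB/DATAFILE/demots.289.952606307' RESIZE 16328M;
SQL> ALTER DISKGROUP DATADG REBALANCE POWER 10;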
Linux LVM
Linux Logical Volume Manager (LVM) is a common general-purpose storage manager in all popular Linux
distributions. Since ASM does not support storing Oracle software, the software must be installed on a Linux
file system that can be configured on top of LVM. LVM mirroring is not necessary because Dell Unity systems
provide storage protection. Multiple LUNs can be grouped into a single LVM volume group, and logical volumes can then be created that span these LUNs. When taking Dell Unity system snapshots on a multi-LUN volume group, ensure the LUNs are configured in a consistency group.
A file system is created on a logical volume where the Oracle binary is installed. More space can be added to
the volume groups, logical volumes, and file systems either by adding new LUNs or by expanding existing
LUNS in the volume groups. Once volume groups and logical volumes are expanded, the file systems can be
resized to the newly added space. LVM and many file systems, such as ext4 and xfs, allow on-demand
expansion without taking down the applications.
Unlike ASM, LVM requires administrators to configure striping manually, and existing data is not rebalanced when the volume group is extended.
LVM guidelines
• Use whole LUNs for volume groups.
• Create a dedicated volume group for storing each copy or version of Oracle software. A dedicated
volume group simplifies management and allows greater flexibility on array-based snapshots on
individual Oracle software copies.
• Use two or more LUNs in a volume group when performance is of concern.
• Configure all LUNs with the same size in the same volume group and group them in the same
consistency group.
• In an Oracle RAC configuration, use a dedicated local volume group for each cluster node.
The following example shows the tasks to create an Oracle software file system on LVM:
Note: If --dataalignment is not specified when initializing the LUN with pvcreate, mkfs might report an alignment warning. Reinitialize the LUN with --dataalignment to ensure proper alignment.
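Because the original command listing is not reproduced here, the following is a minimal sketch of those tasks. It assumes a single multipath LUN named /dev/mapper/ORA-BIN (an illustrative device name), the volume group and logical volume names from Table 7, and an xfs file system; adjust names, sizes, and the alignment value for the environment.
# pvcreate --dataalignment 1m /dev/mapper/ORA-BIN
# vgcreate vgoracle122 /dev/mapper/ORA-BIN
# lvcreate -n lv-oracle-bin -l 100%FREE vgoracle122
# mkfs.xfs /dev/vgoracle122/lv-oracle-bin
# mkdir -p /u01/app/oracle/product/12.2.0
# mount /dev/vgoracle122/lv-oracle-bin /u01/app/oracle/product/12.2.0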
File systems
A local file system is preferred for storing Oracle software and diagnostic logs. Datafiles can be stored in local file systems, but it is recommended to use Oracle ASM on block devices or Oracle DirectNFS. Later sections discuss using the Dell Unity NFS service with Oracle DirectNFS.
The Dell Unity system supports a wide range of file systems on Linux. This section focuses on two popular
and stable file systems: ext4 and xfs.
For additional information about supported file systems and feature limitations, see the Dell Host Connectivity
Guide for Linux.
It can be beneficial to separate the Oracle software and Oracle diagnostic logs. To separate software and
logs, create a separate volume group or assign a different LUN to store Oracle diagnostic log files. The
diagnostic logs can consume a large amount of space in a short time. Isolating the logs in a different file system reduces the risk of these logs filling up the storage space and affecting the operation of the software. Since the diagnostic logs are not mission critical to the software operation, it is not essential to enable snapshots on the LUNs used by the logs. The diagnostic logs are also good candidates for compression to reduce storage consumption. Table 7 shows an example of using separate file systems for
software and diagnostic logs.
An example of file system layout for Oracle software and diagnostic logs
Volume group   Logical volume   File system mount point           Dell Unity snapshot   Dell Unity compression
vggrid         lv-grid-bin      /u01                              Enable                Disable
vgoracle121    lv-oracle-bin    /u01/app/oracle/product/12.1.0    Enable                Disable
vgoracle122    lv-oracle-bin    /u01/app/oracle/product/12.2.0    Enable                Disable
vgoraclelog    lv-grid-log      /u01/app/grid/diag                Disable               Enable
               lv-oracle-log    /u01/app/oracle/diag              Disable
• Identify the file system by its UUID or LVM LV device in the /etc/fstab file. Avoid using any non-
persistent device paths such as /dev/sd*.
• Query the UUID with the blkid command.
# blkid /dev/vgoracle/lv-oracle-rac-home
/dev/vgoracle/lv-oracle-rac-home: UUID="83cf5726-f842-448b-a143-
5f77eb0d9b37" TYPE="xfs"
• Include discard in the mount options to enable space reclamation support for the file system. More information is provided in the Space reclamation section.
• Include nofail in the mount options if the Linux operating system experiences mount issues during system boot. nofail prevents a failed mount from interrupting the boot process and requiring manual intervention.
• For the xfs file system, disable the boot-time file system check (fsck) in /etc/fstab because xfs does not perform any check or repair automatically during boot. The xfs journaling feature ensures file system integrity and leaves data in a consistent state after an abrupt shutdown. If a manual check or repair is necessary, use the xfs_repair utility to repair a damaged file system.
• Set a value of 0 in the sixth field of /etc/fstab to disable the fsck check. Here is an example of an xfs file system entry in /etc/fstab:
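The entry itself is not reproduced here; the following is a minimal sketch that reuses the UUID returned by the blkid command shown above. The mount point and options are illustrative; note the 0 in the sixth field.
UUID=83cf5726-f842-448b-a143-5f77eb0d9b37  /u01/app/oracle/product/12.2.0  xfs  defaults,nofail,discard  0 0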
The following outlines the general steps to extend a file system after expanding its underlying LUNs.
1. Take manual snapshots of the LUNs that are going to be expanded. See section Snapshots for more information about taking snapshots and recovering from snapshots.
2. Expand the size of the existing LUNs in Unisphere.
3. Perform a SCSI scan on the host systems, refresh the partition table on each LUN path, and reload the multipath devices.
# rescan-scsi-bus.sh --resize
6. Extend the file system size to the maximum size, automatically and online.
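A minimal sketch of steps 3 and 6 for an LVM-backed xfs file system, assuming the expanded LUN is the physical volume /dev/mapper/ORA-BIN in volume group vgoracle122 (names are illustrative):
# rescan-scsi-bus.sh --resize
# multipath -r
# pvresize /dev/mapper/ORA-BIN
# lvextend -l +100%FREE /dev/vgoracle122/lv-oracle-bin
# xfs_growfs /u01/app/oracle/product/12.2.0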
Space reclamation
For file system types that support the online SCSI TRIM/UNMAP command, enable the discard mount option
in /etc/fstab or include -o discard in the manual mount command. The discard option allows space to be
released back to the storage pool in the Dell Unity system when deleting files in the file system.
Administrators should review the file system documentation to confirm the availability of the features.
The LUNs must be thin provisioned in Dell Unity storage system for space reclamation to work. As new data
is written to the file system, space is allocated in the Dell Unity system. When files are deleted from the file
system, the operating system informs the Dell Unity system which data blocks can be released. The release
of storage is automatic and requires no additional steps. To confirm the release of space in the Dell Unity
system, monitor the Total Pool Space Used on the LUN properties page in Unisphere.
Dell Unity x80F storage systems support NAS connections on multiple 10 GbE and 25 GbE ports. In an Oracle NFS environment, 25 Gb/s is recommended for the best performance. If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-end network path to provide the best performance.
When using Oracle Direct NFS (dNFS), it is recommended to configure Link Aggregation Control Protocol
(LACP) across the same multiple Ethernet ports on each SP. Link aggregation provides path redundancy
between clients and NAS servers. Combine LACP with redundant switches to provide the highest network
availability. LACP can be configured across all available Ethernet interfaces and between the I/O modules.
See Figure 30, Figure 31, and Figure 32 for examples.
For additional information pertaining to this section, see the Dell Unity: NAS Capabilities, Dell Unity: Best
Practices Guide, and Dell Unity: Service Commands documents.
If multiple NAS servers are required for multiple Oracle environments, load-balance the NAS servers so that the front-end NFS I/O is distributed roughly evenly between the SPs. Do not overprovision either SP, so that in the event of a failover the peer SP does not become overloaded.
Because each NAS server is logically separate, NFS clients of one NAS server cannot access data on
another NAS server. Logically separate NAS servers can provide database isolation and protection across
multiple NFS clients (database servers). To create a NAS server, in Dell Unisphere select File > NAS
Servers > + and supply the necessary information as shown in the following screens.
When creating a NAS server for an Oracle database, enable NFSv4 if possible, and do not set up the UNIX Directory Service and NAS server DNS if they are not needed. After a NAS server is created, the Dell Unity NFS file systems can be created, and then Dell Unity NFS shares can be created.
NAS server interfaces can either be configured as production, or backup and DR testing interfaces. The type
of interface dictates the type of activity that can be performed. Table 8 displays the characteristics of the
interface types.
If throughput is constrained when using one Ethernet interface, consider configuring multiple Ethernet ports for the NAS server by selecting File > NAS Servers > (select the NAS server) > Network > + and adding additional Ethernet interfaces.
To create a file system in Unisphere, select File > File Systems > + and supply the desired configuration.
Regarding the Oracle database files, the NFS file system can host Oracle datafiles that exist on ASM, file
system, or both. See Figure 10 and Figure 11.
Scalability
Dell Unity file systems provide scalability in several areas, including maximum file system size, which makes
Dell Unity storage ideal for Oracle environments. Dell Unity OE version 4.2 increases the maximum file
system size from 64 TB to 256 TB for all file systems. File systems can also be shrunk or extended to any
size within the supported limits. Dell Technologies recommends configuring storage objects that are 100 GB
at a minimum and preferably 1 TB in size or greater.
Storage efficiency
Dell Unity storage supports thin-provisioned file systems. Starting with Dell Unity OE version 4.2, Unisphere
can also create thick file systems. When using Dell Unity file storage with Oracle, consider using thin-
provisioned file systems. Dell Unity also provides increased storage flexibility with manual or automatic file
system extension and shrinking with reclaim.
Quotas
Dell Unity includes full-quota support to allow administrators to limit the amount of space that can be
consumed from a user of an NFS file system or directory. When working with Oracle, quotas are not usually
necessary. If deciding to use quotas, carefully consider their impact on managing the Oracle environment.
NFS protocol
Dell Unity storage supports NFSv3 through NFSv4.1, including secure NFS.
All Dell Unity OE versions support Oracle dNFS in single-node configurations. Starting with OE version 4.2,
Oracle Real Application Clusters (RAC) are also supported. To use Oracle RAC, the nfs.transChecksum
parameter must be enabled. This parameter ensures that each transaction carries a unique ID and avoids the
possibility of conflicting IDs that result from the reuse of relinquished ports.
For more information about NAS server parameters and how to configure them, see the Dell Unity Service
Commands document.
NFSv4 is a version of the NFS protocol that differs considerably from previous implementations. Unlike NFSv3, NFSv4 is a stateful protocol: it maintains session state and does not treat each request as an independent transaction that requires no preexisting context. NFSv4 also handles network traffic in the underlying transport protocol rather than at the application layer as NFSv3 does, which can reduce the overall load on the Oracle database server (NFS client). Because of these improvements, NFSv4 is preferred over NFSv3.
While Dell Unity storage fully supports most of the NFSv4 and v4.1 functionality described in the relevant
RFCs, directory delegation and pNFS are not supported. Therefore, do not configure Oracle to use parallel
dNFS (known as pNFS). For increased performance, consider using NFSv4 and Oracle Direct NFS (dNFS)
with multiple network interfaces for load-balancing purposes.
Sharing protocols
When defining the NFS share name, ensure Allow SUID is selected. Selecting Allow SUID is required for
Oracle software mount points.
For NFS shares intended for Oracle, set the NFS export options for the NFS share by setting Default Access
to Read/Write, allow Root.
The following showmount command only illustrates its usage on the first IP in the list of Exported Paths.
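The command itself is not reproduced here; a minimal sketch, assuming the first exported path is served by the NAS server interface XX.XX.XX.91 used in the mount command below:
# showmount -e XX.XX.XX.91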
mkdir /oraasmnas
chmod 770 /oraasmnas
chown grid:oinstall /oraasmnas
mount -o
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas
4. Change the permissions and ownership of the root directory on the NFS share:
5. Create the raw files for the ASM disk groups and set their permissions and ownership.
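The commands for steps 4 and 5 are not reproduced here; a minimal sketch, assuming the share is mounted at /oraasmnas and a single 100 GB file named asmdisk01 is created for the ASM disk group (the file name and size are illustrative):
# chown grid:oinstall /oraasmnas
# chmod 770 /oraasmnas
# dd if=/dev/zero of=/oraasmnas/asmdisk01 bs=1M count=102400
# chown grid:oinstall /oraasmnas/asmdisk01
# chmod 660 /oraasmnas/asmdisk01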
NFS traffic
Generally, NFS traffic can be classified as either control/management traffic or I/O traffic on application data. Regardless of whether the Oracle ODM NFS client library is enabled, the Linux kernel NFS client (kNFS) driver manages the control/management traffic of NFS devices. When the ODM library is enabled, the Oracle environment is said to be using Oracle Direct NFS (dNFS), and all database I/O (NFS data traffic) flows through the dNFS driver. When the ODM library containing the embedded Oracle NFS client is disabled, all database I/O flows through the kNFS client driver. Control/management operations handled by kNFS include:
• get attribute
• set attribute
• access
• create
• mkdir
• rmdir
• mount
• umount
While Dell Unity 4.2, Oracle 12cR1, and 12cR2 dNFS all support NFSv3 and the stateful NFSv4 and NFSv4.1
protocol, Dell Unity does not provide functionality for pNFS. Therefore, do not configure pNFS in Oracle.
It is recommended to use dNFS if NFS storage devices are used so that the performance optimizations built
into Oracle can be exploited.
Benefits of dNFS
The advantage of using Oracle dNFS lies in the fact that it is part of the Oracle database kernel. dNFS
services all I/O to NFS storage devices. dNFS gives Oracle the ability to manage the best possible
configuration, tune itself, use Oracle buffer cache, and use available resources for optimal multipath NFS data
traffic I/O.
Table 9 provides examples of different Oracle directories that could reside on a NFS share. Once NFS shares
are identified for Oracle use, create the necessary mount points for the NFS shares and create the NFS
shares in Dell Unity storage. Also, set the privileges, owner, and group of the Linux mount points and root
directory on the NFS share per Oracle requirements.
/u01/app/oraInventory
$ORACLE_BASE/<srv>/oraInventory
Oracle home: $ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1/
   This directory contains the binaries, library, configuration files, and other files from a single release of one product and cannot be shared with other releases or other Oracle products.
Database file directory: $ORACLE_BASE/oradata/
   This is the location to hold the database. Use a different NFS mount point for database files to provide the ability to mount the NFS file system with different mount options, and to distribute database I/O.
Oracle recovery directory: $ORACLE_BASE/fast_recovery_area/
   Oracle recommends that recovery files and database files do not exist on the same file system.
Oracle product directory: $ORACLE_BASE/product
   This mount point can be used to install software from different releases, for example:
   /u01/app/oracle/product/12.1.0/dbhome_1/
   /u01/app/oracle/product/12.2.0/dbhome_1/
Oracle release directory: $ORACLE_BASE/product/<version>/
   This mount point can be used to install different Oracle products from the same version, for example:
   $ORACLE_BASE/product/11gR2/dbhome_1
   $ORACLE_BASE/product/11gR2/client_1
After the share is mounted using kNFS, dNFS mounts and unmounts the volume logically as needed. Since
dNFS uses a logical mount, after it unmounts the share, the volume can still be accessed through kNFS.
Having kNFS access on the volume after dNFS unmounts the share guarantees that other Oracle databases or users can use files from the share as necessary.
If NFS is used for database files, the NFS buffer size for reads (rsize) and writes (wsize) must be set to at
least 16,384. Oracle recommends a value of 32,768. These values are set in /etc/fstab, or when explicitly
mounting an NFS volume. Since a dNFS write size (v$dnfs_servers.wtmax) of 32,768 or larger is supported in
Dell Unity storage, dNFS does not fall back to the traditional kNFS kernel path. dNFS clients issue writes with
v$dnfs_servers.wtmax granularity to the NFS server.
The following lists the required mount options for NFS mount points used by Oracle standalone, Oracle RAC,
RMAN, and Oracle binaries running on Linux x86-64 version 2.6 and above. For more NFS share mount
options, see Oracle MOS note Mount Options for Oracle files for RAC databases and Clusterware when used
with NFS on NAS devices (Doc ID 359515.1) at Oracle Support.
Linux kernel 2.6 x86-64 NFS mount options for Oracle 12c RAC and Oracle standalone:
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers={3|4},timeo=600,actimeo=0
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers={3|4},timeo=600,actimeo=0,noac
1 The mount options are applicable only if ORACLE_HOME is shared. Oracle also recommends that the Oracle inventory
directory is kept on a local file system. If it must be placed on a NAS device, create a specific directory for each system to
prevent multiple systems from writing to the same inventory directory. Oracle clusterware is not certified on dNFS.
2 Do not replace tcp with udp. udp should never be used. dNFS cannot use an NFS server that has a write size (wtmax) of less than 32768. Set option vers to either 3 or 4 and ensure the NFS sharing protocol on the Dell Unity NAS server is set
accordingly. In 12cR2, both OCR and voting disks must reside in ASM. See Oracle MOS note 2201844.1 for additional
information. dNFS is RAC aware. Therefore, even though NFS is a shared file system, and NFS devices for Oracle must
be mounted with the noac option, dNFS automatically recognizes RAC instances and takes appropriate action for
datafiles without additional user configuration. This eliminates the need to specify noac when mounting NFS file systems
for Oracle datafiles or binaries. This exception does not pertain to CRS voting disks or OCR files on NFS. NFS file
systems hosting CRS voting disks and OCR files must be mounted with noac. Option noac should not be used for RMAN
backup set, image copies, and data pump dump files because RMAN and data pump do not check this option and
specifying it can adversely affect performance.
When configuring an Oracle RAC environment that uses NFS, ensure the entry in /etc/fstab is the same on
each node. The following snippet from /etc/fstab mounts an NFS mount point for ORACLE_HOME binaries
(/u01), and a database that will use ASM.
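The snippet itself is not reproduced here; a minimal sketch that uses the mount options listed above, assuming a NAS server at XX.XX.XX.91 exporting /ORA-BIN (an illustrative export name) for the Oracle binaries and /ORA-ASM-NFS for the ASM files. Choose the option string that matches the content being mounted and do not insert spaces between options:
XX.XX.XX.91:/ORA-BIN /u01 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0
XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0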
When adding multiple mount options for a specific mount point in /etc/fstab, do not insert spaces after options
because the operating system may not properly parse the options.
Mount options timeo, hard, soft, and intr control the NFS client behavior if the NFS server should become
temporarily unreachable. Whenever the NFS client sends a request to the NFS server, it expects the
operation to have finished after a given interval (specified in the timeout option). If no confirmation is received
within this time, a minor timeout occurs and the operation is retried with the timeout interval doubled. After
reaching a maximum timeout of 60 seconds, a major timeout occurs. By default, a major timeout causes the
NFS client to print a message to the console and start over with an initial timeout interval twice that of the
previous cascade. This cycle can potentially repeat indefinitely. Volumes that retry an operation until the server becomes available again are called hard-mounted.
See File system mount options for a description of the mount options used in this paper.
If NFS and network redundancy is a concern, all interfaces (database server, Ethernet switch, and Dell Unity
storage) used for NFS control traffic should be bonded. This bonded interface could be the bonded public
network, if it exists, or even the bonded interface for the RAC interconnect in a RAC environment.
Directing NFS control and data traffic to different NICs may not always be possible because of a limited
number of NICs, or infrastructure limitations. In such cases, it is possible to share an unbonded interface for
both NFS control and data traffic. However, that may cause network performance issues under heavy loads because the server does not perform network load balancing, and it limits database availability since there are no redundant NICs.
When using dNFS, Oracle supports one to five network paths for NFS traffic between a NAS server and NFS
client: one path for NFS control traffic and up to four paths for NFS data traffic. When using dNFS, use
multiple network paths. Ensure each NFS network path belongs to a subnet that is not being used for any
other NIC interface on the NFS client. Also ensure each NFS network path does not use the subnet of the
public network for NFS. Using unique subnets simplifies configuration of dNFS and ensures that dNFS
benefits are fully exploited.
Fewer available subnets may exist than intended dNFS paths. If so, dNFS paths on the NFS client can be set
up to use existing subnets already in use on the NFS client. However, using existing subnets already in use
on the NFS client requires additional configuration in:
• The operating system network layer (relaxing ingress filtering for multihomed networks and static
routing) and,
• Oracle (file oranfstab) if dNFS data traffic needs to be on one or more dedicated subnets.
This configuration prevents the operating system from determining the default dynamic route and allows multiple NIC interfaces in the same server to use the same subnet. See section IPv4 network routing filters for information about using multiple NIC interfaces in the same subnet.
If the operating system chooses the dynamic route, it will invariably use the first best-matched route possible
from the routing table for all paths defined. Usually, that route will be incorrect. Using an incorrect route
results in dNFS load balancing, scalability, and failover not working as expected. To ensure NFS data traffic
are working as expected when using multiple paths in the same subnet, configure static operating system
routing for each dNFS network path. See section Static routing for more information.
If different subnets are used for NFS traffic, routing will be taken care of automatically by the native network
driver and the default route entries in the routing table. Creating static routes are not necessary when using
different subnets for dNFS traffic.
All IP end-points between the NFS client and Dell Unity NAS server must appear in oranfstab when dNFS
data traffic is being isolated to one or more dedicated IP addresses. For additional information, see section
Database server: NFS client network interface configuration and Oracle dNFS configuration file: oranfstab.
Jumbo frames
Jumbo frames, which refers to raising the maximum transfer unit (MTU) from the default of 1,500 to
9,000 bytes, is advised for the entire network path: database servers (NFS client), Ethernet switch, and Dell
Unity storage. Using Jumbo frames allows the network stack to bundle transfers into larger frames and
reduce the TCP protocol overhead. The value used for any frame depends on the immediate needs of the
network session established between the NFS client and server. Raising the limit to 9,000 from end point to end point in the network path allows the session to take advantage of a wider range of frame sizes.
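A minimal sketch of enabling and verifying jumbo frames on a Linux NFS client interface; the interface name p1p1 and the Dell Unity target IP XX.XX.XX.87 are taken from the examples in this document, and the same MTU must also be set on the switch ports and the Dell Unity Ethernet ports:
# grep MTU /etc/sysconfig/network-scripts/ifcfg-p1p1
MTU=9000
# ifdown p1p1 && ifup p1p1
# ping -M do -s 8972 XX.XX.XX.87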
Single Ethernet path between the NFS client and Dell Unity NAS server for NFS traffic
Both NFS control/management and data traffic use this path. The path requires setting up the Dell Unity NAS
server and NFS share, kNFS for the NFS share, and NIC interface, and enabling dNFS.
Dell Unity NAS server with a single Ethernet path for NFS traffic
dNFS uses two kinds of NFS mounts: the native operating system mount of NFS (also referred to as kernel
kNFS mount) and the Oracle database NFS mount (dNFS mount). When using a single network path for
dNFS, file oranfstab is not necessary because Oracle dNFS will glean the required information for the
matching mounted NFS share in file /etc/mtab. If dNFS is unable to find the necessary information in
/etc/mtab, control is handed back to the database and file access is attempted through kNFS.
If the IP used in the single network path is in the same subnet used by any other NIC interface in the
database server, see section Static routing for additional requirements. For additional information about file
oranfstab, see section Oracle dNFS configuration file: oranfstab.
If the architecture cannot support dedicated paths for all dNFS data traffic, dNFS control and data traffic can
share a path.
If multiple dNFS paths are defined for data traffic and a dNFS data path fails, dNFS reissues requests over any of the remaining dNFS data paths, improving database availability. Multiple data paths also provide Oracle
the ability to automatically tune the data paths to the NFS storage devices. Multiple data paths also avoid the
need to manually tune NFS network performance at the operating system level. Since dNFS implements
multipath I/O internally, there is no need to configure LACP for channel-bonding interfaces for dNFS data
traffic through active-backup or link aggregation. If the LACP protocol is configured on the NIC interfaces
intended for dNFS data traffic, remove the channel-bond on those interfaces so that the interfaces operate as
independent ports.
If a single interface is used for the operating system kNFS mount, NFS control traffic can be blocked should
the interface be down or the network cable unplugged. This blocked NFS traffic will cause the database to
appear unavailable. To mitigate this single point of failure in the network, LACP protocol should be configured
on multiple interfaces to create a channel-bonded interface for NFS control/management traffic. A channel-bonded interface for NFS control/management traffic is the recommended configuration because it provides increased database availability and additional network bandwidth.
For additional information about channel-bonded interfaces for NFS control traffic, see section Configuring
LACP.
When configuring dNFS with multiple network paths, the recommendation is to use a unique network for each
of the paths. When multiple unique networks are not available, or not wanted, multiple IPs from the same
subnet can be used for each of the network paths. See section Shared subnets for additional requirements if
a shared subnet is used for dNFS data traffic.
Shared subnets
If dNFS will use a subnet used by at least one other network interface, some configuration is required.
Configuration will include the IPv4 network routing filters and static routes.
If the Oracle 12c preinstall rpm is used to configure the operating system before installing Oracle, the routing filters will be relaxed appropriately. Beginning with Oracle Database 12c release 2, Oracle changed the name of this rpm so that the name corresponds to the version of Oracle being installed:
Both rpms are in the ol7_latest repository for Oracle Linux 7 on the Oracle Linux yum server and from ULN.
Recent releases of Oracle Linux 7 by default include the proper yum configuration to install these rpms. If the
rpm is missing from the operating system, perform the following to install it:
Oracle 12cR1:
Oracle 12cR2:
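The install commands themselves are not shown above; a minimal sketch using the preinstall package names that Oracle publishes for Oracle Linux 7 (verify the exact package names against the ol7_latest repository):
# yum install oracle-rdbms-server-12cR1-preinstall
# yum install oracle-database-server-12cR2-preinstall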
To verify that the IPv4 routing filters have been relaxed in the currently running operating system, perform the following on the database server as a privileged operating system user. The values of the returned parameters should be 2 if the IPv4 routing filters have been relaxed.
If the filters are not set correctly, update /etc/sysctl.conf with the settings so they are persistent across
reboots, and reload the system configuration:
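The commands are not reproduced above; a minimal sketch, assuming the dNFS data paths use interfaces p1p1 and p1p2 (the exact parameter names, per interface or all/default, depend on the configuration); a value of 2 enables loose reverse-path filtering:
# sysctl net.ipv4.conf.p1p1.rp_filter net.ipv4.conf.p1p2.rp_filter
net.ipv4.conf.p1p1.rp_filter = 2
net.ipv4.conf.p1p2.rp_filter = 2
# cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.p1p1.rp_filter = 2
net.ipv4.conf.p1p2.rp_filter = 2
EOF
# sysctl -p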
Static routing
When any path for dNFS traffic shares any subnet already in use on the NFS client, static routing must be
configured for each of the paths used by dNFS traffic. A static route for dNFS data traffic must also be
configured should it use the same subnet as dNFS control traffic. If static routes are not defined, automatic
load balancing and performance tuning of dNFS will not operate as expected per file oranfstab. NFS data
traffic will flow through an unexpected network path.
The remainder of this section covers two examples of static routing. The first example considers two
interfaces sharing a subnet, and the second example considers four network interfaces sharing a subnet.
Figure 23 illustrates the path taken for dNFS data traffic when a subnet is shared on two interfaces with
default routing. dNFS traffic flows through interface em1 rather than through the intended interface p1p1.
Incorrect path taken by dNFS data traffic on shared subnet and default routing
If default routing is used, the operating system searches the routing table for the route that best matches the
destination address and mask, and it will use that route. In the example above, interfaces em1 and p1p1
share the same subnet (column Destination) and mask (column Genmask). The operating system considers
both entries as best-matched for the target address (XX.XX.XX.87 /20 – port 0 of Dell Unity storage). Since
em1 precedes the entry for p1p1, the operating system will use interface em1 for dNFS data traffic rather than
the route to the intended interface p1p1.
To mitigate the issue of sending dNFS data traffic across the wrong path, a static route must be added to the
route table. That static route forces dNFS data traffic to flow between the intended p1p1 interface and target
address (XX.XX.XX.87 /20). The following command adds the necessary route to the routing table:
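The command itself is not reproduced here; a minimal sketch using the route command, assuming the Dell Unity target IP XX.XX.XX.87 and the intended client interface p1p1 (an equivalent ip route add command can be used instead):
# route add -host XX.XX.XX.87 dev p1p1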
After adding the static route, verify that the routing table is updated with the appropriate route (for example, with route -n or ip route show).
With the necessary route in place, Figure 24 shows dNFS traffic flowing through the intended interface p1p1.
Adding routes to the routing table with the route or ip route command is one way to define static routes, but routes added this way do not persist across reboots. To make the routes persistent, define them in interface routing scripts in the directory /etc/sysconfig/network-scripts; routes defined there are reapplied at each boot.
This example illustrates how default and static routing change the paths taken on four interfaces (p<s>p<p>)
configured with IPs from the same subnet for dNFS data traffic.
If the following default routing is used, all dNFS traffic would again flow through interface em1 because it is
the first best-matched entry in the routing table.
To direct dNFS traffic to flow through all intended interfaces, the following static routes are needed:
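The commands themselves are not reproduced here; a minimal sketch, assuming the four client interfaces p1p1, p1p2, p2p1, and p2p2 map to the four Dell Unity NAS server IPs XX.XX.XX.87 through XX.XX.XX.90 (the interface-to-target mapping is illustrative and must match oranfstab):
# route add -host XX.XX.XX.87 dev p1p1
# route add -host XX.XX.XX.88 dev p1p2
# route add -host XX.XX.XX.89 dev p2p1
# route add -host XX.XX.XX.90 dev p2p2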
Routes that best match the destination and mask must be defined between the interfaces and the four end-
point IP addresses in the Dell Unity NAS Server. After the routes are defined, they will be chosen.
As noted previously, routes added directly to the routing table with the route or ip route command do not persist across reboots, while static routes defined in the interface routing scripts in /etc/sysconfig/network-scripts are reapplied at each boot and therefore persist.
Static routing can also be defined in Dell Unity storage when adding or updating the configuration of a NAS
server. See section Dell Unity NAS servers for more information.
See Table 10 for examples of IP address mapping from end point to end point, dNFS traffic type, and LACP.
Configuring LACP
In environments requiring high availability, a bonded NIC interface for NFS control traffic is recommended.
If LACP is configured on the NFS client for NFS control traffic, LACP must be configured in the Dell Unity
system by creating link aggregations. Port channels in the Ethernet switches connecting the Dell Unity and
NFS client interfaces must also be configured. Link aggregations with Dell Unity interfaces provide
redundancy and additional bandwidth especially when multiple NFS database clients exist. In practice, link
aggregations in Dell Unity storage should be done only if the second link is needed for highly available
configurations.
If the channel-bonded interface on the NFS client will be dedicated to NFS control traffic, it is recommended
to use 1 GbE network interfaces. Using 10 GbE links for the dedicated channel-bonded interface for NFS
control traffic may be a waste of interface resources. However, the dedicated channel-bonded interface for
NFS control traffic does provide increased availability. Should one of the interface members of the channel-
bond suffer an outage, there is still another working interface in the channel-bond that traffic can flow through.
Both bonded interfaces must also use the same ports from both SPs. Using the same ports from both SPs is
necessary because if there is failover, the peer SP uses the same ports. LACP can be configured across the
ports from the same I/O module but cannot be configured on ports that are also used for iSCSI connections.
In earlier Dell Unity All Flash arrays, LACP could be configured across the on-board Ethernet ports.
If a link aggregate contains two interfaces, four switch interfaces will be required: two switch interfaces for the
two SP A interfaces in the link aggregate, and two switch interfaces for the two SP B interfaces in the link
aggregate. See Figure 30.
Link aggregation in Dell Unity storage is configured from within the Update system settings wizard. To start
the Update system settings wizard, select the gear icon in the menu bar:
In the Settings wizard, select Access > High Availability to manage or view link aggregations. Then, select
+ from the Link aggregations section to configure a bonded Dell Unity interface.
Setting the primary and secondary ports of the bonded interface is the first step.
If the bond interface is needed for dedicated NFS control traffic, MTU 1500 may be sufficient, but consider
using Jumbo frames (MTU 9000). See section Jumbo frames for additional information.
The link aggregate can be added to the NFS server from the network properties in Unisphere: click File >
NAS Servers > edit (pencil icon) > Network > Interfaces & Routes > + > Production IP interface. Set
Ethernet Port: to the link aggregate created for the NFS traffic and provide the necessary networking
information (IP address, subnet mask/prefix length (or CIDR), gateway) for the link aggregate.
Then, when mounting the NFS share on the NFS client, mount the NFS share with the IP address specified in
the link aggregate interface.
Figure 30 illustrates how switch interfaces were configured as port channels in a Dell Networking S5000
switch. The port channels will be used for NFS control traffic. Port channel 1 will be used for Dell Unity SP
module A and port channel 2 will be used with Dell Unity SP module B.
[Figure: Ethernet switch port channels connected to the Dell Unity SP B mezzanine card ports (xx.xx.xx.89 and xx.xx.xx.90) for kNFS control traffic, and a link aggregation (xx.xx.xx.91) on the Dell Unity NAS server for dNFS data traffic]
Cabling between Dell Unity 480F, 680F, 880F storage, and an Ethernet switch
[Figure: Ethernet switch port channels and a link aggregation (xx.xx.xx.91) across Dell Unity SP ports 0 through 3 (xx.xx.xx.87 through xx.xx.xx.90)]
Cabling between Dell Unity 480F, 680F, 880F storage, and an Ethernet switch
Switch interfaces that will be connected to the channel-bond interfaces of the NFS client (database server)
must also be configured with LACP.
For additional network redundancy for NFS traffic, use redundant switches to provide greater network
availability.
For NFS control/management traffic, either 1 Gb/s or 10 Gb/s ports can be used. For Oracle environments
that require path redundancy for NFS control traffic, it is required to use LACP across multiple interfaces from
end point to end point.
Ethernet connectivity between NFS client (database server) and Ethernet switch
The following snippets are for the interfaces shown previously and correspond to the interface address in the
operating system static routes and dNFS channels defined in file oranfstab:
TYPE=Ethernet
DEFROUTE=yes
NAME=em2
DEVICE=em2
SLAVE=yes
MASTER=bond0
<snippet>
TYPE=Ethernet
DEFROUTE=no
NAME=p2p2
DEVICE=p2p2
IPADDR=XX.XX.XX.72
PREFIX=20
GATEWAY=XX.XX.XX.1
<snippet>
For additional information about bonded interfaces, see the section Configuring LACP.
If oranfstab is missing and assuming NFS file systems are mounted, dNFS mounts and creates a single dNFS
channel for entries found in /etc/mtab that are required for the database. The dNFS channel in Oracle will
have a name equal to the IP address of the mount entry in /etc/mtab. No additional configuration is required.
The following shows the /etc/fstab and /etc/mtab entry for single NFS share:
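The entries themselves are not reproduced here; a minimal sketch of the /etc/fstab line for a single NFS share, reusing the NAS server IP and export from the earlier mount example (the corresponding /etc/mtab entry shows the same mount with the options that were actually negotiated):
XX.XX.XX.91:/ORA-ASM-NFS /oraasmnas nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0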
If multiple channels to a NAS server are needed for increased dNFS bandwidth, automatic dNFS data traffic
load balancing, or automatic dNFS channel failover, the file oranfstab is required. dNFS automatically
performs load balancing across all specified available channels, and if one channel fails, dNFS reissues I/O
commands over any remaining available channel for that NAS server.
oranfstab can reside in either /etc or $ORACLE_HOME/dbs. If oranfstab resides in /etc, its contents will be
global to all databases running on that server regardless of which ORACLE_HOME they are running from. If
oranfstab resides in $ORACLE_HOME/dbs, it will be global to any database running from that
ORACLE_HOME. If ORACLE_HOME is shared between RAC nodes, all RAC databases running from the
shared $ORACLE_HOME will use the same $ORACLE_HOME/dbs/oranfstab.
dNFS searches for mount entries in the following order and uses the first matching entry as the mount point:
• $ORACLE_HOME/dbs/oranfstab
• /etc/oranfstab
• /etc/mtab
If a database uses dNFS mount points configured in oranfstab, Oracle first verifies kNFS mount points by
cross-checking entries in mtab and oranfstab. If a match does not exist, dNFS logs a message and fails to
operate.
The following oranfstab file contains four dNFS data paths to each of two NAS server aliases; each NAS server alias is for a different database. The format of the data paths can vary within oranfstab:
server: ORA-NAS01
local: XX.XX.XX.57 path: XX.XX.XX.63
local: XX.XX.XX.62 path: XX.XX.XX.65
local: XX.XX.XX.61 path: XX.XX.XX.64
local: XX.XX.XX.72 path: XX.XX.XX.66
mnt_timeout: 60
export: /ORA-FS1 mount: /ora1db
#
server: ORA-ASM-NFS
local: XX.XX.XX.57 path: XX.XX.XX.87
local: XX.XX.XX.62 path: XX.XX.XX.88
local: XX.XX.XX.61 path: XX.XX.XX.89
local: XX.XX.XX.72 path: XX.XX.XX.90
mnt_timeout: 60
export: /ORA-ASM-NFS mount: /oraasmnas
The following channels for ORA-ASM-NFS will be created. Channels for ORA-NAS01 are not shown because
the current database relies only on ORA-ASM-NFS:
ORA-ASM-NFS XX.XX.XX.89 2 1
ORA-ASM-NFS XX.XX.XX.90 3 1
If any NFS data path (column PATH) uses an IP existing in a subnet used by any client NIC interface, static
routes must be defined for that NFS data path. See section Shared subnets for more information.
To enable dNFS:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
To disable dNFS:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off
If the alert log contains the string running with ODM, dNFS has been enabled and the instance was started with the ODM library containing the Direct NFS driver.
The local IP and path IP shown in the alert log should match all the records in oranfstab for the appropriate
NAS server hosting the database. If Oracle automatically detects the local host interface because oranfstab is
not defined, ensure the chosen interface is intended for the dNFS channel.
When there is database activity, there should be Ethernet activity on the interfaces carrying dNFS traffic. The activity is displayed as changes to the RX-OK and TX-OK values from netstat. The following lists send (TX) and receive (RX) statistics for all interfaces:
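The command itself is not shown above; netstat -i is the standard Linux command that lists these per-interface counters:
# netstat -i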
The database compares the datafile names with the NFS mount to see if dNFS can be used. Any datafile that
dNFS can work with will reside in v$dnfs_files. Verify that the database sees all database files residing on the
NFS share and that there is activity on the dNFS channels.
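A minimal sketch of queries against the dNFS dynamic views for this verification (run as a privileged database user; the output varies by environment):
SQL> SELECT svrname, dirname FROM v$dnfs_servers;
SQL> SELECT filename, filesize FROM v$dnfs_files;
SQL> SELECT ch_id, svrname, path, sends, recvs FROM v$dnfs_channels;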
Data reduction
Data reduction is a Dell Unity feature that includes zero detection, compression, and advanced
deduplication. By offering multiple levels of space saving, Dell Unity provides flexibility for the best balance of
space savings and performance.
Oracle provides database-level compression. When database-level compression is enabled, it is unlikely that the Dell Unity system can further reduce the space consumed by the already-compressed data. It is recommended that compression is applied by either the array or the database engine, but not both. Certain types of data, such as video, audio, image, and binary, usually get little benefit from compression.
Compression requires CPU resources and at high throughput levels can start to have an impact on performance. The heavy write ratio of OLAP workloads can also reduce the benefits of compression for Oracle databases. File data can compress well, so selective volume compression should be considered.
Since both the Dell Unity system and Oracle offer data compression, there are several factors to consider.
The best recommendation depends on several factors such as database contents, the amount of available
CPU on both the storage and the database servers, and the number of I/O resources.
The following lists the benefits of using Dell Unity compression over the database-level compression:
• Dell Unity compression offloads CPU resources associated with compression, allowing more CPU
resources available to the operating system and databases.
• Dell Unity compression is transparent to the databases. Any versions of the database can benefit
from it.
• The cost to enable compression for all applications on a Dell Unity system can be lower compared to
the cost to enable compression for a database.
• Dell Technologies guarantees 4:1 storage efficiency for all-flash configurations. For more information,
go to Future-Proof Program.
• Oracle and Grid Linux user home directories are candidates for compression but evaluate the
benefits of compressing them.
Advanced deduplication
Advanced deduplication is an addition to data reduction and can be enabled only when data reduction is enabled. Advanced deduplication reduces the storage needed for data by keeping only a few copies (often one copy) of a block with a given content.
The deduplication scope is a single LUN. When choosing the storage layout, choose fewer LUNs for better deduplication, or more LUNs for better performance.
This level of space saving can provide the greatest level of return in most environments, but also requires the
most CPU in Dell Unity storage. Because of the nature of user data, there will be duplicate data from copies
of user data. This feature should be tested with a sample of database data and workload before being
enabled in production.
Advanced deduplication was first introduced in OE 4.5. It was an optional addition to the data reduction logic
available with certain models, and it could only be performed on Dell Unity blocks that were compressed. With
OE 5.0, advanced deduplication (if enabled) will deduplicate any block (compressed or uncompressed). For
more information, see the Dell Unity: Data Reduction and Dell Unity: Best Practices Guide.
Snapshots
Snapshots provide a fast and space-efficient way to protect Oracle databases. When using snapshots with
Oracle databases, there are important considerations to ensure a successful database recovery.
• All LUNs of an Oracle database must be protected as a set using the consistency group feature. The
consistency group will ensure that the snapshot is taken at the exact same time on all LUNs in that
group. For NFS file systems that support an Oracle database, to ensure database consistency, the
entire database must exist on the file system being snapped.
• Snapshots do not replace Oracle RMAN for regular database backup. However, they offer additional protection to the database and allow offloading RMAN processing to an alternate host.
• Snapshots can be taken on demand manually or automatically based on a schedule defined on the LUN or file system. It is recommended to put the database in hot backup mode before taking a snapshot and to end backup mode after the snapshot is taken (see the example after this list).
Note: Snapshots increase the overall CPU load on the system and increase the overall drive IOPS in the
storage pool. Snapshots also use pool capacity to store the older data in the snapshot, which increases the
amount of capacity used in the pool until the snapshot is deleted. Consider the overhead of snapshots when
planning both performance and capacity requirements for the storage pool.
• Before enabling snapshots on a storage object, it is recommended to monitor the system and ensure
that existing resources can meet the additional workload requirements. (See “Hardware Capability
Guidelines” section and Table 2 in the Dell Unity: Best Practices Guide.)
• Enable snapshots on a few storage objects at a time, and then monitor the system to be sure it is still
within the recommended operating ranges before enabling more snapshots. Additional information
can be found in the Dell Unity: Snapshots and Thin Clones document.
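A minimal sketch of the hot backup bracket referenced in the list above; the Dell Unity snapshot of the consistency group or file system is taken between the two statements:
SQL> ALTER DATABASE BEGIN BACKUP;
-- take the Dell Unity snapshot here
SQL> ALTER DATABASE END BACKUP;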
When recovering the database from a snapshot, the Dell Unity system offers two methods to recover a point-
in-time copy of the database: restore and attach to host.
1. Terminate all user connections and shut down the database to be restored.
2. In Dell Unisphere, identify and select the snapshot in the LUN Snapshots properties page or in the
Consistency Group Snapshots properties page. See Figure 34.
3. Choose Restore from the More Actions drop-down menu.
4. After the restore operation is completed, restart the database on the host.
5. Oracle automatically performs database recovery during the startup.
6. Verify the data in the database.
2. In Unisphere, identify and select the snapshot in the LUN Snapshots properties page or in the
Consistency Group Snapshots properties page. See Figure 34.
3. Choose Attach to host from the More Actions drop-down menu.
4. Select the destination host and allow Read/Write access.
5. After the snapshot is attached to the host, scan for the LUNs using rescan-scsi-bus.sh --forcerescan or -a.
6. Set ownership, group membership, and permission on the LUNs. It is possible to set ownership and
membership with chown, and permission with chmod, but the change will not be persistent across
reboots.
7. Scan for ASM devices.
For ASMFD:
# asmcmd afd_scan
# asmcmd afd_lsdsk
For ASMLib:
# oracleasm scandisks
# oracleasm listdisks
Use one of the following methods to extract or copy the data from the database copy:
a. RMAN
b. Data Pump
c. Copy data using a database link
Find more information in the Oracle Backup and Recovery User's Guide.
13. Once the recovery is complete, shut down the database copy.
14. Dismount the ASM disk groups.
15. Remove the snapshot LUNs from the destination host.
16. Remove host access of the snapshot LUNs in Unisphere.
Thin clones
Thin clones are based on snapshot technology and are the preferred way to make read/write copies of
databases. Similar to regular LUNs, many of the data services, such as snapshots, replications, and host I/O
limit, are also available to thin clones. When thin clones are first created, they consume no storage because
they share the same blocks as their parent snapshot at the beginning. As new data is written or changes are
made to the existing data, new data blocks are allocated and tracked separately from the parent. The data on
the thin clones is the same as on the parent LUNs, but the thin clones have different LUN IDs and WWNs. To the operating system, they appear to be different LUNs. However, when Oracle scans the ASM headers, they contain the same labels and disk group information as the original LUNs. Therefore, it is recommended to attach thin clones to an alternate host to avoid confusion in Oracle and the risk of overwriting data on the wrong LUNs.
Additional information about thin clones can be found in the Dell Unity: Snapshots and Thin Clones document.
Note: Only snapshots with no auto-delete policy and no expiration time are eligible for selection.
Remove the auto-delete policy and expiration time on the snapshot before attempting the Clone
action.
5. Follow the wizard to configure the thin clone’s name, host I/O Limit, host access, snapshot policy, and
replication.
6. After the thin clone LUNs are attached to the destination host, scan for the LUNs using rescan-scsi-bus.sh --forcerescan or -a.
7. If the database clone is intended for long-term use, configure multipath and persistent ownership and
permission on the thin clone LUNs.
8. Scan for ASM devices.
9. Mount the ASM disk groups.
10. Copy the database init parameter file to the destination host.
11. Create the database log directories on the destination host.
12. Start up the database in sqlplus.
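A minimal sketch of steps 8, 9, and 12 on the destination host, assuming ASMFD, a disk group named DATADG, and an init parameter file already copied to $ORACLE_HOME/dbs (names are illustrative; use oracleasm scandisks instead of afd_scan for ASMLib):
# asmcmd afd_scan
$ asmcmd mount DATADG
$ sqlplus / as sysdba
SQL> STARTUP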
1. Shut down the database copy that is using the thin clone LUNs.
2. Dismount the ASM disk groups.
3. In Unisphere, select the thin clone and select the Refresh action in the More Actions menu. See
Figure 35.
4. A snapshot is automatically created of the thin clone to preserve the thin clone data.
5. Select a snapshot to refresh from.
Note: Snapshots that have auto-delete policy or expiration time set are not eligible for selection.
Remove the auto-delete policy and expiration time on the snapshot first before starting the Refresh
action.
Refreshing a snapshot
Replication
Creating a high availability solution for Oracle databases often involves creating a copy of the data on another
storage device and synchronizing that data in some manner. Dell Unity replication provides data
synchronization between Dell Unity systems. Data is replicated at the consistency-group, LUN, or file-system level, providing a choice of replication settings on a per-volume basis. Using Dell Unity replication can
be an effective way to protect Oracle databases due to the flexibility and configuration options that it provides.
The variety of options provide a robust way to develop a replication scheme that provides the proper mix of
performance and bandwidth efficiency while still meeting RTO and RPO requirements.
When using Dell Unity replication to protect Oracle databases that are on multiple volumes, contain all ASM
devices for a database within a consistency group. Then configure replication on the consistency group.
Dell Unity storage supports both asynchronous and synchronous replication. A flash tier is recommended (in
a hybrid pool) for both source and destination pools where replication will be active.
Asynchronous replication
Asynchronous replication takes snapshots on the replicated storage objects to create the point-in-time copy, determine the changed data to transfer, and maintain consistency during the transfer. Consider the overhead
of snapshots when planning performance and capacity requirements for a storage pool that will have
replication objects.
Setting smaller RPO values on replication sessions will not make them transfer data more quickly but will
result in more snapshot operations. Choosing larger RPOs, or manually synchronizing during non-production
hours, may provide more predictable levels of performance. Additional information can be found in the Dell
Unity: Replication Technologies and Dell Unity: Configuring Replication documents.
Synchronous replication
Synchronous replication transfers data to the remote system over the first Fibre Channel port on each SP.
When planning to use synchronous replication, it may be appropriate to reduce the number of host
connections on this port. When the CNA ports are configured as FC, CNA port 4 is defined as the
synchronous replication port. If the CNA ports are configured as 10 GbE, port 0 of the lowest numbered FC
I/O module is the replication port. Additional information can be found in the Dell Unity: Replication
Technologies and Dell Unity: Configuring Replication documents.
Data protection
In addition to the snapshots and replication provided by Dell Unity systems, Dell Technologies offers
additional data protection software that integrates with the Dell Unity data protection features. The software is
optional and can be used to enhance the overall application protection.
AppSync
Dell AppSync is software that enables integrated Copy Data Management (iCDM) with the Dell primary
storage systems, including Dell Unity arrays. It supports many applications, including Oracle, and storage
replication technologies. For the latest support information, refer to the AppSync Support Matrix at the Dell
EMC E-lab Navigator.
AppSync simplifies and automates the process of creating and using snapshots of production data. By
abstracting the underlying storage and replication technologies, and through application integration, AppSync
empowers application owners to manage data copy needs themselves. The storage administrator, in turn,
need only be concerned with initial setup and policy management, resulting in a more agile environment.
Additional information about AppSync can be found in the AppSync User and Administration Guide and the
AppSync Performance and Scalability Guidelines.
Mount options
Mount option    Description
rw Mounts the file system for both reading and writing operations
bg    Defines a background mount if a timeout or failure occurs. bg causes the mount command to fork a
child process that continues to attempt to mount the export, while the parent process immediately
returns with a zero exit status.
hard    Explicitly marks the volume as hard-mounted and determines the recovery behavior of the NFS
client after an NFS request times out. hard is enabled by default and prevents NFS from returning
short writes, which would cause the database to crash, by retrying the request indefinitely at
timeo=<nn> intervals. A message is reported on the console when a major timeout occurs, and the
client continues to attempt the operation indefinitely.
nointr    Prevents signals from interrupting NFS calls. Without this option, a signal such as kill -9 can
interrupt an in-flight NFS write; the abrupt termination can corrupt datafiles.
rsize    Specifies the maximum size (in bytes) of each read request that the NFS client can issue when
reading data from a file on an NFS server. The default depends on the kernel version but is
generally 1,024 bytes. The data payload of each NFS read request is equal to or smaller than the
rsize setting, with a maximum payload size of 1,048,576 bytes. Values lower than 1,024 are
replaced with 4,096, and values larger than 1,048,576 are replaced with 1,048,576. If the specified
value is within the supported range but not a multiple of 1,024, it is rounded down to the nearest
multiple of 1,024. If a value is not specified, or if the value is larger than the supported maximum on
either the client or the server, the client and server negotiate the largest rsize they can both support.
The rsize specified on the mount appears in /etc/mtab, while the effective rsize negotiated by the
client and server appears in /proc/mounts. For Oracle, the value must be equal to, or a larger
multiple of, the Oracle block size (initialization parameter db_block_size, default 8 KB) to prevent
fractured blocks. rsize must be at least 16,384, and Oracle recommends setting it to 32,768.
wsize    Identical to rsize, but for write requests sent from the NFS client. wsize must be at least 16,384,
and Oracle recommends setting it to 32,768. Oracle dNFS clients issue writes at wtmax granularity
to the NFS server. If the dNFS client is used and the NFS server does not support a write size
(wtmax) of 32,768 or larger, dNFS reverts to the native kernel NFS path.
tcp    Defines the transport protocol (name and family) that the NFS client uses to transmit requests to
the NFS server, and also controls how the mount command communicates with the server's
rpcbind and mountd services. If an NFS server has both an IPv4 and an IPv6 address, specifying a
particular netid forces the use of IPv4 or IPv6 networking to communicate with the server.
Specifying tcp forces all traffic from the mount command and the NFS client to use TCP, and is an
alternative to specifying proto=tcp. Do not use NFS over UDP for any reason.
vers    Specifies the NFS protocol version number used to contact the server's NFS service. Use a value
of 3 or 4. The vers option is an alternative to nfsvers and is provided for compatibility with other
operating systems.
timeo    Defines the time (in tenths of a second) that an NFS client waits for a request to complete before it
retries the request. With NFS over TCP, the default value is 60 seconds; otherwise, the default
value is 0.7 seconds. If a timeout occurs, the behavior depends on whether hard or soft was used to
mount the file system.
actimeo    This option is required whenever datafiles can AUTOEXTEND. Setting actimeo=0 sets acregmin,
acregmax, acdirmin, and acdirmax to zero, disabling all NFS attribute caching and ensuring that
AUTOEXTEND behavior is propagated to all nodes in a cluster. Without it, NFS caches the old file
size, causing inappropriate behavior. Oracle depends on file-system messaging to advertise a
change in the size of a datafile, so this setting is necessary.
noac Prevents NFS clients from caching file attributes so that applications can more quickly
detect file changes on the NFS server.
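As an illustration of how these options combine, the following /etc/fstab entry is a minimal sketch for an Oracle datafile file system over NFSv3. The NAS server name, export path, and mount point are placeholder values; always confirm the exact option list for the database version and file type in use against Oracle Doc ID 359515.1 (see Related resources).

# /etc/fstab entry (one line) for an Oracle datafile file system on NFSv3
nas01:/oracle_data  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0

# Mount the file system and verify the effective (negotiated) rsize and wsize
mount /u02/oradata
grep /u02/oradata /proc/mounts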
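If the Oracle Direct NFS (dNFS) client is enabled, the NFS paths are also described to the database through an oranfstab file. The following entry is a minimal sketch only; the server name, network addresses, export, and mount point are placeholders, and the supported keywords vary by Oracle Database release.

# $ORACLE_HOME/dbs/oranfstab (or /etc/oranfstab) - hypothetical example entry
server: nas01
local: 192.168.10.11
path: 192.168.10.21
export: /oracle_data mount: /u02/oradata

The file system must still be mounted through the kernel NFS client (as in the /etc/fstab example above) before dNFS will use it.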
Storage technical documents and videos provide expertise that helps to ensure customer success on Dell
storage platforms.
The Dell Unity area of the Dell Technologies Info Hub provides white papers and videos.
Related resources
The following referenced or recommended Dell Technologies publications and resources are at Dell.com or
infohub.delltechnologies.com:
• Dell Unity: Replication Technologies
• Dell Unity: Configuring Replication
• AppSync User and Administration Guide
• AppSync Performance and Scalability Guidelines
The following referenced or recommended Veritas resources are at Veritas Online Support:
The following referenced or recommended Oracle resources are at the Oracle Online Documentation Portal:
The following referenced or recommended Oracle notes are at My Oracle Support (Oracle support license
required):
• Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS
devices (Doc ID 359515.1)
• Creating File Devices On NAS/NFS FileSystems For ASM Diskgroups. (Doc ID 1620238.1)