
CHAPTER 6
Automatic Storage Management

Managing storage is one of the most complex and time-
consuming tasks of a DBA. Data grows at an exponential pace
due to the consolidation of databases, as well as the growth of a
business over time. Business requirements demand the
continuous availability of database storage systems, and the
maintenance window for storage is shrinking from hours to
minutes. Legal and compliance requirements add even more
baggage because increasingly, organizations are required to
retain data for extended periods of time. (We know some sites
that store hundreds of terabytes of active data for seven years or
more!)
Oracle Automatic Storage Management (Oracle ASM), Oracle ASM
Cluster File System (Oracle ACFS), and Oracle ASM Dynamic
Volume Manager (Oracle ADVM) are the key components of
Oracle RAC storage management. Storage management has
changed significantly over the past several years. You can now
have remote ASM-based systems that use some form of NFS
(Network File System) for shared storage, as well as appliances
such as the Oracle Database Appliance, which has an NFS-based
mechanism to expose ASM/ACFS to the virtual machines (VMs). In
addition, some cloud-based storage solutions such as Amazon
EC2/S3 and Microsoft Azure aren’t capable of hosting Oracle RAC–
based systems yet, but work is in progress to make this possible
in these types of environments. Oracle’s public cloud is also
something you can use in case you can’t allocate on-premises
storage.
Modern technologies help DBAs easily manage huge volumes of
data without a considerable amount of administrative overhead.
New tools are being developed that work closely with the RDBMS
kernel to make data management and data consolidation easier.
Automatic Storage Management (affectionately called Awesome
Storage Management) is one of those revolutionary solutions from
Oracle. Since introducing Automatic Storage Management in
Oracle Database 10g, Oracle has enhanced it in each subsequent
Oracle Database release to provide a complete volume manager
built into Oracle database software. The Oracle ASM file system is
highly optimized to provide raw disk performance by avoiding
various types of overhead associated with a conventional file
system.

STANDARD ORACLE ASM AND ORACLE FLEX ASM
In order to use Oracle ASM to manage an Oracle database, ASM
must first be installed. There's no need to install it separately,
however, because you'll be installing the Oracle Grid
Infrastructure before installing Oracle RAC databases. Oracle Grid
Infrastructure consists of Oracle Clusterware and Oracle ASM.
When you install Oracle Grid Infrastructure, the installer
automatically installs Oracle ASM in the Oracle Grid Infrastructure
home.
Up until the Oracle Database 12c release, the only way to use
ASM was to run an Oracle ASM instance on each node where an
Oracle RAC instance would run. Multiple database instances on
the same server can share the same Oracle ASM instance, but
database instances running on separate servers, as is the case
with Oracle RAC, needed to have a separate ASM instance
running on each server that was part of the cluster.
This traditional ASM configuration is now called the Standard
Oracle ASM configuration, where each database instance depends
on a separate ASM instance for managing its storage. Failure of
an ASM instance means that any instance running on that server
would also fail.
Oracle Database 12c has introduced the new Oracle Flex ASM
configuration, where Oracle database instances can use Oracle
ASM instances running on remote servers. This means that
the failure of an Oracle ASM instance doesn’t lead to the failure of
the database instances being served by it, as the database
instances will continue to be served by the remaining ASM
instances in the Oracle Flex ASM cluster. We explain both the
Standard ASM and Oracle Flex ASM configurations in this chapter.

INTRODUCTION TO AUTOMATIC STORAGE MANAGEMENT
Oracle ASM is the storage management solution that Oracle
recommends as an alternative to traditional volume managers, file
systems, and raw devices. ASM is a storage solution that provides
volume manager and file system capabilities that are tightly
integrated and optimized for Oracle databases. Oracle
implements ASM via three key components:

ASM instance
ASM Dynamic Volume Manager (ADVM)
ASM Cluster File System (ACFS)

The ASM Dynamic Volume Manager provides the functionality of a
volume manager for the ASM Cluster File System. ASM Cluster File
System first became available in Oracle Grid Infrastructure
11g Release 2.
ACFS helps customers reduce overall IT costs because it
eliminates the need for a third-party cluster file system, the most
common requirement for clustered applications.
Because ACFS is built as a true cluster file system, it can be used
for non-Oracle enterprise applications as well.
ASM simplifies storage management by enabling you to do online
reconfiguration and rebalancing of the ASM disks. ASM is highly
optimized for the Oracle database because it distributes I/O
operations onto all available disks in a disk group and provides
raw disk performance.
In short, the ASM environment provides the performance of raw
disk I/O with the easy management of a file system. It simplifies
database administration by eliminating the need to manage
potentially thousands of Oracle database files in a direct manner.

Physical Limits of ASM


The number of datafiles per database has consistently increased
since Oracle version 7, in which only 1022 datafiles per database
could be used. Current Oracle versions support 65,533 datafiles
per database, and managing thousands of files in a multi-
database environment is a challenge. ASM simplifies storage
management by enabling you to divide all available storage into
disk groups. You can create separate disk groups for different
performance requirements. For example, an ASM disk group can
be created with low-performance hard disks to store archive data,
whereas an ASM disk group with high-performance disks can be
used for active data.
You manage a small set of disk groups, and ASM automates the
placement of the database files within those disk groups. With
ASM you can have up to 511 disk groups and 10,000 ASM disks in a storage system,
where each ASM disk can store up to 32 petabytes (PB) of data. A
disk group can handle one million ASM files.
The maximum supported file size for a datafile in Oracle Database
12c is 128 terabytes (TB), whereas ASM supports up to a
maximum of 320 exabytes (EB) for the entire storage
system. Figure 6-1 compares the traditional data storage
framework to that of ASM.
FIGURE 6-1. Traditional framework vs. ASM

ASM in Operation
Oracle ASM uses disk groups to store datafiles. Within each disk
group, ASM exposes a file system interface for the database files.
ASM divides a file into pieces and spreads these pieces evenly
across all the disks, unlike other volume managers, which spread
the whole volume onto the different disks. This is the key
difference from the traditional striping techniques that use
mathematical functions to stripe complete logical volumes
independent of files or directories. Striping requires careful
capacity planning at the beginning, because adding new volumes
requires rebalancing and downtime.
With ASM, whenever you add or remove storage, ASM doesn’t
restripe all the data. It just moves an amount of data proportional
to the amount of storage that you’ve added or removed, so as to
redistribute the files evenly and maintain a balanced I/O load
across the disks. This occurs while the database is active, and the
process is transparent to the database and end-user applications.
The new disk resync enhancements in Oracle Database 12c let
you recover very quickly from transient disk failures. The disk resync
checkpoint capability makes for fast recovery by letting a
resync operation resume from where it was stopped, instead of
starting from the beginning.
ASM supports all files that you use with an Oracle database (with
the exception of pfile and password files). Starting with Oracle
11g Release 2, ACFS can be used for non-Oracle application files.
ASM also supports Oracle Real Application Clusters (Oracle RAC)
and eliminates the need for a cluster Logical Volume Manager or
a third-party cluster file system.
ASM is integrated with Oracle Clusterware and can read from a
mirrored data copy in an extended cluster and improve I/O
performance when used in an extended cluster. (An extended
cluster is a special-purpose architecture in Oracle RAC where the
nodes are geographically separated.)
ASM provides the SQL interface for creating database structures
such as tablespaces, controlfiles, redo logs, and archive log files.
You specify the file location in terms of disk groups; ASM then
creates and manages the associated underlying files for you.
Other interfaces interact with ASM as well, such as ASMCMD,
OEM, and ASMCA. SQL*Plus is the most commonly used tool to
manage ASM among DBAs. However, system and storage
administrators tend to like the ASMCMD utility for managing ASM.

ASM Striping
Striping is a technique used for spreading data among multiple
disk drives. A big data segment is broken into smaller units, and
these units are spread across the available devices. The size of
the unit that the database uses to break up the data is called
the data unit size or stripe size.
Stripe size is also sometimes called the block size and refers to
the size of the stripes written to each disk. The number of parallel
stripes that can be written to or read from simultaneously is
known as the stripe width. Striping can speed up operations that
retrieve data from disk storage as it extends the power of the
total I/O bandwidth. This optimizes performance and disk
utilization, making manual I/O performance tuning unnecessary.
ASM supports two levels of striping: fine striping and coarse
striping. Fine striping uses 128KB as the stripe size, and coarse
striping uses 1MB as the stripe size. Fine striping can be used
for files that usually do smaller reads and writes. For example,
online redo logs and controlfiles are the best candidates for fine
striping when the reads or writes are small in nature.

Stripe and Mirror Everything


Oracle’s Stripe and Mirror Everything (SAME) strategy is a simple
and efficient technique used in managing a high volume of data.
ASM implements the Oracle SAME methodology, in which the
database stripes and mirrors all types of data across all available
drives. This helps the database to evenly distribute and balance
the I/O load across all disks within the disk group.
Mirroring provides the much-required fault tolerance for the
database servers, and striping provides performance and
scalability to the database. ASM implements the SAME
methodology to stripe and mirror files inside the ASM dynamic
volume.

Character Devices and Block Devices


You can classify any direct attached or networked storage device
as a character device or a block device. A character device holds
only one file. Normally, raw files are placed on character devices.
The location of the raw devices is platform dependent, and they
are not visible in the file system directory. A block device is a type
of character device, but it holds an entire file system.
If you use ASMLib (or the new ASM Filter Driver) as a disk API,
ASM supports block devices. ASM presents an ASM dynamic
volume device file as a block device to the operating system.
Oracle is obsoleting ASMLib and no longer recommends it to
its customers. Oracle recommends that you instead use the new
ASM Filter Driver, which we explain in detail in this chapter.

Storage Area Network


A storage area network (SAN) is the networked storage device
connected via uniquely identified host bus adapters (HBAs). The
storage is divided into logical unit numbers (LUNs), and each LUN
is logically represented as a single disk to the operating system.
In Automatic Storage Management, the ASM disks are either LUNs
or disk partitions. They are logically represented as a raw device
to the ASM. The name and path of the raw device is dependent on
the operating system. For example, in the Sun operating system,
the raw device has the name cNtNdNsN:

cN is the controller number.
tN is the target ID (SCSI ID).
dN is the disk number, which is the LUN descriptor.
sN is the slice number or partition number.

So when you see a raw partition in the Sun operating system
listed as c0t0d2s3, you'll know that the device is the fourth slice
(s3) of the third disk (d2) attached to the first SCSI target (t0) on
the first controller (c0). HP-UX does not expose the slice number in the raw
device. Instead, HP uses the cNtNdN format for the raw partitions.
Note that there is no concept of slice designation, because HP-UX
does not support slices. (HP-UX Itanium does have slice support,
but HP does not support the use of this feature.) The entire disk
must be provided to ASM.
A typical Linux configuration uses straight disks. Raw functionality
was an afterthought. However, Linux imposes a limit of 255
possible raw devices, and this limitation is one of the reasons for
the development of Oracle Cluster File System (OCFS) and the use
of ASMLib. Raw devices are typically stored in /dev/raw and are
named raw1 to raw255.

ASM Building Blocks


ASM is implemented as a special kind of Oracle instance with the
same structure and its own System Global Area (SGA) and
background processes. Additional background processes in ASM
manage storage and disk-rebalancing operations. You can
consider the various components we discuss next to be the
building blocks of ASM.

ASM Instance
An ASM instance is an Oracle instance that manages the
metadata for disk groups, ADVM (ASM Dynamic Volume Manager),
and ACFS (ASM Cluster File System). Metadata refers to
information, such as the disks that belong to a disk group and
the space available in a disk group, that Oracle ASM uses to
manage disk groups. ASM stores the metadata in the disk groups.
All metadata modifications are done by an ASM instance to isolate
failures. You can install and configure an ASM instance with either
the Oracle Universal Installer (OUI) or the Oracle ASM
Configuration Assistant (ASMCA).
You can use an spfile or a traditional pfile to configure an Oracle
ASM instance. An Oracle ASM instance looks for its initialization
parameter file in the following order:

1. The initialization parameter file location you specify in the Grid Plug and Play (GPnP) profile.
2. If you haven't set the location in the GPnP profile, then the Oracle ASM instance looks for the parameter file in the following locations:
The spfile in the Oracle ASM instance home
The pfile in the Oracle ASM instance home

NOTE
You can administer Oracle ASM initialization parameter files with
SQL*Plus, ASMCA, and ASMCMD commands.

Database instances connect to an ASM instance to create, delete,
resize, open, or close files, and database instances read/write
directly to disks managed by the ASM instance. You can have only
one ASM instance per node in an Oracle RAC cluster. Sometimes
this can be a disadvantage for a large Oracle RAC cluster (such as
an eight-node Oracle RAC, where you need to maintain eight
separate ASM instances, in addition to eight “user” instances). An
ASM instance failure kills the attached database instances in the
local node.
An ASM instance is just like an Oracle database instance, which
consists of System Global Area (SGA) and background processes.
Just like the buffer cache in an Oracle database instance, an ASM
instance has a special cache called ASM cache to read and write
blocks during rebalancing operations.
Apart from ASM cache, an ASM instance’s System Global Area has
a shared pool, large pool, and free memory area. Oracle internally
uses Automatic Memory Management, and you will hardly need to
tune an Oracle ASM instance.
ASM instances use memory and background processes in the
same way as a regular Oracle instance, but ASM’s SGA allocation
is minimal and doesn’t impose a heavy overhead on the server
running the ASM instance. The job of the ASM instance is to
mount the disk groups and thus make the Oracle ASM files
available to the database instances.
An Oracle ASM instance doesn't maintain a data dictionary, so
you can only connect to an ASM instance through the system
privileges SYSDBA, SYSASM, and SYSOPER. These privileges are
implemented by the operating system groups for the Oracle Grid
Infrastructure owner that you provide to the Oracle Universal
Installer when you install Oracle Grid Infrastructure.

ASM Listener
The ASM listener is similar to the Oracle database listener, which
is a process that’s responsible for establishing a connection
between database server processes and the ASM instance. The
ASM listener process tnslsnr is started from the $GRID_HOME/bin
directory and is similar to Oracle Net Listener.
The ASM listener also listens for database services running on the
same machine, so there is no need to configure and run a
separate Oracle Net Listener for the database instance. Oracle
will, by default, install and configure an ASM listener on port
1521, which can be changed to a nondefault port while Oracle
Grid Infrastructure is being installed, or later on.

Disk Groups
A disk group consists of disks that the database manages
together as a single unit of storage, and it’s the basic storage
object managed by Oracle ASM. As the primary storage unit of
ASM, this collection of ASM disks is self-describing, independent of
the associated media names.
Oracle provides various ASM utilities such as ASMCA, SQL
statements, and ASMCMD to help you create and manage disk
groups, their contents, and their metadata.
Disk groups are integrated with Oracle Managed Files and support
three types of redundancy: external, normal, and high. Figure 6-
2 shows the architecture of the disk groups.

FIGURE 6-2. Disk groups in ASM

The disks in a disk group are called ASM disks. In ASM, a disk is
the unit of persistent storage for a disk group and can be a disk
or partition from a storage array, an entire disk, or a partition of a
disk. Disks can also be logical volumes or network-attached
files that are part of a Network File System (NFS).
The disk group in a typical database cluster is part of a remote
shared-disk subsystem, such as a SAN or network-attached
storage (NAS). The storage is accessible via the normal operating
system interface and must be accessible to all nodes.
Oracle must have read and write access to all the disks, even if
one or more servers in the cluster fails. On Windows operating
systems, an ASM disk is always a partition. On all other platforms,
an ASM disk can be a partition of a logical unit number (LUN) or
any NAS device.

NOTE
Oracle doesn't support raw or block devices as shared storage for
Oracle RAC from Oracle 11g Release 2 onward.

Allocation Unit
ASM disks are divided into a number of units or storage blocks
that are small enough not to be hot. The allocation unit of storage
is large enough for efficient sequential access. The allocation unit
defaults to 1MB in size.
Oracle recommends 4MB as the allocation unit for most
configurations. ASM allows you to change the allocation unit size,
but you don’t normally need to do this, unless ASM hosts a very
large database (VLDB). You can’t change the size of an allocation
unit of an existing disk group.
Significance of 1MB in ASM
Various I/O clients and their usage models are discussed in this section to
provide insight into I/O block size and parallelization at the Oracle level.
The log writer writes the very important redo buffers into log files. These
writes are sequential and synchronous by default. The maximum size of
any I/O request is set at 1MB on most platforms. Redo log reads are
sequential and issued either during recovery or by LogMiner or log
dumps. The size of each I/O buffer is limited to 1MB on most platforms.
There are two asynchronous I/Os pending at any time for parallelization.
DBWR (Database Writer) is the main server process that submits
asynchronous I/Os in a big batch. Most of the I/O request sizes are equal
to the database block size. DBWR also tries to coalesce the adjacent
buffers in the disk up to a maximum size of 1MB whenever it can and
submits them as one large I/O. The Kernel Sequential File I/O (ksfq)
provides support for sequential disk/tape access and buffer management.
The ksfq allocates four sequential buffers by default. The size of the
buffers is determined by the DB_FILE_DIRECT_IO_COUNT parameter,
which is set to 1MB by default. Some of the ksfq clients are datafiles, redo
logs, RMAN, archive log files, Data Pump, Oracle Data Guard, and the File
Transfer package.

Volume Allocation Unit


Like the allocation unit of the ASM disk group, the volume
allocation unit is the smallest storage unit that ASM allocates for
space inside the ASM dynamic volume. ASM allocates space in
multiples of the volume allocation units. The size of the volume
allocation unit is related to the size of the allocation unit of the
ASM disk group. By default, ASM creates a volume allocation unit
of 64MB inside the ASM disk group created with a default
allocation unit size of 1MB.

Failure Groups
Failure groups define ASM disks that share a common potential
failure mechanism. A failure group is a subset of disks in a disk
group dependent on a common hardware resource whose failure
the database must tolerate. Failure groups are relevant only for
normal or high redundancy configurations. Redundant copies of
the same data are placed in different failure groups. An example
might be a member in a stripe set or a set of SCSI disks sharing
the same SCSI controller. Failure groups are used to determine
which ASM disks should be used for storing redundant copies of
data. By default, each disk is an individual failure group. Figure 6-
3 illustrates the concept of a failure group.

FIGURE 6-3. Failure groups


NOTE
Failure groups apply only to normal and high redundancy disk
groups and are not applicable for external redundancy disk
groups.

For example, if two-way mirroring is specified for a file, ASM
automatically stores redundant copies of file extents in separate
failure groups. You define the failure groups in a disk group when
you create or alter the disk group.

Mirroring, Redundancy, and Failure Group Options


ASM mirroring is more flexible than operating system mirrored
disks because ASM mirroring enables you to specify the
redundancy level on a per-file basis rather than on a volume
basis. Internally, mirroring takes place at the extent level.
If a file is mirrored, depending on the redundancy level set for the
file, each extent has one or more mirrored copies, and mirrored
copies are always kept on different disks in separate failure
groups.
You configure the failure group for a disk group when you create
the disk group or by altering the disk group. The five types of disk
groups based on their redundancy level are external, normal,
high, flex, and extended redundancy. Of these, failure groups
apply to the normal, high, flex, and extended redundancy disk
groups.
The following are the mirroring options that ASM supports on a
per-file basis:

Unprotected No ASM mirroring; used in external redundancy disk groups, where the storage array protects the data.
Two-way mirroring Each file extent has one mirrored copy; this is the default for normal redundancy disk groups.
Three-way mirroring Each file extent has two mirrored copies; used in high redundancy disk groups.
ASM doesn’t require external volume managers or external
cluster file systems for disk management; these are not
recommended because the functionalities provided by the ASM
will conflict with the external volume managers.
The number of failure groups you choose to create will determine
how many failures the database tolerates without losing data.
Oracle provides Oracle Managed Files (OMF), which the database
will create and manage for you. ASM extends the OMF capability.
With ASM you get additional benefits such as mirroring and
striping.
How ASM Deals with Disk Failures
Depending on the redundancy level you configure and how you define
your failure groups, one of the following will happen when one or more
disks fail in an ASM-managed file system:

Oracle will take the failed disks offline and drop them, keeping the disk group
mounted and usable. Because the data is mirrored, there's no data loss
and the data continues to remain available. Once the disk drop is
complete, Oracle ASM will rebalance the disk group to provide the original
redundancy for the data that was on the failed disk.
Oracle will dismount the disk group, which results in the unavailability
of the data. Oracle recommends at least three failure groups for normal
redundancy disk groups and five failure groups for high redundancy disk
groups to maintain the necessary number of copies of the Partner Status
Table (PST). This also offers protection against storage hardware failures.

Even Read for Disk Groups


Starting with the Oracle Database 12.1 release, the even
read feature is enabled by default. As its name indicates, even
read for a disk group distributes data evenly across all the disks in
the group. The database directs each read request to the least-
loaded disk.

ASM Files
Files that the Oracle database writes on ASM disks (which are part
of an ASM disk group) are called ASM files. Each ASM file can
belong to only a single Oracle ASM disk group. Oracle can store the
following files in an ASM disk group:

Datafiles
Controlfiles
Temporary files
Online and archived redo log files
Flashback logs
RMAN backups
Spfiles
Data Pump dump sets

ASM filenames normally start with a plus sign (+). Although ASM
automatically generates the names, you can specify a
meaningful, user-friendly alias name (or alias) for ASM files. Each
ASM file is completely contained within a single disk group and
evenly divided throughout all ASM disks in the group.
Voting files and the Oracle Cluster Registry (OCR) are two key
components of Oracle Clusterware. You can store both OCR and
voting files in Oracle ASM disk groups.
Oracle divides every ASM disk into multiple allocation units.
An allocation unit (AU) is the fundamental storage allocation unit
in a disk group. A file extent consists of one or more allocation
units, and each ASM file consists of one or more file extents. You
set the AU size for a disk group by specifying the AU_SIZE
attribute when you create a disk group. The value can be 1, 2, 4,
8, 16, 32, or 64MB. Larger AU sizes are beneficial for applications
that sequentially read large chunks of data.
You don't control the data extent size directly; Oracle
automatically increases the size of the data extents (variable extent
sizes) as an ASM file grows. The way Oracle
allocates the extents is similar to how it manages locally
managed tablespace extent sizes when an auto-allocate mode is
used. Variable extent sizes depend on the AU size. If the disk
group has an AU size of less than 4MB, the extent size is the same
as the disk group AU size for the first 20,000 extent sets. The
extent size goes up to 4*AU size for the next 20,000 extent sets;
for anything over 40,000 extent sets, the extent size will be
16*AU size. If the disk group AU size is at least 4MB, the extent
sizing (that is, whether it is the same as the disk group AU size,
4*AU size, or 16*AU size) is figured using the application block
size.

Disk Partners
Disk partners limit the possibility of two independent disk failures
that result in losing both copies of a virtual extent. Each disk has
a limited number of partners, and redundant copies are allocated
on partners.
ASM automatically chooses the disk partner and places the
partners in different failure groups. In both Oracle Exadata and
the Oracle Database Appliance, there’s a fixed partnership. In
other cases, the partnering is not fixed and is decided based on
how the disks are placed in their failure groups. Partner disks
should be the same in size, capacity, and performance
characteristics.

ASM Dynamic Volume Manager and ASM Cluster File System
ASM Cluster File System (ACFS) is a general-purpose cluster file
system and supports non-Oracle application files. ACFS extends
ASM functionality to support all customer files, including database
files, application files, executables, alert logs, and configuration
files. Oracle ASM Dynamic Volume Manager is the foundation for
ACFS.
The ASM disk group is the basic element in creating ACFS
because the disk group contains the ASM dynamic volume device
files, which the ASM Dynamic Volume Manager (ADVM) presents
to the operating system. Once these files are presented to the
operating system, you can use the traditional mkfs utility to build
the ASM Cluster File System on the ASM dynamic volume device
files, which are block devices, and then mount it.
ADVM supports ext3, ACFS, and NTFS file systems, and users can
use the advmutil utility on Windows systems to build the file
system on ADVM.
ADVM is installed in the Oracle Grid Infrastructure home as part of
the Oracle Grid Infrastructure installation, and Oracle Clusterware
loads the oracleadvm, oracleoks, and oracleacfs modules
automatically on system reboot. These modules play a critical role
in supporting the ADVM and ACFS functionalities of Oracle ASM.
You can use ASM tools such as ASMCA, ASMCMD, Oracle
Enterprise Manager, and SQL*Plus to create ASM dynamic
volumes. You can use ASMCA for all ASM management
operations.
There are some restrictions in terms of ACFS usage; for example,
you can’t use ACFS to create a root or boot directory because
ACFS drivers are loaded by Oracle Clusterware. Similarly, you
can’t use ACFS to store Oracle Grid Infrastructure software. ACFS
is a general file system, and you can perform I/O on an ACFS file
system just as you do on any other third-party file system.
Oracle ACFS is integrated with Cluster Synchronization Services.
In case of failure, Cluster Synchronization Services will fence the
failed cluster node from the active cluster to avoid any possible
data corruption.
Oracle uses various background processes to provide ADVM.
These processes are explained in the section “ASM Background
Processes,” later in this chapter.
Once you create your disk groups and mount them on the ASM
instance, you can use Oracle ASMCMD volume management
commands to manage ADVM. For example,
the volcreate command creates an ADVM volume in a disk
group, and the voldelete command will delete the volume. You
can use the volinfo command to display information about the
ADVM volumes. Here’s an example that shows how to use
the volcreate command to create a volume:
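
ASMCMD> volcreate -G data -s 10G volume1

(The disk group and volume names here are illustrative.)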

The previous command creates an Oracle ADVM volume in a disk
group. Once you create the volume, you can use the volume
device associated with the dynamic volume to host an Oracle
ACFS file system.
You use the ALTER DISKGROUP VOLUME statement to manage
Oracle ADVM volumes, including adding, modifying, disabling,
enabling, and dropping ADVM volumes. The following examples
show how to issue the ALTER DISKGROUP VOLUME statement to
manage Oracle ADVM volumes:
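
ALTER DISKGROUP data ADD VOLUME volume1 SIZE 10G;
ALTER DISKGROUP data MODIFY VOLUME volume1 USAGE 'acfs';
ALTER DISKGROUP data DISABLE VOLUME volume1;
ALTER DISKGROUP data ENABLE VOLUME volume1;
ALTER DISKGROUP data DROP VOLUME volume1;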

To resize a volume that hosts an Oracle ACFS file system, you
must use the acfsutil size command instead of the ALTER
DISKGROUP VOLUME statement. For example, assuming the file
system is mounted at /acfsmounts/acfs1, the following command
grows it by 5GB:
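
$ /sbin/acfsutil size +5G /acfsmounts/acfs1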

ACFS Snapshots
An exciting feature bundled with ACFS is the ability to create
snapshots of the ASM dynamic volume; this allows users to
recover deleted files or even recover to a point in time in the
past.
ACFS snapshots are read-only, point-in-time copies of the ACFS
file system, taken while the file system is online. To perform point-in-time
recovery or even to recover deleted files, you need to know the
current data and changes made to the file. When you create a
snapshot, ASM stores metadata such as the directory structure
and the names of the files in the ASM dynamic volume. Along
with the metadata, ASM stores the location information for the
data blocks of the files; the actual data blocks are preserved only
as they are subsequently changed.
Once the snapshot is created, to maintain the consistency of the
snapshot, ASM updates it by recording the file changes. Starting
with Oracle Grid Infrastructure 12.1, you can create a snapshot
from a previously created ACFS snapshot.
The ASM Cluster File System supports POSIX and X/Open file
system APIs, and you can use traditional UNIX commands such as
cp, cpio, ar, access, dir, diff, and so on. ACFS supports standard
operating system backup tools, Oracle secure backup, and third-
party tools such as storage array snapshot technologies.

MANAGING ORACLE ASM FILES AND DIRECTORIES
As we explained earlier, ASM supports most file types that an
Oracle database needs, such as controlfiles, datafiles, and redo
log files. In addition, you can store RAC-specific files such as the
Oracle Cluster Registry files and voting files in ASM. You can also
store Oracle ASM Dynamic Volume Manager volumes in ASM.

ASM Filenames
You can use a fully qualified filename or an alias to refer to Oracle
ASM files.

Using a Fully Qualified Filename


Oracle gives new ASM files a filename generated by Oracle
Managed Files (OMF) called the fully qualified filename. This
filename represents the full path for the file in the ASM file
system.
A fully qualified ASM file has the following form:
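
+diskgroup/dbname/filetype/filetype_tag.file#.incarnation#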

Here’s an example:
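
+data/orcl/datafile/users.259.685366091

Here, users is the filetype tag (for a datafile, the tablespace name), 259 is the file number, and 685366091 is the incarnation number; the names and numbers are illustrative.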
Oracle uses default templates for creating different types of files
in ASM. These templates provide the attributes for creating a
specific type of file. You thus have the DATAFILE template for
creating datafiles, the CONTROLFILE template for creating
controlfiles, and the ARCHIVELOG template for creating archive
log files.

Alias Oracle ASM Filenames


Instead of using a fully qualified name, you can use an Alias
Oracle ASM filename to refer to ASM files or for creating files. An
alias name starts with the name of the disk group the file belongs
to, preceded by a plus sign, as shown here:
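
+data/orcl/my_users_01.dbf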

When you choose to create an ASM file with an alias filename,
Oracle internally gives the file a fully qualified filename as well.
You can add an alias name for an ASM file with a fully qualified
system-generated filename by using the ALTER DISKGROUP
command:
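
ALTER DISKGROUP data ADD ALIAS '+data/orcl/my_users_01.dbf'
FOR '+data/orcl/datafile/users.259.685366091';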

Creating and Referencing ASM Files


You can create ASM tablespaces and files by specifying disk group
locations using the Oracle Managed Files feature, or you can
explicitly specify the filename in the file creation statement.

Using a Default File Location (OMF Based)


You can use the OMF feature to specify a disk group as the
default location for creating ASM files. You can configure the
following initialization parameters to specify the default disk
group location in which to create various files:

DB_CREATE_FILE_DEST Specifies the default disk group location for data and temp files
DB_CREATE_ONLINE_LOG_DEST_n Specifies the default disk
group location for redo log files and controlfiles
DB_RECOVERY_FILE_DEST Specifies the default disk group
for a fast recovery area
CONTROL_FILES Specifies the disk group for creating
controlfiles

You can configure a default disk group with one of the
initialization parameters, as shown in the following example,
which sets “data” as the default disk group for creating datafiles
and temp files:
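
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+data';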

Once you configure the OMF-related initialization parameters, it's
easy to create a tablespace that uses ASM files, as shown here:
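
CREATE TABLESPACE mytblspace;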

Let's say you've set the DB_CREATE_FILE_DEST parameter to
point to the ASM disk group “data.” ASM will automatically create
the datafile for the mytblspace tablespace on ASM disks in the
disk group “data.”

Explicitly Specifying Oracle ASM Filenames


The other way to create an ASM file is to specify the filename
when you create the file. Here’s an example:
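
CREATE TABLESPACE mytblspace DATAFILE '+data' SIZE 200M;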

This statement will create the tablespace mytblspace with one
datafile stored in the disk group “data.”

Dropping an ASM File from a Disk Group


Use the ALTER DISKGROUP command to drop a file from a disk
group. In the following example, we drop the file by referring to it
with its alias name, but you can also use the fully qualified
filename:
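
ALTER DISKGROUP data DROP FILE '+data/orcl/my_users_01.dbf';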
Managing Disk Group Directories
ASM disk groups use a hierarchical directory structure
(automatically generated by Oracle) to store ASM files, as shown
here:
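
+data/orcl/controlfile/Current.260.685366093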

Here’s an explanation of this fully qualified filename:

The + sign refers to the root of the ASM file system.
The DATA directory is the parent directory for files in the data
disk group.
The ORCL directory is the parent directory for all files in the
orcl database.
The controlfile directory stores all controlfiles in the orcl
database.

Creating Directories
To store your aliases, you can create your own hierarchical
directory structures. You create directories below the disk group
level, and you must ensure that a parent directory exists before
creating subdirectories or aliases. Here’s an example:
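
ALTER DISKGROUP data ADD DIRECTORY '+data/mydir';
ALTER DISKGROUP data ADD DIRECTORY '+data/mydir/subdir';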

Dropping Directories
You can delete a directory with the ALTER DISKGROUP statement:
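
ALTER DISKGROUP data DROP DIRECTORY '+data/mydir' FORCE;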

The force option deletes a directory that contains alias names.
Note that you can’t remove a system-generated directory.

TIP
The DBMS_FILE_TRANSFER package contains procedures that
enable you to copy ASM files between databases or to transfer
binary files between databases.
ASM ADMINISTRATION AND MANAGEMENT
Administering an ASM instance is similar to managing a database
instance, but it involves fewer tasks. An ASM instance doesn’t
require a database instance to be running for you to administer it.
An ASM instance does not have a data dictionary. The ASM metadata is small,
and Oracle stores it in the ASM disk headers. ASM instances
manage the disk group metadata and provide the layout of the
datafiles in ASM to the Oracle instance.
The metadata that ASM stores in the disk groups includes the
following types:

Information relating to the disks that belong to each disk group
Names of the files in each disk group
Amount of space available in a disk group
Location of the extents in each disk group
Information pertaining to the Oracle ASM volume

You can use SQL*Plus to perform all ASM administration tasks in
the same way you work with a normal RDBMS instance.

NOTE
To administer ASM with SQL*Plus, you must set the ORACLE_SID
environment variable to the ASM SID before you start SQL*Plus.
The default ASM SID for a single-instance database is +ASM, and
the default SID for ASM on Oracle RAC nodes is +ASMnode#. ASM
instances do not have a data dictionary, so you must use
operating system authentication and connect as SYSASM or
SYSOPER. When connecting remotely through Oracle Net
Services, you must use a password file for authentication.
You can check the integrity of Oracle ASM across the cluster using
the Cluster Verification Utility (CVU), as shown here:
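
$ cluvfy comp asm -n all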

This command checks all nodes in the cluster. You can check a
single node or a set of nodes instead by replacing
the all parameter with [-n node_list].

Managing an ASM Instance


An ASM instance is designed and built as a logical extension of
the database instances, and both it and a database instance
share the same mechanism of instance management.
As with the database instance parameter file, an ASM instance
has a parameter file called a registry file that’s stored in the
<cluster name>/ASMPARAMETERFILE directory of the ASM disk
group specified to store the OCR and voting disk during
installation of Oracle Grid Infrastructure.
The SID for an ASM instance defaults to +ASM for a single-
instance database and +ASMnode# for Oracle RAC nodes. The
rules for filenames, default locations, and search orders that
apply to the database initialization parameter files also apply to
the ASM initialization parameter files. However, ASM instances
come with a separate set of initialization parameters that you
can’t set for a database instance.

Administering the ASM Instance


You start an ASM instance similar to how you start an Oracle
database instance. When connecting to an ASM instance with
SQL*Plus, you must set the ORACLE_SID environment variable to
the ASM SID.
The initialization parameter file for an ASM instance, which can be
a server parameter file (also known as an ASM parameter file),
must contain the parameter INSTANCE_TYPE = ASM to signal to
the Oracle executable that an ASM instance is starting and not a
database instance.
Apart from the disk groups specified by the ASM_DISKGROUPS
initialization parameter, ASM will automatically mount the disk
groups used to store the voting disk, OCR, and the ASM
parameter file.

Startup Modes The STARTUP command starts an ASM
instance with the set of memory structures, and it mounts the
disk groups specified by the initialization parameter
ASM_DISKGROUPS. If you leave the value of the
ASM_DISKGROUPS parameter blank, the ASM instance starts and
warns you that no disk groups were mounted. You can then
mount disk groups with the ALTER DISKGROUP MOUNT command
(similar to the ALTER DATABASE MOUNT command).
The following are the startup modes of an ASM instance:

FORCE Issues a SHUTDOWN ABORT of a running instance before restarting the ASM instance.
MOUNT (or OPEN) Starts the instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter; this is the default.
NOMOUNT Starts the instance without mounting any disk groups.
RESTRICT Starts the instance and mounts the disk groups in restricted mode, preventing database instances from connecting.

Other startup clauses have comparable interpretations for ASM
instances as they do for database instances. For example,
RESTRICT prevents database instances from connecting to this
ASM instance. OPEN is invalid for an ASM instance. NOMOUNT
starts up an ASM instance without mounting any disk group.

NOTE
An ASM instance doesn't have any data dictionary, and the only
possible way to connect to an ASM instance is via operating
system authentication using the system privileges SYSDBA, SYSASM, and SYSOPER.

You specify the operating system groups OSASM, OSDBA, and
OSOPER during the installation of the Oracle Grid Infrastructure,
and these operating system groups implement the SYSASM,
SYSDBA, and SYSOPER privileges for the Oracle ASM instance.
Oracle allows password-based authentication to connect to an
ASM instance, which requires that the ASM initialization
parameter REMOTE_LOGIN_PASSWORDFILE be set to a value
other than NONE.
By default, the Oracle Universal Installer will create a password
file for an ASM instance and new users will automatically be
added to the password file. Following this, users can connect to
an ASM instance over the network using Oracle Net Services.
Oracle connects as SYSASM to an ASM instance when connecting
via ASMCMD. You can use SQL*Plus to connect to the instance
and run simple SQL commands such as show sga and show
parameter <parameter name>. Here's an example:
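
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm

SQL> show sga
SQL> show parameter asm_diskgroups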

You can query the V$PWFILE_USERS dynamic view to list users in
the password file. Another possible way to list users in the
password file is to use the lspwusr command from the ASMCMD
command prompt. ASMCMD can be used to create and manage a
password file manually.

Shutting Down an ASM Instance Shutting down an ASM
instance is similar to how you shut down any other Oracle
database instance. You must, however, first shut down the
database instances using the ASM instance before you can shut
down the ASM instance.
When you issue a NORMAL, IMMEDIATE, or TRANSACTIONAL
shutdown command, ASM waits for any in-progress SQL to
complete. Once all ASM SQL is completed, all disk groups are
dismounted and the ASM instance is shut down in an orderly
fashion. If any database instances are connected to the ASM
instance, the SHUTDOWN command returns an error and leaves
the ASM instance running.
When you issue the SHUTDOWN ABORT command, the ASM
instance is immediately terminated. The instance doesn’t
dismount the disk groups in an orderly fashion. The next startup
requires recovery of ASM—similar to RDBMS recovery—to bring
the disk groups into a consistent state. The ASM instance also has
components similar to undo and redo (details are discussed later
in the chapter) that support crash recovery and instance
recovery.
If any database instance is connected to the ASM instance, the
database instance aborts because it does not get access to the
storage system that is managed by the ASM instance.
You can start and stop an ASM instance with the SQL*Plus,
ASMCA, ASMCMD, and SRVCTL utilities. SRVCTL uses startup and
shutdown options registered in the OCR to start or stop an ASM
instance. The following are examples of starting and stopping the
ASM instance +ASM1 on cluster node racnode01 using the
SRVCTL utility.

NOTE
To manage ASM instances, use the SRVCTL executable in the
Oracle Grid Infrastructure home. You can’t use the SRVCTL
executable located in the Oracle RAC or Oracle Database home
for managing ASM.

Use the following command to start the ASM instance on all nodes
in a cluster:
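
$ srvctl start asm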

Use the following command to start the ASM instance on a single
node in the cluster (in this case, the racnode01 node):
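
$ srvctl start asm -n racnode01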

The following command shows how to stop the ASM instance on
all active nodes in the cluster:
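
$ srvctl stop asm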

You can stop the ASM instance on a specific node with the
following command:
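
$ srvctl stop asm -n racnode01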
In addition to starting up and shutting down an ASM instance, the
SRVCTL utility also lets you perform other administrative tasks, as
described in the following examples.
Use the following command to add an ASM instance:
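
$ srvctl add asm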

Use the following command to remove an ASM instance:
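
$ srvctl remove asm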

Use the following command to check the configuration of an ASM
instance:
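
$ srvctl config asm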

Use the following command to display the status of an ASM
instance:
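
$ srvctl status asm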

You can start and stop an ASM instance using SQL*Plus, just as
you do for a regular Oracle Database instance. In order to connect
to the ASM instance, make sure to set the ORACLE_SID
environment variable to the Oracle ASM SID. The default SID for
ASM on a RAC node is +ASMnode_number,
where node_number is the number of the node hosting the ASM
instance. Here’s how you start up an ASM instance with SQL*Plus:
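
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm

SQL> startup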

This command mounts the ASM disk groups. Therefore, MOUNT is
the default option. You can also specify the FORCE, NOMOUNT, or
RESTRICT parameter with the STARTUP command.
Once the instance starts up, you mount the disk groups with the
ALTER DISKGROUP ALL MOUNT command. This command mounts all
the disk groups specified by the ASM_DISKGROUPS initialization
parameter.
To shut down an ASM instance with the SQL*Plus utility, use the
following commands:
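
$ sqlplus / as sysasm

SQL> shutdown
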
Shutting down with just the SHUTDOWN command without any
options is the same as a SHUTDOWN NORMAL command. You can
also specify the IMMEDIATE, TRANSACTIONAL, or ABORT option
with the SHUTDOWN command.
Be aware that you must first stop all the database instances
connecting to the ASM instance, or else you’ll receive an error,
and the ASM instance will continue running. Similarly, if any
Oracle ACFS file systems are mounted in Oracle ADVM volumes,
you must first dismount those file systems before shutting down
the ASM instance.
You can use the ASMCMD command-line utility to start and stop
the ASM instance. Here are examples of starting and stopping an
ASM instance using ASMCMD.
Use the following command to start the ASM instance in a mount
state:
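
$ asmcmd startup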

Use the following command to shut down the ASM instance
immediately:
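
$ asmcmd shutdown --immediate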

NOTE
You can’t start or shut down an ASM instance alone in the Oracle
RAC database system if the OCR and voting disks are stored
inside the ASM disk group. You must use the crsctl command to
start or stop the CRS, which will start/stop the ASM instance also.

Oracle ASM File Access Control for Disk Groups


You can configure Oracle ASM File Access Control to restrict
access to ASM disk groups to specific ASM clients that connect as
SYSDBA. Clients in this instance are usually a database that uses
ASM storage.
Using the access control, ASM determines the additional
privileges it must give to databases that authenticate as SYSDBA
on an Oracle ASM instance.

Setting Up ASM File Access Control You must set the
ACCESS_CONTROL_ENABLED disk group attribute (default is false)
and ACCESS_CONTROL_UMASK disk group attribute (default is
066) to set up Oracle ASM File Access Control for a disk group.
The ACCESS_CONTROL_UMASK attribute determines which
permissions are masked out when you create an ASM file; the
permissions are for the owner of the file, users in the same user
group as the owner, and others. The attribute applies to all files in
a disk group.
The following example shows how to enable ASM File Access
control for a disk group named data1, and how to set the umask
permissions to 026, which means read-write to the owner, read to
users in the group, and no access to others not in the same group
as the owner:
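
ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.enabled' = 'true';
ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.umask' = '026';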

Managing Oracle ASM File Access Control Managing
ASM File Access Control involves adding and dropping user
groups, adding and dropping members, adding and dropping
users, and other related tasks. You use the ALTER DISKGROUP
statement to perform all of these tasks. We show some typical
access control management tasks here.
You can add an ASM user group to a disk group with the following
ALTER DISKGROUP command. Before you run the command,
make sure that the OS users you specify with the MEMBER clause
are present in the disk group.
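
ALTER DISKGROUP data ADD USERGROUP 'asm_grp1' WITH MEMBER 'oracle1', 'oracle2';
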
You can add users to a user group with the following command:
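
ALTER DISKGROUP data MODIFY USERGROUP 'asm_grp1' ADD MEMBER 'oracle3';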

You can also add OS users to an ASM disk group so that the users
have access privileges on the disk group:
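
ALTER DISKGROUP data ADD USER 'oracle1';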

You can modify the permissions of an ASM file thus:
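
ALTER DISKGROUP data SET PERMISSION OWNER = read write, GROUP = read only,
OTHER = none FOR FILE '+data/orcl/my_users_01.dbf';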

ASM-Related Dynamic Performance Views


Although ASM does not have a data dictionary, it provides
dynamic performance views stored inside memory that can be
used to extract metadata information from an ASM instance. Here
are some short descriptions of the important dynamic
performance views. Refer to the Oracle documentation for a
complete list of the ASM dynamic performance views.

V$ASM This view displays the instance information of the
ASM instance you are connected to.
V$ASM_DISK This view shows every disk discovered by the
Oracle ASM instance, by performing a disk discovery upon being
queried.
V$ASM_DISKGROUP This view lists the disk groups created
inside the ASM along with metadata information such as free
space, allocation unit size, and the state of the disk group (see
the sample query after this list).
V$ASM_CLIENT This view shows the databases using the disk
groups being managed by the ASM instance.
V$ASM_FILE This view lists the files created within the disk
groups listed in the V$ASM_DISKGROUP view.
V$ASM_ALIAS This view lists the user-friendly name of the
ASM files listed in the V$ASM_FILE view. This view is useful in
identifying the exact name of the ASM file because the
V$ASM_FILE view only lists the file number.
V$ASM_DISK_IOSTAT This view lists the disk I/O
performance statistics for each disk listed in the
V$ASM_DISK view.
V$ASM_ACFSVOLUMES This view lists the metadata
information of the ASM dynamic volumes.
V$ASM_OPERATION This view displays the current (long-
running) operations, such as any rebalancing happening on the
disk groups listed in the V$ASM_DISKGROUP view. This view is
useful to monitor the rebalancing operation in ASM.
V$ASM_USER This view shows the operating system user
names of connected database instances and the names of the file
owners.
V$ASM_ESTIMATE This view shows an estimate of the work
involved in performing ASM disk group rebalance and resync
operations, without actually performing the operations.
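
For example, you can quickly check the state and free space of your disk groups by querying V$ASM_DISKGROUP from the ASM instance:

SQL> SELECT group_number, name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;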

ASM Background Processes


Because ASM is built using the RDBMS framework, the software
architecture is similar to that of Oracle RDBMS processes. The
ASM instance is built using various background processes, and a
few of these processes specific to the ASM instance manage the
disk groups, ASM Dynamic Volume Manager, and ASM Cluster File
System in ASM. The following listing shows the background
processes of an ASM instance having a SID of +ASM:
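
$ ps -ef | grep asm_ | grep -v grep

The exact process list varies by release and configuration, but it includes processes such as asm_pmon_+ASM, asm_smon_+ASM, asm_rbal_+ASM, and asm_gmon_+ASM.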
Look at the background processes closely, and you'll notice that
many, such as smon and pmon, are the same ones used in
RDBMS instance management. However, additional processes, such as rbal and
gmon, are specific to ASM instances.
Let’s take a closer look at the ASM-specific processes. You will see
additional background processes such as VDBG, VBGn, and VMB
when ASM dynamic volumes are created in the ASM instance.
Some important ASM background processes are explained here:

RBAL This is the Rebalancing background process. It is
responsible for the rebalancing operation and also coordinates
the ASM disk discovery process.
GMON This is the Group Monitor background process. It
manages disk groups by marking a disk group “offline” or even by
dropping the disk group.
ARBn Whereas RBAL coordinates the rebalancing of disk
groups, ARBn actually performs the rebalancing operations.
VMB This is the Volume Membership Background process,
which is responsible for cluster membership with the ASM
instance. The ASM instance starts this background process when
ASM dynamic volumes are created.
VDBG This is the Volume Driver background process. It
works with the Dynamic Volume Driver to provide the locking and
unlocking of volume extents. This is a vital process, and it will
shut down the ASM instance if killed unexpectedly.
VBGn This is the Volume Background process. VBG from the
ASM instance communicates with operating system volume
drivers. It handles messaging between ASM and the operating
system.
XDMG This is the Exadata Automation Manager. XDMG
monitors all configured Exadata cells for state changes, such as a
bad disk getting replaced. Its primary tasks are to watch for
inaccessible disks and cells and, when they become accessible
again, to initiate the ASM ONLINE operations.

ASM Processes in the Database Instance


Each database instance using ASM has two background
processes: ASMB and RBAL. The ASMB background process runs
in a database instance and connects to a foreground process in
an ASM instance. Over this connection, periodic messages are
exchanged to update statistics and to verify that both instances
are healthy. All extent maps describing open files are sent to the
database instance via ASMB. If an extent of an open file is
relocated or the status of a disk is changed, messages are
received by the ASMB process in the affected database instances.
During operations that require ASM intervention, such as the
creation of a file by a database foreground process, the process
connects directly to the ASM instance to perform the operation.
Each database instance maintains a pool of connections to its
ASM instance to avoid the overhead of reconnecting for every file
operation.
A group of slave processes in the range 0001 to 0010 establishes
a connection to the ASM instance, and these slave processes are
used as a connection pool for database processes. Database
processes can send messages to the ASM instance using the slave
processes. For example, opening a file sends the open request to
the ASM instance via a slave. However, the database doesn’t use
the slaves for long-running operations, such as those for creating
a file. The slave connections eliminate the overhead of logging
into the ASM instances for short requests. These slaves are
automatically shut down when not in use.

Communication Between a Database and ASM Instance
ASM is like a volume manager for the Oracle database that
provides the file system for Oracle database files. The ASM
instance uses two data structures—Active Change Directory (ACD)
and Continuing Operations Directory (COD)—to manage the
metadata transactions it contains.
When you create a new tablespace or just add a new datafile to
an existing tablespace, the Oracle database instance requests an
ASM instance to create a new ASM file that the Oracle database
instance can use. Upon receiving a new file-creation request, the
ASM instance adds an entry into the Continuing Operation
Directory and allocates space for the new file in the ASM disk
group. ASM creates extents and then shares the extent map with
the database instance; in fact, it is the ASMB background process in
the database instance that receives this extent map.
Once the database instance opens the file successfully, the ASM
instance commits the new file creation and removes the entry
from the Continuing Operation Directory because new file
information is now stored in the disk headers. Here is an
important concept you need to understand: most people see
database I/O being redirected to the ASM instance and the ASM
instance performing I/O on behalf of the database instance;
however, this is incorrect. The Oracle database instance performs
I/O directly onto the ASM files, but it has to reopen the newly
created ASM file once the ASM instance confirms to
the database instance that the new file has been committed.
Active Change Directory (ACD) and Continuing Operations
Directory (COD)
The Active Change Directory (ACD) is a journaling mechanism that
provides functionality similar to the redo logs of an Oracle database. The
ACD records all ASM metadata changes, allowing the ASM instance to roll
forward in the event of unexpected failures—either due to operation
failures or an instance crash. The ACD is stored as a file in one of the
ASM disks.
ASM metadata is triple-mirrored (high redundancy), and the ACD can grow
within the disk group as new instances are added. Oracle also mirrors ASM
header metadata in different portions of the disk group. Transaction
atomicity for ASM is guaranteed by the ACD.
The Continuing Operations Directory (COD) is a memory structure in the
ASM instance that maintains the state information of the active ASM
operations and changes, such as rebalancing, new disk addition, and disk
deletion. In addition, file creation requests from clients such as the
RDBMS instance use COD to protect integrity. The COD records are either
committed or rolled back upon the success or failure of the ASM operation.
COD is similar to the undo tablespace in an Oracle database; however,
users can't manually roll the data forward or back.

ASM Initialization Parameters


Just like a database instance, an ASM instance uses both
mandatory and optional initialization parameters. You can set the
ASM initialization parameters in both the database and the ASM
instances, but some of the parameters are only valid for an ASM
instance.
You can set the following initialization parameters in an ASM instance.
You can’t specify parameters that start with “ASM_” in database
instances. You can use a pfile or an spfile as the ASM instance
parameter file. If you use an spfile, you must place it in a disk
group or in a cluster file system. Oracle recommends using an
spfile and storing it in a disk group. By default, the spfile for an
ASM instance is stored in the Oracle Grid Infrastructure home
($ORACLE_HOME/dbs/spfile+ASM.ora).

INSTANCE_TYPE This parameter instructs the Oracle
executables about the instance type. For an ASM instance, you
must set the value of the parameter to ASM
(INSTANCE_TYPE=ASM). For an ASM instance in an Oracle Grid
Infrastructure home, this parameter is optional. In Oracle
Database 12c, this parameter has a third value: ASMPROXY. You
must set the value of this parameter to ASMPROXY for an ASM
proxy instance.
ASM_POWER_LIMIT Sets the power limits for disk
rebalancing. This parameter defaults to 1. Valid values are 0
through 1024 for disk groups whose compatibility is set to
11.2.0.2 or higher. Setting the value of this parameter to 0 will, of
course, disable rebalancing, which isn’t recommended! This
parameter is dynamic, and you can use a SQL statement (alter
diskgroup data online disk dta_000 power 100, for example)
to modify the value on the fly. When you manually rebalance
disks with the ALTER DISKGROUP…REBALANCE statement, the
value for the POWER clause can’t exceed the value of the
ASM_POWER_LIMIT parameter. We discuss disk rebalancing in
detail later in this chapter.
ASM_DISKSTRING A comma-separated list of strings that
limits the set of disks that ASM discovers. This parameter accepts
wildcard characters. Only disks that match one of the strings are
discovered. The string format depends on the ASM library in use
and the operating system. The standard system library for ASM
supports glob pattern matching. If you are using ASMLib to create
ASM disks, the default path will be ORCL:*. For example, on a
Linux server, where you aren’t using ASMLib or the new Oracle
ASM Filter Driver (Oracle ASMFD), you can set this parameter in
this way: ASM_DISKSTRING = /dev/rdsk/mydisks/*. This will limit
the discovery process to include just the disks in the
/dev/rdsk/mydisks directory.

NOTE
Many DBAs have issues properly configuring the
ASM_DISKSTRING parameter. You must restrict it to the smallest
set of disks where the ASM disk groups are located, and not
/dev/*; otherwise, the ASM instance is forced to scan every device
in that location. Many of the devices there are not disks at all, so
the disk discovery process will take much longer than necessary.

ASM_DISKGROUPS A list of the names of disk groups to be
mounted by an ASM instance at startup or when the ALTER
DISKGROUP ALL MOUNT statement is used. If this parameter is
not specified, no disk groups are mounted except the ASM disk
groups that store the spfile, OCR, and voting disk. This parameter
is dynamic, and when a server parameter file (spfile) is used,
altering this value isn’t required. Here’s an example:
ASM_DISKGROUPS = DATA, FRA.
ASM_PREFERRED_READ_FAILURE_GROUPS This
parameter allows an ASM instance in extended cluster
configuration, where each site has its own dedicated storage, to
read data from the local disks rather than always reading data
from the primary. Prior to Oracle 11g, ASM always read data from
the primary copy regardless of the same extent being available
on the local disks. This feature is very useful for performance in
Oracle extended clusters. You should choose the number of
failure groups carefully when configuring ASM for extended
clusters because this will have a direct impact on ASM read
performance.
DIAGNOSTIC_DEST You set this parameter to the directory
where you want the ASM instance diagnostics to be stored. The
default value for this parameter is the $ORACLE_BASE directory.
As with the database diagnostic directory setup, you’ll find
directories such as alert, trace, and incident under the main
diagnostic directory.

By default, for all ASM instances, Automatic Memory Management
is enabled. You don't need to set the MEMORY_TARGET
parameter, however, because by default Oracle ASM uses a value
for this parameter that works for most environments. If you want
to explicitly control the memory management for ASM, you can
set this parameter in the ASM spfile.

MANAGING ASM DISK GROUPS


Oracle provides different ASM tools such as ASMCA, ASMCMD,
Oracle Grid Control, and SQL*Plus to create and manage disk
groups inside the ASM instance.
It’s important to understand the performance and availability
requirements for an ASM disk group before creating the ASM disk
group (for example, the redundancy level for the ASM disk group).
If the underlying storage isn’t protected by a RAID configuration,
you should use ASM mirroring by choosing the correct
redundancy for the ASM disk group.
ASMCA is a graphical tool that’s self-explanatory and does not
require much expertise, but you should click the Show Advanced
Options button to change the different attributes of the ASM disk
group. ASMCMD uses XML-style tags to specify the ASM disk
group name, disk location, and attributes. You can specify the
XML tags as inline XML, or you can create an XML file that you can
use with the mkdg command inside the ASMCMD.
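For illustration, here is a minimal inline XML configuration for mkdg (the disk group name, disk paths, and attribute value are assumptions):

ASMCMD> mkdg '<dg name="data" redundancy="normal">
  <fg name="fg1"><dsk string="/dev/disk1"/></fg>
  <fg name="fg2"><dsk string="/dev/disk2"/></fg>
  <a name="compatible.asm" value="12.1"/>
</dg>'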
In SQL*Plus, use the CREATE DISKGROUP command to create a
disk group in an ASM instance. Before creating a disk group, the
ASM instance will check that the disk/raw partition being added in
a disk group is addressable. If the disk/raw partition is
addressable and is not being used by any other group, ASM writes
specific information in the first block of the disk or raw partition
being used to create the disk group.
You can specify different attributes for an ASM disk group that
impact its performance and availability. You specify disk group
attributes with the ATTRIBUTE clause of the ALTER DISKGROUP or
CREATE DISKGROUP statement, as shown here:
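Representative statements (the names and attribute values are illustrative):

CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/disk1', '/dev/disk2'
  ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '12.1';

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '12.1';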

You can also set the attributes with the ASMCMD
commands setattr and mkdg or through ASMCA. You can query
the attributes of an ASM disk group from the V$ASM_ATTRIBUTE
dynamic performance view. You can also do so with the
ASMCMD lsattr command. Following is a list of important ASM
disk group attributes you’re most likely to use:

AU_SIZE Used to specify the allocation unit size of the ASM
disk group being created. You can't change the allocation unit of
an existing disk group, so make sure you specify the correct
allocation unit size while creating an ASM disk group. By default,
Oracle uses 1MB for the allocation unit, but most Oracle users set
this to 4MB.
DISK_REPAIR_TIME This attribute is related to the
performance and availability of the disk group. It specifies the
amount of time ASM waits before it drops an offline ASM disk and
rebalances the disk group.
COMPATIBLE.ADVM This attribute is required if the
intended disk group will be used to create an ASM dynamic
volume.
CELL_SMART_SCAN_CAPABLE This attribute is only valid
for Oracle Exadata Grid disks, and it enables Smart Scan
predicate offload processing.
THIN_PROVISIONED This attribute determines whether or
not the unused storage space left after a disk group rebalancing
operation is discarded. By default, the value of this parameter
is false, meaning unused storage is retained. If you want the
unused storage to be discarded, set the value of this attribute
to true.
PHYS_META_REPLICATED This attribute tracks a disk
group's replication status. If the Oracle ASM disk group
compatibility is 12.1 or higher, Oracle replicates the physical
metadata of each disk; that is, it keeps a copy of metadata such
as the disk header and the free space table blocks. Oracle
automatically sets this attribute's value to true once the entire
physical metadata of all disks in a disk group has been replicated;
otherwise, the attribute has a value of false. You can't set or
change this attribute's value yourself.
ASM mounts the disk group automatically when the CREATE
DISKGROUP command is executed, and the disk group name is
also added to the ASM_DISKGROUPS parameter in the spfile so
that the newly created disk group is mounted whenever the ASM
instance is restarted later.

If you want ASM to mirror files, define the redundancy level while
creating the ASM disk group. Oracle provides two redundancy
levels: normal redundancy and high redundancy. In normal
redundancy, each extent has one mirrored copy; in high
redundancy, each extent has two mirrored copies, each stored in
a different failure group.
Disks in a disk group should be of a similar size with similar
performance characteristics. It’s always advisable to create
different disk groups for different types of disks. All disks in disk
groups must be of the same size to avoid wasting disk space in
failure groups. All disks that you’ll use to create disk groups must
match the ASM_DISKSTRING parameter to avoid disk
discovery issues.

Creating a Disk Group


In the following example, we create a disk group named DGA with
two failure groups, FLGRP1 and FLGRP2, using four raw partitions
—namely, /dev/diska3, /dev/diska4, /dev/diska5, and /dev/diska6:
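Here is a sketch of such a statement, based on the names just described:

CREATE DISKGROUP DGA NORMAL REDUNDANCY
  FAILGROUP FLGRP1 DISK '/dev/diska3', '/dev/diska4'
  FAILGROUP FLGRP2 DISK '/dev/diska5', '/dev/diska6';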

After creating a disk group, you may need to alter the disk group
depending on your business requirements. Oracle allows you to
perform create, drop/undrop, resize, rebalance, and
mount/dismount operations on disk groups after they’ve been
created.
You can optionally set the SECTOR_SIZE disk group attribute when
creating a disk group. The value you set for SECTOR_SIZE
determines the sector size for the disks in a disk group. You can
set the sector size only when creating a disk group. If your disks
support it, you may set the value of the SECTOR_SIZE parameter
to 512 or 4096. The default value for sector size depends on the
operating system platform.

Adding Disks to a Disk Group


Whenever you add disks to a disk group, Oracle internally
rebalances the I/O load. The following example shows you how to
add disks to an existing disk group. Oracle uses the ADD clause
to add disks or a failure group to an existing disk group. In this
example, the raw partition /dev/diska7 is being added to the
existing group DGA:
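A minimal sketch of the statement:

ALTER DISKGROUP DGA ADD DISK '/dev/diska7';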

No failure group is defined in the statement, so the database
assigns the disk to its own failure group. You can also use the
ASMCA tool to add disks, as shown in the following example:
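A plausible silent-mode invocation (the flag names are assumptions based on ASMCA's silent-mode syntax; verify against your release):

$ asmca -silent -addDisk -diskGroupName mydg -diskList '/devices/disk04,/devices/disk05'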

This command adds the two disks, disk04 and disk05, to the disk
group named mydg.

Dropping, Undropping, Resizing, and Renaming Disks in a Disk Group
Oracle provides the DROP DISK clause in conjunction with the
ALTER DISK GROUP command to drop a disk within a disk group.
Oracle internally rebalances the files during this operation. Oracle
fails the DROP operation if other disks in the disk group don’t
have enough space.
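For example (the ASM disk name is illustrative):

ALTER DISKGROUP DGA DROP DISK DGA_0003;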
If you are adding and dropping disks from a disk group, it is
advisable that you add first and then drop, and both operations
should be performed in a single ALTER DISKGROUP statement in
order to reduce the time the database spends on rebalancing the
data.
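A sketch of such a combined statement (names are illustrative):

ALTER DISKGROUP DGA ADD DISK '/dev/diska8' DROP DISK DGA_0003;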
Oracle also provides force options to drop a disk within a disk
group, even if ASM can’t read or write to those disks. You can’t
specify the force option with external redundancy disk groups.

Undropping Disks in a Disk Group


If a disk dropping operation is pending, you can cancel the drop
operation with the ALTER DISKGROUP...UNDROP DISK statement,
as shown here:
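Here is a sketch, using the DGA disk group from the earlier examples:

ALTER DISKGROUP DGA UNDROP DISKS;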

Replacing Disks in a Disk Group


When a disk is damaged or missing, you normally perform the
two-step operation of dropping the affected disk and then
replacing it with a new disk. However, starting with the
12c release, you can consolidate the two steps into a single step
of replacing the damaged or missing disk. The following example
shows how you can replace the disk named disk5 with another
disk named disk6:
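A sketch (assuming /dev/disk6 is the path of the replacement disk; the disk group name is illustrative):

ALTER DISKGROUP DGA REPLACE DISK disk5 WITH '/dev/disk6' POWER 3;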

You can include a POWER clause with the REPLACE DISK
command, and the clause works the same way as when you use it
in a disk-rebalance command.

Resizing the Disks


Oracle provides a RESIZE clause that you can specify with the
ALTER DISKGROUP statement to resize a disk group, resize any
specific disk in a disk group, or resize the disks within a specific
failure group.
Resizing is useful for reclaiming disk space. For example, if the
SIZE specified for the disks when the disk group was created was
less than the actual disk size and you later want to claim the full
size of the disks, you can issue the command without specifying a
SIZE; Oracle will then use the size returned by the operating
system. Here's an example:
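The following sketch resizes every disk in the DGA disk group to the size reported by the operating system:

ALTER DISKGROUP DGA RESIZE ALL;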
Renaming a Disk Group
You can rename an ASM disk group by first dismounting all disk
groups on all the nodes in the cluster and then running
the renamedg command on the disk group. Following is an
example that shows how to rename a disk group:

1. Use the renamedg command to rename the disk group
dgroup1 to the new disk group name dgroup2:
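A sketch, following the documented renamedg syntax:

renamedg dgname=dgroup1 newdgname=dgroup2 verbose=true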

2. Because the renamedg command doesn't update the file
references in the database, the original disk group isn't
automatically deleted after you run this command. You can check
the status of the old disk group with the following command:
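A sketch, assuming the 12c srvctl syntax for disk group resources:

$ srvctl status diskgroup -diskgroup dgroup1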

3. Manually delete the old disk group resource with the following
command:
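A sketch:

$ srvctl remove diskgroup -diskgroup dgroup1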

4. Rename the disks in the new disk group with the following
commands:
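A sketch, assuming the RENAME DISKS clause introduced in 12c (the disk group must be mounted in restricted mode):

SQL> ALTER DISKGROUP dgroup2 MOUNT RESTRICTED;
SQL> ALTER DISKGROUP dgroup2 RENAME DISKS ALL;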

ADMINISTERING ACFS
You can use ASM tools such as ASMCA, ASMCMD, OEM, and
SQL*Plus to create and manage ACFS. Creating an ACFS file
system is just like creating a file system with any traditional
volume manager. In a traditional volume manager, you first
create a volume group and then the volumes. Finally, you create
the file system and then mount the file system.
Just like an allocation unit in the ASM disk group, volume
allocation unit is the smallest storage unit that can be allocated in
an ASM dynamic volume. ASM allocates stripes in each volume
allocation unit, where each stripe is the equivalent of the volume
extent size, which is directly related to the allocation unit size of
the underlying ASM disk group. By default, the size of one volume
extent is 64MB for a 1MB allocation unit size disk group. When
you create an ASM dynamic volume, you specify the number of
stripes (also known as stripe columns) and the width of each
stripe because Oracle internally uses the SAME (Stripe and Mirror
Everything) approach to stripe and mirror data. Oracle will then
distribute the file being stored inside the ACFS into chunks, with a
stripe width size of 128KB (the default stripe width) on each
volume extent.
For example, if you create an ASM volume of 400MB and store a
1MB file initially with default settings, Oracle will create a volume
extent size of 64MB and each volume allocation unit will be of
256MB (the number of stripes multiplied by the volume extent
size). Because Oracle allocates space in multiples of the volume
allocation unit, it will allocate 512MB for the ASM dynamic volume
although you requested 400MB. The ASM dynamic volume can be
extended later on. ASM will distribute the file into eight chunks of
128KB and store them on volume extents. ASM stripes the volume
extents in the same way on the available disks inside the ASM
disk group; hence, it provides very efficient I/O operations.
You can use the V$ASM_ACFSVOLUMES and V$ASM_FILESYSTEM
dynamic performance views of the ASM instance to display the
ACFS file system information on the connected ASM instance.

Setting Up ACFS
Follow the steps given next to create the ASM Cluster File System:

1. Create an ASM disk group with the attribute


COMPATIBLE.ADVM set to a value of 11.2 or higher
(Oracle introduced ACFS in Oracle 11g Release 2), because
setting this attribute tells ASM that this disk group can store ASM
dynamic volumes. If this attribute is not set, you can’t create a
dynamic volume in this ASM disk group. You can use either
ASMCA, ASMCMD, OEM, or SQL*Plus to create the ASM disk group.
If you’re using ASMCA, you can set the disk group attribute by
clicking the Show Advanced Options button. Other ASM tools
allow you to specify the attribute at the command line.
2. Once the ASM disk group is created, you need to create the
ASM dynamic volume in the ASM disk group created in the
previous step. Use the ASMCMD volcreate command for this, as
shown here:
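A sketch (the disk group and volume names are illustrative):

ASMCMD> volcreate -G acfsdg -s 10G acfsvol1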

3. Create the required OS directory structure to mount the newly
created ASM dynamic volume.
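For example, using the mount point referenced later in this procedure:

# mkdir -p /acfsmounts/acfs1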
4. Once you have created the directory, you need to create the
ACFS file system using the operating system command mkfs.
Make sure you use the following syntax to create the ACFS-type
file system on the ASM dynamic volume:
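A sketch (the volume device name under /dev/asm is generated by ADVM, so the exact suffix shown here is illustrative):

# mkfs -t acfs /dev/asm/acfsvol1-123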

5. As an optional step, you can register the newly created file


system with the acfsutil registry command:
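A sketch, continuing with the same illustrative device name:

# /sbin/acfsutil registry -a /dev/asm/acfsvol1-123 /acfsmounts/acfs1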

In this example, /acfsmounts/acfs1 is the name of the OS
directory you created in step 3 in order to mount your new
dynamic volume. Although purely optional, registering the file
system in this manner offers a couple of benefits. First, you don’t
need to manually mount the new file system on each cluster
member. Second, the new file system is automatically mounted
when the Oracle Clusterware or the server restarts.
6. Now mount the newly created file system using
the mount operating system command. Make sure you use the
ACFS file system type to mount the file system. Here’s an
example:
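A sketch, again using the illustrative device name:

# mount -t acfs /dev/asm/acfsvol1-123 /acfsmounts/acfs1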

7. Once you mount the new file system, make sure you set the
permissions on the file system appropriately so that users who
need to access it can do so. For example, you can set the
permissions for the user oracle in the following manner:
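A sketch (the oinstall group is an assumption; use the group that owns your Oracle installation):

# chown -R oracle:oinstall /acfsmounts/acfs1
# chmod 775 /acfsmounts/acfs1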

Creating an ACFS Snapshot


You use ASM tools to create and manage ACFS snapshots. You
can use the acfsutil command-line utility to create and manage
snapshots. Here is an example of creating an ACFS snapshot:
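A sketch (the snapshot name snap1 is illustrative):

$ /sbin/acfsutil snap create snap1 /acfsmounts/acfs1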

The snapshot-creation process will create a hidden directory
(.ACFS) and a directory structure of snaps/<snapshot name>
inside the hidden directory. In our example, ASM will create the
following directory structure:
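Continuing the illustrative snap1 example:

/acfsmounts/acfs1/.ACFS/snaps/snap1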

For all its exciting features, ACFS comes with some restrictions
that you must be aware of before using it.
Although you can create other file systems on an ASM dynamic
volume, Oracle only supports ACFS as a cluster file system on an
ASM dynamic volume. Partitioning of an ASM dynamic volume is
not supported, so you cannot use the fdisk command to partition
an ASM dynamic volume device. Apart from this, you should not
use ASMLib over an ASM dynamic volume because Oracle does
not support this configuration. You can use multipath devices to
create an ASM disk group, but using multipathing on ASM
dynamic volumes is prohibited.
ASM Fast Mirror Resync
Prior to Oracle 11g, whenever ASM was not able to write the data extent
to the ASM disk, it took the ASM disk offline. In addition to this, further
reads on this disk were prohibited and ASM would re-create the extents
from the mirror copies stored on the other ASM disks. When you added
the disk back to the disk group, ASM would perform this rebalancing
operation again to reconstruct all the data extents. This rebalancing
operation is very time consuming and also has an impact on the response
time of the ASM disk group.
Starting with Oracle 11g, the database does not immediately drop the
offline ASM disk; instead, it waits and tracks the data extents modified,
up to the time specified by the DISK_REPAIR_TIME attribute of the
associated disk group. When the
failure is repaired within this timeframe, ASM will reconstruct the modified
data extents only, rather than requiring a complete rebalancing of the
whole disk group, thus resulting in an overall faster and more efficient
rebalancing operation.
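For example, the following sketch sets the tracking window to 4.5 hours (the value is illustrative):

ALTER DISKGROUP DGA SET ATTRIBUTE 'disk_repair_time' = '4.5h';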

ASM DISK REBALANCING


ASM doesn’t require any downtime during storage configuration
and reconfiguration—that is, you can change the storage
configuration without having to take the database offline. ASM
automatically redistributes file data evenly across all the disks of
the disk group after you add or drop disks from a group. This
operation is called disk rebalancing and is transparent to the
database.
A rebalancing operation evenly spreads the contents of every file
across all available disks in that disk group. The operation is
driven by space usage in the disks and not based on the I/O
statistics on those disks. It is invoked automatically when needed,
and no manual intervention is required during the operation. You
can also choose to run the operation manually or change a
running rebalancing operation.
Increasing the number of background slave processes responsible
for the operation can speed up the rebalancing operation. The
background process ARBx is responsible for disk rebalancing
during storage reconfiguration. To increase the number of slave
processes dynamically, you use the init.ora parameter
ASM_POWER_LIMIT. It is recommended that you perform
rebalancing using only one node when running in Oracle RAC; you
can do this by shutting down any unused ASM instances. Figure 6-
4 shows the rebalancing functionality.

FIGURE 6-4. ASM rebalancing


If you don’t specify the POWER clause in an ALTER DISKGROUP
command, or when adding or dropping a disk implicitly invokes a
rebalance, the rebalance power defaults to the value of the
ASM_POWER_LIMIT initialization parameter. You can adjust this
parameter dynamically.
The higher the value of the ASM_POWER_LIMIT parameter, the
faster a rebalancing operation may complete. Lower values cause
rebalancing to take longer but consume fewer processing and I/O
resources, leaving these resources available for other
applications, such as the database. The default value of 1
minimizes disruption to other applications. The appropriate value
is dependent on your hardware configuration as well as the
performance and availability requirements.
If a rebalance is in progress because a disk is manually or
automatically dropped, increasing the power of the rebalance
shortens the window during which redundant copies of that data
on the dropped disk are reconstructed on other disks.
The V$ASM_OPERATION view provides information that can help
you calibrate the value of the ASM_POWER_LIMIT parameter, and
thus the power of rebalancing operations.
The V$ASM_OPERATION view also gives an estimate
(EST_MINUTES column) of the amount of time remaining for the
rebalancing operation to complete. You can see the effect of
changing the rebalance power by observing the change in the
time estimate. The column EST_WORK gives you an estimate of
the number of allocation units the rebalance operation must
perform. For each resync and rebalance operation, the database
updates the PASS column, and the column can show a value of
RESYNC, REBALANCE, or COMPACT.

Manually Rebalancing a Disk Group


You can manually rebalance the files in a disk group using the
REBALANCE clause of the ALTER DISKGROUP statement. This
would normally not be required, because ASM automatically
rebalances disk groups when their composition changes. You
might want to perform a manual rebalancing operation, however,
if you want to control the speed of what would otherwise be an
automatic rebalancing operation.
The POWER clause of the ALTER DISKGROUP...REBALANCE
statement specifies the degree of parallelization, and thus the
speed of the rebalancing operation. It can be set to a value from 0
to 1024 (the default value is 1). A value of 0 halts a rebalancing
operation until the statement is either implicitly or explicitly
invoked again. The default rebalance power is set by the
ASM_POWER_LIMIT initialization parameter. You can also use the
REBALANCE clause in an ALTER DISKGROUP command that adds,
drops, or resizes ASM disks.

NOTE
ASM automatically disables the rebalancing feature when you
create a disk group with REBALANCE POWER 0. If you add more
disks to this disk group, ASM will not distribute data to the newly
added disks. Also, when you remove disks from this disk group
(DROP DISK), the status of the disk group remains DROPPING until
you change REBALANCE POWER to greater than 0. We the
authors don’t think it’s ever a good idea to set the rebalance
power to 0.

Entering the REBALANCE statement with a new level can change
the power level of an ongoing rebalancing operation. The ALTER
DISKGROUP...REBALANCE command by default returns
immediately so that you can issue other commands while the
rebalancing operation takes place asynchronously in the
background.
You can query the V$ASM_OPERATION view for the status of the
rebalancing operation. You can also can use the lsop command
on the ASMCMD command prompt to list the ASM operations.
Here is an example showing how to use the lsop command:
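A sketch, with illustrative output:

ASMCMD> lsop
Group_Name  Dsk_Num  State  Power
DATA        REBAL    RUN    4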

If you want the ALTER DISKGROUP...REBALANCE command to wait
until the rebalancing operation is complete before returning, you
can add the WAIT keyword to the REBALANCE clause. This is
especially useful in scripts. The command also accepts a NOWAIT
keyword, which invokes the default behavior of conducting the
rebalancing operation asynchronously. You can interrupt a
rebalance running in wait mode by pressing CTRL-C on most
platforms. This causes the command to return immediately with
the message “ORA-01013: user requested cancel of current
operation” and to continue the rebalance operation
asynchronously.
Here are some additional rules for the rebalancing operation:

The ALTER DISKGROUP...REBALANCE statement runs on a
single node and uses the resources of the single node on which
you issue the statement.
ASM can perform only one rebalance at a time on a given
instance.
Rebalancing continues across a failure of the ASM instance
performing the rebalance.
The REBALANCE clause (with its associated POWER and
WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP
commands that add, drop, or resize disks.

The following example shows how to rebalance a disk group
named dgroup2:
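A sketch (the power value is illustrative):

ALTER DISKGROUP dgroup2 REBALANCE POWER 5;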

This command returns immediately, and the rebalance operation
takes place asynchronously in the background.
Specify the WAIT keyword with the REBALANCE clause so that the
rebalancing command waits until the rebalance operation is
complete before returning. The following example manually
rebalances the disk group dgroup2. The command does not return
until the rebalancing operation is complete:
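A sketch, again with an illustrative power value:

ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT;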

You can change the power level of an ongoing rebalance
operation by issuing the rebalance statement with the MODIFY
POWER clause, as shown here:
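A sketch (the new power value is illustrative):

ALTER DISKGROUP dgroup2 REBALANCE MODIFY POWER 10;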
You can change the power setting back to the default value by
doing the following:
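Omitting the power value resets it to the ASM_POWER_LIMIT default:

ALTER DISKGROUP dgroup2 REBALANCE MODIFY POWER;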

Rebalancing Phase Options


When you rebalance disk groups, you can choose from various
phase options by specifying the WITH or WITHOUT keyword.
Following are the phase options you can choose from:

RESTORE This phase is always run, and you can’t exclude it.
The phase includes the following operations:
RESYNC This operation synchronizes the stale extents on the
disks that the database is bringing online.
RESILVER Applies only to Exadata systems. This is an
operation where data is copied from a mirror to another mirror
that has stale data.
REBUILD Restores the redundancy of forcing disks (a forcing
disk is one that you’ve dropped with the FORCE option).
BALANCE This phase restores the redundancy of all the disks
in the disk group and balances extents on all disks.
PREPARE This phase is applicable only to FLEX or EXTENDED
redundancy disk groups. It completes the work relevant to the
“prepare” SQL operation.
COMPACT This phase defragments and compacts all
extents.

If you don't specify any of the options (RESTORE, BALANCE,
PREPARE, or COMPACT), Oracle will run all the rebalance phases
by default.
Here are two examples that show how to specify the phase
options when balancing a disk group:
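Two sketches (the phase combinations and power value are illustrative):

ALTER DISKGROUP dgroup2 REBALANCE WITH BALANCE COMPACT POWER 4;
ALTER DISKGROUP dgroup2 REBALANCE WITHOUT COMPACT;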
Monitoring the Performance of Balancing Operations
You can query the V$ASM_DISK_STAT and
V$ASM_DISKGROUP_STAT views to obtain performance statistics.
These views, along with V$FILESTAT, provide a wealth of
information about the performance of the disk groups and
datafiles. You can execute the following query to get the
performance statistics at the disk group level:
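A sketch of such a query (the column choice is illustrative):

SELECT g.name AS diskgroup,
       SUM(d.reads) AS reads, SUM(d.writes) AS writes,
       SUM(d.read_time) AS read_time, SUM(d.write_time) AS write_time
FROM   v$asm_diskgroup_stat g, v$asm_disk_stat d
WHERE  g.group_number = d.group_number
GROUP  BY g.name;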

In Oracle Database 12c, you can rebalance multiple disk groups in
a single rebalancing operation.

Tuning Disk Rebalancing Operations


You can query the V$ASM_OPERATION view to check the status of
a rebalance operation. Just as you can use the EXPLAIN WORK
statement to create an explain plan for an Oracle SQL statement,
you can also determine the amount of work involved in a
rebalancing operation before you issue the rebalancing
commands.
Once you issue the EXPLAIN WORK statement, you can query the
V$ASM_ESTIMATE view to check Oracle’s estimates of the
rebalancing operation. The following example shows how to use
the EXPLAIN feature:
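A sketch, using a DROP DISK operation (the disk name is illustrative):

EXPLAIN WORK FOR ALTER DISKGROUP DGA DROP DISK DGA_0003;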

Once you execute the EXPLAIN WORK statement, query the
V$ASM_ESTIMATE view, as shown here:
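For example:

SELECT est_work FROM v$asm_estimate;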
The V$ASM_ESTIMATE view shows the effects of adjusting the
ASM_POWER_LIMIT and thus the impact on the rebalancing
operations. Although we issued a DROP DISK command here and
not a rebalancing command, there’s an implicit rebalancing
operation when you add or drop ASM disks; hence, the
V$ASM_ESTIMATE view shows the potential impact of the
rebalancing operation.
The column EST_WORK in the V$ASM_ESTIMATE view estimates
the number of ASM allocation units Oracle will move during a
rebalancing operation. You can query the V$ASM_ESTIMATE view
to get estimates of work involved in the execution plans for both
disk group rebalance and resync operations.
Scrubbing Disk Groups
Starting with the Oracle Database 12c release, you can both check for as
well as repair logical corruption in both normal and high redundancy disk
groups. Because the scrubbing process uses the mirror disks to check and
repair logical corruption, there’s not a noticeable impact on your disk I/O.
Here’s an example that shows how to specify the SCRUB option to check
for logical corruption in the disk group DGA:
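A sketch (the optional POWER setting is illustrative):

ALTER DISKGROUP DGA SCRUB POWER LOW;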

The SCRUB clause checks and reports any existing logical corruption. The
following example shows how to specify the REPAIR clause along with the
SCRUB clause to automatically repair disk corruption:
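For example:

ALTER DISKGROUP DGA SCRUB REPAIR;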

As with the rebalancing commands, you may optionally specify the WAIT
and FORCE clauses as well when scrubbing a disk group, a specific disk,
or a specific file of a disk group.

BACKUP AND RECOVERY IN ASM


An ASM instance is not backed up because an ASM instance itself
does not contain any files but rather manages the metadata of
the ASM disks. ASM metadata is triple-mirrored, which should
protect the metadata from typical failures. If sufficient failures
occur to cause the loss of metadata, the disk group must be re-
created. Data on the ASM disks is backed up using RMAN. In case
of failure, once the disk groups are created, the data (such as
database files) can be restored using RMAN.
Each disk group is self-describing, containing its own file
directory, disk directory, and other data such as metadata logging
information. ASM automatically protects its metadata by using
mirroring techniques, even with external redundancy disk groups.
An ASM instance caches the information in its SGA. ASM metadata
describes the disk group and files, and it is self-describing
because it resides inside the disk group. Metadata is maintained
in the blocks, and each metadata block is 4KB and triple-mirrored.
With multiple ASM instances mounting the same disk groups, if
one ASM instance fails, another ASM instance automatically
recovers transient ASM metadata changes caused by the failed
instance. This situation is called ASM instance recovery and is
automatically and immediately detected by the Global Cache
Services.
With multiple ASM instances mounting different disk groups, or in
the case of a single ASM instance configuration, if an ASM
instance fails while ASM metadata is open for update, the disk
groups that are not currently mounted by any other ASM instance
are not recovered until they are mounted again. When an ASM
instance mounts a failed disk group, it reads the disk group log
and recovers all transient changes. This situation is called ASM
crash recovery.
Therefore, when you’re using ASM clustered instances, it is
recommended that you have all ASM instances always mounting
the same set of disk groups. However, it is possible to have a disk
group on locally attached disks that are visible only to one node in
a cluster, and have that disk group mounted only on the node
where the disks are attached.
ASM supports standard operating system backup tools, Oracle
Secure Backup, and third-party backup solutions such as storage
array snapshot technologies to back up the ACFS file
system. Chapter 9 covers ASM metadata backups.

ASM FLEX CLUSTERS


Oracle Database 12c introduces Oracle Flex ASM clusters, which
let an ASM instance run on a different server from the one where
the database instances run. Up until this release, you had to run an
ASM instance on each server where the database instance ran.
Flex clusters remove the need to collocate ASM and Oracle
database instances and thus allow a small set of ASM instances
running in a cluster to service a large number of database
instances. By default, an Oracle Flex ASM cluster runs three ASM
instances (this number is known as the ASM cardinality).
The key idea behind the introduction of Oracle Flex ASM is to
increase database availability, because in the traditional ASM
deployment model, where a separate ASM instance is required on
every cluster node, the failure of a local ASM instance means that
all database instances running on that server will also fail.
If an Oracle Flex ASM instance on a server fails, the database
instances running on that server continue to run without a
problem: the failed ASM instance simply restarts on another node
in the cluster, and the database instance(s) continue to operate
unhindered by using a non-local Oracle Flex ASM instance located
elsewhere in the cluster. This way, you make the database
instances independent of the local ASM instances, thus increasing
database availability. In addition, the new ASM deployment model
cuts back on the total amount of resources used for ASM
instances.
A cluster performs the group membership services and consists of
a set of nodes, which include at least one hub node. A hub node is
a node that is connected to all nodes and one that directly
accesses the shared disk system. Therefore, just the hub nodes
have access to the ASM disks. A single set of ASM disk groups is
managed by the ASM instances that are part of the cluster.
Using a single set of ASM disk groups under Oracle Flex ASM lets
you consolidate all your storage into a more easily manageable
set of disk groups. Each cluster has two networks—a private and a
public network—and additionally a minimum of one ASM network.
You can have the private network perform double duty as the
Oracle ASM network. The public and private networks must be on
separate subnets.

NOTE
Oracle Flex ASM is a requirement for Oracle Flex clusters, which
are new in the Oracle Database 12c release (see Chapter 7).
Therefore, it is enabled by default when you install an Oracle Flex
cluster.

Configuration of Oracle ASM in Flex ASM
You can configure Oracle ASM in Flex ASM in the following ways:

Local ASM clients directly accessing ASM Under this
mode—which technically speaking isn't really an Oracle Flex
mode because it doesn’t use Flex ASM at all—the database clients
run on the same server as the ASM instance. You can do this only
on a hub node. The database instance runs as a local ASM client,
directly accessing the ASM disks.
Flex ASM client directly accessing Oracle ASM In this
mode, database clients on the hub nodes directly access ASM
storage, but also access ASM instances remotely for ASM
metadata. The database instances don’t run on the same server
hosting the ASM instance. Rather, they run on remote hub nodes.
ACFS access through an ASM proxy instance An ASM
proxy instance is one that runs on a hub node with a direct ASM
client. ASM proxy instances can use both ACFS and ADVM. You set
the ASM initialization parameter INSTANCE_TYPE to ASMPROXY to
use this mode.

Setting Up Flex ASM


Setting up Oracle Flex ASM is easy. During a new installation, the
Oracle Universal Installer (OUI) lets you choose between a regular
Oracle ASM cluster and an Oracle ASM Flex deployment. When
you choose Oracle Flex ASM, you must also choose the Oracle
ASM networks. Each cluster has a minimum of one private and
one public network. If you’re going to use ASM for storage, the
cluster will have at least one Oracle ASM network. The Oracle ASM
listener is created for the new Oracle ASM network and
automatically started on all nodes in the cluster.
If you're upgrading to Oracle Database 12c, you can continue to use
Oracle ASM as you've used it in earlier versions of the database.
However, you should consider enabling Oracle Flex ASM to benefit
from the new ASM capabilities in Oracle Database 12c. Oracle
Flex ASM will protect you against unplanned outages (failures) as
well as scheduled maintenance-related outages for work such as
upgrading the OS and applying patches.
If you’re currently using the Standard Oracle ASM configuration,
you can enable Oracle Flex ASM by using the ASMCA to perform
the conversion to Flex ASM. You must ensure that the OCR, the
spfile, and the ORAPWD file for the current standard ASM
configuration are all stored in a disk group before you can convert
to Oracle Flex ASM configuration.
Click the Convert to Oracle Flex ASM button in the ASMCA
Configure ASM: ASM Instances page. You’ll then need to specify
the listener, port, and network interface in the Convert to Oracle
Flex ASM page.
Once you enter the network and port information on this page
and click OK, the ASM Conversion dialog appears, showing the
path to the script named convertToFlexASM.sh, which you must
run as a privileged user on the node where ASMCA is running. You
may also run the ASMCA utility in the silent mode to convert
regular Oracle ASM to Oracle Flex ASM, as shown in the following
example:
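A sketch, following the documented silent-mode syntax (the network interface, subnet, and port are illustrative):

$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.10.0 -asmListenerPort 1521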

You use the standard utilities ASMCA, CRSCTL, SRVCTL, and
SQL*Plus to administer Oracle Flex ASM. You can find out if Oracle
Flex ASM is enabled by issuing the asmcmd
showclustermode command:
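For example (the output shown is for a Flex ASM-enabled cluster):

$ asmcmd showclustermode
ASM cluster : Flex mode enabled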

In addition to the ASMCMD utility, you can also use ASMCA,
CRSCTL, SQL*Plus, and SRVCTL to manage Oracle Flex ASM. The
following command lets you determine the status of the instances
in an Oracle Flex ASM configuration:
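A sketch (node names are illustrative):

$ srvctl status asm -detail
ASM is running on racnode1,racnode2,racnode3
ASM is enabled.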
The following example shows how to determine the number of
ASM instances that are part of the Flex ASM configuration:
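A sketch, with abbreviated illustrative output:

$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
ASM listener: LISTENER
ASM instance count: 3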

The config asm command shows that three ASM instances are
part of the Flex ASM configuration. You can change the number of
instances in the configuration with the modify asm command, as
shown here:
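For example, to raise the cardinality to four:

$ srvctl modify asm -count 4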

In this example, the term “count” is highly significant. It means
that when one instance dies, Oracle brings up another instance on
a different node, to maintain the number of instances at four. If an
ASM instance fails, Oracle automatically switches the clients over
to another ASM instance in the cluster. You can also manually
switch a client, as shown here:
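A sketch, issued from an ASM instance (the client name is illustrative):

SQL> ALTER SYSTEM RELOCATE CLIENT 'orcl1:orcl';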

In this example, client-id is specified in the
format INSTANCE_NAME:DB_NAME.

Managing ASM Flex Disk Groups


An ASM flex disk group is an ASM disk group type that supports
ASM file groups and quota groups, which enable easy quota
management. An ASM file group is a set of files that belong to a
database, and it enables you to manage storage at the file group
or database level.

NOTE
Flex disk groups let you manage storage at the granularity level
of a database, in addition to enabling management at the disk
group level.

Key Features
Here are the key features of an ASM flex disk group:

Each database has its own file group.


You can migrate to a flex disk group from a normal or high
redundancy disk group.
A flex disk group requires at least three failure groups.
You set the redundancy for a flex disk group to FLEX
REDUNDANCY. Each file group in the flex disk group has its own
redundancy setting.
File groups of flex disk groups describe database files.
Flex disk groups have a flexible redundancy for files.
Flex groups can tolerate two failures, the same as a high
redundancy disk group.

Creating and Migrating to a Flex Disk Group


You create a flex disk group with the CREATE DISKGROUP
statement:
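A sketch (the disk group name and disk paths are illustrative; a flex disk group requires at least three failure groups):

CREATE DISKGROUP flexdg FLEX REDUNDANCY
  DISK '/dev/diska1', '/dev/diska2', '/dev/diska3';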

You can migrate a normal disk group to a flex group with the
ALTER DISKGROUP statement:
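A sketch, assuming the CONVERT REDUNDANCY TO FLEX clause:

ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;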

Understanding ASM File Groups and ASM Quota Groups
An ASM flex disk group supports ASM file groups and quota
groups. Flex groups enable storage management at the database
level. An ASM file group consists of a set of ASM files. One or more
file groups constitute a quota group. An ASM disk group can have
multiple quota groups.
ASM Quota Groups
You use ASM quota groups to configure quotas that you allocate
to a set of ASM file groups. A quota is a physical space limit that
the database enforces when you create or resize ASM files.
A quota group is the total storage space that is used by one or
multiple file groups in the same disk group. Quota groups allow
you to control the total storage used by different databases,
especially in a multitenant environment.
If you have two disk groups, Disk Group 1 and Disk Group 2, you
can create two quota groups, QGRP1 and QGRP2, in each of the
two disk groups, and you can allocate a specific quota to each of
the two quota groups. You can then have one or more file groups
in each of the two quota groups.
Following are the essential points to understand about quota
groups:

A file group can belong to only one quota group.


A quota group can’t span multiple disk groups.
A quota has two values: the limit and the current used space.
You can move a file group from one quota group to another.

You can add a quota group to a disk group in the following way:
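A sketch (the quota group name and quota value are illustrative):

ALTER DISKGROUP flexdg ADD QUOTAGROUP QGRP1 SET 'quota' = 20G;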

Now that you’ve learned about quota groups, let’s see how you
create file groups within a quota group.

ASM File Groups


An ASM file group consists of a set of files that have the same
properties and characteristics. The properties include
redundancy, rebalance power limit, striping, quota group, and the
access control list. You can use the file groups to set up different
availability characteristics for databases that share the same disk
group.
A disk group consists of one or more file groups, and it can store
files belonging to multiple databases. Each database has a
separate file group.
Here are the key points to remember about disk groups and file
groups:

A disk group contains a minimum of one file group, called the
default file group.
For a disk group to contain a file group, it must have either
flex or extended redundancy.
A database can have only one file group in a disk group.
A file group can describe a single database such as a
pluggable database (PDB), container database (CDB), or cluster.
A file group can belong to only one quota group.

The following example shows how to create a file group.
Remember that you must first create the quota group before you
can assign a file group to that quota group.
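A sketch (the file group, database, and quota group names are illustrative):

ALTER DISKGROUP flexdg ADD FILEGROUP pdb1_fg DATABASE pdb1
  SET 'quota_group' = 'QGRP1';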

Using Quota Groups and File Groups in a Multitenant Environment
Oracle ASM file groups and quota groups are quite helpful when
allocating storage and enforcing limits on the space usage by
multiple pluggable databases in a multitenant
environment. Figure 6-5 illustrates how three pluggable
databases in a multitenant environment make use of ASM files
stored in multiple file groups, quota groups, and disk groups.
FIGURE 6-5. Oracle ASM file groups, quota groups, and disk
groups

Figure 6-5 shows the architecture of Oracle ASM file groups and
quota groups, as summarized here:

There are two ASM disk groups: Disk Group 1 and Disk Group
2.
Each of the two disk groups contains two quota groups: QGRP1
and QGRP2.
The quota group QGRP1 contains one file group named PDB1.
The quota group QGRP2 contains two file groups, named PDB2 and PDB3.

This example has three pluggable databases: PDB1, PDB2, and
PDB3. The following is how the three databases use the file
groups in the two quota groups:

The file groups named PDB1 (disk group 1 and disk group 2)
are dedicated to the pluggable database PDB1.
The file groups named PDB2 (disk group 1 and disk group 2)
are dedicated to the pluggable database PDB2.
The file groups named PDB3 (disk group 1 and disk group 2)
are dedicated to the pluggable database PDB3.

Oracle Extended Disk Groups


Oracle ASM extended disk groups are similar to flex disk groups,
and they are meant for use in an extended cluster, where the
nodes are located in physically separate sites. Here are the key
features of an extended disk group:

You must set the redundancy of an extended disk group to
EXTENDED REDUNDANCY.

NOTE
An extended disk group tolerates failures at the site level, in
addition to failures at the failure group level.

An extended disk group can withstand the loss of an entire
site, in addition to the loss of up to two failure groups in another
site.
You configure the redundancy for each site but not for each
disk group.
The quota group space limit is the total space required for
storing all copies across all the sites of an extended cluster. If a
cluster has two sites, a 10MB file with a redundancy level of
MIRROR uses 40MB of the quota limit.
The minimum allocation unit (AU) size is 4MB.

The following example shows how to create an extended disk
group:
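A minimal sketch, assuming the SITE clause syntax for extended redundancy (site names and disk paths are illustrative):

CREATE DISKGROUP extdg EXTENDED REDUNDANCY
  SITE ny FAILGROUP fg1 DISK '/dev/diska1'
          FAILGROUP fg2 DISK '/dev/diska2'
  SITE nj FAILGROUP fg3 DISK '/dev/diskb1'
          FAILGROUP fg4 DISK '/dev/diskb2';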
ASM TOOLS
ASM tools, such as the ASM command-line interface and the ASM
file transfer utilities, emulate the UNIX environment within the
ASM file system. Even though ASMCMD internally uses the
SQL*Plus interface to query the ASM instance, it greatly helps
system administrators and storage administrators because it
provides the look and feel of a UNIX shell interface.

ASMCA: The ASM Configuration Assistant
ASM Configuration Assistant is a GUI tool that you use to install
and configure an ASM instance, disk groups, volumes, and ACFS.
Like the Database Configuration Assistant (DBCA), ASMCA can
also be used in silent mode. Oracle Universal Installer internally
uses ASMCA in silent mode to configure ASM disk groups to store
OCR and voting files. ASMCA is capable of managing the complete
ASM instance and associated ASM objects. Starting with Oracle
11g, DBCA no longer allows you to create and configure ASM disk
groups; Oracle's direction is to promote the use of ASMCA because
it is a complete ASM management tool.

ASMCMD: The ASM Command-Line Utility
ASMCMD is a command-line tool that lets you access the ASM files
and related information via a command-line interface, which
makes ASM management easier and handy for DBAs. Oracle has
enabled this tool with several management features that help
users in managing an ASM instance as well as ASM objects such
as disk groups, volumes, and an ASM cluster file system from the
command line.
You can connect to either an ASM instance or an Oracle database
instance when using ASMCMD, depending on what activities you
want to perform. In order to log into ASMCMD, make sure that the
ORACLE_HOME, PATH, and ORACLE_SID environment variables
point to the ASM instance. Then, execute the asmcmd command
located in the Oracle Grid Infrastructure home, which is the same
as the Oracle ASM home. In order to connect to a database
instance, run the ASMCMD utility from the bin directory of the
Oracle Database home.
ASMCMD gives the DBA the familiar look and feel of most
UNIX-flavor shells, with commands such
as cd, ls, mkdir, pwd, lsop, dsget, and so on. Starting with
Oracle 11g, lots of new commands have been added to manage
the complete ASM instance and its objects. It is now possible to
completely manage an ASM instance from the command-line
interface, including starting/stopping; managing the disk groups,
volumes, and the ASM cluster file system; backing up and
restoring metadata; and even accessing the GPnP profile and ASM
parameter file within ASMCMD. You should refer to the command-
line help for an explanation and sample usage of specific
ASMCMD commands because the ASMCMD command-line help
provides sufficient information for these commands. Following are
a few examples that show how you can use ASMCMD to manage
ASM.
Here’s how to list all the disk groups:
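For example (output abbreviated; disk group names are illustrative):

ASMCMD> lsdg
State    Type    Rebal  ...  Name
MOUNTED  NORMAL  N      ...  DATA/
MOUNTED  EXTERN  N      ...  FRA/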
The following set of ASMCMD commands show how you can use
familiar OS commands such as cd, pwd, and ls to find out the
names of all the datafiles for a database that are stored in ASM:
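A sketch (disk group, database, and file names are illustrative):

ASMCMD> cd +DATA/ORCL/DATAFILE
ASMCMD> pwd
+DATA/ORCL/DATAFILE
ASMCMD> ls
SYSAUX.257.929893541
SYSTEM.256.929893499
UNDOTBS1.258.929893573
USERS.259.929893581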

You can also run ASMCMD commands in a non-interactive manner
by issuing a command from the command line itself without
invoking ASMCMD. Here's an example that shows how to list the
state and names of all disk groups:
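A sketch:

$ asmcmd lsdg

The State and Name columns of the lsdg output give you the state and name of each mounted disk group.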
Because you can run ASMCMD non-interactively, you can also
incorporate ASMCMD commands in a shell script.

ASM FTP Utility


Oracle ASM supports ASM FTP, which lets you perform operations
on ASM files and directories much as you would perform
conventional operations on normal files, using the File Transfer
Protocol (FTP). A typical use of such access to an ASM file is
to copy ASM files from one database to another.
The Oracle database leverages the virtual folder feature of XML
DB, which provides a way to access the ASM files and directories
through XML DB protocols such as FTP, Hypertext Transfer
Protocol (HTTP), and programmatic APIs. An ASM virtual folder is
mounted as /sys/asm within the XML DB hierarchy. The folder is
called “virtual” because nothing is physically stored in XML DB. All
operations are handled by underlying ASM components.
The virtual folder is created by default during the installation of
XML DB. If the database is not configured to use Automatic
Storage Management, this virtual folder will be empty and no
operations will be permitted. The /sys/asm virtual folder contains folders and
subfolders in line with the hierarchy of the ASM fully qualified
naming structure. Figure 6-6 shows the hierarchy of an ASM
virtual folder.
FIGURE 6-6. ASM virtual folder hierarchy

As this figure shows, the virtual folder contains a subfolder for
each mounted disk group in an ASM instance. Each disk group
folder contains a subfolder for each database using that disk
group. In turn, each database folder contains a file type subfolder,
and the file type subfolder contains the ASM files, which are
binary in nature. Although we can access the ASM files as we
access normal files in a conventional FTP application, some
usage/access restrictions are in place. The DBA privilege is a must
to view the contents of the /sys/asm virtual folder.
The next example demonstrates accessing ASM files via the
virtual folder. In this example, we assume that we are in the
home directory of user oracle, which is /home/oracle, and we are
connecting to the server, using FTP, where the ASM instance is
hosted. The ASM instance is hosted on the racnode01 server. The
disk group name is DGA, and the database name is dba using the
DGA disk group.
First, we open the FTP connection and pass on the login
information. Only users with the DBA privilege can access the
/sys/asm folder. After connecting, we change our directory to
the /sys/asm virtual folder and list the contents of the /sys/asm
folder. A subfolder named DGA is used for the DGA disk group.
Then we change to the DGA directory and see another subfolder
with a database name, which is dba in our case. Then we list the
contents of the dba directory that contains the ASM binary files
related to the dba database. Finally, we download the file
data01.dbf to our local directory, which is /home/oracle.
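A sketch of such a session (the FTP port depends on your XML DB configuration; 2100 is an illustrative value):

$ ftp racnode01 2100
ftp> user system
ftp> cd /sys/asm
ftp> ls
DGA
ftp> cd DGA
ftp> ls
dba
ftp> cd dba
ftp> ls
...
data01.dbf
ftp> get data01.dbf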

ASMLIB
ASMLib is the storage management interface that helps simplify
the OS-to-database interface. The ASMLib API was developed and
supported by Oracle to provide an alternative interface for the
ASM-enabled kernel to identify and access block devices. The
ASMLib interface serves as an alternative to the standard
operating system interface. It provides storage and operating
system vendors the opportunity to supply extended storage-
related features that provide benefits such as greater
performance and integrity than are currently available on other
database platforms.

Installing ASMLib
Oracle provides an ASM library driver for the Linux OS. You must
install the ASM library driver prior to installing any Oracle
Database software. In addition, it is recommended that any ASM
disk devices required by the database be prepared and created
before the OUI database installation. The ASMLib software is
available on the Oracle Technology Network (OTN) for free
download.
Three packages are available for each Linux platform. The two
essential RPM packages are the oracleasmlib package, which
provides the actual ASM library, and the oracleasm-support
package, which provides the utilities to configure and enable the
ASM driver. Both these packages need to be installed. The third
package provides the kernel driver for the ASM library. Each
package provides the driver for a different kernel. You must install
the appropriate package for the kernel you are running.

Configuring ASMLib
After the packages are installed, the ASMLib can be loaded and
configured using the configure option in the /etc/init.d/oracleasm
utility. For Oracle RAC clusters, the oracleasm installation and
configuration must be completed on all nodes of the cluster.
Configuring ASMLib is as simple as executing the following
command:
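For example (run as root on each node):

# /etc/init.d/oracleasm configure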

Note that the ASMLib mount point is not a standard file system
that can be accessed by operating system commands. Only the
ASM library uses it to communicate with the ASM driver. The ASM
library dynamically links with the Oracle kernel, and multiple
ASMLib implementations can be simultaneously linked to the
same Oracle kernel. Each library provides access to a different set
of disks and different ASMLib capabilities.
The objective of ASMLib is to provide a more streamlined and
efficient mechanism for managing disks and I/O processing of
ASM storage. The ASM API provides a set of interdependent
functions that need to be implemented in a layered fashion.
These functions are dependent on the backend storage
implementing the associated functions. From an implementation
perspective, these functions are grouped into three collections of
functions.
Each function group is dependent on the existence of the lower-
level group. Device discovery functions are the lowest-layer
functions and must be implemented in any ASMLib library. I/O
processing functions provide an optimized asynchronous interface
for scheduling I/O operations and managing I/O operation
completion events. These functions, in effect, extend the
operating system interface. Consequently, the I/O processing
functions must be implemented as a device driver within the
operating system kernel.
The performance and reliability functions are the highest-layer
functions and depend on the existence of the I/O processing
functions. These functions use the I/O processing control
structures for passing metadata between the Oracle database and
the backend storage devices. The performance and reliability
functions enable additional intelligence on the part of backend
storage. This is achieved when metadata transfer is passed
through the ASMLib API.

Device Discovery
Device discovery provides the identification and naming of
storage devices that are operated on by higher-level functions.
Device discovery does not require any operating system code and
can be implemented as a standalone library invoked and
dynamically linked by the Oracle database. The discovery function
makes the characteristics of the disk available to ASM. Disks
discovered through ASMLib do not need to be available through
normal operating system interfaces. For example, a storage
vendor may provide a more efficient method of discovering and
locating disks that its own interface driver manages.
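As a sketch of how this works with ASMLib, a partition can be stamped as an ASM disk and then verified as follows (the disk name VOL1 and the device /dev/sdb1 are assumptions):

# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "VOL1" as an ASM disk:                     [  OK  ]
# /etc/init.d/oracleasm listdisks
VOL1

Disks stamped this way are presented to ASM through the discovery string ‘ORCL:*’ rather than through the usual /dev device paths.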

I/O Processing
The current standard I/O model imposes significant operating
system overhead, due in part to mode and context switches. The
deployment of ASMLib reduces the number of state transitions
between kernel and user mode by employing a more efficient I/O
scheduling and call-processing mechanism. One call to ASMLib
can submit and reap multiple I/Os.
This dramatically reduces the number of calls to the OS when I/O
is performed. Additionally, one I/O handle can be used by all the
processes in an instance for accessing the same disk. This
eliminates multiple open calls and multiple file descriptors.
One of the critical aspects of the ASMLib I/O interface is that it
provides an asynchronous interface, which enhances performance
and enables database-related intelligence on the backend storage
devices. The I/O interface allows metadata to be passed from the
database to the storage devices; future developments in storage
array firmware may exploit this metadata to enable new database-
aware intelligence in the storage devices.

Oracle ASM Filter Driver (Oracle ASMFD)
Oracle Database 12c offers Oracle ASM Filter Driver as an
alternative to ASMLib that eliminates the need to rebind ASM
devices after a system restart. ASMFD is a kernel module that
validates I/O requests to the ASM disks, rejecting invalid I/O,
such as non-Oracle writes that could overwrite ASM data.
You can choose to configure the new Oracle ASMFD when
installing Oracle Grid Infrastructure. If you’re already using
ASMLib, you can also migrate to Oracle ASMFD.
The Oracle Installer automatically installs Oracle ASMFD with
Oracle Database 12c. So, if you are migrating to Oracle Database
12c, you’ll have Oracle ASMFD installed after the upgrade. If you
prefer to keep using your current ASMLib setup, you don’t need to
do anything—the ASMFD executables will be installed but won’t
be automatically used by Oracle. If, on the other hand, you’d like
to migrate from ASMLib to ASMFD, you must remove ASMLib and
configure the ASM devices to use ASMFD instead. Here are the
steps to follow:
1. Update the ASM disk discovery string so it can discover the
ASM Filter Driver disks. You can update the ASM disk discovery
string, as shown here, to add ASMFD disk label names to the
string:
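A representative example, issued in SQL*Plus while connected to the Oracle ASM instance (the existing string ‘ORCL:*’ and the label names DISK1 and DISK2 are assumptions):

SQL> ALTER SYSTEM SET ASM_DISKSTRING = 'ORCL:*', 'AFD:DISK1', 'AFD:DISK2';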

You may also use the wildcard notation by
specifying ‘AFD:*’ instead of multiple disks. Another way to
update the disk discovery string is by executing the asmcmd
dsset command, as shown here:
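A sketch of the equivalent asmcmd invocation (the string values are assumptions):

$ asmcmd dsset 'ORCL:*','AFD:*'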

You can use the asmcmd dsget command to view the value of
the current ASM disk discovery string.
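For example (the output values shown are assumptions):

$ asmcmd dsget
parameter:ORCL:*, AFD:*
profile:ORCL:*,AFD:*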
2. List all nodes and their roles by issuing the olsnodes -a
command, and then run the following set of commands on each
Hub node.
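For example, on a two-node cluster the output might look like this (the node names are assumptions):

$ olsnodes -a
node1   Hub
node2   Hub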
3. Stop Oracle Clusterware with the crsctl stop crs command
first and then configure ASMFD as follows:
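A sketch of these two commands, run as root (the $GRID_HOME variable standing in for your Grid Infrastructure home path is an assumption):

# $GRID_HOME/bin/crsctl stop crs
# $GRID_HOME/bin/asmcmd afd_configure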

If the afd_configure command returns NOT AVAILABLE, it means
that Oracle ASMFD isn’t configured. If ASMFD was successfully
configured, the command will show the status as “configured.” At
this point, the Oracle ASM instance can register with the ASM
Filter Driver.
4. Verify the ASMFD status:
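For example (the host name is an assumption):

$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'node1'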

5. Start the Oracle Clusterware stack on the node (crsctl start
crs) and set the Oracle ASMFD disk discovery string to its original
ASM disk discovery string value:
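A sketch of this final step (the original discovery string value /dev/sd* is an assumption):

# $GRID_HOME/bin/crsctl start crs
$ asmcmd afd_dsset '/dev/sd*'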

From here on, ASMFD will manage the disks instead of ASMLib.

SUMMARY
Automatic Storage Management is the best framework for
managing data storage for the Oracle database. ASM implements
the Stripe and Mirror Everything (SAME) methodology and
manages the storage stack with an I/O size that matches the
most common Oracle I/O clients, thus providing tremendous
performance benefits.
Oracle has enhanced and improved ASM to the point that it is
also supported for storing application data from non-Oracle
enterprise applications. ASM provides the GUI tools ASMCA and
OEM for managing the ASM instance and its associated objects.
The ASM Cluster File System is a true cluster file system built on
ASM foundations, and it eliminates the need for third-party
cluster file systems in enterprise cluster applications. ACFS also
comes with rich functionality such as snapshots and encryption.
Command-line tools such as SQL*Plus, ASM FTP, and ASMCMD are
also available for designing and building provisioning scripts.
Features such as ACFS and ACFS snapshots help keep the
management of huge amounts of storage from becoming a
complex task burdened by extensive planning and day-to-day
administration issues.
