6
Automatic Storage Management
INTRODUCTION TO AUTOMATIC STORAGE MANAGEMENT
Oracle ASM is the storage management solution that Oracle recommends as an alternative to traditional volume managers, file systems, and raw devices. ASM provides volume manager and file system capabilities that are tightly integrated and optimized for Oracle databases. Oracle implements ASM via three key components:
ASM instance
ASM Dynamic Volume Manager (ADVM)
ASM Cluster File System (ACFS)
ASM in Operation
Oracle ASM uses disk groups to store datafiles. Within each disk
group, ASM exposes a file system interface for the database files.
ASM divides a file into pieces and spreads these pieces evenly across all the disks, unlike other volume managers, which spread the whole volume onto the different disks. This is the key difference from traditional striping techniques, which use mathematical functions to stripe complete logical volumes independent of files or directories. Traditional striping requires careful capacity planning up front, because adding new volumes requires rebalancing and downtime.
With ASM, whenever you add or remove storage, ASM doesn’t
restripe all the data. It just moves an amount of data proportional
to the amount of storage that you’ve added or removed, so as to
redistribute the files evenly and maintain a balanced I/O load
across the disks. This occurs while the database is active, and the
process is transparent to the database and end-user applications.
The new disk resync enhancements in Oracle Database 12c let
you recover very quickly from an instance failure. The disk resync
checkpoint capability makes for fast instance recovery by letting
the resync operation resume from where the resync process was
stopped, instead of starting from the beginning.
ASM supports all files that you use with an Oracle database (with
the exception of pfile and password files). Starting with Oracle
11g Release 2, ACFS can be used for non-Oracle application files.
ASM also supports Oracle Real Application Clusters (Oracle RAC)
and eliminates the need for a cluster Logical Volume Manager or
a third-party cluster file system.
ASM is integrated with Oracle Clusterware and can read from a
mirrored data copy in an extended cluster and improve I/O
performance when used in an extended cluster. (An extended cluster is a special-purpose Oracle RAC architecture in which the nodes are geographically separated.)
ASM provides the SQL interface for creating database structures
such as tablespaces, controlfiles, redo logs, and archive log files.
You specify the file location in terms of disk groups; ASM then
creates and manages the associated underlying files for you.
Other interfaces interact with ASM as well, such as ASMCMD,
OEM, and ASMCA. SQL*Plus is the most commonly used tool to
manage ASM among DBAs. However, system and storage
administrators tend to like the ASMCMD utility for managing ASM.
ASM Striping
Striping is a technique used for spreading data among multiple
disk drives. A big data segment is broken into smaller units, and
these units are spread across the available devices. The size of
the unit that the database uses to break up the data is called
the data unit size or stripe size.
Stripe size is also sometimes called the block size and refers to
the size of the stripes written to each disk. The number of parallel
stripes that can be written to or read from simultaneously is
known as the stripe width. Striping can speed up operations that
retrieve data from disk storage as it extends the power of the
total I/O bandwidth. This optimizes performance and disk
utilization, making manual I/O performance tuning unnecessary.
ASM supports two levels of striping: fine striping and coarse
striping. Fine striping uses 128KB as the stripe width, and coarse
striping uses 1MB as the stripe width. Fine striping can be used
for files that usually do smaller reads and writes. For example,
online redo logs and controlfiles are the best candidates for fine
striping when the reads or writes are small in nature.
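You can see which striping granularity ASM applies to each file type by querying the default templates in the ASM instance (the group number shown is an assumption):

```sql
-- List the default templates for disk group 1; the STRIPE column
-- shows FINE or COARSE for each file type.
SELECT name, stripe, redundancy
  FROM v$asm_template
 WHERE group_number = 1
 ORDER BY name;
```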
ASM Instance
An ASM instance is an Oracle instance that manages the
metadata for disk groups, ADVM (ASM Dynamic Volume Manager),
and ACFS (ASM Cluster File System). Metadata refers to information that Oracle ASM uses to manage disk groups, such as the disks that belong to a disk group and the space available in a disk group. ASM stores the metadata in the disk groups.
All metadata modifications are done by an ASM instance to isolate
failures. You can install and configure an ASM instance with either
the Oracle Universal Installer (OUI) or the Oracle ASM
Configuration Assistant (ASMCA).
You can use an spfile or a traditional pfile to configure an Oracle ASM instance. An Oracle ASM instance looks for its initialization parameter file in the following order:
The location of the initialization parameter file specified in the Grid Plug and Play (GPnP) profile
If the location has not been set in the GPnP profile, the spfile in the Oracle ASM instance home
If no spfile exists, the pfile in the Oracle ASM instance home
NOTE
You can administer Oracle ASM initialization parameter files with
SQL*Plus, ASMCA, and ASMCMD commands.
ASM Listener
The ASM listener is similar to the Oracle database listener, which
is a process that’s responsible for establishing a connection
between database server processes and the ASM instance. The
ASM listener process tnslsnr is started from the $GRID_HOME/bin
directory and is similar to Oracle Net Listener.
The ASM listener also listens for database services running on the
same machine, so there is no need to configure and run a
separate Oracle Net Listener for the database instance. Oracle
will, by default, install and configure an ASM listener on port
1521, which can be changed to a nondefault port while Oracle
Grid Infrastructure is being installed, or later on.
Disk Groups
A disk group consists of disks that the database manages
together as a single unit of storage, and it’s the basic storage
object managed by Oracle ASM. As the primary storage unit of
ASM, this collection of ASM disks is self-describing, independent of
the associated media names.
Oracle provides various ASM utilities such as ASMCA, SQL
statements, and ASMCMD to help you create and manage disk
groups, their contents, and their metadata.
Disk groups are integrated with Oracle Managed Files and support
three types of redundancy: external, normal, and high. Figure 6-
2 shows the architecture of the disk groups.
The disks in a disk group are called ASM disks. In ASM, a disk is the unit of persistent storage for a disk group; it can be an entire physical disk, a partition of a physical disk, or a LUN from a storage array. Disks can also be logical volumes or network-attached files that are part of a Network File System (NFS).
The disk group in a typical database cluster is part of a remote
shared-disk subsystem, such as a SAN or network-attached
storage (NAS). The storage is accessible via the normal operating
system interface and must be accessible to all nodes.
Oracle must have read and write access to all the disks, even if one or more servers in the cluster fail. On Windows operating
systems, an ASM disk is always a partition. On all other platforms,
an ASM disk can be a partition of a logical unit number (LUN) or
any NAS device.
NOTE
Oracle doesn’t support raw or block devices as shared storage for
Oracle RAC in Oracle 11g Release 2 onward.
Allocation Unit
ASM disks are divided into a number of units or storage blocks
that are small enough not to be hot. The allocation unit of storage
is large enough for efficient sequential access. The allocation unit
defaults to 1MB in size.
Oracle recommends 4MB as the allocation unit for most
configurations. ASM allows you to change the allocation unit size,
but you don’t normally need to do this, unless ASM hosts a very
large database (VLDB). You can’t change the size of an allocation
unit of an existing disk group.
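You set the allocation unit size with the AU_SIZE attribute at disk group creation time. A minimal sketch, with an assumed disk device path:

```sql
-- Create a disk group with a 4MB allocation unit
-- (disk path and disk group name are assumptions for illustration).
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DISK01'
  ATTRIBUTE 'au_size' = '4M',
            'compatible.asm' = '12.1';
```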
Significance of 1MB in ASM
Various I/O clients and their usage models are discussed in this section to
provide insight into I/O block size and parallelization at the Oracle level.
The log writer writes the very important redo buffers into log files. These
writes are sequential and synchronous by default. The maximum size of
any I/O request is set at 1MB on most platforms. Redo log reads are
sequential and issued either during recovery or by LogMiner or log
dumps. The size of each I/O buffer is limited to 1MB in most platforms.
There are two asynchronous I/Os pending at any time for parallelization.
DBWR (Database Writer) is the main server process that submits
asynchronous I/Os in a big batch. Most of the I/O request sizes are equal
to the database block size. DBWR also tries to coalesce the adjacent
buffers in the disk up to a maximum size of 1MB whenever it can and
submits them as one large I/O. The Kernel Sequential File I/O (ksfq)
provides support for sequential disk/tape access and buffer management.
The ksfq allocates four sequential buffers by default. The size of the
buffers is determined by the DB_FILE_DIRECT_IO_COUNT parameter,
which is set to 1MB by default. Some of the ksfq clients are Datafile, redo
logs, RMAN, Archive log file, Data Pump, Oracle Data Guard, and the File
Transfer package.
Failure Groups
Failure groups define ASM disks that share a common potential
failure mechanism. A failure group is a subset of disks in a disk
group dependent on a common hardware resource whose failure
the database must tolerate. Failure groups are relevant only for
normal or high redundancy configurations. Redundant copies of
the same data are placed in different failure groups. An example
might be a member in a stripe set or a set of SCSI disks sharing
the same SCSI controller. Failure groups are used to determine
which ASM disks should be used for storing redundant copies of
data. By default, each disk is an individual failure group. Figure 6-
3 illustrates the concept of a failure group.
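Failure groups are named explicitly in the CREATE DISKGROUP statement when disks share a hardware resource, for example one SCSI controller per failure group. A sketch, with assumed device paths:

```sql
-- Mirror across two failure groups, one per controller, so the
-- loss of a whole controller does not lose both copies of an extent.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK
    '/dev/oracleasm/disks/C1D1',
    '/dev/oracleasm/disks/C1D2'
  FAILGROUP controller2 DISK
    '/dev/oracleasm/disks/C2D1',
    '/dev/oracleasm/disks/C2D2';
```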
When a disk fails in a normal or high redundancy disk group, Oracle will take the failed disks offline and then drop them, keeping the disk group mounted and usable. Because the data is mirrored, there's no data loss and the data continues to remain available. Once the disk drop is complete, Oracle ASM will rebalance the disk group to restore the original redundancy for the data that was on the failed disk.
If the failures exceed what the redundancy level can tolerate, Oracle will dismount the disk group, which results in the unavailability of the data. Oracle recommends at least three failure groups for normal redundancy disk groups and five failure groups for high redundancy disk groups to maintain the necessary number of copies of the Partner Status Table (PST). This also offers protection against storage hardware failures.
ASM Files
Files that the Oracle database writes on ASM disks (which are part
of an ASM disk group) are called ASM files. Each ASM file can
belong to only a single Oracle ASM group. Oracle can store the
following files in an ASM disk group:
Datafiles
Controlfiles
Temporary files
Online and archived redo log files
Flashback logs
RMAN backups
Spfiles
Data Pump dump sets
ASM filenames normally start with a plus sign (+). Although ASM
automatically generates the names, you can specify a
meaningful, user-friendly alias name (or alias) for ASM files. Each
ASM file is completely contained within a single disk group and
evenly divided throughout all ASM disks in the group.
Voting files and the Oracle Cluster Registry (OCR) are two key
components of Oracle Clusterware. You can store both OCR and
voting files in Oracle ASM disk groups.
Oracle divides every ASM disk into multiple allocation units.
An allocation unit (AU) is the fundamental storage allocation unit
in a disk group. A file extent consists of one or more allocation
units, and each ASM file consists of one or more file extents. You
set the AU size for a disk group by specifying the AU_SIZE
attribute when you create a disk group. The value can be 1, 2, 4,
8, 16, 32, or 64MB. Larger AU sizes are beneficial for applications
that sequentially read large chunks of data.
You can't change the data extent size; Oracle automatically increases the size of data extents (variable extent sizes) as an ASM file grows. The way Oracle allocates the extents is similar to how it manages locally managed tablespace extent sizes in auto-allocate mode. Variable extent sizes depend on the AU size. If the disk group has an AU size of less than 4MB, the extent size equals the disk group AU size for the first 20,000 extent sets, 4*AU size for the next 20,000 extent sets, and 16*AU size for anything beyond 40,000 extent sets. If the disk group AU size is at least 4MB, the extent sizing (that is, whether it is the same as the disk group AU size, 4*AU size, or 16*AU size) is figured using the application block size.
Disk Partners
Disk partners limit the possibility of two independent disk failures
that result in losing both copies of a virtual extent. Each disk has
a limited number of partners, and redundant copies are allocated
on partners.
ASM automatically chooses the disk partner and places the
partners in different failure groups. In both Oracle Exadata and
the Oracle Database Appliance, there’s a fixed partnership. In
other cases, the partnering is not fixed and is decided based on
how the disks are placed in their failure groups. Partner disks
should be the same in size, capacity, and performance
characteristics.
ACFS Snapshots
An exciting feature bundled with ACFS is the ability to create snapshots of an ASM dynamic volume; snapshots allow users to recover deleted files or even restore the file system to a point in time in the past.
ACFS snapshots are read-only, point-in-time copies of the ACFS file system that are taken online. To perform point-in-time recovery or even to recover deleted files, you need to know the current data and the changes made to the file. When you create a snapshot, ASM stores metadata such as the directory structure and the names of the files in the ASM dynamic volume, along with the location information for the files' data blocks. Once the snapshot is created, ASM maintains its consistency by recording file changes as they occur. Starting
with Oracle Grid Infrastructure 12.1, you can create a snapshot
from a previously created ACFS snapshot.
The ASM Cluster File System supports POSIX and X/Open file
system APIs, and you can use traditional UNIX commands such as
cp, cpio, ar, access, dir, diff, and so on. ACFS supports standard operating system backup tools, Oracle Secure Backup, and third-party tools such as storage array snapshot technologies.
ASM Filenames
You can use a fully qualified filename or an alias to refer to Oracle
ASM files.
Here’s an example:
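A fully qualified ASM filename has the form +diskgroup/dbname/filetype/filetype_tag.file#.incarnation#, while an alias is a user-chosen name. Both names below are illustrative, not from an actual system:

```
+DATA/orcl/DATAFILE/users.259.861245087    (fully qualified filename)
+DATA/orcl/users_01.dbf                    (user-created alias)
```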
Oracle uses default templates for creating different types of files
in ASM. These templates provide the attributes for creating a
specific type of file. You thus have the DATAFILE template for
creating datafiles, the CONTROLFILE template for creating
controlfiles, and the ARCHIVELOG template for creating archive
log files.
Creating Directories
To store your aliases, you can create your own hierarchical
directory structures. You create directories below the disk group
level, and you must ensure that a parent directory exists before
creating subdirectories or aliases. Here’s an example:
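A sketch, assuming a disk group named data; the directory names are assumptions:

```sql
-- Create a parent directory first, then a subdirectory beneath it
ALTER DISKGROUP data ADD DIRECTORY '+data/mydir';
ALTER DISKGROUP data ADD DIRECTORY '+data/mydir/subdir';
```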
Dropping Directories
You can delete a directory with the ALTER DISKGROUP statement:
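For example, assuming the directory created earlier exists in a disk group named data:

```sql
-- FORCE also removes any aliases contained in the directory
ALTER DISKGROUP data DROP DIRECTORY '+data/mydir/subdir' FORCE;
```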
TIP
The DBMS_FILE_TRANSFER package contains procedures that
enable you to copy ASM files between databases or to transfer
binary files between databases.
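For instance, DBMS_FILE_TRANSFER.COPY_FILE copies a file between two directory objects, which may point into ASM disk groups. The directory objects and filenames below are assumptions:

```sql
-- SRC_DIR and DST_DIR are hypothetical directory objects that
-- could map to ASM paths such as '+DATA/orcl/datafile'.
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users_copy.dbf');
END;
/
```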
ASM ADMINISTRATION AND
MANAGEMENT
Administering an ASM instance is similar to managing a database
instance, but it involves fewer tasks. An ASM instance doesn’t
require a database instance to be running for you to administer it.
An ASM instance does not have a data dictionary because
metadata is not stored in a dictionary. ASM metadata is small,
and Oracle stores it in the ASM disk headers. ASM instances
manage the disk group metadata and provide the layout of the
datafiles in ASM to the Oracle instance.
The metadata that ASM stores in the disk groups includes the following types:
The disks that belong to the disk group
The amount of space available in the disk group
The filenames of the files in the disk group
The location of disk group datafile extents
A redo log that records information about atomically changing metadata blocks
Oracle ADVM volume information
NOTE
To administer ASM with SQL*Plus, you must set the ORACLE_SID
environment variable to the ASM SID before you start SQL*Plus.
The default ASM SID for a single-instance database is +ASM, and
the default SID for ASM on Oracle RAC nodes is +ASMnode#. ASM
instances do not have a data dictionary, so you must use
operating system authentication and connect as SYSASM or
SYSOPER. When connecting remotely through Oracle Net
Services, you must use a password file for authentication.
You can check the integrity of Oracle ASM across the cluster using
the Cluster Verification Utility (CVU), as shown here:
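The check takes this general form:

```
$ cluvfy comp asm -n all
```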
This command checks all nodes in the cluster. You can check a
single node or a set of nodes instead by replacing
the all parameter with [-n node_list].
NOTE
An ASM instance doesn't have a data dictionary, so the only way to connect to it is with administrative privileges such as SYSDBA, SYSASM, and SYSOPER, granted through operating system authentication (or through a password file for remote connections).
NOTE
To manage ASM instances, use the SRVCTL executable in the
Oracle Grid Infrastructure home. You can’t use the SRVCTL
executable located in the Oracle RAC or Oracle Database home
for managing ASM.
Use the following command to start the ASM instance on all nodes
in a cluster:
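Run from the Grid Infrastructure home:

```
$ srvctl start asm
```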
You can stop the ASM instance on a specific node with the
following command:
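For example, assuming a node named racnode2 (the node name is an assumption):

```
$ srvctl stop asm -n racnode2
```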
In addition to starting up and shutting down an ASM instance, the
SRVCTL utility also lets you perform other administrative tasks, as
described in the following examples.
Use the following command to add an ASM instance:
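A minimal sketch; the listener name is an assumption, and options such as the spfile location can also be specified:

```
$ srvctl add asm -l LISTENER
```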
You can start and stop an ASM instance using SQL*Plus, just as
you do for a regular Oracle Database instance. In order to connect
to the ASM instance, make sure to set the ORACLE_SID
environment variable to the Oracle ASM SID. The default SID for ASM on a RAC node is +ASMnode_number, where node_number is the number of the node hosting the ASM instance. Here's how you start up an ASM instance with SQL*Plus:
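A sketch for the first node (the SID +ASM1 is the default for node 1):

```
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm

SQL> STARTUP
```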
NOTE
You can’t start or shut down an ASM instance alone in the Oracle
RAC database system if the OCR and voting disks are stored
inside the ASM disk group. You must use the crsctl command to
start or stop the CRS, which will start/stop the ASM instance also.
You can also add OS users to an ASM disk group so that the users
have access privileges on the disk group:
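A sketch, assuming ASM File Access Control is enabled on a disk group named data; the OS user name is an assumption:

```sql
-- Grant the OS user oracle1 access to files in the disk group
ALTER DISKGROUP data ADD USER 'oracle1';
```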
NOTE
Many DBAs have issues properly configuring the
ASM_DISKSTRING parameter. You must restrict it to the smallest set of paths where the ASM disks are located, rather than /dev/*, because the ASM instance will otherwise be forced to scan every device at that location. Many of the devices there are not disks, and hence the disk discovery process will take much longer than necessary.
If you want ASM to mirror files, define the redundancy level while creating the ASM disk group. Oracle provides two mirroring redundancy levels: normal redundancy and high redundancy. In normal redundancy, each extent has one mirrored copy; in high redundancy, each extent has two mirrored copies, each placed in a different failure group.
Disks in a disk group should be of a similar size with similar
performance characteristics. It’s always advisable to create
different disk groups for different types of disks. All disks in disk
groups must be of the same size to avoid wasting disk space in
failure groups. All disks that you'll use to create disk groups must be discoverable through the ASM_DISKSTRING parameter to avoid disk discovery issues.
After creating a disk group, you may need to alter the disk group
depending on your business requirements. Oracle allows you to
perform create, drop/undrop, resize, rebalance, and
mount/dismount operations on disk groups after they’ve been
created.
You can optionally set the SECTOR_SIZE disk group attribute when
creating a disk group. The value you set for SECTOR_SIZE
determines the sector size for the disks in a disk group. You can
set the sector size only when creating a disk group. If your disks
support it, you may set the value of the SECTOR_SIZE parameter
to 512 or 4096. The default value for sector size depends on the
operating system platform.
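Disks can be added to an existing disk group with ALTER DISKGROUP ADD DISK. A sketch; the device paths are assumptions:

```sql
ALTER DISKGROUP mydg ADD DISK
  '/dev/oracleasm/disks/DISK04' NAME disk04,
  '/dev/oracleasm/disks/DISK05' NAME disk05;
```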
This command adds the two disks, disk04 and disk05, to the disk
group named mydg.
3. Manually delete the old disk group resource with the following
command:
4. Rename the disks in the new disk group with the following
commands:
ADMINISTERING ACFS
You can use ASM tools such as ASMCA, ASMCMD, OEM, and SQL*Plus to create and manage ACFS. Creating an ACFS file system is just like creating a file system with any traditional volume manager: you first create a volume group and then the volumes; finally, you create the file system and then mount it.
Just like an allocation unit in the ASM disk group, volume
allocation unit is the smallest storage unit that can be allocated in
an ASM dynamic volume. ASM allocates stripes in each volume
allocation unit, where each stripe is the equivalent of the volume
extent size, which is directly related to the allocation unit size of
the underlying ASM disk group. By default, the size of one volume
extent is 64MB for a 1MB allocation unit size disk group. When
you create an ASM dynamic volume, you specify the number of
stripes (also known as stripe columns) and the width of each
stripe because Oracle internally uses the SAME (Stripe and Mirror
Everything) approach to stripe and mirror data. Oracle will then
distribute the file being stored inside the ACFS into chunks, with a
stripe width size of 128KB (the default stripe width) on each
volume extent.
For example, if you create an ASM volume of 400MB and store a
1MB file initially with default settings, Oracle will create a volume
extent size of 64MB and each volume allocation unit will be of
256MB (the number of stripes multiplied by the volume extent
size). Because Oracle allocates space in multiples of the volume
allocation unit, it will allocate 512MB for the ASM dynamic volume
although you requested 400MB. The ASM dynamic volume can be
extended later on. ASM will distribute the file into eight chunks of
128KB and store them on volume extents. ASM stripes the volume
extents in the same way on the available disks inside the ASM
disk group; hence, it provides very efficient I/O operations.
You can use the V$ASM_ACFSVOLUMES and V$ASM_FILESYSTEM
dynamic performance views of the ASM instance to display the
ACFS file system information on the connected ASM instance.
Setting Up ACFS
Follow the steps given next to create the ASM Cluster File System:
7. Once you mount the new file system, make sure you set the
permissions on the file system appropriately so that users who
need to access it can do so. For example, you can set the
permissions for the user oracle in the following manner:
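Assuming the file system is mounted at /u02/acfsmounts/acfs1 (a hypothetical mount point) and the oinstall group:

```
# chown -R oracle:oinstall /u02/acfsmounts/acfs1
# chmod -R 775 /u02/acfsmounts/acfs1
```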
NOTE
ASM automatically disables the rebalancing feature when you
create a disk group with REBALANCE POWER 0. If you add more
disks to this disk group, ASM will not distribute data to the newly
added disks. Also, when you remove disks from this disk group
(DROP DISK), the status of the disk group remains DROPPING until
you change REBALANCE POWER to greater than 0. We the
authors don’t think it’s ever a good idea to set the rebalance
power to 0.
RESTORE This phase is always run, and you can’t exclude it.
The phase includes the following operations:
RESYNC This operation synchronizes the stale extents on the
disks that the database is bringing online.
RESILVER Applies only to Exadata systems. This is an
operation where data is copied from a mirror to another mirror
that has stale data.
REBUILD Restores the redundancy of forcing disks (a forcing disk is one that you've dropped with the FORCE option).
BALANCE This phase restores the redundancy of all the disks
in the disk group and balances extents on all disks.
PREPARE This phase is applicable only to FLEX or EXTENDED
redundancy disk groups. It completes the work relevant to the
“prepare” SQL operation.
COMPACT This phase defragments and compacts all
extents.
The SCRUB clause checks and reports any existing logical corruption. The
following example shows how to specify the REPAIR clause along with the
SCRUB clause to automatically repair disk corruption:
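A sketch against a disk group named data:

```sql
-- Check for logical corruption and repair it from mirrored copies
ALTER DISKGROUP data SCRUB REPAIR POWER HIGH;
```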
As with the rebalancing commands, you may optionally specify the WAIT
and FORCE clauses as well when scrubbing a disk group, a specific disk,
or a specific file of a disk group.
NOTE
Oracle Flex ASM is a requirement for Oracle Flex clusters, which
are new in the Oracle Database 12c release (see Chapter 7).
Therefore, it is enabled by default when you install an Oracle Flex
cluster.
The config asm command shows that three ASM instances are
part of the Flex ASM configuration. You can change the number of
instances in the configuration with the modify asm command, as
shown here:
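For example, to raise the count to four instances and confirm the change:

```
$ srvctl modify asm -count 4
$ srvctl config asm
```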
NOTE
Flex disk groups let you manage storage at the granularity level
of a database, in addition to enabling management at the disk
group level.
Key Features
Here are the key features of an ASM flex disk group:
You can migrate a normal disk group to a flex group with the
ALTER DISKGROUP statement:
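A minimal sketch, assuming a disk group named data with the required COMPATIBLE attributes already set:

```sql
ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;
```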
You can add a quota group to a disk group in the following way:
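A sketch of the syntax; the quota group name and limit are assumptions:

```sql
-- Create quota group QGRP1 with a 10GB quota in disk group DATA
ALTER DISKGROUP data ADD QUOTAGROUP qgrp1 SET 'quota' = 10G;
```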
Now that you’ve learned about quota groups, let’s see how you
create file groups within a quota group.
Figure 6-5 shows the architecture of Oracle ASM file groups and
quota groups, as summarized here:
There are two ASM disk groups: Disk Group 1 and Disk Group
2.
Each of the two disk groups contains two quota groups: QGRP1
and QGRP2.
The quota group QGRP1 contains one file group, named PDB1.
The quota group QGRP2 contains two file groups, named PDB2 and PDB3.
The file groups named PDB1 (in disk group 1 and disk group 2) are dedicated to the pluggable database PDB1.
The file groups named PDB2 (in disk group 1 and disk group 2) are dedicated to the pluggable database PDB2.
The file groups named PDB3 (in disk group 1 and disk group 2) are dedicated to the pluggable database PDB3.
NOTE
An extended disk group tolerates failures at the site level, in
addition to failures at the failure group level.
ASMLIB
ASMLib is the storage management interface that helps simplify
the OS-to-database interface. The ASMLib API was developed and
supported by Oracle to provide an alternative interface for the
ASM-enabled kernel to identify and access block devices. The
ASMLib interface serves as an alternative to the standard
operating system interface. It provides storage and operating
system vendors the opportunity to supply extended storage-
related features that provide benefits such as greater
performance and integrity than are currently available on other
database platforms.
Installing ASMLib
Oracle provides an ASM library driver for the Linux OS. You must
install the ASM library driver prior to installing any Oracle
Database software. In addition, it is recommended that any ASM
disk devices required by the database be prepared and created
before the OUI database installation. The ASMLib software is
available on the Oracle Technology Network (OTN) for free
download.
Three packages are available for each Linux platform. The two
essential RPM packages are the oracleasmlib package, which
provides the actual ASM library, and the oracleasm-support
package, which provides the utilities to configure and enable the
ASM driver. Both these packages need to be installed. The third
package provides the kernel driver for the ASM library. Each
package provides the driver for a different kernel. You must install
the appropriate package for the kernel you are running.
Configuring ASMLib
After the packages are installed, the ASMLib can be loaded and
configured using the configure option in the /etc/init.d/oracleasm
utility. For Oracle RAC clusters, the oracleasm installation and
configuration must be completed on all nodes of the cluster.
Configuring ASMLib is as simple as executing the following
command:
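A sketch of an interactive run; the user and group names shown are assumptions typical of a role-separated installation:

```
# /etc/init.d/oracleasm configure
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
```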
Note that the ASMLib mount point is not a standard file system that can be accessed by operating system commands. Only the ASM library uses it, to communicate with the ASM driver. The ASM
library dynamically links with the Oracle kernel, and multiple
ASMLib implementations can be simultaneously linked to the
same Oracle kernel. Each library provides access to a different set
of disks and different ASMLib capabilities.
The objective of ASMLib is to provide a more streamlined and
efficient mechanism for managing disks and I/O processing of
ASM storage. The ASM API provides a set of interdependent
functions that need to be implemented in a layered fashion.
These functions are dependent on the backend storage
implementing the associated functions. From an implementation
perspective, these functions are grouped into three collections of
functions.
Each function group is dependent on the existence of the lower-
level group. Device discovery functions are the lowest-layer
functions and must be implemented in any ASMLib library. I/O
processing functions provide an optimized asynchronous interface
for scheduling I/O operations and managing I/O operation
completion events. These functions, in effect, extend the
operating system interface. Consequently, the I/O processing
functions must be implemented as a device driver within the
operating system kernel.
The performance and reliability functions are the highest-layer
functions and depend on the existence of the I/O processing
functions. These functions use the I/O processing control
structures for passing metadata between the Oracle database and
the backend storage devices. The performance and reliability
functions enable additional intelligence on the part of backend
storage. This is achieved when metadata transfer is passed
through the ASMLib API.
Device Discovery
Device discovery provides the identification and naming of
storage devices that are operated on by higher-level functions.
Device discovery does not require any operating system code and
can be implemented as a standalone library invoked and
dynamically linked by the Oracle database. The discovery function
makes the characteristics of the disk available to ASM. Disks
discovered through ASMLib do not need to be available through
normal operating system interfaces. For example, a storage
vendor may provide a more efficient method of discovering and
locating disks that its own interface driver manages.
I/O Processing
The current standard I/O model imposes a lot of OS overhead, due
in part to mode and context switches. The deployment of ASMLib
reduces the number of state transitions from kernel to user mode
by employing a more efficient I/O schedule and call-processing
mechanism. One call to ASMLib can submit and reap multiple I/Os.
This dramatically reduces the number of calls to the OS when I/O
is performed. Additionally, one I/O handle can be used by all the
processes in an instance for accessing the same disk. This
eliminates multiple open calls and multiple file descriptors.
One of the critical aspects of the ASMLib I/O interface is that it provides an asynchronous I/O interface, enhancing performance and enabling database-related intelligence on the backend storage devices. As for additional intelligence in the backend storage, the
I/O interface enables passing metadata from the database to the
storage devices. Future developments in the storage array
firmware may allow the transport of database-related metadata to
the backend storage devices and will enable new database-
related intelligence in the storage devices.
You can use the asmcmd dsget command to view the value of
the current ASM disk discovery string.
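For example (the discovery string shown in the output is an assumption):

```
$ asmcmd dsget
parameter:/dev/oracleasm/disks/*
profile:/dev/oracleasm/disks/*
```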
2. List all nodes and their roles by issuing the olsnodes -a command, and run the following set of commands on each hub node.
3. Stop Oracle Clusterware with the crsctl stop crs command
first and then configure ASMFD as follows:
From here on, ASMFD will manage the disks instead of ASMLib.
SUMMARY
Automatic Storage Management is the best framework for
managing data storage for the Oracle database. ASM implements
the Stripe and Mirror Everything (SAME) methodology to manage
the storage stack with an I/O size equal to the most common
Oracle I/O clients, thus providing tremendous performance
benefits.
Oracle has enhanced and improved ASM to the point that it is supported for storing application data for non-Oracle enterprise applications. ASM provides the GUI tools ASMCA and OEM for
management of the ASM instance and its associated objects. The
ASM Cluster File System is a true cluster file system built on ASM
foundations, and it eliminates the need for third-party cluster file
systems in enterprise cluster applications. ACFS also comes with
rich functionalities such as snapshots and encryption.
Command-line tools such as SQL*Plus, ASM FTP, and ASMCMD are
also available to provide a command-line interface to design and
build provisioning scripts. Added features such as ACFS and ACFS snapshots help keep the management of huge amounts of storage from becoming a complex task involving extensive planning and day-to-day administration.