Veritas Volume Manager Administrator's Guide
Solaris
5.1 SP1
Legal Notice
Copyright © 2010 Symantec Corporation. All rights reserved.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction, release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s support offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and/or Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers software upgrades
■ Global support purchased on a regional business hours or 24 hours a day, 7
days a week basis
■ Premium service offerings that include Account Management Services
For information about Symantec’s support offerings, you can visit our Web site
at the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Documentation
Your feedback on product documentation is important to us. Send suggestions
for improvements and reports on errors or omissions. Include the title and
document version (located on the second page), and chapter and section titles of
the text on which you are reporting. Send feedback to:
[email protected]
■ Online relayout
■ Volume resynchronization
■ Volume snapshots
■ FastResync
■ Hot-relocation
■ Volume sets
A VxVM volume appears to applications and the operating system as a physical
device on which file systems, databases, and other managed data objects can be
configured.
VxVM provides easy-to-use online disk storage management for computing
environments and Storage Area Network (SAN) environments. By supporting the
Redundant Array of Independent Disks (RAID) model, VxVM can be configured
to protect against disk and hardware failure, and to increase I/O throughput.
Additionally, VxVM provides features that enhance fault tolerance and fast
recovery from disk failure or storage array failure.
VxVM overcomes restrictions imposed by hardware disk devices and by LUNs by
providing a logical volume management layer. This allows volumes to span multiple
disks and LUNs.
VxVM provides the tools to improve performance and ensure data availability
and integrity. You can also use VxVM to dynamically configure storage while the
system is active.
Virtual objects: When one or more physical disks are brought under the control
of VxVM, it creates virtual objects called volumes on those physical
disks. Each volume records and retrieves data from one or more
physical disks. Volumes are accessed by file systems, databases,
or other applications in the same way that physical disks are
accessed. Volumes are also composed of other virtual objects
(plexes and subdisks) that are used in changing the volume
configuration. Volumes and their virtual components are called
virtual objects or VxVM objects.
Physical objects
A physical disk is the basic storage device (media) where the data is ultimately
stored. You can access the data on a physical disk by using a device name to locate
the disk. The physical disk device name varies with the computer system you use.
Not all parameters are used on all systems.
Typical device names are of the form c#t#d#s#, where c# specifies the controller,
t# specifies the target ID, d# specifies the disk, and s# specifies the partition or
slice. For example, device name c0t0d0s2 is the entire hard disk connected to
controller number 0 in the system, with a target ID of 0, and physical disk number
0.
Figure 1-1 shows how a physical disk and device name (devname) are illustrated
in this document.
Partitions
Figure 1-2 shows how a physical disk can be divided into one or more slices, also
known as partitions.
The slice number is added at the end of the devname, and is denoted by s#. Note
that slice s2 refers to an entire physical disk for non-EFI disks.
Disk arrays
Performing I/O to disks is a relatively slow process because disks are physical
devices that require time to move the heads to the correct position on the disk
before reading or writing. If all of the read or write operations are done to
individual disks, one at a time, the read-write time can become unmanageable.
Performing these operations on multiple disks can help to reduce this problem.
A disk array is a collection of physical disks that VxVM can represent to the
operating system as one or more virtual disks or volumes. The volumes created
by VxVM look and act to the operating system like physical disks. Applications
that interact with volumes should work in the same way as with physical disks.
Figure 1-3 shows how VxVM represents the disks in a disk array as several volumes
to the operating system.
Figure 1-3 How VxVM presents the disks in a disk array as volumes to the
operating system
Data can be spread across several disks within an array, or across disks spanning
multiple arrays, to distribute or balance I/O operations across the disks. Using
parallel I/O across multiple disks in this way improves I/O performance by
increasing data transfer speed and overall throughput for the array.
Multipathed disk arrays
Some disk arrays provide multiple ports to access their disk devices. Such disk
arrays are called multipathed disk arrays. This type of disk array can be connected
to host systems in many different configurations, such as multiple ports connected
to different controllers on a single host, chaining of the ports through a single
controller on a host, or ports connected to different hosts simultaneously.
See “How DMP works” on page 157.
Device discovery
Device discovery is the process of discovering the disks that are attached to a
host. This feature is important for DMP because it needs to
support a growing number of disk arrays from a number of vendors. In conjunction
with the ability to discover the devices attached to a host, the Device Discovery
service enables you to add support dynamically for new disk arrays. This operation,
which uses a facility called the Device Discovery Layer (DDL), is achieved without
the need for a reboot.
This means that you can dynamically add a new disk array to a host, and run a
command which scans the operating system’s device tree for all the attached disk
devices, and reconfigures DMP with the new device database.
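For example, after attaching a new disk array you might run commands such as
the following to make the new devices visible to VxVM and DMP (a representative
sequence only; your configuration may require different options):
# vxdisk scandisks new
# vxdctl enable
The first command scans only for devices that have been newly added to the host,
and the second rebuilds the DMP device database and the volume device nodes.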
See “How to administer the Device Discovery Layer” on page 90.
Figure 1-4 Example configuration for disk enclosures connected via a fibre
channel switch
(The figure shows two configurations: a host with a single controller, c1, connected
through one Fibre Channel switch to the disk enclosures, and a host with two
controllers, c1 and c2, connected through two Fibre Channel switches to the disk
enclosures.)
Such a configuration protects against the failure of one of the host controllers
(c1 and c2), or of the cable between the host and one of the switches. In this
example, each disk is known by the same name to VxVM for all of the paths over
which it can be accessed. For example, the disk device enc0_0 represents a single
disk for which two different paths are known to the operating system, such as
c1t99d0 and c2t99d0.
Virtual objects
VxVM uses multiple virtualization layers to provide distinct functionality and
reduce physical limitations.
(The figure shows a disk group in which two volumes, vol01 and vol02, are created
from VM disks that are based on the physical disks devname1, devname2, and
devname3.)
The disk group contains three VM disks which are used to create two volumes.
Volume vol01 is simple and has a single plex. Volume vol02 is a mirrored volume
with two plexes.
The various types of virtual objects (disk groups, VM disks, subdisks, plexes and
volumes) are described in the following sections. Other types of objects exist in
Veritas Volume Manager, such as data change objects (DCOs), and volume sets,
to provide extended functionality.
Disk groups
A disk group is a collection of disks that share a common configuration, and which
are managed by VxVM. A disk group configuration is a set of records with detailed
information about related VxVM objects, their attributes, and their connections.
A disk group name can be up to 31 characters long.
See “VM disks” on page 31.
In releases before VxVM 4.0, the default disk group was rootdg (the root disk
group). For VxVM to function, the rootdg disk group had to exist and it had to
contain at least one disk. This requirement no longer exists, and VxVM can work
without any disk groups configured (although you must set up at least one disk
group before you can create any volumes or other VxVM objects).
See “System-wide reserved disk groups” on page 230.
You can create additional disk groups when you need them. Disk groups allow
you to group disks into logical collections. A disk group and its components can
be moved as a unit from one host machine to another.
See “Reorganizing the contents of disk groups” on page 270.
Volumes are created within a disk group. A given volume and its plexes and
subdisks must be configured from disks in the same disk group.
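For example, the following commands initialize a disk for VxVM use and create
a disk group on it (the disk group name mydg, the disk name mydg01, and the
device name are examples only):
# vxdisksetup -i c1t0d0
# vxdg init mydg mydg01=c1t0d0
Further disks can be added to the disk group later with the vxdg adddisk command.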
VM disks
When you place a physical disk under VxVM control, a VM disk is assigned to the
physical disk. A VM disk is under VxVM control and is usually in a disk group.
Each VM disk corresponds to at least one physical disk or disk partition. VxVM
allocates storage from a contiguous area of VxVM disk space.
A VM disk typically includes a public region (allocated storage) and a small private
region where VxVM internal configuration information is stored.
Each VM disk has a unique disk media name (a virtual disk name). You can either
define a disk name of up to 31 characters, or allow VxVM to assign a default name
that takes the form diskgroup##, where diskgroup is the name of the disk group
to which the disk belongs.
See “Disk groups” on page 31.
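For example, the following commands prepare a new device for VxVM use and
add it to an existing disk group under a chosen disk media name (the disk group
mydg, the disk name mydg02, and the device name are examples only):
# vxdisksetup -i c2t1d0
# vxdg -g mydg adddisk mydg02=c2t1d0
If you omit the disk media name, VxVM assigns a default name of the form mydg##.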
Figure 1-7 shows a VM disk with a media name of disk01 that is assigned to the
physical disk, devname.
Subdisks
A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk.
VxVM allocates disk space using subdisks. A VM disk can be divided into one or
more subdisks. Each subdisk represents a specific portion of a VM disk, which is
mapped to a specific region of a physical disk.
The default name for a VM disk is diskgroup## and the default name for a subdisk
is diskgroup##-##, where diskgroup is the name of the disk group to which the
disk belongs.
See “Disk groups” on page 31.
Figure 1-8 shows disk01-01 is the name of the first subdisk on the VM disk named
disk01.
A VM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VM disk. To ensure integrity, VxVM rejects any commands
that try to create overlapping subdisks.
Figure 1-9 shows a VM disk with three subdisks, which are assigned from one
physical disk.
Any VM disk space that is not part of a subdisk is free space. You can use free
space to create new subdisks.
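For example, the following command uses the low-level vxmake utility to define
a subdisk explicitly; it is shown only to illustrate how a subdisk maps a range of
a VM disk, and the disk group, names, offset, and length (in sectors) are examples:
# vxmake -g mydg sd mydg02-01 mydg02,0,8000
This creates a subdisk named mydg02-01 that starts at offset 0 on VM disk mydg02
and is 8000 sectors long. In normal operation, vxassist creates the required
subdisks for you.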
Plexes
VxVM uses subdisks to build virtual objects called plexes. A plex consists of one
or more subdisks located on one or more physical disks.
Figure 1-10 shows an example of a plex with two subdisks.
(The figure shows plex vol01-01, which contains the subdisks disk01-01 and
disk01-02.)
You can organize data on subdisks to form a plex by using the following methods:
■ concatenation
■ striping (RAID-0)
■ mirroring (RAID-1)
■ striping with parity (RAID-5)
Concatenation, striping (RAID-0), mirroring (RAID-1) and RAID-5 are types of
volume layout.
See “Volume layouts in VxVM” on page 35.
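For example, the following command uses vxmake to build a plex from existing
subdisks (the names are examples only, and vxassist normally performs this step
automatically):
# vxmake -g mydg plex vol01-02 sd=mydg02-01,mydg02-02
The plex vol01-02 concatenates the two named subdisks in the order listed.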
Volumes
A volume is a virtual disk device that appears to applications, databases, and file
systems like a physical disk device, but does not have the physical limitations of
a physical disk device. A volume consists of one or more plexes, each holding a
copy of the selected data in the volume. Due to its virtual nature, a volume is not
restricted to a particular disk or a specific area of a disk. The configuration of a
volume can be changed by using VxVM user interfaces. Configuration changes
can be accomplished without causing disruption to applications or file systems
that are using the volume. For example, a volume can be mirrored on separate
disks or moved to use different disk storage.
VxVM uses the default naming conventions of vol## for volumes and vol##-##
for plexes in a volume. For ease of administration, you can select more meaningful
names for the volumes that you create.
A volume may be created under the following constraints:
■ Its name can contain up to 31 characters.
■ It can consist of up to 32 plexes, each of which contains one or more subdisks.
■ It must have at least one associated plex that has a complete copy of the data
in the volume with at least one associated subdisk.
■ All subdisks within a volume must belong to the same disk group.
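For example, the following command creates a simple 10-gigabyte concatenated
volume in a disk group (the disk group name mydg and the volume name are
examples only):
# vxassist -g mydg make vol01 10g
vxassist selects suitable disks in the disk group and creates the subdisks and plex
that make up the volume.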
Figure 1-11 shows a volume vol01 with a single plex.
(The figure shows volume vol01 with a single plex, vol01-01, which contains the
subdisk disk01-01.)
Figure 1-12 shows a mirrored volume, vol06, with two data plexes.
(The figure shows volume vol06 with two plexes, vol06-01 and vol06-02, which
contain the subdisks disk01-01 and disk02-01 respectively.)
Each plex of the mirror contains a complete copy of the volume data.
The volume vol06 has the following characteristics:
■ It contains two plexes named vol06-01 and vol06-02.
■ Each plex contains one subdisk.
■ Each subdisk is allocated from a different VM disk (disk01 and disk02).
See “Mirroring (RAID-1)” on page 42.
VxVM supports the concept of layered volumes in which subdisks can contain
volumes.
See “Layered volumes” on page 50.
Non-layered volumes
In a non-layered volume, a subdisk maps directly to a VM disk. This allows the
subdisk to define a contiguous extent of storage space backed by the public region
of a VM disk. When active, the VM disk is directly associated with an underlying
physical disk. The combination of a volume layout and the physical disks therefore
determines the storage service available from a given virtual device.
Layered volumes
A layered volume is constructed by mapping its subdisks to underlying volumes.
The subdisks in the underlying volumes must map to VM disks, and hence to
attached physical storage.
Layered volumes allow for more combinations of logical compositions, some of
which may be desirable for configuring a virtual device. For example, layered
volumes allow for high availability when using striping. Because permitting free
use of layered volumes throughout the command level would have resulted in
unwieldy administration, some ready-made layered volume configurations are
designed into VxVM.
See “Layered volumes” on page 50.
These ready-made configurations operate with built-in rules to automatically
match desired levels of service within specified constraints. The automatic
configuration is done on a “best-effort” basis for the current command invocation
working against the current configuration.
To achieve the desired storage service from a set of virtual devices, it may be
necessary to include an appropriate set of VM disks into a disk group, and to
execute multiple configuration commands.
To the extent that it can, VxVM handles initial configuration and on-line
re-configuration with its set of layouts and administration interface to make this
job easier and more deterministic.
Layout methods
Data in virtual objects is organized to create volumes by using the following layout
methods:
■ Concatenation, spanning, and carving
See “Concatenation, spanning, and carving” on page 37.
■ Striping (RAID-0)
See “Striping (RAID-0)” on page 39.
■ Mirroring (RAID-1)
See “Mirroring (RAID-1)” on page 42.
■ Striping plus mirroring (mirrored-stripe or RAID-0+1)
See “Striping plus mirroring (mirrored-stripe or RAID-0+1)” on page 43.
■ Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are
contiguous on the plex, but actually come from two distinct subdisks on the same
physical disk.
The remaining free space in the subdisk, disk01-02, on VM disk, disk01, can be
put to other uses.
You can use concatenation with multiple subdisks when there is insufficient
contiguous space for the plex on any one disk. This form of concatenation can be
used for load balancing between disks, and for head movement optimization on
a particular disk.
Figure 1-14 shows data spread over two subdisks in a spanned plex.
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are
contiguous on the plex, but actually come from two distinct subdisks from two
distinct physical disks.
The remaining free space in the subdisk disk02-02 on VM disk disk02 can be put
to other uses.
Warning: Spanning a plex across multiple disks increases the chance that a disk
failure results in failure of the assigned volume. Use mirroring or RAID-5 to reduce
the risk that a single disk failure results in a volume failure.
Striping (RAID-0)
Striping (RAID-0) is useful if you need large amounts of data written to or read
from physical disks, and performance is important. Striping is also helpful in
balancing the I/O load from multi-user applications across multiple disks. By
using parallel data transfer to and from multiple disks, striping significantly
improves data-access performance.
Striping maps data so that the data is interleaved among two or more physical
disks. A striped plex contains two or more subdisks, spread out over two or more
physical disks. Data is allocated alternately and evenly to the subdisks of a striped
plex.
The subdisks are grouped into “columns,” with each physical disk limited to one
column. Each column contains one or more subdisks and can be derived from one
or more physical disks. The number and sizes of subdisks per column can vary.
Additional subdisks can be added to columns, as necessary.
If five volumes are striped across the same five disks, then failure of any one of
the five disks will require that all five volumes be restored from a backup. If each
volume is on a separate disk, only one volume has to be restored. (As an alternative
to or in conjunction with striping, use mirroring or RAID-5 to substantially reduce
the chance that a single disk failure results in failure of a large number of volumes.)
Data is allocated in equal-sized stripe units that are interleaved between the
columns. Each stripe unit is a set of contiguous blocks on a disk. The default stripe
unit size is 64 kilobytes.
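For example, the following command creates a striped volume with three columns,
making the default 64-kilobyte stripe unit size explicit (the disk group, volume
name, and size are examples only):
# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k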
Figure 1-15 shows an example with three columns in a striped plex, six stripe
units, and data striped over the three columns.
A stripe consists of the set of stripe units at the same positions across all columns.
In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Viewed in sequence, the first stripe consists of:
■ stripe unit 1 in column 0
■ stripe unit 2 in column 1
■ stripe unit 3 in column 2
The second stripe consists of:
■ stripe unit 4 in column 0
■ stripe unit 5 in column 1
■ stripe unit 6 in column 2
Striping continues for the length of the columns (if all columns are the same
length), or until the end of the shortest column is reached. Any space remaining
at the end of subdisks in longer columns becomes unused space.
Figure 1-16 shows a striped plex with three equal sized, single-subdisk columns.
Figure 1-16 Example of a striped plex with one subdisk per column
There is one column per physical disk. This example shows three subdisks that
occupy all of the space on the VM disks. It is also possible for each subdisk in a
striped plex to occupy only a portion of the VM disk, which leaves free space for
other disk management tasks.
Figure 1-17 shows a striped plex with three columns containing subdisks of
different sizes.
Figure 1-17 Example of a striped plex with concatenated subdisks per column
(The figure shows a striped plex with one column per VM disk: the first column
contains subdisk disk01-01 from disk01, the second column contains subdisks
disk02-01 and disk02-02 from disk02, and the third column contains subdisks
disk03-01, disk03-02, and disk03-03 from disk03.)
Each column contains a different number of subdisks. There is one column per
physical disk. Striped plexes can be created by using a single subdisk from each
of the VM disks being striped across. It is also possible to allocate space from
different regions of the same disk or from another disk (for example, if the size
of the plex is increased). Columns can also contain subdisks from different VM
disks.
See “Creating a striped volume” on page 337.
Mirroring (RAID-1)
Mirroring uses multiple mirrors (plexes) to duplicate the information contained
in a volume. In the event of a physical disk failure, the plex on the failed disk
becomes unavailable, but the system continues to operate using the unaffected
mirrors. Similarly, mirroring two LUNs from two separate controllers lets the
system operate if there is a controller failure.
Although a volume can have a single plex, at least two plexes are required to
provide redundancy of data. Each of these plexes must contain disk space from
different disks to achieve redundancy.
When striping or spanning across a large number of disks, failure of any one of
those disks can make the entire plex unusable. Because the likelihood of one out
of several disks failing is reasonably high, you should consider mirroring to
improve the reliability (and availability) of a striped or spanned volume.
See “Creating a mirrored volume” on page 331.
See “Mirroring across targets, controllers or enclosures” on page 339.
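For example, the following commands create a new volume with two mirrors,
and add a mirror to an existing volume (the disk group, volume names, and size
are examples only):
# vxassist -g mydg make mirrorvol 10g layout=mirror nmirror=2
# vxassist -g mydg mirror vol01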
(Figure content: a mirrored-stripe volume is a mirror whose plexes are each striped
across columns 0, 1, and 2; in a striped-mirror volume, a single striped plex is
built from columns 0, 1, and 2 that are each mirrored.)
Figure 1-20 How the failure of a single disk affects mirrored-stripe and
striped-mirror volumes
(The figure shows that after a disk failure, the striped plex that contains the failed
disk is detached from the mirrored-stripe volume, leaving the volume with no
redundancy until the plex is recovered.)
When the disk is replaced, the entire plex must be brought up to date. Recovering
the entire plex can take a substantial amount of time. If a disk fails in a
striped-mirror layout, only the failing subdisk must be detached, and only that
portion of the volume loses redundancy. When the disk is replaced, only a portion
of the volume needs to be recovered. Additionally, a mirrored-stripe volume is
more vulnerable to being put out of use altogether should a second disk fail before
the first failed disk has been replaced, either manually or by hot-relocation.
Compared to mirrored-stripe volumes, striped-mirror volumes are more tolerant
of disk failure, and recovery time is shorter.
If the layered volume concatenates instead of striping the underlying mirrored
volumes, the volume is termed a concatenated-mirror volume.
Data being written to a mirrored volume is reflected in all copies. If a portion of
a mirrored volume fails, the system continues to use the other copies of the data.
RAID-5 provides data redundancy by using parity. Parity is a calculated value
used to reconstruct data after a failure. While data is being written to a RAID-5
volume, parity is calculated by doing an exclusive OR (XOR) procedure on the
data. The resulting parity is then written to the volume. The data and calculated
parity are contained in a plex that is “striped” across multiple disks. If a portion
of a RAID-5 volume fails, the data that was on that portion of the failed volume
can be recreated from the remaining data and parity information. It is also possible
to mix concatenation and striping in the layout.
Figure 1-21 shows parity locations in a RAID-5 array configuration.
Stripe 1:  Data    Data    Parity
Stripe 2:  Data    Parity  Data
Stripe 3:  Parity  Data    Data
Stripe 4:  Data    Data    Parity
Every stripe has a column containing a parity stripe unit and columns containing
data. The parity is spread over all of the disks in the array, reducing the write
time for large independent writes because the writes do not have to wait until a
single parity disk can accept the data.
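For example, the following command creates a RAID-5 volume with four columns
(the disk group, volume name, and size are examples only; sufficient disks must
be available in the disk group):
# vxassist -g mydg make r5vol 20g layout=raid5 ncol=4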
RAID-5 volumes can additionally perform logging to minimize recovery time.
RAID-5 volumes use RAID-5 logs to keep a copy of the data and parity currently
being written. RAID-5 logging is optional and can be created along with RAID-5
volumes or added later.
See “Veritas Volume Manager RAID-5 arrays” on page 47.
Note: VxVM supports RAID-5 for private disk groups, but not for shareable disk
groups in a CVM environment. In addition, VxVM does not support the mirroring
of RAID-5 volumes that are configured using Veritas Volume Manager software.
Hardware RAID-5 LUNs may be mirrored.
Traditional RAID-5 arrays
A traditional RAID-5 array is several disks organized in rows and columns. A
column is a number of disks located in the same ordinal position in the array. A
row is the minimal number of disks necessary to support the full width of a parity
stripe.
Figure 1-22 shows the row and column arrangement of a traditional RAID-5 array.
(The figure shows stripe 1 and stripe 3 applied across the disks in Row 0, and
stripe 2 applied across the disks in Row 1.)
This traditional array structure supports growth by adding more rows per column.
Striping is accomplished by applying the first stripe across the disks in Row 0,
then the second stripe across the disks in Row 1, then the third stripe across the
Row 0 disks, and so on. This type of array requires all disks, columns, and rows
to be of equal size.
(The figure shows a Veritas Volume Manager RAID-5 array in which each column
consists of one or more subdisks (SD), with stripes 1 and 2 applied across the
columns.)
Left-symmetric layout
There are several layouts for data and parity that can be used in the setup of a
RAID-5 array. The implementation of RAID-5 in VxVM uses a left-symmetric
layout. This provides optimal performance for both random I/O operations and
large sequential I/O operations. However, the layout selection is not as critical
for performance as are the number of columns and the stripe unit size.
Left-symmetric layout stripes both data and parity across columns, placing the
parity in a different column for every stripe of data. The first parity stripe unit is
located in the rightmost column of the first stripe. Each successive parity stripe
unit is located in the next stripe, shifted left one column from the previous parity
stripe unit location. If there are more stripes than columns, the parity stripe unit
placement begins in the rightmost column again.
Figure 1-24 shows a left-symmetric parity layout with five disks (one per column).
Stripe 1:  0    1    2    3    P0
Stripe 2:  5    6    7    P1   4
Stripe 3:  10   11   P2   8    9
Stripe 4:  15   P3   12   13   14
Stripe 5:  P4   16   17   18   19
For each stripe, data is organized starting to the right of the parity stripe unit. In
the figure, data organization for the first stripe begins at P0 and continues to
stripe units 0-3. Data organization for the second stripe begins at P1, then
continues to stripe unit 4, and on to stripe units 5-7. Data organization proceeds
in this manner for the remaining stripes.
Each parity stripe unit contains the result of an exclusive OR (XOR) operation
performed on the data in the data stripe units within the same stripe. If one
column’s data is inaccessible due to hardware or software failure, the data for
each stripe can be restored by XORing the contents of the remaining columns'
data stripe units against their respective parity stripe units.
For example, if a disk corresponding to the whole or part of the far left column
fails, the volume is placed in a degraded mode. While in degraded mode, the data
from the failed column can be recreated by XORing stripe units 1-3 against parity
stripe unit P0 to recreate stripe unit 0, then XORing stripe units 4, 6, and 7 against
parity stripe unit P1 to recreate stripe unit 5, and so on.
Failure of more than one column in a RAID-5 plex detaches the volume. The volume
is no longer allowed to satisfy read or write requests. Once the failed columns
have been recovered, it may be necessary to recover user data from backups.
RAID-5 logging
Logging is used to prevent corruption of data during recovery by immediately
recording changes to data and parity to a log area on a persistent device such as
a volume on disk or in non-volatile RAM. The new data and parity are then written
to the disks.
Without logging, it is possible for data not involved in any active writes to be lost
or silently corrupted if both a disk in a RAID-5 volume and the system fail. If this
double-failure occurs, there is no way of knowing if the data being written to the
data portions of the disks or the parity being written to the parity portions have
actually been written. Therefore, the recovery of the corrupted disk may be
corrupted itself.
Figure 1-25 shows a RAID-5 volume configured across three disks (A, B and C).
In this volume, recovery of disk B’s corrupted data depends on disk A’s data and
disk C’s parity both being complete. However, only the data write to disk A is
complete. The parity write to disk C is incomplete, which would cause the data on
disk B to be reconstructed incorrectly.
This failure can be avoided by logging all data and parity writes before committing
them to the array. In this way, the log can be replayed, causing the data and parity
updates to be completed before the reconstruction of the failed drive takes place.
Logs are associated with a RAID-5 volume by being attached as log plexes. More
than one log plex can exist for each RAID-5 volume, in which case the log areas
are mirrored.
See “Adding a RAID-5 log” on page 403.
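For example, the following command adds a RAID-5 log plex to an existing RAID-5
volume (the names are examples only; a log can also be requested when the volume
is created by passing the nlog= attribute to vxassist make):
# vxassist -g mydg addlog r5vol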
Layered volumes
A layered volume is a virtual Veritas Volume Manager object that is built on top
of other volumes. The layered volume structure tolerates failure better and has
greater redundancy than the standard volume structure. For example, in a
striped-mirror layered volume, each mirror (plex) covers a smaller area of storage
space, so recovery is quicker than with a standard mirrored volume.
Figure 1-26 shows a typical striped-mirror layered volume where each column is
represented by a subdisk that is built from an underlying mirrored volume.
(The figure shows striped-mirror volume vol01 with striped plex vol01-01; the
columns of the striped plex are subdisks that are built from underlying mirrored
volumes, whose concatenated plexes are formed from the subdisks disk04-01,
disk05-01, disk06-01, and disk07-01 on the corresponding VM disks.)
The volume and striped plex in the “Managed by User” area allow you to perform
normal tasks in VxVM. User tasks can be performed only on the top-level volume
of a layered volume.
Underlying volumes in the “Managed by VxVM” area are used exclusively by
VxVM and are not designed for user manipulation. You cannot detach a layered
volume or perform any other operation on the underlying volumes by manipulating
the internal structure. You can perform all necessary operations in the “Managed
by User” area that includes the top-level volume and striped plex (for example,
resizing the volume, changing the column width, or adding a column).
System administrators can manipulate the layered volume structure for
troubleshooting or other operations (for example, to place data on specific disks).
Layered volumes are used by VxVM to perform the following tasks and operations:
Online relayout
Online relayout allows you to convert between storage layouts in VxVM, with
uninterrupted data access. Typically, you would do this to change the redundancy
or performance characteristics of a volume. VxVM adds redundancy to storage
either by duplicating the data (mirroring) or by adding parity (RAID-5).
Performance characteristics of storage in VxVM can be changed by changing the
striping parameters, which are the number of columns and the stripe width.
See “Performing online relayout” on page 396.
See “Converting between layered and non-layered volumes” on page 402.
You can override the default size used for the temporary area by using the tmpsize
attribute to vxassist.
See the vxassist(1M) manual page.
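For example, the following command changes a volume to a three-column striped
layout and overrides the default temporary area size (the disk group, volume
name, layout, and size are examples only):
# vxassist -g mydg relayout vol01 layout=stripe ncol=3 tmpsize=500m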
As well as the temporary area, space is required for a temporary intermediate
volume when increasing the column length of a striped volume. The amount of
space required is the difference between the column lengths of the target and
source volumes. For example, 20GB of temporary additional space is required to
relayout a 150GB striped volume with 5 columns of length 30GB as 3 columns of
length 50GB. In some cases, the amount of temporary space that is required is
relatively large. For example, a relayout of a 150GB striped volume with 5 columns
as a concatenated volume (with effectively one column) requires 120GB of space
for the intermediate volume.
Additional permanent disk space may be required for the destination volumes,
depending on the type of relayout that you are performing. This may happen, for
example, if you change the number of columns in a striped volume.
Figure 1-27 shows how decreasing the number of columns can require disks to be
added to a volume.
Note that the size of the volume remains the same but an extra disk is needed to
extend one of the columns.
The following are examples of operations that you can perform using online
relayout:
■ Remove parity from a RAID-5 volume to change it to a concatenated, striped,
or layered volume.
Figure 1-28 shows an example of applying relayout to a RAID-5 volume.
Note that removing parity decreases the overall storage space that the volume
requires.
■ Add parity to a volume to change it to a RAID-5 volume.
Figure 1-29 shows an example of relayout of a concatenated volume to a RAID-5
volume.
Note that adding parity increases the overall storage space that the volume
requires.
■ Change the number of columns in a volume.
Figure 1-30 shows an example of changing the number of columns.
Note that the length of the columns is reduced to conserve the size of the volume.
■ Change the column stripe width in a volume.
Figure 1-31 shows an example of changing the column stripe width.
Figure 1-31 Example of increasing the stripe width for the columns in a volume
■ Use the vxassist convert command to turn the layered mirrored volume that
results from a relayout into a non-layered volume.
See “Converting between layered and non-layered volumes” on page 402.
■ The usual restrictions apply for the minimum number of physical disks that
are required to create the destination layout. For example, mirrored volumes
require at least as many disks as mirrors, striped and RAID-5 volumes require
at least as many disks as columns, and striped-mirror volumes require at least
as many disks as columns multiplied by mirrors.
■ To be eligible for layout transformation, the plexes in a mirrored volume must
have identical stripe widths and numbers of columns. Relayout is not possible
unless you make the layouts of the individual plexes identical.
■ Online relayout cannot transform sparse plexes, nor can it make any plex
sparse. (A sparse plex is a plex that is not the same size as the volume, or that
has regions that are not mapped to any subdisk.)
■ The number of mirrors in a mirrored volume cannot be changed using relayout.
Instead, use alternative commands, such as the vxassist mirror command.
■ Only one relayout may be applied to a volume at a time.
Transformation characteristics
Transformation of data from one layout to another involves rearrangement of
data in the existing layout to the new layout. During the transformation, online
relayout retains data redundancy by mirroring any temporary space used. Read
and write access to data is not interrupted during the transformation.
Data is not corrupted if the system fails during a transformation. The
transformation continues after the system is restored and both read and write
access are maintained.
You can reverse the layout transformation process at any time, but the data may
not be returned to the exact previous storage location. Before you reverse a
transformation that is in process, you must stop it.
You can determine the transformation direction by using the vxrelayout status
volume command.
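For example, the following commands report the status and direction of a relayout
operation on a volume, and reverse a relayout that has been stopped (the names
are examples only; an in-progress relayout must first be stopped, for example
with the vxtask abort command):
# vxrelayout -g mydg status vol01
# vxrelayout -g mydg reverse vol01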
Volume resynchronization
When storing data redundantly and using mirrored or RAID-5 volumes, VxVM
ensures that all copies of the data match exactly. However, under certain conditions
(usually due to complete system failures), some redundant data on a volume can
become inconsistent or unsynchronized. The mirrored data is not exactly the
same as the original data. Except for normal configuration changes (such as
detaching and reattaching a plex), this can only occur when a system crashes
while data is being written to a volume.
Data is written to the mirrors of a volume in parallel, as is the data and parity in
a RAID-5 volume. If a system crash occurs before all the individual writes complete,
it is possible for some writes to complete while others do not. This can result in
the data becoming unsynchronized. For mirrored volumes, it can cause two reads
from the same region of the volume to return different results, if different mirrors
are used to satisfy the read request. In the case of RAID-5 volumes, it can lead to
parity corruption and incorrect data reconstruction.
VxVM ensures that all mirrors contain exactly the same data and that the data
and parity in RAID-5 volumes agree. This process is called volume
resynchronization. For volumes that are part of the disk group that is automatically
imported at boot time (usually aliased as the reserved system-wide disk group,
bootdg), resynchronization takes place when the system reboots.
Not all volumes require resynchronization after a system failure. Volumes that
were never written or that were quiescent (that is, had no active I/O) when the
system failure occurred could not have had outstanding writes and do not require
resynchronization.
Dirty flags
VxVM records when a volume is first written to and marks it as dirty. When a
volume is closed by all processes or stopped cleanly by the administrator, and all
writes have been completed, VxVM removes the dirty flag for the volume. Only
volumes that are marked dirty require resynchronization.
Resynchronization process
The process of resynchronization depends on the type of volume. For mirrored
volumes, resynchronization is done by placing the volume in recovery mode (also
called read-writeback recovery mode). Resynchronization of data in the volume
is done in the background. This allows the volume to be available for use while
recovery is taking place. RAID-5 volumes that contain RAID-5 logs can “replay”
those logs. If no logs are available, the volume is placed in reconstruct-recovery
mode and all parity is regenerated.
Resynchronization can impact system performance. The recovery process reduces
some of this impact by spreading the recoveries to avoid stressing a specific disk
or controller.
For large volumes or for a large number of volumes, the resynchronization process
can take time. These effects can be minimized by using dirty region logging (DRL)
and FastResync (fast mirror resynchronization) for mirrored volumes, or by using
RAID-5 logs for RAID-5 volumes.
See “Dirty region logging” on page 58.
See “FastResync” on page 63.
For mirrored volumes used by Oracle, you can use the SmartSync feature, which
further improves performance.
See “SmartSync recovery accelerator” on page 59.
Note: DRL adds a small I/O overhead for most write access patterns. This overhead
is reduced by using SmartSync.
Sequential DRL
Some volumes, such as those that are used for database replay logs, are written
sequentially and do not benefit from delayed cleaning of the DRL bits. For these
volumes, sequential DRL can be used to limit the number of dirty regions. This
allows for faster recovery. However, if applied to volumes that are written to
randomly, sequential DRL can be a performance bottleneck as it limits the number
of parallel writes that can be carried out.
The maximum number of dirty regions allowed for sequential DRL is controlled
by a tunable as detailed in the description of voldrl_max_seq_dirty.
See “Tunable parameters for VxVM” on page 520.
See “Adding traditional DRL logging to a mirrored volume” on page 386.
See “Preparing a volume for DRL and instant snapshots” on page 380.
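For example, the following command creates a mirrored volume with sequential
DRL of the kind that is suitable for a database replay log (the disk group, volume
name, and size are examples only):
# vxassist -g mydg make logvol 2g layout=mirror nmirror=2 logtype=drlseq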
Note: To use SmartSync with volumes that contain file systems, see the discussion
of the Oracle Resilvering feature of Veritas File System (VxFS).
The following section describes how to configure VxVM raw volumes and
SmartSync. The database uses the following types of volumes:
■ Data volumes are the volumes used by the database (control files and tablespace
files).
■ Redo log volumes contain redo logs of the database.
SmartSync works with these two types of volumes differently, so they must be
configured as described in the following sections.
To enable the use of SmartSync with database volumes in shared disk groups, set
the value of the volcvm_smartsync tunable to 1.
See “Tunable parameters for VxVM” on page 520.
Because the improved recovery time depends on dirty region logs, redo log volumes
should be configured as mirrored volumes with sequential DRL.
See “Sequential DRL” on page 59.
Volume snapshots
Veritas Volume Manager provides the capability for taking an image of a volume
at a given point in time. Such an image is referred to as a volume snapshot. Such
snapshots should not be confused with file system snapshots, which are
point-in-time images of a Veritas File System.
Figure 1-32 shows how a snapshot volume represents a copy of an original volume
at a given point in time.
Even though the contents of the original volume can change, the snapshot volume
preserves the contents of the original volume as they existed at an earlier time.
The snapshot volume provides a stable and independent base for making backups
of the contents of the original volume, or for other applications such as decision
support. In the figure, the contents of the snapshot volume are eventually
resynchronized with the original volume at a later point in time.
Another possibility is to use the snapshot volume to restore the contents of the
original volume. This may be useful if the contents of the original volume have
become corrupted in some way.
Warning: If you write to the snapshot volume, it may no longer be suitable for use
in restoring the contents of the original volume.
One type of volume snapshot in VxVM is the third-mirror break-off type. This
name comes from its implementation where a snapshot plex (or third mirror) is
added to a mirrored volume. The contents of the snapshot plex are then
synchronized from the original plexes of the volume. When this synchronization
is complete, the snapshot plex can be detached as a snapshot volume for use in
backup or decision support applications. At a later time, the snapshot plex can be
reattached to the original volume, requiring a full resynchronization of the
snapshot plex’s contents.
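For example, the following commands illustrate the traditional third-mirror
break-off cycle (the disk group and volume names are examples only):
# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snapvol01
# vxassist -g mydg snapback snapvol01
The snapstart operation adds and synchronizes the snapshot plex, snapshot breaks
the plex off as the volume snapvol01, and snapback later reattaches and
resynchronizes it.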
The FastResync feature was introduced to track writes to the original volume.
This tracking means that only a partial, and therefore much faster,
resynchronization is required on reattaching the snapshot plex. In later releases,
the snapshot model was enhanced to allow snapshot volumes to contain more
than a single plex, reattachment of a subset of a snapshot volume’s plexes, and
persistence of FastResync across system reboots or cluster restarts.
See “FastResync” on page 63.
Release 4.0 of VxVM introduced full-sized instant snapshots and space-optimized
instant snapshots, which offer advantages over traditional third-mirror snapshots
such as immediate availability and easier configuration and administration. You
can also use the third-mirror break-off usage model with full-sized snapshots,
where this is necessary for write-intensive applications.
See “Comparison of snapshot features” on page 62.
For details about the snapshots and how to use them, see the Veritas Storage
Foundation Advanced Features Administrator's Guide.
See the vxassist(1M) manual page.
See the vxsnap(1M) manual page.
Full-sized instant snapshots are easier to configure and offer more flexibility of
use than do traditional third-mirror break-off snapshots. For preference, new
volumes should be configured to use snapshots that have been created using the
vxsnap command rather than using the vxassist command. Legacy volumes can
also be reconfigured to use vxsnap snapshots, but this requires rewriting of
administration scripts that assume the vxassist snapshot model.
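For example, the following commands prepare a volume for instant snapshot
operations, create a full-sized instant snapshot by breaking off a plex (assuming
that a suitable plex is available), and later reattach the snapshot (the disk group
and volume names are examples only):
# vxsnap -g mydg prepare vol01
# vxsnap -g mydg make source=vol01/newvol=snapvol01/nmirror=1
# vxsnap -g mydg reattach snapvol01 source=vol01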
FastResync
Note: Only certain Storage Foundation products have a license to use this feature.
FastResync enhancements
FastResync provides the following enhancements to VxVM:
Non-persistent FastResync
Non-persistent FastResync allocates its change maps in memory. The maps do
not reside on disk or in persistent store. This has the advantage that updates to
the FastResync map have little impact on I/O performance, as no disk updates
need to be performed. However, if a system is rebooted, the information in the map is
lost, so a full resynchronization is required on snapback. This limitation can be
overcome for volumes in cluster-shareable disk groups, provided that at least one
of the nodes in the cluster remained running to preserve the FastResync map in
its memory. However, a node crash in a High Availability (HA) environment
requires the full resynchronization of a mirror when it is reattached to its parent
volume.
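For example, the following commands enable and disable non-persistent
FastResync on a volume (the disk group and volume names are examples only):
# vxvol -g mydg set fastresync=on vol01
# vxvol -g mydg set fastresync=off vol01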
Persistent FastResync
Unlike non-persistent FastResync, persistent FastResync keeps the FastResync
maps on disk so that they can survive system reboots, system crashes and cluster
crashes. Persistent FastResync can also track the association between volumes
and their snapshot volumes after they are moved into different disk groups. When
the disk groups are rejoined, this allows the snapshot plexes to be quickly
resynchronized. This ability is not supported by non-persistent FastResync.
See “Reorganizing the contents of disk groups” on page 270.
If persistent FastResync is enabled on a volume or on a snapshot volume, a data
change object (DCO) and a DCO volume are associated with the volume.
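For example, the following command adds a version 20 DCO volume with two
plexes and a 128-kilobyte region size to a volume, preparing it for persistent
FastResync, DRL, and instant snapshots (the names and attribute values are
examples only):
# vxsnap -g mydg prepare vol01 ndcomirs=2 regionsize=128k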
rounded up to the nearest multiple of 8KB. Note that each map includes a 512-byte
header.
For the default number of 32 per-volume maps and region size of 64KB, a 10GB
volume requires a map size of 24KB, and so each plex in the DCO volume requires
840KB of storage.
Associated with the volume are a DCO object and a DCO volume with two plexes.
To create a traditional third-mirror snapshot or an instant (copy-on-write)
snapshot, the vxassist snapstart or vxsnap make operation respectively is
performed on the volume.
Figure 1-34 shows how a snapshot plex is set up in the volume, and how a disabled
DCO plex is associated with it.
Multiple snapshot plexes and associated DCO plexes may be created in the volume
by re-running the vxassist snapstart command for traditional snapshots, or
the vxsnap make command for space-optimized snapshots. You can create up to
a total of 32 plexes (data and log) in a volume.
Space-optimized instant snapshots do not require additional full-sized plexes to
be created. Instead, they use a storage cache that typically requires only 10% of
the storage that is required by full-sized snapshots. There is a trade-off in
functionality in using space-optimized snapshots. The storage cache is formed
within a cache volume, and this volume is associated with a cache object. For
convenience of operation, this cache can be shared by all the space-optimized
instant snapshots within a disk group.
(The figure shows the mirrored volume with its DCO volume containing two DCO
log plexes, and the detached snapshot volume with its own DCO volume containing
a single DCO log plex.)
The DCO volume contains the single DCO plex that was associated with the
snapshot plex. If two snapshot plexes were taken to form the snapshot volume,
the DCO volume would contain two plexes. For space-optimized instant snapshots,
the DCO object and DCO volume are associated with a snapshot volume that is
created on a cache object and not on a VM disk.
Associated with both the original volume and the snapshot volume are snap
objects. The snap object for the original volume points to the snapshot volume,
and the snap object for the snapshot volume points to the original volume. This
allows VxVM to track the relationship between volumes and their snapshots even
if they are moved into different disk groups.
The snap objects in the original volume and snapshot volume are automatically
deleted in the following circumstances:
■ For traditional snapshots, the vxassist snapback operation is run to return
all of the plexes of the snapshot volume to the original volume.
■ For traditional snapshots, the vxassist snapclear operation is run on a
volume to break the association between the original volume and the snapshot
volume. If the volumes are in different disk groups, the command must be run
separately on each volume.
■ For full-sized instant snapshots, the vxsnap reattach operation is run to
return all of the plexes of the snapshot volume to the original volume.
■ For full-sized instant snapshots, the vxsnap dis or vxsnap split operations
are run on a volume to break the association between the original volume and
the snapshot volume. If the volumes are in different disk groups, the command
must be run separately on each volume.
Note: The vxsnap reattach, dis and split operations are not supported for
space-optimized instant snapshots.
For details about the snapshots and how to use them, see the Veritas Storage
Foundation Advanced Features Administrator's Guide.
See the vxassist(1M) manual page.
See the vxsnap(1M) manual page.
you must grow the replica volume, or the original volume, before invoking any of
the commands vxsnap reattach, vxsnap restore, or vxassist snapback.
Growing the two volumes separately can lead to a snapshot that shares physical
disks with another mirror in the volume. To prevent this, grow the volume after
the snapback command is complete.
FastResync limitations
The following limitations apply to FastResync:
■ Persistent FastResync is supported for RAID-5 volumes, but this prevents the
use of the relayout or resize operations on the volume while a DCO is associated
with it.
■ Neither non-persistent nor persistent FastResync can be used to resynchronize
mirrors after a system crash. Dirty region logging (DRL), which can coexist
with FastResync, should be used for this purpose. In VxVM 4.0 and later
releases, DRL logs may be stored in a version 20 DCO volume.
■ When a subdisk is relocated, the entire plex is marked “dirty” and a full
resynchronization becomes necessary.
■ If a snapshot volume is split off into another disk group, non-persistent
FastResync cannot be used to resynchronize the snapshot plexes with the
original volume when the disk group is rejoined with the original volume’s
disk group. Persistent FastResync must be used for this purpose.
■ If you move or split an original volume (on which persistent FastResync is
enabled) into another disk group, and then move or join it to a snapshot
volume’s disk group, you cannot use vxassist snapback to resynchronize
traditional snapshot plexes with the original volume. This restriction arises
because a snapshot volume references the original volume by its record ID at
the time that the snapshot volume was created. Moving the original volume
to a different disk group changes the volume’s record ID, and so breaks the
association. However, in such a case, you can use the vxplex snapback
command with the -f (force) option to perform the snapback.
Note: This restriction only applies to traditional snapshots. It does not apply
to instant snapshots.
■ Any operation that changes the layout of a replica volume can mark the
FastResync change map for that snapshot “dirty” and require a full
resynchronization during snapback. Operations that cause this include subdisk
split, subdisk move, and online relayout of the replica. It is safe to perform
these operations after the snapshot is completed.
Hot-relocation
Hot-relocation is a feature that allows a system to react automatically to I/O
failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore
redundancy and access to those objects. VxVM detects I/O failures on objects and
relocates the affected subdisks. The subdisks are relocated to disks designated as
spare disks or to free space within the disk group. VxVM then reconstructs the
objects that existed before the failure and makes them accessible again.
When a partial disk failure occurs (that is, a failure affecting only some subdisks
on a disk), redundant data on the failed portion of the disk is relocated. Existing
volumes on the unaffected portions of the disk remain accessible.
See “How hot-relocation works” on page 428.
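For example, the following commands mark a disk as a hot-relocation spare, and
later remove that designation (the disk group and disk names are examples only):
# vxedit -g mydg set spare=on mydg01
# vxedit -g mydg set spare=off mydg01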
Volume sets
Volume sets are an enhancement to VxVM that allow several volumes to be
represented by a single logical object. All I/O from and to the underlying volumes
is directed via the I/O interfaces of the volume set. The Veritas File System (VxFS)
uses volume sets to manage multi-volume file systems and the SmartTier feature.
This feature allows VxFS to make best use of the different performance and
availability characteristics of the underlying volumes. For example, file system
metadata can be stored on volumes with higher redundancy, and user data on
volumes with better performance.
See “Creating a volume set” on page 408.
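For example, the following commands create a volume set from an existing volume
and then add a second volume to it (the disk group, volume set, and volume names
are examples only):
# vxvset -g mydg make myvset vol1
# vxvset -g mydg addvol myvset vol2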
Note: This feature of vxassist is designed to work in conjunction with SAL (SAN
Access Layer) in Veritas CommandCentral Storage. When VxVM with SAN-aware
vxassist is installed on a host where SAL is also installed, it is recommended
that you create a user named root under SAL. This allows vxassist to use the
root login to contact the SAL daemon (sald) on the primary SAL server without
needing to specify the sal_username attribute to vxassist.
Figure 1-36 shows how you might choose to set up storage groups within a SAN.
(Figure 1-36 depicts storage groups at two locations, each divided into high-performance storage and low-performance storage.)
In this example, the boundaries of the storage groups are based on the performance
characteristics of different makes of disk array and on geographic location.
The vxassist utility in Veritas Volume Manager understands storage groups that
you have defined using the CommandCentral Storage software. vxassist supports
a simple language that you can use to specify how disks are to be allocated from
pre-defined storage groups. This specification language defines the confinement
and separation criteria that vxassist applies to the available storage to choose
disks for creating, resizing or moving a volume.
To use the CommandCentral Storage storage groups with vxassist, perform the
following steps in the order listed:
# vxdisksetup -i 3PARDATA0_1
# vxdisk init 3PARDATA0_1
■ If you already have a disk group for your LUN, add the LUN to the disk
group:
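As an illustrative sketch (the disk group name mydg and disk media name mydg02 are placeholders, and 3PARDATA0_1 is the example LUN from above), a command of the following form adds the LUN to an existing disk group:
# vxdg -g mydg adddisk mydg02=3PARDATA0_1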
# mkdir mount1
3 Grow the volume and the file system to the desired size. For example:
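A minimal sketch of such a grow operation, assuming a VxFS file system on volume vol1 in disk group mydg and a target size of 100 GB (the names and size are placeholders):
# vxresize -b -F vxfs -g mydg vol1 100g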
■ Disk devices
■ Encapsulating a disk
■ Rootability
■ Removing disks
■ Enabling a disk
■ Renaming a disk
■ Reserving disks
Disks that are controlled by the Sun Microsystems Solaris Volume Manager
software cannot be used directly as VxVM disks, but the disks can be converted
so that their volumes become VxVM volumes.
For detailed information about migrating volumes, see the Veritas Storage
Foundation Advanced Features Administrator's Guide.
Veritas Dynamic Multi-Pathing (DMP) is used to administer multiported disk
arrays.
See “How DMP works” on page 157.
Disk devices
When performing disk administration, it is important to understand the difference
between a disk name and a device name.
The disk name (also known as a disk media name) is the symbolic name assigned
to a VM disk. When you place a disk under VxVM control, a VM disk is assigned
to it. The disk name is used to refer to the VM disk for the purposes of
administration. A disk name can be up to 31 characters long. When you add a disk
to a disk group, you can assign a disk name or allow VxVM to assign a disk name.
The default disk name is diskgroup## where diskgroup is the name of the disk
group to which the disk is being added, and ## is a sequence number. Your system
may use device names that differ from those given in the examples.
The device name (sometimes referred to as devname or disk access name) defines
the name of a disk device as it is known to the operating system.
Such devices are usually, but not always, located in the /dev/dsk and /dev/rdsk
directories. Devices that are specific to hardware from certain vendors may use
their own path name conventions.
VxVM uses the device names to create metadevices in the /dev/vx/[r]dmp
directories. Dynamic Multi-Pathing (DMP) uses these metadevices (or DMP nodes)
to represent disks that can be accessed by one or more physical paths, perhaps
via different controllers. The number of access paths that are available depends
on whether the disk is a single disk, or is part of a multiported disk array that is
connected to a system.
You can use the vxdisk utility to display the paths that are subsumed by a DMP
metadevice, and to display the status of each path (for example, whether it is
enabled or disabled).
See “How DMP works” on page 157.
Device names may also be remapped as enclosure-based names.
See “Disk device naming in VxVM” on page 81.
Note: For non-EFI disks, the slice s2 represents the entire disk. For both EFI and
non-EFI disks, the entire disk is implied if the slice is omitted from the device
name.
DMP assigns the name of the DMP meta-device (disk access name) from one of the multiple paths to the disk. DMP sorts the names by controller and selects the path with the smallest controller number (for example, c1 rather than c2). If multiple paths are seen from the same controller, DMP uses the path with the smallest target name. This behavior makes it easier to correlate devices with the underlying storage.
If a CVM cluster is symmetric, each node in the cluster accesses the same set of
disks. This naming scheme makes the naming consistent across nodes in a
symmetric cluster.
The boot disk (which contains the root file system and is used when booting the
system) is often identified to VxVM by the device name c0t0d0.
By default, OS-based names are not persistent, and are regenerated if the system
configuration changes the device name as recognized by the operating system. If
you do not want the OS-based names to change after reboot, set the persistence
attribute for the naming scheme.
See “Changing the disk-naming scheme” on page 104.
Enclosure-based naming
By default, VxVM uses enclosure-based naming.
Enclosure-based naming operates as follows:
■ All fabric or non-fabric disks in supported disk arrays are named using the
enclosure_name_# format. For example, disks in the supported disk array enggdept are named enggdept_0, enggdept_1, enggdept_2, and so on.
You can use the vxdmpadm command to administer enclosure names.
See “Renaming an enclosure” on page 203.
See the vxdmpadm(1M) manual page.
■ Disks in the DISKS category (JBOD disks) are named using the Disk_# format.
■ Disks in the OTHER_DISKS category (disks that are not multipathed by DMP)
are named using the c#t#d#s# format.
If required, the value for the private region size may be overridden
when you add or replace a disk using the vxdiskadm command.
Each disk that has a private region holds an entire copy of the
configuration database for the disk group. The size of the configuration
database for a disk group is limited by the size of the smallest copy of
the configuration database on any of its member disks.
public region An area that covers the remainder of the disk, and which is used for
the allocation of storage space to subdisks.
A disk’s type identifies how VxVM accesses a disk, and how it manages the disk’s
private and public regions.
The following disk access types are used by VxVM:
auto When the vxconfigd daemon is started, VxVM obtains a list of known
disk device addresses from the operating system and configures disk
access records for them automatically.
nopriv There is no private region (only a public region for allocating subdisks).
This is the simplest disk type consisting only of space for allocating
subdisks. Such disks are most useful for defining special devices (such
as RAM disks, if supported) on which private region data would not
persist between reboots. They can also be used to encapsulate disks
where there is insufficient room for a private region. The disks cannot
store configuration and log copies, and they do not support the use
of the vxdisk addregion command to define reserved regions.
VxVM cannot track the movement of nopriv disks on a SCSI chain
or between controllers.
simple The public and private regions are on the same disk area (with the
public area following the private area).
sliced The public and private regions are on different disk partitions.
Auto-configured disks (with disk access type auto) support the following disk formats:
cdsdisk The disk is formatted as a CDS (Cross-platform Data Sharing) disk that is suitable for moving between different operating systems. This is the default format.
simple The disk is formatted as a simple disk that can be converted to a CDS disk.
sliced The disk is formatted as a sliced disk. This format can be applied to
disks that are used to boot the system. The disk can be converted to
a CDS disk if it was not initialized for use as a boot disk.
The vxcdsconvert utility can be used to convert disks to the cdsdisk format.
See the vxcdsconvert(1M) manual page.
Warning: If a disk is initialized by VxVM as a CDS disk, the CDS header occupies
the portion of the disk where the VTOC would usually be located. If you
subsequently use a command such as fdisk or format to create a partition table
on a CDS disk, it erases the CDS information and could cause data corruption.
By default, all auto-configured disks are formatted as cdsdisk disks when they
are initialized for use with VxVM. You can change the default format by using the
vxdiskadm(1M) command to update the /etc/default/vxdisk defaults file.
VxVM initializes each new disk with the smallest possible number of partitions.
For non-EFI disks of type sliced, VxVM usually configures partition s3 as the
private region, s4 as the public region, and s2 as the entire physical disk. An
exception is an encapsulated root disk, on which s3 is usually configured as the
public region and s4 as the private region.
See “Displaying or changing default disk layout attributes” on page 113.
See the vxdisk(1M) manual page.
# vxdctl -f enable
# vxdisk -f scandisks
However, a complete scan is initiated if the system configuration has been modified
by changes to:
■ Installed array support libraries.
■ The list of devices that are excluded from use by VxVM.
■ DISKS (JBOD), SCSI3, or foreign device definitions.
Alternatively, you can specify a ! prefix character to indicate that you want to
scan for all devices except those that are listed.
Note: The ! character is a special character in some shells. The following examples
show how to escape it in a bash shell.
You can also scan for devices that are connected (or not connected) to a list of
logical or physical controllers. For example, this command discovers and configures
all devices except those that are connected to the specified logical controllers:
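For example, a command of the following form might be used, where c1 and c2 are placeholder logical controller names and the ! is escaped for the bash shell:
# vxdisk scandisks \!ctlr=c1,c2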
The next command discovers devices that are connected to the specified physical
controller:
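A sketch of this form, where the physical controller path is a placeholder for a path on your system:
# vxdisk scandisks pctlr=/pci@1f,4000/scsi@3/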
Disk categories
Disk arrays that have been certified for use with Veritas Volume Manager are
supported by an array support library (ASL), and are categorized by the vendor
ID string that is returned by the disks (for example, “HITACHI”).
Disks in JBODs that are capable of being multipathed by DMP are placed in the DISKS category. Disks in unsupported arrays can also be placed in the DISKS category.
See “Adding unsupported disk arrays to the DISKS category” on page 97.
Disks in JBODs that do not fall into any supported category, and that are not capable of being multipathed by DMP, are placed in the OTHER_DISKS category.
# vxdctl enable
If EMC PowerPath is not installed and the array is a DGC CLARiiON (CXn00), DMP handles the multi-pathing. The ASL name is libvxCLARiiON. The supported array modes are Active/Passive (A/P), Active/Passive in Explicit Failover mode (A/P-F), and ALUA.
If any EMCpower disks are configured as foreign disks, use the vxddladm
rmforeign command to remove the foreign definitions, as shown in this example:
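In this sketch, the emcpower10 paths are placeholders for the EMCpower device paths on your system:
# vxddladm rmforeign blockpath=/dev/dsk/emcpower10 charpath=/dev/rdsk/emcpower10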
To allow DMP to receive correct inquiry data, the Common Serial Number (C-bit)
Symmetrix Director parameter must be set to enabled.
# vxddladm list
HBA c2 (20:00:00:E0:8B:19:77:BE)
Port c2_p0 (50:0A:09:80:85:84:9D:84)
Target c2_p0_t0 (50:0A:09:81:85:84:9D:84)
LUN c2t0d0s2
. . .
HBA c3 (iqn.1986-03.com.sun:01:0003ba8ed1b5.45220f80)
Port c3_p0 (10.216.130.10:3260)
Target c3_p0_t0 (iqn.1992-08.com.netapp:sn.84188548)
LUN c3t0d0s2
LUN c3t0d1s2
Target c3_t1 (iqn.1992-08.com.netapp:sn.84190939)
. . .
For example, to obtain the targets configured from the specified HBA:
DDL status Whether the device is claimed by DDL. If claimed, the output
also displays the ASL name.
To list the devices configured from a Host Bus Adapter and target
◆ To obtain the devices configured from a particular HBA and target, use the
following command:
Parameter Default Minimum Maximum
DefaultTime2Retain 20 0 3600
DefaultTime2Wait 2 0 3600
ErrorRecoveryLevel 0 0 2
MaxConnections 1 1 65535
MaxOutStandingR2T 1 1 65535
To get the iSCSI operational parameters on the initiator for a specific iSCSI target
◆ Type the following commands:
You can use this command to obtain all the iSCSI operational parameters.
The following is a sample output:
To set the iSCSI operational parameters on the initiator for a specific iSCSI target
◆ Type the following command:
This example excludes support for disk arrays that depend on the library libvxenc.so. You can also exclude support for disk arrays from a particular vendor, as shown in this example:
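A sketch of the vendor-based form, where the vendor ID ACME and product ID X1 are placeholders:
# vxddladm excludearray vid=ACME pid=X1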
The corresponding vxddladm includearray command adds an excluded array library back to the database so that the library can once again be used in device discovery. If vxconfigd is running, you can then use the vxdisk scandisks command to discover the arrays and add their details to the database.
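A sketch of the re-inclusion, assuming the library excluded earlier was libvxenc.so:
# vxddladm includearray libname=libvxenc.so
To list the arrays that are currently excluded from device discovery, use the following command: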
# vxddladm listexclude
# vxddladm listjbod
This command displays the vendor ID (VID), product IDs (PIDs) for the arrays,
array types (for example, A/A or A/P), and array names. The following is
sample output.
# /etc/vx/diag.d/vxscsiinq device_name
where device_name is the device name of one of the disks in the array. Note
the values of the vendor ID (VID) and product ID (PID) in the output from this
command. For Fujitsu disks, also note the number of characters in the serial
number that is displayed.
The following example shows the output for the example disk with the device
name /dev/rdsk/c1t20d0s2
# /etc/vx/diag.d/vxscsiinq /dev/rdsk/c1t20d0s2
2 Stop all applications, such as databases, from accessing VxVM volumes that
are configured on the array, and unmount all file systems and Storage
Checkpoints that are configured on the array.
3 If the array is of type A/A-A, A/P or A/P-F, configure it in autotrespass mode.
where vendorid and productid are the VID and PID values that you found
from the previous step. For example, vendorid might be FUJITSU, IBM, or
SEAGATE. For Fujitsu devices, you must also specify the number of characters in the serial number as the value of the length attribute (for example, 10). If the array is of type A/A-A, A/P, or A/P-F, you must also specify the policy=ap attribute.
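As an illustrative sketch, an entry of the following form might be added for a JBOD (the vendor and product IDs are placeholders; add the length= and policy=ap attributes only where the notes above require them):
# vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G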
5 Use the vxdctl enable command to bring the array under VxVM control.
# vxdctl enable
# vxddladm listjbod
The following is sample output from this command for the example array:
# vxdmpadm listenclosure
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT
==============================================================
Disk Disk DISKS CONNECTED Disk 2
The enclosure name and type for the array are both shown as being set to
Disk. You can use the vxdisk list command to display the disks in the array:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
Disk_0 auto:none - - online invalid
Disk_1 auto:none - - online invalid
...
8 To verify that the DMP paths are recognized, use the vxdmpadm getdmpnode
command as shown in the following sample output for the example array:
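A sketch of the command, using the enclosure name Disk from the earlier listing:
# vxdmpadm getdmpnode enclosure=Disk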
This shows that there are two paths to the disks in the array.
For more information, enter the command vxddladm help addjbod.
See the vxddladm(1M) manual page.
See the vxdmpadm(1M) manual page.
Foreign devices
DDL may not be able to discover some devices that are controlled by third-party
drivers, such as those that provide multi-pathing or RAM disk capabilities. For
these devices it may be preferable to use the multi-pathing capability that is
provided by the third-party drivers for some arrays rather than using Dynamic
Multi-Pathing (DMP). Such foreign devices can be made available as simple disks
to VxVM by using the vxddladm addforeign command. This also has the effect
of bypassing DMP for handling I/O. The following example shows how to add
entries for block and character devices in the specified directories:
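In this sketch, the /dev/foo/dsk and /dev/foo/rdsk directories are placeholders for the directories that the third-party driver creates:
# vxddladm addforeign blockdir=/dev/foo/dsk chardir=/dev/foo/rdsk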
By default, this command suppresses any entries for matching devices in the
OS-maintained device tree that are found by the autodiscovery mechanism. You
can override this behavior by using the -f and -n options as described on the
vxddladm(1M) manual page.
After adding entries for the foreign devices, use either the vxdisk scandisks or
the vxdctl enable command to discover the devices as simple disks. These disks
then behave in the same way as autoconfigured disks.
The foreign device feature was introduced in VxVM 4.0 to support non-standard
devices such as RAM disks, some solid state disks, and pseudo-devices such as
EMC PowerPath.
Foreign device support has the following limitations:
■ A foreign device is always considered as a disk with a single path. Unlike an
autodiscovered disk, it does not have a DMP node.
■ It is not supported for shared disk groups in a clustered environment. Only
standalone host systems are supported.
■ It is not supported for Persistent Group Reservation (PGR) operations.
■ It is not under the control of DMP, so enabling of a failed disk cannot be
automatic, and DMP administrative commands are not applicable.
■ Enclosure information is not available to VxVM. This can reduce the availability
of any disk groups that are created using such devices.
■ The I/O Fencing and Cluster File System features are not supported for foreign
devices.
If a suitable ASL is available and installed for an array, these limitations are
removed.
See “Third-party driver coexistence” on page 89.
Disks under VxVM control
■ If the disk is not needed immediately, it can be initialized (but not added to a
disk group) and reserved for future use. To do this, enter none when asked to
name a disk group. Do not confuse this type of “spare disk” with a
hot-relocation spare disk.
■ If the disk was previously initialized for future use by VxVM, it can be
reinitialized and placed under VxVM control.
■ If the disk was previously in use, but not under VxVM control, you may wish
to preserve existing data on the disk while still letting VxVM take control of
the disk. This can be accomplished using encapsulation.
Encapsulation preserves existing data on disks.
■ Multiple disks on one or more controllers can be placed under VxVM control
simultaneously. Depending on the circumstances, all of the disks may not be
processed the same way.
It is possible to configure the vxdiskadm utility not to list certain disks or
controllers as being available. For example, this may be useful in a SAN
environment where disk enclosures are visible to a number of separate systems.
To exclude a device from the view of VxVM, select Prevent
multipathing/Suppress devices from VxVM’s view from the vxdiskadm main
menu.
See “Disabling multi-pathing and making devices invisible to VxVM” on page 165.
VxVM coexistence with SVM and ZFS
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
c2t5006016130603AE5d2s2 auto:ZFS - - ZFS
c2t5006016130603AE5d3s2 auto:SVM - - SVM
c2t5006016130603AE5d4s2 auto:cdsdisk - - online
c2t5006016130603AE5d5s2 auto:cdsdisk - - online
# /usr/lib/vxvm/bin/vxdiskunsetup diskname
3 You can now initialize the disk as a SVM/ZFS device using ZFS/SVM tools.
See the Sun documentation for details.
You must perform step 1 and step 2 in order for VxVM to recognize a disk as an SVM or ZFS device.
To reuse a ZFS disk or an SVM disk as a VxVM disk
1 Remove the disk from the zpool or SVM metadevice, or destroy the zpool or
SVM metadevice.
See the Sun documentation for details.
2 Clear the signature block using the dd command:
Where c#t#d#s# is the disk slice on which the ZFS device or the SVM device
is configured. If the whole disk is used as the ZFS device, clear the signature
block on slice 0.
3 You can now initialize the disk as a VxVM device using the vxdiskadm
command or the vxdisksetup command.
The default naming scheme is enclosure-based naming (EBN). When you use DMP
with native volumes, the disk naming scheme must be EBN, the use_avid attribute
must be on, and the persistence attribute must be set to yes.
Note: Devices with very long device names (longer than 31 characters) are
represented by enclosure-based names regardless of the naming scheme. If the
OS-based names include WWN identifiers, the device name displays with the
WWN identifier as long as the device name is less than 31 characters. If any device
name is longer than 31 characters, that device name displays with an enclosure
name.
The optional persistence argument allows you to select whether the names
of disk devices that are displayed by VxVM remain unchanged after disk
hardware has been reconfigured and the system rebooted. By default,
enclosure-based naming is persistent. Operating system-based naming is not
persistent by default.
To change only the naming persistence without changing the naming scheme,
run the vxddladm set namingscheme command for the current naming
scheme, and specify the persistence attribute.
By default, enclosure names are converted to lowercase, regardless of the case of the name specified by the ASL. The enclosure-based device names are therefore in lowercase. Set the lowercase=no option to suppress the conversion to lowercase.
For enclosure-based naming, the use_avid option specifies whether the Array
Volume ID is used for the index number in the device name. By default,
use_avid=yes, indicating the devices are named as enclosure_avid. If use_avid
is set to no, DMP devices are named as enclosure_index. The index number
is assigned after the devices are sorted by LUN serial number.
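As an illustrative sketch, a command of the following form selects enclosure-based naming with persistent names and AVID-based indexing (the attribute values are examples only):
# vxddladm set namingscheme=ebn persistence=yes use_avid=yes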
The change is immediate whichever method you use.
See “Regenerating persistent device names” on page 107.
To display the current disk-naming scheme and its mode of operations, use the
following command:
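A sketch of the command form (see the vxddladm(1M) manual page):
# vxddladm get namingscheme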
The -c option clears all user-specified names and replaces them with
autogenerated names.
If the -c option is not specified, existing user-specified names are maintained,
but OS-based and enclosure-based names are regenerated.
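A sketch of the regeneration command discussed above (add the -c option only if you want user-specified names to be cleared):
# vxddladm assign names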
The disk names now correspond to the new path names.
The argument to the tpdmode attribute selects names that are based on those
used by the operating system (native), or TPD-assigned node names (pseudo).
The use of this command to change between TPD and operating system-based
naming is illustrated in the following example for the enclosure named EMC0.
In this example, the device-naming scheme is set to OSN.
# vxdisk list
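For example, a command of the following form switches the enclosure EMC0 to native naming (treat this as a sketch and see the vxdmpadm(1M) manual page):
# vxdmpadm setattr enclosure EMC0 tpdmode=native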
# vxdisk list
If tpdmode is set to native, the path with the smallest device number is
displayed.
See “Removing the error state for simple or nopriv disks in the boot disk group”
on page 109.
See “Removing the error state for simple or nopriv disks in non-boot disk groups”
on page 110.
See the vxdarestore(1M) manual page.
Removing the error state for simple or nopriv disks in the boot
disk group
If the boot disk group (usually aliased as bootdg) consists only of simple and/or nopriv disks, the vxconfigd daemon goes into the disabled state after the naming scheme change.
To remove the error state for simple or nopriv disks in the boot disk group
1 Use vxdiskadm to change back to c#t#d#s# naming.
2 Enter the following command to restart the VxVM configuration daemon:
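A sketch of the restart command that is commonly documented for this purpose:
# vxconfigd -kr reset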
# vxdarestore
2 Use the vxdarestore command to restore the failed disks, and to recover the
objects on those disks:
# vxdarestore
The VxVM utilities such as vxdisk list display the DMP metanode name, which
includes the AVID property. Use the AVID to correlate the DMP metanode name
to the LUN displayed in the array management interface (GUI or CLI).
If the ASL does not provide the array volume ID property, then DMP generates
an index number. DMP sorts the devices seen from an array by the LUN serial
number and then assigns the index number. In this case, the DMP metanode name
is in the format enclosureID_index.
In a cluster environment, the DMP device names are the same across all nodes in
the cluster.
For example, on an EMC CX array where the enclosure is emc_clariion0 and the
array volume ID provided by the ASL is 91, the DMP metanode name is
emc_clariion0_91. The following sample output shows the DMP metanode names:
$ vxdisk list
emc_clariion0_91 auto:cdsdisk emc_clariion0_91 dg1 online shared
emc_clariion0_92 auto:cdsdisk emc_clariion0_92 dg1 online shared
emc_clariion0_93 auto:cdsdisk emc_clariion0_93 dg1 online shared
emc_clariion0_282 auto:cdsdisk emc_clariion0_282 dg1 online shared
emc_clariion0_283 auto:cdsdisk emc_clariion0_283 dg1 online shared
emc_clariion0_284 auto:cdsdisk emc_clariion0_284 dg1 online shared
For example, to find the physical device that is associated with disk ENC0_21,
the appropriate commands would be:
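Sketches of such commands, assuming the enclosure-based disk name ENC0_21 from this example:
# vxdisk list ENC0_21
# vxdmpadm getsubpaths dmpnodename=ENC0_21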
To obtain the full pathname for the block disk device and the character disk
device from these commands, append the displayed device name to
/dev/vx/dmp or /dev/vx/rdmp.
Note: SCSI disks are usually preformatted. Reformatting is needed only if the
existing formatting has become damaged.
Warning: Initialization does not preserve the existing data on the disks.
See “Disabling multi-pathing and making devices invisible to VxVM” on page 165.
Adding a disk to VxVM
If you enter list at the prompt, the vxdiskadm program displays a list of the
disks available to the system:
The phrase online invalid in the STATUS line indicates that a disk has yet
to be added or initialized for VxVM control. Disks that are listed as online
with a disk name and disk group are already under VxVM control.
Enter the device name or pattern of the disks that you want to initialize at
the prompt and press Return.
3 To continue with the operation, enter y (or press Return) at the following
prompt:
4 At the following prompt, specify the disk group to which the disk should be
added, or none to reserve the disks for future use:
5 If you specified the name of a disk group that does not already exist, vxdiskadm
prompts for confirmation that you really want to create this new disk group:
You are then prompted to confirm whether the disk group should support
the Cross-platform Data Sharing (CDS) feature:
If the new disk group may be moved between different operating system
platforms, enter y. Otherwise, enter n.
6 At the following prompt, either press Return to accept the default disk name
or enter n to allow you to define your own disk names:
8 When prompted whether to exclude the disks from hot-relocation use, enter
n (or press Return).
9 You are next prompted to choose whether you want to add a site tag to the
disks:
A site tag is usually applied to disk arrays or enclosures, and is not required
unless you want to use the Remote Mirror feature.
If you enter y to choose to add a site tag, you are prompted for the site name at step 11.
10 To continue with the operation, enter y (or press Return) at the following
prompt:
11 If you chose to tag the disks with a site in step 9, you are now prompted to
enter the site name that should be applied to the disks in each enclosure:
belong to enclosure(s):
list of enclosure names
12 If you see the following prompt, it lists any disks that have already been
initialized for use by VxVM:
This prompt allows you to indicate “yes” or “no” for all of these disks (Y or N)
or to select how to process each of these disks on an individual basis (S).
If you are sure that you want to reinitialize all of these disks, enter Y at the
following prompt:
VxVM NOTICE V-5-2-366 The following disks you selected for use
appear to already have been initialized for the Volume
Manager. If you are certain the disks already have been
initialized for the Volume Manager, then you do not need to
reinitialize these disk devices.
Output format: [Device]
13 vxdiskadm may now indicate that one or more disks is a candidate for
encapsulation. Encapsulation allows you to add an active disk to VxVM control
and preserve the data on that disk. If you want to preserve the data on the
disk, enter y. If you are sure that there is no data on the disk that you want
to preserve, enter n to avoid encapsulation.
device name
14 If you choose to encapsulate the disk, vxdiskadm confirms its device name
and prompts you for permission to proceed. Enter y (or press Return) to
continue encapsulation:
device name
You can now choose whether the disk is to be formatted as a CDS disk that is
portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want
to use the default value, or enter a different value. (The maximum value that
you can specify is 524288 blocks.)
If you entered cdsdisk as the format, you are prompted for the action to be taken if the disk cannot be converted to this format:
If you enter y, and it is not possible to encapsulate the disk as a CDS disk, it
is encapsulated as a sliced disk. Otherwise, the encapsulation fails.
vxdiskadm then proceeds to encapsulate the disks. You should now reboot
your system at the earliest possible opportunity, for example by running this
command:
The /etc/vfstab file is updated to include the volume devices that are used
to mount any encapsulated file systems. You may need to update any other
references in backup scripts, databases, or manually created swap devices.
The original /etc/vfstab file is saved as /etc/vfstab.prevm.
15 If you choose not to encapsulate the disk, vxdiskadm asks if you want to initialize the disk instead. Enter y to confirm this:
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
vxdiskadm now confirms those disks that are being initialized and added to VxVM control with messages similar to the following. In addition, you may be prompted to perform surface analysis.
16 You can now choose whether the disk is to be formatted as a CDS disk that is
portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
17 At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want
to use the default value, or enter a different value. (The maximum value that
you can specify is 524288 blocks.)
vxdiskadm then proceeds to add the disks.
VxVM INFO V-5-2-88 Adding disk device device name to disk group
disk group name with disk name disk name.
.
.
.
18 If you choose not to use the default disk names, vxdiskadm prompts you to
enter the disk name.
19 At the following prompt, indicate whether you want to continue to initialize
more disks (y) or return to the vxdiskadm main menu (n):
Disk reinitialization
You can reinitialize a disk that has previously been initialized for use by VxVM
by putting it under VxVM control as you would a new disk.
See “Adding a disk to VxVM” on page 113.
Warning: Reinitialization does not preserve data on the disk. If you want to
reinitialize the disk, make sure that it does not contain data that should be
preserved.
If the disk you want to add has been used before, but not with VxVM, you can
encapsulate the disk to preserve its information. If the disk you want to add has
previously been under the control of Solaris Volume Manager, you can preserve
the data it contains on a VxVM disk by the process of conversion.
For detailed information about migrating volumes, see the Veritas Storage
Foundation Advanced Features Administrator's Guide.
RAM disk support in VxVM
# vxdiskadd disk
# vxdiskadd c0t1d0
# ln -s /dev/ramdisk/ramdiskname /dev/dsk/ramdiskname
# ln -s /dev/rramdisk/rramdiskname /dev/rdsk/rramdiskname
# vxdisk scandisks
# vxdisk define ramdiskname type=nopriv volatile len=size
Normally, VxVM does not start volumes that are formed entirely from plexes with
volatile subdisks. That is because there is no plex that is guaranteed to contain
the most recent volume contents.
Some RAM disks are used in situations where all volume contents are recreated
after reboot. In these situations, you can force volumes formed from RAM disks
to be started at reboot by using the following command:
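One form this can take is shown below as a sketch; the volume name is a placeholder, and you should confirm the attribute in the vxvol(1M) manual page:
# vxvol set startopts=norecov volume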
Encapsulating a disk
Warning: Encapsulating a disk requires that the system be rebooted several times.
Schedule performance of this procedure for a time when this does not
inconvenience users.
This section describes how to encapsulate a disk for use in VxVM. Encapsulation
preserves any existing data on the disk when the disk is placed under VxVM
control.
To prevent the encapsulation from failing, make sure that the following conditions
apply:
■ The disk has two free partitions for the public and private regions.
■ The disk has an s2 slice.
■ The disk has a small amount of free space (at least 1 megabyte at the beginning
or end of the disk) that does not belong to any partition. If the disk being
encapsulated is the root disk, and this does not have sufficient free space
available, a similar sized portion of the swap partition is used instead.
Only encapsulate a root disk if you also intend to mirror it. There is no benefit in
root-disk encapsulation for its own sake.
See “Rootability” on page 129.
Use the format or fdisk commands to obtain a printout of the root disk partition
table before you encapsulate a root disk. For more information, see the appropriate
manual pages. You may need this information should you subsequently need to
recreate the original root disk.
You cannot grow or shrink any volume (rootvol, usrvol, varvol, optvol, swapvol,
and so on) that is associated with an encapsulated root disk. This is because these
volumes map to physical partitions on the disk, and these partitions must be
contiguous.
When the boot disk is encapsulated or mirrored, a device path alias is added to
the NVRAMRC in the SPARC EEPROM. These device aliases can be used to set the
system's boot device.
For more information, see the devalias and boot-device settings in the SUN
documentation.
Warning: If the root disk is encapsulated and the dump device is covered by the
swap volume, it is not safe to use the savecore -L operation because this
overwrites the swap area. Configure a dedicated dump device on a partition other
than the swap area.
3 Select the disk group to which the disk is to be added at the following prompt:
4 At the following prompt, either press Return to accept the default disk name
or enter a disk name:
5 To continue with the operation, enter y (or press Return) at the following
prompt:
device name
device name
A message similar to the following confirms that the disk is being encapsulated
for use in VxVM and tells you that a reboot is needed:
7 For non-root disks, you can now choose whether the disk is to be formatted
as a CDS disk that is portable between different operating systems, or as a
non-portable sliced disk:
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk. Note that only the sliced format is suitable for use
with root, boot or swap disks.
8 At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want
to use the default value, or enter a different value. (The maximum value that
you can specify is 524288 blocks.)
9 If you entered cdsdisk as the format in step 7, you are prompted for the action to be taken if the disk cannot be converted to this format:
If you enter y, and it is not possible to encapsulate the disk as a CDS disk, it
is encapsulated as a sliced disk. Otherwise, the encapsulation fails.
10 vxdiskadm then proceeds to encapsulate the disks. You should now reboot
your system at the earliest possible opportunity, for example by running this
command:
The /etc/vfstab file is updated to include the volume devices that are used
to mount any encapsulated file systems. You may need to update any other
references in backup scripts, databases, or manually created swap devices.
The original /etc/vfstab file is saved as /etc/vfstab.prevm.
11 At the following prompt, indicate whether you want to encapsulate more
disks (y) or return to the vxdiskadm main menu (n):
The drawback with using nopriv devices is that VxVM cannot track changes in
the address or controller of the disk. Normally, VxVM uses identifying information
stored in the private region on the physical disk to track changes in the location
of a physical disk. Because nopriv devices do not have private regions and have
no identifying information stored on the physical disk, tracking cannot occur.
One use of nopriv devices is to encapsulate a disk so that you can use VxVM to
move data off the disk. When space has been made available on the disk, remove
the nopriv device, and encapsulate the disk as a standard disk device.
A disk group cannot be formed entirely from nopriv devices. This is because
nopriv devices do not provide space for storing disk group configuration
information. Configuration information must be stored on at least one disk in the
disk group.
Warning: Do not use nopriv disks to encapsulate a root disk. If insufficient free
space exists on the root disk for the private region, part of the swap area can be
used instead.
Rootability
VxVM can place various files from the root file system, swap device, and other file
systems on the root disk under VxVM control. This is called rootability. The root
disk (that is, the disk containing the root file system) can be put under VxVM
control through the process of encapsulation.
The root disk can be encapsulated using the vxdiskadm command.
See “Encapsulating a disk” on page 124.
Once encapsulated, the root disk can also be mirrored by using the vxdiskadm command.
See “Mirroring an encapsulated root disk” on page 132.
Note: Only encapsulate your root disk if you also intend to mirror it. There is no
benefit in root-disk encapsulation for its own sake.
You can mirror the rootvol, and swapvol volumes, as well as other parts of the
root disk that are required for a successful boot of the system (for example, /usr).
This provides complete redundancy and recovery capability in the event of disk
failure. Without mirroring, the loss of the root, swap, or usr partition prevents
the system from being booted from surviving disks.
Mirroring disk drives that are critical to booting ensures that no single disk failure
renders the system unusable. A suggested configuration is to mirror the critical
disk onto another available disk (using the vxdiskadm command). If the disk
containing root and swap partitions fails, the system can be rebooted from a disk
containing mirrors of these partitions.
Recovering a system after the failure of an encapsulated root disk requires the
application of special procedures.
See the Veritas Volume Manager Troubleshooting Guide.
For Sun x64 systems, mirroring a root disk creates a GRUB boot menu entry for
the Primary and Alternate (mirror) Boot disk.
For Sun SPARC systems, after mirroring the root disk, you can configure the
system to boot from the alternate boot drive to recover from a primary boot drive
failure.
See the Veritas Volume Manager Troubleshooting Guide for more information
about recovering from boot drive failure.
To mirror your root disk onto another disk
1 Choose a disk that is at least as large as the existing root disk.
2 If the selected disk is not already under VxVM control, use the vxdiskadd or
vxdiskadm command, or the Veritas Enterprise Administrator (VEA) to add
it to the bootdg disk group. Ensure that you specify the sliced format for
the disk.
3 Select Mirror Volumes on a Disk from the vxdiskadm main menu, or use the
VEA to create a mirror of the root disk. Doing so automatically invokes the
vxmirror command if the mirroring operation is performed on the root disk.
Alternatively, to mirror only those file systems on the root disk that are
required to boot the system, run the following command:
# vxmirror altboot_disk
where altboot_disk is the disk media name of the mirror for the root disk.
vxmirror creates a mirror for rootvol (the volume for the root file system
on an alternate disk). The alternate root disk is configured to enable booting
from it if the primary root disk fails.
4 Monitor the progress of the mirroring operation with the vxtask list
command.
# vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
161 PARENT/R 0.00% 3/0(1) VXRECOVER dg01 dg
162 162 ATCOPY/R 04.77% 0/41945715/2000896 PLXATT home home-01 dg
The OBP names specify the OpenBoot PROM designations. For example, on Desktop
SPARC systems, the designation sbus/esp@0,800000/sd@3,0:a indicates a SCSI
disk (sd) at target 3, lun 0 on the SCSI bus, with the esp host bus adapter plugged
into slot 0.
You can use Veritas Volume Manager boot disk alias names instead of OBP names.
Example aliases are vx-rootdisk or vx-disk01. To list the available boot devices,
use the devalias command at the OpenBoot prompt.
The filename argument is the name of a file that contains the kernel. The default
is /kernel/unix in the root partition. If necessary, you can specify another
program (such as /stand/diag) by specifying the -a flag. (Some versions of the
firmware allow the default filename to be saved in the nonvolatile storage area
of the system.)
Warning: Do not boot a system running VxVM with rootability enabled using all
the defaults presented by the -a flag.
Boot flags are not interpreted by the boot program. The boot program passes all
boot-flags to the file identified by filename.
If you do not have space for a copy of some of these file systems on your alternate
boot disk, you can mirror them to other disks. You can also span or stripe these
other volumes across other disks attached to your system.
To list all volumes on your primary boot disk, use the command:
# vxprint -t -v -e'aslist.aslist.sd_disk="boot_disk"'
vxrootadm [-v] [-g dg] [-s srcdisk] ... keyword arg ...
The target disk for the snapshot must be as large as (or larger than) the source disk (boot disk). You must use a new disk group name to associate the target disk.
To create a snapshot of an encapsulated boot disk
◆ Enter the following command:
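A sketch using the names described below (check the vxrootadm(1M) manual page for the exact keyword syntax):
# vxrootadm -s disk_0 -g rootdg mksnap disk_1 snapdg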
In this example, disk_0 is the encapsulated boot disk, and rootdg is the associated boot disk group. disk_1 is the target disk, and snapdg is the new disk group name.
See “Booting from alternate boot disks” on page 133.
Because the disk is not the currently booted root disk, you can complete all
operations in a single phase without a reboot.
Growing a booted root disk requires four phases. For phase 1, use the command
above. For phases 2 to 4, specify vxrootadm grow continue.
The target disk for the grow operation must be of equal or greater size than the source disk (boot disk). The grow operation can be performed on the active boot disk or on a snapshot boot disk.
To grow an active encapsulated boot disk
1 Completing the grow operation on the active boot disk requires three reboots to grow the selected volume (rootvol, usrvol, or swapvol).
2 Enter the following command:
In this example, disk_0 is an encapsulated boot disk associated with the boot
disk group rootdg. disk_1 is the target disk and rootvol is a 60g volume to
be grown.
You are prompted when a reboot is required (with specific command needed),
and how to continue the grow operation after the reboot is completed.
When the grow operation completes, the target disk is the active boot disk,
the volume has grown to the selected size, and the source boot disk is removed
from the boot disk group (rootdg).
Unencapsulating the root disk
# vxunroot
vxunroot does not perform any conversion to disk partitions if any plexes
remain on other disks.
# vxdisk list
The phrase online invalid in the STATUS line indicates that a disk has not
yet been added to VxVM control. These disks may or may not have been
initialized by VxVM previously. Disks that are listed as online are already
under VxVM control.
The -v option causes the command to additionally list all tags and tag values
that are defined for the disk. Without this option, no tags are displayed.
■ If you enter all, VxVM displays the device name, disk name, group, and
status.
■ If you enter the address of the device for which you want information,
complete disk information (including the device name, the type of disk,
and information about the public and private areas of the disk) is displayed.
Once you have examined this information, press Return to return to the main
menu.
Removing disks
You must disable a disk group before you can remove the last disk in that group.
3 Move the volumes to other disks or back up the volumes. To move a volume,
use vxdiskadm to mirror the volume on one or more disks, then remove the
original copy of the volume. If the volumes are no longer needed, they can
be removed instead of moved.
4 Check that any data on the disk has either been moved to other disks or is no
longer needed.
5 Select Remove a disk from the vxdiskadm main menu.
6 At the following prompt, enter the disk name of the disk to be removed:
7 If there are any volumes on the disk, VxVM asks you whether they should be
evacuated from the disk. If you wish to keep the volumes, answer y. Otherwise,
answer n.
The vxdiskadm utility removes the disk from the disk group and displays the
following success message:
You can now remove the disk or leave it on your system as a replacement.
9 At the following prompt, indicate whether you want to remove other disks
(y) or return to the vxdiskadm main menu (n):
home usrvol
Enter y to remove the disk completely from VxVM control. If you do not want
to remove the disk completely from VxVM control, enter n.
# /usr/lib/vxvm/bin/vxdiskunsetup c#t#d#
Note: You may need to run commands that are specific to the operating system
or disk array before removing a physical disk.
To replace a disk
1 Select Remove a disk for replacement from the vxdiskadm main menu.
2 At the following prompt, enter the name of the disk to be replaced (or enter
list for a list of disks):
3 When you select a disk to remove for replacement, all volumes that are
affected by the operation are displayed, for example:
home src
mkting
To remove the disk, causing the named volumes to be disabled and data to
be lost when the disk is replaced, enter y or press Return.
To abandon removal of the disk, and back up or move the data associated
with the volumes that would otherwise be disabled, enter n or q and press
Return.
For example, to move the volume mkting to a disk other than mydg02, use the
following command.
The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
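A sketch of the move command, assuming the volume resides in the disk group mydg (the disk group name is a placeholder):
# vxassist -g mydg move mkting \!mydg02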
After backing up or moving the data in the volumes, start again from step 1.
4 At the following prompt, either select the device name of the replacement
disk (from the list provided), press Return to choose the default disk, or enter
none if you are going to replace the physical disk:
Do not choose the old disk drive as a replacement even though it appears in
the selection list. If necessary, you can choose to initialize a new disk.
You can enter none if you intend to replace the physical disk.
See “Replacing a failed or removed disk” on page 150.
5 If you chose to replace the disk in step 4, press Return at the following prompt
to confirm this:
vxdiskadm displays the following messages to indicate that the original disk
is being removed:
6 If the disk was previously an encapsulated root disk, vxdiskadm displays the
following message. Enter y to confirm that you want to reinitialize the disk:
Entering y at the prompt destroys any data that is on the disk. Ensure that
you have at least one valid copy of the data on other disks before proceeding.
7 You can now choose whether the disk is to be formatted as a CDS disk that is
portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
8 At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32 MB). Press Return to confirm that you want
to use the default value, or enter a different value. (The maximum value that
you can specify is 524288 blocks.)
9 If one or more mirror plexes were moved from the disk, you are now prompted
whether FastResync should be used to resynchronize the plexes:
10 At the following prompt, indicate whether you want to remove another disk
(y) or return to the vxdiskadm main menu (n):
3 The vxdiskadm program displays the device names of the disk devices available
for use as replacement disks. Your system may use a device name that differs
from the examples. Enter the device name of the disk or press Return to select
the default device:
■ If the disk has already been initialized, press Return at the following
prompt to replace the disk:
Warning: It is recommended that you do not enter n at this prompt. This can
result in an invalid VTOC that makes the disk unbootable.
Entering y at the prompt destroys any data that is on the disk. Ensure that
you have at least one valid copy of the data on other disks before proceeding.
5 You can now choose whether the disk is to be formatted as a CDS disk that is
portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
6 At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32 MB). Press Return to confirm that you want
to use the default value, or enter a different value. (The maximum value that
you can specify is 524288 blocks.)
7 The vxdiskadm program then proceeds to replace the disk, and returns the
following message on success:
At the following prompt, indicate whether you want to replace another disk
(y) or return to the vxdiskadm main menu (n):
Note: The following procedure is suitable for use with any array that is
administered by using the Solaris luxadm command.
# vxdisk rm daname
where daname is the disk access name of the device (for example, c1t5d0s2).
3 Use the Solaris luxadm command to obtain the array name and slot number
of the disk, and then use these values with luxadm to remove the disk:
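A sketch of this step; the device name, array name, and slot number are placeholders, and the exact argument format depends on your array:
# luxadm display c1t5d0s2
# luxadm remove_device array_name,slot_number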
Follow the luxadm prompts, and pull out the disk when instructed.
4 Run the following luxadm command when you are ready to insert the
replacement disk:
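A sketch, again with a placeholder array name and slot number:
# luxadm insert_device array_name,slot_number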
Follow the luxadm prompts, and insert the replacement disk when instructed.
5 Run the following command to scan for the new disk and update the system:
# vxdiskconfig
Enabling a disk
If you move a disk from one system to another during normal system operation,
VxVM does not recognize the disk automatically. The enable disk task enables
VxVM to identify the disk and to determine if this disk is part of a disk group.
Also, this task re-enables access to a disk that was disabled by either the disk
group deport task or the disk device disable (offline) task.
To enable a disk
1 Select Enable (online) a disk device from the vxdiskadm main menu.
2 At the following prompt, enter the device name of the disk to be enabled (or
enter list for a list of devices):
3 At the following prompt, indicate whether you want to enable another device
(y) or return to the vxdiskadm main menu (n):
Warning: Taking a disk offline is only useful on systems that support hot-swap
removal and insertion of disks. If a system does not support hot-swap removal
and insertion of disks, you must shut down the system.
Renaming a disk
If you do not specify a VM disk name, VxVM gives the disk a default name when
you add the disk to VxVM control. The VM disk name is used by VxVM to identify
the location of the disk or the disk type.
To rename a disk
◆ Type the following command:
By default, VxVM names subdisk objects after the VM disk on which they are
located. Renaming a VM disk does not automatically rename the subdisks on
that disk.
For example, you might want to rename disk mydg03, as shown in the following
output from vxdisk list, to mydg02:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:sliced mydg01 mydg online
c1t0d0s2 auto:sliced mydg03 mydg online
c1t1d0s2 auto:sliced - - online
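A sketch of the rename command, using the disk group mydg shown in the output above:
# vxedit -g mydg rename mydg03 mydg02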
To confirm that the name change took place, use the vxdisk list command
again:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:sliced mydg01 mydg online
c1t0d0s2 auto:sliced mydg02 mydg online
c1t1d0s2 auto:sliced - - online
Reserving disks
By default, the vxassist command allocates space from any disk that has free
space. You can reserve a set of disks for special purposes, such as to avoid general
use of a particularly slow or a particularly fast disk.
To reserve a disk
◆ Type the following command:
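A sketch of the reserve command, with placeholder disk group and disk names:
# vxedit -g mydg set reserve=on mydg03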
After you enter this command, the vxassist program does not allocate space
from the selected disk unless that disk is specifically mentioned on the
vxassist command line. For example, if mydg03 is reserved, use the following
command:
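A sketch that explicitly names the reserved disk so that vxassist can allocate from it (the volume name and size are placeholders):
# vxassist -g mydg make vol03 20m mydg03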
■ About enabling and disabling I/O for controllers and storage processors
Active/Passive (A/P) Allows access to its LUNs (logical units; real disks
or virtual disks created using hardware) via the
primary (active) path on a single controller (also
known as an access port or a storage processor)
during normal operation.
Active/Passive in explicit failover mode or non-autotrespass mode (A/P-F) The appropriate command must be issued to the array to make the LUNs fail over to the secondary path.
Active/Passive with LUN group failover (A/P-G) For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary controller and the secondary controller are each connected to a separate group of LUNs. If a single LUN in the primary controller’s LUN group fails, all LUNs in that group fail over to the secondary controller.
An array policy module (APM) may define array types to DMP in addition to the
standard types for the arrays that it supports.
VxVM uses DMP metanodes (DMP nodes) to access disk devices connected to the
system. For each disk in a supported array, DMP maps one node to the set of paths
that are connected to the disk. Additionally, DMP associates the appropriate
multi-pathing policy for the disk array with the node. For disks in an unsupported
array, DMP maps a separate node to each path that is connected to a disk. The
raw and block devices for the nodes are created in the directories /dev/vx/rdmp
and /dev/vx/dmp respectively.
Figure 4-1 shows how DMP sets up a node for a disk in a supported disk array.
Figure 4-1 How DMP represents multiple physical paths to a disk as one node
VxVM implements a disk device naming scheme that allows you to recognize to
which array a disk belongs.
Figure 4-2 shows an example where two paths, c1t99d0 and c2t99d0, exist to a
single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access
it.
(Figure 4-2: the host controllers c1 and c2 connect through Fibre Channel switches to disk enclosure enc0; DMP maps the two paths, c1t99d0 and c2t99d0, to the single DMP node enc0_0.)
If required, the response of DMP to I/O failure on a path can be tuned for the paths
to individual arrays. DMP can be configured to time out an I/O request either after
a given period of time has elapsed without the request succeeding, or after a given
number of retries on a path have failed.
See “Configuring the response to I/O failures” on page 203.
I/O throttling
If I/O throttling is enabled, and the number of outstanding I/O requests builds up
on a path that has become less responsive, DMP can be configured to prevent new
I/O requests being sent on the path either when the number of outstanding I/O
requests has reached a given value, or a given time has elapsed since the last
successful I/O request on the path. While throttling is applied to a path, the new
I/O requests on that path are scheduled on other available paths. The throttling
is removed from the path if the HBA reports no error on the path, or if an
outstanding I/O request on the path succeeds.
See “Configuring the I/O throttling mechanism” on page 205.
Load balancing
By default, the DMP uses the Minimum Queue policy for load balancing across
paths for Active/Active (A/A), Active/Passive (A/P), Active/Passive with explicit
failover (A/P-F) and Active/Passive with group failover (A/P-G) disk arrays. Load
balancing maximizes I/O throughput by using the total bandwidth of all available
paths. I/O is sent down the path which has the minimum outstanding I/Os.
For A/P disk arrays, I/O is sent down the primary paths. If the primary paths fail,
I/O is switched over to the available secondary paths. As the continuous transfer
of ownership of LUNs from one controller to another results in severe I/O
slowdown, load balancing across primary and secondary paths is not performed
for A/P disk arrays unless they support concurrent I/O.
For A/P, A/P-F and A/P-G arrays, load balancing is performed across all the
currently active paths as is done for A/A arrays.
You can use the vxdmpadm command to change the I/O policy for the paths to an
enclosure or disk array.
See “Specifying the I/O policy” on page 195.
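For illustration only, a sketch of such a command (the enclosure name enc0 and the policy are placeholders; verify the syntax in the vxdmpadm(1M) manual page):
# vxdmpadm setattr enclosure enc0 iopolicy=minimumq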
# stmsboot -d
Dynamic Reconfiguration
Dynamic Reconfiguration (DR) is a feature that is available on some high-end
enterprise systems. It allows some components (such as CPUs, memory, and other
controllers or I/O boards) to be reconfigured while the system is still running.
The reconfigured component might be handling the disks controlled by VxVM.
See “About enabling and disabling I/O for controllers and storage processors”
on page 167.
Note: You need an additional license to use the cluster feature of VxVM.
Note: Support for automatic failback of an A/P array requires that an appropriate
ASL (and APM, if required) is available for the array, and has been installed on
the system.
See “Discovering disks and dynamically adding disk arrays” on page 87.
For Active/Active type disk arrays, any disk can be simultaneously accessed
through all available physical paths to it. In a clustered environment, the nodes
do not all need to access a disk via the same physical path.
See “How to administer the Device Discovery Layer” on page 90.
See “Configuring array policy modules” on page 210.
Option 1 Suppresses all paths through the specified controller from the
view of VxVM.
Option 3 Suppresses disks from the view of VxVM that match a specified
Vendor ID and Product ID combination.
Option 4 Suppresses all but one path to a disk. Only one path is made
visible to VxVM.
Option 3 Unsuppresses disks from the view of VxVM that match a specified
Vendor ID and Product ID combination.
Option 5 Allows multi-pathing of all disks that have paths through the
specified controller.
If disabling I/O through an HBA controller or array port resulted in all primary paths being disabled, DMP will fail over to active secondary paths, and I/O will continue on them.
After the operation is over, you can use vxdmpadm to re-enable the paths through
the controllers.
See “Disabling I/O for paths, controllers or array ports” on page 201.
See “Enabling I/O for paths, controllers or array ports” on page 202.
Note: From release 5.0 of VxVM, these operations are supported for controllers
that are used to access disk arrays on which cluster-shareable disk groups are
configured.
# vxdisk path
This shows that two paths exist to each of the two disks, mydg01 and mydg02,
and also indicates that each disk is in the ENABLED state.
The output from the vxdisk list command displays the multi-pathing
information, as shown in the following example:
Device c2t0d0
devicetag c2t0d0
type sliced
hostid system01
.
.
.
Multipathing information:
numpaths: 2
c2t0d0s2 state=enabled type=primary
c1t0d0s2 state=disabled type=secondary
The numpaths line shows that there are 2 paths to the device. The next two
lines in the "Multipathing information" section show that one path is active
(state=enabled) and that the other path has failed (state=disabled).
The type field is shown for disks on Active/Passive type disk arrays such as
the EMC CLARiiON, Hitachi HDS 9200 and 9500, Sun StorEdge 6xxx, and Sun
StorEdge T3 array. This field indicates the primary and secondary paths to
the disk.
The type field is not displayed for disks on Active/Active type disk arrays
such as the EMC Symmetrix, Hitachi HDS 99xx and Sun StorEdge 99xx Series,
and IBM ESS Series. Such arrays have no concept of primary and secondary
paths.
Setting customized names for DMP nodes
You can also assign names from an input file. This enables you to customize the
DMP nodes on the system with meaningful names.
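A sketch of the command for assigning names from a file (the file name is a placeholder; verify the syntax in the vxddladm(1M) manual page):
# vxddladm assign names file=/tmp/dmpnames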
The physical path is specified by the argument to the nodename attribute, which must be a valid path listed in the /dev/rdsk directory.
The command displays output similar to the following:
Use the -v option to display the LUN serial number and the array volume ID.
Use the enclosure attribute with getdmpnode to obtain a list of all DMP nodes for
the specified enclosure.
Use the dmpnodename attribute with getdmpnode to display the DMP information
for a given DMP node.
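Sketches of the getdmpnode forms described above (the path, enclosure, and DMP node names are placeholders):
# vxdmpadm getdmpnode nodename=c0t1d0s2
# vxdmpadm getdmpnode enclosure=enc0
# vxdmpadm getdmpnode dmpnodename=emc_clariion0_158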
The following command displays the consolidated information for all of the DMP
nodes in the system:
Use the enclosure attribute with list dmpnode to obtain a list of all DMP nodes
for the specified enclosure.
For example, the following command displays the consolidated information for
all of the DMP nodes in the enc0 enclosure.
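Sketches of these list dmpnode forms (verify against the vxdmpadm(1M) manual page):
# vxdmpadm list dmpnode all
# vxdmpadm list dmpnode enclosure=enc0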
Use the dmpnodename attribute with list dmpnode to display the DMP information
for a given DMP node. The DMP node can be specified by name or by specifying
a path name. The detailed information for the specified DMP node includes path
information for each subpath of the listed dmpnode.
The path state differentiates between a path that is disabled due to a failure and
a path that has been manually disabled for administrative purposes. A path that
has been manually disabled using the vxdmpadm disable command is listed as
disabled(m).
For example, the following command displays the consolidated information for
the DMP node emc_clariion0_158.
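A sketch of the command (the DMP node name is the one given above):
# vxdmpadm list dmpnode dmpnodename=emc_clariion0_158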
dmpdev = emc_clariion0_158
state = enabled
enclosure = emc_clariion0
cab-sno = CK200070400359
asl = libvxCLARiiON.so
vid = DGC
pid = DISK
array-name = EMC_CLARiiON
array-type = CLR-A/PF
iopolicy = MinimumQ
avid = 158
lun-sno = 600601606D121B008FB6E0CA8EDBDB11
udid = DGC%5FDISK%5FCK200070400359%5F600601606D121B008FB6E0CA8EDBDB11
dev-attr = lun
###path = name state type transport ctlr hwpath aportID aportWWN attr
path = c0t5006016141E03B33d1s2 enabled(a) primary FC c0
/pci@1e,600000/SUNW,emlxs@3/fp@0,0 A5 50:06:01:61:41:e0:3b:33 -
path = c0t5006016041E03B33d1s2 enabled(a) primary FC c0
/pci@1e,600000/SUNW,emlxs@3/fp@0,0 A4 50:06:01:60:41:e0:3b:33 -
path = c0t5006016841E03B33d1s2 enabled secondary FC c0
/pci@1e,600000/SUNW,emlxs@3/fp@0,0 B4 50:06:01:68:41:e0:3b:33 -
path = c1t5006016141E03B33d1s2 enabled(a) primary FC c1
/pci@1e,600000/SUNW,emlxs@3,1/fp@0,0 A5 50:06:01:61:41:e0:3b:33 -
path = c1t5006016841E03B33d1s2 enabled secondary FC c1
/pci@1e,600000/SUNW,emlxs@3,1/fp@0,0 B4 50:06:01:68:41:e0:3b:33 -
path = c1t5006016041E03B33d1s2 enabled(a) primary FC c1
/pci@1e,600000/SUNW,emlxs@3,1/fp@0,0 A4 50:06:01:60:41:e0:3b:33 -
# vxdmpadm getsubpaths
For A/A arrays, all enabled paths that are available for I/O are shown as
ENABLED(A).
For A/P arrays in which the I/O policy is set to singleactive, only one path is
shown as ENABLED(A). The other paths are enabled but not available for I/O. If
the I/O policy is not set to singleactive, DMP can use a group of paths (all primary
or all secondary) for I/O, which are shown as ENABLED(A).
See “Specifying the I/O policy” on page 195.
Paths that are in the DISABLED state are not available for I/O operations.
A path that was manually disabled by the system administrator displays as
DISABLED(M). A path that failed displays as DISABLED.
You can use getsubpaths to obtain information about all the paths that are
connected to a particular HBA controller:
You can also use getsubpaths to obtain information about all the paths that are
connected to a port on an array. The array port can be specified by the name of
the enclosure and the array port ID, or by the worldwide name (WWN) identifier
of the array port:
For example, to list subpaths through an array port through the enclosure and
the array port ID:
For example, to list subpaths through an array port through the WWN:
You can use getsubpaths to obtain information about all the subpaths of an
enclosure.
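Sketches of the getsubpaths forms described above (the controller, enclosure, and port ID values are placeholders, and the WWN is taken from the path listing above; verify the attribute names in the vxdmpadm(1M) manual page):
# vxdmpadm getsubpaths ctlr=c2
# vxdmpadm getsubpaths enclosure=enc0 portid=A5
# vxdmpadm getsubpaths pwwn=50:06:01:61:41:e0:3b:33
# vxdmpadm getsubpaths enclosure=enc0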
This output shows that the controller c1 is connected to disks that are not in any
recognized DMP category as the enclosure type is OTHER.
The other controllers are connected to disks that are in recognized DMP categories.
All the controllers are in the ENABLED state which indicates that they are available
for I/O operations.
The state DISABLED is used to indicate that controllers are unavailable for I/O
operations. The unavailability can be due to a hardware failure or due to I/O
operations being disabled on that controller by using the vxdmpadm disable
command.
The following forms of the command list controllers belonging to a specified enclosure or enclosure type:
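Sketches of the two forms (the enclosure name and array type are placeholders; verify against the vxdmpadm(1M) manual page):
# vxdmpadm listctlr enclosure=enc0
# vxdmpadm listctlr type=array-type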
The vxdmpadm getctlr command displays HBA vendor details and the Controller
ID. For iSCSI devices, the Controller ID is the IQN or IEEE-format based name.
For FC devices, the Controller ID is the WWN. Because the WWN is obtained from
ESD, this field is blank if ESD is not running. ESD is a daemon process used to
notify DDL about the occurrence of events. The WWN shown as ‘Controller ID’ maps
to the WWN of the HBA port associated with the host controller.
# vxdmpadm getctlr c5
If an A/P or ALUA array is under the control of MPxIO, then DMP claims the
devices in A/A mode. The output of the above commands shows the ARRAY_TYPE
as A/A. For arrays under MPxIO control, DMP does not store A/P-specific attributes
or ALUA-specific attributes. These attributes include primary/secondary paths,
port serial number, and the array controller ID.
Note: DMP does not report information about array ports for LUNs that are
controlled by the native multi-pathing driver. DMP reports pWWN information
only if the dmp_monitor_fabric tunable is on, and the event source daemon (esd)
is running.
To display the attributes of an array port that is accessible via a path, DMP node
or HBA controller, use one of the following commands:
The following form of the command displays information about all of the array
ports within the specified enclosure:
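Sketches of these forms (the attribute values are placeholders; the getportids keyword should be verified in the vxdmpadm(1M) manual page):
# vxdmpadm getportids path=path-name
# vxdmpadm getportids dmpnodename=dmpnode-name
# vxdmpadm getportids ctlr=ctlr-name
# vxdmpadm getportids enclosure=enclr-name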
The following example shows information about the array port that is accessible
via DMP node c2t66d0s2:
The following commands display the paths that DMP has discovered for a given
TPD device, and the TPD device that corresponds to a given TPD-controlled node
discovered by DMP:
# vxdisk list
The following command displays the paths that DMP has discovered, and which
correspond to the PowerPath-controlled node, emcpower10s2:
Conversely, the next command displays information about the PowerPath node
that corresponds to the path, c7t0d10s2, discovered by DMP:
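Sketches of the two commands referred to above (the node and path names are the ones given in the text; verify the attribute names in the vxdmpadm(1M) manual page):
# vxdmpadm getsubpaths tpdnodename=emcpower10s2
# vxdmpadm gettpdnode nodename=c7t0d10s2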
Hardware RAID types: Displays what kind of Storage RAID Group the LUN belongs to.
Thin Provisioning Discovery and Reclamation: Displays the LUN’s thin reclamation abilities.
Device Media Type: Displays the type of media, whether SSD (solid state disk).
Each LUN can have one or more of these attributes discovered during device
discovery. ASLs furnish this information to DDL through the property
DDL_DEVICE_ATTR. The vxdisk -p list command displays DDL extended
attributes. For example, the following command shows attributes of “std”, “fc”,
and “RAID_5” for this LUN:
# vxdisk -p list
DISK : tagmastore-usp0_0e18
DISKID : 1253585985.692.rx2600h11
VID : HITACHI
UDID : HITACHI%5FOPEN-V%5F02742%5F0E18
REVISION : 5001
PID : OPEN-V
PHYS_CTLR_NAME : 0/4/1/1.0x50060e8005274246
LUN_SNO_ORDER : 411
LUN_SERIAL_NO : 0E18
LIBNAME : libvxhdsusp.sl
HARDWARE_MIRROR: no
DMP_DEVICE : tagmastore-usp0_0e18
DDL_THIN_DISK : thick
DDL_DEVICE_ATTR: std fc RAID_5
CAB_SERIAL_NO : 02742
ATYPE : A/A
ARRAY_VOLUME_ID: 0E18
ARRAY_PORT_PWWN: 50:06:0e:80:05:27:42:46
ANAME : TagmaStore-USP
TRANSPORT : FC
The vxdisk -x attribute -p list command displays the one-line listing for
the property list and the attributes. The following example shows two Hitachi
LUNs that support Thin Reclamation via the attribute hdprclm:
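A sketch of the command (the attribute name DDL_DEVICE_ATTR is taken from the property list shown above):
# vxdisk -x DDL_DEVICE_ATTR -p list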
You can specify multiple -x options in the same command to display multiple
entries. For example:
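A sketch of a command with multiple -x options (the second attribute, VID, is illustrative):
# vxdisk -x DDL_DEVICE_ATTR -x VID -p list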
# vxdisk -e list
DEVICE TYPE DISK GROUP STATUS OS_NATIVE_NAME ATTR
tagmastore-usp0_0a7a auto - - online c10t0d2 std fc RAID_5
tagmastore-usp0_0a7b auto - - online c10t0d3 std fc RAID_5
tagmastore-usp0_0a78 auto - - online c10t0d0 std fc RAID_5
tagmastore-usp0_0655 auto - - online c13t2d7 hdprclm fc
tagmastore-usp0_0656 auto - - online c13t3d0 hdprclm fc
tagmastore-usp0_0657 auto - - online c13t3d1 hdprclm fc
For a list of ASLs that support Extended Attributes, and descriptions of these
attributes, refer to the hardware compatibility list at the following URL:
http://seer.entsupport.symantec.com/docs/330441.htm
Note: DMP does not support Extended Attributes for LUNs that are controlled by
the native multi-pathing driver.
Note: The ! character is a special character in some shells. The following syntax
shows how to escape it in a bash shell.
where:
all – all devices
product=VID:PID – all devices with the specified VID:PID
ctlr=ctlr – all devices through the given controller
dmpnodename=diskname - all paths under the DMP node
dmpnodename=diskname path=\!pathname - all paths under the DMP node except
the one specified.
The memory attribute can be used to limit the maximum amount of memory that
is used to record I/O statistics for each CPU. The default limit is 32k (32 kilobytes)
per CPU.
To display the accumulated statistics at regular intervals, use the following
command:
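A sketch of the general form (the filter keywords and the interval and count attributes should be verified in the vxdmpadm(1M) manual page):
# vxdmpadm iostat show {all|ctlr=ctlr-name|dmpnodename=dmp-node| \
  enclosure=enclr-name|pathname=path-name|portid=array-portid} \
  [interval=seconds [count=N]]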
This command displays I/O statistics for all paths (all), or for a specified
controller, DMP node, enclosure, path or port ID. The statistics displayed are the
CPU usage and amount of memory per CPU used to accumulate statistics, the
number of read and write operations, the number of kilobytes read and written,
and the average time in milliseconds per kilobyte that is read or written.
The interval and count attributes may be used to specify the interval in seconds
between displaying the I/O statistics, and the number of lines to be displayed. The
actual interval may be smaller than the value specified if insufficient memory is
available to record the statistics.
To disable the gathering of statistics, enter this command:
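A sketch of the command:
# vxdmpadm iostat stop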
The next command displays the current statistics including the accumulated total
numbers of read and write operations and kilobytes read and written, on all paths:
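A sketch of the command:
# vxdmpadm iostat show all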
The following command changes the amount of memory that vxdmpadm can use
to accumulate the statistics:
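A sketch of the command (the memory value of 4096 is illustrative):
# vxdmpadm iostat start memory=4096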
The displayed statistics can be filtered by path name, DMP node name, and
enclosure name (note that the per-CPU memory has changed following the previous
command):
You can also specify the number of times to display the statistics and the time
interval. Here the incremental statistics for a path are displayed twice with a
2-second interval:
See the vxdmpadm(1m) manual page for more information about the vxdmpadm
iostat command.
For example:
To display the count of I/Os that returned with errors on a DMP node, path or
controller:
For example, to show the I/O counts that returned errors on a path:
For example:
To group by controller:
For example:
To group by arrayport:
For example:
To group by enclosure:
For example:
You can also filter out entities for which all data entries are zero. This option is
especially useful in a cluster environment which contains many failover devices.
You can display only the statistics for the active paths.
To filter all zero entries from the output of the iostat show command:
For example:
You can now specify the units in which the statistics data is displayed. By default,
the read/write times are displayed in milliseconds up to 2 decimal places. The
throughput data is displayed in terms of ‘BLOCKS’ and the output is scaled,
meaning that the small values are displayed in small units and the larger values
are displayed in bigger units, keeping significant digits constant. The -u option
accepts the following options:
primary Defines a path as being the primary path for a JBOD disk array.
The following example specifies a primary path for a JBOD disk
array:
secondary Defines a path as being the secondary path for a JBOD disk array.
The following example specifies a secondary path for a JBOD disk
array:
standby Marks a standby (failover) path that is not used for normal I/O
scheduling. This path is used if there are no active paths available
for I/O. The next example specifies a standby path for an A/P-C
disk array:
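Sketches of the three commands referred to above (the path names are placeholders; verify the pathtype attribute in the vxdmpadm(1M) manual page):
# vxdmpadm setattr path c2t10d0s2 pathtype=primary
# vxdmpadm setattr path c1t20d0s2 pathtype=secondary
# vxdmpadm setattr path c2t10d0s2 pathtype=standby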
For example, to list the devices with fewer than 3 enabled paths, use the following
command:
To display the minimum redundancy level for a particular device, use the vxdmpadm
getattr command, as follows:
For example, to show the minimum redundancy level for the enclosure
HDS9500-ALUA0:
The next example displays the setting of partitionsize for the enclosure enc0,
on which the balanced I/O policy with a partition size of 2MB has been set:
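A sketch of the command:
# vxdmpadm getattr enclosure enc0 partitionsize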
ENCLR_NAME              DEFAULT        CURRENT
---------------------------------------
enc0                    512            4096
Warning: Starting with release 4.1 of VxVM, I/O policies are recorded in the file
/etc/vx/dmppolicy.info, and are persistent across reboots of the system.
The default value for the partition size is 512 blocks (256k).
Specifying a partition size of 0 is equivalent to the default
partition size of 512 blocks (256k).
minimumq This policy sends I/O on paths that have the minimum
number of outstanding I/O requests in the queue for a LUN.
No further configuration is possible as DMP automatically
determines the path with the shortest queue.
priority This policy is useful when the paths in a SAN have unequal
performance, and you want to enforce load balancing
manually. You can assign priorities to each path based on
your knowledge of the configuration and performance
characteristics of the available paths, and of other aspects
of your system.
singleactive This policy routes I/O down the single active path. This
policy can be configured for A/P arrays with one active path
per controller, where the other paths are used in case of
failover. If configured for A/A arrays, there is no load
balancing across the paths, and the alternate paths are only
used to provide high availability (HA). If the current active
path fails, I/O is switched to an alternate active path. No
further configuration is possible as the single active path
is selected by DMP.
The use_all_paths attribute only applies to A/A-A arrays. For other arrays, the
above command displays the message:
Device: c3t2d15s2
.
.
.
numpaths: 8
c2t0d15s2 state=enabled type=primary
c2t1d15s2 state=enabled type=primary
c3t1d15s2 state=enabled type=primary
c3t2d15s2 state=enabled type=primary
c4t2d15s2 state=enabled type=primary
c4t3d15s2 state=enabled type=primary
c5t3d15s2 state=enabled type=primary
c5t4d15s2 state=enabled type=primary
In addition, the device is in the enclosure ENC0, belongs to the disk group mydg,
and contains a simple concatenated volume myvol1.
The first step is to enable the gathering of DMP statistics:
Next the dd command is used to apply an input workload from the volume:
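Sketches of the two commands referred to above (the volume path follows from the disk group mydg and the volume myvol1 named earlier):
# vxdmpadm iostat start
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &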
By running the vxdmpadm iostat command to display the DMP statistics for the
device, it can be seen that all I/O is being directed to one path, c5t4d15s2:
The vxdmpadm command is used to display the I/O policy for the enclosure that
contains the device:
This shows that the policy for the enclosure is set to singleactive, which explains
why all the I/O is taking place on one path.
To balance the I/O load across the multiple primary paths, the policy is set to
round-robin as shown here:
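A sketch of the command (ENC0 is the enclosure named earlier):
# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin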
With the workload still running, the effect of changing the I/O policy to balance
the load across the primary paths can now be seen.
The enclosure can be returned to the single active I/O policy by entering the
following command:
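A sketch of the command:
# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive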
Note: From release 5.0 of VxVM, this operation is supported for controllers that
are used to access disk arrays on which cluster-shareable disk groups are
configured.
Before detaching a system board, stop all I/O to the HBA controllers that are
located on the board. To do this, execute the vxdmpadm disable command, and
then run the Dynamic Reconfiguration (DR) facility provided by Sun.
To disable I/O for a path, use the following command:
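A sketch of the command (path-name is a placeholder):
# vxdmpadm [-c|-f] disable path=path-name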
To disable I/O for the paths connected to an HBA controller, use the following
command:
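A sketch of the command (ctlr-name is a placeholder):
# vxdmpadm [-c|-f] disable ctlr=ctlr-name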
To disable I/O for the paths connected to an array port, use one of the following
commands:
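Sketches of the two forms (the attribute values are placeholders):
# vxdmpadm [-c|-f] disable enclosure=enclr-name portid=array-port-ID
# vxdmpadm [-c|-f] disable pwwn=array-port-WWN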
where the array port is specified either by the enclosure name and the array port
ID, or by the array port’s worldwide name (WWN) identifier.
The following are examples of using the command to disable I/O on an array port:
You can use the -c option to check if there is only a single active path to the disk.
If so, the disable command fails with an error message unless you use the -f
option to forcibly disable the path.
The disable operation fails if it is issued to a controller that is connected to the
root disk through a single path, and there are no root disk mirrors configured on
alternate paths. If such mirrors exist, the command succeeds.
Note: From release 5.0 of VxVM, this operation is supported for controllers that
are used to access disk arrays on which cluster-shareable disk groups are
configured.
To enable I/O for the paths connected to an HBA controller, use the following
command:
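A sketch of the command (ctlr-name is a placeholder):
# vxdmpadm enable ctlr=ctlr-name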
To enable I/O for the paths connected to an array port, use one of the following
commands:
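Sketches of the two forms (the attribute values are placeholders):
# vxdmpadm enable enclosure=enclr-name portid=array-port-ID
# vxdmpadm enable pwwn=array-port-WWN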
where the array port is specified either by the enclosure name and the array port
ID, or by the array port’s worldwide name (WWN) identifier.
The following are examples of using the command to enable I/O on an array port:
Renaming an enclosure
The vxdmpadm setattr command can be used to assign a meaningful name to an
existing enclosure, for example:
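A sketch of such a command (the new name GRP1 is illustrative):
# vxdmpadm setattr enclosure enc0 name=GRP1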
To display the current settings for handling I/O request failures that are applied
to the paths to an enclosure, array name or array type, use the vxdmpadm getattr
command.
See “Displaying recovery option values” on page 207.
To set a limit for the number of times that DMP attempts to retry sending an I/O
request on a path, use the following command:
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=fixedretry retrycount=n
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=timebound iotimeout=seconds
The default value of iotimeout is 300 seconds. For some applications such as
Oracle, it may be desirable to set iotimeout to a larger value. The iotimeout value
for DMP should be greater than the I/O service time of the underlying operating
system layers.
The following example configures time-bound recovery for the enclosure enc0,
and sets the value of iotimeout to 360 seconds:
The next example sets a fixed-retry limit of 10 for the paths to all Active/Active
arrays:
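Sketches of the two examples (the syntax follows the recoveryoption forms shown above; verify against the vxdmpadm(1M) manual page):
# vxdmpadm setattr enclosure enc0 recoveryoption=timebound iotimeout=360
# vxdmpadm setattr arraytype A/A recoveryoption=fixedretry retrycount=10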
The above command also has the effect of configuring I/O throttling with the
default settings.
See “Configuring the I/O throttling mechanism” on page 205.
Note: The response to I/O failure settings is persistent across reboots of the system.
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=nothrottle
The following example shows how to disable I/O throttling for the paths to the
enclosure enc0:
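A sketch of the command:
# vxdmpadm setattr enclosure enc0 recoveryoption=nothrottle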
The vxdmpadm setattr command can be used to enable I/O throttling on the paths
to a specified enclosure, disk array name, or type of array:
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type}\
recoveryoption=throttle [iotimeout=seconds]
If the iotimeout attribute is specified, its argument specifies the time in seconds
that DMP waits for an outstanding I/O request to succeed before invoking I/O
throttling on the path.
Note: The I/O throttling settings are persistent across reboots of the system.
To turn on the feature, set the dmp_sfg_threshold value to the required number
of path failures that triggers SFG. The default value of the tunable is 1, which
means that the feature is on.
To see the Subpaths Failover Groups ID, use the following command:
# vxdmpadm getattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption
The following example shows the vxdmpadm getattr command being used to
display the recoveryoption option values that are set on an enclosure.
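A sketch of such a command (the enclosure name enc0 is a placeholder):
# vxdmpadm getattr enclosure enc0 recoveryoption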
This shows the default and current policy options and their values.
Table 4-1 summarizes the possible recovery option settings for retrying I/O after
an error.
Table 4-2 summarizes the possible recovery option settings for throttling I/O.
Note: The DMP path restoration thread does not change the disabled state of the
path through a controller that you have disabled using vxdmpadm disable.
When configuring DMP path restoration policies, you must stop the path
restoration thread, and then restart it with new attributes.
See “Stopping the DMP path restoration thread” on page 210.
Use the vxdmpadm start restore command to configure one of the following
restore policies. The policy will remain in effect until the restore thread is stopped
or the values are changed using vxdmpadm settune command.
■ check_all
The path restoration thread analyzes all paths in the system and revives the
paths that are back online, as well as disabling the paths that are inaccessible.
The command to configure this policy is:
■ check_alternate
The path restoration thread checks that at least one alternate path is healthy.
It generates a notification if this condition is not met. This policy avoids inquiry
commands on all healthy paths, and is less costly than check_all in cases
where a large number of paths are available. This policy is the same as
check_all if there are only two paths per DMP node. The command to configure
this policy is:
■ check_disabled
This is the default path restoration policy. The path restoration thread checks
the condition of paths that were previously disabled due to hardware failures,
and revives them if they are back online. The command to configure this policy
is:
■ check_periodic
The path restoration thread performs check_all once in a given number of
cycles, and check_disabled in the remainder of the cycles. This policy may
lead to periodic slowing down (due to check_all) if there is a large number of
paths available. The command to configure this policy is:
The interval attribute must be specified for this policy. The default number
of cycles between running the check_all policy is 10.
The interval attribute specifies how often the path restoration thread examines
the paths. For example, after stopping the path restoration thread, the polling
interval can be set to 400 seconds using the following command:
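The commands themselves were not preserved in this copy. Sketches of the vxdmpadm start restore forms for the policies listed above, and of the 400-second interval example (verify against the vxdmpadm(1M) manual page):
# vxdmpadm start restore [interval=seconds] \
  policy=check_all|check_alternate|check_disabled|check_periodic
# vxdmpadm start restore interval=400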
Starting with the 5.0MP3 release, you can also use the vxdmpadm settune command
to change the restore policy, restore interval, and restore period. This method
stores the values for these arguments as DMP tunables. The settings are
immediately applied and are persistent across reboots. Use the vxdmpadm gettune
command to view the current settings.
See “DMP tunable parameters” on page 530.
If the vxdmpadm start restore command is given without specifying a policy or
interval, the path restoration thread is started with the persistent policy and
interval settings previously set by the administrator with the vxdmpadm settune
command. If the administrator has not set a policy or interval, the system defaults
are used. The system default restore policy is check_disabled. The system default
interval is 300 seconds.
Warning: Decreasing the interval below the system default can adversely affect
system performance.
Warning: Automatic path failback stops if the path restoration thread is stopped.
The output from this command includes the file name of each module, the
supported array type, the APM name, the APM version, and whether the module
is currently loaded and in use. To see detailed information for an individual module,
specify the module name as the argument to the command:
The optional configuration attributes and their values are specific to the APM for
an array. Consult the documentation that is provided by the array vendor for
details.
Note: By default, DMP uses the most recent APM that is available. Specify the -u
option instead of the -a option if you want to force DMP to use an earlier version
of the APM. The current version of an APM is replaced only if it is not in use.
Specifying the -r option allows you to remove an APM that is not currently loaded:
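Sketches of the APM commands referred to above (module-name is a placeholder; the cfgapm keyword and its options should be verified in the vxdmpadm(1M) manual page):
# vxdmpadm listapm all
# vxdmpadm listapm module-name
# vxdmpadm cfgapm module-name [attr1=value1 ...]
# vxdmpadm -r cfgapm module-name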
2 Identify which LUNs to remove from the host. Do one of the following:
■ Use Storage Array Management to identify the Array Volume ID (AVID)
for the LUNs.
■ If the array does not report the AVID, use the LUN index.
■ If the data has not been evacuated and the LUN is part of a subdisk or disk
group, enter the following command to remove the LUNs from the disk
group. If the disk is part of a shared disk group, you must use the -k option
to force the removal.
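A sketch of the command (diskgroup and da-name are placeholders):
# vxdg -g diskgroup [-k] rmdisk da-name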
4 For LUNs that are in use by ZFS, export or destroy the zpool.
5 Using the AVID or LUN index, use Storage Array Management to unmap or
unmask the LUNs you identified in step 2.
6 Remove the LUNs from the vxdisk list. Enter the following command on all
nodes in a cluster:
# vxdisk rm da-name
This is a required step. If you do not perform this step, the DMP device tree
shows ghost paths.
7 Clean up the Solaris SCSI device tree for the devices that you removed in step
6.
See “Cleaning up the operating system device tree after removing LUNs”
on page 219.
This step is required. You must clean up the operating system SCSI device
tree to release the SCSI target ID for reuse if a new LUN is added to the host
later.
8 Scan the operating system device tree.
See “Scanning an operating system device tree after adding or removing
LUNs” on page 219.
9 Use VxVM to perform a device scan. You must perform this operation on all
nodes in a cluster. Enter one of the following commands:
■ # vxdctl enable
■ # vxdisk scandisks
10 Refresh the DMP device name database using the following command:
# vxddladm assign names
11 Verify that the LUNs were removed cleanly by answering the following
questions:
■ Is the device tree clean?
Verify that the operating system metanodes are removed from the /dev
directory.
■ Were all the appropriate LUNs removed?
Use the DMP disk reporting tools such as the vxdisk list command
output to determine if the LUNs have been cleaned up successfully.
■ Is the vxdisk list output correct?
Verify that the vxdisk list output shows the correct number of paths
and does not include any ghost disks.
If the answer to any of these questions is "No," return to step 5 and perform
the required steps.
If the answer to all of the questions is "Yes," the LUN remove operation is
successful.
■ # vxdisk scandisks
7 Refresh the DMP device name database using the following command:
8 Verify that the LUNs were added correctly by answering the following
questions:
■ Do the newly provisioned LUNs appear in the vxdisk list output?
■ Are the configured paths present for each LUN?
If the answer to any of these questions is "No," return to step 3 and begin the
procedure again.
If the answer to all of the questions is "Yes," the LUNs have been successfully
added. You can now add the LUNs to a disk group, create new volumes, or
grow existing volumes.
If the dmp_native_support tunable is set to ON and the new LUN does not
have a VxVM label or is not claimed by a TPD driver then the LUN is available
for use by ZFS.
The message above indicates that a new LUN is trying to reuse the target ID of
an older LUN. The device entries have not been cleaned, so the new LUN cannot
use the target ID. Until the operating system device tree is cleaned up, DMP
prevents this operation.
# cfgadm -c configure c2
# devfsadm -Cv
2 Use Storage Array Management or the command line to unmap the LUNs.
After they are unmapped, Solaris indicates the devices are either unusable
or failing.
See “Reconfiguring a LUN online that is under DMP control” on page 213.
3 If the output indicates the LUNs are failing, you must force an LIP on the
HBA.
This operation probes the targets again, so that the output indicates the devices
are unusable. To remove a device from the operating system device tree, it
must be unusable.
4 Remove the device from the cfgadm database. On the HBA, enter the following
commands:
# devfsadm -Cv
Upgrading the array controller firmware online
Array vendors have different names for this process. For example, EMC calls it a
nondisruptive upgrade (NDU) for CLARiiON arrays.
A/A type arrays require no special handling during this online upgrade process.
For A/P, A/PF, and ALUA type arrays, DMP performs array-specific handling
through vendor-specific array policy modules (APMs) during an online controller
code upgrade.
When a controller resets and reboots during a code upgrade, DMP detects this
state through the SCSI Status. DMP immediately fails over all I/O to the next
controller.
If the array does not fully support NDU, all paths to the controllers may be
unavailable for I/O for a short period of time. Before beginning the upgrade, set
the dmp_lun_retry_timeout tunable to a period greater than the time that you
expect the controllers to be unavailable for I/O. DMP retries the I/Os until the end
of the dmp_lun_retry_timeout period, or until the I/O succeeds, whichever
happens first. Therefore, you can perform the firmware upgrade without
interrupting the application I/Os.
For example, if you expect the paths to be unavailable for I/O for 300 seconds, use
the following command:
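A sketch of the command:
# vxdmpadm settune dmp_lun_retry_timeout=300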
DMP retries the I/Os for 300 seconds, or until the I/O succeeds.
To verify which arrays support Online Controller Upgrade or NDU, see the
hardware compatibility list (HCL) at the following URL:
http://entsupport.symantec.com/docs/330441
Replacing a host bus adapter on an M5000 server
2 Identify the HBA and its WWPN(s) that you want to replace, using the
cfgadm command.
To select the HBA to dump the portap and get the WWPN, enter the following:
Alternately, you can run the fcinfo hba-port Solaris command to get the
WWPN(s) for the HBA ports.
3 Ensure you have a compatible spare HBA for hot-swap.
4 Stop the I/O operations on the HBA port(s) and disable the DMP subpath(s)
for the HBA that you want to replace.
5 Dynamically unconfigure the HBA in the PCIe slot using the cfgadm command.
console messages
Oct 24 16:21:44 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is removed from the slot iou 0-pci 1
6 Verify that the HBA card that is being replaced in step 5 is not in the
configuration. Enter the following command:
console messages
iou 0-pci 1 unknown disconnected unconfigured unknown
10 Bring the replaced HBA back into the configuration. Enter the following:
# cfgadm -c configure iou 0-pci 1
console messages
Oct 24 16:21:57 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
card is inserted in the slot iou#0-pci#1 (pci dev 0)
11 Verify that the reinserted HBA is in the configuration. Enter the following:
# cfgadm -al | grep -i fibre
iou#0-pci 1 fibre/hp connected configured ok <====
iou#0-pci 4 fibre/hp connected configured ok
# cfgadm -c configure c3
15 Clean up the device tree for old LUNs. Enter the following:
# devfsadm -Cv
16 If VxVM does not show a ghost path for the removed HBA path, enable the
path using the vxdmpadm command. This performs the device scan for that
particular HBA subpath(s). Enter the following:
17 Verify if I/O operations are scheduled on that path. If I/O operations are
running correctly on all paths, the dynamic HBA replacement operation is
complete.
Chapter 6
Creating and administering
disk groups
This chapter includes the following topics:
continues to refer to it. You can replace disks by first associating a different
physical disk with the name of the disk to be replaced and then recovering any
volume data that was stored on the original disk (from mirrors or backup copies).
Having disk groups that contain many disks and VxVM objects causes the private
region to fill. If you have large disk groups that are expected to contain more than
several hundred disks and VxVM objects, you should set up disks with larger
private areas. A major portion of a private region provides space for a disk group
configuration database that contains records for each VxVM object in that disk
group. Because each configuration record is approximately 256 bytes, you can
use the configuration database copy size to estimate the number of records that
you can create in a disk group. You can obtain the copy size in blocks from the
output of the vxdg list diskgroup command. It is the value of the permlen
parameter on the line starting with the string “config:”. This value is the smallest
of the len values for all copies of the configuration database in the disk group.
The value of the free parameter indicates the amount of remaining free space in
the configuration database.
See “Displaying disk group information” on page 236.
One way to overcome the problem of running out of free space is to split the
affected disk group into two separate disk groups.
See “Reorganizing the contents of disk groups” on page 270.
See “Backing up and restoring disk group configuration data” on page 286.
Before Veritas Volume Manager (VxVM) 4.0, a system installed with VxVM was
configured with a default disk group, rootdg. This group had to contain at least
one disk. By default, operations were directed to the rootdg disk group. From
release 4.0 onward, VxVM can function without any disk group having been
configured. Only when the first disk is placed under VxVM control must a disk
group be configured. Now, you do not have to name any disk group rootdg. If you
name a disk group rootdg, it has no special properties because of this name.
See “Specification of disk groups to commands” on page 230.
Additionally, before VxVM 4.0, some commands such as vxdisk were able to
deduce the disk group if the name of an object was uniquely defined in one disk
group among all the imported disk groups. Resolution of a disk group in this way
is no longer supported for any command.
bootdg Specifies the boot disk group. This is an alias for the disk group that
contains the volumes that are used to boot the system. VxVM sets
bootdg to the appropriate disk group if it takes control of the root
disk. Otherwise, bootdg is set to nodg (no disk group).
defaultdg Specifies the default disk group. This is an alias for the disk group
name that should be assumed if the -g option is not specified to a
command, or if the VXVM_DEFAULTDG environment variable is
undefined. By default, defaultdg is set to nodg (no disk group).
nodg Specifies to an operation that no disk group has been defined. For
example, if the root disk is not under VxVM control, bootdg is set to
nodg.
Warning: Do not try to change the assigned value of bootdg. If you change the
value, it may render your system unbootable.
If you have upgraded your system, you may find it convenient to continue to
configure a disk group named rootdg as the default disk group (defaultdg).
defaultdg and bootdg do not have to refer to the same disk group. Also, neither
the default disk group nor the boot disk group have to be named rootdg.
■ Use the default disk group name that is specified by the environment variable
VXVM_DEFAULTDG. This variable can also be set to one of the reserved
system-wide disk group names: bootdg, defaultdg, or nodg. If the variable is
undefined, the following rule is applied.
■ Use the disk group that has been assigned to the system-wide default disk
group alias, defaultdg. If this alias is undefined, the following rule is applied.
See “Displaying and specifying the system-wide default disk group” on page 231.
■ If the operation can be performed without requiring a disk group name (for
example, an edit operation on disk access records), do so.
If none of these rules succeeds, the requested operation fails.
# vxdg bootdg
# vxdg defaultdg
If a default disk group has not been defined, nodg is displayed. You can also use
the following command to display the default disk group:
If bootdg is specified as the argument to this command, the default disk group is
set to be the same as the currently defined system-wide boot disk group.
If nodg is specified as the argument to the vxdctl defaultdg command, the
default disk group is undefined.
The specified disk group is not required to exist on the system.
See the vxdctl(1M) manual page.
See the vxdg(1M) manual page.
You must explicitly upgrade the disk group to the appropriate disk group version
to use the feature.
See “Upgrading the disk group version” on page 284.
Table 6-1 summarizes the Veritas Volume Manager releases that introduce and
support specific disk group versions. It also summarizes the features that are
supported by each disk group version.
VxVM release 5.1 introduced disk group version 150 (new features: SSD device support, migration of ISP disk groups; supported disk group versions: 20, 30, 40, 50, 60, 70, 80, 90, 110, 120, 130, 140, 150).
VxVM release 5.0 introduced disk group version 140 (new features: data migration, Remote Mirror, coordinator disk groups (used by VCS), linked volumes, snapshot LUN import; supported disk group versions: 20, 30, 40, 50, 60, 70, 80, 90, 110, 120, 130, 140).
VxVM releases 3.2 and 3.5 introduced disk group version 90 (new features: cluster support for Oracle Resilvering, Disk Group Move, Split and Join, Device Discovery Layer (DDL) 1.0, layered volume support in clusters, ordered allocation, OS independent naming support, Persistent FastResync; supported disk group versions: 20, 30, 40, 50, 60, 70, 80, 90).
VxVM release 1.3 introduced disk group version 15 (supported disk group version: 15).
VxVM release 1.2 introduced disk group version 10 (supported disk group version: 10).
If you need to import a disk group on a system running an older version of Veritas
Volume Manager, you can create a disk group with an earlier disk group version.
See “Creating a disk group with an earlier disk group version” on page 239.
# vxdg list
NAME STATE ID
rootdg enabled 730344554.1025.tweety
newdg enabled 731118794.1213.tweety
To display more detailed information on a specific disk group, use the following
command:
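A sketch of the command (diskgroup is a placeholder):
# vxdg list diskgroup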
When you apply this command to a disk group named mydg, the output is similar
to the following:
Group: mydg
dgid: 962910960.1025.bass
import-id: 0.1
flags:
version: 160
local-activation: read-write
alignment: 512 (bytes)
ssb: on
detach-policy: local
copies: nconfig=default nlog=default
config: seqno=0.1183 permlen=3448 free=3428 templen=12 loglen=522
config disk c0t10d0 copy 1 len=3448 state=clean online
config disk c0t11d0 copy 1 len=3448 state=clean online
log disk c0t10d0 copy 1 len=522
log disk c0t11d0 copy 1 len=522
To verify the disk group ID and name that is associated with a specific disk (for
example, to import the disk group), use the following command:
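A sketch of the command (devicename is a placeholder):
# vxdisk -s list devicename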
This command provides output that includes the following information for the
specified disk. For example, the output for disk c0t12d0 is as follows:
Disk: c0t12d0
type: simple
flags: online ready private autoconfig autoimport imported
diskid: 963504891.1070.bass
dgname: newdg
dgid: 963504895.1075.bass
hostid: bass
info: privoffset=128
To display free space for a disk group, use the following command:
# vxdg free
The following example output shows the amount of free space in sectors:
# vxdiskadd c1t0d0
where c1t0d0 is the device name of a disk that is not currently assigned to a disk
group. The command dialog is similar to that described for the vxdiskadm
command.
See “Adding a disk to VxVM” on page 113.
You can also create disk groups using the following vxdg init command:
For example, to create a disk group named mktdg on device c1t0d0s2, enter the
following:
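A sketch of the command (the disk media name mktdg01 is illustrative):
# vxdg init mktdg mktdg01=c1t0d0s2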
The disk that is specified by the device name, c1t0d0s2, must have been previously
initialized with vxdiskadd or vxdiskadm. The disk must not currently belong to
a disk group.
You can use the cds attribute with the vxdg init command to specify whether a
new disk group is compatible with the Cross-platform Data Sharing (CDS) feature.
In Veritas Volume Manager 4.0 and later releases, newly created disk groups are
compatible with CDS by default (equivalent to specifying cds=on). If you want to
change this behavior, edit the file /etc/default/vxdg and set the attribute-value
pair cds=off in this file before creating a new disk group.
You can also use the following command to set this attribute for a disk group:
This creates a disk group, newdg, which can be imported by Veritas Volume
Manager 4.1. Note that while this disk group can be imported on the VxVM 4.1
system, attempts to use features from Veritas Volume Manager 5.0 or later releases
will fail.
You can also use the vxdiskadd command to add a disk to a disk group. Enter the
following:
# vxdiskadd c1t1d0
where c1t1d0 is the device name of a disk that is not currently assigned to a disk
group. The command dialog is similar to that described for the vxdiskadm
command.
See “Adding a disk to VxVM” on page 113.
For example, to remove mydg02 from the disk group mydg, enter the following:
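A sketch of the command:
# vxdg -g mydg rmdisk mydg02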
If the disk has subdisks on it when you try to remove it, the following error message
is displayed:
Using the -k option lets you remove the disk even if it has subdisks.
See the vxdg(1M) manual page.
After you remove the disk from its disk group, you can (optionally) remove it from
VxVM control completely. Enter the following:
# vxdiskunsetup devicename
For example, to remove the disk c1t0d0s2 from VxVM control, enter the following:
# vxdiskunsetup c1t0d0s2
You can remove a disk on which some subdisks of volumes are defined. For
example, you can consolidate all the volumes onto one disk. If you use vxdiskadm
to remove a disk, you can choose to move volumes off that disk. To do this, run
vxdiskadm and select Remove a disk from the main menu.
home usrvol
If you choose y, all volumes are moved off the disk, if possible. Some volumes may
not be movable. The most common reasons why a volume may not be movable
are as follows:
■ There is not enough space on the remaining disks.
■ Plexes or striped subdisks cannot be allocated on different disks from existing
plexes or striped subdisks in the volume.
If vxdiskadm cannot move some volumes, you may need to remove some plexes
from some disks to free more space before proceeding with the disk removal
operation.
Warning: This procedure does not save the configurations nor data on the disks.
You can also move a disk by using the vxdiskadm command. Select Remove a disk
from the main menu, and then select Add or initialize a disk.
To move disks and preserve the data on these disks, along with VxVM objects,
such as volumes:
See “Moving objects between disk groups” on page 277.
3 From the vxdiskadm main menu, select Remove access to (deport) a disk
group .
4 At the prompt, enter the name of the disk group to be deported (in the
following example it is newdg):
5 At the following prompt, enter y if you intend to remove the disks in this disk
group:
After the disk group is deported, the vxdiskadm utility displays the following
message:
7 At the following prompt, indicate whether you want to disable another disk
group (y) or return to the vxdiskadm main menu (n):
You can use the following vxdg command to deport a disk group:
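A sketch of the command (diskgroup is a placeholder):
# vxdg deport diskgroup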
# vxdisk -s list
2 From the vxdiskadm main menu, select Enable access to (import) a disk
group.
3 At the following prompt, enter the name of the disk group to import (in this
example, newdg):
When the import finishes, the vxdiskadm utility displays the following success
message:
4 At the following prompt, indicate whether you want to import another disk
group (y) or return to the vxdiskadm main menu (n):
You can also use the following vxdg command to import a disk group:
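A sketch of the command (diskgroup is a placeholder):
# vxdg import diskgroup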
You can also import the disk group as a shared disk group.
See “Importing disk groups as shared” on page 478.
VxVM allocates minor numbers of shared disk groups only from the shared pool, and VxVM allocates minor numbers
of private disk groups only from the private pool. If you import a private disk
group as a shared disk group or vice versa, the device minor numbers are
re-allocated from the correct pool. The disk group is dynamically reminored.
By default, private minor numbers range from 0-32999, and shared minor numbers
start from 33000. You can change the division, if required. For example, you can
set the range for shared minor numbers to start from a lower number. This range
provides more minor numbers for shared disk groups and fewer minor numbers
for private disk groups.
Normally, the minor numbers in private and shared pools are sufficient, so there
is no need to make changes to the division.
Note: To make the new division take effect, you must run vxdctl enable or restart
vxconfigd after the tunable is changed in the defaults file. The division on all the
cluster nodes must be exactly the same to prevent node failures for node join,
volume creation, or disk group import operations.
sharedminorstart=20000
You cannot set the shared minor numbers to start at less than 1000. If
sharedminorstart is set to a value between 0 and 999, the division of private
minor numbers and shared disk group minor numbers is set to 1000. The
value of 0 disables dynamic renumbering.
2 Run the following command:
# vxdctl enable
In certain scenarios, you may need to disable the division between shared minor
numbers and private minor numbers. For example, you may need to prevent the
device minor numbers from being changed when you upgrade from a previous
release. In this case, disable the dynamic reminoring before you install the new
VxVM package.
sharedminorstart=0
# vxdctl enable
3 Move all the disks to the target system and perform the steps necessary
(system-dependent) for the target system and VxVM to recognize the new
disks.
This can require a reboot, in which case the vxconfigd daemon is restarted
and recognizes the new disks. If you do not reboot, use the command vxdctl
enable to restart the vxconfigd program so VxVM also recognizes the disks.
4 Import (enable local access to) the disk group on the target system with this
command:
Warning: All disks in the disk group must be moved to the other system. If
they are not moved, the import fails.
5 By default, VxVM enables and starts any disabled volumes after the disk
group is imported.
See “Setting the automatic recovery of volumes” on page 244.
If the automatic volume recovery feature is turned off, start all volumes with
the following command:
You can also move disks from a system that has crashed. In this case, you
cannot deport the disk group from the source system. When a disk group is
created or imported on a system, that system writes a lock on all disks in the
disk group.
Warning: The purpose of the lock is to ensure that SAN-accessed disks are
not used by both systems at the same time. If two systems try to access the
same disks at the same time, this must be managed using software such as
the clustering functionality of VxVM. Otherwise, data and configuration
information stored on the disk may be corrupted, and may become unusable.
The next message indicates that the disk group does not contain any valid disks
(not that it does not contain any disks):
The disks may be considered invalid due to a mismatch between the host ID in
their configuration copies and that stored in the /etc/vx/volboot file.
To clear locks on a specific set of devices, use the following command:
A disk group can be imported successfully if all the disks are accessible that were
visible when the disk group was last imported successfully. However, sometimes
you may need to specify the -f option to forcibly import a disk group if some disks
are not available. If the import operation fails, an error message is displayed.
The following error message indicates a fatal error that requires hardware repair
or the creation of a new disk group, and recovery of the disk group configuration
and data:
If some of the disks in the disk group have failed, you can force the disk group to
be imported by specifying the -f option to the vxdg import command:
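A sketch of the command (diskgroup is a placeholder):
# vxdg -f import diskgroup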
Warning: Be careful when using the -f option. It can cause the same disk group
to be imported twice from different sets of disks. This can cause the disk group
configuration to become inconsistent.
Because using the -f option to force the import of an incomplete disk group counts
as a successful import, an incomplete disk group may subsequently be imported
without this option being specified. This may not be what you expect.
You can also import the disk group as a shared disk group.
See “Importing disk groups as shared” on page 478.
These operations can also be performed using the vxdiskadm utility. To deport a
disk group using vxdiskadm, select Remove access to (deport) a disk group
from the main menu. To import a disk group, select Enable access to (import)
a disk group. The vxdiskadm import operation checks for host import locks and
prompts to see if you want to clear any that are found. It also starts volumes in
the disk group.
Note: The default policy ensures that a small number of disk groups can be merged
successfully between a set of machines. However, where disk groups are merged
automatically using failover mechanisms, select ranges that avoid overlap.
To view the base minor number for an existing disk group, use the vxprint
command as shown in the following examples for the disk group, mydg:
To set a base volume device minor number for a disk group that is being created,
use the following command:
For example, the following command creates the disk group, newdg, that includes
the specified disks, and has a base minor number of 30000:
If a disk group already exists, you can use the vxdg reminor command to change
its base minor number:
For example, the following command changes the base minor number to 30000
for the disk group, mydg:
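A sketch of the command:
# vxdg -g mydg reminor 30000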
If a volume is open, its old device number remains in effect until the system is
rebooted or until the disk group is deported and re-imported. If you close the open
volume, you can run vxdg reminor again to allow the renumbering to take effect
without rebooting or re-importing.
An example of where it is necessary to change the base minor number is for a
cluster-shareable disk group. The volumes in a shared disk group must have the
same minor number on all the nodes. If there is a conflict between the minor
numbers when a node attempts to join the cluster, the join fails. You can use the
reminor operation on the nodes that are in the cluster to resolve the conflict. In
a cluster where more than one node is joined, use a base minor number which
does not conflict on any node.
Note: Such a disk group may still not be importable by VxVM 4.0 on Linux with a
pre-2.6 kernel if it would increase the number of minor numbers on the system
that are assigned to volumes to more than 4079, or if the number of available
extended major numbers is smaller than 15.
You can use the following command to discover the maximum number of volumes
that are supported by VxVM on a Linux host:
# cat /proc/sys/vxvm/vxio/vol_max_volumes
4079
# vxdisk list
The vxdisk updateudid command uses the current value of the UDID that is stored in the Device
Discovery Layer (DDL) database to correct the value in the private region. The -f
option must be specified if VxVM has not set the udid_mismatch flag for a disk.
For example, the following command updates the UDIDs for the disks c2t66d0s2
and c2t67d0s2:
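An illustrative sketch of the command:
# vxdisk updateudid c2t66d0s2 c2t67d0s2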
This form of the command allows only cloned disks to be imported. All non-cloned
disks remain unimported.
If the clone_disk flag is set on a disk, this indicates the disk was previously
imported into a disk group with the udid_mismatch flag set.
The -o updateid option can be specified to write new identification attributes to
the disks, and to set the clone_disk flag on the disks. (The vxdisk set clone=on
command can also be used to set the flag.) However, the import fails if multiple
copies of one or more cloned disks exist. In this case, you can use the following
command to tag all the disks in the disk group that are to be imported:
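For example, a tag named t1 could be applied with vxdisk settag (a sketch; the tag value v1 and the device names are placeholders):
# vxdisk settag t1=v1 c2t66d0s2 c2t67d0s2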
# vxdisk listtag
If you have already imported the non-cloned disks in a disk group, you can use
the -n and -t option to specify a temporary name for the disk group containing
the cloned disks:
# vxdisk listtag
The following command ensures that configuration database copies and kernel
log copies are maintained for all disks in the disk group mydg that are tagged as
t1:
The disks for which such metadata is maintained can be seen by using this
command:
Alternatively, the following command can be used to ensure that a copy of the
metadata is kept with a disk:
To import the cloned disks, they must be assigned a new disk group name, and
their UDIDs must be updated:
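A sketch of such an import (newdg is the new disk group name; verify the option names against the vxdg(1M) manual page):
# vxdg -n newdg -o useclonedev=on -o updateid import mydg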
Note that the state of the imported cloned disks has changed from online
udid_mismatch to online clone_disk.
In the next example, none of the disks (neither cloned nor non-cloned) have been
imported:
To import only the cloned disks into the mydg disk group:
In the next example, a cloned disk (BCV device) from an EMC Symmetrix DMX
array is to be imported. Before the cloned disk, EMC0_27, has been split off from
the disk group, the vxdisk list command shows that it is in the error
udid_mismatch state:
After updating VxVM’s information about the disk by running the vxdisk
scandisks command, the cloned disk is in the online udid_mismatch state:
The following command imports the cloned disk into the new disk group newdg,
and updates the disk’s UDID:
# vxdisk listtag
To import the cloned disks that are tagged as t1, they must be assigned a new
disk group name, and their UDIDs must be updated:
As the cloned disk EMC0_15 is not tagged as t1, it is not imported. Note that the
state of the imported cloned disks has changed from online udid_mismatch to
online clone_disk.
In the next example, none of the disks (neither cloned nor non-cloned) have been
imported:
To import only the cloned disks that have been tagged as t1 into the mydg disk
group:
As in the previous example, the cloned disk EMC0_15 is not tagged as t1, and so it
is not imported.
After DDL recognizes the LUN, turn on name persistence using the following
command:
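A sketch of the command (the naming scheme value shown, ebn, is only an example; verify the attributes against the vxddladm(1M) manual page):
# vxddladm set namingscheme=ebn persistence=yes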
If the -t option is included, the import is temporary and does not persist across
reboots. In this case, the stored name of the disk group remains unchanged on its
original host, but the disk group is known by the name specified by newdg to the
importing host. If the -t option is not used, the name change is permanent.
For example, this command temporarily renames the disk group, mydg, as mytempdg
on import:
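An illustrative sketch of the command:
# vxdg -t -n mytempdg import mydg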
When renaming on deport, you can specify the -h hostname option to assign a
lock to an alternate host. This ensures that the disk group is automatically
imported when the alternate host reboots.
For example, this command renames the disk group, mydg, as myexdg, and deports
it to the host, jingo:
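An illustrative sketch of the command:
# vxdg -h jingo -n myexdg deport mydg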
You cannot use this method to rename the boot disk group because it contains
volumes that are in use by mounted file systems (such as /). To rename the boot
disk group, you must first unmirror and unencapsulate the root disk, and then
re-encapsulate and remirror the root disk in a different disk group. This disk
group becomes the new boot disk group.
To temporarily move the boot disk group, bootdg, from one host to another (for
repair work on the root volume, for example) and then move it back
1 On the original host, identify the disk group ID of the bootdg disk group to
be imported with the following command:
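A sketch of the command (the -s option requests the summary format shown in the output below):
# vxdisk -g bootdg -s list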
dgname: rootdg
dgid: 774226267.1025.tweety
In this example, the administrator has chosen to name the boot disk group
as rootdg. The ID of this disk group is 774226267.1025.tweety.
This procedure assumes that all the disks in the boot disk group are accessible
by both hosts.
2 Shut down the original host.
3 On the importing host, import and rename the rootdg disk group with this
command:
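A sketch of the command (newrootdg is a placeholder for the temporary name chosen for the imported group):
# vxdg -tC -n newrootdg import 774226267.1025.tweety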
The -t option indicates a temporary import name, and the -C option clears
import locks. The -n option specifies an alternate name for the rootdg being
imported so that it does not conflict with the existing rootdg. diskgroup is
the disk group ID of the disk group being imported (for example,
774226267.1025.tweety).
If a reboot or crash occurs at this point, the temporarily imported disk group
becomes unimported and requires a reimport.
4 After the necessary work has been done on the imported disk group, deport
it back to its original host with this command:
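A sketch of the command (hostname is a placeholder, as described below):
# vxdg -h hostname deport 774226267.1025.tweety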
Here hostname is the name of the system whose rootdg is being returned
(the system name can be confirmed with the command uname -n).
This command removes the imported disk group from the importing host
and returns locks to its original host. The original host can then automatically
import its boot disk group at the next reboot.
Figure 6-1 shows a 2-node cluster with node 0, a fibre channel switch and disk
enclosure enc0 in building A, and node 1, another switch and enclosure enc1 in
building B.
(In the figure, the two nodes are connected by a redundant private network; enclosure enc0 is in building A and enclosure enc1 is in building B.)
When the network links are restored, attempting to reattach the missing disks to
the disk group on Node 0, or to re-import the entire disk group on either node,
fails. VxVM increments the serial ID in the disk media record of each imported
disk in all the disk group configuration databases on those disks, and also in the
private region of each imported disk. The value that is stored in the configuration
database represents the serial ID that the disk group expects a disk to have. The
serial ID that is stored in a disk’s private region is considered to be its actual value.
VxVM detects a serial split brain when the actual serial ID of a disk that is being
attached does not match the serial ID in the disk group configuration database of
the imported disk group.
If some disks went missing from the disk group (due to physical disconnection or
power failure) and those disks were imported by another host, the serial IDs for
the disks in their copies of the configuration database, and also in each disk’s
private region, are updated separately on that host. When the disks are
subsequently re-imported into the original shared disk group, the actual serial
IDs on the disks do not agree with the expected values from the configuration
copies on other disks in the disk group.
Depending on what happened to the different portions of the split disk group,
there are two possibilities for resolving inconsistencies between the configuration
databases:
■ If the other disks in the disk group were not imported on another host, VxVM
resolves the conflicting values of the serial IDs by using the version of the
configuration database from the disk with the greatest value for the updated
ID (shown as update_id in the output from the vxdg list diskgroup
command).
Figure 6-2 shows an example of a serial split brain condition that can be
resolved automatically by VxVM.
Figure 6-2 Example of a serial split brain condition that can be resolved
automatically
The figure can be summarized as follows:
1. Disk A is imported on a separate host; Disk B is not imported. The actual and
expected serial IDs are updated only on Disk A. The actual serial IDs become
Disk A = 1 and Disk B = 0. Disk A's configuration copy records Expected A = 1
and Expected B = 0, while Disk B's configuration copy still records Expected A = 0
and Expected B = 0.
2. The disk group is re-imported on the cluster. The configuration copy on Disk A
is used to correct the configuration copy on Disk B, as the actual value of the
updated ID on Disk A is the greatest.
■ If the other disks were also imported on another host, no disk can be considered
to have a definitive copy of the configuration database.
Figure 6-3 shows an example of a true serial split brain condition that cannot
be resolved automatically by VxVM.
Figure 6-3 Example of a true serial split brain condition that cannot be resolved
automatically
In this case, the disk group import fails, and the vxdg utility outputs error messages
similar to the following before exiting:
The import does not succeed even if you specify the -f flag to vxdg.
Although it is usually possible to resolve this conflict by choosing the version of
the configuration database with the highest valued configuration ID (shown as
the value of seqno in the output from the vxdg list diskgroup| grep config
command), this may not be the correct thing to do in all circumstances.
See “Correcting conflicting configuration information” on page 268.
See “About sites and remote mirrors” on page 489.
Note: The disk group must have a version number of at least 110.
The following is sample output from running vxsplitlines on the disk group
newdg:
# vxsplitlines -v -g newdg
To see the configuration copy from a disk, enter the following command:
To import the disk group with the configuration copy from a disk, enter the
following command:
Pool 0
DEVICE DISK DISK ID DISK PRIVATE PATH
newdg1 c2t5d0s2 1215378871.300.vm2850lx13 /dev/vx/rdmp/c2t5d0s2
newdg2 c2t6d0s2 1215378871.300.vm2850lx13 /dev/vx/rdmp/c2t6d0s2
Pool 1
DEVICE DISK DISK ID DISK PRIVATE PATH
newdg3 c2t7d0s2 1215378871.294.vm2850lx13 /dev/vx/rdmp/c2t7d0s2
If you do not specify the -v option, the command has the following output:
All the disks in the first pool have the same config copies
All the disks in the second pool may not have the same config copies
Number of disks in the first pool: 1
Number of disks in the second pool: 1
To import the disk group with the configuration copy from the first pool, enter
the following command:
To import the disk group with the configuration copy from the second pool, enter
the following command:
In this example, the disk group has four disks, and is split so that two disks appear
to be on each side of the split.
You can specify the -c option to vxsplitlines to print detailed information about
each of the disk IDs from the configuration copy on a disk specified by its disk
access name:
Please note that even though some disks ssb ids might match
that does not necessarily mean that those disks’ config copies
have all the changes. From some other configuration copies,
those disks’ ssb ids might not match. To see the configuration
from this disk, run
/etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/dmp/c2t6d0s2
Based on your knowledge of how the serial split brain condition came about, you
must choose one disk’s configuration to be used to import the disk group. For
example, the following command imports the disk group using the configuration
copy that is on side 0 of the split:
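A sketch of such a command (it assumes that the -o selectcp option takes the disk ID of a disk from the chosen side, here the disk ID shown for Pool 0):
# vxdg -o selectcp=1215378871.300.vm2850lx13 import newdg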
When you have selected a preferred configuration copy, and the disk group has
been imported, VxVM resets the serial IDs to 0 for the imported disks. The actual
and expected serial IDs for any disks in the disk group that are not imported at
this time remain unaltered.
■ The join operation removes all VxVM objects from an imported disk group
and moves them to an imported target disk group. The source disk group is
removed when the join is complete.
Figure 6-6 shows the join operation.
Warning: Before moving volumes between disk groups, stop all applications that
are accessing the volumes, and unmount all file systems that are configured on
these volumes.
■ Splitting or moving a volume into a different disk group changes the volume’s
record ID.
■ The operation can only be performed on the master node of a cluster if either
the source disk group or the target disk group is shared.
■ In a cluster environment, disk groups involved in a move or join must both be
private or must both be shared.
■ If a cache object or volume set that is to be split or moved uses ISP volumes,
the storage pool that contains these volumes must also be specified.
The following example lists the objects that would be affected by moving volume
vol1 from disk group mydg to newdg:
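An illustrative sketch of the command:
# vxdg listmove mydg newdg vol1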
However, the following command produces an error because only a part of the
volume vol1 is configured on the disk mydg01:
Specifying the -o expand option, as shown below, ensures that the list of objects
to be moved includes the other disks (in this case, mydg05) that are configured in
vol1:
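An illustrative sketch of the command:
# vxdg -o expand listmove mydg newdg mydg01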
You can use the vxprint command on a volume to examine the
configuration of its associated DCO volume.
If you use the vxassist command to create both a volume and its DCO, or the
vxsnap prepare command to add a DCO to a volume, the DCO plexes are
automatically placed on different disks from the data plexes of the parent volume.
In previous releases, version 0 DCO plexes were placed on the same disks as the
data plexes for convenience when performing disk group split and move operations.
As version 20 DCOs support dirty region logging (DRL) in addition to Persistent
FastResync, it is preferable for the DCO plexes to be separated from the data
plexes. This improves the performance of I/O from/to the volume, and provides
resilience for the DRL logs.
Figure 6-7 shows some instances in which it is not possible to split a disk group
because of the location of the DCO plexes on the disks of the disk group.
For more information about snapshots and DCO volumes, see the Veritas Storage
Foundation Advanced Features Administrator's Guide.
See “Specifying storage for version 20 DCO plexes” on page 382.
See “FastResync” on page 63.
See “Volume snapshots” on page 61.
Figure 6-7 Examples of disk groups that can and cannot be split
The -o expand option ensures that the objects that are actually moved include
all other disks containing subdisks that are associated with the specified objects
or with objects that they contain.
The default behavior of vxdg when moving licensed disks in an EMC array is to
perform an EMC disk compatibility check for each disk involved in the move. If
the compatibility checks succeed, the move takes place. vxdg then checks again
to ensure that the configuration has not changed since it performed the
compatibility check. If the configuration has changed, vxdg attempts to perform
the entire move again.
Note: You should only use the -o override and -o verify options if you are
using an EMC array with a valid TimeFinder license. If you specify one of these
options and do not meet the array and license requirements, a warning message
is displayed and the operation is ignored.
The -o override option enables the move to take place without any EMC checking.
The -o verify option returns the access names of the disks that would be moved
but does not perform the move.
The following output from vxprint shows the contents of disk groups rootdg and
mydg.
The output includes two utility fields, TUTIL0 and PUTIL0. VxVM creates these
fields to manage objects and communications between different commands and
Symantec products. The TUTIL0 values are temporary; they are not maintained
on reboot. The PUTIL0 values are persistent; they are maintained on reboot.
See “Changing subdisk attributes” on page 299.
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm rootdg06 c1t98d0s2 - 17678493 - - - -
dg mydg mydg - - - - - -
dm mydg01 c0t1d0s2 - 17678493 - - - -
dm mydg05 c1t96d0s2 - 17678493 - - - -
dm mydg07 c1t99d0s2 - 17678493 - - - -
dm mydg08 c1t100d0s2 - 17678493 - - - -
v vol1 fsgen ENABLED 2048 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3591 - ACTIVE - -
sd mydg01-01 vol1-01 ENABLED 3591 0 - - -
pl vol1-02 vol1 ENABLED 3591 - ACTIVE - -
sd mydg05-01 vol1-02 ENABLED 3591 0 - - -
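The move itself is performed with vxdg move; a sketch that matches the output shown below:
# vxdg -o expand move mydg rootdg mydg01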
By default, VxVM automatically recovers and starts the volumes following a disk
group move. If you have turned off the automatic recovery feature, volumes are
disabled after a move. Use the following commands to recover and restart the
volumes in the target disk group:
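A sketch of these commands (targetdg is a placeholder for the target disk group):
# vxrecover -g targetdg -m
# vxvol -g targetdg startall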
The output from vxprint after the move shows that not only mydg01 but also
volume vol1 and mydg05 have moved to rootdg, leaving only mydg07 and mydg08
in disk group mydg:
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm mydg01 c0t1d0s2 - 17678493 - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm mydg05 c1t96d0s2 - 17678493 - - - -
dm rootdg06 c1t98d0s2 - 17678493 - - - -
v vol1 fsgen ENABLED 2048 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3591 - ACTIVE - -
sd mydg01-01 vol1-01 ENABLED 3591 0 - - -
pl vol1-02 vol1 ENABLED 3591 - ACTIVE - -
sd mydg05-01 vol1-02 ENABLED 3591 0 - - -
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm rootdg01 c0t1d0s2 - 17678493 - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm rootdg05 c1t96d0s2 - 17678493 - - - -
dm rootdg06 c1t98d0s2 - 17678493 - - - -
dm rootdg07 c1t99d0s2 - 17678493 - - - -
dm rootdg08 c1t100d0s2 - 17678493 - - - -
v vol1 fsgen ENABLED 2048 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3591 - ACTIVE - -
sd rootdg01-01 vol1-01 ENABLED 3591 0 - - -
The following command removes disks rootdg07 and rootdg08 from rootdg to
form a new disk group, mydg:
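An illustrative sketch of the command:
# vxdg split rootdg mydg rootdg07 rootdg08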
By default, VxVM automatically recovers and starts the volumes following a disk
group split. If you have turned off the automatic recovery feature, volumes are
disabled after a split. Use the following commands to recover and restart the
volumes in the target disk group:
The output from vxprint after the split shows the new disk group, mydg:
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm rootdg01 c0t1d0s2 - 17678493 - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm rootdg05 c1t96d0s2 - 17678493 - - - -
dm rootdg06 c1t98d0s2 - 17678493 - - - -
v vol1 fsgen ENABLED 2048 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3591 - ACTIVE - -
sd rootdg01-01 vol1-01 ENABLED 3591 0 - - -
pl vol1-02 vol1 ENABLED 3591 - ACTIVE - -
sd rootdg05-01 vol1-02 ENABLED 3591 0 - - -
Note: You cannot specify rootdg as the source disk group for a join operation.
The following output from vxprint shows the contents of the disk groups rootdg
and mydg.
The output includes two utility fields, TUTIL0 and PUTIL0. VxVM creates these
fields to manage objects and communications between different commands and
Symantec products. The TUTIL0 values are temporary; they are not maintained
on reboot. The PUTIL0 values are persistent; they are maintained on reboot.
See “Changing subdisk attributes” on page 299.
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm rootdg01 c0t1d0s2 - 17678493 - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm rootdg07 c1t99d0s2 - 17678493 - - - -
dm rootdg08 c1t100d0s2 - 17678493 - - - -
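The join itself is performed with a command of the following form (a sketch; mydg is the source disk group and rootdg is the target):
# vxdg join mydg rootdg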
By default, VxVM automatically recovers and starts the volumes following a disk
group join. If you have turned off the automatic recovery feature, volumes are
disabled after a join. Use the following commands to recover and restart the
volumes in the target disk group:
The output from vxprint after the join shows that disk group mydg has been
removed:
# vxprint
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm mydg01 c0t1d0s2 - 17678493 - - - -
dm rootdg02 c1t97d0s2 - 17678493 - - - -
dm rootdg03 c1t112d0s2 - 17678493 - - - -
dm rootdg04 c1t114d0s2 - 17678493 - - - -
dm mydg05 c1t96d0s2 - 17678493 - - - -
dm rootdg06 c1t98d0s2 - 17678493 - - - -
dm rootdg07 c1t99d0s2 - 17678493 - - - -
dm rootdg08 c1t100d0s2 - 17678493 - - - -
v vol1 fsgen ENABLED 2048 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3591 - ACTIVE - -
sd mydg01-01 vol1-01 ENABLED 3591 0 - - -
pl vol1-02 vol1 ENABLED 3591 - ACTIVE - -
sd mydg05-01 vol1-02 ENABLED 3591 0 - - -
Deporting a disk group does not actually remove the disk group. It disables use
of the disk group by the system. Disks in a deported disk group can be reused,
reinitialized, added to other disk groups, or imported for use on other systems.
Use the vxdg import command to re-enable access to the disk group.
Destroying a disk group
When a disk group is destroyed, the disks that are released can be re-used in other
disk groups.
The disk must be specified by its disk access name, such as c0t12d0s2.
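For example, a sketch of the command for that disk:
# vxdisk -s list c0t12d0s2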
Examine the output from the command for a line similar to the following that
specifies the disk group ID.
dgid: 963504895.1075.bass
Until the disk group is upgraded, it may still be deported back to the release from
which it was imported.
To use the features in the upgraded release, you must explicitly upgrade the
existing disk groups. There is no "downgrade" facility. After you upgrade a disk
group, the disk group is incompatible with earlier releases of VxVM that do not
support the new version. For disk groups that are shared among multiple servers
for failover or for off-host processing, verify that the VxVM release on all potential
hosts that may use the disk group supports the disk group version to which you
are upgrading.
After upgrading to Storage Foundation 5.1SP1, you must upgrade any existing
disk groups that are organized by ISP. Without the version upgrade, configuration
query operations continue to work fine. However, configuration change operations
will not function correctly.
To list the version of a disk group, use this command:
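A sketch, using mydg as an example disk group:
# vxdg list mydg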
You can also determine the disk group version by using the vxprint command
with the -l format option.
To upgrade a disk group to the highest version supported by the release of VxVM
that is currently running, use this command:
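A sketch, again using mydg as an example:
# vxdg upgrade mydg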
The vxconfigd daemon reads the contents of the /etc/vx/darecs file to locate the disks and the
configuration databases for their disk groups.
The /etc/vx/darecs file is also used to store definitions of foreign devices that
are not autoconfigurable. Such entries may be added by using the vxddladm
addforeign command.
# vxnotify -f
# vxnotify -s -i
Note: When you upgrade the ISP disk group, all intent and storage pools
information is lost. Only upgrade the disk group when this condition is acceptable.
# vxprint
Sample output:
st mypool - - - - DATA - -
dm mydg1 ams_wms0_358 - 4120320 - - - -
In the sample output, st mypool indicates that mydg is an ISP disk group.
To upgrade an ISP disk group
◆ Upgrade the ISP disk group using the following command:
No configuration changes are allowed on the ISP volumes in the disk group until
the disk group is upgraded. Attempting any operation such as grow, shrink, add
mirror, or disk group split/join on an ISP volume gives the
following error:
Note: Non-ISP or VxVM volumes in the ISP disk group are not affected.
■ About subdisks
■ Creating subdisks
■ Moving subdisks
■ Splitting subdisks
■ Joining subdisks
■ Removing subdisks
■ About plexes
■ Creating plexes
■ Detaching plexes
■ Reattaching plexes
■ Moving plexes
About subdisks
Subdisks are the low-level building blocks in a Veritas Volume Manager (VxVM)
configuration that are required to create plexes and volumes.
See “Creating a volume” on page 319.
Creating subdisks
Use the vxmake command to create VxVM objects, such as subdisks:
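The general form is sketched below (subdisk, diskname, offset, and length are described next):
# vxmake [-g diskgroup] sd subdisk diskname,offset,length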
where subdisk is the name of the subdisk, diskname is the disk name, offset is the
starting point (offset) of the subdisk within the disk, and length is the length of
the subdisk.
For example, to create a subdisk named mydg02-01 in the disk group, mydg, that
starts at the beginning of disk mydg02 and has a length of 8000 sectors, use the
following command:
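An illustrative sketch of the command:
# vxmake -g mydg sd mydg02-01 mydg02,0,8000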
Note: As for all VxVM commands, the default size unit is s, representing a sector.
Add a suffix, such as k for kilobyte, m for megabyte or g for gigabyte, to change
the unit of size. For example, 500m would represent 500 megabytes.
If you intend to use the new subdisk to build a volume, you must associate the
subdisk with a plex.
See “Associating subdisks with plexes” on page 295.
Subdisks for all plex layouts (concatenated, striped, RAID-5) are created the same
way.
# vxprint -st
You can display complete information about a particular subdisk by using this
command:
For example, the following command displays all information for subdisk
mydg02-01 in the disk group, mydg:
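An illustrative sketch of the command:
# vxprint -g mydg -l mydg02-01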
Subdisk: mydg02-01
info: disk=mydg02 offset=0 len=205632
assoc: vol=mvol plex=mvol-02 (offset=0)
flags: enabled
device: device=c0t11d0s2 path=/dev/vx/dmp/c0t11d0s2 diskdev=32/68
Moving subdisks
Moving a subdisk copies the disk space contents of a subdisk onto one or more
other subdisks. If the subdisk being moved is associated with a plex, then the data
stored on the original subdisk is copied to the new subdisks. The old subdisk is
dissociated from the plex, and the new subdisks are associated with the plex. The
association is at the same offset within the plex as the source subdisk. To move
a subdisk, use the following command:
For example, if mydg03 in the disk group, mydg, is to be evacuated, and mydg12 has
enough room on two of its subdisks, use the following command:
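A sketch of such a command (the subdisk names are placeholders for the source subdisk on mydg03 and the two destination subdisks on mydg12):
# vxsd -g mydg mv mydg03-01 mydg12-01 mydg12-02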
For the subdisk move to work correctly, the following conditions must be met:
■ The subdisks involved must be the same size.
■ The subdisk being moved must be part of an active plex on an active (ENABLED)
volume.
■ The new subdisk must not be associated with any other plex.
Subdisks can also be moved manually after hot-relocation.
See “Moving relocated subdisks” on page 439.
Splitting subdisks
Splitting a subdisk divides an existing subdisk into two separate subdisks. To split
a subdisk, use the following command:
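The general form is sketched below:
# vxsd [-g diskgroup] -s size split subdisk newsd1 newsd2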
where subdisk is the name of the original subdisk, newsd1 is the name of the first
of the two subdisks to be created and newsd2 is the name of the second subdisk
to be created.
The -s option is required to specify the size of the first of the two subdisks to be
created. The second subdisk occupies the remaining space used by the original
subdisk.
If the original subdisk is associated with a plex before the task, upon completion
of the split, both of the resulting subdisks are associated with the same plex.
To split the original subdisk into more than two subdisks, repeat the previous
command as many times as necessary on the resulting subdisks.
For example, to split subdisk mydg03-02, with size 2000 megabytes into subdisks
mydg03-02, mydg03-03, mydg03-04 and mydg03-05, each with size 500 megabytes,
all in the disk group, mydg, use the following commands:
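One possible sequence (a sketch; it first splits the subdisk in half and then splits each half again):
# vxsd -g mydg -s 1000m split mydg03-02 mydg03-02 mydg03-03
# vxsd -g mydg -s 500m split mydg03-02 mydg03-02 mydg03-04
# vxsd -g mydg -s 500m split mydg03-03 mydg03-03 mydg03-05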
Joining subdisks
Joining subdisks combines two or more existing subdisks into one subdisk. To join
subdisks, the subdisks must be contiguous on the same disk. If the selected subdisks
are associated, they must be associated with the same plex, and be contiguous in
that plex. To join several subdisks, use the following command:
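The general form is sketched below:
# vxsd [-g diskgroup] join subdisk1 subdisk2 ... new_subdisk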
For example, to create the plex home-1 and associate subdisks mydg02-01,
mydg02-00, and mydg02-02 with plex home-1, all in the disk group, mydg, use the
following command:
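An illustrative sketch of the command:
# vxmake -g mydg plex home-1 sd=mydg02-01,mydg02-00,mydg02-02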
Subdisks are associated in order starting at offset 0. If you use this type of
command, you do not have to specify the multiple commands needed to create
the plex and then associate each of the subdisks with that plex. In this example,
the subdisks are associated to the plex in the order they are listed (after sd=). The
disk space defined as mydg02-01 is first, mydg02-00 is second, and mydg02-02 is
third. This method of associating subdisks is convenient during initial
configuration.
Subdisks can also be associated with a plex that already exists. To associate one
or more subdisks with an existing plex, use the following command:
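The general form is sketched below:
# vxsd [-g diskgroup] assoc plex subdisk1 [subdisk2 subdisk3 ...]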
If the plex is not empty, the new subdisks are added after any subdisks that are
already associated with the plex, unless the -l option is specified with the
command. The -l option associates subdisks at a specific offset within the plex.
The -l option is required if you previously created a sparse plex (that is, a plex
with portions of its address space that do not map to subdisks) for a particular
volume, and subsequently want to make the plex complete. To complete the plex,
create a subdisk of a size that fits the hole in the sparse plex exactly. Then,
associate the subdisk with the plex by specifying the offset of the beginning of
the hole in the plex, using the following command:
For example, the following command would insert the subdisk, mydg15-01, in the
plex, vol10-01, starting at an offset of 4096 blocks:
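An illustrative sketch of the command:
# vxsd -g mydg -l 4096 assoc vol10-01 mydg15-01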
Note: The subdisk must be exactly the right size. VxVM does not allow the space
defined for two subdisks to overlap within a plex.
For striped or RAID-5 plexes, use the following command to specify a column
number and column offset for the subdisk to be added:
If only one number is specified with the -l option for striped plexes, the number
is interpreted as a column number and the subdisk is associated at the end of the
column.
For example, the following command would add the subdisk, mydg11-01, to the
end of column 1 of the plex, vol02-01:
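A sketch of the command (a single number after -l is taken as the column number, as noted above):
# vxsd -g mydg -l 1 assoc vol02-01 mydg11-01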
The following example shows how to append three subdisks to the ends of the three
columns in a striped plex, vol-01, in the disk group, mydg:
If a subdisk is filling a “hole” in the plex (that is, some portion of the volume logical
address space is mapped by the subdisk), the subdisk is considered stale. If the
volume is enabled, the association operation regenerates data that belongs on the
subdisk. Otherwise, it is marked as stale and is recovered when the volume is
started.
Warning: Only one log subdisk can be associated with a plex. Because this log
subdisk is frequently written, care should be taken to position it on a disk that is
not heavily used. Placing a log subdisk on a heavily-used disk can degrade system
performance.
Warning: The version 20 DCO volume layout includes space for a DRL. Do not use
procedures that are intended for manipulating log subdisks with a volume that
has a version 20 DCO volume associated with it.
See “Preparing a volume for DRL and instant snapshots” on page 380.
where subdisk is the name to be used for the log subdisk. The plex must be
associated with a mirrored volume before dirty region logging takes effect.
For example, to associate a subdisk named mydg02-01 with a plex named vol01-02,
which is already associated with volume vol01 in the disk group, mydg, use the
following command:
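A sketch of the command (vxsd aslog associates the subdisk as a log subdisk of the plex):
# vxsd -g mydg aslog vol01-02 mydg02-01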
You can also add a log subdisk to an existing volume with the following command:
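A sketch of such a command (vol01 and mydg02 are shown as an example volume and disk):
# vxassist -g mydg addlog vol01 mydg02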
This command automatically creates a log subdisk within a log plex on the specified
disk for the specified volume.
For example, to dissociate a subdisk named mydg02-01 from the plex with which
it is currently associated in the disk group, mydg, use the following command:
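An illustrative sketch of the command:
# vxsd -g mydg dis mydg02-01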
You can additionally remove the dissociated subdisks from VxVM control using
the following form of the command:
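An illustrative sketch of the command:
# vxsd -g mydg -o rm dis mydg02-01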
Removing subdisks
To remove a subdisk, use the following command:
For example, to remove a subdisk named mydg02-01 from the disk group, mydg,
use the following command:
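An illustrative sketch of the command:
# vxedit -g mydg rm mydg02-01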
The vxedit command changes attributes of subdisks and other VxVM objects. To
change subdisk attributes, use the following command:
The subdisk fields you can change with the vxedit command include the following:
putiln Persistent utility field(s) used to manage objects and communication between
different commands and Symantec products.
putiln field attributes are maintained on reboot. putiln fields are organized
as follows:
tutiln field attributes are not maintained on reboot. tutiln fields are
organized as follows:
len Subdisk length. This value is a standard Veritas Volume Manager length number.
You can only change the length of a subdisk if the subdisk is disassociated. You
cannot increase the length of a subdisk to the point where it extends past the
end of the disk or it overlaps a reserved disk region on another disk.
comment Comment.
For example, to change the comment field of a subdisk named mydg02-01 in the
disk group, mydg, use the following command:
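A sketch of the command (the comment text is a placeholder):
# vxedit -g mydg set comment="subdisk comment" mydg02-01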
To prevent a particular subdisk from being associated with a plex, set the putil0
field to a non-null string, as shown in the following command:
About plexes
Plexes are logical groupings of subdisks that create an area of disk space
independent of physical disk size or other restrictions. Replication (mirroring) of
disk data is set up by creating multiple data plexes for a single volume. Each data
plex in a mirrored volume contains an identical copy of the volume data. Because
each data plex must reside on different disks from the other plexes, the replication
provided by mirroring prevents data loss in the event of a single-point
disk-subsystem failure. Multiple data plexes also provide increased data integrity
and reliability.
See “About subdisks” on page 292.
Creating plexes
Use the vxmake command to create VxVM objects, such as plexes. When creating
a plex, identify the subdisks that are to be associated with it:
To create a plex from existing subdisks, use the following command:
For example, to create a concatenated plex named vol01-02 from two existing
subdisks named mydg02-01 and mydg02-02 in the disk group, mydg, use the
following command:
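An illustrative sketch of the command:
# vxmake -g mydg plex vol01-02 sd=mydg02-01,mydg02-02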
To use a plex to build a volume, you must associate the plex with the volume.
See “Attaching and associating plexes” on page 306.
# vxprint -lp
To display detailed information about a specific plex, use the following command:
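A sketch of such a command (using the -t option that is described next; the disk group and plex names are placeholders):
# vxprint -g mydg -t vol01-02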
The -t option prints a single line of information about the plex. To list free plexes,
use the following command:
# vxprint -pt
The following section describes the meaning of the various plex states that may
be displayed in the STATE field of vxprint output.
Plex states
Plex states reflect whether or not plexes are complete and are consistent copies
(mirrors) of the volume contents. VxVM utilities automatically maintain the plex
state. However, if a plex that is associated with a volume should no longer receive
the writes that are applied to that volume, you can modify the state of the plex
yourself. For example, if a disk with a particular plex located on it begins to fail,
you can temporarily disable that plex.
A plex does not have to be associated with a volume. A plex can be created with
the vxmake plex command and be attached to a volume later.
VxVM utilities use plex states to:
■ indicate whether volume contents have been initialized to a known state
■ determine if a plex contains a valid copy (mirror) of the volume contents
■ track whether a plex was in active use at the time of a system failure
■ monitor operations on plexes
This section explains the individual plex states in detail.
See the Veritas Volume Manager Troubleshooting Guide.
Table 7-1 shows the states that may be associated with a plex.
State Description
ACTIVE A plex can be in the ACTIVE state in the following ways:
■ when the volume is started and the plex fully participates in normal
volume I/O (the plex contents change as the contents of the volume
change)
■ when the volume is stopped as a result of a system crash and the
plex is ACTIVE at the moment of the crash
DCOSNP This state indicates that a data change object (DCO) plex attached to
a volume can be used by a snapshot plex to create a DCO volume during
a snapshot operation.
EMPTY Volume creation sets all plexes associated with the volume to the
EMPTY state to indicate that the plex is not yet initialized.
IOFAIL The IOFAIL plex state is associated with persistent state logging. When
the vxconfigd daemon detects an uncorrectable I/O failure on an
ACTIVE plex, it places the plex in the IOFAIL state to exclude it from
the recovery selection process at volume start time.
This state indicates that the plex is out-of-date with respect to the
volume, and that it requires complete recovery. It is likely that one or
more of the disks associated with the plex should be replaced.
LOG The state of a dirty region logging (DRL) or RAID-5 log plex is always
set to LOG.
OFFLINE The vxmend off task indefinitely detaches a plex from a volume by
setting the plex state to OFFLINE. Although the detached plex
maintains its association with the volume, changes to the volume do
not update the OFFLINE plex. The plex is not updated until the plex
is put online and reattached with the vxplex att task. When this
occurs, the plex is placed in the STALE state, which causes its contents
to be recovered at the next vxvol start operation.
SNAPATT This state indicates a snapshot plex that is being attached by the
snapstart operation. When the attach is complete, the state for the
plex is changed to SNAPDONE. If the system fails before the attach
completes, the plex and all of its subdisks are removed.
SNAPDIS This state indicates a snapshot plex that is fully attached. A plex in
this state can be turned into a snapshot volume with the vxplex
snapshot command. If the system fails before the attach completes,
the plex is dissociated from the volume.
SNAPDONE The SNAPDONE plex state indicates that a snapshot plex is ready for
a snapshot to be taken using vxassist snapshot.
STALE If there is a possibility that a plex does not have the complete and
current volume contents, that plex is placed in the STALE state. Also,
if an I/O error occurs on a plex, the kernel stops using and updating
the contents of that plex, and the plex state is set to STALE.
TEMP Setting a plex to the TEMP state eases some plex operations that
cannot occur in a truly atomic fashion. For example, attaching a plex
to an enabled volume requires copying volume contents to the plex
before it can be considered fully attached.
A utility sets the plex state to TEMP at the start of such an operation
and to an appropriate state at the end of the operation. If the system
fails for any reason, a TEMP plex state indicates that the operation is
incomplete. A later vxvol start dissociates plexes in the TEMP
state.
TEMPRM A TEMPRM plex state is similar to a TEMP state except that at the
completion of the operation, the TEMPRM plex is removed. Some
subdisk operations require a temporary plex. Associating a subdisk
with a plex, for example, requires updating the subdisk with the volume
contents before actually associating the subdisk. This update requires
associating the subdisk with a temporary plex, marked TEMPRM, until
the operation completes and removes the TEMPRM plex.
If the system fails for any reason, the TEMPRM state indicates that
the operation did not complete successfully. A later operation
dissociates and removes TEMPRM plexes.
TEMPRMSD The TEMPRMSD plex state is used by vxassist when attaching new
data plexes to a volume. If the synchronization operation does not
complete, the plex and its subdisks are removed.
IOFAIL The plex was detached as a result of an I/O failure detected during
normal volume I/O. The plex is out-of-date with respect to the volume,
and in need of complete recovery. However, this condition also
indicates a likelihood that one of the disks in the system should be
replaced.
NODAREC No physical disk was found for one of the subdisks in the plex. This
implies either that the physical disk failed, making it unrecognizable,
or that the physical disk is no longer attached through a known access
path. The plex cannot be used until this condition is fixed, or the
affected subdisk is dissociated.
RECOVER A disk corresponding to one of the disk media records was replaced,
or was reattached too late to prevent the plex from becoming
out-of-date with respect to the volume. The plex required complete
recovery from another plex in the volume to synchronize its contents.
REMOVED Set in the disk media record when one of the subdisks associated with
the plex is removed. The plex cannot be used until this condition is
fixed, or the affected subdisk is dissociated.
DETACHED Maintenance is being performed on the plex. Any write request to the
volume is not reflected in the plex. A read request from the volume is
not satisfied from the plex. Plex operations and ioctl function calls
are accepted.
ENABLED The plex is online. A write request to the volume is reflected in the
plex. A read request from the volume is satisfied from the plex. If a
plex is sparse, this is indicated by the SPARSE modifier being displayed
in the output from the vxprint -t command.
For example, to attach a plex named vol01-02 to a volume named vol01 in the
disk group, mydg, use the following command:
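An illustrative sketch of the command:
# vxplex -g mydg att vol01 vol01-02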
If the volume does not already exist, associate one or more plexes to the volume
when you create the volume, using the following command:
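The general form is sketched below (fsgen is shown only as an example usage type):
# vxmake -g mydg -U fsgen vol volume plex=plex1,plex2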
You can also use the command vxassist mirror volume to add a data plex as a
mirror to an existing volume.
If a disk fails (for example, it has a head crash), use the vxmend command to take
offline all plexes that have associated subdisks on the affected disk. For example,
if plexes vol01-02 and vol02-02 in the disk group, mydg, had subdisks on a drive
to be repaired, use the following command to take these plexes offline:
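An illustrative sketch of the command:
# vxmend -g mydg off vol01-02 vol02-02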
This command places vol01-02 and vol02-02 in the OFFLINE state, and they
remain in that state until it is changed. The plexes are not automatically recovered
on rebooting the system.
Detaching plexes
To temporarily detach one data plex in a mirrored volume, use the following
command:
For example, to temporarily detach a plex named vol01-02 in the disk group,
mydg, and place it in maintenance mode, use the following command:
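An illustrative sketch of the command:
# vxplex -g mydg det vol01-02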
This command temporarily detaches the plex, but maintains the association
between the plex and its volume. However, the plex is not used for I/O. A plex
detached with the preceding command is recovered at system reboot. The plex
state is set to STALE, so that if a vxvol start command is run on the appropriate
volume (for example, on system reboot), the contents of the plex is recovered and
made ACTIVE.
When the plex is ready to return as an active part of its volume, it can be reattached
to the volume.
See “Reattaching plexes” on page 308.
Reattaching plexes
This section describes how to reattach plexes manually if the automatic
reattachment feature is disabled. This procedure may also be required for devices that are not
automatically reattached. For example, VxVM does not automatically reattach
plexes on site-consistent volumes.
When a disk has been repaired or replaced and is again ready for use, the plexes
must be put back online (plex state set to ACTIVE). To set the plexes to ACTIVE, use
one of the following procedures depending on the state of the volume.
■ If the volume is currently ENABLED, use the following command to reattach the
plex:
For example, for a plex named vol01-02 on a volume named vol01 in the disk
group, mydg, use the following command:
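An illustrative sketch of the command:
# vxplex -g mydg att vol01 vol01-02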
For example, to re-enable a plex named vol01-02 in the disk group, mydg, enter:
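A sketch of the command (vxmend on brings the plex back under VxVM control):
# vxmend -g mydg on vol01-02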
In this case, the state of vol01-02 is set to STALE. When the volume is next
started, the data on the plex is revived from another plex, and incorporated
into the volume with its state set to ACTIVE.
If the vxinfo command shows that the volume is unstartable, set one of the
plexes to CLEAN using the following command:
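A sketch of the command (vol01-01 is a placeholder for the plex that is known to be good):
# vxmend -g mydg fix clean vol01-01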
To disable automatic plex attachment, remove vxattachd from the start up scripts.
Disabling vxattachd disables the automatic reattachment feature for both plexes
and sites.
In a Cluster Volume Manager (CVM) cluster, the following considerations apply:
■ If the global detach policy is set, a storage failure from any node causes all
plexes on that storage to be detached globally. When the storage is connected
back to any node, the vxattachd daemon triggers reattaching the plexes on
the master node only.
■ The automatic reattachment functionality is local to a node. When enabled on
a node, all the disk groups imported on the node are monitored. If the automatic
reattachment functionality is disabled on a master node, the feature is disabled
on all shared disk groups and private disk groups imported on the master node.
■ The vxattachd daemon listens for "dmpnode online" events using vxnotify to
trigger its operation. Therefore, an automatic reattachment is not triggered
if the dmpnode online event is not generated when vxattachd is running. The
following are typical examples:
■ Storage is reconnected before vxattachd is started; for example, during
reboot.
■ In CVM, with active/passive arrays, if all nodes cannot agree on a common
path to an array controller, a plex can get detached due to I/O failure. In
these cases, the dmpnode will not get disabled. Therefore, after the
connections are restored, a dmpnode online event is not generated and
automatic plex reattachment is not triggered.
Moving plexes
Moving a plex copies the data content from the original plex onto a new plex. To
move a plex, use the following command:
After the copy task is complete, new_plex is not associated with the specified
volume volume. The plex contains a complete copy of the volume data. The plex
that is being copied to must be the same size as or larger than the volume. If the
plex being copied to is smaller than the volume, an incomplete copy of the data results.
For the same reason, new_plex should not be sparse.
Parameters of a plex's configuration (such as its stripe unit size and subdisk
ordering) are critical to the creation of a new plex to contain the same data. Before a plex
is removed, you must record its configuration.
See “Displaying plex information” on page 301.
To dissociate a plex from the associated volume and remove it as an object from
VxVM, use the following command:
For example, to dissociate and remove a plex named vol01-02 in the disk group,
mydg, use the following command:
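An illustrative sketch of the command:
# vxplex -g mydg -o rm dis vol01-02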
This command removes the plex vol01-02 and all associated subdisks.
Alternatively, you can first dissociate the plex and subdisks, and then remove
them with the following commands:
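An illustrative sketch of these commands:
# vxplex -g mydg dis vol01-02
# vxedit -g mydg -r rm vol01-02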
When used together, these commands produce the same result as the vxplex -o
rm dis command. The -r option to vxedit rm recursively removes all objects
from the specified object downward. In this way, a plex and its associated subdisks
can be removed by a single vxedit command.
The vxedit command changes the attributes of plexes and other Volume Manager
objects. To change plex attributes, use the following command:
Plex fields that can be changed using the vxedit command include:
■ name
■ putiln
■ tutiln
■ comment
The following example command sets the comment field, and also sets tutil2 to
indicate that the subdisk is in use:
To prevent a particular plex from being associated with a volume, set the putil0
field to a non-null string, as shown in the following command:
■ Creating a volume
■ Using vxassist
■ Accessing a volume
■ Using rules and persistent attributes to make volume allocation more efficient
Striped A volume with data spread evenly across multiple disks. Stripes
are equal-sized fragments that are allocated alternately and evenly
to the subdisks of a single plex. There must be at least two subdisks
in a striped plex, each of which must exist on a different disk.
Throughput increases with the number of disks across which a
plex is striped. Striping helps to balance I/O load in cases where
high traffic areas exist on certain subdisks.
Mirrored A volume with multiple data plexes that duplicate the information
contained in a volume. Although a volume can have a single data
plex, at least two are required for true mirroring to provide
redundancy of data. For the redundancy to be useful, each of these
data plexes should contain disk space from different disks.
RAID-5 A volume that uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe
unit and data stripe units. Parity can be used to reconstruct data
if one of the disks fails. In comparison to the performance of
striped volumes, write throughput of RAID-5 volumes decreases
since parity information needs to be updated each time data is
modified. However, in comparison to mirroring, the use of parity
to implement data redundancy reduces the amount of space
required.
■ Dirty region logs allow the fast recovery of mirrored volumes after a system
crash.
See “Dirty region logging” on page 58.
These logs are supported either as DRL log plexes, or as part of a version 20
DCO volume. Refer to the following sections for information on creating a
volume on which DRL is enabled:
See “Creating a volume with dirty region logging enabled” on page 336.
See “Creating a volume with a version 20 DCO volume” on page 336.
■ RAID-5 logs are used to prevent corruption of data during recovery of RAID-5
volumes.
See “RAID-5 logging” on page 50.
These logs are configured as plexes on disks other than those that are used
for the columns of the RAID-5 volume.
See “Creating a RAID-5 volume” on page 341.
Creating a volume
You can create volumes using an advanced approach or an assisted approach.
Each method uses different tools. You may switch between the advanced and the
assisted approaches at will.
Advanced approach
The advanced approach consists of a number of commands that typically require
you to specify detailed input. These commands use a “building block” approach
that requires you to have a detailed knowledge of the underlying structure and
components to manually perform the commands necessary to accomplish a certain
task. Advanced operations are performed using several different VxVM commands.
To create a volume using the advanced approach, perform the following steps in
the order specified:
■ Create subdisks using vxmake sd.
See “Creating subdisks” on page 292.
■ Create plexes using vxmake plex, and associate subdisks with them.
See “Creating plexes” on page 301.
See “Associating subdisks with plexes” on page 295.
■ Associate plexes with the volume using vxmake vol.
■ Initialize the volume using vxvol start or vxvol init zero.
See “Initializing and starting a volume created using vxmake” on page 347.
The steps to create the subdisks and plexes, and to associate the plexes with the
volumes can be combined by using a volume description file with the vxmake
command.
See “Creating a volume using a vxmake description file” on page 345.
See “Creating a volume using vxmake” on page 344.
Assisted approach
The assisted approach takes information about what you want to accomplish and
then performs the necessary underlying tasks. This approach requires only
minimal input from you, but also permits more detailed specifications.
Assisted operations are performed primarily through the vxassist command.
vxassist creates the required plexes and subdisks using only the basic attributes
of the desired volume as input. Additionally, the vxassist command can modify
existing volumes while automatically modifying any underlying or associated
objects.
The vxassist command uses default values for many volume attributes, unless
you provide specific values. It does not require you to have a thorough
understanding of low-level VxVM concepts. vxassist does not conflict with other
VxVM commands or preclude their use. Objects created by vxassist are compatible
and inter-operable with objects created by other VxVM commands and interfaces.
Using vxassist
You can use the vxassist utility to create and modify volumes. Specify the basic
requirements for volume creation or modification, and vxassist performs the
necessary tasks.
The advantages of using vxassist rather than the advanced approach include:
■ Most actions require that you enter only one command rather than several.
■ You are required to specify only minimal information to vxassist. If necessary,
you can specify additional parameters to modify or control its actions.
■ Operations result in a set of configuration changes that either succeed or fail
as a group, rather than individually. System crashes or other interruptions do
not leave intermediate states that you have to clean up. If vxassist finds an
error or an exceptional condition, it exits after leaving the system in the same
state as it was prior to the attempted operation.
The vxassist utility helps you perform the following tasks:
■ Creating volumes.
■ Creating mirrors for existing volumes.
■ Growing or shrinking existing volumes.
■ Backing up volumes online.
■ Reconfiguring a volume’s layout online.
vxassist obtains most of the information it needs from sources other than your
input. vxassist obtains information about the existing objects and their layouts
from the objects themselves.
For tasks requiring new disk space, vxassist seeks out available disk space and
allocates it in the configuration that conforms to the layout specifications and
that offers the best use of free space.
where keyword selects the task to perform. The first argument after a vxassist
keyword, volume, is a volume name, which is followed by a set of desired volume
attributes. For example, the keyword make allows you to create a new volume:
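The general form is sketched below (length is the volume size, optionally followed by attributes):
# vxassist [-b] [-g diskgroup] make volume length [attributes]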
You must create the /etc/default directory and the vxassist default file if these
do not already exist on your system.
The format of entries in a defaults file is a list of attribute-value pairs separated
by new lines. These attribute-value pairs are the same as those specified as options
on the vxassist command line.
See the vxassist(1M) manual page.
To display the default attributes held in the file /etc/default/vxassist, use the
following form of the vxassist command:
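An illustrative sketch of the command:
# vxassist help showattrs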
# By default:
# create unmirrored, unstriped volumes
# allow allocations to span drives
# with RAID-5 create a log, with mirroring don’t create a log
# align allocations on cylinder boundaries
layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,diskalign
# use the fsgen usage type, except when creating RAID-5 volumes
usetype=fsgen
# allow only root access to a volume
mode=u=rw,g=,o=
user=root
group=root
# by default, create 1 log copy for both mirroring and RAID-5 volumes
nregionlog=1
nraid5log=1
# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k
# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k
Note: The file system must be mounted to get the benefits of the SmartMove™
feature.
When the SmartMove feature is on, less I/O is sent through the host, through the
storage network and to the disks or LUNs. The SmartMove feature can be used
for faster plex creation and faster array migrations.
The SmartMove feature enables migration from a traditional LUN to a thinly
provisioned LUN, removing unused space in the process.
For more information, see the section on migrating to thin provisioning in the
Veritas Storage Foundation™ Advanced Features Administrator's Guide.
For example, to discover the maximum size of a RAID-5 volume with 5 columns and
2 logs that you can create within the disk group dgrp, enter the following
command:
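A command of roughly this form would be used (the attribute names are the standard vxassist ones; treat the exact form as a sketch):
# vxassist -g dgrp maxsize layout=raid5 nlog=2 ncol=5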
You can use storage attributes if you want to restrict the disks that vxassist uses
when creating volumes.
See “Creating a volume on specific disks” on page 325.
The maximum size of a VxVM volume that you can create is 256TB.
By default, vxassist automatically rounds up the volume size and attribute size
values to a multiple of the alignment value. (This is equivalent to specifying the
attribute dgalign_checking=round as an additional argument to the vxassist
command.)
If you specify the attribute dgalign_checking=strict to vxassist, the command
fails with an error if you specify a volume length or attribute size value that is
not a multiple of the alignment value for the disk group.
To create a concatenated, default volume, use the following form of the vxassist
command:
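The general form is roughly as follows (attributes are optional):
# vxassist [-b] [-g diskgroup] make volume length [attributes]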
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
For example, to create the concatenated volume voldefault with a length of 10
gigabytes in the default disk group:
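A command of roughly this form would be used:
# vxassist -b make voldefault 10g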
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
For example, to create the volume volspec with length 5 gigabytes on disks mydg03
and mydg04, use the following command:
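A command of roughly this form would be used (the disk group name mydg is an assumption):
# vxassist -b -g mydg make volspec 5g mydg03 mydg04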
The vxassist command allows you to specify storage attributes. These give you
control over the devices, including disks, controllers and targets, which vxassist
uses to configure a volume. For example, you can specifically exclude disk mydg05.
Note: The ! character is a special character in some shells. The following examples
show how to escape it in a bash shell.
The following example excludes all disks that are on controller c2:
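A sketch of such a command (the volume name, size, and disk group are illustrative):
# vxassist -b -g mydg make volspec 5g \!ctlr:c2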
This example includes only disks on controller c1 except for target t5:
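A sketch of such a command (again with illustrative volume name, size, and disk group):
# vxassist -b -g mydg make volspec 5g ctlr:c1 \!target:c1t5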
If you want a volume to be created using only disks from a specific disk group,
use the -g option to vxassist, for example:
Any storage attributes that you specify for use must belong to the disk group.
Otherwise, vxassist will not use them to create a volume.
You can also use storage attributes to control how vxassist uses available storage,
for example, when calculating the maximum size of a volume, when growing a
volume or when removing mirrors or logs from a volume. The following example
excludes disks dgrp07 and dgrp08 when calculating the maximum size of RAID-5
volume that vxassist can create using the disks in the disk group dg:
It is also possible to control how volumes are laid out on the specified storage.
See “Specifying ordered allocation of storage to volumes” on page 328.
If you are using VxVM in conjunction with Veritas SANPoint Control 2.0, you can
specify how vxassist should use the available storage groups when creating
volumes.
See “Configuration of volumes on SAN storage” on page 72.
See the vxassist(1M) manual page.
vxassist also lets you select disks based on disk tags. The following command
only includes disks that have a tier1 disktag.
where diskgroup is the name of the disk group to which the disk belongs.
The allocation behavior of the vxassist command changes with the presence of
SSD devices in a disk group.
Note: If the disk group version is less than 150, the vxassist command does not
honor the media type of the device when making allocations.
The vxassist command allows you to specify Hard Disk Drive (HDD) or SSD
devices for allocation using the mediatype attribute. For example, to create a
volume myvol of size 1g on SSD disks in mydg, use the following command:
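A command of roughly this form would be used:
# vxassist -g mydg make myvol 1g mediatype:ssd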
For example, to create a volume myvol of size 1g on HDD disks in mydg, use the
following command:
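Similarly, a sketch for HDD devices:
# vxassist -g mydg make myvol 1g mediatype:hdd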
If only enclr3 is specified, only the HDD devices present in enclr3 are considered
for allocation.
In the following two commands, volume myvol of size 1G is allocated on HDD
devices from enclr3 array:
The allocation fails if the command is specified in one of the following two ways:
In the above case, volume myvol cannot be created as there are no HDD devices
in enclr1 enclosure.
In the above case, volume myvol cannot be created as there are no SSD devices
in enclr2 enclosure.
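The ordered-allocation command being described next would take roughly the following form (the volume name, size, and -b option are illustrative):
# vxassist -b -g mydg -o ordered make mirstrvol 10g layout=mirror-stripe ncol=3 mydg01 mydg02 mydg03 mydg04 mydg05 mydg06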
This command places columns 1, 2 and 3 of the first mirror on disks mydg01,
mydg02 and mydg03 respectively, and columns 1, 2 and 3 of the second mirror on
disks mydg04, mydg05 and mydg06 respectively.
Figure 8-1 shows an example of using ordered allocation to create a mirrored-stripe
volume.
[Figure 8-1: Mirrored-stripe volume. One striped plex holds columns 1, 2, and 3 on mydg01-01, mydg02-01, and mydg03-01; the mirroring striped plex holds columns 1, 2, and 3 on mydg04-01, mydg05-01, and mydg06-01.]
For layered volumes, vxassist applies the same rules to allocate storage as for
non-layered volumes. For example, the following command creates a striped-mirror
volume with 2 columns:
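A sketch of such a command (volume name and size are illustrative):
# vxassist -b -g mydg -o ordered make strmirvol 10g layout=stripe-mirror ncol=2 mydg01 mydg02 mydg03 mydg04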
This command mirrors column 1 across disks mydg01 and mydg03, and column 2
across disks mydg02 and mydg04.
Figure 8-2 shows an example of using ordered allocation to create a striped-mirror
volume.
[Figure 8-2: Striped-mirror volume with two columns. Column 1 is mirrored on mydg01-01 and mydg03-01; column 2 is mirrored on mydg02-01 and mydg04-01.]
Additionally, you can use the col_switch attribute to specify how to concatenate
space on the disks into columns. For example, the following command creates a
mirrored-stripe volume with 2 columns:
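A sketch of such a command (the col_switch values match the allocation described below; the volume name, size, and -o ordered option are assumptions):
# vxassist -b -g mydg -o ordered make mirstr2vol 10g layout=mirror-stripe ncol=2 col_switch=3g,2g mydg01 mydg02 mydg03 mydg04 mydg05 mydg06 mydg07 mydg08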
This command allocates 3 gigabytes from mydg01 and 2 gigabytes from mydg02 to
column 1, and 3 gigabytes from mydg03 and 2 gigabytes from mydg04 to column
2. The mirrors of these columns are then similarly formed from disks mydg05
through mydg08.
Figure 8-3 shows an example of using concatenated disk space to create a
mirrored-stripe volume.
[Figure 8-3: Mirrored-stripe volume built from concatenated disk space. In one striped plex, column 1 concatenates mydg01-01 and mydg02-01 and column 2 concatenates mydg03-01 and mydg04-01; in the mirroring striped plex, column 1 concatenates mydg05-01 and mydg06-01 and column 2 concatenates mydg07-01 and mydg08-01.]
Other storage specification classes for controllers, enclosures, targets and trays
can be used with ordered allocation. For example, the following command creates
a 3-column mirrored-stripe volume between specified controllers:
This command allocates space for column 1 from disks on controllers c1, for
column 2 from disks on controller c2, and so on.
Figure 8-4 shows an example of using storage allocation to create a mirrored-stripe
volume across controllers.
[Figure 8-4: Mirrored-stripe volume allocated across controllers. The columns of one striped plex are placed on disks attached to controllers c1, c2, and c3; the columns of the mirroring striped plex are placed on disks attached to controllers c4, c5, and c6.]
There are other ways in which you can control how vxassist lays out mirrored
volumes across controllers.
See “Mirroring across targets, controllers or enclosures” on page 339.
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
For example, to create the mirrored volume, volmir, in the disk group, mydg, use
the following command:
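A command of roughly this form would be used (the 5-gigabyte size is illustrative):
# vxassist -b -g mydg make volmir 5g layout=mirror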
To create a volume with 3 instead of the default of 2 mirrors, modify the command
to read:
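That is, continuing the sketch above:
# vxassist -b -g mydg make volmir 5g layout=mirror nmirror=3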
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
Alternatively, first create a concatenated volume, and then mirror it.
See “Adding a mirror to a volume ” on page 375.
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
Note: You need a license to use the Persistent FastResync feature. If you do not
have a license, you can configure a DCO object and DCO volume so that snap
objects are associated with the original and snapshot volumes. However, without
a license, only full resynchronization can be performed.
To upgrade a disk group to the latest version, use the following command:
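For example (diskgroup is a placeholder for the disk group name):
# vxdg upgrade diskgroup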
For non-layered volumes, the default number of plexes in the mirrored DCO
volume is equal to the lesser of the number of plexes in the data volume or
2. For layered volumes, the default number of DCO plexes is always 2. If
required, use the ndcomirror attribute to specify a different number. It is
recommended that you configure as many DCO plexes as there are data plexes
in the volume. For example, specify ndcomirror=3 when creating a 3-way
mirrored volume.
The default size of each plex is 132 blocks unless you use the dcolen attribute
to specify a different size. If specified, the size of the plex must be a multiple
of 33 blocks from 33 up to a maximum of 2112 blocks.
By default, FastResync is not enabled on newly created volumes. Specify the
fastresync=on attribute if you want to enable FastResync on the volume. If
a DCO object and DCO volume are associated with the volume, Persistent
FastResync is enabled; otherwise, Non-Persistent FastResync is enabled.
3 To enable DRL or sequential DRL logging on the newly created volume, use
the following command:
If you do not specify the logdisk attribute, vxassist locates the logs in the
data plexes of the volume.
See “Specifying ordered allocation of storage to volumes” on page 328.
See the vxassist(1M) manual page.
See the vxvol(1M) manual page.
To upgrade a disk group to the most recent version, use the following
command:
Set the value of the drl attribute to on if dirty region logging (DRL) is to be
used with the volume (this is the default setting). For a volume that will be
written to sequentially, such as a database log volume, set the value to
sequential to enable sequential DRL. The DRL logs are created in the DCO
volume. The redundancy of the logs is determined by the number of mirrors
that you specify using the ndcomirror attribute.
By default, Persistent FastResync is not enabled on newly created volumes.
Specify the fastresync=on attribute if you want to enable Persistent
FastResync on the volume.
See “Determining the DCO version number” on page 383.
See the vxassist(1M) manual page.
The nlog attribute can be used to specify the number of log plexes to add. By
default, one log plex is added. The loglen attribute specifies the size of the log,
where each bit represents one region in the volume. For example, the size of the
log would need to be 20K for a 10GB volume with a region size of 64 kilobytes.
For example, to create a mirrored 10GB volume, vol02, with two log plexes in the
disk group, mydg, use the following command:
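A command of roughly this form would be used (a sketch; the attribute combination is an assumption based on the nlog and logtype attributes described above):
# vxassist -g mydg make vol02 10g layout=mirror logtype=drl nlog=2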
Sequential DRL limits the number of dirty regions for volumes that are written
to sequentially, such as database replay logs. To enable sequential DRL on a volume
that is created within a disk group with a version number between 70 and 100,
specify the logtype=drlseq attribute to the vxassist make command.
It is also possible to enable the use of Persistent FastResync with this volume.
See “Creating a volume with a version 0 DCO volume” on page 333.
Note: Operations on traditional DRL log plexes are usually applicable to volumes
that are created in disk groups with a version number of less than 110. If you
enable DRL or sequential DRL on a volume that is created within a disk group
with a version number of 110 or greater, the DRL logs are usually created within
the plexes of a version 20 DCO volume.
Specify the -b option if you want to make the volume immediately available for
use.
This creates a striped volume with the default stripe unit size (64 kilobytes) and
the default number of stripes (2).
You can specify the disks on which the volumes are to be created by including the
disk names on the command line. For example, to create a 30-gigabyte striped
volume on three specific disks, mydg03, mydg04, and mydg05, use the following
command:
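A command of roughly this form would be used (the disk group name mydg and the -b option are assumptions):
# vxassist -b -g mydg make stripevol 30g layout=stripe mydg03 mydg04 mydg05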
To change the number of columns or the stripe width, use the ncolumn and
stripeunit modifiers with vxassist. For example, the following command creates
a striped volume with 5 columns and a 32-kilobyte stripe size:
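A sketch of such a command (volume name and size are illustrative):
# vxassist -b -g mydg make stripevol 30g layout=stripe stripeunit=32k ncol=5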
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
Alternatively, first create a striped volume, and then mirror it. In this case, the
additional data plexes may be either striped or concatenated.
See “Adding a mirror to a volume ” on page 375.
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
By default, VxVM attempts to create the underlying volumes by mirroring subdisks
rather than columns if the size of each column is greater than the value for the
attribute stripe-mirror-col-split-trigger-pt that is defined in the vxassist
defaults file.
If there are multiple subdisks per column, you can choose to mirror each subdisk
individually instead of each column. To mirror at the subdisk level, specify the
layout as stripe-mirror-sd rather than stripe-mirror. To mirror at the column
level, specify the layout as stripe-mirror-col rather than stripe-mirror.
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
The attribute mirror=ctlr specifies that disks in one mirror should not be on the
same controller as disks in other mirrors within the same volume:
The following command creates a mirrored volume with two data plexes in the
disk group, mydg:
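A sketch of such a command (volume name and size are illustrative):
# vxassist -b -g mydg make mirvol 5g mirror=ctlr ctlr:c2 ctlr:c3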
The disks in one data plex are all attached to controller c2, and the disks in the
other data plex are all attached to controller c3. This arrangement ensures
continued availability of the volume should either controller fail.
The attribute mirror=enclr specifies that disks in one mirror should not be in
the same enclosure as disks in other mirrors within the same volume.
The following command creates a mirrored volume with two data plexes:
The disks in one data plex are all taken from enclosure enc1, and the disks in the
other data plex are all taken from enclosure enc2. This arrangement ensures
continued availability of the volume should either enclosure become unavailable.
There are other ways in which you can control how volumes are laid out on the
specified storage.
See “Specifying ordered allocation of storage to volumes” on page 328.
Note: VxVM supports the creation of RAID-5 volumes in private disk groups, but
not in shareable disk groups in a cluster environment.
You can create RAID-5 volumes by using either the vxassist command
(recommended) or the vxmake command. Both approaches are described below.
A RAID-5 volume contains a RAID-5 data plex that consists of three or more
subdisks located on three or more physical disks. Only one RAID-5 data plex can
exist per volume. A RAID-5 volume can also contain one or more RAID-5 log plexes,
which are used to log information about data and parity being written to the
volume.
See “RAID-5 (striping with parity)” on page 45.
Warning: Do not create a RAID-5 volume with more than 8 columns because the
volume will be unrecoverable in the event of the failure of more than one disk.
Specify the -b option if you want to make the volume immediately available for
use.
See “Initializing and starting a volume” on page 346.
For example, to create the RAID-5 volume volraid together with 2 RAID-5 logs
in the disk group, mydg, use the following command:
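A command of roughly this form would be used (the 10-gigabyte size and the -b option are illustrative):
# vxassist -b -g mydg make volraid 10g layout=raid5 nlog=2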
This creates a RAID-5 volume with the default stripe unit size on the default
number of disks. It also creates two RAID-5 logs rather than the default of one
log.
If you require RAID-5 logs, you must use the logdisk attribute to specify the disks
to be used for the log plexes.
RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated
with a RAID-5 volume has a complete copy of the logging information for the
volume. To support concurrent access to the RAID-5 array, the log should be
several times the stripe size of the RAID-5 plex.
It is suggested that you configure a minimum of two RAID-5 log plexes for each
RAID-5 volume. These log plexes should be located on different disks. Having two
RAID-5 log plexes for each RAID-5 volume protects against the loss of logging
information due to the failure of a single disk.
If you use ordered allocation when creating a RAID-5 volume on specified storage,
you must use the logdisk attribute to specify on which disks the RAID-5 log plexes
should be created. Use the following form of the vxassist command to specify
the disks from which space for the logs is to be allocated:
For example, the following command creates a 3-column RAID-5 volume with the
default stripe unit size on disks mydg04, mydg05 and mydg06. It also creates two
RAID-5 logs on disks mydg07 and mydg08.
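A sketch of such a command (the volume name, size, and -b option are illustrative):
# vxassist -b -g mydg -o ordered make volraid 10g layout=raid5 ncol=3 nlog=2 logdisk=mydg07,mydg08 mydg04 mydg05 mydg06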
The number of logs must equal the number of disks that is specified to logdisk.
See “Specifying ordered allocation of storage to volumes” on page 328.
See the vxassist(1M) manual page.
It is possible to add more logs to a RAID-5 volume at a later time.
See “Adding a RAID-5 log” on page 403.
You can use the tag attribute with the vxassist make command to set a named
tag and optional tag value on a volume, for example:
To list the tags that are associated with a volume, use this command:
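The command takes roughly this form (the bracketed arguments are placeholders):
# vxassist [-g diskgroup] listtag [volume]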
If you do not specify a volume name, the tags of all volumes and vsets in the disk
group are listed.
The following is an example of listtag output:
To list the volumes that have a specified tag name, use this command:
Tag names and tag values are case-sensitive character strings of up to 256
characters. Tag names can consist of letters (A through Z and a through z),
numbers (0 through 9), dashes (-), underscores (_) or periods (.) from the ASCII
character set. A tag name must start with either a letter or an underscore. Tag
values can consist of any character from the ASCII character set with a decimal
value from 32 through 127. If a tag value includes any spaces, use the vxassist
settag command to set the tag on the newly created volume.
Dotted tag hierarchies are understood by the list operation. For example, the
listing for tag=a.b includes all volumes that have tag names that start with a.b.
The tag names site, udid and vdid are reserved and should not be used. To avoid
possible clashes with future product features, it is recommended that tag names
do not start with any of the following strings: asl, be, isp, nbu, sf, symc, vx, or
vxvm.attr.
The following command confirms that the vxfs.placement_class tag has been
updated.
Note that because four subdisks are specified but the number of columns is not,
the vxmake command assumes a four-column RAID-5 plex and places one subdisk
in each column. Striped plexes are created using the same method
except that the layout is specified as stripe. If the subdisks are to be created and
added later, use the following command to create the plex:
If no subdisks are specified, the ncolumn attribute must be specified. Subdisks can
be added to the plex later using the vxsd assoc command.
See “Associating subdisks with plexes” on page 295.
If each column in a RAID-5 plex is to be created from multiple subdisks which
may span several physical disks, you can specify to which column each subdisk
should be added. For example, to create a three-column RAID-5 plex using six
subdisks, use the following form of the vxmake command:
The following command creates a RAID-5 volume, and associates the prepared
RAID-5 plex and RAID-5 log plexes with it:
Each RAID-5 volume has one RAID-5 plex where the data and parity are stored.
Any other plexes associated with the volume are used as RAID-5 log plexes to log
information about data and parity being written to the volume.
After creating a volume using vxmake, you must initialize it before it can be used.
See “Initializing and starting a volume” on page 346.
Alternatively, you can specify the file to vxmake using the -d option:
The following sample description file defines a volume, db, with two plexes, db-01
and db-02:
The subdisk definition for plex, db-01, must be specified on a single line. It is
shown here split across two lines because of space constraints.
The first plex, db-01, is striped and has five subdisks on two physical disks, mydg03
and mydg04. The second plex, db-02, is the preferred plex in the mirror, and has
one subdisk, ramd1-01, on a volatile memory disk.
For detailed information about how to use vxmake, refer to the vxmake(1M) manual
page.
After creating a volume using vxmake, you must initialize it before it can be used.
See “Initializing and starting a volume created using vxmake” on page 347.
The -b option makes VxVM carry out any required initialization as a background
task. It also greatly speeds up the creation of striped volumes by initializing the
columns in parallel.
As an alternative to the -b option, you can specify the init=active attribute to
make a new volume immediately available for use. In this example, init=active
is specified to prevent VxVM from synchronizing the empty data plexes of a new
mirrored volume:
Warning: There is a very small risk of errors occurring when the init=active
attribute is used. Although written blocks are guaranteed to be consistent, read
errors can arise in the unlikely event that fsck attempts to verify uninitialized
space in the file system, or if a file remains uninitialized following a system crash.
If in doubt, use the -b option to vxassist instead.
This command writes zeroes to the entire length of the volume and to any log
plexes. It then makes the volume active. You can also zero out a volume by
specifying the attribute init=zero to vxassist, as shown in this example:
You cannot use the -b option to make this operation a background task.
The following command can be used to enable a volume without initializing it:
This allows you to restore data on the volume from a backup before using the
following command to make the volume fully active:
If you want to zero out the contents of an entire volume, use this command to
initialize it:
Accessing a volume
As soon as a volume has been created and initialized, it is available for use as a
virtual disk partition by the operating system for the creation of a file system, or
by application programs such as relational databases and other data management
software.
Creating a volume in a disk group sets up block and character (raw) device files
that can be used to access the volume:
The pathnames include a directory named for the disk group. Use the appropriate
device node to create, mount and repair file systems, and to lay out databases that
require raw partitions.
As the rootdg disk group no longer has special significance, VxVM only creates
volume device nodes for this disk group in the /dev/vx/dsk/rootdg and
/dev/vx/rdsk/rootdg directories. VxVM does not create device nodes in the
/dev/vx/dsk or /dev/vx/rdsk directories for the rootdg disk group.
For example, you can create allocation rules so that a set of servers can standardize
their storage tiering. Suppose you had the following requirements:
You can create rules for each volume allocation requirement and name the rules
tier1, tier2, and tier0.
You can also define rules so that each time you create a volume for a particular
purpose, it's created with the same attributes. For example, to create the volume
for a production database, you can create a rule called productiondb. To create
standardized volumes for home directories, you can create a rule called homedir.
To standardize your high performance index volumes, you can create a rule called
dbindex.
volume allocation which has proven too restrictive or discard it to allow a needed
allocation to succeed.
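Based on the rule examples later in this section, a rule definition takes roughly this form:
volume rule rulename { attribute[=value] ... }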
This syntax defines a rule named rulename which is a short-hand for the listed
vxassist attributes. Rules can reference other rules using an attribute of
rule=rulename[,rulename,...], which adds all the attributes from that rule
into the rule currently being defined. The attributes you specify in a rule definition
override any conflicting attributes that are in a rule that you specify by reference.
You can add a description to a rule with the attribute
description=description_text.
The following is a basic rule file. The first rule in the file, base, defines the logtype
and persist attributes. The remaining rules in the file – tier0, tier1, and tier2 –
reference this rule and also define their own tier-specific attributes. Referencing
a rule lets you define attributes in one place and reuse them in other rules.
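A minimal sketch of such a file, using only attributes that appear elsewhere in this section (the tier rules would normally also carry tier-specific storage attributes, which are omitted here):
volume rule base { logtype=dco persist=extended }
volume rule tier0 { rule=base mediatype:ssd }
volume rule tier1 { rule=base mediatype:hdd }
volume rule tier2 { rule=base mediatype:hdd }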
The following rule file contains a more complex definition which runs across
several lines.
In the following example, when you create the volume vol1 in disk group dg3,
you can specify the tier1 rule on the command line. In addition to the attributes
you enter on the command line, vol1 is given the attributes that you defined in
tier1.
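A sketch of such a command (the 200-megabyte size is illustrative):
# vxassist -g dg3 make vol1 200m rule=tier1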
The following vxprint command displays the attributes of disk group dg3. The
output includes the new volume, vol1.
vxprint -g dg3
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg3 dg3 - - - - - -
The following vxassist command confirms that vol1 is in tier1. The application
of rule tier1 was successful.
# cat /etc/default/vxsf_rules
volume rule rule1 { mediatype:ssd persist=extended }
# vxdisk listtag
DEVICE NAME VALUE
The following command creates a volume, vol1, in the disk group dg3. rule1 is
specified on the command line, so those attributes are also applied to vol1.
The following command shows that the volume vol1 is created off the SSD device
ibm_ds8x000_0266 as specified in rule1.
# vxprint -g dg3
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg3 dg3 - - - - - -
The following command displays the attributes that are defined in rule1.
The following vxprint command confirms that the volume was grown on SSD
devices.
# vxprint -g dg3
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg3 dg3 - - - - - -
■ Stopping a volume
■ Starting a volume
■ Resizing a volume
■ Removing a mirror
■ Removing a volume
Note: To use most VxVM commands, you need superuser or equivalent privileges.
# vxprint -hvt
You can also apply the vxprint command to a single disk group:
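For example (mydg is a placeholder disk group name):
# vxprint -g mydg -hvt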
Here v is a volume, pl is a plex, and sd is a subdisk. The first few lines indicate
the headers that match each type of output line that follows. Each volume is listed
along with its associated plexes and subdisks.
You can ignore the headings for sub-volumes (SV), storage caches (SC), data change
objects (DCO) and snappoints (SP) in the sample output. No such objects are
associated with the volumes that are shown.
To display volume-related information for a specific volume, use the following
command:
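The command takes roughly this form:
# vxprint [-g diskgroup] -t volume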
For example, to display information about the volume, voldef, in the disk group,
mydg, use the following command:
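That is:
# vxprint -g mydg -t voldef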
Volume states
Table 9-1 shows the volume states that may be displayed by VxVM commands
such as vxprint.
ACTIVE The volume has been started (the kernel state is currently ENABLED)
or was in use (the kernel state was ENABLED) when the machine was
rebooted.
CLEAN The volume is not started (the kernel state is DISABLED) and its plexes
are synchronized. For a RAID-5 volume, its plex stripes are consistent
and its parity is good.
EMPTY The volume contents are not initialized. When the volume is EMPTY,
the kernel state is always DISABLED.
NEEDSYNC You must resynchronize the volume the next time it is started. A
RAID-5 volume requires a parity resynchronization.
REPLAY The volume is in a transient state as part of a log replay. A log replay
occurs when it becomes necessary to use logged parity and data. This
state is only applied to RAID-5 volumes.
ENABLED The volume is online and can be read from or written to.
Note: VxVM supports this feature only for private disk groups, not for shared disk
groups in a CVM environment.
■ vxevac
■ vxmirror
■ vxplex
■ vxrecover
■ vxrelayout
■ vxresize
■ vxsd
■ vxvol
For example, to execute a vxrecover command and track the resulting tasks as a
group with the task tag myrecovery, use the following command:
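A command of roughly this form would be used (the -t option supplies the task tag; treat the exact option set as a sketch):
# vxrecover -g mydg -t myrecovery -b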
Any tasks started by the utilities invoked by vxrecover also inherit its task ID
and task tag, establishing a parent-child task relationship.
For more information about the utilities that support task tagging, see their
respective manual pages.
vxtask operations
The vxtask command supports the following operations:
abort Stops the specified task. In most cases, the operations “back out” as
if an I/O error occurred, reversing what has been done so far to the
largest extent possible.
list Displays a one-line summary for each task running on the system.
The -l option prints tasks in long format. The -h option prints tasks
hierarchically, with child tasks following the parent tasks. By default,
all tasks running on the system are printed. If you include a taskid
argument, the output is limited to those tasks whose taskid or task
tag match taskid. The remaining arguments filter tasks and limit
which ones are listed.
# vxtask list
To print tasks hierarchically, with child tasks following the parent tasks, specify
the -h option, as follows:
# vxtask -h list
To trace all paused tasks in the disk group mydg, as well as any tasks with the tag
sysstart, use the following command:
To list all paused tasks, use the vxtask -p list command. To continue execution
(the task may be specified by its ID or by its tag), use vxtask resume :
# vxtask -p list
# vxtask resume 167
To monitor all tasks with the tag myoperation, use the following command:
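A sketch of the command:
# vxtask monitor myoperation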
To cause all tasks tagged with recovall to exit, use the following command:
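Roughly:
# vxtask abort recovall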
This command causes VxVM to try to reverse the progress of the operation so far.
For example, aborting an Online Relayout results in VxVM returning the volume
to its original layout.
See “Controlling the progress of a relayout” on page 401.
The thin reclamation feature is supported only for LUNs that have the thinrclm
attribute. VxVM automatically discovers LUNs that support Thin Reclamation
from thin capable storage arrays. You can list devices that are known to have the
thinonly or thinrclm attributes on the host.
In the above output, the SIZE column shows the size of the disk. The
PHYS_ALLOC column shows the physical allocation on the array side. The
TYPE column indicates that the array supports thin reclamation.
To avoid any impact on regular I/O to the array, the reclaim operation is made
asynchronous. When a volume is deleted, the space previously used by the volume
is tracked for later asynchronous reclamation. This asynchronous reclamation is
handled by the vxrelocd (or recovery) daemon.
By default, the vxrelocd daemon runs every day at 22:10 hours and reclaims
storage for deleted volumes that are one day old.
To perform the reclaim operation at a less critical time for the system, control
the time of the reclaim operation by using the following tunables:
In the above example, suppose that disk1 contains a VxVM volume vol1 with a
VxFS file system. If the VxFS file system is not mounted, the command skips
reclamation for disk1.
To reclaim space on disk1, use the following command:
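A sketch of such a command (the -o full option is an assumption; the basic form is vxdisk reclaim disk1):
# vxdisk -o full reclaim disk1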
The above command reclaims unused space on disk1 that is outside of vol1.
The reclamation skips the vol1 volume, since the VxFS file system is not mounted,
but it scans the rest of the disk for unused space.
Example of reclamation for disk groups. The following example triggers
reclamation on the disk group oradg:
# /opt/VRTS/bin/fsadm -R /mnt1
Veritas File System also supports reclamation of a portion of the file system using
the vxfs_ts_reclaim() API.
Note: Thin Reclamation is a slow process and may take several hours to complete,
depending on the file system size. Thin Reclamation is not guaranteed to reclaim
100% of the free space.
You can track the progress of the Thin Reclamation process by using the vxtask
list command when using the Veritas Volume Manager (VxVM) command vxdisk
reclaim.
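The reclamation command for a VxFS file system takes the following form (consistent with the fsadm example shown earlier):
# /opt/VRTS/bin/fsadm -R <VxFS_mount_point>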
where <VxFS_mount_point> is the name of the VxFS file system mount point.
Note: If the VxFS file system is not mounted you will receive an error message.
For example: Disk 3pardata0_110 : Skipped. No VxFS file system found.
For example:
# vxtask list
# sync
See the Veritas Storage Foundation Advanced Features Administrator's Guide for
more information on Thin Provisioning and SmartMove.
Note: The full new plex or volume allocates physical storage on thin LUNs and
will not be a thin/optimized operation.
Stopping a volume
Stopping a volume renders it unavailable to the user, and changes the volume
kernel state from ENABLED or DETACHED to DISABLED. If the volume cannot
be disabled, it remains in its current state. To stop a volume, use the following
command:
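The command takes roughly this form (the -f option forces the operation, as described in the warning below):
# vxvol [-g diskgroup] [-f] stop volume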
To stop all volumes in a specified disk group, use the following command:
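A sketch of the command:
# vxvol -g diskgroup stopall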
Warning: If you use the -f option to forcibly disable a volume that is currently
open to an application, the volume remains open, but its contents are inaccessible.
I/O operations on the volume fail, and this may cause data loss. You cannot deport
a disk group until all its volumes are closed.
If you need to prevent a closed volume from being opened, use the vxvol maint
command, as described in the following section.
To assist in choosing the revival source plex, use vxprint to list the stopped
volume and its plexes.
To take a plex offline (in this example, vol01-02 in the disk group mydg), use the
following command:
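A sketch of the command:
# vxmend -g mydg off vol01-02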
Make sure that all the plexes are offline except for the one that you will use for
revival. The plex from which you will revive the volume should be placed in the
STALE state. The vxmend on command can change the state of an OFFLINE plex of
a DISABLED volume to STALE. For example, to put the plex vol01-02 in the STALE
state, use the following command:
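Roughly:
# vxmend -g mydg on vol01-02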
Running the vxvol start command on the volume then revives the volume with
the specified plex. Because you are starting the volume from a stale plex, you must
specify the force option ( -f).
By using the procedure above, you can enable the volume with each plex, and you
can decide which plex to use to revive the volume.
After you specify a plex for revival, and you use the procedure above to enable
the volume with the specified plex, put the volume back into the DISABLED state
and put all the other plexes into the STALE state using the vxmend on command.
Now, you can recover the volume.
See “Starting a volume” on page 370.
Starting a volume
Starting a volume makes it available for use, and changes the volume state from
DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED
volume, use the following command:
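A command of roughly this form starts a single volume; the vxrecover -s command shown next can also be used to start and recover volumes:
# vxvol -g diskgroup start volume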
# vxrecover -s
Resizing a volume
Resizing a volume changes its size. For example, if a volume is too small for the
amount of data it needs to store, you can increase its length. To resize a volume,
use one of the following commands: vxresize (preferred), vxassist, or vxvol.
Note: You cannot use VxVM commands, Storage Foundation Manager (SFM), or
VEA to resize a volume or any underlying file system on an encapsulated root
disk. This is because the underlying disk partitions also need to be reconfigured.
If you need to resize the volumes on the root disk, you must first unencapsulate
the root disk.
When you resize a volume, you can specify the length of a new volume in sectors,
kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the
length (s, m, k, or g). If you do not specify a unit, sectors are assumed. The vxassist
command also lets you specify an increment by which to change the volume’s
size.
Warning: If you use vxassist or vxvol to resize a volume, do not shrink it below
the size of the file system on it. If you do not shrink the file system first, you risk
unrecoverable data loss. If you have a VxFS file system, shrink the file system
first, and then shrink the volume. For other file systems, you may need to back
up your data so that you can later recreate the file system and restore its data.
For example, the following command resizes a volume from 1 GB to 10 GB. The
volume is homevol in the disk group mydg, and contains a VxFS file system. The
command uses spare disks mydg10 and mydg11.
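A sketch of such a command (the -F vxfs and -t homevolresize options match the file system type and task tag mentioned here):
# vxresize -b -F vxfs -t homevolresize -g mydg homevol 10g mydg10 mydg11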
The -b option specifies that this operation runs in the background. To monitor
its progress, specify the task tag homevolresize with the vxtask command.
When you use vxresize, note the following restrictions:
■ vxresize works with VxFS and UFS file systems only.
■ In some situations, when you resize large volumes, vxresize may take a long
time to complete.
■ If you resize a volume with a usage type other than FSGEN or RAID5, you can
lose data. If such an operation is required, use the -f option to forcibly resize
the volume.
■ You cannot resize a volume that contains plexes with different layout types.
Attempting to do so results in the following error message:
To resize such a volume successfully, you must first reconfigure it so that each
data plex has the same layout.
Note: If you enter an incorrect volume size, do not try to stop the vxresize
operation by entering Ctrl-C. Let the operation complete and then rerun vxresize
with the correct value.
For more information about the vxresize command, see the vxresize(1M) manual
page.
Warning: You cannot grow or shrink any volume associated with an encapsulated
root disk (rootvol, usr, var, opt, swapvol, and so on) because these map to a
physical underlying partition on the disk and must be contiguous. If you try to
grow rootvol, usrvol, varvol, or swapvol, the system could become unbootable
if you need to revert to booting from slices. It can also prevent a successful
Solaris upgrade, and you might have to do a fresh install. The upgrade_start
script might also fail.
For example, to extend volcat to 2000 sectors, use the following command:
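A sketch of the command (the disk group name mydg is an assumption):
# vxassist -g mydg growto volcat 2000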
If you want the subdisks to be grown using contiguous disk space, and you
previously performed a relayout on the volume, also specify the attribute
layout=nodiskalign to the growto command.
If you want the subdisks to be grown using contiguous disk space, and you
previously performed a relayout on the volume, also specify the attribute
layout=nodiskalign to the growby command.
For example, to shrink volcat to 1300 sectors, use the following command:
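Roughly:
# vxassist -g mydg shrinkto volcat 1300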
Warning: Do not shrink the volume below the current size of the file system or
database using the volume. You can safely use the vxassist shrinkto command
on empty volumes.
For example, to shrink volcat by 300 sectors, use the following command:
Warning: Do not shrink the volume below the current size of the file system or
database using the volume. You can safely use the vxassist shrinkby command
on empty volumes.
For example, to change the length of the volume vol01, in the disk group mydg,
to 100000 sectors, use the following command:
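A sketch of the command:
# vxvol -g mydg set len=100000 vol01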
Note: You cannot use the vxvol set len command to increase the size of a volume
unless the needed space is available in the volume's plexes. When you reduce the
volume's size using the vxvol set len command, the freed space is not released
into the disk group’s free space pool.
If a volume is active and you reduce its length, you must force the operation using
the -o force option to vxvol. This precaution ensures that space is not removed
accidentally from applications using the volume.
You can change the length of logs using the following command:
Warning: Sparse log plexes are not valid. They must map the entire length of the
log. If increasing the log length makes any of the logs invalid, the operation is not
allowed. Also, if the volume is not active and is dirty (for example, if it has not
been shut down cleanly), you cannot change the log length. If you are decreasing
the log length, this feature avoids losing any of the log contents. If you are
increasing the log length, it avoids introducing random data into the logs.
Specifying the -b option makes synchronizing the new mirror a background task.
For example, to create a mirror of the volume voltest in the disk group, mydg,
use the following command:
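A sketch of the command:
# vxassist -b -g mydg mirror voltest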
You can also mirror a volume by creating a plex and then attaching it to a volume
using the following commands:
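A sketch of those commands (the plex and subdisk names are placeholders); the vxmirror commands that follow relate to mirroring all volumes in a disk group (-a) and to making mirroring the default for new volumes (-d yes):
# vxmake -g mydg plex plex_name sd=subdisk_name
# vxplex -g mydg att volume_name plex_name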
# /etc/vx/bin/vxmirror -g diskgroup -a
# vxmirror -d yes
If you make this change, you can still make unmirrored volumes by specifying
nmirror=1 as an attribute to the vxassist command. For example, to create an
unmirrored 20-gigabyte volume named nomirror in the disk group mydg, use the
following command:
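Roughly:
# vxassist -g mydg make nomirror 20g nmirror=1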
Note: This task only mirrors concatenated volumes. Volumes that are already
mirrored or that contain subdisks that reside on multiple disks are ignored.
4 At the prompt, enter the target disk name (this disk must be the same size or
larger than the originating disk):
6 At the prompt, indicate whether you want to mirror volumes on another disk
(y) or return to the vxdiskadm main menu (n):
■ Mirroring a full root disk to a target disk that is the same size as the source
disk. A full disk has no free cylinders.
■ Mirroring a disk created using an earlier version of Veritas Volume Manager
to a target disk that is the same size as the source disk. You only need to use
this step if mirroring using vxdiskadm fails.
■ Mirroring a full Veritas Volume Manager disk (not a root disk) that was
encapsulated in VxVM 3.5 to a target disk that is the same size as the source
disk. You only need to use this step if mirroring using vxdiskadm fails.
See the vxdiskadm(1M) manual page.
To create a mirror under any of these scenarios
1 Determine the size of the source disk’s private region, using one of the
following methods:
■ If the source disk is a root disk, obtain its private region length by running
the following command:
# vxprint -l rootdisk
■ If the source disk is not a root disk, obtain its private region length by
running the following command:
2 Use the vxdisksetup program to initialize the target disk. Enter the following:
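A sketch of the command (the privoffset, privlen, and publen attribute names are assumptions; c#t#d# is the target disk's device name):
# /usr/lib/vxvm/bin/vxdisksetup -i c#t#d# privoffset=0 privlen=XXXX publen=YYYY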
where XXXX is the size of the source disk’s private region, and YYYY is the
size of its public region.
If your system is configured to use enclosure-based naming instead of
OS-based naming, replace the c#t#d# name with the enclosure-based name
for the disk.
3 Add the newly initialized target disk to the source disk group. Enter the
following:
4 Use the vxdiskadm command and select Mirror volumes on a disk to create
the mirror. Specify the disk media names of the source disk (rootdisk) and
the target disk (medianame).
Removing a mirror
When you no longer need a mirror, you can remove it to free disk space.
Note: VxVM will not allow you to remove the last valid plex associated with a
volume.
You can also use storage attributes to specify the storage to be removed. For
example, to remove a mirror on disk mydg01 from volume vol01, enter the
following.
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
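A sketch of the command:
# vxassist -g mydg remove mirror vol01 \!mydg01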
For example, to dissociate and remove a mirror named vol01-02 from the disk
group mydg, use the following command:
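Roughly:
# vxplex -g mydg -o rm dis vol01-02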
This command removes the mirror vol01-02 and all associated subdisks. This is
equivalent to entering the following commands separately:
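That is, roughly:
# vxplex -g mydg dis vol01-02
# vxedit -g mydg -r rm vol01-02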
■ Dirty Region Logs let you quickly recover mirrored volumes after a system
crash. These logs can be either DRL log plexes, or part of a version 20 DCO
volume.
See “Dirty region logging” on page 58.
See “Adding traditional DRL logging to a mirrored volume” on page 386.
See “Preparing a volume for DRL and instant snapshots” on page 380.
■ RAID-5 logs prevent corruption of data during recovery of RAID-5 volumes.
These logs are configured as plexes on disks other than those that are used
for the columns of the RAID-5 volume.
See “RAID-5 logging” on page 50.
See “Adding a RAID-5 log” on page 403.
Note: You need a license key to use the DRL and FastResync feature. If you do not
have a license key, you can configure a DCO object and DCO volume so that snap
objects are associated with the original and snapshot volumes. However, without
a license key, only full resynchronization can be performed.
The ndcomirs attribute specifies the number of DCO plexes that are created in
the DCO volume. You should configure as many DCO plexes as there are data and
snapshot plexes in the volume. The DCO plexes are used to set up a DCO volume
for any snapshot volume that you subsequently create from the snapshot plexes.
For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot
plexes.
The value of the regionsize attribute specifies the size of the tracked regions in
the volume. A write to a region is tracked by setting a bit in the change map. The
default value is 64k (64KB). A smaller value requires more disk space for the change
maps, but the finer granularity provides faster resynchronization.
To enable DRL logging on the volume, specify drl=on (this is the default). For
sequential DRL, specify drl=sequential. If you do not need DRL, specify drl=off.
You can also specify vxassist-style storage attributes to define the disks that
can or cannot be used for the plexes of the DCO volume.
See “Specifying storage for version 20 DCO plexes” on page 382.
The vxsnap prepare command automatically enables Persistent FastResync on
the volume. Persistent FastResync is also set automatically on any snapshots that
are generated from a volume on which this feature is enabled.
If the volume is a RAID-5 volume, it is converted to a layered volume that can be
used with instant snapshots and Persistent FastResync.
See “Using a DCO and DCO volume with a RAID-5 volume” on page 383.
To view the details of the DCO object and DCO volume that are associated with a
volume, use the vxprint command. The following is example vxprint -vh output
for the volume named vol1 (the TUTIL0 and PUTIL0 columns are omitted for
clarity):
In this output, the DCO object is shown as vol1_dco, and the DCO volume as
vol1_dcl with 2 plexes, vol1_dcl-01 and vol1_dcl-02.
If you need to relocate DCO plexes to different disks, you can use the vxassist
move command. For example, the following command moves the plexes of the
DCO volume, vol1_dcl, for volume vol1 from disk03 and disk04 to disk07 and
disk08.
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
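A sketch of the command (storage attributes exclude the old disks and name the new ones):
# vxassist -g mydg move vol1_dcl \!disk03 \!disk04 disk07 disk08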
Warning: Dissociating a DCO and DCO volume disables FastResync on the volume.
A full resynchronization of any remaining snapshots is required when they are
snapped back.
To find out the version number of a DCO that is associated with a volume
1 Use the vxprint command on the volume to discover the name of its DCO.
Enter the following:
2 Use the vxprint command on the DCO to determine its version number. Enter
the following:
This displays the logging type as REGION for DRL, DRLSEQ for sequential DRL,
or NONE if DRL is not enabled.
If the number of active mirrors in the volume is less than 2, DRL logging is
not performed even if DRL is enabled on the volume.
See “Determining if DRL logging is active on a volume” on page 385.
2 Use the vxprint command on the DCO volume to find out if DRL logging is
active:
You can use these commands to change the DRL policy on a volume by first
disabling and then re-enabling DRL as required. If a data change map (DCM, used
with Veritas Volume Replicator) is attached to a volume, DRL is automatically
disabled.
This command also has the effect of disabling FastResync tracking on the volume.
If specified, the -b option makes adding the new logs a background task.
The nlog attribute specifies the number of log plexes to add. By default, one log
plex is added. The loglen attribute specifies the size of the log, where each bit
represents one region in the volume. For example, a 10 GB volume with a 64 KB
region size needs a 20K log.
For example, to add a single log plex for the volume vol03 in the disk group mydg,
use the following command:
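A sketch of the command:
# vxassist -g mydg addlog vol03 logtype=drl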
When you use the vxassist command to add a log subdisk to a volume, a log plex
is created by default to contain the log subdisk. If you do not want a log plex,
include the keyword nolog in the layout specification.
For a volume that will be written to sequentially, such as a database log volume,
use the following logtype=drlseq attribute to specify that sequential DRL will
be used:
After you create the plex containing a log subdisk, you can treat it as a regular
plex. You can add subdisks to the log plex. If you need to, you can remove the log
plex and log subdisk.
See “Removing a traditional DRL log” on page 387.
By default, the vxassist command removes one log. Use the optional attribute
nlog=n to specify the number of logs that are to remain after the operation
completes.
You can use storage attributes to specify the storage from which a log will be
removed. For example, to remove a log on disk mydg10 from volume vol01,
enter the following command.
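Roughly:
# vxassist -g mydg remove log vol01 \!mydg10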
To upgrade a disk group to the latest version, use the following command:
This command assumes that the volumes can only have version 0 DCOs as
the disk group has just been upgraded.
See “Determining the DCO version number” on page 383.
To upgrade each volume within the disk group, repeat the following steps as
required.
3 If the volume to be upgraded has a traditional DRL plex or subdisk (that is,
the DRL logs are not held in a version 20 DCO volume), use the following
command to remove this:
4 For a volume that has one or more associated snapshot volumes, use the
following command to reattach and resynchronize each snapshot:
If FastResync was enabled on the volume before the snapshot was taken, the
data in the snapshot plexes is quickly resynchronized from the original
volume. If FastResync was not enabled, a full resynchronization is performed.
5 To turn off FastResync for the volume, use the following command:
6 To dissociate a version 0 DCO object, DCO volume and snap objects from the
volume, use the following command:
The ndcomirs attribute specifies the number of DCO plexes that are created
in the DCO volume. You should configure as many DCO plexes as there are
data and snapshot plexes in the volume. The DCO plexes are used to set up a
DCO volume for any snapshot volume that you subsequently create from the
snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data
plexes and 2 snapshot plexes.
The regionsize attribute specifies the size of the tracked regions in the
volume. A write to a region is tracked by setting a bit in the change map. The
default value is 64k (64KB). A smaller value requires more disk space for the
change maps, but the finer granularity provides faster resynchronization.
To enable DRL logging on the volume, specify drl=on (this is the default
setting). If you need sequential DRL, specify drl=sequential. If you do not
need DRL, specify drl=off.
To define the disks that can or cannot be used for the plexes of the DCO
volume, you can also specify vxassist-style storage attributes.
To list the tags that are associated with a volume, use the following command:
If you do not specify a volume name, all the volumes and vsets in the disk group
are displayed. The acronym vt in the TY field indicates a vset.
The following is a sample listtag command:
To list the volumes that have a specified tag name, use the following command:
Tag names and tag values are case-sensitive character strings of up to 256
characters. Tag names can consist of the following ASCII characters:
■ Letters (A through Z and a through z)
■ Numbers (0 through 9)
■ Dashes (-)
■ Underscores (_)
■ Periods (.)
A tag name must start with either a letter or an underscore.
Tag values can consist of any ASCII character that has a decimal value from 32
through 127. If a tag value includes spaces, quote the specification to protect it
from the shell, as follows:
The list operation understands dotted tag hierarchies. For example, the listing
for tag=a.b includes all volumes that have tag names starting with a.b.
The tag names site, udid, and vdid are reserved. Do not use them. To avoid
possible clashes with future product features, do not start tag names with any of
the following strings: asl, be, nbu, sf, symc, or vx.
prefer Reads first from a plex that has been named as the preferred
plex.
For disk group version 150 or higher, if the local site has an SSD-based plex,
that plex is preferred.
split Divides the read requests and distributes them across all
the available plexes.
For example, to set the read policy for the volume vol01 in disk group mydg to
round-robin, use the following command:
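A sketch of the command:
# vxvol -g mydg rdpol round vol01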
For example, to set the policy for vol01 to read preferentially from the plex
vol01-02, use the following command:
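Roughly:
# vxvol -g mydg rdpol prefer vol01 vol01-02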
Removing a volume
If a volume is inactive or its contents have been archived, you may no longer need
it. In that case, you can remove the volume and free up the disk space for other
uses.
To remove a volume
1 Remove all references to the volume by application programs, including
shells, that are running on the system.
2 If the volume is mounted as a file system, unmount it with the following
command:
# umount /dev/vx/dsk/diskgroup/volume
3 If the volume is listed in the /etc/vfstab file, edit this file and remove its
entry. For more information about the format of this file and how you can
modify it, see your operating system documentation.
4 Stop all activity by VxVM on the volume with the following command:
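A sketch of the command (diskgroup and volume are placeholders):
# vxvol -g diskgroup stop volume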
You can also use the vxedit command to remove the volume as follows:
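Roughly:
# vxedit -g diskgroup -rf rm volume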
You can now optionally specify a list of disks to which the volume(s) should
be moved. At the prompt, do one of the following:
■ Press Enter to move the volumes onto available space in the disk group.
■ Specify the disks in the disk group that should be used, as follows:
As the volumes are moved from the disk, the vxdiskadm program displays
the status of the operation:
When the volumes have all been moved, the vxdiskadm program displays the
following success message:
3 At the following prompt, indicate whether you want to move volumes from
another disk (y) or return to the vxdiskadm main menu (n):
FastResync quickly and efficiently resynchronizes stale mirrors. When you use
FastResync with operations such as backup and decision support, it also increases
the efficiency of the VxVM snapshot mechanism.
See “FastResync” on page 63.
You can enable the following versions of FastResync on a volume:
■ Persistent FastResync holds copies of the FastResync maps on disk. If a system
is rebooted, you can use these copies to quickly recover mirrored volumes. To
use this form of FastResync, you must first associate a version 0 or a version
20 data change object (DCO) and DCO volume with the volume.
See “Upgrading existing volumes to use version 20 DCOs” on page 387.
See “Preparing a volume for DRL and instant snapshots” on page 380.
■ Non-Persistent FastResync holds the FastResync maps in memory. These maps
do not survive on a system that is rebooted.
By default, FastResync is not enabled on newly-created volumes. If you want to
enable FastResync on a volume that you create, specify the fastresync=on
attribute to the vxassist make command.
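For example, a new mirrored volume with FastResync enabled might be created with a command of this general form (the names and size are placeholders):
# vxassist -g mydg make myvol 10g layout=mirror fastresync=on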
To use FastResync with a snapshot, you must enable FastResync before the
snapshot is taken, and it must remain enabled until after the snapback is
completed.
Enabling FastResync on a volume
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
To list all volumes on which Persistent FastResync is enabled, use the following
command:
Disabling FastResync
Use the vxvol command to turn off Persistent or Non-Persistent FastResync for
an existing volume, as follows:
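A sketch of the command, assuming a volume vol01 in the disk group mydg:
# vxvol -g mydg set fastresync=off vol01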
Turning off FastResync releases all tracking maps for the specified volume. All
subsequent reattaches do not use the FastResync facility, but perform a full
resynchronization of the volume. The full resynchronization occurs even if you
turn on FastResync later.
Performing online relayout
concat-mirror Concatenated-mirror
concat Concatenated
nomirror Concatenated
nostripe Concatenated
span Concatenated
stripe Striped
Sometimes, you may need to perform a relayout on a plex rather than on a volume.
See “Specifying a plex for relayout” on page 400.
concat No.
concat-mirror No. Add a mirror, and then use vxassist convert instead.
raid5 Yes. The stripe width and number of columns may be defined.
stripe Yes. The stripe width and number of columns may be defined.
stripe-mirror Yes. The stripe width and number of columns may be defined.
concat No. Use vxassist convert, and then remove the unwanted mirrors
from the resulting mirrored-concatenated volume instead.
concat-mirror No.
raid5 Yes.
stripe Yes. This relayout removes a mirror and adds striping. The stripe
width and number of columns may be defined.
stripe-mirror Yes. The stripe width and number of columns may be defined.
Table 9-6 shows the supported relayout transformations for RAID-5 volumes.
concat Yes.
concat-mirror Yes.
raid5 Yes. The stripe width and number of columns may be changed.
mirror-concat No.
raid5 Yes. The stripe width and number of columns may be defined. Choose
a plex in the existing mirrored volume on which to perform the
relayout. The other plexes are removed at the end of the relayout
operation.
stripe Yes.
stripe-mirror Yes.
concat Yes.
concat-mirror Yes.
raid5 Yes. The stripe width and number of columns may be changed.
Table 9-9 shows the supported relayout transformations for unmirrored stripe
and layered striped-mirror volumes.
concat Yes.
concat-mirror Yes.
raid5 Yes. The stripe width and number of columns may be changed.
The following examples use vxassist to change the stripe width and number of
columns for a striped volume in the disk group dbasedg:
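A command of roughly the following form performs such a relayout (the volume name vol03 and the attribute values are illustrative):
# vxassist -g dbasedg relayout vol03 stripeunit=128k ncol=5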
For relayout operations that have not been stopped using the vxtask pause
command (for example, the vxtask abort command was used to stop the task,
the transformation process died, or there was an I/O failure), resume the relayout
by specifying the start keyword to vxrelayout, as follows:
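For example (the disk group and volume names are placeholders):
# vxrelayout -g mydg -o bg start vol04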
If you use the vxrelayout start command to restart a relayout that you previously
suspended using the vxtask pause command, a new untagged task is created to
complete the operation. You cannot then use the original task tag to control the
relayout.
The -o bg option restarts the relayout in the background. You can also specify
the slow and iosize option modifiers to control the speed of the relayout and the
size of each region that is copied. For example, the following command inserts a
delay of 1000 milliseconds (1 second) between copying each 10 MB region:
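A sketch of that command, again using the placeholder names mydg and vol04:
# vxrelayout -g mydg -o bg,slow=1000,iosize=10m start vol04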
The default delay and region size values are 250 milliseconds and 1 MB
respectively.
To reverse the direction of a relayout operation that has been stopped, specify the reverse keyword to vxrelayout, as follows:
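For example (placeholder names as above):
# vxrelayout -g mydg -o bg reverse vol04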
This undoes changes made to the volume so far, and returns it to its original
layout.
If you cancel a relayout using vxtask abort, the direction of the conversion is
also reversed, and the volume is returned to its original configuration.
See “Managing tasks with vxtask” on page 361.
See the vxrelayout(1M) manual page.
See the vxtask(1M) manual page.
Converting between layered and non-layered volumes
If you specify the -b option, the conversion of the volume is a background task.
The following conversion layouts are supported:
You can use volume conversion before or after you perform an online relayout to achieve more transformations than would otherwise be possible. During the relayout process, a volume may also be converted into an intermediate layout. For example,
to convert a volume from a 4-column mirrored-stripe to a 5-column
mirrored-stripe, first use vxassist relayout to convert the volume to a 5-column
striped-mirror as follows:
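For example, assuming a volume named vol1 in the disk group mydg:
# vxassist -g mydg relayout vol1 ncol=5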
When the relayout finishes, use the vxassist convert command to change the
resulting layered striped-mirror volume to a non-layered mirrored-stripe:
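Continuing the vol1/mydg example above:
# vxassist -g mydg convert vol1 layout=mirror-stripe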
Note: If the system crashes during relayout or conversion, the process continues
when the system is rebooted. However, if the system crashes during the first stage
of a two-stage relayout and conversion, only the first stage finishes. To complete
the operation, you must run vxassist convert manually.
Adding a RAID-5 log
If you specify the -b option, adding the new log is a background task.
When you add the first log to a volume, you can specify the log length. Any logs
that you add subsequently are configured with the same length as the existing
log.
For example, to create a log for the RAID-5 volume volraid, in the disk group
mydg, use the following command:
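A likely form of the command is:
# vxassist -g mydg addlog volraid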
The attach operation can only proceed if the size of the new log is large enough to hold all the data on the stripe. If the RAID-5 volume already contains logs, the new log length is the minimum of the existing log lengths. The reason is that the new log is a mirror of the old logs.
If the RAID-5 volume is not enabled, the new log is marked as BADLOG and is
enabled when the volume is started. However, the contents of the log are ignored.
If the RAID-5 volume is enabled and has other enabled RAID-5 logs, the new log’s
contents are synchronized with the other logs.
If the RAID-5 volume currently has no enabled logs, the new log is zeroed before
it is enabled.
To check whether a RAID-5 volume has an associated log, display its configuration (for example, with vxprint -ht volume, where volume is the name of the RAID-5 volume). For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
To dissociate and remove a RAID-5 log and any associated subdisks from an
existing volume, use the following command:
For example, to dissociate and remove the log plex volraid-02 from volraid in
the disk group mydg, use the following command:
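A sketch of the general form and of the example itself, using the plex and volume names given above:
# vxplex -g diskgroup -o rm dis plex
# vxplex -g mydg -o rm dis volraid-02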
You can also remove a RAID-5 log with the vxassist command, as follows:
By default, the vxassist command removes one log. To specify the number of
logs that remain after the operation, use the optional attribute nlog=n.
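For example, a sketch of the command using the volraid and mydg names from the preceding example:
# vxassist -g mydg remove log volraid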
Note: When removing a log leaves fewer than two valid logs on the volume, a warning is printed and the operation is stopped. You can force the operation by specifying the -f option with vxplex or vxassist.
Chapter 10
Creating and administering
volume sets
This chapter includes the following topics:
■ The first volume (index 0) in a volume set must be larger than the sum of the
total volume size divided by 4000, the size of the VxFS intent log, and 1MB.
Volumes 258 MB or larger should always suffice.
■ Raw I/O from and to a volume set is not supported.
■ Raw I/O from and to the component volumes of a volume set is supported
under certain conditions.
See “Raw device node access to component volumes” on page 411.
■ Volume sets can be used in place of volumes with the following vxsnap
operations on instant snapshots: addmir, dis, make, prepare, reattach,
refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop,
syncwait, and unprepare. The third-mirror break-off usage model for full-sized
instant snapshots is supported for volume sets provided that sufficient plexes
exist for each volume in the volume set.
For more information about snapshots, see the Veritas Storage Foundation
Advanced Features Administrator's Guide.
■ A full-sized snapshot of a volume set must itself be a volume set with the same
number of volumes and the same volume index numbers as the parent. The
corresponding volumes in the parent and snapshot volume sets are also subject
to the same restrictions as apply between standalone volumes and their
snapshots.
Here volset is the name of the volume set, and volume is the name of the first
volume in the volume set. The -t vxfs option creates the volume set configured
for use by VxFS. You must create the volume before running the command. vxvset
will not automatically create the volume.
For example, to create a volume set named myvset that contains the volume vol1,
in the disk group mydg, you would use the following command:
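A likely form of the command is:
# vxvset -g mydg -t vxfs make myvset vol1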
For example, to add the volume vol2, to the volume set myvset, use the following
command:
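A likely form of the command is:
# vxvset -g mydg addvol myvset vol2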
Warning: The -f (force) option must be specified if the volume being added, or
any volume in the volume set, is either a snapshot or the parent of a snapshot.
Using this option can potentially cause inconsistencies in a snapshot hierarchy
if any of the volumes involved in the operation is already in a snapshot chain.
For example, the following commands remove the volumes, vol1 and vol2, from
the volume set myvset:
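A sketch of the commands, assuming the volume set resides in the disk group mydg:
# vxvset -g mydg rmvol myvset vol1
# vxvset -g mydg rmvol myvset vol2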
Warning: The -f (force) option must be specified if the volume being removed, or
any volume in the volume set, is either a snapshot or the parent of a snapshot.
Using this option can potentially cause inconsistencies in a snapshot hierarchy
if any of the volumes involved in the operation is already in a snapshot chain.
If the name of a volume set is not specified, the command lists the details of all
volume sets in a disk group, as shown in the following example:
To list the details of each volume in a volume set, specify the name of the volume
set as an argument to the command:
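Sketches of the two forms of the listing command (the disk group and volume set names are placeholders):
# vxvset -g mydg list
# vxvset -g mydg list myvset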
The context field contains details of any string that the application has set up for
the volume or volume set to tag its purpose.
To stop and restart one or more volume sets, use the following commands:
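For example, assuming the volume set myvset in the disk group mydg:
# vxvset -g mydg stop myvset
# vxvset -g mydg start myvset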
For the example given previously, the effect of running these commands on the
component volumes is shown below:
Raw device node access to component volumes
Warning: Writing directly to or reading from the raw device node of a component
volume of a volume set should only be performed if it is known that the volume's
data will not otherwise change during the period of access.
All of the raw device nodes for the component volumes of a volume set can be
created or removed in a single operation. Raw device nodes for any volumes added
to a volume set are created automatically as required, and inherit the access mode
of the existing device nodes.
Access to the raw device nodes for the component volumes can be configured to be read-only or read-write. This mode is shared by all the raw device nodes for the component volumes of a volume set. The read-only access mode implies that any writes to the raw device fail; however, writes made through the ioctl interface, or by VxFS to update metadata, are not prevented. The read-write access mode allows direct writes via the raw device. The access mode of the raw device nodes of a volume set can be changed as required.
The presence of raw device nodes and their access mode is persistent across system
reboots.
Note the following limitations of this feature:
■ The disk group version must be 140 or greater.
■ Access to the raw device nodes of the component volumes of a volume set is
only supported for private disk groups; it is not supported for shared disk
groups in a cluster.
See “Enabling raw device access when creating a volume set” on page 412.
See “Displaying the raw device access settings for a volume set” on page 413.
See “Controlling raw device access for an existing volume set” on page 413.
The -o makedev=on option enables the creation of raw device nodes for the
component volumes at the same time that the volume set is created. The default
setting is off.
If the -o compvol_access=read-write option is specified, direct writes are allowed
to the raw device of each component volume. If the value is set to read-only, only
reads are allowed from the raw device of each component volume.
If the -o makedev=on option is specified, but -o compvol_access is not specified,
the default access mode is read-only.
If the vxvset addvol command is subsequently used to add a volume to a volume
set, a new raw device node is created in /dev/vx/rdsk/diskgroup if the value of
the makedev attribute is currently set to on. The access mode is determined by the
current setting of the compvol_access attribute.
The following example creates a volume set, myvset1, containing the volume,
myvol1, in the disk group, mydg, with raw device access enabled in read-write
mode:
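A likely form of the command, based on the options described above:
# vxvset -g mydg -o makedev=on -o compvol_access=read-write make myvset1 myvol1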
The makedev attribute can be specified to the vxvset set command to create
(makedev=on) or remove (makedev=off) the raw device nodes for the component
volumes of a volume set. If any of the component volumes are open, the -f (force)
option must be specified to set the attribute to off.
Specifying makedev=off removes the existing raw device nodes from the
/dev/vx/rdsk/diskgroup directory.
If the makedev attribute is set to off, and you use the mknod command to create
the raw device nodes, you cannot read from or write to those nodes unless you
set the value of makedev to on.
The syntax for setting the compvol_access attribute on a volume set is:
The final example removes raw device node access for the volume set, myvset2:
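Sketches of the general syntax and of the final example follow; the exact option handling should be checked against the vxvset(1M) manual page (the -f option is required if any component volume is open):
# vxvset [-g diskgroup] [-f] set compvol_access={read-only|read-write} vset
# vxvset -g mydg set makedev=off myvset2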
Chapter 11
Configuring off-host processing
Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended downtime or impacted performance.
Testing and training Development or service groups can use snapshots as test data for new applications. Snapshot data gives developers, system testers, and QA groups a realistic basis for testing the robustness, integrity, and performance of new applications.
1 On the primary host, check whether the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume. If the volume can be used for instant snapshot operations, this check returns on; otherwise, it returns off.
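A sketch of the check, assuming the %instant format attribute of vxprint (volumedg and volume are placeholders):
# vxprint -g volumedg -F%instant volume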
If the volume was created under VxVM 4.0 or a later release, and it is not
associated with a new-style DCO object and DCO volume, add a version 20
DCO and DCO volume.
See “Preparing a volume for DRL and instant snapshots” on page 380.
If the volume was created before release 4.0 of VxVM, and has any attached
snapshot plexes, or is associated with any snapshot volumes, upgrade the
volume to use a version 20 DCO.
See “Upgrading existing volumes to use version 20 DCOs” on page 387.
2 On the primary host, use the following command to check whether FastResync
is enabled on the volume:
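A sketch of the check, assuming the %fastresync format attribute of vxprint (object names are placeholders):
# vxprint -g volumedg -F%fastresync volume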
3 On the primary host, create a new volume in a separate disk group for use as
the snapshot volume.
For more information about snapshots, see the Veritas Storage Foundation
Advanced Features Administrator's Guide.
It is recommended that a snapshot disk group is dedicated to maintaining
only those disks that are used for off-host processing.
4 On the primary host, link the snapshot volume in the snapshot disk group to
the data volume. Enter the following:
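A sketch of the linking step; the attributes shown follow the vxsnap addmir usage for linked snapshots, and all object names are placeholders:
# vxsnap -g volumedg -b addmir volume alias=snapvol mirdg=snapvoldg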
You can use the vxsnap snapwait command to wait for synchronization of
the linked snapshot volume to complete. Enter the following:
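For example (same placeholder names as above):
# vxsnap -g volumedg snapwait volume mirvol=snapvol mirdg=snapvoldg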
This step sets up the snapshot volumes, and starts tracking changes to the
original volumes.
When you are ready to create a backup, go to step 5.
5 On the primary host, suspend updates to the volume that contains the
database tables. A database may have a hot backup mode that lets you do this
by temporarily suspending writes to its tables.
6 On the primary host, create the snapshot volume, snapvol, by running the
following command:
If a database spans more than one volume, you can specify all the volumes
and their snapshot volumes using one command, as follows:
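Sketches of the single-volume and multi-volume forms; the volume, snapshot, and disk group names are placeholders, and the attribute syntax should be confirmed against the vxsnap(1M) manual page:
# vxsnap -g volumedg make source=volume/snapvol=snapvol/snapdg=snapvoldg
# vxsnap -g dbasedg make source=vol1/snapvol=svol1/snapdg=sdg \
  source=vol2/snapvol=svol2/snapdg=sdg source=vol3/snapvol=svol3/snapdg=sdg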
9 On the OHP host where the backup is to be performed, use the following
command to import the snapshot volume’s disk group:
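For example, if the snapshot disk group is named snapvoldg:
# vxdg import snapvoldg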
10 The snapshot volume is initially disabled following the import. On the OHP
host, use the following commands to recover and restart the snapshot volume:
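A sketch of the recovery commands (snapvoldg and snapvol are placeholders):
# vxrecover -g snapvoldg -m snapvol
# vxvol -g snapvoldg start snapvol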
11 On the OHP host, back up the snapshot volume. If you need to remount the
file system in the volume to back it up, first run fsck on the volume. The
following are sample commands for checking and mounting a file system:
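For example, for a VxFS file system on the snapshot volume (the device paths and mount point are placeholders):
# fsck -F vxfs /dev/vx/rdsk/snapvoldg/snapvol
# mount -F vxfs /dev/vx/dsk/snapvoldg/snapvol mount_point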
At this point, back up the file system and use the following command to
unmount it:
# umount mount_point
12 On the OHP host, use the following command to deport the snapshot volume’s
disk group:
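For example:
# vxdg deport snapvoldg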
13 On the primary host, re-import the snapshot volume’s disk group using the
following command:
14 The snapshot volume is initially disabled following the import. Use the
following commands on the primary host to recover and restart the snapshot
volume:
15 On the primary host, reattach the snapshot volume to its original volume
using the following command:
For example, to reattach the snapshot volumes svol1, svol2 and svol3:
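A sketch of the reattach command for linked snapshots; the attribute names (in particular sourcedg) and the disk group names are assumptions that should be verified against the vxsnap(1M) manual page:
# vxsnap -g snapvoldg reattach svol1 source=vol1 sourcedg=volumedg \
  svol2 source=vol2 sourcedg=volumedg svol3 source=vol3 sourcedg=volumedg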
You can use the vxsnap snapwait command to wait for synchronization of
the linked snapshot volume to complete:
Repeat step 5 through step 15 each time that you need to back up the volume.
To set up a replica database using the table files that are configured within a volume
in a private disk group
1 Use the following command on the primary host to see if the volume is
associated with a version 20 data change object (DCO) and DCO volume that
allow instant snapshots and Persistent FastResync to be used with the volume:
This command returns on if the volume can be used for instant snapshot
operations; otherwise, it returns off.
If the volume was created under VxVM 4.0 or a later release, and it is not
associated with a new-style DCO object and DCO volume, it must be prepared.
See “Preparing a volume for DRL and instant snapshots” on page 380.
If the volume was created before release 4.0 of VxVM, and has any attached
snapshot plexes, or is associated with any snapshot volumes, it must be
upgraded.
See “Upgrading existing volumes to use version 20 DCOs” on page 387.
2 Use the following command on the primary host to check whether FastResync
is enabled on a volume:
3 Prepare the OHP host to receive the snapshot volume that contains the copy
of the database tables. This may involve setting up private volumes to contain
any redo logs, and configuring any files that are used to initialize the database.
4 On the primary host, create a new volume in a separate disk group for use as
the snapshot volume.
For more information about snapshots, see the Veritas Storage Foundation
Advanced Features Administrator's Guide.
It is recommended that a snapshot disk group is dedicated to maintaining
only those disks that are used for off-host processing.
5 On the primary host, link the snapshot volume in the snapshot disk group to
the data volume:
You can use the vxsnap snapwait command to wait for synchronization of
the linked snapshot volume to complete:
This step sets up the snapshot volumes, and starts tracking changes to the
original volumes.
When you are ready to create a replica database, proceed to step 6.
6 On the primary host, suspend updates to the volume that contains the
database tables. A database may have a hot backup mode that allows you to
do this by temporarily suspending writes to its tables.
7 Create the snapshot volume, snapvol, by running the following command on
the primary host:
If a database spans more than one volume, you can specify all the volumes
and their snapshot volumes using one command, as shown in this example:
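A sketch of the multi-volume form (all names are placeholders, and the attribute syntax should be confirmed against the vxsnap(1M) manual page):
# vxsnap -g dbasedg make source=vol1/snapvol=svol1/snapdg=sdg \
  source=vol2/snapvol=svol2/snapdg=sdg source=vol3/snapvol=svol3/snapdg=sdg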
This step sets up the snapshot volumes ready for the backup cycle, and starts
tracking changes to the original volumes.
8 On the primary host, if you temporarily suspended updates to a volume in
step 6, release all the database tables from hot backup mode.
9 On the primary host, deport the snapshot volume’s disk group using the
following command:
10 On the OHP host where the replica database is to be set up, use the following
command to import the snapshot volume’s disk group:
11 The snapshot volume is initially disabled following the import. Use the
following commands on the OHP host to recover and restart the snapshot
volume:
12 On the OHP host, check and mount the snapshot volume. The following are
sample commands for checking and mounting a file system:
13 On the OHP host, use the appropriate database commands to recover and
start the replica database for its decision support role.
At a later time, you can resynchronize the snapshot volume's data with the primary database.
To refresh the snapshot plexes from the original volume
1 On the OHP host, shut down the replica database, and use the following
command to unmount the snapshot volume:
# umount mount_point
2 On the OHP host, use the following command to deport the snapshot volume’s
disk group:
3 On the primary host, re-import the snapshot volume’s disk group using the
following command:
4 The snapshot volume is initially disabled following the import. Use the
following commands on the primary host to recover and restart the snapshot
volume:
5 On the primary host, reattach the snapshot volume to its original volume
using the following command:
For example, to reattach the snapshot volumes svol1, svol2 and svol3:
You can use the vxsnap snapwait command to wait for synchronization of
the linked snapshot volume to complete:
You can then proceed to create the replica database, from step 6 in the
previous procedure.
See “To set up a replica database using the table files that are configured
within a volume in a private disk group” on page 422.
Chapter 12
Administering
hot-relocation
This chapter includes the following topics:
■ About hot-relocation
About hot-relocation
If a volume has a disk I/O failure (for example, the disk has an uncorrectable error),
Veritas Volume Manager (VxVM) can detach the plex involved in the failure. I/O
stops on that plex but continues on the remaining plexes of the volume.
If a disk fails completely, VxVM can detach the disk from its disk group. All plexes
on the disk are disabled. If there are any unmirrored volumes on a disk when it
is detached, those volumes are also disabled.
Apparent disk failure may not be due to a fault in the physical disk media or the
disk controller, but may instead be caused by a fault in an intermediate or ancillary
component such as a cable, host bus adapter, or power supply.
The hot-relocation feature in VxVM automatically detects disk failures, and notifies
the system administrator and other nominated users of the failures by electronic
mail. Hot-relocation also attempts to use spare disks and free disk space to restore
redundancy and to preserve access to mirrored and RAID-5 volumes.
See “How hot-relocation works” on page 428.
If hot-relocation is disabled or you miss the electronic mail, you can use the
vxprint command or the graphical user interface to examine the status of the
disks. You may also see driver error messages on the console or in the system
messages file.
Failed disks must be removed and replaced manually.
See “Removing and replacing disks” on page 146.
For more information about recovering volumes and their data after hardware
failure, see the Veritas Volume Manager Troubleshooting Guide.
Disk failure This is normally detected as a result of an I/O failure from a VxVM
object. VxVM attempts to correct the error. If the error cannot be
corrected, VxVM tries to access configuration information in the
private region of the disk. If it cannot access the private region, it
considers the disk failed.
Warning: Hot-relocation does not guarantee the same layout of data or the same
performance after relocation. An administrator should check whether any
configuration changes are required after hot-relocation occurs.
Hot-relocation is not performed in the following cases:
■ The failing subdisks are on non-redundant volumes (that is, volumes of types other than mirrored or RAID-5).
■ There are insufficient spare disks or free disk space in the disk group.
■ The only available space is on a disk that already contains a mirror of the
failing plex.
■ The only available space is on a disk that already contains the RAID-5 log plex
or one of its healthy subdisks. Failing subdisks in the RAID-5 plex cannot be
relocated.
■ If a mirrored volume has a dirty region logging (DRL) log subdisk as part of its
data plex, failing subdisks belonging to that plex cannot be relocated.
■ If a RAID-5 volume log plex or a mirrored volume DRL log plex fails, a new log
plex is created elsewhere. There is no need to relocate the failed subdisks of
the log plex.
See the vxrelocd(1M) manual page.
Figure 12-1 shows the hot-relocation process in the case of the failure of a single
subdisk of a RAID-5 volume.
Figure 12-1 (panel a): The disk group contains five disks, mydg01 through mydg05. Two RAID-5 volumes are configured across four of the disks. One spare disk is available for hot-relocation.
To: root
Subject: Volume Manager failures on host teal
Failures have been detected by the Veritas Volume Manager:
failed plexes:
home-02
src-02
The -s option to the vxstat command asks for information about individual subdisks, and the -ff option displays the number of failed read and write operations. The following output display is typical:
FAILED
TYP NAME READS WRITES
sd mydg01-04 0 0
sd mydg01-06 0 0
sd mydg02-03 1 0
sd mydg02-04 1 0
This example shows failures on reading from subdisks mydg02-03 and mydg02-04
of disk mydg02.
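Output of this kind is produced by a vxstat invocation of roughly the following form; the object names are assumptions taken from the mail message shown above:
# vxstat -g mydg -s -ff home-02 src-02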
Hot-relocation automatically relocates the affected subdisks and initiates any
necessary recovery procedures. However, if relocation is not possible or the
hot-relocation feature is disabled, you must investigate the problem and attempt
to recover the plexes. Errors can be caused by cabling failures, so check the cables
connecting your disks to your system. If there are obvious problems, correct them
and recover the plexes using the following command:
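For example, to recover the failed plexes of the volumes home and src in the background (the volume and disk group names are assumptions based on the mail message above):
# vxrecover -b -g mydg home src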
This starts recovery of the failed plexes in the background (the command prompt
reappears before the operation completes). If an error message appears later, or
if the plexes become detached again and there are no obvious cabling failures,
replace the disk.
See “Removing and replacing disks” on page 146.
To: root
Subject: Volume Manager failures on host teal
failed disks:
mydg02
failed plexes:
home-02
src-02
mkting-01
failing disks:
mydg02
This message shows that mydg02 was detached by a failure. When a disk is
detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01
were also detached (probably because of the failure of the disk).
One possible cause of the problem could be a cabling error.
See “Partial disk failure mail messages” on page 431.
If the problem is not a cabling error, replace the disk.
See “Removing and replacing disks” on page 146.
Hot-relocation tries to move all subdisks from a failing drive to the same
destination disk, if possible.
If the failing disk is a root disk, hot-relocation only works if it can relocate all of the file systems to the same disk. If no suitable disk can be found, the system administrator is notified through email.
When hot-relocation takes place, the failed subdisk is removed from the
configuration database, and VxVM ensures that the disk space used by the failed
subdisk is not recycled as free space.
Here mydg02 is the only disk designated as a spare in the mydg disk group. The
LENGTH field indicates how much spare space is currently available on mydg02 for
relocation.
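A display of this kind is produced by the vxdg spare operation, for example (the disk group name is a placeholder):
# vxdg -g mydg spare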
The following commands can also be used to display information about disks that
are currently designated as spares:
■ vxdisk list lists disk information and displays spare disks with a spare flag.
■ vxprint lists disk and other information and displays spare disks with a SPARE
flag.
■ The list menu item on the vxdiskadm main menu lists all disks including
spare disks.
You can use the vxdisk list command to confirm that this disk is now a spare;
mydg01 should be listed with a spare flag.
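The designation itself is made with vxedit; a sketch of the command, using the disk and disk group names from the example:
# vxedit -g mydg set spare=on mydg01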
Any VM disk in this disk group can now use this disk as a spare in the event of a
failure. If a disk fails, hot-relocation automatically occurs (if possible). You are
notified of the failure and relocation through electronic mail. After successful
relocation, you may want to replace the failed disk.
To use vxdiskadm to designate a disk as a hot-relocation spare
1 Select Mark a disk as a spare for a disk group from the vxdiskadm
main menu.
2 At the following prompt, enter a disk media name (such as mydg01):
The following notice is displayed when the disk has been marked as spare:
3 At the following prompt, indicate whether you want to add more disks as
spares (y) or return to the vxdiskadm main menu (n):
Any VM disk in this disk group can now use this disk as a spare in the event
of a failure. If a disk fails, hot-relocation should automatically occur (if
possible). You should be notified of the failure and relocation through
electronic mail. After successful relocation, you may want to replace the failed
disk.
3 At the following prompt, indicate whether you want to disable more spare
disks (y) or return to the vxdiskadm main menu (n):
3 At the following prompt, indicate whether you want to add more disks to be
excluded from hot-relocation (y) or return to the vxdiskadm main menu (n):
3 At the following prompt, indicate whether you want to add more disks to be
excluded from hot-relocation (y) or return to the vxdiskadm main menu (n):
To instruct hot-relocation to use only spare disks (and not free space on other disks), add the following line to the /etc/default/vxassist file:
spare=only
If not enough storage can be located on disks marked as spare, the relocation fails. Any free space on non-spare disks is not used.
Moving relocated subdisks
To: root
Subject: Volume Manager failures on host teal
This message has information about the subdisk before relocation and can be
used to decide where to move the subdisk after relocation.
Here is an example message that shows the new location for the relocated subdisk:
To: root
Subject: Attempting VxVM relocation on host teal
Before you move any relocated subdisks, fix or replace the disk that failed.
See “Removing and replacing disks” on page 146.
Once this is done, you can move a relocated subdisk back to the original disk as
described in the following sections.
Warning: During subdisk move operations, RAID-5 volumes are not redundant.
4 If moving subdisks to their original offsets is not possible, you can choose to
unrelocate the subdisks forcibly to the specified disk, but not necessarily to
the same offsets.
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
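A sketch of the command; the volume name home is an assumption used for illustration:
# vxassist -g mydg move home \!mydg05 mydg02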
Here, \!mydg05 specifies the current location of the subdisks, and mydg02 specifies
where the subdisks should be relocated.
If the volume is enabled, subdisks within detached or disabled plexes, and detached
log or RAID-5 subdisks, are moved without recovery of data.
If the volume is not enabled, subdisks within STALE or OFFLINE plexes, and stale
log or RAID-5 subdisks, are moved without recovery. If there are other subdisks
within a non-enabled volume that require moving, the relocation fails.
For enabled subdisks in enabled plexes within an enabled volume, data is moved
to the new location, without loss of either availability or redundancy of the volume.
If vxunreloc cannot replace the subdisks back to the same original offsets, a force
option is available that allows you to move the subdisks to a specified disk without
using the original offsets.
See the vxunreloc(1M) manual page.
The examples in the following sections demonstrate the use of vxunreloc.
The destination disk should have at least as much storage capacity as was in use
on the original disk. If there is not enough space, the unrelocate operation will
fail and none of the subdisks will be moved.
Assume that mydg01 failed and the subdisks were relocated and that you want to
move the hot-relocated subdisks to mydg05 where some subdisks already reside.
You can use the force option to move the hot-relocated subdisks to mydg05, but
not to the exact offsets:
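A likely form of the command is:
# vxunreloc -g mydg -f -n mydg05 mydg01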
After the disk that experienced the failure is fixed or replaced, vxunreloc can be
used to move all the hot-relocated subdisks back to the disk. When a subdisk is
hot-relocated, its original disk-media name and the offset into the disk are saved
in the configuration database. When a subdisk is moved back to the original disk
or to a new disk using vxunreloc, the information is erased. The original
disk-media name and the original offset are saved in the subdisk records. To print
all of the subdisks that were hot-relocated from mydg01 in the mydg disk group,
use the following command:
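A sketch of the command; the sd_orig_dmname search field used here is an assumption that should be checked against the vxprint(1M) manual page:
# vxprint -g mydg -se 'sd_orig_dmname="mydg01"'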
The comment fields of all the subdisks on the destination disk remain marked as
UNRELOC until phase 3 completes. If its execution is interrupted, vxunreloc can
subsequently re-use subdisks that it created on the destination disk during a
previous execution, but it does not use any data that was moved to the destination
disk.
If a subdisk data move fails, vxunreloc displays an error message and exits.
Determine the problem that caused the move to fail, and fix it before re-executing
vxunreloc.
If the system goes down after the new subdisks are created on the destination
disk, but before all the data has been moved, re-execute vxunreloc when the
system has been rebooted.
Warning: Do not modify the string UNRELOC in the comment field of a subdisk
record.
1 To prevent vxrelocd starting, comment out the entry that invokes it in the
startup file:
2 By default, vxrelocd sends electronic mail to root when failures are detected
and relocation actions are performed. You can instruct vxrelocd to notify
additional users by adding the appropriate user names as shown here:
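Sketches of the modified startup entries; the user names, and the exact form of the entry in the startup file, are placeholders:
nohup vxrelocd root user1 user2 &
nohup vxrelocd -o slow[=IOdelay] root &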
where the optional IOdelay value indicates the desired delay in milliseconds.
The default value for the delay is 250 milliseconds.
On a Solaris 10 system, after making changes to the way vxrelocd is invoked
in the startup file, run the following command to notify that the service
configuration has changed:
Chapter 13
Administering cluster functionality (CVM)
This chapter includes the following topics:
■ Overview of clustering
Overview of clustering
Tightly-coupled cluster systems are common in the realm of enterprise-scale
mission-critical data processing. The primary advantage of clusters is protection
against hardware failure. Should the primary node fail or otherwise become
unavailable, applications can continue to run by transferring their execution to
standby nodes in the cluster. This ability to provide continuous availability of
service by switching to redundant hardware is commonly termed failover.
Another major advantage of clustered systems is their ability to reduce contention
for system resources caused by activities such as backup, decision support and
report generation. Businesses can derive enhanced value from their investment
in cluster systems by performing such operations on lightly loaded nodes in the
cluster rather than on the heavily loaded nodes that answer requests for service.
This ability to perform some operations on the lightly loaded nodes is commonly
termed load balancing.
Figure 13-1 shows a simple cluster arrangement consisting of four nodes with
similar or identical hardware characteristics (CPUs, RAM and host adapters), and
configured with identical software (including the operating system).
Figure 13-1 labels: redundant SCSI or Fibre Channel connectivity; cluster-shareable disks; cluster-shareable disk groups.
To the cluster monitor, all nodes are the same. VxVM objects configured within
shared disk groups can potentially be accessed by all nodes that join the cluster.
However, the CVM functionality of VxVM requires that one node act as the master
node; all other nodes in the cluster are slave nodes. Any node is capable of being
the master node, and it is responsible for coordinating certain VxVM activities.
In this example, node 0 is configured as the CVM master node and nodes 1, 2 and
3 are configured as CVM slave nodes. The nodes are fully connected by a private
network and they are also separately connected to shared external storage (either
disk arrays or JBODs: just a bunch of disks) via SCSI or Fibre Channel in a Storage
Area Network (SAN).
In this example, each node has two independent paths to the disks, which are
configured in one or more cluster-shareable disk groups. Multiple paths provide
resilience against failure of one of the paths, but this is not a requirement for
cluster configuration. Disks may also be connected by single paths.
The private network allows the nodes to share information about system resources
and about each other’s state. Using the private network, any node can recognize
which other nodes are currently active, which are joining or leaving the cluster,
and which have failed. The private network requires at least two communication
channels to provide redundancy against one of the channels failing. If only one
channel were used, its failure would be indistinguishable from node failure—a
condition known as network partitioning.
You can run commands that configure or reconfigure VxVM objects on any node
in the cluster. These tasks include setting up shared disk groups, creating and
reconfiguring volumes, and performing snapshot operations.
The first node to join a cluster performs the function of master node. If the master
node leaves a cluster, one of the slave nodes is chosen to be the new master.
Private disk group Belongs to only one node. A private disk group can only be imported by one system. LUNs in a private disk group may be physically accessible from one or more systems, but access is restricted to only one system at a time. The boot disk group (usually aliased by the reserved disk group name bootdg) is always a private disk group.
Shared disk group Can be shared by all nodes. A shared (or cluster-shareable) disk group is imported by all cluster nodes. LUNs in a shared disk group must be physically accessible from all systems that may join the cluster.
In a CVM cluster, most disk groups are shared. LUNs in a shared disk group are
accessible from all nodes in a cluster, allowing applications on multiple cluster
nodes to simultaneously access the same LUN. A volume in a shared disk group
can be simultaneously accessed by more than one node in the cluster, subject to
license key and disk group activation mode restrictions.
You can use the vxdg command to designate a disk group as cluster-shareable.
See “Importing disk groups as shared” on page 478.
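For example, an existing disk group can be imported as shared by specifying the -s option to vxdg (the disk group name is a placeholder):
# vxdg -s import mydg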
When a disk group is imported as cluster-shareable for one node, each disk header
is marked with the cluster ID. As each node subsequently joins the cluster, it
recognizes the disk group as being cluster-shareable and imports it. In contrast,
a private disk group's disk headers are marked with the individual node's host
name. As system administrator, you can import or deport a shared disk group at
any time; the operation takes place in a distributed fashion on all nodes.
Each LUN is marked with a unique disk ID. When cluster functionality for VxVM
starts on the master, it imports all shared disk groups (except for any that do not
have the autoimport attribute set). When a slave tries to join a cluster, the master
sends it a list of the disk IDs that it has imported, and the slave checks to see if it
can access them all. If the slave cannot access one of the listed disks, it abandons
its attempt to join the cluster. If it can access all of the listed disks, it joins the
cluster and imports the same shared disk groups as the master. When a node
leaves the cluster gracefully, it deports all its imported shared disk groups, but
they remain imported on the surviving nodes.
Reconfiguring a shared disk group is performed with the cooperation of all nodes. Configuration changes to the disk group are initiated by the master node and occur simultaneously on all nodes, so the changes are identical on every node. Such changes are atomic in nature: they either occur simultaneously on all nodes or not at all.
Whether all members of the cluster have simultaneous read and write access to
a cluster-shareable disk group depends on its activation mode setting.
See “Activation modes of shared disk groups” on page 451.
The data contained in a cluster-shareable disk group is available as long as at least
one node is active in the cluster. The failure of a cluster node does not affect access
by the remaining active nodes. Regardless of which node accesses a
cluster-shareable disk group, the configuration of the disk group looks the same.
Warning: Applications running on each node can access the data on the VM disks
simultaneously. VxVM does not protect against simultaneous writes to shared
volumes by more than one node. It is assumed that applications control consistency
(by using Veritas Cluster File System or a distributed lock manager, for example).
exclusivewrite (ew) The node has exclusive write access to the disk group. No other node can activate the disk group for write access.
readonly (ro) The node has read access to the disk group and denies write access for all other nodes in the cluster. The node has no write access to the disk group. Attempts to activate a disk group for either of the write modes on other nodes fail.
sharedread (sr) The node has read access to the disk group. The node has no write access to the disk group; however, other nodes can obtain write access.
sharedwrite (sw) The node has write access to the disk group. Attempts to activate the disk group for shared read and shared write access succeed. Attempts to activate the disk group for exclusive write and read-only access fail.
off The node has neither read nor write access to the disk group. Query operations on the disk group are permitted.
Table 13-2 summarizes the allowed and conflicting activation modes for shared
disk groups.
Shared disk groups can be automatically activated in a specified mode when the
disk group is created or imported. To control automatic activation of shared disk
groups, create a defaults file /etc/default/vxdg containing the following lines:
enable_activation=true
default_activation_mode=activation-mode
■ Any failures that require a configuration change must be sent to the master
node so that they can be resolved correctly.
■ As the master node resolves failures, all the slave nodes are correctly updated.
This ensures that all nodes have the same view of the configuration.
The practical implication of this design is that I/O failure on any node results in
the configuration of all nodes being changed. This is known as the global detach
policy. However, in some cases, it is not desirable to have all nodes react in this
way to I/O failure. To address this, an alternate way of responding to I/O failures,
known as the local detach policy, was introduced.
The local detach policy is intended for use with shared mirrored volumes in a cluster. This policy prevents I/O failure on a single node from causing a plex to be detached clusterwide, which would otherwise require the plex to be resynchronized when it is subsequently reattached.
The local detach policy is supported for disk groups that have a version number
of 120 or greater.
For small mirrored volumes, non-mirrored volumes, volumes that use hardware
mirrors, and volumes in private disk groups, there is no benefit in configuring
the local detach policy. In most cases, it is recommended that you use the default
global detach policy.
The choice between the local and global detach policies is one of node availability versus plex availability when an individual node loses access to disks. Select the local detach policy for a disk group if you are using mirrored volumes within it and would prefer a single node to lose write access to a volume rather than have a plex of the volume detached clusterwide; that is, you consider the availability of your data (retaining mirrors) more important than the availability of any one node in the cluster. This typically applies only in larger clusters, and where a parallel application is used that can seamlessly provide the same service from the other nodes.
For example, this option is not appropriate for fast failover configurations. Select
the global detach policy in all other cases.
In the event of the master node losing access to all the disks containing log/config
copies, the disk group failure policy is triggered. At this point no plexes can be
detached, as this requires access to the log/config copies, no configuration changes
to the disk group can be made, and any action requiring the kernel to write to the
klog (first open, last close, mark dirty etc) will fail. If this happened in releases
prior to 4.1, the master node always disabled the disk group. Release 4.1 introduces
the disk group failure policy, which allows you to change this behavior for critical
disk groups. This policy is only supported for disk groups that have a version
number of 120 or greater.
Table 13-3 Cluster behavior under I/O failure to a mirrored volume for different disk detach policies
Failure of a path to one disk in a volume for a single node
■ Local detach policy: Reads fail only if no plexes remain available to the affected node. Writes to the volume fail.
■ Global detach policy: The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
Failure of paths to all disks in a volume for a single node
■ Local detach policy: I/O fails for the affected node.
■ Global detach policy: The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
Failure of one or more disks in a volume for all nodes
■ Local detach policy: The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
■ Global detach policy: The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
Master node loses access to all copies of the logs:
■ dgfailpolicy=leave: The master node panics with the message “klog update failed” for a failed kernel-initiated transaction, or “cvm config update failed” for a failed user-initiated transaction.
■ dgfailpolicy=dgdisable: The master node disables the disk group.
■ dgfailpolicy=requestleave: The master node leaves the cluster after VCS handles all the applications dependent upon shared storage, either by gracefully stopping them or by failing them over to other nodes of the cluster.
The behavior of the master node under the disk group failure policy is independent
of the setting of the disk detach policy. If the disk group failure policy is set to
leave, all nodes panic in the unlikely case that none of them can access the log
copies.
If the disk group failure policy is set to requestleave, the master node gracefully
leaves the cluster if the master node loses access to all log/config copies of the
disk group. If the master node loses access to the log/config copies of a shared
disk group, Cluster Volume Manager (CVM) signals the CVM Cluster Veritas Cluster
Server agent. Veritas Cluster Server (VCS) attempts to take offline the CVM group
on the master node. When the CVM group is taken offline, the dependent service groups are also taken offline. If the dependent applications managed by VCS
cannot be taken offline for some reason, the master node may not be able to leave
the cluster gracefully.
The vxdg command can be used to set the failure policy on a shared disk group.
See “Setting the disk group failure policy on a shared disk group” on page 482.
Note: The requestleave disk group failure policy is supported only for disk groups
containing volumes that have a single plex and that do not have a DCO log attached.
The default settings for the detach and failure policies are global and dgdisable
respectively. You can use the vxdg command to change both the detach and failure
policies on a shared disk group, as shown in this example:
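A sketch of such a command; the attribute names diskdetpolicy and dgfailpolicy, and the values shown, are assumptions to be confirmed against the vxdg(1M) manual page:
# vxdg -g mydg set diskdetpolicy=local dgfailpolicy=leave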
Note: The boot disk group (usually aliased as bootdg) cannot be made
cluster-shareable. It must be private.
The cluster functionality of VxVM does not support RAID-5 volumes, or task
monitoring for cluster-shareable disk groups. These features can, however, be
used in private disk groups that are attached to specific nodes of a cluster.
If you have RAID-5 volumes in a private disk group that you wish to make
shareable, you must first relayout the volumes as a supported volume type such
as stripe-mirror or mirror-stripe. Online relayout of shared volumes is
supported provided that it does not involve RAID-5 volumes.
If a shared disk group contains RAID-5 volumes, deport it and then reimport the
disk group as private on one of the cluster nodes. Reorganize the volumes into
layouts that are supported for shared disk groups, and then deport and reimport
the disk group as shared.
Import lock
When a host in a non-CVM environment imports a disk group, an import lock is
written on all disks in that disk group. The import lock is cleared when the host
deports the disk group. The presence of the import lock prevents other hosts from
importing the disk group until the importing host has deported the disk group.
Specifically, when a host imports a disk group, the import normally fails if any
disks within the disk group appear to be locked by another host. This allows
automatic re-importing of disk groups after a reboot (autoimporting) and prevents
imports by another host, even while the first host is shut down. If the importing
host is shut down without deporting the disk group, the disk group can only be
imported by another host by clearing the host ID lock first (discussed later).
The import lock contains a host ID (the host name) reference to identify the
importing host and enforce the lock. Problems can therefore arise if two hosts
have the same host ID.
Since Veritas Volume Manager uses the host name as the host ID (by default), it
is advisable to change the host name of one machine if another machine shares
its host name. To change the host name, use the vxdctl hostid new_hostname
command.
Failover
The import locking scheme works well in an environment where disk groups are
not normally shifted from one system to another. However, consider a setup where
two hosts, Node A and Node B, can access the drives of a disk group. The disk
group is initially imported by Node A, but the administrator wants to access the
disk group from Node B if Node A crashes. Such a failover scenario can be used
to provide manual high availability to data, where the failure of one node does
not prevent access to data. Failover can be combined with a “high availability”
monitor to provide automatic high availability to data: when Node B detects that
Node A has crashed or shut down, Node B imports (fails over) the disk group to
provide access to the volumes.
Veritas Volume Manager can support failover, but it relies on the administrator
or on an external high-availability monitor, such as VCS, to ensure that the first
system is shut down or unavailable before the disk group is imported to another
system.
See “Moving disk groups between systems” on page 246.
See the vxdg(1M) manual page.
system or database is started on the imported volumes before the other host
crashes or shuts down.
If this kind of corruption occurs, your configuration must typically be rebuilt from scratch, and all data must be restored from a backup. There are typically numerous
configuration copies for each disk group, but corruption nearly always affects all
configuration copies, so redundancy does not help in this case.
As long as the configuration backup daemon, vxconfigbackupd, is running, VxVM backs up the configuration whenever it is changed. By default, backups are stored in /etc/vx/cbr/bk. You can also back up the configuration manually with the vxconfigbackup utility. The configuration can be rebuilt using the vxconfigrestore utility.
See the vxconfigbackup(1M), vxconfigbackupd(1M), and vxconfigrestore(1M) manual pages.
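Sketches of a manual backup and restore cycle; the option letters shown (-p to precommit and -c to commit the restoration) are assumptions that should be verified against the manual pages:
# vxconfigbackup mydg
# vxconfigrestore -p mydg
# vxconfigrestore -c mydg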
Disk group configuration corruption usually shows up as missing or duplicate records in the configuration databases. This can result in a variety of vxconfigd error messages.
These errors are typically reported in association with specific disk group
configuration copies, but usually apply to all copies. The following is usually
displayed along with the error:
Availability If one node fails, the other nodes can still access the shared disks.
When configured with suitable software, mission-critical applications
can continue running by transferring their execution to a standby
node in the cluster. This ability to provide continuous uninterrupted
service by switching to redundant hardware is commonly termed
failover.
Note that a standby node need not remain idle. It could be used to
serve other applications in parallel.
Note: The CVM functionality of VxVM is supported only when used with a cluster
monitor that has been configured correctly to work with VxVM.
Use a cluster monitor such as Sun Java™ System Cluster software or GAB (Group
Membership and Atomic Broadcast) in Veritas Cluster Service (VCS). For VCS, the
Veritas product installer collects the required information to configure the cluster
monitor.
The cluster monitor startup procedure effects node initialization, and brings up
the various cluster components (such as VxVM with cluster support, the cluster
monitor, and a distributed lock manager) on the node. Once this is complete,
applications may be started. The cluster monitor startup procedure must be
invoked on each node to be joined to the cluster.
For VxVM in a cluster environment, initialization consists of loading the cluster
configuration information and joining the nodes in the cluster. The first node to
join becomes the master node, and later nodes (slaves) join to the master. If two
nodes join simultaneously, VxVM chooses the master. After a given node joins,
that node has access to the shared disk groups and volumes.
Cluster reconfiguration
Cluster reconfiguration occurs if a node leaves or joins a cluster. Each node’s
cluster monitor continuously watches the other cluster nodes. When the
membership of the cluster changes, the cluster monitor informs VxVM for it to
take appropriate action.
During cluster reconfiguration, VxVM suspends I/O to shared disks. I/O resumes
when the reconfiguration completes. Applications may appear to freeze for a
short time during reconfiguration.
If other operations, such as VxVM operations or recoveries, are in progress, cluster
reconfiguration can be delayed until those operations complete.
vxclust utility
vxclust is used when Sun Java System Cluster software acts as the cluster monitor.
Every time there is a cluster reconfiguration, every node currently in the cluster
runs the vxclust utility at each of several well-orchestrated steps. The cluster
monitor facilities ensure that the same step is executed on all nodes at the same
time. A given step only starts when the previous one has completed on all nodes.
At each step in the reconfiguration, the vxclust utility determines what the CVM
functionality of VxVM should do next. After informing VxVM of its next action,
the vxclust utility waits for the outcome (success, failure, or retry) and
communicates that to the cluster monitor.
If a node does not respond to the vxclust utility request within a specific timeout
period, that node aborts. The vxclust utility then decides whether to restart the
reconfiguration or give up, depending on the circumstances. If the cause of the
reconfiguration is a local, uncorrectable error, vxclust gives up. If a node cannot
complete an operation because another node has left, the surviving node times
out. In this case, the vxclust utility requests a reconfiguration with the expectation
that another node will leave. If no other node leaves, the vxclust utility causes
the local node to leave.
If a reconfiguration step fails, the vxclust utility returns an error to the cluster
monitor. The cluster monitor may decide to abort the node, causing its immediate
departure from the cluster. Any I/O in progress to the shared disk fails and access
to the shared disks is stopped.
vxclust decides what actions to take when it is informed of changes in the cluster.
If a new master node is required (due to failure of the previous master), vxclust
determines which node becomes the new master.
vxclustadm utility
The vxclustadm command provides an interface to the CVM functionality of VxVM
when VCS is used as the cluster monitor. It is also called during cluster startup
and shutdown. In the absence of a cluster monitor, vxclustadm can also be used
to activate or deactivate the CVM functionality of VxVM on any node in a cluster.
The startnode keyword to vxclustadm starts CVM functionality on a cluster node
by passing cluster configuration information to the VxVM kernel. In response to
this command, the kernel and the VxVM configuration daemon, vxconfigd,
perform initialization.
The stopnode keyword stops CVM functionality on a node. It waits for all
outstanding I/O to complete and for all applications to close shared volumes.
The setmaster keyword migrates the CVM master to the specified node. The
migration is an online operation. Symantec recommends that you switch the
master when the cluster is not handling VxVM configuration changes or cluster
reconfiguration operations.
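For example, to make the node system01 the new CVM master (the node name is illustrative):
# vxclustadm setmaster system01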
The reinit keyword allows nodes to be added to or removed from a cluster without
stopping the cluster. Before running this command, the cluster configuration file
must have been updated with information about the supported nodes in the cluster.
The nidmap keyword prints a table showing the mapping between CVM node IDs
in VxVM’s cluster-support subsystem and node IDs in the cluster monitor. It also
prints the state of the nodes in the cluster.
The nodestate keyword reports the state of a cluster node and also the reason
for the last abort of the node as shown in this example:
# vxclustadm nodestate
Table 13-5 lists the various reasons that may be given for a node abort.

Reason                                          Description
cannot find disk on slave node                  Missing disk or bad disk on the slave node.
cannot obtain configuration data                The node cannot read the configuration data due to an error such as disk failure.
clustering license mismatch with master node    Clustering license does not match that on the master node.
connection refused by master                    The join operation of a node refused by the master node.
disk in use by another cluster                  A disk belongs to a cluster other than the one that a node is joining.
join timed out during reconfiguration           The join operation of a node has timed out due to reconfiguration taking place in the cluster.
klog update failed                              Cannot update kernel log copies during the join operation of a node.
master aborted during join                      Master node aborted while another node was joining the cluster.
recovery in progress                            Volumes that were opened by the node are still recovering.
Volume reconfiguration
Volume reconfiguration is the process of creating, changing, and removing VxVM
objects such as disk groups, volumes and plexes. In a cluster, all nodes cooperate
to perform such operations. The vxconfigd daemons play an active role in volume
reconfiguration. For reconfiguration to succeed, a vxconfigd daemon must be
running on each of the nodes.
vxconfigd daemon
The VxVM configuration daemon, vxconfigd, maintains the configuration of
VxVM objects. It receives cluster-related instructions from the vxclust utility
under Sun Java System Cluster software, or from the kernel when running VCS.
A separate copy of vxconfigd runs on each node, and these copies communicate
with each other over a network. When invoked, a VxVM utility communicates
with the vxconfigd daemon running on the same node; it does not attempt to
connect with vxconfigd daemons on other nodes. During cluster startup,
SunCluster or VCS prompts vxconfigd to begin cluster operation and indicates
whether it is a master node or a slave node.
When a node is initialized for cluster operation, the vxconfigd daemon is notified
that the node is about to join the cluster and is provided with the following
information from the cluster monitor configuration database:
■ cluster ID
■ node IDs
■ master node ID
■ role of the node
■ network address of the node
On the master node, the vxconfigd daemon sets up the shared configuration by
importing shared disk groups, and informs the vxclust utility (for SunCluster)
or the kernel (for VCS) when it is ready for the slave nodes to join the cluster.
On slave nodes, the vxconfigd daemon is notified when the slave node can join
the cluster. When the slave node joins the cluster, the vxconfigd daemon and the
VxVM kernel communicate with their counterparts on the master node to set up
the shared configuration.
When a node leaves the cluster, the kernel notifies the vxconfigd daemon on all
the other nodes. The master node then performs any necessary cleanup. If the
master node leaves the cluster, the kernels select a new master node and the
vxconfigd daemons on all nodes are notified of the choice.
If the vxconfigd daemon is stopped on one or more nodes, the following actions are taken:
■ If the vxconfigd daemon is stopped on the master node, attempts by the slave
nodes to reconnect to the master do not succeed until the vxconfigd daemon is
restarted on the master. In this case, the vxconfigd daemons on the slave nodes
have not lost information about the shared configuration, so any displayed
configuration information is correct.
■ If the vxconfigd daemon is stopped on a slave node, the master node takes no
action. When the vxconfigd daemon is restarted on the slave, the slave
vxconfigd daemon attempts to reconnect to the master daemon and to
re-acquire the information about the shared configuration. (Neither the kernel
view of the shared configuration nor access to shared disks is affected.) Until
the vxconfigd daemon on the slave node has successfully reconnected to the
vxconfigd daemon on the master node, it has very little information about
the shared configuration and any attempts to display or modify the shared
configuration can fail. For example, shared disk groups listed using the vxdg
list command are marked as disabled; when the rejoin completes successfully,
they are marked as enabled.
■ If the vxconfigd daemon is stopped on both the master and slave nodes, the
slave nodes do not display accurate configuration information until vxconfigd
is restarted on the master and slave nodes, and the daemons have reconnected.
If the vxclust utility (for SunCluster) or the CVM agent (for VCS) determines that
the vxconfigd daemon has stopped on a node, vxconfigd is restarted
automatically.
Warning: The -r reset option to vxconfigd restarts the vxconfigd daemon and
recreates all states from scratch. This option cannot be used to restart vxconfigd
while a node is joined to a cluster because it causes cluster information to be
discarded.
2 Enter the following command to stop and restart the VxVM configuration
daemon on the affected node:
# vxconfigd -k
3 Use the following command to re-enable failover for the service groups that
you froze in step 1:
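Under VCS, and assuming a service group named grp_name (a placeholder for each affected group), the overall sequence for steps 1 through 3 might look like the following; check the hagrp(1M) manual page for the exact options:
# hagrp -freeze grp_name
# vxconfigd -k
# hagrp -unfreeze grp_name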
Node shutdown
Although it is possible to shut down the cluster on a node by invoking the shutdown
procedure of the node’s cluster monitor, this procedure is intended for terminating
cluster components after stopping any applications on the node that have access
to shared storage. VxVM supports clean node shutdown, which allows a node to
leave the cluster gracefully when all access to shared volumes has ceased. The
host is still operational, but cluster applications cannot be run on it.
The CVM functionality of VxVM maintains global state information for each
volume. This enables VxVM to determine which volumes need to be recovered
when a node crashes. When a node leaves the cluster due to a crash or by some
other means that is not clean, VxVM determines which volumes may have writes
that have not completed and the master node resynchronizes these volumes. It
can use dirty region logging (DRL) or FastResync if these are active for any of the
volumes.
Clean node shutdown must be used after, or in conjunction with, a procedure to
halt all cluster applications. Depending on the characteristics of the clustered
application and its shutdown procedure, a successful shutdown can require a lot
of time (minutes to hours). For instance, many applications have the concept of
draining, where they accept no new work, but complete any work in progress
before exiting. This process can take a long time if, for example, a long-running
transaction is active.
When the VxVM shutdown procedure is invoked, it checks all volumes in all shared
disk groups on the node that is being shut down. The procedure then either
continues with the shutdown or fails, as follows:
■ If all volumes in shared disk groups are closed, VxVM makes them unavailable
to applications. Because all nodes are informed that these volumes are closed
on the leaving node, no resynchronization is performed.
■ If any volume in a shared disk group is open, the shutdown procedure fails.
The shutdown procedure can be repeatedly retried until it succeeds. There is
no timeout checking in this operation—it is intended as a service that verifies
that the clustered applications are no longer active.
Once shutdown succeeds, the node has left the cluster. It is not possible to access
the shared volumes until the node joins the cluster again.
Since shutdown can be a lengthy process, other reconfiguration can take place
while shutdown is in progress. Normally, the shutdown attempt is suspended
until the other reconfiguration completes. However, if it is already too far
advanced, the shutdown may complete first.
Cluster shutdown
If all nodes leave a cluster, shared volumes must be recovered when the cluster
is next started if the last node did not leave cleanly, or if resynchronization from
previous nodes leaving uncleanly is incomplete. CVM automatically handles the
recovery and resynchronization tasks when a node joins the cluster.
The vxdctl -c mode command indicates whether a node is a CVM master or slave node.
Table 13-6 shows the various messages that may be output according to the current
status of the cluster node. For example, the following output indicates that the node
has not yet been assigned a role, and is in the process of joining the cluster:
mode: enabled: cluster active - role not set
master: mozart
state: joining
reconfig: master update
# vxclustadm nidmap
Name CVM Nid CM Nid State
system01 0 0 Joined: Slave
system02 1 1 Joined: Master
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: system02
# vxclustadm -v nodestate
state: cluster member
nodeId=0
masterId=0
neighborId=1
members[0]=0xf
joiners[0]=0x0
leavers[0]=0x0
members[1]=0x0
joiners[1]=0x0
leavers[1]=0x0
reconfig_seqnum=0x9f9767
vxfen=off
state: master switching in progress
reconfig: vxconfigd in join
# vxclustadm nidmap
Name CVM Nid CM Nid State
system01 0 0 Joined: Master
system02 1 1 Joined: Slave
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: system01
If the command cannot be completed because a transaction is in progress, an error
message similar to the following is displayed:
a transaction is in progress.
Try again
If you see this message, retry the command after the master switching has
completed.
Device: c4t1d0
devicetag: c4t1d0
type: auto
clusterid: cvm2
disk: name=shdg01 id=963616090.1034.cvm2
timeout: 30
group: name=shdg id=963616065.1032.cvm2
flags: online ready autoconfig shared imported
...
Note that the clusterid field is set to cvm2 (the name of the cluster), and the
flags field includes an entry for shared. The imported flag is only set if a node
is a part of the cluster and the disk group is imported.
# vxdg list
NAME STATE ID
group2 enabled,shared 774575420.1170.teal
group1 enabled,shared 774222028.1090.teal
# vxdg -s list
NAME STATE ID
group2 enabled,shared 774575420.1170.teal
group1 enabled,shared 774222028.1090.teal
To display information about one specific disk group, use the following command:
# vxdg list diskgroup
The following is example output for the command vxdg list tempdg on the
master:
Group: tempdg
dgid: 1245902808.74.ts4200-04
import-id: 33792.73
flags: shared cds
version: 150
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: ts4200-04=sw ts4200-06=sw ts4200-05=sw
ssb: on
autotagging: on
detach-policy: global
dg-fail-policy: dgdisable
copies: nconfig=default nlog=default
config: seqno=0.1027 permlen=0 free=0 templen=0 loglen=0
Note that the flags field is set to shared. The output for the same command when
run on a slave is slightly different. The local-activation and
cluster-actv-modes fields display the activation mode for this node and for each
node in the cluster respectively. The detach-policy and dg-fail-policy fields
indicate how the cluster behaves in the event of loss of connectivity to the disks,
and to the configuration and log copies on the disks.
Note: For Sun Clusters, the command to create shared disk groups can only be
run from the master node.
If the cluster software has been run to set up the cluster, a shared disk group can
be created using the following command:
# vxdg -s init diskgroup diskname=devicename
where diskgroup is the disk group name, diskname is the administrative name
chosen for a VM disk, and devicename is the device name (or disk access name).
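For example, to create a shared disk group named shareddg on the device c1t1d0 (both names are illustrative):
# vxdg -s init shareddg shareddg01=c1t1d0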
Warning: The operating system cannot tell if a disk is shared. To protect data
integrity when dealing with disks that can be accessed by multiple systems, use
the correct designation when adding a disk to a disk group. VxVM allows you to
add a disk that is not physically shared to a shared disk group if the node where
the disk is accessible is the only node in the cluster. However, this means that
other nodes cannot join the cluster. Furthermore, if you attempt to add the same
disk to different disk groups (private or shared) on two nodes at the same time,
the results are undefined. Perform all configuration on one node only, and
preferably on the master node.
Note: For Sun Clusters, the command to import shared disk groups can only be
run from the master node.
Disk groups can be imported as shared using the vxdg -s import command. If
the disk groups are set up before the cluster software is run, the disk groups can
be imported into the cluster arrangement using the following command:
# vxdg -s import diskgroup
where diskgroup is the disk group name or ID. On subsequent cluster restarts, the
disk group is automatically imported as shared. Note that it can be necessary to
deport the disk group (using the vxdg deport diskgroup command) before
invoking the vxdg utility.
Warning: The force option (-f) must be used with caution and only if you are fully
aware of the consequences, such as possible data corruption.
When a cluster is restarted, VxVM can refuse to auto-import a disk group for one
of the following reasons:
■ A disk in the disk group is no longer accessible because of hardware errors on
the disk. In this case, use the following command to forcibly reimport the disk
group:
# vxdg -f -s import diskgroup
Note: After a forced import, the data on the volumes may not be available and
some of the volumes may be in the disabled state.
■ Some of the disks in the shared disk group are not accessible, so the disk group
cannot access all of its disks. In this case, a forced import is unsafe and must
not be attempted because it can result in inconsistent mirrors.
Note: For Sun Clusters, the command to convert shared disk groups can only be
run from the master node.
To convert a shared disk group to a private disk group, first deport it on the master
node using this command:
# vxdg deport diskgroup
Then reimport the disk group on any cluster node using this command:
# vxdg import diskgroup
Note: For Sun Clusters, the command to move objects between shared disk groups
can only be run from the master node.
You can use the vxdg move command to move a self-contained set of VxVM objects
such as disks and top-level volumes between disk groups. In a cluster, you can
move such objects between private disk groups on any cluster node where those
disk groups are imported.
See “Moving objects between disk groups” on page 277.
Splitting a private disk group creates a private disk group, and splitting a shared
disk group creates a shared disk group. You can split a private disk group on any
cluster node where that disk group is imported.
You can split a shared disk group or create a shared target disk group on a master
node or a slave node. If you run the command to split a shared disk group or to
create a shared target disk group on a slave node, the command is shipped to the
master and executed on the master.
Note: For Sun Clusters, the command to split a shared disk group or create a shared
target disk group can only be run from the master node.
Note: For Sun Clusters, the command to perform the join can only be run from
the master node.
If you use this command to change the activation mode of a shared disk group,
you must first change the activation mode to off before setting it to any other
value, as shown here:
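Assuming a shared disk group named mydg that is to be activated in shared-write mode (the names and target mode are illustrative; confirm the attribute values against the vxdg(1M) manual page), the sequence might be:
# vxdg -g mydg set activation=off
# vxdg -g mydg set activation=sharedwrite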
Multiple opens by the same node are also supported. Any attempts by other nodes
to open the volume fail until the final close of the volume by the node that opened
it.
Specifying exclusive=off instead means that more than one node in a cluster
can open a volume simultaneously. This is the default behavior.
The current cluster protocol version is shown in the output of the following command:
# vxdctl list
Volboot file
version: 3/1
seqno: 0.19
cluster protocol version: 100
hostid: giga
entries:
You can also check the existing cluster protocol version using the following
command:
# vxdctl protocolversion
# vxdctl support
Support information:
vxconfigd_vrsn: 31
dg_minimum: 20
dg_maximum: 160
kernel: 31
protocol_minimum: 90
protocol_maximum: 100
protocol_current: 100
You can also use the following command to display the maximum and minimum
cluster protocol version supported by the current Veritas Volume Manager release:
# vxdctl protocolrange
Warning: While the vxrecover utility is active, there can be some degradation in
system performance.
In a cluster environment, the vxstat utility returns statistics for the total usage,
by all nodes, for the requested objects. If a local object is specified, its local usage
is returned.
You can optionally specify a subset of nodes using the following form of the
command:
where node is the CVM node ID number. You can find out the CVM node ID by
using the following command:
# vxclustadm nidmap
If a comma-separated list of nodes is supplied, the vxstat utility displays the sum
of the statistics for the nodes in the list.
For example, to obtain statistics for node 2, volume vol1,use the following
command:
To obtain and display statistics for the entire cluster, use the following command:
# vxstat -b
The statistics for all nodes are summed. For example, if node 1 performed 100 I/O
operations and node 2 performed 200 I/O operations, vxstat -b displays a total
of 300 I/O operations.
By default, CVM executes commands that operate on a shared disk group on the
master node. If the circumstances require, you can issue these commands from
the slave node.
Commands that operate on private disk groups are not shipped to the master
node. Similarly, CVM does not ship commands that operate locally on the slave
node, such as vxprint and vxdisk list.
Note: In Sun Clusters, VxVM supports shared disk group configuration commands
only on the master node.
When you issue a command on the slave that is executed on the master, the
command output (on the slave node) displays the object names corresponding to
the master node. For example, the command displays the disk access name (daname)
from the master node.
When run from a slave node, a query command such as vxtask or vxstat displays
the status of the commands on the slave node. The command does not show the
status of commands that originated from the slave node and that are executing
on the master node.
Note the following error handling for commands that you originate from the slave
node, which CVM executes on the master:
■ If the vxconfigd daemon on either the slave node or on the master node fails,
the command exits. The instance of the command on the master also exits. To
determine if the command executed successfully, use the vxprint command
to check the status of the VxVM objects.
■ If the slave node that shipped the command or the master node leaves the
cluster while the master is executing the command, the command exits on the
master node as well as on the slave node. To determine if the command
executed successfully, use the vxprint command to check the status of the
VxVM objects.
Note the following limitations for issuing CVM commands from the slave node:
■ This functionality is only available in VCS clusters. It is not supported for Sun
Clusters.
■ The CVM protocol version 100 or later is required on all nodes in the cluster.
See “Displaying the cluster protocol version” on page 483.
■ CVM uses the values in the defaults file on the master node when CVM executes
the command. To avoid any ambiguity, we recommend that you use the same
values in the defaults file for each of the nodes in the cluster.
■ CVM does not support executing all commands on the slave node. You must
issue the following commands only on the master node:
■ Commands that specify both a shared disk group and a private disk group.
■ Commands that specify a defaults file with the -d option. For example:
# vxassist -d defaults_file
Figure: Site A and Site B, each with hosts and Fibre Channel switches to storage,
connected by a private network.
If a disk group is configured across the storage at the sites, and inter-site
communication is disrupted, there is a possibility of a serial split brain condition
arising if each site continues to update the local disk group configuration copies.
See “Handling conflicting configuration copies” on page 263.
VxVM provides mechanisms for dealing with the serial split brain condition,
monitoring the health of a remote mirror, and testing the robustness of the cluster
against various types of failure (also known as fire drill).
For applications and services to function correctly at a site when other sites have
become inaccessible, at least one complete plex of each volume must be configured
at each site (site-based allocation), and the consistency of the data in the plexes
at each site must be ensured (site consistency).
By tagging disks with site names, storage can be allocated from the correct location
when creating, resizing or relocating a volume, and when changing a volume’s
layout.
As shown in the examples, the network connectivity can be Fibre Channel (FC) or
Dense Wavelength Division Multiplex (DWDM). The storage network and the
heartbeat network can be the same network.
Figure 14-2 Site-consistent volume with two plexes at each of two sites: the disk
group contains volume V, with plexes P1 and P2 at Site A and plexes P3 and P4 at
Site B.
The storage for plexes P1 and P2 is allocated storage that is tagged as belonging
to site A, and the storage for plexes P3 and P4 is allocated storage that is tagged
as belonging to site B.
Although not shown in this figure, DCO log volumes are also mirrored across the
sites, and disk group configuration copies are distributed across the sites.
Site consistency means that the data in the plexes for a volume must be consistent
at each site. The site consistency of a volume is ensured by detaching a site when
its last complete plex fails at that site. If a site fails, all its plexes are detached and
the site is said to be detached. If site consistency is not on, only the plex that fails
is detached. The remaining volumes and their plexes on that site are not detached.
To enhance read performance, VxVM will service reads from the plexes at the
local site where an application is running if the siteread read policy is set on a
volume. Writes are written to plexes at all sites.
Figure 14-3 shows a configuration with remote storage only that is also supported.
Figure 14-3: configuration with remote storage only. A cluster or standalone system
at Site A accesses storage at Site B over a metropolitan or wide area network link
(Fibre Channel or DWDM) through Fibre Channel switches.
If mirroring across sites is not required, or is not possible (as is the case for RAID-5
volumes), specify the allsites=off attribute to the vxassist command. If sites
are configured in the disk group, a plex will always be confined to a site and will
not span across sites. This enforcement cannot be overridden.
Before adding a new site to a disk group, be sure to meet the following
requirements:
■ Disks from the site being added (site tagged) are present or added to the disk
group.
■ Each existing volume with allsites set in the disk group must have at least
one plex at the site being added. If this condition is not met, the command to
add the site to the disk group fails. If the -f option is specified, the command
does not fail, but instead it sets the allsites attribute for the volume to off.
Note: By default, volumes are created as mirrored volumes when sites are configured
in a disk group. Initial synchronization then occurs between the mirrors. Depending
on the size of the volume, synchronization may take a long time. If you do not need
to perform an initial synchronization across the mirrors, use init=active with the
vxassist command.
■ At least two sites must be configured in the disk group before site consistency
is turned on.
See “Making an existing disk group site consistent” on page 495.
■ All the disks in a disk group must be registered to one of the sites before you
can set the siteconsistent attribute on the disk group.
This command has no effect if a site name has not been set for the host.
See “Changing the read policy for mirrored volumes” on page 391.
2 On each host that can access the disk group, define the site name:
3 Tag all the disks in the disk group with the appropriate site name:
Or, to tag all the disks in a specified enclosure, use the following command:
4 Use the vxdg move command to move any unsupported RAID-5 volumes to
another disk group. Alternatively, use the vxassist convert commands to
convert the volumes to a supported layout such as mirror or mirror-stripe.
You can use the site and mirror=site storage allocation attribute to ensure
that the plexes are created on the correct storage.
5 Use the vxevac command to ensure that the volumes have at least one plex
at each site. You can use the site and mirror=site storage allocation attribute
to ensure that the plexes are created on the correct storage.
See the vxevac(1m) manual page.
6 Register a site record for each site with the disk group:
8 Turn on the allsites flag for each volume that requires data replication to
each site (the commands for these steps are summarized in the sketch that follows
this procedure):
9 Turn on site consistency for each existing volume in the disk group for which
site consistency is needed. You also need to attach DCOv20 if it is not attached
already. DCOv20 is required to ensure that site detach and reattach are
instantaneous.
See “Preparing a volume for DRL and instant snapshots” on page 380.
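As a consolidated sketch of the preceding steps, assuming a disk group mydg, sites named siteA and siteB, disks mydg01 and mydg02 at site A, and a volume vol1 (all names are illustrative), the sequence might look like the following; run the vxdctl command on each host at the site in question, and confirm the exact syntax against the vxdctl(1M), vxdisk(1M), vxdg(1M), and vxvol(1M) manual pages:
# vxdctl set site=siteA
# vxdisk -g mydg settag site=siteA mydg01 mydg02
# vxdg -g mydg addsite siteA
# vxdg -g mydg addsite siteB
# vxvol -g mydg set allsites=on vol1
# vxdg -g mydg set siteconsistent=on
# vxvol -g mydg set siteconsistent=on vol1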
This section describes setting up a new disk group. To configure an existing disk
group as a Remote Mirror configuration, additional steps may be required.
See “Making an existing disk group site consistent” on page 495.
Setting up a new disk group for a Remote Mirror configuration
1 Define the site name for each host that can access the disk group.
To verify the site name assigned to the host, use the following command:
# vxdctl list
where the disks can be specified either by the disk access name or the disk
media name.
■ To autotag new disks added to the disk group based on the enclosure to
which they belong, perform the following steps in the order presented.
These steps are limited to disks in a single group.
■ Set the autotagging policy to on for the disk group, if required.
Automatic tagging is the default setting, so this step is only required
if the autotagging policy was previously disabled. To turn on
autotagging, enter the following command:
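Assuming that the autotagging policy is controlled through the vxdg set command (an assumption; confirm against the vxdg(1M) manual page) and a disk group named mydg, the command might be:
# vxdg -g mydg set autotagging=on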
After validating the consistency of the volumes and disk groups at your sites, you
should validate the procedures that you will use in the event of the various possible
types of failure. A fire drill lets you test that a site can be brought up cleanly during
recovery from a disaster scenario such as site failure.
The -f option must be specified if any plexes configured on storage at the site are
currently online.
After the site is detached, the application should run correctly on the available
site. This step verifies that the primary site is fine. Continue the fire drill by
verifying the secondary site.
Then start the application. If the application runs correctly on the secondary site,
this step verifies the integrity of the secondary site.
Use the following commands to reattach a site and recover the disk group:
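Assuming a disk group mydg and a detached site named siteA (illustrative names), the sequence might be:
# vxdg -g mydg reattachsite siteA
# vxrecover -g mydg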
The -F option is required if any imported disk groups are registered to the
site.
2 Set the new site name for the host. To verify the site name assigned to the host,
use the following command:
# vxdctl list
where the disks can be specified either by the disk access name or the disk
media name.
To display the disks or enclosures registered to a site
◆ To check which disks or enclosures are registered to a site, use the following
command:
2 Assign the site name to an enclosure within the disk group, using the following
command:
By default, a volume inherits the value that is set on its disk group.
By default, creating a site-consistent volume also creates an associated version
20 DCO volume, and enables Persistent FastResync on the volume. This allows
faster recovery of the volume during the reattachment of a site.
To turn on the site consistency requirement for an existing volume, use the
following form of the vxvol command:
To turn off the site consistency requirement for a volume, use the following
command:
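Assuming a disk group mydg and a volume vol1 (both names are illustrative), these commands might take the following form; confirm the syntax against the vxvol(1M) manual page:
# vxvol -g mydg set siteconsistent=on vol1
# vxvol -g mydg set siteconsistent=off vol1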
The siteconsistent attribute and the allsites attribute must be set to off for
RAID-5 volumes in a site-consistent disk group.
To display the setting for automatic site tagging for a disk group
◆ To determine whether automatic site tagging is on for a disk group, use the
following command:
To verify whether site consistency has been enabled for a disk group
◆ To verify whether site consistency has been enabled for a disk group, use the
following command:
Disruption of network link between sites.      See “Recovering from a loss of site connectivity” on page 505.
Failure of hosts at a site.                    See “Recovering from host failure” on page 506.
Failure of storage at a site.                  See “Recovering from storage failure” on page 506.
Failure of both hosts and storage at a site.   See “Recovering from site failure” on page 507.
If the network links between the sites are disrupted, the application environments
may continue to run in parallel, and this may lead to inconsistencies between the
disk group configuration copies at the sites. If the parallel instances of an
application issue writes to volumes, an unrecoverable data loss may occur and
manual intervention is needed. To avoid data loss, it is recommended that you
configure the VCS fencing mechanism to handle network split-brain situations.
When connectivity between the sites is restored, a serial split-brain condition will
be detected between the sites. One site must be chosen as having the preferred
version of the data and the disk group configuration copies. The data from the
chosen site is resynchronized to the other site. If new writes are issued to volumes
after the network split, they are overwritten with the data from the chosen site.
The configuration copies at the other sites are updated from the copies at the
chosen site.
At the chosen site, use the following commands to reattach a site and recover the
disk group:
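Assuming the disk group mydg and a site named siteA (illustrative names), the sequence is likely similar to the fire drill recovery; if a serial split brain has been detected, an override option such as -o overridessb may also be needed (this option usage is an assumption; see the Veritas Volume Manager Troubleshooting Guide):
# vxdg -g mydg -o overridessb reattachsite siteA
# vxrecover -g mydg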
In the case that the host systems are configured at a single site with only storage
at the remote sites, the usual resynchronization mechanism of VxVM is used to
recover the remote plexes when the storage comes back on line.
See “Handling conflicting configuration copies” on page 263.
For more information about recovering a disk group, refer to the Veritas Volume
Manager Troubleshooting Guide.
Note: vxattachd does not try to reattach a site that you have explicitly detached
by using the vxdg detachsite command.
To send mail to other users, add the user name to the line that starts vxattachd
in the /lib/svc/method/vxvm-recover startup script and run the svcadm refresh
vxvm/vxvm-recover command (for Solaris 10 onward), or edit
/etc/init.d/vxvm-recover and reboot the system (for OS releases before Solaris
10).
If you do not want a site to be recovered automatically, kill the vxattachd daemon,
and prevent it from restarting. If you stop vxattachd, the automatic plex
reattachment also stops. To kill the daemon, run the following command from
the command line:
# ps -afe
Locate the process table entry for vxattachd, and kill it by specifying its process
ID:
# kill -9 PID
If there is no entry in the process table for vxattachd, the automatic site
reattachment feature is disabled.
To prevent the automatic site reattachment feature from being restarted, comment
out the line that starts vxattachd in the /lib/svc/method/vxvm-recover startup
script and run the svcadm refresh vxvm/vxvm-recover command (for Solaris
10 onward), or /etc/init.d/vxvm-recover (for OS releases before Solaris 10).
Chapter 15
Performance monitoring
and tuning
This chapter includes the following topics:
■ Performance guidelines
■ RAID-5
■ Performance monitoring
■ Tuning VxVM
Performance guidelines
Veritas Volume Manager (VxVM) can improve system performance by optimizing
the layout of data storage on the available hardware. VxVM lets you optimize data
storage performance using the following strategies:
■ Balance the I/O load among the available disk drives.
■ Use striping and mirroring to increase I/O bandwidth to the most frequently
accessed data.
VxVM also provides data redundancy through mirroring and RAID-5, which allows
continuous access to data in the event of disk failure.
Data assignment
When you decide where to locate file systems, you typically try to balance I/O
load among the available disk drives. The effectiveness of this approach is limited.
It is difficult to predict future usage patterns, and you cannot split file systems
across the drives. For example, if a single file system receives the most disk
accesses, moving the file system to another drive also moves the bottleneck.
VxVM can split volumes across multiple drives. This approach gives you a finer
level of granularity when you locate data. After you measure access patterns, you
can adjust your decisions on where to place file systems. You can reconfigure
volumes online without adversely impacting their availability.
Striping
Striping improves access performance by cutting data into slices and storing it
on multiple devices that can be accessed in parallel. Striped plexes improve access
performance for both read and write operations.
After you identify the most heavily-accessed volumes (containing file systems or
databases), you can increase access bandwidth to this data by striping it across
portions of multiple disks.
Figure 15-1 shows an example of a single volume (HotVol) that has been identified
as a data-access bottleneck.
This volume is striped across four disks. The remaining space on these disks is
free for use by less-heavily used volumes.
Mirroring
Mirroring stores multiple copies of data on a system. When you apply mirroring
properly, data is continuously available. Mirroring also protects against data loss
due to physical media failure. If the system crashes or a disk or other hardware
fails, mirroring improves the chance of data recovery.
In some cases, you can also use mirroring to improve I/O performance. Unlike
striping, the performance gain depends on the ratio of reads to writes in the disk
accesses. If the system workload is primarily write-intensive (for example, greater
than 30 percent writes), mirroring can reduce performance.
RAID-5
RAID-5 offers many of the advantages of combined mirroring and striping, but
it requires more disk space. RAID-5 read performance is similar to that of striping,
and RAID-5 parity offers redundancy similar to mirroring. The disadvantages of
RAID-5 include relatively slow write performance.
RAID-5 is not usually seen as a way to improve throughput performance. The
exception is when the access patterns of applications show a high ratio of reads
to writes.
Figure 15-2 shows an example in which the read policy of the mirrored-stripe
volume labeled HotVol is set to prefer for the striped plex PL1.
The prefer policy distributes the load when reading across the otherwise
lightly-used disks in PL1, as opposed to the single disk in plex PL2. (HotVol is an
example of a mirrored-stripe volume in which one data plex is striped and the
other data plex is concatenated.)
To improve performance for read-intensive workloads, you can attach up to 32
data plexes to the same volume. However, this approach is usually an ineffective
use of disk space for the gain in read performance.
Performance monitoring
As a system administrator, you have two sets of priorities for performance. One
set is physical, concerned with hardware such as disks and controllers. The other
set is logical, concerned with managing software and its operation.
Best performance is usually achieved by striping and mirroring all volumes across
a reasonable number of disks and mirroring between controllers, when possible.
This procedure tends to even out the load between all disks, but it can make VxVM
more difficult to administer. For large numbers of disks (hundreds or thousands),
set up disk groups containing 10 disks, where each group is used to create a
striped-mirror volume. This technique provides good performance while easing
the task of administration.
■ average operation time (which reflects the total time through the VxVM
interface and is not suitable for comparison against other statistics programs)
These statistics are recorded for logical I/O including reads, writes, atomic copies,
verified reads, verified writes, plex reads, and plex writes for each volume. As a
result, one write to a two-plex volume results in at least five operations: one for
each plex, one for each subdisk, and one for the volume. Also, one read that spans
two subdisks shows at least four reads—one read for each subdisk, one for the
plex, and one for the volume.
VxVM also maintains other statistical data. For each plex, it records read and
write failures. For volumes, it records corrected read and write failures in addition
to read and write failures.
To reset the statistics information to zero, use the -r option. This can be done for
all objects or for only those objects that are specified. Resetting just prior to an
operation makes it possible to measure the impact of that particular operation.
Clearing the statistics also eliminates differences due to volumes being created
at different times, and removes statistics from boot time (which are not usually
of interest).
After resetting the counters, allow the system to run during typical system activity.
Run the application or workload of interest on the system to measure its effect.
When monitoring a system that is used for multiple purposes, try not to exercise
any one application more than usual. When monitoring a time-sharing system
with many users, let statistics accumulate for several hours during the normal
working day.
To display volume statistics, enter the vxstat command with no arguments. The
following is a typical display of volume statistics:
If you need to move the volume named archive onto another disk, use the following
command to identify on which disks it lies:
The subdisks line (beginning sd) indicates that the volume archive is on disk
mydg03. To move the volume off mydg03, use the following command.
Note: The ! character is a special character in some shells. This example shows
how to escape it in a bash shell.
Here dest_disk is the destination disk to which you want to move the volume. It
is not necessary to specify a destination disk. If you do not specify a destination
disk, the volume is moved to an available disk with enough space to contain the
volume.
For example, to move a volume from disk mydg03 to disk mydg04 in the disk group
mydg, use the following command:
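Assuming the disk group mydg and the volume archive from the earlier discussion, and escaping the ! character as described in the note, the commands might be:
# vxprint -g mydg -tvh archive
# vxassist -g mydg move archive \!mydg03 mydg04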
After reorganizing any particularly busy volumes, check the disk statistics. If
some volumes have been reorganized, clear statistics first and then accumulate
statistics for a reasonable period of time.
If some disks appear to be excessively busy (or have particularly long read or write
times), you may want to reconfigure some volumes. If there are two relatively
busy volumes on a disk, move them closer together to reduce seek times on the
disk. If there are too many relatively busy volumes on one disk, move them to a
disk that is less busy.
Use I/O tracing (or subdisk statistics) to determine whether volumes have excessive
activity in particular regions of the volume. If the active regions can be identified,
split the subdisks in the volume and move those regions to a less busy disk.
Note that file systems and databases typically shift their use of allocated space
over time, so this position-specific information on a volume is often not useful.
Databases are reasonable candidates for moving to non-busy disks if the space
used by a particularly busy index or table can be identified.
Examining the ratio of reads to writes helps to identify volumes that can be
mirrored to improve their performance. If the read-to-write ratio is high, mirroring
can increase performance as well as reliability. The ratio of reads to writes where
mirroring can improve performance depends greatly on the disks, the disk
controller, whether multiple controllers can be used, and the speed of the system
bus. If a particularly busy volume has a high ratio of reads to writes, it is likely
that mirroring can significantly improve performance of that volume.
Tuning VxVM
This section describes how to adjust the tunable parameters that control the
system resources that are used by VxVM. Depending on the system resources that
are available, adjustments may be required to the values of some tunable
parameters to optimize performance.
For example, a single entry has been added to the end of the following
/kernel/drv/vxio.conf file to change the value of vol_tunable to 5000:
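As a sketch only (the existing name, parent, and instance values in vxio.conf may differ on your system), the file might end with an entry similar to this:
name="vxio" parent="pseudo" instance=0
vol_tunable=5000;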
Warning: Do not edit the configuration file for the vxspec driver,
/kernel/drv/vxspec.conf.
You can use the prtconf -vP command to display the current values of the
tunables. All VxVM tunables that you specify in /kernel/drv/vxio.conf are
listed in the output under the “System properties” heading for the vxio driver.
All unchanged tunables are listed with their default values under the “Driver
properties” heading. The following sample output shows the new value for
vol_tunable in hexadecimal:
# prtconf -vP
.
.
.
vxio, instance #0
System properties:
name <vol_tunable> length <4>
value <0x00001388>
Driver properties:
name <voldrl_max_seq_dirty> length <4>
value <0x00000003>
.
.
.
For more information, see the prtconf(1M) and driver.conf(4) manual pages.
DMP tunables are set online (without requiring a reboot) by using the vxdmpadm
command as shown here:
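For example, to display and then change the dmp_delayq_interval tunable (the value shown is illustrative):
# vxdmpadm gettune dmp_delayq_interval
# vxdmpadm settune dmp_delayq_interval=30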
Parameter Description
vol_default_iodelay The count in clock ticks for which utilities pause if they
have been directed to reduce the frequency of issuing
I/O requests, but have not been given a specific delay
time. This tunable is used by utilities performing
operations such as resynchronizing mirrors or
rebuilding RAID-5 columns.
voliot_iobuf_limit The upper limit to the size of memory that can be used
for storing tracing buffers in the kernel. Tracing
buffers are used by the VxVM kernel to store the
tracing event records. As trace buffers are requested
to be stored in the kernel, the memory for them is
drawn from this pool.
voliot_iobuf_max The maximum buffer size that can be used for a single
trace buffer. Requests of a buffer larger than this size
are silently truncated to this size. A request for a
maximal buffer size from the tracing interface results
(subject to limits of usage) in a buffer of this size.
dmp_delayq_interval How long DMP should wait before retrying I/O after
an array fails over to a standby path. Some disk arrays
are not capable of accepting I/O requests immediately
after failover.
The DMP restoration policy can be set to one of the following values:
■ check_all
■ check_alternate
■ check_disabled
■ check_periodic
To check whether VxVM I/O statistics collection is currently enabled, and to
disable it, enter the following commands:
# vxtune vol_stats_enable
# vxtune vol_stats_enable 0
If you are concerned about high I/O throughput, you may also choose to disable
DMP I/O statistics collection.
To disable DMP I/O statistics collection
◆ Enter the following command:
# vxdmpadm iostat stop
To re-enable VxVM I/O statistics collection, enter the following command:
# vxtune vol_stats_enable 1
$ PATH=$PATH:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin:\
/opt/VRTSdbed/bin:/opt/VRTSob/bin
$ MANPATH=/usr/share/man:/opt/VRTS/man:$MANPATH
$ export PATH MANPATH
VxVM library commands and supporting scripts are located under the
/usr/lib/vxvm directory hierarchy. You can include these directories in your
path if you need to use them on a regular basis.
For detailed information about an individual command, refer to the appropriate
manual page in the 1M section.
See “Online manual pages” on page 569.
Commands and scripts that are provided to support other commands and scripts,
and which are not intended for general use, are not located in /opt/VRTS/bin and
do not have manual pages.
Commonly-used commands are summarized in the following tables:
■ Table A-1 lists commands for obtaining information about objects in VxVM.
■ Table A-2 lists commands for administering disks.
■ Table A-3 lists commands for creating and administering disk groups.
■ Table A-4 lists commands for creating and administering subdisks.
■ Table A-5 lists commands for creating and administering plexes.
■ Table A-6 lists commands for creating volumes.
■ Table A-7 lists commands for administering volumes.
■ Table A-8 lists commands for monitoring and controlling tasks in VxVM.
Command Description
vxdisk [-g diskgroup] list [diskname] Lists disks under control of VxVM.
Example:
Command Description
Example:
Example:
# vxdg -s list
Example:
Example:
Command Description
Example:
vxprint -pt [-g diskgroup] [plex ...] Displays information about plexes.
Example:
Command Description
Example:
# vxdiskadd c0t1d0
Example:
Command Description
vxedit [-g diskgroup] set reserve=on|off diskname  Sets aside/does not set aside a disk from use in a disk group.
Examples:
vxedit [-g diskgroup] set nohotuse=on|off diskname  Does not/does allow free space on a disk to be used for hot-relocation.
Examples:
Examples:
Command Description
Example:
Example:
vxdg -g diskgroup rmdisk diskname Removes a disk from its disk group.
Example:
Example:
# vxdiskunsetup c0t3d0
Command Description
Example:
Command Description
Example:
vxdg [-n newname] deport diskgroup  Deports a disk group and optionally renames it.
Example:
vxdg [-n newname] import diskgroup  Imports a disk group and optionally renames it.
Example:
Example:
Command Description
vxdg [-o expand] listmove sourcedg targetdg object ...  Lists the objects potentially affected by moving a disk group.
Example:
vxdg [-o expand] move sourcedg targetdg object ...  Moves objects between disk groups. See “Moving objects between disk groups” on page 277.
Example:
vxdg [-o expand] split sourcedg targetdg object ...  Splits a disk group and moves the specified objects into the target disk group.
Example:
Command Description
Example:
Example:
Example:
Command Description
Example:
# vxmake -g mydg sd \
mydg02-01 mydg02,0,8000
Command Description
Example:
vxsd [-g diskgroup] assoc plex subdisk1:0 ... subdiskM:N-1  Adds subdisks to the ends of the columns in a striped or RAID-5 volume.
Example:
Example:
Command Description
Example:
Example:
Example:
Example:
Command Description
Example:
vxsd [-g diskgroup] -o rm dis subdisk  Dissociates and removes a subdisk from a plex.
Example:
Command Description
vxplex [-g diskgroup] att volume plex Attaches a plex to an existing volume.
Example:
Example:
vxmend [-g diskgroup] off plex Takes a plex offline for maintenance.
Example:
Example:
Example:
# vxplex -g mydg mv \
vol02-02 vol02-03
Command Description
Example:
vxmend [-g diskgroup] fix clean plex  Sets the state of a plex in an unstartable volume to CLEAN.
Example:
vxplex [-g diskgroup] -o rm dis plex  Dissociates and removes a plex from a volume.
Example:
Command Description
Example:
Command Description
Example:
Example:
Example:
Command Description
Example:
Example:
Example:
vxvol [-g diskgroup] start volume Initializes and starts a volume for use.
Example:
Command Description
Example:
Command Description
Example:
Example:
Command Description
Example:
Example:
# vxresize -b -F vxfs \
-g mydg myvol 20g mydg10 \
mydg11
Example:
Command Description
Example:
Example:
Command Description
For example:
Example:
Command Description
Example:
Example:
Example:
Command Description
Example:
Example:
# vxrelayout -g mydg -o bg \
reverse vol3
Example:
Command Description
Example:
Command Description
Example:
# vxrecover -g mydg \
-t mytask -b mydg05
Example:
Example:
Command Description
Example:
Example:
Example:
Example:
Table A-9 List of CVM commands supported for executing on the slave node
vxdg
vxdg [-o expand] move sourcedg targetdg object (both disk groups should be shared)
1M Administrative commands.
4 File formats.
■ Foreign devices
■ Cluster support
Foreign devices
The device discovery feature of VxVM can discover some devices that are controlled
by third-party drivers, such as for EMC PowerPath. For these devices it may be
preferable to use the multipathing capability that is provided by the third-party
drivers rather than using the Dynamic Multi-Pathing (DMP) feature. Provided
that a suitable array support library is available, DMP can co-exist with such
drivers. Other foreign devices, for which a compatible ASL does not exist, can be
made available to Veritas Volume Manager as simple disks by using the vxddladm
addforeign command. This also has the effect of bypassing DMP.
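For example, a third-party device tree might be added and the device list refreshed as follows; the blockdir and chardir parameter names are assumptions, so confirm them against the vxddladm(1M) manual page:
# vxddladm addforeign blockdir=/dev/foo/dsk chardir=/dev/foo/rdsk
# vxdctl enable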
Mirroring guidelines
Refer to the following guidelines when using mirroring.
■ Do not place subdisks from different plexes of a mirrored volume on the same
physical disk. This action compromises the availability benefits of mirroring
and degrades performance. Using the vxassist or vxdiskadm commands
precludes this from happening.
■ To provide optimum performance improvements through the use of mirroring,
at least 70 percent of physical I/O operations should be read operations. A
higher percentage of read operations results in even better performance.
Mirroring may not provide a performance increase or may even result in a
performance decrease in a write-intensive workload environment.
■ The operating system implements a file system cache. Read requests can
frequently be satisfied from the cache. This can cause the read/write ratio for
physical I/O operations through the file system to be biased toward writing
(when compared to the read/write ratio at the application level).
■ Where possible, use disks attached to different controllers when mirroring or
striping. Most disk controllers support overlapped seeks. This allows seeks to
begin on two disks at once. Do not configure two plexes of the same volume
on disks that are attached to a controller that does not support overlapped
seeks. This is important for older controllers or SCSI disks that do not cache
on the drive. It is less important for modern SCSI disks and controllers.
Mirroring across controllers allows the system to survive a failure of one of
the controllers. Another controller can continue to provide data from a mirror.
■ A plex exhibits superior performance when striped or concatenated across
multiple disks, or when located on a much faster device. Set the read policy to
prefer the faster plex. By default, a volume with one striped plex is configured
to prefer reading from the striped plex.
See “Mirroring (RAID-1)” on page 42.
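As an illustration of these guidelines, the following vxassist command creates a two-way mirrored volume and requests that the mirrors be allocated on disks attached to different controllers (the disk group and volume names are illustrative):
# vxassist -g mydg make mirvol 10g layout=mirror nmirror=2 mirror=ctlr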
Warning: Using Dirty Region Logging can adversely impact system performance
in a write-intensive environment.
Striping guidelines
Refer to the following guidelines when using striping.
■ Do not place more than one column of a striped plex on the same physical disk.
■ Calculate stripe-unit sizes carefully. In general, a moderate stripe-unit size
(for example, 64 kilobytes, which is also the default used by vxassist) is
recommended.
■ If it is not feasible to set the stripe-unit size to the track size, and you do not
know the application I/O pattern, use the default stripe-unit size.
■ Many modern disk drives have variable geometry. This means that the track
size differs between cylinders, so that outer disk tracks have more sectors than
inner tracks. It is therefore not always appropriate to use the track size as the
stripe-unit size. For these drives, use a moderate stripe-unit size (such as 64
kilobytes), unless you know the I/O pattern of the application.
■ Volumes with small stripe-unit sizes can exhibit poor sequential I/O latency
if the disks do not have synchronized spindles. Generally, striping over disks
without synchronized spindles yields better performance when used with
larger stripe-unit sizes and multi-threaded, or largely asynchronous, random
I/O streams.
■ Typically, the greater the number of physical disks in the stripe, the greater
the improvement in I/O performance; however, this reduces the effective mean
time between failures of the volume. If this is an issue, combine striping with
mirroring to obtain both high performance and improved reliability.
■ If only one plex of a mirrored volume is striped, set the policy of the volume
to prefer for the striped plex. (The default read policy, select, does this
automatically.)
■ If more than one plex of a mirrored volume is striped, configure the same
stripe-unit size for each striped plex.
■ Where possible, distribute the subdisks of a striped volume across drives
connected to different controllers and buses.
■ Avoid the use of controllers that do not support overlapped seeks. (Such
controllers are rare.)
The vxassist command automatically applies and enforces many of these rules
when it allocates space for striped plexes in a volume, as shown in the example
below.
See “Striping (RAID-0)” on page 39.
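For example, assuming a hypothetical disk group mydg that contains at least four
disks attached to separate controllers, the following command creates a
10-gigabyte volume striped across four columns with the default 64-kilobyte
stripe-unit size:
# vxassist -g mydg make stripevol 10g layout=stripe ncol=4 stripeunit=64k
The disk group and volume names are placeholders; vxassist chooses the disks
unless you specify storage attributes on the command line.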
RAID-5 guidelines
Refer to the following guidelines when using RAID-5.
In general, the guidelines for mirroring and striping together also apply to RAID-5.
The following guidelines should also be observed with RAID-5:
■ Only one RAID-5 plex can exist per RAID-5 volume (but there can be multiple
log plexes).
■ The RAID-5 plex must be derived from at least three subdisks on three or more
physical disks. If any log plexes exist, they must belong to disks other than
those used for the RAID-5 plex.
■ RAID-5 logs can be mirrored and striped.
■ If the volume length is not explicitly specified, it is set to the length of any
RAID-5 plex associated with the volume; if no plex is associated, it is set to
zero. If you specify the volume length, it must be a multiple of the stripe-unit
size of the associated RAID-5 plex, if any.
■ If the log length is not explicitly specified, it is set to the length of the smallest
RAID-5 log plex that is associated, if any. If no RAID-5 log plexes are associated,
it is set to zero.
■ Sparse RAID-5 log plexes are not valid.
■ RAID-5 volumes are not supported for sharing in a cluster.
See “RAID-5 (striping with parity)” on page 45.
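For example, assuming a hypothetical disk group mydg that contains enough disks
for the data columns and the logs, the following command creates a RAID-5 volume
with four columns and two RAID-5 log plexes:
# vxassist -g mydg make r5vol 20g layout=raid5 ncol=4 nlog=2
The names shown are placeholders. vxassist places the log plexes on disks other
than those used for the RAID-5 plex, in keeping with the guidelines above.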
Hot-relocation guidelines
Hot-relocation automatically restores redundancy and access to mirrored and
RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks
to disks designated as spares and/or free space in the same disk group.
The hot-relocation feature is enabled by default. The associated daemon, vxrelocd,
is automatically started during system startup.
Refer to the following guidelines when using hot-relocation.
■ The hot-relocation feature is enabled by default. Although it is possible to
disable hot-relocation, it is advisable to leave it enabled. It will notify you of
the nature of the failure, attempt to relocate any affected subdisks that are
redundant, and initiate recovery procedures.
■ Although hot-relocation does not require you to designate disks as spares,
designate at least one disk as a spare within each disk group. This gives you
some control over which disks are used for relocation. If no spares exist, Veritas
Volume Manager uses any available free space within the disk group. When free
space is used for relocation, performance may be degraded after the relocation.
The example below shows how to designate a spare disk.
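For example, assuming a hypothetical disk group mydg that contains a disk with
the disk media name mydg01, the following command designates that disk as a
hot-relocation spare:
# vxedit -g mydg set spare=on mydg01
To make the disk available for general use again, set spare=off. The vxdiskadm
menus can also be used to mark or unmark spare disks.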
Accessing volume devices
Creating a volume in a disk group sets up block and character (raw) device files
that can be used to access the volume:
/dev/vx/dsk/diskgroup/volume      block device file for the volume
/dev/vx/rdsk/diskgroup/volume     character (raw) device file for the volume
The pathnames include a directory named for the disk group. Use the appropriate
device node to create, mount and repair file systems, and to lay out databases that
require raw partitions.
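For example, assuming a hypothetical volume vol01 in a disk group mydg, and
assuming that Veritas File System (VxFS) is installed, a file system can be created
on the raw device and mounted through the block device as follows:
# mkfs -F vxfs /dev/vx/rdsk/mydg/vol01
# mount -F vxfs /dev/vx/dsk/mydg/vol01 /mnt/vol01
The mount point /mnt/vol01 must already exist; the volume, disk group, and
mount point names are placeholders.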
Cluster support
The Veritas Volume Manager software includes a licensable feature that enables
it to be used in a cluster environment. The cluster functionality in Veritas Volume
Manager allows multiple hosts to simultaneously access and manage a set of disks
under Veritas Volume Manager control. A cluster is a set of hosts sharing a set of
disks; each host is referred to as a node in the cluster.
See the Veritas Storage Foundation Getting Started Guide.
If you are setting up Veritas Volume Manager for the first time, configure the
shared disks using the following steps in the specified order:
■ Start the cluster on one node only to prevent access by other nodes.
■ On one node, run the vxdiskadm program and choose option 1 to initialize new
disks. When asked to add these disks to a disk group, choose none to leave the
disks for future use.
■ On other nodes in the cluster, run vxdctl enable to see the newly initialized
disks.
■ Create disk groups on the shared disks.
■ Use the vxdg command or the Veritas Operations Manager (VOM) to create
disk groups. If you use the vxdg command, specify the -s option to create
shared disk groups.
■ Use vxassist or VOM to create volumes in the disk groups. (See the example
following this procedure.)
■ If the cluster is only running with one node, bring up the other cluster nodes.
Enter the vxdg list command on each node to display the shared disk groups.
# vxdg list
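For example, assuming hypothetical disks that have already been initialized as
described above, the following commands create a shared disk group and a volume
within it, and then verify that the disk group is marked as shared:
# vxdg -s init sharedg sharedg01=c1t1d0s2
# vxassist -g sharedg make vol01 2g
# vxdg list
The disk group, disk, device, and volume names are placeholders for the names
used on your system.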
If you are configuring existing disk groups as shared disk groups, use the following
steps in the specified order:
■ To deport the disk groups that are to be shared, use the following command:
# vxdg deport diskgroup
■ To import the disk groups to be shared, specify the -s option to the vxdg
command:
# vxdg -s import diskgroup
This procedure marks the disks in the shared disk groups as shared and
stamps them with the ID of the cluster, enabling other nodes to recognize
the shared disks.
■ If dirty region logs exist, ensure they are active. If not, replace them with
larger ones.
■ To display the shared flag for all the shared disk groups, use the following
command:
# vxdg list
Active/Active disk arrays This type of multipathed disk array allows you to access a disk in the disk array
through all the paths to the disk simultaneously, without any performance
degradation.
Active/Passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as
primary and used to access the disk at any time. Using a path other than the
designated active path results in severe performance degradation in some disk
arrays.
associate The process of establishing a relationship between VxVM objects; for example, a
subdisk that has been created and defined as having a starting point within a plex
is referred to as being associated with that plex.
associated plex A plex associated with a volume.
associated subdisk A subdisk associated with a plex.
atomic operation An operation that either succeeds completely or fails and leaves everything as it
was before the operation was started. If the operation succeeds, all aspects of the
operation take effect at once and the intermediate states of change are invisible.
If any aspect of the operation fails, then the operation aborts without leaving
partial changes.
In a cluster, an atomic operation takes place either on all nodes or not at all.
attached A state in which a VxVM object is both associated with another object and enabled
for use.
block The minimum unit of data transfer to or from a disk or array.
boot disk A disk that is used for the purpose of booting a system.
boot disk group A private disk group that contains the disks from which the system may be booted.
bootdg A reserved disk group name that is an alias for the name of the boot disk group.
clean node shutdown The ability of a node to leave a cluster gracefully when all access to shared volumes
has ceased.
cluster A set of hosts (each termed a node) that share a set of disks.
cluster manager An externally-provided daemon that runs on each node in a cluster. The cluster
managers on each node communicate with each other and inform VxVM of changes
in cluster membership.
cluster-shareable disk group A disk group in which access to the disks is shared by multiple hosts (also referred
to as a shared disk group).
column A set of one or more subdisks within a striped plex. Striping is achieved by
allocating data alternately and evenly across the columns within a plex.
concatenation A layout style characterized by subdisks that are arranged sequentially and
contiguously.
configuration copy A single copy of a configuration database.
configuration database A set of records containing detailed information on existing VxVM objects (such
as disk and volume attributes).
DCO (data change object) A VxVM object that is used to manage information about the FastResync maps in
the DCO volume. Both a DCO object and a DCO volume must be associated with a
volume to implement Persistent FastResync on that volume.
data stripe This represents the usable data portion of a stripe and is equal to the stripe minus
the parity region.
DCO volume A special volume that is used to hold Persistent FastResync change maps and
dirty region logs. See also dirty region logging.
detached A state in which a VxVM object is associated with another object, but not enabled
for use.
device name The device name or address used to access a physical disk, such as c0t0d0s2. The
c#t#d#s# syntax identifies the controller, target address, disk, and slice (or
partition).
In a SAN environment, it is more convenient to use enclosure-based naming,
which forms the device name by concatenating the name of the enclosure (such
as enc0) with the disk’s number within the enclosure, separated by an underscore
(for example, enc0_2). The term disk access name can also be used to refer to a
device name.
dirty region logging The method by which VxVM monitors and logs modifications to a plex as a
bitmap of changed regions. For a volume with a new-style DCO volume, the dirty
region log (DRL) is maintained in the DCO volume. Otherwise, the DRL is allocated
to an associated subdisk called a log subdisk.
disabled path A path to a disk that is not available for I/O. A path can be disabled due to real
hardware failures or if the user has used the vxdmpadm disable command on that
controller.
disk A collection of read/write data blocks that are indexed and can be accessed fairly
quickly. Each disk has a universally unique identifier.
disk access name An alternative term for a device name.
disk access records Configuration records used to specify the access path to particular disks. Each
disk access record contains a name, a type, and possibly some type-specific
information, which is used by VxVM in deciding how to access and manipulate
the disk that is defined by the disk access record.
disk array A collection of disks logically arranged into an object. Arrays tend to provide
benefits such as redundancy or improved performance.
disk array serial number This is the serial number of the disk array. It is usually printed on the disk array
cabinet or can be obtained by issuing a vendor-specific SCSI command to the
disks on the disk array. This number is used by the DMP subsystem to uniquely
identify a disk array.
disk controller In the multipathing subsystem of VxVM, the controller (host bus adapter or HBA)
or disk array connected to the host, which the operating system represents as the
parent node of a disk.
For example, if a disk is represented by the device name
/dev/sbus@1f,0/QLGC,isp@2,10000/sd@8,0:c then the path component
QLGC,isp@2,10000 represents the disk controller that is connected to the host
for disk sd@8,0:c.
disk enclosure An intelligent disk array that usually has a backplane with a built-in Fibre Channel
loop, and which permits hot-swapping of disks.
disk group A collection of disks that share a common configuration. A disk group
configuration is a set of records containing detailed information on existing VxVM
objects (such as disk and volume attributes) and their relationships. Each disk
group has an administrator-assigned name and an internally defined unique ID.
The disk group names bootdg (an alias for the boot disk group), defaultdg (an
alias for the default disk group) and nodg (represents no disk group) are reserved.
disk group ID A unique identifier used to identify a disk group.
disk ID A universally unique identifier that is given to each disk and can be used to identify
the disk, even if it is moved.
disk media name An alternative term for a disk name.
disk media record A configuration record that identifies a particular disk, by disk ID, and gives that
disk a logical (or administrative) name.
disk name A logical or administrative name chosen for a disk that is under the control of
VxVM, such as disk03. The term disk media name is also used to refer to a disk
name.
dissociate The process by which any link that exists between two VxVM objects is removed.
For example, dissociating a subdisk from a plex removes the subdisk from the
plex and adds the subdisk to the free space pool.
fabric mode disk A disk device that is accessible on a Storage Area Network (SAN) via a Fibre
Channel switch.
FastResync A fast resynchronization feature that is used to perform quick and efficient
resynchronization of stale mirrors, and to increase the efficiency of the snapshot
mechanism.
Fibre Channel A collective name for the fiber optic technology that is commonly used to set up
a Storage Area Network (SAN).
file system A collection of files organized together into a structure. The UNIX file system is
a hierarchical structure consisting of directories and files.
free space An area of a disk under VxVM control that is not allocated to any subdisk or
reserved for use by any other VxVM object.
free subdisk A subdisk that is not associated with any plex and has an empty putil[0] field.
hostid A string that identifies a host to VxVM. The host ID for a host is stored in its
volboot file, and is used in defining ownership of disks and disk groups.
path When a disk is connected to a host, the path to the disk consists of the host bus
adapter (HBA) on the host, the cable connector, and the controller on the disk or
disk array. A failure of any of these components results in DMP trying to shift all
I/O for that disk onto the remaining (alternate) paths.
pathgroup In the case of disks that are not multipathed by vxdmp, VxVM sees each path as a
disk. In such cases, all paths to the disk can be grouped so that only one of the
paths from the group is made visible to VxVM.
Persistent FastResync A form of FastResync that can preserve its maps across reboots of the system by
storing its change map in a DCO volume on disk.
persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes
and prevents failed mirrors from being selected for recovery. This is also known
as kernel logging.
physical disk The underlying storage device, which may or may not be under VxVM control.
plex A plex is a logical grouping of subdisks that creates an area of disk space
independent of physical disk size or other restrictions. Mirroring is set up by
creating multiple data plexes for a single volume. Each data plex in a mirrored
volume contains an identical copy of the volume data. Plexes may also be created
to represent concatenated, striped and RAID-5 volume layouts, and to store volume
logs.
primary path In Active/Passive disk arrays, a disk can be bound to one particular controller on
the disk array or owned by a controller. The disk can then be accessed using the
path through this particular controller.
private disk group A disk group in which the disks are accessed by only one specific host in a cluster.
private region A region of a physical disk used to store private, structured VxVM information.
The private region contains a disk header, a table of contents, and a configuration
database. The table of contents maps the contents of the disk. The disk header
contains a disk ID. All data in the private region is duplicated for extra reliability.
public region A region of a physical disk managed by VxVM that contains available space and
is used for allocating subdisks.
RAID (redundant array of independent disks) A disk array set up with part of the combined storage capacity used for storing
duplicate information about the data stored in that array. This makes it possible
to regenerate the data if a disk failure occurs.
read-writeback mode A recovery mode in which each read operation recovers plex consistency for the
region covered by the read. Plex consistency is recovered by reading data from
blocks of one plex and writing the data to all other writable plexes.
root configuration The configuration database for the root disk group. This is special in that it always
contains records for other disk groups, which are used for backup purposes only.
It also contains disk records that define all disk devices on the system.
root disk The disk containing the root file system. This disk may be under VxVM control.
root file system The initial file system mounted as part of the UNIX kernel startup sequence.
root partition The disk region on which the root file system resides.
root volume The VxVM volume that contains the root file system, if such a volume is designated
by the system configuration.
rootability The ability to place the root file system and the swap device under VxVM control.
The resulting volumes can then be mirrored to provide redundancy and allow
recovery in the event of disk failure.
secondary path In Active/Passive disk arrays, the paths to a disk other than the primary path are
called secondary paths. A disk is supposed to be accessed only through the primary
path until it fails, after which ownership of the disk is transferred to one of the
secondary paths.
sector A unit of size, which can vary between systems. Sector size is set per device (hard
drive, CD-ROM, and so on). Although all devices within a system are usually
configured to the same sector size for interoperability, this is not always the case.
A sector is commonly 512 bytes.
shared disk group A disk group in which access to the disks is shared by multiple hosts (also referred
to as a cluster-shareable disk group).
shared volume A volume that belongs to a shared disk group and is open on more than one node
of a cluster at the same time.
shared VM disk A VM disk that belongs to a shared disk group in a cluster.
slave node A node that is not designated as the master node of a cluster.
slice The standard division of a logical disk device. The terms partition and slice are
sometimes used synonymously.
snapshot A point-in-time copy of a volume (volume snapshot) or a file system (file system
snapshot).
spanning A layout technique that permits a volume (and its file system or database) that is
too large to fit on a single disk to be configured across multiple physical disks.
sparse plex A plex that is not as long as the volume or that has holes (regions of the plex that
do not have a backing subdisk).
SAN (storage area network) A networking paradigm that provides easily reconfigurable connectivity between
any subset of computers, disk storage and interconnecting hardware such as
switches, hubs and bridges.
stripe A set of stripe units that occupy the same positions across a series of columns.
stripe size The sum of the stripe unit sizes comprising a single stripe across all columns
being striped.
stripe unit Equally-sized areas that are allocated alternately on the subdisks (within columns)
of each striped plex. In an array, this is a set of logically contiguous blocks that
exist on each disk before allocations are made from the next disk in the array. A
stripe unit may also be referred to as a stripe element.
stripe unit size The size of each stripe unit. The default stripe unit size is 64KB. The stripe unit
size is sometimes also referred to as the stripe width.
striping A layout technique that spreads data across several physical disks using stripes.
The data is allocated alternately to the stripes within the subdisks of each plex.
subdisk A consecutive set of contiguous disk blocks that form a logical disk segment.
Subdisks can be associated with plexes to form volumes.
swap area A disk region used to hold copies of memory pages swapped out by the system
pager process.
swap volume A VxVM volume that is configured for use as a swap area.
transaction A set of configuration changes that succeed or fail as a group, rather than
individually. Transactions are used internally to maintain consistent
configurations.
VM disk A disk that is both under VxVM control and assigned to a disk group. VM disks
are sometimes referred to as VxVM disks.
volboot file A small file that is used to locate copies of the boot disk group configuration. The
file may list disks that contain configuration copies in standard locations, and
can also contain direct pointers to configuration copy locations. The volboot file
is stored in a system-dependent location.
volume A virtual disk, representing an addressable range of disk blocks used by
applications such as file systems or databases. A volume is a collection of from
one to 32 plexes.
volume configuration device The volume configuration device (/dev/vx/config) is the interface through which
all configuration changes to the volume device driver are performed.
volume device driver The driver that forms the virtual disk drive between the application and the
physical device driver level. The volume device driver is accessed through a virtual
disk device node whose character device nodes appear in /dev/vx/rdsk, and whose
block device nodes appear in /dev/vx/dsk.
volume event log The device interface (/dev/vx/event) through which volume driver events are
reported to utilities.
vxconfigd The VxVM configuration daemon, which is responsible for making changes to the
VxVM configuration. This daemon must be running before VxVM operations can
be performed.
Index
Symbols
/dev/vx/dmp directory 159
/dev/vx/rdmp directory 159
/etc/default/vxassist file 321, 438
/etc/default/vxdg defaults file 452
/etc/default/vxdg file 239
/etc/default/vxdisk file 85, 113
/etc/default/vxencap file 113
/etc/init.d/vxvm-recover file 444
/etc/vfstab file 392
/etc/volboot file 286
/etc/vx/darecs file 286
/etc/vx/dmppolicy.info file 195
/etc/vx/volboot file 248
/kernel/drv/vxio.conf file 518–519
/kernel/drv/vxspec.conf file 520
/lib/svc/method/vxvm-recover file 444
A
A/A disk arrays 158
A/A-A disk arrays 158
A/P disk arrays 158
A/P-C disk arrays 158–159
A/PF disk arrays 159
A/PG disk arrays 159
A5x00 arrays
    removing and replacing disks 152
access port 158
activation modes for shared disk groups 451–452
ACTIVE
    plex state 303
    volume state 358
active path attribute 192
active paths
    devices 193–194
Active/Active disk arrays 158
Active/Passive disk arrays 158
adaptive load-balancing 195
adding disks 122
alignment constraints 324
allocation
    site-based 490
APM
    configuring 210
array policy module (APM)
    configuring 210
array ports
    disabling for DMP 201
    displaying information about 181
    enabling for DMP 202
array support library (ASL) 88
Array Volume ID
    device naming 106
arrays
    DMP support 87
ASL
    array support library 87–88
Asymmetric Active/Active disk arrays 158
attributes
    active 192
    comment 300, 312
    dcolen 66, 334
    default for disk initialization 113
    default for encapsulation 113
    dgalign_checking 324
    drl 336, 389
    fastresync 334, 336, 395
    for specifying storage 325
    hasdcolog 395
    init 346
    len 300
    loglen 337
    logtype 336
    maxdev 251
    name 299, 312
    ndcomirror 334, 336
    ndcomirs 381
    nomanual 192
    nopreferred 192
    plex 312
    preferred priority 192
    primary 192
vxvset (continued)
removing volumes from volume sets 409
starting volume sets 410
stopping volume sets 410
W
worldwide name identifiers 81, 105
WWN identifiers 81, 105
Z
zero
setting volume contents to 347