Snapshot Client
Administrator's Guide
Release 8.3
Veritas NetBackup™ Snapshot Client Administrator's
Guide
Document version: 8.3
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
The latest documentation is available on the Veritas website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Legal Notice
Copyright © 2020 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas
Technologies LLC or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
http://www.veritas.com
Contents
About restoring over the SAN to a host acting as both client server
and media server ............................................................ 243
About restoring directly from a snapshot .................................... 244
About restoring from a disk snapshot .............................................. 245
About restoring on UNIX ........................................................ 245
About restoring on Windows ................................................... 248
Instant Recovery Makes the backups available for recovery from disk.
NetBackup for Hyper-V Backs up and restores Windows and Linux Hyper-V virtual machines (guest operating systems).
NetBackup for VMware Backs up and restores Windows and Linux VMware virtual machines (guest operating systems).
Block level incremental backup (BLIB) Enables NetBackup to back up only the changed data blocks of VMware virtual machines and Oracle or DB2 database files.
Introduction 16
Snapshot Client features
About snapshots
A snapshot is a point-in-time, read-only, disk-based copy of a client volume. After
the snapshot is created, NetBackup backs up data from the snapshot, not directly
from the client’s primary or original volume. Users and client operations can access
the primary data without interruption while data on the snapshot volume is backed
up. The contents of the snapshot volume are cataloged as if the backup was
produced directly from the primary volume. After the backup is complete, the
snapshot-based backup image on storage media is indistinguishable from a
traditional, non-snapshot backup image.
All the features of Snapshot Client (including off-host backup, FlashBackup, and
Instant Recovery) require the creation of a snapshot.
NetBackup master
server
LAN / WAN
NetBackup
client SCSI
Backup agent
Robot on
Disks of client SAN
data on SAN
Note: NetBackup for NDMP add-on software is required, and the NAS vendor must
support snapshots.
its physical disk address. These disk addresses are sent to the backup agent over
the LAN. The data is then read from the appropriate disk by the backup agent.
See “Off-host backup overview” on page 24.
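The mapping step just described can be pictured as producing a list of extents, each a (device, starting block address, length) triple, which is the form the backup agent consumes. The following Python sketch is illustrative only: the Extent shape follows the extent definition used by Snapshot Client, but blocks_to_extents is an invented helper, not a NetBackup or VxMS routine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Extent:
    device: str   # device identifier
    offset: int   # starting block address (offset in the device)
    length: int   # number of contiguous blocks

def blocks_to_extents(device, blocks):
    """Coalesce sorted block addresses into contiguous extents."""
    extents = []
    for b in sorted(blocks):
        # Extend the previous extent when the block is contiguous with it.
        if extents and b == extents[-1].offset + extents[-1].length:
            prev = extents[-1]
            extents[-1] = Extent(device, prev.offset, prev.length + 1)
        else:
            extents.append(Extent(device, b, 1))
    return extents

# A file stored in blocks 10-12 and 40 of one device maps to two extents:
print(blocks_to_extents("/dev/rdsk/c0t0d0s1", [10, 11, 12, 40]))
```

The backup agent would then read exactly those disk ranges, without consulting the client's file system.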
Two types of snapshots are available, both supported by NetBackup: copy-on-write
and mirror (or clone).
As in a copy-on-write, transactions are allowed to finish and new I/O on the primary
disk is briefly halted. When the mirror image is brought up-to-date with the source,
the mirror is split from the primary. After the mirror is split, new changes can be
made to the primary but not to the mirror. The mirror can now be backed up (see
next diagram).
If the mirror is to be used again, it must be brought up-to-date with the primary
volume (synchronized). During synchronization, the changes that were made to
the primary volume—while the mirror was split—are written to the mirror.
Since mirroring requires a complete copy of the primary on a separate device (same
size as the primary), it consumes more disk space than copy-on-write.
See “Benefits of copy-on-write versus mirror” on page 22.
Benefits of copy-on-write:
■ It consumes less disk space: No need for secondary disks containing complete copies of source data.
■ Relatively easy to configure (no need to set up mirror disks).
■ Creates a snapshot much faster than one created by a large, unsynchronized mirror, because mirror synchronization can be time consuming.
Benefits of mirror:
■ It has less effect on the performance of the host being backed up (NetBackup client), because the copy-on-write mechanism is not needed.
■ Allows for faster backups: The backup process reads data from a separate disk (mirror) operating independently of the primary disk that holds the client’s source data. Unlike copy-on-write, disk I/O is not shared with other processes or applications. Apart from NetBackup, no other applications have access to the mirror disk. During a copy-on-write, other applications as well as the copy-on-write mechanism can access the source data.
[Figure: local backup. The NetBackup master server, media server, and client communicate over the LAN/WAN; the client disks and the storage device are SCSI-attached. The figure shows the phases in the local backup process.]
Data mover: NetBackup media server (UNIX clients only) A NetBackup media server reads raw data from the client snapshot and writes it to a storage device, using mapping information that the client provides.
Data mover: Network Attached Storage An NDMP (NAS) host performs the snapshot-only backup, for Instant Recovery only.
Data mover: Third-Party Copy Device (UNIX clients only) A third-party copy device reads raw data from the client snapshot and writes the data to a storage device. To do so, the third-party copy device uses the Extended Copy command and mapping information from the client. Many kinds of devices, such as routers and disk arrays, are designed as third-party copy devices.
Data mover: NDMP Use to replicate NDMP snapshots. Select this agent in a policy that uses NDMP with Replication Director.
The mapping methods are installed as part of the NetBackup Snapshot Client
product. Depending on whether the backup data is configured over physical devices,
logical volumes, or file systems, NetBackup automatically selects the correct
mapping method.
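The automatic selection described above can be pictured as a simple dispatch on how the snapshot source is configured. This Python sketch is purely illustrative; the category and method names are invented and do not correspond to actual VxMS mapping modules.

```python
# Invented names for illustration only; not VxMS or NetBackup identifiers.
def select_mapping_method(source_type):
    """Pick a mapping style from the snapshot source's configuration."""
    mapping = {
        "physical_device": "raw-device mapping",
        "logical_volume": "volume mapping",
        "file_system": "file mapping",
    }
    try:
        return mapping[source_type]
    except KeyError:
        raise ValueError(f"unknown snapshot source type: {source_type}")

print(select_mapping_method("logical_volume"))  # volume mapping
```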
[Figure: alternate client backup. The primary client, alternate client, media server, and storage connect over the LAN/WAN; the primary and alternate clients share data through mirrors or replication.]
The figure shows the following phases in the alternate client backup process:
Phase Action
Phase 1 Primary and alternate clients collaborate to create the snapshot on the alternate client.
Phase 2 Alternate client sends the snapshot data to the media server.
Phase 3 Media server writes the snapshot data to storage.
Note: The mirror disk need not be visible to the primary client, only to the alternate
client.
Figure 1-7 Alternate client and split mirror: primary client and alternate client
share data through mirroring.
Figure 1-8 shows the media server and alternate client on the same host.
A single alternate client can handle backups for a number of primary clients, as
shown in the following diagram.
Multiple clients can share an alternate backup client of the same operating system
type.
Figure 1-10 Multiple clients with SSO: alternate client performs backup for
multiple primary clients with NetBackup SSO option on a SAN
[Figure: Solaris and Windows primary clients connect over the LAN/WAN to an alternate client/media server with SSO, which backs them up over shared Fibre Channel/SAN storage.]
and is used to complete the backup. After the backup, the snapshot volume is
unmounted. The mirror is resynchronized with the replicating volume, and the
replication is resumed.
Figure 1-11 Replication: primary client and alternate client share data through
replication
(Detail from policy attributes dialog. Requires the VVR snapshot method.)
[Figure: the NetBackup client’s primary volume is replicated on an alternate client; a mirror volume of the alternate client’s replicating volume supplies the snapshot, which the media server backs up to storage over the LAN/WAN.]
Only the VVR snapshot method for UNIX clients supports this configuration. This
configuration requires the Veritas Volume Manager (VxVM version 3.2 or later) with
the VVR license.
Figure 1-12 Alternate client split-mirror backup with FlashBackup policy type
Note: For a multi-ported SCSI disk array, a Fibre Channel SAN is not required.
(Detail from policy attributes dialog.)
[Figure: the NetBackup master server, client, and media server connect over the LAN/WAN; the disks of client data and a robot are on a Fibre Channel SAN.]
■ To use Snapshot Client to back up a VxFS file system, the client’s VxFS file
system must be patched with the dynamically linked libraries.
■ For the VxVM snapshot method, all clients must have VxVM 3.1 or later.
■ For the FlashSnap and VVR snapshot methods, all clients must have VxVM 3.2
or later. Each method requires its own add-on license to VxVM.
■ For the disk array snapshot methods, assistance may be required from the disk
array vendor.
■ To use the snapshot and off-host backup features of NetBackup Snapshot Client
with a NetBackup Oracle policy, UNIX clients must have Oracle8i or later
installed.
■ HP clients must use the OnlineJFS file system, not the default JFS.
■ Backup of an AIX 64-bit client with the NetBackup media server (data mover)
method and the VxVM or VxFS_Checkpoint snapshot method may fail with
NetBackup status code 11. This failure may occur if the client volumes are
configured with Storage Foundation 5.0 MP3. A NetBackup message similar to
the following appears in the job's Detailed Status tab:
This error occurs because the required VxVM libraries for 64-bit AIX are not
installed in the correct location. The libraries should be installed in
/opt/VRTSvxms/lib/map/aix64/.
To copy the libraries to the correct location, run the following command:
cp /usr/lpp/VRTSvxvm/VRTSvxvm/5.0.3.0/inst_root/opt/VRTSvxms/lib/map/aix64/* /opt/VRTSvxms/lib/map/aix64/
■ For off-host backups that use the NDMP data mover option to replicate
snapshots, see the NetBackup Replication Director Solutions Guide for a list of
limitations.
■ In a clustered environment, Instant Recovery point-in-time rollback is not
supported for the backups that were made with a disk array snapshot method.
The disk array snapshot methods are described in the chapter titled Configuration
of snapshot methods for disk arrays.
See “About the new disk array snapshot methods” on page 147.
■ For the TimeFinder, ShadowImage, or BusinessCopy legacy snapshot methods
(when you use the NetBackup media server or Third-Party Copy Device backup
methods): The NetBackup clients must have access to the mirror (secondary)
disk containing the snapshot of the client’s data. The NetBackup clients must
also be able to access the primary disk. The NetBackup media server only needs
access to the mirror (secondary) disk.
■ For the TimeFinder, ShadowImage, or BusinessCopy legacy snapshot methods,
a Volume Manager disk group must consist of disks from the same vendor.
■ The NetBackup media server off-host backup method does not support the
clients that use client deduplication. If the client is enabled for deduplication,
you must select Disable client-side deduplication on the policy Attributes
tab.
■ For the NetBackup media server or Third-Party Copy Device backup method:
The disk must return its SCSI serial number in response to a serial-number
inquiry (serialization), or the disk must support SCSI Inquiry Page Code 83.
■ Multiplexing is not supported for Third-Party Copy Device off-host backups.
■ For alternate client backup: The user and the group identification numbers (UIDs
and GIDs) for the files must be available to the primary client and the alternate
backup client.
■ Inline Tape Copies (called Multiple Copies in Vault) is not supported for
Third-Party Copy Device off-host backups.
■ For media servers running AIX (4.3.3 and higher), note the following:
■ Clients must be Solaris, HP, or AIX.
■ Requires the use of tape or disk LUNs to send the Extended copy commands
for backup.
■ The tape must be behind a third-party-copy-capable FC-to-SCSI router. The
router must be able to intercept Extended Copy commands that are sent to
the tape LUNs.
■ The mover.conf file must have a tape path defined, not a controller path.
Term Definition
Alternate client backup The alternate client performs a backup on behalf of another client.
Backup agent (see also Third-Party Copy Device) A general term for the host that manages the backup on behalf of the NetBackup client. The agent is either another client, the NetBackup media server, a third-party copy device, or a NAS filer.
BCV The mirror disk in an EMC primary-mirror array configuration (see mirror). BCV stands
for Business Continuance Volume.
Bridge In a SAN network, a bridge connects SCSI devices to Fibre Channel. A third-party copy
device can be implemented as part of a bridge or as part of other devices. Note that not
all bridges function as third-party copy devices.
Cache Copy-on-write snapshot methods need a separate working area on disk during the lifetime
of the snapshot. This area is called a cache. The snapshot method uses the cache to
store a copy of the client’s data blocks that are about to change because of file system
activity. This cache must be a raw disk partition that does not contain valuable information:
when you use the cache, the snapshot method overwrites any data currently stored there.
Copy-on-write In NetBackup Snapshot Client, one of two types of supported snapshots (see also mirror).
Unlike a mirror, a copy-on-write does not create a separate copy of the client’s data. It
creates a block-by-block "account" from the instant the copy-on-write was activated. The
account describes which blocks in the client data have changed and which have not.
The backup application uses this account to create the backup copy. Other terms and
trade names sometimes used for copy-on-write snapshots are space-optimized snapshots,
space-efficient snapshots, and checkpoints.
Data movement A copy operation as performed by a third-party copy device or NetBackup media server.
Data mover The host or entity that manages the backup on behalf of the NetBackup client. The data
mover can be either the NetBackup media server, a third-party copy device, or a NAS
filer.
device A general term for any of the following: LUN, logical volume, vdisk, and BCV or STD.
Disk group A configuration of disks to create a primary-mirror association, using commands unique
to the disks’ vendor. See mirror and volume group.
Extent A contiguous set of disk blocks that are allocated for a file and represented by three
values:
■ Device identifier
■ Starting block address (offset in the device)
■ Length (number of contiguous blocks)
The mapping methods in Snapshot Client determine the list of extents and send the list
to the backup agent.
FastResync (VxVM) Formerly known as Fast Mirror Resynchronization or FMR, VxVM FastResync performs
quick and efficient resynchronization of mirrors. NetBackup’s Instant Recovery feature
uses FastResync to create and maintain a point-in-time copy of a production volume.
Fibre Channel A type of high-speed network that is composed of either optical or copper cable and employs the Fibre Channel protocol. NetBackup Snapshot Client supports both arbitrated loop and switched fabric (switched Fibre Channel) environments.
File system Has two meanings. For a product, such as UFS (Sun Solaris) or VxFS (Veritas) file
systems, file system means the management and the allocation schemes of the file tree.
Regarding a file tree component, file system means a directory that is attached to the
UNIX file tree by means of the mount command. When a file system is selected as an
entry in the NetBackup Backup Selections list, this definition applies.
Instant Recovery A restore feature of a disk snapshot of a client file system or volume. Client data can be
rapidly restored from the snapshot, even after a system restart.
Mapping Converting a file or raw device (in the file system or Volume Manager) to physical
addresses or extents for backup agents on the network. NetBackup Snapshot Client
uses the VxMS library to perform file mapping.
Mapping methods A set of routines for converting logical file addresses to physical disk addresses or extents.
NetBackup Snapshot Client includes support for file-mapping and volume-mapping
methods.
Mirror
■ A disk that maintains an exact copy or duplicate of another disk. A mirror disk is often
called a secondary, and the source disk is called the primary. All writes to the primary
disk are also made to the mirror disk.
■ A type of snapshot that is captured on a mirror disk. At an appropriate moment, all
further writes to the primary disk are held back from the mirror, which "splits" the
mirror from the primary. As a result of the split, the mirror becomes a snapshot of the
primary. The snapshot can then be backed up.
NetBackup media server method An off-host backup method in which the NetBackup media server performs the data movement.
Off-host backup The off-loading of backup processing to a separate backup agent executing on another
host. NetBackup Snapshot Client provides the following off-host backup options: Alternate
Client, NetBackup media server, Third-Party Copy Device, and Network Attached Storage.
Primary disk In a primary-mirror configuration, client applications read and write their data on the
primary disk. An exact duplicate of the primary disk is the mirror.
Raw partition A single section of a raw physical disk device occupying a range of disk sectors. The
raw partition does not have a file system or other hierarchical organization scheme (thus,
a "raw" stream of disk sectors). On some operating systems, such as Solaris and HP-UX,
a raw partition is different from a block device over which the file system is mounted.
Recovery Manager (RMAN) Oracle's backup and recovery program. RMAN performs backup and restore by making
requests to a NetBackup shared library.
RMAN Proxy Copy An extension to the Oracle8i media management API which enables media management
software such as NetBackup to perform data transfer directly.
SAN (storage area network) A Fibre Channel-based network connecting servers and storage devices. The storage
devices are not attached to servers but to the network itself, and are visible to all servers
on the network.
Snapshot A point-in-time, read-only, disk-based copy of a client volume. A snapshot is created with
minimal effect on other applications. NetBackup provides several types, depending on
the device where the snapshot occurs: copy-on-write, mirror, clone, and snap.
Snapshot method A set of routines for creating a snapshot. You can select the method, or let NetBackup
select it when the backup is started (auto method).
Snapshot mirror A disk mirror created by the Veritas Volume Manager (VxVM). Snapshot mirror is an
exact copy of a primary volume at a particular moment, reproduced on a physically
separate device.
Snapshot source The entity (file system, raw partition, or logical volume) to which a snapshot method is
applied. NetBackup automatically selects the snapshot source according to the entries
in the policy’s Backup Selections list.
Snapshot volume A mirror that has been split from the primary volume or device and made available to
users. Veritas Volume Manager (VxVM) creates snapshot volumes as a point-in-time
copy of the primary volume. Subsequent changes in the primary volume are recorded in
the Data Change Log. The recorded changes can be used to resynchronize with the
primary volume by means of VxVM FastResync. The changes that were made while the
snapshot volume was split are applied to the snapshot volume to make it identical to the
primary volume.
Standard device Refers to the primary disk in an EMC primary-mirror disk array (see primary disk).
Storage Checkpoint (VxFS) Provides a consistent and a stable view of a file system image and keeps track of modified
data blocks since the last checkpoint. Unlike a mirror, a VxFS Storage Checkpoint does
not create a separate copy of the primary or the original data. It creates a block-by-block
account that describes which blocks in the original data have changed from the instant
the checkpoint was activated.
A Storage Checkpoint stores its information in available space on the primary file system,
not on a separate or a designated device. (Also, the ls command does not list Storage
Checkpoint disk usage; you must use the fsckptadm list command instead.)
Third-Party Copy Device
■ A backup agent on the SAN that operates on behalf of backup applications. The
third-party copy device receives backup data from a disk that is attached to Fibre
Channel and sends it to a storage device. The third-party copy device uses the SCSI
Extended Copy command. The third-party copy device is sometimes called a Copy
Manager, third-party copy engine, or data mover. In SAN hardware configurations, a
third-party copy device can be implemented as part of a bridge, router, or storage
device. The third-party copy device may or may not be the device to which the storage
units are connected.
■ An off-host backup method in NetBackup Snapshot Client that allows backups to be
made by means of a backup agent on the SAN.
UFS file system The UNIX file system (UFS), which is the default file system type on Sun Solaris. The
UFS file system was formerly the Berkeley Fast File System.
VxMS (Veritas Federated Mapping Services) A library of routines (methods) used by NetBackup Snapshot Client to obtain the physical addresses of logical disk objects such as files and volumes.
Volume A virtual device that is configured over raw physical disk devices (not to be confused with
a NetBackup Media and Device Management volume). Consists of a block and a character
device. If a snapshot source exists over a volume, NetBackup automatically uses a
volume mapping method to map the volume to physical device addresses.
Volume group A logical grouping of disks, created with the Veritas Volume Manager, to allow more
efficient use of disk space.
VxFS The Veritas extent-based File System (VxFS), designed for high performance and large
volumes of data.
VxVM The Veritas Volume Manager (VxVM), which provides the logical volume management
that can also be used in SAN environments.
Snapshot Client help from the NetBackup Administration Console For help creating a policy, click the Master Server name at the top of the left pane and click Create a Snapshot Backup Policy.
Snapshot Client assistance from the web For a document containing additional Snapshot Client assistance, see the tech note NetBackup Snapshot Client Configuration. This document may be accessed from the following link:
http://www.veritas.com/docs/000081320
This document includes the following:
Compatibility list For a complete list of supported platforms, snapshot methods, data types, and database
agents, and supported combinations of platform and snapshot methods, see the
NetBackup 7.x Snapshot Client Compatibility document:
http://www.netbackup.com/compatibility
NDMP information on the web The Veritas Support website has a PDF document on supported NDMP operating systems and NAS vendors. The document also contains configuration and troubleshooting help for particular NAS systems.
http://www.veritas.com/docs/000027113
The document’s title is: NetBackup for NDMP Supported OS and NAS Appliance
Information.
For information about freezing a service group, see the clustering section in the
NetBackup High Availability Administrator’s Guide for the cluster software you
are running.
Refer to the uninstall procedure that is described in the NetBackup Installation Guide.
If the state file does not exist on the NetBackup master server, the active node cannot get snapshot information.
■ Storage devices must be configured (you can use the Device Configuration
Wizard).
■ Encryption and compression are supported, but are applied only to the backup
copy that is written to a storage unit. The snapshot itself is neither compressed
nor encrypted.
■ FlashBackup policies do not support encryption or compression.
■ BLIB with Snapshot Client (Perform block level incremental backups option
on the policy Attributes tab): BLIB is supported with NetBackup for Oracle,
NetBackup for DB2, and with VMware.
If you choose the Perform block level incremental backups option on the
policy Attributes tab, the other features of Snapshot Client are grayed out.
■ Ensure that the number of LUNs exposed to the HBAs does not reach the
maximum limit during any snapshot-related operations.
■ For all other cases, select Standard for UNIX clients and MS-Windows
for Windows clients.
4 Select a storage unit, storage unit group, or a storage lifecycle policy as the
Policy storage.
5 Make sure Perform snapshot backups is selected.
Note: When you select Perform snapshot backups, the Bare Metal Restore
option is disabled.
Note: Perform snapshot backups must be selected for the policy to reference
any storage lifecycle policy with a Snapshot destination.
7 To create a backup that enables Instant Recovery, select the Retain snapshots
for instant recovery or SLP management attribute.
This attribute is required for block-level restore, file promotion, and rollback.
See “Instant Recovery restore features” on page 233.
Help for creating a policy for instant recovery backups is available.
See “Configuring a policy for Instant Recovery” on page 98.
8 To reduce the processing load on the client, select Perform off-host backup.
See “Off-host backup configuration options ” on page 53.
9 To save these settings, click Apply.
10 To define a schedule, use the Schedules tab; to specify the clients, use the Clients tab.
Regarding clients: only one snapshot method can be configured per policy. To
select one snapshot method for clients a, b, and c, and a different method for
clients d, e, and f: create a separate policy for each group of clients and select
one method per policy. You may be able to avoid this restriction using the auto
method.
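The one-method-per-policy restriction can be illustrated with a small sketch that splits clients into one policy per requested snapshot method. The helper and the generated policy names are invented for this example; only the snapshot method names (which appear elsewhere in this guide) are real.

```python
from collections import defaultdict

def policies_by_method(client_methods):
    """Group clients so that each policy carries exactly one snapshot method."""
    groups = defaultdict(list)
    for client, method in sorted(client_methods.items()):
        groups[method].append(client)
    # Hypothetical naming scheme: one policy per snapshot method.
    return {f"policy_{m}": clients for m, clients in groups.items()}

wanted = {"a": "VxFS_Checkpoint", "b": "VxFS_Checkpoint", "c": "VxFS_Checkpoint",
          "d": "FlashSnap", "e": "FlashSnap", "f": "FlashSnap"}
print(policies_by_method(wanted))
# {'policy_VxFS_Checkpoint': ['a', 'b', 'c'], 'policy_FlashSnap': ['d', 'e', 'f']}
```

Clients a, b, and c land in one policy and d, e, and f in another, matching the example in the step above.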
11 To specify the files to be backed up, use the Backup Selections tab.
See “Backup Selections tab options when configuring a policy” on page 51.
12 On the Policy Attributes tab: if you click Apply or OK, a validation process
checks the policy and reports any errors. If you click Close, no validation is
performed.
If the backslash is not included, the snapshot image does not appear in
the NetBackup catalog.
■ Wildcards are permitted if the wildcard does not correspond to a mount point or
a mount point does not follow the wildcard in the path.
Note: This is applicable to a Storage Lifecycle Policy that has snapshot as the
first operation and does not contain any backup or replicate operation.
For example, in the path /a/b, if /a is a mounted file system or volume, and
/a/b designates a subdirectory in that file system: the entry /a/b/*.pdf causes
NetBackup to make a snapshot of the /a file system and to back up all pdf files
in the /a/b directory. But, with an entry of /* or /*/b, the backup may fail or
have unpredictable results, because the wildcard corresponds to the mount
point /a. Do not use a wildcard to represent all or part of a mount point.
In another example, /a is a mounted file system which contains another mounted
file system at /a/b/c (where c designates a second mount point). A Backup
Selections entry of /a/*/c may fail or have unpredictable results, because a
mount point follows the wildcard in the path.
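The mount-point rule above can be sketched as a validation helper: reject an entry whose wildcard appears at the depth of a mount point, or earlier in the path than a mount point. This is a simplified, hypothetical check, not part of NetBackup; mount points are passed in explicitly for the sake of the example.

```python
def wildcard_entry_ok(entry, mount_points):
    """Simplified rule: a wildcard component must not correspond to a mount
    point, and no mount point may follow the wildcard in the path."""
    parts = entry.strip("/").split("/")
    for depth, part in enumerate(parts, start=1):
        if "*" in part or "?" in part:
            # Any mount point at this depth or deeper conflicts with the rule.
            for mp in mount_points:
                if len(mp.strip("/").split("/")) >= depth:
                    return False
    return True

# /a is a mounted file system:
print(wildcard_entry_ok("/a/b/*.pdf", ["/a"]))        # True: wildcard is below /a
print(wildcard_entry_ok("/*/b", ["/a"]))              # False: wildcard matches /a
print(wildcard_entry_ok("/a/*/c", ["/a", "/a/b/c"]))  # False: mount point follows
```

The three calls reproduce the /a/b/*.pdf, /* and /a/*/c cases discussed above.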
Information is available on the Cross mount points policy attribute.
See “Snapshot tips” on page 72.
■ For a raw partition backup of a UNIX client, specify the /rdsk path, not the /dsk
path. You can specify the disk partition (except on AIX) or a VxVM volume.
Examples:
On Solaris: /dev/rdsk/c0t0d0s1
/dev/vx/rdsk/volgrp1/vol1
On HP: /dev/rdsk/c1t0d0
/dev/vx/rdsk/volgrp1/vol1
On Linux: /dev/sdc1
On AIX clients, backing up a native disk partition is not supported. A raw partition
backup must specify a VxVM volume, such as /dev/vx/rdsk/volgrp1/vol1.
Note that /dev/vx/dsk/volgrp1/vol1 (without the "r" in /rdsk) does not work.
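The /dsk versus /rdsk distinction can be captured in a tiny helper that rewrites a block-device path to its raw (character) counterpart. It is invented for illustration and only understands the Solaris/HP/VxVM path shapes shown above; Linux names such as /dev/sdc1 do not follow this convention.

```python
def to_raw_device(path):
    """Rewrite a block-device path (/dsk/) to its raw counterpart (/rdsk/)."""
    if "/rdsk/" in path:
        return path                              # already a raw device path
    if "/dsk/" in path:
        return path.replace("/dsk/", "/rdsk/", 1)
    # Paths like Linux /dev/sdc1 do not use the dsk/rdsk convention.
    raise ValueError(f"not a dsk/rdsk style device path: {path}")

print(to_raw_device("/dev/vx/dsk/volgrp1/vol1"))  # /dev/vx/rdsk/volgrp1/vol1
print(to_raw_device("/dev/rdsk/c0t0d0s1"))        # unchanged
```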
Policy configuration 53
Off-host backup configuration options
To back up a virtual machine that does not have a NetBackup client installed
on it, you must select this option. If a NetBackup client is installed on the virtual
machine, you can back up the virtual machine in the same way as an ordinary
physical host (a snapshot-based backup is not required).
The VMware backup host option requires the FlashBackup-Windows or
MS-Windows policy type.
See the NetBackup for VMware Administrator’s Guide for further information:
https://www.veritas.com/docs/DOC5332
Note: The VMware backup host is not displayed when you select the Retain
snapshots for Instant Recovery or SLP management check box as VMware
backup is not supported for Instant Recovery.
■ Alternate Client
Select this option to designate another client (alternate client) as the backup
agent.
An alternate client saves computing resources on the original client. The alternate
client handles the backup I/O processing on behalf of the original client, so the
backup has little effect on the original client.
Enter the name of the alternate client in the Machine field.
See “About using alternate client backup” on page 68.
■ Data Mover
Select this option to designate the backup agent as a NetBackup media server,
a third-party copy device that implements the SCSI Extended Copy command,
or a NAS filer (Network Attached Storage).
The Data Mover option requires the Standard, FlashBackup, or MS-Windows
policy type.
Select the type of data mover in the Machine pull-down:
Network Attached Storage: An NDMP host (NAS filer) performs the backup processing, by means of the NAS_Snapshot method. NetBackup for NDMP software is required on the NetBackup server. This option is required for NAS snapshots.
NetBackup Media Server: A Solaris, HP, or AIX media server performs the backup processing (for Solaris, HP, and AIX clients only).
Third-Party Copy Device: A third-party copy device handles the backup processing. For Solaris, HP, AIX, and Linux clients only.
http://www.veritas.com/docs/000081320
■ If the policy was configured for a particular snapshot method, click the
Snapshot Client Options option and set the snapshot method to auto.
NetBackup then selects a snapshot method when the backup starts.
Use of the auto method does not guarantee that NetBackup can select a snapshot
method for the backup. NetBackup looks for a suitable method according to the
following factors:
■ The client platform and policy type.
■ The presence of up-to-date software licenses, such as VxFS and VxVM.
■ How the client data is configured. For instance:
■ Whether a raw partition has been specified for a copy-on-write cache.
See “Entering the cache” on page 130.
■ Whether the client’s data is contained in the VxVM volumes that were
configured with one or more snapshot mirrors.
Note: The auto method cannot select a snapshot method that is designed for a
particular disk array, such as EMC_TimeFinder_Clone or HP_EVA_Vsnap. You
must select the disk array method from the drop-down list on the Snapshot Options
dialog box.
6 In the pull-down menu, select the Snapshot method for the policy.
■ Choose auto if you want NetBackup to select the snapshot method.
See “Automatic snapshot selection” on page 55.
■ The available methods depend on how your clients are configured and
which attributes you selected on the Attributes tab.
Only one snapshot method can be configured per policy. Configure each policy
for a single method and include only clients and backup selections for which
that snapshot method can be used. For example, for the nbu_snap method
(which applies to Solaris clients only), create a policy that includes Solaris
clients only. The snapshot method you select must be compatible with all items
in the policy’s Backup Selections list.
See “Snapshot methods” on page 58.
Snapshot methods
Table 3-1 describes each snapshot method (not including the disk array methods).
See “Disk array methods at a glance” on page 151.
Method Description
https://www.veritas.com/support/en_US/article.DOC5332
VSS: VSS uses the Volume Shadow Copy Service of Windows and
supports Instant Recovery. VSS is for local backup or alternate
client backup.
http://www.veritas.com/docs/000081320
For alternate client backup, the client data must reside either on
a disk array with snapshot capability (such as EMC, HP, or Hitachi),
or on a Veritas Storage Foundation for Windows 4.1 or later volume
with snapshots enabled. VSS supports file system backup of a
disk partition (such as E:\) and backup of databases.
Note that all files in the Backup Selections list must reside in
the same file system.
VxVM: For any of the following types of snapshots with data configured
over Volume Manager volumes, for clients on Solaris, HP, AIX,
Linux, or Windows. Linux and AIX clients require VxVM 4.0 or
later.
This setting overrides a cache that is specified on Host Properties > Clients >
Client Properties dialog > UNIX Client > Client Settings.
See “Entering the cache” on page 130.
Do not specify wildcards (such as /dev/rdsk/c2*).
A complete list of requirements is available.
See “Cache device requirements” on page 127.
By default, NetBackup waits 60 seconds before it retries the disk group split if Number of times to retry disk
group split is 1 or more. On some systems, a 60-second delay may be too short.
Use this parameter to set a longer delay between retries.
0-auto (the default): If the policy is not configured for Instant Recovery, you can
select this option. The auto option attempts to select the
available provider in this order: Hardware, Software, System.
3-hardware: Use the hardware provider for your disk array. A hardware
provider manages the VSS snapshot at the hardware level
by working with a hardware storage adapter or controller. For
example, if you want to back up an EMC CLARiiON or HP
EVA array by means of the array's snapshot provider, select
3-hardware.
No means that the backup job cannot reach completion until the resynchronize
operation has finished.
Choosing Yes may allow more efficient use of backup resources. If two backups
need the same tape drive, the second can start even though the resynchronize
operation for the first job has not completed.
0-unspecified: If the policy is not configured for Instant Recovery, you can
select this option.
for high-performance array and controller hardware. The specified value is rounded
to a multiple of the volume's region size.
This option is the same as the iosize=size parameter on the VxVM vxsnap command.
For more details on the iosize=size parameter, see the Veritas Volume Manager
Administrator’s Guide.
Snapshot Resources
To configure the disk array methods, see the chapter titled Configuration of snapshot
methods for disk arrays:
See “Disk array configuration tasks” on page 153.
METHOD=USER_DEFINED
DB_BEGIN_BACKUP_CMD=your_begin_script_path
DB_END_BACKUP_CMD=your_end_script_path
For example:
In this example, the script shutdown_db.ksh is run before the backup, and
restart_db.ksh is run after the snapshot is created.
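A minimal sketch of such an override file follows, using the METHOD, DB_BEGIN_BACKUP_CMD, and DB_END_BACKUP_CMD keys shown above. The file location and the script paths are illustrative assumptions, not documented defaults.

```shell
# Write a USER_DEFINED method configuration (paths are illustrative).
# The DB_BEGIN_BACKUP_CMD script runs before the snapshot; the
# DB_END_BACKUP_CMD script runs after the snapshot is created.
conf=/tmp/user_defined_example.conf
cat > "$conf" <<'EOF'
METHOD=USER_DEFINED
DB_BEGIN_BACKUP_CMD=/usr/local/scripts/shutdown_db.ksh
DB_END_BACKUP_CMD=/usr/local/scripts/restart_db.ksh
EOF
grep '^METHOD=' "$conf"   # METHOD=USER_DEFINED
```
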
■ User and group identification numbers (UIDs and GIDs) for the files to back up
must be available to the primary client and the alternate client.
■ Alternate client backup on Windows does not support incremental backups
that are based on the archive bit. Instead, use incremental backups that are
based on timestamp.
See “About incremental backup of mirror-based snapshots” on page 73 for
more information.
■ The primary client and alternate client must run the same version of NetBackup.
For example, the use of a later version of NetBackup on the primary client and
an earlier version on the alternate client is not supported.
■ The primary client and alternate client must run the same operating system,
volume manager, and file system. For each of these I/O system components,
the alternate client must be at the same level as the primary client, or at a
higher level.
Table 3-2 lists the supported configurations.
File system: The primary client requires VxFS 3.4 or later (VxFS 3.3 for HP, VxFS 4.0 for AIX and Linux). The alternate client requires VxFS at the same level as the primary client or higher.
Volume manager: The primary client requires VxVM 3.2 or later (UNIX) or VxVM 3.1 or later (Windows). The alternate client requires VxVM at the same level as the primary client or higher.
Note: For the VVR method, the alternate client must be at exactly the same level as the primary client. For VxVM on Windows, use VxVM 3.1 or later with all the latest VxVM service packs and updates.
Configuration Description
Client data is on an EMC disk array in split-mirror mode: To run the backup on
an alternate client, choose Standard as the policy type, select Perform
snapshot backups, Perform off-host backup, and Use alternate client.
Then select the alternate client. On the Snapshot Options display, specify
an EMC TimeFinder snapshot method.
Client data is replicated on a remote host: To run the backup on the replication
host (alternate client), choose Standard as the policy type, select Perform
snapshot backups, Perform off-host backup, and Use alternate client.
Then select the alternate client (the replication host). On the Snapshot
Options display, specify the VVR snapshot method.
Client data is on a JBOD array in VxVM volumes with snapshot mirrors
configured: To run the backup on the alternate client, choose Standard (for
UNIX clients) or MS-Windows (Windows clients) as the policy type and select
Perform snapshot backups, Perform off-host backup, and Use alternate
client. Then select the alternate client. On the Snapshot Options display,
specify the FlashSnap method.
Snapshot tips
Note the following tips:
■ In the Backup Selections list, be sure to specify absolute path names. Refer
to the NetBackup Administrator’s Guide, Volume I for help specifying files in the
Backup Selections list.
■ If an entry in the Backup Selections list is a symbolic (soft) link to another file,
Snapshot Client backs up the link, not the file to which the link points. This
NetBackup behavior is standard. To back up the actual data, include the file
path to the actual data.
■ On the other hand, a raw partition can be specified in its usual symbolic-link
form (such as /dev/rdsk/c0t1d0s1). Do not specify the actual device name
that /dev/rdsk/c0t1d0s1 points to. For raw partitions, Snapshot Client
automatically resolves the symbolic link to the actual device.
■ The Cross mount points policy attribute is not available for the policies that
are configured for snapshots. This option is not available because NetBackup
does not cross file system boundaries during a backup of a snapshot. A backup
of a high-level file system, such as / (root), does not back up the files residing
in lower-level file systems. Files in the lower-level file systems are backed up if
they are specified as separate entries in the Backup Selections list. For
instance, to back up /usr and /var, both /usr and /var must be included as
separate entries in the Backup Selections list.
For more information on Cross mount points, refer to the NetBackup
Administrator’s Guide, Volume I.
■ On Windows, the \ must be entered in the Backup Selections list after the drive
letter (for example, D:\).
See “Configuring a FlashBackup policy” on page 79.
Figure: timeline of a mirror backup from a primary disk. Phase 2: the mirror was synchronized with the primary and split at 8:24 pm. Phase 4: the backup of the mirror completed at 10:14 pm; the file access time on the mirror is reset to 8:01 pm.
■ About FlashBackup
■ FlashBackup restrictions
About FlashBackup
FlashBackup is a policy type that combines the speed of raw-partition backups with
the ability to restore individual files. The features that distinguish FlashBackup from
other raw-partition backups and standard file system backups are these:
■ Increases backup performance as compared to standard file-ordered backup
methods. For example, a FlashBackup of a file system completes faster than
other types of backup in the following case:
■ the file system contains a large number of files
■ most of the file system blocks are allocated
FlashBackup restrictions
Note the following restrictions:
■ FlashBackup policies do not support file systems that HSM manages.
■ FlashBackup policies for UNIX clients do not support Instant Recovery.
■ FlashBackup does not support VxFS storage checkpoints that the
VxFS_Checkpoint snapshot method uses.
■ FlashBackup supports the following I/O system components: ufs, VxFS, and
Windows NTFS file systems, VxVM volumes and LVM volumes, and raw disks.
Other components (such as non-Veritas storage replicators or other non-Veritas
volume managers) are not supported.
■ FlashBackup on Linux supports only the VxFS file system on VxVM volumes.
For Linux clients, no other file system is supported, and VxFS file systems are
not supported without VxVM volumes.
■ FlashBackup on AIX supports only the VxFS file system, with VxVM or LVM
volumes. For AIX clients, no other file system is supported, and the data must
be over a VxVM or LVM volume.
■ Note these restrictions for Windows clients:
■ The use of FlashBackup in a Windows Server Failover Clustering (WSFC)
environment is supported, with the following limitation: Raw partition restores
can only be performed when the disk being restored is placed in extended
maintenance mode or removed from the WSFC resource group.
■ FlashBackup-Windows and Linux policies do not support a Client Direct
restore.
■ FlashBackup-Windows policies support Instant Recovery, but only for backup
to a storage unit (not for snapshot-only backups).
■ FlashBackup-Windows policies do not support the backup of Windows
system-protected files (the System State, such as the Registry and Active
Directory).
■ FlashBackup-Windows policies do not support the backup of Windows OS
partitions that contain the Windows system files (usually C:).
■ FlashBackup-Windows policies do not support the backup of Windows
System database files (such as RSM Database and Terminal Services
Database).
■ FlashBackup-Windows policies do not support "include" lists (exceptions to
client "exclude" lists).
3 In the All Policies pane, right-click and select New Policy... to create a new
policy.
4 On the Attributes tab, select the Policy type: FlashBackup for UNIX clients,
or FlashBackup-Windows for Windows clients.
FlashBackup-Windows supports the backup and restore of NTFS files that are
compressed.
The files are backed up and restored as compressed files (they are not
uncompressed).
5 Specify the storage unit.
FlashBackup and FlashBackup-Windows policies support both tape storage
units and disk storage units.
6 Select a snapshot method in one of the following ways:
■ Click Perform snapshot backups on the Attributes tab.
For a new policy, NetBackup selects a snapshot method when the backup
starts.
For a copy of a policy that was configured for a snapshot method, click the
Snapshot Client Options option and set the method to auto. NetBackup
selects a snapshot method when the backup starts.
■ Click Perform snapshot backups, click the Snapshot Client Options
option and select a snapshot method.
See “Selecting the snapshot method” on page 56.
7 Windows only: to enable the backup for Instant Recovery, select Retain
snapshots for Instant Recovery or SLP management.
Instant Recovery is not supported for FlashBackup with UNIX clients.
8 UNIX only: if you selected nbu_snap or VxFS_Snapshot as the snapshot
method, specify a raw partition as cache, in either of these ways:
■ Use the Host Properties node of the Administration Console to specify the
default cache device path for snapshots. Click Host Properties > Clients,
select the client, then Actions > Properties, UNIX Client > Client Settings.
■ Use the Snapshot Client Options dialog to specify the cache.
See “Entering the cache” on page 130.
The partition to be used for the cache must exist on all clients that are
included in the policy.
10 To reduce backup time when more than one raw partition is specified in the
Backup Selections list, select Allow multiple data streams.
11 Use the Schedules tab to create a schedule.
FlashBackup policies support full and incremental types only. User backup and
archive schedule types are not supported.
A full FlashBackup backs up the entire disk or raw partition that was selected
in the Backup Selections tab (see next step). An incremental backup backs
up individual files that have changed since their last full backup, and also backs
up their parent directories. The incremental backup does not back up files in
parent directories unless the files have changed since the last full backup.
For incremental backups, a file is considered “changed” if its Modified Time
or Create Time value was changed.
Note on FlashBackup-Windows: The NTFS Master File Table does not update
the Create Time or Modified Time of a file or folder when the following changes
are made:
■ Changes to file name or directory name.
■ Changes to file or directory security.
■ Changes to file or directory attributes (read only, hidden, system, archive
bit).
12 On the Backup Selections tab, specify the drive letter or mounted volume
(Windows) or the raw disk partition (UNIX) containing the files to back up.
For Windows, specify the drive letter or mounted volume.
HP examples:
/dev/rdsk/c1t0d0
/dev/vx/rdsk/volgrp1/vol1
■ To use multiple data streams, other directives had to be added to the policy’s
Backup Selections (file) list.
The following procedure and related topics explain how to configure a FlashBackup
policy with a CACHE= entry in the policy’s Backup Selections list. This means of
configuration is provided for backward compatibility.
To configure FlashBackup policy for backward compatibility (UNIX only)
1 Leave Perform snapshot backups deselected on the policy Attributes tab.
NetBackup uses nbu_snap (snapctl driver) for Solaris clients or VxFS_Snapshot
for HP.
2 On the policy’s Backup Selections tab, specify at least one cache device by
means of the CACHE directive. For example:
CACHE=/dev/rdsk/c0t0d0s1
This cache partition is for storing any blocks that change in the source data
while the backup is in progress. CACHE= must precede the source data entry.
Note the following:
■ Specify the raw device, such as /dev/rdsk/c1t0d0s6. Do not specify the
block device, such as /dev/dsk/c1t0d0s6.
■ Do not specify the actual device file name. For example, the following is
not allowed:
/devices/pci@1f,0/pci@1/scsi@3/sd@1,0:d,raw
CACHE=/dev/rdsk/c1t4d0s0
/dev/rdsk/c1t4d0s7
CACHE=/dev/rdsk/c1t4d0s1
/dev/rdsk/c1t4d0s3
/dev/rdsk/c1t4d0s4
Note: CACHE entries are allowed only when the policy’s Perform snapshot
backups option is deselected. If Perform snapshot backups is selected,
NetBackup attempts to back up the CACHE entry and the backup fails.
All entries must specify the raw device, such as /dev/rdsk/c0t0d0s1. Do not use
the actual device file name; you must use the symbolic-link form of cxtxdxsx.
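The ordering rule above (a CACHE= entry must precede the source-data entries) can be checked mechanically. The sketch below is a hypothetical validation with an illustrative file path, not a NetBackup utility.

```shell
# Fail if the first entry in a Backup Selections list is not a CACHE=
# directive (the file path and device names are illustrative).
list=/tmp/backup_selections.txt
cat > "$list" <<'EOF'
CACHE=/dev/rdsk/c0t0d0s1
/dev/rdsk/c1t4d0s7
EOF
first=$(head -1 "$list")
case "$first" in
  CACHE=*) verdict="ok: cache precedes the source data" ;;
  *)       verdict="error: first entry must be CACHE=" ;;
esac
echo "$verdict"
```
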
Note: Only one data stream is created for each physical device on the client. You
cannot include the same partition more than once in the Backup Selections list.
The directives that you can use in the Backup Selections list for a FlashBackup
policy are as follows:
■ NEW_STREAM
■ UNSET_ALL
Each backup begins as a single stream of data. The start of the Backup Selections
list up to the first NEW_STREAM directive (if any) is the first stream. Each NEW_STREAM
entry causes NetBackup to create an additional stream or backup.
Note that all file paths that are listed between NEW_STREAM directives are in the same
stream.
Table 4-1 shows a Backup Selections list that generates four backups. Each
numbered item shows the Solaris entries first and the HP entries second:
1. Solaris:
   CACHE=/dev/rdsk/c1t3d0s3
   /dev/rdsk/c1t0d0s6
   HP:
   CACHE=/dev/cache_group/rvol1c
   /dev/vol_grp/rvol1
2. Solaris:
   NEW_STREAM
   /dev/rdsk/c1t1d0s1
   HP:
   NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol2c
   /dev/vol_grp/rvol2
3. Solaris:
   NEW_STREAM
   UNSET CACHE
   CACHE=/dev/rdsk/c1t3d0s4
   /dev/rdsk/c1t2d0s5
   /dev/rdsk/c1t5d0s0
   HP:
   NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol3c
   /dev/vol_grp/rvol3
   /dev/vol_grp/rvol3a
4. Solaris:
   NEW_STREAM
   UNSET CACHE
   CACHE=/dev/rdsk/c0t2d0s3
   /dev/rdsk/c1t6d0s1
   HP:
   NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol4c
   /dev/vol_grp/rvol4
The backup streams are issued as follows. The following items correspond in order
to the numbered items in Table 4-1:
1. The first stream is generated automatically and a backup is started for
/dev/rdsk/c1t0d0s6 (Solaris) or /dev/vol_grp/rvol1 (HP). The CACHE=
entry sets the cache partition to /dev/rdsk/c1t3d0s3 (Solaris) or
/dev/cache_group/rvol1c (HP).
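The stream-splitting rule described above (one initial stream, plus one additional stream per NEW_STREAM directive) can be sketched as follows. The file and device names are throwaway examples, not policy syntax validation.

```shell
# Count the backup streams that a Backup Selections list produces:
# the initial stream plus one per NEW_STREAM directive.
list=/tmp/stream_list.txt
cat > "$list" <<'EOF'
CACHE=/dev/rdsk/c1t3d0s3
/dev/rdsk/c1t0d0s6
NEW_STREAM
/dev/rdsk/c1t1d0s1
NEW_STREAM
UNSET CACHE
CACHE=/dev/rdsk/c1t3d0s4
/dev/rdsk/c1t2d0s5
EOF
streams=$(( $(grep -c '^NEW_STREAM$' "$list") + 1 ))
echo "streams: $streams"   # streams: 3
```
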
■ Modifying the VxVM or FlashSnap resync options for point in time rollback
■ No-data Storage Checkpoints (those containing file system metadata only) are
not supported.
■ Instant Recovery snapshots must not be manually removed or renamed,
otherwise the data cannot be restored.
■ Instant Recovery does not support the VxVM, FlashSnap, and VVR snapshot
methods when used with VxVM volume sets.
■ On Linux, Instant Recovery is not supported by disk-array based snapshot
methods.
■ For Instant Recovery backups of data that is configured on VxVM volumes on
Windows, the VxVM volume names must be 12 characters or fewer. Otherwise,
the backup fails.
■ Any media server that is used in an Instant Recovery backup must have full
server privileges.
See “Giving full server privileges to the media server” on page 92.
■ Instant Recovery restores can fail from a backup that was made by a FlashSnap
off-host backup policy.
From a policy that was configured with the FlashSnap off-host backup method
and with Retain snapshots for Instant Recovery enabled, the backups that
were made at different times may create snapshot disk groups with the same
name. As a result, only one snapshot can be retained at a time. In addition,
NetBackup may not be able to remove the catalog images for the snapshots
that have expired and been deleted. It appears that you can browse the expired
snapshots and restore files from them. But the snapshots no longer exist, and
the restore fails with status 5.
■ For Instant Recovery, Veritas recommends that a primary volume be backed
up by a single Instant Recovery policy. If the same volume is backed up by two
or more Instant Recovery policies, conflicts between the policies may occur
during snapshot rotation. Data loss could result if the policies are configured for
snapshots only (if the policies do not back up the snapshots to separate storage
devices).
Consider the following example: Two policies use the same snapshot device
(or VxFS storage checkpoint) to keep Instant Recovery snapshots of volume_1.
■ Instant Recovery policy_A creates a snapshot of volume_1 on the designated
snapshot device or storage checkpoint.
■ When Instant Recovery policy_B runs, it removes the snapshot made by
policy_A from the snapshot device or storage checkpoint. It then creates its
own snapshot of volume_1 on the snapshot device or storage checkpoint.
The snapshot created by policy_A is gone.
Note: Even if each policy has its own separate snapshot devices, conflicts can
occur when you browse for restore. Among the available snapshots, it may be
difficult to identify the correct snapshot to be restored. It is therefore best to
configure only one policy to protect a given volume when you use the Instant
Recovery feature of NetBackup.
2 Make sure that the media server is listed under Additional Servers, not under
Media Servers.
Note: On UNIX, this procedure places a SERVER = host entry in the bp.conf
file for each host that is listed under Additional Servers. In the bp.conf file,
the media server must not be designated by a MEDIA_SERVER = host entry.
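That check can be sketched against a bp.conf-style file as follows. The file path and host names are illustrative; on a real server the file is under the NetBackup installation directory.

```shell
# The media server must appear as SERVER = host, and must not appear as
# MEDIA_SERVER = host (example file and host names; not a real bp.conf).
conf=/tmp/bp.conf.example
cat > "$conf" <<'EOF'
SERVER = master1
SERVER = media1
CLIENT_NAME = client1
EOF
if grep -q '^SERVER = media1$' "$conf" && ! grep -q '^MEDIA_SERVER = media1$' "$conf"
then
  status="media1 is correctly listed as an additional server"
else
  status="media1 is not configured correctly"
fi
echo "$status"
```
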
Note: NetBackup Instant Recovery retains the snapshot. The snapshot can be
used for restore even if the client has been restarted.
In this figure, the next Instant Recovery backup overwrites the snapshot that was
made at 12:00 noon.
■ The serial number of the array is specified in the Array Serial # field. Contact
your array administrator to obtain the disk array serial numbers and designators
(unique IDs) for the array.
The unique ID of the snapshot resource or source LUN that contains the primary
data is specified in the Source Device field.
The maximum number of snapshots to retain is determined by the number of
configured devices in the Snapshot Device(s) field. For example, if you enter
two devices, only two snapshots can be retained. The above example specifies
three devices (0122;0123;0124), so three snapshots can be retained. When the
maximum is reached, the fourth snapshot overwrites the first one.
■ The particular devices to use for the snapshots are those named in the Snapshot
Device(s) field.
■ The order in which the devices are listed in the Snapshot Device(s) field
determines their order of use. Device 0122 is used for the first snapshot, 0123
for the second, and 0124 for the third.
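The rotation described above, where the fourth snapshot overwrites the first of three devices, amounts to round-robin selection over the Snapshot Device(s) list. A sketch using the device IDs from the example:

```shell
# Round-robin device selection: with devices 0122;0123;0124, the fourth
# snapshot reuses the first device (overwriting the first snapshot).
devices="0122 0123 0124"
count=$(echo "$devices" | wc -w | tr -d ' ')
pick_device() {            # $1 = snapshot number (1-based)
  slot=$(( ($1 - 1) % count + 1 ))
  echo "$devices" | cut -d' ' -f"$slot"
}
echo "snapshot 1 -> $(pick_device 1)"   # 0122
echo "snapshot 4 -> $(pick_device 4)"   # 0122 again
```
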
Preconfiguration of the snapshot devices may be required.
See the appropriate topic for your disk array and snapshot method.
Note: For Windows clients using the VSS method on disk arrays that are configured
for clones or mirrors: you must synchronize the clones or mirrors with their source
before you run the backup.
For Instant Recovery backups, it is good practice to set the backup retention level
to infinite. A retention period that is too short can interfere with maintaining a
maximum number of snapshots for restore.
The disk array snapshot methods: See the appropriate topic for your disk array
and snapshot method.
10 To enter the files and folders to be backed up, use the Backup Selections
tab.
■ When backing up Oracle database clients, refer to the NetBackup for Oracle
System Administrator’s Guide for instructions.
■ Snapshot Client policies do not support the ALL_LOCAL_DRIVES entry in
the policy’s Backup Selections list, except for the policies that are
configured with the VMware method.
The more change activity that occurs on the source, or the longer the life of the
snapshot, the more blocks are likely to be changed. As a result, more data must
be stored in the cache.
The size of the file system or raw partition does not determine cache size. If little
change activity occurs on the source during the life of the snapshot, little cache
space is required, even for a large file system.
Note: If the cache runs out of space, the snapshot may fail.
For raw partitions: Cache size = volume size * the number of retained snapshots
For file systems: Cache size = (consumed space * the number of retained snapshots)
+ approximately 2% to 5% of the consumed space in the file system
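The two sizing formulas above can be worked through with example numbers. The volume size, consumed space, and snapshot counts below are illustrative, and the file-system overhead is taken at the 5% end of the stated 2% to 5% range.

```shell
# Raw partition: cache = volume size * retained snapshots.
vol_gb=100
snaps=2
raw_cache_gb=$(( vol_gb * snaps ))

# File system: cache = (consumed space * retained snapshots) plus roughly
# 2% to 5% of the consumed space (5% used here, 40 GB consumed).
fs_cache_gb=$(awk 'BEGIN { consumed = 40; snaps = 2;
                           printf "%.1f", consumed * snaps + consumed * 0.05 }')

echo "raw partition cache: ${raw_cache_gb} GB"   # 200 GB
echo "file system cache: ${fs_cache_gb} GB"      # 82.0 GB
```
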
Note:
vxassist snapstart X:
where X is the drive letter. This command creates a snapshot mirror of the
designated drive.
2 For a volume that is not associated with a drive letter, enter:
This command shows information for the specified disk group, including the
names of the volumes that are configured for that group.
■ Create the snapshot by entering the following:
Where:
■ Brackets [ ] indicate optional items.
■ make volume specifies the name of the volume snapshot.
=SNAP_vol1_NBU/cache=NBU_CACHE
Where:
■ Brackets [ ] indicate optional items.
■ make volume specifies the name of the volume snapshot.
The number for nmirror should equal the number for ndcomirror.
Note: For Linux, and for Solaris 10 with Storage Foundation 5.1, the init value
should be init=active instead of init=none.
3 Set the Maximum Snapshots (Instant Recovery only) value on the NetBackup
Snapshot Client Options dialog.
/usr/openv/netbackup/SYNC_PARAMS
2 In the file, enter the numeric values for the options, on one line. The numbers
apply to the options in the bulleted list above, in that order.
For example:
6 3 1000
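Step 2 can be sketched as follows. The real file path is /usr/openv/netbackup/SYNC_PARAMS; a temporary path is used here so the sketch is harmless to run, and the meaning of each position comes from the option list referenced above.

```shell
# Write the three option values on a single line, as shown above.
sync_params=/tmp/SYNC_PARAMS.example
echo "6 3 1000" > "$sync_params"
cat "$sync_params"   # 6 3 1000
```
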
2 Create a policy for snapshots. (Use the Policies node of the Administration
Console.)
On the policy Attributes tab:
■ You can specify the lifecycle policy in the Policy storage unit / lifecycle
policy field. You can later change the lifecycle policy in the schedule, as
explained later in this procedure.
■ Select Perform snapshot backups.
■ On the Snapshot Options dialog box, the Maximum Snapshots (Instant
Recovery only) parameter sets the maximum number of snapshots to be
retained at one time. When the maximum is reached, the next snapshot
causes the oldest job-complete snapshot to be deleted.
A snapshot is considered to be job complete once all its configured
dependent copies (for example, Backup from Snapshot, Index, Replication)
are complete.
Note that if you also set a snapshot retention period of less than infinity in
the lifecycle policy, the snapshot is expired when either of these settings
takes effect (whichever happens first). For example, if the Maximum
Snapshots value is exceeded before the snapshot retention period that is
specified in the lifecycle policy, the snapshot is deleted.
The same is true for the Snapshot Resources pane on the Snapshot
Options dialog box. If the snapshot method requires snapshot resources,
the maximum number of snapshots is determined by the number of devices
that are specified in the Snapshot Device(s) field. For example, if two
devices are specified, only two snapshots can be retained at a time. Either
the Snapshot Device(s) field or the snapshot retention period in the lifecycle
policy can determine the retention period.
Policy validation fails if a retention mismatch is found on the
snapshot. For example, if the Maximum Snapshots (Instant Recovery
only) parameter is set to any value other than Managed by SLP, and the
SLP that is used in the same policy has a Fixed retention for the Snapshot
job, the policy validation fails. If you have such a policy configured on a
pre-7.6 NetBackup master server, validate and correct the policy after you
upgrade to a NetBackup 8.3 master server.
■ In the Override policy storage selection field, select the lifecycle policy
that you created in step 1.
■ Under Schedule type, set an appropriate frequency, such as 1 day.
When the Snapshot Client policy executes this schedule, the lifecycle policy
named in the Override policy storage selection field creates images on the
destinations that are named in the lifecycle policy. The lifecycle policy also sets
the retention periods for the images it creates. In this example, the retention
is six months for backups to disk and five years for tape.
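The whichever-comes-first rule for snapshot expiry described above (SLP snapshot retention versus deletion through the Maximum Snapshots rotation) reduces to taking the earlier of the two times. A sketch with illustrative numbers:

```shell
# A snapshot expires at the earlier of: the SLP snapshot retention, or
# its deletion when the Maximum Snapshots rotation overwrites it.
slp_retention_days=180        # illustrative six-month snapshot retention
days_until_rotation=7         # illustrative: rotated out after 7 days
if [ "$days_until_rotation" -lt "$slp_retention_days" ]; then
  effective_days=$days_until_rotation
else
  effective_days=$slp_retention_days
fi
echo "snapshot effectively expires after $effective_days days"
```
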
Error 156 can result from a number of different problems; some of them are listed below:
If VxVM fails to get the version of the disk group, run the appropriate VxVM
command outside of NetBackup to see whether you can get the version information
for the disk group in use.
Bpfis log
The device that is to be backed up by this process is in use by another process.
Check whether any other process is holding the same device.
Bpfis log
Policy validation fails for a valid backup selection. If the filer’s volume is mounted
on a Windows client, run the NetBackup Client Service on the client and the
alternate client with valid credentials to access the CIFS share, and check that
the filers are up and that the volume is mounted on the Windows client.
Bpfis log
For a Windows client, live browse from the snapshot fails with the following error
message. Make sure that the NetBackup Client Service on the client and the
alternate client is running with a valid credential to access the CIFS share:
ERROR: permissions denied by client during rcmd.
Snapshot backups for a Windows client fail with status 55. Make sure that the
NetBackup Client Service on the client and the alternate client is running with a
valid credential to access the CIFS share.
Bpfis log
A live browse or ‘backup from snapshot’ operation for a Windows client fails with
error 43, status 156. Enable create_ucode and convert_ucode on the primary volume.
Bpfis log
NBUAdapter log
■ Notes on NAS_Snapshot
Figure: a NetBackup client connects across the LAN / WAN to an NDMP host (NAS filer) through a CIFS or NFS mount. Data is mounted on the client and resides on the NAS (NDMP) host; the snapshot of the client volume is made there.
Note: Windows pathnames must use the Universal Naming Convention (UNC).
NetBackup creates snapshots on the NAS-attached disk only, not on the storage
devices that are attached to the NetBackup server or the client.
Notes on NAS_Snapshot
The following notes apply to backups that are made with the NAS_Snapshot method:
■ Snapshots of NAS host data are supported for NetBackup clients running
Windows (32-bit system and 64-bit system), Solaris, Linux, and AIX.
■ Note these software requirements and licensing requirements:
■ On the NetBackup server, both NetBackup for NDMP and Snapshot Client
software must be installed and licensed. In addition, a NetBackup for NDMP
license must be purchased for each NDMP host (filer).
■ NetBackup clients that are used to perform backups must have Snapshot
Client installed.
■ On NetBackup clients for Oracle: NetBackup for Oracle database agent
software must be installed on all clients.
■ The NAS host must support NDMP protocol version V4 and the NDMP V4
snapshot extension, with additional changes made to the snapshot extension.
The NetBackup Snapshot Client Configuration document (online PDF) contains a list
of the NAS vendors that NetBackup supports for NAS snapshots, including the
requirements that are specific to your NAS vendor.
See: http://www.veritas.com/docs/000081320
■ NetBackup must have access to each NAS host on which a NAS snapshot is
to be created. To set up this authorization, you can use either of the following:
■ In the NetBackup Administration Console: the Media and Device
Management > Credentials > NDMP Hosts option or the NetBackup Device
Configuration Wizard.
OR
■ The following command:
■ The client data must reside on a NAS host and be mounted on the client by
means of NFS on UNIX or CIFS on Windows. For NFS mounts, the data must
not be auto-mounted, but must be hard (or manually) mounted.
■ For NAS snapshot, you must create a NAS_Snapshot policy.
See “Setting up a policy for NAS snapshots” on page 116.
■ On Windows clients, to restore files from a NAS_Snapshot backup, the
NetBackup Client Service must be logged in as the Administrator account. The
NetBackup Client Service must not be logged in as the local system account.
The Administrator account allows NetBackup to view the directories on the
NDMP host to which the data is to be restored. If you attempt to restore files
from a NAS_Snapshot and the NetBackup Client Service is logged in as the
local system account, the restore fails.
10 For the Backup Selections list, specify the directories, volumes, or files from
the client perspective, not from the NDMP host perspective. For example:
■ On a UNIX client, if the data resides in /vol/vol1 on the NDMP host nas1,
and is NFS mounted to /mnt2/home on the UNIX client: specify /mnt2/home
in the Backup Selections list.
■ On a Windows client, if the data resides in /vol/vol1 on the NDMP host
nas1, and is shared by means of CIFS as vol1 on the Windows client,
specify \\nas1\vol1.
■ Windows path names must use the Universal Naming Convention (UNC),
in the form \\server_name\share_name.
■ The client data must reside on a NAS host. The data must be mounted on
the client by means of NFS on UNIX or shared by means of CIFS on
Windows. For NFS mounts, the data must be manually mounted by means
of the mount command, not auto-mounted.
■ For a client in the policy, all paths must be valid, or the backup fails.
■ The ALL_LOCAL_DRIVES entry is not allowed in the Backup Selections
list.
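The client-perspective path rules above can be made concrete with a small sketch (a hypothetical helper, not a NetBackup API; the server, share, and mount-point names are the ones from the examples):

```python
# Build the client-perspective entry for the Backup Selections list.
# Windows entries use UNC form \\server_name\share_name; UNIX entries
# use the NFS mount point as seen on the client.
def backup_selection(os_type, server=None, share=None, mount_point=None):
    if os_type == "windows":
        return "\\\\%s\\%s" % (server, share)
    return mount_point

win_entry = backup_selection("windows", server="nas1", share="vol1")
unix_entry = backup_selection("unix", mount_point="/mnt2/home")
```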
11 On the policy Attributes tab: if you click Apply or OK, a validation process
checks the policy and reports any errors. If you click Close, no validation is
performed.
NAS snapshot naming scheme
NAS+NBU+PFI+client_name+policy_name+sr+volume_name+date_time_string
NAS+NBU+PFI+sponge+NAS_snapshot_pol1+sr+Vol_15G+2005.05.31.13h41m41s
Where:
Client name = sponge
Policy name = NAS_snapshot_pol1
sr = indicates that the snapshot was created for a NAS snapshot.
Volume name = Vol_15G
Date/Time = 2005.05.31.13h41m41s
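The naming scheme can be decomposed programmatically. A minimal sketch in Python (the field order is taken from the scheme shown above; the parser itself is illustrative, not a NetBackup utility):

```python
# Split a NAS snapshot name into its labeled fields. Field order:
# NAS+NBU+PFI+client_name+policy_name+sr+volume_name+date_time_string
def parse_nas_snapshot_name(name):
    parts = name.split("+")
    if len(parts) != 8 or parts[:3] != ["NAS", "NBU", "PFI"]:
        raise ValueError("not a NAS snapshot name: %s" % name)
    return {
        "client": parts[3],
        "policy": parts[4],
        "marker": parts[5],    # "sr" indicates a NAS snapshot
        "volume": parts[6],
        "timestamp": parts[7],
    }

info = parse_nas_snapshot_name(
    "NAS+NBU+PFI+sponge+NAS_snapshot_pol1+sr+Vol_15G+2005.05.31.13h41m41s")
```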
Environment Configurations
Volumes are distributed across backup hosts and the backups happen using streams
(by default 2 per volume). So based on the above example, each backup host gets
five volumes each in a round robin method. Based on the CPU and memory
configurations, NetBackup decides the maximum number of jobs that can be run
on the backup host at a given point in time.
Note: Ensure that all hosts in a backup host pool have the same OS, OS patch
level, and default NFS version configuration.
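The round-robin distribution described above can be sketched as follows (illustrative only; the real scheduler also factors in each host's CPU and memory when it caps concurrent jobs):

```python
# Distribute volumes across the hosts of a backup host pool in
# round-robin order, with a fixed number of streams per volume.
def distribute(volumes, hosts, streams_per_volume=2):
    assignment = {h: [] for h in hosts}
    for i, vol in enumerate(volumes):
        assignment[hosts[i % len(hosts)]].append(vol)
    streams = {h: len(v) * streams_per_volume for h, v in assignment.items()}
    return assignment, streams

vols = ["vol%02d" % i for i in range(10)]
assignment, streams = distribute(vols, ["backuphostA", "backuphostB"])
# 10 volumes over 2 hosts -> 5 volumes (10 streams) per host
```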
The following steps describe, at a high level, how to configure dynamic data
streaming:
1 Register the CloudPoint server and the NAS vendor plug-in with NetBackup.
The solution uses the Integrated Snapshot Management framework.
See “Registering a CloudPoint server in NetBackup” on page 296.
2 Add the backup hosts to a backup host pool. The backup hosts are responsible
for data streaming.
See “Configuring a backup host pool” on page 121.
3 Configure a storage lifecycle policy for snapshots and for backups from
snapshots.
See “About storage lifecycle policies for snapshots” on page 108.
Note: A NetBackup master server running on a Veritas Flex Appliance is not
supported as a backup host for the NAS-Data-Protection policy.
The backup host pool properties apply to the selected media server or client.
See the Veritas NetBackup Administrator’s Guide.
You can use the nbsvrgrp command to create a backup host pool. See the
nbsvrgrp man page or the Veritas NetBackup Command Reference Guide.
Note: You cannot delete a backup host pool if it is configured in an existing
NAS-Data-Protection policy.
Note: If you use cloud storage as a storage unit, you must configure an appropriate
buffer size. Refer to the Veritas NetBackup Cloud Administrator's Guide.
not available then the backup operation might fail with the error shown in Activity
Monitor Detailed status.
■ The NAS-Data-Protection policy is a snapshot-enabled data protection policy. You
can configure only a storage lifecycle policy (SLP) as the policy's storage
destination. Additionally, the SLP should always have Snapshot as the primary
job and Backup from Snapshot as the secondary job.
■ For the NAS-Data-Protection policy, multiple images are created for a single volume
that is backed up. The number of images equals the value that is configured for
the Maximum number of streams per volume in the policy. Because the images
for a volume cannot be referenced individually, NetBackup groups the images
associated with a volume. When an operation is performed on one of the images
in a volume, the same operation is also performed on the other grouped images
in the volume. For example, if Maximum number of streams per volume is
set to four and you select one image for a volume to expire, the other three
images also expire. The image grouping is applicable for the following operations:
■ Browse and Restore
■ Image expiration
■ Image import
■ Image duplication
■ Image verification
■ Set primary copy
Note: Image grouping is not applicable for importing images as part of Image
Sharing operation.
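The grouping behavior can be illustrated with a short sketch (the catalog structure and names here are hypothetical, chosen only to show that an operation such as expiration on one image applies to its whole group):

```python
# Model images grouped by source volume: an operation on one image
# in a group (here, expiration) is applied to all images in it.
class ImageGroupCatalog:
    def __init__(self, streams_per_volume):
        self.streams_per_volume = streams_per_volume
        self.images = {}                      # image_id -> volume

    def record_backup(self, volume):
        for n in range(self.streams_per_volume):
            self.images["%s_img%d" % (volume, n)] = volume

    def expire(self, image_id):
        volume = self.images[image_id]
        group = [i for i, v in self.images.items() if v == volume]
        for i in group:
            del self.images[i]                # whole group expires
        return sorted(group)

catalog = ImageGroupCatalog(streams_per_volume=4)
catalog.record_backup("vol1")
expired = catalog.expire("vol1_img0")         # expires all four images
```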
Note: At a given time, you can only use either the Enable vendor change
tracking for incremental backups or the Use Accelerator check box.
Note: With this setting, the Allow multiple data streams option is also
selected.
Note: At a given time, you can use either Enable vendor change tracking
for incremental backups or the Use Accelerator check box.
12 On the Schedules tab, set the required attributes for the schedule.
Note: For Instant Recovery, only Snapshots and copy snapshots to a storage
unit is supported and is selected by default.
13 On the Clients tab, from the NAS Vendor list, select the preferred vendor.
14 Click New to add a new client.
15 On the Backup Selections tab, click New to add a backup selection.
If you add a subdirectory of a volume in the backup selection, the policy
validation fails.
16 On the Exclude Volumes tab, in the Volume to exclude field, add the
volumes that you do not want to back up.
17 Click OK.
Chapter 7
Configuration of
software-based snapshot
methods
This chapter includes the following topics:
About nbu_snap
The nbu_snap snapshot method is for Solaris clients only. It is for making
copy-on-write snapshots for UFS or VxFS file systems.
The information in this section applies to either Standard or FlashBackup policy
types.
nbu_snap is not supported in clustered file systems. It is not supported as the
selected snapshot method or as the default snapctl driver when you configure
FlashBackup in the earlier manner.
See “Configuring FlashBackup policy for backward compatibility (UNIX only)”
on page 83.
Warning: Choose a cache partition carefully! The cache partition’s contents are
overwritten by the snapshot process.
■ Specify the raw partition as the full path name of either the character special
device file or the block device file. For example:
/dev/rdsk/c2t0d3s3
Or
/dev/dsk/c2t0d3s3
Or
/dev/vx/dsk/diskgroup_1/volume_3
/usr/openv/netbackup/bin/driver/snapon /omo_cat3
/dev/vx/rdsk/zeb/cache
Example output:
■ Id of each snapshot
■ Size of the partition containing the client file system
■ Amount of file system write activity in 512-byte blocks that occurred during
the nbu_snap snapshot (under the cached column).
The more blocks that are cached as a result of user activity, the larger the
cache partition required.
snapcachelist shows each cache device in use and what percentage has
been used (busy). For each cache device that is listed, busy shows the total
space that is used in the cache. This value indicates the size of the raw partition
that may be required for nbu_snap cache.
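The sizing arithmetic implied by the cached column (512-byte blocks consumed from the cache partition) can be sketched as:

```python
# Convert the 'cached' column of snaplist (512-byte blocks written
# during the snapshot) into bytes, and express it as the 'busy'
# percentage of a given cache partition size.
BLOCK_SIZE = 512

def cache_bytes(cached_blocks):
    return cached_blocks * BLOCK_SIZE

def busy_percent(used_bytes, partition_bytes):
    return 100.0 * used_bytes / partition_bytes

used = cache_bytes(204800)          # 204800 blocks -> 100 MiB
pct = busy_percent(used, 1 << 30)   # against a 1 GiB cache partition
```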
More details are available on the snap commands.
See “nbu_snap commands” on page 280.
The snap commands can be used in a script.
If the cache partition is not large enough, the backup fails with status code 13,
"file read failed." The /var/adm/messages log may contain errors such as the
following:
5 Using the information that snaplist and snapcachelist provide, you have
several options:
■ Specify a larger (or smaller) partition as cache, depending on the results
from snaplist and snapcachelist.
■ Reschedule backups to a period when less user activity is expected.
■ If multiple backups use the same cache, reduce the number of concurrent
backups by rescheduling some of them.
6 When you are finished with the snapshot, you can remove it by entering the
following:
/usr/openv/netbackup/bin/driver/snapoff snapid
where snapid is the numeric id of the snapshot that was created earlier.
NetBackup policies do not control any snapshots that you create manually with
the snapon command. When snapon is run manually, it creates a copy-on-write
snapshot only. The snapshot remains on the client until it is removed by entering
snapoff or the client is restarted.
About VxFS_Checkpoint
The VxFS_Checkpoint snapshot method is for making copy-on-write snapshots.
This method is one of several snapshot methods that support Instant Recovery
backups. Note that for VxFS_Checkpoint, the Instant Recovery snapshot is made
on the same disk file system that contains the client’s original data.
For VxFS_Checkpoint, VxFS 3.4 or later with the Storage Checkpoints feature must
be installed on the NetBackup clients. HP requires VxFS 3.5; AIX and Linux require
VxFS 4.0.
Note: On the Red Hat Linux 4 platform, VxFS_Checkpoint snapshot method supports
Storage Foundation 5.0 MP3 RP3 HF9 or later versions.
Note: Off-host backup is not supported for a VxFS 4.0 multi-volume system.
Block-Level restore
If only a small portion of a file system or database changes on a daily basis, full
restores are unnecessary. The VxFS Storage Checkpoint mechanism keeps track
of the data blocks that were modified since the last checkpoint was taken. Block-level
restores take advantage of this feature by restoring only changed blocks, not the
entire file or database. The result is faster restores when you recover large files.
See “About Instant Recovery: block-level restore” on page 233.
About VxFS_Snapshot
The VxFS_Snapshot method is for making copy-on-write snapshots of local Solaris
or HP clients. Off-host backup is not supported with this snapshot method.
Note the following:
■ VxFS_Snapshot supports the FlashBackup policy type only.
■ The VxFS_Snapshot method can only be used to back up a single file system.
If multiple file systems are specified in the policy’s Backup Selections list when
you use this method, the backup fails.
■ In a FlashBackup policy, if the Backup Selections list contains CACHE= entries,
FlashBackup does support the backup of multiple file systems from a single
policy. For each file system, a separate cache must be designated with the
CACHE= entry. Make sure you create a separate policy for each file system.
See “Configuring FlashBackup policy for backward compatibility (UNIX only)”
on page 83.
■ You must designate a raw partition to be used for copy-on-write cache.
Raw partition example:
Solaris: /dev/rdsk/c1t0d0s3
Or
/dev/dsk/c1t0d0s3
HP: /dev/rdsk/c1t0d0
Or
/dev/dsk/c1t0d0
About VxVM
The VxVM snapshot method is for making mirror snapshots with Veritas Volume
Manager 3.1 or later snapshot mirrors. (On Windows, make sure that VxVM has
the latest VxVM service packs and updates.)
Note: On the Red Hat Linux 4 platform, VxVM snapshot method supports Storage
Foundation 5.0 MP3 RP3 HF9 or later versions.
The VxVM snapshot method works for any file system that is mounted on a VxVM
volume. However, before the backup is performed, the data must be configured
with either of the following: a VxVM 3.1 or later snapshot mirror or a VxVM 4.0 or
later cache object. Otherwise, the backup fails.
Note the following:
■ See “Creating a snapshot mirror of the source” on page 134.
Or refer to your Veritas Volume Manager documentation.
■ Help is available for configuring a cache object.
See “About VxVM instant snapshots” on page 135.
Or refer to your Veritas Volume Manager documentation.
■ For Instant Recovery backups of the data that is configured on VxVM volumes
on Windows, the VxVM volume names must be 12 characters or fewer.
Otherwise, the backup fails.
■ VxVM and VxFS_Checkpoint are the only snapshot methods in Snapshot Client
that support the multi-volume file system (MVS) feature of VxFS 4.0.
■ Because VxVM does not support fast mirror resynchronization on RAID 5 volumes,
the VxVM snapshot method must not be used with VxVM volumes that are configured
as RAID 5. If the VxVM snapshot method is selected for a RAID 5 volume, the backup fails.
where:
■ disk_group is the Volume Manager disk group to which the volume belongs.
■ volume_name is the name of the volume that is designated at the end of
the source volume path (for example, vol1 in /dev/vx/rdsk/dg/vol1).
■ fmr=on sets the Fast Mirror Resynchronization attribute, which
resynchronizes the mirror with its primary volume. This attribute copies only
the blocks that have changed, rather than performing a full
resynchronization. Fast mirror resynchronization can dramatically reduce
the time that is required to complete the backup.
Fast Mirror Resynchronization (FMR) is a separate product for Veritas
Volume Manager.
3 With the Media Server or Third-Party Copy method, the disks that make up
the disk group must meet certain requirements.
See “Disk requirements for Media Server and Third-Party Copy methods”
on page 227.
About FlashSnap
FlashSnap uses the Persistent FastResync and Disk Group Split and Join features
of Veritas Volume Manager (VxVM).
The FlashSnap snapshot method can be used for alternate client backups only, in
the split mirror configuration.
See “Alternate client backup split mirror examples” on page 27.
FlashSnap supports VxVM full-sized instant snapshots, but not space-optimized
snapshots. Additionally, FlashSnap supports VxVM volumes in a shared disk group.
For supported configurations, refer to the NetBackup 7.x Snapshot Client
Compatibility List.
See the Volume Manager Administrator’s Guide for more information on deporting
disk groups.
The following steps are described in more detail in the Veritas FlashSnap
Point-In-Time Copy Solutions Administrator’s Guide.
To test volumes for FlashSnap on UNIX
1 On the primary host:
■ Add a DCO log to the volume:
■ Move the disks containing the snapshot volume to a separate (split) disk
group:
If the volume has not been properly configured, you may see an error similar
to the following:
■ Re-examine the layout of the disks and the volumes that are assigned to
them, and reassign the unwanted volumes to other disks as needed.
Consult the Veritas FlashSnap Point-In-Time Copy Solutions Administrator’s
Guide for examples of disk groups that can and cannot be split.
■ Deport the split disk group:
3 After this test, you must restore the original configuration as it was before
you tested the volumes.
■ Deport the disk group on the alternate client.
■ Import the disk group on the primary client.
■ Recover and join the original volume group.
See “Identifying and removing a left-over snapshot” on page 262.
To test volumes for FlashSnap on Windows
1 On the primary host:
■ If not already done, create a snapshot mirror:
■ Move the disks containing the snapshot volume to a separate (split) disk
group.
The disk group is also deported after this command completes:
vxassist rescan
■ Import the disk group that was deported from the primary:
About VVR
The VVR snapshot method (for UNIX clients only) relies on the Veritas Volume
Replicator, which is a licensed component of VxVM. The Volume Replicator
maintains a consistent copy of data at a remote site. Volume Replicator is described
in the Veritas Volume Replicator Administrator’s Guide.
The VVR snapshot method can be used for alternate client backups only, in the
data replication configuration.
See “Alternate client backup through data replication example (UNIX only)”
on page 30.
VVR makes use of the VxVM remote replication feature. The backup processing
is done by the alternate client at the replication site, not by the primary host or client.
VVR supports VxVM instant snapshots.
See “About VxVM instant snapshots” on page 135.
3 On the secondary host, receive the IBC message from the primary host:
About NAS_Snapshot
NetBackup can make point-in-time snapshots of data on NAS (NDMP) hosts using
the NDMP V4 snapshot extension. The snapshot is stored on the same device that
contains the NAS client data. From the snapshot, you can restore individual files
or roll back a file system or volume by means of the Instant Recovery feature.
Note: NetBackup for NDMP software is required on the server, and the NAS vendor
must support the NDMP V4 snapshot extension.
You can control snapshot deletion by means of the Maximum Snapshots (Instant
Recovery Only) parameter. This parameter is specified on the Snapshot Options
dialog of the policy.
For detailed information about NAS snapshots, setting up a policy for NAS snapshots
and format of NAS snapshot name, check the 'Network Attached Storage (NAS)
snapshot configuration' chapter of this guide.
See “Means of controlling snapshots” on page 96.
About VSS
VSS uses the Volume Shadow Copy Service of Microsoft Windows and supports
Instant Recovery. VSS is for local backup or alternate client backup.
For the most up-to-date list of Windows operating systems and disk arrays supported
by this method, see the NetBackup 7.x Snapshot Client Compatibility List document
available on the Veritas support site:
http://www.netbackup.com/compatibility
For alternate client backup, the client data must reside on either a disk array such
as EMC, HP, or Hitachi with snapshot capability, or a Veritas Storage Foundation
for Windows 4.1 or later volume with snapshots enabled. VSS supports file system
backup of a disk partition (such as E:\) and backup of databases.
data to back up includes Windows system files, that volume cannot be backed
up with the VSS snapshot method.
■ Does not support the backup of Windows system database files (such as RSM
Database and Terminal Services Database).
Chapter 8
Support for Cluster
Volume Manager
Environments (CVM)
This chapter includes the following topics:
■ About enabling the NetBackup client to execute VxVM commands on the CVM
master node
The following snapshot methods support only English locale. They do not support
I18N (internationalization).
■ EMC_CLARiiON_Snapview_Clone
■ EMC_CLARiiON_Snapview_Snapshot
■ EMC_TimeFinder_Clone
■ EMC_TimeFinder_Mirror
■ EMC_TimeFinder_Snap
■ Hitachi_ShadowImage
■ Hitachi_CopyOnWrite
■ HP_EVA_Vsnap
■ HP_EVA_Snapshot
■ HP_EVA_Snapclone
■ HP_XP_BusinessCopy
■ HP_XP_Snapshot
■ IBM_DiskStorage_FlashCopy
■ IBM_StorageManager_FlashCopy
http://www.netbackup.com/compatibility
Note: Some disk array vendors use the term snapshot to refer to a certain kind of
point-in-time copy made by the array. In other chapters of this guide, however,
snapshot refers more generally to all kinds of point-in-time copies, disk-array based
or otherwise. Refer to your array documentation for the definition of array vendor
terminology.
Configuration of snapshot methods for disk arrays 150
About the new disk array snapshot methods
Note: The following array methods support Veritas Volume Manager (VxVM)
volumes: Hitachi_CopyOnWrite and Hitachi_ShadowImage. The
IBM_DiskStorage_FlashCopy method (on the IBM DS6000) supports VxVM on
the AIX platform.
Warning: If you make other changes to the snapshot resources, the NetBackup
catalog may be invalidated. For instance, restores may fail from backups
consisting of the snapshots that have been deleted outside the view of
NetBackup.
HBA configuration
The supported HBAs are Emulex and QLogic. The JNI HBA is not supported.
Note: Persistent target bindings are not needed if you use Leadville drivers on
Solaris.
Note: The sd.conf file does not have to be modified if you use Leadville drivers.
Veritas recommends that you add LUNs 0-15 for all disk array targets on which
snapshots are to be created. This creates 16 host-side LUNs on each target that
can be used for importing the snapshots (clones, mirrors, and copy-on-write
snapshots) required for backups. If 16 host-side LUNs are not enough for a particular
disk array target, add more LUNs for that target. Note that snapshots are imported
to a NetBackup client in sequential order starting with the lowest unused host-side
LUN number. The host-side LUN number pool is managed on the disk array. The
disk array cannot determine which host-side LUN numbers have been configured
in sd.conf. The array can only determine which host-side LUN number it has not
yet assigned to the host. If the array adds a device at a host-side LUN number that
has not been configured in sd.conf, that device is not visible on the host. Also, if
alternate client backups are being used, be sure to properly configure sd.conf on
the alternate client.
You must restart the host after modifying sd.conf.
Symmetrix arrays pre-assign host-side LUN numbers (that is, the LUN numbers
are not set at the time the device is imported). These pre-selected LUN numbers
must be entered into sd.conf for the Symmetrix target number.
Note: If you use EMC Control Center interface (ECC) to determine Symmetrix
host-side LUN numbers, note that ECC shows host-side LUN numbers in
hexadecimal format. Since the LUN entries in sd.conf must be in decimal format,
convert the hexadecimal value to decimal before adding it to sd.conf.
If the Symmetrix array was persistently bound at target 5, and the host-side LUN
numbers of the Symmetrix devices are 65, 66, 67, then the following entries should
be added to sd.conf.
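The hexadecimal-to-decimal conversion that the note calls out, along with generation of matching entries, can be sketched as follows (the sd.conf line format shown is an assumption; match it to the syntax already present in your sd.conf):

```python
# ECC reports host-side LUN numbers in hexadecimal; sd.conf entries
# must use decimal. Convert, then emit one entry per LUN.
def ecc_hex_to_decimal(hex_lun):
    return int(hex_lun, 16)

def sd_conf_entries(target, decimal_luns):
    # Assumed sd.conf entry format; verify against your system.
    return ['name="sd" class="scsi" target=%d lun=%d;' % (target, lun)
            for lun in decimal_luns]

luns = [ecc_hex_to_decimal(h) for h in ("41", "42", "43")]
entries = sd_conf_entries(5, luns)   # hex 41,42,43 -> decimal 65,66,67
```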
If the line is not present, add it to the modprobe.conf file and enter the following commands:
#mv /boot/initrd-linux_kernel_version.img
/boot/initrd-linux_kernel_version.img.bak
#mkinitrd -v /boot/initrd-linux_kernel_version.img
linux_kernel_version
where the linux_kernel_version is the value that is returned from uname -r (for
example, 2.6.9-34.ELsmp).
/usr/openv/netbackup/bin/nbfirescan
■ Windows
This command queries the host’s SCSI bus for all the SCSI (or Fibre) attached
devices that are visible.
Note the following regarding CLARiiON:
■ If there are LUNs in the client’s CLARiiON storage group, the LUNs are included
in the output.
■ If there are no LUNs visible but the array is zoned to allow the host to see it, the
output includes the entry DGC LUNZ. This entry is a special LUN that the
CLARiiON uses for communication between the client and the array. The LUNZ
entry is replaced by another disk entry as soon as one is put in the storage group
which has been presented to the client.
Example Solaris output, followed by a description:
DevicePath: Represents the actual access point for the device as it exists on
the client host.
Ctl,Bus,Tgt,Lun: The controller, bus, target, and LUN numbers that designate a
particular physical or virtual disk from the perspective of the client host
computer.
Note: For backup of a disk array using the Windows VSS snapshot method with
Instant Recovery, be sure to configure NetBackup disk array credentials (if required
by the array) before you run the backup. A Point in Time Rollback fails if NetBackup
did not have credentials to access the array during the backup.
Symmetrix: You must associate the source device in the array with the target
device(s) that are to be used for the differential (copy-on-write) or plex-based
(clone or mirror) backup.
Note: For Symmetrix arrays, NetBackup supports VSS with differential
(copy-on-write) backup but not plex-based (clone or mirror) backup.
EMC TimeFinder Snap: See “Creating EMC disk groups for VSS differential snapshots
that use EMC TimeFinder Snap” on page 161.
2 To display information on all existing snapshots on the client, enter the following
command:
vshadow.exe -q
Example output:
vshadow.exe -da
If the SRC <=> TGT value reads CopyOnWrite, the snapshot was created
successfully.
UNIX client
NetBackup Snapshot
Client
Create Restore
snapshot data
Create snapshot or restore
Navisphere Secure
CLI EMC CLARiiON
Register host array
with array
Navisphere Agent FLARE OS
■ On Windows:
If the command fails, you must address the problem before you do any further
array configuration.
This problem could be due to the following:
Note: On AIX or some other UNIX hosts, snapshot creation can fail for an
EMC_CLARiiON array if the Navisphere Secure CLI location entries in the
/usr/openv/lib/vxfi/configfiles/emcclariionfi.conf file are incorrect.
For example, on an AIX host, naviseccli is found at the following location:
/usr/lpp/NAVICLI/naviseccli. Verify the correct naviseccli path and
add the following file path and name entries to the
/usr/openv/lib/vxfi/configfiles/emcclariionfi.conf file.
■ "FILEPATH_NAVISEC_EXE"="filepath"
■ "FILENAME_NAVISEC_EXE"="filename"
Note: You must also enter credentials by means of the Disk Array Hosts dialog
box in the NetBackup Administration Console. The disk array host name is not
provided in the Navisphere security file.
Warning: Veritas strongly recommends that every NetBackup client be given its
own CLARiiON storage group on the array. Data corruption could result if more
than one client (host) exists in a single storage group. If it is necessary to have
multiple hosts in a single storage group, you must make certain that only one host
in the storage group is actually using the device at any given time. (Only one host
should mount the disk.) A Windows host may actually write to a LUN masked device
even if the device is not mounted. Therefore, a Windows host should always be in
its own storage group.
Step 1: The array administrator creates clone private LUNs.
See “Creating a clone private LUN with the EMC Navisphere Web interface” on page 171.
Step 2: The array administrator creates a clone group and selects a LUN as source.
See “Creating a clone group and select a LUN as source” on page 171.
Step 3: The array administrator adds clone LUNs to the clone group.
See “Adding clone LUNs to the clone group” on page 172.
Step 4: The array administrator supplies source and target devices.
See “Obtaining the device identifier for each source and clone LUN” on page 174.
Note: For Windows clients and the VSS method, you must synchronize the clone
with its source.
Note: These steps are separate from those taken by NetBackup to create the
backup. When the backup begins, NetBackup synchronizes the clones with the
source (if necessary) and splits (fractures) the clones to make them available for
the backup.
For more information on the EMC array terminology in this section, see your EMC
CLARiiON documentation.
Creating a clone private LUN with the EMC Navisphere Web interface
You must configure a clone private LUN for each CLARiiON storage processor that
owns a clone source LUN. Clone private LUNs store the portions of the client’s data
that incoming write requests change while the clone is in use. Clone private LUNs
are used while a clone LUN is fractured and when a synchronization occurs.
A clone private LUN can be any bound LUN that is at least 250,000 blocks in size.
To create a clone private LUN with the EMC Navisphere Web interface
1 Right-click the array name.
2 Right-click the Snapview node and select Clone Feature Properties.
3 Choose the LUNs you want to label as Clone Private LUNs.
Choose a clone private LUN for each storage processor that contains clone
source LUNs. (You must know which storage processor owns a given LUN.)
Only one clone private LUN is required per storage processor. You can add
more clone private LUNs later if more space is needed.
3 When you click Apply, Navisphere begins to copy data from the source LUN
to the LUN you have selected, creating a clone LUN.
Any previous data on the clone LUN is lost.
Obtaining the device identifier for each source and clone LUN
The NetBackup policy requires entry of the array’s Unique ID. If your array
administrator provided LUN numbers for the devices, you must convert those LUN
numbers into Unique IDs for entry in the NetBackup policy Snapshot Resources
pane. You can obtain the LUN Unique IDs in either of two ways, as follows.
To obtain the device identifier for each source and clone LUN
1 Enter the following command on the NetBackup client:
2 Note the exact UID string that this command returns. This UID is the unique
ID of the LUN.
For example, to obtain the unique ID of LUN 67, enter:
Example output:
UID: 60:06:01:60:C8:26:12:00:4F:AE:30:13:C4:11:DB:11
3 To obtain the number of the LUN to use on the naviseccli command, find the
clone group and examine the LUN list.
4 Copy the unique ID into the NetBackup policy, as follows:
■ If the LUN specified on the naviseccli command is the source LUN for the
clone group, copy the unique ID into the Source Device field of the Add
Snapshot Resource dialog box of the NetBackup policy. Help is available
for that dialog box.
See “Configuring a policy using EMC_CLARiiON_Snapview_Clone method”
on page 177.
■ If the LUN specified on the naviseccli command is a clone LUN, copy the
unique ID into the Snapshot Device(s) field.
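Extracting the UID from naviseccli output, as described in step 2, can be sketched as follows (the sample output is abbreviated and partly hypothetical; only the UID: line format is taken from the example above):

```python
# Find the 'UID:' line in naviseccli getlun output and return the
# unique ID string for entry in the NetBackup policy.
def extract_uid(output):
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("UID:"):
            return line.split("UID:", 1)[1].strip()
    raise ValueError("no UID line found in output")

sample_output = (
    "LOGICAL UNIT NUMBER 67\n"      # hypothetical surrounding line
    "UID: 60:06:01:60:C8:26:12:00:4F:AE:30:13:C4:11:DB:11\n"
)
uid = extract_uid(sample_output)
```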
■ LUNs in the reserved LUN pool are private LUNs, which cannot belong to a
storage group. The storage processor manages its reserved LUN pool and
automatically assigns one or more private LUNs to a source LUN. This
assignment is based on how much snapshot activity takes place in the source
LUN. This activity can result from one busy snapshot or multiple snapshots.
■ While the snapshot is active, client write activity on the source consumes more
space in the reserved LUN pool. Adding more LUNs to the reserved LUN pool
increases the size of the reserved LUN pool. The storage processor automatically
uses a LUN if one is needed.
■ All snapshots share the reserved LUN pool. If two snapshots are active on two
different source LUNs, the reserved LUN pool must contain at least two private
LUNs. If both snapshots are of the same source LUN, the snapshots share the
same private LUN (or LUNs) in the reserved LUN pool.
EMC_CLARiiON_Snapview_Snapshot method
In the Snapshot Options dialog box of the policy, you can set the Maximum
snapshots (Instant Recovery only) parameter for the
EMC_CLARiiON_Snapview_Snapshot method. The maximum value is 8.
See “Maximum Snapshots parameter” on page 97.
■ EMC Solutions Enabler, on NetBackup clients: For versions used in test configurations, see Veritas NetBackup Snapshot Client Configuration, at: http://www.veritas.com/docs/000081320
■ Symmetrix Solutions Enabler license, on NetBackup clients: For versions used in test configurations, see Veritas NetBackup Snapshot Client Configuration, at: http://www.veritas.com/docs/000081320
Registry details:
■ Key: HKEY_LOCAL_MACHINE\SOFTWARE\EMC\ShadowCopy
■ Name: EnforceStrictBCVPolicy
■ Type: REG_SZ
Possible values include:
■ TRUE: Indicates that EMC VSS Provider enforces a strict BCV rotation policy,
where a BCV should only be used if it is not currently part of a snapshot.
■ FALSE: Indicates that EMC VSS Provider does not enforce a BCV rotation
policy, leaving enforcement to the VSS requestor.
A VCMDB is a virtual LUN database that keeps track of which LUNs the client
can see. A gatekeeper is a small disk that the DMX uses to pass commands
between the client and the array.
Example output:
Symmetrix ID : 000292603831
Device Masking Status : Success
Symmetrix ID : 000492600276
Device Masking Status : Success
0050 0060
Make sure the temp_file name matches the temp_file name you used above.
4 In the output, look for Synchronized under the State column. When the pair
enters the synchronized state, it is ready to be used for backup.
To verify that the clone is complete before doing a point in time rollback
1 Create a temporary file that contains only the source and target device IDs
separated by a space.
For example, if the source device ID is 0050 and the target (clone) device ID
is 0060, the temporary file should contain the following:
0050 0060
2 Check the status of the clone with the symclone command. For example:
3 In the output, look for Copied under the State column. When the clone pair is
in the copied state, it is ready for point-in-time rollback.
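The steps above can be rehearsed without touching the array. A sketch, assuming the query report lists the pair state in a column containing the word Copied (the real column layout varies by SymCLI version):

```shell
# Build the temp_file of source/target device IDs, then check a (sample)
# symclone query report for the Copied state. sample_report stands in for
# real `symclone query` output; its layout is an assumption for illustration.
printf '0050 0060\n' > /tmp/clonepair
sample_report='0050 0060   Copied'
if printf '%s\n' "$sample_report" | grep -qw 'Copied'; then
  echo "clone pair ready for point-in-time rollback"
fi
```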
■ For EMC_TimeFinder_Clone, the target devices are the STD devices that
were allocated to be used as clones.
11 Enter source and target device IDs exactly as they appear on the Symmetrix.
For example, if device 4c appears as 004C, then enter it as 004C (case does
not matter). The symdev show command can be used to determine how a
device ID appears on Symmetrix. Refer to your SymCLI documentation for
more information on this command.
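Since the Symmetrix zero-pads and upper-cases device IDs (4c appears as 004C), a small normalization step avoids mismatches when you copy IDs into the policy. A sketch, assuming the IDs are plain hexadecimal numbers:

```shell
# Normalize a Symmetrix device ID to the 4-character, zero-padded, uppercase
# form reported by `symdev show` (e.g. "4c" -> "004C").
dev="4c"
printf '%04X\n' "0x$dev"   # -> 004C
```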
For Instant Recovery backups, the Snapshot Device(s) entries determine where
and in what order the snapshots are retained.
See “Snapshot Resources pane” on page 96.
About HP EVA arrays
■ SSSU for HP StorageWorks Command View EVA (CLI), on NetBackup clients: For versions used in test configurations, see Veritas NetBackup Snapshot Client Configuration, at: http://www.veritas.com/docs/000081320
■ HP StorageWorks Command View EVA Web interface, on the HP Command View EVA server: For versions used in test configurations, see Veritas NetBackup Snapshot Client Configuration, at: http://www.veritas.com/docs/000081320
Contact Hewlett Packard Enterprise for the required software and versions. HP supplies this software as a bundle, to ensure that the software components are at the right level and function correctly.
Note that the open support policy for VSS providers is not applicable to Instant
Recovery. To use VSS along with the NetBackup Instant Recovery feature, refer
to the NetBackup 7.x Snapshot Client Compatibility List for the components that
NetBackup supports for Instant Recovery with the array. The compatibility list is
available at the following URL:
http://www.netbackup.com/compatibility
/opt/hp/sssu/sssu_sunos
Example output:
3 Verify that you can see the EVA arrays that are managed by the host:
NoSystemSelected> ls cell
Example output:
The error occurs because the CLI is installed at a path that differs from the default CLI path. To fix the policy validation, add the following entry to the hpevafi.conf file:
[CLI_TOOL_INFO]
"FILEPATH"="/opt/hp/sssu"
"FILENAME"="sssu_hpux_parisc"
After you manually add these inputs to the hpevafi.conf file, the validation is
successful.
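Before (or after) editing hpevafi.conf, it is worth confirming that the path and file name actually resolve to an executable sssu binary. A sketch using the example values above:

```shell
# Check that the FILEPATH/FILENAME pair from hpevafi.conf points at an
# executable sssu binary (values mirror the example entries above).
FILEPATH="/opt/hp/sssu"
FILENAME="sssu_hpux_parisc"
if [ -x "$FILEPATH/$FILENAME" ]; then
  echo "sssu CLI found: $FILEPATH/$FILENAME"
else
  echo "sssu CLI not found: $FILEPATH/$FILENAME"
fi
```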
HP EVA restrictions
Note the following array configuration restrictions. In essence, you cannot use two
or more EVA snapshot methods for a given source disk.
Table 9-6
Array Restrictions
Table 9-7 Software that is required for IBM DS6000 and DS8000
2 Repeat step 1 for each NetBackup client or alternate client that uses the array.
3 Create a volume group and associate the volume group with the NetBackup
host you have defined on the array. For details, refer to your IBM
documentation.
4 Create logical volumes (or logical drives) for the volume group. This step makes
the volumes or drives visible to the NetBackup client. For details, refer to your
IBM documentation.
Example:
2 Find the volumes presented to this volume group and to the host:
Example:
/usr/openv/netbackup/bin/nbfirescan
Configuration of snapshot methods for disk arrays 195
About IBM DS6000 and DS8000 arrays
To use the IBM Storage Manager web interface to obtain the device identifiers
1 In the Storage Manager, click Real-time manager > Manage hardware >
Host systems.
2 Click the host for which you need to find the volumes presented.
The volume groups that are associated with the host are displayed.
3 Click the volume group to get the list of the logical volumes that are configured
in this volume group.
The Number column indicates the LUN ID.
7 Enter the unique ID for the source LUN in the Source Device field.
8 Enter the unique IDs for the clone LUNs in the Snapshot Device(s) field. To
enter multiple IDs, place a semicolon between them.
Note the following:
■ The clone LUNs must be unmasked to the client (or alternate client) before
you start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
Table 9-8 New snapshot method for the IBM DS4000 disk array
■ Install the disk array and its software, including appropriate licenses. See your array documentation. See “IBM DS4000 software requirements” on page 198.
■ Zone the client HBAs through the Fibre Channel switch, so the array is visible to the primary client and to any alternate clients. See your Fibre Channel documentation.
■ Install NetBackup, and array vendor snapshot management software, on the NetBackup primary client and any alternate clients. See the appropriate installation documentation.
■ Create and configure the Access Logical Drive for the host connection at the array. Configure logical drives on the array and make them visible to the host. See your array documentation.
/opt/IBM_DS4000/
/usr/openv/netbackup/bin/nbfirescan
This command queries the host’s SCSI bus for all the SCSI (or Fibre) attached
devices that are visible.
Example output from an AIX host, for Hitachi and IBM arrays, followed by a
description:
Ctl,Bus,Tgt,LUN: Controller, bus, target, and LUN numbers are the elements that designate a particular physical or virtual disk from the perspective of the client host computer.
Configuration of snapshot methods for disk arrays 200
About IBM DS4000 array
2 Repeat step 1 for each NetBackup client or alternate client that uses the array.
3 For every client and host group added, map an Access Logical Drive on LUN
number 7 or 31.
4 Create logical drives and map them to the host group. This step makes the
logical drives visible to the NetBackup client. For details, refer to your IBM
documentation.
Repository % of Base (100 for Instant Recovery): Determines the size of the IBM repository logical drive as a percentage of the primary device (base logical drive). The size can range from 1% to 100%. The more write activity that occurs on the primary drive, the more space the repository logical drive requires.
If the size of the primary is 500 GB and you set this parameter
to 30%, the repository drive is set to 150 GB (30% of 500).
For more details about the repository logical drive, refer to the
IBM System Storage DS4000 Series and Storage Manager
document.
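The sizing rule above is simple percentage arithmetic, which can be sketched as:

```shell
# Repository logical drive sizing: a percentage of the base logical drive.
# 30% of a 500 GB base drive, per the example in the text.
base_gb=500
repository_pct=30
echo "$(( base_gb * repository_pct / 100 )) GB"   # -> 150 GB
```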
Configuration of snapshot methods for disk arrays 202
About Hitachi SMS/WMS/AMS, USP/NSC, USP-V/VM
/usr/lib/RMLIB/bin/whatrmver
Example output:
Model :RAID-Manager/LIB/Solaris
Ver&Rev:01-12-03/04
Pair status must be PSUS: After creating volume pairs, you must split each pair and leave the status of each pair at PSUS.
/usr/openv/netbackup/bin/nbfirescan
Example output:
Obtaining the Hitachi array serial number and the unique device
identifiers
The NetBackup policy requires the Hitachi array's serial number and the unique
IDs (device identifiers) for the source and clone LUNs. Use the following procedure
to obtain that information.
To obtain the Hitachi array serial number and the unique device identifiers
◆ Enter the following command:
/usr/openv/netbackup/bin/nbfirescan
Example output:
The Enclosure ID is the serial number and the Device ID is the array’s device
ID.
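If you prefer to capture the values rather than read them off the screen, the output can be filtered. A sketch, assuming the output carries labeled "Enclosure ID:" and "Device ID:" fields on one line (the real nbfirescan layout may differ by platform):

```shell
# Extract serial (enclosure) and device IDs from nbfirescan-style output.
# sample stands in for `/usr/openv/netbackup/bin/nbfirescan` output; the
# field labels and layout here are assumptions for illustration.
sample='c1t0d0   Enclosure ID: 65010   Device ID: 110'
printf '%s\n' "$sample" | \
  sed -n 's/.*Enclosure ID: \([0-9]*\).*Device ID: \([0-9]*\).*/serial=\1 device=\2/p'
# -> serial=65010 device=110
```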
Note: The term "clone LUNs," as used in this procedure, refers to the
Hitachi_ShadowImage method. For the Hitachi_CopyOnWrite method, the term
"clone LUNs" can be replaced with "snapshot LUNs."
See “Obtaining the Hitachi array serial number and the unique device identifiers”
on page 204.
5 In the Add Snapshot Resource dialog box, enter the array's serial number in
the Array Serial # field.
6 Enter the unique ID for the source LUN in the Source Device field.
The ID must be entered without leading zeroes. For example, if the LUN ID is
0110, enter 110 in the Source Device field.
7 Enter the unique IDs for the clone LUNs (for Hitachi_ShadowImage method)
or the snapshot LUNs (for Hitachi_CopyOnWrite) in the Snapshot Device(s)
field. To enter multiple IDs, place a semicolon between them.
The ID must be without leading zeroes. For example, if the LUN ID is 0111,
enter 111 in the Snapshot Device(s) field.
Note the following:
■ The LUNs must be unmasked to the client (or alternate client) before you
start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
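The leading-zero rule described above can be applied mechanically. A sketch:

```shell
# Strip leading zeros from a Hitachi LUN ID before entering it in the policy
# (0110 -> 110, 0111 -> 111, per the examples in the text).
lun="0110"
printf '%s\n' "$lun" | sed 's/^0*//'   # -> 110
```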
Configuration of snapshot methods for disk arrays 207
About HP-XP arrays
/usr/lib/RMLIB/bin/whatrmver
Configure command devices on the NetBackup client and alternate client: The HP-XP command devices must be visible to the NetBackup client as well as to any alternate client. To configure command devices, refer to your HP-XP documentation.
/usr/openv/netbackup/bin/nbfirescan
Note: The term "clone LUNs," as used in this procedure, refers to the
HP_XP_BusinessCopy method. For the HP_XP_Snapshot method, the term "clone
LUNs" can be replaced with "snapshot LUNs."
The ID must be without leading zeroes. For example, if the LUN ID is 0111,
enter 111 in the Snapshot Device(s) field.
Note the following:
■ The LUNs must be unmasked to the client (or alternate client) before you
start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
In this example, an HP-EVA snapshot was not found on the backup host. The
/kernel/drv/sd.conf file probably has insufficient lun= entries. Add lun= entries
for the HP-EVA target in sd.conf and restart the system. More information about lun= entries in sd.conf is available:
See “About Solaris sd.conf file” on page 157.
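As a sketch of what such entries look like (the target number here is a placeholder; match it to the HP-EVA target on your system), /kernel/drv/sd.conf entries take this form:

```
name="sd" class="scsi" target="1" lun="1";
name="sd" class="scsi" target="1" lun="2";
name="sd" class="scsi" target="1" lun="3";
```

On Solaris, a reconfiguration boot (boot -r) is typically required for new sd.conf entries to take effect.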
Configuration of snapshot methods for disk arrays 211
About array troubleshooting
Issue: Backups fail and the following message appears in the bpfis log:
emcclariionfi: WARNING: Unable to import any login credentials for any appliances.
Recommended action: Credentials must be added for the CLARiiON array by means of the NetBackup Administration Console.
See “Configuring NetBackup to access the CLARiiON array” on page 168.

Issue: Backups fail and one or both of the following messages appear in the bpfis log:
emcclariionfi: The host hostname was not found in any storage groups. To import a snapshot to host hostname, hostname must be in a Clariion storage group.
emcclariionfi: LUN masking failed. Could not find a storage group containing the hostname [hostname].
Explanation: NetBackup searches the CLARiiON's storage groups for the import host. (For a local backup, the import host is the host where the source device is mounted. For an off-host backup, the import host is the alternate client.) When the host is found, the snapshot device is assigned to that storage group, thus making it visible to the import host where the backup can proceed. If the import host is not in any storage group, the backup fails.
Table 9-13 Issues with NetBackup and EMC CLARiiON arrays (continued)
Issue: Backups fail and the following message appears in the bpfis log:
emcclariionfi: No more available HLU numbers in storage group. LUN LUN number cannot be LUN masked at this time
Recommended action: The device cannot be imported to the host, because the maximum number of devices from the array is already imported to this host. Expire any unneeded backup images.
Issue: EMC_CLARiiON_Snapview_Clone backups fail and the following message appears in the bpfis log:
emcclariionfi: Could not find LUN LUN number in clonegroup clonegroup name
Recommended action: The clone target device does not exist in the clone group belonging to the source device. Either correct the target list in the policy or use Navisphere to add the target device to the source device's clone group.
Issue: Both types of CLARiiON backups fail with the following in the bpfis log:
emcclariionfi: CLIDATA: Error: snapview command failed
emcclariionfi: CLIDATA: This version of Core Software does not support Snapview
Explanation: These messages appear when the Snapview software is not installed on the CLARiiON array. Snapview must be installed on the array before CLARiiON clone or snapshot backups can succeed. See the array documentation or contact EMC for more information.
Issue: Backups fail and the following message appears in the bpfis log:
execNAVISECCLI: CLI Command [CLI command] failed with error [error number]
Explanation: NetBackup uses naviseccli to send commands to the CLARiiON array. If naviseccli encounters an error, the error is captured and placed in the bpfis log. The lines immediately following the above line should contain the output from naviseccli that indicates why the command failed.
Issue: After a point-in-time rollback from a Windows VSS backup that was made with the EMC CLARiiON Snapview Clone snapshot provider, all clones are fractured (split from the primary).
Recommended action: As a best practice, avoid performing a point-in-time rollback from a Windows VSS backup that was made with the EMC CLARiiON Snapview Clone snapshot provider if any clone that is configured for the policy has not yet been used for an Instant Recovery backup. After a rollback, all the clones are placed in a “fractured” state. (Fractured clones are no longer synchronized with the primary.) As a result, any clone that had not already been used for a backup is no longer available for a future Instant Recovery backup.
Issue: Policy validation fails for a Standard policy with the following message:
Incorrect snapshot method configuration or snapshot method not compatible for protecting backup selection entries.
Explanation: Policy validation for a Standard policy created with the EMC_CLARiiON_Snapview_Snapshot method fails with error 4201. The validation fails when the CLI is installed at a location where NetBackup cannot identify it. The CLI must be installed at /sbin/naviseccli; if the CLI is installed at another location, NetBackup fails to identify that location and policy validation fails.
To fix the policy validation, add the following entry to /usr/openv/lib/vxfi/configfiles/emcclariionfi.conf:
[CLI_TOOL_INFO]
"FILEPATH_NAVISEC_EXE"="/opt/Navisphere/bin"
"FILENAME_NAVISEC_EXE"="naviseccli"
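A quick way to confirm which location applies on your client before editing the file, sketched below (the /opt/Navisphere/bin path mirrors the example entry above):

```shell
# Locate the naviseccli binary so the FILEPATH/FILENAME entries can be set
# to match. /opt/Navisphere/bin mirrors the example configuration above.
if command -v naviseccli >/dev/null 2>&1; then
  echo "naviseccli on PATH: $(command -v naviseccli)"
elif [ -x /opt/Navisphere/bin/naviseccli ]; then
  echo "naviseccli at /opt/Navisphere/bin/naviseccli"
else
  echo "naviseccli not found; locate it and set FILEPATH_NAVISEC_EXE to match"
fi
```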
Issue: Point-in-time rollback fails and the following message appears in the bpfis log:
Recommended action: See “Verifying that the clone is complete before doing a point in time rollback” on page 183.
Table 9-14 Issues with NetBackup and EMC Symmetrix arrays (continued)
Issue: If all Save Device space is consumed on the Symmetrix, a backup with EMC_TimeFinder_Snap or EMC_TimeFinder_Clone fails with the following error in the bpfis log:
An internal Snap or Clone error has occurred. Please see the symapi log file
Recommended action: Check the symapi log (often found at /var/symapi/log on UNIX) to determine the exact error. If the log indicates there is no Save Device space, add Save Devices to the Save Device pool on your Symmetrix array.

Issue: EMC_TimeFinder_Mirror backups fail and the following message appears in the bpfis log:
emcsymfi: Invalid STD-BCV pair state
Recommended action: This message indicates that the STD-BCV pair is not in a state that allows the mirror to be created. Verify that the pair was fully synchronized before the backup attempt.
See “Fully synchronizing STD/BCV mirror pairs” on page 182.
Issue: Backups fail with the following warning message in the bpfis log:
WARNING: No credentials found for HP HSV
Recommended action: Credentials must be added for the EVA array by means of the NetBackup Administration Console.
Issue: The snapshot job fails when the client has VxVM software installed, but the underlying disk in the Snapshot Client backup is not configured in the VxVM stack. The following error message is displayed:
Recommended action: Uninstall the VxVM software from the client.
Table 9-16 Issues encountered with Snapshot (NetBackup status code 156)
Issue: The snapshot device (clone) is not visible (unmasked) to the NetBackup client or alternate client.
Recommended action: Make the clone device visible to the NetBackup client or alternate client before you retry the backup. Contact IBM technical support or refer to your IBM array documentation.

Issue: The snapshot device (clone) is also a source device in another device pair. The following message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Reconfigure source and clone devices so that the clone required for this backup is not a source device for another clone. Contact IBM technical support or refer to your IBM array documentation.

Issue: The snapshot device (clone) and source device are not of equal size. The following message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Reconfigure source and clone devices to be identical in size. Contact IBM technical support or refer to your IBM array documentation.
Issue: The IBM FlashCopy license is not installed. The following message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Install the FlashCopy license on the storage subsystem. Contact IBM technical support or refer to your IBM array documentation.

Issue: The FlashCopy relationship is not recording enabled. The following message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
CMUN03027E resyncflash: FlashCopy operation failure: action prohibited by current FlashCopy state. Contact IBM technical support for assistance
Recommended action:
■ Make sure a FlashCopy relationship exists for the device pair.
■ If the FlashCopy relationship is not recording enabled, remove the FlashCopy relationship and then re-run the backup.

Issue: A FlashCopy relationship does not exist. The following message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Verify that a FlashCopy pair does not exist, and then re-execute the backup.
Table 9-17 Explanations and recommended actions for status code 156
Issue: The array does not have enough free space.
Explanation: FlashCopy logical drives are created under the same logical array as the base or primary logical drive. The storage subsystem might have free space, but if the logical array has insufficient space, the FlashCopy operation fails. The log may contain a message similar to the following:
Mon Mar 31 2008 14:25:23.036588 <Pid - 1065104 / Thread id - 1> FlashCopy could not be created. command [create FlashCopyLogicalDrive baseLogicalDrive="drive-claix11-1" userLabel="drive-claix11-1_flcp";]. Mon Mar 31 2008 14:25:23.037164 <Pid - 1065104 / Thread id - 1> OUTPUT=[Unable to create logical drive "drive-claix11-1_flcp" using the Create FlashCopy Logical Drive command at line 1. Error - The operation cannot complete because there is not enough space on the array. The command at line 1 that caused the error is: create FlashCopyLogicalDrive baseLogicalDrive="drive-claix11-1" userLabel="drive-claix11-1_flcp";
Recommended action: Make sure that the array has enough space available for the snapshot. Delete any FlashCopies that NetBackup did not create.
Issue: The Access Logical Drive is not mapped for the NetBackup client or alternate client at LUN 31 or 7.
Explanation: On the IBM DS4000, the Access Logical Drive communicates with the storage subsystem. Any client that is connected to and needs to communicate with the storage subsystem should have an Access Logical Drive mapped to it. If an Access Logical Drive is not mapped to the client, the client is unable to communicate with the array. As a result, any NetBackup client operation involving the array fails.
Recommended action: Create and map an Access Logical Drive. Contact IBM technical support or refer to your IBM array documentation.
Issue: The DAR driver is not functional.
Recommended action: Make sure that the RDAC package is installed on the AIX host.
Issue: The following error appears in the /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log:
Library RMLIB init failed
Recommended action: Make sure that the RMLIB 64-bit library is installed. This requirement applies when you upgrade from a 6.5.x system (which requires the 32-bit RMLIB) to a 7.1 system, and when you install a fresh 7.1 system.

Issue: The Hitachi command device is not unmasked. See the sample log messages in the next row.
Recommended action: Refer to the Hitachi documentation for creating and unmasking command devices.

Issue: The Hitachi command device is unmasked but is not visible to the client, or the enclosure ID specified in the policy's Snapshot Resources is invalid.
Recommended action: Make sure that the command device is recognized by the operating system and that the enclosure ID is entered correctly in the policy's Snapshot Resources.
Issue: A mismatch exists between the policy's snapshot method and the type of LUNs specified for the Snapshot Devices. For example, if you select the Hitachi_ShadowImage method but specify snapshot LUNs instead of clone LUNs for the Snapshot Devices, an error occurs.
Recommended action: Specify the correct snapshot method or snapshot devices.

Issue: A disk pair was not created for the source device and snapshot device specified in the NetBackup policy's Snapshot Resources. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log may contain messages similar to the following.
Recommended action: Set up a disk pair (primary and secondary) for the source device and snapshot device that are specified in the policy's Snapshot Resources. Refer to the Hitachi documentation.
Issue: In the policy's Snapshot Resources, the device identifier for the source device or snapshot device is invalid. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log may contain messages similar to the following:
Fri Mar 21 2008 16:26:49.173893 <Pid - 9477 / Thread id - 1> getrminfo failed. Fri Mar 21 2008 16:26:49.173893 <Pid - 9477 / Thread id - 1> operation failed with error number <> with message <msg>'.
Recommended action: Make sure that the identifiers are correctly entered in the policy's Snapshot Resources. Specify source and snapshot IDs without leading zeros.
See “Configuring a NetBackup policy for Hitachi_ShadowImage or Hitachi_CopyOnWrite” on page 205.
Issue: The RAID Manager library libsvrrm.so is not installed in the /usr/lib/ directory.
Recommended action: Install the RAID Manager package in /usr/lib/. See the Hitachi documentation.

Issue: The installed version of the RAID Manager library libsvrrm.so is not supported. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log contains the following message:
Recommended action: Look for the Library RMLIB version message in the /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log.

Issue: The default array controller of the source device is not the same as the controller of the snapshot device. Use the Storage Navigator interface to verify.
Recommended action: Make sure that the clone (or snapshot) device has the same default controller as the source device. See the Hitachi documentation.
Chapter 10
Notes on Media Server
and Third-Party Copy
methods
This chapter includes the following topics:
■ The disk must be able to return its SCSI serial number in response to a
serial-number inquiry (serialization). Or, the disk must support SCSI Inquiry
Page Code 83.
Solaris: /dev/rdsk/c1t3d0s3
HP: /dev/rdsk/c1t0d0
■ Restoring a large number of files in a clustered file system (VxFS on UNIX Only)
Automatic backup: The most convenient way to back up client data is to configure a policy and then set up schedules for automatic, unattended backups. To use NetBackup Snapshot Client, you must enable snapshot backup as described in the appropriate configuration chapter of this guide. To add new schedules or change existing schedules for automatic backups, follow the guidelines in the NetBackup Administrator's Guide, Volume I.
Backup and restore procedures 231
About performing a restore
User-directed backup and archive: From a NetBackup client, the user can execute a Snapshot Client backup. The NetBackup administrator must configure an appropriate snapshot policy with a schedule.
Note: In the Backup, Archive, and Restore interface, set the policy type to
FlashBackup for UNIX clients and FlashBackup-Windows for Windows clients.
■ An entire raw partition can be restored from a full backup only. FlashBackup
incremental backups only support individual file restores.
■ Ensure that the device file for the raw partition exists before the restore.
■ The overwrite option must be selected for raw partition restores. The device file
must exist and the disk partition is overwritten.
■ To restore a very large number of files (when individual file restore would take
too long), you can do a raw-partition restore. Redirect the restore to another
raw partition of the same size and then copy individual files to the original file
system.
File promotion (for VxFS_Checkpoint or NAS_Snapshot snapshots): See “About Instant Recovery: file promotion” on page 234.
Fast File Resync for Windows (for VxVM and FlashSnap snapshots): See “About Instant Recovery: Fast File Resync (Windows clients only)” on page 235.
Rollback (for VxFS_Checkpoint, VxVM, VSS, FlashSnap, NAS_Snapshot, OST_FIM, and the disk array methods): See “About Instant Recovery: point in time rollback” on page 237.
/usr/openv/netbackup/PFI_BLI_RESTORE
After this file is created, all subsequent restores of the client’s data use
block-level restore.
To deactivate block-level restore
◆ Delete (or rename) the PFI_BLI_RESTORE file.
When block-level restore is activated, it is used for all files in the restore.
Block-level restore may not be appropriate for all of the files. It may take longer
to restore a large number of small files, because they must first be mapped.
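The activate/deactivate toggle above can be rehearsed safely in a scratch directory before touching the real path. A sketch (on a real client the file is /usr/openv/netbackup/PFI_BLI_RESTORE):

```shell
# Rehearse the block-level restore toggle in a scratch directory; on a real
# client the touch file is /usr/openv/netbackup/PFI_BLI_RESTORE.
NB_DIR=$(mktemp -d)
touch "$NB_DIR/PFI_BLI_RESTORE"                      # activate
[ -f "$NB_DIR/PFI_BLI_RESTORE" ] && echo "block-level restore active"
rm -f "$NB_DIR/PFI_BLI_RESTORE"                      # deactivate
[ -f "$NB_DIR/PFI_BLI_RESTORE" ] || echo "block-level restore inactive"
rm -rf "$NB_DIR"
```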
■ File promotion is available only when restoring files to the original location on
the original client.
Note the following regarding NAS_Snapshot:
■ File promotion is available when restoring to the original volume on the original
client.
■ File promotion can be done from older snapshots, but any newer NAS snapshots
are deleted after the file promotion takes place.
■ The file system requirements depend on the NAS vendor.
■ For further requirements specific to your NAS vendor, see the NetBackup for
NDMP Supported OS and NAS Appliance Information online document. That
document can be accessed from the following:
http://www.veritas.com/docs/000027113
■ FFR requires Storage Foundations for Windows 4.1 or later and the licensed
FlashSnap option.
■ FFR can be done only from an Instant Recovery snapshot that was made with
the VxVM or FlashSnap method.
■ FFR is available only when you restore to the original location on the original
client.
■ The overwrite existing files option must be selected.
Notes on rollback
Note the following.
Warning: Rollback deletes all files that were created after the creation-date of the
snapshot that you restore. Rollback returns a file system or volume to a given point
in time. Any data changes or snapshots that were made after that time are lost.
Also, if there are multiple logical volumes on a single disk or volume group and if
you perform a Point in Time Rollback of a specific logical volume, the entire disk
or volume group is restored to the point in time.
■ Rollback can be done only from the backups that were enabled for Instant
Recovery and made with one of the following methods: VxFS_Checkpoint,
VxVM, FlashSnap, NAS_Snapshot, or the disk array methods.
■ If the backup was made with the EMC_TimeFinder_Clone method and the clone
is not fully created, a rollback cannot succeed.
To verify that the clone is complete before you do a rollback:
See “Verifying that the clone is complete before doing a point in time rollback”
on page 183.
■ For the backups that were made with the VxFS_Checkpoint method, rollback
requires the VxFS File System 4.0 or later and Disk Layout 6. For
NAS_Snapshot, the file system requirements depend on the NAS vendor.
■ Rollback deletes any VxFS_Checkpoint snapshots or NAS_Snapshot snapshots
(and their catalog information) that were created after the creation-date of the
snapshot that you restore.
■ If the primary file system is mounted and the snapshot resides on a disk array,
the rollback attempts to unmount the file system. Any I/O on the primary device
is forcibly stopped if the unmount succeeds. To be safe, make sure that no I/O
occurs on the primary device before a rollback.
If the attempt to unmount the primary file system fails, the rollback does not
succeed. You should halt I/O on the device and retry the rollback. For example,
if a terminal session has accessed the file system through the cd command,
change to a directory outside the file system and retry the rollback.
Backup and restore procedures 238
Instant Recovery restore features
■ Rollback is available only when you restore the file system or volume to the
original location on the client.
■ When a file system rollback starts, NetBackup verifies that the primary file system
has no files that were created after the snapshot was made. Otherwise, the
rollback aborts.
■ Rollback from an OST_FIM type snapshot can be done from copy one only.
■ For rollback from an OST_FIM type snapshot, refer to the NetBackup Replication
Director Solutions Guide.
/usr/openv/netbackup/bin/jbpSA &
You can select root level or mount points (file systems or volumes), but not
folders or files at a lower level.
6 In the Directory Structure list, click the check box next to the root node or a
mount point beneath root.
You can select a file system or volume, but not lower-level components.
7 Click the Restore option.
The only available destination option is Restore everything to its original
location.
8 For file systems, you can choose to skip file verification by placing a check in
the Skip verification and force rollback option.
Warning: Click Skip verification and force rollback only if you are sure that
you want to replace all the files in the original location with the snapshot.
Rollback deletes all files that were created after the creation-date of the
snapshot that you restore.
You can select root level or mount points (file systems or volumes), but not
folders or files at a lower level.
5 In the All Folders pane, click the check box next to the root node or a mount
point beneath root.
You can select a file system or volume, but not lower-level components.
6 Click Actions > Start Restore of Marked Files.
The only destination option is Restore everything to its original location.
7 For file systems, you can choose to skip file verification by placing a check in
the Skip verification and force rollback option.
Warning: Click Skip verification and force rollback only if you are sure that
you want to replace all the files in the original location with the snapshot.
Rollback deletes all files that were created after the creation-date of the
snapshot that you restore.
■ When you restore files from a snapshot that is made for an Instant Recovery
backup (local or off-host alternate client):
If the exclude list is changed after the backup occurred, NetBackup honors the
latest version of the exclude list during the restore. Any of the files that are listed
in the current exclude list are not restored. Also, as noted in the previous item,
the exclude list on the alternate client takes precedence over the exclude list
on the primary client.
For example: If the current version of the exclude list has the entry *.jpg, and
some .jpg files were included in the backup, the .jpg files can be selected for
the restore but are not in fact restored. To restore the files, you must change
the exclude list on the primary (or alternate) client.
Note: For ordinary backups (not based on snapshots), any files that were
included in the exclude list are not backed up. For snapshot-based backups,
however, all files are included in the snapshot. The exclude list is consulted only
when a storage unit backup is created from the snapshot. If the snapshot is
retained after the backup (for the Instant Recovery feature) and the snapshot
is available at the time of the restore, NetBackup restores files from the snapshot.
Since all files are available in the snapshot (including those that would be
excluded from a storage unit backup), NetBackup incorrectly consults the current
exclude list on the client or alternate client. Any files in the exclude list are
skipped during the restore.
(Figure: restore over the LAN. The media server, with SCSI-attached storage, sends data over the LAN / WAN to the client, which has SCSI-attached disks.)
The following table describes the phases that are illustrated in the diagram.
Phase 2: Media server sends the data to the client over the LAN.
Phase 3: Client restores the data to disk (the disk can be locally attached or on the SAN).
About restoring over the SAN to a host acting as both client server
and media server
This type of restore requires the FORCE_RESTORE_MEDIA_SERVER option in
the server’s bp.conf file.
See the NetBackup Administrator’s Guide, Volume I, for details on the
FORCE_RESTORE_MEDIA_SERVER option.
(Figure: restore over the SAN. The client/media server and a media server connect over the LAN / WAN; the tape storage is reached over the SAN.)
The following table describes the phases that are illustrated in the diagram.
Phase 1: Client/media server reads the data from tape over the SAN.
Phase 2: Client restores the data to disk (the disk can be locally attached or on the SAN).
You can restore individual files through the OpsCenter GUI. The restore is possible
if the Index From Snapshot or Backup From Snapshot operation is selected
while creating a storage lifecycle policy for snapshot replication.
Note: Unless the backup was made with the Instant Recovery feature, you cannot
restore from a snapshot by means of the Backup, Archive, and Restore interface.
You must perform the restore manually at the command line.
/usr/openv/netbackup/bin/bpfis query
This command returns the IDs (FIS IDs) of all current snapshots. For example:
2 For each snapshot identifier, enter bpfis query again, specifying the snapshot
ID:
This returns the path of the original file system (snapshot source) and the path
of the snapshot file system. For example:
/tmp/_vrts_frzn_img_26808/mnt/ufscon
OPTIONS:ALT_PATH_PREFIX=/tmp/_vrts_frzn_img_26808,FITYPE=MIRROR,
MNTPOINT=/mnt/ufscon,FSTYPE=ufs
INF - EXIT STATUS 0: the requested operation was successfully
completed
In this example, the primary file system is /mnt/ufscon and the snapshot file
system is /tmp/_vrts_frzn_img_26808/mnt/ufscon.
3 Copy the files from the mounted snapshot file system to the original file system.
umount original_file_system
umount snapshot_image_file_system
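Step 3 uses ordinary file tools; a minimal sketch of the copy is shown below, run against temporary stand-in directories so it is self-contained (the paths in the comments are the example paths from this procedure, not values the script discovers):

```shell
#!/bin/sh
# Illustration only: stand-in directories for the snapshot mount point
# and the original (primary) file system.
snap_fs=$(mktemp -d)    # stands in for /tmp/_vrts_frzn_img_26808/mnt/ufscon
orig_fs=$(mktemp -d)    # stands in for /mnt/ufscon

# Simulate a file that was captured in the snapshot.
mkdir -p "$snap_fs/data"
echo "hello" > "$snap_fs/data/report.txt"

# Copy the files back, preserving permissions and timestamps.
cp -pR "$snap_fs/data" "$orig_fs/"

restored=$(cat "$orig_fs/data/report.txt")
echo "$restored"

rm -rf "$snap_fs" "$orig_fs"
```

In a real restore you would replace the two mktemp directories with the mounted snapshot file system and the original file system, then unmount both as shown above.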
vxdg list
SPLIT-primaryhost_diskgroup
If vxdg list does not show the disk group, the group might have been
deported. You can discover all the disk groups, including deported ones,
by entering:
The disk groups in parentheses are not imported on the local system.
■ Deport the VxVM disk group:
3 Import and join the VxVM disk group on the primary (original) client:
4 Start the volume and snap back the snapshot volume as follows, using the -o
resyncfromreplica option:
To restore the entire secondary disk if the snapshot was made on an EMC,
Hitachi, or HP disk array
◆ WITH CAUTION, you can use hardware-level restore to restore the entire
mirror or secondary disk to the primary disk.
If the disk is shared by more than one file system or VxVM volume, there may
be unintended results. Read the following:
/usr/openv/netbackup/bin/bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
2 For each snapshot identifier, enter bpfis query again, specifying the snapshot
ID:
This returns the path of the original file system (snapshot source) and the GUID
(globally unique identifier) that represents the snapshot volume. For example:
In this example the snapshot file system is H:\ and the GUID is
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\.
3 To restore individual files from the snapshot volume:
■ Mount the GUID to an empty NTFS directory:
mountvol C:\Temp\Mount
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\
■ Copy the file to be restored from the temporary snapshot mount point (in
this example, C:\Temp\Mount) to the primary volume.
vxdg list
SPLIT-primaryhost_diskgroup
2 Import and join the VxVM disk group on the primary (original) client:
vxassist rescan
vxdg -g split_diskgroup import
vxdg -g split_diskgroup -n diskgroup join
■ Snapshot job fails and the snapshot command does not recognize the volume
name
■ Snapshot creation fails when the same volume is mounted on multiple mount
points of the same host
■ Policy validation fails if the specified CIFS share path contains a forward slash
■ An NDMP snapshot policy for wildcard backup fails with error 4201
■ To create detailed log information, place a VERBOSE entry in the bp.conf file
on the NetBackup master and client. Or set the Global logging level to a high
value in the Logging dialog, under both Master Server Properties and Client
Properties.
■ These directories can eventually require a lot of disk space. Delete them when
you are finished troubleshooting and remove the VERBOSE option from the
bp.conf file. Or reset the Global logging level to a lower value.
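For example, on UNIX the VERBOSE entry is a single line in bp.conf. The sketch below writes it to a temporary file rather than the live /usr/openv/netbackup/bp.conf, and the level 5 shown is only an illustration; choose a level appropriate for your troubleshooting session:

```shell
#!/bin/sh
# Illustration: add a VERBOSE entry to a bp.conf-style file.
# On a real host the file is /usr/openv/netbackup/bp.conf.
conf=$(mktemp)
echo "VERBOSE = 5" >> "$conf"
grep VERBOSE "$conf"
```

Remember to remove the entry (or lower the level) when you are done, since verbose logs can consume significant disk space.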
install_path\NetBackup\logs\mklogdir.bat
C:\Program Files\Veritas\NetBackup\logs
Since a different path can be set during installation, the paths that are listed in this
topic are install_path\NetBackup\logs.
Note: To create detailed logs, set the Global logging level to a high value, in the
Logging dialog, under both Master Server Properties and Client Properties.
The log folders can eventually require a lot of disk space. Delete them when you
are finished troubleshooting and set the logging level on master and client to a
lower value.
Note: If you have run the NetBackup mklogdir command, the VxMS log
directory already exists.
Note: If the VxMS log location is changed, the Logging Assistant does not
collect the logs.
Note: If you have run the NetBackup mklogdir.bat command, the VxMS log
directory already exists.
Note: You can use NTFS compression on VxMS log folders to compress the
log size. The new logs are written in compressed form only.
Note: If the VxMS log location is changed, the Logging Assistant does not
collect the logs.
Note: Logging levels higher than 5 cannot be set in the Logging Assistant.
Note: Logging levels higher than 5 should be used only in very unusual cases. At
that level, the log files and metadata dumps may place significant demands on disk
space and host performance.
Level Description
0 No logging.
1 Error logging.
4 Same as level 3.
5 Highly verbose (includes level 1) + auxiliary evidence files (.mmf, .dump, VDDK
logs, .xml, .rvpmem).
You can set the logging level for the VDDK messages.
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.policy_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.storage_unit_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC
For third-party copy backup, the size of the data buffer is 65536 bytes (64K),
by default. To increase it, put a larger integer in the SIZE_DATA_BUFFERS_TPC
file. For a buffer size of 96K, put 98304 in the file. If not an exact multiple of
1024, the value that is read from the file is rounded up to a multiple of 1024.
The file name with no extension (SIZE_DATA_BUFFERS_TPC) applies as a
default to all third-party copy backups, if neither of the other file-name types
exists. A SIZE_DATA_BUFFERS_TPC file with the .policy_name extension
applies to backups that the named policy runs. The .storage_unit_name extension
applies to backups that use the named storage unit. If more than one of these
files applies to a given backup, the buffers value is selected in this order:
SIZE_DATA_BUFFERS_TPC.policy_name
SIZE_DATA_BUFFERS_TPC.storage_unit_name
SIZE_DATA_BUFFERS_TPC
As soon as one of these files is located, its value is used. A .policy_name file
that matches the name of the executed policy overrides the value in both the
.storage_unit_name file and the file with no extension. The .storage_unit_name
file overrides the value in the file with no extension.
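The selection order and the rounding rule can be sketched as follows. This is a simulation against a temporary directory, not the real /usr/openv/netbackup/db/config path, and the policy name (mypolicy) and storage unit name (mystu) are made up:

```shell
#!/bin/sh
# Simulate the file precedence: .policy_name beats .storage_unit_name,
# which beats the extensionless default.
cfg=$(mktemp -d)    # stands in for /usr/openv/netbackup/db/config
echo 98304 > "$cfg/SIZE_DATA_BUFFERS_TPC.mypolicy"
echo 65536 > "$cfg/SIZE_DATA_BUFFERS_TPC"

buffers_file=""
for f in "$cfg/SIZE_DATA_BUFFERS_TPC.mypolicy" \
         "$cfg/SIZE_DATA_BUFFERS_TPC.mystu" \
         "$cfg/SIZE_DATA_BUFFERS_TPC"; do
  if [ -f "$f" ]; then buffers_file=$f; break; fi
done

size=$(cat "$buffers_file")
# Round up to a multiple of 1024, as the text describes.
rounded=$(( (size + 1023) / 1024 * 1024 ))
echo "$rounded"

rm -rf "$cfg"
```

Here the per-policy file wins, so the buffer size is 98304 (96K), which is already a multiple of 1024 and is unchanged by the rounding.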
You can set the maximum buffer size that a particular third-party copy device
can support.
A third-party copy device is not used if it cannot handle the buffer size that is
set for the backup.
Note: Replication Director uses the NetApp DataFabric Manager server for data
movement, not the media server as in most other configurations.
you have tried to install the Snapshot Client software before you install the base
NetBackup software.
/usr/openv/netbackup/bin/driver/snaplist
2 For each snapshot that is listed, run the following to make sure a bpbkar
process is associated with it:
/usr/openv/netbackup/bin/driver/snapoff snapn
/usr/openv/netbackup/bin/bpfis query
This command returns the IDs (FIS IDs) of all current snapshots. For example:
If bpfis removed the snapshot, you can skip the rest of this procedure.
3 Solaris, HP, AIX, Linux: if bpfis could not remove the snapshot, enter the
following (on the client or alternate client) when no backups are running:
df -k
This command displays all mounted file systems, including any snapshots of
a mounted file system.
If a snapshot backup is currently running, the snapshot should not be deleted.
NetBackup deletes it when the backup completes.
Here are two snapshots from a df -k listing:
/tmp/_vrts_frzn_img__filesystemname_pid
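To spot such mounts quickly, you can filter df output for the _vrts_frzn_img_ prefix. The sketch below runs the filter against canned sample output so it is self-contained; in practice you would pipe df -k directly into the grep:

```shell
#!/bin/sh
# Sample df -k style output containing two left-over snapshot mounts.
df_output='/dev/dsk/c0t0d0s0   1016863  334022  621827  35% /
/dev/dsk/c0t0d0s7   2097152  800000 1297152  39% /tmp/_vrts_frzn_img__export_2155
/dev/vx/dsk/dg/vol1 4194304 1000000 3194304  24% /tmp/_vrts_frzn_img__vm2_1765'

# Keep only the snapshot mount points (last field of matching lines).
leftovers=$(printf '%s\n' "$df_output" | grep _vrts_frzn_img_ | awk '{print $NF}')
printf '%s\n' "$leftovers"
```

The mount-point names and device paths above are invented examples that follow the _vrts_frzn_img__filesystemname_pid pattern described in the text.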
4 Solaris, HP, AIX, Linux: unmount the unneeded snapshot file systems (on the
client or alternate client, depending on the type of backup).
The next step depends on the type of snapshot.
5 For nbu_snap (Solaris only):
/usr/openv/netbackup/bin/driver/snaplist
6 For VxVM (Solaris, HP, AIX, Linux) and VVR (Solaris and HP):
Do the following on the client for VxVM, and on the alternate client for VVR:
■ Enter the following to display unsynchronized mirror disks:
vxprint -g diskgroup
Note: file_system is the mount point of the primary file system that was
backed up, NOT the snapshot file system that was unmounted in a previous
step.
For example, if the snapshot file system that was unmounted is the following:
/tmp/_vrts_frzn_img__vm2_1765
the original file system, which should be specified on the fsckptadm list
command, is the following:
/vm2
Example entry:
Output:
/vm2
NBU+2004.04.02.10h53m22s:
ctime = Fri Apr 02 10:53:23 2004
mtime = Fri Apr 02 10:53:23 2004
flags = removable
For example:
■ For more detail on removing VxFS clones, refer to the recommended actions
for NetBackup status code 156 in the NetBackup Troubleshooting Guide.
where LdevName is the logical device name of the standard device. For
Hitachi and HP arrays (ShadowImage, BusinessCopy):
vxdg list
SPLIT-primaryhost_diskgroup
If vxdg list does not show the disk group, the group might have been
deported. You can discover all the disk groups, including deported ones,
by entering:
The disk groups in parentheses are not imported on the local system.
■ Deport the VxVM disk group:
■ On the primary (original) client, import and join the VxVM disk group:
■ On the primary (original) client, start the volume and snap back the snapshot
volume:
Example:
In this example, chime is the primary client and rico is the alternate client.
1hddg is the name of the original disk group on chime.
chime_lhddg is the split group that was imported on rico and must be
rejoined to the original group on the primary chime.
On alternate client rico, enter:
vxdg list
SPLIT-primaryhost_diskgroup
■ On the primary (original) client, import and join the VxVM disk group:
vxassist rescan
vxdg -g split_diskgroup import
vxdg -g split_diskgroup -n diskgroup join
In this case, you must use the bpdgclone command with the -c option to remove
the clone. Then resynchronize the mirror disk with the primary disk.
The following commands should be run on the client or alternate client, depending
on the type of backup.
vxdg list
NAME STATE ID
rootdg enabled 983299491.1025.turnip
VolMgr enabled 995995264.8366.turnip
wil_test_clone enabled 1010532924.21462.turnip
wil_test enabled 983815798.1417.turnip
In this example, the name suffix indicates wil_test_clone was created for a
snapshot backup that was configured with an array-specific snapshot method.
If a backup failed with log entries similar to those in this example, the clone
must be manually deleted.
2 To remove the clone, enter the following:
where wil_test is the name of the disk group, vol01 is the name of the VxVM
volume, and wil_test_clone is the name of the clone. Use the Volume
Manager vxprint command to display volume names and other volume
information.
For more information, refer to the bpdgclone man page.
For assistance with vxprint and other Volume Manager commands, refer to
the Veritas Volume Manager Administrator’s Guide.
3 To verify that the clone has been removed, re-enter vxdg list.
Sample output:
NAME STATE ID
rootdg enabled 983299491.1025.turnip
VolMgr enabled 995995264.8366.turnip
wil_test enabled 983815798.1417.turnip
user 0m0.047s the CPU cycle time for the command in user mode.
sys 0m0.024s the CPU cycle time for the command in kernel mode.
Note: If the total time is greater than 155 seconds, the snapshot fails with error
4220.
Backup of an NFS share that is mounted through two different mount points is not
supported for OST_FIM in this release.
The VxFS_Snapshot method can be used to back up a single file system only. If
multiple file systems are backed up using the same policy, the backup fails.
Make sure that you create a separate policy for each file system.
Note: Using the client and the alternate client on the same host is not supported.
Veritas recommends that you use the backslash when you specify the backup
selection. For example, \\NASFiler1\dataShare1 and C:\backup\testdir are
valid paths.
The administrators of the HP-UX 11iv3 host machines should ignore the log
messages if they encounter them during backups with NetBackup.
To resolve this issue, avoid using NFS version 3 to access the snapshot for
accelerator-enabled backups. You can change the Access Protocol to NFS4 for
the affected policy. For more details, refer to the NetApp documentation.
Appendix A
Managing nbu_snap
(Solaris)
This appendix includes the following topics:
nbu_snap commands
The following commands relate to the nbu_snap snapshot method.
snapon command
snapon starts an nbu_snap snapshot (copy-on-write).
Execute this command as root:
where:
■ snapshot_source is the partition on which the client’s file system (the file system
to be "snapped") is mounted
■ cache is the raw partition to be used as copy-on-write cache.
Example 1:
Example 2:
/usr/openv/netbackup/bin/driver/snapon /dev/vx/rdsk/omo/tcp1
/dev/vx/rdsk/omo/sncache
The snapshot is created on disk, and remains active until it is removed with the
snapoff command or the system is restarted.
snaplist command
snaplist shows the amount of client write activity that occurred during an nbu_snap
snapshot. Information is displayed for all snapshots that are currently active.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snaplist
ident A unique numeric identifier of the snapshot. ident is the pid of the
process that created the snapshot.
size The size of the client’s snapshot source in 512-byte blocks. The
snapshot source is the partition on which the client’s file system (the
file system being backed up) is mounted.
Note: size is not a reliable guide to the size of the cache that is
needed for the snapshot. The user write activity during the snapshot is
what determines the size of the cache needed. See the cached column
of this output.
cached The number of 512-byte blocks in the client file system that were
changed by user activity while the snapshot was active. Before being
changed, these blocks were copied to the cache partition. The more
blocks that are cached as a result of user activity, the larger the cache
partition required. However, additional overhead—which is not shown
in this cached value—is required in the cache. To see the total space
that is used in a particular cache partition, use the snapcachelist
command.
minblk In the partition on which the file system is mounted, minblk shows
the lowest numbered block that is monitored for write activity while
the snapshot is active. Only FlashBackup policies use minblk.
device The raw partition containing the client’s file system data to back up
(snapshot source).
cache The raw partition used as cache by the copy-on-write snapshot process.
Make sure that this partition is large enough to store all the blocks likely
to be changed by user activity during the backup.
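Because size and cached are both counts of 512-byte blocks, converting them for cache sizing is simple arithmetic. The cached value below is a made-up example, not real snaplist output:

```shell
#!/bin/sh
# Convert a snaplist 'cached' value (in 512-byte blocks) to megabytes,
# to gauge how much space user write activity has consumed in the cache.
cached_blocks=133890                       # example value only
cached_bytes=$(( cached_blocks * 512 ))
cached_mb=$(( cached_bytes / 1024 / 1024 ))
echo "$cached_mb"
```

Remember that, as the text notes, the cache also needs additional overhead beyond this cached value, so treat the result as a lower bound when sizing the cache partition.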
snapcachelist command
snapcachelist displays information about all partitions currently in use as nbu_snap
caches. This command shows the extent to which the caches are full.
Note: snaplist and snapcachelist can also be used to monitor an nbu_snap snapshot
that a NetBackup policy started. Note that once the backup completes, NetBackup
removes the snapshot. As a result, the snaplist and snapcachelist commands no
longer return any output.
/usr/openv/netbackup/bin/driver/snapcachelist
Description of output:
busy The number of 512-byte blocks in the client data that changed while
the snapshot was active. Before being changed, these blocks were
copied to the cache partition by the nbu_snap copy-on-write process.
For each cache device that is listed, busy shows the total space that
was used in the cache.
You can use this value as a sizing guide when setting up raw partitions
for nbu_snap cache. When a cache is full, any additional change to the
client data causes the copy-on-write to fail: the snapshot is no longer
readable or writable. Reads or writes to the client data continue (that
is, user activity is unaffected). The failed snapshot, however, is not
terminated automatically and must be terminated using snapoff.
snapstat command
snapstat displays diagnostic information about the snap driver.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapstat
snapoff command
snapoff terminates an nbu_snap snapshot.
Execute this command as root:
snapshot 1 disabled
snapshot 2 disabled
...
snapshot n disabled
Warning: Do not terminate a snapshot while the backup is active: corruption of the
backup image may result.
Appendix B
Overview of snapshot
operations
This appendix includes the following topics:
Note: Steps 1, 2, and 6 in Table B-1 apply only to databases, such as those requiring
NetBackup for Oracle Snapshot Client.
Step 2: Finish transactions.
Step 3: Quiesce acknowledge.
Step 4: Quiesce stack; in quiesce mode, trigger the snapshot.
Step 5: Unquiesce.
Step 6: Out of quiesce mode; continue processing.
source block that is about to change for the first time. It then copies the block’s
current data to cache and records the location and identity of the cached blocks.
The intercepted writes are then allowed to take place in the source blocks.
Figure B-2 shows the copy-on-write process.
(Figure B-2: source data blocks s0 through s10; in phase 2, writes are delayed; in phase 3, the affected blocks are copied to cache blocks c0 through c4; in phase 5, the source blocks after the delayed writes proceed.)
The following table lists the phases that are depicted in the diagram:
Phase 2: New write requests to s4, s7, and s8 are held by the copy-on-write process.
Phase 3: The copy-on-write process writes the contents of blocks s4, s7, and s8 to cache. These blocks are written to cache only once, no matter how many times they change in the source during the snapshot.
The immediate results of the copy-on-write are the following: a cached copy of the
source blocks that were about to change (phase 3), and a record of where those
cached blocks are stored (phase 4).
The copy-on-write does not produce a copy of the source. It creates cached copies
of the blocks that have changed and a record of their location. The backup process
refers to the source data or cached data as directed by the copy-on-write process.
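As an illustration of that redirection (a toy bash simulation, not NetBackup code), the bookkeeping can be sketched with source blocks, a cache, and a map from each changed source block to its cached copy:

```shell
#!/bin/bash
# Toy copy-on-write: source blocks s0..s9, a cache, and a map that
# records which cache entry holds each block's snapshot-time contents.
src=(a b c d e f g h i j)
cache=()
map=()                 # source index -> cache index

write_block() {        # intercepted write: cache the old data first
  local i=$1 new=$2
  if [ -z "${map[$i]}" ]; then
    cache+=("${src[$i]}")
    map[$i]=$(( ${#cache[@]} - 1 ))
  fi
  src[$i]=$new
}

snap_read() {          # backup read: prefer the cached (snapshot-time) copy
  local i=$1
  if [ -n "${map[$i]}" ]; then
    echo "${cache[${map[$i]}]}"
  else
    echo "${src[$i]}"
  fi
}

write_block 4 X        # user changes block 4 after the snapshot
snap_read 4            # the backup still sees the snapshot-time contents
```

A read of an unchanged block goes straight to the source; a read of a changed block is redirected to the cache, which is exactly the decision the copy-on-write process makes for the backup.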
Figure B-3 shows the process for backing up a copy-on-write snapshot.
(Figure B-3: source blocks s0 through s10 and cache blocks; the backup reads cached blocks in place of the source blocks that changed during the snapshot.)
The following table lists the phases that are depicted in the diagram:
Phase 4: At s7 and s8, the copy-on-write process tells the backup to read c1 and c2 instead of s7 and s8.
Configure the CloudPoint server in NetBackup: You can configure the CloudPoint server as a snapshot management server. To configure the CloudPoint server in NetBackup, you need to add the credentials of the CloudPoint server.
Configure the CloudPoint plug-ins in NetBackup: The CloudPoint plug-ins that are installed on the CloudPoint server must be registered and configured in NetBackup with the associated CloudPoint server.
Configure a Standard policy to use the VSO snapshot method: See “Configuring a Snapshot Client policy” on page 48.
Backup and restore procedures: See “About performing a backup” on page 230.
Considerations
Consider the following when integrating NetBackup with CloudPoint:
■ Deletion of CloudPoint host entry and its associated plug-ins is not supported
in NetBackup.
Using CloudPoint, if you delete plug-ins that are configured in NetBackup, the
images of snapshots that are taken using CloudPoint will be unusable.
■ Back up from snapshot and index from snapshot is supported. Replication
operations are not supported.
■ For back up from snapshot and index from snapshot operations, you can use any available storage unit.
■ Post integration, all the related operations must be performed from NetBackup.
Results of operations that are performed outside NetBackup are not visible in
NetBackup. CloudPoint has its own RBAC, where NetBackup is one of the
users, and only the operations that are performed through NetBackup are visible
in NetBackup. For example, even though you can add a CloudPoint plug-in from
CloudPoint, you must add the plug-in from NetBackup, or the plug-in may not
be visible in NetBackup.
■ HP-UX Native volume group Version 2.0 and later is not supported.
■ Raw device and raw partitions are not supported.
■ File system verification is not supported. You must skip the verification step and
perform a force rollback. In the Restore Marked Files dialog box, select the
Skip verification and force rollback check box.
■ Consider all the CloudPoint limitations.
Refer to the Veritas CloudPoint Installation and Upgrade Guide.
■ NetBackup integration is not supported with the CloudPoint freemium version.
■ Veritas recommends that you upgrade the CloudPoint server rather than
reinstall it. However, if you reinstall the CloudPoint server, you need to
reconfigure the CloudPoint server and the associated CloudPoint plug-ins.
■ Perform the following tasks for RHEL:
Recent RHEL versions changed how the partition delimiter is used: based on
the last character in the device path, the OS decides whether a partition delimiter
must be added. To address the change, update the udev rule to add the partition
delimiter 'p' on RHEL:
■ Open the /lib/udev/rules.d/62-multipath.rules file.
■ Update the existing line:
To
■ For devices in LVM, if multiple paths are used, ensure that multipathing service
is enabled.
■ On SUSE Linux, ensure that the automount service is disabled.
■ A junction mount point on the filer or array is not supported for snapshot
operations with the VSO FIM snapshot method.
■ For client-side nested mount points, rollback of parent mount points is not
supported.
Note: The host name must be DNS-resolvable. Also, an IP address is not
supported as input for the CloudPoint server name.
5 (Optional) Select the Connect using Port number check box, if you want to
connect using a specific port.
6 Click Validate Server to retrieve the CA certificate of the snapshot server.
Note: You can associate multiple providers with a server, but you cannot
associate multiple servers with the same provider.
8 Click OK.
A success message is displayed.
9 Click OK.
Note: The plug-in ID must be unique and can contain only the characters A-Z,
a-z, 0-9, +, ., _, and -.
Note: The fields are different for different plug-in types. Refer to the CloudPoint
Install and Upgrade Guide for more information about plug-ins and their
parameters.
10 Click OK.
A success message is displayed.
11 Click OK.
The newly added plug-in is listed in the CloudPoint Plugins pane.
To modify CloudPoint plug-in credentials
1 Log on to the NetBackup Administration Console.
2 In the left navigation pane, go to Media and Device Management >
Credentials > Snapshot Management Server.
The Snapshot Server Management pane is displayed.
3 Click the server under which the plug-in is added. The CloudPoint Plugins
pane displays all the associated plug-ins.
Note: You cannot change the plug-in type and plug-in ID.
7 Click OK.
Workaround
Check if you have configured the CloudPoint plug-in associated with a particular
disk asset.
■ CloudPoint server takes additional time to discover.
Workaround
Wait for some time so that all the devices from the array are discovered.
■ CloudPoint server is not accessible from the master server or the client.
Or
Certificate mismatch on the client.
Or
The CloudPoint certificate has expired.
Workaround
Refresh the CloudPoint certificates on the client.
Certificate location: /usr/openv/var/global/cloudpoint/<CloudPoint server
name>.pem
■ The HP 3PAR array is under load and multiple WS API operations are running.
Workaround
Retry snapshot creation after some time.
Workaround
When you encounter this error, run the lsscsi command to verify whether the device
that is selected in the backup selection is available on the client.
If the device is not listed, investigate the connectivity between the array and the
host, or contact your system administrator.
Workaround
Increase the REQUEST_DELIVERY_TIMEOUT configuration option from the default
of 300 (5 minutes) to more than 3600 seconds (more than 60 minutes).
This option does not appear in the NetBackup Administration Console host
properties. See the NetBackup Commands Reference Guide for information about
using the bpgetconfig and the bpsetconfig commands to change the configuration
option in the bp.conf file (UNIX) or the registry (Windows).
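On UNIX, one common pattern is to put the entry in a file and pass the file to bpsetconfig. The sketch below only builds the entry file; the bpsetconfig invocation is shown as a comment because it must run against a live master server (the path shown is the standard UNIX install location), and the value 3600 follows the recommendation above:

```shell
#!/bin/sh
# Build a one-line configuration fragment for the timeout change.
entry_file=$(mktemp)
echo "REQUEST_DELIVERY_TIMEOUT = 3600" > "$entry_file"
cat "$entry_file"

# Then, on the master server (not run here):
#   /usr/openv/netbackup/bin/admincmd/bpsetconfig "$entry_file"
```

Use bpgetconfig afterward to confirm that the option took effect; see the NetBackup Commands Reference Guide for the exact options for your platform.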
■ ONPREM specifies that the CloudPoint server is used for on-premises storage
array management.
■ CLOUD specifies that the CloudPoint server is used for cloud management.
Index

Symbols
156 status code 258
3pc.conf file 42

A
abnormal termination 262, 280
access time not updated 73
activate block-level restore 234
Active Directory 78, 143
  and FlashBackup 78
actual device file name 82–83
Administrator account and NAS_Snapshot 115
AIX
  and VxFS 43
  media servers, restrictions 37
ALL_LOCAL_DRIVES entry 35, 51, 100, 228
Allow multiple data streams 73
alternate client
  defined 37
Alternate client backup
  configuring 68
alternate client backup 19, 25, 54, 81
  and FlashSnap 136
  and split mirror 27
  and VVR 139
  introduction 26
  requirements 68
  restrictions 69
  testing setup 136, 140, 142
Any_available storage unit 55
APP_NBU_VVR 140
arbitrated loop 38
archive bit
  incremental backup 69
archives 231
auto option (Provider Type for VSS) 64
auto snapshot selection 55, 58, 70, 98
automatic backup 230

B
Backup, Archive, Restore 238
backup
  agent 18, 37
  automatic 230
  local 23, 32
  logs 253–255
  manual 231
  off-host
    configuration 51, 53
    prerequisites 227
  raw partition 52, 228
  retention period
    NAS_Snapshot 141
  scripts 67
  techniques (overview) 20
  types supported 82
  user-directed 231
backup agent 53
backup retention level 97
Backup Selections list 51
  ALL_LOCAL_DRIVES entry 35, 51, 100, 228
  and Instant Recovery 100
  block vs. character device 228
  directives 86
  FlashBackup 83
  symbolic link 72
BLIB 48
block device file (vs character) 228
block level incremental backup 48
block-level restore 233
  how to activate 234
  restriction 234
bp.conf file 253
bpbkar
  log 253
  process 262
bpbrm log 253–254
bpdgclone command 269
J
jbpSA 238

K
Keep snapshot after backup 62, 262
  restoring from image 244–245
kernel messages 254

L
left over snapshot 262, 280
Limit jobs per policy 73
limitations 35
links (in Backup Selections list) 72
Linux
  and VxFS 43
Local Host
  network configuration for 32
local host backup method
  network configuration for 23
local system account 115
lock file system 288
logging
  directories to create 252
  VxMS 255
logical volume (as raw partition) 127
logs 252–255
  creating for UNIX 253
  creating for Windows 254
loop (Fibre Channel) 38
ls command 40
LVM 78

M
manual backup 231
mapping
  defined 39
Maximum jobs per client 73
Maximum multiplexing per drive 73
maximum pathname length 51, 72
Maximum Snapshots (Instant Recovery only) 63, 97, 106, 141
Media multiplexing 73
media server (see NetBackup Media Server) 53–54
messages file 254
method
  selecting off-host backup 51, 53
  selecting snapshot 57
mirror 21
  access time 73
  compared to copy-on-write 23
  defined 39
  fast resynch 135
  overview 21
  preparing 102
  rotation 95
  VxVM snapshot 40, 134
mklogdir script 253
mklogdir.bat 254
modprobe.conf file (Linux) 158
mover.conf file 42
  and AIX 37
multi-volume system (VxFS) 35, 131, 134
Multiple Copies (Vault) 37
multiple data streams 73
  configuring 73
multiplexing 37, 228
MVS 35, 131, 134

N
NAS
  off-host backup 25
NAS filer
  as backup agent 54
NAS_Snapshot 20, 99–100, 141, 233
  access for NetBackup 115
  backup retention period 141
  licensing 115
  logging info 253, 255
  name 118
  notes
  requirements 114
NAS_Snapshot method 58
naviseccli 167, 174
Navisphere 165
nbfirescan 158
NBU_CACHE 104
nbu_snap method 59, 126, 279
  with VxVM shared disk group 127
NDMP 20
  access web info 42
  licensing 115
NDMP host
  as backup agent 54
NDMP protocol version 115
NDMP snapshot 141
ndmp unified log (VxUL) 253, 255
NDMP V4 99
NetBackup Client Service 115
NetBackup Media Server 25, 39
  and storage units 55, 228
  network diagram of 33
  selecting 53–54
NetBackup Replication Director 16
Network Attached Storage 25
Network Attached Storage (data mover) 54
network interface cards 259
NEW_STREAM directive 86
NIC cards and full duplex 259
no-data Storage Checkpoint 91

O
off-host backup 51, 53
  and multiplexing 228
  NAS 25
  overview 24
  prerequisites for 227
  raw partition 228
  type of disk (SCSI vs. IDE) 259
  with data mover 42
Open File Backup
  disabling 74
  license 42
operating system
  patches 34
Oracle 35
OST_FIM method 59
overview of snapshot operations 285
overwriting
  raw partition restores 232
  restore 231

P
page code 83 37, 228, 259
pairresync command 266
pairsplit (Hitachi) 203
pairsplit (HP-XP) 207
parameters for snapshots 61
partitions
  Windows 78
patch for VxFS 35
patches 34, 258
pathname length 51, 72
Perform block level incremental backups 48
Perform snapshot backups 279
performance
  increasing tape 260
peripherals (latest info on web) 42
PFI_BLI_RESTORE file (for block-level restore) 234
physical device (as raw partition) 127
platform requirements 34
platforms supported 43
plex option (Snapshot Attribute for VSS) 65
point-in-time snapshots 20
policy
  for NAS snapshot 116
  how to select type of 49
  storage unit 55
Policy dialog 49, 80
policy_name (on mover.conf file) 260
primary vs. alternate client 26
promotion
  file 19, 234–235
provider 17, 259
Provider Type (for VSS) 63

Q
query snapshot 246, 249, 263
quiesce 286, 289

R
RAID 5 134
raw partition 83, 86
  as snapshot source 72
  backup 52
  block vs. character device 228
  defined 39
  not supported with VxFS_Checkpoint 131
  fsck needed after vxfs restore 232
  specifying for cache 127
recovery procedure 262, 280
Registry 78, 143
  and FlashBackup 78
remote snapshot (see alternate client backup) 26
removing
  clone 268
  snapshots 262, 280
replicated host 31
replication
  for alternate client backup 139
  testing setup for alternate client backup 140
Replication Director 25, 34, 59, 261
requirements for NetBackup 34
restore 231
  and fsck 232
  block-level restore 234
  configurations 242
  FFR with vxvm or FlashSnap 98, 233, 235
  file promotion 19, 234–235
  from disk image 244–245
  from EMC_TimeFinder_Clone 183, 237
  from FlashBackup 231
  hardware-level 248
  logs 253, 255
  NAS_Snapshot 115
  Oracle files 234
  overwrite option 232
  raw partitions 231
  re. device file 232
Restore everything to its original location 239, 241
restrictions 35, 78
resyncfromreplica option 247, 250
resynchronization of mirror 135
resynchronize
  disks 265
Resynchronize mirror in background 64
retention level for backup 97
RMAN 39
rollback 233, 237
  and clone creation 183, 237
  causes fractured (split) clone 212
  VSS and disk array credentials 160
root
  specifying as snapshot source 72
rotation 95
  of snapshots 96
RSM Database 78, 143

S
SAN 35, 227
  defined 40
Schedule tab
  Instant Recovery 99
scripts
  running before/after backup 67
SCSI E4 target descriptors 55
SCSI Inquiry Page Code 83 37, 228, 259
SCSI serialization 37, 228, 259
SCSI vs. IDE 259
serial numbers 37, 228, 259
serialization 37, 228, 259
Shadow Copy Service (see VSS) 70, 141
ShadowImage method 260
shared disk group (VxVM) 127, 136
SIZE_DATA_BUFFERS 260
snap
  removing 264
snapcachelist command 282
snapctl 78
  driver log 254
  overview 279
snaplist command 264, 281
snapoff command 264, 283
snapon command 280
snapshot 20
  auto selection 55, 58, 70, 98
  back up to local storage 23
  backup to local storage 32
  configuration 56, 98
  controlling rotation 96
  copy-on-write vs mirror (how to choose) 23
  defined 40
  deleting 96
  disabling 74
  ID 246, 249, 263
  instant 104
  mirror
    defined 21
  mirror (creating) 137–138
  mirror (VxVM) 40, 134
  mirror access time 73
  naming format
    NAS snapshot 118
  on Solaris client
    troubleshooting 262
  overview 20
  pre-processing for 286
  removing 262–263, 280
  restoring from disk 245–246, 249
  rotation 95
  selecting method of 57
  source
    defined 40
    for symbolic link 72
  state file 45
  volume (creating) 137–138
  VxVM instant 60
Snapshot Attribute (for VSS) 65
Snapshot Client 20
  access web info 42
user-directed
  archives 231
  backup 231

V
VCMDB (Volume Configuration Management Database) 180
vendors (latest info on) 42
VERBOSE setting for logs 253
Veritas Federated Mapping Services 41
Veritas Volume Manager 60, 134
Veritas Volume Manager cluster. See CVM
Veritas Volume Replication 99
virtual machine proxy 53
VMware 53, 59
  ALL_LOCAL_DRIVES 51, 100
volume
  defined 41
  sets (VxVM) 91
vshadow.exe 162
VSS
  disk array credentials and rollback 160
VSS method 59, 141
VVR 35, 99
VVR method 31, 60, 140
  preparing for alternate client backup 139
vxassist 104–105, 134, 137–138, 264
vxassist snapstart 103
vxdg command 103, 137, 139, 247, 250
vxdg list command 269
vxdisk command 247
VxFS clone
  removing 264
VxFS file system 34, 59, 78, 126
  and AIX, Linux 43
  patch for library routines 35
  restoring 232
VxFS multi-volume file system 35, 131, 134
VxFS_Checkpoint method 60, 78, 99, 131, 234
VxFS_Snapshot method 60, 133
vxibc command 140
vxmake 104
VxMS 39, 41
VxMS logging 255
vxprint 103, 105, 264
vxprint command 264
vxrecover command 247
vxsnap 106
VxVM
  and RAID 5 134
  clone of disk group 268
  instant snapshots 60, 105, 135
  mirror 134
  preparing for Instant Recovery 102
  provider
    with VSS 64
  required version 58, 69, 133, 152
  shared disk group 127, 136
  Volume Manager 35, 134
  volume name restriction 91, 102, 134
  volume sets 91
vxvm method 60, 99, 133–134, 236
vxvol 103, 134, 137

W
web access to recent Snapshot Client info 42
whatrmver 202
wildcards in Backup Selections list 52
Windows
  open file backup 42
  OS partitions 78
  System database files 78, 143
Windows Shadow Copy Service 142