XP RAID Manager User Guide
Contents
About this guide ................................................................................. 15
Intended audience .................................................................................................................... 15
Related documentation .............................................................................................................. 15
Document conventions and symbols ............................................................................................. 15
Conventions for storage capacity values ...................................................................................... 16
HP technical support ................................................................................................................. 17
Subscription service .................................................................................................................. 17
HP websites ............................................................................................................................. 17
Documentation feedback ........................................................................................................... 17
pairvolchk ...................................................................................................................... 156
raidar ............................................................................................................................. 162
raidqry .......................................................................................................................... 164
raidscan ......................................................................................................................... 167
Command options for Windows NT/2000/2003 ...................................................................... 174
drivescan ....................................................................................................................... 175
env ................................................................................................................................ 176
findcmddev .................................................................................................................... 176
mount ............................................................................................................................. 177
portscan ......................................................................................................................... 179
setenv ............................................................................................................................ 180
sleep ............................................................................................................................. 181
sync .............................................................................................................................. 181
umount .......................................................................................................................... 184
usetenv .......................................................................................................................... 185
Database Validator commands ................................................................................................. 186
raidvchkset ...................................................................................................................... 186
raidvchkdsp ..................................................................................................................... 190
raidvchkscan ................................................................................................................... 195
Figures
1. XP Snapshot concept .............................................................................................. 21
2. 3DC cascaded ....................................................................................................... 22
3. 3DC multi-target ..................................................................................................... 22
4. Back up the S-VOL in PAIR status (XP Continuous Access Software) ................................ 23
5. Back up S-VOL in PAIR status (XP Business Copy Software) ........................................... 24
6. Restore S-VOL to P-VOL in split status (XP Continuous Access Software) ........................... 25
7. Restore S-VOL to P-VOL in split status (XP Business Copy Software) ................................ 26
8. Swap paired volume for duplex operation (XP Continuous Access Software) ................... 27
9. Restoring the S-VOL for duplex operations (XP Continuous Access Software) ................... 28
10. XP RAID Manager configuration on guest operating system and VMware ...................... 32
11. Paired volume configuration ..................................................................................... 40
12. HORCM_MON ...................................................................................................... 42
13. HORCM_CMD ....................................................................................................... 43
14. XP RAID Manager protection ................................................................................... 64
15. LUN visibility in a two-host configuration ................................................................... 66
16. Single-host protection mode configuration ................................................................. 66
17. System buffer flushing .............................................................................................. 82
18. MSCS and XP Business Copy Software ...................................................................... 84
19. SLPR security concept .............................................................................................. 86
20. horctakeoff on the L1 local site (3DC) ........................................................................ 95
21. horctakeoff on the L2 local site (3DC) ........................................................................ 95
22. horctakeoff on the L2 remote site (3DC) ..................................................................... 96
23. horctakeoff on the L1 remote site (3DC) ..................................................................... 96
24. A takeover using the suspended journal volume group ............................................... 119
25. Cascading volumes using the -m option (pairdisplay) ............................................... 130
26. pairevtwait command -FCA option ......................................................................... 133
27. pairevtwait command -FBC option ......................................................................... 133
28. pairresync command -FCA option .......................................................................... 140
29. pairresync command -FBC option .......................................................................... 140
30. pairresync command -FCA [MU#] -swaps option (3DC) ........................................... 141
31. pairresync command swap operation ...................................................................... 145
32. pairsplit command -FCA option ............................................................................. 148
Tables
1. Document conventions ............................................................................................. 15
2. Instance configuration file parameters ....................................................................... 39
3. Mirror descriptor validity: XP Continuous Access Software and XP Continuous Access
Journal Software ....................................................................................................... 45
4. Mirror descriptor validity: XP Business Copy Software and XP Snapshot ......................... 45
5. Paired XP Continuous Access Software volume status definitions .................................... 52
6. Paired XP Business Copy Software volume status definitions .......................................... 54
7. Paired XP Snapshot volume status definitions .............................................................. 55
8. Files for UNIX-based systems .................................................................................... 56
9. Files for Windows-based systems .............................................................................. 57
10. Command log size (UNIX and Windows) .................................................................. 60
11. Protection facility permitted volumes .......................................................................... 64
12. $HORCMPROMOD protection mode ........................................................................ 70
13. General commands ................................................................................................ 87
14. Windows NT/2000/2003 commands ..................................................................... 88
15. Database Validator commands ................................................................................. 88
16. horctakeoff error codes ............................................................................................ 94
17. horctakeover error codes ......................................................................................... 99
18. inqraid command R:Group LDEV mapping .............................................................. 106
19. paircreate command -fq and $HORCC_SPLT relationship .......................................... 115
20. paircreate command -m default bitmap ................................................................... 116
21. paircreate error codes ........................................................................................... 119
22. paircurchk error codes .......................................................................................... 122
23. pairdisplay JNLS status .......................................................................................... 127
24. pairdisplay % output breakdown ......................................................................... 132
25. pairevtwait error codes ......................................................................................... 136
26. pairmon argument combinations ............................................................................ 137
27. pairresync command: -fq and $HORCC_RSYN relationship ....................................... 143
28. pairresync command: -fq and $HORCC_REST relationship ........................................ 144
29. pairresync error codes ........................................................................................... 146
30. pairsplit command: -fq and $HORCC_SPLT relationship ............................................ 150
31. pairsplit error codes .............................................................................................. 151
Intended audience
This guide is intended for system administrators with knowledge of:
Related documentation
The following documents provide related information:
HP StorageWorks XP Continuous Access Software user guide
HP StorageWorks XP Continuous Access Journal Software user guide
HP StorageWorks XP Business Copy Software user guide
You can find these documents from the Manuals page of the HP Business Support Center website:
http://www.hp.com/support/manuals
In the Storage section, click Storage array systems and then select your product.
Table 1 Document conventions

Convention                 Element
Blue, underlined text      Website addresses
Bold text                  Text typed into a GUI element, such as a box; GUI elements that
                           are clicked or selected, such as menu and list items, buttons,
                           tabs, and check boxes
Italic text                Text emphasis
Monospace text             File and directory names, system output, code, and commands
Monospace, italic text     Code variables; command variables
Monospace, bold text       Emphasized monospace text
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.

IMPORTANT: Provides clarifying information or specific instructions.

NOTE: Provides additional information.

TIP: Provides helpful hints and shortcuts.
HP technical support
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions,
firmware updates, and other product resources.
HP websites
For additional information, see the following HP websites:
http://www.hp.com
http://www.hp.com/go/storage
http://www.hp.com/service_locator
http://www.hp.com/support/manuals
Documentation feedback
HP welcomes your feedback.
To make comments and suggestions about product documentation, please send a message to
[email protected]. All submissions become the property of HP.
XP RAID Manager displays volume or group information and allows you to perform operations through
the command line, a script (UNIX), or a batch file (Windows).
This product has features that ensure reliable transfers in asynchronous mode, including journaling
and protection against link failure.
For effective and complete disaster recovery solutions, this product is integrated with many cluster
solutions, such as XP Cluster Extension for Windows, Linux, Solaris and AIX, and MetroCluster and
ContinentalCluster for HP-UX.
Modes
XP Continuous Access Software can operate in three different modes:
XP Continuous Access Synchronous Software: All write operations on the primary (source) volume
must be replicated to the secondary (copy) volume before the write can be acknowledged to the
host. This mode ensures the highest level of data concurrency possible. Host I/O performance is
directly impacted by the distance between the primary and secondary volumes and therefore this
product is recommended for metropolitan distances.
XP Continuous Access Asynchronous Software: All write operations on the primary volume are
time stamped and stored in the array system cache, also known as the side file, before the write
is acknowledged to the host. The data is then asynchronously replicated to the secondary disk
array and re-applied in sequence to the secondary devices. Data is not always current, but due
to the unique timestamp implementation, data is always consistent. The side file protects host I/O
performance from temporary degradations of the communication link between the sites. It also
acts as a buffer for temporary high write bursts from the host. This product is ideal for long distance
replication.
XP Continuous Access Journal Software: Supported on the XP10000, XP12000, and XP24000
disk arrays, this product works in principle the same as XP Continuous Access Asynchronous
Software, but instead of buffering write I/Os in the more expensive and limited disk array cache
(the side file), this product writes data on special XP LUNs called journal pools. Journal pools
consist of up to 16 physical LDEVs of any size, and can therefore buffer much larger amounts of
data. This product also implements a unique read operation from the remote disk array, instead
of the normal write (push) operation from the local (primary) disk array, and is therefore much
more tolerant of short communication link outages.
HP StorageWorks XP Snapshot
XP Snapshot allows you to create point-in-time copies of only changed data blocks (Copy-on-Write)
and store them in a storage pool.
This product creates a virtual volume (V-VOL) for copy-on-write without designating a specific LUN as
S-VOL. However, for the host to use the volume, there must be a LUN mapped.
Figure 4 Back up the S-VOL in PAIR status (XP Continuous Access Software)

Figure 5 Back up S-VOL in PAIR status (XP Business Copy Software)

Figure 6 Restore S-VOL to P-VOL in split status (XP Continuous Access Software)

Figure 7 Restore S-VOL to P-VOL in split status (XP Business Copy Software)

Figure 8 Swap paired volume for duplex operation (XP Continuous Access Software)

Figure 9 Restoring the S-VOL for duplex operations (XP Continuous Access Software)
Local instance: The instance currently being used, that is, the instance to which commands are
issued. Local instances link to remote instances by using UDP socket services.
Remote instance: The instance that the local instance communicates with, as configured in the
HORCM_INST section of an instance configuration file. (For further information see
XP RAID Manager instance configuration files, page 38)
There are four possible topologies:
CAUTION:
There should be no data on the volume you select as the command device because data on the volume
you select becomes inaccessible.
CAUTION:
MPE/iX systems need a dummy volume set. Create this through the VOLUTIL utility program and scratch
the volume set before converting to a command device.
CAUTION:
OpenVMS systems need a LUN 0 device of 35 MB. Note that storage assigned to the LUN 0 device is not
accessible from OpenVMS.
XP RAID Manager issues SCSI read/write commands to the command device. If the command device
fails, all XP Business Copy Software and XP Continuous Access Software commands terminate
abnormally and the host cannot issue commands to the disk array.
To avoid data loss and system downtime, you can designate an alternate command device. Then, if
XP RAID Manager receives an error notification in reply to a request, XP RAID Manager automatically
switches to the alternate command device.
VMware restrictions
XP RAID Manager can run on VMware if supported by the guest operating system (such as Windows,
Linux, and so forth). The guest operating system depends on VMware support of the virtual hardware
(HBA). Therefore, certain restrictions must be observed when using XP RAID Manager on VMware.
2. Identify the CD-ROM device file to be substituted in the mount commands that follow (for
   example, /dev/dsk/c1t1d0).
3. Log in as root.
   su root
4.
5.
6. Choose a file system for the XP RAID Manager software. You need about 5 MB of disk space.
   The standard and recommended file system to load the software to is /opt.
7. From the /opt directory, use cpio to unpack the appropriate archive. Create the HORCM
   directory if it does not already exist.
   cd /opt
   mkdir HORCM (choose the next command according to your OS)
   cat /cdrom/LINUX/rmxp* | cpio -idum (or)
   cat /cdrom/AIX/rmxp* | cpio -idum (or)
   cat /cdrom/DIGITAL/rmxp* | cpio -idum (or)
9.
   log0  log1  log  usr
2.
3.
4. When the Run window opens, enter D:\WIN_NT\setup.exe (where D is the letter of your
   CD-ROM drive) in the Open dialog box and click OK.
5. The installation wizard opens. Follow the on-screen instructions to install the software.
1. Update your system with MPE/iX 6.5 or greater, along with that OS version's latest Power Patch.
2.
3. Verify that at least one logical volume on the disk array is configured to function as a command
   device.
CAUTION:
MPE/iX systems require that the command device be recognized as a dummy volume set.
Create this through the VOLUTIL utility program and then scratch the volume before converting
it to a command device.
4. Run the POSIX shell from CI and change your working directory to the temporary directory
   /tmp/raidmgr.
   : sh
   Shell/iX> cd /tmp/raidmgr
5.
6. When the previous installation completes successfully, create the device files:
   Shell/iX> mknod /dev/ldev99 c 31 99 (for LDEV devices)
   Shell/iX> mknod /dev/ldev100 c 31 100
   Shell/iX> mknod /dev/cmddev c 31 102 (for Command device)
The 31 in the previous example is called the major number. The 99, 100, 102 are called minor
numbers. For XP RAID Manager, always specify 31 as the major number. The minor number
should correspond to the LDEV numbers as configured in sysgen. Create device files for all the
LDEVs configured through sysgen and for the command device. The device link file for the
command device should be called /dev/cmddev.
7.
8.
9.
   JLIST  INTRODUCED  JOB NAME
   20     THU 5:29P   MANAGER.SYS
   LP     FRI 5:08P   JRAIDMR1,MANAGER.SYS
   LP     FRI 5:08P   JRAIDMR2,MANAGER.SYS
10. Get the physical mapping of the available LDEVs to fill in the HORCM_DEV and HORCM_INST
    sections of the horcm1.conf file. Invoke the shell and change your working directory to
    /HORCM/usr/bin. Run:
    :sh
    Shell/iX> cd /HORCM/usr/bin
    Shell/iX> export HORCMINST=1
    Shell/iX> ls /dev/* | ./raidscan -find
    DEVICE_FILE      UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PROD_ID
    /dev/cmddev        0   S   CL1-D     1    0   35393    22  OPEN-3-CM
    /dev/ldev407       0   S   CL1-E     8    0   35393   263  OPEN-3
    /dev/ldev408       0   S   CL1-E     9    0   35393   264  OPEN-3
    /dev/ldev409       0   S   CL1-E    10    0   35393   265  OPEN-3
    /dev/ldev410       0   S   CL1-E    11    0   35393   266  OPEN-3
    /dev/ldev411       0   S   CL1-E    12    0   35393   267  OPEN-3
    /dev/ldev412       0   S   CL1-E    13    0   35393   268  OPEN-3
11. Now fill in the HORCM_DEV and HORCM_INST sections in your /etc/horcm#.conf files.
    Sample Configuration for Instance 1:
    #
    #/*************************For HORCM_MON****************************/
    HORCM_MON
    #ip_address   service   poll(10ms)   timeout(10ms)
    NONE          horcm0    1000         3000
    #/************************** For HORCM_CMD**************************/
    HORCM_CMD
    #dev_name     dev_name   dev_name
    /dev/cmddev0
    #/************************** For HORCM_DEV**************************/
    HORCM_DEV
    #dev_group    dev_name   port#    TargetID   LU#   MU#
    VG01          oradb1     CL1-E    8          0
    VG02          oradb2     CL1-E    9          0     0
    #/************************* For HORCM_INST *************************/
    HORCM_INST
    #dev_group    ip_address   service
    VG01          HSTB         horcm1
    VG02          HSTC         horcm1
12. Shut down the XP RAID Manager daemon within the shell and the current working directory
/HORCM/usr/bin.
Shell/iX> ./horcmshutdown.sh 1
Restart the XP RAID Manager job using the completed configuration file:
: stream jraidmr1.pub.sys
Directory locations
The services and hosts files are contained in these directories:
UNIX: /etc
Windows NT/2000/2003: %systemroot%\system32\drivers\etc
MPE/iX (in the MPE group directory):
SERVICES.NET.SYS
HOSTS.NET.SYS
OpenVMS:
Services file: SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT
Hosts file: SYS$SYSROOT:[SYSEXE]HOST.DAT
Services file
To configure the services file:
1.
2. Add a udp service entry for each instance that runs on the host and each instance referenced in
   the configuration file. The service number selected must be unique to the services file and in
   the range 1024 to 65535.
Example:
horcm0    11000/udp    #RaidManager instance 0
horcm1    11001/udp    #RaidManager instance 1
Hosts file
Each host running an instance should be entered in the hosts file (for example, /etc/hosts). This
lets you refer to any remote host by either its name or IP address.
If a DNS (domain name server) manages host name resolution on your network, no hosts file editing
is required.
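For example, assuming the remote hosts HSTB and HSTC named in the sample configuration file in this chapter (the IP addresses shown here are hypothetical placeholders), the hosts file entries might look like:

```
192.168.10.11    HSTB
192.168.10.12    HSTC
```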
MPE/iX: /HORCM/etc/horcm.conf
See Appendix E, Using XP RAID Manager with MPE/iX on page 273.
OpenVMS: See Appendix F, Using XP RAID Manager with OpenVMS on page 279.
You can use the mkconf command to create a configuration file. See mkconf on page 109 for usage
information.
If the level of detail provided in the following pages is not sufficient, ask your HP representative to
consult the HP internal document:
XP RAID Manager Basic Specifications
For examples of configuration files, see XP RAID Manager configuration file examples on page 215.
Table 2 Instance configuration file parameters

Parameter                          Default value   Type               Limit
IP_address                         None            Character string   63 characters
host_name                          None            Character string   31 characters
service_name or service_number     None                               15 characters
poll_value (10-ms increments)      1000            Numeric value      None
timeout_value (10-ms increments)   3000            Numeric value      None
dev_name                           None            Character string   31 characters
dev_group                          None            Character string   31 characters
port                               None            Character string   31 characters
target_ID                          None            Numeric value      7 characters
LUN                                None            Numeric value      7 characters
mirror_unit                                        Numeric value      7 characters
RM_group                           None            Character string   31 characters
ip_address                         None            Character string   63 characters
HORCM_MON section
Description
The HORCM_MON section describes the host name or IP address, the port number, and the paired
volume error monitoring interval of the local host.
Syntax
HORCM_MON {host_name | IP_address} {service_name | service_number}
poll_value timeout_value
Arguments
host_name
Name of the host on which this instance runs.
IP_address
IP address of the host on which this instance runs. Specify NONE when two or more network
cards are installed in the server, or several networks (subnets) are configured, and you want
to use this feature to listen on all networks.
service_name
Service name that was configured in the host services file.
service_number
Service number that was configured in the host services file.
poll_value
Specifies a monitoring interval for paired volumes. By making this interval longer, the XP RAID
Manager daemon load is reduced, but it may take longer to notice a change in pair status.
If this interval is set to -1, paired volumes are not monitored. Set to -1 when two or more
instances run on the same machine and one is already monitoring the pair.
timeout_value
Specifies the remote server communication time-out period.
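Both values are expressed in 10-ms units. As a quick sanity check (a sketch using the default values that appear in the examples in this section):

```shell
# poll_value and timeout_value are in 10-ms units
poll_value=1000     # 1000 x 10 ms = 10 seconds
timeout_value=3000  # 3000 x 10 ms = 30 seconds
echo "poll: $((poll_value / 100)) s, timeout: $((timeout_value / 100)) s"
```

Running this prints "poll: 10 s, timeout: 30 s", matching the 10-second poll and 30-second time-out described for the sample configurations.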
Examples
The instance is running on system blue, service name horcm1, with a poll value of 10 seconds and
a time-out value of 30 seconds.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
blue          horcm1    1000         3000
The instance is running on system NONE, indicating two or more network cards are installed in the
server, or several networks (subnets) are configured, and the XP RAID Manager listens on all networks.
The service name is horcm1 with a poll value of 10 seconds and a time-out value of 30 seconds.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm1    1000         3000
Run the raidqry -r group command on each host to examine multiple network configurations.
The following figure shows that the volume group known as oradb is controlled by host HST1 (using
either subnet A or B) and by either HST2 or HST3 (using either subnet A or B).
Figure 12 HORCM_MON
HORCM_CMD section
Description
The HORCM_CMD section defines the command devices XP RAID Manager uses to communicate
with the disk array. A command is initiated to write command data to the special disk array command
device. The disk array then reads this data and carries out the appropriate actions.
Multiple command devices are defined in this section of the configuration file to provide alternate
command devices and paths in the event of failure.
HP recommends that each host have a unique command device. Do not access a command device
by more than one host. Multiple instances on the same host can use the same command device.
To configure command devices, use Command View XP, LUN Manager, HP Remote Web Console,
or XP Command View Advanced Edition Software. If none of these applications are available, ask
your HP representative to configure the command devices.
Syntax
HORCM_CMD command_device [command_device]...
Examples
HP-UX
This example defines two device files as paths to a command device. These devices can be pvlinks
to the same volume on the disk array, or may be different command devices. Placing the second
command device on the same line implies that it is an alternate within the same disk array.
HORCM_CMD
/dev/rdsk/c2t3d0 /dev/rdsk/c6t2d4
This HP-UX example shows multiple disk arrays connected to the host. One instance can control
multiple disk arrays. To enable this feature, the different command devices must be specified on
different lines. XP RAID Manager uses unit IDs to control multiple disk arrays. A device group can
span multiple disk arrays (XP Continuous Access Synchronous Software only). The unit ID must be
appended for every volume device name in the HORCM_DEV section, as shown in the following
figure.
HORCM_CMD
#unitID0 (Array 1)
/dev/rdsk/c1t3d5
#unitID1 (Array 2)
/dev/rdsk/c2t3d5
Figure 13 HORCM_CMD
Windows NT/2000/2003
This example shows the path to a shared command device in Windows.
HORCM_CMD
\\.\PHYSICALDRIVE3
This example shows the use of a Volume GUID for the command device in Windows.
\\.\Volume{GUID}
Because the Volume{GUID} is changed when rebooting, the command device can be designated
using the serial, LDEV, and port numbers.
\\.\CMD-Ser#-Ldev#-Port#
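As a hedged illustration (the values are taken from the raidscan output shown earlier in this guide, not from your array): a command device with serial number 35393, LDEV 22, on port CL1-D would be designated as:

```
HORCM_CMD
#dev_name
\\.\CMD-35393-22-CL1-D
```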
MPE/iX
See Appendix E, Using XP RAID Manager with MPE/iX on page 273.
OpenVMS
See Appendix F, Using XP RAID Manager with OpenVMS on page 279.
HORCM_DEV section
Description
The HORCM_DEV section describes the physical volumes corresponding to the paired volume names.
Each volume listed in HORCM_DEV is defined on a separate line.
Syntax
HORCM_DEV device_group device_name port target_ID LUN [mirror_unit]
Arguments
device_group
Each device group contains one or more volumes. This parameter gives you the capability to
act on a group of volumes with one command. The device group can be any user-defined
name up to 31 characters in length.
device_name
User-defined and unique to the instances using the device groups. It can be up to 31 characters
in length and is a logical name that can be used instead of the physical Port/TID/LUN/MU
number designation.
port
Disk array I/O port through which the volume is configured to be accessed. Port specification
is not case sensitive (CL1-A = cl1-a = CL1-a = cl1-A).
target_ID
SCSI/Fibre target ID assigned to the volume.
LUN
Decimal logical unit number assigned to the volume.
mirror_unit
Used when you are making multiple XP Business Copy Software copies from a P-VOL. The
mirror unit is a number ranging from 0 to 2 and must be explicitly supplied for all XP Business
Copy Software volumes.
If mirror_unit is left blank, the software assumes that XP Continuous Access Synchronous or
Asynchronous Software is being used. The number is not a count of the number of copies to
be made but rather a label for a specific P-VOL to S-VOL relationship.
XP Continuous Access Journal Software allows up to four copies from a P-VOL. The mirror unit
for an XP Continuous Access Journal volume is indicated by an h and a number ranging
from 0 to 3. If mirror_unit is omitted, the software assumes the value of h0. Mirror unit values
h1, h2, and h3 are valid only for XP Continuous Access Journal Software operations.
Example:
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000
HORCM_CMD
#dev_name
/dev/rsd0e
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Group        dev        CL1-A   3          1
Group1       dev1       CL1-A   3          1     0
Group2       dev2       CL1-A   3          1     1
Group3       dev3       CL1-A   3          1     2
Group4       dev4       CL1-A   3          1     h1
HORCM_INST
#dev_group   ip_address   service
Group        HST2         horcm
Group1       HST3         horcm
The following table shows the mirror descriptor validity for pair states.

Table 3 Mirror descriptor validity: XP Continuous Access Software and XP Continuous Access Journal
Software

                                      SMPL               P-VOL              S-VOL
Feature                               MU#0    MU#1-3     MU#0    MU#1-3     MU#0    MU#1-3
XP Continuous Access Software         Valid   Invalid    Valid   Invalid    Valid   Invalid
XP Continuous Access Journal          Valid   Valid      Valid   Valid      Valid   Valid
Software

Table 4 Mirror descriptor validity: XP Business Copy Software and XP Snapshot

                                      SMPL               P-VOL              S-VOL
Feature                               MU#0-2  MU#3-63    MU#0-2  MU#3-63    MU#0    MU#1-63
XP Business Copy Software             Valid   Invalid    Valid   Invalid    Valid   Invalid
XP Snapshot                           Valid   Valid      Valid   Valid      Valid   Invalid
Example
This example shows a volume defined in device group1 known as device g1d1. It is accessible
through disk array unit 0 and I/O port CL1-A. The SCSI target ID is 12, the LUN is 1, and the XP
Business Copy Software mirror unit number is 0.
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
group1       g1d1       CL1-A   12         1     0
You can use XP RAID Manager to control multiple disk arrays with one instance by specifying the unit
ID appended to the port. This example builds on the example in the HORCM_CMD section on page 42.
HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#
group1        g1d1        CL1A     12          0
group2        g2d1        CL1A1    12          0
This example shows that the volume pair with the device name g2d1 resides on disk array unit 1
while the volume pair with device name g1d1 resides on disk array unit 0.
TIP:
For Fibre Channel, if the host reports a different target ID and LU number than raidscan, use the
raidscan value.
Related information
To see configuration file examples, and to see how devices belonging to different unit IDs are
configured, see Appendix A, XP RAID Manager configuration file examples on page 215.
HORCM_LDEV section
Description
The HORCM_LDEV section specifies stable LDEV and serial numbers of physical volumes that correspond
to paired logical volume names. Each group name is unique and typically has a name fitting its use
(e.g. database data, Redo log file, UNIX file). The group and paired logical volume name described
in this item must also be known to the remote server.
NOTE:
HORCM_LDEV is usable only with the XP10000, XP12000, and XP24000 disk arrays, microcode
21-03-00/00 or later. If HORCM_LDEV fails at startup, use HORCM_DEV.
Syntax
HORCM_LDEV device_group device_name Serial# CU:LDEV(LDEV#) MU#
Arguments
device_group
Each device group contains one or more volumes. This parameter gives you the capability to
act on a group of volumes with one command. The device group can be any user-defined
name up to 31 characters in length.
device_name
User-defined and unique to the instances using the device groups. It can be up to 31 characters
in length and is a logical name that can be used instead of the physical Port/TID/LUN/MU
number designation.
46
Serial#
Serial number of the disk array.
CU:LDEV(LDEV#)
Specifies the LDEV number in three possible formats:
As hex used by the SVP or Web console
Example: (LDEV# 260) 01:04
As decimal used by the inqraid command
Example: (LDEV# 260) 260
As hex used by the inqraid command
Example: (LDEV# 260) 0x104
Example:

HORCM_LDEV
#dev_group    dev_name    Serial#    CU:LDEV(LDEV#)    MU#
oradb         dev1        30095      02:40             0
oradb         dev2        30095      02:41             0
HORCM_INST section
Description
The HORCM_INST section defines how XP RAID Manager groups link to remote instances.
Syntax
HORCM_INST device_group { host_name | IP_address } { service_name |
service_number }
Arguments
device_group
Defined in the HORCM_DEV section. Each group defined in HORCM_DEV must be represented
in the HORCM_INST section only once for every remote instance.
host_name
Host name of the host on which the remote instance runs. The remote instance can run on the
same host as the local instance.
IP_address
IP address of the host on which the remote instance runs. The remote instance can run on the
same host as the local instance.
service_name
Service name that was entered into the services file for the remote instance.
service_number
Service number that was entered into the services file for the remote instance.
47
Example
The following example shows that the opposite side of the pairs contained within the group called
group1 are serviced by an instance residing on host yellow that listens on a UDP port defined in
/etc/services named horcm0.
HORCM_INST
#dev_group    ip_address    service
group1        yellow        horcm0
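The remote instance named here needs a matching definition that points back at the local host. As an illustrative sketch (the host name blue and the service name horcm1 are assumptions, not taken from this guide), the configuration file on host yellow could contain:

```text
HORCM_INST
#dev_group    ip_address    service
group1        blue          horcm1
```

where blue is the host running the local instance and horcm1 is the service name on which that local instance listens.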
HP-UX
Run this shell command on each host that runs an instance:
/usr/bin/horcmstart.sh [instance_number] [instance_number]...
If you do not specify an instance number, the command uses the value of the HORCMINST
environment variable. The default value is 0.
Windows NT/2000/2003
From the command prompt, under the \HORCM\etc directory, type this command:
horcmstart instance_number [instance_number]...
MPE/iX
See Appendix E, Using XP RAID Manager with MPE/iX on page 273.
OpenVMS
Run instances as a detached process. See Appendix F,
Using XP RAID Manager with OpenVMS on page 279.
UNIX
For UNIX ksh, use the export command:
export HORCC_MRCF=1
export HORCMINST=n
48
Windows NT/2000/2003
For Windows NT/2000/2003, use the set command:
set HORCC_MRCF=1
set HORCMINST=n
MPE/iX
For MPE/iX, use the setenv command.
setenv HORCC_MRCF 1
setenv HORCMINST n
OpenVMS
For OpenVMS, set the environment variables using symbols:
HORCC_MRCF := 1
HORCMINST := 0
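Putting the two variables together, a Windows session for XP Business Copy Software operations might look like the following sketch (the instance number 1 and the group name oradb are hypothetical):

```text
set HORCMINST=1
set HORCC_MRCF=1
pairdisplay -g oradb
```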
UNIX
To cancel the XP Business Copy Software environment, unset the HORCC_MRCF variable;
setting it to a null value is not sufficient.
For UNIX ksh, use the unset command:
unset HORCC_MRCF
set HORCMINST=n
Windows NT/2000/2003
For Windows NT/2000/2003, use the usetenv command option.
Related Information
For syntax descriptions, see usetenv on page 185 and setenv on page 180.
MPE/iX
Within the POSIX shell, use the unset command:
unset HORCC_MRCF
set HORCMINST=n
OpenVMS
For OpenVMS, use the following command:
$DELETE/SYMBOL HORCC_MRCF
The pair status values are SMPL, PAIR, COPY, PSUS, PSUE, and PFUS.
The P-VOL controls the status for the pair, which is reflected in the status of the S-VOL. When you issue
a command, the status usually changes. A read or write request from the host is allowed or rejected,
depending on the status of the paired volume, as shown in the following figure.
CAUTION:
The XP Business Copy Software and XP Continuous Access Software Remote Console based GUI has
different terminology and functionality from the XP RAID Manager interface. For instance:
The terms suspend and split may have opposite meanings
S-VOL read/write options while suspended may differ
The GUI allows you to choose/force a PSUE state
For more detail, refer to the following manuals (XP512/XP48 disk array only):
HP StorageWorks Business Copy XP: User's Guide
HP StorageWorks XP Continuous Access Software user and reference guide
If a volume making up an aggregated LUSE volume is PSUE status, the LUSE volume is reported as
PDUB (dubious) status.
Status          Pairing status     Primary                      Secondary
SMPL            Unpaired volume    R/W enabled                  R/W enabled
PAIR                               R/W enabled                  R enabled (See Note 3)
COPY                               R/W enabled                  R enabled (See Note 3)
PSUS                               R/W enabled                  R/W enabled (See Note 1)
PSUE (Error)                       R/W enabled (See Note 2)     R enabled (See Note 3)
PFUS                               R/W enabled (See Note 2)     R enabled (See Note 3)

Note 1: Valid when reading and writing are enabled using the write enable pair split option.
Note 2: Reading and writing are enabled as long as no errors occur in the primary volume.
Note 3: Reading disabled when the -m noread option is specified in the paircreate command.
NOTE:
The data at the XP Continuous Access Asynchronous Software S-VOL is assured to be consistent, but is
only current in PSUS state.
The pair status values are SMPL, PAIR, COPY, RCPY, PSUS, and PSUE.
The P-VOL controls the pair state that is typically reflected in the status of the S-VOL. The status can
be changed when a command is issued. A read or write request from the host is allowed or rejected
according to the status, as shown in the following figure.
CAUTION:
The XP Business Copy Software and XP Continuous Access Software Remote Console based GUI has
different terminology and functionality from the XP RAID Manager interface. For instance:
The terms suspend and split may have opposite meanings
S-VOL read/write options while suspended may differ
The GUI allows you to choose/force a PSUE state
For more detail, refer to the following manuals (XP512/XP48 disk array only):
NOTE:
The data in the XP Business Copy Software S-VOL in any state except PSUS is likely to be inconsistent and
not current.
Status          Pairing status     Primary                      Secondary
SMPL            Unpaired volume    R/W enabled                  R/W disabled
PAIR                               R/W enabled                  R enabled (See Note 2)
COPY                               R/W enabled                  R enabled (See Note 2)
RCOPY                              R/W enabled                  R enabled (See Note 2)
PSUS                               R/W enabled                  R/W enabled
PSUE (Error)                       R/W enabled (See Note 1)     R enabled (See Note 2)

Note 1: Valid when reading and writing are enabled, as long as no failure occurs in the P-VOL.
Note 2: Reading disabled when the user specified the -m noread option in the paircreate command.
The pair status values are SMPL, PAIR, COPY, RCPY, PSUS, and PSUE.
The P-VOL controls the pair state that is typically reflected in the status of the S-VOL. The status can
be changed when a command is issued. A read or write request from the host is allowed or rejected
according to the status, as shown in the following figure.
Status          Primary         Secondary
SMPL            R/W enabled     R/W enabled (See Note 2)
PAIR (PFUL)     R/W enabled     R/W disabled
COPY            R/W enabled     R/W disabled
RCOPY           R/W disabled    R/W disabled
PSUS (PFUS)     R/W enabled     R/W enabled
PSUE (Error)    R/W enabled     R/W disabled (See Note 1)
Note 1: Valid when reading and writing are enabled, as long as no failure occurs in the P-VOL.
Note 2: A V-VOL unmapped to the S-VOL of XP Snapshot replies to a SCSI Inquiry, but is not enabled for
reading or writing.
Title                        Command file                   Command
HORCM (RM)                   /etc/horcmgr                   none
HORCM_CONF                   /HORCM/etc/horcm.conf          none
Takeover                     /usr/bin/horctakeover          horctakeover
                             /usr/bin/mkconf.sh             mkconf
Accessibility check          /usr/bin/paircurchk            paircurchk
Pair generation              /usr/bin/paircreate            paircreate
Pair splitting/suspending    /usr/bin/pairsplit             pairsplit
Pair resynchronization       /usr/bin/pairresync            pairresync
Event waiting                /usr/bin/pairevtwait           pairevtwait
Error notification           /usr/bin/pairmon               pairmon
Volume checking              /usr/bin/pairvolchk            pairvolchk
                             /usr/bin/pairdisplay           pairdisplay
RAID scan                    /usr/bin/raidscan              raidscan
                             /usr/bin/raidar                raidar
Connection confirmation      /usr/bin/raidqry               raidqry
Trace control                /usr/bin/horcctl               horcctl
Synchronous waiting          /usr/bin/pairsyncwait          pairsyncwait
                             /usr/bin/horcmstart.sh         horcmstart.sh
                             /usr/bin/horcmshutdown.sh      horcmshutdown.sh
Connection confirmation      /HORCM/usr/bin/inqraid*        inqraid
                             /usr/bin/raidvchkset           raidvchkset
                             /usr/bin/raidvchkdsp           raidvchkdsp
                             /usr/bin/raidvchkscan          raidvchkscan

*The inqraid command is provided only for Linux, HP-UX, Solaris, MPE/iX, and OpenVMS.
Title                      Command file                        Command
HORCM (RM)                 \HORCM\etc\horcmgr.exe              none
HORCM_CONF                 \HORCM\etc\horcm.conf               none
Takeover                   \HORCM\etc\horctakeover.exe         horctakeover
                           \HORCM\etc\mkconf.exe               mkconf
Accessibility check        \HORCM\etc\paircurchk.exe           paircurchk
Pair generation            \HORCM\etc\paircreate.exe           paircreate
Pair split/suspend         \HORCM\etc\pairsplit.exe            pairsplit
Pair resynchronization     \HORCM\etc\pairresync.exe           pairresync
Event waiting              \HORCM\etc\pairevtwait.exe          pairevtwait
Error notification         \HORCM\etc\pairmon.exe              pairmon
Volume checking            \HORCM\etc\pairvolchk.exe           pairvolchk
                           \HORCM\etc\pairdisplay.exe          pairdisplay
RAID scanning              \HORCM\etc\raidscan.exe             raidscan
                           \HORCM\etc\raidar.exe               raidar
Connection confirmation    \HORCM\etc\raidqry.exe              raidqry
Trace control              \HORCM\etc\horcctl.exe              horcctl
                           \HORCM\etc\horcmstart.exe           horcmstart
                           \HORCM\etc\horcmshutdown.exe        horcmshutdown
Synchronous waiting        \HORCM\etc\pairsyncwait.exe         pairsyncwait
Connection confirmation    \HORCM\etc\inqraid.exe              inqraid
Takeover                   \HORCM\usr\bin\horctakeover.exe     horctakeover
Accessibility check        \HORCM\usr\bin\paircurchk.exe       paircurchk
Pair generation            \HORCM\usr\bin\paircreate.exe       paircreate
Pair split/suspend         \HORCM\usr\bin\pairsplit.exe        pairsplit
Pair resynchronization     \HORCM\usr\bin\pairresync.exe       pairresync
Event waiting              \HORCM\usr\bin\pairevtwait.exe      pairevtwait
Volume check               \HORCM\usr\bin\pairvolchk.exe       pairvolchk
                           \HORCM\usr\bin\pairsyncwait.exe     pairsyncwait
                           \HORCM\usr\bin\pairdisplay.exe      pairdisplay
RAID scan                  \HORCM\usr\bin\raidscan.exe         raidscan
Connection confirmation    \HORCM\usr\bin\raidqry.exe          raidqry
                           \HORCM\usr\bin\raidvchkset          raidvchkset
                           \HORCM\usr\bin\raidvchkdsp          raidvchkdsp
                           \HORCM\usr\bin\raidvchkscan         raidvchkscan
Tool                       \HORCM\Tool\chgacl.exe              chgacl
Log files
XP RAID Manager and its commands write internal logs and trace information to help the user:
identify causes of software failures.
keep records of the transition history of pairs.
The software logs are classified as either startup or execution logs. The startup logs contain data on
errors occurring before the software is ready to provide services. The execution logs (error, trace,
and core logs) contain data on internal errors caused by hardware or software problems. When an
error occurs while running a command, data on the error is collected in the command log file.
UNIX systems
Startup log files
HORCM startup log: $HORCM_LOG/horcm_HOST.log
Command log: $HORCC_LOG/horcc_HOST.log and $HORCC_LOG/horcc_HOST.oldlog
MPE/iX systems
Startup log files
HORCM startup log: $HORCM_LOG/horcm_HOST.log
Command log: $HORCC_LOG/horcc_HOST.log
Error log file
HORCM error log: $HORCM_LOG/horcmlog_HOST/horcm.log
Trace file
HORCM trace: $HORCM_LOG/horcmlog_HOST/horcm_PID.trc
OpenVMS systems
Startup log file
sys$posix_root :[horcm.log]
Log directories
The log directories for the instance containing the different log files may be specified using environment
variables:
$HORCM_LOG: A trace log file directory specified using the environment variable HORCM_LOG.
The HORCM log file, trace file and core file (and the command trace file and core file) are stored
in this directory. If you do not specify an environment variable, /HORCM/log/curlog becomes
the default.
$HORCC_LOG: A command log file directory specified using the environment variable
HORCC_LOG. If you do not specify an environment variable, the directory /HORCM/logn (n is
the instance number) becomes the default.
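For example, an administrator might point both directories at a dedicated location before starting an instance (the paths shown are hypothetical, not defaults from this guide):

```text
HORCM_LOG=/var/horcm/log
HORCC_LOG=/var/horcm/cmdlog
export HORCM_LOG HORCC_LOG
```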
While XP Continuous Access Software is running, log files are stored in the $HORCM_LOG directory.
When XP RAID Manager starts up, the log files created are saved automatically in the
$HORCM_LOGS directory:
XP RAID Manager in operation log file directory:
$HORCM_LOG = /HORCM/logn/curlog
n is the instance number.
XP RAID Manager automatic archives log file directory:
$HORCM_LOGS = /HORCM/logn/tmplog
n is the instance number.
The /HORCM/logn/horcc_HOST.conf file controls command logging as follows:

horcc_HOST.conf setting    Result
HORCC_LOGSZ = value        Command logging up to the specified size; when the maximum is reached,
                           horcc_HOST.log is moved to horcc_HOST.oldlog
HORCC_LOGSZ = 0            Normal command error logging
unspecified                Normal command error logging
Command log sizing is accomplished by setting a value for the variable HORCC_LOGSZ, where the
value is the desired HOST.log size. For example:
HORCC_LOGSZ=2048
You can also set a variable to disable the logging of specific command and exit code conditions. All
commands except inqraid and all error codes except EX_xxx can be disabled.
For example, to disable the pairvolchk returning a 32 (S-VOL COPY) status you enter:
pairvolchk=32
User-created files
When constructing the XP RAID Manager environment, the system administrator should make a copy
of the HORCM_CONF file, edit the file for the system environment, and save the file:
UNIX: /etc/horcm.conf or /etc/horcmn.conf where n is the instance number.
Windows NT/2000/2003: \WINNT\horcm.conf or \WINNT\horcmn.conf where n is the
instance number.
MPE/iX: /etc/horcm.conf or /etc/horcmn.conf where n is the instance number.
OpenVMS: sys$posix_root : [etc]horcmn.conf where n is the instance number.
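For instance, a UNIX setup for instance 1 could follow this sketch (the template path is the HORCM_CONF location listed earlier in this chapter; the editor step is whatever the administrator prefers):

```text
cp /HORCM/etc/horcm.conf /etc/horcm1.conf    # copy the supplied template
vi /etc/horcm1.conf                          # edit it for the system environment
```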
$HORCM_TRCBUF: Specifies the trace mode. If you specify this environment variable, data is
written to the trace file in nonbuffered mode. If you do not specify it, data is written in buffered
mode.
The trace mode can be changed in real time by using the horcctl -c -b command.
$HORCM_TRCUENV: Specifies whether to use the trace control parameters (TRCLVL and TRCBUF
trace types) as they are when a command is issued. When you specify this environment variable,
the most recently set trace control parameters are used. If you do not specify it, the default trace
control parameters are used: tracing becomes level 4, and trace mode is set to buffer mode.
$HORCM_FCTBL: Changes the fibre address conversion table number when the target ID indicated
by the raidscan command is different from the target ID used by the host.
$HORCMSTART_WAIT: Changes the time-out value (in seconds) for startup. The default is 200.
Must be a minimum of 5 seconds and is set in multiples of 5.
Example:
HORCMSTART_WAIT=500
export HORCMSTART_WAIT
$HORCC_LOGSZ: Specifies the maximum size of the command log file. Example:
HORCC_LOGSZ=2048
When the specified maximum file size is reached, the /HORCM/log*/horcc_HOST.log file is
moved to /HORCM/log*/horcc_HOST.oldlog.
If specified as 0 or unspecified, normal command error logging occurs.
$HORCC_TRCSZ: Specifies the size of the command trace file in kilobytes. If you do not specify a
size, the default trace size for XP Continuous Access Software commands is used. This default
trace size is the trace size used by the software.
The default trace size for the commands can be changed in real time by using the horcctl -d
-s command.
$HORCC_TRCLVL: Specifies the command trace level (between 0 and 15). If you specify a negative
value, the trace mode is canceled. If you do not specify a level, the default trace level for XP
Continuous Access Software commands is used. This tracing is level 4 by default (or the XP
Continuous Access Software level). You can change the default trace level for the commands in
real time using the horcctl -d -l command.
$HORCC_TRCBUF: Specifies the command trace mode. If you specify this environment variable,
data is written to the trace file in nonbuffer mode. If you do not specify it, the default trace mode
for XP Continuous Access Software commands is used. This default is buffered mode (or the XP
Continuous Access Software trace mode). You can change the default trace mode for the commands
in real time using the horcctl -d -b command.
# pairdisplay -g <group> -I5 ...
2.
To set the environment to XP Continuous Access Software or XP Business Copy Software, use the
-I[H][M] (for the Hitachi RAID Manager) or the -I[CA][BC] (for the HP XP RAID Manager)
arguments.
For example, to set the environment for XP Continuous Access Software, use -IH or -ICA:
# pairdisplay -g <group> -ICA ...
3.
# pairdisplay -g <group> -IBC5 ...
Security is determined by command device definition within the SVP, Remote Console, or via SNMP.
Upon definition, the protection facility for each command device can be enabled by setting an attribute.
The software refers to this attribute when it first recognizes the command device.
A command device with protection ON permits access only to volumes that are both on its list of
allowed volumes and viewable by the host.
The following figure shows the definition of a protected (access refused) volume.
[Figure: the volume /dev/rdsk/c0t0d0 is not permitted for mirror descriptors BC0, BC1, and BC2, and its device file is reported as Unknown.]
Permission command
To allow initial access to a protected volume, you must run the Permission command. This command
is the -find inst option of raidscan (see raidscan, page 167), which /etc/horcmgr runs
automatically upon XP RAID Manager startup. With security enabled, the software permits operations
on a volume only after the Permission command runs. Operations target volumes listed in the
horcm.conf file.
The command compares volumes in the horcm.conf file to all host viewable volumes. Results are
noted within XP RAID Manager in an internal table of protected and permitted volumes based on the
horcm.conf file and the results of the Inquiry command. The Inquiry result is based on the LUN
security for that host; you must configure LUN security before beginning XP RAID Manager operation.
Attempts to control protected volumes are rejected with the error code EX_ENPERM.
The pairdisplay command has no XP RAID Manager protection restrictions. Using this command,
you can confirm whether volumes are permitted or not. Non-permitted volumes are shown without
LDEV number information (****).
Example:
# pairdisplay -g oradb
(in the oradb group output, non-permitted volumes appear with **** in place of their LDEV information)
raidscan
The raidscan command shows all volumes without restriction because it does not use the
HORCM_DEV and HORCM_INST fields in the horcm.conf file.
To identify permitted volumes with raidscan, use the -find option (supported with version
01.05.00). This option shows the device file name and array serial number information. You can
use raidscan -find to create the horcm.conf file, because only permitted volumes (from the
host's perspective) are displayed.
Example (HP-UX):
raidscan -find

PORT     TARG    LUN    SERIAL    LDEV    PRODUCT_ID
CL1-D    3       0      35013     17      OPEN-3
CL1-D    3       1      35013     18      OPEN-3
Registration process 1
The following registers permitted volumes in a file ($HORCMPERM). If the $HORCMPERM file already
exists, the software uses the existing file without doing a new ioscan (Registration process 2).
If you want to permit even fewer volumes, edit the device file list in the $HORCMPERM file. If you try
to add device files that ioscan does not see (due to nonexistence or a LUN security product), an
error is returned at access time. This file is simply the text output (device files only) of a prior ioscan
with the non-XP device files removed.
LVM
OR
# vgdisplay -v /dev/vg01 | grep dsk | sed 's/\/*\/dsk\//\/rdsk\//g' | raidscan -find verify 1 -fd

DEVICE_FILE         Group     PairVol    Device_File    M    SERIAL    LDEV
/dev/rdsk/c0t3d0    oradb1    oradev1    c0t3d0         1    35013     17
/dev/rdsk/c0t3d1    oradb1    oradev2    c0t3d1         1    35013     18
/dev/rdsk/c0t3d2    oradb     oradev3    c0t3d2         1    35013     19
/dev/rdsk/c0t3d3    -         -          -              1    35013     20
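The sed expression in the pipeline above converts LVM block device paths (.../dsk/...) into the raw device paths (.../rdsk/...) that raidscan expects. That conversion can be sketched in isolation (the sample volume group paths are hypothetical):

```shell
# Convert /dev/<vg>/dsk/<dev> block device paths to /dev/<vg>/rdsk/<dev>
# raw device paths, as done before piping the list to raidscan -find.
printf '%s\n' /dev/vg01/dsk/c0t3d0 /dev/vg01/dsk/c0t3d1 \
  | sed 's/\/*\/dsk\//\/rdsk\//g'
# prints:
#   /dev/vg01/rdsk/c0t3d0
#   /dev/vg01/rdsk/c0t3d1
```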
Registration process 2
If no $HORCMPERM file exists, you can run the following commands manually to permit the use of all
volumes the host is currently allowed to see (LUN security products may or may not be in place).
HP-UX: ioscan -fun | grep -e rdisk -e rdsk | /HORCM/usr/bin/raidscan -find inst
Linux: ls /dev/sd* | /HORCM/usr/bin/raidscan -find inst
Solaris: ls /dev/rdsk/* | /HORCM/usr/bin/raidscan -find inst
AIX: lsdev -C -c disk | grep hdisk | /HORCM/usr/bin/raidscan -find inst
If the lsdev command does not show the TID and LUN (for example, 2F-00-00-2,0) in the column
output for the devices, the -d[g] raw device option (on all commands) and raidscan -find
cannot find target devices.
lsdev -C -c disk | grep hdisk | /HORCM/usr/bin/raidscan -find inst
# lsdev -C -c disk
hdisk1 Defined
04-02-01
This happens when a Fibre Channel adapter is used with a different device driver (for example,
an Emulex adapter with an AIX driver).
MPE/iX: callci dstat | /HORCM/usr/bin/raidscan -find inst
Windows NT/2000/2003: echo hd0-999 | x:\HORCM\etc\raidscan.exe -find inst
The MAX volume to be scanned is 1000 by default.
NOTE:
This registration process has some risk because it runs automatically upon /etc/horcmgr startup to
validate the -fd option and is done without checking for protection mode. Registering permitted volumes
slows horcmstart.sh (XP RAID Manager startup) in proportion to how many devices a host has, although
the XP RAID Manager daemon itself runs as usual. If you want XP RAID Manager to start up faster in
non-protection mode, create $HORCMPERM as an empty (size 0) dummy file or set
HORCMPERM=MGRNOINST. In that case, the -fd option shows the Device_File name as Unknown.
Afterwards, you can validate the -fd option by using raidscan -find inst.
Environment variables
$HORCMPROMOD
This environment variable sets protection mode ON by force. If your command device was created
with protection mode OFF, this parameter forces protection mode ON, as shown in the following
table.
Table 12 $HORCMPROMOD protection mode

Original command device setting    HORCMPROMOD     Resulting mode
Protection mode ON                 -               Protection mode ON
Protection mode OFF                Variable set    Protection mode ON
$HORCMPERM
This variable is used to specify the XP RAID Manager permission file. If no file name is specified, the
default is /etc/horcmperm.conf, or /etc/horcmpermn.conf (where n is the instance number).
If a permission file exists, /etc/horcmgr runs the following command to permit the volumes listed
in the file.
HP-UX: cat $HORCMPERM | /HORCM/usr/bin/raidscan -find inst
Windows NT/2000/2003: type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst
If no permission file exists, /etc/horcmgr runs a built-in command to permit all volumes owned by
the host.
HP-UX: ioscan -fun | grep rdsk | /HORCM/usr/bin/raidscan -find inst
Linux: ls /dev/sd* | /HORCM/usr/bin/raidscan -find inst
Solaris: ls /dev/rdsk/* | /HORCM/usr/bin/raidscan -find inst
AIX: lsdev -C -c disk | grep hdisk | /HORCM/usr/bin/raidscan -find inst
Tru64 UNIX: ls /dev/rdisk/dsk* | /HORCM/usr/bin/raidscan -find inst
Digital UNIX: ls /dev/rrz* | /HORCM/usr/bin/raidscan -find inst
DYNIX/ptx: /etc/dumpconf -d | grep sd | /HORCM/usr/bin/raidscan -find inst
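For reference, a $HORCMPERM file is simply a list of raw device files, one per line. A hypothetical HP-UX example permitting three volumes:

```text
/dev/rdsk/c0t3d0
/dev/rdsk/c0t3d1
/dev/rdsk/c0t3d2
```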
Data protection
User data files are written to a disk through a software layer such as a file system, LVM, disk driver,
SCSI protocol driver, bus adapter, or SAN switching fabric. Data corruption can occur due to software
or human error. XP RAID Manager subsystems provide data protection by guarding specified volumes
against accidental writing.
Data protection functions include:
Database Validator on page 71. For further information, see HP StorageWorks XP24000 Database
Validator User's Guide.
Data Retention Utility on page 72. For further information, see HP StorageWorks XP24000 Data
Retention Utility User's Guide.
Database Validator
Database Validator is designed for the Oracle database platform to prevent data corruption between
the database and the storage subsystem. Database Validator prevents corrupted data blocks generated
in the database-to-storage subsystem infrastructure from being written onto the storage disk.
XP RAID Manager has options in the following three commands for setting and verifying data protection:
raidvchkset: Sets the parameters for guarding specified volumes. See raidvchkset on page 186.
raidvchkdsp: Shows the guarding parameters for specified volumes, based on the configuration
file. See raidvchkdsp on page 190.
raidvchkscan: Shows the guarding parameter for specified volumes, based on the raidscan
command. See raidvchkscan on page 195.
Guarding options
XP RAID Manager supports the following guarding options:
Hiding from inquiry commands.
Conceals the target volumes from SCSI Inquiry commands by responding with "unpopulated volume"
(0x7F) as the device type.
SIZE 0 volume.
Replies to SCSI Read Capacity commands with SIZE 0 for the target volume.
Read protection.
Protects volumes from reading by responding with the check condition of Illegal function
(SenseKey=0x05, SenseCode=0x2200).
Write protection.
Protects volumes from writing by replying with Write Protect in the Mode sense header and by
responding with the check condition of Write Protect (SenseKey=0x07, SenseCode=0x2700).
S-VOL disabling.
Protects volumes from becoming an S-VOL during pair creation.
Commands affected
XP RAID Manager has options for setting and verifying guarding using the same three commands as
with Database Validator: raidvchkset, raidvchkdsp, and raidvchkscan.
[Example output (truncated): seven OPEN-3 volumes with SSID 0004 in RAID groups 5:01-01 through 5:01-03]
Example 2:
chgacl /A:RMadmin \\.\PHYSICALDRIVE10 \\.\PHYSICALDRIVE9
You can also use the \\?\Volume{GUID} format used by Windows commands such as mountvol.
Example 2:
chgacl /A:RMadmin Scsi0 Scsi1 Scsi2
Restrictions
Because the ACL for the device object is set every time Windows boots, access must be reset every
time the system starts up.
Use the Windows Scheduled Tasks application to run a batch file that adds the user name to the
access list when the system reboots.
To add a scheduled task (Windows 2000/2003):
1. Double-click Scheduled Tasks.
2. Double-click Add Scheduled Task. The Scheduled Task Wizard appears.
3. Click Next.
4. through 7. Complete the remaining wizard steps.
You can redirect the output of the batch file by adding redirection in the batch file. Alternately, you
can specify redirection in the Scheduled Task item's Run field in advanced properties (for example,
C:\HORCM\add_RM_user.bat > C:\HORCM\logs\add_RM_user.log).
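A minimal batch file for this task might look like the following sketch (the user name RMadmin and the device names are assumptions, following the chgacl examples earlier in this chapter):

```text
REM add_RM_user.bat - re-grant access to XP RAID Manager devices after reboot
C:\HORCM\Tool\chgacl.exe /A:RMadmin Scsi0 Scsi1
```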
NOTE:
If you change the Windows system administrator's password, this scheduled task does not run. You must
modify the task by entering the new password.
When new device objects (physical drives) are created, you must update user access for these devices.
Restrictions
Restriction 1.
A user without system administrator privilege is not allowed to use the Windows mountvol command
(although some current Windows 2000 revisions allow a user to mountvol a directory to a volume).
Therefore, a user cannot run the directory mount option using the mountvol command.
For example, raidscan -x mount C:\test \vol5 generates an error even though the system
administrator has added the user name to the access list of the volume.
Restriction 2.
The inqraid -gvinf option uses the %SystemDrive%:\windows\ or %SystemDrive%:\WINNT\
directory. Therefore, the user running this command receives errors unless the system administrator
grants the user write access to the directory.
XP RAID Manager version 01.15.02 and later allows the user to set the HORCM_USE_TEMP variable
to prevent these errors.
Example:
C:\HORCM\etc\>set HORCM_USE_TEMP=1
C:\HORCM\etc\>inqraid $Phys -gvinf
Restriction 3.
The user issuing commands and the user who started the HORCM instance must have the same system
privileges. The following scenario is an example:
An administrator started HORCM instance 5. User A, who has only user privileges, cannot use commands
with HORCM instance 5. Even if user A has been added to the ACL for the devices, user A's commands
cannot communicate with a HORCM instance started by another user with different privileges.
XP RAID Manager version 01.15.02 and later allows the user to connect to HORCM by setting the
HORCM_EVERYCLI environment variable.
Example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and
Settings\RMadmin\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>set HORCM_EVERYCLI=1
C:\HORCM\etc\>horcmstart
Examples:
inqraid $LETALL -CLI

DEVICE_FILE      PORT     SERIAL    LDEV   CTG   H/M/12    SSID    R:Group    PRODUCT_ID
D:\Vol2\Dsk4     -        -         -      -     -         -       -          DDRS-34560D
E:\Vol44\Dsk0    CL2-K    61456     194    -     s/s/ss    0004    1:01-10    OPEN-3
F:\Vol45\Dsk0    CL2-K    61456     194    -     s/s/ss    0004    1:01-10    OPEN-3
G:\Dmt1\Dsk1     CL2-K    61456     256    -     s/s/ss    0005    1:01-11    OPEN-3
G:\Dmt1\Dsk2     CL2-K    61456     257    -     s/s/ss    0005    1:01-11    OPEN-3
G:\Dmt1\Dsk3     CL2-K    61456     258    -     s/s/ss    0005    1:01-11    OPEN-3
[drive:]path VolumeName
[drive:]path /D
[drive:]path /L
\\?\Volume{56e4954a-28d5-4824-a408-3ff9a6521e5d}\    G:\
\\?\Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}\    F:\
DEVICE_FILE    UID    S/F    PORT     TARG    LUN    SERIAL    LDEV    PRODUCT_ID
\Vol46\Dsk1    0      F      CL2-K    7       1      61456     193     OPEN-3
raidscan -pi $Volume -find sync -g ORB
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}

The following example flushes the system buffer associated with all groups for the local instance.
raidscan -pi $Volume -find sync
[SYNC] : ORA ORA_000[-] -> \Vol44\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5d}
[SYNC] : ORA ORA_000[-] -> \Vol45\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5e}
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
NOTE:
Because Windows NT does not support LDM volumes, you must use $LETALL instead of $Volume.
Offline and online backup using raidscan -find sync (Windows NT)
On Windows NT, the raidscan -find sync command flushes the system buffer by finding a
logical drive that corresponds to a configuration file group. This eliminates the need to use the -x
mount and -x umount commands. The following is an example for group ORB.
For offline backup:
P-VOL side
S-VOL side
S-VOL side
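As a rough sketch of the flow above (the exact command sequence on each side is an assumption based on the pair operations described in this guide, not a reproduction of the lost example):

```text
:: P-VOL side - flush the buffer for group ORB, then split the pair
raidscan -pi $LETALL -find sync -g ORB
pairsplit -g ORB
:: S-VOL side - back up the split S-VOL, then resynchronize the pair
pairresync -g ORB
```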
Offline and online backup using raidscan -find sync (Windows 2000/2003)
On Windows 2000/2003, the raidscan -find sync command flushes the system buffer
associated with a logical drive by finding a Volume{GUID} that corresponds to a configuration file
group. This eliminates the need to use the -x mount and -x umount commands. The following is
an example for group ORB.
For offline backup:
P-VOL side
S-VOL side
P-VOL side
S-VOL side
S-VOL side
NOTE:
Because the cluster disk driver is non-Plug and Play, the Noread volume is displayed as "Device is not
ready" when booted. You can verify this by using the inqraid command, as shown in the following example:
SERIAL  LDEV CTG  H/M/12
-       -         -
If this happens, use the following procedure to disable the cluster disk driver:
1. In the Computer Management window, double-click System Tools, and then click Device Manager.
2. In the View menu, click Show Hidden Devices. A list of non-Plug and Play drivers is displayed in
the right-hand pane.
3. Open Non-Plug and Play Drivers, right-click Cluster Disk, and then click Disable. When asked to
confirm that you want to disable the cluster disk, click Yes, and then click Yes to restart the computer.
4. Verify that you can see the Noread volume by using the inqraid command, as shown in the following
example:
inqraid $Phy -CLI
DEVICE_FILE  PORT   SERIAL  LDEV CTG  H/M/12
Harddisk0    CL2-K  61456   194  -    s/S/ss
Harddisk1    CL2-K  61456   256  -    s/S/ss
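Scripts that post-process inqraid -CLI output can rely on its whitespace-aligned columns. A minimal Python sketch (the helper is hypothetical and assumes no field contains embedded spaces, which holds for the columns shown above):

```python
def parse_inqraid_cli(text):
    """Parse whitespace-delimited `inqraid -CLI` output into row dicts.

    Assumes the first non-empty line is the header row.
    """
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header = lines[0].split()
    return [dict(zip(header, ln.split())) for ln in lines[1:]]

sample = """\
DEVICE_FILE PORT  SERIAL LDEV CTG H/M/12
Harddisk0   CL2-K 61456  194  -   s/S/ss
Harddisk1   CL2-K 61456  256  -   s/S/ss
"""
rows = parse_inqraid_cli(sample)
```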
5. After starting XP RAID Manager and splitting the S-VOL, restore the signature by using the
inqraid -svinf command.
6.
7. Open Non-Plug and Play Drivers, right-click Cluster Disk, click Enable, and then restart the computer.
1. Set SLPR on the command device: The command device has an SLPR number and an associated
bitmap. You can set multiple SLPRs on a shared command device (using ports connected to different
SLPRs) by setting the command device through SLPR#0 (Storage Administrator) on the Web Console
or SVP.
For example, if the command device is shared with ports on SLPR#1 and SLPR#2, the command
device automatically sets the bitmap bits corresponding to SLPR#1 and SLPR#2.
2. Test SLPR: XP RAID Manager verifies whether the command device can access a target within the
SLPR. If the command device belongs to SLPR#0, or if the software has no SLPR function, SLPR
protection is ignored.
However, if the command device is shared with ports on SLPR#1 and SLPR#2, the software allows
you to operate volumes on SLPR#1 and SLPR#2.
3. Reject commands: If access is denied on the specified port (or target volume), XP RAID Manager
rejects the following commands and outputs an EX_ESPERM error code:
horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk,
pairevtwait, and pairsyncwait
raidscan (except -find verify and -find inst), raidar, and pairdisplay
raidvchkset, raidvchkscan (except -v jnl), and raidvchkdsp
[EX_ESPERM]
Permission denied with the SLPR
[Cause ] : The specified command device does not have permission to access the other SLPR.
[Action] : Configure the SLPR so that the target port and the command device belong
to the same SLPR.
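The SLPR bitmap and access test described above can be sketched as follows. This is an illustration only; the actual bitmap layout inside the array is not documented here:

```python
def slpr_bitmap(shared_slprs):
    """Build the command device's SLPR bitmap: sharing the command
    device with ports in SLPR#1 and SLPR#2 sets the bits for those
    SLPR numbers (the bit layout is illustrative only)."""
    bitmap = 0
    for n in shared_slprs:
        bitmap |= 1 << n
    return bitmap

def can_access(cmd_dev_slpr, bitmap, target_slpr):
    """Test SLPR: protection is ignored when the command device
    belongs to SLPR#0; otherwise the target's SLPR bit must be set
    in the command device's bitmap."""
    if cmd_dev_slpr == 0:
        return True
    return bool(bitmap & (1 << target_slpr))

bm = slpr_bitmap([1, 2])  # command device shared with SLPR#1 and SLPR#2
```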
General commands
Table 13 General commands

XP RAID Manager command   Description
horcctl, page 89          Changes and displays internal trace and control parameters.
horcmshutdown, page 91    Stops instances.
horcmstart, page 92       Starts instances.
horctakeover, page 96     Takes ownership of a pair.
inqraid, page 99          Displays disk array information.
paircreate                Creates a pair.
pairresync                Resynchronizes a pair.
raidscan                  Lists the Fibre Channel port, target ID, LUN, and LDEV status.
sleep                     Suspends execution.
General commands
horcctl
Change and display internal trace and control parameters
Description
The horcctl command is used for maintenance (except for the -S, -D, -C, -ND, -NC, and -g
arguments) and troubleshooting. When issued, the internal trace control parameters of XP RAID
Manager and its commands are changed and displayed.
If the -l level, -b y/n, -s size(KB), or -t type arguments are not specified, the current trace
control parameters are displayed.
Syntax
horcctl { -b y/n | -c | -C | -d | -D | -DI | -g group | -h |
-I[H/CA][M/BC][instance#] | -l level | -NC | -ND | -q | -s size(KB) | -S
| -t type | -u unitid | -z | -zx }
Arguments
-b y/n
Sets a trace level.
-c
Interprets the trace control arguments (-l level, -b y/n, -t type) following this argument
as parameters for the XP Continuous Access Software manager.
-C
Changes the command device name being used by XP RAID Manager and displays the new
name.
Use this argument to change the command device if it is blocked due to online maintenance
(microcode replacement) of the subsystem.
By using this argument again after the online maintenance (microprogram replacement) is
complete, the previous command device is reinstated.
-d
Interprets the trace control arguments (-l level, -b y/n, -s size(KB), -t type)
following this argument as parameters for XP RAID Manager.
-D
Displays the command device name currently used by XP RAID Manager.
If the command device is blocked due to online maintenance (microprogram replacement) of
the disk array, check the command device name before using this argument.
An asterisk (*) indicates that protection mode is ON.
An example with protection mode on:
# horcctl -D
Current control device = /dev/rdsk/c0t0d0*
-DI
Displays the command device name currently used by XP RAID Manager, whether it is a secure
device (*), and the number of actual instances, temporary instances, and instances currently
in use.
Output fields:
AI: Number of actual instances
TI: Number of temporary instances
CI: Number of instances currently in use
Example:
# horcctl -DI
Current control device = /dev/rdsk/c0t0d0*
AI = 14 TI = 0 CI = 1
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environmental variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l level
Sets the trace level to the value specified in level. The range is between 0 and 15.
Specifying a negative value cancels the trace mode. A negative value n is specified as -n,
where n is any digit between 1 and 9. For example: horcctl -l -4.
Level 4 is the default setting and must not be changed unless directed by an HP service
representative.
Setting a trace level to other than 4 can impact problem resolution if a program failure occurs.
Levels 0 to 3 are for troubleshooting.
When a change option to the trace control parameter is specified, a warning message is
displayed, and the command enters interactive mode.
-NC -g group
Changes the network address and port name being used by XP RAID Manager to the next
remote instance and displays the new network address name.
-ND -g group
Displays the network address and port name being used by XP RAID Manager.
-q
Terminates interactive mode and exits this command.
Example
Entering horcctl -D or -C identifies a protection mode command device by adding an asterisk (*)
to the device name:
HP-UX:
# horcctl -D
Current control device = /dev/rdsk/c0t0d0*
horcmshutdown
Stop instances
Description
The horcmshutdown command is an executable for stopping instances.
Syntax
HP-UX: horcmshutdown.sh [inst...]
Windows NT/2000/2003: horcmshutdown.exe [inst...]
Arguments
inst
Indicates an instance number corresponding to the instance to be shut down.
When omitted, the command uses the value stored in the HORCMINST environment variable.
horcmstart
Start instance
Description
The horcmstart command is an executable for starting XP RAID Manager. If instance numbers are
specified, this executable sets environment variables (HORCM_CONF, HORCM_LOG, HORCM_LOGS)
and it starts instances.
Syntax
HP-UX: horcmstart.sh [instance...]
Windows NT/2000/2003: horcmstart.exe [instance...]
MPE/iX: MPE/iX POSIX cannot launch a daemon process from a POSIX shell. Therefore, you must
run XP RAID Manager as a job in the background by using the STREAM command.
OpenVMS: OpenVMS needs to run the detached LOGINOUT.EXE as a job in the background by
using the RUN /DETACHED command.
Arguments
instance
Specifies the instance number. If omitted, the command uses the value stored in the HORCMINST
environment variable. If HORCMINST is not set, a null value for instance is used to set the
environment variables (HORCM_CONF, HORCM_LOG, HORCM_LOGS).
Returned values
The horcmstart command sets either of the following returned values in exit(), which allows you to
check the execution results.
The command returns 0 upon normal termination.
A nonzero return indicates abnormal termination. For the cause of the error and details, see the
execution logs.
Files
/HORCM/log<instance>/curlog/horcm_<hostname>.log
/HORCM/log<instance>/horcm_<hostname>.log
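The per-instance environment that horcmstart derives can be sketched as follows. The horcm<inst>.conf file name and /HORCM/log<inst> directory layout are assumptions extrapolated from the Files section; the exact paths are platform dependent:

```python
import os

def horcm_env(instance=None):
    """Sketch of the per-instance variables horcmstart sets.

    `instance` falls back to the HORCMINST environment variable, as
    described above. The file and directory names used here are
    assumptions, not the documented values for every platform.
    """
    inst = str(instance) if instance is not None else os.environ.get("HORCMINST", "")
    return {
        "HORCM_CONF": "/etc/horcm%s.conf" % inst,
        "HORCM_LOG": "/HORCM/log%s/curlog" % inst,
        "HORCM_LOGS": "/HORCM/log%s/tmplog" % inst,
    }

env = horcm_env(0)
```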
horctakeoff
Meta-command for changing delta resync configuration
Description
This is a scripted meta-command for executing several combined commands. It checks the volume
attribute (optionally specified) and decides a takeover action. The horctakeoff operation is useful
for changing from a 3 Data Center multi-target to a 3 Data Center multi-hop configuration. The
horctakeover command can then configure 3 Data Center multi-target on the remote site.
The granularity of either a logical volume or volume group can be specified with this command.
Syntax
horctakeoff { -nomsg | -d[s] pair_vol | -d[g][s] raw_device [MU#] |
-d[g][s] seq# LDEV# [MU#] | -g[s] group | -h | -jp id | -js id | -q | -t
timeout | -z | -zx }
Arguments
-nomsg
Used to suppress messages when running this command from a user program. This argument
must be specified at the beginning of the command arguments.
[s] is used when specifying two device groups in a 3 Data Center environment. The first (-d[g])
is normally the XP Continuous Access Synchronous Software group, and the second (-d[g][s]) is
the XP Continuous Access Journal Software group. Both parameters must be used or the command is
not recognized (-dg G1 -dgs G2).
The following arguments use the [s] option in a 3 Data Center environment:
-d[s] pair_vol
-d[g]s raw_device [MU#]
-d[g]s seq# LDEV# [MU#]
-d[s] pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g][s] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg). If
the specified raw device is found in two or more groups, the command runs on the first group.
-d[g][s] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
-g[s] group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The parameters -g and -gs must both be used (-g G1 -gs G2).
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-jp id
(XP Continuous Access Journal Software only) Specifies a journal group ID for the Journal_PVOL
to create a 3 Data Center multi-hop (CA_Sync > CA_Sync/Journal_PVOL > Journal).
If not specified, the journal group ID from the 3 Data Center multi-target Journal_PVOL is
automatically inherited.
-js id
(XP Continuous Access Journal Software only) Specifies a journal group ID for the Journal_SVOL
to create a 3 Data Center multi-hop (CA_Sync > CA_Sync/Journal_SVOL > Journal).
If not specified, the journal group ID from the 3 Data Center multi-target Journal_SVOL is
automatically inherited.
The CTGID is also automatically inherited for the internal paircreate command.
-q
Terminates interactive mode and exits this command.
-t timeout
Specifies the maximum time in seconds to wait for Sync_P-VOL to Sync_S-VOL delta data to
resynchronize. It is used for the internal pairresync command.
If this option is not specified, the default time-out value is 7200 seconds.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Returned values
The horctakeoff command returns one of the following values in exit(), which allows you to check
the execution results.
The command returns 0 upon normal termination.
A nonzero return indicates abnormal termination. For the cause of the error and details, see the
execution logs.
Error codes
The following table lists error codes for the horctakeoff command.
Table 16 horctakeoff error codes

Category                        Error code  Value
Volume status (unrecoverable)   EX_ENQVOL   236
                                EX_INCSTG   229
                                EX_EVOLCE   235
                                EX_VOLCRE   223
Timer (recoverable)             EX_EWSTOT   233
NOTE:
An unrecoverable error must be handled by examining the error code, without re-executing the
command. When the command fails, the detailed status is logged in the command log ($HORCC_LOG),
even if the user script has no error handling.
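A calling script can use Table 16 to decide whether a retry makes sense. A Python sketch (the helper is hypothetical; placing EX_VOLCRE in the unrecoverable group mirrors Table 17's layout and is an assumption):

```python
# Exit-code values from Table 16, grouped by the table's categories.
VOLUME_STATUS_UNRECOVERABLE = {
    236: "EX_ENQVOL",
    229: "EX_INCSTG",
    235: "EX_EVOLCE",
    223: "EX_VOLCRE",  # assumption: grouped as unrecoverable, as in Table 17
}
TIMER_RECOVERABLE = {233: "EX_EWSTOT"}

def classify_horctakeoff_rc(rc):
    """Return (error_name, retryable) for a horctakeoff exit code."""
    if rc == 0:
        return ("OK", False)
    if rc in TIMER_RECOVERABLE:
        return (TIMER_RECOVERABLE[rc], True)
    return (VOLUME_STATUS_UNRECOVERABLE.get(rc, "UNKNOWN"), False)
```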
Example
horctakeoff -g G1 -gs G2
'pairsplit -g G1 -S -FCA 2' is in progress.
'pairsplit -g G1' is in progress.
'pairsplit -g G2 -S' is in progress.
'paircreate -g G1 -gs G2 -FCA 2 -nocopy -f async -jp 0 -js 1' is in progress.
'pairsplit -g G1 -FCA 2' is in progress.
'pairresync -g G1' is in progress.
'pairresync -g G1 -FCA 2' is in progress.
horctakeoff done.
horctakeoff -g G1 -gs G3
'pairsplit -g G1 -S -FCA 1' is in progress.
'pairsplit -g G1' is in progress.
'pairsplit -g G3 -S' is in progress.
'paircreate -g G1 -gs G3 -FCA 1 -nocopy -f async -jp 0 -js 1' is in progress.
'pairsplit -g G1 -FCA 1' is in progress.
'pairresync -g G1' is in progress.
'pairresync -g G1 -FCA 1' is in progress.
horctakeoff done.
horctakeoff -g G1 -gs G3
'pairsplit -g G3 -S' is in progress.
'pairsplit -g G1' is in progress.
'pairsplit -g G1 -FCA 1 -S' is in progress.
'paircreate -g G3 -vl -nocopy -f async -jp 0 -js 1' is in progress.
'pairsplit -g G3' is in progress.
'pairresync -g G1' is in progress.
'pairresync -g G3' is in progress.
horctakeoff done.
horctakeoff -g G1 -gs G2
'pairsplit -g G2 -S' is in progress.
'pairsplit -g G1' is in progress.
'pairsplit -g G1 -FCA 2 -S' is in progress.
'paircreate -g G2 -vl -nocopy -f async -jp 0 -js 1' is in progress.
'pairsplit -g G2' is in progress.
'pairresync -g G1' is in progress.
'pairresync -g G2' is in progress.
horctakeoff done.
horctakeover
Take ownership of a pair (XP Continuous Access Software only)
Description
The horctakeover meta-command (which comprises several subcommands) is used in conjunction with
HA software, such as MC/Service Guard, and XP Continuous Access Software. It selects and executes
one of four actions, depending on the state of the paired volumes: nop-takeover, swap-takeover,
S-VOL-takeover, or P-VOL-takeover.
See Takeover-switch function on page 258 for actions taken by horctakeover.
The table under the heading HA control script state transitions on page 245 lists state transitions
resulting from running horctakeover in HA control scripts.
NOTE:
Executing horctakeover in a cascaded XP Continuous Access Software environment causes an automatic
suspend of the downstream XP Continuous Access Journal Software.
Syntax
horctakeover { -nomsg | -g group | -d pair_vol | -d[g] raw_device [MU#] |
-d[g] seq# LDEV# [MU#] | -h | -I [instance#] | -l | -q | -S | -t timeout
| -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. This argument must
be specified at the beginning of the command arguments.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
raw_device. If a volume is found, the command runs on the paired volume (-d) or group
(-dg).
This option is effective without specifying the -g group option.
If the volume is contained in two groups, the command runs on the first volume encountered.
If MU# is not specified, it defaults to 0.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. If MU# is not specified, it defaults to 0. The
seq# LDEV# values can be specified in hexadecimal (by the addition of 0x) or decimal
notation.
The command runs for the entire group unless the -d pair_vol argument is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I [instance#]
Specifies the instance number. An alternate method to using the environmental variable
$HORCMINST; for further information see
XP RAID Manager instance and execution environment variables on page 63.
-l
Executes a P-VOL-takeover, which enables the P-VOL for reading and writing by a local host
without a remote host. This argument is used when the primary volume has a fence level of
status or data, is not accepting writes, and is in the PSUE or PDUB state. If the primary volume
is in any other state, a nop-takeover is executed.
-q
Terminates interactive mode and exits this command.
-S
Selects and executes an S-VOL-takeover. The target volume of the local host must be an S-VOL.
If this argument is specified, the -l argument is invalid.
The target volume of a local host must be a P-VOL.
This argument must be specified at the beginning of the command arguments.
-t timeout
(Asynchronous paired volumes only) Specifies the maximum time in seconds to wait for P-VOL
to S-VOL delta data to resynchronize. If the time-out occurs, EX_EWSTOT is returned. This
option is required for an asynchronous paired volume; it has no effect on synchronous paired
volumes.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Returned values
The horctakeover command returns one of the following values in exit().
Normal termination
Abnormal termination
Other than the previous. For the error cause and details, see the execution logs.
Error codes
The following table lists specific error codes for the horctakeover command.
Table 17 horctakeover error codes

Category                       Error code  Value
Volume status (unrecoverable)  EX_ENQVOL   236
                               EX_INCSTG   229
                               EX_EVOLCE   235
                               EX_VOLCUR   225
                               EX_VOLCUE   224
                               EX_VOLCRE   223
Timer (recoverable)            EX_EWSTOT   233
1. Wait until the S-VOL state becomes SVOL_PSUS by using the return code of the pairvolchk
-g group -ss command. Then, attempt the startup again from the HA control script.
2. Attempt to resynchronize the original P-VOL, based on the S-VOL, by using the pairresync
-g group -swaps -c size command for a Fast Failback operation.
The operation in step 2 may fail with EX_CMDRJE or EX_CMDIOE when there is an ESCON link
and/or site failure.
If this operation fails, the HA control script reports the following message:
After a recovery from the failure, try the pairresync -g group -swaps -c size command.
To avoid the previous recovery steps, the time-out value should be just less than (for example, by 30
seconds) the startup time-out value for the HA control script.
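The wait in step 1 can be sketched as a simple polling loop. The check callable is injected (for example, a wrapper around pairvolchk -g group -ss) because the SVOL_PSUS return-code value is not shown in this section:

```python
import time

def wait_for_svol_psus(check, timeout_s=30.0, interval_s=1.0):
    """Poll `check()` until the S-VOL state becomes SVOL_PSUS.

    `check` is a hypothetical predicate supplied by the caller, e.g.
    one that runs `pairvolchk -g group -ss` and compares its return
    code against the SVOL_PSUS value.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```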
inqraid
Display disk array information
Description
The inqraid command displays the relationship between a host device special file and an actual
physical drive in the disk array.
Syntax
inqraid { -CLI [-W|-WP|-WN] | -f[c][g][h][l][p][v][w][x] | -find[c] | -gplba
| -gplbaex | -gvinf | -gvinfex | -h | -inqdump | -inst | -pin | -quit |
-sort[CM][CLIB] | special_file | -svinf[=PTN] | -svinfex[=PTN] }
Arguments
-CLI
Specifies structured output for Command Line Interface parsing. The column data is aligned in
each row. The delimiters between columns are either a space or a hyphen (-).
-CLI [-W|-WP|-WN]
(Not for use with Tru64 or Digital UNIX) Displays the WWN (World Wide Name) and LUN
in CLI format.
-fc
Used to calculate the Bitmap page of cylinder size for HORC.
-fg
Displays a LUN in the host view by finding a host group for the XP1024/XP128 disk array.
-fh
Specifies XP Continuous Access/XP Continuous Access Journal for the Bitmap pages when
used with -sort[CLIB].
-fl
Indicates a Data Retention Utility volume with the -CLI option by appending * to the device
file name.
-fp
Indicates an Oracle validation volume with the -CLI option by appending * to the device
file name.
-fv
(Windows NT/2000/2003 only) Displays the Volume{GUID} via $Volume in wide format.
-fw
Displays the cascading volume status on STD Inquiry page. If this option is not specified, the
display shows four cascading mirrors the same as at present to maintain compatibility with the
current CLI option.
Example:
# ls /dev/rdsk/* | inqraid -CLI -fw
DEVICE_FILE  PORT   SERIAL  LDEV CTG  C../B/..
c1t2d10s2    CL2-D  62500   266  -    Psss/P/PP----------
c1t2d11s2    CL2-D  62500   267  -    ssss/s/ss-----------
-fx
Displays the LDEV number in hexadecimal format.
-find[c]
Using device special file names provided via STDIN, this option displays information about
the corresponding configuration file volume groups through the use of the inquiry and
pairdisplay commands.
This option requires that the HORCMINST variable be defined in the command execution
environment.
The -find option employs the following options of the pairdisplay command:
SSID  R:Group  PRODUCT_ID
0004  5:01-03  OPEN-3
0004  5:01-01  OPEN-3
0004  5:01-02  OPEN-3
0004  5:01-03  OPEN-3
0004  5:01-01  OPEN-3
Example (Windows):
C:\HORCM\etc>inqraid -CLI $Phy -pin
DEVICE_FILE  PORT   SERIAL  LDEV CTG  C/B/12
Harddisk0    -      -       -    -    -
Harddisk1    -      -       -    -    -
Harddisk2    CL4-E  63528   0    -    s/s/ss
Harddisk3    CL4-E  63528   1    -    s/s/ss
Harddisk4*   CL4-E  63528   2    -    s/s/ss
Harddisk5    CL4-E  63528   3    -    s/s/ss
-quit
Terminates interactive mode and exits the command.
-sort[CM]
Sorts the target devices in serial number, LDEV number order.
The -sort[CM] option displays the command devices listed in the horcm.conf file.
A unit ID is displayed with the serial number.
When two or more command devices exist, this option shows multiple device files linked to a
command device (an LDEV).
-sort[CLIB]
Displays the calculated XP Business Copy Software Bitmap pages and unused Bitmap pages.
Sorts the specified special files (standard input or argument) by serial number, LDEV number.
NOTE:
Identical LDEVs and command devices are not used to calculate the Bitmap pages. LDEVs shared
by multiple ports are calculated as one LDEV.
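The deduplication rule in this note can be sketched as follows (a hypothetical helper operating on parsed inqraid -CLI rows):

```python
def unique_ldevs(rows):
    """Deduplicate inqraid rows on (SERIAL, LDEV) before summing
    Bitmap pages: an LDEV shared by multiple ports counts only once,
    per the note above."""
    seen = set()
    out = []
    for r in rows:
        key = (r["SERIAL"], r["LDEV"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

# The same LDEV 13057 seen through ports CL1-E and CL2-E counts once.
rows = [
    {"DEVICE_FILE": "c1t0d6", "PORT": "CL1-E", "SERIAL": "63516", "LDEV": "13057"},
    {"DEVICE_FILE": "c2t0d6", "PORT": "CL2-E", "SERIAL": "63516", "LDEV": "13057"},
]
deduped = unique_ldevs(rows)
```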
-svinf[=PTN]
(Windows NT/2000/2003 only) Uses SCSI Inquiry to retrieve the serial and LDEV numbers
created by -gvinf of the RAID for the target device, and sets the signature and volume layout
information from file VOLssss_llll.ini to the target device.
This option completes correctly even if the hard disk number is changed by the operating
system. The signature and volume layout information is managed by the RAID serial and LDEV
numbers.
The -svinf=PTN option specifies a string pattern used to select only the pertinent output lines
provided from STDIN. This option returns 0 upon normal termination. A nonzero return indicates
abnormal termination.
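The =PTN selection amounts to a substring filter over the STDIN lines. A Python sketch (the helper is hypothetical, mirroring the -svinf=Harddisk example shown later in this section):

```python
def select_lines(stdin_text, pattern):
    """Mimic the -svinf=PTN selection: keep only the STDIN lines that
    contain the given string pattern (for example, "Harddisk")."""
    return [ln for ln in stdin_text.splitlines() if pattern in ln]

sample = "URA URA_000(L) Harddisk3 0 61459 451..\nGroup PairVol(L/R) Device_File M\n"
selected = select_lines(sample, "Harddisk")
```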
-svinfex[=PTN]
(Windows 2003 SP1 only) This option is for use only with a GPT disk and is the same as
-svinf except that it sets the signature/GUID DiskID and volume layout information from file
VOLssss_llll.ini to the target device.
special_file
Specifies a device special file name as an argument to the command. If no argument is specified,
the command waits for input from STDIN. For STDIN file specification information, see
XP RAID Manager STDIN file formats, page 271.
Restrictions
STDINs or special files are specified as follows:
HP-UX: /dev/rdsk/*
Solaris: /dev/rdsk/*s2 or c*s2
Linux: /dev/sd... or /dev/rd... ,/dev/raw/raw*
zLinux: /dev/sd... or /dev/dasd... or /dev/rd... ,/dev/raw/raw*
MPE/iX: /dev/..., LDEV-
AIX: /dev/rhdisk* or /dev/hdisk* or hdisk*
Digital or Tru64: /dev/rrz*c or /dev/rdisk/dsk*c or /dev/cport/scp*
DYNIX: /dev/rdsk/sd* or sd* for only unpartitioned raw device
IRIX64: /dev/rdsk/*vol or /dev/rdsk/node_wwn/*vol/* or /dev/dsk/*vol or
/dev/dsk/node_wwn/*vol/*
Windows NT: hdX-Y, $LETALL, $Phys, D:\DskX\pY, \DskX\pY
Windows 2000/2003: hdX-Y, $LETALL, $Volume, $Phys, D:\Vol(Dms, Dmt,
Dmr)X\DskY, \Vol(Dms, Dmt, Dmr)X\DskY
OpenVMS: $1$* or DK* or DG* or GK*
Lines that start with # in STDIN are interpreted as comments.
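A wrapper that pre-filters candidate device files against these per-platform patterns might look like this sketch (patterns copied from the list above; the glob-matching semantics are an assumption, not part of inqraid):

```python
import fnmatch

# Per-platform device-file patterns from the Restrictions list
# (HP-UX and Solaris shown; extend as needed).
PATTERNS = {
    "HP-UX": ["/dev/rdsk/*"],
    "Solaris": ["/dev/rdsk/*s2", "c*s2"],
}

def accepted(path, platform):
    """Return True if `path` matches one of the platform's patterns."""
    return any(fnmatch.fnmatch(path, p) for p in PATTERNS[platform])
```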
Examples
Examples using the find option:
Linux
ls /dev/sd* | inqraid -find
Group   PairVol(L/R) (Port#,TID,LU), Seq#, LDEV#. P/S, Status, Fence, Seq#, P-LDEV# M
oradb   oradev2(L) (CL2-N , 3, 2) 8071 22.. SMPL ---------,----- ---->/dev/sdc
HP-UX
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -find
Group   PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#. P/S, Status, Seq#, P-LDEV# M
horc1   dev00(L) (CL2-J , 0, 0-0) 61456 192.. S-VOL SSUS, ----- 193 -
->/dev/rdsk/c23t0d0
Group   PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#. P/S, Status, Seq#, P-LDEV# M
horc1   dev10(L) (CL2-J , 2, 3-0) 61456 209.. S-VOL SSUS, ----- 206 -
->/dev/rdsk/c23t2d3
103
HP-UX
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -findc
DEVICE_FILE  M  Group  PairVol  P/S    Stat  R_DEVICE  M  P/S    Stat  LK
c23t0d0      0  horc1  dev00    S-VOL  SSUS  c23t0d1   0  P-VOL  PSUS  OK
/dev/rdsk/c23t0d0[1] -> No such on the group
/dev/rdsk/c23t0d0[2] -> No such on the group
DEVICE_FILE  M  Group  PairVol  P/S    Stat  R_DEVICE  M  P/S    Stat  LK
c23t2d3      0  horc1  dev10    S-VOL  SSUS  c23t2d2   0  P-VOL  PSUS  OK
/dev/rdsk/c23t2d3[1] -> No such on the group
/dev/rdsk/c23t2d3[2] -> No such on the group

# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -findc -CLI
DEVICE_FILE  M  Group  PairVol  P/S    Stat  R_DEVICE  M  P/S    Stat  LK
c23t0d0      0  horc1  dev00    S-VOL  SSUS  c23t0d1   0  P-VOL  PSUS  OK
c23t2d3      0  horc1  dev10    S-VOL  SSUS  c23t2d2   0  P-VOL  PSUS  OK
C/B/12  SSID  R:Group  PRODUCT_ID
s/s/ss  0100  5:01-09  OPEN-V
s/s/ss  000B  S:00001  OPEN-0V
s/s/ss  000B  U:00001  OPEN-0V
s/s/ss  000B  E:16384  OPEN-V
s/S/ss  000B  A:00002  OPEN-0V

PRODUCT_ID
OPEN-3
OPEN-3

PRODUCT_ID
OPEN3-CVS
+BC/BC  UNUSED  PRODUCT_ID
-       -       OPEN-9-CM
c1t0d1  CL1-E  63516  12288  0  0   1  30718  OPEN-3
c1t0d2  CL1-E  63516  12403  0  0   4  30718  OPEN-9
c1t0d3  CL1-E  63516  12405  0  0   9  30718  OPEN-E
c1t0d4  CL1-E  63516  12800  0  0  12  30718  OPEN-8
c1t0d5  CL1-E  63516  12801  0  0  18  30718  OPEN-8*2
c1t0d6  CL1-E  63516  13057  0  0  31  30718  OPEN-L
c2t0d6  CL2-E  63516  13057  0  0  31  30718  OPEN-L
Output fields:

SSID  R:Group  PRODUCT_ID
0004  5:01-01  OPEN-3
-     -        OPEN-3-CM
An example using the -gvinf option follows. This example saves the volume information for all
physical drives.
D:\HORCM\etc>inqraid $Phys -gvinf -CLI
\\.\PhysicalDrive0:
# Harddisk0 -> [VOL61459_448_DA7C0D91] [OPEN-3 ]
\\.\PhysicalDrive1:
# Harddisk1 -> [VOL61459_449_DA7C0D92] [OPEN-3 ]
\\.\PhysicalDrive2:
# Harddisk2 -> [VOL61459_450_DA7C0D93] [OPEN-3 ]
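The VOLssss_llll naming in these results can be unpacked with a small parser. A Python sketch (the helper is hypothetical; reading the third field as the saved signature is an assumption based on the -svinf description):

```python
import re

def parse_gvinf_line(line):
    """Split a -gvinf result such as
    "# Harddisk0 -> [VOL61459_448_DA7C0D91] [OPEN-3 ]"
    into (serial, ldev, signature, product_id).

    The VOLssss_llll part follows the file naming described for
    -svinf; interpreting the trailing hex field as the signature
    is an assumption."""
    m = re.search(r"\[VOL(\d+)_(\d+)_([0-9A-Fa-f]+)\]\s*\[([^\]]+)\]", line)
    if not m:
        return None
    serial, ldev, sig, product = m.groups()
    return int(serial), int(ldev), sig, product.strip()

parsed = parse_gvinf_line("# Harddisk0 -> [VOL61459_448_DA7C0D91] [OPEN-3 ]")
```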
An example using -svinf=PTN follows. This example writes signature/volume information to
LUNs identified by Harddisk in the output of the pairdisplay command.
D:\HORCM\etc>pairdisplay -l -fd -g URA
Group  PairVol(L/R) Device_File  M ,Seq#, LDEV#. P/S, Status, Seq#, P-LDEV# M
URA    URA_000(L)   Harddisk3    0  61459  451.. S-VOL SSUS, ----- 448 -
URA    URA_001(L)   Harddisk4    0  61459  452.. S-VOL SSUS, ----- 449 -
URA    URA_002(L)   Harddisk5    0  61459  453.. S-VOL SSUS, ----- 450 -

D:\HORCM\etc>pairdisplay -l -fd -g URA | inqraid -svinf=Harddisk
[VOL61459_451_5296A763] -> Harddisk3 [OPEN-3 ]
[VOL61459_452_5296A760] -> Harddisk4 [OPEN-3 ]
[VOL61459_453_5296A761] -> Harddisk5 [OPEN-3 ]
Output fields:
CLX-Y: The port number of the disk array.
Ser: The production (serial#) number of the disk array.
R:Group:
R: RAID level (1 = RAID1, 5 = RAID5, 6 = RAID6)
Group: RAID group
S:<Pool ID number>: XP Snapshot S-VOL (SNAPS)
U:00000: Unmapped (UNMAP, Group 00000)
E:<Group>: External LUN
A:<Pool ID number>
Additional information:
If you create an S-VOL with the Noread option and reboot the Windows 2000/2003 system, the
system cannot create a device object (\Device\HarddiskVolume#) and Volume{GUID} for that
S-VOL. A device object (\Device\HarddiskVolume#) and Volume{GUID} can be created by using
the -svinf option of the inqraid command (on a suspended S-VOL).
\Device\HarddiskVolume#(number) is assigned in sequential order by the -svinf option.
This number is valid as long as the system configuration does not change.
Use the -svinf -sort option to cause signature writes to occur in LDEV number order:
D:\HORCM\etc>echo hd5 hd4 hd3 | inqraid -svinf -sort
[VOL61459_451_5296A763] -> Harddisk3 [OPEN-3 ]
[VOL61459_452_5296A760] -> Harddisk4 [OPEN-3 ]
[VOL61459_453_5296A761] -> Harddisk5 [OPEN-3 ]
LDEV CTG  SSID  R:Group  PRODUCT_ID
256  -    0004  5:01-03  OPEN-3
          0004  5:01-03  OPEN-3
          0004  5:01-01  OPEN-3
          0004  5:01-02  OPEN-3
The following examples display the relationship between a special file and the actual physical drive
in the disk array, by using the inqraid and system commands.
HP-UX
# ioscan -fun | grep rdsk | ./inqraid
/dev/rdsk/c0t2d0 ->[HP] CL2-D Ser = 30053 LDEV = 8 [HP ] [OPEN-3 ]
CA = SMPL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
/dev/rdsk/c0t2d1 ->[HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3]
CA = SMPL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
/dev/rdsk/c0t4d0 ->[HP] CL2-D Ser = 30053 LDEV = 14 [HP] [OPEN-3-CM]
Linux
# ls /dev/sd* | ./inqraid
/dev/sdg -> CHNO = 0 TID = 1 LUN =
[HP] CL2-B Ser = 30053 LDEV = 22 [HP ] [OPEN-3 ]
CA = SMPL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
/dev/sdh -> CHNO = 0 TID = 1 LUN = 7
[HP] CL2-B Ser = 30053 LDEV = 23 [HP ] [OPEN-3 ]
CA = SMPL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
/dev/sdu -> CHNO = 0 TID = 1 LUN = 14
[HP] CL2-G Ser = 63528 LDEV = 2755 [HP ] [OPEN-0V ]
CA = SMPL BC[MU#0 = SMPL]
SNAPS[PoolID 0001]
/dev/sdv -> CHNO =
[HP] CL2-G Ser =  [HP ] [OPEN-0V ]
CA = SMPL BC[MU#0 = SMPL]
UNMAP[Group 00000]
/dev/sdw -> CHNO =
[HP] CL2-G Ser =  [HP ] [OPEN-V ]
CA = SMPL BC[MU#0 = SMPL]
E-LUN[Group 16384]
/dev/sdx -> CHNO =
[SQ] CL2-G Ser =  [HP ] [OPEN-V ]
CA = SMPL BC[MU#0 = SMPL]
A-LUN[PoolID 0002]
Solaris
# ls /dev/rdsk/* | ./inqraid
/dev/rdsk/c0t2d1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/c0t4d0 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ]
MPE/iX
shell/iX>ls /dev/* | ./inqraid 2>/dev/null
/dev/ldev009 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/cmddev -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ]
AIX
# lsdev -C -c disk | grep hdisk | ./inqraid
hdisk1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
hdisk2 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ]
Additional information:
If the lsdev command does not show the TID and LUN (for example, 2F-00-00-2,0) in the column
output for the devices, the inqraid command and the -d[g] raw_device option for all commands
cannot find a target device:
# lsdev -C -c disk
hdisk1 Defined 04-02-01
This occurs when a Fibre Channel adapter and device driver are different (for example, an Emulex
adapter with an AIX driver).
Windows NT/2000/2003
C:\HORCM\etc> echo hd1-2 | inqraid ( or inqraid hd1-2 )
Harddisk 1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
Harddisk 2 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM
Tru64
# ls /dev/rdisk/dsk* | ./inqraid
/dev/rdisk/dsk10c -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdisk/dsk11c -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP] [OPEN-3-CM]
DYNIX/ptx
# dumpconf -d | grep sd | ./inqraid
sd1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
sd2 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ]
mkconf
Make a configuration file
Description
The mkconf command is used to make a configuration file from a special file (raw device file) provided
via STDIN. It executes the following steps:
1. Make a configuration file containing only the HORCM_CMD section by executing inqraid
-sort -CM -CLI.
2. Start an instance without a HORCM_DEV and HORCM_INST section, which is enough to run
the raidscan command for the next step.
3. Make a configuration file including the HORCM_DEV and HORCM_INST sections by executing
raidscan -find conf using a special file (raw device file) provided via STDIN. For STDIN
file specification information, see XP RAID Manager STDIN file formats on page 271.
4. Restart an instance using the newly created configuration file.
5. Run raidscan -find verify to verify the correspondence between host device files and the
newly created configuration file.
The configuration file is created with the name horcm*.conf within the current directory. An
XP RAID Manager log directory is created with the name log* within the current directory.
You may have to modify the ip_address and service parameters within the newly created
configuration file as the need arises.
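The shape of the file mkconf produces can be sketched as follows. This is a minimal illustration only, assuming the defaults documented in this section (service 52323, poll 1000, timeout 3000); the HORCM_CMD device path is a placeholder, not a probed command device.

```shell
#!/bin/sh
# Sketch: emit a horcm*.conf skeleton containing the same four sections
# that mkconf creates. Values follow the example output in this section;
# /dev/rdsk/cXtYdZ is a placeholder for a real command device.
cat > horcm_skel.conf <<'EOF'
HORCM_MON
#ip_address     service   poll(10ms)  timeout(10ms)
localhost       52323     1000        3000

HORCM_CMD
#dev_name
/dev/rdsk/cXtYdZ

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#

HORCM_INST
#dev_group  ip_address  service
EOF
nsec=$(grep -c '^HORCM_' horcm_skel.conf)   # count section headers
echo "sections: $nsec"
```

After mkconf (or a sketch like this) produces the skeleton, the ip_address and service entries are the parameters most likely to need editing, as noted above.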
Syntax
mkconf.sh { -a | -g group | -i inst# | -m MU# | -s service }
Windows NT/2000/2003 or OpenVMS only: mkconf.exe { -a | -c drive# / DKA# #
| -g group | -i inst# | -m MU# | -s service }
Arguments
(none)
Using the mkconf command without arguments displays help/usage information.
-a
Used to add a new volume group within the newly created configuration file.
-c drive#
(Windows NT/2000/2003 only) Specifies the range of drive numbers to be searched for
existing command devices. If not specified, $PhysicalDrive is used as the default.
-c DKA# #
(OpenVMS only) Specifies the range of drive numbers to be searched for existing command
devices. If not specified, $1$DGA0-10000, DKA0-10000, DGA0-10000 is used as the default.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
If not specified, VG is used.
-i inst#
Specifies the instance number for XP RAID Manager.
-m MU#
Specifies the mirror descriptor for XP Business Copy Software/XP Snapshot volumes. XP
Continuous Access Software volumes do not specify a mirror descriptor.
-s service
Specifies the service name to be used in the newly created configuration file. If not specified,
52323 is used as a default.
Example
This example demonstrates the usage of the mkconf command and the resulting configuration
file.
HP-UX
# cd /tmp/test
# cat /etc/horcmperm.conf | /HORCM/usr/bin/mkconf.sh -g ORA -i 9 -m 0
starting HORCM inst 9
HORCM inst 9 starts successfully.
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
starting HORCM inst 9
HORCM inst 9 starts successfully.
DEVICE_FILE        Group  PairVol  PORT   TARG LUN M  SERIAL LDEV
/dev/rdsk/c23t0d0  ORA    ORA_000  CL2-J  0    0   0  61456  192
/dev/rdsk/c23t0d1  ORA    ORA_001  CL2-J  0    1   0  61456  193
/dev/rdsk/c23t0d2  ORA    ORA_002  CL2-J  0    2   0  61456  194
/dev/rdsk/c23t0d3  ORA    ORA_003  CL2-J  0    3   0  61456  195
/dev/rdsk/c23t0d4  ORA    ORA_004  CL2-J  0    4   0  61456  256
/dev/rdsk/c23t0d5  ORA    ORA_005  CL2-J  0    5   0  61456  257
/dev/rdsk/c23t0d6  ORA    ORA_006  CL2-J  0    6   0  61456  258
/dev/rdsk/c23t0d7  -      -        -      -    -   0  61456  259
HORCM Shutdown inst 9 !!!
Please check '/tmp/test/horcm9.conf','/tmp/test/log9/curlog/horcm_*.log', and modify
'ip_address & service'.
# ls
horcm9.conf log9
# vi *.conf
Configuration file:
# Created by mkconf.sh on Mon Jan 22 17:59:11 JST 2001
HORCM_MON
#ip_address     service    poll(10ms)   timeout(10ms)
localhost       52323      1000         3000

HORCM_CMD
#dev_name       dev_name   dev_name
#UnitID 0 (Serial# 61456)
/dev/rdsk/c23t3d0

HORCM_DEV
#dev_group      dev_name   port#   TargetID   LU#   MU#
# /dev/rdsk/c23t0d0 SER = 61456 LDEV = 192 [ FIBRE FCTBL = 4 ]
ORA             ORA_000    CL2-J   0          0     0
# /dev/rdsk/c23t0d1 SER = 61456 LDEV = 193 [ FIBRE FCTBL = 4 ]
ORA             ORA_001    CL2-J   0          1     0
# /dev/rdsk/c23t0d2 SER = 61456 LDEV = 194 [ FIBRE FCTBL = 4 ]
ORA             ORA_002    CL2-J   0          2     0
# /dev/rdsk/c23t0d3 SER = 61456 LDEV = 195 [ FIBRE FCTBL = 4 ]
ORA             ORA_003    CL2-J   0          3     0
# /dev/rdsk/c23t0d4 SER = 61456 LDEV = 256 [ FIBRE FCTBL = 4 ]
ORA             ORA_004    CL2-J   0          4     0
# /dev/rdsk/c23t0d5 SER = 61456 LDEV = 257 [ FIBRE FCTBL = 4 ]
ORA             ORA_005    CL2-J   0          5     0
# /dev/rdsk/c23t0d6 SER = 61456 LDEV = 258 [ FIBRE FCTBL = 4 ]
ORA             ORA_006    CL2-J   0          6     0
# ERROR [CMDDEV] /dev/rdsk/c23t0d7 SER = 61456 LDEV = 259 [ OPEN-3-CM ]

HORCM_INST
#dev_group      ip_address   service
ORA             localhost    52323
paircreate
Create a pair relationship
Description
The paircreate command establishes a primary to secondary pair relationship between volumes.
This command generates a new paired volume from SMPL volumes. The default action pairs a logical
group of volumes as defined in the instance configuration file.
HP-UX
CAUTION:
Before issuing this command, ensure that the secondary volume is not mounted on an HP-UX system. If the
secondary volume is mounted during the paircreate command, change the pair status to SMPL, unmount
the secondary volume, and reissue the paircreate command.
MPE/iX
CAUTION:
Before issuing this command, ensure that the secondary volume is not mounted on an MPE/iX system. If it
is, VSCLOSE that volume set and de-configure the LDEVs using IOCONFIG, the online device configuration
utility program.
Syntax
paircreate { -nomsg | -c size | -g[s] group | -cto o-time c-time r-time
| -d[s] pair_vol | -d[g][s] raw_device [MU#] | -d[g][s] seq# LDEV# [MU#] |
-f fence [CTGID] | -FCA [MU#] | -fq <mode> | -h |
-I[H/CA][M/BC][instance#] | -jp ID | -js ID | -m mode | -nocopy | -nocsus
| -pid <PID> | -q | -split | -vl | -vr | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. Must be specified
at the beginning of the command arguments.
-c size
Specifies the number of tracks that are concurrently copied. The number can range from 1 to
15. If not specified, the default value is 3.
The command runs for the entire group unless the -d pair_vol argument is specified.
-cto o-time c-time r-time
(XP Continuous Access Asynchronous or Journal Software only) Sets values for the offloading
timer (o-time), the copy-pending timer (c-time), and the RCU-ready timer (r-time).
XP Continuous Access Journal Software uses only o-time.
If only one value is given (-cto 90), the value is interpreted as o-time. If two values are
given (-cto 90 5), they are interpreted as o-time and c-time. A later timer cannot be specified
without the timers in front of it; that is, to specify r-time, you must also specify
o-time and c-time (-cto 90 5 6).
o-time: Sets the offloading timer. It specifies a grace period between the moment the
sending side's remote replication buffer becomes completely full and the fall-back action
of converting the entire buffer to an out-of-order bitmap, during which no new writes are
accepted.
The parameters are saved for as long as the journal group exists.
[s] is used when specifying two device groups in a 3 Data Center environment. The first (-d[g])
is normally the XP Continuous Access Synchronous Software group, and the second (-d[g][s]) is
the XP Continuous Access Journal Software group. Both parameters must be used or the command is
not recognized (-dg G1 -dgs G2).
The following arguments use the [s] option in a 3 Data Center environment:
-d[s]
-d[g][s] raw_device [MU#]
-d[g][s] seq# LDEV# [MU#]
-d[s] pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g][s] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the specified raw_device is listed in multiple device groups, this applies to the first one
encountered.
-d[g][s] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-f fence [CTGID]
(XP Continuous Access Software only) Specifies a data-consistency (fence) level. Valid values
are data, status, never, and async.
CTGID (CT group ID) is assigned automatically, but the async option terminates with
EX_ENOCTG when the maximum number of CT groups is exceeded.
The CTGID option forces creation of paired volumes for a given CTGID group.
-FCA [MU#] or -FHORC [MU#]
Creates the cascading configuration with the -g group and -gs group options from the local
node (takeover node).
-g group specifies the cascading P-VOL.
-gs group specifies the cascading S-VOL.
Ignores the -vl or -vr option because the S-VOL is specified with the -gs group option.
-fq <mode>
(XP Business Copy Software only) Specifies whether or not the split is performed in QUICK
mode.
-fq mode      $HORCC_SPLT   Behavior
quick         no effect     quick Split
normal        no effect     normal Split
Unspecified   QUICK         quick Split
Unspecified   NORMAL        normal Split
Unspecified   Unspecified   -
NOTE:
The -fq option is also valid for XP Continuous Access Software/XP Business Copy Software
cascading operations using -FBC [MU#].
NOTE:
The -fq option works only with the XP12000 and XP24000 disk arrays and is ignored by the
XP1024/XP128 disk array.
-g[s] group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The parameters -g and -gs must both be used (-g G1 -gs G2).
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M] [instance#] or -I[CA][BC] [instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-jp ID
(XP Continuous Access Journal Software only) Specifies a journal group ID for a P-VOL.
-js ID
(XP Continuous Access Journal Software only) Specifies a journal group ID for an S-VOL.
-m mode
The following modes may be specified:
noread (XP Business Copy Software only): Specifies that the S-VOL is unreadable while the
paired volumes are in the PAIR state. This mode is useful for hiding S-VOLs. By default, the
S-VOL is readable even when in the PAIR state.
cyl (XP1024, XP12000, and XP24000 disk arrays; XP Continuous Access Software only):
Specifies that a bitmap table manages the volumes at the cylinder level.
trk (XP1024, XP12000, and XP24000 disk arrays; XP Continuous Access Software only):
Specifies that a bitmap table manages the volumes at the track level.
If cyl or trk is not specified, the default bitmap granularity shown in the following table is used.
Table 20 paircreate command -m default bitmap
RAID          Default bitmap granularity
OPEN-3/9      Track
OPEN-E/L/M    Cylinder
N/A           Cylinder
Others        -
If there is not enough shared memory to maintain track-level information, error EX_CMDRJE
is returned.
dif (XP Business Copy Software only): Used at paircreate to cause the S-VOL bitmap
table (used to create a differential backup) to designate all tracks changed since paircreate.
inc (XP Business Copy Software only): Used at paircreate to cause the S-VOL bitmap
table (used for incremental backup) to designate all tracks changed since the last
re-synchronization.
grp [CTGID] (XP1024/XP128, XP12000, and XP24000 disk arrays; XP Business Copy
Software only): Used at paircreate to group specified pairs into a consistency group,
allowing a consistent split of multiple devices at exactly the same point in time. This applies
when doing a split using the pairsplit -g group command (except the -S or -E option).
A CTGID (CT Group ID) is assigned automatically if you do not specify the CTGID option
in the command. If CTGID is not specified and the maximum number of CT groups already
exists, an EX_ENOCTG error occurs. The CTGID option can therefore forcibly assign a
volume group to an existing CTGID.
The maximum number of configurable LDEVs with the same CTGID is 1024. For the XP24000
and XP12000 disk arrays (firmware version 50-04-31 or later), it is 4096.
cc (XP Business Copy Software only): Specifies the Cruising Copy mode for volume
migration. This option cannot be used with the -split argument. This option is ignored
if -c <size> is used.
-nocopy
(XP Continuous Access Software only) Creates paired volumes without copying data. The data
consistency of SMPL volumes is assured by the user.
-nocsus
(XP Continuous Access Journal Software only) Without copying data, creates suspended journal
volumes to enable delta-resync between DC2 (Sync S-VOL) and DC3 (Journal S-VOL).
-pid <PID>
(XP Snapshot only) Identifies the pool with a pool ID. LDEVs in a group that has a PID belong
to the specified pool. If a specific PID is not given, the LDEVs are assigned the default
pool ID (0).
-q
Terminates interactive mode and exits this command.
-split
(XP Business Copy Software/XP Snapshot only) Splits the paired volume after completing the
pairing process.
-split works differently based on the microcode version:
For the XP256 disk array (microcode 52-46-yy or under) and the XP512 disk array (microcode
01-10-00/xx or under): after running the command, the volume status is PVOL_COPY and
SVOL_COPY. The P-VOL and S-VOL states change to PVOL_PSUS and SVOL_SSUS after all data
is copied.
For later microcode versions: this option returns immediately with the PVOL_PSUS and
SVOL_COPY state changes. The S-VOL state changes to SVOL_SSUS after all data is copied.
-vl or -vr
Required. Specifies the direction of the P-VOL to S-VOL relationship, that is, which set of
volumes, r (remote) or l (local), is the primary (P-VOL) set. Local disks are determined by how
the HORCMINST environment variable is set.
-vl specifies the volumes defined by the local instance as the primary volumes.
-vr specifies the volumes defined by the remote instance as the primary volumes while the
local instance controls the secondary volumes.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Like -z, makes XP RAID Manager enter interactive
mode, prompting you on the next line for command options, but if the instance terminates or is
shut down, your CLI session is terminated.
Returned values
This command sets either of the following returned values in exit(), which allows you to check the
execution results.
The command returns 0 upon normal termination.
A nonzero return indicates abnormal termination. For the error cause and details, see the execution
logs.
(XP Continuous Access Software only) If the target volume is under maintenance, this command cannot
report copy rejection if an error occurs.
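Because the command reports success or failure only through its exit status, a script should branch on the return value and consult the logs on failure. A minimal sketch of that pattern; the paircreate shell function below is a stub standing in for the real binary so the fragment can run anywhere, and should be removed when calling the actual command:

```shell
#!/bin/sh
# Stub standing in for /HORCM/usr/bin/paircreate (always "succeeds" here).
paircreate() { return 0; }

# Create the pair and branch on the documented 0 / nonzero convention.
if paircreate -nomsg -g vg01 -vl; then
    rc=0
    echo "paircreate succeeded"
else
    rc=$?
    echo "paircreate failed with rc=$rc; see the execution logs" >&2
fi
```

The same return-value convention applies to the other pair commands in this guide, so the branch can be reused around each of them.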
Example
Establish an XP Business Copy Software pairing between the volumes in group vg01. The volumes
in the local instance are used as the P-VOLs:
paircreate -g vg01 -vl
Create an XP Business Copy Software volume pair that corresponds to disk device
/dev/rdsk/c5t1d0 as the S-VOL (using the remote instance's volume as the P-VOL):
paircreate -d /dev/rdsk/c5t1d0 -vr
If the volume is part of a multi-volume group, only the volume specified by the -d argument is set
up as a pair.
Create an XP Business Copy Software group pair out of the group that contains the sequence
number 35611 and LDEV 35. Use the volumes defined by the local instance as the P-VOLs:
paircreate -d 35611 35 -vl
In this example, all volumes that are part of the group that contains this LDEV are put into the PAIR
state. Because MU# was not specified, it defaulted to 0.
Create the suspended G3 journal volume group (choose one of the following three methods):
Create G3 from DC1:
paircreate -g G1 -gs G2 -FCA 2 -nocsus -f async <ctgid> -jp <id> -js <id>
Create G3 from DC2:
paircreate -g G3 -vl -nocsus -f async <ctgid> -jp <id> -js <id>
Create G3 from DC3:
paircreate -g G3 -vr -nocsus -f async <ctgid> -jp <id> -js <id>
NOTE:
The journal ID for the shared Journal-S-VOL must be the same as the current S-VOL.
The paircreate CTGID can be the same as the current S-VOL CT group.
The following figure illustrates a takeover using the suspended journal volume group G3. Not illustrated
in the second figure is that the SVOL SSWS status at the DC2 site was first accomplished via a
horctakeover -g G1 command executed from the DC2 site.
Error codes
The table lists specific error codes for the paircreate command.
Table 21 paircreate error codes
Category                      Error code   Error message                     Value
Volume status unrecoverable   EX_ENQVOL                                      236
                              EX_INCSTG    Inconsistent status in group      229
                              EX_INVVOL                                      222
                              EX_INVSTP                                      228
                              EX_ENQSIZ                                      212
                              EX_ENQCLP                                      204
Resource unrecoverable        EX_ENOCTG                                      217
                              EX_ENXCTG                                      215
                              EX_ENOPOL    Target volume's XP Snapshot pool  206
                                           has exceeded threshold value
paircurchk
Check S-VOL data consistency (XP Continuous Access Software only)
Description
The paircurchk command displays pairing status to allow the operator to verify the completion of
pair generation or pair resynchronization. This command is also used to confirm the paired volume
connection path (physical link of paired volume to the host).
The granularity of the reported data is based on the volume or group.
Syntax
paircurchk { -nomsg | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq#
LDEV# [MU#] | -g group | -h | -I [instance#] | -q | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. Must be specified
at the beginning of the command arguments.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the specified raw_device is listed in multiple device groups, this applies to the first one
encountered.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
The command runs for the entire group unless the -d pair_vol argument is specified.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
If used, this argument must be specified at the beginning of the command arguments.
-I [instance#]
Specifies the instance number. An alternate method to using the environment variable
$HORCMINST; for further information, see
XP RAID Manager instance and execution environment variables on page 63.
-q
Terminates interactive mode and exits this command.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Like -z, makes XP RAID Manager enter interactive
mode, prompting you on the next line for command options, but if the instance terminates or is
shut down, your CLI session is terminated.
Returned values
This command sets either of the following returned values in exit(), which allows you to check the
execution results.
Normal termination: 0 (OK; data is consistent).
Abnormal termination: other than 0. (For the error cause and details, refer to the execution logs.)
Example
# paircurchk -g oradb
Group  Pair vol  Port   targ# lun#  LDEV#  Volstatus  Status  Fence  To be...
oradb  oradb1    CL1-A  1     5     145    S-VOL      PAIR    NEVER  Analyzed
oradb  oradb2    CL1-A  1     6     146    S-VOL      PSUS    NEVER  Suspected
Output fields:
Group: The group name (dev_group) described in the configuration definition file.
Pair vol: The paired volume name (dev_name) within a group described in the configuration
definition file.
Port targ# lun#: The port number, target ID, and LUN described in the configuration definition file.
LDEV#: The LDEV number.
Volstatus: The attribute of the volume.
Status: The status of the paired volume.
Fence: The fence level of the paired volume.
To be: The data consistency of the secondary volume.
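Since the "To be..." column summarizes S-VOL data consistency, a script can filter for volumes reported as Suspected. A sketch using awk over the example listing above, captured in a heredoc; in practice you would pipe the output of `paircurchk -g oradb` directly, and the underscored headers here merely stand in for the original two-word headings so the columns stay aligned:

```shell
#!/bin/sh
# Select pair volumes whose last column (To be...) reads "Suspected".
suspected=$(awk 'NR > 1 && $NF == "Suspected" { print $2 }' <<'EOF'
Group Pair_vol Port  targ# lun# LDEV# Volstatus Status Fence To_be...
oradb oradb1   CL1-A 1     5    145   S-VOL     PAIR   NEVER Analyzed
oradb oradb2   CL1-A 1     6    146   S-VOL     PSUS   NEVER Suspected
EOF
)
echo "suspected volumes: $suspected"
```

A volume flagged this way typically calls for a pairresync before the S-VOL can be trusted.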
Error codes
The table lists specific error codes for the paircurchk command.
Table 22 paircurchk error codes
Category                      Error code   Error message   Value
Volume status unrecoverable   EX_VOLCUR                    225
pairdisplay
Confirm pair configuration
Description
The pairdisplay command displays the pairing status of a volume or group of volumes. This
command is also used to confirm the configuration of paired volumes.
Volumes are defined in the HORCM_DEV section of the instance configuration files.
Syntax
pairdisplay { -c | -CLI | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq#
LDEV# [MU#] | -f[x|c|d|m|e] | -FBC [MU#] | -FCA [MU#] | -g group | -h |
-I[H/CA][M/BC][instance#] | -l | -m mode | -q | -v ctg | -v jnl[t] | -v
pid | -z | -zx }
Arguments
-c
Checks the paired volume connection path (physical link from paired volume to the host) and
displays only illegally paired volumes.
If this option is not specified, the status of the specified volumes is displayed without checking
their path to the host.
-CLI
Specifies structured output for Command Line Interface parsing. The column data is aligned in
each row. The delimiters between columns are either a space or -. If you specify the -CLI
option, pairdisplay does not display the cascading mirrors (MU#1-4).
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
raw_device. If a volume is found, the command runs on the paired volume (-d) or group
(-dg). If the volume is contained in two groups, this command runs for the first volume
encountered only. If MU# is not specified, it defaults to 0.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
-f[x|c|d|m|e]
-fx displays the LDEV number in hexadecimal.
-fc displays the copy operation rate and a completion percentage. Detects and displays the
status (PFUL, PFUS) and confirms the SSWS state as an indication of SVOL_SSUS-takeover. This
option is also used to display the copy operation progress, the side file percentage, or the
BITMAP percentage for asynchronous pair volumes.
-fd displays the relationship between the Device_File and the paired volumes, based on the
group (as defined in the local instance configuration definition file). If the Device_File column
shows unknown to either the local or the remote host (instance), the volume is not recognized
on the current host, and pair operations are rejected (except with the local option -l in protection
mode).
-fm displays the bitmap mode.
-fe displays the serial number and LDEV number of the external LUNs mapped to the LDEV
and additional information for the pair volume. This option is invalid if -m all or -m cas
is specified.
Example (XP Continuous Access Software):
# pairdisplay -g horc0 -fdxe
Group ... LDEV#. P/S,  Status, Fence, Seq#,  P-LDEV# M CTG JID AP EM E-Seq# E-LDEV#
horc0 ... 41.    P-VOL PAIR    ASYNC, 63528  40      -  0   -   2  -  -      -
horc0 ... 40.    S-VOL PAIR    ASYNC, -----  41      -  0   -   -  -  -      -
Output fields:
For an explanation of the fields Group through M, see pairdisplay output fields on page 131.
CTG: For XP Continuous Access Asynchronous or Journal Software, displays the CT group ID, and
shows Fence as ASYNC. For XP Business Copy Software, displays the CT group ID only at the
time volumes are split.
JID: The journal group ID for the P-VOL or S-VOL. If the volume is not an XP Continuous Access
Journal Software volume, - is displayed.
AP: The number of active paths to the P-VOL. If not known, - is displayed.
CM: Copy mode. N is for non-XP Snapshot. S is for XP Snapshot. C is for Cruising Copy.
# pairdisplay -g ...
Group  PairVol   L/R ... M
MURA   MURA_001  L       -
URA    URA_001   L       -
URA    URA_001   R       -
For output field explanations, see pairdisplay output fields on page 131.
-q
Terminates interactive mode and exits this command.
-v ctg
Finds and displays consistency (CT) group information from the perspective of the local and
remote hosts connected via the specified group or raw_device. The first line shows the CT
group information for the local host, and the second line for the remote host.
NOTE:
If the target volume is not an XP Continuous Access Asynchronous or Journal Software volume, this argument
has no effect.
NOTE:
The -FCA [MU#] argument displays cascading XP Continuous Access Asynchronous or Journal Software
information, and then displays only the CT group information from the perspective of the remote host.
Example:
# pairdisplay -g ora -v ctg
CTG  P/S    Status  AP  U(%)  Q-Marker  QM-Cnt  SF(%)  Seq#   IFC
001  P-VOL  PAIR    2   0     00000080  3       50     63528  ON
001  S-VOL  PAIR    -   0     0000007d  -       50     63528  -
Output fields:
U(%): For XP Continuous Access Journal Software, the percentage of available journal space used.
Q-Marker: For a P-VOL, the latest sequence number of the MCU P-VOL as of the time that the current
write command was received. For an S-VOL, the sequence number of the latest remotely replicated
write to reach the RCU. This item is valid in the PAIR state.
QM-Cnt: The number of remaining Q-Markers within a CT group. XP Continuous Access
Asynchronous Software sends a dummy record set at regular intervals, so QM-Cnt always
shows 2 or 3 even if the host is not writing. This number is only valid in the PAIR state.
SF(%): The side file cache usage, regardless of XP Continuous Access Asynchronous or Journal
Software status.
Seq#: The serial number of the RAID array frame.
IFC: Indicates whether host write inflow control is ON or OFF.
OT/s: The CT group offloading timer setting (in seconds) for XP Continuous Access Asynchronous
or Journal Software. For XP Continuous Access Journal Software, this is the same as DOW in
raidvchkscan -v jnlt or pairdisplay -v jnlt.
CT/m: The CT group copy pending timer setting (in minutes) for XP Continuous Access
Asynchronous Software only.
RT/m: The CT group RCU ready timer (in minutes), XP Continuous Access Asynchronous Software
only.
-v jnl[t]
Displays the journal (JNL) status for the local and remote hosts connected to the group. The first line
shows the journal information for the local host and the second line for the remote host. [t] provides
three additional timer values for the journal volume.
This option displays nothing if the target volume is not a journal volume.
-FCA [MU#] displays only remote host journal information in a cascading journal volume.
Examples
# pairdisplay -g VG01 -v jnl
JID  MU  CTG  JNLS  AP  U(%)  Q-Marker  Q-CNT  D-SZ(BLK)
001  0   2    PJNN  4   21    43216fde  30     512345
002  0   2    SJNN  4   95    3459fd43  52000  512345
# pairdisplay -g VG01 -v jnlt
JID  MU  CTG  JNLS  AP  U(%)  Q-Marker  Q-CNT  D-SZ(BLK)
001  1   2    PJNN  4   21    43216fde  30     512345
002  1   2    SJNN  4   95    3459fd43  52000  512345
# pairdisplay -g VG01 -v jnl -FCA 1
JID  MU  CTG  JNLS  AP  U(%)  Q-Marker  Q-CNT  D-SZ(BLK)
003  1   2    PJNN  4   21    43216fde  30     512345
Output fields:
JID: The journal group ID.
MU: The mirror descriptor on XP Continuous Access Journal Software.
CTG: The CT group ID.
AP: Active path. Displays one of the following two conditions, according to the pair status:
For pair status PJNL or SJNL (except the suspend state), this field shows the number of active paths
on the initiator port in XP Continuous Access Journal Software links. If unknown, - is displayed.
For pair status SJNL (suspend state), this field shows the result of the suspend operation and
indicates whether or not all data on PJNL (P-VOL) was completely passed (synchronized) to
SJNL (S-VOL). If AP is 1, all data was passed; other values indicate that all data was not
passed.
U(%): The usage rate of the journal data.
Q-Marker: The sequence number of the journal group ID, called the Q-marker.
For pair status PJNL, Q-Marker shows the latest sequence number on the PJNL volume.
For pair status SJNL, Q-Marker shows the latest sequence number on the cache (DFW).
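Because U(%) reports how full the journal is, a monitoring script can flag journals nearing capacity before they suspend. A sketch over the -v jnl example listing above, captured in a heredoc; a live script would pipe `pairdisplay -g VG01 -v jnl` instead, and the 80% threshold is an arbitrary illustrative choice:

```shell
#!/bin/sh
# Flag journal group IDs whose U(%) column (field 6) is at or above 80%.
high=$(awk 'NR > 1 && $6 + 0 >= 80 { print $1 }' <<'EOF'
JID MU CTG JNLS AP U(%) Q-Marker Q-CNT D-SZ(BLK)
001 0  2   PJNN 4  21   43216fde 30    512345
002 0  2   SJNN 4  95   3459fd43 52000 512345
EOF
)
echo "journals over 80% full: $high"
```

Here the S-JNL side (JID 002, 95% used) would be reported, while the P-JNL side at 21% would not.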
JNLS status values: SMPL; PJNN/SJNN (normal, shown as PJNS/SJNS when suspended); PJSN/SJSN;
PJSF/SJSF; PJNF; and PJSE/SJSE. The meaning of each status for the P-JNL and S-JNL sides also
depends on the accompanying QCNT and AP values.
-v pid
Displays the pool ID and related information for the local and remote hosts connected to the group
via the specified group or raw_device. The first line shows pool information for the local host.
The second line shows the remote host.
This option displays nothing if the target volume is not a QS (XP Snapshot) volume.
-FBC [MU#] displays the cascading QS (XP Snapshot) volume pool information to allow
monitoring of the pool status of a remote host connected to a cascading CA_PVOL
CA_SVOL/QS_PVOL.
Examples
# pairdisplay -g QS -v pid
PID  POLS  U(%)  SSCNT  Available(MB)  Capacity(MB)  Seq#
127  POLN  0     6      3000           3000          63528
127  POLN  0     6      3000           3000          63528
# pairdisplay -g QS -v pid -l
PID  POLS  U(%)  SSCNT  Available(MB)  Capacity(MB)  Seq#
127  POLN  0     6      3000           3000          63528
Output fields:
PID: The XP Snapshot pool ID.
POLS: The status of the XP Snapshot pool:
POLN: Pool normal.
POLF: Pool full.
POLS: Pool suspend.
POLE: Pool failure; pool information is not displayed.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Like -z, makes XP RAID Manager enter interactive
mode, prompting you on the next line for command options, but if the instance terminates or is
shut down, your CLI session is terminated.
Examples
(XP Business Copy Software only):
# pairdisplay -g oradb
Group  Pair Vol(L/R) (Port#,TID,LU-M), Seq#, LDEV#...P/S,  Status, Seq#, P-LDEV# M
oradb  oradb1(L)     (CL1-A, 1, 1-0)   30053 18    ...P-VOL PAIR   30053 19      -
oradb  oradb1(R)     (CL1-D, 1, 1-0)   30053 19    ...S-VOL PAIR   ----  18      -
The following shows the output when using -CLI. The format aligns the column data in each row,
making it easier to parse. The delimiters between columns are either a space or -.
Seq#  LDEV# P/S   Status Seq#  P-LDEV# M
30053 271   P-VOL PAIR   30053 263     -
30053 271   SMPL  -      -     -       -
30053 271   SMPL  -      -     -       -
The following shows the output for a cascading configuration:
Group  PairVol(L/R) (Port#,TID, LU-M), Seq#,  LDEV#.P/S,  Status, Seq#,  P-LDEV# M
oradb  oradev1(L)   (CL1-D , 3, 0-0)  30052   266...SMPL  ----,   -----  ----    -
oradb  oradev1(L)   (CL1-D , 3, 0)    30052   266...P-VOL COPY,   30053  268     -
oradb1 oradev11(R)  (CL1-D , 3, 2-0)  30053   268...P-VOL COPY,   30053  270     -
oradb2 oradev21(R)  (CL1-D , 3, 2-1)  30053   268...P-VOL PSUS,   30053  272     W
oradb  oradev1(R)   (CL1-D , 3, 2)    30053   268...S-VOL COPY,   -----  266     -
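Because -CLI emits aligned, single-token columns, the rows parse cleanly with awk. A sketch selecting the LDEVs whose pair is in the PAIR state, using the recoverable columns of a -CLI listing as sample input (a live script would pipe `pairdisplay -g <group> -CLI` directly; the heredoc is a stand-in):

```shell
#!/bin/sh
# Print the LDEV# (field 2) of rows where P/S is P-VOL and Status is PAIR.
paired=$(awk '$3 == "P-VOL" && $4 == "PAIR" { print $2 }' <<'EOF'
Seq#  LDEV# P/S   Status Seq#  P-LDEV# M
30053 271   P-VOL PAIR   30053 263     -
30053 271   SMPL  -      -     -       -
EOF
)
echo "LDEVs in PAIR state: $paired"
```

Because the -CLI delimiters are plain spaces and -, the same awk pattern works for every command in this guide that accepts the -CLI option.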
The following example uses -m all. This argument displays all bitmaps, whether in use or not, that
can be employed with the volumes involved in the designated XP Continuous Access Software pair.
# pairdisplay -g oradb -m all
Group   PairVol(L/R)  (Port#, TID, LU-M), Seq#,  LDEV#. P/S,  Status, Seq#,  P-LDEV# M
oradb   oradev1(L)    (CL1-D , 3,  0-0)   30052  266...SMPL   ----,   -----  ----    -
------  ---------(L)  (CL1-D , 3,  0-1)   30052  266...SMPL   ----,   -----  ----    -
------  ---------(L)  (CL1-D , 3,  0-2)   30052  266...SMPL   ----,   -----  ----    -
oradb   oradev1(L)    (CL1-D , 3,  0)     30052  266...P-VOL  PAIR,   30053  268     -
oradb1  oradev11(R)   (CL1-D , 3,  2-0)   30053  268...P-VOL  COPY,   30053  270     -
oradb2  oradev21(R)   (CL1-D , 3,  2-1)   30053  268...P-VOL  PSUS,   30053  272     W
------  ---------(R)  (CL1-D , 3,  2-2)   30053  268...SMPL   ----,   -----  ----    -
oradb   oradev1(R)    (CL1-D , 3,  2)     30053  268...S-VOL  COPY,   -----  266     -
Example 2:
# pairdisplay -g URA -CLI -fd -m all
Group PairVol  L/R Device_File M  Seq#  LDEV# P/S   Status Seq# P-LDEV# M
MURA  MURA_001 L   c1t2d7s2    0  62500 263   SMPL  -      -    -       -
-     -        L   c1t2d7s2    1  62500 263   SMPL  -      -    -       -
-     -        L   c1t2d7s2    2  62500 263   SMPL  -      -    -       -
URA   URA_001  L   c1t2d7s2    -  62500 263   S-VOL PAIR   -    262     -
-     -        L   c1t2d7s2    h1 62500 263   SMPL  -      -    -       -
URA   URA_001  R   c1t2d8s2    0  62500 264   SMPL  -      -    -       -
-     -        R   c1t2d8s2    1  62500 264   SMPL  -      -    -       -
-     -        R   c1t2d8s2    2  62500 264   SMPL  -      -    -       -
URA   URA_001  R   c1t2d8s2    -  62500 264   SMPL  -      -    -       -
-     -        R   c1t2d8s2    h1 62500 264   SMPL  -      -    -       -
%: The following table shows what the % column reports for XP Continuous Access Asynchronous
Software, XP Continuous Access Synchronous Software, XP Business Copy Software, and XP
Continuous Access Journal Software.
Table 24 pairdisplay % output breakdown
For each pair state (COPY, PAIR, PSUS/SSUS (PJNS/SJNS), OTHER) and volume (P-VOL, S-VOL),
the table indicates what the % column reports for each product (Cnt Ac-A, Cnt Ac-S, BC,
Cnt Ac-J): CR (copy operation rate), SF (side file percentage), IF (inflow percentage), or
BM (bitmap percentage).
The following is an arithmetic expression using the High Water Mark (HWM) as 100% of a side file
space:
HWM (%) = 30 / side file space (30 to 70) * 100
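As a quick check, the expression above can be evaluated directly in the shell. The side file space of 50% below is an assumed value for illustration (the valid range is 30 to 70):

```shell
# Hedged sketch: evaluate HWM (%) = 30 / side_file_space * 100.
# side_file_space=50 is an assumed example value, not from a real array.
side_file_space=50
hwm=$(( 30 * 100 / side_file_space ))   # multiply first to avoid integer truncation
echo "HWM = ${hwm}%"
```

With a side file space of 50, this prints `HWM = 60%`.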
pairevtwait
Wait for event completion
Description
The pairevtwait command waits for the paircreate and pairresync commands to complete.
It also checks the status of those commands. It waits (sleeps from the viewpoint of the process) until
the paired volume status becomes identical to a specified status. When the desired status has been
achieved, or the time-out period has elapsed, the command exits with the appropriate return code.
Syntax
pairevtwait -h
pairevtwait { -nomsg | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq#
LDEV# [MU#] | -FCA [MU#] | -FBC [MU#] | -g group | -h |
-I[H/CA][M/BC][instance#] | -l | -nowait[s] | -q | -s[s] status... | -t timeout
[interval] | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. This option must be
specified at the beginning of the command arguments.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the volume is contained in two groups, the command runs on the first volume encountered.
If MU# is not specified, it defaults to 0.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-FCA [MU#]
Used to forcibly specify, for event waiting, an XP Continuous Access Software P-VOL that is
also an XP Business Copy Software P-VOL. If the -l option is specified, the status of a cascading
XP Continuous Access Software volume on a local host (near site) is tested. If no -l option is
specified, this option tests the status of a cascading XP Continuous Access Software volume
on a remote host (far site).
The target XP Continuous Access Software volume must be SMPL or P-VOL.
The MU# specifies the cascading mirror descriptor for XP Continuous Access Journal Software.
-FBC [MU#]
Used to forcibly specify, for event waiting, an XP Business Copy Software P-VOL that is
also an XP Continuous Access Software P-VOL. If the -l option is specified, this option tests the
status of a cascading XP Business Copy Software volume on a local host (near site). If no -l
option is specified, this option tests the status of a cascading XP Business Copy Software volume
on a remote host (far site).
The target XP Business Copy Software volume must be SMPL or P-VOL.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the entire group unless the -d pair_vol argument is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l
When this command cannot use a remote host because it is down, this option allows a local
host to run the command.
The target volume of a local host must be SMPL or P-VOL.
XP Business Copy Software/XP Snapshot volumes can be specified from the S-VOL.
-nowait[s]
Causes the pairing status to be reported immediately. The [s] option causes the pairing status
of the S-VOL to be reported immediately.
When this option is specified, the -t and -s options are ignored.
-q
Terminates interactive mode and exits this command.
-s[s] status
Specifies the status to wait for (SMPL, COPY [including RCPY], PAIR, PSUS, or PSUE). If two
or more statuses are specified following -s, waiting occurs according to the logical OR of the
specified statuses. This argument is not valid when the -nowait argument is specified.
The [s] option specifies to wait for the S-VOL to attain the specified status.
-t timeout [interval]
Specifies the amount of time, in one-second intervals, to wait for the specified state. If [interval]
is not specified, the default value is used. This argument is not valid when the -nowait
argument is specified. If the interval is specified as greater than 1999999, a warning message
is displayed.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode, prompting you on the next line for command options. If the instance terminates or is
shut down, your CLI session is terminated.
Returned values
This command sets one of the following returned values in exit(), which allows you to check the
execution results.
When the -nowait argument is specified:
Normal termination
Abnormal termination: other than 6 to 127 (for the error cause and details, see the execution logs)
When the -nowaits option is specified:
Normal termination:
1: The status is SMPL
2: The status is COPY or RCPY
3: The status is PAIR
4: The status is PSUS
5: The status is PSUE
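A script can branch on these return codes directly. The sketch below illustrates the pattern; the group name oradb is illustrative, and the stub function stands in for the real CLI (here it pretends the pair reached the PAIR state) so the control flow can run anywhere:

```shell
# Hedged sketch: branch on the pairevtwait exit status.
# The stub below stands in for the real RAID Manager binary.
pairevtwait() { return 3; }   # stub: pretend the status reached PAIR

pairevtwait -g oradb -s pair -t 300 -nowaits
case $? in
  1) state="SMPL" ;;
  2) state="COPY/RCPY" ;;
  3) state="PAIR" ;;
  *) state="unknown; check the execution logs" ;;
esac
echo "pair state: $state"
```

With the real command, remove the stub and the exit status reflects the actual pair status.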
Error codes
The table lists specific error codes for the pairevtwait command.
Table 25 pairevtwait error codes
Category                        Error code   Value
Volume status (unrecoverable)   EX_ENQVOL    236
                                EX_INCSTG    229
                                EX_INVVOL    222
                                EX_EVOLCE    235
                                EX_EWSUSE    234
Timer (recoverable)             EX_EWSTOT    233
                                EX_EWSLTO    232
pairmon
Report pair transition status
Description
The pairmon command is sent to the XP RAID Manager (daemon) to report the pairing status transition.
When an error or status transition is detected, this command outputs an error message.
Events exist on the pair state transfer queue of XP RAID Manager. Resetting an event corresponds
to deleting one or all events from the pair state transfer queue. If the command does not reset, the
pair state transfer queue is maintained.
The report mode is selected by combining the -nowait, -resevt, and -allsnd flags:
-nowait   -resevt   -allsnd   Actions
Invalid   Invalid   Invalid   When XP RAID Manager does not have an event, the command waits
                              until an event occurs. If more events exist, it reports one event
                              and clears the event that it reports.
Invalid   Invalid   allsnd    Waits if no event exists; reports all events; clears the reported events.
Invalid   resevt    Invalid   Waits if no event exists; reports one event; clears all events.
Invalid   resevt    allsnd    Waits if no event exists; reports all events; clears all events.
nowait    Invalid   Invalid   Reports one event immediately without waiting; clears the reported event.
nowait    Invalid   allsnd    Reports all events immediately without waiting; clears the reported events.
nowait    resevt    Invalid   Reports one event immediately without waiting; clears all events.
nowait    resevt    allsnd    Reports all events immediately without waiting; clears all events.
Syntax
pairmon { -D | -allsnd | -h | -I[H/CA][M/BC][instance#] | -q | -nowait |
-resevt | -s status... | -z | -zx }
Arguments
-D
Selects the default report mode.
One event is reported (and cleared) if there is pairing status transition information to be reported.
If there is no information, the command waits.
The report mode consists of three flags: -allsnd, -resevt, and -nowait.
-allsnd
Reports all pairing status transition events.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
Example
# pairmon -allsnd -nowait
Group Pair vol Port  targ# lun# LDEV# Oldstat code
oradb oradb1   CL1-A     1    5   145 SMPL    0x01
oradb oradb2   CL1-A     1    6   146 PAIR    0x04
pairresync
Resynchronize a pair
Description
The pairresync command re-establishes a split pair and then resumes updating the secondary
volume based on the primary volume. If no data has been written to the secondary volume, only
differential P-VOL data is copied. If data has been written to the secondary volume, the differential
data from the P-VOL is copied to the S-VOL and the changes on the S-VOL are overwritten. The
-swap option updates the P-VOL based on the S-VOL so that the P-VOL becomes the S-VOL and the
S-VOL becomes the P-VOL. Pair resynchronization can be specified even while the primary volume
is being accessed. When the pairresync command is issued, write access to the secondary volume
is disabled.
The pairresync command puts a paired volume currently in the suspend state (PSUS or SSUS) into
the PAIR state.
This command can be applied to each paired logical volume or each group.
NOTE:
Executing pairresync with normal options in a cascaded XP Continuous Access Software environment
causes an automatic suspend of the downstream XP Continuous Access Journal Software.
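A typical split-and-resync cycle can be sketched as a script. The commands are those described in this guide; the group name VG01 is illustrative, and the stub functions stand in for the real RAID Manager binaries so the control flow can run anywhere:

```shell
# Hedged sketch of a split/resync cycle for group VG01 (assumed name).
# The stubs below stand in for the real RAID Manager binaries.
pairsplit()   { :; }          # stub: split the pair (PAIR -> PSUS)
pairresync()  { :; }          # stub: resynchronize (PSUS -> PAIR)
pairevtwait() { return 0; }   # stub: pretend PAIR was reached in time

pairsplit -g VG01             # PAIR -> PSUS: S-VOL becomes readable
# ... back up or read the S-VOL here ...
pairresync -g VG01            # PSUS -> PAIR: copy differential data
if pairevtwait -g VG01 -s pair -t 300; then
  result="PAIR"
fi
echo "VG01 state: $result"
```

With the real commands, the pairevtwait call blocks until the group reaches the PAIR state or the timeout expires.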
UNIX
CAUTION:
Because data in the secondary volume is renewed by pairresync, the secondary volume must not
be in a mounted state on a UNIX system.
MPE/iX
CAUTION:
Before issuing this command, ensure that the secondary volume is not mounted on an MPE/iX system. If it
is, VSCLOSE that volume set and de-configure the LDEVs using IOCONFIG, the online device configuration
utility program.
Syntax
pairresync { -nomsg | -c size | -cto o-time [c-time [r-time]] | -d pair_vol
| -d[g] raw_device [MU#] | -d[g] seq# LDEV# [MU#] | -FCA [MU#] | -FBC [MU#]
| -fq <mode> | -g group | -h | -I[H/CA][M/BC][instance#] | -l | -q | -restore |
-swap[s|p] | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. This option must be
specified at the beginning of the command arguments.
-c size
Used to specify the number of tracks (1 to 15) copied in parallel. If omitted, the default is the
value used at the time of paircreate.
-cto o-time c-time r-time
(XP Continuous Access Asynchronous or Journal Software only) Sets values for the offloading
timer (o-time), the copy-pending timer (c-time), and the RCU-ready timer (r-time).
XP Continuous Access Journal Software uses only o-time.
If only one value is given (-cto 90), the value is interpreted as o-time. If two values are
given (-cto 90 5), they are interpreted as o-time and c-time. A time cannot be specified
without the times in front of it; that is, to specify r-time, you must also specify
o-time and c-time (-cto 90 5 6).
o-time: This option sets the offloading timer. It controls write I/O inflow to the specified
CT group.
In XP Continuous Access Asynchronous Software, o-time can be a value of 1 to 255
seconds with a default of 90 seconds.
In XP Continuous Access Journal Software, o-time can be a value of 1 to 600 seconds
with a default of 60 seconds. Entering a value of 0 sets the offloading timer to OFF so there
is no write I/O inflow control.
If the side file is full, the host-side write I/O is stopped for the set time (o-time) to allow
more space to become available. If the side file is still full after the timeout, the pair state
of the side file (journal) changes from PAIR to PFUS, and the host-side write I/O
continues in BITMAP mode.
c-time: (XP Continuous Access Asynchronous Software only) This option sets the copy
pending timer for the specified CT group. c-time can be a value of 1 to 15 minutes with
a default of 5 minutes. If this option is not specified, the default value is used. If a new CT
group is created, the default 5 minutes is set. If a new CT group is not created, any
previously set c-time value is not changed.
r-time: (XP Continuous Access Asynchronous Software only) This option sets the RCU
ready timer for the specified CT group. r-time can be a value of 1 to 10 minutes with
a default of 5 minutes. If this option is not specified, the default value is used. If a new CT
group is created, the default 5 minutes is set. If a new CT group is not created, any
previously set r-time value is not changed.
NOTE:
These options are invalid when a pair-volume is added to a CT group.
XP Continuous Access Asynchronous Software
Pairresync command parameters are forwarded to the S-VOL side, and are used if the
S-VOL is changed to a P-VOL.
The command parameters are saved until the pair-volumes transition to SMPL state.
XP Continuous Access Journal Software
The parameters are saved for each journal group and therefore must be specified for both
the P-VOL and S-VOL sides.
For example, to change the P-VOL to 30 seconds:
pairsplit -g UR
pairresync -g UR -cto 30
If an S-VOL is changed and then later becomes a P-VOL, the previous operation must be repeated.
The command parameters are saved until the journal group is broken.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
The following table shows the relationship between the -fq option and $HORCC_RSYN.
-fq option    $HORCC_RSYN   Behavior
quick         no effect     quick resync
normal        no effect     normal resync
Unspecified   QUICK         quick resync
Unspecified   NORMAL        normal resync
Unspecified   Unspecified   dependent on Mode 87
The -fq option is also validated on XP Continuous Access Software/XP Business Copy Software
cascading operations using -FBC [MU#].
The -fq option works only with the XP12000 and XP24000 disk arrays and is ignored by
the XP1024/XP128 disk array.
When -restore is specified, if <mode> is set to quick, pairresync -restore performs
a Quick Restore regardless of the $HORCC_REST environment variable setting and/or the
Mode 80 via SVP setting.
The following table shows the relationship between the -fq option and $HORCC_REST.
Table 28 pairresync command: fq and $HORCC_REST relationship
-fq option    $HORCC_REST   Behavior
quick         no effect     quick resync
normal        no effect     normal resync
Unspecified   QUICK         quick resync
Unspecified   NORMAL        normal resync
Unspecified   Unspecified   dependent on Mode 80
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the entire group unless the -d pair_vol argument is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l
Allows a local host (connected to the P-VOL) to resynchronize P-VOL to S-VOL even though the
remote host is down.
-q
Terminates interactive mode and exits this command.
-restore
(XP Business Copy Software/XP Snapshot only) (Optional) Copies differential data from the
secondary volume to the primary volume. (The S-VOL must not be mounted on any host while
this command is executing.)
If the -restore option is not specified, the P-VOL is copied to the S-VOL. If the -restore
option is used, the P-VOL must not be host-mounted while the command is executing. If the
target volume is currently under maintenance, this command cannot execute (the copy is rejected).
If mode 80 is turned ON at the SVP, this option takes time to complete the S-VOL to P-VOL
copy (pairevtwait signals its completion). However, at completion, the P-VOL and S-VOL
LUNs still point to the same LDEVs (physical disks) as before.
If mode 80 is turned OFF on the SVP, this option takes virtually no time (pairevtwait still
signals completion) because the P-VOL LUN now is associated with the LDEVs that used to be
associated with the S-VOL (and vice versa). This allows virtually immediate P-VOL access while
it continues to copy to the S-VOL in the background. To avoid noticing a performance change
after using this option, the P-VOL and S-VOL should use the same RAID type and the same
speed disks (for example, 10k RPM).
-swap[s|p]
(XP Continuous Access Software only) The -swaps option runs from the S-VOL when there is
no host on the P-VOL side to help. A remote host must be connected to the S-VOL. Typically
executed in PSUS (SSWS) state (after a horctakeover) to facilitate fast failback without
requiring a full copy.
Unlike -swaps, -swapp requires the hosts to cooperate on both sides. It is the equivalent of
-swaps, executed from the original P-VOL side.
For both -swaps and -swapp, the delta data from the original S-VOL becomes dominant and
is copied to the original P-VOL, and then the P-VOL/S-VOL designations are swapped.
The application can continue to run at the remote failover site during this operation. At
completion, the remote failover site owns the P-VOL. When desired, a very fast horctakeover
allows a fast failback of the application from the recovery site to the original site.
The following figure describes the -swap[s|p] operation. The left side of the diagram shows
T0 (time zero) for both the P-VOL and S-VOL, before command execution. The right side shows
T1, after command execution.
Example
This example shows a pairresync on group VG01. The pairdisplay shows two volumes in the
COPY state. The Copy% value indicates how much of the P-VOL is in sync with the S-VOL.
# pairresync -g VG01
# pairdisplay -g VG01 -fc -l
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status,Copy%,P-LDEV# M
VG01  d1(L)        (CL2-P , 0, 0-0)35641 58..P-VOL COPY,  89    61      -
VG01  d2(L)        (CL2-P , 0, 1-0)35641 59..P-VOL COPY,  96    62      -
Output fields:
Group: The group name (dev_group) described in the configuration definition file.
PairVol(L/R): The paired volume name (dev_name) of the group described in the configuration
definition file. L is the local host. R is the remote host.
P,T#,L#: (XP Continuous Access Software only) The port number, target ID, and LUN described in
the configuration definition file.
Port#, TID, LU-M: (XP Business Copy Software only) The port number, target ID, LUN, and MU number
described in the configuration definition file.
Seq#: The disk array serial number.
LDEV#: The LDEV number.
P/S: The (P-VOL, S-VOL) attribute of a volume.
Status: The status of the paired volume.
Fence: (XP Continuous Access Software only) The fence level of the paired volume.
Copy%: The copy operation rate (identical for P-VOL and S-VOL).
P-LDEV#: Displays the LDEV number of the pair partner.
M=W: (Valid for PSUS state only) In the P-VOL case, this designates suspended with S-VOL R/W
enabled. In the S-VOL case, this designates that the S-VOL can accept writes.
M=N: (Valid for COPY/RCPY/PAIR/PSUE states) A listed volume means that reading is disabled.
Returned values
This command sets either of the following returned values in exit(), which allows you to check the
execution results. The command returns 0 upon normal termination. A nonzero return indicates
abnormal termination. For the error cause and details, see the execution logs.
Error codes
The table lists specific error codes for the pairresync command.
Table 29 pairresync error codes
Category                        Error code   Value
Volume status (unrecoverable)   EX_ENQVOL    236
                                EX_INCSTG    229
                                EX_INVVOL    222
                                EX_INVSTP    228
pairsplit
Split a pair
Description
The pairsplit command is used to change the status of a paired volume. This command puts the
pair into either PSUS or SMPL state.
For a status change from PAIR to PSUS or PSUS to SMPL: before these state changes are made, all
changes made to the P-VOL, up to the point when the command was issued, are written to the S-VOL.
If possible, the host system must flush the host-resident buffer cache before executing this command.
For a status change from PAIR to SMPL: changes made on the P-VOL that are not yet copied to the
S-VOL are lost, and data consistency on the S-VOL is not enforced. To use data on the S-VOL, first
change the status from PAIR to PSUS and then to SMPL to ensure consistency on the S-VOL.
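The two-step sequence above can be sketched as a script. The group name oradb is illustrative, and the stub stands in for the real CLI so the order of operations can be shown:

```shell
# Hedged sketch: split to PSUS first (pending P-VOL updates are written
# to the S-VOL), then dissolve to SMPL, so the S-VOL data stays consistent.
pairsplit() { steps="${steps}${steps:+ }pairsplit $*"; }  # stub records calls

sync                      # flush host-resident buffers (UNIX)
pairsplit -g oradb        # PAIR -> PSUS: S-VOL now consistent
pairsplit -g oradb -S     # PSUS -> SMPL: pairing dissolved
echo "$steps"
```

Going straight from PAIR to SMPL with -S skips the flush step and can leave the S-VOL inconsistent.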
After a pair is put into the PSUS state, changes made to the P-VOL are no longer copied to the S-VOL.
However, the changes made to both the S-VOL and the P-VOL are noted and, when the volumes are
resynchronized, the changed tracks or cylinders (XP Continuous Access Software) are resynchronized
with the P-VOL. See pairresync.
When a pair is put into SMPL state, the pair relationship between the volumes is broken. Changes
made to either volume are not recorded. To get the volumes back into a pair relationship, use the
paircreate command.
This command stops updating the secondary volume while maintaining pairing status. When this
command is issued, read or read/write access to the secondary volume is enabled and the volume
is put into a SSUS state.
This command can be applied to each paired logical volume or each group. Only one pair-splitting
argument (-r, -rw, -S, -R, or -P) can be specified. If several arguments are specified, only the last
argument is valid.
MPE/iX
Before you run this command, the non-written data that remains in the host buffer must be flushed for
synchronization. For MPE/iX systems, this is a VSCLOSE of the volume set.
Syntax
pairsplit { -nomsg | -c size | -d pair_vol | -d[g] raw_device [MU#] |
-d[g] seq# LDEV# [MU#] | -E | -FBC [MU#] | -FCA [MU#] | -fq <mode> | -g
group | -h | -I[H/CA][M/BC][instance#] | -l | -P | -R[S][B] | -r[w] | -S
}
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. Must be specified
at the beginning of the command arguments.
-c size
(XP Business Copy Software only) Copies differential data retained in the primary volume into
the secondary volume, and then enables reading and writing from and to the secondary volume
(after completing the copy).
For size, specify a track size for copying in a range of 1 to 15. If no track size is specified,
the value used for paircreate is used.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the specified raw_device is listed in multiple device groups, this applies to the first one
encountered.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-E
(XP Business Copy Software only) Issued to forcibly suspend a paired volume (for example,
when a failure occurs). It is not normally used.
-FCA [MU#]
Used to forcibly specify a cascading XP Continuous Access Software volume in a combination
XP Continuous Access Software and XP Business Copy Software environment. If the -l option
is specified, this option splits a cascading XP Continuous Access Software volume on a local
host (near site). If no -l option is specified, this option splits a cascading XP Continuous Access
Software volume on a remote host (far site).
The target XP Continuous Access Software volume must be a P-VOL, or the -R[S][B] option
can be specified on the S-VOL.
The MU# specifies the cascading mirror descriptor for XP Continuous Access Journal Software.
-FBC [MU#]
Used to forcibly specify a cascading XP Business Copy Software volume in a combination XP
Business Copy Software and XP Continuous Access Software environment. If the -l option is
specified, this option splits a cascading XP Business Copy Software volume on a local host
(near site). If no -l option is specified, this option splits a cascading XP Business Copy Software
volume on a remote host (far site).
The target XP Business Copy Software volume must be a P-VOL, and the -E option cannot be
specified.
-fq <mode>
(XP Business Copy Software only) Specifies whether the split is performed in QUICK mode.
If <mode> is set to quick, the split is performed as a Quick Split regardless of the
$HORCC_SPLT environment variable setting and/or the Mode 122 via SVP setting.
The following table shows the relationship between the -fq option and $HORCC_SPLT.
Table 30 pairsplit command: fq and $HORCC_SPLT relationship
-fq option    $HORCC_SPLT   Behavior
quick         no effect     quick Split
normal        no effect     normal Split
Unspecified   QUICK         quick Split
Unspecified   NORMAL        normal Split
Unspecified   Unspecified   dependent on Mode 122
NOTE:
The -fq option is also validated on XP Continuous Access Software/XP Business Copy Software
cascading operations using -FBC [MU#].
NOTE:
The -fq option works only with the XP12000 and XP24000 disk arrays and is ignored by the
XP1024/XP128 disk array.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the entire group unless the -d pair_vol argument is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l
When the remote host is down and cannot be used, this option enables a pairsplit from
a local host.
(XP Continuous Access Software only) Unless the -R option is specified, the target volume of
a local host must be a P-VOL.
-P
(XP Continuous Access Software only) For XP Continuous Access Synchronous Software, used
to bring the primary volume forcibly into write-disabled mode. It is issued by the secondary
host to suppress data updating by the host possessing the primary volume.
When used with XP Continuous Access Asynchronous or Journal Software, this option allows
the user to forcibly suspend write operations when the side file/journal utilization becomes
too high, and purges the remaining side file/journal data without updating the S-VOL. In a
disaster recovery situation where the S-VOL data is not up to date, if the S-VOL is used as a
file system, an FSCK or CHKDSK command must be issued before mounting the volume, even
after the P-VOL is unmounted.
-R
(XP Continuous Access Software only) Used to bring the secondary volume forcibly into SMPL
mode. It is issued by the secondary host if the host possessing the primary volume goes down
because of a failure or the like.
-R[S]
(XP Continuous Access Software only) Brings the secondary volume forcibly into SMPL mode.
-R[B]
(XP Continuous Access Software only) Used to bring the secondary volume forcibly from SMPL
to PSUE mode.
-r[w]
(XP Continuous Access Software only) Used to specify a mode of access to the secondary
volume after paired volumes are split.
The -r option allows read-only access of the secondary volume; -r is the default.
The -rw option enables reading and writing from and to the secondary volume.
-S
(Optional) Used to bring the primary and secondary volumes into SMPL mode, in which pairing
is not maintained. Data consistency is only maintained if devices are in a suspend status (PSUS).
If devices are in a pair status (PAIR), data on the secondary volume is not consistent or usable.
Returned values
This command sets either of the following returned values in exit(), which allows you to check the
execution results.
The command returns 0 upon normal termination.
A nonzero return indicates abnormal termination. For the error cause and details, see the execution
logs.
Error codes
The following table lists specific error codes for the pairsplit command.
Table 31 pairsplit error codes
Category                        Error code   Value
Volume status (unrecoverable)   EX_ENQVOL    236
                                EX_INCSTG    229
                                EX_INVVOL    222
                                EX_EVOLCE    235
                                EX_INVSTP    228
                                EX_EWSUSE    234
pairsyncwait
Synchronization waiting command
Description
The pairsyncwait command is used to confirm that a mandatory write (and all writes before it)
has been stored in the DFW (write) cache area of the RCU.
The command gets the latest P-VOL XP Continuous Access Asynchronous Software sequence number
of the main control unit (MCU) side file and the sequence number of the most recently received write
at the RCU DFW (with the correct CTGID, group or raw_device) and compares them at regular
intervals.
If the RCU sequence number exceeds the value of the designated MCU sequence number within the
time specified, this command reports a 0 return code (meaning P-VOL/S-VOL synchronization to the
desired point is achieved).
The -nowait option shows the latest sequence number (Q-marker) of the designated MCU P-VOL
and CTGID. The Q-marker is displayed as 10 hexadecimal characters.
Syntax
pairsyncwait { -nomsg | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq#
LDEV# [MU#] | -fq | -g group | -h | -I[H/CA][M/BC][instance#] | -m marker
| -nowait | -q | -t timeout | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. This argument must
be specified at the beginning of the command arguments.
-d pair_vol
Used to specify a logical (named) volume that is defined in the configuration definition file.
When this option is specified, the command runs for the specified paired logical volumes.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the specified raw_device is listed in multiple device groups, the option is applied to the
first one encountered.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-fq
Displays the number of remaining Q-markers in the side file for the CT group.
# pairsyncwait -g oradb -nowait -fq
UnitID CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0     3  01003408ef  NOWAIT       2     120
# pairsyncwait -g oradb -nowait -m 01003408e0 -fq
UnitID CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0     3  01003408e0  NOWAIT       2     105
# pairsyncwait -g oradb -t 50 -fq
UnitID CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0     3  01003408ef  TIMEOUT      2       5
If you specify -nowait -fq, QM-Cnt shows the number of remaining Q-markers in the CT
group.
If you specify -nowait -m marker -fq, QM-Cnt shows the number of Q-markers remaining
from the specified marker in the CT group.
If you do not specify -nowait and the displayed status is TIMEOUT, QM-Cnt shows the number
of remaining Q-markers at time-out.
If the Q-marker status is invalid (BROKEN or CHANGED), - is displayed.
To determine the remaining data in the CT group:
Remaining data in CT group = side file capacity * side file percentage / 100
The side file percentage is the rate shown under the % column by the pairdisplay
command. The side file capacity is the capacity allocated as the side file, which is set within
30-70% of cache.
To determine the average data per Q-marker in the CT group:
Data per Q-marker = Remaining data in CT group / QM-Cnt
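As a concrete illustration, the two formulas above can be evaluated with shell arithmetic. The side file capacity (4096 MB), side file percentage (30), and QM-Cnt (120) below are hypothetical sample values, not values taken from this guide.

```shell
# Hypothetical sample values: a 4096 MB side file capacity, a 30% rate from
# pairdisplay's % column, and QM-Cnt = 120 as reported by pairsyncwait -fq.
SIDEFILE_MB=4096
PERCENT=30
QM_CNT=120

# Remaining data in CT group = side file capacity * side file percentage / 100
REMAINING_MB=$((SIDEFILE_MB * PERCENT / 100))

# Data per Q-marker = remaining data in CT group / QM-Cnt
PER_MARKER_MB=$((REMAINING_MB / QM_CNT))

echo "Remaining data in CT group: ${REMAINING_MB} MB"
echo "Average data per Q-marker: ${PER_MARKER_MB} MB"
```

With these sample inputs the remaining data works out to 1228 MB, or roughly 10 MB per Q-marker.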
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the specified group unless the -d pair_vol option is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-m marker
Used to specify the Q-marker, the XP Continuous Access Asynchronous Software sequence
number of the main control unit (MCU) P-VOL. If XP RAID Manager gets the Q-marker from the
-nowait option, it can confirm the completion of asynchronous transfer to that point by using
pairsyncwait with that Q-marker.
If a Q-marker is not specified, XP RAID Manager uses the latest sequence number at the time
pairsyncwait is executed. It is also possible to wait for completion from the S-VOL side.
Q-marker format:
Returned values
This command returns one of the following values in exit (), which allows you to check the execution
results.
When the -nowait option is specified:
Normal termination:
0: The status is NOWAIT.
Abnormal termination:
Other than 0 to 127. (For the error cause and details, see the execution logs.)
When the -nowait option is not specified:
Normal termination:
Example
When the -nowait option is specified:
# pairsyncwait -g oradb -nowait
UnitID CTGID  Q-Marker    Status   Q-Num
     0     3  01003408ef  NOWAIT       2
# pairsyncwait -g oradb -t 100
Q-Marker    Status   Q-Num
01003408ef  DONE         2
# pairsyncwait -g oradb -t 1
Q-Marker    Status   Q-Num
01003408ef  TIMEOUT      3
# pairsyncwait -g oradb -t 100 -m 01003408ef
Q-Marker    Status   Q-Num
01003408ef  DONE         0
# pairsyncwait -g oradb -t 100
Q-Marker    Status   Q-Num
01003408ef  BROKEN       0
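As a sketch of how the Q-marker shown in the examples above might be captured for later use with the -m option, the snippet below parses the second output line with awk. The output text is simulated with a shell variable here (a real script would pipe `pairsyncwait -g oradb -nowait` instead); the group name oradb and the column position are assumptions based on the sample output.

```shell
# Simulated output of: pairsyncwait -g oradb -nowait
# In a real script this would come from running the command itself.
OUT='UnitID CTGID  Q-Marker    Status  Q-Num
     0     3  01003408ef  NOWAIT      2'

# The Q-marker is the third field of the second (data) line.
MARKER=$(printf '%s\n' "$OUT" | awk 'NR==2 {print $3}')
echo "$MARKER"
```

The captured marker could then be passed back, for example as `pairsyncwait -g oradb -t 600 -m "$MARKER"`, to wait for synchronization up to that point.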
Error codes
The following table lists specific error codes for the pairsyncwait command.
Table 32 pairsyncwait error codes

Error message   Value
EX_INVVOL       222
pairvolchk
Check volume attributes
Description
The pairvolchk command reports the attributes of a volume from the perspective of the local or
remote host. This command can be applied to each paired logical volume or each group.
This is the most important command used by high availability (HA) failover software to determine
when a failover or failback is appropriate.
The table under the heading HA control script state transitions on page 245 lists state transitions
resulting from running pairvolchk in HA control scripts.
Syntax
pairvolchk { -nomsg | -c | -d pair_vol | -d[g] raw_device [MU#] | -d[g]
seq# LDEV# [MU#] | -FBC [MU#] | -FCA [MU#] | -g group | -h |
-I[H/CA][M/BC][instance#] | -q | -s[s] | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. If used, this
argument must be specified at the beginning of the command arguments.
-c
Checks the conformability of the paired volumes of the local and remote hosts and reports the
volume attribute of the remote host.
If this option is not specified, the volume attribute of the local host is reported.
-d pair_vol
Specifies a paired volume name written in the configuration definition file. The command runs
only for the specified paired volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the specified raw_device is listed in multiple device groups, this applies to the first group
encountered.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-FBC [MU#]
Forcibly specifies an XP Business Copy Software pair using the name of an XP Continuous
Access Software group to which it is cascaded. If the -c option is not specified, this option
acquires the attributes of a cascading XP Business Copy Software volume on the local host (near
site). If the -c option is specified, this option acquires the attributes of a cascading XP Business
Copy Software volume at the remote host (far site).
-FCA [MU#]
Forcibly specifies an XP Continuous Access Software volume by way of its cascaded XP Business
Copy Software volume name. If the -c option is not specified, this option acquires the attributes
of a cascading XP Continuous Access Software volume at the local host (near site). If the -c
option is specified, this option acquires the attributes of a cascading XP Continuous Access
Software volume at the remote host (far site).
The MU# specifies the cascading mirror descriptor for XP Continuous Access Journal Software.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the entire group unless the -d pair_vol argument is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-q
Terminates interactive mode and exits this command.
-s[s]
Used to acquire the fine granularity volume state (for example, PVOL_PSUS) of a volume. See
the XP RAID Manager status table.
If it is not specified, the generic volume state (for example, P-VOL) is reported.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Returned values
When the -s[s] argument is not specified:
Normal termination:
1: The volume attribute is SMPL
2: The volume attribute is P-VOL
3: The volume attribute is S-VOL
Abnormal termination:
Other than 0 to 127. (For the error cause and details, see the execution logs.)
236: EX_ENQVOL
237: EX_CMDIOE
235: EX_EVOLCE (only when the -c option is specified)
Error message   Return value
EX_ENORMT       242
EX_CMDIOE       237
EX_ENQVOL       236
EX_EVOLCE       235
EX_INCSTG       229
EX_VOLCUR       225
EX_VOLCUE       224
EX_VOLCRE       223
EX_EXTCTG       216
EX_ENQCTG       214
For XP Continuous Access Synchronous Software and XP Business Copy Software volumes:
11: SMPL
22: PVOL_COPY or PVOL_RCPY
23: PVOL_PAIR
24: PVOL_PSUS
25: PVOL_PSUE
26: PVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
29: PVOL_INCSTG (inconsistent status in group) Not returned
32: SVOL_COPY or SVOL_RCPY
33: SVOL_PAIR
34: SVOL_PSUS
35: SVOL_PSUE
36: SVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
39: SVOL_INCSTG (inconsistent status in group) Not returned
For XP Continuous Access Asynchronous Software and XP Continuous Access Journal Software volumes:
42: PVOL_COPY
43: PVOL_PAIR
44: PVOL_PSUS
45: PVOL_PSUE
46: PVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
47: PVOL_PFUL
48: PVOL_PFUS
52: SVOL_COPY or SVOL_RCPY
53: SVOL_PAIR
54: SVOL_PSUS
55: SVOL_PSUE
56: SVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
57: SVOL_PFUL
58: SVOL_PFUS
For XP Snapshot volumes:
25: PVOL_PSUE
26: PVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
27: PVOL_PFUL (PAIR closing Full status of the XP Snapshot pool)
28: PVOL_PFUS (PSUS closing Full status of the XP Snapshot pool)
29: PVOL_INCSTG (inconsistent status in group) Not returned
32: SVOL_COPY or SVOL_RCPY
33: SVOL_PAIR
34: SVOL_PSUS
35: SVOL_PSUE
36: SVOL_PDUB (XP Continuous Access Software and LUSE volumes only)
37: SVOL_PFUL (PAIR closing Full status of the XP Snapshot pool)
38: SVOL_PFUS (PSUS closing Full status of the XP Snapshot pool)
39: SVOL_INCSTG (inconsistent status in group) Not returned
You can set the threshold for the specified pool via the Web console. The default value is 80% of the pool
capacity.
PFUS is set when the XP Snapshot pool exceeds the threshold while in the PSUS state.
PFUL is set when the XP Snapshot pool exceeds the threshold while in the PAIR state.
Example:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR ]
Abnormal termination:
Other than 0 to 127 (for the error cause and details, see the execution logs):
236: EX_ENQVOL
237: EX_CMDIOE
235: EX_EVOLCE (when the -c argument is specified)
242: EX_ENORMT (when the -c argument is specified)
216: EX_EXTCTG
214: EX_ENQCTG
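As a sketch of how high availability control scripts might branch on the return values listed above, the snippet below maps a pairvolchk -ss exit status to an action with a shell case statement. The status value is hardcoded here for illustration; in a real script it would come from running `pairvolchk -g <group> -ss -nomsg` and reading `$?` (the group name and the chosen actions are assumptions, not prescribed by this guide).

```shell
# In a real HA script the status would be obtained with:
#   pairvolchk -g oradb -ss -nomsg
#   RC=$?
# Here RC is hardcoded to 23 (PVOL_PAIR) purely for illustration.
RC=23

case $RC in
  22|23) echo "P-VOL active (COPY/PAIR): pair is healthy" ;;
  24|25) echo "P-VOL suspended (PSUS/PSUE): investigate"  ;;
  32|33) echo "S-VOL side (COPY/PAIR): takeover candidate" ;;
  11)    echo "SMPL: volume is not paired"                ;;
  *)     echo "Other state or error code: $RC"            ;;
esac
```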
When a volume group contains volumes in different states, one state takes precedence and is reported
for the group as shown in the following table.
Table 34 pairvolchk volume state precedence

Option  Group status reported (highest precedence first)
-s      COPY* > PSUE > PDUB > PFUS > PSUS > PFUL > PAIR
-ss     COPY* > PSUE > PDUB > PFUL > PAIR > PFUS > PSUS
Explanation of terms
1: Status is TRUE.
0: Status is FALSE.
x: Status is TRUE or FALSE (N/A).
COPY*: Status is either COPY or RCPY.
PFUL: Because the PFUL state refers to the High Water Mark of the side file in the PAIR state, the PFUL
state is displayed as PAIR by all commands except pairvolchk and the -fc option of the
pairdisplay command.
PFUS: Because the PFUS state refers to a Suspend state with the side file full, the PFUS state is
displayed as PSUS by all commands except pairvolchk and the -fc option of the pairdisplay
command.
SVOL_PSUS: Displayed as SSUS by the pairdisplay command.
Error codes
The following table lists specific error codes for the pairvolchk command.
Table 35 pairvolchk error codes

Category                       Error code   Value
Volume status (unrecoverable)  EX_ENQVOL    236
Volume status (unrecoverable)  EX_EVOLCE    235
Example
CA Async:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = ASYNC CTGID = 2 MINAP = 2 ]
CA Sync:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = DATA MINAP = 2 LDEV = BLOCKED]
MINAP shows the minimum number of active paths to the specified group on the P-VOL. If the disk array firmware
does not support tracking the number of active paths, MINAP is not displayed.
LDEV = BLOCKED indicates failure to link to an E-LUN by XP Continuous Access Software.
BC:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR ]
BC with CT Group:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR CTGID = 1 ]
raidar
Report LDEV activity
Description
The raidar command reports the I/O activity of a port, target, or LUN over a specified time interval.
You can terminate it early with CNTL-C. You can use this command regardless of the instance
configuration definitions.
For XP Continuous Access Software, the I/O activity reported for an S-VOL of an active pair (a pair
in the COPY or PAIR state) reflects the total activity of the volume: the internal I/O used to maintain
the pair plus user I/O. For XP Business Copy Software, only host-based I/O activity is reported
on the P-VOL.
If the volume state changes from S-VOL (COPY or PAIR) to SMPL during the monitoring period, the
activity number may be based on both internal and host I/Os.
Syntax
raidar -h
raidar { -h | -I[H/CA][M/BC][instance#] | -p port targ lun [mun] | -pd
raw_device | -q | -s [interval] [count] | -z | -zx }
Arguments
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-p
Specifies a device location on the disk array for activity monitoring. You can use this argument more
than once to monitor more than one device. At most 16 devices can be monitored at once.
-p port
Specifies the name of a port to be reported by selecting it from CL1-A to CL1-R (excluding CL1-I
and CL1-O), or CL2-A to CL2-R (excluding CL2-I and CL2-O).
For the XP1024 disk array, the expanded ports CL3-A up to CL3-R, or CL4-A up to CL4-R can
also be selected.
For the XP12000 and XP24000 disk arrays, the expanded ports CL3-A up to CL3-R, or CLG-A
up to CLG-R can be selected.
Port specification is not case sensitive (CL1-A = cl1-a = CL1-a = cl1-A).
-p lun
Specifies a LUN of a specified SCSI/Fibre Channel target.
-p targ
Specifies a SCSI/Fibre Channel target ID of a specified port.
-p mun
(XP Business Copy Software/XP Snapshot only) Specifies the duplicated mirroring descriptor
(MU#) for the identical LU under XP Business Copy Software/XP Snapshot in a range of 0 to
2/63.
-pd raw_device
(HP-UX, Linux, Solaris, Windows NT/2000/2003, AIX, and MPE/iX only) Allows the
designation of an LDEV via the specified raw_device file.
-q
Terminates interactive mode and exits this command.
-s [interval] [count] or -sm [interval] [count]
Example
Related information
XP RAID Manager Fibre Channel addressing, page 267.
raidqry
Confirm disk array connection to host
Description
The raidqry command displays the connected host and disk array configuration.
164
Syntax
raidqry -h
raidqry { -f | -g | -h | -I[H/CA][M/BC][instance#] | -l | -q | -r group
| -z | -zx }
Arguments
-f
Displays the floatable IP address for the host name (ip_address) described
in a configuration definition file.
-g
Displays a list of group names (dev_group) from the local instance configuration file and
provides the RAID type, interface version, and maximum number of MUs.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l
Displays the configuration of the local host connected to the disk array.
-q
Terminates interactive mode and exits this command.
-r group
Displays the configuration of the remote host and the disk array connected with the designated
group.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Examples
Example 1:
# raidqry -l
No Group  Hostname  HORCM_ver    Uid  Serial#  Micro_ver    Cache(MB)
 1 -----  HOSTA     01 20 03/02    0    30053  52-35-02/02        256
 1 -----  HOSTA     01 20 03/02    1    30054  52-35-02/02        256
Example 2:
# raidqry -r oradb
No Group  Hostname  HORCM_ver    Uid  Serial#  Micro_ver    Cache(MB)
 1 oradb  HOSTA     01 20 03/02    0    30053  52-35-02/02        256
 2 oradb  HOSTB     01 20 03/02    0    30053  52-35-02/02        256
 1 oradb  HOSTA     01 20 03/02    1    30054  52-35-02/02        256
 2 oradb  HOSTB     01 20 03/02    1    30054  52-35-02/02        256
Example 3:
# raidqry -l -f
No Group  Floatable Host  HORCM_ver    Uid  Serial#  Micro_ver    Cache(MB)
 1 -----  FH001           01 20 03/02    0    30053  52-35-02/02        256
Output fields:
No: The group names (by number) in the order in which they are defined in the configuration file.
Group: When using the -r option, the group name (dev_group) described in the configuration
definition file.
Floatable Host: When using the -f option, the first 30 characters of the host name (ip_address)
described in the configuration definition file. The -f option interprets the host name as utilizing
a floatable IP for the host.
HORCM_ver: When the -l option is specified, the XP RAID Manager version on the
local host. When the -r option is specified, this item shows the XP RAID Manager
version on the remote host for the specified group.
Uid: The unit ID of the disk array connected to the local host when the -l option is specified. If
the -r option is specified, the information is for the disk array connected to the remote host.
Serial#: The production serial number of the disk array connected to the local host when the -l
option is specified. If the -r option is specified, the information is for the disk array connected to
the remote host.
Micro_ver: The microcode version of the disk array connected to the local host when the -l option
is specified. If the -r option is specified, the information is for the disk array connected to the
remote host.
Cache(MB): The logical cache capacity (in MB) in the disk array. When the -l option is specified,
the cache capacity is for the local disk array. When the -r option is specified, the cache capacity
shown is for the remote disk array.
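As a small scripting sketch: because the HORCM_ver value itself contains spaces, simple whitespace field counting must account for them when parsing this output. The snippet below pulls Serial# from a simulated two-line raidqry -l output (the sample values and the field position are assumptions based on the example above; a real script would pipe `raidqry -l` instead).

```shell
# Simulated raidqry -l output; a real script would run the command itself.
OUT='No Group  Hostname  HORCM_ver    Uid  Serial#  Micro_ver    Cache(MB)
 1 -----  HOSTA     01 20 03/02    0    30053  52-35-02/02        256'

# HORCM_ver ("01 20 03/02") spans three whitespace-separated fields,
# so Serial# lands in the 8th field of each data row.
printf '%s\n' "$OUT" | awk 'NR>1 {print $8}'
```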
# raidqry -g
GNo Group  RAID_type  IV/H  IV/M  MUN/H  MUN/M
  1 ora    XP_RAID      12     9      4     64
  2 orb    HTC_RAID     12     9      4     64
  3 orc    HTC_DF        8     6      1      1
Output fields:
GNo: Group (dev_group) number.
Group: Group name.
RAID_type: Type of RAID configured.
IV/H: The XP Continuous Access Software interface version used to create the CT group. Provided
for troubleshooting.
IV/M: The XP Business Copy Software interface version used to create the CT group. Provided for
troubleshooting.
MUN/H: The maximum number of XP Continuous Access Journal Software MUs in the CT group.
MUN/M: The maximum number of XP Business Copy Software MUs in the CT group.
raidscan
Display port status
Description
The raidscan command displays, for a given port, the target ID, LDEV (mapped for LUN, and the
status of the LDEV), regardless of the configuration definition file.
Syntax
raidscan -h
raidscan { -CLI | -f[xfgde] | -find | -find conf [MU#] | -find inst |
-find [op] [MU#] | -find sync [MU#] [-g name] | -find verify | -h |
-I[H/CA][M/BC][instance#] | -l lun | -m [MU#] | -p port[hgrp]
| -s Seq# | -t targ | -pd raw_device | -pi strings | -q | -z | -zx }
Arguments
-CLI
Specifies structured output for Command Line Interface parsing. The column data is aligned in
each row. The delimiters between columns are a space or a hyphen (-). If you specify the -CLI
option, raidscan does not display the cascading mirrors (MU1-4).
-f[x]
Displays the LDEV number in hexadecimal.
-f[f]
Displays the volume type in the output (for example, OPEN-3/8/9/K).
If this option is specified, the -f[g] and -f[d] options are invalid.
-f[g]
Displays the group name in the output. This option is used to search for a group in the
configuration definition file (local instance) and display a group_name when the scanned
LDEV is contained in the group.
If this option is specified, the -f[f] and -f[d] options are invalid.
-f[d]
Displays the Device_File that was registered to the XP RAID Manager group in the output,
based on the LDEV (as defined in the local instance configuration definition file).
If this option is specified, the -f[f] and -f[g] options are invalid.
-f[e]
Displays the serial number and LDEV number of the external LUNs mapped to the LDEV.
If no external LUN is mapped to the LDEV on the specified port, this option does
nothing. If this argument is specified, the -f[f], -f[g], and -f[d] arguments are not allowed.
Example:
# raidscan -p cl1-a-0 -fe -CLI
PORT#   /ALPA/C TID# LU#  Seq#  Num LDEV# P/S  Status Fence E-Seq# E-LDEV#
CL1-A-0  ef   0    0  48 62468    2   256 SMPL -      -      30053      17
CL1-A-0  ef   0    0  49 62468    2   272 SMPL -      -      30053      23
CL1-A-0  ef   0    0  50 62468    1   288 SMPL -      -      30053      28
Output fields:
E-Seq#: The production (serial) number of the external LUN.
E-LDEV#: The LDEV number of the external LUN.
-find
Used to display the port, target ID, and LUN (in XP RAID Manager notation) that are mapped to
an LDEV using a special (raw_device) file provided via STDIN.
If the target and LUN are unknown, use this option to discover the port, target ID, and LUN
associated with a host device file so that the information can be used in a horcm.conf file.
Use this option with the -fx option to display the LDEV numbers in hexadecimal format.
-find conf [MU#] [-g name]
Used to display the port, target ID, and LUN for the horcm.conf file by using a special raw
device file provided via STDIN.
If the target ID and LUN are unknown for the target device file, you must start XP RAID Manager
without a description for HORCM_DEV and HORCM_INST.
This option allows you to use the -fx option to display the LDEV numbers in hexadecimal
format.
-g name
Specifies the name to be used for dev_group in the horcm.conf file. If this option is not
specified, VG is applied as the default group name.
-find inst
This option runs automatically at /etc/horcm_startup time. It is used to logically connect
and register a device file name to all pertinent mirror descriptors (MU#s) in the LDEV map
table. It allows XP RAID Manager to note permitted volumes.
Normally, you do not need to run this option. XP RAID Manager gets the serial and
LDEV numbers from the disk array, compares the inquiry result to the
contents of the horcm.conf file, and the result is displayed and stored within the instance.
To minimize the time required, this option terminates when the registration is finished based
on the horcm.conf file.
Use this option with the -fx option to display the LDEV numbers in hexadecimal format.
If the -pi strings option is also specified, this option does not receive its strings via
STDIN. Instead, the strings specified in the -pi option are used as input.
-find [op] [MU#]
Used to run the specified [op] using a raw device file provided by STDIN. See the following
entries.
-find sync [MU#] [-g name]
Flushes the system buffer of the logical drive corresponding to a -g name (dev_group) in the
configuration file. The dev_group name is provided via STDIN through the key words ($Volume,
$LETALL, $Physical).
NOTE:
Because Windows NT does not support LDM volumes, you must specify $LETALL instead of
$Volume.
The -g name option is used to specify the name to be used for dev_group in the horcm.conf
file. If this option is not specified, the system buffers associated with all groups for the local
instance are flushed.
If the logical drive corresponding to a -g name is not open for an application, the logical
drive system buffer is flushed and the drive is unmounted.
If the logical drive corresponding to a -g name is open for an application, the logical drive
system buffer is only flushed.
This option allows the system buffer to be flushed before a pairsplit without unmounting the
P-VOL (open state).
-find verify [MU#]
Used to verify the relationship between a group in the configuration definition file and a
Device_File registered to the LDEV map tables (based on the raw device file name provided
via STDIN).
This option also allows you to use the -fx option to display the LDEV numbers in hexadecimal
format. You can also use it in conjunction with the -fd option.
This option is affected by the command execution environment (HORCC_MRCF).
If a device name is different in the DEVICE_FILE and Device_File fields, an LDEV is being
referenced by multiple device files. See the Examples section for an example of such a case.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l lun
Specifies a LUN for a specified SCSI/Fibre Channel target. Specifying a LUN without
designating the target ID is not allowed.
If this option is not specified, the command applies to all LUNs.
If this option is specified, the -t option must also be used.
-m MU#
Displays the cascading mirror descriptor. If you specify the -CLI option, raidscan does not
display the cascading mirrors (MU1-4).
-m all displays all cascading mirror descriptors.
-p port[hgrp]
Specifies the name of a port to be scanned by selecting it from CL1-A to CL1-R (excluding CL1-I
and CL1-O), or CL2-A to CL2-R (excluding CL2-I and CL2-O).
For the XP1024 disk array, the expanded ports CL3-A up to CL3-R, or CL4-A up to CL4-R can
also be selected.
For the XP12000 and XP24000 disk arrays, the expanded ports CL3-A up to CL3-R, or CLG-A
up to CLG-R can also be selected.
Port specifications are not case sensitive (CL1-A = cl1-a = CL1-a = cl1-A).
This option must always be specified.
The [hgrp] option displays only the LDEVs mapped to a host group on an XP1024, XP12000,
or XP24000 port.
-pd raw_device
(UNIX only) Specifies a raw_device name.
(Windows NT/2000/2003 only) Specifies a physical device in this format:
\\.\PhysicalDriveN (where N is the drive number)
This option finds the Seq# and port name on the disk array, searches for the unit ID corresponding
to the Seq#, and scans the port of the disk array that corresponds with that unit ID.
If this option is specified, the -s Seq# option is invalid.
-pi strings
Used to explicitly specify a character string rather than receiving it from STDIN.
If this option is specified, the -find option uses the strings specified in the -pi option as
input instead of STDIN. The specified character string is limited to 255 characters.
-q
Terminates interactive mode and exits this command.
-s Seq#
Used to specify the serial number of the disk array on multiple disk array connections when
you cannot specify the unit ID that is contained in the -p port option.
This option searches for the unit ID corresponding to the Seq# and scans the port that is specified
by the -p port option.
If this option is specified, the unit ID that is contained in -p port is ignored.
Example:
If unit ID #2 corresponds to seq# 30053 in a multiple XP RAID Manager configuration,
you can specify the disk array in the following two ways:
raidscan -p CL1-E2
(The unit ID contained in -p port is #2.)
raidscan -p CL1-E -s 30053
-t targ
Specifies a SCSI/Fibre Channel target ID. If this option is not specified, the command applies to all
targets.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Examples
raidscan using the -CLI option formats the display so that all the columns are aligned.
# raidscan -p CL1-C -CLI
Port#  TargetID#  Lun#   Seq#
CL1-C          1     0  30053  1
CL1-C          2     2  30053  1
CL1-C          2     3  30053  1
If you specify the -CLI option, raidscan does not display the cascading mirrors (MU1-4).
A raidscan on a Fibre Channel port displays ALPA data for the port instead of the target ID number.
# raidscan -p CL2-P
PORT# /ALPA/C,TID#,LU#.Num(LDEV#..)..P/S, Status,LDEV#,P-Seq#,P-LDEV#
CL2-P / ef/0, 0, 0-1.0(58).........P-VOL PSUS    58, 35641    61
CL2-P / ef/0, 0, 1-1.0(59).........P-VOL PSUS    59, 35641    62
CL2-P / ef/0, 0, 2...0(61).........S-VOL SSUS    61, ----     58
CL2-P / ef/0, 0, 3...0(62).........S-VOL SSUS    62, ----     59

TARG  LUN  SERIAL  LDEV  PRODUCT_ID
   0    2   31168   118  OPEN-3-CVS
   0    3   31168   121  OPEN-3-CVS
   -    -   31170   121  OPEN-3-CVS
171
SER = 61456 LDEV = 259 [ OPEN-3-CM ]
When an LDEV is shared among multiple device files and the LDEV has already been displayed for another target
device, the following error is reported:
# ERROR [LDEV LINK] /dev/rdsk/c24t0d3 SER = 61456 LDEV = ...
The following example flushes the system buffer associated with the ORB group through $Volume.
This example uses the echo $Volume | raidscan -find sync -g ORB or raidscan -pi
$Volume -find sync -g ORB options.
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
The following example flushes the system buffers associated with all of the groups for the local instance.
This example uses the echo $Volume | raidscan -find sync or raidscan -pi $Volume
-find sync options.
[SYNC] : ORA ORA_000[-] -> \Vol44\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5d}
[SYNC] : ORA ORA_000[-] -> \Vol45\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5e}
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
TARG  LUN  M  SERIAL  LDEV
   3    0  0   35013    17
   3    0  1   35013    17
   3    0      35013    17

TARG  LUN  M  SERIAL  LDEV
   3    0  0   35013    17
   3    1  0   35013    18
   -    -  0   35013    19

M  SERIAL  LDEV
1   35013    17
1   35013    18
1   35013    19
# raidscan -p cl1-r -f
Port#, TargetID#, Lun#  Num(LDEV#...)  P/S
CL1-R,        15,    7  5(100,101...)  P-VOL
CL1-R,        15,    6  5(200,201...)  SMPL

# raidscan -pd /dev/rdsk/c0t15/d7 -fg
Port#, TargetID#, Lun#  Num(LDEV#...)  P/S
CL1-R,        15,    7  5(100,101...)  P-VOL
CL1-R,        15,    6  5(200,201...)  SMPL

# raidscan -p cl1-r -f
PORT#/ALPA/C,TID#,LU#..Num(LDEV#...)  P/S
CL1-R/ ce/15,  15,  7..5(100,101...)  P-VOL
CL1-R/ ce/15,  15,  6..5(200,201...)  SMPL

Num(LDEV#...)  P/S    Status  LDEV#  P-Seq#  P-LDEV#
5(100,101...)  P-VOL  PAIR      100    5678      300
5(100,101...)  P-VOL  PAIR      100    5678      301
5(100,101...)  P-VOL  PAIR      100    5678      302
5(200,201...)  SMPL   ----     ----    ----     ----
5(200,201...)  SMPL   ----     ----    ----     ----
5(200,201...)  SMPL   ----     ----    ----     ----
5(400,101...)  S-VOL  PAIR      400    5678      100
5(400,101...)  SMPL   ----     ----    ----     ----
5(400,101...)  SMPL   ----     ----    ----     ----
Because Windows NT does not support LDM volumes, you must specify $LETALL instead of $Volume:
raidscan -pi $LETALL -find sync -g ORA
[SYNC] : ORA ORA_000[-] -> F:\Dsk1\p1 : F:
Output fields:
PairVol: The paired volume name (dev_name) within the group defined in the configuration
definition file.
M: The MU number defined in the configuration definition file. For XP Continuous Access Software,
- is displayed. For XP Business Copy Software, 0, 1, or 2 is displayed.
Device_File: The Device_File that is registered to the LDEV map tables within XP RAID Manager.
UID: The unit ID for multiple disk array configurations. If UID is displayed as -, a command device
(HORCM_CMD) has not been found.
S/F: Shows whether a port is SCSI or Fibre Channel.
Related information
For STDIN file specification information, see XP RAID Manager STDIN file formats on page 271.
You can use any general command (not just raidscan). The -x option overrides the normal operation
of the command.
It is not necessary to have an instance running to use these command options when only the
sub-command is used.
If you run a Windows NT/2000/2003 command from a UNIX command line, a syntax error is
returned.
drivescan
Display disk drive and connection information (Windows NT/2000/2003 only)
Description
The drivescan command displays the relationship between hard disk numbers on Windows
NT/2000/2003 and the actual physical drives.
Syntax
RM_command -x drivescan string x,y
Arguments
RM_command
Any general XP RAID Manager command.
string
An alphabetic character string; provided for readability.
x,y
Specifies a range of disk drive numbers.
Output fields:
Example
This example shows drivescan executed from the raidscan command. It displays the actual
physical drive connections for disk drive numbers 0 to 10.
raidscan -x drivescan harddisk0,10
Harddisk 0..Port[ 1] PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] [DK328H-43WS]
Harddisk 1..Port[ 2] PhId[ 4] TId[ 29] Lun[ 0] [HITACHI] [OPEN-3]
Port[CL1-J] Ser#[ 30053] LDEV#[ 9(0x009)]
HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
Harddisk 2..Port[ 2] PhId[ 4] TId[ 29] Lun[ 1] [HITACHI] [OPEN-3]
Port[CL1-J] Ser#[ 30053] LDEV#[10(0x00A)]
Related information
XP RAID Manager Fibre Channel addressing, page 267.
env
Display environment variable (Windows NT/2000/2003 only)
Description
The env command displays an environment variable within a command.
Syntax
RM_command -x env
Argument
RM_command
Any general XP RAID Manager command.
Example
This example displays the current value of the HORCC_MRCF environment variable.
raidscan -x env HORCC_MRCF
1
findcmddev
Search for a command device (Windows NT/2000/2003 only)
Description
The findcmddev command searches to see if a command device exists within the range of the
specified disk drive numbers. When the command device exists, the command displays the command
device in the format described in the XP RAID Manager configuration definition file.
This command searches for a command device as a physical drive, a logical drive, and a
Volume{GUID} for Windows 2000/2003.
If a command device is specified as a logical drive in addition to a physical drive, a drive letter is
assigned to the command device. This drive letter should be deleted from the list of those available
to general users.
Create the Volume{GUID} by creating a partition with the disk manager, without using the file
system format option. The Volume{GUID} keeps the same command device even when the physical
drive numbers change on every reboot in a SAN environment.
Syntax
RM_command -x findcmddev string x,y
Arguments
RM_command
Any general XP RAID Manager command.
string
An alphabetic character string; provided for readability.
x,y
Specifies a range of disk drive numbers.
Restriction
The findcmddev command is used when a command device name to be described in the configuration
definition file is unknown. XP RAID Manager must not be running when this command is used.
Example
This example runs findcmddev, searching device numbers 0 to 20.
raidscan -x findcmddev hdisk0,20
cmddev of Ser#  62496 = \\.\PhysicalDrive0
cmddev of Ser#  62496 = \\.\E:
cmddev of Ser#  62496 = \\.\Volume{b9b31c79-240a-11d5-a37f-00c00d003b1e}
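The three lines report the same command device seen three ways (physical drive, drive letter, and Volume{GUID}). A sketch of collecting such lines from captured output, grouped by array serial number (the helper is illustrative, not part of the product):

```python
# Hypothetical helper: group findcmddev report lines by array serial number.
def parse_cmddev_output(text):
    devices = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("cmddev of Ser#"):
            continue
        # Format: "cmddev of Ser#  62496 = \\.\PhysicalDrive0"
        serial_part, _, name = line.partition("=")
        serial = int(serial_part.split("Ser#")[1])
        devices.setdefault(serial, []).append(name.strip())
    return devices

sample = (
    "cmddev of Ser#  62496 = \\\\.\\PhysicalDrive0\n"
    "cmddev of Ser#  62496 = \\\\.\\E:\n"
    "cmddev of Ser#  62496 = \\\\.\\Volume{b9b31c79-240a-11d5-a37f-00c00d003b1e}\n"
)
print(parse_cmddev_output(sample))
```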
mount
Mount and display a device (Windows NT/2000/2003 only)
Description
The mount command allocates the specified logical drive letter to the specified partition on the disk
drive (hard disk). If no arguments are specified, this option displays a list of mounted devices.
Syntax
RM_command -x mount
RM_command -x mount D:[\directory] volume#
Windows NT: RM_command -x mount D: hdisk# [partition#]...
Windows 2000/2003: RM_command -x mount D: volume#
Arguments
RM_command
Any general XP RAID Manager command.
Restrictions
The partition on the specified disk drive (hard disk) must be recognized on Windows NT/2000/2003.
XP RAID Manager supports the mount command specifying the device object name (such as
\Device\HarddiskvolumeX). However, Windows 2003 changes the device number in the
device object name when it recovers from a failure of the physical drive, so a mount command
specifying the device object name may fail after such a change.
To overcome this, specify a Volume{GUID} instead of the device object name. If a Volume{GUID} is
specified, it is converted to a device object name during execution. You can discover the
Volume{GUID} by using the inqraid $Vol -fv command.
Example:
C:\HORCM\etc>inqraid -CLI $Vol -fv
DEVICE_FILE                                            PORT   SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}\Vol3\Dsk0 CL2-D  62496  256  -   -      -    -       OPEN-3-CVS-CM
Issuing a mount using DefineDosDevice() allows you to force a dismount of the mounted volume by
logging off Windows 2000/2003.
Example:
C:\HORCM\etc>raidscan -x mount E: Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
E: <+> HarddiskVolume3
Issuing a mount using a Directory mount prevents a forced dismount due to logging off Windows
2000/2003.
Example:
C:\HORCM\etc>raidscan -x mount E:\ Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
E:\ <+> HarddiskVolume3
178
Examples
Windows NT
This Windows NT example runs mount from the pairsplit command option, mounting the F:\
drive to partition 1 on disk drive 2, and mounting the G:\ drive to partition 1 on disk drive 1. Then
a list of mounted devices is displayed.
pairsplit -x mount F: hdisk2 p1 -x mount G: hdisk1 p1
pairsplit -x mount
Drive  FS_name  VOL_name  Device     Targ  Lun
C:     FAT      Null      Harddisk0  0     0
F:     FAT      Null      Harddisk2  5     1
G:     NTFS     Null      Harddisk1  5     0
Z:     CDFS     Null      CdRom0
Windows 2000/2003
This Windows 2000/2003 example shows the specification of a directory mount point on the logical
drive.
pairsplit -x mount D:\hd1 \Vol8
D:\hd1 <+> HarddiskVolume8
pairsplit -x mount D:\hd2 \Vol9
D:\hd2 <+> HarddiskVolume9
This Windows 2000/2003 example runs the mount command from a sub-command option of
pairsplit. It mounts the F:\ drive to the harddiskvolume2, and then displays the mounted devices.
If you do not specify a partition number, the drive is mounted as HarddiskVolume#.
pairsplit -x mount F: hdisk2
pairsplit -x mount
Drive   FS_name  VOL_name  Device                         Partition ... Port PathID Targ Lun
C:      NTFS     Null      Harddiskvolume1                ... Harddisk0
F:      NTFS     Null      Harddiskvolume2                ... Harddisk1
D:      NTFS     Null      Harddiskvolume3                ... Harddisk2
D:\hd1  NTFS     Null      Harddiskvolume4                ... Harddisk3
D:\hd2  NTFS     Null      Harddiskvolume5                ... Harddisk4
G:      NTFS     Null      HarddiskDmVolumes\...\Volume1  ... Harddisk5[3]
Output fields:
portscan
Display devices on designated ports (Windows NT/2000/2003 only)
Description
The portscan command displays the physical devices that are connected to the designated port.
Syntax
RM_command -x portscan port x,y
Arguments
RM_command
Any general XP RAID Manager command.
port x,y
Specifies a range of port numbers.
Example
This example runs portscan from the raidscan command option, and displays the physical device
connection from port number 0 to 20.
raidscan -x portscan port0,20
PORT[ 0] IID [ 7] SCSI Devices
PhId[ 0] TId[ 3] Lun[ 0] [MATSHIT] ...Claimed
PhId[ 0] TId[ 4] Lun[ 0] [HP     ] ...Claimed
PORT[ 1] IID [ 7] SCSI Devices
PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] ...Claimed
PORT[ 2] IID [ 7] SCSI Devices
PhId[ 0] TId[ 5] Lun[ 0] [HITACHI] [OPEN-3  ] ...Claimed
PhId[ 0] TId[ 5] Lun[ 1] [HITACHI] [OPEN-3  ] ...Claimed
PhId[ 0] TId[ 5] Lun[ 2] [HITACHI] [OPEN-3  ] ...Claimed
PhId[ 0] TId[ 6] Lun[ 0] [HITACHI] [3390-3A ] ...Claimed
Output fields:
setenv
Set environment variable (Windows NT/2000/2003 only)
Description
The setenv command sets an environment variable within a command.
Syntax
RM_command -x setenv variable value
Arguments
RM_command
Any general XP RAID Manager command.
Restrictions
Set the environment variable prior to starting XP RAID Manager, unless you are using interactive mode.
Changing an environment variable after an execution error of a command is invalid.
Example
This example changes the execution environment from HORC to HOMRCF by using raidscan to
change the HORCC_MRCF environment variable.
raidscan[HORC]: -x setenv HORCC_MRCF 1
raidscan[MRCF]:
raidscan[MRCF]: -x usetenv HORCC_MRCF
raidscan[HORC]:
Related information
usetenv, page 185
sleep
Suspend execution (Windows NT/2000/2003 only)
Description
The sleep command suspends execution for a specified period of time.
Syntax
RM_command -x sleep time
Arguments
RM_command
Any general XP RAID Manager command.
time
Specifies the sleep time in seconds.
sync
Write data to drives (Windows NT/2000/2003 only)
Description
The sync command writes unwritten data remaining on the Windows NT/2000/2003 system to the
logical and physical drives.
If the logical drives designated as the objects of the sync command are not opened to applications,
sync flushes the system buffer to a drive and performs a dismount.
If the logical drives designated as the objects of the sync command are already opened to applications,
sync only flushes the system buffer to a drive.
The sync command accepts a Volume{GUID} and the device object name. If you specify a
Volume{GUID}, XP RAID Manager converts the Volume{GUID} to a device object name on
execution.
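The flush-then-dismount behavior described above can be pictured with a platform-neutral sketch. This uses POSIX-style calls as a stand-in for the Windows services sync actually uses; it only illustrates the ordering (flush user-space buffers, then force the operating system to write to the device) that makes a subsequent pair operation safe:

```python
import os
import tempfile

def flush_to_disk(path, data):
    # Write, flush the runtime buffer, then fsync so the data reaches the
    # device before any split/dismount step runs. Conceptual analogue only;
    # XP RAID Manager's sync operates on whole logical drives.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # flush the runtime's buffer
        os.fsync(f.fileno())  # force the OS to write the data to the device

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "datafile")
    flush_to_disk(p, b"payload")
    print(os.path.getsize(p))  # 7
```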
Syntax
RM_command -x sync A: B: C: ...
RM_command -x sync all
RM_command -x sync D:[\directory|\directory pattern]...
RM_command -x sync drive#...
Windows 2000/2003 only: RM_command -x sync volume#...
Windows 2003 SP1 only: Use -x syncd instead of -x sync
Arguments
RM_command
Any general XP RAID Manager command.
A:B:C: [\directory|\directory pattern] ...
Data is flushed to the specified logical (and the corresponding physical) drives.
If the specified logical drive has directory mount volumes, SYNC is executed for all of the
volumes on the logical drive.
(Windows 2000/2003 only) [\directory|\directory pattern] specifies the directory
mount point on the logical drive.
If directory is specified, SYNC is executed for the specified directory mounted volume only.
If a directory pattern is specified, SYNC is executed for the directory mounted volumes
identified by directory pattern.
all
Data is flushed to all logical drives (and the physical drives corresponding to the logical drives
assuming that they are hard disks), excluding the logical drive used by XP RAID Manager and
the logical drive supporting the current Windows directory.
D
Data is flushed to the specified logical (and the corresponding physical) drive.
Volume#...
(Windows 2000/2003 only) The LDM Volumes to be flushed. Volume#... must be specified
in LDM format: '\Vol# or \Dms# or \Dmt# or \Dmr# '
-x syncd
(Windows 2003 SP1 only) Use -x syncd instead of -x sync to avoid a problem where
NTFS on the P-VOL splits inconsistently due to an I/O delay in dismounting.
Examples
C:\HORCM\etc>raidscan -x sync Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
[SYNC] Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
The following example runs SYNC for all of the volumes on a logical drive.
pairsplit -x sync D:
[SYNC] D: HarddiskVolume2
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
The following example runs SYNC for the specified directory mounted volume.
pairsplit -x sync D:\hd1
[SYNC] D:\hd1 HarddiskVolume8
The following example runs SYNC for the directory mounted volumes identified by the directory pattern
D:\h.
pairsplit -x sync D:\h
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
The following example runs SYNC for all of the volumes on the logical drives with directory mount
volumes.
pairsplit -x sync all
[SYNC] C: HarddiskVolume1
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
[SYNC] G: HarddiskVolume10
The following example runs sync from a sub-command option of pairsplit. After flushing remaining
data to the logical drives C: and D:, Read/Write access to the secondary volume is enabled.
pairsplit -x sync C: D: -g oradb -rw
The following example runs sync from a sub-command option of pairsplit. After flushing remaining
data to harddisk2 and harddisk3, Read/Write access to the secondary volume is enabled in simplex
mode.
pairsplit -x sync hdisk2 hdisk3 -g oradb -S
The following example flushes the system buffer before the pairsplit without unmounting the P-VOL
(open state), and provides a warning.
pairsplit -x sync C:
WARNING: Only flushed to [\\.\C:] drive due to be opening.
[SYNC] C: HarddiskVolume3
umount
Unmount a device (Windows NT/2000/2003 only)
Description
The umount command unmounts a logical drive and deletes the drive letter. Before deleting the drive
letter, the command automatically runs the sync command for the specified logical drive (flushes
unwritten buffer data to the disk).
Syntax
RM_command -x umount D: [time]
Windows 2000/2003: RM_command -x umount D:[\directory] [time]
Windows 2003 SP1 only: Use -x umountd
Arguments
RM_command
Any general XP RAID Manager command.
D
Specifies the logical drive letter to unmount.
directory
(Windows 2000/2003 only) The directory mount point on the logical drive.
time
(Windows 2000/2003 only) Used to specify a delay. This avoids a problem where the
Windows 2003 DeviceIoControl function (FSCTL_LOCK_VOLUME) holds an I/O for dismounting
as a pending I/O.
-x umountd
(Windows 2003 SP1 only) Use -x umountd instead of -x umount to avoid a problem
where NTFS on the P-VOL splits inconsistently due to an I/O delay in dismounting. This also
avoids the S-VOL in the SVOL_PAIR (writing disabled) state being written by executing a rescan
or LDM (Windows disk management), which is logged as a Windows event (for example, ID 51, 57).
Restriction
Before issuing the umount command, all drive activity must be stopped, including system activity and
user applications. If activity is not stopped, the unmount operation is not completed and a device
busy error is reported.
Examples
Windows 2000/2003:
This Windows 2000/2003 example shows the specification of a directory mount point on the logical
drive.
pairsplit -x umount D:\hd1
D:\hd1 <-> HarddiskVolume8
pairsplit -x umount D:\hd2
D:\hd2 <-> HarddiskVolume9
This example uses the time option and sets the delay to 45 seconds.
pairsplit -x umount D: 45
D: <-> HarddiskVolume8
This example runs umount from the pairsplit command option. After unmounting the F:\ and
G:\ drives, Read/Write access to the secondary volume is enabled, and the mounted devices are
displayed.
pairsplit -x umount F: -x umount G: -g oradb -rw
pairsplit -x mount
Drive  FS_name  VOL_name  Device     Partition   ... Port PathID Targ Lun
C:     FAT      Null      Harddisk0  Partition1  ... 1    0      0    0
Z:     Unknown  Unknown   CdRom0                 ... Unknown
Output fields:
usetenv
Delete environment variable (Windows NT/2000/2003 only)
Description
The usetenv command deletes an environment variable within a command.
Syntax
RM_command -x usetenv variable
Arguments
RM_command
Any general XP RAID Manager command.
variable
Specifies the environment variable to be deleted.
Restrictions
Changing an environment variable after an execution error of a command is invalid.
Example
This example changes the execution environment from HORC to HOMRCF by using raidscan to
change the HORCC_MRCF environment variable.
raidscan[HORC]: -x setenv HORCC_MRCF 1
raidscan[MRCF]:
raidscan[MRCF]: -x usetenv HORCC_MRCF
raidscan[HORC]:
Related information
setenv, page 180
raidvchkset
Integrity checking command (Database Validator only)
Description
The raidvchkset command sets the parameters for integrity checking on the specified volumes.
It can also be used to turn off all integrity checking, by running the command without specifying
type, once the retention time [rtime] that was originally set (or later extended) has elapsed.
Protection checking is based on a group as defined in the configuration file.
When enabling Database Validator using raidvchkset, if there are redundant paths to the same
LUN (for example, when using HP StorageWorks Auto Path or LVM pv-links), it is not necessary to
enable raidvchkset on each path. Enable Database Validator on only one path, usually the path
specified in the XP RAID Manager horcm.conf configuration file.
Syntax
raidvchkset {-nomsg | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq#
LDEV# [MU#] | -g group | -h | -I[H/CA][M/BC][instance#] | -q | -vg [type]
[rtime] | -vs bsize [SLBA ELBA] | -vt [type] | -z | -zx }
Arguments
-nomsg
Used to suppress messages when this command runs from a user program. Must be specified
at the beginning of the command arguments.
-d pair_vol
Specifies a paired logical volume name from the configuration definition file. The command
runs only for the specified paired logical volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
This option must be specified at the beginning of the command arguments.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the specified group unless the -d pair_vol option is specified.
If the volume is contained in two groups, the command runs on the first volume encountered.
If MU# is not specified, it defaults to 0.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M] [instance#] or -I[CA][BC] [instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-q
Terminates interactive mode and exits this command.
-vg [type][rtime]
Specifies the following guard types to the target volumes for Data Retention Utility.
If [type] is not specified, this option disables all guarding. If no guard type has been specified,
the volume is unguarded (read and write operations from the host and use as an S-VOL are
allowed).
If [type] has been specified previously to set a guard level and the time specified in [rtime]
has not elapsed, the guard characteristics of the target volumes do not change.
If [type] has been specified previously to set a guard level and the time specified in [rtime]
has elapsed, not specifying [type] disables all guarding for the target volumes.
If a volume has a guard attribute set, write access for that volume cannot be restored by the
customer until [rtime] has expired. If a volume has been set to a guarded state by accident,
contact HP support for recovery of the volume. Valid values for type:
inv: Conceals the target volumes from the SCSI Inquiry command by responding with
unpopulated volume.
Sz0: The target volumes reply with SIZE 0 through the SCSI Read Capacity command.
rwd: Disables the target volumes from reading and writing.
wtd: Disables the target volumes from writing. The volumes cannot be used as an S-VOL
or written by a host.
svd: Disables the target volumes so they cannot become an S-VOL. Read and Write
operations from hosts are still allowed.
[rtime]: Specifies the data retention time, in days. If [rtime] is not specified, the default
time defined by the microcode version is used: the default is infinite in microcode versions
21-06-xx and 21-07-xx (these versions ignore this option and always set the retention time
to never expire), and zero in microcode version 21-08-xx.
-vs bsize [SLBA ELBA]
Specifies the data block size of Oracle I/O and a region on a target volume for validation
checking. bsize specifies the data block size of Oracle I/O, in units of 512 bytes. bsize can
be set between 1 (512 bytes) and 128 (64 kilobytes), but the effective size for Oracle is
between 1 (512 bytes) and 64 (32 kilobytes). If the -vs option is also used for redo log
volumes to specify SLBA ELBA, bsize must be set to 2 for HP-UX or 1 for Solaris.
SLBA ELBA specifies a region defined between Start_LBA and End_LBA on a target volume
for checking, in units of 512 bytes. The effective region is from 1 to end-of-LU. SLBA ELBA
can be specified in hexadecimal (by the addition of 0x) or decimal notation. If this option is
not specified, the region for a target volume is set to all blocks (SLBA=0; ELBA=0).
-vt [type]
Specifies the data type of the target volumes as an Oracle database. If type is not specified,
this option disables all checking. Valid values for type:
redo8: Sets the parameter for validation checking as Oracle redo log files (including archive
logs) prior to Oracle9i. This option sets bsize to 1 (512 bytes) for Solaris or 2 (1024
bytes) for HP-UX.
data8: Sets the parameter for validation checking as Oracle data files prior to Oracle9i.
redo9: Sets the parameter for validation checking as Oracle redo log files for Oracle9iR2
or later. This option sets bsize to 1 (512 bytes) for Solaris or 2 (1024 bytes) for HP-UX.
data9: Sets the parameter for validation checking as Oracle data files (including control
files) for Oracle9iR2 or later.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Prevents XP RAID Manager from entering interactive
mode. If the instance terminates or is shut down, your CLI session is terminated.
Returned values
Return values in exit() allow you to check execution results from a user program. Normal termination
returns 0.
Examples
This example sets the volumes for the oralog group as redo log files prior to Oracle9i.
raidvchkset -g oralog -vt redo8
This example sets the volumes for the oradat group as data files, where the Oracle block size is 8
kilobytes.
raidvchkset -g oradat -vt data8 -vs 16
This example sets the volumes for the oradat group as data files, where the Oracle block size is 16
kilobytes.
raidvchkset -g oradat -vt data8 -vs 32
This example disables all volume checking for the oralog group.
raidvchkset -g oralog -vt
This example disables all writing to volumes for the oralog group:
raidvchkset -g oralog -vg wtd
This example disables all writing and sets a retention time of 365 days for the oralog group:
raidvchkset -g oralog -vg wtd 365
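The -vs values in the examples above follow from bsize being counted in 512-byte units: an 8-kilobyte Oracle block is 16 units and a 16-kilobyte block is 32. A sketch of that conversion (the helper function is illustrative only, not part of XP RAID Manager):

```python
UNIT = 512        # bytes per bsize unit
MAX_BSIZE = 128   # 64 kilobytes, the largest value -vs accepts

def oracle_block_to_bsize(block_bytes):
    # Convert an Oracle block size in bytes to the -vs bsize unit count.
    if block_bytes % UNIT:
        raise ValueError("block size must be a multiple of 512 bytes")
    bsize = block_bytes // UNIT
    if not 1 <= bsize <= MAX_BSIZE:
        raise ValueError("bsize must be between 1 and 128")
    return bsize

print(oracle_block_to_bsize(8 * 1024))   # 16, as in -vs 16
print(oracle_block_to_bsize(16 * 1024))  # 32, as in -vs 32
```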
Error codes
This command is rejected with EX_ERPERM by connectivity checking between XP RAID Manager and
the disk array.
The raidvchkset -vg option returns the following error code in addition to the generic errors:
Table 36 raidvchkset error code
Category       Error code  Error message   Value
Volume Status  EX_EPRORT   Unrecoverable   208
This means that the target volume mode cannot be changed, because the retention time prevents it.
Confirm the retention time for the target volume by using raidvchkscan -v gflag.
Flags
The command sets the following four flags for each guarding type:
Table 37 raidvchkscan -v gflag flags set
Type  INQ  RCAP  READ  WRITE
Inv
Sz0
Rwd
Wtd
raidvchkdsp
Integrity checking confirmation command (Database Validator only)
Description
The raidvchkdsp command displays the parameters for protection checking of the specified volumes.
The unit of checking for the protection is based on the configuration file group.
A non-permitted volume is shown without LDEV number information.
Syntax
raidvchkdsp { -c | -d pair_vol | -d[g] raw_device [MU#] | -d[g] seq# LDEV#
[MU#] | -f[xde] | -g group | -h | -I[H/CA][M/BC][instance#] | -q | -v
operation | -z | -zx }
Arguments
-c
Shows if the LDEV-to-LUN/port mapping has changed and is now different from the instance
configuration file HORCM_DEV and HORCM_LDEV information. Used to determine if the
instance should be restarted to discover and use the new mapping information.
NOTE:
If no changes were made to the target volume, this option displays nothing.
An example of a changed LDEV#:
# raidvchkdsp -g VG000 -c
Group  PairVol  Port#    TID  LU  Seq#   LDEV#  LDEV#(conf) -change-> LDEV#
VG000  vg0001   CL4-E-0  0    17  63528  786    785(conf)   -change-> 786
VG000  vg0001   CL4-E-0  0    17  63528  -      785(conf)   -change-> NO LDEV
-d pair_vol
Specifies a paired logical volume name from the configuration definition file. The command
runs only for the specified paired logical volume.
-d[g] raw_device [MU#]
Searches the configuration file (local instance) for a volume that matches the specified raw
device. If a volume is found, the command runs on the paired volume (-d) or group (-dg).
This option is effective without specifying the -g group option.
If the volume is contained in two groups, the command runs on the first volume encountered.
If MU# is not specified, it defaults to 0.
-d[g] seq# LDEV# [MU#]
Searches the instance configuration file (local instance) for a volume that matches the specified
sequence (array serial) number and LDEV. If a volume is found, the command runs on the
paired logical volume (-d) or group (-dg). If the volume is contained in multiple groups, the
command runs on the first volume encountered. The seq# LDEV# values can be specified in
hexadecimal (by the addition of 0x) or decimal notation.
This option is effective without specifying the -g group option.
-f[xde]
-fx displays the LDEV/STLBA/ENLBA number in hexadecimal.
-fd displays the relationship between the Device_File and the paired volumes, based on the
group (as defined in the local instance configuration definition file). If the Device_File column
shows unknown to either the local or the remote host (instance), the volume is not recognized
on the current host, and the command is rejected in protection mode.
-fe displays the serial and LDEV numbers of the external LUNs mapped to the LDEV for the
target volume. This option displays the information by adding it to the last column, and
ignores the 80-column format.
Output fields:
EM: The external connection mode:
H = a mapped E-LUN hidden from the host.
V = a mapped E-LUN visible to the host.
= an unmapped E-LUN.
BH = a mapped E-LUN hidden from the host with a blocked LDEV.
BV = a mapped E-LUN visible to the host with a blocked LDEV.
B = an unmapped E-LUN with a blocked LDEV.
E-Seq#: The production (serial) number of the external LUN. If unknown, is displayed.
E-LDEV#: The LDEV number of the external LUN. If unknown, is displayed.
-g group
Specifies the device group name in the HORCM_DEV section of the instance configuration file.
The command runs for the specified group unless the -d pair_vol option is specified.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M] [instance#] or -I[CA][BC] [instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-q
Terminates interactive mode and exits this command.
-v operation
Specifies an operation that displays each parameter for validation checking.
Valid values for operation:
-v cflag: Displays all flags for checking data block validation for target
volumes.
Example:
raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File  Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S
vg01  oradb1  c4t0d2       2332 2     D E B R  D D D  D E E  D E D D
vg01  oradb2  c4t0d3       2332 3     D E B R  D D D  D E E  D E D D
Output fields:
BR-W-E-E: The flags for checking data block size:
R=Read: E=Enable and D=Disable
W=Write: E=Enable and D=Disable
E=Endian format: L=Little and B=Big
E=Not rejected when a validation error occurs: W=Write and R=Read
MR-W-B: The flags for checking block header information:
R=Read: E=Enable and D=Disable
W=Write: E=Enable and D=Disable
B=Block #0: E=Enable and D=Disable
BR-W-B: The flags for checking data block number information:
R=Read: E=Enable and D=Disable
W=Write: E=Enable and D=Disable
B=Data Block: E=Enable and D=Disable
SR-W-B-S: The flags for checking data block checksum.
R=Read: E=Enable and D=Disable
W=Write: E=Enable and D=Disable
B=Block #0: E=Enable and D=Disable
S=Checksum: E=Enable and D=Disable
-v offset
Displays the range setting for the data block size of Oracle I/O and a region on a target volume
for validation checking.
Output fields:
Bsize: The data block size of Oracle I/O, in units of bytes.
STLBA: The Start of LBA on a target volume, in units of LBAs.
ENLBA: The End of LBA on a target volume, in units of LBAs. If STLBA and ENLBA are both zero,
all blocks are checked.
BNM: Shows whether validation is disabled or enabled. If BNM is 0, this validation is disabled.
-v gflag
Displays the flags for guarding the target volumes.
Example:
raidvchkdsp -g vg01 -fd -v gflag
Group PairVol Device_File  Seq# LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
vg01  oradb1  c4t0d2       2332 2     E E D D E  E E D D E  365
vg01  oradb2  c4t0d3       2332 3     E E D D E  E E D D E  -
Output fields:
GI-C-R-W-S: The protection flags for the target volume. The flags are E for enabled and D for
disabled.
I. Inquiry command.
C. Read Capacity command.
R. Read command.
W. Write command.
S. Ability to become an S-VOL.
PI-C-R-W-S: The permission flags, showing whether the corresponding protection flags can be
changed to enabled. E indicates that a flag can be changed to enabled; D indicates that it cannot.
I. I flag permission.
C. C flag permission.
R. R flag permission.
W. W flag permission.
S. S flag permission.
R-Time: The retention time for write protection, in days. A hyphen (-) indicates that the retention
time is infinite.
-v pool
Displays the capacity and the usable capacity of the XP Snapshot pool corresponding to the
group.
Example:
raidvchkdsp -g vg01 -v pool
Group PairVol Port# TID LU Seq#  LDEV# Bsize Available Capacity
vg01  oradb1  CL2-D 2   7  62500 167   2048  100000    1000000000
vg01  oradb2  CL2-D 2   10 62500 170   2048  100000    1000000000
Output fields:
Bsize: The data block size of the pool, in units of block (512 bytes).
Available (Bsize): The available capacity for the volume data on the XP Snapshot pool in units of
Bsize.
Capacity (Bsize): The total capacity in the XP Snapshot pool in units of Bsize.
NOTE:
This command is controlled as a protection facility. A non-permitted volume is shown without LDEV
number information. This command is rejected with EX_ERPERM by connectivity checking between
XP RAID Manager and the disk array.
-v errcnt
Displays statistical information for errors counted on the target volumes. The error count is
cleared when the individual flag for integrity checking is disabled.
Output fields:
CFEC: Block size validation error count.
MNEC: Block header validation error count.
SCEC: Block checksum validation error count.
BNEC: Block number validation error count.
Error codes
This command is rejected with EX_ERPERM by connectivity checking between XP RAID Manager and
the disk array.
Examples
# raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File  Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S
vg01  oradb1  Unknown      2332 -     - - - -  - - -  - - -  - - - -
vg01  oradb2  c4t0d3       2332 3     D E B R  D D D  D E E  D E D D

# raidvchkdsp -g vg01 -fd -v offset
Group PairVol Device_File  Seq# LDEV# Bsize STLBA ENLBA  BNM
vg01  oradb1  c4t0d2       2332 2     1024  1     102400 9
vg01  oradb2  c4t0d3       2332 3     1024  1     102400 9

# raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File  Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S
vg01  oradb1  c4t0d2       2332 2     D E B R  D D D  D E E  D E D D
vg01  oradb2  c4t0d3       2332 3     D E B R  D D D  D E E  D E D D

# raidvchkdsp -g vg01 -fd -v errcnt
Group PairVol Device_File  Seq# LDEV# CfEC MNEC SCEC BNEC
vg01  oradb1  c4t0d2       2332 2     0    0    0    0
vg01  oradb2  c4t0d3       2332 3     0    0    0    0
raidvchkscan
Integrity checking confirmation command (Database Validator only)
Description
The raidvchkscan command displays the parameters for protection checking that are set on the
specified volumes. The unit of checking for protection is based on the raidscan command.
Syntax
raidvchkscan {-fx | -h | -I[H/CA][M/BC][instance#] | -l LUN | -p port[hgrp]
| -pd[g] raw_device | -q | -s seq# | -t target | -v operation | -v aou
| -v cflag | -v errcnt | -v gflag | -v jnl unit# | -v jnlt | -v offset |
-v pid unit# | -v pida unit# | -z | -zx}
Arguments
-fx
Displays the LDEV/STLBA/ENLBA number in hexadecimal.
-h
Displays Help/Usage, version, instance number, and environment (XP Continuous Access
Software/XP Business Copy Software) information.
-I[H][M][instance#] or -I[CA][BC][instance#]
Sets the instance number and specifies the command as XP Continuous Access Software or XP
Business Copy Software. This is an alternate method to using the environment variables
$HORCMINST and $HORCC_MRCF. For further information, see
XP RAID Manager instance and execution environment variables on page 63.
-l LUN
Specifies the LUN of a specified SCSI/Fibre target. If this option is not specified, the command
applies to all LUNs.
A LUN-only specification without designating a target ID is invalid.
-p port
Specifies the name of a port to be scanned by selecting it from CL1-A to CL1-R (excluding CL1-I
and CL1-O), or CL2-A to CL2-R (excluding CL2-I and CL2-O). For the expanded port, specify
CL3-a to CL3-r or CL4-a to CL4-r. Port names are not case sensitive.
This option must always be specified if the -pd raw_device option is not specified.
[hgrp] is specified to display only the LDEVs mapped to a host group on a disk array port.
-pd[g] raw_device
Specifies a raw_device name.
Finds the sequence number and port name on the disk array, searches for the unit ID from seq#,
and scans the corresponding port of the disk array.
This option must always be specified if the -find or -p port option is not specified. If this
option is specified, the -s seq# option is invalid.
-pdg specifies the LUNs displayed in host view by locating a host group for the XP1024/XP128
disk array.
-q
Terminates interactive mode and exits this command.
-s seq#
Specifies the serial number of the disk array on multiple disk array connections when you
cannot specify the unit ID that is contained in the -p port option or the -v jnl option.
This option searches the corresponding unit ID for the sequence number and scans the port
that is specified by the -p port option.
If this option is specified, the unit ID contained in -p port is invalid. If this option is specified,
the unit ID contained in -v jnl is invalid.
-t target
Specifies a SCSI/Fibre target ID of a specified port. If this option is not specified, the command
applies to all targets.
-v operation
Specifies an operation that displays each parameter for validation checking.
Valid values for operation include: cflag, errcnt, gflag, jnl unit#, jnlt, offset,
and pid unit#.
-v aou
(XP24000 disk array only) Displays Thin Provisioning (THP) volume and pool ID information.
Example:
Displays the LUN capacity and usage rate for the THP volume mapped to the specified port, and
displays the pool ID of the THP volume's LDEVs.
# raidvchkscan -v aou -p CL2-d-0
PORT#   /ALPA/C TID# LU#  Seq# Num LDEV# Used(MB) LU_CAP(MB)
CL2-D-0 /e4/ 0     2   0 62500   1   160    20050    1100000
CL2-D-0 /e4/ 0     2   1 62500   1   161   200500    1100000
Example:
Displays the LUN capacity and usage rate for the configuration file group's THP volume, and displays
the pool ID of the THP volume's LDEVs.
# raidvchkdsp -v aou -g AOU
Group PairVol Port# TID LU  Seq# LDEV#
AOU   AOU_001 CL2-D   2  7 62500   167
AOU   AOU_002 CL2-D   2 10 62500   170
Output fields:
Used(MB): The usage size of the allocated block on this LUN.
LU_CAP(MB): The LUN capacity corresponding to the SCSI Readcapacity command.
U(%): The usage rate of the allocated block on the associated THP pool to this LUN.
T(%): The threshold rate being set to the THP pool as High water mark.
PID: The THP pool ID assigned to this THP volume.
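The Used(MB) and LU_CAP(MB) columns make it straightforward to compute how much of a THP LUN is actually allocated. A minimal sketch with the sample values from the example above:

```shell
# Allocated percentage of a THP volume from 'raidvchkscan -v aou' columns.
used_mb=20050        # Used(MB) from the sample output
lu_cap_mb=1100000    # LU_CAP(MB) from the sample output

pct=$(awk -v u="$used_mb" -v c="$lu_cap_mb" \
    'BEGIN { printf "%.2f", u * 100 / c }')
echo "allocated: ${pct}% of the LUN"
```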
-v cflag
Displays all flags for checking regarding data block validation for target volumes.
Example:
# raidvchkscan -p CL1-A -v cflag
PORT# /ALPA/C TID# LU# Seq# Num LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S CFEC MNEC SCEC BNEC
CL1-A / ef/ 0    0   0 2332   1     0 D E B R  D D D  D E E  D E D D     0    0    0    0
CL1-A / ef/ 0    0   1 2332   1     1 D E B R  D D D  D E E  D E D D     0    0    0    0
CL1-A / ef/ 0    0   2 2332   1     2 D E B R  D D D  D E E  D E D D     0    0    0    0
CL1-A / ef/ 0    0   3 2332   1     3 D E B R  D D D  D E E  D E D D     0    0    0    0
CL1-A / ef/ 0    0   4 2332   1     4 D E B R  D D D  D E E  D E D D     0    0    0    0
Output fields:
BR-W-E-E: The flags for checking data block size:
R=Read: E=Enable and D=Disable
W=Write: E=Enable and D=Disable
E=Endian format: L=Little and B=Big
-v gflag
Displays the flags for block data validation for target volumes.
Example:
# raidvchkscan -p CL1-A -v gflag
PORT# /ALPA/C TID# LU# Seq# Num LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
CL1-A / ef/ 0    0   0 2332   1     0 E E D D E  E E D D E     365
CL1-A / ef/ 0    0   1 2332   1     1 E E D D E  E E D D E       -
CL1-A / ef/ 0    0   2 2332   1     2 E E D D E  E E E E E       0
Output fields:
GI-C-R-W-S: The protection flags for the target volume. The flags are E for enabled and D for
disabled:
I. Inquiry command
C. Read Capacity command
R. Read command
W. Write command
S. Ability to become an S-VOL
PI-C-R-W-S: The permission flags, showing whether each protection flag can be changed to enabled.
E indicates that a flag can be changed to enabled. D indicates that it cannot.
I. I flag permission
C. C flag permission
R. R flag permission
W. W flag permission
S. S flag permission
R-Time: The retention time for write protection, in days. A hyphen (-) indicates that the retention
time is infinite.
-v jnl unit#
Finds the journal volume list setting and displays information for the journal volume.
Example:
# raidvchkscan -v jnl 0
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK)  Seq# Num LDEV#
001  0   1 PJNN  4   21 43216fde     30    512345 62500   2   265
002  1   2 PJNF  4   95 3459fd43  52000    512345 62500   3   270
003  2   3 SJNS  4   94 3459fd51    112    512345 62500   3   274
004  0   4 PJSN  4    0 1234f432     78    512345 62500   1   275
005  0   5 PJSF  4   45 345678ef     66    512345 62500   1   276
006  0   6 PJSE  0    0 -             -    512345 62500   1   277
007  -   - SMPL  -    - -             -    512345 62500   1   278
008  0   7 SMPL  4    5 -             -    512345 62500   1   278
Output fields:
AP: (active path) Displays one of two conditions, according to the pair status.
For pair status PJNL or SJNL (except the suspend state), this field shows the number of active paths on
the initiator port in XP Continuous Access Journal Software links. If unknown, a hyphen (-) is displayed.
For pair status SJNL (suspend state), this field shows the result of the suspend operation and
indicates whether all data on the P-JNL (P-VOL) was passed (synchronized) to the S-JNL (S-VOL)
completely. If AP is 1, all data was passed; otherwise, some data was not passed to the S-JNL (S-VOL).
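A journal that fills up (PJNF) risks suspending the pair, so the U(%) column is worth watching from a script. The sketch below is illustrative only: it parses a hard-coded sample in the format shown above, and the 90% alert threshold is an arbitrary choice.

```shell
# Report journal IDs whose usage is at or above 90%.
sample='JID MU CTG JNLS AP U(%) Q-Marker Q-CNT
001 0 1 PJNN 4 21 43216fde 30
002 1 2 PJNF 4 95 3459fd43 52000
003 2 3 SJNS 4 94 3459fd51 112'

full=$(printf '%s\n' "$sample" | awk 'NR > 1 && $6 + 0 >= 90 { print $1 }')
echo "$full"
```

In practice, the hard-coded sample would be replaced by the live output of raidvchkscan -v jnl.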
The following table shows the meanings of JNLS status when combined with other information.
When a status is shown in parentheses, such as PJNN (PJNS), the first status refers to the normally
active link and the second status refers to the normally inactive delta-resync link.
Table 38 raidvchkscan JNLS status
The table relates each P-JNL status to its S-JNL counterpart and gives the meaning of each
combination, qualified by other information (QCNT and AP):
P-JNL statuses: SMPL, PJNN (PJNS), PJSN, PJNF, PJSF, PJSE
S-JNL statuses: SMPL, SJNN (SJNS), SJSN, SJSF, SJSE
-v jnlt
Displays three timer values for the journal volume.
Example:
# raidvchkscan -v jnlt
JID MU CTG JNLS AP U(%) Q-Marker Q-CNT D-SZ(BLK)
001  0   1 PJNN  4   21 43216fde    30    512345
002  1   2 PJNF  4   95 3459fd43 52000    512345
003  0   3 PJSN  4    0        -     -    512345
Output fields:
DOW: Data Overflow Watch timer (in seconds) for the journal group.
PBW: Path Blockade Watch timer (in seconds) for the journal group. Displays 0 when in SMPL
state.
APW: Active Path Watch time (in seconds) to detect link failure.
-v offset
Displays the range setting for data block size of Oracle I/O and a region on a target volume
for validation checking.
Example:
# raidvchkscan -p CL1-A -v offset
PORT# /ALPA/C TID# LU# Seq# Num LDEV# Bsize STLBA  ENLBA BNM
CL1-A / ef/ 0    0   0 2332   1     0  1024     1 102400   9
CL1-A / ef/ 0    0   1 2332   1     1  1024     1 102400   9
CL1-A / ef/ 0    0   2 2332   1     2  1024     1 102400   9
CL1-A / ef/ 0    0   3 2332   1     3  1024     1 102400   9
CL1-A / ef/ 0    0   4 2332   1     4  1024     1 102400   9
Output fields:
Bsize: The data block size of Oracle I/O, in units of bytes.
STLBA: The start LBA on a target volume, in units of LBAs.
ENLBA: The end LBA on a target volume, in units of LBAs. If STLBA and ENLBA are both zero,
all blocks are checked.
BNM: Whether validation is enabled. If BNM is 0, validation is disabled.
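Since STLBA and ENLBA are expressed in 512-byte LBAs, the size of the checked region follows directly. A small sketch, assuming the range is inclusive and using the values from the example above:

```shell
# Size of the validation-checked region from the '-v offset' columns.
stlba=1
enlba=102400

blocks=$((enlba - stlba + 1))   # inclusive LBA range (assumption)
bytes=$((blocks * 512))         # LBAs are 512-byte blocks
echo "checked region: ${blocks} blocks (${bytes} bytes)"
```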
-v pid [unit#]
(XP24000, XP12000, and XP10000 disk arrays only) Displays XP Snapshot pool information.
Example:
# raidvchkscan -v pid 0
PID POLS U(%) SSCNT Available(MB) Capacity(MB) H(%)
001 POLN   10   330      10000000   1000000000   80
002 POLF   95  9900        100000   1000000000   70
003 POLS  100 10000           100   1000000000   70
005 POLE    0     0             0            0
Output fields:
PID: The XP Snapshot pool ID.
POLS: The following status in the XP Snapshot pool:
POLN: Pool Normal.
POLF: Pool Full.
POLS: Pool Suspend.
POLE: Pool Failure; pool information is not displayed.
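Comparing U(%) against the pool's high-water mark H(%) is a natural scripted health check. The sketch below parses a hard-coded sample in the format shown above; a real script would read the live command output instead.

```shell
# Report XP Snapshot pool IDs whose usage has reached the high-water mark.
sample='PID POLS U(%) SSCNT Available(MB) Capacity(MB) H(%)
001 POLN 10 330 10000000 1000000000 80
002 POLF 95 9900 100000 1000000000 70
003 POLS 100 10000 100 1000000000 70'

over=$(printf '%s\n' "$sample" | awk 'NR > 1 && $3 + 0 >= $NF + 0 { print $1 }')
echo "$over"
```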
-v pida [unit#]
(XP24000 disk array only) Identifies and displays Thin Provisioning (THP) pool settings and
information.
Example:
# raidvchkscan -v pida 0
PID POLS U(%) AV_CAP(MB) LCNT TL_CAP(MB)
001 POLN   10   45000000   33   65000000
002 POLF   95      10000  900  100000000
004 POLN    0   10000000    0          0
Output fields:
PID: The THP pool ID.
POLS: The following THP pool status:
POLN: Pool Normal.
POLF: Pool Full.
POLS: Pool Suspend.
-v pool
(XP24000, XP12000, and XP10000 disk arrays; XP Snapshot only) Displays the capacity and
the usable capacity of the pool corresponding to the group.
Example:
# raidvchkscan -v pool -p CL2-d-0
PORT#   /ALPA/C TID# LU#  Seq# Num LDEV# Bsize Available   Capacity
CL2-D-0 /e4/ 0     2   0 62500   1   160  2048    100000 1000000000
CL2-D-0 /e4/ 0     2   1 62500   1   161  2048    100000 1000000000
Output fields:
Bsize: The data block size of the pool, in units of block (512 bytes).
Available (Bsize): The available capacity for the volume data on the pool in units of Bsize.
Capacity (Bsize): The total capacity in the pool in units of Bsize.
NOTE:
This command is rejected with EX_ERPERM by connectivity checking between XP RAID Manager and the
disk array.
-z
Makes XP RAID Manager enter interactive mode, prompting you on the next line for command
options. If the instance terminates or is shut down, your CLI session is not terminated but becomes
unresponsive.
-zx
(Not for use with MPE/iX or OpenVMS) Makes XP RAID Manager enter interactive mode,
prompting you on the next line for command options. Unlike -z, if the instance terminates or is
shut down, your CLI session is terminated.
Error codes
This command is rejected with EX_ERPERM by connectivity checking between XP RAID Manager and
the disk array.
XP RAID Manager reports the following message to the syslog file as an integrity check error
whenever any of the statistical error counts is updated.
HORCM_103
Detected a validation check error on this volume (dev_group, dev_name, unit#X, ldev#Y): CfEC=n,
MNEC=n, SCEC=n, BNEC=n
Cause: A validation error occurred on the database volume, or validation parameters for this volume
are invalid.
Action to be taken: Confirm the following items, and use the raidvchkdsp -v operation command
to verify the validation parameters.
Check whether the block size (-vs size) is an appropriate size.
Check whether the type for checking (-vt type) is an appropriate type.
Check whether the data validations are disabled for LVM configuration changes.
Check whether the data validations are not used based on the file system.
Check whether the redo log and data file are separated among the volumes.
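Because the counters appear in a fixed key=value form, the HORCM_103 message is easy to post-process from the syslog. The sample line below is illustrative (the group, device, and counter values are made up); only the message format follows the description above.

```shell
# Extract the CfEC and BNEC counters from an HORCM_103 syslog line.
line='HORCM_103 Detected a validation check error on this volume (vg01, oradb1, unit#0, ldev#167): CfEC=4, MNEC=0, SCEC=0, BNEC=1'

cfec=$(printf '%s\n' "$line" | sed -n 's/.*CfEC=\([0-9]*\).*/\1/p')
bnec=$(printf '%s\n' "$line" | sed -n 's/.*BNEC=\([0-9]*\).*/\1/p')
echo "CfEC=$cfec BNEC=$bnec"
```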
Error reporting
If you have a problem with XP RAID Manager, first make sure that the problem is not caused by the
host or the connection to the disk array.
The tables in this chapter provide detailed troubleshooting information:
If a failure occurs in XP Continuous Access Software or XP Business Copy Software volumes, find the
failure in the paired volumes, recover the volumes, and continue operation in the original system. If
an XP Continuous Access Software command terminates abnormally, see the activation log file, error
log file, and trace file to identify the cause.
XP RAID Manager monitors failures in the paired volumes at regular intervals. When it detects a
failure, it sends an error message to the host syslog file. When a failure is detected and reported,
collect the data in the error log file and trace data file (in all files under $HORCM_LOG) to determine
the cause of the error.
Operational notes
Table 39 Operational notes
When the LVM mirror and XP Continuous Access Software volumes are used together, the LVM
mirror handles write errors by switching LVM P-VOL volumes. Thus, the fence level of mirrored
P-VOLs used by the LVM must be set to data.
One instance of LVM must not be allowed to see both the P-VOL and S-VOL of the same XP
Business Copy Software or XP Continuous Access Software pair. This causes an LVM error in that
two volumes contain the same LVM volume group ID. If you wish to split and mount an S-VOL on
the same host as the P-VOL, you must first use the vgchgid command to give the S-VOL a new
LVM volume group ID.
Command device: XP RAID Manager communicates with the disk array by reading from and writing
into a specific block area of the command device. Therefore, the command device cannot be used
by the user. In addition, this device must not belong to an LVM volume group.
(XP Continuous Access Software only) Check the error notification command or the syslog file to
identify the failed paired volume. Manually issue a command to the identified failed paired volume
to try to recover it. If the secondary volume is the failed volume, issue the pairresync command
to recover it. If the primary volume fails, delete or suspend the pair (pairsplit command), use
the secondary volume as the primary volume, and create another pair.
horctakeover (swap-takeover): Host machines must be running the same operating system and the
same architecture.
Startup failure: After a new host system has been constructed, a failure to start can occur due to an
improper environmental setting or an inaccurate configuration definition file. Use the activation log
file for error definitions.
Host failure: If the primary volume detects a failure in the secondary volume, pair writing is
suspended. The primary volume changes the paired volume status to PSUE. (The fence level
determines whether host A continues processing, that is, writing, or host B takes over from host A.)
The software detects the change in status and sends a message to the syslog. If host A had initiated
a monitoring command, a message appears on host A. When the secondary volume recovers, host A
updates the S-VOL data by running the pairsplit -S, paircreate -vl, or pairresync command.
When the P-VOL server boots up, the secondary volume can be updated. If the secondary volume is
used by the LVM, the volume group of the LVM must be deactivated. The secondary volume must only
be mounted to a host when the volume is in PSUS state or in SMPL mode. The secondary volume must
not be mounted automatically in any host boot sequence.
If the primary and secondary volumes are on the same server, alternate pathing (for example,
pvlink) cannot be used from the primary volume to the secondary volume. Use of SCSI alternate
pathing to a volume pair is limited to one side of a pair. The hidden S-VOL option can avoid
undesirable alternate pathing.
This is caused by the following kernel code in drivers/scsi/scsi_ioctl.c, warning that kernel
2.6.9.xx ioctl (SCSI_IOCTL_...) cannot process an HBA driver error.
/* Check for deprecated ioctls ... all the ioctls which don't follow the new unique
numbering scheme are deprecated */
switch (cmd) {
case SCSI_IOCTL_SEND_COMMAND:
case SCSI_IOCTL_TEST_UNIT_READY:
case SCSI_IOCTL_BENCHMARK_COMMAND:
case SCSI_IOCTL_SYNC:
case SCSI_IOCTL_START_UNIT:
case SCSI_IOCTL_STOP_UNIT:
printk(KERN_WARNING "program %s is using a deprecated SCSI
ioctl, please convert it to SG_IO\n", current->comm);
XP RAID Manager normally changes to the ioctl (SG_IO) automatically. For Linux kernel 2.6.9.xx,
use one of the following two methods to make XP RAID Manager use the old ioctl
(SCSI_IOCTL_SEND_COMMAND):
Creating the /HORCM/etc/USE_OLD_IOCTL file (size=0).
Defining the USE_OLD_IOCTL environment variable.
Example:
export USE_OLD_IOCTL=1
horcmstart.sh 10
HORCM/etc:
-rw-r--r-- 1 root root      0 Nov 11 11:12 USE_OLD_IOCTL
-r--r--r-- 1 root sys   32651 Nov 10 20:02 horcm.conf
-r-xr--r-- 1 root sys  282713 Nov 10 20:02 horcmgr
Error codes
Table 40 Error codes
Error code   Problem / Cause / Solution
HORCM_001 through HORCM_009
HORCM_101    XP Continuous Access Software/XP RAID Manager connection failed.
HORCM_102    XP Continuous Access Software/XP RAID Manager communication failed.
HORCM_103    A validation check error was detected. Causes include:
Data validation is disabled for LVM configuration changes.
Data validation is not used based on the file system.
The redo log and data file are on separate volumes.
Return value   Command error   Error message
204            EX_ENQCLP
206            EX_ENOPOL
207            EX_ESPERM
208            EX_EPRORT
209            EX_ESVOLD
210            EX_ENOSUP
211            EX_ERPERM
212            EX_ENQSIZ
213            EX_ENPERM
214            EX_ENQCTG       Unmatched CTGID.
215            EX_ENXCTG
216            EX_ENTCTG
217            EX_ENOCTG
218            EX_ENQSER
219            EX_ENOUNT
220            EX_INVMUN
221            EX_CMDRJE
222            EX_INVVOL
223            EX_VOLCRE
224            EX_VOLCUE
225            EX_VOLCUR
226            EX_INVRCD
227            EX_ENLDEV
228            EX_INVSTP
229            EX_INCSTG
230            EX_UNWCMD       Unknown command.
231            EX_ESTMON
232            EX_EWSLTO
233            EX_EWSTOT       Time-out error.
234            EX_EWSUSE       Pairsplit -E.
235            EX_EVOLCE
236            EX_ENQVOL
237            EX_CMDIOE
238            EX_UNWCOD
239            EX_ENOGRP
240            EX_INVCMD
241            EX_INVMOD
242            EX_ENORMT
243            EX_ENAMLG
244            EX_ERANGE
245            EX_ENOMEM       Insufficient memory.
246            EX_ENODEV
247            EX_ENOENT
248            EX_OPTINV
249            EX_INVNAM
250            EX_ATTDBG
251            EX_ATTHOR
252            EX_UNWOPT       Unknown option.
253            EX_INVARG       Invalid argument.
254            EX_REQARG
255            EX_COMERR
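These return values are reported by the commands as their exit status, so a wrapper script can branch on $? after running a command. A minimal sketch covering a few of the values from the table above:

```shell
# Map a handful of XP RAID Manager return values to their error names.
describe_exit() {
    case "$1" in
        230) echo "EX_UNWCMD: Unknown command." ;;
        233) echo "EX_EWSTOT: Time-out error." ;;
        245) echo "EX_ENOMEM: Insufficient memory." ;;
        253) echo "EX_INVARG: Invalid argument." ;;
        *)   echo "unrecognized return value: $1" ;;
    esac
}

describe_exit 253
```

A wrapper would typically run a command, capture rc=$?, and pass rc to a function like this one.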
Command errors
Table 42 Command errors
Command error   Problem / Action
EX_ATTDBG, EX_ATTHOR: Verify that the software has started and that the correct HORCMINST
value has been defined.
EX_CMDIOE: Check whether the host syslog file reports an Illegal Request (0x05) Sense Key.
If so, verify that:
The XP Business Copy Software/XP Continuous Access Software functions are installed on the
disk array;
The ESCON RCP and LCP ports are set properly;
The CU paths have been established;
The target volume is available.
EX_CMDRJE
EX_COMERR
EX_ENAMLG: Undefined error.
EX_ENLDEV: Verify that the configuration file is correct and that all devices are defined correctly.
EX_ENOCTG
EX_ENODEV: Verify the device name and add it to the configuration file of the remote and local hosts.
EX_ENOENT: Verify the device or group name and add it to the configuration file of the remote and
local hosts.
EX_ENOGRP: Verify the device or group name and add it to the configuration file of the remote and
local hosts.
EX_ENOMEM: Insufficient memory.
EX_ENORMT: Verify that the local and remote servers are properly communicating, and increase the
time-out value in the configuration file.
EX_ENOSUP: S-VOL error.
EX_ENOUNT: Verify the disk array unit ID and add it to the HORCM_CMD section of the local host
configuration file.
EX_ENPERM
EX_ENQCTG
EX_ENQSER
EX_ENQSIZ
EX_ENQVOL: Confirm the attributes and fence level settings using the pairdisplay command and
reset the volume attributes and fence levels.
EX_ENXCTG
EX_EPRORT: Verify the retention time for a target volume using the raidvchkscan -v gflag
command.
EX_ERANGE: Reissue the command, making sure to correctly define all of the command arguments.
EX_ERPERM
EX_ESTMON: Monitoring is prohibited.
EX_EVOLCE
EX_EWSLTO
EX_EWSTOT
EX_EWSUSE: Issue the pairresync command to try to recover the failed pair. If the pairresync
command does not restore the pair, call the HP support center.
EX_EXTCTG
EX_INCSTG
EX_INVARG: Reissue the command, making sure to correctly define all of the command arguments.
EX_INVCMD
EX_INVMOD
EX_INVMUN
EX_INVNAM: Reissue the command, making sure to correctly define all of the command arguments.
EX_INVRCD
EX_INVSTP
EX_INVVOL
EX_OPTINV
EX_REQARG
EX_UNWCMD
EX_UNWCOD
EX_UNWERR: Undefined error.
EX_UNWOPT
EX_VOLCRE
EX_VOLCUE
EX_VOLCUR
The following HORCM_DEV examples show group definitions for each mirror descriptor (MU#) use:
MU#0 is used by Cnt Ac and BC; BC (XP Snapshot) uses MU#1-2 (MU#3-63); MU#1-3 are used
by Cnt Ac-J only.

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
Oradb1       oradev11   CL1-D   2          1
Oradb2       oradev21   CL1-D   2          1

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb1       oradev11   CL1-D   2          1     0
Oradb2       oradev21   CL1-D   2          1     1
Oradb3       oradev31   CL1-D   2          1     2

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb1       oradev11   CL1-D   2          1     1
Oradb2       oradev21   CL1-D   2          1     2

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb1       oradev11   CL1-D   2          1     h1
Oradb2       oradev21   CL1-D   2          1     h2
Oradb3       oradev31   CL1-D   2          1     h3
Oradb4       oradev41   CL1-D   2          1
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-D   2          1
Oradb        oradev2    CL1-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm
NOTE:
There must be at least one command device described in the configuration definition for every instance.
Up to 16 instances can use the same command device via the same port. Instances beyond 16 must use
a different SCSI path.
The following shows an example of the required (raw) control device file format. HOSTx = HOSTA,
HOSTB, etc...
HP-UX
HORCM_CMD for HOSTx ... /dev/rdsk/c0t0d1
Solaris
HORCM_CMD for HOSTx ... /dev/rdsk/c0t0d1s2
AIX
HORCM_CMD for HOSTx ... /dev/rhdiskNN
Where NN is the device number assigned automatically by AIX.
Digital UNIX
HORCM_CMD for HOSTx ... /dev/rrzbNNc
Where NN is the device number (BUS number * 8 + target ID) defined by Digital UNIX.
DYNIX/ptx
HORCM_CMD for HOSTx ... /dev/rdsk/sdNN
Where NN is the device number assigned automatically by DYNIX/ptx.
Windows NT/2000/2003
HORCM_CMD for HOSTx ... \\.\PhysicalDriveN or \\.\Volume{GUID} for Windows
2000/2003
Where N is the device number assigned automatically by Windows NT/2000/2003.
Linux, xLinux
HORCM_CMD for HOSTx ... /dev/sdN
Where N is the device number assigned automatically by Linux/xLinux.
paircreate -g Oradb -f never -vl
This command begins a pair coupling between the volumes designated as Oradb in the
configuration definition file and begins copying the two pairs (in the example configuration).
Designate a volume name (oradev1) and a local host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vl
This command begins a pair coupling between the volumes designated as oradev1 in the
configuration definition file.
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-A, 1,1) 30053       19 -
oradb oradev1(R)   (CL1-D, 2,1) 30054       18 -
oradb oradev2(L)   (CL1-A, 1,2) 30053       21 -
oradb oradev2(R)   (CL1-D, 2,2) 30054       20 -
paircreate -g Oradb -f never -vr
This command begins a pair coupling between the volumes designated as Oradb in the
configuration definition file and begins copying the two pairs (in the example configuration).
Designate a volume name (oradev1) and a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
This command begins a pair coupling between the volumes designated as oradev1 in the
configuration definition file.
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-D, 2,1) 30054       18 -
oradb oradev1(R)   (CL1-A, 1,1) 30053       19 -
oradb oradev2(L)   (CL1-D, 2,2) 30054       20 -
oradb oradev2(R)   (CL1-A, 1,2) 30053       21 -
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-D   2          1
Oradb        oradev2    CL1-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm
paircreate -g Oradb -f never -vl
This command begins a pair coupling between the volumes designated as Oradb in the
configuration definition file and begins copying the two pairs (in the example configuration).
Designate a volume name (oradev1) and a local host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vl
This command begins a pair coupling between the volumes designated as oradev1 in the
configuration definition file.
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-A, 1,1) 30053       19 -
oradb oradev1(R)   (CL1-D, 2,1) 30053       18 -
oradb oradev2(L)   (CL1-A, 1,2) 30053       21 -
oradb oradev2(R)   (CL1-D, 2,2) 30053       20 -
paircreate -g Oradb -f never -vr
This command begins a pair coupling between the volumes designated as Oradb in the
configuration definition file and begins copying the two pairs (in the example configuration).
Designate a volume name (oradev1) and a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
This command begins a pair coupling between the volumes designated as oradev1 in the
configuration definition file.
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-D, 2,1) 30053       18 -
oradb oradev1(R)   (CL1-A, 1,1) 30053       19 -
oradb oradev2(L)   (CL1-D, 2,2) 30053       20 -
oradb oradev2(R)   (CL1-A, 1,2) 30053       21 -
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm0    1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm1

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm1    1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-D   2          1
Oradb        oradev2    CL1-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm0
# paircreate -g Oradb -d oradev1 -f never -vl
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-A, 1,1) 30053       19 -
oradb oradev1(R)   (CL1-D, 2,1) 30053       18 -
oradb oradev2(L)   (CL1-A, 1,2) 30053       21 -
oradb oradev2(R)   (CL1-D, 2,2) 30053       20 -
# setenv HORCMINST 1
(Windows NT/2000/2003) set HORCMINST=1
paircreate -g Oradb -f never -vr
This command begins a pair coupling between the two pairs of volumes designated as Oradb in
the configuration definition file.
Designate a volume name (oradev1) and a remote instance P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
In the example configuration, this pairs CL1-A, T1, L1 and CL1-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#)     Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-D, 2,1) 30053       18 -
oradb oradev1(R)   (CL1-A, 1,1) 30053       19 -
oradb oradev2(L)   (CL1-D, 2,2) 30053       20 -
oradb oradev2(R)   (CL1-A, 1,2) 30053       21 -
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-A   1          1     0
Oradb        oradev2     CL1-A   1          2     0
Oradb1       oradev1-1   CL1-A   1          1     1
Oradb1       oradev1-2   CL1-A   1          2     1
Oradb2       oradev2-1   CL1-A   1          1     2
Oradb2       oradev2-2   CL1-A   1          2     2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Oradb1       HST3         horcm
Oradb2       HST4         horcm

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL2-B   2          1
Oradb        oradev2    CL2-B   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST3          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb1       oradev1-1   CL2-C   2          1
Oradb1       oradev1-2   CL2-C   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb1       HST1         horcm

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST4          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb2       oradev2-1   CL2-D   2          1
Oradb2       oradev2-2   CL2-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb2       HST1         horcm
paircreate -g Oradb -vl
This command begins a pair coupling between the two pairs of volumes designated as Oradb in
the configuration definition file.
Designate a volume name (oradev1) and a local host P-VOL:
# paircreate -g Oradb -d oradev1 -vl
In the example configuration, this pairs CL1-A, T1, L1 and CL1-B, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (Port#,TID,LU-M)   Seq#  P-LDEV# M
oradb oradev1(L)   (CL1-A, 1, 1-0)   30053       20 -
oradb oradev1(R)   (CL2-B, 2, 1-0)   30053       18 -
oradb oradev2(L)   (CL1-A, 1, 2-0)   30053       21 -
oradb oradev2(R)   (CL2-B, 2, 2-0)   30053       19 -
# setenv HORCC_MRCF 1
(Windows NT/2000/2003) set HORCC_MRCF=1
paircreate -g Oradb -vr
This command begins a pair coupling between the two pairs of volumes designated as Oradb in
the configuration definition file.
Designate a volume name (oradev1) and a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -vr
In the example configuration, this pairs CL1-A, T1, L1 and CL1-B, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb
Group PairVol(L/R) (Port#,TID,LU-M)   Seq#  P-LDEV# M
oradb oradev1(L)   (CL2-B, 2, 1-0)   30053       18 -
oradb oradev1(R)   (CL1-A, 1, 1-0)   30053       20 -
oradb oradev2(L)   (CL2-B, 2, 2-0)   30053       19 -
oradb oradev2(R)   (CL1-A, 1, 2-0)   30053       21 -
paircreate -g Oradb1 -vl
This command begins a pair coupling between the two pairs of volumes designated as Oradb1
in the configuration definition file.
Designate a volume name (oradev1-1) and a local host P-VOL:
# paircreate -g Oradb1 -d oradev1-1 -vl
In the example configuration, this pairs CL1-A, T1, L1 and CL2-C, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb1
Group PairVol(L/R)   (Port#,TID,LU-M)   Seq#  P-LDEV# M
oradb oradev1-1(L)   (CL1-A, 1, 1-1)   30053       22 -
oradb oradev1-1(R)   (CL2-C, 2, 1-0)   30053       18 -
oradb oradev1-2(L)   (CL1-A, 1, 2-1)   30053       23 -
oradb oradev1-2(R)   (CL2-C, 2, 2-0)   30053       19 -
HORCC_MRCF=1
paircreate -g Oradb1 -vr
This command begins a pair coupling between the two pairs of volumes designated as Oradb1
in the configuration definition file.
Designate a volume name (oradev1-1) and a remote host P-VOL:
# paircreate -g Oradb1 -d oradev1-1 -vr
In the example configuration, this pairs CL1-A, T1, L1 and CL2-C, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb1
Group PairVol(L/R)   (Port#,TID,LU-M)   Seq#  P-LDEV# M
oradb oradev1-1(L)   (CL2-C, 2, 1-0)   30053       18 -
oradb oradev1-1(R)   (CL1-A, 1, 1-1)   30053       22 -
oradb oradev1-2(L)   (CL2-C, 2, 2-0)   30053       19 -
oradb oradev1-2(R)   (CL1-A, 1, 2-1)   30053       23 -
HORCC_MRCF=1
# paircreate -g Oradb2 -vl
This command begins a pair coupling between the two pairs of volumes designated as Oradb2
in the configuration definition file.
Designate a volume name (oradev2-1) and a local host P-VOL:
# paircreate -g Oradb2 -d oradev2-1 -vl
In the example configuration, this pairs CL1-A, T1, L1 and CL2-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb2
Group PairVol(L/R)   (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,   P-LDEV#  M
oradb oradev2-1(L)   (CL1-A, 1, 1-2)    30053                                24       -
oradb oradev2-1(R)   (CL2-D, 2, 1-0)    30053                                18       -
oradb oradev2-2(L)   (CL1-A, 1, 2-2)    30053  19..P-VOL    COPY     30053   25       -
oradb oradev2-2(R)   (CL2-D, 2, 2-0)    30053  25..S-VOL    COPY     -----   19       -
HORCC_MRCF=1
# paircreate -g Oradb2 -vr
This command begins a pair coupling between the two pairs of volumes designated as Oradb2
in the configuration definition file.
Designate a volume name (oradev2-1) and a remote host P-VOL:
# paircreate -g Oradb2 -d oradev2-1 -vr
In the example configuration, this pairs CL1-A, T1, L1 and CL2-D, T2, L1.
Designate a group name and confirm pair volume state:
# pairdisplay -g Oradb2
Group PairVol(L/R)   (Port#,TID,LU-M),  Seq#,   P-LDEV#  M
oradb oradev2-1(L)   (CL2-D, 2, 1-0)    30053   18       -
oradb oradev2-1(R)   (CL1-A, 1, 1-2)    30053   24       -
oradb oradev2-2(L)   (CL2-D, 2, 2-0)    30053   19       -
oradb oradev2-2(R)   (CL1-A, 1, 2-2)    30053   25       -
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm0    1000         3000

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-A   1          1     0
Oradb        oradev2    CL1-A   1          2     0
Oradb1       oradev11   CL1-D   3          1     0
Oradb1       oradev12   CL1-D   3          2     0
Oradb2       oradev21   CL1-D   4          1     0
Oradb2       oradev22   CL1-D   4          2     0

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm1
Oradb1       HST1         horcm1
Oradb2       HST1         horcm1

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm1    1000         3000

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb        oradev2    CL1-D   2          2     0
Oradb1       oradev11   CL1-D   2          1     1
Oradb1       oradev12   CL1-D   2          2     1
Oradb2       oradev21   CL1-D   2          1     2
Oradb2       oradev22   CL1-D   2          2     2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm0
Oradb1       HST1         horcm0
Oradb2       HST1         horcm0
HORCMINST=0
HORCC_MRCF=1
Designate group names (Oradb and Oradb1) and a local instance P-VOL:
# paircreate -g Oradb -vl
# paircreate -g Oradb1 -vr
This command begins a pair coupling between the four pairs of volumes designated as Oradb
and Oradb1 in the configuration definition file.
Designate a group name and confirm pair states:
HORCMINST=1
HORCC_MRCF=1
Designate group names (Oradb and Oradb1) and a remote instance P-VOL:
# paircreate -g Oradb -vr
# paircreate -g Oradb1 -vl
This command begins a pair coupling between the four pairs of volumes designated as Oradb
and Oradb1 in the configuration definition file.
Designate a group name and confirm pair states:
# pairdisplay -g oradb -m cas
Group  PairVol(L/R)  (Port#,TID,LU-M),  P-LDEV#  M
oradb  oradev1(L)    (CL1-D, 2, 1-0)    266      -
oradb1 oradev11(R)   (CL1-D, 2, 1-1)    270      -
oradb2 oradev21(L)   (CL1-D, 2, 1-2)    ----     -
oradb  oradev1(R)    (CL1-A, 1, 1-0)    268      -
oradb  oradev2(L)    (CL1-D, 2, 2-0)    267      -
oradb1 oradev12(R)   (CL1-D, 2, 2-1)    271      -
oradb2 oradev22(L)   (CL1-D, 2, 2-2)    ----     -
oradb  oradev2(R)    (CL1-A, 1, 2-0)    269      -
HORCM_MON
#ip_address   service   poll(10ms)   timeout
HST1          horcm     1000         3000

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
oradb        oradev1    CL1-A   1          1
oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
oradb        HST2         horcm
oradb        HST2         horcm0
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
oradb        oradev1    CL1-D   2          1     0
oradb        oradev2    CL1-D   2          2     0
oradb1       oradev11   CL1-D   2          1     1
oradb1       oradev12   CL1-D   2          2     1
oradb2       oradev21   CL1-D   2          1     2
oradb2       oradev22   CL1-D   2          2     2

HORCM_INST
#dev_group   ip_address   service
oradb        HST1         horcm
oradb1       HST2         horcm0
oradb2       HST2         horcm0
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm0    1000         3000

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
oradb        oradev1    CL1-D   2          1     0
oradb        oradev2    CL1-D   2          2     0
oradb1       oradev11   CL1-D   3          1     0
oradb1       oradev12   CL1-D   3          2     0
oradb2       oradev21   CL1-D   4          1     0
oradb2       oradev22   CL1-D   4          2     0

HORCM_INST
#dev_group   ip_address   service
oradb        HST1         horcm
oradb1       HST2         horcm
oradb2       HST2         horcm
# setenv HORCC_MRCF 1
(Windows NT/2000/2003) set HORCC_MRCF=1
Designate a group name (Oradb) on the XP Continuous Access Software environment of HOSTA:
# paircreate -g Oradb -vl
Designate a group name (Oradb1) on the XP Business Copy Software environment of HOSTB:
# paircreate -g Oradb1 -vl
This command begins a pair coupling between the four pairs of volumes designated as Oradb
and Oradb1 in the configuration definition file.
Designate a group name and confirm pair volume state on HOSTA:
# pairdisplay -g oradb -m cas
Group  PairVol(L/R)  (Port#,TID,LU-M),  P-LDEV#  M
oradb  oradev1(L)    (CL1-A, 1, 1-0)    ----     -
oradb  oradev1(L)    (CL1-A, 1, 1)      268      -
oradb1 oradev11(R)   (CL1-D, 2, 1-0)    270      -
oradb2 oradev21(R)   (CL1-D, 2, 1-1)    ----     -
oradb  oradev1(R)    (CL1-D, 2, 1)      266      -
oradb  oradev2(L)    (CL1-A, 1, 2-0)    ----     -
oradb  oradev2(L)    (CL1-A, 1, 2)      269      -
oradb1 oradev12(R)   (CL1-D, 2, 2-0)    271      -
oradb2 oradev22(R)   (CL1-D, 2, 2-1)    ----     -
oradb  oradev2(R)    (CL1-D, 2, 2)      267      -
HORCC_MRCF=1
Designate a group name (Oradb) on the XP Continuous Access Software environment of HOSTB:
# paircreate -g Oradb -vr
Designate a group name (Oradb1) on the XP Business Copy Software environment of HOSTB:
# paircreate -g Oradb1 -vl
This command begins a pair coupling between the four pairs of volumes designated as Oradb
and Oradb1 in the configuration definition file.
Designate a group name and confirm pair volume state on the XP Continuous Access Software
environment of HOSTB:
# pairdisplay -g oradb -m cas
Group  PairVol(L/R)  (Port#,TID,LU-M),  Seq#,  LDEV#..P/S,  Status,  Seq#,   P-LDEV#  M
oradb1 oradev11(L)   (CL1-D, 2, 1-0)    30053  268..P-VOL   PAIR     30053   270      -
oradb2 oradev21(L)   (CL1-D, 2, 1-1)    30053  268..SMPL    ----     -----   ----     -
oradb  oradev1(L)    (CL1-D, 2, 1)      30053  268..S-VOL   PAIR     -----   266      -
oradb  oradev1(R)    (CL1-A, 1, 1-0)    30053  266..SMPL    ----     -----   ----     -
oradb  oradev1(R)    (CL1-A, 1, 1)      30053  266..P-VOL   PAIR     30053   268      -
oradb1 oradev12(L)   (CL1-D, 2, 2-0)    30053  269..P-VOL   PAIR     30053   271      -
oradb2 oradev22(L)   (CL1-D, 2, 2-1)    30053  269..SMPL    ----     -----   ----     -
oradb  oradev2(L)    (CL1-D, 2, 2)      30053  269..S-VOL   PAIR     -----   267      -
oradb  oradev2(R)    (CL1-A, 1, 2-0)    30053  267..SMPL    ----     -----   ----     -
oradb  oradev2(R)    (CL1-A, 1, 2)      30053  267..P-VOL   PAIR     30053   269      -
Designate a group name and confirm XP Business Copy Software pair states from HOSTB:
# pairdisplay -g oradb1 -m cas
Group  PairVol(L/R)  (Port#,TID,LU-M),
oradb1 oradev11(L)   (CL1-D, 2, 1-0)
oradb2 oradev21(L)   (CL1-D, 2, 1-1)
oradb  oradev1(L)    (CL1-D, 2, 1)
oradb1 oradev11(L)   (CL1-D, 3, 1-0)
oradb1 oradev12(L)   (CL1-D, 2, 2-0)
oradb2 oradev22(L)   (CL1-D, 2, 2-1)
oradb  oradev2(R)    (CL1-D, 2, 2)
oradb1 oradev12(R)   (CL1-D, 3, 2-0)
Designate a group name and confirm XP Business Copy Software pair states from HOSTB, Instance 0:
# pairdisplay -g oradb1 -m cas
Group  PairVol(L/R)  (Port#,TID,LU-M),  P-LDEV#
oradb1 oradev11(L)   (CL1-D, 3, 1-0)    268
oradb1 oradev11(R)   (CL1-D, 2, 1-0)    270
oradb2 oradev21(R)   (CL1-D, 2, 1-1)    ----
oradb  oradev1(R)    (CL1-D, 3, 1)      266
oradb1 oradev12(L)   (CL1-D, 3, 2-0)    269
oradb1 oradev12(R)   (CL1-D, 2, 2-0)    271
oradb2 oradev22(R)   (CL1-D, 2, 2-1)    ----
oradb  oradev2(R)    (CL1-D, 3, 2)      267
File 1
# This is the Raid Manager Configuration file for host blue.
# It will manage the PVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
blue          horcm0          1000   3000

HORCM_CMD
/dev/rdsk/c4t14d0

HORCM_DEV
#group   disk-name   interface   target   lun   mirror
Group1   disk_1_g1   CL1-A       2        0
Group1   disk_2_g1   CL1-A       2        1

HORCM_INST
#group   remote host   remote service name
Group1   yellow        horcm1
File 2
# This is the Raid Manager Configuration file for host yellow.
# It will manage the SVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
yellow        horcm1          1000   3000

HORCM_CMD
/dev/rdsk/c10t14d0

HORCM_DEV
#group   disk-name   interface   target   lun   mirror
Group1   disk_1_g1   CL1-E       3        3
Group1   disk_2_g1   CL1-E       3        4

HORCM_INST
#group   remote host   remote service name
Group1   blue          horcm0
The configuration files show one group defined. The group, Group1, contains two disks. The comments
note that system blue is defining the P-VOLs, and system yellow is defining the S-VOLs. However, the
P-VOL/S-VOL relationship is set when the paircreate command is issued. The set of disks that
becomes the P-VOL or S-VOL depends on two conditions:
The instance to which the command is issued.
The option specified in the paircreate command.
The instance that the command is issued to becomes the local instance. If the option passed to the
paircreate command is -vl, the volumes defined in the local instance become the P-VOLs. If the
option is -vr, the volumes defined in the remote instance become the P-VOLs.
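These two conditions can be sketched as a small shell helper. The helper is illustrative only (Group1 comes from the files above, and the function name is hypothetical); it merely builds the command line, showing that the instance plus the -vl/-vr option, not the configuration files, decide which side becomes the P-VOL.

```shell
# Build a paircreate invocation for a given HORCM instance and the
# desired P-VOL side. Issuing to instance 0 with -vl, or to instance 1
# with -vr, both make the volumes defined in instance 0 the P-VOLs.
build_paircreate() {
  inst="$1"   # instance the command is issued to (becomes "local")
  side="$2"   # local | remote : whose volumes become the P-VOLs
  if [ "$side" = "local" ]; then opt="-vl"; else opt="-vr"; fi
  echo "HORCMINST=$inst paircreate -g Group1 $opt"
}

build_paircreate 0 local    # P-VOLs: volumes defined in instance 0
build_paircreate 1 remote   # P-VOLs: still the volumes defined in instance 0
```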
File 1
# This is the Raid Manager Configuration file for host blue.
# It will manage the PVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
blue          horcm0          1000   3000

HORCM_CMD
/dev/rdsk/c4t14d0

HORCM_DEV
#group     disk-name     interface   target   lun   mirror
Group1-0   disk_1_g1-0   CL1-A       2        0     0
Group1-0   disk_2_g1-0   CL1-A       2        1     0
Group1-1   disk_1_g1-1   CL1-A       2        0     1
Group1-1   disk_2_g1-1   CL1-A       2        1     1

HORCM_INST
#group     remote host   remote service name
Group1-0   blue          horcm1
Group1-1   blue          horcm1
File 2
# This is the Raid Manager Configuration file for host blue.
# It will manage the SVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
blue          horcm1          -1     3000

HORCM_CMD
/dev/rdsk/c4t14d0

HORCM_DEV
#group     disk-name     interface   target   lun
Group1-0   disk_1_g1-0   CL1-A       5        5
Group1-0   disk_2_g1-0   CL1-A       5        6
Group1-1   disk_1_g1-1   CL1-A       6        0
Group1-1   disk_2_g1-1   CL1-A       6        1

HORCM_INST
#group     remote host   remote service name
Group1-0   blue          horcm0
Group1-1   blue          horcm0
File 1
# This is the Raid Manager configuration file for host blue.
# It will manage the PVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
blue          horcm0          1000   3000

HORCM_CMD
/dev/rdsk/c4t14d0

HORCM_DEV
#group   disk-name   interface   target   lun   mirror
Group1   disk_1_g1   CL1-A       2        0
Group1   disk_2_g1   CL1-A       2        1
Group2   disk_1_g2   CL1-A       3        0
Group2   disk_2_g2   CL1-A       4        0
Group2   disk_3_g2   CL1-A       4        1

HORCM_INST
#group   remote host   remote service name
Group1   yellow        horcm1
Group2   green         horcm0
File 2
# This is the Raid Manager Configuration file for host yellow.
# It will manage the SVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
yellow        horcm1          -1     3000

HORCM_CMD
/dev/rdsk/c10t14d0

HORCM_DEV
#group   disk-name   interface   target   lun   mirror
Group1   disk_1_g1   CL1-E       3        3
Group1   disk_2_g1   CL1-E       3        4

HORCM_INST
#group   remote host   remote service name
Group1   blue          horcm0
File 3
# This is the Raid Manager Configuration file for host green.
# It will manage the SVOLs in the Business Copy pairing.
HORCM_MON
#local host   local service   poll   timeout
green         horcm0          -1     3000

HORCM_CMD
/dev/rdsk/c10t14d0

HORCM_DEV
#group   disk-name   interface   target   lun   mirror
Group2   disk_1_g2   CL1-F       3        3
Group2   disk_2_g2   CL1-F       3        4
Group2   disk_3_g2   CL1-F       3        5

HORCM_INST
#group   remote host   remote service name
Group2   blue          horcm0
File 1
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#unitID 0... (seq#30014)
#dev_name          dev_name          dev_name
/dev/rdsk/c0t0d0

HORCM_DEV
#dev_group   dev_name   port#    TargetID   LU#   MU#
                        CL1-A    3          0
                        CL1-A    3          1
                        CL1-A1   5          0
                        CL1-A1   5          1
                        CL1-A1   5          2

HORCM_INST
#dev_group   ip_address   service
                          horcm
                          horcm
                          horcm
takeover-switch
swap-takeover
SVOL-takeover
PVOL-takeover
Terms:
XXX: Pair status of P-VOL that was returned by the pairvolchk -s or pairvolchk -s -c
command.
YYY: Pair status of S-VOL that was returned by the pairvolchk -s or pairvolchk -s -c
command.
PAIR STATUS: Because the P-VOL controls status, PAIR STATUS is reported as PVOL_XXX (except
when the P-VOL's status is Unknown).
PVOL-PSUE: PVOL-PSUE-takeover
PVOL-SMPL: PVOL-SMPL-takeover
Nop: Nop-takeover
Swap: Swap-takeover
When the horctakeover command execution succeeds, the state transitions to that of the shown
number.
XP256 disk array (microcode 52-47-xx and under)
XP512/XP48 disk array (microcode 10-00-xx and under)
With older firmware, a horctakeover used to result in a SMPL S-VOL, which necessitated a full
copy at failback time. See Swap-takeover function, page 259.
S-VOL: SVOL-SMPL takeover
SVOL_E: Execute SVOL-SMPL takeover and return EX_VOLCUR.
SVOL_E*: Execute SVOL-SMPL takeover and return EX_VOLCUR.
XP256 disk array (microcode 52-47-xx and over)
XP512/XP48 disk array (microcode 10-00-xx and over)
XP1024/XP128, XP10000, XP12000, and XP24000 disk arrays
With newer firmware, a horctakeover results in an SSWS-state S-VOL, so that a delta copy is all
that is required at failback. This functionality is known as fast failback and is accomplished via the
-swaps or -swapp option of pairresync.
S-VOL: SVOL-SSUS takeover or swap-takeover. If a host fails, this function executes swap-takeover.
If an ESCON/FC link or P-VOL site fails, this function executes SVOL-SSUS-takeover.
SVOL_E: Execute SVOL-SSUS takeover and return EX_VOLCUR.
SVOL_E*: Return EX_VOLCUR.
When the horctakeover command execution succeeds, the state transitions to that of the shown
line number.
For instance, if the HA control script sees svol_pair at the local volume and pvol_pair at the remote
volume (as in State 16), it performs a swap-takeover that results in a State 12 situation.
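The decision an HA control script makes from those two observations can be sketched as a small dispatch function. This is illustrative only: the state strings and takeover actions are taken from this section, but the function, its name, and the exact classification are assumptions, not part of XP RAID Manager.

```shell
# Pick a takeover action from the pair states seen at the local and
# remote volumes (as an HA control script would after querying each
# side with pairvolchk). Purely a sketch of the state table above.
choose_takeover() {
  local_state="$1"
  remote_state="$2"
  case "$local_state:$remote_state" in
    svol_pair:pvol_pair) echo "swap-takeover" ;;       # e.g. State 16 -> State 12
    svol_pair:unknown)   echo "SVOL-SSUS-takeover" ;;  # P-VOL side unreachable
    pvol_*:*)            echo "nop-takeover" ;;        # already primary locally
    *)                   echo "analyze manually" ;;
  esac
}

choose_takeover svol_pair pvol_pair   # prints: swap-takeover
```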
pairsplit -S
paircreate -vl
pairsplit -S
paircreate -vr
Refer to the state definitions in the table under the heading HA control script state transitions, page 245.
Initial state:
PVOL-PSUE takeover
The horctakeover command executes a PVOL-PSUE-takeover when the primary volume cannot
report status or refuses writes (for example, a data fence level with the volume in PSUE or PDUB),
and the horctakeover command returns a PVOL-PSUE-takeover value at exit().
A PVOL-PSUE-takeover forces the primary volume to the suspend state (PSUE or PDUB: PSUE*,
PAIR: PSUS), which permits WRITEs to all primary volumes of the group.
The following illustrates how volumes in the same volume group may be of different status. Only the
volumes that were active at the time of link failure would immediately be PSUE.
Terms:
PAIR*: Equivalent to PAIR for XP Continuous Access Synchronous Software. Equivalent to PAIR:
PSUE for XP Continuous Access Asynchronous Software.
SSUS: Equivalent to SVOL_PSUS.
Object volume                                    paircurchk
Attribute   Status   Fence    Currency                SVOL_Takeover
SMPL        -        -        Needs to be confirmed   -
P-VOL       -        -        Needs to be confirmed   -
S-VOL       COPY     data     Inconsistent            Inconsistent
                     status   Inconsistent            Inconsistent
                     never    Inconsistent            Inconsistent
                     async    Inconsistent            Inconsistent (due to out-of-order copying)
            PAIR     data     OK                      OK
                     status   OK                      OK
                     never    Must be analyzed        To be analyzed
                     async    Must be analyzed        OK (Assumption)
            PFUL     async    To be analyzed          OK (Assumption)
            PSUS     data     Suspect                 Suspect
                     status   Suspect                 Suspect
                     never    Suspect                 Suspect
                     async    Suspect                 Suspect
            PFUS     async    Suspect                 OK (Assumption)
            PSUE     data     OK                      OK
                     status   Suspect                 Suspect
                     never    Suspect                 Suspect
                     async    Suspect                 OK (Assumption)
            PDUB     data     Suspect                 Suspect
                     status   Suspect                 Suspect
                     never    Suspect                 Suspect
                     async    Suspect                 Suspect
            SSWS
Terms:
Inconsistent: Data in the volume is inconsistent because it is being copied.
Suspect: The primary volume data and secondary volume data are not consistent (the same).
Must be analyzed: It cannot be determined from the status of the secondary volume whether
data is consistent. It is OK if the status of the primary volume is PAIR; it is suspect if the
status is PSUS or PSUE.
Needs to be confirmed: It is necessary to manually check the volume.
When the S-VOL Data Consistency function is used, paircurchk sets either of the following
returned values in exit(), which allows users to check the execution results with a user program.
Normal termination: 0 (OK. Data is consistent.)
Abnormal termination: Other than 0. (For the error cause and details, see the execution logs.)
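A control script can test that returned value directly. The sketch below is illustrative only: paircurchk is replaced by a stand-in function so the fragment is self-contained, and the group name oradb is taken from the examples in this guide.

```shell
# Stand-in for the real command so this sketch runs anywhere;
# in practice this would invoke: paircurchk -g oradb
paircurchk() { return 0; }

check_svol_currency() {
  if paircurchk -g "$1"; then
    echo "S-VOL data for $1 is consistent"
  else
    echo "S-VOL check for $1 failed (exit $?); see the execution logs"
  fi
}

check_svol_currency oradb   # prints: S-VOL data for oradb is consistent
```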
Takeover-switch function
The takeover command, when activated manually or by a control script, checks the attributes of
volumes on the local and remote disk array to determine the proper takeover action. The following
table shows the takeover actions.
Local node (takeover)          Remote node
Volume attribute               Volume attribute / P-VOL status   Takeover action
SMPL                           SMPL                              NG
                               P-VOL                             Nop-takeover
                               S-VOL                             Volumes unconformable
                               Unknown                           NG
P-VOL (primary),               SMPL                              NG
Fence = Data or Status         P-VOL                             Volumes unconformable
                               S-VOL                             PVOL-takeover (required to allow local writes)
P-VOL (primary),               SMPL                              NG
Fence = Never                  P-VOL                             Volumes unconformable
                               S-VOL                             Nop-takeover
S-VOL (secondary),             Any                               Nop-takeover
Status = SSWS
S-VOL (secondary),             SMPL                              Volumes unconformable
Status = Others                P-VOL, PAIR or PFUL               Swap-Takeover
                               P-VOL, Others                     SVOL-takeover (after SVOL_SSUS-takeover)
                               S-VOL                             Volumes unconformable
                               Unknown                           SVOL-takeover
Terms:
NG: The takeover command is rejected and the operation terminates abnormally.
Nop-takeover: No operation is done although the takeover command is accepted.
Volumes unconformable: A pair of volumes are not in sync to each other. The takeover command
terminates abnormally.
Unknown: The remote node attribute is unknown. The remote node system is down or cannot
communicate.
SSWS: Suspend for swapping with S-VOL side only. This state is displayed as SSUS (SVOL_PSUS)
by all commands except the pairdisplay -fc command.
PVOL-takeover: This takeover function runs from the P-VOL side, and gives the P-VOL read/write
capability even if the S-VOL is unavailable with a fence level of data or status.
SVOL-takeover: This takeover function runs from the S-VOL side, and attempts to swap the P/S
designations. If unable to swap the P/S designations, this function changes the S-VOL to SVOL-SSUS
mode. If unable to change the S-VOL to SVOL-SSUS mode, this function changes the S-VOL to
SMPL mode to allow writes to the volume.
Swap-takeover: This takeover function swaps the primary and secondary volume designations.
Swap-takeover function
It is possible to swap the designations of the primary and secondary volumes when the P-VOL of the
remote disk array is in the PAIR or PFUL (XP Continuous Access Asynchronous Software and over
HWM) state and the mirror consistency of S-VOL data has been assured.
The takeover command carries out the commands internally to swap the designations of the primary
and secondary volumes. You can specify swapping at the granularity of volume pair, CT group, or
volume group.
Swap-takeover works differently according to microcode version.
XP256 disk array (microcode 52-47-xx and under)
XP512/XP48 disk array (microcode 10-00-xx and under)
1. The command splits the pair and puts each volume in the SMPL state.
   If this step fails, the swap-takeover function is disabled and the SVOL-takeover command runs.
2. The local volumes of the takeover node are paired in No Copy mode and switched to be the
   primary volume.
   If this step fails, step 1 repeats to cancel step 2, and the SVOL-takeover function is then executed.
   If step 1 fails again, the swap-takeover fails.
1. The command orders a suspend for swapping (SSWS) for the local volume (S-VOL).
   If this step fails, the swap-takeover function is disabled and returns an error.
2. The command orders a resync for swapping to switch to the primary volume. The local volume
   (S-VOL) is swapped as the NEW_PVOL. The NEW_SVOL is resynchronized based on the
   NEW_PVOL.
   If the remote host is known, the command uses the value of P-VOL specified at paircreate time
   for the number of simultaneous copy tracks. If the remote host is unknown, the command uses a
   default of 3 simultaneous copy tracks for resync for swapping.
   If this step fails, the swap-takeover function returns as SVOL-SSUS-takeover. The local volume
   (S-VOL) is maintained in the SSUS (PSUS) state, which permits WRITE and maintenance of delta
   data (BITMAP) for the secondary volume. This special state is also displayed as the SSWS state,
   using the -fc option of the pairdisplay command.
1. The P-VOL side issues a pairsplit command to the P-VOL side disk array.
2. Non-transmitted data that remains in the FIFO queue (side file) of the P-VOL is copied to the
   S-VOL side.
3. The swap command returns after the synchronization between the P-VOL and S-VOL.
1. The S-VOL side issues a suspend for swapping to the S-VOL side disk array.
2. Non-transmitted data that remains in the FIFO queue (side file) of the P-VOL is copied to the
   S-VOL side.
SVOL-takeover function
This function enables the takeover node to have exclusive access to the S-VOL in the SSUS (PSUS)
state (reading and writing are enabled), except in COPY state, on the assumption that the remote
node, controlling the P-VOL, is unavailable or unreachable.
The data consistency of the secondary volume is judged by its pair status and fence level. If the data
consistency check fails, the SVOL-takeover function fails.
You can specify SVOL-takeover at the granularity of a paired logical volume or group.
If this check proves that the data is consistent, this function runs to switch to the primary volume using
a Resync for Swapping. If this switch succeeds, this function returns with swap-takeover. Otherwise,
this function returns SVOL-SSUS-takeover as the return value of a horctakeover command.
If there is a Host failure, this function returns as swap-takeover.
If an ESCON/FC link or P-VOL site failure occurs, this function returns as SVOL-SSUS-takeover.
If SVOL-takeover is specified for a group, the data consistency check runs for all volumes in the group.
Inconsistent volumes are displayed in the execution log file.
Example:
Group    Pair vol          Port
oradb1   /dev/dsk/hd001    CL1-A
oradb1   /dev/dsk/hd002    CL1-A
PVOL-takeover functions
The PVOL-takeover function terminates the PAIR state of a pair or group. The takeover node is given
unrestricted and exclusive access to the primary volume (reading and writing are enabled), on the
assumption that the remote node (controlling the S-VOL) is unavailable or unreachable.
The PVOL-takeover function has two roles:
PVOL-PSUE-takeover puts the P-VOL into PSUE state, which permits WRITE access to all primary
volumes of that group.
PVOL-SMPL-takeover puts the P-VOL into SMPL state.
PVOL-takeover first attempts to use PVOL-PSUE-takeover. If PVOL-PSUE-takeover fails,
PVOL-SMPL-takeover is executed.
You can specify PVOL-takeover with a granularity of logical volume or group.
P-VOLs (primary volumes) in DATA fence do not accept write commands after ESCON/FC link or
remote array failures. You can use PVOL-takeover on these P-VOLs to allow the application to update
the P-VOL if you choose. However, none of those updates are replicated or mirrored to the remote
S-VOL.
Figure 65 HA system failure and recovery (XP256 and XP512/XP48 disk arrays)
Scenario
While host B is processing, the P-VOL and S-VOL are swapped using pairresync -swaps,
and the delta data (BITMAP) updated by host B is fed back to host A.
When host A recovers from the failure, host A takes over processing from host B through the
horctakeover (swap-takeover) command.
Scenario
1. The P-VOL detects a failure in the S-VOL or the link and suspends mirroring. (It depends on the
   fence level whether host A continues processing or host B takes over processing from host A.)
2. The P-VOL changes its paired volume status to PSUE and keeps track of data changes in a
   difference bitmap. The XP Continuous Access Software manager detects the status change and
   outputs a message to syslog. If a host A user has initiated a monitoring command, a message
   is displayed on the client's screen.
3. The S-VOL or the link recovers from the failure. Host A issues the pairsplit -S, paircreate
   -vl, or pairresync command to update the P-VOL data by copying all data, or copying
   differential data only. The updated P-VOL is fed back to the S-VOL.
1. If an error occurs in writing paired volumes (for example, pair suspension), the server software
   using the volumes detects the error depending on the fence level of the paired volume.
2. If necessary, issue the horctakeover command to recover P-VOL write access if the secondary
   volume fails and the primary is fenced (write inhibited).
3. If the primary volume fails, split or suspend the paired volume and use the secondary volume as
   the substitute volume.
4. Find out the reason why the pair was split. Repair or recover the failure and resynchronize your
   pairs immediately.
Abnormal termination
An XP Continuous Access Software command can abnormally terminate for many reasons.
Check the system log file and the command log file to identify the cause.
If a command terminates abnormally because the remote server fails, recover the remote server, and
then reissue the command. If the instance has disappeared, reactivate the instance. If you find failures
for which you can take no action, check the files in the log directory and contact HP.
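When scripting around such failures, the exit status and the log directory are the two things to inspect. The fragment below is an illustrative sketch only: the ERROR marker, the file layout under the log directory, and the helper name are assumptions, not documented XP RAID Manager behavior.

```shell
# Show the most recent error lines from a HORCM log directory after a
# command fails; the *.log layout and "ERROR" marker are assumed.
show_recent_errors() {
  logdir="$1"
  grep -h "ERROR" "$logdir"/*.log 2>/dev/null | tail -n 5
}

show_recent_errors /HORCM/log0/curlog
```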
HP-UX
Sun Solaris
Microsoft Windows NT
Microsoft Windows 2000
Microsoft Windows 2003
OpenVMS
Sun Solaris
         TID        LUN        TID         LUN         TID       LUN
Fibre    0 to 63    0 to 7     0 to 125    0 to 511    0 to 31   0 to 511
SCSI     0 to 15    0 to 7     0 to 15     0 to 7      0 to 15   0 to 7
ALPA values by TID (rows) and port (columns C0 to C7):
TID   C0   C1   C2   C3   C4   C5   C6   C7
0     EF   CD   B2   98   72   55   3A   25
1     E8   CC   B1   97   71   54   39   23
2     E4   CB   AE   90   6E   53   36   1F
3     E2   CA   AD   8F   6D   52   35   1E
4     E1   C9   AC   88   6C   51   34   1D
5     E0   C7   AB   84   6B   4E   33   1B
6     DC   C6   AA   82   6A   4D   32   18
7     DA   C5   A9   81   69   4C   31   17
8     D9   C3   A7   80   67   4B   2E   10
9     D6   BC   A6   7C   66   4A   2D   0F
10    D5   BA   A5   7A   65   49   2C   08
11    D4   B9   A3   79   63   47   2B   04
12    D3   B6   9F   76   5C   46   2A   02
13    D2   B5   9E   75   5A   45   29   01
14    D1   B4   9D   74   59   43   27
15    CE   B3   9B   73   56   3C   26
ALPA and TID values by port (columns C0 to C7); each cell shows ALPA followed by TID:
      C0       C1       C2       C3       C4       C5       C6        C7
      EF   0   CD  16   B2  32   98  48   72  64   55  80   3A   96   25  112
      E8   1   CC  17   B1  33   97  49   71  65   54  81   39   97   23  113
      E4   2   CB  18   AE  34   90  50   6E  66   53  82   36   98   1F  114
      E2   3   CA  19   AD  35   8F  51   6D  67   52  83   35   99   1E  115
      E1   4   C9  20   AC  36   88  52   6C  68   51  84   34  100   1D  116
      E0   5   C7  21   AB  37   84  53   6B  69   4E  85   33  101   1B  117
      DC   6   C6  22   AA  38   82  54   6A  70   4D  86   32  102   18  118
      DA   7   C5  23   A9  39   81  55   69  71   4C  87   31  103   17  119
      D9   8   C3  24   A7  40   80  56   67  72   4B  88   2E  104   10  120
      D6   9   BC  25   A6  41   7C  57   66  73   4A  89   2D  105   0F  121
      D5  10   BA  26   A5  42   7A  58   65  74   49  90   2C  106   08  122
      D4  11   B9  27   A3  43   79  59   63  75   47  91   2B  107   04  123
      D3  12   B6  28   9F  44   76  60   5C  76   46  92   2A  108   02  124
      D2  13   B5  29   9E  45   75  61   5A  77   45  93   29  109   01  125
      D1  14   B4  30   9D  46   74  62   59  78   43  94   27  110
      CE  15   B3  31   9B  47   73  63   56  79   3C  95   26  111
ALPA-to-TID assignments by physical bus (PhId):
PhId1 (C1):  TID 0-15:  01 02 04 08 0F 10 17 18 1B 1D 1E 1F 23 25 26 27
             TID 16-30: 29 2A 2B 2C 2D 2E 31 32 33 34 35 36 39 3A 3C
PhId2 (C2):  TID 0-15:  43 45 46 47 49 4A 4B 4C 4D 4E 51 52 53 54 55 56
             TID 16-30: 59 5A 5C 63 65 66 67 69 6A 6B 6C 6D 6E 71 72
PhId3 (C3):  TID 0-15:  73 74 75 76 79 7A 7C 80 81 82 84 88 8F 90 97 98
             TID 16-30: 9B 9D 9E 9F A3 A5 A6 A7 A9 AA AB AC AD AE B1
PhId4 (C4):  TID 0-15:  B2 B3 B4 B5 B6 B9 BA BC C3 C5 C6 C7 C9 CA CB CC
             TID 16-30: CD CE D1 D2 D3 D4 D5 D6 D9 DA DC E0 E1 E2 E4
PhId5 (C5):  TID 0-1:   E8 EF
Introduction
Because MPE/iX does not fully support POSIX like UNIX, XP RAID Manager operates with some
restrictions in an MPE/iX environment. The system calls (wait3(), gettimeofday(), and so on) that are
not supported on MPE/iX are implemented within XP RAID Manager. XP RAID Manager is ported
within standard POSIX for MPE/iX only.
Network function
Because the Bind() system call of MPE/iX POSIX cannot specify the IP address of its own host, it
supports only INADDR_ANY. Therefore, XP RAID Manager must use NONE in the following
HORCM_MON entry. Also, a port number over 1024 must be specified in /etc/services.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm     1000         3000
When you run this JOB in the background by using the STREAM command of MPE/iX, the HORCM
daemon runs in the background. You can verify that the HORCM daemon is running as a JOB by
using the SHOWJOB command.
shell/iX> callci STREAM JRAIDMR1
#J15
shell/iX> callci SHOWJOB
JOBNUM  STATE  IPRI  JIN    JLIST  INTRODUCED   JOB NAME
#J14    EXEC         10S    LP     WED  9:02P   JRAIDMR0,MANAGER.SYS
#J15    EXEC         10S    LP     WED  9:02P   JRAIDMR1,MANAGER.SYS
#S28    EXEC         QUIET         WED  9:10P   MANAGER.SYS
Command device
Because MPE/iX POSIX does not provide raw I/O, XP RAID Manager uses the SCSI pass through
driver to access the command device on the XP256 and XP512 disk arrays, and uses the normal
read/write SCSI commands for some control operations.
You must confirm that MPE/iX has installed the patch MPEKXU3 before using the SCSI pass through
driver.
Installing
Because MPE/iX POSIX cannot run cpio to extract a file, the installation file is a tar file.
For further information, see Installing on MPE/iX systems, page 34.
Uninstalling
The RMuninst (rm -rf /$instdir/HORCM) command cannot remove the log directory
(/HORCM/log*/curlog only) while HORCM is running.
The only way to remove the log directory is to shut down and reboot the MPE/iX system, and then
run the RMuninst (rm -rf /$instdir/HORCM) command.
Therefore, the -zx option for commands is not supported, and is deleted as a displayed option.
                  STATUS    VOLUME
99-OPEN-3-CVS     UNKNOWN
100-OPEN-3-CVS    MASTER    MEMBER100   PVOL100-0
101-OPEN-3-CVS    MASTER    MEMBER101   PVOL101-0
102-OPEN-3-CVS    MASTER    MEMBER102   PVOL102-0
103-OPEN-3-CVS-C  MASTER    MEMBER103   PVOL103-0
/users/HORCM/log0/curlog: Permission denied
/users/HORCM/log0/tmplog: Permission denied
/users/HORCM/log1/curlog: Permission denied
/users/HORCM/log1/tmplog: Permission denied
The rm command results in Permission denied and does not remove the
/users/HORCM/log*/curlog directory.
MPE/iX POSIX commands cannot remove these directories even if you use the mv
/users/HORCM/log*/curlog /tmp command.
           PORT    SERIAL   LDEV   CTG   C/B/12   SSID   R:Group   PRODUCT_ID
ldev100    CL1-L   35013    17     -     s/s/ss   0004   5:01-01   OPEN-3
ldev101    CL1-L   35013    18     -     s/s/ss   0004   5:01-01   OPEN-3
ldev102    CL1-L   35013    19     -     s/s/ss   0004   5:01-01   OPEN-3
ldev103    CL1-L   35013    35     -     -        -      -         OPEN-3-CM
NOTE:
LDEV here refers to the MPE/iX term.
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm     1000         3000

HORCM_CMD
#dev_name     dev_name     dev_name

HORCM_DEV
#dev_group    dev_name     port#     TargetID     LU#     MU#

HORCM_INST
#dev_group    ip_address   service
You must start HORCM without entries under HORCM_DEV and HORCM_INST because the target
ID and LUN are unknown.
You can discover which physical device is mapped to which logical device (ldev, in MPE/iX terms)
by using raidscan -find.
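The mapping reported by raidscan -find can then be consumed by a script. The fragment below only illustrates parsing that kind of tabular output; the sample lines are modeled on the table that follows, and the parsing itself is not part of XP RAID Manager.

```shell
# Extract DEVICE_FILE -> LDEV pairs from raidscan -find style output.
# The sample text stands in for the real command's output.
sample='DEVICE_FILE PORT SERIAL LDEV PRODUCT_ID
/dev/ldev100 CL1-L 35013 17 OPEN-3
/dev/ldev101 CL1-L 35013 18 OPEN-3'

printf '%s\n' "$sample" | awk 'NR > 1 { print $1 "=" $4 }'
```

This prints one `device=ldev` line per data row, skipping the header.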
JOBNUM  STATE  IPRI  JIN    JLIST  INTRODUCED   JOB NAME
#J14    EXEC         10S    LP     WED  9:02P   JRAIDMR0,MANAGER.SYS
#S28    EXEC         QUIET         WED  9:10P   MANAGER.SYS
DEVICE_FILE    UID   S/F   PORT    TARG   LUN   SERIAL   LDEV   PRODUCT_ID
/dev/ldev100               CL1-L                35013    17     OPEN-3
/dev/ldev101               CL1-L                35013    18     OPEN-3
/dev/ldev102               CL1-L                35013    19     OPEN-3
/dev/ldev103               CL1-L                35013    35     OPEN-3-CM
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
DSG1         dsvol0     CL1-L
DSG1         dsvol1     CL1-L
DSG1         dsvol2     CL1-L

HORCM_INST
#dev_group   ip_address   service
DSG1         HOSTB        horcm1
JOBNUM  STATE  IPRI  JIN    JLIST  INTRODUCED   JOB NAME
#S28    EXEC         QUIET         WED  9:10P   MANAGER.SYS
#J17    EXEC         10S    LP     WED 11:34P   JRAIDMR0,MANAGER.SYS
SYS$POSIX_ROOT
Define the POSIX root before running XP RAID Manager.
Example:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT Device:[directory]
Mailbox driver
Redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table.
Example:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
279
$
$
$
$
Example 2:
$
$
$
$
Example 3:
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 _$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com _$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out _$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
$
$
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1 _$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com _$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out _$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
%RUN-S-PROC_ID, identification of created process is 00004166
Command device
XP RAID Manager uses the SCSI Class driver to access the command device on the disk array, and
defines DG* or DK* as the logical name for the device.
280
You must define the physical device as either DG*, DK*, or GK* by using the DEFINE/SYSTEM
command for XP RAID Manager versions 01.12.03 and earlier.
Example:
$ show device
Device Name
Device Status Error Count
Volume Label Free Blocks Trans Count
VMS4$DKB0:
Online
0
VMS4$DKB100:
Mounted
0
ALPHASYS
30782220
414
1
VMS4$DKB200:
Online
0
VMS4$DKB300:
Online
0
VMS4$DQA0:
Online
0
$1$DGA145: (VMS4) Online 0$1$DGA146: (VMS4) Online 0::$1$DGA153: (VMS4) Online 0
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
Syslog function
OpenVMS does not support the syslog function. Instead, XP RAID Manager uses the HORCM logging
file.
1. Because the software uses the Mailbox driver for communication between components, the
   command processor and daemon (called HORCM) must have the same privileges. If the command
   processor and HORCM run with different privileges, the command processor hangs or is unable
   to attach to the daemon.
2. The subprocess (HORCM, the XP RAID Manager daemon) created by spawn terminates when the
   terminal is logged off or the session is terminated. To run the process independently of LOGOFF,
   use the RUN /DETACHED command.
Device      Status   Error Count
(VMS4)      Online   0
(VMS4)      Online   0
(VMS4)      Online   0
(VMS4)      Online   0
(VMS4)      Online   0
(VMS4)      Online   0
(VMS4)      Online   0
Example 2:
$ inqraid DKA145-153 -cli
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  C/B/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H  30009    145   -
DKA146       CL1-H  30009    146        s/P/ss
DKA147       CL1-H  30009    147        s/S/ss
DKA148       -      -          -   -    -
DKA149       CL1-H  30009    149        P/s/ss  0004  5:01-11  OPEN-9
DKA150
DKA151       CL1-H  30009    151        P/s/ss  0004  5:01-11  OPEN-9
DKA152       CL1-H  30009    152        s/s/ss  0004  5:01-11  OPEN-9
DKA153       CL1-H  30009    153        s/s/ss  0004  5:01-11  OPEN-9
Example 3:
$ inqraid DKA148
sys$assign : DKA148 -> errcode = 2312
DKA148 -> OPEN: no such device or address
After enabling the S-VOL for writing by using either the pairsplit or horctakeover command,
you must run the mcr sysman command to use the S-VOLs for backup or disaster recovery.
Example 4:
$ pairsplit -g CAVG rw
$ mcr sysman
SYSMAN> io auto
SYSMAN> exit
Example 5:
$ sh dev dg
Device Name
$1$DGA145:   (VMS4)
$1$DGA146:   (VMS4)
$1$DGA147:   (VMS4)
$1$DGA148:   (VMS4)
$1$DGA149:   (VMS4)
$1$DGA150:   (VMS4)
$1$DGA151:   (VMS4)
$1$DGA152:   (VMS4)
$1$DGA153:   (VMS4)

$ DEFINE/SYSTEM DKA145 $1$DGA145:
  :
  :
$ DEFINE/SYSTEM DKA153 $1$DGA153:

If a command and daemon (HORCM) are executing as different jobs (using a different terminal),
you must redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
LDEV  CTG  C/B/12  SSID  R:Group  PRODUCT_ID
 145   -                          OPEN-9-CM
 146        s/S/ss  0004  5:01-11 OPEN-9
 147        s/P/ss  0004  5:01-11 OPEN-9
 148        s/S/ss  0004  5:01-11 OPEN-9
 149        s/P/ss  0004  5:01-11 OPEN-9
 150        s/S/ss  0004  5:01-11 OPEN-9
 151        s/P/ss  0004  5:01-11 OPEN-9
Example 2:
SYS$POSIX_ROOT:[etc]horcm0.conf
HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       30001       1000           3000

HORCM_CMD
#dev_name       dev_name       dev_name
DKA145
You must start XP RAID Manager without a description for HORCM_DEV and HORCM_INST
because the target ID and LUN are unknown.
You can determine the mapping of a physical device to a logical name by using the
raidscan -find command.
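The poll and timeout columns in HORCM_MON are expressed in 10 ms units, as their
headings indicate; a quick shell check of what the example values work out to:

```shell
# HORCM_MON poll/timeout values are in 10 ms units (per the column headings),
# so poll=1000 is a 10-second monitoring interval and timeout=3000 is 30 seconds.
poll_units=1000
timeout_units=3000
echo "poll:    $((poll_units * 10)) ms"     # prints "poll:    10000 ms"
echo "timeout: $((timeout_units * 10)) ms"  # prints "timeout: 30000 ms"
```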
Executing horcmstart 0
Example:
UID  S/F  PORT   TARG  LUN
  0   F   CL1-H     0    1
  0   F   CL1-H     0    2
  0   F   CL1-H     0    3
  0   F   CL1-H     0    4
  0   F   CL1-H     0    5
  0   F   CL1-H     0    6
  0   F   CL1-H     0    7
$ horcmshutdown 0
inst 0:
HORCM Shutdown inst 0 !!!
HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
VG01           oradb1       CL1-H     0            2       0
VG01           oradb2       CL1-H     0            4       0
VG01           oradb3       CL1-H     0            6       0

HORCM_INST
#dev_group     ip_address     service
VG01           HOSTB          horcm1

For horcm1.conf
HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
VG01           oradb1       CL1-H     0            3       0
VG01           oradb2       CL1-H     0            5       0
VG01           oradb3       CL1-H     0            7       0

HORCM_INST
#dev_group     ip_address     service
VG01           HOSTA          horcm0
horcm0     30001/udp
horcm1     30002/udp
You can verify that the XP RAID Manager daemon is running as a detached process by using the
SHOW PROCESS command.
$ show process horcm0
25-MAR-2003 23:27:27.72   User: SYSTEM     Process ID:   00004160
                          Node: VMS4       Process name: HORCM0

Terminal:
User Identifier:    [SYSTEM]
Base priority:      4
Default file spec:  Not available
Number of Kthreads: 1
Soft CPU Affinity:  off
Serial#   Seq#   Micro_ver     Cache(MB)
30009            50-04-00/00   8192
Group  PairVol(L/R)  Device_File  M  ,Seq#,LDEV#..P/S,Status,%  ,P-LDEV# M
VG01   oradb1(L)     DKA146       0   30009  146..S-VOL PAIR,  100   147  -
VG01   oradb1(R)     DKA147       0   30009  147..P-VOL PAIR,  100   146  -
VG01   oradb2(L)     DKA148       0   30009  148..S-VOL PAIR,  100   149  -
VG01   oradb2(R)     DKA149       0   30009  149..P-VOL PAIR,  100   148  -
VG01   oradb3(L)     DKA150       0   30009  150..S-VOL PAIR,  100   151  -
VG01   oradb3(R)     DKA151       0   30009  151..P-VOL PAIR,  100   150  -

Seq#   LDEV#  P/S   Status  Fence    %      P-LDEV#
30009  146..  SMPL  -----   ------,         -----
30009  147..  SMPL  -----   ------,         -----
30009  148..  SMPL  -----   ------,         -----
30009  149..  SMPL  -----   ------,         -----
30009  150..  SMPL  -----   ------,         -----
30009  151..  SMPL  -----   ------,         -----
$ type dev_file
DKA145-150
$
$ pipe type dev_file | mkconf -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE   Group   PairVol   PORT    TARG  LUN  M  SERIAL  LDEV
DKA145        -       -         -       -     -    -   30009   145
DKA146        URA     URA_000   CL1-H   0     2    0   30009   146
DKA147        URA     URA_001   CL1-H   0     3    0   30009   147
DKA148        URA     URA_002   CL1-H   0     4    0   30009   148
DKA149        URA     URA_003   CL1-H   0     5    0   30009   149
DKA150        URA     URA_004   CL1-H   0     6    0   30009   150

The generated configuration file contains:

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
# SER =  30009 LDEV =  146 [ FIBRE FCTBL = 3 ]
URA           URA_000     CL1-H    0           2      0
# SER =  30009 LDEV =  147 [ FIBRE FCTBL = 3 ]
URA           URA_001     CL1-H    0           3      0
# SER =  30009 LDEV =  148 [ FIBRE FCTBL = 3 ]
URA           URA_002     CL1-H    0           4      0
# SER =  30009 LDEV =  149 [ FIBRE FCTBL = 3 ]
URA           URA_003     CL1-H    0           5      0
# SER =  30009 LDEV =  150 [ FIBRE FCTBL = 3 ]
URA           URA_004     CL1-H    0           6      0

HORCM_INST
#dev_group    ip_address    service
URA           127.0.0.1     52323
LDEV  CTG  C/B/12  SSID  R:Group  PRODUCT_ID
 145   -                          OPEN-9-CM
 146        s/P/ss  0004  5:01-11 OPEN-9
 147        s/S/ss  0004  5:01-11 OPEN-9
 148    0   P/s/ss  0004  5:01-11 OPEN-9
Example 3:
$ pipe show device | MKCONF -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE   Group   PairVol   PORT    TARG  LUN  M  SERIAL  LDEV
$1$DGA145     -       -         -       -     -    -   30009   145
$1$DGA146     URA     URA_000   CL2-H   0     2    0   30009   146
$1$DGA147     URA     URA_001   CL2-H   0     3    0   30009   147
$1$DGA148     URA     URA_002   CL2-H   0     4    0   30009   148
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF', 'SYS$SYSROOT:[SYSMGR.LOG9.CURLOG]
HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
Example 4:
$ pipe show device | RAIDSCAN -find
DEVICE_FILE   UID  S/F  PORT    TARG  LUN  SERIAL  LDEV  PRODUCT_ID
$1$DGA145       0   F   CL2-H      0    1   30009   145  OPEN-9-CM
$1$DGA146       0   F   CL2-H      0    2   30009   146  OPEN-9
$1$DGA147       0   F   CL2-H      0    3   30009   147  OPEN-9
$1$DGA148       0   F   CL2-H      0    4   30009   148  OPEN-9
Example 5:
$ pairdisplay -g BCVG -fdc
Group  PairVol(L/R)  Device_File  M  ,Seq#,LDEV#..P/S,Status,%  ,P-LDEV# M
BCVG   oradb1(L)     $1$DGA146    0   30009  146..P-VOL PAIR,  100   147  -
BCVG   oradb1(R)     $1$DGA147    0   30009  147..S-VOL PAIR,  100   146  -
$
Example 6:
Device Name           Device   Status   Error Count   Volume Label
DKA145 $1$DGA145:     (VMS4)   Online   0
DKA146 $1$DGA146:     (VMS4)   Online   0
DKA153 $1$DGA153:     (VMS4)   Online   0
/etc/horcm0.conf
HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
127.0.0.1       52000       1000           3000

HORCM_CMD
#dev_name       dev_name       dev_name
DKA145

HORCM_DEV
#dev_group      dev_name      port#      TargetID      LU#      MU#

HORCM_INST
#dev_group      ip_address      service
You must start XP RAID Manager without a description for HORCM_DEV and HORCM_INST
because the target ID and LUN are unknown.
You can determine the mapping of a physical device to a logical name by using the
raidscan -find command.
HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
VG01           oradb1       CL1-H     0            2       0
VG01           oradb2       CL1-H     0            4       0
VG01           oradb3       CL1-H     0            6       0

HORCM_INST
#dev_group     ip_address     service
VG01           HOSTB          horcm1

For horcm1.conf
HORCM_DEV
#dev_group     dev_name     port#     TargetID     LU#     MU#
VG01           oradb1       CL1-H     0            3       0
VG01           oradb2       CL1-H     0            5       0
VG01           oradb3       CL1-H     0            7       0

HORCM_INST
#dev_group     ip_address     service
VG01           HOSTA          horcm0
Starting horcmstart 0 1
The XP RAID Manager subprocess created by bash is terminated when bash terminates.
bash$ horcmstart 0 &
19
bash$
starting HORCM inst 0
bash$ horcmstart 1 &
20
bash$
starting HORCM inst 1
Glossary
ACP
Array Control Processor. The ACP handles passing data between cache and the
physical drives. ACPs work in pairs. In the event of an ACP failure, the redundant
ACP takes control. Both ACPs work together sharing the load.
allocation
AL-PA
BC
Continuous Access
cache
Very high speed memory used to speed I/O transaction time. All reads and
writes to the disk array are sent to the cache. The data is buffered there until the
transfer to/from physical disks (with slower data throughput) is complete. Cache
memory speeds I/O throughput to the application.
CH
Channel.
CHA (channel adapter)
The channel adapter (CHA) provides the interface between the disk array and
the external host system. Occasionally this term is used synonymously with the
term channel host interface processor (CHIP).
CHP
CHPID
CLI
Cnt Ac
Cnt Ac-A
Cnt Ac-J
Cnt Ac-S
command device
Command View
CTGID (consistency group ID)
CU
CVS
Custom Volume Size. Volume Size Configuration (VSC) defines custom volume
sizes (CVS) that are smaller than normal fixed-sized logical disk devices (volumes).
(OPEN-V is a CVS-based custom disk size that you determine. OPEN-L does not
support CVS.)
disk group
A named group of disks selected from all the available disks in a disk array.
One or more virtual disks can be created from a disk group. Also the physical
disk locations associated with a parity group.
disk type
The manufacturing ID written into the physical disk controller firmware. In most
cases, the disk type is identical to the disk model number.
DKC (disk controller unit)
The array cabinet that houses the channel adapters and service processor (SVP).
DRR
Disk Recovery and Restore unit. The unit responsible for data recovery and
restoration in the event of a cache failure. Located on the ACP.
daemon
A process in UNIX systems that waits for events and remains after an event is
carried out.
DW
Duplex write.
DWL
emulation mode
The logical devices (LDEVs) associated with each RAID group are assigned an
emulation mode that makes them operate like open system disk drives of various
sizes. The emulation mode determines the volume's capacity.
EPO
Emergency power-off.
ESCON
Enterprise Systems Connection (an IBM trademark). A set of IBM and vendor
products that interconnect mainframe computers with each other and with attached
storage, locally attached workstations, and other devices, using optical fiber
technology and switches called ESCON Directors.
expanded LUN
A LUN is normally associated with only a single LDEV. The LUN Size Expansion
(LUSE) feature allows a LUN to be associated with 2-36 LDEVs. Essentially, LUSE
makes it possible for applications to access a single large pool of storage. LUSE
is an optional feature.
ExSA
failover
FC
Fibre Channel.
FC-AL
FCP
fence level
FICON
GB
Gigabyte.
HA
High availability.
HBA
HORCM_CMD
A section of the XP RAID Manager instance configuration file that defines the
disk devices used as command devices by XP RAID Manager to communicate
with the disk array.
HORCM_DEV
A section of the XP RAID Manager instance configuration file that describes the
physical volumes corresponding to the paired volume names.
HORCM_INST
A section of the XP RAID Manager instance configuration file that defines how
XP RAID Manager groups link to remote XP RAID Manager instances.
HORCM_LDEV
A section of the XP RAID Manager instance configuration file that specifies stable
LDEV and serial numbers of physical volumes that correspond to paired logical
volume names.
HORCM_MON
A section of the XP RAID Manager instance configuration file that describes the
host name or IP address, the port number, and the paired volume error monitoring
interval of the local host.
host mode
Each port can be configured for a particular host type. These modes are
represented as two-digit hexadecimal numbers. For example, host mode 08
represents an HP-UX host.
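Because host modes are plain two-digit hexadecimal values, a front-end script can
branch on them directly. A minimal sketch; only mode 08 (HP-UX) is taken from the
text above, all other codes are deliberately left unknown:

```shell
# Map a host mode (two-digit hex string) to a host type.
# Only mode 08 (HP-UX) is grounded in the text; other codes are not listed here.
mode=08
case "$mode" in
  08) echo "HP-UX" ;;
  *)  echo "unknown host mode: $mode" ;;
esac
# prints "HP-UX"
```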
hot standby
Using one or more servers (or disks) as a standby in case of a primary server
(disk) failure.
HP
Hewlett-Packard Company.
instance
instance configuration file
An XP RAID Manager file that defines the link between a volume and an XP RAID
Manager instance. This file consists of five sections: HORCM_MON,
HORCM_CMD, HORCM_DEV, HORCM_LDEV, and HORCM_INST.
LCP
LDEV
Logical device. An LDEV is created when a RAID group is divided into sections
using a host emulation mode (for example, OPEN-9 or OPEN-M). The number
of resulting LDEVs depends on the emulation mode. The term LDEV is often used
synonymously with the term volume.
local disk
local instance
LUN
Logical Unit Number. A physically addressable storage unit (virtual disk) consisting
of multiple portions of physical disks addressed as a single unit. A LUN results
from mapping a SCSI logical unit number, port ID, and LDEV ID to a RAID group.
The size of the LUN is determined by the emulation mode of the LDEV and the
number of LDEVs associated with the LUN. For example, a LUN associated with
two OPEN-L LDEVs has a size of 72 GB.
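The capacity arithmetic in the example above is simply the per-LDEV size times the
number of LDEVs; a quick sketch (the 36 GB OPEN-L LDEV size is inferred from the
72 GB figure in the text, not stated directly):

```shell
# LUN capacity = per-LDEV capacity x number of LDEVs associated with the LUN.
# 36 GB per OPEN-L LDEV is inferred from the 72 GB example in the text.
ldev_gb=36
n_ldevs=2
echo "$((ldev_gb * n_ldevs)) GB"   # prints "72 GB"
```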
LUSE
A LUN is normally associated with only a single LDEV. The LUSE feature allows
a LUN to be associated with 2 to 36 LDEVs. Essentially, LUSE (logical unit size
expansion) makes it possible for applications to access a single large pool of
storage. The LUSE feature is available when the LUN Manager product is installed.
MB
Megabyte.
MCU
OFC
OPEN-x
A general term describing any one of the supported OPEN emulation modes (for
example, OPEN-L).
parity group
path
Path and LUN are synonymous. Paths are created by associating a port, a
target, and a LUN ID with one or more LDEVs.
PB
Petabyte.
port
A connector on a channel adapter card in the disk array. A port passes data
between the disk array and external devices, such as a host. Ports are named
using a port group and port letter, for example, CL1-A.
P-VOL
RAID
RAID group
RCP
remote instance
The instance with which the local instance communicates, as configured in the
HORCM_INST section of the XP RAID Manager instance configuration file.
RCU
Remote Web Console (RWC)
R-SIM
script file
SCSI
shell script
SIM
SNMP
SSID
S-VOL
Secondary (or remote) volume. The volume that receives the data from the P-VOL
(primary volume).
SVP
Service processor. The processor built into the array's disk controller. The SVP
provides a direct interface into the disk array. It is used only by the HP service
representative.
takeover
The process in which a remote standby disk array takes over processing from
the previously active local disk array.
TB
Terabyte.
TID
Target ID.
Volume
Generic term for a number of physical disks or portions of disks logically bound
together as a virtual disk containing contiguous logical blocks. Volume can also
be software shorthand for a mapped volume (Windows drive letter or mount
point). On the XP disk array, a volume is a uniquely identified virtual storage
device composed of a control unit (CU) component and a logical device (LDEV)
component separated by a colon, for example, 00:00.
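The CU:LDEV identifier described above can be produced from numeric components
with ordinary hexadecimal formatting; a minimal sketch:

```shell
# Format a volume ID as CU:LDEV, two hex digits each,
# e.g. CU 0 and LDEV 1 give 00:01.
cu=0
ldev=1
printf '%02X:%02X\n' "$cu" "$ldev"   # prints "00:01"
```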
VSC
Volume size customization. Synonymous with CVS. A feature that defines custom
volumes (CVS volumes) that are smaller than normal fixed-sized logical disk
devices (OPEN-x volumes).
XDF
XP Command View Advanced Edition Software
WWN
Index
A
addresses
    Fibre Channel conversion in XP RAID Manager, 279
audience, 15

C
command devices, 29
command devices, switching, 30
commands
    using XP RAID Manager, 51
configuration
    setting up, 32
configuration file examples, 215
configuration file parameters, 39
conventions
    document, 15
    storage capacity values, 16

F
features, 19
Fibre Channel
    addressing in RM, 279
Fibre Channel addressing, 267
findcmddev command, 176

G
general commands, 87
glossary, 295

H
horctakeover command, 96

I
inqraid command, 99
installing
    MPE/iX, 34
    OpenVMS, 37
installing XP RAID Manager
    UNIX systems, 33
instances, 28
    XP RAID Manager, 28

M
mkconf command, 109
mount command option, 177
MPE socket hang, 275
MPE/iX
    installing, 34, 274
    known issues, 275
    porting notice, 273
    restrictions, 273
    start-up procedures, 276
    uninstalling, 274

O
OpenVMS
    installing, 37, 282
    known issues, 283

P
paircreate command, 111
paircurchk command, 120
pairdisplay command, 122
pairevtwait command, 132
pairmon command, 136
pairresync command, 138
pairsplit command, 147
pairsyncwait command, 152
pairvolchk command, 156
parameters, configuration file, 39
porting notice, MPE/iX, 273
portscan command option, 179
PVOL-takeover function, 261

S
S-VOL data consistency function, 256
scripts with XP RAID Manager commands, 51
SCSI pass through driver, 274
setenv command option, 180
setting up XP RAID Manager, 32
sleep command option, 181
Start-up procedures using detached process on DCL, 284
state transitions, 245

T
takeover-switch function, 258
technical support
    HP, 17
    service locator website, 17
topologies, 29
troubleshooting, 87

U
umount command option, 184
UNIX systems
    installing, 33
user files
    creating, 61
usetenv command, 185
using XP RAID Manager, 51
using XP RAID Manager commands, 51

V
variables, environment, 61

W
websites
    HP, 17
    HP Subscriber's Choice for Business, 17
    product manuals, 15
Windows NT/2000/2003 command options, 174

X
XP RAID Manager
    command devices, 29
    features, 19
    general commands, 87
    instances, 28
    product description, 19
    system requirements, 31
    topologies, 29
    using, 51
    Windows NT/2000/2003 command options, 174
XP RAID Manager commands, 51