Hitachi Unified Storage VM Block Module: Hitachi High Availability Manager User Guide
FASTFIND LINKS
Contents
Product Version
Getting Help
MK-92RD7052-00
Contents
Preface
Intended audience
Product version
Release notes
Document revision level
Referenced documents
Document conventions
Conventions for storage capacity values
Accessing product documentation
Getting help
Comments
Licenses
License capacity
Pair volume requirements
Quorum disk requirements
Data path requirements and recommendations
Storage Navigator requirements
External storage systems
Planning failover
Preventing unnecessary failover
Sharing volumes with other Hitachi Data Systems software products
Virtual Partition Manager
Cache Residency Manager
Performance Monitor
LUN Manager
Open Volume Management
LUN Expansion
Configurations with ShadowImage volumes
Configuring HAM with ShadowImage
Required software
Supported cluster software
Configuration requirements
Configuring the system
Disaster recovery in a cluster system
Restrictions
Troubleshooting
Potential causes of errors
Is there an error message for every type of failure?
Where do you look for error messages?
Basic types of troubleshooting procedures
Troubleshooting general errors
Suspended volume pair troubleshooting
The workflow for troubleshooting suspended pairs when using Storage Navigator
Troubleshooting suspended pairs when using CCI
Location of the CCI operation log file
Example log file
Related topics
Recovery of data stored only in cache memory
Pinned track recovery procedures
Recovering pinned tracks from volume pair drives
Recovering pinned tracks from quorum disks
Contacting the Hitachi Data Systems Support Center
Glossary
Index
Preface
The Hitachi High Availability Manager User Guide describes and provides
instructions for using the Hitachi High Availability Manager software to plan,
configure, and perform pair operations on the Hitachi Unified Storage VM
(HUS VM) storage system.
Please read this document carefully to understand how to use this product,
and maintain a copy for reference purposes.
Intended audience
Product version
Release notes
Referenced documents
Document conventions
Getting help
Comments
Intended audience
This document is intended for system administrators, Hitachi Data Systems
representatives, and authorized service providers who are involved in
installing, configuring, and operating the HUS VM storage system.
Readers of this document should be familiar with the following:
Data processing and RAID storage systems and their basic functions.
The Unified Storage VM storage system and the Hitachi Unified Storage
VM Block Module Hardware User Guide.
The Storage Navigator software for the Unified Storage VM and the
Hitachi Storage Navigator User Guide.
Product version
This document revision applies to HUS VM microcode 73-03-0x or later.
Release notes
The Hitachi Unified Storage VM Release Notes provide information about the
HUS VM microcode (DKCMAIN and SVP), including new features and
functions and changes. The Release Notes are available on the Hitachi Data
Systems Portal: https://portal.hds.com
Date: October 2013. Description: Initial Release.
Referenced documents
HUS VM documentation:
Document conventions
This document uses the following typographic conventions:
[Table: typographic conventions for Bold, Italic, Monospace, [ ] square brackets, { } braces, and the | vertical bar, and the Tip, Note, Caution, and WARNING notices; the convention descriptions are lost in extraction.]
Conventions for storage capacity values

Physical storage capacity values (e.g., disk drive capacity) are calculated based on the following values:

Physical capacity unit    Value
1 KB                      1,000 bytes
1 MB                      1,000^2 bytes
1 GB                      1,000^3 bytes
1 TB                      1,000^4 bytes
1 PB                      1,000^5 bytes
1 EB                      1,000^6 bytes

Logical storage capacity values (e.g., logical device capacity) are calculated based on the following values:

Logical capacity unit     Value
1 block                   512 bytes
1 KB                      1,024 bytes
1 MB                      1,024^2 bytes
1 GB                      1,024^3 bytes
1 TB                      1,024^4 bytes
1 PB                      1,024^5 bytes
1 EB                      1,024^6 bytes
Getting help
The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Support Portal for contact information: https://hdssupport.hds.com
Comments
Please send us your comments on this document:
[email protected]. Include the document title, number, and revision.
Please refer to specific sections and paragraphs whenever possible. All
comments become the property of Hitachi Data Systems.
Thank you!
xii
Preface
HUS VM Block Module Hitachi High Availability Manager User Guide
1
High Availability Manager overview
HAM ensures high availability for host applications that use Hitachi Unified
Storage VM Block Module (HUS VM) storage systems. HAM protects against the
loss of application availability when input/output (I/O) failures occur in
the primary storage system: it automatically switches host applications from
the primary storage system to the secondary storage system and enables
recovery from the failures that caused the I/O failure.
HAM is designed for recovery from on-site disasters such as power supply
failure. TrueCopy is suited to large-scale disaster recovery.
HAM components
Data replication
Failover
The HAM primary and secondary storage systems are connected to the
same host. When a HAM pair is created, the host sees the primary and
secondary volumes as the same volume.
HAM components
A typical configuration consists of two Hitachi Unified Storage VM Block
Module storage systems installed at the primary and secondary sites. In
addition, the HAM system consists of the following components:
Dedicated Fibre Channel data paths linking the primary and secondary
storage systems, with Fibre Channel switches.
The following interface tools for configuring and operating the pairs:
Hitachi Storage Navigator (SN) graphical user interface (GUI),
located on a management LAN.
Hitachi Command Control Interface software (CCI), located on the
host.
Secondary storage system LCUs containing the copy volumes are called
remote control units (RCUs).
Normally the MCU contains the P-VOLs and the RCU contains the S-VOLs.
The MCU communicates with the RCU via the data path. You can
simultaneously set P-VOL and S-VOL in the same storage system if the
volumes are used by different pairs. In this case, the CU can function
simultaneously as an MCU for the P-VOL and as an RCU for the S-VOL.
The MCU is often referred to as the primary storage system in this
document; the RCU is often referred to as the secondary storage system.
Pair volumes
Original data from the host is stored in the P-VOL; the remote copy is stored
in the S-VOL. Data is copied as it is written to the P-VOL; new updates are
copied only when the previous updates are acknowledged in both primary
and secondary volumes.
Once a pair is created, you can do the following:
Delete the pair, which removes the pair relationship, though not the
data.
Data paths
The physical links between the primary and secondary storage systems are
referred to as the "data path." These links include the Fibre Channel
interface cables and switches. HAM commands and data are transmitted
through the data path. The data path links the primary and secondary
storage systems through two types of Fibre Channel ports, Initiator and
RCU Target ports.
Quorum disk
The quorum disk is a continuously updated volume that contains
information about the state of data consistency between the P-VOL and
S-VOL. HAM uses this information in the event of a failure to direct host
operations to the secondary volume. The quorum disk is located in an
externally attached storage system.
Multipath software
Multipath software distributes the loads among the paths to the current
production volume. For HAM, the multipath software duplicates the paths
between the host and P-VOL, so that the paths are in place between the
host and the S-VOL also.
If a failure occurs in the data path to the primary storage system, or with
the primary storage system, the multipath software transfers host
operations to the S-VOL in the secondary storage system.
Data replication
HAM supports data sharing between the following volumes:
Failover
A failover is an automatic takeover of operations from the primary storage
system to the secondary storage system. This occurs when the primary
storage system cannot continue host operations due to a failure in either
the data path or the primary storage system. The multipath software in the
host switches I/O to the remote system. A multipath software package that
has been qualified with HAM must be installed on the host.
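The failover behavior described above can be sketched as a simple model. The `route_io` helper below is hypothetical (it is not part of any Hitachi or multipath API); it only illustrates the rule that I/O stays on the paths to the P-VOL until every path to the primary storage system has failed, at which point I/O moves to the S-VOL paths:

```python
# Illustrative model of multipath failover (not a Hitachi API): I/O is routed
# down the first healthy path to the P-VOL; when every path to the primary
# storage system has failed, host I/O switches to the S-VOL paths.

def route_io(primary_paths, secondary_paths):
    """Return (volume, path) chosen for the next I/O, or raise if nothing is up."""
    for path, healthy in primary_paths.items():
        if healthy:
            return ("P-VOL", path)   # normal operation: primary system serves I/O
    for path, healthy in secondary_paths.items():
        if healthy:
            return ("S-VOL", path)   # failover: secondary system takes over
    raise RuntimeError("no healthy path to either storage system")

# Normal operation: one primary path is down, the other still carries I/O.
assert route_io({"CL1-A": False, "CL2-A": True}, {"CL1-B": True}) == ("P-VOL", "CL2-A")
# Failover: all host-MCU paths are down, so I/O moves to the secondary system.
assert route_io({"CL1-A": False, "CL2-A": False}, {"CL1-B": True}) == ("S-VOL", "CL1-B")
```

The port names (CL1-A and so on) are illustrative placeholders; real multipath packages make this decision per I/O, transparently to the application.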
2
System implementation planning and system requirements
Understanding the system planning process and the various requirements
of HAM enables you to plan a system that functions properly and can be
configured to meet your business needs over time.
Required hardware
Multipath software
Licenses
License capacity
Planning failover
Required hardware
The following hardware is required for a HAM system:
A physical path connection between the primary storage system and the
external system hosting the quorum disk.
Multipath software
A multipath software package qualified with HAM is required on each host
platform for failover support. Hitachi's multipath software, Dynamic Link
Manager, supports the following host platforms:
AIX
Linux
Solaris
VMware. Requires host mode option 57 on the host group where VMware
resides.
Make sure that the primary, secondary, and external storage systems
have their own independent sources of power.
The HAM P-VOL and S-VOL must be located in different storage systems.
Primary and secondary storage systems each require two initiator ports
and two RCU target ports.
The initiator port sends HAM commands to the paired storage
system. Initiator ports must be configured on the primary storage
system for HAM operations. However, for disaster recovery, you
should also configure initiator ports on the secondary storage
system.
The RCU target port receives HAM commands and data. RCU target ports
must be configured on the secondary storage system for HAM
operations. For disaster recovery, you should also configure RCU target
ports on the primary storage system.
Additional microprocessors for replication links may be required based
on replication workload.
If you use switches, prepare them for both the primary and the
secondary storage systems. Do not share a switch between the two.
Using two independent switches provides redundancy in the event of
failure in one.
Cache and non-volatile storage (NVS) must be operable for both the
MCU and RCU. If not, the HAM Paircreate CCI command will fail.
Licenses
The following Hitachi Data Systems software products must be installed on
both the primary and secondary storage systems. Each product requires a
license key.
TrueCopy
License capacity
A single HAM license must be purchased for each HUS VM system. The HAM
license is not capacity based. The capacity of the TrueCopy license
determines the capacity of HAM volumes that may be replicated. Review the
TrueCopy license installed on your system to verify that it meets your
requirements.
For example, when the license capacity for TrueCopy is 10GB, the volume
capacity that can be used for HAM is up to 10GB. When 2GB out of 10GB of
license capacity for TrueCopy is used, the volume capacity that can be used
for HAM is up to the remaining 8GB.
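The capacity rule in the example above is simple subtraction; the helper below is an illustrative sketch (not a Hitachi tool) of the same arithmetic:

```python
# Volume capacity available to HAM = TrueCopy license capacity minus the
# TrueCopy capacity already in use (illustrative arithmetic only).

def ham_capacity_gb(truecopy_license_gb, truecopy_used_gb):
    return truecopy_license_gb - truecopy_used_gb

# With a 10GB TrueCopy license of which 2GB is used, up to 8GB is left for HAM.
assert ham_capacity_gb(10, 2) == 8
# With nothing used, the full license capacity is available to HAM.
assert ham_capacity_gb(10, 0) == 10
```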
For information on licenses and the actions to take for expired licenses and
exceeded capacity, see the Hitachi Storage Navigator User Guide.
LDEVs for the P-VOL and S-VOL must be created and formatted before
creating a pair.
A P-VOL can be copied to only one S-VOL; and an S-VOL can be the copy
of only one P-VOL.
If you are storing data in an external volume or volumes, make sure the
external volumes are mapped to the primary or secondary storage
system they support.
If you plan to create multiple pairs during the initial copy operation,
observe the following:
All P-VOLs must be in the same primary storage system, or in
mapped external systems.
All S-VOLs must be in the same secondary storage system, or in
mapped external systems.
You can specify the number of pairs to be created concurrently during
initial copy operations (1 to 16).
For more information about the System Option dialog box, see the
topic on changing option settings in the Hitachi TrueCopy User
Guide.
During the initial pair operation in SN, you will select multiple P-VOLs
on the primary storage system for pairing. After selecting the P-VOLs,
only the P-VOL with the lowest LUN appears in the subsequent
Paircreate dialog box. To pair the other P-VOLs to the correct
S-VOLs, observe the following:
- In the Paircreate dialog box, you can select only one S-VOL. This
should be the volume to be paired with the P-VOL that is shown.
- S-VOLs for the remaining P-VOLs are assigned automatically by SN,
according to their LUNs. If you are creating three P-VOLs, and you
assign LUN001 as the S-VOL in the Paircreate dialog box, the
remaining S-VOLs will be assigned incrementally by LUN (for
example, LUN002 and LUN003).
- Make sure that all S-VOLs to be assigned automatically are
available, are numbered in an order that will pair them properly, and
that they correspond in size to the P-VOLs.
- If an S-VOL is not available for a P-VOL, the pair must be created
individually.
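The assignment rule above — the remaining S-VOLs are chosen incrementally by LUN after the first explicit choice — can be sketched like this (an illustration of the rule only, not actual Storage Navigator code):

```python
# Sketch of SN's automatic S-VOL assignment rule (illustrative, not SN code):
# the user picks the S-VOL for the lowest-LUN P-VOL; the remaining P-VOLs
# are paired with S-VOLs whose LUNs follow incrementally.

def assign_svols(pvol_luns, first_svol_lun):
    """Map each P-VOL LUN (ascending) to an incrementally numbered S-VOL LUN."""
    pvols = sorted(pvol_luns)
    return {p: first_svol_lun + i for i, p in enumerate(pvols)}

# Three P-VOLs with LUN 1 chosen as the first S-VOL: the rest get LUNs 2 and 3.
assert assign_svols([10, 11, 12], 1) == {10: 1, 11: 2, 12: 3}
```

This is why all automatically assigned S-VOLs must exist, be numbered in an order that pairs them properly, and match their P-VOLs in size.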
All HAM pairs created between one MCU and one RCU must use the same
quorum disk. Thus, the P-VOL and S-VOL for a pair must use the same
quorum disk.
Read/Write operations from the storage system to the quorum disk are
for internal use. These operations are performed even when Write
Pending operations reach 70%.
Caution: Quorum disks are unique in that they are shared between two
storage systems. For data protection reasons, do not share any other kind
of volume between two storage systems.
Do not share the data paths with TrueCopy. Install independent data
paths for HAM.
Install at least two data paths from the primary storage system to the
secondary storage system, and two data paths from the secondary
storage system to the primary storage system. This allows data transfer
to continue in the event of a failure in one path's cables or switches.
Optical fibre cables are required to connect the primary and secondary
storage systems.
[Table: minimum, maximum, and recommended numbers of physical data paths, logical paths, and ports; at least two physical data paths and two logical paths in each direction are recommended. The remaining cell values are lost in extraction.]
For more information, see the Hitachi Storage Navigator User Guide.
You can connect one external system per HAM P-VOL, and one per S-VOL.
Planning failover
Automatic failover of host operations to the secondary storage system is
part of the HAM system. Failover occurs when the primary storage system
cannot continue I/O operations due to a failure. The multipath software in
the host switches I/O to the secondary storage system.
In the HAM system, the quorum disk stores the information about the state
of data consistency between the P-VOL and S-VOL, which is used to check
whether P-VOL or S-VOL contains the latest data. If the P-VOL and S-VOL
are not synchronized due to a failure, the MCU and RCU determine which
volume should accept host I/O based on the information stored in the
quorum disk.
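The arbitration described above can be sketched as a minimal decision function. This is illustrative logic only — the real protocol is internal to the MCU and RCU — but it captures the role of the quorum record when the pair is out of sync:

```python
# Illustrative quorum-based arbitration (not the actual on-disk protocol):
# while the P-VOL and S-VOL are synchronized, the P-VOL keeps serving host
# I/O; otherwise the quorum disk's record of which side holds the latest
# data decides where host operations are directed.

def volume_for_host_io(synchronized, quorum_latest):
    if synchronized:
        return "P-VOL"
    return quorum_latest   # "P-VOL" or "S-VOL", as recorded on the quorum disk

assert volume_for_host_io(True, "P-VOL") == "P-VOL"
# Out of sync after a primary-side failure: the quorum record names the S-VOL.
assert volume_for_host_io(False, "S-VOL") == "S-VOL"
```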
The following figure illustrates failover when a failure occurs at the MCU.
The RCU issues service information messages (SIMs) when the data
path is blocked. The multipath software issues messages about the
failure in the host-MCU paths.
Health check of the quorum disk by the MCU and RCU. The primary or
secondary storage system issues a SIM if a failure in the quorum disk is
detected. Host operations will not switch to the S-VOL if the quorum disk
fails. In this case, the failure must be cleared as soon as possible and
the quorum disk recovered.
The multipath software issues messages when all host-MCU paths fail.
These messages must then be checked and the cause corrected. If
failover took place, host operations should be switched back to the
primary storage system.
[Table: host mode option behavior when the option is ON (normal operation) and OFF; the remaining detail is lost in extraction.]
For more information about setting host mode options, see the Provisioning
Guide.
Table 2-2 Volume types that can be shared with HAM volumes
[Table columns lost in extraction. For each product's volume types, the table indicates whether the volume can also be used as a HAM P-VOL or S-VOL. Products covered: LUN Manager, Open Volume Management (VLL volumes), system disks, Volume Shredder, Dynamic Provisioning (including pool volumes), Universal Volume Manager, ShadowImage (P-VOL, S-VOL, reserved volumes), Thin Image, TrueCopy, Universal Replicator (including journal volumes), Volume Migration, and Multiplatform Backup.]
*1: For information on using Volume Migration, contact the Hitachi Data Systems
Support Center.
The following topics clarify key information regarding the use of other
software products.
Performance Monitor
LUN Manager
LU paths cannot be deleted after you create HAM pairs. To delete the LU
path, you need to release the HAM pair first.
VLL volumes can be assigned to HAM pairs, provided that the S-VOL has
the same capacity as the P-VOL.
LUN Expansion
LUSE volumes can be assigned to HAM pairs, provided that both P-VOL and
S-VOL are LUSE volumes consisting of the same number of LDEVs, the same
size, and the same structure.
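The LUSE pairing rule above can be expressed as a simple check. The helper is hypothetical (for illustration only): it treats a LUSE volume as the list of its LDEV sizes, so "same number of LDEVs, same size, same structure" reduces to list equality:

```python
# Illustrative check of the LUSE pairing rule: both volumes must be LUSE
# volumes built from the same number of LDEVs with the same LDEV sizes,
# i.e. the same structure (hypothetical helper, not a Hitachi API).

def luse_pairable(pvol_ldev_sizes, svol_ldev_sizes):
    return (len(pvol_ldev_sizes) == len(svol_ldev_sizes)
            and pvol_ldev_sizes == svol_ldev_sizes)

# Two 3-LDEV LUSE volumes with identical LDEV sizes can be paired.
assert luse_pairable([100, 100, 50], [100, 100, 50])
# Same total capacity but a different structure cannot.
assert not luse_pairable([100, 100, 50], [125, 125])
```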
3
System configuration
Configuring the HAM system is the first main task in setting up HAM. It
follows system implementation planning and is based on the outcome of that
planning effort. All of the configuration procedures must be completed
before you can begin using the system.
Connect the external system that hosts the quorum disk to the primary and
secondary storage systems.
Prerequisites
Before you begin, make sure you have:
Additional documentation
To ensure that you use the correct steps to install the software, refer to the
installation instructions in the following documentation during the
installation process:
Prerequisites
Before you begin, make sure you have:
Caution: Make sure that you install the software in the order described in
the procedure. If you do not, you may have to uninstall and reinstall the
software.
Additional documentation
To ensure that you use the correct steps to configure the systems, refer to
these instructions in the following documentation during the configuration
process:
Details about CLPR in the Hitachi Virtual Partition Manager User Guide.
Prerequisites
Before you begin, make sure you have:
Workflow
Use the following process to configure the primary and secondary storage
systems:
1. Stop Performance Monitor, if it is running, to avoid performance impact
on the TCP/IP network.
2. Set the port attributes for HAM.
3. Configure the primary and secondary storage systems and establish
logical paths between the primary and secondary HUS VM systems.
Prerequisites
Before you begin, make sure you have:
Procedure
1. In the external storage system, prepare a volume for use as the quorum
disk and specify any required system options.
2. Using the External attribute, configure the ports on the primary and
secondary storage systems that are connected to the disk.
3. Set the paths from the disk to the primary and secondary storage
systems to Active.
4. Using SN's Ports/Host Groups and External Storages windows, map
the primary and secondary storage systems to the disk by doing the
following:
Configure at least two cross-system paths between the primary
storage system and quorum disk, and two between the secondary
storage system and the quorum disk.
Specify these external volume parameters:
- Emulation type: OPEN-V.
- Number of LDEVs: 1.
- Cache mode: This parameter is not used for quorum disks. Either
Enable or Disable can be specified.
- Inflow control: Select Disable. Data will be written in the cache
memory.
- CLPR: If you partition cache memory, specify the CLPR that the
quorum disk uses.
- LDKC:CU:LDEV number: The number is used to identify the
quorum disk for the primary and secondary storage systems.
5. In the External Path Groups tab in SN, configure port parameters for
the primary and secondary storage systems by specifying the following
values:
QDepth: This is the number of Read/Write commands that can be
issued (queued) to the quorum disk at a time.
The default is 8.
I/O TOV: This is the timeout value to the quorum disk from the
primary and secondary storage systems. The value must be less than
the application's timeout value.
Recommended: 15 seconds
Default: 15 seconds
Path Blockade Watch: This is the time that you want the system
to wait after the quorum disk paths are disconnected before the
quorum disk is blocked.
Recommended: 10 seconds; Default: 10 seconds.
Prerequisites
Before you begin, make sure you have:
Configured the quorum disk (see Configuring the quorum disks on page
3-6).
Procedure
To add the quorum disk ID, follow these steps:
1. Delete any data in the external volume that you assigned to be the
quorum disk.
2. Access the MCU or RCU in SN, then click Actions > Remote Copy >
TrueCopy > Quorum Disk Operation.
3. Make sure that you are in the modify mode.
4. In the Quorum Disk Operation window, right-click the quorum disk ID
that you want to add, then click Add Quorum Disk ID.
5. In Add Quorum Disk ID dialog box, from the Quorum Disk drop-down
menu, select the LDKC:CU:LDEV number that you specified when
mapping the external volume. This is the volume that will be used for
the quorum disk.
6. From the RCU drop-down menu, select the CU that is to be paired with
the CU on the current storage system. The list shows the RCU serial
number, LDKC number, controller ID, and model name registered in CU
Free.
7. Click Set. The settings are shown in the Preview area.
8. Verify your settings. To make a correction, select the setting, right-click,
and click Modify.
9. Click Apply to save your changes.
Prerequisites
Before you begin, make sure you have:
Added the ID for the quorum disk to the storage systems (see Adding
the ID for the quorum disk to the storage systems on page 3-7).
Procedure
Use the following host mode options for your system:
If using VMware or Windows, set host mode option 57 on the host group
where VMware or Windows reside.
If using software that uses a SCSI-2 Reservation, set host mode option
52 on the host groups where the executing node and standby node
reside.
For more information on host mode options, see the Provisioning Guide.
4
Working with volume pairs
A number of tasks must be performed on volume pairs as part of your
normal HAM system maintenance activities, when troubleshooting system
issues, or when taking action to recover from failure.
Splitting pairs
Resynchronizing pairs
Releasing a pair
Creating pairs.
Releasing pairs.
Resynchronizing pairs.
Splitting pairs.
During pair changes. Check pair status to see that the pairs are
operating correctly and that data is updating from P-VOLs to S-VOLs in
PAIR status, or that differential data management is happening in Split
status.
Note: A pair task can be completed only if the pair is in a status that
permits the task. Checking the status before you run a CCI command lets
you verify that the pair is in a status that permits the task.
The following figure shows HAM pair status before and after pair creation,
splitting pairs, various errors, and after releasing a pair.
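The status gating described in the note above can be sketched as a lookup table. The status names are from this guide, but the mapping shown is an illustrative simplification, not the complete transition matrix; consult the full status tables before operating on a real system:

```python
# Illustrative status gate for pair tasks (simplified; not the complete
# HAM transition matrix).
ALLOWED = {
    "split":   {"PAIR"},            # a pair must be synchronized before splitting
    "resync":  {"PSUS", "PSUE"},    # only suspended pairs can be resynchronized
    "release": {"COPY", "PAIR", "PSUS", "PSUE"},
}

def can_run(task, pair_status):
    """Return True if the pair's current status permits the requested task."""
    return pair_status in ALLOWED.get(task, set())

assert can_run("resync", "PSUS")
assert not can_run("split", "COPY")   # still copying: split not permitted
```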
9. When you release a pair, the status of the P-VOL and the S-VOL changes
to SMPL.
[Table: for each combination of P-VOL and S-VOL pair status (SMPL, COPY, PAIR, PSUS, SSUS, SSWS, PSUE, PDUB), the table shows the volume access allowed to the host (read/write or not accessible) and whether a lock is held. The cell-by-cell detail is lost in extraction; note that read-only access to a split P-VOL can be set with the pairsplit -r option.]
* If an S-VOL in PAIR status accepts write I/O, the storage system assumes that a failover occurred.
Then the S-VOL changes to SSWS status and the P-VOL usually changes to the PSUS status.
The HAM pair statuses are described below.
SMPL: The volume is not assigned to a HAM pair.
COPY: The initial copy for a HAM pair is in progress, but the pair is not
synchronized yet.
PAIR: The initial copy for a HAM pair is completed, and the pair is
synchronized.
PSUS: Although the paired status is retained, the user split the HAM pair,
and updating of the S-VOL is stopped. This status applies only to the P-VOL.
While the pair is split, the storage system keeps track of updates to the
P-VOL.
SSUS: Although the paired status is retained, the user split the HAM pair,
and updating of the S-VOL is stopped. This status applies only to the S-VOL.
If the pair is split with the option of permitting updates to the S-VOL
specified, the storage system keeps track of updates to the S-VOL.
PSUE: Although the paired status is retained, the pair was suspended by the
storage system because of an error.
PDUB: This status is shown only for pairs using LUSE. Although the paired
status is retained, the pair status transition is suspended because of an
error in some LDEVs within the LUSE volume.
SSWS: The paired status is retained. Processing for resynchronization with
the P-VOL and S-VOL swapped (horctakeover command) is in progress.
The suspend types are described below.
P-VOL by Operator (applies to: P-VOL): The user split the pair from the
primary storage system using the P-VOL Failure option for Suspend Kind. The
S-VOL split type is PSUS (by MCU).
S-VOL by Operator (applies to: P-VOL, S-VOL): The user split the pair using
the S-VOL option for Suspend Kind.
by MCU (applies to: S-VOL): The secondary storage system received a request
from the primary storage system to split the pair.
Delete pair to RCU (applies to: P-VOL): The user has released the pair from
the secondary storage system.
Verifying that the host recognizes both the P-VOL and S-VOL as a single
volume.
Creating pairs
Prerequisites
Make sure you have configured the pair (see System configuration on page
3-1).
Procedure
When creating the pair, make sure that:
You record each volume's port number, host group number (GID), and
LUN. These are needed during the procedure.
You record the CU Free RCU from which you will assign the secondary
volume: the serial number, LDKC number, controller ID, model name,
path group ID, and channel type.
You assign a quorum disk to the HAM pairs during the initial copy
procedure. The pairs created between the same primary and secondary
storage system must be assigned the same quorum disk.
The initial copy parameters you specify during the procedure cannot be
changed after a pair is created. If you attempt to change or delete them,
the Pair Operation window and Detailed Information dialog box show
misleading and inaccurate information.
If you are creating multiple pairs in one operation, all pairs are assigned
the same parameters and the same quorum disk ID.
1. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the primary volume or volumes are located. The
volumes available to be paired are shown in the volume list.
4. Right-click a volume that you want as a P-VOL and select Paircreate
and HAM from the menu.
You can create more than one pair at one time by selecting then
right-clicking more than one volume. The related secondary volumes
must be in the same secondary storage system.
Volumes with the pair icon are already paired volumes.
5. In the Paircreate(HAM) dialog box, the volume you selected for pairing
is shown for P-VOL. If you selected multiple volumes, the volume with
the lowest LUN is shown.
Note: When a P-VOL or S-VOL appears in a dialog box, it is identified
by port number, GID, LUN (LDKC number: CU number: LDEV number),
CLPR number, and CLPR name of the LU.
From the S-VOL drop-down menus, select the volume that you want to
pair with the shown P-VOL. Select the port number, GID, and LUN. This
will become the secondary volume (S-VOL).
If you are creating multiple pairs, select the S-VOL for the P-VOL that
is shown. The S-VOLs for the remaining P-VOLs will be automatically
assigned according to the LUN.
For example, if you are creating three pairs, and you select LUN001
as the S-VOL, the remaining S-VOLs for the other P-VOLs will be
LUN002 and LUN003.
6. From the RCU drop-down menu, select the remote system where the
S-VOL is located. The list shows all registered CU Free RCUs, which are
shown by serial number, LDKC number, controller ID, model name, path
group ID, and channel type. The system you select must be the same
for all pairs being created in this operation.
7. The P-VOL Fence Level is automatically set to Never. The P-VOL will
never be fenced, or blocked from receiving host read/write.
Note: In the Initial Copy Parameters area, remember that the
parameters you specify cannot be changed after a pair or pairs are
created. To make changes to the parameters specified below, you will
need to release and recreate the pair.
8. From the Initial Copy drop-down menu, specify whether to copy data
or not copy during the paircreate operation:
Select Entire Volume to copy P-VOL data to the S-VOL (default).
Select None to set up the pair relationship between the volumes but
to copy no data from P-VOL to S-VOL. You must be sure that the
P-VOL and S-VOL are already identical.
9. From the Copy Pace drop-down menu, select the desired number of
tracks to be copied at one time (1-15) during the initial copy operation.
The default setting is 15. If you specify a large number, such as 15,
copying is faster, but I/O performance of the storage system may
decrease. If you specify a small number, such as 3, copying is slower,
but the impact on I/O performance is lessened.
10.From the Priority drop-down menu, select the scheduling order for the
initial copy operations. You can enter between 1-256. The highest
priority is 1, the lowest priority is 256. The default is 32.
For example, if you are creating 10 pairs and you specified in the
System Option window that the maximum initial copies that can be
made at one time is 5, the priority you assign here determines the order
that the 10 pairs are created.
11.From the Difference Management drop-down menu, select the unit of
measurement for storing differential data. You can select Auto, Cylinder,
or Track.
With Auto, the system decides whether to use Cylinder or Track. This is
based on the size of the volume.
If VLL is used, the number of cylinders set for VLL is applied.
If the pair volume has 10,019 or more cylinders, Cylinder is applied.
If the pair volume has less than 10,019 cylinders, Track is applied.
12.From the Quorum Disk ID drop-down menu, specify the quorum disk
ID that you want to assign to the pair or pairs.
13.Click Set. The settings are shown in the Preview area.
14.In the Preview list, check the settings. To change a setting, right-click
and select Modify. To delete a setting, right-click and select Delete.
When satisfied, click Apply. This starts pair creation and initial copying
if specified.
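The CCI equivalent of this SN procedure uses the paircreate options listed later in this chapter (-jq for the quorum disk ID, -f never for the fence level). The sketch below only assembles and echoes the command; the group name and quorum disk ID are illustrative assumptions.

```shell
# Dry-run sketch of HAM pair creation with CCI.
# "ham_grp" and quorum disk ID 0 are hypothetical values.
GROUP=ham_grp
QUORUM_ID=0

# -f never: the HAM P-VOL fence level is always Never.
# -jq: quorum disk ID assigned to the pair.
# -vl: create the pair from the local (P-VOL) side.
CREATE_CMD="paircreate -g $GROUP -f never -jq $QUORUM_ID -vl"
echo "$CREATE_CMD"

# After creation, the pair moves through COPY and then reaches PAIR status.
VERIFY_CMD="pairdisplay -g $GROUP -CLI"
echo "$VERIFY_CMD"
```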
Splitting pairs
When the pair is synchronized, data written to the P-VOL is copied to the
S-VOL. This continues until the pair is split. When you split a pair, the
pair status changes to PSUS and updates to the S-VOL stop.
You can set an option to block updates to the P-VOL while the pair is split.
This results in the P-VOL and S-VOL staying synchronized.
If the P-VOL accepts write I/O while the pair is split, the primary storage
system records the updated tracks as differential data. This data is copied
to the S-VOL when the pair is resynchronized.
The pair can be made identical again by resynchronizing the pair.
Prerequisites
The pair must be in PAIR status.
Procedure
1. Access the MCU or RCU in SN, then click Actions > Remote Copy >
TrueCopy > Pair Operation.
You do not need to vary the P-VOL offline.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the pair volume is located.
4. In the volume list, right-click the pair to be split and click Pairsplit-r
from the menu. You can split more than one pair by selecting then
right-clicking more than one volume.
5. In the Pairsplit-r dialog box, information for the selected volume is
shown for Volume. When more than one volume is selected, the volume
with the lowest LUN is shown.
From the Suspend Kind drop-down menu, specify whether or not to
continue host I/O writes to the P-VOL while the pair is split. (If you are
running the CCI command from the S-VOL, this item is disabled.)
Select P-VOL Failure to block write I/O to the P-VOL while the
pair is split, regardless of the P-VOL fence level setting. Choose this
setting if you need to maintain synchronization of the HAM pair.
Select S-VOL to allow write I/O to the P-VOL while the pair is split.
The P-VOL will accept all subsequent write I/O operations after the
split. The primary storage system will keep track of updates while the
pair is split. Choose this setting if the P-VOL is required for system
operation and you need to keep the P-VOL online while the pair is
split.
6. Click Set. The settings are shown in the Preview area.
7. In the Preview list, check the settings. To change a setting, right-click
and select Modify. When satisfied, click Apply. The primary storage
system will complete all write operations in progress before splitting the
pair, so that the pair is synchronized at the time of the split.
8. Verify that the operation is completed successfully by checking the
Status column. The status should be PSUS.
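The same split can be issued with the CCI pairsplit -r command shown in the command table later in this chapter. This is a dry-run sketch: the group name is a hypothetical assumption and the commands are echoed rather than executed.

```shell
# Dry-run sketch of splitting a HAM pair with CCI.
GROUP=ham_grp    # hypothetical HORCM group name

# -r suspends updates to the S-VOL; the pair status becomes PSUS.
SPLIT_CMD="pairsplit -g $GROUP -r"
echo "$SPLIT_CMD"

# Verify that the split completed (expect PSUS in the status column).
VERIFY_CMD="pairdisplay -g $GROUP -CLI"
echo "$VERIFY_CMD"
```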
Resynchronizing pairs
When you resynchronize a split pair, the volume that was not being
updated (usually the S-VOL) is synchronized with the volume that was
being updated by the host (usually the P-VOL).
Pair status during resynchronization changes to COPY. It changes again to
PAIR when the operation is complete.
The method for performing this operation differs according to whether the
P-VOL or the S-VOL is accepting write I/O from the host. Check the VOL
Access column in the Pair Operation window to see which volume is online.
Pairs must be in PSUS or PSUE status when the P-VOL is receiving I/O. If
the status is PSUE, clear the error before resynchronizing. The operation
can be performed using the SN procedure below.
When the S-VOL is receiving host I/O, pair status must be SSWS. The
operation is performed by running the CCI pairresync -swaps
command.
Note: If you want to resynchronize the pair that has been released from
the S-VOL side, do not use this procedure. Instead, complete the following:
1. Release the pair from the P-VOL side by running the pairsplit-S CCI
command.
2. Create the pair from the P-VOL side using the Paircreate(HAM) dialog
box, making sure to set the appropriate initial copy option (Entire
Volume or None).
1. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the P-VOL is located.
4. In the volume list, right-click the P-VOL in the pair to be resynchronized
and click Pairresync.
In the Pairresync dialog box, P-VOL Fence Level is automatically set
to Never. The volume receiving updates from the host will never be
fenced, or blocked, from receiving host read/write.
5. In the Pairresync dialog box, from the Copy Pace drop-down menu,
select the desired number of tracks to be copied at one time (1-15)
during the copy operation. The default setting is 15. If you specify a
large number, such as 15, copying is faster, but I/O performance of the
storage system may decrease. If you specify a small number, such as 3,
copying is slower, but the impact on I/O performance is lessened.
6. From the Priority drop-down menu, select the scheduling order for the
copy operations. This applies when multiple pairs are being
resynchronized. You can enter between 1-256. The highest priority is 1,
the lowest priority is 256. The default is 32.
7. Click Set.
8. In the Preview list, check the settings. To change a setting, right-click
and select Modify. When satisfied, click Apply.
Update the pair status by clicking File > Refresh, then confirm that the
operation is completed with a status of PAIR.
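A hedged CCI sketch of the same resynchronization, run from the P-VOL side. pairevtwait is the standard CCI command for waiting on a status transition; the group name and timeout are illustrative, and the commands are echoed rather than executed.

```shell
# Dry-run sketch of resynchronizing a split pair from the P-VOL side.
GROUP=ham_grp    # hypothetical HORCM group name

RESYNC_CMD="pairresync -g $GROUP"
echo "$RESYNC_CMD"

# Status changes PSUS -> COPY -> PAIR; wait until PAIR is reached
# (-t sets the timeout for the wait).
WAIT_CMD="pairevtwait -g $GROUP -s pair -t 3600"
echo "$WAIT_CMD"
```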
Reverse resynchronization
After a failover, when the S-VOL is receiving updates from the host instead
of the P-VOL, you can resynchronize the P-VOL with the S-VOL by running
the CCI pairresync -swaps command. The copy direction is from the S-VOL
to the P-VOL.
The P-VOL and S-VOL are swapped in this operation: the secondary storage
system S-VOL becomes the P-VOL; the primary storage system P-VOL
becomes the S-VOL.
The pairresync -swaps command is the only supported method for
reverse resynchronizing HAM pairs.
Prerequisites
Make sure that:
Procedure
1. Run the CCI pairresync -swaps command on the S-VOL.
The data in the RCU S-VOL is copied to the MCU P-VOL, and the P-VOL
and S-VOL are swapped. The MCU P-VOL becomes the S-VOL, and the
RCU S-VOL becomes the P-VOL.
2. When the operation completes, verify that the pair is in PAIR status.
The following figure shows the swapping of a HAM P-VOL and S-VOL.
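The swap described above corresponds to the following dry-run sketch; the group name is a hypothetical assumption and the commands are echoed only. pairresync -swaps must be run from the side that manages the S-VOL.

```shell
# Dry-run sketch: swap-resynchronize a HAM pair after failover.
GROUP=ham_grp    # hypothetical HORCM group name

# Run from the S-VOL side: copies S-VOL data back to the P-VOL
# and swaps the volume roles.
SWAP_CMD="pairresync -g $GROUP -swaps"
echo "$SWAP_CMD"

# Confirm that the pair returns to PAIR status.
VERIFY_CMD="pairdisplay -g $GROUP -CLI"
echo "$VERIFY_CMD"
```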
Releasing a pair
You can release a pair when you no longer need to keep the P-VOL and
S-VOL synchronized. You can release a single pair or multiple pairs using
the same procedure.
When you release a pair from the P-VOL, the primary storage system stops
copy operations and changes pair status of both P-VOL and S-VOL to SMPL.
The system continues to accept write I/O to the P-VOL volume, but does not
keep track of the updates as differential data.
When you release a pair from the S-VOL, the secondary storage system
changes the S-VOL status to SMPL but does not change the P-VOL status.
When the primary storage system performs the next pair operation, it
detects the S-VOL status as SMPL and changes the P-VOL status to PSUS.
The suspend type is Delete pair to RCU.
Tip: As a best practice, release a pair from the P-VOL. If the pair has a
failure and cannot be released from the P-VOL, then release it from the
S-VOL.
1. Verify that the P-VOL has the latest data using one of the following
methods:
On the secondary storage system, open the Pair Operation window.
Check that the Volume Access column for the S-VOL is blank (must
not show Access Lock).
Use the multipath software command to check if the P-VOL path
(owner path) is online.
2. Vary the S-VOL path offline using the multipath software command.
3. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
4. Make sure that you are in the modify mode.
5. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the P-VOL is located.
6. Right-click the pair to be released and click Pairsplit-S.
7. In the Pairsplit-S dialog box, from the Delete Pair by Force drop-down
menu, select one of the following:
Yes: The pair will be released even if the primary storage system
cannot communicate with the secondary storage system. This option
can be used to free a host waiting for a response from the primary
storage system, thus allowing host operations to continue.
No: The pair will be released only if the primary storage system can
change pair status for both P-VOL and S-VOL to SMPL.
When the status of the pair to be released is SMPL, the default setting
is Yes and it cannot be changed. If the status is other than SMPL, the
default setting is No.
8. Click Set.
9. Click Apply.
10.Verify that the operation completes successfully (changes to SMPL
status).
11.The device identifier for one or both pair volumes changes when the pair
is released. Therefore you should enable host recognition of the S-VOL,
using one of the following methods:
Run the device recognition command provided by the operating
system on the host.
For more information, see the following table.
Reboot the host.
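For reference, the CCI release described here is pairsplit -S (see the command table later in this chapter). The sketch below echoes the commands rather than executing them; the group name is a hypothetical assumption.

```shell
# Dry-run sketch of releasing a HAM pair from the P-VOL side.
GROUP=ham_grp    # hypothetical HORCM group name

# -S releases the pair; both volumes return to SMPL status.
RELEASE_CMD="pairsplit -g $GROUP -S"
echo "$RELEASE_CMD"

# Verify SMPL status before re-enabling host recognition of the S-VOL.
VERIFY_CMD="pairdisplay -g $GROUP -CLI"
echo "$VERIFY_CMD"
```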
Changing TrueCopy pairs to HAM pairs
Requirements
Make sure that the following requirements are met to ensure the pair can
be changed correctly without errors or failures:
Caution: If this operation fails, check TrueCopy pair options to make sure
that they have not changed. A failure can cause unexpected changes, for
example, the P-VOL fence level could change from Data to Never.
Procedure
1. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select a CU group, CU, port, or host
group where the TrueCopy P-VOL belongs.
4. Split the TrueCopy pairs that you want to convert to HAM pairs.
5. On the Pairsplit -r dialog box, specify parameters you want and then
click Set.
6. On the Pair Operation window, verify the setting for Preview and click
Apply.
7. Verify that the pair status is PSUS.
8. In the list, select and right-click the TrueCopy P-VOL, and then click
Pairresync.
9. In the Pairresync dialog box, use the following settings:
For P-VOL Fence Level, specify Never.
In Change attribute to HAM, specify Yes.
In Quorum Disk ID, specify a quorum disk ID for the HAM pair.
10.Click Set.
11.Verify settings in Preview pane, then click Apply.
12.Verify that the Status column shows COPY for the pair that you
converted, and that the Type column shows HAM.
To update the information in the window, click File > Refresh.
13.Access the secondary storage system, open the Pair Operation
window, and verify that the Type column for the S-VOL is HAM.
Caution: Application I/O cannot be taken over in case of a failover
until the window shows HAM for the S-VOL. Before moving on to the
next step, wait and refresh the window until HAM appears.
14.Make sure that the host recognizes the HAM pair.
For more information about verifying host recognition for new pairs, see
Verifying host recognition of a new pair on page 4-13.
Though the host recognizes the HAM volumes as a single volume, CCI
views the P-VOL and S-VOL as separate volumes.
CCI shows the HAM pair, the TrueCopy pair, and the UR pair, according
to the following table.
Pair Type | Display of Fence | Display of CTG | Display of JID
HAM pair | NEVER | (blank) | Quorum Disk ID
TrueCopy pair | DATA, STATUS, or NEVER | The CTG ID | (blank)
Universal Replicator pair | ASYNC | The CTG ID | The journal ID
When running CCI commands on the S-VOL, make sure that you specify
S-VOL information in the scripts.
The following table shows the supported CCI commands and corresponding
SN tasks.
Type of task | Task | CCI command | SN window or dialog box
Pair operations | Create a pair. | paircreate -jp <quorum disk ID> -f never, or paircreate -jq <quorum disk ID> -f never | Paircreate dialog box
Pair operations | Split a pair. | pairsplit -r | Pairsplit-r dialog box
Pair operations | Resynchronize a pair. | pairresync | Pairresync dialog box
Pair operations | Resynchronize a pair (reverse direction). | pairresync -swaps | N/A
Pair operations | Resynchronize a pair (reverse direction). | pairresync -swapp | N/A
Pair operations | Change a TrueCopy pair to a HAM pair. | N/A | Pairresync dialog box
Maintenance | View pair status. | pairdisplay | Pair Operation window or Detailed Information dialog box
Maintenance | Release pair. | pairsplit -S | Pairsplit-S dialog box
You cannot run the pairresync command on a pair when the storage system
cannot access the quorum disk due to a failure.
After you run the pairsplit -RS command, the write command to the P-VOL will
fail.
You cannot run the following commands on HAM pairs: pairresync -fg <CTGID>
and pairsplit -rw.
5
System maintenance
To ensure that the HAM system functions properly and can provide robust
and reliable high-availability protection for host applications, you must
be able to perform HAM system maintenance tasks.
Related documentation
Related documentation
For more information about other system maintenance tasks, see the
Hitachi TrueCopy User Guide.
When both owner and non-owner paths are online, vary the owner path
offline.
When the owner path is offline and non-owner path is online, vary the
owner path online.
Caution: When the HAM pair is in PAIR status, if you vary the owner path
online, non-owner paths on the RCU may change to offline. Because no
failure actually occurred in these paths, restore the HAM pair to PAIR
status, and then vary the non-owner paths online.
I/O scheduled before the owner paths are varied online with the multipath
software is issued to non-owner paths. Depending on timing, I/O scheduled
after the owner paths come online might be issued before I/O that was
scheduled earlier. In this case, a check condition may be reported to the
host for I/O issued to non-owner paths, because the RCU HAM pair is in
PSUS status and the MCU HAM pair is in SSWS status. I/O for which a check
condition is reported is reissued to the owner path and then completes
normally. Non-owner paths for which a check condition is reported go
offline as failed.
Deleting quorum disk IDs
Requirements
To ensure that you delete the disk ID properly, make sure that:
The disk is not being used by any HAM pair. If it is, you cannot delete
the ID.
You delete the ID on both the primary and secondary storage systems.
The procedure is the same for both systems.
1. Access an MCU or RCU in SN and click Actions > Remote Copy >
TrueCopy > Quorum Disk Operation.
2. On the Quorum Disk Operation window, make sure that you are in
the modify mode.
3. In the quorum ID list, right-click the quorum disk ID that you want to
delete, then click Delete Quorum Disk ID.
4. Confirm the operation in the Preview list, then click Apply.
If the quorum disk ID cannot be deleted, a failure might have occurred in
the quorum disk. Do one of the following:
Recover from the failure, then try to delete the ID again using this
procedure.
Forcibly delete the quorum disk (see Deleting quorum disk IDs by
system attribute (forced deletion) on page 5-4).
Deleting quorum disk IDs by system attribute (forced deletion)
Requirements
To ensure that you delete the disk ID properly, make sure that:
The disk is not being used by any HAM pair. If it is, you cannot delete
the ID.
You delete the ID on both the primary and secondary storage systems.
The procedure is the same for both systems.
1. On the primary storage system, release all HAM pairs using the quorum
disk that you want to delete.
2. Call the Hitachi Data Systems Support Center and ask them to turn ON
the appropriate system option on the storage system that cannot access
the quorum disk.
3. On the primary and secondary storage system, delete the quorum disk
ID from the TrueCopy/Quorum Disk Operation window in SN.
4. On the primary and secondary storage system, make sure that the ID is
correctly deleted. If you deleted the wrong ID, register the ID again.
5. Call the Hitachi Data Systems Support Center and ask them to turn OFF
the system option.
Recovering the disk when the P-VOL was receiving host I/O at deletion
on page 5-5
Recovering the disk when the S-VOL was receiving host I/O at deletion
on page 5-5
Recovering the disk when the P-VOL was receiving host I/O at deletion
Use this procedure to recover a disk that was accidentally deleted when the
primary volume was receiving host I/O at the time of deletion.
1. Vary the host-to-S-VOL path offline using the multipath software.
2. Release all pairs that use the forcibly-deleted quorum disk.
3. Make sure the quorum disk ID is deleted from both primary and
secondary storage systems.
4. On the primary and secondary storage system, add the quorum disk ID.
5. On the primary storage system, create the HAM pair.
6. On both the primary and secondary storage systems, make sure that
Type shows HAM.
Recovering the disk when the S-VOL was receiving host I/O at deletion
Use this procedure to recover a disk that was accidentally deleted when
the secondary volume was receiving host I/O at the time of deletion.
1. Stop the I/O from the host.
2. Release all pairs using the forcibly-deleted quorum disk.
3. Make sure the quorum disk ID is deleted from both primary and
secondary storage systems.
4. On the secondary storage system, create a TrueCopy pair. The data flow
is from secondary to primary sites.
To do this, specify the HAM S-VOL as a TrueCopy P-VOL.
5. Do one of the following:
If changing a TrueCopy pair to a HAM pair: When the copy operation
is completed, run the horctakeover command on the TrueCopy
S-VOL. This reverses the TrueCopy P-VOL and S-VOL.
If using CCI to create the HAM pair again: When the copy operation
is completed, run the pairsplit -S command and release the
TrueCopy pair.
If using SN: When the copy operation is completed, release the
TrueCopy pair.
6. Using multipath software, vary the host-to-P-VOL path online.
7. On the primary and secondary storage systems, add the quorum disks.
8. Do one of the following:
If changing the TrueCopy pair to a HAM pair: From the primary
system P-VOL using Storage Navigator, split the TrueCopy pair, then
change the pair to a HAM pair. See Changing TrueCopy pairs to HAM
pairs on page 4-19.
If creating the pair again:
If using CCI, run the paircreate command on the primary storage
system.
If using SN, create the HAM pair on the primary storage system.
9. On the primary and secondary storage systems, make sure the volume
type is HAM.
Quorum disks
A quorum disk
Performing outages when the P-VOL is receiving host I/O on page 5-7.
Performing outages when the S-VOL is receiving host I/O on page 5-8.
14.(If the non-owner path is offline) Using the multipath software, vary it
online.
*1: If you vary the owner path offline during I/O processing, the multipath
software may display a message saying that the path is offline due to a path
failure. In this case, you can ignore the message.
*2: Skip this step when you power off the primary storage system only.
*3: When the host operating system is Windows, the multipath software
may return a host-to-P-VOL path failure when you power on the primary
storage system. In this case, you can ignore the message. This happens
because HAM blocks access to the primary storage system after the plug
and play function automatically recovers the owner path to online.
6
Disaster recovery
On-site disasters, such as power supply failures, can disrupt the normal
operation of your HAM system. Being able to quickly identify the type of
failure and recover the affected system or component helps to ensure that
you can restore high-availability protection for host applications as soon as
possible.
Detecting failures
Volume pairs
Quorum disks
Detecting failures
Detecting failures
Detecting failures is the first task in the recovery process. Failure detection
is essential because you need to know the type of failure before you can
determine which recovery procedure to use.
You have two options for detecting failures. You can check to see if failover
has occurred and then determine the type of failure that caused it, or you
can check to see if failures have occurred by using the SIM and path failure
system messages.
Next steps
You need to determine the type of failure before you can determine which
recovery procedure to use.
For more information, see Using system messages to check for failures on
page 6-4.
Blocked volumes
Quorum disk
Power outage
Selecting Procedures
Make sure you have identified the type of failure that occurred by using
the service information messages (SIM) and path failure system messages.
For more information, see Using system messages to check for failures on
page 6-4.
1. Analyze these failure messages to determine which basic type of failure
occurred. The failure types are: blocked volumes, quorum disk, power
outage, or failures that can be resolved by resynchronizing affected
pairs.
2. Use the decision tree in the following figure to select the correct set of
procedures. Use the links below the figure to go to the appropriate
procedures.
Depending on which systems and volumes are affected and whether failover
occurred, the recovery process can involve:
Clearing any other failures that may prevent normal host I/O
The following figure shows the different volume failure scenarios that are
possible with volume pairs.
pairsplit-S
4. Clear the failure in the primary storage system P-VOL.
5. On the primary storage system, make sure that no other failures exist
and that it is ready to accept host I/O.
6. On the secondary storage system, create a TrueCopy pair from the
original S-VOL to the P-VOL. The data flow is from the secondary to
primary storage system.
7. You can continue by either changing the TrueCopy pair to a HAM pair
(below) or recreating the HAM pair (next step).
If changing the TrueCopy pair to a HAM pair, complete this step on the
primary storage system.
a. On the primary system, use CCI to run the horctakeover command
on the new TrueCopy S-VOL. This reverses the relationship and the
copy flow from S-VOL to P-VOL.
b. On the primary system, use SN to split the TrueCopy pair.
c. On the primary system, perform the SN pair resync operation to
change the TrueCopy pair to a HAM pair. See Changing TrueCopy
pairs to HAM pairs on page 4-19.
8. If recreating the HAM pair:
a. Using either CCI or SN, on the secondary storage system, release the
TrueCopy pair (for CCI, use the pairsplit -S command).
b. On the primary storage system, create the HAM pair.
9. On both systems, in SN, make sure that Type shows HAM.
10.Using multipath software, vary the owner path online.
11.Restart I/O.
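The CCI commands used by the two branches above (horctakeover to reverse the temporary TrueCopy pair, and pairsplit -S to release it before recreating the HAM pair) can be sketched as follows. The group name is a hypothetical assumption and the commands are echoed as a dry run.

```shell
# Dry-run sketch of the CCI portion of the recovery alternatives above.
GROUP=tc_tmp    # hypothetical group for the temporary TrueCopy pair

# Branch 1: reverse the TrueCopy P-VOL and S-VOL before converting to HAM.
TAKEOVER_CMD="horctakeover -g $GROUP"
echo "$TAKEOVER_CMD"

# Branch 2: release the TrueCopy pair before recreating the HAM pair.
RELEASE_CMD="pairsplit -g $GROUP -S"
echo "$RELEASE_CMD"
```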
Replacing a quorum disk when the MCU is receiving host I/O on page 6-10
Replacing a quorum disk when the RCU is receiving host I/O on page 6-11
Note: You can use the replacement procedures to replace a disk that is
connected to one or more volume pairs.
Recovering the system when the RCU is receiving host I/O updates on
page 6-13.
Recovering the system when host I/O updates have stopped on page 6-14.
Recovering the system when the RCU is receiving host I/O updates
Recovering the primary storage system from power failure, when host I/O
updates continue after failover, involves the completion of tasks on both the
primary storage system and secondary storage system. Because failover
occurred, you must complete the steps required to restore the original
relationship of the volumes.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be done using either SN or CCI. Typically, you
can complete these steps more quickly using CCI.
1. Verify that the S-VOL has the latest data and is being updated. Open the
Pair Operation window on the secondary storage system and check
that the VOL Access column shows Access (Lock).
2. Stop I/O from the host.
3. On the secondary storage system, release the HAM pair.
4. On the secondary storage system, delete the quorum disk ID.
5. On the secondary storage system, create a TrueCopy pair. The data flow
is from secondary to primary sites.
6. Format the quorum disk.
7. On the primary storage system, register the HAM secondary storage
system to the primary storage system.
8. You can continue by either changing the TrueCopy pair to a HAM pair
(below) or recreating the HAM pair (next steps).
If changing the TrueCopy pair to a HAM pair, complete this step.
a. When the TrueCopy operation finishes, run the horctakeover
command on the primary storage system S-VOL to reverse the P-VOL
and S-VOL.
b. On the primary and secondary storage systems, add the quorum
disk.
c. Using SN, on the primary system, split the TrueCopy pair.
d. Using SN, on the primary system, change the TrueCopy pair to a HAM
pair. See Changing TrueCopy pairs to HAM pairs on page 4-19 for
details.
9. If recreating the HAM pair using CCI:
a. When the copy operation in Step 5 completes, release the TrueCopy
pair.
b. On the primary and secondary storage systems, add the quorum
disk.
c. On the primary storage system, run the paircreate command to
create the HAM pair.
10. If recreating the HAM pair using SN:
a. On the secondary storage system, release the TrueCopy pair.
b. On the primary and secondary storage systems, add the quorum
disk.
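Step 1 above checks the pair state in SN; the equivalent check with CCI can be sketched with pairdisplay. The pairdisplay output below is simulated (the real command must run against a configured HORCM instance, and the group name ham_db is hypothetical), so only the parsing is shown.

```shell
# Simulate a two-line pairdisplay summary (role and status columns only)
# and extract the S-VOL status. SSWS after failover indicates that the
# S-VOL holds the latest data.
pairdisplay_sim() {
  printf '%s\n' \
    "ham_db dev1 P-VOL PSUS" \
    "ham_db dev1 S-VOL SSWS"
}

svol_status=$(pairdisplay_sim | awk '$3 == "S-VOL" {print $4}')
echo "$svol_status"   # prints: SSWS
```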
Recovering the system when host updates have stopped on page 6-15.
Required conditions
You can only use this method if all of the following conditions exist:
If there was a quorum disk failure, the disk does not need to be replaced.
Prerequisites
Make sure you have identified the type of failure that occurred by using the
system information messages (SIM) and path failure system messages.
For more information, see Using system messages to check for failures on
page 6-4.
Procedure
1. Analyze the failure messages to determine if all conditions required for
resynchronization exist:
The HAM volumes are not blocked.
If there was a quorum disk failure, the disk does not need to be
replaced.
Data was not lost as a result of shared memory initialization caused
by a power outage.
2. Use the decision tree in the flowchart to select the correct set of
procedures. Use the links below to go to the appropriate procedures.
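The three conditions in step 1 amount to a single yes/no gate. A minimal sketch, assuming the yes/no answers have already been obtained from the SIM and path failure messages:

```shell
# Returns success only when every condition required for the
# resynchronization recovery path holds (all inputs are "no").
resync_possible() {
  ham_vols_blocked=$1       # "yes" if any HAM volume is blocked
  quorum_needs_replacing=$2 # "yes" if a failed quorum disk must be replaced
  shm_data_lost=$3          # "yes" if a power outage initialized shared memory
  [ "$ham_vols_blocked" = no ] &&
    [ "$quorum_needs_replacing" = no ] &&
    [ "$shm_data_lost" = no ]
}

if resync_possible no no no; then
  echo "use a resynchronization recovery procedure"
fi
```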
Related topics
Prerequisites
Make sure you have analyzed the failure messages to determine if all of the
conditions required to use resynchronization exist.
For more information, see Determining which resynchronization recovery
procedure to use on page 6-17.
Procedure
1. Using multipath software, vary the owner path offline.
2. Swap and suspend the HAM pair using the CCI pairsplit -RS
command.
3. Resynchronize the ShadowImage pair on the secondary storage system
in the opposite direction, using the Quick Restore or Reverse Copy
operations. The system copies the data in the ShadowImage S-VOL to
the HAM S-VOL.
4. Split the ShadowImage pair by running the pairsplit command.
5. Resynchronize the HAM pair in the opposite direction using the
pairresync -swaps command. The system copies the data in the S-VOL
to the P-VOL.
6. Using multipath software, vary the owner path online.
7. Using multipath software, vary the non-owner path offline.
8. When the HAM P-VOL and S-VOL are in PAIR status, swap and suspend
the pair using the pairsplit -RS command.
9. Resynchronize the HAM pair in the opposite direction using the
pairresync -swaps command.
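The CCI swap sequence used twice in the procedure above (steps 2 and 5, then steps 8 and 9) can be sketched as follows. The group name ham_db is hypothetical and the commands are printed rather than executed; the ShadowImage resynchronize and split in steps 3-4 are omitted.

```shell
# Print the swap-and-suspend / swap-resync pair of CCI commands used
# in the procedure above. "ham_db" is a hypothetical group name.
GROUP=ham_db

swap_cmds() {
  echo "pairsplit -g $GROUP -RS"      # swap and suspend the HAM pair
  echo "pairresync -g $GROUP -swaps"  # resync in the opposite direction
}
swap_cmds
```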
Contact the Hitachi Data Systems Support Center and ask to have the
appropriate system option turned ON. This enables the S-VOL to receive
host I/O.
Note: The system option applies to the whole storage system. You cannot
set this option to an individual pair or pairs.
To continue using the pair after the failure has been cleared, release and recreate the pair. The storage system builds the initial copy from the P-VOL to
the S-VOL.
If you need technical support, log on to the HDS Support Portal for contact
information: https://hdssupport.hds.com.
7
Using HAM in a cluster system
There are specific software and configuration requirements for using HAM
in a cluster system.
Required software
Configuration requirements
Restrictions
You connect an executing node and a standby node to the MCU and RCU so
that the nodes can access both the MCU and RCU. A heartbeat network must
be set up between the nodes.
If a failure occurs in the executing node, operations are continued by a
failover to the standby node.
Required software
The following software is required to use HAM in a cluster system:
Cluster software.
Operating systems
Configuration requirements
To ensure that HAM functions properly, make sure that:
The same version of Hitachi Dynamic Link Manager is used in the MCU
and RCU.
If using MSFC, specify Node and File Share Majority in Quorum Mode.
Restrictions
The following restrictions apply when using HAM in a cluster system:
For Windows Server 2008 R2, HAM does not support the Hyper-V function
or the Cluster Shared Volumes (CSV) function.
8
Troubleshooting
HAM is designed to provide you with error messages so that you can quickly
and easily identify the cause of the error and take corrective action. Many
types of errors you encounter can be resolved by using fairly simple
troubleshooting procedures.
Related topics
For more information about error messages and error codes that are
displayed by SN, see Hitachi Storage Navigator Messages.
Error: An initiator channel-enable LED indicator (on the HUS VM control
panel) is off or flashing.
Corrective action: Call the Hitachi Data Systems Support Center for
assistance.

Error: The status of the pairs and/or data paths is not shown correctly.
Corrective action: Make sure the correct CU is selected.

Error: A HAM error message appears on the SN computer.
Corrective action: Resolve the error and then try the operation again.

Error: The data path status is not Normal.

Error: Paircreate or pairresync
I/O failures
When using SN
Follow the troubleshooting steps based on the suspend type and the
volume type (primary or secondary).
The following table lists the troubleshooting steps to use for each suspend
type and volume type.
[Table: troubleshooting steps for each suspend type (PSUE, by RCU; PSUE, S-VOL Failure; PSUE, Initial Copy Failed) and volume type (P-VOL or S-VOL), listing the error and corrective action for each combination.]
Do the following:
Use the CCI operation log file to identify the cause of the error.
Follow the troubleshooting steps based on the SSB1 and SSB2 error
codes (the codes are recorded in the log file).
The following table lists the codes for the errors that can occur when using
CCI and the steps involved in troubleshooting the errors.
Error code (SSB1) / Error code (SSB2): Description

2E31 / 9100: You cannot run the command because the user was not
authenticated.
B90A / B901, B902, or B904: Reservations that are set to the P-VOL by the
host have been propagated to the S-VOL by HAM. The cause is one of the
following:
B90A / B980 or B981: A HAM pair cannot be created because the HAM program
product is not installed.
B90A / B982 or B983
B912 / B902
D004 / CBEF: One of the following causes applies:
Related topics
For more information about troubleshooting other types of errors, see the
Hitachi TrueCopy User Guide.
For more information about recovering data from pinned tracks, see the
pinned track recovery procedures for your operating system or contact
your Hitachi Data Systems representative for assistance.
3. Connect to the primary storage system.
4. Resynchronize the pair using the Entire Volume initial copy option.
1. Connect to the primary storage system and release all the pairs that use
the quorum disk with the pinned track.
2. On the primary and secondary storage systems, delete the quorum disk
ID. If you cannot release the ID, forcibly delete it.
For more information about deleting quorum disk IDs by system
attribute, see Deleting quorum disk IDs by system attribute (forced
deletion) on page 5-4.
3. Format the quorum disk and recover data from the pinned track.
For more information about recovering pinned tracks, see the recovery
procedures for your operating system or contact your Hitachi Data
Systems representative for assistance.
4. On the primary and secondary storage systems, add the quorum disk ID.
5. On the primary storage system, recreate the released volume pair (or
pairs) using the Entire Volume initial copy option.
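Most of the steps above are SN operations; the pair release and re-creation at either end can be sketched in CCI. As with the other sketches, the group name ham_db is hypothetical and the commands are printed for review rather than executed.

```shell
# Print the CCI commands that bracket the quorum disk recovery:
# release the pairs first, then recreate them after the disk is recovered.
GROUP=ham_db

quorum_recovery_cmds() {
  echo "pairsplit -g $GROUP -S"   # step 1: release all pairs using the quorum disk
  # steps 2-4 (delete the quorum disk ID, format/recover, re-add the ID)
  # are SN operations with no CCI equivalent shown in the text
  echo "paircreate -g $GROUP -vl" # step 5: recreate the pair (Entire Volume copy)
}
quorum_recovery_cmds
```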
If you need technical support, log on to the HDS Support Portal for contact
information: https://hdssupport.hds.com.
A
HAM GUI reference
This topic describes HAM windows, dialog boxes, items, and behaviors in
SN.
In addition, information related to HAM systems is also shown in the
following windows and is documented in the Hitachi TrueCopy User Guide:
- The RCU Operation window
- The Usage Monitor window
- The History window
- The System Option window
- The Quorum Disk Operation window
Item
Tree
Description
Shows the connected storage system, the LDKC, the CU grouping, the
CUs, ports, and host groups. Select the desired CU grouping, CU, port,
or host group to show related LUs. Only one CU grouping, CU, port, or
host group can be selected.
Item
List
Description
Shows detailed pair information about the local storage system. To sort
the items that are shown in ascending/descending order, click the
column heading. To perform HAM operations such as creating/splitting/
resynchronizing a HAM pair, right-click a row in the list.
If a volume has multiple LU paths, each LU path appears in a separate
row. However, when you select a CU group or a CU in the tree, only one
LU path per volume is shown in the list.
For more information about the list, see the following table.
Used
Capacity
Display Filter
Click to open the Display Filter dialog box, from which you can narrow
down the list of volumes.
For more information about the Display Filter dialog box, see the
Hitachi TrueCopy User Guide.
Export
Preview
Shows the settings to be saved to the system when you click Apply. You
can change or delete the settings by right-clicking.
Apply
Cancel
The S/N, ID, and Fence columns can be blank while a HAM pair is in
transition to the SMPL status. To show the latest information, refresh the
window by clicking File and then Refresh on the menu bar of SN windows.
Item
VOL
Description
An icon shows whether the volume is assigned to a pair as the P-VOL or
the S-VOL.
Status
S/N(LDKC)
ID
SSID of the paired storage system, or Path group ID that you entered
when registering the RCU.
Paired VOL
Information about the path from the host to the paired volume appears
as port number, the host group number, and LUN (LDKC:CU:LDEV),
separated by hyphens.
The symbols used in the VOL column might appear at the end of the
LDEV number.
Type
Fence
The fence level, which is a TC setting. Does not apply for HAM pairs.
Diff
The unit of measurement used for storing differential data (by cylinder,
track, or auto).
CTG
Sync Rate
Shows which pair volume is online and thus receiving I/O from the host.
For more information, see Possible VOL Access values for pairs on page
A-5.
CLPR
The number and name of the cache logical partition that the local
volume belongs to.
You can determine which volume is receiving host I/O (the online volume)
by checking the VOL Access values in the Pair Operation window. The
particular combination of values (one for the P-VOL and one for the
S-VOL) indicates which volume is the online volume.
[Table: Possible VOL Access values for pairs. For each combination of S-VOL pair status (COPY, PAIR, SSWS, SMPL, PSUS, PSUE, PDUB) and the VOL Access values shown for the P-VOL and S-VOL (Access (Lock), Access (No Lock), or blank), the table identifies the online volume: P-VOL, S-VOL, or either volume.]
* The S-VOL pair status is forcibly changed to SSWS by the swap and suspend
operation, or the S-VOL pair status is changed from SSWS to PSUS by the
rollback operation. This status can be seen when you try to use the volume
that has the older data. You can use either volume as the online volume.
** Storage Navigator pair statuses are shown in the format SN status/CCI
status. If the two statuses are the same, the CCI status is not shown. For
more information on pair status definitions, see Pair status values on
page 4-6.
Item
P-VOL and S-VOL
Description
Emulation type.
CLPR
Group Name
Pair Status
Item
Pair Synchronized
Description
The percentage of synchronization or consistency between
the pair volumes.
If you are viewing S-VOL information, the percentage for all
pair statuses except COPY is shown.
If you are viewing P-VOL information, the percentage for all
pair statuses is shown.
Note: If the operation is waiting to start, (Queueing) is
shown.
S/N and ID
Controller ID
MCU-RCU Path
Update Type
Copy Pace
S-VOL Write
Paired Time
Time taken to copy pairs. The time shown for this item
differs from the time shown in the Copy Time on the
History window. The difference is as follows:
Difference Management
Quorum Disk ID
Item
Description
VOL Access
Sync CT Group
Refresh the Pair Operation window after this dialog box is closed
Select to refresh the Pair Operation window after the Detailed
Information dialog box closes.
Previous
Next
Refresh
Close
Item
P-VOL
Description
Shows the port number, host group number (GID),
LUN(LDKC number: CU number: LDEV number), CLPR
number, and CLPR name of the selected LU. This item
shows the P-VOL with the lowest LUN when you create
multiple pairs at a time. The following symbols might
appear at the end of the LDEV number:
Item
Description
S-VOL
The port number, GID, and LUN for the pair's S-VOL. A port
number entered directly can be specified with two characters, for
example, 1A for CL1-A, and can be entered in lowercase or uppercase
characters.
RCU
Copy Pace
Priority
Difference management
HAM Parameters
Quorum Disk ID
Item
Volume
Description
Port - GID LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
S-VOL Write
Disable appears. The S-VOL of this pair rejects write I/Os while
the pair is split.
Suspend
Kind
Setting for whether or not the system continues host I/O writes to the
P-VOL while the pair is split. (If you run the command from the S-VOL,
this item is disabled.)
Item
P-VOL
Description
Port - GID LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
P-VOL Fence
Level
Not used for HAM. Never is the default, meaning the P-VOL is never fenced.
Copy Pace
The number of tracks (1-15) for the resync operations (default = 15).
Initial copy parameter.
Priority
Scheduling order for the resync operation (1-256, default = 32). Initial
copy parameter.
Change
attribute to
HAM
Quorum Disk ID
Used to specify a quorum disk ID when changing a TrueCopy Sync pair
to a HAM pair. The list shows the quorum disk ID and the RCU
information such as the serial number, controller ID, and model name.
Item
Volume
Description
Port - GID LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
Delete Pair
by Force
Yes: The pair is released even if the primary storage system cannot
communicate with the secondary storage system.
No: The pair is released only when the primary storage system can
change pair status for both the P-VOL and S-VOL to SMPL.
When the status of the pairs to be released is SMPL, the default setting
is Yes (cannot be changed). Otherwise, the default setting is No.
Item
Tree
List
Description
Shows the connected Storage System, LDKC, and Used and Not Used.
When Used or Not Used is selected, the used and unused quorum
disk IDs in the system are shown in the list area.
The quorum disk ID list shows quorum disk information. You can sort the
list by column in ascending or descending order. The list contains the
following items:
Paired S/N:
Controller ID:
Preview
Shows any changes you have made. You can alter or delete changes by
right-clicking a row in Preview.
Apply
Cancel
Item
Description
Quorum Disk
Where the external volume to be used as a quorum disk is selected.
RCU
Where the paired CU is selected. The list shows the RCU information
registered in CU Free. Multiple RCUs with the same serial number and
controller ID, but different path group IDs, appear as one RCU.
Glossary
This glossary defines the special terms used in this document.
#
2DC
two-data-center. Refers to the local and remote sites, or data centers, in
which TrueCopy (TC) and Universal Replicator (UR) combine to form a
remote replication configuration.
In a 2DC configuration, data is copied from a TC primary volume at the
local site to the UR master journal volume at an intermediate site, then
replicated to the UR secondary volume at the remote site. Since this
configuration side-steps the TC secondary volume at the intermediate
site, the intermediate site is not considered a data center.
3DC
three-data-center. Refers to the local, intermediate, and remote sites, or
data centers, in which TrueCopy and Universal Replicator combine to
form a remote replication configuration.
In a 3DC configuration, data is copied from a local site to an
intermediate site and then to a remote site (3DC cascade configuration),
or from a local site to two separate remote sites (3DC multi-target
configuration).
A
alternate path
A secondary path (port, target ID, LUN) to a logical volume, in addition
to the primary path, that is used as a backup in case the primary path
fails.
array
Another name for a RAID storage system.
array group
See RAID group.
async
asynchronous
at-time split
Consistency group operation that performs multiple pairsplit operations
at a pre-determined time.
audit log
Files that store a history of the operations performed from SN and the
service processor (SVP), commands that the storage system received
from hosts, and data encryption operations.
B
base emulation type
Emulation type that is set when drives are installed. Determines the
device emulation types that can be set in the RAID group.
BC
business continuity
BCM
Business Continuity Manager
blade
A computer module, generally a single circuit board, used mostly in
servers.
BLK, blk
block
bmp
bitmap
C
C/T
See consistency time (C/T).
ca
cache
capacity
The amount of data storage space available on a physical storage device,
usually measured in bytes (MB, GB, TB, etc.).
cascade configuration
In a 3DC cascade configuration for remote replication, data is copied
from a local site to an intermediate site and then to a remote site using
TrueCopy and Universal Replicator. See also 3DC.
In a Business Copy Z cascade configuration, two layers of secondary
volumes can be defined for a single primary volume. Pairs created in the
first and second layer are called cascaded pairs.
cascade function
A ShadowImage function for open systems where a primary volume (P-VOL) can have up to nine secondary volumes (S-VOLs) in a layered
configuration. The first cascade layer (L1) is the original ShadowImage
pair with one P-VOL and up to three S-VOLs. The second cascade layer
(L2) contains ShadowImage pairs in which the L1 S-VOLs are
functioning as the P-VOLs of layer-2 ShadowImage pairs that can have
up to two S-VOLs for each P-VOL.
See also root volume, node volume, leaf volume, level-1 pair, and level-2 pair.
cascaded pair
A ShadowImage pair in a cascade configuration. See cascade
configuration.
shared volume
A volume that is being used by more than one replication function. For
example, a volume that is the primary volume of a TrueCopy pair and
the primary volume of a ShadowImage pair is a shared volume.
CCI
Command Control Interface
CFL
Configuration File Loader. A SN function for validating and running
scripted spreadsheets.
CFW
cache fast write
CG
See consistency group (CTG).
CTG
See consistency group (CTG).
CH
channel
channel path
The communication path between a channel and a control unit. A
channel path consists of the physical channel path and the logical path.
CHAP
challenge handshake authentication protocol
CL
cluster
CLI
command line interface
CLPR
cache logical partition
cluster
Multiple-storage servers working together to respond to multiple read
and write requests.
command device
A dedicated logical volume used only by Command Control Interface and
Business Continuity Manager to interface with the storage system. Can
be shared by several hosts.
controller
The component in a storage system that manages all storage functions.
It is analogous to a computer and contains processors, I/O devices,
RAM, power supplies, cooling fans, and other sub-components as
needed to support the operation of the storage system.
copy-on-write
Point-in-time snapshot copy of any data volume within a storage
system. Copy-on-write snapshots only store changed data blocks,
therefore the amount of storage capacity required for each copy is
substantially smaller than the source volume.
copy pair
A pair of volumes in which one volume contains original data and the
other volume contains the copy of the original. Copy operations can be
synchronous or asynchronous, and the volumes of the copy pair can be
located in the same storage system (local copy) or in different storage
systems (remote copy).
A copy pair can also be called a volume pair, or just pair.
COW
copy-on-write
COW Snapshot
Hitachi Copy-on-Write Snapshot
CTG
See consistency group (CTG).
CTL
controller
CU
control unit
currency of data
The synchronization of the volumes in a copy pair. When the data on the
secondary volume (S-VOL) is identical to the data on the primary
volume (P-VOL), the data on the S-VOL is current. When the data on the
S-VOL is not identical to the data on the P-VOL, the data on the S-VOL
is not current.
CYL, cyl
cylinder
cylinder bitmap
Indicates the differential data (updated by write I/Os) in a volume of a
split or suspended copy pair. The primary and secondary volumes each
have their own cylinder bitmap. When the pair is resynchronized, the
cylinder bitmaps are merged, and the differential data is copied to the
secondary volume.
D
DASD
direct-access storage device
data consistency
When the data on the secondary volume is identical to the data on the
primary volume.
data path
The physical paths used by primary storage systems to communicate
with secondary storage systems in a remote replication environment.
data pool
One or more logical volumes designated to temporarily store original
data. When a snapshot is taken of a primary volume, the data pool is
used if a data block in the primary volume is to be updated. The original
snapshot of the volume is maintained by storing the to-be-changed data
blocks in the data pool.
DB
database
DBMS
database management system
delta resync
A disaster recovery solution in which TrueCopy and Universal Replicator
systems are configured to provide a quick recovery using only
differential data stored at an intermediate site.
device
A physical or logical unit with a specific function.
device emulation
Indicates the type of logical volume. Mainframe device emulation types
provide logical volumes of fixed size, called logical volume images
(LVIs), which contain EBCDIC data in CKD format. Typical mainframe
device emulation types include 3390-9 and 3390-M. Open-systems
device emulation types provide logical volumes of variable size, called
logical units (LUs), that contain ASCII data in FBA format. The typical
open-systems device emulation type is OPEN-V.
DEVN
device number
DFW
DASD fast write
DHCP
dynamic host configuration protocol
differential data
Changed data in the primary volume not yet reflected in the copy.
disaster recovery
A set of procedures to recover critical application data and processing
after a disaster or other failure.
disk array
Disk array, or just array, is another name for a RAID storage system.
DKC
disk controller. Can refer to the RAID storage system or the controller
components.
DKCMAIN
disk controller main. Refers to the microcode for the RAID storage
system.
DKP
disk processor. Refers to the microprocessors on the back-end director
features of the Universal Storage Platform V.
DKU
disk unit. Refers to the cabinet (floor model) or rack-mounted hardware
component that contains data drives and no controller components.
DMP
Dynamic Multi Pathing
DRU
Hitachi Data Retention Utility
DP-VOL
Dynamic Provisioning-virtual volume. A virtual volume with no memory
space used by Dynamic Provisioning.
dynamic provisioning
An approach to managing storage. Instead of reserving a fixed amount
of storage, it removes capacity from the available pool when data is
actually written to disk. Also called thin provisioning.
E
EC
error code
emulation
The operation of the Hitachi RAID storage system to emulate the
characteristics of a different storage system. For device emulation the
mainframe host sees the logical devices on the RAID storage system
as 3390-x devices. For controller emulation the mainframe host sees
the control units (CUs) on the RAID storage system as 2105 or 2107
controllers.
The RAID storage system operates the same as the storage system being
emulated.
emulation group
A set of device emulation types that can be intermixed within a RAID
group and treated as a group.
env.
environment
ERC
error reporting communications
ESCON
Enterprise System Connection
EXCTG
See extended consistency group (EXCTG).
EXG
external volume group
ext.
external
external application
A software module that is used by a storage system but runs on a
separate platform.
external port
A fibre-channel port that is configured to be connected to an external
storage system for Universal Volume Manager operations.
external volume
A logical volume whose data resides on drives that are physically located
outside the Hitachi storage system.
F
failback
The process of switching operations from the secondary path or host
back to the primary path or host, after the primary path or host has
recovered from failure. See also failover.
failover
The process of switching operations from the primary path or host to a
secondary path or host when the primary path or host fails.
FBA
fixed-block architecture
FC
fibre channel; FlashCopy
FCA
fibre-channel adapter
FC-AL
fibre-channel arbitrated loop
FCIP
fibre-channel internet protocol
FCP
fibre-channel protocol
FCSP
fibre-channel security protocol
FIBARC
Fibre Connection Architecture
FICON
Fibre Connectivity
FIFO
first in, first out
free capacity
The amount of storage space (in bytes) that is available for use by the
host systems.
FSW
fibre switch
FTP
file-transfer protocol
FV
fixed-size volume
FWD
fast-wide differential
G
GID
group ID
GUI
graphical user interface
H
HA
high availability
HACMP
High Availability Cluster Multi-Processing
HAM
Hitachi High Availability Manager
HDLM
Hitachi Dynamic Link Manager
HDP
Hitachi Dynamic Provisioning
HDS
Hitachi Data Systems
HDT
Hitachi Dynamic Tiering
HDvM
Hitachi Device Manager
HGLAM
Hitachi Global Link Availability Manager
H-LUN
host logical unit
HOMRCF
Hitachi Open Multi-RAID Coupling Feature. Another name for Hitachi
ShadowImage.
HORC
Hitachi Open Remote Copy. Another name for Hitachi TrueCopy.
HORCM
Hitachi Open Remote Copy Manager. Another name for Command
Control Interface.
host failover
The process of switching operations from one host to another host when
the primary host fails.
host group
A group of hosts of the same operating system platform.
host mode
Operational modes that provide enhanced compatibility with supported
host platforms. Used with fibre-channel ports on RAID storage systems.
HRC
Hitachi Remote Copy. Another name for Hitachi TrueCopy for IBM z/OS.
HRpM
Hitachi Replication Manager
HSCS
Hitachi Storage Command Suite. This suite of products is now called the
Hitachi Command Suite.
HUR
Hitachi Universal Replicator
HXRC
Hitachi Extended Remote Copy. Another name for Hitachi Compatible
Replication for IBM XRC.
I
iFCP
internet fibre-channel protocol
IML
initial microcode load; initial microprogram load
IMPL
initial microprogram load
initial copy
An initial copy operation is performed when a copy pair is created. Data
on the primary volume is copied to the secondary volume.
initiator port
A fibre-channel port configured to send remote I/Os to an RCU target
port on another storage system. See also RCU target port and target
port.
in-system replication
The original data volume and its copy are located in the same storage
system. ShadowImage in-system replication provides duplication of
logical volumes; Copy-on-Write Snapshot in-system replication provides
snapshots of logical volumes that are stored and managed as virtual
volumes (V-VOLs).
See also remote replication.
internal volume
A logical volume whose data resides on drives that are physically located
within the storage system. See also external volume.
IO, I/O
input/output
IOPS
I/Os per second
IP
internet protocol
IPL
initial program load
J
JNL
journal
JNLG
journal group
journal volume
A volume that records and stores a log of all events that take place in
another volume. In the event of a system crash, the journal volume logs
are used to restore lost data and maintain data integrity.
In Universal Replicator, differential data is held in journal volumes
until it is copied to the S-VOL.
JRE
Java Runtime Environment
L
L1 pair
See layer-1 (L1) pair.
L2 pair
See layer-2 (L2) pair.
LAN
local-area network
LBA
logical block address
LCP
local control port; link control processor
LCU
logical control unit
LDEV
logical device
LDKC
See logical disk controller (LDKC).
leaf volume
A level-2 secondary volume in a ShadowImage cascade configuration.
The primary volume of a layer-2 pair is called a node volume. See also
cascade configuration.
LED
light-emitting diode
license key
A specific set of characters that unlocks a software application so that
you can use it.
local copy
See in-system replication.
local site
See primary site.
logical volume
See volume.
LU
logical unit
LUN
logical unit number
LUNM
Hitachi LUN Manager
LUSE
Hitachi LUN Expansion; Hitachi LU Size Expansion
LV
logical volume
M
main control unit (MCU)
A storage system at a primary or main site that contains primary
volumes of TrueCopy for Mainframe remote replication pairs. The MCU is
configured to send remote I/Os to one or more storage systems at the
secondary or remote site, called remote control units (RCUs), that
contain the secondary volumes of the remote replication pairs. See also
remote control unit (RCU).
main site
See primary site.
max.
maximum
MB
megabyte
Mb/sec, Mbps
megabits per second
MB/sec, MBps
megabytes per second
MCU
See main control unit (MCU).
MF, M/F
mainframe
MIH
missing interrupt handler
mirror
In Universal Replicator, each pair relationship in and between journals is
called a mirror. Each pair is assigned a mirror ID when it is created.
The mirror ID identifies individual pair relationships between journals.
M-JNL
main journal
modify mode
The mode of operation of SN where you can change the storage
system configuration. The two SN modes are view mode and modify
mode. See also view mode.
MP
microprocessor
MSCS
Microsoft Cluster Server
mto, MTO
mainframe-to-open
MU
mirror unit
multi-pathing
A performance and fault-tolerant technique that uses more than one
physical connection between the storage system and host system. Also
called multipath I/O.
M-VOL
main volume
N
node volume
A level-2 primary volume in a ShadowImage cascade configuration. The
secondary volume of a layer-2 pair is called a leaf volume. See also
cascade configuration.
NUM
number
NVS
nonvolatile storage
O
OPEN-V
A logical unit (LU) of user-defined size that is formatted for use by
open-systems hosts.
OPEN-x
A logical unit (LU) of fixed size (for example, OPEN-3 or OPEN-9) that is
used primarily for sharing data between mainframe and open-systems
hosts using Hitachi Cross-OS File Exchange.
OS
operating system
OS/390
Operating System/390
P
pair
Two logical volumes in a replication relationship in which one volume
contains original data to be copied and the other volume contains the
copy of the original data. The copy operations can be synchronous or
asynchronous, and the pair volumes can be located in the same storage
system (in-system replication) or in different storage systems (remote
replication).
pair status
Indicates the condition of a copy pair. A pair must have a specific status
for specific operations. When an operation completes, the status of the
pair changes to the new status.
parity group
See RAID group.
path failover
The ability of a host to switch from using the primary path to a logical
volume to the secondary path to the volume when the primary path fails.
Path failover ensures continuous host access to the volume in the event
the primary path fails.
See also alternate path and failback.
PG
parity group. See RAID group.
physical device
See device.
PiT
point-in-time
pool
A set of volumes that are reserved for storing Copy-on-Write Snapshot
data or Dynamic Provisioning write data.
port attribute
Indicates the type of fibre-channel port: target, RCU target, or initiator.
port block
A group of four fibre-channel ports that have the same port mode.
port mode
The operational mode of a fibre-channel port. The three port modes for
fibre-channel ports on the Hitachi RAID storage systems are standard,
high-speed, and initiator/external MIX.
PPRC
Peer-to-Peer Remote Copy
Preview list
The list of requested operations on SN.
primary site
The physical location of the storage system that contains the original
data to be replicated and that is connected to one or more storage
systems at the remote or secondary site via remote copy connections.
A primary site can also be called a main site or local site.
The term primary site is also used for host failover operations. In that
case, the primary site is the host computer where the production
applications are running, and the secondary site is where the backup
applications run when the applications at the primary site fail, or where
the primary site itself fails.
primary volume
The volume in a copy pair that contains the original data to be replicated.
The data in the primary volume is duplicated synchronously or
asynchronously in the secondary volume.
The following Hitachi products use the term P-VOL: SN, Copy-on-Write
Snapshot, ShadowImage, ShadowImage for Mainframe, TrueCopy,
Universal Replicator, Universal Replicator for Mainframe, and High
Availability Manager.
See also secondary volume (S-VOL).
P-site
primary site
P-VOL
Term used for the primary volume in the earlier version of the SN GUI
(still in use). See primary volume.
Q
quick format
The quick format feature in Virtual LVI/Virtual LUN in which the
formatting of the internal volumes is done in the background. This
allows you to configure the system (for example, defining a path or
creating a TrueCopy pair) before the formatting is completed. To use
quick format, the volumes must be in blocked status.
quick restore
A reverse resynchronization in which no data is actually copied: the
primary and secondary volumes are swapped.
quick split
A split operation in which the pair becomes split immediately before the
differential data is copied to the secondary volume (S-VOL). Any
remaining differential data is copied to the S-VOL in the background. The
benefit is that the S-VOL becomes immediately available for read and
write I/O.
R
R/W, r/w
read/write
RAID
redundant array of inexpensive disks
RAID group
A redundant array of inexpensive drives (RAID) that have the same
capacity and are treated as one group for data storage and recovery. A
RAID group contains both user data and parity information, and the
storage system can access the user data in the event that one or more
of the drives within the RAID group are not available. The RAID level of
a RAID group determines the number of data drives and parity drives
and how the data is striped across the drives. For RAID1, user data is
duplicated within the RAID group, so there is no parity data for RAID1
RAID groups.
A RAID group can also be called an array group or a parity group.
RAID level
The type of RAID implementation. RAID levels include RAID0, RAID1,
RAID2, RAID3, RAID4, RAID5 and RAID6.
RCP
remote control port
RCU
See remote control unit (RCU).
RD
read
remote console PC
A previous term for the personal computer (PC) system that is
LAN-connected to a RAID storage system. The current term is SN PC.
remote copy
See remote replication.
remote replication
Data replication configuration in which the storage system that contains
the original data is at a local site and the storage system that contains
the copy of the original data is at a remote site. TrueCopy and Universal
Replicator provide remote replication. See also in-system replication.
remote site
See secondary site.
resync
Resync is short for resynchronize.
RF
record format
RIO
remote I/O
R-JNL
restore journal
RL
record length
RMI
Remote Method Invocation
rnd
random
root volume
A level-1 primary volume in a ShadowImage cascade configuration. The
secondary volume of a layer-1 pair is called a node volume. See also
cascade configuration.
RPO
recovery point objective
R-SIM
remote service information message
R-site
remote site (used for Universal Replicator)
RTC
real-time clock
RTO
recovery time objective
R-VOL
See remote volume (R-VOL).
S
S#
serial number
S/N
serial number
s/w
software
SAID
system adapter ID
SAN
storage-area network
SATA
serial Advanced Technology Attachment
SC
storage control
SCDS
source control dataset
SCI
state change interrupt
scripting
The use of command line scripts, or spreadsheets downloaded by
Configuration File Loader, to automate storage management operations.
SCSI
small computer system interface
secondary site
The physical location of the storage system that contains the
secondary volumes of remote replication pairs whose primary volumes
reside at the main or primary site. The storage system at the
secondary site is connected to the storage system at the main or
primary site via remote copy connections. The secondary site can also
be called the remote site. See also primary site.
secondary volume
The volume in a copy pair that is the copy. The following Hitachi products
use the term secondary volume: SN, Copy-on-Write Snapshot,
ShadowImage, ShadowImage for Mainframe, TrueCopy, Universal
Replicator, Universal Replicator for Mainframe, and High Availability
Manager.
See also primary volume.
seq.
sequential
severity level
Applies to service information messages (SIMs) and SN error codes.
SI
Hitachi ShadowImage
SIz
Hitachi ShadowImage for Mainframe
sidefile
An area of cache memory that is used to store updated data for later
integration into the copied data.
SIM
service information message
size
Generally refers to the storage capacity of a memory module or cache.
Not usually used for storage of data on disk or flash drives.
SM
shared memory
SMTP
simple mail transfer protocol
SN
Storage Navigator; also used as an abbreviation for serial number
snapshot
A point-in-time virtual copy of a Copy-on-Write Snapshot primary
volume (P-VOL). The snapshot is maintained when the P-VOL is updated
by storing pre-updated data (snapshot data) in a data pool.
SNMP
simple network management protocol
SOM
system option mode
space
Generally refers to the data storage capacity of a disk drive or flash
drive.
SRM
Storage Replication Manager
SS
snapshot
SSB
sense byte
SSID
(storage) subsystem identifier. SSIDs are used as an additional way to
identify a control unit on mainframe operating systems. Each group of
64 or 256 volumes requires one SSID, therefore there can be one or four
SSIDs per CU image. For HUS VM, one SSID is associated with 256
volumes.
SSL
secure socket layer
steady split
In ShadowImage, a typical pair split operation in which any remaining
differential data from the P-VOL is copied to the S-VOL and then the pair
is split.
S-VOL
See secondary volume or source volume (S-VOL). When used for
secondary volume, the term S-VOL appears only in the earlier version
of the SN GUI (still in use).
SVP
See service processor (SVP).
sync
synchronous
T
target port
A fibre-channel port that is configured to receive and process host I/Os.
TB
terabyte
TC
Hitachi TrueCopy
TCz
Hitachi TrueCopy for Mainframe
TDEVN
target device number
TGT
target; target port
THD
threshold
TID
target ID
total capacity
The aggregate amount of storage space in a data storage system.
T-VOL
See target volume (T-VOL).
U
update copy
An operation that copies differential data on the primary volume of a
copy pair to the secondary volume. Update copy operations are
performed in response to write I/Os on the primary volume after the
initial copy operation is completed.
UR
Hitachi Universal Replicator
URz
Hitachi Universal Replicator for Mainframe
USP
Hitachi TagmaStore Universal Storage Platform
USP V
Hitachi Universal Storage Platform V
USP VM
Hitachi Universal Storage Platform VM
UT
Universal Time
UTC
Universal Time-coordinated
V
V
version; variable length and de-blocking (mainframe record format)
VB
variable length and blocking (mainframe record format)
view mode
The mode of operation of SN where you can only view the storage
system configuration. The two SN modes are view mode and modify
mode. See also modify mode.
VLL
Hitachi Virtual LVI/LUN
VLVI
Hitachi Virtual LVI
VM
volume migration; volume manager
VOL, vol
volume
VOLID
volume ID
volser
volume serial number
volume
A logical device (LDEV), or a set of concatenated LDEVs in the case of
LUSE, that has been defined to one or more hosts as a single data
storage unit.
volume pair
See copy pair.
V-VOL
virtual volume
Index
C
   Cache Residency Manager, sharing volumes with 2-12
   capacity 2-5
   CCI
      troubleshooting with 8-6
      using for pair operations 4-21
   changing TrueCopy pair to HAM 4-19
   components 1-3
   configuration
      hardware order 3-2
      quorum disk 3-6
      software order 3-4
      with Business Copy 2-13
      with ShadowImage 2-13
   create pairs 4-11
D
   data path
      general 1-4
      max, min, recommended 2-7
      overview 1-4
      requirements 2-7
      switching 5-2
   deleting pairs 4-18
E
   ending HAM operations 5-3
   expired license 2-5
   external storage systems, requirements 2-8
F
   failover
      description 1-6
      planning 2-8
G
   GUI, using to
      add quorum disk ID 3-7
      create pairs 4-11
      delete pairs 4-18
      resync pairs 4-16
      split pairs 4-15
H
   hardware
      required 2-3
      setup 3-2
   High Availability Manager, discontinuing 5-3
   host recognition of pair 4-13
I
   initial copy 4-11
   interoperability, sharing volumes 2-11
L
   LDEV requirements 2-5
   license capacity 2-5
   licenses required 2-4
   logical paths
      max, min, recommended 2-7
   LUN Expansion, sharing volumes with HAM 2-13
   LUN Manager 2-12
   LUSE, sharing volumes with HAM 2-13
M
   max. number of pairs 2-5
   MCU 1-4
   multipath software
      description 1-5
      requirements 2-3
N
   non-owner path, restore 6-20
P
   P-VOL 1-4
   P-VOL, disallow I/O when split 4-15
   pair status
      definitions 4-6
      monitoring 4-2
   Paircreate(HAM) dialog box A-8
   pairs
      create 4-11
      max. number 2-5
      releasing 4-18
      requirements 2-5
      resynchronizing 4-16
      split 4-15
      troubleshooting 8-4
   Performance Monitor 2-12
   planning, workflow 2-3
   ports
      max, min, recommended 2-7
   power on/off system 5-6
   power outage, recovery from 6-12
   program products with HAM 2-11
   PSUS types and definitions 4-10
Q
   quorum disk
      and other products 2-12
      delete 5-3
      overview 1-5
      recover if deleted by accident 5-5
      replace 6-10
      requirements 2-6
      setup 3-6
   quorum disk ID, adding 3-7
   Quorum Disk Operation window A-13
R
   RCU 1-4
   recovering quorum disk if deleted 5-5
   recovery
      after failure 6-5
      after power outage 6-12
      with ShadowImage 6-19
   releasing pairs 4-18
   requirements
      data path 2-7
      for multiple pair creation 2-5
      hardware 2-3
      licenses 2-4
      pair volumes 2-5
      quorum disk 2-6
   resynchronizing pairs 4-16
S
   S-VOL 1-4
   S-VOL write option, allowing I/O 4-15
   ShadowImage and HAM 2-13
   SIM 4-4
   software setup 3-4
   split operation, maintaining synchronization 4-15
   splitting pairs 4-15
   Storage Navigator
      general 1-5
      requirements 2-8
   storage systems 1-4
   suspended pairs, troubleshooting 8-4
   system
      configuration for HAM 3-5
      hardware requirements 2-3
      power on/off procedure 5-6
      primary, secondary requirements 2-4
T
   technical support 6-21, 8-9
   troubleshooting
      general errors 8-2
      suspended pairs 8-4
      using CCI 8-6
   TrueCopy, changing pair to HAM 4-19
MK-92RD7052-00