
Hitachi Unified Storage VM Block Module

Hitachi High Availability Manager User Guide

FASTFIND LINKS
Contents
Product Version
Getting Help

MK-92HM7052-00

© 2013 Hitachi, Ltd. All rights reserved.


No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any
purpose without the express written permission of Hitachi, Ltd. (hereinafter referred to as "Hitachi") and
Hitachi Data Systems Corporation (hereinafter referred to as "Hitachi Data Systems").
Hitachi and Hitachi Data Systems reserve the right to make changes to this document at any time without
notice and assume no responsibility for its use. This document contains the most current information
available at the time of publication. When new and/or revised information becomes available, this entire
document will be updated and distributed to all registered users.
Some of the features described in this document may not be currently available. Refer to the most recent
product announcement or contact your local Hitachi Data Systems sales office for information about feature
and product availability.
Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of
the applicable Hitachi Data Systems agreements. The use of Hitachi Data Systems products is governed by
the terms of your agreements with Hitachi Data Systems.
Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries.
ShadowImage and TrueCopy are registered trademarks of Hitachi Data Systems.
AIX, ESCON, FICON, FlashCopy, IBM, MVS/ESA, MVS/XA, OS/390, S/390, VM/ESA, VSE/ESA, z/OS, zSeries,
z/VM, and zVSE are registered trademarks or trademarks of International Business Machines Corporation.
All other trademarks, service marks, and company names are properties of their respective owners.
Microsoft product screen shots reprinted with permission from Microsoft Corporation.

ii
HUS VM Block Module Hitachi High Availability Manager User Guide

Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .ix
Intended audience . . . . . . x
Product version . . . . . . x
Release notes . . . . . . x
Document revision level . . . . . . x
Referenced documents . . . . . . x
Document conventions . . . . . . x
Conventions for storage capacity values . . . . . . xi
Accessing product documentation . . . . . . xii
Getting help . . . . . . xii
Comments . . . . . . xii

High Availability Manager overview . . . . . . . . . . . . . . . . . . . . . . . 1-1


How HAM works . . . . . . 1-2
HAM components . . . . . . 1-3
  HUS VM storage systems . . . . . . 1-4
  Main and remote control units . . . . . . 1-4
  Pair volumes . . . . . . 1-4
  Data paths . . . . . . 1-4
  Quorum disk . . . . . . 1-5
  Multipath software . . . . . . 1-5
  Storage Navigator GUI . . . . . . 1-5
  Command Control Interface (CCI) . . . . . . 1-5
Data replication . . . . . . 1-5
Failover . . . . . . 1-6

System implementation planning and system requirements . . . . . . 2-1


The workflow for planning High Availability Manager implementation . . . . . . 2-3
Required hardware . . . . . . 2-3
Multipath software . . . . . . 2-3
Storage system requirements . . . . . . 2-4
Licenses . . . . . . 2-4
License capacity . . . . . . 2-5
Pair volume requirements . . . . . . 2-5
Quorum disk requirements . . . . . . 2-6
Data path requirements and recommendations . . . . . . 2-7
Storage Navigator requirements . . . . . . 2-8
External storage systems . . . . . . 2-8
Planning failover . . . . . . 2-8
Preventing unnecessary failover . . . . . . 2-10
Sharing volumes with other Hitachi Data Systems software products . . . . . . 2-11
  Virtual Partition Manager . . . . . . 2-12
  Cache Residency Manager . . . . . . 2-12
  Performance Monitor . . . . . . 2-12
  LUN Manager . . . . . . 2-12
  Open Volume Management . . . . . . 2-12
  LUN Expansion . . . . . . 2-13
  Configurations with ShadowImage volumes . . . . . . 2-13
    Configuring HAM with ShadowImage . . . . . . 2-13

System configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


The basic workflow for configuring the system configuration . . . . . . 3-2
Connecting the hardware components . . . . . . 3-2
  Prerequisites . . . . . . 3-3
  The workflow for connecting the hardware components . . . . . . 3-3
Installing and configuring software . . . . . . 3-4
  Additional documentation . . . . . . 3-4
  Prerequisites . . . . . . 3-4
  The workflow for installing and configuring High Availability Manager . . . . . . 3-5
Configuring the primary and secondary storage systems . . . . . . 3-5
  Additional documentation . . . . . . 3-5
  Prerequisites . . . . . . 3-6
  Workflow . . . . . . 3-6
Configuring the quorum disks . . . . . . 3-6
  Prerequisites . . . . . . 3-6
  Procedure . . . . . . 3-6
Adding the ID for the quorum disk to the storage systems . . . . . . 3-7
  Prerequisites . . . . . . 3-8
  Procedure . . . . . . 3-8
Configuring host mode options . . . . . . 3-8
  Prerequisites . . . . . . 3-8
  Procedure . . . . . . 3-8

Working with volume pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Workflow for HAM volume pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Reasons for checking pair status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2

When to check pair status . . . . . . 4-2
How pair status reflects system events and use . . . . . . 4-2
What pairs information can you view and where is it? . . . . . . 4-4
  Where to find the information . . . . . . 4-4
How hosts see volume pairs . . . . . . 4-4
Checking pair status . . . . . . 4-5
Pair status values . . . . . . 4-6
  Split types (PSUS status) . . . . . . 4-10
  Suspend types (PSUE status) . . . . . . 4-10
Volume pair creation . . . . . . 4-11
  Creating a HAM pair . . . . . . 4-11
    Prerequisites . . . . . . 4-11
    Procedure . . . . . . 4-11
  Verifying host recognition of a new pair . . . . . . 4-13
  How multipath software shows storage serial number for pairs . . . . . . 4-15
Splitting pairs . . . . . . 4-15
  Prerequisites . . . . . . 4-15
  Procedure . . . . . . 4-15
Resynchronizing pairs . . . . . . 4-16
  Reverse resynchronization . . . . . . 4-17
    Prerequisites . . . . . . 4-17
    Procedure . . . . . . 4-17
Releasing a pair . . . . . . 4-18
Changing TrueCopy pairs to HAM pairs . . . . . . 4-19
  Requirements . . . . . . 4-19
  Procedure . . . . . . 4-20
Comparison of the CCI commands and Storage Navigator . . . . . . 4-21

System maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Applications used to perform maintenance tasks . . . . . . 5-2
  Required Storage Navigator settings . . . . . . 5-2
Related documentation . . . . . . 5-2
The different types of maintenance tasks . . . . . . 5-2
  Switching paths using multipath software . . . . . . 5-2
  Discontinuing HAM operations . . . . . . 5-3
  Quorum disk ID deletion . . . . . . 5-3
    Deleting quorum disk IDs (standard method) . . . . . . 5-4
    Deleting quorum disk IDs by system attribute (forced deletion) . . . . . . 5-4
  Recovery of accidentally deleted quorum disks . . . . . . 5-5
    Recovering the disk when the P-VOL was receiving host I/O at deletion . . . . . . 5-5
    Recovering the disk when the S-VOL was receiving host I/O at deletion . . . . . . 5-5
  Planned outages for system components . . . . . . 5-6
    Options for performing the planned outages . . . . . . 5-6
    The procedures for performing planned outages . . . . . . 5-7
      Performing planned outages (quorum disk only) . . . . . . 5-7
      Performing planned outages (primary system and quorum disk) . . . . . . 5-9


Performing planned outages (secondary system and quorum disk) . . . . 5-10


Performing planned outages (both systems and quorum disk) . . . . . . . 5-10

Disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


Main types of failures that can disrupt your system . . . . . . 6-2
The basic recovery process . . . . . . 6-2
System failure messages . . . . . . 6-2
Detecting failures . . . . . . 6-2
  Option 1: Check for failover first . . . . . . 6-3
    Using Storage Navigator to check for failover . . . . . . 6-3
    Using CCI to check for failover . . . . . . 6-3
    Using multipath software to check for failover . . . . . . 6-3
  Option 2: Check for failures only . . . . . . 6-4
Determining which basic recovery procedures to use . . . . . . 6-4
  Selecting Procedures . . . . . . 6-4
Recovery from blocked pair volumes . . . . . . 6-5
  Recovering from primary volume failure on the MCU . . . . . . 6-6
  Recovering from secondary volume failure on the MCU . . . . . . 6-7
  Recovering from primary volume failure on the RCU . . . . . . 6-8
  Recovering from secondary volume failure on the RCU . . . . . . 6-9
Recovery from quorum disk failure . . . . . . 6-9
  Replacement of quorum disks . . . . . . 6-10
    Replacing a quorum disk when the MCU is receiving host I/O . . . . . . 6-10
    Replacing a quorum disk when the RCU is receiving host I/O . . . . . . 6-11
Recovery from power failure . . . . . . 6-12
  Primary storage system recovery . . . . . . 6-12
    Recovering the system when the RCU is receiving host I/O updates . . . . . . 6-13
    Recovering the system when host I/O updates have stopped . . . . . . 6-14
  Secondary system recovery . . . . . . 6-14
    Recovering the system when the P-VOL is receiving host updates . . . . . . 6-15
    Recovering the system when host updates have stopped . . . . . . 6-15
Recovery from failures using resynchronization . . . . . . 6-16
  Required conditions . . . . . . 6-16
  Determining which resynchronization recovery procedure to use . . . . . . 6-17
    Prerequisites . . . . . . 6-17
    Procedure . . . . . . 6-19
  Recovering primary volume from ShadowImage secondary volume . . . . . . 6-19
    Prerequisites . . . . . . 6-19
    Procedure . . . . . . 6-19
Recovering from path failures . . . . . . 6-20
Allowing host I/O to an out-of-date S-VOL . . . . . . 6-20
Contacting the Hitachi Data Systems Support Center . . . . . . 6-21

Using HAM in a cluster system. . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Cluster system architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2

Required software . . . . . . 7-2
Supported cluster software . . . . . . 7-2
Configuration requirements . . . . . . 7-3
Configuring the system . . . . . . 7-3
Disaster recovery in a cluster system . . . . . . 7-3
Restrictions . . . . . . 7-4

Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Potential causes of errors . . . . . . 8-2
Is there an error message for every type of failure? . . . . . . 8-2
Where do you look for error messages? . . . . . . 8-2
Basic types of troubleshooting procedures . . . . . . 8-2
  Troubleshooting general errors . . . . . . 8-2
  Suspended volume pair troubleshooting . . . . . . 8-4
    The workflow for troubleshooting suspended pairs when using Storage Navigator . . . . . . 8-4
    Troubleshooting suspended pairs when using CCI . . . . . . 8-6
      Location of the CCI operation log file . . . . . . 8-6
      Example log file . . . . . . 8-7
      Related topics . . . . . . 8-8
  Recovery of data stored only in cache memory . . . . . . 8-8
    Pinned track recovery procedures . . . . . . 8-8
    Recovering pinned tracks from volume pair drives . . . . . . 8-8
    Recovering pinned tracks from quorum disks . . . . . . 8-9
Contacting the Hitachi Data Systems Support Center . . . . . . 8-9

HAM GUI reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1


Pair Operation window . . . . . . A-2
  Possible VOL Access values for pairs . . . . . . A-5
  Detailed Information dialog box . . . . . . A-5
  Paircreate(HAM) dialog box . . . . . . A-8
  Pairsplit-r dialog box . . . . . . A-10
  Pairresync dialog box . . . . . . A-11
  Pairsplit-S dialog box . . . . . . A-13
Quorum Disk Operation window . . . . . . A-13
  Add Quorum Disk ID dialog box . . . . . . A-15

Glossary
Index


Preface
The Hitachi High Availability Manager User Guide describes and provides
instructions for using the Hitachi High Availability Manager software to plan,
configure, and perform pair operations on the Hitachi Unified Storage VM
(HUS VM) storage system.
Please read this document carefully to understand how to use this product,
and maintain a copy for reference purposes.

Intended audience

Product version

Release notes

Document revision level

Referenced documents

Document conventions

Conventions for storage capacity values

Accessing product documentation

Getting help

Comments


Intended audience
This document is intended for system administrators, Hitachi Data Systems
representatives, and authorized service providers who are involved in
installing, configuring, and operating the HUS VM storage system.
Readers of this document should have knowledge and experience with the following:

• Data processing and RAID storage systems and their basic functions.

• The Unified Storage VM storage system and the Hitachi Unified Storage VM Block Module Hardware User Guide.

• The Storage Navigator software for the Unified Storage VM and the Hitachi Storage Navigator User Guide.

• Remote replication and disaster recovery configurations for enterprise storage data centers.

Product version
This document revision applies to HUS VM microcode 73-03-0x or later.

Release notes
The Hitachi Unified Storage VM Release Notes provide information about the
HUS VM microcode (DKCMAIN and SVP), including new features and
functions and changes. The Release Notes are available on the Hitachi Data
Systems Portal: https://portal.hds.com

Document revision level


Revision         Date           Description
MK-92HM7052-00   October 2013   Initial release.

Referenced documents
HUS VM documentation:

• Hitachi ShadowImage User Guide, MK-92HM7013

• Hitachi Storage Navigator User Guide, MK-92HM7016

• Hitachi Unified Storage VM Block Module Provisioning Guide, MK-92HM7012

• Hitachi Unified Storage VM Block Module Hardware User Guide, MK-92HM7042

• Hitachi Command Control Interface User and Reference Guide, MK-92HM7010

Document conventions
This document uses the following typographic conventions:

Bold
  Indicates the following:
  • Text in a window or dialog box, such as menus, menu options, buttons, and labels. Example: On the Add Pair dialog box, click OK.
  • Text appearing on screen or entered by the user. Example: The -split option.
  • The name of a directory, folder, or file. Example: The horcm.conf file.

Italic
  Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file
  Angle brackets are also used to indicate variables.

Monospace
  Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb

< > angle brackets
  Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>
  Italic is also used to indicate variables.

[ ] square brackets
  Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.

{ } braces
  Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.

| vertical bar
  Indicates that you have a choice between two or more options or arguments. Examples:
  [ a | b ] indicates that you can choose a, b, or nothing.
  { a | b } indicates that you must choose either a or b.

This document uses the following icons to draw attention to information:


Tip
  Provides helpful information, guidelines, or suggestions for performing tasks more effectively.

Note
  Calls attention to important and/or additional information.

Caution
  Warns users of adverse conditions and consequences, such as disruptive operations.

WARNING
  Warns users of severe conditions and consequences, such as destructive operations.

Conventions for storage capacity values


Physical storage capacity values (for example, disk drive capacity) are
calculated based on the following values:

Physical capacity unit    Value
1 KB                      1,000 bytes
1 MB                      1,000² bytes
1 GB                      1,000³ bytes
1 TB                      1,000⁴ bytes
1 PB                      1,000⁵ bytes
1 EB                      1,000⁶ bytes

Logical storage capacity values (for example, logical device capacity) are
calculated based on the following values:

Logical capacity unit     Value
1 KB                      1,024 bytes
1 MB                      1,024 KB or 1,024² bytes
1 GB                      1,024 MB or 1,024³ bytes
1 TB                      1,024 GB or 1,024⁴ bytes
1 PB                      1,024 TB or 1,024⁵ bytes
1 EB                      1,024 PB or 1,024⁶ bytes
1 block                   512 bytes
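The practical difference between the two conventions is easy to quantify. The following bash sketch (the variable names are illustrative only) computes how many bytes one terabyte represents under each convention:

```shell
#!/usr/bin/env bash
# Physical (decimal) convention: 1 TB = 1,000^4 bytes
physical_tb_bytes=$(( 1000 ** 4 ))

# Logical (binary) convention: 1 TB = 1,024^4 bytes
logical_tb_bytes=$(( 1024 ** 4 ))

echo "1 TB (physical) = ${physical_tb_bytes} bytes"   # 1000000000000
echo "1 TB (logical)  = ${logical_tb_bytes} bytes"    # 1099511627776
```

Because 1,024⁴ is roughly 10% larger than 1,000⁴, a device described as 1 TB in physical units corresponds to only about 0.909 TB in logical units.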

Accessing product documentation


The HUS VM user documentation is available on the Hitachi Data Systems
Support Portal: https://hdssupport.hds.com. Please check this site for the
most current documentation, including important updates that may have
been made after the release of the product.

Getting help
The Hitachi Data Systems customer support staff is available 24 hours a
day, seven days a week. If you need technical support, log on to the Hitachi
Data Systems Support Portal for contact information:
https://hdssupport.hds.com

Comments
Please send us your comments on this document:
[email protected]. Include the document title, number, and revision.
Please refer to specific sections and paragraphs whenever possible. All
comments become the property of Hitachi Data Systems.
Thank you!


1
High Availability Manager overview
HAM ensures high availability of host applications used with Hitachi Unified
Storage VM Block Module (HUS VM) storage systems. When input/output
(I/O) failures occur in the primary storage system, HAM protects against the
loss of application availability by automatically switching host applications
from the primary storage system to the secondary storage system, and by
enabling recovery from the failures that caused the I/O errors.
HAM is designed for recovery from on-site disasters such as power supply
failure. TrueCopy is suited to large-scale disaster recovery.

How HAM works

HAM components

Data replication

Failover


How HAM works


HAM uses Hitachi TrueCopy Remote Replication software to create a
synchronous remote copy of a production volume. But where TrueCopy is
suited to large-scale disaster recovery, HAM is intended for recovery from
on-site disasters such as power supply failure. Because of this, HAM is
configured differently from a typical TrueCopy configuration.

• The HAM primary and secondary storage systems are connected to the
  same host. When a HAM pair is created, the host sees the primary and
  secondary volumes as the same volume.

• HAM requires multipath software to be installed on the host. In the event
  that the host cannot access the production volume on the primary
  storage system, host I/O is redirected via the host multipath software to
  the secondary volume on the remote system. Failover is accomplished
  without stopping and restarting the application.

• HAM is used on open host systems only.

• A HAM pair consists of a primary data volume (P-VOL) on the primary
  storage system and a secondary data volume (S-VOL) on the secondary
  storage system (like TrueCopy). The S-VOL is the copy of the P-VOL.

• HAM uses a quorum disk located on an external storage system, which
  keeps track of consistency between the P-VOL and S-VOL. Consistency
  data is used in the event of an unexpected outage of the data path or
  primary storage system. In this case, the differential data in both
  systems is compared and, if the pairs are consistent, host operations
  continue on the secondary storage system.


HAM components
A typical configuration consists of two Hitachi Unified Storage VM Block
Module storage systems installed at the primary and secondary sites. In
addition, the HAM system consists of the following components:

• HAM and TrueCopy software, which are installed on both systems.

• A host server running a multipath software solution, qualified with HAM
  software, that is connected to both storage systems.

• Dedicated Fibre Channel data paths linking the primary and secondary
  storage systems, with Fibre Channel switches.

• A qualified external storage system to host the quorum disks. This
  storage system must be accessible to both the primary and secondary
  storage systems.

• The following interface tools for configuring and operating the pairs:
  • Hitachi Storage Navigator (SN) graphical user interface (GUI),
    located on a management LAN.
  • Hitachi Command Control Interface software (CCI), located on the
    host.

HAM components are illustrated in the following figure and described in
more detail in the following topics.
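On the CCI side, the pairs that CCI operates on are defined in a HORCM configuration file on the host. The fragment below is a sketch only: the instance number, service names, command device path, serial number, and group/device names are hypothetical examples, and the authoritative file syntax is given in the Hitachi Command Control Interface User and Reference Guide, MK-92HM7010.

```
# horcm0.conf -- illustrative fragment; all addresses, names, and
# serial numbers below are hypothetical examples, not defaults.

HORCM_MON
# ip_address   service   poll(10ms)   timeout(10ms)
localhost      horcm0    1000         3000

HORCM_CMD
# dev_name (command device on the local storage system)
/dev/rdsk/c0t0d1s2

HORCM_LDEV
# dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
ham_group     ham_dev0   210001    00:10            0

HORCM_INST
# dev_group   ip_address    service
ham_group     remote-host   horcm1
```

A matching configuration file on the host describes the secondary system's view of the same device group, and both HORCM instances must be running before any pair commands are issued.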


HUS VM storage systems


HAM operations are conducted between Hitachi Unified Storage VM (HUS
VM) and another storage system on the primary and secondary sites. The
primary storage system consists of the main control unit (MCU) and service
processor (SVP). The secondary storage system consists of the remote
control unit (RCU) and SVP.
The primary storage system communicates with the secondary storage
system over dedicated Fibre Channel remote copy connections.

Main and remote control units


Like TrueCopy, HAM replication relationships exist at the Logical Control Unit
(LCU) level within the storage systems.

• Primary storage system LCUs containing the production volumes to be
  replicated are called MCUs (main control units).

• Secondary storage system LCUs containing the copy volumes are called
  remote control units (RCUs).

Normally the MCU contains the P-VOLs and the RCU contains the S-VOLs.
The MCU communicates with the RCU via the data path. You can
simultaneously set P-VOL and S-VOL in the same storage system if the
volumes are used by different pairs. In this case, the CU can function
simultaneously as an MCU for the P-VOL and as an RCU for the S-VOL.
The MCU is often referred to as the primary storage system in this
document; the RCU is often referred to as the secondary storage system.

Pair volumes
Original data from the host is stored in the P-VOL; the remote copy is stored
in the S-VOL. Data is copied as it is written to the P-VOL; new updates are
copied only when the previous updates are acknowledged in both primary
and secondary volumes.
Once a pair is created, you can do the following:

Split the pair, which suspends copy activity.

Resynchronize the pair, which restores and maintains synchronization.

Delete the pair, which removes the pair relationship, though not the
data.
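The pair lifecycle described above can be sketched as a simple state model. This is an illustration only: the status names (SMPL, COPY, PAIR, PSUS) follow the pair-status conventions used elsewhere in this guide, but the class and method names are hypothetical, not part of any Hitachi API.

```python
# Minimal sketch of the HAM pair lifecycle described above.
# Status names follow Hitachi pair-status conventions; the class and
# method names here are illustrative only.

class HamPair:
    def __init__(self):
        self.status = "SMPL"          # no pair relationship yet

    def create(self):
        self.status = "COPY"          # initial copy in progress
        self.status = "PAIR"          # synchronized once copy completes

    def split(self):
        if self.status == "PAIR":
            self.status = "PSUS"      # copy activity suspended

    def resync(self):
        if self.status == "PSUS":
            self.status = "PAIR"      # synchronization restored

    def delete(self):
        self.status = "SMPL"          # relationship removed; data remains

pair = HamPair()
pair.create()
pair.split()
pair.resync()
pair.delete()
print(pair.status)   # SMPL
```

Note that delete returns the volume to SMPL status but, as stated above, does not remove the data.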

Data paths
The physical links between the primary and secondary storage systems are
referred to as the "data path." These links include the Fibre Channel
interface cables and switches. HAM commands and data are transmitted
through the data path. The data path links the primary and secondary
storage systems through two types of Fibre Channel ports, Initiator and
RCU Target ports.


Because paths are one-directional, and HAM communicates in both directions, a minimum of two data paths are needed; however, Hitachi Data Systems requires a minimum of two in each direction for greater support and security of the data. A maximum of eight data paths in each direction are supported. Therefore, the maximum number of logical paths between any two storage systems is sixteen (eight forward and eight reverse).

Quorum disk
The quorum disk is a continuously updated volume that contains information about the state of data consistency between the P-VOL and S-VOL. The information is used by HAM in the event of failure to direct host operations to the secondary volume. The quorum disk is located in an externally attached storage system.

Multipath software
Multipath software distributes the loads among the paths to the current
production volume. For HAM, the multipath software duplicates the paths
between the host and P-VOL, so that the paths are in place between the
host and the S-VOL also.
If a failure occurs in the data path to the primary storage system, or with
the primary storage system, the multipath software transfers host
operations to the S-VOL in the secondary storage system.

Storage Navigator GUI


You perform HAM tasks using the SN graphical user interface. SN is installed
on a management computer. It communicates with the SVP of each storage
system over defined TCP/IP connections.

Command Control Interface (CCI)

CCI, which is installed on the host, lets you run commands to perform pair tasks. You run commands through a command device on the host. Disaster recovery operations use a mix of SN and CCI.

Data replication
HAM supports data sharing between the following volumes:

A volume in the primary HUS VM system and a volume in the secondary HUS VM system.

Volumes in external storage systems.

A volume in the primary or secondary storage system and a volume in an external storage system.


Failover
A failover is an automatic takeover of operations from the primary storage
system to the secondary storage system. This occurs when the primary
storage system cannot continue host operations due to a failure in either
the data path or the primary storage system. The multipath software in the
host switches I/O to the remote system. A multipath software package that
has been qualified with HAM must be installed on the host.


2
System implementation planning and system requirements
Understanding the system planning process and the various requirements
of HAM enables you to plan a system that functions properly and can be
configured to meet your business needs over time.

The workflow for planning High Availability Manager implementation

Required hardware

Multipath software

Storage system requirements

Licenses

License capacity

Pair volume requirements

Quorum disk requirements

Data path requirements and recommendations

Storage Navigator requirements

External storage systems

Planning failover

Preventing unnecessary failover

Sharing volumes with other Hitachi Data Systems software products


The workflow for planning High Availability Manager implementation
The process for planning your HAM implementation involves these two main tasks:

Plan and configure the volume pairs, data path configurations, bandwidth sizing, and RAID configuration. For more information, see the Hitachi TrueCopy User Guide.

Follow all of the HAM requirements and recommendations. There are major differences between HAM and TrueCopy. For example, use of the quorum disk and multipath software on the host are specific to HAM.

Note: Hitachi Data Systems strongly recommends that you contact Global Solutions Services for assistance in the planning and configuration of your Hitachi Data Systems system.

Required hardware
The following hardware is required for a HAM system:

Storage systems must be installed on the primary and secondary sites. HAM pairs can be set up between two HUS VMs (73-03-0x-xx/xx or later).

A host must be connected to both primary and secondary storage systems.

An external storage system for the quorum disk.

External storage system for data storage (optional).

Data path connections between primary and secondary storage systems.

A physical path connection between the primary storage system and the external system hosting the quorum disk.

A physical path connection between the secondary storage system and the external system hosting the quorum disk.

Path connections from host to primary and secondary storage systems. Multipath software must be installed on each host server for this purpose.

If necessary, physical path connections between external storage and primary and/or secondary storage systems.

Multipath software
A multipath software package qualified with HAM is required on each host
platform for failover support. Hitachi's multipath software, Dynamic Link
Manager, supports the following host platforms:

AIX

Linux


Solaris

Windows. Requires host mode option 57 on the host group where Windows resides.

VMware. Requires host mode option 57 on the host group where VMware resides.

Dynamic Link Manager manages I/O through a disk driver. For version information, contact your Hitachi Data Systems representative.

Storage system requirements


The requirements for the primary, secondary, and external storage systems
must be met to ensure these systems function properly.

Make sure that the primary, secondary, and external storage systems
have their own independent sources of power.

The HAM P-VOL and S-VOL must be located in different storage systems.

Primary and secondary storage systems each require two initiator ports
and two RCU target ports.
The initiator port sends HAM commands to the paired storage
system. Initiator ports must be configured on the primary storage
system for HAM operations. However, for disaster recovery, you
should also configure initiator ports on the secondary storage
system.
The RCU target port receives HAM commands and data. RCU target ports must be configured on the secondary storage system for HAM operations. For disaster recovery, you should also configure RCU target ports on the primary storage system.
Additional microprocessors for replication links may be required based
on replication workload.

If you use switches, prepare them for both the primary and the
secondary storage systems. Do not share a switch between the two.
Using two independent switches provides redundancy in the event of
failure in one.

Secondary storage system cache should be configured to support remote copy workloads, as well as any local workload activity.

Cache and non-volatile storage (NVS) must be operable for both the
MCU and RCU. If not, the HAM Paircreate CCI command will fail.

The required program products for HAM operations must be installed.

Licenses
The following Hitachi Data Systems software products must be installed on both the primary and secondary storage systems. Each product requires a license key.


High Availability Manager

TrueCopy

Universal Volume Manager


License capacity
A single HAM license must be purchased for each HUS VM system. The HAM
license is not capacity based. The capacity of the TrueCopy license
determines the capacity of HAM volumes that may be replicated. Review the
TrueCopy license installed on your system to verify that it meets your
requirements.
For example, when the license capacity for TrueCopy is 10GB, the volume
capacity that can be used for HAM is up to 10GB. When 2GB out of 10GB of
license capacity for TrueCopy is used, the volume capacity that can be used
for HAM is up to the remaining 8GB.
For information on licenses and the actions to take for expired licenses and
exceeded capacity, see the Hitachi Storage Navigator User Guide.
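The capacity arithmetic in the example above can be expressed as a one-line rule. This is a sketch only; the function name is illustrative, and the rule simply restates the text: HAM draws on whatever TrueCopy license capacity remains unused.

```python
# Sketch of the TrueCopy/HAM shared license-capacity rule described
# above. HAM has no capacity-based license of its own; usable HAM
# capacity is whatever remains of the TrueCopy license capacity.

def remaining_ham_capacity_gb(truecopy_license_gb, truecopy_used_gb):
    """Return the volume capacity (GB) still available for HAM pairs."""
    return max(truecopy_license_gb - truecopy_used_gb, 0)

# The example from the text: a 10GB TrueCopy license with 2GB in use
# leaves up to 8GB for HAM volumes.
print(remaining_ham_capacity_gb(10, 0))   # 10
print(remaining_ham_capacity_gb(10, 2))   # 8
```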

Pair volume requirements


Data in the P-VOL on the primary storage system is copied to the S-VOL on
the secondary storage system. These two volumes are a pair.
The following are requirements for setting up P-VOLs and S-VOLs:

LDEVs for the P-VOL and S-VOL must be created and formatted before
creating a pair.

The volumes must have identical block counts and capacity.

A P-VOL can be copied to only one S-VOL; and an S-VOL can be the copy
of only one P-VOL.

Maximum number of pairs per storage system is 16,384. The number of HAM pairs that can be created depends on whether TrueCopy and/or Universal Replicator are used in the same storage system. HAM, TrueCopy, and Universal Replicator share the same bitmap areas used to manage differential data, which affects the number of pairs. If one or both of these products are used, the maximum number of HAM pairs allowed is less than 16,384 and must be calculated. For instructions, see the topic on difference management in the Hitachi TrueCopy User Guide.

Multiple LU paths to each volume must be set using LUN Manager. For instructions, see the Provisioning Guide.

If you are storing data in an external volume or volumes, make sure the
external volumes are mapped to the primary or secondary storage
system they support.

If you plan to create multiple pairs during the initial copy operation,
observe the following:
All P-VOLs must be in the same primary storage system, or in
mapped external systems.
All S-VOLs must be in the same secondary storage system, or in
mapped external systems.
You can specify the number of pairs to be created concurrently during
initial copy operations (1 to 16).


For more information about the System Option dialog box, see the
topic on changing option settings in the Hitachi TrueCopy User
Guide.
During the initial pair operation in SN, you will select multiple P-VOLs on the primary storage system for pairing. After selecting the P-VOLs, only the P-VOL with the lowest LUN appears in the subsequent Paircreate dialog box. To pair the other P-VOLs to the correct S-VOLs, observe the following:
- In the Paircreate dialog box, you can select only one S-VOL. This should be the volume to be paired with the P-VOL that is shown.
- S-VOLs for the remaining P-VOLs are assigned automatically by SN, according to their LUNs. If you are creating three P-VOLs, and you assign LUN001 as the S-VOL in the Paircreate dialog box, the remaining S-VOLs will be assigned incrementally by LUN (for example, LUN002 and LUN003).
- Make sure that all S-VOLs to be assigned automatically are available, are numbered in an order that will pair them properly, and that they correspond in size to the P-VOLs.
- If an S-VOL is not available for a P-VOL, the pair must be created individually.
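The incremental-by-LUN assignment rule described above can be sketched as follows. This is an illustration of the stated behavior only, not a description of SN's internals; the function name is hypothetical.

```python
# Sketch of SN's automatic S-VOL assignment during a multi-pair
# Paircreate, as described above: you pick one S-VOL for the
# lowest-LUN P-VOL, and the remaining S-VOLs are assigned
# incrementally by LUN.

def assign_svols(pvol_luns, first_svol_lun):
    """Pair each P-VOL LUN with an S-VOL LUN, incrementing from the
    S-VOL chosen in the Paircreate dialog box."""
    pvols = sorted(pvol_luns)
    return {p: first_svol_lun + i for i, p in enumerate(pvols)}

# Three P-VOLs with LUN 1 (LUN001) chosen as the first S-VOL yields
# LUN001, LUN002, LUN003, as in the example above.
print(assign_svols([10, 11, 12], 1))   # {10: 1, 11: 2, 12: 3}
```

The sketch makes the caveat in the text concrete: the automatic assignment is purely positional, so the S-VOLs must already be numbered in an order that pairs them properly.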

Quorum disk requirements

Quorum disks store continually updated information about data in HAM P-VOLs and S-VOLs for use during failover operations.

All HAM pairs created between one MCU and one RCU must use the same quorum disk. Thus, the P-VOL and S-VOL for a pair must use the same quorum disk.

A quorum disk must be located in an external storage system that is separate from the primary and secondary storage systems.

Only external storage systems supported by Hitachi Universal Volume Manager can be used for the quorum disk. See the Hitachi Universal Volume Manager User Guide for a list of supported external systems.

Multiple quorum disks can be created in one external storage system.

The maximum number of quorum disks per external system is 128.

The external system is not required to be dedicated to quorum disks exclusively.

Quorum disk size requirements: 47 MB to 4 TB (96,000 blocks to 8,589,934,592 blocks).

The quorum disk must not be expanded or divided by LUN Expansion or Virtual LUN.

An LU path must not be configured to the quorum disk.

Read/Write operations from the storage system to the quorum disk are for internal use. These operations are performed even when Write Pending operations reach 70%.

Caution: Quorum disks are used in a unique way in that they are shared
with two storage systems. For data protection reasons, make sure not to
share any other kind of volume with two storage systems.
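The size limits above can be checked with simple arithmetic. This sketch assumes the customary 512-byte block size (an assumption; the guide states the limits in blocks and as 47 MB to 4 TB), under which 96,000 blocks is roughly 47 MB and 8,589,934,592 blocks is exactly 4 TiB.

```python
# Sketch of the quorum disk size rule above: 96,000 to 8,589,934,592
# blocks. Assumes 512-byte blocks (an assumption, not stated in this
# guide).

MIN_BLOCKS = 96_000
MAX_BLOCKS = 8_589_934_592
BLOCK_SIZE = 512  # bytes, assumed

def quorum_size_ok(blocks):
    """Return True if a candidate quorum volume is within HAM limits."""
    return MIN_BLOCKS <= blocks <= MAX_BLOCKS

# The minimum works out to roughly 47 MB, the maximum to 4 TiB.
print(MIN_BLOCKS * BLOCK_SIZE / 1024**2)   # 46.875 (MiB)
print(MAX_BLOCKS * BLOCK_SIZE / 1024**4)   # 4.0 (TiB)
print(quorum_size_ok(100_000))             # True
print(quorum_size_ok(50_000))              # False
```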

Data path requirements and recommendations


Data is transmitted from the P-VOL to the S-VOL over the data path. Please
observe the following requirements and recommendations:

Data path requirements for HAM are the same as TrueCopy requirements. For more information, see the Hitachi TrueCopy User Guide.

Do not share the data paths with TrueCopy. Install independent data
paths for HAM.

Install at least two data paths from the primary storage system to the secondary storage system, and two data paths from the secondary storage system to the primary storage system. This allows data transfer to continue in the event of a failure in one path's cables or switches.

Optical fibre cables are required to connect the primary and secondary storage systems.

Direct and switch connections are supported.

Use target ports in the primary and secondary storage systems to connect with the host Fibre Channel ports. Initiator ports cannot be used for host connections. For more information about port attributes, see the topic on configuring host interface ports in the Hitachi TrueCopy User Guide.

The following table shows the maximum, minimum, and recommended number of data paths, logical paths, and ports for HAM. (Cells left blank were not specified.)

Category             Item                                                     Min.  Max.  Recommended
Physical data paths  Path between primary/secondary systems and a host        -     -     2 or more
                     Data path from primary to secondary system               -     -     2 or more
                     Data path from secondary to primary system               -     -     2 or more
                     Path between primary/secondary systems and quorum disk   -     -     2 or more
Logical paths        From primary to secondary system                         1     8     2 or more
                     From secondary to primary system                         1     8     2 or more
                     Mapping path (path between primary/secondary
                     system and quorum disk)                                  -     -     2 or more
Ports                Secondary system target ports that can be
                     connected to an initiator port                           1     64    -
                     Initiator ports that can be connected to a
                     secondary system target port                             -     16    -

Storage Navigator requirements

The following requirements must be met to ensure that you are able to use SN to manage the system:

SN is required for HAM.

You can connect an SN computer to both the primary and secondary storage systems.

You must have storage administrator authority to perform HAM tasks. If you do not, you will only be able to view HAM information.

To perform any HAM task, make sure SN is in Modify Mode.

For more information, see the Hitachi Storage Navigator User Guide.

External storage systems

You can use Hitachi storage systems, original equipment manufacturer (OEM) storage systems, and other vendors' storage systems (such as IBM or EMC) as connectable external storage systems. Hosts will recognize these volumes as internal volumes of the HUS VM storage system.
When using external storage systems with HAM, please observe the following:

Optional external storage systems may be used to store pair data. For supported external systems, see the Hitachi Universal Volume Manager User Guide.

You can connect one external system per HAM P-VOL, and one per S-VOL.

The maximum number of external systems that can be connected depends on the number of the external ports that can be defined for a storage system.

Planning failover
Automatic failover of host operations to the secondary storage system is
part of the HAM system. Failover occurs when the primary storage system
cannot continue I/O operations due to a failure. The multipath software in
the host switches I/O to the secondary storage system.

The multipath software automatically configures the path to the P-VOL as the owner path when you create a HAM pair.

The path to the S-VOL is automatically configured as the non-owner path.

In the HAM system, the quorum disk stores the information about the state
of data consistency between the P-VOL and S-VOL, which is used to check
whether P-VOL or S-VOL contains the latest data. If the P-VOL and S-VOL
are not synchronized due to a failure, the MCU and RCU determine which
volume should accept host I/O based on the information stored in the
quorum disk.
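The decision described above can be sketched as follows. This is a simplified illustration of the concept only, not the actual MCU/RCU arbitration protocol; the function name and parameters are hypothetical.

```python
# Simplified sketch of the failover decision described above: when the
# P-VOL and S-VOL are not synchronized, the quorum disk's consistency
# record determines which volume may accept host I/O.

def volume_to_accept_io(synchronized, quorum_latest):
    """Return which volume should accept host I/O.

    synchronized  -- True if P-VOL and S-VOL hold identical data
    quorum_latest -- quorum disk's record of which side holds the
                     latest data ("P-VOL" or "S-VOL")
    """
    if synchronized:
        return "P-VOL"          # normal operation stays on the P-VOL
    return quorum_latest        # otherwise the quorum record decides

print(volume_to_accept_io(True, "P-VOL"))    # P-VOL
print(volume_to_accept_io(False, "S-VOL"))   # S-VOL
```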
The following figure illustrates failover when a failure occurs at the MCU.

HAM also performs the following checks to detect failures:

The RCU issues service information messages (SIMs) when the data
path is blocked. The multipath software issues messages about the
failure in the host-MCU paths.

Health check of the quorum disk by the MCU and RCU. The primary or
secondary storage system issues a SIM if a failure in the quorum disk is
detected. Host operations will not switch to the S-VOL if the quorum disk
fails. In this case, the failure must be cleared as soon as possible and
the quorum disk recovered.

If the multipath software detects a failure in the host-to-pair volume paths, the operation switches to a different available path and no SIM is issued. To stay informed about path status, monitor the path failure messages issued by the multipath software.

The multipath software issues messages when all host-MCU paths fail. These messages must then be checked and the cause corrected. If failover took place, host operations should be switched back to the primary storage system.

It is possible that maintenance operations require both storage systems to be powered off at the same time. In this case, the health checking periods would be shortened to prevent unexpected failover while both systems are powered off.


After failover, when a failure is corrected, you may continue operations on the S-VOL, though Hitachi Data Systems recommends switching them back to the P-VOL. To find which volume was originally a P-VOL, use the multipath software on the host to refer to path information, checking for the volume with the owner path. The owner path is set to the volume that you specified as a P-VOL when you created a HAM pair. The owner path never switches even if the P-VOL and S-VOL were swapped due to a failover.

Preventing unnecessary failover

Some applications issue the read command to the HAM S-VOL. When these applications are used, and when the number of read commands to the S-VOL reaches or exceeds the threshold (1,000 times per six minutes), HAM assumes that a P-VOL failure has occurred. This situation results in an unnecessary failover to the HAM S-VOL.
At this time, the Solaris VERITAS Volume Manager (VxVM) vxdisksetup command issues more read commands than allowed by the threshold.
You can prevent unnecessary failover by setting host mode option 48 to ON. Note that when this option is ON, the S-VOL responds slower to the read command.
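The 1,000-reads-per-six-minutes threshold can be illustrated with a small counter. This is a sketch of the rule as stated above; the sliding-window handling is simplified and the function name is illustrative.

```python
# Sketch of the unnecessary-failover trigger described above: HAM
# assumes a P-VOL failure when read commands to the S-VOL reach or
# exceed 1,000 within six minutes. Simplified illustration only.

THRESHOLD = 1_000        # read commands
WINDOW_SECONDS = 6 * 60  # six minutes

def failover_assumed(read_times, now):
    """Count S-VOL reads in the last six minutes against the threshold.

    read_times -- timestamps (seconds) of read commands to the S-VOL
    now        -- current time (seconds)
    """
    recent = [t for t in read_times if now - t <= WINDOW_SECONDS]
    return len(recent) >= THRESHOLD

# 999 reads in the window: below threshold, no failover assumed.
print(failover_assumed(list(range(999)), now=360))    # False
# 1,000 reads in the window: threshold reached.
print(failover_assumed(list(range(1000)), now=360))   # True
```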
Review system conditions and resulting behaviors related to host mode
option 48 in the following table.

Table 2-1 System behavior for host mode option 48

Event: Normal operation.
  Behavior when OFF: Failover occurs only when you run certain applications.
  Behavior when ON: No failover, even when you run the applications.

Event: The S-VOL receives more read commands than allowed by the threshold, and receives no write command.
  Behavior when OFF: Updates from a host go to the S-VOL, and S-VOL status becomes SSWS. The S-VOL responds to the read command as quickly as the P-VOL responds.
  Behavior when ON: Updates from a host go to the P-VOL, and S-VOL status remains PAIR. The S-VOL responds slower to the read command than the P-VOL does; the S-VOL takes several milliseconds to respond.

Event: The S-VOL receives one or more write commands.
  Behavior when OFF: Updates from a host go to the S-VOL, and S-VOL status becomes SSWS. The S-VOL responds to the read command as quickly as the P-VOL responds.
  Behavior when ON: Same as when option 48 is OFF.

For more information about setting host mode options, see the Provisioning
Guide.


Sharing volumes with other Hitachi Data Systems software products
HAM volumes can be used as volumes for other Hitachi Data Systems software products, such as ShadowImage or Virtual LUN.
The following table shows the HAM volumes that can be shared with other software. Only those volumes listed can be shared.

Table 2-2 Volume types that can be shared with HAM volumes

Product                   Volumes                                       Used as HAM P-VOL?  Used as HAM S-VOL?
LUN Manager               Volume where an LU path is defined            Yes                 Yes
                          Volume where no LU path is defined            No                  No
                          Volume where LUN security is applied          Yes                 Yes
Open Volume Management    VLL volume                                    Yes                 Yes
                          System disk                                   No                  No
LUN Expansion             LUSE volume                                   Yes                 Yes
Volume Shredder           N/A                                           No                  No
Dynamic Provisioning      DP-VOL (virtual volume)                       Yes                 Yes
                          Pool volume                                   No                  No
Universal Volume Manager  External volume (after mapping is finished)   Yes                 Yes
ShadowImage               ShadowImage P-VOL                             No                  Yes
                          ShadowImage S-VOL                             No                  No
                          Reserved volume                               No                  No
Thin Image                Data volume, Virtual volume, Pool volume      No                  No
TrueCopy                  P-VOL, S-VOL                                  No                  No
Universal Replicator      Primary data volume                           No                  No
                          Secondary data volume                         No                  No
                          Journal volume                                No                  No
Volume Migration (*1)     Source volume                                 No                  No
                          Target or reserved volume                     No                  No
Data Retention Utility    Volume with the Read/Write attribute          Yes                 Yes
                          Volume with attribute other than the above    No                  No
Multiplatform Backup      N/A                                           No                  No

*1: For information on using Volume Migration, contact the Hitachi Data Systems Support Center.

The quorum disk cannot be used by other software products, except as follows:

Virtual Partition Manager can allocate the CLPR (virtually partitioned cache) to the quorum disk when you map the quorum disk to the storage system.

Performance Monitor can monitor usage or performance of the quorum disk.

The following topics clarify key information regarding the use of other software products.

Virtual Partition Manager


Virtually partition the cache (CLPR), and allocate the CLPR to the host that
issues the I/O to the HAM pairs.

Cache Residency Manager


With Cache Residency Manager, you can improve data access performance
by storing the HAM data in the storage system's cache memory.

Performance Monitor

Performance Monitor is used to monitor usage or performance of the storage system. You can also show statistical I/O data of HAM and TrueCopy pairs.

When Performance Monitor data collection results in a large amount of data, significant traffic on the HUS VM internal LAN can occur. To prevent time-outs while performing HAM operations on the SN computer, cancel Performance Monitor data collection activities.

LUN Manager
LU paths cannot be deleted after you create HAM pairs. To delete the LU
path, you need to release the HAM pair first.

Open Volume Management

VLL volumes can be assigned to HAM pairs, provided that the S-VOL has the same capacity as the P-VOL.

To perform VLL operations on an existing HAM P-VOL or S-VOL, the pair must be released first to return the volume to the SMPL status.


LUN Expansion
LUSE volumes can be assigned to HAM pairs, provided that both P-VOL and
S-VOL are LUSE volumes consisting of the same number of LDEVs, the same
size, and the same structure.

Configurations with ShadowImage volumes

You can use the HAM S-VOL as a ShadowImage P-VOL. This configuration benefits the HAM pair if the P-VOL is logically destroyed. In this case, you can recover the data from the split ShadowImage S-VOL.

Configuring HAM with ShadowImage

You perform this configuration by creating a HAM pair, then a ShadowImage pair, then splitting the ShadowImage pair.
1. Create the HAM pair. Make sure that pair status becomes PAIR.
2. Create the ShadowImage pair, using the HAM S-VOL as a ShadowImage P-VOL.
3. Split the ShadowImage pair and resume host operations on the HAM P-VOL.


3
System configuration
The HAM system configuration process is the first main task in the process
of setting up the HAM system. It follows the planning of the system
implementation and is based on the outcome of the system implementation
planning effort. All of the configuration procedures must be completed
before you can begin using the system.

The basic workflow for configuring the system configuration

Connecting the hardware components

Installing and configuring software

Configuring the primary and secondary storage systems

Configuring the quorum disks

Adding the ID for the quorum disk to the storage systems

Configuring host mode options


The basic workflow for configuring the system configuration
The configuration process involves connecting the system hardware
components, installing all required software, configuring the primary and
secondary storage systems, setting up the quorum disk, and configuring
host mode options.
Complete the configuration tasks in the indicated order and configure the
components according to the configuration requirements.
Use the following process to configure HAM:
1. Connect the system hardware components.
For more information, see Connecting the hardware components on
page 3-2.
2. Install the required software.
For more information, see Installing and configuring software on page
3-4.
3. Configure the primary and secondary storage systems (MCU and RCU).
For more information, see Configuring the primary and secondary storage systems on page 3-5.
4. Set up a quorum disk.
For more information, see Configuring the quorum disks on page 3-6.
5. Add the quorum disk ID.
For more information, see Adding the ID for the quorum disk to the
storage systems on page 3-7.
6. Configure host mode options.
For more information, see Configuring host mode options on page 3-8.

Connecting the hardware components

Connecting certain hardware components of the system is the first main task in the system configuration. Completion of this task ensures that the data paths required for normal system operation are set up and ready for use.
During this task, you and Hitachi Data Systems representatives connect the following system components:

The host to the primary and secondary HUS VM systems.

The primary to the secondary storage system.

The external system that has the quorum disk to the primary and secondary storage systems.

Any optional external storage systems to the primary and/or secondary storage systems.

Initiator ports (primary storage system) to the RCU Target ports (secondary storage system).


Prerequisites
Before you begin, make sure you have:

Completed the system implementation planning (see System implementation planning and system requirements on page 2-1).

The workflow for connecting the hardware components
Use the following process to connect the hardware components for a HAM
system:
1. If you have external storage for storing pair data, connect the external
systems to the external ports on the HUS VM systems.
2. Connect the host to the primary and secondary storage systems using
target ports on the HUS VM systems.
3. Make the connections for the data path between the primary and
secondary storage systems by doing the following:
Connect the initiator ports on the primary storage system to the RCU
Target ports on the secondary storage system.
Connect the initiator ports on the secondary storage system to the
RCU Target ports on the primary storage system.
4. Connect the quorum disk to the primary and secondary storage systems
using external ports on the HUS VM systems.
Caution: For data protection reasons, make sure that quorum disks are
not shared by two storage systems.
The following figure shows these connections.


Installing and configuring software

Installing and configuring the system software is the second main task in the system configuration. Completion of this task ensures that all of the software required for normal system operation is installed and ready for use. This task involves installing multipath software and CCI on the host and installing the main system software on the primary and secondary storage systems.

Additional documentation
To ensure that you use the correct steps to install the software, refer to the installation instructions in the following documentation during the installation process:

The documentation for the multipath software.

The Hitachi Storage Navigator User Guide.

The Hitachi Command Control Interface Installation and Configuration Guide.

Prerequisites
Before you begin, make sure you have:

Connected the system hardware components (see Connecting the hardware components on page 3-2).

The required software for HAM installed (see System implementation planning and system requirements on page 2-1).

Caution: Make sure that you install the software in the order described in
the procedure. If you do not, you may have to uninstall and reinstall the
software.

The workflow for installing and configuring High Availability Manager
Use the following process to install and configure HAM and the required
software:
1. Install multipath software on the host.
2. Using the multipath software, set the path health-checking period to
three minutes.
3. Install the following required software on the primary and secondary
storage systems using SN:
- High Availability Manager
- TrueCopy
- Universal Volume Manager
4. Install CCI on the hosts.
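After CCI is installed in step 4, each host needs a HORCM configuration file before pair commands can run. The sketch below shows the general shape of such a file for a HAM pair group; every value in it (service names, serial number, LDEV ID, group and device names) is a placeholder for illustration, not a value from this guide:

```text
# horcm0.conf -- illustrative sketch only; all values below are placeholders
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
localhost     horcm0    1000         3000

HORCM_CMD
#dev_name for the command device (platform-specific)
\\.\CMD-410001

HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
ham_grp      ham_dev0   410001    00:10            0

HORCM_INST
#dev_group   ip_address   service
ham_grp      localhost    horcm1
```

A matching horcm1.conf on the paired host would describe the same group from the secondary side. See the Hitachi Command Control Interface Installation and Configuration Guide for the authoritative file format.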

Configuring the primary and secondary storage systems


Configuring the primary and secondary storage systems (MCU and RCU) is
the third main task in the system configuration. Completion of this task
ensures that the systems are configured to enable the communication and
data transfer between the systems that is essential to normal system
operation. Part of this task is setting up the logical paths between the
systems.
This task involves setting port attributes and configuring the storage
systems. You can also set the number of pairs the system creates
concurrently (at the same time) during this task.

Additional documentation
To ensure that you use the correct steps to configure the systems, refer to
these instructions in the following documentation during the configuration
process:
- Setting the number of volumes to be copied concurrently, in the Hitachi
TrueCopy User Guide.
- Defining port attributes, in the Hitachi TrueCopy User Guide.
- Configuring storage systems and defining logical paths, in the Hitachi
TrueCopy User Guide.
- Details about mapping the primary and secondary storage systems to the
external system that contains the quorum disk, in the Provisioning Guide
and the Hitachi Universal Volume Manager User Guide.
- Details about CLPR, in the Hitachi Virtual Partition Manager User Guide.
- Details about external path groups, in the Hitachi Universal Volume
Manager User Guide.

Prerequisites
Before you begin, make sure you have:
- Installed the multipath and system software (see Installing and
configuring software on page 3-4).

Workflow
Use the following process to configure the primary and secondary storage
systems:
1. Stop Performance Monitor, if it is running, to avoid performance impact
on the TCP/IP network.
2. Set the port attributes for HAM.
3. Configure the primary and secondary storage systems and establish
logical paths between the primary and secondary HUS VM systems.

Configuring the quorum disks


Configuring the quorum disk is the fourth main task in the system
configuration. Completion of this task ensures that the disk is configured to
determine which pair volume has the latest (most current) data when
automatic failover is required.
This task involves configuring the disk and configuring port parameters for
the disk on the primary and secondary storage systems.
Caution: For data protection reasons, make sure that the disk is not
shared by two storage systems.
Note: If support personnel have changed the system configuration on
the MCU and RCU where HAM pairs have been created, you must
format the volume in the external storage system for the quorum disk and
redefine the quorum disk.

Prerequisites
Before you begin, make sure you have:
- Configured the primary and secondary storage systems (see Configuring
the primary and secondary storage systems on page 3-5).

Procedure
1. In the external storage system, prepare a volume for use as the quorum
disk and specify any required system options.
2. Using the External attribute, configure the ports on the primary and
secondary storage systems that are connected to the disk.
3. Set the paths from the disk to the primary and secondary storage
systems to Active.
4. Using SN's Ports/Host Groups and External Storages windows, map
the primary and secondary storage systems to the disk by doing the
following:
- Configure at least two cross-system paths between the primary
storage system and the quorum disk, and two between the secondary
storage system and the quorum disk.
- Specify these external volume parameters:
  - Emulation type: OPEN-V.
  - Number of LDEVs: 1.
  - Cache mode: This parameter is not used for quorum disks. Either
Enable or Disable can be specified.
  - Inflow control: Select Disable. Data will be written in the cache
memory.
  - CLPR: If you partition cache memory, specify the CLPR that the
quorum disk uses.
  - LDKC:CU:LDEV number: This number identifies the quorum disk
to the primary and secondary storage systems.
5. In the External Path Groups tab in SN, configure port parameters for
the primary and secondary storage systems by specifying the following
values:
- QDepth: The number of Read/Write commands that can be issued
(queued) to the quorum disk at a time. Default: 8.
- I/O TOV: The timeout value to the quorum disk from the primary and
secondary storage systems. The value must be less than the timeout
value of the application. Recommended: 15 seconds; Default: 15 seconds.
- Path Blockade Watch: The time that you want the system to wait
after the quorum disk paths are disconnected before the quorum disk
is blocked. Recommended: 10 seconds; Default: 10 seconds.
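To make the constraints in step 5 concrete, here is a small sketch that checks proposed port-parameter values against the recommendations above. The helper name and structure are ours for illustration; this is not part of any Hitachi tool:

```python
# Sanity-check quorum-disk port parameters against the values in step 5 above.
# Illustrative helper only, not a Hitachi interface.

RECOMMENDED = {"io_tov": 15, "path_blockade_watch": 10}  # seconds; QDepth default is 8

def check_quorum_params(io_tov, path_blockade_watch, app_timeout):
    """Return a list of warnings; an empty list means the settings look sane."""
    warnings = []
    if io_tov >= app_timeout:
        # The guide requires I/O TOV to be less than the application's timeout.
        warnings.append("I/O TOV must be less than the application timeout")
    if io_tov != RECOMMENDED["io_tov"]:
        warnings.append("I/O TOV differs from the recommended 15 seconds")
    if path_blockade_watch != RECOMMENDED["path_blockade_watch"]:
        warnings.append("Path Blockade Watch differs from the recommended 10 seconds")
    return warnings

print(check_quorum_params(15, 10, app_timeout=30))  # []
```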

Adding the ID for the quorum disk to the storage systems


Adding the ID for the quorum disk is the fifth main task in the system
configuration. Completion of this task ensures that the primary and
secondary storage systems to which the disk has been mapped can
recognize the disk.

System configuration
HUS VM Block Module Hitachi High Availability Manager User Guide

37

Prerequisites
Before you begin, make sure you have:

Configured the quorum disk (see Configuring the quorum disks on page
3-6).

Procedure
To ensure that the quorum disk ID is added correctly, make sure that:
- You complete the procedure on the primary and secondary storage
systems.
- You use the same ID for the disk on both systems.

1. Delete any data in the external volume that you assigned to be the
quorum disk.
2. Access the MCU or RCU in SN, then click Actions > Remote Copy >
TrueCopy > Quorum Disk Operation.
3. Make sure that you are in the modify mode.
4. In the Quorum Disk Operation window, right-click the quorum disk ID
that you want to add, then click Add Quorum Disk ID.
5. In Add Quorum Disk ID dialog box, from the Quorum Disk drop-down
menu, select the LDKC:CU:LDEV number that you specified when
mapping the external volume. This is the volume that will be used for
the quorum disk.
6. From the RCU drop-down menu, select the CU that is to be paired with
the CU on the current storage system. The list shows the RCU serial
number, LDKC number, controller ID, and model name registered in CU
Free.
7. Click Set. The settings are shown in the Preview area.
8. Verify your settings. To make a correction, select the setting, right-click,
and click Modify.
9. Click Apply to save your changes.

Configuring host mode options


Configuring host mode options is the last main task in the system
configuration. Completion of this task ensures that the host mode option
settings are correct. The settings vary depending on whether the system
is a standard or cluster implementation.

Prerequisites
Before you begin, make sure you have:

Added the ID for the quorum disk to the storage systems (see Adding
the ID for the quorum disk to the storage systems on page 3-7).

Procedure
Use the following host mode options for your system:
- If using VMware or Windows, set host mode option 57 on the host group
where VMware or Windows resides.
- If using software that uses a SCSI-2 Reservation, set host mode option
52 on the host groups where the executing node and standby node
reside.

For more information on host mode options, see the Provisioning Guide.
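The two rules above can be expressed as a small helper. The function and its inputs are ours, for illustration only; the option numbers (57 and 52) are the ones stated in this procedure:

```python
# Sketch of the host-mode-option rules above; not part of any Hitachi interface.

def required_host_mode_options(os_name, uses_scsi2_reservation=False):
    """Return the set of host mode options the guide calls for."""
    options = set()
    if os_name in ("VMware", "Windows"):
        options.add(57)  # set on the host group where VMware/Windows resides
    if uses_scsi2_reservation:
        options.add(52)  # set on executing-node and standby-node host groups
    return options

print(required_host_mode_options("Windows"))  # {57}
```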


4
Working with volume pairs
A number of tasks must be performed on volume pairs as part of your
normal HAM system maintenance activities, when troubleshooting system
issues, or when taking action to recover from failure.

- Workflow for HAM volume pairs
- Reasons for checking pair status
- When to check pair status?
- How pair status reflects system events and use
- What pairs information can you view and where is it?
- How hosts see volume pairs
- Checking pair status
- Pair status values
- Volume pair creation
- Splitting pairs
- Resynchronizing pairs
- Releasing a pair
- Changing TrueCopy pairs to HAM pairs
- Comparison of the CCI commands and Storage Navigator


Workflow for HAM volume pairs


You perform several different types of tasks with volume pairs as part of
your system maintenance and recovery activities.
The different types of tasks include:
- Checking pair status.
- Creating pairs.
- Releasing pairs.
- Resynchronizing pairs.
- Splitting pairs.

All of these pair-related tasks can be performed using SN or CCI.

Reasons for checking pair status


Pair status information indicates the current state or condition of the pair.
There are two main reasons for checking the current status of a volume pair.
One is to verify the status of the pair while you run pair CCI commands
during normal system maintenance or failure recovery. The other reason is
to check the status of pairs as part of your normal system monitoring
activities to ensure they are working properly.

When to check pair status?


You should check the status of volume pairs whenever you run pair CCI
commands and as part of your normal system monitoring activities.
When you run pair CCI commands, you check pair status:
- Before you run a pair CCI command.
- During pair changes. Check pair status to see that the pairs are
operating correctly and that data is updating from P-VOLs to S-VOLs in
PAIR status, or that differential data management is happening in Split
status.

Note: A pair task can be completed only if the pair is in a status that
permits the task. Checking the status before you run a CCI command lets
you verify that the pair is in a status that permits the task.
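The note above can be pictured as a small gate: each command has a set of statuses in which it will succeed. The table below is a simplified sketch drawn from the status descriptions in this chapter, not an exhaustive CCI rule set:

```python
# Simplified status gate for pair operations, drawn from this chapter:
# create from SMPL, split from PAIR, resync from a suspended status.

ALLOWED = {
    "paircreate": {"SMPL"},
    "pairsplit": {"PAIR"},
    "pairresync": {"PSUS", "PSUE"},  # resync requires a suspended pair
}

def can_run(command, pair_status):
    """Return True if the pair is in a status that permits the command."""
    return pair_status in ALLOWED.get(command, set())

print(can_run("pairresync", "PAIR"))  # False: check status first, then act
```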

How pair status reflects system events and use


The storage system records information about the current status of HAM
pairs. You can check the current status of any volume pair at any time.
Status changes occur for the following reasons:
- Automatic system events, such as errors or failover situations.
- Administrator actions, such as creating, releasing, or deleting pairs.

The following figure shows HAM pair status before and after pair creation,
splitting pairs, various errors, and after releasing a pair.


1. When a volume is not in a HAM pair, its status is SMPL.
2. When you create a HAM pair using SMPL volumes, the status of the
P-VOL and the S-VOL changes to COPY while the system copies the data.
3. A stable synchronized pair has the status PAIR.
4. When you split a pair, the status of the P-VOL and the S-VOL changes to
PSUS (pair suspended-split, split by command).
5. When the MCU cannot keep the P-VOL and the S-VOL synchronized
because of an error, the status of the P-VOL and the S-VOL changes to
PSUE (pair suspended-error, split due to an error). If the MCU cannot
communicate with the RCU, the status of the S-VOL stays PAIR.
6. When a failover occurs in the storage system, the status of the S-VOL
changes to SSWS, and the status of the P-VOL changes to PSUS.
7. When you resynchronize the pair in PSUS or PSUE status (see No.4 and
No.5), the status of the P-VOL and the S-VOL changes to COPY.
8. When you resynchronize the pair with the S-VOL in SSWS status (see
No.6), using the CCI pairresync -swaps command on the S-VOL, the
P-VOL and the S-VOL swap, and the pair status changes to COPY.


9. When you release a pair, the status of the P-VOL and the S-VOL changes
to SMPL.
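The numbered transitions above can be condensed into a small lookup of (current status, event) pairs. This is a conceptual sketch of the figure's flow, with event names of our own choosing; it is not a CCI or Storage Navigator interface:

```python
# Conceptual sketch of the HAM pair status transitions described above.

TRANSITIONS = {
    ("SMPL", "paircreate"): "COPY",
    ("COPY", "initial copy complete"): "PAIR",
    ("PAIR", "pairsplit"): "PSUS",
    ("PAIR", "error"): "PSUE",
    ("PAIR", "failover"): "SSWS",           # S-VOL side; the P-VOL goes to PSUS
    ("PSUS", "pairresync"): "COPY",
    ("PSUE", "pairresync"): "COPY",
    ("SSWS", "pairresync -swaps"): "COPY",  # P-VOL and S-VOL swap roles
    ("PSUS", "pairsplit -S"): "SMPL",       # releasing the pair
}

status = "SMPL"
for event in ("paircreate", "initial copy complete", "pairsplit", "pairresync"):
    status = TRANSITIONS[(status, event)]
print(status)  # COPY
```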

What pairs information can you view and where is it?


You can monitor the following information for pairs:
- Percentage of synchronization (Sync. column)
- Pair details (Detailed Information dialog box):
  - Pair status
  - Split types
  - Suspend types

Where to find the information


You can view all information about pairs in the GUI. If you configured the
system to send email notifications, you can monitor those particular events
remotely.
When a pair is in PSUS, SSWS, or PSUE status, the primary storage system
generates SIMs (service information messages). You can check SIMs in SN's
Alerts window.

How hosts see volume pairs


When you create a HAM pair, the host sees the primary and secondary
volumes as the same volume.
The following figure shows:
- The configuration as seen by the host.
- The actual configuration with primary and secondary volumes.

Checking pair status


Use SN to view the current status of volume pairs.
1. In the SN main window, click Actions > Remote Copy > TrueCopy >
Pair Operation.
2. In the Pair Operation window tree, select the CU group, CU, port, or
the host group where the HAM pair belongs.
The list shows TrueCopy and HAM pairs.
3. You can complete the following tasks:
- To make sure the data is current, click File/Refresh, then review the
status information in the Status column.
- To filter the list to show only HAM pairs, open the Display Filter
window.
- To view pair status details, right-click the HAM pair and click Detail.
In the Detailed Information dialog box you can:
  - See information about other volumes: click Previous or Next.
  - Update the current information for the pair: click Update.
  - Close the dialog box: click Close. Select the Refresh the Pair
Operation window after this dialog box closes option to update the
information in the Pair Operation window based on the updated
information in the Detailed Information dialog box.


Pair status values


When checking pair status in SN, click File/Refresh to verify that the data
is current.
In SN, the pair status is shown in the [pair status in Storage
Navigator/pair status in CCI] format. If the pair status name in SN and
the pair status name in CCI are the same, the pair status name in CCI is not
shown.
The following table lists HAM pair statuses and whether the volumes can be
accessed. Accessibility to the P-VOL and S-VOL depends on the pair status
and the VOL Access value. You can see VOL Access in the Pair Operation
window or the Detailed Information dialog box. Each entry below gives the
pair status (P-VOL/S-VOL), the VOL Access values (P-VOL/S-VOL), and the
access allowed to each volume, followed by the description.

- SMPL/SMPL; VOL Access: blank/blank; access: P-VOL Read/write, S-VOL
Read/write. The volume is not assigned to a HAM pair.
- COPY/COPY; VOL Access: Access (No Lock)/blank; access: P-VOL
Read/write, S-VOL not accessible. The initial copy operation for this pair is
in progress. This pair is not yet synchronized.
- PAIR/PAIR; VOL Access: blank/blank; access: P-VOL Read/write, S-VOL
Read/write*. The pair is synchronized. The system reads and writes
updates from the host to the P-VOL and duplicates them in the S-VOL.
- PAIR/SSUS; VOL Access: blank/Access (Lock); access: P-VOL not
accessible, S-VOL Read/write. The I/O operation to the S-VOL or the
swapping and suspending operation was successful. If the pair status of
the S-VOL is SSWS, the S-VOL is accessible. The most recent data is on
the S-VOL.
- PAIR/SSUS; VOL Access: blank/blank; access: P-VOL not accessible,
S-VOL Read/write. The pair failed to get the lock at the swapping and
suspending command (pairsplit -RS), and the SSUS status was forcibly
changed. If the pair status of the S-VOL is SSWS, the S-VOL is accessible.
Data on the S-VOL might be old.
- PAIR/PSUS; VOL Access: blank/Access (Lock); access: P-VOL not
accessible, S-VOL not accessible. After the status of the S-VOL was
changed to SSWS, you ran the pairsplit -RB command to roll back.
Suspend the pair and resynchronize. After a rollback, do not access the
P-VOL or S-VOL until resynchronization is finished. Data on the P-VOL
might be old.
- PAIR/PSUS; VOL Access: blank/blank; access: P-VOL not accessible,
S-VOL not accessible. After the status of the S-VOL was forcibly changed
to SSWS, you ran the pairsplit -RB command to roll back. Suspend the
pair and resynchronize. After a rollback, do not access the P-VOL or
S-VOL until resynchronization is finished. Data on the P-VOL might be old.
- PSUS (see Split types (PSUS status) on page 4-10)/PSUS; VOL Access:
Access (Lock)/blank; access: P-VOL Read/write (read-only can be set
with a pairsplit -r option), S-VOL not accessible. This HAM pair is not
synchronized because the user has split the pair (pairsplit -r command).
The P-VOL data is up-to-date.
- PSUS/PSUS; VOL Access: blank/blank or Access (Lock); access: P-VOL
not accessible, S-VOL not accessible. After the status of the S-VOL was
forcibly changed to SSWS, you ran the pairsplit -RB command to roll
back. After a rollback, do not access the P-VOL or S-VOL until
resynchronization is finished. Data on the P-VOL might be old.
- PSUS/SSWS; VOL Access: blank/Access (Lock); access: P-VOL not
accessible, S-VOL Read/write. The HAM pair is not synchronized because
a failover has occurred. The data on the S-VOL is the latest.
- PSUS/SSWS; VOL Access: blank/blank; access: P-VOL not accessible,
S-VOL Read/write. The user has performed a swap and suspend
operation (pairsplit -RS). Data on the S-VOL may be old.
- PSUS/SSWS; VOL Access: Access (Lock)/blank; access: P-VOL
Read/write, S-VOL Read/write. After the user performed a suspend
operation (pairsplit -r), the user performed a swap and suspend
operation (pairsplit -RS). Data on the S-VOL may be old.
- PSUS/SMPL; VOL Access: Access (Lock)/blank; access: P-VOL
Read/write, S-VOL Read/write. The user has released the pair from the
RCU (pairsplit -S). This HAM pair is not synchronized.
- PSUE (see Suspend types (PSUE status) on page 4-10)/PSUE (the S-VOL
stays PAIR if the MCU cannot communicate with the RCU); VOL Access:
Access (No Lock) or Access (Lock)/blank; access: P-VOL Read/write,
S-VOL not accessible. This HAM pair is not synchronized because the
MCU suspended the HAM pair with an error. Data in the P-VOL is up to
date.
- PSUE/SSWS; VOL Access: Access (No Lock) or Access (Lock)/blank;
access: P-VOL Read/write, S-VOL Read/write. After the pair was
suspended by a failure, the user ran the swapping and suspending
command (pairsplit -RS). If the pair status of the S-VOL is SSWS, the
S-VOL is accessible. Data on the S-VOL might be old.
- PSUE/PSUS; VOL Access: Access (No Lock) or Access (Lock)/blank;
access: P-VOL not accessible, S-VOL not accessible. After the HAM pair
was suspended due to an error, the user ran the swapping and
suspending operation (pairsplit -RS) and then the rollback operation
(pairsplit -RB). After the rollback, do not access either the P-VOL or the
S-VOL until the resync operation is finished. The data on the P-VOL might
be old.
- PDUB/PDUB; VOL Access: Access (Lock)/blank; access: P-VOL
Read/write, S-VOL Read Only (not accessible in the case of COPY or
PSUE). The HAM pair consists of LUSE volumes. The status of the pair is
COPY or PAIR, and the status of at least one of the LUSE volumes is
SMPL or PSUE.

* If an S-VOL in PAIR status accepts write I/O, the storage system assumes
that a failover occurred. The S-VOL then changes to SSWS status, and the
P-VOL usually changes to PSUS status.

The following table describes pair status in CCI.

- SMPL: The volume is not assigned to a HAM pair.
- COPY: The initial copy for a HAM pair is in progress, but the pair is not
synchronized yet.
- PAIR: The initial copy for a HAM pair is completed, and the pair is
synchronized.
- PSUS: Although the paired status is retained, the user split the HAM pair,
and updates to the S-VOL are stopped. This status only applies to the
P-VOL. While the pair is split, the storage system keeps track of updates
to the P-VOL.
- SSUS: Although the paired status is retained, the user split the HAM pair,
and updates to the S-VOL are stopped. This status only applies to the
S-VOL. If the pair is split with the option of permitting updates to the
S-VOL specified, the storage system keeps track of updates to the S-VOL.
- PSUE: Although the paired status is retained, updates to the S-VOL are
stopped because of an error. PSUE is the same as PSUS (SSUS) in terms
of symptom; the difference is that PSUE is caused by an internal error.
- PDUB: This status is shown only for pairs using LUSE. Although the
paired status is retained, the pair status transition is suspended because
of an error in some LDEVs within the LUSE volume.
- SSWS: The paired status is retained. Processing for resync with the
P-VOL and S-VOL swapped (horctakeover command) is in progress.

Split types (PSUS status)

Split types are shown in the Detailed Information dialog box; they specify
the reason a pair is split.
When you split a pair, the pair status changes to PSUS and the system stops
updates to the S-VOL. You can also set an option to block updates to the
P-VOL while the pair is split. This results in the P-VOL and S-VOL staying
synchronized.
If the P-VOL accepts write I/O while the pair is split, the primary storage
system records the updated tracks of the P-VOL as differential data. This
data is copied to the S-VOL when the pair is resynchronized.
The following table lists the PSUS split types.

- P-VOL by Operator (applies to: P-VOL): The user split the pair from the
primary storage system using the P-VOL Failure option for Suspend Kind.
The S-VOL split type is PSUS-by MCU.
- S-VOL by Operator (applies to: P-VOL, S-VOL): The user split the pair
from the primary or secondary storage system using the S-VOL option
for Suspend Kind. Or, the pair is split because of a failover in the storage
system.
- by MCU (applies to: S-VOL): The secondary storage system received a
request from the primary storage system to split the pair. The P-VOL
split type is PSUS-P-VOL by Operator or PSUS-S-VOL by Operator.
- Delete pair to RCU (applies to: P-VOL): The primary storage system
detected that the S-VOL status changed to SMPL because the user
released the pair from the secondary storage system. The pair cannot be
resynchronized because the S-VOL does not have the PSUS/PSUE status.

Suspend types (PSUE status)

Suspend types are shown in the Detailed Information dialog box; they
specify why the pair was suspended by the system.
The primary storage system suspends a pair when any of the following
conditions occur:
- The user has released the pair from the secondary storage system.
- An error condition related to the secondary storage system, the S-VOL,
or the update copy operation occurs.
- Communications with the secondary storage system have stopped.

When a pair is suspended, the primary storage system stops sending
updates to the S-VOL, even though host applications may continue updating
the P-VOL. The primary storage system records the P-VOL's updated tracks
as differential data. This data is copied to the S-VOL when the error
condition is cleared and the pair is resynchronized.
P-VOL/S-VOL differential data is not retained if a primary or secondary
storage system is powered off and its backup batteries are fully discharged
while the pairs are suspended. In this unlikely case, the primary storage
system performs the equivalent of an entire initial copy operation when the
pairs are resynchronized.
For descriptions of suspend types and the troubleshooting steps to resolve
them, see Suspended volume pair troubleshooting on page 8-4.

Volume pair creation


A volume pair consists of a primary and a secondary volume whose data is
synchronized until the pair is split. During the initial copy, the P-VOL
remains available to the host for read/write.

Creating a HAM pair


There are two basic steps involved in creating a HAM pair:
- Performing the initial copy.
- Verifying that the host recognizes both the P-VOL and S-VOL as a single
volume.

Prerequisites
Make sure you have configured the pair (see System configuration on page
3-1).

Procedure
When creating the pair, make sure that:
- LDEVs for the primary and secondary volumes:
  - Are created and formatted.
  - Have identical block counts.
  - Are in SMPL status.
- The S-VOL is offline; the P-VOL can be online.
- You copy each volume's port number, host group number (GID), and
LUN. This is needed during the procedure.
- You copy the information for the CU Free RCU from which you will assign
the secondary volume: the serial number, LDKC number, controller ID,
model name, path group ID, and channel type.
- You assign a quorum disk to the HAM pairs during the initial copy
procedure. Pairs created between the same primary and secondary
storage systems must be assigned the same quorum disk.
- The initial copy parameters you specify during the procedure cannot be
changed after a pair is created. If you attempt to change or delete them,
the Pair Operation window and Detailed Information dialog box show
misleading and inaccurate information.
- If you are creating multiple pairs in one operation, all pairs are assigned
the same parameters and the same quorum disk ID.

1. Access the MCU in SN, then click Actions > Remote Copy >TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the primary volume or volumes are located. The
volumes available to be paired are shown in the volume list.
4. Right-click a volume that you want as a P-VOL and select Paircreate
and HAM from the menu.
- You can create more than one pair at one time by selecting and then
right-clicking more than one volume. The related secondary volumes
must be in the same secondary storage system.
- Volumes with the pair icon are already used as primary data volumes.

5. In the Paircreate(HAM) dialog box, the volume you selected for pairing
is shown for P-VOL. If you selected multiple volumes, the volume with
the lowest LUN is shown.
Note: When a P-VOL or S-VOL appears in a dialog box, it is identified
by port number, GID, LUN (LDKC number: CU number: LDEV number),
CLPR number, and CLPR name of the LU.
- A # at the end of the string indicates an external volume.
- An X at the end of the string indicates a Dynamic Provisioning virtual
volume.

From the S-VOL drop-down menus, select the volume that you want to
pair with the shown P-VOL. Select the port number, GID, and LUN. This
will become the secondary volume (S-VOL).
If you are creating multiple pairs, select the S-VOL for the P-VOL
shown for P-VOL. The S-VOLs for the remaining group of P-VOLs will
be automatically assigned according to the LUN. For example, if you
are creating three pairs and you select LUN001 as the S-VOL, the
remaining S-VOLs for the other P-VOLs will be LUN002 and LUN003.
6. From the RCU drop-down menu, select the remote system where the
S-VOL is located. The list shows all registered CU Free RCUs, which are
shown by serial number, LDKC number, controller ID, model name, path
group ID, and channel type. The system you select must be the same
for all pairs being created in this operation.

7. The P-VOL Fence Level is automatically set to Never. The P-VOL will
never be fenced, or blocked from receiving host read/write.
Note: In the Initial Copy Parameters area, remember that the
parameters you specify cannot be changed after a pair or pairs are
created. To make changes to the parameters specified below, you will
need to release and recreate the pair.
8. From the Initial Copy drop-down menu, specify whether to copy data
or not copy during the paircreate operation:
Select Entire Volume to copy P-VOL data to the S-VOL (default).
Select None to set up the pair relationship between the volumes but
to copy no data from P-VOL to S-VOL. You must be sure that the PVOL and S-VOL are already identical.
9. From the Copy Pace drop-down menu, select the desired number of
tracks to be copied at one time (1-15) during the initial copy operation.
The default setting is 15. If you specify a large number, such as 15,
copying is faster, but I/O performance of the storage system may
decrease. If you specify a small number, such as 3, copying is slower,
but the impact on I/O performance is lessened.
10. From the Priority drop-down menu, select the scheduling order for the
initial copy operations. You can enter a value from 1 to 256. The highest
priority is 1; the lowest priority is 256. The default is 32.
For example, if you are creating 10 pairs and you specified in the
System Option window that the maximum initial copies that can be
made at one time is 5, the priority you assign here determines the order
that the 10 pairs are created.
11. From the Difference Management drop-down menu, select the unit of
measurement for storing differential data. You can select Auto, Cylinder,
or Track.
With Auto, the system decides whether to use Cylinder or Track. This is
based on the size of the volume.
If VLL is used, the number of cylinders set for VLL is applied.
If the pair volume has 10,019 or more cylinders, Cylinder is applied.
If the pair volume has less than 10,019 cylinders, Track is applied.
12. From the Quorum Disk ID drop-down menu, specify the quorum disk
ID that you want to assign to the pair or pairs.
13. Click Set. The settings are shown in the Preview area.
14. In the Preview list, check the settings. To change a setting, right-click
and select Modify. To delete a setting, right-click and select Delete.
When satisfied, click Apply. This starts pair creation and initial copying
if specified.
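The Auto rule in step 11 can be written out as a tiny function. The function name is ours; the 10,019-cylinder threshold is the one stated in the step above:

```python
# The Difference Management "Auto" rule from step 11: the system picks
# Cylinder or Track based on the size of the volume.

def auto_difference_management(cylinders):
    """Return the unit Auto would pick for storing differential data."""
    return "Cylinder" if cylinders >= 10019 else "Track"

print(auto_difference_management(10019))  # Cylinder
print(auto_difference_management(500))    # Track
```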

Verifying host recognition of a new pair


The final steps in creating new pairs are to make sure the host recognizes
the pair and to verify the path configuration from the host to the pair. With
HAM, the host recognizes both P-VOL and S-VOL as a single volume.

Working with volume pairs


HUS VM Block Module Hitachi High Availability Manager User Guide

413

To verify host recognition of the HAM P-VOL and S-VOL


1. Using CCI, make sure the HAM pair is created with the -fe option of the
pairdisplay command.
2. Make sure pair status is PAIR, and that Type on the Storage Navigator
Pair Operation window displays HAM for the primary and secondary
storage systems. (Until Type = HAM, application I/Os cannot be taken
over in case of a failover.)
3. Either reboot the host or use the host operating system's device
recognition command. For more information, see the documentation
provided with the system.
- For Linux host operating systems, make sure that the HAM pair's
status is PAIR before rebooting. If the host reboots when the HAM
pair is not in the PAIR status, a check condition might be reported to
the host.
- For Solaris host operating systems, make sure the HAM pair status is
PAIR before performing the following operations; otherwise a failure
can occur:
  - An online operation on a path you are not allowed to access.
  - Rebooting the host.
  - Dynamic addition of the host device.
  - Opening the host device.
  If a path becomes offline, change the HAM pair status to PAIR and
then perform the online operation for the offline path.
4. Check path configuration to the P-VOL and S-VOL using the multipath
software command on the host. Set the owner and non-owner paths to
the P-VOL.
For more information about the multipath software commands, see the
documentation provided with the multipath software.
The following figure shows an example of how the Hitachi Dynamic Link
Manager shows path configuration information. In this example, there are
four configuration paths to a single volume in the primary storage system.
Two of the paths are owner paths (Own), and two paths are non-owner
paths (Non).
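Step 1 of this procedure can be sketched from the CCI command line. The group name ham_grp below is an illustrative assumption; substitute the group defined in your HORCM configuration file.

```sh
# Display the HAM pair with the -fe option to show extended
# information such as the quorum disk ID; "ham_grp" is a
# hypothetical CCI group name.
pairdisplay -g ham_grp -fe
```

Confirm in the output that the status is PAIR for both volumes before rebooting the host.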

How multipath software shows storage serial number for pairs


If you use HUS VMs on the primary and the secondary sites, multipath
software shows the serial number of the primary storage system for both
the P-VOL and the S-VOL. The storage system is shown as HUS VM.

Splitting pairs
When the pair is synchronized, data written to the P-VOL is copied to the
S-VOL. This continues until the pair is split. When you split a pair, the pair
status changes to PSUS and updates to the S-VOL stop.
You can set an option to block updates to the P-VOL while the pair is split.
This results in the P-VOL and S-VOL staying synchronized.
If the P-VOL accepts write I/O while the pair is split, the primary storage
system records the updated tracks as differential data. This data is copied
to the S-VOL when the pair is resynchronized.
The pair can be made identical again by resynchronizing the pair.
Note:

You can split a pair from either the P-VOL or S-VOL.

Prerequisites
The pair must be in PAIR status.

Procedure
1. Access the MCU or RCU in SN, then click Actions > Remote Copy >
TrueCopy > Pair Operation.
You do not need to vary the P-VOL offline.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the pair volume is located.
4. In the volume list, right-click the pair to be split and click Pairsplit-r
from the menu. You can split more than one pair by selecting and then
right-clicking more than one volume.
5. In the Pairsplit-r dialog box, information for the selected volume is
shown for Volume. When more than one volume is selected, the volume
with the lowest LUN is shown.
From the Suspend Kind drop-down menu, specify whether or not to
continue host I/O writes to the P-VOL while the pair is split. (If you are
running the CCI command from the S-VOL, this item is disabled.)
Select P-VOL Failure to prevent write I/O to the P-VOL while the
pair is split, regardless of the P-VOL fence level setting. Choose this
setting if you need to maintain synchronization of the HAM pair.
Select S-VOL to allow write I/O to the P-VOL while the pair is split.
The P-VOL will accept all subsequent write I/O operations after the
split. The primary storage system will keep track of updates while the

pair is split. Choose this setting if the P-VOL is required for system
operation and you need to keep the P-VOL online while the pair is
split.
6. Click Set. The settings are shown in the Preview area.
7. In the Preview list, check the settings. To change a setting, right-click
and select Modify. When satisfied, click Apply. The primary storage
system will complete all write operations in progress before splitting the
pair, so that the pair is synchronized at the time of the split.
8. Verify that the operation is completed successfully by checking the
Status column. The status should be PSUS.
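The same split can also be issued from CCI. The group name ham_grp is an illustrative assumption:

```sh
# Split the HAM pair; copying to the S-VOL stops and the
# pair status changes to PSUS.
pairsplit -g ham_grp -r

# Verify that the status is now PSUS.
pairdisplay -g ham_grp
```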

Resynchronizing pairs
When you resynchronize a split pair, the volume that was not being
updated (usually the S-VOL) is synchronized with the volume that was
being updated by the host (usually the P-VOL).
Pair status during resynchronization changes to COPY. It changes again to
PAIR when the operation is complete.
The method for performing this operation differs according to whether the
P-VOL or the S-VOL is accepting write I/O from the host. Check the VOL
Access column in the Pair Operation window to see which volume is online.

- Pairs must be in PSUS or PSUE status when the P-VOL is receiving I/O.
If the status is PSUE, clear the error before resynchronizing. The
operation can be performed using the SN procedure below.
- When the S-VOL is receiving host I/O, the pair status must be SSWS.
The operation is performed by running the CCI pairresync -swaps
command.

Note: If you want to resynchronize the pair that has been released from
the S-VOL side, do not use this procedure. Instead, complete the following:
1. Release the pair from the P-VOL side by running the pairsplit-S CCI
command.
2. Create the pair from the P-VOL side using the Paircreate(HAM) dialog
box, making sure to set the appropriate initial copy option (Entire
Volume or None).
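For step 2 of this note, the equivalent CCI create operation is shown in the command comparison table later in this chapter. The sketch below uses an illustrative group name (ham_grp) and quorum disk ID (0); the -vl option, which copies from the local side, is an assumption about your topology:

```sh
# Create the HAM pair from the P-VOL (local) side with
# quorum disk ID 0 and fence level Never.
paircreate -g ham_grp -f never -jq 0 -vl
```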
1. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the P-VOL is located.
4. In the volume list, right-click the P-VOL in the pair to be resynchronized
and click Pairresync.
In the Pairresync dialog box, P-VOL Fence Level is automatically set
to Never. The volume receiving updates from the host will never be
fenced, or blocked, from receiving host read/write.

5. In the Pairresync dialog box, from the Copy Pace drop-down menu,
select the desired number of tracks to be copied at one time (1-15)
during the copy operation. The default setting is 15. If you specify a
large number, such as 15, copying is faster, but I/O performance of the
storage system may decrease. If you specify a small number, such as 3,
copying is slower, but the impact on I/O performance is lessened.
6. From the Priority drop-down menu, select the scheduling order for the
copy operations. This applies when multiple pairs are being
resynchronized. You can enter a value from 1 to 256, where 1 is the
highest priority and 256 is the lowest. The default is 32.
7. Click Set.
8. In the Preview list, check the settings. To change a setting, right-click
and select Modify. When satisfied, click Apply.
Update the pair status by clicking File/Refresh, then confirm that the
operation is completed with a status of PAIR.
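When the P-VOL is receiving host I/O, the same resynchronization can be issued from CCI; the group name is an illustrative assumption:

```sh
# Resynchronize the split pair; status changes to COPY,
# then to PAIR when the copy completes.
pairresync -g ham_grp
pairdisplay -g ham_grp
```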

Reverse resynchronization
After a failover, when the S-VOL is receiving updates from the host instead
of the P-VOL, you can resynchronize the P-VOL with the S-VOL by running
the CCI pairresync -swaps command. The copy direction is from the S-VOL
to the P-VOL.
The P-VOL and S-VOL are swapped in this operation: the secondary storage
system S-VOL becomes the P-VOL; the primary storage system P-VOL
becomes the S-VOL.
The pairresync -swaps command is the only supported method for
reverse resynchronizing HAM pairs.

Prerequisites
Make sure that:

All errors are cleared.

Pair status is SSWS.

Procedure
1. Run the CCI pairresync -swaps command on the S-VOL.
The data in the RCU S-VOL is copied to the MCU P-VOL. Also, the P-VOL
and S-VOL are swapped: the MCU P-VOL becomes the S-VOL, and the
RCU S-VOL becomes the P-VOL.
2. When the operation completes, verify that the pair is in PAIR status.
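Step 1 can be sketched as follows; the group name is an illustrative assumption:

```sh
# Run from the S-VOL side while the pair is in SSWS status.
# Data is copied from the S-VOL back to the P-VOL, and the
# volume roles are swapped.
pairresync -g ham_grp -swaps
```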
The following figure shows the swapping of a HAM P-VOL and S-VOL.


Releasing a pair
You can release a pair when you no longer need to keep the P-VOL and S-VOL synchronized. You can release a single pair or multiple pairs using the
same procedure.
When you release a pair from the P-VOL, the primary storage system stops
copy operations and changes pair status of both P-VOL and S-VOL to SMPL.
The system continues to accept write I/O to the P-VOL volume, but does not
keep track of the updates as differential data.
When you release a pair from the S-VOL, the secondary storage system
changes S-VOL status to SMPL, but does not change P-VOL status. When
the primary storage system performs the next pair operation, it detects the
S-VOL status as SMPL and changes the P-VOL status to PSUS. The suspend
type is Delete pair to RCU.
Tip: Best Practice: Release a pair from the P-VOL. If the pair has a failure
and cannot be released from the P-VOL, then release it from the S-VOL.
1. Verify that the P-VOL has the latest data using one of the following
methods:
On the secondary storage system, open the Pair Operation window.
Check that the Volume Access column for the S-VOL is blank (must
not show Access Lock).
Use the multipath software command to check if the P-VOL path
(owner path) is online.
2. Vary the S-VOL path offline using the multipath software command.
3. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
4. Make sure that you are in the modify mode.

5. In the Pair Operation window tree, select the CU group, CU, port, or
host group where the P-VOL is located.
6. Right-click the pair to be released and click Pairsplit-S.
7. In the Pairsplit-S dialog box, Delete Pair by Force drop-down menu,
select one of the following:
Yes: The pair will be released even if the primary storage system
cannot communicate with the secondary storage system. This option
can be used to free a host waiting for a response from the primary
storage system, thus allowing host operations to continue.
No: The pair will be released only if the primary storage system can
change pair status for both P-VOL and S-VOL to SMPL.
When the status of the pair to be released is SMPL, the default setting
is Yes and it cannot be changed. If the status is other than SMPL, the
default setting is No.
8. Click Set.
9. Click Apply.
10.Verify that the operation completes successfully (changes to SMPL
status).
11.The device identifier for one or both pair volumes changes when the pair
is released. Therefore, you should enable host recognition of the S-VOL
using one of the following methods:
- Run the device recognition command provided by the operating
system on the host. For more information, see the following table.
- Reboot the host.
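Steps 6 through 9 correspond to the CCI pairsplit -S command; the group name is an illustrative assumption:

```sh
# Release the pair; both the P-VOL and the S-VOL return
# to SMPL status.
pairsplit -g ham_grp -S
```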

Changing TrueCopy pairs to HAM pairs


You can convert a TrueCopy pair to a HAM pair.

Requirements
Make sure that the following requirements are met to ensure the pair can
be changed correctly without errors or failures:

- HAM system requirements must be met.
- The microcode version of the primary and secondary storage systems
must be DKCMAIN 73-03-0X-XX/XX or later.
- A quorum disk ID must be registered.
- The TrueCopy pair must not be registered to a consistency group.
- CU Free was specified on the primary and secondary storage systems
when the TrueCopy pair was created.
- The P-VOL is in the primary storage system. If the S-VOL is in the
primary storage system, run the horctakeover command on the S-VOL
to reverse the P-VOL and the S-VOL of the pair.

Caution: If this operation fails, check TrueCopy pair options to make sure
that they have not changed. A failure can cause unexpected changes, for
example, the P-VOL fence level could change from Data to Never.
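If the volumes are reversed, the takeover described in the last requirement can be issued from CCI; the group name tc_grp is an illustrative assumption:

```sh
# Run on the S-VOL side of the TrueCopy pair to swap the
# P-VOL and S-VOL roles before the conversion.
horctakeover -g tc_grp
```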

Procedure
1. Access the MCU in SN, then click Actions > Remote Copy > TrueCopy
> Pair Operation.
2. Make sure that you are in the modify mode.
3. In the Pair Operation window tree, select a CU group, CU, port, or host
group where the TrueCopy P-VOL belongs.
4. Split the TrueCopy pairs that you want to convert to HAM pairs.
5. In the Pairsplit-r dialog box, specify the parameters you want, then
click Set.
6. In the Pair Operation window, verify the settings in the Preview area
and click Apply.
7. Verify that the pair status is PSUS.
8. In the list, select and right-click the TrueCopy P-VOL, and then click
Pairresync.
9. In the Pairresync dialog box, use the following settings:
- For P-VOL Fence Level, specify Never.
- In Change attribute to HAM, specify Yes.
- In Quorum Disk ID, specify a quorum disk ID for the HAM pair.
10.Click Set.
11.Verify settings in Preview pane, then click Apply.
12.Verify that the Status column shows COPY for the pair that you
converted, and that the Type column shows HAM.
To update the information in the window, click File > Refresh.
13.Access the secondary storage system, open the Pair Operation
window, and verify that the Type column for the S-VOL is HAM.
Caution: Application I/Os cannot be taken over in the event of a failover
until the window shows HAM for the S-VOL. Before moving on to the
next step, wait a while and refresh the window to make sure HAM
appears.
14.Make sure that the host recognizes the HAM pair.
For more information about verifying host recognition for new pairs, see
Verifying host recognition of a new pair on page 4-13.


Comparison of the CCI commands and Storage Navigator


You can perform many HAM pair tasks using the CCI. The following list
compares CCI commands and SN functionality for operations that can be
performed on HAM pairs:

- Though the host recognizes the HAM volumes as a single volume, CCI
views the P-VOL and S-VOL as separate volumes.
- CCI shows the HAM pair, the TrueCopy pair, and the UR pair according
to the following table.
Pair Type                 | Display of CTG | Display of Fence       | Display of JID
--------------------------|----------------|------------------------|---------------
HAM pair                  |                | NEVER                  | Quorum Disk ID
TrueCopy pair             | The CTG ID     | DATA, STATUS, or NEVER |
Universal Replicator pair | The CTG ID     | ASYNC                  | The journal ID

- When running CCI commands on the S-VOL, make sure that you specify
S-VOL information in the scripts.

The following table shows the supported CCI commands and corresponding
SN tasks.
Type of task    | Task                                      | CCI command                              | SN window or dialog box
----------------|-------------------------------------------|------------------------------------------|------------------------------
Configuration   | Add the quorum disk ID.                   | N/A                                      | Add Quorum Disk ID dialog box
Pair operations | Create a pair.                            | paircreate -jp <quorum disk ID> -f never, or paircreate -jq <quorum disk ID> -f never | Paircreate dialog box
                | Split a pair.                             | pairsplit -r                             | Pairsplit-r dialog box
                | Resynchronize a pair.                     | pairresync                               | Pairresync dialog box
                | Resynchronize a pair (reverse direction). | pairresync -swaps, pairresync -swapp     | N/A
                | Swap & suspend a pair.                    | pairsplit -RS                            | N/A
                | Forcibly change the S-VOL status from SSWS to PSUS (rollback). | pairsplit -RB       | N/A
                | Swapping, suspending, and resynchronizing a pair (reverse direction). | horctakeover | N/A
                | Change a TrueCopy pair to a HAM pair.     | N/A                                      | Pairresync dialog box
Maintenance     | View pair status.                         | pairdisplay                              | Pair Operation window or Detailed Information dialog box
                | Release a pair.                           | pairsplit -S                             | Pairsplit-S dialog box
                | Delete the quorum disk ID.                | N/A                                      | Delete Quorum Disk ID button

Note the following:

- You cannot run the pairresync command on a pair when the storage system
cannot access the quorum disk due to a failure.
- To resynchronize a HAM pair, the fence level must be Never.
- After you run the pairsplit -RS command, write commands to the P-VOL will
fail.
- You cannot run the following commands on HAM pairs: pairresync -fg <CTGID>
and pairsplit -rw.


5
System maintenance
To ensure that the HAM system functions properly and provides robust,
reliable high-availability protection for host applications, you must be
able to perform HAM system maintenance tasks.

Applications used to perform maintenance tasks

Related documentation

The different types of maintenance tasks


Applications used to perform maintenance tasks


All of the typical maintenance tasks can be completed using SN or CCI.

Required Storage Navigator settings


You must be in Modify Mode in SN.

Related documentation
For more information about other system maintenance tasks, see the
Hitachi TrueCopy User Guide.

The different types of maintenance tasks


The following tasks can be performed:
- Switching paths using multipath software on page 5-2
- Discontinuing HAM operations on page 5-3
- Quorum disk ID deletion on page 5-3
- Recovery of accidentally deleted quorum disks on page 5-5
- Planned outages for system components on page 5-6

Switching paths using multipath software


Before switching paths using multipath software, make sure that P-VOL and
S-VOL status is PAIR. If the path is switched while the HAM pair is not in the
PAIR status, I/O updates could be suspended or the host might read older
data.
Vary the paths as follows:
- When both owner and non-owner paths are online, vary the owner path
offline.
- When the owner path is offline and the non-owner path is online, vary
the owner path online.

Caution: When the HAM pair is in the PAIR status and you vary the owner
path online, non-owner paths on the RCU may change to offline. Because no
failure actually occurred on these paths, restore the HAM pair status to
PAIR, and then vary the non-owner paths online.
I/O scheduled before the owner paths are brought online with the multipath
software is issued to the non-owner paths. Depending on timing, I/O
scheduled after the owner paths come online might be issued before I/O
that was scheduled earlier. In this case, a check condition for the I/O issued
to the non-owner paths may be reported to the host, because the RCU HAM
pair is in the PSUS status and the MCU HAM pair is in the SSWS status. I/O
reported with the check condition is reissued to the owner path and then
completes normally. Non-owner paths reported with the check condition go
offline as failed paths.
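With Hitachi Dynamic Link Manager, paths are varied offline and online with the dlnkmgr command; the path ID 000001 below is an illustrative assumption taken from the view output:

```sh
# List the paths and note the path ID of the owner path.
dlnkmgr view -path

# Vary the owner path offline, and later online again.
dlnkmgr offline -pathid 000001
dlnkmgr online -pathid 000001
```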


Discontinuing HAM operations


You can discontinue HAM operations at any time.
Note: During the procedure, you must delete the external volume
mapping of the quorum disk. You cannot delete the mapping while a HAM
pair is still connected to the disk. Before you delete the mapping,
disconnect the quorum disk by running the Disconnect External
Volumes command.
1. Using multipath software, vary the non-owner path to offline.
2. Release all HAM pairs from the primary storage system.
3. From the primary and secondary storage systems, verify that the pair
status for all volumes is SMPL.
If the status does not change to SMPL due to a failure of the remote copy
connections, release all HAM pairs from the secondary storage system.
4. From the primary and secondary storage systems, delete all quorum
disk IDs. The quorum disk becomes a normal external volume.
5. From the primary and secondary storage systems, disconnect the
quorum disk by doing the following:
a. Open the External Storages window.
b. Select the quorum disk and specify the Disconnect External
Volumes command.
6. From the primary and secondary storage systems, delete the external
volume mapping of the quorum disk. In the External Storages window,
select the quorum disk and specify the Delete External Volumes
command.
Remove the cables between the host and the secondary storage system,
between the primary and secondary storage systems, and between the
storage systems and the quorum disk.
To cause the host to recognize HUS VM S-VOLs as non-pair volumes, see
Releasing a pair on page 4-18.

Quorum disk ID deletion


You delete a quorum disk ID during some normal maintenance activities and
failure recovery. When you delete a disk ID, the system lists the quorum
disk as an external volume and not as a pair.
There are two procedures for deleting a quorum disk ID: standard deletion
and forced deletion. Use the forced deletion when access to the disk is
blocked due to a failure in the disk or the path.

- Deleting quorum disk IDs (standard method) on page 5-4
- Deleting quorum disk IDs by system attribute (forced deletion) on page 5-4


Deleting quorum disk IDs (standard method)


You delete a quorum disk ID using the standard method if you are able to
access the disk (access may be blocked if there is a disk failure or a path
failure).

Requirements
To ensure that you delete the disk ID properly, make sure that:
- The disk is not being used by any HAM pair. If it is, you cannot delete
the ID.
- You delete the ID on both the primary and secondary storage systems.
The procedure is the same for both systems.
- The ID you delete is the same on both systems.

1. Access an MCU or RCU in SN and click Actions > Remote Copy >
TrueCopy > Quorum Disk Operation.
2. In the Quorum Disk Operation window, make sure that you are in
the modify mode.
3. In the quorum ID list, right-click the quorum disk ID that you want to
delete, then click Delete Quorum Disk ID.
4. Confirm the operation in the Preview list, then click Apply.
If the quorum disk ID cannot be deleted, a failure might have occurred in
the quorum disk. Do one of the following:
- Recover from the failure, then try to delete the ID again using this
procedure.
- Forcibly delete the quorum disk (see Deleting quorum disk IDs by
system attribute (forced deletion) on page 5-4).

Deleting quorum disk IDs by system attribute (forced deletion)


You can forcibly delete a quorum disk ID when access to the disk is blocked
due to a failure in either the disk or path. This is done by turning a system
option ON and then forcibly deleting the disk. This causes the disk to
become a normal external volume.

Requirements
To ensure that you delete the disk ID properly, make sure that:
- The disk is not being used by any HAM pair. If it is, you cannot delete
the ID.
- You delete the ID on both the primary and secondary storage systems.
The procedure is the same for both systems.
- The ID you delete is the same on both systems.

1. On the primary storage system, release all HAM pairs using the quorum
disk that you want to delete.


2. Call the Hitachi Data Systems Support Center and ask them to turn ON
the appropriate system option on the storage system that cannot access
the quorum disk.
3. On the primary and secondary storage system, delete the quorum disk
ID from the TrueCopy/Quorum Disk Operation window in SN.
4. On the primary and secondary storage system, make sure that the ID is
correctly deleted. If you deleted the wrong ID, register the ID again.
5. Call the Hitachi Data Systems Support Center and ask them to turn OFF
the system option.

Recovery of accidentally deleted quorum disks


If you forcibly deleted a quorum disk by mistake, you can recover the
quorum disk. The procedure you use depends on whether the P-VOL or
S-VOL was receiving host I/O when the disk was deleted.
You can use SN and CCI, or only CCI to complete the recovery procedures.
Tip: Some of the steps can be done using either SN or CCI. Typically, these
steps can be completed more quickly using CCI.
The procedures are:
- Recovering the disk when the P-VOL was receiving host I/O at deletion
on page 5-5
- Recovering the disk when the S-VOL was receiving host I/O at deletion
on page 5-5

Recovering the disk when the P-VOL was receiving host I/O at deletion
Use this procedure to recover a disk that was accidentally deleted when the
primary volume was receiving host I/O at the time of deletion.
1. Vary the host-to-S-VOL path offline using the multipath software.
2. Release all pairs that use the forcibly-deleted quorum disk.
3. Make sure the quorum disk ID is deleted from both primary and
secondary storage systems.
4. On the primary and secondary storage system, add the quorum disk ID.
5. On the primary storage system, create the HAM pair.
6. On both the primary and secondary storage systems, make sure that
Type shows HAM.

Recovering the disk when the S-VOL was receiving host I/O at deletion
Use this procedure to recover a disk that was accidentally deleted when
the secondary volume was receiving host I/O at the time of deletion.
1. Stop the I/O from the host.
2. Release all pairs using the forcibly-deleted quorum disk.
3. Make sure the quorum disk ID is deleted from both primary and
secondary storage systems.


4. On the secondary storage system, create a TrueCopy pair. The data flow
is from the secondary to the primary site.
To do this, specify the HAM S-VOL as a TrueCopy P-VOL.
5. Do one of the following:
- If changing a TrueCopy pair to a HAM pair: When the copy operation
is completed, run the horctakeover command on the TrueCopy
S-VOL. This reverses the TrueCopy P-VOL and S-VOL.
- If using CCI to create the HAM pair again: When the copy operation
is completed, run the pairsplit -S command and release the
TrueCopy pair.
- If using SN: When the copy operation is completed, release the
TrueCopy pair.
6. Using multipath software, vary the host-to-P-VOL path online.
7. On the primary and secondary storage systems, add the quorum disks.
8. Do one of the following:
- If changing the TrueCopy pair to a HAM pair: From the primary
system P-VOL using Storage Navigator, split the TrueCopy pair, then
change the pair to a HAM pair. See Changing TrueCopy pairs to HAM
pairs on page 4-19.
- If creating the pair again:
  - If using CCI, run the paircreate command on the primary storage
system.
  - If using SN, create the HAM pair on the primary storage system.
9. On the primary and secondary storage systems, make sure the volume
type is HAM.

Planned outages for system components


As part of your normal system maintenance activities, you can perform
planned outages by turning the power for system components on and off as
needed.
You can turn power on and off for the following types of components:
- Primary storage systems
- Secondary storage systems
- Quorum disks

Options for performing the planned outages


You do not have to use multiple procedures to perform a planned outage on
multiple components. A single procedure contains all of the steps required.
You can use a single procedure to perform a planned outage on:
- A quorum disk
- A primary storage system and the quorum disk connected to it
- A secondary storage system and the quorum disk connected to it
- A primary and secondary storage system and the disk connected to them

The procedures for performing planned outages


The following procedures contain all of the steps required to perform a
planned outage. Use the procedure that fits the requirements for the
planned outage:
- Performing planned outages (quorum disk only) on page 5-7
- Performing planned outages (primary system and quorum disk) on page 5-9
- Performing planned outages (secondary system and quorum disk) on page 5-10
- Performing planned outages (both systems and quorum disk) on page 5-10

Performing planned outages (quorum disk only)


The procedure you use for a planned outage of a quorum disk depends on
whether the primary or secondary storage system is receiving host I/O
updates.
Use one of the following:
- Performing outages when the P-VOL is receiving host I/O on page 5-7
- Performing outages when the S-VOL is receiving host I/O on page 5-8

Performing outages when the P-VOL is receiving host I/O


Use this procedure when the P-VOL is receiving host I/O updates. If you also
need to power off and on the primary or secondary storage system
connected to the disk, use one of the other procedures (see The procedures
for performing planned outages on page 5-7).
1. Using multipath software, vary the non-owner path offline.
2. On the primary storage system, complete the following:
a. Split the pair.
b. Make sure the P-VOL is PSUS.
c. Run the following command on the quorum disk:
Disconnect External Volumes
3. On the secondary storage system, run the following command on the
quorum disk:
Disconnect External Volumes
4. Power off the quorum disk.
5. Power on the quorum disk.


6. On the secondary storage system, run the following command on the
quorum disk:
Reconnect External Volumes
7. On the primary storage system, complete the following:
a. Run the following command on the quorum disk:
Reconnect External Volumes
b. Resynchronize the pair.
c. Make sure the P-VOL status is PAIR.
8. Using multipath software, vary the non-owner path online.
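The pair-handling steps of this procedure map to CCI as sketched below; the Disconnect/Reconnect External Volumes steps are performed in Storage Navigator, and the group name is an illustrative assumption:

```sh
# Steps 2a-2b: split the pair and confirm PSUS.
pairsplit -g ham_grp -r
pairdisplay -g ham_grp

# (Power-cycle the quorum disk and reconnect it on both
# storage systems in Storage Navigator.)

# Steps 7b-7c: resynchronize and confirm PAIR.
pairresync -g ham_grp
pairdisplay -g ham_grp
```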

Performing outages when the S-VOL is receiving host I/O


Use this procedure when the S-VOL is receiving host I/O updates. If you also
need to power off and on the primary or secondary storage system
connected to the disk, use one of the other procedures (see The procedures
for performing planned outages on page 5-7).
1. On the secondary storage system, split the pair.
2. On the secondary storage system, make sure the P-VOL is in PSUS
status.
3. On the primary storage system, run the following command on the
quorum disk:
Disconnect External Volumes
4. On the secondary storage system, run the following command on the
quorum disk:
Disconnect External Volumes
5. Power off the quorum disk.
6. Power on the quorum disk.
7. On the secondary storage system, run the following command on the
quorum disk:
Reconnect External Volumes
8. On the primary storage system, run the following command on the
quorum disk:
Reconnect External Volumes
9. On the secondary storage system, complete the following:
a. Resynchronize the pair.
b. Make sure the P-VOL status is PAIR.
10.Using multipath software, complete the following:
a. Vary the owner path online.
b. Vary the non-owner path offline.
11.On the secondary storage system, make sure the P-VOL status is PSUS.
12.On the primary storage system, complete the following:
a. Make sure the S-VOL status is SSWS.


b. Reverse the copy direction so that it is again from the primary storage
system P-VOL to the secondary storage system S-VOL by running the
following command on the S-VOL:
pairresync -swaps
c. Make sure P-VOL and S-VOL status is PAIR.
13.If the non-owner path is offline, vary it online using the multipath
software.

Performing planned outages (primary system and quorum disk)


1. Make sure the P-VOL and S-VOL status is PAIR.
2. Using multipath software, complete the following:
a. Vary the owner path offline. *1
b. Make sure that host I/O switches to the non-owner path (a path
failure is reported).
3. On the primary storage system, make sure that P-VOL status is PSUS.
4. On the secondary storage system, complete the following:
a. Make sure that S-VOL status is SSWS.
b. Run the following command on the quorum disk:
Disconnect External Volumes
5. Power off the primary storage system.
6. Power off the quorum disk. *2
7. Power on the quorum disk. *2
8. Power on the primary storage system. *3
9. On the secondary storage system, complete the following:
a. Run the following Universal Volume Manager command on the
quorum disk:
Reconnect External Volumes
b. Run the following command on the S-VOL:
pairresync -swaps
c. Make sure that P-VOL and S-VOL status is PAIR.
10.Using the multipath software, complete the following:
a. Vary the owner path online.
b. Vary the non-owner path offline.
11.On the primary storage system, make sure that S-VOL status (original
P-VOL) is SSWS.
12.On the secondary storage system, make sure that P-VOL status (original
S-VOL) is PSUS.
13.On the primary storage system, complete the following:
a. Run the following command on the S-VOL:
pairresync -swaps
b. Make sure that P-VOL and S-VOL are reversed to their original pair
configuration and that status is PAIR.


14. If the non-owner path is offline, vary it online using the multipath software.
*1: If you vary the owner path offline during I/O processing, the multipath
software may display a message saying that the path is offline due to a path
failure. In this case, you can ignore the message.
*2: Skip this step when you power off the primary storage system only.
*3: When the host operating system is Windows, the multipath software
may return a host-to-P-VOL path failure when you power on the primary
storage system. In this case, you can ignore the message. This happens
because HAM blocks access to the primary storage system after the plug
and play function automatically recovers the owner path to online.
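The CCI portion of this procedure (steps 9 through 13) can be sketched as the following command sequence. The device group name havgrp is a placeholder, and the Universal Volume Manager operations, which are performed from Storage Navigator, appear only as comments:

```sh
# Run from the secondary storage system after the primary system and
# quorum disk are powered back on (group name is illustrative).
# (SN/UVM) Reconnect External Volumes on the quorum disk
pairresync -g havgrp -swaps   # reverse the pair; copy flows S-VOL -> P-VOL
pairdisplay -g havgrp         # wait until both volumes show PAIR
# Vary the owner path online and the non-owner path offline, then
# restore the original copy direction from the original P-VOL side:
pairresync -g havgrp -swaps
pairdisplay -g havgrp         # confirm PAIR in the original direction
```

Running pairresync -swaps from the S-VOL side swaps the roles of the two volumes, which is why the command appears twice: once to recover, and once to restore the original pair configuration.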

Performing planned outages (secondary system and quorum disk)


Use this procedure to perform a planned outage of the secondary storage
system and the quorum disk connected to the system.
1. Using multipath software, vary the non-owner path offline.
2. On the primary storage system, complete the following:
a. Split the pair.
b. Make sure that P-VOL status is PSUS.
c. Run the following Universal Volume Manager command on the
quorum disk:
Disconnect External Volumes
3. Power off the secondary storage system.
4. Power off the quorum disk. If you are powering off only the secondary
storage system, skip this step.
5. Power on the quorum disk. If you are powering off only the secondary
storage system, skip this step.
6. Power on the secondary storage system.
7. On the primary storage system, complete the following:
a. Run the following Universal Volume Manager command on the
quorum disk:
Reconnect External Volumes
b. Resynchronize the pair.
c. Make sure that P-VOL status is PAIR.
8. Using multipath software, vary the non-owner path online.

Performing planned outages (both systems and quorum disk)


Use this procedure to perform a planned outage of the primary storage system, the secondary storage system, and the quorum disk connected to the systems.
1. On the primary storage system, complete the following:
a. Split the pair.
b. Make sure that P-VOL status is PSUS.


2. Stop host I/O.


3. Using multipath software, vary the non-owner path offline.
4. Power off the primary storage system.
5. Power off the secondary storage system.
6. Power off the quorum disk.
7. Power on the quorum disk.
8. Power on the secondary storage system.
9. Power on the primary storage system.
10.Using multipath software, vary the owner path online.
11.Restart host I/O.
12.On the primary storage system, complete the following:
a. Resynchronize the pair.
b. Make sure that P-VOL status is PAIR.
13.Using multipath software, vary the non-owner path online.
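In CCI terms, the pair handling in this procedure reduces to a split before the outage and a resynchronization afterward. The device group name havgrp is a placeholder, and power and path operations are shown as comments:

```sh
pairsplit -g havgrp       # split the pair; the P-VOL goes to PSUS
pairdisplay -g havgrp -l  # verify PSUS before stopping host I/O
# ... stop host I/O, power off/on both systems and the quorum disk,
#     vary the owner path online, restart host I/O ...
pairresync -g havgrp      # resynchronize; copy direction is unchanged
pairdisplay -g havgrp     # confirm PAIR before varying the non-owner
                          # path online
```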


6
Disaster recovery
On-site disasters, such as power supply failures, can disrupt the normal
operation of your HAM system. Being able to quickly identify the type of
failure and recover the affected system or component helps to ensure that
you can restore high-availability protection for host applications as soon as
possible.

Main types of failures that can disrupt your system

The basic recovery process

System failure messages

Detecting failures

Determining which basic recovery procedures to use

Recovery from blocked pair volumes

Recovery from quorum disk failure

Recovery from power failure

Recovery from failures using resynchronization

Recovering from path failures

Allowing host I/O to an out-of-date S-VOL

Contacting the Hitachi Data Systems Support Center


Main types of failures that can disrupt your system


The main types of failures that can disrupt the system are power failures,
hardware failures, connection or communication failures, and software
failures. These types of failures can cause system components to function
improperly or stop functioning.
System components typically affected by these types of failures include:

Main control unit (primary storage system)

Service processor (primary or secondary storage system)

Remote control unit (secondary storage system)

Volume pairs

Quorum disks

The basic recovery process


The basic process for recovering from an on-site disaster is the same,
regardless of the type of failure that caused the disruption in the system.
The recovery process involves:

Detecting failures

Determining the type of failure

Determining which recovery procedure to use

Completing the recovery procedure.

System failure messages


The system automatically generates messages that you can use to detect
failures and determine the type of failure that occurred. The messages
contain information about the type of failure.
System information messages (SIM): generated by the primary and secondary storage systems

Path failure messages: generated by the multipath software on the host

Detecting failures
Detecting failures is the first task in the recovery process. Failure detection
is essential because you need to know the type of failure before you can
determine which recovery procedure to use.
You have two options for detecting failures. You can check to see if failover
has occurred and then determine the type of failure that caused it, or you
can check to see if failures have occurred by using the SIM and path failure
system messages.


Option 1: Check for failover first on page 6-3

Option 2: Check for failures only on page 6-4


Option 1: Check for failover first


You can use status information about the secondary volume and path status
information to see if failover occurred. You can do this using SN, CCI, or
multipath software.
Note: You need to determine the type of failure before you can determine
which recovery procedure to use.
For more information, see Using system messages to check for failures on
page 6-4.

Using Storage Navigator to check for failover on page 6-3

Using CCI to check for failover on page 6-3

Using multipath software to check for failover on page 6-3

Using Storage Navigator to check for failover


If you are using SN, use the status information about the secondary volume to check whether failover has occurred.
1. In the Pair Operation window, if the following values appear for the
secondary volume, failover occurred:
Status is SSWS
Vol Access is Access (Lock)

Using CCI to check for failover


If you are using CCI, use the following procedure to obtain status
information about the secondary volume to see if failover has occurred.
1. Run the pairdisplay command on the secondary storage system.
2. If the status value for the secondary volume is SSWS, failover occurred.
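The check above can be scripted. The helper below is a hypothetical wrapper, not part of CCI: it scans captured pairdisplay output for the SSWS status, and the sample field layout shown in the comment is illustrative only.

```shell
#!/bin/sh
# Hypothetical helper (not part of CCI): scan captured pairdisplay
# output for the SSWS status and report whether failover occurred.
# Usage: pairdisplay -g <group> -l > pd.txt ; check_failover pd.txt
check_failover() {
  if grep -q 'SSWS' "$1"; then
    echo "failover"
  else
    echo "no-failover"
  fi
}
```

A status of SSWS on the secondary volume means the pair was swapped by failover, so the recovery procedures in this chapter apply.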

Using multipath software to check for failover


If you are using multipath software, check the path status information to
see if failover has occurred.
1. Run the path status viewing command.
2. If the system returns the following status values for the owner and non-owner paths, failover occurred:
Owner path is offline
Non-owner path is online

Next steps
You need to determine the type of failure before you can determine which
recovery procedure to use.
For more information, see Using system messages to check for failures on
page 6-4.


Option 2: Check for failures only


You can use automatically generated system messages to see if failures
have occurred and to determine the type of failure. These system messages
contain information about the type of failure that occurred.
Based on these messages, you determine which recovery procedure to use.
For more information, see Determining which basic recovery procedures to
use on page 6-4.

Using system messages to check for failures


Use the system information messages (SIM) and path failure system
messages to check for failures and to determine the type of failure.
1. Check for system information messages from the primary or secondary
storage system.
2. Check for path failure messages generated by the multipath software on
the host.

Determining which basic recovery procedures to use


Determining which basic recovery procedures to use involves analyzing the
information in the system information messages (SIM) and path failure
system messages to identify the type of failure, then selecting the correct
procedure based on the type of failure.
Each of the following basic types of failures uses a different set of recovery
procedures:

Blocked volumes

Quorum disk

Power outage

Resynchronization of volume pairs

Owner or non-owner path failures

Selecting procedures
Make sure you have identified the type of failure that occurred by using the
system information messages (SIM) and path failure system messages.
For more information, see Using system messages to check for failures on
page 6-4.
1. Analyze these failure messages to determine which basic type of failure
occurred. The failure types are: blocked volumes, quorum disk, power
outage, or failures that can be resolved by resynchronizing affected
pairs.
2. Use the decision tree in the following figure to select the correct set of
procedures. Use the links below the figure to go to the appropriate
procedures.


1. Recovery from blocked pair volumes on page 6-5
2. Recovery from quorum disk failure on page 6-9
3. Recovery from power failure on page 6-12
4. Recovery from failures using resynchronization on page 6-16

Recovery from blocked pair volumes


In most cases, a blocked volume pair results in automatic failover. This
helps to ensure that host I/O continues and host applications remain
available.
The process used to recover pair volumes from this failure varies depending
on:

Whether the volume is a primary or secondary volume.

Whether the volume is in the primary or secondary storage system.

Depending on which systems and volumes are affected and whether failover
occurred, the recovery process can involve:

Releasing the pair

Clearing the failure

Clearing any other failures that may prevent normal host I/O


Recreating the pair

Restoring the original relationship of the volumes

Restoring normal host I/O (or restarting host I/O)

The following figure shows the different volume failure scenarios that are
possible with volume pairs.

1. Recovering from primary volume failure on the MCU on page 6-6.


2. Recovering from secondary volume failure on the MCU on page 6-7.
3. Recovering from primary volume failure on the RCU on page 6-8.
4. Recovering from secondary volume failure on the RCU on page 6-9.

Recovering from primary volume failure on the MCU


When this failure occurs, automatic system failover switches host I/O to the
primary volume on the RCU. Recovering the volume from this failure
involves releasing the pair, clearing the failure, recreating the pair, restoring
the original relationship of the volumes, and restoring normal host I/O.
Note: If the failed volume is an external volume, contact the Hitachi Data
Systems Support Center for assistance clearing the failure.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be performed by using either SN or CCI. Typically, you can complete these steps more quickly using CCI.
1. Using multipath software, make sure that the secondary storage system
is receiving host I/O.
2. Stop I/O from the host.
3. On the primary storage system, run the pairsplit -S CCI command on the P-VOL to release the HAM pair.
4. Clear the failure in the primary storage system P-VOL.
5. On the secondary storage system, create a TrueCopy pair from the
original S-VOL to the P-VOL. Data flow is from the secondary to primary
storage systems.


6. Continue by either changing the TrueCopy pair to a HAM pair (below) or recreating the HAM pair (next steps).
If changing the TrueCopy pair to a HAM pair, complete this step. If using SN, go directly to the SN step.
a. On the primary storage system, run the CCI horctakeover
command on the new TrueCopy S-VOL. This reverses the relationship
of the pair.
b. On the primary storage system, split the TrueCopy pair using the
Pairsplit-r dialog box.
c. On the primary storage system, use SN's Pairresync dialog box to change the TrueCopy pair to a HAM pair. For more information, see Changing TrueCopy pairs to HAM pairs on page 4-19.
7. If recreating the HAM pair using CCI:
a. On the secondary storage system, when the copy operation is
completed, run the following command to release the TrueCopy pair:
pairsplit -S
b. On the primary storage system, run the following command to create
the HAM pair:
paircreate
8. If recreating the HAM pair using SN:
a. On the secondary storage system, release the TrueCopy pair.
b. On the primary storage system, create the HAM pair.
9. On the primary and secondary storage systems, make sure that Type
shows HAM.
10.Using multipath software, vary the owner path online.
11.Restart I/O.
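Assuming the pair is recreated with CCI (rather than converted from TrueCopy in SN), the flow above can be sketched as follows. The device group name havgrp is a placeholder, and HAM-specific paircreate options such as the quorum disk ID are omitted; check the CCI reference for the exact options on your microcode:

```sh
pairsplit -g havgrp -S             # release the HAM pair (volumes go SMPL)
# ... clear the failure on the primary storage system P-VOL ...
# From the secondary system, create a TrueCopy pair so data flows back
# to the repaired primary volume (fence level is illustrative):
paircreate -g havgrp -vl -f never
pairsplit -g havgrp -S             # release the TrueCopy pair once copied
paircreate -g havgrp -vl -f never  # recreate the HAM pair from the MCU
pairdisplay -g havgrp -l           # confirm pair status before varying
                                   # the owner path online
```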

Recovering from secondary volume failure on the MCU


When this failure occurs, automatic system failover switches host I/O to the
primary volume on the RCU. Recovering the volume from this failure
involves releasing the pair, clearing the failure, recreating the pair, restoring
the original relationship of the volumes, and restoring normal host I/O.
Note: If the failed volume is an external volume, contact the Hitachi Data
Systems Support Center for assistance clearing the failure.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be done using either SN or CCI. Typically, these
steps can be completed more quickly using CCI.
1. Using multipath software, make sure that the secondary storage system
is receiving I/O.
2. Stop I/O from the host.
3. On the secondary storage system, release the HAM pair by running the
following CCI command on the P-VOL:


pairsplit -S
4. Clear the failure in the primary storage system P-VOL.
5. On the primary storage system, make sure that no other failures exist
and that it is ready to accept host I/O.
6. On the secondary storage system, create a TrueCopy pair from the
original S-VOL to the P-VOL. The data flow is from the secondary to
primary storage system.
7. You can continue by either changing the TrueCopy pair to a HAM pair
(below) or recreating the HAM pair (next step).
If changing the TrueCopy pair to a HAM pair, complete this step on the
primary storage system.
a. On the primary system, use CCI to run the horctakeover command
on the new TrueCopy S-VOL. This reverses the relationship and the
copy flow from S-VOL to P-VOL.
b. On the primary system, use SN to split the TrueCopy pair.
c. On the primary system, perform the SN pair resync operation to
change the TrueCopy pair to a HAM pair. See Changing TrueCopy
pairs to HAM pairs on page 4-19.
8. If recreating the HAM pair:
a. Using either CCI or SN, on the secondary storage system, release the TrueCopy pair (for CCI, use the pairsplit -S command).
b. On the primary storage system, create the HAM pair.
9. On both systems, in SN, make sure that Type shows HAM.
10.Using multipath software, vary the owner path online.
11.Restart I/O.

Recovering from primary volume failure on the RCU


This failure occurs after automatic system failover switches host I/O to the
primary volume on the RCU due to a failure of the primary volume on the
primary storage system. Because both systems have volume failures, all
host I/O stops. Recovering the volume from this failure involves releasing
the pair, clearing the failure (and any failures on the primary storage
system), recreating the pair, restoring the original relationship of the
volumes, and restoring normal host I/O.
Note: If the failed volume is an external volume, contact the Hitachi Data
Systems Support Center for assistance clearing the failure.
You use multipath software and SN to complete this task.
1. Make sure that I/O is stopped.
2. Using multipath software, vary the non-owner path offline.
3. On the secondary storage system, release the HAM pair by running the pairsplit -S CCI command on the P-VOL.
4. Clear the failure in the secondary storage system P-VOL.


5. Clear any other failures on the primary storage system so that it can accept host I/O.
6. On the primary storage system, create the HAM pair: if using CCI, run the paircreate command; if using SN, use the Paircreate(HAM) dialog box.
7. On both systems, make sure that Type shows HAM.
8. Using multipath software, vary the owner and non-owner paths online.
9. Restart host I/O.

Recovering from secondary volume failure on the RCU


When this failure occurs, host I/O continues to the primary volume on the MCU. Recovering the volume from this failure involves releasing the pair, clearing the failure, recreating the pair, and restarting host I/O. Unlike the other types of pair volume failures, host I/O
is not interrupted and failover does not occur.
Note: If the failed volume is an external volume, contact the Hitachi Data
Systems Support Center for assistance clearing the failure.
1. Using multipath software, make sure that the primary storage system is
receiving host I/O.
2. Using multipath software, vary the non-owner path offline.
3. On the primary storage system, run the pairsplit -S CCI command to release the pair.
4. Clear the failure in the secondary storage system S-VOL.
5. On the primary storage system, create the HAM pair: if using CCI, run the paircreate command; if using SN, use the Paircreate(HAM) dialog box.
6. On both systems, make sure that Type shows HAM.
7. Using multipath software, vary the non-owner path online.
8. Restart host I/O.

Recovery from quorum disk failure


There are two basic methods for recovering from quorum disk failure. In
some cases, the disk must be replaced. In other cases, you can recover
from the failure by resynchronizing the pair volume that is connected to the
disk.

Replacement of quorum disks on page 6-10

Recovery from failures using resynchronization on page 6-16


Replacement of quorum disks


Replacement of a quorum disk is done when the disk fails and the external
storage system that contains the disk cannot be recovered by
resynchronizing the volume pair connected to the disk.
The procedure you use varies depending on which system (primary or secondary) is receiving host I/O.
The procedures are:

Replacing a quorum disk when the MCU is receiving host I/O on page 6-10

Replacing a quorum disk when the RCU is receiving host I/O on page 6-11

Note: You can use the replacement procedures to replace a disk that is
connected to one or more volume pairs.

Replacing a quorum disk when the MCU is receiving host I/O


Replacing the disk involves preparing the systems and the volume pair (or
pairs) connected to the disk so that you can safely remove the failed disk
and replace it. After it is replaced, you reconnect the systems to the disk,
add the disk ID, and re-create the volume pair (or pairs).
When you replace a disk, data on the disk is erased and you cannot continue
using HAM.
Caution: To prevent host I/O from stopping, make sure you complete the
steps exactly as they are listed in the procedure.
1. Use the multipath software to vary the non-owner path offline.
2. On the primary storage system, release the pair by running the pairsplit -S CCI command on the P-VOL.
3. On the primary storage system, make sure that P-VOL status is SMPL.
4. On the primary storage system, delete the quorum disk ID. If a failure
in the quorum disk prevents deletion, forcibly delete it.
5. On the secondary storage system, delete the quorum disk ID. You can
forcibly delete it, if necessary.
6. On the primary storage system, run the Disconnect External Volumes
command on the quorum disk. If the connection to the quorum disk
cannot be released due to the failure of the quorum disk, skip the
Disconnect External Volumes operation.
7. On the secondary storage system, run the Universal Volume Manager
Disconnect External Volumes command on the quorum disk. If the
connection to the quorum disk cannot be released due to the failure of
the quorum disk, do not run this command.
8. Replace the quorum disk.
9. On both systems, run the Reconnect External Volumes command on
the quorum disk.


10.On both systems, add the quorum disk ID.


11.On the primary storage system, run the Paircreate(HAM) command to
create the HAM pair.
12.On both the primary and secondary storage systems, make sure that
Type shows HAM.
13.Use the multipath software to vary the non-owner path online.

Replacing a quorum disk when the RCU is receiving host I/O


Replacing the disk involves preparing the systems and the volume pair (or
pairs) connected to the disk so that you can safely remove the failed disk
and replace it. After it is replaced, you reconnect the systems to the disk,
add the disk ID, and re-create the volume pair (or pairs).
When you replace a disk, data on the disk is erased and you cannot continue
using HAM.
Caution: To prevent host I/O from stopping, make sure you complete the
steps exactly as they are listed in the procedure.
You can use SN or CCI to complete the recovery procedure.
Tip: Some of the steps can be done using either SN or CCI. Typically, you can complete these steps more quickly using CCI.
1. Stop I/O from the host.
2. On the secondary storage system, run the pairsplit -S CCI command on the P-VOL to release the HAM pair.
3. On the secondary storage system, make sure that P-VOL status is SMPL.
4. On the secondary storage system, create a TrueCopy pair. The data flow
is from secondary to primary storage system.
5. Perform one of the following operations:
If changing the TrueCopy pair to a HAM pair on the primary storage
system: Run the CCI horctakeover command on the S-VOL. This
reverses the relationship of the pair.
If using CCI to create a HAM pair again, on the primary storage
system, run the pairsplit command on the S-VOL and release the
TrueCopy pair.
If using SN, from the secondary storage system, release the
TrueCopy pair.
6. On the primary storage system, delete the quorum disk ID.
If the quorum disk ID cannot be deleted due to a disk failure, forcibly
delete the quorum disk.
7. On the secondary storage system, delete the quorum disk ID.
8. On both systems, run the Disconnect External Volumes command on
the quorum disk.


If the connection to the quorum disk cannot be released due to the failure of the quorum disk, skip the Disconnect External Volumes operation.
9. Replace the quorum disk.
10.On both systems, run the Reconnect External Volumes command on
the quorum disk.
11.On both systems, add the quorum disk ID.
12.Perform one of the following operations:
If changing the TrueCopy pair to a HAM pair: use SN to split the
TrueCopy pair, then change the pair to a HAM pair. See Changing
TrueCopy pairs to HAM pairs on page 4-19.
If using CCI and are recreating the HAM pair: On the primary storage
system, run the paircreate command to create the HAM pair.
If using SN and are recreating the HAM pair again: On the primary
storage system, use the paircreate(HAM) dialog box to create a
HAM pair.
13.On both the primary and secondary storage systems, make sure that
Type shows HAM.
14.Use the multipath software to vary the owner path online.
15.Restart I/O.

Recovery from power failure


You can recover the primary storage system or secondary storage system from power failures that cause the systems' backup batteries to discharge and cause the loss of differential data.
The recovery process varies depending on the type of system.

Primary storage system recovery on page 6-12

Secondary system recovery on page 6-14

Primary storage system recovery


There are two procedures you can use to recover the system following a
power failure that results in the initialization of the system memory. Which
task you use depends on whether or not host I/O updates have stopped.
You must use the correct procedure to ensure the recovery is successful.
The tasks are:


Recovering the system when the RCU is receiving host I/O updates on page 6-13.

Recovering the system when host I/O updates have stopped on page 6-14.


Recovering the system when the RCU is receiving host I/O updates
Recovering the primary storage system from power failure, when host I/O
updates continue after failover, involves the completion of tasks on both the
primary storage system and secondary storage system. Because failover
occurred, you must complete the steps required to restore the original
relationship of the volumes.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be done using either SN or CCI. Typically, you can complete these steps more quickly using CCI.
1. Verify that the S-VOL has the latest data and is being updated. Open the
Pair Operation window on the secondary storage system and check
that the VOL Access column shows Access (Lock).
2. Stop I/O from the host.
3. On the secondary storage system, release the HAM pair.
4. On the secondary storage system, delete the quorum disk ID.
5. On the secondary storage system, create a TrueCopy pair. The data flow
is from secondary to primary sites.
6. Format the quorum disk.
7. On the primary storage system, register the HAM secondary storage
system to the primary storage system.
8. You can continue by either changing the TrueCopy pair to a HAM pair
(below) or recreating the HAM pair (next steps).
If changing the TrueCopy pair to a HAM pair, complete this step.
a. When the TrueCopy operation finishes, run the horctakeover
command on the primary storage system S-VOL to reverse the P-VOL
and S-VOL.
b. On the primary and secondary storage systems, add the quorum
disk.
c. Using SN, on the primary system, split the TrueCopy pair.
d. Using SN, on the primary system, change the TrueCopy pair to a HAM
pair. See Changing TrueCopy pairs to HAM pairs on page 4-19 for
details.
9. If recreating the HAM pair using CCI:
a. When the copy operation in Step 5 completes, release the TrueCopy
pair.
b. On the primary and secondary storage systems, add the quorum
disk.
c. On the primary storage system, run the paircreate command to
create the HAM pair.
10.If recreating the HAM pair using SN:
a. On the secondary storage system, release the TrueCopy pair.
b. On the primary and secondary storage systems, add the quorum
disk.


c. On the primary storage system, create the HAM pair.


11.On both the primary and secondary storage systems, make sure that
Type shows HAM.
12.Using the multipath software, vary the owner and non-owner paths
online.
13.Restart host I/O.

Recovering the system when host I/O updates have stopped


Recovering the primary storage system from power failure, when host I/O
updates have stopped, involves the completion of tasks on both the primary
storage system and secondary storage system.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be performed using either SN or CCI. Typically, you can complete these steps more quickly using CCI.
1. Verify that the P-VOL has the latest data by opening the Pair Operation
window on the secondary storage system (S-VOL side) and checking
that the VOL Access column is empty (blank).
2. On the secondary storage system, release the HAM pairs.
3. On the secondary storage system, delete the quorum disk ID.
4. Format the quorum disk.
5. On the primary storage system, register the HAM secondary storage
system to the primary storage system.
6. On the primary and secondary storage systems, add the quorum disk.
7. On the primary storage system, create the HAM pair.
8. On both the primary and secondary storage systems, make sure that
Type shows HAM.
9. Vary the owner and non-owner paths online using the multipath
software.
10.Restart host I/O.

Secondary system recovery


There are two procedures you can use to recover the system after a power
failure that results in the initialization of the system memory. The procedure
you use depends on whether or not host I/O updates have stopped. You
must use the correct procedure to ensure the recovery is successful.
The procedures are:


Recovering the system when the P-VOL is receiving host updates on


page 6-15.

Recovering the system when host updates have stopped on page 6-15.


Recovering the system when the P-VOL is receiving host updates


Recovering the secondary storage system from power failure involves the
completion of tasks on both the primary storage system and secondary
storage system. Because failover does not occur, it is not necessary to
complete the steps required to restore the original relationship of the
volumes.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be performed using either SN or CCI. Typically,
these steps can be completed more quickly using CCI.
1. Verify that the P-VOL has the latest data and is being updated. Open the
Pair Operation window on the primary storage system and check that
the VOL Access column shows Access (Lock).
2. On the primary storage system, release the HAM pair.
3. On the primary storage system, delete the quorum disk ID.
4. Format the quorum disk.
5. On the secondary storage system, register the HAM primary storage
system to the secondary storage system.
6. On the primary and secondary storage systems, add the quorum disk.
7. On the primary storage system, create the HAM pair.
8. On both the primary and secondary storage systems, make sure that
Type shows HAM.
9. Using multipath software, vary the non-owner paths online.

Recovering the system when host updates have stopped


Recovering the secondary storage system from power failure when host I/O updates have stopped involves the completion of tasks on both the primary storage system and secondary storage system.
You use multipath software and either SN or CCI to complete the recovery
procedure.
Tip: Some of the steps can be performed using either SN or CCI. Typically, you can complete these steps more quickly using CCI.
1. Verify that the S-VOL has the latest data by opening the Pair Operation
window on the primary storage system (P-VOL side) and checking that
the VOL Access column is empty (blank).
2. On the primary storage system, release the HAM pairs.
3. On the primary storage system, delete the quorum disk ID.
4. On the secondary storage system, register the HAM primary storage
system to the secondary storage system.
5. On the secondary storage system, create a TrueCopy pair. The data flow
is from secondary to primary sites.


6. Format the quorum disk.


7. Continue by changing the TrueCopy pair to a HAM pair (substeps below) or by recreating the HAM pair (Step 8 or Step 9).
If changing the TrueCopy pair to a HAM pair, continue with this step; if using SN, go directly to the SN substeps, below.
a. When the TrueCopy pair creation operation is completed, run the
horctakeover CCI command on the primary storage system S-VOL.
This reverses the P-VOL/S-VOL relationship; the S-VOL on the
primary storage system now becomes the P-VOL.
b. On the primary and secondary storage systems, add the quorum
disk.
c. Using SN, on the primary system, split the TrueCopy pair.
d. Using SN, on the primary system, change the TrueCopy pair to a HAM
pair. See Changing TrueCopy pairs to HAM pairs on page 4-19 for
details.
8. If recreating the HAM pair using CCI:
a. When the copy operation in Step 5 completes, release the TrueCopy
pair.
b. On the primary and secondary storage systems, add the quorum
disk.
c. On the primary storage system, run the paircreate command to
create the HAM pair.
9. If recreating the HAM pair using SN:
a. Make sure the TrueCopy pair creation operation completed, then
release the TrueCopy pair.
b. On the primary and secondary storage systems, add the quorum
disk.
c. On the primary storage system, create the HAM pair.
10. On both primary and secondary storage systems, make sure that Type shows HAM.
11. Using multipath software, vary the owner and non-owner paths online.
12. Restart host I/O.
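For the CCI route through Steps 5 to 8, the sequence might look like this sketch. The group name ham_grp and quorum disk ID 0 are hypothetical, and the HAM paircreate options should be verified against your CCI documentation; the quorum disk operations are performed in SN.

```shell
# Step 5 created a TrueCopy pair from the secondary site, so data flows
# from the secondary storage system back to the primary storage system.

# Step 7a: when the initial copy completes, swap the P-VOL/S-VOL
# relationship so the volume on the primary storage system becomes the
# P-VOL again.
horctakeover -g ham_grp

# Step 8 (recreating the HAM pair with CCI instead of converting):
# release the TrueCopy pair, re-add the quorum disks in SN, then create
# the HAM pair from the primary storage system.
pairsplit -g ham_grp -S
paircreate -g ham_grp -f never -vl -jq 0
```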

Recovery from failures using resynchronization


In some cases, you can resynchronize a HAM pair to recover a volume pair,
quorum disk, or a volume pair and quorum disk from failure. The recovery
process can only be used if certain conditions exist.

Required conditions
You can only use this method if all of the following conditions exist:
- The HAM volumes are not blocked.
- If there was a quorum disk failure, the disk does not need to be replaced.
- Data was not lost as a result of shared memory initialization caused by a power outage.

Determining which resynchronization recovery procedure to use


Determining which recovery procedure to use involves analyzing the
information in the system information messages (SIM) and path failure
system messages to determine if all of the conditions required to use
resynchronization exist, then selecting the procedure to use.
You select the procedure to use based on a number of factors, including the
continuation of host I/O, failover, and the status of the secondary volume.

Prerequisites
Make sure you have identified the type of failure that occurred by using the
system information messages (SIM) and path failure system messages.
For more information, see Using system messages to check for failures on
page 6-4.
Note: The procedures for CCI and SN are the same.


Procedure
1. Analyze the failure messages to determine if all conditions required for resynchronization exist:
   - The HAM volumes are not blocked.
   - If there was a quorum disk failure, the disk does not need to be replaced.
   - Data was not lost as a result of shared memory initialization caused by a power outage.
2. Use the decision tree in the flowchart to select the correct set of procedures. Use the links below to go to the appropriate procedures.

Related topics

Recovering primary volume from ShadowImage secondary volume on page 6-19.

Recovering primary volume from ShadowImage secondary volume


In some cases, you can use resynchronization to recover the HAM P-VOL
from a ShadowImage S-VOL located on the secondary storage system.

Prerequisites
Make sure you have analyzed the failure messages to determine if all of the
conditions required to use resynchronization exist.
For more information, see Determining which resynchronization recovery
procedure to use on page 6-17.

Procedure
1. Using multipath software, vary the owner path offline.
2. Swap and suspend the HAM pair using the CCI pairsplit -RS
command.
3. Resynchronize the ShadowImage pair on the secondary storage system
in the opposite direction, using the Quick Restore or Reverse Copy
operations. The system copies the data in the ShadowImage S-VOL to
the HAM S-VOL.
4. Split the ShadowImage pair by running the pairsplit command.
5. Resynchronize the HAM pair in the opposite direction using the
pairresync -swaps command. The system copies the data in the S-VOL
to the P-VOL.
6. Using multipath software, vary the owner path online.
7. Using multipath software, vary the non-owner path offline.
8. When the HAM P-VOL and S-VOL are in PAIR status, swap and suspend
the pair using the pairsplit -RS command.
9. Resynchronize the HAM pair in the opposite direction using the
pairresync -swaps command.


10. Using multipath software, vary the non-owner path online.
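The CCI commands named in these steps can be collected into one sketch. The group names ham_grp (the HAM pair) and si_grp (the ShadowImage pair on the secondary storage system) are hypothetical, and -restore is shown as one way to request the reverse (Quick Restore) resynchronization.

```shell
# Step 2: swap and suspend the HAM pair.
pairsplit -g ham_grp -RS

# Step 3: copy the ShadowImage S-VOL data back onto the HAM S-VOL
# (reverse resynchronization on the secondary storage system).
pairresync -g si_grp -restore

# Step 4: split the ShadowImage pair when the restore completes.
pairsplit -g si_grp

# Step 5: copy the recovered S-VOL data to the HAM P-VOL.
pairresync -g ham_grp -swaps

# Steps 8 and 9: once the pair returns to PAIR status, swap and suspend,
# then resynchronize again to restore the original copy direction.
pairsplit -g ham_grp -RS
pairresync -g ham_grp -swaps
```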

Recovering from path failures


You can recover an owner path, non-owner path, or both paths from failure
by restoring the paths.
Caution: If failures occur in both the owner and non-owner paths, you
must restore the paths in the correct order to prevent unexpected failover
or failback. Make sure you restore the path to the storage system that
resumes host operations before you restore the other path.
The following table lists whether failover or failback will occur based on the
order in which you restore the paths (owner path first or non-owner path
first).
Volume receiving host I/O: Primary system P-VOL
- Order for restoring paths: 1. Owner path, 2. Non-owner path
  Will failover or failback occur? No.
- Order for restoring paths: 1. Non-owner path, 2. Owner path
  Will failover or failback occur? Yes. If data in the S-VOL is newer than in the P-VOL, failover occurs after the non-owner path is restored. However, failover does not occur if data in the S-VOL is older. Check this by viewing the VOL Access column on the Pair Operation window for the P-VOL. If you see Access (Lock), then the P-VOL is receiving host I/O.

Volume receiving host I/O: Secondary system P-VOL (original S-VOL)
- Order for restoring paths: 1. Owner path, 2. Non-owner path
  Will failover or failback occur? Yes, failover occurs after the owner path is restored. However, if data in the primary storage system S-VOL is older, failover does not occur. Check the VOL Access column on the Pair Operation window for the secondary storage system P-VOL. If you see Access (Lock), then this volume is receiving host I/O.
- Order for restoring paths: 1. Non-owner path, 2. Owner path
  Will failover or failback occur? No.

Allowing host I/O to an out-of-date S-VOL


If you attempt a recovery of a primary storage system, you might need to
use the S-VOL to continue host operations, even though its data is older
than the data on the P-VOL.


Contact the Hitachi Data Systems Support Center and ask to have the
appropriate system option turned ON. This enables the S-VOL to receive
host I/O.
Note: The system option applies to the whole storage system. You cannot
set this option to an individual pair or pairs.
To continue using the pair after the failure has been cleared, release and recreate the pair. The storage system builds the initial copy from the P-VOL to
the S-VOL.

Contacting the Hitachi Data Systems Support Center


The HDS customer support staff is available 24 hours a day, seven days a
week. If you need to call the Hitachi Data Systems Support Center, please
provide as much information about the problem as possible, including:
- The circumstances surrounding the error or failure.
- The content of any error messages displayed on the host systems.
- The content of any error messages displayed on SN.
- The SN configuration information (use the FD Dump Tool).
- The service information messages (SIMs), including reference codes and severity levels, displayed by SN.

If you need technical support, log on to the HDS Support Portal for contact information: https://hdssupport.hds.com.


7
Using HAM in a cluster system
There are specific software and configuration requirements for using HAM
in a cluster system.

- Cluster system architecture
- Required software
- Supported cluster software
- Configuration requirements
- Configuring the system
- Disaster recovery in a cluster system
- Restrictions


Cluster system architecture


The following diagram shows the architecture of a typical implementation of
a cluster system.

You connect an executing node and a standby node to MCU and RCU so that
the nodes can access both MCU and RCU. A heartbeat network must be set
up between the nodes.
If a failure occurs in the executing node, operations are continued by a failover to the standby node.

Required software
The following software is required to use HAM in a cluster system:

- Software that uses a SCSI-2 Reservation with the hypervisor platform, Virtual Machine File System 5 (VMFS 5).
- Cluster software.

Supported cluster software


HAM supports the following types of cluster software:

Software: Microsoft Failover Cluster (MSFC)
Operating systems: Windows Server 2008, Windows Server 2008 R2

Software: PowerHA (HACMP) 5.4.1 SP7
Operating systems: AIX 6.1 TL6

Configuration requirements
To ensure that HAM functions properly, make sure that:
- The same version of Hitachi Dynamic Link Manager is used in the MCU and RCU.
- If using MSFC, specify Node and File Share Majority in Quorum Mode.
- If using MSFC, use only basic disks.
- If using PowerHA (HACMP), use enhanced concurrent volume groups.

Configuring the system


Use the following workflow for setting up and creating a HAM pair. Some steps vary, depending on whether you use cluster software or software with SCSI-2 Reservation. This is noted in the step.
1. Install and connect the hardware.
2. Set up a heartbeat network between an executing node and a standby
node.
3. Install software.
4. Configure MCU and RCU.
5. Configure a quorum disk.
6. Configure host mode options. Host mode option 52 is required for the host groups where an executing node and a standby node reside when using cluster software with SCSI-2 Reservation.
Also, when using software with SCSI-2 Reservation, if host mode option 52 is enabled, confirm that the volume to be used as the S-VOL has neither a SCSI-2 Reservation nor a SCSI-3 Persistent Reservation. To check, see the section on host-reserved LUN windows in the Provisioning Guide.
7. Create the HAM pairs.
Note: If using software with SCSI-2 Reservation, create the HAM
pairs specifying Entire Volume for Initial Copy in the
Paircreate(HAM) dialog box. This setting results in a full initial copy;
all of the P-VOL data is copied to the S-VOL.
However, if you need to create the pair without copying P-VOL data to
the S-VOL, then create a TrueCopy pair and specify None for Initial
Copy. When the pair is created, split it, then change to a HAM pair.
8. Configure a cluster.
For more information, see the documentation for your software. If necessary, contact the Hitachi Data Systems Support Center.
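The note in Step 7 distinguishes a full initial copy from a no-copy TrueCopy pair that is later converted. The following is a hedged CCI sketch of the two options; the group names and the quorum disk ID are placeholders, and the HAM pair options should be verified against your CCI documentation.

```shell
# Option 1: create the HAM pair with a full initial copy (the CCI
# counterpart of Entire Volume); all P-VOL data is copied to the S-VOL.
paircreate -g ham_grp -f never -vl -jq 0

# Option 2: when the volumes are already identical, create a TrueCopy
# pair with no initial copy, split it, and then change it to a HAM pair
# in SN.
paircreate -g tc_grp -f never -vl -nocopy
pairsplit -g tc_grp
```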

Disaster recovery in a cluster system


Use the following workflow to recover from failure.
1. Confirm that the primary and secondary storage systems are operating normally.
If a failure has occurred in one of the systems, use the appropriate disaster recovery procedure.
2. Confirm that the software is operating normally.
If a failure has occurred in the software, see the software's documentation for recovery instructions.
3. Use the software to fail back from a standby node to an executing node.

Restrictions
The following restrictions apply when using HAM in a cluster system:
- You cannot use HAM pair volumes for SAN boot.
- You cannot store OS page files in HAM pair volumes.
- For Windows Server 2008 R2, HAM does not support the Hyper-V function and the Cluster Shared Volumes (CSV) function.
- If using software with SCSI-2 Reservation, and SCSI-2 Reservation is registered on the P-VOL, a pair cannot be created or resynchronized if SCSI-3 Persistent Reservation key information is also registered on the S-VOL.
  If SCSI-2 Reservation or SCSI-3 Persistent Reservation key information is not registered on the P-VOL, but SCSI-3 Persistent Reservation key information is registered on the S-VOL, the pair can be created or resynchronized, but it will then suspend because of host I/O.
  To perform these operations (including reverse resync), remove the host where the SCSI-3 Persistent Reservation key is registered, delete the key on the S-VOL, and then create or resynchronize the pair. If you cannot delete the key from the host, contact the Hitachi Data Systems Support Center.


8
Troubleshooting
HAM is designed to provide you with error messages so that you can quickly
and easily identify the cause of the error and take corrective action. Many
types of errors you encounter can be resolved by using fairly simple
troubleshooting procedures.

- Potential causes of errors
- Is there an error message for every type of failure?
- Where do you look for error messages?
- Basic types of troubleshooting procedures
- Troubleshooting general errors
- Suspended volume pair troubleshooting
- Recovery of data stored only in cache memory
- Contacting the Hitachi Data Systems Support Center


Potential causes of errors


Errors can be caused by power, hardware, or software failures, the use of
incompatible or unsupported software, firmware issues, incorrect
configuration settings, communication errors, and factors that result in the
suspension of volumes.

Is there an error message for every type of failure?


Although many types of failures have error messages, some do not. Some
failures are indicated by status information, incorrect display of system
data, or interruptions in operation.

Where do you look for error messages?


If you are using SN, the SN GUI displays all HAM error messages and error
codes. If you are using CCI, the CCI operation log file contains all HAM error
messages.

Related topics
For more information about error messages and error codes that are
displayed by SN, see Hitachi Storage Navigator Messages.

Basic types of troubleshooting procedures


The troubleshooting procedures you use to resolve issues you encounter depend on the error type. The main error types are general errors, suspended volume pairs, and data stored only in cache memory.

- Troubleshooting general errors on page 8-2
- Suspended volume pair troubleshooting on page 8-4
- Recovery of data stored only in cache memory on page 8-8

Troubleshooting general errors


Most of the errors that occur when using HAM are general errors. They can
occur in different parts of the system and can be caused by power,
hardware, or software failures, the use of incompatible or unsupported
software, firmware issues, incorrect configuration settings, or
communication errors.
The only type of error that is not a general error is a suspended volume pair.
The following table lists the general errors that can occur and the corrective
action to take for each error.

Error: The SN computer hangs, or HAM does not function properly.
Corrective action:
- Make sure that the problem is not being caused by the SN computer or Ethernet hardware or software. Restart the SN computer; this does not affect storage system operations.
- Make sure that HAM requirements and restrictions are met.
- Make sure that the primary and secondary storage systems are powered on and fully operational (NVS, cache).
- Check all input values and parameters to make sure that you entered the correct information in SN (for example, secondary storage system S/N, path parameters, P-VOL and S-VOL IDs).
- If you are using Performance Monitor, refrain from using it.

Error: An initiator channel-enable LED indicator (on the HUS VM control panel) is off or flashing.
Corrective action: Call the Hitachi Data Systems Support Center for assistance.

Error: The status of the pairs and/or data paths is not shown correctly.
Corrective action: Make sure the correct CU is selected.

Error: A HAM error message appears on the SN computer.
Corrective action: Resolve the error and then try the operation again.

Error: The data path status is not Normal.
Corrective action: Check the path status on the RCU Status dialog box and resolve the error. For more information, see the Hitachi TrueCopy User Guide.

Error: Quorum disk failure.
Corrective action: The primary or secondary storage system issues a SIM when a failure occurs in the quorum disk. After checking the SIM, review Disaster recovery on page 6-1, and specifically, see Recovery from quorum disk failure on page 6-9.

Error: Paircreate or pairresync operation resulted in a timeout error.
Corrective action:
- If the timeout error was caused by a hardware failure, a SIM is generated. Check the SIM, then call the Hitachi Data Systems Support Center. After you have corrected the problem, retry the operation.
- If no SIM is generated, wait 5 or 6 minutes, then check the pair status. If the status changed to PAIR, the operation completed after the timeout. If the pair status did not change to PAIR, a heavy workload might have prevented the operation from completing. In this case, retry when the storage system has a lighter workload.
- If a communication error between the SN computer and SVP occurs, correct the error. For more information about correcting errors between the SN computer and SVP, see the Hitachi Storage Navigator User Guide.

Error: There is a pinned track on a HAM volume or the quorum disk.
Corrective action: See Recovery of data stored only in cache memory on page 8-8.

Error: Cannot downgrade the firmware.
Corrective action: To downgrade the firmware to a version that does not support HAM, complete the steps described in Planned outages for system components on page 5-6, and then perform the downgrade.

Error: Though no failure occurred, the P-VOL is offline, or host I/O switched to the S-VOL.
Corrective action: When the storage system has a heavy workload, the multipath software takes the P-VOL offline. This does not mean that there is a failure; it is an expected occurrence. You can reduce the possibility of this happening by adding additional parity groups, cache memory, and/or disk adapters. However, first make certain that no failure has occurred on the primary or secondary storage system, quorum disk, the host, or with cabling.

Suspended volume pair troubleshooting


A suspended volume pair is a pair in which normal data replication is not
occurring between the primary and secondary volumes. Until the failure or
condition that caused the suspension is resolved, production data on the
primary volume is not copied to the secondary volume.
A number of failures or conditions can cause a volume pair to become suspended, including:
- Power supply failure
- Secondary storage system failure
- Communication failure between the primary and secondary storage systems
- I/O failures
- Quorum disk failure
- Incompatible host mode option settings
- Incomplete initial copy operation

The troubleshooting procedures you use to resolve suspended volume pairs depend on the interface you are using when the pair becomes suspended:
- When using SN
- When using CCI

The workflow for troubleshooting suspended pairs when using Storage Navigator

Use the following process to resolve suspended volume pairs when using SN:
- Checking the pair status on the Detailed Information dialog box.
- Following the troubleshooting steps based on the suspend type and the volume type (primary or secondary).

The following table lists the troubleshooting steps to use for each suspend type and volume type.

Suspend type: PSUE, by RCU (VOL type: P-VOL, S-VOL)
Error: The primary storage system detected an error condition at the secondary storage system, which caused the primary storage system to suspend the pair. The S-VOL suspend type is S-VOL Failure.
Corrective action: Clear the error condition at the secondary storage system or S-VOL, then resynchronize the pair from the primary storage system.

Suspend type: PSUE, S-VOL Failure (VOL type: P-VOL, S-VOL)
Error: The primary storage system detected an error during communication with the secondary storage system, or an I/O error during update copy. The suspend type for the S-VOL is usually S-VOL Failure.
Corrective action: Do the following:
- Check the data path status on the secondary storage system Status dialog box and clear any errors.
- Clear any error conditions on the secondary storage system or S-VOL.
- After errors are cleared, resynchronize from the primary storage system.

Suspend type: PSUE, S-VOL Failure (VOL type: P-VOL, S-VOL)
Error: The primary storage system detected a failure in the quorum disk.
Corrective action: Do the following:
- Recover from the quorum disk failure by following the recovery procedure defined by the external storage system.
- If SIMs related to Universal Volume Manager are issued, call the Hitachi Data Systems Support Center and follow the recovery procedure defined by the Hitachi Universal Volume Manager User Guide. After the failure is recovered, resynchronize the pair from the MCU (Pairresync).
- Because of the quorum disk failure, the multipath software will show that the S-VOL is offline. Resynchronize the pair as described above, and then make the path to the S-VOL online with the multipath software.

Suspend type: PSUE, S-VOL Failure (VOL type: P-VOL, S-VOL)
Error: The settings for host mode option 52 differ between ports on the primary and secondary storage systems.
Corrective action: Do one of the following:
- If you do not use cluster systems, disable host mode option 52.
- If you use cluster systems, enable host mode option 52 for the host groups where an executing node and a standby node belong.

Suspend type: PSUE, MCU IMPL (VOL type: P-VOL, S-VOL)
Error: The primary storage system could not find valid control information in its nonvolatile memory during IMPL. This condition only occurs if the primary storage system is without power for more than 48 hours (for example, a power failure and fully discharged backup batteries).
Corrective action: Resynchronize the pair. This results in an initial copy operation.

Suspend type: PSUE, Initial Copy Failed (VOL type: P-VOL, S-VOL)
Error: The pair was suspended before the initial copy completed. Data on the S-VOL is not identical to data on the P-VOL.
Corrective action: Do the following:
- Release the pair from the primary storage system.
- Clear all error conditions at the primary storage system, P-VOL, secondary storage system, and S-VOL.
- Restart the initial copy operation.

Troubleshooting suspended pairs when using CCI


The basic steps involved in resolving suspended volume pairs when using CCI include:
- Using the CCI operation log file to identify the cause of the error.
- Following the troubleshooting steps based on the SSB1 and SSB2 error codes (the codes are recorded in the log file).

Location of the CCI operation log file

The file is stored in the following directory by default:
/HORCM/log*/curlog/horcmlog_HOST/horcm.log
Where:
- * is the instance number.
- HOST is the host name.
Error codes appear to the right of the equal symbol (=).


Example log file


Two error codes are included in this example (B9E1 and B901).
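As a small, self-contained illustration of reading the codes, the following shell sketch parses a simplified, hypothetical horcm.log line; the real log carries more fields, but the SSB1/SSB2 codes still follow the equal sign.

```shell
# Hypothetical, simplified horcm.log entry; the two error codes appear
# to the right of the "=" sign, separated by a comma.
line='HORCM_102 [paircreate] SSB = B9E1,B901'

# Strip everything through "= " to isolate the code pair, then split it.
codes=${line#*= }
ssb1=${codes%%,*}
ssb2=${codes##*,}
echo "SSB1=$ssb1 SSB2=$ssb2"
```

Looking up the two codes in the error-code table that follows identifies the cause.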

The following table lists the codes for the errors that can occur when using CCI and the steps involved in troubleshooting the errors.

SSB1: 2E31, SSB2: 9100
Description: You cannot run the command because the user was not authenticated.

SSB1: B90A, SSB2: B901
Description: A HAM pair cannot be created or resynchronized because HAM does not support the option, or the command cannot be accepted in the current pair status.

SSB1: B90A, SSB2: B902
Description: A HAM pair cannot be created or resynchronized because the quorum disk is being blocked.

SSB1: B90A, SSB2: B904
Description: Reservations that are set on the P-VOL by the host have been propagated to the S-VOL by HAM. The cause is one of the following:
- The pair cannot be created because the reservations that were propagated earlier still remain in the volume. Wait, then try again.
- You cannot run the horctakeover and pairresync -swaps commands because the reservations that were propagated to the S-VOL remain in the volume. Wait, then try the command again. If the error occurs again, release the LUN reservation by the host, and then try the command again. For more information about how to release the LUN reservation by the host, see the Provisioning Guide.

SSB1: B90A, SSB2: B980
Description: A HAM pair cannot be created because the TrueCopy program product is not installed.

SSB1: B90A, SSB2: B981
Description: A HAM pair cannot be created because the HAM program product is not installed.

SSB1: B90A, SSB2: B982
Description: A HAM pair cannot be created or resynchronized because the specified quorum disk ID was not added to the secondary storage system.

SSB1: B90A, SSB2: B983
Description: The specified value of the quorum disk ID is out of range.

SSB1: B912, SSB2: B902
Description: A HAM pair cannot be suspended because of one of the following conditions:
- HAM does not support the pairsplit -rw command.
- The operation cannot be accepted in the current pair status.

SSB1: D004, SSB2: CBEF
Description: One of the following causes applies:
- A HAM pair cannot be created or resynchronized because of one of the following causes:
  - The specified quorum disk ID is not added to the secondary storage system.
  - The specified quorum disk ID does not match the quorum disk ID that the S-VOL uses.
  - The same quorum disk ID is assigned to different external volumes separately by the primary and secondary storage systems.
  - On the paired CU side, the specified quorum disk ID is configured to a different external volume.
  - The quorum disk is blocked, or the path between the secondary storage system and the quorum disk has a failure.
- The pair cannot be resynchronized using the Sync copy mode because the S-VOL has the HAM attribute.

Related topics
For more information about troubleshooting other types of errors, see the
Hitachi TrueCopy User Guide.

Recovery of data stored only in cache memory


When a hardware failure occurs while the storage system is running, data
in cache memory may not be written to data drives. In this case, the data
stored only in cache memory is referred to as a pinned track. Pinned tracks
can occur on volume pair drives and on quorum disks. You can recover
pinned tracks.

Pinned track recovery procedures


You can use one of the following procedures to recover pinned tracks from volume pairs and quorum disks. The procedure you use depends on the type of data drive:
- Recovering pinned tracks from volume pair drives on page 8-8.
- Recovering pinned tracks from quorum disks on page 8-9.

Recovering pinned tracks from volume pair drives


When you recover pinned tracks from volume pair drives, you release the
volume pair before you recover the pinned track. This is required because
the primary storage system automatically suspends the volume pair when
a pinned track occurs on a P-VOL or S-VOL.
1. On the primary storage system, release the pair containing the volume
with the pinned track.
2. Recover data from the pinned track.


For more information about recovering data from pinned tracks, see the
pinned track recovery procedures for your operating system or contact
your Hitachi Data Systems representative for assistance.
3. Connect to the primary storage system.
4. Resynchronize the pair using the Entire Volume initial copy option.

Recovering pinned tracks from quorum disks


When you recover pinned tracks from quorum disks, you release all volume
pairs that use the disk before you recover the pinned track (the data in
cache memory).
Note: Host tools cannot be used to recover a pinned track on the quorum disk.

1. Connect to the primary storage system and release all the pairs that use
the quorum disk with the pinned track.
2. On the primary and secondary storage systems, delete the quorum disk
ID. If you cannot release the ID, forcibly delete it.
For more information about deleting quorum disk IDs by system
attribute, see Deleting quorum disk IDs by system attribute (forced
deletion) on page 5-4.
3. Format the quorum disk and recover data from the pinned track.
For more information about recovering pinned tracks, see the recovery
procedures for your operating system or contact your Hitachi Data
Systems representative for assistance.
4. On the primary and secondary storage systems, add the quorum disk ID.
5. On the primary storage system, recreate the released volume pair (or
pairs) using the Entire Volume initial copy option.

Contacting the Hitachi Data Systems Support Center


The HDS customer support staff is available 24 hours a day, seven days a
week. If you need to call the Hitachi Data Systems Support Center, please
provide as much information about the problem as possible, including the following:
- The circumstances surrounding the error or failure.
- The content of any error messages displayed on the host systems.
- The content of any error messages displayed on SN.
- The SN configuration information (use the FD Dump Tool).
- The service information messages (SIMs), including reference codes and severity levels, displayed by SN.

If you need technical support, log on to the HDS Support Portal for contact information: https://hdssupport.hds.com.


A
HAM GUI reference
This topic describes HAM windows, dialog boxes, items, and behaviors in
SN.
In addition, information related to HAM systems is also shown in the following windows and is documented in the Hitachi TrueCopy User Guide:
- The RCU Operation window
- The Usage Monitor window
- The History window
- The System Option window
- The Quorum Disk Operation window

Pair Operation window

Quorum Disk Operation window


Pair Operation window


Use this window to view HAM and TrueCopy pairs. Filter the list to show only
HAM pairs by clicking Display Filter.
You can perform these procedures from the window:
- Creating a HAM pair on page 4-11
- Splitting pairs on page 4-15
- Resynchronizing pairs on page 4-16
- Releasing a pair on page 4-18
- Checking pair status on page 4-5

Item: Tree
Description: Shows the connected storage system, the LDKC, the CU grouping, the CUs, ports, and host groups. Select the desired CU grouping, CU, port, or host group to show the related LUs. Only one CU grouping, CU, port, or host group can be selected.

Item: List
Description: Shows detailed pair information about the local storage system. To sort the items that are shown in ascending/descending order, click the column heading. To perform HAM operations such as creating/splitting/resynchronizing a HAM pair, right-click a row in the list. If a volume has multiple LU paths, each LU path appears in a separate row. However, when you select a CU group or a CU in the tree, only one LU path per volume is shown in the list. For more information about the list, see the following table.

Used Capacity

The capacity of the volume used in TrueCopy pairs. Licensed capacity is
enclosed in parentheses. For HAM pairs, Unlimited is shown.

Display Filter Click to open the Display Filter dialog box, from which you can narrow
down the list of volumes.
For more information about the Display Filter dialog box, see the
Hitachi TrueCopy User Guide.
Export

Saves the HAM pair information currently in view, which matches the
settings in the Display Filter dialog box. Information is saved in a text
file. You can use the file to review the progress of your HAM operations.
For more information about this snapshot function, see the Hitachi
TrueCopy User Guide.

Preview

Shows the settings to be saved to the system when you click Apply. You
can change or delete the settings by right-clicking.

Apply

Saves the operation or changes to the storage system. If an error
occurs, an error code appears in the Error Code column in Preview. To
show an error message, select one setting, right-click, and click Error
Detail. After you read the error message, click OK to close it.

Cancel

Cancels all the settings in Preview.

The S/N, ID, and Fence columns can be blank while a HAM pair is in
transition to the SMPL status. To show the latest information, refresh the
window by clicking File and then Refresh on the menu bar of SN windows.


Item
VOL

Description
An icon shows whether the volume is assigned to a pair: SMPL (not
assigned to a pair), P-VOL, or S-VOL.
The LU path (a path from a host to a volume) information appears to
the right of the icon: the port number, the host group number, and the
LUN (LDKC:CU:LDEV), separated by hyphens.
The following symbols might appear at the end of the LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.

Status

In the SN window, pair status is shown in the format [pair status in
Storage Navigator/pair status in CCI]. If the pair status name in
Storage Navigator and the pair status name in CCI are the same, the
CCI status name is not shown. For more information, see the table of
Pair status values on page 4-6.

S/N(LDKC)

Serial number of the paired storage system.

ID

SSID of the paired storage system, or Path group ID that you entered
when registering the RCU.

Paired VOL

Information about the path from the host to the paired volume appears
as port number, the host group number, and LUN (LDKC:CU:LDEV),
separated by hyphens.
The symbols used in the VOL column might appear at the end of the
LDEV number.

Type

Pair type, HAM or TC.

Fence

The fence level, which is a TC setting. Does not apply to HAM pairs.

Diff

The unit of measurement used for storing differential data (by cylinder,
track, or auto).

CTG

Shows nothing for HAM volumes.

Sync Rate

During the copy process, shows the percentage of completion of the
copy operation. While the pair is split, shows the concordance ratio of
the specified local volume.

Quorum Disk ID

The quorum disk ID assigned to the HAM pair.

VOL Access

Shows which pair volume is online and thus receiving I/O from the host.
For more information, see Possible VOL Access values for pairs on page
A-5.

CLPR

The number and name of the cache logical partition that the local
volume belongs to.
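The VOL and Status columns use compact encodings: port, GID, and LUN with an optional # or X flag on the LDEV number, and a combined SN/CCI status name. As a rough sketch of how a script might post-process values copied from this list (the exact cell layout assumed here is illustrative, not a documented export format):

```python
import re

def parse_vol(cell: str) -> dict:
    """Decode a VOL cell such as 'CL1-A-0-12 (00:00:3C #)'.

    Per the column legend above: '#' marks an external volume and
    'X' marks a Dynamic Provisioning virtual volume.
    """
    m = re.match(r"(\S+)-(\d+)-(\d+)\s*\((\w+:\w+:\w+)\s*([#X]?)\)", cell)
    port, gid, lun, ldev, flag = m.groups()
    return {
        "port": port, "gid": int(gid), "lun": int(lun), "ldev": ldev,
        "external": flag == "#", "dp_vol": flag == "X",
    }

def parse_status(cell: str) -> tuple[str, str]:
    """Split a combined 'SN-status/CCI-status' value.

    When only one name is shown, the SN and CCI names are the same,
    so the single name applies to both.
    """
    sn, sep, cci = cell.partition("/")
    return (sn, cci) if sep else (sn, sn)
```

For example, `parse_status("PSUS/SSWS")` separates the Storage Navigator name from the CCI name, while `parse_status("PAIR")` returns the same name for both.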


You can determine which volume is receiving host I/O (the online
volume) by checking the VOL Access values in the Pair Operation
window. The particular combination of values (one for the P-VOL and
one for the S-VOL) indicates which volume is the online volume.

Possible VOL Access values for pairs


The following table lists the possible combinations of VOL Access values for
a volume pair.
Pair status of P-VOL** | Pair status of S-VOL** | VOL Access of P-VOL | VOL Access of S-VOL | Online volume
COPY | COPY | Access (No Lock) | Blank | P-VOL
PAIR | PAIR | Blank | Blank | P-VOL
PSUS | SSWS | Blank | Access (Lock) | S-VOL

For the remaining split and suspended combinations (PSUS, PSUE, PDUB, and
SMPL combinations, including rows shown with Any*), the online volume
depends on the particular combination of VOL Access values: the volume
shown with Access (Lock) or Access (No Lock) is usually the online volume,
and where the table shows Any*, either volume can be used as the online
volume.

* The S-VOL pair status is forcibly changed to SSWS by the swap-suspend
operation, or is changed from SSWS to PSUS by the rollback operation. This
status can be seen when you try to use the volume that has the older data.
You can use either volume as the online volume.
** Storage Navigator pair statuses are shown in the format SN status/CCI
status. If the two statuses are the same, the CCI status is not shown. For
more information on pair status definitions, see Pair status values on
page 4-6.
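As a loose illustration, the unambiguous rows of the table above could be encoded in a monitoring script like this (the tuple keys and the use of an empty string for a Blank cell are assumptions of this sketch, not part of the product):

```python
# Lookup implied by the VOL Access table: only the unambiguous
# combinations are encoded. For everything else the footnote applies,
# and either volume may be used as the online volume ("Any").
ONLINE_VOLUME = {
    # (P-VOL status, S-VOL status, P-VOL access, S-VOL access)
    ("COPY", "COPY", "Access (No Lock)", ""): "P-VOL",
    ("PAIR", "PAIR", "", ""): "P-VOL",
    ("PSUS", "SSWS", "", "Access (Lock)"): "S-VOL",
}

def online_volume(p_status: str, s_status: str,
                  p_access: str, s_access: str) -> str:
    # "" stands for a Blank cell in the Pair Operation window
    return ONLINE_VOLUME.get((p_status, s_status, p_access, s_access), "Any")
```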

Detailed Information dialog box


Use this dialog box to view details for the selected HAM pair.


Item
P-VOL and S-VOL

Description
Port - GID - LUN (LDKC number: CU number: LDEV number).
The following symbols might appear at the end of the LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.

Also shown: the emulation type, the capacity in MB (to two decimal
places), and the number of blocks.

CLPR

The CLPR number and name of the local volume.

Group Name

Host group name where the local volume is connected.

Pair Status

HAM pair status. The split/suspend type is shown as well, if the pair is
split or suspended. For information about the HAM pair statuses, see
Pair status values on page 4-6.


Item
Pair Synchronized

Description
The percentage of synchronization or consistency between
the pair volumes.
If you are viewing S-VOL information, the percentage for all
pair statuses except COPY is shown.
If you are viewing P-VOL information, the percentage for all
pair statuses is shown.
Note: If the operation is waiting to start, (Queueing) is
shown.

S/N and ID

Serial number and path group ID of the paired storage system.
If you selected a P-VOL to open this dialog box, information about the
RCU appears. If you selected an S-VOL, information about the MCU
appears.

Controller ID

Controller ID and model name of the paired storage system.

HUS VM controller ID: 19

MCU-RCU Path

Channel type of the path interface between the storage systems.

Update Type

Pair type. HAM indicates that this pair is a HAM pair.

Copy Pace

1-15 tracks (disabled when the status becomes PAIR).

Initial Copy Priority

1-256 (disabled when the status becomes PAIR).

P-VOL Fence Level

Not used for HAM pairs.

S-VOL Write

Not Received appears; write operations to the S-VOL are rejected.

Paired Time

Date and time that the pair was created.

Last Updated Time

Date and time that pair status was last updated.

Pair Copy Time

Time taken to copy the pair. The time shown for this item differs from
the time shown in Copy Time on the History window, as follows:
- Pair Copy Time: Time from step 3 to step 4 of the pair-creation
  procedure.
- Copy Time: Time from step 1 to step 4 of the pair-creation
  procedure.
The pair-creation procedure:
1. The MCU receives a request to create a pair.
2. The MCU receives a request to start the paircreate operation.
3. The paircreate operation starts, according to the conditions of
   initial copy priority and maximum initial copy activities.
4. The paircreate operation completes (the progress of the operation
   reaches 100%).

Difference Management

The unit of measurement used for storing differential data.
Values: Cylinder or Track

Quorum Disk ID

The quorum disk ID assigned to the HAM pair.


Item

Description

VOL Access

Shows the online volume.


For more information, see Possible VOL Access values for
pairs on page A-5.

Sync CT Group

The consistency group number for a Synchronous-C or Multi-C pair.
If the consistency group number is bracketed (for example, [1]), the
consistency group consists of multiple primary and secondary storage
systems.

Refresh the Pair Operation window after this dialog box is closed

Select to refresh the Pair Operation window after the Detailed
Information dialog box closes.

Previous

Shows pair status for the pair in the row above.

Next

Shows pair status for the pair in the row below.

Refresh

Updates the information.

Close

Closes the dialog box.
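The relationship between Pair Copy Time and Copy Time comes down to which step starts the clock. A small sketch with invented timestamps for the four pair-creation steps (the times below are examples only):

```python
from datetime import datetime, timedelta

# Invented timestamps for the four steps of the pair-creation procedure
# described above.
steps = {
    1: datetime(2013, 5, 1, 10, 0, 0),   # MCU receives the create request
    2: datetime(2013, 5, 1, 10, 0, 5),   # MCU receives the paircreate start request
    3: datetime(2013, 5, 1, 10, 2, 0),   # paircreate operation starts
    4: datetime(2013, 5, 1, 10, 30, 0),  # paircreate operation reaches 100%
}

copy_time = steps[4] - steps[1]       # shown as Copy Time on the History window
pair_copy_time = steps[4] - steps[3]  # shown as Pair Copy Time in this dialog box
```

With these example times, Copy Time is 30 minutes while Pair Copy Time is only 28 minutes, because the operation waited about two minutes for its initial-copy scheduling slot.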

Paircreate(HAM) dialog box


Use this dialog box to create a pair.
For instructions, see Creating a HAM pair on page 4-11.


Item
P-VOL

Description
Shows the port number, host group number (GID), LUN (LDKC number:
CU number: LDEV number), CLPR number, and CLPR name of the
selected LU. This item shows the P-VOL with the lowest LUN when you
create multiple pairs at a time. The following symbols might appear at
the end of the LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.


Item

Description

S-VOL

The port number, GID, and LUN for the pair's S-VOL. A port number
entered directly can be specified with two characters, for example, 1A
for CL1-A. Port numbers can be entered in lowercase or uppercase
characters.

RCU

RCU for the pairs being created.

P-VOL Fence Level

Not used for HAM. Never is the default, meaning the P-VOL is never
fenced.

Initial Copy Parameters

Initial Copy
- Entire Volume: Copies all P-VOL data except alternate tracks, which
  are used for diagnosis and are not needed for the S-VOL.
- None: Does not copy any P-VOL data to the S-VOL.

Copy Pace

Desired number of tracks to be copied at one time (1-15) during the
initial copy operation. The default setting is 15.

Priority

Scheduling order of the initial copy operations (1-256) if the number of
requested initial copy operations is greater than the maximum initial
copy activity setting on the System Option window. The highest priority
is 1, the lowest priority is 256, and the default setting is 32.

Difference Management

The unit of measurement used for storing differential data (by cylinder,
track, or auto). The default setting is Auto.

HAM Parameters

Quorum Disk ID
The quorum disk ID to be assigned to the HAM pairs. The list shows the
quorum disk ID and the RCU information such as the serial number,
controller ID, and model name.
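As a loose illustration of the parameter ranges described above (the function name and dictionary layout are invented for this sketch; the limits themselves are the ones stated for the dialog box):

```python
# Hypothetical pre-check mirroring the Paircreate(HAM) dialog-box limits:
# Copy Pace 1-15 tracks (default 15), Priority 1-256 (default 32),
# Difference Management one of Cylinder, Track, or Auto (default Auto).
def validate_paircreate_params(copy_pace: int = 15,
                               priority: int = 32,
                               diff: str = "Auto") -> dict:
    if not 1 <= copy_pace <= 15:
        raise ValueError("Copy Pace must be 1-15 tracks")
    if not 1 <= priority <= 256:
        raise ValueError("Priority must be 1-256")
    if diff not in ("Cylinder", "Track", "Auto"):
        raise ValueError("Difference Management must be Cylinder, Track, or Auto")
    return {"copy_pace": copy_pace, "priority": priority, "diff": diff}
```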

Pairsplit-r dialog box


Use this dialog box to split a pair.
For instructions, see Splitting pairs on page 4-15.


Item
Volume

Description
Port - GID - LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.

S-VOL Write

Disable appears. The S-VOL of this pair will reject write I/Os while the
pair is being split.

Suspend Kind

Setting for whether or not the system continues host I/O writes to the
P-VOL while the pair is split. (If you run the command from the S-VOL,
this item is disabled.)

Pairresync dialog box


Use this dialog box to resynchronize a pair.
For instructions, see Resynchronizing pairs on page 4-16.


Item
P-VOL

Description
Port - GID - LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.

P-VOL Fence Level

Not used for HAM. Never is the default, meaning the P-VOL is never
fenced.

Copy Pace

The number of tracks (1-15) for the resync operation (default = 15).
Initial copy parameter.

Priority

Scheduling order for the resync operation (1-256, default = 32). Initial
copy parameter.

Change attribute to HAM

- Yes: Changes a TrueCopy Sync pair to a HAM pair.
- No: Does not change a TrueCopy Sync pair to a HAM pair.
For more information, see Changing TrueCopy pairs to HAM pairs on
page 4-19.

Quorum Disk ID

Used to specify a quorum disk ID when changing a TrueCopy Sync pair
to a HAM pair. The list shows the quorum disk ID and the RCU
information such as the serial number, controller ID, and model name.


Pairsplit-S dialog box


Use this dialog box to release a pair.
For instructions, see Releasing a pair on page 4-18.

Item
Volume

Description
Port - GID - LUN (LDKC number: CU number: LDEV number) of the
selected volume. The following symbols might appear at the end of the
LDEV number:
- # (e.g., 00:00:3C #): Indicates the volume is an external volume.
  For more information about external volumes, see the Hitachi
  Universal Volume Manager User Guide.
- X (e.g., 00:00:3C X): Indicates the volume is a Dynamic Provisioning
  virtual volume. For more information about virtual volumes, see the
  Provisioning Guide.

Delete Pair by Force

- Yes: The pair is released even if the primary storage system cannot
  communicate with the secondary storage system.
- No: The pair is released only when the primary storage system can
  change the pair status for both the P-VOL and S-VOL to SMPL.
When the status of the pairs to be released is SMPL, the default setting
is Yes (cannot be changed). Otherwise, the default setting is No.
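The default rule for the Delete Pair by Force setting can be sketched as follows (the function name is invented for this illustration):

```python
# Default for the Delete Pair by Force option, per the rule above:
# pairs already in SMPL status default to Yes (and cannot be changed);
# all other statuses default to No.
def delete_by_force_default(pair_status: str) -> bool:
    return pair_status == "SMPL"
```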

Quorum Disk Operation window


Use this window to perform quorum disk operations.


Item
Tree

Description
Shows the connected storage system, the LDKC, and Used and Not
Used.
- When Storage System or LDKC is selected, the associated quorum
  disk IDs are shown in the list area.
- When Used or Not Used is selected, the used or unused quorum disk
  IDs in the system are shown in the list area.

List

The quorum disk ID list shows quorum disk information. You can sort
the list by column in ascending or descending order. The list contains
the following items:
- Quorum Disk ID: Quorum disk ID.
- Quorum Disk (LDKC:CU:LDEV): LDEV number for the quorum disk.
  # appears at the end of the LDEV number, indicating that the volume
  is an external volume.
- Paired S/N: Serial number of the paired storage system.
- Controller ID: Controller ID and model name of the paired storage
  system.


Preview

Shows any changes you have made. You can alter or delete changes by
right-clicking a row in Preview.

Apply

When clicked, saves changes.

Cancel

When clicked, cancels changes.


Add Quorum Disk ID dialog box


Use this dialog box to add a quorum disk ID.
For instructions, see Adding the ID for the quorum disk to the storage
systems on page 3-7.

Item
Description

Quorum Disk
Where the external volume to be used as a quorum disk is selected.

RCU
Where the paired CU is selected. The list shows the RCU information
registered in CU Free. Multiple RCUs with the same serial number and
controller ID, but different path group IDs, appear as one RCU.


Glossary
This glossary defines the special terms used in this document.

#
2DC
two-data-center. Refers to the local and remote sites, or data centers, in
which TrueCopy (TC) and Universal Replicator (UR) combine to form a
remote replication configuration.
In a 2DC configuration, data is copied from a TC primary volume at the
local site to the UR master journal volume at an intermediate site, then
replicated to the UR secondary volume at the remote site. Since this
configuration side-steps the TC secondary volume at the intermediate
site, the intermediate site is not considered a data center.

3DC
three-data-center. Refers to the local, intermediate, and remote sites, or
data centers, in which TrueCopy and Universal Replicator combine to
form a remote replication configuration.
In a 3DC configuration, data is copied from a local site to an
intermediate site and then to a remote site (3DC cascade configuration),
or from a local site to two separate remote sites (3DC multi-target
configuration).

A
alternate path
A secondary path (port, target ID, LUN) to a logical volume, in addition
to the primary path, that is used as a backup in case the primary path
fails.


array
Another name for a RAID storage system.

array group
See RAID group.

async
asynchronous

at-time split
Consistency group operation that performs multiple pairsplit operations
at a pre-determined time.

audit log
Files that store a history of the operations performed from SN and the
service processor (SVP), commands that the storage system received
from hosts, and data encryption operations.

B
base emulation type
Emulation type that is set when drives are installed. Determines the
device emulation types that can be set in the RAID group.

BC
business continuity

BCM
Business Continuity Manager

blade
A computer module, generally a single circuit board, used mostly in
servers.

BLK, blk
block

bmp
bitmap

C
C/T
See consistency time (C/T).


ca
cache

cache logical partition (CLPR)


Consists of virtual cache memory that is set up to be allocated to
different hosts in contention for cache memory.

capacity
The amount of data storage space available on a physical storage device,
usually measured in bytes (MB, GB, TB, etc.).

cascade configuration
In a 3DC cascade configuration for remote replication, data is copied
from a local site to an intermediate site and then to a remote site using
TrueCopy and Universal Replicator. See also 3DC.
In a Business Copy Z cascade configuration, two layers of secondary
volumes can be defined for a single primary volume. Pairs created in the
first and second layer are called cascaded pairs.

cascade function
A ShadowImage function for open systems where a primary volume
(P-VOL) can have up to nine secondary volumes (S-VOLs) in a layered
configuration. The first cascade layer (L1) is the original ShadowImage
pair with one P-VOL and up to three S-VOLs. The second cascade layer
(L2) contains ShadowImage pairs in which the L1 S-VOLs are
functioning as the P-VOLs of layer-2 ShadowImage pairs that can have
up to two S-VOLs for each P-VOL.
See also root volume, node volume, leaf volume, level-1 pair, and
level-2 pair.

cascaded pair
A ShadowImage pair in a cascade configuration. See cascade
configuration.

shared volume
A volume that is being used by more than one replication function. For
example, a volume that is the primary volume of a TrueCopy pair and
the primary volume of a ShadowImage pair is a shared volume.

CCI
Command Control Interface

CFL
Configuration File Loader. A SN function for validating and running
scripted spreadsheets.


CFW
cache fast write

CG
See consistency group (CTG).

CTG
See consistency group (CTG).

CH
channel

channel path
The communication path between a channel and a control unit. A
channel path consists of the physical channel path and the logical path.

CHAP
challenge handshake authentication protocol

CL
cluster

CLI
command line interface

CLPR
cache logical partition

cluster
Multiple-storage servers working together to respond to multiple read
and write requests.

command device
A dedicated logical volume used only by Command Control Interface and
Business Continuity Manager to interface with the storage system. Can
be shared by several hosts.

configuration definition file


Defines the configuration, parameters, and options of Command Control
Interface operations. A text file that defines the connected hosts and the
volumes and groups known to the Command Control Interface instance.


consistency group (CG, CTG)
A group of pairs on which copy operations are performed
simultaneously; the pair statuses change at the same time. See also
extended consistency group (EXCTG).

consistency time (C/T)


Shows a time stamp to indicate how close the target volume is to the
source volume. C/T also shows the time stamp of a journal and extended
consistency group.

controller
The component in a storage system that manages all storage functions.
It is analogous to a computer and contains processors, I/O devices,
RAM, power supplies, cooling fans, and other sub-components as
needed to support the operation of the storage system.

copy-on-write
Point-in-time snapshot copy of any data volume within a storage
system. Copy-on-write snapshots only store changed data blocks,
therefore the amount of storage capacity required for each copy is
substantially smaller than the source volume.

copy pair
A pair of volumes in which one volume contains original data and the
other volume contains the copy of the original. Copy operations can be
synchronous or asynchronous, and the volumes of the copy pair can be
located in the same storage system (local copy) or in different storage
systems (remote copy).
A copy pair can also be called a volume pair, or just pair.

COW
copy-on-write

COW Snapshot
Hitachi Copy-on-Write Snapshot

CTG
See consistency group (CTG).

CTL
controller

CU
control unit


currency of data
The synchronization of the volumes in a copy pair. When the data on the
secondary volume (S-VOL) is identical to the data on the primary
volume (P-VOL), the data on the S-VOL is current. When the data on the
S-VOL is not identical to the data on the P-VOL, the data on the S-VOL
is not current.

CYL, cyl
cylinder

cylinder bitmap
Indicates the differential data (updated by write I/Os) in a volume of a
split or suspended copy pair. The primary and secondary volumes each
have their own cylinder bitmap. When the pair is resynchronized, the
cylinder bitmaps are merged, and the differential data is copied to the
secondary volume.
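The resync merge described in the cylinder bitmap entry can be sketched with toy bitmaps (the 8-cylinder volume and bit layout are invented for illustration):

```python
# Each bit marks a cylinder changed by write I/Os while the pair was
# split. On resync, the P-VOL and S-VOL bitmaps are merged (OR-ed),
# and every marked cylinder is copied to the secondary volume.
pvol_bitmap = 0b0010_1100  # cylinders 2, 3, 5 changed on the P-VOL
svol_bitmap = 0b0100_0100  # cylinders 2, 6 changed on the S-VOL

merged = pvol_bitmap | svol_bitmap
cylinders_to_copy = [i for i in range(8) if merged >> i & 1]
```

With these toy values, the merged bitmap marks cylinders 2, 3, 5, and 6 as the differential data to copy.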

D
DASD
direct-access storage device

data consistency
When the data on the secondary volume is identical to the data on the
primary volume.

data path
The physical paths used by primary storage systems to communicate
with secondary storage systems in a remote replication environment.

data pool
One or more logical volumes designated to temporarily store original
data. When a snapshot is taken of a primary volume, the data pool is
used if a data block in the primary volume is to be updated. The original
snapshot of the volume is maintained by storing the to-be-changed data
blocks in the data pool.

DB
database

DBMS
database management system


delta resync
A disaster recovery solution in which TrueCopy and Universal Replicator
systems are configured to provide a quick recovery using only
differential data stored at an intermediate site.

device
A physical or logical unit with a specific function.

device emulation
Indicates the type of logical volume. Mainframe device emulation types
provide logical volumes of fixed size, called logical volume images
(LVIs), which contain EBCDIC data in CKD format. Typical mainframe
device emulation types include 3390-9 and 3390-M. Open-systems
device emulation types provide logical volumes of variable size, called
logical units (LUs), that contain ASCII data in FBA format. The typical
open-systems device emulation type is OPEN-V.

DEVN
device number

DFW
DASD fast write

DHCP
dynamic host configuration protocol

differential data
Changed data in the primary volume not yet reflected in the copy.

disaster recovery
A set of procedures to recover critical application data and processing
after a disaster or other failure.

disk array
Disk array, or just array, is another name for a RAID storage system.

disk controller (DKC)


The hardware component that manages front-end and back-end storage
operations. The term DKC is sometimes used to refer to the entire RAID
storage system.

DKC
disk controller. Can refer to the RAID storage system or the controller
components.


DKCMAIN
disk controller main. Refers to the microcode for the RAID storage
system.

DKP
disk processor. Refers to the microprocessors on the back-end director
features of the Universal Storage Platform V.

DKU
disk unit. Refers to the cabinet (floor model) or rack-mounted hardware
component that contains data drives and no controller components.

DMP
Dynamic Multi Pathing

DRU
Hitachi Data Retention Utility

DP-VOL
Dynamic Provisioning-virtual volume. A virtual volume with no memory
space used by Dynamic Provisioning.

dynamic provisioning
An approach to managing storage. Instead of reserving a fixed amount
of storage, it removes capacity from the available pool when data is
actually written to disk. Also called thin provisioning.

E
EC
error code

emulation
The operation of the Hitachi RAID storage system to emulate the
characteristics of a different storage system. For device emulation, the
mainframe host sees the logical devices on the RAID storage system as
3390-x devices. For controller emulation, the mainframe host sees the
control units (CUs) on the RAID storage system as 2105 or 2107
controllers. The RAID storage system operates the same as the storage
system being emulated.

emulation group
A set of device emulation types that can be intermixed within a RAID
group and treated as a group.


env.
environment

ERC
error reporting communications

ESCON
Enterprise System Connection

EXCTG
See extended consistency group (EXCTG).

EXG
external volume group

ext.
external

extended consistency group (EXCTG)


A set of Universal Replicator for Mainframe journals in which data
consistency is guaranteed. When performing copy operations between
multiple primary and secondary storage systems, the journals must be
registered in an EXCTG.

external application
A software module that is used by a storage system but runs on a
separate platform.

external port
A fibre-channel port that is configured to be connected to an external
storage system for Universal Volume Manager operations.

external volume
A logical volume whose data resides on drives that are physically located
outside the Hitachi storage system.

F
failback
The process of switching operations from the secondary path or host
back to the primary path or host, after the primary path or host has
recovered from failure. See also failover.


failover
The process of switching operations from the primary path or host to a
secondary path or host when the primary path or host fails.

FBA
fixed-block architecture

FC
fibre channel; FlashCopy

FCA
fibre-channel adapter

FC-AL
fibre-channel arbitrated loop

FCIP
fibre-channel internet protocol

FCP
fibre-channel protocol

FCSP
fibre-channel security protocol

FIBARC
Fibre Connection Architecture

FICON
Fibre Connectivity

FIFO
first in, first out

free capacity
The amount of storage space (in bytes) that is available for use by the
host systems.

FSW
fibre switch

FTP
file-transfer protocol


FV
fixed-size volume

FWD
fast-wide differential

G
GID
group ID

GUI
graphical user interface

H
HA
high availability

HACMP
High Availability Cluster Multi-Processing

HAM
Hitachi High Availability Manager

HDLM
Hitachi Dynamic Link Manager

HDP
Hitachi Dynamic Provisioning

HDS
Hitachi Data Systems

HDT
Hitachi Dynamic Tiering

HDvM
Hitachi Device Manager

HGLAM
Hitachi Global Link Availability Manager


H-LUN
host logical unit

HOMRCF
Hitachi Open Multi-RAID Coupling Feature. Another name for Hitachi
ShadowImage.

HORC
Hitachi Open Remote Copy. Another name for Hitachi TrueCopy.

HORCM
Hitachi Open Remote Copy Manager. Another name for Command
Control Interface.

host failover
The process of switching operations from one host to another host when
the primary host fails.

host group
A group of hosts of the same operating system platform.

host mode
Operational modes that provide enhanced compatibility with supported
host platforms. Used with fibre-channel ports on RAID storage systems.

host mode option


Additional options for fibre-channel ports on RAID storage systems.
Provide enhanced functionality for host software and middleware.

HRC
Hitachi Remote Copy. Another name for Hitachi TrueCopy for IBM z/OS.

HRpM
Hitachi Replication Manager

HSCS
Hitachi Storage Command Suite. This suite of products is now called the
Hitachi Command Suite.

HUR
Hitachi Universal Replicator


HXRC
Hitachi Extended Remote Copy. Another name for Hitachi Compatible
Replication for IBM XRC.

I
iFCP
internet fibre-channel protocol

IML
initial microcode load; initial microprogram load

IMPL
initial microprogram load

initial copy
An initial copy operation is performed when a copy pair is created. Data
on the primary volume is copied to the secondary volume.

initiator port
A fibre-channel port configured to send remote I/Os to an RCU target
port on another storage system. See also RCU target port and target
port.

in-system replication
The original data volume and its copy are located in the same storage
system. ShadowImage in-system replication provides duplication of
logical volumes; Copy-on-Write Snapshot in-system replication provides
snapshots of logical volumes that are stored and managed as virtual
volumes (V-VOLs).
See also remote replication.

intermediate site (I-site)


A site that functions as both a TrueCopy secondary site and a Universal
Replicator primary site in a 3-data-center (3DC) cascading
configuration.

internal volume
A logical volume whose data resides on drives that are physically located
within the storage system. See also external volume.

IO, I/O
input/output


IOPS
I/Os per second

IP
internet protocol

IPL
initial program load

J
JNL
journal

JNLG
journal group

journal group (JNLG)


In a Universal Replicator system, journal groups manage data
consistency between multiple primary volumes and secondary volumes.
See also consistency group (CTG).

journal volume
A volume that records and stores a log of all events that take place in
another volume. In the event of a system crash, the journal volume logs
are used to restore lost data and maintain data integrity.
In Universal Replicator, differential data is held in journal volumes
until it is copied to the S-VOL.

JRE
Java Runtime Environment

L
L1 pair
See layer-1 (L1) pair.

L2 pair
See layer-2 (L2) pair.

LAN
local-area network


layer-1 (L1) pair


In a ShadowImage cascade configuration, a layer-1 pair consists of a
primary volume and secondary volume in the first cascade layer. An L1
primary volume can be paired with up to three L1 secondary volumes.
See also cascade configuration.

layer-2 (L2) pair


In a ShadowImage cascade configuration, a layer-2 (L2) pair consists of
a primary volume and secondary volume in the second cascade layer. An
L2 primary volume can be paired with up to two L2 secondary volumes.
See also cascade configuration.

LBA
logical block address

LCP
local control port; link control processor

LCU
logical control unit

LDEV
logical device

LDKC
See logical disk controller (LDKC).

leaf volume
A level-2 secondary volume in a ShadowImage cascade configuration.
The primary volume of a layer-2 pair is called a node volume. See also
cascade configuration.

LED
light-emitting diode

license key
A specific set of characters that unlocks a software application so that
you can use it.

local copy
See in-system replication.

local site
See primary site.


logical device (LDEV)


An individual logical data volume (on multiple drives in a RAID
configuration) in the storage system. An LDEV may or may not contain
any data and may or may not be defined to any hosts. Each LDEV has a
unique identifier or address within the storage system composed of the
logical disk controller (LDKC) number, control unit (CU) number, and
LDEV number. The LDEV IDs within a storage system do not change. An
LDEV formatted for use by mainframe hosts is called a logical volume
image (LVI). An LDEV formatted for use by open-system hosts is called
a logical unit (LU).

logical disk controller (LDKC)


A group of 255 control unit (CU) images in the RAID storage system that
is controlled by a virtual (logical) storage system within the single
physical storage system. For example, the Universal Storage Platform V
storage system supports two LDKCs, LDKC 00 and LDKC 01.

logical unit (LU)


A logical volume that is configured for use by open-systems hosts (for
example, OPEN-V).

logical unit (LU) path


The path between an open-systems host and a logical unit.

logical volume
See volume.

logical volume image (LVI)


A logical volume that is configured for use by mainframe hosts (for
example, 3390-9).

LU
logical unit

LUN
logical unit number

LUNM
Hitachi LUN Manager

LUSE
Hitachi LUN Expansion; Hitachi LU Size Expansion

LV
logical volume


M
main control unit (MCU)
A storage system at a primary or main site that contains primary
volumes of TrueCopy for Mainframe remote replication pairs. The MCU is
configured to send remote I/Os to one or more storage systems at the
secondary or remote site, called remote control units (RCUs), that
contain the secondary volumes of the remote replication pairs. See also
remote control unit (RCU).

main site
See primary site.

main volume (M-VOL)


A primary volume on the main storage system in a TrueCopy for
Mainframe copy pair. The M-VOL contains the original data that is
duplicated on the remote volume (R-VOL). See also remote volume (R-VOL).

master journal (M-JNL)


Holds differential data on the primary Universal Replicator system until
it is copied to the restore journal (R-JNL) on the secondary storage
system. See also restore journal (R-JNL).

max.
maximum

MB
megabyte

Mb/sec, Mbps
megabits per second

MB/sec, MBps
megabytes per second

MCU
See main control unit (MCU).

MF, M/F
mainframe

MIH
missing interrupt handler


mirror
In Universal Replicator, each pair relationship in and between journals is
called a mirror. Each pair is assigned a mirror ID when it is created.
The mirror ID identifies individual pair relationships between journals.

M-JNL
main journal

modify mode
The mode of operation of SN where you can change the storage
system configuration. The two SN modes are view mode and modify
mode. See also view mode.

MP
microprocessor

MSCS
Microsoft Cluster Server

mto, MTO
mainframe-to-open

MU
mirror unit

multi-pathing
A performance and fault-tolerance technique that uses more than one
physical connection between the storage system and host system. Also
called multipath I/O.

M-VOL
main volume

N
node volume
A level-2 primary volume in a ShadowImage cascade configuration. The
secondary volume of a layer-2 pair is called a leaf volume. See also
cascade configuration.

NUM
number


NVS
nonvolatile storage

O
OPEN-V
A logical unit (LU) of user-defined size that is formatted for use by
open-systems hosts.

OPEN-x
A logical unit (LU) of fixed size (for example, OPEN-3 or OPEN-9) that is
used primarily for sharing data between mainframe and open-systems
hosts using Hitachi Cross-OS File Exchange.

OS
operating system

OS/390
Operating System/390

P
pair
Two logical volumes in a replication relationship in which one volume
contains original data to be copied and the other volume contains the
copy of the original data. The copy operations can be synchronous or
asynchronous, and the pair volumes can be located in the same storage
system (in-system replication) or in different storage systems (remote
replication).

pair status
Indicates the condition of a copy pair. A pair must have a specific status
for specific operations. When an operation completes, the status of the
pair changes to the new status.

parity group
See RAID group.

path failover
The ability of a host to switch from using the primary path to a logical
volume to the secondary path to the volume when the primary path fails.
Path failover ensures continuous host access to the volume in the event
the primary path fails.
See also alternate path and failback.


PG
parity group. See RAID group.

physical device
See device.

PiT
point-in-time

point-in-time (PiT) copy


A copy or snapshot of a volume or set of volumes at a specific point in
time. A point-in-time copy can be used for backup or to allow a
mirroring application to run concurrently with the system.

pool
A set of volumes that are reserved for storing Copy-on-Write Snapshot
data or Dynamic Provisioning write data.

pool volume (pool-VOL)


A logical volume that is reserved for storing snapshot data for
Copy-on-Write Snapshot operations or write data for Dynamic Provisioning.

port attribute
Indicates the type of fibre-channel port: target, RCU target, or initiator.

port block
A group of four fibre-channel ports that have the same port mode.

port mode
The operational mode of a fibre-channel port. The three port modes for
fibre-channel ports on the Hitachi RAID storage systems are standard,
high-speed, and initiator/external MIX.

PPRC
Peer-to-Peer Remote Copy

Preview list
The list of requested operations on SN.

primary site
The physical location of the storage system that contains the original
data to be replicated and that is connected to one or more storage
systems at the remote or secondary site via remote copy connections.
A primary site can also be called a main site or local site.


The term primary site is also used for host failover operations. In that
case, the primary site is the host computer where the production
applications are running, and the secondary site is where the backup
applications run when the applications at the primary site fail, or where
the primary site itself fails.

primary volume
The volume in a copy pair that contains the original data to be replicated.
The data in the primary volume is duplicated synchronously or
asynchronously on the secondary pairs.
The following Hitachi products use the term P-VOL: SN, Copy-on-Write
Snapshot, ShadowImage, ShadowImage for Mainframe, TrueCopy,
Universal Replicator, Universal Replicator for Mainframe, and High
Availability Manager.
See also secondary volume (S-VOL).

P-site
primary site

P-VOL
Term used for the primary volume in the earlier version of the SN GUI
(still in use). See primary volume.

Q
quick format
The quick format feature in Virtual LVI/Virtual LUN in which the
formatting of the internal volumes is done in the background. Use to
configure the system (such as defining a path or creating a TrueCopy
pair) before the formatting is completed. To quick format, the volumes
must be in blocked status.

quick restore
A reverse resynchronization in which no data is actually copied: the
primary and secondary volumes are swapped.

quick split
A split operation in which the pair becomes split immediately before the
differential data is copied to the secondary volume (S-VOL). Any
remaining differential data is copied to the S-VOL in the background. The
benefit is that the S-VOL becomes immediately available for read and
write I/O.


R
R/W, r/w
read/write

RAID
redundant array of inexpensive disks

RAID group
A redundant array of inexpensive drives (RAID) that have the same
capacity and are treated as one group for data storage and recovery. A
RAID group contains both user data and parity information, and the
storage system can access the user data in the event that one or more
of the drives within the RAID group are not available. The RAID level of
a RAID group determines the number of data drives and parity drives
and how the data is striped across the drives. For RAID1, user data is
duplicated within the RAID group, so there is no parity data for RAID1
RAID groups.
A RAID group can also be called an array group or a parity group.

RAID level
The type of RAID implementation. RAID levels include RAID0, RAID1,
RAID2, RAID3, RAID4, RAID5 and RAID6.

RCP
remote control port

RCU
See remote control unit (RCU).

RD
read

RCU target port


A fibre-channel port that is configured to receive remote I/Os from an
initiator port on another storage system.

remote console PC
A previous term for the personal computer (PC) system that is
LAN-connected to a RAID storage system. The current term is SN PC.

remote control port (RCP)


A serial-channel (ESCON) port on a TrueCopy main control unit (MCU)
that is configured to send remote I/Os to a TrueCopy remote control unit
(RCU).


remote control unit (RCU)


A storage system at a secondary or remote site that is configured to
receive remote I/Os from one or more storage systems at the primary
or main site.

remote copy
See remote replication.

remote copy connections


The physical paths that connect a storage system at the primary site to
a storage system at the secondary site. Also called data path.

remote replication
Data replication configuration in which the storage system that contains
the original data is at a local site and the storage system that contains
the copy of the original data is at a remote site. TrueCopy and Universal
Replicator provide remote replication. See also in-system replication.

remote site
See secondary site.

remote volume (R-VOL)


In TrueCopy for Mainframe, a volume at the remote site that contains a
copy of the original data on the main volume (M-VOL) at the main site.

restore journal (R-JNL)


Holds differential data on the secondary Universal Replicator system
until it is copied to the secondary volume.

resync
Resync is short for resynchronize.

RF
record format

RIO
remote I/O

R-JNL
restore journal

RL
record length


RMI
Remote Method Invocation

rnd
random

root volume
A level-1 primary volume in a ShadowImage cascade configuration. The
secondary volume of a layer-1 pair is called a node volume. See also
cascade configuration.

RPO
recovery point objective

R-SIM
remote service information message

R-site
remote site (used for Universal Replicator)

RTC
real-time clock

RTO
recovery time objective

R-VOL
See remote volume (R-VOL).

S
S#
serial number

S/N
serial number

s/w
software


SAID
system adapter ID

SAN
storage-area network

SATA
serial Advanced Technology Attachment

SC
storage control

SCDS
source control dataset

SCI
state change interrupt

scripting
The use of command line scripts, or spreadsheets downloaded by
Configuration File Loader, to automate storage management operations.

SCSI
small computer system interface

secondary site
The physical location of the storage system that contains the primary
volumes of remote replication pairs at the main or primary site. The
storage system at the secondary site is connected to the storage system
at the main or primary site via remote copy connections. The secondary
site can also be called the remote site. See also primary site.

secondary volume
The volume in a copy pair that is the copy. The following Hitachi products
use the term secondary volume: SN, Copy-on-Write Snapshot,
ShadowImage, ShadowImage for Mainframe, TrueCopy, Universal
Replicator, Universal Replicator for Mainframe, and High Availability
Manager.
See also primary volume.

seq.
sequential


service information message (SIM)


SIMs are generated by a RAID storage system when it detects an error
or service requirement. SIMs are reported to hosts and displayed on SN.

service processor (SVP)


The computer inside a RAID storage system that hosts the SN software
and is used by service personnel for configuration and maintenance of
the storage system.

severity level
Applies to service information messages (SIMs) and SN error codes.

SI
Hitachi ShadowImage

SIz
Hitachi ShadowImage for Mainframe

sidefile
An area of cache memory that is used to store updated data for later
integration into the copied data.

SIM
service information message

size
Generally refers to the storage capacity of a memory module or cache.
Not usually used for storage of data on disk or flash drives.

SM
shared memory

SMTP
simple mail transfer protocol

SN
Hitachi Storage Navigator

snapshot
A point-in-time virtual copy of a Copy-on-Write Snapshot primary
volume (P-VOL). The snapshot is maintained when the P-VOL is updated
by storing pre-updated data (snapshot data) in a data pool.


SNMP
simple network management protocol

SOM
system option mode

source volume (S-VOL)


The volume in a copy pair containing the original data. The term is used
only in the earlier version of the SN GUI (still in use), for the following
Hitachi products: ShadowImage for Mainframe, Dataset Replication, IBM
FlashCopy.

space
Generally refers to the data storage capacity of a disk drive or flash
drive.

SRM
Storage Replication Manager

SS
snapshot

SSB
sense byte

SSID
(storage) subsystem identifier. SSIDs are used as an additional way to
identify a control unit on mainframe operating systems. Each group of
64 or 256 volumes requires one SSID; therefore, there can be one or four
SSIDs per CU image. For HUS VM, one SSID is associated with 256
volumes.

SSL
secure sockets layer

steady split
In ShadowImage, a typical pair split operation in which any remaining
differential data from the P-VOL is copied to the S-VOL and then the pair
is split.

S-VOL
See secondary volume or source volume (S-VOL). When used for
secondary volume, S-VOL is only seen in the earlier version of the SN
GUI (still in use).


SVP
See service processor (SVP).

sync
synchronous

system option mode (SOM)


Additional operational parameters for the RAID storage systems that
enable the storage system to be tailored to unique customer operating
requirements. SOMs are set on the service processor.

T
target port
A fibre-channel port that is configured to receive and process host I/Os.

target volume (T-VOL)


The volume in a mainframe copy pair that is the copy. The term is used
only in the earlier version of the SN GUI (still in use), for the following
Hitachi products: ShadowImage for Mainframe, Dataset Replication,
Compatible FlashCopy(R).
See also source volume (S-VOL).

TB
terabyte

TC
Hitachi TrueCopy

TCz
Hitachi TrueCopy for Mainframe

TDEVN
target device number

TGT
target; target port

THD
threshold

TID
target ID


total capacity
The aggregate amount of storage space in a data storage system.

T-VOL
See target volume (T-VOL).

U
update copy
An operation that copies differential data on the primary volume of a
copy pair to the secondary volume. Update copy operations are
performed in response to write I/Os on the primary volume after the
initial copy operation is completed.

UR
Hitachi Universal Replicator

URz
Hitachi Universal Replicator for Mainframe

USP
Hitachi TagmaStore Universal Storage Platform

USP V
Hitachi Universal Storage Platform V

USP VM
Hitachi Universal Storage Platform VM

UT
Universal Time

UTC
Universal Time-coordinated

V
V
version; variable length and de-blocking (mainframe record format)

VB
variable length and blocking (mainframe record format)


view mode
The mode of operation of SN where you can only view the storage
system configuration. The two SN modes are view mode and modify
mode. See also modify mode.

virtual device (VDEV)


A group of logical devices (LDEVs) in a RAID group. A VDEV typically
consists of some fixed volumes (FVs) and some free space. The number
of fixed volumes is determined by the RAID level and device emulation
type.

Virtual LVI/LUN volume


A custom-size volume whose size is defined by the user using Virtual
LVI/Virtual LUN. Also called a custom volume (CV).

virtual volume (V-VOL)


The secondary volume in a Copy-on-Write Snapshot pair. When in PAIR
status, the V-VOL is an up-to-date virtual copy of the primary volume
(P-VOL). When in SPLIT status, the V-VOL points to data in the P-VOL
and to replaced data in the pool, maintaining the point-in-time copy of
the P-VOL at the time of the split operation.
When a V-VOL is used with Dynamic Provisioning, it is called a DP-VOL.

VLL
Hitachi Virtual LVI/LUN

VLVI
Hitachi Virtual LVI

VM
volume migration; volume manager

VOL, vol
volume

VOLID
volume ID

volser
volume serial number

volume
A logical device (LDEV), or a set of concatenated LDEVs in the case of
LUSE, that has been defined to one or more hosts as a single data
storage unit. A mainframe volume is called a logical volume image (LVI),
and an open-systems volume is called a logical unit (LU).

volume pair
See copy pair.

V-VOL
virtual volume

V-VOL management area


Contains the pool management block and pool association information
for Copy-on-Write Snapshot operations. The V-VOL management area is
created automatically when additional shared memory is installed and is
required for Copy-on-Write Snapshot operations.


Index

A
Add Quorum Disk dialog box A-15

C
Cache Residency Manager, sharing volumes with 2-12
capacity 2-5
CCI
  troubleshooting with 8-6
  using for pair operations 4-21
changing TrueCopy pair to HAM 4-19
components 1-3
configuration
  hardware order 3-2
  quorum disk 3-6
  software order 3-4
  with Business Copy 2-13
  with ShadowImage 2-13
create pairs 4-11

D
data path
  general 1-4
  max, min, recommended 2-7
  overview 1-4
  requirements 2-7
  switching 5-2
deleting pairs 4-18

E
ending HAM operations 5-3
expired license 2-5
external storage systems, requirements 2-8

F
failover
  description 1-6
  planning 2-8

G
GUI, using to
  add quorum disk ID 3-7
  create pairs 4-11
  delete pairs 4-18
  resync pairs 4-16
  split pairs 4-15

H
hardware
  required 2-3
  setup 3-2
High Availability Manager, discontinuing 5-3
host recognition of pair 4-13

I
initial copy 4-11
interoperability; sharing volumes 2-11

L
LDEV requirements 2-5
license capacity 2-5
licenses required 2-4
logical paths
  max, min, recommended 2-7
LUN Expansion, sharing volumes with HAM 2-13
LUN Manager 2-12
LUSE, sharing volumes with HAM 2-13

M
max. number of pairs 2-5
MCU 1-4
multipath software
  description 1-5
  requirements 2-3

N
non-owner path, restore 6-20

O
Open Volume Management, sharing volumes with 2-12
overview 1-2
owner path, restore 6-20

P
P-VOL 1-4
P-VOL, disallow I/O when split 4-15
pair status
  definitions 4-6
  monitoring 4-2
Paircreate(HAM) dialog box A-8
pairs
  create 4-11
  max. number 2-5
  releasing 4-18
  requirements 2-5
  resynchronizing 4-16
  split 4-15
  troubleshooting 8-4
Performance Monitor 2-12
planning, workflow 2-3
ports
  max, min, recommended 2-7
power on/off system 5-6
power outage, recovery from 6-12
program products with HAM 2-11
PSUS types and definitions 4-10

Q
quorum disk
  and other products 2-12
  delete 5-3
  overview 1-5
  recover if deleted by accident 5-5
  replace 6-10
  requirements 2-6
  setup 3-6
quorum disk ID, adding 3-7
Quorum Disk Operation window A-13

R
RCU 1-4
recovering quorum disk if deleted 5-5
recovery
  after failure 6-5
  after power outage 6-12
  with ShadowImage 6-19
releasing pairs 4-18
requirements
  data path 2-7
  for multiple pair creation 2-5
  hardware 2-3
  licenses 2-4
  pair volumes 2-5
  quorum disk 2-6
resynchronizing pairs 4-16
reverse resynchronizing 4-17

S
S-VOL 1-4
S-VOL write option, allowing I/O 4-15
ShadowImage and HAM 2-13
SIM 4-4
software setup 3-4
split operation, maintaining synchronization 4-15
splitting pairs 4-15
Storage Navigator
  general 1-5
  requirements 2-8
storage systems 1-4
suspended pairs, troubleshooting 8-4
system
  configuration for HAM 3-5
  hardware requirements 2-3
  power on/off procedure 5-6
  primary, secondary requirements 2-4

T
technical support 6-21, 8-9
troubleshooting
  general errors 8-2
  suspended pairs 8-4
  using CCI 8-6
TrueCopy, changing pair to HAM 4-19

V
Virtual Partition Manager, sharing volumes with 2-12
VOL Access explained A-5


Hitachi Data Systems


Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com
Regional Contact Information
Americas
+1 408 970 1000
[email protected]
Europe, Middle East, and Africa
+44 (0)1753 618000
[email protected]
Asia Pacific
+852 3189 7900
[email protected]

MK-92RD7052-00
