
TSI0556 Hitachi TagmaStore™ Universal

Storage Platform Software Solutions


Student Guide – Book Two

Version 2.0

Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please go to
http://www.hds.com/products_services/support/license.html or call your local sales representative to obtain a
printed copy. If you purchase or license the product, you are deemed to have accepted these terms and
conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR
LOST DATA, EVEN IF HDS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks, registered trademarks, or service marks of Hitachi Data Systems
Corporation in the United States and/or other countries:
HiCommand®, Hi-Star, Lightning 9900, ShadowImage, TrueCopy, TagmaStore
All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1 KB stands for 1,024 bytes, 1 MB for 1,024 kilobytes, 1 GB for 1,024 megabytes, and 1 TB for 1,024 gigabytes, consistent with IEC (International Electrotechnical Commission) standards for prefixes for binary and metric multiples.
©2005, Hitachi Data Systems Corporation.
All Rights Reserved
Date printed: May 10, 2005
Version 2.0
Course numbers: TSI0556

Contact Hitachi Data Systems at www.hds.com.


Contents
Book One
I. TAGMASTORE SOFTWARE SOLUTIONS INTRODUCTION ............... I-1
1. SOFTWARE SOLUTIONS OVERVIEW .......................................... 1-1
2. HITACHI STORAGE NAVIGATOR ................................................ 2-1
3. HITACHI LUN MANAGER SOFTWARE ........................................ 3-1
4. LUSE AND VLL VOLUMES ...................................................... 4-1
5. HITACHI DATA RETENTION UTILITY .......................................... 5-1
6. UNIVERSAL VOLUME MANAGER ............................................... 6-1
7. HITACHI CROSS-SYSTEM COPY SOFTWARE .............................. 7-1

Book Two
8. CACHE RESIDENCY MANAGER ................................................. 8-1
Module Objectives ................................................................................. 8-2
Cache Residency Manager Overview ................................................... 8-3
Cache Residency Manager Software .................................................... 8-7
Cache Residency Manager Overview ................................................... 8-8
Cache Residency Manager Concept ..................................................... 8-9
Cache Residency Manager Operations............................................... 8-15
Module Review .................................................................................... 8-17
Lab Project 6: Cache Residency Manager .......................................... 8-18

9. HITACHI DYNAMIC LINK MANAGER ........................................... 9-1


Module Objectives ................................................................................. 9-2
Hitachi Dynamic Link Manager Overview.............................................. 9-3
Hitachi Dynamic Link Manager Features............................................... 9-6
Hitachi Dynamic Link Manager Path State Transitions ....................... 9-10
Hitachi Dynamic Link Manager Installation.......................................... 9-12
Hitachi Dynamic Link Manager Operations ......................................... 9-13
Hitachi Dynamic Link Manager Parameters ........................................ 9-14
Hitachi Dynamic Link Manager GUI Interface ..................................... 9-15
Hitachi Dynamic Link Manager Operations ......................................... 9-19
Module Review .................................................................................... 9-22
Lab Project 7: Hitachi Dynamic Link Manager for Solaris ................... 9-23


10. SHADOWIMAGE OPERATIONS .................................................10-1


Module Objectives............................................................................... 10-2
ShadowImage Software Overview ...................................................... 10-3
ShadowImage Software Parameters and Requirements.................... 10-7
ShadowImage Operations................................................................... 10-9
ShadowImage Pair Status Transitions .............................................. 10-26
Module Review.................................................................................. 10-28
Lab Project 8: ShadowImage GUI..................................................... 10-29

11. SHADOWIMAGE RAID MANAGER CCI OPERATIONS ................11-1


Introduction to ShadowImage RAID Manager CCI Operations........... 11-2
RAID Manager CCI Overview ............................................................. 11-3
RAID Manager Components ............................................................... 11-6
RAID Manager Requirements ............................................................. 11-9
RAID Manager Configuration Files ................................................... 11-10
ShadowImage and CCI Configuration Files ...................................... 11-11
Absolute LUN Numbers..................................................................... 11-16
Command Device.............................................................................. 11-17
Mirror Unit Numbers .......................................................................... 11-19
ShadowImage Configuration Example.............................................. 11-20
Set Environment Variables................................................................ 11-23
Services File ...................................................................................... 11-25
Starting RAID Manager ..................................................................... 11-26
Shutting Down RAID Manager .......................................................... 11-27
RAID Manager CCI Commands ........................................................ 11-28
RAID Manager Considerations.......................................................... 11-35
Troubleshooting................................................................................. 11-37
Module Review.................................................................................. 11-40
Lab Project 9: ShadowImage CCI..................................................... 11-41

12. VIRTUAL PARTITION MANAGER ..............................................12-1


Module Objectives............................................................................... 12-2
Virtual Partition Manager Overview..................................................... 12-3
Storage Administrator and Storage Partition Administrator ................ 12-9
Virtual Partition Manager Features ................................................... 12-17
Virtual Partition Manager Best Practices........................................... 12-22
SLPR and CLPR User IDs ................................................................ 12-25
Partition Manager Functions ............................................................. 12-27
License Key Partition Definition......................................................... 12-28
Storage Navigator ............................................................................. 12-30
Virtual Partition Manager Operations ................................................ 12-31
Module Review.................................................................................. 12-44
Lab Project 10: Virtual Partition Manager ......................................... 12-45

8. Cache Residency Manager


Module Objectives

y Upon successful completion of this module and any associated lab(s), you will be able to:
– Identify the purpose and benefits of Cache Residency Manager software (formerly known as FlashAccess)
– Identify the issues associated with cache residency
– Identify the cache residency requirements and restrictions for external volumes
– Perform Cache Residency Manager software operations


Cache Residency Manager Overview


y Purpose and Benefits
– Normally configured at installation time (DCI), but it can also be configured dynamically when adding additional cache
– Applicable to both open systems and mainframe volumes
– It is a licensed option
– Conceptually similar to a RAM disk
– Guarantees 100% read/write hit performance (no misses)
– Not affected by the LRU algorithm: data with a low access frequency will remain in cache
– Data with a low access frequency, but to which high access performance is required, can be made resident in the cache

Cache Residency Manager software (also called Cache Residency) is a feature of Hitachi
TagmaStore™ Universal Storage Platform that allows you to store frequently accessed
data in a specific area of the subsystem’s cache memory. Cache Residency increases the
data access speed for the cache-resident data by enabling read and write I/Os to be
performed at high speed. When data specified as the target of a Cache Residency
operation is accessed by a host system for the first time, this data becomes resident, or staged, in the allocated cache area called the Cache Residency cache. A host that accesses this data a second time and thereafter finds it in the Cache Residency cache. Cache Residency also supports a function called pre-staging that places specific data in the Cache Residency cache before it is accessed by the host. When pre-staging is enabled, the host finds this data in the Cache Residency cache from the first access, further enhancing access performance.
To be able to use the Cache Residency functions, you need to allocate a certain portion of
the subsystem’s cache memory. You can change the capacity of the Cache Residency
cache when increasing or decreasing the size of the cache memory.
The user may want to increase total subsystem cache capacity when using Cache
Residency to avoid data access performance degradation for non-Cache-Residency data.
Cache Residency is only available on TagmaStore Universal Storage Platform systems
configured with at least 512 MB of cache.


y Cache Residency Specifications


Emulation Type
  Open Systems: OPEN-3, 8, 9, E, L, V
  Mainframe: 3390-3, 3A, 3B, 3C, 3R, 9, L
Cache Area Allocation
  Open Systems: For OPEN-V, at least 512 LBAs (Logical Block Addresses), equivalent to 264 KB; for other than OPEN-V, at least 96 LBAs, equivalent to 66 KB
  Mainframe: At least one cache slot (or track), equivalent to 66 KB; up to 1 LDEV
Supported Volumes
  Open Systems: Normal volumes, LUN Expansion (LUSE) volumes, and VLL volumes
  Mainframe: Normal volumes and VLL volumes
Number of Cache Areas (Extents)
  4,096 extents per LDEV or per subsystem:
  - One LDEV could use all 4,096 extents, or
  - 4,096 LDEVs could occupy 4,096 extents (one extent per LDEV) if enough space was available

Note: As far as open-systems LDEVs are concerned, it is best to put the entire LDEV into Cache Residency, because there is no way to determine which track the data resides on, something that is possible with mainframes. That is why VLL volumes are good candidates for Cache Residency.

The cache extents are dynamic and can be added and deleted at any time. The TagmaStore Universal Storage Platform supports a maximum of 4,096 addressable cache extents per LDEV and per subsystem. For mainframe volumes, each Cache Residency cache area must be defined on contiguous tracks, with a minimum size of one cache slot (or track) and a maximum size of one LVI. One cache slot is equivalent to 66 KB. For open-systems volumes, Cache Residency cache extents must be defined in logical blocks using logical block addresses (LBAs), with a minimum size of 512 LBAs (equivalent to 264 KB) for OPEN-V, and 96 LBAs (equivalent to 66 KB) for other than OPEN-V. For OPEN-V, extents increment in units of 128 rather than 96. In most cases, however, users will assign an entire open-systems volume to Cache Residency. If the remaining cache memory is less than 256 MB, Cache Residency is not available.


y Cache Residency Modes


– Two modes are available: PRIO (read-only) and BIND (read-write)
y BIND takes three times as much cache space as PRIO (because of
mirroring)
Three copies of LDEV: one read, and two write

PRIO MODE: In priority mode the Cache Residency extents are used to hold read data for
specific extents on volumes. Write data is write duplexed in normal cache and destaged to
disk using standard algorithms. Because there is no duplexed write data in the cache
reserved for Cache Residency, all priority mode Cache Residency extents are 100% utilized
by user read-type data.
BIND MODE: In bind mode the Cache Residency extents are used to hold read and write
data for specific extent(s) on volume(s). Any data written to the Cache Residency bind area
is not de-staged to the disk. To ensure data integrity, write data must be duplexed in the
Cache Residency area, which consumes a significant amount of the Cache Residency cache.
The primary advantage of bind mode is that all targeted read and write data is transferred
at host data transfer speed. In addition, the accessibility of read data is the same as Cache
Residency priority mode; write operations do not have to wait for available cache
segments; and there will be no backend contention caused by de-staging data.
Pre-Staging Function
The Cache Residency Manager pre-staging function loads data allocated to a configured Cache Residency extent into the Cache Residency Manager cache before it is accessed, which allows the host to find the data in the cache from the first access. You can use this function in both the priority and bind modes.
Note: Data can be pre-staged at a scheduled time, which should not be during peak activity.


y Cache Residency Manager Restrictions


– Cache Residency Manager operations cannot be configured for:
• Volume migration operations
• ShadowImage in-system replication software quick restore operations
• On-demand LDEVs
• Cross-System Copy

Do not perform the ShadowImage software quick restore operation or the volume
migration operation on a Cache Residency volume. These operations swap the
internal locations of the source and target volumes, which causes a loss of data
integrity. The Cache Residency bind mode is not available to external volumes
whose IO Suppression mode is set to Disable and Cache mode is also set to Disable
(which is the mode that disables the use of the cache when there is an I/O request
from the host).


Cache Residency Manager Software

y Cache Residency Support Requirements for External Volumes

IO Suppression Mode: Disable
  Cache Mode: Enable (enables cache during an I/O request from the host)
  Cache Residency Functions: Full availability of the same functions as internal volumes

IO Suppression Mode: Disable
  Cache Mode: Disable (disables cache during an I/O request from the host)
  Cache Residency Functions: Cache Residency bind mode is not available

IO Suppression Mode: Enable
  Cache Residency Functions: No Cache Residency functions available


Cache Residency Manager Overview

y Cache Residency De-stage Conditions


– Cache Residency data is de-staged when:
y The subsystem is powered off
y Maintenance operations (cache upgrades)
y Volume is deleted
y VLL volume-to-space operation
y VLL volume initialization operation

y Cache Residency Suspension


– The Cache Residency Manager function is automatically suspended when any of the following failures occurs (the suspension continues until the maintenance is completed):
y Cache failure
y Shared memory failure
y One-side cluster down
y One-side shared memory blockade


Cache Residency Manager Concept

y Overview of a Cache Residency definition

[Diagram: a Cache Residency area is defined from the SVP or Storage Navigator by specifying an LDEV number with a start and end LBA (for example, LDEV a: LBA 0-100; LDEV b: LBA 500-525; LDEV c: LBA 200-450; LDEV d: LBA 300-475). The DKC cache memory is divided into the usual cache area and the Cache Residency area; the specified data is staged from the HDDs into the Cache Residency area so that the host gets read-hit performance.]

Use the Storage Navigator to:

1. Select an LDEV
2. Choose the LBA range (from - to)
3. Consider the options: BIND vs. PRIO


y PRIO Read - Miss

[Diagram: on the first read, the requested data is not in the cache (a miss). It is read from disk, locked into the Cache Residency Manager cache, and presented to the requesting host; the rest of the cache (sides A and B) remains under standard LRU management with asynchronous de-staging.]

y PRIO Read - Hit

[Diagram: on following reads, the requested data is in the Cache Residency Manager cache (a hit) and is immediately presented to the requesting host.]


y PRIO Write

[Diagram: write data is merged with new data in standard cache (sides A and B, under standard LRU management) and asynchronously de-staged to disk; only the read blocks occupy the Cache Residency Manager cache area.]

As you notice here, only the READ blocks are part of the Cache Residency Manager cache area; the WRITE blocks are part of standard cache.


y BIND - 1st Read

[Diagram: on the first read, the requested data is not in the cache (a miss). It is read from disk, locked into the Cache Residency Manager cache, and presented to the requesting host.]

y BIND Read - Hit

[Diagram: on following reads, the requested data is in the Cache Residency Manager cache (a hit) and is immediately presented to the requesting host.]


y BIND Write

[Diagram: both the read blocks and the write blocks are kept in the Cache Residency Manager cache area, and write data is not de-staged. This is why BIND takes three times the cache space of PRIO.]

As you notice here, the READ as well as the WRITE blocks are a part of the Cache
Residency Manager cache area.


y Summary of the Cache Residency Manager Process

[Diagram: in PRIO (read only) mode, the first read is a miss and following reads are hits from the Cache Residency Manager cache, while writes are merged with new data in standard cache (LRU-managed) and asynchronously de-staged. In BIND (read and write) mode, reads behave the same way, and writes also remain in the Cache Residency Manager cache with no de-staging. LRU = Least Recently Used.]

This diagram summarizes the Cache Residency Manager process presented in the
previous slides. If the write data in question causes a cache miss, the data from the
block containing the target record up to the end of the track is staged into a read
data slot. In parallel with that, the write data is transferred when the block in
question is established in the read data slot. The parity data for the block in question
is checked for a hit or miss condition and, if a cache miss condition is detected, the
old parity is staged into a read parity slot. When all data necessary for generating
new parity is established, it is transferred to the DRR circuit in the DKA. When the
new parity is completed, the DRR transfers it into the write parity slots for cache A
and cache B (the new parity is handled in the same manner as the write data).


Cache Residency Manager Operations

y Operations Overview
– Referencing the current Cache Residency Manager software settings
– Making data reside in the Cache Residency Manager cache
– Deleting data resident in the Cache Residency Manager cache
– Changing the mode set for the Cache Residency Manager cache



y Cache Residency Manager Panel from Storage Navigator

[Screenshot: from the Cache Residency Manager panel you select the CU/LDEV to put into Cache Residency Manager, select the LBA range for the LDEV, and select the BIND/PRIO mode and the pre-staging mode. If YES is selected for Prestaging, the data is loaded into the cache immediately; if NO is selected, the data is loaded the first time it is requested. The panel also shows the area of Cache Residency Manager cache occupied by LDEVs.]

y Final Settings on the Cache Residency Manager Tab

[Screenshot: an error is displayed when the user tries to load an LDEV whose size exceeds the current Cache Residency Manager cache size.]


Module Review


Module Review Questions


1. What are the benefits of using Cache Residency Manager software?
2. When should you use BIND vs. PRIO?
3. In what situation does the Cache Residency Manager function become suspended?


Lab Project 6: Cache Residency Manager

y Timing and Organization


– Time allotted to complete the project: 30 minutes
– The lab project contains three sections:
y Section 1 is the lab activity
y Section 2 contains the answers to the embedded lab questions
y Section 3 contains the review questions
– Time allotted to go over the review questions: 10 minutes
– The class will be split into lab groups
– Each lab group will have the following lab equipment:
y One Solaris host system running Solaris 8
y One Windows host system running Windows XP
y One TagmaStore Universal Storage Platform

y Upon completion of the lab project, you will be able to do the following:
– Set an LDEV into the Cache Residency area as a Priority volume
– Set an LDEV into the Cache Residency area as a Bind volume
– Release an LDEV from the Cache Residency area


9. Hitachi Dynamic Link Manager


Module Objectives

y Upon successful completion of this module and any associated lab(s), you will be able to:
– Describe the purpose and benefits of Hitachi Dynamic Link Manager path management software
– Describe the architecture of the Hitachi Dynamic Link Manager
software
– Identify key features of the Hitachi Dynamic Link Manager
– Identify and use the Hitachi Dynamic Link Manager GUI screens


Hitachi Dynamic Link Manager Overview

y Purpose of Hitachi Dynamic Link Manager Software


– Hitachi Dynamic Link Manager is server based software that
provides path failover and load balancing capabilities
– Hitachi Dynamic Link Manager provides:
y Support for fibre channel connectivity
y Automatic path discovery, which supports a SAN environment
y Automatic path failover and failback
y Two applications exist: CLI and GUI
y Support for all Hitachi Freedom Storage systems
y Support for Hitachi TagmaStore Universal Storage Platform
systems
– Integrates with HiCommand software

Hitachi Dynamic Link Manager is a family of Hitachi-provided, server-based middleware utilities. Hitachi Dynamic Link Manager enhances the availability of RAID systems by providing automatic error recovery and path failover from server-to-RAID connection failures. Dynamic Link Manager also provides load balancing in addition to path failover by redirecting I/O activity to the least busy path using complex algorithms.
Just because a system is RAID-protected does not mean that it is protected against connection bus failures, which is why Hitachi Dynamic Link Manager is required for true nonstop operations. Dynamic Link Manager allows system administrators to take advantage of multiple paths by adding redundant connections between data servers and RAID systems. Hitachi Dynamic Link Manager therefore provides increased reliability and performance. Supported platforms include IBM® AIX®, Sun Solaris®, Windows NT, and Windows 2000.


y Benefits
– Provides load balancing
across multiple paths
– Utilizes the hardware’s ability
to provide multiple paths to
the same device (up to 64
paths)
– Provides failover protection
by switching to a good path,
if a path fails

Hitachi Dynamic Link Manager automatically provides path failover and load
balancing for open systems.


y Features

Connectivity: SCSI, FC
Supported Platforms: NT, W2K, XP, Sun, AIX, HP...
Supported Cluster Software: MSCS, VCS, Sun Cluster, HACMP, MC/SG
Maximum Paths per LUN: 64
Maximum Physical Paths: 2,048
Load Balancing: Round-Robin, Extended Round-Robin
Failover: Yes
Failback: Yes
GUI: Yes
CLI: Yes


Hitachi Dynamic Link Manager Features

y Overview
– Removes HBA as single point of failure
– Automatically detects failed path and reroutes I/O to alternate path
– Automatic discovery of HBAs and LUNs in SAN environment
– Up to 256 LUNs and 64 paths to each LUN
– Uses round-robin or extended round-robin to balance I/Os across available paths
– Provides tools to control and display path status
– Supports the most popular cluster technologies
– HBA vendors and standard open drivers support
– GUI and CLI support
– Error logging capability

Hitachi Dynamic Link Manager provides the ability to reduce the server’s host bus
adapter as the single point of failure in an OPEN environment. One strength of
Dynamic Link Manager is its ability to configure itself automatically. It is designed
to enhance the operating system by putting all alternate paths offline in Disk
Administrator. It functions equally well in both SCSI and fibre channel
environments. Dynamic Link Manager supports the most popular cluster technologies such as HACMP, MSCS, MC/ServiceGuard, Sun Cluster, and VERITAS Cluster Server™. It has GUI/CLI support for configuration management, performance monitoring and management, and authentication of user IDs using the HiCommand facility.


y Load Balancing
– Dynamic Link Manager distributes storage accesses across the
multiple paths and improves the I/O performance with load balancing

[Diagram: without load balancing, a regular driver sends all I/O down one path, creating an I/O bottleneck; with HDLM, I/O is distributed across the available paths to the storage volumes.]

When there is more than one path to a single device within an LU, Hitachi Dynamic Link Manager can distribute the load across those paths by using all of them to issue I/O commands. Load balancing prevents a heavily loaded path from affecting the performance of the entire system.


y Failover
– Hitachi Dynamic Link Manager provides continuous storage access and high availability by taking over from inactive paths

[Diagram: with simple failover, I/O on a failed path is taken over by a standby path; with load balancing, a path failure simply reduces the number of balancing paths.]

The failover function automatically places the error path offline to allow the system
to continue to operate using another online path.
Trigger error levels:
¾ Error
¾ Critical
The online command restores a path to service, and the offline command is used to force path switching.


y Load Balancing on Cluster

– Dynamic Link Manager allows load balancing over the clustering system in a safe manner
– The path failover and failback of Dynamic Link Manager work along with the cluster's node failover and failback

y Supported Clustering Systems
– Windows: MSCS
– Solaris: Sun Cluster, VERITAS Cluster Server
– HP-UX: MC/ServiceGuard
– AIX: HACMP

[Diagram: an active host and a standby host in a cluster, each running HDLM with two HBAs; I/O is load balanced across the CHAs to a shared LUN in the storage system.]

Hitachi Dynamic Link Manager Path State Transitions

y Active and Inactive

[State diagram: the Active state contains Online (active path); the Inactive state contains Offline(C) (path disabled by manual operations), Online(E) (failure on the last active path), and Offline(E) (path disabled by failures). Offline and online operations move a path between Online and Offline(C); a failure moves an Online path to Offline(E), or to Online(E) if it is the last path; recovery returns a failed path to Online, and recovery of another path moves an Online(E) path to Offline(E).]

Active is the state where the path is healthy. Inactive is the state where all accesses are disabled. Online is the state where Hitachi Dynamic Link Manager allows applications access to the path. Offline is the state where Dynamic Link Manager does not allow applications access to the path.
This illustration shows the path status transitions.
A path has two statuses: Online and Offline.
An Online path is one that Dynamic Link Manager uses for failover and load balancing; I/O can be issued to the path.
An Offline path is one that Dynamic Link Manager does not use for failover and load balancing; I/O cannot be issued to the path.
When an error occurs or the offline command is executed, Dynamic Link Manager places the online path offline.
When the online command is executed, Dynamic Link Manager places the offline path online.


y Online(E) Status
– If an error occurs on the last online path to an LU, Dynamic Link Manager changes the path status to Online(E). The Online(E) path returns an I/O error to the applications (upper layers) in order to notify them that no storage accesses are available.

[Diagram: a server with four HBA paths to the storage LUN; the last failed path is shown as Online(E), other failed paths are Offline(E), and healthy paths are Online.]

If an error occurs on the last path to the LUN, Hitachi Dynamic Link Manager executes the Auto Confirmation Function: Dynamic Link Manager checks the offline paths, and if there is a path that can be used, Dynamic Link Manager places that path online. If there is no path that can be used, Dynamic Link Manager returns an I/O error. However, Dynamic Link Manager does not place the last path offline.


Hitachi Dynamic Link Manager Installation

y Basic Installation Overview


– Read the Release Notes!!!
– If using clustering, you may need to configure the host in a single-path configuration
– Configure the TagmaStore Universal Storage Platform
– Install the HBAs
– Set up the switches
– Set up the BIOS for the HBAs
– Install the OS and the HBA drivers
– Set up the HBA parameters
– Prepare the LUs
– Restart the server in a dual-path configuration


Hitachi Dynamic Link Manager Operations

y Operations Overview
– View: Displays host and subsystem information
– Offline: Places an online path offline
– Online: Places an offline path online
– Set: Change Dynamic Link Manager parameters
– Clear: Clears the default setting
– Help: Shows the operations and displays help for each operation

When you are using Dynamic Link Manager for Windows systems, execute the
command as a user of the Administrators group. When you are using Dynamic Link
Manager for Solaris systems, execute the command as a user with root permission.
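
The operations listed above are invoked through the dlnkmgr command, which is described in more detail later in this module. The following is a minimal, hedged sketch of a typical session; the path ID value is only an example, and the exact options available can vary by HDLM version and platform.

> dlnkmgr view -sys          View the Dynamic Link Manager system settings
> dlnkmgr view -path         View the status of every managed path
> dlnkmgr offline -pathid 1  Place path 1 offline
> dlnkmgr online -pathid 1   Place path 1 back online
> dlnkmgr set -afb on        Turn on automatic failback
> dlnkmgr help view          Display help for the view operation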


Hitachi Dynamic Link Manager Parameters

y Optional Parameters
– Load Balancing:
y Round Robin – Distributes all I/O among multiple paths
y Extended Round Robin – Distributes I/O to paths depending on type of
I/O:
– Sequential – a single path is used
– Random – Multiple paths are used
– Path HealthCheck:
y When enabled (default), Dynamic Link Manager monitors all online paths at a specified interval and puts them into Offline(E) or Online(E) status if a failure is detected
y There is a slight performance penalty due to the extra probing I/O
y The default interval is 30 minutes
– Auto Failback: When enabled (not the default), Dynamic Link Manager monitors all Offline(E) and Online(E) paths at a specified interval and restores them to online status if they are found to be operational. The default interval is 1 minute. (See the example below.)
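
To make the parameters above concrete, here is a small, hedged example using the Set command covered later in this module. The option names (-lb, -lbtype, -pchk, -afb, -intvl) are the ones listed in the Set command table that follows; intervals are specified in minutes.

> dlnkmgr set -lb on -lbtype exrr   Enable load balancing using extended round robin
> dlnkmgr set -pchk on -intvl 30    Enable path health checking every 30 minutes
> dlnkmgr set -afb on -intvl 1      Enable auto failback, rechecking failed paths every minute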

y Optional Parameters - continued


– Intermittent Error Monitor (see the example after this list)
y Auto Failback must be On
y Parameters are Monitoring Interval and Number of Times
Example: Monitoring Interval = 30 minutes, Number of Times = 3.
If an error occurs 3 times within a 30-minute period, the path is determined to have an intermittent error and is removed from automatic failback. The path will display an error status until the problem is corrected.
– Reservation Level: Used only by AIX; specific paths can be reserved for I/O
– Remove LU
y Used with Windows 2000 and Server 2003
y Removes the LUN when all paths to the LUN are taken offline
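
As an illustration of the intermittent error monitor settings above, a hedged sketch follows. The -iem and -iemnum option names are assumptions based on typical HDLM releases and are not shown elsewhere in this guide; check the HDLM CLI reference for your version before using them.

> dlnkmgr set -afb on                      Auto Failback must be on first
> dlnkmgr set -iem on -intvl 30 -iemnum 3  Assumed syntax: treat a path as intermittent after 3 errors within 30 minutes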


Hitachi Dynamic Link Manager GUI Interface

y Hitachi Dynamic Link Manager GUI Interface Overview


– The GUI interface provides the following three windows:
y Path Management Window: This is the main window for the Dynamic Link Manager GUI. The Path Management window displays the detailed configuration and path information, allows you to change the path status, and provides access to the other windows.
y Options Window: The Options window displays, and allows you to change, the Dynamic Link Manager operating environment settings, including function settings and error management settings.
y Help Window: The Help window displays the HTML version of the user's manual. The Help window is opened automatically by your default Web browser software.

y Options Window

[Screenshot callouts: the Dynamic Link Manager version; the basic function settings (Load Balancing, Path Health Checking, Auto Failback, Intermittent Error Monitor, Reservation Level, Remove LU); and the error management function settings, where you select the severity of the Log and Trace levels.]


y Configuration Format View

[Screenshot: in the configuration format view, expand an entry to display the LUNs under each HBA port and CHA port.]

y Path Status
– Online
– Offline(C): I/O cannot be issued because the path was placed offline by the GUI or a command
– Online(E): Indicates an error has occurred in the last online path for each device
– Offline(E): Indicates I/O cannot be issued because an error occurred on the path

[Screenshot callout: for the current path status, gray indicates normal status and red indicates an error.]


y Status Display
– Allows you to narrow down your display
y The Online box displays paths in Online status
y The Offline(C) box displays paths in Offline(C) status
y The Offline(E) check box displays paths in Offline(E) status
y The Online(E) box displays paths in Online(E) status


y Path Management Window

This is the Path List window. In this example, LUNs 0, 1, 2, and 3 are available through two paths (1C and 2C; both are owner paths. Non-owner paths are applicable only on the 9200 and 9500V subsystems) to the host. To clear the data from the screen, click Clear Data. To export this data to a CSV file, click Export CSV. To set an individual path to OFFLINE or ONLINE status, select the path and click the Online or Offline option in the top right-hand corner of the screen. If you select a single LUN in the tree on the left, only the paths for that LUN will be displayed in the Path List on the right.


Hitachi Dynamic Link Manager Operations

y Path View Operation for the CLI


– Allows you to see information about data paths
dlnkmgr view -path

View allows you to see two types of information:
¾ Information about your data paths: -path
¾ Information about the Dynamic Link Manager system settings: -sys
Both of these parameters can take several different values.


y View System Parameters for the CLI


– Allows you to view parameter settings
dlnkmgr view -sys

y The Set command of the CLI is used to change parameters
– The table below lists the various options of the command:

-lb {on|off}: Enables or disables the load balancing function.
-lbtype {rr|exrr}: Selects the load balancing type: rr (round robin; all I/Os are distributed across multiple paths) or exrr (extended round robin).
-ellv log-level: Specifies the level of error information to collect in the error log.
-elfs log-size: Specifies the size of the error log file.
-systflv trace-level: Specifies the trace output level.
-pchk {on [-intvl execution-interval]|off}: Enables or disables path health checking and sets the check interval.
-afb {on [-intvl execution-interval]|off}: Enables or disables automatic failback. The default is off.
-rmlu {on|off}: Enables or disables the path deletion (Remove LU) functionality.
-lic: Updates the license.


y Examples of the CLI Set command:


> dlnkmgr set -pchk on -intvl 2    Turn on path health checking with an interval of 2 minutes

> dlnkmgr set -afb on              Turn on auto failback

y CLI Online/ Offline Examples:

> dlnkmgr offline -pathid 1 Take path 1 offline

> dlnkmgr online -pathid 1 Bring path 1 online



Module Review


Module Review Questions


1. Name the two functions performed by Hitachi Dynamic Link Manager.
2. What should you read before installing Dynamic Link Manager?
3. What is the function of Failover?
4. What does Path HealthCheck do?
5. What does the extended round robin option imply?
6. What does the status Online(E) imply?


Lab Project 7: Hitachi Dynamic Link Manager for Solaris

y Timing and Organization


– Time allotted to complete the project: 60 minutes
– The lab project contains two sections:
y Section 1 is the lab activity
y Section 2 contains the review questions
– Time allotted to go over the review questions: 15 minutes
– The class will be split into lab groups
– Each lab group will have the following lab equipment:
y One Solaris host system running Solaris 8
y One Windows host system running Windows XP
y One TagmaStore Universal Storage Platform

y Upon completion of the lab project, you will be able to do the following:
– Install Dynamic Link Manager on a Sun host system
– Create the Dynamic Link Manager configuration file dlmfdrv.conf
– Use the Dynamic Link Manager GUI to collect status concerning Host
HBA/TagmaStore Universal Storage Platform connections
– Use the Dynamic Link Manager GUI to set a host connection Online or
Offline
– Configure Dynamic Link Manager to automatically failback (bring a
failed path back online) when the condition that caused the failure is
corrected
– See configuration on the next slide



y You will use the configuration created during lab project 2

[Configuration diagram:
- Windows Host 1: Host Mode = C, Host Group = Winlab, LUN Security = Enable; two Emulex HBAs (Target E4 = TID 2 and Target E8 = TID 1) connected to subsystem ports CL1-A/CL2-A and CL1-B/CL2-B.
- Solaris Host 2: Host Mode = 09, Host Speed = Auto, Host Group = Sunlab, LUN Security = Enable, Fabric = Disable, Connection = FC-AL; two JNI HBAs (Target E8 = TID 1 and Target E4 = TID 2) connected to subsystem ports CL1-C/CL2-C and CL1-D/CL2-D.
- TagmaStore subsystem: CU:00, OPEN-9 emulation. LDEVs 00:00 and 00:01 are presented as LUN 0 and LUN 1, with the same LUNs (LDEVs) mapped to ports 1A and 2A. LDEVs 00:02 and 00:03 are presented as LUN 0 and LUN 1, and LDEVs 00:04 and 00:05 as LUN 2 and LUN 3, with the same LUNs (LDEVs) mapped to ports 1C and 2C.]

10. ShadowImage Operations


Module Objectives

y Upon successful completion of this module and any associated lab(s), you will be able to:
– Describe the purpose of Hitachi ShadowImage™ in-system replication
software
– Identify ShadowImage software specifications
– Identify ShadowImage software configurations and components
– Perform the following ShadowImage software operations:
y Set reserved volumes
y Create and delete pairs
y Split pairs
y Suspend pairs
y Resynchronize pairs
y Delete pairs
– Identify ShadowImage software pair status transitions


ShadowImage Software Overview

y Features and Benefits of ShadowImage Software

– Features
y Full copy of a volume at a point in time
y Immediately available for concurrent use by other applications
y No host processing cycles required
y No dependence on operating system, file system, or database
y All copies are additionally RAID protected
– Benefits
y Protects data availability
y Simplifies and increases disaster recovery testing
y Eliminates the backup window
y Reduces testing and development cycles
y Enables non-disruptive sharing of critical information

[Diagram: normal processing of the production volume continues unaffected while the point-in-time copy of the production volume is used for parallel processing.]

ShadowImage software is a storage-based hardware solution for duplicating logical volumes, which reduces backup time and provides point-in-time backup. The primary volumes (P-VOLs) contain the original data; the secondary volumes (S-VOLs) contain the duplicate data. The user can choose to make up to nine copies of each P-VOL using the cascade function. Since each S-VOL is paired with its P-VOL independently, each S-VOL can be maintained as an independent copy set that can be split, resynchronized, and deleted separately from the other S-VOLs assigned to the same P-VOL.
ShadowImage:
¾ Protects information by enabling online backup, rapid recovery of applications, and data warehousing
¾ Protects availability of information to multiple users to allow application development testing and batch processing
¾ Protects access of mission-critical applications to storage by allowing less critical applications to be offloaded


y SET Process (L1 Pairs) and CASCADE Process (L2 Pairs)

[Diagram: write data goes to the P-VOL and is copied asynchronously to up to three Level 1 (L1) S-VOLs; each L1 S-VOL can in turn have up to two Level 2 (L2) S-VOLs through a cascade connection, for a total of 9 copies (three Level 1s and six Level 2s). The copies can be used for testing, development, queries, off-site replication, and backup/restore.]

The Hitachi Lightning 9900™ V Series enterprise storage system contains and manages both the original and copied ShadowImage data. The Hitachi TagmaStore™ Universal Storage Platform system supports a maximum of 16,382 ShadowImage volumes, or 8,191 pairs (8,191 P-VOLs and 8,191 S-VOLs). When ShadowImage pairs include size-expanded LUs, the maximum number of pairs decreases. The Paircreate command creates the first Level 1 "S" volume. The Set command can be used to create a second and third Level 1 "S" volume, and the Cascade command can be used to create the Level 2 "S" volumes off the Level 1 "S" volumes.


y High Performance by Asynchronous Access to Secondary Volumes

Note: Because everything goes through the cache, the P-VOL and S-VOL may not be the same until the pair is split.

– The system returns write complete to the host as soon as the data is written to cache memory
– Data in cache memory is asynchronously written to the P-VOL and S-VOL at the best timing

[Diagram: (1) the host issues a write I/O to cache memory; (2) write complete is returned to the host; (3) the DKAs asynchronously write the data to the P-VOL and its S-VOLs (up to 9) at the best timing. The result is fast response to the host and intelligent asynchronous copy.]


y Mirrored Unit Numbers
– Three bitmaps are maintained for each volume
– Bitmaps record which tracks are dirty
– Bitmaps are numbered 0-2 (Mirror Unit #)

[Diagram: a root volume with three L1 node volumes (MU 0, MU 1, MU 2); each node volume cascades to two L2 leaf volumes (MU 1 and MU 2).]


ShadowImage Software Parameters and Requirements

y Pair Objects and Number of Copies

Pair objects: Logical devices (LDEVs): OPEN-X (e.g., OPEN-3, OPEN-9, OPEN-E, OPEN-V), including custom-size devices (VLL volumes) and size-expanded LUs (LUSE volumes). Devices must be installed and configured. The P-VOL and S-VOL must be the same type (e.g., OPEN-3 to OPEN-3 allowed, OPEN-3 to OPEN-9 not allowed). A VLL P-VOL must be paired with S-VOLs of the same type and same capacity.

Number of copies: Maximum of three copies (S-VOLs) per primary volume (P-VOL). Maximum of one P-VOL per S-VOL (P-VOLs cannot share an S-VOL). An additional six copies can be created by using the ShadowImage cascade function.

To copy the existing data in a mapped external volume using ShadowImage, the emulation type of the mapped external volume also has to be OPEN-V, and it must be paired with a TagmaStore Universal Storage Platform internal volume that has the same capacity as the mapped external volume. The maximum number of concurrent copies is 128, and the IO Suppression mode should be Disabled.


y Maximum Number of Pairs and Other Parameters

Parameter Requirement
Max number of pairs Maximum of 8,191 pairs (8,191 P-VOLs and 8,191 S-VOLs)
can be created per TagmaStore™ Universal Storage
Platform system.
The maximum number of pairs equals the total number of
ShadowImage, ShadowImage for z/OS®, Compatible
Mirroring for IBM® FlashCopy®, Volume Migration and
Cross-System Copy volume pairs.
Max number of S-VOLs 8,191
MU numbers of cascade pairs    For L1 pairs = 0, 1, and 2 (a total of 3 pairs)
                               For L2 pairs = 1 and 2 (a total of 2 pairs)
Combinations of RAID levels    All combinations supported:
(primary-secondary)            RAID1-RAID1, RAID5-RAID5, RAID1-RAID5, RAID5-RAID1

If the TagmaStore Universal Storage Platform cache maintenance is performed
during a period of high I/O usage, one or more ShadowImage pairs may be
suspended. Reduce the I/O load before cache maintenance. For pairs in PAIR status,
host I/Os are copied to the S-VOLs asynchronously. When a failure of any kind
prevents a ShadowImage copy operation from completing, the TagmaStore
Universal Storage Platform will suspend the pair. If an LDEV failure occurs, the
TagmaStore Universal Storage Platform suspends the pair. If a PDEV failure occurs,
ShadowImage pair status is not affected because of the RAID architecture.


ShadowImage Operations

y PAIRCREATE – Modes of Operation
– Initial Copy: the P-VOL starts in SIMPLEX status and remains available to the host for R/W I/O operations while all of its data is copied to the S-VOL; the pair status is COPY(PD) during the initial copy and PAIR when it completes
– Update Copy: a differential data bit map records write I/Os issued to the P-VOL during the initial copy; those updates are duplicated at the S-VOL by update copy operations after the initial copy completes
y Set Reserve Attribute
– This ShadowImage operation reserves a volume so that it can be used as an S-VOL (node and leaf volumes)
– Reserved volumes can be used only as S-VOLs
– When you reserve a volume, any write operations to the reserved volume will be rejected

The ShadowImage paircreate operation establishes the new specified ShadowImage pair(s). The volume that will be the P-VOL must be in the SMPL (simplex) state, and the volume that will be the S-VOL must be SMPL before being added to a ShadowImage pair. You can reserve the S-VOL before creating a pair, or you can create a pair with an unreserved S-VOL.
During Open ShadowImage operations, the P-VOLs remain available to all hosts for R/W I/O operations (except during Reverse Resync). S-VOLs become available for host access only after the pair has been split. ShadowImage S-VOLs are updated asynchronously; for a volume pair with PAIR status, the P-VOL and S-VOL may not be identical.


y Initial Copy Operation

(Diagram: pair status progression during the initial copy – SMPL at the start, COPY(PD) while the initial copy runs, and PAIR when the copy is finished.)

The ShadowImage initial copy operation takes place when you create a new volume
pair. The ShadowImage initial copy operation copies all data on the P-VOL to the
associated S-VOL. The P-VOL remains available to all hosts for read and write I/Os
throughout the initial copy operation. Write operations performed on the P-VOL
during the initial copy operation will be duplicated at the S-VOL by update copy
operations after the initial copy is complete. The status of the pair is COPY(PD) (PD
= pending) while the initial copy operation is in progress. The status changes to
PAIR when the initial copy is complete.
When creating pairs, you can select the pace for the initial copy operation(s): slower,
medium, and faster.
The slower pace minimizes the impact of ShadowImage operations on subsystem
I/O performance, while the faster pace completes the initial copy operation(s) as
quickly as possible.


y Update Copy Operation

(Diagram: host I/Os to the P-VOL are recorded as differential data; the update copy operation applies that differential data to the S-VOL while the pair is in PAIR status.)

The ShadowImage update copy operation updates the S-VOL of a ShadowImage pair after the initial copy operation is complete. Update copy operations take place
only for duplex pairs (status = PAIR). As write I/Os are performed on a duplex P-
VOL, the TagmaStore Universal Storage Platform stores the differential bitmap and
then performs update copy operations periodically based on the amount of
differential data present on the P-VOL as well as the elapsed time between update
copy operations.
Update copy operations are not performed for pairs with the following status:
COPY(PD) (pending duplex), COPY(SP) (steady split pending), PSUS(SP) (quick
split pending), PSUS (split), COPY(RS) (resync), COPY(RS-R) (resync-reverse), and
PSUE (suspended).
Update copy operations do not occur every time a host issues a write I/O operation
to the P-VOL of a ShadowImage pair. ShadowImage update copy operations are
performed asynchronously according to the differential bitmap, which is stored in
shared memory. If shared memory is lost (e.g., offline micro exchange, volatile PS
on), the differential bitmap is also lost. In this case the TagmaStore Universal Storage
Platform treats the entire ShadowImage P-VOL (S-VOL for COPY(RS) pairs) as
differential data and recopies all data to the S-VOL (P-VOL for COPY(RS) pairs) to
ensure proper pair resynchronization. For pairs with COPY(SP) or PSUS(SP) status,
the TagmaStore Universal Storage Platform changes the status to PSUE due to the
loss of the differential bitmap, also ensuring proper resynchronization of these pairs.
If shared memory has been lost, please allow extra time for ShadowImage
operations.


y PAIRCREATE
– During ShadowImage operations, the P-VOLs remain available to all hosts
for R/W I/O operations (except during Reverse Resync)
– S-VOLs become available for host writing only after the pair has been split
– S-VOLs are updated asynchronously - for a volume pair with PAIR status,
the P-VOL and S-VOL may not be identical
– Update Copy operations DO NOT occur every time a host issues a write
I/O to the P-VOL of a ShadowImage Pair - The TagmaStore stores the
differential bitmap and then performs update copy operations periodically
based on the amount of differential data present on the P-VOL as well as
the elapsed time between update copy operations (volumes must be in
PAIR status)

14


y PAIRSPLIT Operation
(Diagram: the primary server host continues updating the P-VOLs while Storage Navigator splits the pair ("break mirror"); the split S-VOL holds the backup data, which the backup server host copies to tape; both volumes show PSUS status.)

The ShadowImage split capability provides point-in-time backup of your data, and
also facilitates real data testing by making the ShadowImage copies (S-VOLs)
available for host access. The ShadowImage pairsplit operation performs all pending
S-VOL updates (those issued prior to the split command and recorded in the P-VOL
track map) to make the S-VOL identical to the state of the P-VOL when the split
command was issued, and then provides full read/write access to the split S-VOL.
You can split existing pairs as needed, and you can also use the pairsplit operation
to create and split pairs in one step.
When the split operation is complete, the pair status changes to PSUS, and you have
full read/write access to the split S-VOL (even though it is still reserved). While the
pair is split, the TagmaStore Universal Storage Platform establishes a differential
bitmap for the split P-VOL and S-VOL and records all updates to both volumes. The
P-VOL remains fully accessible during the pairsplit operation. Pairsplit operations
cannot be performed on suspended (PSUE) pairs.
When splitting pairs, you can select the pace for the pending update copy
operation(s): slower, medium, and faster.
The slower pace minimizes the impact of ShadowImage operations on subsystem
I/O performance, while the faster pace splits the pair(s) as quickly as possible.


y Split Sequence Operations
– The P-VOL remains fully accessible for write I/Os during the split operation; once pending updates are complete, the S-VOL can be made available for host access
– Track bit maps record all updates to the P-VOL and S-VOL while the pair is in split (PSUS) status
– Quick Split sequence:
y Start QUICK SPLIT
y Pair status changes to PSUS(SP) and the S-VOL is available for R/W I/O operations (you get immediate access to the data)
y Pending S-VOL updates (Update Copy) are performed in the background
y When the split completes, status = PSUS
– Steady Split sequence:
y Start STEADY SPLIT
y Pair status changes to COPY(SP) and pending Update Copy operations are performed to the S-VOL
y When complete, status = PSUS and you have full R/W access to the split S-VOL

When splitting pairs, you can also select the split type:
¾ Quick Split
¾ Steady Split.
When the quick split operation starts, the pair status changes to PSUS(SP), and the S-
VOL is available immediately for read and write I/Os (even though it is still
reserved). The TagmaStore Universal Storage Platform performs all pending update
copy operations to the S-VOL in the background. When the quick split is complete,
the pair status changes to PSUS.
When the steady split operation starts, the pair status changes to COPY(SP), and the
TagmaStore Universal Storage Platform performs all pending update copy
operations to the S-VOL. When the Steady Split operation is complete, the pair
status changes to PSUS, and you have full read/write access to the split S-VOL
(even though it is still reserved).


y Resynchronize Operation
(Diagram: before the next backup, Storage Navigator re-synchronizes the split pair; updates made to the P-VOLs on the primary server are copied to the S-VOLs, and the pair returns to PAIR status.)
– When the operation starts, the pair status changes to Copy(RS) or Copy(RS-R)
– When complete, the pair status changes to PAIR again
– Update Copy operations resume after pair status changes to PAIR

17

The PAIRRESYNC command allows users to re-synchronize split ShadowImage pairs (re-establish a ShadowImage pair with a P-VOL and S-VOL). When the operation starts, the pair status changes to COPY(RS) or COPY(RS-R). When complete, the pair status changes to PAIR again, and the TagmaStore Universal Storage Platform then resumes ShadowImage Update Copy operations. When re-synchronizing pairs, users select the pace: Slower, Medium, or Faster.


y Resynchronize Options
– NORMAL resync (syncs the S-VOL with the P-VOL)
y Copy direction = P-VOL to S-VOL; pair status is COPY(RS) until the pair returns to PAIR
y The S-VOL track map is merged into the P-VOL track map, and all flagged tracks are copied from the P-VOL to the S-VOL
y The P-VOL remains available to hosts for R/W I/O operations; host I/O to the S-VOL is stopped
– REVERSE resync (syncs the P-VOL with the S-VOL)
y Copy direction = S-VOL to P-VOL
y The P-VOL track map is merged into the S-VOL track map, and flagged tracks are copied from the S-VOL into the P-VOL
y P-VOL and S-VOL are not available for write I/Os
y Cannot be done on L2 cascade pairs
– REVERSE QUICK RESTORE
y Changes the volume map – swaps the contents of the P-VOL and S-VOL without copying data
y P-VOL and S-VOL are not available for write I/Os
y Cannot be done on L2 cascade pairs
Tracks changed on the S-VOL during the split are lost during a normal resync because P-VOL data takes priority over S-VOL data; the corresponding tracks of the P-VOL overwrite the changed tracks on the S-VOL. During a Reverse Resync, changed S-VOL tracks overwrite P-VOL tracks.

Normal. The normal pairresync operation resynchronizes the S-VOL with the P-VOL. The
copy direction for a normal pairresync operation is P-VOL to S-VOL. The pair status during
a normal resync operation is COPY(RS), and the P-VOL remains accessible to all hosts for
both read and write operations during a normal pairresync.
Quick. The quick pairresync operation speeds up the normal pairresync operation by not
copying the P-VOL data to the S-VOL right away. Instead, the S-VOL is gradually
resynched with the P-VOL as host updates are performed, when intermittent copy is
scheduled (TagmaStore Universal Storage Platform internal), or when a user issues another
pairsplit command for the pair. The P-VOL remains accessible to all hosts for both read and
write operations during a quick pairresync (same as normal pairresync). The S-VOL
becomes inaccessible to all hosts during a quick pairresync operation (same as normal
pairresync).
Reverse. The reverse pairresync operation synchronizes the P-VOL with the S-VOL. The
copy direction for a reverse pairresync operation is S-VOL to P-VOL. The pair status
during a reverse copy operation is COPY(RS-R), and the P-VOL and S-VOL become
inaccessible to all hosts for write operations during a reverse pairresync operation. As soon
as the reverse pairresync is complete, the P-VOL becomes accessible. The reverse
pairresync operation can only be performed on split pairs, not on suspended pairs. The
reverse pairresync operation cannot be performed on L2 cascade pairs. The P-VOL remains
read-enabled during the reverse pairresync operation only to enable the volume to be
recognized by the host. The data on the P-VOL is not guaranteed until the reverse
pairresync operation is complete and the status changes to PAIR.


y Quick Restore Operation

y LDEV pointers are swapped for the two volumes


y Quick Restore performs an automatic resync from the new P-VOL to the S-VOL

19

Quick Restore. The quick restore operation speeds up reverse copy by changing the
volume map in the TagmaStore Universal Storage Platform system to swap the
contents of the P-VOL and S-VOL without copying the S-VOL data to the P-VOL.
The P-VOL and S-VOL are resynchronized when update copy operations are
performed for pairs in the PAIR status. The pair status during a quick restore
operation is COPY(RS-R) until the volume map change is complete. The P-VOL and
S-VOL become inaccessible to all hosts for write operations during a quick restore
operation. Quick restore cannot be performed on L2 cascade pairs.
The P-VOL remains read-enabled during the quick restore operation only to enable
the volume to be recognized by the host. The data on the P-VOL is not guaranteed
until the quick restore operation is complete and the status changes to PAIR.
During a quick restore operation, the RAID levels, HDD types, and Cache
Residency Manager software settings of the P-VOL and S-VOL are exchanged. For
example, if the P-VOL has a RAID-1 level and the S-VOL has a RAID-5 level, the
quick restore operation changes the RAID level of the P-VOL to RAID-5 and of the
S-VOL to RAID-1.


To avoid any performance impact due to the quick restore operation:


¾ Make sure that the P-VOL and S-VOL have the same RAID level and HDD type
before performing the quick restore operation. If you want to restore the
original RAID levels after quick restore, stop host I/Os to the pair, split the
pair, perform the quick restore operation for that pair again, and then restart
the host I/Os to the pair.
¾ Because the Cache Residency Manager settings are exchanged during a quick
restore operation, you must perform one of the two following operations. If
you do not, the change in location of the cache residence areas may degrade I/O performance for the Cache Residency Manager data.
¾ Set the same Cache Residency Manager settings (locations) for the P-VOL and
S-VOL before performing the quick restore operation.
¾ Release the Cache Residency Manager settings of the P-VOL and S-VOL before
the quick restore operation, and then reset the Cache Residency Manager
settings of the P-VOL and S-VOL after the pair changes to PAIR status as a
result of the quick restore operation.


y Quick Restore with or without Swap & Freeze Option
– With Swap & Freeze: the S-VOL remains unchanged after the quick restore operation, because the Swap & Freeze option suppresses update copy operations
– Without Swap & Freeze: the P-VOL and S-VOL are resynchronized when ordinary update copy operations are performed after the quick restore operation
(Diagram: before the quick restore, the PSUS pair holds user data P-VOL = "abcde" and S-VOL = "12345"; the quick restore swaps the contents, giving P-VOL = "12345" and S-VOL = "abcde". With Swap & Freeze the S-VOL keeps "abcde" after the pair returns to PAIR status; without it, a later update copy overwrites the S-VOL so that both volumes contain "12345".)

If you do not want the P-VOL and S-VOL to be resynchronized after the quick
restore operation, you must set the Swap & Freeze option before performing the
quick restore operation. The Swap & Freeze option allows the S-VOLs of a
ShadowImage pair to remain unchanged after the quick restore operation. If the
quick restore operation is performed on a ShadowImage pair with the Swap &
Freeze option, update copy operations are suppressed, and are thus not performed
for pairs in the PAIR status after the quick restore operation. If the quick restore
operation is performed without the Swap & Freeze option, the P-VOL and S-VOL
are resynchronized when update copy operations are performed for pairs in the
PAIR status.
Note: Make sure that the Swap & Freeze option remains in effect until the pair status
changes to PAIR after the quick restore operation.


y QuickResync Specifications
– QuickResync command will be completed less than 1 sec/pair
– This function copies only the Delta Bitmap (very fast)
– The pair quickly enters Pair status and S-VOL is immediately available
to the S-VOL host as Read-Only
– The actual changed tracks will be updated in the background as Update
Copy operations occur
– During the transfer of the Delta Bitmap the pair status will be COPY(RS)

(Diagram: a PSUS pair with read/write access to both the P-VOL and the S-VOL receives a QuickRESYNC request; only the delta bitmap is transferred, the pair enters PAIR status with the S-VOL read-only, and the changed tracks are copied asynchronously in the background.)


y Enhanced ShadowImage Backup With QUICK Functions


– The three Quick functions enhance backup times:
y QuickSPLIT:
Makes it possible to read/write secondary volumes immediately after the
split command completes, even before the background copy has finished

y Quick RESYNC:
Reduces RESYNC (Primary to Secondary) time considerably. If you use
QuickRESYNC and QuickSPLIT together, the wait time to start backup
from the secondary volume will be reduced

y QuickRESTORE:
Reduces RESTORE (Secondary to Primary) time considerably. Users will be
able to restart their jobs on the primary volume sooner. Be aware that this
function exchanges the locations where the data resides

22

Whether a resync/restore/split is performed as a Quick operation depends on the SVP mode setting. For
example, the ShadowImage resync operation will be performed as a Quick Resync
operation if the Quick Resync mode is enabled on the SVP. SVP mode 87 is used to
turn on Quick Resync, SVP mode 80 is used to turn off Quick Restore, and SVP
mode 122 is used to turn off Quick Split.


y Suspend Operation
– The TagmaStore Universal Storage Platform will automatically suspend a
pair under the following conditions:

y When the Open ShadowImage volume pair has been suspended or


deleted from the UNIX®/PC server host using the Hitachi Command
Control Interface (CCI)

y When the TagmaStore Universal Storage Platform detects an error


condition related to an update copy operation

y When the P-VOL and/or S-VOL track map in shared memory is lost

– Pair status changes to PSUE

23

The ShadowImage pairsplit-E operation suspends the ShadowImage copy operations to the S-VOL of the pair. A ShadowImage pair can be suspended by the user at any time. When a ShadowImage pair is suspended (status = PSUE), the TagmaStore Universal Storage Platform stops performing ShadowImage copy operations to the S-VOL, continues accepting write I/O operations to the P-VOL, and marks the entire P-VOL track map as differential data.
When a pairresync operation is performed on a suspended pair, the entire P-VOL is copied to the S-VOL. While the pairresync operation for a split ShadowImage pair can be very fast, the pairresync operation for a suspended pair takes as long as the initial copy operation. The reverse and quick restore pairresync operations cannot be performed on suspended pairs.


y Pairsplit –E
– Suspends the Copy Operations to the S-VOL
– Both volumes enter Suspended Error state

(Diagram: the pairsplit -E operation stops copy operations to the S-VOL; both volumes move from PAIR to PSUE status, and the entire P-VOL track map is marked as differential data.)

When a ShadowImage pair is suspended (status PSUE), the subsystem stops performing ShadowImage copy operations to the S-VOL, continues accepting write I/O operations to the P-VOL, and marks the entire P-VOL track map as differential data.


y ShadowImage Resynchronize Operation on a Suspended Pair


– When a pairresync operation is performed on a suspended pair (PSUE),
the entire P-VOL is copied to the S-VOL (S-VOL is copied to the P-VOL
for the reverse operation)

(Diagram: a NORMAL pairresync on a PSUE pair copies from the P-VOL to the S-VOL; the P-VOL remains available for R/W I/Os while host I/O to the S-VOL is stopped. The entire volume is resynchronized, which is equivalent to and takes as long as the initial copy operation, and both volumes return to PAIR status.)

When a pairresync operation is performed on a suspended pair, the entire P-VOL is copied to the S-VOL. While the pairresync operation for a split ShadowImage pair can be very fast, the pairresync operation for a pair suspended on error will take as long as the initial copy operation.


y ShadowImage Delete Operation - Pairsplit -S


– Immediately stops Update Copy operations to S-VOL
– Status changes to SMPL (simplex)
– If you want to synchronize P-VOL and S-VOL before deleting the pair,
y Wait until all write I/Os to P-VOL complete,
y Take P-VOL offline to prevent it from being updated,
y Split the pair using Pairsplit (copies all pending updates to S-VOL)
y When pair enters PSUS delete the pair using Pairsplit -S

(Diagram: Storage Navigator issues the pairsplit -S operation; copy operations to the S-VOL stop, pending updates are discarded, both volumes go from PAIR to SMPL status, and the S-VOL remains unavailable for write I/O unless it is unreserved.)

The PAIRSPLIT -S operation (delete pair) immediately stops the ShadowImage Update Copy operations to the S-VOL of the pair, and changes the status of both
volumes to SMPL. This operation does not delete any data. A pair can be deleted at
any time except during a QUICK SPLIT operation. During the operation, the S-Vol is
still not available for Write I/Os until the Reserve attribute is reset. During the
operation, all pending update copy operations are discarded.
The S-Vol (of a duplex pair – PAIR) may not be identical to its associated P-Vol (due
to Asynchronous Update Copy operations), therefore the user may want to
synchronize the P-VOLs and S-VOLs before doing a delete.
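The same sequence can also be driven with the RAID Manager CCI commands covered in the next module. A minimal sketch, assuming a group named G1 defined in the HORCM configuration files (the group name and the timeout value are illustrative examples only):

# pairsplit -g G1                     Copy pending updates to the S-VOL; pair status goes to PSUS
# pairevtwait -g G1 -s psus -t 300    Wait until the pair reaches PSUS (timeout value illustrative)
# pairsplit -g G1 -S                  Delete the pair; both volumes return to SMPL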


ShadowImage Pair Status Transitions

y Relationship Between the Pair Status and ShadowImage Operations

27

y Descriptions

28


y Descriptions (continued)

29


Module Review

30

Module Review Questions


1. In an open systems environment, how many L1 point-in-time copies can be made
from a P-Vol (referred to as an _______volume)?
2. In an open systems environment, how many L2 point-in-time copies can be made
from a L1 S-VOL?
3. What are the two ShadowImage copy modes of operation?
4. ShadowImage operations are (a) asynchronous or (b) synchronous?
5. During Open ShadowImage operations, the P-VOLs remain available to all hosts
for R/W I/Os (except during _____________)?
6. S-VOLs become available for host access only after the pair has been
_____________?
7. Describe the difference between the pairsplit and pairsplit –S functions.
8. What are the three ways of re-synchronizing a Split Pair?


Lab Project 8: ShadowImage GUI

y Timing and Organization


– Time allotted to complete the project: 1 Hour, 30 minutes
– The lab project contains three sections:
y Section 1 is the lab activity
y Section 2 contains the answers to the embedded lab questions
y Section 3 contains the review questions
– Time allotted to go over the review questions: 15 minutes
– The class will be split into lab groups
– Each lab group will have the following lab equipment:
y One Solaris host system running Solaris 8
y One Windows host system running Windows XP
y One TagmaStore Universal Storage Platform

31

y Upon completion of the lab project, you will be able to do the following:
– Create a pool of Reserved LDEVs to be used as ShadowImage S-VOLs
– Create three Level 1 (L1) S-VOLs from a single P-VOL
– Create a ShadowImage pair
– Split a ShadowImage pair putting the two volumes into suspended state
– Resynchronize a suspended pair putting the volumes back into Pair
status
– Create a Level 2 (L2) S-VOL by cascading the new L2 volume from a L1
volume
– Create an L1/L2 pair simultaneously from a Root P-VOL
– Display configuration data of a ShadowImage pair
– Display the command History and Pair status of a ShadowImage pair
– Split a ShadowImage pair putting the volumes back into Simplex status
– Remove a Reserved volume from the pool of Reserved ShadowImage
volumes
32


y Configuration 3: Simultaneously Create an L1/L2 Pair
– You will create one L1/L2 pair from a P-VOL (you will use three new LDEVs from the pool that you mapped and reserved earlier in the lab project).

                                 East Coast Class          West Coast Class
Root P-VOL (port CL3-B)          LUN 000, LDEV 05:0D       LUN 020, LDEV 05:2D
L1 S-VOL, MU 0 (port CL4-B)      LUN 000, LDEV 05:0E       LUN 020, LDEV 05:2E
L2 S-VOL, MU 1                   LUN 001, LDEV 05:0F       LUN 021, LDEV 05:2F

11. ShadowImage RAID Manager CCI Operations


Introduction to ShadowImage RAID Manager CCI Operations

y Objectives
– In this module, and any associated lab(s), you will learn to:
y Explain the purpose of RAID Manager Command Control Interface (CCI)
y Identify the main components of RAID Manager
y Explain how RAID Manager is set up
y State the purpose of the four sections of the HORCM configuration files
y Install the RAID Manager CCI on Windows and Sun Solaris systems
y Perform the following ShadowImage operations using CCI commands:
– raidscan
– paircreate
– pairdisplay
– pairsplit
– pairresync


RAID Manager CCI Overview

y RAID Manager CCI


– Hitachi TagmaStore™ Universal Storage Platform optional feature
– Installed on the Host
– Manages ShadowImage pairs
– Interfaces with high availability software
y Supported CCI Platforms
– Solaris
– HP/UX
– AIX
– Digital UNIX
– Tru64
– DYNIX
– Windows NT/2000
– Red Hat Linux
– IRIX
3

With proper planning, RAID Manager can become an integral part of an organization’s business recovery solution. RAID Manager CCI is an optional feature on
the TagmaStore Universal Storage Platform systems. It can be used with TrueCopy
basic, TrueCopy async, and ShadowImage. RAID Manager runs on the server, and
provides the interface with an organization’s high availability software. RAID
Manager CCI commands can also be used in scripts.


y Configuration of ShadowImage and RAID Manager CCI

(Diagram: a server runs the server software and application together with HORCM Instance0 and Instance1; HORCM commands from the management interface are issued to the instances, which communicate with the TagmaStore Universal Storage Platform through the command device to control the primary and secondary volumes.)


y Host Running RAID Manager
– The host running RAID Manager does not require access to the P-VOLs or S-VOLs
– This allows an Administrator Host to control pairs and keep the pair controls separate from the Production systems
(Diagram: an Administrator Host runs the HORCM instances and their configuration files and communicates with the TagmaStore Universal Storage Platform through the command device, while the Production Host with its server software and application accesses the P-VOL and S-VOL.)

Volumes supported by RAID Manager CCI include:


¾ OPEN-3 (2.4GB)
¾ OPEN-8 (7.3GB)
¾ OPEN-9 (7.3GB)
¾ OPEN-E (14.5GB)
¾ OPEN-L (34.0GB)
¾ OPEN-V (48MB – 2TB)
¾ VLL (CVS) and LUSE


RAID Manager Components

y HORCM Instance
– Service or Daemon
– Used to communicate with TagmaStore Universal Storage Platform and
with remote server
y HORCM Configuration File
– Defines communication paths
– Defines LUNs (volumes) to be controlled
y HORCM Commands
– Monitor and control TrueCopy and ShadowImage operations
y HORCM Command Device
– Used to accept commands from HORCM
– Used to report command results to HORCM
Note: TrueCopy was originally called HORC (Hitachi Open Remote Copy),
and HORCM is Hitachi Open Remote Copy Manager.
ShadowImage uses the same CCI.
6

A HORCM “instance” is the RAID Manager software providing communications between the TagmaStore Universal Storage Platform system and the remote server. The user runs two instances: one is the ‘sending’ instance, managing the P-VOLs, and the other is the ‘receiving’ instance, managing the S-VOLs. Instances negotiate with TCP/IP (out-of-band communication).
HORCM configuration files – each instance requires its own configuration file.
HORCM configuration files define the communication paths, and the LUNs to be
controlled.
HORCM Control Device (Command Device) is used to:
¾ Accept commands from HORCM
¾ Direct the commands to the appropriate device
¾ Report command results to HORCM
The Command Device is a user selected dedicated logical volume on the TagmaStore
Universal Storage Platform that functions as the interface to the RAID Manager
software on the server. This device is used only by the storage subsystem and is
blocked for I/O from the users. The Command Device is used for In-band
communication to the subsystem.


A Command Device can be any Open-x emulation type device, and can be a
VirtLUN (CVS) device as small as 36MB. LUSE volumes cannot be used as
Command Devices. Command Devices must not contain any data.
It may be a good idea to define a 2nd Command Device for failover purposes. When
CCI receives an error notification in reply to a Read or Write request, CCI will attempt
to switch to the other command device. If there is not another command device to
switch to, all TrueCopy and ShadowImage operations will terminate abnormally.


The basic unit of the CCI software structure is the CCI instance. Each instance uses a
defined configuration file to manage volume relationships while maintaining
awareness of the other CCI instances.


RAID Manager Requirements

y Requirements for RAID Manager Operation


– ShadowImage feature enabled (license installed) on the TagmaStore
– Supports ShadowImage cascaded L2 Pairs

y Following must be performed from Storage Navigator or SVP


– LUN Mapping
– Command Device must be defined (one or two Command Devices
are possible)


RAID Manager Configuration Files

y A text file located in the:


– /etc directory (UNIX)
– /HORCM/etc and /WINNT (NT) or /WINDOWS (XP)

y Provides a definition of Hosts, Groups, Volumes and Command Device to


the RAID Manager instance

y Assigned numbers are arbitrary – class examples are:


– horcm0.conf controls P-VOLs
– horcm1.conf controls S-VOLs

y Contains four parameters that define the RAID Manager Environment


– HORCM_MON
– HORCM_CMD
– HORCM_DEV
– HORCM_INST


ShadowImage and CCI Configuration Files

y HORCM_MON: IP Address
– The Ip_address field identifies the server running the HORCM instance. Use the IP address or the alias, for example 10.85.6.71, SunserverA, or localhost.

HORCM_MON
#Ip_address      service    poll(10ms)   timeout(10ms)
SunserverA       horcm0     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          1     0
Group1       Disk2      CL1-A-01   1          2     0
Group2       Disk3      CL2-C-02   6          3     0
HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm1
Group2       SunserverA   horcm1

The monitor parameter (HORCM_MON) defines the following values:


¾ Ip_address: The IP address of the local host.
¾ Service: The port name assigned to the CCI service (registered in the
/etc/services file). The service parameter defines the CCI instance that runs on
the local host. If a port number is specified instead of a port name, the port
number will be used.
¾ Poll: The interval for monitoring paired volumes. To reduce the HORCM
daemon load, make this interval longer. If set to -1, the paired volumes are not
monitored. The value of -1 is specified when two or more CCI instances run on
a single machine.
¾ Timeout: The time-out period of communication with the remote server.


y HORCM_CMD
– The dev_name field identifies the path to the Command Device, for example \\.\PhysicalDrive6 (Windows) or /dev/rdsk/c0t5d2s2 (Solaris).

HORCM_MON
#Ip_address      service    poll(10ms)   timeout(10ms)
SunserverA       horcm0     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          1     0
Group1       Disk2      CL1-A-01   1          2     0
Group2       Disk3      CL2-C-02   6          3     0
HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm1
Group2       SunserverA   horcm1


HORCM_INST identifies the remote HORCM instance that manages the alternate
half of each group’s mirror set.
Instance parameter (HORCM_INST) defines the network address (IP address) of
the remote server.
The following values are defined in the HORCM_INST parameter:
¾ dev_group: The group name described in dev_group of HORCM_DEV.
¾ ip_address: The network address of the specified remote server.
¾ service: The port name assigned to the HORCM communication path.


y Device List Relationship
– There is a 1-to-1 relationship between entries of the two configuration files: the dev_group and dev_name entries must be the same for corresponding lines (more on this later).

horcm0.conf
HORCM_MON
#Ip_address      service    poll(10ms)   timeout(10ms)
SunserverA       horcm0     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          1     0
Group1       Disk2      CL1-A-01   1          2     0
Group2       Disk3      CL2-C-02   6          3     0
HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm1
Group2       SunserverA   horcm1

horcm1.conf
HORCM_MON
#Ip_address      service    poll(10ms)   timeout(10ms)
SunserverA       horcm1     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          4     0
Group1       Disk2      CL1-A-01   1          5     0
Group2       Disk3      CL2-C-02   6          6     0
HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm0
Group2       SunserverA   horcm0


Absolute LUN Numbers

Earlier versions of CCI used absolute LUNs to scan a port. Because the LUNs in a host group are mapped for the host system, the target ID and LUN used by a CCI command may be different from the target ID and LUN shown by the host system. Use the target ID and LUN indicated by the raidscan command.
Example of raidscan Command
raidscan –p CL1-A -fx
Port# / ALPA TID# LU# NUM(LDEV#)
CL1-A / DE/ 1 5 1(05)
CL1-A / DE/ 1 7 3(25, 26, 27)
For ShadowImage, raidscan displays the MU# for each LUN (e.g., LUN 7-0, 7-1, 7-2).
When using the latest version of CCI you can specify the Host Group number and
then the LUN number. This will not require the absolute LUN number.


Command Device

y Command Device Overview


– The Command Device accepts ShadowImage read and write
commands that are executed by the TagmaStore Universal Storage
Platform

– The Command Device also returns read requests to the UNIX®/PC


host

– The volume designated as the Command Device is used only by the


TagmaStore Universal Storage Platform and is blocked from the user

– Make sure that the volume to be selected as the Command Device


does not contain any user data. If so it will be inaccessible to the
UNIX®/PC server host

17

The command device accepts TrueCopy and ShadowImage read and write
commands that are executed by the TagmaStore Universal Storage Platform system.
The command device also returns read requests to the UNIX/PC host. The volume
designated as the command device is used only by the TagmaStore Universal
Storage Platform and is a raw device. Make sure that the volume to be selected as
the command device does not contain any user data. If so it will be inaccessible to
the UNIX/PC server host. The command device can be any OPEN-x device (e.g.,
OPEN-3, OPEN-8) that is accessible by the host.
A Virtual LVI/LUN volume as small as 36 MB (e.g., OPEN-3-CVS) can be used as a
command device. A LUSE volume cannot be used as a command device. For Solaris
operations the command device must be labeled. To enable dual pathing of the
command device under Solaris systems, make sure to include all paths to the
command device on a single line in the HORCM_CMD section of the configuration
file. Putting the path information on separate lines may cause parsing issues, and
failover may not occur unless the HORCM startup script is restarted on the Solaris
system.
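As a minimal sketch of that dual-pathing rule (the device file names below are illustrative only; substitute the paths reported on your host), both paths to the command device are placed on a single HORCM_CMD line:

HORCM_CMD
#dev_name
/dev/rdsk/c1t0d1s2 /dev/rdsk/c2t0d1s2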


y Identifying the Command Device


– Solaris
y Use the format command and look for the disk with the CM string
For Example: c2t0d1 <HITACHI – Open-3 – CVS – CM>
This identifies device file /dev/rdsk/c2t0d1s2, and this
string goes in the HORCM configuration file.

– Windows
y Use the CCI raidscan –x findcmddev 0,32 command
Returns a string similar to the following: \\.\PhysicalDriveX
This string goes in the HORCM configuration file, where X is the disk number.

Note: The 0,32 specifies a range of disks to search and it could specify a larger
range than the example.

18


Mirror Unit Numbers

y Mirror Unit Numbers


– Three bit maps exist for each volume
– Bit map is used to track changes to the volume
– Three Level 1 volumes are possible
– Six Level 2 volumes are possible (two per Level 1)
(Diagram: the root volume's bit maps 0, 1, and 2 correspond to up to three Level 1 mirrors (node volumes); each node volume's bit maps correspond to up to two Level 2 mirrors (leaf volumes). The numbers represent the bit maps.)

MU Number
¾ Used in “HORCM Config File” to define and control the ShadowImage paired
volumes. Only ShadowImage uses the MU Number.
¾ Corresponds to the “Delta Table,” which transfers differential data between the P-VOL and S-VOL.


ShadowImage Configuration Example

y L1 Pair Configuration (see next slide for configuration file entries)
– The actual configuration: one P-VOL (CL1-A, TID 1, LUN 0) paired with two Level 1 S-VOLs (CL1-C, TID 0, LUN 4 and CL1-C, TID 0, LUN 5). If you wanted three Level 1 volumes, another mirror of the P-VOL would be required.
– In the horcm configuration files (horcm0.conf controls the P-VOL side, horcm1.conf controls the S-VOL side), a 1-to-1 ratio of the pairs must be maintained. This is done by specifying a mirror (alias) entry for the P-VOL: CL1-A, TID 1, LUN 0 is listed once with MU 0 (paired with LUN 4) and once with MU 1 (paired with LUN 5).


y L1/L2 Pair Configuration (see next slide for configuration file entries)
– P-VOL: CL1-A-01, TID 1, LUN 0
– L1 S-VOLs: CL1-C-01, TID 0, LUN 4 and CL1-C-01, TID 0, LUN 5
– L2 S-VOLs: CL1-C-01, TID 0, LUN 6 (cascaded from LUN 4) and CL1-C-01, TID 0, LUN 7 (cascaded from LUN 5)

y L2 Configuration Files

horcm0.conf
HORCM_MON
#ip_address      service    poll(10ms)   timeout(10ms)
10.15.93.201     horcm0     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d1s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Level1       vol1       CL1-A-01   1          0     0
Level1       vol2       CL1-A-01   1          0     1
Level2       Vol3       CL1-C-01   0          4     1
Level2       Vol4       CL1-C-01   0          5     1
HORCM_INST
#dev_group   ip_address     service
Level1       10.15.93.201   horcm1
Level2       10.15.93.201   horcm1

horcm1.conf
HORCM_MON
#ip_address      service    poll(10ms)   timeout(10ms)
10.15.93.201     horcm1     1000         3000
HORCM_CMD
#dev_name
/dev/rdsk/c0t5d1s2
HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Level1       vol1       CL1-C-01   0          4     0
Level1       vol2       CL1-C-01   0          5     0
Level2       Vol3       CL1-C-01   0          6     0
Level2       Vol4       CL1-C-01   0          7     0
HORCM_INST
#dev_group   ip_address     service
Level1       10.15.93.201   horcm0
Level2       10.15.93.201   horcm0


Set Environment Variables

y HORCMINST Variable
Determines what instance receives the CCI commands.

UNIX (SOLARIS) if using csh or tcsh


To set variable:
setenv HORCMINST X (X=instance#) For Example: setenv HORCMINST 0
To clear variable:
unsetenv HORCMINST

UNIX (SOLARIS) if using sh or ksh


To set variable:
HORCMINST=X (X=instance#) For Example: HORCMINST=0
export HORCMINST
To clear variable:
unset HORCMINST

Windows NT / 2000
C:\HORCM\etc> set HORCMINST=X C:\HORCM\etc> set HORCMINST=0

Note: When running multiple HORCM instances, each instance requires a unique HORCMINST variable.
26

The HORCMINST variable determines the instance that receives the CCI
commands.
The HORCMINST variable specifies the instance # when using 2 or more CCI
instances on the same server.
The command execution environment and the HORCM activation environment
require an instance # to be specified.


y HORCC_MRCF Variable
Identifies the instance as a ShadowImage instance.

UNIX (SOLARIS) if using csh or tcsh


To set variable:
setenv HORCC_MRCF 1
To clear variable:
unsetenv HORCC_MRCF

UNIX (SOLARIS) if using sh or ksh


To set variable:
HORCC_MRCF=1
export HORCC_MRCF
To clear variable:
unset HORCC_MRCF

Windows NT / 2000
C:\HORCM\etc> set HORCC_MRCF=1

27

The HORCC_MRCF variable identifies the instance as a ShadowImage instance. When HORCC_MRCF is set to 1, CCI commands issued from that command execution environment operate on ShadowImage (MRCF) pairs rather than TrueCopy pairs.


Services File

y Register the Port Name in the Services File

Solaris Systems
/etc/services
The file is actually linked to
/etc/inet/services.

NT / 2000 Systems
Note: For Windows, there must be a
\WINNT\system32\drivers\etc\services blank line at the end of the
Windows XP Services file or HORCM will
\WINDOWS\system32\drivers\etc\services not start.

28
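As an illustration only (the port numbers shown are arbitrary examples, not required values), the registered entries are typically one UDP port per HORCM instance, matching the service names used in the HORCM_MON sections of the configuration files:

horcm0      11000/udp         # RAID Manager instance 0
horcm1      11001/udp         # RAID Manager instance 1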


Starting RAID Manager

y Starting RAID Manager

SOLARIS
MANUAL START
# horcmstart.sh Starts 1 instance of HORCM
# horcmstart.sh 0 1 Starts both instances of HORCM

Windows NT / 2000
MANUAL START
C:\HORCM\etc> horcmstart Starts 1 instance of HORCM
C:\HORCM\etc> horcmstart 0 1 Starts both instances of HORCM

29


Shutting Down RAID Manager

y Shutting Down RAID Manager

SOLARIS
MANUAL SHUTDOWN
# horcmshutdown.sh Stops 1 instance of HORCM
# horcmshutdown.sh 0 1 Stops both instances of HORCM

Windows NT / 2000
MANUAL SHUT DOWN
C:\HORCM\etc> horcmshutdown Stops 1 instance of HORCM
C:\HORCM\etc> horcmshutdown 0 1 Stops both instances of HORCM

30

Aside from shutting down the RAID Manager instances for normal reasons, there is
a recommended way of correcting errors in the horcm configuration files.
When you need to modify the horcm configuration files (for whatever reason) what
you should do is:
¾ Shut down the HORCM Instances
¾ Modify the horcm configuration files
¾ Start Up the HORCM Instances
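On a Solaris host, that maintenance sequence is simply (instance numbers 0 and 1 follow the class examples):

# horcmshutdown.sh 0 1        Stop both HORCM instances
  (edit /etc/horcm0.conf and /etc/horcm1.conf as needed)
# horcmstart.sh 0 1           Restart both HORCM instances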


RAID Manager CCI Commands

y CCI Commands
CCI Command Description
horctakeover (HORC only) The host executing horctakeover takes ownership of the pair.
paircurchk (HORC only) Checks the consistency of the data on the secondary volume.
paircreate Creates a pair.
pairsplit Splits a pair.
pairresync Resynchronizes a pair.
pairevtwait Event waiting command.
pairmon Monitors a pair and reports changes in the pairs status.
pairvolchk Checks the attributes of a volume connected to the local or remote
hosts.
pairdisplay Confirms the configuration of a specified pair.
raidscan Lists the SCSI/fibre port, target ID, LUN number, and LDEV
status.
raidar Reports the I/O activity of a specified LDEV.
raidqry Confirms the connection between the storage system and the open-systems host.
horcctl Displays the internal trace control parameters.

31

y Windows Specific Commands


CCI Command Description
findcmddev Searches for the command device
drivescan Displays the relationship between the hard disk number
and the physical drive
portscan Displays the physical device on a designated port
sync Flushes the remaining unwritten data to the physical drive
mount Mounts the specified device
umount Unmounts the specified device
setenv Sets the environmental variables
usetenv Deletes the environmental variables
env Displays the environmental variables
sleepetenv Sets the sleep time for a specified environment variable

32


y raidscan
Displays configuration and status information of the specified port.
For example: raidscan -p CL1-A -fx returns something like the following:
PORT# /ALPA/C ,TID#, LU# Num(LDEV#..) P/S, Status, LDEV#, P-Seq#, P-LDEV#
CL1-A / e8 / 5, 0, 0-0 .1(503)........SMPL
The slide annotates the output fields as follows:
– Volume status (SMPL)
– LDEV number (05:03); note: CU 00 is not listed
– Port host group number
– CCI-assigned LUN number: first digit = LUN number, second digit = bit map number
– Target ID as set by the CCI (most likely different from the Fibre Channel-assigned TID based on the ALPA)
– The physical slot of the HBA
– Arbitrated Loop Physical Address (if direct-connect, this is the Fibre address)
– TagmaStore Universal Storage Platform host port (CL1-A)
33


y paircreate
-g Specifies group name from horcmX.conf
- All pairs in group created unless restricted by another option
-d Specifies device name from horcmX.conf
- Restricts operation to one TID/LUN
- Additional options to improve security
-f Specifies fence level - data, status, never, or async + CTGID
Note: Only used with TrueCopy.
-vl Specifies that local device is primary (sending device)
-vr Specifies that remote device is primary (receiving device)
-c Specifies number of tracks to be copied during initial copy operation
-nocopy Pair is created without copying primary to secondary (dangerous!)
-m <mode> Mode=cyl / Mode=trk

Avoid using the –vr option.


Use the pairresync -restore to cause S-VOL to copy to P-VOL.
34

Paircreate generates a new volume pair from two unpaired volumes. It can be used
to create either a paired logical volume or a group of paired volumes. It allows you
to specify the direction (local or remote) of the pair (-vl or –vr).
Paircreate example:
C:\HORCM\etc> paircreate -g vg01 -d labadb1 -vl -f never


y Pair Direction
– Warning! Pair Create will overwrite the S-VOL
y To create this pair from Instance #0, use -vl
y To create this pair from Instance #1, use -vr
– Best Practice
y Always use Instance #0 (HORCMINST=0) and always use -vl

35

y paircreate examples
– Create a single pair using the –d option (pairs only the specified volumes):
paircreate -g G1 -d L1-MU0 -vl (see slide 25)

– Create multiple pairs using only the –g option (pairs all volumes in the group):
paircreate -g G1 -vl (see slide 25)

36


y pairdisplay
-g Specifies group name from horcmX.conf
– All volumes in the group checked
-d Specifies device name from horcmX.conf
– Restricts operation to one TID/LUN
-c Checks the pair path and displays illegal pair connections
-l Displays the volume pair status of the 'local' host
-fx Shows LDEV numbers in hexadecimal
-fc Shows the % of pair synchronization
-fm Displays the bitmap mode (Cyl/Trk)

37
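For example, to check the pairs created earlier (the group and device names are the ones used in the paircreate examples in this module):

C:\HORCM\etc> pairdisplay -g G1 -fc              Show all pairs in group G1 with % synchronization
C:\HORCM\etc> pairdisplay -g G1 -d L1-MU0 -fx    Show one pair with LDEV numbers in hexadecimal

The output lists, for the local (L) and remote (R) volume of each pair, the port/TID/LUN, the LDEV number, and the pair status.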

y pairsplit
-g Specifies group name from horcmX.conf
– All pairs in group split unless restricted by another option
-d Specifies device name from horcmX.conf
– Restricts operation to one TID/LUN
– Additional options to improve security
-r Splits the pair and puts the secondary volume into read-only mode
-rw Splits the pair and puts the secondary volume into read & write mode
-S Puts the primary & secondary into simplex status (stops bit map tracking)
-R Forces the secondary volume into error status (stops bit map tracking)

38


y pairsplit examples
– Split a single pair using the –d option (splits only the specified volumes)
y Suspends the update copy – both volumes enter suspended status
y Changes are tracked in the bit maps
pairsplit -g G1 -d L1-MU0

– Split multiple pairs using only the –g option (splits all pairs in the group)
y Suspends the update copy – both volumes enter suspended status
y Changes are tracked in the bit maps
pairsplit -g G1

– Split a single pair using the –S option


y Stops the update copy – both volumes enter simplex status
pairsplit -g G1 -d L1-MU0 -S

39

y pairresync
-g Specifies group name from horcmX.conf
– All pairs in the group are resynchronized unless restricted by another option

-d Specifies device name from horcmX.conf


– Restricts operation to one TID/LUN
– Additional options to improve security

-c Specifies number of tracks to be copied during initial copy operation

40
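For example, to resynchronize the pair split in the pairsplit examples above (normal direction, P-VOL to S-VOL):

C:\HORCM\etc> pairresync -g G1 -d L1-MU0

Only the differential data recorded in the bit maps since the split is copied to the S-VOL.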


y Other Useful Operations


paircreate -g G1 -d L1-MU0 -vl -split
Splits the pair immediately after paircreate.

pairresync -g G1 -d L1-MU0 -restore


Used to copy differential data from S-VOL to P-VOL.

If P-VOL failed and the customer continued to work in S-VOL, use this
option (-restore) to copy all of S-VOL to P-VOL when P-VOL is repaired.
* The pair would be split with the pairsplit –E command (manually or by
HORCM if an error is detected) and the volumes would be in PSUS and
SSUS status.

41
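Putting this together, a recovery after a P-VOL failure (using the group and device names from the earlier examples) would look roughly like the following; the -E split may also be issued automatically by HORCM when it detects the error:

pairsplit -g G1 -d L1-MU0 -E            Pair is error-suspended; the volumes go to PSUS/SSUS and work continues on the S-VOL
(repair the P-VOL)
pairresync -g G1 -d L1-MU0 -restore     Copy the S-VOL data back to the repaired P-VOL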

RAID Manager Considerations

RAID Manager Considerations

y HORCM.CONF files must be accurate:


– Group & Device names must be identical in both config files
– CMD Device must be correct in each config file
– Correct IP address, host names and service name defined in config
files
y Watch out for leftover spaces!
y Remember to correctly set the HORCMINST and HORCC_MRCF
environment variables
y You have to use Storage Navigator/SVP to:
– Create Host Groups and map LUNs
– Create the Command Device

42


y Configuration File Considerations


– Anytime you make a change/update to the config file, you must
stop the HORCM instance then restart the instance
– Watch out for extra spaces and tabs in the file
– Group names and device names are case sensitive!
– Group names and device names must not only be unique within
the file but must match in both files
– Make sure the HORCM config files have been saved in the
correct OS directory

43

If you use the device name "DeviceOne" in HORCM0.conf, then it must be "DeviceOne" in HORCM1.conf, NOT "deviceone" or "Deviceone" or "DEVone",
etc. Also, remember this rule applies when referring to that device at the command
prompt or in a script!!! Many users tend to forget this when issuing commands via
the command prompt!
By default, the sample HORCM.conf file is in the /horcm/etc directory. Use this to
create HORCM0 and HORCM1 configuration files.
On SUN, the HORCM0.conf and HORCM1.conf need to be in the /etc directory.
On WINDOWS, the HORCM0.conf and HORCM1.conf need to be in the /winnt
directory.
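To tie these considerations together, here is a minimal sketch of a matched pair of ShadowImage configuration files. The group and device names (G1, L1-MU0), port, LUN, and MU number follow the examples and lab mapping used in this module; the service names, poll/timeout values, target ID, and command device path are illustrative assumptions and must be replaced with the values reported by raidscan and used in your own environment.

horcm0.conf (the instance that manages the P-VOL side):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
localhost      horcm0     1000          3000

HORCM_CMD
#dev_name (example Windows command device path)
\\.\PhysicalDrive3

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
G1            L1-MU0      CL1-A    0           2      0

HORCM_INST
#dev_group    ip_address    service
G1            localhost     horcm1

horcm1.conf (the instance that manages the S-VOL side) has the same structure: its HORCM_MON service is horcm1, its HORCM_DEV entry points at the S-VOL (for example CL2-A, LU# 3), and its HORCM_INST entry refers back to the horcm0 service.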

Troubleshooting

Troubleshooting

y The Instance won’t start… check to see if:


– The CMD device path is correct in the config file
– The CMD device is labeled; if not, label it
– The config file is in the correct OS directory
y Pairdisplay fails… check to see if:
– The environment variables are set:
y HORCMINST = 0 or 1
y HORCC_MRCF = 1

44

Note: For TrueCopy operations, un-set the HORCC_MRCF environment variable.
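For example (ordinary shell and command-prompt syntax, not CCI-specific):

SOLARIS:             # unset HORCC_MRCF
Windows NT / 2000:   C:\HORCM\etc> set HORCC_MRCF=

Leaving HORCC_MRCF set to 1 directs the CCI commands at ShadowImage (MRCF) pairs rather than TrueCopy pairs.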


y Paircreate fails… check to see if:


– Target IDs are correct in the config file. Use raidscan.
– LUNs are correct in the config file. Remember that LUNs are
displayed in HEX in LUN Manager. So LUN 10 is actually LUN 16
in the config file.
– Remember the TagmaStore Universal Storage Platform has
absolute LUNs to consider. Use raidscan to find the “absolute
LUN” value
– S-VOLS are not reserved through the GUI
– Check the IO suppression mode

45
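As a quick worked example of the hexadecimal point above, using the LUN range from the lab mapping in this module:

LUN shown in LUN Manager (hex)    Value to enter in the config file (decimal)
0A                                10
0B                                11
10                                16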


y Troubleshooting Information

Problem                     Information Type      Filename
Starting CCI                HORCM Startup Log     $HORCM_Log/horcm_HOST.Log
Problem with CCI commands   Command Log           $HORCC_Log/horcc_HOST.Log
Command errors              Error Log             $HORCM_Log/horcmlog_HOST/horcm.log

$HORCM_Log default directory: /HORCM/Logn/Curlog (n = instance #)
$HORCC_Log default directory: /HORCM/Logn

46

If an error occurs in RAID Manager, you can check the following logs:
¾ RAID Manager Instances Failed To Start – check the HORCM Startup Log
¾ RAID Manager Command Failed – check the Command Log and the Error Log

Module Review

Module Review

47

Module Review Questions


1. What are the different components of the HORCM configuration files?
2. How many bit maps are maintained for each volume?
3. How do you find out the Target ID to use in the HORCM configuration files?
4. What environment variables must be set to run ShadowImage?
5. What OS file needs the UDP port names and numbers?
6. What is the purpose of a CMD device?
7. What command will allow a user to delete a ShadowImage pair?
8. What option is used when you want to do an S-VOL to P-VOL resync?

Lab Project 9: ShadowImage CCI

Lab Project 9: ShadowImage CCI

y Timing and Organization


– Time allotted to complete the project: 1 Hour, 30 minutes
– The lab project contains three sections:
y Section 1 is the lab activity
y Section 2 contains the answers to the embedded lab questions
y Section 3 contains the review questions
– Time allotted to go over the review questions: 15 minutes
– The class will be split into lab groups
– Each lab group will have the following lab equipment:
y One Solaris host system running Solaris 8
y One Windows host system running Windows XP
y One TagmaStore Universal Storage Platform

48

y Upon completion of the lab project, you will be able to do the following:
– Install version 01-15-03/04 of the CCI (RAID Manager) on a Windows
Host system
– Using Storage Navigator, configure a TagmaStore Universal Storage
Platform subsystem LDEV as a CCI Command Device and map it to your
host system ports
– Create and configure the Hitachi Open Remote Copy Manager
(HORCM) configuration files (horcm0.conf and horcm1.conf) to support
the ShadowImage pair operations outlined by this lab project.
– Create three ShadowImage L1 S-VOLs from a P-VOL
– Split a ShadowImage pair putting the two volumes into suspended
(simplex) state
– Resynchronize a suspended pair putting the volumes back into Pair status
– Create an L1/L2 pair simultaneously from a P-VOL.
– Display the Pair status of a ShadowImage pair.

49


y LUN Mapping
– You will first map ten (10) LDEVs of Control Unit 5 to ports CL1-A and
CL2-A as LUNs 2 through B

Control Unit (CU)   LDEV    LUN Number   Map to Port      Intended Use (see the next four slides for configurations)
05                  05:03   2            CL1-A & CL2-A    P-VOL
05                  05:04   3            CL2-A            L1 MU0 S-VOL
05                  05:05   4            CL2-A            L2 MU1 S-VOL
05                  05:06   5            CL2-A            L2 MU2 S-VOL
05                  05:07   6            CL2-A            L1 MU1 S-VOL
05                  05:08   7            CL2-A            L2 MU1 S-VOL
05                  05:09   8            CL2-A            L2 MU2 S-VOL
05                  05:0A   9            CL2-A            L1 MU2 S-VOL
05                  05:0B   A            CL2-A            L2 MU1 S-VOL
05                  05:0C   B            CL2-A            L2 MU2 S-VOL

50

12. Virtual Partition Manager

Module Objectives

Module Objectives

y Upon successful completion of this module and any associated lab(s),


you will be able to:
– Describe the purpose of the Hitachi Virtual Partition Manager software
– Identify the different privileges between the Storage Administrator and
Storage Partition Administrator accounts
– Define and describe the features of Storage Logical Partitions and
Cache Logical Partitions
– Describe why you would implement Storage Logical Partitions and
Cache Logical Partitions
– Create and manage Storage Logical Partitions and Cache Logical
Partitions

Virtual Partition Manager Overview

Virtual Partition Manager Overview

y Business Need
– One subsystem can store a large amount of data
– Multiple companies, departments, systems, or applications can share one
subsystem
For example: Storage Service Provider
– Each user wants to use the subsystem as if using an individual subsystem exclusively, without being influenced by other users' operations


y Virtual Partition Manager Functions (CLPR and SLPR)

Cache Logical PaRtition (CLPR): cache can be divided into multiple virtual cache memories to lessen I/O contention.
Storage administrator Logical PaRtition (SLPR): storage can be divided among various users to lessen conflicts over usage.

Virtual Partition Manager has two main functions: Storage Logical PaRtition (SLPR),
and Cache Logical PaRtition (CLPR). Storage Logical Partition allows you to divide
the available storage among various users, to lessen conflicts over usage. Cache
Logical Partition allows you to divide the cache into multiple virtual cache
memories, to lessen I/O contention.


y Cache Logical Partition Overview


– Storage Administrator (SA) performs all the settings and assigns resources
(ports, parity groups, and cache) to each company (he is the chief)
– Partition Storage Administrator (PSA) manages only assigned resources
Because of a higher I/O rate, this user can slow down the performance
of the other users.

(Slide figure: Companies A, B, C, and D share one subsystem; the cache is common to all users, and a heavy I/O load from Company A crowds out the other companies' data.)


5

The Hitachi TagmaStore™ Universal Storage Platform subsystems can connect


multiple hosts, and can be shared by multiple users, such as different departments
or even different companies. This can cause conflicts among the various users. For
example, if a particular host issues a lot of I/O requests, the I/O performance of
other hosts may decrease. If the various administrators have different storage
policies and procedures, or issue conflicting commands, that can cause difficulties.
Problem: Due to the heavy load in Company A, data in other companies (Company
B/C/D) is excluded from the cache memory. As a result, the cache hit rate
decreases, and the performance of application degrades.


y Cache Logical Partition


– Cache is divided (de-staging is still performed on the total cache)
– Logically assign the size of cache (minimum 4 GB, increase by 2 GB increments)
– Host I/O is independent
– Company A is throttled back, performance increases at Companies B – D
Note: Virtual Partition Manager is NOT a performance tool.
(Slide figure: each company now has its own cache partition, assigned by the Storage Administrator; Company A's heavy load stays within its own CLPR.)
6

Cache Logical Partition allows you to divide the cache into multiple virtual cache
memories, to lessen I/O contention. A user of each server can perform operations
without considering the operations of other users. Even if the load becomes heavy in
Company A, the operations in Company A do not affect other companies
operations.


y Storage Logical Partition


– Storage is divided among various users
– Security is enforced in storage management

Storage Logical Partition (SLPR) allows you to divide the available storage among
various users, to lessen conflicts over usage. If multiple administrators manage a
RAID subsystem, some administrators may destroy others' volumes by mistake. To avoid such problems, a mechanism is needed to minimize the effect of each administrator's operations on the others.


y Virtual Partition Manager Concept


– SLPR0 (Storage Partition) and CLPR0 (Cache Partition) are the defaults
y SLPR0 is a pool of logical cache partitions and ports
y CLPR0 is a pool of all cache and all the parity groups in the subsystem
y Only Storage Administrator can access SLPR0 and CLPR0
y Storage Administrator creates the other SLPRs and CLPRs
– Partition Storage Administrators manage their SLPRs
(Slide figure: example layouts. By default there is only SLPR0 containing CLPR0. The Storage Administrator can define additional CLPRs within SLPR0, or additional SLPRs each with their own CLPRs, while SLPR0 and CLPR0 remain as the resource pool.)
8

If no storage partition operations have occurred, the subsystem will have Storage
Logical Partition 0 (SLPR0), which is a pool of all of the resources of the subsystem
(e.g., cache logical partitions and ports). SLPR0 will also contain Cache Logical
Partition 0 (CLPR0), which is a pool of all of the cache and all parity groups in the
subsystem. The only users who have access to SLPR0 and CLPR0 are storage
administrators.
¾ CLPR0/SLPR0 always exists and cannot be removed.
¾ CLPR0 always belongs to SLPR0.
¾ CLPR0 is a pool area of cache and PG.
¾ Only the Storage Administrator can use SLPR0. Partitioned Storage
Administrators manage other SLPRs.

Storage Administrator and Storage Partition Administrator

Storage Administrator and Storage Partition Administrator

y Administrator Access
– Administrators are assigned via the Control Panel of Storage Navigator (Option button)
y One SA
y Many SPAs
(Slide figure: one Storage Administrator manages the whole subsystem. Each Storage Partition Administrator can manage only its own SLPR, for example the ports, cache memory, and CLPRs assigned to SLPR 1 or SLPR 2; the other SLPRs are not available to it.)
9

Administrator access for the TagmaStore Universal Storage Platform is divided into
two types. Storage Administrators manage the entire subsystem and all of its
resources. Storage Administrators can create and manage storage logical partitions
and cache logical partitions, and can assign access permission for storage partition
administrators. Only Storage Administrators can access Storage Logical Partition 0
(SLPR0) and Cache Logical Partition 0 (CLPR0). Storage Partition Administrators
can view and manage only the resources that have been assigned to a specific storage logical partition.


y Only Storage Administrator Can Define TrueCopy


(Slide figure: TrueCopy between TagmaStore Universal Storage Platform Number One and Number Two. The Initiator port on the first subsystem and the RCU Target port on the second belong to SLPR0 and are shared by all SLPRs.)

10

Only the Storage Administrator can define Hitachi TrueCopy™ Remote Replication.


y Partition Storage Administrator can define ShadowImage volumes within


its own SLPR
– Storage Administrator can perform a copy operation using volumes in
multiple SLPRs.
Partition Storage Administrator Storage
for SLPR1 Administrator

SLPR1 SLPR2 SLPR0


CLPR1 CLPR2 CLPR3 CLPR0

11

Storage Partition Administrator can define Hitachi ShadowImage™ In-System


Replication in their own SLPR.
Note: This will be supported in the third GA release.
The Storage Administrator can perform a ShadowImage copy operation using
volumes in multiple SLPRs.


y Only Storage Administrator Can Map External Storage to CU:LDEV and


Assign SLPR/CLPR

– External ports belong to SLPR0 and are shared by all SLPRs.
(Slide figure: external 9500V and 9900V subsystems are mapped through the External Port of the TagmaStore USP; the mapped volumes can then be assigned to the SLPRs and CLPRs.)
12


y Storage Navigator
Storage Administrator screen: the SA sees all the resources.
Storage Partition Administrator screen: the SPA sees only its own resources.

13

Resources are divided into each SLPR. In the Storage Administrator’s screen, all
resources in all SLPRs are displayed. In the Storage Partition Administrator’s screen,
only the resources in its own SLPR are displayed. Resources shared by all SLPRs are
displayed in both Storage Administrator’s and Storage Partition Administrator’s
screen.


y TagmaStore subsystems allow multiple users to log into the subsystem and put
their sessions of Storage Navigator in modify mode (multiple lock control)

y No One Has a Lock

SLPR1 SLPR2 SLPR3 SLPR31

SVP Modify Authority

14

Multiple Lock Control


If the Storage Administrator is in modify mode, then no one else can enter modify mode. However, the Partition Storage Administrator for SLPR1 and the Partition Storage Administrator for SLPR31 can both be in modify mode at the same time, because they work in separate partitions.


y Multiple Lock Control - Storage Administrator sets Modify Mode


– The Storage Administrator (SA) sets Modify Mode and blocks all other users from entering Modify Mode.
– The Partition Storage Administrator for SLPR3 attempts to set Modify Mode and is blocked.
– The SA has Modify Authority for each SLPR (SLPR1 through SLPR31).
– The Storage Administrator attempts to enter Maintenance Mode at the SVP and is blocked (SVP Modify Authority).
15


y Multiple Lock Control – A Partition Storage Administrator sets Modify Mode


– The Storage Administrator (SA) sets Modify Mode for SLPR1.
– The PSA for SLPR1 attempts to set Modify Mode and is blocked.
– The PSA for SLPR3 is allowed to set Modify Mode.
– The SA has Modify Authority for SLPR1.
– The Storage Administrator attempts to enter Maintenance Mode at the SVP and is blocked (SVP Modify Authority).
16

Virtual Partition Manager Features

Virtual Partition Manager Features

y Virtual Partition Unit


– Storage must be assigned on a full parity group boundary
– Partial parity groups or LDEV assignment is not allowed

(Slide figure: whole parity groups are assigned to each logical partition; an individual LDEV within a parity group cannot be split across partitions.)
17

Virtual Partition Unit


Devices are assigned to a logical partition in units of parity group (Not each LDEV,
PDEV). The access load to an LDEV affects performance of other LDEVs in the same
parity group. This is because each parity group has the access control information
(e.g. resource lock, queue table, etc.).


Virtual Partition Image


The content of each CLPR includes the I/O cache, DCR, PCR, and the Sidefile, and is mapped to specific parity groups. The maximum size is 128 GB.


y SLPR Partitioning Definition


– Maximum of 32 SLPRs per TagmaStore subsystem
– Resources for SLPR
y One or more CLPRs
y One or more target ports are assigned to a SLPR
– Ports assigned to one SLPR cannot be assigned to another SLPR
– Un-assigned ports in the pool remain shared resources
y One or more CU Numbers and SSID Numbers
– Multiple SLPRs cannot share the same CU/SSID
SLPR1 SLPR2 SLPR0 (the resource pool)
Target Port Target Port

CLPR1 CLPR2 CLPR0 = Non partitioned cache area

Note: SLPR definition is performed using Storage Navigator.


19

y Specifications
# Item Content
1 Maximum number of CLPRs 32
2 Minimum unit of CLPR Parity group
3 Change unit of CLPR Increase size by 2 GB
4 CLPR Capacity 4 GB – 128 GB
5 Max number of VDEV per CLPR 1 – 16,384
6 Change unit of VDEV per CLPR 1 – 16,384
7 Support emulation type All types supported by TagmaStore
8 Max number of CLPRs per SLPR 32
9 LUSE Support
10 RAID Level All types supported by TagmaStore
11 DCR Support if CLPR has minimum 6 GB
12 PCR Support
20


y Inflow Control
– Inflow control is performed by comparing the Write Pending threshold to
the Write Pending rate of each CLPR
– A CLPR with a very high inflow rate will not affect the inflow rate of
the other CLPRs

Host A Host B

WP WP

21

Virtual Partition Manager performs the inflow control by comparing the write
pending threshold with the write pending rate of each CLPR. Therefore, even
though the write pending rate of one CLPR is very high, other CLPRs inflow control
is not changed.


y De-Stage
– De-staging is performed on the full cache
– When the Write Pending rate for one CLPR is very high, the other CLPRs
de-stage process is accelerated
For Example: The cache de-stage threshold is set to 70%.
When the cache threshold total hits 70%, then both hosts
are de-staged.

Host A Host B

22

Virtual Partition Manager performs the de-stage process by comparing the write
pending threshold with the write pending rate of the entire system. Therefore, when
the write pending rate of one CLPR is very high, other CLPRs de-stage process is
accelerated.

Virtual Partition Manager Best Practices

Virtual Partition Manager Best Practices

y Shared Resources
– SLPRs run independent of each other
– All other resources are shared and are dependent on each other

Shared Resources
SM CM CSW CHA/DKA Internal Path

FSW Backend fibre loops Initiator Port External Port Processor

MP usage & path usage, etc… MP usage & path usage, etc…

SLPR1 SLPR31
Cache Usage/WP ratio, etc… Cache Usage/WP ratio, etc…

Cache resource Cache resource

Host port Host port


23

The previous pages describe host ports, cache resources and ECC groups, but all
other resources are not SLPR/CLPR dependent. All other resources are shared by
all the SLPRs/CLPRs, so a SLPR/CLPR may have impact on other SLPRs/CLPRs.


y Hi-Star Paths
– All the internal paths are shared and cannot be divided among the
SLPRs and CLPRs

SM SM SM SM

SM-path

DKA CHA CHA DKA


CM-path (P-path)

CSW CSW

CM-path (C-path)

Cache Cache Cache Cache


24

Internal Paths cannot be divided for each SLPR/CLPR, because Hi-Star paths are
shared by all CHAs and DKAs.


y DKA Processors
– You can design your CLPR configuration around the hardware configuration

(Slide figure: example DKC/DKU layout with four DKA pairs.)
– CLPR1 and CLPR2 share the same DKA pair: if the DKA load is high, the CLPRs affect each other's performance.
– CLPR3 and CLPR4 are on different DKA pairs: if the DKA load is high, the CLPRs do not affect each other's performance.
25

DKA processors are shared by all CLPRs, but you can divide the DKA processors for
each CLPR.

SLPR and CLPR User IDs

SLPR and CLPR User IDs

y Add a Storage Administrator (SA)


– Use the User Entry function of the Control Panel
– Select SA and then click the New Entry button (see next slide)

26

y Add a Storage Administrator (continued)


– All the functions are available (use the slide bar to display all the functions)

27


y Add a Partition Storage Administrator (PSA)


– Use the User Entry function of the Control Panel
– Select XX:SLPRXX and then click the New Entry button (see next slide)

28

y Add a Partition Storage Administrator (continued)


– The PSA has fewer functions to choose from than an SA (notice no slide bar)

29

Partition Manager Functions

Partition Manager Functions

y License Key Panel allows Partition Configuration Functions


– Partition Definition (create Storage Logical Partitions first)
– License Key Partition Definition (assign or allocate license capacity among
the various SLPRs second)

30

Select the License Key button to open the License Key Panel

License Key Partition Definition

License Key Partition Definition

y License Key Partition Definition for a product with limited license capacity

1. Select the product.

2. Select the SLPR.

3. Allocate a portion of the license (255.0 TB) to the SLPR.

31

Only storage administrators can make settings for SLPR0.


A storage partition administrator only has authority within the assigned storage
logical partition. The storage administrator can also assign permission for one of the
following products:
1. Open Volume Management (LUSE and VLL).
2. Data Retention.
3. LUN Manager.
4. Cache Residency.
5. Performance Monitor.
6. Storage Navigator.
7. JAVA API.


y License Key Partition Definition for a product with unlimited license capacity

1. Select the product.

2. Select the SLPR.

3. Enable or Disable the product for the SLPR.

32

Storage Navigator

Storage Navigator

y Partition Definition Tab

SLPR names and the resources they contain

Three SLPRs have been created

33

Select the Partition Definition tab to open the Partition Definition panel. The default view is
the Logical Partition panel
Note: If you are logged on as a storage partition administrator, this panel will display only
the resources in that storage partition.
The Logical Partition panel has the following features:
¾ The Partition Definition outline is on the upper left of the Logical Partition panel, and
displays all of the storage logical partitions in the subsystem.
¾ The name and number of the storage logical partition are displayed to the right of each SLPR icon.
The Subsystem resource list is on the upper right side of the Logical Partition panel, and
displays the following information about the resources in the subsystem:
¾ No.: The number of the storage partition
¾ Item: The resource type
¾ Name: The storage logical partition numbers and names
¾ Cache (Num. of CLPRs): The cache capacity and number of cache logical partitions.
¾ Num. of PGs: Number of parity groups
¾ Num. of ports: Number of ports
¾ Status: (not shown in this screen shot) If the storage logical partition has been edited, the
status is displayed.
Virtual Partition Manager Operations

Virtual Partition Manager Operations

y Virtual Partition Manager Operations Overview


– Creating a storage logical partition
– Migrating the resources in a storage logical partition
– Creating a cache logical partition
– Migrating the parity groups in a cache logical partition
– Deleting a cache logical partition
– Deleting a storage logical partition

34

y Configuration Change
– Configuration Change requires processing time.
Processing time depends on cache capacity for operation, device capacity
for operation, cache usage before operation, write pending ratio before
operation, I/O load, and so on.
– It may take several hours for SLPR and CLPR changes to be applied.
Some time after the change is applied, the following progress panel appears
at the bottom of the Partition Definition window.

35


y Creating a Storage Logical Partition

Available resources in the subsystem:


16 GB, 6 Parity Groups, and 48 Ports in
this subsystem.

To create a SLPR, right-click on the Subsystem folder and


select Create SLPR (see next slide).

36

To create a storage logical partition:


1. Launch Virtual Partition Manager, and change to Modify mode.
2. Right-click a subsystem in the Partition Definition outline on the left side of
the panel to display the pop-up menu.
3. Select Create SLPR. This will add a storage logical partition to the Partition
Definition outline. You can create up to 31 storage logical partitions in
addition to SLPR0, either at this point in the process or at a later time.
4. Select the SLPR that you want to define from the Partition Definition outline.
This will display the Storage Management Logical Partition panel.


y Creating a Storage Logical Partition (continued)

Click on the new SLPR to select it (see next slide).

37

5. In the SLPR Name field on the bottom left part of the panel input the name of
the selected SLPR. You can use up to 32 alphanumeric characters.
6. In the CU field, input the CU number(s) for the selected SLPR (00 - 3F). An
asterisk (*) indicates that the CU is defined as an LDEV.
7. To add a CU to the SLPR, select the CU from the Available CU list, then select
the Add button to move that CU to the CU list. You can select up to 64 CUs,
whether or not those CUs are defined as LDEVs.
8. To delete CU from the specified SLPR, select the CU from the CU list and
select Delete to return that CU to the Available CU list.
9. Available SSIDs are in SLPR0. In the SSID field, select an available SSID as
follows:
¾ In the From: field, input the starting number of the SSID (0004 to FFFE).
¾ In the To: field, input the ending number of the SSID.
10. Select Apply to apply the settings, or select Cancel to cancel the settings. The
progress bar is displayed.


y Creating a Storage Logical Partition (continued)

Select desired CUs and/or SSIDs, click


the Add button and then click Apply.

38


y Migrating resources to and from Storage Logical Partitions: Add a Port


to the new SLPR from the pool (SLPR0)

Select and expand SLPR0.


Select the desired port(s) and
then right-click and select Cut.

39

The resources of a storage logical partition include cache logical partitions and ports,
which can be migrated to another storage logical partition as needed. The only ports
that can be migrated are Target ports and the associated NAS ports are on the same
channel adapter. Initiator ports, RCU Target ports and External ports cannot be
migrated, and must remain in SLPR0.
Notes:
¾ LUs that are associated with a port in a particular SLPR must stay within that
SLPR .
¾ LUs that are associated with a parity group in a particular SLPR must stay
within that SLPR.
¾ Parity groups containing NAS system LUs (LUN0005, LUN0006, LUN0008,
LUN0009, and LUN000A) must remain in SLPR0.
¾ NAS system LUs (LUN0000 and LUN0001) must belong to the same SLPR as
the NAS channel adapter.
To migrate one or more resources:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a SLPR from the Partition Definition outline, on the upper left of the
panel. This will display the Storage Management Logical Partition panel.
3. From the Storage Logical Partition Resource List on the upper right of the
panel, select one or more cache logical partition(s) and/or port(s) to be
migrated. Right-click to display the pop-up menu. Select Cut.


y Migrating resources to and from Storage Logical Partitions (continued):


Add a Port to the new SLPR from the pool (SLPR0)

Right-click on the target SLPR and


select Paste CLPRs,Ports followed
by clicking Apply.

40


y Creating a Cache Logical Partition

Right-click on the target SLPR and


select Create CLPR (see next slide).

41

You must first have created one or more storage logical partitions before you can
create a cache logical partition.
Note: To create a CLPR, the remaining cache size which is calculated by subtracting
Cache Residency Size and Partial Cache Residence size from the cache size of CLPR0
needs 8GB or more.
To create a cache logical partition:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Right-click a SLPR from the Partition Definition outline, on the upper left of
the panel, to display the Create CLPR pop-up menu then select Create CLPR.
This will add a cache logical partition to the Partition Definition outline.
3. Select the newly created CLPR from the Partition Definition outline, to
display the Cache Logical Partition panel.
4. In the Detail for CLPR section, on the lower left of the panel, do the following:
5. In the CLPR Name field, type the name of the cache logical partition, in up to
16 alphanumeric characters.
6. In the Cache Size field, enter the cache capacity. You may select from 6 to 52
GB, in 2 GB increments. The default value is 4 GB. The size of the cache will
be allocated from CLPR0, but there must be at least 8 GB remaining in
CLPR0.


7. Cache Residency Size indicates the capacity of the Cache Residency cache.
The value of Cache Residency Size must be selected or input from 0 to 252 GB
in 0.5 GB increments. The default value is 0 GB. The defined cache residency
size must be smaller than the total defined cache residency size.
8. Cache Residency Area indicates the defined cache residency area which must
be smaller than the total defined cache residency size.
9. In the Partial Cache Residency Size field, enter the cache capacity for Partial
Cache Residence (PCR), from 0 to 252 GB in 0.5 GB increments. The default
value is 0 GB.
10. Select Apply to apply the settings. The progress bar is displayed.
11. The change in cache capacity will be reflected in this cache logical partition
and in CLPR0.
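As a worked example of the capacity rule in the note above, using the 16 GB of cache shown for the lab subsystem earlier in this section: creating one 6 GB CLPR leaves 16 GB - 6 GB = 10 GB in CLPR0, which satisfies the 8 GB minimum, so the operation is allowed (and matches the 10 GB pool shown in the SLPR/CLPR summary that follows). By the same rule, any further CLPRs must together leave at least 8 GB of cache in CLPR0.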


y Creating a Cache Logical Partition (continued)

Expand the target SLPR


(see next slide).

42

y Creating a Cache Logical Partition (continued)

Select the new CLPR and then:


Select the CU, Cache Size, and size of DCR
and PCR (if desired) and then click Apply.

Note:
The minimum Cache Size is
4 GB, but in order to assign
any DCR you must select at
least 6 GB of cache.

43


y Creating a Cache Logical Partition (continued): Add a Parity Group to


the new CLPR from the pool (SLPR0)

Expand SLPR0, select CLPR0, right-click on


the desired Parity Group(s) and select Cut.

44

y Creating a Cache Logical Partition (continued): Add a Parity Group to


the new CLPR from the pool (SLPR0)

Right-click on the target CLPR and select


Paste Parity Groups followed by clicking
Apply.

45


y Creating SLPR and CLPR Summary

Select the Subsystem folder:


The pool (SLPR0) now contains 10 GB, 5 Parity Groups, and 47 Ports.
SLPR01 contains 6 GB, 1 Parity Group, and 1 Port.

46


y Deleting a Cache Logical Partition

47

If you delete a cache logical partition, any resources (e.g., parity groups) will be
automatically returned to CLPR0. CLPR0 cannot be deleted.
To delete a cache logical partition:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a CLPR from the Partition Definition outline, on the upper left of the
panel. This will display the Cache Logical Partition panel.
3. Right-click the CLPR that you want to delete and select Delete CLPR in the
pop-up menu. The selected CLPR is deleted from the tree.
4. Select Apply to apply the settings. The progress bar is displayed.


y Deleting a Storage Logical Partition

48

If you delete a storage logical partition, any resources in that storage logical partition
will be automatically returned to SLPR0.
Note: SLPR0 cannot be deleted.
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a SLPR from the Partition Definition outline. This will display the
Storage Management Logical Partition panel.
3. In the logical partition outline on the upper left of the panel, right-click the
storage logical partition that you want to delete. This will display the Delete
SLPR pop-up menu.
4. Select Delete SLPR to delete the storage logical partition.
5. Select Apply to apply the settings, or select Cancel to cancel the settings. The
progress bar is displayed.

Module Review

Module Review

49

Module Review Questions


1. Define SLPR and CLPR.
2. The Virtual Partition Manager is a performance tool. True or False?
3. What is the difference between the Storage Administrator and the Partition
Storage Administrator?
4. The SLPRs function independently of each other, including cache de-staging.
True or False?
5. SLPRs and CLPRs can affect each other's performance. True or False?
6. What is the maximum number of SLPRs that can be defined?
7. What is the maximum amount of cache that can be assigned to a CLPR?
8. What is the minimum amount of cache required for a CLPR to define DCR?

Lab Project 10: Virtual Partition Manager

Lab Project 10: Virtual Partition Manager

y Timing and Organization


– Time allotted to complete the project: 1 Hour, 15 minutes
– The lab project contains three sections:
y Section 1 is the lab activity
y Section 2 contains the answers to the embedded lab questions
y Section 3 contains the review questions
– Time allotted to go over the review questions: 15 minutes
– The class will be split into lab groups
– Each lab group will have the following lab equipment:
y One Solaris host system running Solaris 8
y One Windows host system running Windows XP
y One TagmaStore Universal Storage Platform

50

y Upon completion of the lab project, you will be able to do the following:
– Create two Storage Logical Partitions (SLPR01 and SLPR02)
– Create a Cache Logical Partition (CLPR01) in each new SLPR
– Allocate and/or Enable/Disable specified license keys for each SLPR
– Migrate specified ports to each new SLPR
– Allocate specified Control Units to each new SLPR
– Allocate specified increments of Cache to each CLPR
– Migrate specified parity groups to each new CLPR
– Delete a CLPR
– Delete a SLPR

51


