TSI0556-Hitachi TagmaStore USP Software Solutions SG v2.0-2
Version 2.0
Student Guide TSI0556 TagmaStore Software Solutions
Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please go to
http://www.hds.com/products_services/support/license.html or call your local sales representative to obtain a
printed copy. If you purchase or license the product, you are deemed to have accepted these terms and
conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR
LOST DATA, EVEN IF HDS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks, registered trademarks, or service marks of Hitachi Data Systems
Corporation in the United States and/or other countries:
HiCommand® Hi-Star Lightning 9900
ShadowImage TrueCopy TagmaStore
All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1 KB stands for 1,024 bytes, 1 MB for 1,024 kilobytes, 1 GB for 1,024 megabytes, and
1 TB for 1,024 gigabytes, consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
©2005, Hitachi Data Systems Corporation.
All Rights Reserved
Date printed: May 10, 2005
Version 2.0
Course numbers: TSI0556
v2.0 This manual may not be copied, transferred, reproduced, disclosed or,
Page ii distributed in whole or in part, without the prior written consent of HDS.
Contents
Book One
I. TAGMASTORE SOFTWARE SOLUTIONS INTRODUCTION ............... I-1
1. SOFTWARE SOLUTIONS OVERVIEW .......................................... 1-1
2. HITACHI STORAGE NAVIGATOR ................................................ 2-1
3. HITACHI LUN MANAGER SOFTWARE ........................................ 3-1
4. LUSE AND VLL VOLUMES ...................................................... 4-1
5. HITACHI DATA RETENTION UTILITY .......................................... 5-1
6. UNIVERSAL VOLUME MANAGER ............................................... 6-1
7. HITACHI CROSS-SYSTEM COPY SOFTWARE .............................. 7-1
Book Two
8. CACHE RESIDENCY MANAGER ................................................. 8-1
Module Objectives ................................................................................. 8-2
Cache Residency Manager Overview ................................................... 8-3
Cache Residency Manager Software .................................................... 8-7
Cache Residency Manager Overview ................................................... 8-8
Cache Residency Manager Concept ..................................................... 8-9
Cache Residency Manager Operations............................................... 8-15
Module Review .................................................................................... 8-17
Lab Project 6: Cache Residency Manager .......................................... 8-18
8. Cache Residency Manager
Module Objectives
Cache Residency Manager Overview
Cache Residency Manager software (also called Cache Residency) is a feature of Hitachi
TagmaStore™ Universal Storage Platform that allows you to store frequently accessed
data in a specific area of the subsystem’s cache memory. Cache Residency increases the
data access speed for the cache-resident data by enabling read and write I/Os to be
performed at high speed. When data specified as the target of a Cache Residency
operation is first accessed by a host, the data is staged into (becomes resident in)
the allocated cache area, called the Cache Residency cache. From the second access
onward, the host finds the data in the Cache Residency cache. Cache Residency also
supports a function called pre-staging, which places specific data in the Cache
Residency cache before the host accesses it. When pre-staging is enabled, the host
finds the data in the Cache Residency cache from the very first access, further
improving access performance.
To be able to use the Cache Residency functions, you need to allocate a certain portion of
the subsystem’s cache memory. You can change the capacity of the Cache Residency
cache when increasing or decreasing the size of the cache memory.
The user may want to increase total subsystem cache capacity when using Cache
Residency to avoid data access performance degradation for non-Cache-Residency data.
Cache Residency is only available on TagmaStore Universal Storage Platform systems
configured with at least 512 MB of cache.
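The first-access staging and pre-staging behavior described above can be sketched as a toy model (illustrative Python only, not the subsystem's actual microcode; all names here are invented for this sketch):

```python
# Toy illustration (not actual microcode) of first-access staging versus
# pre-staging in the Cache Residency cache.

class ResidencyArea:
    def __init__(self, disk, prestage=False):
        self.disk = disk
        self.cache = {}
        if prestage:                       # pre-staging: load before any access
            self.cache.update(disk)

    def read(self, block):
        """Return True on a cache hit; stage the block on a miss."""
        hit = block in self.cache
        if not hit:                        # first access: stage into CR cache
            self.cache[block] = self.disk[block]
        return hit

disk = {1: "x"}
print(ResidencyArea(disk).read(1))                  # False: first access misses
print(ResidencyArea(disk, prestage=True).read(1))   # True: pre-staged, hits
```

With pre-staging the very first read is already a hit, which is exactly the performance benefit the text describes.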
Note: For open-systems LDEVs, it is best to place the entire LDEV into Cache
Residency, because there is no way to determine which track the data
resides on (something that is possible with mainframes). That is why VLL volumes
are good candidates for Cache Residency.
The cache extents are dynamic and can be added and deleted at any time. The
TagmaStore Universal Storage Platform supports a maximum of 4,096 addressable
cache extents per LDEV and per subsystem. For mainframe volumes, each Cache
Residency cache area must be defined on contiguous tracks, with a minimum size of
one cache slot (or track), equivalent to 66 KB, and a maximum size of one LVI.
For open-systems volumes, Cache Residency cache extents must be defined in
logical blocks using logical block addresses (LBAs), with a minimum size of 512
LBAs (equivalent to 264 KB) for OPEN-V, and 96 LBAs (equivalent to 66 KB) for
emulation types other than OPEN-V. For OPEN-V, extents grow in increments of 128
LBAs rather than 96. In most cases, however, users will assign an entire
open-systems volume for Cache Residency. If the remaining cache memory is less
than 256 MB, Cache Residency is not available.
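The extent sizing rules above can be captured in a short, hypothetical validation routine (the function name and structure are mine, not part of any HDS tool):

```python
# Hypothetical sketch (not HDS software): validate a requested Cache Residency
# extent size against the sizing rules stated above.

def valid_extent(num_lbas: int, emulation: str) -> bool:
    """OPEN-V: minimum 512 LBAs, in increments of 128 LBAs.
    Other open emulations: minimum 96 LBAs, in increments of 96 LBAs."""
    if emulation == "OPEN-V":
        return num_lbas >= 512 and num_lbas % 128 == 0
    return num_lbas >= 96 and num_lbas % 96 == 0

print(valid_extent(512, "OPEN-V"))   # True  (minimum size, 264 KB)
print(valid_extent(640, "OPEN-V"))   # True  (512 + 128)
print(valid_extent(600, "OPEN-V"))   # False (not on a 128-LBA boundary)
print(valid_extent(96, "OPEN-3"))    # True  (minimum size, 66 KB)
```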
PRIO MODE: In priority mode the Cache Residency extents are used to hold read data for
specific extents on volumes. Write data is duplexed in normal cache and destaged to
disk using standard algorithms. Because there is no duplexed write data in the cache
reserved for Cache Residency, all priority mode Cache Residency extents are 100% utilized
by user read-type data.
BIND MODE: In bind mode the Cache Residency extents are used to hold read and write
data for specific extent(s) on volume(s). Any data written to the Cache Residency bind area
is not de-staged to the disk. To ensure data integrity, write data must be duplexed in the
Cache Residency area, which consumes a significant amount of the Cache Residency cache.
The primary advantage of bind mode is that all targeted read and write data is transferred
at host data transfer speed. In addition, the accessibility of read data is the same as Cache
Residency priority mode; write operations do not have to wait for available cache
segments; and there will be no backend contention caused by de-staging data.
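The difference between the two modes can be illustrated with a toy model (the class and behavior below are a simplified sketch, not the actual cache implementation):

```python
# Toy model (illustrative only) of where PRIO and BIND mode place data.

class CacheResidencyToy:
    def __init__(self, mode):
        self.mode = mode                 # "PRIO" or "BIND"
        self.resident = {}               # Cache Residency area
        self.standard = {}               # standard cache (writes destaged)

    def read(self, block, disk):
        # Reads are served from (and staged into) the resident area in both modes.
        if block not in self.resident:
            self.resident[block] = disk[block]
        return self.resident[block]

    def write(self, block, value, disk):
        if self.mode == "BIND":
            # BIND: write data stays in the resident area and is not destaged.
            self.resident[block] = value
        else:
            # PRIO: write goes to standard cache and is destaged to disk.
            self.standard[block] = value
            disk[block] = value

disk = {0: "a"}
prio = CacheResidencyToy("PRIO")
prio.write(0, "b", disk)
print(0 in prio.resident, disk[0])   # False b  (write bypassed resident area)
bind = CacheResidencyToy("BIND")
bind.write(0, "c", disk)
print(0 in bind.resident, disk[0])   # True b   (write held resident, not destaged)
```

Note that in the BIND case the disk copy is never updated by the toy model, mirroring the statement that bind-mode write data is not de-staged.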
Pre-Staging Function
The Cache Residency Manager pre-staging function lets data allocated to a configured
Cache Residency extent reside in the Cache Residency Manager cache before it is
accessed, which allows the host to find the data in the cache from the first access.
You can use this function in both the priority and bind modes.
Note: Data can be pre-staged at a scheduled time, which should not be during peak activity.
Do not perform the ShadowImage software quick restore operation or the volume
migration operation on a Cache Residency volume. These operations swap the
internal locations of the source and target volumes, which causes a loss of data
integrity. The Cache Residency bind mode is not available to external volumes
whose IO Suppression mode is set to Disable and Cache mode is also set to Disable
(which is the mode that disables the use of the cache when there is an I/O request
from the host).
Cache Residency Manager Software
(Table: Cache Residency functions available for each combination of IO Suppression
mode and Cache mode settings.)
Cache Residency Manager Concept
(Diagram: Cache Residency extents from LDEVs a, b, c, and d, addressed by LBA,
placed at various locations in cache memory.)
(Diagrams: on a read miss, the standard cache area is managed by the LRU
algorithm, while the Cache Residency Manager cache area holds resident data;
cache Sides A and B are shown.)
Notice that in PRIO mode only the READ blocks are part of the Cache Residency
Manager cache area; the WRITE blocks are part of standard cache.
(Diagrams: standard cache under LRU management and the Cache Residency Manager
cache area, shown for cache Sides A and B.)
Notice that in bind mode the READ as well as the WRITE blocks are part of the
Cache Residency Manager cache area.
(Diagram: merging new write data, with the standard cache under LRU management
alongside the Cache Residency Manager cache.)
This diagram summarizes the Cache Residency Manager process presented in the
previous slides. If the write data in question causes a cache miss, the data from the
block containing the target record up to the end of the track is staged into a read
data slot. In parallel with that, the write data is transferred when the block in
question is established in the read data slot. The parity data for the block in question
is checked for a hit or miss condition and, if a cache miss condition is detected, the
old parity is staged into a read parity slot. When all data necessary for generating
new parity is established, it is transferred to the DRR circuit in the DKA. When the
new parity is completed, the DRR transfers it into the write parity slots for cache A
and cache B (the new parity is handled in the same manner as the write data).
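The parity update described here follows the standard RAID read-modify-write relation (general RAID background, not HDS-specific): new parity = old parity XOR old data XOR new data. A minimal sketch:

```python
# Standard RAID-5 parity update (general background, not HDS-specific):
# new parity = old parity XOR old data XOR new data, byte by byte.

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d0, d1 = b"\x0f", b"\xf0"
parity = bytes(a ^ b for a, b in zip(d0, d1))      # initial parity over the stripe
new_d0 = b"\xaa"
parity = update_parity(parity, d0, new_d0)
# The updated parity equals the parity recomputed over the whole new stripe.
print(parity == bytes(a ^ b for a, b in zip(new_d0, d1)))  # True
```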
Cache Residency Manager Operations
y Operations Overview
– Referencing the current Cache Residency Manager software settings
– Making data reside in the Cache Residency Manager cache
– Deleting data resident in the Cache Residency Manager cache
– Changing the mode set for the Cache Residency Manager cache
(Screenshots: select the CU/LDEV to put into Cache Residency Manager, select the
BIND/PRIO mode and the Pre-staging mode, then select the LBA range for the LDEV.
If YES is selected for Prestaging, the data is loaded immediately into the cache;
if NO is selected, the data is loaded the first time it is requested.)
(Screenshot: an error is displayed when the user tries to load an LDEV whose size
exceeds the current Cache Residency Manager cache size.)
Module Review
Lab Project 6: Cache Residency Manager
9. Hitachi Dynamic Link Manager
Module Objectives
Hitachi Dynamic Link Manager Overview
y Benefits
– Provides load balancing across multiple paths
– Utilizes the hardware's ability to provide multiple paths to the same device (up to 64 paths)
– Provides failover protection by switching to a good path if a path fails
Hitachi Dynamic Link Manager automatically provides path failover and load
balancing for open systems.
y Features
– Connectivity: SCSI, FC
– Supported Platforms: NT, W2K, XP, Sun, AIX, HP...
– Supported Cluster Software: MSCS, VCS, Sun Cluster, HACMP, MC/SG
– Maximum Paths per LUN: 64
– Maximum Physical Paths: 2048
– Load Balance: Round-Robin, Extended Round-Robin
– Failover: Yes
– Failback: Yes
– GUI: Yes
– CLI: Yes
Hitachi Dynamic Link Manager Features
y Overview
– Removes HBA as single point of failure
– Automatically detects failed path and reroutes I/O to alternate path
– Automatic discovery of HBAs and LUNs in SAN environment
– Up to 256 LUNs and 64 paths to each LUN
– Uses round-robin or extended round-robin to balance I/O’s across
available paths
– Provides tools to control and display path status
– Supports the most popular cluster technologies
– Support for HBA vendor drivers and standard open drivers
– GUI and CLI support
– Error logging capability
Hitachi Dynamic Link Manager removes the server's host bus adapter as a single
point of failure in an OPEN environment. One strength of Dynamic Link Manager is
its ability to configure itself automatically. It is designed to enhance the
operating system by putting all alternate paths offline in Disk Administrator. It
functions equally well in both SCSI and Fibre Channel environments. Dynamic Link
Manager supports the most popular cluster technologies, such as HACMP, MSCS, MC
Service Guard, Sun Cluster, and VERITAS Cluster Server™. It has GUI/CLI support
for configuration management, performance monitoring and management, and
authentication of user IDs using the HiCommand facility.
y Load Balancing
– Dynamic Link Manager distributes storage accesses across multiple paths and improves I/O performance with load balancing
(Diagram: without load balancing, I/O bottlenecks on one path; with load
balancing, I/O is distributed across the available paths to the volumes.)
When there is more than one path to a single device within an LU, Hitachi Dynamic
Link Manager can distribute the load across the paths by using the paths to issue
I/O commands. Load balancing prevents a heavily loaded path from affecting
performance of the entire system.
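Round-robin path selection can be sketched as follows (an illustrative model, not the actual HDLM algorithm):

```python
# Minimal sketch (illustrative, not the HDLM implementation) of round-robin
# load balancing: successive I/Os are rotated across the online paths.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, paths):
        self._paths = cycle(paths)   # endless rotation over the path list

    def next_path(self):
        return next(self._paths)

lb = RoundRobinBalancer(["path1", "path2", "path3"])
print([lb.next_path() for _ in range(5)])
# ['path1', 'path2', 'path3', 'path1', 'path2']
```

Because no single path receives consecutive I/Os, a heavily loaded path cannot dominate and degrade overall throughput.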
y Failover
– Hitachi Dynamic Link Manager provides continuous storage access and high availability by taking over for inactive paths
(Diagram: HDLM on the server switches I/O from a failed path to a surviving path
between the applications and the storage volume.)
The failover function automatically places the failed path offline to allow the
system to continue to operate using another online path.
Trigger error levels:
¾ Error
¾ Critical
The online command restores a path to service, and the offline command is used to
force path switching.
Hitachi Dynamic Link Manager Path State Transitions
(Diagram: path state transitions. An Online (active) path moves to Offline(C) by a
manual offline operation, and back to Online by an online operation. A failure
moves a path to Offline(E), or to Online(E) if it is the last active path;
recovery returns a path to Online.)
Active is the state where the path is healthy. Inactive is the state where all
access is disabled. Online is the state where Hitachi Dynamic Link Manager allows
applications access to the path. Offline is the state where Dynamic Link Manager
does not allow applications access to the path.
This illustration shows the path status transitions.
A path has two statuses: Online and Offline.
An Online path is one that Dynamic Link Manager uses for failover and load
balancing; I/O can be issued to the path.
An Offline path is one that Dynamic Link Manager does not use for failover and
load balancing; I/O cannot be issued to the path.
When an error occurs or the offline command is executed, Dynamic Link Manager
places the online path offline.
When the online command is executed, Dynamic Link Manager places the offline
path online.
y Online(E) Status
– If an error occurs on the last online path for an LU, Dynamic Link Manager
changes the path status to Online(E). The Online(E) path returns an I/O error to
the applications (upper layers) to notify them that no storage access is
available.
(Diagram: when the last online path to a LUN fails, its state becomes Online(E)
and the I/O error is returned to the server; the other failed paths are
Offline(E).)
If an error occurs on the last path to a LUN, Hitachi Dynamic Link Manager
executes the Auto Confirmation function: it checks the offline paths, and if one
can be used, places it online. If no path can be used, Dynamic Link Manager
returns an I/O error, but it does not place the last path offline.
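The last-path handling described above can be sketched as follows (the function and state names are mine; this is an illustration, not HDLM code):

```python
# Illustrative sketch of the failover behavior described above (names are
# invented, not HDLM's): on error, a path goes Offline(E); if it was the last
# online path, the offline paths are re-checked and a usable one is brought
# online; if none is usable, the failed path is marked Online(E) and the I/O
# error is returned to the application, but the path is not taken offline.

ONLINE, OFFLINE_E, ONLINE_E = "Online", "Offline(E)", "Online(E)"

def handle_path_error(states, failed, usable):
    """states: path -> state; usable: paths that would pass a re-check."""
    others_online = [p for p, s in states.items() if s == ONLINE and p != failed]
    if others_online:
        states[failed] = OFFLINE_E          # normal failover to another path
        return others_online[0]
    recoverable = [p for p, s in states.items()
                   if s == OFFLINE_E and p in usable]
    if recoverable:                          # auto-confirmation found a path
        states[recoverable[0]] = ONLINE
        states[failed] = OFFLINE_E
        return recoverable[0]
    states[failed] = ONLINE_E                # last path: report error, keep it
    return None                              # I/O error surfaces to application

states = {"p1": ONLINE, "p2": OFFLINE_E}
print(handle_path_error(states, "p1", usable=set()))   # None
print(states["p1"])                                    # Online(E)
```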
Hitachi Dynamic Link Manager Installation
Hitachi Dynamic Link Manager Operations
y Operations Overview
– View: Displays host and subsystem information
– Offline: Places an online path offline
– Online: Places an offline path online
– Set: Changes Dynamic Link Manager parameters
– Clear: Clears settings back to the default
– Help: Shows the operations, and displays help for each operation
When you are using Dynamic Link Manager for Windows systems, execute the
command as a user of the Administrators group. When you are using Dynamic Link
Manager for Solaris systems, execute the command as a user with root permission.
Hitachi Dynamic Link Manager Parameters
y Optional Parameters
– Load Balancing:
y Round Robin – Distributes all I/O among multiple paths
y Extended Round Robin – Distributes I/O to paths depending on type of
I/O:
– Sequential – a single path is used
– Random – Multiple paths are used
– Path HealthCheck:
y When enabled (default), Dynamic Link Manager monitors all online paths
at the specified interval and puts them into Offline(E) or Online(E) status if a
failure is detected.
y There is a slight performance penalty due to the extra probing I/O.
y The default interval is 30 minutes.
– Auto Failback: When enabled (not the default), Dynamic Link Manager
monitors all Offline(E) and Online(E) paths at the specified interval and restores
them to online status if they are found to be operational. The default interval
is 1 minute.
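The two load-balancing policies can be contrasted with a toy sketch (illustrative only; HDLM's actual sequential-detection logic is not documented here):

```python
# Sketch (illustrative) of extended round-robin: sequential I/O sticks to one
# path, while non-sequential (random) I/O is rotated across the paths.

class ExtendedRoundRobin:
    def __init__(self, paths):
        self.paths = paths
        self.idx = 0
        self.last_block = None

    def pick(self, block):
        if self.last_block is not None and block == self.last_block + 1:
            pass                      # sequential: keep using the same path
        else:
            self.idx = (self.idx + 1) % len(self.paths)  # random: rotate
        self.last_block = block
        return self.paths[self.idx]

lb = ExtendedRoundRobin(["A", "B"])
print([lb.pick(b) for b in (10, 11, 12, 50, 51)])
# ['B', 'B', 'B', 'A', 'A']
```

The sequential run (blocks 10, 11, 12) stays on one path, preserving any read-ahead benefit, while the jump to block 50 triggers a rotation to the next path.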
Hitachi Dynamic Link Manager GUI Interface
y Options Window
– Displays the Dynamic Link Manager version
– Error management function settings
– Select the severity of Log and Trace Levels
(Screenshot: the path tree; expand an entry to display its LUNs, with the HBA
port and CHA port shown.)
y Path Status
– Online
– Offline(C): I/O cannot be issued because the path was placed offline by the GUI or a command
– Online(E): Indicates an error has occurred in the last online path for each device
– Offline(E): Indicates I/O cannot be issued because an error occurred on the path
Current Path Status: gray indicates normal status; red indicates an error.
y Status Display
– Allows you to narrow down your display:
y The Online box displays paths in Online status
y The Offline(C) box displays paths in Offline(C) status
y The Offline(E) box displays paths in Offline(E) status
y The Online(E) box displays paths in Online(E) status
This is the Path List window. In this example, LUNs 0, 1, 2, and 3 are available
through two paths (1C and 2C, both owner paths; non-owner paths are applicable
only on the 9200 and 9500V subsystems) to the host. To clear the data from the
screen, click Clear Data. To export the data to a CSV file, click Export CSV. To
set an individual path to OFFLINE or ONLINE status, select the path and click the
Online or Offline option in the top right corner of the screen. If you select a
single LUN in the tree on the left, only the paths for that LUN are displayed in
the Path List on the right.
-lbtype {rr|exrr}
rr: round robin; all I/Os are distributed across multiple paths.
exrr: extended round robin.
-pchk {on [-intvl execution-interval]|off}: enables or disables path health checking and sets its interval.
Module Review
Lab Project 7: Hitachi Dynamic Link Manager for Solaris
y Upon completion of the lab project, you will be able to do the following:
– Install Dynamic Link Manager on a Sun host system
– Create the Dynamic Link Manager configuration file dlmfdrv.conf
– Use the Dynamic Link Manager GUI to collect status concerning Host
HBA/TagmaStore Universal Storage Platform connections
– Use the Dynamic Link Manager GUI to set a host connection Online or
Offline
– Configure Dynamic Link Manager to automatically failback (bring a
failed path back online) when the condition that caused the failure is
corrected
– See configuration on the next slide
(Lab configuration diagram: Solaris Host 2 with an Emulex HBA (Target E8 = TID 1)
connected through ports CL1-B/CL2-B and CL1-C/CL2-C, and a JNI HBA (Target E4 =
TID 2) connected through ports CL1-D/CL2-D. Host Mode = 09, Host Speed = Auto,
Host Group = Sunlab, LUN Security = Enable, Fabric = Disable, Connection = FC-AL.
LUN 0 = LDEV 00:02, LUN 1 = LDEV 00:03, LUN 2 = LDEV 00:04, LUN 3 = LDEV 00:05.
The same LUNs (LDEVs) are mapped to ports 1A and 2A, and to ports 1C and 2C.)
10. ShadowImage Operations
Module Objectives
Module Objectives
ShadowImage Software Overview
The Hitachi TagmaStore™ Universal Storage Platform contains and manages both the
original and copied ShadowImage data. It supports a maximum of 16,382
ShadowImage volumes (8,191 pairs: 8,191 P-VOLs and 8,191 S-VOLs). When
ShadowImage pairs include size-expanded LUs, the maximum number of pairs
decreases. The Paircreate command creates the first Level 1 "S" volume, the Set
command can be used to create a second and third Level 1 "S" volume, and the
Cascade command can be used to create the Level 2 "S" volumes off the Level 1 "S"
volumes.
[Diagram: ShadowImage cascade volumes. L1 S-VOLs are created from the P-VOL at MU numbers 0, 1, and 2; L2 S-VOLs are cascaded from an L1 S-VOL at MU numbers 1 and 2]
ShadowImage Software Parameters and Requirements
Parameter Requirement
Pair objects Logical devices (LDEVs): OPEN-X (e.g., OPEN-3, OPEN-9, OPEN-E,
OPEN-V), including custom size devices (VLL volumes) and size-
expanded LUs (LUSE volumes). Devices must be installed and
configured.
The P-VOL and S-VOL must be the same type (e.g., OPEN-3 to
OPEN-3 allowed, OPEN-3 to OPEN-9 not allowed). A VLL P-VOL
must be paired with S-VOLs of the same type and same capacity.
Number of copies Maximum three copies (S-VOLs) per primary volume (P-VOL).
Maximum one P-VOL per S-VOL (P-VOLs cannot share an S-VOL).
To copy the existing data in a mapped external volume using ShadowImage, the
emulation type of the mapped external volume must also be OPEN-V, and the
TagmaStore Universal Storage Platform internal volume must have the same capacity
as the mapped external volume. The maximum number of concurrent copies is 128,
and the I/O Suppression mode should be Disabled.
Max number of pairs Maximum of 8,191 pairs (8,191 P-VOLs and 8,191 S-VOLs)
can be created per TagmaStore™ Universal Storage
Platform system.
The maximum number of pairs equals the total number of
ShadowImage, ShadowImage for z/OS®, Compatible
Mirroring for IBM® FlashCopy®, Volume Migration and
Cross-System Copy volume pairs.
Max number of S-VOLs 8,191
MU numbers of cascade pairs: For L1 pairs = 0, 1, and 2 (a total of 3 pairs);
for L2 pairs = 1 and 2 (a total of 2 pairs)
Combinations of RAID levels (primary-secondary): All combinations supported:
RAID1-RAID1, RAID5-RAID5, RAID1-RAID5, RAID5-RAID1
ShadowImage Operations
y Updates the S-VOL after the initial copy
y Write I/O to the P-VOL during the initial copy is duplicated at the S-VOL by update copy operations after the initial copy
y The P-VOL remains available to the host for R/W I/O operations
[Diagram: the host writes to the P-VOL; a differential data bitmap records the changes, and update copy operations replicate them to the S-VOL while the pair is in PAIR status]
ShadowImage Operations
[Diagram: pair status during paircreate]
SMPL (start) → COPY(PD) (initial copy) → PAIR (finished)
The ShadowImage initial copy operation takes place when you create a new volume
pair. The ShadowImage initial copy operation copies all data on the P-VOL to the
associated S-VOL. The P-VOL remains available to all hosts for read and write I/Os
throughout the initial copy operation. Write operations performed on the P-VOL
during the initial copy operation will be duplicated at the S-VOL by update copy
operations after the initial copy is complete. The status of the pair is COPY(PD) (PD
= pending) while the initial copy operation is in progress. The status changes to
PAIR when the initial copy is complete.
When creating pairs, you can select the pace for the initial copy operation(s): slower,
medium, and faster.
The slower pace minimizes the impact of ShadowImage operations on subsystem
I/O performance, while the faster pace completes the initial copy operation(s) as
quickly as possible.
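At the RAID Manager CCI level, this pace selection corresponds to the copy-pace option of the paircreate command. The following is a sketch only: the group and device names are hypothetical entries from a HORCM configuration file, and both HORCM instances are assumed to be running.

```shell
# -vl makes the volume known to the local instance the P-VOL.
# -c sets the copy pace in tracks per operation: a low value such as 1
# gives a slower, low-impact initial copy; a high value such as 15
# completes the copy as quickly as possible.
paircreate -g Group1 -d Disk1 -vl -c 15

# Wait for the status to change from COPY(PD) to PAIR
# (give up after 600 seconds).
pairevtwait -g Group1 -d Disk1 -s pair -t 600
```

The Storage Navigator GUI exposes the same slower/medium/faster choice when creating pairs.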
ShadowImage Operations
[Diagram: update copy operation. Host I/O updates the P-VOL; the differential data is copied to the S-VOL by an update copy while the pair status is PAIR]
ShadowImage Operations
y PAIRCREATE
– During ShadowImage operations, the P-VOLs remain available to all hosts
for R/W I/O operations (except during Reverse Resync)
– S-VOLs become available for host writing only after the pair has been split
– S-VOLs are updated asynchronously - for a volume pair with PAIR status,
the P-VOL and S-VOL may not be identical
– Update Copy operations DO NOT occur every time a host issues a write
I/O to the P-VOL of a ShadowImage Pair - The TagmaStore stores the
differential bitmap and then performs update copy operations periodically
based on the amount of differential data present on the P-VOL as well as
the elapsed time between update copy operations (volumes must be in
PAIR status)
ShadowImage Operations
y PAIRSPLIT Operation
[Diagram: the primary server host accesses the P-VOLs while the backup server host uses the split S-VOLs. Storage Navigator breaks the mirror (splits the pair); the backup S-VOL then holds the updated backup data, which can be written to tape]
The ShadowImage split capability provides point-in-time backup of your data, and
also facilitates real data testing by making the ShadowImage copies (S-VOLs)
available for host access. The ShadowImage pairsplit operation performs all pending
S-VOL updates (those issued prior to the split command and recorded in the P-VOL
track map) to make the S-VOL identical to the state of the P-VOL when the split
command was issued, and then provides full read/write access to the split S-VOL.
You can split existing pairs as needed, and you can also use the pairsplit operation
to create and split pairs in one step.
When the split operation is complete, the pair status changes to PSUS, and you have
full read/write access to the split S-VOL (even though it is still reserved). While the
pair is split, the TagmaStore Universal Storage Platform establishes a differential
bitmap for the split P-VOL and S-VOL and records all updates to both volumes. The
P-VOL remains fully accessible during the pairsplit operation. Pairsplit operations
cannot be performed on suspended (PSUE) pairs.
When splitting pairs, you can select the pace for the pending update copy
operation(s): slower, medium, and faster.
The slower pace minimizes the impact of ShadowImage operations on subsystem
I/O performance, while the faster pace splits the pair(s) as quickly as possible.
ShadowImage Operations
When splitting pairs, you can also select the split type:
¾ Quick Split
¾ Steady Split.
When the quick split operation starts, the pair status changes to PSUS(SP), and the S-
VOL is available immediately for read and write I/Os (even though it is still
reserved). The TagmaStore Universal Storage Platform performs all pending update
copy operations to the S-VOL in the background. When the quick split is complete,
the pair status changes to PSUS.
When the steady split operation starts, the pair status changes to COPY(SP), and the
TagmaStore Universal Storage Platform performs all pending update copy
operations to the S-VOL. When the Steady Split operation is complete, the pair
status changes to PSUS, and you have full read/write access to the split S-VOL
(even though it is still reserved).
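From CCI, splitting an existing pair is a single command. The sketch below uses the hypothetical Group1/Disk1 names from a configuration file and assumes the pair is currently in PAIR status; whether the split runs as a quick or steady split is governed by the SVP mode setting, not by a command option.

```shell
# Perform the pending updates and leave the pair in PSUS,
# with the split S-VOL available for read/write access.
pairsplit -g Group1 -d Disk1

# Check the result; -fc also shows the copy percentage.
pairdisplay -g Group1 -d Disk1 -fc
```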
ShadowImage Operations
y Resynchronize Operation
[Diagram: Storage Navigator resynchronizes the split pair between the primary and backup servers; the volumes return to PAIR status]
– When the operation starts, the pair status changes to COPY(RS) or COPY(RS-R)
– When complete, the pair status changes to PAIR again
– Update Copy operations resume after the pair status changes to PAIR
ShadowImage Operations
y Resynchronize Options
– Normal Resync: syncs the S-VOL with the P-VOL
y Copy direction = P-VOL to S-VOL
y The S-VOL track map is merged into the P-VOL track map; all flagged tracks are copied from the P-VOL to the S-VOL
[Diagram: both volumes start in PSUS; host I/O to the S-VOL stops, the track bitmaps are merged, the status changes to COPY(RS), and the pair returns to PAIR]
Tracks changed on the S-VOL during the split are lost during the resync because P-VOL data takes priority over S-VOL data; the corresponding P-VOL tracks overwrite the changed S-VOL tracks. During a Reverse Resync, changed S-VOL tracks overwrite P-VOL tracks.
Normal. The normal pairresync operation resynchronizes the S-VOL with the P-VOL. The
copy direction for a normal pairresync operation is P-VOL to S-VOL. The pair status during
a normal resync operation is COPY(RS), and the P-VOL remains accessible to all hosts for
both read and write operations during a normal pairresync.
Quick. The quick pairresync operation speeds up the normal pairresync operation by not
copying the P-VOL data to the S-VOL right away. Instead, the S-VOL is gradually
resynched with the P-VOL as host updates are performed, when intermittent copy is
scheduled (TagmaStore Universal Storage Platform internal), or when a user issues another
pairsplit command for the pair. The P-VOL remains accessible to all hosts for both read and
write operations during a quick pairresync (same as normal pairresync). The S-VOL
becomes inaccessible to all hosts during a quick pairresync operation (same as normal
pairresync).
Reverse. The reverse pairresync operation synchronizes the P-VOL with the S-VOL. The
copy direction for a reverse pairresync operation is S-VOL to P-VOL. The pair status
during a reverse copy operation is COPY(RS-R), and the P-VOL and S-VOL become
inaccessible to all hosts for write operations during a reverse pairresync operation. As soon
as the reverse pairresync is complete, the P-VOL becomes accessible. The reverse
pairresync operation can only be performed on split pairs, not on suspended pairs. The
reverse pairresync operation cannot be performed on L2 cascade pairs. The P-VOL remains
read-enabled during the reverse pairresync operation only to enable the volume to be
recognized by the host. The data on the P-VOL is not guaranteed until the reverse
pairresync operation is complete and the status changes to PAIR.
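In CCI terms, the normal and reverse variants map onto the pairresync command. A sketch with hypothetical group/device names, assuming the pair is currently split (PSUS):

```shell
# Normal resync: copy direction P-VOL to S-VOL, status COPY(RS).
pairresync -g Group1 -d Disk1

# Restore (reverse) resync: copy direction S-VOL to P-VOL,
# status COPY(RS-R). Only allowed on split pairs, and not on
# L2 cascade pairs.
pairresync -g Group1 -d Disk1 -restore
```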
ShadowImage Operations
Quick Restore. The quick restore operation speeds up reverse copy by changing the
volume map in the TagmaStore Universal Storage Platform system to swap the
contents of the P-VOL and S-VOL without copying the S-VOL data to the P-VOL.
The P-VOL and S-VOL are resynchronized when update copy operations are
performed for pairs in the PAIR status. The pair status during a quick restore
operation is COPY(RS-R) until the volume map change is complete. The P-VOL and
S-VOL become inaccessible to all hosts for write operations during a quick restore
operation. Quick restore cannot be performed on L2 cascade pairs.
The P-VOL remains read-enabled during the quick restore operation only to enable
the volume to be recognized by the host. The data on the P-VOL is not guaranteed
until the quick restore operation is complete and the status changes to PAIR.
During a quick restore operation, the RAID levels, HDD types, and Cache
Residency Manager software settings of the P-VOL and S-VOL are exchanged. For
example, if the P-VOL has a RAID-1 level and the S-VOL has a RAID-5 level, the
quick restore operation changes the RAID level of the P-VOL to RAID-5 and of the
S-VOL to RAID-1.
ShadowImage Operations
[Diagram: a quick restore operation swaps the P-VOL and S-VOL contents (e.g., "abcde" and "12345" change places) while the pair passes through PSUS and returns to PAIR status]
Without Swap & Freeze: the P-VOL and S-VOL are resynchronized when ordinary update copy operations are performed after the quick restore operation.
If you do not want the P-VOL and S-VOL to be resynchronized after the quick
restore operation, you must set the Swap & Freeze option before performing the
quick restore operation. The Swap & Freeze option allows the S-VOLs of a
ShadowImage pair to remain unchanged after the quick restore operation. If the
quick restore operation is performed on a ShadowImage pair with the Swap &
Freeze option, update copy operations are suppressed, and are thus not performed
for pairs in the PAIR status after the quick restore operation. If the quick restore
operation is performed without the Swap & Freeze option, the P-VOL and S-VOL
are resynchronized when update copy operations are performed for pairs in the
PAIR status.
Note: Make sure that the Swap & Freeze option remains in effect until the pair status
changes to PAIR after the quick restore operation.
ShadowImage Operations
y QuickResync Specifications
– The QuickResync command completes in less than 1 second per pair
– This function copies only the delta bitmap (very fast)
– The pair quickly enters PAIR status and the S-VOL is immediately available to the S-VOL host as read-only
– The actual changed tracks are updated in the background as Update Copy operations occur
– During the transfer of the delta bitmap the pair status is COPY(RS)
[Diagram: on a QuickResync request, the delta bitmaps are merged; the P-VOL remains read/write, the S-VOL becomes read-only, and the changed tracks are copied asynchronously]
ShadowImage Operations
y Quick RESYNC:
Reduces RESYNC (primary to secondary) time considerably. If you use
QuickRESYNC and QuickSPLIT together, the wait time before a backup can
start from the secondary volume is reduced
y Quick RESTORE:
Reduces RESTORE (secondary to primary) time considerably. Users can
restart their jobs on the primary volume sooner. Be aware that this function
exchanges the physical locations of the data
Whether a resync/restore/split is performed as a quick operation depends on the
SVP mode setting. For example, the ShadowImage resync operation is performed as
a Quick Resync if the Quick Resync mode is enabled on the SVP. SVP mode 87
turns on Quick Resync, SVP mode 80 turns off Quick Restore, and SVP mode 122
turns off Quick Split.
ShadowImage Operations
y Suspend Operation
– The TagmaStore Universal Storage Platform will automatically suspend a
pair under the following conditions:
y When the P-VOL and/or S-VOL track map in shared memory is lost
ShadowImage Operations
y Pairsplit -E
– Suspends the copy operations to the S-VOL
– Both volumes enter the suspended error (PSUE) state
[Diagram: the pair status changes from PAIR/PAIR to PSUE/PSUE]
ShadowImage Operations
[Diagram: recovering a suspended pair with a normal resync. The P-VOL remains available to the host for R/W I/Os, host I/O to the S-VOL stops, the copy direction is P-VOL to S-VOL, and both volumes change from PSUE back to PAIR]
ShadowImage Operations
[Diagram: deleting a pair from Storage Navigator. Copy operations to the S-VOL stop and pending updates are discarded; both volumes change from PAIR to SMPL, and the S-VOL is not available for write I/O unless it is unreserved]
ShadowImage Pair Status Transitions
y Descriptions
ShadowImage Pair Status Transitions
y Descriptions (continued)
Module Review
Lab Project 8: ShadowImage GUI
y Upon completion of the lab project, you will be able to do the following:
– Create a pool of Reserved LDEVs to be used as ShadowImage S-VOLs
– Create three Level 1 (L1) S-VOLs from a single P-VOL
– Create a ShadowImage pair
– Split a ShadowImage pair putting the two volumes into suspended state
– Resynchronize a suspended pair putting the volumes back into Pair
status
– Create a Level 2 (L2) S-VOL by cascading the new L2 volume from an L1
volume
– Create an L1/L2 pair simultaneously from a Root P-VOL
– Display configuration data of a ShadowImage pair
– Display the command History and Pair status of a ShadowImage pair
– Split a ShadowImage pair putting the volumes back into Simplex status
– Remove a Reserved volume from the pool of Reserved ShadowImage
volumes
Lab Project 8: ShadowImage GUI
[Lab configuration diagram (port CL4-B): L1 and L2 cascade pairs. MU 0: L1 S-VOL LUN 000 (LDEV 05:0E) and S-VOL LUN 020 (LDEV 05:2E); MU 1: L2 S-VOL LUN 001 (LDEV 05:0F) and S-VOL LUN 021 (LDEV 05:2F)]
11. ShadowImage RAID
Manager CCI
Operations
Introduction to ShadowImage RAID Manager CCI Operations
y Objectives
– In this module, and any associated lab(s), you will learn to:
y Explain the purpose of RAID Manager Command Control Interface (CCI)
y Identify the main components of RAID Manager
y Explain how RAID Manager is set up
y State the purpose of the four sections of the HORCM configuration files
y Install the RAID Manager CCI on Windows and Sun Solaris systems
y Perform the following ShadowImage operations using CCI commands:
– raidscan
– paircreate
– pairdisplay
– pairsplit
– pairresync
RAID Manager CCI Overview
[Diagram: a server (Server A) runs management interface software and an application along with two HORCM instances, Instance0 and Instance1. Commands are issued to the instances, which communicate with the TagmaStore Universal Storage Platform through the command device; Instance0 manages the primary volume and Instance1 manages the secondary volume]
RAID Manager CCI Overview
y Host Running RAID Manager
– The host running RAID Manager does not require access to the P-VOLs or S-VOLs
– This allows an Administrator Host to control pairs and keep the pair controls separate from the production systems
[Diagram: the Administrator Host runs two HORCM instances, each with its own configuration file. The Production Host (with its management interface, software, and application) accesses the P-VOL and S-VOL, while the Administrator Host controls the pair through the command device on the TagmaStore Universal Storage Platform]
RAID Manager Components
y HORCM Instance
– Service or Daemon
– Used to communicate with TagmaStore Universal Storage Platform and
with remote server
y HORCM Configuration File
– Defines communication paths
– Defines LUNs (volumes) to be controlled
y HORCM Commands
– Monitor and control TrueCopy and ShadowImage operations
y HORCM Command Device
– Used to accept commands from HORCM
– Used to report command results to HORCM
Note: TrueCopy was originally called HORC (Hitachi Open Remote Copy),
and HORCM is Hitachi Open Remote Copy Manager.
ShadowImage uses the same CCI.
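On a UNIX host, the instances are typically started and stopped with the horcmstart/horcmshutdown scripts, and commands are directed at an instance through the HORCMINST environment variable. A brief sketch (instance numbers 0 and 1 are illustrative):

```shell
# Start HORCM instances 0 and 1; by default horcmstart.sh reads
# /etc/horcm0.conf and /etc/horcm1.conf.
horcmstart.sh 0 1

# Direct subsequent CCI commands at instance 0.
HORCMINST=0
export HORCMINST

# Stop both instances when finished.
horcmshutdown.sh 0 1
```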
RAID Manager Components
A Command Device can be any OPEN-x emulation type device, and can be a
VirtLUN (CVS) device as small as 36 MB. LUSE volumes cannot be used as
Command Devices. Command Devices must not contain any data.
It is a good idea to define a second Command Device for failover purposes. When
CCI receives an error notification in reply to a read or write request, it attempts
to switch to the other command device. If there is no other command device to
switch to, all TrueCopy and ShadowImage operations terminate abnormally.
The basic unit of the CCI software structure is the CCI instance. Each instance uses a
defined configuration file to manage volume relationships while maintaining
awareness of the other CCI instances.
RAID Manager Requirements
RAID Manager Configuration Files
ShadowImage and CCI Configuration Files
y HORCM_MON: IP Address
Identifies the server running the horcm instance. Use the IP address or an alias
(for example: 10.85.6.71, SunserverA, or localhost).

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
SunserverA    horcm0    1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2

HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          1     0
Group1       Disk2      CL1-A-01   1          2     0
Group2       Disk3      CL2-C-02   6          3     0

HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm1
Group2       SunserverA   horcm1
y HORCM_CMD
Identifies the path to the Command Device:

HORCM_CMD
#dev_name
\\.\PhysicalDrive6     (Windows)
/dev/rdsk/c0t5d2s2     (Solaris)

The remaining sections of the file are the same as in the HORCM_MON example above.
HORCM_INST identifies the remote HORCM instance that manages the alternate
half of each group's mirror set.
The instance parameter (HORCM_INST) defines the network address (IP address) of
the remote server.
The following values are defined in the HORCM_INST parameter:
¾ dev_group: The group name described in dev_group of HORCM_DEV.
¾ ip_address: The network address of the specified remote server.
¾ service: The port name assigned to the HORCM communication path.
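Putting the four sections together, one half of a matched configuration-file pair for a single ShadowImage pair might look like the sketch below. The host name, service names, and device addresses are illustrative, following the slide examples; the partner file (horcm1.conf) would use service horcm1, describe the S-VOL side in HORCM_DEV, and point HORCM_INST back at horcm0.

```
# horcm0.conf -- manages the P-VOL side of the pair
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
SunserverA    horcm0    1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t5d2s2

HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Group1       Disk1      CL1-A-01   1          1     0

HORCM_INST
#dev_group   ip_address   service
Group1       SunserverA   horcm1
```

With both files in place and both instances running, issuing paircreate -vl from instance 0 makes the volume defined here the P-VOL.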
Absolute LUN Numbers
Earlier versions of CCI used absolute LUNs to scan a port. Because the LUNs in a
host group are mapped for the host system, the target ID and LUN used by a CCI
command can differ from the target ID and LUN shown by the host system. Use the
target ID and LUN indicated by the raidscan command.
Example of the raidscan command:
raidscan -p CL1-A -fx
Port# /ALPA  TID#  LU#  NUM(LDEV#)
CL1-A / DE/  1     5    1(05)
CL1-A / DE/  1     7    3(25, 26, 27)
For ShadowImage, raidscan displays the MU# for each LUN (e.g., LUN 7-0, 7-1, 7-2).
The latest version of CCI lets you specify the host group number followed by the
LUN number, so the absolute LUN number is not required.
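With the host-group form, the scan might look like the following sketch (the port and host group number are illustrative):

```shell
# Scan LUNs through host group 01 on port CL1-A; -fx displays LDEV
# numbers in hexadecimal. No absolute LUN numbers are required here.
raidscan -p CL1-A-01 -fx
```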
Command Device
The command device accepts TrueCopy and ShadowImage read and write
commands that are executed by the TagmaStore Universal Storage Platform system,
and it returns the results of those requests to the UNIX/PC host. The volume
designated as the command device is used only by the TagmaStore Universal
Storage Platform and is a raw device. Make sure that the volume selected as the
command device does not contain any user data; once designated, the volume is
inaccessible to the UNIX/PC server host. The command device can be any OPEN-x
device (e.g., OPEN-3, OPEN-8) that is accessible by the host.
A Virtual LVI/LUN volume as small as 36 MB (e.g., OPEN-3-CVS) can be used as a
command device. A LUSE volume cannot be used as a command device. For Solaris
operations the command device must be labeled. To enable dual pathing of the
command device under Solaris systems, make sure to include all paths to the
command device on a single line in the HORCM_CMD section of the configuration
file. Putting the path information on separate lines may cause parsing issues, and
failover may not occur unless the HORCM startup script is restarted on the Solaris
system.
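As a sketch of the single-line rule above (the two device paths shown are hypothetical; substitute the actual paths to your command device):

```
HORCM_CMD
#dev_name
/dev/rdsk/c1t0d0s2 /dev/rdsk/c2t0d0s2
```

Both paths appear on one line, so HORCM treats them as alternate routes to the same command device.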
Command Device
– Windows
y Use the CCI raidscan -x findcmddev 0,32 command
– Returns a string similar to the following: \\.\PhysicalDriveX
– This string goes in the HORCM configuration file, where X is the disk number.
Note: 0,32 specifies the range of disks to search; a larger range than in the
example can be specified.
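A sketch of the sequence on Windows (the disk number 3 below is hypothetical; use whatever number findcmddev reports):

```
C:\HORCM\etc> raidscan -x findcmddev 0,32
    ... reports a string such as \\.\PhysicalDrive3 ...

(then, in the HORCM configuration file)
HORCM_CMD
#dev_name
\\.\PhysicalDrive3
```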
Mirror Unit Numbers
(Diagram: mirror unit numbers 0, 1, and 2 assigned to the S-VOLs of one P-VOL.)
MU Number
¾ Used in the HORCM configuration file to define and control the ShadowImage
paired volumes. Only ShadowImage uses the MU number.
¾ Corresponds to the delta table that tracks the difference data between the P-VOL
and S-VOL.
ShadowImage Configuration Example
y L1/L2 Pair Configuration (see next slide for configuration file entries)
(Diagram: a P-VOL paired with an L1 S-VOL and an L2 S-VOL at CL1-C-01, TID 0,
LUN 4 and LUN 6.)
y L2 Configuration Files

horcm0.conf:

HORCM_MON
#ip_address     service   poll(10ms)  timeout(10ms)
10.15.93.201    horcm0    1000        3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t5d1s2

HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Level1       vol1       CL1-A-01   1          0     0
Level1       vol2       CL1-A-01   1          0     1
Level2       Vol3       CL1-C-01   0          4     1
Level2       Vol4       CL1-C-01   0          5     1

HORCM_INST
#dev_group   ip_address     service
Level1       10.15.93.201   horcm1
Level2       10.15.93.201   horcm1

horcm1.conf:

HORCM_MON
#ip_address     service   poll(10ms)  timeout(10ms)
10.15.93.201    horcm1    1000        3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t5d1s2

HORCM_DEV
#dev_group   dev_name   port#      TargetID   LU#   MU#
Level1       vol1       CL1-C-01   0          4     0
Level1       vol2       CL1-C-01   0          5     0
Level2       Vol3       CL1-C-01   0          6     0
Level2       Vol4       CL1-C-01   0          7     0

HORCM_INST
#dev_group   ip_address     service
Level1       10.15.93.201   horcm0
Level2       10.15.93.201   horcm0
Set Environment Variables
y HORCMINST Variable
Determines which instance receives CCI commands.
Windows NT / 2000
C:\HORCM\etc> set HORCMINST=X   (general form)
C:\HORCM\etc> set HORCMINST=0   (example: instance 0)
Note: When running multiple HORCM instances, each instance requires a unique HORCMINST value.
The HORCMINST variable determines the instance that receives the CCI
commands.
The HORCMINST variable specifies the instance # when using 2 or more CCI
instances on the same server.
The command execution environment and the HORCM activation environment
require an instance # to be specified.
Set Environment Variables
y HORCC_MRCF Variable
Identifies the instance as a ShadowImage instance.
Windows NT / 2000
C:\HORCM\etc> set HORCC_MRCF=1
The HORCC_MRCF variable identifies the command execution environment as a
ShadowImage (MRCF) environment. Set it in each environment from which
ShadowImage commands will be issued.
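On a UNIX host, the same pair of variables can be set in a Bourne-style shell before issuing ShadowImage commands — a minimal sketch, assuming instance 0 is the ShadowImage instance:

```shell
# Select CCI instance 0 and mark this environment as ShadowImage (MRCF)
HORCMINST=0
HORCC_MRCF=1
export HORCMINST HORCC_MRCF
```

Any CCI command issued from this shell is then routed to instance 0 and interpreted as a ShadowImage operation.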
Services File
y Solaris systems
/etc/services   (the file is actually linked to /etc/inet/services)
y Windows NT / 2000 systems
\WINNT\system32\drivers\etc\services
y Windows XP systems
\WINDOWS\system32\drivers\etc\services
Note: For Windows, there must be a blank line at the end of the services file or
HORCM will not start.
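The services file entries themselves are one line per instance, mapping each HORCM service name to a UDP port — a sketch with assumed port numbers (pick any unused ports on your hosts):

```
horcm0          11000/udp       # RAID Manager instance 0
horcm1          11001/udp       # RAID Manager instance 1
```

The service names must match the service column of the HORCM_MON and HORCM_INST sections in the configuration files.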
Starting RAID Manager
SOLARIS
MANUAL START
# horcmstart.sh Starts 1 instance of HORCM
# horcmstart.sh 0 1 Starts both instances of HORCM
Windows NT / 2000
MANUAL START
C:\HORCM\etc> horcmstart Starts 1 instance of HORCM
C:\HORCM\etc> horcmstart 0 1 Starts both instances of HORCM
Shutting Down RAID Manager
SOLARIS
MANUAL SHUTDOWN
# horcmshutdown.sh Stops 1 instance of HORCM
# horcmshutdown.sh 0 1 Stops both instances of HORCM
Windows NT / 2000
MANUAL SHUTDOWN
C:\HORCM\etc> horcmshutdown Stops 1 instance of HORCM
C:\HORCM\etc> horcmshutdown 0 1 Stops both instances of HORCM
Besides normal shutdown, stopping and restarting the instances is also the
recommended way to correct errors in the horcm configuration files. Whenever you
need to modify the configuration files:
¾ Shut down the HORCM Instances
¾ Modify the horcm configuration files
¾ Start Up the HORCM Instances
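On Solaris, the three steps above look like this (instance numbers 0 and 1 assumed, as in the earlier slides):

```
# horcmshutdown.sh 0 1          Stop both HORCM instances
# vi /etc/horcm0.conf           Modify the configuration files
# vi /etc/horcm1.conf
# horcmstart.sh 0 1             Restart both instances
```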
RAID Manager CCI Commands
y CCI Commands
CCI Command    Description
horctakover    (HORC only) The host executing horctakover takes ownership of the pair.
paircurchk     (HORC only) Checks the consistency of the data on the secondary volume.
paircreate     Creates a pair.
pairsplit      Splits a pair.
pairresync     Resynchronizes a pair.
pairevtwait    Event waiting command.
pairmon        Monitors a pair and reports changes in the pair's status.
pairvolchk     Checks the attributes of a volume connected to the local or remote hosts.
pairdisplay    Confirms the configuration of a specified pair.
raidscan       Lists the SCSI/Fibre port, target ID, LUN number, and LDEV status.
raidar         Reports the I/O activity of a specified LDEV.
raidqry        Confirms the connection of the 7700E and the open-system host.
horcctl        Displays the internal trace control parameters.
RAID Manager CCI Commands
y raidscan
Displays configuration and status information of the specified port.
For example: raidscan -p CL1-A -fx returns something like the following:

PORT# /ALPA/C ,TID#, LU# Num(LDEV#..) P/S, Status, LDEV#, P-Seq#, P-LDEV#
CL1-A / e8 / 5, 0, 0-0 .1(503)........SMPL

Reading the output fields:
¾ CL1-A — TagmaStore Universal Storage Platform host port (with host group number).
¾ e8 — Arbitrated Loop Physical Address (if direct-connect, the Fibre address).
¾ 5 — the physical slot of the HBA.
¾ 0 — target ID as set by the CCI (most likely different from the Fibre Channel-assigned
TID based on the ALPA).
¾ 0-0 — CCI-assigned LUN number: first digit = LUN number, second digit = bitmap
(MU) number.
¾ 1(503) — LDEV number (05:03). Note: CU 00 is not listed.
¾ SMPL — volume status.
RAID Manager CCI Commands
y paircreate
-g Specifies group name from horcmX.conf
- All pairs in group created unless restricted by another option
-d Specifies device name from horcmX.conf
- Restricts operation to one TID/LUN
- Additional options to improve security
-f Specifies fence level - data, status, never, or async + CTGID
Note: Only used with TrueCopy.
-vl Specifies that local device is primary (sending device)
-vr Specifies that remote device is primary (receiving device)
-c Specifies number of tracks to be copied during initial copy operation
-nocopy Pair is created without copying primary to secondary (dangerous!)
-m <mode> Mode=cyl / Mode=trk
Paircreate generates a new volume pair from two unpaired volumes. It can be used
to create either a paired logical volume or a group of paired volumes, and it lets you
specify which side (local or remote) is the primary (-vl or -vr).
Paircreate example:
C:\HORCM\etc> paircreate -g vg01 -d labadb1 -vl -f never
RAID Manager CCI Commands
y Pair Direction
– Warning! paircreate will overwrite the S-VOL
y To create this pair from Instance #0, use -vl
y To create this pair from Instance #1, use -vr
– Best Practice
y Always use Instance #0 (HORCMINST=0) and always use -vl
y paircreate examples
– Create a single pair using the -d option (pairs only the specified volumes):
paircreate -g G1 -d L1-MU0 -vl
– Create multiple pairs using only the -g option (pairs all volumes in the group):
paircreate -g G1 -vl
RAID Manager CCI Commands
y pairdisplay
-g Specifies group name from horcmX.conf
– All volumes in the group checked
-d Specifies device name from horcmX.conf
– Restricts operation to one TID/LUN
-c Checks the pair path and displays illegal pair connections
-l Displays the volume pair status of the local host
-fx Shows LDEV numbers in hexadecimal
-fc Shows the % pair synchronization
-fm Displays the bitmap mode (Cyl/Trk)
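A hedged sketch of pairdisplay usage against the Level1 group from the earlier configuration example (the exact output columns vary by CCI version):

```
C:\HORCM\etc> pairdisplay -g Level1 -fc
    ... shows each P-VOL/S-VOL in the group with its status
        (SMPL, COPY, PAIR, PSUS, ...) and % synchronized ...
```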
y pairsplit
-g Specifies group name from horcmX.conf
– All pairs in group split unless restricted by another option
-d Specifies device name from horcmX.conf
– Restricts operation to one TID/LUN
– Additional options to improve security
-r Splits the pair and puts the secondary volume into read-only mode
-rw Splits the pair and puts the secondary volume into read/write mode
-S Puts primary & secondary into simplex status (stops bitmap tracking)
-R Forces the secondary volume into error status (stops bitmap tracking)
RAID Manager CCI Commands
y pairsplit examples
– Split a single pair using the -d option (splits only the specified volumes)
y Suspends the update copy – both volumes enter suspended status
y Changes are tracked in the bitmaps
pairsplit -g G1 -d L1-MU0
– Split multiple pairs using only the -g option (splits all pairs in the group)
y Suspends the update copy – both volumes enter suspended status
y Changes are tracked in the bitmaps
pairsplit -g G1
y pairresync
-g Specifies group name from horcmX.conf
– All pairs in the group are resynchronized unless restricted by another option
RAID Manager CCI Commands
If the P-VOL failed and the customer continued to work on the S-VOL, use the
-restore option to copy the entire S-VOL back to the P-VOL once the P-VOL is
repaired.
* The pair would have been split with the pairsplit -E command (manually, or by
HORCM if an error is detected), leaving the volumes in PSUS and SSUS status.
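A sketch of that recovery sequence, using group G1 from the earlier examples:

```
# pairsplit -g G1 -E            Split on error: volumes go to PSUS/SSUS
  ... repair the P-VOL, then copy the S-VOL contents back ...
# pairresync -g G1 -restore     Reverse resync: S-VOL contents to P-VOL
```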
RAID Manager Considerations
Troubleshooting
y Troubleshooting Information
(Table: for each problem type — CCI startup failures, command errors — the
information type and log filename to check.)
If an error occurs in RAID Manager, you can check the following logs:
¾ RAID Manager Instances Failed To Start – check the HORCM Startup Log
¾ RAID Manager Command Failed – check the Command Log and the Error Log
Module Review
Lab Project 9: ShadowImage CCI
y Upon completion of the lab project, you will be able to do the following:
– Install version 01-15-03/04 of the CCI (RAID Manager) on a Windows
Host system
– Using Storage Navigator, configure a TagmaStore Universal Storage
Platform subsystem LDEV as a CCI Command Device and map it to your
host system ports
– Create and configure the Hitachi Open Remote Copy Manager
(HORCM) configuration files (horcm0.conf and horcm1.conf) to support
the ShadowImage pair operations outlined by this lab project.
– Create three ShadowImage L1 S-VOLs from a P-VOL
– Split a ShadowImage pair, putting the two volumes into suspended
(PSUS/SSUS) status
– Resynchronize a suspended pair putting the volumes back into Pair status
– Create an L1/L2 pair simultaneously from a P-VOL.
– Display the Pair status of a ShadowImage pair.
Lab Project 9: ShadowImage CCI
y LUN Mapping
– You will first map ten (10) LDEVs of Control Unit 5 to ports CL1-A and
CL2-A as LUNs 2 through B
12. Virtual Partition Manager
Module Objectives
Virtual Partition Manager Overview
y Business Need
– One subsystem can store a large amount of data
– Multiple companies, departments, systems, or applications can share one
subsystem
For example: Storage Service Provider
– Each user wants to use the subsystem as if using an individual subsystem
exclusively, without being influenced by other users' operations
Virtual Partition Manager Overview
Virtual Partition Manager has two main functions: Storage Logical PaRtition (SLPR),
and Cache Logical PaRtition (CLPR). Storage Logical Partition allows you to divide
the available storage among various users, to lessen conflicts over usage. Cache
Logical Partition allows you to divide the cache into multiple virtual cache
memories, to lessen I/O contention.
Virtual Partition Manager Overview
(Diagram: multiple companies' servers share one subsystem; each accesses its own
LUNs/LDEVs through its own cache partition, overseen by the storage
administrator.)
Cache Logical Partition allows you to divide the cache into multiple virtual cache
memories, to lessen I/O contention. Each server's user can perform operations
without considering the operations of other users. Even if the load becomes heavy
in Company A, Company A's operations do not affect the other companies'
operations.
Virtual Partition Manager Overview
Storage Logical Partition (SLPR) allows you to divide the available storage among
various users, to lessen conflicts over usage. If multiple administrators manage one
subsystem, an administrator might destroy another's volumes by mistake.
Partitioning provides a mechanism that minimizes the effect of one administrator's
operations on the others.
Virtual Partition Manager Overview
If no storage partition operations have occurred, the subsystem will have Storage
Logical Partition 0 (SLPR0), which is a pool of all of the resources of the subsystem
(e.g., cache logical partitions and ports). SLPR0 will also contain Cache Logical
Partition 0 (CLPR0), which is a pool of all of the cache and all parity groups in the
subsystem. The only users who have access to SLPR0 and CLPR0 are storage
administrators.
¾ SLPR0 and CLPR0 always exist and cannot be removed.
¾ CLPR0 always belongs to SLPR0.
¾ CLPR0 is a pool area of cache and parity groups.
¾ Only the Storage Administrator can use SLPR0. Partitioned Storage
Administrators manage the other SLPRs.
Storage Administrator and Storage Partition Administrator
y Administrator Access
– Administrators are assigned via the Control Panel of Storage Navigator (Option button)
y One SA
y Many SPAs
(Diagram: one Storage Administrator manages the whole subsystem; a Storage
Partition Administrator is assigned to each of SLPR 1 and SLPR 2 and cannot
access the other partition's cache memory.)
Administrator access for the TagmaStore Universal Storage Platform is divided into
two types. Storage Administrators manage the entire subsystem and all of its
resources. Storage Administrators can create and manage storage logical partitions
and cache logical partitions, and can assign access permission for storage partition
administrators. Only Storage Administrators can access Storage Logical Partition 0
(SLPR0) and Cache Logical Partition 0 (CLPR0). Storage Partition Administrators
can view and manage only the resources that have been assigned to a specific
storage logical partition.
Storage Administrator and Storage Partition Administrator
(Diagram: SLPR1 with CLPR1 and SLPR2 with CLPR2 are carved out of the
SLPR0/CLPR0 pool; an RCU target port is shared by all SLPRs.)
Only the Storage Administrator can define Hitachi TrueCopy™ Remote Replication.
Storage Administrator and Storage Partition Administrator
(Diagram: an external port on the TagmaStore maps volumes on external 9500V
and 9900V subsystems; data flows through the mapping.)
Storage Administrator and Storage Partition Administrator
y Storage Navigator
Storage Administrator Screen / Storage Partition Administrator Screen
The SA sees all the resources; the SPA sees only its own resources
Resources are divided into each SLPR. In the Storage Administrator’s screen, all
resources in all SLPRs are displayed. In the Storage Partition Administrator’s screen,
only the resources in its own SLPR are displayed. Resources shared by all SLPRs
are displayed in both the Storage Administrator's and the Storage Partition
Administrator's screens.
Storage Administrator and Storage Partition Administrator
y TagmaStore subsystems allow multiple users to log in to the subsystem and put
their Storage Navigator sessions into modify mode (multiple lock control)
Virtual Partition Manager Features
(Diagram: parity groups and their LDEVs assigned to cache logical partitions.)
Virtual Partition Manager Features
y Specifications

#   Item                            Content
1   Maximum number of CLPRs         32
2   Minimum unit of CLPR            Parity group
3   Change unit of CLPR             Increase size by 2 GB
4   CLPR capacity                   4 GB – 128 GB
5   Max number of VDEVs per CLPR    1 – 16,384
6   Change unit of VDEVs per CLPR   1 – 16,384
7   Supported emulation types       All types supported by TagmaStore
8   Max number of CLPRs per SLPR    32
9   LUSE                            Supported
10  RAID level                      All types supported by TagmaStore
11  DCR                             Supported if the CLPR has a minimum of 6 GB
12  PCR                             Supported
Virtual Partition Manager Features
y Inflow Control
– Inflow control is performed by comparing the Write Pending threshold to
the Write Pending rate of each CLPR
– A CLPR with a very high inflow rate will not affect the inflow rate of
other CLPRs
Virtual Partition Manager performs inflow control by comparing the write pending
threshold with the write pending rate of each CLPR. Therefore, even if the write
pending rate of one CLPR is very high, the inflow control of the other CLPRs is not
affected.
Virtual Partition Manager Features
y De-Stage
– De-staging is triggered by the write pending rate of the entire cache
– When the Write Pending rate for one CLPR is very high, the other CLPRs'
de-stage process is accelerated
For example: the cache de-stage threshold is set to 70%. When the total
write pending for the cache hits 70%, data for both hosts is de-staged.
Virtual Partition Manager performs the de-stage process by comparing the write
pending threshold with the write pending rate of the entire subsystem. Therefore,
when the write pending rate of one CLPR is very high, the other CLPRs' de-stage
process is also accelerated.
Virtual Partition Manager Best Practices
y Shared Resources
– SLPRs run independent of each other
– All other resources are shared and are dependent on each other
(Diagram: shared resources — SM, CM, CSW, CHA/DKA, and internal paths; each
SLPR still has its own cache usage/WP ratio and MP/path usage.)
The previous pages describe host ports, cache resources, and ECC groups, but all
other resources are not SLPR/CLPR dependent. They are shared by all
SLPRs/CLPRs, so one SLPR/CLPR may have an impact on the others.
Virtual Partition Manager Best Practices
y Hi-Star Paths
– All the internal paths are shared and cannot be divided among the
SLPRs and CLPRs
(Diagram: SM paths to the shared memory and CM paths (C-paths) through the
cache switches are shared by all partitions.)
Internal Paths cannot be divided for each SLPR/CLPR, because Hi-Star paths are
shared by all CHAs and DKAs.
Virtual Partition Manager Best Practices
y DKA Processors
– You can design your CLPR configuration around the hardware configuration
(Diagram: DKC with R0/R1 DKUs; CLPR1 aligned with a particular DKU.)
DKA processors are shared by all CLPRs, but you can arrange the CLPRs so that
each is served by different DKA processors.
SLPR and CLPR User IDs
Partition Manager Functions
Select the License Key button to open the License Key panel.
License Key Partition Definition
y License Key Partition Definition for a product with limited license capacity
License Key Partition Definition
y License Key Partition Definition for a product with unlimited license capacity
Storage Navigator
Select the Partition Definition tab to open the Partition Definition panel. The
default view is the Logical Partition panel.
Note: If you are logged on as a storage partition administrator, this panel will display only
the resources in that storage partition.
The Logical Partition panel has the following features:
¾ The Partition Definition outline is on the upper left of the Logical Partition panel, and
displays all of the storage logical partitions in the subsystem.
¾ The name and number of the storage logical partition are displayed to the right of
each SLPR icon.
The Subsystem resource list is on the upper right side of the Logical Partition panel, and
displays the following information about the resources in the subsystem:
➢ No.: The number of the storage partition
➢ Item: The resource type
➢ Name: The storage logical partition numbers and names
➢ Cache (Num. of CLPRs): The cache capacity and number of cache logical partitions
➢ Num. of PGs: The number of parity groups
➢ Num. of ports: The number of ports
➢ Status (not shown in this screen shot): If the storage logical partition has been edited, the status is displayed.
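The resource-list columns described above can be thought of as one record per storage logical partition. The following Python sketch is illustrative only; the class and field names are hypothetical choices that mirror the panel columns, not part of Storage Navigator:

```python
from dataclasses import dataclass

@dataclass
class SlprResourceRow:
    """Hypothetical model of one row in the Subsystem resource list."""
    number: int             # No.: the number of the storage partition
    item: str               # Item: the resource type
    name: str               # Name: SLPR number and name
    cache_gb: float         # Cache capacity in GB
    num_clprs: int          # Num. of CLPRs
    num_parity_groups: int  # Num. of PGs
    num_ports: int          # Num. of ports
    status: str = ""        # Displayed only if the SLPR has been edited

# Example row for the system partition (values are made up).
row = SlprResourceRow(0, "SLPR", "SLPR0:System", 64.0, 2, 12, 8)
print(row.name)
```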
Virtual Partition Manager Operations
• Configuration Change
– A configuration change requires processing time. Processing time depends on the cache capacity and device capacity involved in the operation, cache usage before the operation, the write pending ratio before the operation, the I/O load, and so on.
– It may take several hours for SLPR and CLPR changes to be applied. Some time after the change is applied, the following progress panel appears at the bottom of the Partition Definition window.
5. In the SLPR Name field, on the bottom left of the panel, enter the name of the selected SLPR. You can use up to 32 alphanumeric characters.
6. In the CU field, input the CU number(s) for the selected SLPR (00 - 3F). An asterisk (*) indicates that the CU is defined as an LDEV.
7. To add a CU to the SLPR, select the CU from the Available CU list, then select the Add button to move that CU to the CU list. You can select up to 64 CUs, whether or not those CUs are defined as LDEVs.
8. To delete a CU from the specified SLPR, select the CU from the CU list and select Delete to return that CU to the Available CU list.
9. Available SSIDs are in SLPR0. In the SSID field, select an available SSID as follows:
➢ In the From: field, input the starting number of the SSID (0004 to FFFE).
➢ In the To: field, input the ending number of the SSID.
10. Select Apply to apply the settings, or select Cancel to cancel them. The progress bar is displayed.
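The input rules in steps 5 through 10 can be summarized as a few validation checks. The Python sketch below is illustrative only; the function names and the checks are assumptions based on the ranges stated above, not part of any Hitachi tool:

```python
import re

def validate_slpr_name(name: str) -> bool:
    # Step 5: up to 32 alphanumeric characters.
    return bool(re.fullmatch(r"[A-Za-z0-9]{1,32}", name))

def validate_cu_list(cus: list[int]) -> bool:
    # Steps 6-7: CU numbers range 0x00-0x3F; up to 64 CUs per SLPR.
    return len(cus) <= 64 and all(0x00 <= cu <= 0x3F for cu in cus)

def validate_ssid_range(start: int, end: int) -> bool:
    # Step 9: SSIDs fall between 0004 and FFFE, From <= To.
    return 0x0004 <= start <= end <= 0xFFFE

assert validate_slpr_name("SLPR01")
assert not validate_slpr_name("a" * 33)      # too long
assert validate_cu_list([0x00, 0x3F])
assert not validate_cu_list([0x40])          # outside 00-3F
assert validate_ssid_range(0x0004, 0x00FF)
```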
The resources of a storage logical partition include cache logical partitions and ports, which can be migrated to another storage logical partition as needed. The only ports that can be migrated are Target ports and the associated NAS ports on the same channel adapter. Initiator ports, RCU Target ports, and External ports cannot be migrated, and must remain in SLPR0.
Notes:
➢ LUs that are associated with a port in a particular SLPR must stay within that SLPR.
➢ LUs that are associated with a parity group in a particular SLPR must stay within that SLPR.
➢ Parity groups containing NAS system LUs (LUN0005, LUN0006, LUN0008, LUN0009, and LUN000A) must remain in SLPR0.
➢ NAS system LUs (LUN0000 and LUN0001) must belong to the same SLPR as the NAS channel adapter.
To migrate one or more resources:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a SLPR from the Partition Definition outline, on the upper left of the
panel. This will display the Storage Management Logical Partition panel.
3. From the Storage Logical Partition Resource List on the upper right of the
panel, select one or more cache logical partition(s) and/or port(s) to be
migrated. Right-click to display the pop-up menu. Select Cut.
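The port migration rules above (only Target ports and their associated NAS ports may move; Initiator, RCU Target, and External ports must remain in SLPR0) can be sketched as a simple eligibility check. This Python fragment is purely illustrative; the function name is an assumption, and only the port type names come from the text:

```python
# Port types taken from the text above; the sets and function are a
# hypothetical model, not a real Virtual Partition Manager interface.
MIGRATABLE_PORT_TYPES = {"Target", "NAS"}
FIXED_PORT_TYPES = {"Initiator", "RCU Target", "External"}

def can_migrate_port(port_type: str) -> bool:
    if port_type in FIXED_PORT_TYPES:
        return False  # these must remain in SLPR0
    return port_type in MIGRATABLE_PORT_TYPES

print(can_migrate_port("Target"))     # True
print(can_migrate_port("Initiator"))  # False
```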
You must first have created one or more storage logical partitions before you can create a cache logical partition.
Note: To create a CLPR, the remaining cache size of CLPR0, calculated by subtracting the Cache Residency size and the Partial Cache Residence size from the cache size of CLPR0, must be 8 GB or more.
To create a cache logical partition:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Right-click a SLPR from the Partition Definition outline, on the upper left of
the panel, to display the Create CLPR pop-up menu then select Create CLPR.
This will add a cache logical partition to the Partition Definition outline.
3. Select the newly created CLPR from the Partition Definition outline, to
display the Cache Logical Partition panel.
4. In the Detail for CLPR section, on the lower left of the panel, do the following:
5. In the CLPR Name field, type the name of the cache logical partition, in up to
16 alphanumeric characters.
6. In the Cache Size field, enter the cache capacity. You may select from 6 to 52
GB, in 2 GB increments. The default value is 4 GB. The size of the cache will
be allocated from CLPR0, but there must be at least 8 GB remaining in
CLPR0.
7. Cache Residency Size indicates the capacity of the Cache Residency cache. Select or input a value from 0 to 252 GB, in 0.5 GB increments. The default value is 0 GB. The defined cache residency size must be smaller than the total defined cache residency size.
8. Cache Residency Area indicates the defined cache residency area, which must be smaller than the total defined cache residency size.
9. In the Partial Cache Residency Size field, enter the cache capacity for Partial Cache Residence (PCR), from 0 to 252 GB in 0.5 GB increments. The default value is 0 GB.
10. Select Apply to apply the settings. The progress bar is displayed.
11. The change in cache capacity will be reflected in this cache logical partition and in CLPR0.
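The sizing rules above can be sketched as one validation routine. The following Python fragment is an assumption-based model (the function name is hypothetical; the 2 GB and 0.5 GB increment checks, the 0 to 252 GB range, and the 8 GB floor for CLPR0 come from the text):

```python
CLPR0_MIN_REMAINING_GB = 8  # CLPR0 must keep at least 8 GB (from the note above)

def can_allocate_clpr(clpr0_free_gb: float, cache_size_gb: float,
                      residency_gb: float = 0.0, pcr_gb: float = 0.0) -> bool:
    """Hypothetical check of the CLPR sizing rules; not a real HDS API."""
    if cache_size_gb % 2 != 0:              # cache size moves in 2 GB increments
        return False
    if residency_gb % 0.5 or pcr_gb % 0.5:  # residency sizes in 0.5 GB steps
        return False
    if not (0 <= residency_gb <= 252 and 0 <= pcr_gb <= 252):
        return False
    # The new CLPR's cache is taken from CLPR0, which must retain >= 8 GB.
    return clpr0_free_gb - cache_size_gb >= CLPR0_MIN_REMAINING_GB

assert can_allocate_clpr(20, 12)      # leaves exactly 8 GB in CLPR0
assert not can_allocate_clpr(20, 14)  # would leave only 6 GB
assert not can_allocate_clpr(40, 7)   # not a 2 GB increment
```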
Note: The minimum Cache Size is 4 GB, but in order to assign any DCR you must select at least 6 GB of cache.
If you delete a cache logical partition, any resources in it (e.g., parity groups) will be automatically returned to CLPR0. Note: CLPR0 cannot be deleted.
To delete a cache logical partition:
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a CLPR from the Partition Definition outline, on the upper left of the
panel. This will display the Cache Logical Partition panel.
3. Right-click the CLPR that you want to delete and select Delete CLPR in the
pop-up menu. The selected CLPR is deleted from the tree.
4. Select Apply to apply the settings. The progress bar is displayed.
If you delete a storage logical partition, any resources in that storage logical partition
will be automatically returned to SLPR0.
Note: SLPR0 cannot be deleted.
1. Launch Virtual Partition Manager, and change to Modify mode.
2. Select a SLPR from the Partition Definition outline. This will display the
Storage Management Logical Partition panel.
3. In the logical partition outline on the upper left of the panel, right-click the
storage logical partition that you want to delete. This will display the Delete
SLPR pop-up menu.
4. Select Delete SLPR to delete the storage logical partition.
5. Select Apply to apply the settings, or select Cancel to cancel the settings. The
progress bar is displayed.
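The deletion behavior described for both CLPRs and SLPRs (resources are automatically returned to partition 0, and partition 0 itself cannot be deleted) can be modeled with a toy function. This Python sketch is illustrative only; the function and data shapes are assumptions, not any real interface:

```python
def delete_partition(partitions: dict[int, list[str]], pid: int) -> None:
    """Toy model: deleting partition `pid` returns its resources to partition 0."""
    if pid == 0:
        raise ValueError("SLPR0/CLPR0 cannot be deleted")
    # Resources automatically return to partition 0, per the text above.
    partitions[0].extend(partitions.pop(pid))

# Hypothetical example: partition 2 holds a parity group and a port.
parts = {0: ["PG1-1"], 2: ["PG2-1", "CL1-E"]}
delete_partition(parts, 2)
print(sorted(parts[0]))  # ['CL1-E', 'PG1-1', 'PG2-1']
```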
Module Review
Lab Project 10: Virtual Partition Manager
• Upon completion of the lab project, you will be able to do the following:
– Create two Storage Logical Partitions (SLPR01 and SLPR02)
– Create a Cache Logical Partition (CLPR01) in each new SLPR
– Allocate and/or enable/disable specified license keys for each SLPR
– Migrate specified ports to each new SLPR
– Allocate specified Control Units to each new SLPR
– Allocate specified increments of cache to each CLPR
– Migrate specified parity groups to each new CLPR
– Delete a CLPR
– Delete a SLPR