Trademarks
Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd. The Hitachi Data Systems design mark is a trademark and service mark of Hitachi, Ltd. Hi-Track is a registered trademark of Hitachi Data Systems Corporation. Extended Serial Adapter, ExSA, Hitachi Freedom Storage, Hitachi Graph-Track, and Lightning 9900 are trademarks of Hitachi Data Systems Corporation. APC and Symmetra are trademarks or registered trademarks of American Power Conversion Corporation. HARBOR is a registered trademark of BETA Systems Software AG. AIX, DYNIX/ptx, ESCON, FICON, IBM, MVS, MVS/ESA, VM/ESA, and S/390 are registered trademarks or trademarks of International Business Machines Corporation. Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Tantia is a trademark of Tantia Technologies Inc. Tantia Technologies is a wholly owned subsidiary of BETA Systems Software AG of Berlin. All other brand or product names are or may be registered trademarks, trademarks or service marks of and are used to identify products or services of their respective owners.
Referenced Documents
For a listing of the 9900V user documentation, please see section 3.6.
Preface
This document provides the installation and configuration planning information for the Hitachi Lightning 9900 V Series subsystems, describes the physical, functional, and operational characteristics of the Lightning 9900 V Series subsystems, and provides general instructions for operating the Lightning 9900 V Series subsystems.

This document assumes that:
- The user has a background in data processing and understands direct-access storage device (DASD) subsystems and their basic functions,
- The user is familiar with the open-system platforms and/or S/390 (mainframe) operating systems supported by the Lightning 9900 V Series subsystem, and
- The user is familiar with the equipment used to connect RAID disk array subsystems to the supported host systems.

For further information on Hitachi Data Systems products and services, please contact your Hitachi Data Systems account team, or visit Hitachi Data Systems online at http://www.hds.com. For specific information on supported host systems and platforms for the Lightning 9900 V Series subsystem, please refer to the Lightning 9900 V Series user documentation for the platform, or contact the Hitachi Data Systems Support Center.

Note: Unless otherwise noted, the term 9900V refers to the entire Hitachi Lightning 9900 V Series subsystem family, including all models (e.g., 9980V, 9970V) and all configurations (e.g., all-open, all-mainframe, multiplatform).

Note: The use of all Hitachi Data Systems products is governed by the terms of your license agreement(s) with Hitachi Data Systems.
COMMENTS
Please send us your comments on this document: [email protected]. Make sure to include the document title, number, and revision. Please refer to specific page(s) and paragraph(s) whenever possible.
(All comments become the property of Hitachi Data Systems Corporation.)
Thank you!
Contents
Chapter 1 Overview of the Lightning 9900 V Series Subsystem
1.1  Key Features of the Lightning 9900 V Series Subsystem ..... 1
1.1.1  Continuous Data Availability ..... 2
1.1.2  Connectivity ..... 3
1.1.3  S/390 Compatibility and Functionality ..... 3
1.1.4  Open-Systems Compatibility and Functionality ..... 4
1.1.5  Hitachi Freedom NAS and Hitachi Freedom SAN ..... 5
1.1.6  Program Products and Service Offerings ..... 6
1.1.7  Subsystem Scalability ..... 8
1.2  Reliability, Availability, and Serviceability ..... 9
Chapter 2
2.3
2.4
2.5 2.6
Chapter 3
3.4
3.5
3.6  Data Management Functions ..... 36
3.6.1  Hitachi TrueCopy ..... 38
3.6.2  Hitachi TrueCopy S/390 ..... 38
3.6.3  Hitachi ShadowImage ..... 39
3.6.4  Hitachi ShadowImage S/390 ..... 39
3.6.5  Command Control Interface (CCI) ..... 40
3.6.6  Hitachi Extended Remote Copy (HXRC) ..... 40
3.6.7  Hitachi NanoCopy ..... 41
3.6.8  Data Migration ..... 41
3.6.9  Hitachi RapidXchange (HRX) ..... 42
3.6.10  Hitachi Multiplatform Backup/Restore (HMBR) ..... 42
3.6.11  HARBOR File-Level Backup/Restore ..... 43
3.6.12  HARBOR File Transfer ..... 43
3.6.13  HiCommand ..... 43
3.6.14  LUN Manager ..... 44
3.6.15  Hitachi SANtinel ..... 44
3.6.16  LUN Expansion (LUSE) ..... 45
3.6.17  Virtual LVI/LUN ..... 45
3.6.18  FlashAccess ..... 46
3.6.19  Priority Access ..... 46
3.6.20  Hitachi Parallel Access Volume (HPAV) ..... 46
3.6.21  Hitachi Dynamic Link Manager (HDLM) ..... 47
3.6.22  Hitachi Performance Monitor ..... 47
3.6.23  Hitachi CruiseControl ..... 47
3.6.24  Hitachi Graph-Track ..... 48
Chapter 4
4.3
4.4
4.5
4.5.6  NAS and SAN Operations ..... 77
4.6  Control Panel ..... 78
4.6.1  Emergency Power-Off (EPO) ..... 80
Chapter 5
5.5
Chapter 6  Troubleshooting
6.1  Troubleshooting ..... 115
6.2  Calling the Hitachi Data Systems Support Center ..... 116
6.3  Service Information Messages (SIMs) ..... 117
List of Figures
Figure 2.1   Lightning 9980V HiStar Network (HSN) Architecture ..... 12
Figure 2.2   9980V Subsystem Frames ..... 13
Figure 2.3   9970V Subsystem Frame ..... 14
Figure 2.4   Conceptual 9980V ACP Array Domain ..... 20
Figure 2.5   Sample RAID-1 Layout ..... 24
Figure 2.6   Sample RAID-5 Layout (Data Plus Parity Stripe) ..... 25
Figure 2.7   Sample Hard Disk Drive Intermix ..... 26
Figure 2.8   Sample Device Emulation Intermix ..... 27
Figure 2.9   Example of Remote Console PC and SVP Configuration ..... 29
Figure 3.1   Fibre-Channel Device Addressing ..... 34
Figure 4.1   IOCP Definition for FICON Channels (direct connect, via FICON switch) ..... 51
Figure 4.2   IOCP Definition for 1024 LVIs (9900V connected to host CPU(s) via ESCD) ..... 52
Figure 4.3   IOCP Definition for 1024 LVIs (9900V directly connected to CPU) ..... 53
Figure 4.4   Master MENU (Step 1) ..... 57
Figure 4.5   Basic HCD Panel (Step 2) ..... 58
Figure 4.6   Define, Modify, or View Configuration Data (Step 3) ..... 58
Figure 4.7   Control Unit List Panel (Step 4) ..... 59
Figure 4.8   Add Control Unit Panel (Step 5) ..... 59
Figure 4.9   Selecting the Operating System (Step 6) ..... 60
Figure 4.10   Control Unit Chpid, CUADD, and Device Address Range Addressing (Step 7) ..... 60
Figure 4.11   Select Processor / Control Unit Panel (Step 8) ..... 61
Figure 4.12   Control Unit List (Step 9) ..... 61
Figure 4.13   I/O Device List Panel (Step 10) ..... 62
Figure 4.14   Add Device Panel (Step 11) ..... 62
Figure 4.15   Device / Processor Definition Panel, Selecting the Processor ID (Step 12) ..... 63
Figure 4.16   Define Device / Processor Panel (Step 13) ..... 63
Figure 4.17   Device / Processor Definition Panel (Step 14) ..... 64
Figure 4.18   Define Device to Operating System Configuration (Step 15) ..... 64
Figure 4.19   Define Device Parameters / Features Panel (Step 16) ..... 65
Figure 4.20   Update Serial Number, Description and VOLSER Panel (Step 18) ..... 65
Figure 4.21   LVI Initialization for MVS: ICKDSF JCL ..... 67
Figure 4.22   Displaying Cache Statistics Using MVS DFSMS ..... 69
Figure 4.23   IDCAMS LISTDATA COUNTS (JCL example) ..... 69
Figure 4.24   Fibre Port-to-LUN Addressing ..... 73
Figure 4.25   Alternate Pathing ..... 76
Figure 4.26   9980V Control Panel ..... 78
Figure 4.27   Emergency Power-Off (EPO) ..... 80
Figure 5.1   Physical Overview of 9980V Subsystem ..... 81
Figure 5.2   Physical Overview of the 9970V Subsystem ..... 82
Figure 5.3   9980V DKC and DKU Physical Dimensions ..... 84
Figure 5.4   9970V Physical Dimensions ..... 85
Figure 5.5   9980V DKC Service Clearance and Floor Cutout ..... 88
Figure 5.6   9980V DKU Service Clearance and Floor Cutout ..... 89
Figure 5.7   9980V Subsystem Service Clearance and Floor Cutouts, Min Configuration ..... 90
Figure 5.8   9980V Subsystem Service Clearance and Floor Cutouts, Max Configuration ..... 91
Figure 5.9   9970V Subsystem Service Clearance and Floor Cutout, All Configurations ..... 92
Figure 5.10   Power Plugs for Three-Phase 9980V Disk Array Unit (Europe) ..... 93
Figure 5.11   Power Plugs for Three-Phase 9980V Disk Array Unit (USA) ..... 94
Figure 5.12   Power Plugs for Three-Phase 9970V Subsystem (Europe) ..... 94
Figure 5.13   Power Plugs for Three-Phase 9970V Subsystem (USA) ..... 95
Figure 5.14   Cable Dimensions for 50-Hz Three-Phase Subsystems ..... 98
Figure 5.15   Power Plugs for Single-Phase 9980V Controller (Europe) ..... 100
Figure 5.16   Power Plugs for Single-Phase 9980V Controller (USA) ..... 100
Figure 5.17   Power Plugs for a Single-Phase 9980V Disk Array Unit (Europe) ..... 101
Figure 5.18   Power Plugs for a Single-Phase 9980V Disk Array Unit (USA) ..... 101
Figure 5.19   Power Plugs for a Single-Phase 9970V Subsystem (Europe) ..... 102
Figure 5.20   Power Plugs for a Single-Phase 9970V Subsystem (USA) ..... 102
Figure 5.21   Cable Dimensions for 50-Hz Single-Phase Subsystems ..... 104
Figure 5.22   9980V Subsystem Layout and Device Interface Cable Options ..... 106
Figure 6.1   Typical 9900V SIM Showing Reference Code and SIM Type ..... 117
List of Tables
Table 1.1   Program Products and Service Offerings ..... 6-7
Table 2.1   Channel Adapter Specifications ..... 18
Table 2.2   ACP Specifications ..... 21
Table 2.3   Disk Drive Specifications ..... 23
Table 3.1   Device Numbers for Each CU ..... 33
Table 3.2   Capacities of Standard LU Types ..... 34
Table 3.3   Data Management Functions for Open-System Users ..... 36
Table 3.4   Data Management Functions for S/390 Users ..... 37
Table 4.1   SSID Requirements ..... 49
Table 4.2   Correspondence between Physical Paths and Channel Interface IDs (Cl 1) ..... 54
Table 4.3   Correspondence between Physical Paths and Channel Interface IDs (Cl 2) ..... 54
Table 4.4   HCD Definition for 64 LVIs ..... 55
Table 4.5   HCD Definition for 256 LVIs ..... 55
Table 4.6   ICKDSF Commands for 9900V Contrasted to RAMAC ..... 68
Table 4.7   9900V Open-System Platforms and Configuration Guides ..... 72
Table 4.8   9980V Control Panel ..... 79
Table 5.1   9980V Physical Specifications ..... 86
Table 5.2   9970V Physical Specifications ..... 86
Table 5.3   Floor Load Rating and Required Clearances for 9980V Min Configuration ..... 90
Table 5.4   Floor Load Rating and Required Clearances for 9980V Max Configuration ..... 91
Table 5.5   Floor Load Rating and Required Clearances for 9970V Subsystem ..... 92
Table 5.6   9980V and 9970V Three-Phase Features ..... 96
Table 5.7   Current Rating, Power Plug, Receptacle, Connector for 3-Phase 9900V ..... 97
Table 5.8   Input Voltage Specifications for Three-Phase AC Input ..... 98
Table 5.9   Cable Dimensions for 50-Hz Three-Phase Subsystems ..... 98
Table 5.10   9900V Single-Phase Features ..... 103
Table 5.11   Current Rating, Power Plug, Receptacle, Connector for 1-Phase 9900V ..... 103
Table 5.12   Input Voltage Specifications for Single-Phase Power ..... 104
Table 5.13   Cable Dimensions for 50-Hz Single-Phase Subsystems ..... 104
Table 5.14   Cable Requirements ..... 105
Table 5.15   Open-Systems Channel Specifications ..... 107
Table 5.16   Mainframe Channel Specifications ..... 107
Table 5.17   Temperature and Humidity Requirements ..... 108
Table 5.18   9980V DKC Component Power and Heat Output Specifications ..... 109
Table 5.19   9980V DKU Component Power and Heat Output Specifications ..... 110
Table 5.20   9970V Component Power and Heat Output Specifications ..... 111
Table 5.21   Internal Air Flow ..... 112
Table 5.22   Vibration and Shock Tolerances ..... 113
Table 6.1   Troubleshooting ..... 115
Table B.1   Unit Conversions for Standard (U.S.) and Metric Measures ..... 123
Chapter 1  Overview of the Lightning 9900 V Series Subsystem
1.1  Key Features of the Lightning 9900 V Series Subsystem
Unmatched performance and capacity:
- Multiple point-to-point data and control paths.
- Up to 10-GB/sec internal system bandwidth.
- Fully addressable 64-GB data cache; separate 3-GB control cache.
- Extremely fast and intelligent cache algorithms.
- Non-disruptive expansion to over 74 TB raw capacity.
- Simultaneous transfers from up to 32 separate hosts.
- Up to 1024 high-throughput (10K or 15K rpm) fibre-channel, dual-active disk drives.

Extensive connectivity and resource sharing:
- Concurrent operation of UNIX, Windows, and S/390 host systems.
- Fibre-channel, FICON, and Extended Serial Adapter (ESCON) server connections.
- Fibre-channel switched, arbitrated loop, and point-to-point configurations.
1.1.1  Continuous Data Availability
1.1.2  Connectivity
The Hitachi Lightning 9900 V Series RAID subsystem supports concurrent attachment to S/390 mainframe hosts and open-system (UNIX-based and/or PC-server) platforms. The 9900V subsystem can be configured with the following port types to support all-open, all-mainframe, and multiplatform configurations (the logical-path arithmetic behind these limits is sketched after this list):

- Fibre-channel. When fibre-channel interfaces are used, the 9900V subsystem can provide up to 32 ports for attachment to UNIX-based and/or PC-server platforms. The type of host platform determines the number of logical units (LUs) that may be connected to each port. Fibre-channel connection provides data transfer rates of up to 200 MB/sec (2 Gbps). The 9900V subsystem supports fibre-channel arbitrated loop (FC-AL) and fabric fibre-channel topologies as well as high-availability (HA) fibre-channel configurations using hubs and switches.
- Extended Serial Adapter (ExSA) (compatible with ESCON protocol). When ExSA channel interfaces are used, the 9900V can provide up to 32 logical control unit (CU) images and 8192 logical device (LDEV) addresses. Each physical ExSA channel interface supports up to 256 logical paths, providing a maximum of 8192 logical paths per subsystem. ExSA connection provides transfer rates of up to 17 MB/sec.
- FICON. When FICON channel interfaces are used, the 9900V subsystem can provide up to 32 logical control unit (CU) images and 8192 logical device (LDEV) addresses. Each physical FICON channel interface supports up to 512 logical paths, providing a maximum of 8192 logical paths per subsystem. FICON connection provides transfer rates of up to 100 MB/sec (1 Gbps). Please contact your Hitachi Data Systems account team for information on the availability date of FICON support.
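The logical-path limits quoted above reduce to simple arithmetic. The following Python sketch (an illustration only, not part of any Hitachi tooling) computes the usable logical-path count for a set of ports of one type, applying the 8192-path subsystem ceiling; the 32-port counts in the example calls assume a fully configured 9980V.

# Logical-path arithmetic for the 9900V host interfaces, as described above.
# Per-port limits come from this section; the subsystem-wide ceiling is 8192.

SUBSYSTEM_LOGICAL_PATH_LIMIT = 8192
LOGICAL_PATHS_PER_PORT = {"ExSA": 256, "FICON": 512}

def max_logical_paths(port_type: str, port_count: int) -> int:
    """Return the usable logical-path count for a set of ports of one type."""
    per_port = LOGICAL_PATHS_PER_PORT[port_type]
    return min(port_count * per_port, SUBSYSTEM_LOGICAL_PATH_LIMIT)

if __name__ == "__main__":
    # Example: a 9980V fully populated with 32 ExSA or 32 FICON interfaces.
    print(max_logical_paths("ExSA", 32))   # 32 x 256 = 8192
    print(max_logical_paths("FICON", 32))  # 32 x 512 = 16384, capped at 8192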
1.1.3  S/390 Compatibility and Functionality
1.1.4  Open-Systems Compatibility and Functionality
1.1.5  Hitachi Freedom NAS and Hitachi Freedom SAN
1.1.6  Program Products and Service Offerings
Table 1.1   Program Products and Service Offerings

Data Replication:
- Hitachi TrueCopy and Hitachi TrueCopy S/390 (sections 3.6.1, 3.6.2): Enables the user to perform remote copy operations between 9900V (and 9900) subsystems in different locations. Hitachi TrueCopy provides synchronous and asynchronous copy modes for S/390 and open-system data.
- Hitachi ShadowImage and Hitachi ShadowImage S/390 (sections 3.6.3, 3.6.4): Allows the user to create internal copies of volumes for a wide variety of purposes, including application testing and offline backup. Can be used in conjunction with TrueCopy to maintain multiple copies of critical data at both the primary and secondary sites.
- Command Control Interface (CCI) (section 3.6.5): Enables open-system users to perform TrueCopy and ShadowImage operations by issuing commands from the host to the 9900V subsystem. The CCI software supports scripting and provides failover and mutual hot standby functionality in cooperation with host failover products.
- Hitachi Extended Remote Copy (HXRC) (section 3.6.6): Provides compatibility with the IBM Extended Remote Copy (XRC) S/390 host software function, which performs server-based asynchronous remote copy operations for mainframe LVIs.
- Hitachi NanoCopy (section 3.6.7): Enables S/390 users to make Point-in-Time (PiT) copies of production data, without quiescing the application or causing any disruption to end-user operations, for such uses as application testing, business intelligence, and disaster recovery for business continuance.
- Data migration (service offering only) (section 3.6.8): Enables the rapid transfer of data from other disk subsystems onto the 9900V subsystem. Data migration operations can be performed while applications are online using the data which is being transferred.

Data Sharing and Backup/Restore:
- Hitachi RapidXchange (HRX) (section 3.6.9): Enables users to transfer data between S/390 and open-system platforms using the ExSA and/or FICON channels, which provides high-speed data transfer without requiring network communication links or tape.
- Hitachi Multiplatform Backup/Restore (HMBR) (section 3.6.10): Allows users to perform mainframe-based volume-level backup and restore operations on the open-system data stored on the multiplatform 9900V subsystem.
- HARBOR File-Level Backup/Restore (section 3.6.11): Enables users to perform mainframe-based file-level backup/restore operations on the open-system data stored on the multiplatform 9900V subsystem.
- HARBOR File Transfer (section 3.6.12): Enables users to transfer large data files at ultra-high channel speeds in either direction between open systems and mainframe servers.
Resource Management:
- HiCommand (section 3.6.13): Enables users to manage the 9900V subsystem and perform functions (e.g., LUN Manager, LUN Security) from virtually any location via the HiCommand Web Client, command line interface (CLI), and/or third-party application.
- LUN Manager (section 3.6.14): Enables users to configure the 9900V fibre-channel ports for operational environments (e.g., arbitrated-loop (FC-AL) and fabric topologies, host failover support).
- Hitachi SANtinel (section 3.6.15): Allows users to restrict host access to data on the Lightning 9900 V Series subsystem. Open-system users can restrict host access to LUs based on the host's World Wide Name (WWN).
- LUN Expansion (LUSE) (section 3.6.16): Enables open-system users to create expanded LUs which can be up to 36 times larger than standard fixed-size LUs.
- Virtual LVI (VLVI) / Virtual LUN (VLUN) (section 3.6.17): Enables users to configure custom-size LVIs and LUs which are smaller than standard-size devices.
- FlashAccess (section 3.6.18): Enables users to store specific high-usage data directly in cache memory to provide virtually immediate data availability.
- Cache Manager: Enables users to perform FlashAccess operations from the S/390 host system. FlashAccess allows you to place specific data in cache memory to enable virtually immediate access to this data.
- Priority Access (section 3.6.20): Allows open-system users to designate prioritized ports (e.g., for production servers) and non-prioritized ports (e.g., for development servers) and set thresholds and upper limits for the I/O activity of these ports.
- Hitachi Parallel Access Volume (HPAV) (section 3.6.21): Enables the S/390 host system to issue multiple I/O requests in parallel to single LDEVs in the Lightning 9900 V Series subsystem. HPAV provides compatibility with the IBM Workload Manager (WLM) host software function and supports both static and dynamic PAV functionality.
- Hitachi Dynamic Link Manager (section 3.6.22): Provides automatic load balancing, path failover, and recovery capabilities in the event of a path failure.

Storage Utilities:
- Hitachi Performance Monitor (section 3.6.23): Performs detailed monitoring of subsystem and volume activity.
- Hitachi CruiseControl (section 3.6.24): Performs automatic relocation of volumes to optimize performance.
- Hitachi Graph-Track (GT) (section 3.6.25): Provides detailed information on the I/O activity and hardware performance of the 9900V subsystem. Hitachi Graph-Track displays real-time and historical data in graphical format, including I/O statistics, cache statistics, and front-end and back-end microprocessor usage.
1.1.7  Subsystem Scalability
The architecture of the 9900V subsystem accommodates scalability to meet a wide range of capacity and performance requirements. The 9980V storage capacity can be increased from a minimum of 108 GB net or 144 GB raw (one four-drive RAID-5 parity group, 36-GB HDDs) to a maximum of 55 TB net or 74 TB raw (254 four-drive RAID-5 parity groups, 73-GB HDDs). The 9980V nonvolatile cache can be configured from 4 GB to 64 GB in increments of 4 GB. The 9970V cache can be configured from 2 GB to 32 GB in increments of 2 GB. All disk drive and cache upgrades can be performed without interrupting user access to data and with minimal impact on subsystem performance.

The 9900V subsystem can be configured with the desired number and type of front-end client-host interface processors (CHIPs). The CHIPs reside on the channel adapters, which are installed in pairs. Each channel adapter pair offers eight host connections. The 9980V can be configured with up to four channel adapter pairs to provide up to 32 paths to attached host processors. The 9970V subsystem supports up to three channel adapter pairs and 24 paths.

The ACPs are the back-end processors which transfer data between the disk drives and cache. Each ACP pair is equipped with eight device paths. The 9980V subsystem can be configured with up to four pairs of array control processors (ACPs), providing up to thirty-two concurrent data transfers to and from the disk drives. The 9970V can be configured with one or two pairs of ACPs, providing up to sixteen concurrent data transfers to and from the disk drives.

The 9970V subsystem can support up to a combined total of four channel adapter pair features and ACP pair features. Thus if three channel adapter features with a total of 24 host interfaces are configured, one ACP pair must be configured. If two channel adapter features with a total of 16 host interfaces are configured, one or two ACP pairs may be configured. Due to the very high bandwidth of the ACP pairs in the 9970V, it is anticipated that one ACP pair will be sufficient for most 9970V applications.
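As a worked example of the scalability figures above, the following Python sketch recomputes the quoted 9980V raw and net capacity range from four-drive RAID-5 (3D+1P) parity-group counts and drive capacities, and lists the valid cache sizes; it is a minimal illustration using decimal GB, not a configuration tool.

# Recompute the 9980V capacity range quoted above from RAID-5 (3D+1P)
# parity-group counts and per-drive capacities (decimal GB).

def raid5_capacity_gb(parity_groups: int, drive_gb: int) -> tuple[int, int]:
    """Return (raw_gb, net_gb) for four-drive RAID-5 (3D+1P) parity groups."""
    raw = parity_groups * 4 * drive_gb      # all four drives count toward raw
    net = parity_groups * 3 * drive_gb      # one drive's worth holds parity
    return raw, net

if __name__ == "__main__":
    print(raid5_capacity_gb(1, 36))     # minimum: (144, 108) GB
    print(raid5_capacity_gb(254, 73))   # maximum: (74168, 55626) GB, about 74/55 TB

    # Valid cache sizes: 9980V 4-64 GB in 4-GB steps, 9970V 2-32 GB in 2-GB steps.
    print(list(range(4, 65, 4)))
    print(list(range(2, 33, 2)))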
1.2  Reliability, Availability, and Serviceability
Hi-Track. The Hi-Track maintenance support tool monitors the operation of the 9900V subsystem at all times, collects hardware status and error data, and transmits this data via modem to the Hitachi Data Systems Support Center. The Support Center analyzes the data and implements corrective action as needed. In the unlikely event of a component failure, Hi-Track calls the Hitachi Data Systems Support Center immediately to report the failure without requiring any action on the part of the user. Hi-Track enables most problems to be identified and fixed prior to actual failure, and the advanced redundancy features enable the subsystem to remain operational even if one or more components fail. Note: Hi-Track does not have access to any user data stored on the 9900V subsystem. The Hi-Track tool requires a dedicated RJ-11 analog phone line.

Nondisruptive service and upgrades. All hardware upgrades* can be performed nondisruptively during normal subsystem operation. All hardware subassemblies can be removed, serviced, repaired, and/or replaced nondisruptively during normal subsystem operation. All subsystem microcode upgrades can be performed during normal operations using the SVP or the alternate path facilities of the host.

*With one exception: In a 9970V with two ACP pairs, the first 64 drives are connected to the first ACP pair, and the second 64 drives are connected to the second ACP pair. It is also possible to attach both sets of 64 disk drives to a single ACP pair. If you want to add a second ACP pair to an existing single-ACP-pair 9970V, a service interruption will be required to disconnect any disk drives in the second set of 64 drive locations from the first ACP pair, and to reconnect them instead to the newly installed second ACP pair.

Error Reporting. The Lightning 9900 V Series subsystem reports service information messages (SIMs) to notify users of errors and service requirements. SIMs can also report normal operational changes, such as remote copy pair status change. The SIMs are logged on the 9900V service processor (SVP), reported directly to the mainframe and open-system hosts, and reported to Hitachi Data Systems via Hi-Track.
Chapter 2
2.1 Overview
Figure 2.1 shows the Hierarchical Star Network (HiStar or HSN) architecture of the Lightning 9900 V Series RAID subsystem. The front end of the 9900V subsystem includes the hardware and software that transfers the host data to and from cache memory, and the back end includes the hardware and software that transfers data between cache memory and the disk drives.

Front End: The 9900V front end is entirely resident in the 9900V disk controller (DKC) frame and includes the client-host interface processors (CHIPs) that reside on the channel adapter boards. The CHIPs control the transfer of data to and from the host processors via the channel interfaces (e.g., fibre-channel, ExSA, FICON) and to and from cache memory via independent high-speed paths through the cache switches (CSWs). Each channel adapter board contains four CHIPs, and the channel adapter boards are installed in pairs, for a total of eight host interfaces per feature. The 9980V subsystem supports up to four pairs of channel adapter boards (four sets of eight CHIPs) for a maximum of 32 host interfaces, and the 9970V subsystem supports up to three pairs of channel adapter boards to provide a maximum of 24 host interfaces. The 9980V controller contains four cache switch (CSW) cards, and the 9970V controller contains two CSW cards. Cache memory in the 9980V resides on four cache cards, and cache memory in the 9970V resides on two cards. Cache memory is backed up by battery. An additional battery must be configured on 9980V subsystems with over 32 GB of cache memory. Shared memory (minimum 2 GB) resides on the first two cache cards and is provided with its own power sources and backup batteries. Shared memory also has independent address and data paths from the channel adapter and disk adapter boards.

Back End: The 9900V back end is controlled by the array control processors (ACPs) that reside on the disk adapter boards in the 9900V controller frame. The ACPs control the transfer of data to and from the disk arrays via high-speed fibre (100 MB/sec or 1 Gbps) and then to and from cache memory via independent high-speed paths through the CSWs. The disk adapter board (DKA) contains four ACPs, and the DKAs are installed in pairs. The 9980V subsystem supports from one to four DKA pairs for a maximum of 32 ACPs. The 9970V subsystem supports one or two DKA pairs for a maximum of sixteen ACPs.
Figure 2.1   Lightning 9980V HiStar Network (HSN) Architecture
(Key: S-HSN = Shared Memory Hierarchical Star Net; C-HSN = Cache Memory Hierarchical Star Net; CSW = Cache Switch. Each HDU box holds up to 32 HDDs, each DKU holds up to 8 HDU boxes, and a subsystem holds up to 4 DKUs; in the maximum configuration, up to 64 HDDs can be connected through one FC-AL.)
The 9980V subsystem (see Figure 2.2) includes the following major components:
- One controller frame containing the control and operational components of the subsystem.
- Up to four disk array frames containing the storage components (disk drive arrays) of the subsystem.
- The service processor (SVP) (see section 2.5). The 9900V SVP is located in the controller frame and can only be used by authorized Hitachi Data Systems personnel. The SVP provides the 9900V Remote Console functionality (see section 2.6).
Figure 2.2   9980V Subsystem Frames
The 9970V subsystem (see Figure 2.3) includes the following major components:
- One frame containing the controller and disk components of the subsystem.
- The service processor (SVP) (see section 2.5). The 9900V SVP is located in the controller frame and can only be used by authorized Hitachi Data Systems personnel. The SVP provides the 9900V Remote Console functionality (see section 2.6).
Figure 2.3   9970V Subsystem Frame
2.2
2.2.1  Storage Clusters
Each controller frame consists of two redundant controller halves called storage clusters. Each storage cluster contains all physical and logical elements (e.g., power supplies, channel adapters, ACPs, cache, control storage) needed to sustain processing within the subsystem. Both storage clusters should be connected to each host using an alternate path scheme, so that if one storage cluster fails, the other storage cluster can continue processing for the entire subsystem.

Each pair of channel adapters is split between clusters to provide full backup for both front-end and back-end microprocessors. Each storage cluster also contains a separate, duplicate copy of cache and shared memory contents. In addition to the high-level redundancy that this type of storage clustering provides, many of the individual components within each storage cluster contain redundant circuits, paths, and/or processors to allow the storage cluster to remain operational even with multiple component failures.

Each storage cluster is powered by its own set of power supplies, which can provide power for the entire subsystem in the unlikely event of power supply failure. Because of this redundancy, the Lightning 9900 V Series subsystem can sustain the loss of multiple power supplies and still continue operation. In addition, the 9900V supports connection to a UPS to provide additional battery backup capability (see section 5.5.5).

Note: The redundancy and backup features of the Lightning 9900 V Series subsystem eliminate all active single points of failure, no matter how unlikely, to provide an additional level of reliability and data availability.
2.2.2
2.2.3
2.2.4
2.2.5
2.2.6
Maximum physical interfaces per subsystem: ExSA (serial/ESCON) FICON Fibre-channel Logical paths per FICON port Logical paths per ExSA (ESCON) port Maximum logical paths per subsystem Maximum LUs per fibre-channel port Maximum LVIs/LUs per subsystem
2.2.7  Channels
The Lightning 9900 V Series subsystem supports all-open-system, all-mainframe, and multiplatform operations and offers the following types of host channel connections:

- Fibre-Channel. The 9980V subsystem supports up to 32 fibre-channel ports, and the 9970V supports up to 24 fibre ports. The fibre ports are capable of data transfer speeds of 200 MB/sec (2 Gbps). The 9900V fibre-channel cards have eight ports per pair of channel adapter boards. The 9900V supports shortwave (multimode) and longwave (single-mode) versions of fibre-channel adapters. When configured with shortwave fibre-channel cards, the 9900V subsystem can be located up to 500 meters (1,640 feet) from the open-system host(s). When configured with longwave fibre-channel cards, the 9900V subsystem can be located up to ten kilometers from the open-system host(s).
- Extended Serial Adapter (ExSA). The 9980V subsystem supports a maximum of 32 ExSA serial channel interfaces (compatible with ESCON protocol), and the 9970V supports a maximum of 24 ExSA interfaces. The 9900V ExSA channel interface cards provide data transfer speeds of up to 17 MB/sec and have a total of eight ports per pair of channel adapter boards. Each ExSA channel can be directly connected to a CHPID or to a serial channel director. Shared serial channels can be used for dynamic path switching. The 9900V also supports the ExSA Extended Distance Feature (XDF).
- FICON. The 9980V subsystem supports* up to 32 FICON ports, and the 9970V supports up to 24 FICON ports. FICON ports are capable of data transfer speeds of up to 100 MB/sec (1 Gbps). FICON features, available in both shortwave (multimode) and longwave (single-mode) versions, have a total of 8 FICON host interfaces per pair of FICON channel adapter cards. When configured with shortwave FICON channel cards, the 9900V subsystem can be located up to 500 meters (1,640 feet) from the mainframe host(s). When configured with longwave FICON channel cards, the 9900V subsystem can be located up to ten kilometers from the mainframe host(s).

* Note: Please contact your Hitachi Data Systems account team for information on the availability date of FICON channel interface support.
2.2.8
Figure 2.4   Conceptual 9980V ACP Array Domain
Table 2.2   ACP Specifications

Number of ACP pairs: 9980V: 1, 2, 3, or 4; 9970V: 1 or 2
Backend paths per ACP pair: 9980V: 8; 9970V: 8
Backend paths per subsystem: 9980V: 8, 16, 24, or 32; 9970V: 8 or 16
Array group (or parity group) type per ACP pair: RAID-1 and/or RAID-5 (both models)
Hard disk drive type per ACP pair [1]: 36 GB, 73 GB (both models)
Logical device emulation type within ACP pair: 3390-x and OPEN-x [2] (both models)
Backend array interface type: Fibre-channel arbitrated loop (FC-AL) (both models)
Backend interface transfer rate (burst rate): 100 MB/sec (1 Gbps) (both models)
Maximum concurrent backend operations per ACP pair: 8 (both models)
Maximum concurrent backend operations per subsystem: 9980V: 32; 9970V: 8 or 16
HiStar Network architecture internal bandwidth: 9980V: 10 GB/sec; 9970V: 5 GB/sec

Notes:
1. All hard disk drives (HDDs) in an array group (also called parity group) must be the same type. Please contact your Hitachi Data Systems representative for the latest information on available HDD types.
2. 3390-3 and 3390-3R LVIs cannot be intermixed in the same 9900V subsystem.
2.3  Array Frame
The 9980V array frames contain the physical disk drives, including the disk array groups and the dynamic spare disk drives. Each array frame has dual AC power plugs, which should be attached to two different power sources or power panels. The 9900V also supports connection to a UPS to provide additional battery backup capability. The 9980V can be configured with up to four array frames to provide a raw storage capacity of up to 74 TB. The 9970V subsystem combines the controller and disk array components in one physical frame to provide a raw storage capacity of up to 9 TB. When configured in four-drive RAID-5 (3D+1P) parity groups, three-fourths of the raw capacity is available to store user data, and one-fourth of the raw capacity is used for parity data.

The 9900V subsystem uses disk drives with fixed-block-architecture (FBA) format. The currently available disk drives have capacities of 36 GB and 73 GB. All drives in an array group must have the same rotation speed and the same capacity. The 36-GB and 73-GB HDDs can be attached to the same ACP pair. Table 2.3 provides the disk drive specifications. Each disk drive can be replaced nondisruptively on site. The 9900V utilizes diagnostic techniques and background dynamic scrubbing that detect and correct disk errors. Dynamic sparing is invoked automatically if needed.

For both RAID-5 and RAID-1 array groups, any spare disk drive can back up any other disk drive of the same rotation speed and the same or lower capacity anywhere in the subsystem, even if the failed disk and the spare disk are in different array domains (attached to different ACP pairs). The 9980V can be configured with a minimum of one and a maximum of sixteen spare disk drives. The 9970V can be configured with a minimum of one and a maximum of four spare disk drives. The standard configuration provides one spare drive for each type of drive installed in the subsystem. The Hi-Track monitoring and reporting tool detects disk drive failures and notifies the Hitachi Data Systems Support Center automatically, and a service representative is sent to replace the disk drive.

Note: The spare disk drives are used only as replacements and are not included in the storage capacity ratings of the subsystem.
Table 2.3   Disk Drive Specifications

Formatted capacity (GB): 73-GB HDD: 72.91; 36-GB HDD: 35.76
Disk diameter: 73-GB HDD: 3 inches; 36-GB HDD: 2.5 inches
Physical tracks per physical cylinder (user area) (number of heads): 73-GB HDD: 10; 36-GB HDD: 8
Physical disk platters (user area) (number of disks): 73-GB HDD: 5; 36-GB HDD: 4
Sector length (bytes): 520 (512) for both
Seek time (ms, read/write): 73-GB HDD: min 0.5/0.7, max 11.0/12.0, avg 4.9/5.7; 36-GB HDD: min 0.4/0.8, max 7.0/8.0, avg 3.8/4.2
Revolution speed (rpm): 73-GB HDD: 10,025; 36-GB HDD: 14,904
Average latency time (ms): 73-GB HDD: 2.99; 36-GB HDD: 2.01
Internal data transfer rate (MB/sec): 73-GB HDD: 44.2 to 74.0; 36-GB HDD: 68.5 to 88.3
Maximum interface data transfer rate (MB/sec): 100 for both
2.3.1
Figure 2.5   Sample RAID-1 Layout
A RAID-5 array group consists of four disk drives. The data is written across the four hard drives in a stripe that has three data chunks and one parity chunk. Each chunk contains either eight logical tracks (S/390) or 768 logical blocks (open systems). The enhanced RAID5+ implementation in the 9900V subsystem minimizes the write penalty incurred by standard RAID-5 implementations by keeping write data in cache until an entire stripe can be built and then writing the entire data stripe to the disk drives.
Figure 2.6 illustrates RAID-5 data stripes mapped over four physical drives. Data and parity are striped across each of the disk drives in the array group (hence the term parity group). The logical devices (LDEVs) are evenly dispersed in the array group, so that the performance of each LDEV within the array group is the same. Figure 2.6 also shows the parity chunks that are the Exclusive OR (EOR) of the data chunks. The parity and data chunks rotate after each stripe. The total data in each stripe is either 24 logical tracks (eight tracks per chunk) for S/390 data, or 2304 blocks (768 blocks per chunk) for open-systems data. Each of these array groups can be configured as either 3390-x or OPEN-x logical devices. All LDEVs in the array group must be the same format (3390-x or OPEN-x). For open systems, each LDEV is mapped to a SCSI address, so that it has a TID and logical unit number (LUN).
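To make the striping description concrete, the sketch below builds one 3D+1P stripe in Python: the parity chunk is the XOR of the three data chunks, the parity position rotates on successive stripes, and any one chunk can be rebuilt from the other three. The toy 8-byte chunks and the particular rotation order are illustrative assumptions; the 9900V's actual chunk size and on-disk layout are as described in the text above.

# Illustrative 3D+1P RAID-5 stripe: three data chunks plus one parity chunk,
# with the parity position rotating on successive stripes.
from functools import reduce

def parity(chunks: list[bytes]) -> bytes:
    """XOR the data chunks byte-by-byte to form the parity chunk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def place_stripe(stripe_no: int, data_chunks: list[bytes]) -> list[bytes]:
    """Return the four chunks in drive order, rotating the parity position."""
    assert len(data_chunks) == 3
    layout = data_chunks + [parity(data_chunks)]
    parity_pos = 3 - (stripe_no % 4)          # assumed rotation order
    layout[parity_pos], layout[3] = layout[3], layout[parity_pos]
    return layout

if __name__ == "__main__":
    stripe = [bytes([i] * 8) for i in (1, 2, 3)]   # toy 8-byte "chunks"
    placed = place_stripe(0, stripe)
    # Any single chunk can be rebuilt by XORing the other three.
    rebuilt = parity([placed[0], placed[1], placed[3]])
    assert rebuilt == placed[2]
    print([c.hex() for c in placed])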
Figure 2.6   Sample RAID-5 Layout (Data Plus Parity Stripe): RAID-5 using 3D+1P and S/390 LDEVs
Note: Hitachi Performance Monitor and Hitachi Graph-Track (see section 3.6) allow users to collect and view detailed usage statistics for the disk array groups in the 9900V subsystem.
2.3.2
2.4 2.4.1
2.4.2
Figure 2.7   Sample Hard Disk Drive Intermix
2.4.3
Note: For the latest information on supported LU types and intermix requirements, please contact your Hitachi Data Systems account team. Note: The 9980V and 9970V subsystems may support different device emulations and intermix configurations.
Figure 2.8   Sample Device Emulation Intermix
2.5
2.6  Remote Console
The Hitachi Lightning 9900 V Remote Console is provided as a Java applet program which can execute on any machine that supports a Java Virtual Machine (JVM). The Remote Console PC hosts the Remote Console Java applet program and is attached to the 9900V subsystem(s) via a TCP/IP local-area network (LAN). When a Remote Console accesses and logs into the desired SVP, the Remote Console applet is downloaded from the SVP to the Remote Console, runs on the web browser of the Remote Console PC, and communicates with the attached 9900V subsystems via a TCP/IP network.

Figure 2.9 shows an example of Remote Console and SVP configuration. Two LANs can be attached to the 9900V: the 9900V internal LAN (private LAN), which is used to connect the SVP(s) of multiple subsystems, and the user's intranet (public LAN), which allows you to access the Remote Console functions from individual Remote Console PCs.

The Remote Console communicates directly with the service processor (SVP) of each attached subsystem to obtain subsystem configuration and status information and send user-requested commands to the subsystem. The 9900V Remote Console Java applet program is downloaded to the Remote Console (web client) from the SVP (web server) each time the Remote Console is connected to the SVP. The Remote Console Java applet program runs on web browsers, such as Internet Explorer and Netscape Navigator, which run under the Windows and Solaris operating systems to provide a user-friendly interface for the 9900V Remote Console functions.
Figure 2.9   Example of Remote Console PC and SVP Configuration
Chapter 3
3.1
3.2  I/O Operations
The 9900V I/O operations are classified into three types based on cache usage (a minimal sketch of this classification follows the list):

- Read hit: For a read I/O, when the requested data is already in cache, the operation is classified as a read hit. The CHIP searches the cache directory, determines that the data is in cache, and immediately transfers the data to the host at the channel transfer rate.
- Read miss: For a read I/O, when the requested data is not currently in cache, the operation is classified as a read miss. The CHIP searches the cache directory, determines that the data is not in cache, disconnects from the host, creates space in cache, updates the cache directory, and requests the data from the appropriate ACP pair. The ACP pair stages the appropriate amount of data into cache, depending on the type of read I/O (e.g., sequential).
- Fast write: All write I/Os to the 9900V subsystem are fast writes, because all write data is written to cache before being destaged to disk. The data is stored in two cache locations on separate power boundaries in the nonvolatile duplex cache (see section 2.2.3). As soon as the write I/O has been written to cache, the 9900V subsystem notifies the host that the I/O operation is complete, and then destages the data to disk.
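A minimal sketch of the classification above, with the cache directory reduced to a Python dictionary; the staging helper is a placeholder standing in for the ACP pair, not a representation of the real microcode path.

# Classify 9900V I/O by cache usage, following the description above:
# read hit, read miss (stage from the ACP pair), or fast write (cache first,
# destage to disk later).

class CacheDirectory:
    def __init__(self):
        self.resident = {}              # track id -> data currently in cache

    def handle_read(self, track: int):
        if track in self.resident:      # read hit: transfer at channel speed
            return "read hit", self.resident[track]
        # read miss: disconnect, reserve cache space, stage from the ACP pair
        data = self._stage_from_disk(track)
        self.resident[track] = data
        return "read miss", data

    def handle_write(self, track: int, data) -> str:
        # fast write: data is duplexed in nonvolatile cache, then the host is
        # told the I/O is complete; destage to disk happens asynchronously
        self.resident[track] = data
        return "fast write"

    def _stage_from_disk(self, track: int):
        return f"<data for track {track}>"   # placeholder for ACP staging

if __name__ == "__main__":
    cache = CacheDirectory()
    print(cache.handle_read(42)[0])    # read miss
    print(cache.handle_read(42)[0])    # read hit
    print(cache.handle_write(7, "x"))  # fast write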
3.3 3.3.1
3.3.2
3.4 3.4.1
3.4.2
3.4.3
The 9900V also allows users to configure custom-size LUs which are smaller than standard LUs as well as size-expanded LUs which are larger than standard LUs. LUN Expansion (LUSE) volumes can range in size from 4.92 GB (OPEN-3*2) to a maximum of 1312 GB (OPEN-L*36). Each LU is identified by target ID (TID) and LU number (LUN) (see Figure 3.1). Each 9900V fibre-channel port supports addressing capabilities for up to 256 LUNs when not using LUN Security and up to 512 LUNs when using LUN Security.
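As a small illustration of the per-port limits just quoted (256 LUNs without LUN Security, 512 with it), the following sketch validates a proposed LUN assignment; the set-based bookkeeping is an assumption for the example and is not part of any Hitachi configuration interface.

# Check a LUN assignment against the per-port limits quoted above:
# 256 LUNs per fibre-channel port without LUN Security, 512 with it.

def max_luns(lun_security_enabled: bool) -> int:
    return 512 if lun_security_enabled else 256

def can_add_lun(assigned_luns: set[int], lun: int, lun_security: bool) -> bool:
    """True if the LUN number is in range and the port has room for it."""
    limit = max_luns(lun_security)
    return lun not in assigned_luns and 0 <= lun < limit and len(assigned_luns) < limit

if __name__ == "__main__":
    port_luns = set(range(256))                 # port already fully populated
    print(can_add_lun(port_luns, 256, False))   # False: over the 256-LUN limit
    print(can_add_lun(port_luns, 256, True))    # True: LUN Security raises it to 512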
Figure 3.1   Fibre-Channel Device Addressing (LUN assignment to Hosts A and B before and after setting LUN security: Host A -> LU group A, Host B -> LU group B)
3.5
3.5.1
3.5.2  Share-Everything Architecture
The 9900V subsystem's global cache provides a share-everything architecture that enables any fibre-channel port to have access to any LU in the subsystem. In the 9900V, each LU can be assigned to multiple fibre-channel ports to provide I/O path failover and/or load balancing (with the appropriate middleware support) without sacrificing cache coherency. The LUN mapping can be performed by the user using the LUN Manager Remote Console software, or by your Hitachi Data Systems representative (this is a fee-based configuration service).
3.6  Data Management Functions
Table 3.3   Data Management Functions for Open-System Users
Feature Name Replication and migration: Hitacchi TrueCopy (section 3.6.1) Hitachi ShadowImage (section 3.6.3) Command Control Interface (CCI) (section 3.6.5) Data Migration (section 3.6.8) Backup/restore and sharing: Hitachi RapidXchange (HRX) (section 3.6.9) HARBOR File-Level Backup and Restore (section 3.6.11)
Hitachi Multiplatform Backup/Restore (HMBR) (section 3.6.10) No HARBOR File Transfer (section 3.6.12) Resource management: HiCommand (section 3.6.13)
LUN Manager (section 3.6.14) Hitachi SANtinel (LUN Security) (section 3.6.15) LUN Expansion (LUSE) (section 3.6.16) Virtual LUN (section 3.6.17) FlashAccess (section 3.6.18) Priority Access (section 3.6.20) Hitachi Dynamic Link Manager (section 3.6.22)
No No No No No No
Yes Yes
Storage utilities: Hitachi Performance Monitor (section 3.6.23) Hitachi CruiseControl (section 3.6.24) Hitachi Graph-Track (section 3.6.25) Network solutions: Hitachi Freedom NAS (section 1.1.5) Hitachi Freedom SAN (section 1.1.5) N/A N/A N/A N/A N/A N/A Yes Yes Yes No No No Yes Yes Yes
Table 3.4   Data Management Functions for S/390 Users
Feature Name Replication and migration: Hitachi TrueCopy S/390 (section 3.6.2) Hitachi ShadowImage S/390 (section 3.6.4) Hitachi Extended Remote Copy (HXRC) (section 3.6.6)
Licensed Software? User Document(s) Yes Yes No MK-92RD107 MK-92RD109 Planning for IBM Remote Copy, SG242595; Advanced Copy Services, SC350355; DFSMS MVS V1 Remote Copy Guide and Reference, SC35-0169 MK-90DD878, Service Offering Service Offering MK-90RD020 MK-92RD104 MK-92RD102
Hitachi Nanocopy (section 3.6.7) Data Migration (section 3.6.8) Backup/restore and sharing: Hitachi RapidXchange (HRX) (section 3.6.9) Resource management: Virtual LVI (section 3.6.17) FlashAccess (section 3.6.18)
N/A No Yes No
Yes Yes Cache Manager Yes Yes No No No No Yes Yes Yes Yes
Yes FlashAccess
Hitachi Parallel Access Volume (HPAV) (section 3.6.21) Yes Storage utilities: Hitachi Performance Monitor (section 3.6.23) Hitachi CruiseControl (section 3.6.24) Hitachi Graph-Track (section 3.6.25) Yes Yes Yes
3.6.1  Hitachi TrueCopy
Hitachi TrueCopy enables open-system users to perform synchronous and/or asynchronous remote copy operations between 9900V subsystems. The user can create, split, and resynchronize LU pairs. TrueCopy also supports a takeover command for remote host takeover (with the appropriate middleware support). Once established, TrueCopy operations continue unattended and provide continuous, real-time data backup. Remote copy operations are nondisruptive and allow the primary TrueCopy volumes to remain online to all hosts for both read and write I/O operations. TrueCopy operations can be performed between 9900V subsystems and between 9900V and 9900 subsystems. Hitachi TrueCopy supports both fibre-channel and serial (ESCON)* interface connections between the main and remote 9900V subsystems. For fibre-channel connection, TrueCopy operations can be performed across distances of up to 30 km (18.6 miles) using single-mode longwave optical fibre cables in a switch configuration. For serial interface connection, TrueCopy operations can be performed across distances of up to 43 km (26.7 miles) using standard ESCON support. Long-distance solutions are provided, based on user requirements and workload characteristics, using approved channel extenders and communication lines. * Note: Please contact your Hitachi Data Systems account team for information on the availability date of TrueCopy over serial (ESCON) interface. Note: For further information on Hitachi TrueCopy, please see the Hitachi Lightning 9900 V Series Hitachi TrueCopy User and Reference Guide (MK-92RD108), or contact your Hitachi Data Systems account team.
3.6.2  Hitachi TrueCopy S/390
3.6.3  Hitachi ShadowImage
Hitachi ShadowImage enables open-system users to maintain subsystem-internal copies of LUs for purposes such as data backup or data duplication. The RAID-protected duplicate LUs (up to nine) are created within the same 9900V subsystem as the primary LU at hardware speeds. Once established, ShadowImage operations continue unattended to provide asynchronous internal data backup. ShadowImage operations are nondisruptive; the primary LU of each ShadowImage pair remains available to all hosts for both read and write operations during normal operations. Usability is further enhanced through a resynchronization capability that reduces data duplication requirements and backup time, thereby increasing user productivity. ShadowImage also supports reverse resynchronization for maximum flexibility.

ShadowImage operations can be performed in conjunction with Hitachi TrueCopy operations (see section 3.6.1) to provide multiple copies of critical data at both primary and remote sites. ShadowImage also supports the Virtual LVI/LUN and FlashAccess features of the 9900V subsystem, ensuring that all user data can be duplicated by ShadowImage operations.

Note: For further information on Hitachi ShadowImage, please see the Hitachi Lightning 9900 V Series ShadowImage User's Guide (MK-92RD110), or contact your Hitachi Data Systems account team.
3.6.4  Hitachi ShadowImage S/390
3.6.5  Command Control Interface (CCI)
3.6.6  Hitachi Extended Remote Copy (HXRC)
3.6.7  Hitachi NanoCopy
Hitachi NanoCopy is the storage industry's first hardware-based solution which enables customers to make Point-in-Time (PiT) copies without quiescing the application or causing any disruption to end-user operations. NanoCopy is based on Hitachi TrueCopy S/390 Asynchronous, which is used to move large amounts of data over any distance with complete data integrity and minimal impact on performance. Hitachi TrueCopy S/390 Asynchronous can be integrated with third-party channel extender products to address the access-anywhere goal of data availability. Hitachi TrueCopy S/390 Asynchronous enables production data to be duplicated via ESCON or communication lines from a main (primary) site to a remote (secondary) site that can be thousands of miles away.

NanoCopy copies data between any number of primary subsystems and any number of secondary subsystems, located any distance from the primary subsystem, without using valuable server processor cycles. The copies may be of any type or amount of data and may be recorded on subsystems anywhere in the world. NanoCopy enables customers to quickly generate copies of production data for such uses as application testing, business intelligence, and disaster recovery for business continuance. For disaster recovery operations, NanoCopy will maintain a duplicate of critical data, allowing customers to initiate production at a backup location immediately following an outage. This is the first time an asynchronous hardware-based remote copy solution, with full data integrity, has been offered by any storage vendor.

Hitachi TrueCopy S/390 Asynchronous with Hitachi NanoCopy support is offered as an extension to Hitachi Data Systems data movement options and software solutions for the Hitachi Lightning 9900 V Series subsystem. Hitachi ShadowImage S/390 can also operate in conjunction with Hitachi TrueCopy S/390 Synchronous and Asynchronous to provide volume-level backup and additional image copies of data. This delivers an additional level of data integrity to assure consistency across sites and provides flexibility in maintaining volume copies at each site.

Note: For further information on Hitachi NanoCopy, please contact your Hitachi Data Systems account team.
3.6.8  Data Migration
The Lightning 9900 V Series subsystem supports data migration operations from other disk array subsystems, including older Hitachi subsystems as well as other vendors' subsystems. Data can be moved to a new location either temporarily or as part of a data relocation process. During normal migration operations, the data being migrated remains online to the host(s) for both read and write I/O operations.

Note: Data migration is available as a Hitachi Data Systems service offering. For further information on data migration, please contact your Hitachi Data Systems account team.
3.6.9  Hitachi RapidXchange (HRX)
3.6.13 HiCommand
HiCommand provides a consistent, easy to use, and easy to configure set of interfaces for managing Hitachi storage products including the Lightning 9900 V Series subsystem. HiCommand provides a web interface for real-time interaction with the storage arrays being managed, as well as a command line interface (CLI) for scripting. HiCommand gives storage administrators easier access to the existing Hitachi subsystem configuration, monitoring, and management features such as LUN Manager, LUN security, TrueCopy, and ShadowImage. Note: HiCommand 1.x does not support all Hitachi subsystem functions. HiCommand enables users to manage the 9900V subsystem and perform functions from virtually any location via the HiCommand Web Client, HiCommand command line interface (CLI), and/or third-party application. HiCommand displays detailed information on the configuration of the storage arrays added to the HiCommand system and allows you to perform important operations such as adding and deleting volume paths, securing logical units (LUs), and managing data replication operations. Note: For further information on HiCommand, please refer to the HiCommand user documentation (see Table 3.3), or contact your Hitachi Data Systems account team.
3.6.18 FlashAccess
FlashAccess allows users to store specific data in cache memory. FlashAccess increases the data access speed for the cache-resident data by enabling read and write I/Os to be performed at front-end host data transfer speeds. The FlashAccess cache areas (called cache extents) are dynamic and can be added and deleted at any time. The 9900V subsystem supports up to 1024 addressable cache extents. FlashAccess operations can be performed for open-system LUs (e.g., OPEN-3, OPEN-9) as well as S/390 LVIs (e.g., 3390-3, 3390-9), including custom-size volumes. Use of FlashAccess in conjunction with the Virtual LVI/LUN feature will achieve better performance improvements than when either of these options is used individually.

Note: For further information on FlashAccess, please see the Hitachi Lightning 9900 V Series FlashAccess User's Guide (MK-92RD102), or contact your Hitachi Data Systems account team.
Chapter 4  Configuring and Using the 9900V Subsystem
4.1 S/390 Configuration
The first step in configuring the Lightning 9900 V Series subsystem is to define the subsystem to the S/390 host(s). The three basic areas requiring definition are:
- Subsystem IDs (SSIDs),
- Hardware definitions, including the I/O Configuration Program (IOCP) or Hardware Configuration Definition (HCD), and
- Operating system definitions (HCD or OS commands).
Note: The missing interrupt handler (MIH) value for the 9900V subsystem is 45 seconds without TrueCopy, and 60 seconds when TrueCopy operations are in progress. (The MIH value for data migration operations is 120 seconds.)
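As an illustration only (the device number range 8100-81FF is hypothetical, and the exact syntax should be verified against the MVS/OS/390 initialization and tuning and system commands documentation), the MIH value could be set either in an IECIOSxx parmlib member or dynamically with the SETIOS operator command:

   MIH TIME=00:45,DEV=(8100-81FF)           IECIOSxx entry: 45 seconds (no TrueCopy)
   SETIOS MIH,DEV=(8100-81FF),TIME=01:00    operator command: 60 seconds while TrueCopy operations are in progress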
4.1.1 Subsystem IDs (SSIDs)
*Note: HPAV operations require that one SSID be set for each set of 256 LDEVs.
4.2 S/390 Hardware Definition
4.2.1 Hardware Definition Using IOCP (MVS, VM, or VSE)
The I/O Configuration Program (IOCP) can be used to define the 9900V subsystem in MVS, VM, and VSE environments (wherever HCD cannot be used). The 9900V subsystem supports up to thirty-two logical CU (LCU) images and 8192 LDEVs. Each LCU can hold up to 256 LDEV addresses. An LCU is the same as an IBM logical subsystem (LSS). The CUADD parameter defines the CU images by number (0-F). The unit type can be 3990 or 2105. Note: The FICON channel interface requires 2105-F20 emulation.
Observe the following cautions when using IOCP or HCD:
- Use FEATURE=SHARED for the devices if multiple LPARs/mainframes can access the volumes.
- MVS allows 16,384 addresses per physical interface with FICON channels, but only 1024 addresses per physical interface with ExSA (ESCON) channels. (This includes PAV base and alias addresses.)
Note: 4096 device addressing requires 16 CU images using CUADD=0 through CUADD=F in the CNTLUNIT statement.
Figure 4.1 shows a sample IOCP definition for a 9900V configured with:
- 2105 ID.
- Four FICON channel paths: two channel paths are connected to a FICON switch, and two channel paths are directly connected to the 9900V subsystem.
- Six LCUs (0, 1, 2, 3, 4, 5) with 256 LVIs per control unit.
- Sixty-four (64) base addresses and 128 alias addresses each for CUs 0, 1, 2, and 3.
- One hundred twenty-eight (128) base addresses and 128 alias addresses each for CUs 4 and 5.
******************************************************
****  Sample FICON CHPID / CNTLUNIT and IODEVICE  ****
******************************************************
CHPID    PATH=(F9),TYPE=FC,SWITCH=07
CHPID    PATH=(FB),TYPE=FC
CHPID    PATH=(FD),TYPE=FC
CHPID    PATH=(FF),TYPE=FC,SWITCH=07
CNTLUNIT CUNUMBR=8100,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=0,UNITADD=((00,256))
CNTLUNIT CUNUMBR=8200,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=1,UNITADD=((00,256))
CNTLUNIT CUNUMBR=8300,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=2,UNITADD=((00,256))
CNTLUNIT CUNUMBR=8400,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=3,UNITADD=((00,256))
CNTLUNIT CUNUMBR=8500,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=4,UNITADD=((00,256))
CNTLUNIT CUNUMBR=8600,PATH=(F9,FB,FD,FF),LINK=(04,**,**,05),           *
         UNIT=2105,CUADD=5,UNITADD=((00,256))
IODEVICE ADDRESS=(8100,064),CUNUMBR=(8100),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8180,128),CUNUMBR=(8100),STADET=Y,UNIT=3390A,        *
         UNITADD=80
IODEVICE ADDRESS=(8200,064),CUNUMBR=(8200),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8280,128),CUNUMBR=(8200),STADET=Y,UNIT=3390A,        *
         UNITADD=80
IODEVICE ADDRESS=(8300,064),CUNUMBR=(8300),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8380,128),CUNUMBR=(8300),STADET=Y,UNIT=3390A,        *
         UNITADD=80
IODEVICE ADDRESS=(8400,064),CUNUMBR=(8400),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8480,128),CUNUMBR=(8400),STADET=Y,UNIT=3390A,        *
         UNITADD=80
IODEVICE ADDRESS=(8500,128),CUNUMBR=(8500),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8580,128),CUNUMBR=(8500),STADET=Y,UNIT=3390A
IODEVICE ADDRESS=(8600,128),CUNUMBR=(8600),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(8680,128),CUNUMBR=(8600),STADET=Y,UNIT=3390A
Figure 4.1 IOCP Definition for FICON Channels (direct connect and via FICON switch)
Figure 4.2 shows a sample IOCP hardware definition for a 9900V configured with:
- 3990 ID.
- Two (2) LPARs, called PROD and TEST, sharing four ExSA (ESCON) channels connected via two ESCDs to the 9980V. Each switch has ports C0 and C1 attached to the 9980V.
- Four (4) logical control units with 256 LVIs per control unit.
- Two (2) control unit (CNTLUNIT) statements per logical control unit.
To protect data integrity when multiple operating systems share these volumes, these devices require FEATURE=SHARED.
CHPID    PATH=(B0,B1),TYPE=CNC,PARTITION=(PROD,TEST,SHR),SWITCH=01
CHPID    PATH=(B2,B3),TYPE=CNC,PARTITION=(PROD,TEST,SHR),SWITCH=02
CNTLUNIT CUNUMBR=A000,PATH=(B0,B1),UNITADD=((00,256)),LINK=(C0,C1),CUADD=0,UNIT=3990
CNTLUNIT CUNUMBR=A001,PATH=(B2,B3),UNITADD=((00,256)),LINK=(C0,C1),CUADD=0,UNIT=3990
CNTLUNIT CUNUMBR=A100,PATH=(B0,B1),UNITADD=((00,256)),LINK=(C0,C1),CUADD=1,UNIT=3990
CNTLUNIT CUNUMBR=A101,PATH=(B2,B3),UNITADD=((00,256)),LINK=(C0,C1),CUADD=1,UNIT=3990
CNTLUNIT CUNUMBR=A200,PATH=(B0,B1),UNITADD=((00,256)),LINK=(C0,C1),CUADD=2,UNIT=3990
CNTLUNIT CUNUMBR=A201,PATH=(B2,B3),UNITADD=((00,256)),LINK=(C0,C1),CUADD=2,UNIT=3990
CNTLUNIT CUNUMBR=A300,PATH=(B0,B1),UNITADD=((00,256)),LINK=(C0,C1),CUADD=3,UNIT=3990
CNTLUNIT CUNUMBR=A301,PATH=(B2,B3),UNITADD=((00,256)),LINK=(C0,C1),CUADD=3,UNIT=3990
IODEVICE ADDRESS=(A000,256),CUNUMBR=(A000,A001),UNIT=3390,FEATURE=SHARED
IODEVICE ADDRESS=(A100,256),CUNUMBR=(A100,A101),UNIT=3390,FEATURE=SHARED
IODEVICE ADDRESS=(A200,256),CUNUMBR=(A200,A201),UNIT=3390,FEATURE=SHARED
IODEVICE ADDRESS=(A300,256),CUNUMBR=(A300,A301),UNIT=3390,FEATURE=SHARED
Note: 4096 device addressing requires 16 CU images using CUADD=0 through CUADD=F in the CNTLUNIT statement.
Figure 4.2 IOCP Definition for 1024 LVIs (9900V connected to host CPU(s) via ESCD)
Figure 4.3 shows a sample IOCP hardware definition for a 9980V with:
- 2105 ID.
- Eight (8) ExSA (ESCON) channels directly connected to the 9980V.
- Four (4) logical control units with 256 LVIs per control unit, and one (1) control unit (CNTLUNIT) statement per logical control unit.
- One hundred twenty-eight (128) 3390 base addresses and 128 3390 alias addresses each for CUs 0 and 1.
- Sixty-four (64) 3390 base addresses and 192 3390 alias addresses in CU 2.
- One hundred twenty-eight (128) 3390 addresses, sixty-four (64) 3390 base addresses, and sixty-four (64) 3390 alias addresses in CU 3.
To protect data integrity when multiple operating systems share these volumes, these devices require FEATURE=SHARED.
Note: If you maintain separate IOCP definition files and create your SCDS or IOCDS manually by running the IZP IOCP program, you must define each LCU on a 9900V subsystem using one CNTLUNIT statement in IOCP. While it is possible to define an LCU on a 9900V subsystem using multiple CNTLUNIT statements in IOCP, the resulting input deck cannot be migrated to HCD because of an IBM restriction that allows only one CNTLUNIT definition.
CHPID    PATH=(60,61,62,63,64,65,66,67),TYPE=CNC
CNTLUNIT CUNUMBR=8000,PATH=(60,61,62,63,64,65,66,67),                  *
         UNITADD=((00,256)),CUADD=0,UNIT=2105
CNTLUNIT CUNUMBR=8100,PATH=(60,61,62,63,64,65,66,67),                  *
         UNITADD=((00,256)),CUADD=1,UNIT=2105
CNTLUNIT CUNUMBR=8200,PATH=(60,61,62,63,64,65,66,67),                  *
         UNITADD=((00,256)),CUADD=2,UNIT=2105
CNTLUNIT CUNUMBR=8300,PATH=(60,61,62,63,64,65,66,67),                  *
         UNITADD=((00,256)),CUADD=3,UNIT=2105
IODEVICE ADDRESS=(8000,128),CUNUMBR=(8000),STADET=Y,UNIT=3390B,FEATURE=SHARED
IODEVICE ADDRESS=(8080,128),CUNUMBR=(8000),STADET=Y,UNIT=3390A,FEATURE=SHARED
IODEVICE ADDRESS=(8100,128),CUNUMBR=(8100),STADET=Y,UNIT=3390B,FEATURE=SHARED
IODEVICE ADDRESS=(8180,128),CUNUMBR=(8100),STADET=Y,UNIT=3390A,FEATURE=SHARED
IODEVICE ADDRESS=(8200,064),CUNUMBR=(8200),STADET=Y,UNIT=3390B,FEATURE=SHARED
IODEVICE ADDRESS=(8240,192),CUNUMBR=(8200),STADET=Y,UNIT=3390A,FEATURE=SHARED
IODEVICE ADDRESS=(8300,128),CUNUMBR=(8300),STADET=Y,UNIT=3390,FEATURE=SHARED
IODEVICE ADDRESS=(8380,064),CUNUMBR=(8300),STADET=Y,UNIT=3390B,FEATURE=SHARED
IODEVICE ADDRESS=(83C0,064),CUNUMBR=(8300),STADET=Y,UNIT=3390A,FEATURE=SHARED
Figure 4.3
The 9980V subsystem can be configured with up to 32 connectable physical paths to provide up to 32 concurrent host data transfers. The 9970V subsystem can be configured with up to 24 connectable physical paths to provide up to 24 concurrent host data transfers. Since only 16 channel interface IDs are available (due to 16 physical channel interfaces for IBM systems), the 9900V uses one channel interface ID for each pair of physical paths. For example, link control processors (LCPs) 1A and 1B correspond to channel interface ID 08 (00), and LCPs 1C and 1D correspond to channel interface ID 09 (01). Table 4.2 illustrates the correspondence between physical paths and channel interface IDs on Cluster 1, and Table 4.3 illustrates the same for Cluster 2.
Table 4.2 Correspondence between Physical Paths and Channel Interface IDs (Cluster 1)
Channel Interface ID    Physical Paths (LCPs)
08 (00)                 1A & 1B
09 (01)                 1C & 1D
0A (02)                 1E & 1F
0B (03)                 1G & 1H
0C (04)                 1J & 1K
0D (05)                 1L & 1M
0E (06)                 1N & 1P
0F (07)                 1Q & 1R
Table 4.3 Correspondence between Physical Paths and Channel Interface IDs (Cluster 2)
4.2.2 Hardware Definition Using HCD
Table 4.5
Parameters for the Control Frame: Channel path IDs, Unit address, Number of units.
Parameters for the Array Frame: Device number, Number of devices, Device type, Connected to CUs.
*Note: The NOCHECK function was introduced by APAR OY62560. Defining the 9900V as a single control unit allows all channel paths to access all DASD devices.
2105 Controller Emulation. To define a 9980V logical control unit (LCU) and the base and alias address range that it will support, use the following example for HCD. Note: The following HCD steps correspond to the 2105 IOCP definition shown in Figure 4.3. Note: The HCD PAV definitions must match the configurations in the 9900V subsystem. If they do not, error messages will occur when the hosts are IPL'd or the devices are varied online.
1. From an ISPF/PDF primary options menu, select the HCD option to display the basic HCD panel (see Figure 4.4). On this panel you must verify the name of the IODF or IODF.WORK I/O definition file to be used.
2. On the basic HCD panel (see Figure 4.5), select the proper I/O definition file, and then select option 1 to display the Define, Modify, or View Configuration Data panel.
3. On the Define, Modify, or View Configuration Data panel (see Figure 4.6), select option 4 to display the Control Unit List panel.
4. On the Control Unit List panel (see Figure 4.7), if a 2105 type of control unit already exists, an Add like operation can be used by entering A next to the 2105 type control unit and pressing the Enter key. Otherwise, press F11 to add a new control unit.
5. On the Add Control Unit panel (see Figure 4.8), enter the following new information, or edit the information if it was preloaded from an Add like operation, and then press the Enter key: control unit number; control unit type (2105); switch information (only if a switch exists; otherwise leave the switch and ports blank).
6. On the Select Processor / Control Unit panel (see Figure 4.9), enter S next to the PROC. ID, and then press the Enter key.
7. On the Add Control Unit panel (see Figure 4.10), enter the CHPIDs that attach to the control unit, the logical control unit address, the device starting address, and the number of devices supported, and then press the Enter key.
8. Verify that the data is correct on the Select Processor / Control Unit panel (see Figure 4.11), and then press F3.
9. On the Control Unit List panel (see Figure 4.12), to add devices to the new control unit, enter S next to CU 8000, and then press the Enter key.
10. On the I/O Device List panel (see Figure 4.13), press F11 to add new devices.
11. On the Add Device panel (see Figure 4.14), enter the following, and then press the Enter key: device number; number of devices; device type (3390, 3390B for an HPAV base device, or 3390A for an HPAV alias device).
12. On the Device / Processor Definition panel (see Figure 4.15), add this device to a specific Processor/System-ID combination by entering S next to the processor and then pressing the Enter key.
13. On the Define Device / Processor panel, enter the values shown in Figure 4.16, and press the Enter key.
14. On the Define Processor / Definition panel (see Figure 4.17), verify that the proper values are displayed, and press the Enter key.
15. On the Define Device to Operating System Configuration panel, enter S next to the Config ID (see Figure 4.18), and then press the Enter key.
16. The Define Device Parameters / Features panel displays the default device parameters (see Figure 4.19). Note: The WLMPAV parameter defaults to YES. Set the desired parameters, and then press the Enter key.
17. This returns to the Define Device to Operating System Configuration panel. Press F3.
18. The Update Serial Number, Description and VOLSER panel now displays the desired device addresses (see Figure 4.20). To add more control units or device addresses, repeat the previous steps.
(Panel: OS/390 ISPF Master Menu (San Diego OS/390 R2.8), with OPTION ===> HC entered to start the Hardware Configuration Definition (HCD) dialog.)
Figure 4.4
OS/390 Release 5 HCD Command ===> ________________________________________________________________ Hardware Configuration Select one of the following. 1_ 1. 2. 3. 4. 5. 6. 7. 8. 9. Define, modify, or view configuration data Activate or process configuration data Print or compare configuration data Create or view graphical configuration report Migrate configuration data Maintain I/O definition files Query supported hardware and installed UIMs Getting started with this dialog What's new in this release
For options 1 to 5, specify the name of the IODF to be used. I/O definition file . . . 'HDS.IODF06.WORK' + .----------------------------------------------------------. | (C) Copyright IBM Corp. 1990, 1998. All rights reserved. | '----------------------------------------------------------' F1=Help F2=Split F3=Exit F4=Prompt F9=Swap F12=Cancel
Figure 4.5
OS/390 Release 5 HCD .------------- Define, Modify, or View Configuration Data --------------. _ | | | Select type of objects to define, modify, or view data. | | | | 4_ 1. Operating system configurations | | consoles | | system-defined generics | | EDTs | | esoterics | | user-modified generics | | 2. Switches | | ports | | switch configurations | | port matrix | | 3. Processors | | partitions | | channel paths | | 4. Control units | | 5. I/O devices | | | | F1=Help F2=Split F3=Exit F9=Swap F12=Cancel | '-----------------------------------------------------------------------'
Figure 4.6
Goto Filter Backup Query Help -------------------------------------------------------------------------Control Unit List Row 27 of 40 Command ===> ___________________________________________ Scroll ===> PAGE Select one or more control units, then press Enter. To add, use F11.
/ CU Type + Serial-# + Description _ 3107 SCTC __________ ________________________________ _ 3108 SCTC __________ ________________________________ _ 3109 SCTC __________ ________________________________ _ 310A SCTC __________ ________________________________ _ 4000 3990-6 __________ ________________________________ _ 4100 3990-6 __________ ________________________________ _ 4200 3990-6 __________ ________________________________ _ 4300 3990-6 __________ ________________________________ _ 5000 3990 __________ ________________________________ _ 5001 3990 __________ ________________________________ _ 6000 3990 __________ ________________________________ _ 6001 3990 __________ ________________________________ _ 7000 3990 __________ ________________________________ _ 7001 3990 __________ ________________________________ F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F8=Forward F9=Swap F10=Actions F11=Add F12=Cancel
F7=Backward
Figure 4.7
.-------------------------- Add Control Unit ---------------------------. | | | | | Specify or revise the following values. | | | | Control unit number . . . . 8000 + | | | | Control unit type . . . . . 2105_________ + | | | | Serial number . . . . . . . __________ | | Description . . . . . . . . new 2105 type for 80xx devices__ | | | | Connected to switches . . . __ __ __ __ __ __ __ __ + | | Ports . . . . . . . . . . . __ __ __ __ __ __ __ __ + | | | | If connected to a switch, select whether to have CHPIDs/link | | addresses, and unit address range proposed. | | | | Auto-assign . . . . . . . . 2 1. Yes | | 2. No | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap | | F12=Cancel | '-----------------------------------------------------------------------'
Figure 4.8
Goto Filter Backup Query Help .---------------------- Select Processor / Control Unit ----------------------. | Row 1 of 1 More: > | | Command ===> ___________________________________________ Scroll ===> PAGE | | | | Select processors to change CU/processor parameters, then press Enter. | | | | Control unit number . . : 8000 Control unit type . . . : 2105 | | | | Log. Addr. -------Channel Path ID . Link Address + ------- | | / Proc. ID Att. (CUADD) + 1---- 2---- 3---- 4---- 5---- 6---- 7---- 8---- | | S PROD __ _____ _____ _____ _____ _____ _____ _____ _____ | | ***************************** Bottom of data ****************************** | | | | | | | | | | | | | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------------' | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------------'
Figure 4.9
Goto Filter Backup Query Help .--------------------------- Add Control Unit ----------------------------. | | | | | Specify or revise the following values. | | | | Control unit number . : 8000 Type . . . . . . : 2105 | | Processor ID . . . . . : PROD | | | | Channel path IDs . . . . 60 61 62 63 64 65 66 67 + | | Link address . . . . . . __ __ __ __ __ __ __ __ + | | | | Unit address . . . . . . 00 __ __ __ __ __ __ __ + | | Number of units . . . . 256 ___ ___ ___ ___ ___ ___ ___ | | | | Logical address . . . . 0_ + (same as CUADD) | | | | Protocol . . . . . . . . __ + (D,S or S4) | | I/O concurrency level . 2 + (1, 2 or 3) | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap | | F12=Cancel | '-------------------------------------------------------------------------'
Figure 4.10 Control Unit Chpid, CUADD, and Device Address Range Addressing (Step 7)
Goto Filter Backup Query Help .---------------------- Select Processor / Control Unit ----------------------. | Row 1 of 1 More: > | | Command ===> ___________________________________________ Scroll ===> PAGE | | | | Select processors to change CU/processor parameters, then press Enter. | | | | Control unit number . . : 8000 Control unit type . . . : 2105 | | | | Log. Addr. -------Channel Path ID . Link Address + ------- | | / Proc. ID Att. (CUADD) + 1---- 2---- 3---- 4---- 5---- 6---- 7---- 8---- | | _ PROD Yes 0 60 61 62 63 64 65 66 67 | | ***************************** Bottom of data ****************************** | | | | | | | | | | | | | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------------'
Goto Filter Backup Query Help -------------------------------------------------------------------------Control Unit List Row 40 of 41 Command ===> ___________________________________________ Scroll ===> PAGE Select one or more control units, then press Enter. / CU Type + Serial-# + _ 7001 3990 __________ S 8000 2105 __________ ******************************* To add, use F11.
Description ________________________________ add 2105 type for 80xx devices Bottom of data ********************************
F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F7=Backward F8=Forward F9=Swap F10=Actions F11=Add F12=Cancel
Goto Filter Backup Query Help -------------------------------------------------------------------------- I/O Device List Command ===> ___________________________________________ Scroll ===> PAGE Select one or more devices, then press Enter. To add, use F11. Control unit number . : 8000 Control unit type . . : 2105
-------Device------- --#-- --------Control Unit Numbers + -------/ Number Type + PR OS 1--- 2--- 3--- 4--- 5--- 6--- 7--- 8--- Base ******************************* Bottom of data ********************************_ __
F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F7=Backward F8=Forward F9=Swap F10=Actions F11=Add F12=Cancel
Goto Filter Backup Query Help .-------------------------------- Add Device ---------------------------------. | | | | | Specify or revise the following values. | | | | Device number . . . . . . . . 8000 (0000 - FFFF) | | Number of devices . . . . . . 128 | | Device type . . . . . . . . . 3390B________ + | | | | Serial number . . . . . . . . __________ | | Description . . . . . . . . . 3390 Base addresses 8000-807F__ | | | | Volume serial number . . . . . ______ (for DASD) | | | | Connected to CUs . . 8000 ____ ____ ____ ____ ____ ____ ____ + | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap | | F12=Cancel | '-----------------------------------------------------------------------------'
.-------------------- Device / Processor Definition --------------------. | Row 1 of 1 | | Command ===> _____________________________________ Scroll ===> PAGE | | | | Select processors to change device/processor definitions, then press | | Enter. | | | | Device number . . : 8100 Number of devices . : 128 | | Device type . . . : 3390B | | | | Preferred Explicit Device | | / Processor ID UA + Time-Out STADET CHPID + Candidate List | ***** | s PROD __ No Yes __ No | | ************************** Bottom of data *************************** | | | | | | | | | | | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------'
Figure 4.15 Device / Processor Definition Panel Selecting the Processor ID (Step 12)
.------------------------- Define Device / Processor -------------------------. | | | | | Specify or revise the following values. | | | | Device number . : 8000 Number of devices . . . . : 128 | | Device type . . : 3390B | | Processor ID . . : PROD | | | | Unit address . . . . . . . . . . 00 + (Only necessary when different from | | the last 2 digits of device number) | | Time-Out . . . . . . . . . . . . No (Yes or No) | | STADET . . . . . . . . . . . . . Yes (Yes or No) | | | | Preferred CHPID . . . . . . . . __ + | | Explicit device candidate list . No (Yes or No) | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap | | F12=Cancel | '-----------------------------------------------------------------------------'
.-------------------- Device / Processor Definition --------------------. | Row 1 of 1 | | Command ===> _____________________________________ Scroll ===> PAGE | | | | Select processors to change device/processor definitions, then press | | Enter. | | | | Device number . . : 8000 Number of devices . : 128 | | Device type . . . : 3390B | | | | Preferred Explicit Device | | / Processor ID UA + Time-Out STADET CHPID + Candidate List | | _ PROD 00 No Yes __ No | | ************************** Bottom of data *************************** | | | | | | | | | | | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------'
Figure 4.17
.----------- Define Device to Operating System Configuration -----------. | Row 1 of 1 | | Command ===> _____________________________________ Scroll ===> PAGE | | | | Select OSs to connect or disconnect devices, then press Enter. | | | | Device number . : 8100 Number of devices : 128 | | Device type . . : 3390B | | | | / Config. ID Type Description Defined | | s PROD MVS | | ************************** Bottom of data *************************** | | | | | | | | | | | | | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F6=Previous F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------'
.-------------------- Define Device Parameters / Features --------------------. | Row 1 of 6 | | Command ===> ___________________________________________ Scroll ===> PAGE | | | | Specify or revise the values below. | | | | Configuration ID . : PROD | | Device number . . : 8000 Number of devices : 128 | | Device type . . . : 3390B | | | | Parameter/ | | Feature Value P Req. Description | | OFFLINE No Device considered online or offline at IPL | | DYNAMIC Yes Device supports dynamic configuration | | LOCANY No UCB can reside in 31 bit storage | | WLMPAV Yes Device supports work load manager | | SHARED Yes Device shared with other systems | | SHAREDUP No Shared when system physically partitioned | | ***************************** Bottom of data ****************************** | | | | | | F1=Help F2=Split F3=Exit F4=Prompt F5=Reset | | F7=Backward F8=Forward F9=Swap F12=Cancel | '-----------------------------------------------------------------------------'
.---------- Update Serial Number, Description and VOLSER -----------. | Row 1 of 128 | | Command ===> _________________________________ Scroll ===> PAGE | | | | Device number . . : 8000 Number of devices : 128 | | Device type . . . : 3390B | | | | Specify or revise serial number, description and VOLSER. | | | | Device Number Serial-# Description VOLSER | | 8000 __________ 3390 Base addresses 8000-807F ______ | | 8001 __________ 3390 Base addresses 8000-807F ______ | | 8002 __________ 3390 Base addresses 8000-807F ______ | | 8003 __________ 3390 Base addresses 8000-807F ______ | | 8004 __________ 3390 Base addresses 8000-807F ______ | | 8005 __________ 3390 Base addresses 8000-807F ______ | | 8006 __________ 3390 Base addresses 8000-807F ______ | | 8007 __________ 3390 Base addresses 8000-807F ______ | | 8008 __________ 3390 Base addresses 8000-807F ______ | | 8009 __________ 3390 Base addresses 8000-807F ______ | | 800A __________ 3390 Base addresses 8000-807F ______ | | F1=Help F2=Split F3=Exit F5=Reset F7=Backward | | F8=Forward F9=Swap F12=Cancel | '-------------------------------------------------------------------'
Figure 4.20 Update Serial Number, Description and VOLSER Panel (Step 18)
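Note: After the control units and devices have been defined as shown above, the work IODF must be built into a production IODF and activated before the new 9900V devices can be used. The following is a sketch only (the IODF suffix 07 is hypothetical; refer to the HCD and MVS system commands documentation for the complete procedure):

   ACTIVATE IODF=07,TEST        test the dynamic activation without making changes
   ACTIVATE IODF=07             activate the new hardware and software definitions
   D IOS,CONFIG                 display the currently active IODF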
4.2.3
4.2.4
4.3 4.3.1
// EXAMPLE JOB
// EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
  INIT UNITADDRESS(XXXX) NOVERIFY VOLID(YYYYYY) OWNERID(ZZZZZZZ)
/*
4.3.2
Table 4.6
Command INSPECT
9900V INSTALL SETMODE (3390) SETMODE (3380) ANALYZE BUILDX REVAL REFRESH RAMAC 9900V RAMAC 9900V RAMAC 9900V RAMAC 9900V RAMAC 9900V DATA, NODATA CONTROL INIT REFORMAT CPVOLUME AIXVOL RAMAC 9900V RAMAC 9900V RAMAC 9900V RAMAC 9900V RAMAC 9900V RAMAC 9900V
In case of PRESERVE: CC = 12, In case of NO PRESERVE: CC = 0. CC = 0 (but not recommended by IBM). CC = 0 CC = 12, Invalid parameter(s) for device type. CC = 12, Function not supported for nonsynchronous DASD. CC = 0 CC = 0 CC = 0 CC = 0 CC = 12 Device not supported for the specified function. CC = 12, F/M = 04 (EC=66BB) Error, not a data check. Processing terminated. CC = 0, Data/Nodata parameter not allowed. CC=0 CC = 0, ALT information not displayed. CC = 0, ALT information not displayed. CC = 0, ALT information not displayed. CC = 0 CC = 0, ALT information not displayed. CC=0 CC = 0, Readcheck parameter not allowed. CC=0 Readcheck parameter not allowed. CC = 0
4.3.3
SSID  DEVS  READ  WRITE  HIT RATIO  FW BYPASSES
0007  10    N/A   N/A    87%        0
SSID=SUBSYSTEM IDENTIFIER
DEVS=NUMBER OF MANAGED DEVICES ATTACHED TO SUBSYSTEM
READ=PERCENT OF DATA ON MANAGED DEVICES ELIGIBLE FOR CACHING
WRITE=PERCENT OF DATA ON MANAGED DEVICES ELIGIBLE FOR FAST WRITE
HIT RATIO=PERCENT OF READS WITH CACHE HITS
FW BYPASSES=NUMBER OF FAST WRITE BYPASSES DUE TO NVS OVERLOAD
Figure 4.22 Displaying Cache Statistics Using MVS DFSMS
The 9900V supports the following MVS cache operations:
IDCAMS LISTDATA COUNTS. When the SUBSYSTEM parameter is used with the LISTDATA command, the user must issue the command once for each SSID to view the entire 9900V image. Figure 4.23 shows a JCL example of the LISTDATA COUNTS command.
//LIST JOB. . . . . .
//COUNT1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
  LISTDATA COUNTS VOLUME(VOL000)
  LISTDATA COUNTS VOLUME(VOL064)
  LISTDATA COUNTS VOLUME(VOL128)
  LISTDATA COUNTS VOLUME(VOL192)
/*
Subsystem counter reports. The cache statistics reflect the logical caching status of the volumes. For the 9900V, Hitachi Data Systems recommends that you set nonvolatile storage (NVS) ON and DASD fast write (DFW) ON for all logical volumes. This does not affect the way the 9900V caches data for the logical volumes. The default caching status for the 9900V is:
- CACHE ON for the subsystem
- CACHE ON for all logical volumes
- CACHE FAST WRITE ON for the subsystem
- NVS OFF for the subsystem (change NVS to ON for the 9900V)
- DFW OFF for all volumes (change DFW to ON for the 9900V)
Note: In normal cache replacement, bypass-cache, or inhibit-cache-loading mode, the 9900V performs a special function to determine whether the data access pattern from the host is sequential. If the access pattern is sequential, the 9900V transfers contiguous tracks from the disks to cache ahead of time to improve the cache hit rate. Because of this advance track transfer, the 9900V shows the number of tracks transferred from the disks to the cache slot at DASD/CACHE of the SEQUENTIAL in TRANSFER OPERATIONS field in the subsystem counters report, even though the access mode is not sequential.
IDCAMS LISTDATA STATUS. The LISTDATA STATUS command generates status information for a specific device within the subsystem. The 9900V reports two storage sizes:
- Subsystem storage. This field shows the capacity in bytes of cache. For a 9900V with more than one SSID, the cache is shared among the SSIDs instead of being logically divided; this strategy ensures backup battery power for all cache in the 9900V. For the 9900V, this field shows three-fourths (75%) of the total cache size.
- Nonvolatile storage. This field shows the capacity in bytes of random-access cache with a backup battery power source. For the 9900V, this field shows one-fourth (25%) of the total cache size.
IDCAMS SETCACHE. The 9900V supports the IDCAMS SETCACHE commands, which manage caching for subsystem storage through the use of one command (except for REINITIALIZE). The following SETCACHE commands work for the subsystem storage across multiple SSIDs:
- SETCACHE SUBSYSTEM ON|OFF
- SETCACHE CACHEFASTWRITE ON|OFF
- SETCACHE NVS ON|OFF
- SETCACHE DESTAGE
Note: The SETCACHE REINITIALIZE command reinitializes only the logical subsystem specified by the SSID. You must issue the REINITIALIZE command once for each defined SSID.
DEVSERV PATHS. The number of LVIs that can be specified by an operator on a DEVSERV PATHS command is limited (from 1 through 99). To display an entire 9900V subsystem, enter the DEVSERV command for several ranges of LVIs, as follows:
DEVSERV PATHS,100,64
DEVSERV PATHS,140,64
DEVSERV PATHS,180,64
DEVSERV PATHS,1C0,64
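For illustration only, the following is a minimal IDCAMS sketch of the recommended NVS and DASD fast write changes followed by a status check (the volume serial VOL000 and unit type 3390 are placeholders; verify the keyword spellings against the DFSMS Access Method Services documentation for your release):

//SETCACHE JOB . . . . . .
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
    /* Enable nonvolatile storage (applies across the subsystem storage) */
    SETCACHE VOLUME(VOL000) UNIT(3390) NVS ON
    /* Enable DASD fast write for this logical volume */
    SETCACHE VOLUME(VOL000) UNIT(3390) DASDFASTWRITE ON
    /* Display subsystem storage and nonvolatile storage sizes */
    LISTDATA STATUS VOLUME(VOL000) UNIT(3390)
/*

Because the subsystem-level SETCACHE parameters (SUBSYSTEM, CACHEFASTWRITE, NVS, DESTAGE) apply across the subsystem storage, one command is sufficient for those settings, while DASD fast write is enabled for each logical volume.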
4.3.4
4.3.5
4.4 Open-Systems Configuration
After physical installation of the 9900V subsystem has been completed, the user configures the 9900V subsystem for open-system operations with assistance as needed from the Hitachi Data Systems representative. For specific information and instructions on configuring the 9900V disk devices for open-system operations, please refer to the 9900V configuration guide for the connected platform. Table 4.7 lists the currently supported platforms and the corresponding 9900V configuration guides. Please contact your Hitachi Data Systems account team for the latest information on platform and operating system version support.
Table 4.7
Platform                      Configuration Guide
UNIX-based platforms:
  IBM AIX                     MK-92RD120
  HP-UX                       MK-92RD121
  Sun Solaris                 MK-92RD119
PC server platforms:
  Windows NT                  MK-92RD122
  Windows 2000                MK-92RD123
4.4.1
(Drawing: before LUN security is set, Host A and Host B are both connected through a hub or switch to the same 9980V fibre port and can access all LUNs (LUN 0 through 255). After setting LUN security (Host A -> LU group A, Host B -> LU group B), Host A can access only the LUs in LU group A, and Host B can access only the LUs in LU group B.)
4.4.2
4.4.3
4.5 4.5.1
4.5.2
4.5.3
4.5.4 SIM Reporting
The 9900V subsystem logs all SIMs on the SVP. When the user accesses the 9900V subsystem using the Remote Console software, the SIM log is displayed. This enables open-system users to monitor 9900V operations from any Remote Console PC. The Remote Console software allows the user to view the SIMs by date/time or by controller.
4.5.5
4.5.6
4.6 Control Panel
Figure 4.26 shows the 9980V operator control panel. The 9970V control panel is the same except that the interfaces are A-M (not A-R). Table 4.8 describes the items on the operator control panel. To open the control panel cover, push and release on the point marked PUSH.
(The drawing shows, under the control panel cover: the SUBSYSTEM RESTART switch; the REMOTE MAINTENANCE PROCESSING and REMOTE MAINTENANCE ENABLE/DISABLE indicators; the STORAGE CLUSTER 1 and STORAGE CLUSTER 2 channel ENABLE LEDs for channels A through R, each with upper (U) and lower (L) indicators; the BS-ON and PS-ON LEDs; the POWER SW ENABLE and ON/OFF switches; the EMERGENCY power-off switch; the REMOTE/LOCAL switch; and the LED TEST / CHK RST switch. The items are keyed 1 through 15 to Table 4.8.)
Figure 4.26 9980V Control Panel
Table 4.8
Item 1 2 3 Name
4 5
SUBSYSTEM RESTART REMOTE MAINTENANCE PROCESSING REMOTE MAINTENANCE ENABLE/DISABLE STORAGE CLUSTER 1 CHANNEL A-R* ENABLE U: Upper L: Lower
LED (Green)
LED (Green)
9 10 11 12 13
BS-ON PS-ON POWER SW ENABLE POWER SW ON / OFF EMERGENCY POWER OFF (EPO) REMOTE/LOCAL
LED (Amber) LED (Green) Switch Switch 1-Way Locking Switch Switch
14
REMOTE position: Subsystem is powered on/off by instructions from CPU. LOCAL position: Subsystem is powered on/off via the POWER SW ON / OFF switch. Applies to both storage clusters.
15
Switch
LED TEST position: The LEDs on Control Panel go on. CHK RESET position: The PS ALARM and TH ALARM are reset.
4.6.1
Chapter 5
This chapter provides information for planning and preparing a site before and during installation of the Hitachi Lightning 9900 V Series subsystem. Please read this chapter carefully before beginning your installation planning. Figure 5.1 shows a physical overview of the 9980V subsystem. Figure 5.2 shows a physical overview of the 9970V subsystem. If you would like to use any of the Lightning 9900 V Series features or software products (e.g., TrueCopy, ShadowImage), please contact your Hitachi Data Systems account team to obtain the appropriate license(s) and software license key(s). Note: The general information in this chapter is provided to assist in installation planning and is not intended to be complete. The DKC460I/DKU455I and DKC465I (9980V and 9970V) installation and maintenance documents used by Hitachi Data Systems personnel contain complete specifications. The exact electrical power interfaces and requirements for each site must be determined and verified to meet the applicable local regulations. For further information on site preparation for Lightning 9900 V Series subsystem installation, please contact your Hitachi Data Systems account team or the Hitachi Data Systems Support Center.
(Drawing, left to right: L2 DKU (DKU455), L1 DKU (DKU455), DKC (DKC460, the disk subsystem basic unit, with SVP and control panel), R1 DKU (DKU455), R2 DKU (DKU455). The L1 and L2 frames are additional DKUs.)
Figure 5.1 Physical Overview of 9980V Subsystem
(Drawing: the 9970V frame contains HDU-Box0 through HDU-Box3.)
Figure 5.2 Physical Overview of 9970V Subsystem
5.1
5.1.1 Safety Precautions
For safe operation of the 9980V and 9970V disk subsystems, please observe the following precautions:
- Use the subsystems with the front and rear doors closed. The doors are designed for safety and for protection from noise, static electricity, and EMI emissions. Make sure that all front and rear doors are closed before operating the subsystem. The only exceptions are during the power-up or power-down processes.
- Perform only the procedures described in this manual when the front and rear doors must be opened for operation.
- Before performing power-down or power-up, make sure that the disk subsystem is not undergoing any maintenance and is not being used online.
- Do not place objects on top of the frames, because air is exhausted there and objects interfere with the flow of cooling air.
- For troubleshooting, perform only the instructions described in this manual. If you need further information, please contact Hitachi Data Systems maintenance personnel.
- In case of a problem with the subsystem, please report the exact circumstances surrounding the problem and provide as much detail as possible to expedite problem isolation and resolution.
5.2
(Dimension drawing of the 9980V subsystem, front view; unit: mm.)
Figure 5.3
(Dimension drawing of the 9970V subsystem, front view; unit: mm.)
Figure 5.4
Table 5.1
Item Weight (kg)
Heat Output (kW) Power Consumption (kVA) Air Flow (m3/min.) Dimensions (mm)
Notes: *1: These values are used when DKC460I has 16 GB cache memory, two fibre 8-port CHA options, and two DKA pairs. *2: These values are used when DKC460I has 64 GB cache memory, four fibre 8-port CHA options, and four DKA pairs. *3: These values are used when DKC460I has 64 GB cache memory, four MF fibre 8-port CHA options, and four DKA pairs. *4: These values are used when DKU455I is fully configured with 36-GB (15K rpm) HDDs. *5: These values are used when DKU455I is fully configured with 73-GB (10K rpm) HDDs. *6: This includes the thickness of side covers (16 mm 2).
Table 5.2
Item Weight (kg) Heat Output (kW) Power Consumption (kVA) Air Flow (m3/min.) Dimensions (mm) Width Depth Height
Notes: *1: These values are used when DKC465I is fully configured with all CHA options and 36-GB (15K rpm) HDDs. *2: These values are used when DKC465I is fully configured with all CHA options and 73-GB (10K rpm) HDDs. *3: This includes the thickness of side covers (16 mm 2).
5.3
(Floor cutout and service clearance drawing for the 9980V DKC; unit: mm. The drawing gives the cutout and clearance dimensions and shows the floor cutout area for cables, the screw jacks, the casters, the service clearance, the grid panel (over 450 mm x 450 mm), and the opening on the bottom of the frame for external cable entry.)
*1: Values in parentheses show the allowable range of the floor cutout dimension. The floor cutout should be in the center of the DKC. If the floor cutout is in the right position for the external cable work and is within the allowable range, the cutout position may be off-center. In this case, check the relation between the positions of the cutout and the opening on the bottom of the frame. If the floor cutout width is more than 552 mm, be careful about the restriction of the movable direction so that the caster wheels do not fall into the cutout.
*2: These dimensions vary with the floor cutout dimension.
*: The thickness of the door is different in the FRONT (35 mm) than in the REAR (25 mm).
**: Overhang of the MOSAIC (LOUVER) of the FRONT DOOR (7 mm) is not included.
Figure 5.5
(Floor cutout and service clearance drawing for the 9980V DKU; unit: mm. The drawing gives the cutout and clearance dimensions and shows the floor cutout area for cables, the screw jacks, the casters, the service clearance, and the grid panel (over 450 mm x 450 mm).)
*: The thickness of the door is different in the FRONT (35 mm) than in the REAR (25 mm).
**: Overhang of the MOSAIC (LOUVER) of the FRONT DOOR (7 mm) is not included.
Figure 5.6
(Service clearance and floor cutout drawing for a minimum 9980V configuration (DKC plus DKU); unit: mm. The drawing gives clearances a, b, c, and d and shows the floor cutout areas for cables.)
*1: Clearance (a+b) depends on the floor load rating and clearance (c). Floor load ratings and required clearances are listed in Table 5.3.
*2: Clearance (d) must be over 0.3 m so that the DKC front door can be opened (refer to Figure 5.5). If clearance (d) is less than clearance (a), give priority to clearance (a).
*3: See Figure 5.5 for details on the DKC floor cutout.
*: The thickness of the door is different in the FRONT (35 mm) than in the REAR (25 mm).
**: Overhang of the MOSAIC (LOUVER) of the FRONT DOOR (7 mm) is not included.
Figure 5.7 9980V Subsystem Service Clearance and Floor Cutouts (Minimum Configuration)
Table 5.3 Floor Load Rating and Required Clearances for 9980V Minimum Configuration
Floor Load Rating       Required Clearance (a+b) in meters
kg/m2 (lb/ft2)          c = 0       c = 0.2 (0.66 ft)
500 (102.4)             0.6         0.4
450 (92.2)              0.9         0.7
400 (81.9)              1.4         1.0
350 (71.7)              2.0         1.6
300 (61.4)              3.0         2.5
(Service clearance and floor cutout drawing for a maximum 9980V configuration, frames arranged DKU-DKU-DKC-DKU-DKU; unit: mm. The drawing gives clearances a, b, and c and shows the floor cutout areas for cables, the service clearance, and the grid panels (over 450 mm x 450 mm).)
*: The thickness of the door is different in the FRONT (35 mm) than in the REAR (25 mm).
**: Overhang of the MOSAIC (LOUVER) of the FRONT DOOR (7 mm) is not included.
Figure 5.8 9980V Subsystem Service Clearance and Floor Cutouts (Maximum Configuration)
Table 5.4 Floor Load Rating and Required Clearances for 9980V Maximum Configuration
Floor Load Rating       Required Clearance (a+b) in meters
kg/m2 (lb/ft2)          c = 0       c = 0.2 (0.66 ft)
500 (102.4)             1.9         1.2
450 (92.2)              2.7         2.0
400 (81.9)              3.9         3.1
350 (71.7)              5.6         4.6
300 (61.4)              8.3         6.9
(Service clearance and floor cutout drawing for the 9970V subsystem; unit: mm. The recommended floor cutout dimensions are 380 mm (allowable range 200-380 mm) by 340 mm (allowable range 200-340 mm). The drawing gives clearances a, b, c, and d and shows the floor cutout area for cables, the screw jacks, the casters, the service clearance, the grid panel (over 450 mm x 450 mm), and the opening on the bottom of the frame for external cable entry.)
*: The thickness of the door is different in the FRONT (35 mm) than in the REAR (25 mm).
**: Overhang of the MOSAIC (LOUVER) of the FRONT DOOR (7 mm) is not included.
Figure 5.9 9970V Subsystem Service Clearance and Floor Cutout (All Configurations)
Table 5.5 Floor Load Rating and Required Clearances for 9970V Subsystem
Required Clearance (a+b)
Floor Load Rating Clearance (c) in meters (feet) kg/m2 (lb/ft2) c=0 c = 0.2 (0.66) 500 (102.4) 450 (92.2) 400 (81.9) 350 (71.7) 300 (61.4) 0.6 m (2 ft) 0.8 m (2.6 ft) 1.0 m (3.3 ft) 1.4 m (4.6 ft) 2.0 m (6.6 ft) 0.3 m (1 ft) 0.5 m (1.6 ft) 0.7 m (2.3 ft) 1.0 m (3.3 ft) 1.5 m (4.9 ft)
c = 0.4 (1.31) 0.1 m (.3 ft) 0.3 m (1 ft) 0.5 m (1.6 ft) 0.7 m (2.3 ft) 1.1 m (3.6)
c = 0.6 (1.97) 0 0.1 m (.3 ft) 0.3 m (1 ft) 0.5 m (1.6 ft) 0.9 m (3 ft)
5.4 Electrical Specifications and Requirements for Three-Phase Subsystems
5.4.1 Power Plugs for Three-Phase
Figure 5.10 Power Plugs for Three-Phase 9980V Disk Array Unit (Europe)
(Drawing: the R&S RS460P9W plug, provided with DKU-F455I-3UC, and the R&S RS460C9W connector, to be prepared as a part of the power facility, connect to breaker CB101 at the rear of the 9980V disk array.)
Figure 5.11 Power Plugs for Three-Phase 9980V Disk Array Unit (USA)
(Drawing: the R&S 3934 connector, connecting to breaker CB101 at the rear of the 9970V.)
5.4.2
Disk Array Disk Array Disk Array 9970V The 9970V consists of a single frame.
AC Box Kit for Three-Phase Power Cable Kit for Three-Phase Power Cable Kit for Three-Phase AC Box Kit for Three-Phase Power Cable Kit for Three-Phase Power Cable Kit for Three-Phase
5.4.3 Current Rating, Power Plugs, Receptacles, and Connectors for Three-Phase (60 Hz only)
Table 5.7 lists the current rating and AC power cable plug, receptacle, and connector requirements (part number or equivalent) for three-phase 60-Hz subsystems. In a three-phase 9980V subsystem the controller frame (DKC) receives its AC input power from the first disk array frame (DKU) via internal cabling, so the subsystem will not require any customer outlets for the controller frame. The user must supply all power receptacles and connectors for 60-Hz subsystems. Russell & Stoll (R&S) type connectors (or Hubbell or Leviton) are recommended for 60-Hz systems. Note: Each 9980V disk array frame requires two power connections for power redundancy. It is strongly recommended that the second power source be supplied from a separate power boundary to eliminate source power as a possible single (nonredundant) point of failure. Note: If you are planning to provide power using 3/4-inch diameter flexible conduit, consider using box-type receptacle connectors. In some cases, inline connectors may be used with 3/4-inch flexible conduit if the appropriate adapter is available from the connector vendor.
Table 5.7 Current Rating, Power Plug, Receptacle, and Connector for Three-Phase 9900V
9970V (base unit DKC465I-5):
  Circuit current rating: 30 A
  Hitachi feature(s) required: DKC-F465I-3PS, DKC-F465I-3UC
  60-Hz power plug (or equiv., included with the product): R&S 3760PDG, or DDK 115J-AP8508
  Box-type receptacle (or equiv., not provided): R&S 3754
  Back box for receptacle: included
  Inline connector (or equiv., not provided): R&S 3934
9980V DKC (base unit DKC460I-5):
  Circuit current rating: (receives power from the DKU)
  Hitachi feature(s) required: DKC-F460I-3PS
  60-Hz power plug: N/A
  Box-type receptacle: N/A
  Back box for receptacle: N/A
  Inline connector: N/A
9980V DKU (base unit DKU455I-18):
  Circuit current rating: 60 A
  Hitachi feature(s) required: DKU-F455I-3PS, DKU-F455I-3UC
  60-Hz power plug (or equiv., included with the product): R&S RS460P9W
  Box-type receptacle (or equiv., not provided): R&S RS460R9W, or Hubbell HBL460R9W, or Leviton 460R9W
  Back box for receptacle: R&S JB6-B125 plus R&S AA6L 20-degree-angle adapter (or Hubbell BB601W, Leviton BX60-V)
  Inline connector (or equiv., not provided): R&S RS460C9W (or Hubbell HBL460C9W, or Leviton 460C9W)
Note: Use the R&S back box with the R&S receptacle, the Hubbell back box with the Hubbell receptacle, etc.
*Note: For information on power connection specifications for locations outside the U.S., contact the Hitachi Data Systems Support Center for the specific country.
5.4.4
three-phase, three wire + ground +6% / -8% three-phase, three wire + ground +6% / -8% three-phase, four wire + ground +6% / -8%
Note: For 9980V DKU, distribution board with circuit breaker or equivalent is rated 60 amps. For 9970V DKC, distribution board with circuit breaker or equivalent is rated 30 amps. This unit does not apply to IT power system.
5.4.5
5.5 Electrical Specifications and Requirements for Single-Phase Subsystems
5.5.1 Power Plugs for Single-Phase
Figure 5.15 illustrates the power plugs for a single-phase 9980V controller (Europe). Figure 5.16 illustrates the power plugs for a single-phase 9980V controller (USA). Figure 5.17 illustrates the power plugs for a single-phase 9980V disk array unit (Europe). Figure 5.18 illustrates the power plugs for a single-phase 9980V disk array unit (USA). Figure 5.19 illustrates the power plugs for a single-phase 9970V subsystem (Europe). Figure 5.20 illustrates the power plugs for a single-phase 9970V subsystem (USA). For information on UPS configurations, please contact your Hitachi Data Systems account team.
(Drawing: 200/220/230/240 V AC power with protection earth; the R&S 9P53U2 plug is provided with DKC-F460I-1UC, and the mating receptacle is to be prepared as a part of the power facility; connects to the 9980V DKC.)
(Drawing: 230/240 V connection (blue and green/yellow conductors) to breaker CB101 on the 9980V DKU.)
Figure 5.17 Power Plugs for a Single-Phase 9980V Disk Array Unit (Europe)
(Drawing: the R&S 9P53U2 plug, provided with DKU-F455I-1UC, and the R&S 9C53U2 connector or 9R53U2W receptacle, to be prepared as a part of the power facility; front and rear views of the 9980V disk array showing breaker CB101.)
Figure 5.18 Power Plugs for a Single-Phase 9980V Disk Array Unit (USA)
(Drawings: 200/208/220/230/240 V AC power (blue and green/yellow conductors) to breaker CB101 on the 9970V; the R&S 9P53U2 plug is provided with DKC-F465I-1UC, and the 9C53U2 connector is to be prepared as a part of the power facility; rear view of the 9970V.)
5.5.2
Frame Controller
Disk Array
9970V
DKC-F465I-1EC DKC-F465I-1UC
5.5.3 Current Rating, Power Plugs, Receptacles, and Connectors for Single-Phase (60 Hz only)
Table 5.11 lists the current rating and power cable plug, receptacle, and connector requirements (part number or equivalent) for single-phase 60-Hz subsystems. Each disk array frame (DKU) and controller frame (DKC) has two input power cables with R&S FS 3720 plugs. The user must supply the outlets for the plugs. Note: Each 9980V disk array frame requires two power connections for power redundancy. It is strongly recommended that the second power source be supplied from a separate power boundary to eliminate source power as a possible single (nonredundant) point of failure. Note: If you are planning to provide power using 3/4-inch diameter flexible conduit, consider using box-type receptacle connectors. In some cases, inline connectors may be used with 3/4-inch flexible conduit if the appropriate adapter is available from the connector vendor.
Table 5.11 Current Rating, Power Plug, Receptacle, and Connector for Single-Phase 9900V
9970V (base unit DKC465I-5): circuit current rating 50 A; Hitachi feature(s) required DKC-F465I-1PS, DKC-F465I-1UC; 60-Hz power plug (included with the product) R&S 9P53U2; box-type receptacle (not provided) R&S 9R53U2W; back box for receptacle R&S 3781A; inline connector (not provided) R&S 9C53U2.
9980V DKC (base unit DKC460I-5): circuit current rating 40 A; Hitachi feature(s) required DKC-F460I-1PS, DKC-F460I-1UC; 60-Hz power plug R&S 9P53U2; box-type receptacle R&S 9R53U2W; back box for receptacle R&S 3781A; inline connector R&S 9C53U2.
9980V DKU (base unit DKU455I-18): circuit current rating 50 A; Hitachi feature(s) required DKU-F455I-1PS, DKU-F455I-1UC; 60-Hz power plug R&S 9P53U2; box-type receptacle R&S 9R53U2W; back box for receptacle R&S 3781A; inline connector R&S 9C53U2.
5.5.4
Note: For 9980V DKC, distribution board with circuit breaker or equivalent is rated 40 amps. For 9980V DKU, distribution board with circuit breaker or equivalent is rated 50 amps. For 9970V DKC, distribution board with circuit breaker or equivalent is rated 50 amps.
5.5.5
5.6 Cable Requirements
Table 5.14 lists the cables required for the 9980V control frame. These cables must be ordered separately, and the quantity depends on the type and number of channels and ports. ExSA (ESCON), FICON, and fibre-channel cables are available from Hitachi Data Systems.
Table 5.14 Cable Requirements
PCI cable: Connects the 9900V to the CPU power control interface.
FICON interface cables: Connect mainframe host systems, channel extenders, or FICON directors to 9900V ports. Single-mode cables are yellow with SC- and LC-type connectors, 8-10 micron (9-micron single-mode is most common). Multimode cables are orange with SC- and LC-type connectors, 50/125-micron and 62.5-micron multimode.
ExSA (ESCON) interface cables: Connect mainframe host systems, channel extenders, or ESCON directors to 9900V ports. Multimode cables (commonly called jumper cables) use an LED light source, plug directly onto the CHA cards, and are orange with black duplex connectors; they contain two fibers (transmit and receive) and are 62.5 micron (up to 3 km per link) or 50 micron (up to 2 km per link). Single-mode cables are required on XDF links between ESCDs or IBM 9036 ESCON remote channel extenders; they use a laser light source and are yellow with gray duplex connectors, 8-10 micron (9 micron is most common).
Fibre cables: Connect open-system hosts to 9900V ports. Fibre cable types are 50/125-micron or 62.5/125-micron multimode. An SC-type (standard) connector is required for 1-Gbps ports; an LC-type (little) connector is required for 2-Gbps ports.
10/100BaseT (Cat 5) cable with RJ45 connector: Connects a Remote Console PC to the 9900V subsystem. Can also be used for connecting multiple 9900Vs together (daisy-chain).
10Base2 cable (RG58) with BNC connector: Connects a Remote Console PC to the 9900V subsystem and allows connection to multiple 9900Vs (up to 8) without using a hub. Requires a transceiver.
5.6.1
(Drawing: the 9980V frames are arranged L2 DKU, L1 DKU, DKC (DKC460I), R1 DKU, R2 DKU, with the four DKA pairs (1st through 4th) in the DKC connecting to the DKU455I HDD boxes; DKC-F460I-200 is also shown. Device interface cabling: the cable between the DKC and the R1 DKU is standard equipment of the DKC460I; DKC-F460I-L1C connects the DKC and the L1 DKU; DKU-F455I-EXC connects DKU to DKU.)
Figure 5.22 9980V Subsystem Layout and Device Interface Cable Options
5.7
*1: Indicates when 50/125-µm multimode fiber cable is used. If 62.5/125-µm multimode fiber cable is used, the maximum length is decreased to 300 m.
*2: Indicates when 50/125-µm multimode fiber cable is used. If 62.5/125-µm multimode fiber cable is used, 500 m (100 MB/s) and 300 m (200 MB/s) are decreased to 300 m and 150 m, respectively.
Table 5.16
Item Option name Host interface Data transfer rate (MB/s) Number of options installed Number of PCBs installed Number of ports / subsystem Maximum cable length
*1: Indicates when 50/125-µm multimode fiber cable is used. If 62.5/125-µm multimode fiber cable is used, the maximum length is decreased to 300 m.
5.8 5.8.1
Notes:
*1. The environmental specification for the operating condition should be satisfied before the disk subsystem is powered on. The maximum temperature of 90°F (32°C) must be strictly observed at the air inlet portion of the subsystem. The recommended temperature range is 70-75°F (21-24°C).
*2. The non-operating condition includes both packing and unpacking conditions unless otherwise specified.
*3. During shipping or storage, the product should be packed with the factory packing.
*4. No condensation in or around the drive should be observed under any conditions.
5.8.2
Model number DKC460I-5 DKC-F460I-1PS DKC-F460I-1EC DKC-F460I-1UC DKC-F460I-3PS DKC-F460I-200 DKC-F460I-2048 DKC-F460I-S512 DKC-F460I-8S DKC-F460I-8MS DKC-F460I-8ML DKC-F460I-8HSE DKC-F460I-8HLE DKC-F460I-SVP DKC-F460I-UPS DKC-F460I-80 DKC-F460I-41 DKC-F460I-42 DKC-F460I-L1C DKC-F460I-U405R DKC-F460I-U405L DKC-F460I-U405 DKC-F460I-18
Weight (kg) 466.0 10.0 7.5 10.5 7.0 3.6 0.2 0.05 4.2 4.2 4.2 4.2 4.2 5.5 1.2 8.8 5.6 42.2 3.5 20.0 19.0 15.0 0.6
Note. *1: This includes the thickness of side covers (16 mm 2).
Table 5.19
Model number DKU455I-18 DKU-F455I-1PS DKU-F455I-1EC DKU-F455I-1UC DKU-F455I-3PS DKU-F455I-3EC DKU-F455I-3UC DKU-F455I-EXC DKU-F455I-36K1 DKU-F455I-36K4 DKU-F455I-72J1 DKU-F455I-72J4
Weight (kg) 460.0 20.0 7.0 10.0 20.0 11.0 16.0 3.0 1.0 4.0 1.0 4.0
Table 5.20
Model number DKC465I-5 DKC-F465I-1PS DKC-F465I-1EC DKC-F465I-1UC DKC-F465I-3PS DKC-F465I-3EC DKC-F465I-3UC DKC-F465I-FSW DKC-F465I-FSW2 DKC-F465I-100 DKC-F460I-200
*1
Weight (kg) 586.0 15.0 7.5 10.5 12.0 6.0 10.0 2.0 2.0 3.4 3.6 0.2 0.05 14.5 1.2 4.2 4.2 4.2 4.2 4.2 0.6
*2
Heat Output (kW) 0.52 0.20 0.22 0.13 0.21 0.007 0.004 0.07 0.01 0.28 0.31 0.31 0.21 0.21 0.002 0.022 0.088 0.023 0.092
Power Consumption (kVA) 0.55 0.21 0.23 0.13 0.22 0.008 0.004 0.07 0.01 0.29 0.33 0.33 0.22 0.22 0.003 0.024 0.096 0.025 0.100
DKU-F455I-36K1
Note. *1: They are common to the options installed in DKC460I-5. Note. *2: They are common to the options installed in DKU455I-18. Note. *3: This includes the thickness of side covers (16 mm 2).
5.8.3 Loudness
The acoustic emission values [loudness in dB(A)] for a maximum 9980V subsystem configuration (one DKC, four DKUs) are: Front/rear = 60 dB(A) Both sides = 60 dB(A) Note: These values were extrapolated from the values for one DKC and one DKU.
5.8.4
5.8.5 Shock
8 g, 15 ms
*1. The vibration specifications apply to all three axes. *2. See ASTM D999-91 Standard Methods for Vibration Testing of Shipping Containers. *3. See ASTM D5277-92 Standard Test Methods for Performing Programmed Horizontal Impacts Using an Inclined Impact Tester. *4. See ASTM D1083-91 Standard Test Methods for Mechanical Handling of Unitized Loads and Large Shipping Cases and Crates.
Chapter 6  Troubleshooting
6.1 Troubleshooting
The Hitachi Lightning 9900 V Series subsystem provides continuous data availability and is not expected to fail in any way that would prevent access to user data. The READY and ENABLE LEDs on the 9900V control panel must be ON when the subsystem is operating online. Table 6.1 lists potential error conditions and provides recommended actions for resolving each condition. If you are unable to resolve an error condition, contact your Hitachi Data Systems representative, or call the Hitachi Data Systems Support Center for assistance (see section 6.2).
Table 6.1 Troubleshooting
Recommended Action Determine the type of error (refer to the SIM codes in section 6.2). If possible, remove the cause of the error. If you cannot correct the error condition, call the Hitachi Data Systems Support Center for assistance. Call the Hitachi Data Systems Support Center for assistance. WARNING: Do not open the 9900V control frame or touch any of the controls. Determine if there is a failed storage path. If so, toggle the RESTART switch, and retry the operation. If the fence message is displayed again, call the Hitachi Data Systems Support Center for assistance. Call the Hitachi Data Systems Support Center for assistance. WARNING: Do not open the 9900V control frame or touch any of the controls. Determine if channel I/O operations to that cluster are possible. If so, call the Hitachi Data Systems Support Center to have the LEDs checked. If not, disconnect the channel(s) and/or call the Hitachi Data Systems Support Center. Pull the emergency power-off (EPO) switch (see section 4.6.1). You must call the Hitachi Data Systems Support Center to have the EPO switch reset. If there is an obvious temperature problem in the area, power down the subsystem (call the Hitachi Data Systems Support Center for assistance), lower the room temperature to the specified operating range, and power on the subsystem (call the Hitachi Data Systems Support Center for assistance). If the area temperature is not the obvious cause of the alarm, call the Hitachi Data Systems Support Center for assistance.
General power failure. Fence message is displayed on the console. READY LED does not go on, or there is no power supplied. ENABLE LEDs for a cluster do not go on.
6.2
6.3
Figure 6.1 [The figure shows a typical 32-byte SIM sense record (sense bytes 0-31) with callouts identifying sense byte 13 (SSB13) and sense bytes 22 and 23 (SSB22, SSB23), which yield the reference code (RC = 307080 in this example), and the SIM type byte: F1 = DKC SIM, F2 = CACHE SIM, FE = DEVICE SIM, FF = MEDIA SIM.]
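To make the figure's annotations concrete, here is a minimal Python sketch that pulls those two fields out of a 32-byte sense record. The byte offsets used (22, 23, and 13 for the reference code; 28 for the SIM type, which holds F1 in the example) are inferred from the figure residue above rather than from a stated record layout, so treat them as assumptions to be confirmed against the SIM documentation.

```python
# Minimal sketch: read the fields called out in Figure 6.1 from a 32-byte SIM
# sense record. Byte offsets are inferred from the figure and are assumptions.

SIM_TYPES = {
    0xF1: "DKC SIM",
    0xF2: "CACHE SIM",
    0xFE: "DEVICE SIM",
    0xFF: "MEDIA SIM",
}

def decode_sim(sense: bytes) -> dict:
    """Extract the reference code and SIM type from a 32-byte sense record."""
    if len(sense) != 32:
        raise ValueError("expected a 32-byte SIM sense record")
    # Reference code: sense bytes 22, 23, and 13 (SSB22, SSB23, SSB13) concatenated.
    rc = f"{sense[22]:02X}{sense[23]:02X}{sense[13]:02X}"
    sim_type = SIM_TYPES.get(sense[28], f"unknown (0x{sense[28]:02X})")
    return {"reference_code": rc, "sim_type": sim_type}

# The example bytes from Figure 6.1 (sense bytes 0-31):
example = bytes.fromhex(
    "00 90 10 00 00 00 8F E0 44 10 00 04 00 80 04 0C "
    "69 00 00 00 00 02 30 70 05 10 42 C0 F1 00 02 00"
)
print(decode_sim(example))  # {'reference_code': '307080', 'sim_type': 'DKC SIM'}
```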
Appendix A
ACP ASTM BS BSA BTU C ca CFW CH CHA CHP CHIP CHPID CKD CL CPU CSA CSW CU CVS DASD dB(A) DFDSS DFSMS DFW DKA DKC DKU dr DSF DTDS+ ECKD EOF EPO EREP ESA ESCON ExSA FAL FBA FC FC-AL FCC
FCU: File Conversion Utility (part of the HRX software)
FDN: Freedom Data Networks
FDR: Fast Dump/Restore
FICON: Fiber Connection (IBM trademark for fiber connection technology)
F/M: format/message
FWD: fast wide differential
g: acceleration of gravity (9.8 m/s2) (unit used for vibration and shock)
Gb: gigabit
GB: gigabyte
Gbps, Gb/s: gigabit per second
GLM: gigabyte link module
GUI: graphical user interface
HACMP: High Availability Cluster Multi-Processing
HBA: host bus adapter
HCD: hardware configuration definition
HCPF: Hitachi Concurrent Processing Facility
HDLM: Hitachi Dynamic Link Manager
HMBR: Hitachi Multiplatform Backup/Restore
HPAV: Hitachi Parallel Access Volume
HRX: Hitachi RapidXchange
HSN: Hierarchical Star Network
HXRC: Hitachi Extended Remote Copy
Hz: Hertz
ICKDSF: A DSF command used to perform media maintenance
IDCAMS: access method services (a component of Data Facility Product)
IML: initial microprogram load
in.: inch(es)
IO, I/O: input/output (operation or device)
IOCP: input/output configuration program
JCL: job control language
kB: kilobyte
kcal: kilocalorie
kg: kilogram
km: kilometer
kVA: kilovolt-ampere
kW: kilowatt
LAN: local area network
lb: pound
LD: logical device
LDEV: logical device
LED: light-emitting diode
LPAR: logical partition
LCP: link control processor, local control port
LRU: least recently used
LU: logical unit
LUN: logical unit number, logical unit
LVI: logical volume image (also called device emulation)
LW: long wavelength
m: meter
MB: megabyte
MIH: missing interrupt handler
mm: millimeter
MP: microprocessor
MPLF: Multi-Path Locking Facility
MR: magnetoresistive
ms, msec: millisecond
MVS: Multiple Virtual Storage (including MVS/370, MVS/ESA, MVS/XA)
NAS: network-attached storage
NVS: nonvolatile storage
OEM: original equipment manufacturer
OFC: open fibre control
ORM: online read margin
OS: operating system
P/DAS: PPRC/dynamic address switching (an IBM S/390 host software function)
PC: personal computer system
PCI: power control interface
PPRC: Peer-to-Peer Remote Copy (an IBM S/390 host software function)
PS: power supply
R&S: Russell&Stoll
RAB: RAID Advisory Board
RAID: redundant array of independent disks
RAM: random-access memory
RC: reference code
RISC: reduced instruction-set computer
R/W: read/write
S/390: IBM System/390 architecture
SAN: storage-area network
SCSI: small computer system interface
sec.: second
seq.: sequential
SGI: Silicon Graphics, Inc.
SIM: service information message
SMS: System Managed Storage
SNMP: simple network management protocol
SSID: storage subsystem identification
SVP: service processor
SW: short wavelength
TB: terabyte
TID: target ID
TPF: Transaction Processing Facility
TSO: Time Sharing Option (an IBM System/370 operating system option)
UCB: unit control block
UIM: unit information module
UL: Underwriters Laboratories
µm: micron, micrometer
UPS: uninterruptable power supply
VA: volt-ampere
VAC: volts AC
VDE: Verband Deutscher Elektrotechniker
VM: Virtual Machine (an IBM S/390 system control program)
VOLID: volume ID
volser: volume serial number
XRC: Extended Remote Copy (an IBM S/390 host software function)
VSE: Virtual Storage Extension (an IBM S/390 operating system)
VTOC: volume table of contents
W: watt
WLM: Workload Manager (an IBM S/390 host software function)
XA: System/370 Extended Architecture
XDF: Extended Distance Feature (for ExSA channels)
Degrees Fahrenheit per hour (F/hour) × 0.555555 = degrees centigrade per hour (C/hour)
Degrees centigrade per hour (C/hour) × 1.8 = degrees Fahrenheit per hour (F/hour)
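As a quick check of the factors above: a temperature ramp rate scales by 5/9 (about 0.555555) going from F/hour to C/hour and by 9/5 (1.8) going back; because these are temperature differences per hour, no 32-degree offset applies. A minimal sketch follows; the example rate is illustrative only, not a value from this manual.

```python
def f_per_hour_to_c_per_hour(rate_f_per_hour: float) -> float:
    """Convert a temperature change rate from F/hour to C/hour (factor 5/9)."""
    return rate_f_per_hour * (5.0 / 9.0)

def c_per_hour_to_f_per_hour(rate_c_per_hour: float) -> float:
    """Convert a temperature change rate from C/hour to F/hour (factor 1.8)."""
    return rate_c_per_hour * 1.8

# Illustrative rate only (not taken from the manual):
print(round(f_per_hour_to_c_per_hour(18.0), 2))  # 10.0 C/hour
```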