Dell EMC PowerVault MD Series Storage Arrays
Administrator's Guide
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2012 - 2018 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
2018 - 05
Rev. A05
Contents
1 Introduction.................................................................................................................................................. 12
Dell EMC PowerVault Modular Disk Storage Manager................................................................................................ 12
User interface....................................................................................................................................................................12
Enterprise management window.................................................................................................................................... 13
Inheriting the system settings................................................................................................................................... 13
Array management window.............................................................................................................................................14
Dell EMC PowerVault Modular Disk Configuration Utility........................................................................................... 15
Related documentation....................................................................................................................................................15
Disk migration............................................................................................................................................................. 23
Disk roaming............................................................................................................................................................... 24
Host server-to-virtual disk mapping........................................................................................................................ 24
Host types...................................................................................................................................................................24
Advanced features........................................................................................................................................................... 24
Types of snapshot functionality supported.............................................................................................................25
Virtual disk copy......................................................................................................................................................... 25
Virtual disk recovery.................................................................................................................................................. 26
Multi-path software.........................................................................................................................................................26
Preferred and alternate controllers and paths........................................................................................................26
Virtual disk ownership................................................................................................................................................26
Load balancing.................................................................................................................................................................. 27
Monitoring system performance.................................................................................................................................... 27
Interpreting performance monitor data...................................................................................................................28
Viewing real-time graphical performance monitor data........................................................................................ 30
Customizing the performance monitor dashboard.................................................................................................31
Specifying performance metrics...............................................................................................................................31
Viewing real-time textual performance monitor.....................................................................................................32
Saving real-time textual performance data.............................................................................................................33
Starting and stopping background performance monitor.....................................................................................33
Viewing information about the current background performance monitor session.......................................... 34
Viewing current background performance monitor data...................................................................................... 34
Saving the current background performance monitor data................................................................................. 35
Viewing saved background performance monitor data.........................................................................................35
Invalid objects in Performance Monitor...................................................................................................................36
Configuring SNMP alerts.......................................................................................................................................... 45
Battery settings................................................................................................................................................................ 47
Changing the battery settings..................................................................................................................................47
Setting the storage array RAID controller module clocks........................................................................................... 48
4 Using iSCSI..................................................................................................................................................49
Changing the iSCSI target authentication.................................................................................................................... 49
Entering mutual authentication permissions................................................................................................................ 50
Creating CHAP secrets................................................................................................................................................... 50
Initiator CHAP secret................................................................................................................................................ 50
Target CHAP secret...................................................................................................................................................50
Valid characters for CHAP secrets.......................................................................................................................... 50
Changing the iSCSI target identification........................................................................................................................51
Changing iSCSI target discovery settings..................................................................................................................... 51
Configuring the iSCSI host ports................................................................................................................................... 52
Advanced iSCSI host port settings................................................................................................................................52
Viewing or ending an iSCSI session............................................................................................................................... 53
Viewing iSCSI statistics and setting baseline statistics...............................................................................................53
Edit, remove, or rename host topology......................................................................................................................... 54
5 Event monitor..............................................................................................................................................55
Enabling or disabling event monitor...............................................................................................................................55
Windows..................................................................................................................................................................... 55
Linux............................................................................................................................................................................ 56
Changing the virtual disk modification priority.......................................................................................................66
Changing virtual disk cache settings.......................................................................................................................66
Changing segment size of virtual disk..................................................................................................................... 67
Changing the I/O type.............................................................................................................................................. 68
Thin virtual disks.............................................................................................................................................................. 68
Advantages of thin virtual disks............................................................................................................................... 68
Physical vs virtual capacity on a thin virtual disk................................................................................... 69
Thin virtual disk requirements and limitations.........................................................................................................69
Thin virtual disk attributes.........................................................................................................................................70
Thin virtual disk states...............................................................................................................................................70
Comparison—Types of virtual disks and copy services........................................................................................ 70
Rollback on thin virtual disks..................................................................................................................................... 71
Initializing a thin virtual disk....................................................................................................................................... 71
Changing a thin virtual disk to a standard virtual disk........................................................................................... 74
Utilizing unmapping for thin virtual disk...................................................................................................................74
Enabling unmap thin provisioning for thin virtual disk............................................................................................74
Choosing an appropriate physical disk type..................................................................................................................75
Physical disk security with self encrypting disk............................................................................................................75
Creating a security key.............................................................................................................................................. 76
Changing security key................................................................................................................................................77
Saving a security key................................................................................................................................................. 78
Validate security key.................................................................................................................................................. 79
Unlocking secure physical disks............................................................................................................................... 79
Erasing secure physical disks....................................................................................................................................79
Configuring hot spare physical disks..............................................................................................................................79
Hot spares and rebuild...............................................................................................................................................80
Global hot spares....................................................................................................................................................... 80
Hot spare operation....................................................................................................................................................81
Hot spare physical disk protection............................................................................................................................81
Physical disk security........................................................................................................................................................81
Enclosure loss protection................................................................................................................................................ 82
Drawer loss protection.................................................................................................................................................... 83
Host-to-virtual disk mapping.......................................................................................................................................... 83
Creating host-to-virtual disk mappings................................................................................................................... 84
Modifying and removing host-to-virtual disk mapping..........................................................................................85
Changing RAID controller ownership of the virtual disk....................................................................................... 85
Removing host-to-virtual disk mapping.................................................................................................................. 86
Changing the RAID controller module ownership of a disk group....................................................................... 86
Changing the RAID level of a disk group.................................................................................................................86
Removing a host-to-virtual disk mapping using Linux DMMP..............................................................................87
Restricted mappings........................................................................................................................................................88
Storage partitioning......................................................................................................................................................... 88
Disk group and virtual disk expansion............................................................................................................................89
Disk group expansion.................................................................................................................................................89
Virtual disk expansion................................................................................................................................................ 90
Using free capacity....................................................................................................................................................90
Using unconfigured capacity.................................................................................................................................... 90
Disk group migration.........................................................................................................................................................91
Export disk group........................................................................................................................................................91
Import disk group....................................................................................................................................................... 92
Storage array media scan................................................................................................................................................92
Changing media scan settings..................................................................................................................................93
Suspending media scan.............................................................................................................................................93
Snapshot Virtual Disk read/write properties...............................................................................................................109
Snapshot groups and consistency groups................................................................................................................... 110
Snapshot groups....................................................................................................................................................... 110
Snapshot consistency groups..................................................................................................................................110
Understanding snapshot repositories............................................................................................................................ 111
Consistency group repositories................................................................................................................................ 111
Ranking repository candidates..................................................................................................................................111
Using snapshot consistency group to a Remote Replication................................................................................111
Creating snapshot images..............................................................................................................................................112
Creating snapshot image..........................................................................................................................................112
Canceling a pending snapshot image......................................................................................................................113
Deleting snapshot image.......................................................................................................................................... 113
Scheduling snapshot images..........................................................................................................................................114
Creating a snapshot schedule..................................................................................................................................114
Editing a snapshot schedule.................................................................................................................................... 115
Performing snapshot rollbacks...................................................................................................................................... 115
Snapshot rollback limitations................................................................................................................................... 115
Starting snapshot rollback........................................................................................................................................116
Resuming a snapshot image rollback...................................................................................................................... 116
Canceling snapshot image rollback......................................................................................................................... 117
Viewing the progress of a snapshot rollback......................................................................................................... 117
Changing snapshot rollback priority........................................................................................................................ 117
Creating snapshot group................................................................................................................................................ 118
Manually creating a consistency group repository................................................................................................ 119
Changing snapshot group settings.........................................................................................................................120
Renaming a snapshot group....................................................................................................................................120
Deleting snapshot group...........................................................................................................................................121
Converting a snapshot Virtual Disk to read-write....................................................................................................... 121
Viewing associated physical components of an individual repository virtual disk................................................... 121
Creating consistency group...........................................................................................................................................122
Manually creating a consistency group repository................................................................................................123
Renaming a consistency group............................................................................................................................... 124
Deleting consistency group..................................................................................................................................... 124
Changing the settings of a consistency group......................................................................................................124
Adding a member virtual disk to a consistency group......................................................................................... 125
Removing member virtual disk from consistency group......................................................................................126
Creating a snapshot virtual disk of a snapshot image................................................................................................126
Snapshot Virtual Disk limitations.............................................................................................................................126
Creating Snapshot Virtual Disk............................................................................................................................... 127
Creating a Snapshot Virtual Disk repository..........................................................................................................128
Changing the settings of a Snapshot Virtual Disk................................................................................................129
Disabling Snapshot Virtual Disk or consistency group Snapshot Virtual Disk................................................... 129
Re-creating a Snapshot Virtual Disk or consistency group Snapshot Virtual Disk...........................................130
Renaming a Snapshot Virtual Disk or consistency group Snapshot Virtual Disk.............................................. 130
Creating consistency group Snapshot Virtual Disk..................................................................................................... 131
Manually creating a consistency group Snapshot Virtual Disk repository......................................................... 132
Disabling Snapshot Virtual Disk or consistency group Snapshot Virtual Disk................................................... 133
Re-creating a Snapshot Virtual Disk or consistency group Snapshot Virtual Disk........................................... 134
Changing the modification priority of an overall repository virtual disk............................................................. 135
Changing the media scan setting of an overall repository virtual disk...............................................................135
Changing the pre-read consistency check setting of an overall repository virtual disk...................................136
Increasing capacity of overall repository................................................................................................................ 137
Decreasing the capacity of the overall repository................................................................................................ 138
Performing revive operation....................................................................................................................................139
Ready for use.............................................................................................................................................................151
Linux host server reboot best practices.......................................................................................................................151
Important information about special partitions........................................................................................................... 152
Limitations and known issues........................................................................................................................................152
Troubleshooting.............................................................................................................................................................. 153
16 Firmware inventory................................................................................................................................... 170
Viewing the firmware inventory.................................................................................................................................... 170
1 Introduction
CAUTION: See the Safety, Environmental, and Regulatory Information document for important safety information before
following any procedures listed in this document.
The following MD Series systems are supported by the latest version of Dell PowerVault Modular Disk Storage Manager (MDSM):
• 2U MD Series systems:
– Dell PowerVault MD 3400/3420
– Dell PowerVault MD 3800i/3820i
– Dell PowerVault MD 3800f/3820f
NOTE: The Dell MD Series storage array supports up to 192 drives for the 2U arrays or 180 drives for the 4U (dense) arrays after
the installation of the Additional Physical Disk Support Premium Feature Key.
User interface
The Storage Manager screen is divided into two primary windows:
• Enterprise Management Window (EMW)—The EMW provides high-level management of multiple storage arrays. You can launch the
Array Management Windows for the storage arrays from the EMW.
• Array Management Window (AMW)—The AMW provides management functions for a single storage array.
• The title bar at the top of the window—Shows the name of the application.
• The menu bar, beneath the title bar—You can select menu options from the menu bar to perform tasks on a storage array.
• The toolbar, beneath the menu bar—You can select options in the toolbar to perform tasks on a storage array.
NOTE: By default, the toolbar and status bar are not displayed. To view the toolbar or the status bar, select View > Toolbar or
View > Status Bar.
The Devices tab has a Tree view on the left side of the window that shows discovered storage arrays, unidentified storage arrays, and the
status conditions for the storage arrays. Discovered storage arrays are managed by the MD Storage Manager. Unidentified storage arrays
are available to the MD Storage Manager but not configured for management. The right side of the Devices tab has a Table view that
shows detailed information for the selected storage array.
Inheriting the system settings
1 From the EMW, open the Inherit System Settings window in one of these ways:
• Select Tools > Inherit System Settings.
• Select the Setup tab, and under Accessibility, click Inherit System Settings.
2 Select Inherit system settings for color and font.
3 Click OK.
Array management window
The AMW enables you to manage a single storage array. Using the AMW, you can:
• Select storage array options — For example, renaming a storage array, changing a password, or enabling a background media scan.
• Configure virtual disks and disk pools from the storage array capacity, define hosts and host groups, and grant host or host group
access to sets of virtual disks called storage partitions.
• Monitor the health of storage array components and report detailed status using applicable icons.
• Perform recovery procedures for a failed logical component or a failed hardware component.
• View the Event Log for a storage array.
• View profile information about hardware components, such as RAID controller modules and physical disks.
• Manage RAID controller modules — For example, changing ownership of virtual disks or placing a RAID controller module online or
offline.
• Manage physical disks — For example, assignment of hot spares and locating the physical disk.
• Monitor storage array performance.
To start the AMW:
1 In the EMW, on the Devices tab, right-click the relevant storage array.
The context menu for the selected storage is displayed.
2 In the context menu, select Manage Storage Array.
The AMW for the selected storage array is displayed.
The AMW has the following tabs:
• Summary tab — You can view the following information about the storage array:
– Status
– Hardware
– Storage and copy services
– Hosts and mappings
– Information about storage capacity
– Premium features
• Performance tab — You can track a storage array’s key performance data and identify performance bottlenecks in your system. You
can monitor the system performance in the following ways:
– Real-time graphical
– Real-time textual
– Background (historical)
• Storage & Copy Services tab — You can view and manage the organization of the storage array by virtual disks, disk groups, free
capacity nodes, and any unconfigured capacity for the storage array.
• Host Mappings tab — You can define the hosts, host groups, and host ports. You can change the mappings to grant virtual disk
access to host groups and hosts and create storage partitions.
• Hardware tab — You can view and manage the physical components of the storage array.
• Setup tab — Shows a list of initial setup tasks for the storage array.
Dell EMC PowerVault Modular Disk Configuration Utility
NOTE: Dell EMC PowerVault Modular Disk Configuration Utility (MDCU) is supported only on MD Series storage arrays that use
the iSCSI protocol.
MDCU is an iSCSI Configuration Wizard that can be used with MD Storage Manager to simplify the configuration of iSCSI connections.
The MDCU software is available on the MD Series resource media.
Related documentation
NOTE: For all Storage documentation, go to Dell.com/powervaultmanuals and enter the system Service Tag to get your system
documentation.
• Dell EMC PowerVault MD3460/MD3860i/MD3860f Storage Arrays Getting Started Guide—Provides an overview of system features,
setting up your system, and technical specifications. This document is also shipped with your system.
• Dell EMC PowerVault MD3460/MD3860i/MD3860f Storage Arrays Owner’s Manual—Provides information about system features, troubleshooting the system, and installing or replacing system components.
• Rack Installation Instructions—Describes installing your system into a rack. This document is also shipped with your rack solution.
• Dell EMC PowerVault MD Series Storage Arrays Administrator's Guide—Provides information about configuring and managing the
system by using the MDSM GUI.
• Dell EMC PowerVault MD 34XX/38XX Series Storage Arrays CLI Guide—Provides information about configuring and managing the
system using the MDSM CLI.
• Dell EMC PowerVault MD3460/MD3860i/MD3860f Storage Arrays Deployment Guide—Provides information about deploying the
storage system in the SAN architecture.
• Dell EMC PowerVault MD 34xx and 38xx Series Support Matrix—Provides information about the software and hardware compatibility
matrices for the storage array.
2 About your MD Series storage array
This chapter describes storage array concepts that help you configure and operate the Dell MD Series storage arrays.
Virtual disks and disk groups
You can create disk groups from unconfigured capacity on your storage array.
A virtual disk is a partition in a disk group that is made up of contiguous data segments of the physical disks in the disk group. A virtual disk
consists of data segments from all physical disks in the disk group.
All virtual disks in a disk group support the same RAID level. The storage array supports up to 255 virtual disks (minimum size of 10 MB
each) that can be assigned to host servers. Each virtual disk is assigned a Logical Unit Number (LUN) that is recognized by the host
operating system.
Virtual disks and disk groups are set up according to how you plan to organize your data. For example, you can have one virtual disk for
inventory, a second virtual disk for financial and tax information, and so on.
Physical disks
Only Dell EMC supported physical disks are supported in the storage array. If the storage array detects unsupported physical disks, it marks
the disk as unsupported and the physical disk becomes unavailable for all operations.
For the list of supported physical disks, see the Support Matrix at Dell.com/support/manuals.
The following list describes each physical disk status and the modes in which it can occur:
• Optimal (Unassigned): The physical disk in the indicated slot is unused and available to be configured.
• Optimal (Hot Spare Standby): The physical disk in the indicated slot is configured as a hot spare.
• Failed (Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby): The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect physical disk type or physical disk size, or its operational state being set to failed.
• Replaced (Assigned): The physical disk in the indicated slot has been replaced and is ready to be, or is actively being, configured into a disk group.
• Pending Failure (Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby): A Self-Monitoring Analysis and Reporting Technology (SMART) error has been detected on the physical disk in the indicated slot.
• Offline (Not applicable): The physical disk has either been spun down or had a rebuild ended by user request.
• Identify (Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby): The physical disk is being identified.
NOTE: Host server access must be created before mapping virtual disks.
Disk groups are always created in the unconfigured capacity of a storage array. Unconfigured capacity is the available physical disk space
not already assigned in the storage array.
Virtual disks are created within the free capacity of a disk group. Free capacity is the space in a disk group that has not been assigned to a
virtual disk.
The following list describes each virtual disk state:
• Optimal: The virtual disk contains physical disks that are online.
• Degraded: The virtual disk with a redundant RAID level contains an inaccessible physical disk. The system can still function properly, but performance may be affected and further disk failures may result in data loss.
• Offline: A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
• Force online: The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state.
Disk pools
Disk pooling allows you to distribute data from each virtual disk randomly across a set of physical disks. Although there is no limit on the
maximum number of physical disks that can comprise a disk pool, each disk pool must have a minimum of 11 physical disks. Additionally, the
disk pool cannot contain more physical disks than the maximum limit for each storage array.
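To make the sizing rule concrete, here is a minimal Python sketch that validates a proposed disk pool against the 11-disk minimum described above. The function name, the array_max_disks parameter, and the example limit of 192 disks are illustrative assumptions, not part of MD Storage Manager.

```python
def validate_disk_pool(num_disks: int, array_max_disks: int) -> None:
    """Raise ValueError if a proposed disk pool violates the sizing rules above."""
    MIN_POOL_DISKS = 11  # each disk pool needs at least 11 physical disks
    if num_disks < MIN_POOL_DISKS:
        raise ValueError(f"disk pool needs at least {MIN_POOL_DISKS} disks, got {num_disks}")
    if num_disks > array_max_disks:
        raise ValueError(f"disk pool exceeds the array limit of {array_max_disks} disks")

validate_disk_pool(12, array_max_disks=192)  # OK for a 2U array with the premium key
```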
RAID levels
RAID levels determine the way in which data is written to physical disks. Different RAID levels provide different levels of accessibility,
consistency, and capacity.
Using multiple physical disks has the following advantages over using a single physical disk:
• Placing data on multiple physical disks (striping) allows input/output (I/O) operations to occur simultaneously, improving performance.
• Storing redundant data on multiple physical disks using replication or consistency supports reconstruction of lost data if an error occurs,
even if that error is the failure of a physical disk.
Each RAID level provides different performance and protection. You must select a RAID level based on the type of application, access, fault
tolerance, and data you are storing.
The storage array supports RAID levels 0, 1, 5, 6, and 10. The maximum and minimum number of physical disks that can be used in a disk
group depends on the RAID level:
RAID 1
RAID 1 uses disk replication so that data written to one physical disk is simultaneously written to another physical disk. RAID 1 offers fast
performance and the best data availability, but also the highest disk overhead. RAID 1 is recommended for small databases or other
applications that do not require large capacity. For example, accounting, payroll, or financial applications. RAID 1 provides full data
consistency.
RAID 5
RAID 5 uses consistency and striping data across all physical disks (distributed consistency) to provide high data throughput and data
consistency, especially for small random access. RAID 5 is a versatile RAID level and is suited for multi-user environments where typical I/O
size is small and there is a high proportion of read activity such as file, application, database, web, e-mail, news, and intranet servers.
RAID 6
RAID 6 is similar to RAID 5 but provides an additional consistency disk for better protection. RAID 6 is the most versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity. RAID 6 is recommended when large-capacity physical disks are used or when a large number of physical disks are used in a disk group.
RAID 10
CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 10 configuration, even if the premium feature is activated on your storage array. Exceeding the 120-physical disk limit may cause your storage array to be unstable.
RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across replicated disks. It provides high data throughput and complete data consistency. An even number of physical disks (four or more) is required to create a RAID level 10 disk group or virtual disk. Because RAID levels 1 and 10 use disk replication, half of the capacity of the physical disks is used for replication. This leaves the remaining half of the physical disk capacity for actual storage. RAID 10 is automatically used when a RAID level of 1 is chosen with four or more physical disks.
RAID 10 works well for medium-sized databases or any environment that requires high performance and fault tolerance and moderate-to-
medium capacity.
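As a rough illustration of the capacity trade-offs described for these RAID levels, the following Python sketch estimates usable capacity from the disk count and disk size. The function is hypothetical and ignores metadata overhead; it simply encodes the rules stated above (half the capacity lost to replication for RAID 1/10, one disk's worth of consistency data for RAID 5, two for RAID 6).

```python
def usable_capacity(raid_level: int, num_disks: int, disk_size_gb: float) -> float:
    """Estimate usable capacity of a disk group, ignoring metadata overhead."""
    if raid_level == 0:
        return num_disks * disk_size_gb
    if raid_level in (1, 10):
        return num_disks * disk_size_gb / 2   # half the capacity used for replication
    if raid_level == 5:
        return (num_disks - 1) * disk_size_gb  # one disk's worth of consistency data
    if raid_level == 6:
        return (num_disks - 2) * disk_size_gb  # two disks' worth of consistency data
    raise ValueError("unsupported RAID level")

print(usable_capacity(10, 8, 900))  # 8 x 900 GB disks in RAID 10 -> 3600 GB usable
```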
Segment size
Disk striping enables data to be written across multiple physical disks. Disk striping enhances performance because striped disks are
accessed simultaneously.
Stripe width, or depth, refers to the number of disks involved in an array where striping is implemented. For example, a 4-disk group with
disk striping has a stripe width of four.
NOTE: Although disk striping delivers excellent performance, striping alone does not provide data consistency.
Consistency check
A consistency check verifies the correctness of data in a redundant array—RAID levels 1, 5, 6, and 10. For example, in a system with parity,
checking consistency involves computing the data on one physical disk and comparing the results to the contents of the parity physical
disk.
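As an illustration of the comparison just described, this minimal Python sketch recomputes the consistency (parity) block as the XOR of the data blocks and compares it with the stored block. The function is hypothetical and assumes equal-length blocks; the actual firmware operation is more involved.

```python
from functools import reduce

def consistency_ok(data_blocks: list[bytes], parity_block: bytes) -> bool:
    """Recompute parity as the XOR of all data blocks and compare with the stored parity."""
    computed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks)
    return computed == parity_block

# Two data blocks whose XOR is 0xff 0xff, matching the stored parity block:
print(consistency_ok([b"\x0f\x0f", b"\xf0\xf0"], b"\xff\xff"))  # True
```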
A consistency check is similar to a background initialization. The difference is that background initialization cannot be started or stopped
manually, while consistency check can.
NOTE: It is recommended that you run data consistency checks on a redundant array at least once a month. This data
consistency check allows detection and automatic replacement of unreadable sectors. Finding an unreadable sector during a
rebuild of a failed physical disk is a serious problem because the system does not have the consistency to recover the data.
Media verification
Another background task performed by the storage array is media verification of all configured physical disks in a disk group. The storage
array uses the Read operation to perform verification on the space configured in virtual disks and the space reserved for the metadata.
Cycle time
The media verification operation runs only on selected disk groups, independent of other disk groups. Cycle time is the time taken to
complete verification of the metadata region of the disk group and all virtual disks in the disk group for which media verification is
configured. The next cycle for a disk group starts automatically when the current cycle completes. You can set the cycle time for a media
verification operation between 1 and 30 days. The storage controller throttles the media verification I/O accesses to disks based on the
cycle time.
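For a sense of what throttling to a cycle time implies, this small, hypothetical Python sketch estimates the average verification rate the controller must sustain to cover a disk group within the configured cycle time. The function name and units are assumptions for illustration.

```python
def required_scan_rate_mb_s(group_capacity_gb: float, cycle_days: int) -> float:
    """Average verification rate needed to cover a disk group within the cycle time."""
    if not 1 <= cycle_days <= 30:
        raise ValueError("cycle time must be between 1 and 30 days")
    seconds = cycle_days * 24 * 3600
    return group_capacity_gb * 1024 / seconds  # GB -> MB, spread over the cycle

# A 10 TB disk group verified over the maximum 30-day cycle:
print(f"{required_scan_rate_mb_s(10_000, 30):.2f} MB/s")  # ~3.95 MB/s on average
```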
The storage array tracks the cycle for each disk group independent of other disk groups on the RAID controller and creates a checkpoint. If the media verification operation on a disk group is preempted or blocked by another operation on the disk group, the storage array resumes the operation from that checkpoint after the interruption.
The maximum number of active, concurrent virtual disk processes per RAID controller module is four. This limit applies to the following virtual disk processes:
• Background initialization
• Foreground initialization
• Consistency check
• Rebuild
• Copy back
If a redundant RAID controller module fails with existing virtual disk processes, the processes on the failed controller are transferred to the
peer controller. A transferred process is placed in a suspended state if there are four active processes on the peer controller. The
suspended processes are resumed on the peer controller when the number of active processes falls below four.
When considering a segment size change, two scenarios illustrate different approaches to the limitations:
• If I/O activity stretches beyond the segment size, you can increase it to reduce the number of disks required for a single I/O. Using a
single physical disk for a single request frees disks to service other requests, especially when you have multiple users accessing a
database or storage environment.
• If you use the virtual disk in a single-user, large I/O environment (such as for multimedia application storage), performance can be
optimized when a single I/O request is serviced with a single data stripe (the segment size multiplied by the number of physical disks in
the disk group used for data storage). In this case, multiple disks are used for the same request, but each disk is only accessed once.
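The second scenario above turns on the size of a full data stripe. The following minimal sketch, with hypothetical names, shows the arithmetic: a full stripe equals the segment size multiplied by the number of data disks in the disk group.

```python
def data_stripe_size_kb(segment_size_kb: int, data_disks: int) -> int:
    """A full data stripe is the segment size multiplied by the number of data disks."""
    return segment_size_kb * data_disks

# A 128 KB segment on a RAID 5 disk group of five disks (four hold data):
print(data_stripe_size_kb(128, 4))  # 512 KB: one large I/O touches each disk once
```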
If a redundant RAID controller module fails with an existing disk group process, the process on the failed controller is transferred to the peer
controller. A transferred process is placed in a suspended state if there is an active disk group process on the peer controller. The
suspended processes are resumed when the active process on the peer controller completes or is stopped.
NOTE: If you try to start a disk group process on a controller that does not have an existing active process, the start attempt
fails if the first virtual disk in the disk group is owned by the other controller and there is an active process on the other
controller.
The maximum number of active, concurrent disk group processes per RAID controller module is one. This limit applies to the following disk group processes:
• Background initialization
• Rebuild
• Copy back
• Virtual disk capacity expansion
• RAID level migration
• Segment size migration
• Disk group expansion
• Disk group defragmentation
The priority of each of these operations can be changed to address performance requirements of the environment in which the operations
are to be executed.
Disk migration
You can move virtual disks from one array to another without taking the target array offline. However, the disk group being migrated must
be offline before performing the disk migration. If the disk group is not offline before migration, the source array holding the physical and
virtual disks within the disk group marks them as missing. However, the disk groups themselves migrate to the target array.
An array can import a virtual disk only if it is in an optimal state. You can move virtual disks that are part of a disk group only if all members
of the disk group are being migrated. The virtual disks automatically become available after the target array has finished importing all the
disks in the disk group.
When you migrate disk groups and virtual disks from:
• One MD storage array to another MD storage array of the same type (for example, from an MD3460 storage array to another MD3460 storage array), the MD storage array you migrate to recognizes any data structures and metadata you had in place on the migrating MD storage array.
• Any storage array different from the MD storage array you migrate to (for example, from an MD3460 storage array to an MD3860i
storage array), the receiving storage array (MD3860i storage array in the example) does not recognize the migrating metadata and that
data is lost. In this case, the receiving storage array initializes the physical disks and marks them as unconfigured capacity.
NOTE: Only disk groups and associated virtual disks with all member physical disks present can be migrated from one storage
array to another. It is recommended that you only migrate disk groups that have all their associated member virtual disks in an
optimal state.
NOTE: The number of physical disks and virtual disks that a storage array supports limits the scope of the migration.
Use either of the following methods to move disk groups and virtual disks:
• Hot virtual disk migration—Disk migration with the destination storage array power turned on.
• Cold virtual disk migration—Disk migration with the destination storage array power turned off.
NOTE: To ensure that the migrating disk groups and virtual disks are correctly recognized when the target storage array has an
existing physical disk, use hot virtual disk migration.
When attempting virtual disk migration, follow these recommendations:
• Moving physical disks to the destination array for migration—When inserting physical disks into the destination storage array during hot
virtual disk migration, wait for the inserted physical disk to be displayed in the MD Storage Manager, or wait for 30 seconds (whichever
occurs first), before inserting the next physical disk.
NOTE: Without the interval between physical disk insertions, the storage array may become unstable and manageability may
be temporarily lost.
• Migrating virtual disks from multiple storage arrays into a single storage array—When migrating virtual disks from multiple or different
storage arrays into a single destination storage array, move all the physical disks from the same storage array as a set into the new
destination storage array. Ensure that all the physical disks from a storage array are migrated to the destination storage array before
starting migration from the next storage array.
NOTE: If the physical disk modules are not moved as a set to the destination storage array, the newly relocated disk groups
may not be accessible.
• Migrating virtual disks to a storage array with no existing physical disks—Turn off the destination storage array when migrating disk groups or a complete set of physical disks from one storage array to another storage array that has no existing physical disks.
NOTE: Disk groups from multiple storage arrays must not be migrated at the same time to a storage array that has no existing
physical disks. Use cold virtual disk migration for the disk groups from one storage array.
• Enabling premium features before migration—Before migrating disk groups and virtual disks, enable the required premium features on
the destination storage array. If a disk group is migrated from a storage array that has a premium feature enabled and the destination
array does not have this feature enabled, an Out of Compliance error message can be generated.
Disk roaming
You can move physical disks within an array. The RAID controller module automatically recognizes the relocated physical disks and logically
places them in the proper virtual disks that are part of the disk group. Disk roaming is permitted when the RAID controller module is either
online or powered off.
NOTE: The disk group must be exported before moving the physical disks.
Host server-to-virtual disk mapping
The following rules apply when mapping virtual disks to host servers:
• You can define one host server-to-virtual disk mapping for each virtual disk in the storage array.
• Host server-to-virtual disk mappings are shared between RAID controller modules in the storage array.
• A unique LUN must be used by a host group or host server to access a virtual disk (illustrated in the sketch after this list).
• Not every operating system has the same number of LUNs available for use.
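To illustrate the unique-LUN rule from the list above, here is a small, hypothetical Python sketch that records host-to-virtual disk mappings and rejects a duplicate LUN for the same host or host group. The data structure and function are illustrative only; MD Storage Manager enforces this rule itself.

```python
def add_mapping(mappings: dict, host: str, lun: int, virtual_disk: str) -> None:
    """Record a host-to-virtual disk mapping, enforcing one unique LUN per host."""
    host_luns = mappings.setdefault(host, {})
    if lun in host_luns:
        raise ValueError(f"LUN {lun} on {host} already maps {host_luns[lun]}")
    host_luns[lun] = virtual_disk

mappings: dict = {}
add_mapping(mappings, "host1", 0, "vd_inventory")
add_mapping(mappings, "host1", 1, "vd_finance")
# add_mapping(mappings, "host1", 1, "vd_payroll")  # would raise: LUN 1 is taken
```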
Host types
A host server is a server that accesses a storage array. Host servers are mapped to the virtual disks and use one or more iSCSI initiator
ports. Host servers have the following attributes:
NOTE: This host group is a logical entity you can create in the MD Storage Manager. All host servers in a host group must be
running the same operating system.
• Host type — The operating system running on the host server.
Advanced features
The RAID enclosure supports several advanced features:
NOTE: The premium features listed must be enabled separately. If you have purchased these features, an activation card is
supplied that contains instructions for enabling this functionality.
• Snapshot Virtual Disks using multiple point-in-time (PiT) groups — This feature also supports snapshot groups, snapshot images, and
consistency groups.
To create a snapshot image, you must first create a snapshot group and reserve snapshot repository space for the virtual disk. The
repository space is based on a percentage of the current virtual disk reserve.
You can delete the oldest snapshot image in a snapshot group either manually or you can automate the process by enabling the Auto-
Delete setting for the snapshot group. When a snapshot image is deleted, its definition is removed from the system, and the space
occupied by the snapshot image in the repository is released and made available for reuse within the snapshot group.
You can use the virtual disk copy feature to:
• Back up data.
• Copy data from disk groups that use smaller-capacity physical disks to disk groups using greater capacity physical disks.
• Restore snapshot virtual disk data to the source virtual disk.
Virtual disk copy generates a full copy of data from the source virtual disk to the target virtual disk in a storage array.
• Source virtual disk—When you create a virtual disk copy, a copy pair consisting of a source virtual disk and a target virtual disk is
created on the same storage array. When a virtual disk copy is started, data from the source virtual disk is copied completely to the
target virtual disk.
• Target virtual disk—When you start a virtual disk copy, the target virtual disk maintains a copy of the data from the source virtual disk.
You can choose whether to use an existing virtual disk or create a new virtual disk as the target virtual disk. If you choose an existing
virtual disk as the target, all data on the target is overwritten. A target virtual disk can be a standard virtual disk or the source virtual
disk of a failed or disabled snapshot virtual disk.
NOTE: The target virtual disk capacity must be equal to or greater than the source virtual disk capacity.
When you begin the disk copy process, you must define the rate at which the copy is completed. Giving the copy process top priority slightly impacts I/O performance, while giving it the lowest priority makes the copy process take longer to complete. You can modify the copy priority while the disk copy is in progress.
Multi-path software
Multi-path software (also referred to as the failover driver) is the software resident on the host server that provides management of the
redundant data path between the host server and the storage array. For the multi-path software to correctly manage a redundant path, the
configuration must have redundant iSCSI connections and cabling.
The multi-path software identifies the existence of multiple paths to a virtual disk and establishes a preferred path to that disk. If any
component in the preferred path fails, the multi-path software automatically reroutes I/O requests to the alternate path so that the storage
array continues to operate without interruption.
NOTE: Multi-path software is available on the MD Series storage arrays resource DVD.
Failover to the alternate RAID controller module occurs when a RAID controller module is:
• Physically removed
• Updating firmware
• Involved in an event that caused failover to the alternate controller
Paths used by the preferred RAID controller module to access either the disks or the host server are called the preferred paths; redundant
paths are called the alternate paths. If a failure causes the preferred path to become inaccessible, the storage array automatically uses the
alternate path to access data, and the enclosure status LED blinks amber.
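The preferred/alternate behavior can be pictured with a short, hypothetical sketch: use the preferred path while it is healthy, otherwise fall back to an alternate. The path dictionaries and field names are assumptions for illustration; the real failover driver works at the I/O layer.

```python
def select_path(paths: list[dict]) -> dict:
    """Use the preferred path while it is healthy; otherwise fail over to an alternate."""
    healthy = [p for p in paths if p["healthy"]]
    if not healthy:
        raise RuntimeError("no path to the storage array")
    preferred = [p for p in healthy if p["preferred"]]
    return preferred[0] if preferred else healthy[0]

paths = [
    {"name": "preferred", "preferred": True, "healthy": False},  # failed component
    {"name": "alternate", "preferred": False, "healthy": True},
]
print(select_path(paths)["name"])  # alternate
```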
When balancing the I/O load between RAID controller modules, consider the following:
• The impact each virtual disk has on other virtual disks in the same disk group.
• The patterns of usage for each virtual disk.
Load balancing
A load balance policy is used to determine which path is used to process I/O. Multiple options for setting the load balance policies let you
optimize I/O performance when mixed host interfaces are configured.
You can choose one of these load balance policies to optimize I/O performance:
• Round-robin with subset — The round-robin with subset I/O load balance policy routes I/O requests, in rotation, to each available data
path to the RAID controller module that owns the virtual disks. This policy treats all paths to the RAID controller module that owns the
virtual disk equally for I/O activity. Paths to the secondary RAID controller module are ignored until ownership changes. The basic
assumption for the round-robin policy is that the data paths are equal. With mixed host support, the data paths may have different
bandwidths or different data transfer speeds.
• Least queue depth with subset — The least queue depth with subset policy is also known as the least I/Os or least requests policy.
This policy routes the next I/O request to a data path that has the least outstanding I/O requests queued. For this policy, an I/O
request is simply a command in the queue. The type of command or the number of blocks that are associated with the command are
not considered. The least queue depth with subset policy treats large block requests and small block requests equally. The data path
selected is one of the paths in the path group of the RAID controller module that owns the virtual disk.
• Least path weight with subset (Windows operating systems only) — The least path weight with subset policy assigns a weight factor
to each data path to a virtual disk. An I/O request is routed to the path with the lowest weight value to the RAID controller module
that owns the virtual disk. If more than one data path to the virtual disk has the same weight value, the round-robin with subset
policy is used to route I/O requests between the paths with the same weight value. (A sketch of the subset selection logic follows
this list.)
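To illustrate how these subset policies choose a path, here is a minimal Python sketch of least queue depth with subset. This is an illustration only, not MD Storage Manager code; the Path structure, names, and counters are hypothetical.

from dataclasses import dataclass

@dataclass
class Path:
    name: str                 # data path identifier
    owning_controller: str    # RAID controller module this path reaches
    outstanding_ios: int      # I/O requests currently queued on this path

def select_path(paths, owning_controller):
    # Only paths to the RAID controller module that owns the virtual disk
    # are candidates (the "subset"); alternate-controller paths are ignored.
    subset = [p for p in paths if p.owning_controller == owning_controller]
    # Route the next request to the path with the fewest queued I/Os; the
    # command type and block count are not considered.
    return min(subset, key=lambda p: p.outstanding_ios)

paths = [Path("port-0", "module-0", 4), Path("port-1", "module-0", 1), Path("port-2", "module-1", 0)]
print(select_path(paths, "module-0").name)  # port-1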
The Performance Monitor lets you:
• View in real time the values of the data collected for a monitored device. This capability helps you to determine if the device is
experiencing any problems.
• Identify when a problem started or what caused a problem by seeing a historical view of a monitored device.
• Specify the performance metric and the objects that you want to monitor.
• View data in tabular format (actual values of the collected metrics) or graphical format (as line graphs), or export the data to a file.
These are the specific characteristics of each type of performance monitoring:

Real-time graphical — Sampling interval: 5 sec. Length of time displayed: 5 min rolling window. Maximum number of objects displayed: 5. Ability to save data: No. How monitoring starts and stops: starts automatically when the AMW opens; stops automatically when the AMW closes.

Real-time textual — Sampling interval: 5-3600 sec. Length of time displayed: most current value. Maximum number of objects displayed: no limit. Ability to save data: Yes. How monitoring starts and stops: starts and stops manually; also stops when the AMW closes.
• Each time the sampling interval elapses, the Performance Monitor queries the storage array again and updates the data. The impact to
storage array performance is minimal.
• The background monitoring process samples and stores data for a seven-day time period. If a monitored object changes during this
time, the object does not have a complete set of data points spanning the full seven days. For example, virtual disk sets can change as
virtual disks are created, deleted, mapped, or unmapped, or physical disks can be added, removed, or can fail.
• Performance data is collected and displayed only for an I/O host visible (mapped) virtual disk, a snapshot group repository virtual disk,
and a consistency group repository virtual disk. Data for a replication repository virtual disk is not collected.
• The values reported for a RAID controller module or storage array might be greater than the sum of the values reported for all the
virtual disks. The values reported for a RAID controller module or storage array include both host I/Os and I/Os internal to the storage
array (metadata reads and writes), whereas the values reported for a virtual disk include only host I/O.
The following guidelines can help you interpret some of the key metrics:

Total I/Os — You might want to monitor the workload across the storage array. Monitor the Total I/Os in the background performance
monitor. If the workload continues to increase over time while application performance decreases, you might need to add additional
storage arrays. By adding storage arrays to your enterprise, you can spread the workload so that application performance does not
degrade.

IOs/sec — The higher the cache hit rate, the higher the I/O rates will be. Higher write I/O rates are experienced with write caching
enabled compared to disabled. In deciding whether to enable write caching for an individual virtual disk, look at the current IOPS and the
maximum IOPS. You should see higher rates for sequential I/O patterns than for random I/O patterns. Regardless of your I/O pattern,
enable write caching to maximize the I/O rate and to shorten the application response time. For more information about read/write
caching and performance, see the related topics listed at the end of this topic.
I/O Latency, ms — Latency is useful for monitoring the I/O activity of a specific physical disk and a specific virtual disk and can help you
identify physical disks that are bottlenecks.
Physical disk type and speed influence latency. With random I/O,
faster spinning physical disks spend less time moving to and from
different locations on the disk.
Larger I/Os have greater latency due to the additional time involved
with transferring data.
Cache Hit Percentage — A higher cache hit percentage is desirable for optimal application performance. A positive correlation exists
between the cache hit percentage and the I/O rates.
The cache hit percentage of all of the virtual disks might be low or
trending downward. This trend might indicate inherent randomness
in access patterns. In addition, at the storage array level or the RAID
controller module level, this trend might indicate the need to install
more RAID controller module cache memory if you do not have the
maximum amount of memory installed.
A real-time performance monitor graph plots a single performance metric over time for up to five objects. The x-axis of the graph
represents time. The y-axis of the graph represents the metric value. When the metric value exceeds 99,999, it displays in thousands (K),
beginning with 100K until the number reaches 9999K, at which time it displays in millions (M). For amounts greater than 9999K but less
than 100M, the value displays in tenths (for example, 12.3M).
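As a minimal sketch of this scaling rule (an illustration only, not the actual rendering logic), the y-axis label could be derived as follows:

def format_metric(value):
    # Values up to 99,999 are shown as-is.
    if value <= 99_999:
        return str(value)
    # 100K through 9999K are shown in thousands.
    if value < 10_000_000:
        return f"{value // 1000}K"
    # Larger values are shown in tenths of millions, for example 12.3M.
    return f"{value / 1_000_000:.1f}M"

print(format_metric(12_300_000))  # 12.3M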
1 To view the dashboard, in the Array Management Window (AMW), click the Performance tab.
The Performance tab opens showing six graphs.
2 To view a single performance graph, in the Array Management Window (AMW), select Monitor > Health > Monitor Performance >
Real-time performance monitor > View graphical.
The View Real-time Graphical Performance Monitor dialog opens.
3 In the Select metric drop-down list, select the performance data that you want to view.
You can select only one metric.
4 In the Select an object(s) list, select the objects for which you want to view performance data. You can select up to five objects to
monitor on one graph.
Use Ctrl-Click and Shift-Click to select multiple objects. Each object is plotted as a separate line on the graph.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another
line.
5 When you are done viewing the performance graph, click Close.
5 To save the changed portlet to the dashboard, click Save to Dashboard, and then click OK.
The Save to Dashboard option is not available if you did not make any changes, if both a metric and an object are not selected, or if
the dialog was not invoked from a portlet on the dashboard.
The dashboard on the Performance tab updates with the new portlet.
6 To close the dialog, click Cancel.
NOTE: A kilobyte is equal to 1024 bytes, and a megabyte is equal to 1024 x 1024 bytes. Some applications calculate kilobytes
as 1,000 bytes and megabytes as 1,000,000 bytes. The numbers reported by the monitor might be lower by this difference.
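For example, the two conventions the NOTE describes diverge like this (simple arithmetic, not monitor output):

bytes_transferred = 1_048_576
print(bytes_transferred / (1024 * 1024))  # 1.0 MB under the binary convention the monitor uses
print(bytes_transferred / 1_000_000)      # 1.048576 MB under the decimal convention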
• I/O Latency – The time it takes for an I/O request to complete, in milliseconds. For physical disks, I/O latency includes seek, rotation,
and transfer time.
• Cache Hit Percentage – The percentage of total I/Os that are processed with data from the cache rather than requiring I/O from disk.
Includes read requests that find all the data in the cache and write requests that cause an overwrite of cache data before it has been
committed to disk.
• SSD Cache Hit Percentage – The percentage of read I/Os that are processed with data from the SSD physical disks.
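As a minimal sketch of how a cache hit percentage is derived (the counter names here are hypothetical, not the MDSM data model):

cache_hits = 8_200   # I/Os satisfied from cache in the sampling window
total_ios = 10_000   # all I/Os in the same window
print(f"Cache Hit Percentage: {100.0 * cache_hits / total_ios:.1f}%")  # 82.0%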
The metrics available include the current value, minimum value, maximum value, and average value. The current value is the most recent
data point collected. The minimum, maximum, and average values are determined based on the start of performance monitoring. For real-
time performance monitoring, the start is when the Array Management Window (AMW) opened. For background performance monitoring,
the start is when background performance monitoring started.
Performance metrics at the storage array level are the sum of metrics on the RAID controller modules. Metrics for the RAID controller
module and disk group are computed by aggregating the data retrieved for each virtual disk at the disk group or owning RAID controller
module level.
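A minimal sketch of that aggregation, using hypothetical per-virtual-disk IOPS values (host I/O only; the internal metadata I/O mentioned earlier is not represented here):

virtual_disk_iops = {"vd1": 1200, "vd2": 800, "vd3": 450}
disk_groups = {"dg0": ["vd1", "vd2"], "dg1": ["vd3"]}
dg_iops = {dg: sum(virtual_disk_iops[vd] for vd in members)
           for dg, members in disk_groups.items()}
print(dg_iops)  # {'dg0': 2000, 'dg1': 450}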
On a performance monitor graph, you can specify one metric and up to five objects. Not all metrics apply to all objects.
Metric        Storage Array   RAID Controller Modules   Virtual Disks   Snapshot Virtual Disks   Thin Virtual Disks   Disk Groups or Disk Pools   Physical Disks
Total I/Os    X               X                         X               X                        X                    X                           –
IOs/sec       X               X                         X               X                        X                    X                           –
MBs/sec       X               X                         X               X                        X                    X                           –
I/O Latency   –               –                         X               X                        X                    –                           X
Cache hit %   X               X                         X               X                        X                    X                           –
NOTE: For an accurate elapsed time, do not use the Synchronize RAID controller module Clocks option while using
Performance Monitor. If you do, it is possible for the elapsed time to be negative.
7 To stop collecting performance data, click Stop, and then click Close.
NOTE: When you close the EMW, you might be monitoring more than one storage array. Performance data is not saved for
any storage array that is in the Unresponsive state.
A dialog is displayed asking you whether you want to save the performance data.
6 Indicate whether you want to save the current Performance Monitor data:
• Yes — Click Yes, select a directory, enter a filename, and then click Save.
• No — Click No.
7 To close the View Current Background Performance Monitor dialog, click Close.
NOTE: For an accurate elapsed time, do not use the Synchronize RAID controller module Clocks option while using
Performance Monitor. If you do, it is possible for the elapsed time to be negative.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line. If you perform the View
Current option before the first sampling interval elapses (10 minutes), the graph shows that it is initializing.
5 (Optional) To change the time period plotted on the graph, make selections in the Start Date, Start Time, End Date, and End Time
fields.
6 To close the dialog, click Close.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another
line.
7 (Optional) To change the time period plotted on the graph, make selections in the Start Date, Start Time, End Date, and End Time
drop-down lists.
8 To close the dialog, click Close.
If the invalid object represents a deleted object, its performance graph no longer updates. When this event happens, you should redefine
the graph to monitor a valid object.
Invalid objects can be caused by a number of factors.
It is possible to have two objects with the same name. Two virtual disks can have the same name if you delete a virtual disk and then later
create another virtual disk with the same name. The original virtual disk’s name contains an asterisk indicating that the virtual disk no longer
exists. The new virtual disk has the same name, but without an asterisk. Two physical disks can have the same name if you replace a
physical disk. The original physical disk’s name contains an asterisk indicating that it is invalid and no longer exists. The new physical disk
has the same name without an asterisk.
The Enterprise Management Window (EMW) is the first page that loads when you open the Modular Disk Storage Manager (MDSM) and
it allows you to discover, connect to, and manage MD3 storage arrays through in-band and out-of-band connectivity.
The indented storage array names are arrays that have been discovered; when you select an array, you can manage that array.
Topics:
• Out-of-band management
• In-band management
• Storage arrays
• Setting up your storage array
• Configuring alert notifications
• Battery settings
• Setting the storage array RAID controller module clocks
Out-of-band management
In the out-of-band management method, data is separate from commands and events. Data travels through the host-to-controller
interface, while commands and events travel through the management port Ethernet cables.
This management method lets you configure the maximum number of virtual disks that are supported by your operating system and host
adapters.
A maximum of eight storage management stations can concurrently monitor an out-of-band managed storage array. This limit does not
apply to systems that manage the storage array through the in-band management method.
When you use out-of-band management, you must set the network configuration for each RAID controller module’s management Ethernet
port. This includes the Internet Protocol (IP) address, subnetwork mask (subnet mask), and gateway. If you are using a Dynamic Host
Configuration Protocol (DHCP) server, you can enable automatic network configuration, but if you are not using a DHCP server, you must
enter the network configuration manually.
NOTE: RAID controller module network configurations can be assigned using a DHCP server with the default setting. However, if
a DHCP server is not available for 150 seconds, the RAID controller modules assign static IP addresses.
• For 60-disk arrays, the left-most ports labeled MGMT are used. The default addresses assigned are 192.168.128.101 for controller 0
and 192.168.128.102 for controller 1.
• For 12- or 24-disk arrays, the right-most ports labeled MGMT are used. The default addresses assigned are 192.168.129.101 for
controller 0 and 192.168.129.102 for controller 1.
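Before changing these defaults, you may want to confirm that your management station sits on the same subnet as the factory-default addresses. A quick standard-library sketch (the station address used here is an assumed example):

import ipaddress

defaults = ["192.168.128.101", "192.168.128.102",
            "192.168.129.101", "192.168.129.102"]
station = ipaddress.ip_interface("192.168.128.50/24")  # example station address
for ip in defaults:
    print(ip, "on local subnet:", ipaddress.ip_address(ip) in station.network)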
NOTE: For detailed information about setting up in-band and out-of-band management see your system’s Deployment Guide at
Dell.com/support/manuals.
When you add storage arrays by using this management method, specify only the host name or IP address of the host. After you add the
specific host name or IP address, the host-agent software automatically detects any storage arrays that are connected to that host.
NOTE: Some operating systems can be used only as storage management stations. For more information about the operating
system that you are using, see the MD PowerVault Support Matrix at Dell.com/support/manuals.
Storage arrays
You must add the storage arrays to the MD Storage Manager before you can set up the storage array for optimal use. You can add
storage arrays automatically, by using the Automatic Discovery option, or manually, by specifying the array or host address.
NOTE: Verify that your host or management station network configuration—including station IP address, subnet mask, and
default gateway—is correct before adding a storage array using the Automatic option.
NOTE: For Linux, set the default gateway so that broadcast packets are sent to 255.255.255.0. For Red Hat Enterprise Linux, if
no gateway exists on the network, set the default gateway to the IP address of the NIC.
NOTE: The MD Storage Manager uses TCP/UDP port 2463 for communication to the MD storage array.
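A quick way to verify that this port is reachable from a management station is a plain TCP connection test. This is a hedged sketch: replace the address with your array's management IP, and note that the UDP side of the port is not exercised here.

import socket

def tcp_port_open(host, port=2463, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_port_open("192.168.128.101"))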
NOTE: The Automatic Discovery option and the Rescan Hosts option in the EMW provide automatic methods for discovering
managed storage arrays.
To add an in-band storage array, add the host through which the storage array is attached to the network.
NOTE: It can take several minutes for the MD Storage Manager to connect to the specified storage array.
NOTE: When adding a storage array using in-band management with iSCSI, a session must first be established between
the initiator on the host server and the storage array. For more information, see Using iSCSI.
NOTE: The host agent must be restarted before in-band management communication can be established. See Starting Or
Restarting The Host Context Agent Software.
3 Click Add.
4 Use one of these methods to name a storage array:
• In the EMW, select the Setup tab, and select Name/Rename Storage Arrays.
• In the AMW, select the Setup tab, and select Rename Storage Array.
• In the EMW, right-click the icon corresponding to the array and select Rename.
Initial setup tasks include:
• Locate the storage array — Find the physical location of the storage array on your network by turning on the system identification
indicator.
• Give a new name to the storage array — Use a unique name that identifies each storage array.
• Set a storage array password — Configure the storage array with a password to protect it from unauthorized access. The MD Storage
Manager prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is
created or deleted.
• Configure iSCSI host ports — Configure network parameters for each iSCSI host port automatically or specify the configuration
information for each iSCSI host port.
• Configure the storage array — Create disk groups, virtual disks, and hot spare physical disks by using the Automatic configuration
method or the Manual configuration method.
• Map virtual disks — Map virtual disks to hosts or host groups.
• Save configuration — Save the configuration parameters in a file that you can use to restore the configuration, or reuse the
configuration on another storage array.
After you complete the basic steps for configuring the storage array, you can perform these optional tasks:
• Manually define hosts — Define the hosts and the host port identifiers that are connected to the storage array. Use this option only if
the host is not automatically recognized and shown in the Host Mappings tab.
• Configure Ethernet management ports — Configure the network parameters for the Ethernet management ports on the RAID
controller modules if you are managing the storage array by using the out-of-band management connections.
• View and enable premium features — Your MD Storage Manager may include premium features. View the premium features that are
available and the premium features that are already started. You can start available premium features that are currently stopped.
• Manage iSCSI settings — You can configure iSCSI settings for authentication, identification, and discovery.
• Each storage array must be assigned a unique alphanumeric name up to 30 characters long.
• A name can consist of letters, numbers, and the special characters underscore (_), dash (–), and pound sign (#). No other special
characters are allowed.
NOTE: Avoid arbitrary names or names that may lose meaning in the future.
3 Click OK.
A message is displayed warning you about the implications of changing the storage array name.
4 Click Yes.
The new storage array name is displayed in the EMW.
5 Repeat step 1 through step 4 to name or rename additional storage arrays.
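The naming rules above are easy to check programmatically. A minimal sketch (illustrative only; the MD Storage Manager enforces the actual rules):

import re

NAME_RE = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")  # letters, numbers, _, -, #; up to 30 characters

for name in ("Accounting_AR-01", "array#2", "bad name!", "x" * 31):
    print(name, "valid:", bool(NAME_RE.fullmatch(name)))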
Setting a password
You can configure each storage array with a password to protect it from unauthorized access. The MD Storage Manager prompts for the
password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted. View
operations do not change the storage array configuration and do not require a password. You can create a new password or change an
existing password.
To set a new password or change an existing password:
1 In the EMW, select the relevant storage array and open the AMW for that storage array.
NOTE: If you are setting the password for the first time, leave the Current password
blank.
4 Type the New password.
NOTE: It is recommended that you use a long password with at least 15 alphanumeric characters to increase security. For
more information about secure passwords, see Password Guidelines.
5 Re-type the new password in Confirm new password.
6 Click OK.
NOTE: You are not prompted for the password again when you attempt to change the storage array configuration during the
current management session.
Password guidelines
• Use secure passwords for your storage array. A password should be easy for you to remember but difficult for others to determine.
Consider using numbers or special characters in the place of letters, such as a 1 in the place of the letter I, or the at sign (@) in the
place of the letter 'a'.
• For increased protection, use a long password with at least 15 alphanumeric characters. The maximum password length is 30
characters.
• Passwords are case sensitive.
NOTE: You can attempt to enter a password up to ten times before the storage array enters a lockout state. Before you can
try to enter a password again, you must wait 10 minutes for the storage array to reset. To reset the password, press the
password reset switch on your RAID controller module.
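A minimal sketch of these length rules (an illustration; the array itself enforces the limits):

def check_password(password):
    if len(password) > 30:
        return "rejected: maximum length is 30 characters"
    if len(password) < 15:
        return "allowed, but at least 15 alphanumeric characters are recommended"
    return "ok"

print(check_password("Rec3ivable@Acct18"))  # 17 characters -> ok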
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit > Comment.
The Edit Comment dialog is displayed.
3 Type a comment.
NOTE: The comment must not exceed 60 characters.
4 Click OK.
This option updates the comment in the Table view and saves it in your local storage management station file system. The comment
does not appear to administrators who are using other storage management stations.
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit > Remove > Storage Array.
You can also right-click on a storage array and select Remove > Storage Array.
1 From the menu bar in the AMW, select Storage Array > Premium Features.
The Premium Features and Feature Pack Information window is displayed.
2 Click Use Key File.
The Select Feature Key File window opens, which lets you select the generated key file.
3 Navigate to the relevant folder, select the appropriate key file, and click OK.
The Confirm Enable Premium Features dialog is displayed.
4 Click Yes.
The required premium feature is enabled on your storage array.
5 Click Close.
1 In the AMW, on the menu bar, select Storage Array > Change > Failover Alert Delay.
The Failover Alert Delay window is displayed.
2 In Failover alert delay, enter a value between 0 and 60 minutes.
3 Click OK.
4 If you have set a password for the selected storage array, the Enter Password dialog is displayed. Type the current password for the
storage array.
1 In the AMW, select Storage Array > Change > Cache Settings.
The Change Cache Settings window is displayed.
2 In Start demand cache flushing, select or enter the percentage of unwritten data in the cache that triggers a cache flush.
3 Select the appropriate Cache block size.
A smaller cache block size is a good choice for file-system use or database-application use. A larger cache block size is a good choice for
applications that generate sequential I/O, such as multimedia.
4 If you have set a password for the selected storage array, the Enter Password dialog is displayed. Type the current password for the
storage array and click OK.
1 In the AMW, from the menu bar, select Hardware > Enclosure > Change > ID.
2 Select a new enclosure ID number from the Change Enclosure ID list.
The enclosure ID must be between 0 and 99 (inclusive).
3 To save the changed enclosure ID, click OK.
1 In the AMW, from the menu bar, select Hardware > Enclosure > Change > Hardware View Order.
2 From the enclosures list, select the enclosure you want to move and click either Up or Down to move the enclosure to the new
position.
3 Click OK.
4 If you have set a password for the selected storage array, the Enter Password dialog is displayed. Type the current password for the
storage array.
5 Click OK.
NOTE: This option enables you to set up alerts for all the storage arrays connected to the host.
• On the Setup tab, select Configure Alerts. Go to step 2.
2 Select one of the following radio buttons to specify an alert level:
• All storage arrays — Select this option to send an e-mail alert about events on all storage arrays.
• An individual storage array — Select this option to send an e-mail alert about events that occur on only a specified storage array.
The newly added e-mail address is displayed in the Configured e-mail addresses area.
5 For the selected e-mail address in the Configured e-mail addresses area, in the Information To Send list, select:
• Event Only — The e-mail alert contains only the event information. By default, Event Only is selected.
• Event + Profile — The e-mail alert contains the event information and the storage array profile.
• Event + Support — The e-mail alert contains the event information and a compressed file that contains complete support
information for the storage array that has generated the alert.
6 For the selected e-mail address in the Configured e-mail addresses area, in the Frequency list, select:
• Every event — Sends an e-mail alert whenever an event occurs. By default, Every event is selected.
• Every x hours — Sends an e-mail alert after the specified time interval if an event has occurred during that time interval. You can
select this option only if you have selected either Event + Profile or Event + Support in the Information To Send list.
1 Open the Configure Alerts dialog by performing one of these actions in the EMW:
• On the Devices tab, select a node and then on the menu bar, select Edit > Configure Alerts. Go to step 3.
NOTE: This option enables you to set up alerts for all the storage arrays connected to the host.
• On the Setup tab, select Configure Alerts. Go to step 2.
2 Select one of the following options to specify an alert level:
• All storage arrays — Select this option to send an alert notification about events on all storage arrays.
• An individual storage array — Select this option to send an alert notification about events that occur in only a specified storage
array.
NOTE: If you do not know the location of the selected storage array, click Blink to turn on the LEDs of the storage array.
3 To configure an SNMP alert originating from the event monitor, see Creating SNMP Alert Notifications Originating from the Event
Monitor.
4 To configure an SNMP alert originating from the storage array, see Creating SNMP Alert Notifications Originating from the Storage
Array.
• Host destinations for SNMP traps must be running an SNMP service so that the trap information can be processed.
• To set up alert notifications using SNMP traps, you must copy and compile a management information base (MIB) file on the
designated network management stations.
• Global settings are not required for the SNMP trap messages. Trap messages sent to a network management station or other SNMP
servers are standard network traffic, and a system administrator or network administrator handles the security issues.
1 Do one of the following actions based on whether you want to configure alerts for a single storage array or for all storage arrays.
• Single storage array – In the Enterprise Management Window (EMW), select the Devices tab. Right-click the storage array for
which you want to send alerts, and then select Configure Alerts.
• All storage arrays – In the EMW, select the Setup tab. Select Configure Alerts, and then select the All storage arrays radio
button, and then click OK.
To configure an SNMP alert notification originating from the storage array, you specify the community name and the trap destination. The
community name is a string that identifies a known set of network management stations and is set by the network administrator. The trap
destination is the IP address or the host name of a computer running an SNMP service. At a minimum, the trap destination is the network
management station. Keep these guidelines in mind when configuring SNMP alert notifications:
• Host destinations for SNMP traps must be running an SNMP service so that the trap information can be processed.
• Global settings are not required for the SNMP trap messages. Trap messages sent to a network management station or other SNMP
servers are standard network traffic, and a system administrator or network administrator handles the security issues.
NOTE: If the SNMP - Storage Array Origin Trap tab does not appear, this feature might not be available on your RAID
controller module model.
4 (Optional) If you want to define the SNMP MIB-II variables that are specific to the storage array, perform this step.
You only need to enter this information once for each storage array. An icon is displayed next to the Configure SNMP MIB-II Variables
button if any of the variables are currently set. The storage array returns this information in response to GetRequests.
• The Name field populates the variable sysName.
Battery settings
A smart battery backup unit (BBU) can perform a learn cycle. The smart BBU module includes the battery, a battery gas gauge, and a
battery charger. The learn cycle calibrates the smart battery gas gauge so that it provides a measurement of the charge of the battery
module. A learn cycle can only start when the battery is fully charged.
The learn cycle discharges the battery to a predetermined threshold and then charges the battery back to full capacity.
A learn cycle starts automatically when you install a new battery module. Learn cycles for batteries in both RAID controller modules in a
duplex system occur simultaneously.
Learn cycles are scheduled to start automatically at regular intervals, at the same time and on the same day of the week. The interval
between cycles is measured in weeks.
1 In the AMW, from the menu bar, select Hardware > Enclosure > Change > Battery Settings.
1 In the AMW, on the menu bar, select Hardware > RAID Controller Module > Synchronize Clocks.
2 If a password is set, in the Enter Password dialog, type the current password for the storage array, and click Synchronize.
The RAID controller module clocks are synchronized with the management station.
4
Using iSCSI
NOTE: If you do not want to create a CHAP secret, you can generate a random CHAP secret automatically. To generate a
random CHAP secret, click Generate Random CHAP Secret.
7 Click OK.
NOTE: You can select None and CHAP at the same time, for example, when one initiator may not have CHAP and another
initiator has only CHAP selected.
Entering mutual authentication permissions
Mutual authentication or two-way authentication is a way for a client or a user to verify themselves to a host server, and for the host
server to validate itself to the user. This validation is accomplished in such a way that both parties are sure of the other’s identity.
To add mutual authentication permissions:
A CHAP secret can contain any of these printable characters:
, - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _
a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
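If you prefer to generate a secret yourself rather than use the Generate Random CHAP Secret option, here is a hedged sketch drawing from the character set above (the length used is an arbitrary example, not a documented requirement):

import secrets
import string

VALID = string.ascii_letters + string.digits + ",-./:;<=>?@[\\]^_{|}~"

def random_chap_secret(length=16):
    return "".join(secrets.choice(VALID) for _ in range(length))

print(random_chap_secret())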
NOTE: Aliases can contain up to 30 characters. Aliases can include letters, numbers, and the special characters underscore
(_), minus (-), and pound sign (#). No other special characters are permitted.
NOTE: Open-iSCSI (which is used by Red Hat Enterprise Linux 5 and SUSE Linux Enterprise Server 10 with SP 1) does not
support using a target alias.
NOTE: After you manually enter an IP address, you can also click Advanced to configure the customized TCP listening
ports.
NOTE: If you do not want to allow discovery sessions that are not named, select Disallow un-named discovery sessions.
NOTE: Un-named discovery sessions are discovery sessions that are permitted to run without a target name. With an un-
named discovery session, the target name or the target portal group tag is not available to enforce the iSCSI session
identifier (ISID) rule.
6 Click OK.
Configuring the iSCSI host ports
The default method for configuring the iSCSI host ports, for IPv4 addressing, is DHCP. Always use this method unless your network does
not have a DHCP server. It is advisable to assign static DHCP addresses to the iSCSI ports to ensure continuous connectivity. For IPv6
addressing, the default method is Stateless auto-configuration. Always use this method for IPv6.
To configure the iSCSI host ports:
NOTE: For each iSCSI host port, you can use either IPv4 settings or IPv6 settings or both.
4 In the Configured Ethernet port speed list, select a network speed for the iSCSI host port.
The network speed values in the Configured Ethernet port speed list depend on the maximum speed that the network can support.
Only the network speeds that are supported are displayed.
All of the host ports on a single controller operate at the same speed. An error is displayed if different speeds are selected for the host
ports on the same controller.
5 To use the IPv4 settings for the iSCSI host port, select Enable IPv4 and select the IPv4 Settings tab.
6 To use the IPv6 settings for the iSCSI host port, select Enable IPv6 and select the IPv6 Settings tab.
7 To configure the IPv4 and IPv6 settings, select:
• Obtain configuration automatically from DHCP server to automatically configure the settings. This option is selected by default.
• Specify configuration to manually configure the settings.
NOTE: If you select the automatic configuration method, the configuration is obtained automatically using the DHCP for
IPv4 settings. Similarly for IPv6 settings, the configuration is obtained automatically based on the MAC address and the
IPv6 routers present on the subnetwork.
8 Click Advanced IPv4 Settings and Advanced IPv6 Settings to configure the Virtual Local Area Network (VLAN) support and
Ethernet priority.
9 Click Advanced Port Settings to configure the TCP listening port settings and Jumbo frame settings.
10 To enable the Internet Control Message Protocol (ICMP), select Enable ICMP PING responses.
The ICMP setting applies to all the iSCSI host ports in the storage array configured for IPv4 addressing.
NOTE: The ICMP is one of the core protocols of the Internet Protocol suite. The ICMP messages determine whether a host
is reachable and how long it takes to get packets to and from that host.
11 Click OK.
Use the advanced settings for the individual iSCSI host ports to specify the TCP frame size, the virtual LAN, and the network priority.
Setting Description
Virtual LAN (VLAN) A method of creating independent logical networks within a physical network. Several VLANs can exist within a
network. VLAN 1 is the default VLAN.
NOTE: For more information about creating and configuring a VLAN with MD Support Manager, in the
AMW, click the Support tab, then click View Online Help.
Ethernet Priority The network priority can be set from lowest to highest. Although network managers must determine these
mappings, the IEEE has made broad recommendations:
• 0—lowest priority—default
• (1–4)—ranges from “loss eligible” traffic to controlled-load applications, such as streaming multimedia and
business-critical traffic
• (5–6)—delay-sensitive applications such as interactive video and voice
• 7—highest priority reserved for network-critical traffic
TCP Listening Port The default Transmission Control Protocol (TCP) listening port is 3260.
Jumbo Frames The maximum transmission unit (MTU). It can be set between 1501 and 9000 bytes per frame. If Jumbo
Frames are disabled, the default MTU is 1500 bytes per frame.
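A small sanity check of the jumbo frame range described above (illustrative only; the AMW enforces these limits):

def effective_mtu(jumbo_enabled, requested=9000):
    if not jumbo_enabled:
        return 1500  # default MTU when jumbo frames are disabled
    if not 1501 <= requested <= 9000:
        raise ValueError("jumbo frame MTU must be between 1501 and 9000 bytes")
    return requested

print(effective_mtu(True, 9000))  # 9000
print(effective_mtu(False))       # 1500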
NOTE: Changing any of these settings resets the iSCSI port. I/O is interrupted to any host accessing that port. I/O access
resumes automatically after the port restarts and the host logs in again.
You might want to end an iSCSI session for these reasons:
• Unauthorized access — If an initiator that you consider should not have access is logged on, you can end the iSCSI session. Ending
the iSCSI session forces the initiator to log off the storage array. The initiator can log on again if the None authentication method is
available.
• System downtime — If you need to turn off a storage array and initiators are logged on, you can end the iSCSI session to log off the
initiators from the storage array.
1 In the AMW menu bar, select Storage Array > iSCSI > View/End Sessions.
2 Select the iSCSI session that you want to view in the Current sessions area.
The details are displayed in the Details area.
3 To save the entire iSCSI sessions topology as a text file, click Save As.
4 To end the session:
a Select the session that you want to end, and then click End Session.
The End Session confirmation window is displayed.
b Click Yes to confirm that you want to end the iSCSI session.
NOTE: If you end a session, any corresponding connections terminate the link between the host and the storage array, and
the data on the storage array is no longer available.
NOTE: When a session is manually terminated using the MD Storage Manager, the iSCSI initiator software automatically
attempts to re-establish the terminated connection to the storage array. This may cause an error message.
1 In the AMW menu bar, select Monitor > Health > iSCSI Statistics.
The View iSCSI Statistics window is displayed.
2 Select the iSCSI statistic type you want to view in the iSCSI Statistics Type area. You can select:
• Ethernet MAC statistics
• Ethernet TCP/IP statistics
• Target (protocol) statistics
• Local initiator (protocol) statistics
3 In the Options area, select:
• Raw statistics — To view the raw statistics. Raw statistics are all the statistics that have been gathered since the RAID controller
modules were started.
• Baseline statistics — To view the baseline statistics. Baseline statistics are point-in-time statistics that have been gathered since
you set the baseline time.
After you select the statistics type and either raw or baseline statistics, the details of the statistics appear in the statistics tables.
NOTE: You can click Save As to save the statistics that you are viewing in a text file.
4 To set the baseline for the statistics:
a Select Baseline statistics.
b Click Set Baseline.
c Confirm that you want to set the baseline statistics in the dialog that is displayed.
The baseline time shows the latest time you set the baseline. The sampling interval is the difference in time from when you set the
baseline until you launch the dialog or click Refresh.
NOTE: You must first set a baseline before you can compare baseline statistics.
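Conceptually, a baseline statistic is simply the raw counter minus its value at the moment the baseline was set. A minimal sketch with hypothetical counters:

raw_at_baseline = {"tx_packets": 10_000, "rx_packets": 8_000}
raw_now = {"tx_packets": 15_500, "rx_packets": 9_200}
baseline_stats = {name: raw_now[name] - raw_at_baseline[name] for name in raw_now}
print(baseline_stats)  # {'tx_packets': 5500, 'rx_packets': 1200}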
Manually delete the host and the host group:
1 Click the Host Mappings tab.
2 Select the item that you want to remove and select Host Mappings > Remove.
For more information about Host, Host Groups, and Host Topology, see About Your Host.
5
Event monitor
An event monitor is provided with Dell EMC PowerVault Modular Disk Storage Manager. The event monitor runs continuously in the
background and monitors activity on the managed storage arrays. If the event monitor detects any critical problems, it can notify a host or
remote system using e-mail, Simple Network Management Protocol (SNMP) trap messages, or both.
For the most timely and continuous notification of events, enable the event monitor on a management station that runs 24 hours a day.
Enabling the event monitor on multiple systems or having a combination of an event monitor and MD Storage Manager active can result in
duplicate events, but this does not indicate multiple failures on the array.
The Event Monitor is a background task that runs independently of the Enterprise Management Window (EMW).
To set up alert notifications:
• Set up alert destinations for the managed device that you want to monitor. A possible alert destination would be the Dell Management
Console.
• Replicate the alert settings from a particular managed device by copying the emwdata.bin file to every storage management station
from which you want to receive alerts.
Each managed device shows a check mark that indicates that alerts have been set.
NOTE: It is recommended that you configure the event monitor to start by default on a management station that runs 24 hours a
day.
Windows
To enable or disable the event monitor:
1 Open the Run command box in Windows by pressing <Windows logo key><R>.
The Run command box is displayed.
2 In Open, type services.msc.
The Services window is displayed.
3 From the list of services, select Modular Disk Storage Manager Event Monitor.
4 Select Action > Properties.
5 To enable the event monitor, in the Service Status area, click Start.
6 To disable the event monitor, in the Service Status area, click Stop.
Linux
To enable the event monitor, at the command prompt, type SMmonitor start and press <Enter>. When the program startup begins,
the following message is displayed: SMmonitor started.
To disable the event monitor, start a terminal emulation application (console or xterm) and at the command prompt, type SMmonitor
stop, and press <Enter>. When the program shutdown is complete, the following message is displayed: Stopping Monitor
process.
6
About your host
Configuring host access
Dell EMC PowerVault Modular Disk Storage Manager (MD Storage Manager) comprises multiple modules. One of these modules is
the Host Context Agent, which is installed as part of the MD Storage Manager installation and runs continuously in the background.
If the Host Context Agent is running on a host, that host and the host ports connected from it to the storage array are automatically
detected by the MD Storage Manager. The host ports are displayed in the Host Mappings tab in the Array Management Window (AMW).
The host must be manually added under the Default Host Group in the Host Mappings tab.
NOTE: On MD3800i, MD3820i, and MD3860i storage arrays that use the iSCSI protocol, the Host Context Agent is not dynamic
and must be restarted after establishing iSCSI sessions to automatically detect them.
Use the Define Host Wizard to define the hosts that access the virtual disks in the storage array. Defining a host is one of the steps
required to let the storage array know which hosts are attached to it and to allow access to the virtual disks. For more information about
defining the hosts, see Defining A Host.
To enable the host to write to the storage array, you must map the host to the virtual disk. This mapping grants a host or a host group
access to a particular virtual disk or to several virtual disks in a storage array. You can define the mappings on the Host Mappings tab in the
AMW.
On the Summary tab in the AMW, the Host Mappings area indicates how many hosts are configured to access the storage array. Click
Configured Hosts in the Host Mappings area to see the names of the hosts.
A collection of elements, such as default host groups, hosts, and host ports, are displayed as nodes in the object tree on the left pane of
the Host Mappings tab.
The host topology is reconfigurable. You can perform the following tasks:
NOTE: The host port identifier name must contain only hexadecimal characters (the digits 0 through 9 and the letters A through F).
6 Click Add.
The host port identifier and the alias for the host port identifier are added to the host port identifier table.
7 Click Next.
The Specify Host Type window is displayed.
8 In Host type (operating system), select the relevant operating system for the host.
The Host Group Question window is displayed.
9 In the Host Group Question window, you can select:
• Yes — This host shares access to the same virtual disks with other hosts.
• No — This host does NOT share access to the same virtual disks with other hosts.
10 Click Next.
11 If you select:
• Yes — The Specify Host Group window is displayed.
• No — Go to step 13.
12 Enter the name of the host group or select an existing host group and click Next.
The Preview window is displayed.
13 Click Finish.
The Creation Successful window is displayed confirming that the new host is created.
14 To create another host, click Yes on the Creation Successful window.
If a host is part of a cluster, every host in the cluster must be connected to the storage array, and every host in the cluster must be added
to the host group.
NOTE: To remove hosts, select the hosts in the Hosts in group area, and click Remove.
7 Click OK.
1 In the AMW, select the Host Mappings tab, select the host node in the object tree.
2 Perform one of these actions:
• From the menu bar, select Host Mappings > Host > Move.
• Right-click the host node, and select Move from the pop-up menu.
1 In the AMW, select the Host Mappings tab, select the host group node in the object tree.
2 Perform one of these actions:
• From the menu bar, select Host Mappings > Host Group > Remove.
• Right-click the host group node, and select Remove from the pop-up menu.
Host topology
Host topology is the organization of hosts, host groups, and host interfaces configured for a storage array. You can view the host topology
in the Host Mappings tab of the AMW. For more information, see Using The Host Mappings Tab.
Tasks such as moving a host or a host group, renaming a host, or removing a host or a host group change the host topology.
The MD Storage Manager automatically detects these changes for any host running the host agent software.
To start or stop the Host Context Agent on Linux, enter the following commands at the prompt:
SMagent start
SMagent stop
NOTE: See the Deployment Guide for more information about cabling configurations.
NOTE: For more information about configuring hosts, see About Your Host.
If a component such as a RAID controller module or a cable fails, or an error occurs on the data path to the preferred RAID controller
module, virtual disk ownership is moved to the alternate non-preferred RAID controller module for processing. This failure or error is called
failover.
Drivers for multipath frameworks such as Microsoft Multi-Path IO (MPIO) and Linux Device Mapper (DM) are installed on host systems
that access the storage array and provide I/O path failover.
For more information about Linux DM, see Device Mapper Multipath for Linux. For more information about MPIO, see Microsoft.com.
NOTE: You must always have the multipath driver installed on the hosts, even in a configuration where there is only one path to
the storage system, such as a single-port cluster configuration.
During a failover, the virtual disk transfer is logged as a critical event, and an alert notification is sent automatically if you have configured
alert destinations for the storage array.
• Create a disk group from unconfigured capacity. First define the RAID level and free capacity (available storage space) for the disk
group, and then define the parameters for the first virtual disk in the new disk group.
• Create a new virtual disk in the free capacity of an existing disk group or disk pool. You only need to specify the parameters for the new
virtual disk.
A disk group has a set amount of free capacity that is configured when the disk group is created. You can use that free capacity to
subdivide the disk group into one or more virtual disks.
• Automatic configuration—Provides the fastest method, but with limited configuration options
• Manual configuration—Provides more configuration options
When creating a virtual disk, consider the uses for that virtual disk, and select an appropriate capacity for those uses. For example, if a disk
group has a virtual disk that stores multimedia files (which tend to be large) and another virtual disk that stores text files (which tend to be
small), the multimedia file virtual disk requires more capacity than the text file virtual disk.
A disk group should be organized according to its related tasks and subtasks. For example, if you create a disk group for the Accounting
Department, you can create virtual disks that match the different types of accounting performed in the department: Accounts Receivable
(AR), Accounts Payable (AP), internal billing, and so forth. In this scenario, the AR and AP virtual disks probably need more capacity than
the internal billing virtual disk.
NOTE: In Linux, the host must be rebooted after deleting virtual disks to reset the /dev entries.
NOTE: Before you can use a virtual disk, you must register the disk with the host systems. See Host-To-Virtual Disk Mapping.
1 To start the Create Disk Group Wizard, perform one of these actions:
• To create a disk group from unconfigured capacity in the storage array, in the Storage & Copy Services tab, select a storage array
and right-click the Total Unconfigured Capacity node, and select Create Disk Group from the pop-up menu.
• To create a disk group from unassigned physical disks in the storage array — On the Storage & Copy Services tab, select one or
more unassigned physical disks of the same physical disk type, and from the menu bar, select Storage > Disk Group > Create.
• Select the Hardware tab and right-click the unassigned physical disks, and select Create Disk Group from the pop-up menu.
• To create a secure disk group — On the Hardware tab, select one or more unassigned security capable physical disks of the same
physical disk type, and from the menu bar, select Storage > Disk Group > Create.
NOTE: You can select multiple physical disks at the same time by holding <Ctrl> or <Shift> and selecting additional
physical disks.
c To view the capacity of the new disk group, click Calculate Capacity.
d Click Finish.
A message prompts you that the disk group is successfully created and that you should create at least one virtual disk before you can
use the capacity of the new disk group. For more information about creating virtual disks, see Creating Virtual Disks.
• Many hosts can have 256 logical unit numbers (LUNs) mapped per storage partition, but the number varies per operating system.
• After you create one or more virtual disks and assign a mapping, you must register the virtual disk with the operating system. In
addition, you must make sure that the host recognizes the mapping between the physical storage array name and the virtual disk name.
Depending on the operating system, run the host-based utilities, hot_add and SMdevices.
• If the storage array contains physical disks with different media types or different interface types, multiple Unconfigured Capacity
nodes may be displayed in the Total Unconfigured Capacity pane of the Storage & Copy Services tab. Each physical disk type has an
associated Unconfigured Capacity node if unassigned physical disks are available in the expansion enclosure.
• You cannot create a disk group and subsequent virtual disk from different physical disk technology types. Each physical disk that
comprises the disk group must be of the same physical disk type.
NOTE: Ensure that you create disk groups before creating virtual disks. If you chose an Unconfigured Capacity node or
unassigned physical disks to create a virtual disk, the Disk Group Required dialog is displayed. Click Yes and create a disk
group by using the Create Disk Group Wizard. The Create Virtual Disk Wizard is displayed after you create the disk group.
NOTE: If you select Custom, you must select an appropriate segment size.
8 Select Enable dynamic cache read prefetch.
For more information about virtual disk cache settings, see Changing The Virtual Disk Cache Settings.
NOTE: Enable dynamic cache read prefetch must be disabled if the virtual disk is used for database applications or
applications with a large percentage of random reads.
9 From the Segment size list, select an appropriate segment size.
10 Click Finish.
The virtual disks are created.
NOTE: A message prompts you to confirm if you want to create another virtual disk. Click Yes to proceed further, else click
No.
NOTE: Thin virtual disks are supported on disk pools. For more information, see Thin Virtual Disks.
• If more than one virtual disk is selected, the modification priority defaults to the lowest priority. The current priority is shown only if a
single virtual disk is selected.
• Changing the modification priority by using this option modifies the priority for the selected virtual disks.
NOTE: To select nonadjacent virtual disks, press <Ctrl> and click the appropriate virtual disks. To select adjacent
virtual disks, press <Shift> and click the appropriate virtual disks. To select all of the available virtual disks, click Select All.
5 Click OK.
A message prompts you to confirm the change in the virtual disk modification priority.
6 Click Yes.
7 Click OK.
• After opening the Change Cache Settings dialog, the system may display a window indicating that the RAID controller module has
temporarily suspended caching operations. This action may occur when a new battery is charging, when a RAID controller module has
been removed, or if a mismatch in cache sizes has been detected by the RAID controller module. After the condition has cleared, the
cache properties selected in the dialog become active. If the selected cache properties do not become active, contact your Technical
Support representative.
• If you select more than one virtual disk, the cache settings default to no settings selected. The current cache settings appear only if
you select a single virtual disk.
• If you change the cache settings by using this option, the cache settings of all the virtual disks that you selected are modified.
1 In the AMW, select the Storage & Copy Services tab and select a virtual disk.
2 In the menu bar, select Storage > Virtual Disk > Change > Cache Settings.
The Change Cache Settings window is displayed.
3 Select one or more virtual disks.
To select nonadjacent virtual disks, press <Ctrl> and click. To select adjacent virtual disks, press <Shift> and click. To select all the
available virtual disks, select Select All.
4 In the Cache Properties area, you can select:
• Enable read caching
• Enable write caching
– Enable write caching without batteries — to permit write caching to continue even if the RAID controller module batteries
are discharged completely, not fully charged, or are not present.
CAUTION: Possible loss of data—Selecting the Enable write caching without batteries option lets write caching continue
even when the batteries are discharged completely or are not fully charged. Typically, write caching is turned off temporarily
by the RAID controller module until the batteries are charged. If you select this option and do not have an uninterruptible
power supply for protection, you could lose data. In addition, you could lose data if you do not have RAID controller module
batteries and you select the Enable write caching without batteries option.
NOTE: When the Optional RAID controller module batteries option is enabled, the Enable write caching option does not appear.
The Enable write caching without batteries option is still available, but it is not checked by default.
NOTE: Cache is automatically flushed after the Enable write caching check box is disabled.
5 Click OK.
A message prompts you to confirm the change in the virtual disk cache settings.
6 Click Yes.
7 Click OK.
The Change Virtual Disk Properties - Progress dialog is displayed.
NOTE: The operation to change the segment size is slower than other modification operations—for example, changing RAID
levels or adding free capacity to a disk group. This slowness is the result of how the data is reorganized and the temporary
internal backup procedures that occur during the operation.
The amount of time that a change segment size operation takes depends on several factors, including the host I/O load and the
modification priority of the virtual disk.
If you want this operation to complete faster, you can change the modification priority to the highest level, although this may decrease
system I/O performance.
1 In the AMW, select the Storage & Copy Services tab and select a virtual disk.
2 From the menu bar, select Storage > Virtual Disk > Change > Segment Size.
3 Select the required segment size.
A message prompts you to confirm the selected segment size.
4 Click Yes.
NOTE: To view the progress or change the priority of the modification operation, select a virtual disk in the disk group, and
from the menu bar, select Storage > Virtual Disk > Change > Modification Priority.
When you choose one of the virtual disk I/O characteristics, the corresponding dynamic cache prefetch setting and segment size that are
typically well suited for expected I/O patterns are populated in the Dynamic cache read prefetch field and the Segment size field.
NOTE: Thin virtual disks can only be created from an existing disk pool.
• you anticipate that storage consumption on a virtual disk is highly unpredictable or volatile
• an application relying on a specific virtual disk is exceptionally mission critical
Virtual capacity is capacity that is reported to the host, while physical capacity is the amount of actual physical disk space allocated for
data write operations. Generally, physical capacity is much smaller than virtual capacity.
Thin provisioning allows virtual disks to be created with a large virtual capacity but a relatively small physical capacity. This is beneficial for
storage utilization and efficiency because it allows you to increase capacity as application needs change, without disrupting data
throughput. You can also set a utilization warning threshold that causes MD Storage Manager to generate an alert when a specified
percentage of physical capacity is reached.
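The threshold behavior reduces to simple arithmetic. A sketch with illustrative numbers (MD Storage Manager raises the actual alert, not this code):

physical_capacity_gb = 64
consumed_gb = 50
warning_threshold_pct = 75
utilization = 100.0 * consumed_gb / physical_capacity_gb
if utilization >= warning_threshold_pct:
    print(f"alert: repository at {utilization:.0f}% (threshold {warning_threshold_pct}%)")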
Virtual capacity
Minimum 32 MB
Maximum 63 TB
Physical capacity
Minimum 4 GB
Maximum 64 TB
• Preferred Capacity — Sets the initial physical capacity of the virtual disk (MB, GB or TB). Preferred capacity in a disk pool is allocated
in 4 GB increments. If you specify a capacity amount that is not a multiple of 4 GB, MD Storage Manager assigns a 4 GB multiple and
assigns the remainder as unused. If space exists that is not a 4 GB multiple, you can use it to increase the size of the thin virtual disk. To
increase the size of the thin virtual disk, select Storage > Virtual Disk > Increase Capacity.
• Repository Expansion Policy — Select either Automatic or Manual to indicate whether MD Storage Manager must automatically
expand physical capacity thresholds. If you select Automatic, enter a Maximum Expansion Capacity value that triggers automatic
capacity expansion. The MD Storage Manager expands the preferred capacity in increments of 4 GB until it reaches the specified
capacity. If you select Manual, automatic expansion does not occur and an alert is displayed when the Warning Threshold value
percentage is reached.
• Warning Threshold — When consumed capacity reaches the specified percentage, MD Storage Manager sends an E-mail or SNMP
alert.
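A minimal sketch of the 4 GB allocation rule described above (illustrative only):

def allocate_preferred_capacity(preferred_gb, increment_gb=4):
    # Usable capacity is the largest 4 GB multiple within the preferred amount;
    # the remainder is allocated but unused.
    usable = (preferred_gb // increment_gb) * increment_gb
    unused = preferred_gb - usable
    return usable, unused

print(allocate_preferred_capacity(30))  # (28, 2): 28 GB usable, 2 GB allocated but unused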
Copy Services Feature   Standard Virtual Disk in a Disk Group   Standard Virtual Disk in a Disk Pool   Thin Virtual Disk
Snapshot image          Supported                               Supported                              Supported
The source of a virtual disk copy can be either a standard virtual disk in a disk group, a standard virtual disk in a disk pool, or a thin virtual
disk. The target of a virtual disk copy can be only a standard virtual disk in a disk group or a standard virtual disk in a disk pool, not a thin
virtual disk.
• Keep the same physical capacity — If you keep the same physical capacity, the virtual disk can keep its current repository virtual disk,
which saves initialization time.
• Change the physical capacity — If you change the physical capacity, a new repository virtual disk is created and you can optionally
change the repository expansion policy and warning threshold.
• Move the repository to a different disk pool.
Initializing a thin virtual disk erases all data from the virtual disk. However, host mappings, virtual capacity, repository expansion policy and
security settings are preserved. Initialization also clears the block indices, which causes unwritten blocks to be read as if they are zero-filled.
After initialization, the thin virtual disk appears to be completely empty.
• You can create thin virtual disks only from disk pools, not from disk groups.
• By initializing a thin virtual disk with the same physical capacity, the original repository is maintained but the contents of the thin virtual
disk are deleted.
NOTE: Do not allocate all the capacity to standard virtual disks—ensure that you keep storage capacity for copy services
(snapshot images, snapshot virtual disks, virtual disk copies, and remote replications).
NOTE: Regardless of the capacity specified, capacity in a disk pool is allocated in 4 GB increments. Any capacity that is not
a multiple of 4 GB is allocated but not usable. To make sure that the entire capacity is usable, specify the capacity in 4 GB
increments. If unusable capacity exists, the only way to regain it is to increase the capacity of the virtual disk.
NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you
create a new one.
10 If you want to change the repository expansion policy or warning threshold, click View advanced repository settings.
• Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical
capacity, you can expand the physical capacity. The MD storage management software can automatically expand the physical
capacity, or you can do it manually. If you select Automatic, you also can set a maximum expansion capacity. The maximum
expansion capacity allows you to limit the virtual disk’s automatic growth below the virtual capacity. The value for the maximum
expansion capacity must be a multiple of 4 GB.
• Warning threshold – In the Send alert when repository capacity reaches field, enter a percentage. The MD Storage Manager
sends an alert notification when the physical capacity reaches the full percentage.
11 Click Finish.
The Confirm Initialization of Thin Virtual Disk window is displayed.
12 Read the warning and confirm if you want to initialize the thin virtual disk.
13 Type yes, and click OK.
The thin virtual disk initializes.
NOTE: You can create thin virtual disks only from disk pools, not from disk groups.
NOTE: Do not allocate all the capacity to standard virtual disks—ensure that you keep storage capacity for copy services
(snapshot images, snapshot virtual disks, virtual disk copies, and remote replications).
NOTE: Regardless of the capacity specified, capacity in a disk pool is allocated in 4 GB increments. Any capacity that is not
a multiple of 4 GB is allocated but not usable. To make sure that the entire capacity is usable, specify the capacity in 4 GB
increments. If unusable capacity exists, the only way to regain it is to increase the capacity of the virtual disk.
Based on the value that you entered in the previous step, the Disk pool physical capacity candidates table is populated with
matching repositories.
9 Select a repository from the table.
NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you
create a new one.
10 If you want to change the repository expansion policy or warning threshold, click View advanced repository settings.
• Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical
capacity, you can expand the physical capacity. The MD Storage Manager can automatically expand the physical capacity, or you
can do it manually. If you select Automatic, you also can set a maximum expansion capacity. The maximum expansion capacity
allows you to limit the virtual disk’s automatic growth below the virtual capacity. The value for the maximum expansion capacity
must be a multiple of 4 GB.
• Warning threshold – In the Send alert when repository capacity reaches field, enter a percentage. The MD Storage Manager
sends an alert notification when the physical capacity reaches the full percentage.
11 Click Finish.
The Confirm Initialization of Thin Virtual Disk window is displayed.
12 Read the warning and confirm if you want to initialize the thin virtual disk.
13 Type yes, and click OK.
The thin virtual disk initializes.
Existing thinly provisioned virtual disks in a storage array that you upgrade to version 8.25 are still reported to the host operating system as
standard virtual disks until you use the command-line interface to set the reporting status to thin. Thinly provisioned virtual disks that you
configure after upgrading to version 8.25 are reported to the host operating systems as thinly provisioned virtual disks.
To make sure that the change in reporting policy is recognized, reboot any hosts that use any virtual disks whose reporting status is
changed.
When you enable reporting of thinly-provisioned virtual disks to host operating systems, the host can subsequently use the UNMAP
command to reclaim unused space from thinly-provisioned virtual disks.
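For example, on a Linux host with a file system that supports discard operations, unused space can be returned to a thin virtual disk with the standard fstrim utility. The multipath device and mount point shown here are hypothetical:
# lsblk -D /dev/mapper/mpath6
Verifies that the device advertises discard (UNMAP) support.
# fstrim -v /mnt/thinvd
Issues the UNMAP command for unused blocks on a single mounted file system.
# fstrim -av
Trims every mounted file system that supports discard. The reclaimed space is reflected in the consumed physical capacity that MD Storage Manager reports.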
You can create a secure disk group from security capable physical disks. When you create a secure disk group from security capable
physical disks, the physical disks in that disk group become security enabled. When a security capable physical disk has been security
enabled, the physical disk requires the correct security key from a RAID controller module to read or write the data. All the physical disks
and RAID controller modules in a storage array share the same security key. The shared security key provides read and write access to the physical
disks, while the physical disk encryption key on each physical disk is used to encrypt the data. A security capable physical disk works like
any other physical disk until it is security enabled.
Whenever the power is turned off and turned on again, all the security enabled physical disks change to a security locked state. In this
state, the data is inaccessible until the correct security key is provided by a RAID controller module.
You can view the self encrypting disk status of any physical disk in the storage array from the Physical Disk Properties dialog. The status
information reports whether the physical disk is:
• Security capable
• Secure—Security enabled or disabled
• Read/Write Accessible—Security locked or unlocked
You can view the self encrypting disk status of any disk group in the storage array. The status information reports whether the disk group
is:
• Security capable
• Secure
Secure – No:
• Security capable: The disk group is composed of all SED physical disks and is in a Non-Secure state.
• Not security capable: The disk group is not entirely composed of SED physical disks.
The Physical Disk Security menu is displayed in the Storage Array menu. The Physical Disk Security menu has the following options:
• Create Key
• Change Key
• Save Key
• Validate Key
NOTE: If you have not created a security key for the storage array, the Create Key option is active. If you have created a security
key for the storage array, the Create Key option is inactive with a check mark to the left. The Change Key option, the Save Key
option, and the Validate Key option are now active.
The Secure Physical Disks option is displayed in the Disk Group menu. The Secure Physical Disks option is active if these conditions are
true:
• The selected storage array is not security enabled but is comprised entirely of security capable physical disks.
• The storage array contains no snapshot base virtual disks or snapshot repository virtual disks.
• The disk group is in an Optimal state.
• A security key is set up for the storage array.
NOTE: The Secure Physical Disks option is inactive if these conditions are not true.
The Secure Physical Disks option is inactive with a check mark to the left if the disk group is already security enabled.
The Create a secure disk group option is displayed in the Create Disk Group Wizard–Disk Group Name and Physical Disk Selection
dialog. The Create a secure disk group option is active only when these conditions are met:
You can erase security enabled physical disks so that you can reuse the physical disks in another disk group or in another storage array.
Erasing security enabled physical disks ensures that the data cannot be read. When all the physical disks that you have selected in
the Physical Disk type pane are security enabled, and none of the selected physical disks is part of a disk group, the Secure Erase option is
displayed in the Hardware menu.
The storage array password protects a storage array from potentially destructive operations by unauthorized users. The storage array
password is independent from self encrypting disk, and should not be confused with the pass phrase that is used to protect copies of a
security key. However, it is good practice to set a storage array password.
1 In the AMW, from the menu bar, select Storage Array > Security > Physical Disk Security > Create Key.
2 Perform one of these actions:
• If the Create Security Key dialog is displayed, go to step 6.
• If the Storage Array Password Not Set or Storage Array Password Too Weak dialog is displayed, go to step 3.
3 Choose whether to set (or change) the storage array password at this time.
• Click Yes to set or change the storage array password. The Change Password dialog is displayed. Go to step 4.
• Click No to continue without setting or changing the storage array password. The Create Security Key dialog is displayed. Go to
step 6.
4 In New password, enter a string for the storage array password. If you are creating the storage array password for the first time, leave
Current password blank. Follow these guidelines for cryptographic strength when you create the storage array password:
NOTE: Create Key is active only if the pass phrase meets the specified criteria.
9 In the Confirm pass phrase dialog box, re-enter the exact string that you entered in the Pass phrase dialog box.
Make a record of the pass phrase that you entered and the security key identifier that is associated with the pass phrase. You need
this information for later secure operations.
10 Click Create Key.
11 If the Invalid Text Entry dialog is displayed, select:
• Yes — There are errors in the strings that were entered. The Invalid Text Entry dialog is displayed. Read the error message in the
dialog, and click OK. Go to step 6.
• No — There are no errors in the strings that were entered. Go to step 12.
12 Make a record of the security key identifier and the file name from the Create Security Key Complete dialog, and click OK.
After you have created a security key, you can create secure disk groups from security capable physical disks. Creating a secure disk group
makes the physical disks in the disk group security enabled. Security enabled physical disks enter Security Locked status whenever power
is re-applied. They can be unlocked only by a RAID controller module that supplies the correct key during physical disk initialization.
Otherwise, the physical disks remain locked, and the data is inaccessible. The Security Locked status prevents any unauthorized person
from accessing data on a security enabled physical disk by physically removing the physical disk and installing the physical disk in another
computer or storage array.
1 In the AMW menu bar, select Storage Array > Security > Physical Disk Security > Change Key.
The Confirm Change Security Key window is displayed.
2 Type yes in the text field, and click OK.
The Change Security Key window is displayed.
3 In Secure key identifier, enter a string that becomes part of the secure key identifier.
You may leave the text box blank, or enter up to 189 alphanumeric characters without white space, punctuation, or symbols. Additional
characters are generated automatically.
4 Edit the default path by adding a file name to the end of the path or click Browse, navigate to the required folder, and enter the name
of the file.
5 In Pass phrase, enter a string for the pass phrase.
The pass phrase must meet the following criteria:
• It must be between eight and 32 characters long.
• It must contain at least one uppercase letter.
• It must contain at least one lowercase letter.
• It must contain at least one number.
• It must contain at least one nonalphanumeric character—for example, < > @ +.
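These are the same rules that the Change Security Key dialog enforces. As an illustration only, an equivalent check is shown in the following shell sketch; the sample pass phrase is hypothetical:
# Check a candidate pass phrase against the criteria listed above.
pp='Example@1'
len=${#pp}
if [ "$len" -ge 8 ] && [ "$len" -le 32 ] &&
   printf '%s' "$pp" | grep -q '[A-Z]' &&
   printf '%s' "$pp" | grep -q '[a-z]' &&
   printf '%s' "$pp" | grep -q '[0-9]' &&
   printf '%s' "$pp" | grep -q '[^A-Za-z0-9]'
then
   echo 'Pass phrase meets the criteria'
else
   echo 'Pass phrase rejected'
fi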
1 In the AMW toolbar, select Storage Array > Security > Physical Disk Security > Save Key.
The Save Security Key File - Enter Pass Phrase window is displayed.
2 Edit the default path by adding a file name to the end of the path or click Browse, navigate to the required folder and enter the name
of the file.
3 In Pass phrase, enter a string for the pass phrase.
The pass phrase must meet the following criteria:
• It must be between eight and 32 characters long.
• It must contain at least one uppercase letter.
• It must contain at least one lowercase letter.
• It must contain at least one number.
• It must contain at least one non-alphanumeric character (for example, < > @ +).
CAUTION: Possible loss of data access—The Secure Erase option removes all of the data that is currently on the physical disk.
This action cannot be undone.
Before you complete this option, make sure that the physical disk that you have selected is the correct physical disk. You cannot recover
any of the data that is currently on the physical disk.
After you complete the secure erase procedure, the physical disk is available for use in another disk group or in another storage array. See
help topics for more information about the secure erase procedure.
• You can use only unassigned physical disks with Optimal status as hot spare physical disks.
• You can unassign only hot spare physical disks with Optimal or Standby status. You cannot unassign a hot spare physical disk that has
the In Use status. A hot spare physical disk has the In Use status when it is in the process of taking over for a failed physical disk.
• Hot spare physical disks must be of the same media type and interface type as the physical disks that they are protecting.
• If there are secure disk groups and security capable disk groups in the storage array, the hot spare physical disk must match the
security capability of the disk group.
• Hot spare physical disks must have capacities equal to or larger than the used capacity on the physical disks that they are protecting.
NOTE: This option is available only if you select a hot spare physical disk that is already assigned.
5 To assign hot spares, in the Hot Spare Coverage window, select a disk group in the Hot spare coverage area.
6 Review the information about the hot spare coverage in the Details area.
7 Click Assign.
The Assign Hot Spare window is displayed.
8 Select the relevant Physical disks in the Unassigned physical disks area, as hot spares for the selected disk and click OK.
9 To unassign hot spares, in the Hot Spare Coverage window, select physical disks in the Hot spare physical disks area.
10 Review the information about the hot spare coverage in the Details area.
11 Click Unassign.
A message prompts you to confirm the operation.
12 Type yes and click OK.
• A standby hot spare is a physical disk that has been assigned as a hot spare and is available to take over for any failed physical disk.
• An in-use hot spare is a physical disk that has been assigned as a hot spare and is currently replacing a failed physical disk.
NOTE: For a security capable disk group, security capable hot spare physical disks are preferred. If security capable physical
disks are not available, non-security capable physical disks may be used as hot spare physical disks. To ensure that the disk group
is retained as security capable, the non-security capable hot spare physical disk must be replaced with a security capable
physical disk.
If you select a security capable physical disk as hot spare for a non-secure disk group, a dialog box is displayed indicating that a security
capable physical disk is being used as a hot spare for a non-secure disk group.
The availability of enclosure loss protection for a disk group depends on the location of the physical disks that comprise the disk group. The
enclosure loss protection might be lost because of a failed physical disk and location of the hot spare physical disk. To make sure that
enclosure loss protection is not affected, you must replace a failed physical disk to initiate the copyback process.
The virtual disk remains online and accessible while you are replacing the failed physical disk, because the hot spare physical disk is
automatically substituted for the failed physical disk.
1 Equip your storage array with security-capable physical disks—either SED physical disks or FIPS physical disks.
2 Create a security key that is used by the controller to provide read/write access to the physical disks.
3 Create a security-enabled disk pool or disk group.
NOTE: All SED physical disks supported on MD34xx/MD38xx are FIPS certified. For details, see the Supported physical disk
section in the Dell PowerVault MD Series Support Matrix at Dell.com/powervaultmanuals.
Attention: When a disk pool or disk group is secured, the only way to remove security is to delete the disk pool or disk group.
Deleting the disk pool or disk group deletes all the data in the virtual disks that it contains.
When a security-capable physical disk has been security enabled, the physical disk requires the correct security key from a controller to
read or write the data. All the physical disks and controllers in a storage array share the same security key. Furthermore, if you have both SED
physical disks and FIPS physical disks, they also share the same security key. The shared security key provides read and write access to the physical
disks, while the physical disk encryption key on each physical disk is used to encrypt the data. A security-capable physical disk works like
any other physical disk until it is security enabled.
Whenever the power is turned off and turned on again, all the security-enabled physical disks change to a security locked state. In this
state, the data is inaccessible until the correct security key is provided by a controller.
You can erase security-enabled physical disks so that you can reuse the physical disks in another disk pool, disk group, or in another storage
array. Erasing security-enabled physical disks ensures that the data cannot be read. When all the physical disks that you have
selected in the Physical Disk pane are security enabled, and none of the selected physical disks is part of a disk pool or disk group, the
Secure Erase option is displayed in the Drive menu.
The storage array password protects a storage array from potentially destructive operations by unauthorized users. The storage array
password is independent from the Physical Disk Security feature, and should not be confused with the pass phrase that is used to protect
copies of a security key. However, Dell EMC recommends that you set a storage array password before you create, change, or save a
security key or unlock secure physical disks.
CAUTION: Enclosure loss protection is not guaranteed if a physical disk has already failed in the disk group. In this situation,
losing access to an expansion enclosure and consequently another physical disk in the disk group causes a double physical disk
failure and loss of data.
Enclosure loss protection is achieved when you create a disk group where all of the physical disks that comprise the disk group are located
in different expansion enclosures. This distinction depends on the RAID level. If you choose to create a disk group by using the Automatic
method, the software attempts to choose physical disks that provide enclosure loss protection. If you choose to create a disk group by
using the Manual method, you must use the criteria specified below.
RAID level 1 Ensure that each physical disk in a replicated pair is located in a different expansion enclosure. This enables you to
have more than two physical disks in the disk group within the same expansion enclosure.
For example, if you are creating a six-physical-disk disk group (three replicated pairs), you could achieve enclosure
loss protection with only two expansion enclosures by specifying that the physical disks in each replicated pair are
located in separate expansion enclosures. The following example shows this concept:
• Replicated pair 1 — Physical disk in enclosure 1, slot 1, and physical disk in enclosure 2, slot 1.
• Replicated pair 2 — Physical disk in enclosure 1, slot 2, and physical disk in enclosure 2, slot 2.
• Replicated pair 3 — Physical disk in enclosure 1, slot 3, and physical disk in enclosure 2, slot 3.
RAID level 0 Because RAID level 0 does not have consistency, you cannot achieve enclosure loss protection.
Table 12. Drawer loss protection requirements for different RAID levels

RAID Level 6: RAID Level 6 requires a minimum of 5 physical disks. Place all the physical disks in different drawers, or place a maximum of two physical disks in the same drawer and the remaining physical disks in different drawers.

RAID Level 5: RAID Level 5 requires a minimum of 3 physical disks. Place all the physical disks in different drawers for a RAID Level 5 disk group. Drawer loss protection cannot be achieved for RAID Level 5 if more than one physical disk is placed in the same drawer.

RAID Level 1 and RAID Level 10: RAID Level 1 requires a minimum of 2 physical disks. Make sure that each physical disk in a replicated pair is located in a different drawer. By locating each physical disk in a different drawer, you can have more than two physical disks of the disk group within the same drawer. For example, if you create a RAID Level 1 disk group with six physical disks (three replicated pairs), you can achieve drawer loss protection for the disk group with only two drawers:
• Replicated pair 1 — Physical disk in enclosure 1, drawer 0, slot 0, and physical disk in enclosure 1, drawer 1, slot 0
• Replicated pair 2 — Physical disk in enclosure 1, drawer 0, slot 1, and physical disk in enclosure 1, drawer 1, slot 1
• Replicated pair 3 — Physical disk in enclosure 1, drawer 0, slot 2, and physical disk in enclosure 1, drawer 1, slot 2
RAID Level 10 requires a minimum of 4 physical disks. Make sure that each physical disk in a replicated pair is located in a different drawer.

RAID Level 0: Drawer loss protection cannot be achieved because the RAID Level 0 disk group does not have consistency.
NOTE: If you create a disk group using the Automatic physical disk selection method, MD Storage Manager attempts to choose
physical disks that provide drawer loss protection. If you create a disk group by using the Manual physical disk selection method,
you must use the criteria that are specified in the previous table.
If a disk group already has a Degraded status due to a failed physical disk when a drawer fails, drawer loss protection does not protect the
disk group. The data on the virtual disks becomes inaccessible.
• Each virtual disk in the storage array can be mapped to only one host or host group.
• Host-to-virtual disk mappings are shared between controllers in the storage array.
• A unique LUN must be used by a host group or host to access a virtual disk.
• Each host has its own LUN address space. MD Storage Manager permits the same LUN to be used by different hosts or host groups to
access virtual disks in a storage array.
• Not all operating systems have the same number of LUNs available.
• You can define the mappings on the Host Mappings tab in the AMW. See Using The Host Mappings Tab.
• An access virtual disk mapping is not required for an out-of-band storage array. If your storage array is managed using an out-of-band
connection, and an access virtual disk mapping is assigned to the Default Group, an access virtual disk mapping is assigned to every
host created from the Default Group.
• Most hosts can have 256 LUNs mapped per storage partition. The LUN numbering is from 0 through 255. If your operating system restricts
LUNs to 127, and you try to map a virtual disk to a LUN that is greater than or equal to 127, the host cannot access it.
• An initial mapping of the host group or host must be created using the Storage Partitioning Wizard before defining additional mappings.
See Storage Partitioning.
NOTE: When configuring a SAS storage array, if a host or a host group is selected that does not have a SAS host bus
adapter (SAS HBA) host port defined, a warning dialog is displayed.
5 In Logical unit number, select a LUN.
The supported LUNs are 0 through 255.
6 Select the virtual disk to be mapped in the Virtual Disk area.
The Virtual Disk area lists the names and capacity of the virtual disks that are available for mapping based on the selected host group
or selected host.
7 Click Add.
NOTE: The Add button is inactive until a host group or host, LUN, and virtual disk are selected.
8 To define additional mappings, repeat step 4 through step 7.
NOTE: After a virtual disk has been mapped once, it is no longer available in the Virtual Disk area.
9 Click Close.
The mappings are saved. The object tree and the Defined Mappings pane in the Host Mappings tab are updated to reflect the
mappings.
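Mappings can also be defined from the command line, which is convenient when many virtual disks must be mapped. The following is a sketch only; the set virtualDisk, logicalUnitNumber, and host keywords follow MD Series CLI conventions but should be verified in the CLI Guide, and all names and addresses are placeholders:
SMcli 192.168.128.101 192.168.128.102 -c 'set virtualDisk ["Data1"] logicalUnitNumber=2 host="host1";'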
NOTE: Stop any host applications associated with this virtual disk, and unmount the virtual disk, if applicable, from your
operating system.
6 In the Change Mapping dialog, click Yes to confirm the changes.
The mapping is checked for validity and is saved. The Defined Mappings pane is updated to reflect the new mapping. The object tree
is also updated to reflect any movement of host groups or hosts.
7 If a password is set on the storage array, the Enter Password dialog is displayed. Type the current password for the storage array, and
click OK.
8 If configuring a Linux host, run the rescan_dm_devs utility on the host, and remount the virtual disk if required.
NOTE: This utility is installed on the host as part of the MD Storage Manager install process.
9 Restart the host applications.
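For example, on a Linux host, a typical sequence after a mapping change is to rescan the devices, confirm the paths, and remount. The multipath device name and mount point are hypothetical:
# rescan_dm_devs
# multipath -ll
Confirm that the multipath device for the remapped virtual disk is listed with the expected paths.
# mount /dev/mapper/mpath6 /mnt/data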
1 In the AMW, select the Storage & Copy Services tab and select a virtual disk.
2 From the menu bar, select the appropriate RAID controller module slot in Storage > Virtual Disk > Change > Ownership/Preferred
Path.
3 Click Yes to confirm the selection.
1 In the AMW, select the Storage & Copy Services tab and select a disk group.
2 From the menu bar, select Storage > Disk Group > Change > Ownership/Preferred Path.
3 Select the appropriate RAID controller module slot and click Yes to confirm the selection.
CAUTION: Possible loss of data access—Changing ownership at the disk group level causes every virtual disk in that disk
group to transfer to the other RAID controller module and use the new I/O path. If you do not want to set every virtual disk
to the new path, change ownership at the virtual disk level instead.
The ownership of the disk group is changed. I/O to the disk group is now directed through this I/O path.
NOTE: The disk group may not use the new I/O path until the multi-path driver reconfigures and recognizes the new path.
This action usually takes less than 5 minutes.
1 In the AMW, select the Storage & Copy Services tab and select a disk group.
2 From the menu bar, select Storage > Disk Group > Change > RAID Level.
3 Select the appropriate RAID level and click Yes to confirm the selection.
The RAID level operation begins.
2 Identify the multipath device that corresponds to the virtual disk that you want to delete from the mapping. For example, the following information may be displayed:
mpath6 (3600a0b80000fb6e50000000e487b02f5) dm-10
DELL, MD32xx
[size=1.6T][features=3 queue_if_no_path
pg_init_retries 50][hwhandler=1 rdac]
\_ round-robin 0 [prio=6][active]
\_ 1:0:0:2 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 0:0:0:2 sde 8:64 [active][ghost]
In this example, the mpath6 device contains two paths:
• /dev/sdf at Host 1, Channel 0, Target 0, LUN 2
• /dev/sde at Host 0, Channel 0, Target 0, LUN 2
3 Flush the multipath device mapping using the following command:
# multipath -f /dev/mapper/mpath_x
Where mpath_x is the multipath device name noted in the previous step (in this example, mpath6).
4 Delete the SD nodes (disk devices) that belong to the multipath device using the following command:
# echo 1 > /sys/block/sd_x/device/delete
Where sd_x is the SD node returned by the multipath command. Repeat this command for all paths related to this
device. For example:
# echo 1 > /sys/block/sdf/device/delete
# echo 1 > /sys/block/sde/device/delete
5 Remove the mapping from MD Storage Manager, or delete the LUN if necessary.
6 If you want to map another LUN or increase virtual disk capacity, perform this action from MD Storage Manager.
NOTE: If you are only testing LUN removal, you can stop at this step.
7 If a new LUN is mapped or virtual disk capacity is changed, run the following command:
# rescan_dm_devs
The maximum number of LUNs for the Linux host type is 255.
Guidelines when you work with host types with LUN mapping restrictions:
• You cannot change a host adapter port to a restricted host type if there are already mappings in the storage partition that would
exceed the limit imposed by the restricted host type.
• Consider the case where the Default Group has access to 256 LUNs (0–255) and a restricted host type is added to the Default
Group. In this case, the host that is associated with the restricted host type can access virtual disks in the Default Group only with
LUNs within its limits. For example, if the Default Group has two virtual disks mapped to LUNs 254 and 255, the host with the
restricted host type cannot access those two virtual disks.
• If the Default Group has a restricted host type assigned and the storage partitions are disabled, you can map only a total of 32 LUNs.
Any additional virtual disks that are created are put in the Unidentified Mappings area. If more mappings are defined for one of these
Unidentified Mappings, the Define Additional Mapping dialog shows the LUN list, and the Add button is unavailable.
• Do not configure dual mappings on a Windows host.
• If there is a host with a restricted host type that is part of a specific storage partition, all the hosts in that storage partition are limited
to the maximum number of LUNs allowed by the restricted host type.
• You cannot move a host with a restricted host type into a storage partition that already has LUNs mapped that are greater than what is
allowed by the restricted host type. For example, if you have a restricted host type that allows only LUNs up to 31, you cannot move
that restricted host type into a storage partition that has LUNs greater than 31 already mapped.
The Default Group on the Host Mappings tab has a default host type. To change the host type, right-click on the host and select Change
Default Host Operating System from the pop-up menu. If you set the default host type to a host type that is restricted, the maximum
number of LUNs allowed in the Default Group for any host is restricted to the limit imposed by the restricted host type. If a particular host
with a nonrestricted host type becomes part of a specific storage partition, you are able to change the mapping to a higher LUN.
Storage partitioning
A storage partition is a logical entity consisting of one or more virtual disks that can be accessed by a single host or shared among hosts
that are part of a host group. The first time you map a virtual disk to a specific host or host group, a storage partition is created.
Subsequent virtual disk mappings to that host or host group do not create another storage partition.
One storage partition is sufficient if:
• Only one attached host accesses all the virtual disks in the storage array
• All attached hosts share access to all the virtual disks in the storage array
When you choose this type of configuration, all the hosts must have the same operating system and special software (such as clustering
software) to manage virtual disk sharing and accessibility.
Multiple storage partitions are required if specific hosts must access specific virtual disks in the storage array.
You can use the Storage Partitioning Wizard to define a single storage partition. The Storage Partitioning Wizard guides you through the
major steps required to specify which host groups, hosts, virtual disks, and associated logical unit numbers (LUNs) are to be included in the
storage partition.
The Storage Partitioning Wizard cannot be used when any of the following conditions exist:
• No valid host groups or hosts exist in the object tree on the Host Mappings tab.
• No host ports are defined for the host being included in the storage partition.
• All mappings are defined.
NOTE: You can include a secondary virtual disk in a storage partition. However, any hosts that are mapped to the secondary
virtual disk have read-only access until the virtual disk is promoted to a primary virtual disk, or the replicate relationship is
removed.
Storage partitioning topology is the collection of elements, such as Default Group, host groups, hosts, and host ports shown as nodes in the
object tree of the Host Mappings tab in the AMW. For more information, see Using The Host Mappings Tab.
If a storage partitioning topology is not defined, an informational dialog is displayed each time you select the Host Mappings tab. You must
define the storage partitioning topology before you define the actual storage partition.
NOTE: If the RAID level of the disk group is RAID Level 5 or RAID Level 6, and the disk group has enclosure loss
protection, Display only physical disks that ensure enclosure loss protection is displayed and is selected by default.
4 In the Available physical disks area, select physical disks up to the allowed maximum number of physical disks.
NOTE: You cannot mix different media types or different interface types within a single disk group or virtual disk.
5 Click Add.
A message prompts you to confirm your selection.
6 To add the capacity to the disk group, click Yes.
NOTE: After the capacity expansion is completed, extra free capacity is available in the disk group for creation of new
virtual disks or expansion of existing virtual disks.
NOTE: Snapshot repository virtual disks can be expanded from the CLI or from MD Storage Manager. All other virtual disk types
are expandable only from the CLI.
If you receive a warning that the snapshot repository virtual disk is becoming full, you may expand the snapshot repository virtual disk from
MD Storage Manager.
1 In the Array Management Window (AMW), select Storage & Copy Services.
NOTE: After increasing the virtual disk capacity, you cannot decrease it. This operation may take a while to complete and
you cannot cancel it after it starts. However, the virtual disk will remain accessible.
2 Select an appropriate virtual disk.
3 From the menu, select Storage > Virtual Disk > Increase Capacity. Or
Right-click the virtual disk and select Increase Capacity from the pop-up menu.
NOTE: You can also use the Command Line Interface (CLI) on both Windows and Linux hosts to increase the capacity of a virtual
disk. For more information, see Dell EMC PowerVault MD 34XX/38XX Series Storage Arrays CLI Guide.
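As an illustrative sketch only, the CLI form of this operation resembles the following; the addCapacity keyword is an assumption based on MD Series CLI conventions, and the addresses, virtual disk label, and size are placeholders:
SMcli 192.168.128.101 192.168.128.102 -c 'set virtualDisk ["Data1"] addCapacity=100GB;'
After the expansion completes, rescan the devices on the host (for example, with rescan_dm_devs on Linux) so that the operating system sees the new capacity.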
When you migrate an exported disk group to a new storage array, the import fails if a majority of the physical disks in the group are not
present. For example, in a two-disk RAID 1 configuration, both physical disks must be present; in a four-disk RAID 10 configuration, at least
three physical disks, including at least one from each replicated pair, must be present.
Non-exportable components
You must remove or clear any non-exportable settings before you can complete the export disk group procedure. Remove or clear the
following settings:
• Persistent reservations
• Host-to-virtual disk mappings
• Virtual disk copy pairs
• Snapshot virtual disks and snapshot repository virtual disks
• Remote replicated pairs
• Replication repositories
NOTE: You lose access to your data during the export/import process.
NOTE: You must export a disk group before you move the disk group or import the disk group.
1 Insert the exported physical disks into the available physical disk slots.
2 Review the Import Report for an overview of the disk group that you are importing.
3 Check for non-importable components.
4 Confirm that you want to proceed with the import procedure.
NOTE: Some settings cannot be imported during the import disk group procedure.
Non-importable components
Some components cannot be imported during the import disk group procedure. These components are removed during the procedure:
• Persistent reservations
• Mappings
• Virtual disk copy pairs
• Snapshot virtual disks and snapshot repository virtual disks
• Unrecovered media error — Data could not be read on the first attempt or on any subsequent attempts. For virtual disks with
consistency protection, data is reconstructed, rewritten to the physical disk, and verified, and the error is reported to the event log.
1 In the AMW, select the Storage & Copy Services tab and select any virtual disk.
2 From the menu bar, select Storage > Virtual Disk > Change > Media Scan Settings.
The Change Media Scan Settings window is displayed.
3 Deselect Suspend media scan, if selected.
4 In Scan duration (in days), enter or select the duration (in days) for the media scan.
The media scan duration specifies the number of days for which the media scan runs on the selected virtual disks.
5 To disable media scans on an individual virtual disk, select the virtual disk in the Select virtual disks to scan area, and deselect Scan
selected virtual disks.
6 To enable media scans on an individual virtual disk, select the virtual disk in the Select virtual disks to scan area, and select Scan
selected virtual disks.
7 To enable or disable the consistency check, select either With consistency check or Without consistency check.
NOTE: A consistency check scans the data blocks in a RAID Level 5 virtual disk, or a RAID Level 6 virtual disk and checks
the consistency information for each block. A consistency check compares data blocks on RAID Level 1 replicated physical
disks. RAID Level 0 virtual disks have no data consistency.
8 Click OK.
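Media scan settings can also be changed from the command line. This is a sketch only; the mediaScanEnabled and consistencyCheckEnabled parameters are assumptions based on MD Series CLI conventions and should be verified in the CLI Guide:
SMcli 192.168.128.101 192.168.128.102 -c 'set virtualDisk ["Data1"] mediaScanEnabled=TRUE consistencyCheckEnabled=TRUE;'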
1 In the AMW, select the Storage & Copy Services tab and select any virtual disk.
2 From the menu bar, select Storage > Virtual Disk > Change > Media Scan Settings.
The Change Media Scan Settings window is displayed.
3 Select Suspend media scan.
NOTE: This applies to all the virtual disks on the disk group.
4 Click OK.
NOTE: The maximum physical disk speed is 15,000 rpm for standard SAS and 7,200 rpm for 3.5" nearline SAS.
NOTE: In a disk pool, the physical disks must have the same capacities. If the physical disks have different capacities, the MD
Storage Manager uses the smallest capacity among the physical disks in the pool. For example, if your disk pool is comprised
of several 4 GB physical disks and several 8 GB physical disks, only 4 GB on each physical disk is used.
The data and consistency information in a disk pool is distributed across all of the physical disks in the pool and provides the following
benefits:
• Simplified configuration
• Better utilization of physical disks
• Reduced maintenance
• Ability to use thin provisioning
Topics:
NOTE: Because disk pools can coexist with disk groups, a storage array can contain both disk pools and disk groups.
• All physical disk media types in a disk pool must be the same. Solid State Disks (SSDs) are not supported.
• You cannot change the segment size of the virtual disks in a disk pool.
• You cannot export a disk pool from a storage array or import the disk pool to a different storage array.
• You cannot change the RAID level of a disk pool. MD Storage Manager automatically configures disk pools as RAID level 6.
• All physical disk types in a disk pool must be the same.
• You can protect your disk pool with Self Encrypting Disk (SED), but the physical disk attributes must match. For example, SED-enabled
physical disks cannot be mixed with SED-capable physical disks. You can mix SED-capable and non SED-capable physical disks, but the
encryption abilities of the SED physical disks cannot be used.
NOTE: The Only security-capable physical disks option is available only when a security key is set up for the storage array.
• Any available physical disks — To create a disk pool comprised of physical disks that may or may not be security capable or are a
mix of security levels.
NOTE: You can mix Self Encrypting Disk (SED)-capable and non SED-capable physical disks. However, the encryption
abilities of the SED-capable physical disks cannot be used, as the physical disk attributes do not match.
Based on the physical disk type and physical disk security type that you have selected, the Disk pool candidates table shows one
or more disk pool configurations.
6 Locate the Secure Enable? column in the Disk pool candidates table and select the disk pool that you want to secure.
NOTE: You can click View Physical Disks to view the details of the physical disks that comprise the selected disk pool
configuration.
• The AMW is opened to manage a storage array, disk pools do not exist in the storage array, and there are enough similar physical disks
to create a disk pool.
• New physical disks are added to a storage array that has at least one disk pool. If there are enough eligible physical disks available, you
can create a disk pool of different physical disk types than the existing disk pool.
NOTE: If you do not want the Automatic Configuration dialog to be displayed again when unconfigured capacity is detected,
you can select Do not display again. If you later want this dialog to be displayed again when unconfigured capacity is detected,
you can select Storage Array > Preferences in the AMW to reset your preferences. If you do not want to reset the
preferences, but do want to invoke the Automatic Configuration dialog, select Storage Array > Configuration > Disk Pools.
Each physical disk in a disk pool must be of the same physical disk type and physical disk media type and have similar capacity. If there are
enough physical disks of those types, the MD Storage Manager prompts you to create a single disk pool. If the unconfigured capacity
consists of different physical disk types, the MD Storage Manager prompts you to create multiple disk pools.
If a disk pool is already defined in the storage array, and you add new physical disks of the same physical disk type as the disk pool, the MD
Storage Manager prompts you to add the physical disks to the existing disk pool. If the new physical disks are of different physical disk
types, the MD Storage Manager prompts you to add the physical disks of the same physical disk type to the existing disk pool, and to use
the other physical disk types to create different disk pools.
NOTE: If there are multiple disk pools of the same physical disk type, a message is displayed indicating that the MD Storage
Manager cannot recommend the physical disks for a disk pool automatically. However, you can manually add the physical disks to
an existing disk pool. You can click No to close the Automatic Configuration dialog and, from the AMW, select Storage Array >
Disk Pool > Add Physical disks (Capacity).
If more physical disks are added to the storage array when the Automatic Configuration dialog is open, you can click Update to detect the
additional physical disks. As a best practice, add all the physical disks to a storage array at the same time. This action enables the MD
Storage Manager to recommend the best options for using the unconfigured capacity.
You can review the options, and click Yes in the Automatic Configuration dialog to create one or more disk pools, or to add the
unconfigured capacity to an existing disk pool, or both. If you click Yes, you also can create multiple equal-capacity virtual disks after the
disk pool is created.
If you choose not to create the recommended disk pools, or not to add the unconfigured capacity to a disk pool, click No to close the
Automatic Configuration dialog. You can then manually configure the disk pools by selecting Storage Array > Disk Pool > Create from the
AMW.
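For scripted configuration, a disk pool can also be created from the command line. The following is a sketch under assumed keyword names (create diskPool, physicalDiskType, and physicalDiskCount); consult the CLI Guide for the exact syntax before use:
SMcli 192.168.128.101 192.168.128.102 -c 'create diskPool physicalDiskType=SAS userLabel="Pool1" physicalDiskCount=11;'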
NOTE: If the LEDs for the disk pool do not stop blinking, from the AMW, select Hardware > Blink > Stop All Indications.
5 Click OK.
• A disk pool name can consist of letters, numbers, and the special characters underscore (_), hyphen (-), and pound (#). If you choose
any other characters, an error message is displayed. You are prompted to choose another name.
• Limit the name to 30 characters.
• Use a unique, meaningful name that is easy to understand and remember.
• Do not use arbitrary names or names that may quickly lose their meaning in the future.
• If you choose a disk pool name that is already in use, an error message is displayed. You are prompted to choose another name.
NOTE: The early warning notification is available only after you select the critical warning notification.
5 Select or type a value to specify a percentage of usable capacity.
• The status of the disk pool must be Optimal before you can add unassigned physical disks.
• You can add a maximum of 12 physical disks to an existing disk pool. However, the disk pool cannot contain more physical disks than the
maximum limit for a storage array.
• You can add only unassigned physical disks with an Optimal status to a disk pool.
• The data in the virtual disks remains accessible during this operation.
To add unassigned physical disks to a disk pool:
NOTE: The RAID controller module firmware arranges the unassigned physical disk options with the best options listed at
the top in the Select physical disks for addition area.
4 Select one or more physical disks in the Select physical disks for addition area.
The total free capacity that will be added to the disk pool is displayed in the Total usable capacity selected field.
5 Click Add.
The higher the priority level, the larger the impact on host I/O and system performance.
• If you delete a disk pool that contains a snapshot repository virtual disk, you must delete the base virtual disk before you delete the
associated snapshot virtual disk.
• The capacity from the physical disks that were previously associated with the deleted disk pool is added to either of these nodes:
– An existing Unconfigured Capacity node.
– A new Unconfigured Capacity node if one did not exist previously.
• You cannot delete a disk pool that has any of these conditions:
– The disk pool contains a repository virtual disk, such as a snapshot group repository virtual disk, a replication repository virtual disk,
or a Consistency Group member repository virtual disk. You must delete the logical component that has the associated repository
virtual disk in the disk pool before you can delete the disk pool.
NOTE: You cannot delete a repository virtual disk if the base virtual disk is in a different disk pool and you have not
requested to delete that disk pool at the same time.
1 To view the components, select the Storage & Copy Services tab.
The object tree is displayed on the left, and the Properties pane is displayed on the right. The object tree provides a view of the
components in the storage array in a tree structure. The components shown include the disk pools, the disk groups, the virtual disks,
the free capacity nodes, and any unconfigured capacity for the storage array. The Properties pane displays detailed information about
the component that is selected in the object tree.
2 To view the physical components that are associated with a component, perform one of these actions:
• Right-click a component, and select View Associated Physical Components.
• Select a component, and click View Associated Physical Components in the Properties pane.
• Select a component, and from the menu bar, select Storage > Disk Pool > View Associated Physical Components.
• Security Capable
• Secure
Secure – No:
• Security capable: The disk pool is composed of all SED physical disks and is in Non-Secure status.
• Not security capable: The disk pool is not entirely composed of SED physical disks.
• The selected storage array is not security enabled but is comprised entirely of security capable physical disks.
• The storage array contains no snapshot copy base virtual disks or snapshot repository virtual disks.
• The disk pool is in Optimal status.
• A security key is set up for the storage array.
The Secure Physical Disks option is inactive if the preceding conditions are not true. The Secure Physical Disks option is inactive with a
check mark to the left if the disk pool is already security enabled.
The Create a secure disk pool option is displayed in the Create Disk Pool - Disk Pool Name and Physical Disk Selection dialog. The
Create a secure disk pool option is active only when the following conditions are met:
NOTE: When you are creating a thin virtual disk, the Enable dynamic cache read prefetch option is not available.
9 Click Next.
10 Do one of the following:
• Select Use recommended capacity settings, and click Next.
• Select Choose your own settings and then select Customize capacity settings (advanced). Click Next and go to step 11.
11 Use the Preferred capacity box to indicate the initial physical capacity of the virtual disk and the Units list to indicate the specific
capacity units to use—MB, GB, or TB.
NOTE: The physical capacity is the amount of physical disk space that is currently reserved for write requests. The physical
capacity must be at least 4 GB, and cannot be larger than 256 GB.
Based on the value that you entered in the previous step, the Disk pool physical capacity candidates table is populated with
matching repository virtual disks. New repository candidates that are returned either match the capacity that you specify or are rounded
up to the closest 4 GB increment to make sure that all the repository capacity is usable.
12 Select a repository from the table.
Existing repositories are placed at the top of the list.
NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you
create a new one.
13 If you want to change the repository expansion policy or warning threshold, click View advanced repository settings.
• Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical
capacity, you can expand the physical capacity. The MD storage management software can automatically expand the physical
capacity, or you can do it manually. If you select Automatic, you also can set a maximum expansion capacity. The maximum
expansion capacity allows you to limit the virtual disk’s automatic growth below the virtual capacity. The value for the maximum
expansion capacity must be a multiple of 4 GB.
• Warning threshold – In the Send alert when repository capacity reaches field, enter a percentage. The MD Storage Manager
sends an alert notification when the physical capacity reaches the full percentage.
14 Click Finish.
The Virtual Disk Successfully Created window is displayed.
15 Click OK.
If you want to create another virtual disk, click Yes in the Do you want to create another virtual disk? dialog. Perform any operating
system modifications necessary on the application host so that the applications can use the virtual disk. For more information, see the
MD Storage Manager Software Installation Guide for your operating system.
Topics:
Storing the data on the SSD cache eliminates the need for repeated access to the base virtual disk. However, SSD cache virtual disks
count against the number of virtual disks supported on the storage array.
• file system
• database
• web server
• capacity of the SSD cache from a list of possible candidates consisting of different counts of SSD physical disks.
• whether you want to enable SSD cache on all eligible virtual disks currently mapped to hosts
• whether to use SSD cache on existing virtual disks or when creating new virtual disks
NOTE: To view the physical disks that comprise the usable capacity, select the appropriate row under SSD cache
candidates, and click View Physical Disks.
7 SSD Cache is enabled by default. To disable, click Suspend. To re-enable, click Resume.
8 Click Create.
The LEDs on the physical disks comprising the SSD cache blink.
3 After locating the physical disks, click OK.
The LEDs stop blinking.
4 If the LEDs for the disk group do not stop blinking, from the toolbar in AMW, select Hardware > Blink > Stop All Indications.
If the LEDs successfully stop blinking, a confirmation message is displayed.
5 Click OK.
In the Table view of the SSD cache, the Status is displayed as Suspended.
3 To resume SSD caching, do one of the following:
• From the menu bar, select Storage > SSD Cache > Resume.
• Right click on the SSD cache and select Resume.
In the Table view of the SSD cache, the Status is displayed as Optimal.
The newly selected I/O characteristic type is displayed in the Table view for the selected SSD cache.
NOTE: Depending on the cache capacity and workload, it may take about 10 to 20 hours to fully populate the cache. There
is valid information even after a run of a few minutes, but it takes a number of hours to obtain the most accurate
predictions.
NOTE: While the performance modeling tool is running, a progress bar is displayed in the main area of the window. You can
close or minimize the window and the performance modeling continues to run. You can even close the MD Storage Manager
and the performance modeling session continues to run.
NOTE: At the beginning of the ramp up time, the performance may be slower than if SSD cache was never enabled.
7 To save the results of a performance modeling session, click Save As and save the data to a .csv file.
A snapshot image is a logical image of the content of an associated base virtual disk created at a specific point-in-time, often known as a
restore point. This type of image is not directly readable or writable to a host because the snapshot image is used to save data from the
base virtual disk only. To allow the host to access a copy of the data in a snapshot image, you must create a snapshot virtual disk. This
snapshot virtual disk contains its own repository, which is used to save subsequent modifications made by the host application to the base
virtual disk without affecting the referenced snapshot image.
Topics:
To create a snapshot image, you must first create a snapshot group and reserve snapshot repository space for the virtual disk. The
repository space is based on a percentage of the current virtual disk reserve.
You can delete the oldest snapshot image in a snapshot group either manually or you can automate the process by enabling the Auto-
Delete setting for the snapshot group. When a snapshot image is deleted, its definition is removed from the system, and the space
occupied by the snapshot image in the repository is released and made available for reuse within the snapshot group.
• Read-Only snapshot virtual disks provide the host read access to a copy of the data contained in the snapshot image. However, the
host cannot modify the snapshot image. A Read-Only snapshot virtual disk does not require an associated repository.
• Read-Write snapshot virtual disks require an associated repository to provide the host write access to a copy of the data contained in
the snapshot image. A Read-Write snapshot virtual disk requires its own repository to save any subsequent modifications made by the
host application to the base virtual disk without affecting the referenced snapshot image. The snapshot is allocated from the storage
pool from which the original snapshot image is allocated. All I/O writes to the snapshot image are redirected to the snapshot virtual disk
repository that was allocated for saving data modifications. The data of the original snapshot image remains unchanged. For more
information, see Understanding Snapshot Repositories.
• Snapshot groups — A snapshot group is a collection of point-in-time images of a single associated base virtual disk.
• Consistency groups — A consistency group is a group of virtual disks that you can manage as a single entity. Operations performed on
a consistency group are performed simultaneously on all virtual disks in the group.
Snapshot groups
The purpose of a snapshot group is to create a sequence of snapshot images on a given base virtual disk without impacting performance.
You can set up a schedule for a snapshot group to automatically create a snapshot image at a specific time in the future or on a regular
basis.
When creating a snapshot group, the following rules apply:
A snapshot group uses a repository to save all data for the snapshot images contained in the group. A snapshot image operation uses less
disk space than a full physical copy because the data stored in the repository is only the data that has changed since the latest snapshot
image.
A snapshot group is created initially with one repository virtual disk. The repository initially contains a small amount of data, then increases
over time with subsequent data updates. You can increase the size of the repository by increasing its capacity or by adding
virtual disks to the repository.
• Consistency groups can be created initially with or without member virtual disks.
• Snapshot images can be created for a consistency group to enable consistent snapshot images between all member virtual disks.
• Consistency groups can be rolled back.
• A virtual disk can belong to multiple consistency groups.
• Only standard virtual disks and thin virtual disks can be included in a consistency group.
• A base virtual disk can reside on either a disk group or disk pool.
• If you attempt to create a snapshot image on a snapshot group and that snapshot group has reached its maximum number of snapshot
images, you can retry creating snapshot images after doing one of the following:
– Enable automatic deletion of snapshot images in the Advanced Options section of the Create wizard.
– Manually delete one or more snapshot images from the snapshot group.
• If you attempt to create a snapshot image and either of the following conditions is present, the creation may remain in a Pending
state:
– The base virtual disk that contains this snapshot image is a member of a Remote Replication group.
– The base virtual disk is currently synchronizing. When synchronization is complete, the snapshot image creation will complete.
• You cannot create a snapshot image on a failed virtual disk or on a snapshot group designated as Reserved.
1 From the AMW, select the base virtual disk you are copying and select Copy Services > Snapshot Image > Create.
The Select or Create a Snapshot Group window is displayed.
2 Do one of the following:
• If snapshot groups exist on the base virtual disk or if the base virtual disk already has the maximum number of snapshot groups,
the An Existing Snapshot Group radio button is selected by default. Go to step 3.
• If the base virtual disk does not contain any snapshot groups, the following message is displayed: There are no existing
snapshot groups on this base virtual disk. Use the option below to create a new snapshot
group. You must create a snapshot group on the base virtual disk before you can proceed. Go to step 4.
3 If you want to create a snapshot image on an existing snapshot group:
a Select a snapshot group from the existing snapshot group table.
NOTE: Ensure that you select a snapshot group that has not reached its maximum limit of snapshot images.
b Click Finish to automatically complete the snapshot image creation process and then go to step 5.
4 If you want to create a snapshot group for the snapshot image, you must select how you want to create the snapshot group
repository. Do one of the following:
• Select Automatic and click Finish to create the snapshot group repository with the default capacity settings. This is the
recommended option. Go to step 5.
• Select Manual and click Next to define the properties for the snapshot group repository. Then click Finish to continue with the
snapshot image creation process. Go to step 5.
NOTE: Use this option if you want to specify all the customizable settings for the snapshot group repository. The Manual
method is considered advanced. It is recommended that you fully understand physical disk consistency and optimal
physical disk configurations before proceeding with the Manual method.
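The same operation can also be performed from the command line. The following is a minimal sketch using the SMcli script interface; the array address and snapshot group name are illustrative placeholders, and the exact syntax should be verified against the CLI guide for your MD storage software version:
// Create a new snapshot image in an existing snapshot group.
// "AccountingData_SG_01" is an assumed group name.
SMcli 192.168.10.100 -c "create snapImage snapGroup=\"AccountingData_SG_01\";"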
A snapshot image creation operation can remain in a Pending state if either of the following conditions exists:
• The base virtual disk for a snapshot group or one or more member virtual disks of a consistency group that contains this snapshot
image is a member of an asynchronous remote replication group.
• The virtual disk or virtual disks are currently in a synchronizing operation.
The snapshot image creation operation completes as soon as the synchronization operation is complete. To cancel the pending snapshot
image creation before the synchronization operation completes, do the following:
1 From the AMW, select either the snapshot group or consistency group that contains the pending snapshot image.
2 Do one of the following:
• Copy Services > Snapshot Group > Advanced > Cancel Pending Snapshot Image.
• Copy Services > Consistency Group > Advanced > Cancel Pending Consistency Group Snapshot Image.
To delete a snapshot image from a snapshot group or consistency group:
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot image that you want to delete from the snapshot group or consistency group and then select one of the following
menu paths to delete the snapshot image:
• Copy Services > Snapshot Image > Delete.
• Copy Services > Consistency Group > Consistency Group Snapshot Image > Delete.
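For reference, snapshot images can also be deleted from the CLI. A hedged sketch (the deleteCount parameter, which removes the oldest images first, is an assumption drawn from the MD-series script syntax and should be checked against the CLI guide):
// Delete the oldest snapshot image in a snapshot group.
SMcli 192.168.10.100 -c "delete snapImage snapGroup=\"AccountingData_SG_01\" deleteCount=1;"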
• You can set up a schedule for a snapshot group to automatically create a snapshot image at a specific time in the future or on a regular
basis.
• You can set up a schedule for a consistency group to automatically create a snapshot image of each member virtual disk in the group at
a specific time in the future or on a regular basis.
You can create a schedule that runs daily or weekly in which you select specific days of the week (Sunday through Saturday). To make
scheduling easier, you can import an existing schedule for a snapshot group or consistency group. In addition, you can temporarily suspend
scheduled snapshot image creation by disabling the schedule. When a schedule is disabled, the scheduled snapshot image creations do not
occur.
• Using a schedule can result in a large number of snapshot images, so be sure that you have sufficient repository capacity.
• Each snapshot group or consistency group can have only one schedule.
• Scheduled snapshot image creations do not occur when the storage array is offline or turned off.
• If you delete a snapshot group or consistency group that has a schedule, the schedule is also deleted.
To create a schedule for snapshot image creation:
1 From the AMW, select the snapshot group or consistency group for which you want to create a schedule.
2 Do one of the following:
• Copy Services > Snapshot Group > Create Snapshot Image Schedule.
• Copy Services > Consistency Group > Consistency Group Snapshot Image > Create/Edit Schedule.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot group or consistency group for which you want to edit a schedule.
3 Do one of the following:
• Copy Services > Snapshot Group > Edit Snapshot Image Schedule.
• Copy Services > Consistency Group Snapshot Image > Create/Edit Schedule.
• Creating a snapshot virtual disk of a snapshot image, which allows you to retrieve deleted files from that snapshot virtual disk (the base
virtual disk remains undisturbed).
• Restoring a snapshot image to the base virtual disk, which allows you to roll back the base virtual disk to a previous point-in-time.
NOTE: The host has immediate access to the newly rolled-back base virtual disk, but the existing base virtual disk does not
allow the host read-write access after the rollback is initiated. You can create a snapshot of the base virtual disk just before you
start the rollback to preserve the pre-rollback base virtual disk for recovery purposes.
Snapshot images are useful any time you want to roll back to a known good data set at a specific point in time. For example, before
performing a risky operation on a virtual disk, you can create a snapshot image to enable “undo” capability for the entire virtual disk. You
can start a rollback from the following types of snapshot images:
• Snapshot image of a base virtual disk, which allows you to roll back the base virtual disk associated with a snapshot group to a previous
state.
• Consistency group snapshot image, which allows you to roll back all or select member virtual disks of the consistency group to a
previous state.
NOTE: You also can use the command line interface (CLI) to start a rollback operation from multiple snapshot images
concurrently, cancel a rollback operation, resume a rollback operation, modify the priority of a rollback operation, and view the
progress of a rollback operation.
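To illustrate the CLI capability described in the note above, the following sketch shows how a rollback might be started, resumed, and canceled with SMcli. Image identifiers take the snapGroupName:imageID form, where "newest" selects the most recent image; all names and addresses are placeholders, and the syntax should be verified against the CLI guide:
// Start a rollback from the most recent snapshot image in a group.
SMcli 192.168.10.100 -c "start snapImage [\"AccountingData_SG_01:newest\"] rollback;"
// Resume a paused rollback.
SMcli 192.168.10.100 -c "resume snapImage [\"AccountingData_SG_01:newest\"] rollback;"
// Cancel a rollback that is in progress.
SMcli 192.168.10.100 -c "stop snapImage [\"AccountingData_SG_01:newest\"] rollback;"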
Depending on your selection, either the Confirm Rollback Snapshot Image or the Confirm Rollback Consistency Group Snapshot Image
window is displayed.
3 If you are starting the rollback operation from a consistency group snapshot image, select the member virtual disks that you want to
roll back from the member virtual disks table; otherwise, skip to step 4.
4 In the Rollback priority area, use the slider bar to set a priority for the rollback operation.
• There are five priority rates available: lowest, low, medium, high, and highest.
• If the priority is set at the lowest rate, I/O activity is prioritized and the rollback operation takes longer to complete.
• If the priority is set at the highest rate, the rollback operation is prioritized, but I/O activity for the storage array may be
affected.
5 To confirm and start the rollback operation, type yes in the text box, and click Rollback.
You can view the progress of the rollback operation in the Properties pane when you select the base virtual disk or the consistency
group member virtual disk in the Logical pane.
1 From the AMW, select the Storage & Copy Services tab.
2 Select a snapshot image of either a base virtual disk or of a consistency group’s member virtual disk and then select Copy Services >
Snapshot Image > Rollback > Resume.
The Confirm Resume Rollback window is displayed.
3 Click Resume.
Depending on the outcome, one of the following occurs:
• If the resume rollback operation is successful — You can view the progress of the rollback operation in the Properties pane when
you select the base virtual disk or the consistency group member virtual disk in the Logical pane.
• If the resume rollback operation is not successful — The rollback operation is paused again. The base virtual disk or member virtual
disk displays Needs Attention icons, and the controller logs the event to the Major Event Log (MEL). You can follow the Recovery
Guru procedure to correct the problem or contact your Technical Support representative.
NOTE: If the snapshot group on which the snapshot image resides has one or more snapshot images that are automatically
purged, the snapshot image used for the rollback operation may not be available for future rollbacks.
1 From the AMW, select the Storage & Copy Services tab.
2 Select a snapshot image of either a base virtual disk or of a consistency group’s member virtual disk and then select Copy Services >
Snapshot Image > Rollback > Advanced > Cancel.
The Confirm Cancel Rollback window is displayed.
3 Click Yes to cancel the rollback operation.
4 Type yes in the text box, and click OK.
The Rollback operation is a long-running operation. The Operations in Progress window displays all of the long-running operations that are
currently running on the storage array. From this window, you can view the progress of the rollback operation for a snapshot image and its
associated base virtual disk or consistency group member virtual disk.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage array for which you want to display the operations in progress.
The Operations in Progress window is displayed.
3 To view the progress for operations that affect a base virtual disk or a consistency group snapshot image, click the triangle next to a
base virtual disk or a consistency group snapshot image to expand or collapse it.
4 To change the interval for refreshing the display, use the spinner box in the lower-right corner of the window, and click Update.
5 To refresh the display immediately, click Refresh Now.
There are five priority rates available: lowest, low, medium, high, and highest.
• If the priority is set at the lowest rate, I/O activity is prioritized and the rollback operation takes longer to complete.
• If the priority is set at the highest rate, the rollback operation is prioritized, but I/O activity for the storage array may be
affected.
1 From the AMW, select the Storage & Copy Services tab.
2 Do one of the following:
• Select a snapshot image of either a base virtual disk or of a consistency group’s member virtual disk and then select Copy
Services > Snapshot Image > Rollback > Change Priority.
• Select a consistency group snapshot image and then select Copy Services > Consistency Group Snapshot Image > Rollback >
Change Priority.
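The rollback priority can also be set at the snapshot group level from the CLI. A hedged sketch (the rollbackPriority parameter and its values are assumptions drawn from the MD-series script syntax; verify against the CLI guide):
// Raise the rollback priority for a snapshot group.
SMcli 192.168.10.100 -c "set snapGroup [\"AccountingData_SG_01\"] rollbackPriority=high;"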
When creating a snapshot group, keep these guidelines in mind:
• When a base virtual disk that contains a snapshot group is added to an asynchronous remote replication group, the system
automatically changes the repository full policy to automatically purge the oldest snapshot image and sets the autodelete limit to the
maximum allowable snapshot limit for a snapshot group.
• If the base virtual disk resides on a standard disk group, the repository members for any associated snapshot group can reside on either
a standard disk group or a disk pool. If a base virtual disk resides on a disk pool, all repository members for any associated snapshot
group must reside on the same disk pool as the base virtual disk.
• You cannot create a snapshot group on a failed virtual disk.
• If you attempt to create a snapshot image, that snapshot image creation operation might remain in a Pending state because of the
following conditions:
– The base virtual disk that contains this snapshot image is a member of an asynchronous remote replication group.
– The base virtual disk is in a synchronizing operation. The snapshot image creation completes when the synchronization operation is
complete.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the base virtual disk whose data you want to copy and then select Copy Services > Snapshot Group > Create.
The Snapshot Group Settings window is displayed.
3 In the Snapshot group name field, enter a unique name (30 character maximum) that best describes the virtual disk selected for this
group. For example, AccountingData.
By default, the snapshot group name is shown in the name text box as: [base-virtual-disk-name]_SG_[sequence-number]. In this
convention, SG (snapshot group) is the appended suffix and sequence-number is the chronological number of the
snapshot group relative to the base virtual disk.
For example, if you create the first snapshot group for a base virtual disk called “Accounting”, the default name of the snapshot group
is “Accounting_SG_01”. The default name of the next snapshot group you create based on “Accounting” is “Accounting_SG_02”.
4 Select Create the first Snapshot Image Now to take the first copy of the associated base virtual disk at the same time the snapshot
group is created.
5 Do one of the following to select how you want to create the snapshot group repository:
• Select Automatic and click Finish to create the snapshot group repository with the default capacity settings. This option is the
recommended one.
• Select Manual and click Next to define the properties for the snapshot group repository; then click Finish to continue with the
snapshot group creation process.
NOTE: Use the Manual option if you want to specify all the customizable settings for the snapshot group repository. The Manual
method is considered advanced and only those who understand physical disk consistency and optimal physical disk
configurations should use this method. See Creating The Snapshot Group Repository (Manually) for instructions on how to
set the repository parameters.
6 Click Finish.
The system performs the following actions:
• The snapshot group and its properties under the individual virtual disk node for the associated base virtual disk are displayed in the
navigation tree.
• If Create the first Snapshot Image Now was selected, the system takes a copy of the associated base virtual disk and the
Snapshot Image Successfully Created window is displayed.
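For comparison, a snapshot group can be created and its first snapshot image taken from the CLI. A minimal sketch (the sourceVolume parameter name follows the general MD-series script syntax and may differ in your CLI guide; all names are placeholders):
// Create a snapshot group on the base virtual disk "AccountingData",
// then take the first snapshot image.
SMcli 192.168.10.100 -c "create snapGroup userLabel=\"AccountingData_SG_01\" sourceVolume=\"AccountingData\";"
SMcli 192.168.10.100 -c "create snapImage snapGroup=\"AccountingData_SG_01\";"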
You can change the following settings for a snapshot group:
• Auto-Delete Settings — You can configure each snapshot group to keep the total number of snapshot images in the group at or below
a user-defined maximum. When this option is enabled, the system automatically deletes the oldest snapshot image in the group, any
time a new snapshot is created, to comply with the maximum number of snapshot images allowed for the group.
• Snapshot Group Repository Settings — You can define a maximum percentage for the snapshot group repository that determines
when a warning is triggered when the capacity of a snapshot group repository reaches the defined percentage. In addition, you can
specify which policy to use when the capacity of the snapshot group repository reaches its maximum defined percentage:
– Automatically purge oldest snapshot image — The system automatically purges the oldest snapshot image in the snapshot group,
which releases the repository’s reserve space for reuse within the snapshot group.
– Reject writes to base virtual disk — When the repository reaches its maximum defined percentage, the system rejects any I/O write
request to the base virtual disk that triggered the repository access.
1 From the AMW, select the Storage & Copy Services tab.
2 From the snapshot groups category node, select the snapshot group that you want to change and then select Copy Services >
Snapshot Group > Change Settings.
The Change Snapshot Group Settings window is displayed.
3 Change the snapshot group settings as required.
4 Click OK to apply your changes to the snapshot group.
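These settings map to snapshot group parameters in the CLI as well. A hedged sketch (parameter names such as autoDeleteLimit, repositoryFullLimit, and repositoryFullPolicy are assumptions based on the MD-series script syntax; verify against the CLI guide):
// Enable auto-delete at 30 images, warn at 75 percent full, and
// purge the oldest image when the repository reaches its limit.
SMcli 192.168.10.100 -c "set snapGroup [\"AccountingData_SG_01\"] autoDeleteLimit=30 repositoryFullLimit=75 repositoryFullPolicy=purgeSnapImages;"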
• A name can consist of letters, numbers, and the special characters underscore (_), hyphen (-), and pound (#). If you choose any other
characters, an error message is displayed. You are prompted to choose another name.
• Limit the name to 30 characters. Any leading and trailing spaces in the name are deleted.
• Use a unique, meaningful name that is easy to understand and remember.
• Avoid arbitrary names or names that would quickly lose their meaning in the future.
• If you try to rename a snapshot group with a name that is already in use by another snapshot group, an error message is displayed, and
you are prompted to choose another name for the group.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot group that you want to rename and then select Copy Services > Snapshot Group > Rename.
The Rename Snapshot Group window is displayed.
3 Type a new name for the snapshot group and then click Rename.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot group that you want to delete and then select Copy Services > Snapshot Group > Delete.
The Confirm Delete window is displayed.
3 Select Delete all repositories associated with this object? if you want to delete the associated repository that exists for the snapshot
group.
4 Type yes in the text box and then click Delete to delete the snapshot group.
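A corresponding CLI sketch for deleting a snapshot group, with the option to delete its repository members (the deleteRepositoryMembers parameter is an assumption; verify against the CLI guide):
// Delete a snapshot group and its associated repository members.
SMcli 192.168.10.100 -c "delete snapGroup [\"AccountingData_SG_01\"] deleteRepositoryMembers=TRUE;"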
The conversion operation requires that a repository be provisioned to support write operations on the snapshot virtual disk.
1 From the AMW, select the Storage & Copy Services tab.
2 Select either a snapshot virtual disk or a consistency group member’s snapshot virtual disk and then select Copy Services >
Snapshot Virtual disk > Convert to Read-Write.
3 Select how you want to create the repository for the Read-Write snapshot virtual disk. Do one of the following:
• Select Automatic to create the snapshot virtual disk repository with the default capacity settings. This is the recommended option.
• Select Manual to define the properties for the snapshot virtual disk repository. Use this option if you want to specify all of the
customizable settings for the snapshot virtual disk repository. The Manual method is considered advanced and only those who
understand physical disk consistency and optimal physical disk configurations should use this method.
4 Click Convert to convert the read-only snapshot virtual disk to read-write.
The snapshot virtual disk or consistency group member’s snapshot virtual disk is displayed as Read-Write in the Mode column of the
table, and the Repository columns are now populated.
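The conversion may also be scriptable. The following is only a sketch: the convertToReadWrite parameter shown here is an assumption and may not match your CLI guide, so confirm the command before use:
// Convert a read-only snapshot virtual disk to read-write
// (parameter name is an assumption; check the CLI guide).
SMcli 192.168.10.100 -c "set snapVolume [\"AccountingData_SV_01\"] convertToReadWrite;"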
When creating a consistency group, keep these guidelines in mind:
• If the base virtual disk resides on a standard disk group, the repository members for any associated consistency group can reside on
either a standard disk group or a disk pool. If a base virtual disk resides on a disk pool, all repository members for any associated
consistency group must reside on the same disk pool as the base virtual disk.
• You cannot create a consistency group on a failed virtual disk.
• A consistency group contains one snapshot group for each virtual disk that is a member of the consistency group. You cannot
individually manage a snapshot group that is associated with a consistency group. Instead you must perform the manage operations
(create snapshot image, delete snapshot image or snapshot group, and rollback snapshot image) at the consistency group level.
• If you attempt to create a consistency group snapshot image, the operation might remain in a Pending state because of the following
conditions:
– The base virtual disk that contains this consistency group snapshot image is a member of an asynchronous remote replication
group.
– The base virtual disk is in a synchronizing operation. The consistency group snapshot image creation completes when the
synchronization operation is complete.
1 From the AMW, select the Storage & Copy Services tab.
2 Select Copy Services > Consistency Group > Create.
The Consistency Group Settings window is displayed.
3 In the Consistency group name field, enter a unique name (30-character maximum) that best describes the member virtual disks that
you want to add for this group.
By default, the consistency group name is shown in the name text box as: CG + sequence-number. In this convention, CG
(consistency group) is the prefix, and sequence-number is the chronological number of the consistency group, incremented
based on how many consistency groups currently exist.
4 Select whether you want to add the member virtual disks to the consistency group now or later:
• Select Add members now and then from the eligible member virtual disks, select the virtual disks that you want to add as
members to the consistency group. If you choose this method, you must create a repository for each member of the consistency
group. Go to step 5. You can click the Select all check box to add all the virtual disks displayed in the Eligible virtual disks table to
the consistency group.
• Select Add members later and then click Finish to create the consistency group without member virtual disks. Go to step 6.
The Eligible virtual disks table shows only those virtual disks that are capable of being used in the consistency group. To be eligible to
be a member of a consistency group, a virtual disk cannot be in a Failed state and must contain less than the maximum allowable
number of associated snapshot groups.
5 Select how you want to create the repositories for each member in the consistency group.
• Select Automatic and click Finish to create the repositories with the default capacity settings. This option is the recommended
one.
• Select Manual and click Next to define the capacity settings for the repositories; and then click Finish to continue with the
consistency group creation process. You can click Edit individual repository candidates to manually edit a repository candidate for
each member virtual disk.
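For reference, a consistency group can be created and a consistent snapshot image taken across all of its members from the CLI. A minimal sketch with placeholder names (verify the syntax against the CLI guide):
// Create a consistency group, then take a snapshot image of all members.
SMcli 192.168.10.100 -c "create consistencyGroup userLabel=\"CG_01\";"
SMcli 192.168.10.100 -c "create cgSnapImage consistencyGroup=\"CG_01\";"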
• There is a minimum required capacity for a consistency group repository (depending on your configuration).
• When you define the capacity requirements for a repository, keep in mind any future requirements that you might have for other virtual
disks in this disk group or disk pool. Make sure that you have enough capacity to meet your data storage needs, but do not
over-allocate, because you can quickly use up all the storage in your storage array.
• The list of repository candidates can contain both new and existing repository virtual disks. Existing repository virtual disks are left on
the storage array by default when you delete a consistency group. Existing repository virtual disks are placed at the top of the list. The
benefit to reusing an existing repository virtual disk is that you can avoid the initialization process that occurs when you create a new
one.
1 From the AMW, select the Storage & Copy Services tab.
2 Select Copy Services > Consistency Group > Create.
The Consistency Group Settings window is displayed.
3 Select Manual and click Next to customize the repository candidate settings for the consistency group.
The Consistency Group Repository Settings - Manual window is displayed.
4 Select how you want to filter the repository candidates for each member virtual disk in the consistency group, based on either a
percentage of the base virtual disk capacity or by preferred capacity.
The best repository candidate for each member virtual disk based on the selections you made is displayed.
5 Select Edit individual repository candidates if you want to edit repository candidates for the member virtual disks.
6 Select the repository, from the Repository candidates table, that you want to use for each member virtual disk in the consistency
group.
NOTE: Select a repository candidate that is closest to the capacity you specified.
• The Repository candidates table shows both new and existing repositories that are capable of being used for each member virtual
disk in the consistency group based on the value you specified for percentage or the value you specified for preferred capacity.
• By default, the system displays the repositories for each member virtual disk of the consistency group using a value of 20% of the
member virtual disk’s capacity. It filters out undersized repository candidates, and those with different Data Service (DS)
attributes. If appropriate candidates are not returned using these settings, you can click Run Auto-Choose to provide automatic
candidate recommendations.
• The Difference column shows the mathematical difference between your selected capacity and the actual capacity of the
repository candidate. If the repository candidate is new, the system uses the exact capacity size that you specified and displays
zero (0) in the Difference column.
7 To edit an individual repository candidate:
a Select the candidate from the Repository candidates table and click Edit to modify the capacity settings for the repository.
b Click OK.
8 Select View advanced options and then accept or change the following default settings as appropriate.
• A name can consist of letters, numbers, and the special characters underscore (_), hyphen (-), and pound (#). If you choose any other
characters, an error message is displayed. You are prompted to choose another name.
• Limit the name to 30 characters. Any leading and trailing spaces in the name are deleted.
• Use a unique, meaningful name that is easy to understand and remember.
• Avoid arbitrary names or names that would quickly lose their meaning in the future.
• If you try to rename a consistency group with a name that is already in use by another consistency group, an error message is displayed,
and you are prompted to choose another name for the group.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the consistency group that you want to rename and then select Copy Services > Consistency Group > Rename.
The Rename Consistency Group window is displayed.
3 Type a new name for the consistency group and then click Rename.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the consistency group that you want to delete and then select Copy Services > Consistency Group > Delete.
The Confirm Delete window is displayed.
3 Select Delete all repositories associated with this consistency group if you want to delete the associated repository that exists for
the consistency group.
4 Type yes in the text box and then click Delete to delete the consistency group.
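A corresponding CLI sketch for deleting a consistency group (the deleteRepositoryMembers parameter is an assumption; verify against the CLI guide):
// Delete a consistency group and its associated repositories.
SMcli 192.168.10.100 -c "delete consistencyGroup [\"CG_01\"] deleteRepositoryMembers=TRUE;"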
You can change the following settings for a consistency group:
• Auto-Delete Settings — You can configure each consistency group to keep the total number of snapshot images in the group at or
below a user-defined maximum. When this option is enabled, the system automatically deletes the oldest snapshot image in the group,
any time a new snapshot is created, to comply with the maximum number of snapshot images allowed for the group.
• Consistency Group Repository Settings — You can define a maximum percentage for the consistency group member repository that
determines when a warning is triggered when the capacity of a consistency group member repository reaches the defined percentage.
1 From the AMW, select the Storage & Copy Services tab.
2 From the consistency groups category node, select the consistency group that you want to change and then select Copy Services >
Consistency Group > Change Settings.
The Change Consistency Group Settings window is displayed.
3 Change the consistency group settings as required.
4 Click OK to apply your changes to the consistency group.
1 From the Array Management Window (AMW), select the Storage & Copy Services tab.
2 Do one of the following:
• Select the base virtual disk that you want to add to the consistency group and then select Storage > Virtual disk > Add to
Consistency Group. The Select Consistency Group and Repository window is displayed.
• Select the consistency group to which you want to add member virtual disks and then select Copy Services > Consistency Group
> Add Member Virtual Disks. The Select Virtual Disks and Repositories window is displayed.
3 Depending on your selection in step 2, do one of the following:
• In the Select Consistency Group and Repository window, select the consistency group to which you want to add the base virtual
disk from the Consistency groups table.
• In the Select Virtual Disks and Repositories window, select the member virtual disks that you want to add to the consistency
group from the Eligible virtual disks table. The Eligible virtual disks table shows only those virtual disks that are capable of being used in the
consistency group. You can click the Select all check box to add all the virtual disks displayed in the Eligible virtual disks table to
the consistency group.
4 Select how you want to create the repository for the member virtual disk(s) you are adding to the consistency group:
• Select Automatic and click Finish to create the repository with the default capacity settings. This option is the recommended one.
• Select Manual and click Next to define the capacity settings for the repository and then click Finish.
Use the Manual option if you want to specify all of the customizable settings for the repository. The Manual method is considered
advanced and only those who understand physical disk consistency and optimal physical disk configurations should use this method.
The new member virtual disk(s) for the consistency group are displayed in the Member Virtual Disks table.
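Adding a member can also be expressed in the CLI. A hedged sketch (the addCGMemberVolume parameter follows the general MD-series script syntax; verify against the CLI guide):
// Add the base virtual disk "AccountingData" to consistency group "CG_01".
SMcli 192.168.10.100 -c "set consistencyGroup [\"CG_01\"] addCGMemberVolume=\"AccountingData\";"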
1 From the AMW, select the Storage & Copy Services tab.
2 Do one of the following:
• Select the base virtual disk that you want to remove from the consistency group and then select Storage > Virtual disk > Remove
From Consistency Group.
• Select the consistency group from which you want to remove member virtual disks and then select Copy Services > Consistency Group
> Remove Member Virtual Disks.
3 If you selected a base virtual disk that is a member of multiple consistency groups or if you selected a consistency group from which
you want to remove member virtual disks, do one of the following:
• Select one or more consistency groups, from the Consistency groups table, that you want to remove the base virtual disk from
and then click Remove.
NOTE: You can click the Select all check box to remove the virtual disk from all the consistency groups displayed in the
table.
• Select the member virtual disks, from the Member virtual disks table, that you want to remove from the consistency group and
then click Remove.
NOTE: You can click the Select all check box to remove all the virtual disks displayed in the table.
4 Select Delete all repositories associated with this member virtual disk if you want to delete all associated repositories that exist
for one or more member virtual disks in the consistency group.
5 Type yes in the text box and then click Delete to remove one or more member virtual disks from the consistency group.
The system removes the member virtual disks from the consistency group; they are not deleted.
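Removing a member has a similar CLI form. A hedged sketch (the removeCGMemberVolume and deleteRepositoryMembers parameters are assumptions; verify against the CLI guide):
// Remove a member virtual disk from the group without deleting
// its associated repository.
SMcli 192.168.10.100 -c "set consistencyGroup [\"CG_01\"] removeCGMemberVolume=\"AccountingData\" deleteRepositoryMembers=FALSE;"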
• A read-only snapshot virtual disk provides a host application with READ access to a copy of the data contained in the snapshot image,
but without the ability to modify the snapshot image. A read-only snapshot virtual disk does not have an associated repository.
• A read-write snapshot virtual disk requires an associated repository to provide the host application with WRITE access to a copy of the
data contained in the snapshot image.
For example, if you create the first snapshot virtual disk for a base virtual disk called “Accounting”, the default name of the snapshot
virtual disk is “Accounting_SV_01”. The default name of the next snapshot virtual disk you create based on “Accounting” is
“Accounting_SV_02”.
There is a 30-character limit. After you reach this limit, you can no longer type in the text box. If the base virtual disk name is 30
characters, the default name uses the base virtual disk name truncated just enough to add the suffix “SV” and the sequence string.
5 In the Map to host drop-down, specify how you want to map the host to the snapshot virtual disk.
• Map Now to Default Group – The virtual disk is automatically assigned a logical unit number (LUN) and is accessible by any hosts
that are connected to the storage array.
• Map Later – The virtual disk is not assigned a LUN and is not accessible by any hosts until you go to the Host Mappings tab and
assign a specific host and LUN to this virtual disk.
• Select a specific host – You can select a specific host or host group from the list. This option is available only if Storage
Partitioning is enabled.
NOTE: Make sure there are enough free LUNs on the host or host group that you selected to map to a snapshot virtual disk.
6 Select how to grant host access to the snapshot virtual disk. Do one of the following:
• Select Read Write and go to step 7.
• Select Read Only and click Finish to create the snapshot virtual disk. Go to step 8.
NOTE: Repositories are not required for Read Only snapshot virtual disks.
Keep these guidelines in mind when you grant host access to a snapshot virtual disk:
• Each host has its own logical unit number (LUN) address space, which allows the same LUN to be used by different host groups or
hosts to access snapshot virtual disks in a storage array.
• You can define one mapping for each snapshot virtual disk in the storage array.
• Mappings are shared between controllers in the storage array.
• The same LUN cannot be used twice by a host group or a host to access a snapshot virtual disk. You must use a unique LUN.
• An access virtual disk mapping is not required for out-of-band storage arrays.
7 Choose how you want to create the repository for the Read-Write snapshot virtual disk. Do one of the following:
• Select Automatic and click Finish to create the snapshot virtual disk repository with the default capacity settings. This is the
recommended option.
• Select Manual and click Next to define the properties for the snapshot virtual disk repository; then click Finish.
Use the Manual option if you want to specify all the customizable settings for the snapshot virtual disk repository. The Manual method is
considered advanced and only those who understand physical disk consistency and optimal physical disk configurations should use this
method.
8 Click Finish.
The snapshot virtual disk and its properties are displayed in the navigation tree under the individual virtual disk node for the
associated base virtual disk. The snapshot virtual disk is added as a new virtual disk that contains the snapshot image information, which is the
data of the virtual disk at the particular time of snapshot image creation.
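A snapshot virtual disk can also be created from an existing snapshot image through the CLI. A minimal sketch (the snapImageID form, the readOnly flag, and repositoryFullLimit are assumptions drawn from the MD-series script syntax; verify against the CLI guide):
// Create a read-write snapshot virtual disk from the newest image,
// warning when its repository is 75 percent full.
SMcli 192.168.10.100 -c "create snapVolume userLabel=\"AccountingData_SV_01\" snapImageID=\"AccountingData_SG_01:newest\" repositoryFullLimit=75;"
// A read-only snapshot virtual disk needs no repository.
SMcli 192.168.10.100 -c "create snapVolume userLabel=\"AccountingData_SV_02\" snapImageID=\"AccountingData_SG_01:newest\" readOnly;"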
• There is a minimum required capacity for a snapshot group repository which depends on your configuration.
• When you define the capacity requirements for a repository, keep in mind any future requirements that you may have for other virtual
disks in this disk group or disk pool. Make sure that you have enough capacity to meet your data storage needs without over-allocating
capacity that uses up the storage in your system.
• The list of repository candidates can contain both new and existing repository virtual disks. Existing repository virtual disks are placed at
the top of the list. The benefit of reusing an existing repository virtual disk is that you can avoid the initialization process that occurs
when you create a new one.
1 From the Snapshot Virtual Disk Settings window, select Manual and click Next to define the properties for the snapshot virtual disk
repository.
The Snapshot Virtual disk Repository Settings - Manual window is displayed.
2 Choose how you want to filter the repository candidates in the Repository candidates table, based on either a percentage of the base
virtual disk capacity or by preferred capacity.
The repository candidates that you selected are displayed.
3 Select the repository, from the Repository candidates table, that you want to use for the snapshot virtual disk and select a repository
candidate that is closest to the capacity you specified.
• The Repository candidates table shows both new and existing repositories that are capable of being used for the snapshot virtual
disk based on the value you specified for percentage or the value you specified for preferred capacity.
• The Difference column shows the mathematical difference between your selected capacity and the actual capacity of the
repository candidate. If the repository candidate is new, then the system uses the exact capacity size that you specified and
displays zero (0) in the Difference column.
4 In the % Full box, define the value that determines when a warning is triggered when the capacity of a snapshot virtual disk repository
reaches the defined percentage.
5 Click Finish.
1 From the AMW, select the Storage & Copy Services tab.
2 Select a base virtual disk, and then select Copy Services > Snapshot Virtual disk > Change Settings.
The Change Snapshot Virtual Disk Settings window is displayed.
3 Modify the repository full settings as required.
4 Click OK to apply the changes.
Disable a snapshot virtual disk or consistency group snapshot virtual disk when any of the following conditions apply:
• You are finished with the snapshot virtual disk or consistency group snapshot virtual disk for the time being.
• You intend to re-create the snapshot virtual disk or consistency group snapshot virtual disk (that is designated as read-write) later and
want to retain the associated snapshot repository virtual disk so that it does not need to be created again.
• You want to maximize the storage array performance by stopping write activity to the snapshot repository virtual disk.
If you decide to re-create the snapshot virtual disk or consistency group snapshot virtual disk, you must choose a snapshot image from the
same base virtual disk.
If you disable the snapshot virtual disk or consistency group snapshot virtual disk, the system performs the following actions:
• Retains the World-Wide Name (WWN) for the snapshot virtual disk or consistency group snapshot virtual disk.
• Retains the snapshot virtual disk or consistency group snapshot virtual disk’s association with the same base virtual disk.
• Retains the snapshot virtual disk or consistency group snapshot virtual disk’s associated repository—if the virtual disk is designated as
read-write.
• Retains any host mapping and access (any read-write requests fail).
• Removes the snapshot virtual disk or consistency group snapshot virtual disk’s association with the current snapshot image.
• For a consistency group snapshot virtual disk, disables each member’s snapshot virtual disk.
NOTE: If you are finished with the snapshot virtual disk or consistency group snapshot virtual disk and do not intend to re-create
it later, you must delete the virtual disk, instead of disabling it.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot virtual disk or consistency group snapshot virtual disk that you want to disable and then select one of the
following:
• Copy Services > Snapshot Virtual disk > Disable. The Confirm Disable Snapshot Virtual Disk window is displayed.
• Copy Services > Consistency Group Snapshot Virtual Disk > Disable. The Confirm Disable Consistency Group Snapshot
Virtual Disk window is displayed.
3 Type yes in the text box and then click Disable to disable the snapshot virtual disk.
The snapshot virtual disk or consistency group snapshot virtual disk is displayed in the Logical pane with the Disabled Snapshot status
icon. If you disabled a read-write snapshot virtual disk or consistency group snapshot virtual disk, its associated snapshot repository
virtual disk does not change status. The write activity to the snapshot repository virtual disk stops until the snapshot virtual disk or
consistency group snapshot virtual disk is re-created.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot virtual disk or consistency group snapshot virtual disk that you want to re-create and then select one of the
following:
• Copy Services > Snapshot Virtual disk > Re-create. The Confirm Re-Create Snapshot Virtual Disk window is displayed.
• Copy Services > Consistency Group Snapshot Virtual Disk > Re-create. The Confirm Re-Create Consistency Group Snapshot
Virtual Disk window is displayed.
3 Select whether to re-create the snapshot virtual disk or consistency group snapshot virtual disk using an existing snapshot image, or a
new snapshot image and then click Re-create.
The status of the snapshot virtual disk or consistency group snapshot virtual disk is changed from Disabled to Optimal.
NOTE: If you attempt to create a snapshot virtual disk for a snapshot image whose creation is still pending, the pending state is
due to one of the following conditions:
• The base virtual disk that contains this snapshot image is a member of an asynchronous remote replication group
• The base virtual disk is in a synchronizing operation. The snapshot image is created when the synchronization operation is
completed.
To create a consistency group snapshot virtual disk:
1 From the AMW, select the Storage & Copy Services tab.
2 Do one of the following:
• Select a consistency group, and then select Copy Services > Consistency Group > Create Consistency Group Snapshot Virtual
Disk. The Select Existing Snapshot Image or New Snapshot Image window is displayed. Go to step 3.
• Select a consistency group snapshot image from the Consistency Group Snapshot Images table, and then select Copy Services
> Consistency Group Snapshot Image > Create Consistency Group Snapshot Virtual Disk. The Consistency Group Snapshot
Virtual Disk Settings window is displayed. Go to step 4.
3 If you selected a consistency group in step 2, select the consistency group snapshot image for which you want to create a snapshot
virtual disk. Do one of the following:
• Select An existing snapshot image and then select a snapshot image from the consistency group snapshot images table and click
Next.
• Select A new snapshot image, then select a snapshot group from the existing snapshot group table, and then click Next.
NOTE: Repositories are not required for Read-Only snapshot virtual disks.
9 Select how you want to create the snapshot virtual disk repositories for each member in the consistency group. Do one of the
following:
• Select Automatic and click Finish to create each snapshot virtual disk repository with the default capacity settings. This option is
the recommended one.
• Select Manual and click Next to define the properties for each snapshot virtual disk repository; then click Finish to continue with
the snapshot virtual disk creation process. You can click Edit individual repository candidates to manually edit a repository
candidate for each member virtual disk.
Use this option if you want to specify all the customizable settings for the snapshot virtual disk repository. The Manual method is
considered advanced and only those who understand physical disk consistency and optimal physical disk configurations should use this
method.
The snapshot virtual disk and its properties for the associated consistency group are displayed in the navigation tree.
• There is a minimum required capacity for a snapshot virtual disk repository (depending on your configuration).
1 From the AMW, select the Storage & Copy Services tab.
2 Select the consistency group for which you want to create a snapshot virtual disk and then select Copy Services > Consistency
Group > Create Consistency Group Snapshot Virtual Disk.
The Consistency Group Snapshot Virtual Disk Settings window is displayed.
3 Select Manual and click Next to customize the repository candidate settings for the consistency group.
The Consistency Group Snapshot Virtual Disk Repository Settings - Manual window is displayed.
4 Select how you want to filter the repository candidates for each member virtual disk in the consistency group, based on either a
percentage of the base virtual disk capacity or by preferred capacity.
The best repository candidate for each member virtual disk based on your selections is displayed.
5 Select Edit individual repository candidates if you want to edit repository candidates for the member virtual disks.
6 Select the repository, from the Repository candidates table, that you want to use for each member virtual disk in the consistency
group.
Select a repository candidate that is closest to the capacity you specified.
• The Repository candidates table shows both new and existing repositories that are capable of being used for each member virtual
disk in the consistency group based on the value you specified for percentage or the value you specified for preferred capacity.
• By default, the system displays the repositories for each member virtual disk of the consistency group using a value of 20 percent
of the member virtual disk’s capacity. It filters out undersized repository candidates, and those with different Data Service (DS)
attributes. If appropriate candidates are not returned using these settings, you can click Run Auto-Choose to provide automatic
candidate recommendations.
• The Difference column shows the mathematical difference between your selected capacity and the actual capacity of the
repository candidate. If the repository candidate is new, the system uses the exact capacity size that you specified and displays
zero (0) in the Difference column.
7 To edit an individual repository candidate:
a Select the candidate from the Repository candidates table and click Edit to modify the capacity settings for the repository.
b Click OK.
8 In the % Full box, define the value that determines when a warning is triggered when the capacity of a consistency group snapshot
virtual disk repository reaches the defined percentage.
9 Click Finish to create the repository.
Keep these guidelines in mind before you re-create a snapshot virtual disk or consistency group snapshot virtual disk:
• The snapshot virtual disk or consistency group snapshot virtual disk must be in either an Optimal status or Disabled status.
• For consistency group snapshot virtual disk, all member snapshot virtual disks must be in a Disabled state before you can re-create the
consistency group snapshot virtual disk.
• You cannot re-create an individual member snapshot virtual disk; you can re-create only the overall consistency group snapshot virtual
disk.
• All write data on any associated snapshot repository virtual disk is deleted. Snapshot virtual disk or consistency group snapshot virtual
disk parameters remain the same as the previously disabled virtual disk parameters. The original names for the snapshot virtual disk or
consistency group snapshot virtual disk are retained. You can change these names after the re-create option completes.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot virtual disk or consistency group snapshot virtual disk that you want to re-create and then select one of the
following:
• Copy Services > Snapshot Virtual disk > Re-create. The Confirm Re-Create Snapshot Virtual Disk window is displayed.
• Copy Services > Consistency Group Snapshot Virtual Disk > Re-create. The Confirm Re-Create Consistency Group Snapshot
Virtual Disk window is displayed.
You can change the modification priority for the overall repository of the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Replicated pair
NOTE: Changing the modification priority by using this option modifies the priority only for the overall repository that you
selected. The settings are applied to all individual repository virtual disks contained within the overall repository.
To change the modification priority:
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage object for which you want to change the modification priority.
3 Right click the selected storage object and select Overall Repository > Change Modification Priority.
4 Select the modification priority, and click OK.
You can change the media scan settings for the overall repository of the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Replicated pair
• Changing the media scan settings by using this option modifies the settings only for the overall repository that you selected.
• The settings are applied to all individual repository virtual disks contained within the overall repository.
1 In the AMW, select the Storage & Copy Services tab and select any virtual disk.
2 Select the storage object for which to change the media scan settings.
3 Right click the selected storage object and select Overall Repository > Change Media Scan Settings.
The Change Media Scan Settings window is displayed.
4 Select Enable media scan.
You can change the Pre-Read Consistency Check setting for the overall repository of the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Replicated pair
• Changing the Pre-Read Consistency Check setting modifies the setting only for the overall repository that you selected.
• The Pre-Read Consistency Check setting is applied to all individual repository virtual disks contained within the overall repository.
• If an overall repository virtual disk that is configured with pre-read is migrated to a RAID level that does not maintain consistency
information, the metadata of the overall repository virtual disk continues to show that pre-read is enabled. However, reads to that
overall repository virtual disk ignore consistency pre-read. If the virtual disk is subsequently migrated back to a RAID level that supports
consistency, the option becomes available again.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage object for which to change the pre-read consistency check settings.
3 Right click the select object and select Overall Repository > Change Pre-read Consistency Check.
4 Select Enable pre-read consistency check, and click OK.
NOTE: Enabling the option on overall repository virtual disks without consistency does not affect the virtual disk. However,
the attribute is retained for that overall repository virtual disk if it is ever changed to one with consistency information.
5 Click Yes.
When you delete a snapshot virtual disk or consistency group snapshot virtual disk, the system performs the following actions:
• Deletes all member snapshot virtual disks—for a consistency group snapshot virtual disk.
• Removes all associated host mappings.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the snapshot virtual disk or consistency group snapshot virtual disk that you want to delete and then select one of the
following:
• Copy Services > Snapshot Virtual disk > Delete. The Confirm Delete Snapshot Virtual Disk window is displayed.
• Copy Services > Consistency Group Snapshot Virtual Disk > Delete. The Confirm Delete Consistency Group Snapshot Virtual
Disk window is displayed.
3 If the snapshot virtual disk or the consistency group snapshot virtual disk is read-write, select the option to delete the associated
repository.
4 Type yes in the text box and then click Delete to delete the snapshot virtual disk or consistency group snapshot virtual disk.
You can increase the repository capacity for the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Consistency group member snapshot virtual disk
• Replicated pair
Use this option when you receive a warning that the overall repository is in danger of becoming full. You can increase the repository
capacity by performing one of these tasks:
NOTE: If no free capacity exists on any disk group or disk pool, you can add unconfigured capacity in the form of unused
physical disks to a disk group or disk pool.
You cannot increase the storage capacity of an overall repository if one of these conditions exists:
• The repository virtual disk that you want to add does not have an Optimal status.
• Any repository virtual disk in the disk group or disk pool that you want to add is in any state of modification.
• No free capacity exists in the disk group or disk pool that you want to add.
• No unconfigured capacity exists in the disk group or disk pool that you want to add.
• No eligible existing repository virtual disks are available, even when repository virtual disks with mismatched DS attributes are included.
• Make sure that a base virtual disk and each of the individual repository virtual disks in the overall repository have the same Data Service
(DS) attributes, specifically for the following characteristics:
• RAID Level—A repository in a disk pool is considered to have a matching RAID Level for any base virtual disk on a disk group, regardless
of the base virtual disk’s actual RAID Level. However, a repository on a disk group is considered to have a matching RAID Level only if
that RAID Level is identical to the RAID Level of the base virtual disk.
• Physical Disk Type—A match requires that the base virtual disk and the repository virtual disk reside on either a disk group or disk pool
with identical physical disk type attributes.
• You cannot increase or decrease the repository capacity for a snapshot virtual disk that is read-only because it does not have an
associated repository. Only snapshot virtual disks that are read-write require a repository.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage object for which you want to increase the repository capacity.
3 Right-click the selected storage object and select Overall Repository > Increase Capacity.
The Increase Repository Capacity window is displayed.
4 Choose whether to add existing repository virtual disks or to create a new repository virtual disk.
5 To add existing repository virtual disks, perform the following steps:
a In the Eligible repository virtual disks table, select one or more repository virtual disks.
NOTE: You can click the Select all check box to add all the repository virtual disks displayed in the Eligible repository
virtual disks table.
b Select Allow mismatch in DS attributes to display more repository virtual disks that do not have the same DS settings as the
base virtual disk.
6 To create a repository virtual disk, perform the following steps:
a From the Create New Repository On drop-down list, select a disk group or disk pool.
The drop-down lists only the eligible repository virtual disks that have the same DS settings as the associated base virtual disk.
You can select Allow mismatch in DS attributes to display more repository virtual disks that do not have the same DS settings
as the base virtual disk.
If free capacity is available in the disk group or disk pool you selected, the total free space is displayed in the Capacity spinner
box.
b If necessary, adjust the Capacity.
NOTE: If free capacity does not exist on the disk group or disk pool you selected, the free space that appears in the
Capacity spinner box is 0. If this storage array has Unconfigured Capacity, you can create a disk group or disk pool and
then retry this operation using the new free capacity on that disk group or disk pool.
7 Click Increase Repository.
The system performs the following actions:
• Updates the capacity for the repository
• Displays one or more newly added repository member virtual disks for the repository
You can decrease the repository capacity for the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Consistency group member snapshot virtual disk
• Replicated pair virtual disk
You cannot decrease the storage capacity of the overall repository if one of these conditions exists:
• The overall repository contains only one repository member virtual disk.
• One or more snapshot images are associated with the overall repository.
• A snapshot virtual disk or a consistency group member snapshot virtual disk is disabled.
• You can remove repository member virtual disks only in the reverse order that they were added.
• An overall repository must have at least one repository member virtual disk.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage object for which you want to decrease the repository capacity.
3 Right click the selected storage object and select Overall Repository > Decrease Capacity.
The Decrease Repository Capacity window is displayed.
4 Select one or more repository virtual disks from the Repository member virtual disks table that you want to remove.
• The table displays the member virtual disks in the reverse order that they were added for the storage object. When you click any
row in the table, that row and all rows preceding it are selected.
• The last row of the table, which is the first repository added, is disabled because at least one repository must exist for the storage
object.
5 Click Delete selected repository virtual disks if you want to delete all associated repositories that exist for each member virtual disk
selected in the Repository member virtual disks table.
6 Click Decrease Repository.
The system performs the following actions:
• Updates the capacity for the overall repository.
• Displays the newly updated repository member virtual disks for the overall repository.
You can revive the following storage objects:
• Snapshot group
• Snapshot virtual disk
• Consistency group member virtual disk
• Consistency group member snapshot virtual disk
NOTE: Use the Revive option only if you are instructed to do so in a Recovery Guru procedure or by a Technical Support
representative. You cannot cancel this operation after it starts.
CAUTION: Using the Revive option when there are still failures may cause data corruption or data loss, and the storage object
returns to the Failed state.
1 From the AMW, select the Storage & Copy Services tab.
2 Select the storage object that you want to revive and then select one of the following menu paths—depending on the storage object
you selected:
• Copy Services > Snapshot Group > Advanced > Revive.
• Copy Services > Snapshot Virtual Disk > Advanced > Revive.
• Copy Services > Consistency Group Member Virtual Disk > Advanced > Revive.
3 Type yes in the text box and then click Revive to restore the storage object to an Optimal state.
NOTE: If you ordered this feature, you received a Premium Feature Activation card that shipped in the same box as your Dell
PowerVault MD Series storage array. Follow the directions on the card to obtain a key file and to enable the feature.
NOTE: The preferred method for creating a virtual disk copy is to copy from a snapshot virtual disk. This allows the original
virtual disk used in the snapshot operation to remain fully available for read/write activity while the snapshot is used as the
source for the virtual disk copy operation.
When you create a virtual disk copy, you create a copy pair that has a source virtual disk and a target virtual disk on the same storage array.
The source virtual disk is the virtual disk that contains the data you want to copy. The source virtual disk accepts the host I/O read activity
and stores the data until it is copied to the target virtual disk. The source virtual disk can be a standard or thin virtual disk.
The target virtual disk is a standard or thin virtual disk in a disk group or disk pool and, if the legacy version is enabled, a legacy snapshot
base virtual disk.
• Copying data for improved access—As your storage requirements for a virtual disk change, you can use a virtual disk copy to copy data
to a virtual disk in a disk group that uses physical disks with larger capacity within the same storage array. Copying data for larger
access capacity enables you to move data to greater-capacity physical disks—for example, from 61 GB to 146 GB.
• Restoring snapshot virtual disk data to the source virtual disk—The Virtual Disk Copy feature enables you first to restore the data from
a snapshot virtual disk and then to copy the data from the snapshot virtual disk to the original source virtual disk.
• Copying data from a thin virtual disk to a standard virtual disk residing in the same storage array. However, you cannot copy data in the
opposite direction—from a standard virtual disk to a thin virtual disk.
• Creating a backup copy—The Virtual Disk Copy feature enables you to create a backup of a virtual disk by copying data from one virtual
disk (the source virtual disk) to another virtual disk (the target virtual disk) in the same storage array, minimizing the time that the
source virtual disk is unavailable to host write activity. You can then use the target virtual disk as a backup for the source virtual disk, as
a resource for system testing, or to copy data to another device, such as a tape drive or other media.
NOTE: Recovering from a backup copy—You can use the Edit Host-to-Virtual Disk Mappings feature to recover data from the
backup virtual disk you created in the previous procedure. The Host Mappings option enables you to unmap the source virtual
disk from its host and then to map the backup virtual disk to the same host.
Offline copy
An offline copy reads data from the source virtual disk and copies it to a target virtual disk, while suspending all updates to the source
virtual disk when the copy is in progress. In an offline virtual disk copy, the relationship is between a source virtual disk and a target virtual
disk. Source virtual disks that are participating in an offline copy are available for read requests, while the virtual disk copy displays the In
Progress or Pending status. Write requests are allowed only after the offline copy is complete. If the source virtual disk is formatted with a
journaling file system, any attempt to issue a read request to the source virtual disk may be rejected by the storage array RAID controller
modules and result in an error message. Make sure that the Read-Only attribute for the target virtual disk is disabled after the virtual disk
copy is complete to prevent error messages from being displayed.
Online copy
An online copy creates a point-in-time snapshot copy of any virtual disk within a storage array, while still allowing writes to the virtual disk
when the copy is in progress. This is achieved by creating a snapshot of the virtual disk and using that snapshot as the actual source virtual
disk for the copy. In an online virtual disk copy, the relationship is between a snapshot virtual disk and a target virtual disk. The virtual disk
for which the point-in-time image is created (the source virtual disk) must be a standard virtual or thin disk in the storage array.
A snapshot virtual disk and a snapshot repository virtual disk are created during the online copy operation. The snapshot virtual disk is not
an actual virtual disk containing data; instead, it is a reference to the data contained on the virtual disk at a specific time. For each snapshot
taken, a snapshot repository virtual disk is created to hold the copy-on-write data for the snapshot. The snapshot repository virtual disk is
used only to manage the snapshot image.
Before a data block on the source virtual disk is modified, the contents of the block to be modified are copied to the snapshot repository
virtual disk. Because the snapshot repository virtual disk stores copies of the original data in those data blocks, further changes to those
data blocks write only to the source virtual disk.
NOTE: If the snapshot virtual disk that is used as the copy source is active, the source virtual disk performance degrades due to
copy-on-write operations. When the copy is complete, the snapshot is disabled and the source virtual disk performance is
restored. Although the snapshot is disabled, the repository infrastructure and copy relationship remain intact.
NOTE: An attempt to directly create a virtual disk copy for an MSCS shared disk, rather than using a snapshot virtual disk, fails
with the following error: The operation cannot complete because the selected virtual disk is not a source virtual disk candidate.
NOTE: When creating a snapshot virtual disk, map the snapshot virtual disk to only one node in the cluster. Mapping the
snapshot virtual disk to the host group or both nodes in the cluster may cause data corruption by allowing both nodes to
concurrently access data.
Keep the Read-Only attribute enabled on the target virtual disk after the virtual disk copy completes in the following cases:
• If you are using the target virtual disk for backup purposes.
• If you are using the data on the target virtual disk to copy back to the source virtual disk of a disabled or failed snapshot virtual disk.
If you decide not to preserve the data on the target virtual disk after the virtual disk copy is complete, change the write protection setting
for the target virtual disk to Read/Write.
• While a virtual disk copy has a status of In Progress, Pending, or Failed, the source virtual disk is available for read I/O activity only. After
the virtual disk copy is complete, read and write I/O activity to the source virtual disk are permitted.
• A virtual disk can be selected as a target virtual disk for only one virtual disk copy at a time.
• A virtual disk copy for any virtual disk cannot be mounted on the same host as the source virtual disk.
• Windows does not allow a drive letter to be assigned to a virtual disk copy.
• A virtual disk with a Failed status cannot be used as a source virtual disk or target virtual disk.
• A virtual disk with a Degraded status cannot be used as a target virtual disk.
• A virtual disk participating in a modification operation cannot be selected as a source virtual disk or target virtual disk. Modification
operations include the following:
– Capacity expansion
– RAID-level migration
– Segment sizing
– Virtual disk expansion
– Defragmenting a virtual disk
NOTE: The following host preparation sections also apply when using the virtual disk copy feature through the CLI
interface.
• The Create Copy Wizard, which assists in creating a virtual disk copy
NOTE: Write requests to the target virtual disk are rejected when the Read-only permission is enabled on the target virtual
disk.
• To disable Read-only permission, select Change > Target Virtual Disk Permissions > Disable Read-Only.
If eight virtual disk copies with a status of In Progress exist, any subsequent virtual disk copy has a status of Pending, which stays until one
of the eight virtual disk copies completes.
When you have completed the wizard dialogs, the virtual disk copy starts, and data is read from the source virtual disk and written to the
target virtual disk.
Operation in Progress icons are displayed on the source virtual disk and the target virtual disk while the virtual disk copy has a status of In
Progress or Pending.
When the virtual disk copy fails, a critical event is logged in the Event Log, and a Needs Attention icon is displayed in the AMW. While a
virtual disk copy has this status, the host has read-only access to the source virtual disk. Read requests from and write requests to the
target virtual disk do not take place until the failure is corrected by using the Recovery Guru.
Copy manager
After you create a virtual disk copy by using the Create Copy Wizard, you can monitor the virtual disk copy through the Copy Manager.
From the Copy Manager, a virtual disk copy may be re-copied, stopped, or removed. You can also modify the attributes, such as the copy
priority and the target virtual disk Read-Only attribute. You can view the status of a virtual disk copy in the Copy Manager. Also, if you
want to determine which virtual disks are involved in a virtual disk copy, you can use the Copy Manager or the storage array profile.
CAUTION: If you decide not to preserve the data on the target virtual disk after the virtual disk copy has completed, disable the
Read-Only attribute for the target virtual disk. See Virtual Disk Read/Write Permissions for more information about enabling and
disabling the Read-Only attribute for the target virtual disk.
1 Stop all I/O activity to the source virtual disk and the target virtual disk.
2 Unmount any file systems on the source virtual disk and the target virtual disk.
3 In the AMW, select the Storage & Copy Services tab.
4 In the Virtual Disks area, select the source virtual disk that you want to use for the online copy.
5 Right-click the selected source virtual disk and select Create > Virtual Disk Copy in the pop-up menu.
The Select Copy Type wizard is displayed.
6 Select a copy type and click Next.
NOTE: If you select Offline, the source virtual disk is not available for any I/O when the copy operation is in progress.
NOTE: Operation in Progress icons appear on the source virtual disk and the target virtual disk while the virtual disk copy
has a status of In Progress or Pending.
The following factors affect the time required to complete a virtual disk copy:
• I/O activity
• Virtual disk RAID level
• Virtual disk configuration — Number of physical disks in the virtual disk groups
• Virtual disk type — Snapshot virtual disks may take more time to copy than standard virtual disks
• Snapshots created using older RAID controller firmware versions (legacy snapshots) take longer to complete
During a virtual disk copy, resources for the storage array are diverted from processing I/O activity to completing a virtual disk copy. This
affects the overall performance of the storage array. When you create a new virtual disk copy, you define the copy priority to determine
how much RAID processing time is diverted from I/O activity to a virtual disk copy operation.
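If you script copy operations with the CLI instead of the Copy Manager, the copy priority can also be set from the command line. The following one-liner is a sketch only: the management IP address and the virtual disk names are placeholders, and the exact set virtualDiskCopy syntax for your firmware level should be confirmed in the Dell PowerVault MD Series CLI Guide:
SMcli 192.168.128.101 -c "set virtualDiskCopy target [\"target_vd\"] source [\"source_vd\"] copyPriority=lowest;"
Here copyPriority=lowest favors host I/O at the expense of copy duration, while copyPriority=highest favors the copy operation.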
1 In the AMW, select the Storage & Copy Services tab and select Copy Services > Virtual Disk Copy > Manage Copies.
The Copy Manager window is displayed.
2 In the table, select one or more copy pairs.
3 Select Change > Copy Priority.
The Change Copy Priority window is displayed.
4 In the Copy Priority area, select the appropriate copy priority, depending on your system performance needs.
If the copy priority is set at the lowest rate, I/O activity is prioritized, and the virtual disk copy takes longer.
1 In the AMW, select the Storage & Copy Services tab and select Copy Services > Virtual Disk Copy > Manage Copies.
The Copy Manager window is displayed.
2 Select the copy pair in the table.
3 Select Copy > Stop.
4 Click Yes.
1 Stop all I/O activity to the source and target virtual disk.
2 Using your Windows system, flush the cache to both the source and the target virtual disk—if mounted. At the host prompt, type:
SMrepassist -f <filename-identifier> and press <Enter>.
For more information, see SMrepassist Utility.
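For example, on a Windows host where the virtual disk is mounted as drive E:, the call would be (the drive letter is illustrative):
SMrepassist -f E: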
3 Click the Summary tab, then click Storage & Copy Services to ensure that the virtual disk is in Optimal or Disabled status.
4 Remove the drive letters of the source and (if mounted) target virtual disks in Windows, or unmount the virtual drives in Linux, to
help guarantee a stable copy of the drive for the virtual disk. If this is not done, the copy operation reports that it has completed
successfully, but the copied data is not updated properly.
5 Follow any additional instructions for your operating system. Failure to follow these additional instructions can create unusable virtual
disk copies.
• If hosts are mapped to the source virtual disk, the data that is copied to the target virtual disk when you perform the re-copy operation
might have changed since the previous virtual disk copy was created.
• Select only one virtual disk copy in the Copy Manager dialog.
CAUTION: Possible loss of data—The re-copying operation overwrites existing data on the target virtual disk.
CAUTION: Possible loss of data access—While a virtual disk copy has a status of In Progress or Pending, source virtual disks are
available for read I/O activity only. Write requests are allowed after the virtual disk copy has completed.
To recopy the virtual disk:
1 Stop all I/O to the source virtual disk and the target virtual disk.
2 Unmount any file systems on the source virtual disk and the target virtual disk.
3 In the AMW, select Copy Services > Virtual Disk Copy > Manage Copies.
The Copy Manager window is displayed.
4 Select the copy pair in the table.
5 Select Copy > Re-Copy.
The Re-Copy window is displayed.
6 Set the copy priority.
There are five copy priority rates available: lowest, low, medium, high, and highest. If the copy priority is set at the lowest rate, I/O
activity is prioritized, and the virtual disk copy takes longer. If the copy priority is set to the highest priority rate, the virtual disk copy is
prioritized, but I/O activity for the storage array might be affected.
• Removing copy pairs does not delete the data on the source virtual disk or target virtual disk.
• If the virtual disk copy has a status of In Progress, you must stop the virtual disk copy before you can remove the copy pair.
1 In the AMW, select Copy Services > Virtual Disk Copy > Manage Copies.
The Copy Manager window is displayed.
2 In the table, select one or more copy pairs.
3 Select Copy > Remove Copy Pairs.
The Remove Copy Pairs dialog is displayed.
4 Click Yes.
NOTE: After creating a partition on a multipathing device, all I/O operations, including file system creation, raw I/O and file
system I/O, must be done through the partition node and not through the multipathing device nodes.
Prerequisites
The following tasks must be completed before proceeding. For more information about step 1 through step 3, see the storage array’s
Deployment Guide. For more information about step 4, see Creating Virtual Disks.
1 Install the host software from the MD Series storage arrays resource DVD — Insert the Resource media in the system to start the
installation of Modular Disk Storage Manager (MD Storage Manager) and Modular Disk Configuration Utility (MDCU).
NOTE: Installation of Red Hat 5.x requires a remount of the DVD media to make contents executable.
2 Reboot when prompted by the install program — The installation program prompts for and needs a reboot at completion of the
installation.
3 Configure using MDCU — After the host server has rebooted, the MDCU automatically starts and is present on the desktop. This
utility allows for quick and easy configuration of new or existing MD Series storage arrays present on your network. It also provides
a GUI Wizard for establishing the iSCSI sessions to the array.
4 Create and map virtual disks using the MD Storage Manager — After configuring the arrays using the MDCU, run the MD Storage
Manager to create and map virtual disks.
NOTE: Any arrays configured with MDCU automatically get added to the list of devices in the EMW.
If an array virtual disk (VD) is mapped to the host server later, the rescan_dm_devs command must be run again to make the VD
visible to the operating system as a LUN.
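The multipath -ll command shows how each multipathing device maps to its physical paths. The listing below is a representative sketch only; the WWID, device names, size, and path states depend on your configuration:
mpath1 (36d4ae520009bd85200001a3c4f0d6d2e) dm-0 DELL,MD34xx
[size=100G][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac]
\_ round-robin 0 [prio=6][active]
 \_ 5:0:0:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 4:0:0:0 sdb 8:16 [active][ghost]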
where:
mpath1 is the name of the virtual device created by device mapper. It is located in the /dev/mapper directory.
sdc is the physical path to the owning RAID controller module for the device.
sdb is the physical path to the non-owning RAID controller module for the device.
where:
mpathb is the name of the virtual device created by device mapper. It is located in the /dev/mapper directory.
sdx is the physical path to the owning RAID controller module for the device.
sdcl is the physical path to the non-owning RAID controller module for the device.
To create a partition, run the fdisk utility against the multipathing device node: # fdisk /dev/mapper/mpath<x>, where mpath<x> is the multipathing device node on which you want to create the partition.
NOTE: The <x> value is an alphanumeric operating system-dependent format. The corresponding value for mapped virtual disks
can be seen using the previously run multipath command. See your operating system documentation for additional information
about fdisk.
• On Red Hat Enterprise Linux (RHEL) hosts, a partition node has the format: /dev/mapper/mpath<x>p<y>
Where <x> is the alphanumeric identifier for the multipathing device and <y> is the partition number for this device.
• On SUSE Linux Enterprise Server (SLES) 11.x hosts, a partition node has the format: /dev/mapper/mpath<x>-part<y>
Where <x> is the letter or letters assigned to the multipathing device and <y> is the partition number.
• On SLES 10.3 hosts, a partition node has the format: /dev/mapper/mpath<x>_part<y>
Where <x> is one or more letters assigned to the multipathing device and <y> is the partition number.
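For example, the first partition on a device named mpath1 (or mpathb on SLES 11.x) appears as /dev/mapper/mpath1p1 on RHEL, /dev/mapper/mpathb-part1 on SLES 11.x, and /dev/mapper/mpath1_part1 on SLES 10.3. These names are illustrative; the device identifier depends on your configuration.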
NOTE: After creating a partition on a device capable of multipathing, all I/O operations, including file system creation, raw I/O
and file system I/O, must be done through the partition node, and not through the multipathing device nodes.
Create a file system on the partition: # mkfs -t <filesystem type> <partition node>, where <partition node> is the partition on which the file system is created.
1 Unmount all Device Mapper multipath device nodes mounted on the server: # umount <mounted_multipath_device_node>
2 Stop the Device Mapper multipath service: # /etc/init.d/multipathd stop
3 Flush the Device Mapper multipath maps list to remove any old or modified mappings: # multipath -F
NOTE: The boot operating system drive may have an entry in the Device Mapper multipathing table. This is not affected
by the multipath -F command.
4 Log out of all iSCSI sessions from the host server to the storage array: # iscsiadm -m node --logout
CAUTION: Certain commands, such as lsscsi, display one or more instances of Universal Xport devices. These device nodes
must never be accessed, mounted, or used in any way. Doing so can cause loss of communication to the storage array and
possibly cause serious damage to the storage array, potentially making data stored on the array inaccessible.
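For illustration only, a Universal Xport entry in lsscsi output typically resembles the following line; the SCSI address, revision, and device node vary by system (on MD Series arrays the Universal Xport device is commonly reported at LUN 31):
[1:0:0:31]   disk    DELL    Universal Xport   0825  /dev/sdd
Never mount or access such a node.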
Only multipathing device nodes and partition nodes created using the directions provided above must be mounted or in any way accessed
by the host system or its users.
multipath -ll
  Displays the current multipath topology using all available information (sysfs, the device mapper, path checkers, and so on).
multipath
  Reaggregates multipathing devices with simplified output.
multipath -f <multipath_dev_node>
  Flushes out the Device Mapper mapping for the specified multipathing device. Use this command if the underlying physical devices are deleted or unmapped. Specify only the multipath device name—for example, mpath2; do not specify the path.
multipath -F
  Flushes out all unused multipathing device maps.
rescan_dm_devs
  Dell EMC provided script. Forces a rescan of the host SCSI bus and aggregates multipathing devices as needed. Use this command when LUNs are dynamically mapped to hosts or when new targets are added to the host.
Keep the following limitations and known issues in mind:
• I/O may hang when a Device Mapper device is deleted before the virtual disk is unmounted.
• If the scsi_dh_rdac module is not included in initrd, slower device discovery may be seen and the syslog may become populated
with buffer I/O error messages.
• I/O may hang if the host server or storage array is rebooted while I/O is active. All I/O to the storage array should be stopped before
shutting down or rebooting the host server or storage array.
• With an MD Series storage array, after a failed path is restored, failback does not occur automatically because the driver cannot
autodetect devices without a forced rescan. Run the command rescan_dm_devs to force a rescan of the host server. This restores
the failed paths enabling failback to occur.
• Failback can be slow when the host system is experiencing heavy I/O. The problem is exacerbated if the host server is also
experiencing high processor utilization.
• The Device Mapper Multipath service can be slow when the host system is experiencing heavy I/O. The problem is exacerbated if the
host server is also experiencing high processor utilization.
Troubleshooting
Table 16. Troubleshooting

Question: How can I check if multipathd is running?
Answer: Run the following command: /etc/init.d/multipathd status

Question: Why does the multipath -ll command output not show any devices?
Answer: First verify whether the devices are discovered. The command # cat /proc/scsi/scsi displays all the devices that are already discovered. Then verify that multipath.conf has been updated with the proper settings. After this, run multipath, and then run multipath -ll; the new devices must show up.

Question: Why is a newly mapped LUN not assigned a multipathing device node?
Answer: Run rescan_dm_devs in any directory. This should bring up the devices.

Question: I removed a LUN, but the multipathing mapping is still available.
Answer: The multipathing device is still available after you remove the LUNs. Run multipath -f <device node for the deleted LUN> to remove the multipathing mapping. For example, if a device related to /dev/dm-1 is deleted, you must run multipath -f /dev/dm-1 to remove /dev/dm-1 from the DM mapping table. If the multipathing daemon is stopped or restarted, run multipath -F to flush out all stale mappings.

Question: Failback does not happen as expected with the array.
Answer: Sometimes the low-level driver cannot autodetect devices coming back with the array. Run rescan_dm_devs to rescan the host server SCSI bus and reaggregate devices at the multipathing layer.
NOTE: No configuration steps are required to enable ALUA on the operating systems listed above.
1 Run the following command: # esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V DELL -M array_PID -c tpgs_on
where array_PID is your storage array model/product ID. To select the appropriate array_PID for your storage array, see the following
table.
Storage array model    array_PID
MD3420                 MD34xx
MD3800i                MD38xxi
MD3820i                MD38xxi
MD3800f                MD38xxf
MD3820f                MD38xxf
MD3460                 MD34xx
MD3860i                MD38xxi
MD3860f                MD38xxf
2 Reboot your ESX-based host server.
Verify that the claim rule for VMW_SATP_ALUA with the VID/PID = Dell/array_PID shows the tpgs_on flag specified.
The value for Storage Array Type must be VMW_SATP_ALUA on each MD Series storage array.
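Both checks can be made with the esxcli storage nmp commands. The grep filter below is illustrative, and the output layout varies by ESXi version:
# esxcli storage nmp satp rule list | grep DELL
# esxcli storage nmp device list
The first command lists the claim rules so that you can confirm the tpgs_on flag for the DELL/array_PID entry; the second shows the Storage Array Type for each device, which must read VMW_SATP_ALUA for MD Series virtual disks.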
• Remote Replication — Standard asynchronous replication using point-in-time images to batch the resynchronization between the local
and remote site. This type of replication is supported on both Fibre Channel and iSCSI storage arrays, but not between a Fibre Channel array and an iSCSI array.
• Remote Replication (Legacy) — Synchronous (or full-write) replication that synchronizes local and remote site data in real-time. This
type of replication is supported on Fibre Channel storage arrays only.
NOTE: The standard Remote Replication premium feature is supported on both iSCSI and Fibre Channel storage arrays.
• Resynchronization and recovery point images for both the primary and secondary virtual disks.
• Log information that tracks regions on the primary virtual disk that are written between synchronization intervals. These logs are
used only on the primary virtual disk, but are also written to the secondary virtual disk in case of a role reversal.
The replication repository is normally created automatically when you create a replicated pair. However, you can also create the repository
manually.
• Remote Replication — Also known as standard or asynchronous, it is supported on both iSCSI- and Fibre Channel-based storage arrays
(both local and remote storage arrays must use the same data protocol) and requires a dual RAID controller configuration.
• Remote Replication (Legacy) — Also known as synchronous or full-write, it is supported on Fibre Channel storage arrays only.
With synchronous Remote Replication (Legacy), every data write to a source virtual disk is replicated to a remote virtual disk. This produces
an identical, real-time remote of production data.
• Number of repository virtual disks required—Standard Remote Replication requires a repository virtual disk to be created for each
replicated pair (remote virtual disk-to-local virtual disk). Alternately, Remote Replication (Legacy) only requires a single repository virtual
disk.
• Data protocol supported—Standard Remote Replication is supported on both iSCSI and Fibre Channel storage arrays. Remote
Replication (Legacy) is supported only on Fibre Channel storage arrays.
NOTE: Both remote and local storage arrays must use the same data protocol; replication between Fibre Channel and
iSCSI storage arrays is not supported.
• Distance limitations—Distance between local and remote storage arrays is unlimited using the Standard Remote Replication premium
feature. Remote Replication (Legacy) has a limitation of approximately 10 km (6.2 miles) between local and remote storage arrays,
based on general latency and application performance requirements.
Synchronous Remote Replication (Legacy) is designed to provide replication between a relatively small number of local systems that require
business continuity—for example, data center-type operations, local disaster recovery and other top-tier applications.
• Two storage arrays with write access are required, and both storage arrays must have sufficient space to replicate data between them.
• Each storage array must have a dual-controller Fibre Channel or iSCSI configuration (single-controller configurations are not supported).
• Fibre Channel Connection Requirements — You must attach dedicated remote replication ports to a Fibre Channel fabric environment.
In addition, these ports must support the Name Service.
• You can use a fabric configuration that is dedicated solely to the remote replication ports on each RAID controller module. In this case,
host systems can connect to the storage arrays using fabric.
• Fibre Channel Arbitrated Loop (FC-AL), or point-to-point configurations, are not supported for array-to-array communications.
• Maximum distance between the local site and remote site is 10 km (6.2 miles), using single-mode fibre Gigabit interface converters
(GBICs) and optical long-wave GBICs.
• iSCSI connection considerations:
– iSCSI does not require dedicated ports for replication data traffic
– iSCSI array-to-array communication must use a host-connected port (not the Ethernet management port).
– The first port that successfully establishes an iSCSI connection is used for all subsequent communication with that remote storage
array. If that connection subsequently fails, a new session is attempted using any available ports.
• Activating the Remote Replication premium feature on both the local and remote storage arrays
• Creating a remote Replication group on the local storage array
• Adding a replicated pair of virtual disks to the Remote Replication group
NOTE: Perform the activation steps below on the local storage array first and then repeat them on the remote storage array.
1 In the AMW of the local storage array, select the Storage & Copy Services tab.
2 Select Copy Services > Remote Replication > Activate.
3 If both Remote Replication and Remote Replication (Legacy) premium features are supported on your storage array, select Remote
Replication.
The Create Disk Pool wizard or the Create Disk Group wizard is displayed.
6 Click OK.
The Remote Replication Activated window is displayed. The system performs the following actions when the Remote Replication
premium feature is activated:
• Logs out all hosts currently using the highest numbered Fibre Channel host port on the RAID controller modules.
• Reserves the highest numbered Fibre Channel host port on the RAID controller modules for replication data transmissions.
• Rejects all host communication to this RAID controller module host port as long as the replication feature is active.
• If the Remote Replication (Legacy) feature has been activated, the two replication repositories are created.
NOTE: Repeat these steps to activate the remote replication premium features on the remote storage array.
1 From the AMW, select Copy Services > Remote Replication > Deactivate.
A message prompts you to confirm if the Remote Replication premium feature is to be deactivated.
2 Click Yes.
• The local storage array serves as the primary side of the Remote Replication group, while the remote storage array serves as the
secondary side of the Remote Replication group.
• At the virtual disk level, all virtual disks added to the Remote Replication group on the local storage array serve as the primary role in the
Remote Replication configuration. Virtual disks added to the group on the remote storage array serve the secondary role.
1 In the AMW of the local storage array, select the Storage & Copy Services tab.
2 Select Copy Services > Remote Replication > Remote Replication > Replication Group > Create.
The Create Remote Replication Group window is displayed.
3 In Remote replication group name, enter a group name (30 characters maximum).
4 In the Choose the remote storage array drop-down, select a remote storage array.
NOTE: If a remote storage array is not available, you cannot continue. Verify your network configuration or contact your
network administrator.
5 In the Connection type drop-down, choose your data protocol (iSCSI or Fibre Channel only).
6 Select View synchronization settings to set the synchronization settings for your Remote Replication group.
7 Click OK.
The Remote Replication group is created.
Replicated pairs
The last step in setting up Remote Replication is creating a replicated pair of virtual disks and placing them in an already-created Remote
Replication group.
A replicated pair consists of two virtual disks, one serving as the primary virtual disk on the local storage array and the other serving as the
secondary virtual disk on the remote storage array. In a successful Remote Replication configuration, both these virtual disks contain
identical copies of the same data. The replicated pair is contained in a Remote Replication group, which allows the pair to synchronize
at the same time as any other replicated pairs within the same Remote Replication group.
At the I/O level, all write operations are performed first to the primary virtual disk and then to the secondary virtual disk.
• Only standard virtual disks can be used in a replicated pair. Thin provisioned or snapshot virtual disks (any type) cannot be used.
• The Remote Replication premium feature must be enabled and activated on the local and remote storage arrays used for replication
before creating replication pairs or Remote Replication groups.
• Local and remote storage arrays must be connected using supported Fibre Channel or iSCSI connections.
• The remote storage array must contain a virtual disk that is greater than or equal to the capacity of the primary virtual disk on the local
storage array.
• Creating a replicated pair requires you to use the AMW of the local storage array and the AMW of the remote storage array to complete
the creation process. Make sure that you have access to both storage arrays.
1 In the AMW of the local storage array, select the Storage & Copy Services tab.
2 Select Copy Services > Remote Replication > Remote Replication > Replication Group > Create Replication Pair.
The Select Remote Replication Group window is displayed.
NOTE: If the local storage array does not contain any Remote Replication groups, you must create one on the local storage
array before proceeding.
3 Select an existing Remote Replication group, then click Next.
4 In the Select Primary Virtual Disk window, select one of the following:
• Select an existing virtual disk on the local storage array to serve as the primary virtual disk in the replicated pair and click Next. Go
to step 5.
• Select the option to create a new virtual disk and click Next. See Creating a Standard Virtual Disk.
5 In the Select Repository window, select whether you want to create the replication repository automatically or manually:
• Automatic — Select Automatic and click Finish to create the replication repository with default capacity settings.
• Manual — Select Manual and click Next to define the properties for the replication repository. Then click Finish.
NOTE: The replication repository is normally created automatically during virtual disk pair creation. Manual repository
creation is recommended only for advanced storage administrators who understand physical disk consistency and optimal
physical disk configurations. The Automatic method is recommended.
6 Click OK when you see a message that the pair is successfully created.
1 In the AMW of the local storage array, select the Storage & Copy Services tab.
2 Select the Remote Replication group containing the replicated pair you want to remove and select one of the following:
• Copy Services > Remote Replication > Remote Replication > Replication Group > Remove.
• From the Associated Replicated Pairs table in the right pane, select the replicated pair you want to remove and select Copy
Services > Remote Replication > Remote Replication > Replication Pair > Remove.
NOTE: When you remove a replicated pair, the system deletes the associated replication repositories. To preserve them,
deselect Delete replicated pair repositories.
You can activate the files immediately or wait until a more convenient time. You may want to activate the firmware or NVSRAM files at a
later time because of these reasons:
• Time of day — Activating the firmware and the NVSRAM can take a long time, so you can wait until I/O loads are lighter. The RAID
controller modules are offline briefly to load the new firmware.
• Type of package — You may want to test the new firmware on one storage array before loading the files onto other storage arrays.
The ability to download both files and activate them later depends on the type of RAID controller module in the storage array.
NOTE: You can use the command line interface to download and activate the firmware to several storage arrays by using a script.
NOTE: It is recommended that the firmware and NVSRAM be upgraded during a maintenance period when the array is not being
used for I/O.
NOTE: The RAID enclosure must contain at least two disk drives in order to update the firmware on the controller.
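As the earlier note mentions, the CLI can script firmware downloads to several storage arrays. The following is a sketch only: the management IP address and the firmware file name are placeholders, and the exact download storageArray syntax for your firmware level should be confirmed in the Dell PowerVault MD Series CLI Guide:
SMcli 192.168.128.101 -c "download storageArray firmware file=\"RC_firmware.dlp\";"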
1 If you are using the EMW, go to step 9. If you are using the AMW, go to step 2.
2 In the AMW, select Upgrade > RAID Controller Module Firmware > Upgrade.
The Download RAID Controller Module Firmware is displayed.
NOTE: The RAID Controller Module Firmware area and the NVSRAM area list the current firmware and the current
NVSRAM versions respectively.
3 To locate the directory in which the file to download resides, click Select File next to the Selected RAID controller module firmware
file text box.
4 In the File Selection area, select the file to download.
By default, only the downloadable files that are compatible with the current storage array configuration are displayed.
When you select a file in the File Selection area of the dialog, applicable attributes (if any) of the file are displayed in the File
Information area. The attributes indicate the version of the file.
5 If you want to download an NVSRAM file with the firmware:
a Select Transfer NVSRAM file with RAID controller module firmware.
b Click Select File.
NOTE: The Details pane shows the details of only one storage array at a time. If you select more than one storage array in
the Storage Array pane, the details of the storage arrays are not shown in the Details pane.
11 Click Firmware in the Download area.
If you select a storage array that cannot be upgraded, the Firmware button is disabled. Otherwise, the Download Firmware dialog is displayed.
The current firmware version and the NVSRAM version of the selected storage arrays appear.
NOTE: If you select the storage arrays with different RAID controller module types that cannot be updated with the same
firmware or NVSRAM file and click Firmware, the Incompatible RAID Controller Modules dialog is displayed. Click OK to
close the dialog and select the storage arrays with similar RAID controller module types.
12 To locate the directory in which the file to download resides, click Browse in the Select files area.
The Select File dialog is displayed.
13 Select the file to download.
14 Click OK.
15 If you want to download the NVSRAM file with the RAID controller module firmware, select Download NVSRAM file with firmware in
the Select files area.
Any attributes of the firmware file are displayed in the Firmware file information area. The attributes indicate the version of the
firmware file.
Any attributes of the NVSRAM file are displayed in the NVSRAM file information area. The attributes indicate the version of the
NVSRAM file.
16 If you want to download the file and activate the firmware and NVSRAM later, select the Transfer files but don’t activate them
(activate later) check box.
NOTE: If any of the selected storage arrays do not support downloading the files and activating the firmware or NVSRAM
later, the Transfer files but don’t activate them (activate later) check box is disabled.
17 Click OK.
The Confirm Download dialog is displayed.
18 Click Yes.
The download starts and a progress indicator is displayed in the Status column of the Upgrade RAID Controller Module Firmware
window.
NOTE: If the file selected is not valid or is not compatible with the current storage array configuration, the File Selection
Error dialog is displayed. Click OK to close it, and choose a compatible NVSRAM file.
6 Click Yes in the Confirm Download dialog.
The download starts.
7 Perform one of these actions:
• Select Tools > Upgrade RAID Controller Module Firmware.
• Select the Setup tab, and click Upgrade RAID Controller Module Firmware.
The Storage array pane lists the storage arrays. The Details pane shows the details of the storage array that is selected in the Storage
array pane.
8 In the Storage array pane, select the storage array for which you want to download the NVSRAM firmware.
You can select more than one storage array.
NOTE: The Details pane shows the details of only one storage array at a time. If you select more than one storage array in
the Storage array pane, the details of the storage arrays are not shown in the Details pane.
9 Click NVSRAM in the Download area.
NOTE: If you select a storage array that cannot be upgraded, the NVSRAM button is disabled.
The Download NVSRAM dialog is displayed. The current firmware version and the NVSRAM version of the selected storage arrays are
displayed.
NOTE: If you select the storage arrays with different RAID controller module types that cannot be updated with the same
NVSRAM file and click NVSRAM, the Incompatible RAID Controller Modules dialog is displayed. Click OK to close the dialog
and select the storage arrays with similar RAID controller module types.
10 To locate the directory in which the NVSRAM file to download resides, click Browse in the Select file area.
The Select File dialog is displayed.
11 Select the file to download.
12 Click OK.
Attributes of the NVSRAM file are displayed in the NVSRAM file information area. The attributes indicate the version of the NVSRAM
file.
13 Click OK.
The Confirm Download dialog is displayed.
14 Click Yes.
The physical disk firmware controls various features of the physical disk. The disk array controller (DAC) uses this type of firmware.
Physical disk firmware stores information about the system configuration on an area of the physical disk called DACstore. DACstore and the
physical disk firmware enable easier reconfiguration and migration of the physical disks. The physical disk firmware performs these
functions:
• The physical disk firmware records the location of the physical disk in an expansion enclosure. If you take a physical disk out of an
expansion enclosure, you must insert it back into the same physical disk slot, or the physical disk firmware cannot communicate with
the RAID controller module or other storage array components.
• RAID configuration information is stored in the physical disk firmware and is used to communicate with other RAID components.
CAUTION: Risk of application errors—Downloading the firmware could cause application errors.
Keep these important guidelines in mind when you download firmware to avoid the risk of application errors:
• Downloading firmware incorrectly could result in damage to the physical disks or loss of data. Perform downloads only under the
guidance of your Technical Support representative.
• Stop all I/O to the storage array before the download.
• Make sure that the firmware that you download to the physical disks is compatible with the physical disks that you select.
• Do not make any configuration changes to the storage array while downloading the firmware.
NOTE: Downloads can take several minutes to complete. During a download, the Download Physical Disk - Progress dialog is
displayed. Do not attempt another operation when the Download Physical Disk - Progress dialog is displayed.
To download Physical Disk Firmware:
CAUTION: Risk of possible loss of data or risk of damage to the storage array—Downloading the expansion enclosure EMM
firmware incorrectly could result in loss of data or damage to the storage array. Perform downloads only under the guidance of
your Technical Support representative.
CAUTION: Risk of making expansion enclosure EMM unusable—Do not make any configuration changes to the storage array
while downloading expansion enclosure EMM firmware. Doing so could cause the firmware download to fail and make the
selected expansion enclosure unusable.
NOTE: If you click Stop while a firmware download is in progress, the download-in-progress finishes before the operation
stops. The status for the remaining expansion enclosures changes to Canceled.
7 Monitor the progress and completion status of the download to the expansion enclosures. The progress and status of each expansion
enclosure that is participating in the download is displayed in the Status column of the Select enclosures table.
• A media error is encountered when trying to access a physical disk that is a member of a nonredundant disk group (RAID 0, or a
degraded RAID 1, RAID 5, or RAID 10).
You can also save the firmware inventory to a text file. You can then send the file to your Technical Support representative for analysis. Your
Technical Support representative can detect any firmware mismatches.
NOTE: The suffix *.txt is added to the file name automatically if you do not specify a suffix for the file name.
3 In the File name dialog box, enter a name for the file to be saved. You may also specify another drive and directory if you want to
save the file in a location other than the default.
4 Click Save.
An ASCII text file that contains the firmware inventory is saved to the designated directory.
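To capture the same inventory from a script, the CLI provides a save command. This is a sketch only: the management IP address and output file name are placeholders, and the command syntax should be confirmed in the Dell PowerVault MD Series CLI Guide:
SMcli 192.168.128.101 -c "save storageArray firmwareInventory file=\"inventory.txt\";"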
NOTE: Dell EMC is discontinuing support of the VSS and VDS hardware providers. For more information about deprecation, see
the Dell EMC MD Series Storage Arrays Information Update. For supported software, see the Supported Management Software
section in the Dell PowerVault MD Series Support Matrix at Dell.com/powervaultmanuals.
• Virtual disks used as source virtual disks for VSS snapshots must not have names longer than 16 characters.
The VSS hardware provider uses the source virtual disk name as a prefix for the snapshot and repository virtual disk names. The resulting
snapshot and repository names are too long if the source virtual disk name exceeds 16 characters.
VSS attaches to the service and uses it to coordinate the creation of snapshot virtual disks on the storage array. VSS-initiated snapshot
virtual disks can be triggered through backup tools, known as requestors. The VSS Provider Configuration Tool makes available the
following configuration options:
• Snapshot Repository Virtual Disk Properties—This section contains a drop-down list for the RAID level and a field for entering source
virtual disk capacity percentage for snapshot repositories.
• Snapshot Repository Virtual Disk Location—This section contains a list of preferences for the location of the snapshot repository virtual
disk. These preferences are honored whenever conditions permit.
The Microsoft VSS installer service for storage provisioning is available on the MD Series resource media in the \windows\VDS_VSS
directory.
NOTE: When registering VSS during your Windows setup, the registration graphical user interface (GUI) prompts you to provide
the name of your array because settings in the GUI are array-specific, not host-specific.
After the AMW opens, select the Hardware tab to see the components in the storage array. A component that has a problem is indicated
by a status icon.
The status icons indicate the status of the components that comprise the storage array. Also, the Recovery Guru option provides a detailed
explanation of the conditions and the applicable steps to remedy any Needs Attention status. For more information, see Recovery Guru.
For the status of a storage array, the icons shown in the following table are used in the Tree view, the Table view, and both the EMW Status
Bar and the AMW Status Bar.
Needs Attention There is a problem with the managed storage array that requires your
intervention to correct it.
Unresponsive The storage management station cannot communicate with the storage array,
or with one or both RAID controller modules in the storage
array.
Software Unsupported The storage array is running a level of software that is no longer supported by
the MD Storage Manager.
In the Table view, every managed storage array is listed once, regardless of the number of attachments it has in the Tree view. After the
storage array has been contacted by the MD Storage Manager, an icon representing its hardware status is displayed. Hardware status can
be Optimal, Needs Attention, or Fixing. If, however, all the network management connections from the storage management station to the
storage array shown in the Tree view are Unresponsive, the storage array status is represented as Unresponsive.
In the EMW Status Bar and the AMW Status Bar, the icons also have these behaviors:
• Hold the mouse over the icon in the EMW Status Bar and the AMW Status Bar to show a tooltip with a brief description of the status.
• The icons for the Needs Attention status and Unresponsive status are displayed in the EMW Status Bar and the AMW Status Bar if
there are discovered storage arrays with either condition.
The EMW Tree view has additional status icons that are shown in the following table.
Alert Set You can set alerts at any of the nodes in the
Tree view. Setting an alert at a parent node
level, such as at a host level, sets an alert for
all child nodes. If you set an alert at a
parent node level and any of the in-band
storage array child nodes have a Needs
Upgrade status, the Alert Disabled status
icon is displayed next to the parent node in
the Tree view.
In the Tree view, icons can appear in a string to convey more information. For example, a string of icons can indicate that the storage array is
optimal, an alert is set for the storage array, and firmware is available for download.
NOTE: The MD Storage Manager may take a few minutes to update a status change to or from Unresponsive. A
status change to or from Unresponsive depends on the network link to the storage array. All other status changes update faster.
Trace buffers
Trace information can be saved to a compressed file. The firmware uses the trace buffers to record processing activity, including exception
conditions, that may be useful for debugging. Trace information is stored in the current buffer and can be moved to the flushed buffer after
being retrieved. Because each RAID controller module has its own buffer, there may be more than one flushed buffer. The trace buffers can
be retrieved without interrupting the operation of the storage array and with minimal effect on performance.
NOTE: Use this option only under the guidance of a Technical Support representative.
A zip-compressed archive file is stored at the location you specify on the host. The archive contains trace files from one or both of the
RAID controller modules in the storage array along with a descriptor file named trace_description.xml. Each trace file includes a header
that identifies the file format to the analysis software used by the Technical Support representative. The descriptor file contains:
1 From the AMW, select Monitor > Health > Retrieve Trace Buffers.
The Retrieve Trace Buffers dialog is displayed.
2 Select either RAID controller module 0, RAID controller module 1, or both.
If the RAID controller module status message to the right of a check box indicates that the RAID controller module is offline, the check
box is disabled.
3 From the Trace buffers list, select the relevant option.
4 To move the buffer, select Move current trace buffer to the flushed buffer after retrieval.
NOTE: Move current trace buffer to the flushed buffer after retrieval is not available if the Flushed buffer option is selected
in step 3.
5 Enter a name for the trace buffer data file in Specify filename, or click Browse to navigate to a previously saved file if you want to overwrite an existing file.
6 Click Start.
The trace buffer information is archived to the file specified.
7 After the retrieval process is completed:
• To retrieve trace buffers again using different parameters, repeat step 2 through step 6.
• To close the dialog, click Close.
Suspending or resuming a support data collection schedule
1 From the EMW, select Tools > Collect Support Data > Create/Edit Schedule (on some versions of the MD Storage Manager, Tools > Legacy Collect Support Data > Create/Edit Schedule).
The Schedule Support Data Collection dialog is displayed.
2 In the Storage arrays table, select one or more storage arrays.
3 Perform one of the following actions:
• To suspend a support data collection schedule, click Suspend, then click Yes.
• To restart a support data collection schedule, click Resume, then click OK.
4 Click OK.
Removing a support data collection schedule
1 From the EMW, select Tools > Collect Support Data > Create/Edit Schedule.
The Schedule Support Data Collection dialog is displayed.
2 In the Storage arrays table, select one or more storage arrays.
3 Click Remove.
4 Review the information, then click Yes.
The Schedule Support Data Collection dialog is displayed.
5 Click OK.
Event log
You can use the Event Log Viewer to view a detailed list of events that occur in a storage array. The event log is stored on reserved areas
on the storage array disks. It records configuration events and storage array component failures. The event log stores approximately 8000
events before it replaces an event with a new event. If you want to keep the events, you can save them and then clear them from the event log.
The MD Storage Manager records the following events:
• Critical events — Errors occurring on the storage array that need to be addressed immediately. Loss of data access may occur if the error is not immediately corrected.
Recovery Guru
The Recovery Guru is a component of MD Storage Manager that diagnoses critical events on the storage array and recommends step-by-
step recovery procedures for problem resolution.
In the AMW, to display the Recovery Guru, perform one of these actions:
Viewing the storage array profile
1 To open the storage array profile, in the AMW, perform one of the following actions:
The Storage Array Profile dialog is displayed. The Storage Array Profile dialog contains several tabs, and the title of each tab
corresponds to the subject of the information contained.
2 Perform one of these actions in the Storage Array Profile dialog:
• View detailed information — Go to step 3.
• Search the storage array profile — Go to step 4.
• Save the storage array profile — Go to step 5.
• Close the storage array profile — Go to step 6.
3 Select one of the tabs, and use the horizontal scroll bar and the vertical scroll bar to view the storage array profile information.
NOTE: You can use the other steps in this procedure to search the storage array profile, to save the storage array profile, or
to close the storage array profile.
4 To search the storage array profile, perform these steps:
a Click the Find button.
b Type the term that you want to search for in the Find text box.
If the term is located on the current tab, the term is highlighted in the storage array profile information.
NOTE: The search is limited to the current tab. If you want to search for the term in other tabs, select the tab and
click the Find button again.
c Click the Find button again to search for additional occurrences of the term.
5 To save the storage array profile, perform these steps:
a Click Save As.
b To save all sections of the storage array profile, select All sections.
c To save information from particular sections of the storage array profile, select Select sections, and select the check boxes corresponding to the sections that you want to save.
d Select an appropriate directory.
e In File Name, type the file name of your choice. To associate the file with a particular software application that opens it, specify
a file extension, such as .txt.
Viewing associated physical components
1 In the AMW, select a node in the Storage & Copy Services tab or in the object tree of the Host Mappings tab.
2 Click View Associated Physical Components. Alternatively, if the selected node is a virtual disk, right-click the node to open a pop-up
menu and select View > Associated Physical Components. If the selected node is a disk group, unconfigured capacity, or free
capacity, right-click the node to open a pop-up menu and select View Associated Physical Components.
The View Associated Physical Components dialog is displayed with blue dots next to the physical components that are associated
with the selected node.
3 To close the View Associated Physical Components dialog, click Close.
Recovering from an unresponsive storage array
1 Check the Tree view in the EMW to see if all storage arrays are unresponsive.
2 If any storage arrays are unresponsive, check the storage management station network connection to make sure that it can reach the
network.
3 Ensure that the RAID controller modules are installed and that there is power to the storage array.
4 If there is a problem with the storage array, correct the problem.
5 Perform one of these actions, depending on how your storage array is managed:
• Out-of-band managed storage array—Go to step 6.
• In-band managed storage array—Go to step 12.
6 For an out-of-band managed storage array, use the ping command to verify that the RAID controller modules are network accessible. Type one of these commands, and press <Enter>.
• ping <host-name>
• ping <RAID controller module-IP-address>
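For example, if the management port of the RAID controller module were assigned the hypothetical address 192.168.128.101, you would type:
ping 192.168.128.101
A reply confirms that the RAID controller module is reachable on the network.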
7 If the verification is successful, go to step 8; if not, go to step 9.
8 Remove the storage array with the Unresponsive status from the EMW, and select Add Storage Array to add the storage array again.
9 If the storage array does not return to Optimal status, check the Ethernet cables to make sure that there is no visible damage and that
they are securely connected.
10 Make sure the appropriate network configuration tasks have been performed. For example, make sure that IP addresses have been
assigned to each RAID controller module.
11 If there is a cable or network accessibility problem, go to step 20; if not, go to step 12.
12 For an in-band managed storage array, use the ping command to verify that the host is network accessible. Type one of these commands, and press <Enter>.
• ping <host-name>
• ping <host-IP-address>
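For example, assuming the hypothetical host name mgmt-host-01:
ping mgmt-host-01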
13 If the verification is successful, go to step 14; if not, go to step 15.
14 Remove the host with the Unresponsive status from the EMW, and select Add Storage Array to add the host again.
15 If the host does not return to Optimal status, go to step 16.
16 Ensure that the host is turned on and operational and that the host adapters have been installed.
17 Check all external cables and switches or hubs to make sure that no visible damage exists and that they are securely connected.
18 Make sure the Host Context Agent software is installed and running.
If you started the host system before you were connected to the RAID controller module in the storage array, the Host Context Agent
software will not be able to detect the RAID controller modules. If so, make sure that the connections are secure, and restart the Host
Context Agent software.
19 If you have recently replaced or added the RAID controller module, restart the Host Context Agent software so that the new RAID
controller module is recognized.
20 If the problem still exists, make the appropriate host modifications, and check with other administrators to see whether a firmware upgrade was performed on the RAID controller module from another storage management station.
If a firmware upgrade was performed, the EMW on your management station may not be able to locate the new AMW software
needed to manage the storage array with the new version of the firmware.
21 If the problem persists, contact your Technical Support representative.
22 Determine if there is an excessive amount of network traffic to one or more RAID controller modules.
This problem is self-correcting because the EMW software periodically retries to establish communication with the RAID controller
modules in the storage array. If the storage array was unresponsive and a subsequent attempt to connect to the storage array
succeeds, the storage array becomes responsive.
For an out-of-band managed storage array, determine if management operations are taking place on the storage array from other storage management stations. A RAID controller module-determined limit exists to the number of Transmission Control Protocol/Internet Protocol (TCP/IP) connections that can be made to the RAID controller module before it stops responding to subsequent connection attempts.
Locating an expansion enclosure
• If you have an expansion enclosure with a white LED, the Blink Expansion Enclosure operation causes the white LED on the expansion
enclosure to come on. The LED does not blink.
• If you have any other types of expansion enclosures, this operation causes the appropriate LED on all of the physical disks in the
expansion enclosure to blink.
Capturing state information
1 From the AMW, select Monitor > Health > Capture State Information.
2 Read the information in the Confirm State Capture dialog, and type yes to continue.
3 In the Specify filename text box, enter a name for the file to be saved, or browse to a previously saved file if you want to overwrite an
existing file.
Use the convention filename.dmp for the name of the file. The suffix .dmp is added to the file automatically if you do not specify a
suffix for the file.
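For example, typing the hypothetical name arraystate in the text box produces a file named:
arraystate.dmp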
4 Click Start.
NOTE: Each test shows a status of Executing while it is in progress. The test then shows Completed when it successfully
finishes. If any of the tests cannot be completed, a Failed status is displayed in the Execution summary window.
5 Monitor the progress and completion status of all the tests. When they finish, click OK to close the State Capture dialog.
Clicking Cancel stops the state capture process, and any remaining tests do not complete. Any test information that has been
generated to that point is saved to the state capture file.
SMrepassist utility
SMrepassist (replication assistance) is a host-based utility for Windows platforms. This utility is installed with MD Storage Manager. Use
this utility before and after you create a virtual disk copy on a Windows operating system to ensure that all the memory-resident data for
file systems on the target virtual disk is flushed and that the driver recognizes signatures and file system partitions. You can also use this
utility to resolve duplicate signature problems for snapshot virtual disks.
From a command prompt window on a host running Windows, navigate to: C:\Program Files\Dell\MD Storage Manager\util and run the
following command:
SMrepassist -f <filesystem-identifier>
where -f flushes all the memory-resident data for the file system indicated by <filesystem-identifier>, and <filesystem-identifier> specifies a unique file system using the following syntax: drive-letter:<mount-point-path>
The file system identifier may consist of only a drive letter, as in the following example:
SMrepassist -f E:
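The identifier can also include a mount-point path, following the syntax shown above. For example, assuming a hypothetical volume mounted at E:\mnt\data, the command would be:
SMrepassist -f E:\mnt\data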
An error message is displayed in the command line when the utility cannot distinguish between the following:
• Source virtual disk and snapshot virtual disk—for example, if the snapshot virtual disk has been removed.
• Standard virtual disk and virtual disk copy—for example, if the virtual disk copy has been removed.
Unidentified devices
An unidentified node or device occurs when the MD Storage Manager cannot access a new storage array. Causes for this error include network connection problems, a storage array that is turned off, or a storage array that does not exist.
NOTE: Before beginning any recovery procedure, make sure that the Host Context Agent software is installed and running. If you
started the host before the host was connected to the storage array, the Host Context Agent software is not able to find the
storage array. If so, make sure that the connections are tight, and restart the Host Context Agent software.
1 Make sure that the network connection to the storage management station is functional.
2 Make sure that the controllers are installed and that the power to the storage array is turned on. Correct any existing problems before
continuing.
3 If you have an in-band storage array, use the following procedure. Click Refresh after each step to check the results:
a Make sure that the Host Context Agent software is installed and running. If you started the host before the host was connected
to the controllers in the storage array, the Host Context Agent software is not able to find the controllers. If so, make sure that
the connections are tight, and restart the Host Context Agent software.
b Make sure that the network can access the host by using the ping command in the following syntax: ping <host-name-
or-IP-address-of-the-host>
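For example, assuming the host has the hypothetical IP address 192.168.130.105:
ping 192.168.130.105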
If the network can access the host, continue to step c. If the network cannot access the host, skip to step d.
c Remove the host with the unresponsive status from the MD Storage Manager, and add that host again.
If the host returns to optimal status, you have completed this procedure.
d Make sure that the power to the host is turned on and that the host is operational.
e If applicable, make sure that the host bus adapters have been installed in the host.
f Examine all external cables and switches or hubs to make sure that you cannot see any damage and that they are tightly
connected.
g If you have recently replaced or added the controller, restart the Host Context Agent software so that the new controller is
found.
If a problem exists, make the appropriate modifications to the host.
4 If you have an out-of-band storage array, use the following procedure. Click Refresh after each step to check the results:
a Make sure that the network can access the controllers by using the ping command. Use the following syntax: ping
<controller-IP-address>
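For example, assuming a controller with the hypothetical management address 192.168.130.101:
ping 192.168.130.101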
If the network can access the controllers, continue to step b. If the network cannot access the controllers, skip to step c.
b Remove the storage array with the unresponsive status from MD Storage Manager, and add that storage array again.
If the storage array returns to optimal status, you have completed this procedure.
c Examine the Ethernet cables to make sure that they are not visibly damaged and that they are tightly connected.
d Make sure that the applicable network configuration tasks have been done—for example, the IP addresses have been assigned
to each controller.
5 Make sure that the controller firmware is compatible with MD Storage Manager on your management station. If the controller
firmware was upgraded, the MD Storage Manager may not have access to the storage array. A new version of MD Storage Manager
may be needed to manage the storage array with the new version of the controller firmware.
If this problem exists, see Getting Help.
6 Look to see if there is too much network traffic to one or more controllers. This problem corrects itself because the MD Storage Manager tries to re-establish communication with the controllers in the storage array at regular intervals. If the storage array was unresponsive and a subsequent attempt to connect to the storage array succeeds, the storage array becomes responsive.
7 For an out-of-band storage array, look to see if management operations are taking place on the storage array from other storage
management stations. The type of management operations being done and the number of management sessions taking place
together establish the number of TCP/IP connections made to a controller. When the maximum number of TCP/IP connections have
been made, the controller stops responding. This problem corrects itself because after some TCP/IP connections are complete, the
controller becomes responsive to other connection attempts.
8 If the storage array is still unresponsive, problems may exist with the controllers.
If these problems persist, see Getting Help.
Starting the SMagent software
The SMagent software may take a little time to initialize. The cursor is shown, but the terminal window does not respond. When the program starts, the following message is displayed: SMagent started.
After the program completes the startup process, text similar to the following is displayed:
Modular Disk Storage Manager Agent, Version 90.02.A6.14
Copyright (C) 2009-2010 Dell, Inc. All rights reserved.
Checking device <n/a> (/dev/sg10): Activating
Checking device /dev/sdb (/dev/sg11): Skipping
Checking device <n/a> (/dev/sg3): Activating
Checking device <n/a> (/dev/sg4): Activating
Checking device <n/a> (/dev/sg5): Activating
Checking device <n/a> (/dev/sg6): Activating
Checking device <n/a> (/dev/sg7): Activating
Checking device <n/a> (/dev/sg8): Activating
Checking device <n/a> (/dev/sg9): Activating
Getting help
1 Go to Dell.com/support.
2 Select your support category.
3 Verify your country or region in the Choose a Country/Region drop-down list at the bottom of the page.
4 Select the appropriate service or support link based on your need.