Docu57785 VPLEX 5.4 CLI Reference Guide
GeoSynchrony
Version Number 5.4.1
REV 02
Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage
Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic
Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra,
Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation
Technology, Common Information Model, Configuration Intelligence, Connectrix, CopyCross, CopyPoint, CX, Dantz, Data Domain,
DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab,
EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File
Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, InputAccel, InputAccel Express, Invista,
Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, OnAlert, OpenScale, PixTools, Powerlink, PowerPath,
PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, SafeLine, SAN Advisor, SAN Copy, SAN
Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix,
Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, Viewlets, Virtual Matrix, Virtual Matrix
Architecture, Virtual Provisioning, VisualSAN, VisualSRM, VMAX, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression,
xPresso, YottaYotta, the EMC logo, and the RSA logo, are registered trademarks or trademarks of EMC Corporation in the United States
and other countries. Vblock is a trademark of EMC Corporation in the United States.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the
EMC online support website.
Preface
New Commands for VPLEX..................................................................... 11
Modified Commands for VPLEX.............................................................. 12
Removed Commands for VPLEX ............................................................. 15
New Context for VPLEX .......................................................................... 15
Related documentation ......................................................................... 23
Conventions used in this document ...................................................... 23
Where to get help.................................................................................. 24
Your comments ..................................................................................... 25
Index
As part of an effort to improve its product lines, EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not
be supported by all versions of the software or hardware currently in use. The product
release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not
function as described in this document.
Note: This document was accurate at publication time. New versions of this document
might be released on the EMC online support website. Check the EMC online support
website to ensure that you are using the latest version of this document.
About this guide This document is part of the VPLEX documentation set, and describes the VPLEX
command line interface. Each major section includes introductory information and a
general procedure for completing a task.
The introductory information and detailed steps for each procedure appear in the
EMC VPLEX Management Console online help so you have the complete information
available when you actually configure and manage the storage environment, should
you require help.
This guide is part of the Management Console documentation set, and is intended for
use by customers and service providers who use the VPLEX CLI to configure and
manage a storage environment.
◆ scheduleSYR add - Revised example description to reflect correct date and time for
scheduled job.
◆ security renew-all-certificates - Changed the service password in the examples to a
sample password. Added note to the command description to contact the System
Administrator for the current service password.
◆ set - Updated the example for displaying information in the eth0 context. Updated the
example for setting a storage volume’s thin-rebuild attribute to “true.” Changed
example to show new fields for "underlying-storage-block-size" and "vias-based."
◆ set topology - Added a note about using the default setting for the front-end ports and
connecting to the hosts via a switched fabric. Also added a warning message about
the consequences of changing the local COM topology.
◆ show-use-hierarchy - Removed sentence in the [-t|--targets] option about the target
requiring a fully qualified path. Replaced example showing usage hierarchy for a
specified meta-volume.
◆ sms dump - Added a note to the command description specifying that the log files
listed are a core set and do not include all log files.
◆ storage-tool compose - Changed argument name from [-s|--source-storage-volume] to
[-m|--source-mirror]. Moved the name, geometry, and storage-views arguments under
required arguments. Added consistency-group and storage-views options, and
explanation in the command description. Replaced the example.
◆ storage-volume auto-unbanish-interval - Changed reference to topic in
storage-volume-unbanish command from link to text reference.
◆ storage-volume claimingwizard - In Table 21, for storage array EMC VPLEX, revised the
command to create the hints file from export view map -f EMC_PROD12.txt -v <views> to
export storage-view map -f EMC_PROD12.txt -v <views>.
◆ storage-volume find-array - Revised example and added a new example.
◆ storage-volume unclaim - Fixed typographical error in the --return-to-pool option
name.
◆ user add - Added required -u option to the example.
◆ user passwd - Added required -u option to the example.
◆ user remove - Added required -u option to the example.
◆ version - Added command line to first example. Replaced examples to show current
version information and correct director context.
◆ virtual-volume create - Changed command output example and added the following
fields and descriptions to the table: recoverpoint-replication-at, and vpd-id. Added the
disconnected status to the service status field.
◆ virtual-volume destroy - Added link to virtual-volume expand command in See also
section.
◆ virtual-volume provision - Added the [-t|--thin] option to provision a thin LUN on a VNX
array for legs that support thin provisioning.
Context Tree Upgrade The cluster-connectivity context is deprecated and will be removed in a future
version. It is replaced by the connectivity context, which is available in this release. Use the
new context whenever possible.
The following changes apply:
◆ Any configuration that exists in cluster-connectivity/ will automatically be reflected in
the connectivity/wan-com/ context and vice versa. For example, an existing wan-com
configuration will be displayed in connectivity/wan-com.
◆ All the attributes from the cluster-connectivity/ context are present in the
connectivity/wan-com/ context.
◆ Commands to change the remote-cluster-addresses attribute:
Old: cluster-connectivity remote-clusters add-addresses
New: configuration remote-clusters add-addresses
Old: cluster-connectivity remote-clusters clear-addresses
New: configuration remote-clusters clear-addresses
◆ Member-ports is now a context instead of an attribute. The fields that are part of the
string representation of a member-port under the cluster-connectivity context are now
table columns in a long listing (ll) of a connectivity/**/member-ports/ context.
Old: cluster-connectivity/port-groups/port-group-0/member-ports (attribute)
New: connectivity/wan-com/port-groups/ip-port-group-0/member-ports (context)
◆ Inside cluster-connectivity/port-groups/port-group-0:
• “subnet” is the name of a subnet from cluster-connectivity/subnets
• “option-set” is the name of an option-set from cluster-connectivity/option-sets
If the subnet=”cluster-1-SN00” and option-set=”optionset-com-0”, then
Old: cluster-connectivity/subnets/cluster-1-SN00/
New: connectivity/wan-com/port-groups/ip-port-group-0/subnet/
Old: cluster-connectivity/option-sets/optionset-com-0
New: connectivity/wan-com/port-groups/ip-port-group-0/option-set/
◆ There is no longer any need or ability to set the subnet used by a port-group.
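As an illustration of the new paths, a minimal sketch using the cd and ll commands described
later in this guide (the cluster and port-group names are examples; actual names depend on the
configuration):
VPlexcli:/> cd /clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-0/option-set
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-0/option-set> ll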
Cluster-connectivity (Deprecated)
This context contains information for cluster connectivity.
Context
/clusters/cluster-*/cluster-connectivity
Position
Parent - cluster-*
Children - Option-sets (deprecated), Port-groups (deprecated), Subnets (deprecated)
Context-Specific Commands
remote-clusters
Data Structure
discovery-address
discovery-port
listening-port
remote-cluster-addresses
Notes
This context is replaced by:
/clusters/cluster-*/connectivity/wan-com.
Option-sets
Option-set configuration for IP wan-com.
Context
/clusters/cluster-*/cluster-connectivity/option-sets/*
Position
Parent - cluster-connectivity (deprecated)
Children - None
Context-Specific Commands
None
Data Structure
connection-open-timeout
keepalive-timeout
socket-buf-size
Notes
The currently-used option-set context is replaced by:
/clusters/cluster-*/connectivity/wan-com/port-groups/ip-port-group-*/option-set
Port-groups
A communication channel for IP wan-com composed of one port from each director of the
local cluster.
Context
/clusters/cluster-*/cluster-connectivity/port-groups/port-group-0
/clusters/cluster-*/cluster-connectivity/port-groups/port-group-1
Position
Parent - cluster-connectivity
Children - None
Context-Specific Commands
None
Data Structure
enabled
member-ports
option-set
subnet
Notes
This context is replaced by Ethernet port-groups:
/clusters/cluster-*/connectivity/wan-com/port-groups/ip-port-group-0
/clusters/cluster-*/connectivity/wan-com/port-groups/ip-port-group-1
Subnets
Network configuration for IP wan-com.
Context
/clusters/cluster-*/cluster-connectivity/subnets/*
Position
Parent - cluster-connectivity
Children - None
Context-Specific Commands
In container context:
clear
create
destroy
modify
In subnet instance context:
None
Data Structure
cluster-address
gateway
mtu
prefix
proxy-external-address
remote-subnet-address
Notes
The currently-used subnet context is replaced by:
/clusters/cluster-*/connectivity/wan-com/port-groups/ip-port-group-*/subnet
Named subnet instance contexts are created in the subnets container context by the
command subnet create and are removed by the command subnet destroy.
Back-end
Configuration of back-end connectivity to arrays.
Context
/clusters/cluster-*/connectivity/back-end
Position
Parent - connectivity
Children - port-groups
Context-Specific Commands
None
Data Structure
None
Front-end
Configuration of front-end connectivity to hosts.
Context
/clusters/cluster-*/connectivity/front-end
Position
Parent - connectivity
Children - port-groups
Context-Specific Commands
None
Data Structure
None
Local-com
Configuration for local-com inter-director connectivity.
Context
/clusters/cluster-*/connectivity/local-com
Position
Parent - connectivity
Children - port-groups
Context-Specific Commands
None
Data Structure
None
Wan-com
Configuration for wan-com inter-cluster connectivity.
Context
/clusters/cluster-*/connectivity/wan-com
Position
Parent - connectivity
Children - port-groups
Context-Specific Commands
remote-clusters add-addresses
remote-clusters clear-addresses
Data Structure
discovery-address
discovery-port
listening-port
remote-cluster-addresses
Port-groups
Communication channels formed by one port from each director in the local cluster.
Context
/clusters/cluster-*/connectivity/*/port-groups
Position
Parent - back-end, front-end, local-com, wan-com
Children - Ethernet-port-group, Fc-port-group
Context-Specific Commands
None
Data Structure
None
Notes
Port-groups are named according to the type (Ethernet: ip/iscsi, Fibre Channel: fc) and
numbering of the ports they contain. The existence of port-group instance contexts is
determined by the ports on the system. IP port-groups will only exist if Ethernet ports exist
and are assigned to the communication role associated with this context. Likewise, FC
port-groups will only exist if there are appropriate Fibre Channel ports.
Ethernet port-group
Communication channel composed of one Ethernet port from each director in the local
cluster.
Context
/clusters/cluster-*/connectivity/*/port-groups/ip-port-group-*
/clusters/cluster-*/connectivity/*/port-groups/iscsi-port-group-*
Position
Parent - port-groups
Children - member-ports, option-set, subnet
Context-Specific Commands
subnets clear
subnet remote-subnet add
subnet remote-subnet remove
Data Structure
enabled
Notes
Port-groups are named according to the role (local-com/wan-com: ip, front-end/back-end:
iscsi) and numbering of the ports they contain. The existence of port-group instance
contexts is determined by the ports on the system. IP/iSCSI port-groups will only exist if
Ethernet ports exist and are assigned to the role associated with this context.
Member-ports
A member port in a port-group.
Context
/clusters/cluster-*/connectivity/*/port-groups/*/member-ports/director-*
Position
Parent - Ethernet-port-group, Fc-port-group
Children - None
Context-Specific Commands
None
Data Structure
address
director
enabled
engine
port-name
Notes
Each member-port sub-context is named for the director to which the member port
belongs. This naming convention avoids the name collision caused when a port has the
same name on each director.
A long listing (ll) of the member-ports container context summarizes the information from
each member port:
Director Port Enabled Address
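For example, a hedged sketch of such a listing (the cluster and port-group names are
illustrative; actual names depend on the ports present on the system):
VPlexcli:/> ll /clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-0/member-ports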
Option-sets
Option-set configuration for Ethernet port-groups.
Context
/clusters/cluster-*/connectivity/*/port-groups/ip-port-group*/option-set
/clusters/cluster-*/connectivity/*/port-groups/iscsi-port-group*/option-set
Position
Parent - Ethernet-port-group
Children - None
Context-Specific Commands
None
Data Structure
connection-open-timeout
keepalive-timeout
socket-buf-size
Related documentation
Related documents (available on EMC Online Support) include:
◆ EMC VPLEX Release Notes for GeoSynchrony Releases
◆ EMC VPLEX Product Guide
◆ EMC VPLEX Site Preparation Guide
◆ EMC VPLEX Hardware Installation Guide
◆ EMC VPLEX Configuration Worksheet
◆ EMC VPLEX Configuration Guide
◆ EMC VPLEX Security Configuration Guide
◆ EMC VPLEX CLI Reference Guide
◆ EMC VPLEX Administration Guide
◆ VPLEX Management Console Help
◆ EMC VPLEX Element Manager API Guide
◆ EMC VPLEX Open-Source Licenses
◆ EMC VPLEX GPL3 Open-Source Licenses
◆ EMC Regulatory Statement for EMC VPLEX
◆ Procedures provided through the Generator
◆ EMC Host Connectivity Guides
A caution contains information essential to avoid data loss or damage to the system or
equipment.
IMPORTANT
An important notice contains information essential to operation of the software.
Typographical conventions
EMC uses the following type style conventions in this document:
Normal Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes,
buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons,
DQL statements, keywords, clauses, environment variables, functions,
utilities
• URLs, pathnames, filenames, directory names, computer names,
filenames, links, groups, service keys, file systems, notifications
Bold Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes,
services, applications, utilities, kernels, notifications, system call, man
pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes,
buttons, fields, and menus)
• What user specifically selects, clicks, presses, or types
Italic Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example a new term)
• Variables
Courier Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown
outside of running text
Courier bold Used for:
• Specific user input (such as commands)
Courier italic Used in procedures for:
• Variables on command line
• User input variables
[] Square brackets enclose optional values
| Vertical bar indicates alternate selections - the bar means “or”
{} Braces indicate content that you must specify (that is, x or y or z)
... Ellipses indicate nonessential information omitted from the example
Your comments
Your suggestions will help to improve the accuracy, organization, and overall quality of the
user publications. Send your opinions of this document to:
[email protected]
Note: Contact the System Administrator to obtain the service password. For more
information about user passwords, refer to the VPLEX Security Configuration Guide.
1. Log in to the service account at the cluster's management server's public IP address
using an SSH client (PuTTY or OpenSSH). Configure the SSH client as follows:
• Port 22
• SSH protocol version is set to 2
• Scrollback lines set to 20000
The login prompt appears:
login as:
4. Type the vplexcli command to connect to the VPLEX command line interface:
service@ManagementServer:~> vplexcli
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
VPlexcli:/>
Log out
Use the exit command to exit the VPLEX command line interface from any context.
For example:
VPlexcli:/clusters> exit
Connection closed by foreign host.
Password Policies
The VPLEX management server uses a Pluggable Authentication Module (PAM)
infrastructure to enforce minimum password quality. For more information about
technology used for password protection, refer to the VPLEX Security Configuration Guide.
Refer to “Log in to/log out from the CLI” on page 27 for information on the commands
used to set password policies and the values allowed.
Note the following:
◆ Password policies do not apply to users configured using the LDAP server.
◆ Password policies do not apply to the service account.
◆ The Password inactive days policy does not apply to the admin account to protect the
admin user from account lockouts.
◆ During the management server software upgrade, an existing user’s password is not
changed; only the user’s password age information changes.
◆ You must be an admin user to configure a password policy.
Table 1 lists and describes the password policies and the default values.
Password inactive days - The number of days after a password has expired before the
account is locked. Default: 1
The password policy for existing admin and customer-created user accounts is updated
automatically as part of the upgrade. See the VPLEX Security Configuration Guide for
information about service user account passwords.
Note: A space is allowed only between characters in a password, not at the beginning or
end of the password.
VPlexcli:/> cd /clusters/cluster-1/devices/
VPlexcli:/clusters/cluster-1/devices>
For example, to navigate from the root (/) context to the connectivity context to view
member ports for a specified iscsi-port-group:
VPlexcli:/> cd /clusters
VPlexcli:/clusters> cd cluster-1
VPlexcli:/clusters/cluster-1> cd connectivity
VPlexcli:/clusters/cluster-1/connectivity> cd back-end
VPlexcli:/clusters/cluster-1/connectivity/back-end> cd port-groups
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups> cd iscsi-port-group-8
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-group-8> cd member-ports
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-group-8/member-ports> ll
Alternatively, type all the context identifiers in a single command. For example, the above
navigation can be typed as:
VPlexcli:/> cd clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-group-8/member-ports
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-group-8/member-ports> ll
Use the cd command with no arguments or followed by a space and three periods (cd ...)
to return to the root context:
VPlexcli:/engines/engine-1-1/fans> cd
VPlexcli:/>
Use the cd command followed by a space and two periods (cd ..) to return to the context
immediately above the current context:
VPlexcli:/monitoring/directors/director-1-1-B> cd ..
VPlexcli:/monitoring/directors>
To navigate directly to a context from any other context use the cd command and specify
the absolute context path. In the following example, the cd command changes the context
from the data-migrations/extent-migrations context to the engines/engine-1-1/fans
context:
VPlexcli:/data-migrations/extent-migrations> cd
/engines/engine-1-1/fans/
VPlexcli:/engines/engine-1-1/fans>
VPlexcli:/monitoring/directors/director-1-1-A> pushd
[/engines/engine-1-1/directors/director-1-1-A,
/monitoring/directors/director-1-1-A,
/monitoring/directors/director-1-1-A]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> pushd
[/monitoring/directors/director-1-1-A,
/engines/engine-1-1/directors/director-1-1-A,
/monitoring/directors/director-1-1-A]
VPlexcli:/monitoring/directors/director-1-1-A>
◆ Use the popd command to remove the last directory saved by the pushd command
and jump to the new top directory.
In the following example, the dirs command displays the context stack saved by the
pushd command, and the popd command removes the top directory, and jumps to the
new top directory:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> dirs
[/engines/engine-1-1/directors/director-1-1-A,
/monitoring/directors/director-1-1-A]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> popd
[/engines/engine-1-1/directors/director-1-1-A]
VPlexcli:/monitoring/directors/director-1-1-A>
Note: The context tree displays only those objects associated with directors to which the
management system is connected.
◆ The ls command displays the sub-contexts immediately accessible from the current
context:
VPlexcli:/> ls
clusters data-migrations distributed-storage
engines management-server monitoring
notifications system-defaults
◆ For contexts where the next lowest level is a list of individual objects, the ls command
displays a list of the objects:
VPlexcli:/clusters/cluster-1/exports/ports> ls
P000000003B2017DF-A0-FC00 P000000003B2017DF-A0-FC01
P000000003B2017DF-A0-FC02 P000000003B2017DF-A0-FC03
P000000003B3017DF-B0-FC00 P000000003B3017DF-B0-FC01
P000000003B3017DF-B0-FC02 P000000003B3017DF-B0-FC03
VPlexcli:/data-migrations> ls
device-migrations/ extent-migrations/
◆ The tree command displays the immediate sub-contexts in the tree using the current
context as the root:
VPlexcli:/> cd /clusters/cluster-1/devices/Symm_rC_3
VPlexcli:/clusters/cluster-1/devices/Symm_rC_3> tree
/clusters/cluster-1/devices/Symm_rC_3:
components
Symm_rC_3_extent_0
Symm_rC_3_extent_1
◆ The tree -e command displays immediate sub-contexts in the tree and any
sub-contexts under them:
VPlexcli:/clusters/cluster-1/devices/Symm_rC_3> tree -e
/clusters/cluster-1/devices/Symm_rC_3:
components
Symm_rC_3_extent_0
components
Symm0487_44C
components
Symm_rC_3_extent_1
components
Symm0487_44B
components
Note: For contexts where the next level down the tree is a list of objects, the tree command
displays the list. This output can be very long. For example:
VPlexcli:/clusters/cluster-1> tree
/clusters/cluster-1:
cluster-connectivity
cluster-links
to-cluster-2
proxy-servers
static-routes
devices
base0
components
extent_CX4_lun0_1
components
CX4_lun0
components
.
.
.
exports
initiator-ports
LicoJ006_hba0
LicoJ006_hba1
.
.
.
ports
P000000003CA00147-A0-FC00
P000000003CA00147-A0-FC01
.
.
.
storage-views
LicoJ009
LicoJ013
storage-elements
extents
extent_CX4_Logging_1
.
.
Some contexts “inherit” commands from their parent context. These commands can be
used in both the current context and the context immediately above in the tree:
VPlexcli:/distributed-storage/bindings> help -G
Commands inherited from parent contexts:
dd rule rule-set summary
Some commands are loosely grouped by function. For example, the commands to create
and manage performance monitors start with the word “monitor”.
Use the <Tab> key to display the commands within a command group. For example, to display
the commands that start with the word “monitor”, type “monitor” followed by the <Tab>
key:
VPlexcli:/> monitor <Tab>
Page output
For large configurations, output from some commands can reach hundreds of lines.
Paging displays long output generated by the ll and ls commands one page at a time:
To enable paging, add -p at the end of any command:
VPlexcli:/clusters/cluster-1/storage-elements> ls storage-volumes -p
One page of output is displayed. The following message is at the bottom of the first page:
-- more --(TOP )- [h]elp
Tab completion
Use the Tab key to:
◆ Complete a command
◆ Display valid contexts and commands
◆ Display command arguments
Complete a command
Use the Tab key to automatically complete a path or command until the path/command is
no longer unique.
For example, to navigate to the UPS context on a single cluster (named cluster-1), type:
cd /clusters/cluster-1/uninterruptible-power-supplies/
2. There is only one cluster (it is unique). Press Tab to automatically specify the cluster:
cd /clusters/cluster-1/
cluster-connectivity/ devices/
exports/ storage-elements/
system-volumes/ uninterruptible-power-supplies/
virtual-volumes/
VPlexcli:/> cd /clusters/cluster-1/
Wildcards
The VPLEX command line interface includes four wildcards:
• * - matches any number of characters.
• ** - matches all contexts and entities between two specified objects.
• ? - matches any single character.
• [a|b|c] - matches any of the single characters a or b or c.
* wildcard
Use the * wildcard to apply a single command to multiple objects of the same type
(directors or ports). For example, to display the status of ports on each director in a
cluster, without using wildcards:
ll engines/engine-1-1/directors/director-1-1-A/hardware/ports
ll engines/engine-1-1/directors/director-1-1-B/hardware/ports
ll engines/engine-1-2/directors/director-1-2-A/hardware/ports
ll engines/engine-1-2/directors/director-1-2-B/hardware/ports
.
.
.
Alternatively:
◆ Use one * wildcard to specify all engines, and
◆ Use a second * wildcard to specify all directors:
ll engines/engine-1-*/directors/*/hardware/ports
** wildcard
Use the ** wildcard to match all contexts and entities between two specified objects. For
example, to display all director ports associated with all engines without using wildcards:
ll /engines/engine-1-1/directors/director-1-1-A/hardware/ports
ll /engines/engine-1-1/directors/director-1-1-B/hardware/ports
.
.
.
Alternatively, use a ** wildcard to specify all contexts and entities between /engines and
ports:
ll /engines/**/ports
? wildcard
Use the ? wildcard to match a single character (number or letter).
ls /storage-elements/extents/0x1?[8|9]
[a|b|c] wildcard
Use the [a|b|c] wildcard to match one or more characters in the brackets.
ll engines/engine-1-1/directors/director-1-1-A/hardware/ports/A[0-1]
displays only ports with names starting with an A, and a second character of 0 or 1.
Names
Major components of the VPLEX are named as follows:
◆ Clusters - VPLEX Local™ configurations have a single cluster, with a cluster ID of
cluster 1. VPLEX Metro™ and VPLEX Geo™ configurations have two clusters with
cluster IDs of 1 and 2.
VPlexcli:/clusters/cluster-1/
◆ Engines are named <engine-n-n> where the first value is the cluster ID (1 or 2) and the
second value is the engine ID (1-4).
VPlexcli:/engines/engine-1-2/
◆ Directors are named <director-n-n-n> where the first value is the cluster ID (1 or 2), the
second value is the engine ID (1-4), and the third is A or B.
VPlexcli:/engines/engine-1-1/directors/director-1-1-A
For objects that can have user-defined names, those names must comply with the
following rules:
◆ Can contain uppercase and lowercase letters, numbers, and underscores
◆ No spaces
◆ Cannot start with a number
◆ No more than 63 characters
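For example, Symm_rC_3_extent_0 and base0 (names used in examples later in this guide) are
valid user-defined names, whereas 2nd_device (starts with a number) and my device (contains a
space) are not.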
Specifying addresses
VPLEX uses both IPv4 and IPv6 addressing. Many commands accept addresses in either
IPv4 or IPv6 format.
Refer to the EMC VPLEX Administration Guide for usage rules and address formats.
Command globbing
Command globbing combines wildcards and context identifiers in a single command.
Globbing can address multiple entities using a single command.
Example 1
In the following example, a single command enables ports in all engines and all directors
(A and B) whose names include 0-FC or 1-FC:
set /engines/*/directors/*/hardware/ports/*[0-1]-FC*:: enabled true
Example 2
To display the status of all the director ports on a large configuration using no wildcards,
type:
ll /engines/engine-1-<Enclosure_ID>/directors/<director_name>/hardware/ports
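Alternatively, using the ** wildcard described above, the same ports can be listed with a
single command:
ll /engines/**/ports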
Type the command with argument identifiers, in which case the arguments can appear in
any order (not necessarily the order specified by the syntax):
VPlexcli:/> alias --to "cd clusters" --name cdc
or,
Type the command with the arguments without identifiers in the order specified by the
command syntax:
VPlexcli:/> alias cdc "cd clusters"
--verbose argument
The --verbose argument displays additional information for some commands. For
example, without the --verbose argument:
VPlexcli:/> connectivity validate-be
Cluster cluster-1
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement
for storage volume paths*.
0 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from
same director.
*To meet the high availability requirement for storage volume paths each
storage volume must be accessible from each of the directors through 2
or more VPlex backend ports, and 2 or more Array target ports, and there
should be 2 or more ITLs.
Cluster cluster-2
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement
for storage volume paths*.
0 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from
same director.
*To meet the high availability requirement for storage volume paths each
storage volume must be accessible from each of the directors through 2
or more VPlex backend ports, and 2 or more Array target ports, and there
should be 2 or more ITLs.
VPlexcli:/>
Type the first letter of the command to search for. After the first letter is typed, the
search tool displays a list of possible matches.
Use the history nn command to display the last nn entries in the list:
VPlexcli:/clusters/cluster-1> history 22
478 ls storage-volumes -p
479 cd clusters/cluster-1/
480 ls storage-volumes
481 cd storage-elements/
482 ls storage-volumes -p
Get help
◆ Use the help or ? command with no arguments to display all the commands available in
the current context, including global commands.
◆ Use the help or ? command with the -G argument to display all the commands available
in the current context, excluding global commands:
VPlexcli:/clusters> help -G
Commands specific to this context and below:
◆ Use the help command or command --help to display help for the specified command.
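For example, an illustrative invocation (the [-h|--help] argument is documented for the amp
register command later in this guide):
VPlexcli:/> amp register --help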
This chapter describes each of the VPLEX CLI commands in alphabetical order.
advadm dismantle
Dismantles VPLEX storage objects down to the storage-volume level, and optionally
unclaims the storage volumes.
Optional arguments
--unclaim-storage-volumes - Unclaim the storage volumes after the dismantle is
completed.
[-f|--force] - Force the dismantle without asking for confirmation. Allows the command to
be run from a non-interactive script.
destroyed
/clusters/cluster-1/storage-elements/extents/extent_CLAR0014_LUN14_1
Example In the following example, the specified volumes are NOT dismantled because they are
exported or are members of a consistency group:
VPlexcli:/> advadm dismantle -v rC_extentSrc_C1_CHM_00*, axel_dr1_vol
The following virtual-volumes will not be dismantled because they are exported. Please remove
them from the storage-views before dismantling them:
/clusters/cluster-1/virtual-volumes/rC_extentSrc_C1_CHM_0002_vol is in
/clusters/cluster-1/exports/storage-views/chimera_setupTearDown_C1
/clusters/cluster-1/virtual-volumes/rC_extentSrc_C1_CHM_0001_vol is in
/clusters/cluster-1/exports/storage-views/chimera_setupTearDown_C1
.
.
.
The following virtual-volumes will not be dismantled because they are in consistency-groups.
Please remove them from the consistency-groups before dismantling them:
/clusters/cluster-2/virtual-volumes/axel_dr1_vol is in
/clusters/cluster-2/consistency-groups/async_sC12_vC12_nAW_CHM
No virtual-volumes to dismantle.
alias
Creates a command alias.
Syntax alias
[-n|--name] name
[-t|--to] “commands and arguments to include in the new alias”
Description Aliases are shortcuts for frequently used commands or commands that require long
strings of context identifiers.
Use the alias command with no arguments to display a list of all aliases configured on the
VPLEX.
Use the alias name command to display the underlying string of the specified alias.
Use the alias name “string of CLI commands” command to create an alias with the
specified name that invokes the specified string of commands.
Use the unalias command to delete an alias.
The following aliases are pre-configured:
• ? Substitutes for the help command.
• ll Substitutes for the ls -a command.
• quit Substitutes for the exit command.
An alias that executes correctly in one context may conflict with an existing command
when executed from another context (pre-existing commands are executed before aliases
if the syntax is identical).
Make sure that the alias name is unique, that is, not identical to an existing command or
alias.
If an alias does not behave as expected, delete or reassign the alias.
The CLI executes the first matching command, including aliases in the following order:
1. Local command in the current context.
2. Global command in the current context.
3. Root context is searched for a match.
An alias set at the command line will not persist when the user interface is restarted. To
create an alias that will persist, add it to the /var/log/VPlex/cli/VPlexcli-init file.
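As an illustration, a minimal sketch of creating the alias used in the following example (the
alias name and target context are taken from that example):
VPlexcli:/> alias --name mon-Dir-1-1-B --to "cd /monitoring/directors/director-1-1-B"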
Use an alias:
VPlexcli:/> mon-Dir-1-1-B
VPlexcli:/monitoring/directors/director-1-1-B>
See also ◆ ls
◆ unalias
amp register
Associates an Array Management Provider with a single VPLEX cluster.
Contexts /clusters/ClusterName/storage-elements/array-providers
Optional Arguments
[-c | --cluster] cluster context path - The VPLEX cluster associated with the Array
Management Provider. This argument may be omitted when the command is executed
from or below a cluster context (meaning the cluster is implied).
[-s | --use-ssl] - Specifies whether to use HTTP or HTTPS (SSL) protocol when connecting to
the Array Management Provider URL.
[-h |--help] - Displays the usage for this command.
[--verbose] - Provides more output during command execution.
Description An Array Management Provider (AMP) is an external array management endpoint that
VPLEX communicates with to execute operations on individual arrays. Examples of AMPs
include external SMI-S providers.
An AMP exposes a management protocol/API through which array operations can be
executed. Examples of management protocols include Provisioning and Snap & Clone.
An AMP manages one or more arrays. For example, an SMI-S provider can manage multiple
arrays.
amp unregister
Unregisters an Array Management Provider. The Array Management Provider is no longer
available for any operations after being unregistered.
Contexts /clusters/ClusterName/storage-elements/array-providers
Optional Arguments
[-c | --cluster] cluster context - The cluster associated with the Array Management Provider.
Can be omitted when the command is executed from or below a cluster context in which
case the cluster is implied.
[-f | --force] - Force the operation without confirmation. Allows the command to be run from
a non-interactive script.
[-h |--help] - Displays the usage for this command.
[--verbose] - Provides more output during command execution.
Description An Array Management Provider (AMP) is an external array management endpoint that
VPLEX communicates with to execute operations on individual arrays. Examples of AMPs
include external SMI-S providers.
The amp unregister command unregisters an array provider. The AMP is no longer
available for any operations.
In the initial release, the CLI provides no validation for this command. Therefore, always
ensure the AMP is safe to remove before unregistering a provider. For example, ensure
the AMP is not currently in use or required by any provisioned volumes. In the future,
validation will be added for this command.
VPlexcli:/>
Unregistering an array-provider will permanently remove it. Do you wish to proceed? (Yes/No)
Yes
VPlexcli:/>
array claim
Claims and names unclaimed storage volumes for a given array.
Optional arguments
[-m|--mapping-file] mapping file - Location of the name mapping file.
[-t|--tier] tier - Add a tier identifier to the storage volumes to be claimed.
[-l|--claim] - Try to claim unclaimed storage-volumes.
[--force] - Force the operation without confirmation. Allows the command to be run from a
non-interactive script.
* - argument is positional.
Description Claims and names unclaimed storage volumes for a given array.
Some storage arrays support auto-naming (EMC Symmetrix/VMAX, CLARiiON/VNX,
XtremIO, Hitachi AMS 1000, HDS 9970/9980 and USP VM) and do not require a mapping
file.
Other storage arrays require a hints file generated by the storage administrator using the
array’s command line. The hints file contains the device names and their World Wide
Names. Refer to Table 21, “Create hints files for storage-volume naming,”.
Use the --mapping-file argument to specify a hints file to use for naming claimed storage
volumes. File names will be used to determine the array name.
Use the --tier argument to add a storage tier identifier in the storage-volume names.
This command can fail if there is an insufficient number of meta-volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to
this problem.
array re-discover
Re-discovers an array, and makes the array's storage volumes visible to the VPLEX.
Optional arguments
[-d|--hard] - Perform a hard rediscover. This is disruptive.
The --hard option is disruptive and can result in data unavailability and/or data loss on
live exported paths.
Logical-unit swapping occurs when the number of logical-units remains the same, but the
actual logical-units have changed. If logical-unit swapping occurred, use the --hard
argument to force renewal of all ITs in the array.
If the --hard argument is not used, this command does not detect logical-unit swapping
conditions, and I/O is not disrupted on the logical-units that have not changed.
[-f|--force] - Force the operation without confirmation. Allows the command to be run from
a non-interactive script.
* - argument is positional.
Description Manually synchronizes the export state of the target device. Used in two scenarios:
◆ When the exported LUNs from the target array to VPLEX are modified.
Newer protocol-compliant SCSI devices return a notification code when the exported
set changes, and may not require manual synchronization. Older devices that do not
return a notification, must be manually synchronized.
◆ When the array is not experiencing I/O (the transport interface is idle), there is no
mechanism by which to collect the notification code. In this scenario, do one of the
following:
• Wait until I/O is attempted on any of the LUNs,
• Disruptively disconnect and reconnect the array, or
• Use the array re-discover command.
This command cannot detect LUN-swapping conditions on the array(s) being
re-discovered. On older configurations, this might disrupt I/O on more than the given
array.
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-0x00000000192601378> array
re-discover --force
array register
Registers the specified iSCSI storage-array.
array used-by
Displays the components that use a specified storage-array.
Arguments [-a|--array] context-path - * Specifies the storage-array for which to find users. This
argument is not required if the context is the target array.
* - argument is positional.
Description Displays the components (storage-volumes) that use the specified storage array.
Example Display the usage of components in an array from the target storage array context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00050404263>
array used-by
/clusters/cluster-1/storage-elements/extents/extent_6006016061211100363da903017ae011_1:
SV1
/clusters/cluster-1/devices/dev_clus1:
extent_SV1_1
SV1
/clusters/cluster-1/system-volumes/log1_vol:
extent_SV1_2
SV1
/clusters/cluster-1/devices/clus1_device1:
extent_SV1_3
SV1
/clusters/cluster-1/devices/clus1_dev2:
extent_SV1_4
SV1
/clusters/cluster-1/devices/device_6006016061211100d42febba1bade011_1:
extent_6006016061211100d42febba1bade011_1
VPD83T3:6006016061211100d42febba1bade011
/distributed-storage/distributed-devices/dev1_source:
dev1_source2012Feb16_191413
extent_sv1_1
sv1
/clusters/cluster-1/system-volumes/MetaVol:
VPD83T3:6006016022131300de76a5cec256df11
/clusters/cluster-1/system-volumes/MetaVol:
VPD83T3:600601606121110014da56b3b277e011
/clusters/cluster-1/system-volumes/MetaVol_backup_2012Feb13_071901:
VPD83T3:6006016061211100c4a223611bade011
Summary:
Count of storage-volumes that are not in use: 0
Count of storage-volumes that are in use: 6
Example Display the usage of components in an array from the /storage-arrays context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays> array used-by --array
EMC-CLARiiON-APM00050404263
/clusters/cluster-1/storage-elements/extents/extent_6006016061211100363da903017ae011_1:
SV1
.
.
.
Note: The -m option is no longer supported. Use the -g and -u options for managing access
to the management server by groups and users.
Optional arguments
[-s|--server-name] server-name - Name of the directory server. This argument is required
when the --connection-type argument is specified with a value of 2 (LDAPS).
[-t|--connection-type] {1|2} - Select the cryptographic protocol to use to connect to the
LDAP/Active Directory server. Values are:
• 1 - (Default) - Use LDAP. Communication with the LDAP/Active Directory server will
be in plain text.
• 2 - Use the Secure LDAP (LDAPS). If LDAPS is selected, use the --server-name
argument to specify the LDAP server.
[-o|--port] port - Port number of the LDAP/Active Directory server.
Range: 1 - 65535.
Default: 389 when --connection-type is set to LDAP (1).
[-c|--custom-attributes] - Provide custom attribute names for attribute mapping. Prompts
for mapping the attribute names.
[-u|--map-user] “map-user” - Specifies which users can log in to the management server.
The map-user must be enclosed in quotes.
[-g|--map-group] “map-group” - Specifies a group. Only members of this group within the
user search path can log in to the management server. The map-group must be enclosed
in quotes.
--dry-run - Run the command without making any changes.
cn=Administrator,dc=security,dc=orgName,dc=companyName,dc=com
Best practice is to add groups rather than users. Adding groups allows multiple users to be
added using one map-principal. VPLEX is abstracted from any changes (modify/delete) to
the user.
Example Configure the Active Directory directory service on the VPLEX cluster:
VPlexcli:/> authentication directory-service configure -d 2 -i 192.168.98.101 -b
"dc=org,dc=company,dc=com" -r
"ou=vplex,dc=org,dc=company,dc=com" -n "cn=Administrator,cn=Users,dc=org,dc=company,dc=com" -t
2 -s servername -p
OR
OR
Note: Default values are displayed in brackets for each attribute. Press Enter to accept the
default, or type a value and press Enter. Values for custom attributes are case sensitive.
Verify the case when specifying the value for a custom attribute.
Please note that configuring LDAP means that network channel data is unencrypted. A better
option is to use LDAPS so the data is encrypted. Are you sure you want to continue the
configuration with LDAP. Continue? (Yes/No) Yes
VPlexcli:/>
Note: To define a different posixGroup attribute use custom attributes. Use an appropriate
objectClass attribute for posixGroup (e.g., “posixGroup” or “groupOfNames” or “Group”)
as used by the OpenLDAP server.
VPlexcli:/>
Note: Option -m for commands can only be used with the older configuration. Use the -g
and -u options for adding user and group access to the management server.
Enter:
“CN=user\\ test: (IT),OU=group,DC=company,DC=com"
Description A directory server user is an account that can log in to the VPLEX Management Server.
Users can be specified explicitly or implicitly via groups.
Best practice is to add groups rather than users. Adding groups allows multiple users to be
added using one map-principal. VPLEX is abstracted from any changes (modify/delete) to
the user.
ip: 10.31.52.53
mapped-principal:
[Users ]: ['uid=testUser3,ou=qe,dc=security,dc=sve,dc=emc,dc=com']
See also
Field Description
Description Removes the existing directory service configuration from the VPLEX management server.
This command will unconfigure the existing directory service.. Continue? (Yes/No) Yes
Note: Option -m for commands can only be used with the older configuration. Use the -g
and -u options for adding user and group access to the management server.
Enter:
“CN=user\\ test: (IT),OU=group,DC=company,DC=com"
If the directory service is OpenLDAP, entries in the mapped principal argument ('ou', 'uid',
'dc') must be specified in lowercase letters.
If the directory service is AD, entries in the mapped principal argument ('OU', 'CN', 'DC')
must be specified in uppercase letters.
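For example (hypothetical principals illustrating the case difference):
OpenLDAP: uid=testuser,ou=people,dc=example,dc=com
Active Directory: CN=TestUser,OU=Group,DC=example,DC=com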
Use the authentication directory-service show command to display the current mapped
principals.
Description This command unmaps a directory server user or a user group from the VPLEX cluster.
There must be at least one principal mapped. If there is only one principal, it cannot be
unmapped. If a user search path is specified, unmapping all users and group principals
will provide access to all users in the user search path.
OR
OR
batch-migrate cancel
Cancels an active migration and returns the source volumes to their state before the
migration.
Description Attempts to cancel every migration in the specified batch file. If an error is encountered, a
warning is printed to the console and the command continues until every migration has
been processed.
Note: In order to re-run a canceled migration plan, the batch-migrate remove command
must be used to remove the records of the migration.
Example VPlexcli:/data-migrations/device-migrations>
batch-migrate cancel --file migrate.txt
◆ batch-migrate pause
◆ batch-migrate remove
◆ batch-migrate resume
◆ batch-migrate start
◆ batch-migrate summary
batch-migrate check-plan
Checks a batch migration plan.
Migration of volumes in asynchronous consistency groups is not supported on volumes
that are in active use. Schedule this activity as a maintenance activity to avoid data
unavailability (DU).
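A hedged usage sketch, consistent with the --file form used by the other batch-migrate
examples in this chapter (the plan file name is illustrative):
VPlexcli:/> batch-migrate check-plan --file migrate.txt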
batch-migrate clean
Cleans the specified batch migration and deletes the source devices.
Optional arguments
[-e|--rename-targets] - rename the target devices and virtual volumes to the source device
names.
* argument is positional.
Description Dismantles the source device down to its storage volumes and unclaims the storage
volumes.
◆ For device migrations, cleaning dismantles the source device down to its storage
volumes. The storage volumes no longer in use are unclaimed.
For device migrations only, use the optional --rename-targets argument to rename the
target device after the source device. If the target device is renamed, the virtual
volume on top of it is also renamed if the virtual volume has a system-assigned
default name.
Without renaming, the target devices retain their target names, which can make the
relationship between volumes and devices less evident.
◆ For extent migrations, cleaning destroys the source extent and unclaims the
underlying storage-volume if there are no extents on it.
This command must be run before the batch-migration has been removed. The command
will not clean migrations that have no record in the CLI context tree.
Example In the following example, source devices are torn down to their storage volumes and the
target devices and volumes are renamed after the source device names:
VPlexcli:/> batch-migrate clean --rename-targets --file migrate.txt
batch-migrate commit
Commits the specified batch migration.
Description Attempts to commit every migration in the batch. Migrations in the batch cannot be
committed until all the migrations are complete.
If an error is encountered, a warning is displayed and the command continues until every
migration has been processed.
The batch migration process inserts a temporary RAID 1 structure above the source
devices/extents with the target devices/extents as an out-of-date leg of the RAID.
Migration can be understood as the synchronization of the out-of-date leg (the target).
After the migration is complete, the commit step detaches the source leg of the temporary
RAID and removes the RAID.
The virtual volume, device, or extent is identical to the one before the migration except
that the source device/extent is replaced with the target device/extent.
A migration must be committed in order to be cleaned.
Note: Use the batch-migrate summary command to verify that the migration has
completed with no errors before committing the migration.
batch-migrate create-plan
Creates a batch migration plan file.
Optional arguments
--force - Forces an existing plan file with the same name to be overwritten.
* - argument is positional.
Description ◆ The source and target extents must be typed as a comma-separated list, where each
element is allowed to contain wildcards.
◆ If this is an extent migration, the source and target cluster must be the same.
◆ If this is a device migration, the source and target clusters can be different.
◆ The source and target can be either local-devices or extents. Mixed migrations from
local-device to extent and vice versa are not allowed.
◆ The command attempts to create a valid migration plan from the source
devices/extents to the target devices/extents.
If there are source devices/extents that cannot be included in the plan, a warning is
printed to the console, but the plan is still created.
◆ Review the plan and make any necessary changes before starting the batch migration.
If problems are found, correct the errors and re-run the command until the plan-check
passes.
3. Use the batch-migrate start command to start the migration:
VPlexcli:/> batch-migrate start migrate.txt
5. When all the migrations are complete, use the batch-migrate commit command to
commit the migration:
VPlexcli:/> batch-migrate commit migrate.txt
This will dismantle the source devices down to their storage volumes and rename the
target devices and volumes after the source device names.
7. Use the batch-migrate remove command to remove the record of the migration:
VPlexcli:/> batch-migrate remove migrate.txt
batch-migrate pause
Pauses the specified batch migration.
Description Pauses every migration in the batch. If an error is encountered, a warning is displayed and
the command continues until every migration has been processed.
Active migrations (a migration that has been started) can be paused and resumed at a
later time.
Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
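A hedged usage sketch, consistent with the --file form used by the other batch-migrate
examples in this chapter (the plan file name is illustrative):
VPlexcli:/> batch-migrate pause --file migrate.txt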
batch-migrate remove
Removes the record of the completed batch migration.
Description Remove the migration record only if the migration has been committed or canceled.
Migration records are in the /data-migrations/device-migrations context.
or:
VPlexcli:/> batch-migrate remove /data-migrations/device-migrations
--file migrate.txt
◆ batch-migrate commit
◆ batch-migrate create-plan
◆ batch-migrate pause
◆ batch-migrate resume
◆ batch-migrate start
◆ batch-migrate summary
batch-migrate resume
Attempts to resume every migration in the specified batch.
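A hedged usage sketch, following the same --file form as the other batch-migrate examples
in this chapter (the plan file name is illustrative):
VPlexcli:/> batch-migrate resume --file migrate.txt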
batch-migrate start
Starts the specified batch migration.
Optional arguments
[-s|--transfer-size] size - Maximum number of bytes to transfer as one operation per device.
Specifies the size of read sector designated for transfer in cache. Setting transfer size to a
lower value implies more host I/O outside the transfer boundaries. Setting transfer size to
a higher value may result in faster transfers. See About transfer-size below. Valid values
must be a multiple of 4 K.
Range: 40 K - 128 M.
Default: 128 K.
--force - Do not ask for confirmation when starting individual migrations. Allows this
command to be run using a non-interactive script, or to start cross-cluster device
migrations in a Geo system without prompting for every device.
--paused - Starts the migration in a paused state. The migration remains paused until
restarted using the batch-migrate resume command.
* - argument is positional.
Description Starts a migration for every source/target pair in the given migration-plan.
Inter-cluster migration of volumes is not supported on volumes that are in use. Schedule
this activity as a maintenance activity to avoid Data Unavailability.
Consider scheduling this activity during maintenance windows of low workload to reduce
impact on applications and possibility of a disruption.
If a migration fails to start, a warning is printed to the console. The command will continue
until every migration item has been processed.
Individual migrations may ask for confirmation when they start. Use the --force argument
to suppress these requests for confirmation.
Batch migrations across clusters can result in the following error:
VPlexcli:/> batch-migrate start /var/log/VPlex/cli/migrate.txt
Refer to the troubleshooting section of the VPLEX procedures in the SolVe Desktop for
instructions on increasing the number of slots.
About transfer-size
Transfer-size is the size of the region in cache used to service the migration. The area is
globally locked, read at the source, and written at the target.
Transfer-size can be as small as 40 K, as large as 128 M, and must be a multiple of 4 K. The
default recommended value is 128 K.
A larger transfer-size results in higher performance for the migration, but may negatively
impact front-end I/O. This is especially true for VPLEX Metro migrations.
A smaller transfer-size results in lower performance for the migration, but creates less
impact on front-end I/O and response times for hosts.
Set a large transfer-size for migrations when the priority is data protection or migration
performance.
Set a smaller transfer-size for migrations when the priority is front-end storage response
time.
Factors to consider when specifying the transfer-size:
◆ For VPLEX Metro configurations with narrow inter-cluster bandwidth, set the transfer
size lower so the migration does not impact inter-cluster I/O.
◆ The region specified by transfer-size is locked during migration. Host I/O to or from
that region is held. Set a smaller transfer-size during periods of high host I/O.
◆ When a region of data is transferred, a broadcast is sent to the system. A smaller
transfer-size means more broadcasts, slowing the migration.
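As an illustrative sketch only (the plan file name and the 2M value are assumptions, not output from a live system), a batch migration could be started with a non-default transfer-size as follows:
VPlexcli:/> batch-migrate start migrate.txt --transfer-size 2M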
◆ dm migration start
batch-migrate summary
Displays a summary of the batch migration.
Optional arguments
[-v|verbose] - In addition to the specified migration, displays a summary for any
in-progress and paused migrations.
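A typical invocation might look like the following sketch (whether the plan file is passed positionally, as assumed here, should be confirmed against the command syntax):
VPlexcli:/> batch-migrate summary migrate.txt --verbose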
Command output:
source device        source    target device        target    migration status   percentage eta
                     cluster                        cluster   name               done
-------------------- --------- -------------------- --------- --------- -------- ---------- ---
temp1_r1_0_cluster-1 cluster-1 temp2_r1_0_cluster-2 cluster-2 BR1_0     complete 100        -
temp1_r1_1_cluster-1 cluster-1 temp2_r1_1_cluster-2 cluster-2 BR1_1     complete 100        -
temp1_r1_2_cluster-1 cluster-1 temp2_r1_2_cluster-2 cluster-2 BR1_2     complete 100        -
Processed 3 migrations from batch migration BR1:
committed: 0
complete: 3
in-progress: 0
queued: 0
paused: 0
error: 0
cancelled: 0
no-record: 0
Field Description
status Status of the individual migration. See below for possible values.
eta For migrations currently being processed, the estimated time to completion.
Processed n migrations... Of the number of source-target pairs specified in the batch migration plan, the
number that have been processed.
committed Of the number of source-target pairs that have been processed, the number
that have been committed.
completed Of the number of source-target pairs that have been processed, the number
that are complete.
in-progress Of the number of source-target pairs that have been processed, the number
that are in progress.
paused Of the number of source-target pairs that have been processed, the number
that are paused.
cancelled Of the number of source-target pairs that have been processed, the number
that have been cancelled.
no-record Of the number of source-target pairs that have been processed, the number
that have no record in the context tree.
Note: If more than 25 migrations are active at the same time, they are queued, their status
is displayed as in-progress, and percentage-complete is displayed as ?.
◆ batch-migrate start
battery-conditioning disable
Disables battery conditioning on the specified backup battery units.
Description Automatic battery conditioning of every SPS is enabled by default. Use this command to
disable battery conditioning for all SPS units in a cluster, or a specified SPS unit.
Disabling conditioning on a unit that has a cycle in progress causes that cycle to abort,
and user confirmation is required. Use the --force argument to skip the user confirmation.
Automatic battery conditioning must be disabled during:
◆ Scheduled maintenance
◆ System upgrades
◆ Unexpected and expected power outages
For all procedures that require fully operational SPS, ensure that SPS conditioning is
disabled at least 6 hours in advance of the procedure. This prevents the SPS from
undergoing battery conditioning during the procedure.
Use the battery-conditioning set-schedule command to select the day the automatic
monthly conditioning cycle starts.
Use the battery-conditioning manual-cycle request command to run an additional
conditioning cycle on one or more backup battery units.
Example Disable battery conditioning for a specified SPS unit and display the change:
VPlexcli:/> battery-conditioning disable -s
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
Battery conditioning disabled on backup battery units 'stand-by-power-supply-a'.
VPlexcli:/> ll
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning/
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
--------------------- ----------------------------
enabled false
in-progress false
manual-cycle-requested false
next-cycle Mon Dec 05 17:00:00 MST 2011
previous-cycle Fri Nov 25 13:25:00 MST 2011
previous-cycle-result PASS
Example Disable battery conditioning on a specified SPS that is currently undergoing battery
conditioning:
VPlexcli:/> battery-conditioning disable -s
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
The backup battery unit 'stand-by-power-supply-a' is currently undergoing a conditioning cycle.
Disabling conditioning will abort the cycle and cannot be undone.
Do you wish to disable conditioning on this unit anyway? (Yes/No) y
Battery conditioning disabled on backup battery units 'stand-by-power-supply-a'.
battery-conditioning enable
Enables conditioning on the specified backup battery units.
Description Use this command to enable battery conditioning for all standby power supply (SPS) units
in a cluster, or for a specific SPS unit.
SPS battery conditioning assures that the battery in an engine’s standby power supply can
provide the power required to support a cache vault. A conditioning cycle consists of a 5
minute period of on-battery operation and a 6 hour period for the battery to recharge.
Automatic conditioning runs every 4 weeks, one standby power supply at a time.
Automatic battery conditioning of every SPS is enabled by default.
Automatic battery conditioning must be disabled during:
◆ Scheduled maintenance
◆ System upgrades
◆ Unexpected and expected power outages
Use this command to re-enable battery conditioning after activities that require battery
conditioning to be disabled are completed.
Example Enable battery conditioning on a specified SPS and display the change:
VPlexcli:/> battery-conditioning enable -s
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
Battery conditioning enabled on backup battery units 'stand-by-power-supply-a'.
VPlexcli:/> ll
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning/
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
--------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Mon Dec 05 17:00:00 MST 2011
previous-cycle Fri Nov 25 13:25:00 MST 2011
previous-cycle-result PASS
Example Enable battery conditioning on all SPS units in cluster-1 and display the change:
VPlexcli:/> battery-conditioning enable --all-at-cluster cluster-1 -t sps
Battery conditioning enabled on backup battery units 'stand-by-power-supply-a,
stand-by-power-supply-b'.
VPlexcli:/> ll /engines/*/stand-by-power-supplies/*/conditioning
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Tue Jan 03 17:00:00 MST 2012
previous-cycle Fri Dec 16 21:31:00 MST 2011
previous-cycle-result PASS
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-b/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Wed Jan 04 05:00:00 MST 2012
previous-cycle Sat Dec 17 15:37:20 MST 2011
previous-cycle-result PASS
battery-conditioning manual-cycle cancel-request
Description Cancels a manually requested conditioning cycle on the specified backup battery unit.
Automatic battery conditioning cycles run on every SPS every 4 weeks.
Manually requested battery conditioning cycles are in addition to the automatic cycles.
IMPORTANT
This command does not abort a battery conditioning cycle that is underway. It cancels only
a request for a manual cycle.
Example Cancel a manually-scheduled SPS battery conditioning from the root context, and display
the change:
VPlexcli:/> battery-conditioning manual-cycle cancel-request -s
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
The manual conditioning cycle on 'stand-by-power-supply-a' has been canceled. The next
conditioning cycle will be performed on Sat Dec 31 17:00:00 MST 2011
VPlexcli:/> ll
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Sat Dec 31 17:00:00 MST 2011
previous-cycle Fri Dec 16 21:31:00 MST 2011
previous-cycle-result PASS
Example Cancel a manually-scheduled SPS battery conditioning from the engine context:
VPlexcli:/engines/engine-1-1> battery-conditioning manual-cycle cancel-request -s
stand-by-power-supply-a
The manual conditioning cycle on 'stand-by-power-supply-a' has been canceled. The next
conditioning cycle will be performed on Sat Dec 31 17:00:00 MST 2011
battery-conditioning manual-cycle request
Requests an additional conditioning cycle on the specified backup battery unit.
Optional arguments
[-f|--force] - Forces the requested battery conditioning cycle to be scheduled without
confirmation if the unit is currently in a conditioning cycle. Allows this command to be run
from non-interactive scripts.
* - argument is positional.
Description Requests a conditioning cycle on the specified backup battery unit, and displays the time
the cycle is scheduled to start. The requested battery conditioning cycle is scheduled at
the soonest available time slot for the specified unit.
If the specified unit is currently undergoing a conditioning cycle, this command requests
an additional cycle to run at the next available time slot.
If battery conditioning is disabled, the manually requested cycle will not run.
Use this command to manually schedule a battery conditioning cycle when automatic
conditioning has been disabled in order to perform maintenance, upgrade the system, or
shut down power.
The conditioning cycle invoked by this command will run in the next 6-hour window
available for the selected unit.
Example Schedule a manual SPS battery conditioning cycle from the root context and display the
change:
VPlexcli:/> battery-conditioning manual-cycle request -s
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
A manual conditioning cycle will be performed on 'stand-by-power-supply-a' on Tue Dec 20
23:00:00 MST 2011.
VPlexcli:/>
ll /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested true
next-cycle Tue Dec 20 23:00:00 MST 2011
previous-cycle Fri Dec 16 21:31:00 MST 2011
previous-cycle-result PASS
Example Schedule a manual SPS battery conditioning from the engines/engine context when a
conditioning cycle is already underway:
VPlexcli:/engines/engine-1-1/> battery-conditioning manual-cycle request
stand-by-power-supply-a/
The backup battery unit 'stand-by-power-supply-a' is currently undergoing a conditioning cycle.
Scheduling a manual cycle now is unnecessary and discouraged as it will contribute to
over-conditioning and a shortened battery life.
Do you wish to manually schedule a conditioning cycle on this unit anyway? (Yes/No) y
A manual conditioning cycle will be performed on 'stand-by-power-supply-a' on Fri Nov 25
13:25:00 MST 2011.
◆ battery-conditioning enable
◆ battery-conditioning manual-cycle cancel-request
◆ battery-conditioning set-schedule
◆ battery-conditioning summary
battery-conditioning set-schedule
Set the battery conditioning schedule (day of week) for backup battery units on a cluster.
[-d|--day-of-week] - * Day of the week on which to run the battery conditioning. Valid
values are: sunday, monday, tuesday, wednesday, thursday, friday, saturday.
[-c|--cluster] - * Cluster on which to set the battery conditioning schedule.
* - argument is positional.
Description Sets the day of week when the battery conditioning cycle is started on all backup battery
units (BBU) on a cluster.
The time of day the conditioning cycle runs on an individual backup battery unit is
scheduled by VPLEX.
SPS battery conditioning assures that the battery in an engine’s standby power supply can
provide the power required to support a cache vault. A conditioning cycle consists of a 5
minute period of on-battery operation and a 6 hour period for the battery to recharge.
Automatic conditioning runs every 4 weeks, one standby power supply at a time.
Automatic battery conditioning of every SPS is enabled by default.
Use this command to set the day of the week on which the battery conditioning cycle for
each SPS unit (one at a time) begins.
Example Set the start day of the battery conditioning cycle for all SPS units in cluster-1 to Saturday
and display the change:
VPlexcli:/> battery-conditioning set-schedule -t sps -d saturday -c cluster-1
Battery conditioning schedule for sps units on cluster 'cluster-1' successfully set to
'saturday'.
VPlexcli:/> ll /engines/*/stand-by-power-supplies/*/conditioning/
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Fri Feb 03 17:00:00 MST 2012
previous-cycle Fri Dec 16 21:31:00 MST 2011
previous-cycle-result PASS
/engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-b/conditioning:
Name Value
---------------------- ----------------------------
enabled true
in-progress false
manual-cycle-requested false
next-cycle Sat Feb 04 05:00:00 MST 2012
previous-cycle Sat Dec 17 15:37:20 MST 2011
previous-cycle-result PASS
Field Description
manual-cycle-requested Whether a manually requested battery conditioning cycle is scheduled on the backup
battery unit.
next-cycle The date and time of the next battery conditioning cycle (either manually requested
or automatic) on the backup battery unit.
Note: If the engines have a different time zone setting than the management server,
and the date and time selected for the start of battery conditioning is within the
difference between the two time zones, the day selected by the --day-of-week
argument may not be the same day displayed in this field.
previous-cycle The date and time of the previous battery conditioning cycle (either manually
requested or automatic) on the backup battery unit. See Note in the “next cycle” field
above.
previous-cycle-result ABORTED - The previous battery conditioning cycle was stopped while it was
underway.
PASS - The previous battery conditioning cycle was successful.
SKIPPED - The previous cycle was not performed because battery conditioning was
disabled.
UNKNOWN - The time of the previous battery conditioning cycle cannot be
determined or no cycle has ever run on the backup battery unit.
battery-conditioning summary
Displays a summary of the battery conditioning schedule for all devices, grouped by type
and cluster.
Description Displays a summary of the conditioning schedule for all devices, grouped by type and
cluster.
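Example An illustrative invocation from the root context (the summary output itself is not reproduced here):
VPlexcli:/> battery-conditioning summary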
Field Description
Manual Cycle Requested Whether a manually requested battery conditioning cycle is scheduled on the backup
battery unit.
Next Cycle The date and time of the next battery conditioning cycle (either manually requested
or automatic) on the backup battery unit.
Note: If the engines have a different time zone setting than the management server,
and the date and time selected for the start of battery conditioning is within the
difference between the two time zones, the day selected by the --day-of-week
argument may not be the same day displayed in this field.
Previous Cycle The date and time of the previous battery conditioning cycle (either manually
requested or automatic) on the backup battery unit. See Note in Next Cycle.
Field Description
Previous Cycle Result ABORTED - The previous battery conditioning cycle was stopped while it was
underway.
PASS - The previous battery conditioning cycle was successful.
SKIPPED - The previous cycle was not performed because battery conditioning was
disabled.
UNKNOWN - The time of the previous battery conditioning cycle cannot be
determined or no cycle has ever run on the backup battery unit.
Schedule Day of the week on which the battery conditioning cycle is started on all backup
battery units on a cluster. Configured using the battery-conditioning
set-schedule command.
cache-invalidate
In the virtual-volume context, invalidates the cache of a virtual-volume on all directors
based on its visibility to the clusters. In the consistency-group context, invalidates the
cache of all exported virtual-volumes in the specified consistency-group, on all directors
based on the virtual volume’s visibility to the clusters. Additionally, flushes the dirty data,
if any, on to the back-end storage volumes for virtual-volumes in asynchronous mode.
Contexts virtual-volume
consistency-group
In consistency-group context:
consistency-group cache-invalidate
[-g|--consistency-group] consistency-group context path
[--force]
[--verbose]
Optional arguments
[--force] - The --force option suppresses the warning message and executes the
cache-invalidate directly.
[--verbose] - The --verbose option provides more detailed messages in the command
output when specified.
* - argument is positional.
IMPORTANT
This command suspends I/O on the virtual volume while invalidation is in progress. In a
normal operating setup, use of this command causes data unavailability on the virtual
volume for a short time.
Description All cached data associated with the selected volume is invalidated on all directors of the
VPLEX cluster. Subsequent reads from host applications fetch data from the storage
volume due to the cache miss.
Cache can be invalidated only for exported virtual volumes. There is no cache on
non-exported volumes.
When executed for a given virtual volume, based on the visibility of the virtual volume, this
command invalidates the cache for that virtual volume on the directors at cluster(s) that
export them to hosts.
IMPORTANT
The CLI must be connected to all directors in order for the cache to be flushed and
invalidated. The command performs a pre-check to verify all directors are reachable.
When executed for a given consistency-group, based on the visibility of each of the virtual
volumes in the consistency-group, this command invalidates the cache for the virtual
volume(s) part of the consistency-group on the directors at cluster(s) that export them to
hosts.
The command performs pre-checks to verify all expected directors in each cluster are
connected and in healthy state before issuing the command.
If I/O is in progress on the virtual-volume, the command issues an error requesting the
user to stop all host applications accessing the virtual-volume first.
The command issues a warning stating that the cache invalidate could cause a data
unavailability on the volume that it is issued on.
Any cached data at the host or host applications will not be cleared by this command.
Do not execute the cache-invalidate command on a RecoverPoint-enabled virtual-volume.
The VPLEX clusters should not be undergoing an NDU while this command is being
executed.
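Example Illustrative sketches of invoking the command in each context (the volume and consistency-group names are reused from the messages below; the exact invocation form mirrors the cache-invalidate-status examples and is an assumption):
VPlexcli:/> virtual-volume cache-invalidate testvol_1
VPlexcli:/> consistency-group cache-invalidate test_cg_1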
Example Example output when cache-invalidation for the virtual-volume and consistency-group is
successful.
Brief message:
Cache invalidation successful for virtual-volume 'testvol_1'.
Verbose message:
Cache invalidation was successful.
Example Example output when cache-invalidation failed for the virtual-volume and for the
consistency-group.
Brief message:
Cache invalidation failed for consistency-group 'test_cg_1'.
Verbose message:
Cache invalidation failed.
Example Example output when any director(s) cannot be reached. The message includes an
instruction to re-issue the command.
Note: The cache-invalidate command must be completed for every director where the
virtual-volume or consistency-group is potentially registered in cache.
Virtual-volume:
Cache invalidation for virtual-volume 'testvol_1' could not be performed on directors
'director1-1-A, director-1-1-B'.
Please make sure all configured directors are connected and healthy and re-issue the command.
Consistency group:
Cache invalidation for consistency-group 'test_cg_1' could not be performed on directors
'director1-1-A, director-1-2-A'.
Please make sure all configured directors are connected and healthy and re-issue the command.
Consistency group:
Cache invalidation for consistency-group 'test_cg_1' is not complete on the following directors:
director-1-1-A. Please use the 'consistency-group cache-invalidate-status test_cg_1' command
to find out the current state of cache invalidation.
cache-invalidate-status
Displays the current status of a cache-invalidate operation on a virtual-volume, or for a
specified consistency group.
Contexts virtual-volume
consistency-group
In consistency-group context:
cache-invalidate-status
[-g|--consistency-group] consistency-group context path
[-h|--help]
[--verbose]
Optional arguments
[-h | --help] - Displays the usage for this command.
[--verbose] - Provides a detailed summary of results for all directors when specified. If not
specified, only the default brief summary message is displayed.
* - argument is positional.
Description This command is executed only when the virtual volume cache-invalidate command or the
consistency group cache-invalidate command has exceeded the timeout period of five
minutes. If this command is executed for a reason other than timeout, the command will
return “no result.”
The cache-invalidate status command displays the current status for operations that are
successful, have failed, or are still in progress.
If the --verbose option is not specified, this command generates a brief message
summarizing the current status of the cache-invalidation for the virtual volume or
consistency group, whichever context has been specified. In the event of a
cache-invalidation failure, the brief output summarizes only the first director to fail,
because if one director fails, the entire operation is considered to have failed.
If the --verbose option is specified, then the cache invalidate status is displayed on each
of the directors on which the cache invalidate is pending.
The verbose output contains three fields of particular interest: status, result, and cause
(the reason reported for a failure).
This command is read-only. It does not make any changes to storage or to VPLEX in any
way. It only provides information on the current status of cache-invalidation operations.
Example Example output when the cache-invalidate operation finished with success.
Brief message:
Cache invalidation for consistency-group 'test_cg_1' is complete.
Verbose message:
VPlexcli:/> virtual-volume cache-invalidate-status vv1
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
director-2-1-B status: completed
result: successful
cause: -
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
Example Example output when the cache-invalidate operation finished with failure.
Brief message:
Cache invalidation for virtual-volume 'vv1' failed on director 'director-1-1-A' because: Volume
is in suspended state.
Verbose message:
VPlexcli:/> virtual-volume cache-invalidate-status vv1
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: failure
cause: Volume is in suspended state
Verbose message:
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
capture begin
Begins a capture session.
Description A capture session saves the stdin, stdout, stderr, and session I/O streams to 4 files:
◆ session name-session.txt - Output of commands issued during the capture session.
◆ session name-stdin.txt - CLI commands input during the capture session.
◆ session name-stdout.txt - Output of commands issued during the capture session.
◆ session name-stderr.txt - Status messages generated during the capture session.
Note: Raw tty escape sequences are not captured. Use the --capture shell option to
capture the entire session including the raw tty sequences.
Capture sessions can have nested capture sessions but only the capture session at the
top of the stack is active.
Use the capture end command to end the capture session.
Use the capture replay command to resubmit the captured input to the shell.
Example In the following example, the capture begin command starts a capture session named
TestCapture. Because no directory is specified, output files are placed in the
/var/log/VPlex/cli/capture directory on the management server.
VPlexcli:/> capture begin TestCapture
# capture begin TestCapture
VPlexcli:/>
capture end
Ends the current capture session and removes it from the session capture stack.
Description The session at the top of the stack becomes the active capture session.
Example End the current capture session:
VPlexcli:/clusters/cluster-1> capture end
capture pause
Pauses the current capture session.
capture replay
Replays a previously captured session.
Description Replays the commands in the stdin.txt file from the specified capture session.
Output of the replayed capture session is written to the
/var/log/VPlex/cli/capture/recapture directory on the management server.
Output is the same four files created by capture begin.
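For example, to replay the session captured in the capture begin example (an illustrative sketch; the session name is reused from that example):
VPlexcli:/> capture replay TestCapture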
Attributes:
Name Value
---------------------- --------------------------------------------
allow-auto-join true
auto-expel-count 0
auto-expel-period 0
.
.
.
capture resume
Resumes the current capture session.
cd
Changes the working directory.
Description Use the cd command with no arguments or followed by three periods (cd... ) to return to
the root context.
Use the cd command followed by two periods (cd..) to return to the context immediately
above the current context.
Use the cd command followed by a dash (cd -) to return to the previous context.
To navigate directly to a context from any other context, use the cd command and specify
the context path.
cd -c - and cd --context - are not supported.
Example Navigate to a context, then return to the root context:
VPlexcli:/> cd /monitoring/directors
VPlexcli:/monitoring/directors> cd
VPlexcli:/>
chart create
Creates a chart based on a CSV file produced by the report command.
--input input file - CSV file to read data from, enclosed in quotes.
--output output file - PNG file to save chart to, enclosed in quotes.
--series column - The column in the CSV file to use as series.
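Example An illustrative sketch (the input and output file names follow the CapacityClusters report files shown below; the series column name "Cluster" is an assumption):
VPlexcli:/> chart create --input "CapacityClusters.csv" --output "CapacityClusters.png" --series "Cluster"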
Password: :
creating logfile:/var/log/VPlex/cli/session.log_service_localhost_ T28921_20101020175912
.
.
.
-rw-r--r-- 1 service users 844 2010-07-19 15:55 CapacityClusters.csv
-rw-r--r-- 1 service users 18825 2010-07-19 15:56 CapacityClusters.png
.
.
.
cluster add
Adds a cluster to a running VPLEX.
Optional arguments
[-f|--force] - Forces the cluster addition to proceed even if conditions are not optimal.
* - argument is positional.
Description A cluster must be added in order to communicate with other clusters in a VPLEX.
Use the --to argument:
◆ During system bring-up when no clusters have yet been told about other clusters. In
this scenario, any cluster can be used as the system representative.
◆ Multiple systems have been detected. Connection to multiple systems is not
supported.
If there is only one system actually present, but it has split into islands due to
connectivity problems, it is highly advisable to repair the problems before proceeding.
Add the given cluster to each island separately.
If the intention is to merge two existing systems, break up one of the systems and add
it to the other system cluster-by-cluster.
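Example An illustrative sketch of adding a cluster during bring-up (the --cluster option name is an assumption; only the --to and --force arguments are documented above):
VPlexcli:/> cluster add --cluster cluster-2 --to cluster-1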
Islands:
Island ID Clusters
---------- -------------------
1 cluster-1, cluster-2
cluster cacheflush
Flushes the cache on directors at the specified clusters to the back-end storage volumes.
Optional arguments
[-e|--sequential] - Flushes the cache of multiple directors sequentially. Default is to flush
the caches in parallel.
--verbose - Displays a progress report during the flush. Default is to display no output if
the run is successful.
Description
IMPORTANT
The CLI must be connected to a director before the cache can be flushed. Only exported
virtual volumes can be flushed.
When executed from a specific cluster context, this command flushes the cache of the
directors at the current cluster.
Example
VPlexcli:/clusters/> cacheflush --clusters /clusters/* --verbose --sequential
cluster configdump
Dumps cluster configuration in an XML format, optionally directing it to a file.
◆ Volume configuration
◆ Initiators
◆ View configuration
◆ System-volume information
The XML output includes the DTD to validate the content.
Example Dump the configuration at cluster-1, navigate to the cli directory on the management
server, and display the file:
VPlexcli:/clusters> configdump --verbose --file /var/log/VPlex/cli/config-dump-cluster-1.txt
--cluster cluster-1
VPlexcli:/clusters> exit
Connection closed by foreign host.
service@ManagementServer:~> cd /var/log/VPlex//cli
service@ManagementServer:/var/log/VPlex/cli>tail config-dump-cluster-1.txt
</views>
<system-volumes>
<meta-volumes>
<meta-volume active="true" block-count="23592704" block-size="4096B" geometry="raid-1"
locality="local" name="metadata_1" operational-status="ok" ready="true" rebuild-allowed="true"
size="96635715584B" system-id="metadata_1"/>
</meta-volumes>
<logging-volumes>
<logging-volume block-count="20971520" block-size="4096B" geometry="raid-0"
locality="local" name="logging_1_vol" operational-status="ok" size="85899345920B"
system-id="logging_logging_1_vol"/>
</logging-volumes>
</system-volumes>
.
.
.
cluster expel
Expels a cluster from its current island.
Description Cluster expulsion prevents a cluster from participating in a VPLEX. Expel a cluster when:
◆ The cluster is experiencing undiagnosed problems.
◆ To prepare for a scheduled outage.
◆ The target cluster, or the WAN over which the rest of the system communicates with it,
is going to be inoperable for a while.
◆ An unstable inter-cluster link impacts performance.
An expelled cluster is still physically connected to the VPLEX, but not logically connected.
The --force argument is required for the command to complete.
Use the cluster unexpel command to allow the cluster to rejoin the island.
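Example An illustrative sketch (the --cluster option name is an assumption; --force is required as noted above):
VPlexcli:/> cluster expel --cluster cluster-1 --force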
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
Cluster cluster-1
operational-status: isolated
transitioning-indications: suspended volumes,expelled
transitioning-progress:
health-state: degraded
health-indications: 1 suspended Devices
Cluster cluster-2
operational-status: degraded
transitioning-indications: suspended exports,suspended volumes
transitioning-progress:
health-state: degraded
health-indications: 2 suspended Devices
cluster forget
Tells VPLEX and Unisphere for VPLEX to forget the specified cluster.
Optional arguments
[-d|--disconnect] - Disconnect from all directors in the given cluster and remove the cluster
from the context tree after the operation is complete.
[-f|--force] - Force the operation to continue without confirmation.
* - argument is positional.
Description Removes all references to the specified cluster from the context tree.
The prerequisites for forgetting a cluster are as follows:
◆ The target cluster cannot be in contact with other connected clusters.
◆ The Unisphere for VPLEX cannot be connected to the target cluster.
◆ Detach all distributed devices with legs at the target cluster (there must be no
distributed devices with legs on the target cluster).
◆ No rule sets that affect the target cluster.
◆ No globally visible devices at the target cluster.
Use the following steps to forget a cluster:
1. If connected, use the cluster forget command on the target cluster to forget the other
clusters.
2. Use the cluster forget command on all other clusters to forget the target cluster.
This command does not work if the clusters have lost communications with each other. If a
cluster is down, destroyed, or removed, use the cluster expel command to expel it.
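Example An illustrative sketch of forgetting a target cluster and removing it from the context tree (the --cluster option name is an assumption; --disconnect and --force are documented above):
VPlexcli:/> cluster forget --cluster cluster-2 --disconnect --force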
cluster shutdown
Starts the orderly shutdown of all directors at a single cluster.
Shutting down a VPLEX cluster may cause data unavailability. Please refer to the VPLEX
procedures in the SolVe Desktop for the recommended procedure to shut down a cluster.
Note: Does not shut down the operating system on the cluster.
Use this command as an alternative to manually shutting down the directors in a cluster.
When shutting down multiple clusters:
◆ Shut each cluster down one at a time.
◆ Verify that each cluster has completed shutdown prior to shutting down the next one.
If shutting down multiple clusters, refer to the VPLEX procedures in the SolVe Desktop for
the recommended procedure for shutting down both clusters.
When a cluster completes shutting down, the following log message is generated for each
director at the cluster:
'Director shutdown complete (cluster shutdown)'
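Example An illustrative sketch of shutting down a single cluster (the --cluster and --force option names are assumptions based on related commands):
VPlexcli:/> cluster shutdown --cluster cluster-1 --force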
Status Description
-------- -----------------
Started. Shutdown started.
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
Islands:
Island ID Clusters
--------- ---------
2 cluster-2
Connectivity problems:
From Problem To
--------- --------- ---------
cluster-2 can't see cluster-1
VPlexcli:/> ll /clusters/cluster-1
Attributes:
Name Value
---------------------- ------------
allow-auto-join -
auto-expel-count -
auto-expel-period -
auto-join-delay -
cluster-id 7
connected false
default-cache-mode -
default-caw-template true
director-names [DirA, DirB]
island-id -
operational-status not-running
transition-indications []
transition-progress []
health-state unknown
health-indications []
◆ director shutdown
cluster status
Displays a cluster's operational status and health state.
Cluster cluster-2
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok
wan-com: ok
Cluster cluster-2
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: connectivity: PARTIAL
Cluster cluster-2
operational-status: degraded
transitioning-indications: suspended exports,suspended volumes
transitioning-progress:
health-state: minor-failure
health-indications: 227 suspended Devices
250 non-running remote virtual-volumes.
250 unhealthy Devices or storage-volumes
storage-volume unreachable
local-com: ok
Example Show cluster status when one cluster is shut down or expelled:
VPlexcli:/> cluster status
Cluster cluster-1
operational-status: not-running
transitioning-indications:
transitioning-progress:
health-state: unknown
health-indications:
Cluster cluster-2
operational-status: degraded
transitioning-indications: suspended exports,suspended volumes
transitioning-progress:
health-state: degraded
health-indications: 2 suspended Devices
Cluster cluster-1
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok
Cluster cluster-2
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: degraded
health-indications: 1 degraded storage-volumes
local-com: ok
wan-com: ok
Field Description
operational status Operational status of the cluster. During transition periods, the cluster moves from one
operational state to another.
cluster departure - One or more of the clusters cannot be contacted. Commands
affecting distributed storage are refused.
degraded - The cluster is not functioning at an optimal level. This may indicate
non-functioning remote virtual volumes, unhealthy devices or storage volumes,
suspended devices, conflicting director count configuration values, out-of-date
devices, and so forth.
device initializing - If clusters cannot communicate with each other, then the
distributed-device will be unable to initialize.
device out of date - Child devices are being marked fully out of date.
Sometimes this occurs after a link outage.
expelled - The cluster has been isolated from the island either manually (by an
administrator) or automatically (by a system configuration setting).
ok - The cluster is operating normally.
shutdown - The cluster’s directors are shutting down.
suspended exports - Some I/O is suspended. This could be result of a link failure or
loss of a director. Other states might indicate the true problem.
Note: This may not be a problem; the VPLEX might be waiting for you to confirm the
resumption of I/O.
Field Description
health-state critical failure - The cluster is not functioning and may have failed completely. This
may indicate a complete loss of back-end connectivity.
degraded - The cluster is not functioning at an optimal level. This may indicate
non-functioning remote virtual volumes, unhealthy devices or storage volumes,
suspended devices, conflicting director count configuration values, or out-of-date
devices.
ok - The cluster is functioning normally.
unknown - VPLEX cannot determine the cluster's health state, or the state is invalid.
major failure - The cluster is failing and some functionality may be degraded or
unavailable. This may indicate complete loss of back-end connectivity.
minor failure - The cluster is functioning, but some functionality may be degraded.
This may indicate one or more unreachable storage volumes.
health-indications Additional information if the health-state field is anything other than “ok”.
local-com ok - All wan-com links have the expected connectivity: this port-group is operating
correctly.
warning - Some links have unexpected connectivity. This port-group is operational
but not properly configured. Performance may not be optimal.
error - Some connectivity is missing from this port-group. It is not operating correctly.
fail - All connectivity is missing from this port-group. wan-com is not operational.
wan-com full - All port-groups have a status of either ok or warning. wan-com connectivity is
complete, though minor configuration errors may still exist. See individual port-group
statuses.
partial - Some port-groups have a status of error or fail, but at least one port-group
has a status of ok or warning. wan-com is operating (possibly minimally) through at
least one channel. Performance is degraded.
none - All port-groups have a status of either error or fail. wan-com is not operational.
not-applicable - The system is a single-cluster (i.e. Local) system. Validating wan-com
connectivity is not applicable.
cluster summary
Displays a summary of all clusters and the connectivity between them.
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
Example Display cluster summary for a VPLEX Metro configuration with an inter-cluster link outage:
VPlexcli:/> cluster summary
Clusters:
Name      Cluster ID TLA            Connected Expelled Operational Status Health State
--------- ---------- -------------- --------- -------- ------------------ -------------
cluster-1 1          FNM00103600160 true      false    degraded           minor-failure
cluster-2 2          FNM00103600161 true      false    degraded           minor-failure
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
Connectivity problems:
From Problem To
--------- --------- ---------
cluster-2 can't see cluster-1
cluster-1 can't see cluster-2
Example Display cluster summary for VPLEX Metro configuration with a cluster expelled:
VPlexcli:/> cluster summary
Clusters:
Name Cluster ID TLA Connected Expelled Operational Status Health State
--------- ---------- -------------- --------- -------- ------------------ -------------
cluster-1 1 FNM00103600160 true true isolated degraded
cluster-2 2 FNM00103600161 true false degraded degraded
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
Field Description
Clusters:
Cluster ID Cluster ID. For VPLEX Local, always 1. For VPLEX Metro or Geo, 1 or 2.
TLA The Top-level Assembly. The product TLA must uniquely identify the product instance. For
VPLEX the TLA must uniquely identify the cluster (which is the rack and all physical
components in it).
Connected Whether or not the CLI is connected to at least one director in the cluster (connected to the
cluster).
true - CLI is connected to the cluster.
false - CLI is not connected to the cluster.
Field Description
Operational Status degraded - The cluster is not operating as configured and is not currently transitioning.
Examples include: degraded redundancy level (a director is dead), all exports switched to
write through because of hardware health problems, suspended virtual volumes / exports,
storage volumes not visible from all directors, meta-volume not yet processed.
isolated - The cluster is not communicating with any other clusters.
ok - The cluster is functioning normally.
transitioning - The cluster is reacting to external events and may not be operating as
configured. I/O may be suspended during the transition period.
unknown - The VPLEX encountered a problem determining the operational status of the
cluster. This may indicate a degraded state, since it usually means that at least one of the
directors is not responding or is communicating abnormally.
Health State critical failure - The cluster is not functioning and may have failed completely. This may
indicate a complete loss of back-end connectivity.
degraded - The cluster is not functioning at an optimal level. This may indicate
non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended
devices, conflicting director count configuration values, out-of-date devices, and so forth.
ok - The cluster is functioning normally.
unknown - The VPLEX cannot determine the cluster's health state, or the state is invalid.
major failure - The cluster is failing and some functionality may be degraded or unavailable.
This may indicate complete loss of back-end connectivity.
minor failure - The cluster is functioning, but some functionality may be degraded. This may
indicate one or more unreachable storage volumes.
Islands:
Clusters Names of clusters belonging to the island. For the current release, always cluster-1 or cluster-2.
cluster unexpel
Allows a cluster to rejoin the VPLEX.
Description Clears the expelled flag for the specified cluster, allowing it to rejoin the VPLEX.
1. Use the cluster summary command to display the status of the expelled cluster:
VPlexcli:/> cluster summary
Clusters:
Name Cluster ID TLA Connected Expelled Operational Status Health State
--------- ---------- -------------- --------- -------- ------------------ -------------
cluster-1 1 FNM00103600160 true true isolated degraded
cluster-2 2 FNM00103600161 true false degraded degraded
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
2. Use the ll command in the target cluster’s cluster context to display the cluster’s
allow-auto-join attribute setting.
VPlexcli:/> ll /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name Value
------------------------------ --------------------------------
allow-auto-join true
auto-expel-count 0
auto-expel-period 0
auto-join-delay 0
cluster-id 1
.
.
.
If the cluster’s allow-auto-join attribute is set to true, the cluster automatically rejoins
the system. Skip to step 4.
If the cluster’s allow-auto-join flag was set to false, proceed to step 3.
3. Navigate to the target cluster’s cluster context and use the set command to set the
cluster’s allow-auto-join flag to true. For example:
VPlexcli:/> cd clusters/cluster-1
VPlexcli:/clusters/cluster-1> set allow-auto-join true
4. Use the cluster unexpel command to manually unexpel a cluster, allowing the cluster
to rejoin the VPLEX. For example:
VPlexcli:/clusters> cluster unexpel --cluster cluster-1
5. Use the cluster summary command to verify all clusters are in one island and working
as expected.
VPlexcli:/> cluster summary
Clusters:
Name Cluster ID TLA Connected Expelled Operational Status Health State
--------- ---------- -------------- --------- -------- ------------------ ------------
cluster-1 1 FNM00091300128 true false ok ok
cluster-2 2 FNM00091300218 true false ok ok
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
cluster-witness configure
Creates the cluster-witness context for enabling VPLEX Witness functionality and
configuration commands.
Description Cluster Witness is an optional component of VPLEX Metro and VPLEX Geo configurations.
Cluster Witness monitors both clusters and updates the clusters with its guidance, when
necessary. Cluster Witness allows VPLEX to distinguish between inter-cluster link failures
versus cluster failures, and to apply the appropriate detach-rules and recovery policies.
IMPORTANT
This command must be run on both management servers to create cluster-witness CLI
contexts on the VPLEX.
IMPORTANT
ICMP traffic must be permitted between clusters for this command to work properly.
To verify that ICMP is enabled, log in to the shell on the management server and use the
ping IP address command where the IP address is for a director in the VPLEX.
If ICMP is enabled on the specified director, a series of lines is displayed:
service@ManagementServer:~> ping 128.221.252.36
PING 128.221.252.36 (128.221.252.36) 56(84) bytes of data.
64 bytes from 128.221.252.36: icmp_seq=1 ttl=63 time=0.638 ms
64 bytes from 128.221.252.36: icmp_seq=2 ttl=63 time=0.591 ms
64 bytes from 128.221.252.36: icmp_seq=3 ttl=63 time=0.495 ms
64 bytes from 128.221.252.36: icmp_seq=4 ttl=63 time=0.401 ms
64 bytes from 128.221.252.36: icmp_seq=5 ttl=63 time=0.552 ms
clusters/ data-migrations/
distributed-storage/ engines/ management-server/
monitoring/ notifications/ recoverpoint/
security/
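After ICMP connectivity has been verified, an illustrative invocation from the root context might look like the following sketch (this assumes the command takes no additional arguments), after which the cluster-witness context becomes visible:
VPlexcli:/> cluster-witness configure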
VPlexcli:> ls /cluster-witness
Attributes:
Name Value
------------- -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45
Contexts:
components
See also ◆ EMC VPLEX Procedures in SolVe Desktop - VPLEX Witness: Install and Setup
◆ cluster-witness disable
◆ cluster-witness enable
◆ configuration cw-vpn-configure
cluster-witness disable
Disables Cluster Witness on both management servers and on Cluster Witness Server.
Note: This command is available only after Cluster Witness has been configured and
cluster-witness CLI context is visible.
Use the --force-without-server option with extreme care. Use this option to disable
Cluster Witness in order to use configured rule-sets for I/O to distributed volumes in
consistency groups.
Description Disables Cluster Witness on both management servers and on Cluster Witness Server.
Allows consistency group rule-sets to dictate I/O behavior to distributed virtual volumes in
consistency groups.
IMPORTANT
Cluster Witness has no effect on distributed virtual volumes outside of consistency
groups.
Use this command from only one management server.
Disabling Cluster Witness does not imply that Cluster Witness components are shut down.
If Cluster Witness is disabled, the clusters stop sending health-check traffic to the Cluster
Witness Server and the Cluster Witness Server stops providing guidance back to the
clusters.
Note: If the Cluster Witness Server or connectivity to the Cluster Witness Server will not be
operational for a long period, use the --force-without-server argument. This prevents a
system-wide Data Unavailability of all distributed virtual volumes in consistency groups if
an additional inter-cluster link communication or cluster failure occurs while there is no
access to Cluster Witness Server and Cluster Witness is enabled. Once Cluster Witness
Server is accessible from both management servers, use the cluster-witness enable
command to re-enable the functionality.
Automatic pre-checks ensure that the Cluster Witness configuration is in a state where it
can be disabled. Pre-checks:
◆ Verify management connectivity between the management servers
◆ Verify connectivity between management servers and the Cluster Witness Server
◆ Verify all the directors are up and running
"WARNING: Disabling Cluster Witness may cause data unavailability in the event of a disaster.
Please consult the VPLEX documentation to confirm that you would like to disable Cluster
Witness. Continue? Yes
Example Disable Cluster Witness from the cluster-witness context when the Cluster Witness Server
is not reachable. In the following example:
◆ The disable command fails because the Cluster Witness Server is not reachable.
◆ The disable --force-without-server command disables Cluster Witness.
◆ The ll /components command displays the state of the Cluster Witness configuration.
VPlexcli:/cluster-witness> disable
WARNING: Disabling Cluster Witness may cause data unavailability in the event of a disaster.
Please consult the VPLEX documentation to confirm that you would like to disable Cluster
Witness. Continue? (Yes/No) y
VPlexcli:/cluster-witness>ll components/
/cluster-witness/components:
Name ID Admin State Operational State Mgmt Connectivity
---------- -- ----------- ----------------- -----------------
cluster-1 1 disabled - ok
cluster-2 2 disabled - ok
server - unknown - failed
See also ◆ EMC VPLEX Procedures in SolVe Desktop - VPLEX Witness: Install and Setup
◆ cluster summary
◆ cluster-witness enable
◆ vpn status
◆ cluster-witness configure
◆ configuration cw-vpn-configure
cluster-witness enable
Enables Cluster Witness on both clusters and Cluster Witness Server in a VPLEX Metro or
Geo configuration.
Note: This command is available only after Cluster Witness has been configured and
cluster-witness CLI context is visible.
Description
Use this command from the management server on only one cluster.
There is no rollback mechanism. If the enable command fails on some components and
succeeds on others, it may leave the system in an inconsistent state. If this occurs,
consult the Troubleshooting Guide and/or contact EMC Customer Support.
The cluster-witness context does not appear in the VPLEX CLI unless the context has been
created using the cluster-witness configure command. The cluster-witness CLI context
appears under the root context. The cluster-witness context includes the following
sub-contexts:
◆ /cluster-witness/components/cluster-1
◆ /cluster-witness/components/cluster-2
◆ /cluster-witness/components/server
Attributes:
Name Value
------------------ -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.235
Contexts:
Name Description
---------- --------------------------
components Cluster Witness Components
VPlexcli:/> cd /cluster-witness
VPlexcli:/cluster-witness> ll /components/*
/cluster-witness/components/cluster-1:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state of cluster-1 is in-contact (last
state change: 0 days, 56 secs ago; last message
from server: 0 days, 0 secs ago.)
id 1
management-connectivity ok
operational-state in-contact
/cluster-witness/components/cluster-2:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state of cluster-2 is in-contact (last
state change: 0 days, 56 secs ago; last message
from server: 0 days, 0 secs ago.)
id 2
management-connectivity ok
operational-state in-contact
/cluster-witness/components/server:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state is clusters-in-contact (last state
change: 0 days, 56 secs ago.) (last time of
communication with cluster-2: 0 days, 0 secs ago.)
(last time of communication with cluster-1: 0 days, 0
secs ago.)
id -
management-connectivity ok
operational-state clusters-in-contact
Field Description
Name Name of the component.
For VPLEX clusters – name assigned to cluster.
For VPLEX Witness server – “server”.
id ID of a VPLEX cluster.
Always blank “-“ for Witness server.
admin state Identifies whether VPLEX Witness is enabled/disabled. Valid values are:
enabled - VPLEX Witness functionality is enabled on this component.
disabled - VPLEX Witness functionality is disabled on this component.
inconsistent - All Cluster Witness components are reachable over the management
network but some components report their administrative state as disabled while others
report it as enabled. This is a rare state which may result from a failure during enabling or
disabling.
unknown - This component is not reachable and its administrative state cannot be
determined.
private-ip-address Private IP address of the Cluster Witness Server VM used for cluster witness-specific
traffic.
public-ip-address Public IP address of the Cluster Witness Server VM used as an endpoint of the IPsec
tunnel.
diagnostic String generated by CLI based on the analysis of the data and state information reported
by the corresponding component.
WARNING: Cannot establish connectivity with Cluster Witness Server to query diagnostic
information. - Cluster Witness Server or one of the clusters is unreachable.
Local cluster-x hasn't yet established connectivity with the server - The cluster has never
connected to Cluster Witness Server.
Remote cluster-x hasn't yet established connectivity with the server - The cluster has
never connected to Cluster Witness Server.
Cluster-x has been out of touch from the server for X days, Y secs - Cluster Witness Server
has not received messages from a given cluster for longer than 60 seconds.
Cluster witness server has been out of touch for X days, Y secs - Either cluster has not
received messages from Cluster Witness Server for longer than 60 seconds.
Cluster Witness is not enabled on component-X, so no diagnostic information is available
- Cluster Witness Server or either of the clusters is disabled.
Mgmt Connectivity Reachability of the specified Witness component over the IP management network from
the management server where the CLI command is run.
ok - The component is reachable
failed - The component is not reachable
collect-diagnostics
Collects the two latest core files from each component, logs, and configuration
information from the management server and directors.
Syntax collect-diagnostics
--notrace
--nocores
--noperf
--noheap
--noextended
--faster
--local-only
--minimum
--allcores
--large-config
--recoverpoint-only
--out-dir directory
--minimum - Combines all space-saving and time-saving options and operates only on the
local cluster.
Use this argument when the collect-diagnostics command is expected to take a long time
or produce excessively large output files. Combines the --noextended, --faster and
--local-only arguments. Also omits the second sms dump command and output file
(smsDump-CLI_Logs_timestamp.zip).
--allcores - Collect all available core files from the directors. By default, only the two latest
core files are collected.
--large-config - Omits the cluster configdump command output. Use this argument only
when the configuration has:
◆ 8000 or more storage volumes, and
◆ 4000 or more local top-level devices, and
◆ 2000 or more distributed devices.
--recoverpoint-only - Collects only RecoverPoint diagnostic information. If there are no
core files to be collected, the command generates only one base file.
--out-dir directory - The directory into which to save the zip file containing the collected
diagnostics.
Default: /diag/collect-diagnostics-out
Not needed for normal usage.
Description Collects logs, cores and configuration information from management server and directors.
Places the collected output files in the /diag/collect-diagnostics-out directory on the
management server.
Two compressed files are placed in the /diag/collect-diagnostics-out directory:
◆ tla-diagnostics-extended-timestamp.tar.gz - Contains java heap dump, fast trace
dump, two latest core files (if they exist).
◆ tla-diagnostics-timestamp.tar.gz - Contains everything else.
Best practice is to collect both files. The extended collect-diagnostics file is usually large,
and thus takes some time to transfer.
Recommended practice is to transfer the base collect-diagnostics file
(tla-diagnostics-timestamp.tar.gz) first and begin analysis while waiting for the extended
file to transfer.
Starting in GeoSynchrony 5.1, this command also collects RecoverPoint diagnostics data:
◆ The extended .tar file includes the two latest RecoverPoint kdriver core files.
◆ The base tar file includes an additional directory: /opt/recoverpoint/. Files in this
directory include the RecoverPoint splitter logs:
◆ A zip file named director-name-time-stamp.zip contains the following splitter logs:
• vpsplitter.log.xx -
• vpsplitter.log.periodic_env -
• vpsplitter.log.current_env. -
Other than core files, director diagnostics are retrieved from ALL directors in a VPLEX Metro
or Geo unless the --local-only argument is used.
Note: Core files are always collected only from local directors. Starting in Release 5.1, only
the latest 2 core files are collected by default, and any older core files are not collected. To
collect all the core files, use the --allcores argument.
In VPLEX Metro and Geo configurations, run the collect-diagnostics command on each
management server, but NOT at the same time. Even if the --local-only argument is used,
do not run the command on both management servers at the same time.
Example Collect diagnostics, omitting the core files on the directors and the management server
console heap, and send the output to the default directory:
VPlexcli:/> collect-diagnostics --nocores --noheap
Example Collect all RecoverPoint diagnostics, including all available RecoverPoint core files, and
send the output to the default directory:
VPlexcli:/> collect-diagnostics --recoverpoint-only --allcores
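Example Collect a reduced, local-only set of diagnostics when a full collection would take too long or
produce very large output files. This invocation is illustrative, using only the --minimum argument
described above:
VPlexcli:/> collect-diagnostics --minimum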
configuration complete-system-setup
Completes the VPLEX Metro or Geo configuration.
Description Completes the automated EZ-Setup Wizard for VPLEX Metro and VPLEX Geo configurations.
This command must be run twice: once on each cluster.
Note: Before using this command on either cluster, first use the configuration
system-setup command (on both clusters).
configuration configure-auth-service
Configures the authentication service selected by the user.
You may select to use your existing LDAP or Active Directory as a directory service to
authenticate VPLEX users. To configure this, you will need the authentication service provider
server information, and the security information to map the users.
Or, you may choose not to configure an authentication service provider at this time. You may
configure an authentication service provider for authentication at any time, using VPLEX CLI
commands.
Would you like to configure an authentication service provider to authenticate VPLEX users?
(yes/no) [no]::yes
1. LDAP
2. AD
Select the type of authentication service provider you would like use for VPLEX authentication.
(1 - 2) [1]: 1
1. SSL
2. TLS
To configure the Authentication Service Provider you will need: the base distinguished name,
the bind distinguished name, and the mapprincipal. Examples of these are:
Note: After running this command, run the webserver restart command.
configuration connect-local-directors
Connects to the directors in the local cluster.
Example Use the --force argument when the directors are already connected:
VPlexcli:/> configuration connect-local-directors --force
Already connected to Plex firmware director-1-1-A
<128.221.252.35,128.221.253.35>.
◆ configuration system-setup
configuration connect-remote-directors
Connects to the remote directors after the VPN connection has been established.
Description During system setup for a VPLEX Metro or Geo configuration, use the configuration
connect-remote-directors command to connect the local cluster to the directors in the
remote cluster.
Run this command twice: once from the local cluster to connect to remote directors, and
once from the remote cluster to connect to local directors.
Prerequisite: You must know the number of directors at each cluster.
configuration continue-system-setup
Continues the EZ-Setup Wizard after back-end storage is configured and allocated for the
cluster.
Description This command validates the back-end configuration for the local cluster. The cluster must
have its back-end allocated and configured for this command to succeed.
Use the configuration system-setup command to start the EZ-Setup Wizard to configure
the VPLEX.
Zone the back-end storage to the port WWNs of the VPLEX back-end ports.
After the back-end storage is configured and allocated for the cluster, use this command
to complete the initial configuration.
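Example An illustrative invocation, assuming the command takes no arguments, run after the back-end
storage has been zoned and allocated:
VPlexcli:/> configuration continue-system-setup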
configuration cw-vpn-configure
Establishes VPN connectivity between a VPLEX management server and the Cluster
Witness Server and starts the VPN tunnel between them. The command is interactive and
requires inputs to complete successfully.
IMPORTANT
This command must be run on both management servers to establish a 3-way VPN
between the management servers and the Cluster Witness Server.
Description
IMPORTANT
In order for this command to succeed, the following conditions must be true:
◆ VPN connectivity is established between the VPLEX management servers
◆ Cluster Witness Server is successfully deployed based on the steps in the Cluster
Witness Installation guide
◆ Cluster Witness Server is configured with a static public IP interface
This command:
◆ Configures the VPN tunnel between the local VPLEX management server and the
Cluster Witness Server.
◆ Checks whether the VPLEX clusters' management servers are connected by the VPN tunnel. If
Cluster Witness Server is not yet configured with VPN:
• Generates the Cluster Witness Server host certificate
• Configures the IPSec settings for Cluster Witness Server on the VPLEX management
server
• Configures the IPSec settings for the management server on the Cluster Witness
Server
• Restarts the IPSec service on the Cluster Witness Server VM
◆ Validates Cluster Witness Server VPN connectivity with the VPLEX management server.
Prerequisites
The following information is required to complete this command:
◆ VPLEX Metro or Geo setup is successfully completed using EZ-Setup. This creates a
VPN connection between VPLEX management servers.
◆ Confirm that the VPLEX setup meets the requirement for Cluster Witness
configuration. See the VPLEX Cluster Witness Deployment and Configuration guide for
more information.
◆ Passphrase of the Certificate Authority Key that was provided during VPN configuration
between management servers.
◆ Passphrase for creating the Cluster Witness host certificate. This passphrase should
be different from the passphrase used to create host certificates for the management
servers.
Note: If this command is run when the VPN is already configured, the following error
message is displayed: VPN connectivity is already established.
Note: The passphrase entered for the Cluster Witness Server Host Certificate Key is
reconfirmed with a Re-enter the passphrase for the Certificate Key prompt, because the
Cluster Witness Server Host Certificate Key password is being created for the first time.
There is no reconfirmation prompt for the Certificate Authority passphrase because the CA
passphrase was already created as part of complete-metro-setup.
In this example, run the command on the management server in the first VPLEX cluster
(10.31.25.26):
VPlexcli:/> configuration cw-vpn-configure -i 10.31.25.45
The Cluster Witness requires a VPLEX Metro or VPLEX Geo configuration. Is this system configured
as a Metro or Geo? (Y/N): y
“Please enter the passphrase for the Certificate Authority that was provided while
configuring VPN between management servers.”
Enter the passphrase to create the Cluster Witness Server Host Certificate Key (at least 8
characters):passphrase
Re-enter the passphrase for the Certificate Key: passphrase
New Host certificate request /etc/ipsec.d/reqs/cwsHostCertReq.pem created
Please enter the IP address of the remote cluster management server that will be included in
the 3-way VPN setup: 10.31.25.27
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.31.25.27 is reachable
Remote Internal Gateway addresses are reachable
Verifying the VPN status between the management server and the cluster witness server...
Cluster Witness Server at IP Address 128.221.254.3 is not reachable
Verifying the VPN status between the management server and the cluster witness server...
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
The VPN configuration between this cluster and the Witness server is complete.
The Setup Wizard has completed the automated portion of configuring your cluster.
From this point, please follow the manual procedures defined in the Installation and Setup
Guide.
VPlexcli:/>
Run the command on the management server in the second VPLEX cluster (10.31.25.27):
VPlexcli:/> configuration cw-vpn-configure -i 10.31.25.45
Cluster witness requires a VPlex Metro or VPlex Geo configuration. Is this system a Metro or
Geo? (Y/N): y
Please enter the IP address of the remote cluster management server in the plex that will be
involved in the 3 way VPN setup: 10.31.25.26
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.31.25.26 is reachable
Remote Internal Gateway addresses are reachable
Verifying the VPN status between the management server and the cluster witness server...
Cluster Witness Server at IP Address 128.221.254.3 is not reachable
Verifying the VPN status between the management server and the cluster witness server...
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
The Setup Wizard has completed the automated portion of configuring your cluster.
From this point, please follow the manual procedures defined in the Installation and Setup
Guide.
The task summary and the commands executed for each automation task has been captured in
/var/log/VPlex/cli/VPlexcommands.txt
configuration cw-vpn-reset
Resets the VPN connectivity between the management server and the Cluster Witness
Server.
Description Resets the VPN between the management server on a cluster and the Cluster Witness
Server.
Use this command with EXTREME CARE. This command will erase all Cluster Witness VPN
configuration.
This command should be used only when Cluster Witness is disabled, as it is not providing
guidance to VPLEX clusters at that time.
Using the cw-vpn-reset command when Cluster Witness is enabled will cause Cluster
Witness to lose connectivity with the Cluster Witness Server and will generate a call-home
event.
In order to complete, this command requires VPN connectivity between the management
server and the Cluster Witness Server.
Note: Run this command twice: once from each management server.
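Example An illustrative invocation on the local management server; the interactive output that
follows is then displayed:
VPlexcli:/> configuration cw-vpn-reset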
To disable the VPN connectivity to the Cluster Witness Server please enter RESET (case
sensitive): RESET
Verifying if there is a VPN connection between the Management Server and the Cluster Witness Server...
Verifying if the Cluster Witness has been configured on this Management Server...
Verifying if the Cluster Witness has been enabled on this Management Server...
Successfully removed the connection name and updated the Cluster Witness Server ipsec.conf file
Successfully transferred the ipsec configuration file to the Cluster Witness Server and
restarted the IPSec process
Successfully removed the cluster witness connection name from the Management Server ipsec.conf
file
Successfully restarted the ipsec process on the Management Server
The task summary and the commands executed for each automation task has been captured in
/var/log/VPlex/cli/VPlexcommands.txt
Successfully removed the connection name and updated the Cluster Witness Server ipsec.conf file
Successfully transferred the ipsec configuration file to the Cluster Witness Server and
restarted the IPSec process
Successfully removed the certificate files from the Cluster Witness Server
Successfully removed the cluster witness connection name from the Management Server ipsec.conf
file
Successfully restarted the ipsec process on the Management Server
The task summary and the commands executed for each automation task has been captured in
/var/log/VPlex/cli/VPlexcommands.txt
configuration enable-front-end-ports
After the meta-volume is created using the EZ-Setup wizard, enable front-end ports using
this command.
Description Used to complete the initial system configuration using the EZ-Setup Wizard. After the
meta-volume has been configured on the cluster, use this command to resume setup and
enable the front-end ports on the local cluster.
Prerequisite: The cluster must be configured with a meta-volume and a meta-volume
backup schedule.
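Example An illustrative invocation, assuming the command takes no arguments, run after the
meta-volume and its backup schedule are in place:
VPlexcli:/> configuration enable-front-end-ports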
Description This command runs an interview script that prompts for values to configure call-home
notification.
If call-home notification is already configured, the current configuration information is
displayed.
If call-home notification is not configured, interview questions to configure the service
that is not configured are displayed.
Note: This command does not modify an existing configuration. Use the configuration
event-notices-reports reset command to reset (delete) an existing event notification and
reporting configuration. Then use this command to configure new settings.
configuration event-notices-reports-show
This command shows call-home notification connection records and system connections
based on the ConnectEMC configuration.
--verbose - Provides more output during command execution. This may not have any effect
for some commands.
Description This command shows call-home notification connection records and system connections
based on the ConnectEMC configuration. There are no limitations.
configuration get-product-type
Displays the VPLEX product type (Local, Metro, or Geo).
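Example An illustrative invocation; the command reports whether the system is configured as Local,
Metro, or Geo:
VPlexcli:/> configuration get-product-type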
configuration join-clusters
Validates WAN connectivity and joins the two clusters.
Description This command validates WAN connectivity and joins the two clusters.
Note: This command applies to systems configured as a Metro over Fibre Channel using the EZ-Setup wizard.
configuration metadata-backup
Configures and schedules the daily backup of VPLEX metadata.
Description Selects the volumes to use as backup volumes and creates the initial backup of both
volumes.
The backup meta-volume’s size should be equal to or greater than the active meta-volume
size. The current requirement is 78 GB per storage volume.
See the EMC VPLEX Technical Notes for best practices regarding the kind of backend array
volumes to consider for a meta-volume.
IMPORTANT
This command must be executed on the management server on which you want to create
the backups.
Runs an interview script that prompts for values to configure and schedule the daily
backups of VPLEX metadata.
◆ Selects the volumes on which to create the backup
◆ Updates the VPlex configuration .xml file (VPlexconfig.xml)
◆ Creates an initial backup on both selected volumes
◆ Creates two backup volumes named:
• volume-1_backup_timestamp
• volume-2_backup_timestamp
◆ Schedules a backup at a time selected by the user
Enter two or more storage volumes, separated by commas.
Renaming backup metadata volumes is not supported.
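Example An illustrative invocation on the management server where the backups are to be created;
the interview prompts that follow are then displayed:
VPlexcli:/> configuration metadata-backup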
Specify two or more storage volumes. Storage volumes must be:
- unclaimed
- on different arrays
Please select volumes for meta-data backup, preferably from two different arrays
(volume1,volume2):VPD83T3:60000970000192601714533036464236,VPD83T3:60000970000192601714533036
464237
What hour of the day should the meta-data be backed up? (0..23): 11
What minute of the hour should the meta-data be backed up? (0..59): 25
VPLEX is configured to back up meta-data every day at 11:25 (UTC).
Would you like to change the time the meta-data is backed up? [no]: no
Review and Finish
Review the configuration information below. If the values are correct, enter
yes (or simply accept the default and press Enter) to start the setup process. If the values
are not correct, enter no to go back and make changes or to exit the setup.
Meta-data Backups
Meta-data will be backed up every day at 11:25.
The following volumes will be used for the backup :
VPD83T3:60000970000192601714533036464236,
VPD83T3:60000970000192601714533036464237
Would you like to run the setup process now? [yes]:
Would you like to change the volumes on which to backup the metadata? [no]:
VPLEX is configured to back up meta-data every day at 11:25 (UTC).
Would you like to change the time the meta-data is backed up? [no]: yes
What hour of the day should the meta-data be backed up? (0..23): 11
What minute of the hour should the meta-data be backed up? (0..59): 00
VPLEX is configured to back up meta-data every day at 11:00 (UTC).
Review and Finish
Review the configuration information below. If the values are correct, enter
yes (or simply accept the default and press Enter) to start the setup process. If the values
are not correct, enter no to go back and make changes or to exit the setup.
Meta-data Backups
Meta-data will be backed up every day at 11:00.
The following volumes will be used for the backup :
VPD83T3:60000970000192601714533036464236,
VPD83T3:60000970000192601714533036464237
Would you like to run the setup process now? [yes]: yes
/clusters/cluster-2/system-volumes:
Detroit_LOGGING_VOL_vol Detroit_METAVolume1 Detroit_METAVolume1_backup_2010Dec23_052818
Detroit_METAVolume1_backup_2011Jan16_211344
configuration ntp-sync-system
Synchronizes the system time across the management server and directors in a single or
multi-site system.
Description This command synchronizes the time across the management server and directors in a
single or multi-site system.
After running the NDU pre-check, if you see a time drift warning, execute this command first
on SMS-1, then on SMS-2.
After performing a director replacement, if you see a time drift warning, execute this
command on the corresponding management server.
Example Run the command first on SMS-1, then on SMS-2:
[From SMS-1]
VPlexcli:/> configuration ntp-sync-system
[From SMS-2]
VPlexcli:/> configuration ntp-sync-system
configuration register-product
Registers the VPLEX product with EMC.
Example
VPlexcli:/> configuration register-product
Welcome to the VPLEX Product Registration Assistant. To register your
VPLEX product, please provide the information prompted for below.
This information will be sent to EMC via email or, this information will be captured in a file
that you can attach to an email to send to EMC.
Attempting to determine the VPLEX Product Serial Number from the system.
Which method will be used to Connect Home. Enter the number associated with your selection.
1: ESRS 2: Email Home 3: Do Not Connect Home
Connect Home using : 3
Which method will be used for Remote Support. Enter the number associated with your selection.
1: ESRS 2: WebEx
Remote Support using : 2
To complete the registration process, this information must be sent to EMC. We can send this
product registration information to EMC for you using an SMTP server of your choice. Would you
like to send this now?
(Y/N): n
To complete the registration process, this information must be sent to
EMC. Please attach the file located here:
/var/log/VPlex/cli/productRegistration.txt
to an email and send it to [email protected] as soon as possible to complete
your VPLEX registration.
Description Adds one or more address:port configurations for the specified remote-cluster entry for
this cluster.
See the VPLEX procedures in the SolVe Desktop for more information on managing
WAN-COM IP addresses.
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [192.168.11.252:11000]
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
Example Add cluster 2 addresses to cluster 1, assuming that the WAN COM port groups for cluster 2
have their cluster-address properly configured:
VPlexcli:/clusters/cluster-1/connectivity/wan-com> ls
Attributes:
Name Value
------------------------ -----------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses -
VPlexcli:/clusters/cluster-1/connectivity/wan-com> ls
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
Description Clears one, several, or all address:port configurations for the specified remote-cluster
entry for this cluster.
See the VPLEX procedures in the SolVe Desktop for more information on managing
WAN-COM IP addresses.
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [192.168.11.252:11000]
configuration short-write
Reports whether the short-write feature is enabled or disabled and provides a way to
enable or disable the feature on a cluster.
Description If the short-write command is executed with no options, it shows the current status of the
short-write feature. To change whether the short-write feature is enabled, specify on
or off using the --status option.
Note: When changing the status of the short-write feature, if any operation fails, the
currently changed values will be reverted.
Example Using the configuration short-write command to report the status of the short-write feature
on a cluster.
vplexcli> configuration short-write
The short-write feature is enabled
Example Using the configuration short-write command to set the status of the short-write feature to
enabled on a cluster.
vplexcli> configuration short-write --status on
The short-write feature is set to enabled
configuration show-meta-volume-candidates
Displays the volumes that meet the criteria for a VPLEX meta-volume.
If the meta-volume is configured on a CLARiiON® array, it must not be placed on the vault
drives of the CLARiiON.
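Example An illustrative invocation at the local cluster; the command lists the unclaimed storage
volumes that meet the meta-volume criteria:
VPlexcli:/> configuration show-meta-volume-candidates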
The task summary and the commands executed for each automation task has been captured in
/var/log/VPlex/cli/VPlexcommands.txt
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/subnet> ls
Name Value
---------------------- -----
cluster-address 192.168.10.10
gateway 192.168.10.1
mtu 1500
prefix 192.168.10.0/255.255.255.0
proxy-external-address 10.10.42.100
remote-subnet-address 192.168.100.10/255.255.255.0
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/subnet> clear
./
Are you sure you want to clear these subnets? (Yes/No) yes
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/subnet> ls
Name Value
---------------------- -----
cluster-address -
gateway -
mtu 1500
prefix -
proxy-external-address -
remote-subnet-address -
remote-subnet-address 192.168.100.10/255.255.255.0
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description Adds a routing prefix (remote subnet) to a front-end or back-end iSCSI subnet; the remote
subnet is accessed through the subnet’s gateway.
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description Removes a routing prefix (remote subnet) from a front-end or back-end iSCSI subnet; the
remote subnet is accessed through the subnet’s gateway.
Note: This command is valid only on systems that support iSCSI devices.
configuration sync-time
Synchronizes the time of the local management server with a remote NTP server. Remote
Metro and Geo clusters are synchronized with the local management server.
May cause the CLI or SSH session to disconnect. If this occurs, log back in and continue
system set-up where you left off.
This command synchronizes Cluster 1 with the external NTP server, or synchronizes
Cluster 2 with the local management server on Cluster 1.
Use this command before performing any set-up on the second cluster of a VPLEX Metro or
Geo configuration.
Use this command during initial system configuration before using the configuration
system-setup command.
The command prompts for the public IP address of the NTP server, or the local
management server, if not provided.
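Example An illustrative invocation on the first cluster's management server; the prompts that follow
are then displayed:
VPlexcli:/> configuration sync-time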
Please enter the IP address of the NTP server in order to synchronize the system clock.
ATTENTION: VPlex only supports syncing Cluster-1 to an external NTP server while Cluster-2 must
be synced to Cluster-1. IP Address of NTP server:10.108.69.121
Syncing time on the management server of a Metro/Geo system could cause the VPN to require a
restart. Please Confirm (Yes: continue, No: exit) (Yes/No) yes
Verifying the VPN status between the management server and the cluster witness server...
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
configuration sync-time-clear
Clears the NTP configuration on a management server.
Description Clears the NTP configuration and restarts the NTP protocol.
This command clears the NTP configuration on the management server of the cluster on
which it is run.
configuration sync-time-show
Displays the NTP configuration of a cluster’s management server.
This cluster is configured to get its time from the following servers:
10.6.210.192
configuration system-reset
Resets the cluster to the manufacturing state.
configuration system-setup
Starts the EZ-Setup Wizard automated configuration tool.
Description Configures the VPN and establishes a secure connection between the clusters.
Use the exit command any time during the session to exit EZ-Setup.
Use the configuration system-setup command to resume EZ-Setup. The process restarts
from the first step. Any values from the previous session appear as default values.
IMPORTANT
There must be no meta-volume configured, and no storage exposed to hosts in order to
run this command.
Use the configuration system-setup command to restart the wizard from the first step.
Values entered in the previous session are displayed as the defaults.
◆ Zone the back-end storage to the port WWNs of the VPLEX back-end ports.
◆ Use the configuration continue-system-setup command to complete the initial
configuration.
◆ Use the configuration show-meta-volume-candidates command to display storage
volumes that are candidates for the meta-volume (unclaimed, no extents, and at least
78 GB capacity).
◆ Create the meta-volume.
◆ Use the configuration enable-front-end-ports command to enable the front-end ports
on the local cluster.
For VPLEX Metro or Geo configurations:
◆ Use the configuration continue-system-setup command to complete the configuration
for the local cluster.
◆ Enable and zone the WAN ports.
◆ Use the configuration connect-remote-directors command to connect the local cluster
to the directors on the remote cluster, and the remote cluster to the directors on the
local cluster.
After configuration is complete, use the following commands to make modifications:
◆ Use the configuration configure-auth-service command to configure the authentication
service selected by the user.
◆ Use the configuration cw-vpn-configure and configuration cw-vpn-reset commands to
manage VPLEX Witness.
◆ Use the configuration event-notices-reports-configure and configuration
event-notices-reports-reset commands to import/apply/remove customized
call-home event files.
configuration upgrade-meta-slot-count
Upgrades the slot count of the active meta volume at the given cluster to 64,000 slots.
Context /clusters/cluster/system-volumes
[--verbose]
[-f | --force]
IMPORTANT
Specify two or more storage volumes. Storage volumes should be on different arrays.
[-f | --force] - Forces the upgrade to proceed without asking for confirmation.
Description On the metadata volume, each slot stores header information for each storage volume,
extent, and logging volume. This command upgrades the slot count of the active meta
volume at the given cluster to 64,000 slots.
By default, the oldest meta volume backup at the cluster serves as a temporary meta
volume.
If you specify the -d or --storage-volume option, then the command creates the temporary
meta volume from scratch from those disks. The temporary meta volume is active while
the currently-active meta volume is being upgraded. At the end of the process, VPLEX
reactivates the original meta volume and the temporary meta volume becomes a backup
again. VPLEX renames the backup to reflect the new point in time at which it became a
backup.
If the meta-volume is configured on a CLARiiON array, it must not be placed on the
vault drives of the CLARiiON.
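Example An illustrative invocation from the system-volumes context at cluster-1, assuming the default
behavior of using the oldest meta volume backup as the temporary meta volume:
VPlexcli:/clusters/cluster-1/system-volumes> configuration upgrade-meta-slot-count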
connect
Connects to a director.
Syntax connect
[-o|--host] [host name|IP address]
--logport port number
--secondary-host [host name|IP address]
--secondary-logport secondary port number
[-n|--name] name
[-t|--type] system type
[-p|--password] password
[-c|--connection-file] filename
[-s|--save-authentication]
--no-prompt
Note: If the disconnect command is issued for a director, the entry for that director is
removed from the connections file.
When a director is connected, the context tree expands with new contexts representing
the director, including:
◆ A new director context below /engines/engine/directors representing storage and
containing the director’s properties.
◆ If this is the first connection to a director at that cluster, a new cluster context below
/clusters.
◆ If this is the first connection to a director belonging to that engine, a new engine
context below /engines.
Use the connect --name name command to name the new context below
/engines/engine/directors. If --name is omitted, the host name or IP address is used.
If the --connection-file argument is used, the specified file must list at least one host
address to connect to on each line with the format:
host|[secondary host][,name]
Sample connections file:
service@ManagementServer:/var/log/VPlex/cli> tail connections
128.221.252.67:5988|128.221.253.67:5988,Cluster_2_Dir_1A
128.221.252.68:5988|128.221.253.68:5988,Cluster_2_Dir_1B
128.221.252.69:5988|128.221.253.69:5988,Cluster_2_Dir_2A
128.221.252.70:5988|128.221.253.70:5988,Cluster_2_Dir_2B
128.221.252.35:5988|128.221.253.35:5988,Cluster_1_Dir1A
128.221.252.36:5988|128.221.253.36:5988,Cluster_1_Dir1B
128.221.252.36:5988,128.221.252.36
128.221.252.35:5988,128.221.252.35
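Example An illustrative invocation that connects to a single director using the --host,
--secondary-host, and --name arguments from the syntax above; the addresses and name are reused
from the sample connections file purely for illustration:
VPlexcli:/> connect --host 128.221.252.35 --secondary-host 128.221.253.35 --name Cluster_1_Dir1A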
connectivity director
Displays connections from the specified director through data (non-management) ports.
Description Prints a table of discovered storage volumes, initiators and directors. Lists the ports on
which it discovered each storage volume, initiator and director.
Example Display connectivity from director Cluster_1_Dir1A and sort output by port:
VPlexcli:/> connectivity director Cluster_1_Dir1A/ --sort-by port
Device VPD83T3:60000970000192601378533030303530 is a default LUN_0.
Device VPD83T3:60000970000192601723533030303530 is a default LUN_0.
Device VPD83T3:60000970000192601852533030303530 is a default LUN_0.
StorageVolumes discovered - sorted by: port
Port WWN StorageVolume Name LUN
------- ------------------ ---------------------------------------- ------------------
A2-FC00 0x50000972081aeda5 VPD83T3:60000970000192601723533030313530 0x0001000000000000
VPD83T3:60000970000192601723533030313534 0x0002000000000000
0x500601603ce03506 VPD83T3:6006016021d0250026b925ff60b5de11 0x0008000000000000
VPD83T3:6006016021d0250027b925ff60b5de11 0x0001000000000000
.
.
.
A2-FC02 0x500601613ce03506 VPD83T3:6006016021d0250026b925ff60b5de11 0x0008000000000000
VPD83T3:6006016021d0250027b925ff60b5de11 0x0001000000000000
.
.
.
Example Display connectivity from director Cluster_1_Dir1A and sort output by storage volume
name:
VPlexcli:/> connectivity director Cluster_1_Dir1A/ --sort-by name
Device VPD83T3:60000970000192601378533030303530 is a default LUN_0.
Device VPD83T3:60000970000192601723533030303530 is a default LUN_0.
Device VPD83T3:60000970000192601852533030303530 is a default LUN_0.
StorageVolumes discovered - sorted by: name
StorageVolume Name WWN LUN Ports
---------------------------------------- ------------------ ------------------ ---------------
VPD83T3:60000970000192601723533030313530 0x50000972081aeda5 0x0001000000000000 A2-FC00,A3-FC00
VPD83T3:60000970000192601723533030313534 0x50000972081aeda5 0x0002000000000000 A2-FC00,A3-FC00
VPD83T3:6006016021d0250026b925ff60b5de11 0x500601603ce03506 0x0008000000000000 A2-FC00,A3-FC00
0x500601613ce03506 0x0008000000000000 A2-FC02,A3-FC02
.
.
.
connectivity show
Displays the communication endpoints that can see each other.
Description Displays connectivity, but does not perform connectivity checks. Displays which ports can
talk to each other.
Example Use the --endpoints argument followed by the <Tab> key to display all ports:
VPlexcli:/> connectivity show --endpoints
connectivity validate-be
Checks that the back-end connectivity is correctly configured.
Description This provides a summary analysis of the back-end connectivity information displayed by
connectivity director if connectivity director was executed for every director in the system.
It checks the following:
◆ All directors see the same set of storage volumes.
◆ All directors have at least two paths to each storage-volume.
◆ The number of active paths from each director to a storage volume does not exceed 4.
IMPORTANT
If the number of paths per storage volume per director exceeds 8, a warning event is
generated but no call-home notification is sent. If the number of paths exceeds 16, an
error event and a call-home notification are generated.
On VPLEX Metro systems where RecoverPoint is deployed, run this command on both
clusters.
If the connectivity director command is run for every director in the VPLEX prior to running
this command, this command displays an analysis/summary of the back-end connectivity
information.
Example Entering the connectivity validate-be command without any arguments provides a
summary output as shown.
VPlexcli:/> connectivity validate-be
Cluster cluster-1
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for storage volume paths*.
0 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from same director.
*To meet the high availability requirement for storage volume paths each storage volume must
be accessible from each of the directors through 2 or more VPlex backend ports, and 2 or more
Array target ports, and there should be 2 or more ITLs.
Cluster cluster-2
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for storage volume paths*.
5019 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from same director.
*To meet the high availability requirement for storage volume paths each storage volume must
be accessible from each of the directors through 2 or more VPlex backend ports, and 2 or more
Array target ports, and there should be 2 or more ITLs.
Summary
Cluster cluster-1
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for storage volume paths*.
0 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from same director.
*To meet the high availability requirement for storage volume paths each storage volume must
be accessible from each of the directors through 2 or more VPlex backend ports, and 2 or more
Array target ports, and there should be 2 or more ITLs.
Cluster cluster-2
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for storage volume paths*.
5019 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from same director.
*To meet the high availability requirement for storage volume paths each storage volume must
be accessible from each of the directors through 2 or more VPlex backend ports, and 2 or more
Array target ports, and there should be 2 or more ITLs.
◆ connectivity validate-local-com
◆ connectivity validate-wan-com
◆ health-check
◆ validate-system-configuration
connectivity validate-local-com
Validates that the actual connectivity over local-com matches the expected connectivity.
Description Verifies the expected local-com connectivity. This command assembles a list of expected
local-com connectivity, compares it to the actual local-com connectivity, and reports any
missing or extra connections. This command verifies only IP- or FC-based local-com
connectivity.
Expected connectivity is determined by collecting all ports whose role is local-com and
verifying that each port in a port-group has connectivity to every other port in the same
port-group.
When both FC and IP ports with role local-com are present, the smaller subset is discarded
and the protocol of the remaining ports is assumed to be the correct protocol.
Example Expected connectivity when both port-groups are detected on two directors:
VPlexcli:/> connectivity validate-local-com -c cluster-1/ -e
Expected connectivity:
ip-port-group-0
/engines/engine-unknown/directors/director-1-1-A/hardware/ports/GE00 ->
/engines/engine-unknown/directors/director-1-1-B/hardware/ports/GE00
/engines/engine-unknown/directors/director-1-1-B/hardware/ports/GE00 ->
/engines/engine-unknown/directors/director-1-1-A/hardware/ports/GE00
ip-port-group-1
/engines/engine-unknown/directors/director-1-1-A/hardware/ports/GE01 ->
/engines/engine-unknown/directors/director-1-1-B/hardware/ports/GE01
/engines/engine-unknown/directors/director-1-1-B/hardware/ports/GE01 ->
/engines/engine-unknown/directors/director-1-1-A/hardware/ports/GE01
connectivity validate-wan-com
Verifies the expected IP and FC WAN COM connectivity.
Description This command assembles a list of expected WAN COM connectivity, compares it to the
actual WAN COM connectivity and reports any discrepancies (i.e. missing or extra
connections).
This command verifies IP or FC based WAN COM connectivity.
If no option is specified, displays a list of ports that are in error: either missing expected
connectivity or have additional unexpected connectivity to other ports.
The expected connectivity is determined by collecting all ports with role wan-com and
requiring that each port in a port group at a cluster have connectivity to every other port in
the same port group at all other clusters.
When both FC and IP ports with role wan-com are present, the smaller subset is discarded
and the protocol of the remaining ports is assumed to be the correct protocol.
Example Expected WAN COM connectivity between two clusters, listed by port group:
ip-port-group-1
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG01 ->
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG01
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG01
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG01 ->
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG01
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG01
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG01 ->
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG01
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG01
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG01 ->
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG01
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG01
ip-port-group-0
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG00 ->
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG00
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG00
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG00 ->
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG00
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG00
/engines/engine-2-1/directors/director-2-1-A/hardware/ports/A2-XG00 ->
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG00
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG00
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG00 ->
/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG00
Example Display/verify WAN connectivity when ports are healthy, connectivity is correctly
configured:
VPlexcli:/> connectivity validate-wan-com
connectivity: FULL
Example Display/verify WAN connectivity when there are errors communicating with directors or
with the port-group configuration:
VPlexcli:/> connectivity validate-wan-com
connectivity: PARTIAL
ip-port-group-0 - ERROR - Connectivity errors were found for the following com ports:
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG00 ->
Missing connectivity to /engines/engine-2-2/directors/director-2-2-B/hardware/ports/B2-XG00
Example Display/verify WAN connectivity when no connectivity was detected between any ports:
VPlexcli:/> connectivity validate-wan-com
connectivity: NONE
ip-port-group-1 - ERROR - Connectivity errors were found for the following com ports:
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG01 ->
Missing connectivity to /engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG01
/engines/engine-2-1/directors/director-2-1-B/hardware/ports/B2-XG01 ->
Missing connectivity to /engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG01
ip-port-group-0 - ERROR - Connectivity errors were found for the following com ports:
/engines/engine-1-1/directors/director-1-1-B/hardware/ports/B2-XG00 ->
Missing connectivity to /engines/engine-2-2/directors/director-2-2-B/hardware/ports/B2-XG00
/engines/engine-1-2/directors/director-1-2-A/hardware/ports/A2-XG00 ->
Missing all expected connectivity.
This command modifies COM parameters that should not need modification under
normal operations. Contact EMC Support before using this command.
Optional arguments
[-f|--force] - Force the value to be set, bypassing any confirmations or guards.
Description COM bandwidth management senses how fast I/O is sent to a peer and adjusts the I/O
window size accordingly. Adjustments ensure that:
◆ High priority I/O does not have to wait longer than an expected maximum time before
being tried the first time.
◆ Enough I/O is queued to the sending protocol to obtain the best link throughput.
Description If the values are the same for all directors, then only one set of values is displayed.
Otherwise, the values for each director are displayed individually.
For descriptions of entries in the Name field, see the connectivity window set command.
Field Description
avgWaitUpperBound The threshold, in milliseconds, at which the window size will shrink. If the average
waiting time of all outstanding I/Os exceeds this value, the window size will shrink.
Otherwise, if full, the window size will grow.
historyDecayDenominator The history speed information decays over time. The unit speed (used to predict
window size) is:
speed = decay-rate * history-speed + (1 - decay-rate) * current-speed
where the decay-rate = historyDecayNumerator/historyDecayDenominator.
historyDecayNumerator Value used to calculate the rate at which speed information decays.
minChangeTime The minimum interval in milliseconds between any two com window size
calculations. The calculation will cause the history speed to decay, but not
necessarily change the window size.
windowEnlargeDelta The delta (in bytes) of increment each time the window size enlarges.
Description
Example
VPlexcli:/> connectivity window stat
Statistics for director Cluster_1_Dir1A:
Director Current Window Size Outstanding I/O Queued I/O
---------------- ------------------- --------------- ----------
Cluster_1_Dir1B 10 0 0
Cluster_2_Dir_1A 93 0 0
Cluster_2_Dir_1B 93 0 0
Cluster_2_Dir_2A 105 0 0
Cluster_2_Dir_2B 105 0 0
Cluster_1_Dir1A 22 0 0
Field Description
Current Window Size The current window size towards the remote director.
Outstanding I/O The number of I/Os currently outstanding (started) to the remote director
Queued I/O The number of I/Os in the queue, waiting to start. Outstanding I/O and Queued I/O
are exclusive to each other.
consistency-group add-virtual-volumes
Adds one or more virtual volumes to a consistency group.
Description Adds the specified virtual volumes to a consistency group. The properties of the
consistency group immediately apply to the added volume.
IMPORTANT
Only volumes whose visibility and storage-at-clusters properties match those of the
consistency group can be added to the consistency group.
If any of the specified volumes are already in the consistency group, the command skips
those volumes, but prints a warning message for each one.
IMPORTANT
When adding virtual volumes to a RecoverPoint-enabled consistency group, the
RecoverPoint cluster may not note the change for 2 minutes. Wait for 2 minutes between
adding virtual volumes to a RecoverPoint-enabled consistency group and creating or
changing a RecoverPoint consistency group.
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> consistency-group
list-eligible-virtual-volumes
[TestDDevice-1_vol, TestDDevice-2_vol, TestDDevice-3_vol, TestDDevice-4_vol,
TestDDevice-5_vol]
Example Add multiple volumes using a single command. Separate the virtual volumes with
commas:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> add-virtual-volumes
TestDDevice-1_vol,TestDDevice-2_vol
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> ll
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters []
cache-mode asynchronous
detach-rule active-cluster-wins
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters [cluster-1, cluster-2]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [TestDDevice-1_vol, TestDDevice-2_vol]
visibility [cluster-1, cluster-2]
Contexts:
Name Description
------------ -----------
advanced -
recoverpoint -
consistency-group choose-winner
Selects a winning cluster during an inter-cluster link failure.
Optional arguments
[-f|--force] - Do not prompt for confirmation. Allows this command to be run using a
non-interactive script.
* - argument is positional.
When the clusters cannot communicate, it is possible to use this command to select both
clusters as the winning cluster (conflicting detach). In a conflicting detach, both clusters
resume I/O independently.
When the inter-cluster link heals in such a situation, manual intervention is required to
pick a winning cluster. The data image of the winning cluster will be used to make the
clusters consistent again. Any changes at the losing cluster during the link outage will be
discarded.
Do not use this command to specify more than one cluster as the winner.
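Example An illustrative invocation that selects cluster-1 as the winner for consistency group cg1
during an inter-cluster link outage; the --cluster argument shown here is an assumption based on the
resolve-conflicting-detach examples later in this section:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> choose-winner --cluster cluster-1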
WARNING: This can cause data divergence and lead to data loss. Ensure the other cluster is not
serving I/O for this consistency group before continuing. Continue? (Yes/No) Yes
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details::
[cluster-departure, rebuilding-across-clusters,
restore-link-or-choose-winner] }), (cluster-2,{ summary::
suspended, details:: [cluster-departure,
rebuilding-across-clusters, restore-link-or-choose-winner]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details::
[requires-resume-after-rollback] }),
(cluster-2,{ summary:: suspended, details:: [cluster-departure]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
consistency-group create
Creates and names an empty consistency group.
Arguments [-n|--name] consistency-group name - * Name of the new consistency group. Must be
unique within a cluster. Name conflicts across clusters can be resolved by changing the
name later using the set name command.
[-c|--cluster] cluster - Context path of the cluster at which to create the consistency group.
If the current context is a cluster or below, that cluster is the default. Otherwise, this
argument is required.
* - argument is positional.
IMPORTANT
When enabling or disabling RecoverPoint for a consistency group, the RecoverPoint cluster
may not note the change for 2 minutes. Wait 2 minutes after setting or changing the
recoverpoint-enabled property before creating or changing a RecoverPoint consistency
group.
Refer to the EMC VPLEX Administration Guide for more information about the consistency
group properties.
Note: Refer to the EMC VPLEX Administration Guide for a description of the fields in the
following examples.
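Example An illustrative invocation that creates an empty consistency group named TestCG at
cluster-1, using the --name and --cluster arguments described above:
VPlexcli:/> consistency-group create --name TestCG --cluster cluster-1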
VPlexcli:/> ls /clusters/*/consistency-groups/
/clusters/cluster-1/consistency-groups:
test10 test11 test12 test13 test14
test15 test16 test5 test6 test7 test8
test9 vs_RAM_c1wins vs_RAM_c2wins vs_oban005 vs_sun190
/clusters/cluster-2/consistency-groups:
.
.
.
VPlexcli:/> cd /clusters/cluster-1/consistency-groups/
VPlexcli:/clusters/cluster-1/consistency-groups> ls
TestCG test10 test11 test12 test13
test14 test15 test16 test5 test6
test7 test8 test9 vs_RAM_c1wins vs_RAM_c2wins
vs_oban005 vs_sun190
VPlexcli:/clusters/cluster-1/consistency-groups> ls TestCG
/clusters/cluster-1/consistency-groups/TestCG:
Attributes:
Name Value
------------------- --------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule -
operational-status [(cluster-1,{ summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters []
virtual-volumes []
visibility [cluster-1]
Contexts:
advanced recoverpoint
◆ consistency-group remove-virtual-volumes
◆ EMC VPLEX Administration Guide
consistency-group destroy
Destroys the specified empty consistency groups.
Optional arguments
[-f|--force] - Force the operation to continue without confirmation. Allows this command to
be run using a non-interactive script.
* - argument is positional.
Before using the consistency-group destroy command on an asynchronous consistency
group, ensure that no host applications are using any member volumes of the
consistency group.
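Example An illustrative invocation, assuming the target consistency group is given as the positional
argument:
VPlexcli:/clusters/cluster-1/consistency-groups> consistency-group destroy TestCG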
consistency-group list-eligible-virtual-volumes
Displays the virtual volumes that are eligible to be added to a specified consistency group.
Arguments [-g|--consistency-group] consistency-group - The consistency group for which the eligible
virtual volumes shall be listed.
If the current context is a consistency group or below a consistency group, that
consistency group is the default.
Otherwise, this argument is required.
Description Displays eligible virtual volumes that can be added to a consistency group. Eligible virtual
volumes:
◆ Must not be a logging volume
◆ Have storage at every cluster in the 'storage-at-clusters' property of the target
consistency group
◆ Are not members of any other consistency group
◆ Have no properties (detach rules, auto-resume) that conflict with those of the
consistency group. That is, detach and resume properties of either the virtual volume
or the consistency group must not be set.
Example List eligible virtual volumes from the target consistency group context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG2>
list-eligible-virtual-volumes
[dr1_C12_0000_vol, dr1_C12_0001_vol, dr1_C12_0002_vol,
dr1_C12_0003_vol, dr1_C12_0004_vol, dr1_C12_0005_vol,
dr1_C12_0006_vol, dr1_C12_0007_vol, dr1_C12_0008_vol,
dr1_C12_0009_vol, dr1_C12_0010_vol, dr1_C12_0011_vol,
dr1_C12_0012_vol, dr1_C12_0013_vol, dr1_C12_0014_vol,
dr1_C12_0015_vol, dgc_p2z_test_vol, vmax_DR1_C1_r1_0000_12_vol,
vmax_DR1_C1_r0_0000_12_vol,
.
.
.
List eligible virtual volumes from the root context:
VPlexcli:/> consistency-group list-eligible-virtual-volumes
/clusters/cluster-1/consistency-groups/TestCG2
[dr1_C12_0000_vol, dr1_C12_0001_vol,
dr1_C12_0002_vol,dr1_C12_0003_vol, dr1_C12_0004_vol,
.
.
.
consistency-group remove-virtual-volumes
Removes one or more virtual volumes from the consistency group.
Description Removes one or more virtual volumes from the consistency group.
If the pattern given to the --virtual-volumes argument matches volumes that are not in the
consistency group, the command skips those volumes and prints a warning message for
each one.
Before using the consistency-group remove-virtual-volumes command on an
asynchronous consistency group, ensure that no host applications are using the volumes
you are removing.
If this is an asynchronous consistency group in a VPLEX Geo system, and one or more of
the volumes being removed is in a storage view, this command asks for confirmation
before proceeding. Volumes removed from an asynchronous group become synchronous,
and an application that is sensitive to the increased I/O latency may experience data loss
or data unavailability.
Best practice is to either:
◆ Remove the volumes from the view, or
◆ Perform the operation when I/O loads are light.
Use the --force argument to suppress the request for confirmation.
IMPORTANT
When removing virtual volumes from a RecoverPoint-enabled consistency group, the
RecoverPoint cluster may not note the change for 2 minutes. Wait for 2 minutes between
removing virtual volumes from a RecoverPoint-enabled consistency group and creating or
changing a RecoverPoint consistency group.
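Example An illustrative invocation that removes a single virtual volume, consistent with the
before-and-after listings that follow (in which dr1_C12_0920_vol is removed):
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> remove-virtual-volumes
--virtual-volumes dr1_C12_0920_vol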
/clusters/cluster-1/consistency-groups/TestCG:
------------------------------- ----------------------------------------------
.
.
.
virtual-volumes [dr1_C12_0919_vol, dr1_C12_0920_vol,
dr1_C12_0921_vol, dr1_C12_0922_vol]
visibility [cluster-1, cluster-2]
.
.
.
VPlexcli:/> ls /clusters/cluster-1/consistency-groups/TestCG
/clusters/cluster-1/consistency-groups/TestCG:
Name Value
------------------------------- ----------------------------------------------
.
.
.
storage-at-clusters [cluster-1, cluster-2]
synchronous-on-director-failure -
virtual-volumes [dr1_C12_0919_vol, dr1_C12_0921_vol,
dr1_C12_0922_vol]
.
.
.
consistency-group resolve-conflicting-detach
Selects a winning cluster for a consistency group on which there has been a conflicting
detach.
Optional arguments
[-f|--force] - Do not prompt for confirmation. Allows this command to be run using a
non-interactive script.
* - argument is positional.
Description
This command results in data loss at the losing cluster.
During an inter-cluster link failure, an administrator may permit I/O to continue at both
clusters. When I/O continues at both clusters:
◆ The data images at the clusters diverge.
◆ Legs of distributed volumes are logically separate.
When the inter-cluster link is restored, the clusters learn that I/O has proceeded
independently.
I/O continues at both clusters until the administrator picks a “winning” cluster whose data
image will be used as the source to resynchronize the data images.
Use this command to pick the winning cluster. For the distributed volumes in the
consistency group:
◆ I/O at the “losing” cluster is suspended (there is an impending data change)
◆ The administrator stops applications running at the losing cluster.
◆ Any dirty cache data at the losing cluster is discarded
◆ The legs of distributed volumes rebuild, using the legs at the winning cluster as the
rebuild source.
When the applications at the losing cluster are shut down, use the consistency-group
resume-at-loser command to allow the system to service I/O at that cluster again.
Example Select cluster-1 as the winning cluster for consistency group “TestCG” from the TestCG
context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> resolve-conflicting-detach
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing you to
stop applications at those clusters. Continue? (Yes/No) yes
Select cluster-1 as the winning cluster for consistency group “TestCG” from the root
context:
VPlexcli:/> consistency-group resolve-conflicting-detach --cluster cluster-1
--consistency-group /clusters/cluster-1/consistency-groups/TestCG
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing you to
stop applications at those clusters. Continue? (Yes/No) Yes
In the following example, I/O has resumed at both clusters during an inter-cluster link
outage. When the inter-cluster link is restored, the two clusters will come back into contact
and learn that they have each detached the other and carried on I/O.
◆ The ls command shows the operational-status as ‘ok,
requires-resolve-conflicting-detach’ at both clusters.
◆ The resolve-conflicting-detach command selects cluster-1 as the winner.
Cluster-2 will have its view of the data discarded.
I/O is suspended on cluster-2.
◆ The ls command displays the change in operational status.
• At cluster-1, I/O continues, and the status is ‘ok’.
• At cluster-2, the view of data has changed and so I/O is suspended pending the
consistency-group resume-at-loser command.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------- -----------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details::
[requires-resolve-conflicting-detach] }),
(cluster-2,{ summary:: ok, details::
[requires-resolve-conflicting-detach] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
Contexts:
advanced recoverpoint
consistency-group resume-after-data-loss-failure
Resumes I/O on an asynchronous consistency group when there are data loss failures.
Optional arguments
[-f|--force] - Do not prompt for confirmation. Allows this command to be run using a
non-interactive script.
* - argument is positional.
Description In the event of multiple near-simultaneous director failures, or a director failure followed
very quickly by an inter-cluster link failure, an asynchronous consistency group may
experience data loss. I/O automatically suspends on the volumes in the consistency
group at all participating clusters.
Use this command to resume I/O. Specifically, this command:
◆ Selects a winning cluster whose current data image will be used as the base from
which to continue I/O.
◆ On the losing cluster, synchronizes the data image with the data image on the winning
cluster.
◆ Resumes I/O at both clusters.
This command may make the data loss larger, because dirty data at the losing cluster may
be discarded.
All the clusters participating in the consistency group must be present in order to use this
command.
If there has been no data-loss failure in the group, this command prints an error message
and does nothing.
Example Select the leg at cluster-1 as the source image to synchronize data for virtual volumes in
consistency group “TestCG” (from root context):
VPlexcli:/> consistency-group resume-after-data-loss-failure --cluster cluster-1
--consistency-group /clusters/cluster-1/consistency-groups/TestCG
Data may be discarded at clusters other than cluster-1. Continue? (Yes/No) Yes
Perform the same task from the specific consistency group context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> resume-after-data-loss-failure
--cluster cluster-1
Data may be discarded at clusters other than cluster-1. Continue? (Yes/No) Yes
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule winner cluster-2 after 5s
operational-status [suspended, requires-resume-after-data-loss-failure]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes DR1_RAM_c2win_lr1_36_vol, DR1_RAM_c2win_lr1_16_vol,
DR1_RAM_c2win_lr0_6_vol, DR1_RAM_c2win_lrC_46_vol,
.
.
.
VPlexcli:/clusters/cluster-1/consistency-groups/CG1> resume-after-data-loss-failure -f -c
cluster-1
VPlexcli:/clusters/cluster-1/consistency-groups/CG1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-2]
cache-mode synchronous
detach-rule winner cluster-2 after 5s
operational-status [ok]
passive-clusters [cluster-1]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes DR1_RAM_c2win_lr1_36_vol, DR1_RAM_c2win_lr1_16_vol,
DR1_RAM_c2win_lr0_6_vol, DR1_RAM_c2win_lrC_46_vol,
.
.
.
In the following example, the auto-resume-at-loser property of the consistency group is
set to false; that is, I/O remains suspended on the losing cluster when connectivity is
restored and must be resumed manually.
◆ The ls command displays the operational status of a consistency group where
cluster-1 is the winner (detach-rule is winner cluster-1) after multiple simultaneous
failures have caused a data loss. The cluster-1 side of the consistency group is suspended.
◆ The resume-after-data-loss-failure command selects cluster-2 as the source image
from which to re-synchronize data.
◆ After a short wait, the ls command displays that the cluster-1 side of the consistency
group remains suspended:
VPlexcli:/clusters/cluster-1/consistency-groups/CG2> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule winner cluster-1 after 5s
operational-status [suspended, requires-resume-after-data-loss-failure]
passive-clusters [cluster-1, cluster-2]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes DR1_RAM_c2win_lr1_7_vol, DR1_RAM_c2win_lr0_37_vol,
DR1_RAM_c2win_lr0_27_vol, DR1_RAM_c2win_lr1_27_vol,
DR1_RAM_c2win_lrC_17_vol, DR1_RAM_c2win_lr0_17_vol,
.
.
.
VPlexcli:/clusters/cluster-1/consistency-groups/CG2> resume-after-data-loss-failure -f -c
cluster-2
VPlexcli:/clusters/cluster-1/consistency-groups/CG2> ll
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-2]
cache-mode synchronous
detach-rule winner cluster-1 after 5s
operational-status [suspended]
passive-clusters [cluster-1]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes DR1_RAM_c2win_lr1_7_vol, DR1_RAM_c2win_lr0_37_vol,
DR1_RAM_c2win_lr0_27_vol, DR1_RAM_c2win_lr1_27_vol,
DR1_RAM_c2win_lrC_17_vol, DR1_RAM_c2win_lr0_17_vol,
.
.
.
consistency-group resume-after-rollback
Resume I/O to the volumes on the winning cluster in a consistency group after:
◆ The losing clusters have been detached, and
◆ Data has been rolled back to the last point at which all clusters had a consistent view.
In a Geo configuration, on a cluster that successfully vaulted and unvaulted, the user
should contact EMC Engineering for assistance before rolling back the data prior to
re-establishing communication with the non-vaulting cluster.
Optional arguments
[-f|--force] - Do not prompt for confirmation. Allows this command to be run using a
non-interactive script.
* - argument is positional.
Description
This command addresses data unavailability but will cause data loss.
This command is part of a two-step recovery procedure to allow I/O to continue in spite of
an inter-cluster link failure.
1. Use the consistency-group choose-winner command to select the winning cluster.
2. Use this command to tell the winning cluster to roll back its data image to the last
point where the clusters were known to agree, and then proceed with I/O.
The first step in the recovery procedure can be automated by setting a detach-rule-set.
The second step is required only if the losing cluster has been “active”, that is, writing to
volumes in the consistency group since the last time the data images were identical at the
clusters.
If the losing cluster is active, the distributed cache at the losing cluster contains dirty data,
and without that data, the winning cluster's data image is inconsistent. Resuming I/O at
the winner requires rolling back the winner's data image to the last point where the
clusters agreed.
Applications may experience difficulties if the data changes, so the roll-back and
resumption of I/O is not automatic.
The delay gives the administrator the chance to halt applications. The administrator then
uses this command to start the roll-back in preparation for resuming I/O.
The winning cluster rolls back its data image to the last point at which the clusters had the
same data images, and then allows I/O to resume at that cluster.
At the losing cluster, I/O remains suspended.
When the inter-cluster link is restored, I/O remains suspended at the losing cluster, unless
the 'auto-resume' flag is set to 'true'.
Example Resume I/O on the cluster-2 leg of virtual volumes in consistency group TestCG:
VPlexcli:/clusters/cluster-2/consistency-groups/TestCG> resume-after-rollback
This will change the view of data at cluster cluster-2, so you should ensure applications are
stopped at that cluster. Continue? (Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details:: [cluster-departure,
rebuilding-across-clusters, restore-link-or-choose-winner] }),
(cluster-2,{ summary:: suspended, details:: [cluster-departure,
rebuilding-across-clusters, restore-link-or-choose-winner] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-after-rollback
This will change the view of data at cluster cluster-1, so you should ensure applications are
stopped at that cluster. Continue? (Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [cluster-departure] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
consistency-group resume-at-loser
If I/O is suspended due to a data change, resumes I/O at the specified cluster and
consistency group.
Optional arguments
[-f|--force] - Do not prompt for confirmation. Without this argument, the command asks for
confirmation to proceed; this protects against accidental use while applications are still
running at the losing cluster, which could cause those applications to misbehave. Allows
the command to be executed from a non-interactive script.
* - argument is positional.
Description During an inter-cluster link failure, an administrator may permit I/O to resume at one of the
two clusters: the “winning” cluster.
I/O remains suspended on the “losing” cluster.
When the inter-cluster link heals, the winning and losing clusters re-connect, and the
losing cluster discovers that the winning cluster has resumed I/O without it.
Unless explicitly configured otherwise (using the auto-resume-at-loser property), I/O
remains suspended on the losing cluster. This prevents applications at the losing cluster
from experiencing a spontaneous data change.
The delay allows the administrator to shut down applications.
After stopping the applications, the administrator can use this command to:
◆ Resynchronize the data image on the losing cluster with the data image on the winning
cluster,
◆ Resume servicing I/O operations.
The administrator may then safely restart the applications at the losing cluster.
Without the '--force' option, this command asks for confirmation to proceed, since its
accidental use while applications are still running at the losing cluster could cause
applications to misbehave.
Example
VPlexcli:/clusters/cluster-2/consistency-groups/TestCG> resume-at-loser
This may change the view of data presented to applications at cluster cluster-2. You should
first stop applications at that cluster. Continue? (Yes/No) Yes
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode asynchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
consistency-group set-detach-rule active-cluster-wins
Applies the active-cluster-wins detach rule to one or more consistency groups.
Optional arguments [-f|--force] - Force the operation to continue without confirmation. Allows this command to
be run from non-interactive scripts. This option is available in Release 5.1 and later.
Description Applies the “active-cluster-wins” detach rule to one or more specified consistency groups.
Note: This command requires user confirmation unless the --force argument is used.
In the event of a cluster failure or departure, this rule-set will result in I/O continuing only
on the cluster that last received I/O. I/O will be suspended at all other clusters. If VPLEX
Witness is deployed, it will override this setting (on synchronous consistency-groups only)
if the last-active cluster has failed.
The applicable detach rule varies depending on the visibility, storage-at-clusters, and
cache mode properties of the consistency group. Table 10 lists the applicable detach rules
for various combinations of visibility, storage-at-clusters, and cache-mode.
IMPORTANT
When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint cluster to
take note of changes to a VPLEX consistency group. Wait for 2 minutes after changing the
detach rule for a VPLEX consistency group before creating or changing a RecoverPoint
consistency group.
Example Set the detach-rule for a single asynchronous consistency group from the group’s context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set-detach-rule active-cluster-wins
Set the detach-rule for two asynchronous consistency groups from the root context:
VPlexcli:/> consistency-group set-detach-rule active-cluster-wins -g
/clusters/cluster-1/consistency-groups/TestCG,/clusters/cluster-1/consistency-groups/TestCG2
consistency-group set-detach-rule no-automatic-winner
Applies the no-automatic-winner detach rule to one or more consistency groups.
Optional arguments [-f|--force] - Force the operation to continue without confirmation. Allows this command to
be run from non-interactive scripts.
Description Applies the no-automatic-winner detach rule to one or more specified consistency groups.
Note: This command requires user confirmation unless the --force argument is used.
This detach rule dictates no automatic detaches occur in the event of an inter-cluster link
failure.
Applicable to both asynchronous and synchronous consistency groups.
In the event of a cluster failure or departure, this rule-set will result in I/O being
suspended at all clusters whether or not VPLEX Witness is deployed. To resume I/O, use
either the consistency-group choose-winner or consistency-group resume-after-rollback
commands to designate the winning cluster.
IMPORTANT
When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint cluster to
take note of changes to a VPLEX consistency group. Wait for 2 minutes after changing the
detach rule for a VPLEX consistency group before creating or changing a RecoverPoint
consistency group.
Example Set the detach-rule for a single consistency group from the group’s context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set-detach-rule no-automatic-winner
Set the detach-rule for two consistency groups from the root context:
VPlexcli:/> consistency-group set-detach-rule no-automatic-winner -g
/clusters/cluster-1/consistency-groups/TestCG,/clusters/cluster-1/consistency-groups/TestCG2
consistency-group set-detach-rule winner
Applies the winner detach rule to one or more synchronous consistency groups.
Required arguments [-c|--cluster] cluster-id - The cluster that will be the winner in the event of an inter-cluster
link failure.
[-d|--delay] seconds - The number of seconds after an inter-cluster link fails before the
winning cluster detaches. Valid values for the delay timer are:
0 - Detach occurs immediately after the failure is detected.
number - Detach occurs after the specified number of seconds have elapsed. There is
no practical limit to the number of seconds, but delays longer than 30 seconds won't
allow I/O to resume quickly enough to avoid problems with most host applications.
Description Applies the winner detach rule to one or more specified synchronous consistency groups.
Note: This command requires user confirmation unless the --force argument is used.
In the event of a cluster failure or departure, this rule-set will result in I/O continuing on
the selected cluster only. I/O will be suspended at all other clusters. If VPLEX Witness is
deployed it will override this selection (on synchronous consistency-groups only) if the
selected cluster has failed.
Applicable only to synchronous consistency groups.
IMPORTANT
When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint cluster to
take note of changes to a VPLEX consistency group. Wait for 2 minutes after changing the
detach rule for a VPLEX consistency group before creating or changing a RecoverPoint
consistency group.
Example Set the detach-rule for a single consistency group from the group’s context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set-detach-rule winner --cluster
cluster-1 --delay 5s
Example Set the detach-rule for two consistency groups from the root context:
VPlexcli:/> consistency-group set-detach-rule winner --cluster cluster-1 --delay 5s
--consistency-groups TestCG, TestCG2
consistency-group summary
Displays a summary of all the consistency groups with a state other than OK.
Description Displays all the consistency groups with a state other than 'OK' and the consistency
groups at risk of a rollback.
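Example A minimal invocation from the root context (the command is assumed to take no
required arguments):
VPlexcli:/> consistency-group summary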
date
Displays the current date and time in Coordinated Universal Time (UTC).
Syntax date
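Example Display the current date and time; the output shown is illustrative only:
VPlexcli:/> date
Tue Oct 06 14:37:02 UTC 2015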
describe
Describes the attributes of the given context.
Syntax describe
[context path]
Example In the following example, the ll command displays information about a port, and the
describe command with no arguments displays additional information.
VPlexcli:/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01> ll
Name Value
------------------------ ------------------
director-id 0x000000003cb001cb
discovered-initiators []
.
.
.
VPlexcli:/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01> describe
Attribute Description
------------------------ --------------------------------------------------
director-id The ID of the director where the port is exported.
discovered-initiators List of all initiator-ports visible from this port.
.
.
.
Use the describe context command to display information about the specified context:
VPlexcli:/> describe --context /clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01
Attribute Description
------------------------------------------------------------------------------------- ----------------
/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01::director-id The ID of the director where the
port is exported.
/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01::discovered-initiators List of all initiator-ports visible
from this port.
.
.
.
device attach-mirror
Attaches a mirror as a RAID 1 child to another (parent) device, and starts a rebuild to
synchronize the mirror.
Optional arguments
[-m|--mirror] context path or name - * Name or context path of the mirror to attach. Does
not need to be a top-level device. If the name of a device is used, ensure the device name
is not ambiguous. For example, ensure that the same device name is not used by local
devices in different clusters.
[-r|--rule-set] rule-set - Rule-set to apply to the distributed device that is created when a
mirror is added to a local device.
[-f|--force] - When --force is set, do not ask for confirmation when attaching a mirror. Allows
this command to be run using a non-interactive script.
If the --force argument is not used, prompts for confirmation in two circumstances when
the mirror is remote and the parent device must be transformed into a distributed device:
◆ The rule set that will be applied to the new distributed device potentially allows
conflicting detaches; and
◆ In VPLEX GEO systems, where the parent device already has a volume in a storage
view. The new distributed device will be synchronous. Applications using the volume
may experience greater I/O latency than on the original local device. If an application
is sensitive to this latency, it may experience data unavailability.
* - argument is positional.
This command can fail if there is not a sufficient number of meta volume slots. Refer to the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for instructions on
increasing the number of slots.
Note: If the RAID 1 device is added to a consistency group, the consistency group’s detach
rule overrides the device’s detach rule.
Example Attach a mirror without specifying a rule-set (allow VPLEX to select the rule-set):
VPlexcli:/clusters/cluster-1/devices> virtual-volume create test_r0c_1
VPlexcli:/clusters/cluster-1/devices> device attach-mirror --device test_r0c_1 --mirror
test_r0c_2
Distributed device 'regression_r0c_1' is using rule-set 'cluster-1-detaches'.
Attach a mirror:
VPlexcli:/> device attach-mirror --device /clusters/cluster-1/devices/site1device0 --mirror
/clusters/cluster-1/devices/site1mirror
device collapse
Collapses a one-legged device until a device with two or more children is reached.
Description If a RAID 1 device is left with only a single child (after removing other children), use the
device collapse command to collapse the remaining structure. For example:
If RAID 1 device “A” has two child RAID 1 devices “B” and “C”, and child device “C” is
removed, A is now a one-legged device, but with an extra layer of abstraction:
A
|
B
../ \..
Use device collapse to remove this extra layer, and change the structure into:
A
../ \..
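Example An illustrative sketch of collapsing the one-legged device shown above; the device name
is hypothetical and the --device argument is an assumption:
VPlexcli:/> device collapse --device /clusters/cluster-1/devices/A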
device detach-mirror
Removes (detaches) a mirror from a RAID-1 device.
Optional arguments
[-m|--mirror] context path or name - * Name or context path of the mirror to detach. Does
not have to be a top-level device. If the device name is used, verify that the name is
unique throughout the VPLEX, including local devices on other clusters.
[-s|--slot] slot number - Slot number of the mirror to be discarded. Applicable only when
the --discard argument is used.
[-i|--discard] - When specified, discards the detached mirror. The data is not discarded.
[-f|--force] - Force the mirror to be discarded. Must be used when the --discard argument is
used. If the --force argument is set when detaching an unhealthy mirror, the mirror is
discarded even if the --discard flag is not set.
* - argument is positional.
Description Use this command to detach a mirror leg from a RAID-1 device.
If the RAID device supports a virtual volume, and the --discard argument is not used:
◆ The mirror (child device) is removed from the RAID-1 parent device,
◆ The detached child becomes a top-level device
◆ A new virtual-volume is created on top of the new device. The name of the new device is
prefixed with the name of the original device.
◆ Data on the detached device is preserved.
If the RAID device supports a virtual volume, and the --discard argument is used:
◆ The mirror (child device) is removed from the RAID-1 parent device
◆ The detached child becomes a top-level device
◆ No new virtual volume is created
◆ The mirror is detached regardless of its current state. Data consistency is not
guaranteed.
◆ In GEO configurations, dirty cache data will not be flushed to the mirror at the time of
issuing the detach operation.
Figure 3 Devices and virtual volumes: after detach mirror - with discard
Example Identify and detach a dead mirror leg from a distributed device.
In the following example:
◆ The ll command in /distributed-storage/distributed-devices context displays a
stressed distributed device.
◆ The ll command in the
/distributed-storage/distributed-devices/dev_test_dead_leg_2_DD/distributed-device-components
context displays the components of the device.
VPlexcli:/distributed-storage/distributed-devices> ll
Name Status Operational Health Auto Rule WOF Transfer
----------------------------- ------- Status State Resume Set Group Size
----------------------------- ------- ----------- ------ ------ Name Name --------
----------------------------- ------- ----------- ------ ------ ----- ----- --------
ESX_stretched_device running ok ok true colin - 2M
bbv_temp_device running ok ok true colin - 2M
dd_source_device running ok ok true colin - 2M
ddt running ok ok true colin - 2M
dev_test_dead_leg_2_DD running ok ok - colin - 2M
windows_big_drive running ok ok true colin - 2M
.
.
.
device mirror-isolation auto-unisolation disable
Disables automatic unisolation of previously isolated mirrors.
Description Mirror isolation provides a mechanism to automatically unisolate mirrors that were
previously isolated. When mirror isolation feature is enabled, disabling mirror
auto-unisolation will prevent the system from automatically unisolating any isolated
mirrors whose underlying storage-volume’s performance is now in the acceptable range.
For the option to manually unisolate the mirror, follow the troubleshooting procedure for
VPLEX in the SolVe Desktop.
Example Shows the result when the command is executed on all clusters when mirror isolation is
disabled:
VPlexcli:/> device mirror-isolation auto-unisolation disable
Mirror isolation provides a mechanism to automatically unisolate mirrors that were previously
isolated. This operation will prevent the system from automatically unisolating the underlying
storage-volumes once their performance is in the acceptable range. You can manually unisolate
the mirror by following the troubleshooting procedure.
Example Shows the command executed with the --force option, when the mirror-isolation feature is
disabled:
VPlexcli:/> device mirror-isolation auto-unisolation disable -f
Mirror isolation is not enabled on clusters cluster-1,cluster-2.
Mirror isolation provides a mechanism to automatically unisolate mirrors that were previously
isolated. When mirror isolation is enabled, this operation will prevent the system from
automatically unisolating the underlying storage-volumes once their performance is in the
acceptable range. You can manually unisolate the mirror by following the troubleshooting
procedure.
Example Shows auto-unisolation was not disabled because the feature is not supported:
VPlexcli:/> device mirror-isolation auto-unisolation disable
Mirror isolation is not enabled on clusters cluster-1,cluster-2.
Mirror isolation provides a mechanism to automatically unisolate mirrors that were previously
isolated. When mirror isolation is enabled, this operation will prevent the system from
automatically unisolating the underlying storage-volumes once their performance is in the
acceptable range. You can manually unisolate the mirror by following the troubleshooting
procedure.
device mirror-isolation auto-unisolation enable
Enables automatic unisolation of previously isolated mirrors.
Example Shows auto-unisolation enabled when mirror isolation is disabled on both clusters:
VPlexcli:/> device mirror-isolation auto-unisolation enable
Mirror isolation is not enabled on clusters cluster-1,cluster-2.
Example Shows auto-unisolation enabled when mirror isolation is disabled on one of the clusters:
VPlexcli:/> device mirror-isolation auto-unisolation enable
Example Shows auto-unisolation enable operation failed as the feature is not supported:
VPlexcli:/> device mirror-isolation auto-unisolation enable
Mirror isolation is not enabled on clusters cluster-1,cluster-2.
Example Shows auto-unisolation enable operation failed because one cluster is not available:
VPlexcli:/> device mirror-isolation auto-unisolation enable
device mirror-isolation auto-unisolation enable: Evaluation of <<device mirror-isolation
auto-unisolation enable>> failed.
cause: Could not enable auto unisolation.
cause: Firmware command error.
cause: communication error recently.
Example Shows auto-unisolation enable operation failed because the meta volume is not ready:
VPlexcli:/> device mirror-isolation auto-unisolation enable
device mirror-isolation auto-unisolation enable: Evaluation of <<device mirror-isolation
auto-unisolation enable>> failed.
cause: Could not enable auto unisolation.
cause: Could not enable auto unisolation.
Firmware command error. The active metadata
device is not healthy enough to persist the change fully.
device mirror-isolation disable
Disables the mirror isolation feature on the specified clusters.
[-c|--clusters] context-path [, context-path...]
[-f|--force]
[-h|--help]
[--verbose]
Description A RAID-1 mirror leg built upon a poorly performing storage volume can bring down the
performance of the whole RAID-1 device and increase I/O latencies to the applications
using this device. VPLEX prevents I/O to such poorly performing mirror legs to improve
the RAID-1 performance. This behavior is known as mirror isolation.
When disabling the mirror isolation feature on one or more clusters, this command prints
a warning and asks for confirmation.
IMPORTANT
This command disables the mirror isolation feature and will prevent VPLEX from improving
the performance of a RAID 1 device containing a poorly performing mirror leg. This
command should only be used if redundancy is desired over RAID 1 performance
improvement.
Example Disable mirror isolation on all clusters without being prompted to confirm:
VPlexcli:/> device mirror-isolation disable -f
WARNING: Disabling the mirror isolation feature will prevent VPLEX from improving the
performance of a RAID-1 device containing a poorly performing mirror leg. This command should
be only used if redundancy is desired over RAID-1 performance improvement.
Disabling the mirror isolation feature will prevent VPLEX from improving the performance of a
RAID-1 device containing a poorly performing mirror leg. This command should be only used if
redundancy is desired over RAID-1 performance improvement.
Example Attempt to disable mirror-isolation on both clusters; the operation succeeds on cluster-1
but fails on cluster-2 because the feature is not supported:
VPlexcli:/> device mirror-isolation disable
Disabling the mirror isolation feature will prevent VPLEX from improving the performance of a
RAID-1 device containing a poorly performing mirror leg. This command should be only used if
redundancy is desired over RAID-1 performance improvement.
device mirror-isolation enable
Enables the mirror isolation feature on the specified clusters.
Description A RAID 1 mirror leg built upon a poorly performing storage volume can bring down the
performance of the whole RAID 1 device and increase I/O latencies to the applications
using this device. VPLEX prevents I/O to such poorly performing mirror legs to improve
the RAID 1 performance. This behavior is known as mirror isolation.
Note: This command enables the mirror isolation feature and should only be used if RAID 1
performance improvement is desired over redundancy.
Or
VPlexcli:/> device mirror-isolation enable -c *
Please be aware that auto unisolation is disabled. In order to manually enable this feature you
can use 'device mirror-isolation auto-unisolation enable'.
Example Attempt to enable mirror-isolation on both clusters; the operation succeeds on cluster-1
but fails on cluster-2 because the feature is not supported:
VPlexcli:/> device mirror-isolation enable
device mirror-isolation show
Displays the mirror isolation configuration for the specified clusters.
Description Used to display all the configuration parameters related to mirror isolation for the
specified clusters.
The current configuration parameters supported are:
Enabled - Indicates "true" if the feature is enabled, "false" if disabled, and "<not available>"
if the value could not be retrieved.
Auto Unisolation - Indicates "true" if the system will automatically unisolate an isolated mirror
when the underlying storage-volume’s performance is in the acceptable range, "false" if
manual unisolation is desired, and "<not available>" if the value could not be retrieved.
Isolation Interval - Indicates the isolation sweep interval in seconds if the value was retrieved
successfully, and "<not available>" if the value could not be retrieved.
Unisolation Interval - Indicates the unisolation sweep interval in seconds if the value was
retrieved successfully, and "<not available>" if the value could not be retrieved.
If a value for any configuration parameter cannot be retrieved for the cluster, it may be
because the feature is not supported or there was a command failure.
Or
VPlexcli:/> device mirror-isolation show -c *
Cluster Enabled Auto unisolation Isolation Interval Unisolation Interval
--------- ------- ---------------- ------------------ --------------------
cluster-1 true false 60 14400
cluster-2 true false 60 14400
device resume-link-down
Resumes I/O for devices on the winning island during a link outage.
[-a|--all-at-island] - Resume I/O on all devices on the chosen winning cluster and the
clusters with which it is communicating.
[-f|--force] - Force the I/O to resume.
Description Used when the inter-cluster link fails. Allows one or more suspended mirror legs to resume
I/O immediately.
For example, used when the peer cluster is the winning cluster but is known to have failed
completely.
Resumes I/O on the specified cluster and the clusters it is in communication with during a
link outage.
Detaches distributed devices from those clusters that are not in contact with the specified
cluster or detaches local devices from those clusters that are not in contact with the local
cluster.
The device resume-link-down command causes I/O to resume on the local cluster
regardless of any rule-sets applied to the device. Verify that rules and any manual
detaches do not result in conflicting detaches (cluster-1 detaching cluster-2, and
cluster-2 detaching cluster-1). Conflicting detaches will result in lost data on the losing
cluster, a full rebuild, and degraded access during the time of the full rebuild.
When the inter-cluster link fails in a VPLEX Metro or Geo configuration, distributed devices
are suspended at one or more clusters. When the rule-set timer expires, the affected
cluster is detached.
Alternatively, use the device resume-link-down command to detach the cluster
immediately without waiting for the rule-set timer to expire.
Only one cluster should be allowed to continue for each distributed device. Different
distributed devices can have different clusters continue.
Use the ll /distributed-storage/distributed-devices/device command to display the rule
set applied to the specified device.
Use the ll /distributed-storage/rule-sets/rule-set/rules command to display the detach
timer for the specified rule-set.
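Example An illustrative sketch of resuming I/O on all devices at a winning cluster; the --cluster
argument is an assumption (only --all-at-island and --force appear in the argument list
above) and the cluster name is hypothetical:
VPlexcli:/> device resume-link-down --cluster cluster-1 --all-at-island --force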
device resume-link-up
Resumes I/O on suspended top level devices, virtual volumes, or all virtual volumes in the
VPLEX.
Description Use this command after a failed link is restored, but I/O is suspended at one or more
clusters.
Usually applied to the mirror leg on the losing cluster when auto-resume is set to false.
During a WAN link outage, after the detach, the winning (primary) cluster resumes
operation on the distributed device.
If the auto-resume property of a remote or distributed device is set to false and the link
has come back up, use the device resume-link-up command to manually resume the
second cluster.
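Example An illustrative sketch of resuming I/O on all suspended virtual volumes after the link is
restored; the --all-virtual-volumes and --force arguments are assumptions based on the
summary above:
VPlexcli:/> device resume-link-up --all-virtual-volumes --force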
director appcon
Runs the application console on Linux systems.
For /engines/engine-2-1/directors/Cluster_2_Dir_1B:
Planned target.
For /engines/engine-2-1/directors/Cluster_2_Dir_1A:
Planned target.
For /engines/engine-2-2/directors/Cluster_2_Dir_2A:
.
.
.
director appdump
Downloads an application dump from one or more boards.
Including core images can make this command take a very long time.
[-z|--no-zip] - Turns off the packaging of dump files into a compressed zip file.
[-p|--no-progress] - Turns off progress reporting.
[-t|--targets] target-glob, target-glob... - List of one or more glob patterns. Operates on the
specified targets. Globs may be a full path glob or a name pattern. If only a name pattern is
supplied, the command finds allowed targets whose names match. Entries must be
separated by commas.
Omit this argument if the current context is at or below the target context.
--timeout seconds - Sets the command timeout. Timeout occurs after the specified
number of seconds multiplied by the number of targets found.
Default: 180 seconds per target.
0: No timeout.
--show-plan-only - Shows the targets that will be affected, but the actual operation is not
performed. Recommended when the --targets argument is used.
Description Used by automated scripts and by EMC Customer Support to help troubleshoot problems.
The hardware name and a timestamp are embedded in the dump filename. By default, the
name of the dump file is:
hardware name-YYYY.MM.DD-hh.mm.ss.zip.
For /engines/engine-2-1/directors/dirB:
Planned target.
For /engines/engine-2-1/directors/dirA:
Planned target.
For /engines/engine-1-1/directors/DirA:
Planned target.
For /engines/engine-1-1/directors/DirB:
Planned target.
director appstatus
Displays the status of the application on one or more boards.
Description Used by automated scripts and by EMC Customer Support to help troubleshoot problems.
For /engines/engine-1-1/directors/Cluster_1_Dir1B:
00601610672e201522-2 running -
For /engines/engine-1-1/directors/Cluster_1_Dir1A:
director commission
Starts the director’s participation in the cluster.
Optional arguments
[-f|--force] - Commission the director regardless of firmware version mismatch.
--timeout seconds - The maximum time to wait for --apply-cluster-settings operations to
complete, in seconds.
Default: 60 seconds.
0: No timeout.
[-a|--apply-cluster-settings] - Add this director to a running cluster and apply any
cluster-specific settings. Use this argument when adding or replacing a director in an
existing VPLEX.
* - argument is positional.
Example Add a director to a running cluster using the default timeout (60 seconds):
VPlexcli:/> director commission --director Cluster_1_Dir1A --apply-cluster-settings
director decommission
Decommissions a director. The director stops participating in cluster activities.
Description This command removes the director from participating in the VPLEX, and initializes it to
only a partial operational state. The director is no longer a replication target and its
front-end ports are disabled.
Then it reboots the director.
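Example An illustrative sketch; the --director argument is an assumption by analogy with the
director commission command, and the director name is hypothetical:
VPlexcli:/> director decommission --director Cluster_1_Dir1A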
director fc-port-stats
Displays/resets Fibre Channel port statistics for a specific director.
Optional arguments
[-o|--role] role - Filter the ports included in the reply by their role. If no role is specified, all
ports at the director are included. This argument is ignored if --reset is specified. Roles
include:
• back-end - The port is used to access storage devices that the system itself does
I/O to.
• front-end - The port is used to make storage available to hosts.
Description Displays statistics generated by the Tachyon driver for Fibre Channel ports at the specified
director, optionally filtered by the specified role, or resets those statistics.
Run this command from the /engines/engine/directors/director context to display the
Fibre Channel statistics for the director in the current context.
Example Display a director’s Fibre Channel port statistics from the root context:
VPlexcli:/> director fc-port-stats -d director-2-1-A
Example Reset the port statistics counters on a director’s Fibre Channel ports from the root context:
VPlexcli:/> director fc-port-stats -d director-2-1-A --reset
Example Display a director’s Fibre Channel port statistics from the director context:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> fc-port-stats
Requests:
- Accepted: 0 0 0 0 7437 7437
- Rejected: 0 0 0 0 0 0
- Started: 0 0 0 0 7437 7437
- Completed: 0 0 0 0 7437 7437
- Timed-out: 0 0 0 0 0 0
Tasks:
- Received: 0 0 0 0 7437 7437
- Accepted: 0 0 0 0 7437 7437
- Rejected: 0 0 0 0 0 0
- Started: 0 0 0 0 7437 7437
- Completed: 0 0 0 0 7437 7437
- Dropped: 0 0 0 0 0 0
Field Description
Frames: frames discarded, expired, or with CRC or encoding errors on this port since the counter was last reset.
Accepted scsi requests accepted. Requests that have not been rejected.
Rejected scsi requests rejected. Requests can be rejected due to: Not an initiator, Port not
ready, no memory for allocation.
Timed-out Number of requests timed out. Requests have been sent but not responded to
within 10 seconds.
Rejected Tasks rejected due to: 1. connection not ready 2. IO was aborted by TSDK 3. Tried to
send more data than the initiator requested. 4. Unable to allocate enough SGL
entries.
Dropped Tasks dropped due to: 1. Connection not ready. 2. No available task buffers to
handle the incoming task.
director firmware show-banks
Shows the status of the firmware banks for one or more directors.
Description Show firmware status and version for one or more directors.
[Director Cluster_1_Dir1A]:
Field Description
Status active - The software in this bank is currently operating on the director.
inactive - The software in this bank is not operating on the director.
Marked for next reboot no - The software in this bank will not be used the next time the director reboots.
yes - The software in this bank will be used the next time the director reboots.
director forget
Removes a director from the VPLEX.
Description Removes the specified director from the context tree. Deletes all information associated
with the director.
director passwd
Changes the access password for the specified director.
director ping
Displays the round-trip latency from a given director to the target machine, excluding any
VPLEX overhead.
Optional arguments
[-n|--director] director - The director from which to perform the operation.
Range: 1 - 2147483647
Default: 5.
Description ICMP traffic must be permitted between clusters for this command to work properly.
Notes To verify that ICMP is enabled, log in to the shell on the management server and use the
ping IP address command where the IP address is for a director in the VPLEX.
If ICMP is enabled on the specified director, a series of lines is displayed:
service@ManagementServer:~> ping 128.221.252.36
PING 128.221.252.36 (128.221.252.36) 56(84) bytes of data.
64 bytes from 128.221.252.36: icmp_seq=1 ttl=63 time=0.638 ms
64 bytes from 128.221.252.36: icmp_seq=2 ttl=63 time=0.591 ms
64 bytes from 128.221.252.36: icmp_seq=3 ttl=63 time=0.495 ms
64 bytes from 128.221.252.36: icmp_seq=4 ttl=63 time=0.401 ms
64 bytes from 128.221.252.36: icmp_seq=5 ttl=63 time=0.552 ms
director shutdown
Starts the orderly shutdown of a director’s firmware.
Optional arguments
[-n|--director] context path - * Director to shut down.
* - argument is positional.
Note: Does not shut down the operating system on the director.
Status Description
-------- -----------------
Started. Shutdown started.
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> ll
Attributes:
Name Value
------------------------------ ------------------
auto-boot true
auto-restart true
.
.
.
marker-led off
operational-status stopped
.
.
.
director tracepath
Displays the route taken by packets from a specified director to the target machine.
Optional arguments
[-n|--director] - The name of the director from which to perform the operation. Can be either
the director's name (for example director-1-1-A) or an IP address.
Description Displays the hops, latency, and MTU along the route from the specified director to the
target at the specified IP address.
The number of hops does not always correlate to the number of switches along the route.
For example, a switch with a firewall on each side is counted as two hops.
The reported latency at each hop is the round-trip latency from the source hop.
The MTU reported at each hop is limited by the MTU of previous hops and therefore not
necessarily the configured MTU at that hop.
In the following illustration, a director in cluster-2 (director-2-1-A) is pinged from a director
in cluster-1 (director-1-1-A) in single engine clusters. The ping travels through both
management servers before reaching its target:
If the target machine does not respond properly, the traceroute may stall. Run this
command multiple times.
VPlexcli:/> ll /engines/*/directors
/engines/engine-1-1/directors:
Name Director ID Cluster Commissioned Operational Communication
-------------- ------------------ ID ------------ Status Status
-------------- ------------------ ------- ------------ ----------- -------------
director-1-1-A 0x0000000044603198 1 true ok ok
director-1-1-B 0x0000000044703198 1 true ok ok
/engines/engine-2-1/directors:
Name Director ID Cluster Commissioned Operational Communication
-------------- ------------------ ID ------------ Status Status
-------------- ------------------ ------- ------------ ----------- -------------
director-2-1-A 0x00000000446031b1 2 true ok ok
director-2-1-B 0x00000000447031b1 2 true ok ok
Display the route from the specified director to the specified address using the director’s
IP address:
VPlexcli:/> director tracepath 10.6.211.91 -i 10.12.136.12
director uptime
Prints the uptime information for all connected directors.
Description Uptime measures the time a machine has been up without any downtime.
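Example A minimal invocation (the command is assumed to take no required arguments and
reports on all connected directors):
VPlexcli:/> director uptime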
dirs
Displays the current context stack.
Syntax dirs
Description The stack is displayed from top to bottom, in left to right order.
VPlexcli:/> cd /engines/engine-1-1/
VPlexcli:/engines/engine-1-1> dirs
[/engines/engine-1-1]
VPlexcli:/engines/engine-1-1> cd /directors/
VPlexcli:/engines/engine-1-1/directors> dirs
[/engines/engine-1-1/directors]
disconnect
Disconnects one or more connected directors.
Syntax disconnect
[-n|--directors] context path,context path...
Description Stops communication from the client to the remote directors and frees up all resources
that are associated with the connections.
Removes the entry in the connections file for the specified director(s).
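Example An illustrative sketch of disconnecting a single director using the --directors argument
from the syntax above; the director path is hypothetical:
VPlexcli:/> disconnect --directors /engines/engine-1-1/directors/director-1-1-A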
dm migration cancel
Cancels an existing data migration.
Optional arguments
[-f|--force] - Forces the cancellation of the specified migration(s).
* - argument is positional.
Description Use the dm migration cancel --force --migrations context-path command to cancel a
migration.
Specify the migration by name if that name is unique in the global namespace. Otherwise,
specify a full context path.
Migrations can be canceled in the following circumstances:
◆ The migration is in progress or paused. The migration is stopped, and any resources it
was using are freed.
◆ The migration has not been committed. The source and target devices or extents are
returned to their pre-migration state.
A migration cannot be canceled if it has been committed.
To remove the migration record from the context tree, see the dm migration remove
command.
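Example An illustrative sketch using the argument form shown in the description above; the
migration name is hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration cancel --force --migrations migrate_012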
dm migration clean
Cleans a committed data migration.
Optional arguments
[-f|--force] - Forces the clean operation on the specified migration(s).
[-e|--rename-target] - For device migrations only, renames the target device after the source
device. If the target device is renamed, the virtual volume on top of it is also renamed if
the virtual volume has a system-assigned default name.
* - argument is positional.
Description For device migrations, cleaning dismantles the source device(s) down to its storage
volumes. The storage volumes no longer in use are unclaimed.
For device migrations only, use the --rename-target argument to rename the target device
after the source device. If the target device is renamed, the virtual volume on top of it is
also renamed if the virtual volume has a system-assigned default name.
Without renaming, the target devices retain their target names, which can make the
relationship between volume and device less evident.
For extent migrations, cleaning destroys the source extent and unclaims the underlying
storage-volume if there are no extents on it.
Example
VPlexcli:/data-migrations/device-migrations> dm migration clean --force --migrations
migrate_012
dm migration commit
Commits a completed data migration allowing for its later removal.
Description The migration process inserts a temporary RAID 1 structure above the source
device/extent with the target device/extent as an out-of-date leg of the RAID 1. The
migration can be understood as the synchronization of the out-of-date leg (the target).
After the migration is complete, the commit step detaches the source leg of the RAID 1 and
removes the RAID 1.
The virtual volume, device or extent is identical to the one before the migration except that
the source device/extent is replaced with the target device/extent.
A migration must be committed in order to be cleaned.
Verify that the migration has completed successfully before committing the migration.
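Example An illustrative sketch; the --force and --migrations arguments are assumptions by
analogy with the dm migration cancel and clean commands, and the migration name is
hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration commit --force --migrations migrate_012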
dm migration pause
Pauses the specified in-progress or queued data migrations.
Description Active migrations (a migration that has been started) can be paused and then resumed at
a later time.
Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
Specify the migration by name if that name is unique in the global namespace. Otherwise,
specify a full pathname.
Use the dm migration resume command to resume a paused migration.
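Example An illustrative sketch; the --migrations argument is an assumption by analogy with the
other dm migration commands, and the migration name is hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration pause --migrations migrate_012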
dm migration remove
Removes the record of canceled or committed data migrations.
Description Before a migration record can be removed, it must be canceled or committed to release
the resources allocated to the migration.
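Example An illustrative sketch; the --force and --migrations arguments are assumptions by
analogy with the other dm migration commands, and the migration name is hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration remove --force --migrations migrate_012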
dm migration resume
Resumes a previously paused data migration.
Description Active migrations (a migration that has been started) can be paused and then resumed at
a later time.
Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
Use the dm migration resume command to resume a paused migration.
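Example An illustrative sketch; the --migrations argument is an assumption by analogy with the
other dm migration commands, and the migration name is hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration resume --migrations migrate_012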
See also:
◆ dm migration commit
◆ dm migration pause
◆ dm migration remove
◆ dm migration start
dm migration start
Starts the specified migration.
Optional arguments
[-s|--transfer-size] value - Maximum number of bytes to transfer per operation per device. A
bigger transfer size means smaller space available for host I/O. Must be a multiple of 4 K.
Range: 40 KB - 128 M. Default: 128 K.
If the host I/O activity is very high, setting a large transfer size may impact host I/O. See
About transfer-size in the batch-migrate start command.
--force - Do not ask for confirmation. Allows this command to be run using a
non-interactive script.
--paused - Starts the migration in a paused state.
* - argument is positional.
Description Starts the specified migration. If the target is larger than the source, the extra space on the
target is unusable after the migration, and a prompt to confirm the migration is displayed.
Cross-cluster data migration on volumes in asynchronous consistency groups is not
recommended if they are actively servicing IOs, as the host application will be subject to
high latency. Schedule this activity as a maintenance activity to avoid data unavailability.
Extent migrations - Extents are ranges of 4K byte blocks on a single LUN presented from a
single back-end array. Extent migrations move data between extents in the same cluster.
Use extent migration to:
◆ Move extents from a “hot” storage volume shared by other busy extents,
◆ De-fragment a storage volume to create more contiguous free space,
◆ Support technology refreshes.
Start and manage extent migrations from the extent migration context:
VPlexcli:/> cd /data-migrations/extent-migrations/
VPlexcli:/data-migrations/extent-migrations>
IMPORTANT
Extent migrations are blocked if the associated virtual volume is undergoing expansion.
Refer to the virtual-volume expand command.
Device migrations - Devices are RAID 0, RAID 1, or RAID C built on extents or other devices.
Devices can be nested; a distributed RAID 1 can be configured on top of two local RAID 0
devices. Device migrations move data between devices on the same cluster or between
devices on different clusters. Use device migration to:
◆ Migrate data between dissimilar arrays
◆ Relocate a hot volume to a faster array
This command can fail on a cross-cluster migration if there is not a sufficient number of
meta volume slots. See the troubleshooting section of the VPLEX procedures in the SolVe
Desktop for a resolution to this problem.
Start and manage device migrations from the device migration context:
VPlexcli:/> cd /data-migrations/device-migrations/
VPlexcli:/data-migrations/device-migrations>
When running the dm migration start command across clusters, you might receive the
following error message:
See the troubleshooting section of the VPLEX procedures in the SolVe Desktop for
instructions on increasing the number of slots.
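Example An illustrative sketch of starting a device migration; the --name, --from, and --to
arguments are assumptions (only --transfer-size, --force, and --paused appear in the
argument list above), and the migration name and device paths are hypothetical:
VPlexcli:/data-migrations/device-migrations> dm migration start --name migrate_012 --from
/clusters/cluster-1/devices/source_dev --to /clusters/cluster-2/devices/target_dev --transfer-size 12M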
drill-down
Displays the components of a view, virtual volume or device, down to the storage-volume
context.
Syntax drill-down
[-v|--storage-view] context path,context path...
[-o|--virtual-volume] context path,context path...
[-r|--device] context path,context path...
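Example An illustrative sketch of drilling down into a storage view using the --storage-view
argument from the syntax above; the view path is hypothetical. The fragment that follows
shows the tail of typical output:
VPlexcli:/> drill-down --storage-view /clusters/cluster-1/exports/storage-views/exchange_view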
tport: P000000003CB000E6-B0-FC00
tport: P000000003CA001CB-A1-FC00
tport: P000000003CA000E6-A1-FC0
ds dd create
Creates a new distributed-device.
Syntax ds dd create
[-n|--name] name
[-d|--devices] context path [,context path,...]
[-l|--logging-volumes] context path [,context path,...]
[-r|--rule-set] rule-set
[-s|--source-leg] context path
[-f|--force]
Optional arguments
[-r|--rule-set] rule-set - The rule-set to apply to the new distributed device. If the --rule-set
argument is omitted, the cluster that is local to the management server is assumed to be
the winner in the event of an inter-cluster link failure.
[-s|--source-leg] context path - Specifies one of the local devices to use as the source data
image for the new device. The command will copy data from the source-leg to the other
legs of the new device.
[-f|--force] - Forces a rule-set with a potential conflict to be applied to the new distributed
device.
Description The new distributed device consists of two legs: one local device at each cluster.
A device created by this command does not initialize its legs, or synchronize the contents
of the legs. Because of this, consecutive reads of the same block may return different
results for blocks that have never been written. Host reads at different clusters are almost
certain to return different results for the same unwritten block, unless the legs already
contain the same data.
Use this command only if the resulting device will be initialized using tools on the host.
Do not use this command if one leg of the resulting device contains data that must be
preserved. Applications using the device may corrupt the pre-existing data.
To create a device when one leg of the device contains data that must be preserved, use
the device attach-mirror command to add a mirror to the leg. The data on the leg will be
copied automatically to the new mirror.
The individual local devices may include any underlying type of storage volume or
geometry (RAID 0, RAID 1, or RAID C), but they should be the same capacity.
If a distributed device is configured with local devices of different capacities:
◆ The resulting distributed device will be only as large as the smaller local device
◆ The leftover capacity on the larger device will not be available
To create a distributed device without wasting capacity, choose local devices on each
cluster with the same capacity.
The geometry of the new device is automatically 'RAID 1'.
Each cluster in the VPLEX may contribute a maximum of one component device to the new
distributed device.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to
this problem.
If there is pre-existing data on a storage-volume, and the storage-volume is not claimed
as being application consistent, converting an existing local RAID device to a distributed
RAID using the ds dd create command will not initiate a rebuild to copy the data to the
other leg. Data will exist at only one cluster. To prevent this, do one of the following:
1) Claim the disk with data using the application consistent flag, or
2) Create a single-legged RAID 1 or RAID 0 and add a leg using the device attach-mirror
command.
Use the set command to enable/disable automatic rebuilds on the distributed device. The
rebuild setting is immediately applied to the device.
◆ set rebuild-allowed true starts or resumes a rebuild if mirror legs are out of sync.
◆ set rebuild-allowed false stops any rebuild in progress.
Example In the following example, the ds dd create command creates a new distributed device with
the following attributes:
◆ Name: ExchangeDD
◆ Devices:
• /clusters/cluster-2/devices/s6_exchange
• /clusters/cluster-1/devices/s8_exchange
◆ Logging volumes:
• /clusters/cluster-1/system-volumes/cluster_1_loggingvol
• /clusters/cluster-2/system-volumes/cluster_2_loggingvol
◆ Rule-set: rule-set-7a
VPlexcli:/distributed-storage/distributed-devices> ds dd create --name ExchangeDD --devices
/clusters/cluster-2/devices/s6_exchange,/clusters/cluster-1/devices/s8_exchange
--logging-volumes
/clusters/cluster-1/system-volumes/cluster_1_loggingvol,/clusters/cluster-2/system-volumes/cl
uster_2_loggingvol --rule-set rule-set-7a
In the following example, the ds dd create command creates a distributed device using the
default rule-set:
VPlexcli:/> ds dd create --name TestDisDevice --devices
/clusters/cluster-1/devices/TestDevCluster1, /clusters/cluster-2/devices/TestDevCluster2
ds dd declare-winner
Declares a winning cluster for a distributed-device that is in conflict after a link outage.
Syntax ds dd declare-winner
[-c|--cluster] context path
[-d|--distributed-device] context path
[-f|--force]
Description If the legs at two or more clusters are in conflict, use the ds dd declare-winner command to
declare a winning cluster for a specified distributed device.
Example
VPlexcli:/distributed-storage/distributed-devices> ds dd declare-winner --distributed-device
DDtest_4 --cluster cluster-2 --force
ds dd destroy
Destroys the specified distributed-device(s).
Syntax ds dd destroy
[-d|--distributed-devices] context path,context path...
[-f|--force]
Description In order to be destroyed, the target distributed device must not host virtual volumes.
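Example A minimal sketch, assuming the distributed-device name TestDisDevice from the
surrounding examples; the context listing below is the kind of confirmation the command prints
before it prompts to proceed:
VPlexcli:/distributed-storage/distributed-devices> ds dd destroy --distributed-devices
TestDisDevice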
Context
------------------------------------------------------
/distributed-storage/distributed-devices/TestDisDevice
ds dd remove-all-rules
Removes all rules from all distributed devices.
Syntax ds dd remove-all-rules
[-f|--force]
Description From any context, removes all rules from all distributed devices.
There is NO undo for this procedure.
Example
VPlexcli:/distributed-storage/distributed-devices/dd_23> remove-all-rules
All the rules in distributed-devices in the system will be removed. Continue? (Yes/No) yes
ds dd set-log
Allocates/unallocates segments of a logging volume to a distributed device or a
component of a distributed device.
Syntax ds dd set-log
[-d|--distributed-devices] context path,context path...
[-c|--distributed-device-component] context path
[-l|--logging-volumes] context path,context path...
[-n|--cancel]
Optional arguments
[-n|--cancel] - Cancel/unallocate the log setting for the specified component of a
distributed-device or all the components of the specified distributed-device.
Use the --cancel argument very carefully.
A warning message is issued if attempting to cancel logging volumes on a distributed
device.
Removing the logging-volume for a device deletes the existing logging entries for that
device. A FULL rebuild of the device will occur after a link failure and recovery.
Removing the logging volume for all distributed devices will remove all entries from the
logging volume. In the event of a link failure and recovery, this results in a FULL rebuild of
all distributed devices.
Description Logging volumes keep track of 4 K byte blocks written during an inter-cluster link failure.
When the link recovers, VPLEX uses the information in logging volumes to synchronize the
mirrors.
If no logging volume is allocated to a distributed device, a full rebuild of the device will
occur when the inter-cluster link is restored after an outage.
Do not change a device’s logging volume unless the existing logging-volume is corrupted
or unreachable or to move the logging volume to a new disk.
Use the ds dd set-log command only to repair a corrupted logging volume or to transfer
logging to a new disk.
Use the --distributed-devices argument to allocate/unallocate segments on the specified
logging volume to the specified device.
Use the --distributed-device-component argument to allocate/unallocate segments on
the specified logging volume to the specified device component.
Note: Specify either distributed devices or distributed device components. Do not mix
devices and components in the same command.
If the logging volume specified by the --logging-volume argument does not exist, it is
created.
Use the --cancel argument to delete the log setting for a specified device or device
component.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting procedures for VPLEX in the SolVe Desktop for a resolution to this
problem.
Example
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> ds dd set-log
--distributed-devices TestDisDevice --logging-volumes
/clusters/cluster-2/system-volumes/New-Log_Vol
Example Attempt to cancel a logging volume for a distributed device that is not fully logged:
Issuing the cancel command on a distributed device that is not fully logged will result in a
warning message.
VPlexcli:/distributed-storage/distributed-devices/dr1_C12_0249> ds dd set-log
--distributed-devices dr1_C12_0249 --cancel
WARNING: This command will remove the logging segments from distributed device 'dr1_C12_0249'.
If a distributed device is not fully logged, it is vulnerable to full rebuilds following
inter-cluster WAN link failure or cluster failure.
It is recommended that the removed logging-segments be restored as soon as possible.
ds rule destroy
Destroys an existing rule.
Description A rule-set contains rules. Use the ll command in the rule-set context to display the rules in
the rule-set.
Example Use the ds rule destroy command to destroy a rule in the rule set.
VPlexcli:/distributed-storage/rule-sets/ruleset_recreate5/rules> ll
RuleName RuleType Clusters ClusterCount Delay Relevant
-------- ----------------- --------- ------------ ----- --------
rule_1 island-containing cluster-1 2 10s true
◆ ds rule-set destroy
◆ ds rule-set what-if
ds rule island-containing
Adds an island-containing rule to an existing rule-set.
* - argument is positional.
Description Describes when to resume I/O on all clusters in the island containing the specified cluster.
Example In the following example, the rule island-containing command creates a rule that dictates:
◆ VPLEX waits for 10 seconds after a link failure and then:
◆ Resumes I/O to the island containing cluster-1,
◆ Detaches any other islands.
VPlexcli:/distributed-storage/rule-sets/TestRuleSet/rules> ds rule island-containing
--clusters cluster-1 --delay 10s
VPlexcli:/distributed-storage/rule-sets/TestRuleSet/rules> ll
RuleName RuleType Clusters ClusterCount Delay Relevant
-------- ----------------- --------- ------------ ----- --------
rule_1 island-containing cluster-1 - 10s true
ds rule-set copy
Copies an existing rule-set.
Description Copies an existing rule-set and assigns the specified name to the copy.
Example
VPlexcli:/distributed-storage/rule-sets> ll
Name PotentialConflict UsedBy
------------------ ----------------- ----------------------------------------
TestRuleSet false
ds rule-set create
Creates a new rule-set with the given name and encompassing clusters.
ds rule-set destroy
Destroys an existing rule-set.
Description Deletes the specified rule-set. The specified rule-set can be empty or can contain rules.
Before deleting a rule-set, use the set command to detach the rule-set from any virtual
volumes associated with the rule-set.
Attributes:
Name Value
------------------ ------------------------
key ruleset_5537985253109250
potential-conflict false
used-by dd_00
VPlexcli:/distributed-storage/rule-sets/TestRuleSet> cd
//distributed-storage/distributed-devices/dd_00
ds rule-set what-if
Tests if/when I/O is resumed at individual clusters, according to the current rule-set.
Description Only two clusters and one island are supported for the current release.
◆ ds rule island-containing
◆ ds rule-set copy
◆ ds rule-set create
◆ ds rule-set destroy
ds summary
Displays summary information about distributed-devices.
Syntax ds summary
Example
Cluster summary:
Cluster cluster-2 : 25 distributed devices.
Cluster cluster-1 : 25 distributed devices.
Capacity summary:
0 devices have some free capacity.
0B free capacity of 500G total capacity.
Example Display summary information when one or more devices are unhealthy:
VPlexcli:/> ds summary
Cluster summary:
Cluster cluster-2 : 25 distributed devices.
Cluster cluster-1 : 25 distributed devices.
Capacity summary:
0 devices have some free capacity.
0B free capacity of 500G total capacity.
Example Use the --verbose argument to display detailed information about unhealthy volumes in
each consistency group:
VPlexcli:/> ds summary --verbose
Cluster summary:
Cluster cluster-2 : 25 distributed devices.
Cluster cluster-1 : 25 distributed devices.
Capacity summary:
0 devices have some free capacity.
0B free capacity of 500G total capacity.
Field Description
Health State major failure - One or more children of the distributed device is out-of-date and will
never rebuild, possibly because they are dead or unavailable.
minor failure - Either one or more children of the distributed device is out-of-date and
will rebuild, or the Logging Volume for the distributed device is unhealthy.
non-recoverable error - VPLEX cannot determine the distributed device's Health state.
ok - The distributed device is functioning normally.
unknown - VPLEX cannot determine the device's health state, or the state is invalid.
Operational Status degraded - The distributed device may have one or more out-of-date children that will
eventually rebuild.
error - One or more components of the distributed device is hardware-dead.
ok - The distributed device is functioning normally.
starting - The distributed device is not yet ready.
stressed - One or more children of the distributed device is out-of-date and will never
rebuild.
unknown - VPLEX cannot determine the distributed device's Operational state, or the
state is invalid.
Service Status cluster unreachable - VPLEX cannot reach the cluster; the status is unknown.
need resume - The other cluster detached the distributed device while it was
unreachable. The distributed device needs to be manually resumed for I/O to resume at
this cluster.
need winner - All clusters are reachable again, but both clusters had detached this
distributed device and resumed I/O. You must pick a winner cluster whose data will
overwrite the other cluster's data for this distributed device.
potential conflict - The clusters have detached each other resulting in a potential for
detach conflict.
running - The distributed device is accepting I/O.
suspended - The distributed device is not accepting new I/O; pending I/O requests are
frozen.
winner-running - This cluster detached the distributed device while the other cluster was
unreachable, and is now sending I/O to the device.
Capacity Summary Number of devices with free capacity, amount of free capacity for the cluster, and total
capacity for all clusters.
Field Description
CG Name Name of the consistency group of which the unhealthy device is a member.
Operational Status Current status for this consistency group with respect to each cluster on which it is
visible.
ok - I/O can be serviced on the volumes in the consistency group.
suspended - I/O is suspended for the volumes in the consistency group. The reasons
are described in the “operational status: details”.
degraded - I/O is continuing, but there are other problems described in “operational
status: details”.
unknown - The status is unknown, likely because of lost management connectivity.
Status Details If operational status is “ok” this field is empty: “[ ]”. Otherwise, it displays additional
information, which may be any of the following:
temporarily-synchronous - This asynchronous consistency group has temporarily
switched to 'synchronous' cache mode because of a director failure.
requires-resolve-conflicting-detach - After the inter-cluster link is restored, two clusters
have discovered that they have detached one another and resumed I/O independently.
The clusters are continuing to service I/O on their independent versions of the data. The
consistency-group resolve-conflicting-detach command must be used to make the view
of data consistent again at the clusters.
rebuilding-across-clusters - One or more distributed member volumes is being rebuilt.
At least one volume in the group is out of date at that cluster and is re-syncing. If the link
goes out at this time the entire group is suspended. Use the rebuild status command to
display which volume is out of date at which cluster.
rebuilding-within-cluster - One or more local rebuilds is in progress at this cluster.
data-safe-failure - A single director has failed. The volumes are still crash-consistent,
and will remain so, unless a second failure occurs before the first is recovered.
requires-resume-after-data-loss-failure - There have been at least two concurrent
failures, and data has been lost. For example, a director fails shortly after the
inter-cluster link fails, or when two directors fail at almost the same time. Use the
consistency-group resume-after-data-loss-failure command to select a winning cluster
and allow I/O to resume.
cluster-departure - Not all the visible clusters are in communication.
requires-resume-after-rollback - A cluster has detached its peer cluster and rolled back
the view of data, but is awaiting the consistency-group resume-after-rollback command
before resuming I/O. Displayed:
• For an asynchronous consistency group where both clusters have been writing, that
is where both clusters are active.
• At the winning side when a detach rule fires, or shortly after the consistency-group
choose-winner command picks a winning cluster.
requires-resume-at-loser - Displayed on the losing side when the inter-cluster link heals
after an outage. After the inter-cluster link is restored, the losing cluster discovers that
its peer was declared the winner and resumed I/O. Use the consistency-group
resume-at-loser command to make the view of data consistent with the winner, and to
resume I/O at the loser.
restore-link-or-choose-winner - I/O is suspended at all clusters because of a cluster
departure, and cannot automatically resume. This can happen if:
• There is no detach-rule,
• The detach-rule is 'no-automatic-winner', or
• The detach-rule cannot fire because its conditions are not met.
For example, if more than one cluster is active at the time of an inter-cluster link outage,
the 'active-cluster-wins' rule cannot take effect. When this detail is present, I/O will not
resume until either the inter-cluster link is restored, or the user intervenes to select a
winning cluster with the consistency-group choose-winner command.
unhealthy-devices - I/O has stopped in this consistency group because one or more
volumes is unhealthy and cannot perform I/O.
will-rollback-on-link-down - If there were a link-down now, the winning cluster would
have to roll back the view of data in order to resume I/O.
◆ virtual-volume provision
event-test
Verifies that the management server can receive events from a director.
Syntax event-test
[-n|--directors] context path,context path...
[-c|--clusters] context path,context path...
[-l|--level] {emergency|alert|critical|error|warning|notice|info|debug}
[-o|--component] component
[-m|--message] “message”
Description Tests the logging path from one or more directors to the management server.
Every component in the software that runs on the director logs messages to signify
important events. Each logged message/event is transferred from the director to the
management server and into the firmware log file.
Use this command to verify that this logging path is working. Specify the level of event to be
generated. Optionally, specify the text to appear in the component portion of the test
message.
Check the appropriate firmware log for the event created by this command.
◆ The event test command creates an alert event for the specified director.
◆ The exit command exits the CLI.
◆ The tail command displays the firmware log for the director.
VPlexcli:/> event-test --director director-2-1-A --level alert --message "Test Alert"
VPlexcli:/> exit
Connection closed by foreign host.
exec
Executes an external program.
Note: The correct syntax for program names and arguments depends on the host system.
exit
Exits the shell.
Syntax exit
[-e|--exit-code] exit-code
[-s|--shutdown]
Description If the shell is not embedded in another application, the shell process will stop.
[-w|--wait] - The maximum number of seconds to wait for a response from the fabric
discovery.
Default: 10.
Range: 1- 3600.
[-c|--cluster] context path - Discover initiator ports on the specified cluster.
Description Initiator discovery finds unregistered initiator-ports on the front-end fabric and determines
the associations between the initiator ports and the target ports.
Use the ll command in the initiator-ports context to display the same information for small
configurations (where a timeout does not occur).
Use the export initiator-port discovery command for large configurations in which the ls
command may encounter timeout limits.
* - argument is positional.
Optional arguments
[-c|--cluster] context path - Cluster on which the initiator port is registered.
[-t|--type] {hpux|sun-vcs|aix|recoverpoint|default} - Type of initiator port. If no type is
specified, the default value is used.
hpux - Hewlett Packard UX
sun-vcs - Sun Solaris
aix - IBM AIX
recoverpoint - EMC RecoverPoint
default - If no type is specified.
Description Registers an initiator-port and associates it with a SCSI address. For Fibre Channel, the
SCSI address is represented by a WWN pair. Use the ll command in
/engines/engine/directors/director/hardware /ports/port context to display portWWNs
and nodeWWNs.
◆ set
Optional arguments
[-c|--cluster] cluster context - * The cluster at which to create the view.
[-p|--ports] port,port... - List of port names. If omitted, all ports at the cluster will be used.
Entries must be separated by commas.
* - argument is positional.
Description Reads host port WWNs (with optional node WWNs) and names from a host declaration file.
Creates a view, registering each port WWN /name pair as an initiator port in that view.
The host description file contains one line for each port on the host in the following
format:
<port WWN> [|<node WWN>] <port name>
Hosts must be registered in order to be exported (added to a storage view). Registering
consists of naming the initiator and listing the WWN/GUID of its ports.
Each port of a server’s HBA/HCA must be registered as a separate initiator.
Description Displays a list of target port logins for the specified initiator ports.
Example Shows target port logins for all the initiator ports in VPLEX:
VPlexcli:/> export initiator-port show-logins *
Example Shows target port logins for initiator ports 11 and 22:
VPlexcli:/> export initiator-port show-logins -i initiator_11,initiator_22
Optional arguments
[-f|--force] - Destroys the initiator-ports even if they are in use.
* - argument is positional.
Example
VPlexcli:/> export initiator-port unregister -i win2k3_105_port1
Description Prints a summary of the views and volumes exported on each port, and a detailed
summary of the unhealthy ports.
In the root context, displays information for all clusters.
In /cluster context or below, displays information for only the current cluster.
◆ extent summary
◆ local-device summary
◆ storage-volume summary
◆ virtual-volume provision
Optional arguments
[-v|--view] storage-view context path - View to which to add the specified initiator port(s).
* - argument is positional.
Optional arguments
[-v|--view] context path - Storage view to which to add the specified port(s).
* - argument is positional.
Example VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView>
export storage-view addport --ports P000000003CB00147-B0-FC03
Optional arguments
[-v|--view] context path> - View to which to add the specified virtual volume(s).
[-f|--force] - Force the virtual-volumes to be added to the view even if they are already in
use, if they are already assigned to another view, or if there are problems determining the
view's state. Virtual-volumes that already have a LUN in the view will be re-mapped to the
newly-specified LUN.
* - argument is positional.
Description Add the specified virtual volume to the specified storage view. Optionally, specify the LUN
to assign to the virtual volume. Virtual volumes must be in a storage view in order to be
accessible to hosts.
When virtual-volumes are added using only volume names, the next available LUN number
is automatically assigned.
Virtual-volumes and LUN-virtual-volume pairs can be specified in the same command line.
For example:
r0_1_101_vol,(2,r0_1_102_vol),r0_1_103_vol
To modify the LUN assigned to a virtual volume, specify a virtual volume that is already
added to the storage view and provide a new LUN.
Example Modify the LUN assigned to a virtual volume already added to a view:
◆ The ll command in storage view context displays the LUN (0) assigned to a storage
volume.
◆ The export storage-view addvirtualvolume (LUN,Virtual-volume) --force command
assigns a new LUN to the virtual volume.
◆ The ll command in storage view context displays the new LUN assigned to a storage
volume:
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> ll
Name Value
------------------------ --------------------------------------------------
controller-tag -
initiators []
operational-status stopped
port-name-enabled-status [P000000003CA00147-A1-FC01,true,suspended,
P000000003CB00147-B0-FC01,true,suspended]
ports [P000000003CA00147-A1-FC01, P000000003CB00147-B0-FC01]
virtual-volumes [(0,TestDisDevice_vol,VPD83T3:6000144000000010a0014760d64cb325,16G)]
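The re-mapping command referenced above is sketched here (hedged; LUN 5 and the volume name
are taken from the output that follows, and the exact invocation may vary):
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> export storage-view
addvirtualvolume --virtual-volumes (5,TestDisDevice_vol) --force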
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> ll
Name Value
------------------------ --------------------------------------------------
controller-tag -
initiators []
operational-status stopped
port-name-enabled-status [P000000003CA00147-A1-FC01,true,suspended,
P000000003CB00147-B0-FC01,true,suspended]
ports [P000000003CA00147-A1-FC01, P000000003CB00147-B0-FC01]
virtual-volumes [(5,TestDisDevice_vol,VPD83T3:6000144000000010a0014760d64cb325,16G)]
Example Add a synchronous virtual volume to a view using the --force option from the root context:
VPlexcli:/> export storage-view addvirtualvolume --view
/clusters/Saul1/exports/storage-views/TestStorageView --virtual-volumes
dr710_20_C1Win_0038_12_vol --force
Volume {1} is synchronous and on a non-local device. Applications using this volume may
experience per I/O inter-cluster latency. If the applications are sensitive to this latency,
they may experience data unavailability. Do you wish to proceed ? (Yes/No)
Example To check all view configurations for all clusters from the CLI, type:
VPlexcli:/> export storage-view checkconfig
Checking cluster cluster-1:
No errors found for cluster cluster-1.
Checking cluster cluster-2:
No errors found for cluster cluster-2.
Volume dd_13_vol is exported multiple times:
view: LicoJ009, lun: 14
view: LicoJ010, lun: 14
Volume dd_16_vol is exported multiple times:
view: LicoJ009, lun: 17
view: LicoJ010, lun: 17
Volume dd_12_vol is exported multiple times:
view: LicoJ009, lun: 13
view: LicoJ010, lun: 13
Volume dd_19_vol is exported multiple times:
view: LicoJ009, lun: 20
view: LicoJ010, lun: 20
.
.
.
Optional arguments
[-c|--cluster] context path - The cluster on which to create the view.
* - argument is positional.
Description A storage view is a logical grouping of front-end ports, registered initiators (hosts), and
virtual volumes used to map and mask LUNs. Storage views are used to control host
access to storage.
In order for hosts to access virtual volumes, the volumes must be in a storage view. A
storage view consists of:
◆ One or more initiators. Initiators are added to a storage view using the export
storage-view addinitiatorport command.
◆ One or more virtual-volumes. Virtual volumes are added to a storage view using the
export storage-view addvirtualvolume command.
◆ One or more FE ports. Ports are added to a storage view using the export storage-view
addport command.
The name assigned to the storage view must be unique throughout the VPLEX. In VPLEX
Metro or Geo configurations, the same name must not be assigned to a storage view on
the peer cluster.
Optional arguments
[-f|--force] - Force the storage view to be destroyed even if it is in use.
* - argument is positional.
Description This command is most useful for configurations with thousands of LUNs, and a large
number of views and exported virtual volumes.
Find the views exported by initiators whose name starts with “Lico”:
VPlexcli:/clusters/cluster-1/exports> export storage-view find --initiator Lico*
Views including initiator Lico*:
View LicoJ009.
View LicoJ013.
Optional arguments
[-f|--file] file - Name of the file to which to send the output. If no file is specified, output is
to the console screen.
* argument is positional.
Optional arguments
[-v|--view] context path - The storage view from which to remove the initiator port.
* - argument is positional.
Optional arguments
[-v|--view] context path - View from which to remove the specified port(s).
* - argument is positional.
Optional arguments
[-f|--force] - Force the virtual-volumes to be removed from the view even if the specified
LUNs are in use, the view is live, or some of the virtual-volumes do not exist in the view.
[-v|--view] context path - View from which to remove the specified virtual volume(s).
* - argument is positional.
Example Delete a virtual volume from the specified storage view, even though the storage view is
active:
VPlexcli:/clusters/cluster-1/exports/storage-views> removevirtualvolume --view E209_View
--virtual-volume (1,test3211_r0_vol) --force
WARNING: The storage-view 'E209_View' is a live storage-view and is exporting storage through
the following initiator ports:
'iE209_hba1_b', 'iE209_hba0'. Performing this operation may affect hosts' storage-view of
storage. Proceeding anyway.
Example
VPlexcli:/clusters/cluster-2/exports/storage-views> export
storage-view show-powerpath-interfaces
PowerPath Interface Target Port
------------------- ---------------------------
SP A8 P000000003CA000E6-A0-FC03.0
SP A9 P000000003CB001CB-B0-FC01.0
SP A6 P000000003CA001CB-A0-FC01.0
SP A7 P000000003CB000E6-B0-FC01.0
SP A0 P000000003CA000E6-A0-FC00.0
SP A1 P000000003CA000E6-A0-FC01.0
SP A4 P000000003CA000E6-A0-FC02.0
SP A5 P000000003CB001CB-B0-FC00.0
Example Display storage view summary for a specified cluster (no unhealthy views):
VPlexcli:/> export storage-view summary --clusters cluster-1
Each WWN is either '0x' followed by one or more hexadecimal digits or an abbreviation, in
the following format:
<string>:<number>[,<number>]
For example,
0xd1342a|0xd1342b
hyy1:194e,4|hyy1:194e
0xd1342a
hyy1:194e,4
Optional arguments
[-p|--port] context path - Target port for which to rename the WWN pair.
Disable the corresponding Fibre Channel port before executing this command.
Example
VPlexcli:/> export target-port renamewwns --wwns 0xd1342a|0xd1342b --port
P0000000000000001-FK00
extent create
Creates one or more storage-volume extents.
Optional arguments
[-s|--size] size - The size of each extent, in bytes. If not specified, the largest available
contiguous range of 4K byte blocks on the storage volume is used to create the specified
number of extents.
[-n|--num-extents] integer - The number of extents to create per specified storage volume.
Maximum of 128 extents per storage volume. If not specified, only one extent per
storage-volume is created.
[-o|--block-offset] integer - The block-offset on the underlying storage volume on which the
extent is created. If not specified, the block-offset is determined automatically.
* - argument is positional.
Description An extent is a slice (range of 4K byte blocks) of a storage volume. An extent can use the
entire capacity of the storage volume, or the storage volume can be carved into a
maximum of 128 extents.
Extents are the building blocks for devices.
If the storage volume is larger than the desired virtual volume, create an extent the size of
the desired virtual volume. Do not create smaller extents, and then use different RAIDs to
concatenate or stripe the extents.
If the storage volume is smaller than the desired virtual volume, create a single extent per
storage volume, and then use devices to concatenate or stripe these extents into a larger
device.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to
this problem.
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes
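With the storage-volumes context set by the cd command above, the following hedged sketch
creates a single extent (the -d flag follows the extent create usage shown later in this guide;
the storage-volume name my_storage_volume_1 is illustrative):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> extent create -d
my_storage_volume_1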
extent destroy
Destroys one or more storage-volume extents.
Optional arguments
[-f|--force] - Forces the destruction of the given extents, bypassing all guards and
confirmations.
* - argument is positional.
extent summary
Displays a list of a cluster's unhealthy extents.
Description Displays a cluster's unhealthy extents (if any exist), the total number of extents by use,
and calculates the total extent capacity for this cluster.
An unhealthy extent has a non-nominal health state, operational status or I/O status.
If the --clusters argument is not specified and the command is executed at or below a
specific cluster's context, information is summarized for only that cluster. Otherwise, the
extents of all clusters are summarized.
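Example A hedged sketch of summarizing extents for a single cluster using the --clusters
argument described above (output omitted; its fields are described in the table below):
VPlexcli:/> extent summary --clusters cluster-1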
Field Description
Operational Status degraded - The extent may be out-of-date compared to its mirror (applies only to
extents that are part of a RAID 1 device).
ok - The extent is functioning normally.
starting - The extent is not yet ready.
unknown - VPLEX cannot determine the extent's Operational state, or the state is invalid.
Health State degraded - The extent may be out-of-date compared to its mirror (applies only to extents
that are part of a RAID 1 device).
ok - The extent is functioning normally.
non-recoverable-error - The extent may be out-of-date compared to its mirror (applies only to
extents that are part of a RAID 1 device), and/or the Health state cannot be determined.
unknown - VPLEX cannot determine the extent's Operational state, or the state is invalid.
Extent Summary
Use used - Of the total number of extents on the cluster, the number in use.
claimed - Of the total number of extents on the cluster, the number that are claimed.
unclaimed - Of the total number of extents on the cluster, the number that are unclaimed.
unusable - Indicates that the underlying storage-volume of the extent is dead or
unreachable. Use the storage-volume summary command to check the storage-volume. Use
the validate-system-configuration command to check reachability from the directors.
logging - Of the total number of extents on the cluster, the number that are in use for
logging.
getsysinfo
Returns information about the current system.
Syntax getsysinfo
--output path name
--linux
Field Description
Flag includeDebug
Treating this tower like version D4 - Denotes the system is Release 4.0 or later. Ignore this line.
nn ports - unknown system type - The getsysinfo script looked for hardware prior to Release 4.0 and did not find it.
System does NOT have comtcp enabled - Communication protocol used on Ethernet ports for connections to other clusters prior to Release 4.0. Ignore this line.
health-check
Displays a report indicating overall hardware/software health.
Syntax health-check
[-m|--highlevel]
[-f|--full]
--configuration
--back-end
--front-end
--cache
--consistency-group
--wan
--hardware
Description This command provides a high-level view of the system health. The command checks the
major subcomponents and reports error conditions, warnings, or an OK status for each. An example
invocation follows the list of consolidated commands below.
Consolidates information from the following commands:
◆ version
◆ cluster status
◆ cluster summary
◆ connectivity validate-be
◆ connectivity validate-wan-com
◆ ds summary
◆ export storage-view summary
◆ virtual-volume summary
◆ storage-volume summary
◆ ll /clusters/**/system-volumes/
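Example A hedged sketch of a full scan; the output excerpted below and the log-file line near
the end are consistent with the --full argument listed in the syntax above:
VPlexcli:/> health-check --full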
Meta Data:
----------
Cluster Volume Volume Oper Health Active
Name Name Type State State
--------- ------------------------------- -------------- ----- ------ ------
cluster-1 Advil_1 meta-volume ok ok True
cluster-1 logging_c1_log_vol logging-volume ok ok -
cluster-1 Advil_1_backup_2012Mar07_043012 meta-volume ok ok False
cluster-1 Advil_1_backup_2012Mar08_043011 meta-volume ok ok False
cluster-2 logging_c2_log_vol logging-volume ok ok -
cluster-2 Advil-2_backup_2012Mar08_043020 meta-volume ok ok False
cluster-2 Advil-2_backup_2012Mar07_043017 meta-volume ok ok False
cluster-2 Advil-2 meta-volume ok ok True
Front End:
----------
Cluster Total Unhealthy Total Total Total Total
Name Storage Storage Registered Ports Exported ITLs
Views Views Initiators Volumes
--------- ------- --------- ---------- ----- -------- -----
cluster-1 4 2 12 8 135 672
cluster-2 0 0 0 0 0 0
Storage:
--------
Cluster Total Unhealthy Total Unhealthy Total Unhealthy No Not visible
Name Storage Storage Virtual Virtual Dist Dist Dual from
Volumes Volumes Volumes Volumes Devs Devs Paths All Dirs
--------- ------- --------- ------- --------- ----- --------- ----- -----------
cluster-1 2375 10 229 10 12 0 0 0
cluster-2 2365 0 205 0 12 0 0 0
Consistency Groups:
-------------------
Cluster Total Unhealthy Total Unhealthy
Name Synchronous Synchronous Asynchronous Asynchronous
Groups Groups Groups Groups
--------- ----------- ----------- ------------ ------------
cluster-1 9 0 0 0
cluster-2 5 0 0 0
FC WAN Connectivity:
--------------------
Port Group Connectivity
------------ ------------
port-group-1 ok
port-group-0 ok
Cluster Witness:
----------------
Cluster Witness is not configured
RecoverPoint:
-------------
Cluster Total Unhealthy Total Unhealthy Total Mis-aligned Total Unhealthy
Name RP Clusters RP Clusters Replicated Replicated RP-enabled RP-enabled Registered RP Storage
Virtual Virtual Consistency Consistency RP Initiators/ Views
Volumes Volumes Groups Groups Storage Views
--------- ----------- ----------- ---------- ---------- ----------- ----------- -------------- ----------
cluster-1 1 1 1 0 8 1 8/1 0
cluster-2 - - - - - - -/- -
**This command is only able to check the health of the local cluster(cluster-1)'s RecoverPoint
configuration, therefore if this system is a VPLEX Metro or VPLEX Geo repeat this command on the remote
cluster to get the health of the remote cluster's RecoverPoint configuration.
Array Aware:
------------
Cluster Name Provider Address Connectivity Registered Total
Arrays Storage Pool
------------ -------- ------------- ------------ ---------- ------------
Hopkinton dsvea125 10.108.64.125 connected 2 13
Hopkinton dsvea123 10.108.64.123 connected 2 29
Output to /var/log/VPlex/cli/health_check_full_scan.log
port-group-1: OK
All com links have the expected connectivity: this port-group is operating
correctly.
port-group-0: OK
All com links have the expected connectivity: this port-group is operating
correctly.
port-group-1: OK
All com links have the expected connectivity: this port-group is operating
correctly.
port-group-0: OK
All com links have the expected connectivity: this port-group is operating
correctly.
Output to /var/log/VPlex/cli/health_check_full_scan.log
Virtual HA (VHA):
Checking director VM existence...... OK
Checking director existence........... OK
Checking director VM placement...... OK
Checking director runtime status..... OK
Checking virtual networking state..... Error
Checking if vSphere HA is enabled..... OK
Checking vSphere HA................... Warning
Checking if vSphere DRS is enabled.... OK
Checking VSphere DRS.................. OK
Checking VSphere DRS run time......... OK
Output to /var/log/VPlex/cli/health_check_full_scan.log
Virtual HA (VHA):
Checking director VM existence...... OK
Checking director existence........... OK
Checking director VM placement...... OK
Checking director runtime status..... OK
Checking virtual networking state..... Error
Failures
The following director VMs have an invalid number of configured NICs:
vDirector-4
vDirector-3
vDirector-1
vDirector-2
Warnings
The following director VMs do not have redundant front-end communication due to missing or
misconfigured virtual ethernet cards:
vDirector-4
vDirector-3
vDirector-1
vDirector-2
help
Displays help on one or more commands.
Syntax help
[-i|--interactive]
[-G|--no-global]
[-n|--no-internal]
options (* = required):
-h, --help
Displays the usage for this command.
--verbose
Provide more output during command execution. This may not have any effect for some
commands.
-c, --clusters= <clusters>
clusters whose operational-status to display.
Along with the operational-status, an indication of why it could be non-nominal and a progress
indicator are displayed.
Health-state has a similar indicator.
Here is a list of available topics. Enter any topic name to get more help.
history
Displays or clears the command history list.
Syntax history
[-c|--clear]
[-n|--number] number
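Example A minimal sketch using the --number argument from the syntax above to display the most
recent commands (the count is illustrative):
VPlexcli:/> history --number 5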
[-s|--secret] secret - * Specifies the secret to use in the configured CHAP credentials. The
secret must be between 12 and 255 characters long, and can be composed of all printable
ASCII characters.
[-t|--targets] target [, target...] -* Specifies the IQNs or IP addresses of the targets for which
to configure credentials.
[--secured] - Prevents the secret from being stored as plain text in the log files.
Optional arguments
[-c|--cluster] cluster context - Specifies the context-path of the cluster at which the
back-end CHAP credentials should be added.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
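Example A hedged sketch; the command name iscsi chap back-end add-credentials is an assumption
based on the argument descriptions above, and the target address and secret are illustrative:
VPlexcli:/> iscsi chap back-end add-credentials -c cluster-1 -t 192.168.100.2 -s
ExampleSecret12345 --secured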
Note: This command is valid only on systems that support iSCSI devices.
Description Disables back-end CHAP. Once CHAP is disabled on the back-end, all configured
credentials will be unused, but not deleted.
You will not be able to login to storage arrays that enforce CHAP if you disable back-end
CHAP that was previously configured.
Note: This command is valid only on systems that support iSCSI devices.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
Description Enables back-end CHAP. Enabling CHAP on the back-end allows VPLEX to log in securely to
storage arrays that also have CHAP configured. To use CHAP once it is enabled, all storage
arrays must have added CHAP credentials or have set a default CHAP credential.
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-c|--cluster] cluster context - Specifies the context path of the cluster at which the
back-end CHAP credential should be removed.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Note: This command is valid only on systems that support iSCSI devices.
Example Remove default credentials for back-end CHAP from ports on cluster 1:
VPlexcli:/> iscsi chap back-end remove-default-credential -c cluster-1/
Optional arguments
[-c|--cluster] cluster context - Specifies the context path of the cluster at which the default
back-end CHAP credential should be set.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - positional argument
Description Sets a default credential for back-end CHAP configuration. When back-end CHAP is
enabled, VPLEX will log into its targets using the default credential. If a specific target has
a separate credential configured, VPLEX will use that separate credential when attempting
to log into the target.
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-c|--cluster] cluster context - Specifies the context path of the cluster at which the
front-end CHAP credentials should be added.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Note: This command is valid only on systems that support iSCSI devices.
Description Disables front-end CHAP. Once CHAP is disabled on the front-end all configured
credentials will be unused, but not deleted.
Note: This command is valid only on systems that support iSCSI devices.
Description Enables front-end CHAP. Enabling CHAP on the front-end will allow initiators to log in
securely. To use CHAP once it is enabled, all initiators must have added CHAP credentials
or have set a default CHAP credential.
Initiators will no longer be able to log in once CHAP is enabled if they do not have a
credential configured.
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-c|--cluster] cluster context - Specifies the context path of the cluster at which the
front-end CHAP credentials should be removed.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - positional argument
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-c|--cluster] cluster context - Specifies the context path of the cluster at which the default
front-end CHAP credential should be set.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - positional argument
Description Sets a default credential for front-end CHAP configuration. When front-end CHAP is
enabled, any initiator without a separate credential will be able to login using this default
credential. A specific initiator that has a separate credential configured will not be able to
login using the default credential.
Note: This command is valid only on systems that support iSCSI devices.
iscsi check-febe-connectivity
Note: This command is not supported in VPLEX.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
Description The command checks whether the front-end and back-end configurations for iSCSI
connectivity are valid and meet best practices for VPLEX iSCSI configuration.
Note: This command is valid only on systems that support iSCSI devices.
Adds iSCSI iSNS servers to the cluster, so that the cluster can register the iSCSI front-ends
at those iSNS servers, allowing hosts to discover the VPLEX by querying the iSNS server.
Optional arguments
[-c|--cluster] cluster context - Context path of the cluster on which the iSNS server should
be added.
[-h|--help] - Displays command line help.
* - argument is positional.
Description Adds one or more server addresses to the list of iSNS servers.
Note: This command is only valid on systems that support iSCSI devices.
Example Adding two iSNS servers with IP addresses of 192.168.100.2 and 192.168.101.2:
VPlexcli:/> iscsi isns add -c cluster-1 -s 192.168.100.2,192.168.101.2
Successfully added 2 iSNS servers.
Example Add two iSNS server addresses with specified ports to the list of iSNS servers.
VPlexcli:/> iscsi isns add -c cluster-1 -s 192.168.100.2:3000,192.168.101.2:3001
Successfully added 2 iSNS servers.
Example Attempt to add two iSNS servers that are already configured with the given IP addresses.
VPlexcli:/> iscsi isns add -c cluster-1 -s 192.168.100.2,192.168.101.2
Unable to configure the following iSNS servers:
iSNS Server Error
------------- -------------------------------------------------------------------
192.168.100.2 iSNS Server 192.168.100.2:3260 is already configured at cluster-1
192.168.101.2 iSNS Server 192.168.101.2:3260 is already configured at cluster-1
Note: This command is only valid on systems that support iSCSI devices. If a server is not
configured on all directors, a warning message indicates the servers are missing from
specific directors.
Removes iSNS servers from the list of configured iSNS servers on the cluster.
Optional arguments
[-c|--cluster] cluster context - Context path of the cluster from which the iSNS server should
be removed.
[-h|--help] - Displays command line help.
* - argument is positional.
Description Removes one or more server addresses from the list of iSNS servers.
Note: This command is only valid on systems that support iSCSI devices.
Example Remove two iSNS servers with IP addresses of 192.168.100.2 and 192.168.101.2:
VPlexcli:/> iscsi isns remove -c cluster-1 -s 192.168.100.2,192.168.101.2
Successfully removed 2 iSNS servers.
Example Remove two iSNS servers with IP addresses and specified ports.
VPlexcli:/> iscsi isns remove -c cluster-1 -s 192.168.100.2:3000,192.168.101.2:3001
Successfully removed 2 iSNS servers.
Example Attempt to remove two iSNS servers that have already been removed.
VPlexcli:/> iscsi isns remove -c cluster-1 -s 192.168.100.2,192.168.101.2
Unable to remove the following iSNS servers:
iSNS Server Error
------------- -------------------------------------------------------------------
192.168.100.2 iSNS Server 192.168.100.2:3260 is not present at cluster-1
192.168.101.2 iSNS Server 192.168.101.2:3260 is not present at cluster-1
Adds iSCSI storage array target portals to specified cluster, enabling ports to be
discovered.
Optional arguments
[-c|--cluster] cluster context - Context path of the cluster on which target portals should be
added.
[-h|--help] - Displays command line help.
* - argument is positional.
Description After an iSCSI target portal is added, discovery takes place immediately. Thereafter, periodic
discoveries are automatically performed at 30-minute intervals.
Note: This command is only valid on systems that support iSCSI devices.
Example Add two target portals with IP addresses of 192.168.100.2 and 192.168.101.2:
VPlexcli:/> iscsi sendtargets add -c cluster-1 -t 192.168.100.2, 192.168.101.2
Successfully added 2 target portals.
Example Add two target portals with IP addresses and specified ports, 192.168.100.2:3000 and
192.168.101.2:3001.
VPlexcli:/> iscsi sendtargets add -c cluster-1 -t 192.168.100.2:3000, 192.168.101.2:3001
Successfully added 2 target portals.
Example Attempt to add two target portals that are already configured with the given IP addresses.
VPlexcli:/> iscsi sendtargets add -c cluster-1 -t 192.168.100.2, 192.168.101.2
Unable to configure the following target portals:
Target Portal Error
------------- -------------------------------------------------------------------
192.168.100.2 Target portal 192.168.100.2:3260 is already configured at cluster-1
192.168.101.2 Target portal 192.168.101.2:3260 is already configured at cluster-1
Description Lists the iSCSI target portals available on at least one director. If a portal is not available
on all directors, a warning message indicates which portals are missing from which
directors.
Note: This command is only valid on systems that support iSCSI devices.
Example List target portals and encounter directors missing target ports:
Note: This command is only valid on systems that support iSCSI devices.
Removes an iSCSI storage array's target portals from the cluster list of configured
target portals.
Optional arguments
[-c|--cluster] cluster context - Context path of the cluster from which the target portals
should be removed.
[-h|--help] - Displays command line help.
* - argument is positional.
Description After an iSCSI target portal is removed, a new round of discovery begins immediately.
Note: This command is only valid on systems that support iSCSI devices.
Example Remove two target portals with IP addresses of 192.168.100.2 and 192.168.101.2:
VPlexcli:/> iscsi sendtargets remove -c cluster-1 -t 192.168.100.2, 192.168.101.2
Successfully removed 2 target portals.
Example Remove two target portals with IP addresses and specified ports, 192.168.100.2:3000
and 192.168.101.2:3001.
VPlexcli:/> iscsi sendtargets remove -c cluster-1 -t 192.168.100.2:3000, 192.168.101.2:3001
Successfully removed 2 target portals.
Example Attempt to remove two target portals that are not configured with the given IP addresses.
VPlexcli:/> iscsi sendtargets remove -c cluster-1 -t 192.168.100.2, 192.168.101.2
Unable to remove the following target portals:
Target Portal Error
------------- -------------------------------------------------------------------
192.168.100.2 Target portal 192.168.100.2:3260 is not present at cluster-1
192.168.101.2 Target portal 192.168.101.2:3260 is not present at cluster-1
Displays the list of iSCSI targets that are discovered on the cluster.
Description Displays the list of iSCSI targets that are discovered on the cluster.
Note: This command is valid only on systems that support iSCSI devices.
Optional arguments
[-c|--cluster] cluster context - Context path of cluster from which the target portals should
be logged out.
[-h|--help] - Displays command line help.
* - argument is positional.
Note: This command is valid only on systems that support iSCSI devices.
local-device create
Creates a new local-device.
Note: If this device will have another device attached (using the device attach-mirror
command to create a RAID-1), the name of the resulting RAID-1 is the name given here
plus a timestamp. Names in VPLEX are limited to 63 characters. The timestamp consumes
16 characters. Thus, if this device is intended as the parent device of a RAID-1, the device
name must not exceed 47 characters.
[-g|--geometry] {raid-0|raid-1|raid-c} - * Geometry for the new device. Valid values are
“raid-0”, “raid-1”, or “raid-c”.
Use this command to create a RAID 1 device only if:
- None of the legs contains data that must be preserved,
- The resulting device will be initialized using tools on the host, or
- The resulting device will be added as a mirror to another device.
Description A device is configured from one or more extents in a RAID 1, RAID 0, or concatenated (RAID
C) configuration.
The block sizes of the supporting extents must be the same (4 K bytes) and will determine
the local-device block size.
When creating a device with RAID 1 geometry, this command prints a warning and asks for
confirmation.
If the --source-leg argument is not specified, this command does not initialize or
synchronize the legs of a RAID 1 device. Because of this, a RAID 1 device created by this
command does not guarantee that consecutive reads of the same block will return the
same data if the block has never been written.
To create a RAID 1 device when one leg of the device contains data that must be
preserved, use the --source-leg argument or the device attach-mirror command to add a
mirror to the leg.
By default, automatic device rebuilds are enabled on all devices. For configurations with
limited bandwidth between clusters, it may be useful to disable automatic rebuilds.
Use the set command to enable/disable automatic rebuilds on the distributed device. The
rebuild setting is immediately applied to the device.
◆ Set rebuild-allowed to true to start or resume a rebuild if the mirror legs are out of
sync.
◆ Set rebuild-allowed to false to stop any rebuild in progress.
When automatic rebuild is re-enabled on a device where it has been disabled, the
rebuild starts again from the place where it stopped.
VPlexcli:/clusters/cluster-2/storage-elements/extents> cd
VPlexcli:/> ll -p **/devices
/clusters/cluster-1/devices:
Name Operational Health Block Block Capacity Geometry Visibility Transfer Virtual
--------------- Status State Count Size -------- -------- ---------- Size Volume
--------------- ----------- ------ ------- ----- -------- -------- ---------- -------- ---------
TestDevCluster1 ok ok 4195200 4K 16G raid-1 local 2M -
base0 ok ok 262144 4K 1G raid-0 local - base0_vol
base1 ok ok 262144 4K 1G raid-0 local - base1_vol
In the following example, the local-device create command creates a RAID-1 device from 2
extents; extent_lun_1_1 and extent_lun_2_1 in which:
◆ extent_lun_2_1 is the same size or larger than extent_lun_1_1
◆ extent_lun_1_1 is the source leg of the new device
◆ extent_lun_2_1 is the mirror leg
VPlexcli:/> local-device create --geometry raid-1 --extents extent_lun_1_1, extent_lun_2_1
--name dev_lun_1 --source-leg extent_lun_1_1
/clusters/cluster-1/devices:
Name Operational Health Block Block Capacity Geometry Visibility Transfer Virtual
----------- Status State Count Size -------- -------- ---------- Size Volume
----------- ----------- ------ -------- ----- -------- -------- ---------- -------- --------
dev_lun_1 ok ok 20709376 4K 5G raid-1 local - -
local-device destroy
Destroys existing local-devices.
Optional arguments
[-f|--force] - Force the destruction of the devices without asking for confirmation.
* - argument is positional.
Description The device must not be hosting storage or have a parent device.
Context
---------------------------------------
/clusters/cluster-1/devices/was_1_leg_r1
Do you wish to proceed? (Yes/No)
local-device summary
Displays unhealthy local devices and a summary of all local devices.
Description Displays unhealthy local devices and a summary of all local devices. Unhealthy devices
have non-nominal health state, operational status, or service-status.
If the --clusters argument is not specified and the command is executed at or below a
/clusters/cluster context, information for only that cluster is displayed.
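Example A hedged sketch of the command run from a cluster context; the fragment below is an
excerpt of its summary output:
VPlexcli:/clusters/cluster-1> local-device summary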
Health devices 5
unhealthy 1
Visibility local 5
Field Description
Health
unhealthy Of the total number of devices in the cluster, the number whose health state is not “ok”.
Visibility Of the total number of devices in the cluster, the number with global or local visibility.
global - The remote cluster can access the virtual volume. A virtual volume on a top-level
device that has global visibility can be exported in storage views on any cluster.
local (default) - Device is visible only to the local cluster.
Capacity
devices w/ space Of the total number of devices in the cluster, the number with available space.
Description Log filters define criteria for the destination of specific log data. A filter is placed in an
ordered list, and filters see a received event in the order they sit in the list (shown by the
log filter list command).
By default, filters consume received events so that a matching filter stops the processing
of the event. Use the --no-consume argument to create a filter that allows processing of
matching events to continue.
Example Filter out (hide) all messages with the string test in them:
VPlexcli:/> log filter create -m "test"
Filter added.
Filter all messages into the events log generated by the logserver component with the
string Test:
VPlexcli:/> log filter create --source 1 --component logserver --message Test
Filter added.
Example
VPlexcli:/> log filter list
1. [Source='/var/log/VPlex/cli/events.log', Component='logserver', Message matches 'Test']
Destination='null' Consume='true'
2. Component='logserver' Destination='null' Consume='true'
3. [Threshold='>0'] Destination='null' Consume='true'
Description The number printed beside each filter serves both as an identifier for the log filter destroy
command and as the position at which that filter sees a received event.
Example
Optional arguments
[-f|--failover-source] host:port - IP address and port of the failover source to be added.
* argument is positional.
Description
For use by EMC personnel only.
.
.
.
6. [128.221.252.69:5988]/cpu0/log
7. [128.221.252.69:5988]/xmmg/log
Description
For use by EMC personnel only.
Description Lists the log paths from which log events are processed and their reference IDs.
Used to create log filters.
1. /var/log/VPlex/cli/events.log
2. 128.221.252.35:5988,[128.221.253.35:5988]/xmmg/log
3. 128.221.252.36:5988,[128.221.253.36:5988]/cpu0/log
4. [128.221.252.35:5988],128.221.253.35:5988/cpu0/log
5. [128.221.252.36:5988],128.221.253.36:5988/xmmg/log
logging-volume add-mirror
Adds a logging volume mirror.
logging-volume create
Creates a new logging volume in a cluster.
Optional arguments
[-d|--stripe-depth] depth - Required if --geometry is raid-0. Stripe depth must be:
◆ Greater than zero, but not greater than the number of blocks of the smallest element
of the RAID 0 device being created
◆ A multiple of 4 K bytes
A depth of 32 means 128 K (32 x 4 K) is written to the first disk, then the next 128 K is
written to the next disk.
Best practice regarding stripe depth is to follow the best practice of the underlying array.
Concatenated RAID devices are not striped.
* - argument is positional.
Description Creates a logging volume. The new logging volume is immediately available for use with
distributed-devices.
A logging volume is required on each cluster in VPLEX Metro and Geo configurations. Each
logging volume must be large enough to contain one bit for every page of distributed
storage space (approximately 10 GB of logging volume space for every 160 TB of
distributed devices).
Logging volumes experience a large amount of I/O during and after link outages. Best
practice is to stripe each logging volume across many disks for speed, and to have a mirror
on another fast disk.
To create a logging volume, first claim the storage volumes that will be used, and create
extents from those volumes.
◆ Use the ll /clusters/cluster/storage-elements/storage-volumes command to display
the available storage volumes on the cluster.
◆ Use the storage-volume claim -n storage-volume_name command to claim one or
more storage volumes.
◆ Use the extent create -d storage-volume_name, storage-volume_name command to
create an extent to use for the logging volume.
Repeat this step for each extent to be used for the logging volume.
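For example, the prerequisite claim and extent steps might look like the following sketch (replace <storage-volume-name> with an actual unclaimed storage volume on your cluster):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume claim -n <storage-volume-name>
VPlexcli:/clusters/cluster-1/storage-elements/extents> extent create -d <storage-volume-name>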
Example
VPlexcli:/clusters/cluster-1/system-volumes> logging-volume create -n c1_log_vol -g raid-1 -e
extent_1 , extent_2
VPlexcli:/clusters/cluster-1/system-volumes> cd c1_log_vol
VPlexcli:/clusters/cluster-1/system-volumes/c1_log_vol> ll
/clusters/cluster-1/system-volumes/c1_log_vol
/clusters/cluster-1/system-volumes/c1_log_vol:
Attributes:
Name Value
-------------------------------- --------------
application-consistent false
biggest-free-segment-block-count 2612155
block-count 2621440
block-size 4K
capacity 10G
component-count 1
free-capacity 9.97G
geometry raid-0
health-indications []
health-state ok
locality local
operational-status ok
rebuild-allowed -
rebuild-eta -
rebuild-progress -
rebuild-status -
rebuild-type -
stripe-depth 4K
supporting-device logging_c1_log
system-id logging_c1_log
transfer-size -
volume-type logging-volume
Contexts:
Name Description
---------- -------------------------------------------------------------------
components The list of components that support this logging-volume.
segments Shows what parts of the logging volume are assigned to log changes
on distributed-device legs.
VPlexcli:/clusters/cluster-1/system-volumes/c1_log_vol> ll
/clusters/cluster-1/system-volumes/c1_log_vol/components
/clusters/cluster-1/system-volumes/c1_log_vol/components:
Name Slot Type Operational Health Capacity
----------------------- Number ------ Status State --------
----------------------- --------- ------ --------------- -------- --------
extent_VNX-1912_LUN10_1 0 extent ok ok 15G
extent_VNX-1912_LUN11_1 1 extent ok ok 15G
VPlexcli:/clusters/cluster-1/system-volumes/c1_log_vol> ll
/clusters/cluster-1/system-volumes/c1_log_vol/segments
/clusters/cluster-1/system-volumes/c1_log_vol/segments:
Name Starting Block Use
-------------------------------------------------------- Block Count --------------------------------------------
-------------------------------------------------------- -------- ------- --------------------------------------------
allocated-c1_dr1ActC1_softConfig_CHM_C1_0000 1084 17 allocated for
c1_dr1ActC1_softConfig_CHM_C1_0000
allocated-c1_dr1ActC1_softConfig_CHM_C1_0001 1118 17 allocated for
c1_dr1ActC1_softConfig_CHM_C1_0001
.
.
.
allocated-r0_deviceTgt_C2_CHM_0001 2077 17 allocated for r0_deviceTgt_C2_CHM_0001
allocated-r1_mirrorTgt_C1_CHM_0001 2060 17 allocated for r1_mirrorTgt_C1_CHM_0001
free-1057 1057 10 free
free-2094 2094 3930066 free
free-40 40 2 free
free-82 82 2 free
VPlexcli:/clusters/cluster-1/system-volumes/c1_log_vol>
Field Description
biggest-free-segment-block-count The block count of the largest remaining free segment in the logging
volume. This is the upper limit on the size of a new allocated segment.
capacity The total number of bytes in the volume. Equals the 'block-size' multiplied
by the 'block-count'.
free-capacity The amount of unallocated capacity remaining in this logging volume.
geometry Indicates the geometry or redundancy of this device. Will always be raid-1.
rebuild-eta The estimated time remaining for the current rebuild to complete.
supporting-device The local, remote, or distributed device underlying the virtual volume.
transfer-size The transfer size during rebuild in bytes. See About transfer-size in the
batch-migrate start command.
/components context
Operational Status The operational status for the entity. This indicates whether the entity is
functioning, and if so, how well it is functioning.
/segments context
logging-volume detach-mirror
Detaches a mirror from a logging volume.
Description This command detaches a mirror from a logging-volume. The logging-volume must have a
RAID-1 geometry and the mirror must be a direct child of the logging-volume.
You must specify the --slot or --mirror option but not both.
To detach a mirror from a component of a logging-volume use the device detach-mirror
command.
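An illustrative invocation (a sketch only; the slot number and the option used to name the logging-volume are assumptions, while --slot and --mirror are as described above):
VPlexcli:/clusters/Hopkinton/system-volumes> logging-volume detach-mirror --slot 1 --logging-volume logging_vol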
/clusters/Hopkinton/system-volumes/logging_vol:
Attributes:
Name Value
-------------------------------- --------------
application-consistent false
biggest-free-segment-block-count 2620324
block-count 2621440
block-size 4K
capacity 10G
component-count 1
free-capacity 10G
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
provision-type legacy
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
stripe-depth -
supporting-device logging
system-id logging
transfer-size 128K
volume-type logging-volume
Contexts:
Name Description
---------- -------------------------------------------------------------------
components The list of components that support this logging-volume.
segments Shows what parts of the logging volume are assigned to log
changes on distributed-device legs.
/clusters/Hopkinton/system-volumes/logging_vol/components:
Name Slot Type Operational Health Capacity
------------------------------- Number ------ Status State -------
------------------------------- ------- ------ ----------- ------ -------
extent_CLARiiON1389_LUN_00023_1 0 extent ok ok 10G
logging-volume destroy
Destroys an existing logging volume.
Description The volume to be destroyed must not be currently used to store block write logs for a
distributed-device.
◆ logging-volume detach-mirror
logical-unit forget
Forgets the specified logical units (LUNs).
Optional arguments
[-s|--forget-storage-volumes] - If a LUN has an associated storage-volume, forget it AND the
associated storage-volume.
Description Forget one or more logical units (LUNs). Optionally, forget the storage volume if one is
configured on the LUN. This command attempts to forget each LUN in the list specified,
logging/displaying errors as it goes.
A logical unit can only be forgotten if it has no active paths. LUNs can be remembered
even if a cluster is not currently in contact with them. This command tells the cluster that
the specified LUNs are not coming back and therefore it is safe to forget about them.
If a specified LUN has an associated storage-volume, that LUN is skipped (is not
forgotten).
Use the --verbose argument to print a message for each volume that could not be
forgotten.
Use the --forget-storage-volumes argument to forget the logical-unit AND its associated
storage-volume. This is equivalent to using the storage-volume forget command on those
storage-volumes.
Example Forget the logical units in the current logical unit context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-SYMMETRIX-192602773/logical-
units> logical-unit forget
13 logical-units were forgotten.
102 logical-units have associated storage-volumes and were not forgotten
Example Use the --verbose arguments to display detailed information about any logical units that
could not be forgotten:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-SYMMETRIX-192602773/logical-
units> logical-unit forget --forget-storage-volumes --verbose
.
.
13 logical-units were forgotten:
VPD83T3:60000970000192602773533030353777
.
.
.
11 storage-volumes were forgotten:
VPD83T3:6006016030802100e405a642ed16e1099
.
.
.
ls
Displays information about the current object or context.
Description If the [-c|--context] argument is omitted, displays the contents of the current context.
The contents of a context include: its child contexts, if any; its attributes, if any; and the
available commands, if any.
The context name can be any valid glob pattern.
Notes The VPLEX CLI includes ll, a pre-defined alias of ‘ls -a’.
VPlexcli:/> ls -C /clusters/cluster-2/devices/device_CLAR0014_LUN04_1
/clusters/cluster-2/devices/device_CLAR0014_LUN04_1:
Name Value
---------------------- -----------------------
application-consistent false
block-count 2621440
block-size 4K
capacity 10G
geometry raid-0
health-indications []
health-state ok
locality local
operational-status ok
rebuild-allowed -
rebuild-eta -
rebuild-progress -
.
.
.
Use the --attribute argument to display the operational status of all directors:
VPlexcli:/> ls --attribute /engines/*/directors/*::operational-status
/engines/engine-2-1/directors/dirB:
Name Value
------------------ -----
operational-status ok
/engines/engine-2-1/directors/dirA:
Name Value
------------------ -----
operational-status ok
.
.
.
Display a cluster’s attributes and the contexts below the cluster context:
VPlexcli:/> ls /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name Value
---------------------- ------------
allow-auto-join true
auto-expel-count 0
auto-expel-period 0
auto-join-delay 0
cluster-id 1
connected true
default-cache-mode synchronous
default-caw-template true
director-names [DirA, DirB]
island-id 1
operational-status ok
transition-indications []
transition-progress []
health-state ok
health-indications []
Contexts:
management-server network-service-restart
Restarts the network service on the management server. This command is supported in
Release 5.1.1 and later.
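An illustrative invocation (a minimal sketch; no additional arguments are shown):
VPlexcli:/> management-server network-service-restart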
management-server set-ip
Assigns IP address, net-mask, and gateway IP address to the specified management port.
Changing the IP address for port eth3 can disrupt your inter-cluster link, and if VPLEX
Witness is deployed, disrupt the VPN between the VPLEX clusters and the Witness Server.
Additional failures (for example a remote VPLEX cluster failure) while VPN between the
VPLEX clusters and the Witness Server is disrupted could lead to DU for all distributed
virtual volumes in synchronous consistency groups.
For the procedure to safely change the management server IP address, refer to the
Troubleshoot => Cluster Witness section of the VPLEX procedures in the SolVe Desktop.
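After changing an address, the new settings can be confirmed from the management-server ports context; for example (the eth3 listing that follows shows the kind of output returned):
VPlexcli:/> ll /management-server/ports/eth3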
/management-server/ports/eth3:
Name Value
-------------- -------------
address 10.31.52.70
auto-negotiate true
duplex full
gateway 10.31.52.21
inet6-address -
inet6-gateway -
net-mask 255.255.248.0
speed 0.977GB/s
status up
Name Value
------------- -----------------------------------------------------
address 10.31.52.197
auto-negotiate true
duplex full
gateway 10.31.52.1
inet6-address [3ffe:80c0:22c:803c:215:17ff:fecc:4408/64 Scope: Global,
3ffe:80c0:22c:803c:415:17ff:fecc:4408/64 Scope: Link]
inet6-gateway 3ffe:80c0:22c:803c::1
net-mask 255.255.252.0
speed 0.977GB/s
status up
manifest upgrade
Loads a new manifest file, replacing the old one, if it exists.
Description The new manifest file will be validated before it replaces the old one.
If there is no current valid manifest file (corrupted or missing), the specified manifest file
is installed without confirmation.
If a valid manifest file exists, confirmation is required if the specified manifest file does
not have a newer version than the existing one.
manifest version
Displays the version of the currently loaded manifest file.
meta-volume attach-mirror
Attaches a storage-volume as a mirror to a meta-volume.
Description Creates a mirror and backup of the specified meta-volume. The specified
storage-volume(s):
◆ Must not be empty
◆ Must be at the implied or specified cluster
◆ Must be unclaimed
◆ Must be 78 GB or larger
EMC recommends creating a mirror and a backup of the meta-volume using at least two
disks from two different arrays.
Note: A mirror can be attached when the meta-volume is first created by specifying two
storage volumes.
meta-volume backup
Creates a new meta-volume and writes the current in-memory system data to the new
meta-volume without activating it.
Optional arguments
[-c|--cluster] context path - The cluster whose active meta-volume will be backed-up.
[-f|--force] - Forces the backup meta-volume to be activated without asking for
confirmation.
* - argument is positional.
Description Backup creates a point-in-time copy of the current in-memory metadata without activating
it. The new meta-volume is named:
current-metadata-namebackup_yyyyMMMdd_HHmms
Metadata is read from the meta-volume only during the boot of each director.
IMPORTANT
At boot, VPLEX reads the first meta-volume it encounters. If an out-of-date meta-volume is
visible to VPLEX, and is the first encountered during boot, that meta-volume is read.
All but the latest meta-volume should be moved to where they are not visible to VPLEX
during boot.
IMPORTANT
No modifications should be made to VPLEX during the backup procedure. Make sure that
all other users are notified.
Use the ll command in the system-volumes context to verify that the meta-volume is Active
and its Ready state is true.
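For example, a minimal verification sketch after creating the backup (no output shown; the backup volume appears alongside the active meta-volume in the listing):
VPlexcli:/> cd /clusters/cluster-1/system-volumes
VPlexcli:/clusters/cluster-1/system-volumes> ll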
meta-volume create
Creates a new meta-volume in a cluster when there is no existing active meta-volume.
IMPORTANT
Specify two or more storage volumes. Storage volumes should be on different arrays.
Optional arguments
[-f|--force] - Forces the meta-volume to be created without asking for confirmation.
* - argument is positional.
Description VPLEX metadata includes virtual-to-physical mappings, data about devices, virtual
volumes, and configuration settings.
Metadata is stored in cache and backed up on a specially designated external volume
called the meta-volume.
The meta-volume is critical for system recovery. The best practice is to mirror the
meta-volume across two or more back-end arrays to eliminate the possibility of data loss.
Choose the arrays used to mirror the meta-volume such that they are not required to
migrate at the same time.
Meta-volumes differ from standard storage volumes in that:
◆ A meta-volume is created without first being claimed,
◆ Meta-volumes are created directly on storage volumes, not extents.
If the meta-volume is configured on a CLARiiON array, it must not be placed on the vault
drives of the CLARiiON.
Note: Example output is truncated. Vendor, IO Status, and Type fields are omitted.
◆ The meta-volume create command creates a new mirrored volume using the 2
specified storage volumes.
◆ The ll command displays the new meta-volume.
VPlexcli:/> configuration show-meta-volume-candidates
Name Capacity...Array Name
---------------------------------------- -------- -----------------------------------
VPD83T3:60060480000190100547533030364539 187G .....EMC-SYMMETRIX-190100547
VPD83T3:60000970000192601707533031333132 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333133 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333134 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333135 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333136 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333137 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333138 98.5G.....EMC-SYMMETRIX-192601707
VPD83T3:6006016049e02100442c66c8890ee011 80G ......EMC-CLARiiON-FNM00083800068
.
.
.
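An illustrative create invocation using two of the candidates listed above (the --name and --storage-volumes option names are assumptions for this sketch; see the command's arguments for the exact spelling):
VPlexcli:/> meta-volume create --name c1_meta --storage-volumes VPD83T3:60000970000192601707533031333132, VPD83T3:6006016049e02100442c66c8890ee011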
VPlexcli:/> cd /clusters/cluster-1/system-volumes
VPlexcli:/clusters/cluster-1/system-volumes> ll c1_meta
/clusters/cluster-1/system-volumes/c1_meta:
Attributes:
Name Value
---------------------- -----------
active true
application-consistent false
block-count 20971264
block-size 4K
capacity 80G
component-count 2
free-slots 27199
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
ready true
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
slots 32000
stripe-depth -
system-id c1_meta
transfer-size 128K
volume-type meta-volume
Contexts:
Name Description
---------- -------------------------------------------------------------------
components The list of components that support this device or system virtual
volume.
Field Description
active Indicates whether this is the currently-active metadata volume. The system
has only one active metadata volume at a time.
free-slots The number of free slots for storage-volume headers in this meta-volume.
geometry Indicates the geometry or redundancy of this device. Will always be raid-1.
rebuild-eta The estimated time remaining for the current rebuild to complete.
slots The total number of slots for storage-volume headers in this meta-volume.
transfer-size The transfer size during rebuild in bytes. See the “About transfer-size”
section in the batch-migrate start command.
meta-volume destroy
Destroys a meta-volume, and frees its storage volumes for other uses.
Optional arguments
[-f|--force] - Destroys the meta-volume without asking for confirmation (allows the
command to be run from a non-interactive script). Allows the meta-volume to be
destroyed, even if the meta-volume is in a failed state and unreachable by VPLEX.
* - argument is positional.
/clusters/cluster-1/system-volumes/meta1:
Attributes:
Name Value
---------------------- -----------
active false
application-consistent false
block-count 23592704
.
.
.
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy -v
meta1
Meta-volume 'meta1' will be destroyed. Do you wish to continue?
(Yes/No) y
meta-volume detach-mirror
Detaches a storage-volume/mirror from a meta-volume.
Optional arguments
[-f|--force] - Force the mirror to be discarded. Required when the --discard argument is
used.
[-s|--slot] - The slot number of the mirror to be discarded. Applicable only when the
--discard argument is used.
--discard - Discards the mirror to be detached. The data is not discarded.
* - argument is positional.
Example
VPlexcli:/clusters/cluster-1/system-volumes/meta-vol-1/components> ll
Name Slot Type Operational Health Capacity
---------------------------------------- Number -------------- Status State --------
---------------------------------------- ------ -------------- ----------- ------ --------
VPD83T3:60000970000192601869533030373030 2 storage-volume ok ok 128G
VPD83T3:60000970000194900497533030333338 1 storage-volume ok ok 128G
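An illustrative detach invocation against the components listed above (a sketch; the slot number and the way the meta-volume is named are assumptions, and --force is included because --discard requires it):
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume detach-mirror --slot 1 --discard --force meta-vol-1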
meta-volume move
Writes the current in-memory system data to the specified target meta-volume, then
activates it.
◆ 78 GB or larger
Description Writes the metadata to the specified meta-volume, and activates it. The specified
meta-volume must already exist (it is not created automatically).
This command fails if the destination meta-volume has fewer metadata slots than required
to support the current configuration. This is highly likely if the target meta-volume was
manually created before Release 5.1 and has 32000 slots. Confirm this
by using the ll command in the system volume context. See the troubleshooting
procedures for VPLEX in the SolVe Desktop for information on fixing this problem.
3. Use the meta-volume move command to move the existing in-memory metadata to the
new meta-volume.
VPlexcli:/engines/engine-1-1/directors> meta-volume move --target-volume meta_dmx
meta-volume verify-on-disk-consistency
Analyzes a meta-volume's committed (on-disk) header slots for consistency across all
mirrors/components.
The slow option may take hours to complete on a production meta-volume.
Description An active meta-volume with an inconsistent on-disk state can lead to a data unavailability
(DU) during NDU.
Best practice is to NDU immediately after passing this meta-volume consistency check.
IMPORTANT
If any errors are reported, do not proceed with the NDU, and contact EMC Customer
Support.
The format and the length of time for the command to complete vary depending on the
VPLEX Release:
◆ For Release 4.2, the format of the command is:
meta-volume verify-on-disk-consistency --style slow --meta-volume <meta-volume-name>
The length of time for the command to complete for Release 4.2 depends on the size
of the configuration. On a very large configuration, the command may take as long as
4 hours.
Note: Running this command is optional before upgrading from Release 4.2.
◆ For Release 5.0 and later, the format of the command is:
meta-volume verify-on-disk-consistency --style long --meta-volume <meta-volume-name>
Note: Running this command is recommended before upgrading from Release 5.0 or later.
Example Verify the specified meta-volume is consistent using the slow style:
VPlexcli:/> meta-volume verify-on-disk-consistency --style slow --meta-volume meta_cluster1
monitor add-console-sink
Adds a console sink to the specified performance monitor.
Optional arguments
[-f|--force] - Forces the creation of the sink, even if existing monitors are delayed in their
polling.
[-o|--format] {csv|table} - The output format. Can be csv (comma-separated values) or
table.
Default: table.
* - argument is positional.
Description Creates a console sink for the specified performance monitor. Console sinks send output
to the VPLEX Management Server Console.
Every monitor must have at least one sink, and may have multiple sinks. A monitor does
not begin operation (polling and collecting performance data) until a sink is added to the
monitor.
Use the monitor add-console-sink command to add a console sink to an existing monitor.
Console monitors display the specified statistics on the Unisphere for VPLEX, interrupting
any other input/output to/from the console.
Example Add a console sink with output formatted as table (the default output format for console
sinks):
VPlexcli:/> monitor add-console-sink --monitor Director-2-1-B_TestMonitor
Navigate to the monitor context and use the ll console command to display the sink
settings:
VPlexcli:/> cd /monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks
VPlexcli:/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks> ll console
/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks/console:
Name Value
------- -------
enabled true
format table
sink-to console
type console
monitor add-file-sink
Adds a file sink to the specified performance monitor.
Optional arguments
[-f|--force] - Forces the creation of the sink, even if existing monitors are delayed in their
polling.
[-n|--name] name - Name for the new sink. If no name is provided, the default name “file”
is applied.
[-o|--format] {csv|table} - The output format. Can be csv (comma-separated values) or
table.
Default: csv.
* - argument is positional.
Description Creates a file sink for the specified monitor. File sinks send output to the specified file.
The default location of the output file is /var/log/VPlex/cli.
The default name for the file sink context is 'file'.
Every monitor must have at least one sink, and may have multiple sinks. A monitor does
not begin operation (polling and collecting performance data) until a sink is added to the
monitor.
Use the monitor add-file-sink command to add a file sink to an existing monitor.
Example To add a file sink to send output to the specified .csv file:
VPlexcli:/monitoring/directors/director-1-1-A/monitors> monitor add-file-sink --monitor
director-1-1-A_stats --file /var/log/VPlex/cli/director_1_1_A.csv
Navigate to the monitor sinks context and use the ll sink-name command to display the
sink:
VPlexcli:/> cd /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats/sinks
VPlexcli:/monitoring/directors/Director-1-1-A/monitors/director-1-1-A_stats/sinks> ll file
/monitoring/directors/Director-1-1-A/monitors/director-1-1-A_stats/sinks/file:
Name Value
------- -------------------------------
enabled true
format csv
sink-to /var/log/VPlex/cli/director_1_1_A.csv
type file
monitor collect
Force an immediate poll and collection of performance data without waiting for the
automatic poll interval.
Description Polls and collects performance data from user-defined monitors. Monitors must have at
least one enabled sink.
Example
VPlexcli:/> monitor collect
/monitoring/directors/director-2-1-B/monitors/director-2-1-B_TestMonitor
VPlexcli:/>
Source: director-2-1-B_TestMonitor
Time: 2010-07-01 10:05:55
director.be-ops (counts/s):
.
.
.
◆ monitor create
◆ report poll-monitors
monitor create
Creates a performance monitor.
Optional arguments
[-p|--period] collection-period - Frequency at which this monitor collects statistics. Valid
arguments are an integer followed by:
ms - milliseconds (period is truncated to the nearest second)
s - seconds (Default)
min - minutes
h - hours
0 - Disables automatic polling.
Description Performance monitoring collects and displays statistics to determine how a port or volume
is being used, how much I/O is being processed, CPU usage, and so on.
The VPLEX collects and displays performance statistics using two user-defined objects:
◆ monitors - Gather the specified statistics.
◆ monitor sinks - Direct the output to the desired destination. Monitor sinks include the
console, a file, or a combination of the two.
The monitor defines the automatic polling period, the statistics to be collected, and the
format of the output. The monitor sinks define the output destination.
Polling occurs when:
◆ The timer defined by the monitor’s period attribute has expired.
◆ The monitor has at least one sink with the enabled attribute set to true.
Polling is suspended when:
◆ The monitor’s period is set to 0, and/or
◆ All the monitor’s sinks are either removed or their enabled attribute is set to false
Create short-term monitors to diagnose an immediate problem.
Create longer-term monitors for ongoing system management.
If the second file exceeds 10 MB, it is saved as filename.csv.2, and subsequent output is
saved in filename.csv. Up to 10 such rotations, with numbered .csv files, are supported.
When the file sink is removed or the monitor is destroyed, output to the .csv file stops, and
the current .csv file is time stamped. For example:
service@sms-cluster-1:/var/log/VPlex/cli> ll my-data.csv*
-rw-r--r-- 1 service users 10566670 2012-03-06 21:23 my-data.csv.1
-rw-r--r-- 1 service users 5637498 2012-03-06 21:26
my-data.csv_20120306092614973
Procedure overview To create and operate a monitor, use the following general steps:
1. Determine the type of statistic to collect from the target object.
Use the monitor stat-list category and/or the monitor stat-list * command to display
the statistics to include in the monitor.
Note whether the statistic you want to collect requires an argument (port number,
volume ID).
2. Determine how often the monitor should collect statistics.
3. Use the monitor create command to create a monitor.
4. Use the monitor add-sink commands to add one or more sinks to the monitor.
Add a console sink to send performance data to the Unisphere for VPLEX.
Add a file sink to send performance data to a specified file.
5. The monitor begins operation (polling and collecting performance data) when the sink
is added to the monitor.
To disable automatic polling without deleting the monitor or its sinks, do one of the
following:
• Use the set command to change the monitor’s period attribute to 0.
• Use the set command to change the sink’s enabled attribute to false.
6. Use the monitor collect command to update/collect statistics immediately without
waiting for the monitor’s automatic collection.
7. Monitor output.
Console sinks display monitor output on the console.
For file sinks, navigate to /var/log/VPlex/cli/ on the management server and use the
tail -f filename command to display the output,
or:
Send output to a csv file, open the file in Microsoft Excel and create a chart.
Example Create a simple monitor with the default period, and no targets:
VPlexcli:/monitoring> monitor create --name TestMonitor --director Director-2-1-B --stats
director.fe-read,director.fe-write
Successfully created 1 monitor(s) out of 1.
/clusters/cluster-1/storage-elements/storage-volumes/*
monitor destroy
Destroys a performance monitor.
Optional arguments
[-f|-- force] - Destroy monitors with enabled sinks and bypass confirmation.
[-c|--context-only] - Removes monitor contexts from the Unisphere for VPLEX and the CLI,
but does not delete monitors from the firmware. Use this argument to remove contexts
that were created on directors to which the element manager is no longer connected.
Example
VPlexcli:/> monitor destroy Cluster_2_Dir_2B_diskReportMonitor,
Cluster_2_Dir_2B_portReportMonitor,Cluster_2_Dir_2B_volumeReportMonitor
WARNING: The following items will be destroyed:
Context
------------------------------------------------------------------------------------
/monitoring/directors/Cluster_2_Dir_2B/monitors/Cluster_2_Dir_2B_diskReportMonitor
/monitoring/directors/Cluster_2_Dir_2B/monitors/Cluster_2_Dir_2B_portReportMonitor
/monitoring/directors/Cluster_2_Dir_2B/monitors/Cluster_2_Dir_2B_volumeReportMonitor
monitor remove-sink
Removes a sink from a performance monitor.
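An illustrative invocation, removing the console sink created by monitor add-console-sink (how the sink is identified here is an assumption for this sketch):
VPlexcli:/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor> monitor remove-sink console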
◆ monitor add-console-sink
◆ monitor add-file-sink
monitor stat-list
Displays statistics available for performance monitoring.
Description Performance statistics are grouped into categories. Use the monitor stat-list command
followed by the <Tab> key to display the statistics categories.
Use the --categories categories argument to display the statistics available in the specified
category.
Use the * wildcard to display all statistics for all categories.
Note: A complete list of the command output is available in the EMC VPLEX Administration
Guide.
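For example, to list the statistics in a single category (the category name director is used here for illustration; press <Tab> after the command to see the categories available on your system):
VPlexcli:/> monitor stat-list --categories director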
ndu pre-check
Performs a pre-NDU validation and check.
Description Run the ndu pre-check command before you perform a non-disruptive upgrade of
GeoSynchrony on a system. The command runs through a number of checks to determine
whether the non-disruptive upgrade would run into any errors in upgrading GeoSynchrony.
The checks performed by ndu pre-check are listed in the Upgrade procedure for each
software release. This procedure can be found in the VPLEX procedures in the SolVe
Desktop.
NDU pre-checks must be run within 24 hours before starting the NDU process.
NDU is not supported in a VPLEX Geo configuration if remote volumes are exported. NDU
does not proceed to the next release until the remote volumes are converted to local
volumes or distributed asynchronous volumes.
Disclaimers for multipathing in ndu pre-check give the user time to validate hosts.
For more detailed information about NDU pre-checks, see the upgrade procedures for
VPLEX in the SolVe Desktop.
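An illustrative invocation (a minimal sketch with no arguments; the checks that run depend on the release):
VPlexcli:/> ndu pre-check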
ndu pre-config-upgrade
Disruptively upgrades a VPLEX Geo that has not been fully installed and configured.
Description Disruptively upgrades a VPLEX Geo when the VPLEX is not fully installed and configured.
This command requires the VPLEX be in a pre-config state. Specifically, do not use this
procedure unless NO meta-volume is configured (or discoverable).
This command is used as part of a non-disruptive upgrade procedure for installed systems
that have not yet been configured. For more information, see the upgrade procedures for
VPLEX in the SolVe Desktop.
ndu recover
Perform NDU recovery after a failed NDU attempt.
Description If the NDU failed before I/O is transferred from the second upgraders (running old
software) to the first upgraders (running new software), then the first upgraders are rolled
back to the old software.
If the NDU failed after I/O transfer, the directors are rolled forward to the new software.
--dry-run - Do not perform the ssd firmware upgrade but run the same procedure as an
actual install (including netbooting the directors).
--skip-be-switch-check - Skips the NDU pre-check for unhealthy back-end switches.
--skip-cluster-status-check - Skip the NDU pre-check for cluster problems (missing
directors, suspended exports, inter-cluster link failure).
--skip-confirmations - Skip any user confirmations normally required before proceeding
when there are NDU pre-check warnings.
--skip-distributed-device-settings-check - Skips the NDU pre-check for distributed device
settings (auto-resume set to true).
--skip-fe-switch-check - Skips the NDU pre-check for unhealthy front-end switches.
--skip-group-be-checks - Skip all NDU pre-checks related to back-end validation. This
includes the system configuration validation and unreachable storage volumes
pre-checks.
--skip-group-config-checks - Skip all NDU pre-checks related to system configuration. This
includes the system configuration validation and director commission pre-checks.
--skip-group-fe-checks - Skip all NDU pre-checks related to front-end validation. This
includes the unhealthy storage views and storage view configuration pre-checks.
--skip-group-health-checks - Skip all NDU pre-checks related to system health validation.
This includes the system configuration validation, unhealthy virtual volumes, cluster
status, and the inter-cluster communications connectivity pre-checks.
--skip-meta-volume-backup-check - Skips the check to verify that backups for the
meta-data volumes at all clusters have been configured.
--skip-meta-volume-redundancy-check - Skips the NDU pre-check for verifying the
meta-volume redundancy.
--skip-remote-mgmt-version-check - Skip the remote management server version check.
--skip-storage-volumes-check - Skip the NDU pre-check for unreachable storage volumes.
--skip-sysconfig-check - Skip the system configuration validation NDU pre-check and
proceed with NDU even if there are errors with cache replication, logging volume setup,
back-end connectivity, and metadata volume health.
--skip-view-config-check - Skip the NDU pre-check for storage view configuration (front-end
high availability). This option is required to pass the NDU pre-checks when operating a
minimum configuration. For minimum configurations, front-end high-availability
pre-checks must be performed manually.
--skip-view-health-check - Skip the NDU pre-check for unhealthy storage views.
--skip-virtual-volumes-check - Skip the NDU pre-check for unhealthy virtual volumes.
--skip-wan-com-check - Skip the inter-cluster communications connectivity NDU pre-check
and proceed with NDU even if there are errors specifically related to inter-cluster
communications connectivity.
Skipping the WAN communications pre-check may increase the risk for NDU failure
should the inter-cluster communication connection fail.
Note: Multiple skip options can be specified to skip multiple pre-checks. Enter skip
options separated by a space.
Description Upgrades the directors one at a time. Assures that there are directors available to service
I/O as some of the directors are being upgraded. The upgraded director rejoins the system
before the next director is upgraded.
The director SSD firmware upgrade is performed by netbooting the director to ensure that
the SSD is not in use while the firmware is being upgraded.
Non-disruptively upgrades the SSD firmware on the directors in a running VPLEX system.
Refer to the upgrade procedure for VPLEX in the SolVe Desktop for more information on
using this command.
Example: Check whether the directors need the SSD firmware upgrade (--check-only option):
VPlexcli:/> ndu rolling-upgrade ssd-fw --image
/tmp/VPlexInstallPackages/VPlex-5.0.1.00.00.06-director-field-disk-image.tar --check-only
================================================================================
[Tue Feb 22 13:32:57 2011] Checking Director SSD Firmware
================================================================================
[UPGRADE REQUIRED: director-1-1-B] SSD Model:M8SB2-30UC-EMC 118032709, FwRev: C04-6693, To:
C04-6997
[UPGRADE REQUIRED: director-1-1-A] SSD Model:M8SB2-30UC-EMC 118032709, FwRev: C04-6693, To:
C04-6997
[UPGRADE REQUIRED: director-2-1-A] SSD Model:M8SB2-30UC-EMC 118032709, FwRev: C04-6693, To:
C04-6997
[UPGRADE REQUIRED: director-2-1-B] SSD Model:M8SB2-30UC-EMC 118032709, FwRev: C04-6693, To:
C04-6997
================================================================================
The output for 'ndu ssd-fw' has been captured in
/var/log/VPlex/cli/capture/ndu-ssd-fw-session.txt
Example Use the following syntax to target specific directors for the SSD firmware upgrade using
the --targets option:
VPlexcli:/> ndu rolling-upgrade ssd-fw --image /tmp/VPlexInstallPackages/
VPlex-5.0.1.00.00.06-director-field-disk-image.tar --targets director-*-2-A
Example Use the following syntax to upgrade all the directors if the SSD firmware in the given image
is newer:
VPlexcli:/> ndu rolling-upgrade ssd-fw --image
/tmp/VPlexInstallPackages/VPlex-5.0.1.00.00.06-director-field-disk-image.tar
Example Use the following syntax to do a dry-run of the rolling upgrade of SSD firmware but skip the
actual step of writing the new SSD firmware to the SSD device.
================================================================================
[Mon Jan 24 16:15:56 2011] Checking Director SSD Firmware
================================================================================
[UPGRADE REQUIRED: director-2-2-A] SSD Model:M8SB1-56UC-EMC 118032769, FwRev: C08-6997, To:
C08-6997
[UPGRADE REQUIRED: director-1-2-A] SSD Model:M8SB1-56UC-EMC 118032769, FwRev: C08-6997, To:
C08-6997
================================================================================
Performing NDU pre-checks
================================================================================
Verify director communication status.. OK
================================================================================
.
.
.
================================================================================
[Mon Jan 24 16:17:59 2011] SSD firmware upgrade /engines/engine-1-2/directors/director-1-2-A
================================================================================
Waiting for system to be stable before upgrading the SSD firmware on
/engines/engine-1-2/directors/director-1-2-A : .DONE
[PXE] Whitelisting director-1-2-a
Rebooting director-1-2-A: .DONE
Waiting for netboot. This will take several minutes.
upgrading the ssd firmware on /engines/engine-1-2/directors/director-1-2-A
SSD Desired FwRev: C08-6997
SSD Model: M8SB1-56UC-EMC 118032769, Current SSD FwRev: C08-6997
Skipping SSD Firmware Upgrade. [already at C08-6997]
.
.
.
================================================================================
[Mon Jan 24 16:39:53 2011] Successfully upgraded the SSD firmware on
/engines/engine-2-2/directors/director-2-2-A
================================================================================
================================================================================
[Mon Jan 24 16:39:53 2011] Teardown PXE Server
================================================================================
Waiting for system to be stable after upgrading the SSD firmware on directors: ...DONE
Cleaning up management server at cluster-2: .DONE
================================================================================
[Mon Jan 24 16:40:35 2011] Director SSD Firmware upgrade summary
================================================================================
director-1-2-A: SSD Model: M8SB1-56UC-EMC 118032769, Current FwRev: C08-6997, Image FwRev:
C08-6997
director-2-2-A: SSD Model: M8SB1-56UC-EMC 118032769, Current FwRev: C08-6997, Image FwRev:
C08-6997
================================================================================
The output for 'ndu ssd-fw' has been captured in
/tmp/derk/clidir/capture/ndu-ssd-fw-session.txt
VPlexcli:/>
ndu start
Begins the non-disruptive upgrade (NDU) process of the director firmware.
Optional arguments
--cws-package cws firmware tar file - Full path to Cluster Witness Server package on the
management server.
In order to start an NDU, the VPLEX must be fully installed and a meta-volume must be
present. It is mandatory to run the ndu pre-check command before running this
command. For the director firmware upgrade to be non-disruptive to host I/O, the NDU
pre-check must pass without any errors. The system must be healthy and in a highly
available configuration.
--force-geo results in temporary data unavailability at the losing cluster for asynchronous
consistency groups.
Skipping the WAN communications pre-check may increase the risk for NDU failure due to
WAN-COM issues that could have been noted in the ndu pre-check.
--skip-cws-upgrade - Skips the upgrade of the Cluster Witness Server and proceeds with
the rest of the NDU.
--skip-be-switch-check - Skips the NDU pre-check for unhealthy back-end switches.
--skip-cluster-status-check - Skip the NDU pre-check for cluster problems (missing
directors, suspended exports, inter-cluster link failure, and so on).
--skip-confirmations - Skip any user confirmations normally required before proceeding
when there are NDU pre-check warnings.
--skip-distributed-device-settings-check - Skips the NDU pre-check for distributed device
settings (auto-resume set to true).
--skip-fe-switch-check - Skips the NDU pre-check for unhealthy front-end switches.
--skip-group-be-checks - Skip all NDU pre-checks related to back-end validation. This
includes pre-checks for system configuration validation and unreachable storage
volumes.
--skip-group-config-checks - Skip all NDU pre-checks related to system configuration. This
includes the system configuration validation and director commission pre-checks.
--skip-group-fe-checks - Skip all NDU pre-checks related to front-end validation. This
includes the unhealthy storage views and storage view configuration pre-checks.
--skip-group-health-checks - Skip all NDU pre-checks related to system health validation.
This includes the system configuration validation, unhealthy virtual volumes, cluster
status, and the inter-cluster communications connectivity pre-checks.
--skip-meta-volume-backup-check - Skips the check to verify that backups for the
meta-data volumes at all clusters have been configured.
--skip-meta-volume-redundancy-check - Skips the NDU pre-check for verifying the
meta-volume redundancy.
--skip-storage-volumes-check - Skip the NDU pre-check for unreachable storage volumes.
Description This command starts a non-disruptive upgrade and can skip certain checks to push a
non-disruptive upgrade when the ndu pre-checks command fails. The pre-checks
executed by the ndu pre-check command verify that the upgrade from the current software
to the new software is supported, the configuration supports NDU, and the system state is
ready (clusters and volumes are healthy).
You must resolve all issues disclosed by the ndu pre-check command before running the
ndu start command.
Skip options enable ndu start to skip one or more NDU pre-checks. Skip options should be
used only after fully understanding the problem reported by the pre-check to minimize the
risk of data unavailability.
Note: Skip options may be combined to skip more than one pre-check. Multiple skip
options must be separated by a space.
IMPORTANT
It is recommended that you upgrade VPLEX using the upgrade procedure found in the
SolVe Desktop. This procedure also details when the ndu start command should be used
with skip options and how to select and use those skip options.
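An illustrative invocation with a single skip option (a sketch only; the option used to supply the director firmware package is an assumption and the path is a placeholder; follow the SolVe Desktop procedure for the exact syntax):
VPlexcli:/> ndu start --firmware <director-firmware-package.tar> --skip-be-switch-check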
ndu status
Displays the NDU status.
Description If an NDU firmware or OS upgrade is running, this command displays the upgrade activity.
If neither an NDU firmware nor an OS upgrade is running, this command displays information
about the previous NDU firmware upgrade.
If the last operation was a rolling-upgrade, the OS upgrade information is displayed. The
ndu start command clears this information.
If an NDU firmware or OS upgrade has failed, this command displays a message to use the
ndu recover command.
If an NDU recovery is in progress, has succeeded, or has failed, this command displays a
status message.
================================================================================
[Fri Dec 17 18:05:21 2010] System state summary
================================================================================
The directors {director-1-1-B, director-1-1-A, director-1-2-B, director-1-2-A} are
operational at version 2.1.19.0.0.
================================================================================
The output for 'ndu status' has been captured in
/var/log/VPlex/cli/capture/ndu-status-session.txt
Display NDU status after an NDU failed and ndu recover was run:
VPlexcli:/> ndu status
Gathering NDU status...
NDU recover succeeded on management server 127.0.0.1 on Fri, 17 Dec 2010 01:00:27.
================================================================================
[Fri Dec 17 01:05:25 2010] System state summary
================================================================================
The directors {director-1-1-B, director-1-1-A, director-1-2-B, director-1-2-A} are
operational at version 2.1.19.0.0.
================================================================================
The output for 'ndu status' has been captured in
/var/log/VPlex/cli/capture/ndu-status-session.txt
Optional arguments
[-f|--force] - Forces the import of the specified file without asking for confirmation. Allows
this command to be run from non-interactive scripts.
Description Imports and applies modifications to call-home events. This command imports the
specified .xml file that contains modified call-home events. There can be two types of .xml
event files:
◆ EMC-generic events are modifications recommended by EMC.
EMC provides an .xml file containing commonly requested modifications to the default
call-home events.
◆ Customer-specific events are events modified to meet a specific customer
requirement.
EMC provides a custom events file developed by EMC engineering and applied by EMC
Technical Support.
Call-home behavior changes immediately when the modified events file is applied.
If a customized events file is already applied, applying a new file overrides the existing
file.
If the same event is modified in both the customer-specific and EMC-generic files, the
modification specified in the customer-specific file is applied for that event, and the note
“Not applied” appears in the command output.
If call-home notification is disabled when the custom events file is applied, the modified
events are saved and applied when call-home notification is enabled.
Use the set command to enable/disable call-home notifications.
Use the ls notifications/call-home command to display whether call-home notification is
enabled:
VPlexcli:/> ls /notifications/call-home
/notifications/call-home:
Attributes:
Name Value
------- -----
enabled true
Example In the following example, the custom events file 'custom.xml' is imported from a directory
on the management server and applied when call-home notification is disabled:
VPlexcli:/notifications/call-home> import-event-modifications -m /home/service/demo/custom.xml
Importing the 'custom_events.xml' file will override the existing call-home events, for all the
event categories specified in the 'custom_events.xml' file.
Do you want to proceed? (Yes/No) yes
VPlexcli:/notifications/call-home>
Example In the following example, a custom events file is imported when call-home notification is
enabled:
VPlexcli:/notifications/call-home> import-event-modifications --file
/home/service/Test/customCallHome.xml
Importing the 'custom_events.xml' file will override the existing call-home events, for all the
event categories specified in the 'custom_events.xml' file.
Do you want to proceed? (Yes/No) yes
Description This command removes the specified custom call-home events file. There are two types of
.xml event files:
◆ EMC-generic events are modifications recommended by EMC.
EMC provides an .xml file containing commonly requested modifications to the default
call-home events.
◆ Customer-specific events are events modified to meet a specific customer
requirement.
EMC provides a custom events file developed by EMC engineering and applied by EMC
Technical Support.
If no file is specified, this command removes both custom call-home events files.
The specified file is not deleted from the management server. When a custom events file is
removed, the default events file LIC.xml is applied.
Use the ndu upgrade-mgmt-server command to re-import the file.
Example In the following example, the specified customer-specific call-home events file is
removed:
VPlexcli:/notifications/call-home> remove-event-modifications
--customer-specific
Example In the following example, the EMC-generic call-home events file is removed:
VPlexcli:/notifications/call-home> remove-event-modifications
--emc-generic
Example In the following example, both call-home events files are removed:
VPlexcli:/notifications/call-home> remove-event-modifications
Description If event modifications are applied to call-home events, this command displays those
events whose call-home events have been modified.
If the same event is modified by both the customer-specific and the EMC-generic events
files, the setting in the customer-specific file overrides the entry in the EMC-generic file.
Use this command with no arguments to display a summary of all event modifications.
Use this command with the -c or -e arguments to display a summary of only the
customer-specific or EMC generic modified events.
Use the --verbose argument to display detailed information.
Customer-specific events :
-------------------------
event code name severity
Example In the following example, the same event is modified by both the customer-specific and
the EMC-generic events files. The setting in the customer-specific file overrides the setting
in the EMC-generic file:
VPlexcli:/notifications/call-home> view-event-modifications
EMC-generic events :
---------------------
event code name severity
---------- ------------ --------
0x8a2d6025 SCSI_IT_LOST WARNING
0x8a2c901c scom_28_CRIT ERROR
Customer-specific events :
-------------------------
event code name severity
---------- ------------ --------
0x8a2d6025 SCSI_IT_LOST WARNING
0x8a029060 amf_96_CRIT CRITICAL
Example Use the --verbose argument to display detailed information about customer-specific event
modifications:
VPlexcli:/notifications/call-home> view-event-modifications --customer-specific --verbose
Customer-specific events :
-------------------------
name: CWS_EVENT_HEALTHY_CONNECTIVITY_INTERVAL_CHANGED
severity: WARNING
component: cws
locale: CLUSTER
obsolete event: NO
resend timeout: 60
default event: TRUE
threshold count: 0
threshold interval: 0
customer description: CUSTOM Cluster Witness Server Healthy Connectivity Interval is changed.
format string: Cluster Witness Server Healthy Connectivity Interval is changed from
%u to %u seconds
Description Call-home can be configured to send events to EMC Support and/or one or more
recipients in your organization.
Use this command to send a test event to the configured recipients. VPLEX sends the test
call-home within 1 minute of running this command.
If call-home is configured to send event notifications to personnel in your organization,
check the e-mail accounts specified to receive notifications to verify the test event arrived.
If call-home is configured to send event notifications to EMC, contact EMC Support to
verify that the test event arrived.
Use the set command to enable/disable call-home notifications.
Use the ls notifications/call-home command to verify that call-home is enabled:
VPlexcli:/> ls /notifications/call-home
/notifications/call-home:
Attributes:
Name Value
------- -----
enabled true
Example Use the call-home test command to send a test call-home event to the configured
recipients:
VPlexcli:/notifications/call-home> call-home test
call-home test was successful.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description This command cancels jobs that are queued in the job queue.
Only jobs that are in a Queued state can be cancelled. Once they are cancelled, the jobs
will remain in the job queue with a state of Cancelled.
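An illustrative invocation, following the same pattern as the delete examples shown below (the bare cancel form and the job name are assumptions for this sketch):
VPlexcli:/notifications/jobs> cancel Provision_4_14-04-14-15-57-03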
Optional arguments
[-s|--state] job state - Specifies a job state by which to filter the jobs. If specified, this
argument filters the jobs by their states. This option is most useful when all jobs are
specified in the command invocation.
[-f|--force] - Deletes the jobs without asking for confirmation.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution.
* - argument is positional.
Description This command permanently removes jobs from the job queue. A job that is in progress
cannot be deleted. All other job states - cancelled, failed, and successful - can be deleted.
Deleted 2 jobs.
Example The attempt to delete two specific jobs fails (the jobs are skipped):
VPlexcli:/notifications/jobs> delete Provision_4_14-04-14-15-57-03,
Provision_4_08-04-14-17-34-51
WARNING: The following items will be deleted:
Context
-------------------------------------------------
/notifications/jobs/Provision_4_08-04-14-17-34-51
/notifications/jobs/Provision_4_14-04-14-15-57-03
Do you wish to proceed? (Yes/No) y
Skipped 2 jobs.
Example Use the --verbose option to display more information for a job that is skipped. In this
example, one job was skipped, or not deleted, because it was in progress:
VPlexcli:/notifications/jobs> delete * --verbose
WARNING: The following items will be deleted:
Context
-------------------------------------------------
/notifications/jobs/Provision_4_08-04-14-17-34-51
/notifications/jobs/Provision_4_14-04-14-15-57-03
Do you wish to proceed? (Yes/No) y
WARNING: Operation Delete failed for target 'Provision_4_08-04-14-17-34-51'. Cannot delete job
'Provision_4_08-04-14-17-34-51' because its state is InProgress.
Deleted 1 job.
Skipped 1 job.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description Resubmit failed or canceled jobs to the job queue. The jobs will be placed in a queued
state and executed in normal priority order.
Example
VPlexcli:/notifications/call-home/snmp-traps> notifications snmp-trap create Test
VPlexcli:/notifications/call-home/snmp-traps> cd Test
VPlexcli:/notifications/call-home/snmp-traps/Test> ll
Name Value
---------------- ------
community-string public
remote-host -
remote-port 162
started false
Description The --force argument is required to destroy an SNMP trap sink that has been started.
password-policy reset
Resets the password-policies to the default factory settings.
Contexts /security/authentication
Note: This command can only be run by the admin user, so it prompts for the admin
password.
After successful authentication, if you do not specify the --force option, the command
displays a confirmation message and proceeds based on your response.
IMPORTANT
Using this command will override all existing password-policy configurations and return
them to the default settings.
Do not run this command unless you are certain that it is required to
return password-policy to default-state.
Do you want to proceed? (Yes/No) yes
password-policy set
Each attribute in the password policy is configurable. The new value is written to the
respective configuration file, and existing users are updated with the new configuration.
The following sections walk through the process of setting each attribute.
Contexts /security/authentication/password-policy
Description The password policies are not applicable to users configured through an LDAP server.
The password policies are not applicable to the service user.
Password inactive days is not applied to admin user to protect the admin user from
account lockouts.
Example To view the existing password-policy settings, run the ll command in the
security/authentication/password-policy context:
VPlexcli:/security/authentication/password-policy> ll
Name Value
----------------------- -----
password-inactive-days 1
password-maximum-days 90
password-minimum-days 1
password-minimum-length 8
password-warn-days 15
Example To view the permissible values for the password-policy set command, enter the command
with no options:
VPlexcli:/security/authentication/password-policy> set
attribute input-description
----------------------- --------------------------------------
name Read-only.
password-inactive-days Takes an integer value between -1 and
99999. Specifying a '-1' value will disable the policy.
password-maximum-days Takes an integer value between 1 and
99999.
password-minimum-days Takes an integer value between -1 and
99999. Specifying a '-1' value will disable the policy.
password-minimum-length Takes an integer value between 6 and
99999.
password-warn-days Takes an integer value between -1 and
99999. Specifying a '-1' value will disable the policy.
Example To set the password inactive days value to 3 days, use this command:
VPlexcli:/security/authentication/password-policy> set
password-inactive-days 3
admin password:
VPlexcli:/security/authentication/password-policy> ll
Name Value
----------------------- -----
password-inactive-days 3
password-maximum-days 90
password-minimum-days 1
password-minimum-length 8
password-warn-days 15
plugin addurl
Adds a URL to the plug-in search path.
Description Note: The plugin commands are not intended for customer use.
Plug-ins extend the class path of the CLI. Plug-ins support dynamic addition of
functionality. The plugin search path is used by the plugin register command.
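Example As an illustration only (the jar path is a placeholder), add a file: URL to the plug-in
search path:
VPlexcli:/> plugin addurl file:/tmp/my-plugins.jar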
plugin listurl
Lists URLs currently in the plugin search path.
Description The search path URLs are those locations added to the plugin search path using the plugin
addurl command.
Note: The plugin commands are not intended for customer use.
file:/opt/emc/VPlex/jython2.2/LibExt/AutoBundles/prodscripts.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/bin/commons-daemon.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/bin/bootstrap.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/bin/tomcat-juli.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-i18n-es.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-juli-adapters.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/catalina-tribes.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/servlet-api.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-coyote.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/realm-adapter.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/catalina-ha.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/jasper-jdt.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/catalina.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/catalina-ant.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/jsp-api.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/annotations-api.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/jasper-el.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/jasper.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-i18n-ja.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/el-api.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-i18n-fr.jar,
file:/opt/emc/VPlex/apache-tomcat-6.0.x/lib/tomcat-dbcp.jar
plugin register
Registers a shell plugin by class name.
Description The plugin class is found in the default classpath, or in locations added using the plugin
addurl command.
Plug-ins add a batch of commands to the CLI, generally implemented as a set of one or
more Jython modules.
Note: The plugin commands are not intended for customer use.
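Example As an illustration only (the class name is a placeholder), register a plug-in by class name:
VPlexcli:/> plugin register com.example.MyShellPlugin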
popd
Pops the top context off the stack, and changes the current context to that context.
Syntax popd
VPlexcli:/engines/engine-1-1/directors/director-1-1-B> popd
[/engines/engine-1-1/directors/director-1-1-B,
/clusters/cluster-1/storage-elements/storage-arrays, /, /]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A>
pushd
Pushes the current context onto the context stack, and then changes the current context to
the given context.
Syntax pushd
[-c|--context] context
Use the pushd command to push a second context onto the context stack:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays> pushd
/engines/engine-1-1/directors/director-1-1-A/
[/engines/engine-1-1/directors/director-1-1-A,
/clusters/cluster-1/storage-elements/storage-arrays, /, /]
Now, there are two contexts on the context stack. Use the pushd command to toggle
between the two contexts:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> pushd
[/clusters/cluster-1/storage-elements/storage-arrays,
/engines/engine-1-1/directors/director-1-1-A, /, /]
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays> pushd
[/engines/engine-1-1/directors/director-1-1-A,
/clusters/cluster-1/storage-elements/storage-arrays, /, /]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A>
rebuild set-transfer-size
Changes the transfer-size of the given devices.
Description If the target devices are rebuilding when this command is issued, the rebuild is paused
and resumed using the new transfer-size.
Note: If there are queued rebuilds, the rebuild may not resume immediately.
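Example As a sketch only, assuming the command accepts a device name followed by the new
transfer size (the argument order and the size value shown are assumptions):
VPlexcli:/distributed-storage/distributed-devices> rebuild set-transfer-size TestDisDevice 2M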
◆ rebuild status
rebuild show-transfer-size
Shows the transfer-size of specified RAID 1 devices.
rebuild status
Displays all global and cluster-local rebuilds along with their completion status.
Optional arguments
--show-storage-volumes - Displays all storage volumes that need to be rebuilt, both active
and queued. If not present, only the active rebuilds are displayed.
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds.
report aggregate-monitors
Aggregates the reports generated by the report create-monitors or monitor commands.
This command assumes that the per director reports have filenames with the following
format:
<report type>ReportMonitor_<director>.csv
All other files in the directory will be ignored.
Aggregate report filenames are in the following format:
<report type>Performance_<cluster>.csv
If an aggregate report already exists, it will be overwritten.
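Example As an illustration, and assuming that running the command with no arguments operates on
the default reports directory, an invocation might look like:
VPlexcli:/> report aggregate-monitors
In this sketch, per-director files such as diskReportMonitor_Cluster_1_Dir1A.csv would be combined
into an aggregate file such as diskPerformance_cluster-1.csv.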
report capacity-arrays
Generates a capacity report.
[-d|--directory] directory - Directory in which to create the csv files. Output is written to two
files:
◆ File for local storage: CapacityArraysLocal.csv
◆ File for shared storage: CapacityArraysShared.csv.
Default directory path: /var/log/VPlex/cli/reports/* on the management server.
Description Generates a capacity report for all the storage in a VPLEX, grouped by storage arrays.
This command assumes the following:
◆ All storage volumes in a storage array have the same tier value.
◆ The tier is indicated in the storage-volume name. The tier attribute in the virtual
volumes context is ignored.
If a file is specified, output is formatted as:
<time>,<cluster name>,<array name>,<tier string>,<alloc>,<unalloc devices>,<unalloc storage-volumes>
<time>,<alloc>,<unalloc devices>
If the files already exist, the report is appended to the end of the files.
Note: Tier IDs are required to determine the tier of a storage-volume/storage array. Storage
volumes that do not contain any of the specified IDs are given the tier value 'no-tier'.
The report is separated into two parts: local storage and shared storage.
◆ Local storage is accessible only from the same cluster where the storage is physically
located. Information in the report for local storage includes:
• Cluster id
• Storage array
• Tier - the tier of the storage array
• Allocated - storage that is visible through a view (exported)
• Unallocated-device - storage that is in devices, but not visible from a view. For
example, a virtual volume that has not been exported or free space in a device that
is not part of a virtual volume.
• Unallocated-storage-volume - storage in unused storage volumes.
◆ Shared storage is accessible from clusters other than where it is physically located
(distributed and remote virtual volumes). Information in the report for shared storage
includes:
• allocated - storage that is visible through a view (exported)
• unallocated-device - storage that is in devices, but not visible from a view. For
example, a virtual volume that has not been exported or free space in a device that
is not part of a virtual volume.
There is no 'tier' indicator for shared storage because the tiers may be different for
each mirror of a distributed-device.
There are no shared storage volumes. Storage Volumes are only locally accessible and
are part of the cluster allocation.
Display the report file To display the raw report file, do the following:
1. Exit to the management server:
VPlexcli:/> exit
Connection closed by foreign host.
service@ManagementServer:~>
2. Navigate to the VPLEX CLI reports directory (or the specified output directory):
service@ManagementServer:~> cd /var/log/VPlex/cli/reports
service@ManagementServer:/var/log/VPlex/cli/reports> ll
total 48
-rw-r--r-- 1 service users 2253 2010-08-12 15:46
CapacityArraysLocal.csv
-rw-r--r-- 1 service users 169 2010-08-12 15:46
CapacityArraysShared.csv
.
.
.
3. Use the cat filename command to display the file:
service@ManagementServer:/var/log/VPlex/cli/reports> cat CapacityArraysLocal.csv
Time, Cluster name, Array name, Tier string, Allocated volumes (GiB), Unalloc devices (GiB),
Unalloc storage_volumes (GiB)
2010-06-21 16:00:32, cluster-1, EMC-0x00000000192601378, no-tier, 0, 0, 5666242560000
2010-06-21 16:00:32, cluster-1, EMC-0x00000000192601852, no-tier, 0, 0, 5292530073600
Example
"report capacity-arrays" was not able to extract the tier id from the following
storage-volumes/devices. Please ensure that the tier-regex contains 1 capture group.
tier-regex: ^[^_]+([HL])_.+$
storage-volumes/devices:
Symm1723_1FC
CX4_lun0
base_volume
Symm1852_1C0
VPD83T3:60000970000192601378533030313530
Symm1852_5C0
VPD83T3:60000970000192601378533030313538
report capacity-clusters
Generates a capacity report for every cluster.
Example
VPlexcli:/> report capacity-clusters
Cluster, Unclaimed disk capacity (GiB), Unclaimed storage_volumes, Claimed disk capacity(GiB),
Claimed storage_volumes, Used storage-volume capacity (GiB), Used storage_volumes, Unexported
volume capacity (GiB), Unexported volumes, Exported volume capacity (GiB), Exported volumes
cluster-1, 5705.13, 341, 7947.68, 492, 360.04, 15, 3.00, 3, 2201.47, 27
cluster-2, 5337.10, 328, 7995.69, 495, 2478.45, 137, 20.00, 3, 2178.46, 25
VPlexcli:/> report capacity-clusters --verbose
Cluster, StorageVolume Name, VPD83 ID, Capacity, Use, Vendor
cluster-1,CX4_Logging,VPD83T3:6006016021d02500e6d58bab2227df11,80G,used,DGC
cluster-1,CX4__M0,VPD83T3:6006016021d02500be83caff0427df11,90G,-data,DGC
cluster-1,CX4__M1,VPD83T3:6006016021d02500bf83caff0427df11,90G,claimed,DGC
cluster-1,CX4_lun0,VPD83T3:6006016021d0250026b925ff60b5de11,10G,used,DGC
.
.
.
report capacity-hosts
Generates a host capacity report.
Example
VPlexcli:/> report capacity-hosts
Cluster, Views, Exported capacity (GiB), Exported volumes
cluster-1, 2, 2209.47, 28
cluster-2, 1, 2178.46, 25
report create-monitors
Creates three performance monitors for each director in the VPLEX: storage-volume
performance, port performance, and virtual volume performance. Each monitor has one
file sink.
Description Creates three monitors for each director in the VPLEX. Monitors are named:
◆ Cluster_n_Dir_nn_diskReportMonitor
◆ Cluster_n_Dir_nn_portReportMonitor
◆ Cluster_n_Dir_nn_volumeReportMonitor
The period attribute for the new monitors is set to 0 (automatic polling is disabled). Use
the report poll-monitors command to force a poll.
Each monitor has one file sink. The file sinks are enabled.
By default, output files are located in /var/log/VPlex/cli/reports/ on the management
server. Output filenames are in the following format:
<Monitor name>_<Cluster_n_Dir_nn>.csv
time: 11 sec
Creating monitor volumeReportMonitor on Director Cluster_1_Dir1A monitoring 30 targets, file
/var/log/VPlex/cli/reports/volumeReportMonitor_Cluster_1_Dir1A.csv.
Successfully created 1 monitor(s) out of 1.
time: 0 sec
Creating monitor portReportMonitor on Director Cluster_1_Dir1A monitoring 16 targets, file
/var/log/VPlex/cli/reports/portReportMonitor_Cluster_1_Dir1A.csv.
Successfully created 1 monitor(s) out of 1.
Creating monitor diskReportMonitor on Director
Cluster_1_Dir1A monitoring 981 targets, file
/var/log/VPlex/cli/reports/diskReportMonitor_Cluster_1_Dir1A.csv.
Successfully created 1 monitor(s) out of 1.
.
.
.
VPlexcli:/> ll /monitoring/directors/*/monitors
/monitoring/directors/Cluster_1_Dir1A/monitors:
Name Ownership Collecting Period Average Idle For Bucket Bucket Bucket Bucket
----------------------------------- --------- Data ------ Period -------- Min Max Width Count
----------------------------------- --------- ---------- ------ ------- -------- ------ ------- ------ ------
Cluster_1_Dir1A_diskReportMonitor true true 0s - 7.1min 100 1600100 25000 64
Cluster_1_Dir1A_portReportMonitor true true 0s - 6.88min - - - 64
Cluster_1_Dir1A_volumeReportMonitor true true 0s - 6.9min - - - 64
/monitoring/directors/Cluster_1_Dir1B/monitors:
Name Ownership Collecting Period Average Idle Bucket Bucket Bucket Bucket
----------------------------------- --------- Data ------ Period For Min Max Width Count
----------------------------------- --------- ---------- ------ ------- ------- ------ ------- ------ ------
Cluster_1_Dir1B_diskReportMonitor true true 0s - 6.88min 100 1600100 25000 64
Cluster_1_Dir1B_portReportMonitor true true 0s - 6.68min - - - 64
Cluster_1_Dir1B_volumeReportMonitor true true 0s - 6.7min - - - 64
.
.
.
In the following example, the --force argument forces the creation of monitors, even
though the creation results in missed polling periods:
VPlexcli:/> report create-monitors --force
Creating monitor diskReportMonitor on Director Cluster_1_Dir1A monitoring 981 targets, file
/var/log/VPlex/cli/reports/diskReportMonitor_Cluster_1_Dir1A.csv.
WARNING: One or more of your monitors is currently at least 25.0% behind its polling period.
WARNING: One or more of your monitors is currently at least 25.0% behind its polling period.
time: 1 sec
Creating monitor volumeReportMonitor on Director Cluster_1_Dir1A monitoring 30 targets, file
/var/log/VPlex/cli/reports/volumeReportMonitor_Cluster_1_Dir1A.csv.
WARNING: One or more of your monitors is currently at least 25.0% behind its polling period.
WARNING: One or more of your monitors is currently at least 25.0% behind its polling period.
.
.
.
report poll-monitors
Polls the report monitors created by the report create-monitors command.
Description The monitors created by the report create-monitors command have their period attribute
set to 0 seconds (automatic polling is disabled) and one file sink.
Use this command to force an immediate poll and collection of performance data for
monitors created by the report create-monitors command.
Output is written to files located in /var/log/VPlex/cli/reports/ on the management
server.
Example
VPlexcli:/> report poll-monitors
Collecting data for director Cluster_2_Dir_1B monitor Cluster_2_Dir_1B_diskReportMonitor.
Collecting data for director Cluster_2_Dir_1B monitor Cluster_2_Dir_1B_portReportMonitor.
Collecting data for director Cluster_2_Dir_1B monitor Cluster_2_Dir_1B_volumeReportMonitor.
.
.
.
rp import-certificate
Imports a RecoverPoint security certificate from the specified RPA cluster.
Syntax rp import-certificate
Description This command runs an interview script to import the RecoverPoint security certificate.
In Metro systems, run this command on both management servers.
This command restarts VPLEX CLI and GUI sessions on the VPLEX cluster to which the RPA
cluster is attached. With this release, using this command will cause the loss of any VPLEX
Integrated Array Services (VIAS) provisioning jobs that have been created.
Example Import the RecoverPoint security certificate from the RPA cluster at IP address
10.6.210.85:
VPlexcli:/> rp import-certificate
This command will cause the VPLEX CLI process to restart if security settings are modified.
This will require a new log in from all connected CLI and GUI clients.
-----Certificate Details-----
◆ rp validate-configuration
rp rpa-cluster add
Associates a cluster of RecoverPoint Appliances with a single VPLEX cluster.
Optional arguments
[-c|--cluster] cluster-id - Context path of the VPLEX cluster associated with this cluster of
RPAs. If no VPLEX cluster is specified, the ID of the local cluster is used. The local cluster is
the cluster whose cluster-id matches the management server’s IP seed. See “About cluster
IP seed and cluster ID” in the security ipsec-configure command.
* argument is positional.
Description Adds information about a RecoverPoint Appliance cluster to VPLEX. Used by VPLEX to
connect to RecoverPoint and retrieve replication information.
In Metro systems, run this command on both management servers.
After the RPA cluster is added, information about the RPA cluster and its consistency
groups and volumes appear in the following VPLEX CLI contexts and commands:
◆ /recoverpoint/rpa-clusters/ip_address/volumes
◆ /clusters/cluster_name/consistency-groups/cg_name/recoverpoint
◆ rp summary command
◆ rp validate-configuration command
/recoverpoint/rpa-clusters:
RPA Host VPLEX Cluster RPA Site RPA ID RPA Version
----------- ------------- --------- ------ -----------
10.6.210.87 cluster-1 Tylenol-1 RPA 1 4.1(d.147)
10.6.211.3 cluster-2 Tylenol-2 RPA 1 4.1(d.147)
VPlexcli:/> ll recoverpoint/rpa-clusters/10.6.210.87/
/recoverpoint/rpa-clusters/10.6.210.87:
Attributes:
Name Value
---------------------- -------------------------------------------------------
admin-username admin
config-changes-allowed true
rp-health-indications [Problem detected with RecoverPoint RPAs and splitters]
rp-health-status warning
rp-software-serial-id -
rpa-host 10.6.210.87
rpa-id RPA 1
rpa-site Tylenol-1
rpa-version 4.1(d.147)
vplex-cluster cluster-1
Contexts:
Name Description
------------------ -----------------------------------------------------------
consistency-groups Contains all the RecoverPoint consistency groups which
consist of copies local to this VPLEX cluster.
volumes Contains all the distributed virtual volumes with a local
extent and the local virtual volumes which are used by this
RPA cluster for RecoverPoint repository and journal volumes
and replication volumes.
VPlexcli:/> cd recoverpoint/rpa-clusters/10.6.210.87/consistency-groups
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups> ll
Name
----
CG-1
CG-2
CG-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups> ll CG-1
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1:
Attributes:
Name Value
----------------------------- ----------------------
active-replicating-rp-cluster Tylenol-1
distributed-group false
enabled true
preferred-cluster Tylenol-1
preferred-primary-rpa RPA2
production-copy Pro-1
protection-type MetroPoint Replication
uid 7a59f870
Contexts:
Name Description
---------------- -------------------------------------------------------------
copies Contains the production copy and the replica copies of which
this RecoverPoint consistency group consists.
links Contains all the communication pipes used by RecoverPoint to
replicate consistency group data between the production copy
and the replica copies.
replication-sets Contains all the replication sets of which this RecoverPoint
consistency group consists.
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1> ll copies
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/copies:
Name
-----
Pro-1
Pro-2
REP-1
REP-2
REP-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1> ll links
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/links:
Name
------------
Pro-1->REP-1
Pro-1->REP-3
Pro-2->REP-2
Pro-2->REP-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1> ll replication-sets
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/replication-sets:
Name
-----
RSet0
RSet1
RSet2
RSet3
VPlexcli:/> ll recoverpoint/rpa-clusters/10.6.210.87/volumes/ty_dr1_pro_vol
/recoverpoint/rpa-clusters/10.6.210.87/volumes/ty_dr1_pro_vol:
Name Value
------------------------- --------------------------------
rp-consistency-group CG-1
rp-consistency-group-copy Pro-1
rp-replication-set RSet0
rp-role Production Source
rp-type Replication
rpa-site Tylenol-1
size 5G
uid 6000144000000010f03de8cb2f4c66d9
vplex-cluster cluster-1
vplex-consistency-group cg1_pro
Field Description
admin-username A pre-configured RecoverPoint user with an admin role granted all system permissions to
manage RecoverPoint. Excluded privileges: downloading objects located on the RPAs,
changing users and roles, security levels, and LDAP configuration.
config-changes-allowed Whether or not RecoverPoint appliance configuration changes are currently allowed
(maintenance mode). RecoverPoint appliance configuration changes are not allowed
during VPLEX NDU.
rp-health-indications If rp-health-status is anything other than OK, additional information about the problem
and the component that is impacted.
Field Description
rp-health-status Operational health of the RP cluster components, including WAN, volumes, RPAs, and
splitters.
OK - All components of the RP configuration are operating as expected.
error - One or more components of the RP configuration is not operating as expected. The
rp-health-indications field displays additional information.
warning - One or more components of the RP configuration is not operating as expected.
The rp-health-indications field displays additional information.
unknown - VPLEX cannot connect to the RPA cluster.
VPLEX Cluster Cluster ID or name of the VPLEX cluster associated with the RPA.
RPA Site Name of the RPA site. There can be up to two sites in a RecoverPoint installation; a local
site and a remote site. In one-site configurations (CDP), both the production and local
copy reside at the local site. In two-site configurations (stretch CDP, CRR, and CLR), the
production copy is at the local site and the remote copy is at the remote site.
in /recoverpoint/rpa-clusters/ip-address/consistency-groups context
copies Contains the production copy and the replica copies of which this RecoverPoint
consistency group consists.
links Contains all the communication pipes used by RecoverPoint to replicate consistency group
data between the production copy and the replica copies.
replication-sets Contains all the replication sets of which this RecoverPoint consistency group consists.
in /recoverpoint/rpa-clusters/ip-address/volumes context
Field Description
RP Group If the volume is a member of a RecoverPoint consistency group, the name of the group.
VPLEX Group The VPLEX consistency group to which this volume belongs. Production and replica
volumes associated with RecoverPoint must be in VPLEX consistency groups that have the
following attributes:
• Cache-mode property is synchronous
• Consistency groups with the “visibility” property set to both clusters must also have
their “storage-at-clusters” set to both clusters.
• Recoverpoint-enabled property set to true.
In /recoverpoint/rpa-clusters/ip-address/volumes/volume context
rp-replication-set The RecoverPoint replication set to which the volume belongs. Replication sets consist of
the production source volume and the replica volume(s) to which it replicates.
Every storage volume in the production storage must have a corresponding volume at each
copy.
uid Unique Identifier for the volume. A 64-bit number used to uniquely identify each VPLEX
volume.
vplex-cluster The VPLEX cluster with which this RPA cluster is associated.
vplex-consistency-group The name of the VPLEX consistency group of which this volume is a member.
rp rpa-cluster remove
Removes information about a RecoverPoint Appliance from VPLEX.
/recoverpoint/rpa-clusters:
RPA Host VPLEX Cluster RPA Site RPA ID RPA Version
----------- ------------- -------- ------ -----------
10.6.210.75 cluster-1 Site1 RPA 1 3.5(l.26)
VPlexcli:/> ls recoverpoint/rpa-clusters/10.6.210.75
rp summary
Displays a summary of replication for the entire VPLEX cluster, across all connected RPA
sites/clusters.
Syntax rp summary
Description This command calculates the total number of volumes and the total capacity for each RP
type and RP role it finds in the /recoverpoint/rpa-clusters context.
Also prints cumulative information.
RecoverPoint MetroPoint summary information is included in the totals.
Example Display a VPLEX Metro with RecoverPoint RPAs deployed at both VPLEX clusters:
VPlexcli:/> rp summary
1 MetroPoint group(s) are configured with 2 Production Source volumes using a total capacity
of 10G.
Distributed Volumes used for MetroPoint replication will be counted in the capacity of each
cluster above.
Field Description
VPLEX Cluster Name of the VPLEX cluster associated with a RecoverPoint RPA.
Total Capacity Total capacity of the volumes protected by RecoverPoint at the specified
VPLEX cluster.
rp validate-configuration
Validates the RecoverPoint splitter configuration.
Syntax rp validate-configuration
Description This command checks the system configuration with respect to RecoverPoint and displays
errors or warnings if errors are detected.
For VPLEX Metro configurations, run this command on both management servers.
When RPAs are zoned to VPLEX using single-channel mode (2 RPA ports are zoned to
VPLEX front end ports, and 2 RPA ports are zoned to the VPLEX back end ports) this
command reports the ports as “WARNING”. This is because the command checks that all
4 ports on the RPA are zoned to both VPLEX front end and back end ports (dual-channel
mode). See the second example listed below.
Best practice is to zone every RPA port to both VPLEX front end and back end ports. For
configurations where this is not possible or desirable, this command detects the port
configuration and displays a warning. Administrators who have purposely configured
single-channel mode can safely ignore the warning.
Validating that VPLEX sees all expected initiator ports from RPAs
WARNING
=======================================
Validation Summary
=======================================
The following potential problems were found in the system:
8 potential problem(s) were found with the zoning between VPLEX and RecoverPoint.
schedule add
Schedules a job to run at the specified times.
schedule list
Lists all scheduled jobs.
schedule modify
Modifies an existing scheduled job.
Example To modify a job with the ID of 3 so that it runs every day at 11:00 a.m., type:
VPlexcli:/> schedule list
[0] 30 13 * * 3 syrcollect
[1] * 1 * * * tree
[2] * 2 * * * tree
[3] * 3 * * * tree
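The modify invocation itself is not shown above. As a sketch only, assuming the command takes the
job ID followed by a crontab-style time specification (both the argument order and the quoting are
assumptions), the change described above might look like:
VPlexcli:/> schedule modify 3 "0 11 * * *"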
schedule remove
Removes a scheduled job.
scheduleSYR add
Schedules a weekly SYR data collection.
Description Typically, SYR collection and reporting are configured at initial system setup. Use this
command to add a scheduled SYR collection time if none was configured.
SYR data collection can be scheduled to occur at most once a week. Attempts to add
another weekly schedule results in an error.
SYR reporting gathers VPLEX configuration files and forwards them to EMC. SYR reports
provide:
◆ Faster problem resolution and RCA
◆ Proactive maintenance
◆ Data for performance analysis
To modify the existing SYR collection time, use the scheduleSYR remove command to
delete the current time, and the scheduleSYR add command to specify a new collection
time.
scheduleSYR list
Lists the scheduled SYR data collection job.
scheduleSYR remove
Removes the currently scheduled SYR data collection job.
Description Only one SYR data collection can be scheduled. The current SYR collection cannot be
modified. To modify the SYR data collection job:
◆ Use the scheduleSYR remove command to remove the existing collection job.
◆ Use the scheduleSYR add command to create a new collection job.
script
Changes to interactive Jython scripting mode.
Syntax script
[-i|--import] module
[-u|--unimport] module
Optional arguments
[-i|--import] module - Import the specified Jython module without changing to interactive
mode. After importation, commands registered by the module are available in the CLI. If
the module is already imported, it is explicitly reloaded.
[-u|--unimport] module - Unimport the specified Jython module without changing to
interactive mode. All the commands that were registered by that module are unregistered.
Description Changes the command mode from VPLEX CLI to Jython interactive mode.
To return to the normal CLI shell, type a period '.' and press ENTER.
Use the --import and --unimport arguments to import or unimport the specified Jython module
without changing to interactive mode.
Example Enter Jython interactive mode, then return to the CLI shell:
VPlexcli:/> script
>>> .
VPlexcli:/>
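Example As an illustration (the module name is a placeholder), import a Jython module without
entering interactive mode:
VPlexcli:/> script --import my_commands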
security create-ca-certificate
Creates a new Certification Authority (CA) certificate.
Optional arguments
[-l|--keylength] length - The length (number of bits) for the CA key.
Description A management server authenticates users against account information kept on its local
file system. An authenticated user can manage resources in the clusters.
The Certification Authority (CA) is used to sign management server certificates.
The security create-ca-certificate and security create-host-certificate commands create the
CA and host certificates using a pre-configured Distinguished Name where the Common
Name is the VPLEX cluster Top Level Administrator (TLA). If the TLA is not already set, it
must be set manually to prevent certificate creation failure.
Alternatively, use the --ca-subject-filename argument to create a custom Distinguished
Name. Specify the full path of the subject file unless the subject file is in the local CLI
directory.
This command creates two objects on the management server:
◆ A CA certificate file valid for 1825 days (5 years). The file is located at:
/etc/ipsec.d/cacerts/strongswanCert.pem
◆ A private key protected by a passphrase. The CA key is located at:
/etc/ipsec.d/private/strongswanKey.pem
Note: Take note of the passphrases you use to create these certificates and save them in a
secure location. They will be required at other times when maintaining the VPLEX clusters.
The steps to configure security on a VPLEX differ depending on whether the configuration
is a VPLEX Local or VPLEX Metro/Geo.
For VPLEX Local configurations:
1. Use the security create-ca-certificate command to create the CA certificate.
2. Write down the passphrase entered during the creation of CA certificate.
3. Use the security create-host-certificate command to create the host certificate.
A prompt is displayed for the passphrase used to create the CA certificate.
4. Write down the passphrase entered during the creation of host certificate.
5. Use the security ipsec-configure command to start the VPN process.
A prompt for the passphrase used to create the host certificate is displayed.
For VPLEX Metro and Geo configurations:
Note: Take note of the passphrases you use to create these certificates and save them in a
secure location. They will be required at other times when maintaining the VPLEX clusters.
scp /etc/ipsec.d/private/strongswanKey.pem
[email protected]:/etc/ipsec.d/private
IMPORTANT
In VPLEX Metro and Geo configurations: Use the same CA certificate passphrase for host
certificates on management servers 1 and 2.
Example Create a default CA certificate with the default CA certificate subject information.
VPlexcli:/> security create-ca-certificate
Example Create a custom CA certificate with a custom CA certificate subject information. In the
following example:
• The security create-certificate-subject command creates a custom subject file
named TestSubject.txt.
• The security create-ca-certificate command creates a custom CA certificate with the
specified custom subject file.
VPlexcli:/> security create-certificate-subject -c US -s NewYork -m EMC -u EMC -l NewYork -n
CommonTestName -e [email protected] -o TestSubject.txt
◆ security show-cert-subj
◆ EMC VPLEX Security Configuration Guide
security create-certificate-subject
Creates a subject file used in creating security certificates.
Optional arguments
[-c|--country] country - The Country value for the country key in the subject file.
[-s|--state] state - The State value for the state key in the subject file.
[-m|--org-name] organizational name - Organizational Name value for the organizational
name key in the subject file.
[-u|--org-unit] organizational unit - Organizational Unit value for the organizational unit key
in the subject file.
[-l|--locality] locality - Locality value for the locality key in the subject file.
[-n|--common-name] name - Name value for the name key in the subject file.
[-e|--email] e-mail - E-mail value for the e-mail key in the subject file.
[-o|--subject-out-filename] Output Subject Filename - The filename of the subject file to be
created.
--force - Overwrites the specified subject-out-filename if a file of that name already exists.
If a file with the subject-out-filename already exists and the --force argument is not
specified, the command fails.
SUBJECT_LOCALITY=Hopkinton
SUBJECT_ORG=EMC
SUBJECT_ORG_UNIT=EMC
SUBJECT_COMMON_NAME=FNM00102200421
[email protected]
security create-host-certificate
Creates a new host certificate and signs it with an existing CA certificate.
Description Generates a host certificate request and signs it with the Certification Authority certificate
created by the security create-ca-certificate command.
The CA Certificate and CA Key must be created prior to running this command.
The host certificate is stored at /etc/ipsec.d/certs.
The host key is stored at /etc/ipsec.d/private.
The host certificate request is stored at /etc/ipsec.d/reqs.
The CA certificate file is read from /etc/ipsec.d/cacerts.
The CA Key is read from /etc/ipsec.d/private.
Example Create a default host certificate with the default host certificate subject information:
VPlexcli:/> security create-host-certificate
Example Create a custom host certificate with the default host certificate subject information:
VPlexcli:/>security create-host-certificate --host-cert-outfilename
TestHostCert.pem --host-key-outfilename TestHostKey.pem
Example Create a custom host certificate with custom host certificate subject information. In the
following example:
• The security create-certificate-subject command creates a custom subject file
named TestSubject.txt.
• The security create-host-certificate command creates a custom host certificate with
the specified custom subject file.
security delete-ca-certificate
Deletes the specified CA certificate and its key.
Optional arguments
[-o|--ca-cert-outfilename] filename - CA Certificate output filename.
Default: strongswanCert.pem.
[-f|--ca-key-outfilename] filename - CA Key output filename.
Default: strongswanKey.pem.
Description Deletes the CA certificate and deletes the entries from the lockbox that were created by
EZ-setup.
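Example As a minimal illustration, running the command with no arguments acts on the default
certificate and key filenames listed above (any confirmation prompts are omitted here):
VPlexcli:/> security delete-ca-certificate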
security delete-host-certificate
Deletes the specified host certificate.
Optional arguments
[-o|--host-cert-outfilename] filename - host certificate output filename.
Default: hostCert.pem.
[-f|--host-key-outfilename] filename - Host key output filename.
Default: hostKey.pem.
Description Deletes the specified host certificate and deletes the entries from the lockbox that were
created by EZ-setup.
security export-ca-certificate
Exports a CA certificate and CA key to a given location.
Optional arguments
[-c|--ca-cert-filepath] absolute filepath - The absolute path of the CA certificate file to
export.
Default: /etc/ipsec.d/cacerts/strongswanCert.pem.
[-k|--ca-key-filepath] absolute filepath - The absolute path of the CA Key file to export.
Default: /etc/ipsec.d/private/strongswanKey.pem.
Note: The user executing this command must have write privileges at the location to which
the certificate is exported.
Example Export a custom CA certificate and its key (created using the security create-ca-certificate
command) to /var/log/VPlex/cli:
VPlexcli:/> security export-ca-certificate -c /etc/ipsec.d/cacerts/TestCACert.pem -k
/etc/ipsec.d/private/TestCAKey.pem -e /var/log/VPlex/cli
security export-host-certificate
Exports a host certificate and host key to the specified location.
Optional arguments
[-c|--host-cert-filepath] absolute filepath - The absolute path of the host certificate file to
export.
Default: /etc/ipsec.d/certs/hostCert.pem
[-k|--host-key-filepath] absolute filepath - The absolute path of the host key file to export.
Default: /etc/ipsec.d/private/hostKey.pem
Note: The user executing this command must have write privileges at the location to which
the certificate is exported.
Example Export a custom host certificate and its key (created using the security
create-host-certificate command) to /var/log/VPlex/cli:
VPlexcli:/> security export-host-certificate -c /etc/ipsec.d/certs/TestHostCert.pem -k
/etc/ipsec.d/private/TestHostKey.pem -e /var/log/VPlex/cli
security import-ca-certificate
Imports a CA certificate and CA key from a given location.
Optional arguments
[-i|--ca-cert-import-location] import location - The absolute path of the location to which to
import the CA certificate.
Default location - /etc/ipsec.d/cacerts.
[-j|--ca-key-import-location] import location - The absolute path of the location to which to
import the CA key.
Default location - /etc/ipsec.d/private.
Note: The user executing this command must have write privileges at the location from
which the certificate is imported.
If the import locations for the CA certificate and CA key have files with the same names,
they are overwritten.
The import/export of CA certificates does not work for external CA certificates.
Example Import the CA certificate and its key from /var/log/VPlex/cli to the default CA certificate
and key locations:
VPlexcli:/> security import-ca-certificate -c /var/log/VPlex/cli/strongswanCert.pem -k
/var/log/VPlex/cli/strongswanKey.pem
Example Import the CA certificate and key from /var/log/VPlex/cli directory to a custom location:
VPlexcli:/> security import-ca-certificate -c /var/log/VPlex/cli/strongswanCert.pem -k
/var/log/VPlex/cli/strongswanKey.pem -i /Test/cacerts -j /Test/private
security import-host-certificate
Imports a host certificate and host key from a given location.
Optional arguments
[-i|--host-cert-import-location] import location - The absolute path of the location to which
to import the host certificate.
Default location - /etc/ipsec.d/certs.
[-j|--host-key-import-location] import location - The absolute path of the location to which
to import the host key.
Default location - /etc/ipsec.d/private.
Note: The user executing this command must have write privileges at the location from
which the certificate is imported.
If the import locations for the host certificate and host key have files with the same
names, the files are overwritten.
Example Import the host certificate and its key from /var/log/VPlex/cli to a custom host certificate
and key location:
VPlexcli:/> security import-host-certificate -c /var/log/VPlex/cli/hostCert.pem -k
/var/log/VPlex/cli/hostKey.pem -i /Test/certs -j /Test/private
◆ security import-ca-certificate
◆ security ipsec-configure
◆ security show-cert-subj
◆ EMC VPLEX Security Configuration Guide
security ipsec-configure
Configures IPSec after the CA and host certificates have been created.
Optional arguments
[-c|--host-cert-filename] host certificate - host certificate filename.
[-k|--host-key-filename] host key - host key filename.
7. On the second cluster, use the security ipsec-configure command. Specify the IP
address of the first cluster.
8. On the first cluster, use the security ipsec-configure command. Specify the IP address
of the second cluster.
9. On either cluster, use the vpn status command to verify that the VPN is established.
Note: This command should be used only in VPLEX Metro and VPLEX Geo configurations to
create the VPN tunnel between clusters.
The distinguished name (DN) used to configure IPsec is read from the host certificate
created on the remote management server. The filename of the host certificate file created
on the remote management server must be hostCert.pem.
◆ security create-host-certificate
◆ security export-ca-certificate
◆ security import-ca-certificate
◆ security show-cert-subj
◆ “Using the security commands” in the security create-ca-certificate command
◆ EMC VPLEX Security Configuration Guide
security remove-login-banner
Removes the login banner from the management server.
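Example As a minimal illustration (any confirmation prompt is omitted), remove the current login
banner:
VPlexcli:/> security remove-login-banner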
security renew-all-certificates
Renews CA and host security certificates.
Description When VPLEX is installed, EZ-Setup creates one CA certificate and two or three host
certificates:
◆ Certification Authority (CA) certificate shared by all clusters
◆ VPN host certificate
◆ Web server host certificate
◆ VPLEX Witness host certificate (when VPLEX Witness is installed)
IMPORTANT
In Metro and Geo systems, you are always prompted for the Certificate Authority (CA)
passphrase when you run the command on the second cluster.
When renewing the certificates on the second cluster of a Metro or Geo system, you
may be prompted to enter the service password. Contact the System Administrator to
obtain the current service password.
In Metro and Geo systems, do not renew the security certificates using the current
passphrases if you do not have a record of the Certificate Authority (CA) passphrase. You
must provide the current CA passphrase when you renew the certificates on the second
cluster. If you do not have a record of the CA passphrase, do not renew the certificates
until you have the passphrase or renew with a common passphrase.
Before you begin: ◆ Navigate to the /etc/ssl directory on the management servers, and see on which
cluster the file index.txt includes this string: CN=VPlex VPN CWS. If the string is
present, run the renewal command on that cluster first.
◆ Use the vpn status command to verify that the VPN tunnel between clusters is
operational, and the Cluster Witness Server is reachable. Do not proceed if these
conditions are not present.
◆ Use the ll cluster-witness command to verify that the cluster-witness admin-state is
disabled. If it is enabled, use the cluster-witness disable command to disable it.
The certificates above will be renewed, to expire on the dates shown. Do you want to continue?
(Y/N): y
Would you like to renew the certificates using the current passphrases? (Y/N): no
Please create a passphrase (at least 8 chars) to be used for all the certificate renewals:
Re-enter the passphrase for the Certificate Key: CA-passphrase
Renewing CA certificate...
The CA certificate was successfully renewed.
Renewing VPN certificate...
The VPN certificate was successfully renewed.
Renewing WEB certificate...
Your Java Key Store has been created.
https keystore: /var/log/VPlex/cli/.keystore
started web server on ports {'http': 49880, 'https': 49881}
The Web certificate was successfully renewed.
Generating certificate renewal summary...
All VPLEX certificates have been renewed successfully
WARNING : After running this command on the first cluster, the VPN tunnel between clusters will
be down temporarily until you run this command on the second cluster. This will not affect I/O
but will result in the inability to manage the remote cluster.
The certificates above will be renewed, to expire on the dates shown. Do you want to
continue?(Y/N): y
Would you like to renew the certificates using the current passphrases? (Y/N): y
Some or all of the passphrases are not available, so new passphrases must be created:
Please create a passphrase (at least 8 chars) for the Certificate Authority renewal:
CA-passphrase
Please create a passphrase (at least 8 chars) for the VPN certificate renewal: VPN-passphrase
Please create a passphrase (at least 8 chars) for the web certificate renewal: WEB-passphrase
Please create a passphrase (at least 8 chars) for the cluster witness certificate renewal:
CWS-passphrase
Renewing CA certificate...
The CA certificate was successfully renewed.
Renewing CW certificate...
The CWS certificate was successfully renewed.
WARNING : After running this command on the first cluster, the VPN tunnel between clusters will
be down temporarily until you run this command on the second cluster. This will not affect I/O
but will result in the inability to manage the remote cluster.
Before continuing to renew certificates on this cluster, please confirm that certificates have
been renewed on the other cluster.
Have certificates have been renewed on the other cluster? (yes/no) (Y/N): y
Detecting all the VPLEX certificates currently configured on the system...
The certificates above will be renewed, to expire on the dates shown. Do you want to continue?
(Y/N): y
Would you like to renew the certificates using the current passphrases? (Y/N): y
Some or all of the passphrases are not available, so new passphrases must be created:
Please enter the 'service' account password( 8 chars ) for the Remote Management Server:
emc12345
Please enter the passphrase for the Certificate Authority on the remote cluster: CA-passphrase
Please create a passphrase (at least 8 chars) for the VPN certificate renewal: VPN-passphrase
Please create a passphrase (at least 8 chars) for the web certificate renewal: WEB-passphrase
Renewing CA certificate...
The CA certificate was successfully renewed.
security set-login-banner
Applies a text file as the login banner on the management server.
Optional arguments
[-f|--force] - Forces the addition of the login banner without asking for any user
confirmation. Allows this command to be run from non-interactive scripts.
Description This command applies the contents of the specified text file as the login banner for the
management server. The change takes effect at the next login to the management server.
The formatting of the text in the specified text file is replicated in the banner, and there is
no limit to the number of characters or lines in the file. Use this command to create a
customized login banner.
Example In the following example, a text file “login-banner.txt” containing the following lines is
specified as the login banner:
VPLEX cluster-1/Hopkinton
Test lab 3, Room 6, Rack 47
Metro with RecoverPoint CDP
VPlexcli:/> security set-login-banner -b
/home/service/login-banner.txt
The text provided in the specified file will be set as the Login banner
for this management server.
At next login to the management server, the new login banner is displayed:
login as: service
VPLEX cluster-1/Hopkinton
Test lab 3, Room 6, Rack 47
Metro with RecoverPoint CDP
Password:
security show-cert-subj
Displays the certificate subject file.
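Example As an illustration, and assuming the command can be run without arguments to display the
default subject file, an invocation looks like the following (output omitted):
VPlexcli:/> security show-cert-subj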
sessions
Displays active Unisphere for VPLEX sessions.
Syntax sessions
Description Displays the username, hostname, port and start time of active sessions to the Unisphere
for VPLEX.
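Example As an illustration only (the column layout, host name, and times shown are placeholders
based on the fields described above):
VPlexcli:/> sessions
Username Hostname  Port  Creation time
service  localhost 54321 Mon Jun 21 16:01:25 UTC 2010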
set
Changes the value of one or more writable attributes in the given context.
Syntax set
[-d|--default]
[-f|--force]
[-a|--attributes] selector pattern
[-v|--value] new value
Optional arguments
[-d|--default] - Sets the specified attribute(s) to the default values, if any exist. If no
attributes are specified, displays the default values for attributes in the current or
specified context.
[-f|--force] - Force the value to be set, bypassing any confirmations or guards.
[-a|--attributes] selector pattern - * Attribute selector pattern.
[-v|--value] new value - * The new value to assign to the specified attribute(s).
* - argument is positional.
Description Use the set command with no arguments to display the attributes available in the current
context.
Use the set --default command with no additional arguments to display the default values
for the current context or a specified context.
Use the set command with an attribute pattern to display the matching attributes and the
required syntax for their values.
Use the set command with an attribute pattern and a value to change the value of each
matching attribute to the given value.
An attribute pattern is an attribute name optionally preceded with a context glob pattern
and a double-colon (::). The pattern matches the named attribute on each context
matched by the glob pattern.
If the glob pattern is omitted, set assumes the current context.
If the value and the attribute name are omitted, set displays information on all the
attributes on all the matching contexts.
Example Display which attributes are writable in the current context, and their valid inputs:
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> set
attribute input-description
-------------------------------------------------------------------------------------------------------
application-consistent Takes one of '0', '1', 'f', 'false', 'n', 'no', 'off', 'on', 't', 'true', 'y', 'yes'
(not case sensitive).
auto-resume Takes one of '0', '1', 'f', 'false', 'n', 'no', 'off', 'on', 't', 'true', 'y', 'yes'
(not case sensitive).
block-count Read-only.
block-size Read-only.
capacity Read-only.
clusters-involved Read-only.
.
.
.
Use the --default argument without any attribute(s) to display the default values for the
current (or specified) context's attributes:
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> set --default
attribute default-value
---------------------- -----------------
application-consistent No default value.
auto-resume No default value.
block-count No default value.
.
..
Example Set the remote IP address and started attributes for SNMP traps:
VPlexcli:/notifications/call-home/snmp-traps/Test> set remote-host 10.6.213.39
VPlexcli:/notifications/call-home/snmp-traps/Test> set started true
storage-volumetype normal
system-id VPD83T3:6006016061212e00b0171b696696e211
thin-rebuild true
total-free-space 0B
underlying-storage-block-size 512
use used
used-by [extent_test01_1]
vendor-specific-name DGC
vias-based false
Example Use the set enabled false --force command in the notifications/call-home context to disable
call-home notifications (recommended during disruptive operations):
VPlexcli:/> cd /notifications/call-home/
VPlexcli:/notifications/call-home> set enabled false --force
Use the set enabled true command in the notifications/call-home context to enable call-home
notifications:
VPlexcli:/> cd /notifications/call-home/
VPlexcli:/notifications/call-home> set enabled true
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports> set
A0-FC03::enabled true
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports> ll
Name Address Role Port Status
------- ------------------ --------- -----------
A0-FC00 0x5000144260006e00 front-end no-link
A0-FC01 0x5000144260006e01 front-end up
A0-FC02 0x5000144260006e02 front-end up
A0-FC03 0x5000144260006e03 front-end no-link
A1-FC00 0x5000144260006e10 back-end up
A1-FC01 0x5000144260006e11 back-end up
A1-FC02 0x5000144260006e12 back-end no-link
A1-FC03 0x5000144260006e13 back-end no-link
A2-FC00 0x5000144260006e20 wan-com up
A2-FC01 0x5000144260006e21 wan-com up
A2-FC02 0x5000144260006e22 wan-com no-link
A2-FC03 0x5000144260006e23 wan-com no-link
A3-FC00 0x5000144260006e30 local-com up
Note: Changing a virtual-volume name does not cause any impact to host I/O or result in data unavailability (DU).
VPlexcli:/clusters/cluster-1/virtual-volumes/EMC-CLARiiON-0075-VNX-LUN122_1_vol>
set -a name -v new_name
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> ll
Name Value
------------------ -----------------------------------------------
block-count 2621440
block-size 4K
cache-mode synchronous
capacity 10G
consistency-group -
expandable true
health-indications []
health-state ok
locality local
operational-status ok
scsi-release-delay 0
service-status running
storage-tier -
supporting-device device_EMC-CLARiiON-APM00113700075-VNX_LUN122_1
system-id EMC-CLARiiON-0075-VNX-LUN122_1_vol
volume-type virtual-volume
Return to the virtual-volumes context and change directory to the new name:
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> cd ..
VPlexcli:/clusters/cluster-1/virtual-volumes> cd new_name
Run a listing on the volume to display the new name for the system-id:
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> ll
Name Value
------------------ -----------------------------------------------
block-count 2621440
block-size 4K
cache-mode synchronous
capacity 10G
consistency-group -
expandable true
health-indications []
health-state ok
locality local
operational-status ok
scsi-release-delay 0
service-status running
storage-tier -
supporting-device device_EMC-CLARiiON-APM00113700075-VNX_LUN122_1
system-id new_name
volume-type virtual-volume
◆ storage-volume unclaim
set topology
Changes the topology attribute for a Fibre Channel port.
Contexts /engines/engine/directors/director/hardware/ports/port
Note: According to best practices, the front-end ports should be set to the default p2p and
connected to the hosts via a switched fabric.
It is not recommended to change the topology on the local COM ports, as it can lead to
the directors going down and data unavailability.
VPlexcli:/engines/engine-1-1/directors/Cluster_1_Dir1A/hardware/ports/A4-FC02> ll
Name Value
------------------ ------------------
address 0x5000144240014742
current-speed 8Gbits/s
description -
enabled true
max-speed 8Gbits/s
node-wwn 0x500014403ca00147
operational-status ok
port-status up
port-wwn 0x5000144240014742
protocols [fc]
role wan-com
target-port -
topology p2p
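Example Following the listing above, the topology attribute can be changed with the set command. In
this sketch the value loop is an assumption used only for illustration; see the cautions above before
changing the topology on any port:
VPlexcli:/engines/engine-1-1/directors/Cluster_1_Dir1A/hardware/ports/A4-FC02> set topology loop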
show-use-hierarchy
Displays the complete usage hierarchy for a storage element from the top-level element
down to the storage-array.
Syntax show-use-hierarchy
[-t|--targets] context path,context path,...
Note: A complete context path to the targets must be specified. For example:
show-use-hierarchy /clusters/cluster-1/storage-elements/storage-volumes/volume
or:
show-use-hierarchy /clusters/cluster-1/storage-elements/storage-volumes/*
* - argument is positional.
This command drills from the specified target up to the top-level volume and down to the
storage-array. The command will detect sliced elements, drill up through all slices and
indicate in the output that slices were detected. The original target is highlighted in the
output.
storage-array: EMC-SYMMETRIX-192601690
Example Display the usage hierarchy for the specified meta volume:
VPlexcli:/clusters/cluster-1/system-volumes/meta_1> cd ..
VPlexcli:/clusters/cluster-1/system-volumes> show-use-hierarchy meta_1
meta-volume: meta_1 (78G, raid-1, cluster-1)
storage-volume: VPD83T3:6006016047a02d0046a94fcee916e111 (78G)
logical-unit: VPD83T3:6006016047a02d0046a94fcee916e111
storage-array: EMC-CLARiiON-APM00114103169
storage-volume: VPD83T3:60060480000190300487533030363245 (80G)
logical-unit: VPD83T3:60060480000190300487533030363245
storage-array: EMC-SYMMETRIX-190300487
Example Display the usage hierarchy for the specified logging volume:
VPlexcli:/> show-use-hierarchy clusters/cluster-1/system-volumes/log_1_vol
logging-volume: log_1_vol (2.01G, raid-1, cluster-1)
extent: extent_Symm0487_039A_1 (2.01G)
storage-volume: Symm0487_039A (2.01G)
logical-unit: VPD83T3:60060480000190300487533030333941
storage-array: EMC-SYMMETRIX-190300487
sms dump
Collects the log files on the management server.
Optional arguments
[-t|--target_log] target logName - Collect only files specified under logName from
smsDump.xml.
Note: The log files listed below are the core set of files collected; additional files that are
not listed here are also included.
Clilogs
/var/log/VPlex/cli/client.log* -- VPlexcli logs, logs dumped by VPlexcli scripts
/var/log/VPlex/cli/session.log* -- what the user does in a VPlexcli session
/var/log/VPlex/cli/firmware.log* -- nsfw.log files from all directors
ConnectEMC
/var/log/ConnectEMC/logs/* -- connectemc logs
/opt/emc/connectemc/archive -- connectemc logs
/opt/emc/connectemc/failed -- connectemc logs
/opt/emc/connectemc/*.xml -- connectemc logs
/opt/emc/connectemc/*.ini -- connectemc logs
/var/log/VPlex/cli/ema_adaptor.log*
Configuration
/var/log/VPlex/cli/*.config
/var/log/VPlex/cli/*xml
/var/log/VPlex/cli/*.properties
/var/log/VS1/cli/persistentstore.xml -- generated when user connects to VPlexcli
/var/log/VPlex/cli/connections -- what the VPlexcli is connected to.
/var/log/VPlex/cli/VPlexcommands.txt
/var/log/VPlex/cli/VPlexconfig.xml
/var/log/VPlex/cli/VPlexcli-init
/opt/vs1/backup/*.ini
/opt/vs1/backup/*.xml
/opt/emc/VPlex/*.xml
/opt/emc/VPlex/*.properties
Upgrade
/var/log/VPlex/cli/capture/* (ndu status files)
/tmp/VPlexInstallPackages/*.xml
/tmp/VPlexInstallPackages/*.properties
/tmp/VPlexInstallPackages/*.log
/var/log/install.log
System
/var/log/warn*
/var/log/messages*
/var/log/boot.msg
/var/log/boot.omsg
/var/log/firewall
/etc/sysconfig/SuSEfirewall2
/etc/sysconfig/network/ifcfg*
/etc/sysconfig/network/ifroute*
/etc/sysctl.conf
Example Collect the log files on the management server and send them to the designated
directory:
VPlexcli:/> sms dump --destination-directory /var/log/VPlex/cli
Initiating sms dump...
sms dump completed to file
/var/log/VPlex/cli/smsDump_2010-09-15_16.40.20.zip.
snmp-agent configure
Configures the VPLEX SNMP agent service on the local cluster.
Description Configures SNMP agent on the local cluster, and starts the SNMP agent.
snmp-agent configure checks the number of directors in the local cluster and configures
the VPLEX SNMP agent on the VPLEX management server. Statistics can be retrieved from
all directors in the local cluster.
Note: All the directors have to be operational and reachable through the VPLEX
management server before the SNMP agent is configured.
When configuration is complete, the VPLEX snmp-agent starts automatically.
◆ Runs on the management server and fetches performance related data from individual
directors using a firmware specific interface.
◆ Provides SNMP MIB data for directors for the local cluster only.
◆ Runs on Port 161 of the management server and uses the UDP protocol.
◆ Supports the following SNMP commands:
• SNMP Get
• SNMP Get Next
• SNMP Get Bulk
The SNMP Set command is not supported in this release.
VPLEX supports SNMP version snmpv2c.
VPLEX MIBs are located on the management server in the /opt/emc/VPlex/mibs directory.
Use the public IP address of the VPLEX management server to retrieve performance
statistics using SNMP.
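Example An illustrative invocation (a sketch; the command may prompt for additional SNMP settings, such as a community string, which are omitted here):
VPlexcli:/> snmp-agent configure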
snmp-agent start
Starts the SNMP agent service.
◆ snmp-agent unconfigure
snmp-agent status
Displays the status of the SNMP agent service on the local cluster.
Description Displays the status of the SNMP agent on the local cluster.
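Example An illustrative invocation (a sketch; output omitted):
VPlexcli:/> snmp-agent status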
snmp-agent stop
Stops the SNMP agent service.
snmp-agent unconfigure
Destroys the SNMP agent.
Description Unconfigures the SNMP agent on the local cluster, and stops the agent.
source
Reads and executes commands from a script.
Syntax source
[-f|--file] filename
Example In the following example, a text file Source.txt contains only two commands:
service@ManagementServer:/var/log/VPlex/cli> cat Source.txt
version -a
exit
When executed:
The first command in the file is run
The exit command exits the command shell
VPlexcli:/> source --file /var/log/VPlex/cli/Source.txt
What Version Info
---------------------------------------------- -------------- ----
Product Version 4.1.0.00.00.12 -
SMSv2 0.16.15.0.0 -
Mgmt Server Base D4_MSB_7 -
Mgmt Server Software D4.70.0.9 -
/engines/engine-2-1/directors/Cluster_2_Dir_1B 1.2.43.9.0 -
/engines/engine-2-1/directors/Cluster_2_Dir_1A 1.2.43.9.0 -
/engines/engine-1-1/directors/Cluster_1_Dir1B 1.2.43.9.0 -
/engines/engine-1-1/directors/Cluster_1_Dir1A 1.2.43.9.0 -
/engines/engine-2-2/directors/Cluster_2_Dir_2B 1.2.43.9.0 -
/engines/engine-2-2/directors/Cluster_2_Dir_2A 1.2.43.9.0 -
storage-tool compose
Creates a virtual-volume on top of the specified storage-volumes, building all intermediate
extents, local, and distributed devices as necessary.
Optional arguments
[-m|--source-mirror] source-mirror - Specifies the storage volume to use as a source mirror
when creating local and distributed devices.
Note: If specified, --source-mirror will be used as a source-mirror when creating local and
distributed RAID 1 devices. This will trigger a rebuild from the source-mirror to all other
mirrors of the RAID 1 device (local and distributed). While the rebuild is in progress the
new virtual volume (and supporting local and/or distributed devices) will be in a degraded
state, which is normal. This option only applies to RAID 1 local or distributed devices. The
--source-mirror may also appear in --storage-volumes.
Description This command supports building local or distributed (i.e., DR1-based) virtual volumes
with RAID 0, RAID 1, or RAID C local devices. It does not support creating multi-device
storage hierarchies (i.e., a RAID 1 on RAID 0s on RAID Cs).
For RAID 1 local devices, a maximum of eight legs may be specified.
If the new virtual-volume’s global geometry is not compatible with the specified
consistency-group or storage-views, the virtual-volume will not be created. However,
failure to add the new virtual-volume to the specified consistency-group or storage-views
does not constitute an overall failure to create the storage and will not be reported as
such.
Note: In the event of an error, the command will not attempt to perform a roll-back and
destroy any intermediate storage objects it has created. If cleanup is necessary, use the
show-use-hierarchy command on each storage-volume to identify all residual objects and
delete each one manually.
Example Create a virtual volume with RAID 1 local devices and specified storage volumes:
VPlexcli:/> storage-tool compose --name TEST --geometry raid-1 --storage-volumes
VPD83T3:60060160cea33000fc39e04dac48e211, VPD83T3:60060160cea33000fb9c532eac48e211,
VPD83T3:600601605a903000f2a9692fa548e211, VPD83T3:600601605a903000f3a9692fa548e211
storage-volume auto-unbanish-interval
Displays or changes auto-unbanish interval on a single director.
Optional arguments
[-i|--interval] [ seconds] - Number of seconds the director firmware waits before
unbanishing a banished storage-volume (LUN).
Range: 20 seconds - no upper limit.
Default: 30 seconds.
* - argument is positional.
Description See “Banished storage volumes (LUNs)” in the storage-volume unbanish command
description.
At regular intervals, the VPLEX directors look for logical units that were previously
banished. If VPLEX finds banished logical units, it unbanishes them. This process
happens automatically and continuously, and includes a delay interval with a default
value of 30 seconds: every 30 seconds the process looks for previously banished logical
units and unbanishes any it finds.
Use this command to display and/or change the delay interval.
Note: This change in the interval value is not saved between restarts of the director
firmware (NDU, director reboots). When the director firmware is restarted, the interval
value is reset to the default of 30 seconds.
Use the auto-unbanish-interval --director director command to display the current delay
(in seconds) for automatic unbanishment on the specified director.
Use the auto-unbanish-interval --director director --interval interval command to change
the delay timer for the specified director to the specified number of seconds.
The default unit for the --interval argument is seconds, but minutes, hours, and days are
also accepted. The following are valid values for the --interval argument: 2s,
2second, 2seconds, 2sec, 2min, 2minute, 2minutes, 2hr, 2hours, 2hour.
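Example Illustrative invocations using only the arguments documented above (the director name is taken from examples elsewhere in this guide). Display the current delay, then change it to 60 seconds:
VPlexcli:/> storage-volume auto-unbanish-interval --director director-1-1-A
VPlexcli:/> storage-volume auto-unbanish-interval --director director-1-1-A --interval 60sec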
storage-volume claim
Claims the specified storage volumes.
Optional arguments
[--appc] - Make the specified storage volumes application consistent. Prevents data
already on the specified storage volumes from being deleted or overwritten during the
process of constructing a virtual volume.
After a virtual volume is constructed using this storage volume, there is no restriction on
the access to the data, i.e. the data can be overwritten by host I/O.
The application consistent attribute may be modified using the “set” command but only
when the storage-volume is in the claimed state. The application consistent attribute
may not be altered for storage volumes that are unclaimed or in use.
[-n|--name] new name - The new name of the storage-volume after it is claimed.
--thin-rebuild - Claims the specified storage volumes as “thin”. Thin storage allocates
blocks of data on demand versus allocating all the blocks up front.
If a storage volume has already been claimed, it can be designated as thin using the set
command.
--batch-size integer - When using wildcards to claim multiple volumes with one command,
the maximum number of storage volumes to claim at once.
[-f|--force] - Force the storage-volume to be claimed. For use with non-interactive scripts.
* - argument is positional.
Description A storage volume is a device or LUN that is visible to VPLEX. The capacity of storage
volumes is used to create extents, devices and virtual volumes.
Storage volumes must be claimed, and optionally named before they can be used in a
VPLEX cluster. Once claimed, the storage volume can be used as a single extent occupying
the volume’s entire capacity, or divided into multiple extents (up to 128).
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to
this problem.
Thin provisioning
Thin provisioning allows storage to migrate onto thinly provisioned storage volumes
while allocating the minimal amount of thin storage container capacity.
Thinly provisioned storage volumes can be incorporated into RAID 1 mirrors with similar
consumption of thin storage container capacity.
VPLEX preserves the unallocated thin pool space of the target storage volume by detecting
zeroed data content before writing, and suppressing the write for cases where it would
cause an unnecessary allocation. VPLEX requires the user to specify thin provisioning for
each back-end storage volume. If a storage volume is thinly provisioned, the thin-rebuild
attribute must be set to true either during or after claiming.
If a thinly provisioned storage volume contains non-zero data before being connected to
VPLEX, the performance of the migration or initial RAID 1 rebuild is adversely affected.
System volumes are supported on thinly provisioned LUNs, but these volumes must have
their full capacity of thin storage container resources set aside and not be in competition
for this space with any user-data volumes on the same pool.
If:
◆ the thin storage allocation pool runs out of space, and
◆ this is the last redundant leg of the RAID 1,
then further writes to the thinly provisioned device cause the volume to lose access to the
device, resulting in data unavailability (DU).
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>claim --storage-volumes
VPD83T3:6006016021d025007029e95b2327df11
Example Claim a storage volume and name it Symm1254_7BF from the clusters/cluster context:
VPlexcli:/clusters/cluster-1> storage-volume claim -name Symm1254_7BF -d
VPD83T3:60000970000192601254533030374241
Example Claim storage volumes using the --thin-rebuild option. In the following example:
◆ The claim command with --thin-rebuild claims two storage volumes as thin storage
(from the clusters/cluster/storage-elements/storage-volumes context)
◆ The ll command displays one of the claimed storage volumes:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
VPD83T3:6006016091c50e005057534d0c17e011
/clusters/cluster-1/storage-elements/storage-volumes/VPD83T3:6006016091c50e005057534d0c17e011
:
Name Value
---------------------- -------------------------------------------------------
application-consistent false
block-count 524288
block-size 4K
capacity 2G
description -
free-chunks ['0-524287']
health-indications []
health-state ok
io-status alive
itls 0x5000144230354911/0x5006016930600523/6,
0x5000144230354910/0x5006016930600523/6,
0x5000144230354910/0x5006016830600523/6,
0x5000144230354911/0x5006016830600523/6,
0x5000144220354910/0x5006016930600523/6,
0x5000144220354910/0x5006016830600523/6,
0x5000144220354911/0x5006016930600523/6,
0x5000144220354911/0x5006016830600523/6
largest-free-chunk 2G
locality -
operational-status ok
storage-array-name EMC-CLARiiON-APM00042201310
storage-volumetype normal
system-id VPD83T3:6006016091c50e005057534d0c17e011
thin-rebuild true
total-free-space 2G
use claimed
used-by []
vendor-specific-name DGC
Example Claim multiple storage volumes whose names begin with VPD83T3:600601602:
VPlexcli:/clusters/cluster-1> storage-volume claim --storage-volumes VPD83T3:600601602*
storage-volume claimingwizard
Finds unclaimed storage volumes, claims them, and names them appropriately.
Once set, the application consistent attribute cannot be changed. This attribute can only
be set when the storage-volumes/extents are in the claimed state.
--thin-rebuild - Claims the specified storage volumes as “thin”. Thin storage allocates
blocks of data on demand versus allocating all the blocks up front. Thin provisioning
eliminates almost all unused storage and improves utilization rates.
Description Storage volumes must be claimed, and optionally named before they can be used in a
VPLEX cluster.
Storage tiers allow the administrator to manage arrays based on price, performance,
capacity and other attributes. If a tier ID is assigned, the storage with a specified tier ID
can be managed as a single unit. Storage volumes without a tier assignment are assigned
a value of ‘no tier’.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to
this problem.
Example Use the --set-tier argument to add or change a storage tier identifier in the storage-volume
names from a given storage array. For example:
VPlexcli:/clusters/cluster-1> storage-volume claimingwizard --set-tier="(Clar0400, L),
(Symm04A1, H)"
names all storage volumes from the CLARiiON array as Clar0400L_<lun name>, and all
storage volumes from the Symmetrix® array as Symm04A1H_<lun name>.
EMC Symmetrix, HDS 9970/9980 and USP V storage arrays include their array and serial
number in response to SCSI inquiries. The claiming wizard can claim their storage volumes
without additional information. Names are assigned automatically.
Other storage arrays require a hints file generated by the storage administrator using the
array’s command line. The hints file contains the device names and their World Wide
Names.
Use the --file argument to specify a hints file to use for naming claimed storage volumes.
The following table lists examples to create hint files:
Example In the following example, the claimingwizard command with no arguments claims storage
volumes from an EMC Symmetrix array:
VPlexcli:/clusters/cluster-1> storage-volume claimingwizard
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
Note that the Symmetrix storage volumes are named in the format:
Symm<last 4 digits of array serial number>_<Symmetrix Device Number>
◆ The --file argument specifies a CLARiiON hints file containing device names and World
Wide Names
◆ The --thin-rebuild argument claims the specified storage volumes as “thin” (data will
be allocated on demand versus up front)
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard --cluster
cluster-1 --file /home/service/clar.txt --thin-rebuild
Found unclaimed storage-volume VPD83T3:6006016091c50e004f57534d0c17e011 vendor DGC : claiming
and naming clar_LUN82.
Example Find and claim storage volumes on any array in cluster-1 that does not require a hints file
from the /clusters/cluster/storage-elements/storage-volumes context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard
storage-volume find-array
Searches storage arrays for the specified storage-volumes.
Description Searches all the storage arrays in all clusters for the specified storage-volumes.
The search is case-sensitive.
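Example An illustrative search (a sketch; it assumes the storage-volume name or pattern is supplied positionally, which is not documented above):
VPlexcli:/> storage-volume find-array VPD83T3:6006016091c50e005057534d0c17e011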
storage-volume forget
Tells the cluster that a storage volume or a set of storage volumes has been physically removed.
Description Storage-volumes can be remembered even if a cluster is not currently in contact with
them. Use this command to tell the cluster that the storage-volumes are not coming back
and therefore it is safe to forget about them.
Storage volumes can only be forgotten if they are unclaimed or unusable, and
unreachable.
This command also forgets the logical-unit for this storage-volume.
Use the storage-volume forget command to tell the cluster that unclaimed and
unreachable storage volumes are not coming back and it is safe to forget them.
Forgotten storage volumes are removed from the context tree.
Use the --verbose argument to print a message for each volume that could not be
forgotten.
Use the logical-unit forget command for the functionality supported by the removed
arguments.
Example Forget all unclaimed, unused, and unreachable storage volumes on the cluster:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume forget *
3 storage-volumes were forgotten.
Example Use the --verbose argument to display detailed information while you forget all unclaimed,
unused, and unreachable storage volumes on the cluster:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume forget *
--verbose
WARNING: Error forgetting storage-volume 'VPD83T3:60000970000192602773533030353933': The 'use'
property of storage-volume VPD83T3:60000970000192602773533030353933' is 'meta-data' but must
be 'unclaimed' or 'unusable' before it can be forgotten.
.
.
.
3 storage-volumes were forgotten:
VPD83T3:6006016030802100e405a642ed16e111
.
.
storage-volume list-banished
Displays banished storage-volumes on a director.
Description Displays the names of storage-volumes that are currently banished for a given director.
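Example Display banished storage-volumes on a specific director (the same invocation and output appear in the storage-volume unbanish example later in this section):
VPlexcli:/> storage-volume list-banished --director director-1-1-A
There are no banished storage-volumes on director 'director-1-1-A'.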
storage-volume resurrect
Resurrect the specified storage-volumes.
Optional arguments
[-f|--force] - Force the storage-volume resurrect and bypass the test.
Description Resurrects the specified dead storage volumes and tests the resurrected device before
setting its state to healthy.
A storage-volume is declared dead:
◆ After VPLEX retries a failed I/O to the backend arrays 20 times without success.
◆ If the storage-volume is reachable but errors prevent the I/O from succeeding.
A storage volume declared hardware dead cannot be unclaimed or removed (forgotten).
Use this command to resurrect the storage volume. After the storage volume is
resurrected, it can be unclaimed and removed.
IMPORTANT
Fix the root cause before resurrecting a storage volume because the volume can be
successfully resurrected only to go back to dead on the next I/O.
Note: This command will not work if the storage volume is marked unreachable.
This command has no ill effects if issued for a healthy storage volume.
LUNs exported from storage arrays can disappear or display I/O errors for various reasons,
including:
◆ Marked read-only during copies initiated by the storage array
◆ Unrecoverable device errors
◆ Snapshot activation/deactivation on the storage array
◆ An operator shrinks the size of a storage-volume, causing the VPLEX to refuse to do
I/O to the storage-volume.
◆ 100% allocated thin pools
◆ Persistent reservation on storage volume
◆ Dropped frames due to bad cable or SFP
Dead storage volumes are indicated by:
◆ The cluster summary command shows degraded health-state and one or more
unhealthy storage volumes. For example:
VPlexcli:/clusters/cluster-2/> cluster status
Cluster cluster-2
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: degraded
health-indications: 1 unhealthy Devices or storage-volumes
◆ The storage-volume summary command shows the I/O status of the volume as dead.
For example:
VPlexcli:/> storage-volume summary
SUMMARY (cluster-1)
StorageVolume Name IO Status Operational Status Health State
------------------------ --------- ------------------ ----------------
dead_volume dead error critical-failure
Symptom:
Storage-volume is dead
storage-volume summary
Displays a list of a cluster's storage volumes.
Example Display default summary (all clusters) on a VPLEX with unhealthy volumes:
VPlexcli:/> storage-volume summary
SUMMARY (cluster-1)
StorageVolume Name IO Status Operational Status Health State
---------------------------------------- ----------- ------------------ ----------------
Clar0106_LUN14 alive degraded degraded
Health out-of-date 0
storage-volumes 363
unhealthy 1
Use meta-data 4
unusable 0
used 358
Capacity total 2T
SUMMARY (cluster-2)
Storage-Volume Summary (no tier)
---------------------- --------------------
Health out-of-date 0
storage-volumes 362
unhealthy 0
Vendor DGC 114
EMC 248
Use meta-data 4
used 358
Example Display summary for only cluster-1 on a VPLEX with unhealthy volumes:
VPlexcli:/> storage-volume summary --clusters cluster-1
StorageVolume Name IO Status Operational Status Health State
------------------ ----------- ------------------ ----------------
Log1723_154 unreachable error critical-failure
Log1852_154 unreachable error critical-failure
Meta1723_150 unreachable error critical-failure
Meta1852_150 unreachable error critical-failure
Symm1378_0150 unreachable error critical-failure
Symm1378_0154 unreachable error critical-failure
.
.
Storage-Volume Summary (no tier)
---------------------- --------------------
Health out-of-date 0
storage-volumes 981
unhealthy 966
Vendor DGC 15
None 966
When slot usage reaches 90%, this command also displays the following:
Meta Slots total 64000
reclaimable 9600
used 57600
storage-volumes 8000
extents 24000
logging-segments 25600
Example Display summary for both clusters in a VPLEX with no unhealthy storage volumes:
VPlexcli:/> storage-volume summary
SUMMARY (cluster-1)
Storage-Volume Summary (no tier)
---------------------- ---------------------
Health out-of-date 0
storage-volumes 2318
unhealthy 0
SUMMARY (cluster-2)
Storage-Volume Summary (no tier)
---------------------- ---------------------
Health out-of-date 0
storage-volumes 2318
unhealthy 0
Field Description
Health State degraded - The extent may be out-of-date compared to its mirror (applies only to extents
that are part of a RAID 1 device).
ok - The extent is functioning normally.
non-recoverable-error - The extent may be out-of-date compared to its mirror (applies only
to extents that are part of a RAID 1 device), and/or the Health state cannot be determined.
unknown - VPLEX cannot determine the extent's Operational state, or the state is invalid.
critical failure - VPLEX has marked the storage volume as hardware-dead.
Storage-Volume Summary
out-of-date Of the total number of storage volumes on the cluster, the number that are out-of-date
compared to their mirror.
unhealthy Of the total number of storage volumes on the cluster, the number with health state that is
not “ok”.
Vendor Of the total number of storage volumes on the cluster, the number from the specified
vendor.
claimed Of the total number of storage volumes on the cluster, the number that are claimed.
meta-data Of the total number of storage volumes on the cluster, the number in use as meta-volumes.
unclaimed Of the total number of storage volumes on the cluster, the number that are unclaimed.
used Of the total number of storage volumes on the cluster, the number that are in use.
storage-volume unbanish
Unbanishes a storage-volume on one or more directors.
Optional arguments
[-d|--storage-volume] - The storage-volume to unbanish.
This argument is not required if the current context is a storage-volume or below; in that
case, the command operates on that storage volume.
* - argument is positional.
Description VPLEX examines path state information for LUNs on arrays. If the path state information is
inconsistent, VPLEX banishes the LUN, and makes it inaccessible.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/Symm0487_0C1B> storage-volume
unbanish --director director-1-1-A
director-1-1-A Unbanished.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/Symm0487_0C1B> storage-volume
list-banished --director director-1-1-A
There are no banished storage-volumes on director 'director-1-1-A'.
storage-volume unclaim
Unclaims the specified previously claimed storage volumes.
Optional arguments
[-b|--batch-size] integer - Specifies the maximum number of storage volumes to unclaim at
once.
[-r|--return-to-pool] - Returns the storage capacity of each VIAS-based volume to the pool
on the corresponding storage-array.
* - argument is positional.
Description Use the storage-volume unclaim command to return the specified storage volumes to the
unclaimed state.
The target storage volume must not be in use.
Note: The result of the storage-volume unclaim command differs for VIAS-based and
non-VIAS-based storage volumes. VIAS-based virtual volumes are removed from VPLEX and
are no longer visible. Non-VIAS-based storage volumes are marked as unclaimed. This is
the intended behavior.
Note: The thin-rebuild attribute can only be modified for storage volumes that are either
claimed or used. After an unclaimed storage volume is claimed (its state is claimed or
used), use the set command to modify the thin-rebuild attribute.
VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> unclaim -d
Basic_c1_ramdisk_100GB_686_
storage-volume used-by
Displays the components that use the specified storage volumes.
Description To manually deconstruct an encapsulated storage volume, remove each layer starting from
the top.
Use the storage-volume used-by command to see the layers from the bottom up.
/clusters/cluster-1/devices/base1:
extent_CX4_lun0_2
CX4_lun0
/clusters/cluster-1/devices/base2:
extent_CX4_lun0_3
CX4_lun0
/clusters/cluster-1/devices/base3:
extent_CX4_lun0_4
CX4_lun0
/clusters/cluster-1/storage-elements/extents/extent_CX4_lun0_5:
CX4_lun0
/clusters/cluster-1/storage-elements/extents/extent_CX4_lun0_6:
CX4_lun0
subnet clear
Clears one or more attributes of an existing subnet configuration.
Optional arguments
[-a|--attribute] attribute - * The name of the attribute to modify.
* argument is positional
Description Clears (sets to null) one or more of the following attributes of an existing subnet
configuration:
◆ cluster-address
◆ gateway
◆ prefix
IMPORTANT
Use the subnet modify command to configure new values for the attributes.
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> clear
cluster-address
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> ll
Name Value
---------------------- --------------------------
cluster-address -
gateway 192.168.12.1
mtu 9000
prefix 192.168.12.0:255.255.255.0
proxy-external-address -
remote-subnet-address 192.168.22.0:255.255.255.0
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> ll
Name Value
---------------------- --------------------------
cluster-address 192.168.10.200
gateway 192.168.12.1
mtu 9000
prefix 192.168.12.0:255.255.255.0
proxy-external-address -
remote-subnet-address 192.168.22.0:255.255.255.0
subnet create
Creates a new subnet.
Optional arguments
[-c|--cluster] cluster - Context path of the target owner cluster.
[-a|--cluster-address] address - The public address of the cluster to which this subnet
belongs.
[-g|--gateway] IP address - The gateway address for this subnet.
[-m|--mtu] size - The Maximum Transfer Unit (MTU) size for this subnet.
Range - An integer between 96 and 9000.
The VPLEX CLI accepts MTU values lower than 96, but they are not supported.
Entering a value less than 96 prevents the port-group from operating.
[-p|--prefix] prefix - The prefix/subnet mask for this subnet. Specified as an IP address and
subnet mask in integer dot notation, separated by a colon. For example,
192.168.20.0/255.255.255.0
[-e|--proxy-external-address] address - The externally published cluster address for this
subnet. Can be one of:
◆ w.x.y.z where w,x,y,z are [0..255] - Configures the specified IP address. For example:
10.0.1.125.
◆ empty - Clears any configured IP address.
[-r|--remote-subnet-address] address - The [destination IP]:[netmask] subnet in the remote
cluster that is reachable from the local subnet. Can be one of:
◆ w.x.y.z where w,x,y,z are [0..255] - Configures the specified IP address. For example:
172.16.2.0/255.255.255.0.
◆ empty - Clears any configured IP address
* argument is positional.
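Example An illustrative invocation that would produce a subnet like the one listed below (a sketch; it assumes the subnet name is supplied positionally, which is not documented above):
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets> subnet create TestSubNet --gateway 192.168.10.1 --mtu 1500 --prefix 192.168.10.0/255.255.255.0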
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets> ll TestSubNet
/clusters/cluster-1/cluster-connectivity/subnets/TestSubNet:
Name Value
---------------------- --------------------------
cluster-address -
gateway 192.168.10.1
mtu 1500
prefix 192.168.10.0/255.255.255.0
proxy-external-address -
remote-subnet-address -
subnet destroy
Destroys a subnet configuration.
Optional arguments
--force - Forces the subnet configuration to be destroyed without asking for confirmation.
Allows this command to be run from non-interactive scripts.
* argument is positional
◆ subnet create
◆ subnet modify
subnet modify
Modifies an existing subnet configuration.
Optional arguments
[-a|--cluster-address] address - The public address of the cluster to which this subnet
belongs.
[-g|--gateway] IP address - The gateway address for this subnet.
[-p|--prefix] prefix - The prefix/subnet mask for this subnet. Specified as an IP address and
subnet mask in integer dot notation, separated by a colon. For example,
192.168.20.0/255.255.255.0
If the prefix is changed, ensure that the cluster IP address, gateway address, and port IP
addresses are all consistent with the subnet prefix.
* argument is positional.
Description Modifies one or more of the following attributes of an existing subnet configuration:
◆ cluster-address
◆ gateway
◆ prefix
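Example An illustrative invocation using only the arguments documented above (a sketch; it assumes the target subnet is identified by the current context):
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> subnet modify --gateway 192.168.12.1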
syrcollect
Collects system configuration data for System Reporting (SYR).
Syntax syrcollect
[-d|--directory] directory
Description Manually starts a collection of SYR data, and optionally sends the resulting zip file to EMC.
Run this command after every major configuration change or upgrade.
Data collected includes:
◆ VPLEX information
◆ RecoverPoint information (if RecoverPoint is configured)
◆ Cluster information
◆ Engine/chassis information
◆ RAID information
◆ Port information
◆ Back end storage information
The output of the command is a zipped XML file named
<VPLEXTLA>_Config_<TimeStamp>.zip in the specified output directory.
Files in the default directory are automatically sent to EMC.
Use the --directory argument to specify a non-default directory. Output files sent to a
non-default directory are not automatically sent to EMC.
Example Start an SYR data collection, and send the output to EMC:
VPlexcli:/> syrcollect
Example Start an SYR data collection, and send the output to the specified directory:
VPlexcli:/> syrcollect -d /var/log/VPlex/cli
tree
Displays the context tree.
Syntax tree
[-e|--expand]
[-c|--context] subcontext root
[-s|--select] glob pattern
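Example An illustrative invocation using the documented --context argument (output omitted):
VPlexcli:/> tree --context /clusters/cluster-1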
◆ set
unalias
Removes a command alias.
Syntax unalias
[-n|--name] name
[-a|--all]
VPlexcli:/> alias
Name Description
---- ---------------------------------
? Substitutes the 'help' command.
ll Substitutes the 'ls -al' command.
quit Substitutes the 'exit' command.
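Example Remove a single alias by name (illustrative; mylist is a hypothetical user-created alias):
VPlexcli:/> unalias --name mylist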
user add
Adds a username to the VPLEX management server.
Description Administrator privileges are required to execute the user add command.
VPLEX has two pre-configured CLI users that cannot be removed: admin and service.
Note: In VPLEX Metro and Geo configurations, VPLEX CLI accounts created on one
management server are not propagated to the second management server. The user list
command displays only those accounts configured on the local management server, not
both servers.
Note: Administrative privileges are required to add, delete, and reset user accounts. The
password for the admin account must be reset the first time the admin account is
accessed. After the admin password has been reset, the admin user can manage (add,
delete, reset) user accounts.
To change the password for the admin account, ssh to the management server as user
“admin”. Enter the default password teS6nAX2. A prompt to change the admin account
password appears. Enter a new password.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description Used to add a user to the event server to access VPLEX events.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any
effect for some commands.
* - argument is positional.
Description Used to change the password that the external subscribers use to access VPLEX events.
user list
Displays usernames configured on the local VPLEX management server.
Note: In VPLEX Metro and Geo configurations, VPLEX CLI accounts created on one
management server are not propagated to the second management server. The user list
command displays only those accounts configured on the local management server, not
both servers.
Example Display the user accounts configured on the local management server:
VPlexcli:/> user list
Username
--------
admin
service
TestUser
user passwd
Allows a user to change the password for their own username.
Description Executable by all users to change the password only for their own username.
Type the new password. Passwords must be at least 8 characters long, and must not be
dictionary words.
A prompt to confirm the new password appears:
Confirm password:
user remove
Removes a username from the VPLEX management server.
Description Administrator privileges are required to execute the user remove command.
Note: Administrative privileges are required to add, delete, and reset user accounts. The
password for the admin account must be reset the first time the admin account is
accessed. After the admin password has been reset, the admin user can manage (add,
delete, reset) user accounts.
To change the password for the admin account, ssh to the management server as user
“admin”. Enter the default password teS6nAX2. A prompt to change the admin account
password appears. Enter a new password.
user reset
Allows an Administrator user to reset the password for any username.
Note: Administrative privileges are required to add, delete, and reset user accounts. The
password for the admin account must be reset the first time the admin account is
accessed. After the admin password has been reset, the admin user can manage (add,
delete, reset) user accounts.
To change the password for the admin account, ssh to the management server as user
“admin”. Enter the default password teS6nAX2. A prompt to change the admin account
password appears. Enter a new password.
All users can change the password for their own account using the user passwd command.
validate-system-configuration
Performs a basic system configuration check.
Syntax validate-system-configuration
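Example Run the basic system configuration check; the sample output below follows such an invocation:
VPlexcli:/> validate-system-configuration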
Cluster cluster-1
10 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability
requirement for storage volume paths*.
0 storage-volumes which are not visible from all directors.
*To meet the high availability requirement for storage volume paths
each storage volume must be accessible from each of the directors
through 2 or more VPlex backend ports, and 2 or more Array target
ports, and there should be 2 or more ITLs.
Errors were encountered in the back-end connectivity. Please run
'connectivity validate-be -d' for details.
Validate meta-volume
Checking cluster cluster-1 ...
Checking cluster cluster-2 ...
ok
vault go
Initiates a manual vault on every director in a given cluster under emergency conditions.
Syntax vault go
[-c|--cluster] cluster
[--force]
Arguments [-c|--cluster] cluster - Specify the cluster on which to start cache vaulting.
[--force] - Force the operation to continue without confirmation. Allows this command to be
run from non-interactive scripts.
Description Use this command to initiate a manual dump from every director in a given cluster to
persistent local storage under emergency conditions.
Use this command to manually start cache vaulting if an emergency shutdown is required
and the storage administrator cannot wait for automatic vaulting to begin.
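Example An illustrative invocation using the documented arguments; without the --force argument, the command prompts for confirmation:
VPlexcli:/> vault go --cluster cluster-1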
vault overrideUnvaultQuorum
Allows the cluster to proceed with the recovery of the vaults without all the required
directors.
Arguments [-c|--cluster] cluster - Overrides unvault quorum for the specified cluster.
--evaluate-override-before-execution - Evaluates the possible outcome of running this
command but does not do anything.
--force - Force the operation to continue without confirmation. Allows this command to be
run from non-interactive scripts.
Description
This command could result in data loss.
Use this command to tell the cluster not to wait for all the required director(s) to rejoin the
cluster before proceeding with vault recovery.
Use this command with the --evaluate-override-before-execution argument to evaluate the
cluster's vault status and make a decision whether to accept a possible data loss and
continue to bring the cluster up. The evaluation provides information as to whether the
cluster has sufficient vaults to proceed with the vault recovery that will not lead to data
loss.
Note: One valid vault can be missing without experiencing data loss.
Warning: Execution of this command can result in possible data loss based on the current vault
status of the cluster.
Execution of the override unvault quorum has been canceled by the user!!
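Example An illustrative evaluation-only invocation using the documented arguments (output omitted):
VPlexcli:/> vault overrideUnvaultQuorum --cluster cluster-1 --evaluate-override-before-execution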
vault status
Displays the current cache vault/unvault status of the cluster.
Description Cache vaulting safeguards dirty data during power outages. Cache vaulting dumps all dirty
data to persistent local storage. Vaulted data is recovered (unvaulted) when power is
restored.
This command always displays the cluster’s vault state and the vault state of each of the
cluster’s directors.
When run after a vault has begun and the vault state is Vault Writing or Vault Written, the
following information is displayed:
◆ Total number of bytes to be vaulted in the cluster
◆ Estimated time to completion for the vault
When run after the directors have booted and unvaulting has begun and the states are
Unvaulting or Unvault Complete, the following information is displayed:
◆ Total number of bytes to be unvaulted in the cluster
◆ Estimated time to completion for the unvault
◆ Percent of bytes remaining to be unvaulted
◆ Number of bytes remaining to be unvaulted
If you enter the --verbose argument, the command displays the following information:
◆ Average vault or unvault rate.
If this command is run after the directors have booted, unvaulted, and are waiting to
acquire an unvault quorum:
◆ The state is Unvault Quorum Waiting.
◆ The output displays a list of directors that are preventing the cluster from gaining
unvault quorum.
If the --verbose argument is used, the following additional information is displayed:
Power Loss Detected Power loss has been detected. Waiting for power to be restored.
Vault Written Dirty data has been written to local persistent storage.
Unvaulting Vaulted dirty data is being read from local persistent storage.
Unvault Complete All vaulted dirty data has been read from local persistent storage.
Unvault Quorum Waiting Waiting on all director(s) required before proceeding with the recovery of the vault.
Vault Recovering Recovering all the vaulted dirty data to VPLEX global cache from the local director's
memory.
Example Display the summarized status for a cluster that is not currently vaulting or unvaulting:
VPlexcli:/> vault status --cluster cluster-1
================================================================================
Cluster level vault status summary
================================================================================
Cluster:/clusters/cluster-1
Cluster is not vaulting/unvaulting.
================================================================================
Director level summary
================================================================================
/engines/engine-1-1/directors/director-1-1-B:
state: Vault Inactive
/engines/engine-1-1/directors/director-1-1-A:
state: Vault Inactive
================================================================================
Cluster level unvault status summary
================================================================================
Cluster:/clusters/cluster-2
Cluster is unvaulting.
Total number of bytes remaining to unvault in the cluster: 583.499776 MB
Estimated time remaining for cluster's unvault completion: 24 seconds
================================================================================
Director level unvault status summary
================================================================================
/engines/engine-2-1/directors/director-2-1-B:
state: Unvaulting - Reading vaulted data from vault in to the local
director's memory
Total number of bytes to unvault: 566.403072 MB
Total number of bytes unvaulted: 289.505280 MB
Total number of bytes remaining to unvault: 276.897792 MB
Percent unvaulted: 51%
Average unvault rate: 14.471168 MB/second
Estimated time remaining to unvault complete: 19 seconds
/engines/engine-2-1/directors/director-2-1-A:
state: Unvaulting - Reading vaulted data from vault in to the local
director's memory
Total number of bytes to unvault: 554.848256 MB
Total number of bytes unvaulted: 248.246272 MB
Total number of bytes remaining to unvault: 306.601984 MB
Percent unvaulted: 44%
Average unvault rate: 12.410880 MB/second
Estimated time remaining to unvault complete: 24 seconds
================================================================================
Cluster level summary
================================================================================
Cluster:/clusters/cluster-1
Cluster is waiting on all director(s) required before proceeding with the recovery of vault
Number of directors required to be present before cluster can proceed with the recovery
of the vault : 4
Number of directors present with valid vaults: 0
Number of directors present with invalid vaults: 3
Number of directors missing and possibly preventing the cluster to proceed with the
recovery of the vault : 1
Missing directors: director-1-1-A
================================================================================
Director level summary
================================================================================
/engines/engine-1-1/directors/director-1-1-B:
state: Waiting for unvault recovery quorum - Waiting on all director(s) required before
proceeding with the recovery of vault
Vault does not contain any data
Required number of directors to proceed with the recovery of vault: 4
Number of directors preventing the cluster to proceed with the recovery of vault: 1
Missing director list: director-1-1-A
/engines/engine-1-2/directors/director-1-2-B:
state: Waiting for unvault recovery quorum - Waiting on all director(s) required before
proceeding with the recovery of vault
Vault does not contain any data
Required number of directors to proceed with the recovery of vault: 4
Number of directors preventing the cluster to proceed with the recovery of vault: 1
Missing director list: director-1-1-A
/engines/engine-1-2/directors/director-1-2-A:
state: Waiting for unvault recovery quorum - Waiting on all director(s) required before
proceeding with the recovery of vault
Vault does not contain any data
Required number of directors to proceed with the recovery of vault: 4
Number of directors preventing the cluster to proceed with the recovery of vault: 1
Missing director list: director-1-1-A
/engines/engine-1-1/directors/director-1-1-A:
director could not be reached
verify fibre-channel-switches
Verifies that the Fibre Channel switch on each cluster's internal management network has
been configured correctly.
Contexts /clusters/cluster-n
Passwords for the service accounts on the switches are required to run this command.
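Example An illustrative invocation from a cluster context; the password prompts shown below then appear:
VPlexcli:/clusters/cluster-1> verify fibre-channel-switches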
Please enter the service account password for the Management Server:
Please enter the service account password for the fibre channel switch at IP 128.221.252.66:
Please enter the service account password for the fibre channel switch at IP 128.221.253.66:
version
Display version information for connected directors.
Syntax version
[-a|--all]
[-n|--directors] context path,context path...
--verbose
Description This command displays version information for all directors, a specified director, or
individual software components for each director.
Example Display management server/SMS version and version for the specified director:
VPlexcli:/> version director-2-1-B
What Version Info
-------------------------------------------- -------------- ----
Product Version 5.4.0.00.00.10 -
SMSv2 D35.20.0.10.0 -
Mgmt Server Base D35.20.0.1 -
Mgmt Server Software D35.20.0.13 -
/engines/engine-2-1/directors/director-2-1-B 6.5.54.0.0 -
Example Display version information for management server, SMS, and all directors:
VPlexcli:/> version -a
What Version Info
-------------------------------------------- -------------- ----
Product Version 5.4.0.00.00.10 -
SMSv2 D35.20.0.10.0 -
Mgmt Server Base D35.20.0.1 -
Mgmt Server Software D35.20.0.13 -
/engines/engine-2-1/directors/director-2-1-B 6.5.54.0.0 -
/engines/engine-2-1/directors/director-2-1-A 6.5.54.0.0 -
/engines/engine-1-1/directors/director-1-1-B 6.5.54.0.0 -
/engines/engine-1-1/directors/director-1-1-A 6.5.54.0.0 -
Example Display version information for individual software components on each director. See
Software components table below for a description of the components.
VPlexcli:/> version -a --verbose
Product Version: 5.4.0.00.00.10
What: SMSv2
Version: D35.20.0.10.0
Build time: June 09, 2014 at 11:38:36PM EDT
Build machine: dudleyed05
Build OS: Linux version 2.6.27-7-generic on amd64
Build compiler: 1.6.0_45
Build source: /spgear/spgear_misc/htdocs/harness/release/1795/work/ui/src
What: ECOM
Version: 6.5.1.0.0-0
What: ZECL
Version: 6.5.52.0.0-0
What: ZPEM
Version: 6.5.52.0.0-0
What: NSFW
Version: 65.1.54.0-0
What: ECOM
Version: 6.5.1.0.0-0
What: ZECL
Version: 6.5.52.0.0-0
What: ZPEM
Version: 6.5.52.0.0-0
What: NSFW
Version: 65.1.54.0-0
What: ECOM
Version: 6.5.1.0.0-0
What: ZECL
Version: 6.5.52.0.0-0
What: ZPEM
Version: 6.5.52.0.0-0
What: NSFW
Version: 65.1.54.0-0
What: ECOM
Version: 6.5.1.0.0-0
What: ZECL
Version: 6.5.52.0.0-0
What: ZPEM
Version: 6.5.52.0.0-0
What: NSFW
Version: 65.1.54.0-0
ECOM The EMC Common Object Manager is a hub of communications and common
services for applications based on EMC’s Common Management Platform.
ZECL A kernel module in the director that interfaces with the ZPEM process to
provide, among other things, access to the I2C bus.
ZPEM Manages the overall health of the hardware. It includes monitoring of the
various Field Replaceable Units (FRUs), Power and Temperature values and
monitoring of external entities like the Standby Power Supply (SPS), COM FC
Switch and the UPS used to provide backup power to the FC switches.
virtual-volume create
Creates a virtual volume on a host device.
Optional arguments
[-t|--set-tier] - Set the storage-tier for the new virtual volume.
* - argument is positional.
Description A virtual volume is created on a device or a distributed device, and is presented to a host
through a storage view. Virtual volumes are created on top-level devices only, and always
use the full capacity of the device or distributed device.
The underlying storage of a virtual volume may be distributed over multiple storage
volumes, but appears as a single contiguous volume.
The specified device must not already have a virtual volume and must not have a parent
device.
Use the --set-tier argument to set the storage tier for the new virtual volume.
For example, assign Symmetrix arrays as tier 1 storage, and CLARiiON as tier 2 storage.
Use the ll command in a specific virtual volume’s context to display the current
storage-tier.
Use the set command to modify a virtual volume’s storage-tier.
VPlexcli:/clusters/cluster-1/virtual-volumes> cd dd_vol_1
VPlexcli:/clusters/cluster-1/virtual-volumes/dd_vol_1> ll
Name Value
-------------------------- ----------------------------------------
block-count 1310720
block-size 4K
cache-mode synchronous
capacity 5G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state ok
locality distributed
operational-status ok
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-tier -
supporting-device dd_sample1
system-id dd_vol_1
volume-type virtual-volume
vpd-id VPD83T3:6000144000000010f02886e1aa9e6b15
Field Description
capacity The total number of bytes in the volume. Equals the 'block-size' multiplied by the
'block-count'.
consistency-group The name of the consistency group to which this volume belongs, if any.
expandable-capacity Excess capacity not yet exposed to the host by the virtual volume. This capacity is
available for expanding the virtual volume.
Zero (0) - Expansion is not supported on the virtual volume, or there is no capacity
available for expansion.
Non-zero - The capacity available for virtual volume expansion using the storage-volume
method.
expansion-method The expansion method that can be used to expand the virtual volume.
concatenation - The virtual volume can be expanded only by adding the specified extents.
not-supported - The virtual volume cannot be expanded. This includes virtual volumes that
are members of RecoverPoint consistency groups.
storage-volume - The virtual volume can be expanded using storage array based volume
expansion or by migrating to a larger device.
in-progress - An expansion has been started, but has not completed. The following
operations are blocked on the volume: additional expansion, migration, and NDU.
unknown - VPLEX could not determine the expansion status of the volume.
health-indications Indicates the reasons for a health-state that is not 'ok' or the reasons a volume expansion
failed.
health state major failure - One or more of the virtual volume's underlying devices is out-of-date, but
will never rebuild.
minor failure - One or more of the virtual volume's underlying devices is out-of-date, but
will rebuild.
non-recoverable error - VPLEX cannot determine the virtual volume's Health state.
ok - The virtual volume is functioning normally.
unknown - VPLEX cannot determine the virtual volume's Health state, or the state is invalid.
locality local - The virtual volume relies completely on storage at its containing cluster.
remote - The virtual volume is a proxy for a volume whose storage resides at a different
cluster. I/O to a remote virtual volume travels between clusters.
distributed - The virtual volume is the cluster-local representation of a distributed RAID-1.
Writes to a distributed volume travel to all the clusters at which it has storage; reads
come, if possible, from the local leg.
operational status degraded - The virtual volume may have one or more out-of-date devices that will
eventually rebuild.
error - One or more of the virtual volume's underlying devices is hardware-dead.
ok - The virtual volume is functioning normally.
starting - The virtual volume is not yet ready.
stressed - One or more of the virtual volume's underlying devices is out-of-date and will
never rebuild.
unknown - VPLEX cannot determine the virtual volume's Operational state, or the state is
invalid.
recoverpoint-protection-at The list of VPLEX cluster names at which virtual volumes are protected by RecoverPoint.
For MetroPoint protected VPLEX volumes, this field will contain the names of both VPLEX
clusters, indicating protection at both sites.
recoverpoint-usage The replication role this virtual-volume is being used for by attached RecoverPoint clusters,
if any.
Production Source - Volumes that are written to by the host applications. Writes to
production volumes are split such that they are sent to both the normally designated
volumes and RPAs.
Local Replica - Local volumes to which production volumes replicate.
Remote Replica - Remote volumes to which production volumes replicate.
Journal - Volumes that contain data waiting to be distributed to target replica volumes and
copies of the data previously distributed to the target volumes.
Repository - A volume dedicated to RecoverPoint for each RPA cluster that stores
configuration information about the RPAs and RecoverPoint consistency groups.
- The volume is not currently being used by RecoverPoint.
scsi-release-delay A SCSI release delay time in milliseconds. Optimum value is 0 to 2 seconds. Setting a very
high value could break the SCSI semantics. If another reserve arrives at this cluster within
this time frame, neither release nor reserve will be sent across the WAN.
virtual-volume destroy
Destroys existing virtual volumes.
Optional arguments
[-f|--force] - Forces the destruction of the virtual volumes without asking for confirmation.
Allows this command to be run from non-interactive scripts.
Description Deletes the virtual volume and leaves the underlying structure intact. The data on the
volume is no longer accessible.
Only unexported virtual volumes can be deleted. To delete an exported virtual volume,
first remove the volume from the storage view.
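Example The confirmation shown below would follow an invocation such as the following (a sketch; it assumes the target volume is passed as an argument to the command, which is not documented above):
VPlexcli:/> virtual-volume destroy was_1_leg_r1_vol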
Context
-----------------------------------------------------
/clusters/cluster-1/virtual-volumes/was_1_leg_r1_vol
Do you wish to proceed? (Yes/No) y
virtual-volume expand
Non-disruptively increases the capacity of an existing virtual volume.
Optional arguments
[-e|--extent] context path - * The target local device or extent to add to the virtual-volume
using the concatenation method of expansion. The local device or extent must not have a
virtual-volume on top of it.
[-f|--force] - The meaning of this argument varies, depending on whether the --extent
argument is used (expansion method = concatenation) or not used (expansion-method =
storage-volume)
◆ For storage-volume expansion, the --force argument skips the confirmation message.
◆ For concatenation expansion, the --force argument expands a virtual-volume built on a
RAID-1 device using a target that is not a RAID-1 and/or is not as redundant as the
device supporting the virtual-volume.
* - argument is positional.
Description This command expands the specified virtual volume using one of two methods:
storage-volume or concatenation.
The ll command output shows whether the volume is expandable, the expandable
capacity (if any), and the expansion method available for the volume. For example:
VPlexcli:> ll /clusters/cluster-1/virtual-volumes/ Test_volume
Name Value
------------------- --------------
.
.
.
capacity 0.5G
consistency-group -
expandable true
expandable-capacity 4.5G
expansion-method storage-volume
expansion-status -
.
.
.
There are two methods to expand a virtual volume: storage-volume and concatenation.
◆ storage-volume - If the virtual-volume has a non-zero expandable-capacity, this
command will expand the capacity of the virtual-volume by its full
expandable-capacity.
To use the storage-volume method of expansion, use this command without the
--extent argument. The storage-volume method of expansion adds the entire amount
of the expandable-capacity to the volume’s configured capacity.
◆ concatenation - (also known as RAID-C expansion) Expand the virtual volume by
adding the specified extents or devices.
The concatenation method does not support non-disruptive expansion of DR1
devices.
Use this command with the --extent argument to expand a virtual volume using the
concatenation method of expansion.
Before expanding a storage volume, understand the limitations of the function and the
prerequisites required for volumes to be expanded. See the VPLEX Administration Guide
for more information on how expansion works. For the procedure to expand virtual volumes,
see the VPLEX procedures in the SolVe Desktop.
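Example Illustrative sketches of both expansion methods, under stated assumptions: the
volume name is taken from the example below, the extent name is hypothetical, and passing
the target volume by name directly after the command is an assumption (see the command's
full argument list for the required virtual-volume argument).
VPlexcli:/clusters/cluster-1/virtual-volumes> virtual-volume expand Test_volume
VPlexcli:/clusters/cluster-1/virtual-volumes> virtual-volume expand Test_volume --extent /clusters/cluster-1/storage-elements/extents/extent_new_unclaimed_1
The first form uses the storage-volume method (no --extent); the second uses the
concatenation method, where the supplied extent must not already have a virtual-volume on
top of it.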
VPlexcli:/clusters/cluster-1/virtual-volumes> cd Test_volume/
VPlexcli:/clusters/cluster-1/virtual-volumes/Test_volume> ll
Name Value
------------------- --------------
block-count 131072
block-size 4K
cache-mode synchronous
capacity 0.5G
consistency-group -
expandable true
expandable-capacity 4.5G
expansion-method storage-volume
expansion-status in-progress
health-indications []
.
.
.
VPlexcli:/clusters/cluster-1/virtual-volumes> ll /clusters/cluster-1/virtual-volumes
/clusters/cluster-1/virtual-volumes:
/clusters/cluster-1/storage-elements/extents:
Name StorageVolume Capacity Use
-------------------------------- ----------------------- -------- -------
extent_Symm1554Tdev_061D_1 Symm1554Tdev_061D 100G used
extent_Symm1554Tdev_0624_1 Symm1554Tdev_0624 100G used
extent_Symm1554Tdev_0625_1 Symm1554Tdev_0625 100G used
extent_Symm1554_0690_1 Symm1554_0690 8.43G used
extent_Symm1554_0691_1 Symm1554_0691 8.43G used
extent_Symm1554_0692_1 Symm1554_0692 8.43G used
.
.
.
VPlexcli:/> cd /clusters/cluster-1/virtual-volumes/Test-Device_vol
virtual-volume provision
Provisions new virtual volumes using storage pools.
IMPORTANT
Storage pool names that contain spaces must be enclosed by double quotation marks.
Optional arguments
[-g|--consistency-group] consistency-group context-path - Specifies the consistency-group
to which to add the virtual-volumes.
Note: During VPLEX Integrated Array Services (VIAS) virtual volume provisioning, the
virtual volume can be added to new or existing VPLEX consistency groups using the VPLEX
GUI. However, the VPLEX command line interface supports adding virtual volumes only to
existing VPLEX consistency groups.
Description Provisions new virtual volumes using storage pools. The RAID geometry is determined
automatically by the number and location of the specified pools.
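Example A hedged sketch of an invocation. Only the --consistency-group argument is
documented in this entry; the --pools, --base-name, and --capacity argument names are
assumptions used purely for illustration, and the pool name containing spaces is quoted
as the IMPORTANT note above requires.
VPlexcli:/> virtual-volume provision --pools "Storage Pool 1" --base-name prov_vol --capacity 10G --consistency-group /clusters/cluster-1/consistency-groups/cg_1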
Example The command creates a provisioning job and returns a message identifying it:
A volume provisioning job was created and may be tracked by executing 'ls
/notifications/jobs/Provision_1_01-01-2014:00:00:01'.
Note: The Job IDs created for volume provisioning are active for 48 hours, and their
expiration times are posted. Jobs are deleted and must be recreated after failures of the
management server, such as a shutdown, a restart, a VPLEX Management Server upgrade, or a
VPLEX Management Console restart.
Example List the job notification to track the status of the provisioning operation:
VPlexcli:/> ls /notifications/jobs/Provision_1_01-01-2014:00:00:01
virtual-volume summary
Displays a summary for all virtual volumes.
Description Displays a list of any devices with a health-state or operational-status other than 'ok'.
Displays a summary including devices per locality (distributed versus local), cache-mode
(synchronous versus asynchronous), and total capacity for the cluster.
Displays any volumes with an expandable capacity greater than 0, and whether an
expansion is in progress.
If the --clusters argument is not specified and the command is executed at or below a
/clusters/cluster context, information is displayed for the current cluster.
Otherwise, virtual volumes of all clusters are summarized.
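For example, to restrict the report to a specific cluster regardless of the current
context (the cluster name shown is illustrative):
VPlexcli:/> virtual-volume summary --clusters cluster-1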
Example In the following example, one distributed virtual volume has expandable capacity at both
clusters:
VPlexcli:/> virtual-volume summary
Virtual-volume health summary (cluster-1):
Field Description
health state
  major failure - One or more of the virtual volume's underlying devices is out-of-date
  and will never rebuild.
  minor failure - One or more of the virtual volume's underlying devices is out-of-date,
  but will rebuild.
  non-recoverable error - VPLEX cannot determine the virtual volume's health state.
  ok - The virtual volume is functioning normally.
  unknown - VPLEX cannot determine the virtual volume's health state, or the state is
  invalid.
operational status
  degraded - The virtual volume may have one or more out-of-date devices that will
  eventually rebuild.
  error - One or more of the virtual volume's underlying devices is hardware-dead.
  ok - The virtual volume is functioning normally.
  starting - The virtual volume is not yet ready.
  stressed - One or more of the virtual volume's underlying devices is out-of-date and
  will never rebuild.
  unknown - VPLEX cannot determine the virtual volume's operational state, or the state
  is invalid.
Summaries
Total
  Total number of virtual volumes on the cluster, and the number of unhealthy virtual
  volumes.
Expansion summary
  virtual-volume name - Name of any volume with expandable capacity greater than 0 or an
  expansion underway.
  expandable-capacity - Additional capacity (if any) added to the back-end storage volume
  but not yet added to the VPLEX virtual volume.
  capacity - Current capacity of the virtual volume.
  expansion-status - Indicates whether an expansion is possible, is in progress, or has
  failed. A value of "-" indicates expansion is possible, but is not in progress and has
  not failed.
vpn restart
Restarts the VPN connection between management servers.
See also ◆ vpn start
◆ vpn status
◆ vpn stop
vpn start
Starts the VPN connection between management servers.
vpn status
Verifies the VPN connection between management servers.
Description Verifies whether the VPN connection between the management servers is operating
correctly and checks whether all the local and remote directors can be reached (pinged).
If Cluster Witness is deployed, verifies the VPN connection between the management
servers and the Cluster Witness Server.
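A typical check from the management server; the output excerpt that follows is taken from
the guide's example:
VPlexcli:/> vpn status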
Verifying the VPN status between the management server and the cluster witness server...
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
See also ◆ About cluster IP seed and cluster ID in the security ipsec-configure command.
◆ vpn restart
◆ vpn start
◆ vpn stop
vpn stop
Stops the VPN connection between management servers.
Connectivity
iscsi chap back-end add-credentials
iscsi chap back-end disable
iscsi chap back-end enable
iscsi chap back-end list-credentials
iscsi chap back-end remove-credentials
iscsi chap back-end remove-default-credential
iscsi chap back-end set-default-credential
iscsi chap front-end add-credentials
iscsi chap front-end disable
Clusters
cluster add
cluster cacheflush
cluster configdump
cluster expel
cluster forget
cluster local-ha repair-directors power-on
cluster local-ha repair-dirs
cluster local-ha repair-network-configuration
cluster local-ha repair-vsphere-ha
cluster local-ha-validate
cluster shutdown
cluster status
cluster summary
cluster unexpel
configuration short write
report capacity-clusters
Directors
connect
connectivity director
director appcon
director commission
director decommission
director forget
director passwd
director shutdown
director uptime
disconnect
Data migration
batch-migrate cancel
batch-migrate check-plan
batch-migrate clean
batch-migrate commit
batch-migrate create-plan
batch-migrate pause
batch-migrate remove
batch-migrate resume
batch-migrate start
batch-migrate summary
dm migration cancel
dm migration clean
dm migration commit
dm migration pause
dm migration remove
dm migration resume
dm migration start
ds summary
extent create
extent destroy
extent summary
local-device create
local-device destroy
local-device summary
rebuild set-transfer-size
rebuild show-transfer-size
report capacity-arrays
set
show-use-hierarchy
storage-tool compose
storage-volume auto-unbanish-interval
storage-volume claim
storage-volume claimingwizard
storage-volume find-array
storage-volume forget
storage-volume list-banished
storage-volume resurrect
storage-volume summary
storage-volume unbanish
storage-volume unclaim
storage-volume used-by
virtual-volume create
virtual-volume destroy
virtual-volume expand
virtual-volume provision
virtual-volume summary
Storage views
export initiator-port discover
export initiator-port register
export initiator-port register-host
export initiator-port unregister
export port summary
export storage-view addinitiatorport
export storage-view addport
export storage-view addvirtualvolume
export storage-view checkconfig
export storage-view create
export storage-view destroy
export storage-view find
export storage-view find-unmapped-volumes
export storage-view map
export storage-view removeinitiatorport
export storage-view removeport
export storage-view removevirtualvolume
export storage-view show-powerpath-interfaces
export storage-view summary
System management
battery-conditioning disable
battery-conditioning enable
battery-conditioning manual-cycle cancel-request
battery-conditioning manual-cycle request
battery-conditioning set-schedule
battery-conditioning summary
cache-invalidate
cache-invalidate-status
configuration remote-clusters add-addresses
configuration remote-clusters clear-addresses
configuration sync-time
configuration sync-time-clear
configuration sync-time-show
date
exec
exit
log filter create
log filter destroy
log filter list
log source create
log source destroy
log source list
logging-volume add-mirror
logging-volume create
logging-volume detach-mirror
logging-volume destroy
logical-unit forget
management-server net-service-restart*
manifest upgrade
manifest version
ndu copy-upgrade-package
ndu upgrade-mgmt-server
notifications call-home import-event-modifications
notifications call-home remove-event-modifications
notifications call-home view-event-modifications
notifications snmp-trap create
notifications snmp-trap destroy
report capacity-hosts
schedule add
schedule list
schedule modify
schedule remove
scheduleSYR add
scheduleSYR list
scheduleSYR remove
script
set
sessions
snmp-agent configure
snmp-agent start
snmp-agent status
snmp-agent stop
snmp-agent unconfigure
source
user add
user event-server add-user
user event-server change-password
user list
user passwd
user remove
user reset
version
vpn restart
vpn start
vpn status
vpn stop
Meta-volumes
configuration metadata-backup
configuration show-meta-volume-candidates
meta-volume attach-mirror
meta-volume backup
meta-volume create
meta-volume destroy
meta-volume detach-mirror
meta-volume move
meta-volume verify-on-disk-consistency
drill-down
unalias
security remove-login-banner
security set-login-banner
Cache vaulting
vault go
vault overrideUnvaultQuorum
vault status
Consistency groups
consistency-group add-virtual-volumes
consistency-group cache-invalidate
consistency-group choose-winner
consistency-group create
consistency-group destroy
consistency-group list-eligible-volumes
consistency-group remove-virtual-volumes
consistency-group resolve-conflicting-detach
consistency-group resume-after-data-loss-failure
consistency-group resume-after-rollback
consistency-group resume-at-loser
consistency-group set-detach-rule active-cluster-wins
consistency-group set-detach-rule no-automatic-winner
consistency-group set-detach-rule winner
consistency-group summary
set
Monitoring/statistics
monitor add-console-sink
monitor add-file-sink
monitor collect
monitor create
monitor destroy
monitor remove-sink
monitor stat-list
report aggregate-monitors
report create-monitors
report poll-monitors
Security/authentication
authentication directory-service configure
authentication directory-service map
authentication directory-service show
authentication directory-service unconfigure
authentication directory-service unmap
authentication password-policy set
authentication password-policy reset
configuration configure-auth-service
password-policy reset
password-policy set
security create-ca-certificate
security create-certificate-subject
security create-host-certificate
security delete-ca-certificate
security delete-hostcertificate
security export-ca-certificate
security export-host-certificate
security import-ca-certificate
security import-host-certificate
security renew-all-certificates
security ipsec-configure
security show-cert-subj
user event-server add-user
user event-server change-password
Cluster Witness
cluster-witness configure
cluster-witness disable
cluster-witness enable
configuration cw-vpn-configure
configuration cw-vpn-reset
RecoverPoint
rp rpa-cluster add
rp rpa-cluster remove
rp summary
rp validate-configuration
Configuration/EZ-Setup
configuration complete-system-setup
configuration connect-local-directors
configuration connect-remote-directors
configuration continue-system-setup
configuration cw-vpn-configure
configuration cw-vpn-reset
configuration enable-front-end-ports
configuration event-notices-reports config
configuration event-notices-reports reset
configuration event-notices-reports-show
configuration get-product-type
configuration join-clusters
configuration register-product
configuration system-reset
configuration system-setup
configuration show-meta-volume-candidates
configuration upgrade-meta-slot-count
Miscellaneous
alias
chart create
INDEX
A
add-console-sink, monitor 337
add-file-sink, monitor 338
addinitiatorport, export view 257
add-mirror, logging-volume 314
addport, export view 257
addurl, plugin 367
addvirtualvolume, export view 258
aggregate, report monitors 372
appdump, director 209
array register 50
array used-by 50
attach-mirror
  meta-volume 327
authentication 51, 55, 57, 58
authentication directory-service 56
authentication directory-service configure 51
authentication directory-service map 55
authentication directory-service show 56
authentication directory-service unconfigure 57
B
Batch migration
  perform a migration 64
batch-migrate
  check-plan 60
  pause 65
  remove 66
  resume 67
  start 67
  summary 70
batch-migrate cancel 59
battery-conditioning disable 72
battery-conditioning enable 73
battery-conditioning manual-cycle cancel-request 75
battery-conditioning manual-cycle request 76
battery-conditioning set-enabled 73
battery-conditioning set-schedule 78
battery-conditioning summary 80
begin, capture 88
C
cache-invalidate
  consistency-group 81, 116
  virtual-volume 81, 116
call-home import-event-modifications 355
call-home remove-event-modifications 357
call-home view-event-modifications 358
cancel
  dm migration 223
capacity-clusters, report 376
capacity-hosts, report 377
capture
  begin 88
  end 89
  pause 89
  replay 89
  resume 90
checkconfig, export view 260
check-plan, batch-migrate 60
claim, storage-volume 435
claimingwizard 438
clean, dm migration 224
CLI
  ? wildcard 37
  * wildcard 36
  ** wildcard 36
  accessing help 40
  context tree 29, 30
  display command arguments 36
  executing commands 34
  Log in 27
  Log out 28
  log out 28
  login 27
  page output 35
  positional arguments 38
  Tab completion 35
  using commands 34
  --verbose and --help arguments 39
  viewing command history 40
  Where am I in the context tree? 32
CLI commands
  cd 33
  globbing 38
  ls 32
  popd 31
  positional arguments 38