
VPLEX SolVe Generator

Solution for Validating your engagement

Topic
VPLEX Customer Procedures
Selections
Procedures: Manage
Management Procedures: Shutdown
Shutdown Procedures: VS6 Shutdown Procedures
VS6 Shutdown Procedures: Cluster 1 in a Metro configuration
Serial Number(s): 21590212

Generated: July 30, 2021 10:36 AM GMT

REPORT PROBLEMS

If you find any errors in this procedure or have comments regarding this application, send email to
[email protected]

Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION (“EMC”)
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-
INFRINGEMENT AND ANY WARRANTY ARISING BY STATUTE, OPERATION OF LAW, COURSE OF
DEALING OR PERFORMANCE OR USAGE OF TRADE. IN NO EVENT SHALL EMC BE LIABLE FOR
ANY DAMAGES WHATSOEVER INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF EMC HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice. Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.

Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners.

Publication Date: July, 2021

version: 2.9.0.68

Contents
Preliminary Activity Tasks .......................................................................................................4
Read, understand, and perform these tasks.................................................................................................4

VS6 Shutdown Procedure for Cluster 1 in a Metro Configuration...........................................5


Before you begin...........................................................................................................................................5
Task 1: Connecting to MMCS-A....................................................................................................6
Phase 1: Shut down the cluster ....................................................................................................................6
Task 2: Log in to the VPlexcli........................................................................................................7
Task 3: Connect to the management server on cluster 2 .............................................................7
Task 4: Change the transfer size for all distributed-devices to 128K ............................................7
Task 5: Verify current data migration status..................................................................................8
Task 6: Make any remote exports available locally on cluster-2...................................................9
Task 7: Stop the I/O on the hosts that are using VPLEX volumes from cluster 1, or move I/O to cluster 2 ..........10
Task 8: Check status of rebuilds initiated from cluster-1, wait for these rebuilds to complete ....11
Task 9: Re-login to the management server and VPlexcli on cluster 2.......................................11
Task 10: Verify the cluster health..................................................................................................11
Task 11: Verify COM switch health ...............................................................................................11
Task 12: Collect Diagnostics.........................................................................................................12
Task 13: Disable RecoverPoint consistency groups that use VPLEX volumes ............................12
Task 14: Power off the RecoverPoint cluster ................................................................................13
Task 15: Disable Call Home..........................................................................................................14
Task 16: Identify RecoverPoint-enabled distributed consistency-groups with cluster 1 as winner ..........15
Task 17: Make cluster 2 the winner for all distributed synchronous consistency-groups with
RecoverPoint not enabled .....................................................................................................................15
Task 18: Set the winner cluster for all distributed synchronous consistency groups and all
devices outside consistency groups ......................................................................................................17
Task 19: Make cluster 2 the winner for all distributed devices outside consistency group ...........18
Task 20: Disable VPLEX Witness .................................................................................................19
Task 21: Shut down the VPLEX firmware on cluster 1 .................................................................19
Task 22: Manually resume any suspended RecoverPoint-enabled consistency groups on cluster-2 ..........20
Task 23: Shut down the VPLEX directors on cluster-1 .................................................................21
Task 24: Shut down MMCS-A and MMCS-B on cluster-1 ............................................................22
Task 25: Shut down power to the VPLEX cabinet ........................................................................22
Task 26: Exit the SSH sessions, restore your laptop settings, restore the default cabling arrangement ..........23

Phase 2: Perform maintenance activities....................................................................................................24
Phase 3: Restart cluster..............................................................................................................................24
Task 27: Bring up the VPLEX components...................................................................................25
Task 28: Starting a PuTTY (SSH) session....................................................................................27
Task 29: Verify COM switch health ...............................................................................................28
Task 30: (Optionally) Change management server IP address ....................................................28
Task 31: Verify the VPN connectivity ............................................................................................28
Task 32: Power on the RecoverPoint cluster and enable consistency groups .............................29
Task 33: Resume volumes at cluster 1 .........................................................................................29
Task 34: Enable VPLEX Witness..................................................................................................29
Task 35: Check rebuild status and wait for rebuilds to complete ..................................................30
Task 36: Remount VPLEX volumes on hosts connected to cluster-1, and start I/O .....................30
Task 37: Verify the health of the clusters ......................................................................................30
Task 38: Restore the original rule-sets for consistency groups ....................................................31
Task 39: Restore the original rule-sets for distributed devices .....................................................31
Task 40: Restore the remote exports............................................................................................32
Task 41: Enable Call Home ..........................................................................................................32
Task 42: Collect Diagnostics.........................................................................................................33
Task 43: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements.......33

Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.

Read, understand, and perform these tasks


1. Table 1 lists tasks, cautions, warnings, notes, and/or knowledgebase (KB) solutions that you need to
be aware of before performing this activity. Read, understand, and when necessary perform any
tasks contained in this table and any tasks contained in any associated knowledgebase solution.

Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity

000171121: To provide feedback on the content of generated procedures

2. This is a link to the top trending service topics. These topics may or may not be related to this activity.
This is merely a proactive attempt to make you aware of any KB articles that may be associated with
this product.

Note: There may not be any top trending service topics for this product at any given time.

VPLEX Top Service Topics

VS6 Shutdown Procedure for Cluster 1 in a Metro Configuration
Before you begin
Read this entire shutdown document before beginning this procedure. Before you begin a system
shutdown on a VPLEX metro system, review this section.
Confirm that you have the following information:
 IP addresses of MMCS-A and MMCS-B in cluster 1 and cluster 2
 IP addresses of the hosts that are connected to cluster 1 and cluster 2
 (If applicable) IP addresses and login information for the RecoverPoint clusters attached to cluster 1 and cluster 2
 All VPLEX login usernames and passwords.
Default usernames and passwords for the VPLEX management servers, the VPlexcli, and VPLEX Witness are published in the EMC VPLEX Security Configuration Guide.

Note: The customer might have changed some usernames or passwords. Ensure that you know any
changed passwords or that the customer is available when you need the changed passwords.

The following VPLEX documents are available on EMC Support Online:


 EMC VPLEX CLI Guide
 EMC VPLEX Administration Guide
 EMC VPLEX Security Configuration Guide
The following RecoverPoint documents are available on EMC Support Online:
 RecoverPoint Deployment Manager version Product Guide
 VPLEX Technical Note
The SolVe Desktop includes the following procedures referenced in this document:
 Change the management server IP address (VS6)
 Changing the Cluster Witness Server's public IP address
 Configure 3-way VPN between Cluster Witness Server and VPLEX cluster (VS6)

CAUTION: If you are shutting down ALL the components in the SAN, shut down the components in the following order:

1. [ ] Hosts connected to the VPLEX cluster.


This enables an orderly shutdown of all applications using VPLEX virtual storage.

2. [ ] RecoverPoint, if present in the configuration.
3. [ ] Components in the cluster's cabinet, as described in this document.
4. [ ] Storage arrays from which the cluster is getting the I/O disks and the meta-volume disks.
5. [ ] Front-end and back-end COM switches.

Task 1: Connecting to MMCS-A


Procedure
1. [ ] Launch PuTTY.exe.
2. [ ] Do one of the following:
 If a previously configured session to the MMCS exists in the Saved Sessions window, click Load.
 Otherwise, start PuTTY with the following values:

Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit

Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.

3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Type the service password.

Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.
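
Note: If you prefer a scripted session to PuTTY, for example to capture command output for the checks later in this procedure, the following Python sketch opens an SSH session to the MMCS and runs a single shell command. It assumes the paramiko library is installed on your service laptop; the password placeholder is hypothetical and must be replaced with the actual service password.

import paramiko

MMCS_ADDRESS = "128.221.252.2"            # MMCS-A service port address from the table above
SERVICE_USER = "service"
SERVICE_PASSWORD = "<service-password>"   # hypothetical placeholder; obtain it from the System Administrator

def run_on_mmcs(command):
    # Open an SSH session to the MMCS, run one shell command, and return its output as text.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(MMCS_ADDRESS, port=22, username=SERVICE_USER, password=SERVICE_PASSWORD)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Quick reachability check before starting the procedure.
    print(run_on_mmcs("uname -a"))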

Phase 1: Shut down the cluster


This procedure is in several phases. The first is shutting down cluster 1.

CAUTION: If any step you perform creates an error message or fails to give you the expected result, consult the troubleshooting information available in the generator, or contact the EMC Support Center. Do not continue until the issue has been resolved.

Task 2: Log in to the VPlexcli


Procedure
1. [ ] Select the VPLEX Cluster 1 session and click Load.
2. [ ] Click Open, and log in to the MMCS with the username service and the service password.

3. [ ] At the shell prompt, type the following command to connect to the VPlexcli:
vplexcli

Task 3: Connect to the management server on cluster 2


Procedure
1. [ ] Select the VPLEX Cluster 2 session and click Load.
2. [ ] Click Open, and log in to the MMCS with the username service and the service password.

3. [ ] At the shell prompt, type the following command to connect to the VPlexcli:
vplexcli
After you finish
For the rest of this procedure, commands are tagged with icons that indicate whether they are typed in the CLI session to cluster 1 or cluster 2, or in the Linux shell session to cluster 1 or cluster 2.

Task 4: Change the transfer size for all distributed-devices to 128K


About this task
For more information on transfer-size, refer to the Administration Guide.
Procedure

1. [ ] Type the ls -al command from the /distributed-storage/distributed-devices CLI context to display the value of Transfer Size for all distributed devices.
VPlexcli:/distributed-storage/distributed-devices> ls -al

Name                 Status   Operational  Health  Auto    Rule Set  Transfer
                              Status       State   Resume  Name      Size
-------------------  -------  -----------  ------  ------  --------  --------
DR1_C1-C2_1gb_dev1   running  ok           ok      true    -         2M
DR1_C1-C2_1gb_dev10  running  ok           ok      true    -         2M
DR1_C1-C2_1gb_dev11  running  ok           ok      true    -         2M
.
.
.

The transfer size must be 128K or less.

2. [ ] If there are distributed devices with a transfer size of greater than 128K, do one of
the following:
 If all distributed devices have a transfer-size greater than 128K, type the following command to
change the transfer size for all devices:
VPlexcli:/distributed-storage/distributed-devices> set *::transfer-size 128K

Note: This command may take a few minutes to complete.

 If only some distributed devices have a transfer-size greater than 128K, type the following
commands to change the transfer-size for the specified distributed device:
cd /distributed-storage/distributed-devices

set distributed_device_name::transfer-size 128K

3. [ ] Type the ls -al command to verify that the transfer size value for all distributed devices is 128K or less:
VPlexcli:/distributed-storage/distributed-devices> ls -al
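
Note: On a configuration with many distributed devices, the following Python sketch is one way to scan a saved copy of the ls -al listing and flag devices whose Transfer Size is greater than 128K. It assumes the listing has been captured to a text file and that the size is the last column with a K or M suffix, as in the example output above; adjust the parsing if your output differs.

import re
import sys

def size_to_kib(token):
    # Convert a Transfer Size token such as "128K" or "2M" to KiB.
    match = re.fullmatch(r"(\d+)([KM])", token)
    if match is None:
        return None
    value, unit = int(match.group(1)), match.group(2)
    return value * 1024 if unit == "M" else value

def devices_over_128k(listing_path):
    # Return (device name, transfer size) pairs whose transfer size exceeds 128K.
    offenders = []
    with open(listing_path) as listing:
        for line in listing:
            columns = line.split()
            if len(columns) < 2:
                continue
            size_kib = size_to_kib(columns[-1])
            if size_kib is not None and size_kib > 128:
                offenders.append((columns[0], columns[-1]))
    return offenders

if __name__ == "__main__":
    for name, size in devices_over_128k(sys.argv[1]):
        print(name, size)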

Task 5: Verify current data migration status


Any current migration jobs stop during a system shutdown and will resume when cluster 1 is restarted.
About this task

CAUTION: Any data migration jobs initiated on cluster 1 pause when cluster 1 shuts down and resume when cluster 1 restarts.

Procedure
1. [ ] Verify whether any data migration has been initiated on cluster-1 and is ongoing.
Refer to the VPLEX Administration Guide:
 'Monitor a migration's progress' for one-time data migration
 'Monitor a batch migration's progress' for batch migrations
2. [ ] If any migrations are ongoing, do one of the following:
 If the data being migrated must be available on the target before cluster 1 is shut down, wait for the data migrations to complete before proceeding with this procedure.
 If the data being migrated does not need to be available on the target when cluster 1 is shut down, proceed to the next task.

Results
Migrations should be stopped and you can proceed to stop I/O on the hosts.

Task 6: Make any remote exports available locally on cluster-2


To prevent data unavailability on remote exports on cluster 1, make the data available locally on cluster 2.
Use the following task to move data on remote exports on cluster 1 to local devices on cluster 2.
About this task
Note: This task has no impact on I/O to hosts.

The VPLEX CLI Guide describes the commands used in this procedure.
Procedure

1. [ ] From the VPlexcli prompt on cluster-2, type the following command:


ls /clusters/cluster-2/virtual-volumes/$d where $d::locality \== remote

/clusters/cluster-2/virtual-volumes/remote_r0_softConfigActC1_C2_CHM_0000_vol:
Name Value
------------------ -----------------------------------------
block-count 255589
block-size 4K
cache-mode synchronous
capacity 998M
consistency-group -
expandable false
health-indications []
health-state ok
locality remote
operational-status ok
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-tier -
supporting-device remote_r0_softConfigActC1_C2_CHM_0000
system-id remote_r0_softConfigActC1_C2_CHM_0000_vol
volume-type virtual-volume
.
.
.

2. [ ] If the output returned any volumes with service-status as running, then for each remote export on cluster 2:

a. Record the supporting-device in column 1 in Table 1.


b. Record the capacity value in column 2.
Expand the table as needed.

Table 1 Remote export device migration source and target device

Column 1: Remote export device from cluster-1 (source device)
Column 2: Remote export capacity
Column 3: Local device on cluster-2 (target device)
Column 4: Local device capacity

3. [ ] For each remote export identified in the previous step, identify a local device on cluster 2 based
on the conditions identified in the VPLEX Administration Guide. See the Data Migration chapter.

Note: Best practice is to select target devices that are the same size as their source devices. This will
simplify the device migration later in this procedure.

4. [ ] Record the device Name/system-id value in column 3.


5. [ ] Record the capacity value in column 4.
You will use these cluster 2 devices to migrate data from the cluster 1 devices using device
migrations.
6. [ ] Use the following steps to migrate data from cluster 1 to cluster 2:

a. Create migration job with source device and target device from column 1 and 3 in Table 1.
b. Give the migration job a name that distinguishes it from any other existing migration jobs.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
7. [ ] Refer to the following procedures in the VPLEX Administration Guide:
 If there is only one device to migrate, refer to "One-time data migration."
 If there are multiple devices to migrate, refer to "Batch migrations."
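
Note: Before creating the migration jobs, you can sanity-check the capacities recorded in Table 1 with a short script such as the following Python sketch. It compares capacities given as strings such as 998M or 10G; the device pair in the example is a hypothetical placeholder for the rows you recorded.

UNIT_MULTIPLIERS = {"K": 1, "M": 1024, "G": 1024 ** 2, "T": 1024 ** 3}

def capacity_in_kib(text):
    # Convert a capacity string such as "998M" or "10G" to KiB.
    return float(text[:-1]) * UNIT_MULTIPLIERS[text[-1].upper()]

def check_pairs(pairs):
    # Each pair is (source device, source capacity, target device, target capacity) from Table 1.
    for source, source_cap, target, target_cap in pairs:
        if capacity_in_kib(target_cap) < capacity_in_kib(source_cap):
            print("WARNING: target", target, "is smaller than source", source)
        else:
            print("OK:", source, "->", target)

if __name__ == "__main__":
    # Hypothetical rows; replace them with the values recorded in Table 1.
    check_pairs([("remote_r0_softConfigActC1_C2_CHM_0000", "998M",
                  "example_local_target_device", "1G")])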

Task 7: Stop the I/O on the hosts that are using VPLEX volumes from cluster 1, or move I/O to
cluster 2
This Task requires access to the hosts accessing the storage through cluster 1. Coordinate this activity
with host administrators if you do not have access to the hosts.
About this task
The steps to complete this Task vary depending on whether the entire SAN is being shut down, and
whether certain hosts using storage on cluster-1 support I/O failover.
Procedure
1. [ ] If the entire front-end SAN will be shut down:

a. Log onto the host and stop the I/O applications.


b. Depending on the supported methods of the host OS utilizing the VPLEX volumes, let the I/O
drain from the hosts by doing one of the following:
 Shut down the hosts
 Unmount the file systems
2. [ ] Determine whether each host accessing cluster-1 supports I/O failover (either manual or
automatic failover).
 If yes (host supports failover), perform the tasks to failover the I/O to cluster-2.

 If no (host does not support failover), then for each host, perform the following steps:

1. Log onto the host and stop the I/O applications.


2. Depending on the supported methods of the host OS utilizing the VPLEX volumes, let the I/O
drain from the hosts by doing one of the following:
Shut down the hosts
Unmount the file systems

Task 8: Check status of rebuilds initiated from cluster-1, wait for these rebuilds to complete

Perform this task from cluster 2.


Procedure
1. [ ] Type the rebuild status command and verify that all rebuilds on distributed devices are
complete before shutting down the clusters.
VPlexcli:/> rebuild status

If rebuilds are complete, the command will report the following output:

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds
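
Note: If you want to script this check, the only signal needed from the rebuild status output is the pair of "No active ... rebuilds" lines shown above. The following Python sketch is a minimal parser, assuming you capture the command output to a string (for example, with the SSH helper sketched after Task 1).

import time

def rebuilds_complete(output_text):
    # True only when both the global and local sections report no active rebuilds.
    text = output_text.lower()
    return "no active global rebuilds" in text and "no active local rebuilds" in text

def wait_for_rebuilds(fetch_output, poll_seconds=60):
    # fetch_output is any callable that returns the current 'rebuild status' text.
    while not rebuilds_complete(fetch_output()):
        time.sleep(poll_seconds)

if __name__ == "__main__":
    sample = "Global rebuilds:\nNo active global rebuilds.\nLocal rebuilds:\nNo active local rebuilds"
    print(rebuilds_complete(sample))   # prints True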

Task 9: Re-login to the management server and VPlexcli on cluster 2


Before you begin
The VPlexcli session to cluster-2 may have timed out.
About this task
Use PuTTY (version 0.60 or later) or a similar SSH client to connect to the public IP address of the management server on cluster 2, and log in as the user service.

Task 10: Verify the cluster health


Before continuing with the procedure, ensure that there are no issues with the health of the cluster.
Procedure
1. [ ] From the VPlexcli prompt, type the following command, and confirm that the operational and
health states appear as ok:
health-check

Results
If you do not have a RecoverPoint splitter in your environment, you can now begin shutting down call
home and other processes that are no longer necessary. If you are running RecoverPoint, follow the
RecoverPoint shutdown tasks.

Task 11: Verify COM switch health


If the cluster is dual-engine or quad-engine, verify the health of the InfiniBand COM switches as follows:

Procedure

1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1

connectivity: FULL

ib-port-group-3-0 - OK - All expected connectivity is present.


ib-port-group-3-1 - OK - All expected connectivity is present.

2. [ ] In the output, confirm that the cluster has full connectivity.

Task 12: Collect Diagnostics


Collect diagnostic information both before and after the shutdown.
Procedure
1. [ ] Type the following command to collect configuration information and log files from all directors
and the management server:
collect-diagnostics --minimum
The information is collected, compressed in a Zip file, and placed in the directory /diag/collect-
diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-
diagnostics-out to another computer.
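
Note: As an alternative to a separate FTP or SCP client, the diagnostics can be pulled over the same SSH credentials with paramiko's SFTP support, as in the following Python sketch. The management-server address and password placeholders are hypothetical and must be replaced with your values.

import os
import paramiko

MGMT_ADDRESS = "<management-server-public-ip>"   # hypothetical placeholder
SERVICE_USER = "service"
SERVICE_PASSWORD = "<service-password>"          # hypothetical placeholder
REMOTE_DIR = "/diag/collect-diagnostics-out"

def download_diagnostics(local_dir):
    # Copy every collected archive from the management server to a local folder over SFTP.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(MGMT_ADDRESS, port=22, username=SERVICE_USER, password=SERVICE_PASSWORD)
    sftp = client.open_sftp()
    try:
        os.makedirs(local_dir, exist_ok=True)
        for name in sftp.listdir(REMOTE_DIR):
            sftp.get(REMOTE_DIR + "/" + name, os.path.join(local_dir, name))
    finally:
        sftp.close()
        client.close()

if __name__ == "__main__":
    download_diagnostics("vplex-diagnostics-before-shutdown")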

Task 13: Disable RecoverPoint consistency groups that use VPLEX volumes
Disabling RecoverPoint consistency groups prevents errors in data replication while maintenance is performed on the system. Perform this task if there is a RecoverPoint splitter in your environment.
About this task

CAUTION: This task disrupts replication on volumes that are part of the RecoverPoint consistency group being disabled. Ensure that you perform this task on the correct RecoverPoint cluster and RecoverPoint consistency group.

Procedure

1. [ ] Type the ll /recoverpoint/rpa-clusters/ command to display
RecoverPoint clusters attached to cluster-1:
VPlexcli:/> ll /recoverpoint/rpa-clusters/

/recoverpoint/rpa-clusters:
RPA Host     VPLEX Cluster  RPA Site  RPA ID  RPA Version
-----------  -------------  --------  ------  -----------
10.6.210.75  cluster-1      advil     RPA 1   3.5(n.109)
2. [ ] Type the ll /recoverpoint/rpa-clusters/ ip-address /volumes command where
ip-address is the RPA host address that is displayed in the previous step to display the names of
RecoverPoint consistency groups that use VPLEX volumes. For example:
VPlexcli:/> ll /recoverpoint/rpa-clusters/10.6.210.75/volumes/

/recoverpoint/rpa-clusters/10.6.210.75/volumes:
Name                    RPA Site  RP Type     RP Role  RP Group  VPLEX Group    Capacity
----------------------  --------  ----------  -------  --------  -------------  --------
RP_Repo_Vol2_vol        advil     Repository  -        -         RP_RepJournal  10G
demo_prodjournal_1_vol  advil     Journal     -        cg1       RP_RepJournal  5G
demo_prodjournal_2_vol  advil     Journal     -        cg1       RP_RepJournal  5G
demo_prodjournal_3_vol  advil     Journal     -        cg1       RP_RepJournal  5G
.
.
.

3. [ ] Log in to the RecoverPoint GUI for each enabled RecoverPoint cluster that is attached to cluster-1 and is impacted when VPLEX cluster-1 shuts down.
4. [ ] Determine which RecoverPoint consistency groups the shutdown impacts.
 Inspect the Splitter Properties associated with the VPLEX cluster.
 Compare the serial number of the VPLEX cluster with the Splitter Name in the RecoverPoint GUI.
5. [ ] Record the names of the consistency groups.

Note: You will need this information to reconfigure the RecoverPoint consistency groups after you
complete the shutdown.

6. [ ] Disable each RecoverPoint consistency group associated with the VPLEX splitter on cluster-1.

Task 14: Power off the RecoverPoint cluster


If there is a RecoverPoint splitter in the configuration, before shutting down the VPLEX cluster, power off
the RecoverPoint cluster.
About this task

CAUTION: This step disrupts replication on all volumes that this RecoverPoint cluster replicates. Ensure that you perform this task on the correct RecoverPoint cluster.

Procedure
1. [ ] Shut down each RecoverPoint cluster that is using a VPLEX virtual volume as its repository
volume.
2. [ ] Record the name of each RecoverPoint cluster that you shut down.

Note: You need this information later in the procedure when you are powering on these RecoverPoint
clusters.

Task 15: Disable Call Home


Disable call-home to prevent call-homes during the remainder of this procedure.
Procedure

1. [ ] At the VPlexcli prompt, type the following command to browse to the call-home
context:
VPlexcli:/> cd notifications/call-home/

2. [ ] Type the following command to check the value of the enabled property:


VPlexcli:/notifications/call-home> ls
Attributes:
Name Value
------- -----
enabled true
Contexts:
snmp-traps

If the enabled property value is false, do not perform the next step.
Note whether call home was enabled or disabled. Later in the procedure, you need this information to
determine whether to enable call home again.
3. [ ] Type the following command to disable call home:
VPlexcli:/notifications/call-home> set enabled false --force

If this command worked, an ls of the context shows that the enabled state of the call home is
false.

Task 16: Identify RecoverPoint-enabled distributed consistency-groups with cluster 1 as winner
If your environment includes RecoverPoint, identify any RecoverPoint-enabled consistency groups that are configured with cluster 1 as the winner.
About this task
Note: It is not possible to change the detach-rule for a consistency-group with RecoverPoint enabled.

Procedure

1. [ ] From the VPlexcli prompt, type the following commands to display the consistency-
groups with RecoverPoint enabled:
VPlexcli:/> ls -p /clusters/cluster-1/consistency-groups/$d where
$d::recoverpoint-enabled \== true
/clusters/cluster-1/consistency-groups/Aleve_RPC1_Local_Journal_A:
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule winner cluster-1 after 5s
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes Aleve_RPC1_local_Journal_A_0000_vol,
Aleve_RPC1_local_Journal_A_0001_vol,
Aleve_RPC1_local_Journal_A_0002_vol,
Aleve_RPC1_local_Journal_A_0003_vol,
Aleve_RPC1_local_Journal_A_0004_vol,
Aleve_RPC1_local_Journal_A_0005_vol,
Aleve_RPC1_local_Journal_A_0006_vol,
Aleve_RPC1_local_Journal_A_0007_vol,
Aleve_RPC1_local_Journal_A_0008_vol,
Aleve_RPC1_local_Journal_A_0009_vol, ... (45 total)
visibility [cluster-1, cluster-2]

Contexts:
advanced recoverpoint
.
.
.

2. [ ] Record the names of all the RecoverPoint-enabled distributed consistency groups with cluster 1 as the winner.

Note: You will use this information to manually resume the consistency group on cluster 2 in Phase 3
of this procedure.

Task 17: Make cluster 2 the winner for all distributed synchronous consistency-groups with
RecoverPoint not enabled
Procedure

1. [ ] From the VPlexcli prompt on cluster-1, type the following commands to display the
consistency-groups:

cd /clusters/cluster-1/consistency-groups

ll
/clusters/cluster-1/consistency-groups:
Name                        Operational Status                           Active    Passive   Detach Rule       Cache Mode
                                                                         Clusters  Clusters
--------------------------  -------------------------------------------  --------  --------  ----------------  -----------
sync_sC12_vC12_nAW_CHM      (cluster-1,{ summary:: ok, details:: [] }),                       winner cluster-1  synchronous
                            (cluster-2,{ summary:: ok, details:: [] })                        after 22s
sync_sC12_vC12_wC2a25s_CHM  (cluster-1,{ summary:: ok, details:: [] }),                       winner cluster-2  synchronous
                            (cluster-2,{ summary:: ok, details:: [] })                        after 25s

2. [ ] Record the name and rule-set of all consistency-groups with a rule-set that configures cluster-1
as winner or no-automatic-winner in the Detach Rule column in the table below. Expand the table as
needed.

Note: You will use this information when you reset the rule-set name in Phase 3 of this procedure.

Table 2 Consistency-groups with cluster-1 as winner or no-automatic-winner

Consistency group Detach rule

3. [ ] Make cluster 2 the winner for these consistency groups to prevent the consistency group from
suspending I/O to the volumes on cluster 2.
Type the following commands, where consistency-group_name is the name of a consistency-
group in the table and delay is the current delay for it.
cd consistency-group_name

set-detach-rule winner cluster-2 --delay delay

cd ..

4. [ ] Repeat the previous step for every consistency group in the table.
5. [ ] Type the following command to verify the rule-set name changes:
ll /clusters/cluster-1/consistency-groups/
6. [ ] In the output, confirm that all the consistency-groups (with known exceptions from previous tasks) show cluster-2 as the winner.
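
Note: If Table 2 contains many consistency groups, one way to prepare the command sequence used in step 3 is to generate it with a short script, as in the following Python sketch. The sample group names and delays are hypothetical placeholders for the rows you recorded in Table 2.

def detach_rule_commands(groups, winner="cluster-2"):
    # groups is a list of (consistency-group name, delay) tuples copied from Table 2.
    lines = []
    for name, delay in groups:
        lines.append("cd " + name)
        lines.append("set-detach-rule winner " + winner + " --delay " + delay)
        lines.append("cd ..")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical example rows; the real names and delays come from Table 2.
    print(detach_rule_commands([("sync_sC12_vC12_nAW_CHM", "22s"),
                                ("example_consistency_group", "5s")]))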

Task 18: Set the winner cluster for all distributed synchronous consistency groups and all
devices outside consistency groups
Include MetroPoint consistency groups while performing this task. Do not include RecoverPoint
consistency groups as they are addressed in another task.
Procedure
1. [ ] Make the cluster that is recorded in the previous task the winner for the consistency groups listed in the previous table, so that the last cluster to be shut down is set as the winner for all consistency groups.
2. [ ] Type the following commands, where consistency-group_name is the name of a consistency
group in the previous table and delay is the current delay.
cd consistency-group_name
set-detach-rule winner <cluster noted in the previous task> --delay delay
cd ..

3. [ ] Repeat the previous step for every consistency group listed in the previous table.
4. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/

5. [ ] In the output, confirm that all the consistency groups show the correct winner cluster.
6. [ ] Type the following commands to display the distributed devices:
cd /distributed-storage/distributed-devices
ll

7. [ ] In the following table, record the name and rule-set of all distributed devices whose rule-set configures any cluster other than the winner cluster:

Table 3 List of distributed devices having rule-set other than the winner cluster

Distributed Device Name Rule Set Name

Note: The information is required when you reset the rule-set in Phase 3.

8. [ ] Change the rule-set for distributed devices:

Note: You can change the rule-set for all distributed devices, or for selected distributed devices.

To change the rule-set for all distributed devices, type the following command:

set *::rule-set-name <winner cluster noted in the previous task>-detaches
To change the rule-set for selected distributed devices, type the following command for each distributed_device_name listed in the previous table:
cd distributed_device_name
set rule-set-name <winner cluster noted in the previous task>-detaches
cd ..

9. [ ] In the output, confirm that all distributed devices show the correct winner cluster.

Task 19: Make cluster 2 the winner for all distributed devices outside consistency group
Procedure

1. [ ] Type the following commands to display the distributed devices:


cd /distributed-storage/distributed-devices
ll

2. [ ] In the output, note all distributed devices with a rule-set that configures cluster 1 as the winner in
the Rule Set Name column.
The default rule-set that configures cluster-1 as the winner is cluster-1-detaches.

Note: Customers may have created their own rule-set with cluster-1 as a winner.

3. [ ] Record the name and the rule-set of all the distributed devices with a rule-set that configures
cluster 1 as the winner or for which there is no rule-set-name (the rule-set-name field is blank) in the
Rule-set name column in the table below.

WARNING: If a distributed device outside of a consistency group has no rule-set name, it will be
suspended upon the shutdown of the cluster. This can lead to data unavailability.

Note: You will need this information when you reset the rule-set in Phase 3 of this procedure.

Table 4 Distributed devices with rule-set cluster 1 is winner

Distributed device name Rule-set name

4. [ ] This step varies depending on whether you are changing the rule-set for all distributed devices,
or for selected distributed devices
 To change the rule-set for all distributed devices, type the following command from the
/distributed-storage/distributed-devices context:
set *::rule-set-name cluster-2-detaches
 To change the rule-set for selected distributed devices, type the following command for each
device whose rule-set you want to change, where distributed_device_name is the name of a
device in the table.
cd distributed_device_name

set rule-set-name cluster-2-detaches

cd ..

5. [ ] Type the following command to verify the rule-set name changes:


ll /distributed-storage/distributed-devices
6. [ ] In the output, confirm that all distributed devices (with known exceptions) show cluster 2 as the
winner.
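
Note: On a large configuration, the following Python sketch is one way to double-check step 6 by scanning a saved copy of the ll output for distributed devices whose Rule Set Name is not cluster-2-detaches, including devices where the field is blank (see the WARNING above). It assumes the device name is the first column and that rule-set names end in -detaches; adjust the parsing to your output.

import sys

def unexpected_rule_sets(listing_path, expected="cluster-2-detaches"):
    # Flag distributed devices whose rule-set is not the expected one, including blank rule-sets.
    flagged = []
    with open(listing_path) as listing:
        for line in listing:
            columns = line.split()
            # Skip blank lines, separator rows, and the header row.
            if not columns or columns[0].startswith("-") or columns[0] == "Name":
                continue
            rule_sets = [c for c in columns if c.endswith("-detaches")]
            if expected not in rule_sets:
                flagged.append((columns[0], rule_sets[0] if rule_sets else "<blank>"))
    return flagged

if __name__ == "__main__":
    for device, rule_set in unexpected_rule_sets(sys.argv[1]):
        print(device, rule_set)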

Task 20: Disable VPLEX Witness


If VPLEX Witness is enabled in the configuration, disable it.
Procedure

1. [ ] From the VPlexcli prompt, type the following commands to determine if VPLEX
Witness is enabled:
cd /cluster-witness
ls

Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Record whether VPLEX Witness is enabled or disabled.

Note: You will need this information later in the procedure.

3. [ ] If VPLEX Witness is enabled, type the following command to disable it:


cluster-witness disable --force
4. [ ] Type the following command to verify that VPLEX Witness is disabled:
VPlexcli:/cluster-witness> ls
Attributes:
Name Value
------------- -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

Task 21: Shut down the VPLEX firmware on cluster 1


About this task

CAUTION: Running this command on the wrong cluster will result in Data Unavailability.

CAUTION: During the cluster shutdown procedure, before executing the shutdown command, DO NOT DISABLE the WAN COM on any of the VPLEX directors (by disabling one or more directors' WAN COM ports, or disabling the external WAN COM links via the WAN COM switches). Disabling the WAN COM before executing the 'cluster shutdown' command triggers the VPLEX failure recovery process for volumes, which can result in the 'cluster shutdown' command hanging. Disabling the WAN COM before the cluster shutdown has not been tested and is not supported.

CAUTION: Perform this task on cluster 1.

Procedure
1. [ ] To shut down the firmware in cluster 1, type the following commands:
VPlexcli:> cluster shutdown --cluster cluster-1
Warning: Shutting down a VPlex cluster may cause data unavailability. Please
refer to the VPlex documentation for the
recommended procedure for shutting down a cluster. To show that you understand
the impact, enter
'shutdown': shutdown
You have chosen to shutdown 'cluster-1'. To confirm, enter 'cluster-1':
cluster-1

Status Description
-------- -----------------
Started. Shutdown started.

Note: It takes ~3–5 minutes for the system to shut down.

2. [ ] To display the cluster status, type the following command:


VPlexcli:/> cluster status

Cluster cluster-1
operational-status: not-running
transitioning-indications:
transitioning-progress:
health-state: unknown
health-indications:
local-com: failed to validate local-com: Firmware
command error.
communication error recently.

3. [ ] In the output, confirm that the operational-status for cluster 1 is not-running.


4. [ ] Type the following command to display the cluster summary:
cluster summary
In the output, confirm that cluster 1 is down. To determine that cluster 1 is down, examine the output to see that Connected is false; Expelled, Operational Status, and Health State are -; and cluster-1 is not listed in Islands.
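
Note: If you script this verification, the only signal needed from the cluster status output is the not-running state for cluster-1. The following Python sketch is a minimal parser and polling loop, assuming you capture that output to a string from your cluster-2 session.

import re
import time

def cluster1_not_running(status_text):
    # Find the cluster-1 block in the 'cluster status' output and check its operational-status.
    block = re.search(r"Cluster cluster-1(.*?)(?=Cluster cluster-|\Z)", status_text, re.S)
    if block is None:
        return False
    return re.search(r"operational-status:\s*not-running", block.group(1)) is not None

def wait_for_shutdown(fetch_status, poll_seconds=30):
    # fetch_status is any callable that returns the current 'cluster status' text.
    while not cluster1_not_running(fetch_status()):
        time.sleep(poll_seconds)

if __name__ == "__main__":
    sample = "Cluster cluster-1\n operational-status: not-running\n"
    print(cluster1_not_running(sample))   # prints True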

Task 22: Manually resume any suspended RecoverPoint-enabled consistency groups on cluster-2

Run this task on cluster 2.


Procedure

1. [ ] If the volumes in a consistency group identified in previous steps need to continue servicing I/O to hosts, then type the following CLI commands for each of those consistency groups to make cluster-2 the winner and to allow it to service I/O:
VPlexcli:/clusters/cluster-2/consistency-groups> choose-winner -c cluster-2 -g
async_sC12_vC2_aCW_CHM
WARNING: This can cause data divergence and lead to data loss. Ensure the other
cluster is not serving I/O for this consistency group before continuing.
Continue? (Yes/No) Yes

2. [ ] Type the following command to ensure none of the above consistency groups require
resumption:
consistency-group summary

3. [ ] Look for consistency groups with requires-resume-at-loser.

Task 23: Shut down the VPLEX directors on cluster-1


Complete the shutdown by shutting down the directors.
About this task

CAUTION: Running the shutdown command on the wrong director results in data unavailability.

Procedure
1. [ ] From the VPlexcli prompt, type the following command:
exit

2. [ ] From the shell prompt, type the following commands to shut down director 1-1-A:

Note: In the first command, the l in -l is a lowercase L.

ssh -l root 128.221.252.35

director-1-1-a:~ # shutdown -P "now"

Broadcast message from root (pts/0) (Fri Nov 18 20:04:33 2011):

The system is going down to maintenance mode NOW!

3. [ ] Repeat the previous step for each remaining director in cluster 1, substituting the applicable ssh
command shown in the following table:

Table 5 ssh commands to connect to directors

Cluster size                     Director  ssh command                 Checkbox
Single-, dual-, or quad-engine   1-1-A     ssh -l root 128.221.252.35  [X]
                                 1-1-B     ssh -l root 128.221.252.36  [ ]
Dual- or quad-engine             1-2-A     ssh -l root 128.221.252.37  [ ]
                                 1-2-B     ssh -l root 128.221.252.38  [ ]
Quad-engine                      1-3-A     ssh -l root 128.221.252.39  [ ]
                                 1-3-B     ssh -l root 128.221.252.40  [ ]
                                 1-4-A     ssh -l root 128.221.252.41  [ ]
                                 1-4-B     ssh -l root 128.221.252.42  [ ]

4. [ ] Type the following command, and verify that the director is down:
ping -b 128.221.252.35

Note: A director can take up to 4 minutes to shut down completely.

Output example if the director is down:


PING 128.221.252.35 (128.221.252.35) 56(84) bytes of data.
From 128.221.252.33 icmp_seq=1 Destination Host Unreachable
From 128.221.252.33 icmp_seq=2 Destination Host Unreachable

5. [ ] Repeat the previous step for each director you shut down, substituting the applicable IP address
shown in the previous table.
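
Note: To script the reachability check in steps 4 and 5, the following Python sketch pings each director address and reports which directors still answer. It uses a plain ping -c invocation instead of the -b form shown above; limit the address list to the directors present in your cluster size.

import subprocess

# Limit this list to the directors that are present in your cluster size (see Table 5).
DIRECTOR_ADDRESSES = [
    "128.221.252.35", "128.221.252.36", "128.221.252.37", "128.221.252.38",
    "128.221.252.39", "128.221.252.40", "128.221.252.41", "128.221.252.42",
]

def still_reachable(addresses):
    # Return the addresses that still answer ping, meaning those directors are not yet fully down.
    alive = []
    for address in addresses:
        result = subprocess.run(["ping", "-c", "2", "-W", "2", address],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result.returncode == 0:
            alive.append(address)
    return alive

if __name__ == "__main__":
    remaining = still_reachable(DIRECTOR_ADDRESSES)
    print("still reachable:", remaining if remaining else "none - all listed directors are down")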

Task 24: Shut down MMCS-A and MMCS-B on cluster-1


Procedure
1. [ ] Shut down MMCS-B.

a. SSH to MMCS-B using ssh [email protected]

b. Shut down MMCS-B using sudo /sbin/shutdown 0
Broadcast message from root (pts/1) (Tue Feb 8 18:12:30 2010):
The system is going down to maintenance mode NOW!

2. [ ] Type the following command to shut down the MMCS-A on cluster-1:


sudo /sbin/shutdown 0
Broadcast message from root (pts/1) (Tue Feb 8 18:12:30 2010):
The system is going down to maintenance mode NOW!

Task 25: Shut down power to the VPLEX cabinet


Procedure
1. [ ] Switch the breakers on all PDU units on the cabinet to the OFF position.

2. [ ] Check that the Power LED on the engine, between drive fillers 7 and 8, is off.

Task 26: Exit the SSH sessions, restore your laptop settings, restore the default cabling
arrangement
If you are still logged in to the VPLEX CLI sessions, log out now and restore the laptop settings. If you
used a service laptop to access the management server, use the steps in this task to restore the default
cable arrangement.
About this task
Repeat these steps on each cluster.
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:
 EMC cabinet:

1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.

2. Slide the cable back through the cable tie until only one or 2 inches protrude through the tie,
and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
 Non-EMC cabinet:

1. Disconnect the red service cable from the laptop.


2. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to
the VPLEX management server.)

Phase 2: Perform maintenance activities


Once all components of the cluster are down, you can perform maintenance activities before restarting it.

CAUTION: This document assumes that all existing SAN components and access to them from VPLEX components do not change as a part of the maintenance activity. If components or access changes, please contact EMC Customer Support to plan this activity.

Perform the activity that required the shutdown of the cluster.

Phase 3: Restart cluster


This procedure describes the tasks to bring up the cluster after it has been shut down.
The procedure assumes that the cluster was shut down by following the tasks described earlier in this document.
Order to restart hosts, clusters, and other components

CAUTION: If you are bringing up ALL the components in the SAN, bring them up in the order described in the following steps. While you are bringing up all the components in that order, ensure that the previous component is fully up and running before continuing with the next component. Ensure that there is a time gap (20 s or more) before starting each component.

SAN components:

1. [ ] Storage arrays from which VPLEX is getting the I/O disks and the metavolume disks.
2. [ ] Front-end and back-end InfiniBand switches.
VPLEX components:

1. [ ] Components in the VPLEX cabinet, as described in this document.


2. [ ] (If applicable) RecoverPoint.
3. [ ] Hosts connected to the VPLEX cluster.

Task 27: Bring up the VPLEX components


Procedure
1. [ ] Switch each breaker switch on the PDU units to the ON position for all receptacle groups that
have power cables connected to them.

2. [ ] Verify that the blue Power LED on the engine is lit, as shown in the figure.

3. [ ] On dual-engine or quad-engine clusters only, verify that the Online LED on each UPS (shown in
the following figure) is illuminated (green), and that none of the other three LEDs on the UPS is
illuminated.

If the Online LED on a UPS is not illuminated, push the UPS power button, and verify that the LEDs
are as described above before proceeding to the next step.

4. [ ] Verify that the UPS AC power input status LEDs are on (solid) to confirm that each unit is getting
power from both power zones.

5. [ ] On dual-engine or quad-engine clusters only, verify that no UPS circuit breaker has triggered. If
either circuit breaker on a UPS has triggered, press it to reseat it.

CAUTION: If any step you perform creates an error message or fails to give you the expected result,
consult the troubleshooting information in the generator, or contact the EMC Support Center. Do not
proceed until the issue has been resolved.

Task 28: Starting a PuTTY (SSH) session


Procedure
1. [ ] Launch PuTTY.exe.
2. [ ] Do one of the following:
 If a previously configured session to the MMCS exists in the Saved Sessions window, click Load.
 Otherwise, start PuTTY with the following values:

Field Value
Host Name (or IP address) 128.221.252.2
Port 22
Connection type SSH
Close window on exit Only on clean exit

Note: If you need more information on setting up PuTTY, see the EMC VPLEX Configuration Guide.

3. [ ] Click Open.
4. [ ] In the PuTTY session window, at the prompt, log in as service.
5. [ ] Type the service password.

Note: Contact the System Administrator for the service password. For more information about user
passwords, see the EMC VPLEX Security Configuration Guide.

Task 29: Verify COM switch health


If the cluster is dual-engine or quad-engine, verify the health of the InfiniBand COM switches as follows:
Procedure

1. [ ] At the VPlexcli prompt, type the following command to verify connectivity among the
directors in the cluster:
connectivity validate-local-com -c clustername
Output example showing connectivity:
VPlexcli:/> connectivity validate-local-com -c cluster-1

connectivity: FULL

ib-port-group-3-0 - OK - All expected connectivity is present.


ib-port-group-3-1 - OK - All expected connectivity is present.

2. [ ] In the output, confirm that the cluster has full connectivity.

Task 30: (Optionally) Change management server IP address


If the IP address of the management server in cluster 1 has changed, then follow the procedure in the SolVe Desktop titled "Change the management server IP address (VS6)".

Task 31: Verify the VPN connectivity


Procedure
1. [ ] At the VPlexcli prompt, type the following command to confirm that the VPN tunnel has been
established, and that the local and remote directors are reachable from management server-1:
vpn status
2. [ ] In the output, confirm that IPSEC is UP:
VPlexcli:/> vpn status
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.31.25.27 is reachable
Remote Internal Gateway addresses are reachable

3. [ ] Repeat the steps on cluster 2.
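
Note: If you script this verification, the lines that matter in the vpn status output are the IPSEC state and the two reachability messages. The following Python sketch is a minimal parser, assuming the output has been captured to a string.

def vpn_healthy(output_text):
    # True only when the tunnel is up and both reachability checks pass.
    text = output_text.lower()
    return ("ipsec is up" in text
            and "remote management server" in text and "is reachable" in text
            and "remote internal gateway addresses are reachable" in text)

if __name__ == "__main__":
    sample = ("Verifying the VPN status between the management servers...\n"
              "IPSEC is UP\n"
              "Remote Management Server at IP Address 10.31.25.27 is reachable\n"
              "Remote Internal Gateway addresses are reachable\n")
    print(vpn_healthy(sample))   # prints True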

Task 32: Power on the RecoverPoint cluster and enable consistency groups
If a RecoverPoint cluster that used VPLEX virtual volumes for its repository volume was powered off in
the Shutdown phase of this procedure, power on the RecoverPoint cluster.
About this task
Refer to the procedures in the RecoverPoint documentation.
If a RecoverPoint consistency group was disabled in the first phase of this procedure, perform this task to
enable those consistency groups. Refer to the procedures in the RecoverPoint documentation.
Procedure
1. [ ] Log in to the RecoverPoint GUI.
2. [ ] Enable each RecoverPoint consistency group that was disabled in Phase 1.
3. [ ] Repeat these steps for every RecoverPoint cluster attached to the VPLEX cluster.

Task 33: Resume volumes at cluster 1


If the consistency groups and the distributed storage not in consistency groups have auto-resume set to
false, then those volumes will not automatically resume when you restore cluster 1.
About this task
In the VPLEX CLI of cluster 1, follow these steps to resume volumes that do not have auto-resume set to true:
Procedure
1. [ ] Type the following command to display if any consistency groups require resumption:
consistency-group summary
Look for any consistency groups with requires-resume-at-loser.
2. [ ] Type the following command for each consistency group that has requires-resume-at-loser:
cd /clusters/cluster-1/
consistency-group resume-at-loser -c cluster -g consistency-group
3. [ ] Type the following command to display whether any volumes outside of a consistency group
require resumption on cluster 1:
ll /clusters/cluster-1/virtual-volumes/
4. [ ] Type the following command to resume at the loser cluster for all distributed volumes not in
consistency groups:
device resume-link-up -f -a

Task 34: Enable VPLEX Witness


If VPLEX Witness is deployed, and was disabled in Phase 1, complete this task to re-enable VPLEX
Witness.
Procedure

1. [ ] Type the following commands to enable VPLEX Witness on cluster-1 and confirm
that it is enabled:
cluster-witness enable
cd /cluster-witness
ls

Output example if VPLEX witness is enabled:

Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Confirm VPLEX witness is in contact with both clusters:


VPlexcli:/> ll cluster-witness/components/
/cluster-witness/components:
Name ID Admin State Operational State Mgmt Connectivity
--------- -- ----------- ------------------- -----------------
cluster-1 1 enabled in-contact ok
cluster-2 2 enabled in-contact ok
server - enabled clusters-in-contact ok

3. [ ] Confirm Admin State is enabled and Mgmt Connectivity is ok for all three components.
4. [ ] Confirm Operational State is in-contact for clusters and clusters-in-contact for
server.

Task 35: Check rebuild status and wait for rebuilds to complete
Rebuilds may take some time to complete while I/O is in progress. For more information on rebuilds,
please check the VPLEX Administration Guide "Data Migration" chapter.
Procedure

1. [ ] Type the rebuild status command and verify that all rebuilds are complete.
If rebuilds are complete, the command will report the following output:
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Task 36: Remount VPLEX volumes on hosts connected to cluster-1, and start I/O
Remounting VPLEX volumes requires access to the hosts accessing the storage through cluster-1 and cluster-2. You may need to coordinate this task with host administrators if you do not have access to the hosts.
Procedure
1. [ ] Perform a scan on the hosts and discover the VPLEX volumes.
2. [ ] Mount the necessary file systems on the VPLEX volumes.
3. [ ] Start the necessary I/O applications on the host.

Task 37: Verify the health of the clusters


After all maintenance activities, check the cluster health on both clusters.
Procedure

1. [ ] Type the following command, and confirm that the operational and health states
appear as ok:
health-check

2. [ ] Repeat the previous step on cluster 2.

Task 38: Restore the original rule-sets for consistency groups


If you changed the rule-sets for synchronous consistency groups in Phase 1, make the cluster selected in
Phase 1 the winner for all distributed synchronous consistency groups.
About this task
To change the rule-sets to their original value, follow these steps.

Note: Skip this task if you do not want to change the rule-sets.

See Table 2 in Phase 1 for the list of consistency groups.


Procedure
1. [ ] To restore the original rule-sets, type the following commands, where consistency-
group_name is the name of a consistency-group, original rule-set is the rule set in Phase 1
and delay is the delay set for the consistency-group:
cd consistency-group_name
set-detach-rule original rule-set --delay delay
cd ..

2. [ ] Repeat the previous step for each consistency group listed in the table.
3. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/
4. [ ] In the output, confirm that all the consistency groups listed in the table are restored to their
original detach rules.

Task 39: Restore the original rule-sets for distributed devices


If you changed the rule-set name for distributed devices to make cluster 2 the winner in Phase 1, then make the original winner cluster the winner for all distributed devices outside of consistency groups.
About this task
Perform the following steps to change the rule-set to its original value:
Procedure
1. [ ] Change the rule-set of distributed devices.

Note: You can change the rule-set for all distributed devices, or for selected distributed devices.

 To change the rule-set for distributed devices, type the following command from the
/distributed-storage/distributed-devices context:
set *::rule-set-name original rule-set-name

 To change the rule-set for selected distributed devices, type the following commands, where
distributed_device_name is the name of a device listed in the table.
cd distributed_device_name
set rule-set-name original rule-set-name

cd ..

2. [ ] To verify the rule-set name changes, type the following command:


ll /distributed-storage/distributed-devices

3. [ ] In the output, confirm that all the distributed devices recorded in Phase 1 are restored to the original detach rule.

Task 40: Restore the remote exports


If cluster 1 remote exports were moved to cluster 2 before shutdown in Phase 1, use the steps in this task to restore them.
Procedure
1. [ ] Refer to the table for the names of the cluster-2 devices used for data migration from cluster 1 before cluster 1 was shut down.

2. [ ] Perform the following operations to migrate data from cluster 2 back to cluster 1:

a. Create a migration job with the source device and target device from columns 3 and 1 of the table, respectively.
b. Verify that the prerequisites for device migration are met. Refer to the VPLEX Administration
Guide.
c. Monitor migration progress until it has finished.
d. Commit the completed migration.
e. Remove migration records.
Depending on the number of devices to migrate, refer to the following sections in the Data Migration
chapter of the VPLEX Administration Guide:
 To migrate one device, refer to "One-time data migrations"
 To migrate multiple devices, refer to "Batch migrations"

Task 41: Enable Call Home


About this task
Do the following to enable Call Home.
Procedure

1. [ ] At the VPlexcli prompt, type the following to enable Call Home:


VPlexcli:/>cd /notifications/call-home
VPlexcli:/notifications/call-home> set enabled true

2. [ ] Verify Call Home has been enabled by typing the ls command.


If Call Home is enabled, the following output should appear.

Attributes:
Name Value
---- -----
enabled true
Results
Call Home is enabled.

Task 42: Collect Diagnostics
Collect diagnostic information both before and after the shutdown.
Procedure
1. [ ] Type the following command to collect configuration information and log files from all directors
and the management server:
collect-diagnostics --minimum
The information is collected, compressed in a Zip file, and placed in the directory /diag/collect-
diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-
diagnostics-out to another computer.

Task 43: Exit the SSH sessions, restore laptop settings, and restore cabling arrangements
If you are still logged in to the VPLEX CLI session, restore the laptop settings. If you used a service
laptop to access the management server, use the steps in this task to restore the default cable
arrangement.
Procedure
1. [ ] If you changed or disabled any settings on the laptop before starting this procedure, restore the
settings.
2. [ ] The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC
cabinet or non-EMC cabinet:
 EMC cabinet:

1. Disconnect the red service cable from the Ethernet port on the laptop, and remove the laptop
from the laptop tray.
2. Slide the cable back through the cable tie until only one or 2 inches protrude through the tie,
and then tighten the cable tie.
3. Slide the laptop tray back into the cabinet.
4. Replace the filler panel at the U20 position.
5. If you used the cabinet's spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
 Non-EMC cabinet:

1. Disconnect the red service cable from the laptop.


2. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to
the VPLEX management server.)
