VIPR Controller 3.5
Version 3.5
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other
countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.EMC.com
Use this roadmap as a starting point for ViPR Controller installation and configuration.
You must perform the following high-level sequence of steps to install and configure ViPR
Controller. These steps must be completed for each instance of a ViPR Controller virtual
data center. Once ViPR Controller is installed and configured, you can automate block
and file storage provisioning tasks within the ViPR Controller virtual data center.
1. Review the ViPR Controller readiness checklist on page 7.
2. Obtain the EMC ViPR Controller license file on page 11.
3. Determine which method you will be using to deploy ViPR Controller, and follow the
installation instructions:
l Install ViPR Controller on VMware as a vApp on page 14
l Install ViPR Controller on VMware without a vApp on page 17
l Install ViPR Controller on Hyper-V on page 23
4. Optionally:
l Install the ViPR Controller CLI.
For steps to install the ViPR Controller CLI, refer to the ViPR Controller CLI Reference
Guide which is available from the ViPR Controller Product Documentation Index .
l Deploy a compute image server on page 34
5. Once you have installed the ViPR Controller, refer to the ViPR Controller User Interface
Tenants, Projects, Security, Users and Multisite Configuration Guide to:
l Add users into ViPR Controller via authentication providers.
l Assign roles to users.
l Create multiple tenants (optional)
l Create projects.
6. Prepare to configure the ViPR Controller virtual data center, as described in the ViPR
Controller Virtual Data Center Requirements and Information Guide.
7. Configure the ViPR Controller virtual data center as described in the ViPR Controller
User Interface Virtual Data Center Configuration Guide.
Use this checklist as an overview of the information you will need when you install and
configure the EMC ViPR Controller virtual appliance.
For the specific models and versions supported by ViPR Controller, and for the ViPR
Controller resource requirements, see the ViPR Controller Support Matrix.
l Identify a VMware or Hyper-V instance on which to deploy ViPR Controller.
l Make sure all ESXi servers (or all Hyper-V servers) on which ViPR Controller will be
installed are synchronized with accurate NTP servers.
l Collect credentials to access the VMware or Hyper-V instance.
Deploying ViPR Controller requires credentials for an account that has privileges to
deploy on the VMware or Hyper-V instance.
l Refer to the ViPR Controller Support Matrix to understand the ViPR Controller VMware or
Hyper-V resource requirements, and verify that the VMware or Hyper-V instance has
sufficient resources for ViPR Controller deployment.
l If deploying on VMware, it is recommended that you deploy ViPR Controller on an
ESXi DRS cluster of at least 3 nodes, and set an anti-affinity rule of "Separate Virtual
Machines" among the ViPR Controller nodes on the available ESXi nodes. Refer to the
VMware vSphere documentation for instructions on setting up ESX/ESXi DRS anti-affinity
rules.
l Identify 4 IP addresses for a 3 node deployment, or 6 IP addresses for a 5 node
deployment. The addresses are needed for the ViPR Controller VMs and for the virtual
IP by which REST clients and the UI access the system. The addresses can be IPv4 or
IPv6 (see the example following this checklist).
Note
In dual mode, all controller nodes and VIPs must have both IPv4 and IPv6 addresses.
l A supported browser.
l Download the ViPR Controller deployment files from support.EMC.com.
l For each ViPR Controller VM, collect: IP address, IP network mask, IP network
gateway, and optionally IPv6 prefix length and IPv6 default gateway.
l Two or three DNS servers.
The DNS servers configured for ViPR Controller deployment must be configured to
perform both forward and reverse lookups for all devices that will be managed by ViPR
Controller.
l Two or three NTP servers.
l ViPR Controller requires that the ICMP protocol be enabled for installation and normal usage.
l FTP/FTPS or CIFS/SMB server for storing ViPR Controller backups remotely. You need
the URL of the server and credentials for an account with read and write privileges on
the server. Plan for 6 GB per backup initially, then monitor usage and adjust as
needed.
l A valid SMTP server and email address.
l An Active Directory or LDAP server and related attributes.
ViPR Controller validates added users against an authentication server. To use
accounts other than the built-in user accounts, you need to specify an authentication
provider.
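The following example is illustrative only; all addresses and host names are hypothetical.
For a 3 node deployment you might plan 10.10.10.11, 10.10.10.12, and 10.10.10.13 for the
vipr1, vipr2, and vipr3 VMs, plus 10.10.10.10 as the public virtual IP used by the UI and
REST clients. You can also spot-check that the DNS servers resolve managed devices in
both directions before you deploy:
# forward lookup of a managed device's host name against the DNS server
nslookup array1.example.com 10.10.10.53
# reverse lookup of the same device's IP address
nslookup 10.10.20.21 10.10.10.53
If either lookup fails, correct the DNS records before continuing with the deployment.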
Starting with ViPR Controller 3.0, a new licensing model was deployed.
Overview
Starting with Release 3.0, ViPR Controller implemented a new licensing model. The new
model supports a new-format managed capacity license and a raw, usable, frame-based
capacity license. With the raw capacity single license file, each license file can include
multiple increments, both array-type and tiered.
The new licensing model is not compatible with the old-format managed capacity license
used with older versions of ViPR Controller.
ViPR Controller 3.5 new installation
l For a fresh ViPR 3.5 installation with a new license, you should encounter no problem
and may proceed normally.
l If you try to do a fresh ViPR 3.5 installation with an old license, you will receive an
error message "Error 1013: License is not valid" and will not be able to proceed with
the installation. You must open a Service Request (SR) ticket to obtain a new license
file.
ViPR Controller 3.5 upgrade installation
l For an upgrade ViPR 3.5 installation with an old license, ViPR 3.5 will continue to use
the old-format license, but the license will say "Legacy" when viewing the Version and
License section of the Dashboards in the ViPR GUI. There is no automatic conversion
to the new-format license. To convert to the new-format license, you must open a
Service Request (SR) ticket to obtain a new license file. After you upload the new-
format license, the GUI display will show "Licensed".
Pre-3.0 versions of ViPR Controller
l Pre-3.0 versions of ViPR Controller will accept the new-format license file. However,
they will only recognize the last increment in the new file.
l After you upgrade to Version 3.0 or greater, you will need to upload the new-format
license again.
EMC ViPR Controller supports a new-format managed capacity license and a raw, usable,
frame-based capacity license. You need to obtain the license file (.lic) from the EMC
license management web site for uploading to ViPR Controller.
Before you begin
Note
There is a new licensing model for EMC ViPR Controller Version 3.0 and above. For details,
refer to the chapter "Licensing Model" in the EMC ViPR Controller Installation, Upgrade, and
Maintenance Guide, which can be found on the ViPR Controller Product Documentation
Index .
To obtain the license file, you must have the License Authorization Code (LAC),
which was emailed to you from EMC.
The license file is needed during initial setup of ViPR Controller, or when adding capacity
to your existing ViPR Controller deployment. Initial setup steps are described in the
deployment sections of this guide. If you are adding a ViPR Controller license to an
existing deployment, follow these steps to obtain a license file.
Procedure
1. Go to support.EMC.com
2. Select Support > Service Center.
3. Select Get and Manage Licenses.
4. Select ViPR from the list of products.
5. On the LAC Request page, enter the LAC and click Activate.
6. Select the entitlements to activate and click Start Activation Process.
7. Select Add a Machine to specify any meaningful string for grouping licenses.
The "machine name" does not have to be a machine name at all; enter any string that
will help you keep track of your licenses.
8. Enter the quantities for each entitlement to be activated, or select Activate All. Click
Next.
If you are obtaining licenses for a multisite (geo) configuration, distribute the
controllers as appropriate to obtain individual license files for each virtual data
center.
For a System Disaster Recovery environment, you do NOT need extra licenses for
Standby sites. The Active site license is shared between the sites.
9. Optionally specify an addressee to receive an email summary of the activation
transaction.
10. Click Finish.
11. Click Save to File to save the license file (.lic) to a folder on your computer.
vipr-<version>-controller-3+2.ova
Deploys on 5 VMs. Two VMs can go down without affecting availability of the
virtual appliance.
This option is recommended for deployment in production environments.
One IPv4 address for public network. Each Controller VM requires either a unique,
static IPv4 address in the subnet defined by the netmask, or a unique static IPv6
address, or both.
Note that an address conflict across different ViPR Controller installations can
result in ViPR Controller database corruption that would need to be restored from
a previous good backup.
Public virtual IPv4 address
Key name: network_vip
IPv4 address used for UI and REST client access. See also Avoid conflicts in EMC
ViPR network virtual IP addresses on page 54.
Network netmask
Key name: network_netmask
One IPv6 address for public network. Each Controller VM requires either a unique,
static IPv6 address in the subnet defined by the netmask, or a unique static IPv4
address, or both.
Note that an address conflict across different ViPR Controller installations can
result in ViPR Controller database corruption that would need to be restored from
a previous good backup.
IPv6 address used for UI and REST client access. See also Avoid conflicts in EMC
ViPR network virtual IP addresses on page 54.
15. Wait 7 minutes after powering on the VM before you follow the next steps. This will
give the ViPR Controller services time to start up.
16. Open https://ViPR_virtual_ip with a supported browser and log in as root.
Initial password is ChangeMe.
The ViPR_virtual_IP is the ViPR Controller public virtual IP address, also known as the
network.vip (the IPv4 address) or the network.vip6 (IPv6). Either value, or the
corresponding FQDN, can be used for the URL.
17. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
18. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not reused within the last 3 password changes (settable)
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The system
accounts (sysmonitor, svcuser, and proxyuser) are used internally by ViPR Controller.
19. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
20. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
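For example (hypothetical addresses), you might enter 10.10.10.53,10.10.10.54 for the
DNS servers and 10.10.10.60,10.10.10.61 for the NTP servers; the same comma-separated
format applies to IPv6 addresses.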
21. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none) and enter an
email address (user@domain) for the ConnectEMC Service notifications.
If you select the SMTP transport option, you must specify an SMTP server under SMTP
settings in the next step. "None" disables ConnectEMC on the ViPR Controller virtual
appliance.
In an IPv6-only environment, use SMTP for the transport protocol. (The ConnectEMC
FTPS server is IPv4-only.)
22. (Optional) Specify an SMTP server and port for notification emails (such as
ConnectEMC alerts, ViPR Controller approval emails), the encryption type (TLS/SSL or
not), a From address, and authentication type (login, plain, CRAM-MD5, or none).
Optionally test the settings and supply a valid addressee. The test email will be from
the From Address you specified and will have a subject of "Mail Settings Test".
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
23. Click Finish.
At this point ViPR Controller services restart (this can take several minutes).
After you finish
You can now set up Authentication Providers as described in ViPR Controller User Interface
Tenants, Projects, Security, Users and Multisite Configuration Guide, and set up your virtual
data center as described in ViPR Controller User Interface Virtual Data Center Configuration
Guide. Both guides are available from the ViPR Controller Product Documentation Index .
Procedure
1. Log in to a Linux or Windows computer that has IP access to the vCenter Server or to a
specific ESXi server.
2. Download vipr-<version>-controller-vsphere.zip from the ViPR download page on
support.emc.com.
3. Unzip the ZIP file.
4. Open a bash command window on Linux, or a PowerShell window on Windows, and
change to the directory where you unzipped the installer.
5. To deploy ViPR Controller, run the vipr-version-deployment installer script.
You can run the script in interactive mode, or through the command line. Interactive
mode guides you through the installation, and the interactive script encodes the
vCenter username and password for you; if the username or password contains
special characters, you are not required to encode them manually.
For interactive mode enter:
l bash shell:
If you choose to deploy the ViPR Controller from the command line, you will need to
manually enter the deployment parameters, and escape special characters if any are
used in the vCenter username and password.
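As a hedged illustration of the manual encoding (the account name is hypothetical, and a
Python 2 interpreter is assumed to be available), a backslash in a domain\user account is
percent-encoded as %5c, as in the vi:// locator shown later in this procedure. You can
generate the encoded form as follows:
python -c "import urllib; print urllib.quote('mydomain\\myuser1', '')"
# prints mydomain%5Cmyuser1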
The following are examples of deploying ViPR Controller from the command line. See
the following table for complete syntax.
l bash shell:
Option Description
-help Optional, to see the list of parameters, and descriptions.
-mode install Required for initial install.
-mode redeploy Required to redeploy a node for restore. For details see the EMC
ViPR Controller System Disaster Recovery, Backup and Restore
Guide, which is available from the ViPR Controller Product
Documentation Index .
-interactive Optional for install, and redeploy.
Prompts for user input, one parameter at a time. Do not use
delimiters when in interactive mode, that is, no single quotes,
no double quotes.
node 1:
node 2:
node 3:
-cpucount Optional for install, and redeploy.
Number of CPUs for each virtual machine. Valid values are 2 -
16.
By default, 2 CPUs are used for a 3 node installation and 4 CPUs
are used for a 5 node installation. For details see the ViPR
Controller Support Matrix.
vi://mydomain.com%5cmyuser1:[email protected]:443/My-Datacenter/host/ViPR-Cluster/Resources/ViPR-Pool
For details refer to the VMware OVF Tool User Guide.
You do not need to escape special characters when entering
the username at the interactive mode prompt.
6. If redeploying a failed node, for the remaining steps refer to the EMC ViPR Controller
System Disaster Recovery, Backup and Restore Guide, which is available from the ViPR
Controller Product Documentation Index .
If installing ViPR Controller for the first time, repeat steps 1 - 5 for each node you are
installing.
You will need to enter the information required to install the first node; however, you
will not need to enter all of the information for the additional nodes. A .settings
file is created during installation of the first node and is used to supply the
configuration information for the remaining nodes.
For each subsequent node, you only need to change the node-specific parameters,
such as the node id, VM name, or target datastore.
9. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
10. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not reused within the last 3 password changes (settable)
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The system
accounts (sysmonitor, svcuser, and proxyuser) are used internally by ViPR Controller.
11. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
12. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
13. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none) and enter an
email address (user@domain) for the ConnectEMC Service notifications.
If you select the SMTP transport option, you must specify an SMTP server under SMTP
settings in the next step. "None" disables ConnectEMC on the ViPR Controller virtual
appliance.
In an IPv6-only environment, use SMTP for the transport protocol. (The ConnectEMC
FTPS server is IPv4-only.)
14. (Optional) Specify an SMTP server and port for notification emails (such as
ConnectEMC alerts, ViPR Controller approval emails), the encryption type (TLS/SSL or
not), a From address, and authentication type (login, plain, CRAM-MD5, or none).
Optionally test the settings and supply a valid addressee. The test email will be from
the From Address you specified and will have a subject of "Mail Settings Test".
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
l You need credentials to log in to the System Center Virtual Machine Manager
(SCVMM).
l Be prepared to provide new passwords for the ViPR Controller root and system
accounts.
l You need IPv4 and/or IPv6 addresses for DNS and NTP servers.
l You need the name of an SMTP server. If TLS/SSL encryption is used, the SMTP server
must have a valid CA certificate.
l You need access to the ViPR Controller license file.
l Note the following restrictions on ViPR Controller VMs in a Hyper-V deployment:
n Hyper-V Integration Services are not supported. Do not install Integration Services
on ViPR Controller VMs.
n Restoring from a Hyper-V virtual machine checkpoint or clone is not supported.
n Modifying the VM memory, CPU, or data disk size requires powering off the whole
cluster before making the change with SCVMM.
Procedure
1. Log in to the SCVMM server using the Administrator account, and copy the zip file to
the SCVMM server node.
2. Unzip the ZIP file.
3. Open a PowerShell window and change to the directory where you unzipped the files.
4. To deploy the ViPR Controller, run the vipr-version-deployment installer script.
You can run the script in interactive mode, or through the command line. Interactive
mode guides you through the installation, or you can use the command line to enter
the parameters on your own. The following is only an example; see the table for
complete syntax.
Option Description
-help Optional, to see the list of parameters, and descriptions.
-mode install Required for initial install.
-mode redeploy Required to redeploy a node for restore. For details see the EMC
ViPR Controller System Disaster Recovery, Backup and Restore
Guide, which is available from the ViPR Controller Product
Documentation Index .
-interactive Optional for install, and redeploy.
Prompts for user input, one parameter at a time. Do not use
delimiters when in interactive mode, that is, no single quotes, no
double quotes.
-gateway Required for install.
IPv4 default gateway.
node 2:
node 3:
-vswitch Required for install, and redeploy.
Name of the virtual switch.
5. If redeploying a failed node, for the remaining steps, refer to the EMC ViPR Controller
System Disaster Recovery, Backup and Restore Guide, which is available from the ViPR
Controller Product Documentation Index .
If installing ViPR Controller for the first time, repeat steps 1 - 4 for each node you are
installing.
You will need to enter the information required to install the first node; however,
you will not need to enter all of the information for the additional nodes. A .settings file
is created during installation of the first node and is used to supply the
configuration information for the remaining nodes.
The ViPR_virtual_IP is the ViPR Controller public virtual IP address, also known as the
network.vip (the IPv4 address) or the network.vip6 (IPv6). Either value, or the
corresponding FQDN, can be used for the URL.
8. Browse to and select the license file that was downloaded from the EMC license
management web site, then Upload License.
9. Enter new passwords for the root and system accounts.
The passwords must meet these requirements:
l at least 8 characters
l at least 1 lowercase
l at least 1 uppercase
l at least 1 numeric
l at least 1 special character
l no more than 3 consecutive repeating characters
l at least 2 characters changed from the previous password (settable)
l not reused within the last 3 password changes (settable)
The ViPR Controller root account has all privileges that are needed for initial
configuration; it is also the same as the root user on the Controller VMs. The system
accounts (sysmonitor, svcuser, and proxyuser) are used internally by ViPR Controller.
10. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
11. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by
commas.
12. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none) and enter an
email address (user@domain) for the ConnectEMC Service notifications.
If you select the SMTP transport option, you must specify an SMTP server under SMTP
settings in the next step. "None" disables ConnectEMC on the ViPR Controller virtual
appliance.
In an IPv6-only environment, use SMTP for the transport protocol. (The ConnectEMC
FTPS server is IPv4-only.)
13. (Optional) Specify an SMTP server and port for notification emails (such as
ConnectEMC alerts, ViPR Controller approval emails), the encryption type (TLS/SSL or
not), a From address, and authentication type (login, plain, CRAM-MD5, or none).
Optionally test the settings and supply a valid addressee. The test email will be from
the From Address you specified and will have a subject of "Mail Settings Test".
If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
14. Click Finish.
At this point ViPR Controller services restart. This can take several minutes.
After you finish
You can now set up Authentication Providers as described in ViPR Controller User Interface
Tenants, Projects, Security, Users and Multisite Configuration Guide, and set up your virtual
data center as described in ViPR Controller User Interface Virtual Data Center Configuration
Guide. Both guides are available from the ViPR Controller Product Documentation Index .
After installing the required Python packages, you will need to set the environment
variables on your local host to include the path to your Python installation directory.
Refer to the Python documentation for complete details.
Note
For sites with self-signed certificates or where issues are detected, optionally use
http://<ViPR_Controller_VIP>:9998/cli only when you are inside a
trusted network. <ViPR_Controller_VIP> is the ViPR Controller public virtual IP address,
also known as the network vip. The CLI installation bundle is downloaded to the
current directory.
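As an illustration, the bundle named in the note above can be fetched with wget; use this
http URL only inside a trusted network, and substitute your own virtual IP (the local file
name is your choice):
wget http://<ViPR_Controller_VIP>:9998/cli -O viprcli.tar.gz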
4. Use tar to extract the CLI and its support files from the installation bundle.
tar -xvf <cli_install_bundle>
5. Run the CLI installation program.
python setup.py install
6. Change directory to /opt/storageos/cli or to the directory where the CLI is
installed.
7.
Note
Perform this step only when you have not provided the correct input in step 5.
Edit the viprcli.profile file using the vi command, set the VIPR_HOSTNAME
environment variable to the ViPR Controller public virtual IP address and VIPR_PORT
to 4443, and save the file. For example (the address shown is a placeholder for your
deployment's virtual IP):
# vi viprcli.profile
#!/usr/bin/sh
export VIPR_HOSTNAME=<ViPR_Controller_VIP>
export VIPR_PORT=4443
:wq
8. Run the source command to set the path environment variable for the ViPR Controller
executable.
source ./viprcli.profile
9. From the command prompt run the viprcli -h command.
If the help for viprcli is displayed, then the installation is successful.
10. Authenticate (log into) the ViPR Controller instance with the viprcli to confirm that your
installation was successful.
Note
For sites with self-signed certificates or where issues are detected, optionally use
http://<ViPR_Controller_virtual_IP>:9998/cli only when you are
inside a trusted network. <ViPR_Controller_virtual_IP> is the ViPR Controller public
virtual IP address, also known as the network vip.
l If your browser prompts you to save the ViPR-cli.tar.gz file, save it to the
temporary CLI installer directory that you created in step 2. For example,
c:\cli\temp.
l If your browser automatically downloads the ViPR-cli.tar.gz file, without
giving you the opportunity to select a directory, then copy the downloaded
ViPR-cli.tar.gz file to the temporary CLI installer directory that you created
in step 2.
4. Open a command prompt and change to the directory you created in step 2, where
you saved or copied the ViPR-cli.tar.gz file. This example will use
c:\cli\temp.
5. Enter the python console by typing python at the command prompt:
c:\cli\temp>python
Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit
(AMD64)] on win
32
Type "help", "copyright", "credits" or "license" for more
information.
>>>
6. Using the tarfile module, open and extract the files from the ViPR-cli.tar.gz
file.
>>> import tarfile
>>> tfile = tarfile.open('ViPR-cli.tar.gz', 'r:gz')
>>> tfile.extractall('.')
>>> exit()
7. Since you are already in the directory to which the files have been extracted, run the
python setup.py install command. Follow the installation instructions and
provide the required information.
Note
You can also enter y to select the defaults for the installation directory
(C:\EMC\ViPR\cli) and the port number (4443).
8. (Optional) If incorrect information was provided in the previous step, edit the
viprcli.profile.bat file and set the following variables.
Variable Value
SET VIPR_HOSTNAME The ViPR Controller hostname, set to the fully qualified
domain name (FQDN) of the ViPR Controller host, or the
virtual IP address of your ViPR Controller configuration.
SET VIPR_PORT The ViPR Controller port. The default value is 4443.
9. Change directories to the location where the viprcli was installed. The default is
C:\EMC\ViPR\cli.
10. Run the viprcli.profile.bat command.
11. Authenticate (log into) the ViPR Controller instance with the viprcli to confirm that your
installation was successful.
See Authenticating with viprcli on page 32.
C:\> viprcli authenticate -u root -d c:\tmp
C:\> viprcli -hostname <fqdn or host ip> authenticate -u root -d c:\tmp
Do not end the directory path with a '\'. For example, do not use c:\tmp\.
Type the password when prompted.
Authenticate on Linux
To log into the default ViPR Controller instance use:
# viprcli authenticate -u root -d /tmp
# viprcli -hostname <fqdn or host ip> authenticate -u root -d /tmp
Note
Non-root users must have read, write, and execute permissions to use the CLI
installed by root. However, they do not need these permissions to install and run
the CLI in their own home directories.
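A minimal sketch of granting that access, assuming the CLI was installed in the default
/opt/storageos/cli directory (adjust the path and the permission scope to your site's
security policy):
# give all users read, write, and execute/traverse access to the CLI directory
chmod -R a+rwX /opt/storageos/cli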
2. If you do not have the original files that you used to install the ViPR Controller CLI,
then follow the steps to extract the CLI and its support files that are appropriate for
your platform:
l Steps 1 through 4 of Install the ViPR Controller CLI on Linux on page 29.
l Steps 1 through 7 of Install the ViPR Controller CLI on Windows on page 31.
3. In the directory to which you extracted the CLI files, run the CLI uninstall program.
python setup.py uninstall
4. When prompted, provide the directory where the CLI is installed, for example,
/opt/storageos/cli.
For information about ViPR Controller support for a Vblock system, see the ViPR Controller
Virtual Data Center Requirements and Information Guide, which is available from the ViPR
Controller Product Documentation Index .
Note
The OS Image Server, which is provided with ViPR Controller, contains a dedicated
DHCP server.
l Isolated from other networks to avoid conflicts with other VLANs.
Property Description
Appliance fully qualified name: FQDN of the image server host name.
Management Network IP Address: IPv4 address for the Management Network interface.
Management Network Netmask: IPv4 netmask for the Management Network interface.
Management Network Gateway: IPv4 address for the Management Network gateway.
Private OS Install Network IP address: IPv4 address for the OS Install Network interface.
DNS Server(s): IPv4 addresses for one or more DNS servers.
Search Domain(s): One or more domains for directing searches.
Time Zone: Select the time zone where the image server resides.
/etc/sysconfig/network/ifcfg-eth1
DEVICE='eth1'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='12.0.55.10'
NETMASK='255.255.255.0'
l Compute Image Server must have a DHCP server
n DHCP server must be listening on the OS Install Network
n DHCP response must contain the "next-server" option set to its own OS Install
Network IP and the "filename" option set to "/pxelinux.0"
n Suggested DHCP version: Internet Systems Consortium DHCP Server 4.2
(http://www.isc.org/downloads/dhcp/), as demonstrated in the following example.
Note the next-server and filename options.
/etc/dhcpd.conf
ddns-update-style none;
ignore client-updates;
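The snippet above shows only the global options; a minimal sketch of the subnet
declaration that supplies the next-server and filename options, assuming the 12.0.55.0/24
OS Install Network from the ifcfg-eth1 example above (the lease range is illustrative):
subnet 12.0.55.0 netmask 255.255.255.0 {
  range 12.0.55.100 12.0.55.200;   # illustrative lease range for PXE clients
  next-server 12.0.55.10;          # OS Install Network IP of this image server
  filename "/pxelinux.0";          # PXE boot loader served over TFTP
}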
/etc/sysconfig/dhcpd
# listen on eth1 only
DHCPD_INTERFACE="eth1"
l Compute Image Server must have a TFTP server
n TFTP server must listen on the OS Install Network
n TFTPBOOT directory must contain the pxelinux.0 binary (version 3.86)
(https://www.kernel.org/pub/linux/utils/boot/syslinux/3.xx/)
n Suggested TFTP server version: tftp-hpa
(https://www.kernel.org/pub/software/network/tftp/tftp-hpa/)
n TFTP can be configured to run as its own service or as part of xinetd. In the
following example, TFTP was configured with xinetd
/etc/xinetd.d/tftp
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /opt/tftpboot/ -vvvvvvv
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
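One quick way to confirm that the TFTP service is answering on the OS Install Network is
to fetch the boot loader from another host on that network; a hedged example, assuming
the tftp-hpa client and the 12.0.55.10 address used above:
# fetch pxelinux.0 from the image server's TFTP service
tftp 12.0.55.10 -c get pxelinux.0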
l SSH access
n User account must have permissions to write to the TFTPBOOT directory.
n User account must have permissions to execute the mount/umount commands.
l Python
l Enough disk space to store multiple OS images - at least 50 GB is recommended
l No firewall blocking the standard SSH, DHCP, and TFTP ports, or HTTP on port 44491
(or a custom port chosen for HTTP).
l wget binary must be installed.
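A short sanity check you can run on the image server before registering it with ViPR
Controller (the /opt mount point for image storage is an assumption; substitute the file
system you will use):
# confirm the required binaries are present
which wget python
# confirm at least 50 GB of free space is available for OS images
df -h /opt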
3. Optionally check Remember me, which maintains your session for a maximum of 8
hours or 2 hours of idle time (whichever comes first), even if you close the browser. If
you don't check this option, your session ends when you close the browser, or log out.
Logging out always closes the session.
Note that this option does not remember user credentials between sessions.
Note
Access to different areas of the ViPR Controller UI is governed by the actions permitted to
the role assigned to the user. The actions authorized when you access ViPR Controller
from the UI can differ (be more constrained) from those available when you use the REST
API or CLI.
Note
In Geo-federated Environment:
l Adds a VDC to create a Geo-federated environment
l Adds, disconnects, reconnects, or deletes a VDC
l Has System Administrator privileges on global virtual pools, which are
global resources.
l Sets ACL assignments for virtual arrays, and virtual pools, from the ViPR
Controller API
System Auditor Has read-only access to the ViPR Controller virtual data center audit logs.
Tenant-level roles
Tenant roles are used to administer tenant-specific settings, such as the service
catalog and projects, and to assign additional users to tenant roles. The following table
lists the authorized actions for each user role at the tenant level.
Note
In Geo-federated Environment:
l Has Tenant Administrator privileges on tenants, which are global
resources.
Project Administrator l Creates projects in their tenant and obtains an OWN ACL on the
created project.
l Pre-upgrade planning............................................................................................46
l Upgrade ViPR Controller........................................................................................ 49
l Add the Node ID property in VMware after upgrading the ViPR Controller vApp...... 50
l Changing ScaleIO storage provider type and parameters after upgrading ViPR Controller..... 51
l Upgrade the ViPR Controller CLI............................................................................. 51
Pre-upgrade planning
Some pre-upgrade steps are required and you should prepare for ViPR Controller to be
unavailable for a period of time.
l The minimum base version for upgrade to ViPR Controller 3.5 is version 2.3. If you
want to upgrade from version 2.2 or earlier (including 2.1.x), you should first follow
the upgrade guide for those releases to reach a supported base version.
l For supported upgrade paths, and most recent environment and system
requirements, see the EMC ViPR Controller Release Notes, which are available from
the ViPR Controller Product Documentation Index .
l To ensure your environment is compliant with the latest support matrix, review the
ViPR Controller Support Matrix.
l Determine whether you will be upgrading from an EMC-based repository, or from an
internal location to which you first download the ViPR Controller installation files.
n If upgrading from an EMC-based repository, configure the ViPR Controller to point
to the EMC-based repository as described in: Configuring ViPR Controller for
upgrade from an EMC-based repository on page 47.
n If your site cannot access the EMC repository, and you will be installing from an
internal location refer to Configuring ViPR Controller for an upgrade from an
internal location on page 48.
l Verify that the ViPR Controller status is Stable from the ViPR Controller UI System >
Dashboard.
l In a multisite (geo) configuration, don't start an upgrade under these conditions:
n if there are add, remove, or update VDC operations in progress on another VDC.
n if an upgrade is already in progress on another VDC.
n if any other VDCs in the federation are unreachable, or have been manually
disconnected, or if the current VDC has been disconnected.
In these cases, you should manually disconnect the unreachable VDC, and
reconnect any disconnected VDC.
n Also, make sure that the ports that are used for IPSec in ViPR Controller 3.5 are
open (not blocked by a firewall) in the customer environment between the
datacenters.
l Before upgrading, make a backup of the ViPR Controller internal databases using a
supported backup method so that in the unlikely event of a failure, you will be able to
restore to the previous instance. Refer to the version of ViPR Controller backup
documentation that matches the version of ViPR Controller you are backing up. For
ViPR Controller versions 2.4 and later, backup information is provided in the EMC ViPR
Controller Disaster Recovery, Backup and Restore Guide. For earlier versions, backup
information is provided in the EMC ViPR Controller Installation, Upgrade, and
Maintenance Guide.
l Prepare for the ViPR Controller virtual appliance to be unavailable for provisioning
operations for 6 minutes plus approximately 1 minute for every 10,000 file shares,
volumes, block mirrors, and block snapshots in the ViPR Controller database. System
Management operations will be unavailable for a period of 8 minutes (for a 2+1
Controller node deployment) or 12 minutes (for a 3+2 Controller node deployment)
plus approximately 1 minute for every 10,000 file shares, volumes, block mirrors, and
block snapshots in the ViPR Controller database. A worked example follows this list.
l Verify that all ViPR Controller orders have completed before you start the upgrade.
l If RecoverPoint is used, upgrade RecoverPoint to a version supported by ViPR
Controller 3.5, before upgrading ViPR Controller itself. Refer to the EMC ViPR Support
Matrix for supported RecoverPoint versions.
l If your ViPR Controller is managing EMC ScaleIO storage, upgrade EMC ScaleIO to a
version supported by ViPR Controller 3.5, before upgrading ViPR Controller itself. As
part of the EMC ScaleIO upgrade, you must install the ScaleIO Gateway. Refer to the
EMC ViPR Support Matrix for supported EMC ScaleIO versions.
l Prior to upgrading ViPR Controller to version 3.5, refer to the EMC ViPR Support Matrix
for SMI-S versions supported for ViPR Controller 3.5 for VMAX, and VNX for Block
storage systems. If upgrading the SMI-S is required:
n When upgrading an SMI-S provider to meet the ViPR Controller requirements, you
must upgrade ViPR Controller first, and then the SMI-S provider.
n If you are required to upgrade the SMI-S provider from 4.6.2 to 8.x, you must
contact EMC Customer Support prior to upgrading ViPR Controller or the SMI-S
provider.
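As a worked example of the downtime estimates above (the object count is hypothetical):
if the ViPR Controller database holds 50,000 file shares, volumes, block mirrors, and block
snapshots, provisioning operations are unavailable for roughly 6 + (50,000 / 10,000) = 11
minutes, and System Management operations are unavailable for roughly 8 + 5 = 13 minutes
on a 2+1 deployment or 12 + 5 = 17 minutes on a 3+2 deployment.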
Note
ViPR Controller does not support spaces in project names; therefore, spaces are not
supported in XtremIO folder names.
Option Description
Repository URL URL to the EMC upgrade repository. One value only. Default value
is https://colu.emc.com/soap/rpc.
Proxy HTTP/HTTPS proxy required to access the EMC upgrade repository.
Leave empty if no proxy is required.
Username Username to access EMC Online Support.
Password Password to access EMC Online Support.
Check Frequency Number of hours between checks for new upgrade versions.
3. Click Save.
After you finish
Use the following command to configure the ViPR Controller for an upgrade from an EMC-
based repository using the ViPR Controller CLI:
Note
If you have modified the viprcli.profile file appropriately, you do not need to
append -hostname <vipr_ip_address> to the command.
For complete details refer to the ViPR Controller CLI Reference Guide which is available from
the ViPR Controller Product Documentation Index .
Note
If you have modified the viprcli.profile file appropriately, you do not need to
append -hostname <vipr_ip_address> to the command.
For complete details refer to the ViPR Controller CLI Reference Guide which is available
from the ViPR Controller Product Documentation Index .
Enter the password for the username when prompted.
3. Enter the following to upload the image file to a location on the ViPR Controller virtual
appliance where ViPR Controller can find it for the upgrade:
For details about using the ViPR Controller CLI see: ViPR Controller CLI Reference Guide,
which is available from the ViPR Controller Product Documentation Index .
4. Proceed to the next section to upgrade to the new version.
The System Maintenance page opens while installation is in progress, and shows you
the current state of the upgrade process.
Wait for the system state to be Stable before making provisioning or data requests.
5. If you are upgrading on a ViPR Controller instance that was deployed as a VMware
vApp, then continue to add the Node ID property as described in Add the Node ID
property in VMware after upgrading the ViPR Controller vApp on page 50.
For complete details refer to the ViPR Controller CLI Reference Guide which is available from
the ViPR Controller Product Documentation Index .
Note the following about ViPR Controller after an upgrade:
l Modified ViPR Controller catalog services are always retained on upgrade, but to
obtain new services and the original versions of modified services, go to Edit Catalog
and click Update Catalog.
l After upgrading to version 2.4 or higher, any array with meta volumes needs to be
rediscovered before you attempt to ingest those meta volumes.
l After upgrading to version 2.4 or higher, rediscover your RecoverPoint Data Protection
Systems. This refreshes ViPR Controller's system information and avoids
inconsistencies when applying RecoverPoint protection with ViPR Controller 2.4 or
higher.
Note
Failure to perform this operation after upgrade from ViPR Controller versions 2.3.x or
earlier will cause ViPR Controller operational failures if, at any time, you use vSphere to
rename the original ViPR Controller vApp node names.
Procedure
1. From the VMware vSphere, power off the ViPR Controller vApp.
2. Right click on the first virtual machine in the ViPR Controller vApp, and choose Edit
Settings.
3. Go to the Options > vApp Options > Advanced menu.
4. Open the Properties, and create a new property with the following settings:
l Enter a Label, optionally name it Node ID.
l Leave the Class ID empty.
l Enter "node_id" for the ID. The name "node_id" is required for the id name, and
cannot be modified.
l Leave the Instance ID empty.
l Optionally enter a Description of the ViPR Controller node.
l Type: string.
l Enter the Default value, which must be the node id set by ViPR Controller during
deployment, for example, vipr1 for the first ViPR Controller node and vipr2 for the
second ViPR Controller node.
The ViPR Controller values for a 3 node deployment are vipr1, vipr2, and vipr3, and for
a 5 node deployment are vipr1, vipr2, vipr3, vipr4, and vipr5.
l Check User Configurable.
5. Repeat steps 2 through 4 for each virtual machine deployed with the ViPR Controller
vApp.
6. Power on the ViPR Controller vApp.
6. Select Save.
7. Navigate to Physical Assets > Storage Systems.
8. For each of the storage systems associated with the updated ScaleIO storage
provider:
a. Select the ScaleIO storage system.
b. Click Rediscover.
Change the IP address of EMC ViPR Controller node deployed as a VMware vApp
This section describes how to change node IP address or VIP for a ViPR Controller virtual
machine on VMware that was deployed as a vApp.
Before you begin
If ViPR Controller was not deployed as a vApp, do not follow this procedure. Instead, refer
to Change the IP address of EMC ViPR Controller node on VMware deployed with no vApp.
This operation requires the System Administrator role in ViPR Controller.
You need access to the vCenter Server that hosts the ViPR vApp.
The ViPR Controller vApp must not be part of a multi-VDC or System Disaster Recovery
configuration:
l To check for a multi-VDC environment, go to Virtual > Virtual Data Centers; there
should only be one VDC listed.
l To check for a System Disaster Recovery environment, go to System > System Disaster
Recovery; there should only be an Active site listed, and no Standby sites.
Procedure
1. From the ViPR Controller UI, shut down all VMs (System > Health > Shutdown All).
2. Open a vSphere client on the vCenter Server that hosts the ViPR Controller vApp.
3. Right-click the ViPR vApp whose IP address you want to change and select Edit
Settings.
4. Click Properties and expand EMC ViPR.
5. Edit the desired IP values and click OK.
6. If applicable, change the network adapter to match a change in the subnet:
a. Select a specific VM.
b. Edit Settings.
c. Select Virtual Hardware > Network adapter.
d. Click OK.
7. From the vSphere client, power on the ViPR vApp.
Note: the ViPR Controller vApp will fail to boot up after an IP address change if the
vApp is part of a multi-VDC (geo) configuration. In this case you would need to revert
the IP address change.
Change the IP address of ViPR Controller node on VMware without vApp, or Hyper-V
using ViPR Controller UI
Use the ViPR Controller UI to change the IP address of ViPR Controller nodes running on
VMware without a vApp, or Hyper-V systems.
Before you begin
If ViPR Controller was deployed as a vApp, do not follow this procedure. Instead, refer to
Change the IP address of EMC ViPR Controller node deployed as a VMware vApp on page
54.
This operation requires the Security Administrator role in ViPR Controller.
The ViPR Controller instance must not be part of a multi-VDC or System Disaster Recovery
configuration:
l To check for a multi-VDC environment, go to Virtual > Virtual Data Centers; there
should only be one VDC listed.
l To check for a System Disaster Recovery environment, go to System > System Disaster
Recovery; there should only be an Active site listed, and no Standby sites.
Procedure
1. From the ViPR Controller UI, go to Settings > Network Configuration.
2. Leave the defaults, or enter the new IP addresses in the corresponding fields.
Do not leave any of the IP address fields empty. In each field, either keep the default
or enter the new IP address.
3. If you are changing the subnet, continue to step 4, otherwise, continue to step 5.
4. Enable the Power off nodes option.
5. Click Reconfigure.
A message appears telling you that the change was submitted, and your ViPR
Controller instance will lose connectivity.
If you are not changing your subnet, you will be able to log back into ViPR Controller 5
to 15 minutes after the configuration change has been made. Only perform steps 6
and 7 if you are changing your network adapter settings in the VM management
console.
6. Go to your VM management console (vSphere for VMware or SCVMM for Hyper-V), and
change the network settings for each virtual machine.
7. Power on the VMs from the VM management console.
You should be able to log back into the ViPR Controller 5 to 15 minutes after powering
on the VMs.
If you changed the ViPR Controller virtual IP address, remember to log in with the new
virtual IP. ViPR Controller will not redirect you from the old virtual IP to the new virtual IP.
Change the IP address of ViPR Controller node on VMware with no vApp using
vCenter
This section describes how to change a node IP address or VIP from vCenter for a ViPR
Controller virtual machine that was deployed on VMware as separate VMs, not as a vApp,
in the event that the ViPR Controller UI is unavailable for changing the IP addresses.
Before you begin
If ViPR Controller was deployed as a vApp, do not follow this procedure. Instead, refer to
Change the IP address of the EMC ViPR Controller node on VMware deployed as vApp on
page 57.
For the ViPR Controller user roles required to perform this operation see ViPR Controller
user role requirements on page 40.
You need access to the vCenter Server instance that hosts ViPR Controller.
The ViPR Controller instance must not be part of a multi-VDC or System Disaster Recovery
configuration:
l To check for a multi-VDC environment, go to Virtual > Virtual Data Centers; there
should only be one VDC listed.
l To check for a System Disaster Recovery environment, go to System > System Disaster
Recovery; there should only be an Active site listed, and no Standby sites.
Procedure
1. From the ViPR UI, shut down all VMs (System > Health > Shutdown All).
2. Open a vSphere client on the vCenter Server that hosts the ViPR Controller VMs.
3. Right-click the ViPR Controller node whose IP address you want to change and select
Power On.
4. Right-click the ViPR VM whose IP address you want to change and select Open
Console.
5. As the node powers on, select the 2nd option in the GRUB boot menu: Configuration
of a single ViPR(vipr-x.x.x.x.x) Controller node.
Be aware that you will only have a few seconds to select this option before the virtual
machine proceeds with the default boot option.
6. On the Cluster Configuration screen, select the appropriate ViPR node id and click
Next.
7. On the Network Configuration screen, enter the new IP addresses for all nodes that
need to change in the appropriate fields and click Next.
You will only need to type new IP addresses in one node, and then accept new
configuration on subsequent nodes in steps 12-13.
8. On the Deployment Confirmation screen, click Config.
9. Wait for the "Multicasting" message at the bottom of the console next to the Config
button, then power on the next ViPR Controller node.
10. As the node powers on, right-click the node and select Open Console.
11. On the next node, select the new VIP.
Note: if you changed the VIP in a previous step, you will see two similar options. One
has the old VIP, the other has the new VIP. Be sure to select the new VIP.
12. Confirm the Network Configuration settings, which are prepopulated.
13. On the Deployment Confirmation screen, click Config.
14. Wait for the "Multicasting" message at the bottom of the console next to the Config
button, then power on the next ViPR Controller node.
15. Repeat steps 10 through 14 for the remaining nodes.
16. When the "Multicasting" message has appeared for all nodes, select Reboot from the
console, for each ViPR node.
After you finish
At this point the IP address change is complete. Note that the virtual machine will fail to
boot up after an IP address change if the ViPR Controller is part of a multi-VDC (geo)
configuration. In this case you would need to revert the IP address change.
6. On the Cluster Configuration screen, select the appropriate ViPR Controller node id
and click Next.
7. On the Network Configuration screen, enter the new IP addresses for all nodes that
need to change in the appropriate fields and click Next.
You will only need to type new IP addresses in one node, and then accept new
configuration on subsequent nodes in steps 12-13.
8. On the Deployment Confirmation screen, click Config.
9. Wait for the "Multicasting" message at the bottom of the console next to the Config
button, then power on the next ViPR Controller node.
10. On the SCVMM UI, as the node powers on, right-click the node and select Connect or
View > Connect via Console.
11. On the next node, select the new VIP for the cluster configuration.
Note
If you changed the VIP in a previous step, you will see two similar options. One has
the old VIP, the other has the new VIP. Be sure to select the new VIP.
During initial deployment, default names are assigned to the nodes in ViPR
Controller, in vSphere for VMware installations, and in SCVMM for Hyper-V installations.
Note
Node ids cannot be changed. Only the node names can be changed.
Note
Conversely, if you change the ViPR Controller node names in vSphere or SCVMM,
they are not changed in ViPR Controller. If you want the node names to match, you
will need to manually change the node names in ViPR Controller to match the
changes made in vSphere or SCVMM.
l Use the following naming conventions for the node name:
n Use only characters 0-9, a-z, and '-'
n Maximum number of characters is 253
n If using FQDN for the node name:
– No labels can be empty
– Each label can have a maximum of 63 chars
n Each custom node name must be unique
n If you will be using custom short names, each custom short name must be unique
The short node name can be used for API query parameters and SSH between
nodes. The short node name is the name that comes before the first period of the
full node name; for example, the short name for myhost.test.companyname.com
is “myhost.”
n Do not use the node id of another node in a custom node name; for example, do
not use vipr1.test.companyname.com for the vipr2 node name.
l Whether you change the node name or not, if you have deployed ViPR Controller on
VMware with a vApp, and you are upgrading from ViPR Controller versions 2.3.x or
lower, then you will need to add the node_id property in VMware after upgrading to
ViPR Controller 2.4 or higher, as described in Add the Node ID property in VMware
after upgrading the ViPR Controller vApp on page 50. You do not have to perform this
action if this is a new installation and not an upgrade.
3. Choose True to enable ViPR Controller to use the short node name.
4. Click Save.
The ViPR Controller instance will automatically restart to apply the changes.
# cat nodenames-file.txt
node_1_name=mynode1.domain.com
node_2_name=mynode2.domain.com
node_3_name=mynode3.domain.com
node_4_name=mynode4.domain.com
node_5_name=mynode5.domain.com
use_short_node_name=true
Where node_n_name sets the node name for the associated ViPR Controller
node ID, for example:
l The value for node_1_name will replace the node name for vipr1
l The value for node_2_name will replace the node name for vipr2
l The value for node_3_name will replace the node name for vipr3
l The value for node_4_name will replace the node name for vipr4
l The value for node_5_name will replace the node name for vipr5
You can change the node names for as many nodes as are deployed, in either a
3 node or 5 node deployment.
2. Run the CLI command to update properties, and pass the file as an argument:
PUT https://ViPR_Controller_VIP:4443/config/properties/
<property_update>
<properties>
<entry>
<key>node_1_name</key>
<value>mynode1.domain.com</value>
</entry>
<entry>
<key>node_2_name</key>
<value>mynode2.domain.com</value>
</entry>
<entry>
<key>node_3_name</key>
<value>mynode3.domain.com</value>
</entry>
<entry>
<key>node_4_name</key>
<value>mynode4.domain.com</value>
</entry>
<entry>
<key>node_5_name</key>
<value>mynode5.domain.com</value>
</entry>
<entry>
<key> use_short_node_name </key>
<value>true</value>
</entry>
</properties>
</property_update>
Where each node name key sets the node name for the associated ViPR Controller node ID,
for example:
l The value for node_1_name will replace the node name for vipr1
l The value for node_2_name will replace the node name for vipr2
l The value for node_3_name will replace the node name for vipr3
l The value for node_4_name will replace the node name for vipr4
l The value for node_5_name will replace the node name for vipr5
You can change the node names for as many nodes as are deployed, in either a 3
node or 5 node deployment.
For more details about using the ViPR Controller REST API, see the ViPR Controller REST
API Reference .
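A hedged sketch of issuing this request with curl from a Linux host; the authentication
token header and the payload file name are assumptions based on a prior login to the
ViPR Controller REST API, so adjust them to your environment:
# send the property_update payload shown above, saved locally as nodenames.xml
curl -k -X PUT \
  -H "Content-Type: application/xml" \
  -H "X-SDS-AUTH-TOKEN: $VIPR_AUTH_TOKEN" \
  -d @nodenames.xml \
  https://<ViPR_Controller_VIP>:4443/config/properties/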
If ViPR Controller was deployed as separate VMs (that is, no vApp), the individual
VMs are visible in the VMs and Templates view.
ConnectIN is used by EMC Support to interact with ViPR Controller. ConnectIN uses the
ESRS protocol for communications. ConnectIN functionality is generic and does not
require configuration in ViPR Controller. After you register ViPR Controller, EMC engineers
will be able to establish an ESRS tunnel to your ViPR Controller instance and start an SSH
or UI session.
l Tenant Approvers, to request approval when ViPR Controller provisioning users order
a service.
l Users
n Root users can receive email notifications of failed backup uploads or
notifications of expired passwords.
n Provisioning users can receive email notifications indicating whether the Tenant
Approver approved the order the user placed.
Enabling email notifications
All email notifications require that you complete the following fields, either during initial
login or from the Settings > General Configuration > Email tab.
Option Description
SMTP server SMTP server or relay for sending email.
Port Port on which the SMTP service on the SMTP server is listening for
connections. "0" indicates the default SMTP port is used (25, or 465 if
TLS/SSL is enabled).
Encryption Use TLS/SSL for the SMTP server connections.
Authentication Authentication type for connecting the SMTP server.
Username Username for authenticating with SMTP server.
Password Password for authenticating with SMTP server.
From address From email address to send email messages (user@domain).
Once these settings have been enabled, you can continue to configure ViPR Controller for
ConnectEMC, Tenant Approver, and user email notifications.
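Before relying on these settings, you can confirm basic TCP reachability of the SMTP
server from a host on the same network; a hedged example, assuming a netcat binary is
available and using a hypothetical server name with the default port 25:
# check that the SMTP port is reachable
nc -zv smtp.example.com 25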
To receive email from ConnectEMC
Configure the ConnectEMC email from the Settings > General Configuration > ConnectEMC
tab.
To send email to Tenant Approvers
Configure the Tenant Approver email from the Tenant Settings > Approval Settings page.
To send email to root users
You must be logged in as root. Open the root drop-down menu in the right corner of the
ViPR Controller UI title bar, and select Preferences.
To send email to provisioning users
You must be logged in as the provisioning user. Open the user drop-down menu in the
right corner of the ViPR Controller UI title bar, and select Preferences.
System Disaster Recovery provides email alerts for two types of issue:
1. Network issue: the Active site has lost communication with a Standby site.
2. Degraded Standby site: a Standby site has become Degraded because its connection with the
Active site has been lost for approximately 15 minutes.
Example 1:
From: "[email protected]" <[email protected]>
Date: Wednesday, February 10, 2016 5:55 PM
To: Corporate User <[email protected]>
Subject: ATTENTION - standby1-214 network is broken
Your standby site: standby1-214's network connection to Active site has been broken.
Please note that this could be reported for the following reasons. 1) Network connection
between standby site and active site was lost. 2) Standby site is powered off. 3) Network
latency is abnormally large and could cause issues with disaster recovery operations.
Thank you, ViPR
Example 2:
From: "[email protected]" <[email protected]>
Date: Wednesday, February 10, 2016 5:55 PM
To: Corporate User <[email protected]>
Subject: ATTENTION - standby 10.247.98.73 is degraded
Your Standby site 10.247.98.73_name has been degraded by Active site at 2016-04-05
10:28:27. This could be caused by following reasons (including but not limited to):1)
Network connection between Standby site and Active site was lost.2) Majority of nodes in
Standby site instance are down.3) Active or Standby site has experienced an outage or
majority of nodes and not all nodes came back online (its controller status is
"Degraded").
Please verify network connectivity between Active site and Standby Site(s), and make
sure Active and Standby Site's controller status is "STABLE".NOTE: If Active site or
Standby site temporarily experienced and outage of majority of nodes, the Standby site
can only return to synchronized state with Active when ALL nodes of Active and Standby
site(s) are back and their controller status is "STABLE".
Thank you, ViPR
Note
System logs and alerts are site-specific. In a System Disaster Recovery environment, logs
can be viewed and collected separately on the Active site and the Standby site(s).
Each ViPR Controller service on each virtual machine logs messages at an appropriate
level (INFO, DEBUG, WARN and ERROR) and the service logs can be viewed when a
problem is suspected. However, the log messages may not provide information that can
be acted on by a System Administrator, and may need to be referred to EMC.
System alerts are a class of log message generated by the ViPR Controller system
management service aimed at System Administrators and reflect issues, such as
environment configuration and connectivity, that a System Administrator should be able
to resolve.
Download ViPR Controller System logs
The download button enables you to download a zip file containing the logs that
correspond to the current filter setting. In addition to the logs directory, the zip also
contains an info directory, containing the configuration parameters currently applied, and
an orders directory showing all orders that have been submitted.
1. From the ViPR Controller UI go to the System > Logs page.
2. Click Download and specify the content that will be packaged in the zip file
containing the logs.
A logs archive (.zip) file called logs-<date>-<time>.zip is downloaded. The
logs archive contains all log, system configuration, and order information. You can
identify the service log file for a specific node in the zip file by the log file name. The .log
files are named servicename.nodeid.nodename.log, for example:
l apisvc.vipr1.mynodename.log is a log file of the API service operations run
on the first node of a ViPR Controller, where mynodename is the custom node name
provided by the user.
If a custom node name was not provided, the node ID appears in place of the
node name, for example:
l apisvc.vipr1.vipr1.log
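To locate a particular service log inside the downloaded archive from a shell, you can list and extract entries by name. This is an illustrative sketch; the archive name and the apisvc/vipr1 values below are example values that follow the naming pattern described above.
# List entries for the API service on the first node
unzip -l logs-2016-08-19-0930.zip | grep 'apisvc.vipr1'
# Print that log and show recent warnings and errors
unzip -p logs-2016-08-19-0930.zip '*apisvc.vipr1*' | grep -E 'WARN|ERROR' | tail -n 20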
Audit Log
The System > Audit Log page displays the recorded activities performed by administrative
users for a defined period of time.
The Audit Log table displays the Time at which the activity occurred, the Service Type (for
example, vdc or tenant), the User who performed the activity, the Result of the operation,
and a Description of the operation.
Filtering the Audit Log Display
1. Select System > Audit Log. The Audit Log table defaults to displaying activities from
the current hour on the current day, with a Result Status of ALL STATUS (both
SUCCESS and FAILURE).
2. To filter the Audit Log table, click Filter.
3. In the Filter System Logs dialog box, you can specify the following filters:
l Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
l Start Time: To display the audit log for a longer time span, use the calendar
control to select the Date from which you want to see the logs, and use the Hour
control to select the hour of day from which you want to display the audit log.
l Service Type: Specify a Service Type (for example, vdc or tenant).
l User: Specify the user who performed the activity.
l Keyword: Specify a keyword term to filter the Audit Log even further.
4. Select Update to display the filtered Audit Log.
Downloading Audit Logs
1. Select System > Audit Log. The Audit Log table defaults to displaying activities from
the current hour on the current day, with a Result Status of ALL STATUS (both
SUCCESS and FAILURE).
2. To download audit logs, click Download.
3. In the Download System Logs dialog box, you can specify the following filters:
l Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
l Start Time: Use the calendar control to select the Date from which you want to see
the logs, and use the Hour control to select the hour of day from which you want
to display the audit log.
l End Time: Use the calendar control to select the Date to which you want to see the
logs, and use the Hour control to select the hour of day to which you want to
display the audit log. Check Current Time to use the current time of day.
l Service Type: Specify a Service Type (for example, vdc or tenant).
l User: Specify the user who performed the activity.
l Keyword: Specify a keyword term to filter the downloaded system logs even
further.
4. Select Download to download the system logs to your system as a zip file.
You can configure ViPR Controller to forward its logs to a remote Syslog server. All logs from all ViPR services except Nginx (for example, syssvc, apisvc,
dbsvc) are forwarded in real time to the remote Syslog server after successful
configuration. Audit logs are also forwarded.
Before you begin
Procedure
1. Configure Syslog Server to accept messages from remote hosts:
Configure Syslog server to accept messages via "TCP with encryption" (TLS) protocol:
5. On the remote Syslog server, edit the /etc/rsyslog.conf file to add these lines:
$InputTCPServerStreamDriverMode 1
$InputTCPServerStreamDriverAuthMode x509/certvalid
$InputTCPServerRun 10514 # This is the port number you enter in the ViPR Controller configuration page for the rsyslog server
Configure Syslog server to accept messages via UDP protocol:
6. On the remote Syslog server, edit the /etc/rsyslog.conf file to add these lines:
$ModLoad imudp # Module to support inbound UDP remote messages
$UDPServerAddress * # Listen on any/all inbound IP addresses (* is the default; specified here to make the configuration clear)
$UDPServerRun 514 # Listen on port 514
Configure Syslog server to accept messages via TCP protocol:
7. On the remote Syslog server, edit the /etc/rsyslog.conf file to add these lines:
$ModLoad imtcp
$InputTCPServerRun 514
2. Configure the format/template for formatting and saving logs:
8. Configure the format/template for how and where the logs are saved by
editing /etc/rsyslog.conf. The following are two examples:
a. Example 1: Configure a central location to save the logs (/var/log/syslog/TemplateLogs):
$template MyTemplate, "[ViPR] - <%pri%> - %timestamp% - %FROMHOST% - %HOSTNAME% -## %PROGRAMNAME% ##- %syslogtag% -MM- %msg%\n"
local2.* -/var/log/syslog/TemplateLogs;MyTemplate
b. Example 2: Configure locations to save the logs by service (with log locations /var/log/syslog/AuditLog.log and /var/log/syslog/syssvcLog.log):
if ($msg contains ' AuditLog ') then -/var/log/syslog/AuditLog.log
if ($msg contains ' syssvc ') then -/var/log/syslog/syssvcLog.log
9. Restart the remote Syslog server.
# service rsyslog restart
3. Set up ViPR Controller for Syslog forwarding:
10. Select System > General Configuration > Syslog Forwarder.
11. Enter values for the properties.
Option Description
Syslog Settings
Enable Remote Syslog Select True to specify and enable a remote Syslog server.
Remote Server Settings
Syslog Transport Protocol Specify the Syslog transport protocol. Select UDP, TCP, or "TCP with encryption" (TLS). For a UDP or TCP connection, you specify a Syslog Server IP or FQDN, and a Port. For a TLS connection, you specify a Syslog Server IP or FQDN, a Port, a Security Certificate, and the ViPR Controller Security Certificate.
Remote Syslog Servers & Ports
Server The IP address of the remote Syslog server. You can obtain this from the Syslog server Administrator.
Port The port number for the server. The ports on which syslog services typically accept connections are 514/10514.
Certificate This field appears only if you selected "TCP with encryption" (TLS) as the Syslog Transport Protocol. It contains the certificate file from the remote Syslog server. Paste the entire content of server.crt (including the --Start and --End strings), generated in step 1, if TLS is enabled.
Add Click this button to add additional remote Syslog servers.
12. Click the Test button to validate the Syslog server input before saving.
13. Click Save.
4. Confirm the remote Syslog server setup:
14. Confirm that the remote Syslog server is saving the ViPR Controller logs as expected.
a. Log in to the remote Syslog server.
b. Verify that the Syslog server is running and listening on the configured port and
protocol. In the example below, it is UDP on port 514:
# netstat -uanp|grep rsyslog
udp 0 0 0.0.0.0:514 0.0.0.0:* 21451/rsyslogd
udp 0 0 :::514 :::* 21451/rsyslogd
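# netstat -tanp|grep rsyslog   # assumed variant: run this check instead if you selected TCP or "TCP with encryption" rather than UDP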
c. Determine the location of the logs specified in the template section of /etc/rsyslog.conf. In the example below, it is /var/log/syslog/TemplateLogs.
$template MyTemplate, "[ViPR] - <%pri%> - %timestamp% - %FROMHOST% - %HOSTNAME% -## %PROGRAMNAME% ##- %syslogtag% -MM- %msg%\n"
local2.* -/var/log/syslog/TemplateLogs;MyTemplate
d. Go to the directory defined in /etc/rsyslog.conf and confirm that logs are
written to that directory. Note that the format of the saved files depends on the
templates defined by the Syslog server System Administrator.
Example 1:
# tail -f /var/log/syslog/TemplateLogs
…
[ViPR] - <150> - Aug 19 09:28:33 - lglw2022.lss.emc.com - vipr2 -## ...lable_versions><available_ver ##- ...lable_versions><available_ver -MM- on><new_version>vipr-3.5.0...
[ViPR] - <150> - Aug 19 09:28:33 - lglw2023.lss.emc.com - vipr3 -## vipr ##- vipr -MM- vipr3 syssvc 2016-08-19 09:28:33 INFO DrUtil:531 - get local coordinator mode from vipr3:2181
[ViPR] - <150> - Aug 19 09:28:33 - lglw2023.lss.emc.com - vipr3 -## vipr ##- vipr -MM- vipr3 syssvc 2016-08-19 09:28:33 INFO DrUtil:543 - Get current zookeeper mode leader
…
Example 2:
# tail -f /var/log/syslog/AuditLog.log
…
2016-08-19T07:56:56+00:00 vipr2 vipr vipr2 AuditLog 2016-08-19 07:56:56 INFO AuditLog:114 - audit log is config null SUCCESS "Update system property (config_version=1471593416583,network_syslog_remote_servers_ports=10.247.102.30:514) succeed."
2016-08-19T07:58:04+00:00 vipr2 vipr vipr2 AuditLog 2016-08-19 07:58:04 INFO AuditLog:114 - audit log is config null SUCCESS "Update system property (config_version=1471593484027,network_syslog_remote_servers_ports=lglw2030.lss.emc.com:514,system_syslog_transport_protocol=TCP) succeed."
…
# tail -f /var/log/syslog/syssvcLog.log
…
2016-08-19T09:37:39+00:00 vipr4 vipr vipr4 syssvc 2016-08-19 09:37:39 INFO DrUtil:543 - Get current zookeeper mode follower
2016-08-19T09:37:39+00:00 vipr4 vipr vipr4 syssvc 2016-08-19 09:37:39 INFO DrDbHealthMonitor:55 - Current node is not ZK leader. Do nothing