Docu86395 - ViPR SRM 4.1.1 Installation and Configuration Guide
Version 4.1.1
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.EMC.com
SolutionPacks
SolutionPacks are software components that support EMC and third-party
storage infrastructure components. Each SolutionPack enables you to select a
specific report in the UI. To learn more about the SolutionPacks that ViPR SRM
supports, see the following documents:
l EMC ViPR SRM Support Matrix
l EMC ViPR SRM Release Notes
l EMC ViPR SRM SolutionPack Guide
ViPR SRM vApps are distributed using Open Virtualization Format (OVF) files.
Depending on your environment's requirements, you will use the 4VM vApp OVF or the
1VM vApp OVF files.
4VM vApp OVF
Enables you to install four VMs (Frontend, Primary Backend, Additional Backend
and one Collector). A vApp VM will include an ADG directory that is used by the
autoconfiguration process of the vApp VMs. The 4VM vApp automatically
configures the Collector host to have 48 GB of memory and 8 CPUs. The
following SolutionPacks are pre-installed on the Collector host:
l Brocade FC Switch
l Cisco MDS/Nexus
l Isilon
l Physical Hosts
l Unity/VNX/VNXe
l VMAX/VMAX 2
l VMAX 3/VMAX All Flash
l VMware vCenter
l VPLEX
l XtremIO
The Collector host deployed with the 1VM is configured with 16 GB of memory and 4
CPUs.
ViPR SRM vApp VMs have properties that are used to configure the host level
networking information. If the vApp VM/folder needs to be moved from one vCenter
to another, you must use the vCenter export and import procedure. Do not use the
vCenter remove from inventory method. For additional details, refer to Guidelines for
Managing VMware vApp Solutions (h15461).
ViPR SRM vApps fully support the VM vMotion and Storage vMotion DRS functions
of vCenter.
n DNS servers
n Domain search strings. For a distributed ViPR SRM environment, the domains
for all the ViPR SRM servers must be entered for each of the ViPR SRM
servers.
Procedure
1. Navigate to the Support by Product page for ViPR SRM (https://
support.emc.com/products/34247_ViPR-SRM).
2. Click Downloads.
3. Download the ViPR SRM <version number> vApp Deployment zip file.
Each download has a checksum number. Copy the checksum number and
validate the integrity of the file using an MD5 checksum utility.
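For example, on a Linux host you can verify the download with the md5sum utility (the file name below is a placeholder for the actual download) and compare the output against the checksum shown on the download page:
md5sum <download_file>.zip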
The host being connected to the vCenter should be local to the ESX servers for
the quickest deployment. Locate the 4VM OVF deployment file on the host
running the vCenter client or place the files on the DataStore.
4. Open vCenter Client and connect to the vCenter Server that manages your
VMware environment.
Do not run vCenter Client over a VPN connection.
For the fastest deployment time, the host running vCenter Client should be
local to the ESX servers.
5. Select where in the vCenter ESX cluster/server you want to deploy the VMs for
ViPR SRM.
6. Select File > Deploy OVF Template.
7. In the Source step, locate the 4VM OVF template file.
8. Click Next.
9. In the OVF Template Details step, review the details of the loaded OVF file,
and then click Next.
10. In the End User License Agreement step, review the license agreement. Click
Accept, and then click Next.
11. In the Name and Location step:
a. Specify a new name or accept the default name for the appliance.
b. Specify an inventory location for the appliance in your VMware environment.
c. Click Next.
12. In the Resource Pool step, select the Resource Pool or the folder where the
deployment will place the ViPR SRM VMs, and click Next.
13. In the Storage step, select the destination storage (DataStore) for the virtual
machine files, and then click Next.
If there is insufficient disk space on the selected DataStore, the compatibility
window states this, and a warning appears when you click Next.
14. In the Disk Format step, select the storage space provisioning method, and
then click Next.
Option                    Description
Thin-provisioned format   On-demand expansion of available storage, used for newer data store file systems.
Thick-provisioned format  Appliance storage that is allocated immediately and reserved as a block.
15. In the Network Mapping step, select a destination network for all of the VMs,
and then click Next.
With ViPR SRM 4.1.1, the only option is to place all 4 VMs on the same ESX
server network. This is known as the simplified network deployment.
16. In the IP Address Allocation step, choose the IP allocation policy and IP
protocol to use, and then click Next.
17. In the Properties step, provide the values for each of the VMs, and then click
Next.
18. In the Ready to Complete step, review the list of properties for the appliance,
and then click Finish.
A pop-up window opens in vCenter Client showing the deployment progress.
19. After the 4VM deployment finishes, in the Deployment Completed
Successfully dialog box, click Close.
20. Before you power on the vApp, make the following changes to the VM
configurations:
l Add additional VMDK disks to expand the file system.
l Adjust the vCPU and VM Memory as specified in the ViPR SRM design.
21. Use the 1VM OVF to add any Additional Backend VMs and Collector VMs as
described in the following section.
The MySQL version included with the product is MySQL Community Server (GPL)
5.7.17.
Do not add any binary VMs into the vApp container (including any ViPR SRM binary
VMs).
The procedures enable you to install two types of software:
SolutionPacks
Software components that support EMC and third-party storage infrastructure
components. Each SolutionPack enables you to select a specific report in the UI.
To learn more about the SolutionPacks that ViPR SRM supports, see the
following documents:
l EMC ViPR SRM Support Matrix
l EMC ViPR SRM Release Notes
l EMC ViPR SRM SolutionPack Guide
ViPR SRM vApps are distributed using Open Virtualization Format (OVF) files. You will
use the 1VM vApp OVF file to scale out Additional Backends, Collectors, and
Frontends.
1VM vApp OVF
Enables you to install a single vApp VM. The options are Frontend, Primary
Backend, Additional Backend, Collector and All-in-One. You can use this option to
install additional Collectors and Additional Backend VMs to scale out the existing
ViPR SRM installation. You can add a single vApp VM (Collector or Additional
Backend) to an existing vApp container that was created with the 4VM vApp.
When you restart the vApp container, the new VMs will be automatically
configured into ViPR SRM. vApp VMs include an ADG directory that is used by the
automatic configuration process.
ViPR SRM vApp VMs have properties that are used to configure the host level
networking information. If the vApp VM/folder needs to be moved from one vCenter
to another, you must use the vCenter export and import procedure. Do not use the
vCenter remove from inventory method.
ViPR SRM vApps fully support the VM vMotion and Storage vMotion DRS functions
of vCenter.
n Netmask
n DNS servers
n Domain search strings. For a distributed ViPR SRM environment, the domains
for all the ViPR SRM servers must be entered for each of the ViPR SRM
servers.
For instructions to add remote Collectors, see Deploying Collector vApp VMs in
different datacenters.
Procedure
1. Navigate to the Support by Product page for ViPR SRM (https://
support.emc.com/products/34247_ViPR-SRM).
2. Click Downloads.
3. Download the ViPR SRM <version number> vApp Deployment zip file.
Each download has a checksum number. Copy the checksum number and
validate the integrity of the file using an MD5 checksum utility.
The host being connected to the vCenter should be local to the ESX servers for
the quickest deployment. Locate the 1VM OVF deployment file on the host
running the vCenter Client or place the files on the DataStore.
4. Open vCenter Client and connect to the vCenter Server that manages your
VMware environment.
Do not run vCenter Client over a VPN connection.
For the fastest deployment time, the host running vCenter Client should be
local to the ESX servers.
5. From the list in the vCenter tree, select the location where you want to place
ViPR SRM.
6. Select File > Deploy OVF Template.
7. In the Source step, locate the 1VM OVF template file.
8. Click Next.
9. In the OVF Template Details step, review the details of the loaded OVF file,
and then click Next.
10. In the End User License Agreement step, review the license agreement. Click
Accept, and then click Next.
11. In the Name and Location step:
a. Specify a new name or accept the default name for the appliance.
b. In the Inventory Location, select the Datacenter and sub-location where the
appliance will be deployed. Navigate through the folder levels to define the
exact location.
c. Click Next.
12. In the Deployment Configuration step, select the type of appliance VM that
you want to install.
13. In the Host/Cluster step, select the ESX server or ESX Cluster, and click Next.
14. In the Resource Pool step, there is a list of the vApps that are already installed.
Select the ViPR SRM vApp, and click Next.
15. In the Storage step, select the destination storage (DataStore) for the virtual
machine files, and then click Next.
16. In the Disk Format step, select the storage space provisioning method, and
then click Next.
Option                    Description
Thin-provisioned format   On-demand expansion of available storage, used for newer data store file systems.
Thick-provisioned format  Appliance storage that is allocated immediately and reserved as a block.
17. In the Network Mapping step, select a destination network for the VM, and
then click Next.
18. In the IP Address Allocation step, choose the Fixed IP allocation policy and IP
protocol to use, and then click Next.
19. In the Properties step, provide the values for each field, and then click Next.
20. In the Ready to Complete step, review the list of properties for the appliance,
and then click Finish.
A pop-up window that shows the deployment progress opens in vCenter Client.
21. After the 1VM deployment finishes, in the Deployment Completed
Successfully dialog box, click Close.
22. Repeat this process for each Additional Backend and Collector needed in this
datacenter.
After you finish
After all of the scale-out vApp VMs have been deployed and added to the ViPR SRM
vApp container, follow these steps to complete the configuration:
1. Edit the ViPR SRM vApp container settings.
2. Modify the start order of the vApp entities as described in Modify start order of
vApps.
3. Adjust the VM memory, CPU settings, and Additional Storage for each of the VMs
as described in your EMC ViPR SRM design specification.
4. Power off the vApp container. All of the VMs will perform a Guest Shutdown in the
reverse startup order.
5. Power on the ViPR SRM vApp container. Right-click the vApp and select Power
On.
A built-in service detects the new VMs and performs the needed configurations to add
the scale-out VM to the existing ViPR SRM installation.
Option                    Description
Thin-provisioned format   On-demand expansion of available storage, used for newer data store file systems.
Thick-provisioned format  Appliance storage that is allocated immediately and reserved as a block.
16. In the Network Mapping step, select a destination network for the VM, and
then click Next.
17. In the IP Address Allocation step, choose the Fixed IP allocation policy and the
IP protocol to use, and then click Next.
18. In the Properties step, provide the values for each field, and then click Next.
19. In the Ready to Complete step, review the list of properties for the appliance,
and then click Finish.
A pop-up window that shows the deployment progress opens in vCenter Client.
20. After the deployment finishes, in the Deployment Completed Successfully
dialog box, click Close.
21. Repeat these steps for each Collector that you need to install in a remote
datacenter.
22. Before you power on the vApp, make the following changes to the VM
configurations:
l Add additional VMDK disks to expand the file system.
l Adjust the vCPU and VM Memory as specified in the ViPR SRM design.
23. If you are adding a remote collector that is deployed in a remote datacenter to
the ViPR SRM vApp, use the steps for adding a collector that are described in
Adding Remote Collectors to the existing ViPR SRM deployment. These steps
will finish the collector configuration and add the collector to the ViPR SRM UI.
After you finish
For Collectors installed in a remote datacenter, you will need to use the ViPR SRM UI
to make some configuration changes to the Load Balancer Connectors, generic-rsc,
and generic-snmp installed on each Collector.
DataStores
The 4VM vApp deployment places the 4 VMs on a single DataStore. Migrate each VM
from this DataStore to its assigned DataStore. The required storage per ViPR SRM
VM can be found in the design provided by EMC.
For reference, the target storage sizes are as follows:
l Frontend – 320 GB
l Primary Backend – 800 GB and larger
l Additional Backends – 1 TB and larger
4. In the Shutdown Action section, select Guest Shutdown from the Operation
list.
5. Change the elapsed time to 600 seconds.
6. Click OK.
Account   Default password
ws-user   watch4net
MySQL     watch4net
If you choose to change the root user password, the new password must conform to
the following requirements:
l Be at least eight characters and no more than 40 characters
l Contain at least one numeric character
l Contain at least one uppercase and one lowercase character
l Contain at least one non-alphanumeric character such as # or !
l Cannot contain the single quote character (') because it is a delimiter for the
password string
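As an illustration only (not part of the product), the following shell sketch checks a candidate password against these rules; the sample value is hypothetical:
pw='Examp1e#pw'
if [ ${#pw} -ge 8 ] && [ ${#pw} -le 40 ] \
   && printf '%s' "$pw" | grep -q '[0-9]' \
   && printf '%s' "$pw" | grep -q '[A-Z]' \
   && printf '%s' "$pw" | grep -q '[a-z]' \
   && printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' \
   && ! printf '%s' "$pw" | grep -q "'"; then
  echo "password meets the requirements"
else
  echo "password does not meet the requirements"
fi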
Note
Databases/MySQL/Default/data/[SERVER NAME].err
Backends/Alerting-Backend/Default/logs/alerting-0-0.log
Backends/APG-Backend/Default/logs/cache-0-0.log
Collecting/Collector-Manager/Default/logs/collecting-0-0.log
Web-Servers/Tomcat/Default/logs/service.log
Tools/Task-Scheduler/Default/logs/scheduler-0-0.log
Tools/Webservice-Gateway/Default/logs/gateway-0-0.log
Note
For Windows, use .cmd instead of .sh, and \ instead of / for directories.
Procedure
1. Install the ViPR SRM software as described in Installing on UNIX or Installing on
Windows Server.
2. Configure the binary collectors:
a. Navigate to the following directory:
Linux: cd /opt/APG/bin
./launch-collector-configuration.sh -c /sw/srm-hosts
./launch-frontend-scale-collector.sh -c /sw/srm-hosts
4. Verify the Remote Collector configuration through the ViPR SRM UI.
2. Under Other Components, click the Reconfigure icon for a Load Balancer
Connector for each remote Collector. Use the following settings:
l Arbiter Configuration: send data to the Primary Backend over port 2020.
l Alerting on data collection: send data to the Primary Backend over port
2010.
l Frontend Web service: send data to the Frontend over port 58080.
3. Repeat these steps for each remote Collector's Load Balancer Connector.
4. Under Other Components, click the Reconfigure icon for a generic-snmp or
generic-rsc instance. Use the following settings:
l Data Configuration: send data to the localhost over port 2020.
l Frontend Web service: send data to the Frontend over port 58080.
l Topology Service: send data to the Primary backend.
5. In the SNMP Collector Name field, enter the FQDN of the collector host.
6. Repeat the steps for each instance of generic-snmp and generic-rsc.
Note
The following sections use Linux commands and directories as examples. For
Windows, use .cmd instead of .sh, and \ instead of / for directories.
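For example, the Linux command ./launch-collector-configuration.sh -c /sw/srm-hosts would be run on Windows as launch-collector-configuration.cmd -c C:\sw\srm-hosts (the C:\sw location is illustrative).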
Linux requirements
The environment must meet the following requirements. Make adjustments to the
host before continuing.
l /tmp folder larger than 2.5 GB
l SWAP file should be at least equal to the RAM size
l On CentOS or Red Hat-like Linux distributions, SELinux should be disabled or reconfigured
l The graphical desktop environment is not required
l On some Linux distributions:
n MySQL server requires libaio1, libaio-dev, or libaio to start
n The installation process requires unzip
n On system restart the apg services may not start
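For example, on a Red Hat-like host you could check these requirements before installing (package names differ on other distributions, where libaio1 or libaio-dev may apply instead):
df -h /tmp           # confirm more than 2.5 GB is available
free -h              # compare swap size with installed RAM
getenforce           # SELinux should report Permissive or Disabled
rpm -q libaio unzip  # confirm the libaio and unzip packages are installed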
Installing on Linux
You can install the product on supported UNIX hosts. This procedure uses the Linux
installation as an example.
Before you begin
l Ensure that you have a login with root privileges. This product should only be
installed as the root user.
l Ensure that the ports listed in the Ports Usage Matrix are enabled and not blocked
by a host or network firewall.
Refer to Updating firewall ports in Red Hat and CentOS servers.
l Download the installation file from support.emc.com, and place it in a folder
(for example /sw) on the server.
These instructions are meant to provide a high-level overview of the installation
process. Detailed instructions are provided in the following sections.
Procedure
1. Log in to the server as root.
2. Navigate to the /sw folder.
3. Change the permissions of the installer.
For example: chmod +x <file_name>.sh
4. Run the installer from the directory.
For example: ./<file_name>.sh
5. Read and accept the End User License Agreement.
6. Accept the default installation directory of /opt/APG or type another location.
7. Select the appropriate installation option for the type of host that you are
installing. Refer to Installation Options for details.
#===================
# Common Properties
#===================
hostname=lglba148.lss.emc.com
6. To restart the services, type the following commands from the /opt/APG/bin
directory of the installation:
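For example, one possible sequence, assuming the manage-modules utility described elsewhere in this guide supports restart with all as the service name:
./manage-modules.sh service restart all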
13. Restart the services, and troubleshoot any service that does not show a status
of “running.”
Definition   Description
hostname     The server's FQDN. It must match the setting of the hostname variable in the apg.properties file. For Linux servers, this should always be the hostname plus the domain name (FQDN). For Windows, this could be the hostname (shortname) or the FQDN, depending on how the Windows server name resolution is configured (DNS, Active Directory DNS, or WINS/NetBIOS). A WINS resolution will use the hostname (shortname) in uppercase.
OS           linux-x64 or windows-x64
For example:
frontend=lglba148.lss.emc.com:linux-x64
primarybackend=lglba224.lss.emc.com:linux-x64
additionalbackend_1=lglac142.lss.emc.com:linux-x64
collector_1=lglba150.lss.emc.com:linux-x64
This answers file can be modified later to add any new Collectors and Additional
Backends. When the SRM-Conf-Tools scripts run, they distinguish new servers from
existing servers and make the necessary configuration changes.
Because SRM-Conf-Tools and the answers file can be used for configuring additional
servers at a later date, EMC recommends storing the files in a directory such as /sw
at the root of the file system instead of in the /tmp directory, because the /tmp
directory could be deleted at any time.
./launch-primarybackend-configuration.sh -c /sw/srm-hosts
4. Restart the services and verify that they are running. Troubleshoot any service
that does not show a status of “running.”
./launch-additionalbackend-configuration.sh -c /sw/srm-hosts
4. Restart the services and verify that they are running. Troubleshoot any service
that does not show a status of “running.”
./launch-collector-configuration.sh -c /sw/srm-hosts
4. Restart the services and verify that they are running. Troubleshoot any service
that does not show a status of “running.”
./launch-frontend-configuration.sh -c /sw/srm-hosts
5. Verify that the ViPR SRM management resources have been created:
/opt/APG/bin/manage-resources.sh list
The following output shows the management resources based on the example
configuration used in the document:
"dba/APG-DB",
"dba/APG-DB-lglac142-1",
"dba/APG-DB-lglac142-2",
"dba/APG-DB-lglac142-3",
"dba/APG-DB-lglac142-4",
"dba/FLOW-COMPLIANCE-BREACH",
"dba/FLOW-COMPLIANCE-CONFIGCHANGE",
"dba/FLOW-COMPLIANCE-POLICY",
"dba/FLOW-COMPLIANCE-RULE",
"dba/FLOW-EVENTS-GENERIC",
"dba/FLOW-EVENTS-GENERICARCH",
"dba/FLOW-OUTAGE-DB",
"dba/FLOW-PROSPHERE-ARCH",
"dba/FLOW-PROSPHERE-LIVE",
"dba/FLOW-RPE2-ARCH",
"dba/FLOW-RPE2-LIVE",
"dba/FLOW-SOM-ARCH",
"dba/FLOW-SOM-LIVE",
"dba/FLOW-UCS-LIVE",
"dba/FLOW-VIPR-EVENTS",
"dba/FLOW-VMWARE-EVENTS",
"dba/FLOW-VMWARE-TASKS",
"dba/FLOW-VNX-LIVE",
"dba/FLOW-WHATIF-SCENARIOS",
"mgmt/APG-DB",
"mgmt/APG-DB-lglac142-1",
"mgmt/APG-DB-lglac142-2",
"mgmt/APG-DB-lglac142-3",
"mgmt/APG-DB-lglac142-4",
"rest/EVENTS",
"rest/METRICS"
Results
At this point, the basic ViPR SRM configuration is complete and you can log in to the
UI. Navigate to Centralized Management > Physical Overview to see the four
servers that you just configured.
frontend=lglba148.lss.emc.com:linux-x64
primarybackend=lglba224.lss.emc.com:linux-x64
additionalbackend_1=lglac142.lss.emc.com:linux-x64
additionalbackend_2=lglac143.lss.emc.com:linux-x64
collector_1=lppd149.lss.emc.com:linux-x64
4. Copy the modified answers file (srm-hosts) to the ViPR SRM Frontend,
Primary Backend, and Additional Backend servers. (The modified file is not
needed on the existing Collector servers.)
5. Navigate to /opt/APG/bin.
6. Run the following script to configure the new Additional Backend host:
./launch-additionalbackend-configuration.sh -c /sw/srm-hosts
7. Restart the services and verify that they are running. Troubleshoot any service
that does not show a status of “running.”
./launch-additionalbackend-scale-additionalbackend.sh -c /sw/srm-hosts
./launch-primarybackend-scale-additionalbackend.sh -c /sw/srm-hosts
./launch-frontend-scale-additionalbackend.sh -c /sw/srm-hosts
11. List the Management Resources to verify that the Additional Backends hosts
were added:
./manage-resources.sh list
In this example configuration, the following entries would be added to the list of
resources:
"dba/APG-DB-lglba250-1",
"dba/APG-DB-lglba250-2",
"dba/APG-DB-lglba250-3",
"dba/APG-DB-lglba250-4",
"mgmt/APG-DB-lglba250-1",
"mgmt/APG-DB-lglba250-2",
"mgmt/APG-DB-lglba250-3",
"mgmt/APG-DB-lglba250-4",
12. Restart all of the services on the Additional Backend servers, Primary Backend
server, and Frontend Server.
13. Log in to ViPR SRM and confirm that the new Additional Backend is in the UI.
Results
The Additional Backend hosts are added to the existing ViPR SRM configuration.
Navigate to Centralized Management > Physical Overview to see the five servers
that you have configured.
Note
For Windows, convert .sh to .cmd for the commands and / to \ for directories.
Procedure
1. The base ViPR SRM software and OS modifications should already be
completed as described in Installing on UNIX or Installing on Windows Server.
2. Navigate to .../APG/bin.
3. Modify the SRM-Conf-Tools answer file (srm-hosts) as described in Creating
the SRM-Conf-Tools answers file.
4. Add the new collector to the srm-hosts file.
In the example below, collector_2 is the new Collector.
frontend=lglba148.lss.emc.com:linux-x64
primarybackend=lglba224.lss.emc.com:linux-x64
additionalbackend_1=lglac142.lss.emc.com:linux-x64
additionalbackend_2=lglac143.lss.emc.com:linux-x64
collector_1=lppd149.lss.emc.com:linux-x64
collector_2=lglba150.lss.emc.com:linux-x64
5. Copy the modified answers file (srm-hosts) to the ViPR SRM Frontend. (This
new file is not needed on the existing ViPR SRM servers.)
6. Navigate to .../APG/bin.
7. Run the following script to configure the new Collector host:
./launch-collector-configuration.sh -c /sw/srm-hosts
8. Restart the services and verify that they are running. Troubleshoot any service
that does not show a status of “running.”
./launch-frontend-scale-collector.sh -c /sw/srm-hosts
Results
The Collector hosts are added to the existing ViPR SRM configuration. Navigate to
Centralized Management > Physical Overview to see the six servers that you have
configured.
/opt/APG/bin/mysql-client.sh
2. When prompted, select root as the username, mysql for the database, and
watch4net as the password.
3. Run the following query:
The following table is an example of the configuration you should see on an Additional
Backend host:
Option    Description
Linux     /opt/APG/Custom/WebApps-Resources/Default/actions/event-mgmt/linux/conf
Windows   Program Files\APG\Custom\WebApps-Resources\Default\actions\event-mgmt\windows\conf.cmd
Databases/MySQL/Default/data/[SERVER NAME].err
Backends/Alerting-Backend/Default/logs/alerting-0-0.log
Backends/APG-Backend/Default/logs/cache-0-0.log
Collecting/Collector-Manager/Default/logs/collecting-0-0.log
Web-Servers/Tomcat/Default/logs/service.log
Tools/Task-Scheduler/Default/logs/scheduler-0-0.log
Tools/Webservice-Gateway/Default/logs/gateway-0-0.log
Databases\MySQL\Default\data\[SERVER NAME].err
Backends\Alerting-Backend\Default\logs\alerting-0-0.log
Backends\APG-Backend\Default\logs\cache-0-0.log
Collecting\Collector-Manager\Default\logs\collecting-0-0.log
Web-Servers\Tomcat\Default\logs\service.log
Tools\Task-Scheduler\Default\logs\scheduler-0-0.log
Tools\Webservice-Gateway\Default\logs\gateway-0-0.log
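For example, on a Linux host you can follow the Collector Manager log while troubleshooting (the path assumes the default /opt/APG installation directory):
tail -f /opt/APG/Collecting/Collector-Manager/Default/logs/collecting-0-0.log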
Example:
https://myHost.emc.com/centralized-management
3. Log in.
a. Default username is admin.
b. Default password is changeme.
c. Click Sign In.
After you finish
You are automatically logged off after four hours.
The icon indicates that connectivity to the server has been established.
8. Click Save.
Note
By default, this task is set to run once every day at 12 AM. You can customize
the task schedule by editing the configuration file.
If a major update of the EMC M&R platform is detected, the Status tab includes a
Major Update Status section that describes the version that is available, provides a
link to the upgrade documentation, and includes a Start Download button.
l Overview
l Stopping EMC M&R platform services on a UNIX server
l Uninstalling the product from a UNIX server
l Stopping EMC M&R platform services on a Windows server
l Uninstalling the product from a Windows server
l Uninstalling a SolutionPack
Uninstallation
Overview
You can uninstall a SolutionPack and uninstall EMC M&R platform from a UNIX or
Windows server.
Stop the EMC M&R platform services before uninstalling EMC M&R platform.
Note
The list of services varies depending upon which type of installation was performed,
for example, vApp, collector, backend, frontend, and so forth.
Procedure
l Type manage-modules.sh service stop <service_name> from the bin
directory of the installation to stop a specific EMC M&R platform service.
This example shows how to stop all EMC M&R platform services:
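A typical form, assuming that all is accepted as the service name to address every service, is:
./manage-modules.sh service stop all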
Note
The list of services varies depending upon which type of installation was performed,
for example, vApp, collector, backend, frontend, and so forth.
Procedure
1. Type manage-modules.cmd service stop <service_name> from the bin
directory of the installation to stop a specific EMC M&R platform service.
This example shows how to stop all EMC M&R platform services:
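A typical form, assuming that all is accepted as the service name to address every service, is:
manage-modules.cmd service stop all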
Uninstalling a SolutionPack
If you no longer want to view the reports of a certain SolutionPack, you can uninstall
that SolutionPack from the server.
Procedure
1. Log in with administrator credentials for EMC M&R platform and select
Administration.
2. Select Centralized Management in the Administration tree.
3. Select SolutionPacks in the tree.
4. Select the SolutionPack that you want to uninstall in the Installed
SolutionPacks screen.
5. In the Properties area, click the Trashcan icon for each instance of the
SolutionPackBlock, and then click Remove.
Field                                     Description
Socket Collector port                     The TCP port on the Primary Backend on which the Arbiter accepts the remote connections from all LBCs.
APG Backend hostname or IP address        The hostname of the server where the apg database and its backend service are running. In this deployment, the possible options are backend and backend2. Do not use "localhost" for the default apg on the primary backend.
APG Backend data port                     Each apg database has a backend, and each backend has its own TCP port to receive raw data. The port must be unique only within the server. Refer to Configuring the Additional Backend. In this installation, the ports are 2000, 2100, 2200, 2300, and 2400.
Backend database hostname or IP address   The hostname where the MySQL database is running. By default, it is the same as the APG Backend hostname.
Backend database password                 The default password for the MySQL user is "watch4net".
l Unattended installation
l Unattended installation arguments for Linux
l Unattended installation arguments for Windows
Unattended Installation
Unattended installation
EMC M&R 6.7 and higher supports fully unattended installations, which are
particularly useful for installing the software on remote systems via scripts. This
appendix describes the installation of the platform software, but it does not include
the installation and configuration of modules or SolutionPacks.
Example 1 To override the default installation and set the installation type to collector:
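One possible command line, assuming these switches are passed directly on the installer command line (the installer file name and target directory shown here are illustrative, not taken from this guide):
<file_name>.exe ACCEPTEULA=Yes INSTALL-TYPE=collector /D=C:\APG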
l ACCEPTEULA=Yes
Accepts the EULA. By providing this switch, you are confirming that you have read
and accepted the EULA. The installer will refuse to run in unattended mode if you
have not accepted the EULA.
l INSTALL-TYPE=installation_type
Overrides the default installation type. The available options are: default, minimal,
collector, backend, and frontend. The command only considers the first letter, so
INSTALL-TYPE=C is equivalent to INSTALL-TYPE=collector. The value of
the parameter is not case sensitive.
l /D
Sets the default installation directory. This must be the last parameter. It cannot
contain any quotes (even if the path contains spaces), and only absolute paths are
supported.