E-XMS 5.7.1 Software Install and Upgrade Guide Rev C
Release 5.7.1
July 2017
Revision C
Copyright
Notice
Information in this guide is subject to change without notice. Companies, names, and data used in
examples herein are fictitious unless otherwise noted. No part of this guide may be reproduced or
transmitted in any form by means electronic or mechanical, for any purpose, without express written
permission of Empirix Inc.
Trademarks
The following are trademarks and service marks, or registered trademarks and service marks, of Empirix Inc. or its subsidiaries, in the U.S. and other jurisdictions: Empirix, the Empirix logo, OneSight, Hammer On-Call, Voice Watch, IntelliSight, Hammer xCentrix, Hammer XMS, Hammer XMS Active, Hammer G5, Hammer Call Analyzer, Hammer NetEm, Hammer TDM, Hammer FX-TDM and Hammer DEX are trademarks or registered trademarks of Empirix Inc. in the U.S. and other jurisdictions.
All other names are used for identification purposes only and are trademarks or registered trademarks of their respective companies.
Java and all Java based trademarks and logos are trademarks of Sun Microsystems, Inc. in the U.S.
and other countries.
Apache Tomcat
Licensed under the Apache License, Version 2.0; you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is
distributed on an “AS IS” basis, without warranties or conditions of any kind, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
Wireshark
Wireshark and the “fin” logo are registered trademarks of the Wireshark Foundation.
WebRTC
Copyright (c) 2011, The WebRTC project authors. All rights reserved.
OpenAM
OpenAM product is copyright © 2010-2014 by ForgeRock.
Chapter 3 Upgrading Vertica on the ROS and DB Nodes
Prerequisite Steps..................................................................................... 3-1
Backing Up and Restoring the Vertica Database...................................... 3-2
Backing Up the Vertica Database (Full)................................................ 3-2
Backing Up User-Defined Directories and Files ................................... 3-3
Restoring the Vertica Database (Full)................................................... 3-3
Backing Up and Restoring the ROS Configuration and MySQL ............... 3-4
Backing Up the ROS Configuration and MySQL .................................. 3-4
Restoring the ROS Configuration and MySQL ..................................... 3-6
Configuring NFS for Backup/Restore........................................................ 3-6
NFS Server (SLES 11.3/SLES 11.0) .................................................... 3-6
NFS Client ............................................................................................ 3-9
Upgrading Vertica Database 6.1, 7.0 or 7.1 to 7.2.................................. 3-11
Post-Migration Configuration .................................................................. 3-12
Post-installation Check ........................................................................... 3-12
Chapter 7 Configuration for Using HTML5 DiagnostiX When No DMS is Present
Required Configuration for Voice Protocols on NWV or Standalone ROS when using HTML5 DiagnostiX ............... 7-1
Appendix C Troubleshooting
Common Errors and Resolutions ............................................................. C-1
Install Missing Packages ...................................................................... C-1
Resolve Installation Package Errors .................................................... C-2
Remove the MySQL Lock During an Upgrade ..................................... C-3
Update the Kernel Package for MSP Probes Based on Red Hat 6.8... C-5
Vertica Upgrade Script Errors 6.1.3 to 7.2.3 ........................................ C-5
CHAPTER 1 Overview of the Installation and Upgrade Process
1. Autoinstallation can be performed remotely via IPMI or on-site with physical media such as a DVD burned
from the autoinstaller ISO image. If an internal DVD drive is not present on the server, an external USB
DVD drive can be used.
2. The 5023 requires BIOS version 11.0. Run utility check_5023_bios.sh to confirm whether the server is
ready to be upgraded to SLES 11.3 with the corresponding autoinstaller. To upgrade the BIOS, on-site
presence is needed to boot from a USB stick and to run the BIOS upgrade.
3. The autoinstaller must be used because third-party RAID drivers need to be loaded for the hard drives to
be recognized correctly as part of a RAID.
4. Third-party RAID drivers are not available for the 5023 server for SLES 11.4.
5. Third-party RAID drivers have not been qualified for the 5423 server for SLES 11.4.
6. The autoinstaller must be used to set up the partitions correctly.
7. To upgrade to SLES 11.4, a separate procedure must be followed to perform a distribution upgrade from
SLES 11.3 to 11.4 (refer to Appendix D Installing and Upgrading SLES OS).
8. Ivy Bridge servers have already been installed with SLES 11.3; however, some servers have the kernel (3.0.76-0.11) and packages provided by the original SLES 11.3 installer instead of the newer kernel (3.0.101-0.47.67) and packages that have been patched to fix various security vulnerabilities and the leap second issue.
First-Time Installation Prerequisites
You have installed the OS (SLES or Red Hat) on the target E-XMS
servers described in the Empirix Operating System Installation and
Configuration Guide.
You must be running SLES 11.4 after the installation. You can upgrade the OS either before or after the E-XMS software installation, depending on whether you have the SLES 11.4 ISO image available to use for the installation; if not, upgrade from SLES 11.3 to 11.4 after the installation. See Appendix D, Installing and Upgrading SLES OS.
NOTE: After the OS installation, some common tasks that may need
to be performed include configuring IP addresses, changing the host-
name, and setting the date, time, and time zone.
IMPORTANT: If you are using Red Hat, make sure the packages ‘readline.i686’ and nscd are not installed on your system. To uninstall the packages, enter the following commands:
# yum remove readline.i686
# yum remove nscd
You have the fully-qualified domain names, IP addresses, and root
account passwords for all of the NWV and ROS nodes. See “Configure
Fully Qualified Domain Names (FQDN) on NWV and ROS Servers,”
page 1-8.
The IP addresses and fully qualified domain names for the E-XMS
NWV and ROS servers are defined in the /etc/hosts file on each node
as described in “Configure Fully Qualified Domain Names (FQDN) on
NWV and ROS Servers,” page 1-8.
You have created an E-XMS YUM software repository where you can download and copy the required software bundles to the E-XMS servers. See “Create an E-XMS YUM Software Repository,” page 1-9.
You have created a third-party repository where you can download
and copy required software applications to the E-XMS servers. Refer
to the following sections:
“Create a Third-Party Software Repository,” page 1-10.
“Add SLES or Red Hat Software Repository Before Installation or
Upgrade,” page 1-11
“Installing Wireshark on NWV and ROS,” page 1-12
TCP ports must be allowed through any firewalls in the network. See
Appendix A, E-XMS Network TCP Port Assignments.
Upgrade Prerequisites
If you are performing an upgrade to the E-XMS software, make sure the
following requirements are met:
You have the fully-qualified domain names, IP addresses, and root
account passwords for the NWV and ROS servers.
The IP addresses and fully qualified domain names for all the E-XMS
NWV and ROS servers are defined in the /etc/hosts file on each node
as described in “Configure Fully Qualified Domain Names (FQDN) on
NWV and ROS Servers,” page 1-8.
You have installed Wireshark for 5.7.1.
The E-XMS system is working properly. In particular, the Redundant
NWV and HA-ROS replication are working with no errors on either
side.
ext3 filesystem for Vertica must be installed on Database and ROS
servers
SLES 11.3 or 11.4 on all SLES-based servers. If on an earlier version, you'll need to reimage to SLES 11.3 first (see Appendix D, Installing and Upgrading SLES OS).
Planning Your Upgrade
NOTE: If for some reason one of your 5.5 ROS servers still has an XFS file system, a reimage to SLES 11.3 using the autoinstaller (to get to an ext3 file system) is required for the Vertica upgrade.
Step | Task | E-XMS System Status | Approximate Duration
1 | Prerequisites: get necessary files in place | Up | Depends on site
2 | Upgrade Vertica 7.2.3 on the HA-ROS database cluster | | 30 mins
3 | Upgrade Vertica 7.2.3 on DMS database cluster | | 30 mins
4 | Upgrade from E-XMS 5.5 to 5.7.1 | | 15 to 30 mins per server
5 | Post Upgrade checks | Up | 30 mins depending on the size of deployment
6 | Upgrade OS to SLES 11.4 | Down | 15 mins
7 | Post Reimage checks | Up | 30 mins depending on the size of deployment
Step | Task | E-XMS System Status | Approximate Duration
1 | Prerequisites: get necessary files in place | Up | Depends on site
2 | Backup Vertica and MySQL on ROS (see “Backing Up and Restoring the Vertica Database,” page 3-2) | Optional | 5 to 22 hrs depending on the amount of Vertica data to be preserved; see “Vertica Backup and Restore Time Estimates,” page 1-7
3 | Upgrade Vertica on the ROS | Down | 30 mins
4 | Migrate to E-XMS 5.7.1 | Down | 15 to 30 mins per server
5 | Post Upgrade checks | Up | 30 mins depending on the size of deployment
6 | Upgrade SLES 11.3 to 11.4 | Down | 15 mins per server
7 | Post Reimage checks | Up | 30 mins depending on the size of deployment
Step | Task | E-XMS System Status | Approximate Duration
1 | Prerequisites: get necessary files in place | Up | Depends on site
2 | Configure NFS Client for Backup | | 10 mins
3 | Full backup of the Vertica database (see “Backing Up and Restoring the Vertica Database,” page 3-2) | Up | 5 to 22 hrs depending on the amount of Vertica data to be preserved; see the Vertica Backup and Restore estimate calculations in the next section
4 | Shutdown the ROS processes | Down | 5 mins
5 | Incremental Backup of Vertica and Full Backup of MySQL and E-XMS Configuration (see “Backing Up and Restoring the Vertica Database,” page 3-2) | Down | 1-3 hours depending on days since the Full backup; see the Vertica Incremental Backup estimate calculation below
6 | Install SLES 11.3, configure with YaST (see Appendix D, Installing and Upgrading SLES OS) | Down | 45 mins
7 | Configure NFS Client for Restore | | 10 mins
8 | Install E-XMS 5.1 to 5.4 | Down | 30 mins
9 | Restore the MySQL database and configuration backups (see “Backing Up and Restoring the Vertica Database,” page 3-2) | Down | 2 mins
10 | Start E-XMS | Down/Up | 5 mins
11 | Full restore of Vertica database (see “Backing Up and Restoring the Vertica Database,” page 3-2) | Up | 6.5 to 23.5 hrs depending on the amount of Vertica data to be preserved; see the Vertica Backup and Restore estimate calculations in the next section
12 | Post Restore checks | Up | 30 mins depending on the size of deployment
13 | Shutdown E-XMS 5.1 to 5.4 | Down | 5 mins
14 | Upgrade Vertica to 7.2.3 | Down | 15 to 45 mins
15 | Upgrade/Migrate to E-XMS 5.7.1 | Down | 15 to 30 mins depending on the server type
16 | Start E-XMS 5.7.1 | Down/Up | 5 mins
17 | Post Upgrade checks | Up | 30 mins depending on deployment
18 | Upgrade NWV Web Portal | | 5 mins
19 | Upgrade SLES 11.3 to 11.4 | Down | 15 mins per server
20 | Post SLES 11.4 Upgrade Checks | Up | 15 to 30 mins
All time calculations in this section assume the network is fast enough to support a transfer rate of 100 Mbps, although the limiting factor is really the rate at which data can be pulled from Vertica. If the actual throughput achievable between servers is less than this, adjust the calculations accordingly. Similarly, if the available bandwidth is greater than 100 Mbps, the calculations can be adjusted accordingly, but keep in mind that the rate at which the data can be pulled from or pushed to Vertica could be the limiting factor.
It is recommended to test a full backup to determine the actual time
required based on the conditions at a particular deployment.
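As a rough worked example (assuming the 100 Mbps figure above is sustained end to end): 100 Mbps is about 12.5 MB/s, or roughly 45 GB per hour, so transferring 1 TB of backup data takes on the order of 22 hours, which is consistent with the 5 to 22 hour range used in the planning tables above.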
NOTE: For information on backing up the Vertica database, see “Backing
Up and Restoring the Vertica Database,” page 3-2.
Create an E-XMS YUM Software Repository
This version of E-XMS supports the use of a local software repository. The installation tarball includes a helper script called "install-repo.sh". The script automatically updates the repository configuration in SLES or Red Hat to include a local E-XMS repository.
1. Copy the E-XMS software bundle to the target E-XMS node. Save it in
a temporary location, for example:
/home/E-XMS_SW
2. If there is an existing "repo" directory present in share, remove it so
that there are no conflicts with files and packages:
# rm -rf /home/hammer/share/<version>
3. Create a directory where the software repository can reside and tarballs can be extracted:
# mkdir -p /home/hammer/share/<version>
4. Extract the E-XMS tarball <exms-version-target.gz> from the install repo directory to the directory you created in step 3:
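A minimal sketch of the extraction step, assuming the bundle is a gzipped tarball whose name follows the <exms-version-target.gz> pattern above:
# tar -xzvf /home/E-XMS_SW/<exms-version-target.gz> -C /home/hammer/share/<version>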
b. Click the Version 5.7 folder and refer to the README.txt file for
instructions on downloading the wireshark rpm.
c. Compare the rpm checksum with the contents of md5 file. Both
files need to be in the same directory when using the “-c” option of
md5sum:
$ md5sum -c hxms-wireshark-3.0.0-5.x86_64.md5
hxms-wireshark-3.0.0-5.x86_64.rpm: OK
Alternatively, the contents of the .md5 file can be displayed and
manually compared against the output of the md5sum command:
$ md5sum hxms-wireshark-3.0.0-5.x86_64.rpm
cf01c6195a1c278ca4cfad2cf3592138
hxms-wireshark-3.0.0-5.x86_64.rpm
$ cat hxms-wireshark-3.0.0-5.x86_64.md5
cf01c6195a1c278ca4cfad2cf3592138
hxms-wireshark-3.0.0-5.x86_64.rpm
d. Install/upgrade the new Wireshark library rpm on the NWV and
ROS before installing or upgrading to E-XMS 5.7.1.
$ rpm -Uvh --force hxms-wireshark-3.0.0-5.x86_64.rpm
CHAPTER 2 Installing E-XMS Software
This chapter describes how to install E-XMS software for the following
sample configurations:
“Configuration 1: Standalone ROS with MSP Probes,” page 2-2
“Configuration 2: Fully-Distributed DMS,” page 2-3
“Configuration 3: Redundant NWV with a ROS and MSP Probes,”
page 2-4
“Configuration 4: NWV with HA-ROS Application Nodes and ROS DB
Cluster,” page 2-5
Each E-XMS server and component installation consists of:
Installing E-XMS rpm packages and their dependencies (other
required rpm packages for the component to be able to run).
Running a setup script to configure the component. For example,
setup-ros.sh creates the MySQL and Vertica schemas and the Apache
Tomcat/Tomee and OpenAM SSO configurations.
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
Prerequisites
Create a Persistent SSH Session
Create a persistent SSH session to monitor the progress of the installation procedure. This creates a virtual session on the system that allows you to reconnect to the session if you lose connectivity during the installation.
1. Log in to the server using an SSH client as ‘hammer’.
2. Switch to the root user:
su -
3. Open the screen session:
screen /bin/bash -s <screen session name>
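If the connection drops, the named session keeps running on the server. A hedged example of listing and reattaching to it with standard screen options:
# screen -ls
# screen -r <screen session name>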
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
/home/hammer/hmcommon/license.xml
2. On both the primary and secondary NWV servers, restart hammer services:
# /etc/init.d/hmmonitor restart
3. On the secondary NWV server restart Tomcat and TomEE services:
# /etc/init.d/tomee stop
# /etc/init.d/tomcat7 stop
# /etc/init.d/tomcat7 start
# /etc/init.d/tomee start
The NWV server installation is complete. Now install the next component.
Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
The primary NWV server must be installed and running. After the secondary NWV server is installed and running, rerun the setup-nwv.sh script on the primary NWV to enable MySQL and OpenAM replication.
# /home/hammer/hmcommon/bin/setup-nwv.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the system likely will not perform acceptably and Empirix cannot stand behind the performance of the system. Please contact Empirix Technical Support for help to resolve this important warning.
# /home/hammer/hmcommon/bin/setup-nwv.sh
If the system does not meet the minimum requirements the following
warning displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the system likely will not perform acceptably and Empirix cannot stand behind the performance of the system. Please contact Empirix Technical Support for help to resolve this important warning.
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
3. In the DMS RDB or ROS DB node name field, just press return.
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
# /home/hammer/hmcommon/bin/setup-dms.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the system likely will not perform acceptably and Empirix cannot stand behind the performance of the system. Please contact Empirix Technical Support for help to resolve this important warning.
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
# /home/hammer/hmcommon/bin/setup-probe.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the system likely will not perform acceptably and Empirix cannot stand behind the performance of the system. Please contact Empirix Technical Support for help to resolve this important warning.
/home/hammer/hmcommon/license.xml
2. Once the probe installation is completed, hmmonitor is restarted.
Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
/home/hammer/hmcommon/bin/setup-lsskpigen.sh
4. The interactive configuration script prompts you to configure the following parameters:
a. SMM System IP - the IP address of the SMM server which is used
to provide configuration
b. KPIGen System ID - a numeric identifier unique to this KPIGen in
the system. This must match the KPIGen System ID set in the
DMS Administration console for this KPIGen system.
6. After NTP is configured, the script will complete the setup of the LSS-KPIGen. The license.xml file must be available in /home/hammer/hmcommon.
Installing RAN Vision LSS-RANMon on an MSP Probe
1. LSS-RANMon is installed on an E-XMS Probe. The exms-probe software package is required by LSS-RANMon and must be installed prior to installation of LSS-RANMon.
2. After installing the OS (SLES or Red Hat) and adding the E-XMS YUM
repository, start the RAN Vision LSS-KPIGen installation:
SLES
# zypper install ranvision-lssranmon
Red Hat
# yum install ranvision-lssranmon
NOTE: All dependencies and conflicts are automatically checked
during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.
3. RANMon makes use of existing configuration settings from the E-XMS
probe software and therefore requires no additional setup.
Installing RAN Vision TraceReader on an MSP Probe
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
# /home/hammer/hmcommon/bin/setup-ros.sh
If the system does not meet the minimum requirements the following
warning displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the system likely will not perform acceptably and Empirix cannot stand behind the performance of the system. Please contact Empirix Technical Support for help to resolve this important warning.
/home/hammer/hmcommon/license.xml
2. Restart all processes on the active HA-ROS (not the standby HA-ROS) in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
Installing High Availability MSP 6000 Probes for the Voice Engine
High availability functionality for redundant MSP 6000 probes is only supported on the voice engine. This procedure provides the steps to install and configure high availability redundant MSP 6000 probes.
NOTE: This procedure can also be used for MSP 1500 probes.
Prerequisites
The following hardware is required:
2 MSP 6000 Probes
1 ROS; 1 Virtual IP address
# /home/hammer/hmcommon/bin/setup-probe.sh
IMPORTANT: Both HA MSP 6000 probes must have the same probe ID.
# /etc/init.d/hmmonitor stop
3. Make sure the following files are present on both MSP 6000 probes in
the directory /home/hammer/hmcommon/etc:
vip-post-down.sh
vip-post-up.sh
vip-pre-down.sh
vip-pre-up.sh
# /home/hammer/hmcommon/etc/ucarp-config
b. Configure the following parameters:
hm_ha_interface:eth0
hm_ha_srcip:X.X.X.X
hm_ha_peer_ip:Y.Y.Y.Y
hm_ha_virtualip:Z.Z.Z.Z
hm_ha_vhid:NNN
hm_ha_password:secret
Where:
X.X.X.X: local ip address of eth0
Y.Y.Y.Y: ip address of the other probe in the pair
Z.Z.Z.Z: virtual ip shared between the 2 probes
(this is the address to be used in the ROS)
NNN: virtual id, it must be unique in the network
(typically, this is the probeID)
5. Set up a pre-shared key between both MSP 6000 probes:
cd /home/hammer/hmcommon/bin
./single_ssh_key.sh hammer Y.Y.Y.Y
6. Remove hmmonitor services:
insserv -r hmmonitor
NOTE: The command to remove hmmonitor services is not executed
on RHEL 6.8.
7. Enable UCARP services to manage requests for the virtual ip:
chkconfig ucarpd on
8. Start UCARP services on both MSP 6000 probes:
# /etc/init.d/hmmonitor start
/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:
a. hmmonitor
b. Tomcat7
c. TomEE
NOTE: Tomcat7 is not used for the ROS when under a NWV server
but must be running.
Failure to restart processes in this order may cause licensed features to not work properly.
The HA-ROS server installation is complete. Now install the next component.
CHAPTER 3 Upgrading Vertica on the ROS and DB Nodes
Prerequisite Steps
1. Make sure all the Vertica nodes are up and running.
# mkdir /home/hammer/share/migration-utilities
# cd /home/hammer/share/migration-utilities
5. The NFS server should have been previously set up. Now configure
the NFS client on each NWV server. See “Configuring NFS for
Backup/Restore,” page 3-6.
6. Purge Vertica data older than <num-days-of-data-to-retain> days and drop excess projections before backing up the Vertica Database.
Backing Up and Restoring the Vertica Database
# ./xmsDropProjections.sh
7. Start the full backup of the Vertica Database and let it run in the background until it completes; this could take from 5 to 22 hours, depending on the amount of data. See “Planning Your Upgrade,” page 1-4.
# ./backup_vert.sh 1
/home/hammer/pso
Each file or directory to be backed up must appear on its own line in the
config file. The absolute path must be used for each entry (i.e. start with a
'/').
The config file will be automatically backed up.
When the script "restore_mystuff.sh" is called, the config file (if one is
specified) will be restored along with all of the directories and files that
were previously backed up.
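As an illustration only, a user-defined backup config file simply lists one absolute path per line. In the sketch below the first entry is the /home/hammer/pso path shown above, the second path is taken from the configuration files listed later in this chapter, and <user-backup-config-file> is a hypothetical placeholder for the actual config file name:
# cat > <user-backup-config-file> << 'EOF'
/home/hammer/pso
/etc/sysconfig/network/routes
EOF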
speed of the network. To determine how much time the full backup should
take, see “Planning Your Upgrade,” page 1-4. The region will be fully
operational (except for access to all historical data, which will be gradually
restored over time) during the restore.
1. Start the restore process for the Vertica Database and let it run in the
background until completion. This will restore the full and incremental
backups that were previously performed.
# cd /home/hammer/share/migration-utilities
# ./restore_vert.sh
(vsql output omitted: a current_statement column, 10 rows)
Backing Up and Restoring the ROS Configuration and MySQL
4. Backup the E-XMS configuration files and MySQL tables.
# ./backup_conf_mysql.sh
f. /etc/sysconfig/network/routes
# ./restore_conf_mysql.sh
Configuring NFS for Backup/Restore
Throughput available between the server holding the backup data and
the server that will be backed up/restored.
Whether there is a firewall in between the servers.
Whether the free backup storage is likely to decrease due to other
data being written.
Whether there's already a lot of disk activity on the server that will be
holding the backup storage.
Only one backup operation is recommended for each server hosting the backup storage. In theory, multiple backups/restores can be performed to/from the server hosting the backup storage, but keep in mind that if insufficient storage has been set aside, this could cause one or more backups to fail. Another consideration in this use case is whether the bandwidth requirements of multiple backups could slow down the backup operations.
1. Check whether the NFS server package has already been installed. If the command returns a value, skip to step 3. Otherwise, proceed with step 2.
# rpm -qa | grep -i nfs-kernel-server
nfs-kernel-server-1.2.3-18.38.43.1
nfs-kernel-server-1.1.3-18.17
2. Install the NFS server package, which is available under the RPM subdirectory where the migration utilities were extracted. For a SuSE 11.3 machine:
# cd /home/hammer/share/migration-utilities
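A minimal sketch of the install step, assuming the package sits in the RPM subdirectory mentioned above and matches the rpm version listed in Appendix D:
# zypper install ./RPM/nfs-kernel-server-1.2.3-18.38.43.1.x86_64.rpm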
3. Create a base directory for the backups. This will be exported via NFS
so that it can be remotely mounted. It MUST be world-readable so that
multiple user accounts can write to/read from the directory.
# mkdir -p /home/hammer/NFSCloud
# ls -ld /home/hammer/NFSCloud
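A hedged example of making the directory world-readable, consistent with the chmod used later in the NFS client section:
# chmod 777 /home/hammer/NFSCloud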
NFS Exports: /home/hammer/NFSCloud
# ls -ld /home/hammer/NFSCloud/<nfs-client-IP-address>
# rm -rf /home/hammer/NFSCloud
NFS Client
The server that will be installed with SLES 11.3 will need to mount the
exported remote directory:
Before the SLES 11.3 installation procedure, to back up data.
After the SLES 11.3 installation procedure and after the re-installation of the original E-XMS release, to restore data.
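A minimal sketch of the mount step, assuming the /work/exms-db-backups mount point and the mount options shown in the mount output later in this chapter:
# mkdir -p /work/exms-db-backups
# mount -t nfs -o rw,hard,intr,tcp <nfs-server-IP-address>:/home/hammer/NFSCloud/<nfs-client-IP-address> /work/exms-db-backups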
# ls -ld /work/exms-db-backups
Filesystem                                                             Size  Used  Avail  Use%  Mounted on
<nfs-server-IP-address>:/home/hammer/NFSCloud/<nfs-client-IP-address>  1T    0     1T     0%    /work/exms-db-backups
4. Check whether vert_admin user (if it exists) can write to the directory.
# su vert_admin
> rm /work/exms-db-backups/myfile1
> exit
If you get a "Permission denied" error, you need to make sure the directory is world-readable by running the following as root:
# chmod -R 777 /work/exms-db-backups
If you get the error "su: user vert_admin does not exist", then you can
safely ignore the error and continue, as this server (usually an NWV) does
not have this user defined.
5. Check whether the mysql user can write to the directory.
# su mysql
> rm /work/exms-db-backups/myfile2
> exit
Upgrading Vertica Database 6.1, 7.0 or 7.1 to 7.2
# ./backup_vert_clear.sh
# rm -rf /work/exms-db-backups/*
<nfs-server-IP-address>:/home/hammer/NFSCloud/<nfs-client-IP-address> on /work/exms-db-backups type nfs (rw,hard,intr,tcp,addr=<nfs-server-IP-address>)
# umount -lf /work/exms-db-backups
/home/hammer/hxms/x86_64
You can also put the rpms in an alternate location and when prompted by
the script for missing rpms, type this alternate location.
NOTE: Empirix recommends that you back up your Vertica database before upgrading.
1. Ensure the dialog rpm is installed. If not, install the rpm from the location thirdparty-sles-repo-5.7.1:
zypper install dialog
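To check beforehand whether the package is already present (standard rpm query, shown as a hedged example):
# rpm -q dialog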
Post-Migration Configuration
If the ROS is under the NWV, log in to the NWV UI to check the Region ID value, and use that value to change the region_id value in the /home/hammer/hmcommon/common-config file on the ROS server. Restart the hammer services on the ROS.
For a Standalone ROS, it is recommended that you check that the DB record shows the same value as the region_id in the common-config file by typing the following command on the ROS:
# mysql -uroot -A xms_national -e "select region_id, url from region;"
If the region_id is not the same, change it in the common-config file and
restart the hammer services on the ROS.
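A hedged way to read the value on the ROS side, assuming region_id appears as a plain key in the common-config file:
# grep -i region_id /home/hammer/hmcommon/common-config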
Post-installation Check
Make sure the data is correctly loaded in the database and that the ORI
interface can extract the information.
NOTE: Contact Empirix Tech Support if you have a large database with sensitive information and you have never performed this installation procedure.
NOTE: This installation procedure can potentially take a long time, depending on the size of the database. Make sure you have an appropriate length of time to perform this installation procedure.
CHAPTER 4 Upgrading E-XMS Components
This chapter provides upgrade instructions for the following E-XMS 5.7.1
components on SLES or Red Hat:
“Upgrading an MSP Probe,” page 4-9
“Upgrading a ROS,” page 4-10
“Upgrading an Active and Standby HA-ROS node,” page 4-10
“Upgrading a NWV or Redundant NWV,” page 4-13
“Upgrading an SMM,” page 4-14
“Upgrading a DMS (Aggregator/Proxy),” page 4-14
“Upgrading a DMS or ROS RDB or RDB Cluster,” page 4-14
“Upgrading RAN Vision,” page 4-14
When upgrading from a legacy installation on SLES, see “Upgrading to E-XMS 5.7.1 from E-XMS 5.1 up to 5.4 Using the Migration Utility,” page 4-1.
When upgrading an MSP 5000 or Ubuntu MSP probe, the old legacy upgrade method is still used. Please refer to the instructions in the section “Upgrading an MSP Probe,” page 4-9.
NOTE: For information on configuring traffic monitoring on a probe running GTP and GTPv2 traffic, see Appendix E, Configuring Dynamic Linksets for GTP/GTPv2 Traffic.
The exms migration utility only needs to be run once to convert the system from the old legacy installation to the new zypper update method.
NOTE: The HA-ROS DB nodes, MSP 5000 or Ubuntu probes do not
require the exms migration utility to be installed or run.
Upgrade Prerequisites
Before you begin upgrading to E-XMS 5.7.1, perform the following tasks.
1. Ensure the system is healthy:
Upgrading to E-XMS 5.7.1 from E-XMS 5.1 up to 5.4 Using the Migration Utility
# cp /etc/my_full.cnf /etc/my_full.cnf-orig
3. Edit the file to add the following lines on both HA-ROS nodes:
binlog-do-db=hammer_monitor
binlog-do-db=regmon
binlog-do-db=sys_health_stats
binlog-do-db=xpa
replicate-do-db=hammer_monitor
replicate-do-db=regmon
replicate-do-db=sys_health_stats
replicate-do-db=xpa
4. Start MySQL on both the Active and Standby HA-ROS nodes:
# mysql -A hammer_monitor
2. Execute the following command:
NOTE: The active and standby state will change as failover occurs.
IMPORTANT: Download and install the correct Wireshark 3.0 version from
SourceForge for the E-XMS build version you plan to install. See “Installing
Wireshark on NWV and ROS,” page 1-12.
IMPORTANT: Upgrade Vertica to version 7.2.3 before you run the migration utility. You should do this immediately before you run the migrate-xms.sh script, because the 5.1 to 5.4 software releases are not qualified to work with Vertica 7.2.3.
See Chapter 3, Upgrading Vertica on the ROS and DB Nodes.
NOTE: Once the migration procedure has begun, do not interrupt its execution. If the script does not complete, please contact Empirix Technical Support for assistance.
/home/hammer/exms-migrate/bin/migrate-xms.sh
The migration script will perform the following tasks:
Backup configuration and data
Remove old software
Install new E-XMS 5.7.1 software
Restore data and configuration
WARNING! After the migrate-xms.sh script has completed, the E-XMS 5.7.1 software
is installed. Do not run the zypper update or zypper install commands.
3. Upgrade SLES 11.3 to 11.4. This is now required for all 5.7.1 upgrades
for all SLES-based servers such as MSP, ROS, NWV, DMS.
/etc/init.d/ucarpd stop
2. Check the role status on the standby HA-ROS.
/etc/init.d/ucarpd status
The status should be ‘Standby’
3. Check the role status on the active HA-ROS.
/etc/init.d/ucarpd status
The status should be ‘Active’. If the status is not ‘Active’ restart this
HA-ROS.
/etc/init.d/ucarpd restart
4. Start the standby HA-ROS.
/etc/init.d/ucarpd start
5. After migrating each of the HA-ROS application nodes, run the setup command on the original Active node:
/home/hammer/hmcommon/bin/setup-ros.sh
Set “Enable MySQL Replication” to 1 to complete the migration of the
HA-ROS application nodes.
6. Check that MySQL and file replication are working on the Active and Standby servers using the MySQL SHOW SLAVE STATUS command.
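A minimal sketch of that check (SHOW SLAVE STATUS is standard MySQL; run it on both nodes and confirm both replication threads report Yes):
# mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"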
/home/hammer/hmcommon/bin/setup-nwv.sh
Set “Enable Redundancy” to 1 to complete the migration of the redundant NWVs and reestablish the bidirectional MySQL and OpenAM data replication between the two nodes.
5. Check that MySQL and OpenAM replication is working correctly
between the primary and secondary NWV using the show slave status
commands and the OpenAM console.
6. Install the SMM, if not already installed. The SMM is always required
on the NWV or Standalone ROS to be able to use the new HTML5
DiagnostiX feature introduced in E-XMS 5.7.1.
NOTE: If SMM was already installed prior to the upgrade on your NWV
or Standalone Region, then SMM was migrated also as part of running
the migrate-xms.sh script.
7. Upgrade the NWV Web Portal, if previously installed before the
upgrade.
NOTE: The NWV Web Portal is required to be installed on every NWV
and Standalone ROS as of E-XMS 5.1, so it is highly recommended to
be installed.
Upgrade Prerequisites
Before you begin upgrading to E-XMS 5.7.1, follow these steps:
1. Ensure the system is healthy:
Upgrading to E-XMS 5.7.1 from E-XMS 5.5 or 5.7
Make sure that Tomcat7 is running and healthy on every ROS. Even if the ROS is not being used as a region under a NWV, Tomcat must be working for the update to work.
Make sure that the FQDN and hostname values of every NWV and ROS are in the /etc/hosts file.
2. Create a persistent screen session:
cd hxms
./upgrade.sh -u
Upgrading a ROS
Upgrade a ROS after Vertica has been upgraded to 7.2.3.
SLES
# zypper update exms-ros
Red Hat
# yum update exms-ros
There are no questions in the script requiring a response.
NOTE: If this is a standalone ROS, you must also upgrade the SMM. See
“Upgrading an SMM,” page 4-14.
2. HA-ROS nodes
3. NWV
4. MSP probes
# cp /etc/my_full.cnf /etc/my_full.cnf-orig
3. Edit the file to add the following lines on both HA-ROS nodes:
binlog-do-db=hammer_monitor
binlog-do-db=regmon
binlog-do-db=sys_health_stats
binlog-do-db=xpa
replicate-do-db=hammer_monitor
replicate-do-db=regmon
replicate-do-db=sys_health_stats
replicate-do-db=xpa
4. Start MySQL on both the Active and Standby HA-ROS nodes:
# mysql -A hammer_monitor
2. Execute the following command:
# mysql -A xms_national
2. Execute the following command:
4 = secondary
Upgrading an SMM
To upgrade an SMM for NWV or a standalone ROS, start the update:
SLES
# zypper update exms-smm
Red Hat
# yum update exms-smm
There are no questions in the script requiring a response.
Manually Verifying Successful Upgrade Completion
b. Start the Java Diagnostics GUI from the E-XMS landing screen.
6. Execute the “Call Detail Summary Report” for all regions to ensure statistics are available.
7. If the customer is expecting to use the new HTML5 DiagnostiX search, refer to Chapter 7, Configuration for Using HTML5 DiagnostiX When No DMS is Present, for instructions on the additional configuration required and how to verify it is working.
CHAPTER 5 Post-Installation Requirements
HA Proxy
High-Availability Proxy (HA Proxy) is a third-party component used by the NWV and Standalone ROS (with SMM) to communicate with a DMS Report Database (RDB) running Vertica.
NOTE: You must install E-XMS software on the NWV or Standalone ROS
before installing HA Proxy.
Because the HA Proxy software license does not allow it to be distributed with the E-XMS software, you must download HA Proxy for Linux and install it on your E-XMS system.
E-XMS has been tested with HA Proxy version 1.4.21 for i586 Linux with stripped symbols. This is the recommended version and can be downloaded from the following location:
http://www.haproxy.org/download/1.4/bin/haproxy-1.4.21-pcre-40kses-linux-i586.stripped
Download HA Proxy
1. Download HA Proxy:
a. Login as root.
wget http://www.haproxy.org/download/1.4/bin/haproxy-1.4.21-pcre-40kses-linux-i586.stripped
d. If the system does not have Internet access or you must use a different version, the file must be transferred to the /root directory.
a. Login as root.
CHAPTER 6 Upgrading the Network Wide View Web Portal
This chapter provides instructions for upgrading the Network Wide View
Web Portal on a NWV or on a standalone ROS.
Prerequisite
E-XMS 5.x must be correctly configured on your server. The OpenAM
server must be up and running.
To test the OpenAM browser go to:
https://<fqdn>:8443/OpenAM
If you can reach with website, OpenAM and Tomcat7 are running.
Procedure
1. Copy nwv-portal-installer-<release>.rpm into the /home/hammer/share folder of the E-XMS host for the NWV-Portal.
2. Log in to the server as root and go to /home/hammer/share:
cd /home/hammer/share
3. If you have previously installed the NWV-portal rpm, you must remove
it using the command:
rpm -e nwv-portal-installer-<old rpm>
cd /home/hammer/hmportal/bin
./install_over_nwv.sh <current_host_fqdn>
The FQDN must contain at least 2 dots (e.g., host.example.com). Mapping between the FQDN of the host and its IP address must be resolvable via DNS and in the local /etc/hosts file. The FQDN must be defined in the /etc/hosts file.
7. Open a browser and type https://<current_host_fqdn>
Prerequisite
The application host must be correctly configured in standalone mode, with a Web Application Server (e.g., TomEE or Apache) present.
OpenAM Agent Installer
cd <path_to_folder_containing_rpm>
rpm -U openam-agent-installer-<release>-x86_64.rpm
cd /home/hammer/hmportal_client/bin
./install_agent_on_app.sh <current_host_fqdn> <openam_host_fqdn> <tomee_installation_folder> <tomee_username> <app_type>
Where:
<current_host_fqdn> is the fully qualified domain of server hosting the
application
<openam_host_fqdn> is the fully qualified domain of server hosting
OpenAM web application
<tomee_installation_folder> is the place where TomEE is installed
(CATALINA_HOME folder).
<tomee_username> is the user that executes tomee server in this
host (see the ownership of jsvc in case of doubts)
<app_type> is one of the values among the following:
exms - to add another ROS/NWV instance
is - to provide support for Intellisight
isran - uses apache instead of TomEE. The agent configuration
procedure is not currently automated by the script.
<filter-name>Agent</filter-name>
<display-name>Agent</display-name>
<filter-class>com.sun.identity.agents.filter.AmAgentFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>Agent</filter-name>
<url-pattern>/*</url-pattern>
<dispatcher>REQUEST</dispatcher>
<dispatcher>INCLUDE</dispatcher>
<dispatcher>FORWARD</dispatcher>
<dispatcher>ERROR</dispatcher>
</filter-mapping>
<Listener className="org.apache.tomee.catalina.ServerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JasperListener" />
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
-->
type="org.apache.catalina.UserDatabase"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
-->
connectionTimeout="20000"
redirectPort="443" />
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
maxThreads="150" scheme="https"
secure="true"
clientAuth="false"
protocol="org.apache.coyote.http11.Http11NioProtocol"
keystoreFile="/usr/share/tomee/conf/default.keystore"
keystorePass="secret"
keyAlias="default"/>
-->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<Realm className="com.sun.identity.agents.tomcat.v6.AmTomcatRealm" debug="99"/>
<!--
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
-->
</Realm>
unpackWARs="true" autoDeploy="true">
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
</Host>
</Engine>
</Service>
</Server>
3. Set the user's information and assign it to a group. Do not force the change-password check.
https://FQDN_PORTAL:8443/openam
5. Go to Access Control > /(Top Level Realm) > Subjects > User, then
click New.
6. Add the new user information. The ID must correspond to the ‘Name’
set in IntelliSight. When done entering new user information, click OK.
7. Click the user name in the list, then select the group. Add groups
accordingly. Select Roles from the list box on the left side and click
Add to move them to the list box on the right side. When you are done,
click Save.
Interoperability Between IntelliSight and E-XMS
2. Enter the new user info in the form. Select the Role to be ‘Power User’.
Note the name of the user account.
CHAPTER 7 Configuration for Using HTML5 DiagnostiX When No DMS is Present
On customer systems using the HTML5 user interface DiagnostiX feature for voice protocols, perform the steps below to configure the database views the first time you upgrade to 5.7.1. Once done, this does not have to be repeated.
Prerequisite
E-XMS Release 5.7.1 requires the installation and setup of the SMM on the NWV or Standalone ROS, even if it is not a DMS system. For installation steps, see Chapter 2, Installing E-XMS Software.
After installation or upgrade to E-XMS 5.7.1, copy the script 'CreateROSVerticaViewsForProtocols.py' (if not already there) into the following directory on each active ROS:
/home/hammer/hmcommon/bin/scripts/python/projects/ros/src/ros
/home/hammer/hmsasps/bin/saspsmultistage.sh
Each field must be delimited by a comma "," and refer to a single protocol. The example configures all 13 voice protocols accessible through the ROS Database (a sketch of the comma-delimited format appears after the list below):
SIP
ISUP (to configure both ISUP and BICC protocols)
RANAP
BSSAP
CAMEL
INAP
H248
DIAMETER (to configure the DIAMETER+ protocol)
S102
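Purely to illustrate the comma-delimited format described above (this is not a complete or authoritative list of the 13 protocols), the value would look like:
SIP,ISUP,RANAP,BSSAP,CAMEL,INAP,H248,DIAMETER,S102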
Where:
<Vertica IP> is one (any working) node of the ROS Vertica DB.
NOTE: This must be run on the NWV at the end of the installation/upgrade and each time the DB configuration changes (e.g., when a protocol filter is enabled by the Java UI or any other change is made in the DB).
APPENDIX A E-XMS Network TCP Port Assignments
Network Port Assignment Table
Diagram Notation | Device A | Device B | Port(s) | Direction | Comments
1 | User Client | Network Wide View | 80, 443, 8080, 8443 | A>B | Client connects to NWV via HTTP and/or HTTPS
2 | User Client | MSP Probe | 5913 | A<>B | Required for some System Administration of the MSP probes (except MSP 5000)
6 | MSP Probe | MSP Probe | 5112, 5158 | A<>B | Only for probe to probe peer dispatching
29 | Any SNMP Requester | Network Wide View, Regional OS, MSP Probe, IDMC, OneSight | UDP port 161 | A>B | Port needs to be open to request any MIB entry
30 | Any SNMP Trap Receiver | Network Wide View, Regional OS, MSP Probe, IDMC, OneSight | UDP port 162 | B>A | Port needs to be open to send any configured SNMP trap to an NMS
APPENDIX B VM Configuration and Set Up
This chapter provides the steps necessary to configure and monitor software-only probes on Virtual Machines (VM) running VMware and KVM hypervisors.
VM Configuration
To monitor traffic running on an existing Virtual Machine (VM) connected
to a virtual switch in VMware, you need to set up a new Virtual Machine
Port Group and configure it for Promiscuous Mode. When promiscuous
mode is enabled on a virtual adapter, all traffic flowing through the virtual
switch, including local traffic between virtual machines and remote traffic
originating from outside the virtual host, is sent to the promiscuous virtual
adapter.
4. In the Properties dialog, select the virtual switch with the VM whose traffic needs to be monitored by the Software Only probe. For example, the screen capture below shows vSwitch2.
5. Click Add to display the Add Network Wizard.
9. Click Finish to add the Virtual Machine Port Group with label Local2.
10. Click Close. Continue to the next section to set the new Virtual
Machine Port Group to Promiscuous Mode.
11. Select the new port group from the list in the left pane. Notice the Promiscuous Mode is set to Reject.
13. Select the Security tab, check Promiscuous Mode to change the mode
dropdown to Accept.
14. Click OK. The Promiscuous Mode is now set to Accept.
Add the new Port Group in the vSphere Client as a network adapter of the VM that will run the Software Only probe.
16. Select the desired VM in the left pane, which is ‘ProbeSO’.
18. Select the Hardware tab and click Add to display Device Types.
19. Select Ethernet Adapter and click Next to display Network Types. The
Adapter Type: E1000 displays by default.
20. In the Network Connection section, select ‘Local2’ from the dropdown.
24. In the Adapter Type section, select a different adapter type from the
dropdown, for example: VMXNET3.
25. Click Next to review your changes.
The Virtual Machine Properties dialog shows the adapter in the process of being added.
28. Right-click the new adapter and select Edit to verify the settings and
ensure the adapter was added. When verified, click OK.
VM Monitoring
Monitoring traffic on a KVM system without using Open vSwitch requires you to turn off MAC address learning for the bridge created by KVM for the physical Ethernet port where the traffic is being monitored. To turn off MAC address learning, type:
brctl setageing br1 0
This will set the aging for the MAC address learning to 0 for bridge br1 so
it will never learn any MAC address and will flood the frame out of its
active ports, except the port where the frame was received.
To turn on MAC address learning, type
brctl setageing br1 100
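To confirm the current ageing setting on the bridge, the standard brctl status output can be inspected (shown here as a hedged example):
# brctl showstp br1 | grep -i ageing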
2. Click Add Hardware to display the Add New Virtual Hardware dialog.
5. Click Finish.
B-12 Chapter B
APPENDIX C Troubleshooting
Common Errors and Resolutions
During the migration to E-XMS 5.5, you are prompted to remove NTP/NSCD packages that conflict with E-XMS. You must remove these packages to proceed. Select “Solution 1” to remove the conflicting packages.
If the system is registered to a SUSE server and there is no Internet
access, a connection error displays. You can ignore this error or disable
the repository using the following command:
zypper mr --disable "reponame"
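To find the exact repository name to disable, the repository list can be displayed first (standard zypper command, shown as a hedged example):
# zypper repos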
the removal of the MySQL lock to install the most recent version of libmysqlclient (Solution 4).
Please choose Solution 4 when prompted.
Problem: exms-ros-5.7.0-345.x86_64 requires exms-mysql >= 1.1, but this requirement cannot be provided
uninstallable providers: exms-mysql-1.1-0.x86_64[exms-sles]
Solution 1: remove lock to allow installation of libmysqlclient_r15-5.0.96-0.6.1.x86_64[thirdparty-sles-3.0.101-0.47.67-default]
Solution 2: remove lock to allow installation of libmysqlclient_r15-5.0.96-0.8.8.1.x86_64[thirdparty-sles-3.0.101-0.47.67-default]
Solution 3: remove lock to allow installation of libmysqlclient_r15-5.0.96-0.6.1.x86_64[ranvision-sles]
Solution 4: remove lock to allow installation of libmysqlclient_r15-5.0.96-0.8.8.1.x86_64[ranvision-sles]
Solution 5: remove lock to allow installation of libmysqlclient_r15-5.0.96-0.6.1.x86_64[SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138]
Solution 6: do not install exms-ros-5.7.0-345.x86_64
Solution 7: break exms-ros-5.7.0-345.x86_64 by ignoring some of its dependencies
Update the Kernel Package for MSP Probes Based on Red Hat 6.8
When installing E-XMS on a Red Hat 6.8 based probe using the yum command ‘yum install exms-probe’, the installation exits with the following error message:
Kernel development package for currently running
kernel version. Please install the kernel development
for kernel version 2.6.32-642.el6.x86_64 and run this
script again
To resolve this issue, you must manually update the kernel development package to match the currently running kernel version by executing the following command:
yum update
Now reboot your system.
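A hedged way to confirm that the kernel development package now matches the running kernel (standard commands; the package name is assumed to follow the usual Red Hat kernel-devel convention):
# uname -r
# rpm -q kernel-devel
The two version strings should match before re-running ‘yum install exms-probe’.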
a. /etc/init.d/process_launcher stop
b. /etc/init.d/hmmonitor stop
3. Kill the admintools processes if they are not performing a critical function on the Vertica database:
kill pid
Where pid is the process id.
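A hedged example of locating the admintools process IDs before killing them:
# ps -ef | grep -i admintools | grep -v grep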
APPENDIX D Installing and Upgrading SLES OS
This chapter provides procedures to install SLES 11.3 using the Empirix auto-installer ISO and to upgrade SLES 11.3 to SLES 11.4 using the SLES 11.4 distribution ISO. See “Performing the Distribution Upgrade to SLES 11.4,” page D-24.
a. Create a base directory for the auto-installer ISO images. This will
be exported via NFS to be remotely mounted/accessed for the
auto-installer installation.
# mkdir -p /home/hammer/share/autoinstaller
b. VPN into the local network of the NFS server, if you are not already
connected.
c. Open ‘Computer’ (Windows 7) or ‘This PC’ (Windows 10), right-click the icon, and select ‘Map network drive...’.
<NFS_server_IP>:/home/hammer/share/autoinstaller
Click Finish. Now replace <NFS_server_IP> with the IP address of the NFS server where the autoinstaller.iso is stored.
The mapped network drive is now available in ‘This PC’ Network Locations.
d. In the Security Warning dialog, check “I accept the risk...” and click
Run.
Intel Server
a. Select the Remote Control tab on the main menu and then click
‘Launch Console’.
c. If a message displays similar to the one shown below, see the section ..... and then launch the ‘jviewer.jnlp’ file again. If there is no
d. In the Security Warning dialog, check ‘I accept the risk...’ and click
Run.
4. Reboot the device and wait for the options screen that looks similar to
the image shown below.
5. Now select the required option from the list. The installation starts.
auto-installer Filename | Installer Option | Server Model | Description | E-XMS Model
db_7.10 | E-XMS HCOS | Ivy Bridge | High performance database | HC-ROS
  | DMS DB | Ivy Bridge | High performance database | RDB
mid_range_9.8 | E-XMS HPOS | Ivy Bridge | Mid-size database | HP-ROS
low_end_10.7 | E-XMS ROS and NWV | Ivy Bridge | Entry-level database | 1MSP-ROS
  |  | Ivy Bridge | Entry-level database | NWV
xms_hcos_2.54 | Installation | Supermicro | HCOS | HC-ROS
xms_hpos_1.45 | Installation | Old Intel servers 5023, 5423 |  | HP-ROS
12. After reboot, the options screen appears again with the option ‘Boot
from Hard Disk’ selected.
2. Open the 'Configure Java' application and select the Security tab. Check 'Enable Java Content in the Browser' and click 'Edit Site List...'.
3. In the Exception Site List dialog, click Add to add the webserver
address of the IPMI, then click OK.
Updating the Intel Server 5023 BIOS
C:\ BIOS96
The BIOS update begins. The update will take about 3 minutes to
complete.
7. Verify the BIOS version number and set up the BIOS parameters based on the file <<S5000 BIOS 96 Settings Rev 2.pdf>> available in the zip file mentioned above in step 2.
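As a supplementary check, the running BIOS version can also be read from the operating system after it boots. A sketch assuming the dmidecode utility is installed:
# dmidecode -s bios-version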
Upgrading SLES 11.3 to SLES 11.4
Conventions
In this section, commands to run appear after the prompt. Unless explicitly indicated otherwise, run the commands as the root user, indicated by the '#' prompt:
#
For example, the 'ls' command is shown followed by its output:
# ls
check_process_running.pl passwd.exp
When executing a command, do NOT copy and paste the leading '#', or the command will not execute.
When running a series of commands as a specific user, the following convention is used. In this example, the current user is root and the user is changed to the mysql user:
# su mysql
> ls
check_process_running.pl passwd.exp
> exit
#
Some commands and output must take into consideration IP addresses, build numbers, and customer-specific values. Placeholders for these values are indicated with the convention <replace this with actual value>. The expression within and including the angle brackets (< >) must be substituted with the actual value. For example, 5.7.1-b<build-number> becomes 5.7.1-b206 if the build number for the release being installed is 206.
Prerequisites
Only E-XMS 5.7 (or later) is supported on SLES 11.4.
SLES 11.4 is NOT supported on the 5023/5423 servers; these servers can only be installed with SLES 11.3 because they require additional third-party drivers that do not work with, or have not been qualified for, SLES 11.4.
SLES 11.4 installation ISO (SLES-11-SP4-DVD-x86_64-GM-
DVD1.iso) and valid SLES license.
Local repository/server to host the SLES 11.4 installation ISO. The preferred method is to set up central server(s) to host the installation ISO via NFS. If NFS is not desired as a means to access the installation ISO, the ISO may be copied to each server and that local ISO can be configured as an installation repository.
The preference is to use servers on the network to host the SLES
11.4 installation ISO and to configure NFS to export the ISO image.
These servers must not be behind any firewalls.
If the NWV server or another server is not readily network-accessible by all systems, then the recommendation is to use the ROS for each region as a local repository for all of the systems under the region (ROS + probes).
nfs-kernel-server-1.2.3-18.38.43.1.x86_64.rpm, if an NFS server package is not already installed on the server(s) that host the installation ISO (see the check below).
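A quick way to check whether an NFS server package is already present and, if not, to install it from the configured repositories (a sketch; the installed version may differ from the RPM named above):
# rpm -q nfs-kernel-server
# zypper install nfs-kernel-server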
# zypper refresh
Retrieving repository 'SLES 11.4' metadata [done]
Building repository 'SLES 11.4' cache [done]
Repository 'exms-common' is up to date.
Repository 'exms-sles' is up to date.
Repository 'exms-wireshark' is up to date.
Retrieving repository 'ranvision-sles' metadata
[done]
Retrieving repository 'thirdparty-sles-3.0.101-
0.47.67-default' metadata [done]
All repositories have been refreshed.
If NFS is not used, the installation ISO needs to be copied to the local server and the repository added with a different syntax.
1. Create a base directory for SLES 11.4-related ISO images and reposi-
tories.
# mkdir -p /home/hammer/share/SLES-11.4
# chmod a+rx /home/hammer/share/SLES-11.4
2. Download SLES-11-SP4-DVD-x86_64-GM-DVD1.iso to /home/hammer/share/SLES-11.4 and make sure it is world-readable.
# chmod a+r /home/hammer/share/SLES-11.4/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso
3. Add the SLES 11.4 installation ISO as a software repository to the
local server.
# zypper ar "iso:/?iso=/home/hammer/share/SLES-11.4/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso" "SLES 11.4"
Adding repository 'SLES 11.4' [done]
Repository 'SLES 11.4' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: iso:///?iso=/home/hammer/share/SLES-11.4/SLES-
11-SP4-DVD-x86_64-GM-DVD1.iso
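To confirm that the new repository is visible before refreshing, the repository list can be printed (the aliases and numbering will vary by system):
# zypper lr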
4. Make sure the repositories have been refreshed.
# zypper refresh
Retrieving repository 'SLES 11.4' metadata [done]
Building repository 'SLES 11.4' cache [done]
Repository 'exms-common' is up to date.
Repository 'exms-sles' is up to date.
Repository 'exms-wireshark' is up to date.
Retrieving repository 'ranvision-sles' metadata
[done]
Retrieving repository 'thirdparty-sles-3.0.101-
0.47.67-default' metadata [done]
All repositories have been refreshed.
1. List the configured repositories:
# zypper lr
2. Disable the SuSE 11.3 repository (the exact disable command is not shown here; a sketch follows this step) and refresh the repositories:
# zypper refresh
#
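The disable step above can be performed with zypper's modify-repo option. A minimal sketch, where <SLES-11.3-repo-alias> is a placeholder for the alias reported by 'zypper lr':
# zypper mr -d <SLES-11.3-repo-alias>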
2. Perform the distribution upgrade, which updates all relevant packages to take the server from SLES 11.3 to 11.4. The distribution upgrade may need to be run more than once to ensure that all packages have been upgraded. Confirm that the SuSE release has been updated to 11.4; a quick check is shown after the reboot step below.
# zypper dup -r "SLES 11.4" -l
153 packages to upgrade, 60 to downgrade, 5 new, 5 to remove.
Overall download size: 287.1 MiB. After the operation, additional 24.3 MiB will be used.
Continue? [y/n/? shows all options] (y): <Enter>
<output truncated>
# reboot
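After the reboot, a quick way to confirm that the release has been updated to 11.4 is to read the SuSE release file present on SLES 11 systems; output similar to the following is expected:
# cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 4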
APPENDIX E Configuring Dynamic Linksets for GTP/GTPv2 Traffic
This appendix explains how to configure traffic monitoring on a probe,
supporting a mixture of GTP and GTPv2 traffic on the same linkset.
This linkset mechanism supports a traffic configuration that associates the
user plane to the control plane for the following LTE interfaces:
Gn: GTPv1 CP and UP
S4: GTPv2 CP and UP
S11/S12: GTPv2 CP on S11 and S1-U (S1-U must be associated to
the S11 CP)
S5/S8: GTPv2 CP and UP
NOTE: No VLAN id, IP address ranges, or SFP can be used to distinguish interfaces.
1. Stop the IPX GUID:
/etc/init.d/ipxguid stop
2. Stop the hmmonitor:
/etc/init.d/hmmonitor stop
3. Manually remove the test.xml file.
4. In the parameters.ini file, set ip_up_thread_enabled=1.
For information about available parameters, refer to the table in "Configuring Linkset Parameters," page E-3.
5. From the Control GUI, start the IPX GUID:
/etc/init.d/ipxguid start
/etc/init.d/hmmonitor start
When enabled, all the user plane modules over the tunnel in the test.xml file are suffixed with "_up".
Example
From the probe Control GUI enable:
Control Plane: Gn, S5S8
User Plane: http, ftp
In test.xml you will see the following XML nodes:
<module id="http_up" name="http"> -----> HTTP over the generic tunnel
<module id="ftp_up" name="ftp"> -----> FTP over the generic tunnel
If disabled (ip_up_thread_enabled=0), the XML nodes are not automatically removed from the test.xml file.
To disable and use the probe without the generic thread (ip_up), follow
these steps from the CLI:
1. In the parameters.ini file, disable the thread (ip_up_thread_enabled=0).
2. Stop the IPX GUID:
/etc/init.d/ipxguid stop
3. Manually remove the test.xml file:
rm /home/hammer/hmipxprobe/etc/test.xml
4. Start the IPX GUID:
/etc/init.d/ipxguid start
5. Configure the probe using the Control GUI.
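After reconfiguring the probe, a quick check (assuming the same test.xml path used above) that no generic-tunnel "_up" modules remain in the regenerated file:
grep '_up' /home/hammer/hmipxprobe/etc/test.xml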
Configuring Linkset Parameters

Parameter    Description    Default Value