Installing vSTREAM in VMware
This chapter describes how to install the vSTREAM virtual appliance in a VMware ESXi
environment. See the following sections for details:
"System Requirements
"Installing/Upgrading NETSCOUT Software" on page 6-3
"Deploying the Virtual Appliance" on page 6-4
"Reinstalling the vSTREAM Application" on page 6-38
"Configuring System Settings" on page 6-40
"Configure PCI Passthrough in VMware Deployments" on page 6-42
"Enable SR-IOV" on page 6-46
"Enable DPDK Support" on page 6-49
"Using vSTREAM Virtual Appliances in a Clustered/vMotion Environment" on
page 6-50
Note: This guide assumes you have an existing VMware ESXi server deployed, as
well as working knowledge of your VMware environment. Refer to your VMware
documentation for details on administering the target environment.
Table 6-1 summarizes the minimum hypervisor and virtual environment requirements for the
target VMware environment:
Note: Deploying a vSTREAM virtual appliance requires sufficient privileges in the target
virtual environment. Consult the documentation
for details on the privileges necessary to deploy a virtual machine from a template.
Installing GeoProbe software allows management and analysis from an associated IrisView
server. You can also optionally export data to an associated nGenius Business Analytics (nBA)
server if ASI support is enabled during installation of the Geo-6320-xxx-176.bin file.
Figure 6-1 Deploying the OVF File for vSTREAM Virtual Machine
3 If you are using the vSphere Web Client and have not already installed the Client
Integration Plug-in to enable OVF functionality, the vSphere Web Client will guide you
through the procedure to download and install it now. Once you have finished
installing the plug-in, restart your browser and begin the installation procedure again.
Note: The first time you launch your browser after installing the Client Integration
Plug-in, you may need to launch it by right-clicking its icon and choosing the Run as
administrator option to allow use of the plug-in. Once the plug-in has been executed
and allowed once, you should be able to run the browser normally on subsequent
connections.
4 You can install either from a URL or a Local file. In this example, click the Browse
button to navigate to the location where you extracted the vSTREAM.ovf file, select
both the .ovf and vSTREAM-disk1.vmdk files, and click Next to continue (Figure 6-2).
Figure 6-3 Specifying the Name and Location for the Virtual Machine
6 Select the destination compute resource where the deployed template should be run.
You can select a cluster, host, or resource pool (Figure 6-4).
7 Review the details on the selected OVF file displayed by the wizard (Figure 6-5) and
click Next to continue.
Note: The disk space reported here is for the virtual machine and its operating system
only; additional disk space is required for storage as a secondary disk. Refer to Table 6-4
on page 6-11 for storage recommendations by model number.
8 Select the Thick Provision Lazy Zeroed disk format for the virtual
disks and leave the VM Storage Policy at the Datastore Default (Figure 6-6). The Thick
Provision Lazy Zeroed format is the only one supported for the vSTREAM virtual
machine.
9 If you are installing in an environment with multiple storage arrays available, you can
choose the datastore used for the virtual machine. Select a datastore with
sufficient free space for the virtual machine and click Next to continue (Figure 6-6).
11 The next step lets you configure system settings for the vSTREAM virtual appliance,
including its IP profile, DNS server(s), NTP server(s), and the IP address of the
managing nGeniusONE server (Figure 6-8).
Configuring these settings in the deployment wizard is optional. If you choose not
to configure them at this time, you can configure them post-installation using the
nGApplianceConfig.plx script, as described in "Configuring System Settings" on
page 6-40.
Table 6-3 summarizes the system settings you can configure in this step:
Note: The installation wizard does not let you configure a time zone and defaults the
setting to the Eastern U.S. time zone. Log in to the command line after installation and
use the standard Linux tzselect command to reconfigure the time zone as necessary.
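For example, on a CentOS-style appliance (the exact persistence mechanism may vary by OS release), you might run tzselect to identify the zone name and then point /etc/localtime at the corresponding zoneinfo file:
tzselect
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
The America/Chicago zone is only an illustration; substitute the zone name reported by tzselect.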
IP Address: The wizard lets you set an IP address matching the version of the
management network you selected in Step 10, IPv4 for IPv4 and IPv6 for IPv6.
Netmask: Subnet mask for the Management port (required for IPv4 only).
You can enter it either using standard IPv4 dotted decimal
format (for example, 255.255.255.0) or in CIDR format (for
example, /24).
Domain Name: Domain name of the network to which the appliance is
connected.
Domain Name Server 1/2: IP address of a DNS server (nameserver). You can specify a
second DNS server to be used as a backup in case the first server
is unreachable.
NTP Server 1/2: IP address of an NTP server to be used for synchronization of the
appliance system clock. You can specify a second NTP
server to be used as a backup in case the first server is
unreachable.
nGenius Server IP: The IP address of the nGeniusONE server that will manage this
virtual appliance.
vSTREAM Data Disk Size: Specify the size of the secondary disk used for packet and data
storage and click Next to continue. The default size is 100 GB.
You can specify values up to a maximum of 64 TB.
vCPUs: 1 (minimum); 1-24 supported (see note 1)
RAM: 2 GB (minimum); 2-64 GB supported
Hard Disks
1. Note that vSTREAM vCPUs are licensed in blocks of eight. You can only provision vCPUs up to the
maximum allowed by your currently installed license. You can always purchase and apply a license to allow
additional vCPUs.
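For example, a vSTREAM instance provisioned with 12 vCPUs requires two eight-vCPU license blocks, which cover up to 16 vCPUs.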
Note: Some vSTREAM features require provisioning of more RAM than others.
Refer to "vSTREAM Provisioning Requirements" on page 1-6 for a discussion of the
resources required for different features.
A New Network entry appears in the Virtual Hardware list at the left of the Edit Settings
dialog box.
3 Click the New Network entry and set the following options for the monitoring vNIC
(Figure 6-14):
Check the Connect at power on option so that the vNIC connects when the
virtual appliance starts.
Set the Adapter Type to VMXNET3. This is the only supported adapter for
vSTREAM.
If you already have a port group configured in promiscuous mode for
packet acquisition, you can use the network dropdown shown in Figure 6-14 to
assign the monitoring interface to it now.
4 Review the settings for the monitoring vNIC, correct as necessary, and click OK to
create the vNIC.
3 From the vSphere client, open the Virtual Machine Properties view for the vSTREAM
virtual machine.
5 Use the network dropdown highlighted in Figure 6-16 to assign eth0 to the correct
vSwitch port group for management traffic.
6 Check the Network adapter entries for each monitoring interface and ensure they
are set to the correct networks as well.
7 Open the following file in a text editor:
8 Ensure that the value reported for HWADDR in this file matches the MAC address
reported for eth0 in the ifconfig command you used in Step 1. If it does not match,
edit the value for HWADDR to match and save your changes.
Note: You can show the MAC address for just eth0 with the ifconfig
eth0 command.
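For example, assuming the interface configuration is stored in a CentOS-style ifcfg file (the path below is the conventional location and may differ on your appliance), you could compare the two values directly:
ifconfig eth0 | grep -i hwaddr
grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0
The MAC address printed by the first command should match the HWADDR value printed by the second (newer net-tools releases label the field ether instead of HWaddr).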
The supported monitoring scenarios depend on the type of virtual switch in use:
vSphere Distributed Switch (vDS) with port mirroring: Refer to "Configuring Port Mirroring on a vSphere Distributed Switch" on page 6-22 for details on
this scenario.
vSphere Distributed Switch (vDS) with a promiscuous-mode distributed port group: Refer to "Configuring a Distributed Port Group on a vSphere Distributed Switch" on page 6-27 for
details on this scenario.
vSphere Standard Switch (vSS) with a promiscuous-mode port group: Refer to "Configuring a Port Group on a vSphere Standard Switch" on page 6-32 for details on this
scenario.
A second physical NIC (pNIC) is required to deliver external traffic to the
vSTREAM virtual appliance. The second pNIC is separate from the one used for the
management traffic. This is shown as pNIC1 in Figure 6-17.
A vSphere Standard Switch (vSS) or vSphere Distributed Switch (vDS) must be
added to the hypervisor and associated with the second pNIC. This is shown as
vSwitch1 in Figure 6-17.
Delivery of traffic to the monitor port must be configured correctly. To
monitor a particular link, for example, you would want to configure a physical switch
to mirror that link's traffic to the physical switch port connected to the
second pNIC.
Figure 6-17 shows an example of this configuration. The Monitored Site is configured to
mirror traffic to the physical switch port where pNIC1 on vSwitch1 is connected. In turn, the
vSTREAM is connected to the same vSwitch1, allowing it to see traffic mirrored from the
monitored site. Note that no other virtual machines should be connected to the vSwitch used
as the destination for external traffic (vSwitch1 in Figure 6-17).
2 Select the entry for the distributed switch to which the virtual machines you want to
monitor are connected.
3 Click on the Configure tab and select Settings > Port mirroring as illustrated in
Figure 6-19.
5 Select the type of port mirroring session to create. vSTREAM supports either of the
following session types:
Distributed Port Mirroring. Use this session type when the vSTREAM mirror
destination is on the same ESXi host as the sources you want to mirror.
Encapsulated Remote Mirroring (L3) Source. Use this session type when the
vSTREAM mirror destination is on a different ESXi host than the sources you
want to mirror. When using this session type, you must associate an IP address
with the destination vSTREAM monitoring interface and specify it in the Select
destinations step of the Add Port Mirroring Session wizard.
6 Make the following settings in the Edit properties step (Figure 6-21):
a Supply a descriptive name for the port mirroring session in the Name field.
b If you want the port mirroring session to start as soon as it is created, you can
change the Status field to Enabled. If you elect to leave it Disabled, you can
easily start the session from the Port Mirroring interface later on.
8 Use the Select sources step to choose the virtual machines whose traffic you want to
mirror to vSTREAM:
a Click the button to select the ports you want to mirror to vSTREAM.
b Use the Select Ports dialog box to select the virtual machines whose traffic you
want to mirror. Check the boxes of each virtual machine to be mirrored and click
OK when you are done (Figure 6-22).
10 Click Next.
11 Review the settings for the port mirroring session in the Ready to complete step and
use the Back button to correct settings as necessary. Once you are satisfied that the
settings are correct click Finish to add the session.
12 If you did not enable the port mirroring session by default, you can select its entry in
the Port Mirroring list, click the Edit button, and change its Status to Enabled to start
port mirroring.
Note: NETSCOUT recommends that vDS deployments use port mirroring instead of
port groups set to promiscuous mode. Refer to "Configuring Port Mirroring on a
vSphere Distributed Switch" on page 6-22 for details on setting up port mirroring.
2 Right-click the entry for the distributed switch to which the virtual machines you want
to monitor are connected and select Distributed Port Group > New Distributed
Port Group, as illustrated in Figure 6-26.
4 Set the following options in the Configure settings page and click Next to continue:
Port binding: Static binding.
Port allocation: Elastic, so that new ports are created as needed.
Number of ports: 1.
Network resource pool: (default).
VLAN type: VLAN trunking.
VLAN trunk range: Set the VLAN trunk range to 0-4094, as shown in
Figure 6-28.
5 Check the Customize default policies configuration box (Figure 6-29) to display
additional options, including promiscuous mode, and click Next.
Figure 6-30 Enabling Promiscuous Mode for the Distributed Port Group
7 Click through the remaining steps in the wizard, leaving their settings at the default,
until you reach the Ready to complete step.
8 Review your settings and click Finish to add the new distributed port group to the vDS.
Once added, the new Port Group appears in the inventory list below the selected vDS,
as shown below.
3 Locate the entry in the Select Network dialog box for the distributed port group you
created in the previous procedure, select it, and click OK.
In this example, we are connecting a vSTREAM monitoring interface to the vSTREAM
Port Group we created in the previous procedure (Figure 6-37).
Figure 6-32 Selecting the Distributed Port Group for the vSTREAM Monitoring Interface
6 Click Next.
7 Change the text in the Network Label field to something meaningful (for example,
Monitoring Network).
8 Set the VLAN ID field as follows:
To monitor all VLANs, set the VLAN ID to All (4095).
To monitor only specific VLANs, enter their specific IDs.
6 Click the Security entry and check the Override box next to Promiscuous Mode
and choose Accept from its corresponding dropdown box (Figure 6-35).
Important:
3 Locate the entry in the Select Network dialog box for the port group you created in
the previous procedure, select it, and click OK.
In this example, we are connecting a vSTREAM monitoring interface to the Monitoring
Network port group we created in the previous procedure (Figure 6-37).
Figure 6-37 Selecting the Port Group for the vSTREAM Monitoring Interface
There is no need to perform a separate application installation unless you want to change the
sizes of the data storage partitions. Refer to "Reinstalling the vSTREAM Application" on
page 6-38.
Note: Use the default username and password the first time you log in to
the operating system. After you log in the first time, you can modify the
default password using the passwd command.
4 Install the application using the command below. Note that the xxx indicates the build
number for the .bin file, and will vary by version:
./is-6320-xxx-vSTREAM.bin
It can take several minutes for the installation to begin.
Note: If the installation file does not run, you may need to make it executable with
the chmod +x command. For example, chmod +x is-6320-xxx-vSTREAM.bin.
5 The installation script asks you to select your locale. Choose your language and press
Enter.
6 Press Enter on the Introduction screen.
7 Continue pressing Enter to read the End User License Agreement.
8 When prompted, press Y to accept the license agreement.
9 The installation script asks if you want to create a packet store partition:
1 - Yes: A packet store partition is created and you will be able to record and
store packets on the vSTREAM virtual appliance. With packet recording enabled,
the minimum size of the secondary storage disk is 100 GB.
2 - No: No packet store partition is created. The virtual appliance will not
record packets. ASI monitoring is still available, as is on-demand capture, but
full-time recording is not available. If you choose No, you can create the packet
store partition later by reinstalling the vSTREAM application (refer to
"Reinstalling the vSTREAM Application" on page 6-38).
10 The installation script prompts you to configure the /xdr, /metadata, and /asi
partitions. These partitions (if created) are all located on the same drive space
allocated for packet storage. Because of this, the more space you allocate for
partitions, the less space you will have available for packet data.
/xdr: If the appliance will be configured to produce xDRs/ASRs for use
by nGeniusONE or nGenius Subscriber Intelligence, you MUST
allocate an /xdr partition for storage of xDRs/ASRs (eXtended
Data Records/Adaptive Session Records). This partition can be
eliminated if the appliance will not be used with these
applications. Default = 5% of available storage (if created); enter 0 to eliminate the partition.
Adaptive Session Records (ASRs) store session-level metadata
for transactions observed using supported protocols, for
example, an HTTP session or an email exchange. ASRs combine
statistics for entire sessions, providing end-to-end transaction
information. All TCP/UDP and SCTP parent applications and
user-created custom applications with the exception of Active
Agent, Peer-to-Peer, and a few other protocols support "ASR
applications." This support of deep-parsing ASRs at the child
application level for protocols such as HTTP, Oracle, AMEX, VISA,
SIP, DNS, DHCP and others provides a more granular collection
of session-level metrics. For example, you can monitor a wide
array of data for standard card processing, web, and
multi-media protocols, and custom applications.
/metadata: This partition is required for nGeniusONE features such as
remote decode, data capture, and trace file storage. Default = 1% of available storage.
Note: You are not asked to create this partition if you did not
create a packet storage partition.
Set a size for this partition based on your anticipated usage of
the features listed below:
nGeniusONE Decode View stores transient session data
files in /data and <installdirectory>/rtm/pa/data. Although
these files are automatically removed when the decode
session is closed, multiple simultaneous decode sessions
can also create temporary index files in the /metadata
partition consuming as much as 20 GB of space.
NOTE: If you choose the minimum /metadata partition size, it is
strongly recommended that you do not save remote trace files
on the vSTREAM appliance. These trace files consume space on
the partition and reduce the space available for the ASI
metadata required for nGeniusONE monitors and enablers.
Excluding the remote decode operations, files saved on this
partition must be managed manually. Users who anticipate
heavy use of any of the above features should increase the
partition size to a greater percentage of the total storage.
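As a sizing illustration, with a 1 TB secondary data disk and the default allocations, the /xdr partition would receive roughly 50 GB (5%) and the /metadata partition roughly 10 GB (1%), leaving approximately 940 GB for packet storage; any space allocated to the /asi partition reduces the packet storage further.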
11 The installation script displays the Pre-Installation Summary screen. Press Enter to
continue.
12 Installation begins. The installer presents an Installation Complete message when
finished. Press Enter to exit the installation script.
11 Enter a valid subnet mask for the Management port (required for IPv4 only) and press
Enter.
12 Enter a valid gateway IP address for the Management port and press Enter.
13 If you chose to assign both address types to the Management port, repeat Step 10 and
Step 12 for the IPv6 address; otherwise, continue with the next step.
14 Supply a simple hostname for the appliance and press Enter.
15 Supply a domain name for the appliance and press Enter.
16 Enter the IP address of a DNS server (nameserver). The script gives you the option of
entering multiple DNS server addresses to be used as backups in case the first DNS
server specified is unreachable.
17 vSTREAM supports NTP for synchronization of the appliance system clock.
Enter the IP address of one or more NTP servers. Servers are used as fallbacks in the
same order they are specified.
Note: Only IPv4 addresses are supported for specifying time sources; IPv6 addresses
are not supported.
18 Configure the appliance Time Zone.
19 When the script displays your settings, confirm that they are correct.
If the settings are correct, enter y and press Enter to continue.
If any settings need to be changed, enter n and press Enter. You can then re-enter your
settings.
20 When asked if you want to reboot, enter y and press Enter. The system automatically
propagates properties file changes and the appliance restarts.
Important: While the system is being reconfigured, you are unable to log in to the
appliance. Do not manually reboot the appliance during this period. Doing so can
cause undesirable results.
Note: Make sure that the physical Ethernet port being used for
SR-IOV has its MTU set to 9100 to enable support for jumbo frames.
DPDK must be used with PCI passthrough. You configure PCI passthrough settings after the
vSTREAM application is installed and before enabling DPDK (if you are using it).
This section describes how to configure a PCI device as a passthrough in VMware and add it
to the vSTREAM for analysis. The major steps are as follows:
"Enable Virtualization Extensions in BIOS" on page 6-42
"Enable PCI Passthrough to Physical Device" on page 6-42
"Add PCI Device to vSTREAM Virtual Appliance" on page 6-45
4 From the list of devices that appears, select the device you would like to make
available as a passthrough and click OK. The figure below illustrates the selection of
an Ethernet adapter.
6 Click the Reboot This Host button to reboot the host machine and complete the
procedure.
3 Select the New PCI Device entry device in the list at the left of the Edit Settings dialog
box and use the dropdown to select the physical device for the passthrough.
4 Click OK.
5 Open a terminal window to the vSTREAM appliance and use the following command
to disable the VF driver on the vSTREAM:
echo "blacklist ixgbe" > /etc/modprobe.d/blacklist-ixgbe.conf.
6 Restart the virtual appliance to complete the procedure.
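If you want to confirm the blacklist entry before restarting, you can display the file created above:
cat /etc/modprobe.d/blacklist-ixgbe.conf
The output should be the single line blacklist ixgbe.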
DPDK must be used with SR-IOV. You configure SR-IOV settings after installing the
InfiniStream application and before enabling DPDK.
Use the following procedure to configure SR-IOV support on the vSTREAM virtual machine:
1 Ensure that SR-IOV is enabled in BIOS on the target ESXi host.
2 Open a console connection to the ESXi host where the vSTREAM virtual machine is
installed.
3 Use the following command to review the physical adapters installed on the host,
including their ordering:
lspci | grep -i intel | grep -i 'ethernet\|network'
In response, the system lists the physical Ethernet ports installed on the host in the
order in which they are installed on the PCI bus. For example, the following output
shows a total of six ports, with four on the onboard adapter (X540-AT2) and two on an
external adapter (10G 2P X520 Adapter):
~ # lspci | grep -i intel | grep -i 'ethernet\|network'
0000:03:00.0 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic0]
0000:03:00.1 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic1]
0000:05:00.0 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic2]
0000:05:00.1 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic3]
0000:82:00.0 Network controller: Intel Corporation Ethernet 10G 2P X520 Adapter [vmnic4]
0000:82:00.1 Network controller: Intel Corporation Ethernet 10G 2P X520 Adapter [vmnic5]
4 The SR-IOV feature creates multiple logical ports out of a physical port. These logical
ports are referred to as Virtual Functions (VFs) and can be attached to the vSTREAM
virtual machine for monitoring.
You use the following command to create VFs from a physical port:
~ # esxcfg-module ixgbe -s max_vfs=W,X,Y,Z
The max_vfs argument is a comma-separated list with each number in the list
specifying the number of Virtual Functions for the corresponding physical port in the
same order they appear on the PCI bus. So, for example, consider how
max_vfs=0,10,0,10 maps to the physical ports shown in the grep output above:
The following shows how the grep output (in PCI bus order) maps to max_vfs=0,10,0,10:
0000:03:00.0 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic0] - 0
0000:03:00.1 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic1] - 10
0000:05:00.0 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic2] - 0
0000:05:00.1 Network controller: Intel Corporation Ethernet Controller X540-AT2 [vmnic3] - 10
0000:82:00.0 Network controller: Intel Corporation Ethernet 10G 2P X520 Adapter [vmnic4] - (not specified)
0000:82:00.1 Network controller: Intel Corporation Ethernet 10G 2P X520 Adapter [vmnic5] - (not specified)
To create 10 VFs out of both ports on the physical X520 adapter shown in the grep
output above, the command is as follows:
~ # esxcfg-module ixgbe -s max_vfs=0,0,0,0,10,10
Note:
5 Once you have created the desired VFs, you can verify your settings using the
esxcfg-module command with the -g switch. For example, the following command
verifies that we have created 10 VFs each on the ports in position 5 and 6 on the PCI
bus (the physical X520 adapter in our example):
~ # esxcfg-module -g ixgbe
ixgbe enabled = 1 options = 'max_vfs=0,0,0,0,10,10'
6 Reboot the ESXi host to apply your VF settings.
7 Open the vSphere Web Client and use the Host > Manage > Settings > PCI Devices
view to verify that VFs were successfully created.
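Depending on your ESXi release, you may also be able to confirm the VFs from the host command line. The following esxcli commands are available on recent ESXi versions (vmnic4 is the example uplink from the output above):
~ # esxcli network sriovnic list
~ # esxcli network sriovnic vf list -n vmnic4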
To attach a VF to the vSTREAM virtual appliance as a monitoring interface, use the
following procedure:
1 Open the vSphere Web Client and power off the Virtual InfiniStream appliance.
2 Right-click the Navigator panel entry for the Virtual InfiniStream appliance to which
you want to assign a VF and select the Edit Settings command.
4 Click the Add New Device dropdown at the top of the dialog box, select the Network
Adapter option, and click Add.
5 A New Network entry appears in the Edit Settings dialog box.
6 Click the New Network entry and set the following options for the monitoring vNIC:
Select a port group from the New Network dropdown. This
is the port group whose traffic will be monitored.
Check the Connect at power on option so that the vNIC connects when the
virtual appliance starts.
Set the Adapter Type to SR-IOV passthrough.
Use the Physical Function dropdown to select a physical function with SR-IOV
enabled and VFs assigned. The dropdown only lists devices with SR-IOV
enabled.
Set the Guest OS MTU Changes dropdown to either Allow or Disallow to
control whether the guest operating system is allowed to change the MTU.
Note: If the ixgbevf.conf file does not exist, you must create it.
Note: Keep in mind that assigning the same VF to multiple virtual machines is not
supported. vSphere manages the VFs for a given physical function and tracks VF
assignments.
7 Important: If you are using the Intel X710 NIC, you must clear VLAN settings on the
KVM virtual machine or OpenStack compute node in order to ensure that vSTREAM
sees and parses VLAN headers correctly:
For example, after configuring the VLAN ID on the VF, use the following command to
put VF #1 into its default VLAN configuration:
$> ip link set eno1 vf 1 vlan 0
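If you want to confirm the result, you can list the VFs on the physical interface (eno1 is the example interface name used above); each vf line in the output shows the VF's MAC address and VLAN assignment:
$> ip link show eno1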
Note: You enable Intel DPDK support after deploying the virtual appliance, installing
the InfiniStream application, and configuring PCI passthrough settings.
Use the following procedure to enable DPDK support on the vSTREAM virtual machine:
1 Open a console connection to the vSTREAM virtual machine.
2 Navigate to the /opt/NetScout/rtm/bin directory (you can type isbin and press Enter
as a shortcut to this directory).
3 Stop all vSTREAM processes with the following command:
./stopall
Run the ./PS command to list any running processes and manually kill any that remain.
For example:
pkill nsprobe
4 Run the /opt/dpdk/set-dpdk enable script to enable DPDK support. For example:
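The following is a representative sketch of the full sequence for this step, using only the commands and paths shown above (command output omitted):
cd /opt/NetScout/rtm/bin
./stopall
./PS
pkill nsprobe
/opt/dpdk/set-dpdk enable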
Disabling DPDK
1 If at some point you decide to disable DPDK support, you can do so with the
/opt/dpdk/set-dpdk disable command.
Using vSTREAM Virtual Appliances in a Clustered/vMotion Environment
When a vSTREAM virtual appliance is used in a clustered/vMotion environment, the
configuration of the vSTREAM instance must not be changed. This includes vNIC
ordering and connections to virtual switches.
Next Steps
The next steps are to license sufficient eight-vCPU blocks to cover the vCPUs provisioned for
all vSTREAM instances, add the appliance to nGeniusONE, and configure CDM/ASI settings to
monitor your network. Refer to "Configuring vSTREAM" on page 15-1 for details on these
steps.