Table of Contents

Lab Overview - HOL-1803-01-NET - Getting Started with VMware NSX
    Lab Guidance
Module 1 - NSX Manager Installation and Configuration (15 Minutes)
    Introduction
    Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 1
    Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 2
    Module 1 Conclusion
Module 2 - Logical Switching (30 minutes)
    Logical Switching - Module Overview
    Logical Switching
    Scalability and Availability
    Module 2 Conclusion
Module 3 - Logical Routing (60 minutes)
    Routing Overview
    Dynamic and Distributed Routing
    Centralized Routing
    ECMP and High Availability
    Prior to moving to Module 4 - Please complete the following cleanup steps
    Module 3 Conclusion
Module 4 - Edge Services Gateway (60 minutes)
    Introduction to NSX Edge Services Gateway
    Deploy Edge Services Gateway for Load Balancer
    Configure Edge Services Gateway for Load Balancer
    Edge Services Gateway Load Balancer - Verify Configuration
    Edge Services Gateway Firewall
    DHCP Relay
    Configuring L2VPN
    Native Bridging
    Module 4 Conclusion
HOL-1803-01-NET Page 1
HOL-1803-01-NET
Lab Overview - HOL-1803-01-NET - Getting Started with VMware NSX
Lab Guidance
Note: It will take more than 90 minutes to complete this lab. You should
expect to only finish 2-3 of the modules during your time. The modules are
independent of each other so you can start at the beginning of any module
and proceed from there. You can use the Table of Contents to access any
module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the
Lab Manual.
VMware NSX is the platform for Network Virtualization. You will gain hands-on
experience with Logical Switching, Distributed Logical Routing, Dynamic Routing,
Distributed Firewall and Logical Network Services. This lab introduces the core
capabilities of VMware NSX in vSphere environments used to enable Network and
Security virtualization.
• Module 1 - Installation Walk Through (15 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
• Module 2 - Logical Switching (30 minutes) - Basic - This module will cover the
creation of logical switches and add virtual machines to the logical switches.
• Module 3 - Logical Routing (60 minutes) - Basic - This module will demonstrate
the dynamic and distributed routing capabilities supported on the NSX platform
by providing routes between a 3-tier application.
• Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
showcase the capabilities of the Edge Services Gateway by providing common
services such as DHCP, VPN, NAT, Dynamic Routing, Load Balancing and Physical
to Virtual Bridging.
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.vmware.com
This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
1. The area in the RED box contains the Main Console. The Lab Manual is on the tab
to the Right of the Main Console.
2. A particular lab may have additional consoles found on separate tabs in the upper
left. You will be directed to open another specific console if needed.
3. Your lab starts with 90 minutes on the timer. The lab cannot be saved, so all your work must be done during the lab session. However, you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes; each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes; each click gives you an additional hour.
During this module, you will input text into the Main Console. Besides typing it in directly, there are two very helpful methods that make it easier to enter complex data.
You can also click and drag text and Command Line Interface (CLI) commands directly
from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.
In this example, you will use the Online Keyboard to enter the "@" sign used in email
addresses. The "@" sign is Shift-2 on US keyboard layouts.
When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs take advantage of this by running labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab you are using is a self-contained pod and does not have the full Internet access Windows requires to verify activation. Without that access, this automated process fails and you see this watermark.
Please check that your lab has finished all of its startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Module 1 - NSX Manager Installation and Configuration (15 Minutes)
Introduction
VMware NSX is the leading network virtualization platform that delivers the operational
model of a virtual machine for the network. Just as server virtualization
provides extensible control of virtual machines running on a pool of server hardware,
network virtualization with NSX provides a centralized API to provision and configure
many isolated logical networks that run on a single physical network.
Logical networks decouple virtual machine connectivity and network services from the physical network, giving cloud providers and enterprises the flexibility to place or migrate virtual machines anywhere in the data center while still supporting layer-2 / layer-3 connectivity and layer 4-7 network services.
In this module, we will use an Interactive Simulation to focus on how to perform the actual deployment of NSX in your environment. In the lab environment itself, the deployment has already been completed for you.
NSX Components
This part of the lab is presented as a Hands-on Labs Interactive Simulation. This allows you to experience steps which are too time-consuming or resource-intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.
*** SPECIAL NOTE *** The simulation you are about to do consists of two parts. The first part finishes at the end of the NSX Manager configuration. To continue to the second half of the simulation, you will need to click "Return to the Lab" in the upper right of the screen. The manual will also outline these steps at the conclusion of the NSX Manager configuration.
Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 1
1. Click here to open the interactive simulation. It will open in a new browser window or tab.
2. When finished, click the "Return to the lab" link to continue with this lab.
Hands-on Labs Interactive Simulation: NSX Installation and Configuration - Part 2
1. Click here to open the interactive simulation. It will open in a new browser window or tab.
2. When finished, click the "Return to the lab" link to continue with this lab.
Module 1 Conclusion
In this module we showed how simply NSX can be installed and configured to start providing Layer 2 through Layer 7 services in software.
We covered the installation and configuration of the NSX Manager appliance, which included deployment, integration with vCenter, and configuring logging and backups. We then covered the deployment of NSX Controllers as the control plane, and the installation of the vSphere Installation Bundles (VIBs), which are kernel modules pushed down to the hypervisor. Finally, we showed the automated deployment of VXLAN Tunnel Endpoints (VTEPs), the creation of a pool of VXLAN Network Identifiers (VNIs) and the creation of a Transport Zone.
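The VNI pool mentioned above replaces VLAN IDs for segment identification. A VNI is a 24-bit field (per the VXLAN specification, RFC 7348), which is where VXLAN's scale advantage over 12-bit VLAN IDs comes from. A quick back-of-the-envelope comparison:

```python
# VLAN IDs are 12 bits; values 0 and 4095 are reserved, leaving 4094 usable.
usable_vlans = 2 ** 12 - 2

# VXLAN VNIs are 24 bits (RFC 7348), giving over 16 million segment IDs.
vni_space = 2 ** 24

print(usable_vlans)  # 4094
print(vni_space)     # 16777216
```

This is why a single NSX deployment can carve out far more logical segments than a VLAN-based design.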
If you are looking for additional information on deploying NSX, review the NSX 6.3 Documentation Center via the URL below:
• Go to http://tinyurl.com/hkexfcl
Module 2 - Logical Switching (30 minutes)
Logical Switching - Module Overview
In this module, we will:
• Review the NSX Controller cluster, which eliminates the requirement for multicast protocol support on the physical fabric and also provides functions such as VTEP, IP and MAC resolution.
• Create a Logical Switch and attach two VMs to it.
• Review the scalability and high availability of the NSX platform.
Logical Switching
In this section, we will review the VXLAN configuration, create a logical switch and connect virtual machines to it.
Open a browser by double-clicking the Google Chrome icon on the desktop. The home page should be the vSphere Web Client; otherwise, click on the vSphere Web Client taskbar icon for Google Chrome.
1. Click Installation.
2. Click Host Preparation.
You will see that the data plane components, also called network virtualization
components, are installed on the hosts in our clusters. These components include the
following: Hypervisor level kernel modules for Port Security, VXLAN, Distributed Firewall
and Distributed Routing.
Firewall and VXLAN functions are configured and enabled on each cluster after the
installation of the network virtualization components. The port security module assists
the VXLAN function while the distributed routing module is enabled once the NSX edge
logical router control VM is configured.
As shown in the diagram, the hosts in the compute clusters are configured with VXLAN
Tunnel End Point (VTEP). The environment uses the 192.168.130.0/24 subnet for the
VTEP pool.
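As a quick sanity check on the pool sizing, Python's standard ipaddress module can show how many VTEP addresses a /24 provides. This is a sketch using the subnet stated above:

```python
import ipaddress

# VTEP address pool used by this lab environment.
vtep_pool = ipaddress.ip_network("192.168.130.0/24")

# Exclude the network and broadcast addresses to get usable host IPs.
usable_vteps = vtep_pool.num_addresses - 2
print(usable_vteps)  # 254
```

A /24 pool therefore comfortably covers the handful of hosts (one or more VTEPs each) in a lab, and scales to a few hundred VTEPs in a real deployment.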
One of the key challenges with VXLAN deployments in the past was that multicast protocol support was required from physical network devices. The NSX platform addresses this challenge with a controller-based VXLAN implementation, removing any need to configure multicast in the physical network. This Unicast mode is the default, and customers do not have to configure any multicast addresses while defining the logical network pool.
If Multicast replication mode is chosen for a given Logical Switch, NSX relies on the
native L2/L3 multicast capability of the physical network to ensure VXLAN encapsulated
multi-destination traffic is sent to all VTEPs. In this mode, a multicast IP address must be
associated to each defined VXLAN L2 segment (i.e., Logical Switch). L2 multicast
capability is used to replicate traffic to all VTEPs in the local segment (i.e., VTEP IP
addresses that are part of the same IP subnet). Additionally, IGMP snooping should be
configured on the physical switches to optimize the delivery of L2 multicast traffic.
Hybrid mode offers operational simplicity similar to unicast mode – IP multicast routing
configuration is not required in the physical network – while leveraging the L2 multicast
capability of physical switches.
• Unicast: The control plane is handled by an NSX Controller. All unicast traffic leverages headend replication. No multicast IP addresses or special network configuration is required.
• Multicast: Multicast IP addresses on the physical network are used for the control plane. This mode is recommended only when you are upgrading from older VXLAN deployments, and it requires PIM/IGMP on the physical network.
• Hybrid: An optimized unicast mode that offloads local traffic replication to the physical network (L2 multicast). This requires IGMP snooping on the first-hop switch but does not require PIM; the first-hop switch handles traffic replication for the subnet. Hybrid mode is recommended for large-scale NSX deployments.
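One practical consequence of VXLAN encapsulation, regardless of replication mode, is extra header overhead on every frame. The sketch below tallies the standard RFC 7348 framing with an IPv4 outer header; the commonly cited guidance of a physical MTU of at least 1600 bytes is general VXLAN practice, an assumption not stated by this lab:

```python
# Per-packet VXLAN encapsulation overhead (RFC 7348, IPv4 outer header).
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header, no options
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
vm_mtu = 1500  # default guest MTU

# The physical underlay must carry the inner frame plus the overhead.
print(overhead)           # 50
print(vm_mtu + overhead)  # 1550, hence the common >= 1600 MTU guidance
```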
1. Click on Segment ID. Note that the Multicast addresses section above is blank.
As mentioned earlier, this is because we are using the default unicast mode with
a controller-based VXLAN implementation.
2. Double-click on RegionA0-Global-TZ.
A transport zone defines the span of a logical switch: it dictates which clusters participate in a particular logical network. As you add new clusters to your datacenter, you can extend the transport zone and thus increase the span of the logical network. Once a logical switch spans all compute clusters, mobility and placement barriers (due to limited VLAN boundaries) are removed.
1. Click on the Manage tab. It will show the clusters that are part of this Transport
Zone.
After looking at the various NSX components and the VXLAN configuration, we will now create an NSX logical switch. A logical switch creates a logical broadcast domain or segment to which application or tenant virtual machines can be logically wired.
If you have already navigated to other pages, return to the Networking & Security
Section via the Home Section on vSphere Web Client (the steps can be found at the
start of this chapter).
We will cover more details on NSX Edge and routing in the subsequent modules.
For now, we will need to connect our logical switch to the NSX Edge Services Gateway,
Perimeter-Gateway-01. This will provide connectivity between VMs that are connected to
the logical switch and VMs that are not connected to the logical switch.
1. Click Finish.
After configuring the logical switch and providing access to the external network, it is
time to connect the web application virtual machines to this network.
To add the VMs to the logical switch that we created, we need to make sure that each VM's network adapter is enabled and connected to the correct vDS.
1. Click Finish.
Open PuTTY
1. Click Start.
2. Click the PuTTY application icon in the Start Menu.
You are connecting from the Main Console, which is in the 192.168.110.0/24 subnet. The traffic will go through the NSX Edge and then to the web interface.
1. Select web-03a.corp.local.
2. Click Open.
Note: If web-03a.corp.local does not show up as an option, enter 172.16.40.11 as the IP address in the Host Name text field.
Remember to use the SEND TEXT option to send this command to the console.
(See Lab Guidance)
ping -c 2 web-04a
If you see DUP! packets, that is due to the nature of VMware's nested lab environment and will not happen in a production environment.
Do not close your PuTTY session. Minimize the window for later use.
For resiliency and performance, production deployments must use an NSX Controller cluster with multiple controller nodes. The controller cluster is a scale-out distributed system in which each controller node is assigned a set of roles that define the types of tasks the node can implement. Controller nodes are deployed in odd numbers; the current best practice (and the only supported configuration) is a cluster of three controller nodes providing active-active-active load sharing and redundancy.
If an NSX Controller fails, data plane (VM) traffic is not affected. Traffic continues because the logical network information has already been pushed down to the logical switches (the data plane). However, you will not be able to make changes (add/move/change) without the control plane (the NSX Controller cluster).
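The odd node count is about quorum: a majority-based cluster of n nodes keeps working as long as more than half of them are up. A small sketch of the general majority rule (illustrative, not NSX-specific code):

```python
def tolerated_failures(nodes: int) -> int:
    """Nodes a majority-quorum cluster can lose while keeping a majority."""
    if nodes < 1 or nodes % 2 == 0:
        raise ValueError("deploy an odd number of nodes")
    majority = nodes // 2 + 1
    return nodes - majority

# The supported three-node NSX Controller cluster survives one node loss.
print(tolerated_failures(3))  # 1
```

Note that a four-node cluster would tolerate no more failures than a three-node one (its majority is 3), which is why even node counts add cost without adding resilience.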
1. Click Installation.
2. Click Management.
Under the NSX Controller nodes section, you can see that there are three NSX
controller nodes. NSX controller nodes are always deployed in odd numbers for high
availability and scalability.
1. Expand RegionA01
2. Click on one of the three NSX Controllers
3. Click Summary tab.
Observe the host that this NSX Controller resides on. The remaining two controllers reside on two different hosts as well. In a production environment, all three NSX Controllers should reside on three different hosts, with DRS anti-affinity rules to avoid losing multiple controllers to a single host outage.
Module 2 Conclusion
In this module, we demonstrated the following benefits of the NSX platform:
1. Network agility, such as the speedy provisioning and configuration of logical switches to interface with virtual machines and external networks.
2. Scalability of the NSX architecture, such as the ability of a transport zone to quickly span multiple compute clusters, and the NSX Controller cluster's capability as a scale-out distributed system.
If you are keen to learn more about NSX, please visit the NSX 6.3 Documentation Center
via the following URL:
• Go to https://tinyurl.com/y9zy7cpn
Module 3 - Logical Routing (60 minutes)
Routing Overview
Lab Module Overview
In the previous module, we experienced the ease and convenience of creating isolated
logical switches/networks with a few clicks. To provide communication across these
isolated logical layer 2 networks, routing support is essential. In the NSX platform, the
distributed logical router allows you to route traffic between logical switches and the
routing capability is distributed in the hypervisor. By incorporating this logical routing
component, NSX can reproduce complex routing topologies in the logical space. For
example, a three-tier application can be connected to three logical switches, with the routing between the tiers handled by this distributed logical router.
This module will help us understand some of the routing capabilities supported in the
NSX platform and how to utilize these capabilities while deploying a three-tier
application.
• Examine traffic flow when the routing is handled by an external physical router or NSX Edge Services Gateway.
• Configure the Distributed Logical Router and its Logical Interfaces (LIFs) to enable routing between the app tier and db tier of the 3-tier application.
• Configure dynamic routing protocols on the Distributed Logical Router and NSX Edge Services Gateway, and understand how to control the advertisement of internal routes to the external router.
• Scale and protect the NSX Edge Services Gateway through the use of Equal Cost Multipath (ECMP) routing.
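ECMP works by hashing each flow's 5-tuple to pick one of the equal-cost next hops, so all packets of a flow stay on one path and are never reordered. A minimal sketch of the idea, with illustrative addresses; real routers use hardware hash functions, not SHA-256:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Hash the flow 5-tuple to a next-hop index in [0, num_paths)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Every packet of a given flow maps to the same path...
path = ecmp_next_hop("172.16.10.11", "192.168.100.3", 40000, 80, "tcp", 2)
assert path == ecmp_next_hop("172.16.10.11", "192.168.100.3", 40000, 80, "tcp", 2)
# ...while different flows spread across the available equal-cost paths.
```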
Many of the modules will have you enter Command Line Interface (CLI) commands. There are two ways to send CLI commands to the lab.
First, use the console's SEND TEXT feature:
1. Highlight the CLI command in the manual and use Control+c to copy it to the clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard into the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the environment, allowing you to easily copy and paste complex commands or passwords into the associated utilities (CMD, PuTTY, console, etc.). Certain characters are often not present on keyboards throughout the world; this text file is included for keyboard layouts which do not provide those characters.
The above picture shows this lab's environment where both Application VM and
Database VM reside on the same physical host. The red arrows show the traffic flow
between the two VMs.
5. The traffic is sent to the host which the Database VM is residing on.
6. The traffic reaches the Database VM from the host.
At the end of this lab, we will review the traffic flow diagram after distributed routing is
configured. This will help us to understand the positive impact that distributed routing
has on network traffic.
• Open a browser by double clicking on the Google Chrome icon on the desktop.
The home page should be the vSphere Web Client. Otherwise, click on the vSphere Web
Client Taskbar icon for Google Chrome.
Before you start the configuration for Distributed Routing, let's verify that the 3-tier Web
Application is working correctly. The three tiers of the application (web, app and
database) are on different logical switches and NSX Edge is providing routing between
these tiers.
• The web server will return a web page with customer information stored in the
database.
As you saw in the earlier topology, the three logical switches, or three tiers of the application, terminate on the Perimeter Gateway (NSX Edge), which provides the routing between the three tiers. We are going to change that topology by removing the App and DB interfaces from the Perimeter Gateway (NSX Edge) and re-creating them on the Distributed Router (NSX Edge). To save time, a Distributed Router (NSX Edge) has been deployed for you.
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
You will see the configured interfaces and their properties. Information includes the vNIC
number, interface name, interface type (Internal, Uplink or Trunk) and interface status
(active or disabled).
After removing the App and DB interfaces from the Perimeter-Gateway-01 (NSX Edge),
we will navigate back to the Networking & Security section to access the Distributed-
Router-01 (NSX Edge).
We will begin configuring Distributed Routing by adding the App and DB interfaces to
the Distributed Router (NSX Edge).
1. Double-click Distributed-Router-01.
1. Click on Manage.
2. Click on Settings.
3. Click on Interfaces to display all the interfaces configured on the Distributed
Router (NSX Edge).
Add Subnets
• Repeat the previous two steps to add and configure the DB_Tier Interface
on Distributed-Router-01 (NSX Edge).
• Name DB_Tier.
• Connect to DB_Tier_Logical_Switch.
• IP address 172.16.30.1 and a subnet prefix length of 24.
After the two interfaces are configured on Distributed-Router-01 (NSX Edge), the interface configurations are automatically pushed to every host in the environment. Each host runs a distributed routing (DR) kernel loadable module that handles routing between VMs. In our lab scenario, the routing between the App and DB interfaces is handled by this kernel module in the host instead of by the Perimeter Gateway (NSX Edge). When VMs connected to different subnets reside on the same host, traffic no longer takes the suboptimal path shown in the earlier traffic flow diagram.
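Conceptually, the DR kernel module performs a longest-prefix match over the router's LIF table to decide where a packet exits. A sketch using Python's ipaddress module: the DB subnet 172.16.30.0/24 is from this lab's configuration, while the App subnet 172.16.20.0/24 is an assumed value for illustration only:

```python
import ipaddress

# Hypothetical LIF table for a distributed router. The DB subnet matches the
# lab configuration; the App subnet is assumed for illustration.
LIFS = {
    ipaddress.ip_network("172.16.20.0/24"): "App_Tier",
    ipaddress.ip_network("172.16.30.0/24"): "DB_Tier",
}

def egress_lif(dst_ip: str):
    """Longest-prefix match over the LIF table (all /24 here)."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in LIFS if dst in net]
    if not matches:
        return None  # no connected route; traffic would go to the default gateway
    return LIFS[max(matches, key=lambda n: n.prefixlen)]

# A packet for a DB VM is routed entirely within the local host's kernel module.
print(egress_lif("172.16.30.10"))  # DB_Tier
```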
After making the changes for routing to be handled by the distributed router, you will
notice that access to the 3-tier Web Application fails. This is because there is currently
no route between the Web Servers and App/DB VMs.
1. Click on HOL - Customer Database browser tab (this tab was opened in the
previous steps).
Note: If you close the HOL - Customer Database browser tab earlier, open a new
browser tab and click Customer DB App bookmark.
1. Click Refresh.
The application will take a few seconds to time out and you may need to click "X" to
stop the browser. If you see customer data, it may be cached data and you will need to
close and re-open the browser to correct it.
Note: If you do have to re-open the browser, after verifying that the 3-tier application is not working, click the vSphere Web Client bookmark in the browser and log in again with the username "root" and password "VMware1!". Then click on Networking and Security, then Edge Appliances, and finally double-click on "Distributed-Router".
1. Click Routing.
2. Click Global Configuration.
3. Click Edit to change Dynamic Routing Configuration.
1. Select the IP address of the Uplink interface as the default Router ID. In our case,
the Uplink interface is Transit_Network_01 and the IP address is 192.168.5.2.
2. Click OK
Note: The router ID is a 32-bit identifier denoted as an IP address, and is important in the operation of OSPF because it indicates the router's identity in an autonomous system. In our lab scenario, we are using a router ID that is the same as the IP address of the uplink interface on the NSX Edge, which is acceptable but not necessary. The screen will return to the Global Configuration section with the option to Publish Changes.
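Since the router ID is just a 32-bit number written in dotted-quad form, any unique value works; reusing the uplink IP is a convention, not a requirement. Python's ipaddress module shows the equivalence for the lab's value:

```python
import ipaddress

# The router ID 192.168.5.2 from the lab, viewed as its underlying 32-bit value.
router_id = ipaddress.IPv4Address("192.168.5.2")
print(int(router_id))  # 3232236802

# The same integer round-trips back to the dotted-quad notation OSPF displays.
print(ipaddress.IPv4Address(3232236802))  # 192.168.5.2
```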
Publish Changes
1. Click OSPF.
2. Click Edit to change OSPF Configuration. This will open the OSPF Configuration
dialog box.
Enable OSPF
Note: The Protocol Address is required for sending control traffic to the Distributed Router Control VM, while the Forwarding Address is used by the router datapath module in the hosts to forward datapath packets. The separation of control plane and data plane traffic in NSX means that the routing instance's data forwarding capability is maintained even when the control function is restarted or reloaded; this capability is referred to as "Graceful Restart" or "Non-stop Forwarding". The screen will return to the Global Configuration section with the option to Publish Changes. However, please DO NOT publish changes yet. Instead of publishing changes for every configuration, we will complete all the configurations and publish them all at one time.
1. Click Green Plus icon. This will open the New Area Definition dialog box.
2. Enter 10 as the Area ID. Leave the other settings as default.
3. Click OK
Note: The Area ID for OSPF is very important, and there are several types of OSPF
areas. Please ensure that the NSX Edge is defined in the correct area so that it
works properly with the OSPF configuration within the network.
1. Click Green Plus icon. This will open the New Area to Interface Mapping
dialog box.
2. Select Transit_Network_01 as the Interface.
3. Select 10 as the Area.
4. Click OK.
Publish Changes
Ensure the OSPF configuration on Distributed-Router-01 (NSX Edge) matches the picture
above.
Publish Changes
1. Double-click Perimeter-Gateway-01.
1. Click Manage.
2. Click Routing.
3. Click OSPF.
4. Click Edit to change OSPF Configuration. This will open the OSPF Configuration
dialog box.
Enable OSPF
1. Click Green Plus icon. This will open the New Area Definition dialog box.
2. Enter 10 as the Area ID. Leave the other settings as default.
3. Click OK
Note: The Area ID for OSPF is very important, and there are several types of OSPF
areas. Please ensure that the NSX Edge is defined in the correct area so that it
works properly with the OSPF configuration within the network.
1. Click Green Plus icon. This will open the New Area to Interface Mapping
dialog box.
2. Select Transit_Network_01 as the vNIC.
3. Select 10 as the Area.
4. Click OK.
Publish Changes
You will notice that Perimeter-Gateway-01 (NSX Edge) has already been configured for
dynamic routing with BGP. This dynamic routing configuration allows Perimeter-
Gateway-01 (NSX Edge) to communicate and distribute routes to the router running the
overall lab.
1. Check OSPF.
2. Verify BGP is checked.
3. Click OK.
Note: BGP is the routing protocol used between Perimeter-Gateway-01 (NSX Edge) and
the vPod Router.
Publish Changes
The new topology shows route peering between Distributed Router and Perimeter
Gateway (NSX Edge). Routes to any network connected to the Distributed Router will be
distributed to the Perimeter Gateway (NSX Edge). In addition, we also have control over
the routing from the Perimeter Gateway to the physical network.
The routing information is being exchanged between the Distributed Router and
Perimeter Gateway. Once the routing between the two NSX Edges is established,
connectivity to the 3-tier Web Application will be restored. Let's verify that the routing is
functional by accessing the 3-tier Web Application.
1. Click on HOL - Customer Database browser tab (this tab was opened in the
previous steps). However, it may show 504 Gateway Time-out instead.
2. Click Refresh.
Note: It may take a minute for route propagation as the lab is a nested environment.
In this section, we have successfully configured dynamic and distributed routing. In the
next section, we will review centralized routing with the Perimeter Gateway (NSX Edge).
Centralized Routing
In this section, we will look at various elements to see how the routing is done
northbound from the edge. This includes how OSPF dynamic routing is controlled,
updated, and propagated throughout the system. We will verify the routing on the
perimeter edge appliance through the virtual routing appliance that runs and routes the
entire lab.
Special Note: On the desktop, you will find a file named README.txt. It contains the CLI
commands needed in the lab exercises. If you can't type them, you can copy and paste
them into the PuTTY sessions. If you see a number in curly braces, such as {1}, it tells
you to look for that CLI command for this module in the text file.
The above diagram shows the current topology where OSPF is redistributing the routes
between Perimeter Gateway and Distributed Router. In addition, we also see the
northbound link from Perimeter Gateway to the vPod Router.
First we will confirm the Web App is functional, then we will log into the NSX Perimeter
Gateway to view OSPF neighbors and see existing route distribution. This will show how
the Perimeter Gateway is learning routes from not only the Distributed Router, but the
vPod router that is running the entire lab.
Before you start the configuration for Distributed Routing, let's verify that the 3-tier Web
Application is working correctly. The three tiers of the application (web, app and
database) are on different logical switches and NSX Edge is providing routing between
these tiers.
• The web server will return a web page with customer information stored in the
database.
Navigate to Perimeter-Gateway VM
1. Expand RegionA01.
2. Select Perimeter-Gateway-01-0.
3. Click on the Black Screen.
When the VM console launches in the browser tab, it will appear as a black screen. Click
inside the black screen and press Enter a few times to make the VM console appear
from the screensaver.
1. Username: admin
2. Password: VMware1!VMware1!
Many of the modules will have you enter Command Line Interface (CLI) commands.
There are two ways to send CLI commands to the lab.
First, you can copy and paste from the manual:
1. Highlight the CLI command in the manual and use Control+c to copy it to the
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the environment,
allowing you to easily copy and paste complex commands or passwords into the
associated utilities (CMD, PuTTY, console, etc.). Certain characters are often not present
on keyboards throughout the world. This text file is also included for keyboard layouts
which do not provide those characters.
1. BGP neighbor is 192.168.100.1 - This is the router ID of the vPod Router inside
the NSX environment.
2. Remote AS 65002 - This is the autonomous system number of the vPod Router's
external network.
3. BGP state = Established, up - This means the BGP neighbor adjacency is
complete and the BGP routers will send update packets to exchange routing
information.
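A BGP session walks through a standard sequence of states before routes are exchanged, which is why "Established" is the value to look for in the output above. A small reference sketch (state names per the BGP specification; the helper function is illustrative):

```python
# The standard BGP finite-state machine, in order of progression.
BGP_STATES = ["Idle", "Connect", "Active", "OpenSent", "OpenConfirm", "Established"]

def session_is_up(state: str) -> bool:
    """Routing updates are only exchanged once the session reaches Established."""
    return state == "Established"

# Any earlier state means the adjacency is still being negotiated.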
show ip route
1. B indicates that this route is learned via BGP. This is also the default route and it
originates from the vPod Router (192.168.100.1).
2. C indicates that this route is directly connected to Perimeter-Gateway-01.
172.16.10.0/24 is the Web-Tier Logical Switch (network segment).
3. O indicates that this route is learned via OSPF from Distributed-Router-01
(192.168.5.2). 172.16.20.0/24 and 172.16.30.0/24 are the App-Tier Logical
Switch and DB-Tier Logical Switch (network segments).
There could be a situation where you would only want BGP routes to be distributed
inside of the virtual environment, but not with the physical world. We are able to control
that route distribution easily from the NSX Edge configuration.
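A routing table like the one above always prefers the most specific matching prefix, falling back to the BGP-learned default route only when no connected or OSPF route matches. A minimal sketch of that lookup using the lab's routes (the origin codes are carried along as illustrative metadata):

```python
import ipaddress

# A toy routing table mirroring the lab output: (prefix, next hop, origin).
# B = learned via BGP, C = directly connected, O = learned via OSPF.
ROUTES = [
    ("0.0.0.0/0",      "192.168.100.1", "B"),  # default from the vPod Router
    ("172.16.10.0/24", None,            "C"),  # Web-Tier, directly connected
    ("172.16.20.0/24", "192.168.5.2",   "O"),  # App-Tier via Distributed Router
    ("172.16.30.0/24", "192.168.5.2",   "O"),  # DB-Tier via Distributed Router
]

def lookup(dest: str):
    """Return the most specific (longest-prefix) matching route."""
    addr = ipaddress.IPv4Address(dest)
    matches = [(p, nh, o) for p, nh, o in ROUTES
               if addr in ipaddress.IPv4Network(p)]
    return max(matches, key=lambda r: ipaddress.IPv4Network(r[0]).prefixlen)
```

An App-Tier host such as 172.16.20.5 matches the OSPF-learned /24 rather than the default route, while an external address falls through to the BGP default.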
1. Click Manage.
2. Click Routing.
3. Click BGP.
Publish Change
The VM console may appear as a black screen in the browser tab. Click inside the black
screen and press Enter a few times to make the VM console appear from the
screensaver.
You will notice that vPod Router (192.168.250.1) has been dropped from the list.
Show Routes
show ip route
Notice that the only routes being learned via OSPF are from the Distributed Router
(192.168.5.2).
Since no routes exist between the control center and the virtual networking
environment, the web app should fail.
1. Click on HOL - Customer Database browser tab (this tab was opened in the
previous steps).
2. Click Refresh.
The application will take a few seconds to time out, and you may need to click "X" to
stop the browser. If you see customer data, it may be cached data, and you will need to
close and re-open the browser to correct it.
Now let's get the route peering between the Perimeter-Gateway-01 and vPod Router
back in place.
Publish Change
The VM console may appear as a black screen in the browser tab. Click inside the black
screen and press Enter a few times to make the VM console appear from the
screensaver.
You will notice that vPod Router (192.168.100.1) is shown as a neighbor now.
show ip route
Show Routes
The default route from the vPod Router (192.168.100.1) is now back in the list.
With the routes back in place, the Web App should be functional again.
1. Click on HOL - Customer Database browser tab (this tab was opened in the
previous steps).
2. Click Refresh.
We have successfully completed this section of the lab and will move on to ECMP and
High Availability with the NSX Edges in the next section.
ECMP is a routing strategy that allows next-hop packet forwarding to a single destination
to occur over multiple best paths. These best paths can be added statically or as a
result of metric calculations by dynamic routing protocols like OSPF or BGP. The Edge
Services Gateway uses the Linux network stack implementation: a round-robin algorithm
with a randomness component. After a next hop is selected for a particular source and
destination IP address pair, the route cache stores the selected next hop, and all packets
for that flow go to the selected next hop. The Distributed Logical Router uses an XOR
algorithm to determine the next hop from a list of possible ECMP next hops. This
algorithm uses the source and destination IP addresses of the outgoing packet as
sources of entropy.
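The DLR-style flow-based selection described above can be sketched as follows. This is a minimal illustration, not the actual datapath code; the next-hop addresses are the two Perimeter Gateway uplinks used later in this lab. The key property is that the same source/destination pair always maps to the same next hop (so packets of a flow never reorder), while different flows spread across the available paths.

```python
import ipaddress

def ecmp_next_hop(src: str, dst: str, next_hops: list) -> str:
    """Pick a next hop by XOR-hashing the source/destination address pair."""
    key = int(ipaddress.IPv4Address(src)) ^ int(ipaddress.IPv4Address(dst))
    return next_hops[key % len(next_hops)]

# The two Perimeter Gateways configured in this section.
hops = ["192.168.100.3", "192.168.100.4"]
choice = ecmp_next_hop("172.16.20.11", "192.168.100.1", hops)
```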
Now we will configure a new Perimeter Gateway, and establish an ECMP cluster between
the Perimeter Gateways for the Distributed Logical Router to leverage for increased
capacity and availability. We will test availability by shutting down one of the Perimeter
Gateways, and watching the traffic path change.
We first need to modify the existing Perimeter Gateway NSX Edge to remove the
secondary IP address:
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Select vNIC 0.
5. Click on the Edit pencil.
1. Click OK.
Set Password
Note: All passwords for NSX Edges are 12-character complex passwords.
1. Click Green Plus icon. The Add NSX Edge Appliance dialog box will appear.
2. Select RegionA01-MGMT01 for Cluster/Resource Pool.
3. Select RegionA01-ISCSI01-MGMT01 for Datastore.
4. Select esx-04a.corp.local for Host.
5. Click OK.
Continue Deployment
1. Click Next.
1. Click Green Plus icon. This will add the first interface.
We have to pick the northbound switch interface (a distributed port group) for this
Perimeter Gateway.
1. Click Green Plus icon. This will add the second interface.
We have to pick the northbound switch interface (VXLAN Backed Logical Switch) for this
Perimeter Gateway.
Continue Deployment
Ensure the IP Addresses and Subnet Prefix Length information are the same as the
picture.
1. Click Next.
We are removing the default gateway since information is received via OSPF.
Finalize Deployment
Edge Deploying
1. The NSX Edges section will show that there is 1 Installing while Perimeter-
Gateway-02 is being deployed.
2. The status for Perimeter-Gateway-02 will indicate that it is Busy. This means the
deployment is in process.
3. Click refresh icon on the vSphere Web Client to see the deployment status of
Perimeter-Gateway-02.
Once the status for Perimeter-Gateway-02 indicates that it is Deployed, we can move on
to the next step.
We will need to configure OSPF on Perimeter-Gateway-02 (NSX Edge) before ECMP can
be enabled.
1. Double-click Perimeter-Gateway-02.
1. Click Manage.
2. Click Routing.
3. Select Global Configuration.
4. Click Edit to change Dynamic Routing Configuration.
5. Select Uplink -192.168.100.4 as Router ID.
6. Click OK.
Publish Changes
Enable OSPF
1. Click OSPF.
2. Click Edit to change OSPF Configuration.
3. Check Enable OSPF.
4. Click OK.
Now the same must be done for the downlink interface to the Distributed Router.
Publish Changes
Enable BGP
1. Select BGP.
2. Click Edit to change BGP Configuration.
3. Check Enable BGP.
4. Enter 65001 as the Local AS.
5. Click OK.
Publish Changes
We must now enable BGP and OSPF route redistribution in order for the routes to be
accessible through this edge.
Publish Changes
Enable ECMP
We are now going to enable ECMP on the Distributed Router and Perimeter Gateways.
1. Click Manage.
2. Click Routing.
3. Click Global Configuration.
4. Click Enable.
Publish Change
1. Double-click Perimeter-Gateway-01.
1. Click Manage.
2. Click Firewall.
3. Click Disable.
Publish Change
1. Click Manage.
2. Click Routing.
3. Click Global Configuration.
4. Click Enable.
Publish Change
1. Click Back.
1. Double-click Perimeter-Gateway-02
1. Click Manage.
2. Click Firewall.
3. Click Disable.
Publish Change
1. Click Manage.
2. Click Routing.
3. Click Global Configuration.
4. Click Enable.
Publish Change
Topology Overview
At this stage, this is the topology of the lab. This includes the new Perimeter Gateway
that has been added, routing configured, and ECMP turned on.
Let's now access the distributed router to ensure that OSPF is communicating and ECMP
is functioning.
1. Click Refresh.
2. Expand RegionA01.
3. Select Distributed-Router-01-0.
4. Select Summary.
5. Click on VM Console.
Access VM Console
When the VM console launches in the browser tab, it will appear as a black screen. Click
inside the black screen and press Enter a few times to make the VM console appear
from the screensaver.
1. Username: admin
2. Password: VMware1!VMware1!
The first thing we will do is look at the OSPF neighbors to the Distributed Router.
This shows us that the Distributed Router has two OSPF neighbors. The neighbors are
the Perimeter-Gateway-1(192.168.100.3) and Perimeter-Gateway-2
(192.168.100.4).
show ip route
Note: The vPod Router network segments and default route are advertised via both
Perimeter Gateway network addresses. The red arrows above are pointing to the
addresses of both the Perimeter-Gateway-01 and Perimeter-Gateway-02.
Note: To release your cursor from the window, press Ctrl+Alt keys.
Now we will look at ECMP from the vPod Router, which simulates a physical router in
your network.
We must telnet into the module that controls BGP in the vPod Router.
We must telnet into the module that controls OSPF in the vPod Router.
Show Routes
show ip bgp
2. In this section you notice that all networks have two next hop routers listed, and this
is because Perimeter-Gateway-01 (192.168.100.3) and Perimeter-Gateway-02
(192.168.100.4) are both Established neighbors for these networks.
At this point, any traffic connected to the distributed router can egress out either of the
perimeter gateways with ECMP.
1. Expand RegionA01.
2. Right-click Perimeter-Gateway-01-0.
3. Click Power.
4. Click Shut Down Guest OS.
Confirm Shutdown
1. Click Yes.
With ECMP, BGP, and OSPF in the environment, we are able to dynamically change
routes in the event of a failure in a particular path. We will now simulate one of the
paths going down and route redistribution occurring.
ping -t db-01a
You will see pings from the control center to the database server (db-01a) start.
Leave this window open and running as you go to the next step.
When the VM console launches in the browser tab, it will appear as a black screen. Click
inside the black screen and press Enter a few times to make the VM console appear
from the screensaver.
show ip route
Note: Only Perimeter-Gateway-02 is now available to access the vPod Router network
segments.
1. Expand RegionA01.
2. Right-click Perimeter-Gateway-01-0.
3. Click Power.
4. Click Power On.
It will take a minute or two for the VM to power up. Once it shows the VMTools are online
in the VM Summary, you can move to the next step.
1. Click Refresh icon. This will check for updates on the VMTools status.
• On the taskbar, go back to your command prompt running your ping test.
Although this is not a clear depiction of the failover from Perimeter-Gateway-02 to
Perimeter-Gateway-01, the ping traffic would migrate from Perimeter-Gateway-02 to
Perimeter-Gateway-01 with minimal impact if the active path went down.
When the VM console launches in the browser tab, it will appear as a black screen. Click
inside the black screen and press Enter a few times to make the VM console appear
from the screensaver.
Show Routes
Let's check the status of the routes on the Distributed-Router-01 since we powered
Perimeter-Gateway-01 back up.
show ip route
Note: we should now see that all vPod Router networks have returned to dual
connectivity.
A final note on ECMP and HA in this lab: while we have shut down Perimeter-
Gateway-01, the result of doing this on Perimeter-Gateway-02 would be the
same.
The only caveat is that the Customer DB App does not work when Perimeter-
Gateway-01 is offline since the web server VMs are directly connected to it. We could
resolve this by moving the Web-Tier down to the Distributed-Router-01 as you did the
Database and App networks in the Dynamic and Distributed Routing section of this
lab. Once that is complete, the Customer DB App will function if Perimeter Gateway 1 or
2 were offline. It is important to note that performing this migration will break
other modules in this lab! This is the reason it is not done as part of the
manual. If other modules are not going to be attempted, this migration can
be performed without an issue.
Delete Perimeter-Gateway-02
Confirm Delete
1. Click Yes.
1. Double-click Distributed-Router-01.
1. Click Manage.
2. Click Routing.
3. Click Global Configuration.
4. Click Disable.
Publish Change
1. Double-click Perimeter-Gateway-01.
1. Click Manage.
2. Click Routing.
3. Click Global Configuration.
4. Click Disable.
Publish Change
1. Click Manage.
2. Click Firewall.
3. Click Enable.
Publish Change
Module 3 Conclusion
In this module, we covered the routing capabilities of NSX Distributed Logical Router and
Edge Services Gateways:
1. Migrated Logical Switches from Edge Services Gateway (ESG) to the Distributed
Logical Router (DLR).
2. Configured the dynamic routing protocol between ESG and DLR.
3. Reviewed the centralized routing capabilities of ESG and dynamic route peering
information.
4. Demonstrated scalability and availability of ESG by deploying a second ESG and
establishing route peering between the two ESGs via Equal Cost Multipath (ECMP)
route configuration.
5. Removed ESG2 and ECMP route configuration.
If you are keen to learn more about NSX, please visit the NSX 6.3 Documentation Center
via the following URL:
• Go to https://tinyurl.com/y9zy7cpn
• Module 1 - Installation Walk Through (15 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
• Module 2 - Logical Switching (30 minutes) - Basic - This module will cover the
creation of logical switches and add virtual machines to the logical switches.
• Module 3 - Logical Routing (60 minutes) - Basic - This module will demonstrate
the dynamic and distributed routing capabilities supported on the NSX platform
by providing routes between a 3-tier application.
• Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
showcase the capabilities of the Edge Services Gateway by providing common
services such as DHCP, VPN, NAT, Dynamic Routing, Load Balancing and Physical
to Virtual Bridging.
Lab Captains:
The NSX Edge logical (distributed) router provides East-West distributed routing with
tenant IP address space and data path isolation. Virtual machines or workloads that
reside on the same host on different subnets can communicate with one another
without having to traverse a traditional routing interface.
The NSX Edge Gateway connects isolated, stub networks to shared (uplink) networks by
providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and
Load Balancing. Common deployments of NSX Edges include the DMZ, VPN, Extranets,
and multi-tenant Cloud environments where the NSX Edge creates virtual boundaries for
each tenant.
TCP, UDP, HTTP, or HTTPS requests can be load balanced utilizing the NSX Edge Services
gateway. The Edge Services Gateway can provide load balancing up to Layer 7 of the
Open Systems Interconnection (OSI) model.
In this section, we will deploy and configure a new NSX Edge Appliance as a "One-
Armed" Load Balancer.
• Lab status is shown on the Desktop of the Main Console Windows VM.
Validation checks ensure all components of the lab are correctly deployed; once
validation is complete, the status will be updated to Green/Ready. It is possible for a
lab deployment to fail due to environment resource constraints.
If you are not already logged into the vSphere Web Client:
(The home page should be the vSphere Web Client. If not, click on the vSphere Web
Client taskbar icon for Google Chrome.)
Clicking on the Push-Pins will allow task panes to collapse and provide more viewing
space to the main pane. You can also collapse the left-hand pane to gain the maximum
space.
We will configure the one-armed load balancing service on a new Edge Services
Gateway. To begin the new Edge Services Gateway creation process, make sure you're
in the Networking & Security section of the vSphere Web Client:
For the new NSX Edge Services Gateway, set the following configuration options
Note: All passwords for NSX Edges are 12-character complex passwords.
There are four different appliance sizes for the Edge Services Gateway. The specifications
(number of CPUs, memory) are as follows:
We will be selecting a Compact-sized Edge for this new Edge Services Gateway, but it's
worth remembering that these Edge Services Gateways can also be upgraded to a larger
size after deployment. To continue with the new Edge Services Gateway creation:
1. Click Green Plus icon. This will open the Add NSX Edge Appliances pop-up
window.
Cluster/Datastore placement
Configure Deployment
1. Click Next.
Since this is a one-armed load balancer, it will only need one network interface.
We will be configuring the first network interface for this new NSX Edge.
This one-armed load balancer's interface will need to be on the same network as the
two web servers for which this Edge will be providing load balancing services.
Configuring Subnets
1. Click Green Plus icon. This will configure the IP address of this interface.
Ensure the IP Addresses and Subnet Prefix Length information are the same as the
picture above.
1. Click Next.
Monitoring Deployment
1. The NSX Edges section will show that there is 1 Installing while OneArm-
LoadBalancer is being deployed.
2. The status for OneArm-LoadBalancer will indicate that it is Busy. This means the
deployment is in process.
3. Click refresh icon on the vSphere Web Client to see the deployment status of
OneArm-LoadBalancer.
Once the status for OneArm-LoadBalancer indicates that it is Deployed, we can move on
to the next step.
The above depicts the eventual topology we will have for the load balancer service
provided by the NSX Edge Services Gateway we just deployed. To get started, from
within the NSX Edges area of the Networking & Security plug-in for the vSphere Web
Client, double click on the Edge we just made to go into its management page.
1. Double-click OneArm-LoadBalancer.
1. Click Manage.
2. Click Load Balancer.
3. Click Global Configuration.
4. Click Edit to change Load Balancer global configuration.
Utilizing profiles can make traffic-management tasks less error-prone and more efficient.
Monitors ensure that the pool members serving virtual servers are up and working. The
default HTTPS monitor simply does a "GET" at "/". We will modify the default monitor
to do a health check at an application-specific URL. This helps determine not only that
the pool member servers are up and running, but that the application is as well.
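The monitor's verdict reduces to a simple classification of each probe. A minimal sketch, assuming the common rule that only an HTTP 200 marks a member UP (real monitors can also match on expected response content):

```python
def member_status(http_status: int) -> str:
    """Classify a monitor probe result: only a 200 OK marks the member UP.

    Probing "/" only proves the web server process answers; probing an
    application URL that touches the database proves the app works too.
    """
    return "UP" if http_status == 200 else "DOWN"
```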
A Pool is a group of servers: the entity that represents the nodes to which traffic is
load balanced. We will be adding the two web servers web-01a and web-02a to a new
pool. To create the new pool:
1. Click Pools.
2. Click Green Plus icon. This will open the New Pool pop-up window.
Repeat the above process to add one more pool member using the following
information:
• Name: web-02a
• IP Address: 172.16.10.12
• Port: 443
• Monitor Port: 443
1. Click OK.
A Virtual Server is the entity that accepts traffic on the "front end" of a load-balanced
service configuration. User traffic is directed towards the IP address the virtual server
represents and is then redistributed to nodes on the "back end" of the load balancer. To
configure a new Virtual Server on this Edge Services Gateway, first:
2. Click Green Plus icon. This will open the New Virtual Server pop-up window.
Please configure the following options for this new Virtual Server:
1. Click refresh icon. This will allow you to see the Round-Robin of the two pool
members.
Note: You may have to click a few times to get the browser to refresh outside of the
browser cache.
1. Click on Pools.
2. Click Show Pool Statistics.
3. Click on "pool-1". We will see each member's current status.
4. Close the window by clicking the X.
To aid troubleshooting, the NSX Load Balancer "show ...pool" command will yield an
informative description of pool member failures. We will create two different failures and
examine the response using show commands on the Load Balancer Edge Gateway.
1. Enter LoadBalancer in the search box. The search box is located at the top right
corner of vSphere Web Client.
2. Click on "OneArm-LoadBalancer-0".
1. Click on Summary.
2. Click on VM console.
Login to OneArm-LoadBalancer-0
1. Login as admin.
2. Enter VMware1!VMware1! as password.
Many of the modules will have us enter Command Line Interface (CLI) commands.
There are two ways to send CLI commands to the lab.
First, you can copy and paste from the manual:
1. Highlight the CLI command in the manual and use Control+c to copy it to the
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the environment,
providing you with all the user accounts and passwords for the environment.
Note: The status of the pool members web-01a and web-02a is shown as "UP".
Start PuTTY
SSH to web-01a.corp.local
Loadbalancer console
Because the service is down, the failure detail shows that the client could not establish
an SSL session.
Shutdown web-01a
1. Enter web-01a in the search box. The search box is located at the top right
corner of vSphere Web Client.
2. Click on web-01a.
1. Click Actions.
2. Click Power.
3. Click Power Off.
4. Click Yes.
Because the VM is currently down, the failure detail shows that the client could not
establish an L4 connection, as opposed to the L7 (SSL) connection in the previous step.
Power web-01a on
1. Click Actions.
2. Click Power.
3. Click Power On.
Conclusion
In this lab, we have deployed and configured a new Edge Services Gateway and enabled
load balancing services for the 1-Arm LB Customer DB application.
This concludes the Edge Services Gateway Load Balancer lesson. Next, we will learn
more about the Edge Services Gateway Firewall.
We can navigate to an NSX Edge to see the firewall rules that apply to it. Firewall rules
applied to a Logical Router only protect control plane traffic to and from the Logical
Router control virtual machine. They do not enforce any data plane protection. To
protect data plane traffic, create Logical Firewall rules for East-West protection or rules
at the NSX Edge Services Gateway level for North-South protection.
Rules created on the Firewall user interface applicable to this NSX Edge are displayed in
a read-only mode. Rules are displayed and enforced in the following order:
1. Click Manage.
2. Click Firewall.
3. Click Default Rule.
4. Click Plus icon under Action column.
5. Click Deny from Action.
Publish Changes
We will not be making permanent changes to the Edge Services Gateway Firewall
setting.
Now that we are familiar with editing an existing Edge Services Gateway firewall rule,
we will add a new edge firewall rule that will block the Control Center's access to the
Customer DB Application.
Specify Source
Hover mouse in the upper right corner of the Source column and click Pencil icon.
Confirm Source
Specify Destination
Hover mouse in the upper right corner of the Destination column and click Pencil icon.
1. Click Object Type drop down menu and select Logical Switch.
2. Click Web_Tier_Logical_Switch.
3. Click right arrow. This will move the Web_Tier_Logical_Switch to Selected
Objects.
4. Click OK.
Configure Action
Publish Changes
Now that we have configured a new FW rule that will block the Control Center from
accessing the Web Tier logical switch, let's run a quick test:
Verify the Main Console cannot access the Customer DB App. We should see a
browser page stating that the web site cannot be reached. Now, let's modify the FW rule
to allow the Main Console access to the Customer DB App.
1. Click Plus icon in the upper right corner of the Action column of the Main Console
FW Rule.
2. Click Accept under Action.
3. Click OK.
Publish Changes
Since the Main Console FW rule has been changed to "Accept", the Main Console can
now access the Customer DB App.
Publish Changes
Conclusion
In this lab, we learned to modify an existing Edge Services Gateway Firewall rule and
configure a new Edge Services Gateway Firewall rule that blocks external access to the
Customer DB App.
This concludes the Edge Services Gateway Firewall lesson. Next, we will learn more
about Edge Services Gateway managing DHCP services.
DHCP Relay
In a network with only a single segment, DHCP clients can communicate directly with
their DHCP server. DHCP servers can also provide IP addresses for multiple networks,
including ones not on the same segment as themselves. However, when serving IP
addresses for ranges outside its own segment, the server cannot communicate with
those clients directly, because the clients do not yet have a routable IP address or a
gateway that they are aware of.
In these situations, a DHCP Relay agent is required. It relays the broadcast received
from DHCP clients by sending it to the DHCP server as unicast. The DHCP server
selects a DHCP scope based on the range the unicast is coming from, and returns the
reply to the agent address, which then broadcasts it back to the client on the original
network.
• Windows Server based DHCP Server, with appropriate DHCP scope and scope
options set
• TFTP server for the PXE boot files: This server has been installed, configured and
OS files loaded.
Lab Topology
This diagram lays out the final topology that will be created and used in this lab module.
We must first create a new Logical Switch that will run our new 172.16.50.0/24 network.
We will now attach the logical switch to an interface on the Perimeter Gateway. This
interface will be the default gateway for the 172.16.50.0/24 network with an address of
172.16.50.1.
Add Interface
This section will attach the logical switch to an interface on the Perimeter Gateway.
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Select vnic9.
5. Click Pencil icon.
1. Click Select.
Select the new Logical Switch that we just created in the previous steps.
1. Select DHCP-Relay.
2. Click OK.
Staying inside of the Perimeter Gateway, we must do the global configuration of DHCP
Relay.
1. Click Manage.
2. Click DHCP.
3. Click Relay.
4. Click Edit.
Within the global configuration of DHCP is where we select the DHCP servers that will
respond to DHCP requests from our guest VMs.
There are three methods by which we can set the DHCP server IPs:
IP Sets
IP Sets are configured from the NSX Manager global configuration and allow us to specify a group of DHCP servers by creating a named grouping.
IP Addresses
This method allows us to enter one or more DHCP server IP addresses directly.
Domain Names
This method allows us to specify a DNS name that resolves to one or more DHCP server addresses.
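For reference, the same configuration can also be driven through the NSX-v REST API. The sketch below builds a relay payload covering all three methods; the element names and the endpoint path are stated from memory and should be treated as assumptions to verify against the NSX API guide, and the addresses and IP set ID are illustrative.

```python
# Sketch of a DHCP relay payload for the NSX-v REST API.
# Element names and endpoint path are assumptions; verify against the API guide.
import xml.etree.ElementTree as ET

relay = ET.Element("relay")
servers = ET.SubElement(relay, "relayServer")
# The three methods above map to three element types:
ET.SubElement(servers, "groupingObjectId").text = "ipset-1"    # IP Set
ET.SubElement(servers, "ipAddress").text = "192.168.110.10"    # IP Address
ET.SubElement(servers, "fqdn").text = "dhcp.corp.local"        # Domain Name

agents = ET.SubElement(relay, "relayAgents")
agent = ET.SubElement(agents, "relayAgent")
ET.SubElement(agent, "vnicIndex").text = "9"                   # interface added earlier
ET.SubElement(agent, "giAddress").text = "172.16.50.1"         # gateway on the segment

body = ET.tostring(relay, encoding="unicode")
# PUT this body to /api/4.0/edges/{edgeId}/dhcp/config/relay (assumed path)
print(body)
```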
The DHCP Relay Agent will relay any DHCP requests from the gateway address on the
logical switch to the configured DHCP Servers. We must add an agent to the logical
switch / segment we created on 172.16.50.0/24.
We will now create a blank VM that will PXE boot from the DHCP server we are relaying
to.
Create New VM
1. Expand RegionA01-COMP01.
2. Right-click esx-02a.corp.local.
3. Click New Virtual Machine.
4. Click New Virtual Machine....
Name the VM
Select Host
1. Click Next.
Select Storage
1. Click Next.
Select Compatibility
1. Click Next.
Select Guest OS
We need to delete the default hard disk. Since we are booting from the network, the
hard disk is not needed. This is because the PXE image is booting and running
completely within RAM.
Complete VM Creation
1. Click Finish.
Power on VM
Next we will open a console to this VM and watch it boot from the PXE image. It receives
this information via the remote DHCP server we configured earlier.
Image Booting
This screen will appear once the VM has a DHCP address and is downloading the PXE image from the boot server. This step takes about 1-2 minutes; please move on to the next step.
While we wait for the VM to boot, we can verify the address used in the DHCP Leases.
View Leases
We can view the DHCP server to identify the IP address taken by the VM.
1. Expand controlcenter.corp.local.
2. Expand IPv4.
3. Expand Scope [172.16.50.0] NSX_ESG_Relay_Lab.
4. Select Address Leases.
We will see the address 172.16.50.10, which is in the range created earlier.
View Options
We can also see the scope options used to boot the PXE image.
We will notice that 066 Boot Server Host Name and 067 Bootfile Name were used.
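On the wire, these two scope options travel as the standard type-length-value triplets defined in RFC 2132, appended to the DHCP options field. A minimal sketch follows; the bootfile name pxelinux.0 is a hypothetical example, not necessarily the one used in this lab.

```python
# Sketch: encoding DHCP options 66/67 as RFC 2132 type-length-value triplets.
def encode_option(code: int, value: str) -> bytes:
    data = value.encode("ascii")
    # One byte option code, one byte length, then the value itself.
    return bytes([code, len(data)]) + data

# Option 66: TFTP/boot server host name; option 67: boot file name.
opts = encode_option(66, "192.168.110.10") + encode_option(67, "pxelinux.0")
print(opts.hex())
```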
We can now close DHCP.
Access Booted VM
The widget in the upper-right corner of the VM will show statistics, along with the IP of
the VM. This should match the IP shown in DHCP earlier.
Verify Connectivity
As the dynamic routing is already in place with the virtual network, we will have
connectivity to the VM upon its creation. We can verify the connectivity by pinging the
VM from the Main Console.
ping 172.16.50.10
We will see a ping response from the VM, after which we can close the command prompt window.
Conclusion
In this section, we created a new network segment and relayed the DHCP requests from that network to an external DHCP server. In doing so, we were able to access the additional boot options of that external DHCP server and PXE boot into a Linux OS.
Configuring L2VPN
In this section, we will use the L2VPN capabilities of the NSX Edge Gateway to extend an L2 boundary between two separate vSphere clusters. To demonstrate this capability, we will deploy an NSX Edge L2VPN Server on the RegionA01-MGMT01 cluster and an NSX Edge L2VPN Client on the RegionA01-COMP01 cluster, then test the tunnel status to verify a successful configuration.
1. Open the Google Chrome web browser from the desktop (if not already open).
To create the L2VPN Server service, we must first deploy an NSX Edge Gateway for that
service to run on.
The New NSX Edge wizard will appear, with the first section, "Name and Description", displayed. Enter the values corresponding to the numbers below, leaving the other fields blank or at their default values.
1. Click Next.
Add Interface
1. Click OK.
1. Click Next.
1. Click Finish.
Before we configure the newly deployed NSX Edge for L2VPN connections, we need to
complete the following preparatory steps:
1. Double-click L2VPN-Server.
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Click vnic1.
5. Click Pencil icon.
1. Click Green Plus icon. This will open the Add Sub Interface pop-up window.
Ensure the Sub Interface configuration is the same as the picture above.
1. Click OK.
Ensure the Trunk Interface configuration is the same as the picture above.
1. Click OK.
1. Click Routing.
2. Click Global Configuration.
3. Click Edit to change Dynamic Routing Configuration.
Add L2VPNServer-Uplink
1. Click OK.
1. Click OSPF.
2. Click Green Plus icon under Area to Interface Mapping.
1. Check Connected.
2. Click OK.
Publish Changes
We have performed all prerequisites and will now configure the L2VPN service on this
Edge Gateway.
The 172.16.10.1 address belongs to the L2VPN-Server Edge Gateway and routes are
being distributed dynamically via OSPF. Next, we will configure the L2VPN service on this
Edge Gateway so that the Edge acts as "Server" in the L2VPN.
1. Click VPN.
2. Click L2 VPN.
3. Click Change to edit Global Configuration Details.
1. Click OK.
1. Click Enable for L2VPN Service Status. This will enable L2VPN-Server service.
2. Click Publish Changes.
We have completed the configuration for the L2 VPN Server. Next, we will be deploying a
new NSX Edge Gateway to act as the L2 VPN Client.
Now that the server side of the L2VPN is configured, we will move on to deploying a new NSX Edge Gateway to act as the L2VPN Client. Before deploying it, we need to configure the Uplink and Trunk distributed port groups on the distributed virtual switch.
1. Select RegionA01-vDS-COMP.
2. Click Create a new port group.
Configure Settings
Ready to Complete
1. Click Finish.
We will need to configure another distributed port group, named Trunk-Network-RegionA01-vDS-COMP. Repeat the same steps to create it.
1. When completed, we should see the newly created distributed port groups.
2. Click Back.
3. Click Next.
1. Click Green Plus icon. This will open the Add NSX Edge Appliance pop-up
window.
2. Select RegionA01-COMP02 as Cluster/Resource Pool.
3. Select RegionA01-ISCSI01-COMP01 as Datastore.
4. Select esx-03a.corp.local as Host.
5. Select Discovered virtual machine as Folder.
6. Click OK.
1. Click Next.
2. Select Uplink-RegionA01-vDS-COMP.
3. Click OK.
1. Click OK.
Click Next
1. Click Next.
1. Click Finish.
1. Double-click L2VPN-Client.
Similar to the configuration for L2VPN-Server Edge Gateway, we will also need to add a
Trunk interface to this Edge:
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Click vnic1.
5. Click Pencil icon.
1. Click OK.
Ensure the Sub Interface configuration is the same as the picture above.
1. Click OK.
Ensure the Trunk Interface configuration is the same as the picture above.
1. Click VPN.
2. Click L2 VPN.
3. Select Client as the L2VPN Mode.
4. Click Change to edit the Global Configuration Details.
1. Click OK.
1. Click Enable for L2VPN Service Status. This will enable L2VPN-Client service.
Publish Changes
Ensure that the Status is shown as "Up" after the service has been enabled. We may
need to click Fetch Status a few times to get the updated status.
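Clicking Fetch Status repeatedly is just manual polling; scripted against the API, it might look like the sketch below, where fetch_status is a hypothetical stand-in for whatever call returns the tunnel status.

```python
# Sketch: poll a status source until it reports "Up", mirroring "Fetch Status".
import time

def wait_until_up(fetch_status, attempts=10, delay=3.0):
    """Return True as soon as the status reads "Up", False after all attempts."""
    for _ in range(attempts):
        if fetch_status() == "Up":
            return True
        time.sleep(delay)
    return False

# Example with a stubbed status source that comes up on the third poll:
replies = iter(["Down", "Down", "Up"])
print(wait_until_up(lambda: next(replies), delay=0.0))  # True
```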
Congrats! We've successfully configured the NSX L2VPN service. This concludes the
lesson for configuring NSX Edge Services Gateway L2VPN services.
Native Bridging
NSX provides in-kernel software L2 Bridging capabilities that allow organizations to
seamlessly connect traditional workloads and legacy VLANs to virtualized networks
using VXLAN. L2 Bridging is widely used in brownfield environments to simplify the introduction of logical networks, and in other scenarios involving physical systems that require L2 connectivity to virtual machines.
The logical routers can provide L2 bridging from the logical networking space within NSX
to the physical VLAN-backed network. This allows for the creation of a L2 bridge
between a logical switch and a VLAN, which enables the migration of virtual workloads
to physical devices with no impact on IP addresses. A logical network can leverage a
physical L3 gateway and access existing physical networks and security resources by
bridging the logical switch broadcast domain to the VLAN broadcast domain. From NSX-V 6.2 onwards, this function has been enhanced: bridged Logical Switches can now be connected to Distributed Logical Routers, an operation that was not permitted in previous versions of NSX.
This module will guide us through the configuration of a L2 Bridging instance between a
traditional VLAN and an Access NSX Logical Switch.
Introduction
The picture above shows the L2 Bridging enhancements provided in NSX 6.2 onwards:
• In NSX 6.0 and 6.1, it was not possible to bridge a Logical Switch that was connected to a Distributed Logical Router. The Logical Switch had to be connected to an Edge Services Gateway.
• In NSX 6.2 onwards, it is possible to bridge a Logical Switch that was connected
to a Distributed Logical Router. This provides optimized East-West traffic flows.
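Conceptually, the bridge instance behaves like a classic MAC-learning switch with one VLAN-facing port and one VXLAN-facing port. The following is a simplified teaching model with invented port names, not NSX code:

```python
# Illustrative MAC-learning model of an L2 bridge between a VLAN and a
# VXLAN-backed logical switch. Simplified sketch, not NSX code.
class L2Bridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. ["vlan-101", "vxlan-5001"]
        self.mac_table = {}         # MAC address -> port it was learned on

    def receive(self, frame, in_port):
        # Learn the source MAC on the ingress port.
        self.mac_table[frame["src"]] = in_port
        # Known destination: forward to its port; unknown: flood the others.
        out = self.mac_table.get(frame["dst"])
        if out is not None:
            return [out]
        return [p for p in self.ports if p != in_port]

bridge = L2Bridge(["vlan-101", "vxlan-5001"])
# A VM on VLAN 101 talks to a VM on the logical switch:
bridge.receive({"src": "aa:aa", "dst": "bb:bb"}, "vlan-101")    # unknown: flood
bridge.receive({"src": "bb:bb", "dst": "aa:aa"}, "vxlan-5001")  # reply: learned
print(bridge.receive({"src": "aa:aa", "dst": "bb:bb"}, "vlan-101"))  # ['vxlan-5001']
```

Once both MACs are learned, traffic flows directly between the two broadcast domains without flooding, which is what lets the bridged VM keep its IP address.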
Many of the modules will have you enter Command Line Interface (CLI) commands. There are two ways to send CLI commands to the lab.
First, use the console's SEND TEXT feature:
1. Highlight the CLI command in the manual and use Control+c to copy it to the clipboard.
2. Click the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard into the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the environment
allowing you to easily copy and paste complex commands or passwords in the
associated utilities (CMD, Putty, console, etc). Certain characters are often not present
on keyboards throughout the world. This text file is also included for keyboard layouts
which do not provide those characters.
• Bring up the vSphere Web Client via the desktop icon labeled Chrome. The home page should be the vSphere Web Client; otherwise, click the Google Chrome icon in the Taskbar.
We will verify the initial configuration as shown in the picture above. The environment comes with a Port Group on the Management & Edge cluster, named "Bridged-Net-RegionA0-vDS-MGMT". The web server VMs, named "web-01a" and "web-02a", are attached to the Web-Tier-01 Logical Switch. The Web-Tier-01 Logical Switch is isolated from the Bridged-Net.
1. Expand vcsa-01a.corp.local.
2. Expand RegionA01.
3. Expand RegionA01-vDS-MGMT.
4. Click Bridged-Net-RegionA0-vDS-MGMT.
1. Click Summary.
2. Click Actions.
3. Click Edit Settings.
Note: We are going to set the VLAN to allow for the presentation of the Bridged-Net to
the Distributed Router for L2 Bridging.
1. Click VLAN.
2. Select VLAN as the VLAN type.
Verify VLAN ID
1. Click Summary.
Migrate Web-01a
1. Expand vcsa-01a.corp.local.
2. Expand RegionA01.
3. Right-click web-01a.corp.local.
4. Click Migrate.
1. Expand vcsa-01a.corp.local.
2. Expand RegionA01.
3. Expand RegionA01-MGMT01.
4. Select esx-04a.corp.local.
5. Click Next.
Select Storage
1. Select RegionA01-ISCSI01-MGMT01.
2. Click Next.
Select Bridged-Net-RegionA0-vDS-MGMT
1. Click Next.
1. Click Next.
Click Finish
1. Click Finish.
1. Select Bridged-Net-RegionA0-vDS-MGMT.
2. Select VMs.
3. Select Virtual Machines. web-01a.corp.local should appear on the list.
4. Click web-01a.corp.local.
Open VM Console
1. Click Summary.
2. Click on the Black Screen. This will open up the VM console.
When the VM console launches in the browser tab, it will appear as a black screen. Click inside the black screen and press Enter a few times to wake the VM from the screensaver.
1. Login as root.
2. Enter VMware1! as password.
3. Enter ping -c 3 172.16.10.1.
ping -c 3 172.16.10.1
There are no other devices on VLAN 101 and the L2 Bridging has not been configured
yet. Wait until the ping times out to verify that the VM is isolated.
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Click Web_Tier.
5. Click Red "X" to delete Web_Tier Logical Switch.
Click OK
1. Click OK.
Go back to Edges
1. Click Back.
1. Click Manage.
2. Click Settings.
3. Click Interfaces.
4. Click Green Plus icon to add the Web-Tier Logical Switch.
1. Select Web_Tier_Logical_Switch.
2. Click OK.
We will enable NSX L2 Bridging between VLAN 101 and the Web-Tier-01 Logical Switch,
so that web-01a.corp.local will be able to communicate with the rest of the network.
From NSX-V 6.2 onwards, it is possible to have a L2 Bridge and a Distributed Logical
Router connected to the same Logical Switch. This represents an important
enhancement as it simplifies the integration of NSX in brownfield environments and the
migration from legacy to virtual networking.
1. Click Bridging.
2. Click Green Plus icon.
1. Select Web_Tier_Logical_Switch.
2. Click OK.
1. Select Bridged-Net-RegionA0-vDS-MGMT.
2. Click OK.
1. Click OK.
Publish Changes
Verify L2 Bridging
NSX L2 Bridging has been configured. You will now verify L2 connectivity between the "web-01a" VM, attached to VLAN 101, and the machines connected to the "Web-Tier-01" Logical Switch.
ping -c 3 172.16.10.1
Note: We might experience "duplicate" pings during this test (responses appearing as
DUPs). This is due to the nature of the Hands-On Labs environment and will not happen
in a production environment.
If you want to proceed with the other modules of this Hands-On Lab, make sure to follow these steps to disable the L2 Bridging, as the example configuration created in this environment could conflict with other sections, such as L2VPN.
1. Click Manage.
2. Click Bridging.
3. Click Red Cross icon.
Note: There should be one instance, Bridge-01, which was created earlier.
Publish Changes
Migrate Web-01a
1. Expand vcsa-01a.corp.local.
2. Expand RegionA01.
3. Right-click web-01a.corp.local.
4. Click Migrate.
1. Expand vcsa-01a.corp.local.
2. Expand RegionA01.
3. Expand RegionA01-COMP01.
4. Select esx-01a.corp.local.
5. Click Next.
Select Storage
1. Select RegionA01-ISCSI01-COMP01.
2. Click Next.
1. Click Next.
Click Finish
1. Click Finish.
Conclusion
Congratulations, you have successfully completed the NSX L2 Bridging module! In this
module we configured and tested the bridging from a traditional VLAN-backed PortGroup
to an NSX VXLAN Logical Switch.
Module 4 Conclusion
In this module, we touched on the advanced features of the NSX Edge Services Gateway.
If you are keen to learn more about NSX, please visit the NSX 6.3 Documentation Center
via the following URL:
• Go to https://tinyurl.com/y9zy7cpn
• Module 1 - Installation Walk Through (15 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
• Module 2 - Logical Switching (30 minutes) - Basic - This module will cover the
creation of logical switches and add virtual machines to the logical switches.
• Module 3 - Logical Routing (60 minutes) - Basic - This module will demonstrate
the dynamic and distributed routing capabilities supported on the NSX platform
by providing routes between a 3-tier application.
• Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
showcase the capabilities of the Edge Services Gateway by providing common
services such as DHCP, VPN, NAT, Dynamic Routing, Load Balancing and Physical
to Virtual Bridging.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Version: 20180413-130034