HOL-2225-02-NET: NSX-T Advanced Networking
Note: It may take more than 90 minutes to complete this lab. You may only finish 2-3 of the modules during your time. However, you
may take this lab as many times as you want. The modules are independent of each other so you can start at the beginning of any
module and proceed from there. Use the Table of Contents to access any module in the lab. The Table of Contents can be accessed in
the upper right-hand corner of the Lab Manual.
This lab contains 6 modules that focus on getting started with the NSX-T platform. We will show administrators how they can configure network and security settings, including the provisioning of logical segments, logical routers, and their associated security.
• Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX, including dynamic routing, multicast, and VRF.
• Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
• Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
• Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations tools available in NSX.
• Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
• Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
Lab Captains:
• Lead Captain - Jennifer Schmidt - Staff Virtual Cloud Network TAM - United States
• Captain - Joe Collon - Staff Solution Engineer Virtual Cloud Network - United States
• Captain - Phoebe Kim - Senior Cloud Solutions Architect - United States
• Captain - Mihajlo Veselinovic - Staff Virtual Cloud Network TAM - United States
• Associate Captain - Claire Davin - Virtual Cloud Network Solutions Engineer - France
This lab manual can be downloaded from the Hands-on Labs document site found here:
http://docs.hol.vmware.com
This lab may be available in other languages. To set your language preference and view a localized manual deployed with your lab,
utilize this document to guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Welcome! If this is your first time taking a lab, navigate to the Appendix in the Table of Contents to review the interface and features before proceeding.
For returning users, feel free to start your lab by clicking next in the manual.
Please verify that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready",
please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
The goal of this module is to explore some of the various routing topologies and features available within NSX. One such feature is
Virtual Routing and Forwarding (VRF). VRF Lite support in NSX-T 3.x provides multi-tenant data plane isolation in the Tier-0 gateway.
Each VRF has its own isolated routing table, uplinks, NAT and gateway firewall services. In this module you will:
• Configure Tier-0 Equal Cost Multi Path (ECMP) connectivity through two Edge nodes
• Configure ECMP to leverage Bidirectional Forwarding Detection (BFD) for faster network convergence
• Enable multicast routing between workloads on two different NSX overlay segments
Routing into and out of the NSX environment is handled by the Tier-0 Gateway. The Tier-0 gateway is configured with interfaces on one
or more Edge nodes, and uses those interfaces to route North-South traffic between the physical fabric and NSX. In this section we will
configure a second Edge interface on an existing Tier-0 gateway for improved throughput and resilience, leveraging a routing topology
termed Equal Cost Multi Pathing (ECMP).
1. Open a browser by double clicking the Google Chrome icon on the desktop.
1. Launch the NSX web interface by clicking on the nsxmgr-01a bookmark in the Region A bookmark folder of Google Chrome.
3. Click LOG IN
We will now review the configuration of the existing Tier-0 (T0) Gateway T0-GW-01. This T0 Gateway is configured to use the Uplink connections provided by Edge Cluster EdgeCluster-01, which is comprised of Edge Nodes edgenode-01a and edgenode-02a.
2. Click Tier-0 Gateways in the menu on the left side of the NSX-T Networking user interface
5. Click the 1 link to the right of External and Service Interfaces to display the Set Interfaces dialog
Review the EN1-Uplink1 interface configured on T0-GW-01. This is the North-South Uplink interface the T0 uses to peer with the
external routed environment. Observe the following settings:
• Name: EN1-Uplink1
• Type: External
• Status: Success
From this screen we can determine that the T0 Gateway has a single Uplink interface that uses IP address 192.168.120.3. This Uplink is hosted on Edge Node edgenode-01a and is currently Up. In its current configuration, the Tier-0 Gateway is utilizing a single interface for North-South connectivity. If the host containing Edge Node edgenode-01a were to fail for any reason, routing into and out of the NSX environment would be lost. We will correct this later by establishing a second interface on Edge Node edgenode-02a.
Border Gateway Protocol (BGP) is a communication protocol used by routers to exchange route information. When two or more routers are configured to exchange routes with one another in this way, they are called neighbors. We will now review T0-GW-01's BGP configuration.
2. Click the 1 link to the right of BGP Neighbors to display the Set BGP Neighbors dialog
T0-GW-01 is configured with one BGP neighbor. Review the settings for the 192.168.120.1 neighbor:
• BFD: Disabled
• Route Filter: 1
• Allowas-in: Disabled
• Status: Success
In this instance, T0-GW-01 is peering with a router at IP address 192.168.120.1 using BGP AS number 65002. Its status is currently Up, indicated as Success.
Introduced with NSX-T 3.0 is a new Network Topology visualization view. This view displays a graphical representation of the NSX
environment, including Tier-0 and Tier-1 Gateways, Segments, and their connectivity to one another. As we observed in the previous
steps, T0-GW-01 has an Uplink interface with an IP address of 192.168.120.3. This T0 gateway is also providing gateway services for a
pair of overlay networks, as shown in the Topology diagram.
1. Click Network Topology in the menu on the left side of the NSX-T Networking user interface
2. If necessary, click the Zoom In icon to make the Segment names visible in the Topology diagram
1. Trace the route path from your Admin Console to web server web-01a on Segment Web-LS by typing the following command and pressing Enter:
tracert -d 172.16.10.11
Observe that:
• The first hop is the IP address of the vPod router (the gateway of your admin desktop)
• The physical router then routes the packet to the Uplink interface of T0-GW-01 (192.168.120.3)
• Finally, the packet is delivered to server web-01a on NSX Segment Web-LS that is connected to the Tier-0 gateway (172.16.10.11)
With this test, we have confirmed good connectivity from our desktop to a VM on Segment Web-LS.
2. Click the Windows minimize button to minimize the Command Prompt window, and return to the NSX UI in Chrome.
We will now modify the existing Tier-0 Gateway to utilize the second Edge Node in Cluster EdgeCluster-01, providing two North-South paths into and out of the NSX environment. Please perform the following steps from the NSX UI in Chrome:
2. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
3. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
Observe that our existing Tier-0 Gateway is configured for an HA Mode of Active Active. This allows the use of multiple Edge Nodes in
the Edge Cluster simultaneously. Also note that the Tier-0 Gateway is configured to use Edge Cluster EdgeCluster-01.
Observe that the existing EN1-Uplink1 interface is running on Edge Node edgenode-01a and is configured with IP address 192.168.120.3. If a failure were to occur on this Edge Node, North-South connectivity to the NSX environment would be lost. We will now add a second Uplink interface to the Tier-0 Gateway that leverages edgenode-02a, the second Edge Node in Edge Cluster EdgeCluster-01.
1. Name: EN2-Uplink1
2. Type: External
6. Click SAVE
Confirm that our Tier-0 Gateway now has two interfaces: EN1-Uplink1 and EN2-Uplink1. Interface EN2-Uplink1 exists on Edge Node edgenode-02a with IP address 192.168.120.4/24.
Note: The status may initially show as "Uninitialized". Click the "REFRESH" link until it shows "Success".
2. Click CLOSE
We will now configure BGP to peer from the second interface that we defined on edgenode-02a. This will allow BGP on the Tier-0
Gateway to establish peering from the interfaces on both edgenode-01a and edgenode-02a. During normal operation, both Edges will
be considered viable paths into and out of the NSX environment. In the event that an Edge Transport Node fails, its BGP neighbor state
will be lost and its path information will be removed from the BGP routing table. Traffic will continue to flow through the remaining Edge
Transport Node. Upon recovery of the lost Edge Transport Node, its BGP state will be reestablished and its path information will be
added back to the BGP routing table automatically.
2. Click the 1 link to the right of BGP Neighbors to display the Set BGP Neighbors dialog
1. Click the More Options icon to the left of 192.168.120.1 to display its Options menu, then click Edit
1. Source Addresses: Add 192.168.120.4 (this should be in addition to the existing 192.168.120.3 entry)
2. Click SAVE
3. Click CLOSE
We will now view the new interface configuration on Edge Node edgenode-02a, and confirm that the second interface was created
successfully.
2. Click PuTTY
1. Password: VMware1!VMware1!
Once you are authenticated to the Edge Node, maximize the PuTTY window for better visibility.
1. Get a list of Logical Routers connected to edgenode-02a by typing the following command and pressing Enter:
get logical-routers
NOTE: The VRF number of SR-T0-GW-01 may differ from the screenshot.
1. Enter the VRF routing context on the Edge Node by entering the following command and pressing Enter (NOTE: Replace "1" in the command below with the VRF number found in the previous step):
vrf 1
2. Get the BGP neighbor status by running the following command and pressing Enter:
get bgp neighbor summary
Verify the neighbor relationship with 192.168.120.1 is showing a state of Estab (Established).
Please return to the NSX user interface by selecting Google Chrome in the Windows taskbar. We will now revisit the Network Topology
view to see the second Uplink interface of our Tier-0 Gateway.
1. Click Network Topology in the menu on the left side of the NSX-T Networking user interface
2. Click the Zoom In icon to make the Tier-0 IP addresses visible in the Topology diagram
As you can see, we now have two edge nodes that have established connections with our external router, providing redundant North-
South routing to the NSX environment.
When routers need to exchange reachability information, there are typically two ways for this to occur: static routing and dynamic
routing. NSX supports both methods.
Static routing involves configuring each router in the environment with explicit route information; Router A is provided a static
configuration telling it the routes that are reachable via Router B. Static routing is the simplest method of providing route information,
but as its name implies it is generally unable to adapt to changes in the network topology.
Dynamic routing involves configuring routers in the environment to advertise routing information to one another. If a network topology
change occurs, routers participating in dynamic routing will advertise these changes to each other and will adjust the routing table
accordingly. NSX supports dynamic routing via the Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF) protocols.
In addition to these routing methods, NSX also supports Bidirectional Forwarding Detection (BFD). BFD is a simple protocol that
operates at the forwarding plane and sends periodic Hello packets (called BFD Control packets) to its configured neighbors at a user-
specified interval. If a specified number of these Control packets are missed by a neighbor, the neighbor will consider the peer to be
down and will notify any routing protocols of the failure. The BFD process occurs independently of any routing protocol in use and allows
failures to be detected more quickly, making sub-second convergence possible.
We will now modify the existing Tier-0 Gateway to enable BFD. The vPOD router that our Tier-0 Gateway peers with has already been
configured for BFD, and should establish a session with the Tier-0 as soon as we enable it.
2. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
3. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
2. Click the 1 link to the right of BGP Neighbors to display the BGP Neighbors dialog
1. Click the More Options icon to the left of 192.168.120.1 to display its Options menu, then click Edit
1. Click the toggle to the left of Disabled under BFD to enable it; the label will change to indicate Enabled
3. In the BFD Interval field, enter 1000 (for 1000 milliseconds, or one second)
4. Click SAVE
5. Click CLOSE
We have now enabled BFD on the Tier-0 Gateway for BGP peer 192.168.120.1. We have set the BFD Interval to one second (1000 ms)
and left the BFD Multiplier at its default value of 3. This means that if three BFD packets are not received (for a total of three seconds),
BFD will mark the BGP neighbor as down.
We will now simulate an Edge Node failure by disconnecting the Uplink interfaces of the Edge Node. Recall that we configured a
second Uplink interface on the Tier-0 Gateway via edgenode-02a, enabling Equal Cost Multi Pathing (ECMP). This means that both
edgenode-01a and edgenode-02a are providing equally valid paths into and out of the NSX environment. As a result, we need to
determine which path is currently being used to reach the VM web-01a at IP address 172.16.10.11, on NSX Segment Web-LS.
ping web-01a.corp.local
1. Enter the following to perform a traceroute to web-01a.corp.local and display its path:
tracert -d web-01a.corp.local
The first hop is the IP address of the vPod router (the gateway of your admin desktop). Observe the second hop, 192.168.120.4, which is the IP address of the Tier-0 Gateway interface on edgenode-02a. Traffic is then delivered to web-01a at 172.16.10.11 in the third and final hop.
NOTE: Because both paths are equally valid, your traceroute may traverse the Tier-0 interface on edgenode-01a instead of edgenode-02a. If this is the case, your second route hop will display 192.168.120.3 instead of 192.168.120.4. If your traceroute displays a second route hop of 192.168.120.3, please substitute edgenode-01a in the following steps to test fault tolerance.
Please return to the NSX user interface by selecting the Google Chrome item in the Windows Taskbar, or by minimizing the Command Prompt window.
We will now connect to vCenter and simulate a failure by disconnecting the Edge Node your trace route utilized. The loss of this Edge
Node will cause all traffic to route through the remaining Edge Node.
NOTE: If you do not have an existing tab for vCenter in Google Chrome, click the New Tab button and select the vcsa-01a Web Client
bookmark from the Region A bookmark folder.
1. Username: [email protected]
2. Password: VMware1!
3. Click Login
ping -t web-01a.corp.local
You should observe 100% Reply packets during this extended ping, although due to the nature of networking it is also possible (and
acceptable) to see an occasional timeout.
Leave the extended ping running and return to the vCenter UI. We will now disconnect the interfaces of the Edge Node.
NOTE: Because both paths are equally valid, the output of the traceroute you performed earlier may have traversed the Tier-0 interface on either edgenode-01a or edgenode-02a. If your second route hop in the traceroute displayed 192.168.120.3, please select edgenode-01a in the following steps. Likewise, if the second route hop in the traceroute displayed 192.168.120.4, please select edgenode-02a.
2. Click the Edit Settings... icon to display the Edit Settings dialog
We will simulate the rapid failure of an Uplink by disconnecting its interface on the Edge Node.
1. Click the checkbox to the left of Connected for Network adapter 2 to deselect it (this is the first Uplink interface of the edge)
2. Click the checkbox to the left of Connected for Network adapter 3 to deselect it (this is the second Uplink interface of the edge)
NOTE: Make sure you select the correct Network adapters; they should be Network adapter 2 and Network adapter 3, connected to port groups Edge-Trunk-A and Edge-Trunk-B, as indicated in the screenshot.
Return to the Command Prompt window and observe that very few packets were dropped while BFD reconverged the topology around
the failure. In the example screenshot, a single ping packet was dropped, but due to the various timers involved you may see more. As
we will explore in the next step, traffic is now traversing the remaining edge node.
Since the network topology has reconverged around the failure, the path from your admin desktop to the Tier-0 Gateway should now be through its interface on edgenode-01a (or alternately edgenode-02a, as noted earlier).
1. Press CTRL+C to exit the continuous ping from the previous step, and return to a command prompt
2. Enter the following to perform a traceroute to web-01a.corp.local and display its path:
tracert -d web-01a.corp.local
The first hop is the IP address of the vPod router (the gateway of your admin desktop). The second hop is now 192.168.120.3 (or 192.168.120.4), the IP address of the Tier-0 Gateway interface on edgenode-01a (or edgenode-02a). Traffic is then delivered to web-01a at 172.16.10.11 in the final hop.
Now that we have tested fault tolerance on the Edge Node, we will reconnect its network adapters and return the Edge Node VM to a
normal state. Return to the vSphere Client, then perform the following.
1. Click the Refresh icon to update the virtual machine's current status
2. Click the Edit Settings... icon to display the Edit Settings dialog
1. Click the checkbox to the left of Connected for Network adapter 2 to select it
2. Click the checkbox to the left of Connected for Network adapter 3 to select it
NOTE: Make sure you select the correct Network adapters; they should be Network adapter 2 and Network adapter 3, connected to port groups Edge-Trunk-A and Edge-Trunk-B, as indicated in the screenshot.
Virtual Routing and Forwarding (VRF) is a feature that allows a single router to maintain separate, isolated routing tables. The VRF Lite
feature in NSX-T incorporates this to provide multi-tenant data plane isolation in the Tier-0 gateway. In addition to its own isolated routing table, each VRF instance has its own uplinks, NAT tables and gateway firewall services.
In this lab we will deploy a VRF configuration by connecting to a second upstream router. We will observe that routes advertised to this
VRF instance will not propagate to the T0 Gateway itself, maintaining route separation between the T0 and its VRF.
Your lab is configured with an existing Tier-0 Gateway, T0-GW-01. This Gateway peers with the simulated physical router ("vPOD
Router") via BGP on subnet 192.168.120.0/24. A second vPOD Router has been deployed in your lab, with its own routes and BGP
configuration. We will configure a new VRF instance that will peer via BGP with this second vPOD Router on VLAN 131, with subnet
192.168.131.0/24. In the above diagram, the components on the right will be new, and will represent an isolated routing domain.
To begin, we will create an NSX Segment that will be used to connect our VRF instance to the secondary vPOD router on VLAN 131.
2. Click Segments in the menu on the left side of the NSX Networking user interface
2. Leave the default of None for Connected Gateway (we will only use this Segment for Uplink ports)
5. Click SAVE
3. Click SAVE
We will continue to edit the VRF now that its initial configuration has been specified.
2. Click Set to the right of External and Service Interfaces to display the Set Interfaces dialog
6. Click SAVE
7. Click CLOSE
We will complete our BGP configuration by defining a BGP peer on the VRF.
2. Click Set to the right of BGP Neighbors to display the Set BGP Neighbors dialog
5. Click SAVE
6. Click CLOSE
2. Click SAVE
3. Click CLOSE EDITING at the bottom of the edit dialog to exit edit mode and return to the list of Tier-0 Gateways
We should now have successful BGP peering between our new VRF and the second vPOD Router in the environment. This VRF will maintain
a separate routing table from that of the existing T0-GW-01. We will now explore the routing tables of the two routers.
2. Click PuTTY
Once you are authenticated to the Edge Node, maximize the PuTTY window for better visibility.
1. Get a list of Logical Routers connected to edgenode-01a by entering the following command:
get logical-routers
NOTE: The VRF number of SR-VRF-VRF-GW-01 may differ from the screenshot.
1. Enter the VRF routing context on the Edge Node by entering the following command (NOTE: Replace "4" in the command below with the VRF number found in the previous step):
vrf 4
2. Get the BGP neighbor status by entering the following command:
get bgp neighbor summary
Verify the neighbor relationship with 192.168.131.1 is showing a state of Estab (Established).
1. View the route table for the VRF by entering the following command:
get route
Observe that there are three total routes in the route table. In reverse order, they are:
192.168.131.0/24: This "t0c" (Tier0-Connected) route corresponds to the VRF-Uplink Segment that connects our VRF to the remote
router.
10.100.131.0/24: This subnet is learned via BGP ("b") from the peering router.
0.0.0.0/0: This default route is also learned via BGP ("b") from the peering router.
Observe that none of the 172.16.x networks that are connected to the T0-GW-01 router are visible in this VRF routing instance. This is
because the parent Tier-0 router and VRF maintain separate, isolated routing tables.
Lastly, we will view the route table for the primary Tier-0 instance and observe that it is not receiving routes from our VRF.
1. Exit the VRF routing context by entering the following command:
exit
2. List logical routers to locate the router instance for T0-GW-01 by entering the following command:
get logical-routers
3. Locate the VRF for SR-T0-GW-01 and enter its VRF by executing the following command (NOTE: Your VRF number may differ from the one displayed in this guide):
vrf 1
4. Enter the following command to confirm that our VRF routes are not visible in the T0-GW-01 routing table:
get route
You may close or minimize the PuTTY window before continuing on to the next step.
When devices communicate on a network using IP, there are two common types of transmission: Unicast and Broadcast. Unicast
packets are those sent with a single intended destination. For example, when downloading a file, the packets making up that file are
typically sent directly to your IP address via unicast. In this scenario, packets from a single source are received by a single destination.
Broadcast packets, on the other hand, are received by every device on the local network. An example of a broadcast packet is an ARP
request, which is routinely used before two IP devices can communicate. When a device on the network first attempts to reach another
device on its local network, it broadcasts a packet on the local network asking which device is using the destination IP address. In this
example, a single copy of the ARP request is received by every device on the local subnet. As you can imagine, large amounts of
broadcast traffic can quickly overwhelm a network. One important characteristic of a broadcast packet is that it cannot cross a routed
boundary, which is why a VLAN (or other layer 2 network) is often referred to as a "broadcast domain."
A third and less common packet type, known as Multicast, is intended to provide the benefits of a broadcast packet (sent once, received
by many) without the overhead commonly associated with a broadcast packet. With multicast, destination devices can choose what
type of multicast packets they'd like to receive. This is known as a multicast group. An example of this is a streaming server sending two
video feeds. By sending these feeds via multicast, destination devices can choose which of the streams they'd like to receive. Devices
that are not interested in ("subscribed to") the multicast group will not be sent the packets making up the video stream. Another
advantage to multicast is that routers can be configured to pass multicast streams across subnets to remote network destinations.
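To make the group-membership mechanics concrete, below is a minimal sketch in Python of what a multicast receiver and transmitter pair can look like. The lab's multi-receive.sh and multi-transmit.sh scripts are not reproduced in this manual, so this is illustrative only: the group 224.1.1.1 comes from the exercise, while the port number and payload are assumptions.

import socket
import struct

GROUP = "224.1.1.1"   # multicast group used in this exercise
PORT = 5001           # assumed port; the lab scripts' port is not shown

def receive():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Joining the group triggers an IGMP membership report; multicast
    # routing uses these reports to learn which segments have receivers.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, src = sock.recvfrom(1024)
        print(f"received {len(data)} bytes from {src[0]}")

def transmit():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # A TTL greater than 1 is required for the datagrams to be routed
    # across the Tier-0 gateway to a receiver on another segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
    sock.sendto(b"hello multicast", (GROUP, PORT))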
In this exercise, we will configure routing in NSX to allow a device on one subnet (VM web-01a on Segment Web-LS) to receive multicast traffic from a source on a second subnet (VM app-01a on Segment App-LS).
2. Click PuTTY
In this example, we will use server web-01a as a multicast receiver. We will execute a script that tells web-01a to listen for multicast
packets on IP address 224.1.1.1. This script will wait until it receives a multicast transmission, which it will then summarize on the console.
1. Launch the multicast receiver on server web-01a by entering the following command:
./multi-receive.sh
2. Click PuTTY
In your first PuTTY window, web-01a should be listening as a multicast receiver on IP address 224.1.1.1. We will now execute a script that tells server app-01a to transmit a multicast stream to IP address 224.1.1.1. If possible, arrange both PuTTY sessions (web-01a and app-01a) so they are visible on your screen simultaneously. Otherwise, you may toggle between the two windows to observe the result of this exercise.
1. Launch the multicast transmitter on server app-01a by entering the following command:
./multi-transmit.sh
You should see the console on server app-01a update as it transmits a multicast stream to IP address 224.1.1.1. The script will exit after transmitting for 30 seconds. However, if you observe the PuTTY window for web-01a, no additional output is generated.
This is because server web-01a is in subnet 172.16.10.0/24, and server app-01a is in subnet 172.16.20.0/24. Even though there is IP
connectivity between the servers, without the NSX router being configured to pass multicast, the transmission from app-01a is never
delivered to the destination at web-01a.
2. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
3. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
2. Click the toggle to the left of Disabled on the Multicast line; the text label will update to indicate Enabled
4. Click SAVE
We have now enabled multicast routing for streams in the 224.1.1.0 range of addresses. We will now re-initiate our multicast
transmission from server app-01a.
In your first PuTTY window, web-01a should still be listening as a multicast receiver on IP address 224.1.1.1. We will now relaunch the
multi-transmit script on server app-01a, and should see a different result.
1. Relaunch the multicast transmitter on server app-01a by entering the following command:
./multi-transmit.sh
You should see the console on server app-01a update as it transmits a multicast stream to IP address 224.1.1.1, similar to the output it displayed previously. The script will exit after transmitting for 30 seconds. Return to the PuTTY session for server web-01a by clicking in its window, or on its window name in the Windows taskbar.
1. You should now receive the multicast datagrams that are being transmitted by server app-01a from IP address 172.16.20.11.
Please proceed to any module below which interests you the most.
• Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX, including dynamic routing, multicast, and VRF.
• Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
• Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
• Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations tools available in NSX.
• Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
• Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
The goal of this lab is to explore load balancing in NSX. In this module you will complete the following tasks:
NSX Edge Nodes are service appliances with pools of capacity, dedicated to running network and security services in the NSX fabric
that cannot be distributed to the hypervisors. Edge Nodes are used to provide routed connectivity between the overlay and the physical
infrastructure via the Service Router (SR) component of the Tier-0 Gateway, and can also provide additional centralized, non-distributed
services such as load balancing, NAT and VPN. Services provided by the NSX Edge Transport Node include:
• NAT
• Load Balancer
As soon as one of these services is configured or an external interface is defined on a Tier-0 or Tier-1 gateway, a Service Router (SR) is
instantiated on the selected Edge node. The Edge node is also a transport node in NSX, hosting its own TEP address. This allows it to
communicate with other nodes in the overlay network. NSX Edge Transport Nodes are typically configured for one Overlay Transport
Zone, and will also be connected to one or more VLAN transport zones when used for North-South (Uplink) connectivity.
Beginning with the NSX-T Data Center 3.0 release, support for Intel® QuickAssist Technology (QAT) is provided on bare metal servers. Intel® QAT provides hardware acceleration for various cryptography operations, such as IPSec VPN bulk cryptography, offloading the function from the Intel® Xeon® Scalable processor.
The QAT feature is enabled by default if the NSX Edge is deployed on a bare metal server with an Intel® QuickAssist PCIe card based on the C62x chipset (Intel® QuickAssist Adapter 8960 or 8970). The single root I/O virtualization (SR-IOV) interface must be enabled in the BIOS firmware.
NSX Edge Node is available for deployment in either a virtual machine (VM) or bare metal form factor. When deployed as a VM, the
Edge Node benefits from native vSphere features such as Distributed Resource Scheduler (DRS) and vMotion. Deploying the Edge
Node on bare metal allows direct access to the device's hardware resources, providing increased performance and lower latency than
the VM form factor.
2nd Gen Intel® Xeon® Scalable processors, with Intel® Virtualization Technology (Intel® VT) built into and enhanced across five successive generations of Intel® Xeon® processors, enable live migration of VMs across Intel Xeon processor generations.
Consider the network bandwidth requirements within your data center when planning vMotion. A 10 GbE NIC can support vMotion of up to 8 VMs simultaneously.
• If not continuing from a previous module, open a browser by double clicking the Google Chrome icon on the desktop.
3. Click on nsxmgr-01a
3. Click LOG IN
When deploying centralized services in the NSX fabric, the instance of that service is provisioned and realized on an NSX Edge Node. If
the Edge Node hosting this service were to experience a failure, any services running on the Edge Node would also fail as a result. To
prevent a failure from impacting these services, Edge Nodes are grouped into logical objects called Edge Clusters.
An Edge Cluster is a group of one or more Edge Nodes that specifies the fault domain for services and how they should be recovered.
Your lab is provisioned with four Edge Transport Nodes: edgenode-01a, edgenode-02a, edgenode-03a and edgenode-04a. We will now review the existing Edge Cluster configuration and create one additional cluster.
2. Click Fabric in the menu on the left side of the NSX System user interface
6. Click EDIT
Observe that as stated above, there are four Edge Nodes. Edge Cluster EdgeCluster-01 is configured to use Edge Nodes edgenode-01a and edgenode-02a, indicated in the Selected column. edgenode-03a and edgenode-04a are displayed as Available and are not part of this Edge Cluster.
We will now define a new Edge Cluster that will be used for the services we configure in this module.
2. Click the checkbox to the left of Available to select all available Edge Nodes (edgenode-03a and edgenode-04a)
3. Click the right arrow icon to move the Edge Nodes from Available to the Selected column
4. Click ADD
Confirm that Edge Cluster EdgeCluster-02 was created successfully, and that it is comprised of 2 Edge Transport Nodes.
The exercises in this module rely on a 3-Tier Application that has been partially configured in this lab. In order to successfully test the load balancing component of NSX, the remainder of the application's network connectivity must first be configured.
A step-by-step guide is provided below, along with a summary of the steps in case you wish to configure the application without assistance. In this section, we will:
From vCenter:
• Assign port group DB-LS to Network Adapter 1 of VM db-01a in the RegionA01-COMP01 cluster
If you have completed this task without guidance and can successfully ping server db-01a at IP address 172.16.30.11, please proceed to
the next chapter (NSX Edge Services - Load Balancing) by clicking HERE or using the table of contents on the top of the manual.
In order to create a new overlay segment and attach it to the existing Tier-0 Gateway T0-GW-01, perform the following steps from the
NSX Manager user interface:
2. Click Segments in the menu on the left side of the NSX Networking user interface
8. Click SAVE
In order to connect VM db-01a to our newly-created NSX Overlay Segment DB-LS, perform the following steps from the vCenter user interface:
NOTE: If you do not have an existing tab for vCenter in Google Chrome, click the New Tab button and select the vcsa-01a Web Client
bookmark from the Region A bookmark folder.
From the Edit Settings dialog, attach the newly-created Segment DB-LS to Network adapter 1 of db-01a:
1. To the right of Network adapter 1, click the arrow to expand the drop down menu
1. Verify Network adapter 1 indicates that it is connected to DB-LS. Click OK in the Edit Settings dialog to apply the
configuration change.
1. Ping db-01a.corp.local on NSX Segment DB-LS by entering the following text, then pressing enter:
ping db-01a.corp.local
You should observe ping replies from the db server. Please close or minimize the Command Prompt window once completed.
We will now create a load balancer in NSX. In this section of the module you will execute the following tasks:
• Create health checks for HTTPS services on web-01a and web-02a web servers
• Create a virtual IP (VIP) to load balance web server traffic to 2 separate web servers
In order to create a load balancer, we need a Tier-1 Gateway deployed to at least one Edge Node. Please return to the NSX user interface by selecting the NSX tab in Google Chrome.
2. Click Tier-1 Gateways in the menu on the left side of the NSX Networking user interface
Note: Tier-1 gateways used for load balancing services must be placed on Edge Nodes of medium or large size
6. Click SAVE
1. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
2. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
2. Click the 1 link to the right of Route Re-distribution to display the Set Route Re-distribution dialog
Note: The actual number of Route Re-distributions may differ from the instructions, depending on the modules you have completed
prior to this step.
Since we rely on the Tier-0 Gateway to re-distribute the routes from the Tier-1 to the physical fabric, we also need to allow the Load
Balancer and SNAT routes to be re-distributed at the Tier-0 level as well. As we can see, currently only Connected and Static routes are
being re-distributed into BGP.
2. Click APPLY
1. Click SAVE
Now that we have the Tier-1 and routing requirements set up, let's create our Load Balancer.
1. Click Load Balancing in the menu on the left side of the NSX Networking user interface
6. Click SAVE
1. Click NO to continue
1. Click the refresh icon periodically until the Status shows Success
NOTE: It may take up to 3 - 4 minutes for the Status to display Success. During this time you may see the status transition through other
states, including Failed and Unknown. This is normal, and occurs while Policy Manager attempts to realize the desired configuration on
the NSX Manager.
1. Click MONITORS
2. Click APPLY
1. Click SAVE
In this step we will create a new Server Pool. A Server Pool is a list of the systems the load balancer will monitor and deliver traffic to for
a given Virtual Server.
4. Click the Select Members link to display the Configure Server Pool Members dialog
5. Click SAVE
5. Click SAVE
6. Click APPLY
We will now select a Health Monitor for this Pool. Health Monitors define how the load balancer will check the pool members to
determine their ability to accept incoming connections.
1. Click the Set link to display the Select Active Monitors dialog
2. Click APPLY
1. Click SAVE
The last step is to define a Virtual Server. The Virtual Server has an IP address that accepts incoming connections and routes them to a pool member. How a pool member is chosen is specified during configuration, and can be based on a number of factors including availability, load, and number of connections.
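As a conceptual illustration of one such scheme (this sketch is not NSX code; NSX offers several algorithms, including round robin and least connections), round-robin selection simply hands each new connection to the next pool member in turn:

from itertools import cycle

pool = ["172.16.10.11", "172.16.10.12"]  # web-01a and web-02a
members = cycle(pool)
for request_id in range(4):
    # Each new connection is handed to the next member in sequence
    print(f"request {request_id} -> {next(members)}")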
2. Enter 172.16.10.10 for IP Address (this IP address has a DNS record of webapp.corp.local)
3. Enter 443 for Ports
4. Select the LB-01 Load Balancer instance that was created earlier in this module
5. Select the Web-Servers-01 Server Pool that was created earlier in this module
6. Click SAVE
Our new load balancer configuration is now complete. We have configured a Virtual Server on port 443 with an IP address of 172.16.10.10, sending traffic to the web servers in Server Pool Web-Servers-01. This server pool has two IP addresses, 172.16.10.11 and 172.16.10.12, corresponding to web-01a and web-02a respectively. The Load Balancer is monitoring the health and availability of the web application by connecting to the pool members every five seconds with the URL /cgi-bin/app.py and expecting a response that contains the text "Customer Database".
1. Click the refresh icon periodically until the Status shows Success
NOTE: It may take up to 3-4 minutes for the Status to display Success
2. Click the VIP-Region A Customer App (will not work until LB configured) shortcut from the drop down
You should now see the test web application and the Customer Database information.
We will now log into the vCenter web client so we can manually fail one of the web servers and test fault tolerance. If you have an
existing tab with the vCenter client, click to select it.
NOTE: If you do not have an existing tab for vCenter, please click the New Tab icon in Google Chrome and select the vcsa-01a Web
Client bookmark from the Region A bookmark folder.
Recall the web server that served your request when we recently connected to the Webapp Virtual Server. We will now power that
server off and verify that traffic gets directed to the remaining web server.
1. Click to select the web server you noted down in the previous step
2. Click Actions
3. Click Power
1. Click the Customer Database tab to change focus back to the web application
3. Verify that you are now connecting to the remaining web server pool member
Note: This should be the opposite of the server you initially connected to
Before completing this section of the module, we will return the web server we had powered off to an operational state. Please select
the vCenter Web Client tab in Google Chrome before performing the following steps.
1. Click the web server that was recently powered off to ensure it is selected
Please proceed to any module below which interests you the most.
• Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX, including dynamic routing, multicast, and VRF.
• Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
• Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
• Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations tools available in NSX.
• Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
• Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
Included with the NSX platform are various types of Virtual Private Networks, commonly referred to as a VPN. VPNs serve multiple
purposes, including data encryption and extending Layer 2 and Layer 3 networks to remote locations. NSX supports the following VPN
types as services on the NSX Edge Node:
IPSec: Internet Protocol Security (IPSec) VPN is used to secure traffic flowing between two networks that are connected over a public
network. This is done through the use of IPSec gateways called endpoints. Data that traverses the IPSec tunnel is encrypted at the
source and decrypted at the destination. This allows for secure transmission over public carrier networks such as the Internet. Traffic is
typically routed as it passes through an IPSec VPN; therefore, extending Layer 2 across endpoints is not supported. NSX supports both
policy-based and route-based IPSec VPN tunnels.
Layer 2 VPN: Layer 2 VPN (L2VPN) allows you to extend Layer 2 networks (VNIs or VLANs) across multiple sites on the same broadcast
domain. This connection also takes advantage of IPSec encryption between the L2VPN client and server. Whereas IPSec is a standard
protocol and offerings from multiple vendors can communicate with one another, L2VPN is only available for NSX-T and does not have
any third-party interoperability.
EVPN: Ethernet VPN (EVPN) is a standards-based BGP control plane that provides the ability to extend Layer 2 and Layer 3
connectivity across data centers and other network fabrics. In comparison to IPSec and L2VPN, EVPN does not provide encryption
between endpoints.
The goal of this lab is to explore and configure an IPSec VPN tunnel in NSX. In this module you will complete the following task:
We will now create an IPSec tunnel in NSX and connect it to a Tier-1 Gateway. We will also move Segment Web-LS to this Tier-1 Gateway temporarily and test connectivity. This exercise will simulate a tenant environment where devices on Web-LS communicate with a remote location, such as a branch office or secondary data center.
1. Open a browser by double clicking the Google Chrome icon on the desktop.
1. Open the NSX web interface by clicking on the nsxmgr-01a bookmark in the Region A bookmark folder on the toolbar of
Google Chrome
3. Click LOG IN
In order to create a VPN, we need a Tier-1 Gateway deployed to at least one Edge Node.
2. Click Tier-1 Gateways in the menu on the left side of the NSX Networking user interface
4. Click the arrow to the left of Route Advertisement to expand it (you may need to scroll down to see Route Advertisements)
5. Click the toggle to enable All Connected Segments & Service Ports
1. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
2. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
2. Click the 1 link to the right of Route Re-distribution to display the Set Route Re-distribution dialog
Note: The actual number of Route Re-distributions may differ from the instructions, depending on the modules you have completed
prior to this step.
1. Click the Number (2) under Route Re-distribution to view what is currently configured on the T0
1. As we can see, currently only Tier-0 Connected and Static routes are being re-distributed into BGP
2. Click Close
Since we rely on the Tier-0 Gateway to re-distribute the routes from the Tier-1 to the physical fabric, we also need to allow the VPN and
its Connected routes to be re-distributed at the Tier-0 level as well.
1. Click the checkbox to select Connected Interfaces & Segments under the Advertised Tier-1 Subnets (this will automatically
3. Click APPLY
Now that we have the Tier-1 and routing requirements set up, let's create our VPN.
1. Click VPN in the menu on the left side of the NSX Networking user interface
3. Select IPSec
1. Click NO to continue
We will now create a Local Endpoint for our VPN Tunnel. This will be the NSX side of the IPSec connection.
4. Click SAVE
We will now configure our IPSec tunnel using the Local Endpoint and VPN services we defined in the previous steps.
7. Click SAVE
1. Click the refresh icon periodically until the Status shows Success
It may take up to 3 - 4 minutes for the Status to display Success. During this time you may see the status transition through other states,
including Failed and Unknown. This is normal, and occurs while Policy Manager attempts to realize the desired configuration on the
NSX Manager.
We have now successfully configured an IPSec tunnel between our T1-VPN-01 Tier-1 Gateway and a remote site. We will now migrate
our existing Segment Web-LS to our new Tier-1 gateway, before using server web-01a on this segment to ping the remote side of the
IPSec tunnel.
1. Click Segments in the menu on the left side of the NSX Networking user interface
2. Click the More Options icon to the left of Web-LS to display its Options menu, then click Edit
Scroll to the bottom of the Edit Segment dialog to locate the SAVE button
2. Click PuTTY
When specifying the IPSec configuration, we defined a Virtual Tunnel Interface (VTI) on the IPSec link. A VTI assigns a subnet and IP
address to the inside (encrypted side) of the VPN tunnel. First, we will ping the local side of the IPSec tunnel's VTI. This is the interface
we defined on the Tier-1 Gateway when creating the VPN.
ping -c 4 10.100.222.1
Observe that ping replies are being received from the local side of the VPN gateway.
We will now ping the remote side of the IPSec tunnel's VTI. This interface and its associated configuration have been predefined as part
of the remote gateway's IPSec configuration.
ping -c 4 10.100.222.2
Observe that ping replies are being received from the remote side of the VPN gateway.
2. Minimize the PuTTY window to return to the NSX UI
Now that we have successfully tested both sides of the IPSec tunnel, we will return Segment Web-LS to its original Tier-0 Gateway.
1. Click Segments in the menu on the left side of the NSX Networking user interface
2. Click the More Options icon to the left of Web-LS to display its Options menu, then click Edit
Scroll to the bottom of the Edit Segment dialog to locate the SAVE button
Please proceed to any module below which interests you the most.
• Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX, including dynamic routing, multicast, and VRF.
• Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
• Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
• Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations tools available in NSX.
• Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
• Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
NSX-T provides several tools and utilities to simplify daily operations and provide the level of visibility an enterprise-grade SDN solution
requires.
• Visibility Tools: provide information about the health and status of the NSX components, traffic statistics, and visibility into the environment
◦ Dashboards
◦ Counters/Stats/Tables
• Troubleshooting Tools: help identify problems or configuration issues when something does not work
◦ NSX Alarms/Events
This diagram illustrates the virtual machines that make up our 3 Tier Web App we will be exploring.
NSX provides comprehensive monitoring tools through NSX native monitoring capability and integration with 3rd party tools.
In this view we can see the relationship between configured T0 gateways, T1 gateways, logical segments and connected virtual
machines.
1. Note the relationship between logical segments and connected virtual machines.
If you have taken previous modules in this lab, your Network Topology may look different from this image.
T0-GW-01
In this view we can see the name and details of the T0 gateway.
Take a moment to explore the topology: hover over items to see more detail, and click on groups of VMs to expand and collapse the view.
1. Click Security
3. Click Configuration
4. Note all the detail on the page and the ability to click on links for further inspection / configuration. Feel free to click on links and explore.
Once you are done exploring, click Security -> Security Overview -> Configuration to return to this view.
1. Click Security
3. Click Capacity
4. Note the current Security Inventory configuration and the maximum capacity; this is an easy way to keep an eye on sizing in the environment.
1. Click Inventory
3. Click Configuration
4. Note all the detail on the page and the ability to click on links for further inspection / configuration. Feel free to click on links and explore.
Once you are done exploring, click Inventory -> Inventory Overview -> Configuration to return to this view.
1. Click Inventory
3. Click Capacity
4. Note the current Inventory configuration and the maximum capacity; this is an easy way to keep an eye on sizing in the environment.
1. Click System
3. Click Configuration
4. Note the detail on the page and the ability to click on links for further inspection / configuration. Feel free to click on links and explore.
Once you are done exploring, click System -> System Overview -> Configuration to return to this view.
1. Click System
3. Click Capacity
4. Note the current System configuration and the maximum capacity; this is an easy way to keep an eye on sizing in the environment.
2. Click the arrow next to Fabric in the menu on the left side to see Nodes
3. Click Nodes
6. Click the arrow next to RegionA01-COMP01 to show the hypervisors configured with NSX
• System Usage, including CPU, memory, file system information, load and uptime
• Transport Node Status, including status of the connectivity to Manager and Controllers, and pNIC/Bond status
2. Review Tunnel Status and Remote Transport Node Status of the overlay tunnels established by the host.
3. Close the monitoring window
L2 Counters/Stats/Tables
2. Click Segments in the menu on the left side of the NSX Manager User Interface
4. Click View Statistics to open aggregated information for that logical switch
• Cumulative Traffic statistics for Unicast, Broadcast, Multicast, and Dropped packets
• Additional switch-aggregated statistics for blocked traffic, including the reason for traffic being dropped (Spoof Guard, BPDU filtering, etc.)
Note: Layer 2 information can be found on different tabs. Those related to logical ports provide individual information for a specific port, while those related to logical switches provide aggregated information for that logical switch.
Each individual NSX component constantly scans and monitors its predefined alarm conditions. When an alarm condition occurs, events are sent to the NSX Manager, and users can receive notifications of alarms. NSX can also integrate with an existing monitoring infrastructure by sending out events via log messages to syslog, or traps to an SNMP server, when an alarm condition occurs.
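For example, forwarding events to a syslog server can be configured from the NSX CLI. The command below is a typical form; the server address here is a placeholder, and the exact parameters may vary by NSX-T version:
set logging-server 192.168.110.24 proto udp level info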
2. Click Alarms
Review the potential alarms/events for this NSX platform. The alarms are clickable, and will show the description and recommended
actions.
More details on each alarm can be found at the following link: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-23FB78F5-E0AF-40E3-9450-0B957B374383.html
Traceflow
Traceflow helps inspect the path of a packet as it travels from one logical port to a single or multiple logical ports.
1. Click the arrow next to VM name and select web-01a as the source VM. The virtual interface should be automatically selected
2. Review the output: the observations (such as packets being Forwarded), Transport Node, Component, and the Port Connection Tool graphical map of the topology if unicast and logical switch are selected as destinations. Clicking on the components in the visual output reveals more information.
If you have completed previous modules in this lab, your Traceflow output may look different from the image.
Lab 2226-02-SEC, NSX Advanced Security, goes further into how to use Traceflow to observe packets being forwarded or dropped by the Distributed Firewall in its Module 1.
IPFIX
1. Move to IPFIX in the menu on the left side of Plan & Troubleshoot interface
When IPFIX is enabled in NSX, all configured host transport nodes send IPFIX messages to the collectors using port 4739. NSX supports IPFIX for switches and firewalls, as you can see by looking at the different sub-menus of the IPFIX tab.
1. Move to Port Mirroring in the menu on the left side of the Plan & Troubleshoot interface
2. The admin can monitor port mirroring sessions for troubleshooting. Among the available types are:
◦ Logical SPAN: to be used when both NICs, source and destination of the mirroring session, are on the same Transport Node.
◦ Remote L3 SPAN: forwards captured traffic to a remote IP address, encapsulated in one of three protocols (GRE, ERSPAN type two, or ERSPAN type three).
While troubleshooting, it can be valuable to capture network packets sent and received by a specific virtual machine. To capture these packets we will need to SSH into the ESXi server that is currently hosting the VM we are targeting. In this example we will be capturing packets to and from web-01a. This example will be limited to capturing packets directly on the VM's virtual interface; however, we can also capture packets before and after the Geneve overlay at the ESXi server's VMkernel interface, as well as other locations.
4. Click the arrow next to the Default Layer3 Firewall rule section to open the DFW default rules
ping web-01a.corp.local -t
2. Click PuTTY
net-stats -l
1. Capture the traffic as web-01a-TX.pcapng to the /tmp directory of the ESXi server by running the following command, replacing the Port number 67108881 with the Port number you found in the Find web-01a Port step:
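The capture command itself is not reproduced in this copy of the manual. A typical form using the ESXi pktcap-uw utility is shown below, where --dir 1 captures the transmit direction and --ng writes pcapng format; the exact flags used in the lab may differ:
pktcap-uw --switchport 67108881 --dir 1 --ng -o /tmp/web-01a-TX.pcapng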
3. Click WinSCP
1. Click esx-01a.corp.local
2. Click Login
3. Click Yes to add the host key to the cache if prompted (not shown)
1. Select web-01a-TX.pcapng
2. Click Download
4. Click OK
2. Click OK
3. Click Wireshark
1. Click File -> Open on the top level navigation menu of Wireshark.
2. Click Open
If you would like information on how to use Wireshark to view the packet capture please visit the following link:
https://www.wireshark.org/docs/wsug_html_chunked/ChapterWork.html
Please proceed to any module below which interests you the most.
•Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX.
•Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
•Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
•Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations available in NSX.
•Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
•Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
Lab Captains:
•Lead Captain - Jennifer Schmidt - Staff Virtual Cloud Network TAM - United States
•Captain - Joe Collon - Staff Solution Engineer Virtual Cloud Network - United States
•Captain - Phoebe Kim - Senior Cloud Solutions Architect - United States
•Captain - Mihajlo Veselinovic - Staff Virtual Cloud Network TAM - United States
•Associate Captain - Claire Davin - Virtual Cloud Network Solutions Engineer - France
The goal of this module is to explore how you can automate the deployment and configuration of NSX-T resources.
We will explore automation in NSX by using two tools, Postman and Terraform.
Postman is an API development tool that enables you to make REST API calls directly to the NSX Manager.
Terraform allows you to define an intended state in a configuration text file, and realizes that intended configuration when you apply it.
The native NSX-T API can be used to automate deploying and configuring NSX-T resources. In this module, you will use native NSX-T
API with Postman, an API development tool, to perform the following tasks:
•Create a distributed firewall rule for clients to access the phoenix web VMs
1. Click 2225-02 Module 5 Requests to expand the collection and view all the requests
2. Click the Variables tab to view the variables for this collection; these variables will be used to create API request URLs and to store values returned in API responses:
◦nsx_url: URL of the NSX-T Manager
◦vcsa_url: URL of the vCenter Server
◦vcsa_session: session ID output from an API request that creates a vCenter Server session; this is needed for subsequent API requests to the vCenter Server
◦web_ls_objectID: object ID of the logical segment where phoenix-web-01a needs to be placed
◦web_vmID: object ID of the phoenix-web-01a virtual machine
◦web_hwNIC: VM hardware NIC ID of the phoenix-web-01a virtual machine
◦domain_ID: domain ID of infra
2. Click the Body tab to view the JSON request body details
◦The tier-1 router will advertise its connected routes and IPsec local endpoints, determined by route_advertisement_types
3. Click Send
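For reference, an equivalent call made with curl instead of Postman might look like the following sketch. The gateway ID T1-PHOENIX-01 and tier-0 path match the objects used in this lab, but the exact body in the Postman collection may differ:

# Create (or update) the tier-1 gateway through the NSX-T Policy API.
curl -k -u 'admin:VMware1!VMware1!' -X PATCH \
  "https://nsxmgr-01a.corp.local/policy/api/v1/infra/tier-1s/T1-PHOENIX-01" \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "T1-PHOENIX-01",
    "tier0_path": "/infra/tier-0s/T0-GW-01",
    "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_IPSEC_LOCAL_ENDPOINT"]
  }'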
2. Click the Body tab to view the JSON request body details
◦The segment will be attached to the tier-1 router created in the previous step, shown by connectivity_path
3. Click Send
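A comparable curl sketch for the segment request is shown below; the gateway address matches the lab topology, and depending on the environment a transport_zone_path field may also be required:

# Create the phoenix-web-ls segment and attach it to the tier-1 gateway.
curl -k -u 'admin:VMware1!VMware1!' -X PATCH \
  "https://nsxmgr-01a.corp.local/policy/api/v1/infra/segments/Phoenix-Web-LS" \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "Phoenix-Web-LS",
    "connectivity_path": "/infra/tier-1s/T1-PHOENIX-01",
    "subnets": [ { "gateway_address": "172.16.50.1/24" } ]
  }'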
1. Launch the NSX web interface by clicking on the nsxmgr-01a bookmark in the Region A bookmark folder of Google Chrome
3. Click LOG IN
◦This vCSA session ID will be used for all the API requests to the vCenter Server
3. Click Send
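Outside of Postman, creating a vCenter Server session is a single authenticated POST. In this sketch the vCenter hostname and credentials are assumptions, not values taken from the lab manual:

# Create a vSphere Automation API session; the returned token is then sent
# in the vmware-api-session-id header on all subsequent vCenter requests.
curl -k -u 'administrator@corp.local:VMware1!' -X POST \
  "https://vcsa-01a.corp.local/api/session"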
2. Click the Tests tab to view the script that will store the value of network in the response to a variable called web_ls_objectID
◦This segment object ID will later be used for the API request to attach the phoenix-web-01a VM to the phoenix-web-ls segment
3. Click Send
2. Click the Tests tab to view the script that will store the value of vm in the response to a variable called web_vmID
◦This VM ID will be used in the next API request to gather more information about the VM hardware NIC, as well as to attach the VM to a logical segment
3. Click Send
2. Click the Tests tab to view the script that will store the value of nic in the response to a variable called web_hwNIC
◦This hardware NIC ID will be used for the next API request to attach the VM to a logical segment
3. Click Send
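The three lookup requests above map to simple GETs against the vSphere Automation API. A hedged sketch, assuming the session token is stored in $VCSA_SESSION and the vCenter hostname used above; note that the query-parameter syntax differs between vSphere releases (older /rest endpoints use filter.names):

# Object ID of the phoenix-web-ls network (stored as web_ls_objectID).
curl -k -H "vmware-api-session-id: $VCSA_SESSION" \
  "https://vcsa-01a.corp.local/api/vcenter/network?names=phoenix-web-ls"
# Object ID of the phoenix-web-01a VM (stored as web_vmID).
curl -k -H "vmware-api-session-id: $VCSA_SESSION" \
  "https://vcsa-01a.corp.local/api/vcenter/vm?names=phoenix-web-01a"
# Hardware NIC ID of that VM (stored as web_hwNIC); substitute the VM ID.
curl -k -H "vmware-api-session-id: $VCSA_SESSION" \
  "https://vcsa-01a.corp.local/api/vcenter/vm/<web_vmID>/hardware/ethernet"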
2. Click the Body tab to view the JSON request body details
3. Click Send
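The attach request updates the NIC's backing. The sketch below assumes the segment is exposed to vSphere as an opaque network; check the actual body in the Postman collection, since the backing type depends on how the hosts were prepared:

# Re-point the VM's NIC at the phoenix-web-ls segment.
curl -k -H "vmware-api-session-id: $VCSA_SESSION" -X PATCH \
  "https://vcsa-01a.corp.local/api/vcenter/vm/<web_vmID>/hardware/ethernet/<web_hwNIC>" \
  -H 'Content-Type: application/json' \
  -d '{ "backing": { "type": "OPAQUE_NETWORK", "network": "<web_ls_objectID>" } }'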
We will now connect to vCenter to verify that the Phoenix-Web-01a VM has been attached to the appropriate Segment.
3. Click LOG IN
1. Click phoenix-web-01a VM
1. Click the request called Create security group for phoenix-web VMs
2. Click the Body tab to view the JSON request body details
◦The expression section provides details for the security group membership criteria
◦The group membership criteria are configured to include any virtual machines that contain "phoenix-web" in their names
3. Click Send
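A curl sketch of the same group request is shown below; <domain_ID> is the collection variable described earlier, and the expression mirrors the membership criteria above:

# Create the security group with a dynamic membership expression.
curl -k -u 'admin:VMware1!VMware1!' -X PATCH \
  "https://nsxmgr-01a.corp.local/policy/api/v1/infra/domains/<domain_ID>/groups/phoenix-web-group" \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "phoenix-web-group",
    "expression": [ {
      "resource_type": "Condition",
      "member_type": "VirtualMachine",
      "key": "Name",
      "operator": "CONTAINS",
      "value": "phoenix-web"
    } ]
  }'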
2. Click the Body tab to view the JSON request body details
◦The Distributed Firewall rules to be included in this section are listed in the rules section
◦Only one rule is created by this API request, and the rule name is defined by display_name
◦The rule allows any source to access phoenix-web-group with the HTTP/HTTPS services
3. Click Send
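And a sketch of the policy request with its single rule; the service paths reference NSX's predefined HTTP and HTTPS services, and the rule name matches the one used later in this module:

# Create the phoenix-app policy containing the phoenix-client-to-web rule.
curl -k -u 'admin:VMware1!VMware1!' -X PATCH \
  "https://nsxmgr-01a.corp.local/policy/api/v1/infra/domains/<domain_ID>/security-policies/phoenix-app" \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "phoenix-app",
    "rules": [ {
      "display_name": "phoenix-client-to-web",
      "source_groups": [ "ANY" ],
      "destination_groups": [ "/infra/domains/<domain_ID>/groups/phoenix-web-group" ],
      "services": [ "/infra/services/HTTP", "/infra/services/HTTPS" ],
      "action": "ALLOW"
    } ]
  }'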
Note: If you previously closed NSX Manager or it has timed out, click the nsxmgr-01a shortcut under the RegionA folder in the toolbar and enter the following to log in:
•Username: admin
•Password: VMware1!VMware1!
1. Click Security
3. Verify that the phoenix-app firewall rule section has been created with one distributed firewall rule that allows any source to reach the phoenix web VMs with HTTP and HTTPS services
Earlier in this module, you explored deploying and configuring objects in NSX-T through the available REST API by using Postman. The REST API is perfect for scripting, or when you need a standards-compliant method of consuming NSX in conjunction with third-party applications. In addition, VMware maintains a Terraform provider for NSX. Terraform enables you to define complex actions in a simple configuration text file. In this section, we will explore the NSX-T provider for Terraform by performing the following tasks with the Phoenix sample application:
•Create phoenix-app and phoenix-db logical segments and connect them to our existing T1-PHOENIX-GW (created earlier via
Postman)
•Attach phoenix-app and phoenix-db VMs to their new, respective logical segments
•Create security groups for Phoenix App and DB VMs (Phoenix Web was created earlier via Postman)
•Create distributed firewall rules to complete the security policy between the three tiers of the Phoenix app
You should be able to ping the Tier-1 Gateway address of Segment Phoenix-Web-LS at 172.16.50.1, as well as VM phoenix-web-01a at 172.16.50.11, from your admin desktop. You will not be able to perform these actions if you have not completed Module 3 in this lab; in that case, please follow the instructions in the following steps to prepare the lab for this module.
3. Click the nsxmgr-01a shortcut to launch the NSX Manager login page
As mentioned, in the last exercise you explored automation of NSX-T by executing various REST API calls through the Postman
application. After performing these steps, the Phoenix app in your lab should resemble the diagram above.
You created a new Tier-1 Gateway, T1-PHOENIX-01, and connected it to the existing T0-GW-01 router. In addition, you created a new
NSX Segment, Phoenix-Web-LS, and attached it to this new Tier-1 Gateway with an IP address of 172.16.50.1/24. Phoenix-App-LS and
Phoenix-DB-LS have not yet been created and are the focus of the next exercise, thus they are lightened in the diagram.
At this point, the VMs on Segment Phoenix-Web-LS should have a complete path to your admin desktop. However, if you attempt to
ping phoenix-web-01a from your desktop, you will not receive any ICMP replies.
If you wish to troubleshoot and diagnose this issue on your own, you may skip the following steps.
Otherwise, please continue with the next steps to resolve this issue and complete the configuration.
2. Click Tier-0 Gateways in the menu on the left side of the NSX Networking user interface
3. Click the More Options icon to the left of T0-GW-01 to display its Options menu, then click Edit
2. Click the 1 link to the right of Route Re-distribution to display the Set Route Re-distribution dialog
Note: The actual number of Route Re-distributions may differ from the instructions, depending on the modules you have completed
prior to this step.
1. Click the Number (2) under Route Re-distribution to view what is currently configured on the T0
1. As we can see, currently only Tier-0 Connected and Static routes are being re-distributed into BGP
2. Click Close
Since we rely on the Tier-0 Gateway to re-distribute routes to the physical fabric, we also need to allow the routes connected to its
Tier-1 Gateways to be re-distributed at the Tier-0 level as well. As you saw, this is not currently the case since only Tier-0 Subnets are
being advertised. Thus, the Phoenix-Web-LS subnet is not currently advertised outside of the NSX fabric and is unreachable. We will
now configure the Tier-0 Gateway to re-distribute networks from its connected Tier-1 Gateways.
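The same change can also be made through the Policy API rather than the UI. This is a rough sketch only: the locale-services ID is deployment-specific, and the field names should be verified against your NSX-T version:

# Add TIER1_CONNECTED to the tier-0 route re-distribution configuration.
curl -k -u 'admin:VMware1!VMware1!' -X PATCH \
  "https://nsxmgr-01a.corp.local/policy/api/v1/infra/tier-0s/T0-GW-01/locale-services/<locale-services-id>" \
  -H 'Content-Type: application/json' \
  -d '{
    "route_redistribution_config": {
      "redistribution_rules": [ {
        "route_redistribution_types": [ "TIER0_CONNECTED", "TIER0_STATIC", "TIER1_CONNECTED" ]
      } ]
    }
  }'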
1. Click the checkbox to select Connected Interfaces & Segments under the Advertised Tier-1 Subnets (this will automatically select all of the options underneath it)
2. Click APPLY
You should now be able to ping the Tier-1 Gateway address of Segment Phoenix-Web-LS at 172.16.50.1, as well as VM phoenix-web-01a
at 172.16.50.11 from your admin desktop.
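From a command prompt on the admin desktop, the verification amounts to:

ping 172.16.50.1
ping 172.16.50.11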
2. Click Distributed Firewall in the menu on the left side of the NSX Security user interface
3. Click the arrow to the left of the phoenix-app firewall policy to expand it
Observe that one rule is currently defined within this policy, allowing any source to communicate with NSX Security Group phoenix-
web-group on HTTP or HTTPS. This Policy and Rule were created earlier in this module by using Postman and the NSX-T REST API.
2. Click PuTTY
cd /terraform
There are two files in the /terraform directory: phoenix-app.tf and tf-import.sh. We will briefly describe these files and their purpose.
phoenix-app.tf: This is the main file that tells Terraform the desired configuration, in this case a three tiered application with its
associated security policy. With no information other than this file, Terraform would consider all of the objects contained within it to be
new objects and will attempt to create them. Since you created the first steps of this three tiered application earlier in this module by
using Postman and the NSX-T REST API, we need a way to associate the preexisting objects with the resources in this configuration file.
Terraform does this through a process called an import.
tf-import.sh: This is the import file that will map the preexisting objects created through Postman to the objects contained in the
Terraform configuration file. In this Linux shell script, there are four objects we are associating:
•Existing VM phoenix-app-01a to Terraform vSphere VM phoenix-app-01a
•Existing VM phoenix-db-01a to Terraform vSphere VM phoenix-db-01a
•Existing NSX Security Group phoenix-web-group to Terraform NSX-T Policy Group phoenix-web-group
•Existing NSX Security Policy phoenix-app to Terraform NSX-T Security Policy phoenix-app
For more information about importing existing objects into a Terraform configuration, please reference the Terraform documentation at
the following link: https://www.terraform.io/docs/cli/import/index.html
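While the exact contents of tf-import.sh are only visible in the lab, the two NSX imports plausibly look like the sketch below. The Terraform resource addresses and object IDs are assumptions based on the NSX-T provider's conventions, not the script's literal contents:

# Map pre-existing NSX objects to resources declared in phoenix-app.tf.
terraform import nsxt_policy_group.phoenix-web-group phoenix-web-group
terraform import nsxt_policy_security_policy.phoenix-app phoenix-app
# The two VM imports follow the same pattern using the vSphere provider.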
1. Observe the files in the /terraform directory by entering the following command:
ls -l
2. View the contents of the Terraform import script tf-import.sh by entering the following command:
cat tf-import.sh
3. Once you have finished viewing the tf-import file, display the phoenix-app.tf configuration file by entering the following command:
cat phoenix-app.tf
NOTE: The file phoenix-app.tf is approximately 250 lines long, and its contents will exceed what your terminal can display at once. After completing the third step, you can use the PuTTY scrollbar to review the various sections of the file.
Note how self-contained and portable the code in the Terraform config file is compared to the REST API calls. This config file also acts
as a source of truth for configuration. Once we deploy this configuration, Terraform will map the newly-created objects to this
configuration file in the same way it does for the imported objects in the tf-import script. On subsequent code runs, Terraform will be
able to detect any changes made to the configuration, and will return the running configuration to the state defined in the configuration
file.
1. Observe the security policy definition section. We have defined one policy, called phoenix-app. This phoenix-app policy was created as part of the earlier Postman REST API calls you performed. For this reason, we will execute the tf-import script later in this section, telling Terraform to associate the existing phoenix-app policy in NSX with this policy definition.
2. Observe the rule definition. As we can see in the text, we are referencing the NSX data source services HTTP and HTTPS. No source group is listed, so the default of "Any" will be applied. The destination group will be the phoenix-web-group, defined earlier in the configuration file.
We will now import the objects we created via Postman, and allow them to be managed via Terraform.
1. Enter the following command to run the terraform import process via a Linux shell script:
./tf-import.sh
Observe that four objects have been successfully imported into Terraform: VM phoenix-app-01a, VM phoenix-db-01a, NSX Security Group phoenix-web-group, and NSX Security Policy phoenix-app.
terraform apply
2. Terraform will progress through two phases while attempting to realize the configuration file. The first phase will parse the configuration file and determine which objects need to be created, modified, or deleted. This may take 1 - 2 minutes to complete.
3. Terraform will then provide a summary of all actions it is about to take. At this point, you can review a list of the objects that will be created, modified, or deleted before confirming.
Observe that applying our Terraform configuration will result in 6 new objects and 3 updated objects. No objects will be destroyed (deleted).
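For reference, the end of the plan phase prints a summary and confirmation prompt similar to the following (illustrative, not the lab's exact output):

Plan: 6 to add, 3 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: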
1. Confirm that you'd like to apply these actions by entering yes, followed by Enter
1. As the policy is being applied by Terraform, it will display the object it is currently modifying. You should see the various components display on the screen. The policy may take 1 - 2 minutes to complete. Once it has finished, you should see an Apply complete! message.
If the Terraform script fails with the above errors, it is because the NSX-T segments are taking longer than expected to be created. If this happens, apply the Terraform configuration again by entering the following command:
terraform apply
1. Click the NSX - Google Chrome tab in the Windows Taskbar to return to the NSX-T UI. If it is not already selected, click the Security tab.
2. Click Distributed Firewall in the menu on the left side of the NSX Security user interface
3. Click the arrow to the left of the phoenix-app firewall policy to expand it
4. You may need to use the Refresh button to refresh the distributed firewall policy until the two new rules appear and the status is Success
Observe that there are now three firewall Rules defined within the phoenix-app Policy. Through Terraform, we imported the existing
Policy and completed its definition by adding the two remaining policy Rules as well as their associated Security Groups.
To show how Terraform can maintain a consistent state, we will now delete the phoenix-client-to-web rule that was originally created via Postman.
1. Click the More Options icon to the left of the phoenix-client-to-web rule to display its Options menu, then click Delete Rule
1. Click Publish in the top right of the screen to apply the changes
1. Click the phoenix-web-01a.corp.local - PuTTY tab in the Windows Taskbar to return to our PuTTY session
terraform apply
You will once again see updates as Terraform determines the current state of the environment and calculates any changes that need to
be made.
Observe that Terraform will now change the current deployment due to the missing rule (you can scroll back in the PuTTY session to see more details).
1. Confirm that you'd like to apply these actions by entering yes, followed by Enter
1. Click the NSX - Google Chrome tab in the Windows Taskbar to return to the NSX-T UI
Conclusion
This concludes the automation module. Keep in mind that tools such as Terraform can be combined with continuous integration and delivery pipelines, and, when architected appropriately, all management of NSX-T can be done through automation.
Please proceed to any module below which interests you the most.
•Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX.
•Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
•Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
•Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations available in NSX.
•Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
•Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
Lab Captains:
•Lead Captain - Jennifer Schmidt - Staff Virtual Cloud Network TAM - United States
•Captain - Joe Collon - Staff Solution Engineer Virtual Cloud Network - United States
•Captain - Phoebe Kim - Senior Cloud Solutions Architect - United States
•Captain - Mihajlo Veselinovic - Staff Virtual Cloud Network TAM - United States
•Associate Captain - Claire Davin - Virtual Cloud Network Solutions Engineer - France
The goal of this module is to explore how you can configure NSX-T federation and create stretched NSX objects across different NSX-T
instances.
•Create stretched NSX objects (logical routers, logical segments, security groups, distributed firewall rule)
With NSX Federation, you can manage multiple NSX-T Data Center environments with a single pane of glass view, create gateways and
segments that span one or more locations, and configure and enforce firewall rules consistently across locations.
To start using NSX Federation, you must first install a Global Manager and then add different NSX-T instances as locations. The NSX-T Manager appliances that are managed by the Global Manager are called Local Managers. Once the locations are configured, you can configure networking and security from the Global Manager.
It is recommended to have active and standby Global Manager clusters in different locations for resiliency and availability.
•Tunnel End Point (TEP): the IP address of a transport node (Edge node or Host) used for Geneve encapsulation within a
location.
•Remote Tunnel End Points (RTEP): the IP address of a transport node (Edge node only) used for Geneve encapsulation across
locations.
The lab environment consists of a multi-site deployment. Each site is managed by an independent vCenter and NSX Manager. At Site A,
a single NSX Global Manager has been pre-deployed to allow for the creation of the federation setup as part of the lab exercise. The
workload clusters at each site have been prepared for NSX-T already, and two Edge nodes per site have been grouped in an NSX Edge
Cluster. At Site A, the HOL Multi-Tier WebApp is functional, and its VMs are connected to VLAN-Backed vDS port-groups. The physical
network is simulated using a single Linux Router VM named vPod Router running Quagga. The vPod router will act as the ToR switches
at both sites.
The vPod router acts as the default gateway for any management or infrastructure network at both sites. The vPod router also acts as
the default gateway for the three WebApp networks (Web, App, and Db). The vPod router is running BGP with AS 65100, and it is
already configured to peer with the NSX infrastructure on 4 different VLANs: 100 and 200 at SiteA, 300 and 400 at SiteB.
In this lab, you are going to start by onboarding the NSX Managers at each site to the Global Manager. After this step, any configuration, global or local to a specific Local Manager, can be accomplished through the Global Manager UI. Next, you will configure the edge clusters at each site to forward RTEP traffic between the two locations. RTEP interfaces must be configured on each edge node on a different subnet and VLAN than the ones in use for TEP traffic. IP Pools for the RTEP interfaces have been pre-created in the lab, and routing between the RTEP subnets at each site has been set up in advance.
Once the NSX Federation is set up, the first logical component you are going to create is a stretched Tier-0 Gateway. The stretched
Tier-0 Gateway will have SR components deployed on four edge nodes, the two at SiteA and the two at SiteB. Extending the design
recommendations for a single site, you will configure the SR component on each edge node to peer to the physical network over two
distinct VLANs. You will use VLAN 100 and 200 at Site A, and VLAN 300 and 400 at Site B. In the lab, however, you will only configure
a single BGP peering per site: Edge-01a to the vPod Router over VLAN 100, and Edge-01b to the vPod Router over VLAN 300 to save
time and focus on federation-specific configurations.
After creating a Tier-0 Gateway, you are going to configure a Stretched Tier-1 Gateway and Stretched segments for the WebApp. You
will advertise connected segments at both sites.
After configuring Stretched segments and Global security policies, you will migrate web-01a VM to Site B to verify that the Global
security policies are still effective after the migration.
You will also explore the local egress functionality of NSX Federation.
You can hide the manual to use more of the screen for the simulation.
NOTE: When you have completed the simulation, click on the Manual tab to open it and continue with the lab.
Congratulations on completing Module 6! You have now completed the last module for this lab.
Please proceed to any module below which interests you the most.
•Module 1 - NSX Dynamic Routing, Multicast and VRF (45 minutes) (Basic) In this module you will explore advanced networking features in NSX.
•Module 2 - NSX Load Balancing (30 minutes) (Basic) In this module you will create a virtual server and explore basic load balancing in NSX.
•Module 3 - NSX VPN (30 minutes) (Basic) In this module you will configure an IPsec VPN tunnel in NSX.
•Module 4 - NSX Native Operations (30 minutes) (Advanced) The goal of this module is to explore some of the various day 2 operations available in NSX.
•Module 5 - Automation in NSX (30 minutes) (Advanced) The purpose of this module is to explore the NSX REST API via Postman, as well as configuring NSX using automation tools such as Terraform.
•Module 6 - NSX Federation (30 minutes) (Advanced) In this module we will simulate the setup and configuration of an NSX Federation deployment.
Lab Captains:
•Lead Captain - Jennifer Schmidt - Staff Virtual Cloud Network TAM - United States
•Captain - Joe Collon - Staff Solution Engineer Virtual Cloud Network - United States
•Captain - Phoebe Kim - Senior Cloud Solutions Architect - United States
•Captain - Mihajlo Veselinovic - Staff Virtual Cloud Network TAM - United States
•Associate Captain - Claire Davin - Virtual Cloud Network Solutions Engineer - France
•Module 6 Special Assistance - Luca Camarda - Livefire Solutions Architect - United States
Appendix
Welcome to Hands-on Labs! This overview of the interface and features will help you to get started quickly. Click next in the manual to
explore the Main Console or use the Table of Contents to return to the Lab Overview page or another module.
1. The area in the large RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.
2. Your lab starts with 90 minutes on the timer. The lab cannot be saved. Your lab will end when the timer expires. Click the EXTEND button to increase the time allowed. If you are at a VMware event, you can extend your lab time twice, up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9
In this lab you will input text into the Main Console. Besides directly typing it in, there are two helpful methods that make it easier to enter complex data.
Click and Drag Lab Manual Content Into Console Active Window
https://www.youtube.com/watch?v=xS07n6GzGuo
You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in
the Main Console.
You can also use the Online International Keyboard found in the Main Console.
1. Click on the keyboard icon found on the Windows Quick Launch Task Bar.
In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard
layouts.
When the lab starts you may notice a watermark on the desktop indicating that Windows is not activated.
A major benefit of virtualization is that virtual machines can be moved to and run on any platform. Hands-on Labs takes advantage of this and hosts labs in multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet. Without this access, the Microsoft activation process fails, and you see this watermark.
Use the Table of Contents to return to the Lab Overview page or another module.