DESIGN GUIDE AND BEST PRACTICES
VMware NSX-T and F5 BIG-IP
Version History
Date | Version | Author | Description
December 2019 | 1.0 | Ulises Alonso Camaró, F5 Networks | Initial public version.
June 2020 | 2.0 | Ulises Alonso Camaró, F5 Networks | Validated with NSX-T 3.0. Updated all screenshots and configuration flows to match NSX-T 3.0.
September 2020 | 2.1 | Ulises Alonso Camaró, F5 Networks | Extended the topology suitability matrix based on flow direction with the inter-tenant E-W flows case. Added MAC masquerading information. Added VMC on AWS section. Added section on hybrid and multi-cloud design considerations. Renamed from "Integration guide" to "Deployment guide".
Introduction
The Software-Defined Data Center (SDDC) is characterized by server virtualization, storage
virtualization, and network virtualization. Server virtualization has already proved the value of
SDDC architectures in reducing costs and complexity of the compute infrastructure. VMware
NSX network virtualization provides the third critical pillar of the SDDC. It extends the same
benefits to the data center network to accelerate network service provisioning, simplify network
operations, and improve network economics.
This guide provides configuration guidance and best practices for the most common topology scenarios, ensuring compatibility and minimal disruption to existing environments.
Unlike with NSX-V, F5 BIG-IP does not participate in the control plane of the overlay networking, because NSX-T does not expose a publicly documented API for this purpose. The integration is instead based on routing within the overlay networks. This has the following implications:
- For North-South traffic flows this is not an issue because the number of networks to which
the F5 BIG-IP has to be connected is small and is not expected to change often.
- For East-West traffic this inhibits the possibility of using F5 BIG-IP hardware. Also, the number of network segments to which the F5 BIG-IP is expected to be connected for this use case is very high, but the VMware hypervisor only allows a VM to be connected with up to 10 vNICs (see footnote 1), with one network segment per vNIC. In this guide this VMware limitation is overcome by creating multiple clusters of BIG-IPs, which also allows higher distribution of traffic and CPU utilization across the VMware cluster.
Using F5 BIG-IP ADC instead of NSX-T’s load balancer provides the following benefits:
- NSX-T’s load balancer is not a distributed function and runs centralized on NSX-T Edge’s
nodes, which can represent a bottleneck. F5 BIG-IP can run in multiple hypervisors
concurrently by either running Active-Active F5 Scale-N clusters or multiple F5 BIG-IP
clusters.
- F5 BIG-IP provides proven, scalable, world-class performance for ADC, NAT and firewall capabilities, and provides additional functionality such as Advanced WAF, SSL-VPN, anti-DDoS protection, Secure Web Gateway with Identity Management and many other solutions, with unified management and visibility through F5 BIG-IQ.
1 To check vSphere's limits, consult https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=1-0 and search for "Networking Virtual Devices" or "Virtual NICs per virtual machine".
Additionally, when using BIG-IP (either hardware or Virtual Edition) north of the NSX-T Edge nodes, this arrangement typically uses BGP (especially for Active-Active deployments), in which case BIG-IP will require the Advanced Routing module to be provisioned. See K46129932: How to verify the Advanced Routing Module is provisioned for more details.
- Inline topologies: Topology A (at Tier-0) and Topology B (at Tier-1).
- Parallel topologies: Topology C (at Tier-0) and Topology D (at Tier-1).
There is a section with implementation details for each topology, and for Topology A there are
three implementation options. This is followed by a section containing details common to all
topologies and best practices when deploying F5 in VMware. Then, a section for configuring
and testing a service with F5 BIG-IP. Finally, there is a section with considerations for
container platforms, Red Hat OpenShift and other Kubernetes based options.
2 To be precise, in some topologies BIG-IP is connected to the NSX-T Edge using eBGP, but BGP is an Internet standard, not NSX-T specific.
[Figure: topology overview - BIG-IP Scale-N VE with ingress and egress VIPs positioned relative to the NSX-T Tier-0 and Tier-1 LRs.]
This topology allows the use of either BIG-IP hardware or Virtual Editions (VE). In this topology the F5 BIG-IP is placed at a special vantage point for all tenants where security-related services (for example WAF, firewall and anti-DDoS) and, if needed, NAT can be enforced easily.
- NSX-T Edge cluster in Active-Standby mode using dynamic routing with BGP.
- NSX-T Edge cluster in Active-Active mode using dynamic routing with BGP ECMP.
Topology B: Inline at Tier-1
This topology is similar to Topology A but allows per-tenant BIG-IP clusters, hence providing isolation between tenants. In this topology it is proposed to eliminate NSX-T's Tier-1 Gateways in order to keep a 2-tier routing model while keeping the BIG-IPs inline in the traffic path (there is more information in the Topology B section). This topology only uses BIG-IP Virtual Editions.
The BIG-IPs are not inline for plain forwarding traffic and hence this traffic doesn't need SNAT. For BIG-IP services, the traffic reaches the BIG-IPs through a parallel path and SNAT is required in order to keep traffic symmetric. See the Design considerations section for more information on using NAT.
[Figure: parallel topology overview - BIG-IP Scale-N VE with ingress VIPs alongside the NSX-T Gateways; external network (VLAN) and overlay segments.]
Like Topology A, which is also connected to a Tier-0 Gateway, this topology allows the use of either BIG-IP hardware or Virtual Editions. Other than the requirement of using SNAT, the main difference from Topology A is that each tenant can have their own BIG-IP instances with complete isolation. This can be achieved either by instantiating vCMP guests on BIG-IP hardware or by using F5 BIG-IP Virtual Edition instances for each tenant.
This topology is similar to Topology C but with the BIG-IPs attached to the Tier-1 routers, and it allows Edge services to be applied at the NSX-T boundary for all traffic flows without any traffic bypassing these Edge services. This is equivalent to the topology used by NSX-T Load Balancers.
Although this topology can be used for both North-South and East-West services traffic, it can be useful to combine Topology D for East-West traffic with Topology A for North-South traffic. This combined A & D Topology is especially useful when high performance is
required, and NSX-T Edges operate in Active-Active mode with ECMP. In this case, the F5
BIG-IP has to take over NSX-T Edge’s stateful functions. The BIG-IP can also perform
additional single-point control functionalities such as WAF, anti-DDoS, or SSL-VPN, which
are not available in NSX-T Edge.
Note that both topologies that are applied to Tier-0 allow multi-tenancy with either software
partitions or virtualization partitions (vCMP).
Type: whether all the traffic goes through the BIG-IPs (Inline) or not (Parallel). When a topology is inline, the BIG-IPs can be an enforcement point for all traffic and it is guaranteed that no traffic will bypass the BIG-IPs.
Tier: whether the BIG-IPs are attached to a Tier-0 or Tier-1 NSX-T Gateway. In the case of Topology B, the proposed topology actually replaces NSX-T's Tier-1 Gateway. See the topology's section for more details.
HW: the topology allows for hardware appliances or chassis. Hardware platforms with vCMP technology are recommended, which allows hard resource isolation between tenants.
Keeps source address: ingress traffic doesn't need to translate the source IP address of the clients. This avoids the need to use the X-Forwarded-For HTTP header.
Inter-tenant distributed forwarding path: when using plain routing between tenant workloads, the processing path is fully distributed using only NSX-T's networking. In other words, this scenario is a path from a workload behind one Tier-1 Gateway to a workload behind another Tier-1 Gateway, without using BIG-IP services. Note that when using NSX-T's native LB the processing is centralized in the NSX-T Edge nodes.
Enforcement point: this is a characteristic of the Inline topology type as described above.
Allows per-tenant VE clusters: the topology allows creating separate BIG-IP VE clusters for each tenant that do not interfere with each other.
Suitable for North-South: North-South flows are traffic that goes in and out of the NSX-T deployment. In the case of topologies C and D, plain routed traffic doesn't get any BIG-IP service applied.
Suitable for intra-tenant East-West: traffic that doesn't use a Tier-0 Gateway. BIG-IPs at Tier-0 (topologies A and C) don't affect East-West traffic flows. Topology B or D should be chosen depending on whether the BIG-IP is required to be a tenant enforcement point. Although Topology D doesn't allow the BIG-IP to be an enforcement point, it allows distributed L3 forwarding by using only Tier-1 Gateways for these flows.
Suitable for inter-tenant East-West: traffic that uses the Tier-0 Gateway. When plainly routed, these flows typically take advantage of distributed processing and traffic goes directly from VM to VM. BIG-IP at Tier-0 can handle these flows if the VIPs are not in the tenants' segments. Note that using BIG-IP for these flows doesn't incur more node hops than the native NSX-T LB, because the native NSX-T LB is implemented in the Edge nodes and also represents a node hop. For topologies B and D the situation is the same as for intra-tenant East-West flows.
There are many other topology possibilities; the following examples have specific use cases:
- BIG-IP Service scaling group (SSG) for CPU-intensive workloads such as Advanced WAF
in large scale deployments.
- Per-App VE which provides DevOps teams with an ADC and a WAF to deliver services
and security just for the application they are developing.
For more information on these, please consult the BIG-IP Cloud Edition Solution Guide.
In general, it is recommended to use redundancy at all network layers. In the case of Layer 2 networking this is typically achieved by using LACP (see footnote 3), which is supported in the ESXi/vSphere hypervisor and in the NSX-T Transport and Edge nodes. LACP is also supported on BIG-IP hardware platforms. VMs in ESXi/vSphere do not receive LACP frames from the hypervisor, hence network appliances such as BIG-IP VE cannot implement LACP and it must be configured instead at the hypervisor level. In other words, LACP should be configured in the NSX-T transport nodes or ESXi/vSphere, and this will be transparent to the BIG-IP VE.
If NAT is required, it can be performed by the F5 BIG-IPs, which has the added value of offloading this functionality from the NSX-T Edge. This in turn allows NSX-T Edge nodes to run in Active-Active mode with ECMP without restrictions - NAT in Tier-0 can only run in Active-Active when using Reflexive (stateless) mode (see footnote 4).
In many instances, services need to be aware of the client's IP address. In these cases, and when the F5 BIG-IP performs NAT, the client IP address can be added to the HTTP payload using the X-Forwarded-For header for both unencrypted and encrypted traffic, the latter by performing SSL/TLS termination in the F5 BIG-IP. This capability of always being able to insert the X-Forwarded-For header is an important reason for choosing F5 BIG-IP for the NAT functionality.
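As an illustration of the above, a minimal tmsh sketch for inserting the X-Forwarded-For header; the profile, virtual server and pool names are hypothetical, and the pool and client SSL configuration are assumed to already exist:

# HTTP profile that inserts the original client IP into the X-Forwarded-For header
create ltm profile http http-xff defaults-from http insert-xforwarded-for enabled
# Virtual server using SNAT and TLS termination so the header can be inserted for encrypted traffic
create ltm virtual vs_example_https destination 10.105.217.100:443 ip-protocol tcp profiles add { clientssl http-xff } source-address-translation { type automap } pool pool_example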
3 LACP - Link Aggregation Control Protocol, an IEEE standard.
4 Reflexive NAT - https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/administration/GUID-46900DFB-58EE-4E84-9873-357D91EFC854.html
NSX-T Edge's Tier-0 routers exchange routes with upstream devices by means of eBGP. The use of dynamic routing is recommended in the following use cases:
[Figure: Topology A - F5 BIG-IP at the NSX-T boundary, between the external network and the NSX-T Tier-0 LR.]
The main feature of this topology is that the F5 BIG-IP can easily be an enforcement point for
North-South traffic. In this scenario, F5 can be either deployed as hardware or as a Virtual
Edition. When using a Virtual Edition, multi-tenancy can be achieved by using separate logical
partitions. When using BIG-IP hardware, multi-tenancy can also be achieved with full isolation
by using vCMP.
When NSX-T Edge is running in Active-Active mode with ECMP, it is not able to run stateful services (i.e. edge firewall, load balancing, or NAT, with the exception of Reflexive NAT). In this high-performance use case, this functionality can be off-loaded to the F5 BIG-IP (hardware platforms are recommended, using chassis for ultimate scalability without reconfiguration).
When using this logical topology there are two alternatives for the physical topology. These
can be seen in the next figure.
[Figure: two physical connectivity alternatives for this topology. Left: physical or virtual BIG-IPs and the Edge nodes sharing a single L2/subnet, using BGP or static routing with floating next-hops; ECMP cannot be used. Right: physical or virtual BIG-IPs with separate L2/subnets (A-D), BGP routing only; ECMP can be used.]
[Figure contents: single external network segment 10.105.217.0/24 (upstream router .1; BIG-IP Self IPs .101/.102, floating .100 with the ingress VIPs for services and security); F5 BIG-IP Scale-N (hardware or VE) with egress VIPs and floating .10 providing routing / Secure Web Gateway; NSX-T Tier-1 distributed Logical Router (.1) with services segment 10.106.32.0/24 (example).]
Figure 5 - Example of topology A using static routing, used throughout this section.
Given the many possibilities of configuring NSX-T Edge nodes and their logical switch uplink ports, it is assumed that these have already been created. This guide focuses on the configuration of Layer 3 and higher layers that are specific to this topology. See section Design consideration: Layer 2 networking for details.
1.1. Create the Tier-0 Gateway.
In NSX-T manager, go to Networking > Tier-0 Gateways > Add Gateway > Tier-0 as shown in the next figure.
1.2. Create an Interface for each Edge Node used by the Tier-0 Gateway.
Select the Gateway created (T0-Topology-A in our example) and create two interfaces in the UI by first selecting the Edit option in the T0 Gateway, then scrolling down to the Interfaces section and clicking the Set option of External and Service Interfaces. Enter the following parameters for each interface:
- Name: In this example, edge-1-uplink-red is used for the first router port and edge-2-
uplink-red for the second (we will use edge-*-uplink-blue in the BGP+ECMP
scenarios).
- Type: External
- Edge Node: This will be edge-1-topology-a and edge-2-topology-a for each external
interface respectively.
- MTU: use external network’s MTU, which should be the same on the BIG-IP.
- URPF Mode: Strict is a good practice providing security with no expected
performance impact. Strict should be used unless asymmetric paths are used.
- Segment: This is the L2 network to which the interface is attached. It is a prerequisite to have this previously created. See section Design consideration: Layer 2 networking for details.
- IP Address/mask: this is the IP address assigned to the interface in the shared segment between the NSX-T Edge nodes and the F5 BIG-IPs. In this example, 10.106.53.1/24 is used for the interface in edge-01 and 10.106.53.2/24 in edge-02.
- Click Add.
Figure 8 – Filling the details of a router port of one of the uplinks for the Tier-0 Gateway.
1.3. Create an HA VIP for the Tier-0 Gateway.
The HA VIP is an IP address that will be shared by the two Edge Nodes used for the Tier-0 Gateway just created, and it will be used as the ingress IP to the NSX-T networks.
Select the Gateway just created (T0-Topology A in our example), and create an HA VIP in the UI by selecting Edit > HA VIP Configuration > Set and entering the following parameters:
1.4. Add a default route in the Tier-0 Gateway towards the BIG-IP cluster floating Self IP
address.
In our example, the BIG-IP cluster floating address to use as the next hop is 10.106.53.10.
Select the T0-Topology A Gateway created and then create a static route in the UI by selecting Routing > Static Routes > Set as follows, entering as Next Hop the BIG-IP's floating IP, in this example 10.106.53.10:
This will be used later to instantiate a VM and perform a verification of the deployment.
In NSX-T manager, select Networking > Tier-1 Gateways > Add Tier-1 Gateway > Tier-1 Router, filling in the following parameters:
The next step is to create a network attached to this Tier-1 Gateway. In the UI, select
Networking > Segments > Add Segment and enter the following parameters:
First, create the Self IPs and floating Self IPs towards the spine routers (north-bound) and
towards the NSX-T Tier-0 Gateway (south-bound). These do not require any special
configuration. An example of the first BIG-IP unit is shown next.
Figure 14 – Self IPs and floating Self IPs required (shown in BIG-IP unit 1).
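As a hedged tmsh sketch of what Figure 14 conveys for BIG-IP unit 1 (the VLAN names and the non-floating addresses are assumptions based on the example; the floating Self IPs are placed in traffic-group-1 so they are synchronized, while the non-floating ones remain local):

# North-bound (external network) Self IP and floating Self IP
create net self external-self-1 address 10.105.217.101/24 vlan vlan-external allow-service none
create net self external-float address 10.105.217.100/24 vlan vlan-external traffic-group traffic-group-1 allow-service none
# South-bound (shared segment with the NSX-T Edge nodes) Self IP and floating Self IP
create net self nsxt-self-1 address 10.106.53.11/24 vlan vlan-nsxt allow-service none
create net self nsxt-float address 10.106.53.10/24 vlan vlan-nsxt traffic-group traffic-group-1 allow-service none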
Note: the non-floating Self IPs are per BIG-IP unit, while the floating Self IPs are synchronized
across the BIG-IP units.
The next step is to configure the static routing in the BIG-IP. Typically, these involve two
routes:
These routes are shown in the next figure and should be configured in both BIG-IP units (this configuration is not synchronized automatically across BIG-IPs).
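Typically these are a default route towards the upstream router and a route covering the NSX-T address space via the Tier-0 Gateway. A hedged tmsh sketch, assuming the example addressing and that the Tier-0 is reachable at an HA VIP on the shared segment (the .3 address and the 10.106.0.0/16 summary are hypothetical placeholders):

# Default route towards the upstream/spine router on the external network
create net route default gw 10.105.217.1
# Route to the NSX-T overlay networks via the Tier-0 Gateway (hypothetical HA VIP address)
create net route nsxt-networks network 10.106.0.0/16 gw 10.106.53.3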
At this point, follow the testing steps described in the Verifying the deployment section.
[Figure contents: single external network segment 10.105.217.0/24 (upstream router .1; BIG-IP Self IPs .101/.102, floating .100 with the ingress VIPs for services and security); F5 BIG-IP Scale-N (hardware or VE) with egress VIPs and floating .10 providing routing / Secure Web Gateway; NSX-T Tier-0 Logical Router on Edge-01/Edge-02; NSX-T Tier-1 distributed Logical Router (.1) with services segment 10.106.32.0/24 (example).]
Figure 16 - Example of topology A using BGP routing, used throughout this section.
Given the many possibilities of configuring NSX-T Edge nodes and their logical switch uplink ports, it is assumed that these have already been created. This guide focuses on the configuration of Layer 3 and higher layers that are specific to this topology. See section Design consideration: Layer 2 networking for details.
1.1. Create the Tier-0 Gateway.
In NSX-T manager, go to Networking > Tier-0 Gateways > Add Gateway > Tier-0 as shown in the next figure.
1.2. Create an Interface for each Edge Node used by the Tier-0 Gateway.
Select the Gateway created (T0-Topology-A in our example) and create two interfaces in the UI by first selecting the Edit option in the T0 Gateway, then scrolling down to the Interfaces section and clicking the Set option of External and Service Interfaces. Enter the following parameters for each interface:
- Name: In this example, edge-1-uplink-red is used for the first router port and edge-2-
uplink-red for the second (we will use edge-*-uplink-blue in the BGP+ECMP
scenarios).
- Type: External
- Edge Node: This will be edge-1-topology-a and edge-2-topology-a for each external
interface respectively.
- MTU: use external network’s MTU, which should be the same on the BIG-IP.
- URPF Mode: Strict is a good practice providing security with no expected
performance impact. Strict should be used unless asymmetric paths are used.
- Segment: This is the L2 network to which the interface is attached. It is a prerequisite to have this previously created. See section Design consideration: Layer 2 networking for details.
- IP Address/mask: this is the IP address assigned to the interface in the shared segment between the NSX-T Edge nodes and the F5 BIG-IPs. In this example, 10.106.53.1/24 is used for the interface in edge-01 and 10.106.53.2/24 in edge-02.
- Click Add.
Figure 19 – Filling the details of a router port of one of the uplinks for the Tier-0 Gateway.
1.3. In the Tier-0 Gateway, configure a BGP peering mesh with the F5 BIG-IPs.
In this section, a BGP configuration (eBGP to be more precise) is described where both the NSX-T Edge cluster and the F5 BIG-IP cluster have an Active-Standby configuration. The steps involved are:
In NSX-T manager, select the Tier-0 Gateway in the UI by clicking Networking > Routers, then follow the Routing > BGP dialogs of the router. Click the Edit button and set the values as follows:
- Local AS: This is typically within the private range 64512 - 65534.
- Graceful restart: Set to disable as per VMware’s best practice NSXT-VI-SDN-038.
- ECMP: Set to disable.
In the same BGP section, click the link Set in the BGP Neighbors field and complete the tabs Neighbor, Local Address and BFD for the two BIG-IP Self IPs. In the next figure, the peering configuration for BIG-IP unit #1 is shown. The only configuration difference between BIG-IP unit #1 and unit #2 is the Neighbor Address.
In this figure, the default values are used with the exception of the following fields:
The remaining step is to redistribute the NSX-T routes into NSX-T’s BGP which then
will be announced to the BGP peers (in this case the F5 BIG-IPs). This is done at Tier-
0 Gateway level in the section shown in the next figure.
Create a redistribution entry which includes NSX connected networks, as can be seen in the next figure.
This will be used later to instantiate a VM and perform a verification of the deployment.
In NSX-T manager, select Networking > Tier-1 Gateways > Add Tier-1 Gateway > Tier-1 Router, filling in the following parameters:
The next step is to create a network attached to this Tier-1 Gateway. In the UI, select
Networking > Segments > Add Segment and enter the following parameters:
First, create the Self IPs and floating Self IPs towards the spine routers (north-bound) and
towards the NSX-T Tier-0 Gateway (south-bound). These do not require any special
configuration. An example of the first BIG-IP unit is shown next.
Figure 27 – Self IPs and floating Self IPs required (shown in BIG-IP unit 1).
The non-floating Self IPs need to allow TCP port 179 in order for the BGP peering to be established. This is done by configuring the port lockdown security feature of the Self IPs as shown in the next figure. The BFD protocol will be automatically allowed.
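A hedged tmsh equivalent of this port lockdown change (the Self IP name is hypothetical; run it on each unit for its own non-floating Self IP facing the NSX-T Edge nodes):

# Allow BGP (TCP port 179) on the non-floating Self IP used for the peering
modify net self nsxt-self-1 allow-service add { tcp:179 }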
Note that the non-floating Self IPs are per BIG-IP unit whilst the floating Self IPs are
synchronized across the BIG-IP units.
The next step is to configure the BGP routing in the BIG-IP. This involves two steps:
- Enabling the BGP and BFD protocols in the route domain used to connect to the NSX-T environment. This is done in the UI.
- Configuring BGP and BFD in the ZebOS CLI (imish).
In order to enable the BGP and BFD routing protocols, use the BIG-IP UI and browse to Network > Route Domains > 0 (assuming that the default route domain is the one being used). In this window, enable BFD and BGP as seen in the next figure. Note that given this is part of F5 BIG-IP's base config, it is not synchronized and must be done in all the F5 BIG-IP units.
Figure 29 – Enabling BFD and BGP in F5 BIG-IP. This must be performed in all units.
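The same change can be made from tmsh; a minimal sketch assuming the default route domain 0 is used (as in the UI procedure above), to be run on every unit:

# Enable the BGP and BFD dynamic routing protocols on route domain 0
modify net route-domain 0 routing-protocol add { BGP BFD }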
The next step is to configure BFD and BGP itself. Log in via SSH to each BIG-IP unit and run the imish command, which enters the ZebOS CLI (ZebOS uses a typical router CLI command set). The F5 BIG-IP configuration must mirror NSX-T's BGP configuration. This is shown in the next figure with embedded comments.
!
! bfd gtsm is a safety feature enabled by default
bfd gtsm enable
!
ip prefix-list default-route seq 5 permit 0.0.0.0/0
!
! route-map to set the next-hop to the floating Self IP address
route-map default-route permit 5
 match ip address prefix-list default-route
 set ip next-hop 10.106.53.10 primary
!
Figure 30 - ZebOS BGP without ECMP configuration in the BIG-IP.
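The extracted figure above retains only the BFD, prefix-list and route-map statements. As a hedged sketch (not the guide's literal configuration), the accompanying BGP neighbor statements would look roughly as follows, assuming placeholder AS numbers 65000 (BIG-IP) and 65001 (NSX-T Tier-0, matching the Local AS configured earlier) and the example peer addresses 10.106.53.1 and 10.106.53.2; how the default route itself is injected into BGP (for example via redistribution) must follow your design:

router bgp 65000
 ! eBGP peerings towards the two NSX-T Edge uplink interfaces
 neighbor 10.106.53.1 remote-as 65001
 neighbor 10.106.53.1 fall-over bfd
 neighbor 10.106.53.1 route-map default-route out
 neighbor 10.106.53.2 remote-as 65001
 neighbor 10.106.53.2 fall-over bfd
 neighbor 10.106.53.2 route-map default-route out
!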
At this point, follow the testing steps described in the Verifying the deployment section.
For large / high performance deployments, NSX-T Edge nodes are typically configured in Active-Active. In this deployment guide it is assumed that when using NSX-T Active-Active the most likely scenario is that NSX-T Edge nodes are bare metal servers and the BIG-IPs are implemented in hardware. When using Active-Active NSX-T Edge, it is likely to be used with ECMP (see footnote 5), which provides additional L3 load-sharing paths. This scenario is outlined in the next figure for two NSX-T Edge nodes with two uplink Layer 3 paths. We will use a different Layer 2 segment for each Layer 3 path for additional isolation and bandwidth.
Figure 31 – Active-Active NSX-T Edge with two ECMP uplinks and BIG-IP in Active-Standby.
In this scenario the NSX-T Edge nodes are not able to process traffic in a stateful manner. The
F5 BIG-IPs in Active-Standby will implement the services that require processing the traffic in a
stateful manner. Given that it is highly likely that BIG-IP hardware is used, an F5 BIG-IP
Active-Active setup is not required in this scenario.
An F5 BIG-IP Active-Active setup in this scenario would require a more complex configuration in order to keep traffic symmetry outside the NSX-T environment. Instead, if ultimate scalability is required, the best option is adding blades to a chassis platform, which provides scale-out performance without requiring any reconfiguration and keeps the architecture simple.
5 Please note that NSX-T Edge Active-Active doesn't imply the use of ECMP, or vice versa.
In this topology, each Edge node needs two uplinks which must be in different logical switches
and different transport zones. The Edge nodes share the logical switches for each uplink
subnet. Figure 32 shows the detail of the BGP peerings established between NSX-T edge
nodes and the BIG-IPs. Note that although the Edge nodes have as next-hop the floating Self IPs of each subnet, the BGP peerings are set up with the non-floating Self IPs. In total, 4 BGP peerings are created but, unlike the previous BGP configuration without ECMP, this time each peer uses a different Layer 3 network for each peering.
[Figure contents: BIG-IP units (hardware or VE) with Self IPs .11 and .12 on two logical switches/transport zones (Uplink-Red and Uplink-Blue), an eBGP mesh with BFD towards the Edge node (transport node) interfaces .1 and .2 in each subnet, and the NSX-T Tier-0 Logical Router on Edge-01/Edge-02.]
Figure 32 – BGP peering detail with two uplink Layer 3 paths & transport zones for ECMP.
Given the many possibilities of configuring NSX-T Edge nodes and their logical switch uplink ports, it is assumed that these have already been created. This guide focuses on the configuration of Layer 3 and higher layers that are specific to this topology. See section Design consideration: Layer 2 networking for details.
1. In NSX-T manager, create a separate transport zone of type VLAN and logical switches for each uplink subnet.
Ultimately 3 transport zones will be used, one for each uplink (tz-vlan-uplink-red and tz-vlan-uplink-blue) and one for the overlay networking. All these are shown in the next figure.
Figure 33 - Overall configuration of transport zones. The ones used by this topology are highlighted (red and blue for the uplinks).
2. Edit the Edge transport nodes to add the two uplink transport zones.
Go to System > Fabric > Nodes > Edge Transport Nodes and Edit each Edge transport node
associated with the T0 Gateway, adding a switch (N-VDS switch) for each Uplink transport
zone created in the previous steps. This is shown in the next figure.
Figure 34 – Adding the switches for each Uplink transport zone in each Edge transport nodes.
Besides each transport zone, each associated N-VDS switch requires a specific Uplink profile and Uplink interfaces. An example for Transport Zone tz-vlan-uplink-red is shown next.
3.1. Create the Tier-0 Gateway.
In NSX-T manager, go to Networking > Tier-0 Gateways > Add Gateway > Tier-0 as shown in the next figure.
3.2. Create a Router interface for each Edge Node used by the Tier-0 Gateway.
Select the newly created Tier-0 Gateway and create one Gateway interface for each peering address. This is one Gateway interface for the combination of each subnet (two in this example) and NSX-T Edge node (two in this example). In total 4 Gateway interfaces will be created as shown next. It is very important to correctly assign the right Edge Transport node and switch. The interfaces and their configuration used in this example are shown next. The settings for each Gateway's interfaces are analogous to the Active-Standby setup.
3.3. Enable BGP in the Tier-0 Gateway, as in the Active-Standby setup but in this case enabling ECMP.
Figure 39 - Enable BGP with ECMP in the Tier-0 Gateway in Active-Active mode.
Unlike in the Active-Standby setup, in this case the source address for each peering will be
specified. Overall the configuration settings to be used are shown next:
- BFD Configuration: the appropriate BFD settings depend on whether the BIG-IPs/NSX-T Edges are bare metal (timers set to 300ms) or virtual machines (timers set to 1000ms), as described in BGP configuration details within the GENERAL NOTES section.
Ultimately the configuration should be similar to the one in the following figure:
The remaining step is to redistribute the NSX-T routes into NSX-T’s BGP which then will be
announced to the BGP peers (in this case the F5 BIG-IPs). This is done at Tier-0 Gateway
level in the section shown in the next figure.
Create a redistribution entry which includes NSX connected networks, as can be seen in the next figure.
4. Create a Tier-1 Router. This step is the same as in the Active-Standby setup.
Overall, the configuration of Self IPs is analogous to the Active-Standby setup but in this case,
there are two segments (vlan-south-blue and vlan-south-red). The overall configuration for
BIG-IP unit #1 is shown in the next figure.
The Self IPs towards NSX-T’s uplinks have the same configuration as in the Active-Standby
configuration using BGP. Please check the Active-Standby implementation section for details
on configuring these Self IPs.
The next step is to configure BFD and BGP itself. For this, log in via SSH to each BIG-IP unit and run the imish command, which enters the ZebOS CLI (ZebOS uses a typical router CLI command set). The F5 BIG-IP configuration must mirror NSX-T's BGP configuration. This is shown in the next figure with embedded comments. The differences from the Active-Standby setup are shown in colors other than orange.
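The referenced figure is not reproduced here. As a rough, hedged sketch of the ECMP-specific differences (the max-paths value, the AS numbers and the red/blue peer addresses are assumptions based on the example; verify the exact ZebOS command syntax on your BIG-IP version):

router bgp 65000
 ! allow multiple equal-cost eBGP paths (ECMP)
 max-paths ebgp 4
 ! one peering per Edge node interface, on each of the two uplink subnets (red and blue)
 neighbor 10.106.53.1 remote-as 65001
 neighbor 10.106.53.1 fall-over bfd
 neighbor 10.106.53.2 remote-as 65001
 neighbor 10.106.53.2 fall-over bfd
 neighbor 10.106.54.1 remote-as 65001
 neighbor 10.106.54.1 fall-over bfd
 neighbor 10.106.54.2 remote-as 65001
 neighbor 10.106.54.2 fall-over bfd
! BFD timers (300 ms for bare metal, 1000 ms for VMs) are configured separately,
! as described in the GENERAL NOTES section.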
One key aspect of doing L3 path load sharing (in this case using BGP+ECMP) is that the BIG-IP can receive traffic for the same flow on different VLANs (asymmetric traffic). By default, as a security feature, the BIG-IP doesn't allow such behavior and blocks this traffic.
Figure 45 – Configuration required for ECMP which might generate asymmetric traffic.
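Figure 45 is not reproduced here; the setting involved is the LTM VLAN-keyed connections global option. As a hedged tmsh sketch (verify against your BIG-IP version before applying):

# Allow a connection to be processed even if packets of the same flow arrive on different VLANs
modify ltm global-settings connection vlan-keyed-conn disabled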
At this point, follow the testing steps described in the Verifying the deployment section.
[Figure: Topology B - BIG-IPs replacing the NSX-T Tier-1 Gateways, with East-West Virtual Server listeners (VS1, VS2) on the tenant overlay segments; the NSX-T Tier-0 LR connects to the external network (VLAN) via a physical router.]
The main characteristic of this topology is that NSX-T’s Tier-1 Gateways are replaced by BIG-
IPs. NSX-T’s distributed firewall works normally, but this topology eliminates NSX-T’s
distributed routing between the segments at the Tier-1. This is not as performance impacting
as it might seem. It only impacts performance when there is plain routing between the
segments. If the services between the segments are implemented with load balancing (which
is beneficial for availability of the services) there is no performance impact because load
balancing is always implemented in a centralized manner (whether implementing it with NSX-
T’s LB or BIG-IP ADC or any other VM-based load balancer), unless using NSX-T’s DLB which
has very limited functionality.
Eliminating the NSX-T’s Tier-1 Gateway keeps a simpler 2-tier routing and allows F5 BIG-IP
Services to be implemented between the tenant segments. If it is expected to have a high
volume of plain routing traffic between the tenant’s segments, then NSX-T’s distributed
Gateway should be inserted south of tenant’s BIG-IPs, creating a 3-tier routing where BIG-IP’s
routing tier would just be transit between NSX-T’s top and bottom Gateways.
Unlike other LB implementations, it is not necessary to dedicate a subnet for East-West VIPs. BIG-IP Virtual Servers can have one or more VIPs listening on one or more segments independently of the address of the VIP. This will be exemplified in the implementation section.
It is recommended to have BIG-IP clusters specific for each tenant. This is aligned with VMware's vision where the Tier-1's domain can be managed by each tenant. The benefits of using BIG-IQ for centralized management and visibility are more relevant in this topology. Additionally, having several BIG-IP clusters distributes the workload across the ESXi hypervisors, unlike NSX-T's LBs which might be more limited, running in NSX-T Edge's hosts only.
[Figure: Topology B implementation example - external network 10.105.196.0/24; NSX-T Tier-0 Logical Router on Edge-01/Edge-02 (interfaces .2/.3, HA VIP .4); tenant BIG-IP VE cluster with segments South-A 10.106.51.0/24 and South-B 10.106.52.0/24; Virtual Servers VS1 (.51.110) and VS2 (.{51,52}.120).]
In order to have a manageable network, contiguous networks are used for each tenant. In this example, /20 prefixes are used. This is especially relevant in this topology because NSX-T's Gateways are not used. Only NSX-T Gateways can advertise routes within the whole NSX-T network. In the case of using BIG-IP as a Tier-1 Gateway replacement, it is necessary to configure static routes in NSX-T's Tier-0. By having contiguous networks for each tenant, only a single routing entry per tenant is needed.
The transit network between the Tier-0 and the BIG-IPs uses a /24. A /24 prefix is larger than strictly necessary for an HA pair (only 4 host addresses would be needed) but allows for more ingress VIP addresses and for expanding the BIG-IP HA cluster into a Scale-N Active-Active cluster (up to 8 BIG-IPs per cluster) or multiple BIG-IP clusters.
From the figure above, it can be seen that this topology is only supported by BIG-IP VE. The configuration will be detailed next. As with all other topologies, this guide focuses on the configuration of Layer 3 and higher layers that are specific to this topology.
1.1. Create the Tier-0 Gateway.
In NSX-T manager, go to Networking > Tier-0 Gateways > Add Gateway > Tier-0 as shown in the next figure.
1.2. Create an Interface for each Edge Node used by the Tier-0 Gateway.
Select the Gateway created (T0-Topology B in our example) and create two interfaces in the UI by first selecting the Edit option in the T0 Gateway, then scrolling down to the Interfaces section and clicking the Set option of External and Service Interfaces. Enter the following parameters for each interface:
- Name: In this example, edge-1-uplink-red is used for the first router port and edge-2-
uplink-red for the second (we will use edge-*-uplink-blue in the BGP+ECMP
scenarios).
- Type: External
- Edge Node: This will be edge-1-topology-a and edge-2-topology-a for each external
interface respectively.
- MTU: use external network’s MTU, which should be the same on the BIG-IP.
- URPF Mode: Strict is a good practice providing security with no expected
performance impact. Strict should be used unless asymmetric paths are used.
- Segment: This is the L2 network to which the interface is attached. It is a prerequisite to have this previously created. See section Design consideration: Layer 2 networking for details.
- IP Address/mask: this is the IP address assigned to the interface in the shared segment between the NSX-T Edge nodes and the F5 BIG-IPs. In this example, 10.106.53.1/24 is used for the interface in edge-01 and 10.106.53.2/24 in edge-02.
- Click Add.
Figure 50 – Filling the details of a router port of one of the uplinks for the Tier-0 Gateway.
The HA VIP is an IP address that will be shared by the two Edge Nodes used for the Tier-0 Gateway just created and will be used as the ingress IP to the NSX-T networks.
Select the Gateway created (T0-Topology B in our example), and create an HA VIP in the UI by selecting Edit > HA VIP Configuration > Set and entering the following parameters:
Add a default route in the Tier-0 Gateway towards the BIG-IP cluster floating Self IP address.
In our example, the BIG-IP cluster floating address to use as the next hop is 10.106.53.10. Select the Tier-0 Gateway created and then create a static route in the UI by selecting Routing > Static Routes > Set, entering as Next Hop the BIG-IP's floating IP, in this example (not shown in the figure) 10.106.53.10.
2. Create a segment for the transit network between Tier-0 Gateway/Edges and the BIG-IPs.
Go to Networking > Segments > ADD SEGMENT and create a segment within the overlay Transport Zone, attaching it to the Tier-0 Gateway as follows:
Figure 54 – Creating an overlay segment for the transit network between the Tier-0 Gateway
and the BIG-IPs.
By using a contiguous prefix per tenant, only a single route needs to be added to the existing routing table. Ultimately the routing table will look like Figure 55.
Figure 55 – Adding tenant’s routing entries. Highlighted is the routing entry for tenant green
for which BIG-IPs are configured in this section.
Follow the same steps as for creating the segment for the transit network, creating as many segments as networks that are going to be used for the tenant. In this example we will create only the ones for tenant green; these will be:
Unlike in Topology A's implementations, in this topology the BIG-IPs will use NSX-T overlay segments for the data traffic. After creating the segments in the NSX manager, the BIG-IP VE can be attached to these segments just like a non-NSX-T segment.
Notice the different types of Networks (NSX and regular/non-NSX). The BIG-IP will make use of all these networks just like any regular untagged VLAN, as shown in the next figure:
Figure 57 – Adding the NSX-T segment to the BIG-IP is just like a regular untagged VLAN.
Next, create the Self IPs and floating Self IPs towards the Tier-0 Gateways (north-bound) and
for the tenants’ networks (south-bound). None of these require any special configuration. An
example of the first BIG-IP unit is shown next.
Figure 58 – Self IPs and floating Self IPs required (shown in BIG-IP unit 1).
Please note that the non-floating Self IPs are per BIG-IP unit whilst the floating Self IPs are
synchronized across the BIG-IP units.
The next step is to configure the static routing in the BIG-IP. In this case, only a default route towards the Tier-0 Gateway is required because all other networks are directly connected. This is shown in the next figure and should be configured in both BIG-IP units (this configuration is not synchronized automatically across BIG-IPs).
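A hedged tmsh sketch of this default route; the gateway address is a hypothetical placeholder for the Tier-0 Gateway's interface on the transit segment:

# Default route towards the Tier-0 Gateway on the transit segment (placeholder address)
create net route default gw 10.106.50.1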
At this point follow the testing steps described in the Verifying the deployment section.
As mentioned previously, it is not required to dedicate a subnet for East-West VIPs; in fact, BIG-IP Virtual Servers can have one or more IP addresses listening on one or more segments independently of the address. This is exhibited in the implementation diagram, where the following are configured independently (a tmsh sketch is shown after the list below):
- The destination address of the Virtual Server (which is shown in the figure above).
- The segments where the Virtual Server is going to listen (this is independent of the destination address); this is configured in the BIG-IP by selecting the VLANs where the Virtual Server is enabled or disabled.
- The source address of the Virtual Server, which is a set of prefixes that limits the scope of the Virtual Server. The main use of this feature is to have different Virtual Servers for the same destination and VLAN combination, where the Virtual Server that applies depends on the source of the request.
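As referenced above, a hedged tmsh sketch combining these three elements; the virtual server, segment and pool names and the addresses are hypothetical, loosely based on the example figure:

# East-West Virtual Server: destination VIP, restricted to a source prefix,
# listening only on selected tenant segments, and using SNAT for traffic symmetry
create ltm virtual vs_east_west destination 10.106.51.110:80 source 10.106.52.0/24 ip-protocol tcp profiles add { http } vlans-enabled vlans add { seg-south-a seg-south-b } source-address-translation { type automap } pool pool_app_green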
[Figure: Topology C - BIG-IP Scale-N (hardware or VE) in parallel to the NSX-T Tier-0 LR, connected via a transit network; ingress VIPs on the external network; SNAT forces a symmetric traffic path; a direct path (bypassing the BIG-IP) is used for accessing internal hosts externally and for the default route from inside; the transit network is overlay when using a VE BIG-IP and VLAN when using hardware BIG-IP.]
Traffic-path wise, the main characteristic of this topology is that it allows direct access to the workloads without going through the BIG-IPs (BIG-IP bypass). Performance reasons should not drive the selection of this topology: the additional logical hop that the F5 BIG-IP represents adds very little latency with no throughput reduction. Moreover, when using F5 BIG-IP hardware the added latency is negligible compared to the latency impact that virtualization infrastructures imply.
In the previous figure, depending on the choice of a hardware or virtualized BIG-IP, the NSX-T
boundary will differ. When using a hardware BIG-IP, the connectivity between the Tier-0 and
the BIG-IPs will be done with an NSX-T Edge uplink. When using a virtualized BIG-IP, this
connectivity will be done with a regular router port.
The main reason for choosing this topology should be that each tenant can have their own North-South BIG-IP VE, which they can manage independently. For the purpose of full isolation, this can be achieved for either Topology A or C using a hardware BIG-IP with vCMP technology. A multi-tenant setup with full isolation is shown in Figure 62.
[Figure 62: multi-tenant setup with full isolation - per-tenant BIG-IP clusters (physical vCMP guests or Virtual Editions) attached to a transit network between the NSX-T Tier-0 LR and the Tier-1 LRs, with the external network (VLAN) behind a physical router.]
- Allows direct path to NSX-T which in turn allows NSX-T Edge to perform NAT at Tier-0
without eliminating direct IP visibility from the BIG-IP.
- Allows the deployment of a BIG-IP cluster for different tenants without impacting each other.
- Allows the use of either hardware or virtualized BIG-IPs.
- It is a more complex topology, with different paths for the same endpoints.
- Requires SNAT, hiding client’s IP addresses.
This topology is suitable for ADC, WAF and Identity management use cases, but requires that the direct path is tightly controlled in NSX-T's firewall; otherwise security functionalities would be bypassed.
[Figure: Topology C implementation example - spine router(s) on the external network (.1); BIG-IP Scale-N (hardware or VE) with ingress VIPs and Virtual Servers for services and security (Self IPs .101/.102, floating .100); NSX-T Tier-0 Logical Router on Edge-01/Edge-02 with an HA VIP used as the NSX-T ingress VIP; transit network 10.106.48.0/24 between the Tier-0 and the BIG-IPs; SNAT forces a symmetric traffic path, while a direct path is used for accessing internal hosts externally and for the default route from inside; services network 10.106.51.0/24 with pool members within NSX-T's address range.]
In the example used for this topology, BIG-IP VE is used, which means that the segment between the BIG-IP and the Edge nodes uses the NSX-T overlay. This will be shown in the following configuration. Given the many possibilities of configuring NSX-T Edge nodes and their logical switch uplink ports, it is assumed that these have already been created. This guide focuses on the configuration of Layer 3 and higher layers that are specific to this topology. See section Design consideration: Layer 2 networking for details.
1.1. Create the Tier-0 Gateway.
In NSX-T manager, go to Networking > Tier-0 Gateways > Add Gateway > Tier-0 as shown in the next figure.
1.2. Create an Interface for each Edge Node used by the Tier-0 Gateway.
Select the Gateway created (T0-Topology C in our example) and create two interfaces in the UI by first selecting the Edit option in the T0 Gateway, then scrolling down to the Interfaces section and clicking the Set option of External and Service Interfaces. Enter the following parameters for each interface:
- Name: In this example, edge-1-uplink-vlan216 is used for the first router port and
edge-2-uplink-vlan216 for the second.
- Type: External
- Edge Node: This will be edge-1-topology-c and edge-2-topology-c for each external
interface respectively.
- MTU: use external network’s MTU, which should be the same on the BIG-IP.
- URPF Mode: Strict is a good practice providing security with no expected
performance impact. Strict should be used unless asymmetric paths are used.
- Segment: This is the L2 network to which the interface is attached. It is a prerequisite to have this previously created. See section Design consideration: Layer 2 networking for details.
- IP Address/mask: this is the IP address assigned to the interface in the shared segment between the NSX-T Edge nodes and the F5 BIG-IPs. In this example, 10.106.53.1/24 is used for the interface in edge-01 and 10.106.53.2/24 in edge-02.
- Click Add.
Figure 66 – Filling the details of a router port of one of the uplinks for the Tier-0 Gateway.
1.3. Create an HA VIP for the Tier-0 Gateway.
The HA VIP is an IP address that will be shared by the two Edge Nodes used for the Tier-0 Gateway just created and will be used as the ingress IP to the NSX-T networks. Select the Gateway created (T0-Topology C in our example), and create an HA VIP in the UI by selecting Edit > HA VIP Configuration > Set and entering the following parameters:
Add a default route in the Tier-0 Gateway towards the BIG-IP cluster floating Self IP address.
In our example, the BIG-IP cluster floating address to use as the next hop is 10.106.53.10. Select the Tier-0 Gateway created and then create a static route in the UI by selecting Routing > Static Routes > Set, entering as Next Hop the BIG-IP's floating IP, in this example 10.106.216.1:
1.4. Create the transit network between the Tier-0 Gateway/Edges and the BIG-IP.
Go to Networking > Segments > ADD SEGMENT and create a Segment within the Overlay or a VLAN Transport Zone; this will mainly depend on whether the BIG-IP is a VE or hardware. In this case we are using a VE and the transit network will be in the overlay Transport Zone. The segment (we use segment-348 in this example) must be attached to the Tier-0 Gateway previously created. This configuration is shown next.
Figure 70 - Creating the Transit segment (segment-348) within the Overlay Transport
Zone for a BIG-IP VE
Although not part of this topology, this configuration will be used later to instantiate a VM and perform a verification of the deployment.
In NSX-T manager, select Networking > Tier-1 Gateways > Add Tier-1 Gateway > Tier-1 Router, filling in the following parameters:
The next step is to create a network attached to this Tier-1 Gateway. In the UI, select
Networking > Segments > Add Segment and enter the following parameters:
In this example, we are using BIG-IP VEs and NSX-T overlay segments for the transit network. The configuration used in this example is shown next:
Figure 73 - Attaching the BIG-IP to an NSX-T overlay segment for the transit network.
The BIG-IP will make use of all these networks just like any regular untagged VLAN as shown
in the next figure:
Next, create the Self IPs and floating Self IPs towards the spine routers (north-bound) and towards the NSX-T networks (south-bound) through the NSX-T Tier-0 Gateway's transit network. These do not require any special configuration. An example of the first BIG-IP unit is shown next.
Figure 75 – Self IPs and floating Self IPs required (shown in BIG-IP unit 1).
Note that the non-floating Self IPs are per BIG-IP unit while the floating Self IPs are
synchronized across the BIG-IP units.
The next step is to configure the static routing on the BIG-IP. Typically, this involves two routes:
These routes are shown in the next figure and should be configured in both BIG-IP units (this configuration is not synchronized automatically across BIG-IPs).
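These are typically a default route towards the spine router and a route to the NSX-T networks via the Tier-0 on the transit network. A hedged tmsh sketch, where the spine address and the Tier-0 address on transit network 10.106.48.0/24 are hypothetical placeholders:

# Default route towards the upstream/spine router (placeholder address)
create net route default gw 10.105.217.1
# Route to the NSX-T services network via the Tier-0 on the transit network (placeholder address)
create net route nsxt-services network 10.106.51.0/24 gw 10.106.48.10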
At this point, follow the testing steps described in the Verifying the deployment section.
[Figure: Topology D - BIG-IP VEs in parallel at Tier-1, with Virtual Server listeners (VS1, VS2, VS3) on the tenant overlay segments below the NSX-T Tier-0 LR; external network (VLAN) behind a physical router.]
The ideal scenario to handle East-West traffic is to have a BIG-IP cluster for each tenant. This
is aligned with VMware’s vision where the Tier-1’s domain can be managed by each tenant.
The benefits of using BIG-IQ for centralized management and visibility are more relevant in
this topology. Additionally, having several BIG-IP clusters distributes the workload across the
ESXi hypervisors unlike NSX-T’s LBs, which might be more limited running in NSX-T Edge’s
hosts only.
In the next figure, an implementation example of this topology is shown, which describes the
flows for North-South traffic:
[Figure: Topology D North-South flows - spine router(s) (.1) on external network 10.105.216.0/24; BIG-IP Scale-N VE with ingress VIPs and SNAT; tenant segments 10.105.32.0/24, 10.105.33.0/24 and 10.105.34.0/24.]
- Ingress traffic through the Tier-0 Gateway direct to the workload servers (blue color), either
from outside the NSX-T environment (shown in the figure) or from another tenant (not
shown). This traffic reaches the VMs directly, no LB or services are applied to it. No SNAT
is required. Normally, these flows are not allowed freely and filtering rules are set in the NSX-
T’s firewall.
- Ingress traffic reaching tenant’s services (orange color). The VIPs might be in a given
subnet and the workload servers in any other subnet. The traffic doesn’t go through the
Tier-1 Gateway twice.
In the next figure, an implementation example of this topology is shown, this time describing the
flows for East-West traffic:
[Figure: Topology D East-West flows - spine router(s) (.1) on external network 10.105.216.0/24; NSX-T Tier-0 distributed Gateway on Edge-01/Edge-02 (interfaces .2/.3, HA VIP .10); BIG-IP Scale-N VE with SNAT; East-West path for a VS with a single IP where servers and clients are in different networks; tenant segment 10.105.33.0/24.]
In the figure above we can differentiate two East-West flows within the same tenant (within the routing scope of a Tier-1 Gateway):
- The purple flow shows a typical Virtual Server with a single IP address (VIP). The flow outlined is between the orange and green segments. The VIP belongs to the orange segment and the client is in the green segment. In order for the client to reach the VIP, it has to go through the Tier-1 Gateway. This is nevertheless an efficient path because Layer 3 processing is distributed.
- The orange flow shows a Virtual Server with two IP addresses (VIPs), one in the green segment and another in the blue segment. This arrangement means that regardless of whether the clients are in the green or the blue segment, they never have to go through the Tier-1 Gateway. This improves performance and simplifies the traffic flows.
Please note that in both Virtual Server configurations SNAT is required to avoid Direct Server Return (DSR), which would not allow for proxy-based advanced services. DSR is out of scope of this guide.
Additionally, different Virtual Servers with the same destination IP/port can be implemented by using the Source Address setting in the Virtual Servers.
Figure 80 – Source Address setting to discriminate the prefixes to which the Virtual Server
applies.
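A hedged tmsh sketch of two Virtual Servers sharing the same destination IP and port but applying to different source prefixes, both using SNAT as discussed above (all names, addresses and pools are hypothetical examples):

# Same destination, different source prefixes: which Virtual Server applies depends on the client subnet
create ltm virtual vs_app_from_green destination 10.105.33.100:443 source 10.105.32.0/24 ip-protocol tcp source-address-translation { type automap } pool pool_app_v1
create ltm virtual vs_app_from_blue destination 10.105.33.100:443 source 10.105.34.0/24 ip-protocol tcp source-address-translation { type automap } pool pool_app_v2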
Although topology D can be used for both North-South and East-West traffic, it is important to note that this topology can be combined with Topology A. In such a combined scenario, Topology D would be used only for East-West traffic within a tenant (and could be managed by each tenant) and Topology A would be used for North-South flows. An example of this combined topology is shown in Figure 81.
[Figure 81: example of the combined A & D topology - physical or virtual BIG-IPs for North-South traffic (Topology A, over L2/subnets A-D) combined with per-tenant BIG-IP VEs in the overlay for East-West traffic (Topology D).]
[Figure: combined topology example - spine router(s) (.1) on external network 10.105.216.0/24; NSX-T Tier-0 distributed Gateway on Edge-01/Edge-02 (interfaces .2/.3, HA VIP .10); BIG-IP Scale-N VE with SNAT and an East-West path for a VS with a single IP where servers and clients are in different networks; tenant segment 10.105.33.0/24.]
Note that in this example topology there is no virtual server for the egress traffic. The outbound traffic from the internal hosts is routed directly to the Tier-1 Gateway. If the deployment requires an egress VIP in order to insert advanced services such as Web Gateway, this would be better served by one of the inline topologies (Topology A or B).
The configuration steps are described next, and we start with the previously existing Tier-0 Gateway of topology A, to which we will attach the Tier-1 Gateway. There is no limitation on the Tier-0 Gateway chosen.
This Tier-1 Gateway will have a transit network towards Tier-0 (automatically created) and in
this example 3 user segments in the overlay transport zone (orange, green and blue).
In NSX-T Manager, select Networking > Tier-1 Gateways > Add Tier-1 Gateway > Tier-1 Router and fill in the following parameters:
The next step is to create the orange, green and blue networks and attach them to this Tier-1
Gateway. In the UI, select Networking > Segments > Add Segment and enter the following
parameters:
First, create the Self IPs and floating Self IPs in the VIP segment attached to the Tier-1 Gateway. These do not require any special configuration. An example for the first BIG-IP unit is shown in Figure 85.
Figure 85 – Self IPs and floating Self IPs required (shown in BIG-IP unit 1).
Note that the non-floating Self IPs are per BIG-IP unit, while the floating Self IPs are synchronized across the BIG-IP units.
We will use a default route to reach the non-directly connected networks, and the first Self IP to reach the Tier-1 Gateway. This is shown in Figure 86:
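For reference, a minimal tmsh sketch of this step on BIG-IP unit 1 follows. The VLAN/segment name and the Self IP addresses are placeholders (the default route's next hop is the Tier-1 Gateway, 10.106.32.1 in this guide's Topology D addressing); use the values shown in Figures 85 and 86:
create net self selfip_unit1 address 10.106.32.11/24 vlan segment_vip allow-service default
create net self selfip_float address 10.106.32.10/24 vlan segment_vip traffic-group traffic-group-1 allow-service default
create net route default gw 10.106.32.1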
At this point, follow the testing steps described in the Verifying the deployment section.
VMware Cloud (VMC) on AWS has the following characteristics that are relevant to this guide:
- It allows deploying data centers on demand (SDDC – Software Defined Data Center) on AWS infrastructure.
- VMC is deployed within an AWS VPC (Virtual Private Cloud), which allows simple access to AWS services such as Direct Connect or additional user compute in EC2.
- Analogously to the previous item, the EC2 compute resources in the VPC can also make use of the VMC deployment. The VPC and the VMC deployment are connected using plain routing.
The next picture shows a scenario with two VMware deployments, one of them within VMC, where we also make use of additional EC2 compute resources in the same VPC where the SDDC resides.
In this figure, we can see that the user in VMC is restricted to the Compute Networks in AWS (top right of the picture), which can only be connected to the CGW (a Tier-1 Gateway). Given this constraint, we will limit the proposed topologies to a modified Topology D which makes use of SNAT. We will also mention alternatives that avoid the use of SNAT.
6
Starting with VMC on AWS's SDDC version 1.12 it is possible to have more than one Tier-0 Gateway using the so-called Multi-Edge SDDC topology, but this is out of scope of this guide.
Figure – Modified Topology D in VMC on AWS: the AWS VPC (172.16.0.0/16) with its Internet Gateway (IGW), NAT and AWS servers (172.16.200.0/24) behind the VPC router; the VMC SDDC with its IGW and CGW, management (10.199.0.0/24), HA (10.199.1.0/24) and external VIP (10.199.2.0/24) segments, and the Red Frontends (10.199.3.0/24), Green Apps (10.199.4.0/24) and Blue DB (10.199.5.0/24) segments; the BIG-IP Scale-N VE provides the External Service, Internal Service and Forwarding virtual servers, all using SNAT.
In this sample topology, we create a typical 3-tier architecture with Frontend (External Service), Application (Internal Service) and Database tiers. Notice that the Database Tier is configured as "Disconnected" to provide an additional layer of security by controlling access through a VIP in the BIG-IP. The created segments can be seen in the next figure.
It is worth noting that VMC does not allow creating custom segment profiles, which inhibits the use of the MAC Masquerade mechanism. See the MAC Masquerade subsection in the General Notes for more details.
The VPC in which the VMC deployment is hosted can be checked from the VMC console as
shown in the next figure.
If we want to check the routing table of the VPC, we need to use the AWS console. When we
add new segments in VMC, routes will be automatically populated in the VPC router to provide
connectivity from the non-VMC environment towards the VMC environment. We can see the
configuration of this example in the next figure:
Please note that this routing table is independent of the routing table within VMC. This can be seen in that the only VPC-owned (non-VMC owned) route is the one marked as local in the Target column.
Lastly, we will configure a public address for the VMC deployment. This public address can be used as the ingress and egress point between the VMC deployment and the outside world.
This public IP needs to be mapped to the VIP of the BIG-IP that we will configure later on. This is done with a 1:1 NAT in the IGW of the VMC SDDC, which is configured in the VMC console as shown in the next figure, where 10.199.2.100 will be the VIP in the BIG-IPs.
Figure 93 - Configuring the required 1:1 NAT for the BIG-IP VIP.
The BIG-IP has floating Self IPs configured for all subnets with the exception of the HA segment. Strictly speaking, the floating Self IP is only required for the blue segment used for the Database Tier, which is disconnected from the CGW (NSX Tier-1 Gateway) and uses the BIG-IP as its default gateway for an additional layer of security. The Frontend Tier and the App Tier use the CGW as their default gateway. For the non-floating Self IPs we use .11 for BIG-IP unit #1 and .12 for BIG-IP unit #2.
Connectivity to the non-directly connected segments, including the AWS workload segments in the VPC, is provided by a single default route, as shown next.
Figure 95 - Routing required for non-directly connected segments, including AWS workload
segments in the VPC.
The following Virtual Servers (VS) are then configured in the BIG-IP:
- A VS for the Frontend (named Frontend), for which we previously configured the public IP and the 1:1 NAT.
- A VS for the App using the VMC compute (named App).
- A VS for an additional App using the AWS compute in the VPC (named AppAWS).
- A VS for forwarding between the App Tier and the DB Tier (named Forwarding).
All these VS, with the exception of the Forwarding VS, are enabled only in the segment to which their address belongs.
The Forwarding VS is enabled in both the App Tier and the DB Tier segments to allow traffic initiated from either of the two segments. The BIG-IP can be configured with additional controls to enhance the security between these two segments.
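A hedged tmsh sketch of two of these Virtual Servers is shown below; the pool names, the Frontend service port and the segment/VLAN names are hypothetical, while 10.199.2.100 is the Frontend VIP mapped by the 1:1 NAT configured earlier:
create ltm virtual Frontend destination 10.199.2.100:443 pool pool_frontend profiles add { http } source-address-translation { type automap } vlans-enabled vlans add { segment_external }
create ltm virtual Forwarding destination 0.0.0.0:any mask any ip-forward profiles add { fastL4 } source-address-translation { type automap } vlans-enabled vlans add { segment_apps segment_db }
The first VS is enabled only on the external segment where its address belongs; the second one is enabled on both the App and DB segments, as described above.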
Figure 96 - Overview of the service configuration, detailing the additional segments where the Forwarding VIP is enabled.
Once VMC supports either modifying the routing table of the CGW or overlapping addresses on disconnected segments, it will be possible to avoid SNAT. When either of these features becomes available in VMC, this guide will be updated with a non-SNAT topology.
As a consequence, many designs are possible. Ultimately the design will be highly dependent on the applications and on the databases, which most of the time require replication across sites. From the BIG-IP's point of view there are very few restrictions. The topic is so wide that this guide will give overall guidance and consider three scenarios:
Overall approach
There are several approaches to multi-cloud. IP Anycast is a transparent mechanism with high reliability and fast recovery times, but it relies on highly coordinated IP routing, which is not possible across cloud vendors. Other IP routing strategies are also possible but, in many cases, routes cannot be migrated across Autonomous Systems swiftly. IP-addressing-based strategies inherently do not allow a high degree of control over service publishing. F5 recommends Global Server Load Balancing (GSLB) because it has the following benefits:
- Cross-cloud vendor. It can be used in any public cloud or private data center and supports any IP service (not necessarily served by BIG-IP).
- High degree of control. Rules can be set up based on service name instead of IP address. Traffic is directed to a specific data center based on operational decisions such as service load, also allowing canary, blue/green, and A/B deployments across data centers.
- Stickiness. Regardless of topology changes in the network, clients will be consistently directed to the same data center.
- IP Intelligence. Clients can be redirected to the desired data center based on the client's location, and statistics can be gathered for analytics.
At the time of this writing, we recommend F5 BIG-IP's DNS module for GSLB because of its more sophisticated health probing and its automatic service discovery feature.
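As a minimal illustration of this recommendation (not a configuration from this guide), and assuming that the GSLB data center and server objects pointing to each site's Virtual Servers already exist, a wide IP could be defined in tmsh as follows; all object names and the FQDN are hypothetical:
create gtm pool a pool_app_gslb members add { site1_bigip:vs_app { } site2_bigip:vs_app { } }
create gtm wideip a app.example.com pools add { pool_app_gslb }
BIG-IP DNS then resolves app.example.com to the virtual server of whichever site is selected by the configured load balancing and health probing logic.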
These are cross-cloud vendor offerings not tied to BIG-IP but have an exceptional integration
with BIG-IP. Both F5 BIG-IP and F5 Cloud Services provide Pay as You Go pricing options.
Please check the Silverline links for more detail on this SaaS Security topic.
- Dedicated circuits with low latency and high throughput where traffic is only IP routed. This is the case of local VPC connectivity from VMC through an ENI interface, and Direct Connect, which allows inter-site connectivity.
- Shared circuits with non-guaranteed latency and limited throughput where traffic is encapsulated (often encrypted too) via gateways. This is the case with VPNs.
An overview of these connectivity options can be seen in the next figure. In it we discourage VPN connectivity for BIG-IP data plane traffic. This is because BIG-IPs typically deal with application and frontend tiers, where low latency and high throughput are critical for application performance and should not be constrained. Lower-performance connectivity such as VPNs should typically be limited to services such as management and database replication, which can handle the traffic asynchronously.
Figure 97 - Distilled connectivity options between the different types of clouds (squares). The less suitable connectivity options are stricken through, with annotations in red indicating why they are less suitable.
Direct Connect, or even better VMC-to-local-VPC connectivity, can be used for stretching a cluster of servers across different infrastructures. Please note that this might result in unevenly performing servers if pool members are spread amongst these infrastructures. Note as well that this also lowers reliability because more components, and thus more points of failure, are involved. Whenever possible we will avoid these connectivity options too. In the design guidelines within this section we will indicate when these are suitable from a BIG-IP data plane point of view.
- Application migration.
- Workload rebalancing.
7
https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-A7E39202-11FA-476A-A795-
AB70BA821BD3.html
All these use cases make use of the VM migration facilities provided by HCX. For the specific case of workload rebalancing, F5 recommends the use of GSLB instead.
In general, HCX doesn't mandate how the services are exposed externally, therefore GSLB is always a valid option.
The VMware HCX Network Extension permits keeping the same IP and MAC addresses during a VM migration. This minimizes service disruption and is transparent to all devices, including BIG-IP.
Figure – Internet ingress alternatives for specific use cases: through the AWS VPC's IGW (optionally with an ELB) towards the VPC router, or through the VMC IGW towards the CGW.
- At the time of this writing, using an AWS IGW instead of an IGW via VMC offers the possibility of using ELBs, which provide Advanced Shield capabilities.
- The cost will depend on where we have more traffic and where we have more compute resources.
- Typically, ADCs like BIG-IP deal with Frontend-tier and App-tier servers, which should not have to talk with peers in other sites. These tiers have the highest throughput and latency demands, so inter-site communication should be avoided. Otherwise, this could result in uneven performance and increased, unnecessary costs.
- Identify strictly necessary inter-site dependencies. The typical case is DB replication, which has much lower throughput demands. Also, latency is less of an issue because replication often happens asynchronously.
- There are other very relevant sources of inter-site traffic such as automation, VM migration and data-store replication (for example, a repository of images). VMware's HCX traffic fits in this category.
The first two items in this list deal with traffic that is generated upon client requests (blue
arrows in the figure below). On the other hand, the third item is a new category of traffic
(orange arrows) that is not expected to have dependencies when handling an ongoing
customer request. Another characteristic of this traffic is that its traffic demands will greatly
depend on frequency of updates in the applications.
- Simpler sites are easier to manage, scale, and replicate. GSLB allows for distribution of workloads based on a site's or a service's load and capacity, so it is perfectly fine to have differently sized data centers. The most important attribute is for them to be architecturally equal. Cross-cloud-vendor-capable automation is advised.
Using BIG-IP DNS and following the above guidelines we can create a cross-cloud vendor
solution using GSLB. This is shown in the next figure.
Figure – Cross-cloud GSLB arrangement: each site hosts its own Frontend, App and DB tiers and handles its own DNS requests and client traffic; the BIG-IP DNS service discovery and health probing mesh between sites is partially shown.
Probably the most remarkable aspect of the diagram is the network dependencies and demands, which drive the design. In this diagram inter-site dependency is reduced to the minimum, typically DB replication only.
We can also see that there is additional inter-site traffic, like the BIG-IP DNS iQuery (used for service discovery and health probing), but this traffic is different in nature because it is failure tolerant.
In the design above, the DNS functionality is implemented in standalone BIG-IPs because redundancy is accomplished by having an independent BIG-IP DNS at each site. Having this BIG-IP DNS separated from the BIG-IP Scale-N cluster that handles client traffic gives clarity in the diagram and, more relevantly, sets a clear demarcation of functions. If desired, the BIG-IP DNS functionality can be consolidated in the BIG-IP Scale-N cluster at each site, but a preferable approach is to locate the BIG-IP DNS outside of the data centers:
- To be closer to the clients. This only slightly improves DNS performance, since clients' local DNS resolvers usually reply from their DNS cache.
- To have a closer view of the clients' network performance and reachability to the clouds. This is very relevant.
Figure 100 - Preferred multi-cloud arrangement by using Internet exchanges for BIG-IP DNS.
It is worth noting that the architecture described in this section can be used for cloud bursting as well. Cloud bursting refers to the use case where the main site has limited scalability and increased capacity is required during peak periods. This cloud bursting capability is usually accomplished by spawning the needed resources in Software-Defined Data Centers/Public Clouds.
The approach described above in this section is preferred over adding compute from a Public
Cloud by means of a Direct Connect circuit. This is because a GSLB multi-site approach has
the following advantages:
- It automatically increases Internet traffic capacity. Each site has its own Internet access.
- It can reduce costs. Using a replica site uses almost the same compute resources and
eliminates the need for a high performance Direct Connect.
- It provides increased reliability because of less inter-site dependency.
- Its automation is simpler because sites are architecturally similar.
- It is not necessary to deal with the bandwidth allocation management that the Direct Connect circuit would need over time.
- An independent multi-site architecture can be easily replicated to additional sites when
needed.
- It allows the use of more distributed regions, optimizing customer experience.
- The cloud bursting site can have alternative uses such as allowing migrations or new
application roll outs.
An alternative cloud-bursting architecture, specific to some use cases, is described next.
Figure 101 - Overall design of a single site with Cloud Bursting capability. As noted in the figure, this is usually not extended to the Public Cloud because compute is usually not the limitation (I/O usually is) and/or the customer wants to keep control of the data locally.
In this architecture the on-premises data center is stretched to a public cloud when load conditions require increasing the compute capacity. In this scenario Internet access is kept in the on-premises data center. It requires the use of a high-performance, low-latency Direct Connect link, which usually means staying within the metropolitan area of the on-premises facility. This Direct Connect circuit needs to be established once, and its capacity increased ahead of the peak periods. Some housing vendors allow changing the circuit's capacity programmatically.
When compute changes dynamically, it is a perfect fit for the Service Discovery feature of F5's AS3, which automatically populates the pools with the added or removed computing instances. Please check the clouddocs.f5.com site for this and other automation options.
GENERAL NOTES
Virtualization is a potential source of latency, and using longer timers reduces the chance of false positives when detecting link failures.
- NSXT-VI-SDN-037 – Configure BGP Keep Alive Timer to 4 and Hold Down Timer to 12
seconds.
8
https://docs.vmware.com/en/VMware-Validated-Design/index.html
Following VMware general recommendations, the management interface (of either BIG-
IP or BIG-IQ) should not be in an overlay network or use N-VDS at all. Typically, the
management interface will be connected to a VDS switch, therefore isolating the
management plane from the NSX-T networking.
When deploying the BIG-IP OVA file using defaults, a specific amount of memory is reserved
for the BIG-IP VE virtual machine. By default, CPU is not specifically reserved, but should be
manually configured with an appropriate CPU reservation in order to prevent instability on
heavily loaded hosts. This is done in vCenter.
The CPU must support a one-to-one, thread-to-defined virtual CPU ratio, or on single-
threading architectures, support at least one core per defined virtual CPU. In VMware
ESXi 5.5 and later, do not set the number of virtual sockets to more than 2.
BIG-IPs used for North-South traffic should be placed in the same cluster as the NSX-T Edge nodes in order to keep traffic affinity. This might be a dedicated "Centralized Services" cluster, a shared "Management & Edge" cluster or an all-shared "Collapsed" cluster, depending on the size of the deployment.
BIG-IPs used for East-West traffic should be distributed across the Compute Clusters to distribute their workload as much as possible. In the case that each tenant has its own nodes, the BIG-IPs should be run just as another tenant VM, maximizing affinity of the traffic flows.
The above VM placement best practices can be achieved with the Dynamic Resource
Scheduler (DRS). In the next picture, the creation of anti-affinity rules is shown to avoid two
BIG-IPs of the same cluster running on the same hypervisor. Note: the anti-affinity rules
should be “must” rather than “should” to guarantee anti-affinity and therefore high
availability.
Figure 103 - Setting anti-affinity rules with VMware's Dynamic Resource Scheduler.
• High Availability of VMs in VMC requires using the stretched cluster deployment type.
When deploying a VM you can choose an ESXi host in the desired Availability Zone
(AZ). In case of failure, the VM will stay in its original AZ if possible. Each site in a
stretched cluster resides in a separate fault domain. See the VMC FAQ9 and this
community article10 for more details. A screenshot of this configuration is shown next.
9
https://cloud.vmware.com/vmc-aws/faq#stretched-clusters-for-vmware-cloud-on-aws
10
https://cloud.vmware.com/community/2018/05/15/stretched-clusters-vmware-cloud-aws-overview/
MAC Masquerade
Please note that this feature is an optimization to slightly reduce the time needed for traffic to be sent to the appropriate BIG-IP when a traffic-group shift occurs. Although the reduction in time is slight, it might be critical for some applications. Usually this feature is not needed, and it is not noticeable when configured, because the GARP mechanism used by default is fast enough for the vast majority of applications.
MAC Masquerade is achieved by having a single MAC address for each traffic-group which is shared by the BIG-IPs of the Scale-N cluster (by default each BIG-IP has a different MAC address for each traffic-group). This BIG-IP feature is further described in K13502: Configuring MAC masquerade (11.x - 15.x)11.
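For example, a masquerade MAC address can be assigned to a traffic group from tmsh as shown below; the locally administered MAC address is just an example value (see K13502 for guidance on choosing one):
modify cm traffic-group traffic-group-1 mac-masquerade-address 02:01:23:45:67:89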
NSX-T has a very tight L2 security configuration by default and requires adjustment. More precisely, a new MAC Discovery Profile needs to be created with certain settings changed from their defaults; these settings can be seen in the following figure. This profile has to be applied to all the segments of the traffic group where MAC Masquerade is going to be used.
Figure 105 - Creating a new MAC Discovery Profile for MAC Masquerade.
11
https://support.f5.com/csp/article/K13502
VMC on AWS
At the time of this writing, VMC on AWS doesn't allow this customization; hence MAC Masquerade cannot be used.
This section takes into account Red Hat OpenShift and Kubernetes in general. At present, handling Pivotal PKS differently from any other Kubernetes flavor is not required; as long as Pivotal PKS aligns with the Kubernetes API, it will be supported by F5 Networks like any other Kubernetes flavor. Red Hat OpenShift and Pivotal PKS are able to use NSX-T's load balancer natively. In this release of the guide, the focus is on replacing the LBs for workloads, not those for the management and control plane of these platforms.
As described in previous sections, for any of these container platforms the PODs' IP addresses should be routable from the BIG-IP. In other words, there cannot be any NAT between the BIG-IP and the PODs. Moreover, there are two ways in which POD workers can be exposed with a resource of kind Service: via NodePort or via ClusterIP. Although both are supported, it is highly recommended to use ClusterIP12. This is because when using NodePort mode the BIG-IP (or any other external host) cannot send traffic directly to the PODs, which means for the BIG-IP that:
- There is an additional layer of load balancing (at node level) which adds latency and complexity, and makes troubleshooting and observability more difficult.
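As a minimal illustration, assuming an existing Deployment named my-app (a hypothetical name), the recommended ClusterIP exposure can be created as follows; CIS running in cluster mode can then populate the BIG-IP pool directly with the Pod IP addresses:
kubectl expose deployment my-app --name my-app-svc --port 80 --target-port 8080 --type ClusterIP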
Once the PODs that compose the workers of a given Service are defined, the BIG-IP must be automatically configured and updated when the PODs of the service are created, updated or deleted. This is performed by F5 Container Ingress Services (CIS)13, which is installed as a Kubernetes POD that monitors configuration changes in Kubernetes. F5 CIS automatically updates the BIG-IP configuration by translating orchestration commands into F5 SDK/iControl REST calls. The overall architecture is shown in the next picture.
12
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-modes.html#kctlr-modes
13
https://clouddocs.f5.com/containers/v2/
Figure 106 - F5 BIG-IP integration with container platforms with F5 Container Ingress Services
(CIS)
Although in the diagram above only one CIS instance is shown, a single F5 BIG-IP instance can be managed by several CIS instances, associating different container namespaces or projects with different partitions in the F5 BIG-IP.
Kubernetes Services can be exposed in F5 BIG-IP using several resource types; these are shown in the next table:
These options can be combined in the same deployment. Note that in the above table the LoadBalancer Service type is not mentioned. This is out of scope because it is meant to be implemented by a cloud provider's load balancer. Also note that the LoadBalancer Service type is not efficient in its use of IP address ranges because it requires an IP address for each instance.
NSX Container Plug-in (NCP) provides integration between NSX-T Data Center and OpenShift (as well as other PaaS/CaaS platforms). In this section, the ncp.ini settings (or the related YAML ConfigMap file at installation time) that should be taken into account are described, starting by disabling NCP's built-in load balancer, since the BIG-IP provides this functionality:
use_native_loadbalancer = False
In order to have PODs that do not require SNAT, it is necessary to indicate either the desired
CIDR address blocks or the UUIDs of previously defined address blocks in the next variable:
no_snat_ip_blocks = <comma separated list of UUIDs or CIDRs>
When creating projects/namespaces these will need to be created with the ncp/no_snat=true
annotation. This way the subnets will be taken from these IP blocks and there will be no SNAT
for them. These IP blocks are expected to be routable. An example namespace is shown next:
apiVersion: v1
kind: Namespace
metadata:
  name: no-nat-namespace
  annotations:
    ncp/no_snat: "true"
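Putting the two settings together, a hypothetical ncp.ini fragment could look like the following; the CIDR is only a placeholder and must be routable from the BIG-IP:
use_native_loadbalancer = False
no_snat_ip_blocks = 10.106.64.0/18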
External IP Pools will not be used because any SNAT or Ingress/LoadBalancer resource will
be handled by the BIG-IP. Further details can be found in the following documents:
- VMware’s “NSX Container Plug-in for OpenShift - Installation and Administration Guide”.
As with any other container platform, NAT must be disabled within the container environment. This allows the BIG-IP to have direct visibility of the containers' IP addresses.
In the case of Pivotal PKS, this is indicated in the PKS Ops Manager UI while performing the PKS installation. Following the regular PKS configuration, the NAT option needs to be unset in the Networking tab, as shown in the next screenshot.
Figure 107 - Indicating PKS networking options at installation time. The NAT option must be
unset.
Adjacent next-hops
Topology A (impl. static routing): Northbound – 10.105.217.1; Southbound – 10.106.53.1
Topology A (impl. dynamic routing): Northbound – 10.105.217.1; Southbound – 10.106.53.{1,2}
Topology A (impl. dynamic routing + ECMP): Northbound – 10.105.217.1; Southbound Uplink Red – 10.106.53.{1,2}; Southbound Uplink Blue – 10.106.54.{1,2}
Topology B: Northbound – 10.106.49.1; Southbound – 10.106.{51,52}.10 (Servers)
Topology C: Northbound – 10.10.216.1; Southbound – 10.106.48.1
Topology D: Northbound – 10.106.32.1 (default route's next-hop); Southbound – 10.106.{32,33,34}.100 (Servers)
The next step will be creating a test VM that will be attached to the tenant networks where the
workload servers will reside.
Segment / IP address
Topology A: 10.106.32.10
Topology B: 10.106.{51,52}.10
Topology C: 10.106.51.10
Topology D: 10.106.{32,33,34}.100
Configuring the VM’s network interface should allow pinging the NSX-T Tier-1 Gateway’s
router port (or the BIG-IP in the case of Topology B) as shown in the next figure. The next test
will be to ping BIG-IP’s closest IP.
The IP addresses to be used in these two tests are shown in the next table.
If testing BIG-IP's closest IP doesn't succeed, it is recommended to: 1) ping from the BIG-IP end instead and check the port lockdown settings of the Self IPs; 2) ping the floating Self IP address from the BIG-IPs themselves; and 3) ping the non-floating Self IPs as well.
Log in to the imish CLI and run the following command on both BIG-IP units, verifying that the Session State is Up for all BFD sessions (one per BGP peering configured):
Figure 108 - Verification of the NSX-T uplinks by checking the BFD sessions.
Next, verify that the BGP peerings are in the Established state by running the following command:
As shown in Figure 109, you should see two lines in the Established state (one per BGP peering). This command must be run on both BIG-IPs as well. If the output is not as expected, verify that BGP's TCP port 179 is open, that the peering IP addresses for each BIG-IP are correct, and that the BGP password is correct.
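For reference, these checks are run from the BIG-IP's dynamic routing shell; commands along these lines can be used (compare the output with Figures 108 and 109):
imish
show bfd session
show ip bgp neighbors
show bfd session should list every BFD session as Up, and the show ip bgp neighbors output should contain a 'BGP state = Established' line for each peer.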
The next step is to verify that the routes are exchanged through BGP as expected. You should
expect two next-hops for the NSX-T routes (in blue) and one for the default route (in green).
bigip1a.nsxt.bd.f5.com[0]#show ip bgp
BGP table version is 9, local router ID is 192.174.70.111
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, l - labeled S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
Finally, if using an NSX-T Edge Active-Active setup, verify that the NSX-T routes are ECMP routes by checking in the BIG-IP tmsh CLI with the following command (again on both BIG-IP units).
Figure 111 - Verifying NSX-T ECMP routes learned via dynamic routing (BGP).
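For reference, the BIG-IP routing table, including routes learned via BGP, can also be listed from the bash prompt with the following tmsh command (not necessarily the exact command shown in the figure):
tmsh show net route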
Create a forwarding type virtual server in the F5 BIG-IP. This virtual server will service outbound traffic flows from the NSX-T environment. The configuration of this virtual server is shown in Figure 112, where the parameters in red are mandatory.
Figure 112 - Creating a Forwarding Virtual Server for testing egress traffic.
Note that in the case of Topology A with the Active-Active setup, the two VLANs used for the NSX-T uplinks must be specified.
The optional Source Address parameter can be used to restrict the source addresses to which the VIP applies. It could be set to NSX-T's address range (10.106.0.0/16) to tighten security.
The optional Source Address Translation parameter can be used if you want to hide NSX-T's address range and NAT these addresses when going north of the F5 BIG-IPs.
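Putting this together, an equivalent tmsh sketch could look like the following; the VLAN name is a placeholder for the NSX-T uplink VLAN(s), the source restriction reflects the optional Source Address parameter just described, and the second command applies the optional SNAT:
create ltm virtual vs_outbound_forwarding destination 0.0.0.0:any mask any ip-forward profiles add { fastL4 } source 10.106.0.0/16 vlans-enabled vlans add { vlan_nsxt_uplink }
modify ltm virtual vs_outbound_forwarding source-address-translation { type automap }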
After applying this configuration, you should be able to reach the spine router's IP address, which is the default gateway of the F5 BIG-IPs. If the spine routers provide Internet connectivity at this stage, it should also be possible to ping an Internet address, as shown in the next figure.
Figure 113 - Ping test using spine router's IP address and the well-known Internet address
8.8.8.8 for checking egress connectivity.
In all the example topologies the same spine routers are used, so the IP address to use for this test is the same. If this test doesn't succeed, it is recommended to: 1) in the case of Topology A, check the advertised networks in the NSX-T Tier-1 Gateway; 2) verify the routing table in the NSX-T Tier-0 Gateway; 3) verify the routing table in the BIG-IPs; and 4) run tcpdump -nel -i 0.0 on the active BIG-IP to see what is actually happening.
If these tests don't succeed, it is recommended to: 1) check the advertised networks in the NSX-T Tier-1 Gateway; 2) verify the routing table in the NSX-T Tier-0 Gateway; 3) verify the routing table in the BIG-IPs; and 4) use NSX-T's tracing and packet capture tools.
The overall configuration of this webserver virtual server is shown next following Topology B.
The values for all topologies are shown at the end of the graphical example.
Figure 114 - Creating a Standard Virtual Server for testing Ingress services' connectivity.
Before clicking the Finished button to create the virtual server, it is necessary to attach a pool with the test VM as a member. This is done by clicking the '+' button shown next:
Figure 115 – Creating a new pool that will be used for the connectivity test with the Ingress
Virtual Server.
Then specify the pool as shown in the next picture. Please note that the default HTTP health monitor is used.
113
DESIGN GUIDE AND BEST PRACTICES
VMware NSX-T and F5 BIG-IP
Figure 116 - Specifying pool member details for the test Ingress Virtual Server.
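An equivalent tmsh sketch for this test, following the Topology B addressing (the VIP address, port and object names are illustrative), would be:
create ltm pool pool_test_web monitor http members add { 10.106.51.10:80 }
create ltm virtual vs_test_web destination 10.106.49.100:80 pool pool_test_web profiles add { http }
Depending on the topology, Source Address Translation may also be needed so that the return traffic from the pool member goes back through the BIG-IP.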
This pool health monitor already tests the connectivity from the BIG-IP to the web server when it is shown as green at the virtual server level, as in the next figure.
If the pool health monitor doesn't succeed, it is recommended to: 1) perform a ping test from the BIG-IP to the pool member; 2) verify that the web server is up and the socket is listening on the expected address; and 3) verify that there is no distributed firewall rule inhibiting connectivity between the Self IPs of the BIG-IPs used for sending the probes and the pool member.
Figure 117 - virtual server status after creating the webserver VS for Ingress traffic.
This 'green' status doesn't validate the end-to-end traffic path; for that, it is necessary to send an HTTP request from a host upstream of the spine router.
If this doesn't succeed, it is recommended to: 1) perform the HTTP request locally using the pool member's address (not 127.0.0.1); 2) perform a ping test to the BIG-IP's virtual server address; and 3) verify that the virtual server is enabled on the expected VLANs, which are the VLANs where connections to the virtual server are established, not the VLANs towards the pool members. Also, if there is a routing problem, enabling SNAT will often work around it, which would reveal that there is a routing misconfiguration.