Virtual Network Design Guide
TECHNICAL WHITE PAPER / JANUARY 2013
Table of Contents
Intended Audience
Overview
Components of the VMware Network Virtualization Solution
  vSphere Distributed Switch
  Logical Network (VXLAN)
  vCloud Networking and Security Edge
  vCloud Networking and Security Manager
  vCloud Director
VXLAN Technology Overview
  Standardization Effort
  Encapsulation
VXLAN Packet Flow
  Intra-VXLAN Packet Flow
  Inter-VXLAN Packet Flow
Network Virtualization Design Considerations
  Physical Network
    Network Topologies with L2 Configuration in the Access Layer
    Network Topologies with L3 Configuration in the Access Layer
  Logical Network
    Scenario 1 Greenfield Deployment: Logical Network with a Single Physical L2 Domain
    Scenario 2 Logical Network: Multiple Physical L2 Domains
    Scenario 3 Logical Network: Multiple Physical L2 Domains with vMotion
    Scenario 4 Logical Network: Stretched Clusters Across Two Datacenters
  Managing IP Addresses in Logical Networks
  Scaling Network Virtualization
Consumption Models
  In vCloud Director
  In vCloud Networking and Security Manager
  Using API
Troubleshooting and Monitoring
  Network Health Check
  VXLAN Connectivity Check: Unicast and Broadcast Tests
  Monitoring Logical Flows: IPFIX
  Port Mirroring
Conclusion
Intended Audience
This document is targeted toward virtualization and network architects interested in deploying VMware network virtualization solutions.
Overview
The IT industry has gained significant efficiency and flexibility as a direct result of virtualization. Organizations are moving toward a virtual datacenter (VDC) model, and flexibility, speed, scale and automation are central to their success. Although compute and memory resources are pooled and automated, networks and network services, such as security, have not kept pace. Traditional network and security operations not only reduce efficiency but also limit the ability of businesses to rapidly deploy, scale and protect applications. VMware vCloud Networking and Security offers a network virtualization solution to overcome these challenges.
Figure 1. Compute virtualization and network virtualization: applications run in virtual machines, decoupled from the underlying x86 environment by the server hypervisor; in the same way, workloads are attached to logical networks that are decoupled from the physical network.
Figure 1 draws an analogy between compute and network virtualization. Just as VMware vSphere abstracts compute capacity from the server hardware to create virtual pools of resources, network virtualization abstracts the network into a generalized pool of network capacity. The unified pool of network capacity can then be optimally segmented into logical networks directly attached to specific applications. Customers can create logical networks that span physical boundaries, optimizing compute resource utilization across clusters and pods. Unlike legacy architectures, logical networks can be scaled without reconfiguring the underlying physical hardware. Customers can also integrate network services, such as firewalls, VPNs and load balancers, and deliver them exactly where they are needed. Single-pane-of-glass management for all these services further reduces the cost and complexity of datacenter operations.
The VMware network virtualization solution addresses the following key needs in today's datacenter:
• Increasing compute utilization by pooling compute clusters
• Enabling noncontiguous cluster expansion
• Leveraging capacity across multiple racks in the datacenter
• Overcoming IP-addressing challenges when moving workloads
• Avoiding VLAN sprawl in large environments
• Enabling multitenancy at scale without encountering VLAN scale limitations
By adopting network virtualization, customers can effectively address these issues as well as realize the following business benefits:
• Drive faster provisioning of network and services, enabling business agility
• Improve infrastructure utilization, leading to significant CapEx savings
• Increase compute utilization by 30 percent by efficiently pooling compute resources
• Increase network utilization by 40 percent due to compute pooling and improved traffic management
• Decouple logical networks from physical networks, providing complete flexibility
• Isolate and segment network traffic at scale
• Provide multitenancy without increasing the administrative burden
• Automate repeatable network and service provisioning workflows, translating to 30 percent or more in OpEx savings on network operations alone
Figure 2. Components of the VMware network virtualization solution: vCloud Director (VCD), the VMware L3 Edge gateway and virtual machines on logical networks carried over the physical IP network.
vCloud Director
The vCloud Director virtual datacenter container is a highly automatable abstraction of the pooled virtual infrastructure. Network virtualization is fully integrated into vCloud Director workflows, enabling rapid self-service provisioning within the context of the application workload. vCloud Director uses vCloud Networking and Security Manager on the back end to provision network virtualization elements. vCloud Director is not part of vCloud Networking and Security; it is a separately purchased component. It is not mandatory for deploying a network virtualization solution, but it is highly recommended for achieving the complete operational flexibility and agility discussed previously. See the Consumption Models section for all available consumption choices for VMware network virtualization.
Encapsulation
VXLAN makes use of an encapsulation, or tunneling, method to carry the L2 overlay network traffic on top of L3 networks. A special kernel module running on the vSphere hypervisor host, along with a vmknic, acts as the virtual tunnel endpoint (VTEP). Each VTEP is assigned a unique IP address that is configured on the vmknic virtual adapter associated with the VTEP. The VTEP on the vSphere host handles all encapsulation and de-encapsulation of traffic for all virtual machines running on that host. A VTEP encapsulates the MAC and IP packets from the virtual machines with a VXLAN+UDP+IP header and sends the packet out as an IP unicast or multicast packet. The multicast mode is used for broadcast and unknown-destination MAC frames originated by the virtual machines that must be sent across the physical IP network.
Figure 3 shows the VXLAN frame format. The original packet between the virtual machines communicating on the same VXLAN segment is encapsulated with an outer Ethernet header, an outer IP header, an outer UDP header and a VXLAN header. The encapsulation is done by the source VTEP, and the packet is sent to the destination VTEP. At the destination VTEP, the packet is stripped of its outer headers and is passed on to the destination virtual machine if the segment ID in the packet is valid.
Figure 3. VXLAN Encapsulation: the original frame (inner MAC DA/SA, payload, CRC) is wrapped with a VXLAN header, an outer UDP header, outer IP DA/SA, an optional outer 802.1Q tag and outer MAC DA/SA.
The destination MAC address in the outer Ethernet header can be the MAC address of the destination VTEP or that of an intermediate L3 router. The outer IP header carries the corresponding source and destination VTEP IP addresses. The association of a virtual machine's MAC address to its VTEP's IP address is discovered via source learning. More details on the forwarding table are provided in the VXLAN Packet Flow section. The outer UDP header contains source port, destination port and checksum information. The source port of the UDP header is a hash of the inner Ethernet frame's header. This is done to provide a level of entropy for ECMP/load balancing of the virtual machine-to-virtual machine traffic across the VXLAN overlay. The VXLAN header is an 8-byte field: 8 flag bits, one of which indicates that the VXLAN Network Identifier (VNI) is valid; 24 bits for the VXLAN segment ID/VXLAN VNI; and the remaining 32 bits reserved.
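To make the header layout concrete, the following minimal Python sketch packs the 8-byte VXLAN header and derives the outer UDP source port from a hash of the inner Ethernet header, as described above. It is illustrative only, not VMware code; the constants and function names are chosen for this example.

    import struct
    import zlib

    VXLAN_UDP_PORT = 4789      # IANA-assigned destination UDP port for VXLAN
    FLAG_VNI_VALID = 0x08      # "I" flag: the 24-bit VNI field is valid

    def vxlan_header(vni):
        # 8 flag bits + 24 reserved bits, then 24-bit VNI + 8 reserved bits
        return struct.pack("!II", FLAG_VNI_VALID << 24, (vni & 0xFFFFFF) << 8)

    def outer_udp_source_port(inner_frame, low=49152, high=65535):
        # Hash the inner Ethernet header (DA + SA + EtherType, 14 bytes) so that
        # different flows get different outer source ports, giving the physical
        # network entropy for ECMP/load balancing across the overlay.
        return low + zlib.crc32(inner_frame[:14]) % (high - low + 1)

    def encapsulate(inner_frame, vni):
        # Build the UDP payload that a source VTEP would place inside the outer
        # IP packet addressed to the destination VTEP.
        vxlan = vxlan_header(vni) + inner_frame
        udp = struct.pack("!HHHH",
                          outer_udp_source_port(inner_frame),
                          VXLAN_UDP_PORT,
                          8 + len(vxlan),   # UDP length = header + payload
                          0)                # checksum is optional for VXLAN
        return udp + vxlan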
VXLAN Packet Flow
Figure 4. VXLAN packet flow: the original L2/IP payload of virtual machine MAC 1 is carried unchanged across the overlay to virtual machine MAC 2.
The next part of this section describes packet flow in the following VXLAN deployments:
1) Intra-VXLAN packet flow; that is, two virtual machines on the same logical L2 network
2) Inter-VXLAN packet flow; that is, two virtual machines on two different logical L2 networks
Intra-VXLAN Packet Flow
Figure 5. Intra-VXLAN packet flow: virtual machines 192.168.1.10 and 192.168.1.11 on VXLAN Blue (192.168.1.0/24), with the vCloud Networking and Security Edge gateway (internal interface 192.168.1.1, external interface 172.26.10.10) connecting the logical network to the external network 172.26.10.0/24 and the Internet; virtual machine-to-virtual machine and virtual machine-to-Internet communication paths are shown.
In the case of virtual machine-to-virtual machine communication on the same logical L2 network, the following two traffic flow examples illustrate possibilities that depend on where the virtual machines are deployed:
1) Both virtual machines are on the same vSphere host.
2) The virtual machines are on two different vSphere hosts.
In the first case, traffic remains on one vSphere host. In the second case, the virtual machine packet is encapsulated into a new UDP packet by the source VTEP on one vSphere host and is sent through the external IP network infrastructure to the destination VTEP on another vSphere host. In this process, the external switches and routers see nothing of the virtual machines' IP addresses (192.168.1.10/192.168.1.11) and MAC addresses, because those are carried inside the new encapsulation. In the scenario where a virtual machine communicates with the external world, shown by the green dotted line, it first sends the traffic to gateway IP address 192.168.1.1; the vCloud Networking and Security Edge gateway then sends unencapsulated traffic over its external-facing interface to the Internet.
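The forwarding decision behind these two cases can be summarized in a few lines of Python. This is a simplified sketch of the general logic, not the ESXi kernel module: the VTEP looks up the destination MAC address in the per-segment table built by source learning, delivers locally, unicasts the encapsulated frame to the learned remote VTEP, or floods it to the multicast group assigned to the segment when the destination is broadcast or unknown. The MAC addresses and VTEP IP in the example are made-up values.

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def forwarding_decision(dst_mac, local_macs, mac_to_vtep):
        # local_macs  : MACs of virtual machines running on this host (this segment)
        # mac_to_vtep : learned MAC -> remote VTEP IP mappings (this segment)
        if dst_mac in local_macs:
            return ("deliver-locally", None)          # traffic never leaves the host
        if dst_mac != BROADCAST and dst_mac in mac_to_vtep:
            return ("unicast-to-vtep", mac_to_vtep[dst_mac])
        # Broadcast or unknown destination MAC: flood via the multicast group
        # of this VXLAN segment so that every member VTEP receives a copy.
        return ("flood-to-multicast-group", None)

    # Example: the MAC of 192.168.1.11 has been learned behind VTEP 10.20.10.12.
    print(forwarding_decision("00:50:56:00:00:02",
                              local_macs={"00:50:56:00:00:01"},
                              mac_to_vtep={"00:50:56:00:00:02": "10.20.10.12"}))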
Inter-VXLAN Packet Flow
Figure 6. Inter-VXLAN packet flow: virtual machines 192.168.1.10 and 192.168.1.11 on VXLAN Blue (192.168.1.0/24, gateway 192.168.1.1) communicate with virtual machine 192.168.2.10 on VXLAN Orange (192.168.2.0/24, gateway 192.168.2.1) through the vCloud Networking and Security Edge gateway, whose external interface 172.26.10.10 connects to the external network 172.26.10.0/24 and the Internet.
Network Virtualization Design Considerations
Physical Network
The physical datacenter network varies across customer environments in terms of which network topology is used. Hierarchical network design provides the required high availability and scalability to the datacenter network. This section assumes that the reader has some background in various network topologies utilizing traditional L3 and L2 network configurations. Readers are encouraged to look at the design guides from their physical network vendor of choice. We will examine some common physical network topologies and how to enable network virtualization in them.
Network Topologies with L2 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L2 network. Aggregation switches are the VLAN termination points, as shown in Figure 7. Spanning Tree Protocol (STP) is traditionally used to avoid loops. Routing protocols run between the aggregation and core layers.
Figure 7. Network topology with L2 configuration in the access layer: a VDS deployed across Rack 1 and Rack 10, VLAN 100 spanning both racks, STP in the access layer and routing between the aggregation and core layers.
In such deployments, with a single subnet (VLAN 100) configured on different racks, enabling network virtualization based on VXLAN requires the following:
• Enable IGMP snooping on the L2 switches.
• Enable the IGMP querier feature on one of the L2/L3 switches in the aggregation layer.
• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate the VXLAN header. The recommended size is 1,550 bytes, or use jumbo frames (a breakdown of the 50-byte overhead follows this list).
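As a worked example of the 50-byte requirement, the overhead added to every frame is the inner Ethernet header (which now travels as payload) plus the new VXLAN, UDP and IP headers. The short calculation below assumes IPv4 transport and untagged inner frames:

    INNER_ETHERNET = 14   # inner MAC DA + SA + EtherType (FCS is recomputed)
    VXLAN_HEADER = 8
    OUTER_UDP = 8
    OUTER_IPV4 = 20       # 40 bytes if the transport network uses IPv6

    overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
    guest_mtu = 1500
    print(overhead)               # 50
    print(guest_mtu + overhead)   # 1550: minimum transport MTU; add 4 more bytes
                                  # if guest frames carry an 802.1Q tag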
To overcome the slower convergence times and lower link utilization limitations of STP, most datacenter networks today use technologies such as Cisco vPC/VSS (or MLAG, MCE, SMLT and so on). From the VXLAN design perspective, there is no change to the previously stated requirements.
When the physical topology has an access layer with multiple subnets configured (for example, VLAN 100 in Rack 1 and VLAN 200 in Rack 10 in Figure 8), the aggregation layer must have Protocol-Independent Multicast (PIM) enabled to ensure that multicast routes across multiple subnets are exchanged. All the VXLAN requirements previously discussed apply to leaf-and-spine datacenter architectures as well.
Network Topologies with L3 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L3 network. Access switches are the VLAN termination points, as shown in Figure 8. Key advantages of this design are better utilization of all the links using Equal-Cost Multipathing (ECMP) and the elimination of STP. From the VXLAN deployment perspective, the following requirements must be met:
• Enable PIM on the access switches (an illustration of the multicast group join that PIM must route follows this list).
• Ensure that no VLAN is configured during the VXLAN preparation process. This ensures that the VDS does not perform VLAN tagging, also called virtual switch tagging (VST) mode.
• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate the VXLAN header. The recommended size is 1,550 bytes, or use jumbo frames.
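For reference, the snippet below shows what joining the multicast group of a VXLAN segment means at the protocol level. It is a plain Python illustration using standard sockets, not how the ESXi VTEP is implemented, and the group address is a made-up example. The IP_ADD_MEMBERSHIP option causes the host to send an IGMP membership report; IGMP snooping constrains flooding within the L2 domain, and PIM builds the multicast routes between subnets.

    import socket
    import struct

    GROUP = "239.1.1.100"   # example multicast group mapped to a VXLAN segment ID
    PORT = 4789             # VXLAN UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Joining the group triggers an IGMP membership report on the wire.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))   # 0.0.0.0 = default interface
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # From this point, flooded (broadcast/unknown unicast) VXLAN traffic for the
    # segment arrives on this socket and can be de-encapsulated.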
Figure 8. Network topology with L3 configuration in the access layer: a VDS deployed across Rack 1 and Rack 10, PIM enabled on the access switches and ECMP between the access, aggregation and core layers.
Logical Network
After the physical network has been prepared, logical networks are deployed with VXLAN, with no ongoing changes to the physical network. The logical network design differs based on the customer's needs and the type of compute, network and storage components in the datacenter. The following aspects of the virtual infrastructure should be taken into account before deploying logical networks:
• A cluster is a collection of vSphere hosts and associated virtual machines with shared resources. One cluster can have a maximum of 32 vSphere hosts.
• A VDS is the datacenter-wide virtual switch that can span up to 500 hosts in the datacenter. The best practice is to use one VDS across all clusters to enable a simplified design and cluster-wide VMware vSphere vMotion migration.
• With VXLAN, a new traffic type is added to the vSphere host: VXLAN transport traffic. As a best practice, the new VXLAN traffic type should be isolated from other virtual infrastructure traffic types. This can be achieved by assigning a separate VLAN during the VXLAN preparation process.
• A VMware vSphere ESXi host's infrastructure traffic, including vMotion migration, VMware vSphere Fault Tolerance, management and so on, is not encapsulated and is independent of the VXLAN-based logical network. These traffic types should be isolated from each other, and enough bandwidth should be allocated to them. As of this release, VMware does not support placing infrastructure traffic such as vMotion migration on VXLAN-based virtual networks; only virtual machine traffic is supported on logical networks.
• To support vMotion migration of workloads between clusters, all clusters should have access to all storage resources.
• The link aggregation method configured on the vSphere hosts also impacts how VXLAN transport traffic traverses the host NICs. Teaming on the VDS VXLAN port group can be configured as failover, LACP active mode, LACP passive mode or static EtherChannel.
  a. When LACP or static EtherChannel is configured, the upstream physical switch must have an equivalent port channel or EtherChannel configured.
  b. If LACP is used, the physical switch must have 5-tuple hash distribution enabled.
  c. Virtual port ID and load-based teaming are not supported with VXLAN.
Next, the design in the following three scenarios is discussed:
• Greenfield deployment: a datacenter built from scratch.
• Brownfield deployment: an existing operational datacenter with virtualization.
• Stretched cluster: two datacenters separated by a short distance.
Scenario 1 Greenfield Deployment: Logical Network with a Single Physical L2 Domain
In a greenfield deployment, the recommended design is to have a single VDS stretching across all the compute clusters within the same vCenter Server. All hosts in the VDS are placed on the same L2 subnet (a single VLAN on all uplinks). In Figure 9, VLAN 10 spanning the racks is switched, not routed, creating a single L2 subnet. This single subnet serves as the VXLAN transport subnet, and each host receives an IP address from this subnet that is used in VXLAN encapsulation (a quick check of this addressing is sketched after Figure 9). Multicast and other requirements are met based on the physical network topology. Refer to the L2 configuration in the access layer shown in Figure 9 for details on multicast-related configuration.
Figure 9. Greenfield deployment: a single VDS and VXLAN fabric spanning vSphere hosts across racks, with L2 switching in the access layer.
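Because every host in this design must receive its VTEP address from the single transport subnet, a quick check of the planned addressing can catch configuration mistakes before VXLAN preparation. The sketch below uses Python's ipaddress module; the subnet and VTEP addresses are example values only.

    import ipaddress

    transport_subnet = ipaddress.ip_network("10.20.10.0/24")   # e.g., the VLAN 10 subnet
    vtep_ips = ["10.20.10.11", "10.20.10.12", "10.20.10.21", "10.20.10.22"]

    outside = [ip for ip in vtep_ips
               if ipaddress.ip_address(ip) not in transport_subnet]
    if outside:
        print("VTEPs outside the transport subnet:", outside)
    else:
        print("All VTEPs share the single L2 transport subnet")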
Keep in mind the following key points while deploying:
• The VDS VXLAN port group must be in the same VLAN across all hosts in all clusters. This configuration is handled through the vCloud Networking and Security Manager plug-in in vCenter Server. VDS, VLAN, teaming and MTU settings must be provided as part of the VXLAN configuration process.
• A VTEP IP address is assigned either via DHCP or statically via vCenter Server.
• Virtual machines communicating outside the logical network (to the Internet or to nonlogical networks within the datacenter) require a VXLAN gateway.
vMotion Boundary
The vMotion boundary, or the workload migration limit, in a VXLAN deployment is dictated by the following two criteria:
1) vMotion migration is limited to hosts managed by a single vCenter Server instance.
2) vMotion migration is not possible across two VDS.
In this scenario, where all the hosts are part of the same VDS, vMotion migration works across all hosts as long as the shared storage requirement is satisfied across the two clusters.
Scenario 2 Logical Network: Multiple Physical L2 Domains
In brownfield deployments, clusters are typically deployed with multiple VDS, one per cluster. Each VDS is on a different subnet, terminated on an aggregation router. Logical L2 networks can span across these subnet boundaries. The main difference compared with scenario 1 is that VXLAN transport traffic is routed instead of being switched within the same subnet. Multicast and ECMP requirements depend on the physical topology. Refer to the L3 configuration in the access layer shown in Figure 10 for details on multicast-related configuration.
Figure 10. Brownfield Deployment: Two VDS. vSphere hosts in two clusters, each cluster with its own VDS, connected over a routed VXLAN fabric.
Keep in mind the following key points while deploying:
• VTEPs in different subnets can route traffic to each other.
• A VTEP IP address is assigned either via DHCP or statically via vCenter Server.
• Applications running in virtual machines cannot detect the physical topology and remain in the same subnet.
• Virtual machines communicating outside the logical network (to the Internet or to nonlogical networks within the datacenter) require a VXLAN gateway. (See appendix 2 for packet flows.)
vMotion Boundary
In this two-VDS VXLAN deployment, the vMotion boundary is limited to one VDS. Workloads deployed on a logical L2 network cannot be moved to a host connected to a different VDS. However, if workload placement alone is the goal, this design enables the choice of any cluster for the deployment of a workload, even if the clusters are on different physical VLANs.
Scenario 3 Logical Network: Multiple Physical L2 Domains with vMotion
If vMotion migration across clusters is an important requirement, the following modified design should be used. Here, a single VDS spans multiple clusters, enabling vMotion migration across clusters. The following are some of the key differences in this design:
• No VLAN ID is configured during the VXLAN preparation. The VDS will not perform VLAN tagging for the VXLAN traffic going out on the uplinks (no VST).
• Dedicated uplinks are required on the hosts to carry untagged VXLAN traffic.
• The physical-switch ports where the host uplinks are connected are configured as access ports with the appropriate VLAN. For example, as shown in Figure 11, access switch ports of cluster 1 are configured with VLAN 10; those of cluster 2 are configured with VLAN 20.
Figure 11. Multiple physical L2 domains with vMotion: a single VDS and VXLAN fabric spanning cluster 1 (Rack 1, access VLAN 10) and cluster 2 (Rack 10, access VLAN 20), connected through a router.
Because the storage network is parallel to and independent of the logical network, it is assumed that both clusters can reach the shared storage. Standard vMotion migration distance limitations and the single vCenter Server requirement still apply. Because the moved virtual machine is still in the same logical L2 network, no IP readdressing is necessary, even though the physical hosts might be on different subnets.
Scenario 4 Logical Network: Stretched Clusters Across Two Datacenters
Stretched clusters offer the ability to balance workloads between two datacenters. This nondisruptive workload mobility enables migration of services between geographically adjacent sites. A stretched cluster design helps pool resources in two datacenters and enables workload mobility. Virtual machine-to-virtual machine traffic stays within the same logical L2 network, providing L2 adjacency across datacenters. The virtual machine-to-virtual machine traffic dynamics are the same as those previously described. In this section, we discuss the impact of this design on north-south traffic (a virtual machine communicating outside the logical L2 network), because that is the main difference compared with the previous scenarios.
Figure 12 shows two sites, site A and site B, with two hosts deployed in each site along with the storage and replication setup. Here, all hosts are managed by a single vCenter Server and are part of the same VDS. In general, for a stretched cluster design, the following requirements must be met:
• The two datacenters must be managed by one vCenter Server, because the VXLAN scope is limited to a single vCenter Server.
• vMotion support requires that the datacenters have a common stretched VDS (as in scenario 3). A multiple-VDS design, discussed in scenario 2, can also be used, but vMotion migration will not work.
Figure 12. Stretched Cluster: a vSphere Distributed Switch and VXLAN 5002 stretched across site A and site B over the WAN/IP network, with a virtual machine shown after vMotion from site A to site B; storage A (LUN read/write) is replicated over FC/IP to storage B (LUN read-only), and each site has its own Internet connectivity.
In this design, the vCloud Networking and Security Edge gateway is pinned to one of the datacenters (site A in this example). In the vCloud Networking and Security 5.1 release, each VXLAN segment can have only one vCloud Networking and Security Edge gateway. This has the following implications:
• All north-south traffic from the second datacenter (site B) in the same VXLAN (5002) must transit the vCloud Networking and Security Edge gateway in the first datacenter (site A). Also, when a virtual machine is moved from site A to site B, all north-south traffic returns to site A before reaching the Internet or other physical networks in the datacenter.
• Storage must support a campus cluster configuration.
These implications raise obvious concerns regarding bandwidth consumption and latency, so an active-active multidatacenter design is not recommended. This design is mainly targeted toward the following scenarios:
• Datacenter migrations that require no IP address changes on the virtual machines. After the migration has been completed, the vCloud Networking and Security Edge gateway can be moved to the new datacenter, requiring a change in external IP addresses on the vCloud Networking and Security Edge only. If all virtual machines have public IP addresses and are not behind vCloud Networking and Security Edge gateway network address translation (NAT), more changes are needed.
• Deployments that require limited north-south traffic. Because virtual machine-to-virtual machine traffic does not cross the vCloud Networking and Security Edge gateway, the stretched cluster limitation does not apply.
These scenarios also benefit from elastic pooling of resources and initial workload placement flexibility. If virtual machines are in different VXLANs, the limitations do not apply.
Managing IP Addresses in Logical Networks
Figure 13. NAT and DHCP Configuration on vCloud Networking and Security Edge Gateway: virtual machines on VXLAN 5000 (192.168.1.0/24, gateway 192.168.1.1), VXLAN 5001 (192.168.2.0/24, gateway 192.168.2.1) and VXLAN 5002 (192.168.3.0/24, gateway 192.168.3.1) connect through the Edge gateway, which provides standard NAT configuration and DHCP services and has external IP address 172.26.10.1.
The following are some configuration details of the vCloud Networking and Security Edge gateway:
• Blue, green and purple virtual wires (VXLAN segments) are associated with separate port groups on a VDS. Internal interfaces of the vCloud Networking and Security Edge gateway connect to these port groups.
• The vCloud Networking and Security Edge gateway interface connected to the blue virtual wire is configured with IP 192.168.1.1.
• Enable the DHCP service on this internal interface of vCloud Networking and Security Edge by providing a pool of IP addresses, for example, 192.168.1.10 to 192.168.1.50. All the virtual machines connected to the blue virtual wire receive an IP address, in the same subnet, from the DHCP service configured on the Edge.
• The NAT configuration on the external interface of the vCloud Networking and Security Edge gateway allows virtual machines on a virtual wire to communicate with devices on the external network. This communication is allowed only when the requests are initiated by the virtual machines connected to the internal interface of the vCloud Networking and Security Edge (a simplified illustration of this stateful behavior follows Figure 14).
In situations where overlapping IP and MAC address support is required, one vCloud Networking and Security Edge gateway per tenant is recommended. Figure 14 shows an overlapping IP address deployment with two tenants and two separate vCloud Networking and Security Edge gateways.
Figure 14. Overlapping IP addresses with two tenants: Tenant 1 virtual machines (10.10.1.10 and 10.10.1.11) on VXLAN 5000 and a Tenant 2 virtual machine (10.10.1.10) on VXLAN 5001 both use subnet 10.10.1.0/24 with gateway 10.10.1.1, each tenant behind its own vCloud Networking and Security Edge gateway.
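The NAT behavior described above, where only connections initiated by virtual machines on the internal interface are allowed, is that of a stateful source NAT. The toy model below illustrates the rule only; it is not the vCloud Networking and Security Edge implementation, and the class, ports and return values are example constructs.

    import itertools

    class SimpleSnat:
        """Toy stateful SNAT: outbound flows create state; unsolicited inbound is dropped."""
        def __init__(self, external_ip):
            self.external_ip = external_ip
            self.ports = itertools.count(50000)
            self.table = {}            # (external IP, port) -> (VM IP, VM port)

        def outbound(self, vm_ip, vm_port):
            ext_port = next(self.ports)
            self.table[(self.external_ip, ext_port)] = (vm_ip, vm_port)
            return self.external_ip, ext_port      # what the external network sees

        def inbound(self, dst_ip, dst_port):
            # Accepted only if an outbound flow already created the mapping.
            return self.table.get((dst_ip, dst_port))

    edge = SimpleSnat("172.26.10.1")
    print(edge.outbound("192.168.1.10", 33000))    # allowed; creates a translation
    print(edge.inbound("172.26.10.1", 50000))      # maps back to the internal VM
    print(edge.inbound("172.26.10.1", 60000))      # None: unsolicited, so dropped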
Without Network Address Translation
Customers who are not limited by routable IP addresses, have virtual machines with public IP addresses or do not want to deploy NAT can use static routing on vCloud Networking and Security Edge.
Figure 15. Static routing on the vCloud Networking and Security Edge gateway: virtual machines 172.26.1.10 and 172.26.1.11 on VXLAN 5000 (172.26.1.0/24), 172.26.3.10 on VXLAN 5002 (172.26.3.0/24) and 172.26.2.10 on VXLAN 5001 (172.26.2.0/24, gateway 172.26.2.1) connect through the Edge gateway.
In the deployment shown in Figure 15, the vCloud Networking and Security Edge gateway is not configured with the DHCP and NAT services. However, static routes are set up between the different interfaces of the vCloud Networking and Security Edge gateway.
Other Network Services
In a multitenant environment, the vCloud Networking and Security Edge firewall can also be used to segment intertenant and intratenant traffic. The vCloud Networking and Security Edge load balancer can be used for load balancing external-to-internal Web traffic, for example, when multiple Web servers are deployed on the logical network. Static routes must be configured on the upstream router to properly route inbound traffic to the vCloud Networking and Security Edge external interface. vCloud Networking and Security Edge also provides DNS relay functionality to resolve domain names. The DNS relay configuration should point to an existing DNS server in the physical network. Alternatively, a DNS server can be deployed in the logical network itself.
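To show what the static routing piece amounts to, the following sketch performs the longest-prefix-match lookup an upstream router would use to steer inbound traffic for the logical networks toward the Edge external interface. The prefixes and next-hop addresses are example values in the spirit of Figure 15, not a prescribed configuration.

    import ipaddress

    # Static routes on the upstream router: logical-network prefixes point at the
    # Edge external interface; everything else follows the default route.
    static_routes = {
        "172.26.1.0/24": "172.26.10.10",   # VXLAN 5000 via the Edge external IP
        "172.26.2.0/24": "172.26.10.10",   # VXLAN 5001
        "172.26.3.0/24": "172.26.10.10",   # VXLAN 5002
        "0.0.0.0/0":     "172.26.10.1",    # default route toward the core
    }

    def next_hop(dst_ip, routes):
        dst = ipaddress.ip_address(dst_ip)
        matches = [(ipaddress.ip_network(prefix), nh)
                   for prefix, nh in routes.items()
                   if dst in ipaddress.ip_network(prefix)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins

    print(next_hop("172.26.2.10", static_routes))   # 172.26.10.10 (via the Edge)
    print(next_hop("8.8.8.8", static_routes))       # 172.26.10.1 (default route)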
2) vCloud Networking and Security Edge gateway: Each vCloud Networking and Security Edge gateway can have a maximum of 10 interfaces, each of which can be configured to connect to an internal or external network. The number of logical networks requiring gateway services determines the number of gateway instances that must be deployed, based on the 10-interfaces-per-gateway maximum. For example, if one interface per gateway is connected to an external network (leaving 9 for internal networks), the number of gateway instances required for 90 logical L2 networks would be 90/9, that is, 10 vCloud Networking and Security Edge gateway devices. The gateway is available in three different sizes, based on capacity.
3) VXLAN traffic: The planned virtual machine consolidation ratio should take into consideration the amount of virtual machine traffic that the VTEP must handle. Meet the bandwidth requirements for VXLAN traffic by assigning sufficient NICs to it. To optimally utilize the uplinks, use link aggregation methods on the physical switches.
4) Multicast: Each VXLAN logical network is uniquely identified by the combination of a number called the segment ID (determined from a range defined by the user) and the configured multicast group. The multicast group-to-VXLAN segment ID mapping is handled by vCloud Networking and Security Manager. There is no need for a one-to-one mapping between segment IDs and multicast groups. When the number of multicast groups is limited, vCloud Networking and Security Manager maps multiple logical networks (segment IDs) to one multicast group.
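The interface arithmetic in item 2 and the many-to-one mapping in item 4 reduce to simple calculations, sketched below. The modulo mapping is only an illustration of reusing a limited pool of groups; the actual mapping is handled internally by vCloud Networking and Security Manager and may differ.

    import math

    def gateways_needed(logical_networks, interfaces_per_gateway=10,
                        external_interfaces=1):
        internal = interfaces_per_gateway - external_interfaces
        return math.ceil(logical_networks / internal)

    print(gateways_needed(90))    # 90 / 9 = 10 Edge gateway instances

    def multicast_group_for(segment_id, groups):
        # Many-to-one reuse when fewer multicast groups exist than segment IDs.
        return groups[segment_id % len(groups)]

    groups = ["239.1.1.1", "239.1.1.2", "239.1.1.3"]   # example group pool
    for segment_id in (5000, 5001, 5002, 5003):
        print(segment_id, "->", multicast_group_for(segment_id, groups))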
Consumption Models
After the VXLAN configuration has been completed, customers can create and consume logical L2 networks on demand. Depending on the type of vCloud Networking and Security bundle purchased, they have the following three options:
1) Use the vCloud Director interface.
2) Use the vCloud Networking and Security Manager interface.
3) Use the REST APIs offered by vCloud Networking and Security products.
In vCloud Director
vCloud Director creates a VXLAN network pool implicitly for each provider VDC backed by VXLAN-prepared clusters. The total number of logical networks that can be created using a VXLAN network pool is determined by the configuration at the time of VXLAN fabric preparation. A cloud administrator can in turn distribute this total number to the various organization VDCs backed by the provider VDC. The quota allocated to an organization VDC determines the number of logical networks (organization VDC/VMware vSphere vApp networks) backed by VXLAN that can be created in that organization VDC.
Using API
In addition to vCloud Director and vCloud Networking and Security Manager, vCloud Networking and Security components can be managed using APIs provided by VMware. For detailed information on how to use the APIs, refer to the vCloud Networking and Security 5.1 API Programming Guide at https://www.vmware.com/pdf/vshield_51_api.pdf.
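The general calling pattern from a script looks like the following sketch, which uses the Python requests library with HTTP basic authentication. The resource path shown is a placeholder for illustration; consult the API Programming Guide referenced above for the actual endpoints and payloads, and enable certificate verification outside of lab environments.

    import requests

    VSM = "https://vsm.example.com"    # vCloud Networking and Security Manager address
    AUTH = ("admin", "password")         # placeholder credentials

    # Placeholder resource path, shown only to illustrate the calling pattern;
    # the real paths and request bodies are documented in the programming guide.
    response = requests.get(VSM + "/api/2.0/vdn/scopes",
                            auth=AUTH,
                            headers={"Accept": "application/xml"},
                            verify=False)   # self-signed certificates are common in labs
    response.raise_for_status()
    print(response.text)                    # XML description of the returned resources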
Port Mirroring
VDS provides multiple standard port mirroring features such as SPAN, RSPAN and ERSPAN that help in detailed traffic analysis.
Conclusion
The VMware network virtualization solution addresses the current challenges with the physical network infrastructure and brings flexibility, agility and scale through VXLAN-based logical networks. Along with the ability to create on-demand logical networks using VXLAN, the vCloud Networking and Security Edge gateway helps customers deploy various logical network services such as firewall, DHCP, NAT and load balancing on these networks. The operational tools provided as part of the solution help in the troubleshooting and monitoring of these overlay networks.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-NETWORK-VIRT-GUIDE-USLET-101 Docsource: OIC - 12VM008.07