VXLAN Routing with EVPN
Introduction
Modern-day data centers are moving from layer 2 to layer 3 architectures to take advantage
of a vast array of benefits. A layer 3 infrastructure provides smaller failure domains, less
spanning tree troubleshooting, and fewer proprietary protocols such as MLAG, as well as
increased redundancy and resilience compared to a strictly layer 2 data center.
However, some distributed applications, load balancers, and storage appliances still require
layer 2 connectivity to communicate. Therefore, VXLAN was born to run as an overlay on a
robust layer 3 data center to offer the best of both worlds: maintaining layer 2 connectivity
between racks while deploying a layer 3 infrastructure. VXLAN also offers additional
benefits over layer 2 VLANs such as scale and flexibility.
To provide the layer 2 connectivity, VLANs are bridged into VXLAN tunnels, which run over
the IP infrastructure. Since these packets are bridged, they cannot reach outside their own
VLAN/VXLAN without some type of routing. However, many server applications
require both layer 2 connectivity to the same VLAN on a different rack and layer 3
connectivity to other VLANs/VXLANs within the data center on a different rack (or even
outside the data center altogether). For example, in Figure 1 below, the green application
only communicates via layer 2, while the orange application needs access to the Internet
or other VLANs. Short of static configuration workarounds, the front-end server cannot
reach the Internet without some type of VXLAN routing.
Figure 1 - Two racks connected over a layer 3 IP fabric, with VLAN 10 stretched between them and a connection to the Internet
In order to reach each other or the Internet, these individual tunnels must be integrated with
the larger routing fabric. Older ASICs in many data center switches, such as the Broadcom
Trident II, do not support the dual lookup that VXLAN routing requires: routing first on the
outer header IP address and then again on the decapsulated packet.
This paper is meant as an addendum to the paper BGP EVPN for VXLAN, which
describes using EVPN for VLAN/VXLAN bridging. Deploying BGP EVPN as a layer 3
routing protocol for VXLAN as well dramatically simplifies the VXLAN routing deployment by
using the same protocol, BGP, for the underlay, for bridging between VLANs/VXLANs, and
for routing between VXLAN tunnels and to outside the data center. This paper provides a
VXLAN routing overview, describes some of the architectures and aspects of VXLAN routing
with EVPN, and provides some typical design examples along with configuration snippets.
In the past, customers with older generation silicon performed VXLAN routing using an
external router and/or a hyperloop. Newer generation ASICs such as the Broadcom Trident
II+, Trident III, Tomahawk¹, and Mellanox Spectrum support VXLAN routing directly on the
switch. For example, Figure 2 shows Host A on VLAN A communicating with a host on
VLAN B located on a different rack.
Figure 2 - VXLAN routing on the switch: traffic from Host A on VLAN A is routed between VNIs and bridged toward a host on VLAN B on a different rack
VXLAN routing internal to the switch allows for much more efficient network architectures.
One of two different architectures can be deployed to route across VXLAN boundaries
while still supporting VXLAN bridging where applicable. The two architectures are
discussed in the next section.
1 Tomahawk provides internal VXLAN routing via an internal loopback
CENTRALIZED ARCHITECTURE
In a centralized architecture, only one or a pair of routers provide access between VXLANs.
All inter-VLAN traffic or traffic intended for outside the local data center needs to be VXLAN
tunneled to the centralized router and back to the destination. This "tromboning" of traffic
means additional east-west traffic will occur in the data center to support VXLAN routing.
For each VLAN/VXLAN pair in the local network, the centralized router must have a VNI
and a switch virtual interface (SVI) that is used as the VLAN's default gateway. The default
gateway IP and MAC addresses are advertised via a BGP EVPN extended community to all
the leaf switches. See Figure 3.
Figure 3 - Centralized VXLAN routing architecture: the leafs reach the centralized routers and the Internet across a layer 3 network with eBGP (IPv4 unicast) peering to Spine01 and Spine02
For example, Server01 on VLAN A wants to communicate with Server04 on VLAN B. Traffic
exits Server01 and enters Leaf01 with a destination IP/MAC of the SVI on the centralized
router (the default gateway). Leaf01 bridges and encapsulates the traffic in a VXLAN tunnel
headed for the default gateway. The centralized router decapsulates the packet and routes
the packet to SVI B, which is seen as a connected route. The router then encapsulates the
packet in a new VXLAN header and sends it to Leaf02 on the destination VXLAN. Leaf02
decapsulates and bridges the packet to Server04.
DISTRIBUTED ARCHITECTURE
A distributed architecture involves configuring an SVI, and thus enabling VXLAN routing,
on each top of rack (ToR) leaf switch. The VXLAN routing therefore occurs closest to the
host, keeping traffic local, which provides more efficient routing and lower latency than the
centralized architecture.
Figure 4 - Distributed VXLAN routing architecture: routing is performed on each leaf, with eBGP (IPv4 unicast) peering to Spine01 and Spine02 in the layer 3 network
EVPN ROUTE TYPES

This paper covers the two route types directly used for VXLAN routing, type 2 and type 5.
Cumulus Linux also supports type 3 routes, which are used for VTEP discovery.
Figure 5 - EVPN type 2 route fields: Route Distinguisher, Ethernet Tag ID, MAC Address, IP Address Length, IP Address, Label 1 (L2VNI), Label 2 (L3VNI)
As seen in Figure 5, a type 2 route couples an IP address and a MAC address together.
The MAC address in the advertisement lets every leaf know the location of every host,
providing layer 2 reachability while eliminating data plane learning. Adding the IP address
to the advertisement for each MAC address allows each leaf switch to route as well as
perform ARP suppression, which reduces broadcast traffic in the network. Type 2 routes
also support inter-data center routing with stretched VXLAN tunnels (VXLAN tunnels
between data centers or PODs).
If external layer 3 connectivity is required, a separate route type, type 5, is used. Type 5 is
discussed below.
Figure 6 - EVPN type 5 route fields: Route Distinguisher, Ethernet Tag ID, IP Prefix Length, IP Prefix, GW IP Address, Label (L3VNI)
As seen above in Figure 6, the route advertisement contains an IP prefix and prefix
length, but no MAC address. The type 5 route is used with a distributed architecture to
advertise external, inter-data center and/or inter-POD routes to all local leafs within a VRF,
and it can only be used with an L3VNI. More information about the L3VNI can be found in
the Symmetric IRB Model section of this paper.
Figure 7 depicts a setup where type 5 EVPN routes are used for external routing. In this
case, we are advertising the 172.16.0.0/16 route from the Internet router towards the border
leafs in the BGP IPv4 address family. It is inserted into vrf1 and sent to all the VTEPs in the
network via a type 5 EVPN route. A type 5 route may also be used for inter-POD or inter-
data center routing.
Figure 7 - External routing with EVPN type 5 routes: the Internet router advertises 172.16.0.0/16 over eBGP (IPv4 unicast) to the border leaf (AS65041), which places the route into VRF1 and advertises it over the L3VNI to Leaf01 (AS65011) in Rack 1 and Leaf02 (AS65012) in Rack 2, each carrying L2 VNIs, the L3VNI and VRF1; the spines are in AS65020
EVPN supports two models for VXLAN routing: asymmetric integrated routing and bridging
(IRB) and symmetric IRB. Each model is useful in certain situations and has unique
characteristics. While some vendors support only one model or the other, the Cumulus
Linux EVPN VXLAN routing implementation can support either model. Supporting both
provides interoperability with other vendors as well as a choice of which model is best
for your data center. This section discusses the benefits and operation of each model used
for VXLAN integrated routing and bridging.
ASYMMETRIC IRB MODEL

Consider the scenario in Figure 8. Host A on VLAN A wants to communicate with Host C
on VLAN A. Host A knows Host C is on its same subnet (using the destination address and
its own IP address and mask to determine) so it initiates the communication by ARPing for
Host C. Since Host C has already sent a frame in the past, Leaf02 learned its MAC address
and previously communicated it to Leaf01. Leaf01 responds to Host A's ARP request, telling
Host A Host C's MAC address. Host A then sends its packets directly to Host C, bridging
across the orange VXLAN tunnel.
Host A now wants to communicate with Host B, which is located on a different VLAN and
thus reachable via a different VNI. Since the destination is on a different subnet from Host A,
Host A sends the frame to its default gateway, which is Leaf01. The Leaf01 southbound
interface towards Host A is configured with an anycast IP address, which is discussed
more deeply in the architectures section. Leaf01 recognizes that the destination MAC
address is its own and uses the routing table to route the packet to the Green VNI. Leaf01
then tunnels the frame in the Green VNI to Leaf02. Leaf02 removes the VXLAN header from
the frame, and bridges the frame to Host B.
The return traffic behaves similarly. Host B sends a frame to Leaf02, which recognizes its
own destination MAC address and routes the packet to the Orange VNI. The packet is
tunneled within the Orange VNI to Leaf01. Leaf01 removes the VXLAN header from the
frame and bridges it to Host A.
With the asymmetric model, all the required source and destination VNIs (that is, Orange and
Green) must be present on each leaf, even if that leaf doesn't have a host in that associated
VLAN in its rack. In many instances, this is needed for VM mobility anyway. As a result, all
leafs would be required to hold all routes and all MAC addresses that communicate with each
other. Deploying an asymmetric model is a simple solution as no additional VNIs need to be
configured and fewer routing hops occur to communicate between VXLANs. However, it does
not scale as well as the symmetric model covered below.
In the case of multitenancy, each set of VLANs can be placed into a separate VRF, with
routing occurring between the VLANs within each VRF.
SYMMETRIC IRB MODEL

Figure 9 - Symmetric model: routing and bridging occur on both the ingress and egress leaf. Host A (VLAN 24, Rack 1) reaches Host B (VLAN 13, Rack 2) via the L3VNI between Leaf01 and Leaf02, while the green VNI 13 tunnel carries layer 2 only traffic (e.g. to Rack 3) and VNI 24 stretches VLAN 24 to Host C (VLAN 24, Rack 2)
In the symmetric model, Host A again sends traffic destined for Host B to its default
gateway on Leaf01. Leaf01 recognizes that the destination MAC address is its own and
uses the routing table to route the packet to the egress leaf (Leaf02) over the L3VNI. The
MAC address of Leaf02 is communicated to Leaf01 via a BGP extended community. The
VXLAN-encapsulated packet has the egress leaf's MAC address as the destination MAC
address and the L3VNI as the VNI.
Leaf02 performs VXLAN decapsulation, recognizes that the destination MAC address is its
own, and routes the packet to the destination VLAN to reach the destination host. The
return traffic is routed similarly over the same L3VNI. Routing and bridging happen on both
the ingress leaf and the egress leaf.
With the symmetric model, the leaf switches only need to host the VNIs/VLANs that are
located on their rack, as well as the L3VNI and its associated VLAN, since the ingress leaf
switch doesn't route directly to the destination VNI. The ability to host only the local VNIs
(plus one extra) provides additional scale over the asymmetric model. However, the data
plane traffic is more complex, as an extra routing hop occurs and an extra VXLAN tunnel
and VLAN are required in the network.
Multitenancy requires one L3VNI per VRF, and all switches participating in that VRF must be
configured with the same L3VNI.
DESIGN EXAMPLES

The three solutions in this section use the Cumulus Linux Reference Architecture, as
discussed in each section. Although the examples are shown with either the default VRF or
a single data plane VRF, the same procedures can be used with multiple VRFs as well.
Although not depicted in the diagrams, all switches and hosts are connected to an out-
of-band management network. Management VRF is configured on the switches and
eth0 (connected to the out-of-band management network) is located in the management
VRF. Although an out-of-band management network connected to the switch via the
management VRF is always recommended, it is not required to deploy these solutions.
We set up an eBGP unnumbered underlay between the leafs, the spine and the exit leafs
in all models outlined in this paper. Using eBGP as the underlay routing protocol provides
a robust scalable infrastructure for the layer 2 overlay and sets the stage to also use BGP
EVPN as the overlay routing protocol.
We configure VXLAN tunnels as the overlay, providing layer 2 connectivity over the layer
3 infrastructure. BGP EVPN is used as the VXLAN routing and bridging control plane and
provides routing between VXLAN tunnels on the local, directly connected leaf.
For each of these examples, we deploy four servers, two in VLAN13 and two in VLAN24. We
run MLAG on the leaf switches to provide redundancy to the hosts. We deploy a border leaf
connected to an Internet router. For purposes of these examples, the following addresses are
used. Note that not all the addresses depicted in Table 1 below apply to all scenarios.
Table 1 - Addresses

HOSTNAME  LOOPBACK      SVI ADDRESS (ANYCAST GATEWAY)   VLAN                     VNI             BGP AS
Exit01    10.0.0.41/32  N/A                             4001 (mapped to L3VNI)   104001 (L3VNI)  65041
Exit02    10.0.0.42/32  N/A                             4001 (mapped to L3VNI)   104001 (L3VNI)  65042
Leaf01    10.0.0.11/32  10.1.3.11/24 (10.1.3.1)         13 (data)                13              65011
                        10.2.4.11/24 (10.2.4.1)         24 (data)                24
                        N/A                             4001                     104001 (L3VNI)
Leaf02    10.0.0.12/32  10.1.3.12/24 (10.1.3.1)         13 (data)                13              65012
                        10.2.4.12/24 (10.2.4.1)         24 (data)                24
                        N/A                             4001                     104001 (L3VNI)
Leaf03    10.0.0.13/32  10.1.3.13/24 (10.1.3.1)         13 (data)                13              65013
                        10.2.4.13/24 (10.2.4.1)         24 (data)                24
                        N/A                             4001                     104001 (L3VNI)
Leaf04    10.0.0.14/32  10.1.3.14/24 (10.1.3.1)         13 (data)                13              65014
                        10.2.4.14/24 (10.2.4.1)         24 (data)                24
                        N/A                             4001                     104001 (L3VNI)
In all scenarios, the spine switches are configured the same way. They are configured with
eBGP unnumbered and the EVPN address family, peering with each leaf and any border/exit
routers. A sample configuration is shown below in Figure 10, where the loopback is 10.0.0.21
and the BGP AS is 65020. Switch ports swp1-4 connect to the leaf switches, and swp29 and
swp30 connect to the exit switches.
CENTRALIZED ROUTING EXAMPLE

Figure 11 - Centralized routing example topology: the leafs and exit switches connect to the spines (swp1-4 and swp29-30 on the spine side, swp51-52 on the leaf side), and the exit switches connect to the Internet router via swp44
Each of the two racks hosts two servers and two leaf switches. The servers on a rack are
on separate VLANs from each other. One server is in VLAN 13 and the other server is in
VLAN 24. The leaf switches are configured with MLAG and each server is configured with a
bond interface. However, each member of the bond is connected to a different leaf switch
in the same rack.
Two VNIs and an anycast VTEP exist on each leaf switch, VNI 13 and VNI 24. An anycast
address is used for the VTEP to allow for VXLAN redundancy with the MLAG. Only bridging
occurs on the leaf switches in this design.
Each leaf is connected to each spine. The spines are configured for eBGP unnumbered
as the underlay routing protocol, and with BGP EVPN for the overlay. The spines are
connected to the centralized routers.
The centralized (exit) routers also have the same VNIs configured as the leafs, along with an
anycast VTEP. Since they host the SVIs and thus serve as the default gateway for all the
VNIs, the centralized routers also advertise themselves as the default gateway by using a BGP
extended community with a type 2 EVPN route. Since we are running MLAG with VRR
between the centralized routers, the virtual IP and MAC addresses are advertised as well as
the physical addresses. The centralized routers peer with the spines using the IPv4 unicast
address family for the underlay and the L2VPN EVPN address family for the overlay. They
also peer with the Internet router using the IPv4 unicast address family.
This design can be expanded to accommodate many more servers and racks.
A sample configuration snippet for a leaf switch is shown in Figure 12. The MLAG, peerlink
and bond interface towards the servers are left off for brevity.
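As a minimal sketch of what such a bridging-only leaf looks like (the anycast VTEP address 10.0.0.112 and the uplink ports swp51-52 are assumptions, and the exact snippet in Figure 12 may differ):

# /etc/network/interfaces on Leaf01 (MLAG peerlink and server bonds omitted)
auto lo
iface lo inet loopback
    address 10.0.0.11/32
    # anycast VTEP address shared with the MLAG peer (assumed value)
    clagd-vxlan-anycast-ip 10.0.0.112

auto vni13
iface vni13
    vxlan-id 13
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 13
    bridge-learning off

auto vni24
iface vni24
    vxlan-id 24
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 24
    bridge-learning off

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports vni13 vni24
    bridge-vids 13 24

# FRR configuration
router bgp 65011
 bgp router-id 10.0.0.11
 neighbor swp51 interface remote-as external
 neighbor swp52 interface remote-as external
 address-family ipv4 unicast
  network 10.0.0.11/32
 exit-address-family
 address-family l2vpn evpn
  neighbor swp51 activate
  neighbor swp52 activate
  advertise-all-vni
 exit-address-family

Note that no SVIs are configured on the leaf in this design; the leaf only bridges into the VXLAN tunnels.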
The centralized switches are configured with a VTEP and all the same VNIs. An SVI is also
configured on each VLAN on the centralized switches.
BGP EVPN is configured to send out the default gateway community to all the leaf switches,
as highlighted below. The tenant VLANs are advertised via the BGP network command,
and a route-map is used to send only the tenant VLAN subnets to the Internet router.
Alternatively, redistribute connected could be used with a route-map. A route-map is
also used to send only the loopback and VTEP anycast addresses of the exit routers to the
underlay. The MLAG configurations were left off for brevity. Figure 13 shows a configuration
snippet on a centralized router.
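A minimal sketch of the relevant pieces on Exit01 might look like the following; the physical SVI addresses, uplink port names and route-map/prefix-list names are assumptions, the anycast VTEP 10.0.0.142 is taken from the next hops seen later in Figure 15, and the exact snippet in Figure 13 may differ. The advertise-default-gw statement is what sends the default gateway extended community:

# /etc/network/interfaces on Exit01 (MLAG, VNIs and bridge configured as on the leafs)
auto lo
iface lo inet loopback
    address 10.0.0.41/32
    clagd-vxlan-anycast-ip 10.0.0.142

auto vlan13
iface vlan13
    # physical SVI address is an assumption; 10.1.3.1 is the VRR (virtual) gateway
    address 10.1.3.41/24
    address-virtual 44:39:39:ff:00:13 10.1.3.1/24
    vlan-id 13
    vlan-raw-device bridge

auto vlan24
iface vlan24
    address 10.2.4.41/24
    address-virtual 44:39:39:ff:00:24 10.2.4.1/24
    vlan-id 24
    vlan-raw-device bridge

# FRR configuration
router bgp 65041
 bgp router-id 10.0.0.41
 neighbor swp51 interface remote-as external
 neighbor swp52 interface remote-as external
 neighbor swp44 interface remote-as external
 address-family ipv4 unicast
  network 10.0.0.41/32
  network 10.0.0.142/32
  network 10.1.3.0/24
  network 10.2.4.0/24
  neighbor swp44 route-map TENANTS-ONLY out
  neighbor swp51 route-map UNDERLAY-ONLY out
  neighbor swp52 route-map UNDERLAY-ONLY out
 exit-address-family
 address-family l2vpn evpn
  neighbor swp51 activate
  neighbor swp52 activate
  advertise-all-vni
  advertise-default-gw
 exit-address-family
!
ip prefix-list TENANTS seq 10 permit 10.1.3.0/24
ip prefix-list TENANTS seq 20 permit 10.2.4.0/24
ip prefix-list UNDERLAY seq 10 permit 10.0.0.41/32
ip prefix-list UNDERLAY seq 20 permit 10.0.0.142/32
!
route-map TENANTS-ONLY permit 10
 match ip address prefix-list TENANTS
!
route-map UNDERLAY-ONLY permit 10
 match ip address prefix-list UNDERLAY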
The configuration of the Internet router is very basic: we configure BGP unnumbered
and advertise the loopback (the BGP router ID) and the default route. Figure 14 shows the
configuration snippet. The management VRF was left off for brevity.
interface lo
address 10.0.0.253/32
interface eth0
address 192.168.0.253/24
interface swp1
ipv6 nd ra-interval 10
no ipv6 nd suppress-ra
router bgp 25253
bgp router-id 10.0.0.253
coalesce-time 1000
bgp bestpath as-path multipath-relax
neighbor swp1 interface remote-as external
neighbor swp2 interface remote-as external
address-family ipv4 unicast
network 10.0.0.253/32
neighbor swp1 default-originate
neighbor swp2 default-originate
exit-address-family
!
line vty
Moving back to the leaf, we can examine the EVPN route table. We see that the addresses
of the virtual default gateways, 10.1.3.1 and 10.2.4.1, are installed as type 2 routes as
expected; each has two ECMP paths, one through each spine switch. The addresses of
the physical interfaces are present as well. Since the centralized architecture only bridges at
the ToRs, EVPN sends only type 2 and type 3 routes to the leafs. The EVPN routes in this case
are used for bridging only and are shown in Figure 15. Some routes to the other hosts were
left off for brevity.
* [2]:[0]:[0]:[48]:[44:39:39:ff:00:13]:[128]:[fe80::4639:39ff:feff:13]
10.0.0.142 0 65020 65042 i
*> [2]:[0]:[0]:[48]:[44:39:39:ff:00:13]:[128]:[fe80::4639:39ff:feff:13]
10.0.0.142 0 65020 65042 i
* [3]:[0]:[32]:[10.0.0.142]
10.0.0.142 0 65020 65042 i
*> [3]:[0]:[32]:[10.0.0.142]
10.0.0.142 0 65020 65042 i
Route Distinguisher: 10.0.0.42:4
* [2]:[0]:[0]:[48]:[3a:ba:74:bb:6f:17]:[32]:[10.2.4.12]
10.0.0.142 0 65020 65042 i
*> [2]:[0]:[0]:[48]:[3a:ba:74:bb:6f:17]:[32]:[10.2.4.12]
10.0.0.142 0 65020 65042 i
* [2]:[0]:[0]:[48]:[3a:ba:74:bb:6f:17]:[128]:[fe80::38ba:74ff:febb:6f17]
10.0.0.142 0 65020 65042 i
*> [2]:[0]:[0]:[48]:[3a:ba:74:bb:6f:17]:[128]:[fe80::38ba:74ff:febb:6f17]
10.0.0.142 0 65020 65042 i
* [2]:[0]:[0]:[48]:[44:39:39:ff:00:24]:[32]:[10.2.4.1]
10.0.0.142 0 65020 65042 i
*> [2]:[0]:[0]:[48]:[44:39:39:ff:00:24]:[32]:[10.2.4.1]
10.0.0.142 0 65020 65042 i
* [2]:[0]:[0]:[48]:[44:39:39:ff:00:24]:[128]:[fe80::4639:39ff:feff:24]
10.0.0.142 0 65020 65042 i
*> [2]:[0]:[0]:[48]:[44:39:39:ff:00:24]:[128]:[fe80::4639:39ff:feff:24]
10.0.0.142 0 65020 65042 i
* [3]:[0]:[32]:[10.0.0.142]
10.0.0.142 0 65020 65042 i
*> [3]:[0]:[32]:[10.0.0.142]
10.0.0.142 0 65020 65042 i
Displayed 52 prefixes (94 paths)
We can see the default gateway community for the route distinguisher by using the net
show bgp l2vpn evpn route rd <rd> command, as shown below in Figure 16. Some of the
output has been snipped for brevity.
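For example, using one of the route distinguishers from the output above, the command would be run as follows (output not reproduced here):

net show bgp l2vpn evpn route rd 10.0.0.42:4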
Looking at the routing table on the leaf, we see only layer 3 routes to the loopbacks of the
switches; all the routing between VXLAN subnets is happening on the centralized switch.
Figure 17 depicts the output.
The routing table on the centralized switch is as expected, as shown in Figure 18: the
VLAN subnets appear as directly connected routes and the default route points to the
Internet router.
This setup is available virtually on the Cumulus Networks GitHub site, where the full
configurations are shown.
ASYMMETRIC ROUTING EXAMPLE

Figure 19 - Asymmetric routing example topology: four servers in two racks connected over the layer 3 network
Two servers are located in VLAN 24 and two servers are located in VLAN 13. The servers
are dual connected via MLAG to the leaf switches. Asymmetric VXLAN routing with EVPN is
deployed to provide inter-VLAN connectivity.
No special additional leaf (ToR) configuration is needed for VXLAN routing in the
asymmetric scenario. It is enabled by default when the associated VLAN is configured with
an SVI. However, to provide routing to a destination VLAN, the local switch must have a
local VNI and VLAN configured for that destination VLAN, even if no hosts on that same
destination VLAN exist on the rack.
A sample leaf switch configuration (leaf01) is shown in Figure 20. Each leaf is configured
similarly. The MLAG/bond configuration and the peerlink were left off for brevity. Note the
SVI is configured on each VLAN, along with Virtual Router Redundancy (VRR). For example,
a server on VLAN 13 would use 10.1.3.1 as its default gateway.
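A minimal sketch of that leaf01 configuration might look like the following; the uplink ports swp51-52 and the VRR MAC addresses are assumptions, the MLAG anycast VTEP is omitted, and the exact snippet in Figure 20 may differ:

# /etc/network/interfaces on Leaf01
# (vni13, vni24 and the bridge are configured as in the centralized leaf sketch above)
auto vlan13
iface vlan13
    address 10.1.3.11/24
    # VRR anycast gateway used by servers on VLAN 13
    address-virtual 44:39:39:ff:00:13 10.1.3.1/24
    vlan-id 13
    vlan-raw-device bridge

auto vlan24
iface vlan24
    address 10.2.4.11/24
    # VRR anycast gateway used by servers on VLAN 24
    address-virtual 44:39:39:ff:00:24 10.2.4.1/24
    vlan-id 24
    vlan-raw-device bridge

# FRR configuration
router bgp 65011
 bgp router-id 10.0.0.11
 neighbor swp51 interface remote-as external
 neighbor swp52 interface remote-as external
 address-family ipv4 unicast
  network 10.0.0.11/32
 exit-address-family
 address-family l2vpn evpn
  neighbor swp51 activate
  neighbor swp52 activate
  advertise-all-vni
 exit-address-family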
VRFs may be added for multitenancy, and routing between VXLANs happens only within a VRF.
Looking at the EVPN routing table shown in Figure 21, the four server IP addresses along
with their MAC addresses appear in the table as type 2 routes: 10.1.3.101, 10.2.4.102,
10.1.3.103 and 10.2.4.104:
Looking at the kernel routing table, we can see that to get to either subnet, we access it
directly on the same leaf switch as shown in Figure 22.
SYMMETRIC ROUTING EXAMPLE

Figure 23 - Symmetric routing example topology: a BGP unnumbered underlay in the layer 3 network, with the Internet router advertising a default route (0.0.0.0/0) toward the fabric
For purposes of this example, we deploy four servers and two VLANs. The servers use
MLAG to connect to the leaf switches for redundancy and are configured with an anycast
default gateway. The same virtual anycast IP address is configured on each switch's
southbound SVI, using VRR, towards the servers.
Each of the four leafs hosts two SVIs, one for VLAN 13 and one for VLAN 24. VXLAN
distributed routing occurs on these leafs. Each leaf switch also has VNIs 13 and 24
configured for the VXLAN tunnels that carry VLAN 13 and VLAN 24, respectively, at layer 2.
Specifically for the symmetric routing model, each leaf and exit switch also hosts VLAN 4001
(the transport VLAN) and VNI 104001 (the L3VNI).
All leaf switches host vrf1, and all servers are located in vrf1. Since we are using EVPN,
there is no need to configure vrf1 on the spine switches.
This design has no hosts attached to border leafs, but attaching hosts to the border leafs
as well is fully supported.
For this design, we generate a default route (0.0.0.0/0) on the Internet router and advertise
it to the exit leafs via the BGP IPv4 address family. A sample configuration snippet for
peering and generating a default route via eBGP unnumbered is shown in Figure 24:
interface swp1
ipv6 nd ra-interval 10
no ipv6 nd suppress-ra
interface swp2
ipv6 nd ra-interval 10
no ipv6 nd suppress-ra
router bgp 25253
bgp router-id 10.0.0.253
bgp bestpath as-path multipath-relax
neighbor swp1 interface remote-as external
neighbor swp2 interface remote-as external
!
address-family ipv4 unicast
network 10.0.0.253/32
neighbor swp1 default-originate
neighbor swp2 default-originate
exit-address-family
The exit leaf's interface that peers with the Internet router (swp44) is placed in vrf1, which
also places the default route in vrf1. The Exit01 router injects vrf1's default route into
vrf1's EVPN address family as a type 5 route with the command advertise ipv4 unicast, as
highlighted below. EVPN then advertises this route via the local VLAN4001, which is also in
vrf1, across the L3VNI (VNI 104001) to all other leafs in the network that participate in vrf1.
The local leafs install the type 5 default route into their vrf1 routing tables. No other VNIs are
configured on the exit leafs since no hosts are directly connected to them.
A sample configuration snippet from Exit01 is shown in Figure 25. Note the advertise ipv4
unicast command is only needed on routers where you want to redistribute IPv4 address
family routes into L2VPN EVPN address family routes, which is generally on the border/
exit leafs. We advertise the VXLAN subnets to the Internet router. Also, in this case we
need only the L3VNI on the exit leaf as there are no hosts attached to that leaf. If hosts
were attached to those leafs, we would also need the VNIs/SVIs configured here as well, as
shown in the leaf snippet.
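A minimal FRR sketch of the relevant pieces on Exit01 might look like the following; the uplink port names are assumptions, the route filtering and tenant subnet advertisement toward the Internet router described above are omitted, and the exact snippet in Figure 25 may differ. The advertise ipv4 unicast statement under the vrf1 EVPN address family is what injects vrf1's IPv4 routes, including the default route learned from the Internet router, as type 5 routes:

vrf vrf1
 vni 104001
!
router bgp 65041
 bgp router-id 10.0.0.41
 neighbor swp51 interface remote-as external
 neighbor swp52 interface remote-as external
 address-family ipv4 unicast
  network 10.0.0.41/32
 exit-address-family
 address-family l2vpn evpn
  neighbor swp51 activate
  neighbor swp52 activate
  advertise-all-vni
 exit-address-family
!
router bgp 65041 vrf vrf1
 ! swp44 faces the Internet router and is a member of vrf1
 neighbor swp44 interface remote-as external
 address-family l2vpn evpn
  advertise ipv4 unicast
 exit-address-family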
The spine switches are configured with both the IPv4 unicast and L2VPN EVPN address
families, which is fairly straightforward. A snippet from Spine01 is shown below; interfaces
swp1-4 connect to the leaf switches, and swp29-30 connect to the exit switches. Spine02
is configured similarly.
Although not depicted above for brevity, the leaf's southbound ports are placed in vrf1
and a tenant VLAN. Each tenant VLAN is mapped to a VNI that provides the layer 2
VXLAN tunneling. In addition, VNI 104001 (associated with interface vxlan4001) and VLAN
4001 are configured to create the L3VNI. This provides the hop between the local VLANs
and the L3VNI.
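A minimal sketch of those L3VNI-related pieces on Leaf01 might look like the following, under the assumption that vxlan4001 is added to the bridge ports and VLAN 4001 to the bridge VIDs alongside the tenant VNIs (the exact configuration may differ):

# /etc/network/interfaces additions on Leaf01
auto vrf1
iface vrf1
    vrf-table auto

auto vxlan4001
iface vxlan4001
    vxlan-id 104001
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 4001
    bridge-learning off

auto vlan4001
iface vlan4001
    vlan-id 4001
    vlan-raw-device bridge
    vrf vrf1

# the tenant SVIs are placed in vrf1 as well, e.g.:
auto vlan13
iface vlan13
    address 10.1.3.11/24
    address-virtual 44:39:39:ff:00:13 10.1.3.1/24
    vlan-id 13
    vlan-raw-device bridge
    vrf vrf1

# FRR: map the VRF to its L3VNI
vrf vrf1
 vni 104001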
Finally, let's look at the leaf's EVPN routing table in Figure 27, showing both type 2 and type
5 routes (the type 3 routes provide VTEP discovery). We can see the type 2 routes to the
servers in the data center, as well as the type 5 routes to reach outside the data center. Part
of the routing table has been left off for brevity.
Looking in the leaf's routing table in Figure 28, we can see all the routes are available in vrf1
as well as the default route that is reachable over vlan4001, which is attached to the L3VNI:
VRF vrf1:
B>* 0.0.0.0/0 [20/0] via 10.0.0.41, vlan4001 onlink, 00:05:53 #Type 5 with L3VNI
* via 10.0.0.42, vlan4001 onlink, 00:05:53
K * 0.0.0.0/0 [255/8192] unreachable (ICMP unreachable), 00:10:03
B>* 10.0.0.253/32 [20/0] via 10.0.0.41, vlan4001 onlink, 00:05:53
* via 10.0.0.42, vlan4001 onlink, 00:05:53
C * 10.1.3.0/24 is directly connected, vlan13-v0, 00:10:03
C>* 10.1.3.0/24 is directly connected, vlan13, 00:10:03
B>* 10.1.3.103/32 [20/0] via 10.0.0.134, vlan4001 onlink, 00:04:56 #Type 2 with L3VNI
C * 10.2.4.0/24 is directly connected, vlan24-v0, 00:10:03
C>* 10.2.4.0/24 is directly connected, vlan24, 00:10:03
B>* 10.2.4.104/32 [20/0] via 10.0.0.134, vlan4001 onlink, 00:04:16 #Type 2 with L3VNI
A virtual setup with this topology is available on the Cumulus Linux GitHub site.
OPERATIONS WITH NETQ

Upon initial setup, it is wise to check the configurations to be sure nothing is missing
and all VNIs are configured. NetQ can accomplish this in a single step, as shown
in Figure 29 below.
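The check in Figure 29 corresponds to a command along the lines of the following (exact syntax can vary by NetQ version):

netq check evpn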
In this example, the L2VPN EVPN address family has not been activated on spine01.
Checking BGP reports the same problem, as shown in Figure 30.
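The equivalent BGP check would be something like:

netq check bgp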
After this is fixed, all will look good, as shown in Figure 31 below.
NetQ also displays all the VNIs in the entire network with one command, to verify that
the correct VNIs are where they belong, as shown in Figure 32. All VNIs can also easily
be displayed per node, as shown in Figure 33:
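The commands behind Figures 32 and 33 would be along these lines, where the hostname leaf01 is just an example and the exact syntax may vary by NetQ version:

netq show evpn
netq leaf01 show evpn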
These are just a few examples of how NetQ can help with operations in an EVPN environment.
Many more features are available; visit the NetQ User's Guide for more information.
Conclusion
Modern data centers deploy layer 3 with the BGP routing protocol in order to scale and
provide a robust, easy to troubleshoot infrastructure that also supports a multi-vendor
environment. However, since some applications still require layer 2 connectivity, VXLAN
tunnels, deployed over a layer 3 fabric, have become a popular way to achieve layer 2
connectivity between racks.
In order for the VXLAN tunnels to reach each other or the outside world, VXLAN routing
must be enabled. Newer merchant silicon supports this functionality directly in the ASIC,
making for a cost-effective and simple deployment. EVPN, which is already popular
for VXLAN bridging, now integrates with VXLAN routing, unifying the control plane and
streamlining deployment.
Cumulus Linux EVPN is the ideal control plane solution for VXLAN routing. It uses the
same routing protocol preferred for data center infrastructures — BGP — for both VXLAN
bridging and routing. Additionally, it has inherent support for multitenancy.
Try it out for yourself on this ready-to-go demo using Cumulus VX with NetQ and Vagrant
or try it out in Cumulus in the Cloud.
Cumulus Networks is leading the transformation of bringing web-scale networking to enterprise cloud. Its network
switch, Cumulus Linux, is the only solution that allows you to affordably build and efficiently operate your network like the
world’s largest data center operators, unlocking vertical network stacks. By allowing operators to use standard hardware
components, Cumulus Linux offers unprecedented operational speed and agility, at the industry’s most competitive cost.
Cumulus Networks has received venture funding from Andreessen Horowitz, Battery Ventures, Sequoia Capital, Peter Wagner and
four of the original VMware founders.
©2018 Cumulus Networks. All rights reserved. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the “Marks”) are
trademarks and service marks of Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior
written consent of Cumulus Networks. The registered trademark Linux® is used pursuant to a sublicense from LMI, the exclusive licensee of Linus
Torvalds, owner of the mark on a worldwide basis. All other marks are used under fair use or license from their respective owners.