B Cisco Vxlan Config v1
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2021 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 1 About
About This Demonstration
Limitations
Customization Options
Requirements
About This Solution
Topology
Equipment Details
Switch Information
Tenant Information
Server Information
Component Details
Before You Present
Get Started
Accessing Devices
CHAPTER 2 Scenarios
Configure Anycast RP
Configure RP Address
Configure Interfaces
Configure Interfaces
Verification
CHAPTER 3 Appendix
Device Troubleshooting
Limitations
Certain features of the Cisco VXLAN solution are outside the scope of this demonstration, because the
demonstration uses virtual devices rather than a physical fabric:
• Due to the way the Nexus 9000v operates, it does not persist a boot statement and will get stuck at the
loader prompt on boot. To prevent this, make sure the output of the show boot command contains a valid
image to boot from.
• Some commands that might be required on a CloudScale Nexus 9K are not available on the virtual Nexus
9K. Consult the documentation.
• Because the hardware is virtual, interface behavior can appear odd. For example, if two interfaces are
directly connected, shutting one side down should show "down" on the other side. This does not occur
on virtual hardware.
Customization Options
We recommend that you test different scenarios after building the VXLAN Fabric.
• Tenant-2 is built but not actually used during the demo. We recommend moving some of the servers to
Tenant-2 to show how multi-tenancy isolates traffic.
• The vPC configuration is very generic. Setting up vPC with "advertise-pip" is outside the scope of this
lab, but we recommend experimenting with it. It does work, and it is helpful to know the differences.
Requirements
The table below outlines the requirements for this preconfigured demonstration.
Required   Optional
Laptop     Cisco AnyConnect®
Topology
This content includes preconfigured users and components to illustrate the scripted scenarios and features of
the solution.
dCloud Topology
Physical Topology
Equipment Details
Name   Description           Host Name (FQDN)       IP Address     Username   Password
CML    Cisco Modeling Labs   cml.dcloud.cisco.com   198.18.133.3   -          -
Switch Information
Name      Loopback 0 IP   Loopback 1 IP   Loopback 1 Secondary   Loopback 15 IP
Spine-1   10.0.0.1        10.0.1.1        -                      10.255.255.255
Tenant Information
Name       VLAN ID   VLAN Name            VNI     Multicast Group   SVI IP
Tenant-1   101       Tenant-1_Network-1   10101   239.0.0.101       192.168.101.1/24
Server Information
Name VLAN IP Address Gateway
Server-1 101 192.168.101.10/24 192.168.101.1
Component Details
• CML - 2.1.1-b19
• Nexus 9K - 9.3(6)
• IOSv - 15.9(3)M2
• Ubuntu - 20.04.1
• TinyCore Linux - 5.4.3-tinycore
Get Started
Follow these steps to schedule a session of the content and configure your presentation environment.
Procedure
Step 2 For best performance, connect to the workstation with Cisco AnyConnect VPN [Show Me How] and the
local RDP client on your laptop [Show Me How].
• Workstation 1: 198.18.133.252, Username: administrator, Password: C1sco12345.
Important After you access the remote desktop, wait 15 minutes for the devices to fully initialize. If you
do not wait, the devices may not be accessible.
This demonstration/lab is designed to be completed in one sitting, without interruption; otherwise,
you may see errors and may have to log back in to the application and/or devices.
The Nexus 9000v I/O is demanding of dCloud platform resources. As a result, device crashes
may occur. To recover failed devices, refer to the Device Troubleshooting appendix in this
document.
Accessing Devices
Important After you access the remote desktop, wait 15 minutes for the devices to fully initialize. If you do not
wait, the devices may not be accessible.
The Nexus 9000v I/O is demanding of dCloud platform resources. As a result, device crashes may occur.
To recover failed devices, refer to the Device Troubleshooting appendix in this document.
Procedure
Step 1 On all of the Spine and Leaf switches, enter the following commands to enable the OSPF routing protocol
and set the Router-ID to match the Loopback 0 IP address.
Spine-1:
Spine-1# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.1
end
copy run start
Spine-2:
Spine-2# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.2
end
copy run start
Leaf-1:
Leaf-1# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.11
end
copy run start
Leaf-2:
Leaf-2# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.12
end
copy run start
Leaf-3:
Leaf-3# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.13
end
copy run start
Leaf-4:
Leaf-4# configure
feature ospf
router ospf UNDERLAY
router-id 10.0.0.14
end
copy run start
Now we will configure the interfaces for OSPF. In this setup, the goal is to enable OSPF with a point-to-point
network type for faster convergence. Each of the loopback interfaces must be reachable throughout the network.
The loopbacks have already been created. The goal is also to save IP space inside the fabric by using
“ip unnumbered”, which borrows the Loopback0 address for the point-to-point links instead of dedicating a
subnet to each one.
Loopback addresses described:
• Loopback0 – Used for the “ip unnumbered” and for the BGP Peering source/destination
• Loopback1 – Used for the VXLAN tunnel interface source and destination
• Loopback15 – Used only on spine switches for the Anycast RP address for multicast routing. Multicast
routing is used for BUM traffic discovery.
Step 2 Throughout the lab, the same config can be used on multiple devices; we recommend using a text editor
to copy and paste the configuration. Here, all of the Spine switches use the exact same config. Only Spine-1
is shown below. Make sure to put the config on both Spine-1 and Spine-2.
Spine-1# configure
interface ethernet1/1-4
no switchport
medium p2p
ip router ospf UNDERLAY area 0.0.0.0
ip unnumbered loopback0
no shutdown
exit
interface loopback0
ip router ospf UNDERLAY area 0.0.0.0
interface loopback1
ip router ospf UNDERLAY area 0.0.0.0
interface loopback15
ip router ospf UNDERLAY area 0.0.0.0
end
copy run start
Step 3 Here, all of the leaf switches will utilize the same config. Make sure to put the config on all four Leaf switches.
Leaf-1# configure
interface ethernet 1/1-2
no switchport
medium p2p
ip router ospf UNDERLAY area 0.0.0.0
ip unnumbered loopback0
no shutdown
exit
interface loopback0
ip router ospf UNDERLAY area 0.0.0.0
interface loopback1
ip router ospf UNDERLAY area 0.0.0.0
end
copy run start
Procedure
Step 1 Enter the following command on Spine-1 and Spine-2. In this output, we are looking to verify that all of the
Leaf switches formed an OSPF neighbor adjacency.
Spine-1:
Spine-1# show ip ospf neighbors
OSPF Process ID UNDERLAY VRF default
Total number of neighbors: 4
Neighbor ID Pri State Up Time Address Interface
Spine-2:
Spine-2# show ip ospf neighbors
OSPF Process ID UNDERLAY VRF default
Total number of neighbors: 4
Neighbor ID Pri State Up Time Address Interface
10.0.0.11 1 FULL/ - 00:01:48 10.0.0.11 Eth1/1
10.0.0.12 1 FULL/ - 00:01:48 10.0.0.12 Eth1/2
10.0.0.13 1 FULL/ - 00:01:37 10.0.0.13 Eth1/3
10.0.0.14 1 FULL/ - 00:01:38 10.0.0.14 Eth1/4
Spine-2#
Step 2 Enter the following command on Spine-1. In this output, the goal is to verify that all of the loopback IP
addresses are reachable from each device. In this example, only the view from Spine-1 is shown. It is highly
recommended to check this output on each switch (Spine-1 and Spine-2; Leaf-1, Leaf-2, Leaf-3, and Leaf-4).
Spine-1:
Spine-1# show ip route
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
10.0.0.1/32, ubest/mbest: 2/0, attached
*via 10.0.0.1, Lo0, [0/0], 00:29:47, local
*via 10.0.0.1, Lo0, [0/0], 00:29:47, direct
10.0.0.2/32, ubest/mbest: 4/0
*via 10.0.0.11, Eth1/1, [110/81], 00:03:49, ospf-UNDERLAY, intra
*via 10.0.0.12, Eth1/2, [110/81], 00:03:49, ospf-UNDERLAY, intra
*via 10.0.0.13, Eth1/3, [110/81], 00:03:49, ospf-UNDERLAY, intra
*via 10.0.0.14, Eth1/4, [110/81], 00:03:49, ospf-UNDERLAY, intra
10.0.0.11/32, ubest/mbest: 1/0
*via 10.0.0.11, Eth1/1, [110/41], 00:02:06, ospf-UNDERLAY, intra
10.0.0.12/32, ubest/mbest: 1/0
*via 10.0.0.12, Eth1/2, [110/41], 00:01:55, ospf-UNDERLAY, intra
10.0.0.13/32, ubest/mbest: 1/0
*via 10.0.0.13, Eth1/3, [110/41], 00:01:48, ospf-UNDERLAY, intra
10.0.0.14/32, ubest/mbest: 1/0
*via 10.0.0.14, Eth1/4, [110/41], 00:01:41, ospf-UNDERLAY, intra
10.0.1.1/32, ubest/mbest: 2/0, attached
*via 10.0.1.1, Lo1, [0/0], 00:29:47, local
*via 10.0.1.1, Lo1, [0/0], 00:29:47, direct
10.0.1.2/32, ubest/mbest: 4/0
*via 10.0.0.11, Eth1/1, [110/81], 00:03:44, ospf-UNDERLAY, intra
*via 10.0.0.12, Eth1/2, [110/81], 00:03:44, ospf-UNDERLAY, intra
*via 10.0.0.13, Eth1/3, [110/81], 00:03:44, ospf-UNDERLAY, intra
*via 10.0.0.14, Eth1/4, [110/81], 00:03:44, ospf-UNDERLAY, intra
10.0.1.11/32, ubest/mbest: 1/0
*via 10.0.0.11, Eth1/1, [110/41], 00:02:01, ospf-UNDERLAY, intra
10.0.1.12/32, ubest/mbest: 1/0
*via 10.0.0.12, Eth1/2, [110/41], 00:01:50, ospf-UNDERLAY, intra
10.0.1.13/32, ubest/mbest: 1/0
*via 10.0.0.13, Eth1/3, [110/41], 00:01:43, ospf-UNDERLAY, intra
10.0.1.14/32, ubest/mbest: 1/0
*via 10.0.0.14, Eth1/4, [110/41], 00:01:36, ospf-UNDERLAY, intra
10.0.1.100/32, ubest/mbest: 2/0
*via 10.0.0.11, Eth1/1, [110/41], 00:01:50, ospf-UNDERLAY, intra
*via 10.0.0.12, Eth1/2, [110/41], 00:01:50, ospf-UNDERLAY, intra
In this example, we will use multicast. There are a handful of ways to configure multicast; for simplicity,
we will use Anycast RP. It requires some extra configuration on the spine switches, using Loopback15
(configured previously) with the same IP address on both spine switches.
On the Spine switches, each of the loopback interfaces and each physical interface connected to a Leaf
switch must be configured to run “ip pim sparse-mode”. The “anycast-rp” configuration tells the switch
which IP is the shared RP address and lists the Loopback0 address of every switch participating in the
Anycast RP set. Finally, the RP address itself must be statically assigned.
Configure Anycast RP
Procedure
Here, both Spine switches will utilize the same config. Make sure to enter the following commands on both
Spine-1 and Spine-2 switches.
Note Both spine switches are configured exactly the same. Since the PIM feature and the Loopback15
interface were addressed earlier, it is rather trivial to enable Anycast RP. The two ip pim anycast-rp
lines shown below are all it takes: the first IP is the RP address, and the second IP is the
Loopback0 address of a Spine switch acting as an Anycast RP, including this one. The
configuration can be copied and pasted on both spines. The final line is where the RP is
statically assigned to the switch.
Spine-1, Spine-2:
Spine-1#configure
feature pim
ip pim anycast-rp 10.255.255.255 10.0.0.1
ip pim anycast-rp 10.255.255.255 10.0.0.2
ip pim rp-address 10.255.255.255
end
copy run start
The Leaf switch configuration is simpler than the Spine configuration. Each of the loopback and physical
interfaces that are connected to the Spine switches must be configured with “ip pim sparse-mode”. The only
other requirement is to specify the Anycast RP address of the Spines.
Configure RP Address
Procedure
Here, all four Leaf switches utilize the same config. Make sure to put the config on all Leaf switches. All
of the leaf configurations use the same IP address for the RP. A switch will pick either uplink path to the
10.255.255.255 address; it does not matter which one, because the spines are synchronized using anycast-rp.
Apply the same configuration to each Leaf switch.
Leaf-1, Leaf-2, Leaf-3, Leaf-4:
Leaf-1# configure
feature pim
ip pim rp-address 10.255.255.255
end
copy run start
Configure Interfaces
Procedure
Here, the Spine switches utilize the same config. Make sure to put the following configuration on both
Spine-1 and Spine-2. Each loopback interface and each physical interface connected to a Leaf switch is
enabled for “ip pim sparse-mode” so that PIM runs across the entire underlay. The configuration can be
copied and pasted on both spines.
Spine-1, Spine-2:
Spine-1# configure
interface loopback 0
ip pim sparse-mode
interface loopback 1
ip pim sparse-mode
interface loopback 15
ip pim sparse-mode
exit
interface ethernet 1/1-4
ip pim sparse-mode
end
copy run start
Configure Interfaces
Procedure
Here, all of the leaf switches will utilize the same config. Make sure to enter the following configuration on
all four Leaf switches. Each loopback interface and each uplink connected to a Spine switch is enabled for
“ip pim sparse-mode”. The same configuration applies verbatim to every Leaf switch.
Leaf-1, Leaf-2, Leaf-3, Leaf-4:
Leaf-1# configure
interface loopback0
ip pim sparse-mode
interface loopback1
ip pim sparse-mode
exit
int ethernet1/1-2
ip pim sparse-mode
end
copy run start
Verification
Procedure
Please be sure to run the following verification commands on both Spine-1 and Spine-2. The output should
be very similar on both switches; however, covering the details of multicast is outside the scope of this lab.
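The specific commands are not reproduced here; a reasonable set for checking the rendezvous point and PIM
adjacencies, assuming the standard NX-OS PIM verification commands, would be:
Spine-1# show ip pim rp
Spine-1# show ip pim interface brief
Spine-1# show ip pim neighbor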
The “nv overlay evpn” command enables the L2VPN EVPN address family for BGP. It provides the capability
of using the control plane for endpoint learning instead of data-plane flood-and-learn.
Procedure
Step 1 On each Spine and Leaf switch, enter the following commands to enable NV Overlay.
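The exact lines are not shown above; a sketch of what enabling the overlay typically looks like on NX-OS 9.3
follows (feature bgp is assumed here because the BGP process is configured in the next step, and feature nv
overlay is only strictly required on the Leaf switches that will host NVE interfaces):
Spine-1# configure
feature bgp
feature nv overlay
nv overlay evpn
end
copy run start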
Step 2 Enter the following commands to configure the BGP process. On the Spine switches, the “retain route-target
all” command is required because the Spine switches pass the VXLAN routes without knowing any of the
Tenant information. For the most part, each switch is configured the same; however, it is recommended to
specify a router-id that matches the Loopback0 interface.
Spine-1:
Spine-1# configure
router bgp 65001
router-id 10.0.0.1
address-family ipv4 unicast
address-family l2vpn evpn
retain route-target all
end
copy run start
Spine-2:
Spine-2# configure
router bgp 65001
router-id 10.0.0.2
address-family ipv4 unicast
address-family l2vpn evpn
retain route-target all
end
copy run start
Leaf-1:
Leaf-1# configure
router bgp 65001
router-id 10.0.0.11
address-family ipv4 unicast
address-family l2vpn evpn
end
copy run start
Leaf-2:
Leaf-2# configure
router bgp 65001
router-id 10.0.0.12
address-family ipv4 unicast
address-family l2vpn evpn
end
copy run start
Leaf-3:
Leaf-3# configure
router bgp 65001
router-id 10.0.0.13
address-family ipv4 unicast
address-family l2vpn evpn
end
copy run start
Leaf-4:
Leaf-4# configure
router bgp 65001
router-id 10.0.0.14
address-family ipv4 unicast
address-family l2vpn evpn
end
copy run start
Step 3 Enter the following commands to configure the Spine switches to peer with the Leaf switches. Templates are
used to make the configuration more scalable and easier to read. While not strictly necessary, templates keep
the config cleaner as more neighbors are added.
Spine-1 and Spine-2:
Spine-1# configure
router bgp 65001
template peer iBGP-Leafs
remote-as 65001
update-source loopback0
address-family ipv4 unicast
send-community both
route-reflector-client
address-family l2vpn evpn
send-community both
route-reflector-client
exit
exit
neighbor 10.0.0.11
description Leaf-1 Loopback0
inherit peer iBGP-Leafs
neighbor 10.0.0.12
description Leaf-2 Loopback0
inherit peer iBGP-Leafs
neighbor 10.0.0.13
description Leaf-3 Loopback0
inherit peer iBGP-Leafs
neighbor 10.0.0.14
description Leaf-4 Loopback0
inherit peer iBGP-Leafs
end
copy run start
Verification
Procedure
Enter the following commands on Spine-1 to verify the BGP neighbor relationships formed between the Spine
and Leaf pairs. It does not matter at this point that the tables are empty, with 0 routes.
Spine-1# show bgp ipv4 unicast summary
BGP summary information for VRF default, address family IPv4 Unicast
BGP router identifier 10.0.0.1, local AS number 65001
BGP table version is 6, IPv4 Unicast config peers 4, capable peers 4
0 network entries and 0 paths using 0 bytes of memory
BGP attribute entries [0/0], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.0.0.11 4 65001 9 9 6 0 0 00:00:21 0
10.0.0.12 4 65001 9 9 6 0 0 00:00:19 0
10.0.0.13 4 65001 9 9 6 0 0 00:00:24 0
10.0.0.14 4 65001 9 9 6 0 0 00:00:20 0
Spine-1# show bgp l2vpn evpn summary
BGP summary information for VRF default, address family L2VPN EVPN
BGP router identifier 10.0.0.1, local AS number 65001
BGP table version is 6, L2VPN EVPN config peers 4, capable peers 4
0 network entries and 0 paths using 0 bytes of memory
BGP attribute entries [0/0], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.0.0.11 4 65001 9 9 6 0 0 00:00:28 0
10.0.0.12 4 65001 9 9 6 0 0 00:00:26 0
10.0.0.13 4 65001 9 9 6 0 0 00:00:31 0
10.0.0.14 4 65001 9 9 6 0 0 00:00:27 0
Configuring Overlay
Value Proposition: In this scenario, the goal is to build the VLAN-to-VNI mapping to match the earlier table
information. In reality, only one additional command is involved, and it is one most people are not familiar
with: associating the VNI with the VLAN, which is done with the “vn-segment” command under the VLAN.
Looking at the design, there are no hosts plugged into Leaf-4. Leaf-4 is called a “Border Leaf”, and it is not
common to plug end hosts into a border leaf. Therefore, the Layer 2 VLANs and VNIs do not need to be
configured on the Border Leaf.
Create VLAN/VNI
Procedure
Enter the following commands to configure the VLAN/VNI mappings. Note that it is not strictly necessary to
configure VLAN 102 on Leaf-1 and Leaf-2, since no hosts are plugged into it; it is included to keep the
configuration identical. Once the lab is completed, we also recommend moving hosts between VLANs to further
enhance understanding.
Leaf-1, Leaf-2, Leaf-3:
Leaf-1# configure
feature vn-segment-vlan-based
vlan 101
name Tenant-1_Network-1
vn-segment 10101
exit
vlan 102
name Tenant-1_Network-2
vn-segment 10102
exit
vlan 201
name Tenant-2_Network-1
vn-segment 10201
exit
vlan 202
name Tenant-2_Network-2
vn-segment 10202
exit
vlan 1001
name Tenant-1_L3VNI
vn-segment 101001
exit
vlan 1002
name Tenant-2_L3VNI
vn-segment 101002
end
copy run start
Note The “Warning” you receive in the command output can be ignored. It occurs because the devices
are virtual.
Leaf-4:
Leaf-4# configure
feature vn-segment-vlan-based
vlan 1001
name Tenant-1_L3VNI
vn-segment 101001
exit
vlan 1002
name Tenant-2_L3VNI
vn-segment 101002
end
copy run start
Note Ignore the “Warning” you receive in the command output. It occurs because the devices are
virtual.
Create Tenants
Procedure
Enter the following commands on all Leaf switches to create tenants. VRFs are used to separate tenants at
Layer 3; they are what makes a fabric multi-tenant. VRFs are not new, but each one must be configured with
the matching VNI from above for its tenant, plus the additional route-target command for EVPN.
Leaf-1, Leaf-2, Leaf-3, Leaf-4:
Leaf-1# configure
vrf context Tenant-1
rd auto
vni 101001
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
exit
exit
vrf context Tenant-2
rd auto
vni 101002
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
end
copy run start
Procedure
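This procedure creates the NVE (VXLAN tunnel) interface. On Leaf-1, Leaf-2, and Leaf-3, the NVE interface
carries the Layer 2 VNIs with a multicast group for BUM replication, in addition to the two L3 VNIs; the
Leaf-4 configuration that follows carries only the L3 VNIs. A sketch for those three switches, assuming each
L2 VNI follows the multicast-group pattern from the Tenant Information table (only 239.0.0.101 for VNI 10101
is confirmed there; the other groups are assumptions):
Leaf-1, Leaf-2, Leaf-3:
Leaf-1# configure
interface nve1
no shutdown
source-interface loopback1
host-reachability protocol bgp
member vni 10101
mcast-group 239.0.0.101
exit
member vni 10102
mcast-group 239.0.0.102
exit
member vni 10201
mcast-group 239.0.0.201
exit
member vni 10202
mcast-group 239.0.0.202
exit
member vni 101001 associate-vrf
exit
member vni 101002 associate-vrf
end
copy run start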
Leaf-4:
Leaf-4# configure
interface nve1
no shutdown
source-interface loopback1
host-reachability protocol bgp
member vni 101001 associate-vrf
exit
member vni 101002 associate-vrf
end
copy run start
Verification
Procedure
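The original output is not reproduced here; typical NVE checks on NX-OS would be:
Leaf-1# show nve interface
Leaf-1# show nve peers
Leaf-1# show nve vni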
Configure EVPN
The EVPN section is what sets up Layer 2 connectivity across the fabric. It only requires configuration on
Leaf switches with hosts connected.
Procedure
Enter the following commands to add Layer 3 routing capability across the fabric. Adding the VRF
information to BGP puts the tenant routes into the BGP L2VPN EVPN table. Utilizing a route-map brings all
of the SVI interface subnets into the BGP process as well.
Leaf-1, Leaf-2, Leaf-3, Leaf-4:
Leaf-1# configure
route-map DIRECT permit 10
match tag 12345
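The block above ends after the route-map definition. A sketch of the likely remainder, following the standard
NX-OS VXLAN EVPN pattern (redistribute the tagged direct routes into each tenant VRF, then define the Layer 2
VNIs under evpn; the exact original lines are not shown):
router bgp 65001
vrf Tenant-1
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map DIRECT
exit
exit
vrf Tenant-2
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map DIRECT
exit
exit
exit
evpn
vni 10101 l2
rd auto
route-target both auto
vni 10102 l2
rd auto
route-target both auto
vni 10201 l2
rd auto
route-target both auto
vni 10202 l2
rd auto
route-target both auto
end
copy run start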
Configure SVIs
The SVIs utilize what is called an “Anycast Gateway.” This feature puts the same gateway IP/MAC address on
each of the Leaf switches for hosts to use. It specifies a “universal” MAC address for all the switches so
that if a host migrates between switches, the gateway MAC address does not change.
Procedure
Step 1 Enter the following command to configure the feature on all Leaf switches.
Leaf-1, Leaf-2, Leaf-3, and Leaf-4:
Leaf-1# configure
feature interface-vlan
end
Step 2 The first SVIs to configure are the Layer 2 SVIs. Notice how they “tag” the routes; this will be useful
later, when the networks are configured to be routed externally.
Layer 2 SVIs on Leaf-1, Leaf-2, Leaf-3
Leaf-1# configure
fabric forwarding anycast-gateway-mac 1234.1234.1234
interface vlan 101
vrf member Tenant-1
ip address 192.168.101.1/24 tag 12345
mtu 9216
no ip redirects
fabric forwarding mode anycast-gateway
no shutdown
interface vlan 102
vrf member Tenant-1
ip address 192.168.102.1/24 tag 12345
mtu 9216
no ip redirects
fabric forwarding mode anycast-gateway
no shutdown
interface vlan 201
vrf member Tenant-2
ip address 192.168.201.1/24 tag 12345
no ip redirects
mtu 9216
fabric forwarding mode anycast-gateway
no shut
Step 3 Layer 3 SVIs are used to route traffic across the fabric. The configuration is similar to the Layer 2 SVIs,
except that they have no IP address; they use “ip forward” to inform the SVI of its role.
Layer 3 SVIs on Leaf-1, Leaf-2, Leaf-3, Leaf-4
Leaf-1# configure
interface vlan 1001
vrf member Tenant-1
ip forward
mtu 9216
no ip redirects
no shut
exit
interface vlan 1002
vrf member Tenant-2
ip forward
mtu 9216
no ip redirects
no shut
end
copy run start
On Leaf-3, enter the following commands to configure Server-2 and Server-3 access:
Leaf-3:
Leaf-3# configure
int ethernet1/3
switchport
switchport mode access
switchport access vlan 101
spanning-tree port type edge
int ethernet1/4
switchport
switchport mode access
switchport access vlan 102
spanning-tree port type edge
end
copy run start
Verification
Procedure
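The original output is not reproduced here; reasonable checks on Leaf-3 after the SVIs and access ports come
up would be:
Leaf-3# show interface status
Leaf-3# show ip interface brief vrf Tenant-1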
Before the host ports can be configured, the vPC domain must be built. The goal here is to create a vPC to
Server-1, showing both the redundancy it provides and how to configure it.
Leaf-1:
Leaf-1# configure
feature vpc
feature lacp
vrf context vpc-pka
address-family ipv4 unicast
exit
exit
interface ethernet1/5
no switchport
vrf member vpc-pka
Warning: Deleted all L3 config on interface Ethernet1/5
ip address 192.168.0.0/31
no shutdown
vpc domain 10
peer-keepalive destination 192.168.0.1 source 192.168.0.0 vrf vpc-pka
peer-switch
peer-gateway
ip arp synchronize
exit
interface ethernet1/6-7
switchport
switchport mode trunk
channel-group 100 mode active
no shutdown
exit
interface port-channel 100
vpc peer-link
end
copy run start
Leaf-2:
Leaf-2# configure
feature vpc
feature lacp
vrf context vpc-pka
address-family ipv4 unicast
exit
exit
interface ethernet1/5
no switchport
vrf member vpc-pka
Warning: Deleted all L3 config on interface Ethernet1/5
ip address 192.168.0.1/31
no shutdown
vpc domain 10
peer-keepalive destination 192.168.0.0 source 192.168.0.1 vrf vpc-pka
peer-switch
peer-gateway
ip arp synchronize
exit
interface ethernet 1/6-7
switchport
switchport mode trunk
channel-group 100 mode active
no shutdown
exit
interface port-channel 100
vpc peer-link
end
copy run start
Verification
Procedure
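The original output is not reproduced here; the standard vPC health checks on Leaf-1 and Leaf-2 would be:
Leaf-1# show vpc
Leaf-1# show vpc consistency-parameters global
Leaf-1# show port-channel summary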
External Routing
Value Proposition: In this scenario, you will set up external routing.
Enter the following commands to set up the interfaces on Leaf-4 to reach the WAN router.
Leaf-4:
Leaf-4# configure
int ethernet1/3
no switchport
no shutdown
exit
int ethernet1/3.10
encapsulation dot1q 10
vrf member Tenant-1
ip address 172.16.1.0/31
no shutdown
exit
int ethernet1/3.20
encapsulation dot1q 20
vrf member Tenant-2
ip address 172.16.2.0/31
no shutdown
exit
end
copy run start
Verification
Procedure
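The original output is not reproduced here. Since the WAN router holds the other /31 address of each
subinterface subnet (172.16.1.1 and 172.16.2.1, as used in the BGP peering below), a quick reachability check
would be:
Leaf-4# ping 172.16.1.1 vrf Tenant-1
Leaf-4# ping 172.16.2.1 vrf Tenant-2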
Enter the following commands to configure BGP peering to the WAN router.
Leaf-4:
Leaf-4# configure
router bgp 65001
vrf Tenant-1
neighbor 172.16.1.1
remote-as 65002
address-family ipv4 unicast
exit
exit
exit
vrf Tenant-2
neighbor 172.16.2.1
remote-as 65002
address-family ipv4 unicast
end
copy run start
Verification on Leaf-4
Procedure
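The original output is not reproduced here; to confirm the eBGP sessions to the WAN router, checks such as
the following would work:
Leaf-4# show bgp vrf Tenant-1 ipv4 unicast summary
Leaf-4# show bgp vrf Tenant-2 ipv4 unicast summary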
Enter the following commands to configure BGP to filter host routes toward the WAN router. The NOHOSTS
prefix-list matches only prefixes of length /31 or shorter, so the /32 host routes created by EVPN are denied
by the route-map and never advertised externally.
Leaf-4:
Leaf-4# configure
ip prefix-list NOHOSTS seq 5 permit 0.0.0.0/0 le 31
route-map EBGP-PEER permit 5
match ip address prefix-list NOHOSTS
route-map EBGP-PEER deny 90
exit
router bgp 65001
vrf Tenant-1
neighbor 172.16.1.1
address-family ipv4 unicast
route-map EBGP-PEER out
exit
exit
exit
vrf Tenant-2
neighbor 172.16.2.1
address-family ipv4 unicast
route-map EBGP-PEER out
end
copy run start
Verification on Server-4
Procedure
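The original output is not reproduced here. Assuming Server-4 sits behind the WAN router and is a Linux host,
a plausible end-to-end check is to ping a fabric server such as Server-1 (192.168.101.10 per the Server
Information table):
ping -c 3 192.168.101.10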
Device Troubleshooting
On occasion, a Nexus 9K device may crash because the Nexus 9000v's highly demanding I/O exceeds the available
dCloud environment resources.
For this reason, we have implemented an out-of-band method of accessing the serial consoles of the Nexus
9K devices, using a Guacamole web app running on a server inside the session called web-consoles.
Google Chrome on the remote desktop is configured to open the web-consoles web app and all lab devices'
serial ports that have been configured within the Guacamole application.
If a device has crashed in your session, use the following procedure to recover the failed node.
Procedure
Step 1 First, click the connection for the device that crashed (for example, leaf1_cli).
Step 2 This places you at the loader prompt, where you will see Loader >.
Step 3 Once you are at the Loader > prompt, enter the boot bootflash:nxos.9.3.6.bin command
and press <Enter>.
Step 4 Wait for the switch to finish booting. Once the device finishes booting, you will see the login prompt for the
device (for example, Leaf-1).
Step 5 Log in to the device. You can continue the demonstration where you left off prior to the device crash.