
 

 
NVIDIA Cumulus Linux Virtual Test Drive
Built for Cumulus Linux v3.7.12

NVIDIA Cumulus Linux Test Drive: Lab Guide

Contents

Lab 1: Verifying Lab Connectivity and License Installation
  Access your CITC workbench
  Connect to your oob-mgmt-server
  Run the setup playbook
  Apply a Cumulus Linux License

Lab 2: Interface Configuration
  Configure loopback addresses on leaf01 and leaf02
  Verify loopback IP address configuration
  Configure bond between leaf01 and leaf02
  Configure bridge and access ports on leaf01 and leaf02
  Verify bridge configuration on leaf01 and leaf02
  Configure SVI and VRR on leaf01 and leaf02
  Test VRR connectivity
  Verify MAC address table on leaf01 and leaf02

Lab 3: BGP Unnumbered
  Apply loopback address to spine01
  Configure BGP unnumbered on spine01, leaf01 and leaf02
  Verify BGP connectivity between fabric nodes
  Advertise Loopback and SVI subnets from leaf01, leaf02 and spine01 into fabric
  Verify that BGP is advertising the routes
  Verify connectivity and path between server01 and server02

Appendix A: How to use an SSH client to manually connect to the lab
 

   


 
Lab 1: Verifying Lab Connectivity and License Installation 
Objective: 
You will confirm that you can access your Cumulus In the Cloud (CITC) workbench topology. First, you will visit your CITC workbench in a web browser. Then you will connect via SSH to an out-of-band management server (oob-mgmt-server), from which you can access your switches. Finally, you will install a Cumulus Linux license as you would on a real (non-virtual) switch.
 
Goals: 
● Learn how to access your CITC workbench in a web browser 
● Log into your oob-mgmt-server. 
● From your oob-mgmt-server, access your switches via SSH. 
● Install and apply a Cumulus Linux License 
 
Procedure: 
To access your lab, you must have a cumulusnetworks.com account bound to the email address used for your test drive registration.
 
Access your CITC workbench 
1. Use a web browser to access and log into https://air.cumulusnetworks.com
 

 
 
2. Once at the Cumulus In the Cloud console, find your Cumulus Linux Test Drive simulation 
 


 
 
3. Click the “Launch” button to open your simulation console 
 

 
 
 
   


 
Connect to your oob-mgmt-server 
 
1. Find and click the “Advanced” button in the lower left of the simulation console: 
 

 
 
2. The Advanced view presents you with your console connection to the oob-mgmt-server. Click the pop-out icon to open the oob-mgmt-server console in its own window, which you can resize and position for your convenience.
 

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 


3. You can also click on any of the nodes in the “Nodes” list to pop out a console window to that device.  
 

 
 
Run the setup playbook 
4. Change directories to the “Test-Drive-Automation” folder in the cumulus user’s home directory.
 
cumulus@oob-mgmt-server:~$ cd Test-Drive-Automation
cumulus@oob-mgmt-server:~/Test-Drive-Automation$
 
5. Run ‘git pull’ to fetch the latest changes.

cumulus@oob-mgmt-server:~/Test-Drive-Automation$ git pull
Already up-to-date.
cumulus@oob-mgmt-server:~/Test-Drive-Automation$
 
6. Run the ‘start-lab.yml’ Ansible playbook. 
 
cumulus@oob-mgmt-server:~/Test-Drive-Automation$ ansible-playbook start-lab.yml  
 
PLAY [all] 
*********************************************************************************************************************************
****************** 
 
TASK [Restart NTP] 
*********************************************************************************************************************************
********** 
changed: [leaf01] 
changed: [spine01] 
changed: [leaf02] 
changed: [server02] 
changed: [server01] 
changed: [netq-ts] 
 
TASK [Restart NetQ Agent] 
*********************************************************************************************************************************
*** 
changed: [leaf01] 
changed: [spine01] 
changed: [leaf02] 
changed: [server01] 
changed: [server02] 
changed: [netq-ts] 
 
PLAY [host] 
*********************************************************************************************************************************
***************** 
 
TASK [Setting up the test hosts config] 
********************************************************************************************************************** 


changed: [server02] 
changed: [server01] 
 
RUNNING HANDLER [apply interface config] 
********************************************************************************************************************* 
changed: [server02] 
changed: [server01] 
 
PLAY RECAP 
*********************************************************************************************************************************
****************** 
leaf01    : ok=2  changed=2  unreachable=0  failed=0
leaf02    : ok=2  changed=2  unreachable=0  failed=0
netq-ts   : ok=2  changed=2  unreachable=0  failed=0
server01  : ok=4  changed=4  unreachable=0  failed=0
server02  : ok=4  changed=4  unreachable=0  failed=0
spine01   : ok=2  changed=2  unreachable=0  failed=0
 
cumulus@oob-mgmt-server:~/Test-Drive-Automation$  
 
Apply a Cumulus Linux License 
7. On oob-mgmt-server: SSH to leaf01.

cumulus@oob-mgmt-server:~$ ssh leaf01
Warning: Permanently added 'leaf01,192.168.200.11' (ECDSA) to the list of known hosts. 
 
Welcome to Cumulus VX (TM) 
 
Cumulus VX (TM) is a community supported virtual appliance designed for 
experiencing, testing and prototyping Cumulus Networks' latest technology. 
For any questions or technical support, visit our community site at: 
http://community.cumulusnetworks.com 
 
The registered trademark Linux (R) is used pursuant to a sublicense from LMI, 
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide 
basis. 
cumulus@leaf01:~$  
 
8. On leaf01: View the license hosted on the oob-mgmt-server by downloading and displaying it with the curl command. Then apply the license using cl-license. Lastly, view where the license is installed on disk. Note: this process is not actually required on Cumulus VX, but is performed for parity with real hardware.
 
cumulus@leaf01:~$ curl http://192.168.200.1/license.lic
this is a fake license
cumulus@leaf01:~$ sudo cl-license -i http://192.168.200.1/license.lic
--2017-09-25 16:21:57-- http://192.168.200.1/license.lic 
Connecting to 192.168.200.1:80... connected. 
HTTP request sent, awaiting response... 200 OK 
Length: 31 
Saving to: ‘/tmp/lic.mvCHDM’ 
 
/tmp/lic.mvCHDM 100%[=====================>] 31 --.-KB/s in 0s   
 
2017-09-25 16:21:57 (4.61 MB/s) - ‘/tmp/lic.mvCHDM’ saved [31/31] 
 
License file installed. 
 
cumulus@leaf01:~$ ​cat /etc/cumulus/.license 
this is a fake license 
cumulus@leaf01:~$ 
 
9. On leaf01​: Restart switchd to apply the license. Check the status of switchd to make sure it is running.  
 
cumulus@leaf01:~$ sudo systemctl restart switchd.service
cumulus@leaf01:~$ sudo systemctl status switchd.service
●​ switchd.service - Cumulus Linux Switch Daemon 
Loaded: loaded (/lib/systemd/system/switchd.service; enabled) 
Drop-In: /etc/systemd/system/switchd.service.d 
└─override.conf 
Active: ​active (running)​ since Mon 2017-09-25 16:22:30 UTC; 10s ago 
Process: 2553 ExecStopPost=/bin/sh -c /usr/bin/killall -q -s 36 clagd || exit 0 (code=exited, 
status=0/SUCCESS) 
Main PID: 2577 (switchd) 
CGroup: /system.slice/switchd.service 
└─2577 /usr/sbin/switchd -vx 
 
Sep 25 16:22:30 leaf01 switchd[2577]: Initializing Cumulus Networks switch ...hd 
Sep 25 16:22:30 leaf01 switchd[2577]: switchd.c:1707 switchd version 1.0-cl3u17 
Sep 25 16:22:30 leaf01 switchd[2577]: switchd.c:1708 switchd cmdline: -vx 


Sep 25 16:22:30 leaf01 switchd[2577]: switchd.c:446 /config/ignore_non_swps...UE 


Sep 25 16:22:30 leaf01 switchd[2577]: switchd.c:446 /config/logging changed...FO 
Sep 25 16:22:30 leaf01 systemd[1]: Started Cumulus Linux Switch Daemon. 
Sep 25 16:22:30 leaf01 switchd[2577]: switchd.c:1403 Startup complete. 
Hint: Some lines were ellipsized, use -l to show in full. 
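
Tip: to confirm at any later point which license is installed, run cl-license with no arguments; it prints the installed license. A minimal sketch (in this virtual lab, the license body is the fake string installed above):

cumulus@leaf01:~$ cl-license
this is a fake license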
 
This concludes Lab 1. 

   


 
Lab 2: Interface Configuration 
Objective: 
This lab will configure several types of interfaces. First, a bond will be configured between leaf01 and leaf02, and the bond will be configured as a trunk to pass vlan10 and vlan20. Connections between the leafs and the servers will be configured as access ports. Server01 and Server02 will be in different subnets, so leaf01 and leaf02 will be configured to route between the VLANs, using VRR to provide a highly available gateway for each VLAN.
 
By the end of this lab, we’ll have the following topology implemented: 
 

 
 
 
Dependencies on other Labs: 
● None 
 
Goals: 
● Configure loopback addresses for leaf01 and leaf02  
● Configure a bond between leaf01 and leaf02 
● Configure a bridge 
● Create a trunk port and access port 
● Configure SVIs on leaf01 and leaf02 
● Configure VRR addresses on leaf01 and leaf02  
 
Procedure: 
Configure loopback addresses on leaf01 and leaf02 

Interface Configuration Details 

Interface↓ \ Switch→   leaf01  leaf02 

Loopback IP  10.255.255.1/32  10.255.255.2/32 


 
1. On leaf01​: Assign an ip address to the loopback interface.  
 
cumulus@leaf01:~$ net add loopback lo ip address 10.255.255.1/32
cumulus@leaf01:~$ net commit
 
2. On leaf02​: Assign an ip address to the loopback interface.  
 
cumulus@leaf02:~$ net add loopback lo ip address 10.255.255.2/32
cumulus@leaf02:~$ net commit
 
Verify loopback IP address configuration 
3. On leaf01​: Check that the address has been applied.  
 
cumulus@leaf01:~$ ​net show interface lo 
 
Name MAC Speed MTU Mode 
-- ---- ----------------- ----- ----- -------- 
UP lo 00:00:00:00:00:00 N/A 65536 Loopback 
 
IP Details 
------------------------- --------------- 
IP: 127.0.0.1/8 
IP: 10.255.255.1/32 
IP: ::1/128 
IP Neighbor(ARP) Entries: 0 
 

cumulusnetworks.com 9
NVIDIA Cumulus Linux Test Drive: ​Lab Guide

 
4. On leaf02​: Check that the address has been applied.  
 
cumulus@leaf02:~$ ​net show interface lo 
 
Name MAC Speed MTU Mode 
-- ---- ----------------- ----- ----- -------- 
UP lo 00:00:00:00:00:00 N/A 65536 Loopback 
 
IP Details 
------------------------- --------------- 
IP: 127.0.0.1/8 
IP: 10.255.255.2/32 
IP: ::1/128 
IP Neighbor(ARP) Entries: 0 
 
 
Important things to observe:
● The loopback has the user-defined IP address as well as the default 127.0.0.1/8 address assigned to it
● The loopback has a predefined default configuration on Cumulus Linux. Make sure not to delete it.
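
For reference, NCLU renders these commands into /etc/network/interfaces. A minimal sketch of the resulting loopback stanza (assuming Cumulus Linux 3.x rendering; exact formatting may vary by release):

cumulus@leaf01:~$ grep -A 2 'auto lo' /etc/network/interfaces
auto lo
iface lo inet loopback
    address 10.255.255.1/32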
 
Configure bond between leaf01 and leaf02 

Bond Configuration Details 

Bond↓ \ Switch→   leaf01  leaf02 

Bond name  BOND0  BOND0 

Bond members  swp49,swp50  swp49,swp50 


 
5. On leaf01​: Create a bond with members swp49 and swp50.  
 
cumulus@leaf01:~$ net add bond BOND0 bond slaves swp49-50
cumulus@leaf01:~$ net commit
 
6. On leaf02​: Create a bond with members swp49 and swp50.  
 
cumulus@leaf02:~$ net add bond BOND0 bond slaves swp49-50
cumulus@leaf02:~$ net commit
 
7. On leaf01 and leaf02: Check the status of the bond between the two switches. Verify that the bond is operational by checking the status of the bond and its members, and confirm that your lab output matches the output below.
 
cumulus@leaf01:~$ ​net show interface bonds 
Name Speed MTU Mode Summary 
-- ----- ----- ---- ------- ---------------------------------- 
UP​ BOND0 2G 1500 802.3ad Bond Members: ​swp49(UP), swp50(UP) 
 
cumulus@leaf01:~$ ​net show interface bondmems 
Name Speed MTU Mode Summary 
-- ----- ----- ---- ------- ----------------- 
UP​ swp49 1G 1500 LACP-UP Master: ​BOND0(UP) 
UP​ swp50 1G 1500 LACP-UP Master: ​BOND0(UP) 
 
 
 
cumulus@leaf02:~$ ​net show interface bonds 
Name Speed MTU Mode Summary 
-- ----- ----- ---- ------- ---------------------------------- 
UP​ BOND0 2G 1500 802.3ad Bond Members: ​swp49(UP), swp50(UP) 
 
cumulus@leaf02:~$ ​net show interface bondmems 
Name Speed MTU Mode Summary 
-- ----- ----- ---- ------- ----------------- 
UP​ swp49 1G 1500 LACP-UP Master: ​BOND0(UP) 
UP​ swp50 1G 1500 LACP-UP Master: ​BOND0(UP) 
 
 
 
 
Important things to observe: 
● The speed of the bond is the cumulative speed of all member interfaces 
● Bond member interface status and bond interface status are displayed in output 
● LLDP remote port information is included in bond member status output 
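
As with the loopback, the bond is rendered into /etc/network/interfaces. A minimal sketch of the expected stanza on either leaf (assuming Cumulus Linux 3.x rendering, where 802.3ad/LACP is the default bond mode):

cumulus@leaf01:~$ grep -A 2 'auto BOND0' /etc/network/interfaces
auto BOND0
iface BOND0
    bond-slaves swp49 swp50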
 


Configure bridge and access ports on leaf01 and leaf02 


 
Bridge Configuration Details 

Bridge↓ \ Switch→   leaf01  leaf02 

Bridge vlans  10,20  10,20 

Bridge members  BOND0,swp1  BOND0,swp2 

Bridge access port  swp1  swp2 

Bridge access vlan  10  20 


 
8. On leaf01​: Create a bridge with vlans 10 and 20.  
 
cumulus@leaf01:~$ ​net add bridge bridge vids 10,20 
 
9. On leaf01: Add swp1 and BOND0 as members of the bridge. Note: The name BOND0 is case sensitive in all places.
 
cumulus@leaf01:~$ ​net add bridge bridge ports BOND0,swp1 
 
10. On leaf01​: Make swp1 an access port for vlan 10.  
 
cumulus@leaf01:~$ ​net add interface swp1 bridge access 10 
 
11. On leaf01​: Commit the changes.  
 
cumulus@leaf01:~$ ​net commit 
 
12. On leaf02​: Repeat the same steps but use swp2 as the access port towards the server.  
 
cumulus@leaf02:~$ ​net add bridge bridge vids 10,20 
cumulus@leaf02:~$ ​net add bridge bridge ports BOND0,swp2 
cumulus@leaf02:~$ ​net add interface swp2 bridge access 20 
cumulus@leaf02:~$​ ​net commit 
 
Note: The section below is provided for easier copying and pasting. 
 
net add bridge bridge vids 10,20 
net add bridge bridge ports BOND0,swp2 
net add interface swp2 bridge access 20 
net commit 
 
 
Verify bridge configuration on leaf01 and leaf02 
13. On leaf01​: Verify the configuration on leaf01 by checking that swp1 and BOND0 are part of the bridge.  
 
cumulus@leaf01$ ​net show bridge vlan 
 
Interface VLAN Flags 
----------- ------ --------------------- 
swp1 10 PVID, Egress Untagged 
BOND0 1 PVID, Egress Untagged 
10 
20 
 
14. On leaf02​: Verify the same configuration on leaf02 by checking that swp2 and BOND0 are part of the bridge. 
 
 
cumulus@leaf02$ ​net show bridge vlan 
 
Interface VLAN Flags 
 
----------- ------ --------------------- 
swp2 20 PVID, Egress Untagged 
BOND0 1 PVID, Egress Untagged 
10 
20 
 
 
Important things to observe:
● Access ports are listed on a single line showing the VLAN associated with the port
● Trunk ports are listed across multiple lines, one for each VLAN associated with the trunk

On leaf01:
● swp1 should be an access port in vlan 10
● BOND0 should be a trunk for vlan10 and vlan20, with a native vlan of 1 (PVID)

On leaf02:
● swp2 should be an access port in vlan 20
● BOND0 should be a trunk for vlan10 and vlan20, with a native vlan of 1 (PVID)
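
You can again inspect what NCLU rendered. A minimal sketch of the bridge and access-port stanzas on leaf01 (assuming Cumulus Linux 3.x VLAN-aware bridge rendering, which also sets bridge-vlan-aware yes):

cumulus@leaf01:~$ grep -A 3 'auto bridge' /etc/network/interfaces
auto bridge
iface bridge
    bridge-ports BOND0 swp1
    bridge-vids 10 20

cumulus@leaf01:~$ grep -A 2 'auto swp1' /etc/network/interfaces
auto swp1
iface swp1
    bridge-access 10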
 
Configure SVI and VRR on leaf01 and leaf02 

VRR Configuration details 

Setting↓ \ Switch→   leaf01  leaf02 

VLAN10 real IP address  10.0.10.2/24  10.0.10.3/24 

VLAN10 VRR IP address  10.0.10.1/24  10.0.10.1/24 

VLAN10 VRR MAC address  00:00:00:00:1a:10  00:00:00:00:1a:10 

VLAN20 real IP address  10.0.20.2/24  10.0.20.3/24 

VLAN20 VRR IP address  10.0.20.1/24  10.0.20.1/24 

VLAN20 VRR MAC address  00:00:00:00:1a:20  00:00:00:00:1a:20 

SERVER01 vlan  10  10 

SERVER02 vlan  20  20 


 
 
15. On leaf01​: Create an SVI for vlan10 
 
cumulus@leaf01:~$ ​net add vlan 10 ip address 10.0.10.2/24 
 
16. On leaf01​: Create an SVI for vlan 20.  
 
cumulus@leaf01:~$ ​net add vlan 20 ip address 10.0.20.2/24 
 
17. On leaf01​: Apply a VRR address for vlan10.  
 
cumulus@leaf01:~$ ​net add vlan 10 ip address-virtual 00:00:00:00:1a:10 10.0.10.1/24 
 
18. On leaf01​: Apply a VRR address for vlan20.  
 
cumulus@leaf01:~$ ​net add vlan 20 ip address-virtual 00:00:00:00:1a:20 10.0.20.1/24 
 
19. On leaf01​: Commit the changes.  
 
cumulus@leaf01:~$ ​net commit 
 
20. On leaf02: Repeat steps 15-19.
 
cumulus@leaf02:~$ ​net add vlan 10 ip address 10.0.10.3/24 
cumulus@leaf02:~$ ​net add vlan 20 ip address 10.0.20.3/24 
cumulus@leaf02:~$ ​net add vlan 10 ip address-virtual 00:00:00:00:1a:10 10.0.10.1/24 
cumulus@leaf02:~$ ​net add vlan 20 ip address-virtual 00:00:00:00:1a:20 10.0.20.1/24 
cumulus@leaf02:~$ ​net commit 
 
Note: The section below is provided for easier copying and pasting. 
 
net add vlan 10 ip address 10.0.10.3/24 
net add vlan 20 ip address 10.0.20.3/24 
net add vlan 10 ip address-virtual 00:00:00:00:1a:10 10.0.10.1/24 
net add vlan 20 ip address-virtual 00:00:00:00:1a:20 10.0.20.1/24 
net commit 
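
Under the hood, address-virtual creates a macvlan sub-interface that answers for the shared gateway IP and MAC on both leafs. A minimal sketch of how to see it (assuming the Cumulus Linux 3.x naming convention, where the vlan10 VRR sub-interface is called vlan10-v0; output trimmed):

cumulus@leaf01:~$ ip addr show vlan10-v0
    link/ether 00:00:00:00:1a:10 brd ff:ff:ff:ff:ff:ff
    inet 10.0.10.1/24 scope global vlan10-v0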
 
Test VRR connectivity 
21. On server01: Test connectivity from server01 to the VRR gateway address.  
 
cumulus@server01:~$ ​ping 10.0.10.1 
PING 10.0.10.1 (10.0.10.1) 56(84) bytes of data. 
64 bytes from 10.0.10.1: icmp_seq=1 ttl=64 time=0.686 ms 
64 bytes from 10.0.10.1: icmp_seq=2 ttl=64 time=0.922 ms 
^C 
--- 10.0.10.1 ping statistics --- 


2 packets transmitted, 2 received, 0% packet loss​, time 1001ms 


rtt min/avg/max/mdev = 0.686/0.804/0.922/0.118 ms 
 
 
22. On server01: Test connectivity from server01 to leaf01 real IP address.  
 
cumulus@server01:~$ ​ping 10.0.10.2 
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data. 
64 bytes from 10.0.10.2: icmp_seq=1 ttl=64 time=0.887 ms 
64 bytes from 10.0.10.2: icmp_seq=2 ttl=64 time=0.835 ms 
^C 
--- 10.0.10.2 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss​, time 1001ms 
rtt min/avg/max/mdev = 0.835/0.861/0.887/0.026 ms 
 
23. On server01: Test connectivity from server01 to leaf02 real IP address. 
 
cumulus@server01:~$ ​ping 10.0.10.3 
PING 10.0.10.3 (10.0.10.3) 56(84) bytes of data. 
64 bytes from 10.0.10.3: icmp_seq=1 ttl=64 time=0.528 ms 
64 bytes from 10.0.10.3: icmp_seq=2 ttl=64 time=0.876 ms 
^C 
--- 10.0.10.3 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss​, time 1001ms 
rtt min/avg/max/mdev = 0.528/0.702/0.876/0.174 ms 
 
24. On server01: Check the IP neighbor table, which is similar to the ARP table, to view each MAC address. The ARP table could also be viewed using the “arp” command.
 
cumulus@server01:~$ ​ip neighbor show 
192.168.200.250 dev eth0 lladdr a0:00:00:00:02:50 REACHABLE 
10.0.10.3 dev eth1 lladdr 44:38:39:00:02:49 REACHABLE 
10.0.10.2 dev eth1 lladdr 44:38:39:00:01:49 REACHABLE 
10.0.10.1 dev eth1 lladdr 00:00:00:00:1a:10 REACHABLE 
192.168.200.254 dev eth0 lladdr 98:7a:07:2d:ca:88 REACHABLE 
 
25. On server02: Repeat the same connectivity tests from steps 21-24, from server02 to the switch IP addresses.
 
cumulus@server02:~$ ​ping 10.0.20.1 
PING 10.0.20.1 (10.0.20.1) 56(84) bytes of data. 
64 bytes from 10.0.20.1: icmp_seq=1 ttl=64 time=1.22 ms 
64 bytes from 10.0.20.1: icmp_seq=2 ttl=64 time=0.672 ms 
^C 
--- 10.0.20.1 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss​, time 1001ms 
rtt min/avg/max/mdev = 0.672/0.949/1.226/0.277 ms 
 
cumulus@server02:~$ ​ping 10.0.20.2 
PING 10.0.20.2 (10.0.20.2) 56(84) bytes of data. 
64 bytes from 10.0.20.2: icmp_seq=1 ttl=64 time=0.735 ms 
64 bytes from 10.0.20.2: icmp_seq=2 ttl=64 time=1.02 ms 
^C 
--- 10.0.20.2 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss​, time 1001ms 
rtt min/avg/max/mdev = 0.735/0.882/1.029/0.147 ms 
 
cumulus@server02:~$ ​ping 10.0.20.3 
PING 10.0.20.3 (10.0.20.3) 56(84) bytes of data. 
64 bytes from 10.0.20.3: icmp_seq=1 ttl=64 time=0.993 ms 
64 bytes from 10.0.20.3: icmp_seq=2 ttl=64 time=1.08 ms 
^C 
--- 10.0.20.3 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss​, time 1002ms 
rtt min/avg/max/mdev = 0.993/1.040/1.087/0.047 ms 
 
cumulus@server02:~$ ​ip neighbor show 
10.0.20.2 dev eth2 lladdr 44:38:39:00:01:49 REACHABLE 
10.0.20.1 dev eth2 lladdr 00:00:00:00:1a:20 REACHABLE 
192.168.200.250 dev eth0 lladdr a0:00:00:00:02:50 REACHABLE 
192.168.200.254 dev eth0 lladdr 98:7a:07:2d:ca:88 REACHABLE 
10.0.20.3 dev eth2 lladdr 44:38:39:00:02:49 REACHABLE 
 
 
Important things to observe:
● Pings to the VRR and unique SVI IP addresses should all be successful for all VLANs

 
26. On server01 and server02: Ping to verify connectivity between server01 and server02. 
 
cumulus@server01:~$ ​ping 10.0.20.102 
PING 10.0.20.102 (10.0.20.102) 56(84) bytes of data. 
64 bytes from 10.0.20.102: icmp_seq=1 ttl=63 time=0.790 ms 
64 bytes from 10.0.20.102: icmp_seq=2 ttl=63 time=1.35 ms 
^C 
--- 10.0.20.102 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss, time 1001ms 
rtt min/avg/max/mdev = 0.790/1.070/1.351/0.282 ms 
 
 
cumulus@server02:~$​ ​ping 10.0.10.101 
PING 10.0.10.101 (10.0.10.101) 56(84) bytes of data. 
64 bytes from 10.0.10.101: icmp_seq=1 ttl=63 time=1.08 ms 
64 bytes from 10.0.10.101: icmp_seq=2 ttl=63 time=1.36 ms 
^C 
--- 10.0.10.101 ping statistics --- 
2 packets transmitted, 2 received, 0% packet loss, time 1001ms 
rtt min/avg/max/mdev = 1.089/1.225/1.361/0.136 ms 
 
27. On server01 and server02: Traceroute to the remote server and note the first-hop gateway.
 
cumulus@server01:~$ ​traceroute 10.0.20.102 
traceroute to 10.0.20.102 (10.0.20.102), 30 hops max, 60 byte packets 
1 10.0.10.1 (10.0.10.1) 1.628 ms 1.672 ms 1.855 ms 
2 10.0.20.102 (10.0.20.102) 7.947 ms 7.973 ms 8.155 ms 
cumulus@server01:~$  
 
 
cumulus@server02:~$ ​traceroute 10.0.10.101 
traceroute to 10.0.10.101 (10.0.10.101), 30 hops max, 60 byte packets 
1 10.0.20.1 (10.0.20.1) 2.813 ms 2.776 ms 3.307 ms 
2 10.0.10.101 (10.0.10.101) 9.199 ms 7.836 ms 7.766 ms 
cumulus@server02:~$  
 
 
Verify MAC address table on leaf01 and leaf02 
28. On leaf01 and leaf02: Verify that the MAC addresses are learned correctly.
 
cumulus@leaf01:~$​ net show bridge macs 
 
VLAN Master Interface MAC TunnelDest State Flags LastSeen 
-------- ------ --------- ----------------- ---------- --------- ----- -------- 
1 bridge BOND0 44:38:39:00:02:49 00:00:04 
1 bridge BOND0 44:38:39:00:02:50 00:00:04 
10 bridge BOND0 44:38:39:00:02:49 00:00:16 
10 bridge bridge 00:00:00:00:1a:10 permanent 00:07:11 
10 bridge bridge 44:38:39:00:01:49 permanent 00:07:11 
10 bridge swp1 44:38:39:00:08:01 00:00:08 
20 bridge BOND0 44:38:39:00:02:49 00:00:05 
20 bridge BOND0 44:38:39:00:09:02 00:00:47 
20 bridge bridge 00:00:00:00:1a:20 permanent 00:07:11 
20 bridge bridge 44:38:39:00:01:49 permanent 00:07:11 
 
<output truncated for brevity> 
 
cumulus@leaf02:~$ ​net show bridge macs 
 
VLAN Master Interface MAC TunnelDest State Flags LastSeen 
-------- ------ --------- ----------------- ---------- --------- ----- -------- 
1 bridge BOND0 44:38:39:00:01:49 00:00:01 
1 bridge BOND0 44:38:39:00:01:50 00:00:08 
10 bridge BOND0 44:38:39:00:01:49 00:01:05 
10 bridge BOND0 44:38:39:00:08:01 00:00:22 
10 bridge bridge 00:00:00:00:1a:10 permanent 00:09:36 
10 bridge bridge 44:38:39:00:02:49 permanent 00:09:36 
20 bridge BOND0 44:38:39:00:01:49 00:02:49 
20 bridge bridge 00:00:00:00:1a:20 permanent 00:09:36 
20 bridge bridge 44:38:39:00:02:49 permanent 00:09:36 
20 bridge swp2 44:38:39:00:09:02 00:00:03 
 
<output truncated for brevity> 
 


Important things to observe:
● The MAC address of the locally attached server is learned on the switch’s swp interface, while the remote server’s MAC address is learned over BOND0
 
This concludes Lab 2.   


 
Lab 3: BGP Unnumbered 
Objective: 
This lab will configure BGP unnumbered between leaf01/leaf02 and spine01. This configuration will advertise the IP addresses of the loopback interfaces on each device, as well as the vlan10 and vlan20 subnets on leaf01 and leaf02.
 

 
 
Dependencies on other Labs: 
● None. Running the lab3.yml playbook configures all prerequisites.
 
Goals: 
● Configure BGP unnumbered on spine01 
● Configure BGP unnumbered on leaf01/leaf02 
● Advertise loopback addresses into BGP  
● Advertise SVI subnets of leafs into BGP 
● Verify BGP peering 
● Verify BGP route advertisements 
● Verify routed connectivity and path between servers 
 
Procedure: 
 
Run Lab3 setup playbook 
 
1. On oob-mgmt-server: Run the playbook named ‘lab3.yml’. Even if you fully completed Lab 2, you must run this playbook.
 
cumulus@oob-mgmt-server:~/Test-Drive-Automation$ ​ansible-playbook lab3.yml  
 
Apply loopback address to spine01 
 
Loopback Configuration   

Configuration↓ \ Switch→   leaf01  leaf02  spine01 

Loopback IP address  10.255.255.1/32  10.255.255.2/32  10.255.255.101/32 


 
2. On spine01: Configure a loopback interface 
 
cumulus@spine01:~$ net add loopback lo ip address 10.255.255.101/32
cumulus@spine01:~$ net commit
 
Leaf01 and Leaf02 loopback addresses are already configured. 
 
Configure BGP unnumbered on spine01, leaf01 and leaf02 
 
3. On spine01: Configure a BGP Autonomous System (AS) number for the routing instance, and enable multipath-relax so that ECMP can be used across equal-length paths received from different neighboring AS numbers.
 
cumulus@spine01:~$ net add bgp autonomous-system 65201
cumulus@spine01:~$ net add bgp bestpath as-path multipath-relax
 
4. On spine01: Configure BGP peering on swp1 towards leaf01 and swp2 towards leaf02.  

cumulusnetworks.com 16
NVIDIA Cumulus Linux Test Drive: ​Lab Guide

 
cumulus@spine01:~$ net add bgp neighbor swp1 interface remote-as external
cumulus@spine01:~$ net add bgp neighbor swp2 interface remote-as external
 
5. On spine01: Commit the changes.  
 
cumulus@spine01:~$ ​net commit 
 
6. On leaf01: Repeat steps 3-5, using AS 65101 and peering towards spine01 on swp51.
 
cumulus@leaf01:~$ ​net add bgp autonomous-system 65101 
cumulus@leaf01:~$ ​net add bgp bestpath as-path multipath-relax 
cumulus@leaf01:~$ ​net add bgp neighbor swp51 interface remote-as external 
cumulus@leaf01:~$ ​net commit 
 
For copy/paste convenience: 
net add bgp autonomous-system 65101 
net add bgp bestpath as-path multipath-relax 
net add bgp neighbor swp51 interface remote-as external 
net commit 
 
7. On leaf02: Repeat steps 3-5, using AS 65102.
 
cumulus@leaf02:~$ ​net add bgp autonomous-system 65102 
cumulus@leaf02:~$ ​net add bgp bestpath as-path multipath-relax 
cumulus@leaf02:~$ ​net add bgp neighbor swp51 interface remote-as external 
cumulus@leaf02:~$ ​net commit 
 
For copy/paste convenience: 
net add bgp autonomous-system 65102 
net add bgp bestpath as-path multipath-relax 
net add bgp neighbor swp51 interface remote-as external 
net commit 
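
Before verifying, note how BGP unnumbered works: no IPv4 addresses are configured on the fabric links. The session instead runs over each interface’s automatically assigned IPv6 link-local address, and IPv4 prefixes are exchanged with an IPv6 next hop (RFC 5549). A minimal sketch of displaying the link-local address a session uses (your addresses will differ):

cumulus@leaf01:~$ ip -6 addr show dev swp51
4: swp51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::4638:39ff:fe00:31/64 scope link
       valid_lft forever preferred_lft forever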
 
 
Verify BGP connectivity between fabric nodes 
8. On spine01: Verify BGP peering between spine and leafs.  
 
cumulus@spine01:~$ ​net show bgp summary 
 
show bgp ipv4 unicast summary 
============================= 
BGP router identifier 10.255.255.101, local AS number 65201 vrf-id 0 
BGP table version 0 
RIB entries 0, using 0 bytes of memory 
Peers 2, using 39 KiB of memory 
 
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 
leaf01(swp1)​ 4 65101 26 27 0 0 0​ 00:01:09​ 0 
leaf02(swp2) ​ 4 65102 15 16 0 0 0 ​00:00:38 ​ 0 
 
Total number of neighbors 2 
 
 
show bgp ipv6 unicast summary 
============================= 
% No BGP neighbors found 
 
show bgp l2vpn evpn summary 
=========================== 
% No BGP neighbors found 
 
9. On leaf01 and leaf02: Verify BGP peering between leafs and spine.
 
cumulus@leaf01:~$ ​net show bgp summary 
 
show bgp ipv4 unicast summary 
============================= 
BGP router identifier 10.255.255.1, local AS number 65101 vrf-id 0 
BGP table version 0 
RIB entries 0, using 0 bytes of memory 
Peers 1, using 20 KiB of memory 
 
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 
spine01(swp51)​ 4 65201 13 15 0 0 0 ​00:00:35​ 0 
 


Total number of neighbors 1 


 
 
show bgp ipv6 unicast summary 
============================= 
% No BGP neighbors found 
 
show bgp l2vpn evpn summary 
=========================== 
% No BGP neighbors found 
 
 
cumulus@leaf02:~$ ​net show bgp sum 
 
show bgp ipv4 unicast summary 
============================= 
BGP router identifier 10.255.255.2, local AS number 65102 vrf-id 0 
BGP table version 0 
RIB entries 0, using 0 bytes of memory 
Peers 1, using 20 KiB of memory 
 
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 
spine01(swp51)​ 4 65201 4 6 0 0 0 ​00:00:07​ 0 
 
Total number of neighbors 1 
 
Important things to observe: 
● The BGP neighbor shows the hostname of the BGP peer 
● Only the peer is up, no routes are being advertised yet 
● The BGP router identifier uses the loopback address 
 
Advertise Loopback and SVI subnets from leaf01, leaf02 and spine01 into fabric 
 
Routing Advertisement Configuration   

Routes↓ \ Switch→   leaf01  leaf02  spine01 

Subnets to be advertised  10.255.255.1/32, 10.0.10.0/24  10.255.255.2/32, 10.0.20.0/24  10.255.255.101/32
 
10. On spine01: Advertise loopback address into BGP.  
 
cumulus@spine01:~$ net add bgp network 10.255.255.101/32
cumulus@spine01:~$ net commit
 
11. On leaf01​: Advertise loopback address into BGP.  
 
cumulus@leaf01:~$ ​net add bgp network 10.255.255.1/32 
 
12. On leaf01​: Advertise subnet for VLAN10.  
 
cumulus@leaf01:~$ ​net add bgp network 10.0.10.0/24 
 
13. On leaf01​: Commit the changes.  
 
cumulus@leaf01:~$ ​net commit 
 
14. On leaf02: Repeat steps 11-13. Notice the different loopback IP and subnet that is advertised.

cumulus@leaf02:~$ net add bgp network 10.255.255.2/32
cumulus@leaf02:~$ net add bgp network 10.0.20.0/24
cumulus@leaf02:~$ net commit

For copy/paste convenience:
net add bgp network 10.255.255.2/32
net add bgp network 10.0.20.0/24
net commit
 
Verify that BGP is advertising the routes 
15. On spine01: Check that routes are being learned. 
 
cumulus@spine01:~$ ​net show bgp 
 


show bgp ipv4 unicast
=====================
BGP table version is 5, local router ID is 10.255.255.101 
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath, 
i internal, r RIB-failure, S Stale, R Removed 
Origin codes: i - IGP, e - EGP, ? - incomplete 
 
Network Next Hop Metric LocPrf Weight Path 
*> 10.0.10.0/24 swp1 0 0 65101 i 
*> 10.0.20.0/24 swp2 0 0 65102 i 
*> 10.255.255.1/32 swp1 0 0 65101 i 
*> 10.255.255.2/32 swp2 0 0 65102 i 
*> 10.255.255.101/32 
0.0.0.0 0 32768 i 
 
Displayed 5 routes and 5 total paths 
 
 
show bgp ipv6 unicast 
===================== 
No BGP prefixes displayed, 0 exist 
cumulus@spine01:~$  
 
 
Important things to observe:
● AS PATH identifies where routes are originating
● NEXT HOP is the interface and not an IP address, because of BGP unnumbered
● Where the next hop is 0.0.0.0, the route is originated locally
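
These unnumbered next hops also appear in the kernel routing table, where FRR installs them via the link-local convenience address 169.254.0.1 pinned to the peering interface. A minimal sketch from spine01 (exact protocol and metric fields may differ):

cumulus@spine01:~$ ip route show 10.0.10.0/24
10.0.10.0/24 via 169.254.0.1 dev swp1 proto bgp metric 20 onlink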
 
Verify connectivity and path between server01 and server02 
16. On Server01, ping to Server02 (10.0.20.102) 
 
cumulus@server01:~$ ​ping 10.0.20.102 
PING 10.0.20.102 (10.0.20.102) 56(84) bytes of data. 
64 bytes from 10.0.20.102: icmp_seq=1 ttl=61 time=9.86 ms 
64 bytes from 10.0.20.102: icmp_seq=2 ttl=61 time=5.96 ms 
64 bytes from 10.0.20.102: icmp_seq=3 ttl=61 time=5.80 ms 
^C 
--- 10.0.20.102 ping statistics --- 
3 packets transmitted, 3 received, 0% packet loss, time 2003ms 
rtt min/avg/max/mdev = 5.806/7.211/9.864/1.877 ms 
 
17. On Server01, traceroute to Server02. Identify all of the hops. 
 
cumulus@server01:~$ ​traceroute 10.0.20.102 
traceroute to 10.0.20.102 (10.0.20.102), 30 hops max, 60 byte packets 
1 10.0.10.1 (10.0.10.1) 1.280 ms 1.389 ms 1.553 ms 
2 10.255.255.101 (10.255.255.101) 4.702 ms 4.679 ms 4.789 ms 
3 10.255.255.2 (10.255.255.2) 8.438 ms 8.877 ms 9.476 ms 
4 10.0.20.102 (10.0.20.102) 9.541 ms 9.766 ms 13.549 ms 
cumulus@server01:~$  
 
 
Important things to observe:
● With unnumbered interfaces, the intermediate hops in traceroute respond from each node’s loopback IPv4 address, since the fabric links carry no IPv4 addresses.
 
 
This concludes the Cumulus Linux Test Drive. 

   


Appendix A: How to use an SSH client to manually connect to the lab
 
First, click the “add service” button under the Services window to expose the SSH service on the oob-mgmt-server to the Internet. 
 
Note​: It takes​ about 5 minutes​ for background processing to occur before the SSH service is fully opened.  
 
 
Next, click on the hyperlink for the SSH service. If your web browser is configured with an application to handle SSH URLs, then clicking on the link 
from your browser will automatically launch the application to handle the SSH connection and connect with the correct username, IP address, and 
port number.  
 
If your browser is unable to handle the SSH URL, or you prefer to connect manually, follow the steps below to connect via SSH:
 
Manual SSH Connection Details 

Username  cumulus 

Password  CumulusLinux! 

Server Hostname  air.cumulusnetworks.com 

SSH Port  Use the “External Port” value shown in the Services box on the UI

 
Note​: This SSH connection ​does not use the default destination TCP port 22.​ Ensure that the external port is specified in your SSH client. 
Note: ​Usernames and passwords are case sensitive 
 
To connect via SSH manually, you must have an SSH client installed. 
 
● Windows users: Download PuTTY from ​https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html 
● Mac users: Use the ​Terminal​ application. 
● Linux users: Open a Bash shell. 
 
Linux/Mac OS example: 
 
user@laptop$ ​ssh [email protected] -p 15954 
The authenticity of host '[air.cumulusnetworks.com]:15954 ([139.178.69.119]:15954)' can't be established. 
ECDSA key fingerprint is SHA256:Cl/v7P3VmbLhlWlA/6uhJps3Um1hcQkX4dSKAb4Pwmc. 
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added '[air.cumulusnetworks.com]:15954,[139.178.69.119]:15954' (ECDSA) to the list of known hosts. 
[email protected]'s password:  

​_______​ ​x​ ​x​ ​x​ ​| | 
​._​ ​<_______​~​ ​x​ ​X​ ​x​ ___ _ _ _ __ ___ _ _| |_ _ ___ 
(​' ​\​ ​,' ​||​ `,​ ​ ​ / __| | | | '_ ` _ \| | | | | | | / __| 
​`._:^​ ​||​ ​:>​ ​| (__| |_| | | | | | | |_| | | |_| \__ \ 
​^T~~~~~~T​'​ ​\___|\__,_|_| |_| |_|\__,_|_|\__,_|___/ 
​~"​ ​~" 
 
 
############################################################################ 

# Out Of Band Management Station 

############################################################################ 
Last login: Fri Mar 6 19:35:14 2020 
cumulus@oob-mgmt-server:~$  

 
 
 
Windows using PuTTY example: 
 


 
 
After clicking “Open”, provide the username “cumulus” and password “CumulusLinux!” 
 

 
 
You now have an SSH session to your workbench, and you will be at the BASH prompt on the oob-mgmt-server.  
 
 
 

About NVIDIA (Formerly Cumulus Networks)

Cumulus Networks is leading the transformation of bringing web-scale networking to the enterprise cloud. Its network operating system, NVIDIA® Cumulus Linux, is the only solution that allows you to affordably build and efficiently operate your network like the world’s largest data center operators, unlocking vertical network stacks. By allowing operators to use standard hardware components, Cumulus Linux offers unprecedented operational speed and agility, at the industry’s most competitive cost. For more information visit cumulusnetworks.com or follow @cumulusnetworks.

© 2020 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA® Cumulus Linux, and NVIDIA® Cumulus NetQ are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
