# KVM Lab for Network Engineers (How-To Cheatsheet)
Contents:

- Document Description
- Lab Materials
- Building The Lab
  - Step 1: Build New Kernel
## Document Description
This document explains how to build a KVM lab with Arista vEOS, but it can be used to run any other VM.
WARNING: This document is a draft and a lot can be optimized. The current version was published based on requests from the field.
## Lab Materials
- Intel NUC6i3, 32GB RAM, 1TB HDD
- Ubuntu 19.10 Server

You can use any other server or Linux distribution; that may slightly change the process documented below.
## Building The Lab

### Step 1: Build New Kernel

NOTE: If you want to learn more about the limitations and GROUP_FWD_MASK, read this post. It's very helpful and informative. You can also find these two mail threads interesting:
To compile the kernel you have to get the source files first. The best options to do that on Ubuntu are apt-get and Git. Git is superior, and using git clone instead of apt-get is strongly encouraged.
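For example, a minimal sketch (the `v5.3` tag matches the kernel version used below; adjust it to your target):

```bash
# clone only the tag you need from the stable kernel tree
git clone --depth 1 --branch v5.3 \
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-5.3

# alternative: fetch the distribution source package instead
# (requires deb-src entries in your apt sources)
# apt-get source linux-image-$(uname -r)
```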
Change to the source code directory using `cd linux-5.3/` (or any other directory with the source code) and edit the following files:
- net/bridge/br_input.c
- net/bridge/br_netlink.c
- net/bridge/br_private.h
- net/bridge/br_sysfs_br.c
- net/bridge/br_sysfs_if.c
If for any reason you want to control which bridges are allowed to forward LACP and LLDP, use the following diff to make the required changes:
```diff
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -1134,8 +1132,6 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
 	if (data[IFLA_BR_GROUP_FWD_MASK]) {
 		u16 fwd_mask = nla_get_u16(data[IFLA_BR_GROUP_FWD_MASK]);
 
-		if (fwd_mask & BR_GROUPFWD_RESTRICTED)
-			return -EINVAL;
 		br->group_fwd_mask = fwd_mask;
 	}
```
```diff
diff --git a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c
index 7a59cdddd3ce..9001f8e646d3 100644
--- a/net/bridge/br_sysfs_if.c
+++ b/net/bridge/br_sysfs_if.c
@@ -178,8 +178,6 @@ static ssize_t show_group_fwd_mask(struct net_bridge_port *p, char *buf)
 static int store_group_fwd_mask(struct net_bridge_port *p,
 				unsigned long v)
 {
-	if (v & BR_GROUPFWD_MACPAUSE)
-		return -EINVAL;
 	p->group_fwd_mask = v;
 
 	return 0;
```
Typically this is not required, as all bridges in a typical lab have to be transparent; to avoid configuring group_fwd_mask for every bridge, we can just allow forwarding unconditionally. That only requires br_input.c to be changed:
```diff
 		default:
 			/* Allow selective forwarding for most other protocols */
-			fwd_mask |= p->br->group_fwd_mask;
-			if (fwd_mask & (1u << dest[5]))
-				goto forward;
+			goto forward;
 		}
```
NOTE: You can just copy the provided br_input.c if your Linux kernel version is 5.3.
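For example, a typical build-and-install sequence on Ubuntu looks like this (a sketch, assuming the usual build dependencies such as build-essential, flex, bison, libssl-dev and libelf-dev are installed):

```bash
# reuse the running kernel's config as a starting point
cp /boot/config-$(uname -r) .config
make olddefconfig
# on Ubuntu configs you may also need to drop the canned signing keys:
# scripts/config --disable SYSTEM_TRUSTED_KEYS

# build, then install modules and the kernel image
make -j $(nproc)
sudo make modules_install
sudo make install
```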
Reboot. Use `uname -r` to compare the kernel version before and after the reboot.
For reference, this is what the dnsmasq instance serving the default libvirt network (virbr0) logs at startup:

```
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6
Apr 04 11:44:24 nuc6i3 dnsmasq-dhcp[10398]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time
Apr 04 11:44:24 nuc6i3 dnsmasq-dhcp[10398]: DHCP, sockets bound exclusively to interface virbr0
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: reading /etc/resolv.conf
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: using nameserver 127.0.0.53#53
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: read /etc/hosts - 7 addresses
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Apr 04 11:44:24 nuc6i3 dnsmasq-dhcp[10398]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: reading /etc/resolv.conf
Apr 04 11:44:24 nuc6i3 dnsmasq[10398]: using nameserver 127.0.0.53#53
```
It is possible to add UKSM support to the Linux kernel (similar to EVE-NG), but UKSM support seems to be far from optimal, and some VMs (including vEOS) may experience problems. A quote from a related thread that explains it all:

Still, if you want to experiment, start with https://github.com/dolohow/uksm. It looks like the project is still alive: there are 7 contributors, one of them active (as of May 2020). If you decide to use UKSM, do not forget to star and contribute. If you know someone from the EVE-NG team, tell them to do the same.
Allow sudo without a password. Not secure, but simple and generally fine for a lab. Run `visudo` and add the following for your account and/or group:
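For example (a sketch; substitute your own username or group):

```
# allow user petr to run any command without a password
petr ALL=(ALL) NOPASSWD:ALL
# or for the whole sudo group:
# %sudo ALL=(ALL) NOPASSWD:ALL
```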
WARNING: Do that at your own risk! Especially if security is critical for your lab, for example if it has a direct connection to the Internet.
Download a vEOS-lab image (available on the Arista software download portal) and place it in an images directory:

```
petr@nuc6i3:~$ ls ./images/
vEOS-lab-4.22.4M.vmdk
```
Change the default libvirt network mode to route and disable DHCP, as it will be replaced with dhcpd later:
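A minimal sketch, assuming the default network is edited with `virsh net-edit default` (the bridge and IP stay, while the `<dhcp>` block is dropped):

```xml
<network>
  <name>default</name>
  <forward mode='route'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <!-- the <dhcp> range element is removed to disable dnsmasq DHCP -->
  </ip>
</network>
```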
WARNING: Access to your VMs from outside may be blocked by iptables, as by default only outgoing connections are allowed. Change the rules accordingly if required.
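For example (a sketch; the subnet matches the default libvirt network above, adjust it to your lab):

```bash
# allow forwarding of inbound traffic towards the lab subnet
sudo iptables -I FORWARD -d 192.168.122.0/24 -j ACCEPT
```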
Save the rules:
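One common way to do that on Ubuntu (a sketch using the iptables-persistent package):

```bash
sudo apt install iptables-persistent
sudo netfilter-persistent save    # writes /etc/iptables/rules.v4
```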
To automate the lab, we are going to use the simplest option. Yet, it's powerful enough to deploy and destroy the lab
quickly.
Second, create (or edit) a YAML file to describe the lab topology. Take a look at lab-topology.yml as an example; this file defines a simple leaf-spine network topology.
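For illustration only, a topology description might look roughly like this (a hypothetical sketch; see lab-topology.yml in the repository for the real schema and key names):

```yaml
# hypothetical shape, not the actual schema
lab_name: leaf-spine
nodes:
  - name: spine1
  - name: leaf1
  - name: leaf2
links:
  - [spine1-eth1, leaf1-eth1]
  - [spine1-eth2, leaf2-eth1]
```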
NOTE: If required, it's easy to build a GUI / visualization around that. But in my opinion, controlling your lab
topology via YAML is way more powerful.
Next, run build-shell-scripts.py to generate the shell scripts that create and destroy the lab:
```
positional arguments:
  yaml_filename         YAML file containing lab definition.

optional arguments:
  -h, --help            show this help message and exit
  --create              Create shell script for lab setup.
  --delete              Create shell script to delete the lab.
  --username USERNAME, -u USERNAME
                        Username to connect to KVM host.
  --hostname HOSTNAME, -n HOSTNAME
                        KVM host address or name.
```
```
(.venv) pa@MacBook-Pro kvm-lab-for-network-engineers % ./build-shell-scripts.py lab-topology.yml --create
(.venv) pa@MacBook-Pro kvm-lab-for-network-engineers % ./build-shell-scripts.py lab-topology.yml --delete
(.venv) pa@MacBook-Pro kvm-lab-for-network-engineers % chmod +x ./create-lab.sh
(.venv) pa@MacBook-Pro kvm-lab-for-network-engineers % chmod +x ./delete-lab.sh
```
Network list (a network is created for each p2p link in the lab):
Verify that the Linux bridges allow LLDP through using `show lldp neighbors`. Configure LACP between two switches and verify that the port-channel is up. Test jumbo frames with MTU up to 9000 bytes.
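A quick vEOS sketch for the LACP part (assuming Ethernet1 and Ethernet2 connect the two switches; apply the mirror configuration on the peer, and test jumbo MTU with large pings across the link):

```
vlab01(config)#interface Ethernet1-2
vlab01(config-if-Et1-2)#channel-group 1 mode active
vlab01(config-if-Et1-2)#end
vlab01#show port-channel summary
vlab01#show lldp neighbors
```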
## Attaching VM interfaces to OVS

First, after a fresh OVS install (`apt install openvswitch-switch`, etc.), create the OVS instance (ovs0 in the example):
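A sketch of the bridge creation (ovs-vsctl ships with Open vSwitch):

```bash
sudo ovs-vsctl add-br ovs0
```

With the bridge in place, attach the VM interfaces to it in the libvirt XML definition (for example via `virsh edit <vm-name>`):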
```xml
<interface type='bridge'>
  <mac address='52:54:00:9d:bd:75'/>
  <source bridge='ovs0'/>
  <vlan>
    <tag id='34'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='bd9ddb67-a0af-4564-ab08-000ab909f164'/>
  </virtualport>
  <!-- target dev defines how the host end of the veth VM interface is named in the host -->
  <target dev='veos3_ma1'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
<interface type='bridge'>
  <mac address='52:54:00:ab:16:23'/>
  <source bridge='ovs0'/>
  <vlan>
    <tag id='13'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='7af2af86-c8e3-43cd-a466-fe0919060624'/>
  </virtualport>
  <target dev='veos3_e1'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</interface>
<interface type='bridge'>
  <mac address='52:54:00:e9:15:a4'/>
  <source bridge='ovs0'/>
  <vlan>
    <tag id='13'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='92f70f11-cfd6-4c17-a5ac-9da97712c8ad'/>
  </virtualport>
  <target dev='veos3_e2'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
```
Now, once the VM has been started, the interfaces are seen on the hypervisor host as regular interfaces (check with `ip link show`).
And they are added to the ovs0 instance (as per the VM XML config) as access ports on the corresponding OVS vlans:
```
# ovs-vsctl show
b2e9be2e-0bb4-4690-8a33-075e2b31d6eb
    Bridge "ovs0"
        Port "veos3_ma1"
            tag: 34
            Interface "veos3_ma1"
        Port "veos3_e5"
            tag: 30
            Interface "veos3_e5"
        Port "veos3_e4"
            tag: 23
            Interface "veos3_e4"
        Port "veos3_e3"
            tag: 23
            Interface "veos3_e3"
        Port "veos3_e2"
            tag: 13
            Interface "veos3_e2"
        Port "veos3_e1"
            tag: 13
            Interface "veos3_e1"
        Port "ovs0"
            Interface "ovs0"
                type: internal
```
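To connect a Docker container to the same bridge, one option is the ovs-docker helper shipped with Open vSwitch (a sketch; the container name host1 and OVS vlan 23 match the example below, while the generated port name is assigned by the helper):

```bash
# create a veth pair: one end becomes eth0 inside the host1 container,
# the host end (named <id>_l) is added as a port on ovs0
sudo ovs-docker add-port ovs0 eth0 host1

# put the new host-side port into OVS vlan 23
sudo ovs-vsctl set port 73b6c8e075ea4_l tag=23
```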
Now the ovs0 instance has the container interface in OVS vlan 23 (alongside the veos3_e3 interface):
```
# ovs-vsctl show
b2e9be2e-0bb4-4690-8a33-075e2b31d6eb
    Bridge "ovs0"
        Port "73b6c8e075ea4_l"
            tag: 23
            Interface "73b6c8e075ea4_l"
        Port "veos3_ma1"
            tag: 34
            Interface "veos3_ma1"
        Port "veos3_e5"
            tag: 30
            Interface "veos3_e5"
        Port "veos3_e4"
            tag: 23
            Interface "veos3_e4"
        Port "veos3_e3"
            tag: 23
            Interface "veos3_e3"
        Port "veos3_e2"
            tag: 13
            Interface "veos3_e2"
        Port "veos3_e1"
            tag: 13
            Interface "veos3_e1"
        Port "ovs0"
            Interface "ovs0"
                type: internal
```
As expected, inside the container the other end of the 73b6c8e075ea4_l veth is seen as a regular eth0 interface. Inside the host1 Docker container (i.e. `docker attach host1`), LLDP can be enabled similar to this (grabbed from Docker Topo):
```sh
# run the LLDP agent as a daemon
lldpad -d
# enable LLDP rx/tx and the standard TLVs on every ethX/ensX/enoX interface
for i in `ls /sys/class/net/ | grep 'eth\|ens\|eno'`
do
    lldptool set-lldp -i $i adminStatus=rxtx
    lldptool -T -i $i -V sysName enableTx=yes
    lldptool -T -i $i -V portDesc enableTx=yes
    lldptool -T -i $i -V sysDesc enableTx=yes
done
```
Adding the following OpenFlow rules sets up "direct forwarding": from veos3_e3 to 73b6c8e075ea4_l (the container interface), and from 73b6c8e075ea4_l back to veos3_e3.
```sh
# cat connect.sh
#!/usr/bin/bash
# cross-connect two OVS ports directly, bypassing normal MAC learning;
# $1 and $2 are OVS port names or numbers
/usr/bin/ovs-ofctl add-flow ovs0 in_port=$1,actions=output:$2
/usr/bin/ovs-ofctl add-flow ovs0 in_port=$2,actions=output:$1
```
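For example, with the port names above (assuming your ovs-ofctl resolves port names in flow match fields, which recent OVS releases do):

```bash
sudo ./connect.sh veos3_e3 73b6c8e075ea4_l
```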
To set a custom system MAC address on vEOS, define SYSTEMMACADDR in /mnt/flash/veos-config:

```
# cat /mnt/flash/veos-config
SYSTEMMACADDR=5054.0000.0101
```
After the change is made and the vEOS-lab switch is rebooted, you can see the new system MAC address:
```
vlab01#show version
 vEOS
Hardware version:
Serial number:
System MAC address:  5054.0000.0101
```
## MLAG
By default, libvirt assigns MAC addresses beginning with 52:54:xx:xx:xx:xx to VM interfaces. These MACs are defined in the VM XML definition file, as in the previous example in the "Attaching VM interfaces to OVS" section. If you are using MLAG and your MLAG state is always stuck in "connecting", try to assign MAC addresses with the locally administered bit cleared to your vEOS-lab VM interfaces. That is the second least significant bit of the first MAC address byte. For instance, assign MAC addresses beginning with 50:54:xx:xx:xx:xx instead, as shown below.
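A sketch of the corresponding change in the VM XML definition (the address itself is just an example):

```xml
<interface type='bridge'>
  <!-- 50:... instead of 52:... clears the locally administered bit -->
  <mac address='50:54:00:ab:16:23'/>
  ...
</interface>
```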