Draft 3 (ver 1.1) - Network Automation for VxLAN EVPN Fabric & ACI Fabric


Internal Draft

Date : 17 June 2023

Network Automation
for VxLAN EVPN Fabric & ACI Fabric

Project Name : Network Automation for VxLAN EVPN Fabric & ACI Fabric
Project Status: Initial Phase - Providing technical solutions for automation and
orchestration of the VxLAN EVPN fabric and ACI fabric using Terraform, Python, JSON,
NETCONF/YANG, RESTCONF, and the ACI Toolkit.
Version : 1.1
Author : Rahul Siddhanak

Document History
Version Date Description
Ver 1.0 02 June 2023 Network Automation for VxLAN EVPN Fabric
Ver 1.1 17 June 2023 Network Automation for ACI Fabric
Abstract

This document discusses solutions for network automation of the EVPN VxLAN fabric using Python, JSON,
NETCONF, RESTCONF, and YANG. Four tasks are discussed in this document.

1. In the first task, build a virtual automation server with VS Code and install the Python,

NETCONF/RESTCONF, and YANG plugins.

2. In the second task, connect to network devices using NETCONF.

3. In the third task, write a VXLAN automation script and apply it to the EVPN VxLAN node.

4. In the fourth task, test and verify leaf and spine switch connectivity with the EVPN VxLAN configuration.

Step 1 : Build a virtual automation server with VS Code and install the Python, NETCONF/RESTCONF, and YANG plugins
a) Install ncclient on the automation server using Python
Command:
python -m pip install ncclient
python -m pip install --upgrade pip

b) Import the ncclient manager module


Command:
from ncclient import manager

c) Install the pyang plugin and import the json and requests libraries


Command:
pip install pyang
import json
import requests
Step 2 : Connect to a network device using NETCONF
Command :
host = {
    'host': '192.168.11.11',   # VxLAN node management address
    'port': 830,
    'username': 'admin',
    'password': 'cisco',
    'hostkey_verify': False
}
netconf = manager.connect(**host)

print("NETCONF session is established. Welcome to Network Automation...!")
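Once the session is up, it can be verified before any configuration is pushed. A minimal sketch using the same ncclient session:

# List the capabilities advertised by the device.
for capability in netconf.server_capabilities:
    print(capability)

# Fetch the running configuration to confirm read access.
running = netconf.get_config(source='running')
print(running)

netconf.close_session()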

Step 3 : Write the VXLAN automation script and apply it to the EVPN VxLAN node

Note :
1. This Python script is written to deploy VXLAN across a Cisco Nexus 9000 series leaf/spine switch
infrastructure.
2. The script was built and tested on Python 3.9 and requires the following Python libraries: numpy,
json, urllib3, requests, etc.
3. Before running the script, the Nexus switches need some initial configuration:

• Configure the mgmt0 IP address.
• Enable LLDP.
• Enable the nxapi feature on each switch.
• Set the interfaces connecting spine and leaf switches to layer 3 and configure IP addresses on them.

Script 1 : DeployEVPNVxLAN.py

Each switch element consists of the following:

hostname: the switch hostname
url: IP address of the mgmt0 interface on the switch
user: username for the script to use when connecting to the switch
password: password to access the switch as the user in "user"
leaf: a boolean value that sets whether this switch is to be configured as a leaf switch
true = leaf switch
false = spine switch
loopback0: the IP address to be configured as the loopback0 IP for OSPF and BGP
loopback1: the IP address to be configured for the loopback1 interface
Leaf switches: this IP is used to bind to the nve1 interface
Spine switches: this IP is used for the PIM anycast RP address

"switches": [
{
"hostname":"LEAF-1",
"url":"192.168.0.11",
"user":"admin",
"password":"DHS@123#",
"leaf":true,
"loopback0":"10.10.10.1",
"loopback1":"10.1.100.1"
},
{
"hostname":"SPINE-1",
"url":"192.168.0.21",
"user":"admin",
"password":" DHS@123#",
"leaf":false,
"loopback0":"10.10.10.10",
"loopback1":"10.10.100.1"
}
],
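A sketch of how DeployEVPNVxLAN.py might load this configuration; the filename config.json is an assumption for illustration:

import json

# Load the fabric deployment settings from the JSON config file.
with open("config.json") as f:
    cfg = json.load(f)

for switch in cfg["switches"]:
    print("Deploying to " + switch["hostname"] + " at " + switch["url"])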

OSPF: This is the information for the OSPF instance used for the underlay

"ospf": {
"name":"UNDERLAY"
}
name: the name of the OSPF instance for the VXLAN underlay

BGP: This is the information for the BGP overlay configuration on the switches

"bgp": {
"aSystem":"65000"
}
aSystem: the autonomous system number for the iBGP instance in this VXLAN environment.
It is recommended to use a private AS number (64512 - 65534)

PIM: This is the information for the PIM multicast used for VXLAN

"pim": {
"group":"239.0.0.0/24"
}
group: the multicast range in CIDR notation used for PIM; each VLAN will be assigned an IP within this range

VLANS: A list containing dictionaries with information for the VLANs to associate to VXLAN VNIDs.
Copy/paste to add VLAN information as needed.

"vlans": [
{
"id":"42",
"vnid":"40042",
"mcast":"0.0.0.0",
"ip":"0.0.0.0/0"
},
{
"id":"100",
"vnid":"40100",
"mcast":"239.0.0.10",
"ip":"172.20.10.1/24"
}
]
There will need to be one VLAN created for the L3 VXLAN tunnel; it is identified by an all-zero mcast and IP
address.
id: the 802.1q VLAN tag id number
vnid: the VXLAN VNID to associate the VLAN to
mcast: the multicast IP address for this VLAN; it must be within the range of the IP set in the PIM section
of the config file (a validation sketch follows this list)
ip: the IP address for the VLAN's switched virtual interfaces to be configured on each leaf switch
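A small validation sketch for the mcast rule above, reusing the cfg dict loaded earlier (check_vlan_mcast is an illustrative helper, not part of the original scripts):

import ipaddress

def check_vlan_mcast(cfg):
    # Flag any VLAN whose mcast address falls outside the PIM group range.
    group = ipaddress.ip_network(cfg["pim"]["group"])
    for vlan in cfg["vlans"]:
        if vlan["mcast"] == "0.0.0.0":
            continue   # the L3 VXLAN tunnel VLAN is exempt
        if ipaddress.ip_address(vlan["mcast"]) not in group:
            print("VLAN " + vlan["id"] + ": mcast " + vlan["mcast"] +
                  " is outside " + cfg["pim"]["group"])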

VXLAN: VXLAN specific information

"vxlan": {
"anycastMac":"0000.1111.2222"
}
anycastMac: MAC address used for the VXLAN anycast gateway; it can be entered in any of three ways
(a normalization sketch follows the list)
Method 1: 0000.0000.0000
Method 2: 00:00:00:00:00:00
Method 3: 00-00-00-00-00-00
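A sketch of normalizing these three formats to the NX-OS dotted form before building the configuration string (normalize_mac is an illustrative helper, not part of the original scripts):

import re

def normalize_mac(mac):
    # Strip dots, colons, or dashes and rebuild as XXXX.XXXX.XXXX.
    digits = re.sub(r"[.:\-]", "", mac).lower()
    if len(digits) != 12 or re.search(r"[^0-9a-f]", digits):
        raise ValueError("Invalid MAC address: " + mac)
    return digits[0:4] + "." + digits[4:8] + "." + digits[8:12]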

VRF: This is the information for creating the VRF handling VXLAN traffic

"vrf": {
"context":"vxlan",
"vni":"40042"
}
context: the name of the VRF
vni: this must match the VNI of the VLAN used for the L3 VXLAN tunnel created in the VLAN section.

RouteMap: The name of the route map for redistributing direct routes into BGP.

"routeMap":"permitAll"

Script 2 : NxosCall.py

NxosCall.py enables Python programs to interact with Nexus 9000 switches through the NX-API.

# NxosCall.py requires the json and requests libraries.
import json
import requests

# Send a command payload to the Nexus switch NX-API and collect the response data.
def NxosAPI(urlSwitch, pyld, user, passwd):
    myheaders = {'content-type': 'application/json'}
    sysData = requests.post(urlSwitch, data=json.dumps(pyld), headers=myheaders,
                            auth=(user, passwd), verify=False).json()
    return sysData

# Format payload data for an NX-API show command.
def NxosShow(showCmd):
    print("Sending command: ", showCmd)
    payload = {
        "ins_api": {
            "version": "1.0",
            "type": "cli_show",
            "chunk": "0",
            "sid": "1",
            "input": showCmd,
            "output_format": "json"
        }
    }
    return payload

# Format payload data for an NX-API configuration command.
def NxosConfig(configCmd):
    cmdText = ""
    if type(configCmd) == list or type(configCmd) == tuple:
        # Join the commands with "; " separators, omitting the trailing one.
        for item in configCmd:
            print("Sending command: ", item)
            if configCmd[len(configCmd) - 1] == item:
                cmdText = cmdText + item
            else:
                cmdText = cmdText + item + "; "
    elif type(configCmd) == str:
        cmdText = configCmd
        print("Sending command: ", cmdText)
    else:
        print("Error: command can only be list, tuple, or string.")
        print("Commands not sent.")

    payload = {
        "ins_api": {
            "version": "1.0",
            "type": "cli_conf",
            "chunk": "0",
            "sid": "1",
            "input": cmdText,
            "output_format": "json",
            "rollback": "rollback-on-error"
        }
    }
    return payload

# Check the API reply and report success or failure for each command.
def NxosResult(apiReply):
    if type(apiReply) is dict:
        if apiReply["code"] == "200":
            print("Success!")
        else:
            print("Error: Config Attempt Failed!")
            print("Code: ", apiReply["code"])
            print("Msg: ", apiReply["msg"])
    else:
        for response in apiReply:
            print(response)
            if response["code"] == "200":
                print("Success!")
            else:
                print("Error: Config Attempt Failed!")
                print("Code: ", response["code"])
                print("Msg: ", response["msg"])
Script 3 : OSPFUnderlay.py

OSPFUnderlay.py configures OSPF on Cisco Nexus 9000 switches for VXLAN deployment.

# OSPFUnderlay.py requires the re library and the NxosCall helpers above.
import re
import NxosCall

# Check the NX-OS feature list to see whether OSPF is enabled.
def showOSPF(url, user, passwd):
    ospfEnabled = False
    ospf = NxosCall.NxosAPI(url, NxosCall.NxosShow("show feature"), user, passwd)["ins_api"]["outputs"]["output"]["body"]["TABLE_cfcFeatureCtrlTable"]["ROW_cfcFeatureCtrlTable"]
    for row in ospf:
        if row["cfcFeatureCtrlName2"] == "ospf":
            if re.match("^enabled", row["cfcFeatureCtrlOpStatus2"]):
                ospfEnabled = True
    return ospfEnabled

# Get LLDP data from the switch and compare it to the list of hostnames;
# returns the list of local interfaces connected to remote VXLAN switches.
def getSpineLeafInt(url, user, passwd, hostnames):
    lldp = NxosCall.NxosAPI(url, NxosCall.NxosShow("show lldp neighbors"), user, passwd)["ins_api"]["outputs"]["output"]["body"]["TABLE_nbor"]["ROW_nbor"]
    linkList = []
    if type(lldp) is dict:
        # A single LLDP neighbor is returned as a dict rather than a list.
        for item in hostnames:
            if lldp["chassis_id"] == item:
                linkList.append(lldp["l_port_id"])
    else:
        for each in lldp:
            for item in hostnames:
                if each["chassis_id"] == item:
                    linkList.append(each["l_port_id"])
    return linkList

# Configure the OSPF underlay on a switch.
def setOSPFunderlay(url, user, passwd, lo0, ospfName, hostnames):

    # Regular expressions for IPv4 addresses with and without CIDR subnet notation.
    ip = re.compile("^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$")
    ipCIDR = re.compile("^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}(\/([0-9]|[1-2][0-9]|3[0-2]))?$")

    # Create the loopback0 interface for the OSPF router ID.
    print("Loopback IP Address: ", lo0)
    if ip.match(lo0):
        lo0CIDR = lo0 + "/32"
        print("Setting Loopback0 IP Address to: ", lo0CIDR)
    elif ipCIDR.match(lo0):
        lo0CIDR = lo0
        lo0 = re.sub("(\/([0-9]|[1-2][0-9]|3[0-2]))?$", "", lo0)
        print("Setting Loopback0 IP Address to: ", lo0CIDR)
    else:
        print("ERROR: Loopback address must be an ip address. 'X.X.X.X' or 'X.X.X.X/X'")
        return None
    print("Creating Loopback interface.")
    setLo0Cmd = "interface loopback0 ; ip address " + lo0CIDR + " ; ip router ospf " + ospfName + " area 0.0.0.0 ; no shutdown"
    loop0 = NxosCall.NxosAPI(url, NxosCall.NxosConfig(setLo0Cmd), user, passwd)["ins_api"]["outputs"]["output"]
    NxosCall.NxosResult(loop0)

    # Create the OSPF instance on the switch.
    print("Creating OSPF Instance")
    newOSPF = "router ospf " + ospfName + " ; router-id " + lo0
    instanceOSPF = NxosCall.NxosAPI(url, NxosCall.NxosConfig(newOSPF), user, passwd)["ins_api"]["outputs"]["output"]
    print(instanceOSPF)
    NxosCall.NxosResult(instanceOSPF)

    # Identify the interfaces facing other VXLAN switches.
    ospfInterfaces = getSpineLeafInt(url, user, passwd, hostnames)

    # Configure OSPF on the interfaces connecting to other VXLAN switches.
    for each in ospfInterfaces:
        print("Setting OSPF on interface " + each)
        confIntOSPF = "interface " + each + " ; no switchport ; ip router ospf " + ospfName + " area 0.0.0.0 ; ip ospf network point-to-point ; no shutdown"
        setInterface = NxosCall.NxosAPI(url, NxosCall.NxosConfig(confIntOSPF), user, passwd)["ins_api"]["outputs"]["output"]
        NxosCall.NxosResult(setInterface)

    return "OSPF Underlay created at switch, " + url

Script 4 : BGPOverlay.py

BGPOverlay.py configures iBGP for VXLAN on Cisco Nexus 9000 switches.

# BGPOverlay.py requires the re library and the NxosCall helpers above.
import re
import NxosCall

# Check the NX-OS feature list to see whether BGP is enabled.
def showBGP(url, user, passwd):
    bgpEnabled = False
    bgp = NxosCall.NxosAPI(url, NxosCall.NxosShow("show feature"), user, passwd)["ins_api"]["outputs"]["output"]["body"]["TABLE_cfcFeatureCtrlTable"]["ROW_cfcFeatureCtrlTable"]
    for row in bgp:
        if row["cfcFeatureCtrlName2"] == "bgp":
            if re.match("^enabled", row["cfcFeatureCtrlOpStatus2"]):
                bgpEnabled = True
    return bgpEnabled

# Return the list of neighbor hostnames learned through LLDP.
def getMyNeighbors(url, user, passwd, switches):
    lldp = NxosCall.NxosAPI(url, NxosCall.NxosShow("show lldp neighbors"), user, passwd)["ins_api"]["outputs"]["output"]["body"]["TABLE_nbor"]["ROW_nbor"]
    connected = []
    if type(lldp) is dict:
        for item in switches:
            if lldp["chassis_id"] == item:
                connected.append(lldp["chassis_id"])
    else:
        for each in lldp:
            for item in switches:
                if each["chassis_id"] == item:
                    connected.append(each["chassis_id"])
    return connected

# Configure the iBGP overlay; neighborList maps hostnames to loopback0 addresses.
def setBGPoverlay(url, user, passwd, asNum, isLeaf, neighborList, hostname, switches):
    activateBGP = "router bgp " + asNum + " ; router-id " + neighborList[hostname] + " ; address-family ipv4 unicast"
    startBGP = NxosCall.NxosAPI(url, NxosCall.NxosConfig(activateBGP), user, passwd)["ins_api"]["outputs"]["output"]
    print(startBGP)
    NxosCall.NxosResult(startBGP)

    whoNeighbors = getMyNeighbors(url, user, passwd, switches)
    confBGP = "router bgp " + asNum

    if isLeaf:
        for each in whoNeighbors:
            makeNeighbor = confBGP + " ; neighbor " + neighborList[each] + " ; remote-as " + asNum + " ; update-source loopback0 ; address-family ipv4 unicast ; send-community both"
            BGPneighbor = NxosCall.NxosAPI(url, NxosCall.NxosConfig(makeNeighbor), user, passwd)["ins_api"]["outputs"]["output"]
            NxosCall.NxosResult(BGPneighbor)
    else:
        # Spine switches act as route reflectors for the leaf switches.
        for each in whoNeighbors:
            addIPv4 = "address-family ipv4 unicast ; send-community both ; route-reflector-client"
            makeNeighbor = confBGP + " ; neighbor " + neighborList[each] + " ; remote-as " + asNum + " ; update-source loopback0 ; " + addIPv4
            BGPneighbor = NxosCall.NxosAPI(url, NxosCall.NxosConfig(makeNeighbor), user, passwd)["ins_api"]["outputs"]["output"]
            NxosCall.NxosResult(BGPneighbor)

    return "BGP Overlay successfully created on switch " + url

# Activate the l2vpn evpn address family for each BGP neighbor.
def SetBGPevpn(url, user, passwd, asNum, isLeaf, neighborList, switches):
    whoNeighbors = getMyNeighbors(url, user, passwd, switches)
    confBGP = "router bgp " + asNum

    setL2vpn = confBGP + " ; address-family l2vpn evpn ; retain route-target all"
    addL2vpn = NxosCall.NxosAPI(url, NxosCall.NxosConfig(setL2vpn), user, passwd)["ins_api"]["outputs"]["output"]
    NxosCall.NxosResult(addL2vpn)

    if isLeaf:
        for each in whoNeighbors:
            evpnNeighbor = confBGP + " ; neighbor " + neighborList[each] + " ; address-family l2vpn evpn ; send-community both"
            bgpEVPN = NxosCall.NxosAPI(url, NxosCall.NxosConfig(evpnNeighbor), user, passwd)["ins_api"]["outputs"]["output"]
            NxosCall.NxosResult(bgpEVPN)
    else:
        for each in whoNeighbors:
            evpnNeighbor = confBGP + " ; neighbor " + neighborList[each] + " ; address-family l2vpn evpn ; send-community both ; route-reflector-client"
            bgpEVPN = NxosCall.NxosAPI(url, NxosCall.NxosConfig(evpnNeighbor), user, passwd)["ins_api"]["outputs"]["output"]
            NxosCall.NxosResult(bgpEVPN)

    return "BGP EVPN successfully added to switch " + url

Script 5 : MulticastVxLAN.py

MulticastVxLAN.py configures PIM for VXLAN on Cisco Nexus 9000 switches.

# MulticastVxLAN.py requires the re library, the NxosCall helpers,
# and getSpineLeafInt from OSPFUnderlay.py.
import re
import NxosCall
from OSPFUnderlay import getSpineLeafInt

def setMcastOverlay(url, user, passwd, leaf, pimInfo, ospfName, hostnames):

    # Regular expressions for IPv4 addresses with and without CIDR subnet notation.
    ip = re.compile("^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$")
    ipCIDR = re.compile("^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}(\/([0-9]|[1-2][0-9]|3[0-2]))?$")

    if ip.match(pimInfo["anycast"]):
        lo1CIDR = pimInfo["anycast"] + "/32"
        print("Setting Loopback1 IP Address to: ", lo1CIDR)
    elif ipCIDR.match(pimInfo["anycast"]):
        lo1CIDR = pimInfo["anycast"]
        pimInfo["anycast"] = re.sub("(\/([0-9]|[1-2][0-9]|3[0-2]))?$", "", pimInfo["anycast"])
        print("Setting Loopback1 IP Address to: ", lo1CIDR)
    else:
        print("ERROR: Loopback address must be an ip address. 'X.X.X.X' or 'X.X.X.X/X'")
        return None

    # Point the switch at the anycast rendezvous point for the VXLAN multicast group.
    deployPIM = "ip pim rp-address " + pimInfo["anycast"] + " group-list " + pimInfo["group"]
    setPIM = NxosCall.NxosAPI(url, NxosCall.NxosConfig(deployPIM), user, passwd)["ins_api"]["outputs"]["output"]
    print(setPIM)
    NxosCall.NxosResult(setPIM)

    interfaces = getSpineLeafInt(url, user, passwd, hostnames)

    if leaf:
        # Leaf switches only need PIM sparse mode on their fabric interfaces.
        for each in interfaces:
            pimInt = "interface " + each + " ; ip pim sparse-mode"
            setInt = NxosCall.NxosAPI(url, NxosCall.NxosConfig(pimInt), user, passwd)["ins_api"]["outputs"]["output"]
            print(setInt)
            NxosCall.NxosResult(setInt)
    else:
        # Spine switches carry the anycast RP address on loopback1.
        lo1Settings = "ip address " + lo1CIDR + " ; ip router ospf " + ospfName + " area 0.0.0.0 ; ip pim sparse-mode"
        setLo1 = "interface loopback1 ; " + lo1Settings
        createLo1 = NxosCall.NxosAPI(url, NxosCall.NxosConfig(setLo1), user, passwd)["ins_api"]["outputs"]["output"]
        print(createLo1)
        NxosCall.NxosResult(createLo1)

        for addr in pimInfo["rpAddress"]:
            setRP = "ip pim anycast-rp " + pimInfo["anycast"] + " " + addr
            pimRP = NxosCall.NxosAPI(url, NxosCall.NxosConfig(setRP), user, passwd)["ins_api"]["outputs"]["output"]
            print(pimRP)
            NxosCall.NxosResult(pimRP)

        for each in interfaces:
            pimInt = "interface " + each + " ; ip pim sparse-mode"
            setInt = NxosCall.NxosAPI(url, NxosCall.NxosConfig(pimInt), user, passwd)["ins_api"]["outputs"]["output"]
            print(setInt)
            NxosCall.NxosResult(setInt)

    return "PIM multicast configured on switch, " + url


Network Automation for ACI Fabric

Abstract

This document discusses solutions for network automation of the ACI fabric using Terraform, Python, and
the ACI Toolkit. Three tasks are discussed in this document.

1. In the first task, configure the ACI structure using Terraform.

2. In the second task, configure ACI fabric access policies using Terraform.

3. In the third task, build troubleshooting scripts using Python.

Overview of Terraform
1. Cisco ACI has several tools to help you build and operate the ACI fabric programmatically using its
APIs. Some of these tools are the API Inspector, the ACI Cobra SDK, the ACI Toolkit, and Ansible.
2. Terraform is the newest way to automate ACI. It takes away the need to write and develop custom
scripts.
3. Terraform resources are pre-written blocks of code that perform a specific task. This saves you time
from researching API requests, developing code to make the proper API request, and then testing and
debugging the code.
Benefits of using Terraform
- Translates HCL code into JSON.
- Supports multiple cloud platforms.
- Makes incremental changes to resources.
- Provides support for software-defined networking.
- Imports existing resources into a Terraform state.
- Locks modules before applying state changes to ensure that only one person can make changes at a time.
- Terraform's readability makes it easy for network engineers to codify the configuration for an SDN.
- Eliminates configuration drift. Drift is when the real-world state of your infrastructure differs from
the state defined in your configuration. Terraform helps detect and manage drift when pushing new
configuration changes and will alert on any drift as part of the provisioning process.

Terraform Commands

1. terraform init - Initializes a working directory containing Terraform configuration files. It searches the
configuration for both direct and indirect references to providers and attempts to load the required
plugins.
2. terraform plan - Creates an execution plan. This command is a convenient way to check
whether the execution plan for a set of changes matches your expectations without making any
changes to real resources or to the state.
3. terraform apply - Applies the changes required to reach the desired state of the configuration.
4. terraform destroy - Destroys the infrastructure managed by Terraform.
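
A typical workflow against the ACI configuration in Part 1 and Part 2 below:

Command :
terraform init
terraform plan
terraform apply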

ACITOOLKIT

ACI Toolkit is a set of Python libraries that allows basic configuration of the Cisco APIC controller. It is
intended to let users quickly begin using the REST API and to flatten the learning curve of working with
the APIC.
Command : pip install acitoolkit
Part 1 : Configuring the ACI Structure using Terraform
# Configure the Provider with your Cisco APIC Credentials
terraform {
required_providers {
aci = {
source = "XXXXXXXXX"
}
}
}
Provider "aci" {
# APIC Username
username = var.user.username
# APIC Password
password = var.user.password
# APIC URL
url = var.user.url
insecure = true
}
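
The provider block above references var.user.*; a matching variable definition might look like the sketch below (the object structure is an assumption for illustration):

# Hypothetical variables.tf entry supplying the APIC credentials.
variable "user" {
  type = object({
    username = string
    password = string
    url      = string
  })
  sensitive = true
}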

# Creating Tenant
resource "aci_tenant" "DHS" {
name = "DHS_Tenant"
description = "from terraform"
}

# Creating VRF
resource "aci_vrf" "DHS" {
tenant_dn = aci_tenant.DHS.id
name = "DHS_VRF"
description = "from terraform"
annotation = "tag_vrf"
bd_enforced_enable = "no"
ip_data_plane_learning = "enabled"
knw_mcast_act = "permit"
name_alias = "alias_vrf"
pc_enf_dir = "egress"
pc_enf_pref = "unenforced"
}

# Creating Application Profile


resource "aci_application_profile" "DHS" {
tenant_dn = aci_tenant.DHS.id
name = "DHS_AP"
annotation = "tag"
description = "from terraform"
}
# Creating EPG
resource "aci_application_epg" "DHS" {
application_profile_dn = aci_application_profile.DHS.id
name = "DHS_EPG"
description = "from terraform"
annotation = "tag_epg"
exception_tag = "0"
flood_on_encap = "disabled"
fwd_ctrl = "none"
has_mcast_source = "no"
is_attr_based_epg = "no"
match_t = "AtleastOne"
name_alias = "alias_epg"
pc_enf_pref = "unenforced"
pref_gr_memb = "exclude"
prio = "unspecified"
shutdown = "no"
relation_fv_rs_bd = aci_bridge_domain.DHS.id # BD, EPG and VRF association
}

# Creating multiple EPG - DHS_EPG1, DHS_EPG2


resource "aci_application_epg" "DHS2" {
application_profile_dn = aci_application_profile.DHS.id
name = "DHS_EPG2"
description = "from terraform"
annotation = "tag_epg"
exception_tag = "0"
flood_on_encap = "disabled"
fwd_ctrl = "none"
has_mcast_source = "no"
is_attr_based_epg = "no"
match_t = "AtleastOne"
name_alias = "alias_epg"
pc_enf_pref = "unenforced"
pref_gr_memb = "exclude"
prio = "unspecified"
shutdown = "no"
relation_fv_rs_bd = aci_bridge_domain.DHS.id # BD, EPG and VRF association
}

resource "aci_application_epg" "DHS3" {


application_profile_dn = aci_application_profile.DHS.id
name = "DHS_EPG3"
description = "from terraform"
annotation = "tag_epg"
exception_tag = "0"
flood_on_encap = "disabled"
fwd_ctrl = "none"
has_mcast_source = "no"
is_attr_based_epg = "no"
match_t = "AtleastOne"
name_alias = "alias_epg"
pc_enf_pref = "unenforced"
pref_gr_memb = "exclude"
prio = "unspecified"
shutdown = "no"
relation_fv_rs_bd = aci_bridge_domain.DHS.id # BD, EPG and VRF association
}

# Creating Bridge Domain


resource "aci_bridge_domain" "DHS" {
tenant_dn = aci_tenant.DHS.id
description = "from terraform"
name = "DHS_BD"
optimize_wan_bandwidth = "no"
annotation = "tag_bd"
arp_flood = "no"
ep_clear = "no"
ep_move_detect_mode = "garp"
host_based_routing = "no"
intersite_bum_traffic_allow = "yes"
intersite_l2_stretch = "yes"
ip_learning = "yes"
ipv6_mcast_allow = "no"
limit_ip_learn_to_subnets = "yes"
ll_addr = "::"
mac = "00:22:BD:F8:19:FF"
mcast_allow = "yes"
multi_dst_pkt_act = "bd-flood"
name_alias = "alias_bd"
bridge_domain_type = "regular"
unicast_route = "no"
unk_mac_ucast_act = "flood"
unk_mcast_act = "flood"
v6unk_mcast_act = "flood"
vmac = "not-applicable"
relation_fv_rs_ctx = aci_vrf.DHS.id # BD, EPG and VRF association
}

# Creating Contract
resource "aci_contract" "DHS" {
tenant_dn = aci_tenant.DHS.id
description = "From Terraform"
name = "demo_contract"
annotation = "tag_contract"
name_alias = "alias_contract"
prio = "level1"
scope = "tenant"
target_dscp = "unspecified"
}

# Manages ACI Contract Subject


resource "aci_contract_subject" "DHS" {
contract_dn = aci_contract.DHS.id
description = "from terraform"
name = "demo_subject"
annotation = "tag_subject"
cons_match_t = "AtleastOne"
name_alias = "alias_subject"
prio = "level1"
prov_match_t = "AtleastOne"
rev_flt_ports = "yes"
target_dscp = "CS0"
}

# Manages ACI Filter


resource "aci_filter" "DHS" {
tenant_dn = aci_tenant.DHS.id
description = "From Terraform"
name = "DHS_Filter"
annotation = "tag_filter"
name_alias = "alias_filter"
}

# Manages ACI Filter Entry


resource "aci_filter_entry" "DHS" {
filter_dn = aci_filter.DHS.id
description = "From Terraform"
name = "DHS_Entry"
annotation = "tag_entry"
apply_to_frag = "no"
arp_opc = "unspecified"
d_from_port = "unspecified"
d_to_port = "unspecified"
ether_t = "ipv4"
icmpv4_t = "unspecified"
icmpv6_t = "unspecified"
match_dscp = "CS0"
name_alias = "alias_entry"
prot = "tcp"
s_from_port = "0"
s_to_port = "0"
stateful = "no"
tcp_rules = ["ack","rst"]
}

# Manages ACI Contract Subject Filter


resource "aci_contract_subject_filter" "DHS" {
contract_subject_dn = aci_contract_subject.DHS.id
filter_dn = aci_filter.DHS.id
action = "permit"
directives = ["none"]
priority_override = "default"
}

Part 2 : Configuring ACI Fabric Access Policies using Terraform


# Manages ACI VLAN Pool

resource "aci_vlan_pool" "DHS" {


name = "DHS_VLAN"
description = "From Terraform"
alloc_mode = "static"
annotation = "DHS"
name_alias = "DHS"
}
resource "aci_ranges" "range_1" {
vlan_pool_dn = aci_vlan_pool.DHS.id
description = "From Terraform"
from = "vlan-10"
to = "vlan-10"
alloc_mode = "inherit"
annotation = "DHS_VLAN"
name_alias = "name_alias"
role = "external"
}

# Manages ACI Physical Domain

resource "aci_physical_domain" "DHS" {


name = "DHS_Domain"
annotation = "tag_domain"
name_alias = "alias_domain"
relation_infra_rs_vlan_ns = aci_vlan_pool.DHS.id # map the VLAN pool to the domain
}
# Manages ACI L3 Domain Profile

resource "aci_l3_domain_profile" "DHS" {


name = "L3_DHS_Domain_Profile"
annotation = "l3_domain_profile_tag"
name_alias = "alias_name"
}

# CDP/LLDP Interface Policy Group

resource "aci_cdp_interface_policy" "example" {


name = "DHS"
admin_st = "enabled"
annotation = "tag_cdp"
name_alias = "alias_cdp"
description = "From Terraform"
}
resource "aci_lldp_interface_policy" "DHS" {
description = "DHS deccription"
name = "DHS_lldp_policy"
admin_rx_st = "enabled"
admin_tx_st = "enabled"
annotation = "tag_lldp"
name_alias = "alias_lldp"
}
resource "aci_leaf_access_port_policy_group" "DHS" {
description = "From Terraform"
name = "DHS_access_port"
annotation = "tag_ports"
name_alias = "name_alias"
relation_infra_rs_cdp_if_pol = aci_cdp_interface_policy.DHS.id
relation_infra_rs_lldp_if_pol = aci_lldp_interface_policy.DHS.id
relation_infra_rs_att_ent_p = aci_attachable_access_entity_profile.DHS.id
}

# Manages ACI Attachable Access Entity Profile

resource "aci_attachable_access_entity_profile" "DHS" {


description = "AAEP description"
name = "DHS_Entity"
annotation = "tag_entity"
name_alias = "alias_entity"
}
# Manages the ACI Attachable Access Entity Profile (AAEP) to domain (VMM, Physical or External domain)
relationship.

resource "aci_aaep_to_domain" "DHS" {


attachable_access_entity_profile_dn = aci_attachable_access_entity_profile.DHS.id
domain_dn = aci_l3_domain_profile.DHS.id
}

# Manages ACI Leaf Interface Profile

resource "aci_leaf_interface_profile" "DHS" {


description = "From Terraform"
name = "DHS_leaf_profile"
annotation = "tag_leaf"
name_alias = "name_alias"
}

# Manages ACI Access Port Selector

resource "aci_access_port_selector" "DHS" {


leaf_interface_profile_dn = aci_leaf_interface_profile.DHS.id
description = "from terraform"
name = "DHS_port_selector"
access_port_selector_type = "range"
annotation = "tag_port_selector"
name_alias = "alias_port_selector"
relation_infra_rs_acc_base_grp = aci_leaf_access_port_policy_group.DHS.id # attach the policy group to this port selector
}
resource "aci_access_port_block" "DHS" {
access_port_selector_dn = aci_access_port_selector.DHS.id
description = "from terraform"
name = "DHS_port_block"
annotation = "tag_port_block"
from_port = "11"
name_alias = "a;lias_port_block"
to_port = "11"
}
# Manages ACI Leaf Profile

resource "aci_leaf_profile" "DHS" {


name = "leaf1"
lifecycle {
ignore_changes = [
relation_infra_rs_acc_port_p,
]
}
description = "From Terraform"
leaf_selector {
name = "one"
switch_association_type = "range"
node_block {
name = "blk1"
from_ = "101"
to_ = "102"
}
}
}
resource "aci_rest" "leaf_profile_to_int_profile_assocation" {
path = "/api/mo/${aci_leaf_profile.DHS.id}/rsaccPortP-[${aci_leaf_interface_profile.DHS.id}].json"
class_name = "infraRsAccPortP"
content = {
"annotation" : "orchestrator:terraform",
"tDn" : aci_leaf_interface_profile.DHS.id
}
}

Part 3 : Troubleshooting scripts using Python/ACITOOLKIT


# aci show ip endpoints

import acitoolkit.acitoolkit as aci
from tabulate import tabulate

def main():
    description = ('Simple application that logs onto the APIC and displays all of the Endpoints.')
    creds = aci.Credentials('apic', description)
    args = creds.get()

    # Login to APIC
    session = aci.Session(args.url, args.login, args.password)
    resp = session.login()
    if not resp.ok:
        print('%% Could not login to APIC')
        return

    # Download all of the endpoints and store the data as tuples in a list
    data = []
    endpoints = aci.Endpoints.get(session)
    for ep in endpoints:
        epg = ep.get_parent()
        app_profile = epg.get_parent()
        tenant = app_profile.get_parent()
        data.append((ep.mac, ep.ip, ep.if_name, ep.encap, tenant.name, app_profile.name, epg.name))

    # Display the data downloaded
    print(tabulate(data, headers=["MACADDRESS", "IPADDRESS", "INTERFACE", "ENCAP", "TENANT", "APP PROFILE", "EPG"]))

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

# aci show contract

import sys
import acitoolkit.acitoolkit as aci

data = []
longest_names = {'Tenant': len('Tenant'),
'Contract': len('Contract')}

def main():
    """
    Main show contracts routine
    :return: None
    """
    description = ('Simple application that logs on to the APIC'
                   ' and displays all of the Contracts.')
    creds = aci.Credentials('apic', description)
    creds.add_argument('--tenant', help='The name of Tenant')
    args = creds.get()

    # Login to APIC
    session = aci.Session(args.url, args.login, args.password)
    resp = session.login()
    if not resp.ok:
        print('%% Could not login to APIC')
        sys.exit(0)

    # Download all of the contracts
    tenants = aci.Tenant.get(session)
    for tenant in tenants:
        check_longest_name(tenant.name, "Tenant")
        if args.tenant is None:
            get_contract(session, tenant)
        else:
            if tenant.name == args.tenant:
                get_contract(session, tenant)

    # Display the data downloaded
    template = '{0:' + str(longest_names["Tenant"]) + '} ' \
               '{1:' + str(longest_names["Contract"]) + '}'
    print(template.format("Tenant", "Contract"))
    print(template.format('-' * longest_names["Tenant"],
                          '-' * longest_names["Contract"]))
    for rec in sorted(data):
        print(template.format(*rec))

def get_contract(session, tenant):
    contracts = aci.Contract.get(session, tenant)
    for contract in contracts:
        check_longest_name(contract.name, "Contract")
        data.append((tenant.name, contract.name))

def check_longest_name(item, title):
    if len(item) > longest_names[title]:
        longest_names[title] = len(item)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

# aci show tenant faults

import acitoolkit as ACI
from acitoolkit import Faults

def main():
    description = ('Simple application that logs on to the APIC'
                   ' and displays all the faults. If a tenant name is given,'
                   ' shows the faults associated with that tenant')
    creds = ACI.Credentials('apic', description)
    creds.add_argument("-t", "--tenant_name",
                       help="name of the tenant of which faults are to be displayed")
    creds.add_argument('--continuous', action='store_true',
                       help='Continuously monitor for tenant faults')
    args = creds.get()

    # Login to APIC
    session = ACI.Session(args.url, args.login, args.password)
    resp = session.login()
    if not resp.ok:
        print('%% Could not login to APIC')
        return
    if args.tenant_name is not None:
        tenant_name = args.tenant_name
    else:
        tenant_name = None

    # Subscribe to faults and print each one as it arrives
    faults_obj = Faults()
    faults_obj.subscribe_faults(session)
    while faults_obj.has_faults(session) or args.continuous:
        if faults_obj.has_faults(session):
            faults = faults_obj.get_faults(session, tenant_name=tenant_name)
            if faults is not None:
                for fault in faults:
                    if fault is not None:
                        print("****************")
                        if fault.descr is not None:
                            print("    descr    : " + fault.descr)
                        else:
                            print("    descr    : " + " ")
                        print("    dn       : " + fault.dn)
                        print("    rule     : " + fault.rule)
                        print("    severity : " + fault.severity)
                        print("    type     : " + fault.type)
                        print("    domain   : " + fault.domain)

if __name__ == '__main__':
    main()

# aci show external networks

from acitoolkit import *

data = []
longest_names = {'Tenant': len('Tenant'),
                 'L3Out': len('L3Out'),
                 'External EPG': len('External EPG'),
                 'Subnet': len('Subnet'),
                 'Scope': len('Scope')}

def main():
    # Login to APIC
    description = ('Simple application that logs on to the APIC'
                   ' and displays all of the External Subnets.')
    creds = Credentials('apic', description)
    creds.add_argument('--tenant', help='The name of Tenant')
    args = creds.get()
    session = Session(args.url, args.login, args.password)
    resp = session.login()
    if not resp.ok:
        print('%% Could not login to APIC')
        return

    # Download all of the tenants, app profiles, and Subnets
    # and store the names as tuples in a list
    tenants = Tenant.get_deep(session, limit_to=['fvTenant',
                                                 'l3extOut',
                                                 'l3extInstP',
                                                 'l3extSubnet'])
    for tenant in tenants:
        check_longest_name(tenant.name, "Tenant")
        if args.tenant is None:
            get_external_epg(session, tenant)
        else:
            if tenant.name == args.tenant:
                get_external_epg(session, tenant)

    # Display the data downloaded
    template = '{0:' + str(longest_names["Tenant"]) + '} ' \
               '{1:' + str(longest_names["L3Out"]) + '} ' \
               '{2:' + str(longest_names["External EPG"]) + '} ' \
               '{3:' + str(longest_names["Subnet"]) + '} ' \
               '{4:' + str(longest_names["Scope"]) + '}'
    print(template.format("Tenant", "L3Out", "External EPG", "Subnet", "Scope"))
    print(template.format('-' * longest_names["Tenant"],
                          '-' * longest_names["L3Out"],
                          '-' * longest_names["External EPG"],
                          '-' * longest_names["Subnet"],
                          '-' * longest_names["Scope"]))
    for rec in sorted(data):
        print(template.format(*rec))

def get_external_epg(session, tenant):
    outside_l3s = tenant.get_children(only_class=OutsideL3)
    for outside_l3 in outside_l3s:
        check_longest_name(outside_l3.name, "L3Out")
        outside_epgs = outside_l3.get_children(only_class=OutsideEPG)
        for outside_epg in outside_epgs:
            check_longest_name(outside_epg.name, "External EPG")
            outside_networks = outside_epg.get_children(only_class=OutsideNetwork)
            if len(outside_networks) == 0:
                data.append((tenant.name, outside_l3.name, outside_epg.name, "", ""))
            else:
                for outside_network in outside_networks:
                    check_longest_name(outside_network.addr, "Subnet")
                    check_longest_name(outside_network.get_scope(), "Scope")
                    data.append((tenant.name,
                                 outside_l3.name,
                                 outside_epg.name,
                                 outside_network.addr,
                                 outside_network.get_scope()))

def check_longest_name(item, title):
    if len(item) > longest_names[title]:
        longest_names[title] = len(item)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass
